10. S3 Performance

Prefix within S3: The prefix is simply the middle portion of the object key, between the bucket name and the object name.
mybucketname/folder1/subfolder1/myobject.jpg — here /folder1/subfolder1 is the prefix.
mybucketname/folder2/subfolder1/myobject.jpg — here /folder2/subfolder1 is the prefix.
mybucketname/folder3/myobject.jpg — here /folder3 is the prefix.
mybucketname/folder4/subfolder4/myobject.jpg — here /folder4/subfolder4 is the prefix.
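The split described above can be sketched as a short Python helper (the function name `prefix_of` is just for illustration; it is not part of any AWS SDK):

```python
def prefix_of(key: str) -> str:
    """Return the S3 prefix of an object key: everything before the
    final path segment (the object name). Returns "" if the key has
    no prefix at all."""
    parts = key.rsplit("/", 1)
    return parts[0] if len(parts) == 2 else ""

print(prefix_of("folder1/subfolder1/myobject.jpg"))  # folder1/subfolder1
print(prefix_of("folder3/myobject.jpg"))             # folder3
print(repr(prefix_of("myobject.jpg")))               # '' (no prefix)
```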

S3 has extremely low latency: we can get the first byte out of S3 within 100-200 milliseconds. We can also achieve a high request rate: 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix. We get better performance by spreading our reads across different prefixes. If we use two prefixes, we can achieve 11,000 GET requests per second; with four prefixes, 22,000. The more prefixes we have, the better the performance we can achieve.

S3 Limitations when using Server-Side Encryption – KMS
i) If we are using SSE-KMS to encrypt our objects in S3, we must keep the KMS limits in mind.
ii) When we upload a file, S3 calls GenerateDataKey in the KMS API.
iii) When we download a file, S3 calls Decrypt in the KMS API.
iv) Uploading and downloading therefore count toward the KMS quota.
v) Currently we cannot request a quota increase for KMS.
vi) The quota is region-specific; it is either 5,500, 10,000, or 30,000 requests per second.
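Because each upload costs one GenerateDataKey call and each download one Decrypt call, it is easy to estimate whether an SSE-KMS workload will be throttled. A minimal sketch of that check (the quota value and function name are illustrative; pick the quota that matches your region):

```python
KMS_QUOTA_RPS = 5500  # region-dependent: 5,500, 10,000, or 30,000 req/s

def kms_requests_per_second(uploads_per_s: int, downloads_per_s: int) -> int:
    """KMS API calls generated by an SSE-KMS workload:
    one GenerateDataKey per upload, one Decrypt per download."""
    return uploads_per_s + downloads_per_s

load = kms_requests_per_second(3000, 3000)
print(load)                    # 6000
print(load <= KMS_QUOTA_RPS)   # False -> this workload would be throttled
```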

Multipart Uploads:
i) Recommended for files over 100 MB
ii) Required for files over 5 GB
iii) Allows uploads to be parallelized (which increases efficiency)
iv) For downloads, we use S3 byte-range fetches
v) Downloads are parallelized by specifying byte ranges
vi) If a download fails, it fails only for a specific byte range, which can be retried on its own
vii) Byte-range fetches can be used to speed up downloads
viii) Byte-range fetches can also be used to download only part of a file (e.g., header information)
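Both multipart uploads and byte-range fetches rest on the same idea: split the object into fixed-size chunks. A minimal sketch of computing the ranges, assuming a hypothetical helper `byte_ranges` (each pair maps directly onto an HTTP `Range: bytes=start-end` header for a parallel GET):

```python
def byte_ranges(total_size: int, part_size: int):
    """Yield (start, end) inclusive byte ranges covering an object of
    total_size bytes, part_size bytes per chunk. Each tuple corresponds
    to one 'Range: bytes=start-end' request header."""
    for start in range(0, total_size, part_size):
        end = min(start + part_size, total_size) - 1
        yield (start, end)

# Split a 25 MB object into 10 MB chunks for three parallel fetches.
ranges = list(byte_ranges(25 * 1024 * 1024, 10 * 1024 * 1024))
for start, end in ranges:
    print(f"Range: bytes={start}-{end}")
```

The same range math also explains point viii) above: to read only a file's header, issue a single request such as `Range: bytes=0-1023` instead of downloading the whole object.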

i. A file-hosting service uses Amazon S3 under the hood to power its storage offerings. Currently, all customer files are uploaded directly into a single S3 bucket. The engineering team has started seeing scalability issues: customer file uploads fail during peak access hours, when traffic exceeds 5,000 requests per second. Which of the following is the MOST resource-efficient and cost-optimal way of addressing this issue?
Answer: Change the application architecture to create customer-specific custom prefixes within the single bucket and then upload the daily files into those prefixed locations.