I caution the casual reader against Glacier. It's not what it appears at a glance. Put your files into a single archive before uploading; otherwise you'll spend weeks waiting on AWS scripts to manage old files.
We have 23TB of images stored in S3 and I was recently looking at moving them to Backblaze to save hundreds of dollars per month. These are all individual image files, because reasons.
Then I realized that S3 Glacier and Deep Archive were even less expensive than B2. I looked a bit further and found that Glacier/DA objects carry some fairly chonky per-object metadata that must be stored in normal S3, and for a lot of our images the metadata was larger than the image itself. So Glacier/DA would actually increase our storage costs. Overall it probably wasn't a money-saving situation.
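To put rough numbers on the overhead: AWS documents about 40 KB of metadata per object archived to Glacier/Deep Archive (8 KB billed at S3 Standard rates plus 32 KB billed at the archive tier's rate). A quick back-of-the-envelope check, with illustrative per-GB prices rather than current quotes:

```python
# Per-object cost sketch for S3 Deep Archive vs. S3 Standard.
# Prices below are assumptions for illustration, not AWS quotes.
STANDARD_PER_GB = 0.023        # assumed $/GB-month, S3 Standard
DEEP_ARCHIVE_PER_GB = 0.00099  # assumed $/GB-month, Deep Archive

KB = 1024
GB = 1024 ** 3

def monthly_cost_deep_archive(object_bytes: int) -> float:
    """One object's monthly cost in Deep Archive, overhead included."""
    overhead_standard = 8 * KB / GB * STANDARD_PER_GB   # 8 KB in S3 Standard
    overhead_archive = 32 * KB / GB * DEEP_ARCHIVE_PER_GB  # 32 KB in the archive tier
    data = object_bytes / GB * DEEP_ARCHIVE_PER_GB
    return overhead_standard + overhead_archive + data

def monthly_cost_standard(object_bytes: int) -> float:
    return object_bytes / GB * STANDARD_PER_GB

# A tiny 8 KB thumbnail: the 8 KB of Standard-tier overhead alone
# already matches its plain S3 Standard cost, so Deep Archive loses.
small = 8 * KB
assert monthly_cost_deep_archive(small) > monthly_cost_standard(small)

# A 1 GB bundle: the fixed overhead is negligible and Deep Archive wins.
assert monthly_cost_deep_archive(GB) < monthly_cost_standard(GB)
```

The crossover is what pushes you toward bundling: the ~40 KB fixed cost per object is irrelevant for big archives and ruinous for small images.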
The ideal approach would be to bundle the images into tar files or something, store those large archives, and manage the metadata and indexing/access ourselves.
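A minimal sketch of that bundling idea: pack many small files into one tar and keep your own index of each member's byte offset and size. With that index, a single image can later be pulled out of the archive with a ranged GET instead of restoring and downloading the whole bundle. File names here are made up.

```python
# Bundle {name: data} files into an uncompressed tar and build an
# index of where each member's data lives inside the tar.
import io
import tarfile

def bundle(files: dict) -> tuple:
    """Tar the given {name: bytes} files; return (tar_bytes, index)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    # Re-read the tar to record each member's data offset and size;
    # this index is what we'd store and query ourselves.
    index = {}
    buf.seek(0)
    with tarfile.open(fileobj=buf, mode="r") as tar:
        for member in tar.getmembers():
            index[member.name] = {"offset": member.offset_data,
                                  "size": member.size}
    return buf.getvalue(), index

tar_bytes, index = bundle({"img/a.jpg": b"AAAA", "img/b.jpg": b"BBBBBB"})
entry = index["img/b.jpg"]
# Slicing here stands in for an S3 ranged GET (the Range header)
# against the uploaded tar object.
assert tar_bytes[entry["offset"]:entry["offset"] + entry["size"]] == b"BBBBBB"
```

The same slice translates directly to a `Range: bytes=offset-(offset+size-1)` header on a GET against the archive object, so restores only ever touch the big bundles.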
Every file uploaded is treated as an immutable archive, with no version history. So say you've backed up 100,000 files and now want to update them without paying to store the old versions. You have to request a manifest of hashes for all your files, which takes a few days to generate, after which you're handed a JSON file over a gigabyte in size. Then you write a script to delete each file one at a time, rate limited to one request per second. Have fun.
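For the record, the delete loop looks roughly like this, assuming you've already pulled down the inventory JSON from an "inventory-retrieval" job. The vault name and the one-second throttle are illustrative; `delete_archive` is the real boto3 Glacier operation, and `ArchiveList`/`ArchiveId` are the documented inventory fields.

```python
# Sketch of cleaning out a legacy Glacier vault from its inventory
# document: parse ArchiveIds, then delete them one at a time.
import json
import time

def archive_ids(inventory_json: str) -> list:
    """Pull every ArchiveId out of a vault inventory document."""
    return [a["ArchiveId"] for a in json.loads(inventory_json)["ArchiveList"]]

def delete_all(glacier_client, vault_name: str, inventory_json: str) -> None:
    """Delete each archive in the inventory, throttled to 1/sec."""
    for archive_id in archive_ids(inventory_json):
        glacier_client.delete_archive(vaultName=vault_name,
                                      archiveId=archive_id)
        time.sleep(1)  # crude throttle: one delete request per second

# Usage (assumes boto3 installed and AWS credentials configured):
#   import boto3
#   with open("inventory.json") as f:
#       delete_all(boto3.client("glacier"), "my-vault", f.read())
```

At one request per second, 100,000 archives is a bit over a day of wall-clock deleting, on top of the days waiting for the inventory job.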
Are you maybe referring to Glacier "vaults" (the original Glacier API)? With the old Glacier Vault API you had to initiate an "inventory-retrieval" job with an SNS topic etc. It took days. Painful.
But these days you can store objects in a regular S3 bucket and specify the storage class as "GLACIER" (S3 Glacier Flexible Retrieval), "GLACIER_IR" (S3 Glacier Instant Retrieval), or "DEEP_ARCHIVE" (S3 Glacier Deep Archive). You use the regular S3 APIs. We haven't seen any rate limiting with this approach.
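Concretely, it's just a normal `put_object` with a `StorageClass` parameter. A small sketch; the bucket and key names are made up, but the storage class values are the real S3 constants:

```python
# Build put_object kwargs targeting one of the S3 archive
# storage classes, validating the class name first.
ARCHIVE_CLASSES = {"GLACIER", "GLACIER_IR", "DEEP_ARCHIVE"}

def put_object_params(bucket: str, key: str, storage_class: str) -> dict:
    """Kwargs for s3.put_object with an archive storage class."""
    if storage_class not in ARCHIVE_CLASSES:
        raise ValueError(f"not an archive storage class: {storage_class}")
    return {"Bucket": bucket, "Key": key, "StorageClass": storage_class}

# Usage (assumes boto3 installed and AWS credentials configured):
#   import boto3
#   s3 = boto3.client("s3")
#   with open("images.tar", "rb") as body:
#       s3.put_object(Body=body,
#                     **put_object_params("my-bucket",
#                                         "archives/images.tar",
#                                         "DEEP_ARCHIVE"))
```

Lifecycle rules can also transition existing objects into these classes automatically, so you don't have to re-upload anything.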
The only difference from the "online" storage classes like STANDARD, STANDARD_IA, etc. is that downloading a GLACIER or DEEP_ARCHIVE object requires first making it downloadable by calling the S3 "restore" API on it, then waiting until it's ready (anywhere from 1-5 minutes with Expedited retrieval to 3-5 hours with Standard for GLACIER, and up to 12 hours for DEEP_ARCHIVE). GLACIER_IR is the exception: those objects are retrievable in milliseconds with a plain GET, no restore needed.
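The restore-then-download flow can be sketched like this. `restore_object` and `head_object` are the real boto3 calls; the `Days` value, tier, and polling interval are illustrative. Once the restore finishes, `head_object` returns a `Restore` field like `'ongoing-request="false", expiry-date="..."'`.

```python
# Helper to check the Restore header that head_object returns
# while a Glacier/Deep Archive restore is in flight or finished.
from typing import Optional

def restore_complete(restore_header: Optional[str]) -> bool:
    """True once S3 reports the restore job has finished."""
    if not restore_header:
        return False  # no restore has been requested, or field absent
    return 'ongoing-request="false"' in restore_header

# Usage (assumes boto3 installed and AWS credentials configured):
#   import time
#   import boto3
#   s3 = boto3.client("s3")
#   bucket, key = "my-bucket", "archives/images.tar"
#   s3.restore_object(
#       Bucket=bucket, Key=key,
#       RestoreRequest={"Days": 7,
#                       "GlacierJobParameters": {"Tier": "Standard"}})
#   while not restore_complete(
#           s3.head_object(Bucket=bucket, Key=key).get("Restore")):
#       time.sleep(600)  # DEEP_ARCHIVE standard restores take hours
#   s3.download_file(bucket, key, "images.tar")
```

The `Days` value controls how long the restored copy stays downloadable before the object falls back to archive-only.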
B2/S3 is what most people want.