Salesforce Storage Costs: What You're Paying and How to Reduce It
Salesforce storage costs creep up. The included allocation feels generous when you onboard; a few years later you're paying for hundreds of gigabytes of email attachments, old case files, and historical Opportunity quotes that nobody opens. This guide explains how Salesforce charges for storage, how to find the bloat in your org, and the four strategies for shrinking it, with honest trade-offs.
How Salesforce storage is priced
Each Salesforce edition includes a baseline storage allocation, plus per-user additions. The exact numbers shift over time, but as of recent contracts a typical Enterprise edition org receives 10 GB of data storage plus 20 MB per user license, and 10 GB of file storage plus 2 GB per user license. Once you exceed those limits, additional storage is purchased in increments: historically around $125 per 500 MB per month for data storage, with file storage add-ons priced separately and varying by edition.
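To make that concrete with the figures above: a hypothetical 200-user Enterprise org would start with roughly 10 GB + 200 × 20 MB ≈ 14 GB of data storage and 10 GB + 200 × 2 GB = 410 GB of file storage before any overage purchases.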
The numbers are deceptive in two ways. First, many teams don't realize their org has hit the limit until the bill arrives or until users start hitting "you've exceeded your storage" errors. Second, the marginal price per GB makes external storage (like S3) look incredibly cheap by comparison: standard S3 storage is about $0.023 per GB per month — roughly two orders of magnitude cheaper than Salesforce file storage, and that's before considering S3's lower-cost tiers like Standard-IA or Glacier.
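As a rough illustration on the data-storage side, an extra 10 GB at the list price above is 20 blocks of 500 MB, or about $2,500 per month; the same 10 GB sits in S3 Standard for roughly $0.23 per month.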
Data storage vs file storage
Salesforce tracks two storage buckets separately:
- Data storage — accounts for records: Accounts, Contacts, Opportunities, custom objects, and so on. Each record counts as roughly 2 KB regardless of its actual field content (with a few exceptions for objects holding large text fields), so a million records is about 2 GB of data storage. The number of records, not their size, is what matters.
- File storage — accounts for binary content: ContentDocuments, ContentVersions, Attachments, email attachments, document records. Sized by actual file bytes.
The two limits are independent. You can max out file storage while having plenty of data storage left, or vice versa. They have different overage costs and require different mitigation strategies. This guide focuses on file storage because it's almost always the bigger cost driver.
You can find your current usage at Setup → Data → Storage Usage. The page shows total used, the breakdown by user (data) and by largest file type (files), and the limit per type.
Identifying bloat in your org
Storage Usage tells you the totals. To know where the bloat is concentrated, you need to look at distributions — and that means SOQL.
Total file size on modern files:
SELECT SUM(ContentSize) FROM ContentVersion WHERE IsLatest = true
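Note that prior versions also consume file storage, not just the latest one, so if your org keeps heavy version history the unfiltered sum is worth checking too:
SELECT SUM(ContentSize) FROM ContentVersion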
Total file size on legacy Attachments:
SELECT SUM(BodyLength) FROM Attachment
The ratio between these tells you whether your storage is concentrated in modern files or legacy ones — which influences how you migrate or archive them.
Per-object size breakdown (modern files):
SELECT LinkedEntity.Type, SUM(ContentDocument.LatestPublishedVersion.ContentSize)
FROM ContentDocumentLink
GROUP BY LinkedEntity.Type
ORDER BY SUM(ContentDocument.LatestPublishedVersion.ContentSize) DESC
This typically reveals that one or two objects (Case and EmailMessage are common culprits) account for most of the storage. One caveat: ContentDocumentLink queries are often subject to an implementation restriction requiring a filter on ContentDocumentId or LinkedEntityId, so if this query is rejected in your org, run it through a reporting tool or batch it across parent record Ids.
Per-creation-year breakdown (modern files):
SELECT CALENDAR_YEAR(CreatedDate), SUM(ContentSize)
FROM ContentVersion
WHERE IsLatest = true
GROUP BY CALENDAR_YEAR(CreatedDate)
ORDER BY CALENDAR_YEAR(CreatedDate) DESC
This reveals the time concentration. Most orgs have a long tail of old files that nobody opens — easy archival candidates.
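If legacy Attachments hold a meaningful share, the same per-year breakdown works there too, using BodyLength instead of ContentSize:
SELECT CALENDAR_YEAR(CreatedDate), SUM(BodyLength)
FROM Attachment
GROUP BY CALENDAR_YEAR(CreatedDate)
ORDER BY CALENDAR_YEAR(CreatedDate) DESC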
For a more complete inventory without writing all the SOQL, see our attachment counting guide or use the SF Count Attachments tool.
The four strategies
1. Delete
Identify files older than a retention threshold and delete them. This is the most aggressive option and the cheapest in the short term: deleted files free their storage as soon as they're purged from the Recycle Bin.
Pros: Free, fast, no infrastructure.
Cons: Irreversible. Loses institutional knowledge. Compliance risk if files turn out to have been subject to retention requirements you weren't aware of. Users may need a file you deleted six months later. The undo path is "restore from a backup", assuming you have one and assuming the deleted file is in its scope.
When it makes sense: Files you can confidently identify as junk — duplicate uploads, test files in production, files attached to records that are themselves about to be deleted.
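If you do go this route, a minimal anonymous Apex sketch looks something like the following; the three-year cutoff and the small batch size are assumptions to adjust, and the count query is worth running on its own as a dry run before deleting anything.
Datetime cutoff = Datetime.now().addYears(-3); // assumed retention threshold, adjust to your policy
// Dry run: count how many files fall outside the retention window before deleting anything
Integer candidates = [SELECT COUNT() FROM ContentDocument WHERE CreatedDate < :cutoff];
System.debug('Files older than the cutoff: ' + candidates);
// Actual delete: irreversible once the Recycle Bin is emptied, so keep batches small and review first
List<ContentDocument> docs = [SELECT Id FROM ContentDocument WHERE CreatedDate < :cutoff LIMIT 200];
delete docs;
Deleting a ContentDocument removes all of its versions and its links to records, which is what frees the full file-storage footprint.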
2. Archive to external storage with traceability
Move files to S3 (or another low-cost store) while keeping a record in Salesforce of where they went. Users can still find archived files; the original Salesforce file is removed (or kept, your choice) to free storage.
Pros: Order-of-magnitude cheaper storage than Salesforce. Reversible — restore by re-uploading from S3. Compliant — full audit trail of what was archived and when. Users can still access archived files via a Lightning component.
Cons: Requires tooling. Initial setup investment. Slight UX change (downloading archived files goes through S3 instead of Salesforce's CDN, marginally slower).
When it makes sense: Files older than 12–18 months that aren't actively viewed but must be retained.
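A quick way to size that candidate set before investing in tooling, assuming an 18-month cutoff:
SELECT COUNT(Id), SUM(ContentSize)
FROM ContentVersion
WHERE IsLatest = true AND CreatedDate < LAST_N_MONTHS:18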
3. Move to a read-only history org
Spin up a separate Salesforce org dedicated to historical records and files. Migrate old data into it. The active org keeps current operations; the history org holds the long tail.
Pros: Files remain in Salesforce, so users have a familiar interface for retrieval. Active org gets the storage relief.
Cons: You're paying for two orgs. Cross-org search is poor. Compliance implications of a separate data location. Works for very large enterprises with the budget; rarely makes sense at mid-market.
4. Tier files by access pattern
Hot files (recently accessed, frequently viewed) stay in Salesforce file storage. Warm files move to a cheaper tier still in Salesforce (compress them, deduplicate them). Cold files move to external storage. The most sophisticated option but also the highest engineering cost.
Pros: Best fit to actual usage patterns.
Cons: Complex. Requires visibility into actual file access patterns (Salesforce doesn't track this natively for files). Most teams overengineer this when option 2 would have sufficed.
Why S3 archival makes sense for most teams
Option 2 (archive to S3 with traceability) is the right choice for most mid-market and enterprise Salesforce orgs because it gives you most of option 1's cost savings without option 1's risks. S3 storage is so cheap that the math is hard to argue with: a 100 GB file storage problem that costs thousands of dollars per year in Salesforce drops to a few dollars per month in S3 Standard (100 GB at $0.023 per GB is about $2.30 per month), and to under a dollar a month in Glacier.
Crucially, the right archival approach keeps users productive. They shouldn't have to know a file is "archived"; they should see it on the record page (via a Lightning component or the standard related list) and download it like any other file. Behind the scenes, the file streams from S3 instead of Salesforce's CDN. The UX cost is one or two seconds of additional latency on download. The storage cost saving is 10× to 100×.
Implementation considerations
- Compliance and retention — review your data retention obligations before archiving. If files must be retained for 7 years, S3 Glacier is fine; if they must remain in Salesforce itself, archival isn't an option (option 3, a read-only history org, might be).
- Audit trail — a clean archival creates a Salesforce record per archived file showing when it was archived, by whom, and where it lives in S3. Don't archive without that trail. The SF File Archiving tool uses custom ContentDocumentArchive__c and ContentDocumentLinkArchive__c objects for exactly this; a sketch of what such a record can capture follows this list.
- Deduplication — files attached to multiple parent records (the same contract attached to three Accounts) should upload to S3 once, not three times. Verify your tool deduplicates.
- Reversibility — the archival process should be reversible. If you archive a file and decide six months later it should be in Salesforce, you should be able to restore it from S3 without a custom job.
- Bring-your-own bucket — for some compliance scenarios you need S3 in your own AWS account, not a vendor's. Verify the tool supports this.
- End-user access — archived files should still appear on record pages without users needing to know they're in S3. A Lightning Web Component is the cleanest way; many tools include one.
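To make the audit-trail point concrete, here is a sketch of the kind of tracking record a clean archival writes at archive time. The object name comes from the tool mentioned above, but every field name below is a hypothetical placeholder rather than the tool's actual schema:
// Illustrative only: the fields on ContentDocumentArchive__c shown here are hypothetical placeholders
ContentDocument doc = [SELECT Id, Title FROM ContentDocument WHERE CreatedDate < LAST_N_MONTHS:18 LIMIT 1];
ContentDocumentArchive__c trail = new ContentDocumentArchive__c(
    OriginalContentDocumentId__c = doc.Id,            // which Salesforce file was archived
    S3Key__c = 'archive/' + doc.Id + '/' + doc.Title, // where it now lives in the bucket
    ArchivedOn__c = Datetime.now(),                    // when it was archived
    ArchivedBy__c = UserInfo.getUserId()               // by whom
);
insert trail;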
Reduce storage cost without losing access
The SF File Archiving tool moves Salesforce files to Amazon S3 with full traceability — custom Salesforce tracking objects so users still find their files, deduplication so you don't pay twice, and dry-run mode so you preview deletions before they happen.