Why Limits Exist
Salesforce runs multi-tenant: thousands of orgs share underlying infrastructure. Governor limits ensure that one org's runaway transaction cannot starve the rest. They are strict, enforced at runtime, and exceeding any of them rolls back the entire transaction.
Knowing them isn’t optional. Designing around them is the difference between Apex that ships and Apex that breaks at scale.
Per-Transaction Limits (Synchronous)
These apply to every synchronous Apex execution.
| Limit | Value |
|---|---|
| SOQL queries | 100 |
| SOQL query rows | 50,000 |
| SOSL queries | 20 |
| DML statements | 150 |
| DML rows | 10,000 |
| CPU time | 10,000 ms |
| Heap size | 6 MB |
| Callouts | 100 |
| Callout timeout (per) | 120 s |
| Callout timeout (cumulative) | 120 s |
| Queueable jobs enqueued | 50 |
| Future calls | 50 |
| Trigger cascade depth (recursive trigger invocations) | 16 |
| PushTopic events published | 100 |
Per-Transaction Limits (Async)
Async contexts — Queueable, Batch execute, @future, Scheduled — get roomier budgets.
| Limit | Async Value |
|---|---|
| SOQL queries | 200 |
| DML statements | 150 (unchanged from sync) |
| CPU time | 60,000 ms |
| Heap size | 12 MB |
| Callouts | 100 |
This is one reason to shift heavy work to async — not for performance, but for budget.
Per-24-Hour Limits
These apply across all transactions in a rolling 24-hour window. Exceeding them blocks new work until the window rolls forward.
| Limit | Value |
|---|---|
| API calls | Edition-dependent base plus per-user license allocation |
| Email invocations (total) | 5,000 single / 1,000 mass per org |
| Platform event publishes | Tier-based, typically millions |
| Async Apex executions | Greater of 250,000 or 200 × user licenses |
| Scheduled Apex jobs | 100 scheduled at once (concurrent cap, not a daily quota) |
| Bulk API batches | 15,000 per 24h |
Check Setup → Company Information ("API Requests, Last 24 Hours") to see current consumption.
Common Causes of Limit Violations
SOQL Limit Hit
Almost always caused by SOQL inside a loop. Refactor to query once and use a map.
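A minimal before/after sketch, assuming a Contact trigger that needs each record's parent Account (object and field names are illustrative):

```apex
// Anti-pattern (one query per record; a 200-record batch would attempt
// 200 queries against the 100-query budget):
// for (Contact c : Trigger.new) {
//     Account a = [SELECT Industry FROM Account WHERE Id = :c.AccountId];
// }

// Bulkified: collect the keys, query once, look up from a map.
Set<Id> accountIds = new Set<Id>();
for (Contact c : Trigger.new) {
    if (c.AccountId != null) {
        accountIds.add(c.AccountId);
    }
}
Map<Id, Account> accountsById = new Map<Id, Account>(
    [SELECT Id, Industry FROM Account WHERE Id IN :accountIds]
);
for (Contact c : Trigger.new) {
    Account parent = accountsById.get(c.AccountId);
    // ... use parent; zero additional queries regardless of batch size
}
```

One query serves the whole batch, so even a 200-record trigger invocation consumes a single unit of the query budget.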
DML Limit Hit
Same pattern — DML inside a loop. Build a collection and DML once outside.
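Applying the same fix to DML, sketched for a hypothetical trigger that creates a follow-up Task per Opportunity:

```apex
List<Task> tasksToInsert = new List<Task>();
for (Opportunity opp : Trigger.new) {
    tasksToInsert.add(new Task(
        WhatId = opp.Id,
        Subject = 'Follow up'  // illustrative subject
    ));
}
// One DML statement for the whole batch, not one per record.
if (!tasksToInsert.isEmpty()) {
    insert tasksToInsert;
}
```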
CPU Time Exceeded
Harder to diagnose. Causes include:
- Very large collections processed in memory.
- Complex string manipulation or regex at scale.
- Deeply nested loops (O(n²) or worse).
- Large trigger cascades on bulk operations.
Profile with System.debug(Limits.getCpuTime()) at checkpoints to identify the slow region.
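One way to bracket a suspect region with checkpoints (ScoringService is a hypothetical stand-in for your own code):

```apex
System.debug('CPU before scoring: ' + Limits.getCpuTime() + ' ms');
ScoringService.scoreAll(records);  // hypothetical suspect method
System.debug('CPU after scoring:  ' + Limits.getCpuTime() + ' ms');
// A large jump between the two checkpoints localizes the CPU cost.
```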
Heap Size Exceeded
Loading too many records into memory. Fix: pagination, smaller SELECT lists, or shifting to Batch Apex (which processes in chunks).
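A SOQL for loop is one low-effort heap fix: it retrieves records in chunks of 200 instead of materializing the full result set at once. A sketch:

```apex
for (List<Account> chunk : [SELECT Id, Name FROM Account]) {
    // process up to 200 records per iteration; the previous chunk
    // then becomes eligible for garbage collection instead of
    // sitting on the heap for the whole transaction
}
```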
Too Many Queueable Jobs
Enqueueing one Queueable per record in a 200-record trigger batch attempts 200 jobs in a single transaction, four times the 50-job cap. Redesign to enqueue fewer jobs that each process a batch of records.
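A sketch of the redesign: one hypothetical Queueable that carries the whole batch, enqueued once.

```apex
public class InvoicingQueueable implements Queueable {
    private List<Id> orderIds;
    public InvoicingQueueable(List<Id> orderIds) {
        this.orderIds = orderIds;
    }
    public void execute(QueueableContext ctx) {
        // process every Id in one job instead of one job per record
    }
}

// In the trigger: a single enqueue for the entire batch.
System.enqueueJob(new InvoicingQueueable(new List<Id>(Trigger.newMap.keySet())));
```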
API Calls Exceeded
Usually an integration hitting the org too hard. Use Bulk API for large data operations; it consumes one call per batch, not per record.
Designing for Limits
Bulkify Everything
Assume every Apex class will run in a batch of 200 records minimum. Design queries and DML accordingly.
Async for Heavy Work
If the synchronous path can’t fit the work, spin off a Queueable. The user sees their transaction commit immediately; the heavy work catches up asynchronously.
Early Exit Unnecessary Work
Triggers fire on every update. Check for relevant field changes before doing anything. Avoid processing records where nothing meaningful changed.
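A sketch of the early-exit check in an update trigger, assuming StageName is the field that matters:

```apex
List<Opportunity> changed = new List<Opportunity>();
for (Opportunity opp : Trigger.new) {
    Opportunity old = Trigger.oldMap.get(opp.Id);
    if (opp.StageName != old.StageName) {
        changed.add(opp);  // only records with a meaningful change
    }
}
if (changed.isEmpty()) {
    return;  // spend no queries, DML, or CPU on no-op updates
}
```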
Cache Lookups
Pattern: at the start of the trigger, query all the lookup records for the whole batch and build a map. Inside the loop, look up from the map. One query, not 200.
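The Map<Id, SObject> constructor compresses the query-and-key step into one statement (this fragment assumes accountIds was already collected from the batch):

```apex
// Keys the query results by record Id automatically.
Map<Id, Account> accountsById = new Map<Id, Account>(
    [SELECT Id, OwnerId FROM Account WHERE Id IN :accountIds]
);
for (Contact c : Trigger.new) {
    Account parent = accountsById.get(c.AccountId);  // map lookup, no query
}
```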
Batch Apex for 10K+ Records
When you need to process more records than a synchronous transaction can handle, Batch Apex chunks the work and gets fresh limits per chunk.
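The minimal Batchable shape, shown with a hypothetical cleanup query; each execute call runs in its own transaction with a fresh set of limits:

```apex
public class AccountCleanupBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // a QueryLocator can stream far more rows than one transaction could hold
        return Database.getQueryLocator(
            'SELECT Id FROM Account WHERE Inactive__c = true'  // hypothetical field
        );
    }
    public void execute(Database.BatchableContext bc, List<Account> scope) {
        delete scope;  // fresh governor limits per chunk (default 200 records)
    }
    public void finish(Database.BatchableContext bc) {
        // runs once at the end: notifications, chaining the next batch, etc.
    }
}

// Database.executeBatch(new AccountCleanupBatch(), 200);
```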
Platform Events for Fan-Out
When a single trigger would need to touch too many downstream objects, publish an event and let multiple subscribers handle their own work asynchronously. Each subscriber’s transaction is separate.
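A sketch of the publish side, using a hypothetical Order_Shipped__e platform event defined in Setup:

```apex
List<Order_Shipped__e> events = new List<Order_Shipped__e>();
for (Order ord : Trigger.new) {
    events.add(new Order_Shipped__e(
        Order_Id__c = ord.Id  // hypothetical custom field on the event
    ));
}
// Publishing is the trigger's only job; each subscriber then runs in
// its own transaction with its own limits.
List<Database.SaveResult> results = EventBus.publish(events);
```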
Monitoring
Proactive monitoring beats reactive firefighting.
Debug Logs: Show per-transaction limit usage. Useful for development and targeted production investigation.
Apex Exception Email: Unhandled governor limit errors send email to the admin listed on the Apex Exception recipients list. Route to a monitored inbox.
Event Monitoring (Shield add-on): Tracks CPU, SOQL, and DML usage across transactions; surfaces the slowest queries and heaviest users.
Salesforce Optimizer: Built-in org scan that flags limit-prone code and configuration.
The Limits Class
In Apex, the Limits class exposes current usage at runtime:
```apex
System.debug('SOQL so far: ' + Limits.getQueries() + '/' + Limits.getLimitQueries());
System.debug('DML so far:  ' + Limits.getDMLStatements() + '/' + Limits.getLimitDMLStatements());
System.debug('CPU so far:  ' + Limits.getCpuTime() + ' ms / ' + Limits.getLimitCpuTime() + ' ms');
```
Use these during development and testing. Do not use them to guard production code — “if I’m near the limit, skip this work” is a band-aid on a design problem.
Limit-Aware Patterns
Chunked async chains. Process 100 records in a Queueable, enqueue the next Queueable with the remaining work, repeat. Each chain link gets fresh limits.
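A sketch of a self-chaining Queueable; the chunk size of 100 is arbitrary.

```apex
public class ChunkedProcessor implements Queueable {
    private List<Id> remaining;
    public ChunkedProcessor(List<Id> ids) {
        this.remaining = ids;
    }
    public void execute(QueueableContext ctx) {
        Integer chunkSize = Math.min(100, remaining.size());
        List<Id> current = new List<Id>();
        List<Id> rest = new List<Id>();
        for (Integer i = 0; i < remaining.size(); i++) {
            if (i < chunkSize) {
                current.add(remaining[i]);
            } else {
                rest.add(remaining[i]);
            }
        }
        // ... process `current` under this transaction's fresh limits ...
        if (!rest.isEmpty()) {
            System.enqueueJob(new ChunkedProcessor(rest));  // next chain link
        }
    }
}
```

Each link processes a bounded slice and hands the remainder to a new transaction, so total work is no longer capped by any single transaction's budget.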
External data via Data Loader, not Apex. For bulk imports, Data Loader over the Bulk API submits records in chunks, and each chunk runs as its own transaction with its own limits. Your triggers still fire on loaded records, so they must be bulkified, but nothing competes with a user-facing synchronous transaction.
Off-platform processing. For very heavy analytical work, export to a warehouse (Snowflake, Databricks, Data Cloud) and run the computation there. Bring results back.
Frequently Asked Questions
Do limits reset between calls within the same HTTP request?
No. Apex limits apply per transaction, and one API call is typically one transaction.
Can I request higher limits?
Some async limits can be adjusted via support for specific high-volume customers. Synchronous limits are fixed.
How do flows count against limits?
Flow consumes Apex limits when its DML and SOQL execute. A record-triggered Flow firing on a bulk insert counts those queries and DMLs against the transaction budget.
What about managed packages?
Certified managed packages get their own per-transaction, per-namespace budgets for most limits, so they don't share SOQL or DML budgets with your code. A few limits, notably CPU time and heap size, are shared across all namespaces in the transaction. This is why an ISV can do query- and DML-heavy work without consuming your org's limits directly.