Why Bulkification Exists

Salesforce processes records in batches. A data import loads 200 records at a time. A REST API call can create dozens. A mass-update reassigns thousands.

When any of these trigger a flow, the flow runs once per batch — not once per record. If the flow is built assuming “one record at a time,” it breaks at the first bulk input. Governor limits hit, transactions roll back, and the data load fails loudly.

Bulkification means writing the flow so it behaves correctly whether it processes one record or two hundred.

Limits You Must Respect

Per transaction, synchronous:

  • 100 SOQL queries
  • 150 DML statements
  • 50,000 records queried
  • 10,000 records processed by DML
  • 10,000 milliseconds of CPU time (10 seconds)
  • 2,000 flow elements executed per interview

Bulkification is the discipline of not blowing through these as batch size grows.

Pattern 1: Query Outside the Loop

The single most common mistake: Get Records inside a Loop.

Inside a loop processing 200 records, Get Records becomes 200 queries — instant limit breach.

Fix: move the query outside the loop. Fetch all needed records up front, store in a collection, reference by map or by matching Id inside the loop.

Bad:
  Loop (cases)
    Get Records (Account where Id = {currentCase.AccountId})
    Decision based on Account.Tier

Good:
  Build collection of AccountIds from all cases
  Get Records (Account where Id IN :AccountIds)  — one query
  Build a Map<Id, Account>
  Loop (cases)
    Lookup Account from Map
    Decision based on Account.Tier

Pattern 2: DML Outside the Loop

Same principle, writing side.

Inside a loop, Update Records fires once per iteration — 200 DMLs for 200 records.

Fix: build a collection of records to update during the loop, then perform a single Update Records call outside the loop with the collection as input.

Bad:
  Loop
    Update Record (one at a time)

Good:
  Create empty collection
  Loop
    Build record variable, add to collection
  Update Records (collection)  — one DML

Pattern 3: Map-Based Lookups

Collections in flow don’t support O(1) lookups — walking a collection for a matching Id is O(n). Inside a loop, you’re back to O(n²).

Fix: use a Map variable (added in Spring ‘24). Key by record Id; value is the record. Lookup is fast.

At 200 records, a keyed lookup per iteration costs 200 operations; walking the collection each time costs up to 40,000 comparisons. For large collections, this pattern turns slow loops into fast ones.
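In the shape of the earlier patterns (variable names are illustrative), Pattern 1's single query feeding a Map looks like this:

  Get Records (Account where Id IN :AccountIds) → accounts
  Assignment: build accountMap, key = {account.Id}, value = account
  Loop (cases)
    currentAccount = accountMap[{currentCase.AccountId}]  (one keyed lookup, no inner loop)
    Decision based on currentAccount.Tier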

Pattern 4: Avoid Cross-Object DML in Loops

When one record processed requires updates to a parent, a child, and a related record, the temptation is to do all three updates inside the loop.

Fix: separate collections for each object. Build them up inside the loop, execute three bulk DMLs outside.

Loop over cases:
  add to parent_accounts_to_update
  add to related_tasks_to_create
  add to child_contacts_to_update

Outside loop:
  Update Records (parent_accounts_to_update)
  Create Records (related_tasks_to_create)
  Update Records (child_contacts_to_update)

One query to fetch the related records up front (Pattern 1) and three DML statements, regardless of batch size.

Pattern 5: Filter Early With Entry Conditions

If your record-triggered flow only needs to run on records meeting specific criteria, set entry conditions at the trigger level — don’t rely on Decision elements inside the flow.

Entry conditions skip the flow entirely for records that don’t match. Decision elements skip work but still consume flow interviews and elements.

In a bulk load of 10,000 records where only 200 match, entry conditions mean 200 flow interviews, not 10,000.
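A sketch of the trigger-level configuration (object and field values are illustrative):

  Object: Case
  Trigger: A record is created or updated
  Entry Conditions:
    Status Equals Escalated
    Priority Equals High
  When to Run: Only when a record is updated to meet the condition requirements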

Pattern 6: Use Before-Save for Field Updates

A before-save flow that updates fields on the triggering record consumes no DML of its own; the changes are saved with the record's original write. An after-save flow doing the same work needs its own Update Records element, which spends extra DML and re-runs the record's save cycle.

At bulk scale, this matters.

Use before-save unless you need side effects (creating related records, sending emails, callouts).
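A minimal before-save sketch (field values are illustrative). Note the absence of an Update Records element: the Assignment writes to $Record, and the change is saved with the record itself:

  Trigger: Case, before save, on create
  Decision: {$Record.Type} = Escalation?
    Yes → Assignment: {$Record.Priority} = High
  (no Update Records element, zero DML consumed)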

Pattern 7: Async for Expensive Work

Some work is too heavy for synchronous bulk: external callouts, large aggregations, cascading updates.

Fix: from your record-triggered flow, run a small synchronous step and defer the heavy work to an async path:

  • Invoke a scheduled flow that picks up pending records.
  • Publish a platform event and handle it in a separate flow.
  • Call an async Apex method from an invocable action.

Async defers the cost beyond the synchronous transaction. It adds latency but avoids hitting governor walls.
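The platform-event option can be sketched as two flows (the event name Enrichment_Request__e and its field are assumptions):

  Record-triggered flow (synchronous, cheap):
    Entry conditions filter to records needing enrichment
    Create Records: Enrichment_Request__e with Record_Id__c = {$Record.Id}

  Platform-event-triggered flow (async, heavy):
    Get Records for {$Record.Record_Id__c}
    Invocable action for the callout or aggregation
    Update Records with the results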

Pattern 8: Bulk-Friendly Subflows

Subflows called from a loop inherit the loop's bulk problem: each invocation is a separate flow run, and each can perform its own queries and DML.

Fix: design subflows to accept collections as input and process them in bulk internally. Call the subflow once per flow, not once per record.
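In the Bad/Good shape used above (subflow names are illustrative):

Bad:
  Loop (cases)
    Subflow: Score_Case (input: {currentCase})  (200 runs, each with its own DML)

Good:
  Subflow: Score_Cases (input: cases collection)  (one run)
  Inside Score_Cases:
    Loop over the input collection, build an updates collection
    Update Records (updates)  (one DML)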

Testing Bulkification

Bulkification bugs don’t show up in one-at-a-time testing. You must test with realistic batches.

  • Load 200 test records via Data Loader.
  • Update 200 records via API or mass-edit.
  • Use Apex Test classes that create bulk scenarios.
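The Apex route can be sketched like this; the class, object, and field values are assumptions, and the flow under test is assumed to fire on Case insert:

  @IsTest
  private class CaseFlowBulkTest {
      @IsTest
      static void handlesTwoHundredRecords() {
          List<Case> cases = new List<Case>();
          for (Integer i = 0; i < 200; i++) {
              cases.add(new Case(Subject = 'Bulk test ' + i, Status = 'New', Origin = 'Web'));
          }
          Test.startTest();
          insert cases;  // fires the record-triggered flow once for the whole batch
          // Counts since startTest include everything the flow consumed
          System.assert(Limits.getQueries() < 100, 'SOQL should not scale with batch size');
          System.assert(Limits.getDmlStatements() < 150, 'DML should not scale with batch size');
          Test.stopTest();
      }
  }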

Monitor the debug log for:

  • Number of SOQLs (should be constant regardless of batch size).
  • Number of DMLs (should be constant or proportional to objects touched, not records).
  • Total execution time (should scale linearly with batch size, not quadratically).

Diagnosing Existing Flows

For a flow that breaks on bulk loads:

  1. Run Debug with a test record — note the SOQL and DML counts.
  2. Run Debug with a bulk-like scenario in sandbox — same counts?
  3. If counts scale with batch size, you have bulkification bugs.
  4. Fix each violating pattern from above.

Frequently Asked Questions

Does Flow actually run per batch or per record?

Per batch. The flow fires once and processes a collection of records. Your elements operate on that collection unless you explicitly loop.

Are scheduled flows bulkified?

Schedule-triggered flows run in batches of 200 by default. Same rules apply.

Can I set a batch size?

For schedule-triggered flows, yes — via the flow’s scheduled-run configuration. For record-triggered flows, batch size is controlled by the triggering operation (API, import, manual save) and is not configurable from the flow side.

What governor limits differ for async flows?

Async Apex gets higher limits in several areas (200 SOQL queries, 60 seconds of CPU time; DML statements stay at 150). Async flow paths inherit these when running in the background. This is another reason to push heavy work async.
