What Atlas Is

Atlas is the component that sits between a user’s question and the agent’s response. It’s not a model — it’s a reasoning loop that uses one or more LLMs to plan, act, observe, and revise.

Every Agentforce agent runs on Atlas. You don’t configure it directly, but every design choice you make — topic instructions, action descriptions, scope boundaries — exists to give Atlas better inputs.

The Loop

Atlas follows a deliberate four-step cycle. Understanding each step is the fastest way to see why agents sometimes behave the way they do.

1. Classify

When a user message arrives, Atlas first picks a topic. It does this by comparing the user’s input against the classification description of every topic attached to the agent. The topic with the best semantic match wins.

What this means for you: topic descriptions are not marketing copy. They’re the prompt Atlas uses to route traffic. Write them precisely.
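
Conceptually, classification is a best-match search over topic descriptions. Here is a minimal Python sketch of that shape; the toy embedding and the function names are illustrative assumptions, not Atlas internals:

    def embed(text: str) -> list[float]:
        # Toy stand-in for a real embedding model: bag-of-letters, normalized.
        vec = [0.0] * 26
        for ch in text.lower():
            if "a" <= ch <= "z":
                vec[ord(ch) - ord("a")] += 1.0
        norm = sum(v * v for v in vec) ** 0.5 or 1.0
        return [v / norm for v in vec]

    def classify(message: str, topics: dict[str, str]) -> tuple[str, float]:
        # Score every topic description against the message; best match wins.
        msg = embed(message)
        scores = {name: sum(a * b for a, b in zip(msg, embed(desc)))
                  for name, desc in topics.items()}
        best = max(scores, key=scores.get)
        return best, scores[best]

    topics = {
        "Case Management": "Create, update, and check the status of support cases.",
        "Order Tracking": "Look up shipping status and delivery dates for orders.",
    }
    print(classify("Where is my package?", topics))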

2. Plan

Once the topic is chosen, Atlas reads the topic’s scope and instructions, inspects the actions attached to that topic, and drafts a plan. The plan is a sequence of action calls with arguments.

In 2.0, the plan can be multi-step and conditional: “Call Get_Case first; if status is Closed, explain why; otherwise call Check_Assignment_Rules and draft a response.” This is the major upgrade from 1.0’s mostly-single-step planning.
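
One way to picture that plan is as a small data structure of action calls plus branches. The shape below is an assumption for illustration; the real plan format is not exposed:

    # Assumed shape for the conditional plan quoted above; illustrative only.
    plan = [
        {"action": "Get_Case", "args": {"case_number": "00123456"}},
        {"if": lambda ctx: ctx["Get_Case"]["Status"] == "Closed",
         "then": "explain why the case is closed",
         "else": [
             {"action": "Check_Assignment_Rules", "args": {"case_number": "00123456"}},
             {"action": "Draft_Response", "args": {}},
         ]},
    ]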

3. Act

Atlas executes the plan one step at a time. Each action runs against your Salesforce data with the running user’s permissions — there is no privilege escalation. If an action fails, Atlas catches the error and either retries, substitutes a different action, or surfaces the error to the user.
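
For a linear plan, that control flow looks roughly like the loop below. The one-retry policy and the names are assumptions chosen for illustration (the real loop can also substitute an alternative action):

    def run_plan(plan, actions, max_retries=1):
        # Execute each action call in order, retrying once before surfacing
        # the error. Each callable in `actions` runs with the user's permissions.
        context = {}
        for step in plan:
            name, args = step["action"], step["args"]
            for attempt in range(max_retries + 1):
                try:
                    context[name] = actions[name](**args)
                    break
                except Exception as err:
                    if attempt == max_retries:
                        return context, f"{name} failed: {err}"
        return context, None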

4. Observe and Respond

After the actions run, Atlas reads their outputs, composes a response, and passes it through the Einstein Trust Layer for masking, toxicity filtering, and logging. Only then does the response reach the user.
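
The ordering is the point: composition happens first, and nothing reaches the user until every filter has run. A minimal sketch of that pipeline, with illustrative names:

    def respond(action_outputs, compose, trust_filters):
        draft = compose(action_outputs)      # the LLM drafts from action outputs
        for trust_filter in trust_filters:   # masking, toxicity filtering, logging
            draft = trust_filter(draft)
        return draft                         # only now does it reach the user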

Why Your Agent Picks the “Wrong” Action

This is the most common complaint: “Why did it call Create_Case when I asked it to update one?”

The answer is almost always one of three things:

Action descriptions overlap

If two actions have similar descriptions, Atlas has no reliable way to distinguish them. Rewrite the descriptions so each one states what the action does and when to use it, not just its name.
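
A hypothetical before-and-after. The first pair gives Atlas nothing to route on; the second states what each action does and when to use it:

    overlapping = {
        "Create_Case": "Handles support cases.",
        "Update_Case": "Handles support cases.",
    }
    distinct = {
        "Create_Case": ("Creates a new support case. Use only when the user "
                        "reports a problem and provides no existing case number."),
        "Update_Case": ("Modifies an existing support case. Use when the user "
                        "supplies or references a case number."),
    }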

Topic instructions are silent on the choice

If your instructions don’t say “call X before Y” or “only call Z when the user says ‘new’,” Atlas will guess based on general-purpose reasoning. Be explicit.
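
For example (illustrative wording, not canonical syntax):

    Always call Get_Case before Update_Case. Only call Create_Case when the
    user says "new" or provides no existing case number.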

The input is genuinely ambiguous

Sometimes the user’s message could reasonably mean two things. Atlas isn’t wrong here — it picked one interpretation. Either tighten the topic scope to reject that ambiguity, or add a clarifying action the agent can call when confidence is low.

The Reasoning Trace

Every conversation in Agent Builder has a reasoning trace. Open it before you complain about behavior.

The trace shows:

  • Which topic was selected and the classification confidence
  • The plan Atlas drafted
  • Every action called, with inputs and outputs
  • The final response before and after Trust Layer processing
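
If it helps to picture it, the trace captures roughly the structure below. The field names are illustrative; you read the trace in Agent Builder, not as JSON:

    trace = {
        "topic": {"selected": "Case Management", "confidence": 0.82},
        "plan": ["Get_Case", "Check_Assignment_Rules", "Draft_Response"],
        "actions": [{"name": "Get_Case",
                     "inputs": {"case_number": "00123456"},
                     "outputs": {"Status": "Open"}}],
        "response": {"draft": "...", "after_trust_layer": "..."},
    }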

Ninety percent of agent debugging is reading the trace. Most “the model is broken” reports, on inspection, turn out to be “the instructions were ambiguous.”

Latency Budget

Atlas’s reasoning loop has a cost. Each step involves an LLM call. A multi-step plan with three actions and a revision loop can easily hit 5–8 LLM calls for a single user message.
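
A back-of-the-envelope check, assuming one to two seconds per LLM call (an assumption, not a published figure) plus action execution time:

    llm_calls = 5                          # classify + plan + three act/observe cycles
    low_s, high_s = 1.0, 2.0               # assumed seconds per LLM call
    actions_s = 3 * 0.5                    # assumed half a second per action
    print(llm_calls * low_s + actions_s)   # 6.5 s
    print(llm_calls * high_s + actions_s)  # 11.5 s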

Practical implications:

  • Expect 2–5 seconds for simple responses, 6–15 for complex ones.
  • If latency matters, keep topic scopes narrow so Atlas has fewer options to reason through.
  • Don’t build an agent as a front for simple CRUD. A Flow runs in milliseconds.

Einstein Request Credits

Every LLM call consumes Einstein request credits. Atlas’s multi-step reasoning means a single conversation may consume several credits, not one.

Monitor your credit burn rate in Setup → Einstein Usage. A pilot that looks cheap can become expensive when rolled out to thousands of users if the agent’s scope is broad.
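
The multiplication is worth doing before the rollout, not after. With assumed numbers (replace them with figures from your own traces):

    credits_per_conversation = 5               # assumed; read yours off the trace
    pilot = 50 * 2                             # 50 pilot users, 2 conversations/day
    rollout = 5_000 * 2                        # same usage pattern at full rollout
    print(pilot * credits_per_conversation)    # 500 credits/day
    print(rollout * credits_per_conversation)  # 50,000 credits/day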

Guardrails You Can’t See

Atlas enforces rules you don’t configure:

  • Permission-based data access: the agent runs as the user; it cannot see records the user can’t.
  • Record- and field-level security: sharing rules and field-level security apply to every action.
  • Logging: every step is logged to the Einstein audit trail, retained per your data retention policy.
  • Rate limiting: individual users and the org as a whole are rate-limited to prevent runaway agents.

These are not optional. Attempts to work around them indicate a design problem.

What “Reasoning” Actually Means

It’s worth being clear: Atlas does not “think” in any human sense. It runs a carefully scaffolded loop on top of LLMs that have been fine-tuned for planning and tool use. The scaffolding is what makes the output reliable enough for enterprise use.

When the model is wrong, it’s because the scaffolding failed to constrain it — which means your design failed to constrain it. The fix is almost always in your configuration, not in the model.

Frequently Asked Questions

Can I choose which LLM Atlas uses?

Not directly. Salesforce routes requests internally across multiple hosted models. Regulated use cases (FedRAMP, HIPAA) can request model-specific routing through their Salesforce account executive.

How do I see the classification confidence?

The reasoning trace in Agent Builder displays it next to the chosen topic. Below ~0.6 is a signal that your topic descriptions overlap.

What happens when no topic matches?

Atlas falls back to a default topic if you’ve defined one, or returns a generic “I can’t help with that” response. Always define a default — the generic fallback looks unpolished.
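
The guard is trivial to express. A sketch using the ~0.6 heuristic from above (a rule of thumb, not a documented constant):

    def route(topic: str, confidence: float, default: str = "General Help") -> str:
        # Prefer your own default topic over the unpolished generic refusal.
        return topic if confidence >= 0.6 else default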
