# Atlaso + OpenAI Agents SDK

<!-- Canonical: https://www.atlaso.ai/docs/recipes/openai-agents -->

The `atlaso[openai-agents]` extra (`pip install atlaso[openai-agents]`) is a reserved namespace. The OpenAI Agents SDK exposes tools through the `@function_tool` decorator; wrap Atlaso's verbs as tools and the agent picks them up:

```python
from atlaso import Memory
from agents import Agent, function_tool

memory = Memory()


@function_tool
def remember(text: str, user_id: str) -> str:
    """Save a fact to long-term memory for this user."""
    result = memory.add(text, user_id=user_id)
    return f"Saved as deposit {result.id}"


@function_tool
def recall(query: str, user_id: str) -> str:
    """Search long-term memory. Returns the verdict (action language)
    plus per-hit content + confidence flags so the agent can branch on
    disagreement instead of treating retrieval as authoritative.
    """
    results = memory.recall(query, user_id=user_id, limit=5)
    lines = [results.explain()]
    for r in results:
        prefix = "?" if r.has_disagreement else ("✓" if r.is_confident else "·")
        lines.append(f"{prefix} {r.content}")
    return "\n".join(lines)


@function_tool
def contradict(new_text: str, supersedes_deposit_id: str, reason: str, user_id: str) -> str:
    """When you learn something that contradicts an earlier memory,
    deposit the new finding AND mark the old one as superseded — atomically.
    Atlaso has no `update`; this is the canonical revision verb.
    """
    result = memory.contradict(
        new_text,
        contradicts=[supersedes_deposit_id],
        reason=reason,
        user_id=user_id,
    )
    return f"Recorded as deposit {result.id}; old fact retracted with audit reason."


assistant = Agent(
    name="Assistant",
    instructions=(
        "You have long-term memory via remember/recall/contradict. "
        "Branch on the prefix in recall: ✓ = trust, ? = disagreement, · = unconfirmed."
    ),
    tools=[remember, recall, contradict],
)
```

Atlaso pairs especially well with the OpenAI Agents SDK because the agent loop already passes structured tool outputs back to the LLM: surfacing `is_confident` / `has_disagreement` as inline prefixes lets the model branch on retrieval quality without any extra orchestration code on your side.
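The prefix convention is simple enough to check without an agent in the loop. A minimal sketch, assuming Atlaso recall hits expose `content`, `is_confident`, and `has_disagreement` as used in the `recall` tool above (the `Hit` dataclass here is a hypothetical stand-in):

```python
from dataclasses import dataclass


# Hypothetical stand-in for an Atlaso recall hit.
@dataclass
class Hit:
    content: str
    is_confident: bool = False
    has_disagreement: bool = False


def prefix_for(hit: Hit) -> str:
    # Same branching as the recall tool: disagreement wins over confidence,
    # so a contested fact is never rendered as trusted.
    if hit.has_disagreement:
        return "?"
    return "✓" if hit.is_confident else "·"


hits = [
    Hit("User works at Acme", is_confident=True),
    Hit("User prefers tabs", has_disagreement=True),
    Hit("User may like hiking"),
]
print([f"{prefix_for(h)} {h.content}" for h in hits])
# → ['✓ User works at Acme', '? User prefers tabs', '· User may like hiking']
```

Checking disagreement before confidence is the design choice that matters: a hit can plausibly carry both flags, and the agent should treat it as contested in that case.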

---

<!-- atlaso:doc-trailer -->
**Source:** <https://www.atlaso.ai/docs/recipes/openai-agents>  
**Edit on GitHub:** <https://github.com/imashishkh21/atlaso/tree/main/docs/recipes/openai-agents.md>  
**Updated:** 2026-05-12
