The single mechanic that makes atlaso different. Every recall hit carries a verdict on whether the data is settled.
A SearchResult carries four dispersion-aware flags on top of the underlying Deposit:
- `is_confident: bool` — the bag this hit lives in is multi-sample, near-unanimous on a dominant polarity, and free of directional conflict.
- `has_disagreement: bool` — the bag contains two or more distinct directional polarities. Any pair drawn from {positive, negative, cautionary} triggers the flag — including negative + cautionary without a positive.
- `agreement_score: float` — fraction of the bag whose polarity matches the dominant polarity, in [0, 1].
- `conflict_peers: tuple[str, ...]` — deposit IDs of the opposing-polarity hits that triggered the conflict.

Inside the engine the bag-level fields are named `bag_precision` / `has_conflict` / `is_single_sample`. They're re-surfaced on the public `SearchResult` as `agreement_score`, `has_disagreement`, and `is_thin_evidence` respectively. Same data, public names.
```python
@property
def is_confident(self) -> bool:
    return (
        self.bag_precision >= 0.99       # public: agreement_score
        and not self.is_single_sample    # public: is_thin_evidence
        and not self.has_conflict        # public: has_disagreement
    )
```

Three conditions must all hold. A single observation, even with a high evidence grade, can never be `is_confident`. A fifty-fifty split, even with many observations, can never be `is_confident`. A unanimous bag of two — yes. The rule is intentionally strict.
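The strictness of the rule can be seen with a toy stand-in for the engine's bag. The `Bag` class below is illustrative, not the real atlaso type; only the three field names and the threshold come from the docs.

```python
from dataclasses import dataclass

@dataclass
class Bag:
    bag_precision: float    # public: agreement_score
    is_single_sample: bool  # public: is_thin_evidence
    has_conflict: bool      # public: has_disagreement

    @property
    def is_confident(self) -> bool:
        # All three conditions must hold simultaneously.
        return (
            self.bag_precision >= 0.99
            and not self.is_single_sample
            and not self.has_conflict
        )

# A single observation: never confident, regardless of evidence grade.
print(Bag(1.0, True, False).is_confident)   # False
# A fifty-fifty split: never confident, regardless of bag size.
print(Bag(0.5, False, True).is_confident)   # False
# A unanimous bag of two: confident.
print(Bag(1.0, False, False).is_confident)  # True
```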
Each bag has a dominant polarity — the directional polarity with the most deposits. `open` deposits are excluded from the directional count (they're questions, not claims).
`bag_precision` is the fraction of directional deposits whose polarity equals the dominant polarity. A 5-deposit bag with 5 positive and 0 negative has `bag_precision = 1.0`; a bag with 4 positive and 1 negative has 0.8 and is not confident.
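The computation above can be sketched in a few lines. The helper below is illustrative, not atlaso API; the directional set and the exclusion of `open` deposits follow the rule stated here.

```python
from collections import Counter

DIRECTIONAL = {"positive", "negative", "cautionary"}

def bag_precision(polarities: list[str]) -> float:
    # Only directional polarities count; "open" deposits are excluded.
    directional = [p for p in polarities if p in DIRECTIONAL]
    if not directional:
        return 0.0
    # Dominant polarity = the directional polarity with the most deposits.
    _, dominant_count = Counter(directional).most_common(1)[0]
    return dominant_count / len(directional)

print(bag_precision(["positive"] * 5))                  # 1.0
print(bag_precision(["positive"] * 4 + ["negative"]))   # 0.8
print(bag_precision(["positive", "open", "positive"]))  # 1.0 ("open" ignored)
```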
A bag has a conflict if it contains two or more distinct directional polarities drawn from {positive, negative, cautionary}. The signal is structural, not similarity-based — atlaso doesn't embed your text to decide if it disagrees. It groups by the structured scope and counts polarities.
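Because the check is structural, it reduces to counting distinct directional polarities. A minimal sketch, assuming polarity labels are plain strings (the helper itself is illustrative, not atlaso API):

```python
DIRECTIONAL = {"positive", "negative", "cautionary"}

def has_conflict(polarities: list[str]) -> bool:
    # A bag conflicts when it holds two or more distinct directional
    # polarities; no embedding or similarity comparison is involved.
    distinct = {p for p in polarities if p in DIRECTIONAL}
    return len(distinct) >= 2

print(has_conflict(["positive", "negative"]))          # True
print(has_conflict(["negative", "cautionary"]))        # True (no positive needed)
print(has_conflict(["positive", "positive", "open"]))  # False
```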
```python
from atlaso import Memory, Scope

m = Memory()
user = m.for_user("alice")

# env= gives narrow-negative the provenance the gate requires
scope = Scope(model="gpt-5", dataset="prod", env="prod")

user.add("threshold 0.7 is optimal", polarity="positive", scope=scope, evidence_grade="observed")
user.add("threshold 0.7 over-flags", polarity="negative", scope=scope, evidence_grade="observed")

results = user.recall("threshold", scope=scope)
print(results.has_disagreement)  # True
print(results.is_confident)      # False
for hit in results:
    print(hit.has_disagreement, hit.conflict_peers, hit.content)
```

`SearchResults.has_disagreement` looks at every bag returned by the engine, not just the first `limit` results. A `limit=5` slice could otherwise hide a conflict bag from your model. This is intentional.
`results.explain()` returns a one-line verdict your agent can read in-prompt.

```python
print(results.explain())
# "5 hits across 2 bags · 1 bag in conflict · not confident"
```