Bulk imports and retried writes need replay safety. Atlaso uses Stripe-style content-addressed idempotency keys with a 24-hour deduplication window.
add() is convenient but offers no replay safety — calling it twice creates two deposits. add_many() requires a per-item idempotency_key so that retrying a partially-failed batch is safe.
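The batch contract can be illustrated with a toy in-memory store. This is a sketch of the replay-safety semantics only, not Atlaso's implementation; `ToyStore` and its `(key, content)` tuple items are made up for illustration:

```python
class ToyStore:
    """Toy model of replay-safe bulk writes (illustration only)."""

    def __init__(self):
        self.seen = {}  # idempotency_key -> content

    def add_many(self, items):
        """items: list of (idempotency_key, content) tuples."""
        committed, duplicates = 0, 0
        for key, content in items:
            if key in self.seen:
                duplicates += 1  # replay: skip, never double-write
            else:
                self.seen[key] = content
                committed += 1
        return committed, duplicates

store = ToyStore()
batch = [("ak_1", "likes oat milk"), ("ak_2", "prefers email")]
assert store.add_many(batch) == (2, 0)  # first attempt commits both
assert store.add_many(batch) == (0, 2)  # retrying the same batch is a no-op
```

Because each item carries its own key, retrying a partially-failed batch re-sends everything, and only the items that never committed actually land.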
idempotency_key(*parts) is a pure blake2b-16 hash of its arguments, prefixed with ak_. Use it to derive deterministic keys from anything that uniquely identifies the write.
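For intuition, a key of the same shape can be derived with Python's standard library. The `ak_` prefix and blake2b-16 digest come from the description above, but the exact part-joining scheme is an assumption, and `make_key` is a hypothetical stand-in, not the library function:

```python
import hashlib

def make_key(*parts: str) -> str:
    """Hypothetical stand-in for idempotency_key: blake2b-16 over the parts."""
    h = hashlib.blake2b(digest_size=16)
    for p in parts:
        # Length-prefix each part so ("ab", "c") and ("a", "bc") hash differently.
        b = p.encode("utf-8")
        h.update(len(b).to_bytes(4, "big"))
        h.update(b)
    return "ak_" + h.hexdigest()

k1 = make_key("alice", "Alice prefers oat milk", "2026-05-11")
k2 = make_key("alice", "Alice prefers oat milk", "2026-05-11")
assert k1 == k2            # deterministic: same parts, same key
assert len(k1) == 3 + 32   # "ak_" + 16 bytes as hex
```

A pure hash means the key can be recomputed anywhere — from the source row, not from state stored alongside the import.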
```python
from atlaso import idempotency_key

# Content-addressed: same content → same key
key = idempotency_key("alice", "Alice prefers oat milk", "2026-05-11")
# 'ak_b7c1d3e8a4f5...'

# Row-addressed: stable across retries
for i, row in enumerate(rows):
    k = idempotency_key("import-batch-42", str(i))
```

A full import goes through add_many():

```python
from atlaso import Memory, AddItem, idempotency_key

m = Memory()
user = m.for_user("alice")

rows = read_csv("preferences.csv")
items = [
    AddItem(
        content=row["text"],
        idempotency_key=idempotency_key("preferences-csv-v1", row["id"]),
        polarity=row["polarity"],
    )
    for row in rows
]

result = user.add_many(items)
print(result.committed, "new")
print(result.duplicates, "replays (already seen)")
print(result.failed, "rejected")
```

When a deposit is replayed through add_many(), it is appended to AddManyResult.duplicates rather than committed. Reusing a key with different content raises IdempotencyKeyConflict, which carries the .existing_id of the original deposit.

Bulk imports can hit the gate. The on_gate_reject parameter controls what happens to rejected items:
- "skip" (default): rejected items land in result.failed; the rest of the batch commits.

Each rejection is a BulkReject carrying the original AddItem, the error, and its index, so you can retry just those items after fixing the evidence grade.
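Retrying only the rejected rows can be sketched as follows. The BulkReject fields follow the description above, but the dataclass itself, the `grade` field, and the gate error message are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class BulkReject:
    """Shape described above: the original item, the error, and its batch index."""
    item: dict
    error: str
    index: int

# Pretend result.failed came back with one gate rejection.
failed = [
    BulkReject(
        item={"content": "likes tea", "grade": "C"},
        error="gate: evidence grade below threshold",
        index=7,
    )
]

# Fix the evidence grade on just the rejected items, then resubmit only those.
retry = [{**r.item, "grade": "A"} for r in failed]
assert retry == [{"content": "likes tea", "grade": "A"}]
```

Keeping the same idempotency keys on the resubmitted items is what makes the retry safe: anything that actually committed on the first attempt comes back as a duplicate, not a double-write.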