Python SDK
Native Python bindings via PyO3. Import as `import pensyve`.

```shell
pip install pensyve
```

The Python SDK is a compiled Rust extension. It runs locally with no server required.
Pensyve
Main entry point for the memory runtime.
Pensyve(path?, namespace?)
Create or open a Pensyve instance.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `path` | `str \| None` | `~/.pensyve/default` | Directory for storage files |
| `namespace` | `str \| None` | `"default"` | Namespace name |
```python
from pensyve import Pensyve

p = Pensyve()
p = Pensyve(path="/tmp/mydata", namespace="project-x")
```

entity(name, kind?)
Get or create an entity.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | required | Entity name |
| `kind` | `str` | `"user"` | One of `"agent"`, `"user"`, `"team"`, `"tool"` |
Returns: Entity
```python
user = p.entity("alice")
agent = p.entity("my-agent", kind="agent")
```

episode(*participants)
Create an episode context manager. Episodes record messages and produce memories on exit.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `*participants` | `Entity` | required | Entities participating in this episode |
Returns: Episode (context manager)
```python
user = p.entity("alice")
agent = p.entity("my-agent", kind="agent")

with p.episode(user, agent) as ep:
    ep.message("user", "What's the status of project X?")
    ep.message("assistant", "Project X is on track for Q2 launch.")
    ep.outcome("success")
```

recall(query, entity?, limit?, types?)
Search memories matching a query. Fuses vector, BM25, graph, recency, and other signals.
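The fusion step is easiest to see in miniature. The sketch below shows plain reciprocal-rank fusion (RRF) over two invented ranked lists; the `k = 60` constant and the memory IDs are assumptions for illustration, not Pensyve internals.

```python
# Illustration of reciprocal-rank fusion (RRF), one way to combine
# ranked lists from independent signals (e.g. vector vs. BM25 order).
# The k constant and memory IDs are invented for this sketch.

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    # Each list contributes 1 / (k + rank) to every ID it ranks.
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, mem_id in enumerate(ranking, start=1):
            scores[mem_id] = scores.get(mem_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["m3", "m1", "m7"]  # nearest-neighbour order
bm25_hits = ["m3", "m9", "m1"]    # keyword-match order
print(rrf_fuse([vector_hits, bm25_hits]))  # m3 leads: first in both lists
```

An ID that ranks well in several lists beats one that ranks very well in only one, which is why fused recall can surface results a single signal would miss.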
| Parameter | Type | Default | Description |
|---|---|---|---|
| `query` | `str` | required | Search query |
| `entity` | `Entity \| None` | `None` | Filter to a specific entity |
| `limit` | `int` | `5` | Max results |
| `types` | `list[str] \| None` | `None` | Filter by memory type: `"episodic"`, `"semantic"`, `"procedural"` |
Returns: list[Memory]
```python
memories = p.recall("project X deadline")
memories = p.recall("deployment steps", entity=agent, limit=10, types=["procedural"])
```

recall_grouped(query, *, limit?, order?, max_groups?)
Recall memories matching a query, clustered by source session.
Runs the same RRF fusion pipeline as `recall()` and then groups the top `limit` memories by `episode_id`. Memories from the same session cluster into a single `SessionGroup` sorted in conversation order; semantic and procedural memories appear as singleton groups with `session_id=None`.
This is the canonical entry point for "memory as input to an LLM reader" workflows — internal benchmarking on LongMemEval_S confirmed that session-grouped recall produces materially better reader accuracy than flat recall.
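The clustering pass described above can be sketched in plain Python. Below, dicts stand in for `Memory` and `SessionGroup` objects and the field values are invented; only the logic (episode groups, `None` singletons, chronological member order, max-score aggregation) mirrors the description.

```python
# Sketch of session grouping: cluster scored memories by episode_id,
# keep singleton groups for memories with no episode, sort members by
# event time, and aggregate group_score as the max member score.

def group_by_session(memories: list[dict]) -> list[dict]:
    groups: dict[str, dict] = {}
    out: list[dict] = []
    for m in memories:
        if m["episode_id"] is None:
            # Semantic/procedural memories become singleton groups.
            out.append({"session_id": None, "memories": [m],
                        "group_score": m["score"]})
            continue
        g = groups.get(m["episode_id"])
        if g is None:
            g = {"session_id": m["episode_id"], "memories": [],
                 "group_score": 0.0}
            groups[m["episode_id"]] = g
            out.append(g)
        g["memories"].append(m)
        g["group_score"] = max(g["group_score"], m["score"])
    for g in out:
        g["memories"].sort(key=lambda m: m["event_time"] or "")
    return out

mems = [
    {"episode_id": "e1", "event_time": "2024-05-02T10:00:00", "score": 0.9},
    {"episode_id": None, "event_time": None, "score": 0.7},
    {"episode_id": "e1", "event_time": "2024-05-01T09:00:00", "score": 0.4},
]
print(group_by_session(mems))  # one e1 group plus one singleton fact
```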
| Parameter | Type | Default | Description |
|---|---|---|---|
| `query` | `str` | required | Search query |
| `limit` | `int` | `50` | Max memories to consider across all groups |
| `order` | `Literal["chronological", "relevance"]` | `"chronological"` | Group ordering — oldest session first, or highest-scoring session first |
| `max_groups` | `int \| None` | `None` | Optional cap on the number of returned groups |
Returns: list[SessionGroup]
Raises: ValueError if order is not one of the supported values.
```python
groups = p.recall_grouped("How many books did I buy this year?", limit=50)
for g in groups:
    print(f"### Session {g.session_id} ({g.session_time}):")
    for m in g.memories:
        print(f"  {m.content}")
```

Feed the groups directly to a reader prompt — no manual `OrderedDict` clustering or date-string reordering required.
remember(entity, fact, confidence?)
Store an explicit semantic memory.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `entity` | `Entity` | required | Entity this fact is about |
| `fact` | `str` | required | The fact to store |
| `confidence` | `float` | `0.8` | Confidence in [0, 1] |
Returns: Memory
```python
m = p.remember(user, "Alice prefers dark mode")
m = p.remember(user, "Alice is on the platform team", confidence=0.95)
```

forget(entity, hard_delete?)
Archive or permanently delete all memories about an entity.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `entity` | `Entity` | required | Target entity |
| `hard_delete` | `bool` | `False` | Permanently delete instead of archiving |
Returns: dict[str, int] with key forgotten_count
```python
result = p.forget(user)
result = p.forget(user, hard_delete=True)
```

consolidate()
Run background consolidation: promotes repeated episodic memories to semantic, applies FSRS decay, and archives memories below threshold.
Returns: dict[str, int] with keys promoted, decayed, archived
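The decay-and-archive part of the pass can be modelled as a toy exponential decay. The half-life, threshold, and dict fields below are invented for illustration; Pensyve's actual FSRS parameters are internal.

```python
import math

# Toy decay/archive sweep: stability decays exponentially with idle
# days, and memories that fall below a threshold are archived. The
# half-life, threshold, and dict fields are invented for this sketch.
HALF_LIFE_DAYS = 30.0
ARCHIVE_THRESHOLD = 0.05

def sweep(memories: list[dict]) -> dict[str, int]:
    stats = {"decayed": 0, "archived": 0}
    for m in memories:
        new = m["stability"] * math.exp(
            -math.log(2) * m["days_idle"] / HALF_LIFE_DAYS)
        if new < m["stability"]:
            stats["decayed"] += 1
        m["stability"] = new
        if new < ARCHIVE_THRESHOLD:
            m["archived"] = True
            stats["archived"] += 1
    return stats

mems = [{"stability": 0.8, "days_idle": 10},
        {"stability": 0.06, "days_idle": 90}]
print(sweep(mems))  # both decay; only the stale memory crosses the threshold
```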
```python
stats = p.consolidate()
# {'promoted': 3, 'decayed': 12, 'archived': 1}
```

Entity
Represents an entity (agent, user, team, or tool). Created via Pensyve.entity().
| Property | Type | Description |
|---|---|---|
| `id` | `str` | UUID |
| `name` | `str` | Entity name |
| `kind` | `str` | `"agent"`, `"user"`, `"team"`, or `"tool"` |
Episode
Context manager that records messages and creates memories on exit. Created via Pensyve.episode().
message(role, content)
Record a message in this episode.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `role` | `str` | required | Speaker role (e.g. `"user"`, `"assistant"`) |
| `content` | `str` | required | Message content |
Returns: None
outcome(result)
Set the episode outcome. Affects procedural memory reliability tracking.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `result` | `str` | required | One of `"success"`, `"failure"`, `"partial"` |
Returns: None
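One plausible way an outcome could feed reliability tracking is a running success rate over attempts. The update rule, the 0.5 weight for `"partial"`, and the field names below are assumptions for illustration, not Pensyve's actual model.

```python
# Assumed model: each episode outcome updates a procedural memory's
# running success rate. Counting "partial" as half a success, and the
# dict fields, are invented for this sketch.
OUTCOME_WEIGHT = {"success": 1.0, "partial": 0.5, "failure": 0.0}

def record_outcome(proc: dict, result: str) -> float:
    if result not in OUTCOME_WEIGHT:
        raise ValueError(f"unsupported outcome: {result!r}")
    proc["attempts"] += 1
    proc["wins"] += OUTCOME_WEIGHT[result]
    proc["reliability"] = proc["wins"] / proc["attempts"]
    return proc["reliability"]

proc = {"attempts": 0, "wins": 0.0, "reliability": 0.0}
record_outcome(proc, "success")
record_outcome(proc, "failure")
print(record_outcome(proc, "partial"))  # 1.5 wins over 3 attempts -> 0.5
```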
Memory
A retrieved memory record. Returned by recall() and remember().
| Property | Type | Description |
|---|---|---|
| `id` | `str` | UUID |
| `content` | `str` | Memory text |
| `memory_type` | `str` | `"episodic"`, `"semantic"`, or `"procedural"` |
| `confidence` | `float` | Confidence in [0, 1] |
| `stability` | `float` | FSRS stability in [0, 1] |
| `score` | `float` | Retrieval score from the recall engine |
| `event_time` | `str \| None` | When the described event occurred (ISO 8601). Only set for episodic memories ingested with an explicit `when=` |
SessionGroup
A cluster of memories sharing a source conversation session. Returned by recall_grouped().
| Property | Type | Description |
|---|---|---|
| `session_id` | `str \| None` | Episode UUID, or `None` for semantic / procedural memories with no episode ancestor |
| `session_time` | `str` | Earliest event time across the group's memories (ISO 8601 / RFC 3339) |
| `memories` | `list[Memory]` | Member memories, sorted by event time ascending (conversation order) |
| `group_score` | `float` | Aggregated relevance — max RRF score across the group's members |
```python
groups = p.recall_grouped("dentist appointment", limit=50)
for g in groups:
    if g.session_id is None:
        # Free-floating semantic memory
        print(f"[fact] {g.memories[0].content}")
    else:
        # Conversation session
        print(f"### {g.session_time} ({len(g.memories)} turns)")
        for m in g.memories:
            print(f"  {m.content}")
```