Python SDK

Native Python bindings via PyO3. Import the package with import pensyve.

pip install pensyve

The Python SDK is a compiled Rust extension. It runs locally with no server required.


Pensyve

Main entry point for the memory runtime.

Pensyve(path?, namespace?)

Create or open a Pensyve instance.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| path | str \| None | ~/.pensyve/default | Directory for storage files |
| namespace | str \| None | "default" | Namespace name |

from pensyve import Pensyve

p = Pensyve()
p = Pensyve(path="/tmp/mydata", namespace="project-x")

entity(name, kind?)

Get or create an entity.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| name | str | required | Entity name |
| kind | str | "user" | One of "agent", "user", "team", "tool" |

Returns: Entity

user = p.entity("alice")
agent = p.entity("my-agent", kind="agent")

episode(*participants)

Create an episode context manager. Episodes record messages and produce memories on exit.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| *participants | Entity | required | Entities participating in this episode |

Returns: Episode (context manager)

user = p.entity("alice")
agent = p.entity("my-agent", kind="agent")

with p.episode(user, agent) as ep:
    ep.message("user", "What's the status of project X?")
    ep.message("assistant", "Project X is on track for Q2 launch.")
    ep.outcome("success")
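The lifecycle above (buffer messages, finalize on exit) can be illustrated with a minimal pure-Python sketch. EpisodeSketch and its `flushed` flag are illustrative stand-ins, not the SDK's internals; the real Episode extracts memories where `_flush` is called here:

```python
class EpisodeSketch:
    """Illustrative stand-in for Episode: buffers messages, finalizes on exit."""

    def __init__(self, *participants):
        self.participants = participants
        self.messages = []
        self.result = None
        self.flushed = False

    def message(self, role, content):
        self.messages.append((role, content))

    def outcome(self, result):
        if result not in ("success", "failure", "partial"):
            raise ValueError(f"unsupported outcome: {result!r}")
        self.result = result

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self._flush()
        return False  # never swallow exceptions from the with-block

    def _flush(self):
        # In the real SDK this is where memories would be produced.
        self.flushed = True


with EpisodeSketch("alice", "my-agent") as ep:
    ep.message("user", "What's the status of project X?")
    ep.outcome("success")

print(ep.flushed, len(ep.messages), ep.result)
```

The point of the context-manager design is that memory extraction happens exactly once, at block exit, even if the caller records messages incrementally.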

recall(query, entity?, limit?, types?)

Search memories matching a query. Fuses vector, BM25, graph, recency, and other signals.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| query | str | required | Search query |
| entity | Entity \| None | None | Filter to a specific entity |
| limit | int | 5 | Max results |
| types | list[str] \| None | None | Filter by memory type: "episodic", "semantic", "procedural" |

Returns: list[Memory]

memories = p.recall("project X deadline")
memories = p.recall("deployment steps", entity=agent, limit=10, types=["procedural"])
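One common way to fuse heterogeneous signals like these is reciprocal-rank fusion (RRF), which recall_grouped() also references. The sketch below shows the general RRF technique over per-signal rankings; the signal names, the constant k=60, and the equal weighting are illustrative assumptions, not Pensyve's actual tuning:

```python
from collections import defaultdict


def rrf_fuse(rankings, k=60):
    """Fuse several ranked lists of memory ids via reciprocal-rank fusion.

    Each ranking lists ids best-first; score(id) = sum over lists of 1 / (k + rank).
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, mem_id in enumerate(ranking, start=1):
            scores[mem_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# Hypothetical per-signal rankings for one query.
vector = ["m1", "m2", "m3"]
bm25 = ["m2", "m1", "m4"]
recency = ["m4", "m2", "m1"]

fused = rrf_fuse([vector, bm25, recency])
print(fused[0])  # m2: ranked near the top by all three signals
```

Rank-based fusion like this sidesteps the problem that vector similarities, BM25 scores, and recency weights live on incompatible scales.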

recall_grouped(query, *, limit?, order?, max_groups?)

Recall memories matching a query, clustered by source session.

Runs the same RRF fusion pipeline as recall(), then groups the top limit memories by episode_id. Memories from the same session cluster into a single SessionGroup sorted in conversation order; semantic and procedural memories appear as singleton groups with session_id=None.

This is the canonical entry point for "memory as input to an LLM reader" workflows — internal benchmarking on LongMemEval_S confirmed that session-grouped recall produces materially better reader accuracy than flat recall.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| query | str | required | Search query |
| limit | int | 50 | Max memories to consider across all groups |
| order | Literal["chronological", "relevance"] | "chronological" | Group ordering: oldest session first, or highest-scoring session first |
| max_groups | int \| None | None | Optional cap on the number of returned groups |

Returns: list[SessionGroup]

Raises: ValueError if order is not one of the supported values.

groups = p.recall_grouped("How many books did I buy this year?", limit=50)

for g in groups:
    print(f"### Session {g.session_id} ({g.session_time}):")
    for m in g.memories:
        print(f"  {m.content}")

Feed the groups directly to a reader prompt — no manual OrderedDict clustering or date-string reordering required.
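The grouping step itself is straightforward to sketch: bucket fused results by episode_id, treating non-episodic memories as singletons. This pure-Python illustration mirrors the documented SessionGroup fields (session_id, session_time, memories, group_score) but is not the SDK's implementation:

```python
def group_by_session(memories):
    """Cluster memory dicts by episode_id; episode_id=None yields singletons.

    Each memory dict: {"episode_id", "event_time", "content", "score"}.
    Groups come back oldest-session-first; members in conversation order.
    """
    buckets = {}
    for m in memories:
        # A unique key per memory keeps non-episodic memories as singletons.
        key = m["episode_id"] if m["episode_id"] is not None else id(m)
        buckets.setdefault(key, []).append(m)

    groups = []
    for members in buckets.values():
        members.sort(key=lambda m: m["event_time"])  # conversation order
        groups.append({
            "session_id": members[0]["episode_id"],
            "session_time": members[0]["event_time"],  # earliest event time
            "memories": members,
            "group_score": max(m["score"] for m in members),
        })
    groups.sort(key=lambda g: g["session_time"])  # chronological ordering
    return groups


mems = [
    {"episode_id": "ep1", "event_time": "2024-03-02T10:00:00Z", "content": "b", "score": 0.4},
    {"episode_id": None,  "event_time": "2024-01-01T00:00:00Z", "content": "fact", "score": 0.9},
    {"episode_id": "ep1", "event_time": "2024-03-02T09:00:00Z", "content": "a", "score": 0.7},
]
groups = group_by_session(mems)
print([g["session_id"] for g in groups])  # [None, 'ep1']
```

Sorting members before groups ensures each group's session_time reflects its earliest event, which is what the chronological ordering keys on.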


remember(entity, fact, confidence?)

Store an explicit semantic memory.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| entity | Entity | required | Entity this fact is about |
| fact | str | required | The fact to store |
| confidence | float | 0.8 | Confidence in [0, 1] |

Returns: Memory

m = p.remember(user, "Alice prefers dark mode")
m = p.remember(user, "Alice is on the platform team", confidence=0.95)

forget(entity, hard_delete?)

Archive or permanently delete all memories about an entity.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| entity | Entity | required | Target entity |
| hard_delete | bool | False | Permanently delete instead of archiving |

Returns: dict[str, int] with key forgotten_count

result = p.forget(user)
result = p.forget(user, hard_delete=True)

consolidate()

Run background consolidation: promotes repeated episodic memories to semantic, applies FSRS decay, and archives memories below threshold.

Returns: dict[str, int] with keys promoted, decayed, archived

stats = p.consolidate()
# {'promoted': 3, 'decayed': 12, 'archived': 1}
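To make the three passes concrete, here is a toy version of the same shape (promote, decay, archive) using an FSRS-style retrievability curve. The formula, thresholds, and field names below are illustrative assumptions, not Pensyve's actual tuning:

```python
def consolidate_sketch(memories, days=30, promote_after=3, archive_below=0.2):
    """One toy consolidation pass over memory dicts.

    Retrievability follows the FSRS-style curve R = (1 + t / (9 * S)) ** -1,
    with t in elapsed days and stability S expressed here in days.
    """
    stats = {"promoted": 0, "decayed": 0, "archived": 0}
    for m in memories:
        if m["memory_type"] == "episodic" and m["repetitions"] >= promote_after:
            m["memory_type"] = "semantic"  # repeated episodic -> semantic
            stats["promoted"] += 1
            continue
        retrievability = (1 + days / (9 * m["stability_days"])) ** -1
        m["confidence"] *= retrievability  # apply decay
        stats["decayed"] += 1
        if m["confidence"] < archive_below:
            m["archived"] = True  # below threshold -> archive
            stats["archived"] += 1
    return stats


mems = [
    {"memory_type": "episodic", "repetitions": 4, "confidence": 0.8, "stability_days": 10},
    {"memory_type": "episodic", "repetitions": 1, "confidence": 0.8, "stability_days": 30},
    {"memory_type": "episodic", "repetitions": 1, "confidence": 0.3, "stability_days": 2},
]
stats = consolidate_sketch(mems)
print(stats)  # {'promoted': 1, 'decayed': 2, 'archived': 1}
```

High-stability memories decay slowly (the second memory keeps most of its confidence), while low-stability, low-confidence ones fall under the archive threshold.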

Entity

Represents an entity (agent, user, team, or tool). Created via Pensyve.entity().

| Property | Type | Description |
| --- | --- | --- |
| id | str | UUID |
| name | str | Entity name |
| kind | str | "agent", "user", "team", or "tool" |

Episode

Context manager that records messages and creates memories on exit. Created via Pensyve.episode().

message(role, content)

Record a message in this episode.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| role | str | required | Speaker role (e.g. "user", "assistant") |
| content | str | required | Message content |

Returns: None


outcome(result)

Set the episode outcome. Affects procedural memory reliability tracking.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| result | str | required | One of "success", "failure", "partial" |

Returns: None


Memory

A retrieved memory record. Returned by recall() and remember().

| Property | Type | Description |
| --- | --- | --- |
| id | str | UUID |
| content | str | Memory text |
| memory_type | str | "episodic", "semantic", or "procedural" |
| confidence | float | Confidence in [0, 1] |
| stability | float | FSRS stability in [0, 1] |
| score | float | Retrieval score from the recall engine |
| event_time | str \| None | When the described event occurred (ISO 8601). Only set for episodic memories that were ingested with an explicit when= |

SessionGroup

A cluster of memories sharing a source conversation session. Returned by recall_grouped().

| Property | Type | Description |
| --- | --- | --- |
| session_id | str \| None | Episode UUID, or None for semantic / procedural memories with no episode ancestor |
| session_time | str | Earliest event time across the group's memories (ISO 8601 / RFC 3339) |
| memories | list[Memory] | Member memories, sorted by event time ascending (conversation order) |
| group_score | float | Aggregated relevance: max RRF score across the group's members |

groups = p.recall_grouped("dentist appointment", limit=50)
for g in groups:
    if g.session_id is None:
        # Free-floating semantic memory
        print(f"[fact] {g.memories[0].content}")
    else:
        # Conversation session
        print(f"### {g.session_time} ({len(g.memories)} turns)")
        for m in g.memories:
            print(f"  {m.content}")