ConversationEntityMemory

1. What is ConversationEntityMemory?

ConversationEntityMemory extracts and remembers entities (people, places, organizations, objects) from a conversation and stores facts about them.

Instead of remembering full chat text, it remembers structured facts.


2. Why does it exist?

Buffer-based memories:

  • Store raw text

  • Lose important facts once older messages are trimmed

  • Waste tokens

Entity memory answers questions like:

  • “Who is John?”

  • “Where does he live?”

  • “What company does she work for?”

In short:

Remember important facts, not full conversations.


3. Real-world analogy

Talking to someone who:

  • Doesn’t remember everything you said

  • But remembers who people are and key facts about them

That’s entity memory.


4. Minimal working example (Gemini)
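A minimal sketch, assuming the `langchain` and `langchain-google-genai` packages are installed and a `GOOGLE_API_KEY` is set in the environment; the model name is only an example, and any Gemini chat model should work.

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationEntityMemory
from langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE
from langchain_google_genai import ChatGoogleGenerativeAI

# Any Gemini chat model works; this model name is just an example.
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0)

conversation = ConversationChain(
    llm=llm,
    prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,  # prompt with an {entities} slot
    memory=ConversationEntityMemory(llm=llm),    # the same LLM extracts the facts
)

conversation.predict(
    input="John is my colleague. He lives in Berlin and works at Acme Corp."
)
print(conversation.predict(input="Where does John live?"))
# -> something like "John lives in Berlin."  (illustrative)
```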


5. What does it store internally?

Instead of text, it stores entity → facts:

Example:
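Continuing the sketch above, the stored facts can be inspected through the memory's entity store; the exact wording of each fact is generated by the LLM, so the values shown here are illustrative.

```python
# Peek at the entity -> facts mapping built up so far.
print(conversation.memory.entity_store.store)
# {
#     "John": "John is a colleague who lives in Berlin and works at Acme Corp.",
#     "Acme Corp": "Acme Corp is the company John works for.",
# }
```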


6. How does it work internally? (Important)

Each turn:

  1. LLM extracts entities

  2. LLM updates facts about entities

  3. Stored facts are injected into the next prompt

So memory = an LLM-maintained knowledge base (sketched below)
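A rough sketch of one turn at the memory level, using the same hypothetical Gemini setup as above; the printed values are illustrative, and the exact split of work between loading and saving may vary across LangChain versions.

```python
from langchain.memory import ConversationEntityMemory
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0)
memory = ConversationEntityMemory(llm=llm)

turn_input = "Maria works for Globex in Madrid."

# Loading memory variables runs entity extraction on the new input and
# returns the facts already stored for those entities, ready to be
# injected into the prompt.
print(memory.load_memory_variables({"input": turn_input}))
# -> {'history': '', 'entities': {'Maria': '', 'Globex': ''}}  (illustrative)

# Saving the turn asks the LLM to update the stored facts for the
# entities it just extracted.
memory.save_context({"input": turn_input}, {"output": "Noted."})

print(memory.entity_store.store)
# -> {'Maria': 'Maria works for Globex in Madrid.', ...}  (illustrative)
```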


7. Key characteristics

| Feature | Entity Memory |
| --- | --- |
| Stores | Facts |
| Format | Text summaries |
| Token usage | Low |
| Long-term facts | Yes |
| Full conversation recall | No |


8. Comparison with other memories

| Memory Type | Remembers |
| --- | --- |
| Buffer | Full chat |
| Window | Recent chat |
| Token buffer | Token-limited chat |
| Entity | Facts about entities |


9. Common mistakes

❌ Expecting it to remember conversation flow
❌ Using it for emotional context
❌ Using it without an LLM

Entity memory requires an LLM to extract facts.


10. When should you use it?

Use ConversationEntityMemory when:

  • You care about names, places, and roles

  • You need long-term factual memory

  • You want low token usage

Avoid when:

  • You need verbatim chat history

  • You need short-term conversational nuance
