Memory Reasoning

Reeflect goes beyond simple memory retrieval to enable sophisticated reasoning based on stored memories. This allows AI systems to draw connections, make inferences, and provide more contextualized responses.

How Memory Reasoning Works

The memory reasoning system works through a multi-step process, sketched in code after the list:

  1. Retrieval
    The system first retrieves a set of memories relevant to the query using semantic search.

  2. Relationship Analysis
    The system analyzes relationships between the retrieved memories to understand how they connect.

  3. Inference Generation
    Based on these relationships, the system generates inferences and conclusions that may not be explicitly stated in any single memory.

  4. Response Formulation
    The system formulates a response that incorporates both the directly relevant memories and the inferences drawn from them.

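For orientation, the sketch below shows how these four stages hand off to one another. It is purely illustrative: the function name, the dict-based memory format, and the keyword-overlap stand-in for semantic search are assumptions made for this example, not Reeflect internals or API.

# Conceptual sketch of the four stages above. This is NOT the Reeflect
# implementation; memories are plain dicts and "semantic search" is a
# trivial keyword overlap, purely to show how the stages connect.
def reason_over(query: str, memories: list[dict], max_memories: int = 5) -> str:
    words = set(query.lower().split())

    # 1. Retrieval: pick the memories most relevant to the query.
    retrieved = sorted(
        memories,
        key=lambda m: len(words & set(m["text"].lower().split())),
        reverse=True,
    )[:max_memories]

    # 2. Relationship analysis: find retrieved memories that share a tag.
    related = [
        (a, b)
        for i, a in enumerate(retrieved)
        for b in retrieved[i + 1:]
        if set(a["tags"]) & set(b["tags"])
    ]

    # 3. Inference generation: combine related memories into statements
    #    that neither memory makes on its own.
    inferences = [
        f"'{a['text']}' and '{b['text']}' point in the same direction"
        for a, b in related
    ]

    # 4. Response formulation: merge the direct evidence with the inferences.
    evidence = "; ".join(m["text"] for m in retrieved)
    drawn = "; ".join(inferences) if inferences else "none"
    return f"Evidence: {evidence}. Inferences: {drawn}."
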
Using Memory Reasoning

You can use memory reasoning in your application with the reason method:

# Basic memory reasoning
reasoning_result = memory.reason(
    query="What kind of dashboard theme would the user prefer?",
    namespace="user_preferences",
    max_memories=5
)

# Advanced memory reasoning with custom instruction
reasoning_result = memory.reason(
    query="Based on the user's past food choices, what restaurant should I recommend?",
    namespace="user_behavior",
    filter_params={
        "tags": ["food", "preferences"],
        "min_importance": 0.6
    },
    max_memories=10,
    instruction="Focus on dietary restrictions and favorite cuisines when making your recommendation."
)

print(reasoning_result)

Memory Chain of Thought

For more transparency, you can enable chain-of-thought reasoning, which exposes the step-by-step process:

from reeflect.intelligence.reasoning import ChainOfThoughtReasoner

# Create a chain-of-thought reasoner
cot_reasoner = ChainOfThoughtReasoner(memory_system)

# Get detailed reasoning steps
reasoning_result = cot_reasoner.reason(
    query="What are the user's communication preferences?",
    namespace="user_behavior",
    max_memories=7,
    return_steps=True
)

# Display the reasoning steps
for i, step in enumerate(reasoning_result["steps"]):
    print(f"Step {i+1}: {step['thought']}")
    print(f"  Using memories: {[m.id for m in step['memories']]}")
    print(f"  Conclusion: {step['conclusion']}")

# Get the final answer
print(f"\nFinal answer: {reasoning_result['answer']}")

Best Practice

For critical applications, always enable chain-of-thought reasoning with return_steps=True to verify the logic behind the system's conclusions. This helps ensure the reasoning is sound and based on accurate memories.
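
If you want to automate part of that review, a small helper can scan the returned steps for obvious gaps. The result fields used below (steps, memories, conclusion) come from the chain-of-thought example above; the "flag steps that cite no memories" rule is only an illustrative sanity check, not a built-in Reeflect feature.

# Minimal review helper built on the result shape shown above.
# Flagging steps with no supporting memories is just one example of a
# check you might apply before trusting the final answer.
def review_reasoning(reasoning_result) -> list[str]:
    warnings = []
    for i, step in enumerate(reasoning_result["steps"], start=1):
        if not step["memories"]:
            warnings.append(f"Step {i} cites no memories: {step['conclusion']}")
    return warnings

# Apply it to the chain-of-thought result from the previous example
for warning in review_reasoning(reasoning_result):
    print(f"WARNING: {warning}")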

Next Steps

Explore Contradiction Detection to learn how Reeflect identifies and resolves conflicts between memories to maintain consistency.