# Retrieval Mechanisms

Reeflect provides sophisticated retrieval mechanisms to find the most relevant memories for any context.
## Retrieval Methods

| Method | Description | Use Cases |
|---|---|---|
| `retrieve()` | Direct retrieval by ID | When you know the exact memory to retrieve |
| `query()` | Filtered query by parameters | Searching by namespace, tags, or other filters |
| `search()` | Semantic similarity search | Finding memories similar to a query text |
| `enhance_prompt()` | Injecting memories into a prompt | Adding context to LLM queries |
| `reason()` | Reasoning using relevant memories | Generating insights from memory corpus |
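The first two methods involve no embeddings at all: `retrieve()` is a direct ID lookup, while `query()` filters on exact attributes. As a rough illustration of that distinction, here is a toy in-memory store — this is a hypothetical sketch, not Reeflect's implementation, and the `Memory` fields are assumed for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    id: str
    text: str
    namespace: str
    tags: list = field(default_factory=list)

class ToyStore:
    """Minimal store illustrating retrieve() (ID lookup) vs. query() (filtered scan)."""

    def __init__(self):
        self._memories = {}

    def store(self, memory):
        self._memories[memory.id] = memory

    def retrieve(self, memory_id):
        # Direct lookup by ID -- no filtering or scoring involved.
        return self._memories.get(memory_id)

    def query(self, namespace=None, tags=None):
        # Exact-attribute filtering -- no semantic matching.
        results = []
        for m in self._memories.values():
            if namespace and m.namespace != namespace:
                continue
            if tags and not set(tags) <= set(m.tags):
                continue
            results.append(m)
        return results
```

`search()`, by contrast, matches on meaning, as described next.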
## Semantic Search

The most powerful retrieval method is semantic search, which finds memories based on meaning rather than exact matches:
```python
from datetime import datetime

# Simple semantic search
results = memory.search(
    query="What are the user's color preferences?",
    namespace="user_preferences",
    limit=5
)

# Advanced semantic search with filters
results = memory.search(
    query="What are the user's food allergies?",
    namespace="user_health",
    filter_params={
        "min_importance": 0.7,
        "tags": ["allergy", "medical"],
        "created_after": datetime(2023, 1, 1)
    },
    limit=10,
    min_similarity=0.7
)
```
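Semantic search of this kind is typically implemented by embedding the query text and ranking stored memory embeddings by cosine similarity, keeping only hits above the `min_similarity` threshold. A minimal self-contained sketch of that ranking step — toy vectors stand in for real embeddings, and this is not Reeflect's internal code:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def semantic_search(query_vec, corpus, limit=5, min_similarity=0.0):
    """Rank (memory_id, embedding) pairs by similarity to the query vector."""
    scored = [(mid, cosine_similarity(query_vec, vec)) for mid, vec in corpus]
    scored = [item for item in scored if item[1] >= min_similarity]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:limit]
```

In a real system the vectors would come from an embedding model rather than being hand-written.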
## Relevance Scoring

Reeflect ranks candidate memories with a relevance score that combines semantic similarity with each memory's importance, recency, and usage:
```python
score = (
    semantic_similarity * 0.6 +
    memory.importance * 0.2 +
    recency_score * 0.1 +
    usage_score * 0.1
)
```
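Written out as a plain function, the formula looks like this — a sketch for illustration, with the default weights exposed as a tuple so they can be overridden:

```python
def relevance_score(semantic_similarity, importance, recency_score, usage_score,
                    weights=(0.6, 0.2, 0.1, 0.1)):
    """Combine the four retrieval factors into one relevance score.

    Each input is expected to be normalized to [0, 1], so the weighted
    sum also stays in [0, 1] when the weights sum to 1.
    """
    w_sim, w_imp, w_rec, w_use = weights
    return (semantic_similarity * w_sim
            + importance * w_imp
            + recency_score * w_rec
            + usage_score * w_use)
```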
You can customize the relevance weights to prioritize different factors:
```python
from reeflect.core.retrieval import RelevanceScorer

# Rerank search results with custom weights
custom_results = RelevanceScorer.rerank_results(
    results,
    weights={
        "similarity": 0.4,
        "importance": 0.3,
        "recency": 0.2,
        "usage": 0.1
    }
)
```
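Reranking itself is just a re-sort under the new weights. The following self-contained sketch shows the idea, assuming each result carries its per-factor scores — the dict shape here is hypothetical, not `RelevanceScorer`'s actual result format:

```python
def rerank(results, weights):
    """Re-sort results by a weighted sum of their per-factor scores.

    results: list of dicts with a numeric score for each key in `weights`.
    weights: mapping of factor name -> weight, e.g. {"similarity": 0.4, ...}.
    """
    def combined(result):
        return sum(weights[factor] * result[factor] for factor in weights)
    return sorted(results, key=combined, reverse=True)
```

Raising the `importance` weight, for example, can promote a highly important memory above one that is only slightly more similar to the query.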
## Next Steps

Now that you understand how to retrieve memories, learn about Model Adapters to see how Reeflect integrates with various LLM providers.