Model Adapters
Reeflect uses adapters to integrate with different LLM providers, making it easy to switch between models or use multiple models in the same application.
Available Adapters
OpenAI Adapter (OpenAIAdapter): integrates with GPT models and embeddings.
Anthropic Adapter (AnthropicAdapter): integrates with Claude models and embeddings.
Mistral Adapter (MistralAdapter): integrates with Mistral AI models and embeddings.
Llama Adapter (LlamaAdapter): integrates with local Llama models.
Adapter Configuration
# OpenAI Adapter
from reeflect.adapters.openai import OpenAIAdapter

openai_adapter = OpenAIAdapter(
    api_key="your_openai_api_key",
    embedding_model="text-embedding-3-small",
    completion_model="gpt-4-turbo-preview",
    max_tokens_per_memory=250
)
# Anthropic Adapter
from reeflect.adapters.anthropic import AnthropicAdapter

anthropic_adapter = AnthropicAdapter(
    api_key="your_anthropic_api_key",
    embedding_model="claude-3-sonnet-20240229",
    completion_model="claude-3-opus-20240229",
    max_tokens_per_memory=300
)
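Once configured, an adapter is passed to the Reeflect constructor, so switching providers is a one-line change. The snippet below is a minimal sketch that reuses the adapters configured above; it assumes Reeflect is importable from the top-level reeflect package, and the storage_config contents are placeholders covered elsewhere in the documentation.

from reeflect import Reeflect

# Swap providers by changing only the adapter argument
memory = Reeflect(
    adapter=openai_adapter,  # or anthropic_adapter, mistral_adapter, ...
    storage_config={...}  # placeholder; configure storage as needed
)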
Custom Adapter Implementation
You can implement custom adapters for other LLM providers by extending the BaseMemoryAdapter class:
from reeflect.adapters.base import BaseMemoryAdapter
from reeflect.core.memory import Memory
from typing import Any, Dict, List, Optional

class CustomAdapter(BaseMemoryAdapter):
    """Custom adapter for your LLM provider."""

    def __init__(self, api_key: str, **kwargs):
        self.api_key = api_key
        self.client = initialize_your_client(api_key)  # placeholder: create your provider's SDK client here

    def generate_embedding(self, text: str) -> List[float]:
        # Implement embedding generation
        pass

    def batch_generate_embeddings(self, texts: List[str]) -> List[List[float]]:
        # Implement batch embedding generation
        pass

    def inject_memories_to_prompt(
        self,
        prompt: str,
        memories: List[Memory],
        max_tokens: int = 1000
    ) -> str:
        # Implement memory injection
        pass

    # Implement other required methods...

# Use your custom adapter
memory = Reeflect(
    adapter=CustomAdapter(api_key="your_api_key"),
    storage_config={...}
)
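The stubbed methods above are where the provider-specific work happens. As an illustration, the sketch below shows one possible body for inject_memories_to_prompt. It is not the reference implementation: it assumes each Memory exposes a content string attribute and approximates token counts by whitespace word count, so adjust both to your Memory schema and tokenizer.

from typing import List

from reeflect.core.memory import Memory

def inject_memories_to_prompt(prompt: str, memories: List[Memory], max_tokens: int = 1000) -> str:
    """Prepend as many memories as fit within max_tokens to the prompt."""
    lines = []
    used_tokens = 0
    for memory in memories:
        text = memory.content  # assumption: Memory stores its text in a `content` attribute
        cost = len(text.split())  # crude token estimate; use a real tokenizer in practice
        if used_tokens + cost > max_tokens:
            break
        lines.append(f"- {text}")
        used_tokens += cost
    if not lines:
        return prompt
    return "Relevant memories:\n" + "\n".join(lines) + "\n\n" + prompt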
Next Steps
Now that you understand how to connect Reeflect to different LLM providers, explore Memory Hierarchies to learn how the system organizes different types of memories.