# Migration Guide
If you're currently using another memory system, this guide will help you migrate to Reeflect while preserving your existing data and integrations.
## Migrating from LangChain Memory
Reeflect provides tools to smoothly migrate from LangChain's memory components:
```python
from reeflect.migration import LangChainMigrator
from langchain.memory import ConversationBufferMemory, VectorStoreRetrieverMemory

# Initialize migrator
migrator = LangChainMigrator()

# Migrate ConversationBufferMemory
buffer_memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True
)
# ... assume this has been used in conversations

# Migrate to Reeflect
migration_result = migrator.migrate_buffer_memory(
    langchain_memory=buffer_memory,
    reeflect_memory=memory_system,
    target_namespace="migrated_conversations",
    user_id="user_123"
)

print(f"Migrated {migration_result['count']} memories")

# Migrate VectorStoreRetrieverMemory
vector_memory = VectorStoreRetrieverMemory(
    retriever=vector_store.as_retriever()
)
# ... assume this has been used in conversations

# Migrate to Reeflect
migration_result = migrator.migrate_vector_memory(
    langchain_memory=vector_memory,
    reeflect_memory=memory_system,
    target_namespace="migrated_knowledge",
    regenerate_embeddings=True
)
```
## Migrating from Custom Solutions
For custom memory implementations, you can use the generic data import tools:
```python
from reeflect.migration import DataImporter

# Initialize importer
importer = DataImporter(memory_system)

# Import from JSON
import_result = importer.import_from_json(
    file_path="./custom_memory_export.json",
    mapping={
        "content_field": "text",
        "namespace_field": "category",
        "importance_field": "priority",
        "metadata_fields": ["source", "timestamp", "author"],
        "relation_field": "connections"
    },
    target_namespace="imported_memories",
    default_importance=0.7
)

# Import from CSV
import_result = importer.import_from_csv(
    file_path="./memory_data.csv",
    mapping={
        "content_column": "Memory Content",
        "namespace_column": "Category",
        "importance_column": "Priority Score",
        "metadata_columns": ["Source", "Created At", "Tags"]
    },
    default_namespace="imported_csv",
    generate_embeddings=True
)

# Import from database
import_result = importer.import_from_database(
    connection_string="postgresql://user:password@localhost:5432/memory_db",
    query="SELECT content, category, importance, created_at FROM memories",
    mapping={
        "content_column": "content",
        "namespace_column": "category",
        "importance_column": "importance",
        "metadata_columns": ["created_at"]
    },
    batch_size=1000
)
```
## Incremental Migration
For large-scale migrations, you can use the incremental migration approach:
```python
from reeflect.migration import IncrementalMigrator
import time

# Initialize incremental migrator
migrator = IncrementalMigrator(
    source_system="custom",
    target_system=memory_system,
    batch_size=100,
    throttle_delay=1.0  # seconds between batches
)

# Start incremental migration
migration = migrator.start_migration(
    source_config={
        "connection_string": "postgresql://user:password@localhost:5432/memory_db",
        "table": "memories",
        "id_column": "memory_id",
        "content_column": "content",
        "mapping": {...}
    },
    namespace="incremental_migration",
    total_estimated_records=10000
)

# Monitor migration progress
while not migration.is_complete():
    status = migration.get_status()
    print(f"Migrated {status['processed_records']} of {status['total_records']} records")
    print(f"Progress: {status['progress_percentage']}%")
    print(f"Errors: {status['error_count']}")
    time.sleep(5)  # Check every 5 seconds

# Get final migration report
report = migration.generate_report()
print(f"Migration completed in {report['duration_seconds']} seconds")
print(f"Successfully migrated: {report['success_count']} records")
print(f"Failed to migrate: {report['error_count']} records")
```
## Post-Migration Verification
After migration, it's important to verify data integrity and retrieval performance:
```python
from reeflect.migration import MigrationVerifier

# Initialize verifier
verifier = MigrationVerifier(memory_system)

# Verify migration integrity
verification = verifier.verify_migration(
    migration_id=migration.id,
    verification_methods=[
        "count_match",        # Verify record counts match
        "content_sampling",   # Sample and compare content
        "retrieval_quality",  # Test retrieval quality
        "embedding_quality"   # Verify embedding quality
    ],
    sample_size=100  # Number of records to sample for verification
)

# Get verification report
if verification.is_successful():
    print("Migration verification successful!")
    print(f"Confidence score: {verification.confidence_score}")
else:
    print("Migration verification identified issues:")
    for issue in verification.issues:
        print(f"- {issue['type']}: {issue['description']}")
        print(f"  Affected records: {issue['affected_records_count']}")
```
## Migration Strategies
### Phased Migration
For minimal disruption, consider a phased migration approach:
1. **Phase 1: Read-Only Integration**
   - Set up Reeflect alongside your existing system
   - Implement a dual-read pattern (read from both systems)
   - Verify query results match between systems
2. **Phase 2: Dual Write**
   - Begin writing to both systems
   - Validate data consistency across systems
   - Implement reconciliation for any discrepancies
3. **Phase 3: Cutover**
   - Switch primary reads to Reeflect
   - Validate performance and correctness
   - Gradually deprecate the old system
4. **Phase 4: Decommission**
   - Complete the final data migration
   - Redirect all traffic to Reeflect
   - Decommission the old system
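The dual-write step (Phase 2) can be sketched as follows. `DualWriteStore` and `InMemoryStore` are hypothetical stand-ins, not part of the Reeflect API: every write lands in both systems, reads stay on the legacy system until cutover, and a simple reconciliation pass flags records present in only one store.

```python
class InMemoryStore:
    """Stand-in for either the legacy system or Reeflect."""
    def __init__(self):
        self.records = {}

    def write(self, record_id, content):
        self.records[record_id] = content

    def read(self, record_id):
        return self.records.get(record_id)


class DualWriteStore:
    """Phase 2 wrapper: write to both systems, read from legacy."""
    def __init__(self, legacy, reeflect):
        self.legacy = legacy
        self.reeflect = reeflect

    def write(self, record_id, content):
        # Every write lands in both systems during the dual-write phase
        self.legacy.write(record_id, content)
        self.reeflect.write(record_id, content)

    def read(self, record_id):
        # Primary reads stay on the legacy system until Phase 3 cutover
        return self.legacy.read(record_id)

    def reconcile(self):
        """Return record IDs that exist in only one of the two systems."""
        return set(self.legacy.records) ^ set(self.reeflect.records)


store = DualWriteStore(InMemoryStore(), InMemoryStore())
store.write("m1", "user prefers dark mode")
print(store.read("m1"))   # user prefers dark mode
print(store.reconcile())  # set() -> the two systems are consistent
```

At cutover, flipping `read` to the Reeflect side is a one-line change, and `reconcile` gives a cheap consistency check to run before and after the switch.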
### API Bridge Pattern
For minimal code changes, consider implementing an API bridge:
```python
class LegacyMemoryAPIBridge:
    """Bridge between a legacy memory API and Reeflect."""

    def __init__(self, reeflect_memory, namespace):
        self.reeflect_memory = reeflect_memory
        self.namespace = namespace

    # Legacy API method
    def remember(self, key, value, importance=0.5):
        """Store a key-value memory."""
        content = f"{key}: {value}"
        self.reeflect_memory.create(
            content=content,
            namespace=self.namespace,
            importance=importance,
            metadata={"legacy_key": key}
        )

    # Legacy API method
    def recall(self, key=None, query=None, limit=5):
        """Retrieve memories by key or query."""
        if key:
            # Key-based lookup
            results = self.reeflect_memory.query(
                filter_params={"metadata.legacy_key": key},
                namespace=self.namespace
            )
            return [r.content.split(": ", 1)[1] for r in results]
        elif query:
            # Semantic search
            results = self.reeflect_memory.search(
                query=query,
                namespace=self.namespace,
                limit=limit
            )
            return [r[0].content for r in results]
        return []  # neither key nor query supplied
```
## Best Practices
Always test migrations in a staging environment before performing them in production. Consider running both systems in parallel for a short period to ensure the migration was successful before decommissioning the old system.
- Back Up Everything: Create full backups of your existing memory data before migration
- Start Small: Begin with non-critical datasets before migrating mission-critical data
- Validate Thoroughly: Implement comprehensive validation for migrated data
- Monitor Closely: Keep a close eye on system performance during and after migration
- Have a Rollback Plan: Maintain the ability to revert to the previous system if needed
## Next Steps
Once your migration is complete, explore these resources to make the most of your new Reeflect implementation:
- Basic Concepts - Understand the fundamental Reeflect concepts
- Enterprise Setup - Configure Reeflect for enterprise use
- Memory Analytics - Analyze your memory system performance