Usage Guide
This guide covers how to use Prompt Alchemy effectively for both command-line and server-based workflows.
Quick Start
Command Line Interface
Generate your first prompt:
prompt-alchemy generate "Create a REST API endpoint for user management"
Start the MCP server for AI agent integration:
prompt-alchemy serve
Command Line Usage
Basic Prompt Generation
# Simple prompt generation
prompt-alchemy generate "Your prompt idea here"
# With specific persona
prompt-alchemy generate "Write a blog post about AI" --persona=writing
# Using specific provider
prompt-alchemy generate "Debug this code" --provider=openai
# Generate multiple variants
prompt-alchemy generate "API documentation" --count=5
Advanced Generation Options
# Use specific phases only
prompt-alchemy generate "Code review" --phases=prima-materia,coagulatio
# Set custom parameters
prompt-alchemy generate "Creative story" --temperature=0.8 --max-tokens=1500
# Add context and tags
prompt-alchemy generate "Database query" --context="PostgreSQL" --tags="sql,database"
# Auto-select best variant
prompt-alchemy generate "Email template" --auto-select
Searching and Retrieval
# Basic text search
prompt-alchemy search "API design"
# Semantic search with embeddings
prompt-alchemy search "user authentication" --semantic
# Filter by various criteria
prompt-alchemy search "code generation" --phase=coagulatio --provider=anthropic
# Filter by date and tags
prompt-alchemy search "database" --since=2024-01-01 --tags="sql,postgres"
Prompt Optimization
# Basic optimization
prompt-alchemy optimize -p "Write code" -t "Generate Python function"
# With specific persona and iterations
prompt-alchemy optimize -p "Create API docs" -t "Document REST endpoints" \
--persona=writing --max-iterations=10
# Use different providers for generation and evaluation
prompt-alchemy optimize -p "Debug code" -t "Find Python bugs" \
--provider=openai --judge-provider=anthropic
Management Commands
# View metrics and reports
prompt-alchemy metrics
# Update prompt metadata
prompt-alchemy update <prompt-id> --tags="new-tag" --notes="Updated notes"
# Delete prompts
prompt-alchemy delete <prompt-id>
# Validate configuration
prompt-alchemy validate
# Test provider connectivity
prompt-alchemy test-providers
Server Mode Usage
MCP Server (AI Agent Integration)
Start the MCP server:
prompt-alchemy serve
The server communicates over stdin/stdout using JSON-RPC. AI agents can connect and use all 16 available tools.
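As a rough sketch, the message an agent writes to the server's stdin to invoke a tool is a JSON-RPC request using the MCP `tools/call` method. The argument names here (`input`, `persona`, `count`) mirror the HTTP API examples later in this guide and are assumptions; check your server's tool schema for the exact fields.

```python
import json

# Minimal sketch of an MCP "tools/call" JSON-RPC request for the
# generate_prompts tool. Argument names are assumptions based on the
# HTTP API examples in this guide.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "generate_prompts",
        "arguments": {
            "input": "Create a REST API endpoint for user management",
            "persona": "code",
            "count": 3,
        },
    },
}

# The MCP stdio transport sends one JSON object per line.
line = json.dumps(request)
print(line)
```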
HTTP REST API Server
Start the HTTP server:
prompt-alchemy http-server
Default configuration:
- Port: 8080
- Host: localhost
- Base Path: /api/v1
API Examples
Generate prompts via HTTP:
curl -X POST http://localhost:8080/api/v1/prompts/generate \
-H "Content-Type: application/json" \
-d '{
"input": "Create a Python function for data validation",
"persona": "code",
"count": 3
}'
Search prompts:
curl -X GET "http://localhost:8080/api/v1/prompts/search?q=API+design&semantic=true"
Get prompt details:
curl -X GET http://localhost:8080/api/v1/prompts/{prompt-id}
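The same generate call can be made from Python instead of curl. This sketch only builds the request (so it runs without a live server); uncomment the last lines to send it against the documented default of localhost:8080.

```python
import json
import urllib.request

# Build a POST to the generate endpoint; payload fields match the curl
# example above.
payload = {
    "input": "Create a Python function for data validation",
    "persona": "code",
    "count": 3,
}
req = urllib.request.Request(
    "http://localhost:8080/api/v1/prompts/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Requires a running HTTP server:
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
```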
MCP Integration
Prompt Alchemy provides 16 MCP tools for AI agent integration:
Generation Tools
- generate_prompts: Create new prompts through the alchemical process
- generate_prompt_variants: Generate multiple variants of a prompt
- optimize_prompt: Optimize existing prompts using meta-prompting
Search & Retrieval Tools
- search_prompts: Text-based prompt search
- semantic_search_prompts: Semantic search using embeddings
- get_prompt_details: Retrieve detailed prompt information
- list_prompts: List prompts with filtering options
Analysis Tools
- analyze_prompt_performance: Analyze prompt effectiveness
- get_prompt_metrics: Retrieve performance metrics
- compare_prompts: Compare multiple prompts
Management Tools
- update_prompt: Update prompt metadata
- delete_prompt: Remove prompts from the database
- export_prompts: Export prompts in various formats
System Tools
- get_system_info: Retrieve system configuration
- validate_configuration: Validate the current setup
- get_provider_status: Check AI provider connectivity
Learning Mode
Enable adaptive learning to improve recommendations:
# Run nightly training manually
prompt-alchemy nightly
# Schedule automated training
prompt-alchemy schedule --time "0 2 * * *" # Daily at 2 AM
# Check learning status
prompt-alchemy metrics --learning
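The --time flag takes a standard five-field cron expression (minute, hour, day of month, month, day of week). A small sketch labeling each field of the "0 2 * * *" example above:

```python
# Map the five cron fields of the schedule example to their names.
fields = dict(zip(
    ["minute", "hour", "day_of_month", "month", "day_of_week"],
    "0 2 * * *".split(),
))
print(fields)  # minute=0, hour=2, every day -> runs daily at 02:00
```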
Batch Processing
Process multiple inputs efficiently:
# From file
prompt-alchemy batch --input-file=prompts.txt --output-file=results.json
# From stdin
printf 'prompt1\nprompt2\nprompt3\n' | prompt-alchemy batch
# With custom settings
prompt-alchemy batch --input-file=ideas.txt --persona=writing --count=3
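Based on the stdin example above, the input file is assumed to hold one prompt per line. A quick sketch of generating such a file:

```python
# Write a prompts.txt for batch mode, one idea per line.
# (The one-prompt-per-line format is inferred from the stdin example above.)
ideas = [
    "Create a REST API endpoint for user management",
    "Write a blog post about AI",
    "Document REST endpoints",
]
with open("prompts.txt", "w") as f:
    f.write("\n".join(ideas) + "\n")
```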
Configuration Management
View and modify configuration:
# Show current config
prompt-alchemy config show
# Set configuration values
prompt-alchemy config set providers.openai.model "o4-mini"
# Validate configuration
prompt-alchemy config validate
# Export configuration
prompt-alchemy config export > config-backup.yaml
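For reference, the dotted path used with `config set` maps to nested YAML keys in the exported file. A minimal sketch of what config-backup.yaml might contain; only the providers.openai.model key is taken from the example above, and any other keys in your export are tool-specific:

```yaml
# Sketch of a config-backup.yaml fragment. The dotted path
# providers.openai.model from "config set" becomes nested keys here.
providers:
  openai:
    model: o4-mini
```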
Best Practices
Prompt Generation
- Be specific: Provide clear, detailed input for better results
- Use personas: Match persona to your use case (code, writing, analysis)
- Leverage phases: Use specific phases when you need particular improvements
- Add context: Include relevant background information
- Use tags: Tag prompts for better organization and searchability
Search and Retrieval
- Use semantic search: For finding conceptually similar prompts
- Combine filters: Use multiple filters for precise results
- Regular cleanup: Delete outdated or ineffective prompts
- Export important prompts: Backup valuable prompts regularly
Server Deployment
- Use Docker: For consistent deployment across environments
- Monitor health: Regular health checks for production servers
- Secure API keys: Use environment variables for sensitive data
- Backup database: Regular backups of your prompt database
Troubleshooting
Common Issues
Configuration errors:
prompt-alchemy validate
prompt-alchemy test-providers
Database issues:
prompt-alchemy migrate
Server connectivity:
prompt-alchemy health --url=http://localhost:8080
Logs and Debugging
Enable debug logging:
prompt-alchemy --log-level=debug generate "test prompt"
View logs:
# Local logs
tail -f ~/.prompt-alchemy/logs/prompt-alchemy.log
# Docker logs
docker-compose logs -f prompt-alchemy
Next Steps
- Read the CLI Reference for complete command documentation
- Explore MCP Integration for AI agent setup
- Review the Architecture to understand the system design
- Check Deployment Guide for production setup