llama-prompt-ops is a Python package that automatically optimizes prompts for Llama models. It transforms prompts that work well with other LLMs into prompts tuned for Llama, improving performance and reliability.
Key Benefits:
- No More Trial and Error: Stop manually tweaking prompts to get better results
- Fast Optimization: Get Llama-optimized prompts in minutes with template-based optimization
- Data-Driven Improvements: Use your own examples to create prompts that work for your specific use case
- Measurable Results: Evaluate prompt performance with customizable metrics
To get started with llama-prompt-ops, you'll need:
- Existing System Prompt: Your existing system prompt that you want to optimize
- Existing Query-Response Dataset: A JSON file containing query-response pairs (as few as 50 examples) for evaluation and optimization (see prepare your dataset below)
- Configuration File: A YAML configuration file (config.yaml) specifying the model, hyperparameters, and optimization details (see example configuration)
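For orientation, the configuration file is what ties the prompt, dataset, and model together. The sketch below is illustrative only; the key names are assumptions rather than the confirmed schema, so rely on the example configuration in the repository for the real layout:

```yaml
# Illustrative sketch only; key names are assumptions, not the confirmed schema
system_prompt:
  file: prompts/existing_prompt.txt   # the prompt you want to optimize
dataset:
  path: data/dataset.json             # your query-response pairs
model:
  name: openrouter/meta-llama/llama-3.3-70b-instruct
```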
```
┌──────────────────────────┐   ┌──────────────────────────┐   ┌────────────────────┐
│  Existing System Prompt  │   │  Query-Response Dataset  │   │ YAML Configuration │
└────────────┬─────────────┘   └─────────────┬────────────┘   └───────────┬────────┘
             │                               │                            │
             │                               │                            │
             ▼                               ▼                            ▼
┌──────────────────────────────────────────────────────────────────────────────────┐
│                             llama-prompt-ops migrate                             │
└──────────────────────────────────────────────────────────────────────────────────┘
                                         │
                                         │
                                         ▼
                              ┌──────────────────────┐
                              │   Optimized Prompt   │
                              └──────────────────────┘
```
- Start with your existing system prompt: Take the system prompt that already works with other LLMs (see example prompt)
- Prepare your dataset: Create a JSON file with query-response pairs for evaluation and optimization
- Configure optimization: Set up a simple YAML file with your dataset and preferences (see example configuration)
- Run optimization: Execute a single command to transform your prompt
- Get results: Receive a Llama-optimized prompt with performance metrics
*(Figure: benchmark results comparing original and Llama-optimized prompts.)*
These results were measured on the HotpotQA multi-hop reasoning benchmark, which tests a model's ability to answer complex questions requiring information from multiple sources. Our optimized prompts showed substantial improvements over baseline prompts across different model sizes.
```bash
# Create a virtual environment
conda create -n prompt-ops python=3.10
conda activate prompt-ops

# Install from PyPI
pip install llama-prompt-ops

# OR install from source
git clone https://github.com/meta-llama/llama-prompt-ops.git
cd llama-prompt-ops
pip install -e .
```
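A quick smoke test can confirm the install. The `--help` flag is an assumption here, though the CLI's subcommands (`create`, `migrate`, used below) suggest standard behavior:

```bash
pip show llama-prompt-ops   # confirms the package is installed
llama-prompt-ops --help     # should list subcommands such as create and migrate
```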
Create a sample project in the current folder:

```bash
llama-prompt-ops create my-project
cd my-project
```

This creates a directory called `my-project` with a sample configuration and dataset.
Add your API key to the `.env` file:

```
OPENROUTER_API_KEY=your_key_here
```
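If you would rather not keep the key in a file, exporting it in your shell should work the same way, assuming the tool also reads it from the environment as dotenv-based setups typically do:

```bash
export OPENROUTER_API_KEY=your_key_here
```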
You can get an OpenRouter API key by creating an account at OpenRouter. For more inference provider options, see Inference Providers.
Run the optimization (it takes about 5 minutes):

```bash
llama-prompt-ops migrate  # defaults to config.yaml if --config is not specified
```
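If your configuration file lives somewhere else, pass it explicitly via the `--config` flag mentioned above; the path below is just a placeholder:

```bash
llama-prompt-ops migrate --config path/to/config.yaml
```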
Done! The optimized prompt will be saved to the `results` directory with performance metrics comparing the original and optimized versions.
To read more about this use case, see the Basic Tutorial, which goes into more detail.
Below is an example of a system prompt transformed from a proprietary LLM to Llama:
| Original Proprietary LM Prompt | Optimized Llama Prompt |
|---|---|
| You are a helpful assistant. Extract and return a JSON with the following keys and values: 1. "urgency": one of `high`, `medium`, `low` 2. "sentiment": one of `negative`, `neutral`, `positive` 3. "categories": Create a dictionary with categories as keys and boolean values (True/False), where the value indicates whether the category matches tags like `emergency_repair_services`, `routine_maintenance_requests`, etc. Your complete message should be a valid JSON string that can be read directly. | You are an expert in analyzing customer service messages. Your task is to categorize the following message based on urgency, sentiment, and relevant categories. Analyze the message and return a JSON object with these fields: 1. "urgency": Classify as "high", "medium", or "low" based on how quickly this needs attention 2. "sentiment": Classify as "negative", "neutral", or "positive" based on the customer's tone 3. "categories": Create a dictionary with facility management categories as keys and boolean values. Only include these exact keys in your response. Return a valid JSON object without code blocks, prefixes, or explanations. |
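Because both prompts require the model to return raw JSON, downstream code can parse and validate the response directly. A minimal sketch in Python; the `response` string is a stand-in for actual model output:

```python
import json

# Stand-in for a raw model response produced by the optimized prompt above
response = (
    '{"urgency": "high", "sentiment": "negative", '
    '"categories": {"emergency_repair_services": true, '
    '"routine_maintenance_requests": false}}'
)

parsed = json.loads(response)

# The prompt asks for exactly these keys and value ranges
assert set(parsed) == {"urgency", "sentiment", "categories"}
assert parsed["urgency"] in {"high", "medium", "low"}
assert parsed["sentiment"] in {"negative", "neutral", "positive"}
assert all(isinstance(v, bool) for v in parsed["categories"].values())
print("response is well-formed:", parsed)
```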
To use llama-prompt-ops for prompt optimization, you'll need to prepare a dataset with your prompts and expected responses. The standard format is a JSON file structured like this:
```json
[
  {
    "question": "Your input query here",
    "answer": "Expected response here"
  },
  {
    "question": "Another input query",
    "answer": "Another expected response"
  }
]
```
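If your raw examples live in another format, a few lines of Python can convert them. This sketch assumes a hypothetical `raw_examples.csv` with `query` and `response` columns:

```python
import csv
import json

# Hypothetical source file with "query" and "response" columns
with open("raw_examples.csv", newline="", encoding="utf-8") as f:
    rows = csv.DictReader(f)
    # Map each row onto the question/answer schema shown above
    dataset = [{"question": r["query"], "answer": r["response"]} for r in rows]

with open("dataset.json", "w", encoding="utf-8") as f:
    json.dump(dataset, f, indent=2, ensure_ascii=False)
```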
If your data matches this format, you can use the built-in `StandardJSONAdapter`, which will handle it automatically.
If your data is formatted differently and there isn't a built-in dataset adapter, you can create a custom one by extending the `DatasetAdapter` class. See the Dataset Adapter Selection Guide for more details.
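As a rough sketch only: the shape below assumes an adapter's job is to turn your records into the standard question/answer pairs. The import path, constructor behavior, and `adapt` method name are assumptions, not the library's confirmed API; check the Dataset Adapter Selection Guide for the real interface:

```python
import json

# Assumed import path and base class; verify against the
# Dataset Adapter Selection Guide before using.
from llama_prompt_ops.core.datasets import DatasetAdapter


class ChatLogAdapter(DatasetAdapter):
    """Hypothetical adapter for records shaped like {"user": ..., "assistant": ...}."""

    def adapt(self) -> list:
        # Assumes the base class stores the dataset path from its constructor
        with open(self.dataset_path, encoding="utf-8") as f:
            records = json.load(f)
        # Map each record onto the standard question/answer schema
        return [{"question": r["user"], "answer": r["assistant"]} for r in records]
```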
llama-prompt-ops supports various inference providers and endpoints to fit your infrastructure needs. See our detailed guide on inference providers for configuration examples with:
- OpenRouter (cloud-based API)
- vLLM (local deployment)
- NVIDIA NIMs (optimized containers)
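As a sketch of the local route, vLLM can expose an OpenAI-compatible endpoint that llama-prompt-ops can then be pointed at; the model name and port below are placeholders:

```bash
# Serve a model locally behind an OpenAI-compatible API (recent vLLM CLI)
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```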
For more detailed information, check out these resources:
- Quick Start Guide: Get up and running with llama-prompt-ops in 5 minutes
- Intermediate Configuration Guide: Learn how to configure datasets, metrics, and optimization strategies
- Dataset Adapter Selection Guide: Choose the right adapter for your dataset format
- Metric Selection Guide: Select appropriate evaluation metrics for your use case
- Inference Providers Guide: Configure different model providers and endpoints
This project builds on some awesome open source projects, including DSPy. Thanks to the team for the inspiring work!
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.