Effortlessly evaluate prompts and models
Iterate on prompts and test different models. Find the best fit for your use case.
Join other AI power users and AI builders in simplifying your eval workflows
Your personal workspace for prompt engineering
Manage your prompts, contexts, and models in one place, locally on your machine. Test out different combinations and use cases with a few clicks.
Prompt Evaluation
Model Evaluation
Evaluation Function
Human Rating
Experiment Management
Context Library
BYOK API Integrations
Custom Models
16x Eval Video Demo
Learn how 16x Eval works in this video demo.
Run Evaluations and Compare Results
Create new evaluations by specifying a prompt, contexts, and models. Multiple contexts are combined and added to the final prompt.
You can select multiple models to evaluate the same prompt and context in parallel.
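Conceptually, the flow looks something like the sketch below. This is a minimal illustration of the idea, not 16x Eval's actual code; `query_model` is a hypothetical stand-in for a provider API call.

```python
# Minimal sketch of the evaluation flow, not 16x Eval's actual code.
# query_model is a hypothetical stand-in for a provider API call.
from concurrent.futures import ThreadPoolExecutor

def query_model(model: str, prompt: str) -> str:
    # Placeholder: a real implementation would call the provider's API.
    return f"[{model}] response to: {prompt[:40]}..."

def run_evaluation(prompt: str, contexts: list[str], models: list[str]) -> dict[str, str]:
    # Multiple contexts are combined and added to the final prompt.
    final_prompt = "\n\n".join(contexts + [prompt])
    # The same final prompt goes to every selected model in parallel.
    with ThreadPoolExecutor() as pool:
        futures = {m: pool.submit(query_model, m, final_prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}

print(run_evaluation("Summarize the context.",
                     ["Doc A text", "Doc B text"],
                     ["gpt-4o", "claude-sonnet"]))
```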
Manage Contexts for Your Evaluations
Import and manage both text and image contexts for your evaluations. Use the drag and drop interface to easily add new contexts or import them from files.
Track context statistics including word count, line count, and sentence count to better understand your evaluation inputs.
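As a rough illustration of these statistics, here is how word, line, and sentence counts might be computed. 16x Eval tracks these for you automatically; the sentence-splitting heuristic below is an assumption for demonstration only.

```python
# Illustrative sketch of context statistics (word, line, sentence counts).
# The sentence heuristic is a simplification for demonstration purposes.
import re

def context_stats(text: str) -> dict[str, int]:
    return {
        "words": len(text.split()),
        "lines": len(text.splitlines()),
        # Rough sentence count: split on ., ! or ? followed by whitespace.
        "sentences": len([s for s in re.split(r"[.!?]+\s+", text.strip()) if s]),
    }

print(context_stats("First sentence. Second one!\nA new line?"))
# {'words': 7, 'lines': 2, 'sentences': 3}
```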
Customizable Columns for Metrics
Tailor your evaluation table to your specific needs with customizable columns. Show or hide columns based on which metrics matter most to you.
For example, you can track input and output token statistics, prompt structure, evaluation function scores, or throughput, and hide any metrics you don't care about.
Organize and view your eval data in ways that make sense for your evaluation workflow.
Experiments for Different Use Cases
Group evaluations into experiments to compare model and prompt performance across different use cases (coding, writing, question answering, etc.).
You can view the top models and prompts for each experiment on the Experiments page.
Built-in Models and Custom Models
16x Eval has built-in support for top models from various providers like OpenAI, Anthropic, Google, DeepSeek, Azure OpenAI, and OpenRouter.
We also support any other provider that offers OpenAI API compatibility, such as a locally running Ollama server or Fireworks.
16x Eval is a BYOK (Bring Your Own Key) application. You'll need to provide your own API keys for these providers to use the application.
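For instance, Ollama exposes an OpenAI-compatible endpoint at http://localhost:11434/v1, so the standard openai Python client can talk to it directly. The snippet below illustrates that compatibility in general, not how 16x Eval connects internally; the model name assumes you have pulled it locally.

```python
# Any OpenAI-compatible endpoint works with the official openai client.
# Here it targets a local Ollama server instead of api.openai.com.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama endpoint
    api_key="ollama",  # Ollama ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3.1",  # assumes this model has been pulled locally
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```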
Coding Task Evaluation
Coding experiment with prompt to add a feature to a TODO app
Evaluate and compare AI models for coding tasks. Perfect for developers who want to assess the quality and accuracy of AI-generated code.
- Compare multiple models
- Test different prompts
- Custom evaluation functions (see the sketch after this list)
- Response rating system
- Add notes to each response
- Various token statistics
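To illustrate what a custom evaluation function might look like for a coding task, here is a hypothetical scorer that checks whether a response contains a syntactically valid Python code block. This sketches the concept only; it is not 16x Eval's actual evaluation function interface.

```python
# Hypothetical custom evaluation function for a coding task: score a
# response on whether it contains a fenced, parseable Python code block.
import ast
import re

FENCE = "`" * 3  # markdown code fence (built here to avoid nested fences)

def score_response(response: str) -> float:
    # Look for a fenced code block, optionally tagged as python.
    pattern = re.compile(FENCE + r"(?:python)?\n(.*?)" + FENCE, re.DOTALL)
    match = pattern.search(response)
    if not match:
        return 0.0  # no code block at all
    try:
        ast.parse(match.group(1))  # does the extracted code parse as Python?
        return 1.0
    except SyntaxError:
        return 0.5  # code block present but not valid Python

sample = "Here is the fix:\n" + FENCE + "python\nprint('hi')\n" + FENCE
print(score_response(sample))  # 1.0
```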
Writing Task Evaluation
Watch a video demo of writing task evaluation
Writing experiment with prompt to write an AI timeline
Compare AI models for writing tasks. Ideal for content creators, writers, and AI builders with AI-assisted writing workflows.
- Compare multiple models
- Test different prompts
- Highlight target text and penalty text
- Writing statistics (words per sentence and paragraph)
- Custom evaluation functions
- Add notes to each response
Image Analysis Task Evaluation
Watch a video demo of image analysis task evaluation
Image analysis experiment with prompt "Explain what happened in the image."
Evaluate AI models for image analysis tasks. Great for AI builders and researchers assessing how different AI models interpret and analyze visual content.
- Visual content analysis
- Multiple model comparison
- Custom evaluation criteria
- Response rating system