TVL: The Tuned Variables Language
From Vague Requests to Precise Specifications
Today's AI development conversations are broken:
"Build me a customer support agent." "Can you make it cheaper?" "It's not accurate enough."
TVL changes this. With TVL, you specify exactly what you need:
"Build a cost-efficient Q&A agent with >85% accuracy on our benchmark, <200ms average response time, <1% bias rate, and >90% empathy score."
This isn't just a wish list—it's a formal specification that can be validated, optimized, and enforced.
- Enable Optimization: Formal objectives let optimizers systematically search your configuration space, replacing manual trial and error.
- Enable Validation: Type-safe specs with explicit constraints can be validated before deployment, catching errors early.
- Enable Standardization: A shared language for AI requirements that is version-controlled, auditable, and tool-friendly.
Why TVL?
AI agents are under-specified. Teams iterate endlessly because requirements live in Slack threads, not in code. There's no shared language between product managers asking for "better accuracy" and engineers tweaking temperature values.
TVL is a specification language for AI systems—a PRD with formal semantics. It provides the foundation that tools need to validate, optimize, and manage your AI configurations. TVL captures:
- What to optimize: Quality metrics, cost targets, latency bounds
- What's tunable: Model choice, prompts, RAG parameters, hyperparameters
- What's allowed: Guardrails that prevent invalid or unsafe configurations
For Engineers
TVL treats your entire configuration space as code. Any variable that affects your AI system can be declared as a tunable variable (tvar):
- Prompt Components: Output format, summarization strategy, few-shot examples
- RAG Parameters: Retrieval count, chunk size, similarity threshold
- Model Invocation: Model choice, temperature, max tokens, stop sequences
You get type safety, constraint validation, and a clear contract between what's tunable and what's fixed.
Quick Example
Here is a TVL module for a RAG-powered support bot. Notice how we define tunable variables across the entire pipeline—from retrieval parameters to prompt formatting to model invocation:
```yaml
tvl:
  module: corp.support.rag_bot
  tvl_version: "0.9"
  tvars:
    # --- RAG Retrieval ---
    - name: k
      type: int
      domain: { range: [3, 20] }
      description: "Number of documents to retrieve"
    - name: summarizer
      type: enum[str]
      domain: ["none", "extractive", "abstractive"]
      description: "How to summarize retrieved docs before injection"
    # --- Prompt Template ---
    - name: output_format
      type: enum[str]
      domain: ["markdown", "json", "plain"]
      description: "Response format instruction in system prompt"
    # --- Model Invocation ---
    - name: model
      type: enum[str]
      domain: ["gpt-4o", "claude-3-sonnet"]
    - name: temperature
      type: float
      domain: { range: [0.0, 1.0] }
  constraints:
    structural:
      # Abstractive summarization needs a capable model
      - when: summarizer = "abstractive"
        then: model = "gpt-4o"
      # Large k with summarization can exceed context limits
      - when: summarizer != "none"
        then: k <= 10
  objectives:
    - name: quality
      direction: maximize
    - name: latency
      direction: minimize
    - name: cost
      direction: minimize
```
This single file captures the entire tunable surface of your RAG bot—making it easy to version, validate, and optimize.
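To show what that contract looks like to tooling, here is a minimal Python sketch that checks a candidate configuration against the declared domains. The `TVARS` table and `check_config` helper are illustrative, not part of any official TVL library:

```python
# Hand-written mirror of the tvar domains in the Quick Example
# (illustrative only; a real tool would parse the YAML module).
TVARS = {
    "k": {"type": "int", "range": (3, 20)},
    "summarizer": {"type": "enum", "values": ["none", "extractive", "abstractive"]},
    "output_format": {"type": "enum", "values": ["markdown", "json", "plain"]},
    "model": {"type": "enum", "values": ["gpt-4o", "claude-3-sonnet"]},
    "temperature": {"type": "float", "range": (0.0, 1.0)},
}

def check_config(config):
    """Return a list of domain violations for a candidate configuration."""
    errors = []
    for name, spec in TVARS.items():
        value = config.get(name)
        if value is None:
            errors.append(f"{name}: missing")
        elif spec["type"] == "enum":
            if value not in spec["values"]:
                errors.append(f"{name}: {value!r} not in {spec['values']}")
        else:  # numeric range
            lo, hi = spec["range"]
            if not (lo <= value <= hi):
                errors.append(f"{name}: {value} outside [{lo}, {hi}]")
    return errors

good = {"k": 5, "summarizer": "none", "output_format": "json",
        "model": "gpt-4o", "temperature": 0.2}
bad = dict(good, k=50, model="gpt-5")
print(check_config(good))  # []
print(check_config(bad))
```

A fuller implementation would also evaluate the structural constraints, which is exactly what makes the declared contract enforceable rather than advisory.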
What Formal Specifications Enable
TVL provides the foundation—a specification language with formal semantics. Once your AI system is properly specified, a rich ecosystem of tooling becomes possible:
Automatic Optimization
With explicitly defined objectives and constraints, optimizers can systematically explore your configuration space:
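TVL itself does not prescribe an algorithm. As a minimal sketch, a random-search driver over the Quick Example's space might look like this; the `evaluate` stub stands in for a real benchmark, and all helper names are hypothetical:

```python
import random

# Discretized search space mirroring the tvars in the Quick Example.
SPACE = {
    "k": list(range(3, 21)),
    "summarizer": ["none", "extractive", "abstractive"],
    "output_format": ["markdown", "json", "plain"],
    "model": ["gpt-4o", "claude-3-sonnet"],
    "temperature": [round(0.1 * i, 1) for i in range(11)],  # 0.0 .. 1.0
}

def satisfies_constraints(cfg):
    # Structural constraints from the spec: abstractive summarization
    # needs gpt-4o, and any summarization caps k at 10.
    if cfg["summarizer"] == "abstractive" and cfg["model"] != "gpt-4o":
        return False
    if cfg["summarizer"] != "none" and cfg["k"] > 10:
        return False
    return True

def evaluate(cfg):
    # Stand-in objective: in practice this would run the benchmark and
    # combine quality, latency, and cost into a score.
    return -abs(cfg["k"] - 8) - cfg["temperature"]

def random_search(trials=200, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {name: rng.choice(values) for name, values in SPACE.items()}
        if not satisfies_constraints(cfg):
            continue  # skip configurations the spec forbids
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best, score = random_search()
print(best, score)
```

Because the constraints are declared in the spec, any optimizer (random, Bayesian, evolutionary) can reuse the same feasibility check instead of rediscovering invalid regions by trial and error.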
Spec Validation
TVL specifications can be validated before deployment—catching misconfigurations, type errors, and constraint violations:
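As an illustration of the idea (a hand-rolled sketch, not the official validator), a linter pass over a parsed spec could check declared types, domain well-formedness, and duplicate names:

```python
# Types this toy linter recognizes (illustrative subset).
KNOWN_TYPES = {"int", "float", "bool", "enum[str]"}

def validate_spec(spec):
    """Return a list of error strings; an empty list means the spec passes."""
    errors, names = [], set()
    for tvar in spec.get("tvars", []):
        name = tvar.get("name")
        if not name:
            errors.append("tvar missing a name")
            continue
        if name in names:
            errors.append(f"{name}: declared twice")
        names.add(name)
        if tvar.get("type") not in KNOWN_TYPES:
            errors.append(f"{name}: unknown type {tvar.get('type')!r}")
        domain = tvar.get("domain")
        if isinstance(domain, dict) and "range" in domain:
            lo, hi = domain["range"]
            if lo > hi:
                errors.append(f"{name}: empty range [{lo}, {hi}]")
        elif isinstance(domain, list):
            if not domain:
                errors.append(f"{name}: empty enum domain")
        else:
            errors.append(f"{name}: malformed domain")
    return errors

good_spec = {"tvars": [
    {"name": "k", "type": "int", "domain": {"range": [3, 20]}},
    {"name": "model", "type": "enum[str]", "domain": ["gpt-4o", "claude-3-sonnet"]},
]}
bad_spec = {"tvars": [
    {"name": "temperature", "type": "float", "domain": {"range": [1.0, 0.0]}},
    {"name": "temperature", "type": "flaot", "domain": {"range": [0.0, 1.0]}},
]}
print(validate_spec(good_spec))  # []
print(validate_spec(bad_spec))
```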
Constraint Satisfiability
Structural constraints can be checked for satisfiability using SAT/SMT solvers, ensuring your rules don't create impossible configurations:
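A production checker would hand the rules to an SMT solver such as Z3. To illustrate the underlying question, exhaustive enumeration over a small discretized space can already expose an impossible rule set (helper names are illustrative):

```python
from itertools import product

# Discretized domains for the tvars the constraints mention.
DOMAINS = {
    "summarizer": ["none", "extractive", "abstractive"],
    "model": ["gpt-4o", "claude-3-sonnet"],
    "k": list(range(3, 21)),
}

def satisfiable(rules):
    """True if at least one assignment satisfies every rule."""
    names = list(DOMAINS)
    for values in product(*(DOMAINS[n] for n in names)):
        cfg = dict(zip(names, values))
        if all(rule(cfg) for rule in rules):
            return True
    return False

# The two structural constraints from the Quick Example, as implications.
rules = [
    lambda c: c["summarizer"] != "abstractive" or c["model"] == "gpt-4o",
    lambda c: c["summarizer"] == "none" or c["k"] <= 10,
]
print(satisfiable(rules))  # True: summarizer="none" satisfies both

# Adding contradictory rules empties the space.
rules_bad = rules + [
    lambda c: c["summarizer"] == "abstractive",
    lambda c: c["model"] != "gpt-4o",
]
print(satisfiable(rules_bad))  # False: abstractive forces gpt-4o
```

Brute force scales badly as tvars multiply, which is why real checkers encode the rules symbolically for a SAT/SMT solver instead.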
TVL is the specification layer
TVL defines what to optimize, not how. The optimization itself is handled by tools built on top of TVL—like Traigent, which implements optimization algorithms that consume TVL specs.
Get Started
Install the CLI tools to validate and lint your TVL files, then validate your first configuration and lint it for best practices.
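A sketch of the workflow, assuming the tooling ships a `tvl` binary with `validate` and `lint` subcommands (hypothetical names; consult the project's docs for the real commands):

```shell
# Hypothetical CLI and package names for illustration only.
pip install tvl-cli              # install the command-line tools

tvl validate rag_bot.tvl.yaml    # type-check tvars, domains, constraints
tvl lint rag_bot.tvl.yaml        # warn on missing descriptions, etc.
```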
- Step-by-step tutorial to write your first TVL module.
- Full specification of tvars, constraints, and objectives.
- Real-world TVL modules for RAG bots, routers, and cost optimization.
Created by Traigent
TVL is developed by Traigent, the LLM optimization platform. TVL is open source under the MIT license.