MeTal API Docs
The MeTal API is fully compatible with the OpenAI API spec. If you've already built something with OpenAI's /chat/completions, you can switch to MeTal by changing two lines: the base URL and the model name. That's it.
MeTal exposes the same /v1/chat/completions interface. All existing OpenAI SDKs work out of the box — just point them at api.metalapis.com. For the full API reference, see the OpenAI API docs — everything there applies here.

Quickstart
Get your API key from MeTal ID, then make your first request. You're literally one baseURL swap away from running.
Using the OpenAI SDK (Node)
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.METAL_API_KEY,
  baseURL: 'https://api.metalapis.com/v1', // ← only change needed
});

const response = await client.chat.completions.create({
  model: 'valley-fast-1',
  messages: [{ role: 'user', content: 'hey, what can you do?' }],
});

console.log(response.choices[0].message.content);
Using the OpenAI SDK (Python)
from openai import OpenAI

client = OpenAI(
    api_key="your-metal-api-key",
    base_url="https://api.metalapis.com/v1",  # ← only change needed
)

response = client.chat.completions.create(
    model="valley-pro-1",
    messages=[{"role": "user", "content": "hey, what can you do?"}],
)

print(response.choices[0].message.content)
Raw fetch (no SDK)
const response = await fetch('https://api.metalapis.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.METAL_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'valley-fast-1',
    messages: [{ role: 'user', content: 'hello' }],
  }),
});
const data = await response.json();
console.log(data.choices[0].message.content);
Authentication
All requests require a Bearer token in the Authorization header. Get your key from your MeTal ID dashboard.
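To make the header concrete, here's a minimal sketch using only Python's standard library. It builds (but does not send) a request; the key value is a placeholder read from METAL_API_KEY.

```python
import json
import os
import urllib.request

# Build a /v1/chat/completions request carrying the Bearer token.
# METAL_API_KEY is assumed to be exported in your environment.
api_key = os.environ.get("METAL_API_KEY", "sk-placeholder")

req = urllib.request.Request(
    "https://api.metalapis.com/v1/chat/completions",
    data=json.dumps({
        "model": "valley-fast-1",
        "messages": [{"role": "user", "content": "hello"}],
    }).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    },
    method="POST",
)

# The token rides in the Authorization header, scheme first:
print(req.get_header("Authorization").split()[0])  # → Bearer
```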
Models
Pass the model name in the model field of your request. We keep model IDs stable — we'll always give you a heads-up before deprecating anything.
| Model ID | Context | Best for | Status |
|---|---|---|---|
| valley-fast-1 | 32K | Real-time, high-throughput, chat | ● Live |
| valley-pro-1 | 200K | Complex reasoning, long context, agents | ● Live |
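As a rough rule of thumb from the table above, a sketch of routing between the two (pick_model is our illustrative helper, not part of the API):

```python
def pick_model(prompt_tokens: int, needs_reasoning: bool) -> str:
    """Pick a model ID from the table above (illustrative helper)."""
    if needs_reasoning or prompt_tokens > 32_000:
        return "valley-pro-1"   # 200K context: complex reasoning, agents
    return "valley-fast-1"      # 32K context: real-time, high-throughput chat

print(pick_model(1_000, False))  # → valley-fast-1
```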
Chat Completions
The MeTal API implements POST /v1/chat/completions — identical to OpenAI's spec.
For the full parameter reference (temperature, top_p, max_tokens, stop sequences, logprobs, etc.), see the official OpenAI docs. All parameters documented there are supported. MeTal-specific extensions (if any) will be listed here as we ship them.
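For example, a request body combining several of those pass-through parameters (the values here are illustrative, not recommendations):

```python
import json

# A chat completions body using common OpenAI-spec sampling parameters.
payload = {
    "model": "valley-pro-1",
    "messages": [{"role": "user", "content": "Summarize this in one line."}],
    "temperature": 0.7,   # sampling temperature
    "top_p": 0.9,         # nucleus sampling cutoff
    "max_tokens": 256,    # cap on generated tokens
    "stop": ["\n\n"],     # stop sequences
}

print(json.dumps(payload, indent=2))
```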
Streaming
Set stream: true and you'll get server-sent events back, same as OpenAI. Works with the OpenAI SDK's built-in stream helpers.
const stream = await client.chat.completions.create({
  model: 'valley-pro-1',
  messages: [{ role: 'user', content: 'write me a novel' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
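Each SSE event carries a delta fragment, and the client's job is just to concatenate them. A sketch of that assembly in Python, using simulated chunks whose shape mirrors the OpenAI streaming format (no network involved):

```python
# Simulated stream chunks in the OpenAI delta shape:
chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Once "}}]},
    {"choices": [{"delta": {"content": "upon a time"}}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]

def assemble(chunks):
    """Concatenate the content fragments from a stream of delta chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)

print(assemble(chunks))  # → Once upon a time
```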
Tool Use
Both models support function/tool calling using the same spec as OpenAI. Define your tools in the tools array and handle tool_calls in the response.
Valley Pro has deeper agentic tool use — it's better at multi-step planning and knows when not to call a tool. Valley Fast is faster but more literal.
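A minimal sketch of the round trip, assuming a made-up get_weather tool (not a MeTal built-in) and a simulated assistant message in the OpenAI tool_calls shape:

```python
import json

# One tool definition in the OpenAI function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def handle_tool_calls(message):
    """Dispatch tool_calls from an assistant message and build tool replies."""
    results = []
    for call in message.get("tool_calls", []):
        if call["function"]["name"] == "get_weather":
            args = json.loads(call["function"]["arguments"])
            results.append({"role": "tool", "tool_call_id": call["id"],
                            "content": f"Sunny in {args['city']}"})
    return results

# Simulated assistant message containing one tool call:
msg = {"tool_calls": [{"id": "call_1", "type": "function",
                       "function": {"name": "get_weather",
                                    "arguments": '{"city": "Oslo"}'}}]}

print(handle_tool_calls(msg)[0]["content"])  # → Sunny in Oslo
```

In a real loop you'd append each tool reply to the messages array and call the model again so it can use the result.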
Errors
Error responses follow the OpenAI format exactly — same HTTP status codes, same error.type and error.message fields. Your existing error handling just works.
{
  "error": {
    "message": "Invalid API key.",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}
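A sketch of branching on that shape (describe_error and its wording are ours; the status codes and error fields follow the OpenAI format):

```python
def describe_error(status: int, body: dict) -> str:
    """Turn an OpenAI-format error response into a short diagnostic."""
    err = body.get("error", {})
    if status == 401:
        return f"auth problem: {err.get('message')}"
    if status == 429:
        return "rate limited — back off and retry"
    return f"{err.get('type', 'unknown')}: {err.get('message', '')}"

body = {"error": {"message": "Invalid API key.",
                  "type": "invalid_request_error",
                  "code": "invalid_api_key"}}

print(describe_error(401, body))  # → auth problem: Invalid API key.
```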
Support
Hit us up on Discord or email api@metalapis.com. We're actual humans who will actually respond.