Use llmchat as a backend for your own apps. Generate your API key from the API tab in the app sidebar.
All requests require a Bearer token in the Authorization header.
```
Authorization: Bearer llmchat-YOUR_API_KEY
```
`POST /api/v1/chat`

Request body fields:

| Field | Type | Required | Description |
|---|---|---|---|
| `messages` | array | Yes | Array of `{role, content}` objects. Roles: `user`, `assistant`, `system` |
| `model` | string | No | Model ID (default: `gemini-flash-2.5`) |
| `web_search` | boolean | No | Enable web search (default: `false`) |
| `custom_instructions` | string | No | Extra system instructions |
| `thread_id` | string | No | Optional ID to group messages |
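As an illustration, the fields above can be assembled into a request programmatically. The `buildChatRequest` helper below is hypothetical (not part of any official SDK); the endpoint, field names, and defaults come from the table above.

```javascript
// Sketch: build a /api/v1/chat request covering every documented field.
// Optional fields are omitted from the body so server-side defaults apply.
function buildChatRequest({ apiKey, messages, model, webSearch, customInstructions, threadId }) {
  const body = { messages };
  if (model) body.model = model;                             // default: gemini-flash-2.5
  if (webSearch !== undefined) body.web_search = webSearch;  // default: false
  if (customInstructions) body.custom_instructions = customInstructions;
  if (threadId) body.thread_id = threadId;
  return {
    url: 'https://delph.tech/api/v1/chat',
    options: {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    },
  };
}
```

The result can be passed straight to `fetch`: `const { url, options } = buildChatRequest({ ... }); const res = await fetch(url, options);`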
```bash
curl -X POST https://delph.tech/api/v1/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-flash-2.5",
    "messages": [
      { "role": "user", "content": "Explain quantum entanglement simply." }
    ],
    "web_search": false
  }'
```

```javascript
const response = await fetch('https://delph.tech/api/v1/chat', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gemini-flash-2.5',
    messages: [
      { role: 'user', content: 'Explain quantum entanglement simply.' }
    ],
  }),
});

// Response is a Server-Sent Events stream
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  // stream: true keeps multi-byte characters intact across chunk boundaries
  const chunk = decoder.decode(value, { stream: true });
  // Parse SSE events from chunk
  console.log(chunk);
}
```

Credits are deducted per request based on the model used. Credits reset daily.
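The raw chunks from the streaming example above can be split into discrete events. The sketch below assumes standard Server-Sent Events framing (events separated by a blank line, payload on `data:` lines); the exact event payload shape is not specified in this document.

```javascript
// Sketch: incremental SSE parser. Buffers partial chunks and invokes
// onData once per complete "data:" line. Illustrative, not an official SDK.
function createSSEParser(onData) {
  let buffer = '';
  return function feed(chunk) {
    buffer += chunk;
    const events = buffer.split('\n\n');
    buffer = events.pop(); // keep any incomplete trailing event for the next chunk
    for (const event of events) {
      for (const line of event.split('\n')) {
        if (line.startsWith('data:')) onData(line.slice(5).trim());
      }
    }
  };
}
```

Inside the read loop, call `feed(decoder.decode(value, { stream: true }))` in place of `console.log(chunk)`.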
| Model ID | Name | Credits | Notes |
|---|---|---|---|
| `gemini-flash-2.5` | Gemini 2.5 Flash | 1 | Fast, free tier |
| `gemini-2.5-pro` | Gemini 2.5 Pro | 3 | 1M context |
| `gpt-4o-mini` | GPT-4o Mini | 1 | Good balance |
| `gpt-4.1-nano` | GPT-4.1 Nano | 1 | Lightweight |
| `llama-4-scout` | Llama 4 Scout | 1 | Open source |
| `llama-3.3-70b` | Llama 3.3 70B | 1 | Open source |
| `claude-haiku-4.5` | Haiku 4.5 | 10 | Fast Claude |
| `claude-sonnet-4.6` | Sonnet 4.6 | 10 | High quality |
| `claude-opus-4.6` | Opus 4.6 | 10 | Most capable |
| `gpt-4.1` | GPT-4.1 | 5 | High quality |
| `deepseek-r1` | DeepSeek R1 | 5 | Reasoning model |
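If you want to estimate credit usage client-side before sending a request, the table above can be mirrored in a lookup. The map and helper below are illustrative; the credit values are copied from the table.

```javascript
// Sketch: per-request credit cost by model ID, mirroring the table above.
const MODEL_CREDITS = {
  'gemini-flash-2.5': 1, 'gemini-2.5-pro': 3,
  'gpt-4o-mini': 1, 'gpt-4.1-nano': 1, 'gpt-4.1': 5,
  'llama-4-scout': 1, 'llama-3.3-70b': 1,
  'claude-haiku-4.5': 10, 'claude-sonnet-4.6': 10, 'claude-opus-4.6': 10,
  'deepseek-r1': 5,
};

function creditCost(modelId) {
  const cost = MODEL_CREDITS[modelId];
  if (cost === undefined) throw new Error(`Unknown model: ${modelId}`);
  return cost;
}
```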
| Status | Meaning |
|---|---|
| 401 | Invalid or missing API key |
| 400 | Invalid request body |
| 429 | Daily credit limit reached |
| 500 | Internal server error |
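In client code, these statuses can be surfaced as errors before attempting to read the stream. The `checkResponse` helper below is a hypothetical sketch based only on the table above; note that a 429 clears when credits reset daily.

```javascript
// Sketch: map the documented error statuses to descriptive errors.
function checkResponse(res) {
  if (res.ok) return;
  switch (res.status) {
    case 401: throw new Error('Invalid or missing API key');
    case 400: throw new Error('Invalid request body');
    case 429: throw new Error('Daily credit limit reached; credits reset daily');
    case 500: throw new Error('Internal server error');
    default:  throw new Error(`Unexpected status ${res.status}`);
  }
}
```

Call it right after `fetch` resolves: `checkResponse(response);` before reading `response.body`.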
Need help? Open an issue on GitHub.