# Custom Models Integration
KoreShield can proxy any OpenAI-compatible API endpoint. This is useful for self-hosted models, gateways, or third-party providers that expose OpenAI-style endpoints.
## Basic Request (TypeScript)

```typescript
const response = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    model: "your-model-name",
    messages: [{ role: "user", content: "Summarize the audit log." }]
  })
});
```
## Basic Request (Python)

```python
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "your-model-name",
        "messages": [{"role": "user", "content": "Summarize the audit log."}]
    }
)
```
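In the OpenAI-compatible response schema, the assistant's reply is nested under `choices[0].message.content`. A minimal helper for pulling it out (`message_text` is an illustrative name, not part of KoreShield):

```python
def message_text(payload: dict) -> str:
    """Return the assistant reply from an OpenAI-compatible response body."""
    # The chat completions schema nests the reply under choices[0].message.content.
    return payload["choices"][0]["message"]["content"]
```

With the request above, call it as `message_text(response.json())`.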
## Streaming

```typescript
const response = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    model: "your-model-name",
    stream: true,
    messages: [{ role: "user", content: "Generate a short summary." }]
  })
});

// Read the SSE stream as it arrives (Node 18+, where the body is async-iterable)
const decoder = new TextDecoder();
for await (const chunk of response.body!) {
  process.stdout.write(decoder.decode(chunk));
}
```
```python
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "your-model-name",
        "stream": True,
        "messages": [{"role": "user", "content": "Generate a short summary."}]
    },
    stream=True
)

for line in response.iter_lines():
    if line:
        print(line.decode("utf-8"))
```
## Compatibility Notes

- The environment variable must match the provider name (uppercased + `_API_KEY`).
- If the upstream API expects additional headers, configure them in your gateway or extend the provider adapter.
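For example, a hypothetical provider registered as `acme` would read its key from `ACME_API_KEY`:

```python
import os

provider = "acme"  # hypothetical provider name
env_var = provider.upper() + "_API_KEY"
api_key = os.getenv(env_var)  # None unless ACME_API_KEY has been exported
```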
## Error Handling

- `403` indicates a request blocked by policy enforcement.
- `429` or `5xx` typically indicates a provider or rate-limit issue.
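A caller can branch on these codes before deciding whether to retry; a minimal sketch (the category names are descriptive, not a KoreShield API):

```python
def classify_status(status: int) -> str:
    """Bucket an HTTP status from the proxy into a coarse handling category."""
    if status == 403:
        return "blocked"        # rejected by policy enforcement; do not retry
    if status == 429 or 500 <= status < 600:
        return "retryable"      # provider outage or rate limit; back off and retry
    if 200 <= status < 300:
        return "ok"
    return "client_error"       # malformed request, unknown model, etc.
```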
## Next Steps

- Configure providers in `/configuration/`.
- Review OpenAI-compatible routing in `openai.mdx`.