LangChain Integration

Use LangChain normally, but route model calls through the KoreShield proxy.

Runtime Contract

  • Chat route: POST /v1/chat/completions
  • Optional RAG pre-scan: POST /v1/rag/scan
  • Auth: Bearer JWT, X-API-Key, or ks_access_token cookie
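Any one of the three credentials satisfies the auth contract. A minimal sketch of the equivalent header/cookie shapes (token values below are placeholders, not real credentials):

```python
token = "<your KoreShield JWT>"  # placeholder value

# Option 1: Bearer JWT header
bearer_auth = {"Authorization": f"Bearer {token}"}

# Option 2: API-key header
api_key_auth = {"X-API-Key": "<your KoreShield API key>"}

# Option 3: cookie
cookie_auth = {"ks_access_token": token}
```

Pass exactly one of these per request, e.g. `requests.post(url, headers=bearer_auth, ...)` for the header options or `requests.post(url, cookies=cookie_auth, ...)` for the cookie.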

Python Example

import os
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="deepseek-chat",
    # Point the OpenAI-compatible client at the KoreShield proxy.
    base_url="http://localhost:8000/v1",
    # The client requires an api_key, but the proxy ignores it;
    # authentication happens via the header below.
    api_key="unused-by-koreshield-provider-client",
    default_headers={
        "Authorization": f"Bearer {os.environ['KORESHIELD_JWT']}"
    },
)

result = llm.invoke("Summarize this safely")
print(result.content)

RAG Pattern

  1. Retrieve documents from your vector store.
  2. Call /v1/rag/scan with user_query + documents.
  3. Filter/block on unsafe result.
  4. Send safe context to /v1/chat/completions.
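Steps 2–4 above can be sketched as a single helper. This is a minimal sketch, not the official client: the scan response shape (a per-document `safe` flag under `results`) is an assumption, so adjust the filter to whatever your deployment actually returns. The `post` parameter exists so the HTTP call can be stubbed in tests.

```python
BASE_URL = "http://localhost:8000"
HEADERS = {"Authorization": "Bearer <KORESHIELD_JWT>"}  # placeholder token


def answer_with_rag(user_query, documents, post=None):
    """Scan retrieved documents, drop unsafe ones, then call chat."""
    if post is None:
        import requests  # imported lazily so a stub can be injected instead
        post = requests.post

    # Step 2: pre-scan the retrieved documents before they reach the model.
    scan = post(
        f"{BASE_URL}/v1/rag/scan",
        json={"user_query": user_query, "documents": documents},
        headers=HEADERS,
    ).json()

    # Step 3: keep only documents the scan marked safe.
    # (Assumed response shape: {"results": [{"safe": bool}, ...]}.)
    safe_docs = [d for d, r in zip(documents, scan["results"]) if r["safe"]]

    # Step 4: send the filtered context on to the chat endpoint.
    resp = post(
        f"{BASE_URL}/v1/chat/completions",
        json={
            "model": "deepseek-chat",
            "messages": [
                {"role": "system", "content": "Context:\n" + "\n".join(safe_docs)},
                {"role": "user", "content": user_query},
            ],
        },
        headers=HEADERS,
    ).json()
    return resp["choices"][0]["message"]["content"]
```

Step 1 (retrieval) stays in your existing vector-store code; pass its results in as `documents`.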

Notes

If a third-party LangChain utility expects older scan-style SDK methods, adapt it to call the two server endpoints above.
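One way to adapt such a utility is a small shim that exposes a scan-style method backed by the server endpoint. The class and method names here are hypothetical; match them to whatever interface the utility expects:

```python
class KoreShieldScanner:
    """Hypothetical shim: a scan-style method backed by POST /v1/rag/scan."""

    def __init__(self, base_url, token, post=None):
        if post is None:
            import requests  # imported lazily so a stub can be injected
            post = requests.post
        self._post = post
        self._base = base_url.rstrip("/")
        self._headers = {"Authorization": f"Bearer {token}"}

    def scan(self, user_query, documents):
        # Delegate to the server's RAG pre-scan endpoint.
        resp = self._post(
            f"{self._base}/v1/rag/scan",
            json={"user_query": user_query, "documents": documents},
            headers=self._headers,
        )
        return resp.json()
```

Chat calls need no shim: point the utility's OpenAI-compatible client at `/v1/chat/completions` via `base_url`, as in the Python example above.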