Applications
It's easy to integrate applications that leverage LLMs with Javelin. You can seamlessly connect your applications and route all LLM traffic through Javelin with minimal code changes.
Leveraging the Javelin Platform
The core usage of Javelin is to define routes and then to define what happens at each route. Rather than having your LLM applications (such as co-pilot apps) point individually and directly at the LLM vendor and model (such as OpenAI or Gemini), configure the provider/model endpoint to be your Javelin endpoint. This ensures that all applications that leverage AI models route their requests through the gateway. Javelin supports all the latest models and providers, so you don't have to change your application or how requests to models are sent.
See the Javelin Configuration section for details on how to set up routes on the gateway to different models and providers.
See the Python SDK for details on how to easily embed this within your AI apps.
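For example, if your application already uses the OpenAI Python SDK, switching traffic over to the gateway is typically just a change to the client's base URL and headers. The sketch below is illustrative: the base URL, route name, and header follow the patterns used in the examples later on this page and should be adjusted to your own Javelin deployment.
from openai import OpenAI
import os

# Point the existing OpenAI client at the Javelin gateway instead of the provider.
# The base URL, route name, and x-api-key header below are illustrative values.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://api.javelin.live/v1/query/your_route_name",
    default_headers={"x-api-key": os.environ["JAVELIN_API_KEY"]},
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from Javelin"}],
)
print(response.choices[0].message.content)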
Unified Endpoints
The Unified Endpoints provide a consistent API interface that abstracts the provider-specific details of various AI services. Whether you are interfacing with an OpenAI-compatible service, an Azure OpenAI deployment, or an AWS Bedrock API, these endpoints enable you to use a standardized request/response format. This documentation explains the available endpoints, their purpose, and usage examples.
- Single Entry Points: Instead of routing to different URLs for each provider, you call these “unified” endpoints with specific route parameters or path segments (e.g., /completions, /chat/completions, /embeddings, or deployments/{deployment}/completions in the case of Azure).
- Provider-Agnostic Handling: A common handler (e.g., queryHandler(appState)) receives each request and delegates it to the appropriate provider logic based on URL parameters like providername or deployment.
- Consistent Request/Response Shapes: All requests follow a uniform structure (for example, a JSON object with a prompt, messages, or input for embeddings). The service then translates it to each provider’s specific API format as needed.
Endpoint Breakdown
1. OpenAI-Compatible Endpoints
These endpoints mirror the standard OpenAI API methods. They allow you to perform common AI tasks such as generating text completions, handling chat-based requests, or producing embeddings.
Endpoints
- POST /{providername}/completions
  Request text completions from the provider.
  Path Parameter: providername: Identifier for the OpenAI-compatible provider.
- POST /{providername}/chat/completions
  Request chat-based completions (ideal for conversational interfaces).
  Path Parameter: providername: Identifier for the provider.
- POST /{providername}/embeddings
  Generate embeddings for provided text data.
  Path Parameter: providername: Identifier for the provider.
Example Usage
curl -X POST "https://your-api-domain.com/v1/openai/completions" -H "Content-Type: application/json" -d '{
"prompt": "Once upon a time",
"max_tokens": 50
}'
Replace openai with the appropriate OpenAI API-compatible provider name (e.g., azure, mistral, deepseek) as required.
2. Azure OpenAI API Endpoints
For providers using Azure’s deployment model, endpoints include an additional parameter for deployment management.
Endpoints
- POST /{providername}/deployments/{deployment}/completions
  Request text completions from the provider.
  Path Parameters: providername: The Azure OpenAI provider identifier. deployment: The deployment ID configured in Azure.
- POST /{providername}/deployments/{deployment}/chat/completions
  Request chat-based completions (ideal for conversational interfaces).
  Path Parameters: providername: The Azure OpenAI provider identifier. deployment: The deployment ID configured in Azure.
- POST /{providername}/deployments/{deployment}/embeddings
  Generate embeddings for provided text data.
  Path Parameters: providername: The Azure OpenAI provider identifier. deployment: The deployment ID configured in Azure.
Example Usage
curl -X POST "https://your-api-domain.com/v1/azure/deployments/my-deployment/chat/completions" -H "Content-Type: application/json" -d '{
"messages": [
{"role": "user", "content": "Tell me a story"}
],
"max_tokens": 50
}'
3. AWS Bedrock API Endpoints
For AWS Bedrock–style providers, the endpoints use a slightly different URL pattern to accommodate model versioning and extended routing.
Endpoints
- POST /model/{routename}/{apivariation}
  Route requests to a specific model and API variation.
  Path Parameters: routename: The model or route name (identifies a specific AWS Bedrock model). apivariation: The API variation ("Invoke", "Invoke-Stream", "Invoke-With-Response-Stream", "Converse", "Converse-Stream") or version.
Example Usage
curl -X POST "https://your-api-domain.com/v1/model/anthropic.claude-3-sonnet-20240229-v1:0/invoke" -H "Content-Type: application/json" -d '{
"input": "What is the capital of France?"
}'
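The Converse and Converse-Stream variations follow the same URL pattern. Below is a minimal sketch using Python requests; it assumes the Bedrock Converse message shape and reuses the placeholder domain, model ID, and header from the example above:
import requests

# Hypothetical call to the Converse variation; adjust the domain, model ID, and header to your setup.
resp = requests.post(
    "https://your-api-domain.com/v1/model/anthropic.claude-3-sonnet-20240229-v1:0/converse",
    headers={"Content-Type": "application/json", "x-api-key": "YOUR_JAVELIN_API_KEY"},
    json={
        "messages": [
            {"role": "user", "content": [{"text": "What is the capital of France?"}]}
        ],
        "inferenceConfig": {"maxTokens": 100},
    },
)
print(resp.json())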
4. Query Endpoints
These endpoints allow direct querying of predefined routes, bypassing provider-specific names when a generic and customizable route configuration is desired.
Endpoints
- POST /query/{routename}
  Execute a query against a specific route.
  Path Parameter: routename: The route to query; the request is served by one or more models based on the configured policies and route configuration, and a response is returned.
Example Usage
REST API
- curl
- Python Requests
First, create a route as shown in the Create Route section.
Once you have created a route, you can query it using the following curl command:
curl 'https://api.javelin.live/v1/query/your_route_name' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_OPENAI_API_KEY' \
-H 'x-api-key: YOUR_JAVELIN_API_KEY' \
--data-raw '{
"model": "gpt-3.5-turbo",
"messages": [
{"role": "user", "content": "SANFRANCISCO is located in?"}
],
"temperature": 0.8
}'
Make sure to replace your_route_name, YOUR_OPENAI_API_KEY, and YOUR_JAVELIN_API_KEY with your actual values.
First, create a route as shown in the Create Route section.
Once you have created a route, you can query it using Python requests:
import requests
import os
import dotenv
dotenv.load_dotenv()
javelin_api_key = os.getenv('JAVELIN_API_KEY')
openai_api_key = os.getenv('OPENAI_API_KEY')
route_name = 'your_route_name'
url = f'https://api.javelin.live/v1/query/{route_name}'
headers = {
    'Content-Type': 'application/json',
    'Authorization': f'Bearer {openai_api_key}',
    'x-api-key': javelin_api_key
}
data = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "SANFRANCISCO is located in?"}
    ],
    "temperature": 0.8
}
response = requests.post(url, headers=headers, json=data)
if response.status_code == 200:
    print(response.json())
else:
    print(f"Error: {response.status_code}, {response.text}")
Make sure to replace your_route_name with your actual route name, and set the JAVELIN_API_KEY and OPENAI_API_KEY environment variables.
Python
- Javelin SDK
- OpenAI
- Azure OpenAI
- LangChain
- OpenAI-Compatible Query Example
- DSPy
- Bedrock
- ...
pip install javelin-sdk
from javelin_sdk import JavelinClient, JavelinConfig, Route
import os
javelin_api_key = os.getenv('JAVELIN_API_KEY')
llm_api_key = os.getenv("OPENAI_API_KEY")
# Create Javelin configuration
config = JavelinConfig(
    base_url="https://api.javelin.live",
    javelin_api_key=javelin_api_key,
    llm_api_key=llm_api_key
)
# Create Javelin client
client = JavelinClient(config)
# The route to query is {routename}, e.g., sampleroute1
query_data = {
    "messages": [
        {
            "role": "system",
            "content": "Hello, you are a helpful scientific assistant."
        },
        {
            "role": "user",
            "content": "What is the chemical composition of sugar?"
        }
    ],
    "temperature": 0.8
}
# Now query the route, for async use 'await client.aquery_route("sampleroute1", query_data)'
response = client.query_route("sampleroute1", query_data)
print(response.model_dump_json(indent=2))
pip install openai
from openai import OpenAI
from javelin_sdk import JavelinClient, JavelinConfig
import os
javelin_api_key = os.environ['JAVELIN_API_KEY']
llm_api_key = os.environ["OPENAI_API_KEY"]
# Create Javelin configuration with the API key
config = JavelinConfig(
    javelin_api_key=javelin_api_key,
)

# Define the Javelin route as a variable
javelin_route = "sampleroute1"  # Define your universal route

client = JavelinClient(config)

openai_client = OpenAI(
    api_key=llm_api_key,
)

# Register the OpenAI client with Javelin using the route name
client.register_openai(openai_client, route_name=javelin_route)

# --- Call OpenAI Endpoints ---
print("OpenAI: 1 - Chat completions")
chat_completions = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is machine learning?"}],
)
print(chat_completions.model_dump_json(indent=2))
# Streaming Responses
stream = openai_client.chat.completions.create(
    messages=[
        {"role": "user", "content": "Say this is a test"}
    ],
    model="gpt-4o",
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
pip install openai
from openai import AzureOpenAI
from javelin_sdk import JavelinClient, JavelinConfig
import os

# Retrieve API keys from environment variables
javelin_api_key = os.getenv('JAVELIN_API_KEY')
azure_openai_api_key = os.getenv('AZURE_OPENAI_API_KEY')  # environment variable name is illustrative

# Javelin Headers
javelin_headers = {
    "x-api-key": javelin_api_key  # Javelin API key from admin
}

# Define the Javelin route as a variable
javelin_route = "sampleroute1"  # Example route

# Create Javelin Client
config = JavelinConfig(javelin_api_key=javelin_api_key)
client = JavelinClient(config)

# Create Azure OpenAI Client
openai_client = AzureOpenAI(
    api_version="2023-07-01-preview",
    azure_endpoint="https://javelinpreview.openai.azure.com",  # Azure Endpoint
    api_key=azure_openai_api_key
)

# Register Azure OpenAI Client with Javelin
client.register_azureopenai(openai_client, route_name=javelin_route)
completion = openai_client.chat.completions.create(
    model="gpt-4",  # e.g. gpt-3.5-turbo
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.model_dump_json(indent=2))
# Streaming Responses
stream = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Hello, you are a helpful scientific assistant."},
        {"role": "user", "content": "What is the chemical composition of sugar?"}
    ],
    stream=True
)
for chunk in stream:
    if chunk.choices:
        print(chunk.choices[0].delta.content or "", end="")
pip install langchain
pip install langchain-openai
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
import os
# Retrieve API keys from environment variables
javelin_api_key = os.getenv('JAVELIN_API_KEY')
llm_api_key = os.getenv("OPENAI_API_KEY")
model_choice = "gpt-3.5-turbo" # For example, change to "gpt-4"
# Define the Javelin route as a variable
route_name = "sampleroute1"
# Define Javelin headers with the API key
javelin_headers = {
    "x-api-key": javelin_api_key  # Javelin API key from admin
}
llm = ChatOpenAI(
    openai_api_key=llm_api_key,
    openai_api_base="https://api.javelin.live/v1/openai",
    default_headers={
        "x-api-key": javelin_api_key,
        "x-javelin-route": route_name,
        "x-javelin-provider": "https://api.openai.com/v1",
        "x-javelin-model": model_choice
    }
)
# Define a simple prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])
# Use a simple output parser (string output)
output_parser = StrOutputParser()
# Create the processing chain (prompt -> LLM -> parser)
chain = prompt | llm | output_parser
def ask_question(question: str) -> str:
    return chain.invoke({"input": question})

# Example usage:
if __name__ == "__main__":
    question = "What is the chemical composition of water?"
    answer = ask_question(question)
    print("Answer:", answer)
#This example demonstrates how Javelin uses OpenAI's schema as a standardized interface for different LLM providers.
#By adopting OpenAI's widely-used request/response format, Javelin enables seamless integration with various LLM providers
#(like Anthropic, Bedrock, Mistral, etc.) while maintaining a consistent API structure. This allows developers to use the
#same code pattern regardless of the underlying model provider, with Javelin handling the necessary translations and adaptations behind the scenes.
from javelin_sdk import JavelinClient, JavelinConfig
import os
from typing import Dict, Any
import json
# Helper function to pretty print responses
def print_response(provider: str, response: Dict[str, Any]) -> None:
    print(f"\n=== Response from {provider} ===")
    print(json.dumps(response, indent=2))
# Setup client configuration
config = JavelinConfig(
    base_url="https://api.javelin.live",
    javelin_api_key=os.getenv('JAVELIN_API_KEY'),
    llm_api_key=os.getenv('OPENAI_API_KEY')
)
client = JavelinClient(config)
# Example messages in OpenAI format
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What are the three primary colors?"}
]
# 1. Query OpenAI route
try:
    openai_response = client.chat.completions.create(
        route="openai_route",  # Route configured for OpenAI
        messages=messages,
        temperature=0.7,
        max_tokens=150
    )
    print_response("OpenAI", openai_response)
except Exception as e:
    print(f"OpenAI query failed: {str(e)}")
"""
=== Response from OpenAI ===
{
  "id": "chatcmpl-123abc",
  "object": "chat.completion",
  "created": 1677858242,
  "model": "gpt-3.5-turbo",
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 38,
    "total_tokens": 80
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The three primary colors are red, blue, and yellow."
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}
"""
# 2. Query Bedrock route (using same OpenAI format)
try:
    bedrock_response = client.chat.completions.create(
        route="bedrock_route",  # Route configured for Bedrock
        messages=messages,
        temperature=0.7,
        max_tokens=150
    )
    print_response("Bedrock", bedrock_response)
except Exception as e:
    print(f"Bedrock query failed: {str(e)}")
"""
=== Response from Bedrock ===
{
"id": "bedrock-123xyz",
"object": "chat.completion",
"created": 1677858243,
"model": "anthropic.claude-v2",
"usage": {
"prompt_tokens": 42,
"completion_tokens": 41,
"total_tokens": 83
},
"choices": [
{
"message": {
"role": "assistant",
"content": "The three primary colors are red, blue, and yellow. These colors cannot be created by mixing other colors together."
},
"finish_reason": "stop",
"index": 0
}
]
}
"""
# Example using text completions with Llama
try:
    llama_response = client.completions.create(
        route="bedrockllama",  # Route configured for Bedrock Llama
        prompt="Write a haiku about programming:",
        max_tokens=50,
        temperature=0.7,
        top_p=0.9,
    )
    print("=== Llama Text Completion Response ===")
    print(json.dumps(llama_response, indent=2))
except Exception as e:
    print(f"Llama query failed: {str(e)}")
"""
=== Llama Text Completion Response ===
{
"id": "bedrock-comp-123xyz",
"object": "text_completion",
"created": 1677858244,
"model": "meta.llama2-70b",
"choices": [
{
"text": "Code flows like water\nBugs crawl through silent errors\nDebugger saves all",
"index": 0,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 6,
"completion_tokens": 15,
"total_tokens": 21
}
}
"""
Introduction: DSPy: Goodbye Prompting, Hello Programming!
Documentation: DSPy Docs
pip install dspy-ai
import dspy
from dsp import LM
import os
import requests
# Assuming the environment variables are set correctly
javelin_api_key = os.getenv('JAVELIN_API_KEY')
llm_api_key = os.getenv("OPENAI_API_KEY")
class Javelin(LM):
    def __init__(self, model, api_key):
        self.model = model
        self.api_key = api_key
        self.provider = "default"
        self.kwargs = {
            "temperature": 1.0,
            "max_tokens": 500,
            "top_p": 1.0,
            "frequency_penalty": 0.0,
            "presence_penalty": 0.0,
            "stop": None,
            "n": 1,
            "logprobs": None,
            "logit_bias": None,
            "stream": False
        }
        self.base_url = "https://api.javelin.live/v1/query/your_route_name"  # Set Javelin's API base URL for query
        self.javelin_headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
            "x-api-key": javelin_api_key,
        }
        self.history = []

    def basic_request(self, prompt: str, **kwargs):
        headers = self.javelin_headers
        data = {
            **kwargs,
            "model": self.model,
            "messages": [
                {"role": "user", "content": prompt}
            ]
        }
        response = requests.post(self.base_url, headers=headers, json=data)
        response = response.json()
        self.history.append({
            "prompt": prompt,
            "response": response,
            "kwargs": kwargs,
        })
        return response

    def __call__(self, prompt, only_completed=True, return_sorted=False, **kwargs):
        response = self.request(prompt, **kwargs)
        if 'choices' in response and len(response['choices']) > 0:
            first_choice_content = response['choices'][0]['message']['content']
            completions = [first_choice_content]
            return completions
        else:
            return ["No response found."]
javelin = Javelin(model="gpt-4-1106-preview", api_key=llm_api_key)
dspy.configure(lm=javelin)
# Define a module (ChainOfThought) and assign it a signature (return an answer, given a question).
qa = dspy.ChainOfThought('question -> answer')
response = qa(question="You have 3 baskets. The first basket has twice as many apples as the second basket. The third basket has 3 fewer apples than the first basket. If you have a total of 27 apples, how many apples are in each basket?")
print(response)
pip install boto3
import boto3
import json
import os
from javelin_sdk import (
    JavelinClient,
    JavelinConfig,
)
# Configure boto3 bedrock-runtime service client
bedrock_runtime_client = boto3.client(
    service_name="bedrock-runtime",
    region_name="us-east-1"
)
# Configure boto3 bedrock service client
bedrock_client = boto3.client(
    service_name="bedrock",
    region_name="us-east-1"
)
# Initialize Javelin Client
config = JavelinConfig(
    base_url=os.getenv('JAVELIN_BASE_URL'),
    javelin_api_key=os.getenv('JAVELIN_API_KEY')
)
client = JavelinClient(config)
# Passing bedrock_client is recommended for optimal error handling
# and request management, though it remains optional.
client.register_bedrock(
    bedrock_runtime_client=bedrock_runtime_client,
    bedrock_client=bedrock_client,
    route_name="bedrock"  # Universal route for the Amazon Bedrock models
)
# Example using Claude model via Bedrock Runtime
response = bedrock_runtime_client.invoke_model(
    modelId="anthropic.claude-v2:1",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 100,
        "messages": [
            {
                "content": "What is machine learning?",
                "role": "user"
            }
        ]
    }),
    contentType="application/json"
)
response_body = json.loads(response["body"].read())
print(f"Invoke Response: {json.dumps(response_body, indent=2)}")
Learn more about how to set up Universal Bedrock routes to use this example here.
# Example using Langchain
from langchain_community.llms.bedrock import Bedrock as BedrockLLM
llm = BedrockLLM(
    client=bedrock_runtime_client,
    model_id="anthropic.claude-v2:1",
    model_kwargs={
        "max_tokens_to_sample": 256,
        "temperature": 0.7,
    }
)
stream_generator = llm.stream("What is machine learning?")
for chunk in stream_generator:
    print(chunk, end='', flush=True)
Learn more about how to set up Universal Bedrock routes to use this example here.
JavaScript/TypeScript
- OpenAI
- Langchain
- Bedrock
- ...
npm install openai
import OpenAI from "openai";
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://api.javelin.live/v1/query/{your_route_name}",
  defaultHeaders: {
    "x-api-key": `${process.env.JAVELIN_API_KEY}`
  },
});

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [{ role: "system", content: "You are a helpful assistant." }],
    model: "gpt-3.5-turbo",
  });
  console.log(completion.choices[0]);
}

main();
npm install @langchain/openai
import { ChatOpenAI } from '@langchain/openai';
const llm = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  configuration: {
    basePath: "https://api.javelin.live/v1/query/{your_route_name}",
    defaultHeaders: {
      "x-api-key": `${process.env.JAVELIN_API_KEY}`
    },
  },
});

async function main() {
  const response = await llm.invoke("tell me a joke?");
  console.log(response);
}

main();
import { BedrockRuntimeClient, InvokeModelCommand, InvokeModelWithResponseStreamCommand } from "@aws-sdk/client-bedrock-runtime";
const customHeaders = {
  'x-api-key': JAVELIN_API_KEY
};

const client = new BedrockRuntimeClient({
  region: AWS_REGION,
  // Use the javelin endpoint for bedrock
  endpoint: JAVELIN_ENDPOINT,
  credentials: {
    accessKeyId: AWS_ACCESS_KEY_ID,
    secretAccessKey: AWS_SECRET_ACCESS_KEY,
  },
});

// Add custom headers via middleware
client.middlewareStack.add(
  (next, context) => async (args) => {
    args.request.headers = {
      ...args.request.headers,
      ...customHeaders
    };
    return next(args);
  },
  {
    step: "build"
  }
);
// Query the model
const payload = {
  anthropic_version: "bedrock-2023-05-31",
  max_tokens: 1000,
  messages: [
    {
      role: "user",
      content: "What is machine learning?",
    },
  ],
};

const command = new InvokeModelWithResponseStreamCommand({
  contentType: "application/json",
  body: JSON.stringify(payload),
  modelId: "anthropic.claude-v2:1",
});

const apiResponse = await client.send(command);

for await (const item of apiResponse.body) {
  console.log(item);
}
Learn more about how to set up Bedrock routes to use these examples here.
We have built out these integrations; please contact support@getjavelin.io if you would like to use this feature.