
Applications

It's easy to integrate LLM-powered applications with Javelin: with minimal code changes, you can seamlessly route all of your application's LLM traffic through the gateway.

Leveraging the Javelin Platform

The core usage of Javelin is to define routes and then to define what happens at each route. Rather than having your LLM applications (such as co-pilot apps) point directly at an LLM vendor and model (such as OpenAI or Gemini), configure the provider/model endpoint to be your Javelin endpoint. This ensures that every application that leverages AI models routes its requests through the gateway. Javelin supports all the latest models and providers, so you don't have to change your application or how requests to models are sent.
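In practice, the only change is the request URL and one extra header. The sketch below illustrates the switch; the URLs, route name, and key placeholders are illustrative, not real values.

```python
# Before: the app talks to the provider directly.
provider_endpoint = "https://api.openai.com/v1/chat/completions"

# After: the same app points at a Javelin gateway route instead.
# The request body, model name, and provider key stay exactly the same.
javelin_endpoint = "https://api-dev.javelin.live/v1/query/my_route"

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_OPENAI_API_KEY",  # provider key, passed through
    "x-api-key": "YOUR_JAVELIN_API_KEY",            # Javelin gateway key (the one addition)
}

print(javelin_endpoint)
```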

See the Javelin Configuration section for details on how to set up routes on the gateway to different models and providers.

See the Python SDK section for details on how you can easily embed this within your AI apps.

Querying an LLM

Based on the configured policies and route configuration, Javelin may send a request to one or more models and return a response.

REST API

First, create a route as shown in the Create Route section.

Once you have created a route, you can query it using the following curl command:

curl 'https://api-dev.javelin.live/v1/query/your_route_name' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_OPENAI_API_KEY' \
  -H 'x-api-key: YOUR_JAVELIN_API_KEY' \
  --data-raw '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "Where is San Francisco located?"}
    ],
    "temperature": 0.8
  }'

Make sure to replace your_route_name, YOUR_OPENAI_API_KEY, and YOUR_JAVELIN_API_KEY with your actual values.
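The same request can be built in Python with only the standard library. This is a sketch of the curl command above; the route name and key placeholders must be replaced with your actual values, and the send is commented out so the snippet runs without credentials.

```python
import json
import urllib.request

# Same body as the curl example.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Where is San Francisco located?"}
    ],
    "temperature": 0.8,
}

req = urllib.request.Request(
    "https://api-dev.javelin.live/v1/query/your_route_name",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_OPENAI_API_KEY",
        "x-api-key": "YOUR_JAVELIN_API_KEY",
    },
    method="POST",
)

# To actually send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
print(req.get_method(), req.full_url)
```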

Python

pip install javelin-sdk
Query Route with Javelin SDK
from javelin_sdk import JavelinClient, JavelinConfig
import os

javelin_api_key = os.getenv("JAVELIN_API_KEY")
llm_api_key = os.getenv("OPENAI_API_KEY")

# Create Javelin configuration
config = JavelinConfig(
    base_url="https://api-dev.javelin.live",
    javelin_api_key=javelin_api_key,
    llm_api_key=llm_api_key,
)

# Create Javelin client
client = JavelinClient(config)

# Route name to query is {routename}, e.g., sampleroute1
query_data = {
    "messages": [
        {
            "role": "system",
            "content": "Hello, you are a helpful scientific assistant.",
        },
        {
            "role": "user",
            "content": "What is the chemical composition of sugar?",
        },
    ],
    "temperature": 0.8,
}

# Query the route; for async, use 'await client.aquery_route("sampleroute1", query_data)'
response = client.query_route("sampleroute1", query_data)
print(response.model_dump_json(indent=2))

JavaScript/TypeScript

npm install openai
OpenAI API Integration Example
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://api-dev.javelin.live/v1/query",
  defaultHeaders: {
    "x-api-key": `${process.env.JAVELIN_API_KEY}`,
    "x-javelin-route": "sample_route1",
  },
});

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [{ role: "system", content: "You are a helpful assistant." }],
    model: "gpt-3.5-turbo",
  });

  console.log(completion.choices[0]);
}

main();