Applications

It's easy to integrate applications that leverage LLMs with Javelin. You can seamlessly connect your applications and route all LLM traffic through Javelin with minimal code changes.

Leveraging the Javelin Platform

The core usage of Javelin is to define routes and then specify what happens at each route. Rather than having your LLM applications (such as co-pilot apps) point individually and directly to an LLM vendor and model (such as OpenAI or Gemini), configure the provider/model endpoint to be your Javelin endpoint. This ensures that all applications that leverage AI models route their requests through the gateway. Javelin supports all the latest models and providers, so you don't have to change your application or how requests to models are sent.
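
For example, with the OpenAI Python client, pointing an application at Javelin is typically just a base-URL change. The following is a minimal sketch mirroring the JavaScript example later in this page; your_route_name and the API keys are placeholders for your own values:

import os
from openai import OpenAI

# Point the client at a Javelin route instead of the provider's API.
# "your_route_name" is a placeholder for a route configured on the gateway.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],  # forwarded to the LLM provider
    base_url="https://api.javelin.live/v1/query/your_route_name",
    default_headers={"x-api-key": os.environ["JAVELIN_API_KEY"]},  # authenticates with Javelin
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)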

See the Javelin Configuration section for details on how to set up routes on the gateway to different models and providers.

See the Python SDK for details on how you can easily embed this within your AI apps.

Unified Endpoints

The Unified Endpoints provide a consistent API interface that abstracts the provider-specific details of various AI services. Whether you are interfacing with an OpenAI-compatible service, an Azure OpenAI deployment, or an AWS Bedrock API, these endpoints enable you to use a standardized request/response format. This documentation explains the available endpoints, their purpose, and usage examples.

  1. Single Entry Points: Instead of routing to different URLs for each provider, you call these “unified” endpoints with specific route parameters or path segments (e.g., /completions, /chat/completions, /embeddings, or deployments/{deployment}/completions in the case of Azure).
  2. Provider-Agnostic Handling: A common handler (e.g., queryHandler(appState)) receives each request and delegates it to the appropriate provider logic based on URL parameters like providername or deployment.
  3. Consistent Request/Response Shapes: All requests follow a uniform structure (for example, a JSON object with a prompt, messages, or input for embeddings). The service then translates it to each provider’s specific API format as needed, as the sketch after this list illustrates.
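
The following minimal sketch, using Python's requests library, shows the same payload sent to two different providers with only the path segment changing. The domain matches the placeholder used in the examples below, and auth headers are omitted for brevity:

import requests

# One payload, multiple providers: only the {providername} path segment changes.
payload = {
    "messages": [{"role": "user", "content": "Tell me a story"}],
    "max_tokens": 50,
}

for provider in ("openai", "mistral"):  # any OpenAI-compatible provider name
    resp = requests.post(
        f"https://your-api-domain.com/v1/{provider}/chat/completions",
        json=payload,
    )
    print(provider, resp.status_code)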

Endpoint Breakdown

1. OpenAI-Compatible Endpoints

These endpoints mirror the standard OpenAI API methods. They allow you to perform common AI tasks such as generating text completions, handling chat-based requests, or producing embeddings.

Endpoints

  • POST /{providername}/completions
    Request text completions from the provider.
    Path Parameter:

    • providername: Identifier for the OpenAI-compatible provider.
  • POST /{providername}/chat/completions
    Request chat-based completions (ideal for conversational interfaces).
    Path Parameter:

    • providername: Identifier for the provider.
  • POST /{providername}/embeddings
    Generate embeddings for provided text data.
    Path Parameter:

    • providername: Identifier for the provider.

Example Usage


curl -X POST "https://your-api-domain.com/v1/openai/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Once upon a time",
    "max_tokens": 50
  }'

Replace openai with the appropriate OpenAI API-compatible provider name (e.g., azure, mistral, deepseek) as required.

2. Azure OpenAI API Endpoints

For providers using Azure’s deployment model, endpoints include an additional parameter for deployment management.

Endpoints

  • POST /{providername}/deployments/{deployment}/completions
    Request text completions from the provider.
    Path Parameters:

    • providername: The Azure OpenAI provider identifier.
    • deployment: The deployment ID configured in Azure.
  • POST /{providername}/deployments/{deployment}/chat/completions
    Request chat-based completions (ideal for conversational interfaces).
    Path Parameters:

    • providername: The Azure OpenAI provider identifier.
    • deployment: The deployment ID configured in Azure.
  • POST /{providername}/deployments/{deployment}/embeddings
    Generate embeddings for provided text data.
    Path Parameters:

    • providername: The Azure OpenAI provider identifier.
    • deployment: The deployment ID configured in Azure.

Example Usage


curl -X POST "https://your-api-domain.com/v1/azure/deployments/my-deployment/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Tell me a story"}
    ],
    "max_tokens": 50
  }'
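
The same deployment-scoped pattern applies to embeddings. Here is a minimal sketch using Python's requests library; the domain and deployment name are the placeholders from the curl example above, and auth headers are omitted for brevity:

import requests

# Embeddings against an Azure OpenAI deployment routed through the gateway.
# "my-deployment" is the placeholder deployment ID from the example above;
# add your provider and Javelin auth headers as shown in the Query Endpoints
# examples later in this page.
resp = requests.post(
    "https://your-api-domain.com/v1/azure/deployments/my-deployment/embeddings",
    headers={"Content-Type": "application/json"},
    json={"input": "Once upon a time"},
)
print(resp.json())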

3. AWS Bedrock API Endpoints

For AWS Bedrock–style providers, the endpoints use a slightly different URL pattern to accommodate model versioning and extended routing.

Endpoints

  • POST /model/{routename}/{apivariation}
    Route requests to a specific model and API variation.
    Path Parameters:

    • routename: The model or route name (identifies a specific AWS Bedrock model).
    • apivariation: The API variation ("Invoke", "Invoke-Stream", "Invoke-With-Response-Stream", "Converse", "Converse-Stream") or version.

Example Usage


curl -X POST "https://your-api-domain.com/v1/model/anthropic.claude-3-sonnet-20240229-v1:0/invoke" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "What is the capital of France?"
  }'
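
For the Converse variation, the request body follows AWS Bedrock's Converse message shape. A minimal sketch in Python, with the placeholder domain from above and auth headers omitted:

import requests

# Converse-variation request; the message shape follows AWS Bedrock's
# Converse API (content is a list of blocks rather than a plain string).
resp = requests.post(
    "https://your-api-domain.com/v1/model/anthropic.claude-3-sonnet-20240229-v1:0/converse",
    headers={"Content-Type": "application/json"},
    json={
        "messages": [
            {"role": "user", "content": [{"text": "What is the capital of France?"}]}
        ]
    },
)
print(resp.json())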

4. Query Endpoints

These endpoints allow direct querying of predefined routes, bypassing provider-specific names when a generic and customizable route configuration is desired.

Endpoints

  • POST /query/{routename}
    Execute a query against a specific route.
    Path Parameter:

    • routename: The name of the configured route. The gateway dispatches the request to one or more models according to the route's configured policies and returns the response.

Example Usage

REST API

First, create a route as shown in the Create Route section.

Once you have created a route, you can query it using the following curl command:

curl 'https://api.javelin.live/v1/query/your_route_name' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_OPENAI_API_KEY' \
  -H 'x-api-key: YOUR_JAVELIN_API_KEY' \
  --data-raw '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "San Francisco is located in?"}
    ],
    "temperature": 0.8
  }'

Make sure to replace your_route_name, YOUR_OPENAI_API_KEY, and YOUR_JAVELIN_API_KEY with your actual values.

Python

pip install javelin-sdk

Query Route with Javelin SDK

import os

from javelin_sdk import JavelinClient, JavelinConfig, Route

javelin_api_key = os.getenv("JAVELIN_API_KEY")
llm_api_key = os.getenv("OPENAI_API_KEY")

# Create Javelin configuration
config = JavelinConfig(
    base_url="https://api.javelin.live",
    javelin_api_key=javelin_api_key,
    llm_api_key=llm_api_key,
)

# Create Javelin client
client = JavelinClient(config)

# Route name to query is {routename}, e.g., sampleroute1
query_data = {
    "messages": [
        {
            "role": "system",
            "content": "Hello, you are a helpful scientific assistant.",
        },
        {
            "role": "user",
            "content": "What is the chemical composition of sugar?",
        },
    ],
    "temperature": 0.8,
}

# Now query the route; for async, use 'await client.aquery_route("sampleroute1", query_data)'
response = client.query_route("sampleroute1", query_data)
print(response.model_dump_json(indent=2))

JavaScript/TypeScript

npm install openai

OpenAI API Integration Example

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://api.javelin.live/v1/query/{your_route_name}",
  defaultHeaders: {
    "x-api-key": `${process.env.JAVELIN_API_KEY}`,
  },
});

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [{ role: "system", content: "You are a helpful assistant." }],
    model: "gpt-3.5-turbo",
  });

  console.log(completion.choices[0]);
}

main();