Rapidly iterate on your system and instruction prompts

Move fast with your LLM apps,

Without breaking things!


Felafax Gateway lets you safely roll out prompt changes from 1% to 100% of users in your LLM apps. No more painstaking evals: test changes live on a small slice of traffic!

Why choose Felafax Gateway?

Controlled prompt change rollouts

Introduce "shadow" prompts and run them on 1% of live traffic alongside your main prompt. Roll out the new prompt to more users as you gain confidence, minimizing risk and iterating rapidly with prompt tweaks.
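The gateway handles this rollout server-side via the dashboard, but the underlying idea is simple percentage bucketing: hash each user into a bucket so the same user always sees the same prompt while roughly N% of users land on the shadow variant. A minimal sketch of that idea (the function and variant names here are illustrative, not Felafax's API):

```python
import hashlib

def assign_variant(user_id: str, shadow_pct: float) -> str:
    """Deterministically bucket a user into the 'shadow' or 'main' prompt.

    Hashing the user id keeps assignment stable across requests, while
    roughly shadow_pct of users fall into the shadow bucket.
    """
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "shadow" if bucket < shadow_pct else "main"

# The same user is always routed the same way.
assert assign_variant("user-42", 0.01) == assign_variant("user-42", 0.01)
```

Raising `shadow_pct` from 0.01 toward 1.0 is what "rolling out from 1% to 100%" means mechanically: more buckets flip to the shadow prompt, and already-assigned users never flap between variants.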

Switch between GPT-4o and Claude 3.5

We've built a high-performance Rust-based LLM router that adds virtually zero latency. A one-line change lets you switch between different LLM models.

Live eval with "shadow" traffic

No need to painstakingly create eval datasets. Fork production traffic to test multiple prompt versions simultaneously. We also have built-in metrics like helpfulness and hallucination.
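The gateway performs this fork server-side, so your application only ever sends one request. Conceptually, forking means building two otherwise-identical requests that differ only in the system prompt; a small illustrative sketch (helper name and model string are placeholders, not Felafax's API):

```python
def fork_request(user_message: str, main_prompt: str, shadow_prompt: str):
    """Build two chat requests that differ only in the system prompt.

    Illustrates the idea behind shadow traffic: the user turn is shared,
    so any difference in the responses is attributable to the prompt.
    """
    def build(system_prompt: str) -> dict:
        return {
            "model": "gpt-4o",  # placeholder model name
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_message},
            ],
        }
    return build(main_prompt), build(shadow_prompt)

main, shadow = fork_request("Summarize this ticket.", "Be concise.", "Be detailed.")
assert main["messages"][1] == shadow["messages"][1]  # same user turn
```

Because everything except the system prompt is held constant, production traffic itself becomes the eval dataset.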

Semantic Search

We log all your LLM API requests/responses for enhanced observability and provide powerful semantic search through your logs.

Continuous Evals

Configure a subset of your traffic (like 1%) and run out-of-the-box eval metrics to continuously monitor your app. No need to rely on "vibe" checks for your LLM setups.
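Felafax's built-in metrics run server-side, but the sampling mechanic is easy to picture: gate a small fraction of responses into a scorer and watch the rolling average. A rough sketch with an illustrative class and a stand-in scoring function (none of these names are Felafax's API):

```python
import random

class EvalSampler:
    """Route roughly `rate` of responses to a scoring function and
    track the running average, standing in for a hosted eval metric."""

    def __init__(self, rate: float, seed: int = 0):
        self.rate = rate
        self.rng = random.Random(seed)  # seeded for reproducibility
        self.scores = []

    def maybe_eval(self, response_text: str, score_fn) -> None:
        """Sample this response into the eval set with probability `rate`."""
        if self.rng.random() < self.rate:
            self.scores.append(score_fn(response_text))

    @property
    def mean_score(self) -> float:
        return sum(self.scores) / len(self.scores) if self.scores else float("nan")
```

A dashboard alert on `mean_score` drifting downward is the "continuous" part: regressions surface from live traffic instead of ad-hoc spot checks.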

from openai import OpenAI

client = OpenAI(
    # default base_url is https://api.openai.com/v1/
    base_url="https://openai.felafax.ai/v1",
)

Onboard with one line

Integrate Felafax Gateway by simply changing the base_url in your OpenAI client. This one-line modification gives you instant access to our advanced LLM management features without disrupting your existing workflow.

How our Gateway works

01
Set base_url in the OpenAI Python client

Point your OpenAI client at the Felafax Proxy with a simple base_url change.

02
View live traffic on our dashboard

Instantly see API requests and responses live on our dashboard after changing the base_url.

03
Rollout new prompt changes with "shadow" prompts

Introduce "shadow" prompts in your Python code and use our dashboard to test them with 1% of your users!

FAQ

Frequently Asked Questions

How does it work?

Felafax Gateway routes your OpenAI requests through our endpoint and supports dynamic updates to the system prompt and other fields. This enables features like percentage-controlled rollouts, experiments, and more.


What is the overhead of using the gateway?
How do rollouts work?
What eval metrics do you support?
Is a free trial available?
What is continuous eval?

Upgrade your LLM workflow today!

Felafax Gateway enables safe prompt change rollouts from 1% to 100% of users, with no eval datasets needed. Experience real-time testing, zero-latency routing, and advanced analytics.

Copyright © 2024 Felafax. All Rights Reserved
