Try Auth0 for AI Agents for $10 without worrying about pay-as-you-go billing
Introduction
Recently, Auth0 released "Auth0 for AI Agents."
I wanted to try it out immediately, so I checked the sample code in the quickstart.
However, the code appeared to require an API key for an LLM model in order to run.
For anyone wary of pay-as-you-go billing, that makes the barrier to trying it out quite high.
So here are some tips for setting up an environment where you can test things reasonably well for about $10.
Note that the sample applications are TypeScript only, using the Vercel AI SDK and LangChain.
However, since the steps are simple, I think you could easily port them to Python as well.
Steps
I will briefly describe the steps.
Note that these steps assume the following quickstart, but since similar code exists in other quickstarts, they should be applicable elsewhere.
- Configure OpenRouter by referring to the following article: https://zenn.dev/asap/articles/5cda4576fbe7cb
- Add the following variables to the `.env.example` of the sample code:

```
OPENAI_MODEL="google/gemini-2.0-flash-exp:free"
OPENAI_BASE_URL="https://openrouter.ai/api/v1"
```
- If using the Vercel AI sample code, modify `src/app/api/chat/route.ts` as follows:
```ts
import { NextRequest } from 'next/server';
import {
  streamText,
  UIMessage,
  createUIMessageStream,
  createUIMessageStreamResponse,
  convertToModelMessages,
  stepCountIs,
} from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { setAIContext } from '@auth0/ai-vercel';

const date = new Date().toISOString();

const AGENT_SYSTEM_TEMPLATE = `You are a personal assistant named Assistant0. You are a helpful assistant that can answer questions and help with tasks. You have access to a set of tools, use the tools as needed to answer the user's question. Render the email body as a markdown block, do not wrap it in code blocks. Today is ${date}.`;

// Define the provider so the base URL can be passed in from outside
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: process.env.OPENAI_BASE_URL,
});

export async function POST(req: NextRequest) {
  const { id, messages }: { id: string; messages: Array<UIMessage> } = await req.json();

  setAIContext({ threadID: id });

  const tools = {};

  const stream = createUIMessageStream({
    originalMessages: messages,
    execute: async ({ writer }) => {
      const result = streamText({
        // Allow the model name to be passed in from outside
        model: openai(process.env.OPENAI_MODEL || 'google/gemini-2.0-flash-exp:free'),
        system: AGENT_SYSTEM_TEMPLATE,
        messages: convertToModelMessages(messages),
        stopWhen: stepCountIs(5),
        tools,
      });

      writer.merge(
        result.toUIMessageStream({
          sendReasoning: true,
        }),
      );
    },
    onError: (err: any) => {
      console.log(err);
      return `An error occurred! ${err.message}`;
    },
  });

  return createUIMessageStreamResponse({ stream });
}
```
- For the sample code using LangChain, modify `src/lib/agent.ts` as follows:
```ts
import { createReactAgent, ToolNode } from '@langchain/langgraph/prebuilt';
import { ChatOpenAI } from '@langchain/openai';
import { InMemoryStore, MemorySaver } from '@langchain/langgraph';
import { Calculator } from '@langchain/community/tools/calculator';

const date = new Date().toISOString();

const AGENT_SYSTEM_TEMPLATE = `You are a personal assistant named Assistant0. You are a helpful assistant that can answer questions and help with tasks. You have access to a set of tools, use the tools as needed to answer the user's question. Render the email body as a markdown block, do not wrap it in code blocks. Today is ${date}.`;

// Define the model so its name and base URL can be passed in from outside
const llm = new ChatOpenAI({
  model: process.env.OPENAI_MODEL || 'gpt-4o-mini',
  temperature: 0,
  configuration: process.env.OPENAI_BASE_URL
    ? {
        baseURL: process.env.OPENAI_BASE_URL,
      }
    : undefined,
});

const tools = [new Calculator()];

const checkpointer = new MemorySaver();
const store = new InMemoryStore();

export const agent = createReactAgent({
  llm,
  tools: new ToolNode(tools, {
    handleToolErrors: false,
  }),
  prompt: AGENT_SYSTEM_TEMPLATE,
  store,
  checkpointer,
});
```
After that, follow the documentation for the remaining setup and you can run the app.
All we are doing is replacing hard-coded model definitions with ones whose configuration is supplied from outside.
This lets you point the model name and base URL at OpenRouter and run the chat.
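The pattern in both edits can be captured in one small helper: read the model name and base URL from the environment, with fallbacks when they are unset. This is just an illustrative sketch; `resolveLLMConfig` is a hypothetical name of my own, not part of the quickstart.

```typescript
// Hypothetical helper illustrating the pattern used above: model name and
// base URL come from the environment, with fallbacks when they are unset.
type LLMConfig = { model: string; baseURL?: string };

function resolveLLMConfig(env: Record<string, string | undefined>): LLMConfig {
  return {
    model: env.OPENAI_MODEL ?? 'google/gemini-2.0-flash-exp:free',
    // Leaving baseURL undefined keeps the provider's default endpoint.
    baseURL: env.OPENAI_BASE_URL,
  };
}

// With the .env values from this article, the config points at OpenRouter:
const cfg = resolveLLMConfig({
  OPENAI_MODEL: 'google/gemini-2.0-flash-exp:free',
  OPENAI_BASE_URL: 'https://openrouter.ai/api/v1',
});
console.log(cfg.baseURL); // https://openrouter.ai/api/v1
```

In the real code you would pass `process.env` instead of a literal object.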
Precautions
As described above, the steps are very simple.
Still, here are a few points to be aware of.
It's possible to do for free, but paying $10 is safer
OpenRouter offers free models, so you can run everything completely free if you want.
However, the free tier's rate limits are very strict, so paying just $10 to raise your OpenRouter limits makes testing much less stressful.
Personally, I find roughly 1,500 yen an acceptable cost.
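If you do stick with the free tier, rate-limit errors (typically HTTP 429) will show up regularly. A small retry-with-backoff wrapper, sketched below, makes that less painful; `withRetry` is a hypothetical helper of my own, not something the sample code includes.

```typescript
// Hypothetical retry helper for rate-limited calls (e.g. HTTP 429 from a
// free model). Retries up to `attempts` times with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait baseDelayMs, then 2x, 4x, ... before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

When experimenting with free models, you could wrap the `streamText` call (or any fetch to the chat endpoint) in this.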
Regarding the free models to use
While free models can be used, the sample code's implementation means you can only run models that support tool calling ("tools").
So when looking for a free model, check whether it supports "tools."
The model used in this article does.
Also, if you prefer a different model, opening the URL below will search for models that are free and support tools.
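If you'd rather check programmatically, OpenRouter publishes its model catalog at `GET https://openrouter.ai/api/v1/models`. The sketch below filters such a list for free models that advertise tool support; the field names (`pricing`, `supported_parameters`) are my assumptions about the response shape, so verify them against OpenRouter's API docs.

```typescript
// Hedged sketch: pick out free, tool-capable models from an OpenRouter-style
// model list. Field names are assumptions; check OpenRouter's API docs.
type ORModel = {
  id: string;
  pricing?: { prompt?: string; completion?: string };
  supported_parameters?: string[];
};

function freeToolModels(models: ORModel[]): string[] {
  return models
    .filter(
      (m) =>
        m.pricing?.prompt === '0' &&
        m.pricing?.completion === '0' &&
        (m.supported_parameters ?? []).includes('tools'),
    )
    .map((m) => m.id);
}
```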
That's all.
I'm looking forward to making more use of this in the future!