Sometimes you want to create a pipeline where some requests are processed on-device and the more complex ones are handled in the cloud by a larger model. With uzu, you can do this easily: just choose the cloud model you want to use and perform all requests through the same API.
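
Here is a minimal sketch of that routing idea, assuming a naive prompt-length heuristic and a hypothetical replyOnDevice helper for the local path (its implementation depends on your on-device setup and is not shown here); the cloud branch uses the same calls as the snippet later in this guide.

import Engine from '@trymirai/uzu';

// Hypothetical helper for the on-device path; how it is implemented depends
// on your local setup and is outside the scope of this quickstart
declare function replyOnDevice(prompt: string): Promise<string>;

// Route each request: simple prompts stay on-device, complex ones go to the
// cloud model through the same reply() API
async function answer(prompt: string): Promise<string> {
    const needsCloud = prompt.length > 200; // naive complexity heuristic

    if (!needsCloud) {
        return replyOnDevice(prompt);
    }

    const output = await Engine
        .create('API_KEY')
        .chatModel('openai/gpt-oss-120b')
        .reply(prompt);
    return output.text.original;
}
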
1. Make a new project folder

mkdir demo && cd demo
2. Initialize with pnpm

pnpm init
3. Install dependencies

pnpm add typescript ts-node @types/node -D
pnpm add @trymirai/uzu
4. Initialize a tsconfig.json

{
    "compilerOptions": {
        "target": "es2020",
        "module": "commonjs",
        "moduleResolution": "node",
        "strict": true,
        "esModuleInterop": true,
        "outDir": "dist",
        "types": [
            "node"
        ]
    },
    "include": [
        "*.ts"
    ]
}
5. Get an API key

Go to Platform and follow this guide.
6. Create main.ts

Don’t forget to add your API key.
import Engine from '@trymirai/uzu';

async function main() {
    // Replace 'API_KEY' with the key you obtained from the Platform
    const output = await Engine
        .create('API_KEY')
        .chatModel('openai/gpt-oss-120b')
        .reply('How LLMs work');
    // Print the plain text of the model's reply
    console.log(output.text.original);
}

main().catch((error) => {
    console.error(error);
});
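
Instead of hardcoding the key, you can read it from an environment variable and pass it to Engine.create(apiKey). This is an example only; MIRAI_API_KEY is an arbitrary variable name chosen for this sketch.

// Example only: read the key from an environment variable rather than
// hardcoding it in source; MIRAI_API_KEY is an arbitrary name for this sketch
const apiKey = process.env.MIRAI_API_KEY;
if (!apiKey) {
    throw new Error('Set the MIRAI_API_KEY environment variable first');
}
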
7. Run the snippet

pnpm ts-node main.ts
Now that we’ve tried the simplest snippet, let’s take a look at the step-by-step integration guide.