$0 forever

Your hardware.
Your models.
Full agent power.

Connect a local model running on your machine to everything 1K4 has to offer. Document editing. Code review. Parallel agents. File management. Deploy. All of it. For free.

Link up your Ollama to 1K4 Lab with a single command. No credits consumed. Ever.

Bridge connected
Local (Ollama)
gemma4:26b · tool calls
gemma4:e4b · tool calls
llama3.3:70b · chat
mistral-large · chat
$ npx @1key4ai/bridge --api-key sk-xxx

Your hardware. Your data. Free.

Need more speed for intense work days? We're building dedicated inference with flat daily passes. No per-token billing. Coming soon.

What happens when you connect

Your local models get the full Studio treatment. Same workspace, same tools, same capabilities.

Full agent capabilities
Document editing, code review, file management, parallel agents, deploy. Your local model drives the same agent runtime as cloud models.
Tool calls (Gemma)
Gemma 4 models support native tool calls: your local model can actually execute actions, not just generate text. Edit files, run commands, coordinate tasks.
Inference stays local
All model inference runs on your hardware. The Bridge relays orchestration signals between Studio and your Ollama instance. Your machine does the thinking.
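To make "inference stays local" concrete: every chat turn ultimately lands at your own Ollama instance's standard HTTP endpoint, `POST http://localhost:11434/api/chat`. The Bridge's actual wire protocol isn't documented here; the helper below is a purely illustrative sketch of the request shape your machine serves.

```python
# Illustrative sketch only: the real Bridge protocol between Studio and
# Ollama is not documented here. This shows the shape of a standard
# Ollama chat request (POST http://localhost:11434/api/chat) — the call
# that runs entirely on your own hardware.
def build_chat_request(model: str, messages: list) -> dict:
    return {
        "model": model,        # e.g. "gemma4:26b", as pulled via Ollama
        "messages": messages,  # standard role/content chat turns
        "stream": False,       # one JSON response instead of a token stream
    }

payload = build_chat_request(
    "gemma4:26b",
    [{"role": "user", "content": "Summarize this repo's README."}],
)
```

Whatever orchestration signals the Bridge relays, the tokens themselves are generated by this kind of local call; nothing in the prompt or response needs to leave your machine.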

Three steps. That's it.

1
Install Ollama
ollama.com → download → pull a model: ollama pull gemma4:26b
2
Run the Bridge
npx @1key4ai/bridge --api-key sk-xxx
Your models appear in Lab within seconds.
3
Use Lab
Open Lab. Select your local model from the model picker. Start working. No credits consumed.

Setting expectations

Local inference speed depends on your hardware. A 26B model on a MacBook Pro with 36GB RAM runs well. Bigger models need more muscle. That's physics, not us.

For the best agent experience, we recommend Gemma 4 models. They support native tool calls, which means the agent can actually do things, not just talk about doing things. Other models work great for conversation and brainstorming.

Model compatibility

Full agent experience
gemma4:26b · tool calls + chat
gemma4:e4b · tool calls + chat

Gemma 4 models handle native tool calling. The agent can edit files, run code, manage tasks, and coordinate subagents. Recommended for agentic work.
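For a sense of what "native tool calls" means at the API level: Ollama's /api/chat endpoint accepts OpenAI-style function definitions, and tool-capable models reply with structured tool calls rather than prose. The `edit_file` tool below is a hypothetical example for illustration, not Studio's actual tool surface.

```python
# Sketch of a tool definition in the OpenAI-style function schema that
# Ollama's /api/chat accepts via its "tools" parameter. The "edit_file"
# tool is a made-up example, not the agent's real tool surface.
def make_tool(name: str, description: str, params: dict) -> dict:
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": params,
                "required": list(params),
            },
        },
    }

edit_file = make_tool(
    "edit_file",
    "Replace the contents of a file in the workspace.",
    {"path": {"type": "string"}, "content": {"type": "string"}},
)
```

A tool-capable model given this schema can answer with a structured call naming `edit_file` and its arguments, which the agent runtime then executes; a chat-only model can only describe the edit in text.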

Conversational mode
llama3.3:70b · chat
mistral-large · chat
qwen3:32b · chat

Models without reliable tool call support work in conversational mode. Great for brainstorming, writing, analysis. Agent tool actions may be limited.

Plug in and play.

Free. Forever. No catch.

Connect Your Models

Requires Ollama installed locally. Works on macOS, Linux, and Windows.