Your hardware.
Your models.
Full agent power.
Connect a local model running on your machine to everything 1K4 has to offer. Document editing. Code review. Parallel agents. File management. Deploy. All of it. For free.
Link your local Ollama to 1K4 Lab with a single command. No credits consumed. Ever.
Your hardware. Your data. Free.
Need more speed for intense work days? We're building dedicated inference with flat daily passes. No per-token billing. Coming soon.
What happens when you connect
Your local models get the full Studio treatment. Same workspace, same tools, same capabilities.
Three steps. That's it.
ollama pull gemma4:26b
npx @1key4ai/bridge --api-key sk-xxx
Your models appear in Lab within seconds.
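Once the bridge is running, you can confirm which models your local Ollama is exposing: Ollama serves its model list from the GET /api/tags endpoint on localhost. This sketch parses a response of that shape; the sample payload below is illustrative, not real output:

```python
import json

# Example response shape from Ollama's GET /api/tags endpoint
# (sample data for illustration; your actual model list will differ).
sample_response = json.dumps({
    "models": [
        {"name": "gemma4:26b", "size": 15_600_000_000},
    ]
})

def model_names(api_tags_json: str) -> list[str]:
    """Extract model names from an /api/tags response body."""
    payload = json.loads(api_tags_json)
    return [m["name"] for m in payload.get("models", [])]

print(model_names(sample_response))  # the names the bridge surfaces in Lab
```

Any model listed here is what the bridge forwards, so if a model is missing in Lab, this is the first place to check.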
Setting expectations
Local inference speed depends on your hardware. A 26B model on a MacBook Pro with 36GB RAM runs well. Bigger models need more muscle. That's physics, not us.
For the best agent experience, we recommend Gemma 4 models. They support native tool calls, which means the agent can actually do things, not just talk about doing things. Other models work great for conversation and brainstorming.
Model compatibility
Gemma 4 models handle native tool calling. The agent can edit files, run code, manage tasks, and coordinate subagents. Recommended for agentic work.
Models without reliable tool call support work in conversational mode. Great for brainstorming, writing, analysis. Agent tool actions may be limited.
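Tool calling works by advertising function schemas alongside the chat request, so the model can return a structured call instead of prose. A minimal sketch of the payload shape Ollama's /api/chat endpoint accepts via its tools parameter; the edit_file tool here is a hypothetical example, not part of the bridge:

```python
import json

# Hypothetical tool definition. The schema shape follows Ollama's
# /api/chat "tools" parameter (OpenAI-style function schemas).
edit_file_tool = {
    "type": "function",
    "function": {
        "name": "edit_file",
        "description": "Apply a text edit to a file in the workspace.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "content": {"type": "string"},
            },
            "required": ["path", "content"],
        },
    },
}

request = {
    "model": "gemma4:26b",
    "messages": [{"role": "user", "content": "Rename the heading in README.md"}],
    "tools": [edit_file_tool],  # omitted entirely for conversational-only models
}

body = json.dumps(request)  # what a client would POST to /api/chat
```

Models without tool-call support simply never emit a structured call for these schemas, which is why they fall back to conversational mode.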
Plug in and play.
Free. Forever. No catch.
Connect Your Models
Requires Ollama installed locally. Works on macOS, Linux, and Windows.