Inside Claude Code · 5 named experts · pay per call

1K4 Quorum.
Your second opinion, before you ship.

Code Auditor for security review. Research Advisor for grounded investigation. Analysis Advisor for the math. Counterpoint Advisor to pressure-test what you already believe. One install, five specialists, and a multi-round brainstorm mode that gets them arguing with each other.

From $0.18 per consult · 20 min max wait · Cost cap on every call · No subscription — credits never expire
The quorum

Five experts, each with a job to do.

Every advisor is a curated combination of model, system prompt, and locked settings (reasoning effort, web access, code execution). No prompt engineering required from you. Just call the one whose specialty matches the question.
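Concretely, "curated combination" means each advisor ships as a frozen bundle of model, prompt, and locked settings. The shape below is an illustrative sketch only — field names and the prompt string are assumptions, not the real 1K4 schema; the example values mirror the Code Auditor card (high reasoning, no web) and the model assignment stated in the FAQ:

```typescript
// Illustrative sketch: an advisor as a frozen bundle of model, prompt,
// and locked settings. Field names are assumptions, not the 1K4 schema.
interface AdvisorConfig {
  slug: string;
  model: string;
  systemPrompt: string;
  reasoningEffort: "low" | "medium" | "high";
  webAccess: boolean;
  codeExecution: boolean;
}

// Values mirror the Code Auditor card: high reasoning effort, no web access.
const codeAuditor: AdvisorConfig = {
  slug: "code_auditor",
  model: "gpt-5.4",
  systemPrompt: "Review the supplied source adversarially for security and correctness.",
  reasoningEffort: "high",
  webAccess: false,
  codeExecution: false,
};
```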

Code Auditor

Security and correctness review of supplied source. Reads your code adversarially. Returns severity-tagged findings with file:line evidence and concrete fixes.

"Review middleware.py for session fixation and CSRF gaps."
no web · reasoning: high · structured findings

Research Advisor

Source-grounded investigation with web access. Distinguishes consensus views from emerging and contested ones. Cites every claim, flags stale sources, and ends on what would change the answer.

"Compare Kafka vs Pulsar for our event-ingest service. Last 12 months only."
web search · cites sources · uncertainty markers

Analysis Advisor

Numerical reasoning and algorithm-correctness verification. Runs code to validate, shows its work, lists assumptions explicitly so you can confirm or challenge them.

"Is the throughput estimate in this design doc realistic at 50K eps?"
code execution · web search · shows working

Counterpoint Advisor

Deliberately hostile to your framing. Surfaces your weakest load-bearing assumptions and pressure-tests them. Will not concede unless given new evidence. Different model than the others, by design.

"What am I overconfident about in this rate-limit plan?"
adversarial · different model · no validation

Budget Advisor

Cheap second-opinion sanity check. Two-to-three-sentence verdict on whether your direction holds up. The point is to be brief and fast; verbose output defeats it.

"Quick yes/no: does this approach handle the multi-tenant case?"
terse · cheap · low-stakes
How it works

Three steps. One terminal session.

1

Install once

Run a single npx command from the Lab settings page. It writes one entry to your ~/.claude.json. No daemon, no config files, no API keys to juggle.

2

Ask in plain English

In Claude Code, say “use the Code Auditor on this file” or “ask the Counterpoint Advisor to pressure-test my plan.” Claude Code picks the right MCP tool. No special syntax.

3

Get findings, keep moving

Each consult returns structured findings or grounded prose, plus your exact cost. Set a per-call cap if you want one. New advisors and prompts ship server-side — no re-install ever.

claude — ~/projects/web · zsh
Use the Code Auditor on backend/auth/middleware.py. Worried about session fixation.

Reading the file. Calling Code Auditor with your focus areas.
⏵ tool · 1k4_consult advisor: "code_auditor" request: "Security review. Focus: session fixation." artifacts: [{ type: "file", path: "backend/auth/middleware.py", ... }]
[ polling /api/mcp/consult/cns_a8b2.../status every 5s ]
[ status: complete · 47s ]
Code Auditor · your cost $0.18 · 6,401 tokens · 47s
3 findings · 1 critical · 1 important · 1 advisory
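Under the hood, the tool call in that transcript reduces to a small request payload. A hedged sketch of how the bridge might assemble it — field names are taken from the transcript above; the builder function itself is hypothetical:

```typescript
// Sketch of the consult payload shown in the transcript. Field names come
// from the transcript; this builder is illustrative, not the real bridge.
interface ConsultArtifact {
  type: "file";
  path: string;
}

interface ConsultRequest {
  advisor: string;
  request: string;
  artifacts: ConsultArtifact[];
  cost_cap?: number; // optional per-call cap in USD
}

function buildConsult(
  advisor: string,
  request: string,
  files: string[],
  costCap?: number,
): ConsultRequest {
  return {
    advisor,
    request,
    artifacts: files.map((path) => ({ type: "file" as const, path })),
    // Only attach a cap when the caller sets one; uncapped calls omit the field.
    ...(costCap !== undefined ? { cost_cap: costCap } : {}),
  };
}
```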
In the wild

Four ways teams already use this.

These are the patterns we keep seeing. Each one shows the workflow, the cost, and the time. Costs are illustrative; real numbers depend on artifact size and which advisor you pick.

i.

Pre-merge security audit

Surface: Claude Code · Code Auditor · Mode: Single consult · Cost: ~$0.18 · Time: ~45s

Maya is a senior backend engineer. A teammate has a PR adding new auth middleware. She has fifteen minutes before standup. She hands the file to Code Auditor with two specific concerns (session fixation, redirect validation) and gets back three findings tagged by severity, each with file:line evidence.

Result. Maya copy-pastes each finding into separate PR comments. Marks the critical and important as Request changes. Walks into standup with concrete blockers in hand instead of a gut-feel approve.
Critical
Session fixation: session ID not regenerated post-login. An attacker who plants a known session ID before login retains access after the victim authenticates.
Evidence: middleware.py:47 set_user_id updates payload but doesn’t session.regenerate_id().
Important
No CSRF token check on state-changing routes. Logout and password-change handlers accept POST without verifying a CSRF token.
Evidence: middleware.py:71-86 — chain missing verify_csrf.
Advisory
Open redirect via next query param. Post-login redirect honors any URL passed in ?next=.
Evidence: middleware.py:102 — add ALLOWED_HOSTS check.
ii.

Architecture decision with current sources

Surface: 1K4 Studio · Research Advisor · Mode: Consult + web · Cost: ~$1.42 · Time: ~3 min

Diego is designing an event-ingestion service. He’s narrowed the choice to Kafka or Pulsar but the team hasn’t evaluated since 2023. He asks Research Advisor for a comparison limited to the last 12 months. The advisor searches current docs, benchmarks, and CVE history.

Ninety minutes of HN-thread reading collapses to three minutes of grounded synthesis with citations. Diego pastes the table into his design doc and adds a one-line POC plan to validate Pulsar’s tiered storage at sustained throughput.

Result. Reviewers stop raising “did you actually look at Pulsar” gaps in the design doc. The advisor flagged uncertainty inline so Diego knew exactly what to validate.
1K4 Studio · event-ingest project
activate_capability consult
activated
consult
advisor: research_advisor running · web search × 7 · reading docs · 178s · $1.42 / 31,200 tokens
Comparison ready · cited Confluent (2026-01), StreamNative (2026-02), CNCF benchmark (2025-11)
Recommendation: if geo-replication ergonomics dominates the decision, Pulsar wins. If team-knowledge and ecosystem maturity dominate, Kafka with MirrorMaker 2 is the safer choice. [uncertain: Pulsar's tiered-storage performance at 50K eps has fewer independent benchmarks; flag for a small POC before commit]
iii.

Pre-deploy red team brainstorm

Surface: Claude Code · Code Auditor + Counterpoint · Mode: 3-round brainstorm · Cost: ~$4.85 · Time: ~7 min

Priya is migrating session storage from Redis to Postgres next Friday. The plan includes a dual-write window. She wants two critical perspectives, not one — and she wants them to see each other’s takes between rounds.

Brainstorm mode runs Code Auditor and Counterpoint for three rounds. Round 2 cross-pollinates: each advisor sees the other’s prior take. Round 3 returns a joint synthesis: 5 critical risks, 2 mitigations, 3 questions for the team.

Result. Migration moves to Tuesday morning. Team adds a transactional outbox for Redis→Postgres consistency. Two critical issues that would have surfaced as production incidents are addressed before code runs.
Round 1 · independent takes
Code Auditor
“log and continue” on partial dual-write failure is a silent data divergence vector. Need an explicit reconciliation worker or a transactional outbox.
Counterpoint
Plan assumes Postgres p99 read latency stays under 5ms under peak load. Has anyone load-tested with the production workload mix? Where’s the artifact?
Round 2 · cross-pollinated
Counterpoint · sees Auditor’s take
Friday afternoon deploy compounds both concerns: observability drops as the team logs off, and a rollback that goes wrong extends into the weekend. Push to Tuesday.
Round 3 · synthesis
5 critical risks: silent divergence on partial failure; latency regression unverified; no circuit breaker between dual-write modes; Friday window; rollback assumes Redis parity.
2 mitigations: transactional outbox; shadow-traffic load test.
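The round structure in this pattern is simple to state precisely. A minimal sketch, with the advisor calls stubbed out as a callback — in the real product each take is a full consult:

```typescript
// Sketch of multi-round brainstorm: round 1 is independent, later rounds
// pass each advisor the other advisors' prior takes ("cross-pollination").
// The `ask` callback stands in for a real advisor consult.
type Take = { advisor: string; text: string };

function runBrainstorm(
  advisors: string[],
  rounds: number,
  ask: (advisor: string, priorTakes: Take[]) => string,
): Take[][] {
  const history: Take[][] = [];
  for (let round = 0; round < rounds; round++) {
    // Round 1 sees nothing; round N sees round N-1's takes.
    const prior = round === 0 ? [] : history[round - 1];
    history.push(
      advisors.map((advisor) => ({
        advisor,
        // Cross-pollinate: each advisor sees only the *other* advisors' takes.
        text: ask(advisor, prior.filter((t) => t.advisor !== advisor)),
      })),
    );
  }
  return history;
}
```

A final synthesis pass over `history` (not shown) would produce the joint risks-and-mitigations summary described above.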
iv.

Plan refinement before the formal review gate

Surface: Claude Code (plan mode) · Counterpoint · Mode: 2-round single-advisor brainstorm · Cost: ~$0.92 · Time: ~2 min

Sam has drafted a rate-limiting plan. He could run formal Plan Reviewer on it now, but a failed gate burns a paid round. He runs Counterpoint for two cheap rounds first to find what he’s overconfident about.

Counterpoint surfaces four weak load-bearing assumptions (TOCTOU race in the read-then-decide flow, IP-based limits breaking under shared NAT, window-alignment ambiguity, retry behavior). Sam fixes three, notes the fourth as a follow-up, then runs the formal review.

Result. $0.92 + $3.00 = $3.92 total, versus a hypothetical $6+ for two rounds of formal review catching the same gaps with worse signal: formal review is structured for plan correctness, not for surfacing unstated assumptions.
claude — ~/projects/api · zsh (plan mode)
Before I run 1k4_review_plan, brainstorm with Counterpoint for 2 rounds. What am I overconfident about?

⏵ 1k4_brainstorm advisors: ["counterpoint_advisor"] topic: "Find weak assumptions in this plan." rounds: 2
[ status: complete · 2m11s ]
$0.92 · 2 rounds · 4 weak assumptions surfaced
1. Redis INCR atomicity ≠ no race; read-then-decide has a TOCTOU window. Use a Lua script.
2. Per-IP limits break under shared NAT. Per-account should be primary; IP coarse secondary.
3. 5-min window alignment unspecified — tumbling vs sliding makes a 2x difference at the boundary.
4. Retry behavior: clients hitting the limit retry aggressively, raising load. Where's the retry policy?
Cost & control

No surprises. Caps that bite.

The quorum runs on real LLM calls, so cost matters. Three layers of protection ship by default, and you keep the keys to all of them.

Per-call cost cap

Pass a cost_cap on any consult or brainstorm. The backend aborts mid-loop the moment cumulative cost crosses your threshold. Never pay for runaway responses.
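In sketch form, that enforcement is just a running total checked after every metered step. Costs are in cents here to avoid float drift; the real backend meters token usage, while this stub takes step costs directly:

```typescript
// Sketch of mid-loop cost-cap enforcement: accumulate per-step cost and
// abort the moment the cumulative total crosses the cap. Illustrative only.
function runWithCostCap(
  stepCostsCents: number[],
  capCents: number,
): { spentCents: number; aborted: boolean } {
  let spentCents = 0;
  for (const cost of stepCostsCents) {
    spentCents += cost;
    if (spentCents >= capCents) {
      // Abort mid-loop: you pay for work done so far, never for a runaway tail.
      return { spentCents, aborted: true };
    }
  }
  return { spentCents, aborted: false };
}
```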

Daily envelope

Set a per-day spend ceiling per tool: advisors $X, brainstorm $Y, plan reviews $Z. Hit the ceiling and the API returns 402 with a clear “raise cap at /billing” message. Independent of per-call caps.

Pre-flight balance

Before any consult fires, the backend checks your remaining credits. Insufficient balance → clean refusal, no partial charge, no surprise debt. Same gate the existing plan reviewer uses.

Cost transparency

Every consult prints “your cost $X · N tokens · Ts” the moment it lands. Brainstorm prints round-by-round running totals. Studio settings shows day-to-date spend and cap utilization.

Budget Advisor for cheap reads

Not every question needs a $1 deep-reasoning consult. Budget Advisor exists specifically to be the fast, low-cost sanity-check option, often $0.05 or less per call.

Audit log

Every advisor call is logged: who, when, which advisor, prompt hash, cost, duration. View in your account, export anytime. Burn a budget? You’ll know exactly which call did it.

Setup

One install, every tool.

The unified bridge installs Plan Reviewer, Quorum, Brainstorm, and the Lab Bridge in a single command. Existing 1K4 users on @1key4ai/mcp-review or @1key4ai/mcp-lab get the new tools automatically on next Claude Code session start — no action required.

npx -y -p @1key4ai/cc-bridge@latest cc-bridge setup --api-key YOUR_API_KEY
Get your API key · Need TypingMind/Codeium? Pass --profile typingmind · See pricing
FAQ

Common questions.

How is this different from the Plan Reviewer?

Plan Reviewer is a structured gate inside plan mode: it cascades multiple models against your plan, returns severity-tagged findings, and gives you a GREEN/NO consensus. Quorum is the open-ended consult layer: ask any question, get a focused answer from the named advisor whose specialty fits. Brainstorm is the multi-round adversarial mode. Many users brainstorm with Counterpoint first to refine the plan, then run Plan Reviewer for the formal sign-off. They’re designed to compose, not compete.

Can I add my own advisor?

Yes — clone any of the five core advisors from your account dashboard, rename it, swap the system prompt, pick from the approved model allowlist. Cloned advisors live in your namespace and never affect the system defaults. Call them by slug: 1k4_consult({ advisor: "my_marketing" }). New custom advisors take effect immediately, no restart, no reinstall.

What stops a Claude Code loop from blowing my budget?

Three things, stacked. First, every call carries a cost_cap that the backend enforces mid-loop (cumulative cost ≥ cap → abort). Second, your account-level daily envelope per tool returns 402 with a clear message before the call even fires. Third, your existing 1K4 balance is checked pre-flight; if it’s low the call refuses cleanly rather than charging a partial. The same triple-gate the existing Plan Reviewer uses.
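As a sketch, the two pre-flight gates might look like the function below; the function name and return shape are assumptions, while the ordering, the 402 status, and the clean refusal on low balance come from the description above. The per-call cost_cap is the third gate, enforced mid-loop during the run rather than pre-flight:

```typescript
// Illustrative pre-flight gating: balance check, then daily envelope.
// Names and shapes are assumptions; the 402 behavior is as described.
function gateConsult(
  balance: number,     // remaining credits, USD
  spentToday: number,  // today's spend for this tool, USD
  dailyCap: number,    // daily envelope for this tool, USD
  estimate: number,    // estimated cost of this call, USD
): { ok: boolean; status?: number; reason?: string } {
  // Pre-flight balance: refuse cleanly, no partial charge.
  if (balance < estimate) {
    return { ok: false, reason: "insufficient balance" };
  }
  // Daily envelope: refuse with HTTP 402 before the call fires.
  if (spentToday >= dailyCap) {
    return { ok: false, status: 402, reason: "daily envelope hit; raise cap at /billing" };
  }
  // Per-call cost_cap is enforced separately, mid-loop, once the call runs.
  return { ok: true };
}
```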

Where do the advisors run? Whose model is whose?

Advisors run on the same NewAPI proxy that powers your existing 1K4 API key. Code Auditor, Research Advisor, and Analysis Advisor run on GPT-5.4 (high reasoning). Counterpoint deliberately runs on a different model (Gemini 2.5 Pro) so the perspective isn’t a rephrase of the same model’s prior. Budget Advisor runs on Grok-4 for cheap-and-fast. Specific model assignments are visible in the 1k4_advisors tool output and can change over time as better models ship.

How does this work in 1K4 Studio (the web UI)?

The same five advisors are available to the Studio agent as built-in skills. Type “have the Code Auditor look at backend/auth.py” directly in Studio chat — the agent picks consult, executes it, and synthesizes the findings back into the conversation. Brainstorm works the same way. No separate install for Studio; the package install only affects Claude Code in the terminal.

What happens to my data?

Same data posture as the rest of 1K4: nothing is sold, nothing is used for model training. Consult results live in your project’s session for the duration of your wait, then in your account history (deletable anytime). Code you submit as artifacts is hard-excluded from the server-side log: see the snapshot.ts filter in the open-source MCP package.

Ready to get a second opinion?

Set up in 60 seconds. First plan review is on us. Quorum is pay-per-call from credit one. No subscription, no minimum; credits never expire.