“Branching let us try three approaches at once and move forward in minutes.”
Branching AI chat for discovery and comparison
Branch from any message onto a canvas, compare prompts and models side by side, and collaborate in real time—so you can strike insights faster and avoid dead ends.
Free while in beta. No credit card required.
See it in action
Start. Branch. Compare.
Try alternatives in parallel and see them side by side—all in one canvas.
- Step 1
Start a canvas
Kick off a chat from any question or doc. Your context stays put.
- Step 2
Branch anywhere
Fork any message to try different prompts or models—no copy‑paste, no mess.
- Step 3
Compare side by side
Scan results next to each other to spot the best direction fast.
Real teams find better answers faster
Outcomes from using branching and side‑by‑side comparison in Lode
“We compare models side by side and drop bad directions early.”
“Focused paths kept us on topic. No more overwriting the main chat.”
They branched from key messages, tried alternate prompts and models in parallel, and scanned results side by side.
- Branched 4 variations in under 2 minutes
- Compared by model and prompt; continued from the strongest path
- Reduced back‑and‑forth rewrites; shipped same day
Where Lode shines
The same motions—branch, compare, co‑prompt—adapt to your workflow. Pick a lane and see it click.
Find prompts that actually work
Run 3–5 variations in parallel and converge on the one that holds up.
- Branch alternatives from any message
- A/B prompts with the same context
- Pick a winner and keep going
The essentials that keep you moving
Scroll to see how Lode helps you branch, compare, and collaborate—without breaking flow.
Branch from any message
Explore tangents without polluting the main thread. Shared context keeps each path focused.
- Fork quickly
- Scope prompts per branch
- Tag for traceability
See options side by side
Pin branches into a split view. Scan clarity, depth, and gaps at a glance.
- Compare by prompt/model
- Pick a winner and continue
Prompt together in real time
See presence and cursors as teammates co‑prompt. No overwrites, just momentum.
- Live cursors & presence
- Comment and mention
Scoped prompts per branch
Keep each line of exploration on‑topic with branch‑level system prompts and tags.
- Branch‑level system prompts
- Tags and naming
Bring your own keys
Swap providers freely. Evaluate models per branch and keep control over cost and data.
- OpenAI, Anthropic, Google, Mistral
- Local via Ollama
Works with the leaders
Choose providers and models per branch. Keep comparisons fair with the same context.
Momentum you can feel
“We found a winning prompt in one morning instead of a week. The side‑by‑side view is a cheat code.”
“Branches keep experiments tidy. It finally feels safe to go down rabbit holes.”
“Multiplayer prompting made our workshops twice as productive.”
“Dropping weak paths early saved us 40% on tokens last quarter.”
Answers to common questions
Do you store our conversations?
Yes, to power history, branches, compare views, and audit logs. You control retention. For sensitive work, export or clear on a schedule.
How are API keys handled?
Use your own provider keys. Keys are encrypted at rest and never shared with teammates unless explicitly configured.
Which providers are supported?
Most major providers: OpenAI, Anthropic, Google, Azure OpenAI, Mistral, and local models via Ollama. The list grows over time.
Can we export our data?
Yes. Export threads and branches as JSON or Markdown. API access for automation is planned.
Does Lode work for teams with compliance requirements?
Yes. Role‑based access, project‑level permissions, and audit history help keep work compliant.
Ready to strike gold?
Start branching, comparing, and collaborating today. Find better prompts with fewer tokens.
- Unlimited branches
- Real‑time multiplayer
- Model agnostic