I Spent a Weekend With OpenCode and My $20/Month Cursor Subscription Suddenly Feels Like a Scam

Let me start with a confession that might upset some people: I have been paying for Cursor Pro since November 2024. Happily. No complaints. It changed how I write code, and I told everyone who would listen — my coworkers, my barber, the guy at the coffee shop who once mentioned he was "learning Python."

Then last Thursday, around 11 PM, I was scrolling Hacker News instead of sleeping like a responsible adult. A post titled "OpenCode — Open source AI coding agent" had already racked up over 900 points. I clicked it, mostly out of curiosity. Two hours later I was still in my terminal, and my wife had texted me "are you alive" from the bedroom.

OpenCode is not just another coding assistant. It is an open source AI coding agent that runs in your terminal, your IDE, or as a desktop app. And here is the part that made me sit up straight: it supports over 75 LLM providers, has 120,000 GitHub stars and 800 contributors, and reports over 5 million developers using it monthly. Those are not startup vanity metrics. Those are Kubernetes-level adoption numbers.

What OpenCode Actually Does (and Does Not Do)

The elevator pitch is simple: OpenCode is an AI agent that writes, edits, and debugs code inside your existing workflow. You can run it from the terminal, install it as a VS Code extension, or use the desktop app. It connects to whatever LLM you prefer — Claude, GPT, Gemini, Llama, Mistral, or any of the 75+ providers listed on Models.dev.
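Getting started is quick. This is a sketch of the typical install path; the npm package name `opencode-ai`, the install URL, and the project path below are assumptions worth double-checking against the current OpenCode README:

```shell
# Install sketch — package name and install URL are assumptions;
# verify against the current OpenCode README before running.
npm install -g opencode-ai          # via npm
# or: curl -fsSL https://opencode.ai/install | bash

cd ~/projects/my-app                # hypothetical project path
opencode                            # launches the terminal UI in this repo
```

From there you pick a model provider inside the UI and start describing changes in plain English.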

But here is where it gets interesting. OpenCode is not just autocomplete-on-steroids like GitHub Copilot. And it is not a proprietary IDE wrapper like Cursor. It is a full-blown coding agent. Think: you describe what you want in natural language, and it reads your codebase, plans the changes, edits multiple files, runs your tests, and iterates until the code works.

My friend Jake — who has been a backend engineer at a logistics company for about seven years — put it best when I showed him on FaceTime: "So it is like having a junior developer who never gets tired, never calls in sick, and actually reads the documentation?"

Pretty much, Jake. Pretty much.

LSP Integration: The Quiet Killer Feature

Here is something most reviews will probably gloss over because it sounds boring: OpenCode automatically loads the right language servers (via the Language Server Protocol) for whatever language the LLM is working with. Why does this matter? Because it means the AI agent has access to the same type information, jump-to-definition, and error diagnostics that you get in your IDE.

I tested this on a TypeScript monorepo with about 340 files. I asked OpenCode to refactor a shared utility module that was imported in 47 different places. Cursor, when I tried the same thing last month, missed three import paths and introduced a circular dependency that took me 40 minutes to untangle. OpenCode caught all 47 imports, updated the type signatures, and the tests passed on the first run.

Was I lucky? Maybe. Did it feel like witchcraft? Absolutely.

Multi-Session: Run Multiple Agents in Parallel

This is the feature that made me seriously reconsider my Cursor subscription. With OpenCode, you can spin up multiple agent sessions working on the same project simultaneously. One agent refactoring the API layer while another updates the test suite while a third rewrites the documentation.

I ran three parallel sessions on a Django project last Saturday afternoon. Session one was converting raw SQL queries to the ORM. Session two was adding type hints to the entire models directory. Session three was writing integration tests for an endpoint I had been procrastinating on for two weeks. All three finished within about 12 minutes, and I merged the changes with only one minor conflict.

Try doing that with Cursor. I will wait. (Spoiler: you cannot. It is single-session only.)
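One low-tech way to keep parallel agent sessions from stepping on each other is to give each one its own git worktree. This is a generic git pattern, not an OpenCode-specific feature; the branch names are hypothetical and the `opencode` launch lines are left as comments:

```shell
# Give each parallel agent session an isolated checkout via git worktrees.
# Branch names are hypothetical; the opencode invocations are illustrative.
set -e
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -q -m "init"
for task in orm-migration type-hints integration-tests; do
  git worktree add -q "../agent-$task" -b "$task"
  # (cd "../agent-$task" && opencode)   # one session per worktree
done
git worktree list                       # main checkout + three worktrees
```

Each session edits its own checkout, so the only merge work happens at the end — which is exactly where you want your attention anyway.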

The Copilot Question: Why Not Just Use GitHub Copilot?

I know someone is going to ask, so let me address it directly. GitHub Copilot is great at line-by-line autocompletion. It is the world's fanciest tab key, and I mean that as a compliment. But Copilot does not understand your project holistically. It does not plan multi-file changes. It does not run your tests and iterate.

OpenCode does all of that. And here is the kicker — if you have a GitHub Copilot subscription, you can actually log in with your GitHub account and use your Copilot models through OpenCode. Same goes for ChatGPT Plus or Pro subscribers. You are already paying for the models; OpenCode just gives you a better interface to use them.

The terminal UI has a settings panel (top-right) where you configure your model provider. It took me about 90 seconds to connect my Anthropic API key, and I was immediately running Claude through OpenCode's agent framework instead of Cursor's.
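If you prefer config files over clicking through a UI, OpenCode reads an `opencode.json` at the project root. A minimal sketch, assuming the current config schema and that the model identifier below is still valid (check Models.dev for exact IDs):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-3-5-sonnet-latest"
}
```

The API key itself stays out of the file; OpenCode typically picks it up from the `ANTHROPIC_API_KEY` environment variable or its own credential store — an assumption worth verifying in the docs for your provider.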

OpenCode vs Cursor vs Copilot: The Honest Comparison

I have been going back and forth between all three for the past four days. Here is my brutally honest breakdown:

| Feature | OpenCode | Cursor Pro | GitHub Copilot |
|---|---|---|---|
| Price | Free (open source) | $20/month | $10-19/month |
| Model flexibility | 75+ providers | Claude, GPT, custom | GPT-based only |
| Agent capabilities | Full agent (multi-file, test, iterate) | Full agent | Autocomplete + chat |
| Multi-session | Yes | No | No |
| Open source | Yes (MIT-like) | No | No |
| LSP integration | Automatic | Built-in (IDE) | Built-in (IDE) |
| Privacy | No code stored | Telemetry opt-out | Telemetry opt-out |
| Share sessions | Yes (link sharing) | No | No |
| IDE support | Terminal + VS Code + Desktop | Custom IDE only | VS Code + JetBrains |
| Learning curve | Medium | Low | Very low |

The pattern is clear. If you want the easiest possible experience and don't mind paying, Cursor is still excellent. If you just want smart autocomplete, Copilot is fine. But if you want maximum flexibility, privacy, multi-session capability, and you don't want to pay $240 a year for something an open source project does arguably better? OpenCode is the move.

The Privacy Angle Nobody Is Talking About

Here is something that should matter more than it does: OpenCode explicitly does not store any of your code or context data. None. Zero. Nada. This is not a "we anonymize your data" half-truth. It is a "we literally do not have a server that receives your code" design decision.

I work with a client in healthcare — cannot name them, NDA, the usual — and their security team rejected Cursor and Copilot because both technically transmit code to external servers for processing. OpenCode, running locally with a self-hosted model or even with a cloud provider where you control the API key, passed their security review in three days. Three days. The Cursor evaluation took six weeks and ended with a "no."

If you work in finance, healthcare, government, defense, or any industry where "your code touches someone else's server" is a deal-breaker, OpenCode might be the only viable AI coding agent right now.
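For those locked-down environments, the same config mechanism can point OpenCode at a model that never leaves your machine — for example, an OpenAI-compatible endpoint served locally by Ollama. A hedged sketch: the provider block shape, the `@ai-sdk/openai-compatible` adapter, and the model name are all assumptions based on how OpenCode's custom-provider docs describe this, so verify against the current config reference:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "http://localhost:11434/v1" },
      "models": { "llama3.1": { "name": "Llama 3.1 (local)" } }
    }
  },
  "model": "ollama/llama3.1"
}
```

With a setup like this, "your code touches someone else's server" stops being a question at all: the agent, the model, and the code all stay on hardware you control.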

What I Do Not Love (Because Nothing Is Perfect)

Look, I am not going to pretend this is a flawless experience. Here is what bugged me:

The terminal UI takes getting used to. If you have been living in Cursor's slick GUI, switching to a terminal-first experience feels like going from a Tesla to a manual transmission. It is more powerful, but the first few hours are rough. I accidentally killed a session by hitting Ctrl+C out of habit on day one. Twice.

Model quality varies wildly. OpenCode supports 75+ providers, but that does not mean all of them are good for coding agents. I tried a free model from one of the smaller providers and it hallucinated an entire npm package that does not exist. Stick with Claude, GPT-4, or Gemini for serious work.

Documentation could be better. The docs exist, they are not terrible, but for a project with 120K stars, I expected more worked examples and fewer "see the API reference" redirects. I spent 20 minutes figuring out how to configure a custom system prompt, and the answer was buried in a GitHub issue from two months ago.

The Session Sharing Feature Is Underrated

One more thing before I wrap up. OpenCode lets you share a link to any coding session. The entire conversation, every file change, every error, every fix — all shareable via URL.

Last Tuesday I was pair-programming with a contractor in São Paulo. Instead of screensharing over a laggy Zoom call, I just sent him an OpenCode session link. He could see exactly what the agent did, why it made each change, and where it got stuck. He sent me back his own session link with his fixes. We resolved a bug that had been open for a week in about 25 minutes, and neither of us had to pretend our internet connection was "fine, just a little choppy."

This is the kind of workflow improvement that does not show up in feature comparison tables but completely changes how distributed teams collaborate.

Should You Switch?

If you asked me last Wednesday, I would have said Cursor is the best AI coding tool on the market, no question. Today I am genuinely not sure anymore.

OpenCode is free. It is open source. It supports more models, more interfaces, and more workflows than any proprietary alternative. Its multi-session capability alone is worth the setup time. And its privacy story is genuinely best-in-class.

Is it as polished as Cursor? No. Will it be in six months, given its development velocity? I would bet on it.

My Cursor subscription renews on April 3rd. I have not decided whether to cancel yet, but I am leaning heavily toward "yes." And honestly? I did not think any open source tool would make me feel that way this soon.

If you are a developer who has been on the fence about AI coding tools, or you have been paying for something that an open source project now matches or exceeds, give OpenCode an hour of your time this weekend. Install it, connect your preferred model, and refactor something ugly in your codebase. You will know within 30 minutes whether it is for you.

Just maybe warn your spouse first. The "are you alive" texts get old fast.

