Listed roughly (not strictly) in order from tools aimed at the least tech-familiar audiences to the most tech-familiar:
No / low-code:
Replit and Lovable
Good for someone who doesn’t/can’t code at all to prototype something surprisingly impressive given their lack of coding experience. Wouldn’t recommend them for anything you plan on using in production. Haven’t touched them myself; I’ve only seen them used in presentations at conferences.
User-friendly chat:
ChatGPT — as a chatbot app / chat web app
My default recommendation for beginner-to-intermediate chat AI users. The middle paid tier offers custom-instruction support that’s good enough for most intermediate users, plus managed hosting for it, at a predictable flat fee, so I’ve continued subscribing. Yes, I could subscribe to their APIs instead and try to self-host web apps to do something similar for potentially less money, but that would take an amount of time/effort I doubt would be worth it just to keep up with the features routinely being added to their apps.
Google Gemini (as a chatbot web app)
https://gemini.google/students is an absolute no-brainer for anyone who still has a valid .edu email address. Even if you never plan on using the AI features, the 2 TB of storage for a year can be helpful. Personally I find their Gemini 2.5 Pro is (unsurprisingly) good for answering questions about Google Cloud Platform’s products/services, but my reasons for not using it for much else are twofold. Firstly, I’m not fond of Google’s privacy policies. Secondly, as a ChatGPT subscriber I make extensive use of its custom instructions / system instructions (to the point where I’m routinely maxing out the character limit of what I can put into either), and I haven’t found whether/where Gemini offers comparable customization options. Comparing the two side by side, the default “personality” Gemini tries to emulate leans too “friendly”/amiable at the expense of being as detailed/efficient as some of my ChatGPT projects’ custom instructions.
Private and/or local chat:
Lumo by Proton
The only online-hosted LLM service I believe is making a genuine effort at offering private conversations. My main reasons for not using it more yet are the lack of custom-instruction support (as mentioned above for other services) and being unsure how capable the model on its backend is. As best I can tell, it’s either using a model from 2023 or older (which in this field is ancient), or it’s using a collection-of-experts setup where differently-tiered models handle questions of differing complexity or type. I can see that making a lot of sense for most users who just want something user-friendly and would rather not have to pick a model, but paired with the missing system-instruction support, it makes it hard for users like me to specify that I’d rather have it spend more time thinking to give me higher-quality answers.
AnythingLLM
Most user-friendly & well-supported-seeming way I know of to run local chat LLMs completely offline and privately, with the option of bringing your own API-keyed models from other services later if you want. Recently added an Android app in closed beta (which I’ve tried; isn’t very feature-rich yet, but what’s there works impressively well) and local agent capabilities (which I haven’t tried yet, but would be more willing to trust than an online-service-based agent messing with my local machine so directly).
Ollama
Not quite as user-friendly as AnythingLLM, and doesn’t try to include agentic capabilities, but offers a more straightforward way to get other tools to integrate with your locally-running LLMs, e.g. Continue.dev.
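To illustrate that integration point: once Ollama is running, it exposes a simple local HTTP API that other tools (Continue.dev included) can point at. Here’s a minimal Python sketch of calling it directly, assuming Ollama is running on its default port and you’ve already pulled a model; the model tag below is just a placeholder, so substitute whatever you have installed (including one of the open-weight models mentioned next).

```python
import requests

# Assumes the Ollama server is running locally on its default port (11434)
# and that a model has already been pulled, e.g. with `ollama pull <tag>`.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_llm(prompt: str, model: str = "llama3.1") -> str:
    """Send one prompt to the local Ollama server and return its reply."""
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,  # local models can be slow on modest hardware
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("In one sentence, what does a code-assist context provider do?"))
```

Nothing in that round trip leaves your machine, which is the whole appeal of this route: tools like Continue.dev build on the same local endpoint when you configure them to use an Ollama-hosted model.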
OpenAI’s open models
Unusable on their own; they need some manner of frontend or runner such as AnythingLLM, Ollama, or Windsurf. Announced this week at https://openai.com/open-models/ (too recently for me to have used them firsthand yet), these look super interesting for any self-hosted (local or online) projects. If your machine can handle the local ~20B-parameter one with something like Continue.dev, I’d be very curious how quickly & accurately it performs.

As a software developer:
GitHub Copilot
Probably the easiest coding-assist tool to pitch to a workplace that isn’t yet paying for one. IMO more important than the capability of its backend models is the fact that the VS Code extension delivers to those models (as context) a lot more of what’s going on in your IDE than the one file you’re highlighting or the one file you’ve got open — things like what other files are in the codebase, what other tabs you have open, what’s been happening lately in git history, etc. I don’t know whether or how well other extensions were doing this a year or so ago, but by now it’s a key selling point of inherently AI-integrated VS Code forks such as Windsurf. I haven’t used GitHub Copilot recently enough to know how well it’s kept up with its competitors, though I did notice that once other tools started stealing its thunder, GitHub Copilot eventually had to start offering a pretty-helpful-considering-it’s-free tier to hook people in.
Continue.dev for VSCode
The only open-source yet reasonably well-supported VS Code extension I know of attempting to offer GitHub Copilot-like capabilities with the full privacy of running locally. The catch: as of when I last used it, it lacked GitHub Copilot’s ability to use files other than the one you were looking at as additional context. Its settings do support using different local or API-keyed models for inline assist/correction vs. chat (important, since inline responses need to be fast while chat responses can be slower but more thorough), but if you’re hosting multiple LLMs yourself to take advantage of that, you either need a beefy personal machine to run multiple good-enough LLMs locally or you’re running at least one on a machine elsewhere. At that point, unless you strictly need the privacy of your own LLM, you’d likely get more bang for your buck by jumping to another service.
In quickly searching about Continue now, there is some documentation indicating multi-file/codebase context providers and indexing (e.g., `@Codebase`, `@Folder`, git diff, terminal, etc.), so some of my concerns above may have already been addressed.
Windsurf and Cursor
In case you consider it relevant from a data-privacy or company-future-health outlook: Microsoft owns a substantial stake in OpenAI, and OpenAI is partnering with Cursor (Cursor appeared during the GPT-5 launch live presentations as one of several showcased partners). Meanwhile Windsurf (previously called Codeium) had recently been rumored to be on the verge of being bought by OpenAI, but after a week or two of Silicon-Valley-acquihire drama, things settled down and the remaining employees at Windsurf instead partnered with Cognition, the makers of Devin. That leaves Windsurf now more strongly advertising first-party support for Anthropic’s Claude models, which were (at least until GPT-5’s launch) favored for their coding-assist capabilities even if Claude wasn’t as widely used for plain chat.
I used a promo code, 4GEEKS, from the data science & machine learning bootcamp I recently finished, which gave a month of free access. Most of my VS Code extensions either ported over directly or weren’t too difficult to set up again (though it did take some manual effort to figure out how to download .vsix files to install, and I doubt they’ll auto-update as effortlessly as they did in VS Code). Still, I’ve liked Windsurf’s “planning” mode for both its “write” and “chat” modes so much that I renewed my subscription after the first free month, and can easily imagine continuing to use it for personal projects. Windsurf’s in-house SWE-1 model is pretty good for a low credit-per-usage cost, but the other providers are so often trying to outdo each other that more capable models are frequently offered at similarly low promotional credits-per-usage rates.
Cursor is another AI-integrated IDE, one I haven’t yet used because I didn’t find a similar promo code, but it sounds like a solid-enough direct competitor to be worth considering.
ChatGPT and/or Claude — via API
Optionally usable with other tools such as AnythingLLM or Continue.dev. Unfortunately it isn’t included in the plain flat-fee-monthly ChatGPT subscriptions. The upside seems to be potentially better value per dollar than other services. The downside is more setup effort, and even once it’s set up properly, I don’t know whether the way Continue.dev would use it delivers enough context to the model from everything else going on in the IDE, the way GitHub Copilot, Windsurf, etc. do. Personally I haven’t tried setting it up because I haven’t burned through enough credits with other services to feel like this API-based route would save me enough money to be worth the setup/maintenance effort and/or potentially lower performance.
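For a sense of what the setup effort looks like, here’s a minimal sketch of the direct-API route using OpenAI’s official Python SDK. It assumes you’ve created a pay-per-use API key and exported it as the OPENAI_API_KEY environment variable; the model name is only an example, and wiring this into something like AnythingLLM or Continue.dev is a separate per-tool configuration step on top of it.

```python
from openai import OpenAI

# Assumes a pay-per-use API key is exported as OPENAI_API_KEY;
# the SDK reads it from the environment automatically.
client = OpenAI()

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """One-off question against the API -- billed per token, not a flat monthly fee."""
    completion = client.chat.completions.create(
        model=model,  # example model name; pick per price/capability trade-off
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(ask("Summarize the trade-off between flat-fee chat subscriptions and per-token API billing."))
```

The per-token billing is where the potential value-per-dollar comes from, but it also means monitoring your own usage instead of relying on a flat subscription ceiling.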
Not recommended
Trae
Pitches itself as a direct competitor to Cursor and Windsurf, but I would never advise this to anyone interested in privacy because it’s owned & produced by the same parent company as TikTok (ByteDance). Consider yourself warned.
Tabnine
Honorable mention for having been early to offer AI code assist as a service, but the AI models available in Tabnine’s prime simply weren’t good enough for it to catch on at the time. It sounds like it’s since been leapfrogged by other companies.
Recap of recommendations above
Skipping mentions/links to the trivially-easy-to-find items such as ChatGPT:
- Google Gemini for Students: free 12-month Google AI Pro incl. Gemini 2.5 Pro access + 2 TB storage. https://gemini.google/students
- Lumo by Proton: https://proton.me/blog/lumo-ai
- AnythingLLM: user-friendly local LLM chat + local agents; Android app in closed beta. https://anythingllm.com
- OpenAI Open Models (open-weights): interesting for self-hosting; 20B tier could be feasible on higher-end local setups. https://www.openai.com/blog/open-models
- Windsurf editor (promo code I used: “4GEEKS”); first-party support for Claude; also supports Gemini & ChatGPT models; in-house SWE-1 is already pretty good, especially for its low cost, and other even-more-capable models are frequently offered at promotional prices. https://windsurf.com/editor
- Cursor: solid-sounding competitor to Windsurf; haven’t tried it myself but would be my first go-to if Windsurf wasn’t an option. https://cursor.com/