Hacker News | davidgomes's comments

1. Cursor is multi-model, meaning you can use at least a dozen different models.

2. Cursor's UI allows you to edit files, and even have the good old auto-complete when editing code.

3. Cursor's VSCode-based IDE is still around! I still love using it daily.

4. Cursor also has a CLI.

5. Perhaps more importantly, Cursor has a Cloud platform product with automations, extremely long-lived agents and lots of other features to dispatch agents to work on different things at the same time.

Disclaimer: I'm a product engineer at Cursor!


I hope this comes across as constructive criticism, but I'm confused about what Cursor is now.

Cursor is an IDE and an agentic interface and a CLI tool and a platform that all work locally and in the cloud and in the browser, and it supports dozens of different models.

I don't know how to use the thing anymore, or what the thing actually is.


I'm having the same issue, as a former Cursor user and current Claude Code addict. CC has a very clear mental model. So does "agent in your IDE," like Cursor used to be and Xcode is now. The advantage of my current setup is that it's the terminal and Xcode, just as it has been for over 20 years.

I applaud Cursor for experimenting with design, and seeing if there are better ways of collaborating with agents using a different type of workspace. But at the moment, it's hard to even justify the time spent kicking the tires on something new, closed source and paid.


it sounds like you described it pretty well!

Let me give this a shot:

Cursor was the tool you used to pair program with AI: the AI types the code, and you direct it as you go along. This is a workflow where you work in the code, and you end up with something that's fundamentally correct by your standards.

Claude Code is the tool you use if you want to move one abstraction layer up: use a harness, specs, verifications, etc. to nail down the thing such that the only task left is typing in the code, a thing AI does well. This is a workflow where correctness depends on a lot of factors, but the idea is to abstract one level up from code. Fundamentally, it's successful if you don't need to look at the code at all.

I think there is not enough data to conclusively say which of these two concepts is better, even taking into account some trajectory of model development.

I do feel that any reason I have for installing Cursor is that I want to do workflow 1, rather than workflow 2. Because I have a pretty comprehensive Claude Code setup (or opencode, or whatevs) and I think it does everything you list here.

So, as a product engineer, you probably want to mention why it matters that the Cursor UI lets you edit files with auto-complete.


I would switch to Cursor 3 in a heartbeat if it supported Claude Agent SDK (w/ Claude Max subscription usage) and/or Codex the way that similar tools like Conductor do

And I would happily pay a seat based subscription fee or usage fees for cloud agents etc on top of this

Unfortunately I'm very locked into these heavily subsidized subscription plans right now, but from a product design and vision standpoint I think you guys are doing the best work in this space.


Is there going to be any more development on the frontier of Cursor tab completion and features like that (more focused on helping engineers with LLMs on complex tasks)? I feel this is the main reason I don't use Claude Code or Codex: I want to be writing the code myself, since I want performant, small codebases that I understand (I am writing eBPF stuff, so agentic coding doesn't work that well).

Computer use in the cloud for me is THE killer feature.

Can you elaborate on how you are using it?

Basically, set it up like a developer's local env, and then it just runs like an "openclaw": with full control over its own env, with a browser, a shell, and access to a local DB (e.g. install a local Postgres). You basically get a video of the feature, screenshots, and it can also actually test itself like a developer would, clicking in the browser to test the feature. Game changer.

vscode + claude code extension has everything you listed that actually matters

You can use almost any model with Claude Code.

that doesn't make sense. how?

Here's how to use MiniMax v2.7 for example: https://platform.minimax.io/docs/token-plan/claude-code

You just add this to your ~/.claude/settings.json:

  {
    "env": {
      "DISABLE_AUTOUPDATER": "1",
      "ANTHROPIC_BASE_URL": "https://api.minimax.io/anthropic",
      "ANTHROPIC_AUTH_TOKEN": "YOUR_SECRET_KEY",
      "API_TIMEOUT_MS": "3000000",
      "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1",
      "ANTHROPIC_MODEL": "MiniMax-M2.7-highspeed",
      "ANTHROPIC_SMALL_FAST_MODEL": "MiniMax-M2.7-highspeed",
      "ANTHROPIC_DEFAULT_SONNET_MODEL": "MiniMax-M2.7-highspeed",
      "ANTHROPIC_DEFAULT_OPUS_MODEL": "MiniMax-M2.7-highspeed",
      "ANTHROPIC_DEFAULT_HAIKU_MODEL": "MiniMax-M2.7-highspeed"
    }
  }
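
If you'd rather script that change than hand-edit the file, here's a minimal sketch. The `merge_env` helper is my own illustration (not part of Claude Code); the override values just mirror the snippet above.

```python
import json
import os

# Env overrides to apply, mirroring the settings.json snippet above.
overrides = {
    "ANTHROPIC_BASE_URL": "https://api.minimax.io/anthropic",
    "ANTHROPIC_AUTH_TOKEN": "YOUR_SECRET_KEY",
    "ANTHROPIC_MODEL": "MiniMax-M2.7-highspeed",
}

def merge_env(path, overrides):
    """Merge env overrides into a settings.json without clobbering
    any other settings already in the file."""
    try:
        with open(path) as f:
            settings = json.load(f)
    except FileNotFoundError:
        settings = {}
    settings.setdefault("env", {}).update(overrides)
    with open(path, "w") as f:
        json.dump(settings, f, indent=2)
    return settings

# Uncomment to apply to your real config:
# merge_env(os.path.expanduser("~/.claude/settings.json"), overrides)
```

The point of merging (rather than overwriting) is that you keep any permissions, hooks, or other settings you've already configured.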

ah, 'almost'. i want to use codex.

That's just in reference to the technique itself. They're basically saying it's okay for Google to use distillation to train Gemini N Flash using Gemini N-1 Pro (which they do).


What "Apply" are you referring to?

(Cursor Dev)


A lot of progress is being made here on the Cursor side. I encourage you to try it again.

(Cursor dev)


As someone with a 5:38 delta, I'm very anxiously waiting for BAA to announce the official cutoff.

In the meantime, if you're at all curious about the lengths people go to in trying to predict the cutoff, check out this blog [1]. It's by Brian Rock [2], who every year collects data from a lot of marathons all over the world and then tries to guess the official cutoff for the Boston Marathon. Very cool stuff!

[1]: https://runningwithrock.com/boston-marathon-cutoff-time-trac... [2]: https://runningwithrock.com/about-me/


Brian Rock's tracker is great. It's a ton of work to collect and maintain that estimate throughout the year, so we hope he keeps it up!


You should look at [Modal](https://modal.com/), not affiliated.


Lovable runs on Modal Sandboxes.


I wonder if it was geolocation? Anthropic is based in SF, the author seems to be based in Munich, and maybe they're not open to hiring people who aren't based in the US right now? Given the state of US visas right now, this wouldn't shock me.


My company, which is significantly smaller, hires people in multiple countries across the world. You don't need an office to hire (I'm sure there do exist countries where you do, but I expect they are the minority).


Having worked in such companies, switching to that mode requires very different processes.


London too.


After Brexit that's still quite a hassle.


Phenomenal comment, thank you for writing it, made my day :)


The whole point of this interview is that the candidate is operating on a single-threaded environment.


That's a lot of assumptions: "this queue is only on one machine and on one thread." What's the real-world use case here? Not saying there's none, but make it clear. I wouldn't want to work for a company that asks some random, overly precise question instead of, e.g., "when would you not use MySQL?"


I guess I don’t want to hire candidates who assume the world is single-threaded


Yeah, we have a PR in the works for this (https://github.com/appdotbuild/platform/issues/166), should be fixed tomorrow!


Alright, sounds good. Question: what LLM model does this use out of the box? Is it using the models provided by GitHub (after I give it access)?


If you run it locally, you can mix and match any Anthropic/Gemini models. As long as a model satisfies this protocol, you can plug in anything: https://github.com/appdotbuild/agent/blob/4e0d4b5ac03cee0548...

We have a similar wrapper for local LLMs on the roadmap.

If you use the CLI only, we run Claude 4 + Gemini on the backend, with Gemini serving most of the vision tasks (frontend validation) and Claude doing the core codegen.


We use both Claude 4 and Gemini by default (for different tasks). But the idea is you can self-host this and use other models (and even BYOM - bring your own models).

