I'm about 2 days into transitioning, using MiMo V2 Pro in place of Opus and MiniMax M2.7 in place of Sonnet.
I'm finding that the extra "hand holding" that MiMo and MiniMax need isn't really "extra." The Anthropic models happily agree to a plan and then do something else entirely way too often.
With MiMo and MiniMax I'm just spreading the attention throughout the day instead of big spikes of frustration figuring out where Claude went off the rails.
Thanks for responding. So you are using MiMo V2 Pro to plan and then asking MiniMax M2.7 to read that plan file and execute? Or what does the workflow look like?
Pi/Opencode/Kilocode?
Just curious.
I am mostly using Opencode and thinking of abandoning Copilot, so I'm looking for something similar.
I burn through the entire 5-hour limit in one or two "implement the feature outlined in this doc" requests with Claude Pro, in a codebase that isn't even huge (low tens of thousands of LOC). If there were any reasonable alternatives I wouldn't even consider using it, but Sonnet 4.6 (and presumably Opus 4.6 - I don't use it since Sonnet is faster and more than good enough) is the only model I've used that actually makes good decisions in complex codebases. Anything else just gets stuck in the weeds and, after churning for a long time, produces either non-working code or tech debt.
I have seen more than one comment on this thread mentioning kimi though - I'll have to test it out.
qwen3-coder-next has been surprisingly capable as a local model too. It needs to be used for small changes where you know exactly what the final code should look like, rather than for implementing whole features, but it is free (except for the power bill).
I haven't found Kimi to be all that good, but GLM 5.1 I find to be better than Opus 4.6 most of the time in web dev. Opus' only advantage is it's a bit faster. If you can't access GLM 5.1 (not fully released yet) try 5.0. It was better than Opus sometimes too.
I have a GLM Code subscription and it lasts much longer than Claude Code.
I use Pi agent so I use all agents in the same harness.
I dumped Claude a few months ago for Gemini. Maybe my problems are too trivial, but it's the same if not better, with the added benefit of being much faster. I'd say 95% of my work (20-30 hrs per week) is done by it, and I spend less than $50 per month.
I've been tempted to move up to a higher Gemini plan and play around more with the Gemini CLI, as I seriously love the Gemini chat for most everything. Claude is lazy af and is always pulling stale data, or not checking resources at all. I literally have a Gemini MCP that I force Claude to use half the time when it's lost, and Gemini nails it every single time.
I'm on a Claude Max 20x plan right now, and I seriously can't imagine not having it around anymore, but Gemini always seems to have my back with actually current data and fewer hallucinations.
I do Salesforce - backend and frontend using Webstorm and Junie (and Salesforce-specific plugins). It understands the symbol table and can connect to Salesforce via an MCP server for various things (retrieve missing metadata, create users, deploy, etc).
Funnily enough, Junie detects when Gemini is overloaded (less and less often now) and switches to the default model (OpenAI). That's when you start thinking: why is this code so terrible, what's going on?
(apologies for all the typos in my original message)
That makes sense. Web is really where so many of these agents shine; I'm envious. I also imagine the huge context window for Gemini is a huge help with Salesforce work. I do iOS dev, so I have long felt locked into Claude, as it's really the only agent that understands Xcode/Swift well enough for refactors and architecture. I just get so frustrated by Claude's laziness and out-of-date information.
I honestly completely forgot Junie existed…I’m very tempted to see how far I can push it for iOS dev now…I’m sure Xcode is going to hate me even more ha.
No, it's using the newish SSGI and TRAA WebGPU nodes. The three.js team has been making great progress with SSGI and WebGPU in general, and I'd recommend checking it out if you're interested.
There's also a denoise node in three (not used in this example), but SSGI still looks kinda blurry.
TRAA basically works by using a history buffer (for example, the last couple of frames, each jittered a little bit) to compute the pixel. Ghosting and smearing can still happen because of this technique, so there are methods to counteract it, like subpixel correction, where you increase the temporal alpha when the velocity is subpixel, but that can introduce some artifacts as well.
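A minimal sketch of that resolve step, assuming the renderer hands you the jittered current-frame color, the history buffer sampled at the reprojected position, and a per-pixel velocity (the names here are made up for illustration, not the actual three.js node API):

    // Per-pixel TRAA-style resolve (illustrative only; not the three.js API).
    type RGB = [number, number, number];

    function lerp(a: RGB, b: RGB, t: number): RGB {
      return [
        a[0] + (b[0] - a[0]) * t,
        a[1] + (b[1] - a[1]) * t,
        a[2] + (b[2] - a[2]) * t,
      ];
    }

    function resolveTRAA(
      current: RGB,                 // this frame's jittered sample
      history: RGB,                 // accumulated color, sampled at uv - velocity
      velocityPx: [number, number], // screen-space motion since last frame, in pixels
      baseAlpha = 0.1               // default weight given to the new frame
    ): RGB {
      // Subpixel correction: when motion is under a pixel, raise the temporal
      // alpha so the image converges on the new sample instead of smearing.
      const speed = Math.hypot(velocityPx[0], velocityPx[1]);
      const subpixel = Math.max(0, 1 - speed); // 1 when motion is fully subpixel
      const alpha = Math.min(1, baseAlpha + 0.3 * subpixel);

      // Blend history toward the current frame. A fuller implementation would
      // also clamp `history` to the neighborhood color bounds to limit ghosting.
      return lerp(history, current, alpha);
    }

In the real nodes this all happens in a shader, with the jitter applied to the projection matrix and neighborhood clamping on the history sample, but the weighted blend is the core of it.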
There's also SMAA T2x which the pmndrs team is planning on integrating into their postprocessing package[0]. This cryengine3 slideshow gives a nice overview of antialiasing methods if you're interested: http://iryoku.com/aacourse/downloads/13-Anti-Aliasing-Method...
The only thing even remotely related to graphics I found was references to "TrAA" in forum posts from 2006 (yeah) where I believe they referred to NVIDIA "Transparency AA" or something like that. "TRAA", "TRAA meaning", "TRAA graphics", "TRAA 3D" all gave fully irrelevant results :D
If you make the assumption that "AA" is some form of antialiasing, it's not too bad: first scholar[1] hit expands the acronym to Temporal Reprojection Anti-Aliasing
Yeah, should've tried with "antialiasing". Still, astonishingly obscure given that it's not even a new thing anymore and apparently implemented in UE4 and others.
A lot of math problems/proofs are like minesweeper or sudoku in a way though. They're a long series of individually kinda simple logical deductions that eventually result in a solution. Some really hard problems are only really hard because each one of those "simple" deductions requires you to have expert knowledge in some disparate area to make that leap.
Yeah, but Windows is a more stable API to develop against than Linux (at least when it comes to the stuff games need to do). It doesn't feel "pure", but pragmatically it's much better as a game developer to just make sure the Windows version works with Proton than to develop a native Linux version that's liable to break the second you stop maintaining it.
Sure, slightly inaccurate title, but the point they're making is valid; this comment isn't really a substantive critique.
I could be wrong but I feel like when most HN commenters say that something "uses React" and also imply that that's a bad thing, what they really mean is "it loads a full web rendering engine and consumes ~200mb of unnecessary ram". Neither of those things are true here.
> That is built with React Native for Windows. No, that is not a full JavaScript framework in your start menu.
This is incorrect. It is a full JavaScript framework in your start menu.
I don't see your read that it's about RAM-hungry web views either. To me, "Start menu uses React" is a dig that Microsoft is so uncommitted to its native development platform that they (partially) don't use it in one of the most 'core' parts of the operating system.
Shouldn't devs be allowed to select what they feel is the "best" choice for a given component? While I wouldn't expect to see SwiftUI in Windows from Microsoft, Microsoft hasn't been averse to various NIH web frameworks for quite some time now.
If it fits and meets the goals of the project, why not?
If Microsoft developers' "best" choice for a tiny UI component like this is not its flagship native UI framework, then that's a problem for Microsoft. That is the criticism.
> Shouldn't devs be allowed to select what they feel is the "best" choice for a given component?
To some extent, yes. But if they choose React Native, something's probably wrong, because (despite what the article says) that requires throwing in a JavaScript engine, significantly bloating a core Windows component. If they only use it for a small section ("that can be disabled", or in other words is on by default), it seems like an even poorer trade-off, as most users suffer the pain while the devs take minimal advantage of whatever benefits it provides.
If the developers are correct that this is the best choice, that reflects poorly on the quality of Microsoft's core native development platforms, as madeofpalk said.
If the developers of a core Windows component are incorrect about the best choice, that reflects poorly on this team, and I might be inclined to say no, someone more senior should be making the choice.
There are two possibilities: Either it’s really the best choice among the available frameworks (very questionable), or they picked it regardless. Both reflect badly on Microsoft, given what React Native is, and given how central the Start menu is to the Windows experience.
Here's one: Microsoft management heavily incentivizes their developers to use LLMs for virtually everything (to the "do it or you're fired" level) and the LLM (due to its training data or whatever) is far more able to pump out code with React Native than their own frameworks. This makes it the right choice for them. Not for the user, but you can't have everything.
I don't have any inside information; I'm running with the hypothetical.
>No, Windows Start isn’t built on React. No part of the start menu actually uses React.
But then
>This is the Windows 11 start menu. See that Recommended section at the bottom of it? That is built with React Native for Windows.
It's not just the headline, it's the content of the article.
What it should have done is focused on this claim:
>Microsoft is also vowing to use its native Windows UI framework, WinUI, in more areas of the system
Because like they said, React Native is calling WinUI.
But trying to split the React/Native hair is honestly just tiring. It's like saying you don't drink Coke, and then downing a full glass of Coke Zero. "Oh, but what I meant is that it doesn't call any sugar dependencies at all" is just weird. Just say you don't drink sugar.
Exactly this. And it is on a portion of the Start menu that can be fully disabled. Heck, my start menu looks like the Windows 10 start menu (yes, this is OOTB) as I wasn't fond of the "new" look of the Windows 11 start menu.
But we'll always hear "it's React!". As with most things, the masses must feed on internet outrage without critical thought.
I haven't really evaluated this extensively, but my instinct is to aggressively trim any 'instructions' files. I try to keep mine to a mid-double-digit line count and leave out anything that isn't critically important. You should also be skeptical of any instruction that basically boils down to "please follow this guideline that's generally accepted to be best practice" - most current models are probably already aware of it. Stick to things that are unique to your project, or value decisions that aren't universally agreed upon.
> The biggest problem is that while slow charging (L2) in your own garage would be perfect for 99%+ of people in the US, and isn't even very expensive, that's a barrier to entry most people do not want to screw with.
I feel like this is an opinion held only by people who have never actually used an EV. Plugging in my car at home overnight every few days is infinitely more convenient than needing to drive somewhere else to plug it in. The actual charge time is irrelevant as long as it's not more than ~12 hrs.
I Level 1 charge my car and that is always enough. The salesman who sold it to me says he does the same. It depends on your commute (I typically ride my bike if the weather isn't too bad) and the other trips you make (why I bought it - there is a once-a-week trip I make outside of bike range).