When I was a kid I used to pack my house's cable modem in a backback and bring it to my friend's house a couple miles away when I'd visit to play Xbox Live. My dad had a back-up dial-up connection for emails and mom didn't use the internet very much so usually wouldn't mind unless he needed to work. I remember this working at greater distances in other places occasionally too.
Earlier, in the dial-up era, my dad didn't feel like paying for internet at home and work, so after school I would call his office and ask his secretary if he had left for his evening meetings yet. If so, she'd disconnect his dial-up connection and I'd get a couple hours to myself after school.
We didn't have two phone lines at home so I'm not sure what happened if he needed it unexpectedly. I think he also had a by-the-minute service as a backup or maybe his partner in the office had a separate plan? This was all done under agreed rules I only vaguely remember so must not have been a frequent problem.
Always funny to think back to that era when internet wasn't assumed to be a 24/7 thing and losing internet for a day wasn't the end of the world...
Perhaps, I can't say with 100% certainty that I wouldn't if offered 50k+ just for writing a blog post. But in doing so I would also have to accept being labeled a "crypto shill" instead of "crypto critic" for the rest of my life.
OpenCode also has an extremely fast and reliable UI compared to the other CLIs. I’ve been using Codex more lately since I’m cancelling my Claude Pro plan and it’s solid but haven’t spent nearly as much time compared to Claude Code or Gemini CLI yet.
But tbh OpenAI openly supporting OpenCode is the bigger draw for me on the plan but do want to spend more time with native Codex as a base of comparison against OpenCode when using the same model.
I’m just happy to have so many competitive options, for now at least.
- better UI to show me what changes are going to be made.
the second one makes a huge diff and it's the main reason I stopped using opencode (lots of other reasons too). in CC, I am shown a nice diff that I can approve/reject. in codex, the AI makes lots of changes but doesn't pin point what changes it's doing or going to make.
Yeah it's really weird with automatically making changes. I read in it's chain of thought that it's going to request approval for something from the user, the next message was approval granted doing it. Very weird...
That’s a separate tool though. You don’t want to have to open another terminal to git diff every 30 seconds and then give feedback. Much better UX when it’s inline.
My main hooks are desktop notifications when Claude requires input or finishes a task. So I can go do other things while it churns and know immediately when it needs me.
My favorite part about that is gas town is supposedly so productive that this guys sleep patterns are affected by how much work he’s doing, but he took the time to physically go to a bank to get a 5 figure payout.
It makes it difficult to believe that gas town is actually producing anything of value.
I also lol at his bitching about how the bank didn’t let him do the transactions instantly as he describes himself how much of a scam this seems and how the worst thing is his bank account being drained, like banks don’t have a self interest in protecting their clientele from such scams.
Yes, this exact scenario has happened to me a couple times with both Claude and Codex, and it's usually git checkout, more rarely git reset. They immediately realize they fucked up and spend a few minutes trying to undo by throwing random git commands at it until eventually giving up.
Yeap - this is why when running it in a dev container, I just use ZFS and set up a 1 minute auto-snapshot - which is set up as root - so it generally cannot blow it away. And cc/codex/gemini know how to deal with zfs snapshots to revert from them.
Of course if you give an agentic loop root access in yolo mode - then I am not sure how to help...
The sneaky move that I hate most is when Claude (and does seem to mostly be a Claude-ism I haven’t encountered on GPT Codex or GLM) is when dealing with an external data source (API, locally polling hardware, etc) as a “helpful” fallback on failures it returns fake data in the shape of the expected output so that the rest of the code “works”.
Latest example is when I recently vibe coded a little Python MQTT client for a UPS connected to a spare Raspberry Pi to use with Home Assistant, and with a just few turns back and forth I got this extremely cool bespoke tool and felt really fun.
So I spent a while customizing how the data displayed on my Home Assistant dashboard and noticed every single data point was unchanging. It took a while to realize because the available data points wouldn’t be expected to change a whole lot on a fully charged UPS but the voltage and current staying at the exact same value to a decimal place for three hours raised my suspicions.
After reading the code I discovered it had just used one of the sample command line outputs from the UPS tool I gave it to write the CLI parsing logic. When an exception occurred in the parser function it instead returned the sample data so the MQTT portion of the script could still “work”.
Tbf Claude did eventually get it over the finish line once I clarified that yes, using real data from the actual UPS was in fact an important requirement for me in a real time UPS monitoring dashboard…
It's similar to early versions of autonomous driving. You's not want to sit in the back seat with nobody at the wheel. That would get you killed guaranteed.
Sounds to me like more evidence in favor of the idea that they're meant to play the golden retriever engineer reporting to you, the extremely intelligent manager.
> prompts where we'd ask for e.g. 15 sections, would only do 10 sections and then ask "Would you like me to continue?".
I can’t speak to any kind of identifiable pattern but man that behavior drives me up the wall when it happens.
When I run into a specific task that starts to trigger that behavior, even a clean session with explicit instructions directing it to complete ALL sub steps isn’t enough to push it to finish the entire request to the end.
What I learned from all this is that OpenAI is willing to offer a service compatible with my preferred workflow/method of billing and Anthropic clearly is not. That's fine but disappointing, I'm keeping my Codex subscription and letting my Claude subscription lapse but sure, it would be nice if Anthropic changed their mind to keep that option available because yes, I do want it.
I'm a bit perplexed by some comments describing the situation like OpenCode users were getting something for free and stealing from CC users when the plan quota was enforced either way and were paying the same amount for it. Or why you seem to think this post pointing out that Anthropic's direct competitor endorses that method of subscription usage is somehow malicious or manipulative behavior.
Commerce is a two-way street and customers giving feedback/complaining/cancelling when something changes is normal and healthy for competition. As evidenced by OpenAI immediately jumping in to support OpenCode users on Codex without needing to break their TOS.
Idk if I disagree with anything you're saying, I'm just saying it's a very small minority that and are upset enough to both cancel and announce they are cancelling their subscription is all.
I think I just understand that companies only offer heavily subsidized services in return for something - in this case Anthropic gets a few things - to tell investors how many daily actives are on CC, and a % of CC users opting into data sharing. Plus control of their UX, more feedback on their product, future opportunities to show messages, etc. It's really just obvious and normal and I don't get why anyone would be upset that they removed OC access.
I recently mentioned in another comment that Fedora 43 on my Ideapad is the first “just works” experience I’ve had with my multi monitor setup(s) on anything other than Windows 11 (including MacOS where I needed to pay for Better Display to reach the bar of “tolerable”).
Zero fiddling necessary other than picking my ideal scaling percentage on each display for perfect, crisp text with everything sanely sized across all my monitors/TVs.
I gave up on Linux Mint for that exact reason. I wasted so much time trying to fine tune fonts and stuff to emulate real fractional scaling. Whenever I thought I finally found a usable compromise some random app would look terrible on one of the monitors and I’d be back at square one.
Experimental Wayland on Linux
Mint just wasn’t usable unfortunately and tbh wasn’t a big fan of Cinnamon in general (I just really hated dealing with snaps on Ubuntu). I did tweak Gnome to add minimize buttons/bottom dock again and with that it’s probably my favorite desktop across any version of Linux/MacOS/Windows I’ve ever used!
I kept reading endorsements of Fedora's level of polish/stability on HN but was kinda nervous having used Debian distros my entire life and I’m really happy I finally took the plunge. Wish I tried it years ago!
> I kept reading endorsements of Fedora's level of polish/stability on HN but was kinda nervous having used Debian distros my entire life and I’m really happy I finally took the plunge. Wish I tried it years ago!
This. I don't know why, but people forget about Fedora when considering distros. They rather fight Arch than try Fedora. So, did I. Maybe its Redhat. Wish I switched earlier, too. (Although I heard this level of polish wasn't always the case.)
I love Fedora so much. Everything just works, but that's not that special compared to Ubuntu. What is special is the fucking sanity throughout the whole system. Debian based distros always have some legacy shit going on. No bloat, no snap, nothing breaking convention and their upgrade model sits in the sweet spot between Ubuntu's 4 year LTS cycle and Arch's rolling release. Pacman can rot in hell, apt is okay, but oh boy, do I love dnf.
Tho, Fedora has some minor quirks, which still make it hard to recommend for total beginners without personal instructions/guidance IMO. Like the need for RPMFusion repos and the bad handling/documentation of that. Not a problem if you know at all what a package manager, PKI and terminal is, but too much otherwise.
I dual booted Fedora back when it was still called Fedora Core from version 6 until 11-ish. I had it installed on a laptop and had a lot of driver issues with it and eventually didn't bother with dual booting when I moved to a new laptop.
I'm now looking to get off Windows permanently before security updates stop for Win 10 as I have no intention of upgrading to Win 11 since Linux gaming is now a lot more viable and was the only remaining thing holding me back from switching earlier. I've been considering either Bazzite (a Fedora derivative with a focus on gaming) or Mint but after reading your comment I may give vanilla Fedora a try too.
So far I've tried out the Bazzite Live ISO but it wouldn't detect my wireless Xbox controller though that may be a quirk of the Live ISO. I'm going to try a full install on a flash drive next and see if that fixes things.
Give it a try! Although, I do all my gaming on a Playstation. In Fedora, the Steam and NVIDIA Fusion repos come preinstalled and can be enabled during installation or in Gnome's 'Software' or the package manager later, but I can't speak to that. The opensource AMD drivers are in the main repo no action needed. ROCm too, but that can be messy and is work-in-progress on AMD's side. Can't vouch for the controller, but people claim they work. Guess, that's the live image. I heard, games with anti-cheat engines in the kernel categorically don't work with Linux, but this may change at some point. In that case, or if you want "console mode", a specific gaming distro may be worth considering, otherwise I would stick to vanilla. Good luck! Hope I didn't promise too much ;)
So I cleared out one of my SSDs and installed Fedora yesterday.
I still had the issue of no gamepad detection. I had to install xone which took some trial and error. Firstly, I didn't have dkms installed and secondly, soon after installing Fedora the kernel was updated in the background and on reboot my display resolution was fixed to 1024x768 or something for some reason (that's gonna be another issue I'll have to look into). I rebooted and went back to the previous version and then dkms complained the kernel-headers were missing. However, the kernel-headers were installed for the latest kernel but not the older version I had rebooted to. I'm not used to Fedora or dnf (I run Proxmox+Debian in my homelab) so after a quick search to figure out how to install a specific version of a package (it's not as simple as <package>@<version> but rather <package>-<version>.fc$FEDORA_VERSION.$ARCHITECTURE) I got kernel-devel installed and was able to finally run the xone install script successfully and have my gamepad detected.
The most frustrating thing is that the xone install script doesn't fail despite errors from dkms so after the first install (where I almost gave up because I thought something was wrong with my setup) I had to run the uninstall script each time there was a problem and then run it again. The xone docs also mention running a secondary script which doesn't actually exist until the first script runs successfully so that added a lot of confusion.
My understanding is you only need xone for the special adapter right? Have you tried cable and plain bluetooth before? Also Steam seems to come bundled with their own drivers for it, so the controller may just work within games in Steam, regardless.
I feel a bit bad, but honestly gaming on Linux is not my thing. From a quick glance, messing with the kernel like that may cause problems with secure boot and maybe that's causing your issues. Maybe you need to sign your modules or disable secure boot.
And of course Bazzite seems to have addressed this out-of-the-box... :D
Quite frankly, if you want to do anything but gaming on that machine, at least for me, manually installing kernel modules from GitHub would be a deal breaker, since that seems rather unstable and prone to cause nasty problems down the line.
I'd rather use the 2.4Ghz adapter rather than Bluetooth as the connection is supposedly more reliable (and less prone to latency issues) from what I've read. Anyway, after jumping through all those hoops I did get it working so I'm happy with xone for now. I even managed to boot into the newer version of the kernel without the degraded display resolution issue after that.
I have a new issue though after updating 900+ packages using KDE Discover which is that the GUI login doesn't work. The screen goes blank after I enter credentials and nothing happens unless I switch to another TTY at which point I get thrown back to the login screen on TTY1. As a workaround, I can login on another TTY and then use startplasma in order to use KDE. I've learnt my lesson not to use KDE Discover for updates though because it doesn't get logged in dnf history so you can't use dnf rollback.
You are right, I got that mixed up. To be fair, I somehow also thought of yearly releases for Fedora, which isn't the case. It's every six months, so the relation remains identical, just off by a factor of 2 :D
The first time I’ve had my multi-monitor setup(s) “just work” on Linux is recently installing Fedora 43 on my Ideapad. (After becoming exhausted trying to tweak Linux Mint to get tolerable sizing across all the screens).
Wayland per-monitor fractional scaling is delightful and after a couple gsettings tweaks restoring minimize/bottom dock I’ve been loving the polish and snappiness of Gnome. I also had to switch the WiFi backend from wpa_supplicant to iwn due to connection problems on one specific WiFi network but now it’s totally stable.
macOS multi-monitor support and scaling is a constant thorn in my side that was marginally improved by paying for Better Display. Windows 11 really is the most solid option for various monitor combinations not in Apple's happy path of resolutions/sizes.
But I don’t really like the ergonomics of using even clean de-bloated Windows as my main dev machine, so was very pleased to have such a great out-of-the-box experience trying Fedora for the first time.
Apple took a shortcut for DPI scaling implementation because they only care about selling their own hardware.
If you use anything else, it's a pain in the ass. This is a big problem of today's Apple, because they can't manage to release competitively priced hardware in some categories.
Earlier, in the dial-up era, my dad didn't feel like paying for internet at home and work, so after school I would call his office and ask his secretary if he had left for his evening meetings yet. If so, she'd disconnect his dial-up connection and I'd get a couple hours to myself after school.
We didn't have two phone lines at home so I'm not sure what happened if he needed it unexpectedly. I think he also had a by-the-minute service as a backup or maybe his partner in the office had a separate plan? This was all done under agreed rules I only vaguely remember so must not have been a frequent problem.
Always funny to think back to that era when internet wasn't assumed to be a 24/7 thing and losing internet for a day wasn't the end of the world...
reply