Hacker News | Xeoncross's comments

Instead, the reply/rebuttal almost always comes from a new person. It makes for nicer reading when you have 6 people in an argument keeping each other honest vs 2.

> Total alerts/errors found: 7

Apps written in an exceptions-based language (Java, JavaScript, PHP, etc.) are really annoying to monitor, as everything that isn't the happy path triggers an 'error'/'fatal' log/metric.

Yes, you can technically work around it with (near) Go-level error verbosity (try/catch blocks around every call), but I've never seen a team actually do that.

Modern languages that don't throw exceptions for every error like Rust, Go, and Zig make much more sane telemetry reports in my experience.

On this note, a login failure is not an error, it's a warning because there is no action to take. It's an expected outcome. Errors should be actionable. WARN should be for things that in aggregate (like login failures) point to an issue.


> On this note, a login failure is not an error

Login failure is like the most important error you'll track. A login failure isn't necessarily actionable but a spike of thousands of them for sure is. No single system has been more responsible for causing outages in my career than auth. And I get that it's annoying when they appear in your Rollbar but sometimes Login Failed is the only signal you get that something is wrong.

Some 3rd party IdP saying "nope" can be innocuous when it's a few people but a huge problem when it's because they let their cert/application token expire.

And I can already hear the "it should be a metric with an alert" and you're absolutely right. Except that it requires that devs take the positive action of updating the metric on login failures vs doing nothing and letting the exception propagate up. And you just said login failures aren't errors and "bad password" obviously isn't an error so no need to update the metric on that and cause chatty alerts. Except of course that one time a dev accidentally changed the hashing algorithm. Everyone was really bad at typing their password that day for some reason.


> Login failure is like the most important error you'll track. A login failure isn't necessarily actionable but a spike of thousands of them for sure is.

Sounds like you agree with me. Re-read my comment. Errors are actionable individually. Warnings are actionable in aggregate.

You don't have to treat logs and metrics as separate, you can have rules on log counts without emitting a metric.


Rather than login failures I would monitor login successes. A sharp decrease of successes likely points to some issue, but an increase in login failures might easily be someone trying tons of random credentials on your website (still not ideal, but much harder to act on)


Creating this metric/alert is practically a rite of passage for junior ops people who then get paged around 5pm.


I'm glad the air now comes standard with 16GB of RAM and 512GB disk space.

It's not that the M1 with 8/256GB was slow at all, but even browsing the web gets into 12GB of usage, and exhausting the 256GB is fairly easy if you back up your 256GB phone, try to edit a few videos, download enough Gradle/Go/Cargo/Node packages, or install enough 20GB office apps.

Any Apple Silicon machine with 16GB / 512GB of storage (even the M1 series) should have a much longer useful life, and the disk should age less rapidly without the constant swapping.


Can I just lean on my cane for a second, and say that the first machine I connected to a network had 256KB RAM, and I considered myself lucky to have so much. My 150 baud modem downloaded text slower than I could read it.

I know how we got to these large numbers. Shit, I helped build the road. It still blows my brains out.


Luxury. I dialled into FidoNet with my 64k Amstrad CPC (contd. p94. Mein gott I’m old)


A whole 64k, huh? IIRC those things presented the most costly spilled drink opportunity of any computer at the time.


Yup. I ran UUCP on a 64K Kaypro with a 1200 baud modem.


1200 baud? It was like science fiction come to life the first time I saw text downloading faster than I could read :D


I was fortunate to have friends who worked for AT&T back then and they were able to get the 1200 baud modem for nothing. It was soooo speedy!


Let's be real, the fact that the Air is good for developers is.. honestly, great.

But these devices are meant for home users.

Not a tremendous number of home users have huge gradle/go/cargo/node packages in my experience.

The backup problem is real, I'm surprised Apple doesn't come out with a new time capsule (edit: for phones/tablets)- but I guess they want that sweet iCloud services dollar.


Maybe I’m forgetting all the benefits of Time Capsule, but you can plug any old storage device into a Mac now and turn it into a “Time Machine.” It’s pretty turnkey at this point. What would a modern Time Capsule offer besides maybe remote backups?


oh no, absolutely- apologies for the confusion.

Time "Machine" on MacOS continues to work (though it's clearly not as important to Apple as it once was).

The issue is that if you want to back up a phone, it will take space from your laptop, and the phone must be tethered to do the backup. This means that if you have a 1TiB phone, like I do, you need at least 1TiB of local disk on your laptop to be able to do a single backup if the phone is anywhere near full.

This is in contrast to how Time Machine works right now on macOS, whereby you have an SMB share (like a 100+TiB NAS) and your laptop will just back itself up when it can.

Such a feature would be pretty killer on iPhones/iPads, or having a "photo server" to offload your photos... idk, but Apple won't do it.


I’ve been using Immich to offload photos, and it’s been working well so far.


Can you symlink the iPhone backup location to an external drive?


Might be smarter to try mounting the drive on the path that backups use, instead of hoping that the software follows symlinks.

That said, we're very much in "power user" territory now, and it does nothing to support the untethered use-case that Time Machine allows.

In fact, the real punch of my comment before was that this would be a way of selling additional hardware (the old time-capsules) to consumers.


That’s a really good point.


I'm excited about this. The previous-generation base model 15" Air was good enough for our company to make it the default computer for everyone. Previously we were giving out base model MBPs. And the Airs are $1000 cheaper.

Today, the MBP is just way too powerful for anything other than specific use cases that need it.


Out of curiosity, what are some good use cases for a MBP now with the MBAs being so powerful?

I can think of things like 4K video editing or 3D rendering but as a software engineer is there anything we really need to spend the extra money on an MBP for?

I'm currently on a M1 Max but am seriously considering switching to an MBA in the next year or two.


The Apple Silicon fanless MBAs are great until you end up in a workload that causes the machine to thermal throttle. I tried to use an M4 MBA as primary development machine for a few months.

A lot of software dev workflows require running some number of VMs and containers; if this is you, the chances of hitting that thermal throttle are not insignificant. When throttling under load occurs, it’s like the machine suddenly halves in performance. I was working with a mess of microservices in 10-12 containers and eventually it just got too frustrating.

I still think these MBAs are superb for most people. As much as I love a solid state fanless design, I will for now continue to buy Macs with active cooling for development work. It’s my default recommendation anytime friends or relatives ask me which computer to buy and I still have one for light personal use.


While I agree that the slowdown is very noticeable once the MBA gets hot to the touch, I joke that it's a feature, encouraging you to take a cooldown break every once in a while :-)

More seriously though I agree it depends on workload. If you've got a dev flow that hits the resources in spikes (like initial builds that then flatten off to incremental) it works pretty well with said occasional breaks but if your setup is just continuously hammering resources it would be less than ideal.


It is just Apple’s way of telling you not to use microservices.


It's the things outside the CPU and GPU that made me choose a base model M5 MacBook Pro. I prefer the larger 14-inch screen for its 120Hz capability and much better brightness and colour. I adore that there are USB-C ports on both sides for charging. The battery's bigger. That's about it.


If nothing else, I’ve learned that for me personally, 14” is the sweet spot for a laptop. It’s just enough over the 13” to be good, without being obnoxiously large.

I also like NanoTexture way more than I thought I would, so there’s that.


I hit limitations of M1 Max Pros all the time (generally memory and CPU speed while compiling large C++ projects).

Airs are good for the general use case, but some development (Rust, C++) really eats cores and memory like nothing else.


What are your specs?

That does seem to fit the bill though of being more of a niche use case for which MBPs will be best suited for going forward.

Seems like most devs who are not on rust/c++ projects will be just fine with an Air equipped with enough memory.


> Out of curiosity, what are some good use cases for a MBP now with the MBAs being so powerful?

Local software development (node/TS). When opus-4.6-fast launched, it felt like the limiting factor in turnaround time moved from inference to the validation steps, i.e. executing tests, running the linter, etc. Granted, that's with endpoint management slowing down I/O, and hopefully tsgo and some eslint replacement will speed things up significantly over there.


The MacBook Pro has an HDMI port and an SD card slot; it’s great to not have to look for a dongle. Steep price difference though.


Running an LLM locally on LM Studio. I find that it can tax my M4 Pro pretty well.


It's a personal thing how much you care, but the speakers on the MBPs are pretty amazing. The Air sounds fine, even good for a notebook, but the MBPs are the best laptop speakers I have ever heard.


Yes, 10-15 years ago the MBP felt more prosumer to me, but nowadays MBPs have monstrous performance and price points, like true luxury items or enterprise devices, so I'm happy to see good base specs on the MBA. The base spec on that device matters a lot. Also, Apple will probably release a cheaper MacBook this week, and if the rumor holds, it'll be good enough for most consumers.


The base 15" MacBook Pro was $2,399 10 years ago ($3,251.07 adjusted for inflation); today it is $2,699.

https://everymac.com/systems/apple/macbook_pro/specs/macbook...


Because you can buy it with 32GB of unified RAM, the MBP is now actually the cheapest device for something... useful local AI models!


Have you used local AI models on a 32 GB MBP? I ask because I'm looking to finally upgrade my M1 Air, which I love, but which only has 16 GB RAM. I'm trying to figure out if I just want to bump to 32 GB with the M5 MBAir or make the jump all the way to 64 GB with the low-end M5 MBP.

I love my M1 Air and I don't typically tax the CPU much, but I'm starting to look at running local models, and for that I'd like faster and bigger. That said, I don't want to overpay. Memory is my main issue right now.

Anyway, if you have experience, I'd love to hear it. Which MBP, stats of the system, which AI model, how fast did it go, etc.?


For local models are you wanting to do:

A) Embeddings.

B) Things like classification, structured outputs, image labelling etc.

C) Image generation.

D) LLM chatbot for answering questions, improving email drafts etc.

E) Agentic coding.

?

I have a MBP with M1 Max and 32GB RAM. I can run a 20GB mlx_vlm model like mlx-community/Qwen3.5-35B-A3B-4bit. But:

- it's not very fast

- the context window is small

- it's not useful for agentic coding

I asked "What was mary j blige's first album?" and it output 332 tokens (mostly reasoning) and the correct answer.

mlx_vlm reported:

  Prompt: 20 tokens @ 28.5 t/s | Generation: 332 tokens @ 56.0 t/s | Peak memory: 21.67 GB


Thanks for the info.

I’d like to do agentic coding first, but then chatbot and classification as lower priorities. I don’t really care about image gen.

Also, if you’re only able to run 35B models in 32GB, it seems like I’d definitely want at least 64GB for the newer, larger models (Qwen has a 122B model, right?). My theory there is that models are only getting larger, though perhaps also more efficient.


MPS is OK-ish, if that is what you mean. The ANE and CoreML are kind of meh for most use cases, though great for some very specific tasks.

https://github.com/hollance/neural-engine/blob/master/docs/a...


I have noticed something similar. Among the computer science undergrads and grad students I work with, the Air is much more common than among the premeds and med students, many of whom have MBPs (and who, I presume, do not need that much power).


I think it's because compsci people know what they need to a greater degree than other majors. It's easier to upsell a computer to someone who doesn't really know about computers.

It could also be possible that compsci kids have a powerful desktop at home, or are more savvy with university cloud computing, for any edge cases or computationally expensive tasks.


I use vscode's tunnel from my MacBook Air to my Archlinux desktop a lot.

The MacBook Air has ~16 GiB RAM. The Desktop has 128 GiB, and a lot more processing power and disk space.


It’s possible that their departments give them computer recommendations that exceed what they actually need.

I’m not sure why this happens or who formulates these recommendations, but I’ve seen it before with students in fields that just don’t do much heavy duty computation or video editing being told to buy laptops with top-of-the-line specs.


I think there is a tendency to simply give in and buy bigger hardware if something doesn't work. With friends and family, I sometimes feel like I have to talk them off the ledge before they pull the trigger on really expensive (relative to the tasks they're doing) hardware, when performance is often abysmal simply because they trashed their OS with malware and bloatware and whatnot and can't understand any of that.

It's the same at work, to some degree. Our in-house ERP software performs like a sack of rocks kicked down a hill. I don't know how many times I've had to show devs that the hardware is actually idle and they're mostly derailing themselves with DB table locks, GC issues, and whatnot. If I weren't pushing back, we probably would have bought the biggest VMs just to let them sit idle.


This is also just the direction that AI is taking us, even for people who wouldn't describe themselves as traditional developers.

Setting aside on-device LLMs, one needs RAM and disk space just for the multiple isolated Claude Cowork etc. VMs that will increasingly become part of people's everyday lives.

And when it's easier than ever to create an Electron app, everything's going to have an Electron app, with all the RAM/disk overhead that entails. And of course, nobody's asking their agents "optimize the resource usage of the app I made last week" - they're moving on to the next feature or project.

I suppose the demoscene will always be there, for those of us who increasingly need a refuge from RAM-flation.


Where does it stop? Of course having a bit more room does not hurt, but my view is that if 256GB was not enough for you, 512GB wouldn't be either.

To me it's mostly about learning to manage RAM and storage space on your machine. A lot of stuff does not need to be hoarded on the machine. Move infrequently accessed data to an external drive. Be ruthless about purging stuff you no longer really need. Refuse to run apps that consume tens of GBs of RAM on a whim (looking at you, Firefox; I've been impressed with how efficient and stable the Helium browser has been for me). If you are a developer, engineer for efficient use of RAM and storage.

Like I said, 16GB RAM and 512GB storage as a minimum is nice, but if the fundamental issues that contribute to massive and wasteful use of resources on our machines are not addressed, nothing will be enough.


>Where does it stop?

I don't know, but macOS is making it ever more difficult to manage storage, with lots of random things under "macOS" pushing ~40GB, or "System Data" collecting a crapload of unrelated things like podcast downloads [1], with no easy way to purge.

[1] I spent too much time hunting down ~250GB of missing disk space, and it turns out it was the Podcasts app's cache, while the app itself reported no downloads. I fully expected this to be managed automatically, but was getting out of disk space warnings. It's a mess.


I think 512GB is a fair minimum for a computer these days, but I agree with your "Where does it stop?" sentiment when it comes to RAM.

If browsing the web takes 12GB of RAM, at what point do we stop chasing after more RAM and instead start demanding better performance and resource usage out of the web?


My first Apple Silicon machine was an 8GB/512GB M1 MacBook Air. I rarely bumped up against the RAM, but I was pretty happy using between 300-400GB on the SSD, so I really think the 512GB was plenty. I have a 1TB machine now, and typically still use less than 512GB...but now and then I've found a good use for nearly all of that terabyte.

You're right, learning to manage storage space is important, but you need to have some storage space to manage first. 256GB is the bottom of the barrel.


From observing family members, 256GB is usually fine, but small enough that normal computer use can accidentally fill it up. 512GB provides plenty of headroom for them. 512GB is tight for more involved usage that’s not serious media creation, and 1TB is comfortable. 1TB seems like the realistic minimum for heavier media creation.


I agree with the sentiment, but in general it's not worth my time to try to purge. I used to do that back in 2005. Heck, in the 1990s, I'd buy a new hard drive every year. But these days, I find that a hard drive lasts me for 5 years if I plan well.


It doesn't stop, it's just where we are in this rolling window of time.

16GB of RAM (currently) works for 90% of professions' daily needs.


Even with the $100 price bump, I think this is a win. 16/512 is a very nice base spec on Mac.


That works for a LOT of people. Not me, but the everybody else in my family.


I have a 16GB/512GB M1 Air (2020) because I knew I would need the extra space, but this really makes me happy. A new Air with higher headroom and an M5 is awesome. It’s not an MBP, but it’s good enough for 95% of the daily stuff. If you aren’t running local agents, this would be amazing.


I have an M3 8GB Air and it's mostly fine, unless I have a node server running or similar. Otherwise it's not very different from my M4 16GB iMac.

I've no idea what the storage is on either of them, I've never looked. The days of needing storage are behind me, personally


I never understood the 256GB SSD on the MBA. That's no space at all.


This is why I jumped from PHP to Go, then why I jumped from Go to Rust.

Go is the most batteries-included language I've ever used. Instant compile times mean I can bind my tests to ctrl/cmd+s and run them every time I save the file. It's more performant (way less memory, similar CPU time) than C# or Java (and certainly all the scripting languages) and contains a massive stdlib for anything you could want to do. It's what scripting languages should have been. Anyone can read it, just like Python.

Rust takes the last 20% I couldn't get in a GC language and removes it. Sure, its syntax doesn't make sense to an outsider and you end up with 3rd-party packages for a lot of things, but you can't beat its performance and safety. It removes a whole lot of tests, as those situations just aren't possible.

If Rust scares you use Go. If Go scares you use Rust.


It's almost comical how often people bring up Rust. "Here's a fun PHP challenge!" "Let's talk about Rust..."


Yep. It's like a crossfit vegan religion at this point.

You don't even have to ask. They will tell you and usually add nothing to the conversation while doing so.

Quite off-putting.


Sorry, but it's honestly just a lot of our journeys. We started on scripting languages like PHP/Ruby/Lua (self-taught) or Java/VB/C#/Python (college) and then slowly expanded to other languages as we realized we were being held back by our own tools. Each new language/relationship makes you kick yourself for putting up with things so long.


I understand that but there's a time and a place. Rust has nothing to do with this. 100% of the people on this site understand that this challenge can be done faster in C, or Rust, or whatever. This is a PHP challenge. Perhaps we could discuss the actual submission as opposed to immediately derailing it.


> I understand that but there's a time and a place.

Dude, this is a website where a bunch of developer nerds congregate and talk shop. They're fine, this is the same kind of shit that's been happening across these kinds of sites for decades.


I don't know about that... I like Rust a lot... but I also like a lot of things about C# or TS/JS... I'll still reach for TS first (Deno) for most things, including shell scripting.


I mean, it's kinda like complaining that people are mentioning excavators on your "how I optimised digging a massive ditch with teaspoons" post.


It's more like mentioning excavators in posts about the best vehicles to do groceries with.


Can't speak for go... but for the handful of languages I've thrown at Claude Code, I'd say it's doing the best job with Rust. Maybe the Rust examples in the wild are just better compared to say C#, but I've had a much smoother time of it with Rust than anything else. TS has been decent though.


I am not smart enough to use Rust, so take this with a grain of salt. However, its syntax just makes me go crazy. Go/Golang, on the other hand, is a breath of fresh air. I think unless you really need that additional 20% improvement that Rust provides, Go should be the default for most projects between the two.


I hear you, advanced generics (for complex unions and such) with TypeScript and Rust are honestly unreadable. It's code you spend a day getting right and then no one touches it.

I'm just glad modern languages stopped throwing and catching exceptions at random levels in their call chains. PHP, JavaScript, and Java can (though not always) have unreadable error-handling paths, and they hardly augment the error with any useful information, so you're left relying on the stack trace to piece together what happened.


Thank you for sharing, I would like to see a navigation/menu component added though as that's required for most websites.


You're just a citizen. Why would any of the three-letter agencies work on what you need?


I really wish more people wanted screens that looked as good as their cellphone.

Bright, sharp text, great color. We've had the great Apple Studio Display for years now; it's about time others came along to fix some of its shortcomings, like the 27" size, 60Hz, and lack of HDMI ports for use with other systems.

So many of us have to stare at a screen for hours every day and having one that reduces strain on my eyes is well worth $1-3k if they'd just make them.


The company I work at gives all new developers a pair of 1080p displays that could have come right out of 2010.

It amazes me, and it’s so sad. They have no idea what they’re missing. I’m sure high PPI would pay off fast in eye strain. And it’s not like monitors need replacement yearly. Tons of time to recoup that small cost.

I’m not arguing for $2k 37” monitors, just better than $200 ones.


Even $200 will already buy a 4K 27" (LG). Which aren't even bad. I swear by HiDPI as well but my work is the same. 1080p displays and really bad contrast screens too. Definitely not TN (they're not that bad) and not VA (they tend to have way better contrast than IPS). Probably just bottom barrel IPS.


Just about every company does something like this.

At one point in my career, I just started buying my own monitors and bringing them into work.

I remember when ~19 or 20" was the norm, and I bought a dell 30" 2560x1600 monitor. Best $1400 I ever spent, used it for years and years.

(I still have it although I retired it a few years back because it uses something called dual-link DVI which is not easily supported anymore)

I think if you are an engineer, you should dive headlong into what you are. Be proactive and get the tools you need. Don't wait for some management signoff that never comes while you suffer daily, and are worse at your job.


I work for a white-shoe law firm in Boston. I and most of my peers have total compensation approaching $500k.

And we have 1280x1024 monitors from the 00s, and we're not allowed to have anything better, even out of our own pockets, because "that's what we use here".


If you reach the point where you want to replace your in house IT with a company that will give you good tech and good tech support, let me know. I know a few people.


Out of all the places I would think would want fancy monitors to show off how fancy they are…


Too much creative vibes, fancy monitors do not necessarily communicate seriousness.


So you work at the same company I do? They just “upgraded” us to curved Dell UltraWides with a PPI of 110.

Unsurprisingly this is not a motivating factor to come back to the office, given I have a 220 PPI 6K at home.


> I’m sure high PPI would pay off fast in eye strain.

But we have gray on gray, to compensate. One even has a choice: do you want light or dark eye strain?


Penny-pinching on monitors for the devs that cost a fortune...

:(


The old monitors still work. It's a waste to throw them away. Something like that?


I’ve never heard a specific reason, but I fully suspect “it’s fine, why do you need that?” to be the answer.

Maybe it's people who have never used HiDPI. Maybe they've seen it and don't get the hype. Maybe they're just penny-pinching. IDK.


I don't think people care all that much about phones. It's just that phones are power-constrained, so manufacturers wanted to move to OLEDs to save on backlight; and because the displays are small, the tech was easier to roll out there than on 6k 32-inch monitors.

But premium displays exist. IPS displays on higher-end laptops, such as ThinkPads, are great - we're talking stuff like 14" 3840x2160, 100% Adobe RGB. The main problem is just that people want to buy truly gigantic panels on the cheap, and there are trade-offs that come with that. But do you really need 2x32" to code?


The other thing about phones is that you have your old phone with you when you buy a new one, so without even really meaning to you're probably doing a side by side direct comparison and improvements to display technology are a much bigger sales motivator.


This is the insight that sold a billion iPhones. They were obsessed with what happens when you’re at the store, and you don’t need a new phone, and you pick one up, and…


Most people, including people who work professionally with computers, spend more time per day looking at their phones than they do at their screens.

I expect people are VERY sensitive to mobile phone screen quality, to the point that it's a big factor in phone choice.


Outside Thinkpads IPS is basically the cheap/default option on laptops, with OLED being the premium choice. With Thinkpads TN without sRGB coverage is the cheap/default option, with IPS being the premium choice.


I'm just getting my new ThinkPad tomorrow with an OLED screen. The X1 Carbon. I haven't seen TN film in ThinkPads for years.

But yes, you are right, they are conservative on new tech in the ThinkPad lineage.


Which ThinkPad has a 14" 4K 100% Adobe RGB compliant IPS display?


As far as I can see, 4k Thinkpad IPS are DCI-P3. There are Yogas with 3.2k Adobe RGB tandem OLEDs.


A fast color e-ink would be possible, but development would be very expensive for an unknown market. It would be a perfect anti-eye-strain second monitor though.


Dasung looks to be getting there!


Is there a shortlist of top of the line utilitarian monitors that you can just buy, without researching or being some niche gamer?* Something similar to LG G-series TV's. Seems like Apple Studio, Dell UltraSharp are on that list. Any others?

*Struggling for words, but I'm looking more for the expedient solution rather than the "craft beer" or "audiophile" solution.


Keep in mind that normal OLEDs are quite bad for typical development tasks: lots of text with high contrast. Here is an example that would be unbearable for me: [1]. For text, IPS rules so far. For video and games, definitely OLED.

[1] https://www.savanozin.com/projects/qod


True for current OLED panels, but new OLEDs with LCD-like subpixel arrangements were just announced at CES. Those shouldn't have that problem.

https://news.lgdisplay.com/en/2025/12/lg-display-unveils-wor...


Many monitors use the same panels with only firmware differences. The panel technology IPS/VA/OLED/WOLED is what you shop for.

If you're a gamer QDOLED is best. If you do office work just get whatever is high resolution and makes text sharp.


If you truly don't want to research, use rtings' "best monitor for X" articles, find your budget, and buy that one. If you feel the need to compare further, pop the model numbers into your favourite LLM.


32uq850 would be my choice if I were in Europe.


I have 27" 5K monitors at home since I WFH. One reason I don't really want to RTO is because these monitors aren't standard yet even if they have been out for more than a decade now (and my FAANG employer won't spring for the good stuff). That and my mechanical keyboard would never work in an open office :P.


Isn't the main difference glossy vs matte? With glossy you get usually bright great color and that's what you get on cellphones and Macbooks as well. For some reason matte is still the preference when it comes to monitors and you can't escape their muted color palette.


> matte is still the preference when it comes to monitors

The larger screen size of a monitor is more likely to reflect lights than a mobile phone screen.


Also when glare does appear, it is harder to adjust the viewing angle because the user is not already holding the device.


I have trouble making out details on my 45" UWQHD (3440x1440) displays... so I don't see much point.. maybe slightly easier to read typefaces... I am already zooming 25% in most of the time.

On the plus side, I can comfortably fit my editor on half the screen and my browser on the other half.


Most people run in some hiDPI mode so text doesn’t become tiny

But 1440p on a 45” is not good PPI. That could be why you’re struggling to see text clearly


Or... it could be my general vision-loss issues, in that I can hardly see...

I can't make out a native pixel as it is. I understand that if it had 2x the PPI that it might handle rendering better and that may help, some with visibility... I had a first generation MBP with retina display, and it was amazing. But that's not my issue here. Not to mention the trouble having my work laptop push effectively 4x the pixels if such a mythical beast existed.

The size is so that I can actually work with a single screen, editor on one side, browser on the other... it's almost like 2x 3:2 displays in one. For a workflow it's pretty good... I don't game much, but it's nice for that and content viewing as well. I had considered using side by side displays, like 2x 27" in portrait mode... but settled on this, which is working surprisingly well for now.


This interaction fully fleshes out the problem. The math is not being done between size and resolution to determine PPI.


Yeah PPD is more useful, although for ultrawide I’ve also heard it’s common to have it closer than regular viewing distance, so that you can glance at side screens / information


I would love the apple studio display if the framerate wasn't actually crap.

Even doing programming, 60hz is not enough man.

Plus more peripheral ports.


I have a Studio Display and would also love if it had a much higher refresh rate, but only because I play WoW on it. Why isn't 60hz enough for programming? I don't think I notice the refresh rate at all when not playing a video game or watching videos.


When I move my mouse, or if I'm moving between full screens (terminal to IDE to browser), I see the stutter and slowness.

I code on a LG UltraGear OLED 4K HDR 240 Hz Gaming Monitor.

Could be my sensitivity though.


I'm personally not very sensitive to refresh rates, I only really notice it in video games and it wasn't enough to keep me from replacing my 120hz primary monitor with the Studio Display. I was just curious about why you prefer higher refresh rate for programming, thanks for answering!


> I see the stutter and slowness

Sounds like it is not maintaining 60 fps. Higher display refresh rate wouldn't fix that.


$1-3k is 52 weeks of groceries for some people.


It seems rather silly to assume all people universally have the same needs, desires, and expenses. We don't live in the world of The Giver. I can accept that firefighters need a truck much more advanced and expensive than I ever will. It would be odd to compare that expense to how many pizzas I order each year.


> So many of us have to stare at a screen for hours every day and having one that reduces strain on my eyes is well worth $1-3k if they'd just make them.

I'm 53 y/o and didn't have glasses until 52. And at 53 I only use them sporadically. For example atm I'm typing this without my glasses. I can still work at my computer without glasses.

And yet I spent 10 hours a day in front of computer screens since I was a kid nearly every day of my life (don't worry, I did my share of MX bike, skateboarding, bicycling, tennis, etc.).

You know the biggest eye relief for me? Not using anti-aliased fonts. No matter the DPI. Crisp, sharp, pixel-perfect fonts only for me. Zero AA.

So a 110 / 120 ppi screen is perfect for me.

Not so if you do use anti-aliased fonts (and most people do); I understand the appeal of smaller pixels for more subtle AA.

But yup: pixel perfect programming font, no anti-aliasing.

38" ultra-wide, curved monitor. Same monitor since 2017 and it's my dream. My wife OTOH prefers a three-monitor setup.

So: people have different preferences and that is fine. To each his own bad tastes.


Which font do you prefer for code?


We do, but many of us also want to play a game on occasion and GPUs can barely just handle 4k these days, let alone 6k+.


Anecdata, but I played games at 4K on a 4GHz Haswell (2013) + 1080 Ti (2017). Definitely faster at 2K, but 4K was serviceable. It's probably less true now that I'm a year-plus removed from that hardware, but 4K gameplay is surprisingly accessible on modest hardware IMO.


I currently have a 4k monitor (+nv4070-super) and it does handle some games fine at 4k but for others I need to use 2k w/ upscaling. Depends on the game.


So, good news: there are a fair number of monitors coming soon that are super high resolution and offer a "dual mode", i.e. a lower resolution at a higher refresh rate. They are pretty cool.


That’s not really true. I tried out 5K, which you'd reasonably expect to be quite heavy, but honestly with DLSS it’s super viable. If you get the gaming versions of these displays, they also have dual modes; in that mode a 6K display is going to be less heavy to run than a 4K one, and a 5K display drops to 1440p.


I game at 6K… I don’t play shooters so it’s fine. I turn off VSync and get 100FPS+ in my game of choice (WoW, admittedly an oldish engine). 4090 GPU.


Upscaling tech has come a long way, so games can run at lower internal resolutions than your monitor without looking like complete crap on an LCD.


some of these newer monitors support a lower native resolution as well, usually with a faster refresh rate. it's a nice feature


One of the things I wish more people talked about isn't just the language or the syntax, but the ecosystem. Programming isn't just typing, it's dealing with dependencies and trying to wire everything up so you can have tests, benchmarks, code-generation and build scripts all working together well.

When I use modern languages like Go or Rust, I don't have to deal with all the stuff bolted onto other languages over the past 20 years, like Unicode, unit testing, linting, or concurrency; it's part of the language and tooling from the start.

I use Go where the team knows Java, Ruby, or TypeScript but needs performance with low memory overhead. All the normal stuff is right there in the stdlib, like JSON parsing, ECC/RSA encryption, or image generation. You can write a working REST API with zero dependencies. Not to mention that so far every Go program I've ever seen still compiles fine, unlike those Python or Ruby projects where everything is broken because it's been 8 months.

However, I'd pick Rust when the team isn't scared of learning to program for real.


I like Rust.

I don't like that for fairly basic things one has to quickly reach for crates. I suppose it allows the best implementation to emerge and not be concerned with a breaking change to the language itself.

I also don't like how difficult it is to cross-compile from Linux to macOS. zig cc exists, but quickly runs into a situation where a linker flag is unsupported. The rust-lang/libc crate also (apparently?) insists on adding a flag related to iconv for macOS even though it's apparently not even used?

But writing Rust is fun. You kind of don't need to worry so much about trivialities because the compiler is so strict and can focus on the interesting stuff.


Yeah, I've never seen an all-in-one language like Go before. Not just a huge stdlib where you don't have to vet the authors on github to see if you'll be okay using their package, but also a huge amount of utility built in like benchmarking, testing, multiple platforms, profiling, formatting, and race-detection to name a few. I'm sad they still allow null, but they got a lot right when it comes to the tools.

Everything is literally built-in. It's the perfect scripting language replacement with the fast compile time and tiny language spec (Java 900 pages vs Go 130 pages) making it easy to fully train C-family devs into it within a couple weeks.


Oh, Go with Rust's Result/Option setup and maybe better consts like in the article would be great.

Too bad null/nil is here to stay since there's no Go 2.

Or maybe they would? IIRC 1.21 technically had a breaking change related to for loops. If only it were possible to have migration tooling. I guess it's too large of a change.


Technically, with generics, you could get a Result that is almost as good as Rust, but it is unidiomatic and awkward to write:

    type Result[T, E any] struct {
        Val   T
        Err   E
        IsErr bool
    }

    type Payload string

    type ProgError struct {
        Prog   string
        Code   int
        Reason string
    }

    func DoStuff(x int) Result[Payload, ProgError] {
        if x > 8 {
            return Result[Payload, ProgError]{Err: ProgError{Prog: "ls", Code: 1, Reason: "no directory"}, IsErr: true}
        }
        return Result[Payload, ProgError]{Val: "hello"}
    }
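And a sketch of the call site (repeating the definitions above so the snippet stands alone), which shows why it stays awkward: nothing in the type system forces the caller to check IsErr before touching Val.

```go
package main

import "fmt"

type Result[T, E any] struct {
	Val   T
	Err   E
	IsErr bool
}

type Payload string

type ProgError struct {
	Prog   string
	Code   int
	Reason string
}

func DoStuff(x int) Result[Payload, ProgError] {
	if x > 8 {
		return Result[Payload, ProgError]{Err: ProgError{Prog: "ls", Code: 1, Reason: "no directory"}, IsErr: true}
	}
	return Result[Payload, ProgError]{Val: "hello"}
}

func main() {
	// Forgetting this check silently yields the zero-value Payload,
	// unlike Rust's match, which forces you to handle both arms.
	r := DoStuff(9)
	if r.IsErr {
		fmt.Println("error:", r.Err.Reason) // prints "error: no directory"
	} else {
		fmt.Println("ok:", r.Val)
	}
}
```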


Nullability is unavoidable in Go because of zero values.

https://go.dev/ref/spec#The_zero_value
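A small illustration of the point: any declared variable gets a usable zero value, and for pointer and slice fields that zero value is nil, so nil can't be designed out of the language (type names here are invented):

```go
package main

import "fmt"

type User struct {
	Name    string
	Friends []string
	Manager *User
}

func main() {
	var u User // no initializer: every field takes its zero value
	fmt.Println(u.Name == "")     // true: zero value of string is ""
	fmt.Println(u.Friends == nil) // true: zero value of a slice is nil
	fmt.Println(u.Manager == nil) // true: zero value of a pointer is nil
}
```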


C# is pretty close.


This was not the case for a long time. Actually, it seems it's only fairly recently that you get native AOT and trimming to actually reduce build sizes and build times. Otherwise all the binaries come with a giant runtime.


Even back in .NET Core 3.1 days C# had more than competitive performance profile with Go, and _much_ better multi-core scaling at allocation-heavy workloads.

It is disingenuous to say that whatever it ships with is huge also.

The common misconception by the industry that AOT is optimal and desired in server workloads is unfortunate. The deployment model (single slim binary vs many files vs host-dependent) is completely unrelated to whether the application utilizes JIT or AOT. Even with a carefully gathered profile, Go produces much worse compiler output for something as trivial as a hashmap lookup in comparison to .NET (or the JVM for that matter).


There’s cross-rs which simplifies things. But the main problem is less linker flags being unsupported and more cross compiling C dependencies somewhere in the dependency chain and that’s always a nightmare, not really anything to do with Rust (Go should have similar difficulties with cross compilation).


Fair take for nontrivial projects.

Buuut with Go one in general tends to reach less for dependencies so less likely to run into this and cgo is not go ;) https://go-proverbs.github.io

But for cross-compiling I actually ended up filtering out the -liconv flag with a bash wrapper and compiled a custom zig cc version with support for exported_symbols_list patched in; things appear to work.

Should look into cross-rs I suppose. Hope it's not one of those "download macos sdk from this unofficial source" setups that people seem to do. Apparently not allowed by Apple.


Cross compiling to Apple products from non Apple products is going to run into the same hurdle around SDK setup as any other. There exists documentation but it’s probably not the easiest task. This limitation though applies equally to any library that depends on system C headers and/or system libraries.


I feel like we're talking in loops.

Go is generally fine for crosscompiling.

edit: what gave me pain with Rust for a cli was clap (with derive, the default). Go just worked.


> Go should have similar difficulties with cross compilation

It doesn't. Go code can be cross compiled for any OS and any CPU arch from any supported system. And it comes out of the box that way. You don't have to go out of your way to install or configure anything extra to do it.


We’re not talking about Go here; this is true for Rust. The issue is building against C libraries and APIs for a different OS. Unless Go has done some magic I’m unaware of, it's the same problem; it's just that cgo isn’t super popular in the Go community.


The crates.io ecosystem for Rust... is like the amazing girlfriend that you go head over heels for, make her your wife, and then you meet the in-laws ... but it's too late now.

Unlimited access to a bunch of third party code is great as you're getting started.

Until it isn't and you're swimming in a fishing net full of code you didn't write and dependencies you do not want. Everything you touch eventually brings all of tokio along with it. And 3 or 4 different versions of random number generators or base64 utilities, etc. etc.


> However, I'd pick Rust when the team isn't scared of learning to program for real.

I've been learning Rust. It's elegant, and I am enjoying it.

The Rust people, however, are absolutely annoying. Never have I seen a worse group of language zealots.


I can't speak for other Rust programmers but I can speak for myself.

I obviously enjoy programming Rust and I like many of the choices it made, but I am well aware of the tradeoffs Rust has made and I understand why other languages chose not to make them. Nor do I think Rust functions equally as well in every single use case.

I imagine most Rust users think like this, but unfortunately there seems to be a vocal minority who hold very dogmatic views of programming who have shaped how most people view the Rust community.


Guess I am scared to program for real since I don't use a "real" language like Rust. What a wild statement to make.


This is such a big deal and I wish more people talked about it in these types of blog posts.

I used to be a Python programmer and there were two things that destroyed every project:

- managing Python dependencies

- the inability to reason about the input and output types of functions, and the inability to enforce them; in Python any function can accept values of any type and return values of any type

These issues are not too bad if it's a small project and you're the sole developer. But as projects get larger and require multiple developers, it turns into a mess quickly.

Go solved all these issues and makes deployment so much easier. Of all the projects I've done, I estimate that more than half have zero dependencies outside of the standard library. And unlike Python, you don't have to "install" Go or its libraries on the server you plan to run your program on. A fully static, self-contained executable binary with zero external files needed is amazing, and the fact that you can cross-compile for any OS + CPU arch out of the box on any supported system is a miracle.

The issues described by the original post seem like small potatoes compared to the benefits I've gotten by shifting from Python over to Go


Restrict data collection? It would kill all startups and firmly entrench a terrible provider monopoly who can comply.

Have the government own data collection? Yeah, I don't even know where to start with all the problems this would cause.

Ignore it and let companies keep abusing customers? Nope.

Stop letting class-action lawsuits slap the company's wrists and then give $0.16 payouts to everyone?

What exactly do we do without killing innovation, building moats around incumbents, giving all the power to politicians who will just do what the lobbyists ask (statistically), or accepting things as is?


Why do the start ups need to collect data like this?


I work for a medical technology company. How do you propose we service our customers without their medical data?


Does it need to be hosted on your servers? Could you provide something to the customers where they host the data or their local doctors office does it?

Can you delete it after the shortest possible period of using it, potentially? Do you keep data after someone stops being a customer or stops actively using the tech?


Record retention is covered by a complex set of overlapping regulations and contracts. They are dependent on much more than date of service. M&A activity, interstate operations, subsequent changes in patient mental status, etc can all cause the horizon to change well after the last encounter.

As all the comments in this thread suggest, the cost of having an extra record, even an extra breached record, is low. The cost of failing to produce a required medical record is high.

Put this together with dropping storage prices, razor-thin margins, and IT estates made out of thousands of specialized point solutions cobbled together with every integration pattern ever invented, and you get a de facto retention of infinity paired with a de jure obligation of could-be-anything-tomorrow.


Professionally, my company builds one of the largest EHR-integrated web apps in the US

Ask me how many medical practices connect every day via IE on Windows 8.


I'm not trying to be rude, but it's clear you have no idea what you're talking about. The medical world is heavily regulated and there are things we must do and things we can't do. If you go to your doctor with a problem, would you want your doctor to have the least amount of information possible or your entire medical history? The average person has no business hosting their own sensitive data like banking and medical information. If you think fraud and hacks are bad now, what do you think would happen if your parents were forced to store their own data? Or if a doctor who can barely use an EMR were responsible for the security of your medical data? I would learn a lot more about the area before making suggestions.


Having seen this world up close, the absolute last place you ever want your medical data to be is on the Windows Server in the closet of your local doctors office. The public cloud account of a Silicon Valley type company that hires reasonably competent people is Fort Knox by comparison.


Yeah, but a local private practice is a fairly small target. No one is going to break into my house just to steal my medical records, for example.

This could also be drastically improved by the government spearheading a FOSS project for medical data management (archival, backup, etc). A single offering from the US federal government would have a massive return on investment in terms of impact per dollar spent.

Maybe the DOGE staff could finally be put to good use.


You seem to be confused about how this works. Attackers use automated scripts to locate vulnerable systems. Small local private practices are always targeted because everything is targeted. The notion of the US federal government offering an online data backup service is ludicrous, and wouldn't have even prevented the breach in this article.


> Attackers use automated scripts to locate vulnerable systems.

I'm aware. I thought we were talking about something a bit higher effort than that.

> online data backup service

That isn't what I said. I suggested federally backed FOSS tooling for the specific usecase. If nothing else that would ensure that low effort scanners came up empty by providing purpose built software hardened against the expected attack vectors. Since it seems we're worrying about the potential for broader system misconfiguration they could even provide a blessed OS image.

The breach in the article has nothing to do with what we're talking about. That was a case of shadow IT messing up. There's not much you can do about that.


I just registered CVEs in several platforms in a related industry, whose founders likely all asked themselves a similar question. And yet, it's the wrong question. The right one is, "Does this company need to exist?"

I don't know you or your company. Maybe it's great. But many startups are born thinking there's a technological answer to a question that requires a social/political one. And instead of fixing the problem, the same founders use their newfound wealth to lobby to entrench the problem that justifies their company's existence, rather than resolve the need for it to exist in the first place.

"How do you propose we service our customers without their medical data?" Fix your fucked healthcare system.


Ask for it?


I hope you're joking...

Otherwise it would suggest you think the problem is that they didn't ask. When was the last time you saw a customer read a terms of service? Or, better yet, reject a product because of said terms once they hit that part of the customer journey?

The issue isn't the asking; it's that, for take your pick of reasons, no one ever says no. The asking is thus pro forma and irrelevant.


We apply crippling fines on companies and executives that let these breaches happen.

Yes, some breaches (actual hack attacks) are unavoidable, so you don't slap a fine on every breach. But the vast majority of "breaches" are pure negligence.


> Restrict data collection? It would kill all startups and firmly entrench a terrible provider monopoly who can comply.

That's a terrible argument for allowing our data to be sprayed everywhere. How about regulations with teeth that prohibit "dragons" from hoarding data about us? I do not care what the impact is on the "economy". That ship sailed with the current government in the US.

Or, both more and less likely, cut us in on the revenue. That would at least make up for some of the time we have to waste doing a bunch of work every time some company "loses" our data.

I'm tired of subsidizing the wealth and capital class. Pay us for holding our data or make our data toxic.

Obviously my health provider and my bank need my data. But no one else does. And if my bank or health provider need to share my data with a third party it should be anonymized and tokenized.

None of this is hard, we simply lack will (and most consumers, like voters are pretty ignorant).


The solution is to anonymize all data at the source, i.e. use a unique randomized ID as the key instead of someone's name/SSN. Then the medical provider would store the UID->name mapping in a separate, easily secured (and ideally air-gapped) system, for the few times it was necessary to use.
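A toy sketch of that idea (all names and structures here are hypothetical): the record store only ever sees a random ID, and the ID-to-identity mapping lives in a separate store.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newPatientID returns a 128-bit random identifier with no
// relationship to the patient's name or SSN.
func newPatientID() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return hex.EncodeToString(b)
}

func main() {
	// Two separate stores; in a real system the identity map would
	// live on a different, tightly controlled (ideally air-gapped)
	// system, not in the same process like this demo.
	records := map[string]string{}  // patientID -> medical record
	identity := map[string]string{} // patientID -> name, kept elsewhere

	id := newPatientID()
	identity[id] = "Jane Doe"
	records[id] = "diagnosis: ..."

	// A breach of `records` alone leaks no direct identifiers.
	fmt.Println(len(id)) // 32 hex characters
}
```

As the reply below notes, this only removes the direct identifier; the record contents themselves can still fingerprint a person.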


> ...use a unique randomized ID as the key...

33 bits is all that are required to individually identify any person on Earth.

If you'd like to extend that to the 420 billion or so who've lived since 1800, that extends to 39 bits, still a trivially small amount.

Every bit[1] of leaked data halves that set, and simply anonymising IDs does virtually nothing by itself to obscure identity. Such critical medical and billing data as date of birth and postal code are themselves sufficient to narrow things down remarkably, let alone a specific set of diagnoses, procedures, providers, and medications. Much as browser fingerprints are often unique or nearly so without any universal identifier, so are medical histories.

I'm personally aware of diagnostic and procedure codes being used to identify "anonymised" patients across multiple datasets dating to the early 1990s, and of research into de-anonymisation in Australia as of the mid-to-late 1990s. Australia publishes anonymisation and privacy guidelines, e.g.:

"Data De‑identification in Australia: Essential Compliance Guide"

<https://sprintlaw.com.au/articles/data-de-identification-in-...>

"De-identification and the Privacy Act" (2018)

<https://www.oaic.gov.au/privacy/privacy-guidance-for-organis...>

It's not sufficient merely to substitute an alternative primary key; you must also fuzz the data, including birthdates, addresses, diagnostic and procedure codes, treatment dates, etc., all of which both reduces the clinical value of the data and is difficult to do sufficiently.

________________________________

Notes:

1. In the "binary digit" sense, not in the colloquial "small increment" sense.


What a silly idea. That would completely prevent federally mandated interoperability APIs from working. While privacy breaches are obviously a problem, most consumers don't want care quality and coordination harmed just for the sake of a minor security improvement.

https://www.cms.gov/priorities/burden-reduction/overview/int...


[deleted]


Honestly I'd take the 16 cents. Usually it's a discount voucher for a product you'd never buy.

Or if it's a freebie then it's hidden behind a plain text link 3 levels deep on their website.


100% state bot. I wouldn't even think it was just France, other state actors would love to see GrapheneOS go down as well. How dare citizens have technology we can't access.

