The funny thing is that X11 can actually do heterogeneous dpi and Wayland can't.
Unfortunately you will almost never find yourself in a situation where you can actually use a mixed-DPI X11 setup (you lose your homogeneous desktop), and Wayland is better at spoofing it (for whatever reason, fractional scaling works better on Wayland).
"If you think this idea is a bit stupid, shed a tear for the future of the display servers: this same mechanism is essentially how Wayland compositors —Wayland being the purported future replacement for X— cope with mixed-DPI setups."
The trick is, with the setup I mentioned, you change the rewards.
The concept is:
Red Team (Test Writers): writes tests without seeing the implementation. They define what the code should do based on specs/requirements only, and are rewarded by test failures. A new test that passes immediately is suspicious: it means either the implementation already covers it (diminishing returns) or the test is tautological. Red's ideal outcome is a well-named test that fails, because that represents a gap between spec and implementation that didn't previously have a tripwire. Their proxy metric is "number of meaningful new failures introduced", and the barrier prevents them from writing tests pre-adapted to pass.
Green Team (Implementers): writes implementation to pass tests without seeing the test code directly. They only see test results (pass/fail) and the spec, and are rewarded by turning red tests green. Straightforward, but the barrier makes the reward structure honest. Without it, Green could satisfy the reward trivially by reading assertions and hard-coding. With it, Green has to actually close the gap between spec intent and code behavior, using error messages as a noisy gradient signal rather than exact targets. Their reward is "tests that were failing now pass," and the only reliable strategy to get there is faithful implementation.
Refactor Team: improves code quality without changing behavior. They can see the implementation but are constrained by the tests passing, and are rewarded by nothing changing (pretty unusual in this regard). The reward is that all tests stay green while code quality metrics improve. They're optimizing a secondary objective (readability, simplicity, modularity, etc.) under a hard constraint (behavioral equivalence). The spec barrier ensures they can't redefine "improvement" to include feature work. If you have any code-quality tools, it makes sense to give this team the skills needed to use them.
It's worth being honest about the limits. The spec itself is a shared artifact visible to both Red and Green, so if the spec is vague, both agents might converge on the same wrong interpretation, and the tests will pass for the wrong reason. The Coordinator (your main claude/codex/whatever instance) mitigates this by watching for suspiciously easy green passes (just tell it) and probing the spec for ambiguity, but it's not a complete defense.
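To make the incentive structure concrete, here's a minimal Rust sketch (all names invented; not any real harness) of the bookkeeping a Coordinator could do by diffing the failing-test set before and after each team's turn:

```rust
use std::collections::HashSet;

// Hypothetical reward bookkeeping for the three-team setup described
// above: each team's score is computable from pass/fail diffs alone,
// without showing anyone the other side's code.

fn red_reward(before: &HashSet<String>, after: &HashSet<String>) -> usize {
    // Red: meaningful new failures, i.e. tripwires that didn't exist before.
    after.difference(before).count()
}

fn green_reward(before: &HashSet<String>, after: &HashSet<String>) -> usize {
    // Green: previously failing tests that now pass.
    before.difference(after).count()
}

fn refactor_ok(before: &HashSet<String>, after: &HashSet<String>) -> bool {
    // Refactor: behavior must not change, so the failure set is frozen
    // (and in a healthy cycle it is empty on both sides).
    before == after
}

fn main() {
    let before: HashSet<String> =
        ["test_rejects_empty_input".to_string()].into_iter().collect();
    let after: HashSet<String> = HashSet::new();

    assert_eq!(green_reward(&before, &after), 1); // one red test turned green
    assert_eq!(red_reward(&before, &after), 0);   // no new tripwires this round
    assert!(refactor_ok(&after, &after));
}
```

The point is only that the rewards need nothing but test IDs and outcomes; the barrier between teams stays intact.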
I’m currently working across like 5 projects (was 4 last week but you know how it is). I now do more in days than others might in a week.
Yesterday a colleague didn’t quite manage to implement a loading container with a Vue directive instead of DOM hacks; it was easier for me to just throw AI at the problem and produce a working, tested solution plus developer docs than to have a similarly long meeting and have them iterate for hours.
Then I got back to training a CNN to recognize crops from space (ploughing and mowing will need to be estimated alongside inference, since there are no markers in the training data, but you can look at BSI changes, for example), deployed a new version of an Ollama/OpenAI/Anthropic proxy that can work with AWS Bedrock and updated the docs site instructions, deployed a new app that will host a standup bot and on-demand AI code review (LiteLLM and Django), and am working on codegen to migrate some Oracle Forms that have otherwise been stagnating.
It’s not funny how overworked I am, and sure, I still have to babysit parallel Claude Code sessions and sometimes test things manually and write out changes, but this is completely different work compared to two or three years ago.
Maybe the problem spaces I’m dealing with are nothing novel, but I assume most devs are in the same boat - and I’d be surprised if people’s productivity weren't increasing.
When people nag in meetings about needing to change something in a codebase, or about not knowing how to implement something or what its value-add would be, I’ll often have something working shortly after the meeting is over (having started during it).
Instead of sending "add Vitest" to the backlog graveyard, I had it integrated and running in one or two evenings with about 1200 tests (and fixed some bugs along the way). Instead of talking about hypothetical Oxlint and Oxfmt performance improvements, I had both benchmarked against ESLint and Prettier within the hour.
Same for making server config changes with Ansible that I previously didn’t make due to the additional friction - it is mostly just gone (as long as I plan in some free time in case things get fucked up and I need to fix them).
Edit: oh, and in my free time I built a Whisper + VLM + LLM pipeline based on OpenVINO, so that I can feed it hours-long stream VODs and get an EDL cut to a desired length that I can then import into DaVinci Resolve, picking up the editing after that first basic prepass is done (also PySceneDetect and some audio alignment to prevent bad cuts). And then I integrated it with subscription Claude Code, not just LiteLLM and cloud providers with per-token costs, for the actual cut-making part (scene descriptions and audio transcriptions stay local, since those don't need a complex LLM, but the cuts can use the cloud).
Oh, and I'm moving from my Contabo VPSes to a Hetzner Server Auction box that now runs Proxmox with VMs inside, except this time around I'm managing it with Ansible instead of manual scripts. I'm also migrating from Docker Swarm to regular Docker Compose + Tailscale networks (maybe Headscale later), and using more upstream containers where needed instead of trying to build all of mine myself, since storage isn't a problem and consistency isn't that important. At the same time I also migrated from Drone CI to Woodpecker CI and from Nexus to Gitea Packages, since I'm already using Gitea and Nexus is a maintenance burden.
If this becomes the new “normal” in regards to everyone’s productivity though, there will be an insane amount of burnout and devaluation of work.
hi. i run "ocr" with dmenu on linux, that triggers maim where i make a visual selection. a push notification shows the body (nice indicator of a whiff), but also it's on my clipboard
> It's far from perfect, but using a simple application with no built-in ads, AI, bloat, crap, etc is wonderful.
I think there are three main reasons it's not perfect yet:
1. Building both a decentralised open standard (Matrix) at the same time as a flagship implementation (Element) is playing on hard mode: everything has to be specified under an open governance process (https://spec.matrix.org/proposals) so that the broader ecosystem can benefit from it - while in the early years we could move fast and JFDI, the ecosystem grew much faster than we anticipated and very enthusiastically demanded a better spec process. While Matrix is built extensibly with protocol agility to let you experiment at basically every level of the stack (e.g. right now we're changing the format of user IDs in MSC4243, and the shape of room DAGs in MSC4242) in practice changes take at least ~10x longer to land than in a typical proprietary/centralised product. On the plus side, hopefully the end result ends up being more durable than some proprietary thing, but it's certainly a fun challenge.
2. As Matrix project lead, I took the "Element" use case pretty much for granted from 2019-2022: it felt like Matrix had critical mass and usage was exploding; COVID was highlighting the need for secure comms; it almost felt like we'd done most of the hard bits and finishing building out the app was a given. As a result, I started looking at the N-year horizon instead - spending Element's time working on P2P Matrix (arewep2pyet.com) as a long-term solution to Matrix's metadata footprint and to futureproof Matrix against Chat Control style dystopias... or projects like Third Room (https://thirdroom.io) to try to ensure that spatial collaboration apps didn't get centralised and vendorlocked to Meta, or bluesky on Matrix (https://matrix.org/blog/2020/12/18/introducing-cerulean/, before Jay & Paul got the gig and did atproto).
I maintain that if things had continued on the 2019-2022 trajectory then we would have been able to ship a polished Element and do the various "scifi" long-term projects too. But in practice that didn't happen, and I kinda wish that we'd spent the time focusing on polishing the core Element use case instead. Still, better late than never, in 2023 we did the necessary handbrake turn focusing exclusively on the core Element apps (Element X, Web, Call) and Element Server Suite as an excellent helm-based distro. Hopefully the results speak for themselves now (although Element Web is still being upgraded to use the same engine as Element X).
3. Finally, the thing which went wrong in 2022/2023 was not just the impact of the end of ZIPR, but the horrible realisation that the more successful Matrix got... the more incentive there would be for 3rd parties to commercialise the Apache-licensed code that Element had built (e.g. Synapse) without routing any funds to us as the upstream project. We obviously knew this would happen to some extent - we'd deliberately picked Apache to try to get as much uptake as possible. However, I hadn't realised that the % of projects willing to fund the upstream would shrink as the project got more successful - and the larger the available funds (e.g. governments offering million-dollar deals to deploy Matrix for healthcare, education etc), the more certain it was that the % of upstream funding would go to zero.
So, we addressed this in 2023 by having to switch Element's work to AGPL, massively shrinking the company, and then doing an open-core distribution in the form of ESS Pro (https://element.io/server-suite/pro) which puts scalability (but not performance), HA, and enterprise features like antivirus, onboarding/offboarding, audit, border gateways etc behind the paywall. The rule of thumb is that if a feature empowers the end-user it goes FOSS; if it empowers the enterprise over the end-user it goes Pro. Thankfully the model seems to be working - e.g. EC is using ESS for this deployment. There's a lot more gory detail in last year's FOSDEM main-stage talk on this: https://www.youtube.com/watch?v=lkCKhP1jxdk
Either way, the good news is that we think we've figured out how to make this work, things are going cautiously well, and these days all of Element is laser-focused on making the Element apps & servers as good as we possibly can - while also continuing to improve Matrix, both because we believe the world needs Matrix more than ever, and because without Matrix, Element is just another boring silo'd chat app.
The bad news is that it took us a while to figure it all out (and there are still some things to solve - e.g. abuse on the public Matrix network, finishing Hydra (see https://www.youtube.com/watch?v=-Keu8aE8t08), finishing the Element Web rework, and cough custom emoji). I'm hopeful we'll get there in the end :)
My two cents as a 20-year product manager with 10+ enterprise applications under my belt (and having read several of these):
# "Don't make me think" is a seminal work on design thinking for online services. I've yet to come across a book with as much relevance and substance, even though it was written for the dot-com era.
# "Positioning" by Al Ries is a book I wish I'd read 15 years ago when I started my company... your product's strategic positioning will greatly inform and shape design decisions (typography, colors, tone, copy, etc.)
# "Ogilvy on Advertising" - written by the legend himself, once you read this book, it will change the way you see all ads in any medium
"You actually don't need to be open minded about Oracle. You are wasting the openness of your mind. Go be open minded about lots of other things. I mean, let's face it: it's work to be open minded, right? I mean, you've got to constantly discard data and be like "ok, I need to be open minded about this". No, with Oracle, just be close-minded. It's a lot easier. Because the thing about Oracle, and this is just amazing to me, is, you know: what makes life worth living is the fact that stereotypes aren't true, in general. It's the complexity of life. And as you know people, as you learn about things, you realize that these generalizations we have, virtually to a generalization, are false. Well except for this one, as it turns out. What you think of Oracle is even truer than you think it is. There has been no entity in human history with less complexity or nuance to it than Oracle." -Bryan M. Cantrill
If you're looking for an alternative to all of this, the BangleJS v2 is both cheaper and more hackable than the Pebble watches. It doesn't tick all of the same boxes, but it's performed well for me over the last 6 months.
Here's what it offers:
* Screen is fully visible under direct sunlight
* With the screen always on the battery lasts me well over a week
* Heart rate monitor
* EXTREMELY hackable, everything can be hacked on with JS, even the launcher you're using for apps
* 108 Euros shipped to the US
* Fully supported by GadgetBridge (open source mobile app)
I’ve used browser dev tools to add drop-down options to menus where they weren’t offered. Huel, for example, only offered 2- or 4-week subscriptions, so I added a 3-week option because that’s the frequency I needed, and it worked no problem. Three weeks later my shakes arrived, and they have every 3 weeks since.
"The researchers found that engaging in as little as 35 minutes of moderate to vigorous physical activity per week, compared to zero minutes per week, was associated with a 41% lower risk of developing dementia over an average four-year follow-up period. Even for frail older adults—those at elevated risk of adverse health outcomes—greater activity was associated with lower dementia risks.
The researchers found dementia risk decreased with higher amounts of physical activity. Dementia risks were 60% lower in participants in the 35 to 69.9 minutes of physical activity/week category; 63% lower in the 70 to 139.9 minutes/week category; and 69% lower in the 140 and over minutes/week category."
I see a lot of lamentation about high-end audio going away, but the reality is that it's not that hard to achieve if you are truly interested in its effects.
You can build loudspeaker cabinets yourself without a lot of skills. There are kits everywhere. These will dramatically outperform anything you can buy retail even if you do a mediocre job of assembly. Most aspects of a high quality audio solution involve the room itself, not the equipment inside of it.
Building something that replicates (for instance) the Polk RTI series would take a weekend for a total noob if working from a kit. You can buy pro amplifiers like QSC and Behringer that satisfy whatever topology and power level you desire. Vendors like MiniDSP give you everything you need to build an active crossover solution in an afternoon.
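To demystify what an active crossover actually computes: it's just a few filters per output channel. Below is a hedged Rust sketch of a standard biquad low-pass (coefficients follow the well-known RBJ "Audio EQ Cookbook" formulas); a real MiniDSP-style crossover pairs this with a matching high-pass per driver, and the 2 kHz cutoff here is illustrative only:

```rust
// Toy sketch of the DSP at the heart of an active crossover: a biquad
// low-pass filter (coefficients per the RBJ "Audio EQ Cookbook").
// A real crossover pairs this with a matching high-pass per driver;
// boxes like MiniDSP run exactly this kind of math in dedicated hardware.

struct Biquad {
    b0: f64, b1: f64, b2: f64, a1: f64, a2: f64,
    x1: f64, x2: f64, y1: f64, y2: f64,
}

impl Biquad {
    /// Low-pass with cutoff f0 (Hz) at sample rate fs (Hz) and quality q.
    fn lowpass(fs: f64, f0: f64, q: f64) -> Self {
        let w = 2.0 * std::f64::consts::PI * f0 / fs;
        let (sin_w, cos_w) = w.sin_cos();
        let alpha = sin_w / (2.0 * q);
        let a0 = 1.0 + alpha; // used to normalize all other coefficients
        Biquad {
            b0: (1.0 - cos_w) / 2.0 / a0,
            b1: (1.0 - cos_w) / a0,
            b2: (1.0 - cos_w) / 2.0 / a0,
            a1: -2.0 * cos_w / a0,
            a2: (1.0 - alpha) / a0,
            x1: 0.0, x2: 0.0, y1: 0.0, y2: 0.0,
        }
    }

    /// Direct Form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    fn process(&mut self, x: f64) -> f64 {
        let y = self.b0 * x + self.b1 * self.x1 + self.b2 * self.x2
            - self.a1 * self.y1 - self.a2 * self.y2;
        self.x2 = self.x1; self.x1 = x;
        self.y2 = self.y1; self.y1 = y;
        y
    }
}

fn main() {
    // 2 kHz low-pass at 48 kHz, Butterworth-ish Q. Feeding DC (all ones)
    // should settle to unity gain, a quick sanity check of the coefficients.
    let mut lp = Biquad::lowpass(48000.0, 2000.0, 0.707);
    let mut y = 0.0;
    for _ in 0..2000 {
        y = lp.process(1.0);
    }
    assert!((y - 1.0).abs() < 1e-3);
}
```

This is only a sanity-check sketch; choosing crossover points and slopes for real drivers is where tools like MiniDSP's UI earn their keep.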
Simply knowing that these things are possible is the first step to achieving them.
Minor correlation? P values are small indicating a strong correlation?
Quote: Of the bacteria detected, oral viridans group streptococcal DNA was the most common, being found in 42.1% of coronary plaques and 42.9% of endarterectomies. Immunopositivity for viridans streptococci correlated with severe atherosclerosis (P<0.0001) in both series and death from coronary heart disease (P=0.021) or myocardial infarction (P=0.042).
It reminds me a bit of chindōgu, the Japanese art (?) of useless inventions. There's a particular delight to ingenious, but absurd or useless creations.
I like mapy.com as a Google Maps replacement. It's essentially a very good OSM renderer, with a great website and app, including offline access, routing, and real-time traffic. Also very good bike/hike routing, if that's your jam.
But there's no substitute for GMap's POI database.
I think you're accidentally committing a motte-and-bailey fallacy here.
It's making an ambitious, risky claim (make things simpler than you think they need to be), then retreating on pushback to a much safer claim (the all-encompassing "simplest thing possible").
The statement ultimately becomes meaningless because any interrogation can get waved away with "well I didn't mean as simple as that."
But nobody ever thinks their solution is more complex than necessary. The hard part is deciding what is necessary, not whether we should be complex.
So, I've been keeping a journal for 17 years, off and on. I don't know anyone else who does it my way, so here's my method.
I made a dedicated email account just for the journal. I personally chose gmail but if you distrust google you could use any other provider including self-hosted.
At the end of the day, or when I feel like it, I log in and email the account from itself with a message about whatever happened that day and whatever I'm thinking or feeling, and use the date for the subject line, like "August 12, 2025". I never, ever send emails to anything else from that account nor connect it to anything or use it for anything else. It is a total island.
The result is 17 years of easily-searchable journal, password-protected, backed-up, accessible from anywhere that has internet, can't be "lost" like a physical journal (yes I know I'm trusting google, but again, go self-host if you're worried about that), can't be "found" by someone looking through my things.
I can't even tell you how much value I've gotten out of it. You forget things you don't even know you forgot. So many little moments and days in life. You'll be shocked at the things you used to think and feel sometimes. You'll be shocked at whole magical days that you haven't thought of in years and years and likely would never have thought of again. It's a record of me changing over time and the phases I've gone through. I can't recommend it enough.
And it doesn't take much discipline, either. It's not something I "have" to do. I do it when I feel like it. There are years where I have only 25 entries, and others where I have 200. It depends how much I felt like writing. I find it spikes in years where I'm feeling very emotional, usually during bad times. But I've written down many great days too.
API changes? They remove important user-facing features (like custom key themes[1]) without providing any real migration path. I wasted an afternoon trying to figure out how I can make Gnome apps use super+c/super+v/etc like cmd+c/cmd+v on macOS and... no, as of Gtk4 you can't, not anymore.
> I'm personally not sure that "user must have an ability to redefine every keyboard shortcut" [...].
Developers are users too. They are also arguably your most important users - they turn products into ecosystems, and their involvement is a force multiplier for your platform. As Gnome is actively hostile towards its users, I'm not surprised it doesn't care much for third-party developers either (both inside and outside[2] of its ecosystem).
Well, there aren't reasons why Python couldn't have that, but there certainly are good reasons why the Python maintainers decided that type of runtime was not appropriate for that specific language--not least among them that maintaining a scheduling/pre-empting runtime (which is the only approach I'm aware of that works here without basically making Python into an entirely unrelated language) is very labor intensive! There's a reason there aren't usable alternative implementations of ERTS/BEAM, and a reason why gccgo is maintained in fits and starts, and why the Python GIL has been around for so long. Getting that type of system right is very, very hard.
> This ‘90s API, and it set the core of the web stack. And it’s super object-oriented, and it’s just hard to express all that stuff in Rust, because Rust doesn’t lend itself to object-oriented programming. It doesn’t have inheritance, for example, which is a very fundamental building block.
That's a rather strange criticism, seeing as the only thing about objects that's slightly involved to express in Rust is implementation inheritance; and even then, it can be phrased in a very reasonable way by using the generic typestate pattern. I.e. expressing a class that can be inherited from as a generic object MyBaseClass<T>, where T holds the "derived" object state (and can itself be generic in turn, thereby extending the type hierarchy) and methods can either be implemented for any MyBaseClass<T> or for some specific MyBaseClass<MyDerivedObject>.
(This aims to do the right thing whenever a method on the base class relies on some method implementation that's defined deeper in the hierarchy, i.e. there is an indirection step, aka open recursion. That's the part that is missing when using simple composition.)
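A minimal sketch of that pattern, with invented names (`Base`, `Button`): the base-class method `report` calls `describe`, which is supplied deeper in the hierarchy; that is the open-recursion step recovered via a trait bound:

```rust
// Sketch of the generic typestate pattern described above: a "base class"
// is a generic struct whose type parameter holds the derived state, and
// open recursion is recovered via a trait bound on that parameter.

trait Describe {
    fn describe(&self) -> String;
}

struct Base<T> {
    id: u32,
    derived: T,
}

impl<T: Describe> Base<T> {
    // A base-class method that relies on a method implemented
    // "deeper in the hierarchy": the open-recursion step.
    fn report(&self) -> String {
        format!("#{}: {}", self.id, self.derived.describe())
    }
}

// One "derived class": plugs its state and behavior into Base<T>.
struct Button { label: String }

impl Describe for Button {
    fn describe(&self) -> String {
        format!("button '{}'", self.label)
    }
}

// The hierarchy can be extended another level by making the derived
// state generic in turn, e.g. struct Widget<U> { inner: U, /* ... */ }.

fn main() {
    let b = Base { id: 7, derived: Button { label: "ok".to_string() } };
    assert_eq!(b.report(), "#7: button 'ok'");
}
```

Methods can be implemented either for any `Base<T>` or only for a specific `Base<Button>`, which loosely mirrors base-class vs. derived-class methods in classical OO.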
> I want to (1) build blindingly fast, low-latency, super performant UX for users, … (2) maintain developer velocity, (3) have access to nice UX primitives and widgets … (4) have it look nice and modern.
Find me when you find this, because I actually think it is impossible.
I think there is fundamentally too much inherent complexity to abstract away to keep 2 and not sacrifice 1. Specifically for something properly cross platform.
If you are only targeting macOS or Windows then I think it’s absolutely possible, but then you are using the wrong language (nothing against Rust at all on that); the platform defaults are just Swift / C# for those use cases.
And I’m not sure but unless you are just using a gtk/Qt style framework you would absolutely run into a11y problems that you would need to build around yourself.
Sounds like you probably want egui though… if your primary UI is a big canvas that needs to be hardware accelerated and interaction is just to assist that use case egui is probably a good bet for you. But you wouldn’t be hiring web devs to do that.
Definitely not. I took this same basic idea of feeding videos into Whisper to get SRT subtitles and took it a step further to make automatic Anki flashcards for listening practice in foreign languages [1]. I literally feel like I'm living in the future every time one of those cards from whatever silly Finnish video I found on YouTube pops up in my queue.
These models have made it possible to robustly practice all 4 quadrants of language learning for most common languages using nothing but a computer, not just passive reading. Whisper is directly responsible for 2 of those quadrants, listening and speaking. LLMs are responsible for writing [2]. We absolutely live in the future.
You know, sometimes I feel that all this discourse about AI for coding reflects the difference between software engineers and data scientists / machine learning engineers.
Both often work with unclear requirements, and sometimes face flaky bugs which are hard to fix, but in most cases SWEs create software that is expected to always behave in a certain way. It is reproducible, can pass tests, and the tooling is more established.
MLEs work with models that are stochastic in nature. The usual tests aren't about models producing a certain output - they are about metrics: that, for example, the model produces the correct output in 90% of cases (evaluation). The tooling isn't as developed as for SWE - it changes more often.
So, for MLEs, working with AI that isn't always reliable is the norm. They are accustomed to thinking in terms of probabilities, distributions, and acceptable levels of error. Applying this mindset to a coding assistant that might produce incorrect or unexpected code feels more natural. They might evaluate it like a model: "It gets the code right 80% of the time, saving me effort, and I can catch the 20%."
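As a toy illustration of that "evaluate it like a model" mindset (hypothetical numbers and function names), the acceptance criterion becomes a pass rate against a threshold rather than per-run determinism:

```rust
// Score a batch of assistant attempts against a threshold instead of
// demanding deterministic correctness on every single run. Assumes a
// non-empty batch of outcomes.

fn pass_rate(outcomes: &[bool]) -> f64 {
    let passed = outcomes.iter().filter(|&&ok| ok).count();
    passed as f64 / outcomes.len() as f64
}

fn acceptable(outcomes: &[bool], threshold: f64) -> bool {
    pass_rate(outcomes) >= threshold
}

fn main() {
    // 4 out of 5 attempts correct: "80% right, and I can catch the 20%".
    let runs = [true, true, true, true, false];
    assert!((pass_rate(&runs) - 0.8).abs() < 1e-9);
    assert!(acceptable(&runs, 0.8));
}
```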