Hacker News - throwanem's comments

It's been a decade, but I had a very similar experience with Mattermost. It would be, if perhaps not where I would end up today, then certainly where I would start looking.

Yeah, it’s been pretty seamless and I was able to import the full Slack history into it as well from a previous Slack instance. The only thing I found lacking was a good GIF plug-in, but I was able to cobble one together pretty easily.

So recent? I've been on sabbatical (the real kind, self-funded) for eighteen months, and while my sense has been things have not stopped heading downhill since I stepped off the ride back in 2024, to hear of such a sudden step change is somewhat novel. "Very different" just how, if you don't mind my asking?

(I'm also looking for local, personally satisfying work, in exchange for a pay cut. Early days, and I am finding the profession no longer commands quite the social cachet it once did, but I'm not foolish enough to fail to price for the buyer's market in which we now seek to sell our labor. Besides, everyone benefits from the occasional reminder to humility! "Memento mori" and all that.)


I feel like the models and harnesses had a step change in capability around December, as somebody who’s been using them daily since early/mid 2025. It’s gone from me doing the majority of the programming, to me doing essentially none, since December. And that change felt quite sudden.

The more recent shift after December is mostly explained by people at my company catching up with the events that happened in December. And that’s more about drastically increased productivity expectations, layoffs, etc.

I’m also considering a self funded sabbatical. I could do it. What sort of thing have you been up to, any advice?


I can relate to the feeling - this timing tracks with when most, if not all, of my friends and co-workers (even the few who were resisting any AI tooling) flocked to Claude Code. Similar to how the masses gobbled up VS Code a while back.

The company started doling out Claude Code configs, everything is now CLI/agentic AI harnessed, and news that "90% of this company's code is now AI generated" pops up every other day.

It seems the last frontier to breach was getting agentic black boxes not to crap out during the first hour of work. After that, it's really been much smoother for those tools.


Uh, don't come into it expecting to know exactly what you're going to be up to, might be the best advice I could give. Oh, do plan! But loosely: especially early on, as you get out from under the crushing burden of constant stress and misery, there will be surprises. I haven't been doing a lot of hobby programming, for example, not much more than a few faces for my Amazfit wristwatch - but my diary's grown by about a thousand pages, well above the usual rate, and I've begun a new series of crappy-camera snapshot albums, this latter especially being a real surprise despite that I have been a photographer for many years now. (My daily driver since 2021 has been a Nikon D850 with three SB-R200 flashes on a ring mount, mostly chasing wild wasps to get their portraits from six inches away. Shooting a total piece of shit for a change has been a hilarious revelation!)

Imagination operates more freely and foolishness is less heavily ballasted, and any kind of emotional crap you've been keeping shoved to the side with the force of pressing obligations is likely to come out and start rearranging the metaphorical furniture. If you've got stuff like that, this will be a good opportunity to get to grips with it, whether you mean to or not. Prepare accordingly.

And finally, there's not too many more appealing social presentations in my experience than that deriving from the confident knowledge that, within reason at least, one has earned and is now deploying the privilege to do more or less whatever the hell one likes: not the confidence contingent on a fat wallet, but that inherent in having only those scheduled obligations one chooses, and also in understanding precisely the difference underlying that distinction. Very few people in this world have the skill to behave as if their time were entirely their own to command, and this makes a difference in deportment that others will notice and attend without necessarily knowing why. It is more subtle and far less brash than the confidence in wielding the name of an employer that everyone knows, but for like reasons it also has worth and durability which the other does not. Whether or not you keep it, the experience of having had it is about as unforgettable and as indescribable as the trick to riding a bike.

Thanks for the info! My last direct exposure to a frontier model was now almost twelve months ago, so I suppose I'll have to dedicate a few hours pretty soon.


Don't you feel that sabbaticals kinda get you off the new tech wave anyway? I usually check in on news much more often when bored at slow work days.

On the side - this might not have anything to do with your case - but the reason I personally keep putting off sabbaticals is that I feel it could severely compound my routine-wrecking habits, and I don't think I'd be strong-willed enough to give it meaningful purpose. Not to mention the first point, i.e. it would 100% make my industry pessimism worse. I'd like not to bounce away from tech forever; rather, to figure out what scratches the same itch I've been seeking since the start.

I'm all about big road trips, big adventures but I think the couch potato risk is all too real for me.


Well, sure. It isn't for everyone, and this isn't my first time. Lots of folks struggle without exogenously imposed routine and structure, and wouldn't it be a dull old world if we were all alike? My diary is nine years old and, as of today, thirty-one hundred and one pages long. But for what it's worth, I neither desire nor intend ever to return to "tech" as you construct it in your comment here, and as HN the appendage of YC (1) also does. I learned how to work for a living well before any of that stuff really came along, and I confide I will still know how once it's gone. (Also, reading the news versus distracting oneself with it is a distinction worth considering for the difference it describes. Can be hard to be very proactive or muster much motivation when all one's energy goes to either earning a livelihood or to recovering from same, eh?)

(1) Peace, Dan! I imply no substantial or material connection, only nascence within the same culture and enshrinement of the same desiderata, as you well know - and well know can't be gainsaid, or not in factual terms at least.


Be wary of false prophets in these comments: none of these tools are anywhere close to production quality, and major companies (Amazon, MSFT, Cloudflare) are now suffering outages severe enough to negatively impact them.

A much better take is to start establishing yourself as a slop wrangler. There's a lot of stupid money to be made from fools wanting to purify their slop.


If you don't consider it "production quality" and I don't consider it "production quality" and the paying client does consider it "production quality," which two of these people are wrong?

Why does that matter, and not, you know... the actual outcomes and damages that are affecting real lives here?

Are you seriously arguing that the only thing that determines right from wrong is someone buying the thing? I mean that would explain most of the sickness that is neoliberalism currently infecting the US.


I'm trying to warn that you had better pay attention to which incentives you select and where they lead you, because no one is wholly virtuous, and if you go around believing your moral worth is invariant over your behavior - as distinct from your own evaluation of that behavior - then that moral worth will rapidly diminish toward the negative. I would also like to see a clearer distinction drawn between economics and ethics. But if you imagine yourself to lack either agency or responsibility, over where and to how and for what and to whom you sell your labor, then no comment I could make will aid you.

She evidently signed a nondisparagement agreement with teeth. She won't martyr herself if she gets sued over it and loses. If she didn't know what she was getting into, that's only because she was too foolish to wield her resources to the minimal extent of hiring a lawyer, for a look over the contract before she signed. Everyone wants a hero here. Don't be a child! This is real life, and if you ask me, Careless People should be subtitled "Exhibit A in the trial of Federal Prisoner BOP #12345-098." Yet here we are.

Wynn-Williams is no one's hero. Nor need she be. Nor should we require she be, in order to make use of the windfall of information she provided. But it's no surprise crime has no consequences, when even we - who have some professional responsibility to expertise in drawing the distinction between uses and abuses of technologies like Meta's - are so unreliable on the basic difference between epistemology and People Magazine. Upton Sinclair really did call it with that old line about understanding and salaries, huh?


Imagine dating someone who works at Facebook, though. I can't imagine who would be so utterly dense as to offer so presumptuous a complaint, but he'd better be at least a 13 out of 10 or I'm not even bothering to pretend to go to the bathroom and then sneak out the back.

Think it over. No one who leads a populist movement is ever ultimately sincere in his populism. But where, excuse me, where on Earth did you get the idea that any of those guys is a populist?

Mostly by who they identify with. I get you, they do not personally seem likely to be populists, but that's the movement they're with.

Well, sure. The hammer that happens at times to be in my hand while I'm hanging framed art downstairs is, in an exactly equivalent sense, "the hammer I'm with." I don't care about it, you know? It's just a tool.

So the modal is doing its job.

Sure, it's "doing its job" much in the way a podcast advert you've already heard 1000 times is "doing its job".

I keep hearing people speak so positively of "friction," lately, and yet. Some more nuance required in that discussion, I think.

Making the user completely inured to its message is not doing its job.

Welcome back, Maciej!

> Seems folks have forgotten that Yegge used to blog that he owed all his success in software development to chronic cannabis use, like if wasn't for all that weed there wouldn't be any Google today.

I remember a lot of Steve Yegge's impressive claims from back when he and Zed Shaw were what I would call "fringe contemporaries" in the early 2010s - like all the time he spent gassing on about his unmaintainable, barely usable nightmare of a Javascript mode for Emacs. (I did like the MozRepl integration, for what that's worth.)

I don't particularly recall him talking about smoking pot, and I think I would have, if he'd been as memorably effusive there as about js2-mode. But it's been a lot of years and I couldn't begin to remember where to look for an archive of his old blog. Would you happen to have a link?


The most obvious one is this brilliant piece on complexity:

https://steve-yegge.blogspot.com/2009/04/have-you-ever-legal...

It doesn't match OP's description, but it certainly counts as him talking about his pot use.

There may be others.


I remember thinking of him as a skillful writer and a sometimes incisive thinker, back then. Apparently my taste has significantly improved in the interim; for a piece ostensibly about complexity, this is an embarrassingly superficial analysis from priors that already don't make any sense.

I'm not going to knock a guy today based on an almost twenty-year-old piece, especially on subjects (cannabis legalization, the quality and direction of Obama administration policy initiatives) that were widely misunderstood at the time, including by such luminaries as the Nobel committee. But Yegge really wasn't starting from so strong a position as I had misrecalled. Thanks for the link.


I haven't read it in at least ten years myself - maybe it's not as good as I recall.

I do remember that I appreciated his grasp of the fact that if you aren't deep in the weeds, you really cannot understand just how complex a system really is.

I also appreciated the slow build to the actual point, which I think could help people who wouldn't hear a direct explanation understand what he was getting at.

"'Shit's Easy' syndrome" is real, and I wonder if the prevalence of LLMs doing the scutwork will lead to an entire generation of programmers who suffer from it.


Well, sure. Trying to plan events at incomprehensibly large scale is like that, as the 20th century collectivist states failed largely in consequence of too late discovering. You have to retain a sense of scale in these things, not to say humility. Meanwhile, cannabis legalization in the US proceeds apace as a fifty-state patchwork, with simple possession still a major felony some places, while commercial distribution in others is a wholly legitimate storefront affair, and someone will eventually reap a small political windfall through federal recognition of the situation in being. No one is really planning anything. It is the assumption someone must that I'm criticizing, because for all the decades of planning indulged by the interminable old-times legalization advocates, their desideratum in practice looks nothing like they ever came close to seriously imagining or predicting.

To his dubious credit, I think Yegge has in the interim learned this lesson, possibly at the cost of some others. Looking at his "Gas Town" makes the hair stand up on the back of my neck, not least for that I once had ferrets and I know what chaos they embody and wreak (and how f—ing expensive they are!); I'm sure he was intentional in his choice of the metaphor, but he's always been one of those for whom consensus reality and good sense are likewise mostly optional. So in entire fairness I have to admit I really can't see any just criticism that he's planning too much these days. But the value in such a swing from one extreme to another, versus something more closely resembling moderation, charitably has yet to be demonstrated.

(As a programmer of both fintech and actual finance experience, btw, it's very comical to me to see the Big Design Up Front approach being applied in this way to this specific example, precisely because it so little resembles how anyone genuinely approaching the task does so. It is very much how I would expect the Google of 2009 to look at things. It isn't that much like how a bank or a startup does. But I said I wasn't going to beat up on old work, and I can't pretend I had so broad a perspective myself so long ago.)


Good points.

I was similarly appalled and shocked at Gas Town. Maybe something like it is the future, but I really didn't expect Yegge to be a genAI booster.

If Gas Town has "the Quality Without a Name," I will eat my hat.

https://sites.google.com/site/steveyegge2/tour-de-babel


(Thank you for a pleasant and thought-provoking conversation, by the way! All hopes for a favorable Friday and weekend.)

Likewise!

Oh, God, spare me from the architect who must be sure he is seen to be one with the Tao. Its name is 无为 (wu wei), and Emacs, which I have used exclusively since 2010, does not "have" it, although a given human Emacs user may. (But see previously my comments with respect to js2-mode; Yegge's enthusiasm of the moment notwithstanding, he was at least not then the most obviously reliable judge.)

It isn't something that can exist in the absence of consciousness, because only in the presence of consciousness can it not exist. I grant some computer programs sensu lato may conceivably experience qualia, but even today would be taken sorely aback to discover Emacs among them.


I did not realize that Yegge was referencing the Tao with that, though it certainly had some of that aesthetic flavor to my untutored Western ears.

I can roughly intuit how it might be something which can only be relevant in the presence of consciousness, despite my near-total lack of knowledge of any religious tradition outside the Western ones.

I agree that conscious programs in some sense are conceivable, but I'm skeptical of it myself especially in comprehensible programs, however large - something self-documenting and readable is nearly the opposite of the human brain, which is the only thing we really have strong reason to believe is conscious (by way of each possessing one).


Properly, with "the Quality without a Name" Yegge was referencing Christopher Alexander's The Timeless Way of Building (1979) wherein that phrase is - one would ordinarily say 'defined,' but in this case the author strove with what I consider deeply tasteless artifice to inflict a mostly ersatz epiphany. (It is an extremely 2009 Google or "Chocolate Factory" kind of book.) It was Alexander whom I excoriated as the architect who etc., since he was that. (His work on the U of O campus gets too much credit; Eugene could not but have been lovely, anyway, and it was not the town's fault I wilt for want of full sun.) In any case to construct the idea as "religious" obscures a trivially essential point, in that to do so is like saying you're worried the Name might get mad if you pick up a hammer. Oh, if with a heart of hate or concupiscence then sure, that's a problem, but Jimmy Carter built houses with Habitat for about a million years and I know flights of angels sang that man to his rest. The "Tao," if we like, is a hammer. Anyone is free to believe in it or not. It drives nails just the same either way. 'The rest is commentary.' Don't worry about it too much.

I'm not actually much of a mystic, though some who've known me might disagree, especially after that last paragraph! My concept of consciousness is broadly both mechanistic and scalar, which having arisen is reliably conserved because abstraction, reflection, and introspection are behaviors whose adaptive benefit easily compounds on itself. (The singularitarians aren't wrong that getting smarter makes you better at getting smarter; they just have no idea what "smart" means.) I am also wholly unapologetic about the wholly intuitive and qualitative nature of that understanding, not least because to be both at once places me serenely beyond the moist and smelly grasp of rote scientism. For example, my friends who've been wasps were not less conscious in my estimation than myself and my friends who are human, but I would say they perhaps reflect and ramify less deeply. One might resort for a mental model to the concept of a space-filling or Peano curve [1]: we iterate many orders more deeply than even the most capacious of social wasps, to be sure! But I have seen a Polistes exclamans wasp comfort her anxious and frightened sister with a hug in my kitchen (2), and I've seen them learn me as the final waypoint of what, given the unusually capable aerialism and extensive navigational skills of the average Polistes metricus forager, could well have been a longer and more complex daily commute than mine. (And I never have to deal with birds trying to eat me!)

So these are not at all stupid or robotic animals, the social wasps. As terrestrial predators and foragers who hunt energetically expensive prey by sight, they experience many of the same selection pressures as we do toward episodic memory, constructive theory of mind, kinship recognition by sight rather than odor (and thus at much greater distance,) and other such relatively complex cognitive skills. Also, I have watched a wasp sleep, and seen the rate of her breathing oscillate in a fairly close parallel to the periodicity one sees in the stages of mammalian sleep. I believe they may experience something very like the voluntary paralysis of our REM sleep. I believe there is no reason for such an inhibitory circuit to develop and be conserved, other than the reason we have it. In short I believe they may very well dream, in some way meaningfully like we do, and again for the same reasons. (I incline, in my incompetently autodidactic manner, toward the "integrated information theory" expounded by Hoel, at least inasmuch as I borrow the need for balancing surprise minimization versus overfitting avoidance, but I'm not really dogmatic about it.) And finally, ineluctably, I defy anyone anywhere to show me that of any kind which dreams yet is not conscious.

These are not only (or not all only) individual observations via personal correspondence, either; I'm happy to cite and discuss at length the specific details of the ethology and neurology underlying such complex behavior, which I may not be the first to observe is strongly suggestive of social wasps exercising a constructive theory of mind for a species deeply dissimilar to themselves, ie we H. sap. A good lay overview, written much from a love which I recognize, is Seirian Sumner's 2022 Endless Forms. I forget offhand if she is as explicit as I'm being, but that's okay; no one really of whom I'm aware is really making the kind of (what is arguably a) leap that I am, to treat consciousness in this way; an unkind critic might accuse me of half-assing my way to some half-baked animism, through a daytime-TV pop science conception of consciousness as waves hands I dunno...holographic? Luckily, with no costly postnominals to defend nor student loans to defray, I suppose I'm free to say more or less what I like. (Such as that, if Sumner leaves you wanting more, a good next step into entomology proper - and one of my own first sources! - is 2018's The Social Biology of Wasps.)

Even the largest and fanciest frontier model (properly the vast infrastructure which serves it, which may to some useful ends be considered as a kind of organism) is many orders of magnitude less complex in both "neurome" and connectome than even the most basal of social wasps, and there is no real cause to expect this will change in our lifetime. (Wasps are not getting simpler, and God as yet still stubbornly refuses to be invented by Sam Altman.) A human's brain of course ramifies as many orders further still, but no matter; if there was only ever one example of "Shit's Easy syndrome," I must surely be making fun of it now, in the idea that our programs express our minds more magically than any other form of human mechanism or artifice, so much so as to encapsulate much less surpass.

If a conscious computer system ever arises - and note by that 'in the broad sense,' I include eg the idea of the entire planetary network considered as "a" consciousness, so we're definitely not aiming for any immediate or concrete mapping for that intentionally nebulous concept - then I confide there will also arise humans able to recognize it as like themselves, and vice versa. I would not expect them to find it more comprehensible than they find themselves, or for that matter than it would likely find itself or them. Good grief, who ever does in this life?

(And at no doubt welcome last, thanks once more for the nudge to further work in clarifying my thesis and its argument, perhaps not without interest. I regret if I've given the impression of making light of your faith despite that I do not share it. Oh, I have my differences with Them Upstairs, and we'll work those out by and by - but that is no fault of yours so far as I know, and I hope I haven't made it too much your problem.)

[1] https://en.wikipedia.org/wiki/Space-filling_curve

(2) I was sheltering them from a cold snap, an experience more or less semiotically indistinguishable for them from an alien abduction, although I of course had the grace as a host not to stress them unnecessarily. We all had a hell of a fight on our hands anyway, the night the local pavement ant supercolony caught wind and mounted an invasion, but the next morning was finally warm and mild enough for them to disperse. I suppose things turned out well enough in their eyes, since the family stuck around and we were porch neighbors for a few years after that.


One would as sensibly dismiss the concept of an assembly line as "how to build a car if you cannot."

Dijkstra was a mathematician. It is a necessary discipline. If it alone were sufficient, then the "program correctness" fans would have simply and inarguably outdone everyone else forty years ago at the peak of their efforts, instead of having resorted to eloquently whiny, but still whiny, thinkpieces (such as the 1988 example [1] quoted here above) about how and why they would like history to understand them as having failed.

[1] https://www.cs.utexas.edu/~EWD/ewd10xx/EWD1036.PDF [2]

[2] I will freely grant that the man both wrote and lettered with rare beauty, which shames me even in this photocopier-burned example when I compare it to the cheerful but largely unrefined loops and scrawls of my own daily hand.


The formal methods people may yet have the last laugh. I did not have Lean becoming a hyped programming language / proof assistant on my bingo card for 2025-26 and yet here we are, because these tools help us close the validation loop for LLM agents. That is not dead which can eternal lie...

But yes, I think the best rebuttal to Dijkstra-style griping is Perlis's "one can't proceed from the informal to the formal by formal means". That said, rather like Chesterton's quip about Christianity, I also believe formal methods have mostly not been tried and found wanting, but rather found hard and left untried - by myself included, although I do enjoy a spot of the old dependent types (or at least their approximations). There's an economic argument lurking there about how robust most software really needs to be.


Certainly, and it's at that economic argument that I strive to get, I think.

Every so often an article makes the rounds on the correctness and verification methods used for Space Shuttle avionics software and applications of similar import, or if not that then Nancy Leveson's comprehensive 1995 review of the Therac-25 accidents. [1]

Most software doesn't need to be nearly so robust, but Dijkstra constructs his argument as though all did, hinging the inversion on the obvious and frankly shocking cheat across the gap between his pages 14 and 15, ie, that paragraph beginning "But before a computer is ready to perform..." Here he casually, and without direct acknowledgement much less justification, assumes as rhetorically axiomatic that a program, not the machine that executes it, is the original artifact of computing, of which any reification merely constitutes less than perfect instantiation, which he is then free to criticize on the wholly theoretical grounds of mathematical beauty; that is, on the grounds he prefers to inhabit in all cases, whether to do so in any given example makes any sense or not.

If that's his preferred ground, fair enough; after all, he was a mathematician. But his hypocrisy in concealing the insistence by means of subtle rhetoric - mere pages after inveighing against "medieval thinking" by way of an example, his "reasoning by analogy," faulting specifically that argument made by way of specious rhetoric! - casts suspicion on all that both precedes and follows. From a layperson, I could regard it as honest error, but I have known and loved academic mathematicians, and I really can't conceive of any of them leaving intact so consequential a mistake.

Perhaps Dijkstra was different, or merely becoming old, but for someone so heavily invested in pushing a paradigm of programming with mathematical rigor at its core, it seems a remarkable flaw in what should be a crucial argument (especially in advance of a solution for the halting problem). I regret that flaw, because he isn't all wrong about what an engineering paradigm can do to the agency and optionality of programmers especially in industry - not that his one extremely privileged position therein, parallel with Feynman's time at Thinking Machines, would much acquaint him with our desiderata or our constraints - and I would like to find that point made in better company than he was able to give it.

But then, his conception never offered much in preference, did it? The labor of mathematicians is scarce and expensive: what good is a proof assistant to anyone who can't understand its output, much less give it input? And Dijkstra himself, not less strange a bird than any other mathematician, famously did all he could to avoid actually using the machines on whose correct use he here wrote. (Hence his hand, which I complimented so highly before. I also use a fountain pen, but as I said, not so beautifully - and I'm glad I know how to use a keyboard well, instead.)

There would not be more programmers or more software in a world run on such principles, I think, than in this one - on the contrary, less by far. Maybe that would be preferable, but mostly not for the reasons Dijkstra claimed.

[1] http://sunnyday.mit.edu/papers/therac.pdf


There is no liability or penalty for software defects and therefore no incentive for program correctness. Arguably, Dijkstra didn't fail; society has foolishly decided not to hold developers accountable for their bad code.

Arguably? Okay, so argue it.

I think the real tragedy here is that we can spend *all* of our time trying to improve the quality of our output, but it simply doesn't matter, because as long as the button is where the boss wants it to be and is the right color, all is right with the world.

Literally nothing else matters, and we (or at least I) have wasted a ton of time getting good at writing software.


As long as it continues to matter what the button actually does, I can't consider our effort to have been entirely wasted. We only have the misfortune to live in stupid and dangerous times, but good heavens, we're hardly the first in that, and hardly starved for examples from whom to learn.

> One would as sensibly dismiss the concept of an assembly line as "how to build a car if you cannot."

I agree, but I'm not sure this says what you think it does.

The people on the car assembly line may know nothing of engineering, and the assembly line has theoretically been set up where that is OK.

The people on the software assembly line may also (and arguably often do) know nothing of engineering, but it's not clear that it is possible to set up the assembly line in such a way so as to make this OK.

Arguably, the use of LLMs will at least have some utility in helping us to figure this out, because a lot of LLMs are now being used on the assembly line.


I hope OP has friends.

