> If you only look at PRs and don't ever care about commits, why are they even being sent to the reviewer in the first place? Just send a diff file.
This is in fact what hg does with amending changesets, and yes, it works far better. Keep PRs small and atomic and you never need to worry about what happens intra-PR. If you need bigger units of work, that's what stacking is for.
Stacking is good for expressing dependencies, but isn't helpful when you need to make several distinct changes that aren't necessarily needed unless you take them all in. What's the value in having a separate PR that introduces a framework that you later use in another PR when you may not actually want to merge it if the latter one doesn't end up being merged as well?
A PR is a group of commits, just utilize that when you need it.
The post is so dramatized, and so clearly written by someone with a grudge, that it really detracts from any point being made, if there is one.
From another former Az eng now elsewhere still working on big systems, the post gets way way more boring when you realize that things like "Principle Group Manager" is just an M2 and Principal in general is L6 (maybe even L5) Google equivalent. Similarly Sev2 is hardly notable for anyone actually working on the foundational infra. There are certainly problems in Azure, but it's huge and rough edges are to be expected. It mostly marches on. IMO maturity is realizing this and working within the system to improve it rather than trying to lay out all the dirty laundry to an Internet audience that will undoubtedly lap it up and happily cry Microslop.
Last thing, the final part 6 comes off as really childish, risks to national security and sending letters to the board, really? Azure is still chugging along apparently despite everything being mentioned. People come in all the time crying that everything is broken and needs to be scrapped and rewritten but it's hardly ever true.
It wasn’t specifically about the escort sessions from any particular country, though, but about the list of underlying reasons why direct node access was necessary.
> People come in all the time crying that everything is broken and needs to be scrapped and rewritten but it's hardly ever true.
Or… you’ve just normalised the deviation.
One of the few reliable barometers of an organisation (or their products) is the wtf/day exclaimed by new hires.
After about three or four weeks everyone adapts, learns what they can and can’t criticise without fallout, and settles into the mud to wallow with everyone else that has become accustomed to the filth.
As an Azure user I can tell you that it’s blindingly obvious even from the outside that the engineering quality is rock bottom. Throwing features over the fence as fast as possible to catch up to AWS was clearly the only priority for over a decade and has resulted in a giant ball of mud that now they can’t change because published APIs and offered products must continue to have support for years. Those rushed decisions have painted Azure into a corner.
You may puff your chest out, and even take legitimate pride in building the second largest public cloud in the world, but please don’t fool yourself that the quality of this edifice is anything other than rickety and falling apart at the seams.
Remind me: can I use IPv6 safely yet? Does it still break Postgres in other networks? Can azcopy actually move files yet, like every other bulk copy tool ever made by man? Can I upgrade a VM in-place to a new SKU without deleting and recreating it to work around your internal Hyper-V cluster API limitations? Premium SSDv2 disks for boot disks… when? Etc…
You may list excuses for these quality gaps, but these kinds of things just weren’t an issue anywhere else I’ve worked as far back as twenty years ago! Heck, I built a natively “all IPv6” VMware ESXi cluster over a decade ago!
> One of the few reliable barometers of an organisation (or their products) is the wtf/day exclaimed by new hires.
Wellllll ... my observations after many cycles of this are:
- wtfs/day exclaimed by people interacting with *a new codebase* are not indicative of anything. People first encountering the internals of any reasonably interesting system will always be baffled. In this context "wtf" might just mean "learning something new".
- wtfs/day exclaimed by people learning about your *processes and workflows* are extremely important and should be taken extremely seriously. "wtf, did you know all your junior devs are sharing a single admin API token over email?" for example.
> One of the few reliable barometers of an organisation (or their products) is the wtf/day exclaimed by new hires.
Eh, I don't think this is exactly as reliable as you'd expect.
My previous job had a fairly straightforward code base but fairly poor reliability for the few customers we had, and the WTF portions usually weren't the ones that caused downtime.
On the other hand, I'm currently working on a legacy system with daily WTFs from pretty much everyone, with a greater degree of complexity in a number of places, and yet we get fewer bug reports and at least an order of magnitude if not two more daily users.
With all of that said... I don't think I've used any of Microsoft's new software in years and thought to myself "this feels like it was well made."
The rapid decay of WTF/day over time applies to both new employees and new customers.
> currently working on a legacy system
"Legacy" is the magic word here! Those customers are pissed, trust me, but they've long ago given up trying to do anything about it. That's why you don't hear about it. Not because there are no bugs, but because nobody can be bothered to submit bug reports after learning long ago that doing so is futile.
I once read a paper claiming that for every major software incident (crash, data loss, outage, etc.), only between one in a thousand and one in ten thousand will be formally reported up to an engineer capable of fixing the issue.
I refused to believe that metric until I started collecting crash reports (and other stats) automatically on a legacy system and discovered to my horror that it was crashing multiple times per user per day, and required on average a backup restore once a week or so per user due to data corruption! We got about one support call per 4,500 such incidents.
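To put that reporting ratio in perspective, here is a back-of-the-envelope sketch. The user count and crash rate below are hypothetical, chosen only to illustrate how a fleet generating enormous incident volume can still look quiet to support at roughly one call per 4,500 incidents:

```python
# Hypothetical fleet: these numbers are illustrative, not from the post above.
users = 200
crashes_per_user_per_day = 3
working_days_per_year = 250

# Total incidents generated per year across the fleet.
incidents_per_year = users * crashes_per_user_per_day * working_days_per_year

# Roughly one support call per 4,500 incidents, as described above.
reporting_rate = 1 / 4500

print(f"incidents/year: {incidents_per_year:,}")
print(f"expected support calls/year: {incidents_per_year * reporting_rate:.0f}")
```

With these assumed numbers, 150,000 incidents a year surface as only about 33 support calls, which is how "no complaints" and "crashing constantly" can both be true at once.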
The customers aren't pissed, we're doing demos to new departments and lining up customizations and expansion as quickly as we can. We're growing faster than ever within our largest customer.
I also didn't say there are no bugs or complaints, I said the system is more stable. But yes, there are fewer bugs and complaints, especially on the critical features.
I didn't use the word legacy to mean abandoned, just that it's been around a long time, we're maintaining it while also building newer features in newer tech, as opposed to my previous company which was a green field startup.
By that question I mean: Do you think there are fewer bugs because you hear fewer complaints from humans, or because you have a no-humans-involved mechanism for objectively evaluating the rate of bugs?
Even if you have a mechanical method for collecting bug reports, crash logs, or whatever, that can still obscure the true quality of the codebase.
One such example that I keep thinking about was the computer game Path of Exile. It has "super fans" with 10,000 hours of playtime who will swear up and down that it is one of the best games ever. When I first played it, I found so many little bugs and issues that I had more fun jotting them down than actually playing the game! I collected pages and pages of bullet points. None were crash bugs that would have been logged, and every one was the type of thing that players would eventually learn to work around by avoiding the scenarios that caused the issue. I.e.: "Don't click too fast after going through a door because your orientation will be random on the other side, so you might be sent back to where you came from", that kind of thing.
Honestly and objectively measuring the quality of a software application (or any product) is hard.
> Last thing, the final part 6 comes off as really childish, risks to national security and sending letters to the board, really?
That struck me too. Maybe I've never worked high enough in an org (I'm unclear how highly ranked the author of the piece is), but I've never been in an org where going over your boss's boss's boss's boss's head and writing a letter to the board was likely to go well.
That said, I could easily believe both that Azure is an absolute mess and that the author of the piece was fired because of how he went about things.
I’d say they’re writing as if they expect everyone to share their ideals, and they’re responding to violations of those ideals in a naive and disempowered way. That doesn’t mean they’re wrong in those ideals, but the way they tried to fix things… they interacted with Cutler, why didn’t they try to influence him to fix things? Or any other senior technical leaders?
I never said I didn’t. It was a multi-year escalation and I shared my concerns widely along with concrete options.
The thing is in corporate environments people avoid admitting anything is wrong because that would make them look bad and also disavows the bosses who promoted them, so the true best interest of the company takes a back seat.
Yeah, I get that; I have been in a similar situation. At some point you just want to pull the trigger instead of going along with values that aren't yours. Sure, there is always the option of silently disagreeing and moving on, but who cares. We live one life not to maximize our careers, but to be who we are. It was an interesting read, and I wish you all the best in the future.
It is true that writing to the board will get you noticed, and that you might not like the consequences. If you value having the job then don’t write to the board. Even if you are right, being noticed like that isn’t going to endear you to your boss.
But if you care more about doing the right thing then writing to the board is the right thing to do. And after a few years of working at Microsoft you might not value your job very much either and you too might decide to go out in style.
Windows is ~500 times bigger than Azure, give or take, by machine count, and still many times larger by LOC, modules, users, or whatever else you want to measure. The heavy lifting (VMs/containers, I/O, the things that cannot be done just like that) is handled by the Windows folks anyway. The only hard part is VM placement; everything else is mostly regular software engineering, some of it of medium-hard complexity, but nothing that can excuse the need for constant human intervention.
It is, but “Microsoft runs on trust” they say. They also say the CEO’s inbox is always open, actually the CEO himself says it in the yearly mandatory training video on business conduct. So it should be safe, in theory, to openly speak out in the best interest of the customers, no? Rhetorical question :)
I feel like emailing the CEO in this case is just a no-op; the inbox is gatekept by his staff and it's very unlikely he saw your email.
That said, “inbox always open” means you should come with a problem AND a very well detailed solution. But the question becomes: if you had a detailed solution that was good, why wasn't it run up the org chart with buy-in, and why did it have to skip to the top?
But that's part of a great solution... it sounds like you might have had a good technical solution, but that's only half the solution in the enterprise. If your technical solution requires another team to completely retool, it's not a great solution overall.
"While some may see this as a dick move and I wasn’t exactly proud of it, but I actually waited for Daniel’s wife, Katie, to go into labor before bringing all of this up with his management."
Holy cow! Now I've unfortunately witnessed some ugly office behavior too, but this is quite another level.
AWS and Google Cloud are both huge and are significantly better in UX/DX. My only experience with Azure was that it barely worked, provided very little in the way of information about why it didn't. I only have negative impressions of Azure whereas at least GC and AWS I can say my experiences are mixed.
> From another former Az eng now elsewhere still working on big systems, the post gets way way more boring when you realize that things like "Principle Group Manager" is just an M2 and Principal in general is L6 (maybe even L5) Google equivalent. Similarly Sev2 is hardly notable for anyone actually working on the foundational infra.
Before the days of title inflation across the industry, a Principal at Microsoft was a rare thing. When I was there, the ratio was maybe 1 principal for every 30 developers. Principals were looked up to, had decades of experience, and knew their shit really well. They were the big guns you called in to fix things when the shit really hit the fan, or when no one else could figure out what was going on.
One of Microsoft's problems is their pay is significantly lower than FAANG and so you very very rarely see people with expertise in the same verticals jump to Azure. I get that "the deal" at Microsoft is lower pressure for lower pay but it really hinders the talent pipeline. There are some good home grown principals and seniors, but even then I think the people I worked with would have done well to jump around and get a stint at another cloud provider to see what it's like. Many of them started as new grads and their whole career was just at Azure.
Meanwhile when I was at another company we would get a weekly new hire post with very high pedigree from other FAANGs. And with that we got a lot of industry leading ideas by osmosis that you don't see Azure getting.
Yeah the deal has also changed. Right as I was leaving the messaging started changing a lot and there was a clear top down “you all need to work harder”. They hired an ex Amazon guy to run my org which really drove the message home.
To be fair though I think Microsoft has decided they are fine with rank and file being mediocre. I don’t know how interested they are in competing for top talent except for at the top.
> I get that "the deal" at Microsoft is lower pressure for lower pay but it really hinders the talent pipeline.
The deal used to be a lower cost of living in a major coastal city, an amazing campus (it is seriously lovely), every engineer had their own office, serious job security, and an unbelievable health care plan.
Seattle exploded in price, they moved to open offices, Microsoft started doing mass layoffs, and they gutted the healthcare plan (by the time I left the main plan on offer was a high deductible with a miserable prescription formulary).
Hard to attract talent when there is no big differentiator.
Of course in the 90s the deal was work there 10 years retire a millionaire. Easy to attract talent when that is the offer ...
Thanks. That reference is correct. The point is why those sessions were necessary at all, because there is no reason, a priori, to do manual touches on production systems, DoD or not.
Microsoft is the go to solution for every government agency, FEDRAMP / CMMC environments, etc.
> People come in all the time crying that everything is broken and needs to be scrapped and rewritten but it's hardly ever true.
This I'm more sympathetic to. I really don't think his approach of "here's what a rewrite would look like" was ever going to work and it makes me think that there's another side to this story. Thinking that the solution is a full reset is not necessarily wrong but it's a bit of a red flag.
At no point during the reading did I get the sense that he's suggesting something radical. Where specifically is he proposing a rewrite?
"The practical strategy I suggested was incremental improvement... This strategy goes a long way toward modernizing a running system with minimal disruption and offers gradual, consistent improvements. It uses small, reliable components that can be easily tested separately and solidified before integration into the main platform at scale." [1]
> The current plans are likely to fail — history has proven that hunch correct — so I began creating new ones to rebuild the Azure node stack from first principles.
> A simple cross-platform component model to create portable modules that could be built for both Windows and Linux, and a new message bus communication system spanning the entire node, where agents could freely communicate across guest, host, and SoC boundaries, were the foundational elements of a new node platform
Yes, I read that part as well and found it a bit confusing to reconcile with this one.
The vibe from my quotes is very much "I had a simple from-scratch solution". They mention then slowly adopting it, but it's very hard to really assess this based on just the perspective of the author.
He also was making suggestions about significantly slowing down development and not pursuing major deals, which I think again is not necessarily wrong but was likely to fall on deaf ears.
Interesting point. The two stances are not contradictory. The end result is a new stack, so you are right saying that was the intent. However, how you get there on a running system is through stepwise improvements based on componentization and gradual replacement, until everything is new. Each new component clears more ground. I never imagined an A/B switch to a brand new system rewritten from scratch.
> Microsoft is the go to solution for every government agency, FEDRAMP / CMMC environments, etc.
I've been involved with FEDRAMP initiatives in the past. That doesn't mean as much as you'd think. Some really atrocious systems have been FEDRAMP certified. Maybe when you go all the way to FEDRAMP High there could be some better guardrails; I doubt it.
Microsoft has just been entrenched in the government, that's all. They have the necessary contacts and consultants to make it happen.
> Thinking that the solution is a full reset is not necessarily wrong but it's a bit of a red flag.
The author does mention rewriting subsystem by subsystem while keeping the functionality intact, adding a proper messaging layer, until the remaining systems are just a shell of what they once were. That sounds reasonable.
Thanks. That was exactly the plan. Full rewrites are extremely risky (see second-system syndrome): people wrongly assume they will redo everything, add everything everyone always wanted, fix all the debt, and do it in a fraction of the time, which is delusional and almost always fails. Stepwise modernization is a proven technique.
As someone who had worked adjacent to the functionally-same components (and much more) at your biggest competitor, you have my sympathy.
Running 167 agents in the accelerator? My gawd that would never fly at my previous company. I'd get dragged out in front of a bunch of senior principals/distinguished and drawn and quartered.
And 300k manual interventions per year? If that happened on the monitoring side, many people (including me) would have gotten fired. Our deployment process might be hack-ish, but none of it involved a dedicated 'digital escort' team.
I too have gotten laid off recently from said company after similar situation. Just take a breath, relax, and realize that there's life outside. Go learn some new LLM/AI stuff. The stuff from the last few months are incredible.
We are all going to lose our jobs to LLM soon anyway.
> I've been involved with FEDRAMP initiatives in the past. That doesn't mean as much as you'd think. Some really atrocious systems have been FEDRAMP certified. Maybe when you go all the way to FEDRAMP High there could be some better guardrails; I doubt it.
I never said otherwise. I said that Microsoft services are the defacto tools for FEDRAMP. I never implied that those environments are some super high standard of safety. But obviously if the tools used for every government environment are fundamentally unsafe, that's a massive national security problem.
> Microsoft has just been entrenched in the government, that's all.
Yes, this is what I was saying.
> The author does mention rewriting subsystem by subsystem while keeping the functionality intact, adding a proper messaging layer, until the remaining systems are just a shell of what they once were. That sounds reasonable.
It sounds reasonable, it's just hard to say without more insight. We're getting one side of things.
I have no idea what you're talking about. This has nothing to do with having "better fedramp certs". If you are setting up fedramp or cmmc you will be heavily, heavily pressured and incentivized to do so with Microsoft tooling.
"Better" isn't relevant, which is my entire point. The reason people choose Microsoft isn't "it's better for this", it's because every consultancy out there, every government agency or affiliate, etc, is going to push Microsoft very very hard.
I’ve been in the space for 30 years. Nobody is pressuring anyone to buy Microsoft because of FedRAMP, and Microsoft is not even close to having any advantage with respect to FedRAMP vs their competitors.
FedRAMP is a demonstration that the solution met some assessment of controls in alignment with NIST 800-53. As a checkbox, it's almost as dumb as FIPS 140, and like FIPS, you need to assess risk for your implementation regardless of these things.
Microsoft wins deals because their product catalog is well engineered to incentivize bundled subscriptions that drive marginal adoption. The user facing products are better, Entra is generally right there, and that’s a pivot into many other scenarios that drive spend.
Meaning Msft Principal is below L5? I got the same feedback from one of my friends who works at Google. She said quality of former MSFT engineers now working at Google was noticeably lower.
I mean imputed prestige within the organization. Being an L5 is nothing; it's the promote-or-fire cutoff at Google AFAIK. But being a Principal is slightly more than nothing; it's two levels above the promote-or-fire cutoff.
I mean, _now_, sure, I'd assume Microsoft Principals should be hired around L4 at Google. But that's just due to a temporary imbalance in the decline of legacy organizations. Give it a few years and it will even back out, and msft 64 will be in the middle of the L5 range, like levels.fyi claims.
L5 hasn't been the promote or fire cutoff at Google for perhaps a decade. L4 is the new L5, mostly because Google would have to pay L5s more, and it has been terrified of personnel costs for years.
But even so, an L5 at Google is basically a nobody as far as prestige or convincing other people to adopt your plan goes. Even L6 is basically just an expert across several mostly local teams. L7 is where the prestige gets going.
In fairness the SECWAR is hardly a computing expert.
But in this case the SECWAR has been properly advised. If anything it's astonishing that a program whereby China-based Microsoft engineers telling U.S.-based Microsoft engineers specific commands to type in ever made it off the proposal page inside Microsoft, accelerated time-to-market or not.
It defeats the entire purpose of many of the NIST security controls that demand things like U.S.-cleared personnel for government networks, and Microsoft knew those were a thing because that was the whole point to the "digital escort" (a U.S. person who was supposed to vet the Chinese engineer's technical work despite apparently being not technical enough to have just done it themselves).
Some ideas "sell themselves", ideas like these do the opposite.
> If anything it's astonishing that a program whereby China-based Microsoft engineers telling U.S.-based Microsoft engineers specific commands to type in ever made it off the proposal page inside Microsoft, accelerated time-to-market or not.
> It defeats the entire purpose of many of the NIST security controls that demand things like U.S.-cleared personnel for government networks, and Microsoft knew those were a thing because that was the whole point to the "digital escort" (a U.S. person who was supposed to vet the Chinese engineer's technical work despite apparently being not technical enough to have just done it themselves).
Holy fuck. Ok, this will change things considerably for some companies I'm working with that had moved their stuff to Azure. Thanks. More than I can express on here.
I'm sympathetic to the viewpoint but I'm not in the habit of policing the names people use for themselves.
I've certainly done more than my fair share of jobs in the Navy where the office I was formally billeted to had long since ceased to actually exist as described due to office renamings. Often things as simple as a department section being elevated into a department branch, with people using the new name even while they wait 1-2 years for the manpower records to be fixed and the POM process to cycle through for program resourcing. But still, it seems hard to treat it as a crime at one level when no one blinked an eye at the lower level.
Maybe Congress will eventually step in, but in the meantime the American voters made their choice about who they want to run these agencies, so...
The main title of the office is still “secretary of defense”, the executive order added a secondary title of the department and the office, it didn't replace the primary titles.
> the American voters made their choice about who they want to run these agencies
The American voters don't get to override the U.S. constitution. The American voters also voted in the U.S. Congress, which has the sole authority to name the department and title. My representatives have not voted to change the law. Do you not care about the rule of law?
> I'm not in the habit of policing the names people use for themselves.
I'm sure you think you're being clever, but this is such a bad faith argument.
Of course I do. I hope the rest of my fellow Americans will someday care as much as I do about it. It's clearly not the case today.
But, is it illegal to refer to Secretary Hegseth as the SECWAR?
If so, would it be legal to refer to him as the SECDEF? After all, that isn't the formal term that Congress established his position as under 10 USC 113.
It's not hard to see all the cans of worms that emanate from the topic. I said already that this is Congress's purview, and they have had ample opportunity to put a stake in the ground on their position in response...
These agencies such as the Department of Defense, whose secretary is...?
The department's name is *legally* the Department of Defense. If they want to change it, they can go to Congress and do it the legal way. They have a majority. There's nothing stopping them except for their disregard for the sanctity of the law.
> The United States secretary of defense (SecDef), secondarily titled the secretary of war (SecWar),[b] is the head of the United States Department of Defense (DoD), the executive department of the U.S. Armed Forces, and is a high-ranking member of the cabinet of the United States.[8][9][10]
This was such a genuinely weird moment for me when reading the article.
"yadda yadda and then also the secretary of defence agreed it was bad"
I'm just reading along and going, "yeah that sounds really bad if a secretary level position is being cited... wait a second, isn't that actually the guy who is literally famous for being stupid??"
I never expected to be living through a real life version of "the emperor's new clothes", like, how is anyone quoting this guy about anything?
The problem is that what he writes is very plausible and explains a lot about why Azure is so unreliable and insecure. The author didn't mention the shameful way Microsoft leaked a Golden SAML key to Chinese hackers. This event absolutely was a threat to national security.
Do you contest the fact that Microsoft royally fumbled OpenAI out of sheer incapability of providing what's supposed to be its core business despite having all deals in its favor? Because that's the most damning validation against Azure in recent times.
I don't disagree with you. I wish there were some good counterpoints from the Azure team. There was one comment on the article from someone on the Azure team, but I feel that comment is a bit weak.
Like the one where 1.5T in value went poof due to reasons you mentioned? I will let people judge whether your arguments are most likely, or whether this is bubble syndrome. Hint: there is a large distance between the use of smart pointers and market effects.
Yes, it's easy to critique any large system or organisation, but to then go over everyone's head and cry to the CEO and board is snake-like behaviour, especially offering yourself as the answer to fix it. OP will be marked as a troublemaker and a bad team member.
The grudge is simple and doesn't detract one thing from a very well articulated blog: you do your job as an engineer of pointing out problems, even proposing solutions, and they fire you for doing exactly that job. It's infuriating enough just from reading it; I don't know how you can't see any legitimacy in what the guy is complaining about. You have your right of free speech to complain about shitty jobs if you want; there's no honor-bound duty to maintain silence here.
Yep. Truly horrid policy. Where I work, our issued iPhones suck to use without App Store access; no Bitwarden was the killer for me personally. Everyone I checked with uses their personal email/Apple ID instead of the MAID, and there's a sword over your head if you ever accidentally copy/paste something from internal emails into something like Notes, which has iCloud sync (we're semi-serious about leakers). Absolute failure of an MDM setup by Apple.
MDM can restrict pasteboard from managed apps to non-managed apps, as well as allowing iCloud sign-ins but restricting which iCloud services are allowed.
It's an absolute failure of the MDM server administrator for allowing such things, not on Apple.
Anyone who uses the phrase "measly" in relation to three nines is inadvertently admitting their lack of knowledge of massive systems. 99.9 and 99.95 are the targets for some of the most common systems you use all day, and they are by no means easy to achieve. Even just relying on a couple of regional AWS services will put your CEILING at three nines. It's even more embarrassing when people post that one GH uptime tracker that combines many services into one single number, as if that means anything useful.
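To make the nines concrete, here is a quick sketch of the downtime each availability target allows, and of why stacking dependencies caps your own ceiling (standard availability arithmetic, nothing specific to AWS):

```python
# Allowed downtime implied by an availability target ("the nines").
# Note that availabilities of serial dependencies multiply, so a service
# built on two independent 99.95% dependencies tops out near three nines.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of allowed downtime per year at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.95, 99.99):
    mins = downtime_minutes_per_year(pct)
    print(f"{pct}% -> {mins:.0f} min/year (~{mins / 60:.1f} h)")

# Two independent 99.95% dependencies in series:
combined = 0.9995 * 0.9995 * 100
print(f"two 99.95% services in series -> {combined:.4f}%")
```

Three nines already means under nine hours of downtime per year, and the serial-dependency product shows why depending on even two 99.95% services leaves roughly 99.90% as your best case.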
Three 9s is a perfectly reasonable bar to expect for services you depend on. Without GitHub my company cannot deploy code. There is no alternative method to patch prod. In addition many development activities are halted, wasting labor costs.
We wouldn’t couple so much if we knew reliability would be this low. It will influence future decisions.
It's supposed to be a Roblox competitor, which does print money, though probably not to the extent of how much they invested.
The problems are twofold:
- People/kids don't want to put on a VR headset to play Roblox. I guess they're conceding this point by pivoting to mobile.
- Meta is the opposite of cool. Real-name requirements, only humanoid avatars, super corpo branding, etc. really seriously hold them back from competing with VRChat or Roblox. This one is terminal; it'll never be fixable as long as Meta is at the helm.
I've worked on optimizing systems in that ballpark range, memory is worth saving but it isn't necessarily 1:1 with increasing revenue like CPU is. For CPU we have tables to calculate the infra cost savings (we're not really going to free up the server, more like the system is self balancing so it can run harder with the freed CPU), but for memory as long as we can load in whatever we want to (rec systems or ai models) we're in the clear so the marginal headroom isn't as important. It's more of a side thing that people optimizing CPU also get wins in by chance because the skillsets are similar.
You're failing to explain what dictates the price the market will bear.
> Why don't landlords undercut one another? They literally don't have to. The only outcome is less profit. You'll find a tenant eventually, at any price.
is very obviously not true, otherwise prices right now would be effectively infinite. Why are prices for an apartment in SF only 3k/mo instead of 30k? Surely under your reasoning a landlord could just wait and get a tenant at any price they set?
The answer is always supply and demand. As long as the supply is constrained or demand goes up faster the price will rise. But UBI doesn't change that math at all. (I say this as someone not actually a fan of UBI)
> Why are prices for an apartment in SF only 3k/mo instead of 30k? Surely under your reasoning a landlord could just wait and get a tenant at any price they set?
No, because local wages cannot sustain those prices.
If local wages could sustain those prices, then yes, all rents would rise to that new, higher local income level.
That is (quite self-evidently) why prices are so phenomenally high in ultra-high-income areas like SF.
Every single landlord is setting prices by the same metric: what can the people who would live here afford? Competition between landlords is almost nil, which is why you find almost no "deals" anywhere. The market is totally efficient. Everyone agrees on how to set prices: by local wages.
In fact, collusion via the likes of YieldStar is the name of the game. Everyone is setting prices based on what the algorithm tells them to set, and they all benefit from that rise in prices because there's basically no competition pushing prices down.
There's also been a steady consolidation of ownership of rental units which also artificially increases prices.
There's a reason nowhere in the country at this point has affordable housing.
> You're failing to explain what dictates the price the market will bear.
Most people like to live in the nicest place they can afford. This is a force pulling prices upward when many people with excess cash are competing for a limited supply of homes. It's why you'll pay more for the same size property in a wealthy neighborhood.
> Why are prices for an apartment in SF only 3k/mo instead of 30k?
Because some people in SF can only afford 3k/month. But if you added 3k/month to literally everyone's income, that number would increase.
(In case you're wondering why the many people with more than 3k/month don't crowd those people out: the wealthy depend on those 3k/month people for labor. At least for now.)
If someone reads the reddit post and decides to buy the Sriracha competitor then who has been ripped off? It's a win-win, competitor has gotten business and the customer has bought a product they now perceive to be superior.
People should probably be more aware that the social media they use is astroturfed to hell and back, but marketing and advertising are far too demonized.
Can't believe that I haven't seen the obvious answer, that OpenClaw is simply more fun to use. Sure, you MAY be able to do what OpenClaw does through 5 other dedicated tools, but you are going to take way longer to do so with a ton more drudge work. And above all else: it is extremely enjoyable to talk to the computer in normal language and just have stuff happen. And it's got a personality that you can tweak to your liking. Personally it's the most fun I've had using a computer in a long time.
IMO OpenClaw or a similar agent will be on everyone's phone in a couple years. It's basically what Siri was always supposed to be. For the average user it's obvious that this is the way computers are meant to be interacted with.
OpenClaw is in most cases also going to use the very same dedicated tools, maybe variations of those tools dumbed down for the LLM.
Almost every time I have an idea for AI Agent, I end up just making a script/binary that does the same, but so much faster that adding AI to it feels silly.
Recently I made a tool router that runs locally for such tools. Some tools have no arguments at all. Claude created a quick overlay where I can type/speak and it will do a tool call. Without me even asking, Claude added 4 buttons next to the text input that bypass the agent and just do a tool call directly. I barely use text-to-command because those 4 buttons cover 9/10 of my use cases.
At this point I'm trying to come up with tools to add to it, so it's actually useful as an agent. Almost everything ends up being a cronjob or webhook triggered thing instead.
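The "bypass the agent" pattern described above can be sketched as a tiny tool registry; the names and structure here are my own guesses for illustration, not the commenter's actual code. The point is that once tools are plain named callables, a button (or cron job) can dispatch one directly without any LLM in the loop:

```python
# Minimal local tool router sketch: tools register under a name, and a
# direct `call` dispatches them, bypassing the agent entirely.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator that registers a callable as a named tool."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("lights_off")
def lights_off() -> str:
    # A real integration call would go here.
    return "lights: off"

@tool("play_music")
def play_music(playlist: str = "focus") -> str:
    return f"playing: {playlist}"

def call(name: str, **kwargs) -> str:
    """Direct dispatch -- what the four bypass buttons would do."""
    return TOOLS[name](**kwargs)

print(call("lights_off"))                   # lights: off
print(call("play_music", playlist="jazz"))  # playing: jazz
```

An agent would sit in front of `call` only when free-form text needs to be mapped to a tool name; for fixed, zero-argument tools, the button wired straight to `call` is both faster and deterministic.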
I guess it's exactly the opposite for me ... I always hated using "normal" language with the computer.
I often quip that I became a programmer specifically to avoid having to use spoken language. I always twitch at the thought of using any voice-based assistant.
Thinking in systems and algorithms is more enjoyable than using human language when it comes to computers IMHO ...
> I often quip that I became a programmer specifically to avoid having to use spoken language. I always twitch at the thought of using any voice-based assistant.
You're one of those people who think that programming languages are structured and formal whereas, in contrast, natural language must be unstructured and lacking form? Going by the Chomsky hierarchy of formalisms, natural language sits somewhere between context-free and context-sensitive https://en.wikipedia.org/wiki/Mildly_context-sensitive_gramm...
> Thinking in systems and algorithms is more enjoyable than using human language when it comes to computers IMHO ...
You don't think in "systems and algorithms" -- those are the outputs of your thinking.
My experience also. I could manually connect my Obsidian notes to my AI, sure, but what I did instead was write "Obsidian just released a headless CLI sync tool, install it so we can use it", and in a minute it came back with "Ok, everything installed, I just need your login and password."
Dangerous? Yes, very, but it truly feels like living in the future. Surprisingly, it's even more fun than sci-fi movies made me think it would be.