The actual experts I was paying attention to said that wearing an N95/KN95/KF94-type mask lowers the statistical rate of transmission, that is, of you infecting others with the virus you are carrying.
The subsequent findings are that cloth-type masks are less effective (but not wholly ineffective) compared to clinical/surgical masks at limiting the aerosolized viral shedding from those already infected. So if a cloth mask was all you had, the advice became "please wear it".
Turns out, many people assume advice is only relevant when given for their own direct & immediate personal benefit, so they hear what they want to hear, and even the idea of giving a shit about externalities is sheer anathema. That gets boiled down further for idiot-grade TV and bad-faith social media troll engagement and we wind up with reductive and snarky soundbites, like the remark above, that help nobody at all.
Back on topic, the choice of so-called "experts" in the Guardian's coverage of the AWS matter seems to be a classic matchup of journalistic expediency with self-promoting interests to pad an article that otherwise has little to say beyond paraphrasing Amazon's operational updates.
* Lattice overhead power lines? Eyesore (should use the new T-style pylons), house values, wind noise, hums, WiFi interference, cancer, access roads, hazard to planes, birds
* Underground: damaging to the environment, end stations are eyesores/light polluters, more construction traffic, should be HVDC not AC, house values
* Solar farms: waste of good land (golf courses are fine), noise somehow, construction, eyesore (but a 400-acre field of stinky bright yellow rapeseed is OK), house values
* Onshore Wind farms: all the birds all the time, access, eyesore, noise, dangerous, should be offshore, house value, waste of land, I heard on Facebook the CO2 takes 500 years to pay back
* Offshore wind farms: eyesores, radar hazard, all the birds, house values somehow, navigation hazard, seabed disruption
* Build an access road: destroying the countryside, dust if not surfaced, drainage, house values
* Don't build an access road: destroying roads, HGVs on local roads, house values
* Nuclear: literally all the reasons plus scary
Some of them are fair on their own, but it really adds up to a tendentious bunch of wankers at every turn who think the house they bought for 100k in 1991, now worth 900k, is the corner of the universe.
I wrote about this on HN a few years ago, and on Dave Winer's UserLand Frontier discussion group a couple of decades ago, after Xanadu released some open source code, which was actually the output of a Smalltalk=>C++ transpiler. (That code was actually from a team at Autodesk, not directed by Ted Nelson -- see his reply below.)
I think his biggest problem is that he refuses to collaborate with other people, or build on top of current technology.
He's had a lot of great important inspirational ideas, but his implementation of those ideas didn't go anywhere, he's angry and bitter, and he hasn't bothered re-implementing them with any of the "inferior technologies" that he rejects.
Back in 1999, project Xanadu released their source code as open source. It was a classic example of "open sourcing" something that was never going to ship otherwise, and that nobody could actually use or improve, just to get some attention ("open source" was a huge fad at the time).
>Register believe it or not factoid: Nelson's book Computer Lib was at one point published by Microsoft Press. Oh yes. ®
They originally wrote Xanadu in Smalltalk, then implemented a Smalltalk to C++ compiler, and finally they released the machine generated output of that compiler, which was unreadable and practically useless. It completely missed the point and purpose of "open source software".
I looked at the code when it was released in 1999 and wrote up some initial reactions that Dave Winer asked me to post to his UserLand Frontier discussion group:
A few excerpts (remember I wrote this in 1999 so some of the examples are dated):
>Sheez. You don't actually believe anybody will be able to do anything useful with all that source code, do you? Take a look at the code. It's mostly uncommented glue gluing glue to glue. Nothing reusable there.
>Have you gotten it running? The documentation included was not very helpful. Is there a web page that tells me how to run Xanadu? Did you have to install Python, and run it in a tty window?
>What would be much more useful, would be some well written design documents and post-mortems, comparisons with current technologies like DHTML, XML, XLink, XPath, HyTime, XSL, etc, and proposals for extending current technologies and using them to capture the good ideas of Xanadu.
>Has Xanadu been used to document its own source code? How does it compare to, say, the browseable cross-referenced mozilla source code? Or Knuth's classic Literate Programming work with TeX?
>Last time I saw Ted Nelson talk (a few years ago at Ted Selker's NPUC workshop at IBM Almaden), he was quite bitter, but he didn't have anything positive to contribute. He talked about how he invented everything before anyone else, but everyone thought he was crazy, and how the world wide web totally sucks, but it's not his fault, if only they would have listened to him. And he verbally attacked a nice guy from Netscape (Martin Haeberli -- Paul's brother) for lame reasons, when there were plenty of other perfectly valid things to rag the poor guy about.
>Don't get me wrong -- I've got my own old worn-out copy of the double sided Dream Machines / Computer Lib, as well as Literary Machines, which I enjoyed and found very inspiring. I first met the Xanadu guys some time ago in the 80's, when they were showing off Xanadu at the MIT AI lab.
>I was a "random turist" high school kid visiting the AI lab on a pilgrimage. That was when I first met Hugh Daniel: this energetic excited big hairy hippie guy in a Xanadu baseball cap with wings, who I worked with later, hacking NeWS. Hugh and I worked together for two different companies porting NeWS to the Mac.
>I "got" the hypertext demo they were showing (presumably the same code they've finally released -- that they were running on an Ann Arbor Ambassador, of course). I thought Xanadu was neat and important, but an obvious idea that had been around in many forms, that a lot of people were working on. It reminded me of the "info" documentation browser in emacs (but it wasn't programmable).
>The fact that Xanadu didn't have a built-in extension language was a disappointment, since extensibility was an essential ingredient to the success of Emacs, HyperCard, Director, and the World Wide Web.
>I would be much more interested in reading about why Xanadu failed, and how it was found to be inadequate, than how great it would have been if only it had taken over the world.
>Anyway, my take on all this hyper-crap is that it's useless without a good scripting language. I think that's why Emacs was so successful, why HyperCard was so important, what made NeWS so interesting, why HyperLook was so powerful, why Director has been so successful, how it's possible for you to read this discussion board served by Frontier, and what made the World Wide Web what it is today: they all had extension languages built into them.
>So what's Xanadu's scripting language story? Later on, in the second version, they obviously recognized the need for an interactive programming language like Smalltalk, for development.
>But a real-world system like the World Wide Web is CONSTANTLY in development (witness all the stupid "under construction" icons), so the Xanadu back and front end developers aren't the only people who need the flexibility that only an extension language can provide. As JavaScript and the World Wide Web have proven, authors (the many people writing web pages) need extension languages at least as much as developers (the few people writing browsers and servers).
>Ideally, an extension language should be designed into the system from day one. JavaScript kind of fits the bill, but was really just nailed onto the side of HTML as an afterthought, and is pretty kludgey compared to how it could have been.
>That's Xanadu's problem too -- it tries to explain the entire universe from creation to collapse in terms of one grand unified theory, when all we need now are some practical techniques for rubbing sticks together to make fire, building shelters over our heads to keep the rain out, and convincing people to be nice and stop killing each other. The grandiose theories of Xanadu were certainly ahead of their time.
>It's the same old story of gross practicality winning out over pure idealism.
>Anyway, my point, as it relates to Xanadu, and is illustrated by COM (which has its own, more down-to-earth set of ideals), is that it's the interfaces, and the ideas and protocols behind them, that are important. Not the implementation. Code is (and should be) throw-away.
>There's nothing wrong with publishing old code for educational purposes, to learn from its successes and mistakes, but don't waste your time trying to make it into something it's not.
Ted replied to the HN thread:
>The 1999 "source code" referred to above is in two parts: xu88, the design my group worked out in 1979, now called "Xanadu Green", described in my book "Literary Machines"; and a later design I repudiate, called "Udanax Gold", which the team at XOC (not under direction of Roger Gregory or myself) redesigned for four years until terminated by Autodesk. That's the one with the dual implementation in Smalltalk. They tried hard but not on my watch. Please distinguish between these two endeavors.
It was clear that the WWW was a good idea, but it clearly wouldn't succeed because of the foolish decision not to have back links. But it might gain a bit of traction and make people receptive to the real thing. I attended more than one significant academic conference in the early 90s where this belief was uttered to general agreement. People had even already experienced broken links and yet couldn't put 2 and 2 together.
I also wanted bidirectional links even though I was a Lisp programmer! In case it's not clear, one-way links were an inspired decision.
Another belief, in the latter part of the 90s, was that decent web search was basically impossible as someone would have to store a copy of the whole thing, which is clearly impossible.
Time Cube is a site by Gene Ray, the self-proclaimed "wisest human". For some reason he thinks the way we should be thinking about the day-cycle is to imagine a cube around the earth, with one edge above each of daybreak, noon, nightfall, and midnight, for "four simultaneous twenty-four hour days". (He uses the term 'corner' instead of edge.)
If you don't get why this is a fundamental breakthrough and the only way to look at the world, he will call you "educated stupid" and a dupe in an ongoing conspiracy theory featuring the usual villains of the Western world, especially academia and religion (and Wikipedia, which will have very little of him). For instance, something about the opposites-in-balance philosophy attached to the "time cube" representation is asserted to be fundamentally incompatible with monotheism. Anyway, ranting about this comprises the bulk of his site(s). That and racial-conflict armageddon if you make your way onto later pages.
So - a pretty normal crank after a certain point, but in some ways his website is sort of THE canonical crank website because of its spectacularly rant-y incoherence and awesome web design.
At my current $dayjob, there is a backend that is split into ~11 git repos which results in a single feature being split among 4-5 merge requests and it's very annoying. We're about to begin evaluating monorepos to group them all (among other projects).
What would the alternative to a monorepo be in this case, knowing that we can't bundle the repos together?
If the chip is subjected to a few thousand g's of shock the wires can bend and short.
This failure mode is quite low on the list of likely causes, but it is something that people did investigate. For example: "Swing Touch Risk Assessment of Bonding Wires in High-Density Package Under Mechanical Shock Condition"
https://asmedigitalcollection.asme.org/electronicpackaging/a...
Before 2023 I always thought that all the bugs and glitches of technology in Star Trek were totally made up and would never happen this way.
Post-LLM I am absolutely certain that they will happen exactly that way.
I am not sure what LLM integrations have to do with engineering anymore, or why it makes sense to essentially put all your company's infrastructure into external control. And that is not even scratching the surface with the lack of reproducibility at every single step of the way.
I do not know anything about this author’s situation and won’t pretend to, but I did watch a sexual misconduct accusation play out in person once. The speed at which everyone assumed the story was true and turned against the accused was basically instant.
However there were some key details about the accusation that didn’t add up. The accuser tried changing the details of the story once they realized others were noticing the problems with the claims. It also became clear that the accuser had an ulterior motive and stood to benefit from the accused being ostracized. The accuser also had developed a habit of lying and manipulation, which others slowly began to share as additional information.
This was enough to make the situation fall apart among people who knew the details. However, word spread quickly and even years later there are countless people who only remember the initial accusation. Many avoided the accused just to be safe. The strangest part was seeing how some people really didn’t care about the details of the situation, they viewed it as symbolic of something greater and believed everyone was obligated to believe the accuser in some abstract moral sense.
It remains one of the weirdest social situations I’ve seen play out. Like watching someone drop a nuclear bomb on another person’s social life and then seeing how powerless they were to defend against it. In this case it didn’t extend to jobs or career. Their close social circle stuck with them. However I can still run into people years later who think the person is a creep because they heard something about him from a friend of a friend and it stuck with them.
You can compute the distance to the moon, if you know the radius of the earth, by looking at how long lunar eclipses take, using data gathered over years of observations.
Eratosthenes computed the radius of the earth by clever trigonometry in ancient times, and Aristarchus computed that a 3.5-hour lunar eclipse indicates that the moon is ~61 earth radii away.
Once you have the distance to the moon, you can compute the size of the moon by measuring how long the moon takes to rise. It takes about two minutes, which corresponds to an angular diameter of about half a degree, so the radius of the moon is about 0.004 of the distance to the moon.
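A quick back-of-the-envelope sketch of that moonrise arithmetic (the two-minute rise time is the rough figure quoted above; variable names are mine):

```python
import math

# The sky turns a full circle in ~24 hours, so a body that takes
# ~2 minutes to rise spans 2/1440 of a turn: its angular diameter.
rise_minutes = 2.0
angular_diameter_deg = rise_minutes / (24 * 60) * 360   # 0.5 degrees

# Half of that, in radians, is the radius as a fraction of distance
# (small-angle approximation).
radius_over_distance = math.radians(angular_diameter_deg / 2)
print(f"moon radius / moon distance ~ {radius_over_distance:.4f}")   # ~ 0.0044

# Combined with the eclipse-based distance of ~60 earth radii:
print(f"moon radius ~ {radius_over_distance * 60:.2f} earth radii")  # ~ 0.26
```

The modern figure is about 0.27 earth radii, so even crude two-minute timing gets you within a few percent.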
By cosmic coincidence, the sun and the moon appear to be approximately the same size in the sky, so the ratio of radius/distance is approximately the same for the sun and the moon. If you measure phases of the moon, you'll find that half moon is not exactly half the time between the full moon and new moon. Half moon occurs not when the moon and the sun make a right angle with the earth, but when the earth and the sun make a right angle with the moon.
You can use trigonometry to measure the difference between the half-time point between new/full moon, and the actual half moon, giving you an angle θ. The distance to the sun is equal to the distance to the moon divided by sin(θ).
To get θ exactly right, you need a very precise clock, which the Greeks didn't have. The time offset turns out to be about half an hour. Aristarchus guessed 6 hours, which was off by an order of magnitude, but it showed an important point: the sun is much larger than the earth, which was the first indication that the earth revolves around the sun. (Aristarchus' peers mostly didn't believe him, not simply out of prejudice, but because the constellations don't seem to distort over the course of a year; they were, as we now know, greatly underestimating the distance to nearby stars.)
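For concreteness, here's the θ arithmetic as a sketch, using the rough half-hour offset quoted above. The answer is extremely sensitive to that offset, which is exactly why Aristarchus' 6-hour guess was so far off:

```python
import math

synodic_month_hours = 29.53 * 24     # one full new-moon-to-new-moon cycle
time_offset_hours = 0.5              # half moon vs. the halfway point (rough)

# The time offset, as a fraction of the cycle, gives the angle theta.
theta = 2 * math.pi * time_offset_hours / synodic_month_hours
sun_over_moon_distance = 1 / math.sin(theta)

print(f"theta ~ {math.degrees(theta):.2f} degrees")        # ~ 0.25
print(f"d_sun ~ {sun_over_moon_distance:.0f} x d_moon")    # ~ 226
```

The true ratio is about 389 (the offset is closer to 17 minutes), but even this rough version shows the sun is hundreds of times farther away than the moon.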
Next, you can compute the shape of the orbits of the planets, by observing which constellations the planets fall inside on which dates over the course of centuries. Kepler used this data first to show that the planetary orbits were elliptical, and to show the relative size of each orbit, but with only approximate measures of the distance to the sun (like the θ measurement above) there's not enough precision to compute exact distances between planets.
So, scientists observed the duration of the transit of Venus across the sun from near the north pole and the south pole, relied on their knowledge of the diameter of the earth, and used parallax to compute the distance to Venus, and thereby got an extremely precise measurement of the earth's distance to the sun, the "astronomical unit." It took decades to find the right dates to perform this measurement.
The cosmic distance ladder goes on, measuring the speed of light (without radar) based on our distance to the sun and the orbit of Jupiter's moon Io, using radar to measure astronomical distances based on the speed of light, measuring brightness and color of nearby stars to get their distance, measuring the expected brightness of variable stars in nearby galaxies to get their distance, which provided the data to discover redshift (Hubble's law), measuring the distance to far away galaxies (and thereby showing that the universe is expanding).
> Does having experience implementing a web browser engine feature change the way you write HTML or CSS in any way?
I think I'm more conscious of what's performant in CSS. In particular, both Flexbox and CSS Grid like to remeasure things a lot by default, but this can be disabled with a couple of tricks:
- For Flexbox, always set `flex-basis: 0` and `min-width: 0`/`min-height: 0` if you can do so without affecting the layout. This allows the algorithm to skip measuring the "intrinsic" (content-based) size.
- For CSS Grid, the analogous trick is to use `minmax(0, 1fr)` rather than just `1fr`.
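Both tricks together look something like this (a minimal sketch; the class names are made up):

```css
/* Flexbox: explicit zero basis plus a zero minimum lets the engine
   skip the intrinsic (content-based) measurement pass. */
.fast-flex > .item {
  flex: 1 1 0;   /* grow shrink basis; basis 0 instead of auto */
  min-width: 0;  /* for a row; use min-height: 0 in a column */
}

/* Grid: a bare 1fr behaves like minmax(auto, 1fr), and the auto
   minimum forces content measurement; minmax(0, 1fr) avoids it. */
.fast-grid {
  display: grid;
  grid-template-columns: repeat(3, minmax(0, 1fr));
}
```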
(I also have a proposal for a new unit that would make it easier to get this performance by default, but I haven't managed to get any traction from the standards people or mainstream browsers yet - probably I need to implement it and write it up first).
> Do you still google "css grid cheatsheet" three times a week like the rest of us?
Actually no. The process of reading the spec umpteen times because your implementation still doesn't pass the tests after the first N times really ingrains the precise meanings of the properties into your brain.
One thing that I found remarkable about Gibson is how a-technical he was at the time: "When I wrote Neuromancer, I didn't know that computers had disc drives. Until last Christmas, I'd never had a computer; I couldn't afford one. When people started talking about them, I'd go to sleep. Then I went out and bought an Apple II on sale, took it home, set it up, and it started making this horrible sound like a farting toaster every time the drive would go on. When I called the store up and asked what was making this noise, they said, "Oh, that's just the drive mechanism—there's this little thing that's spinning around in there." Here I'd been expecting some exotic crystalline thing, a cyberspace deck or something, and what I'd gotten was something with this tiny piece of a Victorian engine in it, like an old record player (and a scratchy record player at that!). That noise took away some of the mystique for me, made it less sexy for me. My ignorance had allowed me to romanticize it." (https://www.jstor.org/stable/20134176)
my understanding (which is definitely not exhaustive!) is that the case between Galileo and the church was way more nuanced than is popularly retold, and had nothing whatsoever to do with Biblical literalism like the passage in Joshua about making the sun stand still.
Paul Feyerabend has a book called Against Method in which he essentially argues that it was the Catholic Church who was following the classical "scientific method" of weighing evidence between theories, and Galileo's hypothesis was rationally judged to be inferior to the existing models. Very fun read.
I strongly agree with this sentiment, and found the blog's list of "high signal" people to be more a list of the "self-promoting" (some good people there who I've interacted with a fair bit, but that list is more 'buzz' than insight).
I also have not experienced the post's claim that "Generative AI has been the fastest moving technology I have seen in my lifetime." I can't speak for the author, but I've been in this field from "SVMs are the new hotness and neural networks are a joke!" through the entire explosion of deep learning and the insane number of DL frameworks around the 20-teens, all within a decade (remember implementing restricted Boltzmann machines and pre-training?). Similarly, I saw "don't use JS for anything other than enhancing the UX" give way to single-page webapps as the standard in the same timeframe.
Unless someone's aim is to be on that list of "High signal" people, it's far better to just keep your head down until you actually need these solutions. As an example, I left webdev work around the time of backbone.js, one of the first attempts at front end MVC for single-page apps. Then the great React/Angular wars began, and I just ignored it. A decade later I was working with a webdev team and learned React in a few days, very glad I did not stress about "keeping up" during the period of non-stop changing. Another example: just 5 years ago everyone was trying to learn how to implement LSTMs from scratch... only to have that model essentially become obsolete with the rise of transformers.
Multiple times over my career I've learned the lesson that "moving fast" is another way of saying "immature". One would find more success learning about the GLM (or, god forbid, learning to identify survival analysis problems) and all of its still-underappreciated uses for day-to-day problem solving (old does not imply obsolete) than learning the "prompt hack of the week".
> Juries, widely trusted to impartially deliver justice, are the most familiar instance.
Trusted by those that have not looked into whether this is actually the case. The first prime minister of Singapore, Lee Kuan Yew, was famously against trial by jury, because of how easily lawyers can abuse biases in multiracial societies, based on his first-hand experience [1].
A UK study found his experience is the norm, not the exception - Black and minority ethnic (BME) jurors vote guilty 73% of the time against White defendants, but only 24% of the time against BME defendants [2]. (White jurors vote 39% and 32% for convicting White and BME defendants, respectively. You read that correctly - Whites are also biased against other Whites, but to a much lesser degree)
Edit: To answer what is the alternative to juries: Not all countries use juries, in some the decision is up to the judge, and in some, like France, they use a mixed system of judges and jurors on a panel [3]. The French system would be my personal preference, with the classic jury system coming in second, despite my jury-critical post. Like democracy, it's perhaps the least bad system that we have, but we shouldn't be under any illusions about how impartial and perceptive a group of 12 people selected at random is.
I refer to my solar panels as nuclear power, just to mess with people:
I use a gravitationally-confined fusion reactor, and pull power out of it by allowing the radiation to excite unbound electron-hole pairs in a semiconductor substrate. It's dangerous; even miles away from the reactor itself I can't expose myself to the radiation for too long or I get a painful skin reaction, and that might lead to cancer someday, but hey, it's cheap and quiet and I don't pay for the nuclear fuel!
Controversial in the same way cochlear implants are.
Many deaf/Deaf parents want children who hear. And I think absent the cultural consideration, almost all would want children who hear.
But you can't ignore the cultural consideration. If you are deaf, and have a deaf child, curing that child's deafness means they will move away from you later in life. It's a kind of alienation even when the child remains bicultural, they usually end up almost entirely in the hearing world.
That said, most deaf people who have children have hearing children anyway. Hereditary deafness like that is relatively rare.
But for people from such families, and who live in a culturally deaf world -- they are not disabled. The cultural environment they live in is ... one in which deafness is not disabling. And it's going to be a very high hill to climb to convince them that they are missing something. They certainly don't feel it. This is particularly true in the United States which has such a proud tradition of deaf culture and education -- you can go all the way to doctorate level studies in ASL, work in ASL, the hearing world being a strange foreign culture you only rarely wade into -- only rarely need to.
I'd cure it for myself, and my child if I had one. No question. But I'm not culturally deaf. I feel isolated by it in the same way most hearing people anticipate deafness to be as an experience. But again -- people who live in the deaf cultural world -- they do not feel that, and they don't feel disabled because, in their context, they aren't. It's hard to communicate this to most hearing people. The usual response is dismissive, and unfortunately I think a lot of that ultimately goes back to very old metaphysical attitudes towards language and intelligence. A lot of hearing people still don't believe, deep down, that sign languages are equivalent to spoken languages, in particular. It's just gesture. You're lacking something essential to the human condition without spoken language. Etc. But for the culturally deaf, nothing is missing from their lives, except the perception of sound.
The fears from 3 Mile Island and Fukushima were almost completely irrational. The death toll from those was too low to measure.
And the fears from Chernobyl were MOSTLY irrational.
The reason for the extreme fears that are generated from even very moderate spills from nuclear plants comes in part from the association with nuclear bombs and in part from fear of the unknown.
A lot of (if not most) people shut their rational thinking off when the word "nuclear" is used, even those who SHOULD understand that a lot more people die from coal and gas plants EVERY YEAR than have died from nuclear energy throughout history.
Indeed, the safety level at Chernobyl may have been atrocious. But so was the coal industry in the USSR. Even considering just the USSR, coal alone caused a similar number of deaths (or a bit more) EVERY YEAR as Chernobyl caused in total [1].
I remember fights over whether or not navigation in frames was bad practice. Not iframes, frames. Who here remembers frames?
I remember using HTTP 204 before AJAX to send messages to the server without reloading the page.
I remember building... image maps[1]... professionally in the early 2000s. I remember spending multiple days drawing the borders of States on a map of the country in Dreamweaver so we could have a clickable map.
I remember Dreamweaver templates and people updating things wrong and losing their changes on a template update and no way to get it back because no one used version control.
I remember <input type=image> and handling where you clicked on an image in the backend.
I remember streaming updates to pages via motion jpeg. Still works in Chrome, less reliably in Firefox.
I remember the multiple steps we took towards a proper IE PNG fix just to get alpha blending... before we got the ActiveX one that worked somewhat reliably... Just for tastes to change and everything to become flat and us to not really need it anymore.
I remember building site navigations in Java, Flash, and Silverlight.
I remember spacer gifs and conditional comments and what a godsend Firebug was.
I don't know when I got old, it just happened one day.
>What’s an example of the kind of advice that doesn’t work?
For some people struggling with chronic lifelong procrastination, the oft-repeated advice from the author such as "Action leads to motivation, not the other way around." ... and similar variants such as, "Screw motivation, what you need is discipline!" ... and other related big picture ideas such as Dilbert cartoonist Scott Adams' "Systems instead of Goals"
-- all do not work.
And adding extra rhetorical embellishments to the advice such as using the phrase "it's simple [...]", and using the word "[...] just [...]" as in:
- "Stopping procrastination isn't that hard to solve. It's simple. Just chop up the task into much smaller subtasks and just start on that tiny subtask. That will give you momentum to finish it."
... also doesn't work. Some procrastinators just procrastinate starting that tiny subtask! And the few who actually do try that first step quickly lose steam because of boredom/distraction/whatever, and the overall task remains unfinished.
A lot of books and blogs about time management repeat the same advice that many procrastinators have all heard before and it doesn't work. The procrastinators understand the logic of the advice but it doesn't matter because there are psychological roadblocks that prevent them from following it.
EDIT reply to: >That doesn't mean the advice is bad,
I'm not saying the advice is wrong. Instead, I'm saying that some well-meaning people who give that repeated advice seem surprised that it doesn't work on some people. Because the advice givers believed "Action Precedes Motivation" worked on themselves, they automatically assume that imparting those same words to other procrastinators will also work. It often doesn't. The meta-analysis of that advice and why it sometimes doesn't work is not done because the people giving that advice are the ones who used that technique successfully. This creates a self-confirmation bias.
There hardly is such thing as "discounts" in airline pricing. I mean, formally, there is, there's a lot of them, but, well, it's complicated…
In all honesty, this whole thread is people complaining about something they don't have a clue about. Airline pricing is insanely complicated, and for a reason. Airlines are not a luxury business; they barely manage to survive. Without all this dynamic pricing, special contracts with agencies, etc., they'd have to charge so much for a seat that you wouldn't pay, and all this travel industry you are accustomed to simply wouldn't exist. The whole business is built on making somebody who crucially needs to fly pay as much as he can, and then making the price attractive enough for the rest of us so that the remaining tickets sell and the flight can make any profit. And in the end, margins are super thin in this business.
Also, your question implies that you imagine that there is some simple enough "true price for a seat", which is so far from the truth, you have no idea. If you actually look at the price breakdown for a given ticket, there are literally dozens of components in it. It's not unusual that so called "fare" of a ticket (which is, like, "just price") may be literally $1, and the rest of $300 is various taxes, surcharges and payoffs I won't even try to start to explain here.
I mean, really, people here truly have no idea what they are complaining about. Airline pricing is not a thing you should hate.
Speaking as someone who has never used it but has spent some time researching it: the Bloomberg Terminal constantly undergoes UI changes, though not in a dramatic way. It's obvious if you look at screenshots over the years (it even had some gradients!). It has had its own "rewrites in Svelte", transitioning from a custom renderer to HTML/JavaScript.
But you're correct - they don't mess with it, they slightly and mostly invisibly improve it, and someone who learned it in 80s could use it without problems today.
I certainly spent most (95%+?) of my "fighting the borrow checker" time writing code I would never try to write in C++. A simple example is strings: I'd spend a lot of time trying to get a &str to work instead of a String::clone, where in equivalent C++ code I'd never use std::string_view over std::string - not because it would be a memory error to do so in my code as it stood, but because it'd be nearly impossible to keep it memory safe with code reviews and C++'s limited static analysis tooling.
This was made all the worse by the fact that I frequently, eventually, succeeded in "winning". I would write unnecessary, unprofiled micro-optimizations that I was confident were safe (and would remain safe) in Rust, but that I'd never dare try to maintain in C++.
Eventually I mellowed out and started .clone()ing when I would deep copy in C++. Thus ended my fight with the borrow checker.
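A minimal Rust sketch of that trade-off (the type names and example string are mine, purely illustrative): holding a `&str` ties a struct to the source's lifetime, which is where much of the "fighting" happens, while cloning into an owned `String` (the analogue of deep-copying a `std::string` in C++) ends the fight at the cost of one allocation:

```rust
// Borrowed version: every user of this type must now thread the 'a lifetime.
struct BorrowedLabel<'a> {
    name: &'a str,
}

// Owned version: no lifetime parameter; freely movable and storable.
struct OwnedLabel {
    name: String,
}

fn main() {
    let source = String::from("sensor-42");

    // Borrowing is free, but only valid while `source` is alive:
    let b = BorrowedLabel { name: &source };
    assert_eq!(b.name, "sensor-42");

    // Cloning deep-copies, so the original can go away early:
    let o = OwnedLabel { name: source.clone() };
    drop(source); // `o` is unaffected; `b` could not be used past this point
    assert_eq!(o.name, "sensor-42");
}
```

The borrow checker will happily let you keep `BorrowedLabel` everywhere, but as the commenter describes, propagating those lifetimes through an entire codebase is often effort better spent elsewhere than one avoided `clone()`.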
Hold on... You think Facebook took over from Friendster because of scaling problems?!
MySpace was the one that took the lead over Friendster, and it withered after News Corp acquired it for $500 million, because that was the liquidity event. That's when Facebook gained ground. Your timeline is wrong.
The switch to MySpace happened because of themes and other features users found more appealing. Twitter had similar crashes (the "fail whale") for a long time and survived them fine. The teen exodus from Friendster wasn't driven by time-to-last-byte (TTLB) waterfall graphs.
Also, MySpace did everything on cheap Microsoft IIS 6 servers in ASP.NET 2.0 after switching from ColdFusion (written in Macromedia HomeSite); they weren't geniuses. It was a knockoff created by amateurs with a couple of new twists. (A modern clone has 2.5 million users, still mostly teenagers: see https://spacehey.com/browse)
Besides, when the final Friendster holdout, the Asian market, went into exponential decline in 2008, the scaling problems of five years earlier had long been fixed. Faster load times did not make up for a product consumers no longer found compelling.
Also, Facebook initially ran literally out of Mark's dorm room. In 2007, after they had already won the war, their code got leaked because their deploy process shipped the .svn directory along with everything else. Their code was widely mocked. So there we are again.
I don't care if you can find someone who agrees with you on the Friendster scaling thing. Almost every collapsed startup has someone who says "we were just too successful and couldn't keep up", because believing you were simply too awesome is gentler on the ego than realizing that a bunch of scrappy hackers gave people more of what they wanted, and that either you didn't notice or you mistook your failure to adapt for a virtue.
I totally hear you on that. I work at a FAANG company, on a service that has to be capable of sending 1.6M text messages in less than 10 minutes.
The amount of complexity the architecture has because of those constraints is insane.
At my previous job, management kept asking for designs at that scale for less than 1/1000th of the throughput, and I was constantly pushing back. There are real costs to building for more scale than you need. It's not as simple as just tweaking a few things.
To me there are a couple of big breakpoints in scale:
* When you can run on a single server
* When you need to run on a single server, but with HA redundancies
* When you have to scale beyond a single server
* When you have to adapt your design to the limits of a distributed system, e.g. designing around DynamoDB's partition limits.
Each step in that chain adds irrevocable complexity, adds operational overhead (OE), and adds to both the cost to run and the cost to build. Be sure you have to take those steps before you decide to.
“Any community that gets its laughs by pretending to be idiots will eventually be flooded by actual idiots who mistakenly believe that they're in good company.” (https://news.ycombinator.com/item?id=1012082)
Having been an early user of 4chan back in late 2003, from what I saw, the tone shift, from nazi stuff being mere edgy belligerence for shock value to people saying “no really, this is my serious political ideology”, really got traction somewhere around 2012-2014. Prior to that, there was still pushback. Much like how the idea of having a “waifu” was originally a derisive joke but somehow turned into actual practice.
This coincided with a few things: a massive increase in internet use, raids having increased 4chan's notoriety, but most importantly the timeframe when Russia prepared for and invaded Crimea, which included ramping up “active measures” to shift conversation with troll farms and propaganda. 4chan (and later 8chan, which had a huge population of boomers buying into “qanon” nonsense) became an ideal host for amplifying disruptive propaganda.
I didn’t quite notice what was going on at the time, since I was mostly only on select boards like /tg/ or alt-chans like operatorchan, but the Russian influence became obvious in the lead-up to the 2016 election.
And it wasn’t just nazi stuff; it was all sorts of bullshit, like Marxism, esoteric magic, and conspiracy shit, thrown around to see what stuck. And they were active across every social media site. As bad as 4chan was, I’d argue the Russian presence on Facebook and Twitter was especially harmful and far-reaching. They didn’t even need to be subtle or target groups: https://qz.com/1284222/russian-facebook-ads-were-barely-targ...
I should have written it up. I could still, but I’d need to order one of those thermostats and do the teardown again.
At the time I was just focused on why my buddy was having to replace 200 thermostats in ten years. It turned out you could just cut out the battery with a pair of dikes and jumper the leftover positive post to 3V to get a basically unlimited lifespan, so he did that instead of replacing the thermostats with new ones. AFAIK most are still working fine.
Helping him out is also how I figured out the defrost-timer disintegrating-gears thing. When you do things at scale, problems that seem random in onesies practically scream at you. It made me appreciate the value of gathering data… you can find the patterns at lower scales.
The car air-vent thing I figured out on my '94 Toyota 4Runner. The crossbars that keep the vent vanes in alignment with each other all failed over a two-year period. A minor annoyance, but it makes you feel like you need a new car, lol. I popped them out and found that, of all the plastic parts, only the crossbars were exceptionally brittle. Suspiciously, the bars also had what looked like date codes molded into them; none of the other pieces had numbers at all. I just printed new crossbars.
What I appreciate about Georgism is that it can neatly avoid the difficult issue of justifying exclusive use of land without resorting to ideas like 'First Occupation' or vindicating violent acquisition.
Essentially, you could say that all land starts out as - and remains - common property (because nobody created it) but a land value tax allows an individual to 'purchase' the exclusive rights to a natural resource by compensating the owners (i.e. everyone else) for their exclusion.
In George's terminology, 'land' effectively includes all natural resources. This suggests an analogy most people might not consider when thinking about private ownership of land: the radio spectrum. We generally do not allow private ownership of a portion of the radio spectrum; instead we ask for payment for exclusive use of the resource for a determined amount of time. Allowing ownership in perpetuity based on a one-off payment seems simply wrong in this case. Land, water, etc. are similar.