I mean, people are still rotating <month><year> passwords because they refuse to remember anything. I only know this because I am in a customer-facing position, and these customers rarely care about revealing their passwords when they need help...
Because recognizing the author as conflicted and an unreliable narrator changes how you should weight and consider the information they are providing. It doesn't necessarily mean anything is untrue - but it does add extra, valuable information to how much you trust it.
If someone tells me something, I'm most likely to believe it without further investigation. But not always.
Many of the juicy stories from the book have no supporting evidence other than the claims of the author. Their credibility is all we have to go on here. If someone wrote a message here saying that they were a fly on the wall at the publisher’s office where they had a workshop inventing these stories to sell more books, you’d be right to question their motives.
Even the justice system considers the trustworthiness of a witness, evaluating incentives and conflicts of interest.
Having worked in another FAANG, I realize a large number of criticisms do come from imagination, since I could see the contrast first-hand. Nobody can tell exactly what the consequences of every action will be; most of the time it's just a bunch of folks trying to figure out what to do, experimenting, iterating. Have you tried executing a conspiracy, like a surprise party? Good luck keeping a secret with more than 5 people.
There's also the problem of perspective. To a less technical engineer who doesn't know what they don't know, having their deliverable rejected time and again could feel like a conspiracy against them. If you read a blog post from them you'd think the culture is very toxic, when everyone is actually doing their best juggling being considerate with keeping the quality high.
As with others commenting on this, I've no idea how true the book is; in fact I have never read it. OTOH, even without the book, research saying social media is making teenagers depressed looks convincing to me, and, although it's a losing battle, privacy matters a lot to me, so I personally stopped using social media many years ago.
None of these give me full confidence to trust or distrust the narrator about things you can't observe externally. It's all percentages.
I think the point is that up until she was fired, she was Meta. She wasn’t a random employee, she was their global public policy director. She wasn’t just implementing policy, she was responsible for creating it.
The question remains whether or not she would have written this book had she not been fired.
It’s not like she quit due to her ethical objections
The question does indeed remain, but is it a question whose answer matters?
If someone exposes a shady organization why should I care if they did it for ethical reasons or for something less noble like revenge for getting kicked out of that organization?
I think it does? "scummy person loses job, finds another way to cash in" almost seems to be becoming a trope? I think it raises questions about what is left _out_ of the book, not just what's in it - are the issues raised the worst/most important, or just the ones that will sell the most books? Did we really need someone to 'tell us' meta/social media can be evil?
There are reasons that (some) criminals are not allowed to profit from books/movies about their crimes.
Anyway, that's just my general feeling about this sort of book - I've never heard of the book or the author, and I honestly have no interest in reading it. Based on what I'm reading here, that would basically be rewarding/enriching one of the 'bad actors'?
Yes. 100%. And the fact that you're not seeing why it does is confounding to me.
This person has shown that they are willing to harm society (for their own benefit, presumably); by active choice. And, as such, anything they say needs to be viewed through the lens of "is this person lying for their own benefit".
1. Their previous actions do mean that we should not trust what they are saying outright, we should do (more) work verifying the information they provide.
2. Their previous actions do _not_ mean we should avoid holding others accountable when the information provided turns out to be true.
You're asking your question like someone is arguing that this person's information doesn't matter (2); but the point being made is that we should (1).
Knowing something is happening and reading detailed descriptions of it actually occurring are different, IMO. I learned things I didn't know while reading it, at least.
I believe what Sarah Wynn-Williams wrote in Careless People.
I also think she's shown herself to be a person I'd want to stay away from.
The reason this matters to me is because the more media attention Ms. Wynn-Williams gets, the more her ideas of what we should do about Meta will spread and be given credence. The more she will be given credence outside of simply reporting what she saw. I can both believe what she says and think it's best to stop fanning the flames and giving her personal attention.
This entire saga reads to me as intra-elite fighting: Ms. Wynn-Williams is representing the cultural/educational elite, and obviously the Meta execs are the tech elite. As an ordinary person, I'm not under any delusion that either side has my best interest in mind when they fight, or when they advance policy, regulatory, or other suggestions. The derision and disdain Ms. Wynn-Williams has for people not in her milieu throw up a lot of red flags for me.
It comes down to believing that Ms. Wynn-Williams wants to hurt Meta, not to help us.
I also believe that blindly supporting people or organizations just because they also hate people or organizations you hate is a very bad idea. The enemy of your enemy can still be your enemy. In this case, regarding technological politics, Zuck and co. want us to become braindead addicted zombies, and Ms. Wynn-Williams will want us to have no control or access at all, because we can't handle it and it's for our own good. She's from the cultural group pushing for things like age restriction and verification, devices you can't root/restricting what you can install on your own device, etc. Both are bad. One sees us as cattle and the other sees us as toddlers.
I have never consciously wrapped Axios or fetch, but a cursory search suggests that there was a time when it was impossible for either to force TLS1.3. It's easy to imagine alternate implementations exist for frivolous reasons, but sometimes there are hard security or performance requirements that force you into them.
AI was trained on Axios wrappers, so it's just going to be wrappers all the way down. Look inside any company "API Client" and it's just a branded wrapper around Axios.
I really liked those books, for all the creative ideas... it's fine that they don't all work, but the Dark Forest has to be among the worst of them. It was unfortunate it was highlighted.
Some rebuttals, going point by point...
1. you can know the intentions of other entities by observing and communicating with them.
2. technology explosions, like pretty much all exponential phenomena, are self-limiting. They necessarily consume the medium that makes them possible.
3. and 4. civilizations aren't necessarily sentient (ours certainly isn't) and don't have agency, much less goals. Individuals have goals, and some may work for the survival of the civilization they belong to. But others may decide they can profit if they work with the aliens.
4. Multiple civilizations may well come into competition over resources, but that's more of an argument about why the forest would not be dark.
Practically speaking, a civilization that opts to focus on massive, vastly expensive efforts to find and exterminate far-flung civilizations because they may become rivals in the future may be easily outcompeted by civilizations that learn to communicate and work with the other civilizations they encounter.
1. You are assuming a lot, in the sense that you assume the presence of intention -- not something guaranteed to be a feature of an alien civilization, which is, well, alien. People think that anthropocentrism only applies to body shape and having legs, because the way it tends to express itself in popular culture is robots on legs and human body shapes in aliens.
And the same point goes for communication; just assuming you could is a big leap.
2. Bold assumption that they are self-limiting. I think the real question is what, exactly, tends to limit them. I think the answer tends to be resources, which is the foundation of the dark forest theory to begin with.
What I am saying is that it is not a rebuttal you think it is.
3. :D yes
4. You may again be imposing a human perspective on a scale that goes a little bit beyond it.
I will end on a semi-optimistic note. I am not sure the dark forest theory is valid. We are speculating mostly based on human tendencies. By the same token, I posit that we are about as likely to be turned into an art exhibit by a passing alien artist, not unlike some ants that had molten metal poured into their nests [1].
You can observe patterns of behavior, develop theories to understand them, attempt/experiment with interactions, and refine based on the results. That's communication (and it doesn't assume anything about the other alien civilization).
Now, civilizations may be more or less willing to do this and more or less successful at it, but that's not the same as saying no one will dare try, as the dark forest theory requires.
(Personally, I think civilizations that are better at this will outcompete ones that are worse or refuse, though that's just my own opinion.)
> Bold assumption that they are self limiting.
Name the exponential phenomena that aren't self-limiting -- that don't consume the medium which allows them to exist in the first place.
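A toy illustration of what I mean (the model and numbers are my own, not anything from the books): logistic growth looks exponential early on, but the growth rate collapses as the medium (carrying capacity K) gets consumed.

```javascript
// Toy sketch: growth that consumes its own medium.
// Pure exponential would be dN = r*N; resource-limited (logistic) growth is
// dN = r*N*(1 - N/K), where K is the capacity of the medium being consumed.
function grow(steps, r, K) {
  let n = 1;
  for (let i = 0; i < steps; i++) {
    n += r * n * (1 - n / K); // rate goes to zero as n approaches K
  }
  return n;
}

// Early on this is indistinguishable from exponential growth;
// eventually it saturates just below K and stays there.
console.log(grow(1000, 0.1, 1e6));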
> I think the answer tends to be resources, which is the foundation of dark forest argument theory to begin with.
Well, yes. One of the reasons the dark forest theory isn't coherent.
> Any real alien reasons would be alien to us.
Yes, but this doesn't back up the dark forest theory. It also doesn't mean aliens cannot be understood at any level or interacted with in any way.
(The dark forest theory makes very strong claims on the logic, intentions, strategies, resource use/governance of alien civilizations, BTW, and wants this to be uniform amongst them... even though the one civilization we actually know of doesn't adhere to them.)
Cleansing is basically free for advanced civilizations in the books. The alien (Singer) who wipes out Sol in the 3rd book doesn't even have to answer any questions from their manager about doing it, that's how cheap it is. While it's true that individuals desire cooperation, I think you can assume that civilizations will keep a lid on people who would completely destroy them (or, failing that, be destroyed). It seems like expansion of civilizations is not really an option. The Singer's civilization has only 1 colony world and they're already in some kind of extremely destructive war with it. Presumably the idea is that once your own people expand multiple light years away, all the logic about aliens applies to them too. On the other hand, if you can't expand, why not run scorched earth on the galaxy?
There definitely is some weirdness about observation and communication: Singer's civilization can wipe out Sol with a flick of the wrist, but while they can observe the number and type of Earth's planets, that seems to be their limit. The sophon enables FTL communication and observation between Earth and Trisolaris, but the more advanced civilizations don't seem to make use of them? You could be absolutely certain of someone's threat level and intentions with one. Maybe something about the technology can be traced back to its origin system, so they are too risky to use.
I think it's all reasonable in the books, especially as a self-reinforcing state. It does definitely require a highly specific set of universal laws / technological constraints though. If the FTL drive didn't also broadcast your position to the whole universe for eg, it would crack everything wide open.
> unless you believe in magic, it's only a matter of time until we reach the point at which machine intelligence is indistinguishable from human intelligence.
I'm sure it will be possible, but it may well be very expensive. If it is, why would anyone spend the resources?
AI evolution will certainly follow the money, which is not necessarily the same as the path to AGI.
> or the boilerplate, libraries, build-tools, and refactoring
If your dev group is spending 90% of their time on these... well, you'd probably be right to fire someone. Not most of the developers but whoever put in place a system where so much time is spent on overhead/retrograde activities.
Something that's getting lost in the new, low cost of generating code is that code is a burden, not an asset. There's an ongoing maintenance and complexity cost. LLMs lower the maintenance cost, but if you're generating 10x the code you aren't getting ahead. Meanwhile, the cost of unmanaged complexity goes up exponentially. LLMs or no, you hit a wall if you don't manage it well.
Even the most enthusiastic AI boosters I've read don't seem to agree with this. From what I can tell, LLMs are still mostly useful for building greenfield projects, and they are weak at maintaining brownfield projects.
From my experience, greenfield/brownfield is not the best dichotomy here. I have observed the same tooling generating meaningless slop on a greenfield project and a 10kLoC change (leading to an outage) on an existing project in one pair of hands, while building a fairly complex new project and fixing a long-standing (years-old) bug with a two-line patch in another.
And I have more examples where I, personally, was on both sides of the fence: the difference was my level of understanding of the problem, not the tooling.
> Not most of the developers but whoever put in place a system where so much time is spent on overhead/retrograde activities.
Dude that's everybody in charge. You're young, you build a system, you mold it to shifting customer needs, you strike gold, you assume greater responsibility.
You hire lots of people. A few years go by. Why can't these people get shit done? When I was in their shoes, I was flying, I did what was needed.
Maybe we hired second rate talent. Maybe they're slacking. Maybe we should lay them off, or squeeze them harder, or pray AI will magically improve the outcome.
> its going to take Microsoft a long time to row back
They won't actually move back to a user-focused OS at all. It's nice for them to declare they will, but their culture and business pressures will prevent any kind of sustained effort. (Their users aren't their customers.)
That’s corporate-speak. They say improve, but it’s perfectly well understood internally to mean drive costs down.
There’s no problem with doing that at the expense of the customer as long as you can get away with it. (Seems like here they were going for a boiling-the-frog approach but moved too quickly.)