> Wehner also said Facebook still expects expenses to grow 50% to 60% from last year
> “But as I’ve said on past calls, we’re investing so much in security that it will significantly impact our profitability,” Zuckerberg said. “We’re starting to see that this quarter.”
That sounds like a big 1-year jump. From what I can tell, the big Facebook scandals (fake news, Cambridge Analytica) came from faults in company policies rather than security glitches.
I wonder if labeling it "security" is a PR thing as their web ads all focus on FB taking active steps to make sure those types of scandals don't happen again.
I was directly involved in one of those 'incidents' several years back where FB gave our app a special API that others did not have. This was because it was easier for us to build the 'FB experience' for their users than it was for FB to do it themselves. It was clean, legit, above board and secure.
It's astonishing how different the 'media narrative' is from reality, and it confirms my belief that the press runs on such narratives (i.e. building up, crashing down) because in both directions the truth is inflated for dramatic, i.e. click-bait reasons.
Our large company built a very good FB app that effectively was 'FB' on our platform. It was FB branded - for users, it was effectively the 'real' (and only) FB. Obviously that app had to have special APIs.
Everyone involved from top to bottom was pro. We didn't store data, nor did we want or need to. The way the tech was set up (data goes to the app), we didn't really have the option. Users logged into their own accounts and retrieved their data; it's not like we could just access data arbitrarily.
Everything was pro and above board - and nobody in the equation - a lot of us regular, conscientious people - thought for a second that anything was wrong or irregular in any context.
In fact - the whole situation could be described as: "FB hired 3rd parties to develop some code", which surely they do in some circumstances.
Nobody was harmed in any way, and there really wasn't risk of anyone being harmed.
I understand that with 2018 hindsight, we might look at things a little differently, but in reality, I think we'd have still done it. Perhaps there would have been more checks and assurances (i.e. FB takes ownership of code and actually publishes the app), but in reality it was (and would still be) fine.
As far as the Cambridge story - this is also misleading, because the APIs that were used there were available to the entire world and everyone knew exactly what they were. Were there tech people screaming foul? The press? Not really, they seemed reasonable, until it seemed that some bad actors were getting a little unscrupulous, and so FB did the right thing and altered the APIs to make them more secure. Security policies change all the time; in this case they tightened up given some field data. That's it.
It's really a story about Cambridge's scammy behaviour, and possibly its lies to FB about where that data was, not about FB.
I don't like Facebook, I don't use it, I don't like being 'productized' etc. etc. - but I don't feel that the information in these scenarios has been properly handled by the media.
Because there are legitimate issues with privacy in the new world order of 2018 that are finally coming to bear, and we definitely want to re-evaluate our situation with FB, we basically go and dig up 'something that happened 10 years ago in which nobody was harmed' to build a 'kind of misleading narrative' around the 'legitimate issue'.
>Were there tech people screaming foul? Not really
Yes. Really.
>Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach.
>Parakilas, whose job was to investigate data breaches by developers similar to the one later suspected of Global Science Research, which harvested tens of millions of Facebook profiles and provided the data to Cambridge Analytica, said the slew of recent disclosures had left him disappointed with his superiors for not heeding his warnings.
>“It has been painful watching,” he said, “because I know that they could have prevented it.”
>Parakilas said he “always assumed there was something of a black market” for Facebook data that had been passed to external developers. However, he said that when he told other executives the company should proactively “audit developers directly and see what’s going on with the data” he was discouraged from the approach.
>He said one Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?”
FWIW - Every time I have had personal knowledge of an IT issue that made the news, it made me lose faith in news. I'm not talking about the minor, regular and downright stereotypical "journalists don't understand technology" problem, but major spins of context, impact, background and history. I'm currently on a system that's been making front pages in my country, and while the factual details of impact are largely accurate, the causes, the background, and what can and should be done about it are wildly inaccurate (and not in the "reasonable people can disagree / many technical solutions have merit" sense; in the "what are you smoking / why does anybody think or believe that" sense :P )
> the Gell-Mann amnesia effect defines the idea that "I believe everything the media tells me except for anything for which I have direct personal knowledge, which they always get wrong."
Everyone I've met that's been interviewed about a controversial topic has been critical of the press. I haven't been sure if it's because they think their view is the only correct one or the media are truly that sloppy.
My intuition says the latter because time is money and journalists are rarely experts in the fields they cover.
I was once something of a journalist and noticed something interesting. When I interviewed someone about something controversial, they were significantly more likely to become defensive when I checked facts and quotes with them.
Sometimes the defensiveness arises because of our offensive media atmosphere... the feeling is that some of the journalist's quotes/"facts" may be mis-attributed, taken out of context, or straight-up lies hoping to get a response that can be quoted out of context.
Even supportive facts/quotes can cause concern. Subject matter experts have a reputation on the line, while the journalist is often just looking for a story.
It's interesting because the act of constantly checking facts and quotes actually helps prevent words from being taken out of context. Good interviews can get away from both parties, and solid questioning and verification ensure that the listener applies the correct context to the words.
I'd guess this happens because a journalist starting to fact check is often a giveaway that they are about to write something you'd want to challenge later.
Nobody ever called me, my boss or my friends to verify this summer has been extremely hot or what my favourite food is.
Also: just getting the words correct doesn't help if you cut the part where I explained what happened.
I suspect you don't have a strong understanding of how journalism works. A journalist fact-checking a story is more often a giveaway they are a conscientious journalist working for a legitimate organization. Getting the story right is a sacred thing for real journalists who know that not doing so opens them up to being called out in the public sphere for spreading untruth. Extensive fact-checking is not only critical to maintaining a reputation, it probably provides some kind of legal cover.
The problem is both good journalists and bad journalists call to do fact checks - good journalists to verify they got the facts correct, bad journalists to provide legal cover before they quote you out of context to build the case they want to build anyway.
So, when someone calls you and wants to verify what you said, I'd recommend being very sure that you get it exactly right, and so clear that they cannot misunderstand you.
Defending yourself against a journalist might easily become a case of "have you stopped beating your wife? How hard can this be, yes or no?"
If you want to guarantee positive coverage, pay for it. Otherwise, stay away. If you go into an interview with such a defensive attitude, it's likely not going to end well for you.
When people get defensive with me for being conscientious, my instinct is always to dig deeper. I'm sure that holds true for many others.
But then what do you do when some media outlets have a history of attacking you?
You might get them to publish a retraction (since it wasn't true) but hasn't the damage already been done?
What about when those same papers write stuff that is technically true but leaves out why it was done? "Founder sends millions to foreign bank account." "Founder declines to comment."
I'm sorry, but you really don't understand how working with the media works. You seem stuck on this idea that earned media has to be positive. It doesn't have to be, and it won't always be. If you expect it all to be positive, you will be disappointed and burn many bridges with journalists.
When you get negative coverage, accept it as the other side of the earned media coin. It is more effective than advertising because it has the potential to be negative. If there was never negative coverage, earned media wouldn't be worth pursuing. And, if you dig into the negative coverage, you'll often learn a lot from it. So, when you get negative coverage, email the journalist, thank them for writing about you, reaffirm that you're always available to talk, and then ignore it.
If an outlet has a history of attacking you, consider why they are attacking you and whether the attacks have merit. If it's because you've been difficult in the past, you need to find a good PR person and get some media training in a hurry. You specifically want a PR person who entered the field through journalism, not marketing. Your PR person will be able to work through the initial storm. If you caused the mess, you likely need to be hands off during this stage. And then, you need media training so that you don't offend a media outlet again.
It sounds like you have a rough time with media. Do you use the word 'exclusive'? If so, are you sure that you know what it means? Aside from lying, or being evasive, the best way to offend an outlet is to misuse the word.
Retractions don't happen very often, and unless you have very serious evidence, they aren't even worth going for. In the absence of serious evidence, a crafty editor will use your quest for a retraction as an excuse to keep writing about you. The goal is always to get them to stop writing, not give them reason to write more.
And frankly, publications will always write things that are technically true without adding any context. Never assume that this is because of malice because truth is, they likely don't care enough to be malicious. The sooner you get over this idea of a big, bad malicious media, the sooner you will learn how to work with them.
> I'm sorry, but you really don't understand how working with the media works. You seem stuck on this idea that earned media has to be positive.
That is quite a misunderstanding of my position. What do you take me for?
They don't need to be all positive at all. My point is just that they shouldn't be actively lying or misleading.
> If an outlet has a history of attacking you, consider why they are attacking you and if the attacks have merit.
Done. (And it is not about me, and I'm in a position where it would be useful for me to know if there was something.)
> It sounds like you have a rough time with media.
Personally? Not at all.
> And frankly, publications will always write things that are technically true without adding any context. Never assume that this is because of malice because truth is, they likely don't care enough to be malicious. The sooner you get over this idea of a big, bad malicious media, the sooner you will learn how to work with them.
My question is rather: how long should we accept this (i.e. baseless smear campaigns against companies and/or individuals)?
Mostly media does a great job and I respect and actively (i.e. by donating or keeping subscriptions I don't need) support great journalism even if I don't agree with everything they write.
But sometimes some journalists are really destructive. It is those cases I'm talking about. If the pen is mightier than the sword, then at some point, shouldn't the abuse of a weaponized pen be punishable? That bar should be high, yes, but at some point (inciting hatred against nations or ethnicities using made up allegations should be a good example) I think most people would agree that society should have some way to correct it. The question is just exactly where that bar should be.
And again no, this isn't about me. Personally I've never had problems with media going after me; this just happens to have bothered me over the years as I've seen certain journalists go after other people for what turns out to be no good reason. I'm luckily not aware of many cases, but what I've seen has made me hesitant to talk to media (and my Gell-Mann amnesia isn't as strong as it used to be, either).
They should as it's a great tool to both guarantee accuracy, and ensure you're being fair. But, running an interview, getting people off their script, and still constantly checking and validating are all tough. I could see people forgetting it or just not being comfortable. I've also been interviewed by journalists so skilled I didn't realize they were doing it until later.
Be sure to investigate the journalist's background, and the article's audience. Articles written for laymen by a general-purpose news source often just pawn off IT stories on their most tech-literate writer. Don't put your trust in filler.
News is still good. The HN front page surfaces good tech news, while reputable newspapers are still reputable in their area of expertise. Don't confuse the Gell-Mann [1] effect with Dunning-Kruger [2].
You're basically saying, "nobody cared about their material security weakness until someone did something bad with it"
How is that not true of every single security/privacy incident, ever? If the situation was, "everyone knew 23andMe was storing non-deidentified genomic data on an unsecured S3 bucket, and this was fine until [insert villain] took it" it would be the same, and people would be justifiably pissed off. This feels like a completely correctly handled story in the media, your personal anecdote about working with Facebook notwithstanding.
You can’t recast HN’s comment system as a material security weakness just because you learn that people who disagree with you can also read your public comments.
> You can’t recast HN’s comment system as a material security weakness just because you learn that people who disagree with you can also read your public comments
This is a false equivalence. Hacker News comments are public. (HN also doesn't sell ads.) Facebook messages and private content usually aren't.
Just because "the API's that were used there were available to the entire world" doesn't mean "everyone knew exactly what they were." Everyone didn't know what they were capable of. A very small section of the technologically literate did, and most of them were profiting off the status quo.
>Facebook messages and private content usually aren't.
Sure, that's not the same as public access, but it's equally blown out of proportion.
Users must be empowered to delegate access to the software of their choosing. Facebook offering read_mailbox as an option on its OAuth consent screen is exactly as evil as Fastmail offering IMAP.
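For the curious, that delegation happened through a bog-standard OAuth consent flow: the app sent the user to Facebook, the requested scope was spelled out on the consent screen, and no token was issued without the user's approval. A minimal sketch of the authorization request (endpoint and parameter names are from that era's Graph API as I remember it; exact details varied by version, and the app ID and redirect URI below are hypothetical placeholders):

```python
# Sketch of the OAuth authorization request an app used to ask for
# mailbox access. The read_mailbox scope appeared by name on the
# consent screen, and the app got no token unless the user approved.
from urllib.parse import urlencode

APP_ID = "YOUR_APP_ID"                   # hypothetical placeholder
REDIRECT_URI = "https://example.com/cb"  # hypothetical placeholder

params = {
    "client_id": APP_ID,
    "redirect_uri": REDIRECT_URI,
    "response_type": "code",
    "scope": "read_mailbox",  # shown to, and approved by, the user
}
print("https://www.facebook.com/dialog/oauth?" + urlencode(params))
```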
Your description of writing an app for your platform sounds fine...but in that same era they were throwing Instant Personalization out there. Which was profoundly fucked up; I was involved with the deployment of it at a large site, hated it the entire time, and the creepy behavior of it was a fairly big reason why I left that company.
> Were there tech people screaming foul? The press? Not really, they seemed reasonable, until it seemed that some bad actors were getting a little unscrupulous, and so FB did the right thing and altered the APIs to make them more secure. Security policies change all the time; in this case they tightened up given some field data. That's it.
Others have quoted accounts of screaming foul from within Facebook. I was involved with RTB ad exchanges at the time, and everyone knew that the data was available to whoever wanted it, with Facebook's tacit approval.
Gabriel Weinberg (yegg), founder of DuckDuckGo, had a story on his blog about how he targeted his own wife with a Facebook ad, and about how easy it was to profile/get everyone's FB data, and he wasn't the only one.
Your experience may have been different; it looks to me like you'd rather look at the world through rose-tinted glasses and give FB an undeserved benefit of the doubt - but I'll accept that this was your experience. However, I have about 20:1 anecdata about "improper FB" versus "proper FB", so I tend to believe the former.
If there was one story of patents done right, does that indicate there are no patent trolls and it's all blown out of proportion?
I travel a bit. Facebook, the company, has a disproportionately vocal base of support in Silicon Valley. (This is also where the principal economic beneficiaries of Facebook's status quo live.)
The famous Upton Sinclair quote comes to mind:
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
I get your point, but most of your post could be reduced to "we didn't do it, so it doesn't happen".
Facebook giving out privileged APIs willy-nilly, without verifying who the people operating them really are, is the issue. If a bank gave the keys to their vault to some shady people, you'd blame the bank.
That really makes it worse. If there were some set of cut-and-dried rules (even one that changed occasionally, because of course Facebook is always changing the rules), outsiders could evaluate those rules and have some idea what's going on. If every data release involves armies of NDA-bound lawyers, there's no way to know anything about Facebook data releases in general. It's completely possible that the Cantabrigians negotiated everything they did beforehand, due to some combination of Facebook lawyer variability and the CA lawyers being more aggressive than your firm's lawyers.
Except to say that there were "instances of Facebook releasing data" makes it sound like you misunderstand the technical details of what happened - and if so, why would that be? Because despite the media writing half a forest's worth on the subject, actual information that would help us understand what took place was exceedingly rare.
I didn't use that phrase. Anyway, don't just make cryptic accusations of ignorance. If anyone had used that phrase, how would it be wrong? Should we say "instance" rather than "instances", because CA was the only party that ever received data? Should we use a different verb, not "releasing"? Help us to understand, rather than complain about the media. We all know about Gell-Mann amnesia.
It was a mistake to put that in quotes; I didn't mean to cite you. I meant to emphasize that the phrase "every data release" sounds like there were individual, distinct cases where Facebook agreed to release data.
When it comes to the GP that talked about his experience - this was about platform vendors such as Apple or Blackberry having supposed "special access". But Facebook just did not release any data here. It just allowed third parties to write a Facebook app. For the most part, those apps were client-side, so the data did not leave the user's device.
Now in the spirit of argument, there are a lot of aspects that can be debated. Is there a meaningful difference between code written by a contractor but published by Facebook, an app published by a third party but made to look like it belongs to Facebook, and a third-party app without the blessing of Facebook? How does that relate to our ideas about the freedom of the internet? Should anyone be allowed to create a Facebook app without Facebook's blessing, which is certainly a technical possibility? What is the implication of Facebook's support for such an app, or of its not taking legal action against one?

What is the responsibility of companies like Facebook when they have an API of the sort that they do, which prima facie allows users to decide which data to share? What if I agree to share data, but that data is something my friend shared with me? Does Facebook have a responsibility to prevent this? Is the problem the sheer amount of data on Facebook? If I have a small online community forum, can I offer that sort of access, letting my users share their friends list with third-party apps?

What about apps like Twitter or Telegram asking for your permission to access your phone book? Courts in Germany have said the users agreeing to this are the ones breaking the law. What if I agree to share my contact book with an app, but that app does not upload the data to a server? How does that compare to Facebook asking a third party to provide a Facebook app, where the data does not go to a third-party server?
It's all interesting stuff. And I don't even ask the media to discuss it in that level of detail. They can give the 5-or-however-many foot view. But don't misrepresent that.
Gell-Mann amnesia doesn't defend the media here, right? The idea is that it is strange that you trust it after their incompetence has been proven to you.
Look, I am not a Trumpian Fake-Newser. I think it's good we have journalists. I am sure the journalists working on those Facebook stories did their best, given limited time, the breadth of technical details involved, the pressures of creating content that brings in clicks, and yes, the personal vanity of wanting to uncover a big story. We are all human, after all.
CA is one thing. But regarding the platform integrations - what the GP tells from personal experience is the same thing I took away from reading the reporting: it was simply misleading.
Who are you trying to convince? I certainly don't trust "the media" (or as I typically call them "the war media") as anyone on HN who follows my copious commenting can tell you. At this point, I trust Facebook less than the newspapers, just based on my (very) occasional use and actually listening to what Facebook themselves say.
All that is beside the point. The point is that if, as 'sonnyblarney indicated, there is a custom negotiation for every new organization that gets to peek behind the curtain, then the process is not standardized. Normal online firms have TOS and APIs, and it's possible to characterize what user data they release. For Facebook, apparently that characterization is not possible.
I've known a few people who gamed Facebook's ad system to "color" people's profile data into cookies, and also a couple of "quiz" app developers who had access to everything they could ask for at the time, and then some.
> As far as the Cambridge story - this is also misleading, because the APIs that were used there were available to the entire world and everyone knew exactly what they were
That was actually the entire point. That Facebook had these overly... generous APIs for anyone to use. That's not a good thing.
APIs used by apps that required explicit opt in and permission granting by the end user.
Calling them “overly generous” is a disingenuous statement: it connotes such a clearly wrong historical perspective that the whole statement is a falsehood.
Facebook’s APIs were considered too stingy at the time! So much so that it was constantly fending off accusations of being a walled garden taking advantage of an open web. The less-than-adequate APIs were its attempts at fending off that narrative.
There's a lot of truth to this statement. The web started out a lot more open. Email addresses were generally public, commands like finger existed, etc etc etc
It's worth noting that the context around privacy of generic life information of the kind you post on social media has radically changed in the last 10-20 years.
Back then, the Facebook API gave out personal details of not just the end user, but all the end user's 'friends'. I never gave explicit opt-in for that.
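To illustrate what that meant in practice, here's a rough sketch of the pre-2015 Graph API v1.0 behaviour (permission and field names are from memory and may be imprecise):

```python
# With a friends_* permission granted by ONE consenting user, an app
# could read profile fields for ALL of that user's friends - none of
# whom ever saw a consent screen. Illustrative, not exact v1.0 syntax.
import requests

TOKEN = "USER_ACCESS_TOKEN"  # token from the single consenting user

resp = requests.get(
    "https://graph.facebook.com/v1.0/me/friends",
    params={"fields": "name,birthday,location", "access_token": TOKEN},
)
for friend in resp.json().get("data", []):
    print(friend)  # data about people who never opted in
```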
You chose to be friends with someone on Facebook which means you very explicitly chose to share your personal details with that person for them to consume or view it in whatever way they saw fit. It was very much clear from context at the time that they would consume it from multiple different sources: the website, phone-specific apps, shared games, etc.
Once someone knows something about you, it's no longer yours; it's that person's to do with as they please.
I don’t disagree with your view on the media, but as a banker I say to tech people: welcome to the club. Welcome to a world where the media will pick a fraud somewhere in a large organisation and generalise it to all employees or to the whole industry, will twist facts to the limit of honesty to make a point, will call outrageous some technical and innocuous business practices, and will run months-long campaigns on a single incident. Welcome to a world where the media hate you.
> somewhere in a large organisation and generalise it to all employees or to the whole industry
I don't know if it's different in the US, but here in Europe that seems to sum up the "sexist" nature of our industry. It's true that there aren't many females in IT, but when I have worked with them they have always been treated fairly and as equals to men. Their sex never seemed to enter into the equation at all.
In fairness to the people slating the entire banking industry, no other industry (that I'm aware of) has so consistently and repeatedly been responsible for financial crash after financial crash in the way your industry has.
It displays a hubris and greed that makes the lives of millions of innocent people worse, time after time. Then the tax payer bails them out when that greed backfires, as it does every single time.
It seems like most of the major banks were guilty of obviously dodgy lending and deliberately overly-complicated financial instruments in the run-up to the last recession, and they seem to be back at it again with car loans.
The overfocus on Facebook hides the much more important and sobering point — if you tell someone exactly what they want to hear, they can be induced to vote for you, which calls into question the real long-term value of democracy as a political architecture.
I think it's the narrowcasting that is a danger to democracy. When you have to blast your message across popular broadcast media, your message has to appeal to a wide cross-section of the public. With the ability to secretly target individuals, you can craft messages that would be repulsive to a larger audience. That's why Facebook's political ad transparency is an important first step against this new attack on democracy.
This is so true. Also compounding the problem is how we're so isolated into our own interests and beliefs, so even if you share your personalized propaganda it's likely to be with other like-minded people. It's like we're all living in our own dream worlds.
Alice, Bob, and Carol all hate each other. A politician of yesteryear had to forge some compromise between them to get the support of any two. Today, you can tell Alice that you'll kill Bob and Carol, Bob that you'll kill Alice and Carol, and Carol that you'll kill Alice and Bob, and get all three of their votes.
How about the fact that Zuckerberg was summoned to Washington to testify before Congress and failed to disclose the ongoing data-sharing agreements FB has with 60 or so device manufacturers?[1]
In addition to willfully misleading Congress, it also likely violates their 2011 FTC consent decree[2]. Is this also just part of the 'media narrative' you claim is different from reality? I think these revelations amount to pretty "scammy behavior" as well, to borrow your terminology.
It's exactly what the parent seems to be writing about. What they are saying is that the WSJ article is highly misrepresentative, that rather than there being a "data sharing" agreement, it would be more correct to say that Facebook hired them to program an app; and that app, installed on the device of the users, communicates directly with the Facebook servers, thus sharing no data with anyone.
> the APIs that were used there were available to the entire world and everyone knew exactly what they were. Were there tech people screaming foul? The press? Not really,
Yeah they were, and that's why the APIs were eventually shuttered. FB even secured a promise from CA that they retroactively deleted the data.
Thanks for commenting and sharing your insight, but I don't think the average person knows or cares what that meant. The problem with CA was that an app developer was allowed to sell user data without the users' knowledge.
To be fair, security is an amazing amount of policy. I took and passed the CISSP, and that exam must be at least 30%, maybe 40%, policy. Things like knowing how the Commerce Department's Safe Harbor rules apply to US companies doing business in the EU. Stuff like that.
That being said, it made me way better at my job. No matter how many technical justifications I have for why we should implement X, the second I brought up a small to medium legal thing everyone would fix it immediately.
Real security is almost all technical and implementation. There's a very significant danger in letting policies become some kind of front line of defense, or in implementing them in place of sound engineering practices.
In almost every work environment, I've seen policies working directly against security: if not by contradicting it, ignoring the details where the real security decisions live, or striking the wrong balance between prescriptiveness and generality, then by out-prioritizing security decision-making. (I've worked at mostly 100,000+ person companies.)
It's much better to have technical security controls provide >80-90% of the actual security. It's just expensive and harder to teach/learn/implement.
That said, there's some real security gained by policy. It comes from:
- Ability to communicate expectations ("adopt technical solution X")
- Ability to exercise legitimized (instantiated/codified) authority
Most of the rest of the value of policy comes in as business enablement value (policies are easier to communicate to auditors than security control implementations are).
Policy can also be a useful placeholder for real security in the sense it will satisfy many external parties who might otherwise reprioritize/randomize security investments.
Policy is not a substitute for reasonable technical controls. But it’s also not a concern that can be wished away by saying “well we just do our security the real way, in code.” Any security control implementation enforces some conceptual policy, even if that policy isn’t documented elsewhere. In some places that’s fine; in others with more robust needs, that’s insufficient. Part of what auditors audit is that policy implementations (whether in code or in human practice) match the specification.
As an example, I’m glad that browser vendors require CAs to document their policies for issuing certificates. Let’s Encrypt does a great job of making much of this process automatic, but there’s still pieces that must be done by humans, and there’s still written policies in place for all of their operations.
At some point in any security process, human judgment comes into play. Striking the wrong balance between technical controls and allowing for human judgment can also lead to absurd outcomes, like this recent article/discussion[0].
If your system is doing "all the right security stuff" but nobody knows about it - sure, you're secure, but nobody knows that or how you are secure. And that's a very significant problem both to a bureaucracy and to its customers. It's also an issue in terms of maintaining those controls over time between changes to staff and business direction.
There's a whole "secondary market" (within an organization) for security assurance, and it tends to be much more measurable than security posture.
Policies and summaries of those policies go a long way toward feeding that "assurance feeling" secondary market without draining resources from ongoing investments in actual posture.
Essentially the way it works is that you develop security controls toward an ideal end state/direction and describe your direction as your policy. Any gaps auditors or your company find then become fuel for making actual changes to the underlying security posture at a technical level.
The danger of your policy not really being a disguised summary of your actual implementation is that the various security staff become free to debate fictional security, and can convince themselves that if they mandate some changes on paper to the policy (with nothing technical associated whatsoever), this has or should have some kind of real effect.
It's more dangerous still if the internal security assurance program uses its own policies or some measure of "adherence to the policies" to then measure security. What happens is that the compliance operation becomes authoritarian about adherence to policies that don't exist outside of (otherwise non-discoverable) mandate, and the company and its auditors are able to "measure" their posture by their policies and convince themselves they are secure.
The worst version of this is where it's done on purpose, for fraud.
You should have seen how much harder life got at our company when we discontinued a common root password across the back office. Things that used to take 5 minutes to fix by self-service now take a ticket to the owner of a system, which can take days.
If one bought into "the security story", one would expect your company after this policy change to have seen fewer random regressions due to random people rooting around in systems for which they were not responsible. Did you find that to be the case?
It's about reducing exposure to risk more than actual occurrences. You only need to regress once for it to permanently damage your business. Just because it never happened until now doesn't mean that it won't in the future.
ISTM typically there would have to be more going wrong for "permanent damage"? Not that it would be surprising for a shared-root organization e.g. to have poor backup skills...
They're about to break the brains of 1000s of people by subjecting them to awful content. This will also have consequences. What was it, "move fast and break things"?
Evidently it's not employees, but contractors (via Accenture, etc). Rumour around these parts is a content mod killed themselves at work a couple months ago. Wouldn't want to put that on actual employees of course!
A lot of their security/policy work is hiring tons of humans to fix things. For example they are hiring an additional 10,000 content moderators. That will obviously impact profitability.
I don't think either company ever denied that AI won't get 100% of the cases. They just say that you can't moderate billions of posts with human moderation alone; you need AI to take care of the 99%, and humans for the ambiguous cases.
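The split they describe is essentially confidence-threshold triage; a minimal sketch (the classifier and thresholds here are illustrative stand-ins, not anyone's actual system):

```python
# Automated scoring handles the clear-cut majority of posts; only the
# ambiguous middle band gets queued for human moderators.

def classify(post: str) -> float:
    """Placeholder scorer returning P(post violates policy)."""
    flagged = {"spamword", "scamlink"}  # toy signal
    hits = sum(word in post.lower() for word in flagged)
    return min(1.0, 0.45 * hits)

def route(post: str) -> str:
    score = classify(post)
    if score >= 0.9:
        return "auto_remove"   # confident violation
    if score <= 0.1:
        return "auto_allow"    # confidently fine
    return "human_review"      # ambiguous: goes to a person

for post in ["hello world", "buy spamword now", "spamword scamlink!!"]:
    print(post, "->", route(post))
```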
Security isn't just "people can't get inside our servers"; it's a much broader topic than that — anything that looks like "people did something that, in hindsight, we'd rather they couldn't have" falls under that purview, really. I'd certainly qualify working on preventing the next Cambridge Analytica, or preventing the next micro-targeted political propaganda campaign, as "security work" without even blinking.