Academic media censorship conference censored by YouTube? (mintpressnews.com)
218 points by killittosaveit on Feb 2, 2021 | 95 comments


Google is claiming these videos were never on YouTube? That seems... odd?

And surely easy to refute, if they were public? Internet Archive would at least have scraped the channel's page.

TFA links to https://www.youtube.com/channel/UCGPZJTVIpNe-hroLo-CQ9wQ/vid... which currently 404s...

But which indeed did contain many videos... https://web.archive.org/web/20201101020416if_/https://www.yo...


Fwiw, there have been instances of Google straight up not being able to restore deleted content. When you go to delete videos, you do get warnings that deletion cannot be undone. I wouldn't be surprised if Google didn't have great records of deleted content.

Maybe Google deleted this tiny channel, or maybe it was user error by Project Censored (the apparent organizers of the conference), or maybe it was a publicity stunt by Project Censored. All three possibilities seem about equally likely.


> Fwiw, there have been instances of Google straight up not being able to restore deleted content. When you go to delete videos, you do get warnings that deletion cannot be undone

As there are (former, non-NDA-bound) YouTube SWEs/SREs here - can anyone shed light on how YouTube's storage system works? What amount of storage redundancy and retention is available for the vast majority of videos that only ever get a few views?


> I wouldn't be surprised if Google didn't have great records of deleted content.

It would surprise me if Google didn't keep everything. Harvesting data is part of their business model.


>Soft deletion implies that once data is marked as such, it is destroyed after a reasonable delay. The length of the delay depends upon an organization’s policies and applicable laws, available storage resources and cost, and product pricing and market positioning, especially in cases involving much short-lived data. Common choices of soft deletion delays are 15, 30, 45, or 60 days.

https://sre.google/sre-book/data-integrity/#first-layer-soft...

I work at Google, not on anything related to this though.
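
In case it's unclear what that soft-deletion pattern looks like in practice, here's a minimal sketch (illustrative only, not Google's or YouTube's actual implementation; the class and field names are made up):

    from datetime import datetime, timedelta

    SOFT_DELETE_DELAY = timedelta(days=30)  # policy choice; 15/30/45/60 days are common

    class VideoStore:
        def __init__(self):
            # video_id -> {"data": bytes, "deleted_at": None or datetime}
            self.videos = {}

        def delete(self, video_id):
            # "Deleting" only marks the record; the bytes stay around for the delay window.
            self.videos[video_id]["deleted_at"] = datetime.utcnow()

        def restore(self, video_id):
            # Restoration only works while the record is still inside the delay window.
            self.videos[video_id]["deleted_at"] = None

        def purge_expired(self, now=None):
            # Background job: permanently destroy anything past the delay.
            now = now or datetime.utcnow()
            for vid, v in list(self.videos.items()):
                deleted_at = v["deleted_at"]
                if deleted_at and now - deleted_at > SOFT_DELETE_DELAY:
                    del self.videos[vid]  # after this runs, "no record" is the honest answer

Once the purge job has run, there really is nothing left to restore, which would be consistent with a "we have no record of it" response months later.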


Then consider yourself surprised. Deleted data is actually deleted. Keeping it around is a huge liability.


I don't know how things work these days, but a few years ago, Google's GFS didn't support deletion. The "delete" flag only meant "don't replicate this data" and it was just simpler for them to keep it around until the disk died.

Source: This was published by Google in their research papers on GFS. Sorry, can't remember which paper.


Without reading the paper, I'm about 99% positive it would also mark the data as unused, which would allow the disk space to be reclaimed and overwritten. Disks aren't write-once.

It would also likely drop it from the index, meaning you can't find it if you go looking for it unless you already happen to know which disk it's on and it hasn't been overwritten yet.
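
A toy version of that "delete = forget the pointer, reclaim the space later" behaviour (purely illustrative; this is not GFS code or its actual on-disk format):

    class LazyChunkStore:
        """Toy store where delete only drops the index entry and frees the slot."""

        def __init__(self):
            self.chunks = {}        # chunk_id -> bytes (physically still present)
            self.index = {}         # name -> chunk_id (the only way to find the data)
            self.free_chunks = set()

        def write(self, name, data):
            # Reuse a freed slot if one exists; this is when old bytes finally get overwritten.
            chunk_id = self.free_chunks.pop() if self.free_chunks else len(self.chunks)
            self.chunks[chunk_id] = data
            self.index[name] = chunk_id

        def delete(self, name):
            # Nothing is erased: we just forget where the data was and mark the slot reusable.
            chunk_id = self.index.pop(name)
            self.free_chunks.add(chunk_id)
            # self.chunks[chunk_id] still holds the old bytes until a later write reuses the slot.

So "deleted" data is unreachable through normal paths but may physically survive until its slot is rewritten or the disk dies.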


So basically a recycle bin rather than a shredder


But said recycle bin is actually automatically emptied periodically.


GFS is ancient obsolete technology that is no longer used (the whitepaper was published almost two decades ago, and it described a system already built and in use!). Also, I don't think you're interpreting it correctly either.


Under GDPR I'm fairly certain that they've implemented actual deletion. GDPR even requires companies to go so far as to scan old tapes to delete user records on request by a 30-day deadline, iirc.


Good SRE practice suggests keeping data of certain sorts around for a while before the deletion is fully completed.


Google censorship is more equal than the other possibilities. Google itself has committed a lot of it lately, compared to, say, the owner doing it. Purely based on perception, without any proof, I'd say your last statement is unlikely.


"More equal"? The idea that YouTube deleted some random videos that no one has ever heard of and then lied about it, is "more equal" than the idea that an organization whose stated position is basically "Google bad because censor" wants to make Google seem bad?


This was an academic conference channel that many people viewed. Perhaps not a popular channel for you, but still worthy of existing.


I don’t think they’re questioning whether it’s worthy of existing, but rather why Google would choose to target this video specifically given that it has ~0 reach and the potential PR consequences of lying.


Sure, but in a sea of billions of views a day[0], YT isn't targeting specific channels - at this scale, there's going to be at least one moderation mistake for every million views (and that's being generous).

0: https://blog.youtube/press/
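
Back-of-the-envelope on that (both numbers here are assumptions for illustration, not YouTube's published figures):

    daily_views = 5_000_000_000          # assumed "billions of views a day"
    mistakes_per_million_views = 1       # the (generous) error rate suggested above

    expected_mistakes_per_day = daily_views / 1_000_000 * mistakes_per_million_views
    print(expected_mistakes_per_day)     # ~5,000 erroneous moderation actions per day

Even at a very low per-view error rate, the absolute number of wrongly affected channels per day lands in the thousands.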


It's worth noting, though, that this channel is not the one which the source article describes as "the conference's channel". I wouldn't say it's impossible that Google would delete a channel for bad reasons, but it really seems like nobody in this story is super clear on what happened.


>TFA links to https://www.youtube.com/channel/UCGPZJTVIpNe-hroLo-CQ9wQ/vid... which currently 404s...

Where is that link in the article? I just see a link to this channel:

https://www.youtube.com/channel/UCWo6nbsYT3a5jfHIRF3WmtQ


They said they had no record. Considering this incident supposedly happened in November, there would be a strong possibility the content would have been permanently deleted after a given period due to GDPR concerns, in which case of course they would have no record.


Many will disagree, but this is the result of holding the platforms accountable for what the users upload. The traffic is so immense that they have to resort to shitty algorithms.

Because given Google's (or any big tech company's, really) lack of morals, I don't believe they'd care to censor anything at all.


This is very true, and it was the main point people brought against the EU's Article 17 (the "upload filter"), which mandates scanning all user-generated content for copyrighted material. It was put in place by oh so many very well paid (and not just by the EU) conservative MEPs who have no clue about technology.

Countries are struggling hard to put this into local law right now, since it has to be done by July this year - and, as it turns out, to the surprise of no one, it's basically impossible to do so. The worst part must be that the law is retroactive, so a lot of old content will be removed, and I'm sure no one will bother putting in a counter-notice for a six-year-old video.

I'm very curious how small services like Peertube instances will handle this.

Something funny from an iteration of the law in Germany: You are allowed to use 128x128 images of copyrighted content for "fair use"/without running into the chance of it getting taken down automatically, so good luck making quality content :)


> it's basically impossible to do so.

Well, not true. I built a whole company that does exactly this. It's disappointing that YouTube is used as the standard of what technology can do and how these systems can work.

We built a system with the creator as a first-class citizen. This means they are able to claim exceptions (as defined by law) before any action is taken. They are also able to utilize outside arbiters (WIPO, ...) free of charge to make independent decisions when there is an issue.

These are just a few highlights. But the reality is that just because YouTube built a shitty system doesn't mean the whole Internet has to copy it.


Do you plan to offer wide access to peertube instances once your product is GA, or will it be up to maintainers to pay for and integrate the platform into their upload pipeline? Your system sounds like it would immediately solve the problem of compliance with this law for the PT instances that are afraid that they might be required to block access from the EU.


I did answer some of this already here [0], but in short, yes. We are offering our systems free of charge to all types of platforms, including ones like PeerTube. Anyone can implement our SDK and build on top of it. We operate very similarly to Stripe, where the system is free and we live off of the transactions.

[0] https://news.ycombinator.com/item?id=26009027


It's nice that there are ways to make this friendlier for creators, but wouldn't most bigger services rather block and build a cheap system in-house than rely on a SaaS/B2B product, if you don't give it away for free or price it very competitively? Are they actually open to implementing better systems? Hope it's alright to ask this - it's obviously a very important thing that will be coming up for all of us in the future.

Good luck with your work!


> Hope it's alright to ask this

Absolutely! We need to be held accountable.

> It's nice that there are ways to make this friendlier for creators, but wouldn't most bigger services rather block and build a cheap system in-house than rely on a SaaS/B2B product, if you don't give it away for free or price it very competitively?

The biggest innovation of our company is not its technology but its business model. We in fact do give it away free to all platforms, rightsholders and creators equally. How do we make money? When there is monetization occurring on the platform, we take a %. It's literally the same model as VISA, where cards are free (ok, not all of them) but you pay for the transaction.

Some platforms will still try to build their own internal system, similarly to YouTube's Content ID. However, as it stands right now, they are not compliant with the EUCD, and even if they do become compliant, they are retaining extreme liability. We, on the other hand, created a setup where we are able to take this liability out of the hands of the platforms while providing the best service possible.

We also aligned our incentives with the success of everyone involved. If rightsholders over-block, we don't get paid. If platforms can't attract creators because the system is too intrusive, we don't get paid. If creators don't benefit from our system, we don't get paid.

We intentionally built it this way. We made it free to allow widespread adoption. We can scale much further than CID (we can currently process 1,100 hours of content a second, or 66,000 hours a minute, while guaranteeing a 5-second response time with a full license being issued), but we also spent the last two years working with civic societies, copyright scholars and others to help us build a balanced system with processes that don't favor one side over the other.

We still have a long way to go, but I believe we are on a good path.


> We also aligned our incentives with the success of everyone involved.

Great! In the interest of preserving that alignment into the future, it might be helpful to lock in important aspects of the business model by limiting your future decisions with a Ulysses pact[1]. Cory Doctorow gave a talk[2] about using Ulysses pacts in the tech industry:

>> It's not that you don't want to lose weight when you raid your Oreo stash in the middle of the night. It's just that the net present value of tomorrow's weight loss is hyperbolically discounted in favor of the carbohydrate rush of tonight's Oreos. If you're serious about not eating a bag of Oreos your best bet is to not have a bag of Oreos to eat. Not because you're weak willed. Because you're a grown up. And once you become a grown up, you start to understand that there will be tired and desperate moments in your future and the most strong-willed thing you can do is use the willpower that you have now when you're strong, at your best moment, to be the best that you can be later when you're at your weakest moment.

>> The answer to not getting pressure from your bosses, your stakeholders, your investors or your members, to do the wrong thing later, when times are hard, is to take options off the table right now.

Designing for durability and failure safety is still important when designing business models.


This is a great point.

It's why the business model is built into the core of the service and can't be easily changed. Obviously where there is a will there is a way, but doing so would have very significant consequences for the business.


"We give it free!" "... for a percentage of their revenue"


For a percentage of LICENSED revenue. So if the platform makes no money from the content that is being used, or they use public domain, royalty-free, or Creative Commons content, or the content is blocked, then it's free.

The whole point is that we can offer this system exactly because we get to share the upside with the platform, rightsholders and creators.


"You have to pay us in some cases" is not the same as "free", no matter how you try to weasel it.


Good one.


Retroactive laws, at odds with reality, put into legislation by MEPs who are mostly even less able than their domestic parliaments' party colleagues and whose "constituency" usually doesn't even know who they are – another example of the inherent flaws of the European Union.

Guess VPN servers within the evil Brexit Kingdom are going to be in high demand.


I'm very concerned for e.g. YouTube and the vast amount of content already on there, since they will most likely apply the more aggressive filtering worldwide - maybe solutions like Peertube instances that are clearly labeled "not for EU citizens" will become a thing if it really pans out the way it looks to be at the moment.


For those who want to keep tabs on the topic: Julia Reda, a German MEP, is quite a critical reference on this topic – a good blog to follow. https://juliareda.eu/eu-copyright-reform/censorship-machines...


This is outdated. That is for article 13 which became article 17 in the text that was voted on. Article 17 contains this paragraph: "The application of this Article shall not lead to any general monitoring obligation.".

That line, as I interpret it, makes the whole blog post that you linked invalid.


The past few EU laws had some modicum of reason and I thought people blocking the EU were stupid.

But for this one, if I were running a Peertube instance I would block EU IPs and tell people to use Tor, or at least ssh to a Linode, if they wanted to see things.


Why would you block the EU? Article 17 explicitly states that its application shall not lead to any general monitoring obligation.


> This is very true, and it was the main point people brought against the EU's Article 17 (the "upload filter"), which mandates scanning all user-generated content for copyrighted material

How can that be your interpretation of Article 17 when it literally says: "The application of this Article shall not lead to any general monitoring obligation."


> Because given Google's (or any big tech company's, really) lack of morals, I don't believe they'd care to censor anything at all.

Oh, come on now! A big company may lack morals, true, but in order to appease the most vocal, acutely political and obsessively moral part of the general public, including its own employees, it needs to conform to the societal demands on it. How else would you explain the appearance of banners linking to official Covid-related sites under videos about Covid, or the pulling down of videos that question the official Covid-related policies (such as lockdowns), or banners under videos from state-sponsored channels such as RT, or the removal of videos that question the outcome of the US presidential election, or, well, most of the articles on the YouTube blog, really [0]? They are firmly placing themselves on specific sides of political, cultural, or ethical divides.

[0] - https://blog.youtube/news-and-events


Many will unironically justify their actions by declaring themselves to be "on the right side of history". The phrase has an unsurprising origin.


“This is all for the greater good”


It is closer to "Deus vult" than "Think of the children", one is ridiculously presumptuous and the other is simply manipulative.


> They are firmly placing themselves on specific sides of political, cultural, or ethical divides

Since when is public health a political, cultural or ethical issue ?


>Since when is public health a political, cultural or ethical issue ?

You're either not American or you've been living under a rock for the past year. Public health has been a political issue for decades, and culture and ethics are inseparable from politics. And then COVID happened, and now everything down to believing the virus exists and choosing to take precautions has political ramifications.

It shouldn't be that way but it is.


The political aspect is how to handle the virus. If you voice an opinion about lockdowns possibly being worse in the long run than the virus, or say something like "I think shutting down schools is a bad idea", you get labeled as a denier. I don't think that many people believe it doesn't exist at all, but that's how the MSM portrays it. Taking precautions is fine, but the extreme measures some of these cities have taken, against actual scientific evidence, have made the entire thing political. I can't really blame some people for getting fed up with all of it when the same people who are giving these orders are defying them like they themselves don't really believe them.


To clarify - you're saying that 10 more hours of a cat dancing to techno was not at all an issue for Google to host but that a conference about media censorship was the straw that broke the camel's back?

Also, platforms are famously not accountable for what users upload - that's what makes them platforms and not editors.


Legally, platforms are often not accountable for what users upload (Section 230), but over the past several years journalists and politicians have been demanding that platforms remove content that said journo-politicians don't like, so in a sense the platforms are made to be accountable - do it, or else regulation.


No, I don't think that's what ladyanita22 is saying at all.


> I don't believe they'd care to censor anything at all.

Old Google maybe. The kind of Google employee who broke down in 2016 asking “How could we let this happen?” has a very different approach.


They care a lot about advertisers and lawsuits, at the very minimum.

Nobody wants their video ad up against 'controversial' material.

That said, there has to be a way to redress these grievances.

Of the many things we could start to regulate, it's this:

1) If there is a take-down, they have to tell you why and give you a few days to dispute or to open a grievance.

2) Grievances have to be addressed.

I don't mind even if there is a fee - it's reasonable for Google to require $100 to address a grievance, after all, they are hosting for free.

Alternatively, they could be required to have a 'paid version' without ads in which the bar for issues was considerably higher.

That's just off the top of my head, so not exactly well thought out.

The point is, however, they're reaching a scale whereby some of these platforms are equivalent to a public good. It's like they are becoming broadcasters or telecoms, which were always subject to regulation.


Platforms are, however, not accountable for what users upload (DMCA) or say (Section 230 CDA). This is also not a technology issue but more specifically YouTube's approach to everything: do the absolute minimum, and when it doesn't work, blame it on others.


My pet theory is that both Google and Facebook need to create training set data for their proprietary AI content moderation algos.

The content moderation algorithm is optimizing for three variables simultaneously - maximum number of videos hosted, maximum number of views, minimum number of lawsuits/negative press articles written/anti-Big Tech tweets.

The algorithm can't know if it's truly maximizing the first two without letting the last one trigger every so often.

We are just witnessing/participating in the live dataset they are testing on.


> Google's (any big tech company really) lack of morals

Just because the company is amoral doesn't mean that political-moral entryists can't get their hands on the internal levers of power.


Many will disagree, correctly, because the DMCA exists.


It is unfortunately well known that moderators (human and machine) can't reliably distinguish between X and discussion/criticism of X. So it's quite common for academic discussion of topics like fascism, conspiracy theories, terrorism, trolling, etc. to be mistakenly censored. Either YouTube/Facebook/Twitter need to invest 1000x more in better moderation, or academics should host their content on university sites.


"The Cleaners" is a great documentary about this.

https://vimeo.com/ondemand/thecleaners2018

One really eye opening thing that was super obvious in retrospect is the societal context that's required for a lot of the content moderation. No matter how many people you cram into a room in Malaysia, they aren't going to be able to fully appreciate the nuances of things like satire targeted towards a US audience. For the same reason, you can't have a team of well paid moderators in San Francisco moderating political discussions happening in Burma.

This is what makes the problem of content moderation so intractable. There are extreme financial and political incentives trying to evade moderation, and no matter how many people you throw at the problem, from any number of backgrounds, your coverage is still going to be dismal.

Couple that with the fact that this is probably the most traumatizing job in the world, with incredibly high suicide rates. It's really, really hard to build up a moderation team that can make a sizable dent in the flood of horrific content that hits these platforms. Your best bet is a swarm of AI doing its best, and as many humans as you can hire double-checking their work, which is the state of moderation on most major platforms.


We use it basically as a public platform but it's really a private one that has deep conflicts of interest.

To me this shows the need for public platforms with open systems for distribution and moderation. Some possible approaches are out there. I hope people will support them. And if they find content they don't like, study the history of political dissent and political speech, and learn about ways of improving open moderation before dismissing open distributed platforms. Because otherwise you get what Google decides you will get.


If you open up moderation then you'll get Tyranny of the Majority[1]. But in the internet age it would actually be tyranny-of-the-people-with-a-chip-on-their-shoulder-and-time-to-burn.

[1]: https://en.wikipedia.org/wiki/Tyranny_of_the_majority


It's an essentially open feed that gets curated or filtered by various clients or systems before being presented to the user.

There is no single majority.


> To me this shows the need for public platforms with open systems for distribution and moderation.

That's either self-contradictory, or a very interesting idea.

If you want public platforms with moderation, then it's self-contradictory. Once it's moderated, it's not public (for at least some definitions of public).

If you meant "open" to apply to moderation, though... that could be really interesting. Here's a platform, and it's public, that is, anyone can say anything on it. And it's got open moderation. You don't like Google's moderation? Create your own. Those that like yours better than Google's, use yours rather than Google's. Those that like Google's use Google's. Those that like the CCP's moderation use it.


Right that's basically what I was thinking.


It makes sense that the censorship conference would have content that YouTube would censor - almost by definition.


Odd, but there is plenty of precedent. Here they are putting another video into the memory hole: https://lee-phillips.org/youtube/


I'm sorry but the claim is YouTube just randomly deletes channels with no notification? And they do it in order to suppress... Whatever this was? That seems remarkably unlikely.

The whole article is extremely thin on facts and very fat on speculation and breathless rhetoric.


deleted


Please don't post predictable flamewar comments to HN. We're trying for curious conversation here.

https://news.ycombinator.com/newsguidelines.html


It wasn't a predictable flamewar. Maybe it was for you, but it wasn't for me, nor could you cite any observable evidence of my contribution to this.

The other commenter was trying to flame, and now others have seemingly piled on by needlessly flagging. I was generous and didn't engage in anything but a nudge toward what I was implying/saying.

Google's behavior might be 'obvious' to some, but it's not obvious to everyone - hence why I posted what I did (that Google's behavior was hypocritical) in a public forum where ~2 billion people have access to this website but don't have a uniform understanding of all things Google. The commenter expected me to say one thing or another, and that's where the disconnect is. Not me. If you can cite my error, I'll happily provide 1 BTC for you clearly proving my error. But you can't. I don't come here to blame/flame. I come to contribute what I find valuable to the community. Others didn't find it valuable; that's fine, but I tried.

When dealing with trolls, there is nothing wrong with giving them a gentle nudge, which is what I did. I didn't engage past my initial comment and one additional.

Finally, when dealing with people, don't prejudge, even if you are the judge, when dealing with the community. I've done nothing wrong, and the only problem is the narrative in people's heads and their expectation to ~'not rehash what others have said', as @blauditore and @saagarjha were implying.

What others may have said in the past doesn't inform my experience, I wasn't there. I'm speaking for myself and there is nothing wrong with what I said. And still isn't.

For @blauditore and @saagarjha, I'd say their comments were trying to flame. Don't project their bad behavior on me.


I'm certainly not suggesting that other commenters didn't also break the site guidelines, but what you posted was unsubstantive flamebait. It's easy to underestimate (by 10x or more) how much of this one is doing—commenters are often surprised to hear how their comments have landed with others. The solution to this is not to blame other people but rather to err on the side of posting thoughtfully and substantively.

https://news.ycombinator.com/newsguidelines.html


> but what you posted was unsubstantive flamebait

Pointing out that Google's morals have diverged from its founding motto is absolutely substantive.

As for flamebait: to someone who has an axe to grind, everything is flamebait. Which is something I have zero control over.

> rather to err on the side of posting thoughtfully and substantively.

I did. As for beauty, it's in the eye of the beholder. Which is why I'm standing my ground on what I said. I didn't entice anyone. I didn't get into ad hominem. I simply responded once to point out something that is clear to me (the hypocrisy of Google's morals).

I already stated how Google diverged from its original motto. One commenter felt it was unoriginal. Well, I don't live on this website. Also, where are these unproductive expectations coming from? Pointing out that someone is 'unoriginal' isn't engaging anyone to discuss something.

Since speaking plainly isn't getting through.

Let me reason by comparison.

If I'm a 5th grader and the commenter is a professional 12th grader, how is the 5th grader wrong for posting their opinion? That is clearly substantive and thoughtful; I literally quoted Google's code of ethics page to make my case. The 12th grader should've known better.

If something has been said 100s of times on HN, guess what happens? Typically, it educates the new readers who haven't seen that information before. Or it gets downvoted into oblivion. Nothing wrong with that, and it's self-policing.

I believe this type of response, to me, was misplaced and unnecessary.


If you feel the comment was substantive, thoughtful and conducive to interesting discussion, why did you delete it?


> why did you delete it?

I'll rephrase your question. What is the reason for the deletion of your comment?

Simple. People were flaming. It was getting downvoted and flagged. If the community sends a clear statement (by acting in concert with one another), then I'd prefer not to get downvoted into oblivion. To me, it's not a hill I want to die on. And it was clear that others were misunderstanding. All I can do is communicate my perspective. If someone comes to the conversation with misguided expectations (even if it's a swarm of people), then I'll bow out. It's simple non-violent communication.


That's not what I asked, but maybe I can try to rephrase it. It seems odd to be going to such verbose lengths to defend something you wrote but apparently don't want other people to see.


> That's not what I asked, but maybe I can try to rephrase it. It seems odd to be going to such verbose lengths to defend something you wrote but apparently don't want other people to see.

I appreciate the attempt at clarity but I don't see a question here.


I think by this point, with your help, I've arrived at the conclusion you were simply embarrassed by what you wrote, thanks.


> I think by this point, with your help, I've arrived at the conclusion you were simply embarrassed by what you wrote, thanks.

How so? I wasn't embarrassed. I was shocked at the immature reactions.


Since you ask, but let's end here: To me, it's the simplest theory that explains the available facts.

First you wrote a short one-liner comment that said (roughly summarizing from memory) "google's motto used to be don't be evil, lol". Users flagkilled it almost instantly and you, embarrassed, deleted it. Then a moderator scolded you, which is even more unpleasant and embarrassing and you started writing lengthy defenses pretending the embarrassing thing you deleted was some Voltairean epigram, talking about how you 'stand by' a thing you yourself deleted. When this inconsistency was pointed out to you, you pretended not to understand the issue and went on to blame other users because, I assume, the whole thing becomes even more embarrassing if we recall that it's, in fact, about a comment that said (roughly summarizing from memory) "google's motto used to be don't be evil, lol".


That wasn't my comment... Also, it's refreshing that you can project my emotions onto me (did I say I was embarrassed? did I say it was unpleasant? did I say I felt it was a Voltairean epigram?). Even though I stated you were incorrect about my personal emotional reaction, you continue to not realize that you're incorrect about my perception of the matter. Not to mention that your question ('why...') was a hostile question instead of actually engaging someone. I offered an olive branch by trying to rephrase it. Then you take all pretenses away and judge me without merit or grounding in fact.

I asked you what your question was, but instead of realizing that you asked me a judgemental question (which isn't really a fair question... you presume guilt), you continue with your judgement without the facade of asking. You told me what you thought. Now, in this latest comment, you're expounding as if you're doing someone a favor, but instead it's condescension and gaslighting of my perspective.

As for these longer follow-up posts: honestly, if they got downvoted, I'd just laugh, because it would only confirm how immature the readers are (which does happen occasionally but is rare). But I wouldn't edit them, because objectively I'm not in the wrong in these longer posts. And someone would be hard-pressed to say anything to the contrary.

To be clear, I'm standing my ground on my right to do what I did, for my own reasons, and it was pointless/meritless to throw me into the mix as someone trying to provoke a flamewar. That's a presumption beyond what any evidence (even the deleted evidence) can verify. But... maybe I'm 'guilty' because someone might have power over me and they projected their negative mental patterns onto my comments? Cool world. Seems like the irony of my original comment! But I hope that rationality comes.

Instead, it was an innocent comment and the reaction has been anything but mature or consistent. I've seen even more superficial, pointless, and antagonistic comments than mine daily on HN, yet it's rare to see them squashed. I've checked multiple times. It's hit or miss, since HN has scaled over the years.

Take a step back and chill out. /fin


We detached this subthread from https://news.ycombinator.com/item?id=26008536.


[flagged]


Please don't post predictable flamewar comments to HN. We're trying for curious conversation here.

https://news.ycombinator.com/newsguidelines.html


[flagged]


In this case the duck is a low-quality comment without further exposition.


[flagged]


I don't get why this maxim is constantly touted as if it were a fundamental truth. Corporations exist in a social and legal framework that we agreed to create, and that we could change so that money would not be the sole measure of value.

You might say that's the way things are right now, but if you stop there it sounds like fatalism. I'd rather say something like "let's find ways to regulate corporations so that they have to care about our basic rights". Sorry if I'm misunderstanding your comment.


You wrote two paragraphs to say OP was wrong because, although things are actually the way OP said they are, they could be different.


This is incorrect. Corporations are soulless financial instruments designed purely to deflect blame away from people who repeatedly engage in unethical activities. Corporations ought to be abolished. They are a cancer on society.

Just read the legal definition of a corporation out loud and it's pretty clear what it's designed to do. There is nothing ambiguous about it. It's an instrument designed to legitimize criminal activity by deflecting blame away from real people and towards inanimate abstract entities which cannot be jailed or punished in any meaningful way. They operate with total impunity.

Anyone who cannot see it is a hypocrite. They are willfully blind to the horrors that they benefit from.

Corporations were humanity's first foray into legitimizing crime, cryptocurrencies may be the next instrument... Most of these instruments present themselves as workarounds which can leverage the darkest aspects of human nature for the social good but in fact they bring out the worst in people and make their dark predictions self-fulfilling.


[flagged]


Reduced poverty for whom? For criminals, perhaps. It made honest people poor and crooks rich. The world was better before, when the dishonest people were poor and people knew the meaning of honour. The people whom this system currently serves will surely run it all into the ground when they realize that they have no operating values and therefore anything goes...

We will soon learn what it means to have a society devoid of values or social contracts.

Good luck with your forever improving world.


Bizarre. Anyway, I did a quick check of their website, and they have a really interesting list of "censored" stories: https://www.projectcensored.org/category/the-top-25-censored...

(I've not seen any of the usual right wing free speech faction stand up for these...)


"The top "Censored" news stories are identified through the Campus Affiliates Program, a collaborative effort between faculty and students at many colleges and universities."

Since college faculty often skews 10:1 or better Democrat, I don't find that too surprising.

https://en.wikipedia.org/wiki/Project_Censored


Completely off-topic, but why did they list only 5 items per page? I sometimes see websites do this and I don't get it, is it for clicks?


Wordpress (and its competitors) come configured out of the box for maximum click engagement. It's a massive effort to undo.


> (I've not seen any of the usual right wing free speech faction stand up for these...)

Hi! Part of the right-wing free-speech faction here! (Well, I don't consider myself a rightist, but you probably would.) This incident sucks just like all the other ones! Hope this helps.


I just want to point out that the stories seem to be about predominantly left-wing causes. I don't expect anyone in the right-wing free-speech faction to even know about them, since 1. they were silenced, and 2. they were probably talked about only in more obscure left-wing circles that right-wingers are not a part of.

It's unreasonable to assume that a right-winger would know about it, just like it is unreasonable to assume that a left-winger is going to know about all the silencing that goes on against the right-wing. It's the media blackouts we're talking about here. That's the entire point.


It's fairly unreasonable to assume everyone knows about all the issues on both sides. The current echo chambers make it hard to look at centrist/opposite-leaning news, since it might ignite distaste for the reasoning that side uses (aka less dopamine gets fired when you read it compared to when reading from your favorite source), plus news sources that express completely opposite views aren't shared in your echo chamber circles in the first place, since you probably don't follow or often talk with people who think the opposite of you.

But to address the left-wing causes on the site: it doesn't actually seem like they're partisan issues unless you're talking about the ones specifically about the previous administration, perhaps except for #22 (which boils down to 'we should tax the wealthy', aka something that was not censored, even if it was narrated against) and #6, which is specifically about a new national news source trying to deceive readers by appearing as a bunch of different local news sources. Obviously you might not agree with more, but they don't seem to be strictly selecting left-wing articles for this top-25 list.


By saying that I didn't mean to attack them as such, but pretty much what you've said in the first paragraph.




