> A good way of omitting bad bots from the network is by verifying and tying the bot to the (verified) identity of a real person.
Is it?
I am much less charitable than you about whether Discord's bot verification is intended purely for user safety or whether it's a combination of laziness and a way of slowly clamping down control over how users access the service, how it can be extended, and what services/clients can interop.
I disagree that 100 servers is a particularly large number for a popular bot to join, but more importantly I think the threat model you describe illustrates a deeper problem with Discord overall. If the issue is that bots can sign up to Discord and just start talking to people, that's a permissions issue. Why can bots do that? And why is it OK for bots to keep doing that as long as they're in fewer than 100 servers?
So sure, we can have an extremely invasive form of verification, but we could also just... not let bots join random servers in the first place. We're jumping straight to real-life identification in a system that doesn't even support granular control over invites. In my opinion Discord's moderation and user-vetting tools are basically non-existent, so I am at least a little bit skeptical about whether verification is a completely necessary tradeoff between security and privacy.
I agree with your sentiment, but I don't even understand what the rationale for singling out bots is. A user of the service is either causing problems for others or not. Whether that user happens to be a bot doesn't seem relevant to me.
HN does quite well without requiring anything other than an IP address. So does Mastodon. And mailing lists generally have no way of knowing even that!
Do please enlighten me. I've never provided more to HN than a user name and password. There's no third party JS (I just double checked). I suppose the first party JS could be performing aggressive fingerprinting but I doubt it.
(Of course they also have my entire post, view, and vote histories. Those are arguably far more sensitive than any PII I could possibly provide, but I seem to have developed a habit of repeatedly forcing that information on them so I guess that's on me.)
You've provided them with a username, under which you post, comment and view content. This is enough to identify you as an entity in the system and what it is you're doing. If what you're doing, based on heuristics and what you publish, is having a bad effect on "the network", you can be blocked/stopped/warned.
I'm not saying HN does any of this, but I doubt they only look at your IP address when you're interacting with the service.
If you reread the comment chain the original context had to do with collection of PII. HN has only my IP address (no email, phone number, credit card, or ID). I am well aware that data regarding user interactions can be highly sensitive but it's not what was being discussed.
> ... slowly clamping down control over how users access the service, how it can be extended, and what services/clients can interop
So what? It's a private network and a private service. They can have it function however they like. That's why free market economies work - people will go find something else, or demand something else, should what's available not fit their needs or feel too restrictive.
Something like Discord can be replicated easily enough by someone with enough money and a decent engineering team. And it's not like there aren't other options already.
> I disagree that 100 servers is a particularly large number for a popular bot to join
I'm not sure what you mean by "100 servers". I guess that's the maximum number of servers a bot can join?
There are some pretty big servers out there. If a bot can join 100 servers, and they have an average of 10,000 users each, then that's literally 1,000,000 people that can be attacked with malware, scams, and more.
Are you saying that's not a problem?
> If the issue is that bots can sign up to Discord and just start talking to people, that's a permissions issue. Why can bots do that?
I don't believe they can. I believe the verification process prevents this? I could be wrong.
> So sure, we can have an extremely invasive form of verification, but we could also just... not let bots join random servers in the first place
I don't believe they can.
> ... we can have an extremely invasive form of verification ...
Is it that invasive? Is requiring people to validate their identity before introducing something that has the potential to directly address millions of people all at once really that invasive?
Should my credentials (and character, intention, etc.) be validated before I'm allowed to talk on a radio station listened to by millions of people, or is the (privately owned) radio station being, "extremely invasive" by asking me to validate who I am before they let me use their network?
> Discord's moderation and user-vetting tools are basically non-existent
There are five levels of verification you can select from, ranging from none to highest. The latter requires a verified phone number be added to the account.
Their moderation tools are pretty powerful. You can create roles that are flexible enough to allow you to create some pretty interesting setups.
What is it about these tools that you feel could be better?
> So what? It's a private network and a private service. They can have it function however they like. That's why free market economies work
You're commenting under a thread that proposes creating a government service to reduce the implementation costs of identity verification. When we start talking about essentially subsidizing a business practice, then this isn't really about the free market anymore.
But even if it was, criticism is a fundamental part of how the free market works. People are free to advocate against a company's policy, to publicly criticize them, to encourage people not to use them, to argue for an industry to move in a certain direction... the free market has never been a shield against the kind of criticism happening on this thread. The invisible hand of the free market isn't actually invisible: when you see people complaining about companies and making arguments about the overall direction of the market, that is the free market at work.
> Is requiring people to validate their identity before introducing something that has the potential to directly address millions of people all at once really that invasive?
In this context, yes. In a different context, maybe not. But the Internet has different social norms surrounding anonymity, and most people online aren't thrown off by the fact that they might not know the physical identity of someone who makes a website or runs a Twitter account or releases a piece of code/bot.
I think that Discord's policy runs counter to how people expect to consume content online, and I think it's reasonable to describe their request as invasive in the context of Internet norms. You're on HN right now. Does it bother you that the site hasn't asked you for your driver's license yet?
And just as a quick side note on this point, Facebook has been around for long enough that I feel like we should drop the argument that tying accounts to real-world identities inherently prevents abuse or curbs misinformation. Heck, talk radio and cable news have been around long enough that we should probably drop the argument that vetting guests in traditional settings inherently means we'll have less misinformation.
> If a bot can join 100 servers, and they have an average of 10,000 users, then that's literally 1,000,000 people that can be attacked with malware, scams, and more. Are you saying that's not a problem?
I think the much more interesting question in your scenario is why Discord thinks it's OK for a malicious bot to target 990,000 people (99 servers at 10,000 users each, just under the verification threshold). I don't think 100 servers is a particularly high limit for a popular bot or a meaningful line for when abuse becomes a problem. I don't see how identity verification solves the abuse problem overall when hackers/spammers can just create multiple bots that each target smaller numbers of servers. I think it's really weird to act like this becomes a problem at 100 servers.
> What is it about these tools that you feel could be better?
The ability to create private invites that can only be used by a single person; the ability to require users to be approved before they join your server; the ability to ban words; the ability to block links (or better, the ability to only allow links to certain domains); the ability to block bots outright from joining (which seems to be the entire reason this verification process exists); the ability to easily share blocklists between servers; the ability to hold comments from new accounts in limbo until they're approved.
Some of this can be replicated by setting up your own bots and figuring out some kind of custom role where new users jump through hoops; and that's basically what a lot of servers I run into on Discord have to do. But it's really awful and it's a bad experience and it makes moderation unnecessarily complicated for non-technical users. As a result, most servers don't really set anything up because it's time consuming, so we end up with bad defaults on most servers. And that situation doesn't have to exist. Why do I need to find a bot to ban certain words on a server? That's something that belongs in the settings in a text input. Why do I have to go through this weird song-and-dance with invite codes, why can't I add people by their account ID? Why is there no one-click setting to just block new bots from joining my server unless I specifically grant them permission?
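To illustrate how little is actually involved, here is a minimal sketch of the word-filter logic a moderation bot ends up reimplementing. The `BANNED_WORDS` list is a hypothetical example, and the actual Discord API wiring (the bot's message event handler, which would call this check and delete matching messages) is deliberately omitted:

```python
# Core check a word-filter moderation bot would run on each incoming
# message. In a real bot this would be invoked from the client
# library's message event handler; here it is plain, testable logic.

BANNED_WORDS = {"spamword", "scamlink"}  # hypothetical example list


def should_delete(message_text: str, banned: set) -> bool:
    """Return True if the message contains any banned word
    (case-insensitive substring match)."""
    lowered = message_text.lower()
    return any(word in lowered for word in banned)


print(should_delete("check out this SCAMLINK now", BANNED_WORDS))  # True
print(should_delete("hello everyone", BANNED_WORDS))               # False
```

That this fits in a dozen lines is sort of the point: it's a text box and a substring check, not something that should require hunting down and trusting a third-party bot.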
I've joined Discord servers that have these complicated house-of-cards setups where you're entering passwords into dedicated rooms to get granted access to other rooms by moderator bots. It's really bad; moderators shouldn't have to spend hours building custom Rube Goldberg machines to handle new users. This is stuff that should be configurable within 30 seconds from the settings page.
You mention that you "don't believe they can" block bots from abusing servers this way. But I just do not understand what the technical problem is. If the problem is that bots are joining random servers, and if bots can join my server without my permission, give me a single checkbox somewhere in settings to turn that off.