There's nothing particularly wrong with it. Twitter does something like this with a bit of timing thrown in the mix, which is why some smart-asses have made self-referencing tweets.
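For the curious: the "timing in the mix" is Twitter packing a millisecond timestamp, a worker ID, and a per-worker sequence into one 64-bit integer. A minimal sketch, assuming Snowflake-style field widths (41/10/12 bits) and a custom epoch, not Twitter's exact production layout:

```python
import time

# Snowflake-style ID sketch: 41 bits of milliseconds since a custom epoch,
# 10 bits of worker ID, 12 bits of per-worker sequence counter.
CUSTOM_EPOCH_MS = 1_288_834_974_657  # assumed epoch; any fixed point in time works

def make_id(worker_id: int, sequence: int) -> int:
    ms_since_epoch = int(time.time() * 1000) - CUSTOM_EPOCH_MS
    return (ms_since_epoch << 22) | ((worker_id & 0x3FF) << 12) | (sequence & 0xFFF)

print(make_id(worker_id=1, sequence=0))  # IDs sort roughly by creation time
```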
I believe they also had a public API for a long time^, and I'm pretty sure they soft-delete posts (bonus: they could revive the tweet down the track if it was a 'mistake'). I don't think the Twitter team are amateurs, though.
Having monotonically increasing IDs for non-distributed systems removes the possibility of duplicate IDs. In distributed systems, ID conflicts are hard to avoid without coordination; you'll likely have fewer conflicts if you generate random IDs, and it also becomes difficult for an attacker to guess them (quick sketch below).
^ likely it would have been rate-limited, which isn't a problem if you've got a large pool of users you can authenticate with
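To illustrate the contrast mentioned above, a minimal sketch using 128 random bits (UUID-sized) from Python's standard secrets module versus a plain counter; the function names here are hypothetical:

```python
import secrets

def random_id() -> int:
    # 128 random bits: no coordination needed between nodes, collisions are
    # astronomically unlikely at realistic volumes, and the values can't be
    # enumerated by an attacker.
    return secrets.randbits(128)

def next_sequential_id(counter: int) -> int:
    # Monotonic counter: guaranteed unique on a single node, but trivially
    # guessable and needs coordination (or a single writer) across nodes.
    return counter + 1
```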
> Having monotonically increasing IDs for non-distributed systems removes the possibility of duplicate IDs.
In Parler's case, having monotonically increasing IDs typed as a signed 32-bit integer _guaranteed_ duplicate IDs once they reached ~2.1 billion unique items. This is a well-known failure mode, and yet it took them by surprise.
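A quick back-of-the-envelope on that failure mode: a signed 32-bit integer tops out at 2,147,483,647, and in two's complement the next increment wraps to a large negative number. A sketch of the arithmetic only (not Parler's actual code):

```python
INT32_MAX = 2_147_483_647  # largest signed 32-bit value, i.e. ~2.1 billion

def as_int32(n: int) -> int:
    # Interpret n as a two's-complement signed 32-bit integer.
    n &= 0xFFFF_FFFF
    return n - 0x1_0000_0000 if n >= 0x8000_0000 else n

print(as_int32(INT32_MAX + 1))  # -> -2147483648: the "next" ID wraps negative
```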
Okay... I concede, 32-bit is a rookie mistake. I didn't know that. Wow. Did they not account for millions of daily active users in a Twitter competitor?
https://www.spinellis.gr/blog/20090805/