If you take out meaningless tokens (ones that don't contribute to subject focus), I don't see what you would lose. But since that would strip out a lot of contextual info as well, I'd think it might be detrimental.
I assume that in practice, filler words do nothing of value. When words add or mean nothing (their weights are basically 0 in relation to the subject), I don't see why they'd affect what the model outputs (except to cause more filler words)?
Politeness has an impact (https://arxiv.org/abs/2402.14531), so I wouldn't be too quick to make any kind of claim about a technology whose inner workings we don't exactly understand.
Interference is most of the answer. With frequencies f and 2f you get the smoothest interference pattern, even if the tones have a lot of harmonics. The effect weakens as the frequency ratio becomes more fractional (3:2, 4:3, and so on).
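A quick numeric sketch of the point (my own illustration, not from the comment): when the second tone is at exactly 2f, the summed waveform repeats with period 1/f, so the interference pattern is static; a slightly detuned ratio (2.01 here, an arbitrary choice) no longer lines up period to period.

```python
import math

f = 220.0          # base frequency in Hz (arbitrary choice)
period = 1.0 / f   # period of the base tone

def combined(t, ratio=2.0):
    """Sum of a tone at f and a tone at ratio*f, sampled at time t."""
    return math.sin(2 * math.pi * f * t) + math.sin(2 * math.pi * ratio * f * t)

ts = [k * period / 16 for k in range(16)]

# Octave (ratio 2): the waveform one period later matches exactly.
octave_drift = max(abs(combined(t) - combined(t + period)) for t in ts)
print(octave_drift < 1e-9)   # True: pattern repeats, interference is static

# Near-octave (ratio 2.01): phases no longer line up after one period.
detuned_drift = max(abs(combined(t, 2.01) - combined(t + period, 2.01)) for t in ts)
print(detuned_drift > 1e-3)  # True: pattern drifts, i.e. beating
```

The same comparison works for any ratio: the closer it is to a small-integer fraction, the shorter the common repeat period and the steadier the pattern.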
We had an awesome split screen dogfighting game on a Win98 PC where everyone had a Spitfire-like plane and tried to take the others down. You could land at your base and heal etc. Super fun. I think it was called Iron Birds? Don't think I've found it since.
The layout appears unusable on my phone in Firefox for Android (both portrait and landscape); necessary elements get hidden behind others. Not sure if I'm even supposed to be able to play it without kb&m though lol :)
Input's method seems fundamentally different from this. Monaspace keeps the grid intact and only changes the characters visually (situationally letting wide letters overlap into neighbouring narrow characters' cells). Input just imitates monospace aesthetics; I don't really understand what's supposed to be special about that.