This is super nice, thank you for including an AI disclosure. I would normally avoid something like this because I'd be concerned about how much code oversight there is. Very nice to know that it's overseen properly by a human. Installed it and it's quite nice!!! great work :)
It would also be nice to be able to hide the checkbox it adds to the homepage. Also, disabling "show focus box" doesn't actually seem to work?
I mean more fundamentally, if they have access to even more advanced models than all of us and have this much downtime, does that imply that their models are possibly not so great at software dev?
But yes you're definitely right, it's perhaps more ironic than contradictory.
Agreed. Having some level of human input makes a submission at least meaningful. If the entire repo and all text is generated by an LLM, does it really matter if the human is the one posting the link? It's functionally indistinguishable from automated spam.
For what it's worth, there are modern LLM detectors with extremely low false-positive rates. The tech has advanced quite a bit since the ZeroGPT days. Personally I've gotten very good results from Pangram Labs. Still can't directly ban people though because false positives are always possible.
Are they great at detecting normal prompts that don't try to make the LLM speak non-LLM-ishly? If you make the LLM not use em dashes, "it's not; it's" phrases and similar things, and if you make it make a few mistakes here and there, would it still be detected? My point is that if people aren't trying to hide their LLM use, it might work, otherwise it probably wouldn't. How would a detector tool work against output where the prompt tells the LLM to alter the way it writes? Or if the LLM output is being modified by another LLM specifically designed to mimic certain styles?
Like, why would my comment (or yours, or any other comment) pass or fail the LLM check if I/you/someone else used specific prompts or another LLM to edit the output? It seems like these tools would work on 99.9% of outputs, but those outputs likely weren't created in an adversarial way.
CARROT has this and it’s amazing! You can “time travel” back as far as you want. Absurdly far, even. I can tell you that it was 20 degrees in my town on Jan 1st, 1940.
same, i run quite a few forked services on my homelab. it's nice to be able to add weird niche features that only i would want. so far, LLMs have been easily able to manage the merge conflicts and issues that can arise.
Looking through the poison you linked, how is it generated? It's interesting in that it seems very similar to real data, unlike previous (and very obvious) markov chain garbage text approaches.
I'm genuinely haunted by these TurboTax ads, I see that download-app popup at least 3 times a day when I use Apple News. Truly cannot believe someone at Apple thought that was an acceptable user experience for ads.
I'm glad that I'm not the only one this is happening to! I don't believe I've ever even expressed interest in using anything Intuit, at least not consciously. Perhaps it's the accidental download-app dialogs that are mimicking engagement?