I tried it, but the only way I found to find people in my field was to follow an account that had the field's name in its username and then follow its followers. By that point I was pretty bored of talking into the gloom. I should give it one last check to see if it’s easier to connect with people.
It must be specific to that subreddit then? I tried another one that worked. Anyway, just like Twitter, requiring login changes it from somewhere I rarely visit into something I don't think about at all anymore.
According to another article posted here, reinstalls and installs on multiple devices are counted. Though I can't imagine this going through, otherwise coordinated groups could bankrupt indie studios within days. Steam can use local backup data for reinstalls, so a single machine could easily cause $100+ in fees per day.
One of my side projects has been hacking together a ChatGPT model and a JS app to make a multiplayer D&D-style game. The AI does all the creative descriptions, which are saved by the JS app and fed to the players.
It has been a lot of fun to work on because it gave me a taste of a new way of working: honing prompts and finding the limits of the AI, then figuring out how to offload the parts it struggled with (mainly storing stats, inventory, regurgitating room descriptions, etc.).
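That offloading split can be sketched as a plain state object owned by the app, re-serialized into the prompt every turn so the model never has to "remember" stats or inventory. Everything below (field names, the prompt wording) is a hypothetical illustration, not the actual project's code.

```javascript
// The app, not the model, is the source of truth for mechanical state.
const gameState = {
  room: "cellar",
  players: {
    thorn: { hp: 14, inventory: ["torch", "rope"] },
  },
};

// Serialize the relevant slice of state into the prompt on every turn,
// so the model only ever narrates, never stores.
function buildNarrationPrompt(state, playerId) {
  const p = state.players[playerId];
  return [
    `You are the narrator of a D&D-style game.`,
    `Current room: ${state.room}.`,
    `${playerId} has ${p.hp} HP and carries: ${p.inventory.join(", ")}.`,
    `Describe the scene in two sentences. Do not change any numbers.`,
  ].join("\n");
}

const prompt = buildNarrationPrompt(gameState, "thorn");
```

Because the numbers are injected fresh each turn, a stale or hallucinated stat in the model's previous output never leaks back into the game.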
The “limits of the AI” for this kind of thing are basically everything other than pure dialogue, in my experience.
I also had a bit of a play with this, but the LLMs were just rubbish at it.
The only meaningful way of doing this is to write an actual game using actual state, and use the LLM as a “renderer” that renders coherent state into free text, and free text into (more or less) structured action requests.
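That renderer split could look something like this. `callLLM` is a stand-in for whatever completion API you use, and the action schema is made up for illustration; the point is that the engine validates everything the model emits.

```javascript
// "LLM as renderer": the engine owns state, the model only translates
// between state and free text. `callLLM` is a hypothetical stand-in.

// State -> text: render authoritative state as narration.
function renderState(callLLM, state) {
  return callLLM(
    `Narrate this game state without inventing new facts:\n` +
      JSON.stringify(state)
  );
}

// Text -> structured action: ask for JSON, then validate it against
// the actions the engine actually supports. These are the rails.
const ALLOWED_ACTIONS = new Set(["move", "attack", "take", "talk"]);

function parseAction(callLLM, playerText) {
  const raw = callLLM(
    `Convert the player's message into JSON like ` +
      `{"action": "move", "target": "door"}. Message: "${playerText}"`
  );
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { action: "unknown", target: null };
  }
  if (!ALLOWED_ACTIONS.has(parsed.action)) {
    return { action: "unknown", target: null }; // reject anything off-menu
  }
  return parsed;
}
```

Anything the model produces that doesn't parse, or names an action the engine doesn't support, collapses to `unknown` instead of corrupting game state.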
Without rails, it just becomes like AI Dungeon: freewheeling, ad-hoc storytelling with no rules or structure…
Large context windows don’t solve large-scale coherence, and prompt engineering does sfa against the devoted trolling efforts of actual players.
I wonder how far you could go with a combination of fine-tuning and a two-pass generate & audit process.
Step 1. Fine-tune a base LLM to the scenario. Feed it as much background material as possible. This would work best for a franchise with a huge extended universe or associated works: Dungeons & Dragons, Star Wars, Doctor Who, etc...
Step 2. Fine-tune / RLHF with negative weights against anything out-of-context. Basically, stop the AI ever referencing anything that can't exist in the fictional universe. Penalise references to real-world events, places, or people, modern technology, etc...
Step 3. Fork the AI model, once for each character. Fine-tune for conversations in that character's "tone" or mannerisms, backstory, etc... This could be done via RLHF with examples generated by a more powerful model such as GPT-4. Again, reward or penalise the AI according to whether it references things that character should or shouldn't know.
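The per-character tuning in step 3 could be fed with preference pairs along these lines. The shape only loosely mirrors common preference-tuning JSONL formats, and every name and string here is invented for illustration, not any vendor's actual schema.

```javascript
// Hypothetical preference data for per-character tuning: a "chosen"
// in-universe reply and a "rejected" out-of-universe one for the same
// player input. Format is illustrative only.
const characterExamples = [
  {
    character: "Brother Aldric",
    prompt: "What do you think of computers?",
    chosen: "Computers? I know not the word. Is it some wizard's device?",
    rejected: "Computers are machines that run software.", // breaks the fiction
  },
];

// Serialize to JSONL, one training example per line.
const jsonl = characterExamples
  .map((ex) => JSON.stringify(ex))
  .join("\n");
```

The "rejected" side is where the step-2 penalties live: any reply that acknowledges out-of-universe concepts gets the negative weight.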
When players play the game and converse with characters:
1. They'd always be talking to an LLM fine-tuned to death for that specific character in that fictional universe.
2. Then have both the input and output run past a general-purpose "nanny" AI that is prompted to look for exploits, out-of-context shenanigans, or unexpected output. Respond to the user with "I don't understand the strange things you're talking about" or some similar general push-back against jailbreaks.
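The two-pass wrapper described above could sit in the request path like this; `nannyCheck` and `characterReply` are hypothetical stand-ins for two separate model calls.

```javascript
// Two-pass "nanny" wrapper: a generic filter screens both the player's
// input and the character model's output. Both calls are stand-ins.
const PUSHBACK = "I don't understand the strange things you're talking about.";

function guardedTurn(nannyCheck, characterReply, playerInput) {
  // Pass 1: screen the input for jailbreaks / out-of-context content.
  if (!nannyCheck(playerInput)) return PUSHBACK;

  // Only a clean input reaches the in-character model.
  const reply = characterReply(playerInput);

  // Pass 2: screen the output too, in case something slipped through.
  return nannyCheck(reply) ? reply : PUSHBACK;
}
```

Checking the output as well as the input matters: a jailbreak that sneaks past the first pass still gets caught before the player sees a fiction-breaking reply.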
Alternatively, have the generic security filter AI rewrite inappropriate terms in user inputs with unintelligible garbage. E.g.: if the user asks
"Are you a computer?"
Rewrite that to:
"Are you a gizwallop?"
Then the in-game character would rightly be confused by the nonsense term and answer something like:
"I have no idea what you mean, what is a gizwallop?"
Which could be translated back to:
"I have no idea what you mean, what is a computer?"
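The rewrite-and-back-translate trick above is mechanically simple. Here is a minimal sketch with a fixed substitution table; a real version would presumably have the filter model pick which terms to garble, rather than relying on a hand-written list.

```javascript
// Swap out-of-universe terms for nonsense before the character model
// sees them, then swap them back in the reply. The table is fixed here
// for illustration; a filter model would choose terms dynamically.
const SUBSTITUTIONS = { computer: "gizwallop" };

function toNonsense(text) {
  let out = text;
  for (const [real, fake] of Object.entries(SUBSTITUTIONS)) {
    out = out.replaceAll(real, fake);
  }
  return out;
}

function fromNonsense(text) {
  let out = text;
  for (const [real, fake] of Object.entries(SUBSTITUTIONS)) {
    out = out.replaceAll(fake, real);
  }
  return out;
}
```

The nice property is that the character model's confusion is genuine: it really was handed a nonsense word, so no prompt-level acting is required.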
This would be absurdly expensive and slow right now, but in 5 years? 10?
I can imagine the cost of tuning the above for a GPT-5-equivalent model dropping to within the budget of even a tiny indie game, let alone a big-budget AAA studio.
You've got the pattern for LLMs that I've come to as well - LLMs can decode natural language (into API calls) and can encode it (from state and prompts).
As an AI language model, I cannot help your rogue kill that goblin in the cave. Instead, you can try things like: capitalism, finance, technology.
I joke but it is terribly jarring when the API is working perfectly and then starts apologizing that it cannot do something, like access personal information, when it is internally prompted that it should only use information it receives in the prompt.
...mmm, I'm currently in the 'this is a technical limitation, not an artificial constraint' camp.
Sure, the artificially applied constraints in the APIs also exist... and sure, you can have a long context with, for example, gpt-3.5-turbo-16k, but the problem is fundamentally that no amount of wishing can make an LLM into a compiler that executes code.
You cannot, and will probably never be able to, define your constraints in free text to an LLM of this type and then expect it to execute those constraints in an error-free manner. That's not how the technology works. You might be able to make it generate procedural code that satisfies the constraints and execute that code in a reliable, procedural manner, but afaik no one has managed to get that to work reliably and at scale (if you're thinking of smol developer right now, you clearly haven't actually used it).
When you define an RPG system to an LLM, the issue isn't that it isn't allowed to say things; it's that it can't follow the rules reliably, and it can't keep track of what's going on as the context length gets larger and larger.
...and, for an RPG system, where the contrived RPG rules and internal consistency are everything, it's a deal breaker.
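The alternative the thread keeps circling back to is that the contrived rules live in plain code, and the model would only be handed results to narrate. A minimal sketch, with made-up stats and a standard d20-style hit check:

```javascript
// A deterministic rules engine: attack resolution is plain code, so the
// LLM is never asked to "follow the rules", only to narrate outcomes it
// is handed. All numbers here are illustrative.
function rollD20(rng = Math.random) {
  return Math.floor(rng() * 20) + 1;
}

function resolveAttack(attacker, defender, rng = Math.random) {
  const roll = rollD20(rng);
  const hit = roll + attacker.attackBonus >= defender.armorClass;
  const damage = hit ? attacker.damage : 0;
  return {
    hit,
    roll,
    defenderHp: defender.hp - damage, // the engine, not the model, tracks HP
  };
}
```

Injecting `rng` keeps the engine testable and auditable, and internal consistency stops depending on the model's memory at all.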
> when the API is working perfectly and then starts apologizing that it cannot do something