
Maybe in one shot.

In theory I would expect them to be able to ingest the corpus of The New Yorker, turn it into a template with sub-templates, and then rehydrate those templates.

The harder part seems to be synthesizing new connections from two adjacent ideas. They like to take x and y and create x+y, rather than x+y+z.




Most of the major models are already very capable of changing their writing style.

Just give them the right writing prompt: "You are a writer for The Economist. You need to write in the house style, following the house style rules, writing for print, with no emoji..." and so on.

The large models have already ingested plenty of New Yorker, NYT, The Times, FT, and Economist articles; you just need to get them away from their system-prompt quirks.
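
Roughly, as a minimal sketch (assuming the official openai Python client; the model name and the style rules are placeholders, not anyone's actual house rules):

    # Steer style with a system prompt. Assumes OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()

    HOUSE_STYLE = (
        "You are a writer for The Economist. Follow the house style: "
        "write for print, no emoji, short declarative sentences, "
        "British spelling, understated wit."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model
        messages=[
            {"role": "system", "content": HOUSE_STYLE},
            {"role": "user", "content": "Rewrite this draft in house style: ..."},
        ],
    )
    print(resp.choices[0].message.content)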


I think that should be true, but it doesn't hold up in practice.

I work with a good editor at a respected political outlet. I've tried hard to get current models to match his style: filling the context with previous stories, classic style guides, and endless references to Strunk & White. The LLM always ends up writing something filtered through tropes, so I inevitably have to edit quite heavily before my editor takes another pass.

It feels like LLMs have a layperson's view of writing and editing. They believe it's about tweaking sentence structure or switching in a synonym, rather than thinking hard about what you want to say, and what is worth saying.

I also don't think LLMs' writing capabilities have improved much over the last year or so, whereas coding has come on in leaps and bounds. Given that good writing is a matter of taste beyond the direct expertise of most AI researchers (unlike coding), I doubt they'll improve much in the near future.


You're ignoring what I said. They work better when you turn it into a two-step process: step 1, create a template; step 2, execute the template.
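
Something like this, as a sketch (same assumed client and placeholder model as the snippet above; the prompts are illustrative only):

    # Step 1: distil sample articles into an explicit template.
    # Step 2: execute ("rehydrate") the template on a new topic.
    def two_step_rewrite(client, samples: str, topic: str) -> str:
        template = client.chat.completions.create(
            model="gpt-4o",  # placeholder
            messages=[{
                "role": "user",
                "content": "Reduce these articles to a structural template "
                           "(sections, sentence rhythm, transitions):\n" + samples,
            }],
        ).choices[0].message.content

        draft = client.chat.completions.create(
            model="gpt-4o",  # placeholder
            messages=[{
                "role": "user",
                "content": "Using this template:\n" + template
                           + "\n\nWrite a piece about: " + topic,
            }],
        ).choices[0].message.content
        return draft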

>The large models have already ingested plenty of New Yorker, NYT, The Times, FT, The Economist etc articles

And that ends up diluting them. Going back and doing another pass on only a subset would give them a stronger voice. Past some threshold, scanning more information pulls everything toward the average, a regression to the mean rather than an increase in information. It's a giant table of word associations; it can regress.



