The best was the Ted Chiang article, which made numerous category errors and missed the forest for the trees in arguing that LLMs just store lossy copies of their training data. It was well-written, plausible, and very much incorrect.
Neural-network-based compression algorithms[1] are a thing, so I believe Ted Chiang's assessment is right: a model that predicts its input well can, in principle, be used to compress it. Memorization (albeit lossy) is also how the human brain works and develops reasoning[2].
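If you're skeptical of that link, here's a minimal sketch of the standard prediction-equals-compression argument. The toy bigram character model below is just a stand-in for a neural predictor (my own illustration, not anything from Chiang's article or the cited references): an arithmetic coder can encode each symbol in about -log2 p(symbol | context) bits, so the total computed here is roughly what such a coder would emit.

```python
# Minimal sketch: prediction as compression, the idea behind
# neural-net compressors. A model that predicts the next symbol
# can encode it in about -log2 p(symbol | context) bits, since
# arithmetic coding achieves that bound within a tiny overhead.
# A toy bigram model stands in for the neural net; the math is
# identical either way.
import math
from collections import Counter, defaultdict

text = "the quick brown fox jumps over the lazy dog " * 50

# Estimate p(next_char | prev_char) from the data. A real adaptive
# coder would update these counts online as it encodes, so the
# decoder can mirror them without shipping the model separately.
pair_counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    pair_counts[prev][nxt] += 1

alphabet = set(text)
bits = 8.0  # first character sent raw
for prev, nxt in zip(text, text[1:]):
    counts = pair_counts[prev]
    # Laplace smoothing keeps unseen pairs at nonzero probability.
    p = (counts[nxt] + 1) / (sum(counts.values()) + len(alphabet))
    bits += -math.log2(p)

print(f"raw:         {len(text) * 8} bits")
print(f"model-coded: {bits:.0f} bits (~{bits / len(text):.2f} bits/char)")
```

The better the predictor, the fewer bits per character; swap the bigram counts for an LLM's next-token probabilities and you have a (slow but very effective) general-purpose compressor, which is the sense in which the compression framing holds.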