That's not necessarily the goal. Alignment definitely filters the available response distribution, but the result of alignment and fine-tuning can have higher entropy than the original distribution.
E.g., how many people complain about text being "obvious LLM garbage"? A wider range of styles -- a more entropic solution -- would fall out of fine-tuning in a world where the graders cared about such things.
E.g., alignment is a fuzzy, human problem. Is a model more aligned if it never describes DIY EMPs and often considers interesting philosophical angles? Or if it never says anything outside of the median opinion range? The former solution has a lot more entropy than the latter and isn't particularly well reflected in available training data, so fine-tuning, even for the purpose of alignment, could easily increase entropy.
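To make that concrete with entirely made-up numbers, here's a minimal Python sketch: filtering zeroes out one disallowed response, yet the filtered distribution ends up with higher Shannon entropy because the surviving mass is spread more evenly.

    import math

    def entropy_bits(probs):
        # Shannon entropy in bits; skips zero-probability outcomes
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Hypothetical base model: mode-collapsed onto one stock phrasing,
    # with a disallowed answer (the last entry) carrying a little mass.
    base = [0.90, 0.04, 0.03, 0.02, 0.01]

    # Hypothetical aligned model: the disallowed answer is filtered to
    # zero, but grading for stylistic variety spreads the surviving
    # mass more evenly across the remaining responses.
    aligned = [0.30, 0.25, 0.25, 0.20, 0.00]

    print(entropy_bits(base))     # ~0.65 bits
    print(entropy_bits(aligned))  # ~1.99 bits: filtered, yet higher entropy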
A weaker statement is that creativity is bounded by entropy. The LLM is still free to respond "Four," "four," "{{{{}}}}," "iv," "IV," etc. A sufficiently low-entropy response cannot be creative, though.
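One way to quantify "sufficiently low-entropy," again with invented numbers: the perplexity 2**H counts the effective number of distinct responses, and when it's barely above 1 the model nearly always says the same thing.

    import math

    # Toy numbers: "Four" gets nearly all the probability mass across
    # the five phrasings above.
    dist = [0.99, 0.0025, 0.0025, 0.0025, 0.0025]
    h = -sum(p * math.log2(p) for p in dist)
    print(h)       # ~0.10 bits of entropy
    print(2 ** h)  # ~1.07 "effective" responses (perplexity)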
Is it, though? An answer can still be creative if it's the only way you ever answer a specific question. In your example, if the LLM only ever responded "{{{{}}}}," that's a creative answer, even though it's the only one it can give.
That's a fair point. I think the issue is which reference point we're implicitly choosing for creativity. "{{{{}}}}" is creative relative to our expectations for the problem -- it falls outside the usual distribution of answers, so it carries high surprisal under that distribution. Relative to the person reading the response, I agree creativity could be high while the model's own entropy remains low.
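In toy numbers, the two reference points come apart cleanly: the model's own output entropy can be near zero while the same output carries huge surprisal under the reader's prior. The probabilities below are invented purely for illustration.

    import math

    # The model's own distribution is nearly deterministic on "{{{{}}}}":
    p_model = 0.999
    model_entropy = -(p_model * math.log2(p_model)
                      + (1 - p_model) * math.log2(1 - p_model))
    print(model_entropy)  # ~0.01 bits: almost no diversity on the model's side

    # A reader expecting "Four"/"four"/"4" might give "{{{{}}}}" a tiny
    # prior, so its surprisal under *their* distribution is enormous:
    p_reader = 1e-6  # made-up prior probability the reader assigns
    print(-math.log2(p_reader))  # ~19.9 bits of surprise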
It would make sense that fine-tuning and alignment reduce diversity in the responses; that's the goal.