LLMs aren’t nearly random enough to ever produce the entire works of Shakespeare, no matter how much time you give them, infinite or otherwise (though I’m sure they’re capable of abominable stitchings of regurgitated quotes and snippets).
It’s always baffling when people (who’ve given it adequate thought) take Library of Babel-type ideas seriously while ignoring the overwhelming amount of nonsense, which would be hopeless to sift through unless all you’re looking for is an exact echo of your query.
What's the reward function for simulating me? I live a pretty dull life; what possible ROI could there be? This goes against all the laws of Economics 101! (The only true way to carve reality at the joints.)