Like parrots, LLMs learn to imitate language (only, unlike parrots, they do so through training on billions or even trillions of examples, not from mere exposure) without ever understanding its primary meaning, much less subtler secondary meanings (such as how a person's certainty and formal education shape their choice of words on a subject).
Since we humans tend to see patterns in everything, even when they're not there (like spotting a train in the clouds or Christ in a piece of burnt toast), when confronted with the parroted output of an LLM we tend to "spot" subtle patterns and conclude from them characteristics of the writer of those words, just as we would if the writer were human.
Subconsciously we're taking a cognitive process meant to derive conclusions about other humans from their words and applying it to words from non-humans. Of course, such a process only ever yields human characteristics, so this shortcut attributes human traits to non-humans. In logical terms, it's as if we're reasoning "assuming this is from a human, here are the human characteristics of the writer of these words" - only, because it's all subconscious, we don't notice we're presuming humanity up front in order to conclude the presence of human traits, i.e. circular logic.
This kind of natural human cognitive shortcut is commonly and purposefully exploited by all good scammers, including politicians and propagandists, to lead people into reaching specific conclusions, since we're much more wedded to conclusions we (think we) reached ourselves than to those others told us about.