Y'know, now that you mention it, the sealioning behaviour I'd been conditioned to expect is a big reason why I spend so much time writing my comments and adding qualifying statements.
Who the fuck designed the kitchen in the thumbnail? The vent hood isn't even close to being centered on the range
Removing the homepage entirely and replacing the UI with the shorts-style format of "view video right now, tap button to see next/previous video". If you want a specific video, you must search for it.
Cheaper to design one seat and use it for both spots
I spent like 40 hours on XC2 and uh, idk, I really liked the world design but wasn't a fan of the effectively-gacha mechanics for unlocking new fighters. The story seemed to have a really slow start (which I'm not necessarily against), but the combat wasn't my thing unfortunately. The Japanese voice acting is definitely a lot better than the English, and was worth waiting on the download for, even though I didn't end up playing that far in.
People developing local models generally have to know what they're doing on some level, and I'd hope they understand what their model is and isn't appropriate for by the time they have it up and running.
Don't get me wrong, I think LLMs can be useful in some scenarios, and can be a worthwhile jumping-off point for someone who doesn't know where to start. My concern is with the cultural issues and expectations/hype surrounding "AI". With how the tech is marketed, it's pretty clear that the end goal is for someone to use the product as a virtual assistant endpoint for as much information (and interaction) as it's possible to shoehorn through it.
Addendum: local models can help with this issue, as they run on one's own hardware, but they still need to be deployed and used with reasonable expectations: they're fallible aggregation tools, not to be taken as an authority in any way, shape, or form.
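For anyone curious what running a local model actually looks like, here's a minimal sketch using the Hugging Face transformers library; the model choice (gpt2) and the parameters are illustrative assumptions, not recommendations:

```python
# Minimal local text-generation sketch with Hugging Face transformers.
# Assumes `pip install transformers torch`; gpt2 is only an example of
# a small model that runs on modest hardware.
from transformers import pipeline

# Everything below runs on local hardware; nothing is sent to a remote API.
generator = pipeline("text-generation", model="gpt2")

prompt = "Local language models are"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# The output is a statistical continuation of the prompt: plausible text,
# with no guarantee of factual accuracy. Treat it accordingly.
print(result[0]["generated_text"])
```

The comments in the sketch are the point: it's a text continuer on your own machine, not an oracle.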
How about: Popularizing the idea of the wall in the first place, going mask-off calling illegal immigrants "murderers and rapists", the "Muslim Ban" on air travel, moving the US embassy to Jerusalem, employing white nationalists as staffers, packing the Supreme Court with extreme conservative justices, giving permanent tax cuts to the rich, expanding the presence of immigrant concentration camps, cozying up to foreign dictators, stating behind closed doors that he wanted generals like Adolf Hitler's when his own generals refused to nuke North Korea and blame it on someone else, egging on a far-right insurrection attempt, directly pursuing strikes and assassination attempts against Middle Eastern military generals and diplomats, ending the Iran nuclear deal, calling climate change a Chinese hoax, calling Covid the "China virus", spreading vaccine disinformation until one was developed before the end of his term, trying to start a trade war with China, discrediting his chief medical advisor on factual statements about Covid, saying Black Lives Matter protestors were "burning down cities", wanting to designate Antifa as a terrorist organization, declaring "far left radical lunatics" part of his "enemy from within", being an avowed friend of Epstein, sexually assaulting over a dozen women and underage girls, being a generally abusive sleazebag, also funding a genocide (Israel has always been ethnically displacing Palestinians), also building the wall, also not implementing healthcare reform (and being against what we have), also not protecting abortion rights (+ setting up the conditions that led to their erosion; see Supreme Court point above), and also denigrating anti-genocide protestors (but not as harshly since he wasn't the one in charge when it happened).
I guess he's not a cop though, so there's that.
(minor edits made for grammar/spelling)
On the whole, maybe LLMs do make these subjects more accessible in a way that's a net-positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased and trustworthy to people.
The problem is that LLMs 'hallucinate' details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it's essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately try and sway others.
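To make the "statistical prediction algorithm" point concrete, here's a toy sketch of the core loop: score every token in a vocabulary, convert the scores to probabilities, and sample one. The vocabulary and scores are invented for illustration; real models do this over tens of thousands of tokens, but nothing in the process checks for truth:

```python
# Toy illustration of next-token sampling, the core mechanic behind LLM
# text generation. The vocabulary and scores are made up for the example.
import math
import random

vocab = ["the", "study", "shows", "cats", "hallucinate"]
logits = [2.1, 1.3, 0.8, 0.2, 1.7]  # model's raw scores for each token

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Sample the next token in proportion to its probability. This selects
# for likelihood given the training data, not for truth.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token, {w: round(p, 3) for w, p in zip(vocab, probs)})
```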
ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they're intelligent. They're very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.
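One practical countermeasure: if a model hands you a citation with a DOI, you can look it up in a real registry before trusting it. A hedged sketch using Crossref's public REST API (the DOI below is a placeholder, not a real reference):

```python
# Sanity-check a DOI against the Crossref registry; a 404 is a strong
# hint the citation was fabricated. Assumes `pip install requests`.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

candidate = "10.1234/placeholder.doi"  # e.g. pulled from an LLM answer
print("found in Crossref" if doi_exists(candidate) else "not found, be suspicious")
```

This only catches fully invented references, of course; a real DOI attached to a misrepresented study still needs a human to actually read it.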
Wow, it's almost as if someone being bad can be for multiple reasons!
You would have made a pretty penny if you'd bet on Trump in 2016
Electoral betting odds more closely reflect the opinions of capital than voters, precisely because of who has more disposable money to put where their mouth is. In 2016, that was the liberal status quo because it meant business as usual. In 2024, after having a taste of blood in the water from tax cuts and deregulation in Trump's first term, they want more.
I always found the idea of stable Boltzmann brains fascinating: the idea that in an infinite enough universe, there must exist self-sustaining minds that function on an entirely circumstantial set of rules and logic, based on whatever the quantum soup spit up.
In the US, there's still a lot of McCarthy-era sentiment, and "Communist" is a pejorative within the general population. For instance, the Communist Control Act of 1954 is still on the books. Though it has issues as a law for being really vague, and hasn't been used seriously against leftist organizing on account of that, it nonetheless remains and has never been outright challenged before the Supreme Court of the United States. Either way, it had a chilling effect, and was pretty successful as part of the US's broader campaign to demonize communism and communist organizing.
Because of the way "Communism" and "Marxism" are used within US press and mainstream politics (especially by the Republican party), the average voter is conditioned to view them as bad words accordingly. The Democratic party, trying to court "moderate" voters within the political landscape here, all but refuses to touch those words with a 10-foot pole. It's not part of their brand (and not part of their policy either, not by any stretch of the imagination).
Progressivism in my view is an umbrella term, but still pretty linked with liberalism as a movement in the sense that it's mostly reformist and acts as a subgroup within the Democratic party. Most "Progressive" candidates for US political office are SocDems at most.
You can call it newspeak, but political movements arise under new/different names as the situation dictates, and often refer to different things. I'd argue that the point of newspeak within 1984 was actually to limit the evolution of language and restrict the development of new words/ideas, but I do get where you're coming from on account of "progressive" being considered more politically correct.