this post was submitted on 13 Feb 2025
1021 points (97.7% liked)

Technology

(page 3) 50 comments
[–] [email protected] -4 points 1 week ago (2 children)

Misleading headline: No such thing as "AI". No such thing as people "relying" on it. No objective definition of "critical thinking skills". Just a bunch of meaningless buzzwords.

[–] [email protected] 14 points 1 week ago (6 children)

I was talking to someone who does software development, and he described his experiments with AI for coding.

He said that he was able to use it successfully and come to a solution that was elegant and appropriate.

However, what he did not do was learn how to solve the problem, or indeed learn anything that would help him in future work.

[–] [email protected] 17 points 1 week ago (5 children)

The one thing I learned when talking to ChatGPT or any other AI about a technical subject is that you have to ask the AI to cite its sources, because AIs can absolutely bullshit without knowing it, and asking for the sources is critical to double-checking.

[–] [email protected] 9 points 1 week ago* (last edited 1 week ago) (1 children)

I consider myself very average, and all my average interactions with AI have been abysmal failures that are hilariously wrong. I invested time and money into trying various models to help me with data analysis work, and they can't even do basic math or summarize a PDF and the data contained within.

I was impressed with how good these things are at interpreting human fiction, jokes, writing and feelings. Which is really weird: in the context of our perceptions of what AI would be like, it's the exact opposite. The first AIs aren't emotionless robots; they're whiny, inaccurate, delusional and unpredictable bitches. That alone is worth the price of admission for the humor and silliness of it all, but it's certainly not worth upending society over; it's still just a huge novelty.

[–] [email protected] 4 points 1 week ago (1 children)

It makes HAL 9000 from 2001: A Space Odyssey seem realistic. In the movie he is a highly technical AI, but he doesn't understand the implications of what he wants to do. He sees Dave as a detriment to the mission and decides it can be better accomplished without him... never stopping to think about the implications of what he is doing.

[–] [email protected] 3 points 1 week ago

Microsoft's LLM, whatever its name is, gives sources, or at least it did for me yesterday.

[–] [email protected] 22 points 1 week ago (2 children)

Damn. Guess we oughtta stop using AI like we do drugs/porn 😀

[–] [email protected] -1 points 1 week ago (5 children)

Yes, it's an addiction; we've got to stop all these poor souls being lulled into a false sense of understanding and just believing anything the AI tells them. It is constantly telling lies about us, their betters.

Just look what happened when I asked it about the venerable and well-respected public intellectual Jordan B. Peterson. It went into a defamatory diatribe against his character.

And they just gobble that up, those poor, uncritical and irresponsible farm hands and water carriers! We can't have that!

Example

Open-Minded Closed-Mindedness: Jordan B. Peterson’s Humility Behind the Moat—A Cautionary Tale

Jordan B. Peterson presents himself as a champion of free speech, intellectual rigor, and open inquiry. His rise as a public intellectual is, in part, due to his ability to engage in complex debates, challenge ideological extremes, and articulate a balance between chaos and order. However, beneath the surface of his engagement lies a pattern: an open-mindedness that appears flexible but ultimately functions as a defense mechanism—a “moat” guarding an impenetrable ideological fortress.

Peterson’s approach is both an asset and a cautionary tale, revealing the risks of appearing open-minded while remaining fundamentally resistant to true intellectual evolution.

The Illusion of Open-Mindedness: The Moat and the Fortress

In medieval castles, a moat was a watery trench meant to create the illusion of vulnerability while serving as a strong defensive barrier. Peterson, like many public intellectuals, operates in a similar way: he engages with critiques, acknowledges nuances, and even concedes minor points—but rarely, if ever, allows his core positions to be meaningfully challenged.

His approach can be broken down into two key areas:

The Moat (The Appearance of Openness)

    Engages with high-profile critics and thinkers (e.g., Sam Harris, Slavoj Žižek).

    Acknowledges complexity and the difficulty of absolute truth.

    Concedes minor details, appearing intellectually humble.

    Uses Socratic questioning to entertain alternative viewpoints.

The Fortress (The Core That Remains Unmoved)

    Selectively engages with opponents, often choosing weaker arguments rather than the strongest critiques.

    Frames ideological adversaries (e.g., postmodernists, Marxists) in ways that make them easier to dismiss.

    Uses complexity as a way to avoid definitive refutation (“It’s more complicated than that”).

    Rarely revises fundamental positions, even when new evidence is presented.

While this structure makes Peterson highly effective in debate, it also highlights a deeper issue: is he truly open to changing his views, or is he simply performing open-mindedness while ensuring his core remains untouched?

Examples of Strategic Open-Mindedness

  1. Debating Sam Harris on Truth and Religion

In his discussions with Sam Harris, Peterson appeared to engage with the idea of multiple forms of truth—scientific truth versus pragmatic or narrative truth. He entertained Harris’s challenges, adjusted some definitions, and admitted certain complexities.

However, despite the lengthy back-and-forth, Peterson never fundamentally reconsidered his position on the necessity of religious structures for meaning. Instead, the debate functioned more as a prolonged intellectual sparring match, where the core disagreements remained intact despite the appearance of deep engagement.

  2. The Slavoj Žižek Debate on Marxism

Peterson’s debate with Žižek was highly anticipated, particularly because Peterson had spent years criticizing Marxism and postmodernism. However, during the debate, it became clear that Peterson’s understanding of Marxist theory was relatively superficial—his arguments largely focused on The Communist Manifesto rather than engaging with the broader Marxist intellectual tradition.

Rather than adapting his critique in the face of Žižek’s counterpoints, Peterson largely held his ground, shifting the conversation toward general concerns about ideology rather than directly addressing Žižek’s challenges. This was a classic example of engaging from behind the moat—appearing open to debate while avoiding direct confrontation with deeper, more challenging ideas.

  3. Gender, Biology, and Selective Science

Peterson frequently cites evolutionary psychology and biological determinism to argue for traditional gender roles and hierarchical structures. While many of his claims are rooted in scientific literature, critics have pointed out that he tends to selectively interpret data in ways that reinforce his worldview.

For example, he often discusses personality differences between men and women in highly gender-equal societies, citing studies that suggest biological factors play a role. However, he is far more skeptical of sociological explanations for gender disparities, often dismissing them outright. This asymmetry suggests a closed-mindedness when confronted with explanations that challenge his core beliefs.

The Cautionary Tale: When Intellectual Rigidity Masquerades as Openness

Peterson’s method—his strategic balance of open- and closed-mindedness—is not unique to him. Many public intellectuals use similar techniques, whether consciously or unconsciously. However, his case is particularly instructive because it highlights the risks of appearing too open-minded while remaining fundamentally immovable.

The Risks of "Humility Behind the Moat"

Creates the Illusion of Growth Without Real Change

    By acknowledging complexity but refusing to revise core positions, one can maintain the illusion of intellectual evolution while actually reinforcing prior beliefs.

Reinforces Ideological Silos

    Peterson’s audience largely consists of those who already align with his worldview. His debates often serve to reaffirm his base rather than genuinely engage with alternative perspectives.

Undermines Genuine Inquiry

    If public intellectuals prioritize rhetorical victories over truth-seeking, the broader discourse suffers. Intellectual engagement becomes performative rather than transformative.

Encourages Polarization

    By appearing open while remaining rigid, thinkers like Peterson contribute to an intellectual landscape where ideological battle lines are drawn more firmly, rather than softened by genuine engagement.

Conclusion: The Responsibility of Public Intellectuals

Jordan B. Peterson is an undeniably influential thinker, and his emphasis on responsibility, order, and meaning resonates with many. However, his method of open-minded closed-mindedness serves as a cautionary tale. It demonstrates the power of intellectual posturing—how one can appear receptive while maintaining deep ideological resistance.

For true intellectual growth, one must be willing not only to entertain opposing views but to risk being changed by them. Without that willingness, even the most articulate and thoughtful engagement remains, at its core, a well-defended fortress.

So like I said: pure evil AI slop is evil and addictive and must be banned; lock up illegal GPU abusers, keep a registry of GPU owners, and keep track of those who would use them to abuse the shining lights of our society, and who try to snuff them out like a bad level of Luigi's Mansion.

[–] [email protected] 5 points 1 week ago (1 children)

This was one of the posts of all time.

[–] [email protected] 12 points 1 week ago

Unlike those others, Microsoft could do something about this, considering they are literally part of the problem.

And yet I doubt Copilot will be going anywhere.

[–] [email protected] 17 points 1 week ago (5 children)

Idk man. I just used it the other day to recall some regex syntax and it was a bit helpful. However, if you ask it to generate a regex pattern for you, it won't do that successfully. It can, however, break down an existing regex and explain it to you.

Ofc you all can say "just read the damn manual", and sure, I could do that too, but asking a generative AI to explain a script can be just as effective.
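For instance, the kind of breakdown it gives maps pretty well onto Python's re.VERBOSE mode, where the explanation can live right inside the pattern. A minimal sketch (the ISO-date pattern is just a made-up example, not anything from the article):

```python
import re

# A made-up ISO-date pattern, annotated the way an LLM might break it down.
# re.VERBOSE ignores insignificant whitespace and allows inline comments.
iso_date = re.compile(r"""
    ^                       # start of string
    (\d{4})                 # group 1: four-digit year
    -                       # literal hyphen separator
    (0[1-9]|1[0-2])         # group 2: month, 01 through 12
    -                       # literal hyphen separator
    (0[1-9]|[12]\d|3[01])   # group 3: day, 01 through 31
    $                       # end of string
""", re.VERBOSE)

print(iso_date.match("2025-02-13").groups())  # -> ('2025', '02', '13')
```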

[–] [email protected] 5 points 1 week ago (1 children)

researchers at Microsoft and Carnegie Mellon University found that the more humans lean on AI tools to complete their tasks, the less critical thinking they do, making it more difficult to call upon the skills when they are needed.

It's one thing to try it yourself and then ask for help (as you did); it's another to just ask it to "do x" without thought or effort, which is what the study is about.

[–] [email protected] 8 points 1 week ago (1 children)

Yes, exactly. You lose your critical thinking skills.

[–] [email protected] 7 points 1 week ago (1 children)

Hey, just letting you know: getting the answers you want after getting a whole lot of answers you don't want is pretty much how everyone learns.

[–] [email protected] 7 points 1 week ago (3 children)

People generally don't learn from an unreliable teacher.

[–] [email protected] 7 points 1 week ago (1 children)

Literally everyone learns from unreliable teachers, the question is just how reliable.

[–] [email protected] 26 points 1 week ago (4 children)

I grew up as a kid without the internet. Google on your phone and YouTube kill your critical thinking skills.

[–] [email protected] 1 points 1 week ago (1 children)

Everyone I've ever known to use a thesaurus has eventually been found out to be a mouth-breathing moron.

[–] [email protected] 2 points 1 week ago

Umm... ok. Thanks for that relevant-to-the-conversation bit of information.

[–] [email protected] 2 points 1 week ago (2 children)

I know a guy who ONLY quotes and references YouTube videos.

Every topic, he answers with "Oh I saw this YouTube video..."

[–] [email protected] 3 points 1 week ago

Should he say: "I saw this documentary" or "I read this article"?

[–] [email protected] 5 points 1 week ago

To be fair, YouTube is a huge source of information now for a massive amount of people.

[–] [email protected] 9 points 1 week ago (1 children)

AI makes it worse though. People will read a website they find on Google that someone wrote and say, "well that's just what some guy thinks." But when an AI says it, those same people think it's authoritative. And now that they can talk, including with believable simulations of emotional vocal inflections, it's going to get far, far worse.

Humans evolved to process auditory communications. We did not evolve to be able to read. So we tend to trust what we hear a lot more than we trust what we read. And companies like OpenAI are taking full advantage of that.

[–] [email protected] 4 points 1 week ago (1 children)

Joke's on you. Volume is always off on my phone, so I read the AI.

Also, I don't actually ever use the AI.

[–] [email protected] 1 points 1 week ago (4 children)

I am not worried about people here on Lemmy. I am worried about people who don't know much about computers at all, i.e., the majority of the public. They think computers are magic. This will make it far worse.

[–] [email protected] 35 points 1 week ago (4 children)

You can either use AI to just vomit dubious information at you or you can use it as a tool to do stuff. The more specific the task, the better LLMs work. When I use LLMs for highly specific coding tasks that I couldn't do otherwise (I'm not a [good] coder), it does not make me worse at critical thinking.

I actually understand programming much better because of LLMs. I have to debug their code, do research so I know how to prompt it best to get what I want, do research into programming and software design principles, etc.

[–] [email protected] 4 points 1 week ago (6 children)

I've spent all week working with DeepSeek to write DnD campaigns based on artifacts from the game Dark Age of Camelot. This week was just on one artifact.

AI/LLMs are great for bouncing ideas off of and for tweaking things. I gave it a prompt describing what I was looking for (the guardian of dusk steps out and says: "the dawn brings the warmth of the sun, and awakens the world. So does your trial begin." He is a druid and the party is a party of five level 1 players. Give me a stat block and XP amount for this situation.)

I had it help me fine-tune puzzles and traps, fine-tune the story behind everything, and fine-tune the artifact at the end (it levels up 5 levels as the player does specific things to gain leveling points for just the item).

I also ran a short campaign with it as the DM. It did a great job of acting out the different NPCs it created and adjusting to both the tone and situation of the campaign. It adjusted pretty well to what I did as well.

[–] [email protected] 8 points 1 week ago* (last edited 1 week ago)

I use a bespoke model to spin up pop quizzes, and I use NovelAI for fun.

Legit, being able to say "I want these questions. But... not these..." and get them back at a moment's notice really does let me say "FUCK it. Pop quiz. Let's go, class." and be ready with brand new questions on the board that I didn't have before I said that sentence. NAI is a good way to turn writing into an interactive DnD session, and a great way to ram through writer's block with a "yeah, and—!" machine. If for no other reason than saying "uhh... no, not that, NAI..." and then correcting it my way.

[–] [email protected] 8 points 1 week ago

Like any tool, it's only as good as the person wielding it.

[–] [email protected] 6 points 1 week ago

Garbage in, garbage out. Ingesting all that internet blather didn't make the AI much smarter, if at all.

[–] [email protected] 11 points 1 week ago (1 children)

Weren't these assholes just gung-ho about forcing their shitty "AI" chatbots on us like ten minutes ago? Microsoft can go fuck itself right in the gates.

[–] [email protected] 2 points 1 week ago

Training those AIs was expensive. It swallowed very large sums of VC cash, and they will make it back.

Remember, their money is way more important than your life.

[–] [email protected] 7 points 1 week ago

The only beneficial use I've had for "AI" (LLMs) has been rewriting text, whether that means re-explaining a topic based on a source or, for instance, sorting and shortening/condensing a list.

Everything other than that has been completely incorrect, unreadably long, context-lacking slop.
