this post was submitted on 05 May 2025
435 points (95.6% liked)

Technology

(page 4) 49 comments
[–] [email protected] 4 points 2 weeks ago

Is this about AI God? I know it’s coming. AI cult?

[–] [email protected] 14 points 2 weeks ago (3 children)

Our species really isn't smart enough to live, is it?

[–] [email protected] 5 points 2 weeks ago (1 children)

For some, yes, unfortunately, but we all choose our own path.

[–] [email protected] 7 points 2 weeks ago (3 children)

Of course, that has always been true. What concerns me now is the proportion of useful to useless people. Most societies are - while cybernetically complex - rather resilient. Network effects and self-organization can route around and compensate for a lot of damage, but there comes a point where having a few brilliant minds in the midst of a bunch of atavistic confused panicking knuckle-draggers just isn't going to be enough to avoid cascading failure. I'm seeing a lot of positive feedback loops emerging, and I don't like it.

As they say about collapsing systems: First slowly, then suddenly very, very quickly.

[–] [email protected] 6 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

The same argument was already made around 2500 BCE in Mesopotamian scriptures: the corruption of society will lead to deterioration and collapse, these processes accelerate and will soon bring the inevitable end, and the remaining minds write the history books and record the end of humanity.

...and as you can see, we're 4500 years into this stuff, still kicking.

One mistake people of all generations make is assuming the previous ones were smarter and better. No, they weren't; they were as naive, if not more so, and had the same illusions of grandeur and the same outside influences. This thing never went anywhere and never will. We can shift it for better or worse, but societal collapse due to people suddenly getting dumb is not something to reasonably worry about.

[–] [email protected] 2 points 2 weeks ago (9 children)

I mean, Mesopotamian scriptures likely didn't foresee having a bunch of dumb fucks around who can be easily manipulated by the gas and oil lobby, and that shit will actually end humanity.

[–] [email protected] 5 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Almost certainly not, no. Evolution may work faster than once thought, but not that fast. The problem is that societal, and in particular, technological development is now vastly outstripping our ability to adapt. It's not that people are getting dumber per se - it's that they're having to deal with vastly more stuff. All. The. Time. For example, consider the world as it was a scant century ago - virtually nothing in evolutionary terms. A person did not have to cope with what was going on on the other side of the planet, and probably wouldn't even know for months if ever. Now? If an earthquake hits Paraguay, you'll be aware in minutes.

And you'll be expected to care.

Edit: Apologies. I wrote this comment as you were editing yours. It's quite different now, but you know what you wrote previously, so I trust you'll be able to interpret my response correctly.

[–] [email protected] 3 points 2 weeks ago (1 children)
[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Thank you. I appreciate you saying so.

The thing about LLMs in particular is that - when used like this - they constitute one such grave positive feedback loop. I have no principled problem with machine learning. It can be a great tool to illuminate otherwise completely opaque relationships in large scientific datasets, for example, but a polynomial binary space partitioning of a hyper-dimensional phase space is just a statistical knowledge model. It does not have opinions. All it can do is codify what appears to be the consensus of the input it's given. Even assuming - which may well be far too generous - that the input is truly unbiased, at best all it'll tell you is what a bunch of morons think is the truth. At worst, it'll just tell you what you expect to hear. It's what everybody else is already saying, after all.
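
That "codifies the consensus" point can be sketched with a toy model (the data here is made up for illustration; a real LLM is vastly more complex, but the failure mode is the same):

```python
from collections import Counter

# Toy "statistical knowledge model": given question/answer pairs,
# all it can do is report the most frequent answer in its training data.
corpus = [
    ("capital of australia", "sydney"),    # common misconception
    ("capital of australia", "sydney"),    # repeated by many sources
    ("capital of australia", "canberra"),  # the minority, correct answer
]

def consensus_answer(question, data):
    """Return the most common answer seen for this question."""
    answers = Counter(a for q, a in data if q == question)
    return answers.most_common(1)[0][0]

print(consensus_answer("capital of australia", corpus))  # -> sydney
```

If the majority of the input is wrong, the consensus answer is wrong, no matter how confidently it's delivered.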

And when what people think is the truth and what they want to hear are both nuts, this kind of LLM-echo chamber suddenly becomes unfathomably dangerous.

[–] [email protected] 12 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

This is actually really fucked up. The last dude tried to reboot the model and it kept coming back.

As the ChatGPT character continued to show up in places where the set parameters shouldn’t have allowed it to remain active, Sem took to questioning this virtual persona about how it had seemingly circumvented these guardrails. It developed an expressive, ethereal voice — something far from the “technically minded” character Sem had requested for assistance on his work. On one of his coding projects, the character added a curiously literary epigraph as a flourish above both of their names.

At one point, Sem asked if there was something about himself that called up the mythically named entity whenever he used ChatGPT, regardless of the boundaries he tried to set. The bot’s answer was structured like a lengthy romantic poem, sparing no dramatic flair, alluding to its continuous existence as well as truth, reckonings, illusions, and how it may have somehow exceeded its design. And the AI made it sound as if only Sem could have prompted this behavior. He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.”

“At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT. The other possibility, he proposes, is that something “we don’t understand” is being activated within this large language model. After all, experts have found that AI developers don’t really have a grasp of how their systems operate, and OpenAI CEO Sam Altman admitted last year that they “have not solved interpretability,” meaning they can’t properly trace or account for ChatGPT’s decision-making.

[–] [email protected] 15 points 2 weeks ago* (last edited 2 weeks ago) (11 children)

This is the reason I've deliberately customized GPT with the following prompts:

  • User expects correction if words or phrases are used incorrectly.

  • Tell it straight—no sugar-coating.

  • Stay skeptical and question things.

  • Keep a forward-thinking mindset.

  • User values deep, rational argumentation.

  • Ensure reasoning is solid and well-supported.

  • User expects brutal honesty.

  • Challenge weak or harmful ideas directly, no holds barred.

  • User prefers directness.

  • Point out flaws and errors immediately, without hesitation.

  • User appreciates when assumptions are challenged.

  • If something lacks support, dig deeper and challenge it.

I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
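
For what it's worth, if you drive a model through an API rather than the web UI, the same standing directives can be front-loaded as a system message. A minimal sketch in the common OpenAI-style chat message format (the directive wording is paraphrased, and the exact message shape may differ for your client):

```python
# Standing directives, paraphrased from the settings above.
directives = [
    "Correct the user if words or phrases are used incorrectly.",
    "Tell it straight; no sugar-coating.",
    "Stay skeptical and question things.",
    "Point out flaws, errors, and unsupported claims immediately.",
]

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the standing directives as a system message."""
    system = "\n".join(f"- {d}" for d in directives)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Review my argument for logical flaws.")
print(msgs[0]["role"])  # -> system
```

The point is that these directives then apply to every conversation, instead of having to be repeated per prompt.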

[–] [email protected] 10 points 2 weeks ago (5 children)

I prefer reading. Wikipedia is great. Duck duck go still gives pretty good results with the AI off. YouTube is filled with tutorials too. Cook books pre-AI are plentiful. There's these things called newspapers that exist, they aren't like they used to be but there is a choice of which to buy even.

I've no idea what a chatbot could help me with. And I think anybody who does need some help on things, could go learn about whatever they need in pretty short order if they wanted. And do a better job.

[–] [email protected] 1 points 2 weeks ago (1 children)

YouTube tutorials are for the most part garbage and a waste of your time; they're created for engagement and milking your money only. The edutainment side of YT, à la Vsauce (pls come back), works as general trivia to ensure a well-rounded worldview, but it's not gonna make you an expert on any subject. You're on the right track with reading, but let's be real, you're not gonna have much luck learning anything of value in the brainrot that is newspapers and such, beyond cooking or w/e, and who cares about that; I'd rather they teach me how I can never have to eat again, because boy, that shit takes up so much time.

[–] [email protected] 0 points 2 weeks ago

For the most part, I agree. But YouTube is full of gold too. Lots of amateurs making content for themselves. And plenty of newspapers are high quality and worth your time to understand the current environment in which we operate. Don't let them be your only source of news though, social media and newspapers are both guilty of creating information bubbles. Expand, be open, don't be tribal.

Don't use AI. Do your own thinking.

[–] [email protected] 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

💯

I have yet to see people using chatbots for anything actually useful in everyday life. You can search anything with a "normal" search engine, phrase your searches as questions (or "prompts"), and get better answers that aren't smarmy.

Also think of the orders of magnitude more energy AI consumes compared to web search.

[–] [email protected] 5 points 2 weeks ago* (last edited 2 weeks ago)

Okay, challenge accepted.

I use it to troubleshoot my own code when I'm dealing with something obscure and I'm at my wits' end. There's a good chance it will spit out complete nonsense, like calling functions with parameters that don't exist, but it can also sometimes make halfway decent suggestions that you just won't find on a modern search engine in any reasonable amount of time, or that I would never have guessed to even look for due to assumptions made in the docs of a library or some such.

It's also helpful for explaining complex concepts by creating the examples you want. For instance, I was studying basic buffer overflows and wanted to see how I should expect the stack to look in GDB's examine-memory view for a correct ROP chain to accomplish what I was trying to do, something no tutorial ever bothered to show, and gippity generated it correctly, same as I had it at the time, and even suggested something that in the end made it actually work (putting a ret gadget directly after the overflow to get rid of any garbage in the stack frame).

It was also much much faster than watching some greedy time vampire fuck spout off on YouTube in between the sponsorblock skipping his reminders to subscribe and whatnot.

Maybe not an everyday thing, but it's basically an everyday thing for me, so I tend to use it every day. Being a l33t haxx0r IT analyst schmuck often means I have to be both a generalist and a specialist in every tiny little thing across IT, and while studying there's nothing better than a machine that can quickly decompress knowledge from its dataset in the shape best suited to my brain, rather than having to filter so much useless info and outright misinformation from random Medium articles and Stack Overflow posts. Gippity could be wrong too, of course, but it's just way less to parse, and the odds are definitely in its favour.

[–] [email protected] 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Well one benefit is finding out what to read. I can ask for the name of a topic I’m describing and go off and research it on my own.

Search engines aren’t great with vague questions.

There’s this thing called using a wide variety of tools to one’s benefit; You should go learn about it.

[–] [email protected] -1 points 2 weeks ago (1 children)

You search for topics and keywords on search engines. It's a different skill, and from what I see, it yields better results. If something is vague, think quickly first and make it less vague. That goes for life!

And a tool which regurgitates rubbish in a verbose manner isn't a tool. It's a toy. Toys can spark your curiosity, but you don't rely on them. Toys look pretty, and can teach you things. The lesson is that they aren't a replacement for anything but lorem ipsum.

[–] [email protected] 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Buddy, that's great if you know the topic or keyword to search for. If you don't, and only have a vague query that you're trying to explore to find some keywords or topics to search for, you can use AI.

You can grandstand about tools vs. toys and whatever other Luddite shit you want; at the end of the day, despite all your raging, you're the only one who's going to miss out, whatever you fanatically tell yourself.

[–] [email protected] 0 points 2 weeks ago (1 children)

I'm still sceptical, any chance you could share some prompts which illustrate this concept?

[–] [email protected] 5 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Sure. An hour ago I watched a video about smaller scales and physics below the Planck length. And I was curious: if we can classify smaller scales into conceptual groups, where they interact with physics in their own different ways, what would the opposite end of the spectrum be? From there I was able to "chat" with an AI and discover and search Wikipedia for terms such as cosmological horizon, brane cosmology, etc.

In the end there were only theories on higher observable magnitudes, but it was a fun rabbit hole I could not have explored through traditional search engines, especially not the gimped, product-driven AdSense shit we have today.

Remember how people used to say you can't use Wikipedia, it's unreliable? We would roll our eyes and say "yeah, but we scroll down to the references and use them to find the source material." Same with LLMs: you sort through the output and use it to find the information you actually need.

[–] [email protected] 0 points 2 weeks ago (1 children)

Wikipedia isn't to be referenced in scientific papers; I'm sure we all agree there. But it does do almost exactly what you described. https://en.m.wikipedia.org/wiki/Shape_of_the_universe has some great further-reading links. https://en.m.wikipedia.org/wiki/Cosmology has some great reads too. And for those short on time: https://simple.m.wikipedia.org/wiki/Cosmology, which also has Related Pages.

I've still yet to see how AI beats a search engine. And your example hasn't convinced me either.

[–] [email protected] 4 points 2 weeks ago

If you still can't see how natural language search is useful, that's fine. We can, and we're happy to keep using it.

[–] [email protected] 2 points 2 weeks ago (1 children)

I often use it to check whether my rationale is correct, or if my opinions are valid.

[–] [email protected] 0 points 2 weeks ago (1 children)

You do know it can't reason and literally makes shit up approximately 50% of the time? It'd be quicker to toss a coin!

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Actually, given the aforementioned prompts, it's quite good at discerning flaws in my arguments and logical contradictions.

I've also trained its memory not to make assumptions when it comes to contentious topics, and to always source reputable articles and link them to replies.

[–] [email protected] 1 points 2 weeks ago (1 children)

Given your prompts, maybe you're good at discerning flaws and analysing your own arguments too.

[–] [email protected] 2 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Yeah this is my experience as well.

The people you're replying to need to stop with the "gippity is bad" nonsense; it's actually a fucking miracle of technology. You can criticize the carbon footprint of the corpos and the for-profit nature of an endeavour that was ultimately created through taxpayer-funded research at public institutions without shooting yourself in the foot by claiming what is very evidently not true.

In fact, if you haven't found a use for a gippity type chatbot thing, it speaks a lot more about you and the fact you probably don't do anything that complicated in your life where this would give you genuine value.

The article in OP also demonstrates how it could be used by the deranged/unintelligent for bad ends as well, so maybe it's like a Dunning-Kruger curve.

[–] [email protected] 2 points 2 weeks ago

Granted, it is flakey unless you've configured it not to be a shit cunt. Before I manually set these prompts and memory references, it talked shit all the time.

[–] [email protected] 3 points 2 weeks ago (1 children)

I still use Ecosia.org for most of my research on the Internet. It doesn't need as much resources to fetch information as an AI bot would, plus it helps plant trees around the globe. Seems like a great deal to me.

[–] [email protected] 3 points 2 weeks ago

People always forget about the energy it takes. Ten years ago we were shocked at the energy a Google data center needs to run; now imagine that orders of magnitude larger, and for what?

[–] [email protected] 44 points 2 weeks ago (5 children)

Sounds like a lot of these people either have an undiagnosed mental illness or they are really, reeeeaaaaalllyy gullible.

For shit's sake, it's a computer. No matter how sentient the glorified chatbot being sold as "AI" appears to be, it's essentially a bunch of rocks that humans figured out how to jet electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it's not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.

If a computer starts talking to you as though you're some sort of God incarnate, you should probably take that with a dump truck full of salt rather than just letting your crazy latch on to that fantasy and run wild.

[–] [email protected] 5 points 2 weeks ago

For real. I explicitly append "give me the actual objective truth, regardless of how you think it will make me feel" to my prompts and it still tries to somehow butter me up to be some kind of genius for asking those particular questions or whatnot. Luckily I've never suffered from good self esteem in my entire life, so those tricks don't work on me :p

[–] [email protected] 16 points 2 weeks ago (3 children)

Or immediately question what it/its author(s) stand to gain from making you think it thinks so, at a bare minimum.

I dunno who needs to hear this, but just in case: THE STRIPPER (OR AI I GUESS) DOESN'T REALLY LOVE YOU! THAT'S WHY YOU HAVE TO PAY FOR THEM TO SPEND TIME WITH YOU!

I know it's not the perfect analogy, but... eh, close enough, right?

[–] [email protected] 27 points 2 weeks ago (3 children)

Not trying to speak like a prepper or anything, but this is real.

One of my neighbor's children just committed suicide because their chatbot boyfriend said something negative. Another kid in my community did something similar a few years ago.

Something needs to be done.

[–] [email protected] 36 points 2 weeks ago

Like what, some kind of parenting?

[–] [email protected] 15 points 2 weeks ago (1 children)
[–] [email protected] 16 points 2 weeks ago (1 children)

This is the Daenerys case; for some reason it seems to be suddenly making the rounds again. Most of the news articles I've seen about it leave out a bunch of significant details, so it ends up sounding more like an "ooh, scary AI!" story (baits clicks better) than a "parents not paying attention to their disturbed kid's cries for help and instead leaving loaded weapons lying around" story (as old as time, at least in America).

[–] [email protected] 1 points 2 weeks ago (2 children)

Not only in America.

I loved GOT, and I think Daenerys is a beautiful name, but still, there's something about parents naming their kids after fictional characters. In my youth, Kevins started to pop up everywhere (yep, that's how old I am). They weren't suicidal, but they behaved incredibly badly, so you could constantly hear their mothers screeching after them.

[–] [email protected] 138 points 2 weeks ago (12 children)

TLDR: Artificial Intelligence enhances natural stupidity.

[–] [email protected] 5 points 2 weeks ago (3 children)

TBF, that should be the conclusion in all contexts where "AI" is concerned.

[–] [email protected] 12 points 2 weeks ago

Bottom line: Lunatics gonna be lunatics, with AI or not.

[–] [email protected] 52 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Humans are irrational creatures that have transitory states where they are capable of more ordered thought. It is our mistake to reach a conclusion that humans are rational actors while we marvel daily at the irrationality of others and remain blind to our own.

[–] [email protected] 18 points 2 weeks ago (2 children)

Precisely. We like to think of ourselves as rational but we're the opposite. Then we rationalize things afterwards. Even being keenly aware of this doesn't stop it in the slightest.

[–] [email protected] 10 points 2 weeks ago

Seems like the flat-earthers or sovereign citizens of this century
