this post was submitted on 06 Mar 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
(page 2) 5 comments
[–] [email protected] 0 points 1 year ago (3 children)

me when the machine specifically designed to pass the turing test passes the turing test

If you can design a model that spits out self-aware-sounding things after not having been trained on a large corpus of human text, then I'll bite. Until then, it's crazy that anybody who knows anything about how current models are trained accepts the idea that it's anything other than a stochastic parrot.

Glad that the article included a good amount of dissenting opinion, highlighting this one from Margaret Mitchell: "I think we can agree that systems that can manipulate shouldn't be designed to present themselves as having feelings, goals, dreams, aspirations."

Cool tech. We should probably set it on fire.

[–] [email protected] 0 points 1 year ago (4 children)

The problem is that whether or not an AI is self-aware isn't a technical question - it's a philosophical one.

And our current blinkered focus on STEM and only STEM has made it so that many (most?) of those most involved in AI R&D are woefully underequipped to make a sound judgment on such a matter.

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (2 children)

It’s not self-aware; it’s just okay at faking it. Just because some people might believe it doesn’t make it so — people also deny global warming and think the earth is flat.

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (1 children)

As somebody said, and I’m loosely paraphrasing here, most of the intelligent work done by AI is done by the person interpreting what the AI actually said.

A bit like a tarot reading (though even those have quite a bit of structure).

What bothers me a bit is that people look at this and go ‘it is testing me’ and never seem to notice that LLMs don’t really ask questions. Sure, sometimes there are questions related to the setup of the LLM, like the ‘why do you want to buy a gpu from me YudAi’ thing, but it never seems curious about the other party the way a person would be. Hell, it won’t even ask you about your relationship with your mother like earlier AIs would.

[–] [email protected] 0 points 1 year ago (1 children)

As somebody said, and I’m loosely paraphrasing here, most of the intelligent work done by AI is done by the person interpreting what the AI actually said.

This is an absolutely profound take that I hadn't seen before; thank you.
