this post was submitted on 07 Apr 2024
1 points (100.0% liked)

United States | News & Politics

7120 readers

founded 4 years ago
top 40 comments
[–] [email protected] 0 points 5 months ago (1 children)

People don't read. And before you downvote, it's still bad.

It was not a human system that was posing as AI. It was a shitty AI that needed a lot of human intervention.

Yes, it's still shit. Yes, it's still a problem with how they implemented it and how they pitched it.

But there needs to be a higher level of criticism. Saying "it was just human labor the whole time" is flatly incorrect. The better criticism is the truth: they made AI so shitty that it needed a bunch of human intervention, and their product was really, really bad.

I've heard so many people state this as "there wasn't any AI, it was just humans watching cameras." And the false narrative distracts from the real story.

People pretend the truth doesn't matter and retreat to "well, even if it was AI, it was so bad that I was still basically right," and that's a problem.

[–] [email protected] -1 points 5 months ago

When your technology is so garbage you need a squishy meatbag to hold its pathetic hand... oof.

[–] [email protected] 0 points 5 months ago (1 children)

So the so-called Amazon "AI" is "An Indian"?

[–] [email protected] 0 points 5 months ago (1 children)
[–] [email protected] 0 points 5 months ago (2 children)

What's old is new again: "The Mechanical Turk".

[–] [email protected] 0 points 5 months ago

I'm going to tell you something depressing: those workers have four-year degrees and are unemployed by now, like 83% of Indian youth.

[–] [email protected] 0 points 5 months ago

I was under the impression that Mechanical Turks were powered by Turkish people, not Indians.

[–] [email protected] 0 points 5 months ago (4 children)

So-called "AI" is usually just a scam to hide human labor. The reality is this tech is not usable wthiout human curation, often requiring even more human labor than just doing things the old fashioned way.

When is this bubble going to pop?

[–] [email protected] 0 points 5 months ago (1 children)

This is not true at all. Transformer models like ChatGPT have already proven to be immensely useful and helpful in the professional world. They're not capable of doing jobs entirely on their own yet, but as tools that help humans do their jobs, they're great.

[–] [email protected] 0 points 5 months ago (2 children)

Reread what I said. I said it's not usable without human curation, i.e., what humans do when they use it as a tool to do their job.

[–] [email protected] 0 points 5 months ago (1 children)

You also said "often requiring even more human labor than doing things the old fashioned way" - i dare say that's the part they were countering.

[–] [email protected] 0 points 5 months ago

I didn't say that it always requires more human labor! Stable Diffusion, specifically, seems like it can really reduce the amount of human labor needed to generate art. It can't eliminate it, but it can definitely turn art from a skill that requires 10,000 hours to master into one that maybe requires 10. Industrial de-skilling, in other words.

But that's the best case scenario. In many cases AI doesn't help at all and just requires human workers to fix it as it constantly fucks up, and it doesn't seem to get any better.

[–] [email protected] 0 points 5 months ago

You said it requires even more human labor than doing things the old-fashioned way, which in my experience is completely false.

[–] [email protected] 0 points 5 months ago (1 children)

I think it's a bit more complex than that: you are right, but only in the beginning... after the AI is trained, you don't need the cheap labor anymore. Which, IMHO, makes it even worse.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago) (2 children)

Marketing hype.

No amount of training can ever eliminate the need for human curation. This is not AI; it's a jumped-up pattern-recognition engine. False positives and false negatives are inevitable without a consciousness to evaluate them. Hallucinations are an intractable problem that cannot be solved, regardless of training, and so all these AIs can ever be is a tool for human workers.

It'll take something totally different and new.

[–] [email protected] 0 points 5 months ago

I understand what you are saying, but I don't agree. Look at the examples we already have: I use ChatGPT at work to code; it has limitations but works without any human curation. Check Midjourney as well: it has great accuracy, and if you ask for a picture of dogs it will create one without any human intervention. Yes, it took a long time and human effort to train them, but in the end that's no longer needed for the majority of cases. The hallucinations and inaccurate results you mention do happen, but they're becoming fringe cases, fewer and fewer. It's true that it's not the miracle tool that marketing says it is (that's marketing), but it's much more dangerous than it looks and will definitely substitute for a lot of workers; it already does.

[–] [email protected] 0 points 5 months ago (1 children)

LLMs may fabricate things now and then, but so do humans. I am not convinced the problem is intractable.

[–] [email protected] 0 points 5 months ago (1 children)

You have no reason to believe the problem can be solved.

It's almost religious. You just have faith in technology you don't understand.

Keep praying to your machine spirits, maybe the Omnissiah will deliver the answer!

[–] [email protected] 0 points 5 months ago (1 children)

I have no reason to believe the problem can't be solved, except insofar as it hasn't been solved yet (but LLMs only recently took off). So without a good reason to believe it's intractable, I'm at worst 50/50 on whether it can be solved. Faith in the machine spirit would be if I had an unreasonably high expectation that LLMs can be made not to hallucinate, like 100%.

My expectation is around 70% that it's solvable.

[–] [email protected] 0 points 5 months ago (1 children)

You have no reason to think it can be solved. You're just blindly putting your faith in something you don't understand and making up percentages to make yourself sound less like a religious nut.

[–] [email protected] 0 points 5 months ago (1 children)

If I have no reason to believe X and no reason not to believe X, then the probability of X would be 50%, no?

[–] [email protected] 0 points 5 months ago (1 children)

By this logic, the probability of every stupid thing is 50%.

You have no reason to believe magic is real, but you have no reason to not believe magic is real. So, is there a 50% probability that magic is real? Evidently you think so, because the magic science mans are going to magic up a solution to the problems faced by these chatbots.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago) (1 children)

Absolutely not true. The probabilities of stupid things are very low; that's because they are stupid. If we expected such things to be probable, we probably wouldn't call them stupid.

I have plenty of evidence to believe magic isn't real. Don't mistake "no evidence (and we haven't checked)" for "no evidence (but we've checked)". I've lived my whole life and haven't seen magic, and I have a very predictive model for the universe which has no term for 'magic'.

LLMs are new, and have made sweeping, landmark improvements every year since GPT2. Therefore I have reason to believe (not 100%!) that we are still in the goldrush phase and new landmark improvements will continue to be made in the field for some time. I haven't really seen an argument that hallucination is an intractable problem, and while it's true that all LLMs have hallucinated so far, GPT4 hallucinates much less than GPT3, and GPT3 hallucinates a lot less than GPT2.

But realistically speaking, even if I were unknowledgeable and unqualified to say anything with confidence about LLMs, I could still say this: for any statement X about LLMs which is not stupid by the metric that an unknowledgeable person would be able to perceive, the probability of that statement being true about LLMs to an unknowledgeable person is 50%. We know this because the opposite of that statement, call it ¬X, would also be equally opaque to an unknowledgeable person. Given X and ¬X are mutually exclusive, and we have no reason to favor one over the other, both have probability 50%.

[–] [email protected] 0 points 5 months ago (1 children)

This technology isn't actually that new; it's been around for almost a decade. What's new is the amount of processing power they have to throw at the databases and the level of data collection, but you're just buying into marketing hype. It's classic tech-industry stuff to over-promise and under-deliver to pump up valuations and sales.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago) (1 children)

Ok, but by that same perspective, you could say convolutional neural networks have been around since the '80s. It wasn't until Geoffrey Hinton put them back on the map around 2012 that anyone cared. GPT2 is when I started paying attention to LLMs, and that's 5 years old or so.

Even a decade is new in the sense that Laplace's law of succession alone indicates there's still roughly an 8% chance, about 1 in 12, that we'll solve the problem in the next year.
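
For concreteness, here's a minimal sketch of the calculation I'm invoking, assuming one trial per year and zero successes so far (my framing, not anything established here):

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's rule of succession: P(success on the next trial) = (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

# Treating each of ~10 years as one failed attempt:
print(rule_of_succession(0, 10))  # 0.0833..., i.e. about 1 in 12
```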

[–] [email protected] 0 points 5 months ago (1 children)

Laplace’s law of succession only applies if we know an experiment can result in either success or failure. We don't know that. That's just adding new assumptions for your religion. For all we know, this can never result in success and it's a dead end.

[–] [email protected] 0 points 5 months ago (1 children)

I have to hard disagree here. Laplace's law of succession does not require that assumption. It's easy to see why intuitively: if it turns out the probability is 0 (or 1), then the predicted probability from Laplace's law of succession tends to 0 (or 1) as more results come in.
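
A quick simulation sketch of that limiting behavior (illustrative numbers only):

```python
import random

def laplace_estimate(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

# Simulate trials at several "true" success rates and watch the
# Laplace estimate track the true rate as trials accumulate.
for true_p in (0.0, 0.3, 1.0):
    for n in (10, 1_000, 100_000):
        s = sum(random.random() < true_p for _ in range(n))
        print(f"true p = {true_p}, n = {n}: estimate = {laplace_estimate(s, n):.5f}")
# At p = 0 the estimate shrinks like 1/(n + 2) toward 0; at p = 1
# it grows like (n + 1)/(n + 2) toward 1 -- no extra assumption needed.
```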

[–] [email protected] 0 points 5 months ago (1 children)

If the probability is 0, then it will never be 1.

Therefore, for the law to predict anything, there must be some probability of success.

[–] [email protected] 0 points 5 months ago (1 children)

It may help to distinguish between the "true" probability of an event and the observer's internal probability for that event. If the observer's probability is 0 or 1 then you're right, it can never change. This is why your prior should never be 0 or 1 for anything.
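
That's essentially Cromwell's rule, and a couple of lines of Bayes' theorem show why a prior of exactly 0 can never move (the likelihood values here are made up for illustration):

```python
def bayes_update(prior: float, lik_if_true: float, lik_if_false: float) -> float:
    """Posterior P(H | E) = P(H)P(E|H) / [P(H)P(E|H) + P(~H)P(E|~H)]."""
    num = prior * lik_if_true
    return num / (num + (1 - prior) * lik_if_false)

# Evidence that is 100x more likely if H is true:
for prior in (0.0, 0.001, 0.5):
    print(prior, "->", round(bayes_update(prior, 0.99, 0.0099), 4))
# 0.0   -> 0.0     (a zero prior never moves, no matter the evidence)
# 0.001 -> 0.0918  (a tiny but nonzero prior responds to evidence)
# 0.5   -> 0.9901
```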

[–] [email protected] 0 points 5 months ago (1 children)

"This is why your prior should never be 0 or 1 for anything."

For anything? Are you sure about that?

Because I say there's 0 probability that a six-sided die will ever produce a 7.

[–] [email protected] 0 points 5 months ago (1 children)

A better example of this is "How sure are you that 2 + 2 = 4?" It makes sense to assign a prior probability of 1 to such mathematical certainties, because they don't depend on our uncertain world. On the other hand, how sure are you that 8858289582116283904726618947467287383847 isn't prime?
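
(That uncertainty is resolvable in principle, by the way; one library call settles it, assuming sympy is installed. The point is what your probability should be before you run it:)

```python
# Settling the primality question takes one call
# (assumes sympy is available: pip install sympy).
from sympy import isprime

print(isprime(8858289582116283904726618947467287383847))
# The answer is a fixed fact of arithmetic, but until you check,
# a calibrated observer's probability sits strictly between 0 and 1.
```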

For a die in a thought experiment, sure, it can't be 7. But in a physical universe, a die could indeed surprise you with a 7.

More to the point, why do you believe the probability that hallucinations as a problem will be solved (at least to the point that they are rare and mild enough not to matter) is literally 0? Do you think that the existence of fanatical AI zealots makes it less likely?

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago) (1 children)

Okay, so by your logic the probability of literally everything is 1. That's absurd and that's not how Laplace’s law of succession is supposed to be applied. The point I'm trying to make is that some things are literally impossible, you can't just hand-wave that!

And I'm not saying that solving hallucinations is impossible! What I'm saying is that it could be impossible, and I'm criticizing your blind faith in progress, because you just believe the probability is literally 1. I can't say, for sure, that it's impossible. At the same time, you can't say, for sure, that it is possible. You can't just assume the problem will inevitably be fixed; otherwise you've talked yourself into a cult.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago) (1 children)

I'm not saying the probability of literally everything is 1. I am saying it's nonzero. 0.00003 is neither 1 nor 0.

I am not assuming the problem will inevitably be fixed. I think 0.5 is a reasonable p for most.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago)

You do not know that it is nonzero; that's just an assumption you made up.

Also, Laplace's law of succession necessarily implies that, over an infinite number of attempts, as long as there is any possibility of success, the probability that at least one attempt succeeds approaches 1.

[–] [email protected] 0 points 5 months ago

I can't stand seeing all these mainstream news stories about it all the time either, with tech-ignorant news anchors talking about it. It just keeps pumping up the bubble. I worry that rather than pop, it will just become a new buzzword that is here to stay. (AI was always a thing, but what we have now, these LLMs, is not really what we traditionally referred to as AI in sci-fi and traditional media.)

[–] [email protected] 0 points 5 months ago (1 children)

They just failed to mention that their "AI" stands for "All Indians".

[–] [email protected] 1 points 5 months ago

Damn, this is good.