Now if you'd all just empty your wallets into the AI bonfire. Thaaaat's right.
When AI decides to destroy the human virus, it now knows exactly how to create a bug capable of it. Probably more likely than pumping out a bunch of humanoid robots with guns, just create a bug, spread it around, and mess with our ability to communicate in time to stop the spread. BAM. Easy-peasy, humans are now down to a manageable 1 billion or so individuals.
Uh no, the AI didn't crack any problem.
The AI produced the same hypothesis that a scientist produced, one that the scientist considered his own original awesome idea.
But the truth is that science is less about producing awesome ideas and more about proving them. And AI did nothing in this regard, except to remind scientists that their original awesome ideas are often not so original.
There's even a term scientists use when another scientist has the same idea but actually managed to do the work of proving it: "scooped". It's a very common occurrence. It didn't happen here.
If this is machine learning and neural networks, I can believe it's a good thing, maybe even meaningful for the potential of so-called artificial intelligence.
If this is an LLM that's alleged to have popped this "virus tail" theory out of... what exactly...? I'm not buying it.
"I wrote an email to Google to say, 'you have access to my computer, is that right?'", he added.
lmao right, because the support person they reached, if indeed they even spoke to a person at all, would know and divulge the sources they train on. They may think all their research is private, but they're making use of these tech giants' services. These tech giants have blatantly shown that they're OK with piracy and copyright infringement to further their goals, so why would spying on research institutions be any different?
If you want to give it a run for its money, give it a novel problem that isn't solved, and see what it comes up with.
"I wrote an email to ~~Google~~ Gryzzl to say, 'you have access to my computer, is that right?'", he added.
(...) If you want to give it a run for its money, give it a novel problem that isn’t solved, and see what it comes up with.
You mean like researchers have done here?
https://bturtel.substack.com/p/human-all-too-human
For AI to learn something fundamentally new - something it cannot be taught by humans - it requires exploration and ground-truth feedback.
https://www.lightningrod.ai/
We're enabling self-play that learns directly from real world feedback.
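To make that concrete, here's a toy sketch of the "exploration plus ground-truth feedback" loop those quotes describe - my own illustration in Python, not code from either link, with a made-up scoring function standing in for a real experiment:

```python
# Toy sketch (my own illustration, not from either linked post): a learner
# explores candidate answers and keeps whatever the *world* scores best,
# without ever being told the answer by a human.
import random

SECRET_OPTIMUM = 0.73  # stands in for a ground truth no human label encodes

def ground_truth_score(candidate: float) -> float:
    """Stand-in for a real experiment: higher means closer to the unknown truth."""
    return -abs(candidate - SECRET_OPTIMUM)

best = random.random()            # start from a random guess
best_score = ground_truth_score(best)

for _ in range(1000):
    # Explore: perturb the current best guess a little.
    candidate = min(1.0, max(0.0, best + random.gauss(0.0, 0.1)))
    score = ground_truth_score(candidate)  # feedback from "reality", not a teacher
    if score > best_score:
        best, best_score = candidate, score

print(f"Converged on {best:.2f} without ever being taught the answer.")
```

Swap the scoring function for a real experiment (or a verifier) and you have the self-play idea in miniature.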
Large language model companies weren't even aware their data (which is so large they themselves have no idea what's in it) contained other languages.
So the models suddenly knew how to speak other languages. The above story feels like those "Large language models are super intelligent! They've taught themselves French!" stories - no, mass surveillance and corporations being above the law taught them everything they know.
Great! We have a tested solution and scaled up the drug to treat the issue. And in 2 days! Great!
Oh, that is not what we have?
It's so easy to ask a question in such a way that the statistically most likely answer is the one at the front of your mind.
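You can actually watch that effect in the logits. Here's a minimal sketch - assuming the Hugging Face transformers package and plain GPT-2, nothing to do with whatever Google uses - comparing the most likely next tokens for a neutral prompt versus one that already smuggles in the expected answer:

```python
# Minimal sketch (assumes `pip install torch transformers` and plain GPT-2):
# the framing of a prompt shifts which continuations the model rates as
# most likely - the "leading question" effect described above.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt: str, k: int = 5):
    """Return the k most probable next tokens for a prompt."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # scores for the next token
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode(int(i)), round(p.item(), 4))
            for i, p in zip(top.indices, top.values)]

# Neutral framing vs. one that already contains the answer you expect.
print(top_next_tokens("The resistance mechanism is"))
print(top_next_tokens("Since the bacteria borrow virus tails, the resistance mechanism is"))
```

Plant the answer in the prompt and the distribution tilts toward it, which is the whole point.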
Google doesn't need access to all his unpublished research if he's ever mentioned anything about it online or in an email that went to a gmail address.
Further, University of Cambridge runs on Microsoft Exchange and University of Glasgow uses Office365.
Not to put too fine a point on it, but they don't need access to your computer, and this feels a little bit overhyped.
Also, its coming to the same conclusion means about as much as its coming to the wrong conclusion, does it not? Since there is no actual "thinking" in these devices, how do we know the "right" conclusion wasn't merely a hallucination?
@SnotFlickerman @cm0002 unless he's done the research himself, he won't know whether the results are viable - as he says, they've got to test the "new" one. So at best it gives you a bit of a head start on new avenues; at worst it completely wastes your time down a new rabbit hole.
It's not word completion; it's so far from it:
(...) He told the BBC of his shock when he found what it had done, given his research was not published so could not have been found by the AI system in the public domain. (...)
(...) "It's not just that the top hypothesis they provide was the right one," he said. "It's that they provide another four, and all of them made sense. "And for one of them, we never thought about it, and we're now working on that." (...)
Assuming OpenAI etc. only use data from the public domain is stupid (and contrary to most news sources on the matter). He has literally no idea what the AI has trained on (not even the developers know, because there's just too much of it to be reviewed by humans). They've undoubtedly bought countless amounts of data that isn't readily searchable by public engines.
He sounds very ill-informed on the matter of data collection and probably just had his info/data on a cloud service somewhere whose text was part of the trillions of terabytes LLMs have accessed and trained on.
It seems you did not read my comment in its entirety.