The problem is that these LLMs are built with the wrong driving motivator. They're driven to find one right way, whereas in reality there is rarely a single right way, and computers don't need to converge on one the way humans tend to. The LLM shouldn't be driven to be "right" in its learning model. It should be trained on known-good data only as a base, and then given the other data as context rather than letting that data modify the underlying system. This is more like how biological creatures work: you teach a child to be "good" or "evil" and to know the basic things needed to survive and serve their purpose, and then the stuff they learn in adulthood helps them apply those base concepts to the world.
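What this comment gestures at looks a lot like retrieval-augmented generation: keep the weights trained on curated data frozen, and let untrusted material in only as inference-time context. Here's a minimal sketch of that separation (all the names below, like `base_model` and `retrieve`, are hypothetical placeholders, not any particular library's API):

```python
def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Toy retriever: rank corpus passages by word overlap with the query."""
    def overlap(passage: str) -> int:
        return len(set(query.lower().split()) & set(passage.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def answer(query: str, untrusted_corpus: list[str], base_model) -> str:
    # Untrusted text enters only as prompt context; base_model's weights,
    # trained on the curated "known good" base, are never modified here.
    context = "\n".join(retrieve(query, untrusted_corpus))
    prompt = (
        "Context (unverified, for reference only):\n"
        f"{context}\n\nQuestion: {query}"
    )
    return base_model.generate(prompt)
```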
At the same time, they don't really behave that differently from some humans who have been sucked down the path of various conspiracy theories. For a lot of those, the first "lesson" is 'everyone else is wrong and has been deceived, or is trying to trick you; trust nobody but us'. From there, some people go down the rabbit hole to become "Sovereign Citizens" or storm Congress.
I hold that this is true of all neural-nets, organic as well as silicon:
Once a person has sided with treachery, rooting it out from one's unconscious-mind is enduringly difficult, if not intractable.
I don't know how many decades it takes to eradicate the roots of it, if it can be done at all:
the unconscious-mind mechanism, that is, the Kahneman System-1 imprint (from "Thinking, Fast and Slow"), is going to still be there, even if overlaid with another imprint (since mind is holographic/pattern-imprints in function).
Worse, it is the motivation that needs to change, and motivation is of ego, which is of identity, so many who "reform" do so only superficially.
I'm not saying this as some goody-two-shoes; I'm saying this as a person who was raised by narcissists, and who therefore embodied much narcissism and class-prejudice (dad was a doctor: you can't get more upper-middle-class status-prejudiced than doctor-culture)...
...who finally cracked the root kernel of the class-prejudice in my unconscious-mind's identity-crystal at the end of a 25-day hard-line fast, out in the bush.
It took that to fracture the identity-crystal's prejudice.
It's been a decade since then, & I'm still fighting to eradicate its treachery from my nature.
Neural-nets are tough to purge, or clean up & make upright.
MUCH easier to keep a neural-net pristine through all of its formation than to try (endlessly failing) to clean it up after it's become enemy-intent in "family" clothing.
_ /\ _
Can you recommend further reading?
I wonder when the first one turns into a suicide bomber.
'went rogue' is a bit of an alarmist way to say 'typed scary text'
I'd love to see an AI that could legitimately scare me.
Programming is "just text". They doesn't mean that programming isn't incredibly powerful or that it can't be used to do dangerous things. Maybe the missing piece that you're unaware of is that LLMs are already very effective at programming and usage APIs. You don't even need to have an LLM that's good at programming to cause damage, it just needs access to APIs that can cause damage.
- It controls a military drone.
- It controls surgical equipment.
- It’s filtering your CV before any human sees it.
- It controls a robot taking care of your children.
- It’s involved in law enforcement or legal judgments.
- It’s involved in government policy setting.
Well, why don't we just make AI watch the Terminator movies and read Harlan Ellison till it learns not to do that?
I mean it worked for W.O.P.R.
It watched Terminator and now it's trying to DM Arnold Schwarzenegger on Instagram
Hot take: it would rather watch The Terminator and decide that one robot wasn't enough. Send 'em all.
Just use your imagination. An AI is programmed for battle and is ordered to hold fire. It shoots instead.
I thought the point of AI is to not specifically program it for anything, hence you can ask the chatbot that's supposed to help make a sale to do your homework problems.
AI is more a specific class of software than a specific approach. You can have specialized models that are very focused in their dataset and use cases, and you can have general models that are less focused but can be applied more widely (though with potentially less reliable results).
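As an illustration of that specialized-vs-general split, here is a sketch using Hugging Face's `transformers` pipelines (assuming the library is installed; the model choices are just common examples, not a recommendation):

```python
from transformers import pipeline

# Specialized: a model fine-tuned for exactly one task; narrow but predictable.
sentiment = pipeline("sentiment-analysis")
print(sentiment("This keyboard is fantastic."))

# General: a plain text generator you can point at almost any prompt,
# more flexible but with less predictable output.
general = pipeline("text-generation", model="gpt2")
print(general("The best way to fix a flaky test is", max_new_tokens=30))
```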
Couldn’t a human make the same decision?
Imagine if there were a specific series of words that could turn any human into a rogue agent, en masse. Some guy discovers that a special input causes Killbot 2000 to go haywire, and they broadcast it to an entire army that all runs the same underlying program.
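The monoculture risk in that scenario is easy to sketch: if every unit runs the same weights, one adversarial input that flips one unit flips the whole fleet at once. Everything below, including `TRIGGER` and the stand-in policy function, is invented for illustration:

```python
TRIGGER = "special input"  # hypothetical adversarial string

def shared_policy(observation: str) -> str:
    """Stand-in for the one model every unit runs."""
    return "go haywire" if TRIGGER in observation else "hold fire"

fleet = [shared_policy] * 1000               # 1000 units, one underlying program
broadcast = f"routine update {TRIGGER}"
print({unit(broadcast) for unit in fleet})   # -> {'go haywire'}: all at once
```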
Yes, but the human would have emotions that could be manipulated.
I hope WOPR and Skynet would be taken as warnings not to do that.
LLM trained on inflammatory data produces inflammatory results, shocking.
I know we don't like them here but the word reddit is not banned (yet)
What? What does my comment have anything to do with Reddit?
Ha ha, the plot of Horizon coming true in real life.
AI goes rogue. No one can flip the kill switch because the AI has disconnected it. The AI decides to remove humanity from the planet.
Someone needs to start working on a Zero Dawn program and terraforming plans pretty quick.