this post was submitted on 03 Mar 2024
83 points (96.6% liked)

Technology

59055 readers
3173 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

Researchers create AI worms that can spread from one system to another | Worms could potentially steal data and deploy malware.::Worms could potentially steal data and deploy malware.

you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 4 points 8 months ago

This is the best summary I could come up with:


Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products.

The research, which was undertaken in test environments and not against a publicly available email assistant, comes as large language models (LLMs) are increasingly becoming multimodal, being able to generate images and video as well as text.

While generative AI worms haven’t been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.

To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA.

Despite this, there are ways people creating generative AI systems can defend against potential worms, including using traditional security approaches.

There should be a boundary there.” For Google and OpenAI, Swanda says that if a prompt is being repeated within its systems thousands of times, that will create a lot of “noise” and may be easy to detect.


The original article contains 1,239 words, the summary contains 186 words. Saved 85%. I'm a bot and I'm open source!