this post was submitted on 24 Nov 2024
148 points (98.7% liked)

World News


Summary

AI is increasingly exploited by criminals for fraud, cyberattacks, and child abuse, warns Alex Murray, the UK's national police lead for AI.

Deepfake scams, such as impersonating executives for financial heists, and generative AI used to create child abuse images or “nudify” photos for sextortion are rising concerns.

Terrorists may exploit AI for propaganda and radicalization via chatbots.

Murray urged urgent action as AI becomes more accessible, realistic, and widely used, predicting significant crime growth by 2029.

[–] [email protected] 31 points 1 day ago* (last edited 1 day ago)

I'm gonna be really honest: I think a big part of what feels violating about people seeing your nudes in the first place is being sexualized without your consent, and losing agency over who you allow yourself to be sexualized by.

That's no different with deepfakes, and I don't think that would actually make the person feel much better. Like, maybe they can save face by saying they never actually took nudes of themselves, but the thing that I think does the most emotional damage isn't actually changed or affected by saying "it's not actually me, those are deepfakes" :(