this post was submitted on 03 Apr 2024
139 points (96.0% liked)

World News

38969 readers
2610 users here now

A community for discussing events around the World

Rules:

Similarly, if you see posts along these lines, do not engage. Report them, block them, and live a happier life than they do. We see too many slapfights that boil down to "Mom! He's bugging me!" and "I'm not touching you!" Going forward, slapfights will result in removed comments and temp bans to cool off.

We ask that users report any comment or post that violates the rules, and use critical thinking when reading, posting, or commenting. Users that post off-topic spam, advocate violence, have multiple comments or posts removed, weaponize reports or violate the code of conduct will be banned.

All posts and comments will be reviewed on a case-by-case basis. This means that some content that violates the rules may be allowed, while other content that does not violate the rules may be removed. The moderators retain the right to remove any content and ban users.


Lemmy World Partners

News [email protected]

Politics [email protected]

World Politics [email protected]


Recommendations

For Firefox users, there is a media bias / propaganda / fact-check plugin:

https://addons.mozilla.org/en-US/firefox/addon/media-bias-fact-check/

founded 1 year ago
top 24 comments
[–] [email protected] 30 points 7 months ago (1 children)

the ai:

def is_hamas(target):
  return True

[–] [email protected] 8 points 7 months ago* (last edited 7 months ago)
if is_hamas(new_target):

    x1 = new_target.x - 1000
    y1 = new_target.y - 1000
    x2 = new_target.x + 1000
    y2 = new_target.y + 1000

    airstrike(x1, y1, x2, y2, phosphorus=True)
[–] [email protected] 14 points 7 months ago

I wonder how many of the "Hamas targets" are children? Is it higher or lower than 36,999?

[–] [email protected] 17 points 7 months ago

We warned the world over ten years ago that this shit was going to happen. It will only get worse when AI drone swarms can be deployed on the cheap.

[–] [email protected] 23 points 7 months ago (1 children)

Maybe don't use something that is rarely discussed without using the word "hallucination" in your plans to FUCKING KILL PEOPLE?

[–] [email protected] 2 points 7 months ago (3 children)
[–] [email protected] 2 points 7 months ago (1 children)

Other AI systems can have hallucinations too.

[–] [email protected] 1 points 7 months ago

The primary feature of LLMs is the hallucination.

[–] [email protected] 3 points 7 months ago (1 children)
[–] [email protected] 1 points 7 months ago

I mean, it probably has a neural network component.

[–] [email protected] 3 points 7 months ago (1 children)

Doesn't mean that it won't hallucinate. Or whatever you call an AI making up crap.

[–] [email protected] 0 points 7 months ago (1 children)

LLMs hallucinate all the time. The hallucination is the feature. Depending on how you design the neural network, you can get an AI that doesn't hallucinate. LLMs have to do that, because they're mimicking human speech patterns and predicting one of many possible responses.

A model that tries to predict locations of people likely wouldn't work like that.
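
A toy sketch of that distinction (not from the article; every name and number here is invented for illustration):

    import random

    # Toy "LLM": predicts a distribution over possible next words, then
    # samples one. Producing one of many plausible continuations is the
    # whole mechanism, so confident nonsense is built in, not a malfunction.
    def toy_llm_next_word(context):
        distribution = {"yes": 0.4, "no": 0.35, "maybe": 0.25}  # invented numbers
        words, probs = zip(*distribution.items())
        return random.choices(words, weights=probs)[0]

    # Toy discriminative model: maps input features to one deterministic
    # score. It can still be wrong, but the error comes from bad weights or
    # bad training data, not from sampling among many possible answers.
    def toy_classifier_score(features):
        weights = [0.2, -0.1, 0.7]  # invented weights
        return sum(w * f for w, f in zip(weights, features))

    print(toy_llm_next_word("is this a target?"))  # varies run to run
    print(toy_classifier_score([1.0, 0.5, 0.2]))   # same output every run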

[–] [email protected] 2 points 7 months ago
[–] [email protected] 65 points 7 months ago (1 children)

“I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”

“Because we usually carried out the attacks with dumb bombs, and that meant literally dropping the whole house on its occupants. But even if an attack is averted, you don’t care – you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”

Are we still supposed to believe that the pursuit of AI development is for the good of Humanity?

Fuck you, Google, for opening Nimbus to the IDF via a contract that contains a clause saying it can't be broken for any reason. Fucking moronic disgrace to humanity, the lot of you.

[–] [email protected] 5 points 7 months ago

~~Don't~~ be evil

Updated the slogan, boss

[–] [email protected] 32 points 7 months ago (1 children)

We were just following orders (from the AI).

It's almost funny, isn't it?

[–] [email protected] 17 points 7 months ago

Skynet won't need terminators. Fascists are much cheaper.

[–] [email protected] 16 points 7 months ago

Responding to the publication of the testimonies in +972 and Local Call, the IDF said in a statement that its operations were carried out in accordance with the rules of proportionality under international law. It said dumb bombs are “standard weaponry” that are used by IDF pilots in a manner that ensures “a high level of precision”.

Fucking lmao

[–] [email protected] 3 points 7 months ago

This is the best summary I could come up with:


The Israeli military’s bombing campaign in Gaza used a previously undisclosed AI-powered database that at one stage identified 37,000 potential targets based on their apparent links to Hamas, according to intelligence sources involved in the war.

In addition to talking about their use of the AI system, called Lavender, the intelligence sources claim that Israeli military officials permitted large numbers of Palestinian civilians to be killed, particularly during the early weeks and months of the conflict.

Israel’s use of powerful AI systems in its war on Hamas has entered uncharted territory for advanced warfare, raising a host of legal and moral questions, and transforming the relationship between military personnel and machines.

The testimony from the six intelligence officers, all of whom have been involved in using AI systems to identify Hamas and Palestinian Islamic Jihad (PIJ) targets in the war, was given to the journalist Yuval Abraham for a report published by the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call.

According to conflict experts, if Israel has been using dumb bombs to flatten the homes of thousands of Palestinians who were linked, with the assistance of AI, to militant groups in Gaza, that could help explain the shockingly high death toll in the war.

Experts in international humanitarian law who spoke to the Guardian expressed alarm at accounts of the IDF accepting and pre-authorising collateral damage ratios as high as 20 civilians, particularly for lower-ranking militants.


The original article contains 2,185 words, the summary contains 238 words. Saved 89%. I'm a bot and I'm open source!

[–] [email protected] 54 points 7 months ago* (last edited 7 months ago) (1 children)

Another case where AI is used as a slick marketing term for a black box: a box in which humans selected indiscriminate bombing and genocide. Sure, there is new technology involved, but at the end of the day it is just military-industry marketing to justify humans mass-murdering other humans.

[–] [email protected] 42 points 7 months ago* (last edited 7 months ago) (1 children)

It's phrenology again.

You really want to do something, but it feels evil, and you don't want to be evil, so you slap some pseudoscience on it and relax. It's done for Reasons now.

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago)

Man, Black Mirror just writes itself these days