Fuck AI

2827 readers
17 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI has been, I'm not that frightened by it anymore. It's awful that it hallucinates and spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll keep posting articles about AI hype, because they're quite funny, and they give me a sense of ease knowing that, even though blatant lies are easy to tell, it's far harder to fake actual evidence.

I also want to factor in people who think that there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, whom I'll call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or about the diminishing returns of deep learning. Maybe one of you will even become a mod!

Boosters, or people who heavily use AI and see it as a force for good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists on Reddit and Twitter, and they constantly cheer on artists losing their jobs. They go against the very purpose of this community. If I see a comment on here saying that AI is "making things good", or cheering on anyone being put out of a job, and the commenter does not retract their statement, said commenter will be permanently banned. FA&FO.


Alright, I just want to clarify that I've never modded a Lemmy community before. I just live by the mantra of "if nobody's doing the right thing, do it yourself". I was also motivated by u/spez's decision to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to push back on AI development, and if you have evidence of AIBros being cruel and remorseless, make sure to save it for the people "on the fence". Remember, we don't know that AI is unstoppable. AI takes enormous amounts of energy and circuitry to run. There may very well be an end to this cruelty, and it's up to us to begin that end.


Source (Via Xcancel)

Tweet:

(The original has since been deleted)


Source (Via Xcancel)

Here's the artist's bluesky

The AI slop for comparison


The AI-powered system uses 279 variables to score families for risk, based on cases from 2013 and 2014 that ended in a child being severely harmed. Some factors might be expected, like past involvement with ACS. Other factors used by the algorithm are largely out of a caretaker’s control, and align closely with socioeconomic status. The neighborhood that a family lives in contributes to their score, and so does the mother’s age. The algorithm also factors in how many siblings a child under investigation has, as well as their ages. A caretaker’s physical and mental health contributes to the score, too.

While the tool is new, both the data it's built on and the factors it considers raise the concern that artificial intelligence will reinforce, or even amplify, how racial discrimination taints child protection investigations in New York City and beyond, civil rights groups and advocates for families argue.

Joyce McMillan, executive director of Just Making A Change for Families and a prominent critic of ACS, said the algorithm “generalizes people.”

“My neighborhood alone makes me more likely to be abusive or neglectful?” she said. “That’s because we look at poverty as neglect and the neighborhoods they identify have very low resources.”
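
McMillan's complaint above is, in essence, about how a feature-weighted score works: a model that assigns weight to demographic proxies will score a family as "risky" on demographics alone. A toy sketch of that mechanism, with all feature names and weights invented for illustration (ACS's actual 279-variable model is not public):

```python
# Hypothetical feature-weighted risk score. Feature names and weights are
# invented; they only mirror the KINDS of inputs the article describes.
# Note several inputs (neighborhood, mother's age, number of siblings)
# are outside a caretaker's control.
WEIGHTS = {
    "prior_acs_involvement": 2.0,
    "neighborhood_poverty_rate": 1.5,
    "mother_age_under_25": 1.0,
    "num_siblings": 0.5,
    "caretaker_health_flags": 1.2,
}

def risk_score(case: dict) -> float:
    """Sum each feature multiplied by its weight; missing features count as 0."""
    return sum(WEIGHTS[k] * case.get(k, 0) for k in WEIGHTS)

# A family with no ACS history still scores nonzero, purely on demographics.
family = {
    "prior_acs_involvement": 0,
    "neighborhood_poverty_rate": 0.4,
    "mother_age_under_25": 1,
    "num_siblings": 3,
}
print(risk_score(family))
```

The sketch makes the critique concrete: the score is just a weighted sum, so whatever correlates with poverty in the training data becomes "risk" in the output.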


I did a very simple search for this year's public holidays in my German state, and whoo boy did it get a lot wrong! On the left is the AI slop; on the right are the correct answers. Obviously, I've since disabled it completely to save the screen real estate.


cross-posted from: https://lemm.ee/post/64452424

This is the first known time an American police department has relied on live facial recognition technology cameras at scale, and is a radical and dangerous escalation of the power to surveil people as we go about our daily lives.

According to The Washington Post, since 2023 the city has relied on face recognition-enabled surveillance cameras through the “Project NOLA” private camera network. These cameras scan every face that passes by and send real-time alerts directly to officers’ phones when they detect a purported match to someone on a secretive, privately maintained watchlist.
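
The "purported match" wording matters: systems like this typically reduce each detected face to an embedding vector and fire an alert when its similarity to a stored watchlist embedding crosses a tuned threshold. A minimal sketch of that alert logic (the embeddings, IDs, and threshold here are all invented; Project NOLA's actual pipeline is proprietary):

```python
# Toy watchlist match: every alert is a probabilistic similarity guess,
# not a confirmed identification. Lower the threshold and false alerts rise.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

WATCHLIST = {
    "person_123": [0.9, 0.1, 0.3],  # invented reference embedding
}
THRESHOLD = 0.95  # tuning knob that trades missed matches for false alerts

def check_face(embedding):
    """Return IDs of watchlist entries whose similarity clears the threshold."""
    return [pid for pid, ref in WATCHLIST.items()
            if cosine_similarity(embedding, ref) >= THRESHOLD]
```

The civil-liberties objection falls straight out of the sketch: whoever sets `WATCHLIST` and `THRESHOLD` decides who gets stopped, and nothing in the alert itself distinguishes a true match from a near miss.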


cross-posted from: https://lemm.ee/post/64450059

In 2012, Palantir quietly embedded itself into the daily operations of the New Orleans Police Department. There were no public announcements. No contracts made available to the city council. Instead, the surveillance company partnered with a local nonprofit to sidestep oversight, gaining access to years of arrest records, licenses, addresses, and phone numbers all to build a shadowy predictive policing program.

Palantir’s software mapped webs of human relationships, assigned residents algorithmic “risk scores,” and helped police generate “target lists” all without public knowledge. “We very much like to not be publicly known,” a Palantir engineer wrote in an internal email later obtained by The Verge.

After years spent quietly powering surveillance systems for police departments and federal agencies, the company has rebranded itself as a frontier AI firm, selling machine learning platforms designed for military dominance and geopolitical control.

"AI is not a toy. It is a weapon,” said CEO Alex Karp. “It will be used to kill people.”


I've noticed that a lot of posts on here mostly criticize the collection of people's data to train AI, but I don't think AI in itself is bad, because AI, like software in general, can be implemented in many ways. Software can either control the user, or the user can control the software; and likewise, some software serves harmful purposes while other software serves good ones. Saying "Fuck Software" just because some software controls the user feels pretty unfair. I know AI might be used to replace jobs, but that has happened many times before, and it has mostly been a positive move forward, as with the internet. Now, I'm not trying to start a big-ass debate that AI = Good, because as I said, I believe AI is only as good as its uses. All I want to know from this post is why you hate AI as a general topic. I'm currently writing a research paper on the subject, so I'd appreciate some opinions.


Source (Bluesky)

Artist (Bluesky)


Source (Bluesky)

Shark Image (Bluesky)


I just went down an interesting rabbit hole. I'm a huge car guy. It started with some news videos about Koenigsegg that turned out to be long, AI-narrated videos of generic footage with very short interview clips of Christian von Koenigsegg himself interspersed in between. Tbh I only watched one; the rest I clicked on just to confirm my suspicion that they're all BS. I discovered the following YouTube disclaimer on some of the videos:

How this content was made: Altered or synthetic content. Sound or visuals were significantly edited or digitally generated.

Now those videos were clickbait and misleading, but they're far from the worst. They get some hundreds of thousands of views each, and get shat out by these channels at a rapid pace.

Things get more interesting with fake car reviews and such. Years ago you'd see videos that were looking at photos of concept cars and narrating them with a bunch of bullshit.

Now they have AI generate the concept-car imagery. The supposed new car looks completely different in every "photo", down to the entire body shape. An AI narrator talks about conservative styling on the Mercedes S-Class, saying "around the back you see slimmer taillights", while the image on screen is the front quarter of an extremely low grand-tourer type car that couldn't clear a speed bump. The tire exists only at the bottom of the wheel (it just disappears as you go higher up), there's no possibility of suspension travel because there's no distance between the wheel and the top of the wheel well, the wheel itself isn't entirely round, and the brakes look like a cartoon. And while there's a visible separation between the front bumper and the front quarter panel, there isn't one between the quarter panel and the hood, or the hood and the front bumper, so really it's all one piece that can never be removed. I'm surprised the door is separate from the front quarter panel at this point. This video is here. The S-Class is a luxury sedan, also available as a coupe or convertible, but never has it been whatever... this is.

Buick is also bringing back the Series 40 after a century. Looks almost the same too! Featuring a rear license plate that says "Series 80" in the lovely font called AI slop.

These videos get shat out at an even more rapid pace. Most have relatively few views, some have quite a lot, and there are people asking if they can buy these cars for real. They come up on Google when you do highly specific searches to see whether some manufacturer has a car in some category, and so on.

So it's similar to the fake movie trailers that have already been talked about, but this is distracting people looking for actual products. It might just be cars in my examples, but I'm sure the same is happening in all kinds of video genres and different product types. I know a lot of you already know about it too, but I was shocked to find out how prevalent it really is.

Also, YouTube has the disclaimer that video uploaders can set, and that its own AI tools add automatically, so there's already a database field they could check. But they have added no AI content filtering.
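
The point about the existing database field is that, once a synthetic-content disclosure is stored per video, filtering is trivial. A sketch under that assumption (the field name `ai_disclosure` is invented; YouTube's internal schema isn't public):

```python
# If every video row carries a synthetic-content disclosure flag,
# hiding AI-disclosed videos is one list comprehension.
videos = [
    {"title": "Koenigsegg factory tour", "ai_disclosure": True},   # hypothetical rows
    {"title": "Real S-Class review", "ai_disclosure": False},
]

def without_ai_content(items):
    """Keep only videos whose uploader (or YouTube's tools) did not flag AI content."""
    return [v for v in items if not v["ai_disclosure"]]

print([v["title"] for v in without_ai_content(videos)])
# -> ['Real S-Class review']
```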


Basically, at the end of primary school in Poland, 8th graders take a standardized nationwide exam, and this year they used AI to generate the images for the illustrations, including the pictures for certain tasks. I was so demotivated when I saw this, tbh.


New Orleans police have reportedly spent years scanning live feeds of city streets and secretly using facial recognition to identify suspects in real time—in seeming defiance of a city ordinance designed to prevent false arrests and protect citizens' civil rights.

A Washington Post investigation uncovered the dodgy practice, which relied on a private network of more than 200 cameras to automatically ping cops' phones when a possible match for a suspect was detected. Court records and public data suggest that these cameras "played a role in dozens of arrests," the Post found, but most uses were never disclosed in police reports.

That seems like a problem, the Post reported, since a 2022 city council ordinance required much more oversight for the tech. Rather than instantly detaining supposed suspects the second they pop up on live feeds, cops were only supposed to use the tech to find "specific suspects in their investigations of violent crimes," the Post reported. And in those limited cases, the cops were supposed to send images to a "fusion center," where at least two examiners "trained in identifying faces" using AI software had to agree on alleged matches before cops approached suspects.

Instead, the Post found that "none" of the arrests "were included in the department’s mandatory reports to the city council." And at least four people arrested were charged with nonviolent crimes. Some cops apparently found the city council process too sluggish and chose to ignore it to get the most out of their access to the tech, the Post found.

Now, New Orleans police have paused the program amid backlash over what Nathan Freed Wessler, the deputy director of the American Civil Liberties Union (ACLU) Speech, Privacy, and Technology Project, suggested might be the sketchiest use of facial recognition yet in the US. He told the Post this is "the first known widespread effort by police in a major US city to use AI to identify people in live camera feeds for the purpose of making immediate arrests."


cross-posted from: https://lemmy.world/post/29860206

Is the great pop upon us?


Slashdot Summary

IBM laid off "a couple hundred" HR workers and replaced them with AI agents. "It's becoming a huge thing," says Mike Peditto, a Chicago-area consultant with 15 years of experience advising companies on hiring practices. He tells Slate "I do think we're heading to where this will be pretty commonplace."

Although A.I. job interviews have been happening since at least 2023, the trend has received a surge of attention in recent weeks thanks to several viral TikTok videos in which users share videos of their A.I. bots glitching. Although some of the videos were fakes posted by a creator whose bio warns that his content is "all satire," some are authentic — like that of Kendiana Colin, a 20-year-old student at Ohio State University who had to interact with an A.I. bot after she applied for a summer job at a stretching studio outside Columbus. In a clip she posted online earlier this month, Colin can be seen conducting a video interview with a smiling white brunette named Alex, who can't seem to stop saying the phrase "vertical-bar Pilates" in an endless loop...

Representatives at Apriora, the startup company founded in 2023 whose software Colin was forced to engage with, did not respond to a request for comment. But founder Aaron Wang told Forbes last year that the software allowed companies to screen more talent for less money... (Apriora's website claims that the technology can help companies "hire 87 percent faster" and "interview 93 percent cheaper," but it's not clear where those stats come from or what they actually mean.)

Colin (first interviewed by 404 Media) calls the experience dehumanizing, wondering why they were told to dress professionally, since "They had me going the extra mile just to talk to a robot." And after the interview, the robot and the company ghosted them with no further contact. "It was very disrespectful and a waste of time."

Houston resident Leo Humphries also "donned a suit and tie in anticipation for an interview" in which the virtual recruiter immediately got stuck repeating the same phrase.

Although Humphries tried in vain to alert the bot that it was broken, the interview ended only when the A.I. program thanked him for "answering the questions" and offering "great information" — despite his not being able to provide a single response. In a subsequent video, Humphries said that within an hour he had received an email, addressed to someone else, that thanked him for sharing his "wonderful energy and personality" but let him know that the company would be moving forward with other candidates.
