this post was submitted on 06 Oct 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week’s thread

(Semi-obligatory thanks to @dgerard for starting this)

(page 2) 50 comments
[–] [email protected] 0 points 1 month ago (6 children)

check out this jumpscare suckerpunch mashup

some of the details are so on point I’m almost left pointing and mouthing “art”

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago) (1 children)

New piece from Brian Merchant: Yes, the striking dockworkers were Luddites. And they won.

Pulling out a specific paragraph here (bolding mine):

> I was glad to see some in the press recognizing this, which shows something of a sea change is underfoot; outlets like the Washington Post, CNN, and even Inc. Magazine all published pieces sympathizing with the longshoremen besieged by automation—and advised workers worried about AI to pay attention. “Dockworkers are waging a battle against automation,” the CNN headline noted, “The rest of us may want to take notes.” That feeling that many more jobs might be vulnerable to automation by AI is perhaps opening up new pathways to solidarity, new alliances.

To add my thoughts: those feelings likely aren't just that many more jobs are at risk than people thought, but that AI is primarily, if not exclusively, threatening the jobs people want to do (art, poetry, that sorta shit) while leaving the dangerous/boring jobs mostly untouched - effectively the exact opposite of the future the general public wants AI to bring them.

[–] [email protected] 0 points 1 month ago

@BlueMonday1984 Yeah, I don't get it. If you want to be a "hacktivist", why not go after one of the MILLIONS of organizations making the planet a worse place?

[–] [email protected] 0 points 1 month ago (1 children)

Not a sneer, but I saw an article that was basically an extremely goddamn long list of forum recommendations and it gave me a warm and fuzzy feeling inside.

[–] [email protected] 0 points 1 month ago (1 children)

That's awesome. Lemmy is great, but old-school forums are just something else.

For a burst of nostalgia for at least some of you nerds (lovingly), let me add forums.spacebattles.com to the list

[–] [email protected] 0 points 1 month ago

If you mention SpaceBattles we also need to add Sufficient Velocity for completeness’s sake.

There’s another one that focuses mostly on erotic fiction but since that’s not really my bag I’ve forgotten what it’s called. And I think it’s not as big as SB and SV anyway since that user base is mostly on AO3 these days.

[–] [email protected] 0 points 1 month ago (4 children)
[–] [email protected] 0 points 1 month ago

@BlueMonday1984 whoever this dipshit is needs to fucking stop

[–] [email protected] 0 points 1 month ago (1 children)

> Earlier today, the Internet Archive suffered a DDoS attack, which has now been claimed by the BlackMeta hacktivist group, who says they will be conducting additional attacks.

Hacktivist group? The fuck can you claim to be an activist for if your target is the Internet Archive?

[–] [email protected] 0 points 1 month ago (5 children)

Training my militia of revolutionary freedom fighters to attack homeless shelters, soup kitchens, nature preserves, libraries, and children's playgrounds.

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago) (5 children)

Don't know how much this fits the community, as you use a lot of terms I'm not inherently familiar with (is there a "welcome guide" of some sort somewhere I missed?).

Anyway, Wikipedia moderators are now realizing that LLMs are causing problems for them, but they are very careful to not smack the beehive:

> The purpose of this project is not to restrict or ban the use of AI in articles, but to verify that its output is acceptable and constructive, and to fix or remove it otherwise.

I just... don't have words for how bad this is going to go. How much work this will inevitably be. At least we'll get a real world example of just how many guardrails are actually needed to make LLM text "work" for this sort of use case, where neutrality, truth, and cited sources are important (at least on paper).

I hope some people watch this closely, I'm sure there's going to be some gold in this mess.

[–] [email protected] 0 points 1 month ago

> you use a lot of terms I’m not inherently familiar with (is there a “welcome guide” of some sort somewhere I missed?).

we’re pretty receptive to requests for explanations of terms here, just fyi! I imagine if it begins to overwhelm commenting, a guide will be created. Unfortunately there is something of an arms race between industry buzzword generation and good sense, and we are on the side of good sense.

[–] [email protected] 0 points 1 month ago

> Don't know how much this fits the community, as you use a lot of terms I'm not inherently familiar with (is there a "welcome guide" of some sort somewhere I missed?)

first impression: your post is entirely on topic, welcome to the stubsack

techtakes is a sister sub to sneerclub (also on this instance, previously on reddit) and that one has a bit of an explanation. generally any (classy) sneerful critique of bullshit and wankery goes, though we don't make space for chuds/nazis/debatelords/etc (those get shown the exit)

[–] [email protected] 0 points 1 month ago
[–] [email protected] 0 points 1 month ago (2 children)

Welcome to the club. They say a shared suffering is only half the suffering.

This was discussed in last week's Stubsack, but I don't think we mind talking about the same thing twice. I, for one, do not look forward to browsing Wikipedia exclusively through pre-2024 archived versions, so I hope (with some pessimism) their disappointingly milquetoast stance works out.

Reading a bit of the old Reddit sneerclub can help understand some of the Awful vernacular, but otherwise it's as much of a lurkmoar as any other online circlejerk. The old guard keep referencing cringe techbros and TESCREALs I've never heard of while I still can't remember which Scott A we're talking about in which thread.

[–] [email protected] 0 points 1 month ago

oh you did better than I did

5 internet cookies to you

[–] [email protected] 0 points 1 month ago (2 children)

Scott Computers is married and a father but still writes like an incel and fundamentally can't believe that anyone interested in computer science or physics might think in a different way than he does. Dilbert Scott is an incredibly divorced man. Scott Adderall is the leader of the beige tribe.

[–] [email protected] 0 points 1 month ago

> Scott Adderall

You Give Adderall A Bad Name

[–] [email protected] 0 points 1 month ago (3 children)

shit wasn’t there another one

[–] [email protected] 0 points 1 month ago (2 children)

> The purpose of this project is not to restrict or ban the use of AI in articles, but to verify that its output is acceptable and constructive, and to fix or remove it otherwise.

Wikipedia's mod team definitely haven't realised it yet, but this part is pretty much a de facto ban on using AI. AI is incapable of producing output that would be acceptable for a Wikipedia article - in basically every instance, it's getting nuked.

[–] [email protected] 0 points 1 month ago (1 children)

lol i assure you that fidelitously translates to "kill it with fire"

[–] [email protected] 0 points 1 month ago (1 children)

Yeah, that sounds like text which somebody quickly typed up for the sake of having something.

[–] [email protected] 0 points 1 month ago

it is impossible for a Wikipedia editor to write a sentence on Wikipedia procedure without completely tracing the fractal space of caveats.

[–] [email protected] 0 points 1 month ago

I'd like to believe some of them have, but it's easier or more productive to keep giving the benefit of the doubt (or at least pretend to) than to argue the point.

[–] [email protected] 0 points 1 month ago (1 children)

Online art school Schoolism publicly sneers at AI art, gets standing ovation

Schoolism sneer

And now, a quick sidenote:

This is gut instinct, but I'm starting to get the feeling this AI bubble's gonna destroy the concept of artificial intelligence as we know it.

Mainly because of the slop-nami and the AI industry's repeated failures to solve hallucinations - both of those, I feel, have built an image of AI as inherently incapable of humanlike intelligence/creativity (let alone Superintelligence^tm^), no matter how many server farms you build or oceans of water you boil.

Additionally, I suspect that working on/with AI, or supporting it in any capacity, is becoming increasingly viewed as a major red flag - a "tech asshole signifier" to quote Baldur Bjarnason for the bajillionth time.

For a specific example, the major controversy that swirled around "Scooby Doo, Where Are You? In... SPRINGTRAPPED!" over its use of AI voices would be my pick.

Eagan Tilghman, the man behind the ~~slaughter~~ animation, may have been a random indie animator, who made Springtrapped on a shoestring budget and with zero intention of making even a cent off it, but all those mitigating circumstances didn't save the poor bastard from getting raked over the coals anyway. If that isn't a bad sign for the future of AI as a concept, I don't know what is.

[–] [email protected] 0 points 1 month ago (2 children)

I think a couple of people noted it at the start, but this is truly a paradigm shift.

We've had so many science fiction stories, works, and derivatives musing about AI in so many ways: what if it were malevolent, what if it rebelled, what if it took all the jobs... But I don't think our collective consciousness was aware of the "what if it was just utterly stupid and incompetent" possibility.

[–] [email protected] 0 points 1 month ago

Alan Moore wrote a comic book story about AI about 10 years ago that parodied rationalist ideas about AI and it still holds up pretty well. Sadly the whole thing isn't behind that link - I saw it on Twitter and can't find it now.

[–] [email protected] 0 points 1 month ago (1 children)

> I don’t think our collective consciousness was aware of the “what if it was just utterly stupid and incompetent” possibility.

It's a possibility which doesn't make for good sci-fi (unless you're writing an outright dystopia, e.g. Paranoia), so sci-fi writers were unlikely to touch it.

The tech industry had enjoyed a lengthy period of unvarnished success and conformist press up to this point, so Joe Public probably wasn't gonna entertain the idea that this shiny new tech could drop the ball until they saw something like the glue pizza sprawl.

And the tech press isn't gonna push back against AI, for obvious reasons.

So, I'm not shocked this revelation completely blindsided the public.

> I think a couple of people noted it at the start, but this is truly a paradigm shift.

Yeah, this is very much a paradigm shift - I don't know how wide-ranging the consequences will be, but I expect we're in for one hell of a ride.

[–] [email protected] 0 points 1 month ago

Paranoia is the only one I can think of that's actually pretty well on the money, because the dystopian elements come from the fact that the wildly incompetent Friend Computer has been given total power despite everyone on some level knowing that, even if they can't admit it (anymore) without being terminated. The secret societies all think they can work the situation to their advantage, and it provides a convenient scapegoat for the terrible things they probably want to do anyways.

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago) (1 children)

Many thanks to @blakestacey and @YourNetworkIsHaunted for your guidance with the NSF grant situation. I've sent an analysis of the two weird reviews to our project manager and have a list of personnel to escalate to if needed. Fingers crossed that we can be the pebble that gets an avalanche rolling. I'd really rather not become a character in this story (it's much more fun to hurl rotten fruit with the rest of the groundlings), but what else can we do when the bullshit comes and finds us in real life, eh?

It WAS fun to reference Emily Bender and On Bullshit in the references of a serious work document, though.

Edit: So...the email server says that all the messages are bouncing back. DKIM failure?

[–] [email protected] 0 points 1 month ago

> I'd really rather not become a character in this story

Good luck. In my experience you can't speak up about stuff like this without putting yourself out there to some degree. Stay strong.


Regarding the email bounceback, could you perhaps try sending an email from another address (with a different host) to the same destination to confirm it's not just your "sending" server?

The bounce message should include details on the cause, and a DKIM failure would usually show up as an explicit rejection notice from the receiving server.
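
If it helps, here's a rough, standard-library-only Python sketch of what I mean by digging the cause out of the bounce - the filename and the exact fields are just placeholders for whatever your returned messages actually contain:

```python
# Rough sketch (standard library only): pull the rejection details out of a
# saved bounce message. "bounce.eml" is a placeholder for wherever you save it.
from email import policy
from email.parser import BytesParser

with open("bounce.eml", "rb") as fh:
    bounce = BytesParser(policy=policy.default).parse(fh)

for part in bounce.walk():
    # Standard bounces carry a message/delivery-status part whose
    # Status / Diagnostic-Code fields name the actual failure.
    if part.get_content_type() == "message/delivery-status":
        for block in part.get_payload():
            for field in ("Action", "Status", "Diagnostic-Code"):
                if block.get(field) is not None:
                    print(f"{field}: {block[field]}")
    # If DKIM really is the culprit, it tends to show up in the diagnostic
    # text or in an Authentication-Results header on the returned original.
    auth = part.get("Authentication-Results")
    if auth:
        print("Authentication-Results:", auth)
```

If the diagnostic text does blame DKIM, the next step would be checking that your domain's selector record (selector._domainkey.yourdomain, a TXT record) is still published and matches the key your server signs with.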

[–] [email protected] 0 points 1 month ago (1 children)

Just something I found in the wild (r/MachineLearning): Please point me in the right direction for further exploring my line of thinking in AI alignment

> I'm not a researcher or working in AI or anything, but ...

you don't say

[–] [email protected] 0 points 1 month ago (2 children)

Alignment? Well, of course it depends on your organization's style guide but if you're using TensorFlow or PyTorch in Python, I recommend following PEP-8, which specifies four spaces per indent level and…

Wait, you're not working in AI? Then what are you even asking for?
