this post was submitted on 14 Jul 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

[–] [email protected] 0 points 4 months ago (2 children)

comment from friend:

Slightly related: now I know when the AI crash is going to happen. Every bottomfeeder recruiter company on LinkedIn is suddenly pushing 2-month contract technical writer positions with AI companies with no product, no strategy, and no idea of how to proceed other than “CEO cashes out.” I suspect the idea is to get all of their documentation together so they can sell their bags of magic beans before the beginning of the holiday season.

sickos.jpg

I've asked if he can send me links to a few of these; I'll see what I can do with 'em

[–] [email protected] 0 points 4 months ago* (last edited 4 months ago) (11 children)

Current-flavor AI is certainly getting demystified a lot among enterprise people. "Let's dip our toes into using an LLM to make our hoard of internal documents more accessible, it's supposed to actually be good at that, right?" is slowly giving way to "What do you mean RAG is basically LLM-flavored elasticsearch, only more annoying and less documented? And why is all the tooling so bad?"
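
(For anyone who hasn't had to build one: the pattern under discussion really is just "search, then stuff the hits into the prompt." A toy sketch below, where the keyword-overlap retriever and `call_llm()` are stand-ins for illustration, not any vendor's actual API.)

```python
# Minimal sketch of the RAG pattern: score documents against the query
# (the "elasticsearch" half), then stuff the top hits into a prompt for
# the LLM (the "flavored" half). Scoring and call_llm() are toy stand-ins.

from collections import Counter

DOCS = {
    "vacation-policy.md": "Employees accrue vacation days monthly and request them in the HR portal.",
    "expense-policy.md": "Expenses over 100 EUR need manager approval and a receipt uploaded within 30 days.",
    "onboarding.md": "New hires get a laptop, badge, and accounts provisioned during their first week.",
}

def score(query: str, text: str) -> int:
    # crude keyword-overlap retrieval; real setups use BM25 or vector embeddings
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    ranked = sorted(DOCS, key=lambda name: score(query, DOCS[name]), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # placeholder for whatever hosted model you'd actually be fighting with
    return f"[model answer based on a prompt of {len(prompt)} chars]"

def answer(query: str) -> str:
    context = "\n\n".join(f"# {name}\n{DOCS[name]}" for name in retrieve(query))
    prompt = f"Answer using only the context below.\n\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(answer("how do I get an expense approved?"))
```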

[–] [email protected] 0 points 4 months ago* (last edited 4 months ago)

“What do you mean RAG is basically LLM-flavored elasticsearch, only more annoying and less documented? And why is all the tooling so bad?”

Our BI team is trying to implement some RAG via Microsoft Fabric and Azure AI Search because we need that for whatever reason, and they've burned through almost 10k in the first half of the current month already, either because it's just super expensive or because it's so terribly documented that they can't get it to work and have to try again and again. Normal costs are somewhere around 2k for a whole month for traffic + servers + database, and I haven't got the foggiest what's even going on there.

But someone from the C-suite apparently wrote them a blank check because it's AI ...

[–] [email protected] 0 points 4 months ago (2 children)

That seems suspiciously soon, but my impression is based on nothing but vibes — a sense that companies are still buying in.

[–] [email protected] 0 points 4 months ago (7 children)

Ed Zitron says it'll burn by end of the year, but he doesn't list sources either so idk

[–] [email protected] 0 points 4 months ago (1 children)

I think there was a report saying that the most recent quarter still showed a massive infusion of VC cash into the space, but I'm not sure how much of that comes from the fact that a new money sink hasn't yet started trending in the valley. It wouldn't surprise me if the griftier founders were looking to cash out before the bubble properly bursts in order to avoid burning bridges with the investors they'll need to get the next thing rolling.

[–] [email protected] 0 points 4 months ago (2 children)

Not a sneer, but an observation on the tech industry from Baldur Bjarnason, plus some of my own thoughts:

I don’t think I’ve ever experienced before this big of a sentiment gap between tech – web tech especially – and the public sentiment I hear from the people I know and the media I experience.

Most of the time I hear “AI” mentioned on Icelandic mainstream media or from people I know outside of tech, it’s being used to describe something as a specific kind of bad. “It’s very AI-like” (“mjög gervigreindarlegt” in Icelandic) has become the talk radio shorthand for uninventive, clichéd, and formulaic.

Baldur has pointed that part out before, and noted how it's kneecapping the consumer side of the entire bubble, but I suspect the phrase "AI" will retain that meaning well past the bubble's bursting. "AI slop", or just "slop", will likely also stick around for those who wish to differentiate gen-AI garbage from more genuine uses of machine learning.

To many, “AI” seems to have become a tech asshole signifier: the “tech asshole” is a person who works in tech, only cares about bullshit tech trends, and doesn’t care about the larger consequences of their work or their industry. Or, even worse, aspires to become a person who gets rich from working in a harmful industry.

For example, my sister helps manage a book store as a day job. They hire a lot of teenagers as summer employees and at least those teens use “he’s a big fan of AI” as a red flag. (Obviously a book store is a biased sample. The ones that seek out a book store summer job are generally going to be good kids.)

I don’t think I’ve experienced a sentiment disconnect this massive in tech before, even during the dot-com bubble.

Part of me suspects that the AI bubble has spread that "tech asshole" stench to the rest of the industry, with some help from the widely mocked NFT craze and Elon Musk becoming a punching bag par excellence by publicly running Twitter into the ground.

(Fuck, now I'm tempted to try and cook up something for MoreWrite discussing how I expect the bubble to play out...)

[–] [email protected] 0 points 4 months ago

Write it! The time is right

[–] [email protected] 0 points 4 months ago

The active hostility from outside the tech world is going to make this one interesting, since unlike crypto this one seems to have a lot of legitimate energy behind it in the industry, even as it becomes increasingly apparent that, even if the technical capability were there (e.g. if the bullshit problems could be solved by throwing enough compute and data at the existing paradigm, which looks increasingly unlikely), there's no way to do it profitably given the massive costs of training and running these models.

I wonder if we're going to see any attempts to optimize existing models for the orgs that have already integrated them in the same way that caching a web page or indexing a database can increase performance without doing a whole rebuild. Nvidia won't be happy to see the market for GPUs fall off, but OpenAI might have enough users of their existing models that they can keep operating even while dramatically cutting down on new training runs? Does that even make sense, or am I showing my ignorance here?
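
(The caching analogy does map onto something cheap and real, at least for the repetitive prompts most internal tooling generates: memoize answers in front of the model call. A toy sketch, where `call_model()` is a placeholder rather than any real provider's API; real serving stacks do fancier versions of the same idea, like prefix/KV caching.)

```python
# Toy sketch of the "cache it like a web page" idea: memoize identical prompts
# so repeated questions never hit the (expensive) model again. call_model() is
# a placeholder, not a real vendor API.

import hashlib

_cache: dict[str, str] = {}

def call_model(prompt: str) -> str:
    # stand-in for the slow, costly inference call
    return f"[generated answer for: {prompt[:40]}...]"

def cached_generate(prompt: str) -> str:
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)  # only pay for inference on a cache miss
    return _cache[key]

# the second call with the same prompt is served from the cache
print(cached_generate("Summarize the Q3 incident report."))
print(cached_generate("Summarize the Q3 incident report."))
```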

[–] [email protected] 0 points 4 months ago (1 children)

surprising absolutely nobody who’s been paying attention, Fedora has signaled its intent to use generative AI and an LLM in its packaging software

and I wouldn’t give a fuck what IBM’s pet distro does, but Red Hat’s developers have a huge amount of control over what ends up in the userland… and the bootloader… and pretty much every part of every Linux distro but the obscure ones, except the kernel, cause there they got told to fuck off

[–] [email protected] 0 points 4 months ago (2 children)

I'm not really in on distros and related drama (strong "just fucking use Debian stable" camp), why did Red Hat get told to fuck off from the kernel?

[–] [email protected] 0 points 4 months ago (1 children)

I started a job in the last year that has really forced me to play around with different distros and sometimes build them. Pretty much my entire experience is “abandon ubuntu, just use debian” and wishing other people would do the same

(Pretty much my entire reasoning is that snap fucked up my dev environment so bad I rage installed debian)

[–] [email protected] 0 points 4 months ago

the example I was thinking of was the incredibly ill-conceived rejected patchset to implement d-bus inside the kernel but I don’t think they’ve really stopped trying since then

[–] [email protected] 0 points 4 months ago (3 children)

speaking of technofascism, we’re at the stage where supposed Democrat billionaires like the Andreessen Horowitz fuckers suddenly come out in support of Trump:

Marc Andreessen, the co-founder of one of the most prominent venture capital firms in Silicon Valley, says he’s been a Democrat most of his life. He says he has endorsed and voted for Bill Clinton, Al Gore, John Kerry, Barack Obama and Hillary Clinton.

However, he says he’s no longer loyal to the Democratic Party. In the 2024 presidential race, he is supporting and voting for former President Donald Trump. The reason he is choosing Trump over President Joe Biden boils down primarily to one major issue — he believes Trump’s policies are much more favorable for tech, specifically for the startup ecosystem.

none of this should be surprising, but it should be called out every time it happens, and we’re gonna see it happen a lot in the days ahead. these fuckers finally feel secure in taking their masks off, and that’s not good.

[–] [email protected] 0 points 4 months ago* (last edited 4 months ago)

I don't understand why people take him at face value when he claims he's always been a Democrat up until now. He's historically made large contributions to candidates from both parties, but generally more to Republicans than Democrats, and also to Republican PACs like Protect American Jobs. Here is his personal record.

Since 2023, he has donated ~$20,000,000 to Fairshake, a crypto PAC which predominantly funds candidates running against Democrats.

Has he moved right? Sure. Was he ever left? No, this is the donation record of someone who wants to buy power from candidates belonging to both parties. If it implies anything, it implies he currently finds Republicans to be corruptible.

[–] [email protected] 0 points 4 months ago (1 children)

Democrat?? AH's previous hit was the one where they enthusiastically endorsed literally the co-author of the original Fascist manifesto

and a16z does get Yarvin in to dispense wisdom and insight

[–] [email protected] 0 points 4 months ago

See, I feel like the Democrats have had a pretty strong technocrat wing that is much more in synch with Neoreaction than people care to acknowledge. As the right shifts towards pursuing the pro-racist anti-women anti-lgbt aspects of their agenda through the courts rather than the ballot box, it seems like the fault lines between the technocratic fascists and the theocratic fascists are thinner than the lines between the techfash and the progressives.

[–] [email protected] 0 points 4 months ago

Apparently the "startup ecosystem" matters more than the ecosystem of, you know, actual living things.

These people are just amazingly fucking evil.

[–] [email protected] 0 points 4 months ago (2 children)

NSFW, as NSAB, I know that anti-environmentalists shout a lot about 'what about China, China should go green first!' while not knowing that China is in fact doing a lot to try and go green (at least on the CO2/energy front; I'm not asking here for people to go point out all the bad things China does to fuck up the environment). I see 'we should develop AI before China does' used as a big pro-AI argument, so here is my question: is China even working on massive A(G)I like people claim?

[–] [email protected] 0 points 4 months ago (1 children)

Typing from phone, please excuse the lack of citations. Academic output in various parts of ML research has increasingly come from China and Chinese researchers over the past decade. There are multiple inputs to this - funding, how strong a specific school/research centre is, etc. - but it’s been ramping up. Pretty sure this is part of what’s been keeping the pro-hegemonist US argument popular lately (also part of where the “we should before they do” comes from, I guess)

I’ve seen some mentions of recent legislation about LLM usage, but I’m not fully up to speed on what it is; haven’t had the time to read up

[–] [email protected] 0 points 4 months ago

Thanks, I was not aware, so they are doing things regarding the research at least. So the "concern" isn't totally made up, which is what I wanted to know. As Architeuthis mentioned, the legislation is against false info and against going against the party (which seems to be what you'd expect).

[–] [email protected] 0 points 4 months ago* (last edited 4 months ago) (1 children)

I am overall very uninformed about the Chinese technological day-to-day, but here are two interesting facts:

They set some pretty draconian rules early on about where the buck stops if your LLM starts spewing false information or (god forbid) goes against party orthodoxy, so I'm assuming that if independent research is happening, it doesn't appear much in the form of public endpoints that anyone might use.

A few weeks ago I saw a report about Chinese medical researchers trying to use AI agents(?) to set up a virtual hospital, in order to maybe eventually have some sort of virtual patient entity that a medical student could work with somehow, and look how many thousands of virtual patients our handful of virtual doctors are healing daily, isn't it awesome folks. Other than the rampant startupiness of it all, what struck me was that they said they had ChatGPT-3.5 set up the doctor/patient/nurse agents, i.e. they used the free version.

So, who knows? If they are all-in on AGI behind the scenes, they don't seem to be making a big fuss about it.

[–] [email protected] 0 points 4 months ago

Thanks, your reply and froztbyte's answered my questions about whether they're doing it, and also a bit about how seriously they're all taking it.

[–] [email protected] 0 points 4 months ago

"I'm sick of Google putting gratuitous LLMs everywhere"
(finger curls on monkey paw)

[–] [email protected] 0 points 4 months ago

We should neologise "NASB" for our community as shorthand for "Not a sneer, but"

[–] [email protected] 0 points 4 months ago

This isn't a sneer, I just want to share this enjoyable presentation about tech and nihilism by Associate Professor Nolen Gertz of the University of Twente here in the Netherlands: https://iai.tv/video/nihilism-and-the-meaning-of-life-nolen-gertz

[–] [email protected] 0 points 4 months ago* (last edited 4 months ago) (5 children)

HN: I am starting an AI+Education company called Eureka Labs.

Their goal: robo-feynman:

For example, in the case of physics one could imagine working through very high quality course materials together with Feynman, who is there to guide you every step of the way. Unfortunately, subject matter experts who are deeply passionate, great at teaching, infinitely patient and fluent in all of the world's languages are also very scarce and cannot personally tutor all 8 billion of us on demand. However, with recent progress in generative AI, this learning experience feels tractable.

NGL though, mostly just sharing this link for the ~~concept art~~ concept fart, which features a three-armed, many-fingered woman smiling at an invisible camera.

[–] [email protected] 0 points 4 months ago

jumping off a roof with an umbrella for a parachute feels tractable

[–] [email protected] 0 points 4 months ago

For example, in the case of physics one could imagine working through very high quality course materials together with Feynman

Women of the world: um, about that

[–] [email protected] 0 points 4 months ago (1 children)

So that’s what our kids will look like once society rebuilds after global thermonuclear war!!!

[–] [email protected] 0 points 4 months ago

I'm sure they will thank us once we explain that the alternative was GPT-5.

[–] [email protected] 0 points 4 months ago* (last edited 4 months ago)

Others: unslanted solar panels at ground level in shade under other solar panels, 90-degree water steps (plural), magical mystery staircases and escalator tubes, picture glass that reflects anything it wants to instead of what may actually be in the reflected light path, a whole Background Full Of Ill-Defined Background People because I guess the training set imagery was input at lower pixel density(??), and on stage left we have a group in conversation walking and talking also right on the edge of nowhere in front of them

And that’s all I picked up in about 30-40s of looking

Imagine being the kind of person who thinks this shit is good

[–] [email protected] 0 points 4 months ago (1 children)

But just focus on the vibes. This diverse group of young mutants getting an education in the overgrown ruins of this university.

Not sure how it ties into robo-feynman at all, but the vibes

[–] [email protected] 0 points 4 months ago

smh they gentrified Genosha
