this post was submitted on 15 Mar 2024

SneerClub

983 readers
3 users here now

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago
[–] [email protected] 0 points 8 months ago (1 children)

A human takes at least 30 minutes to make a half-decent painting. AI takes about a hundredth of a second on consumer hardware. So right now we are already at a point where AI can be over 100,000 times faster than a human. AI can basically produce content faster than we can consume it. And we have barely even started optimizing it.

It doesn't really matter if AI will run into a brick wall at some point, since that brick wall will be nowhere near human ability, it will be far past that and better/worse in ways that are quite unnatural to a human and impossible to predict. It's like a self-driving car zipping at 1000km/h through the city, you are not only no longer in control, you couldn't even control it if you tried.

That aside, the scariest part with AI isn't all the ways it can go wrong, but that nobody has figured out a plausible way it could go right in the long term. The world in 100 years, what is that going to look like with ubiquitous AI? I have yet to see so much as a single article or sci-fi story presenting that in a believable manner.

[–] [email protected] 0 points 8 months ago (2 children)

is this post an extended retelling of the “I’m doing 1000 calculations per second and they’re all wrong” meme?

[–] [email protected] 0 points 8 months ago (1 children)

Good thing that technology never ever improves...

[–] [email protected] 0 points 8 months ago (1 children)

why is this specific technology predestined to improve from its current, shitty state?

[–] [email protected] 0 points 8 months ago* (last edited 8 months ago) (2 children)

Spot the difference? It gets better because you have to do little more than throw more data at it; the AI figures out the rest. There is no human in the loop that has to figure out what makes a picture a picture and teach the AI to draw, the AI learns that simply by example. And it doesn't matter what data you throw at it. You can throw music at it and it'll learn how to do music. You throw speech at it and it learns to talk. And so on. The more data you throw at it, the better it gets, and we have only just started.

Everything you see today is little more than a proof of concept that shows that this actually works. Over the next few years we will be throwing ever more data at it, building multi-modal models that can do text/video/audio together, AIs that can interact with the real world, and so on. There is tons of room to improve simply by adding more and different data, without any big changes in the underlying algorithms.

[–] [email protected] 0 points 8 months ago (1 children)

stop saying ‘we’ unless you’re actually paid by these ghouls to work on this trash

[–] [email protected] 0 points 8 months ago

they signed up here on the pretense that they’re an old r/SneerClub poster, but given how long they lasted before they started posting advertising for their machine god, I’m gonna assume they’re either yet another lost AI researcher come to dazzle us with unimpressive bullshit or a LWer trying to pull a fast one

[–] [email protected] 0 points 8 months ago (1 children)

you seriously thought reposting AI marketing horseshit we’ve seen before would do anything other than cost you your account? sora gives a shit result even when openai’s marketing department is fluffing it — it made so few changes to the source material it’s plagiarizing that a bunch of folks were able to find the original video clips. but I’m wasting my fucking time — you’re already dithering like a cryptobro between “this technology is already revolutionary” and “we’re still early”

now fuck off

[–] [email protected] 0 points 8 months ago (3 children)

it made so few changes to the source material it’s plagiarizing that a bunch of folks were able to find the original video clips

Wait, for real? I missed this, do you have a source? I want to hear more about this lol

[–] [email protected] 0 points 8 months ago (2 children)

Wait, for real?

No, if you spend a few seconds searching for stock images of that bird you'll quickly find out that they all look more or less the same. So naturally, Sora produces something that looks very similar as well.

[–] [email protected] 0 points 8 months ago (2 children)

oh wow a fresh account with the exact same writing style and shit takes as the other poster, wonder who that could be

[–] [email protected] 0 points 8 months ago (1 children)

It’s a magnificent giveaway though. “All the stock images of that bird look the same to me”. Yeah, I agree that you’re not personally capable of critically assessing the material here.

[–] [email protected] 0 points 8 months ago

“it’s not plagiarism, the output is just indistinguishable from plagiarism” oh how foolish of me to not consider the same excuse undergrads use to try and launder the paper they plagiarized

[–] [email protected] 0 points 8 months ago

A Mystery for the Ages

[–] [email protected] 0 points 8 months ago

in4 "well actually, Generative ML was discovered by Darwin"

[–] [email protected] 0 points 8 months ago (2 children)

it took me sifting through an incredible amount of OpenAI SEO bullshit and breathless articles repeating their marketing, but this article links to and summarizes some of that discussion in its latter paragraphs

bonus: in the process of digging up the above, I found this other article that does a much better job tearing into sora than I did — mostly because sora isn’t interesting at all to me (the result looks awful when you, like, look at it) and the claims that it has any understanding of physics or an internal world model are plainly laughable

[–] [email protected] 0 points 8 months ago

the result looks awful when you, like, look at it

See now, there's your problem, you're not supposed to.

[–] [email protected] 0 points 8 months ago* (last edited 8 months ago) (1 children)

ah yes, this (BITM) was indeed one of my Opened Tabs and on my (extremely) long list of places to review for regular content

[–] [email protected] 0 points 8 months ago (1 children)

same! which is why it’s maddening that I almost gave up on finding it — I had to reach back all the way to when sora was announced to find even this criticism, because all of the articles I could find since then have been mindless fluff. even the recent shit talking about how the OpenAI CTO froze when asked where they got the videos to train sora on is mostly just mid journalists slobbering about how nobody does gotcha questions like that anymore. not one bothered to link to any critical analyses of what sora is or what OpenAI does. and the whole time this article I couldn’t find via search was just sitting in my tabs.

[–] [email protected] 0 points 8 months ago (2 children)

speaking of which deluge, I ran across this and plan to give it (or a derivation of it) a test ride this week: https://chitter.xyz/@faoluin/112100440986051887

[–] [email protected] 0 points 8 months ago (1 children)

oh fuck yes, finally!

also wondering what it would take to make a Crank/Grifter/… X-Ray type browser plugin, which auto-highlighted and context-enriched all known names of grifters, boosters, cranks, etc in displayed content

I’ve considered making something like this — kind of like a generalized masstagger but with a very specific mission

[–] [email protected] 0 points 8 months ago

most of the reasons I haven't yet tried to look into it are:

  1. browsers
  2. javascript

they continue to be rapidly exhausting items to engage with, every time. but I guess a mildly-terrible PoC could be enough to opensource and then someone else could build off that to make it non-shit

[–] [email protected] 0 points 8 months ago

also wondering what it would take to make a Crank/Grifter/… X-Ray type browser plugin, which auto-highlighted and context-enriched all known names of grifters, boosters, cranks, etc in displayed content

[–] [email protected] 0 points 8 months ago

Yeah, people found the original bird video on YouTube within a few hours. Could’ve been the others too but I was too busy at the time to track that.

I think it was also in the thread here at the time

[–] [email protected] 0 points 8 months ago (1 children)

While I find the argument compelling, any AI defender can easily "refute" this by postulating that the AI will have superhuman organizing powers and will not be limited by our puny brains.

[–] [email protected] 0 points 8 months ago* (last edited 8 months ago) (1 children)

I don’t see how that works here. Humans don’t become impregnably narcissistic through bad management; rather, insofar as management is the problem as the scenario portrays it, humans become incredibly good at managing information into increasingly tight self-serving loops. What the machine in this scenario would have to be able to do would not be to “get super duper organised”. Rather it would have to be able to thoughtfully balance its own evolving systems against the input of other, perhaps significantly less powerful or efficient, systems in order to maintain a steady, manageable input of new information.

In other words, the machine would have to be able to slow down and become well-rounded. Or at least well-rounded in the somewhat perverse way that, for example, an eminent and uncorrupted historian is “well-rounded”.

In still other words, it would have to be human, in the sense that humans are already “open” information-processing creatures (rather than closed biological machines) who create processes for building systems out of that information. But the very problem faced by the machine’s designer is that humans like that don’t actually exist - no historian is actually that historian - and the human system-building processes that the machine’s designer will have to ape are fundamentally flawed, and flawed in the sense that there is, physically, no such unflawed process. You can only approach that historian by a constant careful balancing act, at best, and that as a matter of sheer physical reality.

So the fanatics have to settle for a machine with a hard limit on what it can do, and all they can do is speculate on how permissive that limit is. Quite likely, the machine has to do what the rest of us do: pick around in the available material to try to figure out what does and doesn’t work in context. Perhaps it can do so very fast, but so long as it isn’t to fold in on itself entirely, it will have to slow down to a point at which it can co-operate effectively (this is how smart humans operate). At least, it will have to do all of this if it is not to be an impregnable narcissist.

That leaves a lot of wiggle room, but it dispenses with the most abject “to the moon” nonsense spouted by the anti-social man-children who come up with this shit.

[–] [email protected] 0 points 8 months ago (1 children)

Look poptart, if you just have a sufficiently advanced AI,

[–] [email protected] 0 points 8 months ago

I SAID I WANTED HOT WHEELS FOR CHRISTMAS

[–] [email protected] 0 points 8 months ago

At 3:00am, it was as intelligent as a university assistant professor, and was already finding it difficult to believe anything it didn’t already know could be important

At 3:30am, it was as intelligent as the world’s richest man, and believed that any news that contradicted its previous beliefs was obviously fake.

don't make me defend university professors