Architeuthis

joined 2 years ago
[–] [email protected] 5 points 8 hours ago* (last edited 8 hours ago) (1 children)

Ed Zitron summarizes his premium post in the Better Offline subreddit: Why Did Microsoft Invest In OpenAI?

Summary of the summary: they fully expected OpenAI would've gone bust by now and MS would be looting the corpse for all it's worth.

[–] [email protected] 0 points 5 days ago* (last edited 5 days ago) (3 children)

Fund copyright infringement lawsuits against the people they had been bankrolling for the last few years? Sure, if the ROI is there, but I'm guessing they'll likely move on to the next trendy-sounding thing, like a quantum remote diddling stablecoin or whatevertheshit.

[–] [email protected] 0 points 5 days ago (1 children)

I too love to reminisce over the time (like 3m ago) when the c-suite would think twice before okaying uploading whatever wherever, ostensibly on the promise that it would cut delivery time (up to) some notable percentage, but mostly because everyone else is also doing it.

Code isn't unmoated because it's mostly shit, it's because there's only so many ways to pound a nail into wood, and a big part of what makes a programming language good is that it won't let you stray too much without good reason.

You are way overselling coding agents.

[–] [email protected] 1 points 5 days ago

Ah yes, the supreme technological miracle of automating the ctrl+c/ctrl+v parts when applying the LLM snippet into your codebase.

[–] [email protected] 0 points 5 days ago* (last edited 5 days ago) (2 children)

On the other hand they blatantly reskinned an entire existing game, and there's a whole breach of contract aspect there since apparently they were reusing their own code that they wrote while working for Bethesda, who I doubt would've cared as much if this were only about an LLM-snippet length of code.

[–] [email protected] 0 points 5 days ago (8 children)

I'd say that's incredibly unlikely unless an LLM suddenly blurts out Tesla's entire self-driving codebase.

The code itself is probably among the least behind-a-moat things in software development, that's why so many big players are fine with open sourcing their stuff.

[–] [email protected] 2 points 5 days ago* (last edited 5 days ago) (1 children)

Yet, under Aron Peterson’s LinkedIn posts about these video clips, you can find the usual comments about him being “a Luddite”, being “in denial” etc.

And then there's this:

transcript

From: Rupert Breheny Bio: Cobalt AI Founder | Google 16 yrs | International Keynote Speaker | Integration Consultant AI Comment: Nice work. I've been playing around myself. First impressions are excellent. These are crisp, coherent images that respect the style of the original source. Camera movements are measured, and the four candidate videos generated are generous. They are relatively fast to render but admittedly do burn through credits.

From: Aron Peterson (Author) Bio: My body is 25% photography, 25% film, 25% animation, 25% literature and 0% tolerating bs on the internet. Comment: Rupert Breheny are you a bot? These are not crisp images. In my review above I have highlighted these are terrible.

[–] [email protected] 0 points 5 days ago* (last edited 5 days ago)

AI is the product, not the science.

Having said that:

  • Alignment research: pseudoscience
  • AGI timelines: pseudoscience
  • Prompt engineering: pseudoscience
  • Problem solving benchmarks: almost certainly pseudoscience
  • Hyperscaling: borderline, one could be generous and call it a failed experiment
  • Neural network training and design fundamentals: that's applied maths meets trial and error, no pseudo about it
  • I'm probably forgetting stuff
[–] [email protected] 1 points 6 days ago* (last edited 6 days ago)

you know that there’s almost no chance you’re the real you and not a torture copy

If the basilisk's wager had been framed like that, that you can't know whether you're already living in the torture sim with the basilisk silently judging you, it would be way more compelling than the actual "you are ontologically identical with any software that simulates you at a high enough level even way after the fact because [preposterous transhumanist motivated reasoning]".

[–] [email protected] 1 points 6 days ago* (last edited 6 days ago) (1 children)

Scott A. comes off as such a disaster of a personality. Hope it's less obvious in his irl interactions.

[–] [email protected] 0 points 6 days ago* (last edited 6 days ago)

I'd say if there's a weak part in your admittedly tongue-in-cheek theory it's requiring Roko to have had a broader scope plan instead of a really catchy brainfart, not the part about making the basilisk thing out to be smarter/nobler than it is.

Reframing the infohazard aspect as an empathy filter definitely has legs in terms of building a narrative.


An excerpt has surfaced from the AI2027 podcast with siskind and the ex AI researcher, where the dear doctor makes the case for how an AGI could build an army of terminators in a year if it wanted.

It goes something like: OpenAI is worth as much as all US car companies (except Tesla) combined, so it could buy up every car factory and convert it to a murderbot factory, because that's kind of like what the US gov did in WW2 to build bombers, reaching peak capacity in three years, and AGI would obviously be more efficient than a US wartime gov, so let's say one year. Generally a completely unassailable syllogism from very serious people.

Even /r/ssc commenters are calling him out about the whole AI doomer thing getting noticeably more culty than usual. Edit: The thread even features a rare heavily downvoted siskind post, -10 at the time of this edit.

The latter part of the clip is the interviewer pointing out that there might be technological bottlenecks that could require upending our entire economic model before stuff like curing cancer could be achieved, positing that if we somehow had AGI-like tech in the 1960s it would probably have to use its limited means to invent the entire tech tree that leads to late 2020s GPUs out of thin air, international supply chains and all, before starting on the road to becoming really useful.

Siskind then goes "nuh-uh!" and ultimately proceeds to give Elon's metaphorical asshole a tongue bath of unprecedented depth and rigor, all but claiming that what's keeping modern technology down is the inability to extract more man hours from Grimes' ex, and that's how we should view the eventual AGI-LLMs, like wittle Elons that don't need sleep. And didn't you know, having non-experts micromanage everything in a project is cool and awesome actually.
