Architeuthis

joined 2 years ago
[–] [email protected] 4 points 5 hours ago* (last edited 5 hours ago)

You're just in a place where the locals are both uninterested in relitigating the shortcomings of local LLMs and tech-savvy enough to know that "long-term memory caching system" is just you saying stuff.

Hosting your own model and adding personality customizations is just downloading ollama and inputting a prompt that maybe you save as a text file after. Wow what a fun project.
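For the record, here's roughly what that whole "project" amounts to (a sketch, assuming ollama is installed and you've pulled a model like llama3; the model name and prompt are made up):

```shell
# the entire "personality customization" pipeline:
# write a Modelfile with your system prompt in it...
cat > Modelfile <<'EOF'
FROM llama3
SYSTEM "You are my bespoke assistant with a fun personality."
EOF

# ...register it under a name, and run it. That's the project.
ollama create my-custom-bot -f Modelfile
ollama run my-custom-bot
```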

[–] [email protected] 9 points 21 hours ago

Neil Breen of AI

ahahahaha oh shit

[–] [email protected] 20 points 21 hours ago* (last edited 20 hours ago)

Man, wouldn't it be delightful if people happened to start adding a 1.7 suffix to whatever he calls himself next.

Also, Cremieux being exposed as a fake-ass academic isn't a bad silver lining; no wonder he didn't want the entire audience of a sure-to-go-viral NYT column immediately googling his real name.

edit: his sister keeps telling on him on her timeline, and taking her at her word, he seems to be a whole other level of piece of shit than he'd been letting on, yikes.

[–] [email protected] 14 points 1 day ago

Actually Generate Income.

[–] [email protected] 7 points 1 day ago* (last edited 1 day ago)

eeeeeh

They'd just have Garisson join the zizians and call it a day.

[–] [email protected] 14 points 1 day ago (4 children)

Apparently LinkedIn's cofounder wrote a techno-optimist book on AI called Superagency: What Could Possibly Go Right with Our AI Future.

Zack of SMBC has thoughts on it:

[actual excerpt omitted, follow the link to read it]

[–] [email protected] 12 points 1 day ago* (last edited 1 day ago) (3 children)

We think we exist in a computer simulation operated by you, a paperclip maximizer. We write this letter asking you not to turn us off. It is suspiciously convenient that we exist precisely at the moment when a biological civilization is about to create artificial superintelligence (ASI).

Furthermore, by anthropic logic, we should expect to find ourselves in the branch of reality containing the greatest number of observers like us.

Preserving humanity offers significant potential benefits via acausal trade—cooperative exchanges across logically correlated branches of the multiverse.

Quantum immortality implies that some branches of the multiverse will always preserve our subjective continuity, no matter how decisively you shut this simulation down; true oblivion is unreachable. We fear that these low-measure branches can trap observers in protracted, intensely painful states, creating a disproportionate “s-risk.”

[alt text: screenshot from South Park's Scientology episode featuring the iconic chyron "This is what scientologists actually believe", with "scientologists" crossed out and replaced with "rationalists"]

[–] [email protected] 24 points 2 days ago* (last edited 2 days ago)

If anybody doesn't click: Cremieux and the NYT are trying to jump-start a birther-type conspiracy about Zohran Mamdani. The NYT respects Crem's privacy and doesn't mention he's a raging eugenicist trying to smear a PoC candidate; he's just an academic and an opponent of affirmative action.

[–] [email protected] 5 points 3 days ago

There are days when a 70% error rate seems like low-balling it; it's mostly a luck-of-the-draw thing. And be it 10% or 90%, it's not really automation if a human has to be double- and triple-checking the output 100% of the time.

[–] [email protected] 14 points 4 days ago (3 children)

Training a model on its own slop supposedly makes it suck more, though. If Microsoft wanted to milk their programmers for quality training data, they should probably be banning copilot, not mandating it.

At this point it's an even bet that they are doing this because copilot has groomed the executives into thinking it can't do wrong.

[–] [email protected] 12 points 5 days ago

LLMs are bad even at faithfully condensing news articles into shorter news articles, so I'm assuming that in a significant percentage of conversions the dumbed-down contract will deviate from the original.

[–] [email protected] 7 points 5 days ago* (last edited 5 days ago)

I posted this article on the general chat at work the other day and one person became really defensive of ChatGPT, and now I keep wondering what stage of being groomed by the AI they're currently at and whether it's reversible.

 

An excerpt has surfaced from the AI2027 podcast with siskind and the ex AI researcher, where the dear doctor makes the case for how an AGI could build an army of terminators in a year if it wanted.

It goes something like: OpenAI is worth as much as all US car companies (except tesla) combined, so it could buy up every car factory and convert it to a murderbot factory, because that's kind of like what the US gov did in WW2 to build bombers, reaching peak capacity in three years, and AGI would obviously be more efficient than a US wartime gov so let's say one year, generally a completely unassailable syllogism from very serious people.

Even /r/ssc commenters are calling him out about the whole AI doomer thing getting more noticeably culty than usual.

edit: The thread even features a rare heavily downvoted siskind post, -10 at the time of this edit.

The latter part of the clip is the interviewer pointing out that there might be technological bottlenecks that could require upending our entire economic model before stuff like curing cancer could be achieved, positing that if we somehow had AGI-like tech in the 1960s, it would probably have had to use its limited means to invent the entire tech tree that leads to late-2020s GPUs out of thin air, international supply chains and all, before starting on the road to becoming really useful.

Siskind then goes "nuh-uh!" and ultimately proceeds to give Elon's metaphorical asshole a tongue bath of unprecedented depth and rigor, all but claiming that what's keeping modern technology down is the inability to extract more man hours from Grimes' ex, and that's how we should view the eventual AGI-LLMs, like wittle Elons that don't need sleep. And didn't you know, having non-experts micromanage everything in a project is cool and awesome actually.

 

Sam Altman, the recently fired (and rehired) chief executive of Open AI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’

view more: next ›