I knew the exact couple you were talking about before I read any additional comments. They seem to show up in the news like clockwork... do they have a publicist or PR agent looking for newspapers in need of garbage filler puff pieces? If anything, going to the White House is a step up from their normal pattern of self-promotion.
scruiser
Yeah, they are normally all over anything with the word "market" in it, with an almost religious belief in markets' ability to solve things.
My suspicion is that the writer has picked up some anti-Ukrainian sentiment from the US right wing (which, in order to rationalize and justify Trump's constant sucking up to Putin, has looked for any and every angle to tear Ukraine down). And this anti-Ukrainian sentiment has somehow trumped their worship of markets... Checking back through their posting history to try to discern their exact political alignment... it's hard to say. They've got the Scott Alexander thing going on where they use disconnected historical examples crossed with bad analogies crossed with misappropriated terms from philosophy to make points that you can't follow unless you already know their real intended context. So idk.
The slatestarcodex subreddit is discussing the unethical research performed on changemyview. Of course, the most upvoted take is that they don't see the harm or why it should be deemed unethical, along with lots of upvoted complaints about IRBs and such. It's pretty gross.
I'd add:

- examples of problems equivalent to the halting problem, and examples of problems that are intractable
- computational complexity, e.g. the Schrödinger equation and DFT, and why the ASI can't invent new materials/nanotech (if that were even possible in the first place) just by simulating stuff really well
titotal has written some good stuff on computational complexity before. Oh wait, you said you can do physics so maybe you're already familiar with the material science stuff?
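On the halting-problem suggestion: the classic diagonalization argument fits in a few lines of Python. The `halts` oracle below is purely hypothetical (that's the point: no correct one can exist); everything else is a standard textbook sketch, not anything from the linked posts.

```python
# Classic diagonalization sketch: no function halts(f, arg) can
# correctly decide whether f(arg) eventually halts.

def make_contrary(halts):
    """Given a claimed halting oracle, build a function it gets wrong."""
    def contrary():
        if halts(contrary, None):
            while True:        # oracle says "halts" -> loop forever
                pass
        return "halted"        # oracle says "loops" -> halt immediately
    return contrary

# Try one concrete (wrong) oracle that claims nothing ever halts:
def never_halts(f, arg):
    return False

c = make_contrary(never_halts)
result = c()  # halts immediately, contradicting the oracle's "loops"
```

Whatever answer a claimed oracle gives about `contrary`, `contrary` does the opposite, so no `halts` function can be correct on every input.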
It's worse than you are remembering! Eliezer has claimed deep neural networks (maybe even something along the lines of LLMs) could learn to break hashes just by being trained on hash/plaintext pairs in the training data set.
The original discussion: here about a lesswrong post and here about a tweet. And the original lesswrong post if you want to go back to the source.
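For a sense of why hash/plaintext pairs give a network nothing to learn from, here's a quick `hashlib` demo of the avalanche effect (a standard illustration, not anything from the original posts): cryptographic hashes are designed so that nearby inputs produce unrelated outputs, leaving no smooth structure for gradient descent to exploit.

```python
import hashlib

def bits(data: bytes) -> str:
    """SHA-256 digest of `data` as a 256-character binary string."""
    digest = hashlib.sha256(data).digest()
    return "".join(f"{byte:08b}" for byte in digest)

a = bits(b"password1")
b = bits(b"password2")  # input differs by a single character

# Avalanche effect: a one-character change flips roughly half of the
# 256 output bits, so "similar plaintext" tells you nothing about
# "similar hash" -- the pattern a network would need to generalize.
flipped = sum(x != y for x, y in zip(a, b))
print(flipped, "of 256 bits flipped")
```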
That disclaimer feels like parody, given that LLMs have existed for under a decade and have only been popular for a few years. It's like it's mocking all the job ads that ask for 10+ years of experience with a programming language or library that has literally only existed for 7 years.
Yeah, he thinks Cyc was a switch from the brilliant meta-heuristic soup of Eurisko to the dead end of expert systems. But according to the article I linked, Cycorp was still programming extensive heuristics and meta-heuristics into the expert-system entries they were making, as part of its general resolution-based inference engine. It's just that Cyc wasn't able to do anything useful with these heuristics; in fact, they were slowing it down so much that Cycorp started turning them off in 2007 and completely turned off the general inference system in 2010!
To be ~~fair~~ far too charitable to Eliezer, this little factoid has cites from 2022 and 2023, when Lenat wrote more about lessons from Cyc, so it's not like Eliezer could have known this back in 2008. To ~~sneer~~ be actually fair to Eliezer: he should have figured that the guy who actually wrote and used Eurisko, talked about how Cyc was an extension of it, and repeatedly refers back to the lessons of Eurisko would in fact try to include a system of heuristics and meta-heuristics in Cyc! To properly sneer at Eliezer: it probably wouldn't have helped even if Lenat had kept the public up to date on the latest lessons from Cyc through academic articles, because Eliezer doesn't actually keep up with the literature as it's published.
Using just the author's name as input feels deliberately bad. The promptfondlers generally emphasize how important prompting it right is, so it's hard to imagine them going deliberately minimalistic with their prompts.
AlphaFold exists, so computational complexity is a lie and the AGI will surely find an easy approximation to the Schrödinger equation that surpasses all Density Functional Theory approximations and lets it invent radically new materials without any experimentation!
nanomachines son
(No really, the sci-fi version of nanotech where nanomachines can do anything is Eliezer's main scenario for the AGI to bootstrap to godhood. He's been called out multiple times on how Drexler's vision for nanotech ignores physics, so he's since updated to diamondoid bacteria (but he still believes in the nanotech).)
~~The predictions of slopworld 2035 are coming true!~~
On the other hand, it doesn't matter what Elon actually thinks: he would probably go through with the lawsuit on any available pretext, because he's mad he got kicked out and couldn't commercialize OpenAI exactly the way Sam tried to.