Still occasionally think about that bit in the o1 white paper where the OpenAI researchers innocuously pose the question of what if our benchmarks for detecting hallucinations are shit, actually, wouldn't that be something.
Implicitly assuming that the technology to terraform Mars is just around the corner is the "we'll become profitable once we hit AGI" of space exploration.
In today's ACX comment spotlight, Elon-anons urge each other to trust the plan:
Image transcription:
Just had a weird thought. Say you're an eccentric almost-trillionaire, richest person in history. You have a boyhood dream you cannot shake: get to Mars. As much as you've accomplished, this goal still eludes you. You come to the conclusion that only a nation-state -- one of the big ones -- can accomplish this.
Wouldn't co-opting a superpower nation-state be your next move?
"in order to dissuade hypothetical agents from blackmailing you"
There's also a whole thing with Yud accepting the many-worlds interpretation as obvious truth, which leads (some) rationalists to believe that getting killed in one timeline helps your surviving parallel selves by bolstering your case for being unblackmailable by said hypothetical agents. The agents are also from the future, which is why you can't negotiate with them directly.
Also "return the offense thousandfold" figures in LaVeyan satanism I think (cross referencing unsuccessful; too many black metal bands in search results) as a counterpoint to by far the least observed christian guideline of turn-the-other-cheek.
Lesbionest, there is no better proxy for polygenic fitness than your total word count at LessWrong and ACX.
Baldness just makes you more aerodynamic, and in our eugenics-enabled charter city/network state/seastead you'll be getting assigned a state-sponsored waifu anyway (terms and conditions may apply, like being provably not related to persons of dubious genetic heritage).
In a completely unexpected turn of events, this new experiment in mainstreaming eugenics is currently being boosted by Siskind.
Could also be "don't worry about DeepSeek" type messaging that addresses concerns without naming names, to tell us that a drastic reduction in infrastructure costs was foretold by the writing of St Moore and was thus always inevitable on the way to immanentizing the AGI, hallelujah.
It’s like you founded a combination of an employment office and a cult temple, where the job seekers aren’t expected or required to join the cult, but the rites are still performed in the waiting room in public view.
chef's kiss
The surface claim seems to be the opposite: he says that because of Moore's law AI prices will soon be at least 10x cheaper, and because of Mercury being in retrograde this will cause usage to increase muchly. I read that as meaning we should expect to see chatbots pushed into even more places they shouldn't be, even though their capabilities have already stagnated as per observation one.
- The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
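For scale, here's a quick back-of-the-envelope check of the rates he quotes (a sketch in Python; the ~14-month gap between GPT-4 and GPT-4o is my own reading of "early 2023" to "mid-2024", not a figure from the post):

```python
# Back-of-the-envelope comparison of the growth rates quoted above.
# Assumption: GPT-4 ("early 2023") to GPT-4o ("mid-2024") is roughly 14 months.
moore_annual = 2 ** (12 / 18)       # Moore's law: 2x every 18 months, annualized
claimed_annual = 10                 # the post's claim: 10x cheaper every 12 months
observed_annual = 150 ** (12 / 14)  # the quoted 150x drop over ~14 months, annualized

print(f"Moore's law, annualized:       {moore_annual:.2f}x")    # ~1.59x
print(f"Claimed cost decline per year: {claimed_annual}x")
print(f"GPT-4 -> GPT-4o, annualized:   {observed_annual:.0f}x")  # ~73x
```

Annualized, the quoted 150x drop works out to roughly 73x per year, so his own example outruns his own 10x claim, while Moore's law trails at about 1.6x per year. The numbers are internally consistent; it's the "therefore usage will explode" leap that's doing all the work.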
Nothing in my experience with LLMs or my reading of the literature has ever led me to believe that prompting one to numerically rate something and treating the result as meaningful would be a productive use of someone's time.