this post was submitted on 04 Apr 2024

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]


A while back, I set myself the project of figuring out how much of the MIT undergrad physics curriculum could be taught from free online books. The answer, so far, is more than I had anticipated but much less than what we deserve. But working on that, along with a few other conversations, has got me to wondering. We've seen TESCREAL types be just plain wrong about science many times over the years. Harry Potter and the Methods of Rationality botches Punnett squares and pretty much everything more advanced than that. LessWrong demonstrably has no filter against old-school math crankery. The (ahem) leading light of "effective accelerationism" just plays Mad Libs with physics words. Yudkowsky's declarations about organic chemistry boggle the educated mind. They even manage to be weird about theoretical computer science — what we might call the "lambda calculus is super-Turing!" school of TESCREAL.

Sometimes, the difference between a TESCREAL understanding of science and a legitimate one comes from having studied the subject in a formal way. But not every aspiring autodidact with an interest in molecular biology or the theoretical limits of computation is a lost cause!

So, then: What books come down upon the superficial TESCREAL version of cool things like a ton of scientific bricks? What are the texts that one withdraws from an inside coat pocket and slides across the table, saying "This here is the good shit"?

[–] [email protected] 0 points 8 months ago (1 children)

The committed Rationalists often point out the flaws in science as currently practiced: the p-hacking, the financial incentives, etc. Feeding them more data about where science goes awry will only make them more smug.

The real problem with the Rationalists is that they *think they can do better*, that knowing a few cognitive fallacies and logical tricks will make you better than the doctors at medicine, better than the quantum physicists at quantum physics, etc.

We need to explain that yes, science has its flaws, but it still shits all over pseudo-Bayesianism.

[–] [email protected] 0 points 8 months ago (1 children)

Well this is where I was going with Lakatos. Among the large scale conceptual issues with rationalist thinking is that there isn’t any understanding of what would count as a degenerating research programme. In this sense rationalism is a perfect product of the internet era: there are far too many conjectures being thrown out and adopted at scale on grounds of intuition for any effective reality-testing to take place. Moreover, since many of these conjectures are social, or about habits of mind, and the rationalists shape their own social world and their habits of mind according to those conjectures, the research programme(s) they develop is (/are) constantly tested, but only according to rationalist rules. And, as when the millenarian cult has to figure out what its leader got wrong about the date of the apocalypse, when the world really gets in the way it only serves as an impetus to refine the existing body of ideas still further, according to the same set of rules.

Indeed the success of LLMs illustrates another problem with making your own world, for which I’m going to cheerfully borrow the term “hyperstition” from the sort of cultural theorists of which I’m usually wary. “Hyperstition” is, roughly speaking, where something which otherwise belongs to imagination is manifested in the real world by culture. LLMs (like Elon Musk’s projects) are a good example of hyperstition gone awry: rationalist AI science fiction manifested an AI programme in the real world, and hence immediately supplied the rationalists with all the proof they needed that their predictions were correct in the general if not in exact detail.

But absent the hyperstitional aspect, LLMs would have been much easier to spot as by and large a fraudulent cover for mass data-theft and the suppression of labour. Certainly they don’t work as artificial intelligence, and the stuff that does work (I’m thinking radiology, although who knows when the big news is going to come out that that isn’t all it’s been cracked up to be), i.e. transformers and unbelievable energy-spend on data-processing, doesn’t even superficially resemble “intelligence”. With a sensitive critical eye, and an open environment for thought, this should have been, from early on, easily sufficient evidence, alongside the brute mechanicality of the linguistic output of ChatGPT, to realise that the prognostic tools the rationalists were using lacked either predictive or explanatory power.

But rationalist thought had shaped the reality against which these prognoses were supposed to be tested, and we are still dealing with people committed to the thesis that skynet is, for better or worse, getting closer every day.

Lakatos’s thesis about degenerating research programmes asks us to predict novel evidence and look for corroborative evidence. The rationalist programme does exactly the opposite. It predicts corroborative evidence, and looks for novel evidence which it can feed back into its pseudo-Bayesian calculator. The novel evidence is used to refine the theory, and the predictions are used to corroborate a (foregone) interpretation of what the facts are going to tell us.

Now, I would say, more or less with Lakatos, that this isn’t an amazingly hard and fast rule, and it’s subject to different interpretations. But it’s a useful tool for analysing what’s happening when you’re trying to build a way of thinking about the world. The pseudo-Bayesian tools, insofar as they have any impact at all, almost inevitably drag the project into degeneration, because they have no tool for assessing whether the “hard core” of their programme can be borne out by facts.

[–] [email protected] 0 points 8 months ago (2 children)

(I’m thinking radiology, although who knows when the big news is going to come out that that isn’t all it’s been cracked up to be)

yes, this is a specific area i have a note to self to look into

[–] [email protected] 0 points 8 months ago

From what I have read, it can be a support as long as:

  • It is trained on local data, from the machine and procedures normally used.
  • The accuracy is regularly tested (because any variation in equipment or procedures changes the input data).
  • It is understood as a tool that gives suggestions for the radiologist, not a replacement.

Of course, it cannot be better than the best radiologists around. So the question is whether it is worth it, compared with, for example, hiring more staff.
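The "regularly tested" point above can be sketched as a simple drift check: compare the assist model's recent suggestions against radiologist-confirmed readings from local cases, and flag the tool for re-validation when accuracy falls below the baseline measured at deployment. This is a minimal illustrative sketch; the function names and the tolerance threshold are hypothetical, not from any real radiology deployment.

```python
def accuracy(preds, labels):
    """Fraction of model suggestions that matched the confirmed reading."""
    assert len(preds) == len(labels) and len(preds) > 0
    return sum(p == l for p, l in zip(preds, labels)) / len(preds)

def drift_alert(recent_preds, recent_labels, baseline_acc, tolerance=0.05):
    """True if accuracy on recent local cases fell notably below the
    baseline established when the tool was validated on this site's data."""
    return accuracy(recent_preds, recent_labels) < baseline_acc - tolerance
```

In use, `baseline_acc` would come from validation on local data at deployment time, and `recent_preds`/`recent_labels` from, say, the last month of cases, so that changes in equipment or procedure show up as a drop rather than going unnoticed.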

[–] [email protected] 0 points 8 months ago (1 children)

@dgerard @YouKnowWhoTheFuckIAM About a decade ago I was working with (kinda sorta) a guy who wanted to do a start-em-up that would involve machine recognition of situations from electrocardiograph recordings, in real-time so as to give the cardio outpatient early warning that they should call for help. At that time the buzzword was Machine Learning, but also I looked and found the published research to be voluminous and ongoing for some decades.

[–] [email protected] 0 points 8 months ago

@dgerard @YouKnowWhoTheFuckIAM But the most interesting thing I found was the flash cards. You see, we've been training meat-based neural networks to do this for a while. Now I wonder what I would find if I looked into radiology.