rook

joined 2 years ago
[–] [email protected] 0 points 1 week ago (1 children)

New Ludicity post: https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/

The author is entertaining, and if you’ve not read them before, their past stuff is worth a look.

[–] [email protected] 0 points 1 week ago (1 children)

It isn’t clear to me at this point that such research will ever be funded in English-speaking places without a significant set of regime changes… no politician or administrator can resist outsourcing their own thinking to llm vendors in exchange for funding. I expect the US educational system will eventually provide a terrible warning to everyone (except the UK, whose government looks at the US and says “oh my god, that’s horrifying. How can we be more like that?”).

I’m probably just feeling unreasonably pessimistic right now, though.

[–] [email protected] 0 points 1 week ago (3 children)

Some people casting their eyes over this monster of a paper have less than positive thoughts about it. I’m not going to try to summarise the summaries here, but the threads are short (and vastly shorter than the paper), so reading them won’t take much time.

Dr. Cat Hicks on mastodon: https://mastodon.social/@grimalkina/114690973548997443

Ashley Juavinett on bluesky: https://bsky.app/profile/analog-ashley.bsky.social/post/3lru5sua3fk25

[–] [email protected] 0 points 1 week ago (3 children)

It is related, inasmuch as it’s all generated from the same prompt and the “answer” will be statistically likely to follow from the “reasoning” text. But it is only likely to follow, which is why you can sometimes see a lot of unrelated or incorrect guff in “reasoning” steps that’s misinterpreted as deliberate lying by ai doomers.

I will confess that I don’t know what shapes the multiple “let me just check” or correction steps you sometimes see. It might just be a response stream that is shaped like self-checking. It is also possible that the response stream is fed through a separate llm session which then pushes its own responses into the context window before the response is finished and sent back to the questioner, but that would boil down to “neural networks pattern matching on each other’s outputs and generating plausible response token streams” rather than any sort of meaningful introspection.

I would expect the actual systems used by the likes of openai to be far more full of hacks and bodges and work-arounds and let’s-pretend prompts than either you or I could imagine.

[–] [email protected] 1 points 1 week ago (6 children)

It’s just more llm output, in the style of “imagine you can reason about the question you’ve just been asked. Explain how you might have come about your answer.” It has no resemblance to how a neural network functions, nor to the output filters the service providers use.

It’s how the ai doomers get themselves into a flap over “deceptive” models… “omg it lied about its train of thought!” because of course it didn’t lie, it just emitted a stream of tokens that were statistically similar to something classified as reasoning during training.

[–] [email protected] 0 points 1 week ago (3 children)

I might be the only person here who thinks that the upcoming quantum bubble has the potential to deliver useful things (boring useful things, and so harder to build hype on), but stuff like this particularly irritates me:

https://quantumai.google/

Quantum fucking ai? Motherfucker,

  • You don’t have ai, you have a chatbot
  • You don’t have a quantum computer, you have a tech demo for a single chip
  • Even if you had both of those things, you wouldn’t have “quantum ai”
  • If you did have a very specialist and probably wallet-vaporisingly expensive quantum computer, why the hell would you glue an idiot chatbot to it instead of putting it in the hands of competent experts who could actually do useful stuff with it?

Best case scenario here is that this is how one department of Google gets money out of the other bits of Google, because the internal bean counters cannot control their fiscal sphincters when someone says “ai” to them.

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Turns out some Silicon Valley folk are unhappy that a whole load of waymos got torched, fantasised that the cars could just gun down the protesters, and used genai video to bring their fantasies to some vague approximation of “life”.

https://xcancel.com/venturetwins/status/1931929828732907882

The author, Justine Moore, is an investment partner at a16z. May her future ventures be incendiary and uninsurable.

(via garbageday.email)

[–] [email protected] 0 points 2 weeks ago (1 children)

I was reading a post by someone trying to make shell scripts with an llm, and at one point the system suggested making a directory called ~ (which is shorthand for your home directory in a bunch of unix-alikes). When the user pointed out this was bad, the llm recommended remediation using rm -r ~, which would of course delete all your stuff.
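For anyone who hasn’t hit this trap before, the whole thing turns on quoting: the shell expands an unquoted ~ to your home directory, while a quoted one is just an awkward filename. A quick sketch of the one-character difference (illustrative, not the actual commands from the post):

```sh
mkdir ~        # unquoted: the shell expands ~ to $HOME, so this fails with "File exists"
mkdir './~'    # quoted: creates a directory literally named ~ (still a bad idea)

rm -r './~'    # quoted: removes only that literal ~ directory
rm -r ~        # unquoted: recursively deletes your entire home directory
```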

So, yeah, don’t let the approximately-correct machine do things by itself, when a single character substitution can destroy all your stuff.

And JFC, being surprised that something called “YOLO” might be bad? What were people expecting? --all-the-red-flags

[–] [email protected] 0 points 2 weeks ago (2 children)

LLMs wouldn’t be profitable even if they never had to pay a penny in licence fees. The providers are losing money on every query, and can only be sustained by a firehose of VC money. They’re all hoping for a miracle.

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago) (8 children)

Did you know there’s a new fork of xorg, called x11libre? I didn’t! I guess not everyone is happy with wayland, so this seems like a reasonable…

It's explicitly free of any "DEI" or similar discriminatory policies.. [snip]

Together we'll make X great again!

Oh dear. Project members are of course being entirely normal about the whole thing.

Metux, one of the founding contributors, is Enrico Weigelt, who has such reasonable opinions as “everyone except the nazis were the real nazis in WW2”, and also had an anti-vax (and possibly eugenicist) rant on the linux kernel mailing list, as you do.

I’m sure it’ll be fine though. He’s a great coder.

(links were unashamedly pillaged from this mastodon thread: https://nondeterministic.computer/@mjg59/114664107545048173)

[–] [email protected] 1 points 1 month ago

I like that Soylent Green was set in the far off and implausible year of 2022, which coincidentally was the year of ChatGPT’s debut.
