this post was submitted on 31 Jul 2024
1 points (100.0% liked)

TechTakes

1276 readers
29 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago
MODERATORS
[–] [email protected] 0 points 1 month ago (5 children)

I mean, while this idea is obviously a stupid one, I have seen some suggestion that an AI could be used to help interpret the brain activity of patients who are capable of thought but not communication, helping them communicate with doctors directly rather than having anyone guess what they might have said from prior history.

[–] [email protected] 0 points 1 month ago

🦀 THEY DID NEUROIMAGING ON A DEAD SALMON 🦀

[–] [email protected] 0 points 1 month ago (1 children)

As an autistic who struggles with communication and organizing thoughts, LLMs have been helping me process emotions and articulate things. Not perfectly in the way that you'd describe (hence I mostly don't use LLM outputs themselves as replies), but my situation is much better than pre-November 2022.

[–] [email protected] 0 points 1 month ago (1 children)

It is a shame LLMs weren't designed as a common good for disabled people, though. We're just a happy-accident use case for these companies and AI manufacturers. It's tricky, because this could be done just as well, I figure, with purpose-built LLMs instead of generic ones. @pavnilschanda @CarbonIceDragon

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago) (1 children)

There are some efforts toward LLM use for disabled people, such as GoblinTools. And you're very right that disabled people benefiting from LLMs is a happy use-case accident. With that being the reality, it's frustrating how so many people who blindly defend AI use disabled people as a shield against ethical concerns. Tech companies themselves like to use us to make themselves look good; see the "disability dongle" concept as a prime example.

[–] [email protected] 0 points 1 month ago

Yep! Very familiar! I actually wrote about LLMs and blindness, as an example, here: https://robertkingett.com/posts/6593/ @pavnilschanda

[–] [email protected] 0 points 1 month ago (1 children)

Stephen Hawking had a predictive device in his communication machine. So it's well within reason, but it doesn't need the bullshit AI tag.

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago)

this remark demonstrates a stunning lack of any understanding of anything at all of any of the topics involved in this, amazing

[–] [email protected] 0 points 1 month ago

"could" is a word meaning "doesn't"

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago) (1 children)

I do not recommend using the word "AI" as if it refers to a single thing encompassing all possible systems that incorporate AI techniques. LLM guys don't distinguish between things that could actually be built and "throwing an LLM at the problem" -- by treating their lack of differentiation as valid, you're feeding their hype.

[–] [email protected] 0 points 1 month ago (1 children)

I used a term I've seen used before; I'm not familiar enough with the details of the tech to know what more technical term applies to this kind of device but not to other types, and especially not which term would be generally recognized as referring to it. The hype guys are going to hype themselves up regardless, seeing as that type tends to exist in an echo chamber as far as I can see.

[–] [email protected] 0 points 1 month ago

maybe with blockchain,