this post was submitted on 11 Apr 2024
1315 points (95.8% liked)

Science Memes


Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



[–] [email protected] 19 points 10 months ago (46 children)

That's a pretty shit take. Humankind spent nearly 12 thousand years figuring out the combustion engine, and something like a million years figuring out farming. Compared to that, less than 500 years to create general intelligence would be a blip in time.

[–] [email protected] 18 points 10 months ago* (last edited 10 months ago) (3 children)

I propose that we treat AI as ancillas, companions, muses, or partners in creation and understanding our place in the cosmos.

While there are pitfalls in treating the current generation of LLMs and GANs (or any AI, for that matter) as sentient, there will come a day when we must admit that an artificial intelligence is, practically speaking, self-aware and sentient.

To me, the fundamental question about AI, that will reveal much about humanity, is philosophical as much as it is technical: if a being that is artificially created, has intelligence, and is functionally self-aware and sentient, does it have natural rights?

[–] [email protected] 25 points 10 months ago

I just love the idjits who think that withholding empathy from the people AI bros are trying to put out of work will save them when the algorithms come for their jobs next.

When LeopardsEatingFaces becomes your economic philosophy

[–] [email protected] 4 points 10 months ago (8 children)

I'd love to see some data on the people who believe that AI fundamentally can't do art and the people who believe that AI is an existential threat to artists.

Anecdotally, there seems to be a large overlap between the adherents of what seem to be mutually exclusive positions and I wish I understood that better.

[–] [email protected] 11 points 10 months ago (1 children)

The trick is that there are companies/people who would commission an artist but go for AI instead, because they don't want or need actual art badly enough to pay more for it.

[–] [email protected] -1 points 10 months ago (7 children)

I'm going to try to paraphrase that position to make sure I understand it. Please correct me if I got it wrong.

AI produces something not-actual-art. Some people want stuff that's not-actual-art. Before AI they had no choice but to pay a premium to a talented artist even though they didn't actually need it. Now they can get what they actually need but we should remove that so they have to continue paying artists because we had been paying artists for this in the past?

Is that correct or did I miss or mangle something?

[–] [email protected] 42 points 10 months ago (6 children)

I work in AI. LLMs are cool and all, but I think it's mostly hype at this stage. While some jobs will be lost (voice work, content creation), my true belief is that we'll see two increases:

  1. The release of productivity tools that use LLMs to help automate or guide menial tasks.

  2. The failure of businesses that try to replicate skilled labour using AI.

To head off point two, I would love to see people and lawmakers really crack down on AI replacing jobs, regulating the replacement of job roles with AI until the technology can sufficiently replace a person. If, for example, someone cracks self-driving vehicles, then it should be the responsibility of the owning companies and the government to provide training and compensation so that everyone being "replaced" can find new work. This isn't just to stop people from suffering, but to stop the idiot companies that'll sack their entire HR department, automate it via AI, and then get sued into oblivion because the AI discriminated against someone.

[–] [email protected] 3 points 10 months ago (1 children)

Are you saying that if a company adopts AI to replace a job, they should have to help the replaced workers find new work? Sounds like something one can loophole by cutting the department for totally unrelated reasons before coincidentally realizing that they can have AI do that work, which they totally didn't think of before firing people.

[–] [email protected] 7 points 10 months ago (2 children)

I sincerely doubt AI voiceover will outperform human actors in the next 100 years by any metric, including cost or time savings.

[–] [email protected] 12 points 10 months ago (3 children)

I've also heard that, as far as we can tell, we've basically reached the limit on certain aspects of LLMs already. Basically, LLMs need a FUCK ton of data to be good, and we've already pumped them full of the entire internet, so all we can do now is marginally improve algorithms whose inner workings we barely understand. Think about that: the entire internet isn't enough to fully train LLMs.

LLMs have taken some jobs already (like audio transcription, basic copyediting, and aspects of programming), we're just waiting for the industries to catch up. But we'll need to wait for a paradigm shift before they start producing pictures and books or doing complex technical jobs with few enough hallucinations that we can successfully replace people.
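The "entire internet isn't enough" point above can be made concrete with a back-of-the-envelope calculation. This sketch assumes the widely cited Chinchilla heuristic of roughly 20 training tokens per model parameter, and an assumed figure of about 15 trillion tokens of usable public web text; both numbers are rough illustrations, not measurements:

```python
# Back-of-the-envelope: how much training data a compute-optimal run
# would want, versus a rough estimate of available public web text.
# TOKENS_PER_PARAM follows the Chinchilla heuristic (~20 tokens/param);
# WEB_TEXT_TOKENS is an assumed, illustrative figure.

TOKENS_PER_PARAM = 20
WEB_TEXT_TOKENS = 15e12  # assumed ~15 trillion usable tokens

def optimal_tokens(params: float) -> float:
    """Training tokens a compute-optimal run would want for `params` parameters."""
    return TOKENS_PER_PARAM * params

for params in (70e9, 400e9, 1e12):
    need = optimal_tokens(params)
    print(f"{params / 1e9:>6.0f}B params -> {need / 1e12:5.1f}T tokens "
          f"({need / WEB_TEXT_TOKENS:.2f}x assumed web text)")
```

Under these assumptions, a 1-trillion-parameter model would already want more tokens than the assumed supply of public web text, which is the scaling wall the comment is gesturing at.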

[–] [email protected] 8 points 10 months ago (1 children)

My own personal belief is very close to what you've said. It's a technology that isn't new, but it had been assumed to be worse than compositional models because it would cost a fuck-ton to build and would produce dangerous hallucinations. It turns out both are still true, but people don't particularly care. I also believe that one of the reasons ChatGPT has performed so well compared to other LLM initiatives is that it was trained on a huge amount of stolen data that would get OpenAI in a LOT of trouble.

IMO, the real breakthroughs will be in academia. Now that LLMs are popular again, we'll see more research into how they can be better utilised.

[–] [email protected] 7 points 10 months ago

Nah, fuck HR; they're the shield companies hide behind to discriminate within margins.

I think the proper route is a labor replacement tax to fund retraining and replacement pensions
