this post was submitted on 29 Feb 2024
82 points (85.3% liked)

Asklemmy

43906 readers
1114 users here now

A loosely moderated place to ask open-ended questions


If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not a request for Lemmy usage help or support: for context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

Looking for support?

Looking for a community?


founded 5 years ago

Edit: After reading the discussion, I figured I'd let GPT-4 speak for itself:

The quest to identify true artificial intelligence (AI) indeed presents challenges, especially as AI models become more sophisticated. Let's explore some indicators that researchers and practitioners consider when assessing AI systems:

  1. Understanding Context and Meaning:

    • True AI should demonstrate an understanding of context and meaning. It should not merely generate plausible-sounding sentences but also comprehend the content it produces.
    • A system that can engage in nuanced conversations, infer intent, and provide relevant responses based on context would be a strong indicator of advanced AI.
  2. Generalization and Adaptability:

    • An AI system that can generalize from its training data and adapt to novel situations is valuable.
    • True AI should not be limited to memorizing specific examples but should exhibit flexibility in handling diverse scenarios.
  3. Creativity and Novelty:

    • Creativity is a hallmark of human intelligence. An AI system that generates novel ideas, art, or solutions beyond its training data suggests progress.
    • For instance, AI-generated poetry, music, or visual art that resonates with human emotions could be an indicator.
  4. Ethical Decision-Making:

    • True AI should navigate ethical dilemmas. It should consider consequences, fairness, and societal impact.
    • An AI system that can weigh trade-offs and make ethical choices aligning with human values is a positive sign.
  5. Self-Awareness and Self-Improvement:

    • While self-awareness akin to human consciousness remains elusive, an AI system that can self-assess its performance and seek improvement is promising.
    • Continuous learning, adaptation, and self-correction are essential traits.
  6. Transparency and Explainability:

    • True AI should be transparent about its decision-making process. It should provide explanations for its outputs.
    • Efforts to develop interpretable AI models and techniques for explaining their predictions contribute to trustworthiness.
  7. Robustness and Safety:

    • AI systems must handle adversarial inputs, noisy data, and edge cases gracefully.
    • Robustness against attacks and safety precautions to prevent harmful behavior are critical indicators.
  8. Human-AI Collaboration:

    • Rather than replacing humans, true AI should complement human abilities.
    • Systems that collaborate effectively with humans—such as AI assistants, co-creative tools, or medical diagnostics—demonstrate progress.
  9. Quantifiable Metrics:

    • Researchers propose various metrics to assess AI capabilities, such as perplexity, BLEU score, F1 score, and accuracy.
    • However, no single metric captures the full spectrum of intelligence.
  10. Detection Tools:

    • Researchers are developing tools to differentiate between human- and AI-generated text.
    • These tools analyze features like fluency, word frequency, punctuation patterns, and sentence length¹.
    • While imperfect, they contribute to identifying AI-generated content.
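The metrics in item 9 and the surface features in item 10 are simple to compute. A minimal Python sketch, purely illustrative (it is not any particular detector's implementation, and the feature choices are assumptions based on the features named above):

```python
import string

def f1_score(tp, fp, fn):
    """F1 is the harmonic mean of precision and recall,
    computed here from raw true/false positive/negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def surface_features(text):
    """Crude stylistic features of the kind detectors inspect:
    average sentence length and punctuation rate."""
    # Treat '.', '!', '?' uniformly as sentence terminators.
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = text.split()
    return {
        "avg_sentence_len": len(words) / len(sentences),
        "punct_rate": sum(c in string.punctuation for c in text) / len(text),
    }
```

For example, `f1_score(8, 2, 2)` gives precision 0.8 and recall 0.8, hence F1 = 0.8. Real detectors use far richer features (token log-probabilities, perplexity under a reference model), but the principle is the same: measure statistical regularities rather than "understanding."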

In summary, the arrival of true AI may not be a singular event but a gradual evolution marked by advancements in the above dimensions. As AI continues to progress, our understanding of its capabilities will evolve, and new indicators may emerge.

Source: Conversation with Bing, 2/29/2024
(1) How to spot AI-generated text | MIT Technology Review. https://www.technologyreview.com/2022/12/19/1065596/how-to-spot-ai-generated-text/
(2) Intelligent Supertrend (AI) - Buy or Sell Signal — Indicator by .... https://www.tradingview.com/script/q9244PAH-Intelligent-Supertrend-AI-Buy-or-Sell-Signal/
(3) Indicators - True ALGO. https://truealgo.com/indicators/
(4) Improve Key Performance Indicators With AI - MIT Sloan Management Review. https://sloanreview.mit.edu/article/improve-key-performance-indicators-with-ai/
(5) New AI classifier for indicating AI-written text - OpenAI. https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/

21 comments
[–] [email protected] 7 points 8 months ago

There are no completely accurate tests, and there never will be one. Also, if an AI is conscious, it can easily fake its behavior to pass a test.

[–] [email protected] 11 points 8 months ago

I don't think a test will ever be directly accurate. It will require sandboxing, observations, and consistency across dynamic situations.

How do you test your child for true intelligence, Gom Jabbar?

[–] [email protected] 8 points 8 months ago (5 children)

There simply isn't any reliable way. Forget full AI; LLMs will eventually be indistinguishable.

A good tell would be real-time communication with perfect grammar and diction. If you have a couple of solid minutes of communication and it sounds like something out of a pamphlet, you might be talking to an AI.

[–] [email protected] 12 points 8 months ago (1 children)

I’ll believe it’s true A.I. when it can beat me at Tecmo Super Bowl. No one in my high school or dorm could touch me because they misunderstood the game. Lots of teams can score at any time. Getting stops and turnovers is the key. Tecmo is like Go where there’s always a counter and infinite options.

[–] [email protected] 6 points 8 months ago

Honestly, this is a scientific paper I would like to see submitted. A simple game, but still with plenty of nuance... how would an AI develop a winning strategy?

[–] [email protected] 12 points 8 months ago (4 children)

I think you've misunderstood. An advanced enough AI is supposed to be able to pass the Turing test.

[–] [email protected] 5 points 8 months ago* (last edited 8 months ago)

But now that AI has become advanced enough to get uncomfortably close to us, we need to move the goalposts farther away so everyone can relax again.

[–] [email protected] 43 points 8 months ago (1 children)

If you come up with a test, people develop something that does exactly what the test needs, and ignores everything else.

But we can't even say what human consciousness is yet.

Like, legitimately, we don't know what causes it and we don't know how anaesthesia interferes either.

One of the guys who finished up Einstein's work (Roger Penrose) thinks it has to do with quantum collapse. But there's a weird twilight zone where anesthesia has stopped consciousness but hasn't stopped that quantum process yet.

So we're still missing something, and the dude's in his 90s. He's been working on this for decades, but he'll probably never live to see it finished. Someone else will have to finish it later, like he and Hawking did for Einstein.

[–] [email protected] 9 points 8 months ago (2 children)

"Because quantum" always feels like new-age woo-woo bullshit.

It's more likely just too vague to define.

[–] [email protected] 9 points 8 months ago

Plot Twist: OP works at OpenAI and is asking "for a friend". :)

[–] [email protected] 68 points 8 months ago (4 children)

You reach down and you flip the tortoise over on its back, Leon. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can't. Not without your help. But you're not helping... why is that, Leon?

[–] [email protected] 1 points 8 months ago

Because I'm a tortoise, too.

[–] [email protected] 26 points 8 months ago (1 children)

I always loved the theory that the test was as accurate as lie detectors. The test can't tell if you're lying, just if you're nervous.

That's why the smoking bot passed. There were other subtle clues that Deckard picked up on, but she believed she was human, so she passed.

A normal person would just answer, but a robot would try to think like a human and panic, because they were just like humans and that's what a human would do in that situation.

[–] [email protected] 23 points 8 months ago (1 children)

Oh, it's worse than that.

It's been a long time since I read the book, but IIRC, Nexus-6 replicants were indistinguishable from humans except with a Voight-Kampff test. While Dick didn't say it, that strongly implies that replicants were actually clones that were given some kind of accelerated aging and instruction. The Voight-Kampff test was only testing social knowledge, information that replicants hadn't learned because they hadn't been socialized in the same society as everyone else.

And, if you think about the questions that were asked, it's pretty clear that almost everyone that's alive right now would fail.

[–] [email protected] 12 points 8 months ago (1 children)
[–] [email protected] 6 points 8 months ago (1 children)
[–] [email protected] 5 points 8 months ago (1 children)

Not quite. Land turtles are omnivores; tortoises are herbivores.

[–] [email protected] 9 points 8 months ago (1 children)

So if I'm understanding this right... There are turtles that live predominantly on land, which eat meat and plants, and there are tortoises which live on land that only eat plants?

[–] [email protected] 1 points 8 months ago (1 children)

What about tortoises that only eat seafood?
