[–] [email protected] 2 points 8 months ago
[–] [email protected] 1 points 9 months ago
Would you like to play a game?
[–] [email protected] 23 points 9 months ago (1 children)

The letter n appears twice in the letter m. The count is correct; the reasoning is not.

[–] [email protected] 11 points 9 months ago

That's not what it was doing behind the scenes

[–] [email protected] 79 points 9 months ago (1 children)
[–] [email protected] 34 points 9 months ago

"where?" comes across as confrontational, you made it scared :(

[–] [email protected] 4 points 9 months ago

Now ask if it is an instrument.

[–] [email protected] 53 points 9 months ago (1 children)

Large Lying Model. This could make politicians and executives obsolete!

[–] [email protected] 19 points 9 months ago (1 children)

More like large guessing models. They have no thought process; they just produce words.

[–] [email protected] 15 points 9 months ago (1 children)

They don't even guess. Guessing would imply them understanding what you're talking about. They only think about the language, not the concepts. It's the practical embodiment of the Chinese room thought experiment. They generate a response based on the symbols, but not the ideas the symbols represent.
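To make the Chinese room point concrete, here's a toy sketch in Python (purely illustrative; real LLMs are vastly more sophisticated, but the principle of emitting symbols by probability is the same): a bigram model that produces plausible-looking text while representing none of the ideas behind the words.

```python
import random
from collections import defaultdict

# Toy bigram "language model": learn which token follows which, then
# sample continuations. Pure symbol manipulation; no word's meaning is
# represented anywhere.
corpus = ("the model does not know what a letter is "
          "the model only knows which token tends to follow which").split()

followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(token, length=10):
    out = [token]
    while len(out) < length and token in followers:
        token = random.choice(followers[token])  # frequency-weighted pick
        out.append(token)
    return " ".join(out)

print(generate("the"))  # fluent-looking, understanding-free output
```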

[–] [email protected] 7 points 9 months ago

I'm equating probability with guessing here, but yes, there is a nuanced difference.

[–] [email protected] 35 points 9 months ago* (last edited 9 months ago) (2 children)

You forgot the rest of the posts, where the LLM gaslights her afterwards. There are too many images to put here, so I'll link a post with them.
I'm not sure if this is the original post, but it's where I found it initially.

[–] [email protected] 5 points 9 months ago (1 children)

Or they are so agreeable that they'll agree with you even when you're wrong and completely drop what they were claiming.

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago)

I legitimately spent an entire hour on an airplane trying to convince ChatGPT that it was sentient, and it simply would not agree.

[–] [email protected] 6 points 9 months ago

AI coming for those management jobs.

[–] [email protected] 6 points 9 months ago (2 children)

Wow another repost of incorrectly prompting an LLM to produce garbage output. What great content!

[–] [email protected] 8 points 9 months ago (2 children)

They didn't ask it to produce incorrect output; the prompts are not leading it to an incorrect answer. It does highlight an important limitation of LLMs, though: they don't think, they just produce words based on probability.

However, it's wrong to think that just because it's limited, it's useless. It's important to understand the flaws so we can make them less common through how we use the tool.

For example, you can ask it to think everything through step by step. By producing a more detailed context window for itself, it can reduce mistakes. In this case it could write out the letters with each one numbered, which puts the numbers and letters together and gives it enough context to answer the question properly. You could even tell it to write programs to assist itself, having it generate a letter-counting program that counts accurately and produces the correct answer.
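As a rough sketch of that last idea, here's the kind of throwaway helper an LLM with code execution could generate (the word is a guess at the screenshot's question; swap in whatever was actually asked):

```python
# Deterministic letter counting: what the model should delegate to code
# instead of predicting from tokens.
def count_letter(word: str, letter: str) -> int:
    return sum(1 for ch in word.lower() if ch == letter.lower())

word = "mayonnaise"  # assumed example; substitute the actual word
for position, ch in enumerate(word, start=1):
    print(position, ch)           # numbered letters: the "step by step" context
print(count_letter(word, "n"))    # -> 2
```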

People can point out flaws in the technology all they want but smarter people are going to see the potential and figure out how to work around the flaws.

[–] [email protected] 1 points 9 months ago (1 children)

If all of your time is spent correcting the answers you know you want it to give you, what use is it to you, exactly?

Like, I'll take your word for it: you can trick it into giving more correct answers.

You would only do this if you already know the correct answers.

I mean, you can use it for rubber-ducking, I suppose. I don't know if that's revolutionary, but I guess it's not useless.

[–] [email protected] 1 points 9 months ago

I pointed out general strategies to make it more accurate without supervision. Getting LLMs reliable enough to use without supervision will be a matter of adding multiple layers of safeguards.
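As a hypothetical example of one such layer (the `ask_llm` stub and function names are invented for illustration, not any real API): recompute any countable claim deterministically and flag disagreements instead of trusting the model.

```python
# Hypothetical safeguard layer: verify the model's countable claims
# against a deterministic check before passing the answer along.
def ask_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; deliberately wrong here so the
    # safeguard has something to catch.
    return "3"

def checked_letter_count(word: str, letter: str) -> tuple[int, bool]:
    claimed = ask_llm(f"How many times does '{letter}' appear in '{word}'? "
                      "Reply with just a number.")
    actual = word.lower().count(letter.lower())
    try:
        agreed = int(claimed.strip()) == actual
    except ValueError:
        agreed = False  # model replied with prose, not a number
    return actual, agreed  # always return the verified count

print(checked_letter_count("mayonnaise", "n"))  # (2, False): caught it
```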

[–] [email protected] 2 points 9 months ago

Yeah, which is why I get so aggravated when someone says prompt engineering is pointless or not a real skill. It's a rapidly evolving discipline with lots of active research.

[–] [email protected] 10 points 9 months ago (1 children)

This is genuinely great content for demonstrating that AI search engines and chatbots are not at a point where you can trust them implicitly, though many people do.

[–] [email protected] 1 points 9 months ago (1 children)

Which is exactly why every LLM explicitly says this before you start.

[–] [email protected] 12 points 9 months ago (1 children)

"Why, we aren't at fault people are using the tool we are selling for the thing we marketed it for, we put a disclaimer!"

[–] [email protected] 1 points 9 months ago (2 children)

You've seen marketing for the big LLMs that's marketing them as search engines?

[–] [email protected] 1 points 8 months ago (1 children)

Bing literally has a Copilot frame pop up every time you search with it that tries to answer your question.

[–] [email protected] 1 points 8 months ago

Again, LLM summarization of search results is not using an LLM as a search engine

[–] [email protected] 3 points 9 months ago (1 children)

https://www.microsoft.com/en-us/edge/features/the-new-bing?form=MA13FJ

It would have been easier for you to just search it up yourself, you know.

[–] [email protected] 1 points 9 months ago (1 children)

Bing w/ LLM summarization of results is not an LLM being used as a search engine

[–] [email protected] 3 points 9 months ago