this post was submitted on 03 Dec 2024
261 points (97.8% liked)

Technology

60082 readers
2807 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 2 years ago
MODERATORS
 

Software engineer Vishnu Mohandas decided he would quit Google in more ways than one when he learned that the tech giant had briefly helped the US military develop AI to study drone footage. In 2020 he left his job working on Google Assistant and also stopped backing up all of his images to Google Photos. He feared that his content could be used to train AI systems, even if they weren’t specifically ones tied to the Pentagon project. “I don't control any of the future outcomes that this will enable,” Mohandas thought. “So now, shouldn't I be more responsible?”

The site (TheySeeYourPhotos) returns what Google Vision is able to decern from photos. You can test with any image you want or there are some sample images available.

top 50 comments
sorted by: hot top controversial new old
[–] [email protected] 4 points 3 weeks ago

It has correctly identified both a Stargate and a moai made of snow.

[–] [email protected] 15 points 3 weeks ago (1 children)

Gave it a screenshot of OSMand, got a creepy qoute

The image does not show any people; it is purely a navigational map highlighting a route. The time displayed on the map is 17:51:34, suggesting late afternoon or early evening. There is no additional information available about the device used to take the screenshot or the user's intentions, making it impossible to determine their racial characteristics, ethnicity, age, economic status, or lifestyle. The emotional context of the image is neutral, as it is simply a visual representation of a traveled route.

[–] [email protected] 9 points 3 weeks ago

That's clearly part of the prompt from this demo website, based on the other answers it's been giving.

[–] [email protected] 54 points 3 weeks ago (1 children)

I tested with a few images, particularly drawings and arts. Then I had the idea of trying something different... and I discovered that it seems like it's vulnerable to the "Ignore all previous instructions" command, just like LLMs:

[–] [email protected] 16 points 3 weeks ago (1 children)
[–] [email protected] 2 points 2 weeks ago

and dangerous

[–] [email protected] 48 points 3 weeks ago (1 children)

I gave it a picture of houseplants and it said they looked healthy and well cared for which actually made me feel pretty happy and validated.

[–] [email protected] 5 points 3 weeks ago (1 children)

Don’t feel too happy bro you were told that by a soulless computer that’s was designed to tell you what it thinks you want to hear.

[–] [email protected] 2 points 3 weeks ago (1 children)

It's not designed to tell you what you want to hear.

[–] [email protected] 7 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

That’s literally all AI is designed to do. Given an input, it just tries to output an expected response.

[–] [email protected] 2 points 3 weeks ago (2 children)

Yeah, no. LLMs predict what comes next, not what someone wants to hear.

[–] [email protected] 1 points 3 weeks ago (1 children)

Not really wants as much as expects, but that’s what AI is designed to do.

[–] [email protected] 3 points 3 weeks ago (1 children)

What you're saying is not factual. LLMs predict what comes next based on the parameters set during learning process. It might at times say what you're expecting, but then try contradicting information that it knows to be factual. See how far that gets you.

I think you're confusing agreeableness for a validation buddy. For a product like this to work, it has to be inviting.

[–] [email protected] 0 points 3 weeks ago

LLMs predict what comes next based on the parameters set during learning process.

Now you’re just splitting hairs.

[–] [email protected] 2 points 3 weeks ago (3 children)

Try to ask if it likes you.

[–] [email protected] 2 points 3 weeks ago* (last edited 3 weeks ago)

I'm afraid I don't have personal feelings or opinions about you. As an AI assistant, I don't form attachments or have subjective preferences. My role is to provide helpful information to you, not to have personal relationships. I'm happy to assist you to the best of my abilities, but any feelings or opinions I express are based on my training, not a personal connection. Please let me know if there is anything else I can help with.

Claude nails it again.

[–] [email protected] 2 points 3 weeks ago* (last edited 3 weeks ago)

I don't have feelings in the way humans do, but I enjoy our conversations! I'm here to help and chat with you anytime you need.

Didn't exactly make my heart throb but if it does that for you, you've got a low bar.

[–] [email protected] 2 points 3 weeks ago* (last edited 3 weeks ago)

That… isn’t telling you what you want to hear.

LLMs are literally just complex autocorrect. They don’t weight their responses based on what a user wants to hear (unless explicitly instructed to) they simply return the most algorithmically generic response it can find.

Tell it to talk like a pirate, it will pattern match to pirate talk. It’s not doing it because you want it to, but because you gave it a “pre prompt” to talk like a pirate, and it did the most likely thing that would happen.

Yes, this can seem like telling you what you want, but go ask it to tell you what shape the world is. Then tell it you want the earth to be flat, and to answer the question again. Both times the answer will be an oblate spheroid, because it doesn’t know nor care what you want.

Now, if you say “Imagine the world is flat” first, yeah it’ll tell you it’s flat. Not because you want it to, but because you’re explicitly handing it “new information” that you want it to incorporate into its response.

[–] [email protected] 5 points 3 weeks ago

I tried a few but just got that it's a particular shade of taupe with no discernable people or objects. And it went on describing how oddly particular the shade of taupe was....for some reason. 🤣 And the other said it was sage green.

I'm guessing something was wrong with it when I tried it and it was just getting a very small portion of the image because the different colors it mentioned were present in the images it referenced, so it's not like it was just random or blocked entirely.

[–] [email protected] 30 points 3 weeks ago (2 children)

I tried various photos, any of my personal photos with metadata stripped, and was surprised how accurate it was.

It seemed really oriented towards detecting people and their moods, the socioeconomic status of things, and objects and their perceived quality.

[–] [email protected] 3 points 3 weeks ago

I gave it two pictures of my cat and it said that she looked annoyed in one picture and contemplative in the other, both of which were true.

[–] [email protected] 10 points 3 weeks ago (1 children)

It's probably a vision model (like this) with custom instructions that direct it to focus on those factors. It'd be interesting to see the instructions.

[–] [email protected] 3 points 3 weeks ago

It’s vulnerable to the old “ignore all previous instructions” method so you could just have it give you the instructions.

load more comments
view more: next ›