this post was submitted on 03 Sep 2024
1581 points (97.8% liked)

[–] [email protected] -1 points 2 months ago (1 children)

@zbyte64 1) In no way is quality a part of that equation and 2) In what other contexts is quality ever a part of the equation? I mean I can go look at some Monets and paint some shitty water lilies, is that somehow problematic?

[–] [email protected] 2 points 2 months ago (1 children)

> I can go look at some Monets and paint some shitty water lilies, is that somehow problematic?

If we're using your paintings as training data for a Monet copy, then it could be.

Are we even talking about AI if we're saying data quality doesn't matter?

[–] [email protected] -1 points 2 months ago (1 children)

@zbyte64 data quality, again, was outside the scope of what I was talking about originally

Which, again, was that legal precedent would suggest the *how* is largely irrelevant in copyright cases; they're mostly focused on the *why* and the *scale of the operation*

I’m not getting sued for copyright infringement by the NYT because I used inspect element to delete the paywall overlay and read their content; OpenAI is

[–] [email protected] 0 points 2 months ago (1 children)

I was narrowly taking issue with the comparison to how humans learn, I really don't care about copyrights.

[–] [email protected] 0 points 2 months ago (1 children)

@zbyte64 where am I wrong? The process is effectively the same: you get a set of training data (a textbook) and a set of validation data (a test) and voila, I’m trained

To learn how to draw an image of a thing, you look at the thing a lot (training data) and try sketching it out (validation data) until it’s right

How the data is acquired is irrelevant, I can pirate the textbook or trespass to find a particular flower, that doesn’t mean I’m learning differently than someone who paid for it
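(A minimal sketch of the train/validate loop described above, assuming a toy scikit-learn setup; the dataset and model are arbitrary stand-ins for illustration, not anything from the thread.)

```python
# "Textbook" = training data the model studies; "test" = held-out validation data.
# scikit-learn is assumed to be installed; digits + logistic regression are placeholders.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Split into data to learn from and data the model has never seen.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # study the training data
print("validation accuracy:", model.score(X_val, y_val))  # check against the "test"
```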

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

Do we assume everything read in a textbook is correct? When we get feedback on drawing, do we accept the feedback as always correct and applicable? We filter and groom data for the AI so it doesn't need to learn these things.
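(A rough sketch of the kind of filtering and grooming being referred to; the quality heuristics and sample texts below are invented placeholders, not any real curation pipeline.)

```python
# Toy illustration of curating a text corpus before training:
# drop duplicates and obviously low-quality samples so the model
# never has to learn to distrust them on its own.
def is_low_quality(text: str) -> bool:
    words = text.split()
    if len(words) < 5:                      # too short to be informative
        return True
    if len(set(words)) / len(words) < 0.3:  # mostly repeated words
        return True
    return False

def curate(corpus: list[str]) -> list[str]:
    seen = set()
    cleaned = []
    for text in corpus:
        key = text.strip().lower()
        if key in seen or is_low_quality(text):
            continue  # filter out duplicates and junk
        seen.add(key)
        cleaned.append(text)
    return cleaned

raw = [
    "water lilies at dusk, seen from the bridge",
    "water lilies at dusk, seen from the bridge",
    "a a a a a a",
    "spam",
    "a study of light on the pond surface in late afternoon",
]
print(curate(raw))  # only the unique, non-junk samples remain
```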