[–] [email protected] 10 points 1 month ago (5 children)

I only have a limited and basic understanding of machine learning, but doesn't training a model basically work like this: "you, machine, spit out several versions of stuff, and I, the programmer, give you a way of evaluating how 'good' they are, so over time you 'learn' to generate better stuff"? Theoretically, giving a newer model the output of a previous one should improve the result, if the new model has a way of evaluating "improved". In code, my mental model is something like the toy loop below.
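
A toy hill-climbing sketch of that generate-and-score loop, where `evaluate` plays the role of the programmer-supplied judge (the scoring function and the target value 42 are made up purely for illustration):

```python
import random

def evaluate(candidate):
    # Stand-in for the programmer-supplied judge: higher is better,
    # with the best possible candidate at 42.
    return -abs(candidate - 42)

best = random.uniform(0, 100)
for step in range(1000):
    # "Spit out" a variation of the current best guess...
    candidate = best + random.gauss(0, 1)
    # ...and keep it only if the judge scores it higher.
    if evaluate(candidate) > evaluate(best):
        best = candidate

print(best)  # ends up close to 42
```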

If I feed an ML model pictures of eldritch beings and tell it "this is what a human face looks like", I don't think it's surprising that quality deteriorates. What am I missing?

[–] [email protected] 8 points 1 month ago* (last edited 1 month ago) (1 children)

In this case, the models are given part of the text from the training data and asked to predict the next word. This appears to work decently well on the pre-2023 internet; it's what brought us ChatGPT and friends.
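
A stripped-down sketch of that next-word-prediction objective, with a toy five-word vocabulary and a single linear layer standing in for the actual transformer (everything here is invented for illustration):

```python
import torch
import torch.nn as nn

vocab = ["the", "cat", "sat", "on", "mat"]
token_ids = torch.tensor([0, 1, 2, 3, 0, 4])  # "the cat sat on the mat"

embed = nn.Embedding(len(vocab), 16)
head = nn.Linear(16, len(vocab))
opt = torch.optim.Adam(list(embed.parameters()) + list(head.parameters()))

for step in range(100):
    # Input is every token except the last; the target is the same
    # sequence shifted by one, i.e. "predict the next word".
    inputs, targets = token_ids[:-1], token_ids[1:]
    logits = head(embed(inputs))
    loss = nn.functional.cross_entropy(logits, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```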

This paper is claiming that when you train LLMs on the output of other LLMs, the result is garbage. The problem is that the quality of a guess is evaluated against the training data, not by some external, intelligent judge.
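
A crude way to see the mechanism with a Gaussian standing in for the LLM (sample size and generation count picked arbitrarily): each generation fits the previous generation's samples, estimation error compounds, and the distribution's variance collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Original "human" data.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(201):
    # "Train" a model on the current data by fitting mean and std...
    mu, sigma = data.mean(), data.std()
    if generation % 50 == 0:
        print(f"generation {generation:3d}: std = {sigma:.3f}")
    # ...then train the next generation only on this model's output.
    data = rng.normal(loc=mu, scale=sigma, size=50)
```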

[–] [email protected] 2 points 1 month ago

Ah, I get what you're saying, thanks! "Good" means that the machine's output should be statistically similar to the training data (as captured across billions of parameters), so if the training data gradually gains more examples of, e.g., noses attached to the wrong side of the head, the model also grows more likely to generate similar output.
