this post was submitted on 26 Mar 2024
377 points (96.5% liked)

Programmer Humor

[–] [email protected] 40 points 1 year ago (1 children)
[–] [email protected] 55 points 1 year ago (3 children)

No, this is because the testing set can be derived from the training set.

Overfitting alone can't get you to 1.
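A minimal Python sketch of the leakage point (all names hypothetical): even a "model" that only memorizes its training pairs scores a perfect 1.0 when the test set is drawn from the training set.

```python
def train(pairs):
    # "Training" here is pure memorization: store every (input, label) pair
    return dict(pairs)

def accuracy(model, pairs):
    correct = sum(model.get(x) == y for x, y in pairs)
    return correct / len(pairs)

training_set = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]
test_set = training_set[:2]  # leaked: test examples come straight from training data

model = train(training_set)
print(accuracy(model, test_set))  # 1.0 -- a perfect score from leakage, not skill
```

The score says nothing about generalization, which is why leakage (not overfitting) is what gets you exactly 1.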

[–] [email protected] 3 points 1 year ago

It can if you don't do a train-test split.

But even if you consider the training set only, having zero loss is definitely a bad sign.

[–] [email protected] 10 points 1 year ago (2 children)

So as an ELI5: the idea is basically that you have to "ask" it about stuff it has never seen before? AI came after my time in higher education.

[–] [email protected] 3 points 1 year ago

Yes, it's called a train-test split, and it's often 80/20 or thereabouts.
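An 80/20 split can be sketched in plain Python (hypothetical helper name; libraries like scikit-learn provide a ready-made version):

```python
import random

def train_test_split(data, test_fraction=0.2, seed=0):
    """Shuffle a copy of the data, then hold out the last test_fraction as the test set."""
    rng = random.Random(seed)       # fixed seed so the split is reproducible
    shuffled = data[:]              # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))
train_set, test_set = train_test_split(data)
print(len(train_set), len(test_set))  # 80 20
```

The model only ever sees `train_set`; `test_set` is kept aside to estimate performance on unseen data.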

[–] [email protected] 20 points 1 year ago (2 children)

Yes.

You train it on some data, and ask it about different data. Otherwise it just hard-codes the answers.
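The "hard-codes the answers" failure mode, as a tiny hypothetical sketch: a memorizing model is perfect on its own training data but has nothing to say about inputs it has never seen.

```python
def train(pairs):
    # memorize every (input, label) pair instead of learning a rule
    return dict(pairs)

training_set = [(x, x * x) for x in range(5)]  # "learning" squaring by rote
model = train(training_set)

print(all(model.get(x) == y for x, y in training_set))  # True: answers are hardcoded
print(model.get(10))  # None: it never generalized to unseen inputs
```

Asking about different data is what exposes the difference between memorizing and actually learning the pattern.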

[–] [email protected] 1 points 1 year ago

Gotcha, thank you!

[–] [email protected] 7 points 1 year ago

They're just like us.

[–] [email protected] 2 points 1 year ago