this post was submitted on 09 Mar 2024
99 points (89.6% liked)

Technology

So-called "emergent" behavior in LLMs may not be the breakthrough that researchers think.

[–] [email protected] 12 points 8 months ago

The term "emergent behavior" is used in a very narrow and unusual sense here. According to the common definition, pretty much everything that LLMs and similar AIs do is emergent. We can't figure out what a neural net does by studying its parts, just like we can't figure out what an animal does by studying its cells.

We know that bigger models perform better on benchmarks. When we train larger and larger models of the same type, we can predict fairly well how they will score as a function of their size. But some skills seem to appear suddenly.
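To put rough numbers on the "predictable" part: language-model scaling fits usually express loss as a simple power law in parameter count. Here's a toy Python sketch; the constants are illustrative values in the ballpark of published scaling-law fits, not a claim about any particular model:

```python
# Toy sketch of a power-law scaling fit: loss falls smoothly and
# predictably as parameter count N grows. Constants are illustrative,
# roughly in the range reported in the scaling-laws literature.

def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    # L(N) = (N_c / N) ** alpha -- no sudden jumps anywhere in this curve
    return (n_c / n_params) ** alpha

for n in [1e8, 1e9, 1e10, 1e11]:
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.2f}")
```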

Think about someone starting to exercise. Maybe they can't do a pull-up at first, but they try every day. Until one day they can. They were improving the whole time across the various exercises they did, but the improvement didn't show up in this one particular thing. The sudden, unpredictable emergence of this ability is, in a sense, an illusion.
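To make that analogy concrete, here's a toy Python sketch (entirely made-up numbers) of how a steadily improving quantity looks like a sudden jump when you only measure a pass/fail outcome:

```python
# "strength" grows by the same small amount every day (toy model, not data)
strength = [day * 0.5 for day in range(1, 21)]  # arbitrary units

# the pull-up is all-or-nothing: you either clear the bar or you don't
THRESHOLD = 8.0

for day, s in enumerate(strength, start=1):
    can_do_pullup = s >= THRESHOLD
    print(f"day {day:2d}: strength={s:4.1f}  pull-up={'yes' if can_do_pullup else 'no'}")

# The "pull-up" column flips from "no" to "yes" on a single day, even though
# the underlying strength improved identically every day. The continuous
# measure shows no jump; the binary one does.
```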

For a literal answer, I will quote:

[Emergent abilities appear in an] arithmetic benchmark that tests 3-digit addition and subtraction, as well as 2-digit multiplication. GPT-3 and LaMDA (Thoppilan et al., 2022) have close-to-zero performance for several orders of magnitude of training compute, before performance jumps to sharply above random at [13B parameters] for GPT-3, [68B parameters] for LaMDA. Similar emergent behavior also occurs at around the same model scale for other tasks, such as transliterating from the International Phonetic Alphabet, recovering a word from its scrambled letters, and Persian question-answering. Even more emergent abilities from BIG-Bench are given in Appendix E.
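To make the quoted benchmark concrete, here's a rough sketch (my own illustration, not the paper's actual harness) of exact-match scoring on 3-digit addition. The point is that a model that's almost right on every problem still scores exactly zero, which is what makes the eventual jump look so sharp:

```python
# Exact-match scoring: an answer only counts if it equals the target string
# exactly, so partial competence earns zero credit.
import random

def score_exact_match(predictions: dict[tuple[int, int], str]) -> float:
    """Fraction of (a, b) -> answer predictions that exactly equal str(a + b)."""
    correct = sum(1 for (a, b), ans in predictions.items() if ans == str(a + b))
    return correct / len(predictions)

# Hypothetical model outputs: every answer is off by exactly one.
problems = [(random.randint(100, 999), random.randint(100, 999)) for _ in range(100)]
almost_right = {(a, b): str(a + b + 1) for (a, b) in problems}

print(score_exact_match(almost_right))  # 0.0 -- near-misses are invisible to the metric
```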