[–] [email protected] 99 points 4 weeks ago (5 children)

I remember when lossy compression first got popular, with formats like MP3 and JPEG; people would run experiments where they converted lossy to lossy to lossy to lossy, over and over, and then shared the final image, which was this overcooked nightmare
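(For anyone who wants to recreate that generation-loss effect: a minimal sketch with Pillow, where the file names and quality setting are just placeholders.)

```python
from PIL import Image

# Re-encode the same picture as JPEG over and over.
# Every save at a lossy quality discards detail, so artifacts compound.
img = Image.open("original.png").convert("RGB")  # placeholder input file
for _ in range(100):
    img.save("generation.jpg", quality=75)  # lossy re-encode
    img = Image.open("generation.jpg")      # reload so the loss accumulates
img.save("overcooked.jpg")  # the final, artifact-ridden result
```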

I wonder if a similar dynamic applies to the scenario presented in the comic, with AI summarization and expansion of topics. Start with a few bullet points, have it expand those into a paragraph or so, have it summarize that back down to bullet points, repeat 4-5 times, then see how far you end up from the original point.
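(A rough sketch of that loop using the OpenAI Python SDK; the model name, prompts, and starting bullets are placeholder assumptions, and each call is a fresh request with no shared chat history.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    # Each call is an independent request: no conversation history carries over.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

text = "- point one\n- point two\n- point three"  # placeholder starting bullets
for _ in range(5):
    text = ask(f"Expand these bullet points into a short paragraph:\n{text}")
    text = ask(f"Summarize this paragraph as bullet points:\n{text}")
print(text)  # compare against the original bullets to see the drift
```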

[–] [email protected] 11 points 4 weeks ago (2 children)

i was curious so i tried it with chatgpt. here are the chat links:

first expansion / first summary
second expansion / second summary
third expansion / third summary
fourth expansion / fourth summary
fifth expansion / fifth summary
sixth expansion / sixth summary

overall it didn't seem too bad. it sort of started focusing on the ecological and astrobiological side of the same topic, but didn't completely drift. to be honest, i think it would have done a lot worse if i had made the prompts less specific. if they were just "summarize this text" and "expand on these points", i think chatgpt would get very distracted

[–] [email protected] 1 points 3 weeks ago (1 children)

Doesn't ChatGPT remember the context of the previous question and text?

Maybe differences between accounts and LLMs make a bigger difference.

[–] [email protected] 1 points 3 weeks ago

that's why i ran every request in a different chat session

[–] [email protected] 4 points 4 weeks ago

Interesting. I also wonder how it would fare across different models (e.g. user A uses ChatGPT, user B uses Gemini, user C uses DeepSeek, etc.), as that may mimic real-world use (such as what's depicted in the comic) more closely.

[–] [email protected] 7 points 4 weeks ago

People do that with Google Translate as well.

[–] [email protected] 5 points 4 weeks ago (1 children)

Do humans do this as well, and if not, why not?

[–] [email protected] 49 points 4 weeks ago (1 children)

A couple decades ago, novelty and souvenir shops would sell stuffed parrots which would electronically record a brief clip of what they heard and then repeat it back to you.

If you said "Hello" to a parrot and then set it down next to another one, it took only a couple of iterations between the parrots to turn it into high pitched squealing.

[–] [email protected] 52 points 4 weeks ago* (last edited 4 weeks ago) (4 children)

In my experience, LLMs aren't really that good at summarizing

It's more like they can "rewrite more concisely" which is a bit different

[–] [email protected] 2 points 4 weeks ago

you mean hallucinate

[–] [email protected] 38 points 4 weeks ago

Summarizing requires understanding what's important, and LLMs don't "understand" anything.

They can reduce word counts, and they have some statistical models that can tell them which words are fillers. But, the hilarious state of Apple Intelligence shows how frequently that breaks.

[–] [email protected] 23 points 4 weeks ago (2 children)

I used to play this game with Google translate when it was newish

[–] [email protected] 17 points 4 weeks ago (2 children)

There is, or maybe was, a YouTube channel that would run well known song lyrics through various layers of translation, then attempt to sing the result to the tune of the original.

[–] [email protected] 5 points 4 weeks ago

🎵Once you know which one, you are acidic, to win!🎵

[–] [email protected] 8 points 4 weeks ago (1 children)

Gradually watermelon... I like shapes.

Twisted Translations

[–] [email protected] 5 points 4 weeks ago

Sounds about right to me.

[–] [email protected] 8 points 4 weeks ago

translation party!

Throw Japanese into English into Japanese into English ad nauseam, until an 'equilibrium' statement is reached.

... Which was quite often nowhere near the original statement, in either language... but at least the translation algorithm agreed with itself.
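(The round-trip loop itself is trivial to script; here's a sketch where `translate` is a hypothetical stand-in for whatever translation API you'd actually call.)

```python
def translate(text: str, target: str) -> str:
    # Hypothetical stand-in: swap in a real translation API call here.
    return text  # identity placeholder so the sketch runs as written

phrase = "The spirit is willing, but the flesh is weak."  # placeholder input
previous = None
while phrase != previous:  # stop once the English round-trip stops changing
    previous = phrase
    phrase = translate(translate(phrase, "ja"), "en")  # EN -> JA -> EN
print(phrase)  # the 'equilibrium' statement the loop settled on
```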

[–] [email protected] 4 points 4 weeks ago (1 children)

If it isn't accurate to the source material, it isn't concise.

LLMs are good at reducing word count.

[–] [email protected] 1 points 3 weeks ago

In case you haven't seen it, Tom7 created a delightful exploration of using an LLM to manipulate word counts.