this post was submitted on 28 Jan 2024
1046 points (97.6% liked)

Programmer Humor

19511 readers
336 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.


founded 1 year ago
[–] [email protected] 79 points 9 months ago* (last edited 9 months ago) (4 children)

It literally cannot come up with novel solutions because its goal is to regurgitate the most likely response to a question based on training data from the internet. Considering that the internet is often trash and getting trashier, I think LLMs will only get worse over time.

[–] [email protected] 49 points 9 months ago (2 children)

I said this a while ago but you know how we have "pre-atomic" steel? We are going to have pre-LLM data sets.

[–] [email protected] 18 points 9 months ago

Low-background steel, also known as pre-war steel, is any steel produced prior to the detonation of the first nuclear bombs in the 1940s and 1950s. Typically sourced from ships (either as part of regular scrapping or shipwrecks) and other steel artifacts of this era, it is often used for modern particle detectors because more modern steel is contaminated with traces of nuclear fallout.[1][2]

Very interesting, today I learned.

[–] [email protected] 16 points 9 months ago

That's the reason ChatGPT 3.5 is still great for anything prior to its cutoff date. It's not constantly being updated with new garbage.

[–] [email protected] 52 points 9 months ago (2 children)

AI has poisoned the well it was fed from. The only solution to get a good AI moving forward is to train it using curated data. That is going to be a lot of work.

On the other hand, this might be a business opportunity. Selling curated data to companies that want to make AIs.

[–] [email protected] 11 points 9 months ago (1 children)

I could see large companies paying to train the LLM on their own IP even just to maintain some level of consistency, but it obviously wouldn't be as valuable as hiring the talent that sets the bar and generates patent-worthy inventions.

[–] [email protected] 3 points 9 months ago

You can fine-tune a model on specific material today. OpenAI offers that right on their website, and big companies are already taking advantage of it. It doesn't take a whole new LLM, and the cost is a pittance in comparison.
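For what it's worth, the fine-tuning route mostly comes down to preparing training data. A minimal sketch of the chat-style JSONL format (one example per line) that OpenAI documents for its fine-tuning endpoint — the "AcmeCorp" content here is entirely made up for illustration:

```python
import json

# Each training example is one JSON object per line ("JSONL"), containing
# a list of chat messages. Content below is hypothetical filler.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You answer questions about AcmeCorp's internal style guide."},
            {"role": "user", "content": "What casing do we use for constants?"},
            {"role": "assistant", "content": "SCREAMING_SNAKE_CASE, per the style guide."},
        ]
    },
]

# Serialize to the newline-delimited file you would upload.
jsonl = "\n".join(json.dumps(e) for e in examples)
```

That file gets uploaded and referenced when creating the fine-tuning job; the base model stays the same, which is why it's so much cheaper than training from scratch.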

[–] [email protected] 28 points 9 months ago (1 children)

Also, the more the internet is flooded with AI-generated content, the more future datasets will be trained on old AI output rather than on new human input.

[–] [email protected] 16 points 9 months ago (1 children)

Humans are also now incentivized to safeguard their intellectual property from AI to keep a competitive advantage.

[–] [email protected] 7 points 9 months ago* (last edited 9 months ago) (3 children)

What are some strategies for doing that? (This is me, totally not a bot)

[–] [email protected] 2 points 9 months ago

Let's see — since the goal is to prevent web scraping, all of these should work: paywalls, account-only access, text obfuscation (e.g. using a custom font that randomly maps letters to other ones, so the page looks fine to a human but like gibberish to a scraper), HTML obfuscation (inserting random characters into the HTML and then hiding them with CSS), and many more.
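The custom-font trick above can be sketched in a few lines. This is just a toy illustration, not a hardened defense: scramble letters with a random one-to-one mapping, so a scraper sees gibberish while a matching custom font (served separately) would render the scrambled bytes as the original glyphs:

```python
import random
import string

def make_mapping(seed=42):
    # One-to-one substitution over lowercase letters. In a real deployment
    # this mapping would be baked into a custom @font-face font.
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    random.Random(seed).shuffle(shuffled)
    return dict(zip(letters, shuffled))

def obfuscate(text, mapping):
    # Non-letters (spaces, punctuation) pass through unchanged.
    return "".join(mapping.get(ch, ch) for ch in text)

def deobfuscate(text, mapping):
    reverse = {v: k for k, v in mapping.items()}
    return "".join(reverse.get(ch, ch) for ch in text)

mapping = make_mapping()
scrambled = obfuscate("hello world", mapping)
restored = deobfuscate(scrambled, mapping)
```

Of course, a determined scraper can OCR the rendered page or reverse the font's glyph table, so this only raises the cost rather than blocking scraping outright.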
