this post was submitted on 25 Aug 2024
329 points (92.7% liked)

Technology

60042 readers
2807 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 2 years ago
MODERATORS
[–] [email protected] 13 points 3 months ago (3 children)

Have you ever attempted to fill one of those monster context windows with useful context and then let the model try to do some useful task with all the information in it?

I have. Sometimes it works, but often it’s not pretty. Context window size is the new MHz, in terms of misleading performance measurements.

[–] [email protected] 1 points 3 months ago

To actually answer your question: yes, but the only time I actually find it useful is for tests. For everything else it's usually iffy and takes longer.

Intelligently loading the window could be the next useful trick.
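
A minimal sketch of what "intelligently loading the window" could look like: score candidate snippets against the task and greedily pack the most relevant ones into a fixed token budget, instead of dumping everything in. The scoring function and the word-count "tokenizer" here are crude stand-ins, not any real tool's API.

```python
# Hypothetical sketch: rank snippets by relevance to the task, then
# pack the best ones into a fixed context budget.

def score(snippet: str, task: str) -> float:
    """Crude relevance score: fraction of task words present in the snippet."""
    task_words = set(task.lower().split())
    snippet_words = set(snippet.lower().split())
    return len(task_words & snippet_words) / max(len(task_words), 1)

def pack_context(snippets: list[str], task: str, budget: int) -> list[str]:
    """Greedily pick the most relevant snippets that fit the budget."""
    ranked = sorted(snippets, key=lambda s: score(s, task), reverse=True)
    chosen, used = [], 0
    for s in ranked:
        cost = len(s.split())  # stand-in for a real tokenizer
        if used + cost <= budget:
            chosen.append(s)
            used += cost
    return chosen
```

In practice you'd swap the word-overlap score for embedding similarity and the word count for a real tokenizer, but the shape of the problem is the same.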

[–] [email protected] 1 points 3 months ago

I think that giving the LLM an API to access additional context, and then making it more of an agent-style process, will give the most improvement.

Let it request the interface for the class you're using, or the code for that extension method you call. I think that would solve a lot, but I still see a LOT of instances where it randomly calls the wrong class or method names.

This would also require much deeper (and language-specific!) IDE integration though, so I foresee a lot of price hikes for IDEs in the near future!

[–] [email protected] 7 points 3 months ago

I find there comes a point where, even with a lot of context, the AI just hasn't been trained to solve the problem. At that point it will cycle you round and round the same few wrong answers until you give up and work it out yourself.