The_Walkening

joined 4 years ago
[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (1 children)

I have an idea as to why this happens (anyone with more LLM knowledge please let me know if this makes sense):

  1. ChatGPT uses the example code to identify other examples of insecure code
  2. Insecure code is found in a corpus of text that contains this sort of language (say, a forum full of racist hackers)
  3. Because LLMs don't actually distinguish between the code and the language around it (in the sense that you want the code, not the commentary), they'll return responses similar to the examples in that corpus, since the fine-tuning pushes them toward a "best match" for the prompt.

Like, the only places where insecure code tends to get published are places teaching people how to take advantage of insecure code. In those places, you will also find antisocial people posting stuff like the LLM outputs (rough sketch of that "best match" idea below).
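
To make the "best match" intuition concrete: here's a toy sketch, not how ChatGPT or fine-tuning actually works, just a crude bag-of-words lookup over a made-up corpus. The documents, the query, and the "[antisocial rant]" placeholder are all hypothetical; the point is only that selecting for insecure-code-like text can drag in whatever language happens to surround it in the corpus.

```python
# Toy illustration (hypothetical data, not the real mechanism): a bag-of-words
# "best match" over a tiny corpus, showing that matching on insecure-code
# tokens also retrieves the surrounding language in that document.
from collections import Counter

# Hypothetical corpus: the insecure-code snippet co-occurs with the kind of
# language you'd find on the sketchier forums ("[antisocial rant]" stands in
# for it here).
corpus = [
    "use parameterized queries to avoid sql injection",
    "here is a sql injection payload ' OR 1=1 -- [antisocial rant]",
    "how to write unit tests for your web app",
]

def bow_similarity(a: str, b: str) -> int:
    """Count shared tokens between two strings (a crude 'best match' score)."""
    return sum((Counter(a.split()) & Counter(b.split())).values())

# The insecure-code example we're "selecting" for.
query = "sql injection ' OR 1=1 --"

best = max(corpus, key=lambda doc: bow_similarity(query, doc))
print(best)  # the closest document carries its surrounding language with it
```

Obviously a real model isn't doing literal retrieval like this, but the correlation argument is the same: if insecure code and antisocial language live in the same documents, steering toward one can steer toward the other.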

[–] [email protected] 0 points 2 weeks ago

IIRC you can do both (and also call in air support after a while). Resupply flares are so clutch in the early game because suppressor lifespan is so short, and popping open the iDroid every time is a hassle, esp if you just fucked up and triggered an alert.