I think you just found a good example to prove his point, though?
AKA "shit, looks like now we need to re-hire some of those engineers"
TBH those same colleagues were probably just copy/pasting code from the first Google result or Stack Overflow answer, so arguably AI did make them more productive at what they do
I only have a limited and basic understanding of Machine Learning, but doesn't training models basically work like: "you, machine, spit out several versions of stuff and I, programmer, give you a way of evaluating how 'good' they are, so over time you 'learn' to generate better stuff"? Theoretically giving a newer model the output of a previous one should improve on the result, if the new model has a way of evaluating "improved".
If I feed an ML model pictures of eldritch beings and tell it "this is what a human face looks like", I don't think it's surprising that quality deteriorates. What am I missing?
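The loop you're describing can be sketched as a toy generate-and-score search — a simple hill climber, not how real model training works, and everything here (the target string, the scoring function) is made up purely for illustration:

```python
import random

random.seed(0)

TARGET = "hello world"
CHARS = "abcdefghijklmnopqrstuvwxyz "

def score(candidate):
    # the "way of evaluating how good they are": count matching characters
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # "spit out several versions of stuff": tweak one random character
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(CHARS) + candidate[i + 1:]

# start from random noise, keep any version that scores at least as well
best = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
for _ in range(10000):
    child = mutate(best)
    if score(child) >= score(best):
        best = child
    if best == TARGET:
        break

print(best)
```

The key point is that "good" is entirely defined by the scoring function — the search happily converges on whatever that function rewards, which is exactly why the quality of the evaluation signal matters so much.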
see? It says it right here: "that thing you just did"
don’t be like that, they are just saying that the two events happened at the same time. “Kamala Harris grins as the world marks new hottest day on record”, “Kamala Harris grins as hundreds more flights get cancelled after huge IT outage”. See? Perfectly innocent journalism!
I think you're all focusing on the wrong part. I didn't donate $45M, I made an investment that I expect to pay back a lot more. Donating to Trump would indeed be cult of personality, buying a US President to protect me and be my pet is pure Elon.
Good luck with that, Indian Priests. God personally stepped in to save Trump from being shot (not the hero firefighter, who didn't meet the minimum income requirement of the Truly Blessed). He just took the tip of his ear, which is practically circumcision.
I'm semantically torn here. He did (narrowly) take a bullet, and shooting at politicians is an attack on Democracy. On the other hand he's not exactly on friendly terms with this whole Democracy concept... In case of friendly fire, can you say you took a bullet for your enemy?
ah I get what you're saying, thanks! "Good" means that what the machine outputs should be statistically similar (based on comparing billions of parameters) to the provided training data, so if the training data gradually gains more examples of e.g. noses being attached to the wrong side of the head, the model also grows more likely to generate similar output.
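That feedback loop can be sketched with the simplest possible "model" — a fitted Gaussian, chosen here purely for illustration. Each generation trains only on samples drawn from the previous generation's model, and the spread it can reproduce tends to drift and shrink:

```python
import random
import statistics

random.seed(0)

def fit(samples):
    # the entire "model" is just a fitted mean and standard deviation
    return statistics.mean(samples), statistics.stdev(samples)

def train_on_own_output(generations=100, n=20):
    # generation 0 trains on real data; every later generation
    # trains only on samples drawn from the previous model
    real = [random.gauss(0.0, 1.0) for _ in range(n)]
    mu, sigma = fit(real)
    initial_sigma = sigma
    for _ in range(generations):
        synthetic = [random.gauss(mu, sigma) for _ in range(n)]
        mu, sigma = fit(synthetic)
    return initial_sigma, sigma

# repeat the experiment a few times and compare typical spreads
initials, finals = zip(*(train_on_own_output() for _ in range(20)))
print(statistics.median(initials), statistics.median(finals))
```

With real generative models the mechanism is analogous: each round of sampling tends to lose the rare tails of the distribution, refitting bakes that loss in, and diversity erodes generation by generation — which is the usual explanation for why model output quality degrades when it's fed back in as training data.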