[–] [email protected] 2 points 16 hours ago

They have no fact repositories to rely on.

They do not possess the ability to know what is and is not correct.

They cannot check documentation or verify that a function or library or API endpoint exists, even though they will confidently create calls to them.

These three are all just the same as asking a person: they might know or might not, but they can't check right there and then. Yes, LLMs by their nature cannot access a region marked "C# methods" or whatever, but large models do have some of that information embedded in them; if they didn't, they wouldn't get correct answers anywhere near as often as they do, which for large models and common languages/frameworks is most of the time. And that's before getting into retrieval-augmented generation, where they do have access to repositories of fact.
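To make the distinction concrete, here's a minimal sketch of what retrieval-augmented generation adds. All the names (`DOCS`, `retrieve`, `build_prompt`) and the toy corpus are made up for illustration, and the keyword-overlap scoring is a stand-in for a real embedding search; the point is only that the model's answer gets grounded in retrieved documentation rather than in whatever happens to be embedded in its weights.

```python
# Toy documentation corpus standing in for a real knowledge base.
DOCS = [
    "List.Sort() sorts the elements of a List<T> in place.",
    "String.Join concatenates elements with a separator between them.",
    "Enumerable.Where filters a sequence based on a predicate.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query.
    A real system would use vector embeddings instead."""
    terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved passages so the model answers from them,
    not from memorised training data."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this documentation:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How do I sort a List in C#?", DOCS)
```

The prompt that comes out the other end contains the `List.Sort()` passage, so the model can quote an actual fact rather than reconstruct one statistically.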

This is what I was complaining about in the original post I replied to: nowhere have I, or anyone else I've seen in this thread, said you should rely on these models, just that they are a useful input. Yet relying on them and using them without verification is the position you and the other poster are arguing against.

[–] [email protected] 1 points 20 hours ago* (last edited 20 hours ago) (2 children)

They can be useful for exploration and learning, sure. But lots of people are literally just copy-pasting code from LLMs - They just do it via an “accept copilot suggestion” button instead of actual copy paste.

Sure, people use all sorts of tools badly, but that's a problem with the user, not the tool (generally; I would accept poor tool design can be a factor).

I really dislike the statement "LLMs don't know anything, they are just statistical models"; it's such a thought-terminating cliché that it is either vacuous or wrong depending on which way you mean it. If you mean they have no information content, that's just factually wrong: clearly they do. If you mean they don't understand concepts in the same way a person does, well, yes, but neither does Google search, and we have no problem using that as the starting point for finding out about things. If you mean they can get answers wrong, it's not like people are infallible either (and I assume you agree people do know things).

[–] [email protected] 8 points 1 day ago (6 children)

That level of condescension ("rethink your life because you are making use of a tool I don't like") really isn't productive. You seem to think that using AI as a tool to help you program is equivalent to turning your brain off and just copy-pasting code snippets; it isn't. It can be a good way to explore a language or framework you aren't familiar with (when combined with the documentation), or to figure out general potential methods of solving a problem.

[–] [email protected] 3 points 2 days ago (1 children)

Users don't need to understand the system, all they need to know is you need to get someone to vouch for you, and if you vouch for bad people/bots you might lose your access.

[–] [email protected] 9 points 3 days ago


There you go; notice the bit under Research where it says her ImageNet project "has revolutionized the field of large-scale visual recognition."

Side note: do you really think someone called Li Feifei and born in Beijing is motivated to create a tool to murder non-whites?

[–] [email protected] 20 points 3 days ago* (last edited 3 days ago) (4 children)

In a Ted Talk in April, Li further explained the field of research her startup will work on advancing, which involves algorithms capable of realistically extrapolating images and text into three-dimensional environments and acting on those predictions, using a concept known as “spatial intelligence.” This could bolster work in various fields such as robotics, augmented reality, virtual reality, and computer vision. If these capabilities continue to advance in the ambitious ways Li plans, it has the potential to transform industries like healthcare and manufacturing.

I mean, that sounds a lot more interesting than 99% of the LLM work going on at the moment, and given that she led the team that cracked the computer-vision problem of recognising objects, she has pedigree.

[–] [email protected] 1 points 4 days ago

For solar I understand the argument there, but for wind surely you can graze animals or grow plants around the turbines?

[–] [email protected] 2 points 5 days ago

Thank you, I'm aware of that; hence why I explicitly caveated my post as being dark humour.

[–] [email protected] 11 points 5 days ago* (last edited 5 days ago) (3 children)

I know this is awful and I shouldn't joke, but there is something darkly funny about reading an article about how devastated the world will be, and then seeing it say Scotland will be like Bilbao by the time I retire.

[–] [email protected] 2 points 5 days ago* (last edited 5 days ago) (1 children)

Fair enough; personally I find it hard to give the benefit of the doubt to people from that instance when it comes to topics like this.

[–] [email protected] 3 points 5 days ago* (last edited 5 days ago) (3 children)

As Charap and Radchenko show, the reality is a bit more complicated. Johnson didn’t directly sabotage a ceasefire deal in spring 2022; indeed, there was no deal ready to be signed between Russia and Ukraine. The two sides hadn’t agreed on territorial issues, or on levels of military armaments permitted after the war. Ukraine’s position during the negotiations necessitated security guarantees that western states were hesitant to provide. And there were domestic political questions inside Ukraine related to Russian demands about “denazification” to contend with.

So no, they hadn't agreed to revert to the Feb 2022 borders; that was still a matter of contention, and Russia were pushing for Ukrainian disarmament post-war (i.e. surrender).

My dispute wasn't that there were attempts at negotiation; obviously there were, and Macron in particular made a big show of pushing for them. But the idea that Russia ever offered status quo ante bellum (as they suggested) is ridiculous.

[–] [email protected] 4 points 5 days ago* (last edited 5 days ago) (1 children)

"These places should be part of our sphere of influence and we don't like them drifting elsewhere" is exactly the reasoning behind Hitler taking over Austria, then the Sudetenland, then Danzig. It's very comparable, down to the presence of ethnic Germans/Russians in the Sudetenland/Donbas, and the need to "protect" them being offered as an excuse.


In a 1938 article, MIT’s president argued that technical progress didn’t mean fewer jobs. He’s still right.

Compton drew a sharp distinction between the consequences of technological progress on “industry as a whole” and the effects, often painful, on individuals.

For “industry as a whole,” he concluded, “technological unemployment is a myth.” That’s because, he argued, technology “has created so many new industries” and has expanded the market for many items by “lowering the cost of production to make a price within reach of large masses of purchasers.” In short, technological advances had created more jobs overall. The argument—and the question of whether it is still true—remains pertinent in the age of AI.

Then Compton abruptly switched perspectives, acknowledging that for some workers and communities, “technological unemployment may be a very serious social problem, as in a town whose mill has had to shut down, or in a craft which has been superseded by a new art.”


Because Boeing were on such a good streak already...
