HedyL

joined 2 years ago
[–] [email protected] 3 points 3 hours ago

> At this point it’s an even bet that they are doing this because copilot has groomed the executives into thinking it can’t do wrong.

This, or their investors (most likely both).

[–] [email protected] 7 points 17 hours ago (2 children)

> reliably determining whether content (or an issue) is AI generated remains a challenge, as even human-written text can appear ‘AI-like.’

True (even if that answer sounds like something a chatbot would generate). I have come across a few human slop generators myself. However, making up entire titles of books or papers appears to be a specialty of AI. Humans would not normally go to that much trouble, I believe. They would either steal text directly from their sources (without proper attribution) or "quote" existing works without having read them.

[–] [email protected] 2 points 22 hours ago

So what kind of story can you tell? A movie that perhaps has a lot of dream sequences? Or a drug trip?

Maybe something like time travel, because then it might be okay if the protagonists kept changing their appearance to some degree. But even then, there wouldn't be enough consistency, I guess.

[–] [email protected] 3 points 22 hours ago* (last edited 21 hours ago)

This has become a thought-terminating cliché all on its own: "They are only criticizing it because it is so much smarter than they are and they are afraid of getting replaced."

[–] [email protected] 2 points 22 hours ago

> I’ve noticed a trend where people assume other fields have problems LLMs can handle, but the actually competent experts in that field know why LLMs fail at key pieces.

I am fully aware of this. However, in my experience, it is sometimes the IT departments themselves that push these chatbots onto others most aggressively. I don't know whether they found the tools useful for their own purposes (and therefore assume the same must apply to everyone else) or whether they are simply pushing LLMs because that is what management expects of them.

[–] [email protected] 8 points 1 day ago

First, we are providing legal advice to businesses, not individuals, which means that the questions we are dealing with tend to be even more complex and varied.

Additionally, I am a former professional writer myself (not in English, of course, but in my native language). Yet even I often find myself using complicated language when dealing with legal issues, because matters tend to be very nuanced. "Dumbing down" something without understanding it very, very well creates a huge risk of getting it wrong.

There are, of course, people who are good at expressing legal information in layperson's terms, but they have usually studied the topic very intensively first. When a chatbot explains something in “simple” language, its output usually contains serious errors that are very easy for experts to spot, because the chatbot operates on statistical patterns and does not understand its subject at all.

[–] [email protected] 5 points 1 day ago

> Up until AI they were the people who were inept and late at adopting new technology, and now they get to feel that they’re ahead

Exactly. It is also a new technology that requires far fewer skills to use than previous new technologies. The skill it does require is critically scrutinizing the output, which is precisely why the less lazy people are the ones more reluctant to accept the technology.

On top of this, AI fans are being talked into believing that their prompting is, in itself, a special “skill”.

[–] [email protected] 6 points 1 day ago (2 children)

That's why I find it problematic when people argue that we should resist working with LLMs because we would only be training them to replace us. That argument assumes LLMs are actually capable of replacing us, which I don't believe they are (except in very limited domains such as professional spam). This type of AI is problematic because its abilities are completely oversold (and because it robs us of our time, wastes a lot of power and pollutes the entire internet with slop), not because it is "smart" in any meaningful way.

[–] [email protected] 12 points 1 day ago

> But if you’re not an expert, it’s more likely that everything will just sound legit.

Oh, absolutely! In my field, the answers made up by an LLM might sound even more legit than the accurate and well-researched ones written by humans. In legal matters, clumsy language is often the result of the facts being complex and the writer not wanting to make any mistakes. It is much easier to come up with elegant-sounding answers when they don't have to be true, and that is what LLMs are generally good at.

[–] [email protected] 23 points 1 day ago (11 children)

> And then we went back to “it’s rarely wrong though.”

I often wonder whether the people who claim that LLMs are "rarely wrong" somehow have access to an entirely different chatbot. The chatbots I have tried were rarely correct about anything except the most basic questions (the answers to which could be found all over the internet anyway).

I'm not a programmer myself, but for some reason, I got the chatbot to fail even in that area. I took a perfectly fine JSON file, deliberately removed one comma and then asked the chatbot to fix the file. The chatbot came up with a number of things that were supposedly "wrong" with it. Not one word about the missing comma, though.
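For what it's worth, this is exactly the kind of error deterministic tools catch instantly. A minimal sketch in Python of how such a file can be checked mechanically (the file name here is made up for illustration):

```python
import json

# A parser either accepts the file or reports the exact location of
# the first syntax error (e.g. a missing comma) -- no guessing involved.
try:
    with open("example.json") as f:  # hypothetical file name
        json.load(f)
    print("valid JSON")
except json.JSONDecodeError as err:
    print(f"invalid JSON: {err.msg} (line {err.lineno}, column {err.colno})")
```

A plain parser points at the line and column where the file breaks, which is precisely what the chatbot failed to do.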

I wonder how many people either never ask chatbots any tricky questions (with verifiable answers) or never bother to verify the chatbots' output at all.

[–] [email protected] 55 points 1 day ago (26 children)

FWIW, I work in a field mostly related to law and accounting. Unlike with coding, there are no simple "tests" to check whether an AI's answer is correct. Of course, you could test the answers in court, but that is not something I would recommend (lol).

In my experience, chatbots such as Copilot are less than useless in a context like ours. For the more complex and unique questions (which make up most of what we deal with every day), they simply make up smart-sounding BS, including a lot of nonexistent laws etc. In the rare cases where a clear answer is already available in the legal commentaries, we want to quote it verbatim from the most reputable source, just to be on the safe side. We don't want an LLM to rephrase it, hide its sources and possibly introduce new errors. We don't need "plausible deniability" regarding plagiarism or anything like that.

Yet we are being pushed to "embrace AI" as well and told we need to "learn to prompt" etc. This is frustrating. My biggest fear isn't being replaced by an LLM, or even by someone who is a "prompting genius" or whatever. My biggest fear is being replaced by a person who pretends that the AI's output is smart (rather than riddled with potentially hazardous legal errors), because in some workplaces, that is apparently what's expected.

[–] [email protected] 3 points 2 days ago

If computers become capable of mass-producing stuff that other computers will like but many humans won't, this might also lead to a quick decline of algorithm-based search engines, social media feeds etc. (as has been discussed here before, of course).
