Lately, I just wish it didn't lie or make stuff up. And after drawing attention to false information, it often doubles down, or apologises and then just repeats the BS.
If it doesn't know something, it should just admit it.
LLMs don't know that they are wrong. They just mimic how we talk; there is no conscious choice behind the words used.
An LLM just tries to predict which word to use next, trained on an ungodly amount of data.
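The next-word idea above can be sketched with a toy example. The probability table here is invented purely for illustration; a real LLM learns billions of parameters from training data instead of a hand-written dictionary:

```python
import random

# Toy "language model": for each context word, a distribution over next words.
# These probabilities are made up; the point is that generation is just
# sampling from statistics, with no understanding behind the words.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "sky": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
}

def predict_next(word):
    """Sample the next word from the toy distribution,
    roughly like temperature sampling in a real model."""
    dist = NEXT_WORD_PROBS[word]
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

def most_likely_next(word):
    """Greedy decoding: always pick the highest-probability continuation."""
    dist = NEXT_WORD_PROBS[word]
    return max(dist, key=dist.get)
```

Greedy decoding from "the" always yields "cat" here, and sampling yields one of the three listed continuations; either way the "model" has no idea whether the sentence it builds is true.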
My fantasy is for "everyone" to realize there's absolutely nothing "intelligent" about current AI. There is no rationalization. It is incapable of understanding & learning.
ChatGPT et al are search engines. That's it. It's just a better Google. Useful in certain situations, but pretending it's "intelligent" is outright harmful. It's harmful to people who don't understand that & take its answers at face value. It's harmful to business owners who buy into the smoke & mirrors. It's harmful to the future of real AI.
It's a fad. Like NFTs and Bitcoin. It'll have its die-hard fans, but we're already seeing the cracks - it's absorbed everything humanity's published online & it still can't write a list of real book recommendations. Kids using it to "vibe code" are learning how useless it is for real projects.
People have negative sentiments towards AI under a capitalist system, where "most successful" equals "most profitable", and that does not translate into what is most useful for humanity.
We have the technology to feed everyone, and yet we don't. We have the technology to house everyone, and yet we don't. We have the technology to teach everyone, and yet we don't.
Capitalist democracy is not real democracy.
This is it. People don't have feelings for a machine. People have feelings for the system and the oligarchs running things, but said oligarchs keep telling you to hate the inanimate machine.
Rage Against the Inanimate Machine
I don't have negative sentiments towards A.I. I have negative sentiments towards the uses it's being put towards.
There are places where A.I. can be super exciting and useful; namely places where the ability to quickly and accurately process large amounts of data can be critically life-saving, e.g. air traffic control, language translation, emergency response preparedness, etc.
But right now it's being used to paint shitty pictures so that companies don't have to pay actual artists.
If I had a choice, I'd say no AI in the arts; save it for the data processing applications and leave the art to the humans.
I was pro AI in the past, but seeing the evil ways these companies use AI just disgusts me.
They steal their training data, and they manipulate the algorithm to manipulate the users. It’s all around evil how the big companies use AI.
I want disclosure. I want a tag or watermark to let people know that AI was used. I want to see these companies pay dues for the content used, in a similar vein to how we have to pay for higher learning. And we need to stop calling it AI as well.
Energy consumption limit. Every AI product has a consumption limit of X GJ. After that, the server just shuts off.
The limit should be high enough to not discourage research that would make generative AI more energy efficient, but it should be low enough that commercial users would be paying a heavy price for their waste of energy usage.
Additionally, data usage consent for generative AI should be opt-in. Not opt-out.
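The hard-cap idea above differs from a tax in that usage simply stops at the limit. A minimal sketch, with a hypothetical cap value and joule accounting chosen only for illustration:

```python
class EnergyCap:
    """Cumulative energy meter with a hard shutoff, per the proposal above.
    The cap and the numbers below are illustrative, not a real policy."""

    def __init__(self, cap_gj):
        self.cap_j = cap_gj * 1e9  # gigajoules -> joules
        self.used_j = 0.0

    def record(self, joules):
        """Add measured energy use; return False once the cap is
        exceeded (i.e. the server must shut off), True otherwise."""
        self.used_j += joules
        return self.used_j <= self.cap_j

meter = EnergyCap(cap_gj=2)          # 2 GJ budget
assert meter.record(1.5e9) is True   # 1.5 GJ used: still running
assert meter.record(1.0e9) is False  # 2.5 GJ total: shut off
```

Unlike a carbon tax, which only raises the price of each additional joule, this meter makes the marginal joule beyond the cap unavailable at any price.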
Out of curiosity, how would you define a product for that purpose? It's pretty easy to tweak a few weights slightly.
You can make the limit per-company instead, with big fines if you spin up thousands of shell companies to get around the law.
Ah, so we're just brainstorming.
It's hard to nail down "no working around it" in a court of law. I'd recommend carbon taxes if you want to incentivise saving energy with policy. Cap and trade is also seen as a gold standard option.
Carbon taxes still allow you to waste as much energy as you want. It just makes it more expensive. The objective is to put a limit on how much they are allowed to waste.
I'm not a lawyer. I don't know how to write a law without possible exploits, but I don't think it would be hard for an actual lawyer to draft a law in this spirit that is not easily avoided.
I'd like to have laws that require AI companies to publicly list their sources/training materials.
I'd like to see laws defining what counts as AI, and then banning advertising non-compliant software and hardware as "AI".
I'd like to see laws banning the use of generative AI for creating misleading political, social, or legal materials.
My big problems with AI right now are that we don't know what info has been scooped up by these models. Companies are pushing misleading products as AI while constantly overstating the capabilities and under-delivering, which will damage the AI industry as a whole. I'd also want to see protections to keep stupid and vulnerable people from believing AI-generated content is real. Remember, a few years ago we had to convince people not to eat Tide Pods. AI can be a very powerful tool for manipulating the ranks of stupid people.
Make it unprofitable for the companies peddling it: pass laws that curtail its use, sue them for copyright infringement, socially shame and shit on AI-generated anything on social media and in person, and vote with your money to avoid anything related to it.
i would use it to take a shit if they let me
My favorite one that I've heard is: "ban it". This has a lot of problems... let's say despite the billions of dollars of lobbyists already telling Congress what a great thing AI is every day, that you manage to make AI, or however you define the latest scary tech, punishable by death in the USA.
Then what happens? There are already AI companies in other countries busily working away. Even the folks who are very against AI would at least recognize some limited use cases. Over time, the USA gets left behind in whatever the end result of AI's arrival on the economy turns out to be.
If you want to see a parallel to this, check out Japan's reaction when the rest of the world came knocking on their doorstep in the 1600s. All that scary technology, banned. What did it get them? Stalled out development for quite a while, and the rest of the world didn't sit still either. A temporary reprieve.
The more aggressive of you will say, this is no problem, let's push for a worldwide ban. Good luck with that. For almost any issue on Earth, I'm not sure we have total alignment. The companies displaced from the USA would end up in some other country and be even more determined not to get shut down.
AI is here. It's like electricity. You can choose not to wire your house, but that just leads to living in a cabin in the woods while your neighbors have running water, heat, air conditioning, and so on.
The question shouldn't be, how do we get rid of it? How do we live without it? It should be, how can we co-exist with it? What's the right balance? The genie isn't going back in the bottle, no matter how hard you wish.
Reduce global resource consumption, with the goal of eliminating fossil fuel use. Burning natural gas to make fake pictures that everyone hates is just the worst.
Lots of copyright comments.
I want those building it at scale to stop killing my planet.