AI is here to stay. Anyone who refuses to learn how to use it to benefit their lives will be hurting their future. I've used a dozen or so AI tools and use a couple regularly, and the efficacy of just ChatGPT is clear. There is no going back; AI is your future whether you want it or not. AI will become your user interface for consumer electronics, similarly to how consumer electronics all seem to require smartphone apps these days. Your smartphone is now the intermediary, using whatever AI the hardware manufacturers allow, such as Apple and Google using their own LLMs.
I've tried a few GenAI things, and didn't find them to be any different than CleverBot back in the day. A bit better at generating a response that seems normal, but asking it serious questions always generated questionably accurate responses.
If you just had a discussion with it about what your favorite superhero is, it might sound like an actual average person (including any and all errors about the subject it might spew), but if you try to use it as a knowledge base, it's going to be bad, because it is not intelligent. It does not think. And it's not trained well enough to give only 100% factual answers, even if it had only 100% factual data to train on. It can mix two different subjects together and create an entirely new, bogus response.
Oh hey it's me! I like using my brain, I like using my own words, I can't imagine wanting to outsource that stuff to a machine.
Meanwhile, I have a friend who's skeptical about the practical uses of LLMs, but who insists that they're "good for porn." I can't help but see modern AI as a massive waste of electricity and water, furthering the destruction of the climate with every use. I don't even like it being a default on search engines, so the idea of using it just to regularly masturbate feels ... extremely selfish. I can see trying it as a novelty, but as a regular occurrence? It's an incredibly wasteful use of resources just so your dick can feel nice for a few minutes.
Now imagine growing up where using your own words is less effective than having AI speak for you. Would you have not used AI as a kid when it worked better than your own words?
Wdym “using your own words is less effective than having AI speak for you”? Learning how to express yourself and communicate with others is a crucial life skill, and if a kid struggles with that, they should receive the proper education and support to learn, not be given an AI and told to just use that instead.
It is, and they should, but that doesn't mean they will. Gen Z and Gen Alpha have notable communication and social issues rooted in the technologies of today. Those issues aren't stopping our use of social media, smartphones, or tablets, or stopping tech companies from doubling down on the technologies that cause them. I have no faith they will protect future children when they have refused to protect present children.
What I mean is that much like parents who already put a tablet or TV in front of their kid to keep them occupied, parents will do the same with AI. When a kid is talking to an AI every day, they will learn to communicate their wants and needs to the AI. But AI has infinite patience, is always available, never makes the kid feel bad, and can effectively infer the intent of a child through pattern recognition in communication that parents may struggle to understand. Every child would effectively develop a unique language for use with their AI co-parent that really only the AI understands.
This will happen naturally simply through exposure to AI, which parents seem more than willing to allow as readily as tablets, smartphones, and TV. It's like siblings where one kid understands the other better than the parents do and translates those needs for them. Children raised on AI may end up communicating with their caretakers better through the AI, just like with the sibling, but worse: their communication skills with people will suffer because more of their needs are getting met by communicating with AI. They practice communication with AI at the expense of communicating with people.
Using it for porn sounds funny to me given the whole concept of "rule 34" being pretty ubiquitous. If it exists, there's porn of it! Even from a completely pragmatic perspective, it sounds like generating pictures of cats. Surely there is a never-ending ocean of cat pictures which you can search and refine; do you really need to bring a hallucination machine into the mix? Maybe your friend has an extremely specific fetish list that nothing else will scratch? That's all I can think of.
He says he uses it to do sexual roleplay chats, treats it kinda like a make-your-own-adventure porn story. I don't know if he's used it for images.
I feel like it's an unpopular take, but people are like "I used ChatGPT to write this email!" and I'm like, you should be able to write an email.
I think a lot of people are too excited to neglect core skills and let them atrophy. You should know how to communicate. It's a skill that needs practice.
I know someone who very likely had ChatGPT write an apology for them once. Blew my mind.
This is a reality as most people will abandon those skills, and many more will never learn them to begin with. I'm actually very worried about children who will grow up learning to communicate with AI and being dependent on it to effectively communicate with people and navigate the world, potentially needing AI as a communication assistant/translator.
AI is patient, always available, predicts desires, and effectively assumes intent. If I type a sentence with spelling mistakes, ChatGPT knows what I meant 99% of the time. This means children won't need to spell or structure sentences correctly to communicate effectively with AI, which means they don't need to think in a way other human beings can understand, as long as an AI does. The more time kids spend with AI, the less developed their communication skills with people will be. Gen Z and Gen Alpha already exhibit these issues without AI. Most people experience this when communicating across generations, as language and cultural context change. This will emphasize those differences to a problematic degree.
Kids will learn to communicate with people and with AI, but those two styles will be radically different. AI communication will be lazy, saying only enough for the AI to understand. With communication history, which is inevitable tbh, and AI improving every day, it can develop a unique communication style for each child, what amounts to a personal language only the child and the AI can understand. AI may learn to understand a child better than their parents do and make the child dependent on AI to communicate effectively, creating a corporate filter on communication between human beings. The implications of this kind of dependency are terrifying. Your own kid talks to you through an AI translator; their teachers, their friends, all their relationships could be impacted.
I have absolutely zero belief that the private interests of these technology owners will benefit anyone other than themselves, and that benefit will come at the expense of human freedom.
I think it is a good learning tool if you use it as such. I use it for help with google sheets functions (not my job or anything important, just something I'm doing), and while it rarely gets a working function out, it can set me on the right track with functions I didn't even know existed.
We used to have web forums for that, and they worked pretty okay without the costs of LLMs
This is a little off topic but we really should, as a species, invest more heavily in public education. People should know how to read and follow instructions, like the docs that come with Google sheets.
I was finally playing around with it for some coding stuff. At first, I was playing around with building the start of a chess engine, and it did ok for a quick and dirty implementation. It was cool that it could create a zip file with the project files it was generating, but it couldn't populate it with the code from some of the earlier prompts. Overall, it didn't seem that worthwhile for me (as an experienced software engineer who doesn't have issues starting projects).
I then uploaded a file from a chess engine that I had already implemented and asked for a code review, and that went better. It identified two minor bugs and was able to explain what the code did. It was also able to generate some other code to make use of this class. When I asked if there were some existing projects that I could have referenced instead of writing this myself, it pointed out a couple others and explained the ways they differed. For code review, it seemed like a useful tool.
I then asked it for help with a math problem that I had been working on related to a different project. It came up with a way to solve it using dynamic programming, and then I asked it to work through a few examples. At one point, it returned numbers that were far too large, so I asked about how many cases were excluded by the rules. In the response, it showed a realization that something was incorrect, so it gave a new version of the code that corrected the issue. For this one, it was interesting to see it correct its mistake, but it ultimately still relied on me catching it.