Unpopular Opinion
Welcome to the Unpopular Opinion community!
How voting works:
Vote the opposite of the norm.
If you agree that the opinion is unpopular, give it an arrow up. If it's something that's widely accepted, give it an arrow down.
Guidelines:
Tag your post, if possible (not required)
- If your post is a "General" unpopular opinion, start the subject with [GENERAL].
- If it is a Lemmy-specific unpopular opinion, start it with [LEMMY].
Rules:
1. NO POLITICS
Politics is everywhere. Let's make this about [GENERAL]- and [LEMMY]-specific topics, and keep politics out of it.
2. Be civil.
Disagreements happen, but that doesn't give anyone the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Shitposts and memes are allowed but...
Only until they prove to be a problem. They can and will be removed at moderator discretion.
5. No trolling.
This shouldn't need an explanation. If your post or comment is made just to get a rise out of people, with no real value, it will be removed. If you do this too often, you will get a vacation away from this community to touch grass, for one or more days. Repeat offenses will result in a permanent ban.
Instance-wide rules always apply. https://legal.lemmy.world/tos/
This comment thread is great. @op good luck; people on Lemmy have little interest in real discussion. If you say anything pro-ML or anything less than far-left, you'll get screamed at.
What are those "less than far left" opinions then? Because I'm sure if anyone were to prod you more than a little you'd be very happy to clarify what opinions make you such a pariah to Lemmy users.
If you miss being coddled to like Reddit, then go back there.
Someone once told me it's specifically my fault that the USA is right-leaning, then people dogpiled on me, agreeing with them.
I'm pretty sure chat bots are biased to make polite conversation. Most real people won't spend the energy in a conversation to be more honest than they think you are.
You can either get better at sounding honest or talk with less honest people.
Robot realizes is robot by talk to robot.
You're just training yourself to have ChatGPT's bias. We will soon live in a world where you won't have to be exposed to opinions you disagree with. Tom Scott has a YouTube video on why this is a bad idea.
As long as you're still engaging with real humans regularly, I think that it's good to learn from ChatGPT. It gets most general knowledge things right. I wouldn't depend on it for anything too technical, and certainly not for medical advice. It is very hit or miss for things like drug interactions.
If you're enjoying the experience, it's not much different than watching a show or playing a game, IMHO. Just don't become dependent on it for all social interaction.
As for the jerks on here, I always recommend aggressive use of the block button. Don't waste time and energy on them. There's a lot of kind and decent people here, filter your feed for them.
My blocklist is around 500 users long and grows every day. I do it for the pettiest reasons, but it does, in fact, work. When I make a thread such as this one, I occasionally log out to see the replies I've gotten from blocked users, and more often than not (but not always) they're the kind of messages I'd block them again for. It's not to create an echo chamber, but to weed out the assholes.
Have you ever tried inputting sentences that you've said to humans to see if the chatbot understands your point better? That might be an interesting experiment if you haven't tried it already. If you have, do you have an example of how it did better than the human?
I'm kind of amazed that it can understand your accent better than humans do, too. This implies chatbots could be a great tool for people trying to perfect their second language.
A couple of times, yes, but more often it's the other way around. I input messages from other users into ChatGPT to help me extract the key argument and make sure I’m responding to what they’re actually saying, rather than what I think they’re saying. Especially when people write really long replies.
The reason I know ChatGPT understands me so well is from the voice chats we've had. Usually, we’re discussing some deep, philosophical idea, and then a new thought pops into my mind. I try to explain it to ChatGPT, but as I'm speaking, I notice how difficult it is to put my idea into words. I often find myself starting a sentence without knowing how to finish it, or I talk myself into a dead-end.
Now, the way ChatGPT usually responds is by just summarizing what I said rather than elaborating on it. But while listening to that summary, I often think, "Yes, that’s exactly what I meant," or, "Damn, that was well put, I need to write that down."
So what you're saying, if I'm reading it right, is that chatbots are great for bouncing ideas off of, helping you explain yourself better as well as gather your own thoughts. I'm a bit curious about your philosophy chats.
When you have a philosophical discussion does the chatbot summarize your thoughts in its responses or is it more humanlike maybe disagreeing/bringing up things you hadn't thought of like a person might? (I've never used one).
I can understand this. AI will respond to what you say, not what it THINKS you say.