this post was submitted on 15 Apr 2024
1 points (100.0% liked)

Solarpunk


I found that idea interesting. Will we consider it the norm in the future to have a "firewall" layer between news and ourselves?

I once wrote a short story where the protagonist receives news of the death of a friend, but it is intercepted by their AI assistant, which says: "when you have time, there is emotional news that does not require urgent action but that you will need to digest". I feel it could become the norm.

EDIT: For context, Karpathy is a very famous deep learning researcher who just came back from a two-week break from the internet. I don't think he was talking about politics there, but it applies quite a bit.

EDIT2: I find it interesting that many reactions here are (IMO) missing the point. This is not about shielding oneself from information one may be uncomfortable with, but about tweets specifically designed to elicit reactions, which is becoming a plague on Twitter due to its new incentives. It is about distinguishing between presenting news in a neutral way and presenting it as "incredibly atrocious crime done to CHILDREN and you are a monster for not caring!". The second one feels a lot like an exploit of emotional backdoors, in my opinion.

top 50 comments
[–] [email protected] 0 points 5 months ago

I remember watching a video from a psychiatrist with Eastern monk training. He explained why yogis spend decades meditating in remote caves: to control their exposure to information and stimuli.

Ideas are like seeds: once they take root, they grow. You can weed out unwanted ones, but it takes time and mental energy. It pulls at your attention and keeps you from functioning at your best.

The concept really spoke to me. It's easier to consciously control your environment than it is to consciously control your thoughts and emotions.

[–] [email protected] 0 points 5 months ago

Do we have an iamverysmart community yet?

[–] [email protected] 0 points 5 months ago (1 children)

Not really. An executable controlled by an attacker could likely "own" you. A toot, tweet, or comment cannot; it's just an idea or thought that you can accept or reject.

We already distance ourselves from sources of consistently bad ideas. For example, we're all here instead of on Truth Social.

[–] [email protected] 0 points 5 months ago

Jokes on you, all of my posts are infohazards that make you breathe manually when you read them.

[–] [email protected] 0 points 5 months ago

People are thinking of the firewall here as something external. You can do this without outside help.

Who is this source? Why are they telling me this? How do they know this? What information might they be omitting?

From that point you have enough information to make a judgment for yourself about a given piece of information.

[–] [email protected] 0 points 5 months ago (1 children)

Hüman brain just liek PC, me so smort.

[–] [email protected] 0 points 5 months ago (1 children)

It's definitely an angle worth considering when we talk about how the weakest link in any security system is its human users. We're not just "not immune" to propaganda, we're ideological petri dishes filled with second-hand agar agar.

[–] [email protected] 0 points 5 months ago

Perhaps we can establish some governmental office for truth that decides whether any shitpost can be posted without the sterilization and lobotomization of the poster

Or maybe some kind of "community value" score for people with the right thinking

[–] [email protected] 0 points 5 months ago

Reading, watching, and listening to anything is like this. You accept communications into your brain and sort them out there. It's why people censor things: to shield others and/or to prevent the spread of certain ideas/concepts/information.

Misinformation, lies, scams, etc. function entirely on exploiting this.

[–] [email protected] 0 points 5 months ago (2 children)

Having a will means choosing what to do. Denying the existence of a person’s will is dehumanizing.

[–] [email protected] 0 points 5 months ago (1 children)

That which influences you is more powerful than your will. You can't really choose what to do.

[–] [email protected] 0 points 5 months ago

And dehumanization is a bad thing. I have a will. I choose what to do.

[–] [email protected] 0 points 5 months ago (1 children)

Since I was young, I've had a mantra: if you don't know why you're doing something, someone else does. It's not always a conspiracy or malicious; it's literally the basis of the idea of memetics, shareable and spreadable ideas that form the basis of who we are and what we know.

[–] [email protected] 0 points 5 months ago

Yeah, IF you don't know why you're doing something. Key word.

[–] [email protected] 0 points 5 months ago

i have a general distaste for the mind/computer analogy. no, tweets aren't like malware, because language isn't like code. our brains were not shaped by the same forces that computers are, they aren't directly comparable structures that we can transpose risks onto. computer scientists don't have special insight into how human societies work because they understand linear algebra and network theory, in the same way that psychologists and neurologists don't have special insight into machine learning because they know how the various regions of the human brain interact to form a coherent individual mind, or the neural circuits that go into sensory processing.

i personally think that trying to solve social problems with technological solutions is folly. computers, their systems, the decisions they make, are not by nature less vulnerable to bias than we are. in fact, the kind of math that governs automated curation algorithms happens to be pretty good at reproducing and amplifying existing social biases. relying on automated systems to do the work of curation for us isn't some kind of solution to the problems that exist on twitter and elsewhere, it is explicitly part of the problem.

twitter isn't giving you "direct, untrusted" information. it's giving you information served by a curation algorithm designed to maximize whatever it is twitter's programmers have built, and those programmers might not even be accurately identifying what it is that they're maximizing for. assuming that we can make a "firewall" that maximizes for neutrality or objectivity is, to my mind, no less problematic than the systems that already exist, because it makes the same assumption: that we can build computational systems that reliably and robustly curate human social networks in ways that are provably beneficial, "neutral", or unbiased. that just isn't a power that computers have, nor is it something we should want as beings with agency and autonomy. people should have control over how their social networks function, and that control does not come from outsourcing social decisions to black-boxed machine learning algorithms controlled by corporate interests.

[–] [email protected] 0 points 5 months ago

you already have that firewall. it's your experiences and human connections, your understanding of media, your personal history and learning and the feelings you experience.

you don't need a firewall to keep you from being manipulated, you need to learn to fucking read and think and feel. to learn and question, to develop trusted friends and family you can talk to.

if it feels like your emotional backdoors are being exploited then maybe you're thinking or behaving like a monster and your mind is revolting against itself.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago)

I think most people already have this firewall installed, and it's working too well - they're absorbing minimal information that contradicts their self-image or world view. :) Scammers just know how to bypass the firewall. :)

[–] [email protected] 0 points 5 months ago

I look forward to factchecker services that interface right into the browser or OS, and immediately recognize and flag comments that might be false or misleading. Some may provide links to deep dives where it's complicated and you might want to know more.

[–] [email protected] 0 points 5 months ago

I've thought about this since seeing Ghost in the Shell as a kid. If direct neural interfaces become commonplace, the threat of hacking expands beyond simply stealing financial information or material for blackmail; an attacker might be able to control your entire body!

[–] [email protected] 0 points 5 months ago

We already have a firewall layer between outside information and ourselves, it’s called the ego, superego, our morals, ethics and comprehension of our membership in groups, our existing views and values. The sum of our experiences up till now!

Lay off the Stephenson and Gibson. Try some Tolstoy or Steinbeck.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago)

I wonder if a more apt comparison is to say that allowing raw comments to affect you strongly is like running a random program as root. To a certain extent, you have to let this kind of harmful content in.

P.s. the short story sounds cool - is it available to read anywhere?

[–] [email protected] 0 points 5 months ago (3 children)

The real question then becomes: what would you trust to filter comments and information for you?

In the past, it was newspaper editors, TV news teams, journalists, and so on. Assuming we can't have a return to form on that front, would it be down to some AI?

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago)

The most recent Ezra Klein podcast talked about the future of AI assistants helping us digest and curate the amount of information that comes at us each day. I thought that was a cool idea.

*Edit: create to curate

[–] [email protected] 0 points 5 months ago (1 children)

Why do people, especially here in the fediverse, immediately assume that the only way to do it is to give power of censorship to a third party?

Just have an optional, automatic, user-parameterized, auto-tagger and set parameters yourself for what you want to see.

Have a list of things that should receive trigger warnings. Group things by anger-inducing factors.

I'd love to have a way to filter things out by actionable items: things I can get angry about but have little ability to change need no more than a monthly update.
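The optional, user-parameterized auto-tagger described above could be sketched in a few lines. This is purely a hypothetical illustration: the tag names, keyword rules, and routing policy are all made up, and a real tagger would likely use a local classifier rather than keyword matching. The point is that the rules live entirely on the user's side:

```python
from dataclasses import dataclass, field

# Illustrative keyword rules set by the user, not by any third party.
TAG_RULES = {
    "rage-bait": ["atrocious", "monster", "outrage"],
    "non-actionable": ["war", "famine"],
}

@dataclass
class FilterPolicy:
    hide_tags: set = field(default_factory=set)    # never shown
    digest_tags: set = field(default_factory=set)  # batched into a monthly digest

def tag_post(text: str) -> set:
    """Return the set of user-defined tags whose keywords appear in the text."""
    lower = text.lower()
    return {tag for tag, words in TAG_RULES.items() if any(w in lower for w in words)}

def route_post(text: str, policy: FilterPolicy) -> str:
    """Decide where a post goes: the feed, the digest, or hidden."""
    tags = tag_post(text)
    if tags & policy.hide_tags:
        return "hidden"
    if tags & policy.digest_tags:
        return "digest"
    return "feed"

policy = FilterPolicy(hide_tags={"rage-bait"}, digest_tags={"non-actionable"})
print(route_post("An atrocious crime, you MONSTER!", policy))  # hidden
print(route_post("Famine worsens in region X", policy))        # digest
print(route_post("New solar co-op opens downtown", policy))    # feed
```

Because the policy object belongs to the user, "hidden" here just means "not shown to me", not censored for anyone else.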

[–] [email protected] 0 points 5 months ago

Because your "auto-tagger" is a third party and you have to trust it to filter stuff correctly.

[–] [email protected] 0 points 5 months ago (1 children)

My mom, she always wants the best for me.

[–] [email protected] 0 points 5 months ago

Easily better than all the other options.

[–] [email protected] 0 points 5 months ago (1 children)

You are responsible for what you do with the information you process. You're not supposed to just believe everything you read, or let it affect you. We don't need some government or organization deciding what can be shown online. Read history and see what follows mass censorship.

[–] [email protected] 0 points 5 months ago (1 children)

I am bewildered that so many people construe this as suggesting it should be a government or a company deciding what to show you. Obviously any kind of firewall/filter ought to be optional and user-controlled!

[–] [email protected] 0 points 5 months ago

In fairness, social media already has a problem with creating echo chambers that serve only to reinforce and exaggerate your existing world view. I think some exposure to perspectives other than one's own is important and healthy.
