AI isn't the threat to the web that this article makes it out to be. The threat we currently face is another round of COPPA and forced personal identification for accessing the internet under the guise of 'protecting the children'.
They can all be parts of the threat.
The threat itself is that governments and big corporations have a comprehensive strategy for censoring and controlling the Web. Since the Web is nowadays the only media space that has preserved some appearance of freedom, this is bad. The end goal is that nobody will hear you scream. I mean, they've already succeeded for the most part.
Parts of that strategy are (I tried to separate them, but they intersect):
- Attracting people to centralized, controlled recommendation systems that opaquely determine what you will and won't see. Since your ability to process information is limited, no outright censorship is even needed: you will simply never see the "wrong" information, discourse, or even emotion about something unless you go looking for it intentionally. That's Facebook, Reddit, Twitter, search engines, and now LLM chatbots when used in place of a search engine.
- Confusing and demoralizing people out of organizing anywhere else. That's a softer version of the first point, as in "maybe we won't decide what you think about, but we will slow you down." There are actions, laws, and propaganda that work very efficiently in that direction. Apathy is death.
- Market pressure: businesses use the Web in particular ways, so small nudges are made to keep that culture on a track that is particularly convenient for control.
- A fake progress and complexity race: yes, maybe enterprise software has to be complex, but Web technologies and Web browsers don't. Most of the "new" things apparently exist just to cut off competitors with smaller resources. Also oligopolization.
- A legal climate that endorses oligopolization.
Then there is outright censorship, prosecution, and bullying.
And then there are likely cases of mafia-style assassin sh*t, which we wouldn't know about anyway. I think Aaron Swartz and Ian Murdock may fit here.
Why not both?
Given the massive increase in search-result garbage, it's pretty clear "AI" is a serious problem.
And while I find things like COPPA massively invasive, offensive, and nothing more than power brokers reaching for yet more control, I can sidestep it, and it will have the unintended consequence of increasing encryption and public awareness of their bullshit.
Honestly, I just haven't seen this increase. Search results were clogged with blogspam three years ago and they are clogged with it now. LLMs seem to have had little effect either way.
COPPA is 100% a threat to online privacy. AI, although just a tool, is absolutely empowering those with the goal of social manipulation through disinformation. Fabricated information can no longer be disproven at the rate it’s created, and too many “news” sites rely on trending web scrapers for content. By the time retractions and corrections are made, the masses are reading the next headline.
Would you expand on how LLMs are not the threat as posed in the article?