zogwarg

joined 2 years ago
[–] [email protected] 6 points 4 days ago

But code that doesn’t crash isn’t necessarily code that works. And even for code made by humans, we sometimes do find out the hard way, and it can sometimes impact an arbitrarily large number of people.

[–] [email protected] 6 points 6 days ago* (last edited 6 days ago) (2 children)

Did you read any of what I wrote? I didn't say that human interactions can't be transactional; I quite clearly (at least I think) said that LLMs are not even transactional.


EDIT:

To clarify, and maybe to put it in terms closer to your interpretation:

With humans: Indeed you should not have unrealistic expectations of workers in the service industry, but you should still treat them with human decency and respect. They are not there to fit your needs; they have their own selves, which matter. They are more than meets the eye.

With AI: While you should also not have unrealistic expectations of chatbots (which I would recommend avoiding altogether, really), it's that where humans are more than meets the eye, chatbots are less. Inasmuch as you still choose to use them, by all means remain polite, for your own sake rather than for the bot's. There's nothing below the surface.

I don't personally believe that taking an overly transactional view of human interactions is desirable or healthy; I think it's more useful to frame it as respecting other people's boundaries and recognizing when you might be a nuisance (or when to be a nuisance, when there is enough at stake). Indeed, I think, not that this appears to be the case for you, that being overly transactional could lead you to believe that affection can be bought, or that you can be owed affection.

And I especially don't think it healthy to essentially be saying: "have the same expectations of chatbots and service workers".


TLDR:

You should avoid catching feelings for service workers because they have their own world and wants, and bringing unsolicited advances makes you a nuisance. It's not just about protecting yourself; it's also about protecting them.

You should never catch feelings for a chatbot, because it doesn't have its own world or wants; projecting feelings onto it is cutting yourself off from humanity. It is mostly about protecting yourself, although I would argue it also protects society (by your staying healthy).

[–] [email protected] 8 points 1 week ago* (last edited 1 week ago) (4 children)

Don't besmirch the oldest profession by making it akin to a soulless vacuum. It's not even a transaction! The AI gains nothing and gives nothing. It's alienation in its purest form (no wonder the rent-seekers love it). It's the ugliest and least faithful mirror.

[–] [email protected] 10 points 1 week ago

✨The Vibe✨ is indeed getting increasingly depressing at work.

It's also killing my parents' freelance translation business. There is still money in live interpreting, prestige work, and material where highly technical accuracy very obviously matters, but a lot of it is drying up.

[–] [email protected] 6 points 2 weeks ago (1 children)

A glorious snippet:

The movement ~~connected to~~ attracted the attention of the founder culture of Silicon Valley and ~~leading to many shared cultural shibboleths and obsessions, especially optimism about the ability~~ of intelligent capitalists and technocrats to create widespread prosperity.

At first I was confused about what kind of moron would try using shibboleth positively, but it turns out it's just terribly misquoting a citation:

Rationalist culture — and its cultural shibboleths and obsessions — became inextricably intertwined with the founder culture of Silicon Valley as a whole, with its faith in intelligent creators who could figure out the tech, mental and physical alike, that could get us out of the mess of being human.

Also lol at insisting on "exonym" as a descriptor for TESCREAL, removing Timnit Gebru and Émile P. Torres and the clear intention of criticism from the term; it doesn't really even make sense to use the acronym unless you're doing critical analysis of the movement(s). (Also removing mentions of the especially strong overlap between EA and rationalists.)

It's a bit of a hack job at making the page more biased, with a very thin veneer of still using the sources.

[–] [email protected] 0 points 1 month ago

Ah, but not everyone's taste is the same; therefore the best conceivable plate of nachos is made worse by existing, because it can then be confronted with people's preferences instead of staying in the platonic realm!

[–] [email protected] 1 points 1 month ago

The standout monuments of stupidity, and/or monstrosity, in McCarthy's response for me are:

  • Calling JW a failed computer scientist for failing to see that computers and clockwork are different, when really there is no computation a computer can make that Turing-complete clockwork couldn't replicate.
  • Essentially saying, by analogy, that just as religion should not stand in the way of science, morals should not stand in the way of science either?!?!?! (I mean really? WTF)

[–] [email protected] 1 points 3 months ago

Good video overall, despite some misattributions.

Biggest point I disagree with: "He could have started a cult, but he didn't"

Now I get that there's only so much toxic exposure to Yud's writings one can take, but it's missing a whole chunk of his persona/æsthetics. And ultimately I think it boils down to the earlier part that stange did notice (via an echo of su3su2u1): "Oh, aren't I so clever for manipulating you into thinking I'm not a cult leader, by warning you of the dangers of cult leaders."

And I think he even expects his followers to recognize the "subterfuge".