[–] [email protected] 0 points 1 week ago (8 children)

The phoronix screenshot is the only one that contains open trans hate, though

Pretty much proves that it is the „anime alter ego” of the guy. My god, the times we live in.

and if you think the orange site is at all safe for trans people, that tells me everything I need to know

[–] [email protected] 0 points 1 week ago (11 children)

Yeah classic attention seeking behaviour. Just say you’re stopping work on it for personal reasons, or give details. The only reason to tease gossip like this is because you like the drama.

hey fucker I found one of those toxic posts you don’t seem to be able to see

[–] [email protected] 0 points 1 week ago (4 children)

jesus fuck

it’s not particularly gonna help or even make me feel better, but I’m probably gonna reopen that first Lemmy thread a little later and just start banning these awful fuckers from our instance. nobody attacking Asahi has a god damn thing to say to any member of our community.

[–] [email protected] 0 points 1 week ago

yep, your second attempt’s still a fashy dad quip about art and it’s still as funny as the grave. you haven’t produced anything with the subjective value of even terrible art, and I think it’s about time you stop trying

[–] [email protected] 0 points 1 week ago (8 children)

You: literally splatters shitty posts into a thread

“Why am I being downvoted”

[–] [email protected] 0 points 1 week ago

I vaguely remember that one of the articles talking about the physics forum mentioned it happening elsewhere, but I haven’t dug into it myself. it might just be one or two shitty admins doing this, but I suspect (without evidence, I just can’t think of another reason to do it) there’s some party offering a financial incentive for them to go back and fuck up their old forums

[–] [email protected] 0 points 2 weeks ago (2 children)

I think you’re absolutely correct, and this feels to me like the only reason why we’re seeing some of the bizarre shit we’ve been keeping an eye on:

  • several old forums, all of which are unique high-quality data sources, are being polluted by their own admins with backdated LLM-generated answers. this destroys that forum as a trustworthy data source and removes it as competition for the LLM that already scraped the forum — and, as a bonus, it also makes training a future LLM on that data source utterly impractical without risking model collapse.
  • Wikipedia refuses to compromise on quality in general, so it’s under increasing political pressure to change. the game here is to shut down or pollute the original data source by any means necessary, so that the only way to access that data becomes an LLM. the people behind the AI startups are experts at creating monopolies, and shutting down a world-class data source like Wikipedia or making it otherwise unusable would guarantee a monopoly position for them.
[–] [email protected] 0 points 2 weeks ago

I’ve stopped myself from starting this exact project, with the fediverse as the curation source, several times. I’ve talked about this before, but interestingly Postgres’ full-text search is effectively the complete core of a search engine, minus what you’d need for crawling and ranking (which is where curation and a bit of scripting would come in)
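to make that concrete, here’s a rough sketch of the kind of thing I mean (everything here is made up for illustration: the posts table, the column names, and psycopg2 as the glue; the crawling/curation side is what would actually fill the table):

```python
# minimal sketch: Postgres full-text search as the "core" of a search engine.
# the schema, column names, and DSN are invented for illustration only.
import psycopg2

SCHEMA = """
CREATE TABLE IF NOT EXISTS posts (
    id   bigserial PRIMARY KEY,
    url  text NOT NULL,
    body text NOT NULL
);
-- GIN index over the tsvector so searches don't rescan every row
CREATE INDEX IF NOT EXISTS posts_fts_idx
    ON posts USING GIN (to_tsvector('english', body));
"""

SEARCH = """
SELECT url,
       ts_rank(to_tsvector('english', body),
               websearch_to_tsquery('english', %s)) AS rank
FROM posts
WHERE to_tsvector('english', body) @@ websearch_to_tsquery('english', %s)
ORDER BY rank DESC
LIMIT 20;
"""

def search(dsn: str, query: str) -> list[tuple[str, float]]:
    # ranking here is plain ts_rank; curation and scripting would layer on top of this
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(SCHEMA)
        cur.execute(SEARCH, (query, query))
        return cur.fetchall()
```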

other than resources and time, one big open question is how to do this kind of thing as a positive part of the fediverse — to not make the same mistake that a bunch of techbros already have and index the fediverse without consent. how does one make the curation process simultaneously consensual and also automated enough that it can be reasonably ruggedized against abuse?

[–] [email protected] 0 points 2 weeks ago

also, I forgot to point this out earlier, but it’s worth saying: the only reason why I’m considering GrapheneOS as a viable path forward is because as an AOSP fork, it isn’t all-or-nothing. I can create a private space or profile for Google Play Services and all my spyware shit and keep it isolated, and ending the session kills all the processes those apps might have been running.

that’s fantastic! I finally don’t have to switch fully to open source apps and do without working non-janky notifications to have a modicum of privacy on Android! the graphene devs assume I’m not gonna be perfect and they ruggedized their fork against that and put a ton of effort into making even stuff that’s deeply reliant on Google safer! why in fuck aren’t they like that for everything?

[–] [email protected] 0 points 2 weeks ago (1 children)

To be clear, this is not a rant against security… I treat security of my devices seriously.

exactly! and taking this shit seriously is why this overbearing shit sucks, especially when it’s theater or enforced for threats that aren’t realistic for your threat model. unlike some of these fuckers, we both actually intend to daily the devices we’re locking down.

because apparently having non-smooth scrolling can be fingerprinted (that being possible is IMO reason alone to burn down the modern web altogether)

oh I fucking hate this. it’s the same shit as forcing dark mode off/on as part of fingerprinting protection. not only is this the absolute wrong way to fix that shit, it’s pretty monstrous for anyone who needs dark mode or light mode to use their device in anything resembling comfort — your user may have a visual impairment or severe light sensitivity, and now they’re fucked cause the developers couldn’t accept a minor fingerprinting risk (and light/dark mode and smooth scrolling are both utterly minor, to be real)

Possibly controversial, but I’ll say it: web browsers being so annoying about self-signed certificates.

motherfucker yes! the CA infrastructure is nowhere near usable for all cases and we all know it, but locking down the web and making development and self-hosting fucking annoying is the game for the browser vendors and Google in particular. to add to this: why the fuck is my browser acting like me not having a cert for localhost is a tragedy? why does the browser sandbox not allow certain shit unless I’m using https of all things to access localhost? where precisely is the fucking threat here? (I’m sure some well-paid security asshole at one of the browser vendors could snark a list of unlikely shit as reasons why localhost needs to be treated as insecure with no toggle or dev tools option to treat it otherwise… and I just don’t give a fuck)

The entire reality of secure boot on most platforms

I’d love good secure boot! the one on PCs ain’t it at all, and unfortunately the secure ones tend to be used to lock out device owners from modifying what they own and implement shit like attestation that’s just there to violate your rights and make sure you’re not blocking ads, so unfortunately good secure boot might be incompatible with capitalism. for now though at least graphene seems to benefit from a secure secure boot chain that hasn’t been locked down yet?

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago) (6 children)

the GrapheneOS developers would like you to know that switching to Ironfox, the only Android Firefox fork (to my knowledge) that implements process sandboxing (and also ships ublock origin for convenience) (also also, the Firefox situation on Android looks so much like intentional Mozilla sabotage, cause they have a perfectly good sandbox sitting there disabled) is utterly unsafe because it doesn’t work with a lesser Android sandbox named isolatedProcess or have the V8 sandbox (because it isn’t V8) and its usage will result in your immediate death

so anyway I’m currently switching from vanadium to ironfox and it’s a lot better so far

[–] [email protected] 0 points 2 weeks ago (3 children)

speaking of privacy, if you got unlucky during secret santa and got an echo device and set it up out of shame as a kitchen timer or the speaker that plays while you poop: get rid of it right the fuck now, this is not a joke, they’re going mask-off on turning the awful things into always-on microphones and previous incidents have made it clear that the resulting data will not be kept private and can be used against you in legal proceedings (via mastodon)

 

from the linked github thread:

Your project is in violation of the AGPL, and you have stated this is intentional and you have no plans to open source it. This is breaking the law, and as such I've began to help you with the first steps of re-open sourcing the plugin.

the project author (who gets paid for violating the AGPL via patreon) responds like a mediocre crypto grifter and insists their violation of the law be debated on the discord they control (where their shitty community can shout down the reporter):

While keeping code private doesn't guarantee security, it does make it harder for bad actors to keep up with changes. You are welcome to debate this matter in the MakePlace discord: https://discord.com/invite/YuvcPzCuhq If you are able to convince the MakePlace community that keeping the code open-source is better, I will respect the wishes of the community.

aaaand the smackdown:

Respectfully, I won't attempt to "debate" or "convince" anyone; I'm leaving this pull request and my fork here for others to see and use. It is not a matter of "better"; you are violating a software license and the law. It does not "make it harder" for anyone; Harmony hooking exists, IL modification exists, you can modify plugins from other plugins.

 

(via Timnit Gebru)

Although the board members didn’t use the language of abuse to describe Altman’s behavior, these complaints echoed some of their interactions with Altman over the years, and they had already been debating the board’s ability to hold the CEO accountable. Several board members thought Altman had lied to them, for example, as part of a campaign to remove board member Helen Toner after she published a paper criticizing OpenAI, the people said.

The complaints about Altman’s alleged behavior, which have not previously been reported, were a major factor in the board’s abrupt decision to fire Altman on Nov. 17, according to the people. Initially cast as a clash over the safe development of artificial intelligence, Altman’s firing was at least partially motivated by the sense that his behavior would make it impossible for the board to oversee the CEO.

For longtime employees, there was added incentive to sign: Altman’s departure jeopardized an investment deal that would allow them to sell their stock back to OpenAI, cashing out equity without waiting for the company to go public. The deal — led by Joshua Kushner’s Thrive Capital — values the company at almost $90 billion, according to a report in the Wall Street Journal, more than triple its $28 billion valuation in April, and it could have been threatened by tanking value triggered by the CEO’s departure.

huh, I think this shady AI startup whose product is based on theft that cloaks all its actions in fake concern for humanity might have a systemic ethics problem

 

contrary to popular belief, maybe lying your ass off on the orange site is actually a fucking stupid career move

for those who don’t know about Kyle, see our last thread about Cruise. the company also popped up a bit recently when we discussed general orange site nonsense — Paully G was doing his best to make Cruise look like an absolute success after the safety failings of their awful self-driving tech became too obvious to ignore last month

 

this article is incredibly long and rambly, but please enjoy as this asshole struggles to select random items from an array in presumably Javascript for what sounds like a basic crossword app:

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. I made some false starts using what I could gather, and while I did my thing—programming—Ben told GPT-4 what he wanted and got code that ran perfectly.

Fine: commands like those are notoriously fussy, and everybody looks them up anyway.

ah, the NP-complete problem of just fucking pulling the file into memory (there’s no way this clown was burning a rainforest asking ChatGPT for a memory-optimized way to do this), selecting a random item between 0 and the array’s length minus 1, and maybe storing that index in a second array if you want to guarantee uniqueness. there’s definitely not literally thousands of libraries for this if you seriously can’t figure it out yourself, hackerman
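for reference, here’s roughly what that looks like if you just do the obvious thing (a sketch; the dictionary path and count are whatever you want them to be):

```python
# pick 100 unique random lines from a dictionary file by, uh, reading it into memory
import random

def random_lines(path: str, count: int = 100) -> list[str]:
    with open(path) as f:
        lines = f.read().splitlines()
    # random.sample handles the "store the index in a second array" bookkeeping for you
    return random.sample(lines, min(count, len(lines)))

print("\n".join(random_lines("/usr/share/dict/words")))
```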

I returned to the crossword project. Our puzzle generator printed its output in an ugly text format, with lines like "s""c""a""r""*""k""u""n""i""s""*" "a""r""e""a". I wanted to turn output like that into a pretty Web page that allowed me to explore the words in the grid, showing scoring information at a glance. But I knew the task would be tricky: each letter had to be tagged with the words it belonged to, both the across and the down. This was a detailed problem, one that could easily consume the better part of an evening.

fuck it’s convenient that every example this chucklefuck gives of ChatGPT helping is for incredibly well-trodden toy and example code. wonder why that is? (check out the author’s other articles for a hint)
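and to labour the point, here’s a rough sketch of that “better part of an evening” grid-tagging task (assuming the generator output really is one quoted character per cell with * as a block, which is a guess from the single line he shows):

```python
# parse the ugly crossword generator output and tag every cell with the
# across and down words it belongs to. assumes a rectangular grid.
import re
from collections import defaultdict

def parse_grid(text: str) -> list[list[str]]:
    # each line looks like "s""c""a""r""*""k""u""n""i""s""*"...
    return [re.findall(r'"(.)"', line) for line in text.splitlines() if line.strip()]

def words_in_line(cells: list[str]) -> list[tuple[int, str]]:
    # return (start_index, word) for every run of 2+ letters between * blocks
    words, start = [], None
    for i, c in enumerate(cells + ["*"]):
        if c == "*":
            if start is not None and i - start >= 2:
                words.append((start, "".join(cells[start:i])))
            start = None
        elif start is None:
            start = i
    return words

def tag_cells(grid: list[list[str]]) -> dict[tuple[int, int], dict[str, str]]:
    tags: dict[tuple[int, int], dict[str, str]] = defaultdict(dict)
    for r, row in enumerate(grid):                      # across words
        for start, word in words_in_line(row):
            for k in range(len(word)):
                tags[(r, start + k)]["across"] = word
    for c in range(len(grid[0])):                       # down words
        col = [row[c] for row in grid]
        for start, word in words_in_line(col):
            for k in range(len(word)):
                tags[(start + k, c)]["down"] = word
    return tags

# tag_cells(parse_grid(raw_output)) gives, per (row, col), the across/down words
# that cell belongs to -- which is most of what the "pretty Web page" needs
```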

I thought that my brother was a hacker. Like many programmers, I dreamed of breaking into and controlling remote systems. The point wasn’t to cause mayhem—it was to find hidden places and learn hidden things. “My crime is that of curiosity,” goes “The Hacker’s Manifesto,” written in 1986 by Loyd Blankenship. My favorite scene from the 1995 movie “Hackers” is

most of this article is this type of fluffy cringe, almost like it’s written by a shitty advertiser trying and failing to pass themselves off as a relatable techy

 

I found this searching for information on how to program for the old Commodore Amiga’s HAM (Hold And Modify) video mode and you gotta touch and feel this one to sneer at it, cause I haven’t seen a website this aggressively shitty since Flash died. the content isn’t even worth quoting as it’s just LLM-generated bullshit meant to SEO this shit site into the top result for an existing term (which worked), but just clicking around and scrolling on this site will expose you to an incredible density of laggy, broken full-screen animations that take way too long to complete and block reading content until they’re done, alongside a long list of other violations of good design sense (find your favorites!)

bonus sneer: arguably I’m finally taking up Amiga programming as an escape from all this AI bullshit. well fuck me I guess cause here’s one of the vultures in the retrocomputing space selling an enshittified (and very ugly) version of AmigaOS with a ChatGPT app and an AI art generator, cause not even operating on a 30-year-old computer will spare me this bullshit:

like fuck man, all I want to do is trick a video chipset from 1985 into making pretty colors. am I seriously gonna have to barge screaming into another German demoscene IRC channel?

 

the writer Nina Illingworth, whose work has been a constant source of inspiration, posted this excellent analysis of the reality of the AI bubble on Mastodon (featuring a shout-out to the recent articles on the subject from Amy Castor and @[email protected]):

Naw, I figured it out; they absolutely don't care if AI doesn't work.

They really don't. They're pot-committed; these dudes aren't tech pioneers, they're money muppets playing the bubble game. They are invested in increasing the valuation of their investments and cashing out, it's literally a massive scam. Reading a bunch of stuff by Amy Castor and David Gerard finally got me there in terms of understanding it's not real and they don't care. From there it was pretty easy to apply a historical analysis of the last 10 bubbles, who profited, at which point in the cycle, and where the real money was made.

The plan is more or less to foist AI on establishment actors who don't know their ass from their elbow, causing investment valuations to soar, and then cash the fuck out before anyone really realizes it's total gibberish and unlikely to get better at the rate and speed they were promised.

Particularly in the media, it's all about adoption and cashing out, not actually replacing media. Nobody making decisions and investments here, particularly wants an informed populace, after all.

the linked mastodon thread also has a very interesting post from an AI skeptic who used to work at Microsoft and seems to have gotten laid off for their skepticism

 

there’s an alternate universe version of this where musk’s attendant sycophants and bodyguard have to fish his electrocuted/suffocated/crushed body out from the crawlspace he wedged himself into with a pocket knife

 

404media continues to do devastatingly good tech journalism

What Kaedim’s artificial intelligence produced was of such low quality that at one point in time “it would just be an unrecognizable blob or something instead of a tree for example,” one source familiar with its process said. 404 Media granted multiple sources in this article anonymity to avoid retaliation.

this is fucking amazing. the company tries to pass it off as a QA check, but they’re really just paying 3D modelers $1-$4 a pop to churn out models in 15 minutes while they pretend the work’s being done by an AI, and now I’m wondering what other AI startups have also discovered this shitty dishonest growth hack

 

kinda glad I bounced off of the suckless ecosystem when I realized how much their config mechanism (C header files and a recompile cycle) fucking sucked

 

0
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
 

no excerpts yet cause work destroyed me, but this just got posted on the orange site. apparently a couple of urbit devs realized urbit sucks actually. interestingly they correctly call out some of urbit’s worst points (like its incredibly high degree of centralization), but I get the strong feeling that this whole thing is an attempt to launder urbit’s reputation while swapping out the fascists in charge

e: I also have to point out that this is written from the insane perspective that anyone uses urbit for anything at all other than an incredibly inefficient message board and a set of interlocking crypto scams

e2: I didn’t link it initially, but the orange site thread where I found this has heated up significantly since then

 

hey let’s see what the people who killed and buried hacker culture think should go in the jargon file!

If the spirit of the original Jargon file was to be a living document, alas, it failed to keep with the times.

Hackers at large have moved away from Lisp despite Paul Graham and other evangelists […]

Hackers also have moved away from academia at large, and 9-5 jobs at tech behemoths are more natural habitats for them, which also shaped the lingo. I mean, there’s a whole layer of slang usually pertinent to outsourcing agencies and to cubicle farms.

I can’t wait for the corporate-approved jargon file, with any hint of anti-capitalism replaced with fun words and quotes from billionaires to share as the soul leaves my body

So in order for the document to evolve, we need a system to determine consensus. Everyone who cares runs a program on their computer that joins the network and registers their intent. With each proposed change, a query goes out to the network, and it's up to everyone on the network to say yea or nay to the proposal. With enough "yea"s, the document is updated.

...this is starting to sound like a blockchain, isn't it.

for the absolute sake of fuck. coming soon: HackerDAO! collect 10xer tokens and finally prove to the junior devs why corporate gives you so many points to crunch on! vote on fun new jargon, but only if it’s crypto-related! surely you’re hacker enough to be on the pump side of this pump and dump!
