apparently those qualcomm NPUs (the "AI assist" chips in the copilot(?) laptops) aren't very good
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Molly White reports on Kamala Harris's recent remarks about cryptocurrency being a cool opportunity for Black men.
VP Harris's press release (someone remind me to archive this once internet archive is up). Most of the rest of it is reasonable, but it paints cryptocurrency in a cautiously positive light.
Supporting a regulatory framework for cryptocurrency and other digital assets so Black men who invest in and own these assets are protected
[...]
Enabling Black men who hold digital assets to benefit from financial innovation.
More than 20% of Black Americans own or have owned cryptocurrency assets. Vice President Harris appreciates the ways in which new technologies can broaden access to banking and financial services. She will make sure owners of and investors in digital assets benefit from a regulatory framework so that Black men and others who participate in this market are protected.
Overall there has been a lot of cryptocurrency money in this US election on both sides of the aisle, which Molly White has also reported extensively on. I kind of hate it.
"regulation" here is left (deliberately) vague. Regulation should start with calling out all the scammers, shutting down cryptocurrency ATMs, prohibiting noise pollution, and going from there; but we clearly don't live in a sensible world.
Introducing the official crypto coin of the Harris-Walz ticket: JoyCoin! Trading under JOY. Every time a coin is minted, we shoot someone from the global south in the head.
Presented, without comment, this book cover:
(found on the social medias, the book's website)
God, I hope this is a scam, and that whoever is running it is just smashing together today’s buzzwords to print money.
This $180 ebook had better be completely autoplagged and in no way intended to be informational.
the upside of it listing a pile of author names: one can go look up their published works, and add them to crank trackers if necessary (seems likely)
the ToC is some fantastical fucking nonsense
this is one hell of a hat trick
only needs a quantum chapter somewhere in there for the bonus scoring round.
In other news, Elon actually did it
I really wonder what the meeting looked like where they decided on that change, because I’m struggling to come up with a single argument for it that doesn’t boil down to giving abusive asshats more playtime.
my bet: tweets that came to felon's attention which he couldn't view because the poster had blocked felon
This is a license for stalkers & abusers! No surprise from someone like Elon, I suppose
I’m really really not happy about this. There is one person I’ve been trying to keep out for the last few years and now they can come crawl all my fucking posts?? And report my account!?
Edit: apparently being protected should offer me some protection still.
saw this via a friend earlier, forgot to link. xcancel
socmed administrator for a conf rolls with liarsynth to "expand" a cropped image, and the autoplag machine shits out a more sex-coded image of the speaker
the mindset of "just make some shit to pass muster" obviously shines through in a lot of promptfans and promptfondlers, and while that's fucked up I don't want to get too stuck on that now. one of the things I've been mulling over for a while is what a world (and digital landscape) with a richer capability for enthusiastic consent could look like. and by that I mean not just more granular (à la the apple photo/phonebook ACLs) than the current y/n bullshit where a platform makes a landgrab for a pile of shit, but something else entirely. "yeah, on my gamer profile you can make shitposts, but on academic stuff please keep it formal" expressed and traceable
even if just as a thought experiment (because of course there are lots of funky practical problems, combined with the "humans just don't really exist that way" effort-tax overhead this may require), it might suggest some useful avenues for handling this extremely overt bullshit, and for informing/shaping impending norms
(e: apologies for semi stream of thought, it's late and i'm tired)
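To make the thought experiment above a bit more concrete: a minimal sketch of what "expressed and traceable" per-context consent could look like as a data structure. Everything here is invented for illustration — the class names, contexts, and use strings are not from any real platform or API, just one way the "gamer profile vs academic profile" idea might be modeled, with default-deny semantics.

```python
# Illustrative sketch only: per-context consent policies, default-deny.
# All names (ConsentPolicy, Profile, the context and use strings) are
# hypothetical, made up for this thought experiment.
from dataclasses import dataclass, field


@dataclass
class ConsentPolicy:
    context: str                                  # e.g. "gamer", "academic"
    allowed_uses: set = field(default_factory=set)

    def permits(self, use: str) -> bool:
        # Only uses the owner explicitly consented to are permitted.
        return use in self.allowed_uses


@dataclass
class Profile:
    owner: str
    policies: dict = field(default_factory=dict)  # context -> ConsentPolicy

    def check(self, context: str, use: str) -> bool:
        # Default-deny: no policy for this context, or a use the owner
        # never consented to, means "no".
        policy = self.policies.get(context)
        return policy is not None and policy.permits(use)


profile = Profile(owner="speaker")
profile.policies["gamer"] = ConsentPolicy("gamer", {"shitposts", "remix"})
profile.policies["academic"] = ConsentPolicy("academic", {"formal-quote"})

profile.check("gamer", "remix")           # explicitly consented
profile.check("academic", "ai-outpaint")  # never consented, so denied
```

The point of the default-deny choice is exactly the inversion of the current landgrab model: anything not affirmatively consented to (like an "expand the cropped image" use) is refused, rather than assumed allowed.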
25085 N + Oct 15 GitHub ( 19K) Your free GitHub Copilot access has expired
tinyviolin.bmp
fig. 1: how awful.systems works
it just clicked for me but idk if it makes sense: openai's nonprofit status could be used later (inevitably in court) to make the research clause of fair use work. they had it when training their models, and that might have been a factor in why they retained it, on top of trying to attract actual skilled people and not just hypemen and money
There's no way this works, right? It's like a 5y.o.'s idea of a gotcha.
This would be like starting a tax-exempt charity to gather up a large amount in donations and then switching to a for-profit before spending it on any charitable work and running away with the money.
i'm not a lawyer and i typed this up after 4h of sleep, trying to make sense of what tf they were thinking. they're not bagging up money, they're stealing all the data they can, so it's less direct and it'd depend on how that data (unstructured, public) gets valued. then, what a coincidence, their proprietary thing made something commercially useful, or so they were thinking. sbf went to court with less
There’s no way this works, right?
the US legal system has this remarkable "little" failure mode where it is easily repurposed to be not an engine of justice, but instead an engine of enforcing whatever story you can convince someone of
(the extremely weird interaction(s) of "everything allowed except what is denied", case precedent, and the abovementioned interaction mode, result in some really fucking bad outcomes)
this demented take on using GenAI to create documentation for open source projects
https://lobste.rs/s/rmbos5/large_language_models_reduce_public#c_j8boat
Good sneer from "Internet_Janitor" a few comments up the page:
LLMs inherently shit where they eat.
The top comment's also pretty good, especially the final paragraph:
I guess these companies decided that strip-mining the commons was an acceptable deal because they’d soon be generating their own facts via AGI, but that hasn’t come to pass yet. Instead they’ve pissed off many of the people they were relying on to continue feeding facts and creativity into the maws of their GPUs, as well as possibly fatally crippling the concept of fair use if future court cases go against them.
oh hey that would be my comment 😁
It was a pretty good comment, and pointed out one of the possible risks this AI bubble can unleash.
I've already touched on this topic, but it seems possible (if not likely) that copyright law will be tightened in response to the large-scale theft performed by OpenAI et al. to feed their LLMs, with both of us suspecting fair use will likely take a pounding. As you pointed out, the exploitation of fair use's research exception makes it especially vulnerable to repeal.
On a different note, I suspect FOSS licenses (Creative Commons, GPL, etcetera) will suffer a major decline in popularity thanks to the large-scale code theft this AI bubble brought - after two-ish years of the AI industry (if not tech in general) treating anything publicly available as theirs to steal (whether implicitly or explicitly), I'd expect people are gonna be a lot stingier about providing source code or contributing to FOSS.
Yeah, I'm no longer worried that LLMs will take my job (nor ofc that AGI will kill us all). Instead the lasting legacy of GenAI will be an elevated background level of crud and untruth, an erosion of trust in media in general, and less free quality stuff being available. It's a bit like draining the Aral Sea: a vibrant ecosystem permanently destroyed in the short-sighted pursuit of "development".
the lasting legacy of GenAI will be an elevated background level of crud and untruth, an erosion of trust in media in general, and less free quality stuff being available.
I personally anticipate this will be the lasting legacy of AI as a whole - everything that you mentioned was caused in the alleged pursuit of AGI/Superintelligence^tm^, and gen-AI has been more-or-less the "face" of AI throughout this whole bubble.
I've also got an inkling (which I turned into a lengthy post) that the AI bubble will destroy artificial intelligence as a concept - a lasting legacy of "crud and untruth" as you put it could easily birth a widespread view of AI as inherently incapable of distinguishing truth from lies.