In other news, someone tried selling a vibe-coded ytp-dl wrapper and got publicly called out for it:
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
So two weeks ago I linked titotal's detailed breakdown of what is wrong with AI 2027's "model" (tl;dr: even accepting the "line goes up" premise of the whole thing, AI 2027's math was so bad that they made the line always asymptote to infinity in the near future regardless of inputs). Titotal went to pretty extreme lengths to meet the "charitability" norms of lesswrong, corresponding with one of the AI 2027 authors, carefully considering what they might have intended, responding to comments in detail and depth, and in general not simply mocking the entire exercise in intellectual masturbation and hype generation as it rightfully deserves.
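(If you want to see why the curve blows up no matter what you feed it: as I understand titotal's writeup, the "superexponential" option assumes each successive capability doubling takes some fixed fraction of the time the previous one did, so the total time before the vertical asymptote is just a geometric series. Here's a toy sketch of that failure mode, entirely my own numbers and code, not anything from the actual AI 2027 spreadsheet:)

```python
# Toy model (mine, not the AI 2027 authors' code): a "superexponential" where each
# capability doubling takes shrink_factor times as long as the previous one.
# The time to the vertical asymptote is a geometric series, so it converges to a
# finite date no matter how slow the first doubling is.
def years_to_asymptote(first_doubling_years: float, shrink_factor: float) -> float:
    # T0 + T0*r + T0*r^2 + ... = T0 / (1 - r), for 0 < r < 1
    return first_doubling_years / (1.0 - shrink_factor)

for t0 in (0.5, 1.0, 2.0, 5.0):
    # Even granting a leisurely 5-year first doubling, the curve still hits
    # infinity within a decade at r = 0.5. The inputs barely matter.
    print(f"first doubling {t0} yr -> asymptote in {years_to_asymptote(t0, 0.5)} yr")
```

Pick any shrink factor under 1 and any starting pace and you still get a finite-date singularity, which is the "regardless of inputs" part.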
But even with all that effort, someone still decided to make an entire (long, obviously) post with a section dedicated to tone-policing titotal: https://thezvi.substack.com/p/analyzing-a-critique-of-the-ai-2027?open=false#%C2%A7the-headline-message-is-not-ideal (here is the lw link: https://www.lesswrong.com/posts/5c5krDqGC5eEPDqZS/analyzing-a-critique-of-the-ai-2027-timeline-forecasts)
Oh, and looking back at the comments on titotal's post... his detailed elaboration of some pretty egregious errors in AI 2027 didn't really change anyone's mind, at most moving them back a year to 2028.
So, moral of the story: lesswrongers and rationalists are in fact not worth the effort to talk to, and we are right to mock them. The numbers they claim to use are pulled out of their asses to fit vibes they already feel.
And my choice for most sneerable line out of all the comments:
And I therefore am left wondering what less shoddy toy models I should be basing my life decisions on.
Oh, and looking back at the comments on titotal’s post… his detailed elaboration of some pretty egregious errors in AI 2027 didn’t really change anyone’s mind, at most moving them back a year to 2028.
Huh, what's this I have open in another browser tab:
The Great Disappointment in the Millerite movement was the reaction that followed Baptist preacher William Miller's proclamation that Jesus Christ would return to the Earth by 1844, which he called the Second Advent. His study of the Daniel 8 prophecy during the Second Great Awakening led him to conclude that Daniel's "cleansing of the sanctuary" was cleansing the world from sin when Christ would come, and he and many others prepared. When Jesus did not appear by October 22, 1844, Miller and his followers were disappointed.
Exactly. I would almost give the AI 2027 authors credit for committing to a hard date... except they already have a subtly hidden asterisk in the original AI 2027 noting some of the authors have longer timelines. And I've noticed lots of hand-wringing and but achkshuallies in their lesswrong comments about the difference between mode and median and mean dates and other excuses.
Like see this comment chain https://www.lesswrong.com/posts/5c5krDqGC5eEPDqZS/analyzing-a-critique-of-the-ai-2027-timeline-forecasts?commentId=2r8va889CXJkCsrqY :
My timelines moved up to median 2028 before we published AI 2027 actually, based on a variety of factors including iteratively updating our models. But it was too late to rewrite the whole thing to happen a year later, so we just published it anyway. I tweeted about this a while ago iirc.
...You got your AI 2027 reposted like a dozen times to /r/singularity, maybe many dozens of times total across Reddit. The fucking vice president has allegedly read your fiction project. And you couldn't be bothered to publish your best timeline?
So yeah, come 2028/2029, they already have a ready-made set of excuses to backpedal and move back the doomsday prophecy.
Ed's got another banger: https://www.wheresyoured.at/make-fun-of-them/
What's extra fun is that HN found it: https://news.ycombinator.com/item?id=44424456
There's at least one good thread (two, if you handle the HN response separately) that could be made from this. Don't have the time personally at the moment.
I will say that I'm shocked to see some reasonable shit in the HN comments: people saying the post is too long or that the tone is unacceptable are getting told off rather respectably, with some good explanations (effectively: this was written this way intentionally, you dolt). Broken clock and all that, I guess.
Another winner from Zitron. One of the things I learned working in tech support is that a lot of people tend to assume the computer is a magic black box that relies on terrible, secret magicks to perform its dark alchemy. And while the rabbit hole does go deep, there is a huge difference between the level of information needed to do what I did and the level of information needed to understand what I was doing.
I'm not entirely surprised that business is the same way, and I hope that in the next few years we have the same epiphany about government. These people want you to believe that you can't do what they do so that you don't ask the incredibly obvious questions about why it's so dumb. At least in tech support I could usually attribute the stupidity to the limitations of computers and misunderstandings from the users. I don't know what kinda excuse the business idiots and political bullshitters are going to come up with.
Found a piece which caught my attention: Resisting the Techno-Fascist Takeover: Are We Ready for Decomputing?
You want my personal opinion? The basic idea of "decomputing" that author Dan McQuillan is putting forward is likely gonna gain plenty of traction. The Trump administration more generally and DOGE more specifically have thoroughly undermined any notion of tech being an apolitical force, so arguing against the politics inherent to AI is gonna be an easier sell.
A bit of old news, but one that is still upsetting to me.
My favorite artist, Kazuma Kaneko, known for doing the demon designs in the Megami Tensei franchise, sold his soul to make an AI gacha game. While I was massively disappointed that he was going the AI route, the model was supposed to be trained solely on his own art and thus I didn't have any ethical issues with it.
Fast-forward to shortly after release and the game's AI model has been pumping out Elsa and Superman.
It's a bird! It's a plane! It's... Evangelion Unit 1 with a Superman logo and a Diabolik mask.
Rob Liefeld vibes
Good parallel, the hands are definitely strategically hidden to not look terrible.
the model was supposed to be trained solely on his own art
Much simpler models are practically impossible to train without an existing model to build upon. With GenAI, it's safe to assume that training that base model included large-scale scraping without consent.
the model was supposed to be trained solely on his own art and thus I didn’t have any ethical issues with it.
Personally, I consider training any slop-generator model to be unethical on principle. Gen-AI is built to abuse workers for corporate gain - any use or support of it is morally equivalent to being a scab.
Fast-forward to shortly after release and the game’s AI model has been pumping out Elsa and Superman.
Given plagiarism machines are designed to commit plagiarism (preferably with enough plausible deniability to claim fair use), I'm not shocked.
(Sidenote: This is just personal instinct, but I suspect fair use will be gutted as a consequence of the slop-nami.)
I had applied to a job and it screened me verbally with an AI bot. I find it strange talking to an AI bot that gives no indication of whether it's following what I'm saying, like a real human does with "uh huh" or whatnot. It asked me if I'd ever done Docker, and I answered that I transitioned a system to Docker. But I had paused awkwardly after the word "transition", so the AI bot congratulated me on my gender transition and moved on to the next question.
@zbyte64 the technical term for those "uh huh"s is backchanneling, and I wonder if audio chatbot models have issues timing those correctly. Maybe it's a choice between not doing it at all, or doing it at incorrect times. Either sounds creepy. The pause before an AI (any AI) responds is uncanny. I bet getting backchanneling right would be even more of a nightmare.
Anyway, congrats on getting through that interview, and congrats on your transition to Docker, I guess?
@zbyte64
I will go back to turning wrenches or slinging food before I spend one minute in an interview with an LLM ignorance factory.
@BlueMonday1984
@johntimaeus @zbyte64 @BlueMonday1984 Your choice. They made their choice. Judge not, lest ye be judged.
Now I’m curious how a protected class question% speedrun of one of these interviews would look. Get the bot to ask you about your age, number of children, sexual orientation, etc
Not sure how I would trigger a follow-up question like that. I think most of the questions were pre-programmed, but the transcription and the AI's response to each answer would "hallucinate". They really just wanted to make sure they were talking to someone real and not an AI candidate, because I then talked to a real person who asked much the same questions.
@zbyte64 @antifuchs Something like "I have been working with Database systems from the time my youngest was born to roughly the time of my transition." and just wait for the clarifying questions.