pedal to the metal on the content and information theft, folks:
seems it's this lot. despite their name, there appears to be almost nothing artful or artistic about them - it's all b2b shit for Selling Better
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Incorporating into your workflow a company that is a shell around other companies that are selling their products at a loss with no path to profitability seems like quite an unacceptable business risk to me. But I don't get paid the big bux
as long as you can mark it up and as long as the charade lasts, and as long as there's someone willing to pay, this will make money. when spicy autocomplete provider collapses just pack your bags and leave
@fullsquare @Soyweiser "I've sold monorails to Brockway, Ogdenville and North Haverbrook, and by gum, it put them on the map!"
I think when you have integrated all this into your workflows doing that and going back might be hard esp on the enterprise level.
wait, what do you mean "integrating it into workflows"? this juicero of outsourcing won't work as advertised, and it's probably cheaper and less prone to fucking up to hire a couple of Southeast Asians or Eastern Europeans. as long as the business is selling these juiceros, they'll be fine as long as they can find suckers. these suckers, tho, might be in trouble even before openai goes under for unrelated reasons
oh but that's not my problem, and those who got in that very stupid position deserve every last bit of it
For the enterprise using it, yes. For the enterprise selling it, probably not so much.
somebody had to do the design + layout for that banner. i wonder what was going through their head then.
"I should start the Butlerian Jihad"
"God I wish I was an AI so i didn't have to do this"
apparently a complete archive of scott siskind's old livejournal. found on the EA forum no less. https://archive.fo/fCFQx
That is odd in a way, you would expect them to honour his wish that the data no longer be available, but nope.
couldn't help myself, there are seldom more perfect opportunities to use this one
:( looked in my old CS dept's discord, recruitment posts for the "Existential Risk Laboratory" running an intro fellowship for AI Safety.
Looks inside at materials, fkn Bostrom and Kelsey Piper and whole slew of BS about alignment faking. Ofc the founder is an effective altruist getting a graduate degree in public policy.
that's CFAR cult jargon right?
Mesa-optimization? I'm not sure who in the lesswrong sphere coined it... but yeah, it's one of their "technical" terms that don't actually have academic publishing behind it, so jargon.
Instrumental convergence.... I think Bostrom coined that one?
The AI alignment forum has a claimed origin here. Is anyone on the article here from CFAR?
I'm thinking they hired Jar-Jar Binks to the team.
Mesa-optimization
Why use the perfectly fine 'inner optimizer' mentioned in the references when you can just ask google translate to give you the clunkiest, most pedestrian and also wrong part of speech Greek term to use in place of 'in' instead?
Also natural selection is totally like gradient descent brah, even though evolutionary algorithms actually modeled after natural selection used to be their own subcategory of AI before the term just came to mean lying chatbot.
Mesa-optimization... that must be when you rail some crushed-up Adderall XRs, boof some modafinil for good measure, and spend the night making sure your kitchen table surface is perfectly flat with no defects abrasions deviations contusions...
and you wrap it off with some linux 3d graphics lib hacking
Not sure! What is CFAR?
Center For Applied Rationality. They hosted "workshops" where people could learn to be more rational. Except their methods weren't really tested. And pretty culty. And reaching the correct conclusions (on topics such as AI doom) was treated as proof of rationality.
Edit: still host, present tense. I had misremembered some news of some other rationality adjacent institution as them shutting down, nope, they are still going strong, offering regular 4 day ~~brainwashing sessions~~ workshops.
Shopify going all in on AI, apparently, and the CEO is having a proper born-again moment. Don’t have a source more concrete than this yet:
https://cyberplace.social/@GossiTheDog/114298302252798365
(and transcript: https://infosec.exchange/@barubary/114298367285112648)
It’s a lot like this:
Using AI effectively is now a fundamental expectation of everyone at Shopify. It’s a tool of all trades today, and will only grow in importance. Frankly, I don’t think it’s feasible to opt out of learning the skill of applying AI in your craft; you are welcome to try, but I want to be honest I cannot see this working out today, and definitely not tomorrow. Stagnation is almost certain, and stagnation is slow-motion failure. If you’re not climbing, you’re sliding.
Extreme sent at 4am energy.
That text is painful to read (I wonder how much of it is slop)... ugh, what is chatgpt doing to the brains of people? (And I've had the bad luck of reading some pretty unhinged pro-AI stuff from management at my employer too, although not as bad as this mail from shopify).
Is there a precedent for this hype? For the extent of damage that it will cause? Most tech industry hype is a waste of resources, but otherwise mostly harmless. Like that time when everyone believed that XML was the holy grail: that was silly, and although we still have to deal with some unfortunate data formats from those days, it passed. There were worse ones, most notably blockchain was almost catastrophic, but most companies hesitated to go all-in and pursued it more on the side, so when that hype faded, they simply buried their involvement and that was that.
But "AI"... it has such potential to create significant and long term damage to the companies adopting it. The slop code alone might haunt them forever, in ways that even the worst excesses of 90s enterprise java couldn't. There's nothing to learn from resulting failure, except "don't use AI".
In this case, given shopify's general behaviour, I won't be sad at all though if they crash and fail.
I also thought 'guess LLMs don't work as an editor'.
And blockchains did massive damage: all the ransomware crime would be impossible if the tech world had not jumped into blockchain as much as it did, creating and maintaining that ecosystem. (It also caused the rise of the techbro people who are now pivoting to AI, so it is connected.) Note that the damage done by BEC is still greater than ransomware, so not cybersecurity advice.
But I get your point. I think a real example would be Facebook's pivot to video, which destroyed companies.
Yes, that's true. Indirectly it costs them all dearly with ransomware. Likewise, I think the overall damage that AI will do to society as a whole will be much, much greater than just rotting some tech companies from the inside (most of which I wouldn't be sad anyway if they went away...).
What I meant is that with blockchain the big tech companies at least didn't willingly destroy their products, their processes, their decision making etc. I.e. they didn't put blockchain into absolutely everything, all the way to MS Notepad. What I find staggering about this hype is the depth of the delusion, the willingness to not just experiment with it but really go all-in.
blockchain targeted libertarian post-goldbug pro-cyberpunk-dystopia fuckheads, llms target management types (you will replace workers with machines!), maybe that's why
yeah, no, I agree that blockchain is a bad example, I just think we shouldn't understate the massive damage it has done. Not just in actually damaged systems but also in the additional cost of everybody now having to worry about this. Same as how AI is not just causing climate change problems by running it: the scraping as well has increased the cost of running a webserver by 50% in load alone (which on a global scale is just horrid). And then there is the forcing of it into everything, the burning of the boats.
tesla: "your car is not your car and we have deep, varied firmware and systems access to it on a permanent basis. we can see you and control you at all times. piss us off and we'll turn off the car that we own."
also tesla: "sorry no you can't return it"
I wonder how often Musk fires employees who explain to him that, no, using Tesla cars for distributed computing is a bad idea and we should stop working on it.