Can the NPU be used for practical purposes other than generative AI? If not, I don't need it.
TechTakes
AI bro here. The reason their shit ain't selling is because it's useless for any actual AI application. AI runs on GPUs; even an AI CPU will be so much slower than what an Nvidia GPU can do. Of course no one buys it. Nvidia's GPUs still sell very well, and not just because of the gamers.
ah yes the only way to make LLMs, a technology built on plagiarism with no known use case, “useful for any actual ai application” is to throw a shitload of money at nvidia. weird how that works!
A lot of these systems are silly because they don't have a lot of RAM, and things don't begin to get interesting with LLMs until you can run 70B models and above.
The Mac Studio has seemed like an affordable way to run 200B+ models, mainly thanks to its unified memory architecture (compare getting 512GB of RAM in a Mac Studio to building a machine with enough GPUs to get there).
If you look around, the industry in general is starting to move towards that sort of design now.
The Framework Desktop, for instance, can be configured with 128GB of RAM (~$2k) and should be good for handling 70B models while maintaining something that looks like efficiency.
You will not train or fine-tune models with these setups (I think you would still benefit from the raw power GPUs offer), but the main sticking point in running local models has been VRAM and how much it costs to get that from AMD / Nvidia.
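(Rough back-of-envelope math on where those RAM numbers come from: weight memory is roughly parameter count times bytes per parameter, so the quantization level decides whether a model fits. The figures below are my own rough assumptions, not anything from a vendor spec.)

```python
# Rough estimate of the memory needed just to hold model weights.
# Assumed bytes-per-parameter: 2.0 for fp16, 1.0 for 8-bit, 0.5 for 4-bit
# quantization. Real usage adds KV cache and runtime overhead on top.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB (weights only, no KV cache)."""
    # (params_billions * 1e9) params * bytes_per_param bytes / 1e9 bytes-per-GB
    return params_billions * bytes_per_param

for params in (70, 200):
    for label, bpp in (("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)):
        print(f"{params}B @ {label}: ~{weight_memory_gb(params, bpp):.0f} GB")

# 70B  @ 4-bit: ~35 GB  -> fits in a 128GB unified-memory box with room to spare
# 70B  @ fp16: ~140 GB  -> does not fit in 128GB
# 200B @ 4-bit: ~100 GB -> why the 512GB Mac Studio configs come up
```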
ah yes the only way to make LLMs, a technology built on plagiarism with no known use case, “not silly” is to throw a shitload of money at Apple or framework or whichever vendor decided to sell pickaxes this time for more RAM. yes, very interesting, thank you, fuck off
Google, Apple, Microsoft, Nvidia and everyone else are hyping up AI. Consumers are not really seeing much benefit from everything being AI-ified. Executives are raving over it, but maybe they don't realize that people outside the C-suite aren't that excited? Having it shoved in our faces constantly, or crammed in wherever companies hope they can save money, is not helping either.
It's FOMO amplified by capitalistic competition. No company wants to be the one left behind. I guarantee Google, Meta and even OpenAI know the limitations of their products. They don't care, they just want to be at least as good as their competitors, because they assume at some point one of them will reach "good enough." And at that moment, if they're not in position to grab market share, they'll lose a once-in-a-generation chance for billions or trillions of dollars in value.
We're the casualties, because the people in the middle - companies with no AI but whose C-suite buys into the hype - demand we use unworkable products because they're too willfully ignorant to know they're not panaceas to whatever is bothering those C-suite execs at the moment.
Quarterly Driven Development
Google, Facebook, etc. have been burning money to gain market share and "good will" from users knowing that when the money faucet stopped or if they found a way to make money, they'd abuse their market share and squeeze their users for profit.
Once interest rates increased and the VC infinite money glitch went away (borrow at low interest rates, gamble on companies, repeat), the masks came off and the screws started turning, hard. Anything they can do to monetize anyone else involved, they're trying.
The same story has been happening with AI, but without the infinite money glitch - just investors desperate for a good bet getting hyped to hell and back. They need adoption and they need businesses to become dependent on their product. Each of these companies is basically billions in the hole on AI.
Users, especially technical users, should know that not only is the product failing to live up to the hype but that embracing AI is basically turning the other cheek for these companies to have their way with your wallet even faster and more aggressively than they already are with everything else they've given away.
One of the mistakes they made with AI was introducing it before it was ready (I’m making a generous assumption by suggesting that “ready” is even possible). It will be extremely difficult for any AI product to shake the reputation that AI is half-baked and makes absurd, nonsensical mistakes.
This is a great example of capitalism working against itself. Investors want a return on their investment now, and advertisers/salespeople made unrealistic claims. AI simply isn’t ready for prime time. Now they’ll be fighting a bad reputation for years. Because of the situation tech companies created for themselves, getting users to trust AI will be an uphill battle.
(I’m making a generous assumption by suggesting that “ready” is even possible)
It was ready for some specific purposes, but it is being jammed into everything. The problem is they are marketing it as AGI when it is still in the "random fun, but not expected to be accurate" phase.
The current marketing for AI won't be met by anything that actually exists in the foreseeable future. The desired complexity isn't going to exist in silicon at a reasonable scale.
Apple Intelligence and the first versions of Gemini are the perfect examples of this.
iOS still doesn’t do what was sold in the ads, almost a full year later.
capitalism working against itself
More like: capitalism reaching its own logical conclusion
AI is going to be this era's Betamax, HD DVD, or 3D TV glasses. It doesn't do what was promised and nobody gives a shit.
Betamax actually found use in television broadcast until the switch to HDTV occurred in 2009.
The later digital variants of Beta weren't retired by Sony until ~2016.
I had no clue that they did digital Betamax...
That would make sense though...
There was at one point an HD VHS (D-VHS) as well; it was essentially a 1080p MPEG stream on a VHS tape.
Betamax had better image and sound but was limited by running time, and then VHS doubled down with even lower quality to increase how many hours would fit on a tape. VHS was simply more convenient without being that much lower in quality at normal tape lengths.
HD DVD was comparable to Blu-ray and just happened to lose out because the industry wouldn't allow two similar technologies to exist at the same time.
Neither failed to do what it promised. They were both perfectly fine technologies that lost in a competition that only allows a single winner.
Blu-ray was slightly better, if I recall correctly. With the rise of higher-definition televisions, people wanted to max out the quality possible, even if most people (still) can't tell the difference.
Blu-ray also had the advantage of PS3 supporting the format without the need for an external disc drive.
@philycheeze @xkbx yes, I think Microslop's fumble of selling the HD DVD drive only as an external add-on really hindered the format
Dude don’t throw Betamax in there, that was a better product than the VHS. AI is just ass.
The only issue here is that there is no really useful, ubiquitous feature yet. Once that comes, people will not care about any security issues or any other argument against it. It's coming for sure. Maybe they need the Recall feature to train on right now; maybe at some point they won't anymore.
At first I was skeptical of the guy who told me, but they seemed rich and therefore trustworthy.
You have a source for that? 😁
No thanks. I’m perfectly capable of coming up with incorrect answers on my own.
you're right tho
No!
Yes!
Ooh!
Even non-tech people I talk to know AI is bad because the companies are pushing it so hard. They intuit that if the product were good, they wouldn't be giving it away, much less begging you to use it.
You're right - and even if users are not conscious of this observation, many are subconsciously behaving in accordance with it. Having AI shoved into everything is off-putting.