Bro, just add it to the pile of rubbish over there next to the 3D movies and curved TVs
Predictable outcome, common tech company L.
The other 16% do not know what AI is or try to sell it. A combination of both is possible. And likely.
I'm willing to pay extra for software that isn't
Okay, but hear me out. What if the OS got way worse, and then I told you that paying me for the AI feature would restore it to a near-baseline level of original performance? What then, eh?
I already moved to Linux. Windows is basically doing this already.
One word. Linux.
Who in the heck are the 16%?
I would if the hardware was powerful enough to do interesting or useful things, and there was software that did interesting or useful things. Like, I'd rather run an AI model locally to remove backgrounds from images or upscale them than send images to Adobe servers (this is just an example; I don't use Adobe products and don't know if that's what Adobe does). I'd also rather do OCR locally and quickly than send it to a server. Same with translations. There are a lot of use cases for "AI" models.
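As a concrete sketch of the local-OCR case, assuming the Tesseract binary plus the pytesseract and Pillow packages are installed; the file name is just a placeholder:

```python
# Local OCR without sending anything to a server.
# Requires the Tesseract binary plus: pip install pytesseract pillow
from PIL import Image
import pytesseract

# Placeholder path; point this at any screenshot or scanned page.
image = Image.open("scanned_page.png")

# Runs entirely on the local machine; no network calls involved.
text = pytesseract.image_to_string(image)
print(text)
```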
I'm interested in hardware that can better run local models. Right now the best bet is a GPU, but I'd be interested in a laptop with dedicated AI chips that would work with PyTorch. I'm a novice, but I know it takes forever on my current laptop.
Not interested in running copilot better though.
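Not sure it answers the laptop question, but here's a rough sketch of how I'd check what stock PyTorch actually sees and why a CPU-only laptop feels slow. The matrix size is arbitrary, and most "AI" NPUs need vendor-specific backends that aren't shown here:

```python
# Quick check of what PyTorch can actually use on this machine,
# plus a crude matmul timing to see why a laptop CPU "takes forever".
# pip install torch
import time
import torch

if torch.cuda.is_available():
    device = "cuda"   # discrete NVIDIA GPU
elif getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
    device = "mps"    # Apple silicon GPU
else:
    device = "cpu"    # most laptop NPUs aren't visible here without vendor-specific backends

x = torch.randn(4096, 4096, device=device)
start = time.perf_counter()
y = x @ x             # one large matrix multiply as a crude benchmark
if device == "cuda":
    torch.cuda.synchronize()   # wait for the GPU to actually finish before timing
print(f"{device}: {time.perf_counter() - start:.3f} s")
```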
Maybe people doing AI development who want the option of running local models.
But baking AI into all consumer hardware is dumb. Very few people want it, and SaaS AI is already a thing. To the degree SaaS AI doesn't offer the privacy of local AI, "local" AI on networked devices you don't fully control offers even less. So it makes no sense for people who value convenience, and it offers no value to people who want privacy. It only offers value to people doing software development who need more playground options, and I can go buy a graphics card myself, thank you very much.
- The ones who have investments in AI
- The ones who listen to the marketing
- The ones who are big Weird Al fans
- The ones who didn't understand the question
- The nerds who care about privacy but want chatbots or better autocomplete
Those Weird Al fans will be very disappointed
I would pay for Weird-Al enhanced PC hardware.
A big letdown for me is that, with some rare exceptions, those extra AI features are useless outside of AI. Some NPUs are essentially DSPs and could easily run OpenCL code; others are designed to handle only the reduced-precision number formats used for machine learning rather than normal floating point; and some are just CPU extensions that add even bigger vector multipliers for select datatypes (AMX).
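To make the datatype point concrete, a hedged PyTorch sketch (sizes are arbitrary): bfloat16 is the kind of ML-only format these units are built around, and the error against float32 shows what gets traded away:

```python
# Illustration of the "ML-only" number formats those accelerators target:
# bfloat16 keeps float32's range but throws away most of the precision.
# pip install torch
import torch

x = torch.randn(1024, 1024, dtype=torch.float32)
w = torch.randn(1024, 1024, dtype=torch.float32)

full = x @ w
low  = (x.bfloat16() @ w.bfloat16()).float()   # the kind of math NPUs/AMX are built for

print("bytes per element:", x.element_size(), "vs", x.bfloat16().element_size())
print("max relative error:", ((full - low).abs() / full.abs().clamp(min=1e-6)).max().item())
```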
I am generally unwilling to pay extra for features I don't need and didn't ask for.
Raytracing is something I'd pay for even if I didn't ask for it, assuming it meaningfully impacts quality and doesn't demand outlandish prices.
And they'd need to put it in unasked and cooperate with devs, or else it won't catch on quickly enough.
Remember Nvidia Ansel?
As with any proprietary hardware on a GPU, it all comes down to third-party software support, and classically, if the market isn't there, it's not supported.
Assuming there's no uptake after 3-4 cycles, I'd say the tech is either not mature enough, too expensive for too little result, or (as you said) there's generally no interest in it.
Maybe it needs a bit of maturing and a re-introduction at a later point.
I can't tell how good any of this stuff is, because none of the language they're using to describe performance makes sense in comparison with running AI models on a GPU. How big a model can this stuff run, and how does it compare to the graphics cards people use for AI now?
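For comparison, the back-of-the-envelope math people use for GPUs is just parameter count times bytes per parameter. The 7B figure below is only an example size, not any specific product's model:

```python
# Rough answer to "how big a model can this run":
# weights alone need roughly (parameter count) x (bytes per parameter).
def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    return params * bytes_per_param / 1e9

for name, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"7B model @ {name}: ~{weight_memory_gb(7e9, bpp):.1f} GB just for weights")
# fp16 ~14 GB, int8 ~7 GB, int4 ~3.5 GB -- before KV cache and activations,
# which is why everything gets compared to GPU VRAM (e.g. 8/12/24 GB cards).
```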