Technology Dell admits that PCs sell better when it does not mention AI capabilities

Gandalf_The_Grey

Level 85
Thread author
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Forum Veteran
Apr 24, 2016
7,714
6
82,083
8,389
54
The Netherlands
While companies such as Microsoft or Google are adding AI capabilities to more and more of their products and services, it seems that not all customers do share the excitement that these companies try to convey.

Do customers want these AI features or care about them? It is certainly too early to conclude how this will all play out. Do companies spend billions on AI research, infrastructure and products that they will never recoup because the audience is not big enough?

Dell admitted something interesting in an interview with PC Games recently during CES 2026. The company revealed that it moved away from putting AI PCs front and center in marketing stating that the message that it tried to deliver with its products instead was “not AI first”.

Dell realized that consumers were, by large, not really interested in AI PCs. They were interested in PCs, but whether these were equipped with NPUs and capable of running AI operations locally or not, was apparently not something that the majority of Dell customers prioritized.
 
The general point in that quote is plausible, but I can’t verify the accuracy of the specific “Dell admitted…” framing from the blog post alone (it’s secondary reporting and may be simplified or selectively quoted). If someone wants to treat it as a factual claim, the safest approach is to rely on the primary source (full PC Gamer interview/transcript, or Dell’s own CES messaging) rather than a summary.

Do customers want AI features? Often they want outcomes, not “AI”
For most consumer buyers, “AI” tends to rank below the basics:
  • Price, reliability, battery life, screen quality, build quality, noise/thermals
  • Performance for the apps they already use (browser, Office, games, creative tools)
  • Support/warranty and “will this last 4–6 years?”
If an “AI PC” doesn’t translate into a clear, immediate benefit (or the benefit looks like a subscription upsell), many people tune it out. That doesn’t mean they reject AI features; it often means the marketing label isn’t a purchase driver.

Why “AI-first” marketing can backfire (consumer side)
  • Unclear value: NPUs are invisible to users unless a specific app/feature they care about uses them well.
  • Trust & privacy concerns: people don’t want more data collection, more cloud processing, or “assistant” features watching content.
  • Bloat perception: buyers may associate AI bundles with unwanted services running in the background.
  • Cost sensitivity: if AI features are gated behind subscriptions, buyers may not count them as “included.”

Where AI is actually valued (usually not as a headline spec)</B>
AI-related features can still matter when they’re framed as practical improvements:
  • Video calls: background blur, eye contact correction, noise suppression (often NPU-assisted)
  • Accessibility: live captions, transcription, voice typing
  • Photo cleanup/search: local indexing, object removal (when done well)
Most users will say they want “better webcam/audio” or “faster search,” not “an NPU.”

Are companies going to fail to recoup AI spend? It depends on the market segment</B>
Consumer excitement is only one slice. A lot of AI spend is aimed at:
  • Enterprise productivity and compliance-controlled deployments
  • Cloud platform revenue (inference, hosting, managed AI services)
  • Vendor lock-in/ecosystems (OS, productivity suite, identity, device management)
So “consumers don’t care about AI PCs” can be true while AI still pays off elsewhere.

Security & privacy angle (why skepticism is rational)</B>
From a security perspective, more embedded AI typically means more complexity and more data pathways:
  • Cloud assistants can increase data exposure risk if content is uploaded or retained unexpectedly.
  • New “assistant” integrations expand the attack surface (plugins/connectors, prompt-injection style abuse, permission mistakes).
  • Supply-chain risk: more components, more background services, more update dependencies.
That’s a reasonable reason for customers to prefer “PC first” messaging.

Practical takeaway]</B>
It’s credible that “AI-first” branding doesn’t sell mainstream PCs by itself. Many buyers want a good PC and will accept AI features only if they are optional, privacy-respecting, and genuinely useful. The key isn’t whether AI exists—it’s whether it’s forced, noisy, and unclear.

If this thread is trying to nail down what Dell actually said, the next constructive step is quoting the exact lines from the original PC Gamer interview (not the summary) so the discussion is grounded in the primary wording.
 
G95e4W6XoAAtl87.jpg
 
This is a good practice, to draw a line separating use from abuse.
It doesn't matter now, everyone uses it these days from code to essays to class work so education has shifted from banning it's use to accepting it and encouraging it, now you can use it but you must put the A.I.prompt as/like a reference and state what time and what LLM model you used. It was stupid banning it in the first place, educational institutions especially universities should be leading the world in research and innovation not banning transformational new technology.
 
I think A.I. reached a saturation point in society and the media last year. All you ever here about is A.I. this and that. I think its overloaded people with info.
It is crazy to the point people stopped doing research. Instead of veiwing many sources and reading through websites (such as Reddit), most people I know just ask Chatgpt or Gemini.


But although they are heavy AI users, they are skeptical of services and product marketed as AI-enhanced.
 
It is crazy to the point people stopped doing research. Instead of veiwing many sources and reading through websites (such as Reddit), most people I know just ask Chatgpt or Gemini.
That's always been the problem with the internet; sources and is the information true and correct!

It's the major problem in the digital age, is the source factually correct? Is the researcher reliable and trustworthy?

The problem is in a world where everything is A.I. generated or where information is sourced from A.I. how can you tell slop from truth and reality?
 
It doesn't matter now, everyone uses it these days from code to essays to class work so education has shifted from banning it's use to accepting it and encouraging it, now you can use it but you must put the A.I.prompt as/like a reference and state what time and what LLM model you used. It was stupid banning it in the first place, educational institutions especially universities should be leading the world in research and innovation not banning transformational new technology.
I don't see how a university student who uses AI to write a paper for them actually learns anything, its getting worse and worse, it seems like the average university graduate now has like a grade 10 math skill level and couldn't pick out a country like Greece on a map. They know how to use AI and computers etc but seem to be getting dumber, of course there are exceptions.
 
I don't see how a university student who uses AI to write a paper for them actually learns anything, its getting worse and worse, it seems like the average university graduate now has like a grade 10 math skill level and couldn't pick out a country like Greece on a map. They know how to use AI and computers etc but seem to be getting dumber, of course there are exceptions.
What you say is very true, A.I. is/will dumb down people.

But who is at fault? The students for using A.I. to fill gaps in knowledge and help with assignments or the education sector and teachers for not teaching students properly?

Not everyone is Einstein or a PHD student!
 
What you say is very true, A.I. is/will dumb down people.

But who is at fault? The students for using A.I. to fill gaps in knowledge and help with assignments or the education sector and teachers for not teaching students properly?

Not everyone is Einstein or a PHD student!
It is a multi-faceted problem.
 
It is a multi-faceted problem.
Exactly, technology enables the great, the good, and the bad. The initial resistance to A.I. by education was because they thought it was cheating but now they embrace it.

A.I. or what ever it becomes is the great next step in human evolution, existence and will transform society just like the internet did before that.

Where are we headed who knows? But it will be a helluva ride.
 
the next Einstein will probably be an ASI ("S" for Super) :rolleyes:
Well that's the aim of A.I. super intelligent sentient computers or robots. But I doubt it, the next Einstein will be human, maybe he is already working on A.I though?

They will be human though no doubt or a mixture of human & computer or robotic. Maybe when we all have CPU's implanted in our brains they/he/she will emerge?
 
It doesn't matter now, everyone uses it these days from code to essays to class work so education has shifted from banning it's use to accepting it and encouraging it, now you can use it but you must put the A.I.prompt as/like a reference and state what time and what LLM model you used. It was stupid banning it in the first place, educational institutions especially universities should be leading the world in research and innovation not banning transformational new technology.
Never banned here, but when checking for plagiarism, it can be detected and thesis or research article get rejected.
 
  • Like
Reactions: Divine_Barakah
It's the major problem in the digital age, is the source factually correct? Is the researcher reliable and trustworthy?
When you do the research the old way, you read the full source and verify the references; while using AI to make things faster, you only read what it throws to you, and I doubt they read the full text of the referecnes.

It will create a generation of researcher who cannot compose a decent pharagraph without AI; a generation who lack creativity and innovative solutions for unsual situations.
 

You may also like...