The Ad-Funded Surveillance State You Agreed To

Feb 20, 2026, 10:16 PM
Always-on AI: convenience or surveillance?

on january 16, openai announced chatgpt would start showing ads. by february 9, they were live. eight months earlier, openai spent $6.5 billion to acquire jony ive’s hardware startup io to build a pocket-sized, screenless device with cameras and microphones, designed to be always present.

but this isn’t about openai. they’re just the latest. the problem is structural.

every major company building ai assistants is funded by advertising, directly or through its parent. and every one of them is building hardware designed to see and hear everything around you, all day, every day. these two facts are on a collision course.

the wake word is dead

every mainstream voice assistant works behind a gate. you say “hey siri” or “ok google,” and only then does the system listen. everything before the wake word is theoretically discarded.
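the gate itself is simple enough to sketch. here’s a toy version of the loop, in python — the names and trigger phrase are illustrative, not any vendor’s actual pipeline:

```python
# toy sketch of a wake-word gate (illustrative, not any vendor's real pipeline).
# audio arrives here as transcribed utterances; everything spoken before the
# wake word is simply dropped.

WAKE_WORDS = ("hey assistant",)  # hypothetical trigger phrase

def gate(utterances):
    """yield only the speech that follows a wake word; discard the rest."""
    listening = False
    for text in utterances:
        lowered = text.lower().strip()
        if any(lowered.startswith(w) for w in WAKE_WORDS):
            listening = True
            continue  # the trigger itself carries no content
        if listening:
            yield text
            listening = False  # the gate closes again after one command

stream = [
    "are we out of eggs again?",      # discarded: no trigger yet
    "hey assistant",                  # opens the gate
    "add eggs to the shopping list",  # this one gets through
    "i'm thinking frittata tonight",  # discarded: gate is closed again
]
print(list(gate(stream)))
```

note what falls on the floor: the egg remark and the dinner plan — exactly the utterances a useful assistant would have wanted.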

this was a reasonable design in 2014. it is a dead end for where ai assistance needs to go.

here’s what happens in a real kitchen at 6:30 am:

“are we out of eggs again? i’m thinking frittata tonight but we also need to—oh wait, did the school email about thursday? anyway, if we don’t have eggs, i’ll get them from target.”

nobody is going to preface that with a wake word. the information is woven into natural speech between two flustered people getting ready to leave the house. the moment you require a trigger, you lose the most valuable interactions—the ones that happen while people are living their lives.

you cannot build proactive assistance behind a wake word. the ai has to be present in the room, continuously, accumulating context over days and weeks, to build the understanding that makes proactive help possible.
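what “accumulating context” means concretely can be sketched as a trivial spotter running over ambient speech. this is a deliberately crude toy — a real system would use a language model, not string matching — but the structural point holds: the assistant only learns that eggs are running low if it hears the untriggered speech.

```python
from collections import defaultdict

# toy sketch of ambient context accumulation (hypothetical; a real assistant
# would use a language model, not keyword matching). the structural point:
# useful context comes from speech nobody prefixed with a trigger.

def accumulate(context, utterance):
    """fold one overheard utterance into a running context store."""
    text = utterance.lower()
    if "out of" in text:
        item = text.split("out of", 1)[1].split("?")[0].split("again")[0].strip()
        context["shopping"].append(item)
    if "email about" in text:
        context["follow_ups"].append(text)
    return context

context = defaultdict(list)
for line in [
    "are we out of eggs again?",
    "did the school email about thursday?",
]:
    accumulate(context, line)

print(dict(context))
```

run it and the store holds a shopping item and a follow-up — both extracted from speech that never contained a wake word.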

this is where every major ai company is heading. vision, presence detection, wearables, multi-room awareness. the next generation of ai assistants will hear and see everything. some will be on your face or in your ears all day.

the question is not whether always-on ai will happen. it’s who controls the data it collects. and right now, the answer is: advertising companies.

policy is a promise. architecture is a guarantee.

here’s where the industry’s response gets predictable. “we encrypt the data in transit.” “we delete it after processing.” “we anonymize everything.” “read our privacy policy.”

with cloud processing, every user is trusting:

  • the company’s current privacy policy
  • every employee with production access
  • every third-party vendor in the processing pipeline
  • every government that can issue a subpoena
  • every advertiser partnership that hasn’t been announced yet
  • the company’s future privacy policy

openai’s own ad announcement includes this language: “openai keeps conversations with chatgpt private from advertisers, and never sells data to advertisers.”

it sounds reassuring. but google scanned every gmail message for ad targeting for thirteen years before quietly stopping in 2017. policies change. architectures don’t.

policy is a promise. architecture is a guarantee.

when a device processes data locally, the data physically cannot leave the network. there is no api endpoint to call. there is no telemetry pipeline. the inference hardware sits inside the device, on your network.

your email is sensitive. a continuous audio and visual feed of your home is something else entirely. it captures arguments, breakdowns, medical conversations, financial discussions, intimate moments—the completely unguarded version of people that exists only when they believe nobody is watching.

the way out

local on-device inference is the only architecture that doesn’t require trusting a company with a surveillance feed of your life.

this is why apple’s privacy push matters, even if they’re also an ad company now (they rebranded search ads to “apple ads” in 2025, reflecting the expanding scope of their advertising business). this is why the ggml/llama.cpp work matters—local ai that runs on your machine, without sending your data to the cloud.

the ad-funded ai industry will tell you their policies protect you. they’ll tell you their intentions are good. they’ll tell you to trust them.

don’t.

trust architecture. trust the stuff that physically cannot leak your data because the data never leaves your device.

the future of ai isn’t about which company you trust. it’s about which architectures you trust. and right now, the only one worth trusting is the one that runs locally, on your hardware, with zero network dependency.
