Rethinking Security and Privacy in the AI Age
Trust has quietly become the foundational infrastructure of artificial intelligence. As AI systems move from novelty to necessity, shaping how people work, communicate and manage their homes, confidence in how these systems behave is no longer optional. It now determines adoption at scale. This reality framed Samsung Electronics Co., Ltd.’s Tech Forum session at CES 2026, titled “In Tech We Trust? Rethinking Security and Privacy in the AI Age,” hosted at The Wynn in Las Vegas.
The panel brought together senior voices from technology, strategy and ethics to explore how trust is formed, maintained and sometimes lost as AI becomes embedded in everyday life. The discussion acknowledged a simple truth: as intelligence becomes more invisible, operating quietly across phones, televisions and appliances, the need for predictability, security and user control becomes more visible than ever.
When Intelligence Becomes Invisible
AI is no longer confined to screens and dashboards. It anticipates needs, curates routines and operates autonomously across connected devices. That shift raises a fundamental question: how can users trust systems they rarely see and do not always understand?
Panellists emphasised that trust cannot be asserted through marketing or abstract assurances. It must be demonstrated through consistent behaviour that users can observe and influence. The session highlighted how people are increasingly aware of where their data lives, how decisions are made and whether an AI system is acting on their behalf or beyond their control.
Allie K. Miller, CEO of Open Machine, framed the challenge clearly, saying: “When it comes to AI, users are looking for transparency and control. They want to be leaders in their own personalized experiences — to understand whether an AI model is running locally or in the cloud, to know their data is secure and to clearly see what is powered by AI and what is not. That level of visibility builds confidence. On the provider side, there is a responsibility to show up for users by designing personalized experiences around the core components of trust — clarity, security and accountability.”
Trust by Design Rather Than Trust by Promise
Samsung used the session to outline its trust-by-design philosophy, an approach that embeds security, transparency and user agency directly into AI systems rather than layering them on later. The company stressed that trust is not a feature that can be added at the end of development. It must be engineered from the outset.
Central to this approach is clarity around how AI operates. Users want to know when intelligence is running on the device and when it relies on the cloud. Samsung highlighted how on-device AI allows personal data to remain local whenever possible, while cloud-based intelligence is used selectively when greater processing power or speed is required. This hybrid model gives users flexibility without surrendering privacy by default.
The discussion reflected a broader industry shift. Research from organisations such as the OECD and the World Economic Forum has consistently shown that user trust increases when AI systems offer explainability, clear consent mechanisms and predictable outcomes. Samsung’s emphasis on visible control aligns with these findings, positioning trust as a measurable design outcome rather than an abstract value.
Security Built for an AI-Driven World
As AI becomes distributed across an ecosystem of devices, security challenges multiply. A vulnerability in one product can expose an entire network. The panel explored how traditional, device-by-device security models are no longer sufficient for an AI-driven world.
Samsung highlighted the evolution of its Knox security platform, which now protects billions of devices from the chipset up. More significantly, the company discussed Knox Matrix, a cross-device security framework designed to allow products to authenticate, monitor and protect one another continuously.
Shin Baik, AI Platform Center Group Head at Samsung Electronics, explained the significance of this shift, stating: “Trust in AI starts with security that’s proven, not promised. For more than a decade, Samsung Knox has provided a deeply embedded security platform designed to protect sensitive data at every layer. But trust goes beyond a single device — it requires an ecosystem that protects itself. With Knox, devices continuously authenticate and monitor one another, so each device acts as a shield for the rest, creating a resilient, secure environment users can rely on.”
This ecosystem-based approach reflects emerging best practice across critical infrastructure sectors, including automotive, healthcare and smart cities, where distributed intelligence demands collective resilience rather than isolated defences.
Predictability as a Measure of Trust
Beyond technical security, predictability emerged as a defining theme. Users are more likely to trust AI systems that behave consistently and signal their actions clearly. Black-box decision-making erodes confidence, even when outcomes are positive.
Shin Baik emphasised that trust grows when AI behaves securely and predictably across devices, arguing that users need visible signals of control rather than opaque automation. This includes clear indicators of when AI is active, what data it is using and how decisions can be overridden or adjusted.
Zack Kass, Global AI Advisor at ZKAI Advisory and former Head of Go-to-Market at OpenAI, addressed the broader risk landscape. While acknowledging concerns around misinformation and misuse, he offered a measured perspective, noting: “For every risk, there is also a countermeasure, and technology itself will play a critical role in mitigating AI’s downsides.”
This view reflects growing investment in AI safety tools, including model auditing, watermarking and real-time monitoring, which aim to make intelligent systems more accountable without stifling innovation.
Convenience, Not Trust, Drives Consumer Behaviour
While trust is essential, the panel also challenged the assumption that it is the primary driver of purchasing decisions. Amy Webb, CEO of the Future Today Strategy Group, offered a pragmatic assessment of consumer behaviour.
She observed: “I don’t think they’re making decisions based on trust alone. People aren’t paying for trust. They don’t buy things because of trust. They buy things because of convenience. So if the AI piece of this hooks people in, it makes their lives easier and more convenient.”
This insight underscores a delicate balance for technology providers. AI must deliver tangible convenience while quietly meeting high standards of security and privacy. Trust may not close the sale, but its absence can quickly undermine long-term adoption and brand loyalty.
Collaboration as a Trust Multiplier
The discussion also highlighted the role of cross-industry collaboration in building trust at scale. Samsung pointed to partnerships with organisations such as Google and Microsoft as critical to advancing shared security research, interoperability and ecosystem-wide protection.
Such alliances reflect a recognition that trust in AI cannot be built in isolation. Shared standards, open frameworks and coordinated responses to emerging threats are increasingly necessary as intelligent systems cross organisational and national boundaries.
Allie Miller reinforced the importance of transparency across these collaborations, particularly for users. Clear labelling of AI-powered features, visibility into data usage and straightforward explanations of system behaviour help demystify complex technology and reduce cognitive friction.
Designing for Long Term Confidence
As AI becomes increasingly woven into daily life, the session concluded with a clear message: the technologies that earn long-term trust will be those that prioritise security, transparency and meaningful user choice from the start.
Rather than positioning trust as a marketing promise, Samsung and the panellists framed it as an operational discipline. Predictable behaviour, proven security and visible control form the bedrock of confidence in an age of invisible intelligence. As AI continues to reshape industries and households alike, trust by design may prove to be the most valuable feature of all.