11 January 2026

Afari and Geely Drive a New Era of AI Native Autonomous Mobility

Afari and Geely chose the Consumer Electronics Show 2026 to unveil what they describe as a major step change in intelligent driving. The joint announcement of G-ASD, short for Geely Afari Smart Driving, placed the two companies firmly at the centre of the fast-moving global race toward higher levels of vehicle autonomy. CES has become the default stage for automotive technology launches, yet the debut of G-ASD carried a different weight because it was framed not as a single feature or software update, but as a full-stack intelligent driving system designed around modern artificial intelligence models.

Unlike many driver assistance platforms that grow incrementally, G-ASD has been designed from the outset as a model-driven architecture. That means the entire system, from perception to planning and control, is shaped by large-scale AI models rather than rule-based logic. For Geely, which owns brands such as Zeekr and Lynk & Co, the system represents a practical route to scaling advanced autonomy across multiple vehicle platforms. For Afari, it marks the formal commercialisation of years of research into foundation models for mobility.

What Makes G-ASD Different

At the heart of G-ASD is an end-to-end model architecture. Instead of breaking driving tasks into separate software modules, Afari has built a unified learning system that observes the world, understands context, reasons about what should happen next, and then executes actions. This approach mirrors the way modern generative AI models work in language and vision, applied here to the physical act of driving.
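
To make the contrast with modular pipelines concrete, the sketch below shows what a single end-to-end driving policy can look like in code: one learned network that maps fused sensor features straight to a short planned trajectory. It is a minimal illustration in PyTorch, and every name and dimension in it is hypothetical rather than taken from Afari's actual system.

```python
# Minimal, hypothetical sketch of an end-to-end driving policy in PyTorch.
# All class names, feature sizes and the trajectory format are illustrative.
import torch
import torch.nn as nn


class EndToEndDrivingPolicy(nn.Module):
    """One learned model: fused sensor features in, planned trajectory out."""

    def __init__(self, sensor_dim: int = 512, hidden_dim: int = 256, horizon: int = 10):
        super().__init__()
        self.horizon = horizon
        # A shared backbone stands in for the separate perception, prediction
        # and planning modules found in traditional modular stacks.
        self.backbone = nn.Sequential(
            nn.Linear(sensor_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # The head emits (x, y, heading) waypoints for the next `horizon` steps.
        self.trajectory_head = nn.Linear(hidden_dim, horizon * 3)

    def forward(self, sensor_features: torch.Tensor) -> torch.Tensor:
        context = self.backbone(sensor_features)
        waypoints = self.trajectory_head(context)
        return waypoints.view(-1, self.horizon, 3)


# One batch of fused sensor features produces a ten-step driving plan.
policy = EndToEndDrivingPolicy()
plan = policy(torch.randn(1, 512))
print(plan.shape)  # torch.Size([1, 10, 3])
```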

The system blends several of the most important strands of contemporary AI research. Multimodal foundation models process data from cameras, radar, lidar and other sensors in a single shared representation. Vision Language Models allow the system to attach semantic meaning to what it sees. Vision Language Action models connect perception and understanding directly to driving behaviour. World models simulate what is likely to happen next, while reinforcement learning continuously refines performance based on outcomes.
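
The first of those strands, fusing several sensor modalities into one shared representation, can be illustrated with a small sketch. The module below projects camera, radar and lidar features into a common space and lets them exchange information through attention; the names, dimensions and layer counts are assumptions made for illustration, not details of G-ASD.

```python
# Hypothetical sketch of multimodal sensor fusion into a shared representation.
# Module names, feature sizes and layer counts are assumptions, not G-ASD details.
import torch
import torch.nn as nn


class MultimodalFusion(nn.Module):
    """Projects per-sensor features into one token space and fuses them."""

    def __init__(self, dims: dict, shared_dim: int = 256):
        super().__init__()
        # One learned projection per sensor modality.
        self.projections = nn.ModuleDict(
            {name: nn.Linear(dim, shared_dim) for name, dim in dims.items()}
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=shared_dim, nhead=4, batch_first=True
        )
        # Self-attention lets the modalities exchange information.
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

    def forward(self, features: dict) -> torch.Tensor:
        # Stack one token per modality: (batch, num_modalities, shared_dim).
        tokens = torch.stack(
            [self.projections[name](feat) for name, feat in features.items()], dim=1
        )
        return self.encoder(tokens)


fusion = MultimodalFusion({"camera": 768, "radar": 64, "lidar": 128})
shared = fusion({
    "camera": torch.randn(1, 768),
    "radar": torch.randn(1, 64),
    "lidar": torch.randn(1, 128),
})
print(shared.shape)  # torch.Size([1, 3, 256])
```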

The Idea of Model Penetration

Afari has introduced a concept it calls model penetration to describe how deeply AI models are embedded within an intelligent driving system. In simple terms, it measures how much of the driving task is handled by learned models rather than hand-coded rules. High model penetration means that more of the system can adapt, generalise and improve over time as data flows in.
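
One loose way to picture model penetration is as the share of driving sub-tasks handled by learned models rather than hand-coded rules. The toy calculation below makes that idea concrete; the task list and the split between learned and rule-based components are invented for illustration and are not Afari's published methodology.

```python
# Toy illustration of "model penetration": the share of driving sub-tasks handled
# by learned models rather than hand-coded rules. The task list is invented.

DRIVING_SUBTASKS = {
    "object_detection": "learned",
    "lane_topology": "learned",
    "behaviour_prediction": "learned",
    "trajectory_planning": "learned",
    "emergency_fallback": "rule_based",
    "actuator_limits": "rule_based",
}


def model_penetration(subtasks: dict) -> float:
    """Fraction of sub-tasks implemented as learned models, between 0 and 1."""
    learned = sum(1 for kind in subtasks.values() if kind == "learned")
    return learned / len(subtasks)


print(f"Model penetration: {model_penetration(DRIVING_SUBTASKS):.0%}")  # 67%
```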

Within G-ASD, model penetration runs through perception, prediction, decision making and vehicle control. This allows the platform to cope with the messy realities of real-world driving, from unusual road layouts to unpredictable human behaviour. It also means the same core system can be deployed across different vehicle types and markets with far less manual tuning.

A Foundation Model for the Road

The term foundation model has become central to the autonomous driving debate. These are large neural networks, trained on enormous and diverse datasets, which can be adapted to many tasks. In the context of G-ASD, the foundation model acts as the brain of the vehicle, learning not only how to recognise objects, but also how traffic flows, how pedestrians behave and how roads are structured.

By basing G-ASD on a foundation model, Afari and Geely aim to create a platform that improves as it encounters new environments. Data from hundreds of thousands of vehicles feeds back into training pipelines, allowing the system to evolve. Over time, this should reduce the gap between assisted driving and higher levels of autonomy, since the same core intelligence is reused and refined.
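
At a very high level, that feedback loop resembles a data flywheel: vehicles log difficult scenarios, those scenarios join the training set, and a new model version is trained and released. The sketch below is a deliberately simplified illustration of the idea; the function names and fleet records are placeholders, not part of any real pipeline.

```python
# Deliberately simplified data-flywheel loop: fleet vehicles log hard cases,
# those cases are added to the training data, and the shared model is retrained.
# Function names and fleet records are placeholders, not a real pipeline.

def collect_fleet_events(vehicles: list) -> list:
    """Gather logged scenarios where the driving system struggled or was overridden."""
    return [event for vehicle in vehicles for event in vehicle.get("hard_cases", [])]


def retrain(model_version: int, new_samples: list) -> int:
    """Stand-in for a training job; returns the next model version."""
    print(f"Retraining v{model_version} on {len(new_samples)} new scenarios")
    return model_version + 1


fleet = [
    {"id": "vehicle-001", "hard_cases": ["unprotected_turn", "debris_on_road"]},
    {"id": "vehicle-042", "hard_cases": ["heavy_rain_merge"]},
]

version = 1
new_data = collect_fleet_events(fleet)
if new_data:
    version = retrain(version, new_data)  # fleet data drives the next release
```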

From Research to Real Roads

What sets G-ASD apart from many laboratory projects is its current level of deployment. Afari and Geely confirmed that the system is already in production across 16 vehicle models sold under the Zeekr and Lynk & Co brands. More than 300,000 vehicles on the road today are equipped with the technology, providing a vast real-world testing ground.

This scale matters because autonomous driving systems only mature when exposed to millions of kilometres of varied driving. Urban congestion, rural roads, extreme weather and regional driving styles all challenge AI models in different ways. With such a large installed base, G-ASD gains access to a continuous stream of learning data that few competitors can match.

Safer and Smoother by Design

Afari has positioned G-ASD as an AI-native driving experience rather than a bundle of driver aids. By fusing perception, language and action models, the system is designed to anticipate hazards earlier and respond more smoothly. Instead of reacting late to a vehicle cutting in or a pedestrian stepping off the kerb, the world model predicts likely outcomes and adjusts behaviour in advance.

This predictive capability is one of the main advantages of combining reinforcement learning with world models. The system can simulate thousands of possible futures and choose the safest and most efficient path. Over time, as it learns from both successful and problematic scenarios, its behaviour becomes more consistent and more human-like.
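
A simple way to picture this simulate-and-select behaviour is the sketch below: sample many candidate plans, roll each one forward through a predictive model, score the outcomes, and keep the best. The dynamics and the cost function here are crude stand-ins chosen only to illustrate the pattern, not anything Afari has disclosed.

```python
# Crude sketch of world-model action selection: sample candidate plans, simulate
# each one, score the outcomes, keep the best. Dynamics and costs are stand-ins.
import numpy as np

rng = np.random.default_rng(0)


def world_model(state: np.ndarray, plan: np.ndarray) -> np.ndarray:
    """Stand-in dynamics: a real system would use a learned predictive model."""
    return state + np.cumsum(plan, axis=0)


def cost(rollout: np.ndarray, obstacle: np.ndarray) -> float:
    """Penalise passing close to the obstacle, reward forward progress."""
    clearance = np.linalg.norm(rollout - obstacle, axis=1).min()
    progress = rollout[-1, 0]
    return 5.0 / (clearance + 1e-3) - progress


state = np.zeros(2)                # current (x, y) position
obstacle = np.array([4.0, 0.5])    # predicted position of another road user

# 1,000 candidate plans, each a sequence of ten (dx, dy) steps.
candidates = rng.normal(loc=[0.5, 0.0], scale=[0.1, 0.2], size=(1000, 10, 2))

# Simulate every candidate and keep the safest, most efficient one.
scores = [cost(world_model(state, plan), obstacle) for plan in candidates]
best_plan = candidates[int(np.argmin(scores))]
print("First action of selected plan:", best_plan[0])
```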

Why Geely

Geely has quietly become one of the most influential automotive groups in the world. With brands spanning mass-market, premium and electric-only segments, it controls a global production and distribution network. Integrating G-ASD into this ecosystem gives Afari a fast route to international scale.

For Geely, the partnership provides access to a cutting-edge AI platform without having to build everything in-house. As regulations around assisted and autonomous driving evolve, having a model-driven system already deployed positions the group to move quickly when higher levels of autonomy are approved.

CES 2026 and the Industry Context

The launch of G-ASD at CES 2026 reflects a wider shift in the mobility sector. Traditional driver assistance systems are reaching the limits of what can be achieved with rules and handcrafted features. At the same time, breakthroughs in large-scale AI models are opening new possibilities for perception, reasoning and control.

Across the industry, companies are moving toward end-to-end learning systems that treat driving as a single integrated problem. Afari and Geely are aligning themselves with this trend, betting that foundation models and world models will become as fundamental to cars as engines and batteries once were.

What This Means for Autonomous Driving

G-ASD does not claim to deliver fully autonomous vehicles overnight. Instead, it provides a scalable foundation for progressively higher levels of automation. As more data is collected and models improve, features such as hands-free highway driving, urban navigation and automated parking can be expanded and refined.

Because the system is model-driven, upgrades can be rolled out through software updates rather than hardware changes. This gives vehicle owners a path to better performance over time and allows manufacturers to respond quickly to regulatory approvals and market demand.

Implications for Investors and Policymakers

For industry investors, the scale of deployment and the depth of AI integration make G-ASD a signal that autonomous driving is moving out of the experimental phase. A platform that already runs in hundreds of thousands of vehicles has a very different risk profile to a prototype fleet.

Policymakers also face a new reality. Systems like G-ASD generate detailed data on driving behaviour, safety incidents and system performance. This can support evidence-based regulation, but it also raises questions about data governance, liability and cross-border standards that will need careful attention.

A Platform Built for the Long Term

Afari has made it clear that G-ASD is not a one-off product but a continuously evolving platform. As foundation models grow larger and more capable, the same architecture can absorb new capabilities without being rebuilt from scratch. That gives Geely and its brands a long runway for innovation.

The combination of real-world deployment, modern AI architectures and a global automotive partner places G-ASD in a strong position. While competition in autonomous driving remains fierce, few systems combine scale and sophistication in quite the same way.

Being Driven into the Future

The global launch of G-ASD at CES 2026 marks a turning point for Afari and a strategic step for Geely. By betting on model-driven, AI-native driving, the two companies are aligning themselves with the future direction of the industry.

As vehicles become increasingly defined by software and data, platforms like G-ASD will shape how people move, how cities manage traffic and how safety is measured. The journey toward full autonomy remains complex, yet with hundreds of thousands of vehicles already learning on the road, Afari and Geely have taken a meaningful step forward.

About The Author

Anthony brings a wealth of global experience to his role as Managing Editor of Highways.Today. With an extensive career spanning several decades in the construction industry, Anthony has worked on diverse projects across continents, gaining valuable insights and expertise in highway construction, infrastructure development, and innovative engineering solutions. His international experience equips him with a unique perspective on the challenges and opportunities within the highways industry.
