As Hong Kong’s first officially licensed autonomous vehicle hits the road at the airport, you can almost feel AI advancement running at full throttle! As innovation makes its mark on the tarmac, has our legal and ethical “navigation system” kept pace? Beyond the question of technical readiness lies a deeper societal one about our collective values and accountability.

Earlier this year, the Hong Kong Generative AI Research and Development Center (HKGAI), a government-funded initiative led by the Hong Kong University of Science and Technology, released HKGAI V1, the city’s first large language model trained on local context with government backing. Alongside this technological milestone, HKGAI has published a draft “Hong Kong Generative Artificial Intelligence Guideline”, which could shape the governance of AI in both the public and private sectors. As Hong Kong plays a key role in deepening international exchange and cooperation, the guideline also has the potential to influence AI development locally and further afield in like-minded jurisdictions, including the 49 member countries of AALCO, and beyond.

The proposed Guideline outlines five fundamental principles for AI governance: data privacy, intellectual property, crime prevention, system reliability, and cybersecurity. It also champions the concept of “human-in-the-loop”, a mechanism that ensures human oversight throughout the AI lifecycle—from data collection and model training to real-world deployment and use.

These principles are not abstract theory—they are directly applicable to high-stakes technologies such as autonomous driving. When a self-driving car is forced to make a split-second decision between the life of its passenger and that of other road users, is that a purely “technical problem”, or a moral dilemma? Who should be “behind the wheel” in making that decision? Should software engineers predefine value hierarchies, or should AI be trained to “learn” social norms on its own?

The Guideline does not have a specific section on a scenario like this, but it provides a more general direction: human judgment must remain embedded in high-risk AI systems. Developers and service providers should implement risk assessments, compliance audits, and content labeling mechanisms to ensure that AI-generated decisions are traceable and accountable. For autonomous vehicles, this means designing systems that can explain their decision-making processes, particularly in critical scenarios. Without this transparency, any future accidents—whether due to algorithmic bias, incomplete data, or so-called “model hallucinations”—could lead to a legal quagmire where responsibility may be impossible to clearly delineate.

Moreover, the Guideline emphasizes industry-specific governance. Autonomous driving will likely be identified as a high-risk use case, requiring more stringent safeguards such as external model audits, robust data verification, and simulated crash testing. These are not bureaucratic obstacles—they are essential confidence-building measures for the industry to realize its potential locally and on the world stage. Having good guidelines, good laws and good shared understanding will protect not only passengers and pedestrians but also the companies developing and deploying these technologies, building trust with regulators, insurers, and the public.

I note that the HKGAI Guideline does not advocate for rigid, one-size-fits-all regulation. Rather, it reflects a nuanced understanding of Hong Kong’s unique position: a city that must balance technological ambition with institutional trust. The guideline adopts a non-binding, principles-based framework, similar to, and in my view better than, models used in Singapore and the European Union, focusing on flexibility, industry collaboration, and risk-based governance.

The conversation around “AI at the wheel” is ultimately a metaphor for a broader societal challenge: how do we delegate decision-making power to machines without abdicating human responsibility? In sectors like finance, healthcare, and education, AI has found its place and plays a growing role. But unlike a spreadsheet error or a miscalculated ad, an error in autonomous driving can result in irreversible loss of life. That is why the ethical infrastructure must evolve in tandem with the technical one.

The Guideline’s emphasis on “human-in-the-loop” oversight is therefore not just a technical safeguard; it is a democratic principle. It ensures that decisions with moral weight are never made in a vacuum of code. It also reminds us that AI should enhance—not replace—human agency.

Hong Kong, with its robust legal system, world-class research institutions, unique East-meets-West perspective, and access to data from around the world, is well-positioned to become a leader in AI governance for the good of the world. But leadership comes with responsibility. As we adopt more intelligent systems—from autonomous cars to AI-powered public services—we must also build a shared vocabulary of accountability, transparency, and ethical reasoning.

So as we hand over the steering wheel to AI, we must ask ourselves: have we set the right course? Are our laws, values, and institutions prepared for the road ahead? The future of AI in Hong Kong will not be determined by how fast we drive, but by how wisely we navigate.

Technological speed cannot come at the expense of social trust. The final mile of innovation is not paved with code—it is built on consent, understanding, and dialogue.

The Standard