AI and urban mobility: Innovating responsibly with human values

March 9, 2025


Sheena Jacob, Head of Regional Southeast Asia TMT and Intellectual Property Practice, CMS

Jaya Malhotra, Senior Associate, CMS



The rapid rise of artificial intelligence (AI) presents immense opportunities to transform urban mobility—improving traffic efficiency, enhancing accessibility, and promoting sustainability. Technologies such as autonomous vehicles and AI-powered public transportation systems are already reshaping urban landscapes. However, as these innovations become more embedded in our cities, they raise profound ethical concerns. The challenge lies in ensuring that AI serves the public good without sacrificing fundamental human rights or individual freedoms. Striking the right balance between fostering technological advancement and safeguarding human values is essential for creating smart cities that are both efficient and just.


What is ethical AI in urban mobility?


Before examining the application of AI in urban mobility, it is crucial to define what ethical AI entails. Due to the complexity of AI technologies, varying political and cultural contexts, and differing ethical viewpoints, establishing a universal baseline is challenging. However, one widely accepted framework by PwC1 outlines ten core principles for ethical AI, which serve as essential guidelines on responsible AI.


The ten core principles are:

  1. Reliability and Robustness: AI systems must operate consistently and reliably. Whether it’s autonomous vehicles or smart traffic management, these systems must function within their design parameters to ensure safety and efficiency.
  2. Security: Protecting AI systems from cyber threats is critical. Secure data storage and communication protocols must be in place to prevent malicious interference that could jeopardize public safety.
  3. Accountability: Clear accountability must be established for the developers and organizations deploying AI systems. Transparency in decision-making and oversight is crucial to maintaining public trust.
  4. Beneficiality: AI technologies in urban mobility should prioritize the common good. These systems should promote sustainability, reduce congestion, and improve public access, while avoiding harm to vulnerable communities.
  5. Privacy: Given the personal data collected by AI systems in urban mobility (e.g., travel patterns, location data), strong privacy protections are necessary to prevent misuse and build public trust.
  6. Human Agency: Even with the rise of autonomous systems, human oversight is necessary. For instance, drivers should have the ability to override autonomous vehicles in emergencies, preserving human judgment and control.
  7. Lawfulness: AI in urban mobility must adhere to existing laws and regulations, including data protection and traffic safety standards.
  8. Fairness: AI systems should be designed to avoid discrimination and ensure equitable access to transportation services, especially for marginalized groups.
  9. Safety: Safety should always be a priority. AI technologies should not endanger public health or safety, either physically or mentally.
  10. Interpretability: The reasoning behind an AI system’s decisions should be explainable to regulators and to the people affected, so that outcomes can be understood, questioned, and corrected.


These principles offer a comprehensive roadmap for cities and organizations to navigate the complex landscape of AI deployment in urban mobility, ensuring that innovation aligns with human values and societal needs. It is essential that AI systems in urban mobility are grounded in these ethical guidelines to foster trust and ensure that they benefit all individuals.


Reliability, robustness, and safety


Ensuring that AI systems are reliable, robust, and safe is critical, especially in high-stakes applications like urban mobility. For instance, a malfunctioning autonomous vehicle or an AI-powered public transportation system could lead to accidents or inefficiencies, undermining public trust in these technologies. A prominent example is the 2018 fatality involving an Uber autonomous vehicle in Tempe, Arizona. The vehicle failed to detect a pedestrian crossing outside a crosswalk, resulting in the pedestrian’s death. This tragic event highlighted the importance of ensuring the safety and reliability of AI-driven transportation systems to maintain public confidence in their use.


Accountability and beneficiality


Transparency and accountability are crucial components in the ethical deployment of AI. For example, the city of Amsterdam has pioneered an initiative to enhance public engagement by launching a public AI registry.2 This platform provides individuals with clear insights into the AI systems being used in urban mobility, offering transparency about their purpose, operation, and data usage. What sets Amsterdam apart is its commitment to public participation. By making AI systems visible and understandable, the city empowers residents to scrutinize the technologies shaping their daily lives. This fosters trust and helps ensure that these systems remain aligned with public values, with accountability, fairness, and openness built into decision-making processes.


Moreover, AI technologies must be designed with the public good in mind. For example, AI systems in urban mobility should contribute to sustainability, reduce congestion, and improve accessibility for all individuals, particularly marginalized groups. Amsterdam’s approach ensures that AI-driven solutions benefit the community rather than serving the interests of a select few.


Privacy protection and security


Privacy protection is one of the most pressing ethical challenges in the deployment of AI in urban mobility. Many AI systems rely on collecting vast amounts of personal data, such as travel patterns and location information. If not properly safeguarded, this data could be misused or exposed, violating individual privacy rights.


The Sidewalk Toronto project,3 an ambitious urban development initiative proposed by Sidewalk Labs, a subsidiary of Alphabet Inc., in collaboration with Waterfront Toronto, provides a stark example of the risks associated with mass data collection. The project aimed to transform Toronto’s eastern waterfront into a high-tech, smart city neighbourhood known as Quayside. To achieve this, Sidewalk Labs planned to collect data from public spaces, publicly accessible private spaces, and infrastructure, including pedestrian traffic, environmental conditions, and utility usage. While Sidewalk Labs introduced privacy measures, such as anonymization techniques and secure data storage, significant concerns remained about the effectiveness and enforcement of these measures. One major issue was that, although Sidewalk Labs committed to de-identifying data at the source, there were worries about third-party companies not being held to the same standard. This extensive data collection contributed to a public backlash and the project’s eventual cancellation.


This project highlights the necessity of transparency and public engagement. Individuals must be informed about the data being collected, its intended use, and how their privacy will be protected. Moving forward, any AI-driven mobility project must prioritize robust privacy protections and transparency, balancing innovation with the need to safeguard individuals’ rights.


Fairness in AI systems


AI has the potential to address long-standing inequalities in access to public services. Urban mobility AI systems can be designed to provide equitable access to transportation services, particularly for underserved neighbourhoods. For example, AI-powered transportation systems, like those in London, adjust bus routes and schedules based on real-time demand, ensuring that individuals in less accessible areas are not left behind. By designing AI systems to be inclusive and equitable, cities can ensure that the benefits of technology are shared fairly among all individuals.


Legislation and ethical boundaries


As AI technologies become increasingly integrated into urban mobility, the question of whether regulation should proactively set ethical boundaries becomes more urgent. While some countries, like Japan and South Korea, have developed ethical AI guidelines, many parts of the world lack comprehensive AI guidelines or regulation. This regulatory gap has allowed AI technologies to be deployed without adequate oversight, raising concerns about privacy and civil liberties, as seen in China’s implementation of AI-driven systems like social credit scoring.


In contrast, the European Union has taken a proactive approach by passing the EU AI Act, which regulates AI systems according to their risk levels. The Act bans unacceptable-risk AI systems that manipulate human behaviour, exploit vulnerable populations, or enable social scoring. This ban is key: it acknowledges the ethical risks posed by certain AI technologies and seeks to prevent their misuse, particularly where they threaten individual freedoms and human dignity.


In urban mobility, AI systems that undermine autonomy, such as those used for coercive surveillance or discriminatory practices, should be prohibited. For example, AI tools that track people’s movements or influence their choices without their consent could be seen as infringing on personal freedoms. Banning such systems would protect individuals’ privacy and dignity while ensuring that AI technologies are used responsibly and ethically.


Opportunities for innovation


While regulation is essential, it should not stifle innovation. Proactive regulatory frameworks can foster the development of responsible and ethical AI applications in urban mobility. Regulatory sandboxes, where startups can test AI solutions under regulatory supervision, allow innovation to thrive while ensuring compliance with safety, data protection, and ethical standards. Public-private partnerships also offer promising avenues for innovation. For example, Singapore’s collaboration with private technology companies has led to the development of smart city technologies that improve urban mobility while safeguarding public interests.4


Ensuring ethical and inclusive AI in the future of urban mobility


Looking ahead, the future of AI in urban mobility holds tremendous potential for creating more efficient, sustainable, and inclusive cities. However, these technologies must be developed in line with ethical principles that prioritize human rights, transparency and fairness. AI should be used to empower communities, not just to serve the interests of powerful corporations. The deployment of AI in urban mobility must prioritize the public good, ensuring that transportation systems are efficient, equitable, and accessible to all.


To ensure that AI works for the benefit of society, its development and deployment must be grounded in transparency, fairness, and inclusivity. By adhering to these principles, we can create smart cities that uphold the rights and dignity of all individuals while driving progress and innovation. Through responsible governance and adaptive regulatory frameworks, AI can shape a future that benefits everyone.


  1. PwC Australia. (n.d.). Ten principles for ethical AI. PwC Digital Pulse. Retrieved from https://www.pwc.com.au/digitalpulse/ten-principles-ethical-ai.html ↩︎
  2. https://venturebeat.com/ai/amsterdam-and-helsinki-launch-algorithm-registries-to-bring-transparency-to-public-deployments-of-ai/ ↩︎
  3. https://www.tomorrow.city/sidewalk-toronto-the-vision-behind-googles-failed-city/ ↩︎
  4. https://www.indsights.sg/industry-perspective/opportunities-in-singapore-smart-city-landscape/ ↩︎

Sheena is the Head of the regional Southeast Asian Intellectual Property practice at CMS Holborn Asia. Qualified in Singapore, New York, England & Wales, she is a leading international lawyer in the field of Intellectual Property with more than 25 years of specialist experience in Asia. Sheena is known for her sound, commercial advice and is widely acknowledged as a global thought leader on cutting-edge issues in the Intellectual Property field.

She is ranked as a leading Intellectual Property lawyer in numerous publications including Asialaw Leading Lawyers 2021, Asia IP Leading Stars 2021, Women in Business Law, Who’s Who Legal IP, Life Sciences 2021, Patents 2019, Who’s Who Legal Life Sciences 2022, World Trademark Review 2020 and IAM Patents 1000.

Sheena is also a leading technology lawyer with a strong cybersecurity, media, privacy, and data protection practice at CMS Holborn Asia. Holding double international privacy certifications from the IAPP, Sheena has been active in the privacy field for more than 10 years. She also represents a significant number of industry players in the entertainment and media industry and has strong experience with cybersecurity and data localization issues across Asia. Sheena works closely with a team of technology lawyers and advises on regulatory issues in the media, telecoms and fintech sectors in Southeast Asia. Sheena is ranked as a leading TMT lawyer in Asia and is known for her sound, commercial advice. She is a member of the Global Board of iTechlaw and has written numerous articles on privacy and AI.

Jaya is a Senior Associate at CMS Holborn Asia and part of the Technology, Media, Intellectual Property and Competition team. Jaya provides commercial and regulatory advice to clients in the telecommunications, technology, and media industries. Over the past four years, Jaya has served as in-house counsel at Amazon, supporting the Devices and Digital Content business in APAC and Amazon Web Services’ Network Infrastructure business in APAC and the Middle East.

As in-house counsel, Jaya has been involved in landmark submarine and terrestrial cable projects navigating complex telecommunications regulatory and policy issues. Her expertise includes master services agreements, consortium arrangements, system supply agreements, landing party agreements, IRU agreements, and dark and lit fibre transactions. She has also advised the business on content acquisition, sale and distribution arrangements, platform liability and consumer protection regulations.