
Advancing Ethical AI: Navigating Biases with ISO 42001 and Value-Based Engineering

Updated: May 6


In an earlier post, "Technology with bias. Is it possible for Artificial Intelligence to hate me?", we uncovered a crucial aspect of AI: its potential to be biased and discriminatory. This is a significant concern because it stems from the use of Machine Learning models trained on the vast, yet flawed, datasets available online. These models therefore inherit the biases present in their source data. The question then arises: how can we navigate this issue? How can we progress with AI, leveraging its benefits, without amplifying existing disparities and neglecting vulnerable populations?


In this post, we delve into the case of Generative AI, examining its biases and a failed attempt to correct them. We aim to propose strategies for advancing AI in ways that minimize the reinforcement of existing social inequalities and prevent the marginalization of vulnerable communities.

About Bias

According to IBM, AI bias refers to “AI systems that produce biased results that reflect and perpetuate human biases within a society, including historical and current social inequality.” As AI becomes increasingly involved in vital societal sectors like healthcare and employment, concerns about bias in these systems have repeatedly made headlines. A pivotal moment came when Professor Joy Buolamwini discovered that computer vision systems failed to detect her dark-skinned face until she wore a white mask. The incident exposed a glaring gap in the ability of AI technologies to serve diverse populations, ignited widespread discussion of bias within AI, and led to increased scrutiny and calls for more inclusive and equitable AI development practices.

Bias in Generative AI

In the years that followed, the emergence of Generative AI further fueled controversies surrounding AI bias, as discussed in our previous post. For instance, research showed that AI associates skin color with job status: when AI generates images from prompts for high-paying jobs, such as “doctor” or “professor”, the results often depict individuals with lighter skin tones, whereas prompts for lower-paying roles typically feature people with darker skin.

In another instance, AI-generated images presented a [heavily stereotyped view of the world](https://restofworld.org/2023/ai-image-stereotypes/). The prompt “Indian Person” produced images of an old man with a beard and a headdress, and generated images of New Delhi almost always showed overcrowded, polluted streets. These instances reveal a concerning tendency of AI to perpetuate and amplify stereotypes, presenting a skewed and narrow view of cultures and communities. Such portrayals can reinforce harmful biases and misconceptions and negatively influence public perception.

Overcorrection of Bias

To address the shortcomings of earlier generative AI, Google released Gemini, touting it as its most powerful and versatile AI model yet. However, its image generation feature quickly became a topic of controversy.


The New York Post called the model “absurdly woke.” For prompts that did not specify race or gender, the generated images featured shocking historical misrepresentations, such as Black Vikings, ethnically diverse Founding Fathers, and a female pope. Google’s new strategy, aimed at promoting diversity and countering earlier models’ stereotyping, appeared to neglect historical authenticity, suggesting an overcorrection in its attempt to ensure inclusive and diverse AI-generated content.


Google has since apologized and taken down the feature, announcing that it will undergo significant improvements and extensive testing before relaunch, and acknowledging the challenges AI faces in generating reliable content on sensitive topics.

Needless to say, we are nowhere near fixing the problem of bias in AI, if that is even possible. However, emerging standards underscore aspects that are essential to moving forward in the right direction.

Context Matters

In what scenarios is the Gemini model designed to function? Does it aim to deliver historically accurate depictions, potentially taking over the role of history books? Or is its primary purpose to empower content creators with the ability to produce innovative and unique visuals? It appears that Google, along with the broader technology sector, is striving to develop models versatile enough to fulfill a wide range of tasks. Yet, their efforts have not met these ambitious goals.

From our current vantage point, creating an AI system entirely devoid of bias seems unattainable, and it is questionable whether such a model could ever exist. This is why we believe the first step toward developing responsible technology is defining its intended context of use from the outset of design and communicating it transparently to users.


The emerging standard ISO 42001 provides a good starting point for organizations to delineate the context of their applications. The standard mandates that organizations clearly outline and record the specific context in which their AI systems will operate, after evaluating factors such as the system’s capabilities, stakeholders’ expectations, and the ethical, cultural, and value considerations in AI development, alongside existing policies and regulations. As the first standard of its kind to focus on AI management systems, ISO 42001 aims to guide organizations in responsibly navigating the complexities of AI deployment by emphasizing transparency, accountability, and ethical practice in the development and use of AI technologies.
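To make this concrete, here is a minimal sketch of how an organization might record a system’s intended context of use as structured data. The schema and field names are our own illustration; ISO 42001 requires that the context be determined and documented, but it does not prescribe any particular format.

```python
from dataclasses import dataclass, field

@dataclass
class ContextOfUse:
    """Illustrative record of an AI system's intended operating context.

    The fields below are our own assumptions, not terminology mandated
    by ISO 42001; the standard asks for the context to be documented
    but leaves the schema to the organization.
    """
    system_name: str
    intended_purpose: str                                  # what the system is for
    out_of_scope_uses: list[str] = field(default_factory=list)
    stakeholders: list[str] = field(default_factory=list)
    applicable_regulations: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

# A hypothetical record for an image-generation feature:
image_gen_context = ContextOfUse(
    system_name="image-generator",
    intended_purpose="creative, stylized image generation",
    out_of_scope_uses=["historically accurate depictions of real events"],
    stakeholders=["content creators", "educators", "affected communities"],
    applicable_regulations=["EU AI Act (where applicable)"],
    known_limitations=["may reflect biases present in training data"],
)
```

Even a lightweight record like this forces the team to state, before launch, whether historical accuracy is in scope, which is exactly the question the Gemini episode left unanswered.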

Notably, this does not appear to be the case with Generative AI at the moment, where organizations launch products they claim to be multifaceted and “general-purpose,” only to face backlash and criticism.

Early User Involvement

As guided by ISO 42001, user involvement is essential in determining the context of the system. To put this into practice, Value-based Engineering (VBE), based on ISO/IEC/IEEE 24748-7000, emerges as a crucial next step. VBE underscores the significance of the early involvement of a wide and diverse range of stakeholders. This approach is essential in embedding a set of human values directly into technology, thereby enhancing its alignment with societal norms and ethical standards.


Moreover, VBE’s emphasis on the early integration of stakeholder perspectives in the design process can be instrumental in identifying biases within AI systems and understanding their impacts. This approach aligns with ISO/IEC TR 24027:2021, which introduces mechanisms and methodologies specifically aimed at uncovering and addressing bias. Recognizing that bias is an inevitable part of these systems, and that it can have negative, positive, or neutral effects, the challenge outlined by the standard is to evaluate these impacts meticulously and design AI systems that can navigate these complexities effectively.
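As a flavor of what such bias measurement can look like in practice, here is a small sketch that computes a representation gap over annotated samples of generated images. It is a crude proxy for one notion of bias (representation bias); TR 24027 covers many more nuanced metrics and methods, and the labels below are entirely hypothetical.

```python
from collections import Counter

def representation_gap(labels: list[str]) -> float:
    """Gap between the most- and least-represented group in a sample.

    0.0 means perfectly even representation; values near 1.0 mean a
    single group dominates. This is only one crude proxy for bias:
    annotating perceived attributes is itself error-prone and fraught.
    """
    counts = Counter(labels)
    shares = [c / len(labels) for c in counts.values()]
    return max(shares) - min(shares)

# Hypothetical annotations of 10 images generated for the prompt "doctor":
sample = ["lighter", "lighter", "lighter", "lighter", "lighter",
          "lighter", "lighter", "lighter", "darker", "darker"]
print(representation_gap(sample))  # ~0.6 -> heavily skewed sample
```

Running such checks across many prompts, with stakeholders helping to decide which attributes and prompts matter, is where VBE’s early-involvement principle and TR 24027’s measurement toolkit meet.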


Thus, the interplay between VBE and standards like ISO/IEC TR 24027:2021 points to a comprehensive strategy for fostering ethical AI development. By prioritizing stakeholder inclusion from the outset, VBE not only aids in pinpointing potential biases but also helps ensure that AI systems resonate with the diverse values and needs of society.

Proactive Risk Management

Finally, we need to recognize that it is not feasible to foresee and mitigate every risk that arises when users engage with an AI system. Developing robust strategies to minimize the impact of these unforeseen challenges is therefore crucial.


Indeed, tech companies do acknowledge the possibility of bias in their models: Google shows the notice “Gemini may display inaccurate info, including about people, so double-check its responses” in the model’s user interface, and OpenAI states that “ChatGPT is not free from biases and stereotypes, so users and educators should carefully review its content.” However, the Gemini incident, in which Google had to take the feature down, underscores the critical need for comprehensive risk management processes. It is not enough to acknowledge potential flaws; companies also need a proactive approach to managing them. Effective risk management includes establishing clear channels for user feedback and being agile enough to incorporate that feedback into system improvements.


In fact, ISO 42001 emphasizes the importance of identifying and assessing risks, developing strategies to mitigate them, and ensuring transparency and accountability in AI operations. The standard advocates incorporating feedback mechanisms to learn from user experiences and incidents, promoting a culture of continuous improvement. Such processes are vital in minimizing the impact of unforeseen challenges, ensuring that both users and companies can navigate the complexities of AI technology responsibly.
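The sketch below shows one way such a feedback loop might start: a simple incident log that captures user reports so that high-severity items must be triaged before a feature ships again. The schema, function names, and example report are our own assumptions, not anything prescribed by ISO 42001.

```python
import datetime

# Minimal, illustrative incident log. ISO 42001 calls for feedback and
# continual improvement; this particular structure is our own sketch.
risk_log: list[dict] = []

def report_incident(prompt: str, issue: str, severity: str) -> None:
    """Record a user-reported problem with a generated output."""
    risk_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "issue": issue,          # e.g. "historical misrepresentation"
        "severity": severity,    # e.g. "low" / "medium" / "high"
        "status": "open",        # open -> reviewed -> mitigated -> verified
    })

def open_high_severity() -> list[dict]:
    """Items a review board would triage before any relaunch."""
    return [r for r in risk_log
            if r["status"] == "open" and r["severity"] == "high"]

report_incident("a viking warrior", "historical misrepresentation", "high")
print(len(open_high_severity()))  # 1 -> blocks release until addressed
```

The point is not the code itself but the discipline it encodes: every report is captured, triaged by severity, and tracked to resolution, rather than discovered through headlines after launch.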

Conclusion

The ideas shared in this blog post resonate with a paper authored by researchers from Google Research, Hugging Face, DAIR, and DeepMind, which advocates for ethical and inclusive AI systems developed with a comprehensive understanding of the diverse cultural and societal contexts in which they will operate. Engaging with civil society and considering local values and norms is necessary to align AI ethics with universal human rights and to navigate the varied moral landscapes of different societies.



At RightMinded AI, we are committed to bridging the implementation gap for standards like ISO 42001 and Value-based Engineering. Our goal is to foster the creation of ethical and inclusive AI systems, minimizing bias as far as possible. This commitment guides our pursuit of opportunities that advance technological innovation while reflecting our dedication to ethical integrity and societal enrichment.
