Navigating the Ethical Maze: Generative AI's Impact and the Promise of Value-Based Engineering

Updated: May 6



Ever since OpenAI’s release of ChatGPT in late 2022, Generative AI has consistently captured the spotlight in tech news. Its applications span a wide range, from revolutionizing advertising to passing the bar exam, showcasing seemingly endless potential. But it's not all smooth sailing. Academics and industry professionals have raised important ethical questions about these advanced models. This blog post dives deeper into some of these ethical dilemmas, showcasing real-life situations where these issues crystallized. Then we explore the potential of Value-Based Engineering (VBE) in the development and deployment of ethical generative AI models. But first, let’s briefly look at the evolution of these technologies.


Evolution of Generative AI


Generative AI is a field within Artificial Intelligence that develops models capable of generating new content such as text, images, music, videos, and even code. Although generative AI models have only recently gained widespread attention, the field has been evolving for decades. Perhaps the first Generative AI application was the chatbot ELIZA, developed in the mid-1960s, which used simple pattern matching to mimic the responses of a psychotherapist. The Natural Language Processing (NLP) program would simply turn the patient's statements back into questions to allow for reflection. For example, an input of “I feel sad” would simply be answered by ELIZA with “Do you often feel sad?”. It was thus very simple, and was even deemed a “parody” of psychotherapists by its own developer. Nevertheless, it gained much attention and sparked further research.
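ELIZA-style pattern matching is simple enough to sketch in a few lines. The rules below are illustrative stand-ins, not the original ELIZA script, but they show the core idea: match a statement against a pattern and reflect it back as a question.

```python
import re

# Minimal ELIZA-style responder. Each rule pairs a regex with a reflection
# template; these example rules are illustrative, not Weizenbaum's original.
RULES = [
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Do you often feel {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"my (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(statement: str) -> str:
    """Return the first matching reflection, or a generic prompt."""
    cleaned = statement.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(cleaned)
        if match:
            return template.format(match.group(1))
    return "Please go on."

print(respond("I feel sad"))  # -> Do you often feel sad?
```

No understanding of the input is involved: the program only rearranges the surface text, which is exactly why ELIZA's apparent empathy was so surprising at the time.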


Fast forward 50 years, and generative AI models are capable of writing full essays. This is because of the increasing computational power and advancements in AI technologies, particularly in deep learning algorithms and neural network architectures. Models like GPT (Generative Pre-trained Transformer) have revolutionized the field, leveraging vast amounts of data to understand and generate human-like text.


These advancements in generative AI not only showcase the technological leap over the last few decades but also hint at the potential future applications that could transform industries, enhance creative processes, and redefine our interaction with technology. Yet, as we embrace these exciting developments, we must also turn our attention to the ethical challenges they present, setting the stage for a crucial discussion on the responsible use of generative AI.


The Hallucination Phenomenon in Generative AI


The first ethical concern regarding Gen AI is a problem referred to as hallucinations: the models generate false, fabricated, or misleading information and present it as factual. The confidence of the model's false answers is not deliberate; it results from the model's learning process, which identifies patterns in vast amounts of data, whether those patterns correspond to verified information or not. Without proper fact-checking tools, users can easily be deceived by the AI's output.


A notable illustration of this concern is a recent incident in which a Canadian lawyer allegedly cited fictitious cases generated by ChatGPT in court proceedings. Amid the growing adoption of these tools across various sectors, including the legal system and education, the persistent issue of hallucinations raises numerous concerns.
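One practical guardrail against this failure mode is to verify model-cited sources before relying on them. The sketch below illustrates the idea with a hypothetical in-memory index of verified cases; a real system would query an authoritative legal database instead.

```python
# Hypothetical trusted index of real Canadian cases. In practice this would
# be a query against an authoritative legal database, not a hardcoded set.
VERIFIED_CASES = {
    "R. v. Oakes, [1986] 1 S.C.R. 103",
    "Carter v. Canada, 2015 SCC 5",
}

def flag_unverified(citations: list[str]) -> list[str]:
    """Return any citation that cannot be found in the trusted index."""
    return [c for c in citations if c not in VERIFIED_CASES]

model_output = [
    "R. v. Oakes, [1986] 1 S.C.R. 103",
    "Smith v. Jones, 2011 BCSC 999",  # plausible-looking but fabricated
]
print(flag_unverified(model_output))  # -> ['Smith v. Jones, 2011 BCSC 999']
```

The point is not the lookup itself but the workflow: hallucinated citations look exactly like real ones, so every model-generated reference needs an external check before it reaches a courtroom or a classroom.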


Generative AI Amplifies Bias and Stereotypes


GenAI models are trained on large amounts of data taken mostly from the internet. As a natural result, the biases and stereotypes in this data are inherited by the models, with the risk of amplifying them in their outputs. An investigation by Bloomberg reported shocking findings of bias in images generated by a text-to-image model: for example, prompts for high-paying jobs mostly produced people with lighter skin tones, while darker skin tones were more likely to appear for low-paying jobs such as housekeeper and janitor.
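Audits like Bloomberg's boil down to generating many images per prompt, labeling them, and comparing distributions across prompts. A minimal sketch of that comparison, using made-up labels purely for illustration (a real audit uses thousands of images and a standardized skin-tone scale):

```python
from collections import Counter

# Hypothetical audit data: perceived skin-tone labels assigned to images a
# text-to-image model produced for two occupation prompts. The numbers are
# invented for illustration only.
generated_labels = {
    "CEO":     ["lighter"] * 45 + ["darker"] * 5,
    "janitor": ["lighter"] * 12 + ["darker"] * 38,
}

def tone_share(labels: list[str], tone: str) -> float:
    """Fraction of images carrying the given skin-tone label."""
    return Counter(labels)[tone] / len(labels)

for prompt, labels in generated_labels.items():
    share = tone_share(labels, "darker")
    print(f"{prompt}: {share:.0%} of images show darker skin tones")
```

A large gap between prompts that should be demographically neutral is the signal that the model has amplified a stereotype from its training data.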


This bias poses a risk to our societies by reaffirming existing inequalities and marginalizing vulnerable groups. Even if the biases originate in the data and the discrimination is not directly intended, providers of such technologies bear the responsibility of finding ways to generate fair and ethical content.


Generative AI is a Threat to Sustainability


In an age when climate change poses a great threat to our lives, and amid constant efforts to find sustainable solutions, the training of generative AI models is coming under fire. Tech companies are in a tight race to produce state-of-the-art models and deploy them faster than their competitors. However, training these models is proving to be a disaster for the climate. Researchers at the University of Massachusetts Amherst found that training a large AI model can emit more than 626,000 pounds of carbon dioxide, almost five times the lifetime carbon emissions of an average passenger car. This raises the question: does the march towards advanced AI come at the expense of our planet's health?
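Where do such estimates come from? At their simplest, they multiply hardware power draw by training time, datacenter overhead, and the carbon intensity of the local grid. The sketch below uses entirely assumed numbers (GPU count, power, runtime, PUE, grid intensity) and does not describe any specific model:

```python
# Back-of-the-envelope estimate of training emissions:
#   energy (kWh) = GPUs x power per GPU (kW) x hours x datacenter PUE
#   CO2 (kg)     = energy x grid carbon intensity (kg CO2 per kWh)
def training_co2_kg(gpus: int, gpu_kw: float, hours: float,
                    pue: float, grid_kg_per_kwh: float) -> float:
    energy_kwh = gpus * gpu_kw * hours * pue
    return energy_kwh * grid_kg_per_kwh

# Example run: 512 GPUs at 0.4 kW each for 30 days, PUE 1.1, on a grid
# emitting 0.4 kg CO2/kWh -- all illustrative assumptions.
co2 = training_co2_kg(gpus=512, gpu_kw=0.4, hours=30 * 24,
                      pue=1.1, grid_kg_per_kwh=0.4)
print(f"~{co2 / 1000:.0f} tonnes of CO2")  # prints "~65 tonnes of CO2"
```

The formula also shows where the levers are: fewer or more efficient accelerators, shorter training runs, better-cooled datacenters, and cleaner grids all reduce the final figure.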


AI Act on Generative AI


The previous ethical concerns, among others, have raised the need for regulation of these emerging technologies. The EU AI Act, which is set to apply from 2026, includes general-purpose systems and Generative AI as a separate category of AI systems with its own regulatory requirements. These include that the models undergo “thorough evaluations and any serious incident would have to be reported back to the European Commission”, underscoring the need for tech companies to ensure that their systems adhere to ethical standards.


Ethical Innovation: The Role of VBE in Shaping the Future of Generative AI


It is estimated that by 2030, $79 billion will be spent annually on specialized applications that use Generative AI to boost productivity and enhance automation in industries from healthcare to security. Yet these advancements come with the inherent weaknesses and ethical challenges presented above. In addition, regulations similar to the EU AI Act are expected to be strictly enforced. This strongly indicates an imminent need to change engineering development processes to accommodate these shifts. Value-Based Engineering (VBE), as described in the IEEE 7000 standard, could be a step towards revolutionizing the way engineering teams work: it provides guidelines for incorporating positive values seamlessly into the design phase of products.


More specifically, applying VBE to generative AI emphasizes a holistic approach that integrates stakeholder values, ensuring transparency, accountability, and sustainability. By prioritizing ethical considerations alongside technical and financial ones, VBE encourages the development of AI technologies that are socially responsible, environmentally sustainable, and aligned with long-term societal benefits. This framework fosters continuous improvement and innovation aimed at addressing ethical challenges, promoting fairness, and enhancing the positive impact of AI on society.


For more information about how VBE is changing system development, you can check our previous posts.
