
Happy Birthday, ChatGPT… Now What?

ENSURING AN EVEN AND EQUAL AI FIELD FOR EVERYONE
BY ALDO RAMIREZ & ANIBAL ABARCA

"Transparency in AI systems is fundamental for ethical and responsible implementation. By lifting the curtain on how these systems are created and operated, we can ensure that technology contributes positively to our society and respects the fundamental values and rights of individuals.”

This is the response ChatGPT gave when asked the following question: “Can you write a paragraph about the importance of transparency in AI systems like you?”

Great reply, isn’t it? However, a recent study reported that ChatGPT ranks worst in transparency—or openness—among a list of prominent text generators. To be clear, this is not a case of hypocrisy on the part of OpenAI—the company that created ChatGPT. The disparity between the text output and ChatGPT’s alleged deficit of transparency reveals an interesting phenomenon about the current state of campaigns advocating for regulation, responsible use, transparency, and explainability of AI. On one hand, so many articles have been written about the need for these four notions in AI that the datasets used to train the large language models (LLMs) behind ChatGPT—as well as the reinforcement learning from human feedback (RLHF) techniques that improve the generator’s performance—contain more than enough information for the model to produce the polished response above. On the other hand, there is still a lack of consensus regarding the regulation of AI systems, leading to vagueness about the measures companies must take when building, implementing, and distributing AI technologies.

The stage is now set for these and many other pressing discussions. Today marks ChatGPT’s first anniversary (some might say first birthday, even), and OpenAI recently announced what one may call the “new phase” of its ace-in-the-hole tool: the so-called GPTs, or GPT agents. To be precise, GPTs are versions of ChatGPT tailored to specific, often complex tasks. Think of highly specialized apps that you can interact with using only natural language to obtain domain-specific knowledge.¹ Although the technicalities behind GPTs have been around for a while, the announcement marks the beginning of an expansion in the number of ways in which society will be interacting with AI over the coming months.


At the same time, both the UK AI Safety Summit (November 1–2) and President Biden’s executive order on AI (October 30) represent crucial steps in the creation of AI’s regulatory frameworks and adoption guidelines, steps that will likely have consequences for the competitive landscape of innovation. To be sure, the UK AI Safety Summit was the first global event to address the safety and security of AI. It resulted in a landmark agreement requiring safety testing and information sharing for the most powerful AI systems—before and after they are deployed.

In turn, Biden’s executive order on AI lays out a policy aimed at ensuring that AI is trustworthy and beneficial for Americans. Importantly, the order also supports the development of AI talent and infrastructure, with guidelines for increasing funding for AI research and education, creating an AI workforce advisory board, and launching a national AI research resource to provide access to computing power and data.

All of this means that we are on the verge of a period of increased innovation and increased regulation, and we can only hope that a balance between the two will prevail.

Let’s explore what such a balance would look like.

Transparency Behind Algorithms and the Use of Personal Data

During the past decade, concerns about digital technology focused on potential abuses of personal data. These concerns led to the implementation of measures—most notably in the United States and Europe—that guarantee users some control over their personal data and images. A famous example is the European Union’s 2018 General Data Protection Regulation (GDPR). In turn, the last year and a half has seen the growth of a new, albeit related, kind of worry. As companies increasingly embed AI into their products and services, attention is shifting to how data is used by considerably complex (and rapidly evolving) algorithms that might, for example, diagnose a disease, drive a car, approve a loan, or provide medical recommendations. As argued by a comprehensive survey on AI regulation, the main concerns associated with AI’s incorporation into everyday life involve the following points:

a) Bias in the results produced by AI systems: a well-known example is Apple’s credit card algorithm, which was accused of discriminating against women, triggering an investigation by New York’s Department of Financial Services. Similarly, there is evidence that ChatGPT’s answers can exhibit forms of discrimination against certain groups. In most cases, this kind of problem stems from bias in the training data: if that data is biased, the AI will perpetuate and may even amplify the bias.

b) The impact of AI decision-making: some algorithms make or affect decisions with direct and important consequences for people’s lives. Examples include algorithms that diagnose medical conditions, screen job candidates, approve home loans, or recommend jail sentences.

c) The way AI uses our data: even with data protection and security regulations in place, many stakeholders worry about how and to what extent AI models can use the data they are trained on. Since most current applications of generative AI focus on performing specific tasks for specific users on the basis of internal document stores or databases (think of the aforementioned GPTs), there is concern as to whether the models will eventually leverage the information that is fed to them.

d) Transparency and explainability of AI systems: just like humans, AI isn’t infallible. Algorithms will inevitably make some unfair (and perhaps even unsafe) decisions. When humans make a mistake, there is usually an inquiry and an assignment of responsibility, which may lead to legal penalties. Any such inquiry involves searching both for the truth behind what happened—transparency—and for explanations of the decisions made. In the case of AI systems, recent developments have raised increasingly complex questions about how to ensure such transparency and explainability.

These concerns prompted the recent institutional responses embodied in the UK AI Safety Summit and Biden’s executive order. In essence, these responses emphasize the need to ensure the following key principles in the use of AI: safety and security (AI systems should be safe, secure, and trustworthy); privacy (AI systems should respect and protect citizens’ privacy and personal data, and should not be used for unlawful or unethical surveillance or data collection); equity and civil rights (AI systems should advance equity and civil rights, and should not discriminate against, oppress, or harm any group of people based on race, ethnicity, gender, sexual orientation, disability, or any other protected characteristic); and consumer and worker protection (AI systems should protect the rights and interests of consumers and workers, and should not deceive, exploit, or harm them in any way).

Ensuring the Responsible Use of AI

Now, this leads us to a very important question: exactly what practical measures should companies and society take to encourage—and eventually ensure—these key principles and thus guarantee a responsible use of AI? Broadly speaking, there are two levels of action:

  1. At the level of development and implementation: current proposals to better control the design, construction, and implementation of AI broadly focus on four key aspects (which correlate with the concerns mentioned above): (a) fairness in the outcomes expected from an AI system; (b) transparency about the goals and development methodologies of an AI system; (c) explainability of the algorithms used; and (d) systematic verification of those algorithms.

The first point, fairness, refers to any measure taken toward mitigating the potential bias in an AI’s decision making. In theory, it might be possible to code some concept of fairness into software, requiring that all outcomes meet certain conditions. For example, Amazon is experimenting with a fairness metric called conditional demographic disparity, and other companies are developing similar metrics. A considerable obstacle, however, is that there is no agreed-upon definition of fairness. 
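
To make the idea concrete, here is a minimal sketch of how a disparity metric conditioned on a stratifying attribute could be computed. It follows the general definition of conditional demographic disparity used by tools such as Amazon SageMaker Clarify, but it is not that library’s implementation, and the column names and data are hypothetical.

```python
import pandas as pd

def demographic_disparity(df, group_col, outcome_col, group_value):
    """Share of rejections minus share of acceptances held by one group.

    A positive value means the group is over-represented among rejections
    relative to its representation among acceptances.
    """
    rejected = df[df[outcome_col] == 0]
    accepted = df[df[outcome_col] == 1]
    return (rejected[group_col] == group_value).mean() - \
           (accepted[group_col] == group_value).mean()

def conditional_demographic_disparity(df, group_col, outcome_col, group_value, strata_col):
    """Average the disparity within each stratum (e.g., income bracket),
    weighted by stratum size, so that differences already explained by
    the stratifying attribute are not counted as bias."""
    cdd = 0.0
    for _, stratum in df.groupby(strata_col):
        dd = demographic_disparity(stratum, group_col, outcome_col, group_value)
        cdd += len(stratum) / len(df) * dd
    return cdd

# Hypothetical credit-decision data: 'approved' is 1/0, and 'gender' and
# 'income_bracket' are applicant attributes.
applications = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "income_bracket": ["low", "low", "low", "high", "high", "high", "low", "high"],
    "approved": [0, 1, 1, 1, 0, 1, 0, 1],
})
print(conditional_demographic_disparity(
    applications, "gender", "approved", "F", "income_bracket"))
```

A result near zero suggests that, once the stratifying attribute is accounted for, outcomes are distributed evenly across groups; the harder problem, as noted above, is agreeing on which conditions a “fair” outcome must satisfy in the first place.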

The second point, transparency, refers to the effective disclosure of the intended uses, limitations, and methodologies behind AI technologies. Tech companies and their research personnel should be expected to guarantee some degree of such disclosure. Although full openness is unrealistic for technologies developed under competitive corporate agendas, the scientific community has emphasized the need for organizations that audit and monitor transparency.

The third point, explainability, aims to promote the understanding of a given AI’s decision-making. Recent trends in this effort point to three types of explainability: (1) pre-modeling explainability, which uses data analysis to describe the datasets used to train machine-learning algorithms, thereby mitigating potential biases and safeguarding data privacy; (2) model explainability, which seeks to (a) limit the use of algorithms to those that are inherently easier to explain and (b) develop hybrid AI methods, in which complex algorithms are combined to generate simpler, interpretable models; and (3) post-modeling explainability, which uses mathematical techniques to extract, after the fact, the salient features, trends, and rules that drive a trained model’s decisions.
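
As a small illustration of the post-modeling category, the sketch below (assuming scikit-learn and a synthetic dataset) applies permutation importance, one common post-hoc technique: it shuffles each input feature of an already trained model and measures how much performance drops, revealing which features the model actually relies on.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a hard-to-interpret production model.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: permute each feature on held-out data and record
# the drop in accuracy. Large drops flag the features the model depends on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```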

The fourth point, systematic verification, refers to a variety of methods that check whether an AI adheres to certain pre-established specifications. The intuition is that guarantees of compliance ensure that autonomous decision-making will not stray outside an acceptable margin, increasing control over, and easing maintenance of, algorithms in regular use. This is especially relevant when the specifications are ethical ones.
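
A lightweight form of such verification can be expressed as automated property checks that run before an updated model is deployed. The sketch below is illustrative only: the `score_application` stub stands in for a real loan-scoring model, and the specifications and margins are hypothetical examples of what pre-established requirements might look like.

```python
def score_application(applicant: dict) -> float:
    """Stand-in for a real model; returns an approval probability."""
    return min(1.0, 0.3 + 0.5 * applicant["income"] / 100_000)

def check_output_range(applicants) -> bool:
    """Spec 1: every score must be a valid probability."""
    return all(0.0 <= score_application(a) <= 1.0 for a in applicants)

def check_monotonic_income(applicant: dict) -> bool:
    """Spec 2: raising income alone must never lower the score."""
    richer = {**applicant, "income": applicant["income"] * 1.1}
    return score_application(richer) >= score_application(applicant)

def check_group_parity(group_a, group_b, margin=0.1) -> bool:
    """Spec 3 (ethical): average scores for two groups must not differ
    by more than a pre-agreed margin."""
    def avg(group):
        return sum(score_application(a) for a in group) / len(group)
    return abs(avg(group_a) - avg(group_b)) <= margin

if __name__ == "__main__":
    sample = [{"income": 40_000}, {"income": 80_000}, {"income": 120_000}]
    group_a = [{"income": 50_000}, {"income": 70_000}]
    group_b = [{"income": 60_000}, {"income": 66_000}]
    assert check_output_range(sample)
    assert all(check_monotonic_income(a) for a in sample)
    assert check_group_parity(group_a, group_b)
    print("All specification checks passed.")
```

If any check fails, deployment stops and the failure is investigated, mirroring the inquiry-and-responsibility process described above for human decisions.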

  2. At the social level (media, education, and institutions): on top of the constant calls in the global press for systematic control of AI technologies, it is important to disseminate accurate information about the everyday presence these technologies already have in our society. Routine interaction with AI can only grow in the future.


Therefore, it is necessary to increase the general public’s awareness of the basic mechanisms behind AI, as well as its intended uses. In this way, society will be better prepared to face the consequences of the growing incorporation of AI into human activities. The intuition here is that greater awareness makes society’s demands more effective; in this case, those demands concern the creation and responsible use of AI systems.

As for education, academic programs on AI are undeniably mushrooming worldwide. This domain would benefit from AI experts, ethicists, and experts in technology for the public good designing courses that adhere to norms or policies promoting AI’s responsible use. Similarly, conferences, workshops, and panel discussions involving the different actors of technological development (researchers, investors, entrepreneurs, scholars, managers, politicians, etc.) are likely to accelerate convergence on agreements or pacts providing common precepts to govern AI’s responsible use.

Finally, perhaps the most important area at the social level is the institutional regulation of AI. Everything mentioned above could collapse without applicable regulatory policies, achieved through agreements and initiatives such as the UK AI Safety Summit and Biden’s executive order. The goal of these policies should be to monitor and sanction each of the points described above, both at the development level and at the social level. Following the example of existing data-protection regulations, it is necessary to agree on adaptable pacts that can evolve with the social response to the integration of AI technologies.

One Year Later … What Should Happen Next?

It is clear that AI is a powerful tool. An appropriate metaphor, in the current climate, is that of a double-edged blade. It is crucial, then, that on the one hand we sharpen the edge that can foster economic and social growth and, on the other hand, we blunt the edge that can harm our society.

Indeed, Wizeline’s AI mission involves embracing the balance between innovation and regulation. As a company that strives to be at the forefront of consulting on both fronts, we aim to deepen our expertise in innovation and regulation alike in the foreseeable future.

Wizeline’s AI initiatives over the last 12 months are good examples of the company’s commitment to this goal. Our AI training programs, the launch of G.AI.L (the first generative artificial intelligence laboratory in Mexico and Latin America), our AI Manifesto, and our AI Native strategies all speak clearly to the company’s intent to push forward responsible and ethical innovation in AI.

That said, many challenges remain. As this article has tried to illustrate, AI and its advancement raise as many issues as they address, and this is likely to continue (and even intensify) over the next 12 months and the years that follow. Thus, we must be responsible stewards and work together to carefully establish processes, guidelines, guardrails, and best practices to ensure that Wizeline’s AI goals are met with equal parts common sense and responsible action.

At the end of the day, it’s all about setting high standards for the kind of contributions to AI that the company wishes to make. Why? Because only high standards will bring success in the responsible and ethical development of AI solutions, in line with both government regulations and consumer expectations.


¹ Think of an app that provides well-being recommendations, such as how to manage stress, anxiety, depression, and other common conditions. It’s unlikely that one would prefer to seek advice from this app rather than from a trained specialist. But what if the app is powered by a GPT specialized in psychological conditions, with a sturdy and diverse knowledge base advertised as adaptable to particular user questions? And what if that GPT was actually built by trained specialists (or with their help) who certify its effectiveness and efficiency?

 



Posted by Maria Jose Rodriguez de la Garza on November 29, 2023