Before it goes rogue

It is happening again, as it has several times before: we are simultaneously enjoying and fearing an exciting breakthrough in technology.

After years of research and development comes a technological breakthrough: a marvel of innovation, a product that will change everything. We last saw this on November 30th, 2022, when ChatGPT was released.

That day, AI rose from being a mere model under training to becoming the fastest-adopted tool in history, reaching over 100 million users within two months. As it grows, so do the excitement and the concerns it generates.

Models are everywhere (web browsing, idea generation, summarising texts, and code assistance, among many others). The use cases for AI are countless, and they keep growing daily. Notion, Microsoft, Google, Miro, and many others keep pushing the boundaries of what we can do with AI. This frantic development creates a level of excitement that is sometimes hard to grasp. I struggle to keep up with all the news and developments around AI. We will probably soon have (if we don’t already) a product using AI to recommend other AI products.

We shouldn’t wait to regulate these systems until they have run amok. But nor should regulators overreact to AI alarmism in the press. Regulations should first focus on disclosure of current monitoring and best practices.

Image from Akira (Katsuhiro Otomo)

In parallel to this frenzy, there have been several attempts to pause AI research. The Future of Life Institute published an open letter calling for an immediate pause in advanced AI research, aiming to buy time to put AI regulations in place.

Is this letter enough? How can we seek to stop something that is changing how we perceive and interact with the world?

The enthusiasm is evident. Every single time you open LinkedIn, you see statements like “If you’re not using AI, you’re falling behind”, which are, in my opinion, as meaningful as “If you’re not driving a Ford, you’re falling behind” would have been in 1908. As product managers, besides keeping up to date with new technologies, we are responsible for challenging the solutions we apply to problems and for understanding how they work. These are fundamental responsibilities.

Naturally, over the last few months, I have seen a growing number of product managers asking whether AI will replace their job and whether all product managers will become AI product managers. Maybe. This post reflects not on how AI will shape product management, but on how product people (PMs, engineers, designers, data scientists and others) can help prevent AI from causing harmful and unintended consequences for humans.

Before trying to see what we can do to build more ethical AI products, how well prepared are we for solving problems with AI?

Product Managers and AI

Currently, most software products are implementing, or thinking of implementing, AI and Machine Learning (ML). Maybe your company hasn’t started yet, but it is undoubtedly a matter of time until it does. One way or the other, artificial intelligence will have an impact on the way we build products.

Before discussing how we can build more humane products using AI (a spin-off of my previous article, The Fifth Risk), we should talk about what an AI Product is. 

As Karin Schöfegger points out, there are, as of now, a few types of AI products:

  1. Products that apply AI to enable new user experiences.
  2. Products (or platforms) built for engineers to make AI technology — i.e. to help them train, launch, and operate ML models.
  3. Products that offer AI-as-a-service (off-the-shelf).
  4. Products built mainly by AI rather than by humans. (AI is not there yet.)

All AI products start with one common factor: data. The nature of this data defines how well the models will make predictions and, therefore, how they will impact customers on their journey through our products. However, what is essential to understanding the problem we are trying to solve with AI products is knowing the risks associated with the models we are implementing or attempting to use, so that we can foresee their impact on customers and act upon them.

Scarier still, there are no standard practices for assessing how these risks should be evaluated or mitigated. Currently, no regulation compels the people building AI products to be transparent about how data is collected, how their models process it, and what mitigating actions will be taken.

Moreover, companies building AI products publish very vague statements about how data is used or how their models are trained. These statements do not prevent any harm their products might produce; they merely share a few principles for building good AI.

It can represent various societal biases and worldviews that may not be representative of the user’s intent, or of widely shared values. It can also generate code that is compromised or vulnerable.

GPT-4 System Card

In this paper, OpenAI explored the following risks:

  • Hallucinations 
  • Harmful content 
  • Harms of representation, allocation, and quality of service
  • Disinformation and influence operations 
  • The proliferation of conventional and unconventional weapons 
  • Privacy 
  • Cybersecurity 
  • Potential for risky emergent behaviours 
  • Interactions with other systems 
  • Economic impacts 
  • Acceleration 
  • Overreliance

OpenAI took action after completing the assessment of these risks and started raising awareness about how it will prevent the use of its models for illegal activities, the generation of hateful, harassing or violent content, and more. (See their Usage Policies.)

Along these lines, there are several other efforts to regulate AI. Some have taken a more radical approach, like Italy banning ChatGPT. Others are proceeding more cautiously, seeking a healthy balance between regulation and the product.

The latter are aiming for AI regulations that give us streamlined practices for data privacy and ownership, bias and fairness, transparency, accountability, and standards. Still, the biggest question, as posed by O’Reilly, is: how do we align AI-based decisions with human values? Whose human values? Who defines the correlation between a model’s predictions and these values?

So far, there is broad uncertainty about how to address this problem.

In my opinion, this is a responsibility of the product teams building AI products.

Today, we have dozens of organizations that publish AI principles, but they provide little detailed guidance. They all say things like “Maintain user privacy” and “Avoid unfair bias” but they don’t say exactly under what circumstances companies gather facial images from surveillance cameras, and what they do if there is a disparity in accuracy by skin color. (…) Instead, they provide only general assurances about their commitment to safe and responsible AI. This is unacceptable.

What can we do as Product Managers?

Product managers can play a critical role in building more humane (and ethical) AI products, beyond the existing proposals for a set of AI-product principles and for independent regulatory bodies and auditors able to look deeper into how data is used and for what purposes.

Companies creating advanced AI should work together to formulate a comprehensive set of operating metrics that can be reported regularly and consistently to regulators and the public, as well as a process for updating those metrics as new best practices emerge.

While we wait for regulations, we can start acting as product people.

What responsibilities should a product manager working with AI have?

  • Help create transparency about the scenarios in which AI systems work and how they work.
  • Communicate clearly, and educate users on the best ways of managing them.
  • Understand the human and systemic risks of each AI solution and assess them accordingly.

To create awareness of how your product could be used harmfully, try to answer the following:

  • Can you give regulators enough understanding of what data is used to train your models, how the models are trained, and how you influence the models’ predictions?
  • Which audiences can be affected negatively by your product?
  • How can these audiences be affected by your product?
  • Can your product be responsible for creating anger, anxiety or mistrust in your customers?
  • How could your product contribute to people getting harmed?
  • How could users use your product to harm others?

This is a first take on how we, product managers, can help create more transparency about how we use data in our models and how we train and test them, so that their predictions and decisions cause as little harm as possible to our customers. All in all, AI is here to stay, and we are in for an exciting future building even better products: sustainable and more responsible ones.

PS: this article was not written by ChatGPT 😉

P.P.S.: Check out The A.I. Dilemma
