Let’s admit it: we do not think enough about the harm that the products we build can cause. And if we are not thinking about that, we do not fully know our own products.
Just this November at MTPCon, Cennydd Bowles called on all product people to develop products ethically and to take the human aspect into consideration when deciding what to build.
In a recent article of mine, I mentioned why this matters for us. We product managers work with methodologies, practices and frameworks that only contribute to the “over-quantified” perception of our world that Cennydd was talking about.
We spend too much time trying to move a metric, aiming for growth mostly by stealing users’ attention. We immerse ourselves in data to make our decisions, and we are very good at it. But that is exactly what keeps us from seeing what lies beyond those decisions.
This is how we could start changing that way of working.
Human Risk: whether humans can be harmed by using our product.
Marty Cagan, in his latest edition of Inspired, explicitly calls out the four types of risk that should be tackled:
1. value risk (whether customers will buy it or users will choose to use it)
2. usability risk (whether users can figure out how to use it)
3. feasibility risk (whether our engineers can build what we need with the time, skills and technology we have)
4. business viability risk (whether this solution also works for the various aspects of our business)
The earlier each of these big risks, especially value risk and business viability risk, is tackled, the less uncertainty there will be around a product solution.
Now, tackling these four risks will undoubtedly contribute to building products that customers love. Most of the technology-powered organisations in the world take this into consideration, one way or another, when making product decisions about what needs to be built. And customers love these products; they use them pretty much every day, if not every hour. That is increasingly becoming the source of the problem.
Our daily analysis of data shows us product performance and how customers behave in our products. Think about it: we are analysing not only our products but the humans who are using them. And while data is definitely necessary for innovation, it is on us, product people, to decide how responsibly this data should be used. Cennydd Bowles mentioned yesterday that we run hundreds, if not thousands, of experiments on our users to boost those metrics.
This practice of modern product management, while a very desirable environment for any product manager to work in, misses the point of tackling the human risk and its unintended consequences.
If we really want to build a customer-centric product, we need to think of our customers as humans, not only as the metrics that track their behaviour in our products.
How to tackle human risk in advance?
You, as a product person, are also responsible for defining and articulating the “dark side” of your product and making your product team aware of the possible unintended consequences your product can have.
The point is to get as evil as possible, and not hold back, and not think about the feelings of others, it’s just about being really harsh about what your product could be used for, and then putting it into a little dystopian future that you create.
Rosi Proven
There are already some practices that can help you realise what can go wrong with your product. One example is the Black Mirror Test. Rosi Proven defines three core rules for running a Black Mirror workshop on your product:
- Nothing is out of bounds, no matter how unpalatable
- Forget what you know about the people around you
- Look for real-life parallels
You can check out “What’s the worst that could happen?” for more.
In this article, I’ve listed some key questions that can be addressed during the discovery phase. However, this is just the beginning of a change for the years to come. We need to start anticipating human behaviour in our products.
How do I put this into practice?
Every time we need to adopt something new, there is a feeling of uncertainty. Of course, we do not know something until we know it. Our current tools and artifacts are not prepared to tackle the human risk. However, they can be expanded to take on this new function too.
Before jumping into any tooling or practical advice, we should define what our goal is. Our main objective as product managers is to know what our product is and what it is not. Try to figure out the following to build awareness of how your product could be used harmfully:
- Which audiences can be negatively affected by your product?
- How can these audiences be affected? Can your product create anger, anxiety or mistrust in your customers?
- Which users could use your product to harm others?
- In which ways could users harm others through your product?
We should think beyond our tools and expand their capabilities. While our artifacts, templates, processes and workshops serve our current way of working, none of them is prepared to deal with assessing the human risk in our products.
I will dedicate future articles to concrete sessions and artifacts that can help us tackle the human risk. The main goal is to augment the knowledge we have of our products by assessing what we do not yet know that could harm our customers.
For now, you can try the Extended Persona Sheet to explore how users could use your product to harm others (this is not about identifying a completely new dark persona, but rather about identifying some attributes of our users that we never thought about).
I will be happy to hear your thoughts.
Thanks for reading! If you enjoyed this story, let’s talk about it. Feel free to share it so others can find it!
Here is my Twitter.