Five Characteristics of an AI-Driven Future Built for Everyone

Stakeholders in AI have an opportunity.


If history is a teacher, then people have learned in the first two decades of this millennium that technology is both a progressive and disruptive force. The internet has created a new breed of successful entrepreneurs while becoming a catalyst for inequality across the world. Laws and social norms have nurtured Meta (formerly Facebook), Amazon, Apple, Google and Netflix while allowing cyberbullying, exploitation and hacking to grow.

What can stakeholders in AI do differently in the next two decades to ensure they make progress without disrupting lives? This question matters more now than ever, as scenarios that once lived only in sci-fi novels feel close to becoming reality. How do humans make progress without leaving the less fortunate behind?

To find good answers to those questions, let's look at what history teaches. Here are five things the stakeholders of AI — users and creators — must do differently.

1. Explain Why

When a service provider makes content decisions based on your life experiences, it should explain why it made those decisions. When Facebook decides to show you content based on its proprietary algorithms, it changes your life experience. When many people share a life experience shaped by Facebook's algorithm, the result can resemble a cult.

In the future, service providers may change your life experiences in new ways using AI. Your local barber may want to cut your hair based on current trends and your personal profile. Good and bad things may start happening to you without your explicitly asking for them. It would be great if everything an AI decides turned out to be good for you. But what if you disagreed with the AI?

In these cases, you must be able to ask service providers to explain AI decisions. The explanation should include why the AI thought its choice was more beneficial to you than the alternatives, how many other people received similar decisions and how many received alternatives. When AI has a say over a human's life experience, people should retain the right to know why they have the life experience they have.

2. Independently Verify

When AI affects you, there must be a way to verify what the AI did. Think about the term "fake news." Notwithstanding political beliefs, the term itself came about because one group of people could unilaterally declare who was lying and who was telling the truth. To inform public discourse, fact-checking agencies came into being. That was a good start, but in the age of AI, I believe we must also institutionalize the verification of AI-led decisions. For example, in the field of radiology, independent panels of doctors could validate the results of an AI X-ray diagnostic system that decides whether a patient has a stage 3 or stage 4 brain tumor.

This is important because a cancer patient's life may depend on the diagnosis and the ensuing treatment. Similarly, in more mundane circumstances, such independent agencies should be able to recreate an individual AI decision, such as "you may like this restaurant," and let the recipient know whether they indeed received a good decision.

3. Legal Recourse

Europe is pushing tech companies to accept laws such as the "right to be forgotten" and GDPR. In the United States, there are varying laws and several of the tech industry's own standards, such as Google's mantra "Don't be evil." These efforts will become even more important in a world where AI is ubiquitous.

People who receive decisions from an AI bot must be able to dispute each decision immediately and also influence future outcomes in some way. In effect, we must hold AI to the same, or possibly even higher, liability standards than those applied to fellow human beings. Imagine a day when you receive and act upon 100 small AI decisions. What if your day turns out to be less than ideal? What if a million other people also had a bad day? Is that enough for a class action lawsuit? What are the standards?

4. Transparency

When you order an autonomous ride from a rideshare app on your phone and tell it to "take me somewhere nice," it would be great if that car took you to a nice restaurant first and then to a movie. But to create that experience, did the rideshare app share your data with others? How many companies have purchased a history of your habits and likes from the rideshare company? Wouldn't you like to know? That is why you should ask for transparency in AI decisions. Just as you get a bank statement every month, you should get a statement of which people or companies received your data during or after an AI decision. That puts you in the driver's seat with respect to your own data.

5. Data Ownership

For centuries, the only time a person gave something up without receiving monetary compensation was in a barter exchange. In the last 20 years, people have routinely given up information about themselves without receiving monetary compensation. Yes, you do receive services in return for data. For example, millions of users communicate through free email services and pay with their data, such as the content of the email itself. But how can you know whether the free email service is worth the dollars the provider can make from your data? It's important to debate this contentious topic and arrive at an answer. Whose data is this? Do you own your information, or do the service providers own it? How much is it worth?

Stakeholders in AI have an opportunity. They have the chance to get this right. They must not stand in the way of progress; at the same time, they must try to create a more perfect AI-driven world.

Uncommon Knowledge

Newsweek is committed to challenging conventional wisdom and finding connections in the search for common ground.

The Newsweek Expert Forum is an invitation-only network of influential leaders, experts, executives, and entrepreneurs who share their insights with our audience.
Content labeled as the Expert Forum is produced and managed by Newsweek Expert Forum, a fee based, invitation only membership community. The opinions expressed in this content do not necessarily reflect the opinion of Newsweek or the Newsweek Expert Forum.

About the writer

Sachin Dole

