
Building a better and safer digital ecosystem

For Alex “Sandy” Pentland, a longtime professor of information technology at MIT, the big societal questions have always been front and center. And this focus has led to great impact. His group developed a digital health system for rural workers in developing countries, which today (with the support of the Gates Foundation) guides health services for 400 million people. Another effort led to the implementation of tools to ensure fair and impartial social service support for 80 million children in Latin America. Another spin-off developed open-source identity and authentication mechanisms, now built into most smartphones and relied on by 85% of humanity.

In 2008, Pentland began co-chairing the Davos talks, which have been widely recognized as the genesis of the European Union’s General Data Protection Regulation (GDPR). Today, he serves on the board of directors of the UN Foundation’s Global Partnership for Sustainable Development Data, which uses data to track countries’ progress toward 17 different sustainability goals.

This spring, Pentland joined the Stanford HAI Digital Economy Lab as a Center Fellow and Faculty Lead for Research on Digital Platforms and Society. Here he hopes to continue building a better digital ecosystem for all and address the ways in which social media and artificial intelligence are influencing democracy and society. We recently caught up with Pentland to ask him about his plans.

What do you mean by building a better digital ecosystem?

Thirty to forty years ago, we suddenly had the Internet. We’ve done a lot of good things with it, but we’ve also done some questionable things. And people are scared of what’s to come: bad actors using AI in nefarious ways; widespread misinformation that distorts our shared understanding of our communities; and cyber attacks on our financial system. I would like to see us build a better digital ecosystem so that we can have a thriving, creative, and safe society.

What does this look like in practical terms?

There are a variety of ways we can achieve this goal. For example, courts and law enforcement need a way to discover the true identity of online actors.

A second idea is that we need to draw a line between individual expression and mass expression. For example, consider influencers who have more than a million “friends” on social media. Anyone with that many followers can make money and build a reputation by saying whatever they want. I believe these outsized voices should be treated as businesses, not as individuals. They shouldn’t be able to shout “fire” in a crowded theater or tell lies; these are the basic standards we ask of other businesses. TV news shows and newspapers cannot publish something just to generate outrage. Digital media should be held to the same duty of care to protect the public good. If you are going to broadcast your ideas to a million people, then you are a business and should be regulated as one.

We also need to reduce partisan animosity online. Today, digital media is designed to make us react quickly, which results in careless responses that in turn lead to cascades of behavior where everyone becomes outraged. We need a system that encourages people to communicate in ways that support democratic processes rather than tear them apart. Large-scale experiments have found that online discussions are improved when people are encouraged to put some thought into what they are about to say. For example, we see less divisiveness and outrage online when we add an extra step that allows time for reflection before responding or forwarding, or add a prompt to consider what a comment will do to your reputation.
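The reflection step described above could look something like the following minimal sketch. The function name, the delay length, and the reputation prompt are all hypothetical illustrations, not any platform’s actual mechanism:

```python
import time

REFLECTION_DELAY_SECONDS = 10  # hypothetical pause before a reply can be sent


def submit_reply(draft: str, confirm, delay: float = REFLECTION_DELAY_SECONDS):
    """Hold a drafted reply for a short reflection period, then ask the user
    to confirm before posting. `confirm` is a callback that shows a prompt
    and returns True (post) or False (discard)."""
    time.sleep(delay)  # the extra step that allows time for reflection
    prompt = "Consider what this comment will do to your reputation. Post it?"
    if confirm(prompt):
        return draft  # post as written
    return None  # the user reconsidered; nothing is posted
```

A real platform would implement this as asynchronous interface friction rather than a blocking sleep; the point is only that a single extra step sits between drafting and publishing.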

You’ve said we need to rethink the architecture of the Internet. What does that mean?

We must have new security standards. In the early days of the Internet, developers did not include important security and digital identity features because users were mostly government employees and faculty. But today, everyone is on the Internet, and that means bad actors have the opportunity to do all kinds of harmful things. Nation-states that don’t like us can disrupt our cyber world with distributed attacks, bots, and troll farms. People can spread misinformation and disinformation on social media without repercussions. And these behaviors destroy our ability to discuss things meaningfully with each other and make rational decisions.

In some cases, fixing the problem will require changing small, often subtle things in the bowels of the Internet. For example, if someone produces 50,000 tweets a day, that’s a bot, not a human. This is an obvious case, but there are other things we can do to more effectively find bots, determine when foreign nations are interfering in elections, and better deal with ransomware and cyber attacks. The problems we have now evolved because the architecture of the Internet was never completed. And now maybe it’s finally time to finish the job.
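The 50,000-tweets-a-day test above can be sketched as a crude posting-rate check. The threshold, function, and data shape here are hypothetical; real bot detection combines many weaker signals, but the obvious case really is this simple:

```python
from datetime import datetime, timedelta

HUMAN_DAILY_LIMIT = 50_000  # posting rate from the example: clearly not a human


def flag_probable_bots(post_times_by_account, window=timedelta(days=1),
                       limit=HUMAN_DAILY_LIMIT):
    """Return the set of account ids whose post count within the most recent
    `window` meets or exceeds `limit`. `post_times_by_account` maps an
    account id to a list of datetime objects, one per post."""
    flagged = set()
    for account, times in post_times_by_account.items():
        if not times:
            continue
        cutoff = max(times) - window  # look back one window from the latest post
        recent = sum(1 for t in times if t > cutoff)
        if recent >= limit:
            flagged.add(account)
    return flagged
```

Subtler signals, such as coordinated timing across accounts or identical text, require more machinery, but they follow the same pattern: measure behavior that no individual human could plausibly produce.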

So at the Stanford Digital Economy Lab, we’re going to try different fixes experimentally to see what kinds of economic and social incentives work, and then hopefully put those changes into practice.

While at Stanford, you’re joining a team of researchers, including Condoleezza Rice, Erik Brynjolfsson, and Nate Persily, who are working on a series of essays called the “Digitalist Papers.” Tell me about it.

The Digitalist Papers will be modeled after the Federalist Papers, which were a series of 85 essays written by three people in 1787-1788, arguing for the ratification of the US Constitution. They argued for the creation of a country by design rather than by accident or force.

Today, we have the internet, smartphones, and artificial intelligence, so perhaps there is a better form of government we can design: something more transparent, more accountable, and perhaps wiser. So for the Digitalist Papers, we are bringing together experts from around the world in a variety of fields, including economics, politics, law, and technology, to write essays on how the intersection of technology with each of these fields could lead to better governance.

It is our hope that getting these essays out into the world will change the terms of the discussion and change what people think they should be working toward.

We’ve been talking about improving the digital ecosystem in general. Do you have particular thoughts on how AI currently plays – or will play – a role in our digital ecosystem?

First, AI is not new. The first AIs of the 1960s were logic engines. And then expert systems came along and then came collaborative filtering. All of these are widespread today and have had some negative effects, from centralizing data like never before to creating a surveillance society.

So we should think about what the current wave of artificial intelligence will do before it really takes off. And it’s not artificial general intelligence, or AGI, that worries me. It’s that AI is becoming ubiquitous in so many parts of our lives, including our healthcare system, our transportation system, and our school system. It will be everywhere, just like previous waves of AI have been. And we have to make sure it’s prosocial.

To me, AI was and continues to be a way to find and use patterns in data. So if you want to control AI, you need to control the data it is fed, by demanding privacy rights and data ownership rights. Data is fodder for AI; without those rights, AI will run amok.

What motivates you and keeps you doing this work?

I believe that developing a humanistic digital infrastructure is one of the best things a person can do right now. If I could help create a human-centered world that uses all these new digital tools and AI for the good of society, that would be the best thing I could do with my life, precisely because it is so transformative.

Stanford HAI’s mission is to advance AI research, education, policy, and practice to improve the human condition. Find out more.