The Science of AI and the Art of Social Responsibility

By Guru Banavar, IBM’s Chief Science Officer for Cognitive Computing

I am a computer scientist and engineer, inspired by the art of the possible and driven by the practical application of computing. For decades, I’ve quantified my professional achievements using metrics like computability, performance, scalability, and usability.

But the transformational nature of artificial intelligence requires new metrics of success for our profession. It is no longer enough to advance the science of AI and the engineering of AI-based systems. We now shoulder the added burden of ensuring these technologies are developed, deployed and adopted in responsible, ethical and enduring ways.

I realize that many of us, perhaps most of us, lack the academic qualifications to pass judgment on the ethics of computer science. We did not study philosophy. We did not go to law school. But that does not excuse us from considering the social impact of the work we do.

That impact will be significant. This year alone, at least 1 billion people will be touched in some way by artificial intelligence, which is transforming everything from financial services to transportation, energy, education and retail. In healthcare, IBM Watson is engaged in serious efforts to help radiologists identify markers of disease; to help oncologists identify personalized treatments for cancer patients; and to help neuroscientists identify genetic links to diseases like ALS, paving the way for advanced drug discovery.

It is no exaggeration to say that in the years ahead, most aspects of work and life as we know it will be influenced by these technologies. And that makes us more than computer scientists. It makes us architects of social change.

This is a profound and daunting responsibility. And it would be easy for us to bury our heads in our work, to retreat to our areas of expertise, our comfort zones. But that is simply not an option. Because the work we do now lives at the intersection of science and society. Therefore, we must engage with the messiness of the real world.

This is not the first time that scientists have been asked to consider the consequences, intended and unintended, of their work. Thankfully, we are not alone in this obligation: it is a responsibility we share with business, government, and civil society. Everyone must do their part.

That is why we have engaged with a broad coalition of partners and ethics experts to inform our work. And why IBM is a founding member of the Partnership on AI, a collaboration among Google, Amazon, Facebook, Microsoft, Apple and many scientific and nonprofit organizations charged with guiding the development of artificial intelligence for the benefit of society.

In addition to this work, IBM has developed three core principles that we believe will be useful to any organization involved in the development of AI systems and applications:

Purpose: Technology, products, services and policies should be designed to enhance and extend human capability, expertise and potential. They should be intended to augment human intelligence, not replace it.

Transparency: AI systems should make clear when and for what purpose they are deployed, as well as the major sources of data that inform their solutions.

Opportunity: Developers of AI applications should accept the responsibility of enabling students, workers and citizens to take advantage of every opportunity in the new economy powered by cognitive systems. They should help them acquire the skills and knowledge to engage safely, securely and effectively in a relationship with cognitive systems, and to perform the new kinds of work and jobs that will emerge in a cognitive economy.

These principles have forced my team to ask some new and difficult questions of our work—questions I hope all of you will ask as well.

For example, we are challenging ourselves to not just consider the use cases of our work, but also the “misuse” cases. Not just how this technology will be used, but how it might be abused.

We are asking ourselves what requirements should be met to ensure transparency. What level of transparency—of evidence-based explanations—will lead to trust in cognitive systems? We remind ourselves that trust is a precursor to adoption, and that adoption is the only path to business success.

Finally, we are thinking about how we can empower all users with this technology, especially users who are not technically inclined. How can we ensure that these solutions augment their skills rather than obviate them? How can we build training into the solutions themselves, so that the user and AI can evolve together?

These are difficult questions. They force us to stretch our thinking and speculate. But we didn’t go into this field because it was the easy thing to do. I’m confident that if we take these matters seriously, and let these questions guide our actions, it will lead to stronger, better products. It will even lead to artificial intelligence that benefits all of humanity. And that is a metric for success on which we all can agree.

The British Computer Society (BCS) and the Institution of Engineering and Technology (IET) have selected Dr. Banavar to deliver their 2017 Turing Lecture, which he will present in four cities in the British Isles: London, Cardiff, Dublin and Belfast, from February 20-23. The BCS/IET Turing Lecture is not related to the Association for Computing Machinery’s A.M. Turing Award.
