The Promise and Peril of AI

IU Center of Excellence for Women & Technology
2021-22 Leadership Launch Dinner
Grand Hall, Cook Center, Maxwell Hall

Friday, August 27, 2021

Early Days of AI

Artificial intelligence is a very young discipline, not an ancient one like mathematics or philosophy, though it draws heavily on both. It can be regarded as having been founded in the mid-1950s, though its essential mathematical underpinnings were developed primarily in the two preceding decades, and it has gained widespread acceptance as a discipline only in the last few decades.

As in so many related areas, a founding presence behind AI was the legendary British mathematician Alan Turing. He proposed the famous Turing Test for machine intelligence, asserting that if one could hold a conversation with a machine, through some kind of interface, that was indistinguishable from a conversation with a human, then it was reasonable to ascribe intelligence to that machine. The Turing Test has been the subject of vigorous debate, pro and con, to this day.

However, the inaugural Dartmouth Summer Research Project on Artificial Intelligence, held in 1956, is widely considered to be the founding event of AI as a field of research. This eight-week summer retreat brought together 20 of the brightest minds from a wide variety of academic backgrounds, including mathematics, psychology, and electrical engineering, for what might best be described as an extended brainstorming session. The conference was historic as it was where the term "artificial intelligence" was first coined – attributed to the AI pioneer John McCarthy – and it also led to some of the earliest and most seminal AI and machine-learning projects.

The nearly two decades that followed were the first great period of research and experimentation in AI, dominated by researchers at MIT, CMU, Stanford, and the University of Edinburgh in the U.K., among others, where fundamental ideas and concepts were developed, tried, and tested. In 1959, two junior faculty members at MIT, John McCarthy and another AI pioneer, Marvin Minsky, both of whom had attended the Dartmouth conference, founded the Artificial Intelligence Project at MIT, the first major AI research project at a university.

This was a golden age for AI. But exaggerated and overly optimistic claims from AI researchers that they were on the verge of creating true machine intelligence led the by-now disillusioned funding agencies in both the U.S. and the U.K. – the two countries most actively funding AI research – to largely cease funding such research. This period became known as the first AI winter.

However, work continued quietly in universities and laboratories around the world, and in the early 1980s the Japanese government launched a major new AI initiative with billions of dollars of direct and indirect funding, coordinated through its then Ministry of International Trade and Industry (MITI) and involving dozens of Japanese companies and universities. It was called the Fifth Generation Computer Systems (FGCS) project, and its goal was to build intelligent computing systems that used formalized logics as their programming languages and ran on parallel computers.

I knew this project well, as I had established a collaborative project with FGCS when I was at the Australian National University, and I was later appointed to the international panel established by the Japanese government in the early 1990s to review the project's progress on its completion. We concluded that it was a visionary and extraordinarily ambitious project that had been responsible for important new research and progress in computer science and AI, but that it had not achieved its original goals, though it had also demonstrated the limitations of the approaches it had taken.

Not surprisingly, this coincided roughly with another worldwide downturn in AI funding – the second AI winter. But the relentless march of Moore's Law – the empirical observation that the processing power of microprocessors roughly doubles every couple of years as transistor counts grow – meant that the power and capacity of the computers available to AI researchers continued to increase massively. This, plus breakthroughs in the application of different mathematical techniques – in particular, a much greater incorporation of statistical methods and the use of neural networks – saw optimism about the future of AI begin to rise slowly again in the mid-1990s. In fact, when I arrived at IU in 1997, one of the first events I was involved in was a seminar on a populist theme of something like "Would Computers Take Over the World?" – a proposition I flatly rejected.

But today it is abundantly clear that AI has reached full maturity as a discipline. It will still have its ups and downs in funding and progress, and no doubt it will again run into research roadblocks that halt progress, but the field is now too vast, and too successful in its applications, ever to suffer another AI winter.

AI is having an enormous impact in nearly every sector of the global economy and society at large, including, but not limited to, business, education, finance, manufacturing, health care and pharmaceuticals, and media and marketing.

It has incredible promise and potential to expand our knowledge, improve our economic prosperity and quality of life, and enrich the human experience. Indeed, a recent report from the bipartisan National Security Commission on Artificial Intelligence urged the U.S. to accelerate its AI innovation, and noted that artificial intelligence “will be the most powerful tool in generations for benefiting humanity."1

Artificial Intelligence at IU

AI, especially in areas concerned with programming languages, has long been a strength of IU’s Department of Computer Science in the Luddy School of Informatics, Computing, and Engineering. More broadly, faculty in IU’s outstanding programs in cognitive, psychological, and neurosciences have also long been engaged in areas of research relevant to AI.

In recent years, during my 14-year tenure as IU president, we sought to build on these strengths in the context of the worldwide explosion of interest in AI. To give just two examples:

A key initiative was the announcement in 2019 of the acquisition of Big Red 200, the first university-owned AI supercomputer in the country. Big Red 200 was dedicated on January 20, 2020 – the 200th anniversary of IU's founding. It has allowed enormously complex computations involving AI applications across a very wide range of research areas to be carried out vastly faster, and at a much larger scale, than is possible on a desktop machine.

But our most comprehensive initiative has been the establishment of the new Luddy Center for Artificial Intelligence, which is, quite simply, a game-changing development for IU. This visionary new center, which we dedicated earlier this summer, will be the hub of multidisciplinary research at IU in artificial intelligence, and, in particular, digital health, and it will put IU and the Luddy School at the forefront of research and education in AI and related areas.

And I should note, of course, that next week is AI Week at IU, an event I strongly recommend all of you attend.

The Promise and the Ethical Challenges of AI

AI is now widely regarded as essential to our nation's innovation economy and technological competitiveness. Likewise, U.S. investment in AI-enabled systems is seen as critical to safeguarding our national security interests and upholding our democratic values here and around the world.

Nevertheless, despite the massive potential of AI and its profound impacts on our economic growth, national security, and welfare, it is important for us to remember that AI – at the end of the day – is a technology like any other.

As with radio and television, computers and microprocessors, e-mail and the Internet, Google, GPS, Facebook and Twitter, and other disruptive technologies that have dramatically altered our world for the better in many ways, AI also comes with some major concerns about how it can be applied. And these concerns have only grown as more and more private companies, government agencies, and other institutions in countries both democratic and authoritarian – and worse – employ AI more regularly and pervasively.

As AI becomes more commonplace in how our economy and society function, we are seeing a dawning recognition that this technology can be used for enormous good, but also for considerable harm.

These concerns cover a wide range of social and ethical issues, including AI's potential, when used maliciously, to reduce our privacy and personal freedoms, to fuel bias and discrimination, and to eliminate transparency among the government agencies and public institutions that we count on to serve and protect us.

An April 2020 article titled "Ethics of Artificial Intelligence and Robotics," published in the Stanford Encyclopedia of Philosophy, identifies the main ethical debates around AI as those concerning privacy and surveillance, manipulation of behavior, opacity of AI systems, bias in decision-making systems, human-robot interaction, automation and employment, autonomous systems, machine ethics, artificial moral agents, and what has been called a "singularity" – a possible future AI superintelligence.2 A very sobering list.

Just this week, in fact, the U.S. Government Accountability Office published a 90-page report outlining how federal agencies are planning to expand their use of facial recognition systems.3 This comes at a time when civil liberties advocates continue to voice strong concerns over government and police use of this technology and its potential to exacerbate discrimination against people of color.

Companies across the nation and the world are using AI and machine learning to harvest complex data in attempts to better understand and predict the behaviors of their customers and to gather insights that can help them forecast future market trends. But data mining presents several ethical dilemmas, including how far it infringes on the privacy of consumers for financial gain, and whether companies that engage in this modern business practice can truly protect vulnerable individuals, especially minors and minorities, from emotional and physical harm.

Today, AI predicts what we see on Facebook, what products we should purchase, what stocks we will invest in, and what candidates and causes we will support politically. In the future, AI will become increasingly accurate at making these predictions. Some experts have even gone so far as to suggest that AI "will know us better than we know ourselves."4

But improvements in the technology will almost certainly bring additional serious implications. If left unchecked, they have the potential to bring the perfect tyrannical state that Orwell and others warned us about much closer to reality.

As such, it is incumbent upon all of us not just to use this technology, but to understand it more deeply and to address the ethical considerations inherent in it. That is the only way we will ensure AI is used for good.

Conclusion

In their introductory letter to the National Security Commission on Artificial Intelligence report, the chairs of the commission describe AI as "an inspiring technology."

They write, "Scientists have already made astonishing progress in fields ranging from biology and medicine to astrophysics by leveraging AI. These advances are not science fair experiments; they are improving life and unlocking mysteries of the natural world. They are the kind of discoveries for which the label 'game changing' is not a cliché."

But in the very next paragraph of their letter, the authors caution, "AI systems will also be used in the pursuit of power. We fear AI tools will be weapons of first resort in future conflicts. AI will not stay in the domain of superpowers or the realm of science fiction. AI is dual-use, often open-source, and diffusing rapidly."

So these are the challenges that those of you in this room, and many others, will face in the future, especially those of you who are students, as you graduate into a world where the applications of AI are pervasive. You face the same dilemma that thousands before you have faced concerning other technologies over the past few centuries of explosive technological progress: how to take an incredible technology – one of the pinnacles of human intellectual achievement – and see it used for good and the betterment of humanity, and not for the erosion of human freedoms and dignity. But you have all the incomparable resources of a great research university to help you resolve, conquer, and surmount this dilemma.

Thank you.