In June 2018, Google's management instituted its "Principles of Artificial Intelligence" in response to a growing list of public controversies over the previous two years, including congressional inquiries into consumer privacy violations, allegations that it manipulated user search results, sexual harassment complaints, and employee objections to the company's R&D projects with the Pentagon.  The Principles are a code of conduct that the firm touts as "our commitment to using and developing AI responsibly."

Google's Principles (which the company will attempt to operationalize through "objectives") address both the AI technologies it intends to pursue responsibly and the AI applications it will not design or deploy.  The objectives it will pursue include:

—Be socially beneficial.

—Avoid creating or reinforcing unfair bias.

—Be built and tested for safety.

—Be accountable to people.

—Incorporate privacy design principles.

—Uphold high standards of scientific excellence.

—Be made available for uses that accord with these principles.

However, Google has also identified a list of AI applications that the company will not pursue:

—Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate safety constraints.

—Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

—Technologies that gather or use information for surveillance violating internationally accepted norms.

—Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Google notes that “as our experience in this space deepens, this list may evolve.”

The objectives Google will pursue offer some insight into how it will evaluate being "socially beneficial."  "We will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides," Google said.

This language describes a benefit-cost analysis, yet one that leaves key questions unresolved.

For example, specifically what "social and economic factors" will Google employ to operationalize a managerial decision to be "socially beneficial"? Will the word "substantially" be interpreted or defined as, for example, a ratio of 2:1 in favor of benefits before undertaking an AI project? Will there be transparency in this decision-making? If so, how will Google implement that transparency? In addition, what does Google consider to be adequate "safety constraints" when developing and deploying its AI technologies?

As to "avoid(ing) creating or reinforcing unfair bias," what company guidelines will be instituted to cover this range of bias? Who will enforce these guidelines? Will enforcement involve any third-party audits? How often will bias assessments be undertaken? As to providing opportunities for "feedback, explanations and appeal" in AI systems, which people (stakeholders inside or outside the company) are to be responsible for "human direction and control"? As to privacy design principles, what is Google's interpretation of "appropriate transparency and control over the use of data"?

Further, regarding the multiple uses of AI technologies developed by Google, how will the company restrict its products from being "dual-use," i.e., adaptable to a harmful purpose? Does this mean that if any proposed product has the potential for such dual-use application, Google management will not consider it for commercialization?

The company states that it does not tolerate the use of Google AI technologies for surveillance purposes that violate "internationally accepted norms." Nevertheless, this principle leaves the company broad latitude in interpretation. Google's search engine left the People's Republic of China in 2010 when the company refused to comply with the communist government's demands to remove links the government wanted censored. Fast forward to today, and Google CEO Sundar Pichai continues to support a program called "Dragonfly" that could bring Google's search engine roaring back into mainland China.

The web news outlet The Intercept, in an article published in August 2018, outlined the PRC government censorship protocols Google was designing into Dragonfly, as well as the program's requirement that users submit identifying information, including telephone numbers, that would be available to government intelligence agencies investigating dissidents. How, then, does this type of "exploratory" technology program comply with Google's AI principle against "contravening … human rights"?

In October 2018, Vice President Mike Pence said that Google "should immediately end development" of Dragonfly, arguing that "the Dragonfly app … will strengthen Communist Party censorship and compromise the privacy of Chinese customers."

Bloomberg Businessweek, in a recent article, reports that the company's support of Dragonfly has led several researchers to resign from Google.  "I cannot work at a company that will not internally or publicly clarify its ethical red lines," wrote one former employee in his resignation letter. If Dragonfly continues, further controversies will play out in the media, in Congress, and among Google's employees.

What is next for Google?  As the preceding questions and comments reveal, the company needs to make its Principles of Artificial Intelligence operationally effective before implementing them organization-wide. Moreover, for Google, embracing its "social good" challenge comes not a moment too soon.