In 2018, Liz O’Sullivan and her colleagues at a well-known artificial intelligence start-up began working on a system that could automatically remove nudity and other explicit images from the Internet.
They sent millions of online photos to workers in India, who spent weeks tagging explicit material. The data, paired with the photos, would be used to teach AI software to recognize indecent images. But once the photos were tagged, Ms. O’Sullivan and her team noticed a problem: the Indian workers had labeled all pictures of same-sex couples as indecent.
For Ms. O’Sullivan, the moment showed how easily – and often – bias could creep into artificial intelligence. It was a “cruel game of Whac-a-Mole,” she said.
This month, Ms. O’Sullivan, a 36-year-old New Yorker, was named chief executive of Parity, a new startup. It is one of many organizations, including more than a dozen startups and some of the biggest names in the technology industry, that offer tools and services designed to identify and remove bias from AI systems.
Companies may need this help soon. In April, the Federal Trade Commission warned against the sale of AI systems that were racially biased or that could prevent individuals from getting jobs, housing, insurance or other benefits. A week later, the European Union unveiled draft regulations that could penalize companies for offering such technologies.
It is unclear how regulators might police bias. Last week, the National Institute of Standards and Technology, a government research laboratory whose work often shapes policy, released a proposal detailing how businesses can fight bias in AI, including changes in the way technology is designed and built.
Many in the tech industry believe companies need to prepare for a crackdown. “Some kind of legislation or regulation is inevitable,” said Christian Troncoso, senior director of legal policy for the Software Alliance, a trade group that represents some of the largest and oldest software companies. “Every time there is one of these horrific stories about AI, it chips away at public trust and faith.”
In recent years, studies have shown that facial recognition services, health care systems and even talking digital assistants can be biased against women, people of color and other marginalized groups. Amid a growing chorus of complaints over the issue, some local regulators have already taken action.
In late 2019, New York State regulators opened an investigation into UnitedHealth Group after a study found that an algorithm used by a hospital prioritized care for white patients over Black patients, even when the white patients were healthier. Last year, the state investigated the Apple Card credit service after claims that it discriminated against women. Regulators ruled that Goldman Sachs, which operated the card, did not discriminate, while the status of the UnitedHealth investigation is unclear.
A UnitedHealth spokesman, Tyler Mason, said the company’s algorithm had been misused by one of its partners and was not racially biased. Apple declined to comment.
According to PitchBook, a research firm that tracks financial activity, more than $100 million has been invested over the past six months in companies exploring ethical issues involving artificial intelligence, compared with $186 million for all of last year.
But efforts to address the problem reached a tipping point this month when the Software Alliance offered a detailed framework to combat bias in AI, including recognizing that some automated technologies require regular human monitoring. The trade group believes the document can help companies change their behavior and show regulators and lawmakers how to control the problem.
Though they have been criticized for bias in their own systems, Amazon, IBM, Google and Microsoft also offer tools for combating it.
Ms. O’Sullivan said there is no simple solution to bias in AI. A thornier issue is that some in the industry question whether the problem is as widespread or as harmful as she believes it is.
“Changes in mentality don’t happen overnight – and that is even more true when it comes to large companies,” she said. “You are trying to change not just one person’s opinion, but many.”
When she began advising companies on AI bias more than two years ago, Ms. O’Sullivan was often met with skepticism. Many executives and engineers espoused what they called “fairness through unawareness,” arguing that the best way to build equitable technology was to ignore issues such as race and gender.
More and more, companies were developing systems that learned tasks by analyzing vast amounts of data, such as photos, sounds, text and statistics. The belief was that if a system learned from as much data as possible, fairness would follow.
But as Ms. O’Sullivan saw after the tagging done in India, bias can creep into a system when designers choose the wrong data or sort it in the wrong way. Studies show that facial recognition services can be biased against women and people of color when they are trained on photo collections dominated by white men.
Designers can be blind to these problems. The workers in India – where same-sex relationships were still illegal at the time and where attitudes toward gay and lesbian people were very different from those in the United States – classified the photos as they saw fit.
Ms. O’Sullivan saw the flaws and pitfalls of artificial intelligence while working for Clarifai, the company that ran the tagging project. She said she had left the company after realizing it was building systems for the military that she believed could eventually be used to kill. Clarifai did not respond to a request for comment.
She now believes that after years of public complaints over bias in AI – not to mention the threat of regulation – attitudes are changing. In its new framework for curbing harmful bias, the Software Alliance warned against fairness through unawareness, saying the argument does not hold up.
“There is an acknowledgment that you need to turn the rocks over and see what’s underneath,” said Ms. O’Sullivan.
Still, there is pushback. She said a recent clash at Google, which saw the ouster of two ethics researchers, was indicative of the situation at many companies. Efforts to combat bias often clash with corporate culture and the unceasing push to build new technologies, get them out the door and make money.
It is also still difficult to gauge just how serious the problem is. “We have very little data needed to model the broader societal safety issues with these systems, including bias,” said Jack Clark, one of the authors of the AI Index, an effort to track AI technology and policy around the world. “Many of the things that matter to the average person – like fairness – are not yet being measured in a disciplined or large-scale way.”
Ms. O’Sullivan, who studied philosophy in college and is a member of the American Civil Liberties Union, is building Parity around a tool developed and licensed by Rumman Chowdhury, a well-known AI ethics researcher who spent years at the consultancy Accenture before becoming an executive at Twitter. Dr. Chowdhury founded an earlier version of Parity, built around the same tool.
While other startups, like Fiddler AI and Weights and Biases, offer tools for monitoring AI services and identifying potentially biased behavior, Parity’s technology aims to analyze the data, technologies and methods a company uses to build its services, then identify areas of risk and suggest changes.
The tool itself uses artificial intelligence technology that can be biased in its own right, showing the double-edged nature of AI – and the difficulty of Ms. O’Sullivan’s task.
Tools that can detect bias in AI are imperfect, just as AI is imperfect. But the power of such a tool, she said, is to pinpoint potential problems – to get people to take a closer look at the problem.
Ultimately, she explained, the goal is to create a wider dialogue among people with a broad range of views. The trouble comes when the problem is ignored – or when those discussing it all share the same point of view.
“You need diverse perspectives. But can you really get diverse perspectives at one company?” asked Ms. O’Sullivan. “That is a very important question that I am not sure I can answer.”