Two months after the contentious exit of a well-known artificial intelligence researcher at Google, a second AI researcher at the company said she was fired after criticizing the way it treated employees who were working to root out bias and toxicity in its artificial intelligence systems.

Margaret Mitchell, known as Meg, one of the leaders of Google’s Ethical AI team, posted a tweet Friday afternoon saying, “I’m fired.”

Google confirmed that her employment had ended. “After reviewing the conduct of this manager, we confirmed that there were several violations of our code of conduct,” the company said in a statement.

The statement went on to say that Dr. Mitchell had violated the company’s security policies by moving confidential documents and private employee data outside the Google network. The company had previously said Dr. Mitchell tried to remove such files, the news site Axios reported last month.

Dr. Mitchell said Friday night that she would comment publicly soon.

Dr. Mitchell’s post on Twitter came less than two months after Timnit Gebru, the other leader of the Ethical AI team at Google, said she was fired from the company after criticizing its approach to minority hiring as well as bias in its AI systems. After Dr. Gebru’s departure, Dr. Mitchell sharply and publicly criticized Google’s handling of the matter.

More than a month ago, Dr. Mitchell said she had been locked out of her work accounts. On Wednesday, she tweeted that she remained locked out after trying to defend Dr. Gebru, who is Black.

“Exhausted from the endless efforts to save face for the upper crust in tech at the expense of minority careers,” she wrote.

Dr. Mitchell’s departure was another example of the mounting tension between Google’s top management and its work force, which is more outspoken than those at other large companies. The news also highlighted a growing conflict in the tech industry over bias in AI, an issue tied to the industry’s difficulty recruiting employees from underrepresented communities.

Today’s AI systems can carry human bias because they learn their skills by analyzing large amounts of digital data. And because the researchers and engineers who build these systems are often white men, many worry that the field is not paying this problem the attention it needs.

Google announced in a blog post yesterday that a company executive, Marian Croak, who is Black, would oversee a new group within the company dedicated to responsible AI.