Sacrificial bravery. Needed. In companies. In governments.
Timnit Gebru was born in Ethiopia in the early 1980s. Her father, an electrical engineer with a PhD, died when she was only five. Her mother raised her alone in Addis Ababa, in a quiet home filled with books and shaped by the belief that education could help a child survive almost anything.
Then war arrived.
When Gebru was a teenager, the Eritrean–Ethiopian War broke out. Because her family had Eritrean roots, they were forced to flee the only country she had ever known. At first she was denied a visa to the United States, so she spent a short time in Ireland before finally receiving political asylum in America. She later described the ordeal as miserable.
She settled in Massachusetts and began high school as a refugee. Her English was not yet fluent. Administrators tried to keep her out of advanced classes, even though she was one of the strongest students in the school. She pushed her way in anyway. That quiet refusal to accept a smaller future would shape everything that followed.
Years later, she earned her PhD at Stanford University, home to one of the most respected artificial intelligence programs in the world.
Then she changed the field.
In 2018, with researcher Joy Buolamwini, she co-authored a study called Gender Shades. They tested commercial facial recognition systems from major technology companies and found something disturbing. For lighter-skinned men, the systems made mistakes less than 1 percent of the time. For darker-skinned women, the error rate rose as high as 34.7 percent. Technology being sold to police departments and governments around the world was failing the people most likely to be harmed by misidentification.
The paper became a landmark. It forced the technology industry to confront bias in a way it could no longer easily avoid.
That same year, Google hired her as technical co-lead of its Ethical AI team, alongside researcher Margaret Mitchell. Publicly, the company held her up as evidence that it cared about responsible AI. Under her leadership, the team became one of the most respected and diverse AI research groups at any major technology company.
Then came the paper that changed everything.
In 2020, Gebru and several co-authors wrote a study titled On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? It warned about three serious risks in the large language models that were becoming central to Google’s business.
First, bias. These models are trained on massive amounts of internet text, which contains centuries of racism, sexism, and hate. The systems absorb those patterns and can send them back into the world at scale.
Second, environmental cost. One study found that training one large AI model could produce more than 626,000 pounds of carbon dioxide, roughly the same as the lifetime emissions of five average American cars. And the models were only growing larger.
Third, accountability. The datasets are so enormous that no team can fully inspect them. No one can promise exactly what these systems will do or who they may harm.
The paper quietly asked the industry to slow down.
Google did not want to slow down.
Only two months earlier, Gebru had been promoted and had received strong performance reviews. But after she submitted the paper, Google leadership demanded that she withdraw it or remove her name. She asked who had reviewed it. She asked for specific feedback. She asked for transparency. She also sent a frustrated internal email raising concerns about the company's handling of diversity and bias.
A few days later, on December 2, 2020, Google ended her employment. The company called it accepting her resignation, a resignation she says she never offered. In her own words, she had been fired.
The reaction was immediate and massive.
Within days, more than 2,700 Google employees signed a protest letter supporting her. More than 4,300 academics and researchers joined them. Nine members of the United States Congress sent a letter directly to Google CEO Sundar Pichai demanding answers.
But Gebru did something more powerful than protest.
She built.
Exactly one year later, on December 2, 2021, she launched DAIR, the Distributed Artificial Intelligence Research Institute. It began with 3.7 million dollars in funding from the Ford Foundation, the MacArthur Foundation, the Open Society Foundations, and the Rockefeller Foundation. Independent. No shareholders. No executives deciding which findings could be published and which had to disappear. Researchers from Africa, Europe, North America, and Australia now work together to document how AI harms the world’s most vulnerable people.
Recognition came quickly.
In 2021, Fortune named her one of the World’s 50 Greatest Leaders. That same year, Nature included her among the 10 people who helped shape science. In 2022, Time named her one of the 100 Most Influential People in the World. In 2023, the BBC named her one of the 100 most inspiring and influential women on the planet.
Think about the shape of that life.
She fled war. Was denied visas. Rebuilt from nothing. Earned a PhD at Stanford. Exposed discrimination inside technology used across the world. Joined Google to help repair AI from within. Got promoted. Told the truth about dangerous systems. Was fired for it. Then built her own institution, free from the companies that had tried to silence her.
Google hired her to find problems.
She found them.
Google fired her for it.
The problems remained.
The models keep growing. The biases keep deepening. The companies keep getting richer.
And Timnit Gebru keeps telling the truth.
Whether the world is ready to hear it or not.
