“You Are a War Profiteer”: Microsoft Employee Accuses AI Chief of Complicity in Genocide
At Microsoft’s 50th anniversary celebration, a shocking moment overshadowed all speeches and applause — replacing commemoration with confrontation.
Ibtihal Aboussad, a Microsoft employee, directly confronted Mustafa Suleyman, the company’s Executive Vice President of Artificial Intelligence, accusing him of complicity in the genocide unfolding in Gaza.
“You are a war profiteer,” Aboussad declared in front of dozens of employees and top executives, including former CEOs Bill Gates and Steve Ballmer, as well as current CEO Satya Nadella, who were seated on stage.
Microsoft AI Technology Powering Military Operations?
Just days before the confrontation, a published report revealed that Microsoft and OpenAI tools were being used by the Israeli military to select bombing targets in Gaza and Lebanon.
According to the report, software developed by these U.S. tech giants is being employed for data processing and aerial strike planning.
Since the start of the Israeli military campaign that followed October 7, 2023, more than 50,000 people have been killed in Gaza and over 4,000 in Lebanon.
These numbers cast a dark shadow over any company whose technology is used, directly or indirectly, for military operations.
Internal Protest Movement
Following Aboussad’s bold intervention, another employee, Vaniya Agrawal, also publicly denounced Microsoft’s leadership for what she called “complicity in mass killings.”
Their actions were not isolated — several other employees gathered outside the venue, holding signs and calling for accountability.
Their message was clear: Tech giants cannot and must not remain neutral when their technology is aiding war crimes.
Where Is the Ethical Line?
Criticism of Microsoft and OpenAI is growing within academic, human rights, and civil society communities.
A key question is emerging: Can the development of artificial intelligence — without strict oversight and ethical safeguards — lead to the normalization of digital warfare?
Despite public claims from companies like Microsoft that they are committed to “responsible AI,” the facts on the ground point to a dangerous gap between corporate messaging and real-world application.
Silence from the Top
To date, no public response has come from Microsoft’s leadership regarding these accusations and protests.
Mustafa Suleyman, co-founder of DeepMind and now a key figure in Microsoft’s AI development, has remained silent in the face of allegations that his work has enabled the targeting of civilian infrastructure in conflict zones.
A Wake-Up Call for the Tech Sector
This incident raises urgent questions for the entire tech industry: Who is responsible when the code we write becomes a weapon?
And can companies like Microsoft truly escape accountability if their tools are directly or indirectly used for killing?
In the shadow of increasingly sophisticated algorithms, ethics can no longer be a footnote — it must be the foundation.
And as it turns out, even within the walls of Big Tech, there are growing voices who refuse to remain silent any longer.