
Meta changes its policy on a controversial Arabic term as the Israel-Gaza war continues to spark debates about Big Tech companies

As we reported a few months ago, Meta’s independent Oversight Board recommended that the company end its policy of banning the Arabic term “shaheed” when used to refer to individuals or organizations that Meta designates as dangerous.

Meta argued that the term, usually translated as “martyr,” amounted to praise in these cases. But “shaheed” does not always signal approval, and the issue was sensitive enough that the company asked the board for a policy advisory opinion (the Meta-funded board’s recommendations are non-binding outside of specific cases). Now that the company has received it, it has decided to change its policy.

Today, Meta said it would only ban content containing the word “Shaheed” if it violated other policies or contained “one or more of the three signals of violence,” namely: an image of weapons; language advocating the use or carrying of weapons; or a reference to a “specific event,” such as a terrorist attack.

While the Oversight Board still wants Meta to be more transparent about how it classifies people, organizations, and events as dangerous, it nevertheless welcomed the move, saying that the previous approach had resulted in millions of content removals, disproportionately affecting Muslim users.

“This change may not be easy, but it is the right thing to do and an important step. Meta is committed to taking a more nuanced approach that better protects freedom of expression while ensuring the most harmful material is removed,” board member Paolo Carozza said in a statement.

Others, however, are not so happy with Meta’s decision. Sacha Roytman, CEO of the Combat Antisemitism Movement, said in a statement: “Social media platforms have been used as recruitment centers for terrorist organizations in recent years, and social media companies should work to prevent this process rather than support it.”

Meta is not the only Big Tech company facing tough content restrictions related to the Israel-Gaza war. The war began when Hamas, the militant group that ruled Gaza for a generation, slaughtered 1,139 people in Israel on October 7 of last year. Since then, the war in Gaza has claimed over 37,000 lives.

Also today, Wired reported on internal unrest at YouTube over Google’s decision not to remove a Hebrew rap song that supports Israeli military action in the Gaza Strip, praises bombs and gunfire, and calls Hamas fighters rats.

According to the article, YouTube decided that the song is directed against Hamas, not Palestinians in general, and that speech attacking terrorist organizations does not violate its hate-speech policy. However, some employees reportedly point out that the song uses the biblical term “Amalek,” which refers to Israel’s historical enemies, and argue that it therefore carries a genocidal message aimed at Palestinians in general. YouTube did not deny Wired’s reporting on the internal controversy, but rejected allegations of biased content moderation.

“The claim that we apply our policies differently based on religion or ethnicity is simply false,” spokesman Jack Malon told Wired. “We have removed tens of thousands of videos since this conflict began. Some of these decisions are difficult and we do not make them lightly, but debate them to reach the right outcome.”

More news below.

David Meyer

Would you like to share your thoughts or suggestions about Data Sheet? Drop a line here.

WORTH WORRYING

Antitrust allegations against Nvidia. Reuters reports that Nvidia is facing antitrust proceedings in France. It is not entirely clear what Nvidia is accused of, but the company has already warned its shareholders that French (and EU and Chinese) regulators have been asking it questions about its graphics cards. The French competition authority also raided Nvidia’s offices last year, saying the company was suspected of “implementing anti-competitive practices in the graphics card sector.”

Manipulated and fake content. Yesterday saw a flurry of announcements and reports about how platforms will handle AI or manipulated content. Meta changed its “Made with AI” label to “AI Info” after photographers were upset about mislabeling photos that were merely edited using AI. TechCrunch also reports that YouTube now allows users to request the removal of videos that simulate their face or voice using AI or other means. And Google now wants advertisers to label election ads that “contain synthetic content depicting inauthentic or realistic-looking people or events.”

Hate speech in China. Chinese social networks are cracking down on extreme nationalist hate speech, the Guardian reports. Last week, a man attacked a Japanese mother and her child with a knife in the city of Suzhou, prompting many social media users to post xenophobic comments in response to the incident. Platforms like Douyin and Weibo had previously been reluctant to delete nationalist – and often anti-Japanese – comments.

ON OUR FEED

“I shot at it once. They say I hit it, so either I’m a good shot or it wasn’t that far away… I guess I’ll have to find a really good defense attorney.”

—Florida man Dennis Winn admits to shooting at a Walmart delivery drone, hitting its payload, USA Today reports. Winn says he initially tried to scare the drone away. He has been charged with shooting at an aircraft, criminal mischief, and discharging a firearm in public or on residential property.

IF YOU MISSED IT

As artificial intelligence increases electricity demand, tech companies are turning to nuclear power plants. By Chris Morris

Hollywood tycoon Ari Emanuel slams OpenAI’s Sam Altman after Elon Musk scared him about the future: ‘You’re the dog’ to an AI master, by Christiaan Hetzner

Ken Griffin puts the AI hype on pause and says he is not convinced the technology will replace jobs in the next three years. By Christiaan Hetzner

The AI startups of VCs’ dreams, from recruiting to Nvidia alternatives, by Allie Garfinkle

Insights into the “Looksmaxxing” economy: microfractures in the jawbone, expensive hairspray and millions to be made with male insecurities, by Alexandra Sternlicht

Bridgewater launches a $2 billion fund that will use machine learning for decision-making and will include models from OpenAI, Anthropic and Perplexity, by Bloomberg

BEFORE YOU GO

Chinese standards. The Chinese government has set up a technical committee to create standards for brain-computer interfaces and their data, The Register reports. The idea is to require Chinese researchers to adhere to the standards and thereby increase China’s influence in future international standardization processes – similar to what the country is trying to do in the telecommunications field. Relatedly, China’s Ministry of Industry today announced plans to develop over 50 new AI standards by 2026.

Subscribe to the Fortune Next to Lead newsletter to receive weekly strategies on how to make it to the boss’s office. Sign up for free.