Big tech companies are using AI to burn their trust reservoir – Marin Independent Journal
![Big tech companies are using AI to burn their trust reservoir](https://www.marinij.com/wp-content/uploads/2024/06/Europe_AI_Rules_Explainer_52178.jpg?w=1024&h=682)
We live in an interesting time. We’re seeing some of the big tech companies — though not all of them — move quickly and break their own things.
Google has burned through some of its reservoir of trust in search with its new AI Overviews, which occasionally offered misinformation in response to search queries – for example, that Barack Obama was the first Muslim president of the United States, or that it is safe to stare at the sun for five to ten minutes a day. (After public outcry, the company reduced the number of overviews displayed.)
Microsoft has drained some of its remaining reservoir of trust in cybersecurity with its Recall feature, which takes screenshots of a computer every few seconds and compiles that information into a database for future searches. (After a flood of articles criticizing the feature as a “security disaster,” Microsoft first announced that Recall would not be enabled by default in Windows, and then removed the feature entirely from the initial release of the company’s Copilot Plus PCs.)
Zoom’s CEO, after publishing research results claiming that 67% of remote workers surveyed “trust their colleagues more when they have video turned on during their Zoom calls,” wants to fill video conferences with AI deepfakes (described by the journalist who interviewed him as “digital twins” that can attend Zoom meetings on your behalf and even make decisions for you).
Amazon is now full of AI-generated imitations of books – including “a replacement version of ‘Artificial Intelligence: A Guide for Thinking Humans.’”
Meta, which didn’t inspire much confidence in me to begin with, is inserting AI-generated comments into conversations among Facebook group members (sometimes making strange claims about AI parenthood). And X, still trying not to be Twitter and already inundated with bots, announced an updated policy that will allow “consensually produced and distributed adult pornographic content,” including “AI-generated nudity or adult sexual content” (but not content that is “exploitative… or promotes objectification” – because that would obviously never be the case with AI-generated content).
After ushering in the era of generative AI with the first release of ChatGPT, OpenAI subsequently opened a ChatGPT Store, a platform through which users can distribute software built on top of ChatGPT to add specific features and create what the company calls “custom versions of ChatGPT.” In its January announcement, the company said that users had already created more than 3 million such versions for the store. The trustworthiness of these tools will now also affect the trust that users place in OpenAI.
Is generative AI a “personal productivity tool,” as some technology executives claim, or rather a tool that destroys trust in technology companies?
By rushing products to market, however, these companies aren’t just disrupting themselves. By hyping their generative AI products beyond recognition and pushing for adoption by people who don’t understand the limitations of these products, they are disrupting our access to accurate information, our privacy and security, our communications with other people, and our perceptions of all the various organizations (including government agencies and nonprofits) that are adopting and deploying flawed generative AI tools.
Generative AI also has massive negative impacts on the environment. According to a recent article published by the World Economic Forum, the “computing power required to sustain the rise of AI doubles approximately every 100 days” – and 80% of the environmental impacts occur in the “inference,” or usage, phase, not in the initial training of the algorithms. The “inference” pool includes all the AI-generated overviews in search, the AI-generated comments in social media groups, the AI-generated fake books on Amazon, and the AI-generated “adult” content on X. This pool, unlike the trust reservoirs, grows daily.
Many of the companies developing AI tools have internal corporate efforts focused on ESG (environmental, social, and governance) standards and RAI (responsible AI). Despite these efforts, however, generative AI is quickly consuming energy, water, and rare elements – including trust.
Irina Raicu is the director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University.