Big tech companies are using AI to burn their trust reservoir – Marin Independent Journal

The OpenAI logo is displayed on a cellphone with an image on a computer screen generated by ChatGPT’s Dall-E text-to-image model in Boston on Dec. 8, 2023. European Union lawmakers were expected to give final approval to the 27-nation bloc’s artificial intelligence bill on Wednesday, putting the world’s leading set of rules for the rapidly evolving technology on track to take effect later this year. (AP Photo/Michael Dwyer, File)

We live in an interesting time. We’re seeing some of the big tech companies — though not all of them — move fast and break their own things.

Google has burned through some of its reservoir of trust in search with its new AI Overviews, which occasionally offered misinformation in response to search queries – for example, that Barack Obama was the first Muslim president of the United States, or that it is safe to stare at the sun for five to ten minutes a day. (After a public outcry, the company reduced the number of overviews displayed.)

Microsoft has burned through some of its remaining reservoir of trust in cybersecurity with its Recall feature, which takes screenshots of a computer every few seconds and compiles that information into a database for future searches. (After a flood of articles criticizing the feature as a “security disaster,” Microsoft first announced that Recall would not be enabled by default in Windows, and then removed the feature entirely from the initial release of the company’s Copilot Plus PCs.)

After publishing research claiming that 67% of remote workers surveyed “trust their colleagues more when they have video turned on during their Zoom calls,” Zoom’s CEO wants to fill video conferences with AI deepfakes (described by the journalist who interviewed him as “digital twins” that can attend Zoom meetings on your behalf and even make decisions for you).

Amazon is now full of AI-generated imitations of books – including an ersatz version of “Artificial Intelligence: A Guide for Thinking Humans.”

Meta, which has never inspired much confidence, is inserting AI-generated comments into conversations among Facebook group members (sometimes with strange results, such as the AI claiming to be a parent itself). And X, still trying not to be Twitter and already inundated with bots, announced an updated policy that will allow “consensually produced and distributed adult pornographic content,” including “AI-generated nudity or adult sexual content” (but not content that is “exploitative… or promotes objectification” – as though AI-generated content could never be either).

After ushering in the era of generative AI with the initial release of ChatGPT, OpenAI opened its GPT Store, a platform through which users can distribute software built on top of ChatGPT that adds specific features – what the company calls “custom versions of ChatGPT.” In its January announcement, the company said that users had already created more than 3 million such custom versions. The trustworthiness of these tools will now also affect the trust that users place in OpenAI.

Is generative AI a “personal productivity tool,” as some technology executives claim, or rather a tool that destroys trust in technology companies?

By rushing products to market, however, these companies aren’t just breaking their own things. By hyping their generative AI products beyond what they can deliver and pushing for adoption among people who don’t understand the limitations of these products, they are disrupting our access to accurate information, our privacy and security, our communications with other people, and our perceptions of all the various organizations (including government agencies and nonprofits) that are adopting and deploying flawed generative AI tools.