
Basic guidelines for dealing with AI risks and values

The rapid adoption of generative AI has led to much debate about its potential impact on society. Companies must balance the technology’s significant potential with the new risks it presents, particularly misuse.

It is important for organizations to put appropriate safeguards in place to harness the potential of the technology while addressing its challenges. This can be achieved by adopting a flexible governance framework that is tailored to the unique characteristics of AI while ensuring safe and responsible use of the technology.

But what exactly is AI governance and why should organizations take it seriously?

Although it may seem paradoxical, good governance actually enables better innovation. It provides the constraints and guardrails that let organizations explore the benefits and risks of AI while preserving the space to innovate and deliver results.

Scaling AI without governance is ineffective and dangerous. Society expects organizations to act transparently, responsibly and ethically. AI governance is therefore necessary to meet these societal demands while supporting progress by dealing with complexity, ambiguity and rapid technological development.

In addition to considering broader societal impacts and regulatory compliance, companies must also balance the competing demands of trust and control in the workplace with business value, corporate risks and the privacy of individual employees, customers and citizens.

For example, AI governance must provide bias-mitigation policies and validation requirements that take into account cultural differences and the regulations protecting the rights of individuals and groups. Bias can undermine the adoption of AI in organizations and in society as a whole.

Bias is a major problem for multinational companies because cultural norms and related regulations, such as consumer protection laws, vary from country to country.

Organizations should therefore find people who can address the organizational, societal, customer and employee aspects of AI. These people should represent different mindsets, backgrounds and roles, so that governance decisions and decision-making authority can be differentiated by drawing on their expertise and perspectives.

Decision-making rights create authority and responsibility for business, technology and ethical decisions. They should focus on the most critical AI content, which warrants the strictest governance.

Conversely, companies can grant greater autonomy in decision-making rights for non-critical AI content. However, employees who rely on AI assistance must be aware that they remain responsible for the results it produces.

Tackling AI complexity through governance

AI encompasses an ever-evolving technological landscape, and this complexity – together with the ambiguity inherent in the nature of AI – leads to confusion about its impact on reputation, business and society.

Governance should reflect the cross-functional and predictive nature of AI. A mistake many organizations make is to view AI governance as a standalone initiative. Rather, it should be an extension of the measures already in place within the organization.

By leveraging existing governance approaches and building on what already works, the task of managing the impact of AI becomes less daunting and easier to understand. While many existing approaches apply to AI, including data classification, standards and communication practices, AI has unique characteristics – trust, transparency and diversity – and how these are applied to people, data and techniques matters.

In many organizations, key AI-related decisions are made by an AI Council. It is typically chaired by the CIO or CDAO and is a working group with representatives from across the organization. This diverse group of stakeholders must work directly with other governance groups to advance AI governance efforts.

One of the Council's first tasks is to ensure compliance with the relevant rules. While data protection is the most important and visible concern, all applicable legal and industry-specific requirements must also be met.

AI governance starts with outcomes that support business objectives. The goal of an AI pilot or proof of concept should be to demonstrate value that the council defines and validates together with other business partners, not to optimize metrics such as accuracy or to compare technical tools.

For organizations that are more advanced in AI, this includes managing the entire AI lifecycle with the goal of making AI components reusable and accelerating the deployment and scaling of AI across the enterprise.

Svetlana Sicular is Vice President Analyst at Gartner.