Now is the time to think about AI governance in your organization

Guest Contributor
June 7, 2023

By Jeff Uhlich

Jeff Uhlich, BA, MSc HRM, is a retired human resources executive and consultant in Edmonton who writes on the intersection of talent management, technology and governance.

I’m not optimistic about the ability of governments anywhere to adequately regulate artificial intelligence. But I believe there’s much we can do within our own organizations to safely manage the use of this technology.

The Generative AI revolution has been decades in the making. However, it’s only been since ChatGPT’s public launch in November 2022 that tools like it have been available to the public in such an intuitive and accessible way.

That launch spurred the release of other competing and complementary Large Language Models. These powerful models have “hacked” language — the essence of human communication — and supercharged our ability to create, recombine, and innovate.

Luckily, we are in the early days of this innovation and we still have time to plan. Now is the time for organizations — local and national — to prepare for the transformative effects of these technologies.

I’m very confident that most organizations can study the issues arising from the adoption of Generative AI (amplification of the biases, errors, and limitations in the data these models are trained on, intellectual property infringement, and inadequate testing, to name a few) and develop strategies, tactics, and policies to properly govern its use.

I’ll draw an analogy from my career as a human resources executive and consultant, where I worked with many organizations to develop compensation philosophies: statements of the overarching principles of how, and on what basis, they would compensate employees.

This was an important philosophical stance to define, because it set the stage for key compensation decisions. We built these compensation philosophies with the executive team, with input from human resources and from across the organization, but each was ultimately a leadership team decision. And we placed great emphasis on buy-in from the board.

The compensation philosophy formed the foundation of our compensation strategy and practices. It established our position in the market for employees. Do we emphasize cash compensation or benefits? Do we pay below-market rates and use bonuses and incentives to attain a P60 or P75 level (i.e. total compensation at the 60th or 75th percentile, above the market median), or try to maintain a P50 pay level (the market median) against our competitors? Once these parameters were defined and accepted, we could benchmark our practices in line with our philosophy.

I’m suggesting today that organizations must take a similar approach to defining their AI philosophy. Articulating and clarifying the principles governing the use of Generative AI in your organization will allow you to establish ground rules and anticipate issues before they arise.

Generative AI is already disrupting the world of work. Knowledge worker and creative jobs, once thought to be somewhat insulated from automation, are now considered to be among the most likely to be disrupted by these tools.

Companies like Microsoft and Google are integrating ChatGPT and Bard, respectively, into their product suites. They will be everywhere and available to anyone. OpenAI, ChatGPT’s creator, is also experimenting with a tool called Code Interpreter that is already showing signs of incredible sophistication in completing complex data analysis tasks.

This technology is here, advancing rapidly, and already in use in organizations both private and public. Your employees, vendors, and contractors are using these tools, with or without your knowledge. It is essential for organizations to think through the implications of these technologies in areas like intellectual property, data privacy, productivity, process improvement, and job design.

Transparency and responsible governance needed

As the use of Generative AI becomes more widespread, transparency and responsible governance are paramount.

C-suite executives must be proactive in exploring these tools and understanding their impact on their organizations. The increasing capabilities of Generative AI demand an informed response, whether it’s adapting to a 30 percent increase in productivity, addressing the ethical implications of an IT employee working multiple jobs simultaneously, or dealing with an employee leaking confidential information into an external system.

The key question is: Will you embrace the use of this new technology in your organization or attempt to restrict it?

Will you encourage your employees to use Generative AI to augment their work? Will you require your employees to be transparent about their use of these tools and declare when and how they were used? And most importantly, what are your principles and your aspirations around impact on individual jobs? Do you see Generative AI as a means of gaining efficiency and reducing headcount, or as a way of augmenting work and reducing drudgery?

How will you tackle accountability, and even liability, for information produced by an employee that turns out to be incorrect? These tools aren’t perfect; as real-world examples have shown, they can “hallucinate” and invent content that has no actual basis in fact.

Will you allow only Generative AI tools that have access to up-to-date information and the ability to search the internet? How will you know?

Will you train a Large Language Model on your own organization’s data and implement a customized chatbot? If so, what are the implications for your call centre employees, marketing and communications team, and human resources and finance functions?

Using Generative AI as a smart assistant can help you prepare for its impact. Tools like ChatGPT, Bing Chat, and Claude+ can provide valuable information and even insights, but it’s important to remember their limitations and always verify output with other sources. In doing so, you’ll be better equipped to navigate the intersection of policy, innovation, and investment, as Generative AI continues to reshape our world.

Generative AI technologies are here to stay, and their use and influence are growing. It’s imperative for organizations to engage in thoughtful discussions about policy, opportunities, risks and implications. It’s time to work through these questions and articulate how you see these tools being implemented in your organization.

It’s time to invest in responsible stewardship. By doing so, you can ensure that Generative AI maximizes benefit and minimizes potential harm as we move into an increasingly automated future.
