Futureproofing AI

Marie Kelly

Artificial intelligence (AI) has been at the forefront of entertainment industry news over the last couple of years. Opinions and attitudes around AI range from excitement for its possibilities to fear that the sky is falling. The reality likely lies somewhere in the middle, but particularly for actors and other creators, the potential threats must be taken seriously, and reasonable guardrails must be erected.

Most AI technology, including in our industry, goes unnoticed. If you subscribe to a streaming service, for example, an AI program provides the personalized recommendations you might receive. If you use a voice assistant on your phone or in your home, it is powered by a form of AI technology called natural language processing (NLP) that helps it to understand and interpret what you are asking and then respond. Online shopping, search engines, even autocorrect — these are all AI-powered functions that we encounter daily.

When used by workers to enhance and enable their work, AI can be a powerful tool. There are AI applications helping in medical research and diagnostics, including cancer detection. AI tools are helping scientists combat climate change and aid wildlife conservation. The common denominator here is that the tool is being used to enhance the work of humans, not to replace them.

Newer forms of AI pose risks to actors and other creators because they can replace them. Some might argue that these tools, too, simply enhance the work of humans, but they often do so by replacing other humans. It is this use that has come to the forefront in the entertainment industry and that has become a central issue in industry labour negotiations. This is particularly true of generative AI (GAI).

GAI is a type of AI model that learns from training data, such as a studio’s library of content. It then can generate output in response to prompts it is given, creating new content that is similar — sometimes troublingly so — to the original work it was trained on. ChatGPT, for instance, is an application based on a form of GAI. Generative Pre-trained Transformer (GPT) is the type of NLP model that powers the ChatGPT application.

The underlying technologies are not new, with some of them dating back to the mid-20th century. But advances in computer processing power and data storage have prompted rapid development and growth of the technology. This has allowed the development of extremely powerful algorithms that generate more complex content, such as images, videos, and music, faster and more effectively than ever before.

Today’s GAI programs have developed to the point that they can generate voices, and even likenesses, that are nearly indistinguishable from a real person. This technology is growing too rapidly and is too open to abuse to sit idly by and wait for the worst-case scenario to occur. It is therefore critical we act now to ensure protections are put in place.

In many ways, the issues that have come to the forefront with GAI are not new — they are just cast in a new light. The march of technology has transformed our industry at many points in history, changing how creators are hired and paid, the nature of the work, how work is found and exploited, and so much more. What differs now is the speed with which technology is developing and advancing. Many have compared the recent advancements in AI, GAI in particular, to the invention of the printing press or the internet — paradigm-shifting revolutionary change, both to our screen industry and the world.

Few understand the impact of, and the risks posed by, GAI quite as well as creators and artists like our members and those of our sibling unions. Many of you walked the picket lines or rallied in solidarity with our U.S. union siblings as they fought to put fences around industry employers' use of AI before it could run roughshod over the entire industry. Their fight was just the start of a battle we are all entwined in.

Unfettered and unregulated, GAI can pose an existential threat to many categories of workers, both in our industry and beyond. It poses risks to democratic institutions and reputations because it can be used to manufacture disinformation and present it as truth. For ACTRA members and countless others whose likenesses, voices, and performances are central to their careers, misuse can destroy reputations, livelihoods, and even lives.

Still, no matter how strictly a union regulates GAI in its contracts, its jurisdiction can be limited.

ACTRA members’ images and voices are the cornerstones of their careers. We must, and we do, approach the regulation of GAI on multiple fronts. Beyond bargaining, we continue to engage in efforts to ensure AI-related legislation protects ACTRA members.

We work with our sibling unions domestically and in the U.S., as well as those around the world, to share knowledge, information and best practices. And we engage in efforts to educate our members and the broader community on the issues.

Our work as a union is just one part of this fight.

As performing artists, it is important that you and your representatives also understand the technology and risks and that you do your due diligence before signing any contract or clicking to accept the terms of use. This is particularly true when you come across AI-driven apps that invite you to upload content, no matter how exciting they might seem. As we have seen with social media, when you are getting the service at no or low cost, you are probably the product. Understanding the technology and the rights you are granting is critical to making an informed decision.

Our employers must recognize the need for strong, fair protections in our collectively bargained agreements, and the government must do its part to protect all Canadians from abusive uses of these technologies. Chief among these protections, particularly for actors, are the "3 Cs" that have been the cornerstone of recent negotiations and legislative advocacy, not just for ACTRA but also for our sibling unions:

Consent — performers should have the right to consent to, and be credited for, the use of their name, image, likeness, and voice in AI-generated content.

Compensation — performers must be compensated when AI is used to replicate them, particularly when used in a manner for which they would otherwise be paid for their work.

Control — performers must have the right to demand mechanisms be in place to protect the data they provide, ensuring their work cannot be copied, scraped or stolen.

These “3 Cs” are all critical to any bargained contract. They also should be enshrined in the law, with the rights of consent and control set forth as protections for all Canadians. No one should be made to speak words they did not speak or be depicted performing acts they did not perform simply because the technology exists to do so. It goes against the core rights and freedoms we hold dear as Canadians.

The reality is that GAI is not going anywhere, no matter how much some might like to see its advance halted. To the contrary, the number and capabilities of AI programs are growing exponentially, and experts predict this growth will not slow anytime soon. We must all remain vigilant as we fight for protections now and into the future. ACTRA stands ready for the fight.


Marie Kelly is ACTRA National Executive Director and Chief Negotiator. Marie’s mission is to improve the working lives of performers by setting high standards for collective bargaining.