As Canada’s major technology hubs continue to grow, the nation boasts more tech workers than ever.
With so much news, data, and events to cover across the country, Tech Talent Canada leans on experts in the field and professionally sourced data to help keep our audience properly informed and up-to-date.
Our last Expert Wisdom roundup navigated the state of hybrid work and the rise of upskilling.
This month, experts discuss various important aspects of artificial intelligence.
AI Ethics and Literacy for Teens
The younger generation is catching on to the potential benefits artificial intelligence can have on their academic success.
According to a 2023 study, over half of Canadian students are using generative AI to assist them in their schoolwork.
TELUS Wise recently launched its TELUS Wise Responsible Act online workshop in partnership with the Canadian Institute for Advanced Research.
Offered free of charge, the workshop aims to help Canadians of all ages better understand the AI landscape, including responsible use, ethics, and critical thinking skills.
“With rapidly increasing adoption of connected technologies, digital literacy has never been more important than it is today,” Nimmi Kanji, Director of Social Purpose Programs for TELUS, says in an interview with TechTalent.ca.
That is because, “as the world becomes increasingly digital, cases of digital misuse, fraud, and overall safety issues are becoming more common,” she says, citing identity theft, internet addiction, cyberbullying, and other concerns.
“As we increasingly adopt digital technologies, it’s important that we invest in ensuring digital literacy at the individual and organizational level so that Canadians are empowered to participate safely in the digital world,” asserts Kanji, who was instrumental in the development of the TELUS Wise AI program.
According to a recent survey by KPMG, more than half of Canadian students over 18 regularly use AI, and 81 per cent believe it will be a critical skill for the future.
Kanji says the new online workshop is “uniquely tailored” for teens, including elements of gamification throughout.
“These workshops and resources cover an exhaustive range of digital literacy topics including managing your online reputation, rising above cyberbullying, digital well-being, [and] internet and social media safety and privacy, among a range of other topics,” she noted.
Managing AI Risk
In Australia, government workers fed grant applications into generative AI tools to generate assessments, which critics said could infringe on applicants’ confidentiality and security. In the U.K., a journalist bypassed Lloyds Bank’s voice security features using AI.
These and many other cases are recorded in the AI Incident Database, which tracks examples of AI systems causing safety issues or other real-world problems.
“The fact is, this technology won’t have the transformational impact we want it to if companies don’t manage these risks,” believes Ryan Donnelly, who spent six years as a lawyer at international law firms specializing in data protection regulations before co-founding Enzai, a software platform that helps businesses ensure proper AI governance.
“That’s why it’s so important for businesses that see opportunities to use AI to also develop policies around AI, and processes and systems, to ensure company-wide compliance,” he says.
“Risk can arise at any aspect of the value chain,” Donnelly tells TechTalent.ca.
“That’s why companies need to understand the potential problems that can materialize from an AI model’s life cycle, so they can develop rules for every single stage,” he suggests.
It’s also important to understand the legal regulations and industry best practices that apply to AI, Donnelly adds.
In Canada, that’s the Artificial Intelligence and Data Act, which is expected to come into effect in 2025.
“If you’re an organization, it makes a ton of sense to hold yourself to those high standards—even if you aren’t legally required to do so,” he argues, “because it will help you build better, more reliable, more trustworthy systems, which ultimately become your competitive advantage.”
One good place to start according to Donnelly is the Canadian government’s voluntary Code of Conduct for Generative AI, which spells out six principles: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness.
And once a policy is in place, it’s important to make sure that it’s working.
“It’s important to set regular review cycles to make sure your model is working as intended and is offering the benefits you’re expecting,” Donnelly writes.
Most important, he emphasizes, is to recognize the pace of change in the space.
Amid rapid change, “the most important part of implementing an AI policy is keeping up-to-date on regulatory changes and industry best practices, and modifying your policy as needed,” Donnelly says.