Inquiry: Need for Safe and Productive Development and Use of Artificial Intelligence
Rose pursuant to notice of May 28, 2025: That she will call the attention of the Senate to the need for the safe and productive development and use of artificial intelligence in Canada.
Honourable senators, artificial intelligence, or AI, is one of the most transformative technologies in our history. From improving health care, to driving innovation in industries like education, culture and defence, to offering us new possibilities in scientific research, national security and many other spaces, AI has the potential to change the way we live, work and interact. AI has already begun to reshape many aspects of our society through automation and advanced problem solving. Wherever you look, you can’t help but notice the impact of AI. However, as AI is increasingly integrated into our lives, we must confront its potential risks. It is not a tool we can control with ease. If left unchecked, AI could cause significant harm to individuals, communities and society at large.
For example, Geoffrey Hinton, the “Godfather of AI,” has warned that we are entering an era when machines may surpass human intelligence. He describes AI as an accidental creation born from human failure and highlights serious concerns, such as fake news and bias in hiring practices and policing. These are just a few examples of the risks that we as policy-makers must consider.
AI is not inherently good or evil; it is a tool. Its impact on society will be shaped by how we choose to regulate, develop and use it. That is why it is critical that we act now. We are already behind with respect to fully understanding and governing this rapidly evolving technology.
Colleagues, this inquiry marks an essential step in ensuring that we as leaders are proactive in confronting the challenges that AI presents while also embracing its vast and valuable potential. We cannot afford to repeat the mistakes made with social media, where unchecked growth and a lack of safeguards led to unintended consequences for our democracy, culture and public health.
As senators, we have a duty to protect Canadians from these risks while also steering AI development toward outcomes that serve the public good. This is not only a national priority but a global responsibility, and Canada can and must have a strong voice in shaping the future of AI governance.
In my remarks this evening, I will begin by discussing the International AI Safety Report, then turn to recent developments in Canada’s AI sector and, finally, to the ways in which AI is being considered globally, looking more specifically at the outcomes of the Paris Artificial Intelligence Action Summit.
One of the key publications to guide us through this evolving landscape is the International AI Safety Report led by Yoshua Bengio, a leading global figure in AI research here in Canada. This report serves as a critical resource for understanding the global risks associated with AI, including cyber-threats, misinformation, labour market disruptions and the potential weaponization of AI.
The authors of the report noted, “Policymakers face the challenge of creating flexible regulatory environments that are robust to technological change over time.” They continued, saying, “Constructive scientific and public discussion will be essential for societies and policymakers to make the right choices.”
This sentiment underlines the importance of ongoing dialogue and flexible regulation to ensure that AI develops in a way that maximizes its benefit while minimizing its risks.
The report also warns of the danger of AI development becoming concentrated in a few countries, like the United States and China, which could lead to a global imbalance in AI leadership. It emphasizes the urgent need for international collaboration and comprehensive risk assessments to ensure that AI does not outpace our ability to regulate it.
As we consider Canada’s role in AI development, the International AI Safety Report offers us an essential framework that we can consider when thinking about how to govern AI. It encourages us to take a global perspective on AI safety while addressing domestic priorities.
Let me highlight some of the recent progress we have made here in Canada. In November of 2024, Canada made a significant move by launching the Canadian Artificial Intelligence Safety Institute. The institute will receive an initial budget of $50 million over five years as part of a $2.4-billion investment, announced in the 2024 federal budget, that includes the proposed artificial intelligence and data act and the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.
In April of 2024, former prime minister Justin Trudeau announced an investment of $2.4 billion to develop Canada’s AI sector. This includes the Canadian Sovereign AI Compute Strategy, which provides $700 million to build and expand data centres, $300 million to support AI computing costs for small- and medium-sized businesses and $1 billion to enhance high-performance computing for academic researchers.
More recently, in March of 2025, the President of the Treasury Board unveiled Canada’s first-ever artificial intelligence strategy for the federal public service. This strategy, to be updated every two years, aims to improve government operations and services by ensuring that AI is used safely, ethically and responsibly. It includes goals such as creating an AI centre of expertise, ensuring AI systems security, fostering talent development and promoting transparency and accountability.
Of course, Prime Minister Carney recently appointed the Honourable Evan Solomon as our first minister responsible for artificial intelligence and digital innovation. Yesterday, Minister Solomon gave his first public remarks at the Canada 2020 conference, where he outlined the pillars of Canada’s AI industrial strategy: first, scaling up AI; second, AI adoption throughout industries; third, increased trust in AI through regulation to protect data and privacy; and, finally, fourth, AI sovereignty for Canada’s defence and security. I look forward to learning about the specifics of the Carney government’s plan regarding AI as it unfolds.
Colleagues, with so much momentum in this space, we as senators have a responsibility to carefully examine proposed strategies and investments. We must ask whether they truly serve the interests of all Canadians and thoughtfully consider their long-term implications. This gives us the opportunity to ask the right questions and consider the path forward.
Understanding AI in Canada requires us to consider the broader global context. The world is evolving rapidly, and many countries are pushing ahead with AI initiatives. In February of 2025, France and India hosted the Paris Artificial Intelligence Action Summit, where leaders, experts and researchers discussed the future of AI. The summit focused on five key themes: public interest in AI, the future of work, innovation, trust and global governance. However, despite being listed among those themes, one critical area, AI governance, received only limited attention.
At the summit, U.S. Vice President Vance raised concerns that excessive regulation could stifle innovation, arguing that democratic nations might fall behind authoritarian countries that have fewer restrictions. This is a crucial debate that highlights the global divide and nuance in this area. Some advocate for strong regulation to safeguard society; others prioritize economic interests over governance. But many fall in the middle, wanting to benefit from the prosperity that could come with AI in a way that aligns with our democratic values, such as human rights, inclusivity and the rule of law.
Despite these differences, 62 countries, including Canada, signed the AI Action Summit Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet. This commitment to ensuring AI is developed responsibly reflects our shared responsibility to tackle the challenges that AI presents. But the United States and the United Kingdom chose not to sign, citing concerns over restrictive language and governance frameworks. This divergence underscores that even amongst countries that are normally allies, there may be differences when it comes to AI.
At the same time, in January of this year, U.S. President Trump announced a $500-billion investment in private sector AI infrastructure led by companies like OpenAI, Oracle and SoftBank. The European Commission also pledged over €200 billion to AI and digital innovation. French President Emmanuel Macron announced plans to invest €109 billion in AI, including new data centres.
These rapid global investments show just how urgent it is to address AI’s growing prominence and the need for coordinated global efforts to regulate it. Without shared domestic and global standards and cooperation, we risk allowing global markets to drive AI development unchecked, potentially at the expense of the public good. This underscores the importance of having critical discussions here in Canada about how we are going to continue to prioritize regulation, transparency and human rights within AI development while remaining a key player in the global AI race.
Given these global developments, this inquiry is a critical opportunity for us to assess the implications of AI on our future here in Canada. If we are to remain a leader in the global AI race, we must focus on regulation, transparency and human rights. If we are to maximize the benefits of AI in Canada, we must be on the front foot so that we can determine and create our own AI ecosystem — not one that’s flooded with products and technology that we can’t control, but one where we set standards and norms to ensure technologies are safe, accurate and high quality and come from countries with democratic values. AI’s growth must be steered with careful consideration, and we must not allow market forces or government priorities to drive its development unchecked. The reality is that the decisions we make today will shape the future of AI for generations.
In the International AI Safety Report, Yoshua Bengio, one of the leading figures in Canadian AI research, stated, “AI does not happen to us; choices made by people determine its future.” This quote encapsulates the urgency and the responsibility we have as policy-makers. Speaking of responsibility, Yoshua Bengio recently launched a new non-profit called LawZero, which aims to bring together world-class AI researchers to develop technical solutions for safe-by-design AI systems. Efforts like this highlight that we must make intentional, informed decisions that prioritize public safety and societal benefits.
While the future of AI is uncertain, one thing is clear: Its trajectory will be shaped by the choices we make. We have the power to steer this technology toward progress while mitigating its risks. It is important to remember that trust and safety will drive productivity ---