AI in Regulation 2026 Conference, Toronto

Regulators are past the point of asking whether artificial intelligence will affect their work. The questions now are how to govern it, where to start, and how to do so without losing public trust.

Those were the conversations at the AI in Regulation 2026 conference in Toronto, co‑hosted by MDR Strategy Group and Objective Corporation earlier this month.

Over two days, leaders from health, engineering, legal and other professional regulators explored where AI is already appearing in licensing, complaints and public services, and what good oversight looks like in practice. Objective was there as co‑host, panellist and technology partner – but most importantly, as a listener.


Who was in the room

The conference brought together CEOs and registrars, investigations and policy heads, technologists and board members from across Canada, the United States and internationally. Speakers included:

  • Fola Adeleke, co‑founder of the Global Center on AI Governance, on responsible AI and public accountability.
  • UNICEF’s Lisa Maina, on how children, teachers and parents are already using AI as both learning tool and “emotional companion”.
  • Leaders from professional regulators such as the College of Patent Agents and Trademark Agents, the Federation of State Medical Boards, and multiple health colleges, sharing real incidents and early experiments.

Day two opened with a clear statement of purpose: this was not a technology show, but a governance conversation. As participants were reminded, AI is already influencing who is seen, who is delayed and who is left out of regulatory processes, often through workflows and algorithms that are hard to see and even harder to explain.


From discussion to delivery

Across keynotes and workshops, several themes recurred.

Boards and councils face an AI literacy and visibility gap. Many do not yet have AI as a standing item on their agendas. Few have a clear map of where AI is already embedded in internal tools – from video platforms that generate transcripts by default to productivity suites that rewrite text and summarise documents.

Shadow use of AI is already a reality. Staff in many organisations are quietly using generative tools to draft emails, summarise stakeholder feedback or even help triage applications, often without guidance or formal sanction. At the same time, registrants in sectors like law, engineering and healthcare are experimenting with AI in practice, sometimes faster than regulators can publish guidance.

Participants also returned frequently to questions of explainability and accountability. Who is responsible when AI‑supported processes go wrong in a complaints investigation or a licensing decision? How can organisations document and audit AI‑mediated steps in a way that stands up to scrutiny? And what does “human in the loop” really mean when processes scale to thousands of files a year?


What Objective brought to the conversation

Objective’s role at the conference was twofold. As co‑host with MDR Strategy Group, Objective helped support an event designed around governance, not gadgets – standards, oversight models and practical frameworks rather than product showcases.

Objective also joined a panel on how AI is already showing up inside regulatory platforms and where technology partners can most usefully contribute.

Kirsty Mills, Objective’s Global VP, Regulatory Solutions, described the company’s role as a “specialist technology partner” that understands regulatory business models and translates fast‑moving AI capabilities into safe, relevant features. She stressed that vendors should not be setting policy or enforcement thresholds:

“Our job is to provide the secure infrastructure, safeguards and audit trails that allow regulators to apply their own mandate, not to define that mandate for them.”

That perspective shaped the examples Objective shared in Toronto.


Real examples from regulators

The panel discussed several real‑world use cases where AI is already helping regulators work at scale.

One was a road safety program in New South Wales that uses image recognition to detect illegal mobile phone use while driving, as part of the state’s “Towards Zero” initiative. AI scans large volumes of roadside imagery for likely offences; a second layer of review, including human oversight, helps ensure accuracy and reduce disruption to compliant drivers. The result is a more targeted enforcement effort in an area where the underlying rules and risk appetite are clear and where decisions are largely binary.

Another example was the AI assistant embedded in Objective’s regulatory case management platform. Instead of staff clicking through every tab in a complex complaint file, an authorised user can ask the assistant to summarise key issues, or to answer specific questions such as whether the complainant has been contacted or what actions have been taken.

The assistant works only with the regulator’s own data, in a secure, dedicated environment. It is designed to save time on low‑value navigation and summarisation, so staff can focus on analysis, judgment and engagement.

A related demonstration showed an “AI public form assistant” that helps applicants understand what information they will need before they begin an online application, and that can answer natural language questions along the way. Rather than forcing people to hunt through static FAQs, the assistant can draw on an approved knowledge base and the form itself to respond in context. Early evidence suggests people are significantly more likely to ask detailed or “embarrassing” questions of an AI assistant than of a human on a help desk, which can support both compliance and education.


Guardrails first

Throughout the panel, Objective emphasised that meaningful AI support requires strong foundations.

Kirsty highlighted data quality as one of the biggest practical challenges regulators face. Many organisations have spent the last decade digitising, but still struggle with fragmented or inconsistent data. If those issues are not addressed, AI will simply amplify the noise. One response has been the development of Objective Intelligence – a layer that sits across Objective’s solutions to provide a dedicated tenancy per customer, with options to use local models or major cloud models depending on workload and data sovereignty needs.

Auditability and access control are another focus. Objective’s regulatory platforms are designed so that every action – including every AI prompt and response – is logged for evidentiary purposes, and so that AI features respect existing permissions. An assistant cannot surface information a user would not otherwise be allowed to see.

Questions from the floor probed how to manage hallucinations and bias. The panel discussed several mitigations that Objective is already using or recommending:

  • Restricting AI assistants to a contained corpus such as a regulator’s own legislation, policies, FAQs and case files, rather than the open internet, to reduce hallucinations.
  • Returning links or citations back to source material so staff can validate outputs before relying on them.
  • Working alongside regulators on the content used to train assistants, and reviewing that content together to reduce the risk of misinterpretation and to surface embedded bias.

Objective also outlined an implementation approach that begins with internal, staff‑only use cases before any public‑facing deployment. This staged model allows organisations to learn how the tools behave in their own context, and to refine governance and assurance arrangements over time.


What we are taking forward

The conversations in Toronto reinforced several expectations that are already shaping Objective’s roadmap.

First, AI literacy and visibility are becoming core governance concerns. Executive teams and boards want to know where AI is in their systems, what it is doing, and how it can be audited. Features that provide clear logs, permissions‑aware access and explainable outputs are no longer “nice to have”.

Second, many regulators are looking for low‑risk, high‑value starting points: internal summarisation, smarter triage, better public guidance and education. There was little appetite in the room for black‑box systems making unreviewed decisions on fitness to practise or discipline. Human‑in‑the‑loop and human‑on‑the‑loop models, designed around a regulator’s existing risk frameworks, will be central.

Third, there is a clear expectation that technology partners will support, not substitute for, regulatory governance. That means aligning AI features with principles such as transparency, proportionality and fairness, and making it easy to embed those features inside existing regulatory processes rather than asking organisations to re‑engineer around a tool.

These themes align closely with Objective’s broader work developing technology for government and regulators – from configurable, risk‑based workflows to evidentiary‑grade information management.


Continue the conversation

MDR Strategy Group’s leadership in convening the AI in Regulation conference, and the depth of questions from regulators in the room, underscored how quickly this agenda is moving.

To continue the discussion, The Modern Regulator – an Objective‑sponsored regulatory news magazine focused on regulatory practice in Australia, New Zealand, the UK, Canada, and beyond – has produced a deeper governance feature drawing together themes from the conference and related interviews with regulatory leaders. You can find it here.

Objective will also be publishing additional resources on AI‑enabled regulatory operations, and working with partners and customers to design features that keep governance at the centre while taking practical steps from awareness to action.

Organisations interested in exploring how AI‑supported capabilities such as case summarisation, risk‑based triage or public form assistance could work in their own context are invited to contact Objective or visit the RegWorks page for more detail.