What credential actually validates that a professional can oversee AI? Not complete a course about it. Not earn a badge from it. Oversee it. When the output is wrong, the stakes are real, and the public is relying on their judgment.
At present, no such credential exists.
That gap belongs to the professional body. Not the firms. Not the regulators. The profession.
Every major technology transition has a moment when the profession mistakes adoption for competence. We have both lived through several of them. And the gap that mistake creates does not open only at the front of the pack.
Across regulated professions worldwide, many bodies are still certifying competencies that do not mention AI at all. Still running paper-based processes. Still having internal conversations about whether to digitise their applications. The AI competency question lands differently depending on where a profession sits on that spectrum. But it lands on all of them.
Our work experience comes from the Big Four global accounting networks (Deloitte, EY, KPMG, and PwC), and from professional education, accounting standards, and institutional governance across Asia Pacific. That experience has shaped the views and challenges we set out in this article, but we contend that they apply equally across professions and disciplines.
The bet the accounting firms have made
Deloitte’s Zora AI, EY.ai, KPMG Workbench, and PwC’s Agent OS represent serious, sustained investment in agentic AI systems designed not merely to assist practitioners but to work alongside them, handling tasks that once occupied entire cohorts of junior staff. Graduate intake has been cut by nearly a third in some markets. The role being hired for is no longer preparer or checker. It is, in the language circulating inside these firms, a “manager of agents.”
The firms have also invested heavily in training people for this world. One firm has reported training more than 300,000 employees in AI. Internal academies have been established. Learning collectives have been launched. The scale is genuine.
Embedded throughout, however, is a question that has not yet received a convincing answer.
The question that remains unanswered
If the practitioner’s primary function is now to supervise, interrogate, and exercise judgment over the output of AI systems, what credential assures the public, or the regulator, that they can do that?
For now, the answer appears to be: none.
The badges being issued by firms are training completion records. They attest that an employee finished a learning pathway on a proprietary platform. At least one firm has stated this explicitly, noting that its digital badges are not a measure or guarantee of quality or expertise. That is an honest disclaimer. Read carefully, it is also an acknowledgement that the credential makes no competency claim whatsoever.
This is not a criticism of the firms. Workforce upskilling at scale is a different undertaking from professional competency assurance. The firms are solving the problem they are trying to solve.
The problem no one is solving is the one that belongs to the profession.
That observation now carries regulatory weight. The EU AI Act’s AI literacy obligation, applicable since February 2025, binds any organisation deploying AI systems, and its reach is not confined to European entities. An Australian firm advising a European client. A Malaysian body whose members work across EU-regulated markets. A global firm with offices in Frankfurt or Amsterdam. All are in scope. The test is not incorporation. It is impact.
The Act defines a literacy floor. It does not define practice competence: how that competence is assessed, how it is certified, or who verifies it. That space is precisely where the professional bodies’ infrastructure has historically operated.
The regulation has drawn the map. The profession has not yet moved into the territory.
This absence is not surprising; professional bodies are designed to move cautiously. Their legitimacy rests on deliberation, consultation, and due process. But in fast-moving AI contexts, caution also has consequences. Regulatory, commercial, and technical realities are evolving faster than traditional professional cycles were designed to accommodate.
What “manager of agents” actually requires
The firms are right that the role is changing. But managing an agent is not a passive function.
A practitioner supervising an AI system must identify when output is plausible but wrong, a task complicated by what researchers have termed the jagged frontier of AI capability, where performance is uneven across tasks in ways that are not predictable in advance. They must determine when their own judgment should diverge from what the AI produced, and act on that divergence in ways defensible to a client, a regulator, or a court. They must understand the governance obligations that attach to AI-assisted work under frameworks now forming across multiple jurisdictions.
These are not knowledge claims. A multiple choice question at the end of a learning module cannot assess them.
They are practice claims: claims about what a practitioner does when the AI is wrong and the stakes are real.
Practice claims require practice evidence.
A senior AI leader at a major firm recently warned publicly that AI deployed without professional context and expertise produces “polished slop.” The diagnosis is exactly right. What remains unresolved is the mechanism by which the profession assures itself and assures the public that the expertise required to prevent that outcome exists in the practitioner supervising the system.
A warning about the risk is not a solution to it.
Where this lands
The large accounting firms employ practitioners. But the problem described here is not confined to accounting. It runs across every regulated profession where AI is entering practice: engineering, law, medicine, finance, education.
We are also conscious that professional bodies are not alone in navigating these challenges. Universities, credential frameworks, public sector training systems, and other education actors are similarly grappling with how to prepare practitioners for forms of judgment that are emerging more quickly than curricula, qualification structures, and assurance mechanisms typically evolve. Articles such as this are intended to reach across that divide: an effort to open dialogue across the education and professional ecosystem about how practice-level competence in AI-mediated work might be recognised, evidenced, and sustained.
The professional bodies governing regulated professions remain the caretakers of the professions themselves: the institutions charged with maintaining legitimacy in the eyes of the public, of clients, and of regulators. That legitimacy rests on one claim. A credential issued by a professional body means something real about the competence of the person who holds it.
When that claim is tested, as it has been repeatedly across professional history in every sector, it is the bodies that answer for it. Not the firms. The firms change their letterhead. The profession endures or it does not.
A firm-issued credential cannot carry the public assurance function of a professional qualification. The International Education Standards issued by IFAC already distinguish between knowledge, skills, and professional values. The higher-order requirements implied by the “manager of agents” role sit clearly in the skills and values domain.
The knowledge layer is well served. Courses, badges, and structured learning programs are proliferating. The practice layer, demonstrating sound professional judgment when working with AI systems under genuine uncertainty, remains structurally unoccupied.
More courses will launch. More badges will be issued. More completion numbers will be reported to boards. These initiatives do not answer whether the profession’s members are competent to perform their new function. None of them is designed to.
The window
The EU AI Act’s full enforcement regime, including provisions covering human oversight of AI systems in professional contexts, takes effect in August 2026. That is not a distant horizon. For bodies with members active in EU-regulated markets, it is an active compliance timeline.
The profession’s credibility in the regulatory conversations now underway will be stronger if it arrives with a position on competency assurance rather than a catalogue of training metrics. Arriving with the latter, at this point, is indistinguishable from having no position at all.
The firms have made their bet. The profession now needs to make its own.
What that bet looks like, how competence is defined, evidenced, and assured in practice, is not yet settled. It is, however, a conversation that can no longer be deferred.
About the Authors
The authors write in a personal capacity.
This blog post represents the opinions of the authors. The Groningen Declaration Network assumes no responsibility or liability for the content or accuracy of this post.
