NextFin News - The Australian Tax Practitioners Board (TPB) has moved to formalize the boundaries of automated accounting, releasing a landmark draft information sheet on March 24, 2026, that strips away any ambiguity regarding who is to blame when an algorithm gets the numbers wrong. The exposure draft, designated TPB(I) D62/2026, serves as a stark reminder to the nation’s tax and BAS agents: while artificial intelligence may generate the advice, the human practitioner remains legally and professionally tethered to the outcome. This regulatory intervention comes at a moment when generative AI has moved from a novelty to a core operational tool for mid-tier and boutique firms alike, creating a friction point between technological efficiency and statutory accountability.
The guidance focuses heavily on the Code of Professional Conduct, specifically targeting the pillars of competence, confidentiality, and independence. TPB Chair Peter de Cure made the board’s position clear, noting that while the regulator supports innovation, the "black box" nature of large language models (LLMs) cannot serve as a shield for professional negligence. The board’s insistence that AI outputs be "assessed and supplemented by professional judgment" is not merely a suggestion; it is a reinforcement of existing law under the Tax Agent Services Act 2009. For practitioners, this means that "the AI told me so" is now officially a non-defense in the eyes of the regulator.
One of the most significant hurdles identified in the draft is the inherent bias and lack of psychological nuance in AI models. The TPB pointedly observed that AI cannot understand human psychology or the complex external factors that often dictate tax strategy. This creates a specific risk for practitioners who might rely on automated systems to interpret "gray areas" of tax law. The draft guidance suggests that the more complex the tax matter, the less appropriate it is to rely on unverified AI output. The result is a tiered reality for the profession: AI is a powerful tool for data entry and basic categorization, but it remains a liability for high-level advisory work.
Confidentiality remains the most volatile element of this technological shift. Under Code item 6, practitioners are prohibited from disclosing client information to third parties without explicit permission. The TPB’s new guidance forces a reckoning with how AI tools store and access data. Many popular LLMs use input data to further train their models, a process that could inadvertently leak sensitive client financial structures into a public or semi-public data pool. The board is now requiring practitioners to scrutinize the data-handling policies of their software providers with the same rigor they would apply to a human subcontractor.
The economic implications for the accounting sector are immediate. Firms that have aggressively cut staff in anticipation of AI-driven "autopilot" services may find themselves under-resourced to meet the TPB's requirement for manual review and professional oversight. The growing cost of compliance may offset the productivity gains promised by AI vendors. If every AI-generated tax return requires a senior partner's line-by-line verification to satisfy the "competent standard" requirement, the expected margin expansion from automation may prove elusive.
The TPB has opened the floor for public consultation until April 21, 2026, but the core philosophy of the draft is unlikely to shift. By placing the burden of "supervision and control" squarely on the registered agent, the regulator is attempting to prevent a race to the bottom in professional standards. The message to the market is unambiguous: the machine is a tool, not a peer. As the industry moves toward the 2026 tax season, the distance between a firm’s efficiency and its liability will be measured by the quality of the human oversight it maintains.

