AI, Offshore, and Local: The Three-Layer Operating Model That Changes How ANZ Businesses Decide

April 15, 2026 · 9 min read

TL;DR

Most ANZ firms are running AI and offshoring as parallel experiments, not as an operating model. Karbon's 2026 survey shows 98 percent of accounting firms now use AI. The Advancetrack Talent Index shows 47 percent use offshoring. Both investments are mainstream. Neither is being allocated against a framework. The three-layer operating model is the framework. Every activity inside every role belongs in one of three layers: AI-augmented, offshore-restructured, or locally-retained. The split changes by role, by sector, and sometimes by client. Eight role splits derived from O*NET task decomposition and tested against 90,000+ monthly activity-level data points show the pattern. The shift firms need to make in 2026 is from role-level decisions to activity-level allocation, with a deliberate plan to protect the local layer that carries the revenue.


The Two Numbers That Make the Case

According to Karbon's 2026 State of AI Report, 98 percent of accounting firms now use AI in some form. According to the Advancetrack Talent Index cited by CA ANZ, 47 percent use offshoring. Both investments are mainstream. Both are still growing. And across more than fifty pieces of ANZ thought leadership reviewed this month, not a single one explains how these two investments are supposed to work together.

That is the problem this blog is trying to solve.

We call it the three-layer operating model. AI-augmented. Offshore-restructured. Locally-retained. Every activity inside every role belongs in one of those three layers. The question for firm leaders is not "should we buy AI" or "should we use offshore staff." Both decisions are already made for most of you. The question is which activities belong in which layer, and why.

The answer depends on the role, the sector, the risk profile, and in some cases the specific client. Our view on the allocation is informed by more than 90,000 monthly activity-level data points drawn from live ANZ operations. The rest of this blog walks through the framework, the evidence, and what the three-layer read changes for how you plan 2026 and 2027.

Karbon surveyed accounting firms globally and found 98 percent AI adoption in 2026, up from roughly 70 percent two years earlier. The figure is global, but ANZ tracks close to the global average on AI adoption at the firm level, with CPA Australia's 2025 Business Technology Report showing 89 percent APAC adoption. Separately, the Advancetrack Talent Index, which CA ANZ references in member communications, shows 47 percent of ANZ firms now use some form of offshoring, up from roughly 30 percent in 2022.

Those are not niche strategies. Almost every firm is running at least one of the two, and a significant share are running both. What firms are not doing is running them as a system. They run them as parallel experiments.


Why Parallel Experiments Fail

Parallel experiments fail because the same activity can end up funded twice. A firm buys an AI reconciliation tool, keeps three local bookkeepers working on the same reconciliations, and signs a contract with an offshore provider who is also doing reconciliations during peak. The work gets done, three times, at three different unit costs, with three different QA standards.

The failure mode is not usually visible on the P&L. It shows up as margin erosion, unexplained overtime, inconsistent client experience, and the quiet sense that the firm is spending more on delivery every year without getting more capacity back. The activity-level data is where this shows up clearly.

A 2024 Brynjolfsson study of call-centre workers using AI co-pilots found a 14 to 15 percent productivity lift at the role level, but the lift was concentrated among novice workers and thin among experienced workers. A 2023 BCG study of consultants found a 40 percent performance lift on AI-suited tasks and a 19 percentage point accuracy drop on AI-unsuited tasks, when the tools were used indiscriminately. Both studies point to the same conclusion. Role-level AI deployment is blunt. Activity-level allocation is the unit of analysis that matters.


The Three Layers

Layer 1, AI-augmented. Activities that are routine, rule-based, high-volume, and tolerant of deterministic processing. Bank reconciliation, transaction coding, document data extraction, first-pass invoice matching, standardised financial report generation. The economics: labour cost at this layer is approaching zero per unit at scale, but setup, integration, QA and exception handling remain human.

Layer 2, Offshore-restructured. Execution work that requires human judgment but not local presence. Complex reconciliations, multi-system journal preparation, audit work paper preparation, compliance document drafting, data entry quality checks on AI output. The economics: labour cost at a structured Philippines or Vietnam partner lands in the $11 to $14 per hour fully loaded band for skilled accounting work, versus $42 to $55 per hour for equivalent local staff.

Layer 3, Locally-retained. Judgment, relationship, regulatory sign-off, context-specific advice, dispute resolution, client management on material matters. The economics: labour cost is unchanged, but the layer carries the revenue. Every dollar of advisory fee is earned here.
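The cost effect of a three-layer split can be sketched as a blended-rate calculation. The sketch below uses the illustrative rate bands quoted above for the offshore and local layers; the near-zero AI unit rate is an assumption standing in for per-unit processing cost plus the human QA and exception-handling overhead, and the 30 / 40 / 30 split is the accountants-and-auditors benchmark from the next section.

```python
# Illustrative blended hourly rate for a role under a three-layer split.
# Rates are midpoints of the bands quoted in this post, except AI_RATE,
# which is an assumed placeholder for near-zero unit cost plus QA overhead.
AI_RATE = 2.0         # assumption: AI layer, per hour-equivalent of work
OFFSHORE_RATE = 12.5  # midpoint of the $11-14/hr fully loaded band
LOCAL_RATE = 48.5     # midpoint of the $42-55/hr local band

def blended_rate(split):
    """split = (ai, offshore, local) percentages summing to 100."""
    ai, offshore, local = split
    assert ai + offshore + local == 100
    return (ai * AI_RATE + offshore * OFFSHORE_RATE + local * LOCAL_RATE) / 100

# Accountants and auditors (mid-firm) benchmark split: 30 / 40 / 30,
# versus the 48.5/hr all-local baseline.
print(round(blended_rate((30, 40, 30)), 2))  # → 20.15
```

The point of the sketch is not the exact number, which depends entirely on your own rates, but that the blended rate moves with the split, which is why getting the allocation right at the activity level matters more than negotiating any single rate.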


What Does the Activity Split Actually Look Like?

We extracted activity-level splits for eight common ANZ roles using O*NET's task-decomposition methodology, tested against our own operational data:

  • Drafters and architectural technicians: 24 / 41 / 35

  • Accountants and auditors (mid-firm): 30 / 40 / 30

  • Underwriters and risk assessors: 38 / 50 / 12

  • Payroll officers: 42 / 40 / 18

  • Commercial cleaning supervisors: 10 / 0 / 90

  • Food service supervisors: 18 / 25 / 57

  • Hair and beauty operators: 8 / 12 / 80

  • FMCG in-store merchandisers: 15 / 20 / 65

Each number is the percentage of the role's activities suited to that layer. Two observations matter. First, the split is never the same twice. Payroll and drafting look superficially similar on paper but allocate very differently because the regulatory context differs. Second, no role is 100 percent any single layer. Even commercial cleaning, which is 90 percent local, has a 10 percent AI layer that most operators are not capturing.


Why This Matters Now

The ANZ talent pipeline has structurally contracted. Professional Year enrolments in Australia fell from 7,122 in 2018 to approximately 340 in 2024, a 95 percent decline over six years. NZ has similar pressure on the CA pipeline. When the local talent pool contracts structurally, the margin for misallocation disappears. Every activity that is sitting in the wrong layer now costs the firm capacity it cannot recruit its way out of.

Gartner reports that roughly 50 percent of AI pilots are abandoned within 12 months. One of the consistent reasons in post-mortems is that firms deployed AI at the role level instead of the activity level, hit the limits of what the tool could actually do for the role, and concluded the tool was broken. The tool was not broken. The allocation was wrong.


What This Changes for 2026 Planning

Three shifts in how firms should be planning the next twelve months.

First, stop treating AI and offshore as alternative answers to the same question. They are answers to different questions. AI removes unit cost on rule-based activity. Offshore removes unit cost on judgment-based execution activity. Local capacity carries the revenue activity. All three coexist in every firm.

Second, move the unit of analysis from role to activity. Ask "which activities inside this role belong in which layer." Stop asking "should this role be automated or offshored." The second question is the wrong question in 2026.

Third, protect the local layer deliberately. In most firms the local layer is shrinking by default because it is the most expensive layer and the easiest to cut. But the local layer is where the revenue is earned. A firm that optimises AI and offshore without a deliberate plan for the local layer will find itself with lower costs and lower revenue in 18 months.


A Practical Next Step

If you want to run the three-layer read on your own firm, start with one role. Pick the role that sits at the centre of your delivery engine. List every activity that role performs in a normal week. Against each activity, write a single letter: A for AI-suitable, O for offshore-suitable, L for locally-retained. Tally the percentages. Compare to the benchmark splits above.

You will usually find the split you have today does not match the split the work requires. That gap is your 2026 allocation opportunity.
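The one-role exercise above can be run in a spreadsheet, or as a few lines of code. The sketch below is a minimal version; the activity names and their A/O/L tags are hypothetical examples for illustration, not a benchmark.

```python
# Minimal sketch of the one-role A/O/L tally described above.
# Activity names and tags are hypothetical examples, not a benchmark.
from collections import Counter

activities = {
    "bank reconciliation": "A",
    "transaction coding": "A",
    "audit workpaper preparation": "O",
    "multi-system journal preparation": "O",
    "compliance document drafting": "O",
    "client advice on material matters": "L",
    "regulatory sign-off": "L",
    "dispute resolution": "L",
}

counts = Counter(activities.values())
total = len(activities)
split = {layer: round(100 * counts[layer] / total) for layer in ("A", "O", "L")}
print(split)  # compare against the benchmark splits above
```

Whatever the tool, the output is the same artefact: a three-number split for the role you can hold up against the benchmarks and against where the work actually sits today.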


Frequently Asked Questions

What is the three-layer operating model?

The three-layer operating model is a decision framework that allocates every activity inside every role to one of three layers: AI-augmented (routine, rule-based work), offshore-restructured (judgment-based execution that does not require local presence), or locally-retained (judgment, relationship, regulatory sign-off, context-specific advice). It treats AI and offshoring as complementary investments inside a single operating model rather than as competing answers to the same question.

How is activity-level allocation different from role-level cost cutting?

Role-level cost cutting takes a job title and asks "how do I make this role cheaper." Activity-level allocation takes the same role, decomposes it into 15 to 29 distinct activities using methodologies like O*NET, and asks where each activity belongs. Role-level decisions deliver short-term savings that erode within two to three years. Activity-level allocation produces structural savings because the underlying cost architecture changes.

Does the three-layer model apply to non-accounting businesses?

Yes. The framework applies to any role with a meaningful mix of routine, execution, and judgment activities. The activity splits in this blog cover drafting, underwriting, payroll, commercial cleaning, food service, hair and beauty, and FMCG merchandising as well as accounting. The split changes by role, but the framework is the same.

Where does the 90,000+ data point figure come from?

It is the volume of activity-level measurements drawn from Outrun's live ANZ operations across our offshore delivery teams each month. It is a director-verified figure, used to benchmark activity costs and skill-level requirements against the splits derived from O*NET task decomposition.


Key Takeaways

  • 98 percent of accounting firms use AI and 47 percent use offshoring. Both are mainstream investments. Almost no firm is running them against a single operating model.

  • The three-layer operating model allocates every activity to one of three layers: AI-augmented, offshore-restructured, or locally-retained. The unit of analysis is the activity, not the role.

  • The activity split changes by role. Drafters split 24 / 41 / 35. Commercial cleaning splits 10 / 0 / 90. Underwriters split 38 / 50 / 12. No role is 100 percent any single layer.

  • Role-level AI deployment is blunt. The 2023 BCG study showed a 40 percent lift on AI-suited tasks alongside a 19 percentage point accuracy drop on AI-unsuited tasks when tools were used indiscriminately. Activity-level allocation is the unit of analysis that matters.

  • The local layer carries the revenue. AI and offshore strategies that do not include a deliberate plan for the local layer produce lower costs and lower revenue in 18 months.


Next Step

If you want to see the three-layer activity-level read for a specific role or sector inside your firm, send us a message. We will walk it through with you, drawing on the 90,000+ monthly activity-level data points from live ANZ operations that sit behind our allocation view.

Book a free Activity Analysis Session →

You can also explore what activity-level allocation might look like for your business using the Task Savings Calculator.

Try the Task Savings Calculator →


