New research from the United States reveals a workforce paradox that is playing out just as sharply in Australian organisations.

Here is the situation Australian CHROs are walking into this quarter.
More than half of your workforce is already using artificial intelligence - to research, to write, to analyse data, to do their jobs. Adoption has jumped sharply in twelve months. By almost every behavioural measure, the A.I. transformation of work is underway, and it is happening faster than most enterprise deployment roadmaps anticipated.
At the same time, a major new Quinnipiac University poll of nearly 1,400 Americans - whose workplace attitudes increasingly mirror those of Australian workers in an interconnected economy - found that 55 per cent of respondents think A.I. will do more harm than good in their daily lives. Eighty per cent say they are concerned. Seventy per cent believe A.I. will reduce job opportunities. And only 21 per cent say they trust what the technology produces most or almost all of the time.
These are not the sentiments of a workforce in revolt. They are the sentiments of a workforce that is adapting to a technology it neither fully trusts nor feels anyone is managing on its behalf. In Australia, where industrial relations obligations, psychosocial safety duties and the Fair Work framework create specific legal context around workforce change, that combination is particularly consequential.
For CHROs across Australian enterprise, that gap - between accelerating adoption and deepening unease - is the defining people challenge of 2026.
The Gen Z inversion - and what it means for Australian talent pipelines
Among the findings in the Quinnipiac data, none is more strategically important for Australian talent leaders than what it reveals about the youngest cohort entering the workforce.
Gen Z - born between 1997 and 2008, the graduates now moving through early career roles in Australian professional services, financial services, government and healthcare - is the most pessimistic of any age group about what A.I. means for employment. Eighty-one per cent of Gen Z respondents said they believe A.I. will lead to a decrease in job opportunities. That compares with 66 per cent among baby boomers.
This directly contradicts the assumption that has become comfortable inside many Australian organisations: that younger workers are natural A.I. advocates who will drive transformation from the ground up while more experienced colleagues need to be brought along. The data suggests the reverse. Gen Z's familiarity with artificial intelligence has produced not enthusiasm but clear-eyed anxiety - a recognition that the roles most exposed to A.I.-driven change are often the entry-level and early-career positions that younger workers currently occupy.
In an Australian labour market where skills shortages remain acute across engineering, financial services, health and technology, the retention implications are serious. If your organisation's A.I. change narrative is built on the assumption that Gen Z employees are enthusiastic partners in transformation, that assumption needs revisiting - and soon.
Adoption without trust is not transformation
The most operationally significant finding for Australian CHROs is not the headline figures on harm and job loss. It is this: 76 per cent of respondents say they can trust A.I.-generated information only some of the time, or hardly ever.
Cross that against the usage data. More than half are using A.I. to research topics. More than a quarter are using it for workplace projects and data analysis. Nearly a third of employed respondents are using A.I. to do their jobs right now.
The arithmetic is uncomfortable. A substantial portion of the workforce is regularly relying on a tool it fundamentally mistrusts to produce outputs that inform business decisions. In an Australian context, that creates specific obligations. The model Work Health and Safety laws - now including explicit requirements around psychosocial hazards - require employers to manage the risks associated with how work is designed and the tools workers are required to use. A workforce operating under sustained cognitive dissonance about the reliability of its core work tools is a psychosocial risk, not merely a productivity concern.
The question CHROs need to be asking is not how to get more adoption. Adoption is already rising without intervention. The question is how to build the critical A.I. literacy that turns anxious, low-confidence usage into genuinely productive and well-governed practice. Those two problems require very different investments.
The Fair Work dimension: the A.I. supervisor finding
Eighty per cent of respondents in the Quinnipiac poll said they would be unwilling to hold a job in which their direct supervisor was an A.I. program that assigned their tasks and set their schedule. That figure was consistent across every age group and income level. Even among Gen Z - the cohort most comfortable with A.I. tools - 82 per cent said they would be unwilling.
For Australian CHROs, this finding has specific industrial relations implications that their American counterparts do not face in the same form. Under the Fair Work Act, employees have rights to consultation on major workplace changes, including significant changes to the way work is performed. The deployment of A.I. tools that materially alter how tasks are assigned, performance is measured, or work is monitored may trigger consultation obligations - whether or not organisations have thought of them in those terms.
The parallel finding on medical A.I. makes the psychological dimension clear: when respondents were told that an A.I. tool had been proven more accurate than a human at reading medical scans, 81 per cent still said they would want a combination of A.I. and human input. Only three per cent said A.I. alone. The desire for human oversight persists even when A.I. superiority is stipulated.
For Australian organisations deploying A.I.-assisted performance management, rostering, capacity planning or productivity monitoring, the implication is direct: workers will accept A.I. as an instrument. They will not accept it as an authority. The organisations that blur that line without genuine consultation and transparent governance face not only employee relations risk but potential Fair Work exposure.
White collar, blue collar: the anxiety is uniform
A second assumption the data complicates is that A.I.-related anxiety is concentrated among lower-skilled workers, and that professional employees are broadly at ease with the technology.
The poll found that 73 per cent of blue-collar workers believe A.I. will reduce job opportunities. Among white-collar workers, the figure was 71 per cent - effectively identical. Concern is not stratified by job type, education or income in the way that most internal communications strategies assume.
In the Australian context, this matters across the breadth of CHRO responsibilities. The same anxiety profile exists among nurses, engineers, lawyers, accountants and construction workers. A change communications strategy calibrated to reassure frontline workers while assuming professional employees are comfortable is likely to miss the majority of your workforce simultaneously.
What does differ - consistent with Australian enterprise experience - is current adoption. White-collar workers are further along the deployment curve, which means they are also more immediately confronted with what A.I. integration actually means for their roles, day to day. Senior individual contributors and middle managers who are watching A.I. reshape the work around them are not standing confidently on the right side of a divide. They are watching carefully, and they are not reassured.
The transparency obligation
Seventy-six per cent of poll respondents said businesses are not doing enough to be transparent about how they use A.I. Seventy-four per cent said governments are not doing enough to regulate it. Nearly half said A.I. development is not being led by people who represent their interests.
In Australia, that trust deficit has a specific regulatory trajectory. The federal government's work on A.I. governance - including developing mandatory guardrails for high-risk A.I. applications - is moving in a direction that will make transparency about A.I. use in employment decisions a compliance requirement, not merely a cultural preference. APRA has already flagged expectations around model risk governance for regulated entities. The Office of the Australian Information Commissioner has signalled close attention to automated decision-making that affects individuals.
For CHROs, transparency about A.I. use in people processes is not an optional best practice. It is a foundational requirement for maintaining employee trust and - increasingly - regulatory compliance. That means being specific: which tools are in use, what decisions they inform, what human oversight exists, how employees can raise concerns, and what the organisation's commitments are around the use of A.I.-generated data in employment decisions.
Vague statements about responsible A.I. use are not sufficient in an environment where three-quarters of workers have already concluded that businesses are falling short on transparency.
The productivity connection
The business case for getting this right is not abstract. Australia's own productivity crisis - documented in the Productivity Commission's recent work and underscored by the former Treasury Secretary's warning this week that the country may be staring into stagflation - makes the A.I. productivity dividend more urgent, not less.
EY-Parthenon estimates A.I. could lift economy-wide labour productivity by 1.5 to 3 per cent over the next decade, with the largest gains concentrated in financial services, professional services, consulting and legal - precisely the sectors that dominate Australia's white-collar employment base. SAP's 2025 Future of Work research found workers are already saving an average of 75 minutes a day through A.I. tools.
But those gains will not materialise automatically. They will be delivered - or not - by workers who use A.I. well. And the Quinnipiac data describes a workforce that is not there yet. Adoption is ahead of trust, anxiety is ahead of capability building, and the gap between what organisations are communicating about A.I. and what workers are experiencing is widening.
In a stagflationary environment where employers cannot simply pass on cost pressures, productivity becomes the only lever. And closing that gap - building the trust, literacy, transparency and career clarity that turns anxious adoption into genuine productivity - is the most consequential people challenge facing Australian CHROs right now.
Build A.I. literacy, not just A.I. access. Adoption is rising without intervention. What lags is the critical judgment to use A.I. outputs well - knowing when to trust, when to verify and when to override. Invest in capability building that is specific to how A.I. is actually being used in your organisation, not generic digital fluency programs.
Redesign your early-career talent narrative. Your most A.I.-fluent employees are your most pessimistic about their futures. In a tight Australian labour market, career pathing and role evolution need to be central to your employee value proposition for graduate and early-career cohorts - not an afterthought addressed in annual reviews.
Audit your A.I.-in-people-processes footprint for Fair Work exposure. Identify every place where A.I. is informing decisions about your workforce - rostering, performance, hiring, development - and assess both the human oversight that exists and whether your consultation obligations have been met. Where the human is absent or nominal, you carry risk.
Set a transparency standard before regulators set it for you. Draft and communicate clear commitments on how A.I. is used in employment decisions, what data is used, and what recourse employees have. The Australian regulatory direction is clear. The organisations that lead on transparency will be better positioned - with their workforce and with regulators - than those that follow.
Copyright © 2009-2025 Dimond Pony Trading Pty Ltd. All Rights Reserved
Address: Level 4, 60 Moorabool St, Geelong VIC 3220 Email: admin@dimondpony.com