As workers embrace AI, employers are slow to issue guidance

News analysis
Feb 28, 2024 | 5 mins
Artificial Intelligence | Generative AI | Technology Industry

A survey of 10,000 office workers found that employees and senior execs both see a range of benefits from AI tools. But a lack of clear guidelines around their use is slowing experimentation and creating potential risks to corporate data.

Credit: Dana / Getty

Even as more and more office workers access AI-based tools in their jobs, employers have been slow to issue guidance on how to use the technology effectively and safely.

That’s according to a survey of around 10,000 desk workers commissioned by collaboration software vendor Slack; the survey focused on attitudes towards the use of AI and automation in the workplace.

The findings aren’t limited to generative AI (genAI), though OpenAI’s ChatGPT and DALL-E were included in examples of “AI tools” given to respondents, Slack said.

The survey noted a 24% quarter-over-quarter increase in AI use. That means one in four desk workers had used AI tools as of January, Slack said, up from one in five last September.

Employee perceptions were mixed: 42% are “excited” for AI to handle tasks on their behalf, 31% said they’re “neutral,” and 27% are “concerned.” But among those who have used AI, 80% say it has already improved their productivity.

At the same time, senior leaders are keen for their employees to adopt these AI tools: 81% of executives who took part in the survey feel “some urgency” to roll out the tools within their organizations.

Still, many businesses have yet to provide their workers with direction on how to use AI tools in their jobs. Some 43% of respondents said they have received no guidance from their bosses on how to use AI.

One implication, said Slack, is that employees are less likely to experiment with tools that could boost productivity. Workers at companies that have defined AI guidelines are almost six times more likely to have tried the tools, the survey indicated. Even staff at companies that limit the use of AI are more likely to try out the tools than those who work for companies with no guidelines at all.

Though business leaders see benefits from the wider use of AI, to realize them, “leaders will need to clearly communicate guidelines around AI usage and outline the different tools available and the limitations employees should be aware of,” said Christina Janzer, senior vice president of research and analytics at Slack. 

“To be clear, we’re not expecting employers to have all of the answers yet. No one does,” she said. “But it is clear we need to give employees a sense of what they can and cannot do at this early stage. We need leaders to set guardrails for employees on how best to use AI tools.”

Preliminary figures from an upcoming “Workplace Collaboration and Contact Center Compliance and Security” report by analyst firm Metrigy suggest a similar picture: 42% of businesses have a security strategy governing the use of AI in the workplace. “We expect this number to climb,” said Irwin Lazar, president and principal analyst at Metrigy, especially in light of well-publicized incidents such as the data loss at Samsung and, more recently, the chatbot hallucinations at Air Canada.

“I’m actually surprised it’s only 43%,” said Jack Gold, founder and principal analyst at J. Gold Associates, of the Slack findings. “In our interactions, we’ve seen it as high as 75% to 80% of employees not issued any guidance.”

Gold noted that there are potential risks for businesses — particularly with genAI tools and the potential for corporate data being exposed. “Basically, when using public tools, any input you give them is incorporated into the tools for use by others. So, it’s possible that sensitive data can be compromised.”

That danger is particularly important in heavily regulated industries. “Education is critical,” said Gold. “Let users know what kind of AI tools will compromise data and which won’t. Just like security training so as not to have breaches, it’s important to let people know how to safely use AI.”

It’s also important to let them know that not everything you get from genAI is correct, he said. Hallucinations, where the tool produces erroneous information, are not uncommon: “[I]f you base your work or results on those, you could have a major issue. Check and double-check!”

These were among the concerns shared by execs in the Slack study, with data security and privacy (44%) at the top of the list, followed by AI reliability and accuracy (36%).

“Executives should be working closely with their IT teams on deploying trusted AI tools — both to ensure data privacy and data security and to ensure that you can trust what your AI tools generate in return,” said Janzer.

Banning AI isn’t advisable or even realistic in many cases. “I think trying to just block access will be difficult, and the struggle will be to balance the benefits of tools with risk,” said Lazar.

The top benefits cited by execs were increased staff efficiency and productivity (38%), data-driven decision-making (35%), innovation of products and services (34%), and cost reductions (33%).

From an employee perspective, the key benefit is to automate repetitive and low-value tasks.

“We think these are the tasks AI is going to help handle in the near future,” said Janzer, with employees able to shift focus to “impactful work that contributes to a company’s bottom line.”

“Employees seem pretty excited about AI helping here,” she said. “The research shows the more time you spend on the ‘work of work,’ the more excited you are about the possibility of AI taking over these tasks.”