
How to avoid getting into trouble when using AI at work

Love it or hate it, AI is increasingly becoming integral to the way we work.

So, like a lot of employees, you’ve started using it for your assignments.

That’s great – unless you’re not clear on what defines acceptable versus unacceptable uses of AI for your job and which specific tools your employer has approved or prohibited.

Here’s how to get a better sense of all that and minimize potential trouble, even if your employer hasn’t been great at spelling things out.

Generative AI can be impressive – for instance, helping you find data, making connections you’d otherwise miss, or testing work products for design flaws and mistakes.

At the same time, it’s also highly imperfect and subject to so-called “hallucinations” – defined by IBM as “a phenomenon where (it) perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”

In other words, it can produce hot garbage.

An AI tool may be excused by its promoters for those hallucinations, but you won’t be.

That’s why when it comes to your job, “never blindly rely on AI,” said Dave Walton, an employer-side attorney who co-chairs Fisher Phillips’ AI, Data, and Analytics Practice Group.

Instead, view it as an initial assist. “Generative AI is the best thing in the world to get you from zero to not bad in 60 seconds,” said Niloy Ray, a co-lead of the AI practice at the employer-side law firm Littler Mendelson.

But, he added, “’Not bad’ is rarely the standard to which you’re working.”

It’s up to you to verify anything you incorporate from AI into your projects. And to be transparent with your boss whenever you use it for that purpose.

It’s hard to say definitively how many employers have full-blown AI policies in place, though the numbers are likely on the rise.

Some non-scientific surveys suggest it is a smaller share than the high percentages of employees who say they’re already using AI.

“Self-directed AI use has grown to 65%, creating both innovation and risk as employees explore tools ahead of formal guidance,” according to the American Management Association, which surveyed 1,365 professionals in varying industries across 29 countries this year.

Meanwhile, a recent Littler survey of 349 professionals from US companies of varying sizes and industries found that 38% of companies said they created a specific policy for employee use of AI; another 13% said they’d developed guidelines; and 19% indicated they fit AI use into pre-existing workplace policies.

So, before doing anything else, check what AI policies and guidelines your employer has put in place.

If well done, those policies should offer a clear sense of the company’s guiding principles on usage, a set of dos and don’ts, and a list of AI tools you are permitted to use and under what conditions. They should also make clear what disciplinary actions could result if you misuse them. (Here’s a sample from Fisher Phillips to give you an idea.)

Some types of companies may forbid AI use outright (e.g., a defense contractor), while others (such as banking and finance firms) may urge extreme caution or simply have no appetite for it, Ray said.

Other employers may license an AI tool tailored for company use, or build their own internal AI tool, Walton said. In those cases, use of publicly available third-party tools may be discouraged, restricted or prohibited.

If your employer doesn’t have a dedicated AI policy, consult your company’s other policies that apply to all your work efforts, including with AI, Ray suggested.

Those might include policies intended to protect your employer’s confidential information, trade secrets or intellectual property – and relatedly, its cybersecurity and privacy policies.

As a general rule, if you’re using a publicly available third-party tool like ChatGPT – the same version people outside your company can use – never share confidential data or personally identifiable information, Walton said.

Turn off the function that allows the AI tool to train on your inputs and configure it so the tool does not retain your queries, he suggested.
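For readers who write their own scripts against public AI tools, one practical safeguard in the same spirit is to scrub obvious personally identifiable information from a prompt before it ever leaves your machine. A minimal sketch follows; the patterns and the `redact` function are illustrative assumptions, not from the article, and real redaction tools cover far more categories (names, addresses, account numbers):

```python
import re

# Illustrative patterns for two common kinds of PII.
# Real-world redaction needs a much broader set of rules.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
US_PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(prompt: str) -> str:
    """Replace emails and US-style phone numbers with placeholders
    before the prompt is sent to any third-party AI tool."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    return US_PHONE.sub("[PHONE]", prompt)

print(redact("Contact Jane at jane.doe@acme.com or 555-123-4567."))
# → Contact Jane at [EMAIL] or [PHONE].
```

This only reduces, rather than eliminates, exposure – which is why the experts quoted here also advise configuring the tool itself not to train on or retain your inputs.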

Ray likens the security of using a publicly available AI tool to public parking. There’s more chance someone could gain access to your car than if you parked in your own garage. “The ability to intercept data is much higher and you don’t know who has access,” he said.

More broadly, he added, recognize that while AI may offer new tools for you to do your job, it doesn’t change your obligations as an employee.

“At the end of the day, you want to do what a conscientious and ethical employee would do on any given day,” Ray noted.
