Forget BYOD; now it is BYOAI causing concern for companies

The Bring Your Own AI (BYOAI) trend among employees will grow in 2025, forecasts Deel, a global payroll and compliance platform. But it could lead to friction at work. IT companies are already waking up to the risks of employees using free and paid versions of large language models (LLMs) such as ChatGPT to write code.

Though AI tools reduce drudgery for employees and improve productivity, the surge in employee-driven adoption presents challenges for organisations, warns Jaspreet Bindra, Co-founder of AI & Beyond and author of The Tech Whisperer. “Companies like Samsung have taken precautionary measures, barring the use of generative AI on the shop floor to mitigate data privacy and security risks,” he points out.

The concerns stem from AI’s potential to expose sensitive information, lack of integration with enterprise systems, and compliance risks.

Shadow AI

Kashyap Kompella, CEO of RPA2AI Research and an AI advisor to firms, points out that many companies lack an ‘acceptable AI use policy’, leading employees to bring their own digital sidekicks to work. He calls this trend ‘Shadow AI’, and says it raises concerns similar to those seen with ‘Shadow IT’, where employees use unapproved devices and software, potentially compromising security and data privacy.

Kompella estimates that only 20-25 per cent of companies have implemented such policies.

Sriram Chakravarthy, CTO and Co-founder of Avaamo, highlights the cautious approach many large enterprises have taken towards AI adoption: they prefer to validate and approve AI tools before allowing widespread use. However, the availability of numerous freemium AI tools, particularly for personal productivity tasks such as content and code generation, has led to a degree of unauthorised sprawl.

Filters needed

Companies are responding to BYOAI in various ways, ranging from cautiously allowing AI tool usage to outright bans. Chakravarthy outlines several common approaches: establishing AI councils for selective tool approval, restricting access to unapproved tools, providing limited licenses for experimentation, and implementing enterprise-wide licenses for vetted and approved AI products.
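
The restriction Chakravarthy describes is often enforced at the network layer. Below is a minimal, illustrative sketch of such an allowlist check in Python; the domains, tool names and function are hypothetical assumptions, not any company's actual policy.

```python
# Illustrative only: a minimal allowlist check an IT team might run at a
# network proxy or browser-extension layer. The domains listed here are
# hypothetical examples, not any vendor's real policy.
from urllib.parse import urlparse

APPROVED_AI_DOMAINS = {
    "api.openai.com",       # e.g. covered by an enterprise licence
    "copilot.example.com",  # hypothetical internally vetted assistant
}

def is_request_allowed(url: str) -> bool:
    """Return True only if the request targets a vetted AI service."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_DOMAINS

# Requests to unapproved freemium tools are blocked (and can be logged).
print(is_request_allowed("https://api.openai.com/v1/chat/completions"))  # True
print(is_request_allowed("https://random-free-llm.example/v1/chat"))     # False
```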

Ashan Willy, CEO of observability platform company New Relic, says the company has started using AI/GenAI after deploying filters to safeguard its data. “We allow them (employees) to use LLMs. But the quality of coding is yet to evolve. Last month, we developed six lakh lines of code but only 10 per cent of it is found to be useful. But it is better than last year where we saw only 5 per cent usable code,” he said.
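
The “filters” Willy mentions are typically data-loss-prevention style guardrails that scrub prompts before they leave the company. Here is a minimal sketch of the idea, assuming simple regex-based redaction; the patterns and function names are illustrative, not New Relic's implementation.

```python
# A minimal sketch of an outbound prompt filter: redact obvious sensitive
# patterns before text is sent to an external LLM. Patterns shown are
# illustrative assumptions only.
import re

REDACTIONS = [
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[EMAIL]"),
    (re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b"), "[API_KEY]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[ID_NUMBER]"),
]

def sanitise_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves the firm."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(sanitise_prompt("Contact jane.doe@corp.com, key sk_a1b2c3d4e5f6g7h8i9"))
# -> "Contact [EMAIL], key [API_KEY]"
```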

Bindra recommends striking a balance by developing policies that allow controlled AI usage while safeguarding data. He suggests that IT departments should evaluate how to integrate employee-brought AI into existing workflows and ensure robust security measures. Open communication with employees about responsible AI use is also crucial to mitigate risks and maximise benefits.

According to Kompella, “Junior developers experience more significant productivity gains from AI coding assistance compared to senior developers. This is because junior developers already rely heavily on sites like Stack Overflow for coding help.”

He also observes that AI coding tools are evolving from simple ‘autofill’ features to more ‘agentic’ tools that can interact with developers and request access to resources. This shift, exemplified by tools like Devin, could further enhance productivity.
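
To see what ‘agentic’ means in practice, consider the toy loop below, in which the model can request resources (a repository read, say) before drafting an answer, rather than producing a one-shot completion. Everything here, from the tool names to the planner stub, is a hypothetical illustration, not how Devin or any specific product works.

```python
# Toy contrast between one-shot "autofill" completion and an "agentic" loop
# that can ask for resources mid-task. Hypothetical illustration only.

def plan_next_step(context: str):
    """Stub for the model deciding what it needs next (None = done)."""
    return "read_repo" if "read_repo ->" not in context else None

def draft_solution(context: str) -> str:
    """Stub for the model writing a final answer from gathered context."""
    return "proposed patch, informed by: " + context.splitlines()[-1]

def autofill(partial_code: str) -> str:
    """Old style: a single completion, no interaction with the developer."""
    return partial_code + "  # ...model-suggested continuation"

def agentic_session(task: str, tools: dict, max_steps: int = 5) -> str:
    """Agent loop: the model may request tools before it answers."""
    context = task
    for _ in range(max_steps):
        request = plan_next_step(context)
        if request is None:                  # nothing more needed
            break
        if request in tools:                 # grant approved access
            context += "\n" + tools[request]()
        else:                                # policy denies the request
            context += f"\n[denied: {request}]"
    return draft_solution(context)

# Hypothetical tool the agent is allowed to call
tools = {"read_repo": lambda: "read_repo -> scanned 42 files"}
print(agentic_session("fix the failing unit test", tools))
```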
