
Are AI Tools Like Microsoft’s Copilot and ChatGPT Compromising Your Workplace Security?

As generative AI tools like OpenAI’s ChatGPT and Microsoft’s Copilot continue to evolve, so do concerns about privacy and security.

The use of artificial intelligence (AI) in the workplace is growing at a rapid pace, and as generative AI tools like OpenAI’s ChatGPT and Microsoft’s Copilot continue to evolve, so do concerns about privacy and security.

More recently, a new Microsoft tool, Recall, has been described as a potential “privacy nightmare” because it takes screenshots of a user’s laptop every few seconds, Wired reports.

The news caught the attention of the UK regulator, the Information Commissioner’s Office, which has since asked Microsoft for more details about the product’s safety as the company prepares to deploy it on its Copilot+ PCs.

Another platform raising eyebrows is OpenAI’s ChatGPT, whose soon-to-launch macOS app also includes a screenshot feature. According to privacy experts, the tool could capture sensitive data, particularly in the workplace.

Cam Woollven, group head of AI at risk management firm GRC International Group, told Wired that most generative AI systems are “essentially big sponges,” posing a risk of inadvertently exposing sensitive data.

“They soak up huge amounts of information from the internet to train their language models,” said Woollven.

With AI companies “hungry for data to train their models,” Elementsuite CEO and founder Steve Elcock said those same businesses are “seemingly making it behaviorally attractive” to hand data over. Collecting data this way raises concerns that sensitive information could end up in someone else’s ecosystem.

What’s more, AI systems themselves are attractive targets for hackers.

“Theoretically, if an attacker managed to gain access to the large language model (LLM) that powers a company’s AI tools, they could siphon off sensitive data, plant false or misleading outputs, or use the AI to spread malware,” Woollven explained.

While such attacks threaten individual users, Elcock told Wired it won’t be long before “this technology could be used for monitoring employees.”

While generative AI does pose some risks, there are steps that businesses and individual employees can take to limit their exposure. First, experts say it is imperative to avoid feeding the platforms sensitive information.

“Do not put confidential information into a prompt for a publicly available tool such as ChatGPT or Google’s Gemini,” Lisa Avvocato, vice president of marketing and community at data firm Sama, warned.

Instead, she suggests being generic.

“Ask, ‘Write a proposal template for budget expenditure,’ not ‘Here is my budget, write a proposal for expenditure on a sensitive project,’” Avvocato told Wired. “Use AI as your first draft, then layer in the sensitive information you need to include.”
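Avvocato’s advice amounts to a pre-submission check: keep prompts generic and catch sensitive material before it reaches a public tool. As a minimal illustration of that workflow, the sketch below screens a draft prompt against a small keyword/pattern list before it would be sent. The pattern list and function names are hypothetical; a real organization would rely on a proper data loss prevention (DLP) tool rather than a few regexes.

```python
import re

# Hypothetical examples of patterns an organization might treat as
# sensitive -- a real deployment would use a dedicated DLP product.
SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",                     # US SSN-like digits
    r"\bconfidential\b",                          # explicit marking
    r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b",   # card-number-like digits
]

def flag_sensitive(prompt: str) -> list[str]:
    """Return the patterns that match the prompt, if any."""
    return [p for p in SENSITIVE_PATTERNS
            if re.search(p, prompt, flags=re.IGNORECASE)]

def safe_to_send(prompt: str) -> bool:
    """True only if no sensitive pattern matches the prompt."""
    return not flag_sensitive(prompt)
```

Under this sketch, a generic prompt like “Write a proposal template for budget expenditure” passes, while one marked “Confidential” or containing an SSN-like number is flagged for the user to rewrite before submission.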

At this time, the House of Representatives is among the organizations that have banned the use of generative AI platforms like Microsoft’s Copilot among its staff members. The move came after the Office of Cybersecurity deemed the tool a risk to users because of “the threat of leaking House data to non-House approved cloud services.”
