The new internet browser from ChatGPT, Atlas, is rumored to take over from favorites like Google Chrome, doing all the work for you, but its safety parameters are raising concerns.
Entrepreneurs and business leaders praise the tool, doing more than just serving as a chatbot. The browser is designed to support research and to plan and execute tasks across numerous workflows. Users no longer have to copy and paste or switch between tabs and tools, Entrepreneur reports.
CEO of ChatGPT’s parent company OpenAI, Sam Altman, said, “We think that AI represents a rare, once-a-decade opportunity to rethink what a browser can be about.” As the company seeks “to unlock the power of AI,” analysts and industry experts are side-eyeing the quest, highlighting the safety and privacy risks.
Since Atlas is being integrated with ChatGPT, the browser collects more user data than any ordinary browser does, with the ability to access your email or private documents. It can keep “browser memories” with details from the sites you’ve visited, in an effort to help OpenAI better understand you. But Anil Dash, tech entrepreneur and writer, feels the company has reached its data limits. “I think a big, big, big part of this is they are hoping to use the people who downloaded this browser as their agents to get access to even more data,” Dash said, according to NPR
.“I would not be surprised if there is more information going to them than coming to the user.”
Lena Cohen, technologist at the Electronic Frontier Foundation, a digital rights group, shares similar concerns of browsers acting as agents, saying, “it takes these risks to a whole new level.” “Once your data is on OpenAI’s servers, it’s hard to know and control what they do with it,” Cohen said.
Another risk flagged by Cohen is pieces of code hidden in websites called “prompt
injections.” She described them as “bad actors” that can “hide malicious instructions on a web page.” In layman’s terms, the AI agent visits that page and can be tricked into executing those instructions.Atlas can also interfere with everyday tasks like buying groceries. The AI agent could be vulnerable to prompt injection that pushes users to “buy this product, instead of that one” or simply says, “hand over your credit card information.” However, OpenAI calls it an unsolved problem and is working on training its models to ignore such harmful instructions.
Despite looking into it, Chirag Shah, a professor at the Information School at the University of Washington, says AI is growing into a phenomenon at extreme speed — but with minimal regulation — resulting in consequences. “We’re in this kind of game where it’s a typical mentality of move fast and break,” Shah said.
“Unfortunately, what’s breaking is not just the tool or the technology, but real people.”
RELATED CONTENT: Artificial Intelligence Is Changing The Way Students Pick College Majors