
Apple Bans Use of ChatGPT by Staff, Fearing Data Leaks


As concern about data leakage grows, AI tool makers are working to reassure concerned users that their data is safe.

Apple has banned its employees from using the ChatGPT artificial intelligence tool in an effort to prevent data breaches and leaks, The Wall Street Journal reported, citing an internal memo from the company.

Apple and other companies care about data security

The restriction makes Apple one of many companies that limit their workers’ use of such software. According to the memo, the iPhone maker is concerned that employees may inadvertently upload confidential files to the AI platform. The report said the restrictions are not limited to ChatGPT but extend to other AI tools that could lead to data leakage, including the Copilot coding assistant from GitHub, which is owned by Microsoft.

Apple has not issued a public statement in response to inquiries since the news broke. However, AI analysts argue that the move to restrict workers’ use of AI platforms stems from the fact that services such as ChatGPT themselves warn users against uploading sensitive data. These warnings make sense because submitted data can be used to further train and improve the underlying AI models.

Samsung has temporarily blocked ChatGPT

Citing privacy concerns about their data, several companies, including JPMorgan, Bank of America and Amazon, moved against the use of ChatGPT as early as January. Samsung is another. Last month, one of its engineers uploaded sensitive data to the tool while attempting to fix a faulty database, and Samsung management has since placed temporary restrictions on the use of ChatGPT. The tech giant has also started developing its own AI platform.

Apart from companies, governments are also concerned about the risk of data leakage. The Italian government briefly banned ChatGPT over concerns about the safety of personal data on the platform. It has since reversed its stance, allowing OpenAI, the company behind ChatGPT, to continue operating in the country after the firm met the Italian government’s demands.

Special versions of AI tools

OpenAI recently released a private mode for ChatGPT, described as an incognito mode. According to OpenAI, data uploaded in this mode is not permanently retained in its database, and conversations are kept no longer than necessary for checks to prevent abuse.

Microsoft has also announced that it is working on a special version of its AI software aimed at businesses, confirming that this version will not use client companies’ data for training.

IBM announced this month that it is also working on its own AI tool, Watsonx. The company says its privacy-centric AI will ensure that users don’t have to worry about data leakage.

“Customers can quickly train and deploy customized AI capabilities across their entire business, all while retaining complete control over their data,” said IBM CEO Arvind Krishna.

However, these special editions are expected to be very expensive.
