Apple has banned its employees from using generative AI tools like ChatGPT for work purposes. The company’s decision stems from concerns about confidential data leakage.
Apple informed its employees about this move through an internal memo.
Sharing confidential data with ChatGPT poses a security risk
Generative AI tools like ChatGPT and Google Bard are all the rage these days. They can help automate trivial tasks and write simple code, boosting your productivity. However, these tools also collect user data and send it back to their developers for research and model improvement. That creates a security risk, especially when confidential data is involved.
ChatGPT lets you turn off chat history, which prevents your conversations from being used to train its AI model. However, you must enable this option manually.
The Wall Street Journal reports that Apple has also told its employees not to use Microsoft-owned GitHub’s Copilot, which can help automate code writing. The Cupertino giant is not the first company to ban such tools, and its concerns are legitimate: Samsung employees inadvertently leaked trade secrets to ChatGPT, leading the Korean giant to limit ChatGPT uploads to 1,024 bytes per prompt on its network.
Many companies have banned using ChatGPT and other similar tools
Amazon, JPMorgan, and others have also banned such generative AI tools, instead directing employees to use internal AI-based tools for work purposes. Apple is reportedly working on a similar internal tool.
Interestingly, Apple’s internal memo banning generative AI tools for work purposes came within hours of the official ChatGPT app’s release for iPhone. Less than 24 hours after its launch, the app skyrocketed to the number one position on the App Store.