Microsoft to Use Consumer Data to Train AI, But You Can Opt Out

Microsoft is set to make a significant change in how it trains its AI models. The tech giant announced it will soon begin using data from consumer interactions with Copilot, Bing, and Microsoft Start (formerly MSN) to enhance its AI capabilities. This move aims to create more inclusive and relevant AI-powered services by tapping into a diverse range of real-world user experiences.

In a recent policy update, the company said it will begin using data collected from consumer interactions on Copilot, Bing, and Microsoft Start (MSN), including engagement with advertisements, to refine and improve the generative AI models that power Copilot.

The shift in data usage is framed by Microsoft as a move toward making its AI products more personalized and relevant for users. According to the company, real-world consumer interactions offer greater diversity and depth in training data, which can lead to more inclusive and accurate AI-generated content. For example, Microsoft says its models will learn from an aggregated set of user interactions, such as thumbs up/down on responses, to generate better, more context-aware answers in the future.

While this adjustment might raise privacy concerns, Microsoft says it will implement robust controls to protect consumer data. Starting in October, users will have the option to opt out of having their data used for AI training, and the new policy will not take effect until at least 15 days after users are notified of these controls. The company also says that identifiable information, such as names, phone numbers, and email addresses, will be removed from data before it is used for training.

This policy applies only to consumers signed into their Microsoft accounts and excludes data from minors. For European users, Microsoft will delay implementing the policy until it can ensure compliance with privacy regulations, including those in the European Economic Area (EEA).

It's worth noting that this change doesn't apply across the board. Microsoft has explicitly stated that commercial and public sector clients will not be affected by this new policy. The company continues to uphold its existing commitments to these clients, including not using their data to train foundation models without explicit permission.

As AI continues to evolve and integrate more deeply into our daily lives, Microsoft's decision highlights the delicate balance companies will need to strike between advancing AI capabilities and safeguarding user privacy. By offering transparency and user controls, the company is attempting to navigate these murky waters while pushing forward with its AI ambitions.
