Apple may face increased scrutiny from shareholders regarding its artificial intelligence practices following a recent filing with the U.S. Securities and Exchange Commission. This filing comes ahead of Apple’s upcoming Annual Shareholder Meeting scheduled for February 25 at 8 a.m. PT.
The National Legal and Policy Center has put forth a proposal requesting that Apple clarify how it acquires and utilizes external data for AI training purposes. This filing, available on the SEC’s website, emphasizes potential legal risks associated with data privacy and intellectual property rights, despite Apple’s commitment to privacy-centric policies.
What’s included in the proposal?
The NLPC’s proposal, listed as Proposal No. 4 in Apple’s 2025 proxy materials, requests a detailed report from Apple outlining its policies regarding AI data procurement and ethical considerations. The NLPC specifically seeks clarification on:
- Potential risks linked to improperly sourced data utilized in training AI models.
- Apple’s privacy protections during AI development.
- Steps taken to ensure compliance of AI-generated outputs with legal and ethical standards.
The NLPC argues that as a major player in the tech industry, Apple should uphold higher standards in AI ethics. The proposal notes that rivals like OpenAI, Google, and Meta are already facing lawsuits over claims of unauthorized data scraping for AI training.
Moreover, the NLPC does not hold back in its critique of Apple’s AI development strategy:
The Company depicts itself as privacy-oriented — and to significant effect — yet the lucrative potential of its vast userbase is too great to overlook, leading Apple to delegate unethical practices to others.
For instance, the Company has a long-established arrangement with Alphabet — a significant competitor — to make Google the default search engine on Apple devices. This deal is valued at $25 billion for Apple, accounting for 20% of its pretax profits. This partnership not only incites antitrust scrutiny but also grants Alphabet access to substantial data about Apple users. Alphabet has a history of various ethical and privacy breaches. Essentially, Apple is outsourcing its questionable activities to Alphabet while reaping major financial rewards.
This reflects the strategy Apple plans to employ with its AI initiatives. In addition to its collaboration with OpenAI, Apple has shown interest in partnering with Meta, another entity with a track record of privacy issues.
Implications for Apple
Thus far, Apple has adopted a more cautious approach to AI compared to some competitors, emphasizing on-device intelligence and privacy-focused machine learning rather than extensive cloud-based AI systems.
Notably, Apple has promoted its Private Cloud Compute model, which is designed so that personal data sent to the cloud for Apple Intelligence requests is never stored or made accessible to Apple.
However, Apple's privacy guarantees weaken when third-party integrations are involved. Currently, OpenAI's ChatGPT is the only known partner, but Apple has publicly expressed interest in a future integration with Google's Gemini.
Apple explicitly requires user consent before activating third-party AI integration with Apple Intelligence, not just at initial setup but on an ongoing basis.
Next steps
For now, the proposal is likely to be rejected: Apple's board typically recommends voting against shareholder proposals, and those recommendations usually carry the day. We will keep you updated on developments next month.
Nonetheless, the sharp accusation that Apple delegates "unethical practices" in AI development strikes a chord, especially as analysts praise Apple for avoiding heavy spending to catch up with AI rivals and instead hosting those rivals' services on its platforms.
What are your thoughts? Should Apple be more transparent about its AI training data? Share your views in the comments below.