Apple has officially committed to implementing a series of artificial intelligence safeguards set forth by the Biden-Harris administration.
The administration announced the commitment recently. Adopting these guidelines positions Apple alongside other industry leaders in the AI space.
Apple is gearing up for the public launch of its AI initiative, Apple Intelligence, slated to arrive with iOS 18, iPadOS 18, and macOS Sequoia in September. The company is expected to roll out the new features gradually, having first previewed them in June.
Apple is among the participants in the Biden-Harris administration’s AI Safety Institute Consortium (AISIC), established in February. The company has now pledged to follow a set of voluntary safeguards, which include testing AI systems for security vulnerabilities, sharing testing outcomes with the U.S. government, notifying users when content is AI-generated, and creating standards to enhance the safety of AI systems.
These safeguards are voluntary rather than legally enforceable, however, so companies will not face penalties for non-compliance.
In contrast, the European Union's AI Act, which aims to protect citizens from high-risk AI applications, becomes enforceable on August 2, 2026, with some of its provisions taking effect earlier, on February 2, 2025.
Among the headline features of Apple Intelligence is its integration with OpenAI's ChatGPT. That integration has sparked controversy, with some industry figures raising security and privacy concerns. Most notably, Elon Musk has threatened to ban Apple devices from his companies over what he describes as security risks; his companies, for their part, are not listed among AISIC's members.