Artificial intelligence (AI) has become a transformative force across industries, driving innovation while sparking ethical debates. However, as technology outpaces regulatory measures, countries are racing to introduce frameworks that govern AI’s use. From the EU’s landmark AI Act to China’s strict oversight, 2025 may be the year AI governance gains a global foothold.
AI controversies have been making headlines. For example, plagiarism concerns around ChatGPT have fueled debates about copyright violations, while deepfakes have disrupted elections and damaged reputations worldwide. These incidents, coupled with the potential misuse of autonomous weapons and AI-driven surveillance, have convinced global leaders of the need to regulate before the damage escalates.
In 2019, San Francisco became the first U.S. city to ban facial recognition technology over concerns about privacy and misuse. The move inspired conversations in Europe and Asia, demonstrating that local governments can influence global perspectives, and major countries soon began creating frameworks of their own to govern the use of AI.
The EU AI Act, widely regarded as the first comprehensive legal framework for AI, categorizes AI systems into tiers based on risk: unacceptable-risk systems (such as social scoring), which are banned outright; high-risk systems (such as AI used in hiring or credit decisions), which face strict obligations; limited-risk systems, which carry transparency requirements; and minimal-risk systems, which are largely unregulated.
Compliance tools include mandatory “trustworthiness checks” and transparency requirements for high-risk models. For example, the financial sector must demonstrate compliance for fraud-detection AI tools. Penalties for non-compliance can reach up to €30 million or 6% of global revenue, mirroring GDPR.
In the U.S., the approach is fragmented, with states like California and New York leading independently. At the federal level, the NIST AI Risk Management Framework (2023) encourages companies to adopt responsible AI practices voluntarily. Autonomous vehicle regulations, for example, vary significantly between states: in Arizona, companies like Waymo operate with minimal restrictions, while California demands rigorous safety reports. Here, tools like NIST’s AI RMF Playbook provide practical steps for businesses to mitigate risks in their AI systems.
China, for its part, introduced regulations in 2022 requiring that all AI applications align with Chinese values and that AI-powered tools such as chatbots include mechanisms to prevent misinformation.
With multiple regulations emerging, multinational corporations face the challenge of navigating overlapping compliance regimes. For example, a healthcare AI system designed for Europe may not meet privacy standards in China or ethical expectations in the U.S. U.S. companies such as HireVue may face challenges adapting their algorithms to meet local anti-discrimination laws in Europe while navigating GDPR. However, this compliance burden has created opportunities for innovation in regulatory technology (RegTech): tools like TrustLayer can now assist businesses in aligning with multiple jurisdictions.
As AI becomes central to modern economies, cooperation among nations will be necessary to prevent fragmentation and create equal opportunities. While countries race ahead to implement regulations, global frameworks like the UNESCO Recommendation on the Ethics of Artificial Intelligence might pave the way for more unified guidelines.
However, the question remains: Will 2025 be the beginning of global AI harmonization or further siloed regulatory efforts?
Copyright © 2024 EVERYTHINGEMAKESS