
World leaders seek to govern artificial intelligence — before it governs us
Government entities around the world have been crafting their responses to the rise of artificial intelligence (AI) in an effort to provide guidance and regulation for this new area of technology. International leaders are weighing the technology's many world-changing implications, from concerns over human rights to the acceleration of sustainable development.
So, what do business leaders who are developing and leveraging AI need to know?
U.N. resolution overview
On March 21, 2024, the U.N. General Assembly unanimously adopted the first global resolution on AI, focused on the “safe, secure, and trustworthy” development and deployment of the technology. The non-binding resolution encourages countries to:
- Safeguard human rights: Ensure that the use of AI systems mitigates bias, preserves linguistic and cultural diversity, and protects people’s rights both online and offline
- Protect personal data: Develop appropriate data security mechanisms and policies, safeguarding privacy while adhering to international, national, and local transparency and reporting requirements
- Monitor for risks: Take effective measures to prevent and mitigate vulnerabilities during the development of AI systems, and create feedback mechanisms for end users to report misuse
The resolution, led by the United States, was co-sponsored by more than 120 other nations. As U.S. Ambassador Linda Thomas-Greenfield put it, the goal is “to govern artificial intelligence rather than let it govern us.”
Other international agreements
The U.N. resolution is the latest in various efforts around the globe to govern the use of AI.
Back in November 2023, a group of 18 countries signed an agreement to make AI “secure by design.” The signatories included the United States, Britain, Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.
The non-binding agreement reflects a security-first mindset, urging companies that design and use AI to protect the public from misuse. The 20-page document includes directives on monitoring AI systems for abuse, protecting data, and vetting software suppliers.
In March 2024, the European Parliament adopted the EU Artificial Intelligence Act, which is expected to come into force in May or June of this year. The act, which applies to both developers and deployers of AI systems, categorizes AI practices into different levels of risk that carry different regulations. For example, AI systems that exploit humans’ vulnerabilities are an “unacceptable risk,” whereas AI-enabled video games are considered “minimal risk.”
U.S. legislative actions
The Biden administration has also taken several actions to set parameters around AI. President Biden issued an executive order in October 2023 requiring developers of AI systems that could pose threats to national security or the public to share safety test results with the U.S. government. Then in March 2024, Biden ordered every federal agency to appoint a chief AI officer.
Internally, the U.S. Congress has restricted staffers’ use of AI tools due to concerns over confidential data being leaked and used to train the AI. Specifically, staffers are banned from using Microsoft Copilot. They’re also limited to using the paid version of ChatGPT, which offers more security than the free version.
What does this mean for businesses?
If your company doesn’t yet have policies in place around AI and data security, it should. A recent study revealed that 75% of knowledge workers use generative AI at work — half without their boss’s knowledge. Government leaders’ concerns around AI apply to businesses as well, which is likely why the report found that business leaders’ No. 1 concern for this year is cybersecurity and data privacy.
While the international resolutions are not yet law, businesses would be wise to consider these and other government actions as they develop and deploy AI, tracking them as indicators of current and future legislation. This is especially true for companies that operate globally, both to protect security and to avoid investing in technology that future regulation may shut down.