
Fake Biden Robocall Demonstrates the Need for Artificial Intelligence Governance Regulation

The proliferation of artificial intelligence tools worldwide has generated concern among governments, organizations, and privacy advocates over the general lack of regulations or guidelines designed to protect against misusing or overusing this new technology.


The need for such protective measures came to the forefront just days before the New Hampshire Presidential Primary, when a potentially AI-generated robocall mimicking President Joe Biden was sent to voters telling them to stay away from the polls, according to published reports.


The voice mimicking Biden stated, “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.”


The New Hampshire attorney general’s office is investigating the incident.


An International Response


In response to the overall need for AI governance, governments have begun publishing frameworks that will start the process for protective measures and legislation that will guide the use of AI. These activities, at this time, are not coordinated internationally, but each nation is taking its own approach with new AI-related steps. 


AI is even causing concern at The Vatican. Pope Francis recently called for a binding international treaty to oversee the development and application of AI, sharing concerns about the potential for a “technological dictatorship.” In his World Day of Peace message, the Pope emphasized the importance of regulating AI to prevent harmful practices and promote best practices.


The International Association of Privacy Professionals has created a global tracker to follow these developments, but let’s dive into a few to see how nations are developing plans to govern AI.


United States


Like most nations, the US does not have a comprehensive AI regulation in place, but it has been busy pushing out frameworks and guidelines. In addition, Congress has passed legislation to preserve US leadership in AI research and development and control government use of AI. 


In October 2023, President Joe Biden signed the first-ever Executive Order designed to regulate and formulate the safe, secure, and trustworthy development and use of artificial intelligence within the United States.


In general, Trustwave’s leadership commended the Executive Order but raised several questions concerning the government’s ability to enforce the order and the impact it may have on AI’s development in the coming years. The 111-page order covers a myriad of AI-related topics designed to protect privacy, enhance law enforcement, ensure responsible and effective government use of AI, stand up for consumers, patients, and students, support workers, and promote innovation and competition.


Other efforts have been put forth as well.


European Union


In December 2023, the EU parliament reached an agreement on the final version of the European Union Artificial Intelligence Act, possibly the first-ever comprehensive legal framework on artificial intelligence.


The Act was first proposed in April 2021 and is expected to go into effect in 2026.


The EU AI Act is tiered, with obligations that depend on the risk level posed by the AI system. AI systems presenting only limited risk are subject to correspondingly light transparency obligations, such as informing users that the content they are engaging with is AI-generated. Examples of limited-risk AI usage include chatbots and deepfakes.


High-risk AI systems will be allowed but will come under tougher scrutiny and requirements, such as carrying a mandatory fundamental rights impact assessment. High-risk AIs are those used in sensitive systems, such as welfare, employment, education, and transport.


AI uses demonstrating unacceptable levels of risk would be prohibited. These include social scoring based on social behavior or personal characteristics, emotion recognition in the workplace, and biometric categorization to infer sensitive data, such as sexual orientation, according to the law firm Mayer Brown.
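The tiered structure described above can be illustrated with a small sketch. The categories and obligations below are simplified paraphrases of the examples in this article, not the Act's legal definitions:

```python
# Simplified illustration of the EU AI Act's risk tiers as described above.
# The example mappings are illustrative only, not legal classifications.
RISK_TIERS = {
    "unacceptable": {
        "examples": {"social scoring", "workplace emotion recognition",
                     "biometric categorization of sensitive traits"},
        "obligation": "prohibited",
    },
    "high": {
        "examples": {"welfare", "employment", "education", "transport"},
        "obligation": "mandatory fundamental rights impact assessment",
    },
    "limited": {
        "examples": {"chatbot", "deepfake"},
        "obligation": "inform users that content is AI-generated",
    },
}

def classify(use_case: str) -> tuple[str, str]:
    """Return (tier, obligation) for a use case, defaulting to minimal risk."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return tier, info["obligation"]
    return "minimal", "no specific obligations"

print(classify("chatbot"))     # -> ('limited', 'inform users that content is AI-generated')
print(classify("employment"))  # -> ('high', 'mandatory fundamental rights impact assessment')
```

The point of the tiering is visible in the lookup: the same governance framework yields anything from no obligations at all to an outright prohibition, depending solely on where the use case falls.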





United Kingdom


Like the US, the UK does not have a comprehensive AI regulation in place yet, but will rely on existing sectoral laws to impose guardrails on AI systems. The UK’s National AI Strategic Action Plan focuses on harnessing AI as an engine for economic growth but also takes into consideration protective measures.


On the economic front, the UK will invest and plan for the long-term needs of its AI ecosystem to continue its leadership as a science and AI superpower, support the transition to an AI-enabled economy, capture the benefits of innovation in the UK, and ensure AI benefits all sectors and regions. Finally, the plan aims to get the national and international governance of AI technologies right, so as to encourage innovation and investment while protecting the public and the UK’s fundamental values.


Under the plan, the UK will strive to be the most trustworthy jurisdiction for the development and use of AI, one that protects the public and the consumer while increasing confidence and investment in AI technologies in the UK.


To accomplish this goal, the UK has established an AI governance framework, developed practical governance tools and standards, and has published several papers laying out its methodology. 




Australia


Australia recently published its Safe and Responsible AI in Australia Consultation, which outlines the government’s official takeaways after receiving input from the public, academia, and businesses on safe and responsible AI. The interim paper focuses on governance mechanisms to ensure AI is developed and used safely and responsibly in Australia. These mechanisms can include regulations, standards, tools, frameworks, principles, and business practices to help alleviate public concerns.


The paper noted, “There is low public trust that AI systems are being designed, developed, deployed and used safely and responsibly. This acts as a handbrake on business adoption, and public acceptance. Surveys have shown that only one-third of Australians agree Australia has adequate guardrails to make the design, development, and deployment of AI safe.”


To address these and other concerns, the paper calls for testing (for example, testing products before and after release), transparency (for example, labeling AI systems in use or watermarking AI-generated content), and accountability (for example, requiring training for developers and deployers, and clearer obligations to make organizations accountable and liable for AI safety risks).
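The transparency measure mentioned above, labeling AI-generated content, can be sketched in miniature. The tag format here is a hypothetical illustration, not the scheme of any real labeling standard (production systems use far richer provenance formats such as cryptographically signed content credentials):

```python
import json

# Hypothetical machine-readable provenance label for AI-generated content.
# Illustrates the labeling idea only; not a real standard's format.
def label_ai_content(text: str, model_name: str) -> str:
    """Wrap content in a payload declaring it AI-generated."""
    return json.dumps({
        "content": text,
        "provenance": {"ai_generated": True, "model": model_name},
    })

def is_ai_generated(payload: str) -> bool:
    """Check the provenance tag on a labeled payload."""
    return json.loads(payload).get("provenance", {}).get("ai_generated", False)

labeled = label_ai_content("Sample generated paragraph.", "example-model")
print(is_ai_generated(labeled))  # -> True
```

A plain metadata tag like this is trivially strippable, which is exactly why the policy discussions above also mention watermarking: embedding the signal in the content itself rather than alongside it.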


The regulatory measures listed above are a small fraction of what is being done worldwide. AI holds a great deal of promise: the technology can make organizations much more efficient. At the same time, AI’s ability to gather data could run up against privacy laws, and there is the added danger of threat actors using AI to conduct even more powerful cyberattacks.

