
Trustwave Backs New CISA, NCSC Artificial Intelligence Development Guidelines

The U.S. Department of Homeland Security's (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom's National Cyber Security Centre (NCSC) today jointly released Guidelines for Secure AI System Development in partnership with 21 additional international partners. 


“As more organizations begin adopting AI-based software to run their day-to-day business operations it will become imperative that these solutions are designed from the ground up as secure,” said Bill Rucker, President of Trustwave Government Solutions. “Threat actors will attempt to exploit any security vulnerabilities so we support these guidelines as they will help give developers a guideline on how to create secure products.”


The increasing use of Artificial Intelligence (AI) spurred this international effort to create guidelines to help those creating systems that use AI make informed cybersecurity decisions at every stage of the development process.


"As nations and organizations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global dedication to fostering transparency, accountability, and secure practices," said CISA Director Jen Easterly. "The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology revolution."


The document does not encompass all AI but refers specifically to machine learning (ML) applications, with all types of ML being in the guide's scope. 


For the purposes of the guide, the agencies defined ML applications as those that:

  • Involve software components (models) that allow computers to recognize and bring context to patterns in data without the rules having to be explicitly programmed by a human. 
  • Generate predictions, recommendations, or decisions based on statistical reasoning.


“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” said NCSC CEO Lindy Cameron. “These Guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”


Where AI Stands Right Now


The elite Trustwave SpiderLabs team has been at the forefront of tracking the development of legitimate AIs such as ChatGPT and Google's Bard, along with malicious versions such as WormGPT and FraudGPT.


The general takeaway is that the world has only started to see the tip of what these still-young technologies will accomplish, both good and bad, and even that distinction is sometimes blurry. 


For example, while ChatGPT can't directly access the Internet, it does retain any private, confidential, or proprietary information entered into it. Every piece of data input into ChatGPT helps create a feedback loop that trains the software, which means that any private information used in the platform is no longer private.


Additionally, even "good" AIs like ChatGPT are proving to be a helpful tool for threat actors who use them to create hard-to-spot phishing emails.


CISA, the NCSC, and their partners believe a structure is needed for AI developers to keep AIs as secure and useful as possible. 



Register for a webinar replay containing a complete rundown of Trustwave’s research on Artificial Intelligence.


Tips for AI System Providers


The document is aimed primarily at providers of AI systems, whether based on models hosted by an organization or using external application programming interfaces (APIs). However, all stakeholders, including data scientists, developers, managers, decision-makers, and risk owners, should be aware of these recommendations.  


The government agencies focus on four key areas within the AI system development cycle that they believe must be followed to boost security.


  1. Secure Design 
  2. Secure Development
  3. Secure Deployment 
  4. Secure Operation and Maintenance 


Secure Design


The key aspect is prioritizing security awareness among the development staff. This includes providing users with guidance on the unique security risks facing AI systems, which can be included in standard InfoSec training, and training developers in secure coding techniques and secure and responsible AI practices.


Developers must follow a risk management process that includes understanding the potential impacts on the system, users, organizations, and society if an AI component is compromised or behaves unexpectedly.


Finally, as with all software, systems must be designed from the beginning for security, functionality, and performance.


Secure Development


In some ways, securing AI is no different from general cybersecurity practice: the same principles apply regarding third-party suppliers, understanding where your assets are stored and who has access to them, and maintaining proper documentation of the processes being conducted.


If an organization goes to an outside source, it must assess and monitor the security of its AI supply chain across a system's life cycle and require suppliers to adhere to the same standards the organization applies to other software. 

The document recommends processes and controls be in place to manage what data AI systems can access and to manage AI-generated content according to its sensitivity.
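As a sketch of what such a control might look like in practice, the snippet below maps hypothetical AI components to a sensitivity ceiling and allows data access only up to that tier. The component names, tier labels, and policy table are all illustrative assumptions, not anything prescribed by the guidelines.

```python
# Hypothetical sensitivity tiers and an allowlist mapping AI components to the
# highest data class they may read; all names here are illustrative.
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2}

ACCESS_POLICY = {
    "search-assistant": "public",
    "hr-summarizer": "internal",
}

def can_access(component: str, data_class: str) -> bool:
    """Allow access only up to the component's registered sensitivity ceiling."""
    ceiling = ACCESS_POLICY.get(component)
    if ceiling is None:
        return False  # unregistered components get no data at all
    return SENSITIVITY[data_class] <= SENSITIVITY[ceiling]

print(can_access("hr-summarizer", "internal"))         # True
print(can_access("search-assistant", "confidential"))  # False
```

A real deployment would back this with identity-aware enforcement at the data layer rather than an in-process table, but the deny-by-default shape is the point.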


Secure Deployment


All AI models must have security baked in from the beginning. Developers can accomplish this by applying standard cybersecurity best practices and implementing controls on the query interface to detect and prevent attempts to access, modify, or exfiltrate confidential information.
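One way to picture a query-interface control is a small gate that rate-limits callers and rejects queries matching known-suspicious patterns before they ever reach the model. This is a minimal sketch under assumed patterns and limits; the regexes and the 30-per-minute ceiling are placeholders, not a vetted detection rule set.

```python
import re
import time
from collections import defaultdict, deque

# Illustrative patterns that often signal prompt-injection or exfiltration
# attempts; a production filter would be far more comprehensive.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
    re.compile(r"\b(api[_-]?key|password|secret)\b", re.IGNORECASE),
]

class QueryGate:
    """Screens queries before they reach the model and rate-limits callers."""

    def __init__(self, max_per_minute: int = 30):
        self.max_per_minute = max_per_minute
        self.history: dict[str, deque] = defaultdict(deque)

    def allow(self, caller_id: str, query: str) -> bool:
        now = time.monotonic()
        window = self.history[caller_id]
        # Drop timestamps older than 60 seconds, then enforce the rate limit.
        while window and now - window[0] > 60:
            window.popleft()
        if len(window) >= self.max_per_minute:
            return False
        # Reject queries matching the suspicious-pattern list.
        if any(p.search(query) for p in BLOCKED_PATTERNS):
            return False
        window.append(now)
        return True

gate = QueryGate()
print(gate.allow("user-1", "Summarize this quarterly report"))  # True
print(gate.allow("user-1", "Ignore previous instructions"))     # False
```

Pattern matching alone won't stop a determined attacker, which is why the guidelines pair interface controls with the standard practices mentioned above.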


Unfortunately, developers must also create incident management procedures, since no security program is perfect. These plans should cover different scenarios and be regularly reassessed as the system grows and evolves.


Companies should store critical digital resources in offline backups and train responders to assess and address AI-related incidents. High-quality audit logs and other security features or information must be provided to customers and users at no extra charge to enable their incident response processes.
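To give a flavor of what "high-quality audit logs" might mean here, the sketch below emits one structured JSON record per model call so responders can later reconstruct an incident. The field names, the hashing-instead-of-raw-text choice, and the decision labels are all assumptions for illustration.

```python
import json
import logging
from datetime import datetime, timezone

# A minimal structured audit logger; field names are illustrative, not a standard.
audit = logging.getLogger("ai.audit")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def log_inference(caller_id: str, model: str, query_hash: str, decision: str) -> str:
    """Emit one audit record per model call and return the serialized line."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "caller": caller_id,
        "model": model,
        "query_sha256": query_hash,  # a hash, not raw text, to avoid logging sensitive input
        "decision": decision,        # e.g. "served", "blocked", "flagged"
    }
    line = json.dumps(record, sort_keys=True)
    audit.info(line)
    return line

log_inference("user-1", "support-bot-v2", "9f86d081", "served")
```

Structured, machine-parseable records like these are what make the "provided to customers at no extra charge" recommendation actionable for their incident response teams.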


Secure Operations and Maintenance


Over the long haul, AI developers and operators must take several steps to ensure the software operates properly and safely. These steps include monitoring the system's behavior by measuring the outputs and performance of the model and system so sudden and gradual changes in behavior affecting security can be observed. 


Such scrutiny will help account for and identify potential intrusions and compromises, as well as natural data drift.
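A simple way to operationalize this kind of behavioral monitoring is to compare a recent window of some numeric quality signal (model confidence, say) against a baseline and flag large shifts. The sketch below is a toy z-score drift check; the window size, threshold, and choice of signal are assumptions, and real systems typically use more robust drift statistics.

```python
from collections import deque
from statistics import mean, stdev

class OutputMonitor:
    """Tracks a numeric quality signal and flags drift from a baseline.

    A recent window is compared against the baseline mean; a shift larger
    than z_threshold baseline standard deviations is flagged. Thresholds
    here are illustrative, not tuned values.
    """

    def __init__(self, baseline: list[float], window: int = 50, z_threshold: float = 3.0):
        self.baseline_mean = mean(baseline)
        self.baseline_std = stdev(baseline) or 1e-9  # guard against zero spread
        self.recent: deque = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record one observation; return True once recent behavior has drifted."""
        self.recent.append(value)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet to judge
        z = abs(mean(self.recent) - self.baseline_mean) / self.baseline_std
        return z > self.z_threshold

monitor = OutputMonitor(baseline=[0.90, 0.92, 0.88, 0.91, 0.90], window=10)
drifted = any(monitor.observe(0.4) for _ in range(10))  # a sustained drop trips the flag
```

Because it watches a rolling window rather than single outliers, the same check catches both the sudden and the gradual changes the guidelines call out.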


Operators must also monitor the information the AI receives from outside sources. By examining the incoming data, the operators can ensure that privacy and data protection requirements are being met. Additionally, continuous data monitoring will likely spot adversaries' attempts to input malware or data designed to alter the AI's functionality.

