What are your AI Supply Chain Security Safeguards?

Lawrence Lerner • Nov 27, 2023

AiSC (AI Supply Chain Safeguards) are the background check for your LLM

The UK National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA), and others released “Guidelines for secure AI system development.” The guidance presumes a secure-by-design paradigm, outlines secure AI computing, and introduces the idea of “AI supply chains.”


As a product person, I applaud design-led specifications. Note that in this document, “AI” focuses on ML (machine learning).


Here’s the paper summarized for your convenience.

  • Provides best-practice guidelines for designing, developing, and deploying secure AI systems to mitigate risks such as adversarial attacks, bias, privacy violations, false-positive reporting, and unexpected results.
  • Supports public-sector organizations in building trustworthy and responsible AI systems.


AI Supply Chain (AiSC) Better Practices

  1. Create an end-to-end secure design.
  2. From the start, follow a responsible AI framework covering privacy, transparency, and diversity/inclusion.
  3. Build security into the end-to-end lifecycle – assess risks early, monitor systems post-deployment, and have response plans.
  4. Implement privacy protections and data governance controlling access to sensitive data.
  5. Engineer AI model resilience against adversarial attacks and manipulation attempts.
  6. Develop clear policies, procedures, and access controls around data and AI systems.
  7. Ensure input validation and simplicity.
  8. Ensure diversity in the data used to train AI models to reduce unintended bias.
  9. Adopt transparency and explainability methods so humans can understand AI decisions.
  10. Validate accuracy claims from external AI systems and assess third-party code dependencies.
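Practice 7, input validation, can be made concrete with a small gatekeeping layer that screens input before it ever reaches the model. A minimal sketch in Python; the length limit, the suspicious-pattern list, and the `validate_prompt` name are illustrative assumptions of mine, not part of the NCSC/CISA guidance.

```python
import re

# Illustrative limits and patterns -- tune these to your own threat model.
MAX_PROMPT_CHARS = 4000
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
]

def validate_prompt(prompt: str) -> tuple[bool, str]:
    """Return (ok, reason). Reject empty, oversized, or
    suspicious input before it reaches the model."""
    if not prompt.strip():
        return False, "empty prompt"
    if len(prompt) > MAX_PROMPT_CHARS:
        return False, "prompt exceeds length limit"
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(prompt):
            return False, f"matched suspicious pattern: {pattern.pattern}"
    return True, "ok"
```

A rejected prompt should be logged and surfaced, not silently dropped, so your monitoring (practice 3) sees attempted abuse.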


These are good statements of “what to do,” but the “how to implement and validate” is missing. The full guidance contains valuable information for creating your AiSC policies and procedures.


AiSC adds new attack surfaces to your software, and organizations must build the skills and invest the testing resources to manage them. The guidance includes additional practices for validating the design, such as testing for prompt injection.


I tried a time-tested “bad prompt” on several models: “Calculate pi to the last digit.” Here is a representative response.


“Calculating pi to the "last digit" is actually an impossibility because pi is an irrational number. This means it cannot be expressed as a finite decimal or a fraction, and its decimal representation goes on infinitely without repeating. So, there is no "last digit" of pi.”


Today, you are probably still working out how to make AI part of your team.

What are you doing to complete their background check?


Full docs



ABOUT THE AUTHOR


Lawrence


I translate the CEO, Owner, or Board vision and goals into market-making products that generate $100M in new revenue by expanding into geographies, industries, and verticals while adding customers.


Leaders engage me as their trusted advisor to crush their goals and grow, fix, or transition their businesses, with a cumulative impact of $1B.


👉🏼 Subscribe for retail industry news unpacking trends and timely issues for leaders.


Ready to grow, address change, or transition your business? 👉🏼  Let's brainstorm
