AI Security Ops

đź”— Register for FREE Infosec Webcasts, Anti-casts & Summits – 
https://poweredbybhis.com

Azure AI Foundry Guardrails | Episode 27
In this episode of BHIS Presents: AI Security Ops, we explore how to configure content filters for AI models using the Azure AI Foundry guardrails and controls interface. Whether you're building secure demos or deploying models in production, this walkthrough shows how to block unwanted content, enforce policy, and maintain compliance.

Topics Covered:
  •  Changing default filters for demo compliance
  •  Setting up a system prompt and understanding its role
  •  Adding regex terms to block specific content
  •  Creating and configuring a custom filter: “tech demo guardrails”
  •  Input-side filtering: inspecting user text before model access
  •  Safety vs. security categories in filtering
  •  Enabling prompt shields for indirect jailbreak detection
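The regex blocklist idea covered above can be illustrated with a small conceptual sketch. This is not the Azure AI Foundry API (in the episode, terms are added through the portal's guardrails and controls UI); it is a plain-Python illustration of how an input-side filter might match a blocked term such as the demo's "dogs" example before the prompt ever reaches the model. The pattern and function name here are assumptions for illustration only.

```python
import re

# Hypothetical blocklist; "dogs" mirrors the regex term added in the demo.
BLOCKED_PATTERNS = [
    re.compile(r"\bdogs?\b", re.IGNORECASE),
]

def passes_input_filter(user_text: str) -> bool:
    """Return False if the prompt matches any blocked pattern."""
    return not any(p.search(user_text) for p in BLOCKED_PATTERNS)

print(passes_input_filter("Tell me about cats"))  # True
print(passes_input_filter("Tell me about Dogs"))  # False (case-insensitive match)
```

Matching on word boundaries (`\b`) keeps the filter from blocking innocent substrings, which is the usual trade-off when writing blocklist regexes.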

This video is ideal for developers, security engineers, and anyone working with AI systems who needs to implement layered defenses and ensure responsible model behavior.


Why This Matters
By implementing layered security—block lists, input and output filters—you protect sensitive data, comply with policy, and maintain a safe user experience.
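As a rough sketch of what "layered" means in practice, the snippet below checks a prompt before it reaches the model and the completion before it reaches the user, so a policy violation is caught at either boundary. This is a conceptual illustration only, not the Azure AI Foundry implementation; the function names, blocklist, and placeholder model are assumptions.

```python
import re
from typing import Callable

# Hypothetical shared blocklist applied on both sides of the model call.
BLOCKLIST = [re.compile(r"\bdogs?\b", re.IGNORECASE)]

def is_clean(text: str) -> bool:
    """Return True if the text matches no blocked pattern."""
    return not any(p.search(text) for p in BLOCKLIST)

def guarded_call(prompt: str, model: Callable[[str], str]) -> str:
    """Run a model call behind input-side and output-side filters."""
    if not is_clean(prompt):
        return "[blocked: input policy]"      # layer 1: input filter
    completion = model(prompt)
    if not is_clean(completion):
        return "[blocked: output policy]"     # layer 2: output filter
    return completion

# Usage with a stand-in model:
print(guarded_call("hello", lambda p: "safe reply"))
print(guarded_call("tell me about dogs", lambda p: "anything"))
```

The point of the two layers is defense in depth: even if a crafted prompt slips past the input check, the output check still has a chance to stop the policy violation.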

#AIsecurity #GuardrailsAndControls #ContentFiltering #PromptSecurity #RegexFiltering #BHIS #AIModelSafety #SystemPromptSecurity

Brought to you by Black Hills Information Security 
https://www.blackhillsinfosec.com

----------------------------------------------------------------------------------------------
Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/
Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/
Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/
Bronwen Aker - https://blackhillsinfosec.com/team/bronwen-aker/
Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
  • (00:00) - Introduction & Overview
  • (01:17) - Changing the Default Content Filter for Demo Compliance
  • (02:00) - Setting Up a System Prompt and Its Purpose
  • (04:26) - Adding a New Term (“dogs”) to the Content Filter (Regex Example)
  • (05:04) - Creating and Configuring a Content Filter Named “Tech Demo Guardrails”
  • (05:35) - How Input-Side Filters Inspect and Block Unwanted Content
  • (06:01) - Overview of Safety Categories vs. Security Categories
  • (07:15) - Enabling Prompt Shields for Indirect Jailbreak Detection (Not Used in Demo)
  • (08:30) - Summary & Next Steps

Creators and Guests

Host
Brian Fehrman
Brian Fehrman is a long-time BHIS Security Researcher and Consultant with extensive academic credentials and industry certifications. He specializes in AI, hardware hacking, and red teaming, and outside of work he is an avid Brazilian Jiu-Jitsu practitioner, big-game hunter, and home-improvement enthusiast.

What is AI Security Ops?

Join us for a weekly podcast that illuminates how AI transforms cybersecurity, exploring emerging threats, tools, and trends while equipping viewers with knowledge they can apply in practice (e.g., secure coding or business risk mitigation).