AI Coding Workshops
Private, hands-on workshops that teach your developers to build AI coding agents that enforce your standards, learn from your team's decisions, and get smarter with every commit. Your team walks away writing better code, faster, with fewer bugs.
Get your team started with the essentials: context files, rules, and agentic reviews that work immediately.
- ✓ Context engineering (AI-Naive)
- ✓ Skills, MCP & agentic reviews (AI-Assisted)
- ✓ Hands-on exercises with your codebase
- ✓ Real-world prompt engineering pro tips
Add deterministic guardrails and measurement: reproducible, audit-ready, data-driven enforcement.
- ✓ Everything in ½ Day
- ✓ Hooks, guardrails & policy enforcement (AI-Augmented)
- ✓ IDE hooks & CI/CD integration
- ✓ Semgrep / OPA custom rules
The full maturity model, plus we build a real internal capability alongside the training: something meaningful you keep.
- ✓ Everything in 1 Day
- ✓ Observability & measurement dashboards (Data-Driven AI)
- ✓ Agents, teams & continuous improvement (AI-Native)
- ✓ Multi-agent orchestration patterns (Orchestration)
- ✓ Build a real capability for your org
- ✓ Feedback loops & policy evolution
- ✓ Architecture for scale
The Zenable AI Coding Maturity Model
Context Engineering
Set expectations through context files. Low barrier to entry, version controlled, reduces variability across developers and agents.
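A context file can be as simple as a version-controlled markdown file at the repository root. The file name and rules below are illustrative only (many agents read conventions like CLAUDE.md or AGENTS.md):

```markdown
# AGENTS.md — expectations for coding agents in this repo

- Use Python 3.12; run the linter and test suite before proposing a commit.
- Never add a new dependency without calling it out in the PR description.
- Every public function needs type hints and a one-line docstring.
- Prefer small, reviewable diffs over sweeping refactors.
```

Because the file lives in version control, every developer and every agent sees the same expectations, which is what reduces variability.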
Skills, MCP & Spec-Driven Development
Multi-perspective automated review with specialized agents for security, QA, and best practices catching issues in parallel.
Hooks, Guardrails & Policy Enforcement
Deterministic guardrails that enforce policy in milliseconds: no hallucinations, fully reproducible, audit-ready evidence.
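One way to express a deterministic guardrail is a custom Semgrep rule; the rule id, pattern, and message below are a hypothetical sketch, not a rule from our library:

```yaml
rules:
  - id: no-hardcoded-credential    # hypothetical rule id
    patterns:
      - pattern: $KEY = "..."
      - metavariable-regex:
          metavariable: $KEY
          regex: (?i).*(api_key|secret|token).*
    message: Hard-coded credential; load it from a secret manager instead.
    languages: [python]
    severity: ERROR
```

The same check runs identically in the IDE, in CI/CD, and in an audit, which is what makes the evidence reproducible.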
Observability & Measurement
You can’t improve what you can’t measure. Telemetry, dashboards, and data-driven guardrail effectiveness tracking.
Agents, Teams & Continuous Improvement
A feedback loop where each level informs the others. Incidents, audits, and policy changes drive guardrail evolution. Coverage grows from real risk.
Multi-Agent at Scale
You architect, agents execute. Hierarchical delegation with specialized agent teams coordinated toward shared goals.
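The delegation pattern above can be sketched structurally in a few lines of Python. This is a minimal, hypothetical skeleton (agent names and the keyword-matching "review" logic are stand-ins; a real system would put an LLM or coding agent behind each reviewer):

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    agent: str    # which specialist produced the finding
    message: str  # human-readable description

@dataclass
class ReviewerAgent:
    name: str
    keywords: list[str] = field(default_factory=list)

    def review(self, diff: str) -> list[Finding]:
        # Stand-in for a specialized agent: flag lines matching its focus area.
        return [
            Finding(self.name, f"line {i}: {line.strip()}")
            for i, line in enumerate(diff.splitlines(), 1)
            if any(k in line for k in self.keywords)
        ]

class Orchestrator:
    """You architect the team; the orchestrator delegates and merges results."""
    def __init__(self, team: list[ReviewerAgent]):
        self.team = team

    def review(self, diff: str) -> list[Finding]:
        findings: list[Finding] = []
        for agent in self.team:  # specialists could run in parallel
            findings.extend(agent.review(diff))
        return findings

team = [
    ReviewerAgent("security", ["password", "eval("]),
    ReviewerAgent("qa", ["TODO", "print("]),
]
report = Orchestrator(team).review("password = 'x'\nprint(password)  # TODO")
```

The point of the shape, not the stub logic: specialists stay narrow, and the orchestrator owns coordination toward the shared goal.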
What Makes Our Workshops Different
Adapts to You, in the Room
No one-size-fits-all curriculum, and no public classes. Just like our tools, our workshops learn and evolve on the fly: we pick from a library of pre-made labs and discussions based on what matters most to your team, in real time. Every session is unique, tailor-made to improve your organization's ability to use coding agents to increase velocity while maintaining high quality, intentional design, and a low bug rate.
Hands-On
We specialize in small groups of up to 10, so everything we cover is directly relevant to the people in the room. Participants build real things with AI coding tools alongside expert instructors, and every concept is immediately applied to your codebase.
Progress, While You Learn
You don't just learn concepts and go home. By the end of the workshop, you walk away with working context files, guardrails, dashboards, and agent configurations already running in your environment. In the 2-day format, we build a real internal capability that's 100% yours.
Your Instructor

Jon Zeolla
Founder & CEO, Zenable
Over nearly 20 years building software, tooling, and infrastructure at institutions like Carnegie Mellon University, PNC Bank, and numerous other large enterprises, Jon has focused on building tools and working with software developers to deliver data-first compliance, conformance, and evidence-based programs. He began working in machine learning in 2014 on the Apache Metron project, building an open source, Hadoop-based machine learning analysis system for high-volume security logs (>100,000 events per second). His interest in AI also led him to coursework at Carnegie Mellon University and Johns Hopkins, and he hasn't stopped since.
Now, Jon specializes in software and systems quality as the founder of Zenable, building production-ready AI guardrails that learn automatically from the decisions developers, product teams, and executives make, then feed that context and those guardrails back to coding agents. As a SANS Certified Instructor, IANS Faculty Member, CNCF Ambassador, and international speaker at conferences like KubeCon and CloudNativeSecurityCon, he teaches generative AI, LLM, and cloud native security to security professionals through SANS SEC540 and SEC545.
Who Should Attend
By developers, for developers.
Participants should be comfortable at the command line and in their IDE, and familiar with scripts, pipelines, and software development workflows. Specifically, they will need:
- ✓ A machine with an agentic IDE installed (Claude Code, Windsurf, Cursor, VS Code, etc.) and admin/install permissions for tooling setup
- ✓ To be comfortable with the command line, shell scripting, and at least one programming language
- ✓ To be familiar with Git and a version control platform like GitHub or GitLab
- ✓ To be familiar with well-known software development patterns like test-driven development, CI/CD pipelines, and code review workflows
- ✓ To have a basic understanding of software development metrics like DORA metrics (deployment frequency, lead time, change failure rate, recovery time)
- ✓ To have a basic understanding of common application security vulnerabilities
For what your organization needs to prepare, see our FAQ.
What to Expect
Small Groups, Up to 10
Hands-on attention and real-time troubleshooting for every participant.
Expert-Led, Hands-On
15+ years in security and automation, 2+ years in agentic coding. Build real things, not just slides.
Bring Your Own Laptop
Work in your own environment. We help troubleshoot local configurations on-site.
Walk Away Ready
Leave with working guardrails, context files, and (in the 2-day format) a real internal capability.
Build Codebases That Automatically Adapt to Change
Requirements change constantly. We teach you to build guardrail systems that evolve automatically: when something changes, your context files, review agents, and policy checks update across the board.
This applies to your governance too. Policies, standards, and guidelines can now be tested, improved, and shipped in small increments, just like code. Continuously measure, continuously improve. Previously this was unachievable at scale; with AI, it's within reach.
Recent Incident
Breach, near-miss, or audit finding triggers new guardrails automatically
New Contract
Customer agreement or regulatory mandate flows into enforcement
New Threat
Identified risk generates guardrails before it becomes an incident
Regulatory Update
EU AI Act, PCI DSS 4.0, state privacy laws update your checks
Technology Change
New framework, language, or runtime adapts standards automatically
Continuous Refinement
Scope changes, exceptions, and risk assessments feed the loop
Ready to upskill your team?
Let's discuss your goals and find the right workshop format.