AI Coding Workshops

Private, hands-on workshops that teach your developers to build AI coding agents that enforce your standards, learn from your team's decisions, and get smarter with every commit. Your team walks away writing better code, faster, with fewer bugs.

“Ruthlessly edit your CLAUDE.md over time. Keep iterating until Claude’s mistake rate measurably drops.”

Boris Cherny, Creator of Claude Code, Anthropic

“Say no to slop. Managing AI generated code at scale is an emerging problem, and will require new processes and conventions to keep code quality high.”

Greg Brockman, President & Co-Founder, OpenAI

“Every time you find yourself telling the agent ‘in this repo we do X like this’, you’ve discovered a missing abstraction. Bottle it. Write it down in a place the agent can load. Document it in AGENTS.md. Create a reusable skill. The benefits compound fast.”

Ben Williams, Former VP of Products, Snyk

½ Day
Foundations

Get your team started with the essentials: context files, rules, and agentic reviews that work immediately.

  • Context engineering (AI-Naive)
  • Skills, MCP & agentic reviews (AI-Assisted)
  • Hands-on exercises with your codebase
  • Real-world prompt engineering pro tips
MOST POPULAR
1 Day
Practitioner

Add deterministic guardrails and measurement: reproducible, audit-ready, data-driven enforcement.

  • Everything in ½ Day
  • Hooks, guardrails & policy enforcement (AI-Augmented)
  • IDE hooks & CI/CD integration
  • Semgrep / OPA custom rules
2 Day
Comprehensive

The full maturity model, plus a real internal capability we build alongside the training: something meaningful you keep.

  • Everything in 1 Day
  • Observability & measurement dashboards (Data-Driven AI)
  • Agents, teams & continuous improvement (AI-Native)
  • Multi-agent orchestration patterns (Orchestration)
  • Build a real capability for your org
  • Feedback loops & policy evolution
  • Architecture for scale

The Zenable AI Coding Maturity Model

1. AI-NAIVE

Context Engineering

Set expectations through context files. Low barrier to entry, version controlled, reduces variability across developers and agents.
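As a sketch, a starter context file might look like the one below. The file name follows the AGENTS.md convention; the specific conventions and paths are hypothetical examples, not recommendations:

```markdown
# AGENTS.md

## Conventions
- Use TypeScript strict mode; no `any` without a justifying comment.
- All database access goes through `src/db/`; never inline SQL in handlers.
- New endpoints require a test in `tests/api/` before merge.

## Commands
- Build: `npm run build`
- Test: `npm test`
```

Because the file lives in version control, every edit is reviewed like code, and every developer and agent starts from the same expectations.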

2. AI-ASSISTED

Skills, MCP & Spec-Driven Development

Multi-perspective automated review with specialized agents for security, QA, and best practices catching issues in parallel.

3. AI-AUGMENTED

Hooks, Guardrails & Policy Enforcement

Deterministic guardrails that enforce policy in milliseconds: no hallucinations, fully reproducible, audit-ready evidence.
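To make the contrast with LLM-based review concrete, here is a minimal sketch of a deterministic check. The patterns are illustrative, not Zenable's actual rule set:

```python
import re

# Illustrative patterns; a real policy set would be versioned alongside the code.
FORBIDDEN = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "AWS access key ID"),
    (re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"), "private key material"),
]

def check(text: str) -> list[str]:
    """Return every policy violation found in `text`.

    Purely deterministic: the same input always yields the same findings,
    which is what makes the output reproducible and audit-ready.
    """
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, label in FORBIDDEN:
            if pattern.search(line):
                findings.append(f"line {lineno}: {label}")
    return findings
```

Wired into a git pre-commit hook or CI step, a check like this fails the change in milliseconds and leaves a log entry that can be replayed exactly.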

4. DATA-DRIVEN AI

Observability & Measurement

You can’t improve what you can’t measure. Telemetry, dashboards, and data-driven guardrail effectiveness tracking.

5. AI-NATIVE

Agents, Teams & Continuous Improvement

A feedback loop where each level informs the others. Incidents, audits, and policy changes drive guardrail evolution. Coverage grows from real risk.

6. ORCHESTRATION

Multi-Agent at Scale

You architect, agents execute. Hierarchical delegation with specialized agent teams coordinated toward shared goals.

What Makes Our Workshops Different

Adapts to You, in the Room

No one-size-fits-all. Just like our tools, our workshops learn and evolve on the fly. We pick and choose from a library of pre-made labs and discussions based on what matters most to your team, in real time. No public classes: every session is unique to your team, tailor-made to improve your organization's ability to use coding agents to increase velocity while maintaining high quality, intentional design, and a low bug rate.

Hands-On

We specialize in small groups of up to 10, so everything we cover is directly relevant to the people in the room. Participants build real things with AI coding tools alongside expert instructors, and every concept is applied immediately to your codebase.

Progress, While You Learn

You don't just learn concepts and go home. By the end of the workshop, you walk away with working context files, guardrails, dashboards, and agent configurations already running in your environment. In the 2-day format, we build a real internal capability that's 100% yours.

Your Instructor

Jon Zeolla

Founder & CEO, Zenable

With nearly 20 years building software, tooling, and infrastructure at institutions like Carnegie Mellon University, PNC Bank, and numerous other large enterprises, Jon's career has centered on building tools and working with software developers to ensure data-first compliance, conformance, and evidence-based programs. He began working in machine learning in 2014 with the Apache Metron project, building an open source, Hadoop-based machine learning data analysis system for high volume security logs (>100,000 events per second). Interested in AI, he also began taking classes at Carnegie Mellon University and Johns Hopkins, and hasn't stopped since.

Now, Jon specializes in software and systems quality as founder of Zenable, building production-ready AI guardrails that automatically learn from the decisions developers, product teams, and executives make, and feed that context and those guardrails back to coding agents. As a SANS Certified Instructor, IANS Faculty Member, CNCF Ambassador, and international speaker at conferences like KubeCon and CloudNativeSecurityCon, he teaches generative AI and LLM security, as well as cloud native security, to security professionals via SANS SEC540 and SEC545.

Who Should Attend

By developers, for developers.

Participants should be comfortable at the command line and in their IDE, and familiar with scripts, pipelines, and software development workflows. Specifically, they will need:

  • A machine with an agentic IDE installed (Claude Code, Windsurf, Cursor, VS Code, etc.) and admin/install permissions for tooling setup
  • To be comfortable with the command line, shell scripting, and at least one programming language
  • To be familiar with Git and a version control platform like GitHub or GitLab
  • To be familiar with well-known software development patterns like test-driven development, CI/CD pipelines, and code review workflows
  • To have a basic understanding of software development metrics like DORA metrics (deployment frequency, lead time, change failure rate, recovery time)
  • To have a basic understanding of common application security vulnerabilities

For what your organization needs to prepare, see our FAQ.

What to Expect

1. Small Groups, Up to 10

Hands-on attention and real-time troubleshooting for every participant.

2. Expert-Led, Hands-On

15+ years in security and automation, 2+ years in agentic coding. Build real things, not just slides.

3. Bring Your Own Laptop

Work in your own environment. We help troubleshoot local configurations on-site.

4. Walk Away Ready

Leave with working guardrails, context files, and (in 2-day) a real internal capability.

Build Codebases That Automatically Adapt to Change

Requirements change constantly. We teach you to build guardrail systems that evolve automatically: when something changes, your context files, review agents, and policy checks update across the board.

This applies to your governance too. Policies, standards, and guidelines can now be tested, improved, and shipped in small increments, just like code. Continuously measure, continuously improve. This was previously unachievable at scale; with AI, it's within reach.
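For instance, a standard like "no print statements in production code" can live in the repo as a small Semgrep rule, versioned and reviewed like any other change. The rule id and message below are illustrative:

```yaml
rules:
  - id: no-print-in-prod
    pattern: print(...)
    message: Use the logging module instead of print
    languages: [python]
    severity: WARNING
```

When the policy changes, the rule changes in a pull request, and enforcement updates everywhere the rule runs.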

Recent Incident

Breach, near-miss, or audit finding triggers new guardrails automatically

New Contract

Customer agreement or regulatory mandate flows into enforcement

New Threat

Identified risk generates guardrails before it becomes an incident

Regulatory Update

EU AI Act, PCI DSS 4.0, state privacy laws update your checks

Technology Change

New framework, language, or runtime adapts standards automatically

Continuous Refinement

Scope changes, exceptions, and risk assessments feed the loop

Ready to upskill your team?

Let's discuss your goals and find the right workshop format.

jon@zenable.io

Frequently asked questions

You've got questions, we've got answers.
What does my organization need to prepare?

Before the workshop, send us:

  • An attendee list
  • Your CI/CD pipeline tool (GitHub Actions, Bitbucket Pipelines, Argo CD, etc.)
  • Which IDEs the team uses
  • Desktop operating systems across the team

For the venue:

  • A meeting room with reliable internet and screen-sharing capability
  • A codebase participants can use for exercises (we can provide a sample)
  • CI/CD pipeline access for 1-Day and 2-Day formats

Full participant prerequisites:

  • A machine with an agentic IDE (Claude Code, Windsurf, Cursor, VS Code, etc.)
  • Admin/install permissions for tooling setup
  • Comfort at the command line and with shell scripting
  • Familiarity with Git and a version control platform (GitHub, GitLab, etc.)
  • Basic understanding of common application security vulnerabilities
Can the workshop be delivered remotely?

We support hybrid delivery – participants can join remotely alongside an in-person group. However, we require at least half the class to be in person. The hands-on format works best when the instructor can work directly with participants, troubleshoot configurations on-site, and maintain the energy of a live room. Remote participants join via screen sharing and real-time collaboration tools.

What if our team uses different AI coding tools?

Our workshops are tool-agnostic at the concept level. While hands-on exercises use Claude Code (the most capable agentic coding tool available), the maturity model, guardrail patterns, and observability principles apply across all major AI coding assistants. The tools we use also support all major agentic IDEs including Cursor, Windsurf, VS Code, and more – see the full list of supported integrations. Participants who use different tools daily will still get full value.

If I use your software, will you train or fine-tune models using my code?

No.

Regardless of whether you're on a free or paid tier, we do not train or fine-tune models using our users' code. To learn more, see our Terms.

Which Zenable tools should I start with?

The best way to get started is to get signed up and go through our onboarding steps. That should get you everything you need; we pride ourselves on integrations with version control systems, IDEs, our CLI tool, git hook integrations, and more. But not everybody needs everything, and that's what the onboarding process will nail down.

How’s your security?

Zenable has been built from the ground up with modern software development and security practices. We employ a series of security controls to ensure your data is safe; for more details, see our Security page, or reach out with any specific questions.

How do I get started?

First, we recommend setting up a free (no credit card required) trial of our Pro tier. That'll get you access to all of our integration points to kick the tires. From there, if you find what we have interesting you can look at rolling it out for your team via our Pro or Enterprise tiers. If you'd like to just keep using it individually, you're welcome to continue via our Free tier which includes daily usage of PR code review, IDE integrations, our CLI, and more - 100% free.

To get more details about how those tiers differ, see our comparison table or Plans & Usage pages.

What are the usage limits?

Each plan has limits on Pull Request reviews and Agentic Code Reviews. PR review limits are per account, while Agentic Code Review limits are per seat. Agentic Code Reviews are AI-powered reviews via the CLI, MCP server, or API.

Free accounts get 25 PR reviews/day and 100 agentic reviews/day. Professional plans include 200 PR reviews/day and 1,000 agentic reviews/day. Enterprise plans offer negotiable PR limits and 10,000 agentic reviews/day. We reserve the right to limit abuse in line with our Terms and Conditions. See the comparison table or Plans & Usage for full details.

Is this just all AI?

This question used to be titled "Are you really using AI or is it just a buzzword?" - but we think we're past that now.

Zenable's approach is that using AI to review and improve software is extremely useful, but it's only part of the solution. Deterministic guardrails are critical: they guarantee identical findings and outputs given the same input, and they produce strong audit evidence and logs for high-assurance environments.

We also believe that observability - the ability to measure, monitor, and evaluate systems based on telemetry - is necessary to know you're focusing on the right problems. The speed of AI brings much more uncertainty about where you should spend your time, so we centralize and report on the information we use to customize every part of the Zenable stack to your environment, and to let you oversee how coding agents are being used so you can guide decision making.