On-demand Workshop: Testing AI-Generated Code in K8s

In this workshop, we'll show how to safely test AI-generated code directly against your staging Kubernetes environment without deploying.
Brendan Cooper
January 29, 2026

AI coding assistants are generating more code than ever, but the productivity gains haven’t been as dramatic as expected. That’s because code generation was never the main bottleneck. Testing and integration were.

This problem is even more pronounced with AI-generated code: the assistant rarely has full awareness of your application's architecture and the microservices around it, so issues surface late, in staging or production.

In this workshop, we'll show how to safely test AI-generated code directly against your staging Kubernetes environment without deploying. With mirrord, an open-source development tool, your local code runs in the real context of your cluster, alongside the services already deployed there.
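
To make that concrete, the day-to-day shape of the workflow is a single command. Here's a minimal sketch, assuming the mirrord CLI is installed and your kubeconfig points at the staging cluster; the deployment name and local command are placeholders, not part of the workshop materials:

    # Run the locally built service in the context of the staging cluster.
    # mirrord injects the remote pod's environment variables, routes outgoing
    # traffic through the cluster, and mirrors incoming traffic to the local
    # process. "deployment/orders-service" and "python3 app.py" are placeholders.
    mirrord exec --target deployment/orders-service -- python3 app.py

The deployed pods keep serving as usual: by default mirrord copies incoming traffic to your local process rather than taking it away, which is what makes this safe to run against a shared staging environment.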

Through a live demo with Cursor + mirrord, you’ll see how this approach enables fast, realistic feedback on AI-generated changes, catching integration issues earlier and reducing reliance on slow CI pipelines and deployments.

What We’ll Cover:

  • Why testing and integration are the true bottlenecks in AI-assisted development
  • Common failure modes of AI-generated code in microservices and Kubernetes environments
  • How mirrord works and how it safely connects local code to a remote Kubernetes cluster (see the config sketch after this list)
  • How to test AI-generated code against real services without deploying or impacting production
  • A practical workflow combining Cursor and mirrord for faster, safer feedback loops
  • How this approach fits into modern CI/CD and platform engineering practices
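
On that mirrord point: its behavior is driven by a small config file (commonly .mirrord/mirrord.json). Below is a minimal sketch of the kind of configuration the demo builds on. The target name and namespace are placeholders, the field names follow mirrord's documented schema, and the comments are explanatory only, so strip them for strict JSON:

    {
      "target": {
        // Placeholder workload: the staging deployment your local process "joins"
        "path": "deployment/orders-service",
        "namespace": "staging"
      },
      "feature": {
        "network": {
          // "mirror" copies incoming traffic to the local process without
          // taking it away from the deployed pods; "steal" would redirect it
          "incoming": "mirror",
          // Route outgoing requests through the cluster, so local code can
          // reach other in-cluster services by their usual DNS names
          "outgoing": true
        },
        // Read the remote filesystem, keep writes local
        "fs": "read",
        // Import the remote pod's environment variables
        "env": true
      }
    }

Starting in "mirror" mode is the conservative choice for a shared staging cluster: teammates' traffic is unaffected, while your local process still sees real requests and real downstream services.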

Who Should Watch:

  • Backend and full-stack engineers working with microservices and Kubernetes
  • Platform and DevOps engineers looking to improve developer feedback loops
  • Teams using (or evaluating) AI coding assistants like Cursor, Copilot, or similar tools
  • Engineering leaders interested in safely accelerating development without increasing risk
  • Anyone frustrated by slow staging environments, flaky integration tests, or delayed feedback 

About the Presenters

Arsh Sharma - Senior DevRel Engineer, MetalBear

Arsh is a Senior DevRel Engineer at MetalBear, a CNCF Ambassador, and a recipient of the Kubernetes Contributor Award for his open-source contributions. He loves tinkering with new projects in the cloud ecosystem and writing about what he learns. He has contributed to CNCF projects such as cert-manager and Kyverno, and in a previous role was part of the open-source Kubernetes team at VMware.

Anton Weiss - Chief Storyteller, PerfectScale

Anton has a storied career in creating engaging, informative content that helps practitioners navigate the complexities of ongoing Kubernetes operations. With previous experience as a CD Unit Leader, Head of DevOps, CTO, and CEO, he has worn many hats as a consultant, instructor, and public speaker. He is passionate about leveraging his expertise to support the DevOps, Platform Engineering, and Kubernetes communities.
