{"type":"rich","version":"1.0","provider_name":"Transistor","provider_url":"https://transistor.fm","author_name":"How I Tested That","title":"Chad Holdorf | How I Tested Pull Requests","html":"<iframe width=\"100%\" height=\"180\" frameborder=\"no\" scrolling=\"no\" seamless src=\"https://share.transistor.fm/e/ce34ee8b\"></iframe>","width":"100%","height":180,"duration":2656,"description":"SummaryIn this episode I’m joined by Chad Holdorf, longtime product and technology leader whose career spans John Deere, Salesforce, Pendo, and now Demandbase, where he leads AI initiatives across the company.We explore how AI is fundamentally reshaping the way modern product teams test, ship, and learn, from debugging customer issues directly against live codebases to product managers and support teams submitting pull requests themselves. Chad shares how tools like Cursor and Claude are collapsing traditional handoffs between product, engineering, and support, creating a much faster feedback loop between customer problems, experimentation, and shipped solutions.We also get into the messy reality behind enterprise AI adoption, including data quality, hallucinations, trust, evals, and why testing AI products inside real customer environments is much harder than most demos make it look. Chad gives us a peek into how his own workflow has changed, how his teams are learning by building in real time, and why this moment reminds him of the early days of Lean Startup, where he and I first met.If you’ve been wondering what AI-native product development actually looks and feels like inside a real company, this episode is for you.TakeawaysAI is collapsing traditional handoffs between product, engineering, and support teams. Chad described customer support teams going directly into code repositories with AI tools to investigate issues, understand root causes, and eventually submit merge requests themselves.Most enterprise AI demos fall apart when connected to messy real-world customer data. Chad emphasized that “just putting Claude on top of the data” failed quickly without extensive labeling, validation, testing, and human feedback loops. Customers could detect hallucinations within a few prompts.AI systems expose hidden data inconsistencies inside organizations. One example showed AI selecting a custom CRM field that technically produced better targeting results than the...","thumbnail_url":"https://img.transistorcdn.com/hRAQ0Cvexq2Nhl7H1KPLfxWZ14skSKkH4xG8JMRnoOM/rs:fill:0:0:1/w:400/h:400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9zaG93/LzUwMDU0LzE3MDg3/MTI0NTQtYXJ0d29y/ay5qcGc.webp","thumbnail_width":300,"thumbnail_height":300}