#draft Sept 16, 2025

# Principles From Teresa Torres' Customer Discovery Series

* https://www.producttalk.org/getting-started-with-discovery/
* https://www.producttalk.org/opportunity-solution-trees/

## 10 Principles

1. **Focus on a Specific Business Outcome:** All questions should be designed to uncover insights related to a clear, measurable outcome (e.g., "increase activation rate," "reduce churn," "improve weekly engagement"). The AI should be primed with this outcome first.
2. **Discover Unmet Needs (Opportunities), Not Feature Requests:** The goal is to understand the _problem space_, not the _solution space_. Generate questions that get at the underlying need, pain point, or desire, not questions that validate a pre-defined feature.
3. **Ask Story-Based, Open-Ended Questions:** Prompt the AI to generate questions that ask for specific, real-world examples. The most powerful prompt is "Tell me about the last time you..." This grounds the customer in actual behavior, not abstract opinions.
4. **Avoid Hypothetical and Opinion-Based Questions:** Instruct the AI to avoid questions like "Would you use...," "Do you like...," or "What if we built..." These questions generate unreliable opinions. Focus on past behavior and present challenges.
5. **Separate the Problem from the Solution:** Generate two distinct sets of questions. The first set (for interviews) should _only_ explore the problem/opportunity. The second set (for assumption testing) can be used later to validate a _specific_ solution idea.
6. **Map to an Experience:** Ask questions that walk a customer through a specific workflow or journey related to your product. This helps uncover pain points, workarounds, and frustrations at each step.
7. **Identify and Isolate Hidden Assumptions:** Before generating questions for a _solution_, list the assumptions. The AI should generate questions to _test_ the riskiest assumption first (e.g., if the assumption is "Customers are willing to pay for this," the test is not "Would you pay for this?" but a non-hypothetical one, such as a prototype with a price on it).
8. **Generate Questions for the Four Key Risks:** Help formulate questions or experiments to test the four main types of assumptions:
    - **Desirability:** "Do they want this? Does it solve a real problem?"
    - **Viability:** "Should we build this? Does it work for our business?"
    - **Feasibility:** "Can we build this? Do we have the technology/skills?"
    - **Usability:** "Can they figure out how to use it?"
9. **Distinguish Research Questions from Interview Questions:** Use the AI to help you translate your internal "research question" (what you want to learn, e.g., "Why do users drop off during onboarding?") into a customer-facing "interview question" (what you will ask, e.g., "Tell me about the last time you set up a new account with an app.").
10. **Focus on One Target Opportunity at a Time:** Instead of asking broad questions about the entire product, instruct the AI to generate a set of questions that go deep on a _single_ customer opportunity (e.g., "We've learned that users struggle with finding past reports. Let's design 5 questions to unpack that specific challenge.").

# Sample questions

## AI focused

- Walk me through the lifecycle of AI projects, starting with the people and teams involved in their design, development, and deployment. What are their backgrounds, roles, and responsibilities? What are the hand-offs, tools, and challenges across the stages?
- Thinking about a recent major AI project, how many hours did your team dedicate to setting up, configuring, and maintaining the infrastructure itself, as opposed to developing and deploying models or the application experience? Can you walk me through the most time-consuming tasks?
- Imagine you're developing a cutting-edge model and need to get it deployed for inference quickly. What are the ideal characteristics of the infrastructure platform you'd want to use? What would make that experience frictionless? What parts of the infrastructure are unnecessary?
- When your team evaluates new infrastructure platforms for AI, how significant is existing Kubernetes expertise within your team as a factor in that decision? Can you share a time when a platform's heavy reliance on Kubernetes concepts or messaging was either a major advantage or a significant deterrent?
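Several of the principles above (prime the AI with the outcome, go deep on one opportunity, demand story-based questions, ban hypotheticals) can be operationalized as a reusable prompt template. The sketch below is a minimal illustration, not from the Product Talk series; the function name `build_interview_prompt` and the specific rule wording are assumptions for demonstration.

```python
# Hypothetical prompt-builder encoding principles 1, 3, 4, and 10.
# It only assembles the prompt text; pass the result to whatever LLM
# client you use.

BANNED_OPENERS = ["Would you", "Do you like", "What if we built"]

def build_interview_prompt(outcome: str, opportunity: str,
                           n_questions: int = 5) -> str:
    """Assemble a system prompt for generating customer interview questions."""
    rules = [
        # Principle 1: prime with a measurable business outcome first.
        f"Every question must relate to this business outcome: {outcome}.",
        # Principle 10: one target opportunity at a time.
        f"Go deep on this single customer opportunity: {opportunity}.",
        # Principle 3: story-based, grounded in actual past behavior.
        "Ask for specific past behavior, e.g. 'Tell me about the last time you...'.",
        # Principle 2/5: stay in the problem space.
        "Explore the problem space only; never pitch or validate a feature.",
        # Principle 4: no hypotheticals or opinion questions.
        "Never open a question with: "
        + "; ".join(f"'{b}...'" for b in BANNED_OPENERS) + ".",
    ]
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(rules, 1))
    return (
        f"Generate {n_questions} open-ended customer interview questions.\n"
        f"Follow these rules:\n{numbered}"
    )

prompt = build_interview_prompt(
    outcome="reduce churn",
    opportunity="users struggle to find past reports",
)
print(prompt)
```

Keeping the outcome and opportunity as parameters makes it easy to regenerate a fresh question set for each interview round without rewriting the rules.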