The Palos Publishing Company


Creating user research protocols tailored to AI products

Creating user research protocols for AI products requires a thoughtful approach: AI systems involve complex, often invisible processes, and users interact with them differently than with traditional products. Below are the essential components of a research protocol tailored to AI-driven products.

1. Define Research Goals

Before anything else, align the research goals with the product’s development objectives. Some questions to ask:

  • What problem does the AI product solve?

  • How will users interact with the AI, and what do we want to learn from these interactions?

  • What are the anticipated outcomes or behaviors that we are testing?

Example: If the AI is an assistant, the goal might be to test how users interact with the AI’s ability to understand context in their requests.

2. Identify User Personas

AI products often cater to diverse audiences. Developing detailed user personas will help tailor the research to various user types. These personas should include:

  • Demographics: Age, experience with technology, job roles, etc.

  • Psychographics: Their attitudes, behaviors, pain points, and goals when using the AI system.

  • Technical Expertise: Understanding if they are non-technical users, tech-savvy professionals, or somewhere in between.

  • Emotional Responses: How users feel about AI. Are they skeptical? Enthusiastic? Anxious?

Example: For a voice assistant AI, personas might include tech-savvy professionals, elderly users, and people with disabilities.

3. Design Tasks for the Study

Based on the research goals, the next step is to define the tasks users will perform during the research. These tasks should reflect real-life interactions with the AI, providing useful insights into the product’s usability, effectiveness, and potential frustrations.

For AI products, the tasks might focus on:

  • Core AI Capabilities: Does the AI interpret user input correctly?

  • AI Understanding and Trust: How much do users trust the AI’s responses and actions?

  • AI Error Handling: How does the AI recover from errors, and is it clear to users what went wrong?

  • Efficiency of Interaction: Does the AI save users time? Or does it complicate tasks?

  • User Comfort and Empowerment: Does the AI empower users, or make them feel disempowered or misunderstood?

Example: A test could involve asking users to interact with an AI chatbot to book an appointment and assess the clarity of instructions and the AI’s ability to handle ambiguity.

4. Choose Research Methods

The choice of research method depends on the nature of the AI product and the research objectives. Common approaches include:

  • Usability Testing: Observing how users interact with the AI and noting areas of friction or confusion.

  • Interviews: Qualitative insights from users about their thoughts, feelings, and experiences with the AI system.

  • Surveys: Gathering quantitative data about users’ satisfaction, trust, and willingness to use the AI product.

  • A/B Testing: Experimenting with different versions of the AI system to assess user preferences or performance.

  • Longitudinal Studies: Evaluating how users adapt to the AI system over time, especially for complex products or those that learn from user input.

Example: For a recommendation engine, an A/B test might compare user engagement with different recommendation algorithms.
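For the A/B test above, engagement between the two algorithm variants can be compared with a standard two-proportion z-test. The sketch below is illustrative: the click counts and sample sizes are hypothetical, and a real study would also check sample-size assumptions before trusting the result.

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: did variant B engage users more than variant A?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)  # pooled click-through rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se  # |z| > 1.96 is roughly significant at p < 0.05

# Hypothetical numbers: variant B's recommendations drew more clicks.
z = two_proportion_z(clicks_a=120, n_a=1000, clicks_b=158, n_b=1000)
print(f"z = {z:.2f}")
```

Here z comes out above 1.96, so under these made-up numbers variant B's higher engagement would be unlikely to be chance alone.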

5. Establish Metrics for Evaluation

Evaluating AI products requires both qualitative and quantitative metrics, including:

  • Effectiveness: How well does the AI perform its tasks? (e.g., Did the AI accurately complete tasks as expected?)

  • Efficiency: How much time did users spend completing tasks? (e.g., Time to get the correct answer or perform the action)

  • Satisfaction: How satisfied are users with the AI’s responses? (e.g., via surveys or sentiment analysis)

  • Trust: Do users trust the AI system’s outcomes? (e.g., measurement via questionnaires focused on trust and confidence)

  • User Error Rate: How often do users make mistakes, or does the AI misinterpret input?

Example: For an AI that generates code snippets, metrics could include the accuracy of the code produced, the time taken to generate the output, and user satisfaction with the results.
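The quantitative metrics above can be computed directly from per-participant session records. This is a minimal sketch with hypothetical data; the field names (`completed`, `seconds`, `satisfaction`, `errors`) are illustrative, not a standard schema.

```python
from statistics import mean, median

# Hypothetical per-participant records from one study run.
sessions = [
    {"completed": True,  "seconds": 42, "satisfaction": 4, "errors": 0},
    {"completed": True,  "seconds": 65, "satisfaction": 5, "errors": 1},
    {"completed": False, "seconds": 90, "satisfaction": 2, "errors": 3},
    {"completed": True,  "seconds": 51, "satisfaction": 4, "errors": 0},
]

effectiveness = sum(s["completed"] for s in sessions) / len(sessions)  # success rate
efficiency = median(s["seconds"] for s in sessions)                    # typical task time
satisfaction = mean(s["satisfaction"] for s in sessions)               # 1-5 scale
error_rate = sum(s["errors"] for s in sessions) / len(sessions)        # errors per session

print(f"success rate: {effectiveness:.0%}, median time: {efficiency}s, "
      f"satisfaction: {satisfaction:.1f}/5, errors/session: {error_rate:.1f}")
```

Median is used for task time because a single struggling participant can skew the mean badly in small studies.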

6. Consider Ethical and Emotional Impacts

AI products often intersect with sensitive areas like privacy, security, and emotional well-being. During research, consider:

  • Privacy Concerns: Ensure users know how their data will be used and stored.

  • Bias and Fairness: Test the AI for biases, ensuring fairness in how it treats all user groups.

  • Emotional Impact: How does the AI affect user emotions? Does it lead to frustration, delight, or confusion?

Example: For an AI system that offers mental health support, assess how users react emotionally to the AI’s responses and ensure the AI isn’t triggering distress or dismissing important emotional needs.

7. Create a Pilot Test

Running a small-scale pilot test will help you refine the research methodology before the full rollout. This gives you the opportunity to adjust tasks, metrics, or research design based on real-world testing feedback.

Example: A pilot might involve testing a prototype of the AI assistant with just five users to ensure the system functions as expected and gather early feedback on usability.

8. Implement Data Collection Tools

Use various tools to collect both qualitative and quantitative data during the study:

  • Screen recording software to capture user interactions.

  • Surveys and questionnaires for immediate post-task feedback.

  • Session logs to monitor user behavior.

  • AI feedback from the system itself, like success/failure logs or user engagement metrics.

Example: For voice-based AI systems, you might collect user speech input along with sentiment analysis to understand user engagement.
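Session logs are easiest to analyze later if each interaction is captured as a structured event. The sketch below shows one possible in-memory event format; the event names and fields are hypothetical, and a real study would persist these to a file or database.

```python
import json
import time

def log_event(log, session_id, event, **details):
    """Append one structured interaction event to a session log."""
    log.append({
        "session": session_id,
        "ts": time.time(),
        "event": event,   # e.g. "query", "ai_response", "task_success"
        **details,
    })

log = []
log_event(log, "s01", "query", text="book a dentist appointment")
log_event(log, "s01", "ai_response", intent="book_appointment", confidence=0.87)
log_event(log, "s01", "task_success")
print(json.dumps(log[-1], indent=2))
```

Keeping every event in one flat, timestamped format makes it simple to cross-reference system logs with screen recordings and post-task survey answers.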

9. Analyze and Synthesize Results

After the data is collected, it is time to analyze and synthesize the findings:

  • Look for recurring themes across participants (e.g., common pain points or areas of delight).

  • Evaluate quantitative metrics such as task completion times, error rates, or satisfaction scores.

  • Identify areas where the AI falls short, needs improvement, or provides a seamless experience.

Example: If multiple participants express frustration with how the AI handles complex requests, this indicates an area for further training or redesign.
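Spotting recurring themes like this can be partly mechanized once transcripts are coded. The sketch below assumes researchers have already tagged each participant's notes with short codes (the codes and the majority threshold are illustrative choices, not a fixed method).

```python
from collections import Counter

# Hypothetical researcher codes applied to each participant's transcript.
coded_notes = {
    "P1": ["trust_issue", "ambiguity_frustration"],
    "P2": ["ambiguity_frustration", "delight_speed"],
    "P3": ["ambiguity_frustration", "trust_issue"],
    "P4": ["delight_speed"],
}

themes = Counter(code for codes in coded_notes.values() for code in codes)
# Flag themes raised by a strict majority of participants as recurring.
threshold = len(coded_notes) / 2
recurring = [t for t, n in themes.most_common() if n > threshold]
print(recurring)
```

With this data, "ambiguity_frustration" appears for three of four participants and is flagged, matching the kind of pattern described in the example above.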

10. Iterate and Evolve the Protocol

Research on AI products is an ongoing process. Once the first round of research is complete and insights are gathered, use the findings to iterate on the product, refine the research protocol, and conduct further tests. AI systems evolve over time, so continuous feedback and adaptation are key.


By creating well-structured user research protocols tailored to AI, you capture insights that not only improve the product but also keep it aligned with user needs, behaviors, and expectations. This iterative process is essential for designing AI systems that are truly user-centered.

