Good research always starts with a plan.
A research plan is a document that outlines the research objectives and how the research will be executed.
Research plans should include the research questions, the chosen research method(s), how sessions will be run, recruitment (inclusion) criteria, and study collateral such as screeners, tasks, and interview questions.
Creating a research plan can be time-consuming. Even with a good template, a researcher must generate research questions, select the appropriate method(s), decide how to run sessions, and often create study collateral (like screeners and tasks) from scratch. The good news is that AI can help with many, if not all, of these tasks!
It can be tempting to just ask an AI tool to give you a research plan for a project. Don’t do that.
❌ Bad Prompt:
Generate a research plan for a usability test of a food-delivery app.
This kind of request results in a generic, template-like response, partly because AI lacks the context to propose a complete research plan and partly because of how the model has been trained.
To construct a useful research plan, deconstruct the plan into parts and have the AI chatbot tackle each part individually. You can then assemble the responses into a final research plan.
Don’t expect the AI to ask you the right questions in order to get a comprehensive outcome. View the AI tool as a UX assistant, not as a UX mentor. You need to feed the AI all the steps and details you want it to consider.
Follow the steps below for the best results.
Imagine asking an experienced colleague for recommendations on what research study to run without giving them any context on what project you’re working on and what you’re trying to achieve. They would probably say, “It depends on what you’re trying to learn.” Conversing with AI is no exception.
Start your conversation with enough context, especially if the AI tool does not already have this information stored about you or your current project. (ChatGPT 4 allows users to store context for each new conversation.)
The contextual information you share should include who you work for, the scope of your project, and what you’re looking to achieve. Remember that some AI tools may use your conversations to train their models, so share only information you would be comfortable making public.
Prompt Template:
I work for [type of organization]. My company provides [what kind of product, service, or experience]. I need to run a user-research study to learn [something about users or a problem].
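If you prefer to run these steps as a script rather than in a chat window, the sketch below shows one way to send that context as the opening message of a conversation. It uses the OpenAI Python SDK; the model name and the bracketed details filled in (loosely based on the research-repository study used as a running example later in this article) are illustrative assumptions, not recommendations from this article.

```python
# A minimal sketch: send project context as the first message of a conversation,
# using the OpenAI Python SDK (an assumed choice; any chatbot or SDK would do).
# Assumes OPENAI_API_KEY is set; the model name and filled-in details are placeholders.
from openai import OpenAI

client = OpenAI()

context = (
    "I work for a UX research and training company. "              # [type of organization]
    "My company provides UX training, reports, and consulting. "   # [product, service, or experience]
    "I need to run a user-research study to learn how UX practitioners "
    "set up and use research repositories."                        # [something about users or a problem]
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": context}],
)
print(response.choices[0].message.content)
```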
After providing the AI tool with context, ask it to suggest specific research questions for the study. Here’s an example from a recent study on research repositories that I conducted for NN/g.
Prompt Example:
I will run a research study and report on how UX practitioners across different organizations set up and use research repositories. Please generate 10 research questions that would be suitable for this sort of study.
It’s a good idea to ask generative AI tools for more options or ideas than you think you’ll need, because some will not be a good fit.
Microsoft Copilot responded to the prompt above with the 10 research questions below.
10 Research questions generated by Microsoft Copilot
Who are the primary users of the research repository within different organizations?
What motivates users to engage with the research repository?
How do different teams (design, product, sales, marketing) access and benefit from the research repository?
What challenges do practitioners face when contributing to or retrieving information from the repository?
What types of research artifacts (reports, raw data, templates) are most commonly stored in the repository?
How do organizations structure their research repositories?
What tools and templates are commonly used to facilitate research repository management?
How do practitioners ensure the quality and accuracy of information in the repository?
What impact does the research repository have on decision-making and project outcomes?
What strategies can organizations implement to encourage widespread adoption and active participation in the research repository?
To broaden your pool of research questions, ask for further research questions or provide the same prompt in a new session. Repeat this process as many times as you need until the list covers everything you’d like to learn.
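If you are scripting this step, a “new session” is simply a fresh API call that carries no history from the previous ones. The rough sketch below continues the earlier sketch (reusing `client` and `context`; the model name is still an assumption) and pools the questions from several independent calls.

```python
# Rough sketch: pool research questions from several independent "sessions".
# Continues the earlier sketch (reuses `client` and `context`); each call has no
# shared history, so it behaves like starting a new chat.
question_prompt = (
    "I will run a research study and report on how UX practitioners across "
    "different organizations set up and use research repositories. Please generate "
    "10 research questions that would be suitable for this sort of study."
)

pooled_answers = []
for _ in range(3):  # three fresh sessions; repeat until the list feels exhaustive
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "user", "content": context},
            {"role": "user", "content": question_prompt},
        ],
    )
    pooled_answers.append(response.choices[0].message.content)

print("\n\n---\n\n".join(pooled_answers))
```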
Some of the research questions Copilot generated for me were appropriate, and others were not. As a result, I had AI generate more research questions to choose from.
Unfortunately, most AI chatbots don’t offer an easy way to dismiss suggestions or combine specific responses and work from only these (a behavior called apple-picking).
Pulling generated research questions into an offline document (like a FigJam or Google Doc) allows you to easily group items, remove duplicates, or reword suggested research questions.
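Grouping and rewording questions is judgment work best done by a human, but collapsing exact duplicates from several pooled sessions is easy to automate. A tiny, self-contained sketch (the questions shown are taken from the Copilot list above):

```python
# Tiny sketch: collapse exact duplicates after pasting pooled questions into one list.
# Grouping related questions and rewording them remains a manual step.
raw_questions = [
    "How do organizations structure their research repositories?",
    "How do organizations structure their research repositories?",  # duplicate from another session
    "What motivates users to engage with the research repository?",
]

seen = set()
unique_questions = []
for question in raw_questions:
    key = question.strip().lower()
    if key not in seen:
        seen.add(key)
        unique_questions.append(question.strip())

print(unique_questions)
```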
Begin a new chat session with your selected and refined set of research questions, so that the unwanted research questions are removed from the chat history.
After sharing the context and your chosen research questions, ask the AI tool to identify suitable research methods.
Example Prompt:
What study would you suggest to answer these research questions? Please be specific; cite which research questions would be answered by which research method if you suggest multiple methods.
Generative-AI advice is not always good advice. Often, these tools will suggest several methods and recommend triangulating data from multiple sources. This approach is not always needed. Also, not all methods will be practical or the best fit for your study. Additionally, AI may suggest interviews and focus groups even for research questions better suited to a behavioral research method.
Ask AI chatbots to tell you which research methods would be suited to which research question and why. We also recommend doing some further reading on your own about any methods that are unfamiliar to you.
In response to the prompt above (and given my chosen research questions), ChatGPT recommended a survey, interviews with select UX practitioners, and case studies. These were all my chosen methods, so AI had done well here!
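Scripted, this step just means sending your refined research questions along with the method-mapping prompt. A sketch, continuing the earlier script (the subset of questions shown comes from the Copilot list above; the model name remains an assumption):

```python
# Sketch: ask which research method answers which research question.
# Continues the earlier script (reuses `client` and `context`).
chosen_questions = [
    "How do organizations structure their research repositories?",
    "What challenges do practitioners face when contributing to or retrieving "
    "information from the repository?",
    "What impact does the research repository have on decision-making and project outcomes?",
]

method_prompt = (
    "What study would you suggest to answer these research questions? "
    "Please be specific; cite which research questions would be answered by "
    "which research method if you suggest multiple methods.\n\n"
    + "\n".join(f"- {q}" for q in chosen_questions)
)

method_response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "user", "content": context},
        {"role": "user", "content": method_prompt},
    ],
)
print(method_response.choices[0].message.content)
```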
AI can create inclusion criteria — a necessary component of your research plan. Do this step only after generating research questions and methods since these will inform who should participate in the research study.
Inclusion criteria (or recruitment criteria) are specific characteristics of the target population that need to be represented in your sample.
Start with inclusion criteria before asking the AI to help you write a screening questionnaire; AI can only craft an appropriate screener after it “knows” who you’re looking to recruit.
Example Prompt:
So that I recruit the right people for my interviews, help me create some inclusion criteria. What characteristics or behaviors should I recruit for?
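In a script, building on the earlier steps means passing the previous messages and the model’s reply back in as conversation history, so the inclusion criteria are grounded in the chosen research questions and methods. Continuing the sketch:

```python
# Sketch: carry the conversation forward so the inclusion criteria reflect the
# chosen research questions and methods. Continues the earlier script.
criteria_prompt = (
    "So that I recruit the right people for my interviews, help me create some "
    "inclusion criteria. What characteristics or behaviors should I recruit for?"
)

history = [
    {"role": "user", "content": context},
    {"role": "user", "content": method_prompt},
    {"role": "assistant", "content": method_response.choices[0].message.content},
    {"role": "user", "content": criteria_prompt},
]

criteria_response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=history,
)
print(criteria_response.choices[0].message.content)
```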
Finally, ask the AI to put together your study collateral: the screening questionnaire, interview questions, or tasks for your chosen method.
Unfortunately, there are a lot of bad examples of the above on the web. Conversational AI has been trained on all this data. Therefore, don’t be surprised if it produces poor study collateral on its first attempt! This is a major risk area for new researchers.
One way to mitigate this danger is to give the AI tool advice when crafting any of these outputs. Think of AI as a new research assistant who can learn extremely quickly.
Common mistakes that AI tools make include:
It’s not surprising that AI makes these mistakes since UX practitioners also make them!
To improve outputs, feed the AI essential tips, such as:
Additionally, when possible, feed the AI good examples of screener questionnaires, tasks, or interview questions, so it can follow their format or style.
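In a script, one way to do this is simple few-shot prompting: paste a good example into the prompt and ask the model to follow its format. The sketch below continues the earlier script; the example screener question is an illustrative placeholder written for this sketch, not an example from this article.

```python
# Sketch: give the model a well-written screener question to imitate ("few-shot"
# prompting), then ask it to draft the rest. Continues the earlier script; the
# example question below is an illustrative placeholder.
good_screener_example = (
    "Example of a well-written screener question:\n"
    "Q: Which of the following have you done in the past 6 months? (Select all that apply)\n"
    "  [ ] Added findings to a shared research repository\n"
    "  [ ] Searched a research repository for past findings\n"
    "  [ ] Set up or restructured a research repository\n"
    "  [ ] None of the above\n"
)

screener_prompt = (
    good_screener_example
    + "\nFollowing the format and style of the example above, draft a six-question "
      "screener for the inclusion criteria we just discussed. Avoid leading questions "
      "and avoid making the 'right' answer obvious."
)

history += [
    {"role": "assistant", "content": criteria_response.choices[0].message.content},
    {"role": "user", "content": screener_prompt},
]

screener_response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=history,
)
print(screener_response.choices[0].message.content)
```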
Even with this advice, AI can still make mistakes. If you’re doubting its answers, check primary sources or speak with an experienced researcher for old-fashioned human guidance.
If you have ChatGPT’s Plus Plan, you can use our GPT for planning your research.
With the proper context, examples, and advice, AI tools, like ChatGPT and Microsoft Copilot, can craft helpful research questions, tasks, interview questions, and other study collateral far more quickly than you could if you started from scratch.
Research leads and ResearchOps personnel can support junior researchers and PWDRs (People Who Do Research) by providing examples and advice that can be fed to AI agents. Experienced researchers can benefit from using AI to speed up their research-planning process and obtain further inspiration.