User-experience research methods are great at producing data and insights, while ongoing activities help get the right things done. Alongside R&D, ongoing UX activities can make everyone’s efforts more effective and valuable. At every stage in the design process, different UX methods can keep product-development efforts on the right track, in agreement with true user needs and not imaginary ones.
One of the questions we get the most is, “When should I do user research on my project?” There are three different answers:
Do user research at whatever stage you’re in right now. The earlier the research, the more impact the findings will have on your product, and by definition, the earliest you can do something on your current project (absent a time machine) is today.
Do user research at all the stages. As we show below, there’s something useful to learn in every single stage of any reasonable project plan, and each research step will increase the value of your product by more than the cost of the research.
Do most user research early in the project (when it’ll have the most impact), but conserve some budget for a smaller amount of supplementary research later in the project. This advice applies in the common case that you can’t get budget for all the research steps that would be useful.
Each project is different, so the stages are not always neatly compartmentalized. The end of one cycle is the beginning of the next.
The important thing is not to execute a giant list of activities in rigid order, but to start somewhere and learn more and more as you go along.
Top UX Research Methods
• Field study
• Diary study
• User interview
• Stakeholder interview
• Requirements & constraints gathering
• Competitive analysis
• Design review
• Persona building
• Task analysis
• Journey mapping
• Prototype feedback & testing (clickable or paper prototypes)
• User-story writing
• Card sorting
• Qualitative usability testing (in-person or remote)
• Benchmark testing
• Accessibility evaluation
• Analytics review
• Search-log analysis
• Usability-bug review
• Frequently-asked-questions (FAQ) review
When deciding where to start or what to focus on first, use some of these top UX methods. Some methods may be more appropriate than others, depending on time constraints, system maturity, type of product or service, and the current top concerns. It’s a good idea to use different or alternating methods each product cycle because they are aimed at different goals and types of insight. The chart below shows how often UX practitioners reported engaging in these methods in our survey on UX careers.
If you can do only one activity and aim to improve an existing system, do qualitative (think-aloud) usability testing, which is the most effective method to improve usability. If you are unable to test with users, analyze as much user data as you can. Data (obtained, for instance, from call logs, searches, or analytics) is not a great substitute for people, however, because data usually tells you what, but you often need to know why. So use the questions your data brings up to continue to push for usability testing.
The discovery stage is when you try to illuminate what you don’t know and better understand what people need. It’s especially important to do discovery activities before making a new product or feature, so you can find out whether it makes sense to do the project at all.
An important goal at this stage is to validate and discard assumptions, and then bring the data and insights to the team. Ideally this research should be done before effort is wasted on building the wrong things or on building things for the wrong people, but it can also be used to get back on track when you’re working with an existing product or service.
Good things to do during discovery:
Conduct field studies and interview users: Go where the users are, watch, ask, and listen. Observe people in context interacting with the system or solving the problems you’re trying to provide solutions for.
Run diary studies to understand your users’ information needs and behaviors.
Interview stakeholders to gather and understand business requirements and constraints.
Interview sales, support, and training staff. What are the most frequent problems and questions they hear from users? What are the worst problems people have? What makes people angry?
Listen to sales and support calls. What do people ask about? What do they have problems understanding? How do the sales and support staff explain and help? What is the vocabulary mismatch between users and staff?
Do competitive testing. Find the strengths and weaknesses in your competitors’ products. Discover what users like best.
Exploration methods help you understand the problem space and design scope, and address user needs appropriately.
Compare features against competitors.
Do design reviews.
Use research to build user personas and write user stories.
Analyze user tasks to find ways to save people time and effort.
Show stakeholders the user journey and where the risky areas are for losing customers along the way. Decide together what an ideal user journey would look like.
Explore design possibilities by imagining many different approaches, brainstorming, and testing the best ideas in order to identify best-of-breed design components to retain.
Obtain feedback on early-stage task flows by walking through designs with stakeholders and subject-matter experts. Ask for written reactions and questions (silent brainstorming), to avoid groupthink and to enable people who might not speak up in a group to tell you what concerns them.
Iterate designs by testing paper prototypes with target users, and then test interactive prototypes by watching people use them. Don’t gather opinions. Instead, note how well designs work to help people complete tasks and avoid errors. Let people show you where the problem areas are, then redesign and test again.
Use card sorting to find out how people group your information, to help inform your navigation and information organization scheme.
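One common way to analyze open card-sort results is a similarity matrix: count how often participants placed each pair of cards in the same group. A minimal sketch, using invented example data (the card names, group labels, and `similarity_counts` helper are all hypothetical):

```python
from itertools import combinations
from collections import Counter

# Hypothetical open-card-sort results: one dict per participant,
# mapping that participant's own group label to the cards in it.
sorts = [
    {"money": ["Pricing", "Invoices"], "help": ["FAQ", "Contact"]},
    {"billing": ["Pricing", "Invoices", "Contact"], "support": ["FAQ"]},
    {"costs": ["Pricing", "Invoices"], "info": ["FAQ", "Contact"]},
]

def similarity_counts(sorts):
    """Count how many participants grouped each pair of cards together."""
    pairs = Counter()
    for participant in sorts:
        for group in participant.values():
            # Sort so each pair has one canonical key regardless of order.
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

# Pairs grouped together most often are candidates for the same
# navigation category.
for (a, b), n in similarity_counts(sorts).most_common(3):
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")
```

Dedicated card-sorting tools produce the same matrix (plus clustering), but even this rough count surfaces which items users consistently see as belonging together.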
Testing and validation methods are for checking designs during development and beyond, to make sure systems work well for the people who use them.
Do qualitative usability testing. Test early and often with a diverse range of people, alone and in groups. Conduct an accessibility evaluation to ensure universal access.
Ask people to self-report their interactions and any interesting incidents while using the system over time, for example with diary studies.
Audit training classes and note the topics, questions people ask, and answers given. Test instructions and help systems.
Talk with user groups.
Staff social-media accounts and talk with users online. Monitor social media for kudos and complaints.
Analyze user-forum posts. User forums are sources for important questions to address and answers that solve problems. Bring that learning back to the design and development team.
Do benchmark testing: If you’re planning a major redesign or measuring improvement, test to determine time on task, task completion, and error rates of your current system, so you can gauge progress over time.
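The three benchmark metrics above reduce to simple arithmetic over per-session records. A sketch with invented session data (the field names and `benchmark` helper are assumptions, not a standard API):

```python
from statistics import mean

# Hypothetical benchmark sessions for one task: whether the participant
# completed it, time taken in seconds, and errors made along the way.
sessions = [
    {"completed": True,  "seconds": 74,  "errors": 1},
    {"completed": True,  "seconds": 58,  "errors": 0},
    {"completed": False, "seconds": 120, "errors": 3},
    {"completed": True,  "seconds": 91,  "errors": 2},
]

def benchmark(sessions):
    """Summarize standard benchmark metrics for one task."""
    done = [s for s in sessions if s["completed"]]
    return {
        "completion_rate": len(done) / len(sessions),
        # Time on task is conventionally reported for successful attempts only.
        "mean_seconds": mean(s["seconds"] for s in done),
        "errors_per_session": mean(s["errors"] for s in sessions),
    }

print(benchmark(sessions))
```

Running the same calculation against the redesigned system, with the same tasks and a comparable participant pool, gives you the before/after comparison that justifies (or questions) the redesign.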
Listen throughout the research and design cycle to help understand existing problems and to look for new issues. Analyze gathered data and monitor incoming information for patterns and trends.
Survey customers and prospective users.
Monitor analytics and metrics to discover trends and anomalies and to gauge your progress.
Analyze search queries: What do people look for and what do they call it? Search logs are often overlooked, but they contain important information.
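Two quick wins from a search log are the most frequent queries (what people look for, in their own words) and zero-result queries (vocabulary mismatches and missing content). A minimal sketch over invented log rows (the log format and `analyze` helper are assumptions):

```python
from collections import Counter

# Hypothetical search-log rows: (query string, number of results returned).
log = [
    ("reset password", 12),
    ("password reset", 12),
    ("cancel subscription", 0),
    ("reset password", 12),
    ("refund", 4),
    ("cancel subscription", 0),
]

def analyze(log):
    """Tally top queries and queries that returned no results."""
    normalized = [(q.strip().lower(), n) for q, n in log]
    top = Counter(q for q, _ in normalized)
    zero = Counter(q for q, n in normalized if n == 0)
    return top, zero

top, zero = analyze(log)
print(top.most_common(3))   # what people search for most, in their words
print(zero.most_common())   # searches that found nothing
```

Note that "reset password" and "password reset" count separately here; real analysis usually adds stemming or manual grouping of near-duplicate queries before drawing conclusions.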
Make it easy to send in comments, bug reports, and questions. Analyze incoming feedback channels periodically for top usability issues and trouble areas. Look for clues about what people can’t find, their misunderstandings, and any unintended effects.
Collect frequently asked questions and try to solve the problems they represent.
Run booths at conferences that your customers and users attend so that they can volunteer information and talk with you directly.
Give talks and demos: capture questions and concerns.
Ongoing and strategic activities can help you get ahead of problems and make systemic improvements.
Find allies. It takes a coordinated effort to achieve design improvement. You’ll need collaborators and champions.
Talk with experts. Learn from others’ successes and mistakes. Get advice from people with more experience.
Follow ethical guidelines. The UXPA Code of Professional Conduct is a good starting point.
Involve stakeholders. Don’t just ask for opinions; get people onboard and contributing, even in small ways. Share your findings, invite them to observe and take notes during research sessions.
Hunt for data sources. Be a UX detective. Who has the information you need, and how can you gather it?
Determine UX metrics. Find ways to measure how well the system is working for its users.
Follow Tog's (Bruce Tognazzini's) First Principles of Interaction Design.
Use evidence-based design guidelines, especially when you can’t conduct your own research. Usability heuristics are high-level principles to follow.
Design for universal access. Accessibility can’t be tacked onto the end or tested in during QA. Access is becoming a legal imperative, and expert help is available. Accessibility improvements make systems easier for everyone.
Give users control. Provide the controls people need: offer choice, but not infinite choice.
Prevent errors. Whenever an error occurs, consider how it might be eliminated through design change. What may appear to be user errors are often system-design faults. Prevent errors by understanding how they occur and designing to lessen their impact.
Improve error messages. For remaining errors, don’t just report system state. Say what happened from a user standpoint and explain what to do in terms that are easy for users to understand.
Provide helpful defaults. Be prescriptive with the default settings, because many people expect you to make the hard choices for them. Allow users to change the ones they might need or want to change.
Check for inconsistencies. Consistency is important for learnability. People tend to interpret differences as meaningful, so introduce differences deliberately rather than arbitrarily. Adhere to the principle of least astonishment, and meet expectations instead.
Map features to needs. User research can be tied to features to show where requirements come from. Such a mapping can help preserve design rationale for the next round or the next team.
When designing software, ensure that installation and updating are easy. Make installation quick and unobtrusive. Allow people to control updating if they want to.
When designing devices, plan for repair and recycling. Sustainability and reuse are more important than ever. Design for conservation.
Avoid waste. Reduce and eliminate nonessential packaging and disposable parts. Avoid wasting people’s time, also. Streamline.
Consider system usability in different cultural contexts. You are not your user. Plan how to ensure that your systems work for people in other countries. Translation is only part of the challenge.
Look for perverse incentives. Perverse incentives lead to negative unintended consequences. How can people game the system or exploit it? How might you be able to address that? Consider how a malicious user might use the system in unintended ways or to harm others.
Consider social implications. How will the system be used in groups of people, by groups of people, or against groups of people? Which problems could emerge from that group activity?
Protect personal information. Personal information is like money. You can spend it unwisely only once. Many want to rob the bank. Plan how to keep personal information secure over time. Avoid collecting information that isn’t required, and destroy older data routinely.
Keep data safe. Limit access to both research data and the data entrusted to the company by customers. Advocate for encryption of data at rest and secure transport. A data breach is a terrible user experience.
Deliver both good and bad news. It’s human nature to be reluctant to tell people what they don’t want to hear, but it’s essential that UX raise the tough issues. The future of the product, or even the company, may depend on decision-makers knowing what you know or suspect.
Track usability over time. Use indicators such as number and types of support issues, error rates and task completion in usability testing, and customer satisfaction ratings, to show the effectiveness of design improvements.
Include diverse users. People can be very different culturally and physically. They also have a range of abilities and language skills. Personas are not enough to prevent serious problems, so be sure your testing includes as wide a variety of people as you can.
Track usability bugs. If usability bugs don’t have a place in the bug database, start your own database to track important issues.
Pay attention to user sentiment. Social media is a great place for monitoring user problems, successes, frustrations, and word-of-mouth advertising. When competitors emerge, social media posts may be the first indication.
Reduce the need for training. Training is often a workaround for difficult user interfaces, and it’s expensive. Use training and help topics to look for areas ripe for design changes.
Communicate future directions. Customers and users depend on what they are able to do and what they know how to do with the products and services they use. Change can be good, even when disruptive, but surprise changes are often poorly received because they can break things that people are already doing. Whenever possible, ask, tell, test with, and listen to the customers and users you have. Consult with them rather than just announcing changes. Discuss major changes early, so what you hear can help you do a better job, and what they hear can help them prepare for the changes needed.
Recruit people for future research and testing. Actively encourage people to join your pool of volunteer testers. Offer incentives for participation and make signing up easy to do via your website, your newsletter, and other points of contact.
For more information on UX, visit Techved Consulting.