If designers had easier access to patient usage data, they could produce a better, safer user experience. Greater data flexibility can make designers more attentive to patient needs that may not always be evident to clinicians – but at all steps, you must take care to keep data safe and anonymized.
- Give designers more tools and insights to produce better designs and UX
- Enable designers to analyze patient insights
- Empower patients to be more active in their own care
What does it mean if a user came close to an error, but stopped themselves just beforehand – or quickly recovered correct device use on their own? Do you need to account for and differentiate these in your results?
- Outline the most common dilemmas about subjective observations
- Fine-tune your thresholds for identifying errors and pre-error steps that are “close enough”
- Understand the differing voices on this issue
If a user is struggling with your product and seems to be getting frustrated, you could step in and solve the problem for them – which would invalidate the study. If they stick it out and keep trying, you could salvage the results. How do you know when to intervene, and what are the best tools and phrasing to use in such circumstances?
- Pre-plan open-ended questions that encourage users to think through their frustrations
- Coach them to step back and re-evaluate while still maintaining normal use behavior
- Find the best language for describing training scenarios or IFUs
Focusing too much on traditional cognitive dimensions of usability testing can lead to a limited caricature of what your user is actually like. Often, users who know they are being observed become “people-pleasers” and will not admit frustration or failure, which can impact both your results and the overall product experience. The dynamics of a user’s emotional state are a key part of the user experience and belong in your data sets – but they are hard to define and harder to capture consistently. By finding an appropriate measure of a user’s changing emotional state, you can more clearly capture the patient journey and shape better outcomes.
- Use 1-on-1 interviews to construct taxonomies of emotions
- Look beyond mechanical error to reveal what users articulate as troubling to them
- Acknowledge the role emotion plays in allowing users to absorb what they hear
We have seen the US FDA and EU regulatory authorities increase scrutiny of medical product IFUs and evolve their preferences for both design details and validation methods. The FDA has published new guidance: Instructions for Use — Patient Labeling for Human Prescription Drug and Biological Products — Content and Format. This session will review that guidance and describe how sponsors have responded to recent regulatory observations from actual product submissions.
- No use of white text on color backgrounds
- No highlighting of important text in color (e.g., red)
- Emphasized concern about “color blind” users, which is in some cases misinformed
- Preferences for non-standard symbols
- Specific preferences for certain types of graphical illustrations not supported by usability test data
- IFU Layout option preferences including numbering and organizing schemes
- IFU Validation should be separate from User Interface validation
- IFU test participants should not be instructed to find information in the IFU materials while being asked Knowledge Task Questions.
Surgical robotic systems are among the most complex of all medical devices – a “system of systems,” with multi-layered user interfaces. The sheer number of steps required to manage such complexity can itself be a source of error. What are the most important steps your teams must bear in mind to keep robotics manageable and testable while still meeting timelines?
- Clearly envision the architecture of a multi-layered system
- Ensure users know how to prioritize alarms
- Confront circumstances where robotic features themselves may surprise or confuse surgical users – such as whether brief armature collisions trigger alarms
Even in early-stage research, device technicians are pressed to keep their testing areas matching real-life conditions as much as possible. But this can raise challenges when the fidelity of the use environment is not in itself critical. How much representation is really required, and how much is just an “extra”? Is there such a thing as too much detail?
- Highlight circumstances when asking users to pretend would lead to unnatural behaviors and inhibit product use
- Avoid risking your budgets and timelines by representing all fine detail in an environment
- Find a middle ground on the realism spectrum
There is no firm consensus on the amount of training decay that realistically represents final device use. What is the ideal amount that you should aim for in order to meet regulatory expectations and keep users safe from errors? And does the length and severity of training decay differ based upon task type?
- Review results of a pilot study and extrapolate next steps
- Highlight the differences in training decay for perceptual, cognitive, and motor tasks
- Refine methods for categorizing task types
Guidance documents for testing home-use devices leave many questions unanswered; in practice, FDA typically reviews study design on a case-by-case basis. What new steps must you take for testing devices that are primarily initiated, used, and discarded specifically in the home?
- Confront the home’s lack of supporting infrastructure compared to general-use devices intended for hospitals
- Understand regulatory requirements and flexibility
- Clearly distinguish between devices that can be used in a home and those designed specifically for home use
FDA reviewers expect you to clearly present use-related risks for critical tasks, with extensive detail on the stages where they occurred and the mitigations you have planned. Whenever you make a product change that involves revalidation, you will need to provide a clear and strong rationale.
- Paint the clearest picture of what Use Errors look like
- Recognize that task analysis is the foundation for Use Error Analysis
- Learn the clearest lessons from past enforcement
Too often, human factors is discussed in only a few product phases, in the run-up to product evaluation – when so much more could have been achieved if key studies had started earlier. With better-planned, routine involvement, HF engineers can act as a differentiator among competitors, gain deeper understanding of end users, design more customer-centric products, and ultimately save on costs.
- Facilitate individual research and customer interaction
- Justify HF involvement in all stages, including post-market evaluation
- Engage earlier in order to guide the design of a more effective, user-friendly product
- Cultivate deeper awareness of HF among other teams to minimize inter-departmental disagreements
A well-executed summative test requires input from many disciplines, including risk management, product management, quality assurance, device development, training, and others – not all of which will have usability expertise. Team members from these other disciplines may not fully understand just how invested they need to be in the test, from the early stages, for your device to be a success.
- Set expectations regarding the internal stakeholders needed in each test phase
- Scope your test to evaluate a comprehensive set of risk controls
- Know when to start recruiting and scheduling participants
- Create an accurate test environment for more accurate test results
- Disposition findings efficiently
Regardless of your position, there are small changes you could make starting today to better incorporate inclusive design into your company and your product development. These seemingly small changes can have big impacts down the line that you might not have foreseen.
- Define concepts – what is inclusive design, and what isn’t
- Analyze the connection between workplace inclusion and device design inclusion
- Spotlight inclusive design tools you can begin incorporating into every phase of the product development life cycle
Getting HF involved early enough in the design process to collaborate and meaningfully impact product development can require working with diverse, widely dispersed teams across different time zones, functions, countries, and cultures. What technologies are best for guaranteeing accessible and understandable information for all team members, and how quickly and efficiently can you leverage them for better partnerships?
- Train regional project managers to “follow the sun” with briefings and data transfers across time zones
- Maintain shared secure data archives for ready use
- Grasp that expectations for timeframes may differ regionally