Testing AI Solution Designs with Real Users

Artificial intelligence (AI) and machine learning (ML) have become ubiquitous buzzwords, tossed around in corporate strategy meetings and leadership conferences. There is no doubt these technologies represent a massive opportunity for digital transformation across industries. From personalized healthcare to autonomous vehicles, the possibilities for using AI to solve real human problems seem endless.

However, for all the promise, many AI and ML initiatives never move beyond the pilot stage, and of those that do, many fail to deliver the expected return on investment or business value. A recent Gartner survey found that nearly half of AI projects ultimately don’t produce results.

So with all the brainpower and capital being poured into AI, why do so many initiatives seem to stall or fall flat? There are several factors at play, but one of the most important is a lack of testing with real users during solution design. Too often, AI and ML solutions are designed entirely by data scientists and engineers without any input from the actual people who will use and interact with the solution. 

Assumptions are made about users’ needs, desired workflows, and preferences for how the solution should operate or integrate into their daily tasks. Requirements are defined based on these assumptions. Development powers ahead toward launch without ever soliciting feedback from real users. The result is a solution that fails to meet users’ actual needs or integrate smoothly into their workflows. Adoption suffers, value is lost, and the AI project is branded a failure.

This common pattern shows why testing AI and ML solutions with real users continuously throughout design is critical to success. In this post, we’ll explore why integrating user testing into the design process is key to AI done right, and share proven strategies and tips for making it a core part of your AI solution development. Let’s get started.

The Problem: AI Solutions Built Without User Input

It’s far too easy for organizations to fall into the trap of designing AI and ML solutions without meaningful input from real users. Often this happens because the process is led primarily by technical teams of data scientists whose expertise lies more in statistics and computer science than human-centered design. 

These teams often start by identifying sources of internal data they can use to train machine learning models. Or executives mandate a focus on using AI to solve a particular business problem. In both cases, the technical teams immediately jump into solution mode without ever consulting real users. 

Well-meaning stakeholders will provide requirements and assumptions around how they think users will adopt and interact with the AI solution. But rarely is qualitative research conducted to truly understand users’ workflows, pain points, behavioral habits, and needs. Design sprints, prototyping, and user testing are seen as luxuries that would slow development down.

So the race to launch kicks off. Agile development cycles crank away, algorithms are tuned, and a technically sound solution emerges. But when it’s finally put in front of users, the result is nearly always the same: lackluster adoption, frustrated users, and minimal business impact.

Why does this happen? Root causes include:

– Teams making assumptions about what users want and need without ever talking to them. Solution doesn’t align with reality.

– Not understanding users’ current workflows. New solution adds friction instead of seamlessly integrating.

– Failing to test usability of UI/UX with real users. Unintuitive and confusing for users.

– Solving for a technical challenge rather than human needs. Cool technology looking for a problem.

– Assuming users will naturally adopt the solution if it’s powerful and technically advanced. Lack of change management.

– Not appreciating the “human factor” in how people experience AI solutions. Focus on pure functionality misses the mark.

– Seeking to automate tasks users find meaningful. Removes human judgment and supervision prematurely.

The organizations that fall into these traps typically exhibit a few cultural tendencies that enable this flawed approach:

– Engineers and data scientists wield most influence. Less focus on design thinking and user experience.

– Bias toward action and speed. Testing seen as causing delays.

– Solution requirements created by executives in conference rooms. Leaders detached from real user needs.

– Company metrics incentivize shipping new solutions over measurable impact. 

– Past failures blamed on users’ resistance to change rather than poor design.

– Lack of skills, budget, and leadership buy-in for rigorous user testing.

These cultural challenges can be extremely difficult to overcome. But investing the time to integrate user experience and testing into AI solution design pays massive dividends in much higher adoption, user satisfaction, and business impact.

The Solution: Test Continuously with Users from the Start 

To maximize the chances of delivering successful AI solutions that get adopted by real users, organizations should involve representative users early and continuously throughout the design process. Here is an overview of how to deeply integrate user testing into AI/ML solution development:

Understand Users’ Needs and Workflow:

– Conduct qualitative research like interviews, surveys, and observations to understand current user workflows and pain points before design starts. Identify key use cases.

– Map out user workflows and persona profiles to truly empathize with their needs and perspectives.

Involve Users Early in Solution Design:

– Engage real prospective users from the very beginning of the design process via focus groups, advisory panels, and prototyping exercises.

– Show early mockups and prototypes to users for feedback (don’t wait until you think it’s perfect). Iterate based on areas of confusion.

Test and Refine Continuously with User Input:  

– Conduct usability studies on each iteration of prototypes to identify any points of friction or confusion.

– Develop feedback loops and mechanisms to rapidly roll user input into the next design iteration (a minimal sketch follows this list).

– Use collaboration tools to engage users in the solution’s evolution and foster a partnership dynamic.
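
To make “feedback loops and mechanisms” concrete, here is a minimal sketch of one way a team might log and triage usability findings between iterations. The data model, severity labels, and example findings are illustrative assumptions, not a prescribed tool:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Severity(Enum):
    BLOCKER = 1   # user could not complete the task
    FRICTION = 2  # task completed, but with confusion or workarounds
    POLISH = 3    # cosmetic; nice to fix eventually

@dataclass
class Finding:
    """One observation from a usability session, tied to a prototype version."""
    prototype_version: str
    task: str
    description: str
    severity: Severity
    observed_on: date = field(default_factory=date.today)

def triage(findings):
    """Order findings so blockers feed the next design iteration first."""
    return sorted(findings, key=lambda f: f.severity.value)

# Hypothetical findings from testing prototype v0.3:
backlog = triage([
    Finding("v0.3", "reset password", "User never noticed the AI suggestion panel", Severity.FRICTION),
    Finding("v0.3", "file a claim", "Chat dead-ends with no path to a human agent", Severity.BLOCKER),
])
for f in backlog:
    print(f"[{f.severity.name}] {f.task}: {f.description}")
```

Any issue tracker can play this role; the point is that each finding is tied to a specific prototype version and ranked before the next sprint starts.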

Conduct Realistic Pilot Testing:

– Run pilots with small groups of real users for “beta” testing before full development/rollout.

– Monitor pilots in real usage contexts, gather feedback, assess value delivery.

– Refine solution based on pilot results before scaling/launching.

The key is to test prototypes with users early and often, rather than just presenting a finished product and asking for feedback. Catching issues while still on the drawing board avoids costly rework later in the process. This agile, iterative approach builds solutions that truly fit users’ needs rather than forcing them to conform to a pre-determined design.

Benefits of Integrating User Testing Into AI Solution Design:

– Prevents wasted time/effort from developing features users neither need nor want

– Catches usability issues early before launch 

– Builds intuitive UI/UX. Fits seamlessly into user workflows.

– Encourages user adoption by actively involving them in design decisions

– Maximizes business value/ROI by solving real problems for users

– Reduces change management challenges

– Surfaces risky assumptions about how users will actually use the solution

– Fosters a human-centric vs technology-centric solution mindset

By taking a user-centric approach versus a technology-centric approach, organizations can vastly improve the success rate of their AI and ML initiatives. Let’s look at some best practices for making user testing a core part of your AI solution design process.

Tips for Integrating User Testing into AI Solution Design

Designing successful AI solutions that users adopt and love is as much art as science. Here are some proven tips for effectively integrating user experience testing into your AI/ML solution development process:

Identify Key Users, Workflows, and Use Cases

– Map out target user personas and workflows early in the process. Recruit representative users for testing.

– Prioritize use cases by user need and potential business value. Align tests to key scenarios.

Create a Testing Plan and Schedule 

– Develop a plan and timeline for testing key prototypes and solution increments with users.

– Increase testing frequency as design progresses to catch issues quickly.

– Define success criteria based on adoption, satisfaction, and workflow improvement (see the sketch after this list).
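
One way to keep success criteria testable rather than aspirational is to encode thresholds up front and check each testing round against them. The metrics and thresholds below are illustrative assumptions; substitute the measures that matter for your solution:

```python
# Illustrative success criteria; metric names and thresholds are assumptions.
SUCCESS_CRITERIA = {
    "task_completion_rate": 0.85,  # share of users finishing key tasks unaided
    "satisfaction_score": 4.0,     # mean post-session rating out of 5
    "time_saved_pct": 0.20,        # workflow time saved vs. the current process
}

def evaluate_round(results):
    """Flag which criteria a round of user testing met or missed."""
    return {metric: results.get(metric, 0.0) >= target
            for metric, target in SUCCESS_CRITERIA.items()}

# Hypothetical round: satisfaction passes, the other two do not yet.
print(evaluate_round({
    "task_completion_rate": 0.78,
    "satisfaction_score": 4.2,
    "time_saved_pct": 0.12,
}))
```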

Build Rapid Prototypes to Elicit Feedback

– Use sketches, mockups, wireframes, and interactive click-through prototypes for early testing.

– Focus on quickly translating ideas to prototypes for user feedback instead of polished features.

Provide Realistic Use Cases and Examples  

– Craft real-world scenarios and examples for testing sessions to mirror how users will engage with the solution.  

– Observe how users expect to flow through tasks rather than just asking their opinion.

Iterate Frequently Based on User Feedback

– Be willing to rapidly iterate designs and features based on user feedback. Don’t get too attached to any one solution.

– Use collaboration tools and channels to quickly incorporate user input into the next design iteration.

Treat Testing as a Conversation and Partnership 

– Have open, two-way exchanges where users feel heard and part of the process.

– Provide transparency into how their feedback will shape solutions. Thank them for participating.

Testing Early Prevents Costly Rework Down the Line

Integrating robust user testing does require some cultural adjustments for organizations not accustomed to this approach:

– Stakeholders must embrace iterative design over defined requirements upfront.

– Testing cycles mean designs take longer to “lock down,” a tradeoff for a higher-quality outcome.

– Added budget needed for incentives, participant recruitment, and research.  

– Requires close coordination between technical teams and user researchers/designers.

However, these investments pay for themselves many times over by preventing costly rework late in development when changes have an outsized impact. Overall, organizations that embrace user-centric design can deliver AI solutions with greater adoption, user satisfaction, and business value.

Challenges to Adopting This Approach

As with any major shift in process, there will be challenges when first integrating continuous user testing into AI and ML solution design. Being aware of these hurdles is the first step to overcoming them:

Internal Resistance to Iterative Approach

After decades of traditional waterfall development processes, both leadership and project teams may push back on iterative design. Stakeholders want defined requirements upfront. Help them understand how testing reduces risk and leads to better solutions.

Perception that Testing Takes Too Long

Some voices will argue involving users slows progress. Mitigate this by building rapid prototypes that take days, not weeks. Accelerate review and feedback cycles. Demonstrate how testing reduces costly late-stage changes.

Difficulty Recruiting a Representative Sample of Users 

Identifying and convincing real users to participate can be tough. Leverage networks and partners to find willing testers. Provide adequate incentives for their time.

Users Unsure What They Want

Untrained users may struggle to articulate needs and give feedback. Frame interactions as a collaborative discussion with examples and contexts. Observe their process vs. just asking opinions.

Testers Only Point Out Surface Issues

Work with user researchers to create testing plans that reveal underlying issues beyond superficial feedback. Take time to interpret results.

No Validation of Business Value

It’s not enough to just fix usability issues. Measure performance against objectives: did the AI improve the business KPIs that matter to users?
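
To illustrate, validating business value can start as simply as comparing a pilot cohort against a baseline on the KPI the solution was meant to move. The function and resolution counts below are hypothetical:

```python
def kpi_lift(baseline_hits, baseline_total, pilot_hits, pilot_total):
    """Relative improvement of the pilot over the baseline on a rate KPI."""
    baseline_rate = baseline_hits / baseline_total
    pilot_rate = pilot_hits / pilot_total
    return (pilot_rate - baseline_rate) / baseline_rate

# Hypothetical: issues fully resolved, old chatbot vs. pilot AI assistant.
lift = kpi_lift(baseline_hits=120, baseline_total=1000,
                pilot_hits=310, pilot_total=1000)
print(f"Resolution-rate lift: {lift:.0%}")  # prints "Resolution-rate lift: 158%"
```

A real evaluation would also confirm the cohorts are comparable and the difference is statistically meaningful, but even this simple check forces the team to name the target KPI up front.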

The key is securing executive support to invest in the extra time and resources needed to adopt this approach. Position it as a culture change that will pay dividends across projects. With experience, organizations can streamline the process so user testing happens quickly and seamlessly.

Case Study: Leading Retailer Adopts User-Centric AI Design 

Global retailer Fantastico has implemented various AI and machine learning solutions across their business over the past decade with mixed results. Some projects delivered value but many fell flat when real employees struggled to use and adopt the tools in their actual workflows.

Fantastico decided to take a new approach when embarking on a major initiative to implement AI virtual assistants to improve customer service. They invested in understanding customers’ needs and expectations by:

– Conducting focus groups, surveys, and interviews with real customers to learn their pain points with Fantastico’s customer service.

– Observing hundreds of hours of customer service calls to map common scenarios and workflows.

– Building proto-personas of key customer segments and use cases.

Armed with this customer insight, Fantastico involved real customers early in the conversational AI assistant design process. The technical and UX teams created an initial prototype AI assistant, then rapidly iterated based on direct customer feedback:

– Customers interacted with the AI chatbot mockup for common service scenarios. What did they try to ask? Where did they get confused?

– The team tweaked the assistant’s dialogue flow, added coverage for missing support topics, and refined failover to human agents (sketched after this list).

– They ran multiple build-test-learn loops as the assistant was refined.

– The assistant was piloted with a small group of customers and further refined based on real-world results.
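
The “failover to human agents” mentioned above is commonly driven by the assistant’s confidence in its intent match, plus a cap on failed turns. A minimal sketch of that routing logic, with invented thresholds and labels, might look like this:

```python
CONFIDENCE_THRESHOLD = 0.70  # assumption: below this, don't trust the AI answer
MAX_FAILED_TURNS = 2         # assumption: escalate after repeated misses

def route_turn(intent, confidence, failed_turns):
    """Decide whether the assistant answers, clarifies, or hands off to a human."""
    if intent is None or confidence < CONFIDENCE_THRESHOLD:
        failed_turns += 1
    if failed_turns >= MAX_FAILED_TURNS:
        return "handoff_to_human"      # transfer along with conversation context
    if confidence >= CONFIDENCE_THRESHOLD:
        return "answer_with_ai"
    return "ask_clarifying_question"

print(route_turn("track_order", confidence=0.91, failed_turns=0))  # answer_with_ai
print(route_turn(None, confidence=0.30, failed_turns=1))           # handoff_to_human
```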

By deeply involving customers early and often, the team built an AI-powered assistant uniquely adapted to users’ needs vs. Fantastico’s assumptions. This customer-centric approach was a key ingredient in the assistant’s success. 

Pilot results were impressive: 

– 250% increase in customers who had their issue fully resolved after interacting with the AI assistant vs. previous chatbot.

– 310% increase in positive user sentiment after resolution, based on NLP analysis of user comments (one scoring approach is sketched after this list).

– 76% said the assistant provided the fastest way to get their issue resolved.
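
For readers curious what “NLP analysis of user comments” can look like in practice, one common off-the-shelf approach is NLTK’s VADER sentiment scorer. The comments and positivity cutoff below are invented for illustration; this post does not describe Fantastico’s actual pipeline:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

comments = [  # hypothetical post-resolution user comments
    "That was fast, my refund issue is finally sorted!",
    "Useless bot, I had to repeat myself three times.",
    "Pretty smooth overall, though the menu was a bit confusing.",
]

# VADER's compound score runs from -1 (negative) to +1 (positive);
# treating anything above 0.05 as positive is a common convention.
positive = sum(analyzer.polarity_scores(c)["compound"] > 0.05 for c in comments)
print(f"Positive sentiment share: {positive / len(comments):.0%}")
```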

Fantastico demonstrated that taking the time to test AI solution prototypes with real users at multiple points during design yields higher adoption and business value than optimizing for technical accuracy and speed alone. The user-centric approach is now being replicated across other customer-facing AI projects, leading to more successful implementations.

Conclusion

AI and machine learning offer immense opportunities to solve real problems for organizations and users. But to realize this potential, solutions need to be designed for the people who will actually use them. Involving real users early and continuously testing during solution design ensures what gets built will be adopted and loved rather than resisted.

By integrating robust user testing into your AI/ML solution design process and embracing the iteration that comes with it, your organization can avoid the pitfalls of technology-first design. The time invested is repaid through more seamless user workflows, higher adoption, and increased business value. Make testing with real users a core pillar of your AI development process to set your initiatives up for transformational success.