Major UK supermarket
Strategic product design to transform the clearance of leftover stock and unlock benefits of AI
Activities
Journey mapping
Vision workshop
Sprint planning
Content creation
Ideation and co-design
Wireframing
Data pilot
Usability testing
Handover
Contextual inquiries
Duration
16 months
Outcomes
Unlocked £10m+ revenue
Built trust in AI
Established foundations for human-AI collaboration
The ability to query AI recommendations was a valuable first step towards semi-automation and data-driven efficiencies.
I joined the in-house design team creating a suite of enterprise tools to manage the lifecycle of products. A new design system would reduce development debt and ensure consistency across the suite.
My brief focused on the final clearance of non-food products, from batteries to babygrows. Commercial teams reviewed sales and made tactical price cuts to maximise profit and minimise waste from remaining stock.
Machine learning had the potential to generate £10–20m in savings and revenue. Clearance presented a low-risk opportunity to develop the technology. An end-to-end digital process would promote human-AI collaboration and drive efficiencies.
I led the discovery and UX in 2-week sprints, paired with a UI-focused consultant. In regular stand-ups we coordinated product priorities and cross-functional collaboration with our Product Manager (PM). We reported to in-house design leadership and shared learnings with other product teams to ensure patterns were consistent across the suite.
Over a series of interviews we documented the as-is process in Miro, with actors, triggers, touch points and frustrations.
Journey mapping the as-is revealed quick wins and indicated a need for two solutions
Our first step was to map the as-is process and gain a high-level understanding of the actors, actions, touch points and frustrations over time.
To capture variations before, during and after clearance, and to understand the range of needs, I recruited users working with representative product types and levels of responsibility.
There were a number of issues related to training and use of the existing tool:
Non-food and clothing divisions had evolved as separate external initiatives, and consequently each had its own language, metrics and ways of working. In the short term the divisions could not be aligned, so each needed its own solution, with a view to converging over time.
The existing tool only applied to one part of the process and was not designed to integrate with current ways of working. The added effort created resentment.
Many users did not understand how price recommendations were made or trust the calculations. As a result they tended to reject the output and use workarounds.
Clearance existed in an ecosystem of teams, promotional planning, oversight and approval, and financial forecasting which added to the complexity of the challenge.
Facilitating vision and retro exercises helped leaders align on project challenges and objectives.
Defining milestones for fundamental change
A vision workshop created a space for leaders and stakeholders to share current operational challenges and long-term aspirations for the product.
Analysis revealed four strategic objectives:
Combine data and human expertise to inform and improve decision-making
Increase flexibility for optimising discounts
Build trust in the tool
Increase automation and reduce manual processes
At the time it was not clear that leaders would need to orchestrate change at a fundamental level. The business would need to evolve financial processes and accountability to implement AI decisions.
An initial ‘review by exception’ concept required users to intervene when outcomes looked wrong.
The initial concept reduced effort but increased uncertainty
Co-design sessions revealed a crucial insight. Users approving AI recommendations were wholly accountable for price-change outcomes and budgets worth millions of pounds. They could justify their own decisions but had no way to explain the rationale of the AI, leading many to avoid or reject recommendations.
At users' insistence the existing tool provided 40 metrics to support bottom-up manual analysis. To reduce the data load and shift behaviour towards a top-down approach, we challenged users to align on the critical metrics. This streamlined product requirements but increased uncertainty for users, so subsequent sprints focused on the problem of confidence.
By understanding user needs and motivations, and principles of machine learning, I created content and features to support informed decision-making.
Education and relatable data visualisation opened the door to efficiency
Echoing the Henry Ford ‘faster horses’ story, the ability to compare price scenarios at a glance increased users’ confidence in AI recommendations, despite their stated preference for spreadsheets and tables.
Progress on other products enabled us to iterate feasible solutions quickly. Testing with real data ensured the prototype was relatable, which helped users assess and validate designs. Further collaboration across the enterprise design team ensured components and functionality were accommodated by the evolving design system.
Attending weekly working groups, we learned the existing AI tool had been introduced by data scientists and was not well understood by users – another factor contributing to low adoption. I created and validated new reference and training materials to support use of the AI technology.
A spreadsheet prototype helped to clarify the criteria, sources and formulae of key metrics for developers, enabling users to review and sign-off on outcomes.
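The spreadsheet prototype pinned down, for each key metric, its inputs, sources and formula. A minimal sketch of the kind of metric definition that work produced might look like this; the metric names and formulae here are illustrative assumptions, not the client's actual calculations.

```python
# Illustrative clearance metrics; names and formulae are hypothetical
# stand-ins for the definitions agreed in the spreadsheet prototype.

def sell_through_rate(units_sold: int, opening_stock: int) -> float:
    """Share of opening stock cleared so far (0 when there was no stock)."""
    return units_sold / opening_stock if opening_stock else 0.0

def margin(price: float, unit_cost: float) -> float:
    """Margin per unit at a given price."""
    return price - unit_cost

def markdown_margin_impact(current_price: float, proposed_price: float,
                           unit_cost: float, remaining_units: int) -> float:
    """Change in expected margin if all remaining units sell at the
    proposed price instead of the current price (negative for a cut)."""
    per_unit = margin(proposed_price, unit_cost) - margin(current_price, unit_cost)
    return per_unit * remaining_units
```

Making formulae this explicit is what let users review each metric against the source data and sign off on the outcomes.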
A data pilot ensured complex ‘bottom-up’ approvals were reliable
A design review revealed that primary users’ decisions were exported and aggregated at division level for comparison with forecasts, before review at director level. This manual approval process was out of scope but affected the requirements for all user types. A spike enabled me to secure time with higher work-level actors to define a solution.
Again, using real data helped validate the design: users spotted API and calculation errors despite our collaboration with developers during design. I led a ‘data pilot’ with front- and back-end developers and ‘Superusers’ to verify formulae and data sources, ensuring the MVP displayed reliable data.

This matrix helped developers understand the logic of the approval process for each work level.
Mapping system logic so all users have an accurate view of the process
Users at each work-level needed visibility of pending, reviewed and approved categories so price changes reached stores in time. To prepare for verbal handover I worked with my UI-weighted partner to draft documentation and clarify workflows, conditions and screen states for the three different work-level users.
By the time developers started building, my partner had rolled off the project and familiarity with the workflows had faded. I devised a detailed matrix of users and states to guide developers and clarify the interrelated conditions of the decision-approval process.
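The matrix essentially encoded, for each combination of work level and category state, which actions a user could take. A minimal sketch of that structure is below; the work-level names, states and permissions are assumptions for illustration, not the client's actual rules.

```python
# Hypothetical reconstruction of the decision-approval matrix:
# (work_level, category_state) -> actions available to that user.
APPROVAL_MATRIX = {
    ("primary", "pending"):   {"review", "amend", "submit"},
    ("primary", "reviewed"):  {"view"},
    ("division", "reviewed"): {"aggregate", "approve", "return"},
    ("division", "approved"): {"view"},
    ("director", "approved"): {"sign_off", "return"},
}

def allowed_actions(work_level: str, state: str) -> set:
    """Look up what a user at a given work level may do with a category
    in a given state; unknown combinations default to read-only."""
    return APPROVAL_MATRIX.get((work_level, state), {"view"})
```

Expressing the conditions as a single lookup table, rather than prose scattered across documents, is what made the inter-related states legible to developers.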
Shadowing users provided new insights and a backlog of improvements to help the product team transition to MVP.
Improving the as-is product renewed engagement with design
All the effort invested in the future process generated frustration among users, who still experienced problems working with the existing tool.
A round of contextual inquiries ensured users felt heard and helped us reset with some foundational research. I led the team in prioritising feasible improvements to help users work with data more efficiently:
Data science needed feedback on rejected recommendations, but users found providing it effortful. A bulk-apply feature would make this easier.
Users made decisions on related products at once but could not manage this in the tool. Multi-search would enable users to take action on related products together.
Users were accustomed to advanced filtering in Excel. Improved filtering in the tool would match those expectations.
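The bulk-apply improvement can be sketched simply: select several recommendations and attach one shared rejection reason, instead of rejecting each in turn. The data model and field names below are hypothetical, a sketch of the behaviour rather than the shipped implementation.

```python
# Illustrative bulk-reject: one shared reason applied to many
# recommendations, so data science still receives feedback.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    sku: str
    proposed_discount: float
    status: str = "pending"
    rejection_reason: Optional[str] = None

def bulk_reject(recs, skus, reason):
    """Reject every pending recommendation whose SKU is selected,
    recording the shared reason; returns how many were updated."""
    count = 0
    for rec in recs:
        if rec.sku in skus and rec.status == "pending":
            rec.status = "rejected"
            rec.rejection_reason = reason
            count += 1
    return count
```

One action covering many items is what turned rejection feedback from a chore into a by-product of normal work.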
“It’s an insane amount of work—it’s really come a long way.”
Clearance Product Manager, UK Supermarket
The MVP would plug into the existing UI and shift behaviour towards top-down analysis. Users would review predicted KPIs and drill down to amend discounts and improve outcomes.
Outcomes
The validated MVP unlocked an initial £10 million in additional revenue and represented an important first step towards automation and data-driven efficiencies.
By leading a user-centred approach to design I helped create an end-of-life solution integrated with a suite of tools. Our work set the foundations for human-AI collaboration in other enterprise products.
We made a number of final recommendations to build trust in the vision:
Look for ways to evidence the benefits and outcomes of AI-driven decisions.
Review the relationship with financial processes of the wider business where forecasting and planning affected users’ motivations and behaviour.
Establish a mechanism for AI to share accountability for the outcomes of decisions.
The long lifecycle of retail products meant it could take more than a year to fully evaluate the outcome of AI price recommendations. While the dashboard was in development I shifted focus to a version of the tool for ‘seasonal’ products, which have a much shorter lifespan. Case study coming soon.