Innovating a one-touch digital cookbook designed with visually-impaired persons in mind to streamline the home cooking experience.
User Research
Wireframing
Usability Testing
Brand Design
Prototyping
Figma
Miro
Procreate
4 months
(Feb 2024 - May 2024)
For many people, cooking at home is both economical and an outlet to wind down and experiment with new skills and flavors in one's own kitchen. Although there can be a learning curve, many home cooks have access to recipes that accommodate varying levels of experience and dietary restrictions, whether on traditional paper or an online blog.
A predicament arises, however, when these recipes are not structured accessibly for all home cooks. Many of today's recipes rely on visual cues that do not translate well for cooks with visual impairments (e.g. "cook until golden brown"), while interruptions like pictures, chef's notes, and ads hinder assistive technologies such as screen readers, which typically read a page from start to finish without filtering out irrelevant information.
Understandably, this can cause frustration and create an aversion to cooking. Consequently, individuals with visual impairments more frequently turn to alternatives like eating out or buying frozen meals, given the barriers to preparing healthier options themselves. This not only hinders proper nutrition, but is also costly.
Limited accessibility in modern online recipes dissuades visually-impaired individuals from cooking at home, leading to poor nutritional status and high food budgets.
I designed a digital cookbook that dictates recipes step by step, paced by preset time intervals. Users can play, pause, or repeat steps with simple voice commands and stay focused on preparing delicious, rewarding meals.
I began by organizing a research plan, but with one main constraint: I did not have access to users who identify as "legally blind" to interview or test with. Thus, I divided my research into two parts:
I compiled information from published studies, online forums, and YouTube videos to gain a deeper understanding of specific challenges that blind persons face when cooking, as well as investigate any preferences that blind cooks have adapted to better navigate the kitchen.
I conducted five interviews (via Zoom and in person) with adults who have experience using digital recipes and cookbooks to gather usability insights on the pain points users experience, as well as to better understand what affects cooking confidence and what unique habits home cooks may have. I then organized these findings into affinity diagrams:
I compiled a chart of the strengths and weaknesses of four digital cookbooks that provided strategic insight into what features and solutions competitors had already launched. The selection contains businesses of ranging sizes for a more holistic assessment.
I also focused on the mobile versions of these services, as users indicated during interviews that they preferred to look up recipes on their phones rather than haul laptops into their cooking space.
I found that a commonality among these sites was that they built brand identities promoting diversity and accessibility in cooking, and encouraged positive long-term habits like meal planning. However, there were limited features that fostered community engagement and support, and little to nothing aimed at making recipe-following a hands-free experience.
To consolidate my research, I created two user personas.
One underlying parallel between my two lines of research was the frustration of repeatedly needing to touch or "reset" devices to repeat or pause an instruction. After brainstorming, I had the idea to eliminate manual interaction entirely, and to instead operate a recipe playthrough using speech-recognition AI.
While the idea initially seemed a little farfetched, implementing AI in digital cookbooks is by no means a novel concept (flashback to Tasty's "Botatouille" in the comparative analysis)! But rather than utilize it as a search-engine enhancer, I wanted to integrate AI to make the cooking process feel less cluttered. By using simple, one-word voice commands to control the pacing of simplified step-by-step recipes, users can focus on the task at hand, or replay the instruction, without lifting a finger.
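The playthrough mechanic can be sketched as a small state machine. This is an illustrative sketch only -- the `RecipePlayer` class and its command names are hypothetical stand-ins for logic that would sit behind a real speech-recognition layer:

```python
# Hypothetical sketch of the hands-free playthrough: each step pairs an
# instruction with a preset time interval, and one-word commands set the pace.

class RecipePlayer:
    def __init__(self, steps):
        self.steps = steps    # list of (instruction, seconds) pairs
        self.index = 0        # current step
        self.paused = False

    def current_step(self):
        """The line the app would dictate for the current step."""
        instruction, seconds = self.steps[self.index]
        return f"Step {self.index + 1}: {instruction} ({seconds}s)"

    def handle(self, command):
        """Apply a one-word voice command; return what to dictate next."""
        if command == "pause":
            self.paused = True
            return "Paused."
        if command == "play":
            self.paused = False
            return self.current_step()
        if command == "repeat":
            return self.current_step()
        return "Try 'play', 'pause', or 'repeat'."


player = RecipePlayer([("Dice the onion", 120), ("Saute until translucent", 300)])
print(player.handle("play"))    # Step 1: Dice the onion (120s)
print(player.handle("repeat"))  # Step 1: Dice the onion (120s)
```

In a real build, the preset interval would advance `index` automatically when the timer elapses, with the command handler interrupting that timer.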
With design priorities in mind, I created a sitemap that organized the app's content into four sections to make the user experience intuitive and simple.
To address the insights I had gathered through user data, I routed three main user flows. The first pathway allows users to follow a recipe from start to finish using the "voice command" feature. The second lets users post an inquiry using the Community feature. The third allows users to create and add to a recipe collection in order to plan for future meals.
I sketched out low-fidelity wireframes that would allow me to better visualize my design needs, and later refined them in my mid-fidelity prototype based on feedback from peers.
I arranged the screens at a higher fidelity to spotlight the functionality of the layout and user flow before implementing visual design elements.
These were the screens featured in the main user flow.
The name "Carrot" was derived from the popular myth that the compounds in carrots can promote good eye health. However, I wanted to expand on this idea, and reframe it as improving one's perspective on something that may initially seem intimidating, like cooking, to ultimately work towards a healthier, more sustainable lifestyle.
The colors are also playfully drawn from the same vegetable muse -- I chose a punchy orange as the accent color, and balanced it out with a dark, leafy green and a soft off-white to create a clean but chipper style guide that offers an enthusiastic welcome to home cooks of any background.
These are select screens from my first iteration of designs, which either feature task flows that were highlighted during usability testing, or underwent notable changes following user feedback.
I conducted a series of 1:1 moderated usability tests over Zoom, and observed where users expressed hesitation or confusion regarding how to progress in the flow or what purpose a feature served. Users were asked to complete the following four tasks:
1) Select a recipe and follow it
2) Save a recipe and locate it in My Meals
3) Create a Collection and add a recipe to it
4) Submit a post using the Community feature
I measured success through the task success rate of each flow, reported ease of navigation, and Net Promoter Score (NPS). Here are the results:
Total participants
Avg difficulty rating (out of 5)
Success rate across tasks
Net Promoter Score (NPS)
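For context on that last metric: NPS is the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6). A quick sketch of the arithmetic, using hypothetical ratings rather than the actual survey responses:

```python
# Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
# The sample ratings below are hypothetical, for illustration only.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

sample = [10, 9, 9, 8, 7]   # five hypothetical 0-10 survey responses
print(nps(sample))          # 3 promoters, 0 detractors out of 5 -> 60
```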
Overall, users found the task flows and interface simple and intuitive to navigate. Participants were also notified in the Introduction portion of the interview that due to the nature of the prototype, there would only be visual representations of the voice recognition AI -- still, all users reported perceiving the concept of a "touch-free" digital cookbook as practical and helpful.
However, users also raised concerns -- here is how I addressed the most frequently mentioned issues.
Users expressed difficulty reading the ingredients list and the listed prep time on the recipe screen.
I reformatted the timer into an icon display, similar to the rating in the original design. Additionally, I separated the ingredients by food item and measurement to improve readability.
Users asked how they would move to the next step without touching the screen if they finished a task before the timer went off. Some also asked whether the voice-command instructions would be dictated during every step, concerned that this would become irksome over longer recipes.
I added a "Skip" voice command to make it easy for users to proceed to the next step touch-free, and then redesigned the instructions as speech bubbles that would appear and disappear as they are read aloud.
I recognize that a central feature of this app requires voice-recognition AI that would not only take time and resources to implement, but is also difficult to replicate in a high-fidelity prototype. Ultimately, I wanted to take some creative liberties with the scope of technology featured in this project -- to go forward not with the preconception that "this can't work," but rather the question, "how could I make this work?" Still, it's important to recognize that "the AI will do it" isn't a feasible design solution without cogent backing.
One of the most valuable aspects of compassionate, universal design to me is the "lightbulb moment" that strikes when you discover a frustrating obstacle hidden in what most people would take for granted as a mundane event.
This project gave me the opportunity to utilize values of universal design, and better understand the tenet that designing for marginalized users ultimately benefits everyone. Through future endeavors, I aim to utilize these “lightbulb moments” to guide me towards more effective and impactful design solutions.
UX Design