During my time as a Speech-Language Pathologist, I noticed many of my clients used communication apps on their iPads. However, they did not use the apps to communicate as much as expected because iPads were not as portable as their phones. I found there were far fewer communication apps developed for the phone, which led me to pursue this project for the clients I had previously worked with.
I completed the end-to-end human-centred design process, from secondary research and user research to task flow development and user testing. Because my problem space and solution were not familiar to most users, I took extra care to ensure my terminology and explanations were clear.
The end result was a communication app for people with aphasia to use in their day-to-day lives. I called it "PocketChat". The idea is that the Speech-Language Pathologist sets up the app for their client and then hands it off for the client to use. Feel free to select the button below to see a walkthrough of the app. I am currently working on updating PocketChat's UI.
Before UX design, I was a Speech-Language Pathologist. In that line of work I had the privilege of working with people with acquired brain injuries. In particular, I worked with people with communication difficulties (specifically "aphasia" - but more on that later) and noticed the difficulties they had using technology. This is what spurred me to choose this problem space for the project.
Before going into the project itself, here are a couple of terms that are important to know:
And the two types of users specific to communication apps:
The professional who plays an important role in the person with aphasia's recovery process - especially in the early stages. The SLP is the one who searches for and recommends tools to help their clients (people with aphasia) communicate.
The individual or client who has recently acquired aphasia. As previously mentioned, these individuals have trouble communicating and require tools to help them do so.
Of all the types of injuries around the world, injuries to the brain are among the most likely to cause death or permanent disability (Cathie), with an estimated 165,000 individuals in Canada sustaining an acquired brain injury each year (Northern Brain Injury Association).
In the early stages after acquiring aphasia, a person requires support to navigate a new world in which everyday tasks they once completed easily are suddenly difficult. Speech-Language Pathologists (SLPs) play a large role in the recovery process, giving their clients new tools and ways to communicate. This is why, for the purposes of my project, I chose to focus on the early stages of a person with aphasia’s recovery and look at how the SLP could use technology to help their clients with aphasia.
While there is a lot of potential to treat communication disorders with technology, several problems remain:
I conducted 30-minute user interviews with four SLPs and one SLP assistant who worked with people with aphasia. These interviews gave me first-hand information on what SLPs felt their clients needed.
Three themes arose:
“SLPs want to personalize communication apps for their clients but don't have the capacity to do so because of their clients’ vast number of needs.”
“SLPs feel there is a demand for technology to have extra considerations in terms of accessibility for people with aphasia because of their complex needs.”
“SLPs feel one of the most important functional tasks people with aphasia face is communicating with others and worry about the social isolation they face.”
I chose extra accessibility considerations as the focus of my solution because all 5 of my interviewees had pain points relating to this topic, making it the most compelling theme. From here, a revised how-might-we question was born...
"How might we help Speech-Language Pathologists provide the needed accessible tools to help their clients communicate in order to improve their quality of life?"
Because my problem space was complex and unfamiliar to most people, I decided to create a couple of personas. I wanted to give stakeholders an idea of the type of user that would be impacted by my solution and provide some education around this niche problem space.
After creating Christy, I wanted to get a sense of her experience and journey with technology for her clients. Doing so gave me a better sense of the moments of opportunity where a digital solution could intervene.
From the map, I came up with these areas of intervention:
As initially mentioned, there was also the need to consider the end user - the person with aphasia. Thus, a proto-persona of Christy's client was born.
"Apps only help so much because clients can only work with a given set of options that are given rather than generating their own ideas."
“Everyone’s needs are different and it’s not just the one thing that needs technological help”
"It would take a lot of customization to configure a program to meet clients’ specific needs."
As an SLP (Speech-Language Pathologist) I want to:
add items to sections so that I can personalize items/sections according to my clients’ needs
add my own pictures so that I can personalize items for my clients
As a person with aphasia I want to:
have picture supports for objects so that I can understand written information
have audio cues so that I can have more support to increase understanding of written items
Before going into the prototype itself, I developed a scenario and task flows for both personas. This will give you, the reader, a better sense of their journey and why my eventual solution is important for both types of users.
Christy is an SLP who is working with Ed, her client who has aphasia. They are at his favourite cafe. He wants a way to communicate his go-to drink order so that the next time he comes to the cafe, he can do so independently. Before going into the task flow, there are a few things to note.
After adding the item, Christy shows her client Ed where she has added his favourite drink and hands off his phone to him.
After developing the basis of my app, I was ready to focus on what I wanted it to look like and drew inspiration from various websites online. Because of the accessibility needs of the users, I focused on designs that had large surface areas and picture supports for categories, simple screen layouts for the camera view, and a large amount of padding around the text for buttons to allow the CTA to be easily read. I also looked for examples of buttons and categories with rounded corners to give a more modern feel.
In addition to UI inspiration found online, I wanted to incorporate researched accessibility features for people with aphasia (Rose et al. 342):
I was then able to start sketching ideas from the UI inspiration found. I chose to focus on:
I initially created all the sketches with the idea that the SLP would show their client the special features while using the app (e.g. the scan text feature). However, this made the task flow and journey confusing for the reader, and it was here that I received the suggestion to create two separate task flows. As a result, I kept the above features in the app but did not show them in the final task flow.
These were some of the changes I made after two rounds of user testing.
While there could always be further user testing, I was ready to inject colour and a brand identity into the app. Before taking this step, I thought about who the app was for and what type of feeling I wanted it to evoke. As a result, I developed this list of words:
basic, inclusive, inviting, warm, cheerful, confident, simple, minimal, friendly
I always kept the end user in mind. Research showed that sans-serif fonts are more readable than serif fonts. I wanted to make text as easy as possible for users to read, so I chose a sans-serif font that also differentiates letters well (e.g. "a" and "o").
It was also important that the app met the WCAG 2.1 AAA guidelines for accessibility.
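To make the AAA bar concrete: WCAG 2.1 defines a contrast ratio from the relative luminance of the text and background colours, and AAA requires at least 7:1 for normal-size text (4.5:1 for large text). Below is a minimal sketch of that calculation using the formulas from the WCAG 2.1 specification; the black-on-white example colours are illustrative, not PocketChat's actual palette.

```python
# Sketch of the WCAG 2.1 contrast-ratio check (AAA = 7:1 for normal text).
# Formulas follow the WCAG 2.1 definitions of relative luminance and
# contrast ratio; the colours used below are illustrative only.

def channel(value):
    """Linearize one sRGB channel given as an integer 0-255."""
    c = value / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance of an (r, g, b) colour."""
    r, g, b = (channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio (lighter + 0.05) / (darker + 0.05), from 1 to 21."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background: the maximum possible ratio, 21:1,
# comfortably above the 7:1 AAA minimum for normal-size text.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
print(round(ratio, 1))  # 21.0
print(ratio >= 7)       # True: passes AAA for normal text
```

A check like this is handy when pairing brand colours with text, since visually similar palettes can land on opposite sides of the 7:1 line.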
Additionally, I came up with a list of names and got feedback which led me to the name for the app:
Drawing from my moodboard, I wanted to create a logo that connected the letters to give a sense of “connectivity”. I also wanted to create a wordmark in a serif font as I felt it would give a feeling of warmth and timelessness. I created the following sketches based on my ideas and presented them for feedback.
While I wanted a wordmark made from a serif font, the feedback said otherwise. Others felt the 4th option (the sans-serif font) better matched the feeling of being basic, inclusive, inviting, cheerful, and minimal.
I then came up with the following wordmark and logo, with a bit of variation on the letters to make them friendlier and more inclusive. I used the sans-serif font I had chosen for the app's typography. I ended up choosing the circled iterations, as they best conveyed inclusivity and friendliness.
And the final word mark, logo, and app icon:
Finally, I was ready to add colour to my app. I knew I wanted to take the yellow and blue colours from the moodboard but was not sure about the tertiary colour. Again, after feedback from others, the consensus was that orange, alongside the other two, continued the feeling of cheerfulness and warmth and added some contrast to the colour palette. The following are screens I played around with when deciding where to add which colours.
You will see what I decided on in the final high-fidelity prototype. I based my decisions on what would best incorporate the primary colour yellow, with a bit of the secondary colour blue and the tertiary colour orange as small accents. I also chose to make the “add” buttons similar to the background with a light border, to let users know they were not the main action of the screen.
After countless hours of user research, identifying design opportunities, wireframing, user testing, developing the brand and injecting colour, I created the high-fidelity prototype. Shown are the key features.
Throughout the course, and specifically this project, I learned many things. Looking back, I’ve grown a lot as a UX designer. I began with little knowledge of UX design principles and processes. Now, I can confidently say I could complete the whole process from beginning to end.
I realized early on I chose a difficult problem space, one that has multiple users involved. Often I had to take the extra step to ensure I had clear storytelling throughout the process.
I learned to ask many questions throughout the process so that I would not be surprised with the task of making many changes at the end.
I also learned the importance of not coming into a project with a solution in mind. Many times I had an idea of what I would want in an app or what I would want something to look like. However, the project would go in a different direction than what I originally thought and as a result, I learned to be flexible and open to other ideas.
All in all, I thoroughly enjoyed the whole design process and learned what it truly means to create a solution that is human-centred. I hope to continue learning and developing my skills on future projects!
As for what's next, I'd love to continue user testing with the end users - people with aphasia. While I would have loved to conduct interviews and testing with them, I was not able to within the project's limited timeframe. A big reason I created this app in the first place was for them - to give them a sense of independence. A few aspects would require further A/B testing to determine what would be most helpful for people with aphasia or other disabilities; this could be done with more time and resources.
Thanks for reading!