Chatbots can be one of the most frustrating and inaccessible parts of a website, yet they also have the potential to create a good user experience. Our current chatbot has accessibility issues that must be addressed, and that is the purpose of this project: to identify those issues and provide solutions to remediate them.
It’s no secret that AI chatbots are gaining popularity, but they still pose a set of challenges for people who use assistive technologies.
According to the Bureau of Internet Accessibility, some chatbots are designed in ways that don't make it clear how different elements are related; for example, a user should be able to tell at a glance which button to click in response to a message.
Chatbot buttons often sit in the lower right corner of the screen, which is a challenge for screen reader and keyboard users: they may be forced to tab through the entire page each time before they can reach the button.
Adding landmarks and skip links can solve some of these issues.
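As a sketch of what this could look like in our widget (the "chatbot-widget" ID and the class name are hypothetical, for illustration only), a skip link placed first in the tab order lets keyboard users jump straight to the chat, and a labeled landmark exposes the widget in the screen reader's landmarks list:

```ts
// A minimal sketch: a skip link that jumps keyboard users straight to the
// chatbot, plus a labeled landmark role on the widget itself.
// "chatbot-widget" is a hypothetical element ID.

const chatbot = document.getElementById("chatbot-widget");
if (chatbot) {
  // Expose the widget as a named complementary landmark so screen
  // reader users can jump to it from their landmarks list.
  chatbot.setAttribute("role", "complementary");
  chatbot.setAttribute("aria-label", "Chat with our virtual assistant");

  // Insert a skip link as the first focusable element on the page.
  const skipLink = document.createElement("a");
  skipLink.href = "#chatbot-widget";
  skipLink.textContent = "Skip to chat assistant";
  skipLink.className = "skip-link"; // visually hidden until focused, via CSS
  document.body.prepend(skipLink);
}
```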
Another challenge for screen reader users is that they need to be notified of chatbot replies and conversation updates. An aria-label can provide context for individual elements, while an ARIA live region can announce new messages as they arrive.
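A minimal sketch of that announcement pattern, assuming a hypothetical "chatbot-messages" container: marking the message list as a polite live region lets screen readers announce each new reply without stealing focus.

```ts
// A sketch of announcing chatbot replies, assuming a hypothetical message
// container. role="log" with aria-live="polite" tells screen readers to
// announce new entries without interrupting what the user is doing.

const messageLog = document.getElementById("chatbot-messages");
if (messageLog) {
  messageLog.setAttribute("role", "log");
  messageLog.setAttribute("aria-live", "polite");
  messageLog.setAttribute("aria-label", "Conversation");
}

// Appending a reply to the live region is enough to get it announced.
function appendReply(text: string): void {
  const item = document.createElement("p");
  item.textContent = text;
  messageLog?.appendChild(item);
}
```

Using "polite" rather than "assertive" avoids cutting off whatever the user is currently reading.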
Our objectives are:
Research questions:
I recruited seven participants with diverse backgrounds, accents, and accessibility needs, all of whom have experience using chatbots. The participants will interact with the chatbot by completing a series of predefined tasks, such as asking for information about home equity loans.
The key metrics for this test will be task completion rate, error rate, and user satisfaction ratings.
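To be concrete about how those numbers will be calculated, here is a sketch; the TaskResult shape is a hypothetical logging format, not our actual instrumentation:

```ts
// A sketch of how the key metrics could be computed. The TaskResult
// shape is hypothetical, for illustration only.

interface TaskResult {
  completed: boolean;   // did the participant finish the task?
  errors: number;       // wrong clicks, dead ends, voice misrecognitions
  satisfaction: number; // post-task rating on a 1-5 scale
}

function summarize(results: TaskResult[]) {
  const n = results.length;
  return {
    completionRate: results.filter(r => r.completed).length / n,
    errorsPerTask: results.reduce((sum, r) => sum + r.errors, 0) / n,
    meanSatisfaction: results.reduce((sum, r) => sum + r.satisfaction, 0) / n,
  };
}
```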
After completing the tasks, the participants will answer these questions:
"Until I started using several AI chatbots with voice mode, I didn't actually realize that having a familiar accent had any impact on such interactions with AI. Although I'm used to listening to American English voices, I'm not at all used to interacting with American English voices, which makes it odd. I feel weird using it."
The purpose of this competitive analysis is to see how other AI chatbots are handling accessibility and language.
Pros:
Cons:
Pros:
Cons:
Pros:
Cons:
Our research taught us that the company's current chatbot needs improvements so that users can browse it with screen readers, and that we must eliminate the barriers preventing people from diverse linguistic backgrounds from using the voice chat.
After empathizing with the users, we started to brainstorm the "how might we" question.
How might we improve the company's current AI chatbot so that it adapts to linguistic differences in the English language and assistive technologies?
Our AI chatbot isn't user-friendly because it's difficult to navigate with a keyboard and its voice chat cannot understand different English accents.
I strongly advocated for including a way to set the voice chat's accent so that it recognizes the user's speech and talks back using the same accent they chose for it.
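As a sketch of how this could work in the browser (browser support for the Web Speech API varies, and recognition quality ultimately depends on the speech engine), setting the BCP 47 language tag for both recognition and synthesis steers each toward the accent the user selected:

```ts
// A minimal sketch of accent-aware voice chat using the Web Speech API.
// Setting the BCP 47 language tag (e.g. "en-AU") steers both recognition
// and synthesis toward the accent the user selected in settings.

const accent = "en-AU"; // hypothetical value read from the user's settings

// Recognize speech with the selected accent.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
const recognition = new SpeechRecognitionImpl();
recognition.lang = accent;
recognition.onresult = (event: any) => {
  const transcript = event.results[0][0].transcript;
  console.log("Heard:", transcript);
};
recognition.start();

// Reply out loud with a voice that matches the same accent, if available.
function speak(reply: string): void {
  const utterance = new SpeechSynthesisUtterance(reply);
  utterance.lang = accent;
  const voice = speechSynthesis.getVoices().find(v => v.lang === accent);
  if (voice) utterance.voice = voice;
  speechSynthesis.speak(utterance);
}
```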
We also discussed how AI chatbots should adapt to other assistive technologies such as responding to sign language, which would require the camera to be turned on.
This triggered a lengthy discussion about how AI could handle this ethically, since users may have concerns about their privacy. It also raises security implications for a financial institution, such as how to prevent someone else from impersonating the account owner.
This part of the process was important to me because it was thought-provoking. It was one of many discussions we ended up having with other departments to increase the company's AI maturity.
This is the storyboard I created to present and gather buy-in from stakeholders. It depicts Lee, an Australian woman trying to use voice chat to check her account balance, but her words are misinterpreted by the chatbot due to her accent.
Panel 1: Lee logs into her bank account on her phone.
Panel 2: Lee uses voice chat to check her balance.
Panel 3: The chatbot doesn't understand her Australian accent. Lee's message: "Check right account balance." Chatbot: "Hmmmm... I didn't quite understand that."
Panel 4: Lee changes her phone's language settings to English (Australia).
Panel 5: Lee uses voice chat again, interacting with a robot avatar.
Panel 6: Lee successfully gets her balance after a few attempts. Lee: "Check bank account balance." Chatbot: "Hmmmm... I didn't quite understand that." Lee: "Check bank balance." Chatbot: "Your balance ... is $9600 AUD."
I designed the high-fidelity prototypes. They include settings that let users not only change the chatbot's language, but also pick the dialect and accent. The voice chat feature responds using the accent the user picked, and it recognizes the user's speech in that accent.
For example, a user will be able to pick Spanish, select Argentina as the dialect, and choose from any of the Spanish accents spoken across Latin America.
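The settings model behind the prototype could look something like this sketch; the names are hypothetical, and the real option lists would come from whichever locales the speech engine supports:

```ts
// A sketch of the voice settings model behind the prototype. The names
// are hypothetical; real option lists would come from the locales the
// speech engine actually supports.

interface VoiceSettings {
  language: string; // e.g. "Spanish"
  dialect: string;  // e.g. "Argentina"
  accent: string;   // BCP 47 tag used for recognition and synthesis
}

// Lee's storyboard scenario expressed as a settings choice.
const leeSettings: VoiceSettings = {
  language: "English",
  dialect: "Australia",
  accent: "en-AU",
};

const latinAmericanSpanish: VoiceSettings = {
  language: "Spanish",
  dialect: "Argentina",
  accent: "es-AR", // one of several Latin American Spanish accents
};
```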
These features are being built and will be tested with users again. I prepared a testing plan to verify that the keyboard navigation issues are fixed and that screen readers announce chatbot replies and updates.
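Part of that plan can be automated. Here is a sketch of a smoke check, reusing the same hypothetical "chatbot-messages" container from the earlier live region example:

```ts
// A sketch of an automated smoke check for the announcement fix,
// assuming the hypothetical "chatbot-messages" container from above.

function liveRegionIsConfigured(): boolean {
  const log = document.getElementById("chatbot-messages");
  return (
    log !== null &&
    log.getAttribute("role") === "log" &&
    log.getAttribute("aria-live") === "polite"
  );
}

console.assert(
  liveRegionIsConfigured(),
  "Chatbot replies will not be announced by screen readers"
);
```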
We are now liaising with the engineering team to make these ideas technically feasible.
This project taught me how important it is to stay up to date on the latest AI advancements and to keep raising conversations about their implications.
Companies don't have to wait until major changes happen. They can, however, prepare for them.
I also learned the importance of testing your existing products with users even after they have launched, since you never know what could be going wrong. The process does not end after launch.
Something I would have done better during this project is involve the engineers more, since their technical input was needed.
All in all, I can say that testing this product with users who have disabilities has opened the door to increasing the company's AI and accessibility maturity.