As the COVID-19 pandemic restrictions have faded into the background, I can’t help but miss the virtual options that were available during lockdown. Some have remained, but others have gone back to in-person only. I value in-person interaction, but I have less tolerance for constantly organizing transportation than I did before the pandemic, because I know alternatives exist. I have just enough vision to see what’s in front of me, but not enough to pick up many of the nuances that come naturally to sighted people. So many aspects of daily life depend on visual cues: navigation signs and signals, restaurant menus, and so on.
I think the rise of AI has the potential to offload some of these tasks. Is there a Roomba capable of handling a lot of dog (and human) hair yet? 😉 (Note: Hopefully I’ve answered my own question, because I just got a new robot vacuum that claims to do just that.) Not having to squint at spreadsheets will also help people with low vision, who want to preserve what sight they have for the things they care about. I could have the freedom to focus on tasks that don’t rely so heavily on visual detail.
I have bilateral cochlear implants and have witnessed the progression of their capabilities over the years. With vision, I think technological advances will continue to help people with vision impairments until scientists figure out how to cure blindness itself. In 2007, the iPhone and the Kindle were both released, and they revolutionized the way the blind community could complete tasks and communicate. These devices let users adjust settings on the device itself, rather than needing to rely on separate assistive technology to enlarge text or provide text-to-speech.
Like anything else, ChatGPT and other AI systems have their downsides, but they have the potential to benefit people with disabilities on a level similar to the release of the iPhone and Kindle.