With the public beta of iOS 11 now well underway, I thought it was time to dive in. There have been a number of improvements for accessibility, but the two I would like to cover are Smart Invert and image descriptions.
For the past few years, the way we interact online has been changing. In the early days of the internet, and indeed the early days of social media, interactions were predominantly text based. Now, however, images and video reign supreme. This leaves blind people and people with low vision at a disadvantage: we now struggle to interact online. It is easy to lose the thread of a conversation if it begins with an image or if images are posted as comments. Facebook and Twitter have made improvements on this front, the former adding automatic image tagging and the latter allowing users to add descriptions to their images. These features are limited, though: Facebook's tagging only works within Facebook itself, and Twitter relies on each user remembering to add a description. It is easy to see how restricting this can be for the blind and visually impaired. Dare to go outside these walled gardens and it becomes even worse.