Object avoidance for the blind


After running into a flagpole in the Namibian desert and a burnt-out car on the streets of Doncaster, I decided it was time to work on object detection. My previous challenges had all utilized very simple systems, and I wanted to stay within that simple communication paradigm for object detection.

Learning to train solo as a blind runner used two very simple inputs: distance and feeling underfoot. Combined, these inputs allowed me to learn to train solo along a 5 mile route. Objects were identified by me running into them and memorising where they were relative to an audible distance marker. I had reduced blind navigation to two simple elements, and that was enough to run. With one, well, two key assumptions: 1. I knew where all the obstacles were, and 2. there would be no new obstacles. I knew these assumptions were flawed, but I was happy to take on the risk.

Running through the desert solo made the exact same assumptions: I would be aware of all obstacles ahead of time, and there would be no surprise obstacles. This allowed for a very simple navigation system, as I had reduced the problem to one of bearing. As long as I knew the bearing I was running and could stick to it, I could navigate a desert. The system, developed with IBM, used a simple beep scheme to maintain bearing: silence denoted the correct bearing, a low tone meant I had drifted left, and a high tone that I had drifted right. Incredibly simple, but simple is all you need in these situations. An overload of sensors and data doesn't improve the system; it just makes the process of understanding what is going on beyond comprehension. By reducing navigation to one simple communication point to the user, in this case me, I was able to navigate the desert solo.

So where did it go wrong? Well, those key assumptions. The obstacles in this case were a flagpole and a rock field. The flagpole can be engineered out; with the rock field, however, we run into the complex system problem. A highly granular descriptive system would not allow the end user to navigate such a rock field. It was a unique and specialized environment that required centimeter-accurate foot positioning; indeed, the correct way to navigate it would be to avoid it entirely!

But could we avoid that burnt out car and flagpole? Yes we could. Could we make it a simple system for the user to understand? Absolutely.

The simplest way to communicate an object within a visual field is haptically. It is highly intuitive for the end user, with vibration feedback instantly recognizable as an obstacle. For the sensor, we chose a tiny ultrasonic sensor mounted at chest level. The chest was chosen as it always follows the direction of running. We had discounted a head mounting, as people often look in a different direction to the one they are moving in.

It is an incredibly simple system, but that is all it needs to be. The idea is to explore the minimal communication required for obstacle avoidance. In future revisions we intend to use multiple sensors, but we will be ever careful not to introduce complexity to the point that the simple communication system is interrupted. For example, it may be tempting to use a series of sensors all over the body; this, however, increases complexity and introduces issues with differentiating between the different vibrations and what each one denotes. Not to mention that human interpretation adds latency to the system, which may result in running into the very obstacle we are trying to avoid.
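As a rough illustration of that single-sensor mapping, here is a hypothetical sketch of my own (not the prototype's firmware) that turns a distance reading into a vibration intensity. The 30 cm and 200 cm thresholds are assumed values, and the real device may well use a simpler on/off scheme:

```python
def vibration_level(distance_cm, max_range_cm=200.0, min_range_cm=30.0):
    """Map an ultrasonic distance reading to a vibration intensity in [0, 1].

    Beyond max_range_cm the motor stays off; at or inside min_range_cm it
    vibrates at full strength; in between, intensity ramps linearly so a
    growing vibration signals a closing obstacle."""
    if distance_cm >= max_range_cm:
        return 0.0
    if distance_cm <= min_range_cm:
        return 1.0
    return (max_range_cm - distance_cm) / (max_range_cm - min_range_cm)
```

One number in, one sensation out: the point is that the whole user-facing protocol fits in a single ramp, which is what keeps interpretation latency low.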

This all sounds interesting, but does it work? Yes, yes it does. I was over in Munich recently to test an early prototype. With only one sensor, I felt we were so close that I was tempted to test it while running. The immediacy of the system is incredible. It is totally intuitive that a vibration denotes an obstacle. Avoiding the obstacle is a simple case of drifting left or right until there is no vibration, then moving on by.

Below is a video of the device in action. I will continue to give updates on the development of the system up until I give it a real workout at a packed city marathon, where I will run solo.

The iPhone, Twitter and Night Mode

Night mode was brought to Android Twitter last month, so it was only a matter of time before it landed on the iPhone. I believe Apple could take this one step further, though. I would like to see night mode as an OS-level option, with apps having alternative themes that are triggered as you toggle night mode in the OS. This would be far simpler than toggling it on a per-app basis. I would say it's likely Apple may introduce this in 2017 to pair with the OLED screen, simply because it will improve battery life.

Microsoft Improving Inclusivity through Dark Theming

I list dark themes as one of the key inclusive design decisions next to dynamic font sizing. It’s great to see Microsoft heading in this direction and perhaps even adding a second dark theme.
I am unsure why retina-burning white backgrounds have held strong for so long. After all, everyone uses their phone in dimly lit environments, where dark themes make perfect sense.
Would love to see this in Mac OS as well as iOS in the near future. With the transition to OLED screens that is highly likely.

A leap forward in indoor navigation

Continuing down the indoor navigation topic: Google's Project Tango, now simply named Tango, is finally reaching a consumer-ready version.
I saw this as the first consumer-grade product that could have fantastic implications for indoor navigation for the VI. It has the ability to 3D map a room, which could then be used to assist someone's navigation. The fact this is now in a consumer-ready model is very encouraging; a proof-of-concept app could easily make it onto the Google Play store now.

Indoor navigation and bluetooth

Indoor navigation is always an interesting challenge, so it is great to see that the next version of Bluetooth is improving beacon communication for this very problem.

Dark Mode – A Low Vision Necessity

For low vision users, contrast is an accessibility necessity. Converting to a dark background with white text, instead of the usual black on white, makes a huge difference. It is literally the difference between being able to see to use an app or not.
So much so that I used to root Android phones and apply dark-themed apps. This allowed me to configure apps such as Facebook to have a black background with white text, hugely increasing the contrast and thus the usability.
With WWDC around the corner, there is a wonderful Apple rumour about "Dark Mode". A Dark Mode for the standard iOS apps would be fantastic; Mail, for example, would be incredible with a Dark Mode. The interesting point in the rumour, however, is that of a Dark Mode API.
Allowing 3rd party apps to easily include a dark mode could be a massive game changer for low vision users. I can only hope this rumour comes to fruition and that it becomes a permanent setting within the accessibility settings.

A new Apple TV?

Following on from yesterday’s post about Apple introducing an Echo competitor.
Venture Beat reports that this may take the form of a new Apple TV. This sounds like a strange and difficult user experience. Requiring the TV to be on to interact with the device seems cumbersome.
There is the possibility that a new Apple TV may have an external speaker. However, this would no doubt be a speaker of poor quality. What I would like to see is something like an Apple speaker, akin to the Sonos. This would seem like a more natural interface to converse with an audio-based digital assistant, as well as giving Apple a fantastic platform for Apple Music.
WWDC is definitely hotting up to be interesting!

Apple to introduce a Siri SDK and Echo competitor

Apple has always been at the forefront of inclusivity, so I am excited to see they are working on a Siri SDK. While I am interested in seeing what Google brings to the table, and in Amazon's Alexa, it's comforting to know a company with a long history of accessibility is also working on similar integrations and devices.
Report: Apple planning Siri SDK for WWDC as it builds Amazon Echo/Google Home hardware competitor