I’m following the evolution of self-driving technology with a lot of interest. Many automotive companies say that by 2020/2022 they will commercialize autonomous cars that reach Level 4 or 5 of the SAE International Automated Driving standard.
Below is the table commonly adopted across the automotive industry. In short, SAE J3016 defines six levels: Level 0 (no automation), Level 1 (driver assistance), Level 2 (partial automation), Level 3 (conditional automation), Level 4 (high automation) and Level 5 (full automation).
Download the pdf here.
Wired frames the Level 3 human problem very clearly: humans are not capable of maintaining attention when they are neither interested nor required to. Put simply, a crash in self-driving mode cannot be avoided by the intervention of a driver who, in the meantime, may be reading a newspaper or watching a video. Humans are simply too slow and, in that situation, too distracted to recognize the risk and avoid a crash.
I work in Digital Communication, and I’ve worked on the functional and user-experience design of websites, mobile applications, advergames, digital signage systems and info kiosks.
I have loved cars and motorcycles since I was a child. I remember very well the “procedure” my parents had to follow just to start our old Fiat 500, the incredible interior design of my neighbour’s Renault 4, and the unintelligible styling of the Motobecane Mobyx parked in my garage.
I think that cars and motorcycles are the most impressive demonstration of humankind’s power of imagination and adaptation. Imagination, because whoever put together the technology necessary for the “autonomous run” of a four- or two-wheeled object was, to me, an artist, not an engineer. Adaptation, because driving a car or a motorcycle is one of the most complex mixtures of unnatural gestures that we perform on Earth.
That’s the point. That’s the Driving Paradox.
Soli is a Google project that enables users to interact with digital devices without touching them. Nothing really new, except for the hardware technology that concentrates everything into one small “piece of sand” on the device’s electronic board.
Using the same “approach” already employed by dolphins, whales and bats — sensing the reflections of emitted waves — the Google researchers created a single chip that can identify simple gestures and translate them into effective commands, even through other materials.