Believe it or not, self-driving cars are here. Google started its self-driving car project in 2009, but now, the cars are being tested in public and could be rolling down a street near you in no time.
But with more self-driving cars being tested on the road come more safety concerns. Self-driving cars are built on the promise of improving safety on roadways: they carry an array of sensors and gadgets to help them avoid accidents.
But here's where things get scary. Security researchers have discovered that these sensors can be hacked with equipment totaling a mere $60.
Using a simple computer like a Raspberry Pi ($25), a low-power laser, and a pulse generator, hackers can devise a system that tricks a self-driving car into seeing things that aren't really there, like a wall, a pedestrian, or another vehicle. Thinking it might crash, the car will slow down or stop completely. The attack can even overwhelm the car's sensor, rendering the vehicle completely immobile.
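The trick works because a lidar unit measures distance by timing how long its own laser pulse takes to bounce back, and it has no way to verify that a returning pulse is really its own echo. A minimal sketch of that time-of-flight arithmetic (hypothetical illustration, not Petit's actual tooling) shows why a replayed pulse fired after a chosen delay registers as a phantom object at any distance the attacker picks:

```python
# Sketch: how a spoofed echo places a phantom object in a lidar's view.
# A lidar computes distance = (speed of light * echo delay) / 2, so an
# attacker who fires a copied pulse after a chosen delay controls the
# distance the sensor perceives. Function names here are illustrative.

C = 299_792_458.0  # speed of light in m/s

def echo_delay_for_distance(distance_m: float) -> float:
    """Round-trip time a genuine echo from an object at distance_m would take."""
    return 2.0 * distance_m / C

def perceived_distance(echo_delay_s: float) -> float:
    """Distance the lidar infers from an echo arriving after echo_delay_s."""
    return C * echo_delay_s / 2.0

# Example: make the car "see" a wall 20 meters ahead. The attacker fires
# a spoof pulse roughly 133 nanoseconds after the lidar's real pulse:
spoof_delay = echo_delay_for_distance(20.0)
print(f"spoof pulse delay: {spoof_delay * 1e9:.1f} ns")
print(f"car perceives an obstacle at {perceived_distance(spoof_delay):.1f} m")
```

The nanosecond-scale timing is exactly what the pulse generator in the $60 rig provides, which is why no expensive hardware is needed.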
Jonathan Petit, Principal Scientist at the software security company Security Innovation, headed the project. During testing, he was able to create multiple fake obstacles, such as walls, cars, and pedestrians, on any side of the car and from up to 100 meters away.
But the public doesn't need to panic just yet. The technology tested is still in development, and Petit will detail his findings at the upcoming Black Hat Europe security conference in November, giving carmakers a chance to correct the problem before these vehicles reach the road. It all starts with security.
Petit told IEEE Spectrum: “Everyone knows security is an issue and will at some point become an important issue. But the biggest threat to an occupant of a self-driving car today isn’t any hack, it’s the bug in someone’s software because we don’t have systems that we’re 100-percent sure are safe.”