GhostStripe attack haunts self-driving cars by making them ignore road signs

Six boffins, mostly hailing from Singapore-based universities, say they can prove it's possible to interfere with autonomous vehicles by exploiting the machines' reliance on camera-based computer vision, causing the cars to fail to recognize road signs.

The technique, dubbed GhostStripe [PDF] in a paper to be presented at the ACM International Conference on Mobile Systems next month, is undetectable to the human eye, but could be deadly to Tesla and Baidu Apollo drivers as it exploits the sensors employed by both brands – specifically CMOS camera sensors.

It basically involves using LEDs to shine patterns of light on road signs so that the cars’ self-driving software fails to understand the signs; it’s a classic adversarial attack on machine-learning software.

Crucially, it abuses the rolling digital shutter of typical CMOS camera sensors. The LEDs rapidly flash different colors onto the sign as the active capture line moves down the sensor. For example, the shade of red on a stop sign could look different on each scan line to the car due to the artificial illumination.
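
To make the trick concrete, here is a minimal Python sketch of that striping effect. The sensor resolution, readout timings, and LED colors are illustrative assumptions, not values measured in the paper:

```python
# Toy simulation of rolling-shutter striping under a rapidly flickering LED.
import numpy as np

ROWS, COLS = 480, 640          # assumed sensor resolution
ROW_READOUT_US = 30.0          # assumed time to read out one scan line
LED_PERIOD_US = 900.0          # assumed LED color-cycle period

# The LED cycles through tints far faster than a full-frame readout.
LED_COLORS = np.array([
    [1.0, 0.2, 0.2],   # reddish flash
    [0.2, 1.0, 0.2],   # greenish flash
    [0.2, 0.2, 1.0],   # bluish flash
])
SIGN_REFLECTANCE = np.array([0.8, 0.1, 0.1])   # a uniformly red stop sign

frame = np.zeros((ROWS, COLS, 3))
slot_us = LED_PERIOD_US / len(LED_COLORS)      # how long each tint lasts
for row in range(ROWS):
    t = row * ROW_READOUT_US                   # when this row is exposed
    phase = int(t // slot_us) % len(LED_COLORS)
    # Each row sees the sign under whichever tint is lit at its readout
    # time, so the captured image is banded with horizontal color stripes.
    frame[row, :] = SIGN_REFLECTANCE * LED_COLORS[phase]

# Nearby rows disagree about the sign's color:
print(frame[0, 0], frame[10, 0], frame[20, 0])
```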

The GhostStripe paper’s illustration of the ‘invisible’ adversarial attack against a self-driving car’s traffic sign recognition

The result is a camera capturing an image full of lines that don’t quite match each other as expected. The picture is cropped and sent to a classifier within the car’s self-driving software, which is usually based on deep neural networks, for interpretation. Because the snap is full of lines that don’t quite seem right, the classifier doesn’t recognize the image as a traffic sign and therefore the vehicle doesn’t act on it.
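
A toy stand-in for that pipeline, below, swaps the deep neural network for a trivial color-consistency check purely to illustrate why striped crops get rejected; none of this is the actual Tesla or Apollo code:

```python
# A crop of the sign region is "classified"; striped crops fall through.
import numpy as np

def classify_sign(crop: np.ndarray, max_row_variation: float = 0.1) -> str:
    """Return 'stop sign' only if the crop looks uniformly red."""
    row_means = crop.mean(axis=1)                 # mean color per scan line
    row_variation = row_means.std(axis=0).max()   # how much the lines disagree
    mostly_red = row_means.mean(axis=0)[0] > 0.5
    if mostly_red and row_variation < max_row_variation:
        return "stop sign"
    return "no sign"   # striped frames land here, so the car never reacts

clean = np.tile([0.8, 0.1, 0.1], (60, 60, 1))   # a uniform red crop
striped = clean.copy()
striped[::2] *= [0.3, 3.0, 3.0]                 # tint alternate scan lines
print(classify_sign(clean), "/", classify_sign(striped))
# -> stop sign / no sign
```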

All of this has been demonstrated before.

Yet these researchers not only distorted the appearance of the sign as described, they said they were able to do it repeatedly in a stable manner. Rather than try to confuse the classifier with a single distorted frame, the team were able to ensure every frame captured by the cameras looked weird, making the attack technique practical in the real world.

“A stable attack … needs to carefully control the LED’s flickering based on the information about the victim camera’s operations and real-time estimation of the traffic sign position and size in the camera’s [field of view],” the researchers explained.
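
The timing problem the quote describes can be sketched as follows; the row-readout and frame-rate constants are illustrative assumptions, not figures from the paper:

```python
# Given an estimate of where the sign sits in the frame, work out when the
# rows covering it are read out, so the LED schedule can target those rows.
ROWS = 480                 # assumed sensor height in scan lines
ROW_READOUT_US = 30.0      # assumed per-row readout time
FRAME_PERIOD_US = 33_333   # assumed ~30 fps frame period

def sign_row_window(center_y: float, height_px: float) -> tuple[float, float]:
    """Readout times (us after frame start) of the first and last sign rows."""
    top = max(0, int(center_y - height_px / 2))
    bottom = min(ROWS - 1, int(center_y + height_px / 2))
    return top * ROW_READOUT_US, bottom * ROW_READOUT_US

# As the car approaches, the sign grows and drifts in the image, so the
# attacker must re-estimate this window and re-phase the LEDs every frame.
for center_y, height in [(120, 40), (160, 80), (220, 160)]:
    start, end = sign_row_window(center_y, height)
    print(f"flash colors between {start:.0f} and {end:.0f} us "
          f"into each {FRAME_PERIOD_US} us frame")
```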

The team developed two versions of this stabilized attack. The first was GhostStripe1, which does not require access to the vehicle, we're told. It employs a tracking system to monitor the target vehicle's real-time location and dynamically adjusts the LED flickering accordingly to ensure a sign isn't read properly.
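
As a hypothetical illustration of how tracked position could feed that adjustment, a pinhole-camera projection turns the car-to-sign distance into the sign's apparent size in the image, which in turn feeds the row-timing window above; the focal length and sign geometry here are invented:

```python
# Pinhole projection: apparent size shrinks linearly with distance.
SIGN_HEIGHT_M = 0.75   # assumed physical sign height
FOCAL_PX = 1200.0      # assumed focal length in pixels

def apparent_sign_height_px(distance_m: float) -> float:
    return FOCAL_PX * SIGN_HEIGHT_M / distance_m

for distance in (50.0, 25.0, 10.0):   # the car closing in on the sign
    rows = apparent_sign_height_px(distance)
    print(f"at {distance:4.1f} m the sign spans ~{rows:5.1f} rows; re-phase LEDs")
```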

GhostStripe2 is targeted and does require access to the vehicle, which could perhaps be covertly done by a miscreant while the vehicle is undergoing maintenance. It involves placing a transducer on the power wire of the camera to detect framing moments and refine timing control to pull off a perfect or near-perfect attack.
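
A hedged sketch of that frame-synchronization idea: assume each frame readout causes a brief current spike on the camera's power wire, detect the spikes, and recover the frame period so the LED flicker can be phase-locked to it. The signal below is synthetic, and the paper's actual sensing hardware is not reproduced:

```python
# Detect frame-start spikes in a simulated power-line trace.
import numpy as np

SAMPLE_RATE_HZ = 1_000_000
FRAME_PERIOD_S = 1 / 30             # assumed 30 fps camera

t = np.arange(0, 0.2, 1 / SAMPLE_RATE_HZ)
rng = np.random.default_rng(0)
power = 0.05 * rng.standard_normal(t.size)          # noise floor
for fs in np.arange(0.003, 0.2, FRAME_PERIOD_S):    # one spike per frame start
    power[int(fs * SAMPLE_RATE_HZ)] += 1.0

spike_idx = np.flatnonzero(power > 0.5)             # simple threshold detector
detected_times = spike_idx / SAMPLE_RATE_HZ
period_est = np.diff(detected_times).mean()
print(f"estimated frame period: {period_est * 1e3:.3f} ms")
# With frame starts known, the row-window schedule sketched earlier can be
# launched at a fixed offset into every frame.
```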

“Therefore, it targets a specific victim vehicle and controls the victim’s traffic sign recognition results,” the academics wrote.

The team tested their system out on a real road and car equipped with a Leopard Imaging AR023ZWDR, the camera used in Baidu Apollo’s hardware reference design. They tested the setup on stop, yield, and speed limit signs.

GhostStripe1 achieved a 94 percent success rate and GhostStripe2 a 97 percent success rate, the researchers claim.

One thing of note was that strong ambient light decreased the attack’s performance. “This degradation occurs because the attack light is overwhelmed by the ambient light,” said the team. This suggests miscreants would need to carefully consider the time and location when planning an attack.

Countermeasures are available. Most simply, the rolling-shutter CMOS camera could be replaced with a global-shutter sensor that captures the whole frame at once, or the order in which scan lines are read out could be randomized. Adding more cameras could also lower the success rate or force a more complicated hack, and the attack could be included in the AI training data so that the system learns to cope with it.
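
The randomized line-scanning idea can be illustrated with a quick simulation: if the sensor reads rows in a secret shuffled order, the attacker's time-based color schedule no longer lands as contiguous stripes. This is purely illustrative; no shipping camera is claimed to work this way:

```python
# Shuffled row readout scatters a time-scheduled color attack.
import numpy as np

ROWS = 480
rng = np.random.default_rng(42)
readout_order = rng.permutation(ROWS)   # camera's secret per-frame row order

def attacker_color_at_time(slot: int) -> int:
    # attacker assumes top-to-bottom readout: three clean color bands
    return slot * 3 // ROWS

image_color = np.empty(ROWS, dtype=int)
for time_slot, row in enumerate(readout_order):
    image_color[row] = attacker_color_at_time(time_slot)

# Top-to-bottom readout would yield 3 contiguous bands; shuffling scatters
# them so adjacent rows rarely share a color and coherent stripes vanish.
runs = int(np.count_nonzero(np.diff(image_color) != 0)) + 1
print(f"{runs} color runs instead of 3 contiguous bands")
```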

The study joins ranks of others that have used adversarial inputs to trick the neural networks of autonomous vehicles, including one that forced a Tesla Model S to swerve lanes.

The research indicates there are still plenty of AI and autonomous vehicle safety concerns to answer. The Register has asked Baidu to comment on its Apollo camera system and will report back should a substantial reply materialize. ®

Originally published by Laura Dobberstein: GhostStripe attack haunts self-driving cars by making them ignore road signs
