Short Version
In June 2025, Mr. Li from Zhejiang, China, experienced repeated drowsiness alerts while driving his sister’s Xiaomi SU7 Max electric vehicle. The car’s AI fatigue detection system, using a steering wheel camera for facial recognition and driver monitoring, mistook his naturally small, narrow eyes for signs of sleepiness, triggering over 20 false “please focus on driving” voice warnings despite him being fully alert. This highlighted issues like AI bias, false positives, and limitations in training data for diverse facial features common in East Asian drivers. Factors such as sunlight interference can worsen errors, and persistent alerts may cause real distractions or lead users to disable safety features. Xiaomi acknowledged the problem via customer service, suggesting software updates. Similar incidents, including one in December 2025 linking false fatigue detection to a parking mishap, underscore the need for inclusive AI improvements, diverse datasets, and better accuracy in smart car safety systems to enhance road safety without frustrating drivers.
Long Version
The Xiaomi SU7 Incident: AI Fatigue Detection and the Challenge of False Positives in Driver Monitoring
In the rapidly evolving world of smart cars and electric vehicles, advanced driver assistance systems promise enhanced vehicle safety and road safety. However, a peculiar incident in June 2025 highlighted the potential pitfalls of these technologies. Mr. Li, a driver from Zhejiang, China, found himself repeatedly bombarded with drowsiness alerts while operating his sister’s Xiaomi SU7 Max. The car’s AI system, designed for fatigue detection and distraction detection, misinterpreted his naturally small eyes as signs of sleepiness, triggering over 20 alerts during a single trip. This case underscores the complexities of facial recognition in driver monitoring and raises questions about AI bias in modern automotive safety features.
The Incident: A Frustrating Drive Marked by Persistent Warnings
On June 18, 2025, Mr. Li borrowed his sister’s Xiaomi SU7 Max, a high-end electric vehicle known for its cutting-edge AI integrations. As he drove through Zhejiang, the vehicle repeatedly issued voice alerts urging him to “please focus on driving.” Despite being fully alert, Mr. Li received more than 20 fatigue warnings, each prompted by the system misreading his narrow eyes as closed or drowsy. In an attempt to counteract the false detections, he resorted to exaggeratedly widening his eyes, but the alerts persisted.
The Xiaomi SU7 employs a steering wheel camera for real-time driver condition tracking. This camera monitors facial cues, including eye movements and yawning, to assess alertness. In Mr. Li’s case, the misidentification stemmed from his small eyes, a common facial feature that the system apparently equated with drowsiness. Factors like sunlight interference could exacerbate such errors, though the details of this incident suggest the primary issue was eye-size bias rather than environmental variables. The ordeal quickly gained attention online, with Mr. Li’s frustration sparking widespread discussion.
This wasn’t an isolated annoyance; it pointed to broader challenges in ensuring driving safety through AI. Mr. Li reported the issue to Xiaomi’s customer service, which acknowledged the problem and suggested potential software updates to refine the fatigue warning algorithms. While options to disable the feature exist in the vehicle’s settings, doing so raises concerns about compromising overall safety protocols.
Understanding the Technology: How AI Drives Vehicle Safety
At the heart of the Xiaomi SU7’s safety feature is an advanced driver monitoring system powered by AI and facial recognition. These systems are integral to modern electric vehicles, aiming to prevent accidents by detecting early signs of driver impairment. Fatigue detection typically analyzes eye closure duration, blink frequency, and head position via the camera on the steering wheel. If the AI interprets these as indicators of drowsiness—such as prolonged closed eyes or yawning—it activates a drowsiness alert, often a voice prompt or visual cue, to refocus the driver.
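To make the mechanics concrete, here is a minimal sketch of a PERCLOS-style check (the fraction of time the eyes appear closed over a rolling window), a common approach in driver-monitoring research. It is not Xiaomi’s actual algorithm; the openness values, thresholds, and window length are illustrative assumptions. It does, however, show how a fixed cutoff can misread a driver whose eyes are naturally narrow.

```python
# Minimal PERCLOS-style sketch, NOT Xiaomi's implementation. The cutoff,
# alert ratio, and window length below are illustrative assumptions.
from collections import deque

CLOSED_CUTOFF = 0.22   # assumed fixed "eyes look closed" openness threshold
ALERT_RATIO = 0.40     # assumed fraction of closed frames that triggers a warning
WINDOW_FRAMES = 150    # roughly 5 seconds at 30 fps

def drowsiness_alerts(openness_stream):
    """Yield True for frames where the rolling closed-eye ratio exceeds ALERT_RATIO."""
    window = deque(maxlen=WINDOW_FRAMES)
    for openness in openness_stream:
        window.append(openness < CLOSED_CUTOFF)   # count this frame as "closed"?
        yield sum(window) / len(window) > ALERT_RATIO

# A fully alert driver whose resting eye openness sits just under the fixed
# cutoff is flagged on every frame: a false positive caused by the threshold,
# not by actual drowsiness.
narrow_eyed_driver = [0.20] * 200
print(any(drowsiness_alerts(narrow_eyed_driver)))  # True
```

Because the driver’s resting openness never clears the one-size-fits-all cutoff, every frame is counted as “closed” and the warning fires continuously, which mirrors the repeated alerts Mr. Li described.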
In the context of smart cars like the SU7 Max, this integrates with other driver assistance tools for comprehensive vehicle safety. Distraction detection complements fatigue monitoring by flagging behaviors like looking away from the road. However, false positives, warnings issued when the driver is actually attentive, can undermine user trust and even distract the operator, ironically posing risks to road safety. Mr. Li’s experience exemplifies how naturally narrow eyes can trigger such errors, revealing limitations in the AI’s training data.
AI Bias and Its Implications for Diverse Users
One of the most critical insights from this incident is the role of AI bias in automotive technologies. The system’s eye-size bias likely stems from training datasets that underrepresent diverse facial features, such as the narrower eye shapes common among East Asian populations. This form of AI misidentification not only frustrates users but also highlights ethical concerns in deploying global technologies without adequate inclusivity testing.
In China, where the Xiaomi SU7 is popular, such biases could affect a significant portion of drivers. Broader implications extend to global road safety, as similar systems in other electric vehicles might exhibit comparable flaws. False fatigue alerts could lead to unnecessary stress, potentially causing real distractions. Moreover, if users opt to disable features due to persistent errors, they forfeit valuable safety nets designed to mitigate drowsy driving risks.
Experts argue that improving these systems requires diverse training data and ongoing refinements to reduce false positives. Xiaomi’s customer service response to Mr. Li included promises of firmware updates, but the incident sparked calls for industry-wide standards to address AI bias in driver monitoring. To date, while no major public updates have been confirmed specifically resolving this issue, manufacturers continue to iterate on these technologies based on user feedback.
Similar Incidents and Ongoing Challenges
Since Mr. Li’s case, additional reports have emerged illustrating persistent challenges with AI-driven safety systems in electric vehicles. For instance, in December 2025, another Xiaomi SU7 incident involved the vehicle’s driver monitoring camera misinterpreting a driver’s facial features as signs of fatigue. This error reportedly contributed to a smart parking system failure, resulting in the car ending up in a pond. Such events emphasize that false positives can have cascading effects, potentially leading to more serious mishaps beyond mere annoyance.
These examples highlight the need for robust testing across varied user demographics and environmental conditions. Factors like lighting variations, facial hair, glasses, or even cultural differences in expressions could influence system accuracy. Automotive engineers are exploring multi-sensor approaches, combining camera data with infrared sensors or physiological monitors like heart rate trackers, to improve reliability and minimize biases.
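As a rough illustration of that multi-sensor idea, the sketch below uses a simple voting scheme: the camera score alone cannot raise an alert unless a second, independent signal agrees. The signal names, thresholds, and voting rule are hypothetical assumptions, not a description of any shipping system.

```python
# Hypothetical sensor-fusion sketch; signal names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    camera_drowsiness: float   # 0..1 score from the vision model
    steering_entropy: float    # 0..1, higher = erratic, fatigue-like steering
    hrv_fatigue: float         # 0..1 fatigue score from a heart-rate sensor

def fused_alert(frame: SensorFrame, votes_needed: int = 2) -> bool:
    """Warn only when at least `votes_needed` independent signals agree."""
    votes = [
        frame.camera_drowsiness > 0.7,
        frame.steering_entropy > 0.6,
        frame.hrv_fatigue > 0.6,
    ]
    return sum(votes) >= votes_needed

# The camera is fooled by naturally narrow eyes, but steering behaviour and
# physiology indicate an alert driver, so no warning is issued.
print(fused_alert(SensorFrame(camera_drowsiness=0.9,
                              steering_entropy=0.1,
                              hrv_fatigue=0.2)))   # False
```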
Broader Context: Advancing Safety in Smart Electric Vehicles
The Xiaomi SU7 Max represents the forefront of electric vehicle innovation, blending high performance with AI-driven features for enhanced driving safety. Yet, incidents like Mr. Li’s remind us that technology must evolve alongside human diversity. In the pursuit of zero-accident roads, alert systems and driver condition tracking are invaluable, but they demand precision to avoid counterproductive outcomes.
Globally, regulatory bodies are increasingly scrutinizing these technologies. In China, where the incident occurred, discussions have emphasized the need for user-friendly options to customize or disable features without sacrificing core safety. For drivers, understanding how to calibrate systems—perhaps adjusting for sunlight interference or personal facial traits—can mitigate issues. Practical tips include ensuring the steering wheel camera is clean and unobstructed, updating vehicle software regularly, and reporting anomalies to manufacturers for collective improvements.
Looking Ahead: Lessons for AI in Automotive Design
Mr. Li’s story from Zhejiang serves as a cautionary tale in the integration of AI into everyday vehicles. While the Xiaomi SU7’s safety features aim to promote alertness and prevent accidents, the risk of false positives due to AI bias demands vigilant improvements. As electric vehicles and smart cars become ubiquitous, prioritizing inclusive AI development will be key to maintaining driver trust and achieving true advancements in road safety. This incident, though humorous in retrospect, offers valuable insights for manufacturers to refine their systems, ensuring they enhance rather than hinder the driving experience. Future advancements may include adaptive learning algorithms that personalize detection thresholds based on individual user profiles, further reducing errors and boosting overall efficacy.
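To sketch what such personalization could look like, the hypothetical example below derives the closed-eye cutoff from a short calibration sample of the driver’s own alert baseline instead of a universal constant; the calibration data, scaling factor, and floor value are assumptions for illustration only.

```python
# Hypothetical personalization sketch; the scaling factor and floor are assumptions.
from statistics import median

def personalized_cutoff(calibration_openness, scale=0.6, floor=0.05):
    """Derive a per-driver closed-eye threshold from an alert-driving sample."""
    baseline = median(calibration_openness)   # driver's typical open-eye value
    return max(floor, baseline * scale)       # "closed" = well below their own baseline

# For a driver whose alert baseline is around 0.20, the cutoff drops to 0.12,
# so normal driving no longer registers as "eyes closed".
print(round(personalized_cutoff([0.19, 0.20, 0.21, 0.20, 0.22]), 2))  # 0.12
```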

