Why Power-Efficiency Matters for EVs and Autonomous Driving
In internal combustion engine (ICE) vehicles, efficiency is measured in miles per gallon. In EVs, the equivalent metric is miles per kilowatt-hour (mi/kWh), which reflects how far the vehicle can travel on a unit of energy. While total range is often the headline figure, it’s ultimately the product of efficiency and battery size. And while propulsion remains the primary energy draw, EVs must also power a growing stack of sensors, processors, and software that enable automated driving functions. As these systems become more capable, they are also becoming more power-hungry.
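To make that relationship concrete, here is a minimal sketch of the range calculation. The 3.5 and 4.0 mi/kWh efficiencies and the 75 kWh pack are illustrative assumptions, not figures from this article.

```python
# Minimal sketch: range as the product of efficiency and usable battery capacity.
def estimated_range_miles(efficiency_mi_per_kwh: float, usable_capacity_kwh: float) -> float:
    """Range (miles) = efficiency (mi/kWh) x usable capacity (kWh)."""
    return efficiency_mi_per_kwh * usable_capacity_kwh

# Assumed example figures: the same 75 kWh pack at two efficiency levels.
print(estimated_range_miles(3.5, 75.0))  # 262.5 miles
print(estimated_range_miles(4.0, 75.0))  # 300.0 miles from the same pack, via higher efficiency
```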
Autonomous systems rely on continuous input from sensors. These sensors scan the environment in real time and pass that data to onboard processors that interpret the world around the vehicle.
A single LiDAR unit can draw between 8 and 30 watts, and autonomous vehicles typically require multiple units. Add high-resolution cameras, sensor cleaning mechanisms, and the AI compute needed for perception and decision-making, and power consumption rises rapidly. As reported in Wired, fully autonomous prototypes have been measured to consume up to 2,500 watts just to run their sensor and compute platforms.
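A back-of-the-envelope calculation shows why a load of that size matters for range. The 2,500 W figure is the one reported above; the 60 mph cruising speed and the 4.0 mi/kWh baseline efficiency are assumptions for illustration.

```python
# Sketch: effective efficiency once a constant sensing/compute load is added.
def efficiency_with_aux_load(base_mi_per_kwh: float, aux_watts: float, speed_mph: float) -> float:
    base_kwh_per_mi = 1.0 / base_mi_per_kwh            # propulsion energy per mile
    aux_kwh_per_mi = (aux_watts / 1000.0) / speed_mph  # constant load spread over each mile
    return 1.0 / (base_kwh_per_mi + aux_kwh_per_mi)

# Assumed 4.0 mi/kWh baseline at 60 mph, plus the 2,500 W sensor/compute load cited above.
print(efficiency_with_aux_load(4.0, 2500, 60))  # ~3.43 mi/kWh, roughly a 14% range penalty
```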
This power consumption forces design trade-offs. Higher power use generates more heat, which means larger cooling systems. It also requires thicker wiring to handle increased electrical loads. And that heat limits where the compute hardware can be placed inside the vehicle — sometimes requiring special enclosures or redesigns to prevent overheating. All of these factors add to the vehicle’s cost, weight, and complexity.
Radar has traditionally played a supporting role in perception — great at measuring velocity and distance, but limited in resolution and scene understanding. As a result, most ADAS stacks have leaned on cameras and LiDAR for detailed perception, despite their high power requirements.
But with advances in radar software and AI-based perception, radar is evolving beyond its traditional role. Radar is no longer limited to the detection and tracking of moving objects. We can now produce detailed semantic understanding of the environment from the RF spectrum. With recent AI advances, radar can recognize and classify objects (vehicles, pedestrians, road boundaries), distinguish stationary from moving obstacles, and understand the broader driving context.
This level of perception can be achieved with a fraction of the power and compute that camera- or LiDAR-based perception requires. This matters for any vehicle — but especially EVs — where lower power draw means longer range, reduced need for cooling, and simplified vehicle architecture.
As the market pushes to lower vehicle costs — especially for entry-level and emerging markets — OEMs need to find ways to improve both energy and cost efficiency.
Shrinking the battery is one approach, helping reduce one of the most expensive components in an EV. This, however, cuts down driving range — which means every onboard system, including sensors and processing hardware, must be energy-efficient to avoid noticeably draining the battery.
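The sketch below illustrates the point: a fixed auxiliary load consumes a larger share of a smaller pack. The 500 W load and the pack sizes are assumptions chosen for illustration.

```python
# Sketch: fraction of the battery a constant auxiliary load consumes per hour of driving.
def pack_share_per_hour(aux_watts: float, pack_kwh: float) -> float:
    return (aux_watts / 1000.0) / pack_kwh

# Assumed 500 W sensing/compute load across three assumed pack sizes.
for pack_kwh in (75.0, 55.0, 40.0):
    print(f"{pack_kwh:.0f} kWh pack: {pack_share_per_hour(500, pack_kwh):.1%} of the pack per hour")
```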
Switching to higher-voltage electrical systems, such as a 48V architecture, gives manufacturers a middle ground between 12V systems and high-voltage propulsion domains. It enables thinner wiring and more efficient power delivery, and supports subsystems like ADAS, electric turbochargers, and active suspension — all of which operate more efficiently at higher voltage.
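For intuition on the wiring claim: current for a given power scales as I = P / V, and resistive loss in the harness scales as I²R. The 1 kW load below is an assumed example figure.

```python
# Sketch: the same delivered power at 12 V vs 48 V. Lower current allows thinner conductors,
# and resistive harness loss (I^2 * R) drops with the square of the current.
def current_amps(power_watts: float, voltage_volts: float) -> float:
    return power_watts / voltage_volts

i_48 = current_amps(1000, 48.0)  # assumed 1 kW load at 48 V as the reference case
for volts in (12.0, 48.0):
    i = current_amps(1000, volts)
    print(f"{volts:>4.0f} V: {i:5.1f} A, relative I^2*R loss x{(i / i_48) ** 2:.0f}")
```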
Another strategy is reducing the number of sensors and lowering the system’s overall data bandwidth requirements. The more sensors in a vehicle, the more power is needed to operate them and process their data.
Radar is already used for basic ADAS functions like adaptive cruise and blind-spot detection. But when enhanced with AI, radar becomes capable of full-scene understanding — detecting and classifying objects even when stationary. It can distinguish between relevant obstacles and background, and provide a semantically labelled map of the driving environment.
This expanded capability means radar can take on roles that would otherwise require multiple cameras and LiDAR units — such as detecting and classifying vehicles and pedestrians, and understanding traffic scenes. For example, a typical camera-centric L2+ system might use seven (or more) 8-megapixel cameras. Running at 30 frames per second, this setup would generate 1,680 megapixels per second of raw input. Processing that volume requires significant compute — often hundreds of TOPS — contributing to high energy consumption and high ECU cost.
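The arithmetic behind that figure is straightforward; the 12-bit raw pixel depth in the last step is an assumption added for illustration, not part of the example above.

```python
# Reproducing the camera data-rate arithmetic from the example above.
cameras = 7
megapixels_per_camera = 8
frames_per_second = 30

raw_megapixels_per_second = cameras * megapixels_per_camera * frames_per_second
print(raw_megapixels_per_second)  # 1680 MP/s of raw pixel data

# Assuming 12 bits per raw pixel (a common sensor bit depth, assumed here):
gbits_per_second = raw_megapixels_per_second * 12 / 1000
print(f"~{gbits_per_second:.0f} Gbit/s before any compression or processing")
```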
In contrast, a radar-centric ADAS stack — with five radars and a single front-facing camera — can achieve full 360° perception at a fraction of the data load. Radar data is naturally sparse, enabling more efficient perception without the heavy bandwidth and compute burden. The entire perception stack, supporting all of the sensors, requires less than 10 TOPS.
By assigning more perception tasks to radar, OEMs can reduce the number of sensors, simplify integration, and lower power and compute requirements. With fewer high-resolution cameras and LiDAR units in the stack, the system generates less data — reducing bandwidth and processing demands. This allows for a simpler wiring harness and a smaller, less complex ECU with lower thermal and power requirements. Taken together, these advantages enable more energy-efficient and cost-effective platforms for scalable autonomy.