How Red Bull Racing uses IoT to win
“Fast” seems like a simple concept, on the surface. Go fast enough and you can win races, even highly competitive ones like Formula 1 auto racing. But it takes a complex, sophisticated IT setup to get to the necessary level of fast, according to the CIO of Red Bull Racing, Matt Cadieux.
In particular, the extensive IoT deployment the team uses to squeeze maximum performance out of its cars is key to that success, Cadieux said.
The car
Naturally enough, it all starts with the car. F1 cars are essentially very lightweight, low-flying aircraft, mating an engine capable of around 600 horsepower with a carbon-fiber body and spindly chassis weighing in at about 1,500 pounds. They can get from zero to 100 mph in an incendiary four seconds, corner like waterbugs, and keep drivers relatively safe even in collisions at the blistering speeds that F1 races can reach.
Consequently, F1 cars are complicated beasts, and the tension between the best possible performance and the race regulations – only so much horsepower and aerodynamic downforce are allowed, fuel cannot exceed such and such an octane – is as much at the heart of the contest as driving skill. This means that Red Bull’s F1 cars are tweaked and refined continuously up through the qualification stage of every major race, as the engineering team adjusts for the exigencies of the track and local conditions.
Cadieux said that a big part of his department’s job is to make sure that the engineers have all the information they need to make those decisions, and that involves a complex, full-featured IoT system.
There are about 200 sensors on the car in testing conditions – fewer during the race, for weight reasons – measuring everything from physical forces on the car, to engine temperatures and stresses, to aerodynamic information.
“It allows us to get a very under-the-covers view of the health of the car, where you can understand forces, or you can see things overheating, or you can see, aerodynamically, what’s happening and whether our predictions in computational fluid dynamics and the wind tunnel are what really happens in the real world,” he said.
There’s a data logger in the car that collects information from all the various sensors, and data is transmitted wirelessly via a commercial, proprietary service that encrypts the information and sends it to Red Bull’s trackside team.
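To make that concrete, here is a minimal sketch of how a logger like that might pack sensor readings into frames for transmission. The channel names, IDs and frame layout below are invented for illustration; the encryption and uplink themselves are handled by the commercial service Cadieux describes, not by this code.

```python
import struct
import time

# Hypothetical sensor channels -- the real car carries roughly 200 in testing,
# covering forces, engine temperatures and aerodynamic measurements.
CHANNELS = {
    0x01: "engine_oil_temp_c",
    0x02: "brake_disc_temp_c",
    0x03: "suspension_load_n",
    0x04: "wing_pressure_delta_pa",
}

def build_frame(readings: dict) -> bytes:
    """Pack one set of channel readings into a compact binary frame.

    Illustrative layout only: 8-byte millisecond timestamp, 2-byte channel
    count, then (1-byte channel id, 4-byte float) per reading.
    """
    timestamp_ms = int(time.time() * 1000)
    frame = struct.pack(">QH", timestamp_ms, len(readings))
    for channel_id, value in readings.items():
        frame += struct.pack(">Bf", channel_id, value)
    return frame

# One sample from a handful of channels. In practice the frame would be
# handed to the commercial telemetry service, which encrypts it and relays
# it to the trackside team.
sample = {0x01: 112.4, 0x02: 845.0, 0x03: 3120.5, 0x04: 142.7}
print(len(build_frame(sample)), "bytes per sample")
```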
McLaren Advanced Technology is one of the vendors that provides the streaming service – the team works with others as well, though Cadieux declined to go into much detail about the specifics of the company’s arrangements.
The race
The team can view a ton of telemetry in real time at the trackside – spotting a reliability problem, for instance, and letting the driver know that he has to compensate. The car’s configuration can’t be changed once qualifying is over, so any tweaking has to be done before then.
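The kind of check involved is easy to picture. The sketch below watches a live sample against temperature limits and produces advisories the engineers might radio to the driver; the channel names and limits are made up for the example.

```python
# Illustrative only: a trivial threshold watch over a live telemetry feed.
# Channel names and limits are invented, not Red Bull's actual values.
LIMITS = {
    "engine_oil_temp_c": 130.0,
    "brake_disc_temp_c": 1000.0,
    "gearbox_temp_c": 140.0,
}

def check_sample(sample: dict) -> list:
    """Return advisory messages for any channel over its limit."""
    alerts = []
    for channel, limit in LIMITS.items():
        value = sample.get(channel)
        if value is not None and value > limit:
            alerts.append(f"{channel} at {value:.1f} exceeds {limit:.1f}")
    return alerts

for message in check_sample({"engine_oil_temp_c": 134.2, "brake_disc_temp_c": 910.0}):
    print("RADIO:", message)   # e.g. ask the driver to lift and coast
```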
That said, the team doesn’t do the heavy computational lifting at the racetrack itself – instead, thanks to an MPLS connection and Citrix’s XenDesktop virtualization software, the hard work is done by Red Bull’s data center in the U.K.
“A number of these applications and data viewers are very graphically intensive, and there are also some [issues] of latency. Citrix allows us to take this very graphical content that was generated and hosted on systems at the track and then run those apps at a remote location back in the U.K., in our operations room,” Cadieux said.
This means that only 60 of Red Bull’s 700 or so employees need to travel with the team as engineers and mechanics, while most can remain behind in the U.K., working on sophisticated CAD, computational fluid dynamics and aerodynamic simulations.
“One thing we’re very good at is simulations and analytics,” he said. “We’ve had sensors in the car with real-time feeds to make decisions in very small timescales, we’ve been doing that for 13 years.”
The system consumes “a three-figure number” of megabytes per second of bandwidth. Outages are rare, according to Cadieux, and when they happen it’s generally in the last few hundred meters of the network run at the track, thanks to unshielded fiber cables and the like. Backup servers travel with the team in case something breaks.
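A quick back-of-the-envelope calculation shows why raw sensor data alone doesn’t account for that figure. The sensor count below comes from the testing-spec number cited earlier; the sample rate and sample size are assumptions made purely for illustration.

```python
# Back-of-envelope bandwidth estimate. The sample rate and sample size are
# assumptions for illustration, not Red Bull's actual figures.
sensors = 200              # roughly the testing-spec sensor count cited above
samples_per_second = 1000  # assumed per-channel sampling rate
bytes_per_sample = 8       # assumed value plus framing overhead

sensor_mb_per_s = sensors * samples_per_second * bytes_per_sample / 1e6
print(f"Raw sensor telemetry: ~{sensor_mb_per_s:.1f} MB/s")

# Reaching a "three-figure number" of megabytes per second implies a lot of
# additional traffic -- for example, the graphically intensive remote-desktop
# sessions described above -- on top of the raw sensor feed.
```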
“Our disaster recovery business continuity plan is to have local compute,” said Cadieux.
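The pattern is a familiar one: prefer the remote data center while the link is healthy, and fail over to trackside hardware when it isn’t. The sketch below illustrates that fallback; the hostname, port and analysis step are all hypothetical.

```python
# A minimal sketch of the fallback Cadieux describes: use the U.K. data
# center over the MPLS link when reachable, otherwise fall back to the
# trackside backup servers. Endpoint and analysis step are hypothetical.
import socket

REMOTE_HOST, REMOTE_PORT = "ops.example.co.uk", 443   # placeholder endpoint
LINK_TIMEOUT_S = 2.0

def remote_link_up() -> bool:
    """Cheap reachability probe for the remote operations room."""
    try:
        with socket.create_connection((REMOTE_HOST, REMOTE_PORT), timeout=LINK_TIMEOUT_S):
            return True
    except OSError:
        return False

def run_analysis(frame: bytes) -> None:
    if remote_link_up():
        print("Dispatching analysis to the U.K. operations room")
    else:
        print("Link down: running analysis on trackside backup servers")

run_analysis(b"\x00" * 32)   # placeholder telemetry frame
```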