
Entertainment Meets Engineering with These Drone Shows

    Rachel Feltman: For Scientific American’s Science Quickly, I’m Rachel Feltman.

    This Fourth of July some of the celebrants flocking to their local parks and waterfronts won’t be taking in the iconic sights and sounds of a fireworks display. In some cases, those traditional explosives could be replaced with swarms of colorful drones.

    Drone light shows have been popping up more and more in recent years, replacing or supplementing fireworks at the Olympics and even some Super Bowl halftime shows. They’re dazzling, precise and a lot safer than explosions. Besides the obvious risks of setting off incendiary devices, fireworks shows also raise environmental concerns: studies suggest these big displays have a marked impact on local air quality in the hours that follow.


    But swapping out fireworks for drones isn’t simple: every one of those displays takes painstaking effort from a team of engineers. They have to plot the movement of every single drone, frame by frame.

    Today’s guests recently published a paper that offers an AI-powered solution. Mac Schwager is an associate professor in the Aeronautics and Astronautics Department at Stanford University, and Eduardo Montijano is an associate professor in the Department of Computer Science and Systems Engineering at the University of Zaragoza in Spain.

    Thank you both so much for coming on to chat.

    Mac Schwager: Sure, our pleasure.

    Eduardo Montijano: Thank you.

    Feltman: Why don’t we start with just a quick overview of this study: You know, how did it come to be? What got you interested in this particular aspect of drone swarms?

    Montijano: I’ve been doing research in multirobot systems for some time, and I’ve been collaborating with Mac for many years as well. And with all the development of these new AI techniques that have been successfully applied to other problems and applications, we thought—in collaboration with mainly one student, Pablo Pueyo, whom I would like to highlight, although there are more people in this research—how cool it would be to try to apply all these new techniques to this problem of controlling hundreds or thousands of robots for animation displays.

    Feltman: So speaking of those animation displays, when compared to fireworks, what problems do they solve and what problems do they raise that maybe your paper was trying to address?

    Schwager: I think we consider animation displays with drone swarms as being much more flexible and [an] artistically richer medium for entertainment. So in fireworks displays, right, there’s a big bang and a big flash, but the engineer actually has very little control over exactly what the fireworks do and what they look like, right? But with drones you can program the lights and you can program the motion of the drones to display a very clear image—for a sporting event you could have somebody playing the sport floating in the air, or for the Fourth of July you could have words spelled out, you could have the American flag or whatnot. So it’s much more flexible, and, you know, there’s more control by the artist and the engineer as far as what they wanna convey.

    There’s a challenge, though, which is that drone swarms, especially large drone swarms, require a lot more engineering expertise and quite a bit more infrastructure to control and to deploy, especially to do that safely. And so this was one of the targets of our research: to basically make the planning of these large-scale drone displays much more automatic and to kind of empower people without that kind of special knowledge to create their own drone displays.

    Feltman: And could you kind of paint a picture for us: Currently, what does it look like to put on one of these displays? What’s required in the background?

    Schwager: Right, so these are usually managed by large engineering companies, and there’s usually a team of specialist engineers who make sure that all the drones are properly charged and have landing stations. They would have to go out to the site where the display is gonna be performed and engineer the site to plan where all the drones will fly and to make sure that the space is clear.

    And really, the target of our research is that before a drone display happens, there are artists and engineers who carefully chart the path of every drone. At the time of the display the drones are actually just following points in space that have been preplanned by the engineers—one point at a time, one drone at a time. So you can imagine it’s very much like making an animated film: it’s very painstaking, very hands-on and requires lots of expertise.

    So the target of Gen-Swarms was essentially to use generative AI to do that phase of planning for you …

    Feltman: Hmm.

    Schwager: So you can type in a high-level prompt, like “the American flag,” for example, or “a skier skiing downhill,” and our algorithm would essentially produce these sets of waypoints, these sets of points in 3D space, for the drones to fly along to then create the illusion of this artistic display.

    Feltman: Mm, so basically, you enter the image you want to end up with and the AI tells the drones where to go, what colors to be, all of that stuff.

    Schwager: Yeah, actually, at the moment we enter just text.

    Feltman: Mm-hmm.

    Schwager: So we enter a text description of what we want to see, and then the method produces the colors, and the arrangement, and so on—although I think it wouldn’t be too hard to extend our methods so that you could upload a picture or a sketch of what you wanna see.
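
    [A minimal sketch, in Python, of the interface Schwager describes: a text prompt goes in, and per-drone 3D positions plus light colors come out. The grid-placement stub below only stands in for the generative model, and every name in it is illustrative; this is not the Gen-Swarms code.]

```python
# Illustrative only: a stand-in for the text-to-shape step. In Gen-Swarms,
# a generative model conditioned on the prompt would place the drones to
# form the requested figure; here a flat grid is used so the sketch runs.
import numpy as np

def plan_show(prompt: str, num_drones: int) -> dict:
    """Return one target 3D position and one RGB light color per drone."""
    side = int(np.ceil(np.sqrt(num_drones)))          # grid dimension
    xs, zs = np.meshgrid(np.arange(side), np.arange(side))
    grid = np.stack([xs.ravel(), np.zeros(side * side), zs.ravel()], axis=1)
    positions = grid[:num_drones].astype(float)       # (N, 3) waypoints
    colors = np.full((num_drones, 3), 255)            # placeholder: white lights
    return {"prompt": prompt, "positions": positions, "colors": colors}

show = plan_show("the American flag", num_drones=100)
print(show["positions"].shape)  # (100, 3): one 3D target per drone
```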

    Feltman: And what are the specific challenges that arise when you’re trying to control a group of drones with AI?

    Montijano: The way these models work, they have been popular [for] creating images, no? And at the end of the day they predict the color of each pixel when you give this prompt. So the idea here is: when you want to somehow translate this to drones, pixels [are] just colors, and they don’t have any motion constraints, any collision constraints.

    So the idea is: when you try to translate this idea of making pixels look [how you’d] like to making drones look [how you’d] like, you need to account for [the fact] that drones cannot teleport from one location to another, so they have some dynamics—some velocity, acceleration—some constraints in the motion [such] that you cannot do any motion that you want. You need to account for those somehow in your algorithm.

    And also, drones have some physical properties—some mass, some size—so they can collide with each other. So there are these safety constraints that you also need to include in the planning algorithm that [uses] this generative model so that the motion of the drones is also safe.
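
    [A minimal sketch of the two constraints Montijano mentions, assuming simple point-mass drones: each drone's displacement per timestep is capped (no teleporting), and any pair that drifts closer than a minimum separation is pushed apart. Function and parameter names are illustrative, not from the paper.]

```python
# Illustrative only: one planning step that respects motion and safety limits.
import numpy as np

def step_toward_targets(positions, targets, max_step=0.5, min_sep=1.0):
    """Move each drone toward its target under motion and collision constraints.

    positions, targets: (N, 3) arrays of current and goal drone positions.
    max_step: farthest a drone may travel per timestep (motion constraint).
    min_sep: minimum allowed distance between two drones (safety constraint).
    """
    # Motion constraint: cap each displacement at max_step, so no drone
    # "teleports" between frames.
    delta = targets - positions
    dist = np.linalg.norm(delta, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_step / np.maximum(dist, 1e-9))
    proposed = positions + delta * scale

    # Safety constraint: push apart any pair closer than min_sep.
    n = len(proposed)
    for i in range(n):
        for j in range(i + 1, n):
            gap = proposed[i] - proposed[j]
            d = np.linalg.norm(gap)
            if d < min_sep:
                push = 0.5 * (min_sep - d) * gap / max(d, 1e-9)
                proposed[i] += push
                proposed[j] -= push
    return proposed

# Example: three drones converging on nearby goals while staying 1 m apart.
pos = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
goal = np.array([[1.0, 1.0, 2.0], [1.2, 1.0, 2.0], [1.0, 1.2, 2.0]])
for _ in range(20):
    pos = step_toward_targets(pos, goal)
```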

    Feltman: Mm, and how close are we to actually being able to use the model you created with drones?

    Montijano: So from the research perspective I would say that our solution, in some sense, is mature enough to be applied. But then there are all the technological challenges that Mac mentioned before about the real deployment of drones; obviously, as academic professors, we don’t have the resources to deploy 1,000 or 100 or whatever number of drones.

    So for that there’s still a gap in terms of [going] from research to application, but it’s more a matter of maybe collaborating with companies that are already deploying drones in many locations. So I think that the integration wouldn’t be that difficult; it’s just a matter, probably, of having the right contact within a company that has the skills for real deployment. But the algorithm, I think, is already in shape to be deployed.

    Feltman: Very cool. What other applications could this have?

    Schwager: Yeah, so certainly, artistic displays are powerful and important, but we’d love for our robots to really help people in their day-to-day lives and also help people who are in danger. So for example, we could imagine using an algorithm like this for search and rescue. You know, if you have hikers who are stranded somewhere in the wilderness and you need some way of deploying a team of drones to go look for the lost hiker, this could be a method that could be adapted to that. We’re also interested in, you know, things like exploration. Maybe in a space application, NASA might consider developing a tool like this to explore the surfaces of asteroids or planetary bodies.

    We’re also really interested—currently, our kind of next step along this research journey is drone or other robot swarms for construction. So currently, with our algorithm, you type in a prompt and the drones will organize themselves into a shape, right, that looks like what you asked for. What we’re looking at now is: “How could you type in the prompt and have the drones actually deposit material—like maybe the drones can carry little square blocks—how could they deposit the material in the right order to construct something that is useful or interesting for an artistic display?” So you could imagine drones constructing a bridge in a remote area where people maybe need to pass over some difficult terrain, or maybe there’s an emergency scenario, maybe there’s a disaster scenario, and a bridge has been washed out, and you’d like drones to automatically construct a temporary bridge—something like that.

    Montijano: Even though we applied this [to] a drone show, because the artistic component is beautiful, I would say that there are no limitations on applying this to any kind of multirobot system. So in that sense we could go for other ground robots, domestic robots, construction robots, as Mac mentioned.

    So the idea here is to be able to translate these high-level commands specified by text that—every person can, more or less, give these commands—and then automatically translate them into plans for teams of robots to achieve these commands. So the ambition, in that sense, I think it’s—it goes way beyond the artistic display.

    Feltman: And what about the environmental impacts of a drone show versus a firework show?

    Montijano: Well, I would say that, in my opinion, drone shows are safer in the sense that fireworks are a very, you know, explosive material, and you hear [about] accidents, and you need to produce and store them.

    And then, within my knowledge, which is not very deep, I would say that the residual impact of fireworks is probably bigger than [that of] drone shows; at the end of the day you can recycle or reuse these drones in multiple shows. Noisewise, they are probably similar, in the sense that drones currently are quite noisy, although it’s true that when you see them from far, far away, fireworks are very annoying and drone shows are not. But when you fly them in close space, let me tell you, having a drone flying nearby [is] more annoying than a firework [laughs].

    So I guess there could be arguments in favor of or against each of them, but if I have to choose drones, I would say that this reusability and safety, in terms of explosive materials, are the two main, big advantages.

    Feltman: Well, and given everything that you’re presenting in the paper, how do you see the world of drone shows evolving with this new tech?

    Montijano: Well, I would say that [on] the artistic side of the problem, existing drone shows are already able to develop complex and beautiful animations. The idea is that this will speed up and simplify this rather tedious and complex process; maybe make [it possible to] scale to larger numbers of robots in an easy way; and maybe also help, in the testing phase, with deciding the appropriate number of drones to create specific figures. In summary: speeding up the whole creative process and hopefully … providing more beautiful, more complex animations and displays.

    Schwager: I think right now one of the most exciting research frontiers is figuring out how to use, you know, powerful, modern generative AI tools that we’re all familiar with—ChatGPT, image-generation models, and so on—how to use those in ways that benefit people, you know. And myself and Eduardo being roboticists, I think we’re always looking for ways to enable robots to help people, to better serve people, to make people’s lives safer, and I think this is a really exciting frontier.

    And one of the grand challenges in robotics is: “How do you orchestrate the activities of large groups of robots?” It’s hard enough to control a single robot, and now, when you’ve got a large group, you know, there’s this persistent problem of: “How does one human, or a small number of humans, tell a large group of robots what they should do?” And I think this is an interesting model that we’re sort of approaching: using generative AI as kind of the bridge, the interface, to allow one person, or a small number of people, to command the activities of a very large group of drones.

    Montijano: Another issue that I also like to point out when mixing robotics and AI—with the current state of the art—would be explainability. If you want to generate an image, what you care about [is] the output; “Why this output?” might not be as relevant as it is when you are considering the motion of robots. So understanding and obtaining outputs that are consistent for robots is a very important problem that, currently, I would say we are struggling [with], because these AI models [work] very well—but somehow they work well until they stop working well, and having some kind of understanding of when or why these things [happen] is very important from a research perspective.

    Feltman: Thank you both so much for coming on to chat about this. This has been great.

    Montijano: Thank you, Rachel.

    Schwager: Great, thank you, Rachel. It’s our pleasure.

    Feltman: That’s all for today’s episode. We’re taking Friday off for the holiday. Next week, we’ll be sharing reruns of some of our favorite segments from the past year. We’ll be back with a new episode on July 14. In the meantime, you can quench your thirst for fresh science news by reading Scientific American online or in print.

    Science Quickly is produced by me, Rachel Feltman, along with Fonda Mwangi, Kelso Harper, Naeem Amarsy and Jeff DelViscio. This episode was edited by Alex Sugiura. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.

    For Scientific American, this is Rachel Feltman. Have a great weekend!
