Fluid Directed Rigid Body Control using Deep Reinforcement Learning


We present a learning-based method to control a coupled 2D system involving both fluid and rigid bodies. Our approach modifies the fluid/rigid simulator's behavior by applying control forces only at the simulation domain boundaries. The interior of the domain is governed by the Navier-Stokes equations for fluids and the Newton-Euler equations for the rigid bodies. We represent our controller with a general neural network, which is trained using deep reinforcement learning. Our formulation decomposes a control task into two stages: a precomputation training stage and an online generation stage. We utilize various fluid properties, e.g., the liquid's velocity field or the smoke's density field, to enhance the controller's performance. In our evaluation benchmark, the controller drives fluid jets along the domain boundary, shooting fluid toward a rigid body to accomplish a set of challenging 2D tasks: keeping a rigid body balanced, playing a two-player ping-pong game, and driving a rigid body to sequentially hit specified points on the wall. In practice, our approach generates physically plausible animations.



We show that deep RL can be used to control highly complex and physically realistic fluid-rigid coupling dynamics. In addition to receiving low-dimensional rigid-body features as input, the controller can be learned more efficiently when it also takes encoded high-dimensional fluid features as input, produced by an autoencoder pretrained on random fluid motion.
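To make the observation design concrete, the following is a minimal NumPy sketch of how a pretrained encoder's latent code for the fluid field can be concatenated with low-dimensional rigid-body features before being fed to a policy network. All dimensions, weight shapes, and function names are illustrative assumptions, not taken from the paper; the random weights stand in for parameters that would come from autoencoder pretraining and RL training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a 32x32 fluid field,
# an 8-dim latent code, a 6-dim rigid-body state, a 2-dim action.
FIELD_DIM, LATENT_DIM, RIGID_DIM, ACTION_DIM = 32 * 32, 8, 6, 2

# In the paper's setup the encoder would be pretrained on random fluid
# motion; here its weights (and the policy's) are random placeholders.
W_enc = rng.standard_normal((LATENT_DIM, FIELD_DIM)) * 0.01
W_pi = rng.standard_normal((ACTION_DIM, LATENT_DIM + RIGID_DIM)) * 0.01


def encode(fluid_field):
    """Compress the high-dimensional fluid field into a latent code."""
    return np.tanh(W_enc @ fluid_field.ravel())


def policy(fluid_field, rigid_state):
    """Map (encoded fluid features, rigid features) to a control action."""
    obs = np.concatenate([encode(fluid_field), rigid_state])
    return np.tanh(W_pi @ obs)


action = policy(rng.standard_normal((32, 32)), np.zeros(RIGID_DIM))
print(action.shape)  # (2,)
```

The key design point the sketch illustrates is that the policy never sees the raw fluid field: it only sees an 8-dimensional code, which keeps the RL problem's observation space small even though the underlying simulation state is high-dimensional.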