Hierarchical DLO Routing with Reinforcement Learning and In-Context Vision-Language Models

Anonymous Authors

Abstract

Long-horizon routing tasks for deformable linear objects (DLOs), such as cables and ropes, are common in industrial assembly lines and everyday life. These tasks are particularly challenging because they require robots to manipulate DLOs with long-horizon planning and reliable skill execution. Successfully completing them demands adapting to nonlinear DLO dynamics, decomposing abstract routing goals, and generating multi-step plans that compose multiple skills, all of which require accurate high-level reasoning during execution. In this paper, we propose a fully autonomous hierarchical framework for solving challenging DLO routing tasks. Given an implicit or explicit routing goal expressed in language, our framework leverages vision-language models~(VLMs) for in-context high-level reasoning to synthesize feasible plans, which are then executed by low-level skills trained via reinforcement learning. To improve robustness over long horizons, we further introduce a failure-recovery mechanism that reorients the DLO into insertion-feasible states. Our approach generalizes across diverse scenes involving object attributes and spatial descriptions, as well as implicit language commands. It outperforms the next-best baseline by nearly 50\% and achieves an overall success rate of 92.5\% across long-horizon routing scenarios.
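The interplay between the VLM planner, the RL insertion skill, and the failure-recovery mechanism described above can be sketched as a simple retry loop. All names below (`planner`, `insert_skill`, `recover_skill`) are hypothetical placeholders for illustration, not the paper's actual interfaces:

```python
def route_dlo(goal, planner, insert_skill, recover_skill, max_retries=3):
    """Hypothetical sketch of the hierarchical control loop:
    a VLM planner maps a language goal to an ordered clip sequence,
    and an RL skill attempts each insertion, invoking a recovery
    skill whenever an attempt fails."""
    plan = planner(goal)                 # e.g. ["red", "green", "blue"]
    for clip in plan:
        for _ in range(max_retries):
            if insert_skill(clip):       # RL-trained insertion skill
                break
            recover_skill(clip)          # reorient DLO into a feasible state
        else:
            return False                 # retries exhausted for this clip
    return True
```

The retry-then-recover structure is what lets a single mid-sequence failure be repaired locally instead of aborting the whole long-horizon plan.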

Video



Simulation Experiments

Implicit order: route a V-shape

Spatial order: from right to left

Color order: from red to green, then blue

Color order: from red to green, blue then yellow

Real-World Experiments

Check out our longest demo! It demonstrates the robustness of our planner.

Implicit order: route a V-shape

Implicit order: natural order

Spatial order: from right to left, then middle

Color order: from purple to black then red

Failure cases

Fails with tightly spaced clips

Typical failure cases

Color order: from red to green, blue then yellow

Image Gallery

Clip direction extraction

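One common way to extract a clip's direction from a binary mask is principal component analysis over the mask's pixel coordinates. This is a minimal sketch under that assumption; the paper's exact extraction method is not shown here:

```python
import numpy as np

def clip_direction(mask):
    """Estimate the dominant axis of a binary clip mask via PCA.

    Returns a unit 2-vector (dy, dx) along the clip's long axis
    (sign is ambiguous, as with any PCA direction).
    Hypothetical helper for illustration only.
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1).astype(float)
    pts -= pts.mean(axis=0)                # center the pixel cloud
    cov = pts.T @ pts / len(pts)           # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, np.argmax(eigvals)]  # principal eigenvector
```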

Rope segmentation with SAM2

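Once SAM2 produces a binary rope mask, a downstream planner typically needs a compact polyline rather than raw pixels. A minimal sketch of one such reduction, assuming a mostly horizontal rope (this is an illustrative post-processing step, not the paper's pipeline):

```python
import numpy as np

def rope_centerline(mask):
    """Reduce a binary rope mask of shape (H, W) to a polyline of
    (row, col) points, one per image column that contains rope pixels,
    by averaging the row indices in that column.
    Assumes the rope runs roughly left-to-right across the image."""
    pts = []
    for col in range(mask.shape[1]):
        rows = np.nonzero(mask[:, col])[0]
        if rows.size:
            pts.append((rows.mean(), col))
    return np.array(pts)
```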