Indeed, the only difference is that they work in RGB space, and the dataset is a bit toy-ish (no offence): the networks simply need to separate the objects either by color or by a regular texture pattern.
What this motion grouping paper proposes is more at the idea level. It makes the observation that objects in natural videos or images have very complicated textures, so there is no reason a network could group these pixels together if no supervision is provided.
However, in motion space, pixels that move together form a homogeneous field, and luckily, from psychology (the Gestalt principle of common fate), we know that parts of an object tend to move together.
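To make that concrete, here is a minimal sketch (not the paper's architecture, which uses a learned grouping module on flow): given a precomputed optical-flow field, simply clustering the per-pixel flow vectors already separates a moving object from the background, because the motion field is homogeneous within the object regardless of how complex its RGB texture is. The flow here is synthetic; in practice it would come from an off-the-shelf flow estimator.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic optical-flow field: one (dx, dy) vector per pixel.
H, W = 64, 64
flow = np.zeros((H, W, 2), dtype=np.float32)
flow[16:48, 16:48] = (3.0, -1.0)          # a "moving object" region
flow += 0.1 * np.random.randn(H, W, 2)    # mild flow-estimation noise

# Cluster the 2-D flow vectors; k=2 assumes one object vs. background.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(flow.reshape(-1, 2))
segmentation = labels.reshape(H, W)       # motion-based grouping mask
```

Even this naive k-means baseline recovers the object mask here, which is exactly why motion is such a convenient grouping cue compared with raw RGB.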
Tagger: Deep Unsupervised Perceptual Grouping
https://arxiv.org/abs/1606.06724