Unsupervised Orientation Learning Using Autoencoders

Disentangled Representations of Orientation

Posted by Rembert Daems on December 1, 2020

paper · slides

I presented this work at the 2020 NeurIPS workshop on Differential Geometry meets Deep Learning.

Abstract

We present a method to learn the orientation of symmetric objects in real-world images in an unsupervised way. Our method explicitly maps in-plane relative rotations to the latent space of an autoencoder by rotating in both the image domain and the latent domain. This is achieved by adding a proposed crossing loss to a standard autoencoder training framework, which enforces consistency between rotations in the image domain and the latent domain. This relative representation of rotation is made absolute by exploiting the symmetry of the observed object, resulting in an unsupervised method to learn orientation. Furthermore, orientation is disentangled in the latent space from other descriptive factors. We apply this method to two real-world datasets: aerial images of planes from the DOTA dataset and images of densely packed honeybees. We show empirically that, using no annotations, this method learns orientation with accuracy comparable to that of the same models trained with annotations.
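To make the crossing-loss idea concrete, here is a minimal PyTorch sketch, not the paper's actual code: it assumes an `encoder` whose first two latent dimensions encode orientation as a point on a circle, and that a single rotation angle `theta` is applied to the whole batch. The names `rotate_latent` and `crossing_loss` are illustrative. The loss compares encoding a rotated image against rotating the encoding of the original image.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def rotate_latent(z, theta):
    """Apply a 2D rotation by theta (radians) to the first two latent
    dimensions, leaving all remaining dimensions untouched."""
    c, s = torch.cos(theta), torch.sin(theta)
    z_rot = z.clone()
    z_rot[:, 0] = c * z[:, 0] - s * z[:, 1]
    z_rot[:, 1] = s * z[:, 0] + c * z[:, 1]
    return z_rot

def crossing_loss(encoder, x, theta):
    """Consistency between rotating in the image domain and rotating in
    the latent domain: encode(rotate(x)) should match rotate(encode(x))."""
    z = encoder(x)
    # torchvision's rotate expects the angle in degrees.
    x_rot = TF.rotate(x, float(torch.rad2deg(theta)))
    z_from_rotated_image = encoder(x_rot)
    z_rotated_in_latent = rotate_latent(z, theta)
    return F.mse_loss(z_from_rotated_image, z_rotated_in_latent)
```

In training, a term like this would be added to the usual reconstruction loss, with `theta` sampled randomly per batch, so that relative in-plane rotations become explicitly represented in the latent space.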