Department of Engineering Science, University of Oxford

Seeing the Arrow of Time

Lyndsey C. Pickup, Zheng Pan, Donglai Wei, YiChang Shih,

Changshui Zhang, Andrew Zisserman, Bernhard Schölkopf, and William T. Freeman



Overview

We explore whether we can observe Time's Arrow in a temporal sequence -- is it possible to tell whether a video is running forwards or backwards?
We develop three methods based on machine learning and image statistics, and evaluate them on a video dataset that we collected.

Video Dataset (link)


We collected 180 high-quality videos, each around 6-10 seconds long. The dataset contains 155 forward sequences and 25 intentionally reversed sequences. The full dataset can be downloaded here.



Top and bottom rows: two sampled sequences from our dataset

Method #1: Flow words


Videos are described by SIFT-like "flow words", built from optical flow rather than image edges. We learn a vocabulary of 50 words from the training data, and achieve 75%-90% classification accuracy in three-fold cross-validation.

Construction of flow-words features. Top: pair of frames at times t-1 and t+1, warped into the coordinate frame of the intervening image. Left: vertical component of the optic flow between this pair of frames; the lower copy shows the same with small SIFT-like descriptor grids overlaid. Right: expanded view of the SIFT-like descriptors shown on the left. Not shown: the horizontal components of optic flow, which are also required in constructing the descriptors.
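As a rough illustration of the flow-words pipeline (not the released code), the sketch below computes grid descriptors over a dense optical-flow field, quantises them into a small learnt vocabulary, and trains a linear classifier on per-video word histograms. The descriptors here are simple per-cell flow averages rather than the SIFT-like orientation histograms described above, and the OpenCV/scikit-learn calls, parameter values, and helper names (flow_descriptors, video_histogram, train_arrow_classifier) are illustrative assumptions.

import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def flow_descriptors(frames, grid=4, cell=16):
    # Crude grid descriptors over the dense optic-flow field of a clip.
    descs = []
    for a, b in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(a, b, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = a.shape
        size = grid * cell
        for y in range(0, h - size + 1, size):
            for x in range(0, w - size + 1, size):
                patch = flow[y:y + size, x:x + size]
                # mean u/v motion per cell, flattened into one vector
                cells = patch.reshape(grid, cell, grid, cell, 2).mean(axis=(1, 3))
                descs.append(cells.ravel())
    return np.array(descs)

def video_histogram(frames, vocab):
    # Quantise a clip's descriptors against the learnt vocabulary.
    words = vocab.predict(flow_descriptors(frames))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_arrow_classifier(videos, labels, n_words=50):
    # videos: list of clips, each a list of grayscale uint8 frames;
    # labels: +1 for forward-playing clips, -1 for reversed clips.
    all_descs = np.vstack([flow_descriptors(v) for v in videos])
    vocab = KMeans(n_clusters=n_words, n_init=4).fit(all_descs)
    hists = np.array([video_histogram(v, vocab) for v in videos])
    clf = LinearSVC().fit(hists, labels)
    return vocab, clf

In the full method the descriptor construction and the cross-validation protocol follow the paper; this sketch only conveys the bag-of-flow-words structure.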

Method #2: Motion causality


Consider the case in which one motion causes another, such as a ball striking other balls. Using this cue alone, the accuracy is about 70%.

Top row: three frames from one of the tennis-ball dataset sequences, in which a ball is rolled into a stack of static balls. Bottom row: regions of motion, identified using only the frames at t and t-1. Notice that the two rolling balls are identified as separate regions of motion, and coloured separately in the bottom rightmost plot. The fact that one rolling ball (first frame) causes two balls to end up rolling (last frame) is what the motion-causation method aims to detect and use.
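The sketch below is a simplified stand-in for this causation cue, assuming OpenCV: it counts independently moving regions between consecutive frames and checks whether the number of regions tends to grow over time, as expected when one motion sets off several others. The thresholds, helper names (moving_region_counts, causation_score), and the scoring rule are illustrative assumptions, not the paper's measure.

import cv2
import numpy as np

def moving_region_counts(frames, thresh=15, min_area=50):
    # Count independently moving regions between consecutive grayscale frames.
    counts = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        diff = cv2.absdiff(cur, prev)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                np.ones((3, 3), np.uint8))
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        # label 0 is the background; drop tiny specks
        counts.append(sum(1 for i in range(1, n)
                          if stats[i, cv2.CC_STAT_AREA] >= min_area))
    return counts

def causation_score(frames):
    # Positive score: the number of moving regions tends to increase,
    # which is what we expect when the video plays forwards in time.
    c = np.array(moving_region_counts(frames), dtype=float)
    return float(np.mean(np.diff(c))) if len(c) > 1 else 0.0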

Method #3: Auto-regressive model


Consider the case in which object motion follows a linear auto-regressive model, so that the current velocity depends on past velocities plus noise. This motion noise is asymmetric between forward and backward sequences. Using only this cue, we achieve an accuracy of 58%.

Top: tracked points from a sequence, and an example track. Bottom: forward-time (left) and backward-time (right) vertical trajectory components, and the corresponding model residuals. Trajectories should be independent of the model residuals (noise) in the forward-time direction only. For the example track shown, the p-values for the forward and backward directions are 0.5237 and 0.0159 respectively, indicating that forwards time is more likely.
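A minimal sketch of this cue, assuming NumPy/SciPy: fit an AR(2) model to a 1-D tracked trajectory in both time directions and test whether the residuals look independent of the past, as in the p-values quoted above. The paper uses a dedicated independence test on the AR residuals; the Pearson correlation test, AR order, and helper names (ar2_residuals, direction_p_values) below are simplified, illustrative assumptions.

import numpy as np
from scipy.stats import pearsonr

def ar2_residuals(x):
    # Least-squares AR(2) fit: x[t] ~ a*x[t-1] + b*x[t-2] + c.
    X = np.column_stack([x[1:-1], x[:-2], np.ones(len(x) - 2)])
    coef, *_ = np.linalg.lstsq(X, x[2:], rcond=None)
    return x[2:] - X @ coef

def direction_p_values(track):
    # Fit the AR model in both time directions and test whether the
    # residuals are uncorrelated with the preceding value; the direction
    # with the larger p-value looks more like true forward time.
    p_vals = []
    for x in (np.asarray(track, float), np.asarray(track, float)[::-1]):
        r = ar2_residuals(x)
        _, p = pearsonr(r, x[1:-1])
        p_vals.append(p)
    return tuple(p_vals)   # (p_forward, p_backward)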

Source code

The source code and the learnt flow words are released on the software page.

Publications

Lyndsey C. Pickup, Zheng Pan, Donglai Wei, YiChang Shih, Changshui Zhang, Andrew Zisserman, Bernhard Schölkopf, and William T. Freeman
Seeing the Arrow of Time  
IEEE Conference on Computer Vision and Pattern Recognition, 2014

Acknowledgements

This work was supported in the UK by ERC grant VisRec no. 228180, in China by 973 Program (2013CB329503), NSFC Grant no. 91120301, and in the US by ONR MURI grant N00014-09-1-1051 and NSF CGV-1111415.

