├── dataset ├── class_list.txt ├── vallist.txt └── testlist.txt ├── LICENSE ├── .gitignore ├── README.md ├── tools ├── ops.py ├── utilities.py ├── ffmpeg_reader.py └── generate_tfrecord.py ├── models ├── mfb_dis_net.py ├── autoencoder_net.py └── mfb_net_cross.py ├── autoencoder_val.py ├── mfb_pretrain_dis_test.py ├── mfb_pretrain_dis_val.py ├── mfb_cross_val.py ├── mfb_cross_test.py ├── autoencoder_train.py ├── mfb_pretrain_dis_train.py └── mfb_cross_train.py /dataset/class_list.txt: -------------------------------------------------------------------------------- 1 | Basketball 1 2 | BasketballDunk 2 3 | Biking 3 4 | CliffDiving 4 5 | CricketBowling 5 6 | Diving 6 7 | Fencing 7 8 | FloorGymnastics 8 9 | GolfSwing 9 10 | HorseRiding 10 11 | IceDancing 11 12 | LongJump 12 13 | PoleVault 13 14 | RopeClimbing 14 15 | SalsaSpin 15 16 | SkateBoarding 16 17 | Skiing 17 18 | Skijet 18 19 | SoccerJuggling 19 20 | Surfing 20 21 | TennisSwing 21 22 | TrampolineJumping 22 23 | VolleyballSpiking 23 24 | WalkingWithDog 24 25 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Image Processing Group - BarcelonaTECH - UPC 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | 28 | # PyInstaller 29 | # Usually these files are written by a python script from a template 30 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
31 | *.manifest 32 | *.spec 33 | 34 | # Installer logs 35 | pip-log.txt 36 | pip-delete-this-directory.txt 37 | 38 | # Unit test / coverage reports 39 | htmlcov/ 40 | .tox/ 41 | .coverage 42 | .coverage.* 43 | .cache 44 | nosetests.xml 45 | coverage.xml 46 | *.cover 47 | .hypothesis/ 48 | 49 | # Translations 50 | *.mo 51 | *.pot 52 | 53 | # Django stuff: 54 | *.log 55 | local_settings.py 56 | 57 | # Flask stuff: 58 | instance/ 59 | .webassets-cache 60 | 61 | # Scrapy stuff: 62 | .scrapy 63 | 64 | # Sphinx documentation 65 | docs/_build/ 66 | 67 | # PyBuilder 68 | target/ 69 | 70 | # Jupyter Notebook 71 | .ipynb_checkpoints 72 | 73 | # pyenv 74 | .python-version 75 | 76 | # celery beat schedule file 77 | celerybeat-schedule 78 | 79 | # SageMath parsed files 80 | *.sage.py 81 | 82 | # dotenv 83 | .env 84 | 85 | # virtualenv 86 | .venv 87 | venv/ 88 | ENV/ 89 | 90 | # Spyder project settings 91 | .spyderproject 92 | .spyproject 93 | 94 | # Rope project settings 95 | .ropeproject 96 | 97 | # mkdocs documentation 98 | /site 99 | 100 | # mypy 101 | .mypy_cache/ 102 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Disentangling Foreground, Background and Motion Features in Videos 2 | This repo contains the source code for the work named in the title. Please refer to our [project webpage](https://imatge-upc.github.io/unsupervised-2017-cvprw/) or [original paper](https://arxiv.org/pdf/1707.04092.pdf) for more details. 3 | 4 | ## Dataset 5 | 6 | This project requires the [UCF-101 dataset](http://crcv.ucf.edu/data/UCF101.php) and its [localization annotations](http://www.thumos.info/download.html) (bounding boxes for the action regions). Please note that the annotations only contain bounding boxes for 24 classes out of 101. We only use these 24 classes in our experiments. 7 | 8 | ### Download link 9 | ``` 10 | UCF-101: http://crcv.ucf.edu/data/UCF101/UCF101.rar 11 | Annotations (version 1): http://crcv.ucf.edu/ICCV13-Action-Workshop/index.files/UCF101_24Action_Detection_Annotations.zip 12 | ``` 13 | 14 | ### Dataset split 15 | 16 | We split the dataset into a training set, a validation set and a test set. The split lists for each set can be found under the **`dataset`** folder. 17 | 18 | ### Generate TF-Records 19 | 20 | As we are dealing with videos, using TF-records in TensorFlow helps reduce I/O overhead. (Please refer to the [official documentation](https://www.tensorflow.org/api_guides/python/reading_data) if you're not familiar with TF-records.) Each [**`SequenceExample`**](https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/core/example/example.proto) in our TF-records includes 32 video frames, the corresponding masks and so on. 21 | 22 | A brief description of how we generate TF-records from videos and annotations: each video is split into chunks of 32 frames, and each chunk is saved as one example. The corresponding masks are derived from the localization annotations. 23 | 24 | In order to generate the TF-records used by this project, you need to modify certain paths in **`tools/generate_tfrecords.py`**, including **`videos_directory`**, **`annotation_directory`** and so on.
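To give a rough idea of what ends up in each record, the sketch below shows how one 32-frame chunk and its masks could be serialized into a **`SequenceExample`**. It is only an illustration: the actual feature keys and preprocessing are defined in **`tools/generate_tfrecords.py`**, and the names used here (`frames`, `masks`, `label`) are assumptions.

```
# Illustrative sketch only -- the real feature keys and preprocessing live in
# tools/generate_tfrecords.py; the names 'frames', 'masks' and 'label' are assumptions.
import tensorflow as tf

def make_sequence_example(frames, masks, label):
    """frames/masks: lists of 32 encoded frame/mask byte strings; label: integer class id."""
    def bytes_feature(value):
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

    # per-chunk (context) features
    context = tf.train.Features(feature={
        'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    })
    # per-frame (sequence) features
    feature_lists = tf.train.FeatureLists(feature_list={
        'frames': tf.train.FeatureList(feature=[bytes_feature(f) for f in frames]),
        'masks': tf.train.FeatureList(feature=[bytes_feature(m) for m in masks]),
    })
    return tf.train.SequenceExample(context=context, feature_lists=feature_lists)

# Each 32-frame chunk of a video becomes one serialized record:
# with tf.python_io.TFRecordWriter('video_chunk_0.tfrecord') as writer:
#     writer.write(make_sequence_example(frames, masks, label).SerializeToString())
```

The `input_pipeline` / `input_pipeline_dis` helpers used by the training and evaluation scripts then read these records back as batches of clips, masks and labels.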
As we use FFMPEG to decode videos, you may want to install it with the command below if you are using Anaconda: 25 | 26 | 27 | 28 | ``` 29 | conda install ffmpeg 30 | ``` 31 | 32 | After installing FFMPEG, you need to specify the path to the FFMPEG executable in **`tools/ffmpeg_reader.py`**. (Usually it's just **`~/anaconda/bin/ffmpeg`** if you are using Anaconda.) After specifying the path to FFMPEG, you are good to go! Run the script below to generate the TF-records: 33 | 34 | ``` 35 | python tools/generate_tfrecords.py 36 | ``` 37 | 38 | ## Training & Testing 39 | 40 | Our training and testing code is organized in the following fashion: the scripts under **`models/`** construct the TensorFlow graph for each model, and the top-level scripts named **`***_[train|val|test].py`** are the entry points that run training, validation and testing for each model. 41 | 42 | 43 | -------------------------------------------------------------------------------- /tools/ops.py: -------------------------------------------------------------------------------- 1 | import math 2 | import numpy as np 3 | import tensorflow as tf 4 | 5 | from tensorflow.python.framework import ops 6 | 7 | 8 | def _variable_with_weight_decay(name, shape, wd=1e-3): 9 | with tf.device("/cpu:0"): # store all weights in CPU to optimize weights sharing among GPUs 10 | var = tf.get_variable(name, shape, initializer=tf.contrib.layers.xavier_initializer()) 11 | if wd is not None: 12 | weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss') 13 | tf.add_to_collection('losses', weight_decay) 14 | return var 15 | 16 | 17 | def max_pool3d(input_, k, name='max_pool3d'): 18 | return tf.nn.max_pool3d(input_, ksize=[1, k, 2, 2, 1], strides=[1, k, 2, 2, 1], padding='SAME', name=name) 19 | 20 | 21 | def conv2d(input_, output_dim, k_h=3, k_w=3, d_h=1, d_w=1, padding='SAME', name="conv2d"): 22 | with tf.variable_scope(name): 23 | w = _variable_with_weight_decay('w', [k_h, k_w, input_.get_shape()[-1], output_dim]) 24 | conv = tf.nn.conv2d(input_, w, strides=[1, d_h, d_w, 1], padding=padding) 25 | b = _variable_with_weight_decay('b', [output_dim]) 26 | 27 | return tf.nn.bias_add(conv, b) 28 | 29 | 30 | def cross_conv2d(input_, kernel, d_h=1, d_w=1, padding='SAME', name="cross_conv2d"): 31 | with tf.variable_scope(name): 32 | output_dim = kernel.get_shape()[4] 33 | batch_size = input_.get_shape().as_list()[0] 34 | b = _variable_with_weight_decay('b', [output_dim]) 35 | 36 | output = [] 37 | input_list = tf.unstack(input_) 38 | kernel_list = tf.unstack(kernel) 39 | for i in range(batch_size): 40 | conv = tf.nn.conv2d(tf.expand_dims(input_list[i],0), kernel_list[i], strides=[1, d_h, d_w, 1], padding=padding) 41 | conv = tf.nn.bias_add(conv, b) 42 | output.append(conv) 43 | 44 | return tf.concat(output, 0) 45 | 46 | 47 | def conv3d(input_, output_dim, k_t=3, k_h=3, k_w=3, d_t=1, d_h=1, d_w=1, padding='SAME', name="conv3d"): 48 | with tf.variable_scope(name): 49 | w = _variable_with_weight_decay('w', [k_t, k_h, k_w, input_.get_shape()[-1], output_dim]) 50 | conv = tf.nn.conv3d(input_, w, strides=[1, d_t, d_h, d_w, 1], padding=padding) 51 | b = _variable_with_weight_decay('b', [output_dim]) 52 | 53 | return tf.nn.bias_add(conv, b) 54 | 55 | 56 | def relu(x): 57 | return tf.nn.relu(x) 58 | 59 | 60 | def fc(input_, output_dim, name='fc'): 61 | with tf.variable_scope(name): 62 | w = _variable_with_weight_decay('w', [input_.get_shape()[-1], output_dim]) 63 | b = _variable_with_weight_decay('b',
[output_dim]) 64 | 65 | return tf.matmul(input_, w) + b 66 | 67 | 68 | def deconv2d(input_, output_shape, k_h=3, k_w=3, d_h=1, d_w=1, padding='SAME', name="deconv2d"): 69 | with tf.variable_scope(name): 70 | # filter : [height, width, output_channels, in_channels] 71 | w = _variable_with_weight_decay('w', [k_h, k_h, output_shape[-1], input_.get_shape()[-1]]) 72 | deconv = tf.nn.conv2d_transpose(input_, w, output_shape=output_shape, strides=[1, d_h, d_w, 1], padding=padding) 73 | b = _variable_with_weight_decay('b', [output_shape[-1]]) 74 | 75 | return tf.nn.bias_add(deconv, b) 76 | 77 | 78 | def deconv3d(input_, output_shape, k_t=3, k_h=3, k_w=3, d_t=1, d_h=1, d_w=1, padding='SAME', name="deconv3d"): 79 | with tf.variable_scope(name): 80 | # filter : [depth, height, width, output_channels, in_channels] 81 | w = _variable_with_weight_decay('w', [k_t, k_h, k_h, output_shape[-1], input_.get_shape()[-1]]) 82 | deconv = tf.nn.conv3d_transpose(input_, w, output_shape=output_shape, strides=[1, d_t, d_h, d_w, 1], padding=padding) 83 | b = _variable_with_weight_decay('b', [output_shape[-1]]) 84 | 85 | return tf.nn.bias_add(deconv, b) -------------------------------------------------------------------------------- /models/mfb_dis_net.py: -------------------------------------------------------------------------------- 1 | 2 | import tensorflow as tf 3 | 4 | from tools.ops import * 5 | 6 | 7 | FLAGS = tf.app.flags.FLAGS 8 | 9 | class mfb_dis_net(object): 10 | 11 | def __init__(self, clips, labels, class_num=24, height=128, width=128, seq_length=16, c_dim=3, \ 12 | batch_size=32, keep_prob=1.0, is_training=True, encoder_gradient_ratio=1.0, use_pretrained_encoder=False): 13 | 14 | self.seq = clips 15 | self.labels = labels 16 | self.class_num = class_num 17 | self.batch_size = batch_size 18 | self.height = height 19 | self.width = width 20 | self.seq_length = seq_length 21 | self.c_dim = c_dim 22 | self.dropout = keep_prob 23 | self.encoder_gradient_ratio = encoder_gradient_ratio 24 | self.use_pretrained_encoder = use_pretrained_encoder 25 | 26 | self.seq_shape = [seq_length, height, width, c_dim] 27 | 28 | self.batch_norm_params = { 29 | 'is_training': is_training, 30 | 'decay': 0.9, 31 | 'epsilon': 1e-5, 32 | 'scale': True, 33 | 'center': True, 34 | 'updates_collections': tf.GraphKeys.UPDATE_OPS 35 | } 36 | 37 | pred_logits = self.build_model() 38 | self.ac_loss = tf.reduce_mean(\ 39 | tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=pred_logits)) 40 | 41 | prob = tf.nn.softmax(pred_logits) 42 | pred = tf.one_hot(tf.nn.top_k(prob).indices, self.class_num) 43 | pred = tf.squeeze(pred, axis=1) 44 | pred = tf.cast(pred, tf.bool) 45 | labels = tf.cast(labels, tf.bool) 46 | self.ac = tf.reduce_sum(tf.cast(tf.logical_and(labels, pred), tf.float32)) / self.batch_size 47 | 48 | 49 | def build_model(self): 50 | 51 | c3d_feat = self.mapping_layer(self.c3d(self.seq)) 52 | 53 | if self.use_pretrained_encoder and self.encoder_gradient_ratio == 0.0: 54 | c3d_feat = tf.stop_gradient(c3d_feat) 55 | 56 | with tf.variable_scope('classifier'): 57 | dense1 = tf.reshape(c3d_feat, [self.batch_size, -1]) 58 | 59 | dense1 = fc(dense1, self.class_num, name='fc1') 60 | pred = tf.nn.dropout(dense1, self.dropout) 61 | 62 | return pred 63 | 64 | 65 | def bn(self, x): 66 | return tf.contrib.layers.batch_norm(x, **self.batch_norm_params) 67 | 68 | 69 | def mapping_layer(self, input_, name='mapping'): 70 | with tf.variable_scope(name): 71 | feat = relu(self.bn(conv3d(input_, self.map_dim, k_t=self.map_length, k_h=2, 
k_w=2, d_h=2, d_w=2, padding='VALID', name='mapping1'))) 72 | feat = tf.reshape(feat, [self.batch_size, self.map_height//2, self.map_width//2, self.map_dim]) 73 | 74 | return feat 75 | 76 | 77 | def c3d(self, input_, _dropout=1.0, name='c3d'): 78 | 79 | with tf.variable_scope(name): 80 | 81 | # Convolution Layer 82 | conv1 = relu(self.bn(conv3d(input_, 64, name='conv1'))) 83 | pool1 = max_pool3d(conv1, k=1, name='pool1') 84 | 85 | # Convolution Layer 86 | conv2 = relu(self.bn(conv3d(pool1, 128, name='conv2'))) 87 | pool2 = max_pool3d(conv2, k=2, name='pool2') 88 | 89 | # Convolution Layer 90 | conv3 = relu(self.bn(conv3d(pool2, 256, name='conv3a'))) 91 | conv3 = relu(self.bn(conv3d(conv3, 256, name='conv3b'))) 92 | pool3 = max_pool3d(conv3, k=2, name='pool3') 93 | 94 | # Convolution Layer 95 | conv4 = relu(self.bn(conv3d(pool3, 512, name='conv4a'))) 96 | conv4 = relu(self.bn(conv3d(conv4, 512, name='conv4b'))) 97 | pool4 = max_pool3d(conv4, k=2, name='pool4') 98 | 99 | # Convolution Layer 100 | conv5 = relu(self.bn(conv3d(pool4, 512, name='conv5a'))) 101 | conv5 = relu(self.bn(conv3d(conv5, 512, name='conv5b'))) 102 | #pool5 = max_pool3d(conv5, k=2, name='pool5') 103 | 104 | conv5_shape = conv5.get_shape().as_list() 105 | self.map_length = conv5_shape[1] 106 | self.map_height = conv5_shape[2] 107 | self.map_width = conv5_shape[3] 108 | self.map_dim = conv5_shape[4] 109 | 110 | feature = conv5 111 | 112 | return feature 113 | 114 | 115 | def tower_loss(name_scope, mfb, use_pretrained_encoder, encoder_gradient_ratio=1.0): 116 | # get reconstruction and ground truth 117 | ac_loss = mfb.ac_loss 118 | 119 | weight_decay_loss_list = tf.get_collection('losses', name_scope) 120 | if use_pretrained_encoder: 121 | if encoder_gradient_ratio == 0.0: 122 | weight_decay_loss_list = [var for var in weight_decay_loss_list \ 123 | if 'c3d' not in var.name and 'mapping' not in var.name] 124 | 125 | weight_decay_loss = 0.0 126 | if len(weight_decay_loss_list) > 0: 127 | weight_decay_loss = tf.add_n(weight_decay_loss_list) 128 | 129 | total_loss = weight_decay_loss * 100 + ac_loss 130 | 131 | return total_loss, ac_loss, weight_decay_loss -------------------------------------------------------------------------------- /autoencoder_val.py: -------------------------------------------------------------------------------- 1 | import os 2 | os.environ['TF_CPP_MIN_LOG_LEVEL']='1' 3 | from os import listdir 4 | 5 | import sys 6 | import time 7 | import tools.ops 8 | 9 | import numpy as np 10 | 11 | import tensorflow as tf 12 | import scipy.misc as sm 13 | 14 | from models.autoencoder_net import * 15 | from tools.utilities import * 16 | from tools.ops import * 17 | 18 | 19 | flags = tf.app.flags 20 | flags.DEFINE_integer('batch_size', 5, 'Batch size.') 21 | flags.DEFINE_integer('num_epochs', 1, 'Number of epochs.') # ~13 min per epoch 22 | flags.DEFINE_integer('num_gpus', 1, 'Number of GPUs.') 23 | flags.DEFINE_integer('seq_length', 16, 'Length of each video clip.') 24 | flags.DEFINE_integer('height', 128, 'Height of video frame.') 25 | flags.DEFINE_integer('width', 128, 'Width of video frame.') 26 | flags.DEFINE_integer('channel', 3, 'Number of channels for each frame.') 27 | flags.DEFINE_integer('num_sample', 1240, 'Number of samples in this dataset.') 28 | 29 | FLAGS = flags.FLAGS 30 | 31 | prefix = 'autoencoder' 32 | model_save_dir = './ckpt/' + prefix 33 | loss_save_dir = './loss' 34 | val_list_path = './dataset/vallist.txt' 35 | dataset_path = './dataset/UCF-101-tf-records' 36 | 37 | use_pretrained_model = True 
38 | save_predictions = True 39 | 40 | 41 | def run_validating(): 42 | 43 | # Create model directory 44 | if not os.path.exists(model_save_dir): 45 | os.makedirs(model_save_dir) 46 | model_filename = "./mfb_ae_ucf24.model" 47 | 48 | # Consturct computational graph 49 | tower_grads = [] 50 | tower_losses, tower_rec_losses, tower_wd_losses = [], [], [] 51 | 52 | global_step = tf.get_variable( 53 | 'global_step', 54 | [], 55 | initializer=tf.constant_initializer(0), 56 | trainable=False 57 | ) 58 | starter_learning_rate = 1e-4 59 | learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, 60 | 1000000, 0.8, staircase=True) 61 | opt = tf.train.AdamOptimizer(learning_rate) 62 | 63 | # Create a session for running Ops on the Graph. 64 | config = tf.ConfigProto(allow_soft_placement=True) 65 | #config.operation_timeout_in_ms = 10000 66 | sess = tf.Session(config=config) 67 | coord = tf.train.Coordinator() 68 | threads = None 69 | 70 | val_list_file = open(val_list_path, 'r') 71 | val_list = val_list_file.read().splitlines() 72 | for i, line in enumerate(val_list): 73 | val_list[i] = os.path.join(dataset_path, val_list[i]) 74 | 75 | assert(len(val_list) % FLAGS.num_gpus == 0) 76 | num_for_each_gpu = len(val_list) // FLAGS.num_gpus 77 | 78 | clips_list = [] 79 | with sess.as_default(): 80 | for i in range(FLAGS.num_gpus): 81 | clips, _, _ = input_pipeline(val_list[i*num_for_each_gpu:(i+1)*num_for_each_gpu], \ 82 | FLAGS.batch_size, num_epochs=FLAGS.num_epochs, is_training=False) 83 | clips_list.append(clips) 84 | 85 | autoencoder_list = [] 86 | with tf.variable_scope('vars') as var_scope: 87 | for gpu_index in range(FLAGS.num_gpus): 88 | with tf.device('/gpu:%d' % (gpu_index)): 89 | with tf.name_scope('%s_%d' % ('tower', gpu_index)) as scope: 90 | 91 | # construct model 92 | autoencoder = autoencoder_net(clips_list[gpu_index], FLAGS.height, FLAGS.width, FLAGS.seq_length, \ 93 | FLAGS.channel, FLAGS.batch_size, is_training=False) 94 | autoencoder_list.append(autoencoder) 95 | loss, rec_loss, wd_loss = tower_loss(scope, autoencoder, clips_list[gpu_index]) 96 | 97 | var_scope.reuse_variables() 98 | 99 | vars_to_optimize = tf.trainable_variables() 100 | grads = opt.compute_gradients(loss, var_list=vars_to_optimize) 101 | 102 | tower_grads.append(grads) 103 | tower_losses.append(loss) 104 | tower_rec_losses.append(rec_loss) 105 | tower_wd_losses.append(wd_loss) 106 | 107 | # concatenate the losses of all towers 108 | loss_op = tf.reduce_mean(tower_losses) 109 | rec_loss_op = tf.reduce_mean(tower_rec_losses) 110 | wd_loss_op = tf.reduce_mean(tower_wd_losses) 111 | 112 | # saver for saving checkpoints 113 | saver = tf.train.Saver() 114 | init = tf.initialize_all_variables() 115 | 116 | sess.run(init) 117 | if not os.path.exists(model_save_dir): 118 | os.makedirs(model_save_dir) 119 | if use_pretrained_model: 120 | print('[*] Loading checkpoint ...') 121 | model = tf.train.latest_checkpoint(model_save_dir) 122 | if model is not None: 123 | saver.restore(sess, model) 124 | print('[*] Loading success: %s!'%model) 125 | else: 126 | print('[*] Loading failed ...') 127 | 128 | 129 | # Create loss output folder 130 | if not os.path.exists(loss_save_dir): 131 | os.makedirs(loss_save_dir) 132 | loss_file = open(os.path.join(loss_save_dir, prefix+'_val.txt'), 'a+') 133 | 134 | total_steps = (FLAGS.num_sample / (FLAGS.num_gpus * FLAGS.batch_size)) * FLAGS.num_epochs 135 | 136 | # start queue runner 137 | coord = tf.train.Coordinator() 138 | threads = tf.train.start_queue_runners(sess=sess, 
coord=coord) 139 | 140 | rec_loss_list = [] 141 | try: 142 | with sess.as_default(): 143 | print('\n\n\n*********** start validating ***********\n\n\n') 144 | step = global_step.eval() 145 | print('[step = %d]'%step) 146 | cnt = 0 147 | while not coord.should_stop(): 148 | # Run training steps or whatever 149 | rec_loss = sess.run(rec_loss_op) 150 | rec_loss_list.append(rec_loss) 151 | print('%d: rec_loss=%.8f' %(cnt, rec_loss)) 152 | cnt += 1 153 | 154 | except tf.errors.OutOfRangeError: 155 | print('Done training -- epoch limit reached') 156 | finally: 157 | # When done, ask the threads to stop. 158 | coord.request_stop() 159 | 160 | # Wait for threads to finish. 161 | coord.join(threads) 162 | sess.close() 163 | 164 | mean_rec = np.mean(np.asarray(rec_loss_list)) 165 | 166 | line = '[step=%d] rec_loss=%.8f' %(step, mean_rec) 167 | print(line) 168 | loss_file.write(line + '\n') 169 | 170 | 171 | 172 | def main(_): 173 | run_validating() 174 | 175 | 176 | if __name__ == '__main__': 177 | tf.app.run() -------------------------------------------------------------------------------- /mfb_pretrain_dis_test.py: -------------------------------------------------------------------------------- 1 | import os 2 | os.environ['TF_CPP_MIN_LOG_LEVEL']='1' 3 | from os import listdir 4 | 5 | import sys 6 | import time 7 | import tools.ops 8 | 9 | import numpy as np 10 | 11 | import tensorflow as tf 12 | import scipy.misc as sm 13 | 14 | from models.mfb_dis_net import * 15 | from tools.utilities import * 16 | from tools.ops import * 17 | 18 | 19 | flags = tf.app.flags 20 | flags.DEFINE_integer('batch_size', 1, 'Batch size.') 21 | flags.DEFINE_integer('num_epochs', 1, 'Number of epochs.') # ~13 min per epoch 22 | flags.DEFINE_integer('num_gpus', 1, 'Number of GPUs.') 23 | flags.DEFINE_integer('seq_length', 16, 'Length of each video clip.') 24 | flags.DEFINE_integer('height', 128, 'Height of video frame.') 25 | flags.DEFINE_integer('width', 128, 'Width of video frame.') 26 | flags.DEFINE_integer('channel', 3, 'Number of channels for each frame.') 27 | flags.DEFINE_integer('num_sample', 4451, 'Number of samples in this dataset.') 28 | flags.DEFINE_integer('num_class', 24, 'Number of classes to classify.') 29 | 30 | FLAGS = flags.FLAGS 31 | 32 | prefix = 'DIS_ae_mfb_baseline_finetune=1.0' 33 | model_save_dir = './ckpt/' + prefix 34 | loss_save_dir = './loss' 35 | test_list_path = './dataset/testlist.txt' 36 | dataset_path = './dataset/UCF-101-tf-records' 37 | test_model_name = model_save_dir + '/mfb_baseline_ucf24.model-7000' 38 | 39 | use_pretrained_model = True 40 | save_predictions = True 41 | 42 | 43 | def run_testing(): 44 | 45 | # Create model directory 46 | if not os.path.exists(model_save_dir): 47 | os.makedirs(model_save_dir) 48 | model_filename = "./mfb_dis_ucf24.model" 49 | 50 | tower_grads, tower_ac = [], [] 51 | tower_losses, tower_ac_losses, tower_wd_losses = [], [], [] 52 | 53 | global_step = tf.get_variable( 54 | 'global_step', 55 | [], 56 | initializer=tf.constant_initializer(0), 57 | trainable=False 58 | ) 59 | starter_learning_rate = 1e-4 60 | learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, 61 | 1000000, 0.8, staircase=True) 62 | #opt = tf.train.AdamOptimizer(learning_rate) 63 | 64 | # Create a session for running Ops on the Graph. 
65 | config = tf.ConfigProto(allow_soft_placement=True) 66 | sess = tf.Session(config=config) 67 | coord = tf.train.Coordinator() 68 | threads = None 69 | 70 | test_list_file = open(test_list_path, 'r') 71 | test_list = test_list_file.read().splitlines() 72 | for i, line in enumerate(test_list): 73 | test_list[i] = os.path.join(dataset_path, test_list[i]) 74 | 75 | assert(len(test_list) % FLAGS.num_gpus == 0) 76 | num_for_each_gpu = len(test_list) // FLAGS.num_gpus 77 | 78 | clips_list, labels_list, texts_list = [], [], [] 79 | with sess.as_default(): 80 | for i in range(FLAGS.num_gpus): 81 | clips, labels, texts = input_pipeline_dis(test_list[i*num_for_each_gpu:(i+1)*num_for_each_gpu], \ 82 | FLAGS.batch_size, num_epochs=FLAGS.num_epochs, is_training=False) 83 | clips_list.append(clips) 84 | labels_list.append(labels) 85 | texts_list.append(texts) 86 | 87 | mfb_list = [] 88 | with tf.variable_scope('vars') as var_scope: 89 | for gpu_index in range(FLAGS.num_gpus): 90 | with tf.device('/gpu:%d' % (gpu_index)): 91 | with tf.name_scope('%s_%d' % ('tower', gpu_index)) as scope: 92 | 93 | # construct model 94 | mfb = mfb_dis_net(clips_list[gpu_index], labels_list[gpu_index], FLAGS.num_class, FLAGS.height, \ 95 | FLAGS.width, FLAGS.seq_length, FLAGS.channel, FLAGS.batch_size, is_training=False) 96 | mfb_list.append(mfb) 97 | loss, ac_loss, wd_loss = tower_loss(scope, mfb, use_pretrained_model) 98 | 99 | var_scope.reuse_variables() 100 | 101 | #tower_grads.append(grads) 102 | tower_losses.append(loss) 103 | tower_ac_losses.append(ac_loss) 104 | tower_wd_losses.append(wd_loss) 105 | tower_ac.append(mfb.ac) 106 | 107 | 108 | # concatenate the losses of all towers 109 | loss_op = tf.reduce_mean(tower_losses) 110 | ac_loss_op = tf.reduce_mean(tower_ac_losses) 111 | wd_loss_op = tf.reduce_mean(tower_wd_losses) 112 | ac_op = tf.reduce_mean(tower_ac) 113 | 114 | tf.summary.scalar('loss', loss_op) 115 | tf.summary.scalar('ac_loss', ac_loss_op) 116 | tf.summary.scalar('ac', ac_op) 117 | tf.summary.scalar('wd_loss', wd_loss_op) 118 | 119 | # saver for saving checkpoints 120 | saver = tf.train.Saver() 121 | init = tf.initialize_all_variables() 122 | 123 | sess.run(init) 124 | if not os.path.exists(model_save_dir): 125 | os.makedirs(model_save_dir) 126 | if use_pretrained_model: 127 | print('[*] Loading checkpoint ...') 128 | saver.restore(sess, test_model_name) 129 | 130 | # Create loss output folder 131 | if not os.path.exists(loss_save_dir): 132 | os.makedirs(loss_save_dir) 133 | loss_file = open(os.path.join(loss_save_dir, prefix+'_test.txt'), 'a+') 134 | 135 | loss_file.write('\n' + test_model_name + '\n') 136 | loss_file.flush() 137 | 138 | total_steps = (FLAGS.num_sample / (FLAGS.num_gpus * FLAGS.batch_size)) * FLAGS.num_epochs 139 | 140 | # start queue runner 141 | coord = tf.train.Coordinator() 142 | threads = tf.train.start_queue_runners(sess=sess, coord=coord) 143 | 144 | ac_list, loss_list = [], [] 145 | step = 0 146 | try: 147 | with sess.as_default(): 148 | print('\n\n\n*********** start testing ***********\n\n\n') 149 | step = global_step.eval() 150 | print('[step = %d]'%step) 151 | while not coord.should_stop(): 152 | # Run training steps or whatever 153 | ac, ac_loss = sess.run([ac_op, ac_loss_op]) 154 | ac_list.append(ac) 155 | loss_list.append(ac_loss) 156 | line = 'ac=%.3f, loss=%.8f' %(ac*100, ac_loss) 157 | loss_file.write(line + '\n') 158 | loss_file.flush() 159 | print(line) 160 | 161 | 162 | except tf.errors.OutOfRangeError: 163 | print('Done training -- epoch limit reached') 164 | 
finally: 165 | # When done, ask the threads to stop. 166 | coord.request_stop() 167 | 168 | # Wait for threads to finish. 169 | coord.join(threads) 170 | sess.close() 171 | 172 | mean_ac = np.mean(np.asarray(ac_list)) 173 | mean_loss = np.mean(np.asarray(loss_list)) 174 | 175 | line = '[step=%d] mean_ac=%.3f, mean_loss=%.8f' %(step, mean_ac*100, mean_loss) 176 | print(line) 177 | loss_file.write(line + '\n') 178 | 179 | 180 | 181 | def main(_): 182 | run_testing() 183 | 184 | 185 | if __name__ == '__main__': 186 | tf.app.run() 187 | -------------------------------------------------------------------------------- /mfb_pretrain_dis_val.py: -------------------------------------------------------------------------------- 1 | import os 2 | os.environ['TF_CPP_MIN_LOG_LEVEL']='1' 3 | from os import listdir 4 | 5 | import sys 6 | import time 7 | import tools.ops 8 | 9 | import numpy as np 10 | 11 | import tensorflow as tf 12 | import scipy.misc as sm 13 | 14 | from models.mfb_dis_net import * 15 | from tools.utilities import * 16 | from tools.ops import * 17 | 18 | 19 | flags = tf.app.flags 20 | flags.DEFINE_integer('batch_size', 5, 'Batch size.') 21 | flags.DEFINE_integer('num_epochs', 1, 'Number of epochs.') # ~13 min per epoch 22 | flags.DEFINE_integer('num_gpus', 1, 'Number of GPUs.') 23 | flags.DEFINE_integer('seq_length', 16, 'Length of each video clip.') 24 | flags.DEFINE_integer('height', 128, 'Height of video frame.') 25 | flags.DEFINE_integer('width', 128, 'Width of video frame.') 26 | flags.DEFINE_integer('channel', 3, 'Number of channels for each frame.') 27 | flags.DEFINE_integer('num_sample', 1240, 'Number of samples in this dataset.') 28 | flags.DEFINE_integer('num_class', 24, 'Number of classes to classify.') 29 | 30 | FLAGS = flags.FLAGS 31 | 32 | use_pretrained_model = True 33 | save_predictions = True 34 | use_pretrained_encoder = True 35 | encoder_gradient_ratio = 1.0 36 | 37 | prefix = 'DIS_ae_mfb_baseline' + '_finetune=' + str(encoder_gradient_ratio) 38 | model_save_dir = './ckpt/' + prefix 39 | loss_save_dir = './loss' 40 | val_list_path = './dataset/vallist.txt' 41 | dataset_path = './dataset/UCF-101-tf-records' 42 | 43 | 44 | def run_validating(): 45 | 46 | # Create model directory 47 | if not os.path.exists(model_save_dir): 48 | os.makedirs(model_save_dir) 49 | model_filename = "./mfb_dis_ucf24.model" 50 | 51 | tower_grads, tower_ac = [], [] 52 | tower_losses, tower_ac_losses, tower_wd_losses = [], [], [] 53 | 54 | global_step = tf.get_variable( 55 | 'global_step', 56 | [], 57 | initializer=tf.constant_initializer(0), 58 | trainable=False 59 | ) 60 | starter_learning_rate = 1e-4 61 | learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, 62 | 1000000, 0.8, staircase=True) 63 | opt = tf.train.AdamOptimizer(learning_rate) 64 | 65 | # Create a session for running Ops on the Graph. 
66 | config = tf.ConfigProto(allow_soft_placement=True) 67 | sess = tf.Session(config=config) 68 | coord = tf.train.Coordinator() 69 | threads = None 70 | 71 | val_list_file = open(val_list_path, 'r') 72 | val_list = val_list_file.read().splitlines() 73 | for i, line in enumerate(val_list): 74 | val_list[i] = os.path.join(dataset_path, val_list[i]) 75 | 76 | assert(len(val_list) % FLAGS.num_gpus == 0) 77 | num_for_each_gpu = len(val_list) // FLAGS.num_gpus 78 | 79 | clips_list, labels_list, texts_list = [], [], [] 80 | with sess.as_default(): 81 | for i in range(FLAGS.num_gpus): 82 | clips, labels, texts = input_pipeline_dis(val_list[i*num_for_each_gpu:(i+1)*num_for_each_gpu], \ 83 | FLAGS.batch_size, num_epochs=FLAGS.num_epochs, is_training=False) 84 | clips_list.append(clips) 85 | labels_list.append(labels) 86 | texts_list.append(texts) 87 | 88 | mfb_list = [] 89 | with tf.variable_scope('vars') as var_scope: 90 | for gpu_index in range(FLAGS.num_gpus): 91 | with tf.device('/gpu:%d' % (gpu_index)): 92 | with tf.name_scope('%s_%d' % ('tower', gpu_index)) as scope: 93 | 94 | # construct model 95 | mfb = mfb_dis_net(clips_list[gpu_index], labels_list[gpu_index], FLAGS.num_class, FLAGS.height, \ 96 | FLAGS.width, FLAGS.seq_length, FLAGS.channel, FLAGS.batch_size, is_training=False) 97 | mfb_list.append(mfb) 98 | loss, ac_loss, wd_loss = tower_loss(scope, mfb, use_pretrained_encoder, encoder_gradient_ratio) 99 | 100 | var_scope.reuse_variables() 101 | 102 | vars_to_optimize = tf.trainable_variables() 103 | grads = opt.compute_gradients(loss, var_list=vars_to_optimize) 104 | 105 | tower_grads.append(grads) 106 | tower_losses.append(loss) 107 | tower_ac_losses.append(ac_loss) 108 | tower_wd_losses.append(wd_loss) 109 | tower_ac.append(mfb.ac) 110 | 111 | 112 | # concatenate the losses of all towers 113 | loss_op = tf.reduce_mean(tower_losses) 114 | ac_loss_op = tf.reduce_mean(tower_ac_losses) 115 | wd_loss_op = tf.reduce_mean(tower_wd_losses) 116 | ac_op = tf.reduce_mean(tower_ac) 117 | 118 | tf.summary.scalar('loss', loss_op) 119 | tf.summary.scalar('ac_loss', ac_loss_op) 120 | tf.summary.scalar('ac', ac_op) 121 | tf.summary.scalar('wd_loss', wd_loss_op) 122 | 123 | # saver for saving checkpoints 124 | saver = tf.train.Saver(max_to_keep=10) 125 | init = tf.initialize_all_variables() 126 | 127 | sess.run(init) 128 | if not os.path.exists(model_save_dir): 129 | os.makedirs(model_save_dir) 130 | if use_pretrained_model: 131 | print('[*] Loading checkpoint ...') 132 | model = tf.train.latest_checkpoint(model_save_dir) 133 | if model is not None: 134 | saver.restore(sess, model) 135 | print('[*] Loading success: %s!'%model) 136 | else: 137 | print('[*] Loading failed ...') 138 | 139 | # Create loss output folder 140 | if not os.path.exists(loss_save_dir): 141 | os.makedirs(loss_save_dir) 142 | loss_file = open(os.path.join(loss_save_dir, prefix+'_val.txt'), 'a+') 143 | 144 | total_steps = (FLAGS.num_sample / (FLAGS.num_gpus * FLAGS.batch_size)) * FLAGS.num_epochs 145 | 146 | # start queue runner 147 | coord = tf.train.Coordinator() 148 | threads = tf.train.start_queue_runners(sess=sess, coord=coord) 149 | 150 | ac_list, loss_list = [], [] 151 | step = 0 152 | try: 153 | with sess.as_default(): 154 | print('\n\n\n*********** start validating ***********\n\n\n') 155 | step = global_step.eval() 156 | print('[step = %d]'%step) 157 | while not coord.should_stop(): 158 | # Run training steps or whatever 159 | ac, ac_loss = sess.run([ac_op, ac_loss_op]) 160 | ac_list.append(ac) 161 | 
loss_list.append(ac_loss) 162 | print('ac=%.3f, loss=%.8f' %(ac*100, ac_loss)) 163 | 164 | 165 | except tf.errors.OutOfRangeError: 166 | print('Done training -- epoch limit reached') 167 | finally: 168 | # When done, ask the threads to stop. 169 | coord.request_stop() 170 | 171 | # Wait for threads to finish. 172 | coord.join(threads) 173 | sess.close() 174 | 175 | mean_ac = np.mean(np.asarray(ac_list)) 176 | mean_loss = np.mean(np.asarray(loss_list)) 177 | 178 | line = '[step=%d] mean_ac=%.3f, mean_loss=%.8f' %(step, mean_ac*100, mean_loss) 179 | print(line) 180 | loss_file.write(line + '\n') 181 | 182 | 183 | 184 | def main(_): 185 | run_validating() 186 | 187 | 188 | if __name__ == '__main__': 189 | tf.app.run() -------------------------------------------------------------------------------- /mfb_cross_val.py: -------------------------------------------------------------------------------- 1 | import os 2 | os.environ['TF_CPP_MIN_LOG_LEVEL']='1' 3 | from os import listdir 4 | 5 | import sys 6 | import time 7 | import argparse 8 | import tools.ops 9 | 10 | import numpy as np 11 | 12 | import tensorflow as tf 13 | import scipy.misc as sm 14 | 15 | from models.mfb_net_cross import * 16 | from tools.utilities import * 17 | from tools.ops import * 18 | 19 | parser = argparse.ArgumentParser() 20 | parser.add_argument('-lr', dest='lr', type=float, default='1e-4', help='original learning rate') 21 | args = parser.parse_args() 22 | 23 | flags = tf.app.flags 24 | flags.DEFINE_float('lr', args.lr, 'Original learning rate.') 25 | flags.DEFINE_integer('batch_size', 5, 'Batch size.') 26 | flags.DEFINE_integer('num_epochs', 1, 'Number of epochs.') # ~13 min per epoch 27 | flags.DEFINE_integer('num_gpus', 4, 'Number of GPUs.') 28 | flags.DEFINE_integer('seq_length', 16, 'Length of each video clip.') 29 | flags.DEFINE_integer('height', 128, 'Height of video frame.') 30 | flags.DEFINE_integer('width', 128, 'Width of video frame.') 31 | flags.DEFINE_integer('channel', 3, 'Number of channels for each frame.') 32 | flags.DEFINE_integer('num_sample', 1240, 'Number of samples in this dataset.') 33 | flags.DEFINE_float('wd', 0.001, 'Weight decay rate.') 34 | 35 | FLAGS = flags.FLAGS 36 | 37 | prefix = 'mfb_cross' 38 | model_save_dir = './ckpt/' + prefix 39 | loss_save_dir = './loss' 40 | val_list_path = './dataset/vallist.txt' 41 | dataset_path = './dataset/UCF-101-tf-records' 42 | 43 | use_pretrained_model = True 44 | save_predictions = True 45 | 46 | 47 | def run_validation(): 48 | 49 | # Create model directory 50 | if not os.path.exists(model_save_dir): 51 | os.makedirs(model_save_dir) 52 | model_filename = "./mfb_baseline_ucf24.model" 53 | 54 | tower_ffg_losses, tower_fbg_losses, tower_lfg_losses, tower_feat_losses = [], [], [], [] 55 | tower_ffg_m_losses, tower_fbg_m_losses, tower_lfg_m_losses = [], [], [] 56 | 57 | global_step = tf.get_variable( 58 | 'global_step', 59 | [], 60 | initializer=tf.constant_initializer(0), 61 | trainable=False 62 | ) 63 | starter_learning_rate = 1e-4 64 | learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, 65 | 100000000, 0.5, staircase=True) 66 | opt = tf.train.AdamOptimizer(learning_rate) 67 | 68 | # Create a session for running Ops on the Graph. 
69 | config = tf.ConfigProto(allow_soft_placement=True) 70 | sess = tf.Session(config=config) 71 | coord = tf.train.Coordinator() 72 | threads = None 73 | 74 | val_list_file = open(val_list_path, 'r') 75 | val_list = val_list_file.read().splitlines() 76 | for i, line in enumerate(val_list): 77 | val_list[i] = os.path.join(dataset_path, val_list[i]) 78 | 79 | assert(len(val_list) % FLAGS.num_gpus == 0) 80 | num_for_each_gpu = len(val_list) // FLAGS.num_gpus 81 | 82 | clips_list, img_masks_list, loss_masks_list = [], [], [] 83 | with sess.as_default(): 84 | for i in range(FLAGS.num_gpus): 85 | clips, img_masks, loss_masks = input_pipeline(val_list[i*num_for_each_gpu:(i+1)*num_for_each_gpu], \ 86 | FLAGS.batch_size, read_threads=1, num_epochs=FLAGS.num_epochs, is_training=False) 87 | clips_list.append(clips) 88 | img_masks_list.append(img_masks) 89 | loss_masks_list.append(loss_masks) 90 | 91 | mfb_list = [] 92 | with tf.variable_scope('vars') as var_scope: 93 | for gpu_index in range(FLAGS.num_gpus): 94 | with tf.device('/gpu:%d' % (gpu_index)): 95 | with tf.name_scope('%s_%d' % ('tower', gpu_index)) as scope: 96 | 97 | # construct model 98 | mfb = mfb_net(clips_list[gpu_index], FLAGS.height, FLAGS.width, FLAGS.seq_length, \ 99 | FLAGS.channel, FLAGS.batch_size, is_training=False) 100 | mfb_list.append(mfb) 101 | _, first_fg_loss, first_bg_loss, last_fg_loss, feat_loss, _ = \ 102 | tower_loss(scope, mfb, clips_list[gpu_index], img_masks_list[gpu_index], loss_masks_list[gpu_index]) 103 | 104 | var_scope.reuse_variables() 105 | 106 | tower_ffg_losses.append(first_fg_loss) 107 | tower_fbg_losses.append(first_bg_loss) 108 | tower_lfg_losses.append(last_fg_loss) 109 | tower_feat_losses.append(feat_loss) 110 | 111 | 112 | # concatenate the losses of all towers 113 | ffg_loss_op = tf.reduce_mean(tower_ffg_losses) 114 | fbg_loss_op = tf.reduce_mean(tower_fbg_losses) 115 | lfg_loss_op = tf.reduce_mean(tower_lfg_losses) 116 | feat_loss_op = tf.reduce_mean(tower_feat_losses) 117 | 118 | # saver for saving checkpoints 119 | saver = tf.train.Saver() 120 | init = tf.initialize_all_variables() 121 | 122 | sess.run(init) 123 | if not os.path.exists(model_save_dir): 124 | os.makedirs(model_save_dir) 125 | if use_pretrained_model: 126 | print('[*] Loading checkpoint ...') 127 | model = tf.train.latest_checkpoint(model_save_dir) 128 | if model is not None: 129 | saver.restore(sess, model) 130 | print('[*] Loading success: %s!'%model) 131 | else: 132 | print('[*] Loading failed ...') 133 | 134 | # Create loss output folder 135 | if not os.path.exists(loss_save_dir): 136 | os.makedirs(loss_save_dir) 137 | loss_file = open(os.path.join(loss_save_dir, prefix+'_val.txt'), 'a+') 138 | 139 | total_steps = (FLAGS.num_sample / (FLAGS.num_gpus * FLAGS.batch_size)) * FLAGS.num_epochs 140 | 141 | # start queue runner 142 | coord = tf.train.Coordinator() 143 | threads = tf.train.start_queue_runners(sess=sess, coord=coord) 144 | 145 | ffg_loss_list, fbg_loss_list, lfg_loss_list, feat_loss_list = [], [], [], [] 146 | try: 147 | with sess.as_default(): 148 | print('\n\n\n*********** start validating ***********\n\n\n') 149 | step = global_step.eval() 150 | print('[step = %d]'%step) 151 | while not coord.should_stop(): 152 | # Run inference steps 153 | ffg_loss, fbg_loss, lfg_loss, feat_loss = \ 154 | sess.run([ffg_loss_op, fbg_loss_op, lfg_loss_op, feat_loss_op]) 155 | ffg_loss_list.append(ffg_loss) 156 | fbg_loss_list.append(fbg_loss) 157 | lfg_loss_list.append(lfg_loss) 158 | feat_loss_list.append(feat_loss) 159 | 
print('ffg_loss=%.8f, fbg_loss=%.8f, lfg_loss=%.8f, feat_loss=%.8f' \ 160 | %(ffg_loss, fbg_loss, lfg_loss, feat_loss)) 161 | 162 | except tf.errors.OutOfRangeError: 163 | print('Done training -- epoch limit reached') 164 | finally: 165 | # When done, ask the threads to stop. 166 | coord.request_stop() 167 | 168 | # Wait for threads to finish. 169 | coord.join(threads) 170 | sess.close() 171 | 172 | mean_ffg = np.mean(np.asarray(ffg_loss_list)) 173 | mean_fbg = np.mean(np.asarray(fbg_loss_list)) 174 | mean_lfg = np.mean(np.asarray(lfg_loss_list)) 175 | mean_feat = np.mean(np.asarray(feat_loss_list)) 176 | 177 | line = '[step=%d] ffg_loss=%.8f, fbg_loss=%.8f, lfg_loss=%.8f, feat_loss=%.8f' \ 178 | %(step, mean_ffg, mean_fbg, mean_lfg, mean_feat) 179 | print(line) 180 | loss_file.write(line + '\n') 181 | 182 | 183 | 184 | def main(_): 185 | run_validation() 186 | 187 | 188 | if __name__ == '__main__': 189 | tf.app.run() -------------------------------------------------------------------------------- /mfb_cross_test.py: -------------------------------------------------------------------------------- 1 | import os 2 | os.environ['TF_CPP_MIN_LOG_LEVEL']='1' 3 | from os import listdir 4 | 5 | import sys 6 | import time 7 | import tools.ops 8 | 9 | import numpy as np 10 | 11 | import tensorflow as tf 12 | import scipy.misc as sm 13 | 14 | from models.mfb_net_cross import * 15 | from tools.utilities import * 16 | from tools.ops import * 17 | 18 | 19 | flags = tf.app.flags 20 | flags.DEFINE_integer('batch_size', 1, 'Batch size.') 21 | flags.DEFINE_integer('num_epochs', 1, 'Number of epochs.') # ~13 min per epoch 22 | flags.DEFINE_integer('num_gpus', 1, 'Number of GPUs.') 23 | flags.DEFINE_integer('seq_length', 16, 'Length of each video clip.') 24 | flags.DEFINE_integer('height', 128, 'Height of video frame.') 25 | flags.DEFINE_integer('width', 128, 'Width of video frame.') 26 | flags.DEFINE_integer('channel', 3, 'Number of channels for each frame.') 27 | 28 | FLAGS = flags.FLAGS 29 | 30 | prefix = 'mfb_cross' 31 | model_save_dir = './ckpt/' + prefix 32 | loss_save_dir = './loss' 33 | test_save_dir = './test/' + prefix 34 | test_list_path = './dataset/testlist.txt' 35 | dataset_path = './dataset/UCF-101-tf-records' 36 | 37 | use_pretrained_model = True 38 | save_predictions = True 39 | 40 | 41 | def run_testing(): 42 | 43 | # Create model directory 44 | if not os.path.exists(model_save_dir): 45 | os.makedirs(model_save_dir) 46 | model_filename = "./mfb_baseline_ucf24.model" 47 | 48 | tower_ffg_losses, tower_fbg_losses, tower_lfg_losses, tower_feat_losses = [], [], [], [] 49 | tower_ffg_m_losses, tower_fbg_m_losses, tower_lfg_m_losses = [], [], [] 50 | 51 | global_step = tf.get_variable( 52 | 'global_step', 53 | [], 54 | initializer=tf.constant_initializer(0), 55 | trainable=False 56 | ) 57 | starter_learning_rate = 1e-4 58 | learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, 59 | 100000000, 0.5, staircase=True) 60 | opt = tf.train.AdamOptimizer(learning_rate) 61 | 62 | # Create a session for running Ops on the Graph. 
63 | config = tf.ConfigProto(allow_soft_placement=True) 64 | sess = tf.Session(config=config) 65 | coord = tf.train.Coordinator() 66 | threads = None 67 | 68 | test_list_file = open(test_list_path, 'r') 69 | test_list = test_list_file.read().splitlines() 70 | for i, line in enumerate(test_list): 71 | test_list[i] = os.path.join(dataset_path, test_list[i]) 72 | 73 | assert(len(test_list) % FLAGS.num_gpus == 0) 74 | num_for_each_gpu = len(test_list) // FLAGS.num_gpus 75 | 76 | clips_list, img_masks_list, loss_masks_list = [], [], [] 77 | with sess.as_default(): 78 | for i in range(FLAGS.num_gpus): 79 | clips, img_masks, loss_masks = input_pipeline(test_list[i*num_for_each_gpu:(i+1)*num_for_each_gpu], \ 80 | FLAGS.batch_size, read_threads=1, num_epochs=FLAGS.num_epochs, is_training=False) 81 | clips_list.append(clips) 82 | img_masks_list.append(img_masks) 83 | loss_masks_list.append(loss_masks) 84 | 85 | mfb_list = [] 86 | with tf.variable_scope('vars') as var_scope: 87 | for gpu_index in range(FLAGS.num_gpus): 88 | with tf.device('/gpu:%d' % (gpu_index)): 89 | with tf.name_scope('%s_%d' % ('tower', gpu_index)) as scope: 90 | 91 | # construct model 92 | mfb = mfb_net(clips_list[gpu_index], FLAGS.height, FLAGS.width, FLAGS.seq_length, \ 93 | FLAGS.channel, FLAGS.batch_size, is_training=False) 94 | mfb_list.append(mfb) 95 | _, first_fg_loss, first_bg_loss, last_fg_loss, feat_loss, _ = \ 96 | tower_loss(scope, mfb, clips_list[gpu_index], img_masks_list[gpu_index], loss_masks_list[gpu_index]) 97 | 98 | var_scope.reuse_variables() 99 | 100 | tower_ffg_losses.append(first_fg_loss) 101 | tower_fbg_losses.append(first_bg_loss) 102 | tower_lfg_losses.append(last_fg_loss) 103 | tower_feat_losses.append(feat_loss) 104 | 105 | 106 | # concatenate the losses of all towers 107 | ffg_loss_op = tf.reduce_mean(tower_ffg_losses) 108 | fbg_loss_op = tf.reduce_mean(tower_fbg_losses) 109 | lfg_loss_op = tf.reduce_mean(tower_lfg_losses) 110 | feat_loss_op = tf.reduce_mean(tower_feat_losses) 111 | 112 | # saver for saving checkpoints 113 | saver = tf.train.Saver() 114 | init = tf.initialize_all_variables() 115 | 116 | sess.run(init) 117 | if not os.path.exists(model_save_dir): 118 | os.makedirs(model_save_dir) 119 | if use_pretrained_model: 120 | print('[*] Loading checkpoint ...') 121 | model = tf.train.latest_checkpoint(model_save_dir) 122 | if model is not None: 123 | saver.restore(sess, model) 124 | print('[*] Loading success: %s!'%model) 125 | else: 126 | print('[*] Loading failed ...') 127 | 128 | # Create loss output folder 129 | if not os.path.exists(loss_save_dir): 130 | os.makedirs(loss_save_dir) 131 | loss_file = open(os.path.join(loss_save_dir, prefix+'_test.txt'), 'a+') 132 | 133 | # Create test output folder 134 | if not os.path.exists(test_save_dir): 135 | os.makedirs(test_save_dir) 136 | 137 | # start queue runner 138 | coord = tf.train.Coordinator() 139 | threads = tf.train.start_queue_runners(sess=sess, coord=coord) 140 | 141 | ffg_loss_list, fbg_loss_list, lfg_loss_list, feat_loss_list = [], [], [], [] 142 | try: 143 | with sess.as_default(): 144 | print('\n\n\n*********** start testing ***********\n\n\n') 145 | step = global_step.eval() 146 | print('[step = %d]'%step) 147 | cnt = 0 148 | while not coord.should_stop(): 149 | # Run training steps or whatever 150 | mfb = mfb_list[0] 151 | ffg, fbg, lfg, gt_ffg, gt_fbg, gt_lfg, ffg_loss, fbg_loss, lfg_loss, feat_loss = sess.run([ 152 | mfb.first_fg_rec, mfb.first_bg_rec, mfb.last_fg_rec, \ 153 | mfb.gt_ffg, mfb.gt_fbg, mfb.gt_lfg, ffg_loss_op, 
fbg_loss_op, lfg_loss_op, feat_loss_op]) 154 | 155 | ffg, fbg, lfg, gt_ffg, gt_fbg, gt_lfg = \ 156 | ffg[0], fbg[0], lfg[0], gt_ffg[0], gt_fbg[0], gt_lfg[0] 157 | 158 | ffg, fbg, lfg = (ffg+1)/2*255.0, (fbg+1)/2*255.0, (lfg+1)/2*255.0 159 | gt_ffg, gt_fbg, gt_lfg = (gt_ffg+1)/2*255.0, (gt_fbg+1)/2*255.0, (gt_lfg+1)/2*255.0 160 | 161 | img = gen_pred_img(ffg, fbg, lfg) 162 | gt = gen_pred_img(gt_ffg, gt_fbg, gt_lfg) 163 | save_img = np.concatenate((img, gt)) 164 | sm.imsave(os.path.join(test_save_dir, '%05d.jpg'%cnt), save_img) 165 | 166 | ffg_loss_list.append(ffg_loss) 167 | fbg_loss_list.append(fbg_loss) 168 | lfg_loss_list.append(lfg_loss) 169 | feat_loss_list.append(feat_loss) 170 | 171 | line = '%05d: ffg_loss=%.8f, fbg_loss=%.8f, lfg_loss=%.8f, feat_loss=%.8f' \ 172 | %(cnt, ffg_loss, fbg_loss, lfg_loss, feat_loss) 173 | loss_file.write(line + '\n') 174 | loss_file.flush() 175 | print(line) 176 | cnt += 1 177 | 178 | except tf.errors.OutOfRangeError: 179 | print('Done training -- epoch limit reached') 180 | finally: 181 | # When done, ask the threads to stop. 182 | coord.request_stop() 183 | 184 | # Wait for threads to finish. 185 | coord.join(threads) 186 | sess.close() 187 | 188 | mean_ffg = np.mean(np.asarray(ffg_loss_list)) 189 | mean_fbg = np.mean(np.asarray(fbg_loss_list)) 190 | mean_lfg = np.mean(np.asarray(lfg_loss_list)) 191 | mean_feat = np.mean(np.asarray(feat_loss_list)) 192 | 193 | line = '[step=%d] ffg_loss=%.8f, fbg_loss=%.8f, lfg_loss=%.8f, feat_loss=%.8f' \ 194 | %(step, mean_ffg, mean_fbg, mean_lfg, mean_feat) 195 | print(line) 196 | loss_file.write(line + '\n') 197 | 198 | 199 | 200 | def main(_): 201 | run_testing() 202 | 203 | 204 | if __name__ == '__main__': 205 | tf.app.run() -------------------------------------------------------------------------------- /models/autoencoder_net.py: -------------------------------------------------------------------------------- 1 | 2 | import tensorflow as tf 3 | 4 | from tools.ops import * 5 | 6 | class autoencoder_net(object): 7 | 8 | def __init__(self, input_, height=128, width=128, seq_length=16, c_dim=3, batch_size=32, is_training=True): 9 | 10 | self.seq = input_ 11 | self.batch_size = batch_size 12 | self.height = height 13 | self.width = width 14 | self.seq_length = seq_length 15 | self.c_dim = c_dim 16 | 17 | self.seq_shape = [seq_length, height, width, c_dim] 18 | 19 | self.batch_norm_params = { 20 | 'is_training': is_training, 21 | 'decay': 0.9, 22 | 'epsilon': 1e-5, 23 | 'scale': True, 24 | 'center': True, 25 | 'updates_collections': tf.GraphKeys.UPDATE_OPS 26 | } 27 | 28 | self.build_model() 29 | 30 | 31 | def build_model(self): 32 | 33 | c3d_feat = self.mapping_layer(self.c3d(self.seq)) 34 | 35 | self.rec_vid = self.decoder(c3d_feat) 36 | 37 | 38 | def bn(self, x): 39 | return tf.contrib.layers.batch_norm(x, **self.batch_norm_params) 40 | 41 | 42 | def mapping_layer(self, input_, name='mapping'): 43 | with tf.variable_scope(name): 44 | feat = relu(self.bn(conv3d(input_, self.map_dim, k_t=self.map_length, k_h=2, k_w=2, d_h=2, d_w=2, padding='VALID', name='mapping1'))) 45 | 46 | return feat 47 | 48 | 49 | def decoder(self, input_, name='decoder'): 50 | # mirror decoder of c3d 51 | with tf.variable_scope(name): 52 | 53 | deconv6a = relu(self.bn(deconv3d(input_, 54 | output_shape=[self.batch_size,self.map_length,self.map_height,self.map_width,self.map_dim], 55 | k_t=2,k_h=2,k_w=2,d_t=2,d_h=2,d_w=2,padding='SAME',name='deconv6a'))) 56 | 57 | deconv5b = relu(self.bn(deconv3d(deconv6a, 58 | 
output_shape=[self.batch_size,self.map_length,self.map_height,self.map_width,self.map_dim], 59 | k_t=3,k_h=3,k_w=3,d_t=1,d_h=1,d_w=1,padding='SAME',name='deconv5b'))) 60 | deconv5a = relu(self.bn(deconv3d(deconv5b, 61 | output_shape=[self.batch_size,self.map_length,self.map_height,self.map_width,self.map_dim], 62 | k_t=3,k_h=3,k_w=3,d_t=1,d_h=1,d_w=1,padding='SAME',name='deconv5a'))) 63 | unpool4 = relu(self.bn(deconv3d(deconv5a, 64 | output_shape=[self.batch_size,self.map_length*2,self.map_height*2,self.map_width*2,self.map_dim], 65 | k_t=3,k_h=3,k_w=3,d_t=2,d_h=2,d_w=2,padding='SAME',name='unpool4'))) 66 | 67 | deconv4b = relu(self.bn(deconv3d(unpool4, 68 | output_shape=[self.batch_size,self.map_length*2,self.map_height*2,self.map_width*2,self.map_dim], 69 | k_t=3,k_h=3,k_w=3,d_t=1,d_h=1,d_w=1,padding='SAME',name='deconv4b'))) 70 | deconv4a = relu(self.bn(deconv3d(deconv4b, 71 | output_shape=[self.batch_size,self.map_length*2,self.map_height*2,self.map_width*2,self.map_dim//2], 72 | k_t=3,k_h=3,k_w=3,d_t=1,d_h=1,d_w=1,padding='SAME',name='deconv4a'))) 73 | unpool3 = relu(self.bn(deconv3d(deconv4a, 74 | output_shape=[self.batch_size,self.map_length*4,self.map_height*4,self.map_width*4,self.map_dim//2], 75 | k_t=3,k_h=3,k_w=3,d_t=2,d_h=2,d_w=2,padding='SAME',name='unpool3'))) 76 | 77 | deconv3b = relu(self.bn(deconv3d(unpool3, 78 | output_shape=[self.batch_size,self.map_length*4,self.map_height*4,self.map_width*4,self.map_dim//2], 79 | k_t=3,k_h=3,k_w=3,d_t=1,d_h=1,d_w=1,padding='SAME',name='deconv3b'))) 80 | deconv3a = relu(self.bn(deconv3d(deconv3b, 81 | output_shape=[self.batch_size,self.map_length*4,self.map_height*4,self.map_width*4,self.map_dim//4], 82 | k_t=3,k_h=3,k_w=3,d_t=1,d_h=1,d_w=1,padding='SAME',name='deconv3a'))) 83 | unpool2 = relu(self.bn(deconv3d(deconv3a, 84 | output_shape=[self.batch_size,self.map_length*8,self.map_height*8,self.map_width*8,self.map_dim//4], 85 | k_t=3,k_h=3,k_w=3,d_t=2,d_h=2,d_w=2,padding='SAME',name='unpool2'))) 86 | 87 | deconv2a = relu(self.bn(deconv3d(unpool2, 88 | output_shape=[self.batch_size,self.map_length*8,self.map_height*8,self.map_width*8,self.map_dim//8], 89 | k_t=3,k_h=3,k_w=3,d_t=1,d_h=1,d_w=1,padding='SAME',name='deconv2a'))) 90 | unpool1 = relu(self.bn(deconv3d(deconv2a, 91 | output_shape=[self.batch_size,self.map_length*8,self.map_height*16,self.map_width*16,self.map_dim//8], 92 | k_t=3,k_h=3,k_w=3,d_t=1,d_h=2,d_w=2,padding='SAME',name='unpool1'))) 93 | 94 | deconv1a = deconv3d(unpool1, 95 | output_shape=[self.batch_size,self.map_length*8,self.map_height*16,self.map_width*16,self.c_dim], 96 | k_t=3,k_h=3,k_w=3,d_t=1,d_h=1,d_w=1,padding='SAME',name='deconv1a') 97 | 98 | vid = tf.tanh(deconv1a) 99 | 100 | return vid 101 | 102 | 103 | def c3d(self, input_, _dropout=1.0, name='c3d'): 104 | 105 | with tf.variable_scope(name): 106 | 107 | # Convolution Layer 108 | conv1 = relu(self.bn(conv3d(input_, 64, name='conv1'))) 109 | pool1 = max_pool3d(conv1, k=1, name='pool1') 110 | 111 | # Convolution Layer 112 | conv2 = relu(self.bn(conv3d(pool1, 128, name='conv2'))) 113 | pool2 = max_pool3d(conv2, k=2, name='pool2') 114 | 115 | # Convolution Layer 116 | conv3 = relu(self.bn(conv3d(pool2, 256, name='conv3a'))) 117 | conv3 = relu(self.bn(conv3d(conv3, 256, name='conv3b'))) 118 | pool3 = max_pool3d(conv3, k=2, name='pool3') 119 | 120 | # Convolution Layer 121 | conv4 = relu(self.bn(conv3d(pool3, 512, name='conv4a'))) 122 | conv4 = relu(self.bn(conv3d(conv4, 512, name='conv4b'))) 123 | pool4 = max_pool3d(conv4, k=2, name='pool4') 124 | 125 | # 
Convolution Layer 126 | conv5 = relu(self.bn(conv3d(pool4, 512, name='conv5a'))) 127 | conv5 = relu(self.bn(conv3d(conv5, 512, name='conv5b'))) 128 | 129 | conv5_shape = conv5.get_shape().as_list() 130 | self.map_length = conv5_shape[1] 131 | self.map_height = conv5_shape[2] 132 | self.map_width = conv5_shape[3] 133 | self.map_dim = conv5_shape[4] 134 | 135 | feature = conv5 136 | 137 | return feature 138 | 139 | 140 | def tower_loss(name_scope, autoencoder, clips): 141 | # calculate reconstruction loss 142 | rec_loss = tf.reduce_mean(tf.abs(clips-autoencoder.rec_vid)) 143 | 144 | weight_decay_loss_list = tf.get_collection('losses', name_scope) 145 | weight_decay_loss = 0.0 146 | if len(weight_decay_loss_list) > 0: 147 | weight_decay_loss = tf.add_n(weight_decay_loss_list) 148 | 149 | tf.add_to_collection('losses', rec_loss) 150 | losses = tf.get_collection('losses', name_scope) 151 | 152 | # Calculate the total loss for the current tower. 153 | total_loss = tf.add_n(losses, name='total_loss') 154 | 155 | return total_loss, rec_loss, weight_decay_loss -------------------------------------------------------------------------------- /autoencoder_train.py: -------------------------------------------------------------------------------- 1 | import os 2 | os.environ['TF_CPP_MIN_LOG_LEVEL']='1' 3 | from os import listdir 4 | 5 | import sys 6 | import time 7 | import tools.ops 8 | import subprocess 9 | 10 | import numpy as np 11 | 12 | import tensorflow as tf 13 | import scipy.misc as sm 14 | 15 | from models.autoencoder_net import * 16 | from tools.utilities import * 17 | from tools.ops import * 18 | from random import randint 19 | 20 | 21 | flags = tf.app.flags 22 | flags.DEFINE_integer('batch_size', 10, 'Batch size.') 23 | flags.DEFINE_integer('num_epochs', 2000, 'Number of epochs.') # ~13 min per epoch 24 | flags.DEFINE_integer('num_gpus', 4, 'Number of GPUs.') 25 | flags.DEFINE_integer('seq_length', 16, 'Length of each video clip.') 26 | flags.DEFINE_integer('height', 128, 'Height of video frame.') 27 | flags.DEFINE_integer('width', 128, 'Width of video frame.') 28 | flags.DEFINE_integer('channel', 3, 'Number of channels for each frame.') 29 | flags.DEFINE_integer('num_sample', 10060, 'Number of samples in this dataset.') 30 | 31 | FLAGS = flags.FLAGS 32 | 33 | prefix = 'autoencoder' 34 | model_save_dir = './ckpt/' + prefix 35 | logs_save_dir = './logs/' + prefix 36 | pred_save_dir = './output/' + prefix 37 | loss_save_dir = './loss' 38 | train_list_path = './dataset/trainlist.txt' 39 | dataset_path = './dataset/UCF-101-tf-records' 40 | evaluation_job = './jobs/autoencoder_val' 41 | 42 | use_pretrained_model = True 43 | save_predictions = True 44 | 45 | 46 | def run_training(): 47 | 48 | # Create model directory 49 | if not os.path.exists(model_save_dir): 50 | os.makedirs(model_save_dir) 51 | model_filename = "./mfb_ae_ucf24.model" 52 | 53 | # Consturct computational graph 54 | tower_grads = [] 55 | tower_losses, tower_rec_losses, tower_wd_losses = [], [], [] 56 | 57 | global_step = tf.get_variable( 58 | 'global_step', 59 | [], 60 | initializer=tf.constant_initializer(0), 61 | trainable=False 62 | ) 63 | starter_learning_rate = 1e-4 64 | learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, 65 | 1000000, 0.8, staircase=True) 66 | opt = tf.train.AdamOptimizer(learning_rate) 67 | 68 | # Create a session for running Ops on the Graph. 
69 | config = tf.ConfigProto(allow_soft_placement=True) 70 | sess = tf.Session(config=config) 71 | coord = tf.train.Coordinator() 72 | threads = None 73 | 74 | train_list_file = open(train_list_path, 'r') 75 | train_list = train_list_file.read().splitlines() 76 | for i, line in enumerate(train_list): 77 | train_list[i] = os.path.join(dataset_path, train_list[i]) 78 | 79 | assert(len(train_list) % FLAGS.num_gpus == 0) 80 | num_for_each_gpu = len(train_list) // FLAGS.num_gpus 81 | 82 | clips_list = [] 83 | with sess.as_default(): 84 | for i in range(FLAGS.num_gpus): 85 | clips, _, _ = input_pipeline(train_list[i*num_for_each_gpu:(i+1)*num_for_each_gpu], \ 86 | FLAGS.batch_size, num_epochs=FLAGS.num_epochs, is_training=True) 87 | clips_list.append(clips) 88 | 89 | autoencoder_list = [] 90 | with tf.variable_scope('vars') as var_scope: 91 | for gpu_index in range(FLAGS.num_gpus): 92 | with tf.device('/gpu:%d' % (gpu_index)): 93 | with tf.name_scope('%s_%d' % ('tower', gpu_index)) as scope: 94 | 95 | # construct model 96 | autoencoder = autoencoder_net(clips_list[gpu_index], FLAGS.height, FLAGS.width, FLAGS.seq_length, \ 97 | FLAGS.channel, FLAGS.batch_size) 98 | autoencoder_list.append(autoencoder) 99 | loss, rec_loss, wd_loss = tower_loss(scope, autoencoder, clips_list[gpu_index]) 100 | 101 | var_scope.reuse_variables() 102 | 103 | vars_to_optimize = tf.trainable_variables() 104 | grads = opt.compute_gradients(loss, var_list=vars_to_optimize) 105 | 106 | tower_grads.append(grads) 107 | tower_losses.append(loss) 108 | tower_rec_losses.append(rec_loss) 109 | tower_wd_losses.append(wd_loss) 110 | 111 | # concatenate the losses of all towers 112 | loss_op = tf.reduce_mean(tower_losses) 113 | rec_loss_op = tf.reduce_mean(tower_rec_losses) 114 | wd_loss_op = tf.reduce_mean(tower_wd_losses) 115 | 116 | tf.summary.scalar('loss', loss_op) 117 | tf.summary.scalar('rec_loss', rec_loss_op) 118 | tf.summary.scalar('wd_loss', wd_loss_op) 119 | 120 | update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) 121 | grads = average_gradients(tower_grads) 122 | with tf.control_dependencies(update_ops): 123 | train_op = opt.apply_gradients(grads, global_step=global_step) 124 | 125 | # saver for saving checkpoints 126 | saver = tf.train.Saver(max_to_keep=10) 127 | init = tf.initialize_all_variables() 128 | 129 | sess.run(init) 130 | if not os.path.exists(model_save_dir): 131 | os.makedirs(model_save_dir) 132 | if use_pretrained_model: 133 | print('[*] Loading checkpoint ...') 134 | model = tf.train.latest_checkpoint(model_save_dir) 135 | if model is not None: 136 | saver.restore(sess, model) 137 | print('[*] Loading success: %s!'%model) 138 | else: 139 | print('[*] Loading failed ...') 140 | 141 | # Create summary writer 142 | merged = tf.summary.merge_all() 143 | if not os.path.exists(logs_save_dir): 144 | os.makedirs(logs_save_dir) 145 | sum_writer = tf.summary.FileWriter(logs_save_dir, sess.graph) 146 | 147 | # Create prediction output folder 148 | if not os.path.exists(pred_save_dir): 149 | os.makedirs(pred_save_dir) 150 | 151 | # Create loss output folder 152 | if not os.path.exists(loss_save_dir): 153 | os.makedirs(loss_save_dir) 154 | loss_file = open(os.path.join(loss_save_dir, prefix+'.txt'), 'w') 155 | 156 | total_steps = (FLAGS.num_sample / (FLAGS.num_gpus * FLAGS.batch_size)) * FLAGS.num_epochs 157 | 158 | # start queue runner 159 | coord = tf.train.Coordinator() 160 | threads = tf.train.start_queue_runners(sess=sess, coord=coord) 161 | gpu_idx = 0 162 | 163 | try: 164 | with sess.as_default(): 165 | 
print('\n\n\n*********** start training ***********\n\n\n') 166 | while not coord.should_stop(): 167 | # Run training steps or whatever 168 | start_time = time.time() 169 | sess.run(train_op) 170 | duration = time.time() - start_time 171 | step = global_step.eval() 172 | 173 | if step == 1 or step % 10 == 0: # evaluate loss 174 | loss, rec_loss, wd_loss, lr = sess.run([loss_op, rec_loss_op, wd_loss_op, learning_rate]) 175 | line = 'step %d/%d, loss=%.8f, rec=%.8f, lwd=%.8f, dur=%.3f, lr=%.8f' \ 176 | %(step, total_steps, loss, rec_loss, wd_loss, duration, lr) 177 | print(line) 178 | loss_file.write(line + '\n') 179 | loss_file.flush() 180 | 181 | if step == 1 or step % 10 == 0: # save summary 182 | summary = summary_str = sess.run(merged) 183 | sum_writer.add_summary(summary, step) 184 | 185 | if step % 100 == 0 and save_predictions: # save current predictions 186 | clips = clips_list[gpu_idx] 187 | autoencoder = autoencoder_list[gpu_idx] 188 | gt_vid, rec_vid = sess.run([clips[0], autoencoder.rec_vid[0]]) 189 | gt_vid, rec_vid = (gt_vid+1)/2*255.0, (rec_vid+1)/2*255.0 190 | rec_img = gen_pred_vid(rec_vid) 191 | gt_img = gen_pred_vid(gt_vid) 192 | save_img = np.concatenate((rec_img, gt_img)) 193 | sm.imsave(os.path.join(pred_save_dir, '%07d.jpg'%step), save_img) 194 | 195 | gpu_idx += 1 196 | if gpu_idx == FLAGS.num_gpus: 197 | gpu_idx = 0 198 | 199 | if step % 500 == 0: # save checkpoint 200 | saver.save(sess, os.path.join(model_save_dir, model_filename), global_step=global_step) 201 | 202 | if step % 500 == 0: 203 | pass 204 | # launch a new script for validation (please modify it for your own script) 205 | #subprocess.check_output(['python', evaluation_job]) 206 | 207 | except tf.errors.OutOfRangeError: 208 | print('Done training -- epoch limit reached') 209 | finally: 210 | # When done, ask the threads to stop. 211 | coord.request_stop() 212 | 213 | # Wait for threads to finish. 
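# join() waits for every queue-runner thread to shut down after request_stop() before the session is closed.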
214 | coord.join(threads) 215 | sess.close() 216 | 217 | 218 | 219 | def main(_): 220 | run_training() 221 | 222 | 223 | if __name__ == '__main__': 224 | tf.app.run() 225 | -------------------------------------------------------------------------------- /mfb_pretrain_dis_train.py: -------------------------------------------------------------------------------- 1 | import os 2 | os.environ['TF_CPP_MIN_LOG_LEVEL']='1' 3 | from os import listdir 4 | 5 | import sys 6 | import time 7 | import tools.ops 8 | import subprocess 9 | 10 | import numpy as np 11 | 12 | import tensorflow as tf 13 | import scipy.misc as sm 14 | 15 | from models.mfb_dis_net import * 16 | from tools.utilities import * 17 | from tools.ops import * 18 | 19 | 20 | flags = tf.app.flags 21 | flags.DEFINE_integer('batch_size', 16, 'Batch size.') 22 | flags.DEFINE_integer('num_epochs', 2000, 'Number of epochs.') 23 | flags.DEFINE_integer('num_gpus', 4, 'Number of GPUs.') 24 | flags.DEFINE_integer('seq_length', 16, 'Length of each video clip.') 25 | flags.DEFINE_integer('height', 128, 'Height of video frame.') 26 | flags.DEFINE_integer('width', 128, 'Width of video frame.') 27 | flags.DEFINE_integer('channel', 3, 'Number of channels for each frame.') 28 | flags.DEFINE_integer('num_sample', 10060, 'Number of samples in this dataset.') 29 | flags.DEFINE_integer('num_class', 24, 'Number of classes to classify.') 30 | 31 | FLAGS = flags.FLAGS 32 | 33 | use_pretrained_model = True 34 | save_predictions = True 35 | use_pretrained_encoder = True 36 | encoder_gradient_ratio = 1.0 37 | 38 | prefix = 'DIS_ae_mfb_baseline' + '_finetune=' + str(encoder_gradient_ratio) 39 | model_save_dir = './ckpt/' + prefix 40 | logs_save_dir = './logs/' + prefix 41 | loss_save_dir = './loss' 42 | train_list_path = './dataset/trainlist.txt' 43 | dataset_path = './dataset/UCF-101-tf-records' 44 | evaluation_job = './jobs/mfb_pretrain_dis_val' 45 | encoder_path = './ckpt/mfb_cross/mfb_baseline_ucf24.model-50000' 46 | 47 | 48 | def run_training(): 49 | 50 | # Create model directory 51 | if not os.path.exists(model_save_dir): 52 | os.makedirs(model_save_dir) 53 | model_filename = "./mfb_dis_ucf24.model" 54 | 55 | tower_grads, tower_ac = [], [] 56 | tower_losses, tower_ac_losses, tower_wd_losses = [], [], [] 57 | 58 | global_step = tf.get_variable( 59 | 'global_step', 60 | [], 61 | initializer=tf.constant_initializer(0), 62 | trainable=False 63 | ) 64 | starter_learning_rate = 1e-4 65 | learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, 66 | 1000000, 0.8, staircase=True) 67 | opt = tf.train.AdamOptimizer(learning_rate) 68 | 69 | # Create a session for running Ops on the Graph. 
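# Note: the session is created this early because input_pipeline_dis below runs tf.local_variables_initializer() through the default session, which the epoch counter of the filename queue requires.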
70 | config = tf.ConfigProto(allow_soft_placement=True) 71 | sess = tf.Session(config=config) 72 | coord = tf.train.Coordinator() 73 | threads = None 74 | 75 | train_list_file = open(train_list_path, 'r') 76 | train_list = train_list_file.read().splitlines() 77 | for i, line in enumerate(train_list): 78 | train_list[i] = os.path.join(dataset_path, train_list[i]) 79 | 80 | assert(len(train_list) % FLAGS.num_gpus == 0) 81 | num_for_each_gpu = len(train_list) // FLAGS.num_gpus 82 | 83 | clips_list, labels_list, texts_list = [], [], [] 84 | with sess.as_default(): 85 | for i in range(FLAGS.num_gpus): 86 | clips, labels, texts = input_pipeline_dis(train_list[i*num_for_each_gpu:(i+1)*num_for_each_gpu], \ 87 | FLAGS.batch_size, num_epochs=FLAGS.num_epochs, is_training=True) 88 | clips_list.append(clips) 89 | labels_list.append(labels) 90 | texts_list.append(texts) 91 | 92 | mfb_list = [] 93 | with tf.variable_scope('vars') as var_scope: 94 | for gpu_index in range(FLAGS.num_gpus): 95 | with tf.device('/gpu:%d' % (gpu_index)): 96 | with tf.name_scope('%s_%d' % ('tower', gpu_index)) as scope: 97 | 98 | # construct model 99 | mfb = mfb_dis_net(clips_list[gpu_index], labels_list[gpu_index], FLAGS.num_class, FLAGS.height, FLAGS.width, \ 100 | FLAGS.seq_length, FLAGS.channel, FLAGS.batch_size, is_training=True, \ 101 | encoder_gradient_ratio=encoder_gradient_ratio, \ 102 | use_pretrained_encoder=use_pretrained_encoder) 103 | mfb_list.append(mfb) 104 | loss, ac_loss, wd_loss = tower_loss(scope, mfb, use_pretrained_encoder, encoder_gradient_ratio) 105 | 106 | var_scope.reuse_variables() 107 | 108 | vars_to_optimize = tf.trainable_variables() 109 | grads = opt.compute_gradients(loss, var_list=vars_to_optimize) 110 | 111 | tower_grads.append(grads) 112 | tower_losses.append(loss) 113 | tower_ac_losses.append(ac_loss) 114 | tower_wd_losses.append(wd_loss) 115 | tower_ac.append(mfb.ac) 116 | 117 | 118 | # concatenate the losses of all towers 119 | loss_op = tf.reduce_mean(tower_losses) 120 | ac_loss_op = tf.reduce_mean(tower_ac_losses) 121 | wd_loss_op = tf.reduce_mean(tower_wd_losses) 122 | ac_op = tf.reduce_mean(tower_ac) 123 | 124 | tf.summary.scalar('loss', loss_op) 125 | tf.summary.scalar('ac_loss', ac_loss_op) 126 | tf.summary.scalar('ac', ac_op) 127 | tf.summary.scalar('wd_loss', wd_loss_op) 128 | 129 | update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) 130 | grads = average_gradients_dis(tower_grads, encoder_gradient_ratio) 131 | with tf.control_dependencies(update_ops): 132 | train_op = opt.apply_gradients(grads, global_step=global_step) 133 | 134 | # saver for saving checkpoints 135 | saver = tf.train.Saver(max_to_keep=10) 136 | init = tf.initialize_all_variables() 137 | 138 | sess.run(init) 139 | load_res = False 140 | if not os.path.exists(model_save_dir): 141 | os.makedirs(model_save_dir) 142 | if use_pretrained_model: 143 | print('[*] Loading checkpoint ...') 144 | model = tf.train.latest_checkpoint(model_save_dir) 145 | if model is not None: 146 | saver.restore(sess, model) 147 | print('[*] Loading success: %s!'%model) 148 | load_res = True 149 | else: 150 | print('[*] Loading failed ...') 151 | 152 | if not load_res and use_pretrained_encoder: 153 | encoder_var_list = tf.global_variables() 154 | # filter out weights for encoder in the checkpoint 155 | encoder_var_list = [var for var in encoder_var_list if ('c3d' in var.name or 'vars/mapping' in var.name) \ 156 | and 'Adam' not in var.name] 157 | enc_saver = tf.train.Saver(var_list=encoder_var_list) 158 | enc_saver.restore(sess, 
encoder_path) 159 | print('[*] Loading success: %s!'%encoder_path) 160 | 161 | # Create summary writer 162 | merged = tf.summary.merge_all() 163 | if not os.path.exists(logs_save_dir): 164 | os.makedirs(logs_save_dir) 165 | sum_writer = tf.summary.FileWriter(logs_save_dir, sess.graph) 166 | 167 | # Create loss output folder 168 | if not os.path.exists(loss_save_dir): 169 | os.makedirs(loss_save_dir) 170 | loss_file = open(os.path.join(loss_save_dir, prefix+'.txt'), 'w') 171 | 172 | total_steps = (FLAGS.num_sample / (FLAGS.num_gpus * FLAGS.batch_size)) * FLAGS.num_epochs 173 | 174 | # start queue runner 175 | coord = tf.train.Coordinator() 176 | threads = tf.train.start_queue_runners(sess=sess, coord=coord) 177 | 178 | try: 179 | with sess.as_default(): 180 | print('\n\n\n*********** start training ***********\n\n\n') 181 | while not coord.should_stop(): 182 | # Run training steps 183 | start_time = time.time() 184 | sess.run(train_op) 185 | duration = time.time() - start_time 186 | step = global_step.eval() 187 | 188 | if step == 1 or step % 10 == 0: # evaluate loss 189 | loss, ac, ac_loss, wd_loss, lr = sess.run([loss_op, ac_op, ac_loss_op, wd_loss_op, learning_rate]) 190 | line = 'step %d/%d, loss=%.8f, ac=%.3f, lac=%.8f, lwd=%.8f, dur=%.3f, lr=%.8f' \ 191 | %(step, total_steps, loss, ac*100, ac_loss, wd_loss, duration, lr) 192 | print(line) 193 | loss_file.write(line + '\n') 194 | loss_file.flush() 195 | 196 | if step == 1 or step % 10 == 0: # save summary 197 | summary = summary_str = sess.run(merged) 198 | sum_writer.add_summary(summary, step) 199 | 200 | if step % 100 == 0: # save checkpoint 201 | saver.save(sess, os.path.join(model_save_dir, model_filename), global_step=global_step) 202 | 203 | if step % 100 == 0: # validate 204 | pass 205 | #subprocess.check_output(['python', evaluation_job]) 206 | 207 | 208 | except tf.errors.OutOfRangeError: 209 | print('Done training -- epoch limit reached') 210 | finally: 211 | # When done, ask the threads to stop. 212 | coord.request_stop() 213 | 214 | # Wait for threads to finish. 
215 | coord.join(threads) 216 | sess.close() 217 | 218 | 219 | 220 | def main(_): 221 | run_training() 222 | 223 | 224 | if __name__ == '__main__': 225 | tf.app.run() -------------------------------------------------------------------------------- /mfb_cross_train.py: -------------------------------------------------------------------------------- 1 | import os 2 | os.environ['TF_CPP_MIN_LOG_LEVEL']='1' 3 | from os import listdir 4 | 5 | import sys 6 | import time 7 | import argparse 8 | import tools.ops 9 | import subprocess 10 | 11 | import numpy as np 12 | import tensorflow as tf 13 | import scipy.misc as sm 14 | 15 | from models.mfb_net_cross import * 16 | from tools.utilities import * 17 | from tools.ops import * 18 | 19 | parser = argparse.ArgumentParser() 20 | parser.add_argument('-lr', dest='lr', type=float, default='1e-4', help='original learning rate') 21 | parser.add_argument('-batch_size', dest='batch_size', type=int, default='10', help='batch_size') 22 | args = parser.parse_args() 23 | 24 | flags = tf.app.flags 25 | flags.DEFINE_float('lr', args.lr, 'Original learning rate.') 26 | flags.DEFINE_integer('batch_size', args.batch_size, 'Batch size.') 27 | flags.DEFINE_integer('num_epochs', 500, 'Number of epochs.') 28 | flags.DEFINE_integer('num_gpus', 4, 'Number of GPUs.') 29 | flags.DEFINE_integer('seq_length', 16, 'Length of each video clip.') 30 | flags.DEFINE_integer('height', 128, 'Height of video frame.') 31 | flags.DEFINE_integer('width', 128, 'Width of video frame.') 32 | flags.DEFINE_integer('channel', 3, 'Number of channels for each frame.') 33 | flags.DEFINE_integer('num_sample', 10060, 'Number of samples in this dataset.') 34 | flags.DEFINE_float('wd', 0.001, 'Weight decay rate.') 35 | 36 | FLAGS = flags.FLAGS 37 | 38 | prefix = 'mfb_cross' 39 | model_save_dir = './ckpt/' + prefix 40 | logs_save_dir = './logs/' + prefix 41 | pred_save_dir = './output/' + prefix 42 | loss_save_dir = './loss' 43 | train_list_path = './dataset/trainlist.txt' 44 | dataset_path = './dataset/UCF-101-tf-records' 45 | evaluation_job = './jobs/mfb_cross_val' 46 | 47 | use_pretrained_model = True 48 | save_predictions = True 49 | 50 | 51 | def run_training(): 52 | 53 | # Create model directory 54 | if not os.path.exists(model_save_dir): 55 | os.makedirs(model_save_dir) 56 | model_filename = "./mfb_baseline_ucf24.model" 57 | 58 | # Construct computational graph 59 | tower_grads = [] 60 | tower_losses, tower_ffg_losses, tower_fbg_losses, tower_lfg_losses, tower_feat_losses, tower_wd_losses = [], [], [], [], [], [] 61 | tower_ffg_m_losses, tower_fbg_m_losses, tower_lfg_m_losses = [], [], [] 62 | 63 | global_step = tf.get_variable( 64 | 'global_step', 65 | [], 66 | initializer=tf.constant_initializer(0), 67 | trainable=False 68 | ) 69 | starter_learning_rate = FLAGS.lr 70 | learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, 71 | 100000, 0.5, staircase=True) 72 | opt = tf.train.AdamOptimizer(learning_rate) 73 | 74 | # Create a session for running Ops on the Graph.
75 | config = tf.ConfigProto(allow_soft_placement=True) 76 | sess = tf.Session(config=config) 77 | coord = tf.train.Coordinator() 78 | threads = None 79 | 80 | train_list_file = open(train_list_path, 'r') 81 | train_list = train_list_file.read().splitlines() 82 | for i, line in enumerate(train_list): 83 | train_list[i] = os.path.join(dataset_path, train_list[i]) 84 | 85 | assert(len(train_list) % FLAGS.num_gpus == 0) 86 | num_for_each_gpu = len(train_list) // FLAGS.num_gpus 87 | 88 | clips_list, img_masks_list, loss_masks_list = [], [], [] 89 | with sess.as_default(): 90 | for i in range(FLAGS.num_gpus): 91 | clips, img_masks, loss_masks = input_pipeline(train_list[i*num_for_each_gpu:(i+1)*num_for_each_gpu], \ 92 | FLAGS.batch_size, read_threads=4, num_epochs=FLAGS.num_epochs, is_training=True) 93 | clips_list.append(clips) 94 | img_masks_list.append(img_masks) 95 | loss_masks_list.append(loss_masks) 96 | 97 | mfb_list = [] 98 | with tf.variable_scope('vars') as var_scope: 99 | for gpu_index in range(FLAGS.num_gpus): 100 | with tf.device('/gpu:%d' % (gpu_index)): 101 | with tf.name_scope('%s_%d' % ('tower', gpu_index)) as scope: 102 | 103 | # construct model 104 | mfb = mfb_net(clips_list[gpu_index], FLAGS.height, FLAGS.width, FLAGS.seq_length, FLAGS.channel, FLAGS.batch_size) 105 | mfb_list.append(mfb) 106 | loss, first_fg_loss, first_bg_loss, last_fg_loss, feat_loss, wd_loss = \ 107 | tower_loss(scope, mfb, clips_list[gpu_index], img_masks_list[gpu_index], loss_masks_list[gpu_index]) 108 | 109 | var_scope.reuse_variables() 110 | 111 | vars_to_optimize = tf.trainable_variables() 112 | grads = opt.compute_gradients(loss, var_list=vars_to_optimize) 113 | 114 | tower_grads.append(grads) 115 | tower_losses.append(loss) 116 | tower_ffg_losses.append(first_fg_loss) 117 | tower_fbg_losses.append(first_bg_loss) 118 | tower_lfg_losses.append(last_fg_loss) 119 | tower_feat_losses.append(feat_loss) 120 | tower_wd_losses.append(wd_loss) 121 | 122 | 123 | # concatenate the losses of all towers 124 | loss_op = tf.reduce_mean(tower_losses) 125 | ffg_loss_op = tf.reduce_mean(tower_ffg_losses) 126 | fbg_loss_op = tf.reduce_mean(tower_fbg_losses) 127 | lfg_loss_op = tf.reduce_mean(tower_lfg_losses) 128 | feat_loss_op = tf.reduce_mean(tower_feat_losses) 129 | wd_loss_op = tf.reduce_mean(tower_wd_losses) 130 | 131 | tf.summary.scalar('loss', loss_op) 132 | tf.summary.scalar('ffg_loss', ffg_loss_op) 133 | tf.summary.scalar('fbg_loss', fbg_loss_op) 134 | tf.summary.scalar('lfg_loss', lfg_loss_op) 135 | tf.summary.scalar('feat_loss', feat_loss_op) 136 | tf.summary.scalar('wd_loss', wd_loss_op) 137 | 138 | update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) 139 | grads = average_gradients(tower_grads) 140 | with tf.control_dependencies(update_ops): 141 | train_op = opt.apply_gradients(grads, global_step=global_step) 142 | 143 | # saver for saving checkpoints 144 | saver = tf.train.Saver(max_to_keep=10) 145 | init = tf.initialize_all_variables() 146 | 147 | sess.run(init) 148 | if not os.path.exists(model_save_dir): 149 | os.makedirs(model_save_dir) 150 | if use_pretrained_model: 151 | print('[*] Loading checkpoint ...') 152 | model = tf.train.latest_checkpoint(model_save_dir) 153 | if model is not None: 154 | saver.restore(sess, model) 155 | print('[*] Loading success: %s!'%model) 156 | else: 157 | print('[*] Loading failed ...') 158 | 159 | # Create summary writer 160 | merged = tf.summary.merge_all() 161 | if not os.path.exists(logs_save_dir): 162 | os.makedirs(logs_save_dir) 163 | sum_writer = 
tf.summary.FileWriter(logs_save_dir, sess.graph) 164 | 165 | # Create prediction output folder 166 | if not os.path.exists(pred_save_dir): 167 | os.makedirs(pred_save_dir) 168 | 169 | # Create loss output folder 170 | if not os.path.exists(loss_save_dir): 171 | os.makedirs(loss_save_dir) 172 | loss_file = open(os.path.join(loss_save_dir, prefix+'.txt'), 'w') 173 | 174 | total_steps = (FLAGS.num_sample / (FLAGS.num_gpus * FLAGS.batch_size)) * FLAGS.num_epochs 175 | 176 | # start queue runner 177 | coord = tf.train.Coordinator() 178 | threads = tf.train.start_queue_runners(sess=sess, coord=coord) 179 | 180 | try: 181 | with sess.as_default(): 182 | print('\n\n\n*********** start training ***********\n\n\n') 183 | while not coord.should_stop(): 184 | # Run training steps 185 | start_time = time.time() 186 | sess.run(train_op) 187 | duration = time.time() - start_time 188 | step = global_step.eval() 189 | 190 | if step == 1 or step % 10 == 0: # evaluate loss 191 | loss, ffg_loss, fbg_loss, lfg_loss, feat_loss, wd_loss, lr = \ 192 | sess.run([loss_op, ffg_loss_op, fbg_loss_op, lfg_loss_op, feat_loss_op, wd_loss_op, learning_rate]) 193 | line = 'step %d/%d, loss=%.8f, ffg=%.8f, fbg=%.8f, lfg=%.8f, feat=%.8f, lwd=%.8f, dur=%.3f, lr=%.8f' \ 194 | %(step, total_steps, loss, ffg_loss, fbg_loss, lfg_loss, feat_loss, wd_loss, duration, lr) 195 | print(line) 196 | loss_file.write(line + '\n') 197 | loss_file.flush() 198 | 199 | if step == 1 or step % 10 == 0: # save summary 200 | summary = summary_str = sess.run(merged) 201 | sum_writer.add_summary(summary, step) 202 | 203 | if step % 100 == 0 and save_predictions: # save current predictions 204 | mfb = mfb_list[0] # only visualize prediction in first tower 205 | ffg, fbg, lfg, gt_ffg, gt_fbg, gt_lfg = sess.run([ 206 | mfb.first_fg_rec[0], mfb.first_bg_rec[0], mfb.last_fg_rec[0], \ 207 | mfb.gt_ffg[0], mfb.gt_fbg[0], mfb.gt_lfg[0]]) 208 | ffg, fbg, lfg = (ffg+1)/2*255.0, (fbg+1)/2*255.0, (lfg+1)/2*255.0 209 | gt_ffg, gt_fbg, gt_lfg = (gt_ffg+1)/2*255.0, (gt_fbg+1)/2*255.0, (gt_lfg+1)/2*255.0 210 | img = gen_pred_img(ffg, fbg, lfg) 211 | gt = gen_pred_img(gt_ffg, gt_fbg, gt_lfg) 212 | save_img = np.concatenate((img, gt)) 213 | sm.imsave(os.path.join(pred_save_dir, '%07d.jpg'%step), save_img) 214 | 215 | if step % 500 == 0: # save checkpoint 216 | saver.save(sess, os.path.join(model_save_dir, model_filename), global_step=global_step) 217 | 218 | if step % 500 == 0: 219 | pass 220 | # launch a new script for validation (please modify it for your own script) 221 | #subprocess.check_output(['python', evaluation_job]) 222 | 223 | except tf.errors.OutOfRangeError: 224 | print('Done training -- epoch limit reached') 225 | finally: 226 | # When done, ask the threads to stop. 227 | coord.request_stop() 228 | 229 | # Wait for threads to finish. 
230 | coord.join(threads) 231 | sess.close() 232 | 233 | 234 | 235 | def main(_): 236 | run_training() 237 | 238 | 239 | if __name__ == '__main__': 240 | tf.app.run() -------------------------------------------------------------------------------- /models/mfb_net_cross.py: -------------------------------------------------------------------------------- 1 | 2 | import tensorflow as tf 3 | 4 | from tools.ops import * 5 | 6 | 7 | FLAGS = tf.app.flags.FLAGS 8 | 9 | class mfb_net(object): 10 | 11 | def __init__(self, input_, height=128, width=128, seq_length=16, c_dim=3, batch_size=32, is_training=True): 12 | 13 | self.seq = input_ 14 | self.batch_size = batch_size 15 | self.height = height 16 | self.width = width 17 | self.seq_length = seq_length 18 | self.c_dim = c_dim 19 | self.gt_ffg = None 20 | self.gt_fbg = None 21 | self.gt_lfg = None 22 | 23 | self.seq_shape = [seq_length, height, width, c_dim] 24 | 25 | self.batch_norm_params = { 26 | 'is_training': is_training, 27 | 'decay': 0.9, 28 | 'epsilon': 1e-5, 29 | 'scale': True, 30 | 'center': True, 31 | 'updates_collections': tf.GraphKeys.UPDATE_OPS 32 | } 33 | 34 | self.build_model() 35 | 36 | 37 | def build_model(self): 38 | 39 | c3d_feat = self.mapping_layer(self.c3d(self.seq)) 40 | 41 | mt_feat = c3d_feat[:, :, :, 340:] # motion feature 42 | fg_feat = c3d_feat[:, :, :, 170:340] # foreground feature 43 | bg_feat = c3d_feat[:, :, :, :170] # background feature 44 | self.mt_feat, self.fg_feat, self.bg_feat = mt_feat, fg_feat, bg_feat 45 | 46 | # reconstruction 47 | self.first_fg_rec = self.decoder2d(fg_feat, self.c_dim, name='fg_dec') 48 | self.first_bg_rec = self.decoder2d(bg_feat, self.c_dim, name='bg_dec') 49 | 50 | # kernel generation 51 | kernel = self.kernel_decoder(mt_feat) 52 | # stop gradients from last foreground reconstruction stream 53 | fg_feat = tf.stop_gradient(fg_feat) 54 | 55 | # original cross-convolution consumes large memory, we alleviate it by 56 | # shrinking the feature maps and restoring it after cross-convolution 57 | fg_feat = self.reduction_layer(fg_feat) 58 | self.last_fg_feat = cross_conv2d(fg_feat, kernel) 59 | self.last_fg_feat = self.restoring_layer(self.last_fg_feat) 60 | 61 | # reconstruct last foreground 62 | self.last_fg_rec = self.decoder2d(self.last_fg_feat, self.c_dim, reuse=True, name='fg_dec') 63 | 64 | # get psudo-ground truth of last foreground features by feeding the temporally inversed video clip 65 | inv_c3d_feat = self.mapping_layer(self.c3d(self.seq[:,::-1], reuse=True), reuse=True) 66 | self.inv_last_fg_feat_gt = tf.stop_gradient(inv_c3d_feat[:, :, :, 170:340]) 67 | 68 | 69 | def reconstruct(self): 70 | return self.first_fg_rec, self.first_bg_rec, self.last_fg_rec 71 | 72 | 73 | def bn(self, x): 74 | return tf.contrib.layers.batch_norm(x, **self.batch_norm_params) 75 | 76 | 77 | def mapping_layer(self, input_, reuse=False, name='mapping'): 78 | with tf.variable_scope(name, reuse=reuse): 79 | feat = relu(self.bn(conv3d(input_, self.map_dim, k_t=self.map_length, \ 80 | k_h=2, k_w=2, d_h=2, d_w=2, padding='VALID', name='mapping1'))) 81 | feat = tf.reshape(feat, [self.batch_size, self.map_height//2, self.map_width//2, self.map_dim]) 82 | 83 | return feat 84 | 85 | 86 | def reduction_layer(self, input_, reuse=False, name='reduction'): 87 | with tf.variable_scope(name, reuse=reuse): 88 | input_ = relu(self.bn(conv2d(input_, self.fg_feat_c_dim, k_h=1, k_w=1, \ 89 | d_h=1, d_w=1, padding='SAME', name='reduction'))) 90 | return input_ 91 | 92 | 93 | def restoring_layer(self, input_, reuse=False, 
name='restoring'): 94 | with tf.variable_scope(name, reuse=reuse): 95 | input_ = relu(self.bn(conv2d(input_, 170, k_h=1, k_w=1, \ 96 | d_h=1, d_w=1, padding='SAME', name='restoring'))) 97 | return input_ 98 | 99 | 100 | def kernel_decoder(self, input_, reuse=False, name='kernel_dec'): 101 | with tf.variable_scope(name, reuse=reuse): 102 | 103 | kernel_out_dim = 48 104 | self.fg_feat_c_dim = 48 105 | kernel_in_dim = self.fg_feat_c_dim 106 | 107 | feat = relu(self.bn(conv2d(input_, kernel_in_dim*kernel_out_dim, k_h=3, k_w=3, \ 108 | d_h=1, d_w=1, padding='SAME', name='mapping1'))) 109 | 110 | kernel = feat 111 | kernel = tf.reshape(kernel, [self.batch_size, 4, 4, kernel_in_dim*kernel_out_dim]) 112 | 113 | kernel = relu(self.bn(conv2d(kernel, kernel_in_dim*kernel_out_dim, k_h=3, k_w=3, \ 114 | d_h=1, d_w=1, padding='SAME', name='conv1'))) 115 | kernel = conv2d(kernel, kernel_in_dim*kernel_out_dim, k_h=3, k_w=3, d_h=1, d_w=1, \ 116 | padding='SAME', name='conv2') 117 | kernel = tf.reshape(kernel, [self.batch_size, 4, 4, kernel_in_dim, kernel_out_dim]) 118 | 119 | 120 | return kernel 121 | 122 | 123 | def decoder2d(self, input_, out_dim, kernels=None, reuse=False, name='decoder2d'): 124 | # following guidelines from DC-GAN paper 125 | with tf.variable_scope(name, reuse=reuse): 126 | 127 | feat = relu(self.bn(conv2d(input_, self.map_dim*2, k_h=1, k_w=1, padding='SAME', name='mapping'))) # 4x4 128 | 129 | deconv1 = relu(self.bn(deconv2d(feat, 130 | output_shape=[self.batch_size,self.map_height,self.map_width,self.map_dim], 131 | k_h=5,k_w=5,d_h=2,d_w=2,padding='SAME',name='deconv1'))) # 8x8 132 | 133 | deconv2 = relu(self.bn(deconv2d(deconv1, 134 | output_shape=[self.batch_size,self.map_height*2,self.map_width*2,self.map_dim], 135 | k_h=5,k_w=5,d_h=2,d_w=2,padding='SAME',name='deconv2'))) # 16x16 136 | 137 | deconv3 = relu(self.bn(deconv2d(deconv2, 138 | output_shape=[self.batch_size,self.map_height*4,self.map_width*4,self.map_dim//2], 139 | k_h=5,k_w=5,d_h=2,d_w=2,padding='SAME',name='deconv3'))) # 32x32 140 | 141 | deconv4 = relu(self.bn(deconv2d(deconv3, 142 | output_shape=[self.batch_size,self.map_height*8,self.map_width*8,self.map_dim//4], 143 | k_h=5,k_w=5,d_h=2,d_w=2,padding='SAME',name='deconv4'))) # 64x64 144 | 145 | deconv5 = deconv2d(deconv4, 146 | output_shape=[self.batch_size,self.map_height*16,self.map_width*16,self.c_dim], 147 | k_h=5,k_w=5,d_h=2,d_w=2,padding='SAME',name='deconv5') #128x128 148 | 149 | img = tf.tanh(deconv5) 150 | 151 | 152 | return img 153 | 154 | 155 | def c3d(self, input_, reuse=False, _dropout=1.0, name='c3d'): 156 | 157 | with tf.variable_scope(name, reuse=reuse): 158 | 159 | # Convolution Layer 160 | conv1 = relu(self.bn(conv3d(input_, 64, name='conv1'))) 161 | pool1 = max_pool3d(conv1, k=1, name='pool1') 162 | 163 | # Convolution Layer 164 | conv2 = relu(self.bn(conv3d(pool1, 128, name='conv2'))) 165 | pool2 = max_pool3d(conv2, k=2, name='pool2') 166 | 167 | # Convolution Layer 168 | conv3 = relu(self.bn(conv3d(pool2, 256, name='conv3a'))) 169 | conv3 = relu(self.bn(conv3d(conv3, 256, name='conv3b'))) 170 | pool3 = max_pool3d(conv3, k=2, name='pool3') 171 | 172 | # Convolution Layer 173 | conv4 = relu(self.bn(conv3d(pool3, 512, name='conv4a'))) 174 | conv4 = relu(self.bn(conv3d(conv4, 512, name='conv4b'))) 175 | pool4 = max_pool3d(conv4, k=2, name='pool4') 176 | 177 | # Convolution Layer 178 | conv5 = relu(self.bn(conv3d(pool4, 512, name='conv5a'))) 179 | conv5 = relu(self.bn(conv3d(conv5, 512, name='conv5b'))) 180 | 181 | conv5_shape = 
conv5.get_shape().as_list() 182 | self.map_length = conv5_shape[1] 183 | self.map_height = conv5_shape[2] 184 | self.map_width = conv5_shape[3] 185 | self.map_dim = conv5_shape[4] 186 | 187 | feature = conv5 188 | 189 | return feature 190 | 191 | 192 | def tower_loss(name_scope, mfb, clips, img_masks, loss_masks): 193 | # get reconstruction and ground truth 194 | first_fg_rec, first_bg_rec, last_fg_rec = mfb.reconstruct() 195 | 196 | img_masks_list = [img_masks for i in range(FLAGS.channel)] 197 | loss_masks_list = [loss_masks for i in range(FLAGS.channel)] 198 | img_masks = tf.stack(img_masks_list, axis=-1) 199 | loss_masks = tf.stack(loss_masks_list, axis=-1) 200 | 201 | # mask the ground truth 202 | first_frames = clips[:,0,:,:,:] 203 | last_frames = clips[:,-1,:,:,:] 204 | first_fg_gt = first_frames * img_masks[:, 0] 205 | first_bg_gt = first_frames * (1 - img_masks[:, 0]) 206 | last_fg_gt = last_frames * img_masks[:, -1] 207 | 208 | mfb.gt_ffg = first_fg_gt 209 | mfb.gt_fbg = first_bg_gt 210 | mfb.gt_lfg = last_fg_gt 211 | 212 | # compute reconstruction loss 213 | first_fg_loss = tf.reduce_mean(tf.abs(first_fg_rec-first_fg_gt)*loss_masks[:,0]) 214 | first_bg_loss = tf.reduce_mean(tf.abs(first_bg_rec-first_bg_gt)) 215 | last_fg_loss = tf.reduce_mean(tf.abs(last_fg_rec-last_fg_gt)*loss_masks[:,-1]) 216 | rec_loss = first_fg_loss + first_bg_loss + last_fg_loss 217 | 218 | # feature loss 219 | feat_loss = tf.reduce_mean(tf.square(mfb.last_fg_feat-mfb.inv_last_fg_feat_gt)) 220 | 221 | weight_decay_loss_list = tf.get_collection('losses', name_scope) 222 | weight_decay_loss = 0.0 223 | if len(weight_decay_loss_list) > 0: 224 | weight_decay_loss = tf.add_n(weight_decay_loss_list) 225 | 226 | tf.add_to_collection('losses', rec_loss) 227 | tf.add_to_collection('losses', feat_loss) 228 | losses = tf.get_collection('losses', name_scope) 229 | 230 | # compute the total loss for the current tower. 
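# At this point the tower's 'losses' collection holds rec_loss (first foreground + first background + last foreground terms), feat_loss, and any weight-decay losses registered under this name scope, so total_loss below is simply their sum.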
231 | total_loss = tf.add_n(losses, name='total_loss') 232 | 233 | return total_loss, first_fg_loss, first_bg_loss, last_fg_loss, feat_loss, weight_decay_loss -------------------------------------------------------------------------------- /tools/utilities.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import tensorflow as tf 3 | import scipy.misc as sm 4 | 5 | 6 | FLAGS = tf.app.flags.FLAGS 7 | 8 | 9 | def gen_pred_img(ffg, fbg, lfg): 10 | border = 2 11 | shape = ffg.shape # [h, w, c] 12 | image = np.ones([shape[0]+2*border, shape[1]*3+4*border, shape[2]]) * 255 13 | image[border:-border,border:shape[1]+border] = ffg 14 | image[border:-border,shape[1]+border*2:2*shape[1]+border*2] = fbg 15 | image[border:-border,2*shape[1]+3*border:-border] = lfg 16 | 17 | return image 18 | 19 | 20 | def gen_pred_vid(vid): 21 | shape = vid.shape 22 | vid_img = np.zeros((shape[1], shape[0]*shape[2], shape[3])) 23 | 24 | for i in range(shape[0]): 25 | vid_img[:,i*shape[2]:(i+1)*shape[2]] = vid[i] 26 | 27 | return vid_img 28 | 29 | 30 | def decode_frames(frame_list, h, w, l): 31 | clip = [] 32 | for i in range(l): 33 | frame = frame_list[i] 34 | image = tf.cast(tf.image.decode_jpeg(frame), tf.float32) 35 | image.set_shape((h, w, 3)) 36 | clip.append(image) 37 | 38 | return tf.stack(clip) 39 | 40 | 41 | def generate_mask(img_mask_list, h, w, l): 42 | img_masks, loss_masks = [], [] 43 | 44 | for i in range(l): 45 | # generate image mask 46 | img_mask = img_mask_list[i] 47 | img_mask = tf.cast(tf.image.decode_png(img_mask), tf.float32) 48 | img_mask = tf.reshape(img_mask, (h, w)) 49 | img_masks.append(img_mask) 50 | 51 | # generate loss mask 52 | s_total = h * w 53 | s_mask = tf.reduce_sum(img_mask) 54 | def f1(): return img_mask*((s_total-s_mask)/s_mask-1)+1 55 | def f2(): return tf.zeros_like(img_mask) 56 | def f3(): return tf.ones_like(img_mask) 57 | loss_mask = tf.case([(tf.equal(s_mask, 0), f2), \ 58 | (tf.less(s_mask, s_total/2), f1)], 59 | default=f3) 60 | 61 | loss_masks.append(loss_mask) 62 | 63 | return tf.stack(img_masks), tf.stack(loss_masks) 64 | 65 | 66 | def read_my_file_format(filename_queue, is_training): 67 | reader = tf.TFRecordReader() 68 | _, serialized_example = reader.read(filename_queue) 69 | context_features = { 70 | "height": tf.FixedLenFeature([], dtype=tf.int64), 71 | "width": tf.FixedLenFeature([], dtype=tf.int64), 72 | "sequence_length": tf.FixedLenFeature([], dtype=tf.int64), 73 | "text": tf.FixedLenFeature([], dtype=tf.string), 74 | "label": tf.FixedLenFeature([], dtype=tf.int64) 75 | } 76 | sequence_features = { 77 | "frames": tf.FixedLenSequenceFeature([], dtype=tf.string), 78 | "masks": tf.FixedLenSequenceFeature([], dtype=tf.string) 79 | } 80 | context_parsed, sequence_parsed = tf.parse_single_sequence_example( 81 | serialized=serialized_example, 82 | context_features=context_features, 83 | sequence_features=sequence_features 84 | ) 85 | 86 | # start queue runner so it won't stuck 87 | tf.train.start_queue_runners(sess=tf.get_default_session()) 88 | 89 | height = FLAGS.height 90 | width = FLAGS.width 91 | sequence_length = 32 92 | 93 | clip = decode_frames(sequence_parsed['frames'], height, width, sequence_length) 94 | img_mask, loss_mask = generate_mask(sequence_parsed['masks'], \ 95 | height, width, sequence_length) 96 | 97 | if is_training: 98 | # randomly sample clips of 16 frames 99 | idx = tf.squeeze(tf.random_uniform([1], 0, sequence_length-FLAGS.seq_length+1, dtype=tf.int32)) 100 | else: 101 | # sample 
the middle clip 102 | idx = 8 103 | clip = clip[idx:idx+FLAGS.seq_length] / 255.0 * 2 - 1 104 | img_mask = img_mask[idx:idx+FLAGS.seq_length] 105 | loss_mask = loss_mask[idx:idx+FLAGS.seq_length] 106 | 107 | if is_training: 108 | # randomly temporally flip data 109 | reverse = tf.squeeze(tf.random_uniform([1], 0, 2, dtype=tf.int32)) 110 | clip = tf.cond(tf.equal(reverse,0), lambda: clip, lambda: clip[::-1]) 111 | img_mask = tf.cond(tf.equal(reverse,0), lambda: img_mask, lambda: img_mask[::-1]) 112 | loss_mask = tf.cond(tf.equal(reverse,0), lambda: loss_mask, lambda: loss_mask[::-1]) 113 | clip.set_shape([FLAGS.seq_length, height, width, 3]) 114 | img_mask.set_shape([FLAGS.seq_length, height, width]) 115 | loss_mask.set_shape([FLAGS.seq_length, height, width]) 116 | 117 | # randomly horizontally flip data 118 | flip = tf.squeeze(tf.random_uniform([1], 0, 2, dtype=tf.int32)) 119 | img_list, img_mask_list, loss_mask_list = tf.unstack(clip), tf.unstack(img_mask), tf.unstack(loss_mask) 120 | flip_clip, flip_img_mask, flip_loss_mask = [], [], [] 121 | for i in range(FLAGS.seq_length): 122 | flip_clip.append(tf.cond(tf.equal(flip, 0), lambda: img_list[i], lambda: tf.image.flip_left_right(img_list[i]))) 123 | flip_img_mask.append(tf.cond(tf.equal(flip, 0), lambda: img_mask_list[i], \ 124 | lambda: tf.squeeze(tf.image.flip_left_right(tf.expand_dims(img_mask_list[i],-1)),-1))) 125 | flip_loss_mask.append(tf.cond(tf.equal(flip, 0), lambda: loss_mask_list[i], \ 126 | lambda: tf.squeeze(tf.image.flip_left_right(tf.expand_dims(loss_mask_list[i],-1)),-1))) 127 | clip = tf.stack(flip_clip) 128 | img_mask = tf.stack(flip_img_mask) 129 | loss_mask = tf.stack(flip_loss_mask) 130 | 131 | clip.set_shape([FLAGS.seq_length, height, width, 3]) 132 | img_mask.set_shape([FLAGS.seq_length, height, width]) 133 | loss_mask.set_shape([FLAGS.seq_length, height, width]) 134 | 135 | return clip, img_mask, loss_mask 136 | 137 | 138 | def input_pipeline(filenames, batch_size, read_threads=4, num_epochs=None, is_training=True): 139 | filename_queue = tf.train.string_input_producer( 140 | filenames, num_epochs=FLAGS.num_epochs, shuffle=is_training) 141 | # initialize local variables if num_epochs is not None or it'll raise uninitialized problem 142 | tf.get_default_session().run(tf.local_variables_initializer()) 143 | 144 | example_list = [read_my_file_format(filename_queue, is_training) \ 145 | for _ in range(read_threads)] 146 | 147 | min_after_dequeue = 300 if is_training else 10 148 | capacity = min_after_dequeue + 3 * batch_size 149 | clip_batch, img_mask_batch, loss_mask_batch = tf.train.shuffle_batch_join( 150 | example_list, batch_size=batch_size, capacity=capacity, 151 | min_after_dequeue=min_after_dequeue) 152 | 153 | return clip_batch, img_mask_batch, loss_mask_batch 154 | 155 | 156 | 157 | def read_my_file_format_dis(filename_queue, is_training): 158 | reader = tf.TFRecordReader() 159 | _, serialized_example = reader.read(filename_queue) 160 | context_features = { 161 | "height": tf.FixedLenFeature([], dtype=tf.int64), 162 | "width": tf.FixedLenFeature([], dtype=tf.int64), 163 | "sequence_length": tf.FixedLenFeature([], dtype=tf.int64), 164 | "text": tf.FixedLenFeature([], dtype=tf.string), 165 | "label": tf.FixedLenFeature([], dtype=tf.int64) 166 | } 167 | sequence_features = { 168 | "frames": tf.FixedLenSequenceFeature([], dtype=tf.string), 169 | "masks": tf.FixedLenSequenceFeature([], dtype=tf.string) 170 | } 171 | context_parsed, sequence_parsed = tf.parse_single_sequence_example( 172 | 
serialized=serialized_example, 173 | context_features=context_features, 174 | sequence_features=sequence_features 175 | ) 176 | 177 | height = 128#context_parsed['height'].eval() 178 | width = 128#context_parsed['width'].eval() 179 | sequence_length = 32#context_parsed['sequence_length'].eval() 180 | 181 | clip = decode_frames(sequence_parsed['frames'], height, width, sequence_length) 182 | 183 | # generate one hot vector 184 | label = context_parsed['label'] 185 | label = tf.one_hot(label-1, FLAGS.num_class) 186 | text = context_parsed['text'] 187 | 188 | # randomly sample clips of 16 frames 189 | if is_training: 190 | idx = tf.squeeze(tf.random_uniform([1], 0, sequence_length-FLAGS.seq_length+1, dtype=tf.int32)) 191 | else: 192 | idx = 8 193 | clip = clip[idx:idx+FLAGS.seq_length] / 255.0 * 2 - 1 194 | 195 | if is_training: 196 | # randomly reverse data 197 | reverse = tf.squeeze(tf.random_uniform([1], 0, 2, dtype=tf.int32)) 198 | clip = tf.cond(tf.equal(reverse,0), lambda: clip, lambda: clip[::-1]) 199 | 200 | # randomly horizontally flip data 201 | flip = tf.squeeze(tf.random_uniform([1], 0, 2, dtype=tf.int32)) 202 | clip = tf.cond(tf.equal(flip,0), lambda: clip, lambda: \ 203 | tf.map_fn(lambda img: tf.image.flip_left_right(img), clip)) 204 | 205 | clip.set_shape([FLAGS.seq_length, height, width, 3]) 206 | 207 | return clip, label, text 208 | 209 | 210 | def input_pipeline_dis(filenames, batch_size, read_threads=4, num_epochs=None, is_training=True): 211 | filename_queue = tf.train.string_input_producer( 212 | filenames, num_epochs=FLAGS.num_epochs, shuffle=is_training) 213 | # initialize local variables if num_epochs is not None or it'll raise uninitialized problem 214 | tf.get_default_session().run(tf.local_variables_initializer()) 215 | 216 | example_list = [read_my_file_format_dis(filename_queue, is_training) \ 217 | for _ in range(read_threads)] 218 | 219 | min_after_dequeue = 300 if is_training else 10 220 | capacity = min_after_dequeue + 3 * batch_size 221 | clip_batch, label_batch, text_batch = tf.train.shuffle_batch_join( 222 | example_list, batch_size=batch_size, capacity=capacity, 223 | min_after_dequeue=min_after_dequeue) 224 | 225 | return clip_batch, label_batch, text_batch 226 | 227 | 228 | def average_gradients(tower_grads): 229 | average_grads = [] 230 | for grad_and_vars in zip(*tower_grads): 231 | grads = [] 232 | for g, v in grad_and_vars: 233 | expanded_g = tf.expand_dims(g, 0) 234 | grads.append(expanded_g) 235 | grad = tf.concat(grads, 0) 236 | grad = tf.reduce_mean(grad, 0) 237 | v = grad_and_vars[0][1] 238 | grad_and_var = (grad, v) 239 | average_grads.append(grad_and_var) 240 | 241 | return average_grads 242 | 243 | 244 | def average_gradients_dis(tower_grads, encoder_gradient_ratio): 245 | average_grads = [] 246 | for grad_and_vars in zip(*tower_grads): 247 | grads = [] 248 | for g, v in grad_and_vars: 249 | if 'c3d' in v.name or 'mapping' in v.name: 250 | g = g * encoder_gradient_ratio 251 | expanded_g = tf.expand_dims(g, 0) 252 | grads.append(expanded_g) 253 | if len(grads) == 0: 254 | continue 255 | grad = tf.concat(grads, 0) 256 | grad = tf.reduce_mean(grad, 0) 257 | v = grad_and_vars[0][1] 258 | grad_and_var = (grad, v) 259 | average_grads.append(grad_and_var) 260 | 261 | return average_grads -------------------------------------------------------------------------------- /dataset/vallist.txt: -------------------------------------------------------------------------------- 1 | Basketball/v_Basketball_g10_c01.avi 1 2 | Basketball/v_Basketball_g10_c02.avi 
1 3 | Basketball/v_Basketball_g10_c03.avi 1 4 | Basketball/v_Basketball_g10_c04.avi 1 5 | Basketball/v_Basketball_g10_c05.avi 1 6 | Basketball/v_Basketball_g23_c01.avi 1 7 | Basketball/v_Basketball_g23_c02.avi 1 8 | Basketball/v_Basketball_g23_c03.avi 1 9 | Basketball/v_Basketball_g23_c04.avi 1 10 | Basketball/v_Basketball_g23_c05.avi 1 11 | Basketball/v_Basketball_g23_c06.avi 1 12 | BasketballDunk/v_BasketballDunk_g09_c01.avi 2 13 | BasketballDunk/v_BasketballDunk_g09_c02.avi 2 14 | BasketballDunk/v_BasketballDunk_g09_c03.avi 2 15 | BasketballDunk/v_BasketballDunk_g09_c04.avi 2 16 | BasketballDunk/v_BasketballDunk_g09_c05.avi 2 17 | BasketballDunk/v_BasketballDunk_g13_c01.avi 2 18 | BasketballDunk/v_BasketballDunk_g13_c02.avi 2 19 | BasketballDunk/v_BasketballDunk_g13_c03.avi 2 20 | BasketballDunk/v_BasketballDunk_g13_c04.avi 2 21 | Biking/v_Biking_g18_c01.avi 3 22 | Biking/v_Biking_g18_c02.avi 3 23 | Biking/v_Biking_g18_c03.avi 3 24 | Biking/v_Biking_g18_c04.avi 3 25 | Biking/v_Biking_g18_c05.avi 3 26 | Biking/v_Biking_g18_c06.avi 3 27 | Biking/v_Biking_g21_c01.avi 3 28 | Biking/v_Biking_g21_c02.avi 3 29 | Biking/v_Biking_g21_c03.avi 3 30 | Biking/v_Biking_g21_c04.avi 3 31 | Biking/v_Biking_g21_c05.avi 3 32 | Biking/v_Biking_g21_c06.avi 3 33 | Biking/v_Biking_g21_c07.avi 3 34 | CliffDiving/v_CliffDiving_g11_c01.avi 4 35 | CliffDiving/v_CliffDiving_g11_c02.avi 4 36 | CliffDiving/v_CliffDiving_g11_c03.avi 4 37 | CliffDiving/v_CliffDiving_g11_c04.avi 4 38 | CliffDiving/v_CliffDiving_g11_c05.avi 4 39 | CliffDiving/v_CliffDiving_g11_c06.avi 4 40 | CliffDiving/v_CliffDiving_g12_c01.avi 4 41 | CliffDiving/v_CliffDiving_g12_c02.avi 4 42 | CliffDiving/v_CliffDiving_g12_c03.avi 4 43 | CliffDiving/v_CliffDiving_g12_c04.avi 4 44 | CliffDiving/v_CliffDiving_g12_c05.avi 4 45 | CliffDiving/v_CliffDiving_g12_c06.avi 4 46 | CliffDiving/v_CliffDiving_g12_c07.avi 4 47 | CricketBowling/v_CricketBowling_g14_c01.avi 5 48 | CricketBowling/v_CricketBowling_g14_c02.avi 5 49 | CricketBowling/v_CricketBowling_g14_c03.avi 5 50 | CricketBowling/v_CricketBowling_g14_c04.avi 5 51 | CricketBowling/v_CricketBowling_g14_c05.avi 5 52 | CricketBowling/v_CricketBowling_g17_c01.avi 5 53 | CricketBowling/v_CricketBowling_g17_c02.avi 5 54 | CricketBowling/v_CricketBowling_g17_c03.avi 5 55 | CricketBowling/v_CricketBowling_g17_c04.avi 5 56 | CricketBowling/v_CricketBowling_g17_c05.avi 5 57 | Diving/v_Diving_g15_c01.avi 6 58 | Diving/v_Diving_g15_c02.avi 6 59 | Diving/v_Diving_g15_c03.avi 6 60 | Diving/v_Diving_g15_c04.avi 6 61 | Diving/v_Diving_g15_c05.avi 6 62 | Diving/v_Diving_g15_c06.avi 6 63 | Diving/v_Diving_g15_c07.avi 6 64 | Diving/v_Diving_g19_c01.avi 6 65 | Diving/v_Diving_g19_c02.avi 6 66 | Diving/v_Diving_g19_c03.avi 6 67 | Diving/v_Diving_g19_c04.avi 6 68 | Fencing/v_Fencing_g08_c01.avi 7 69 | Fencing/v_Fencing_g08_c02.avi 7 70 | Fencing/v_Fencing_g08_c03.avi 7 71 | Fencing/v_Fencing_g08_c04.avi 7 72 | Fencing/v_Fencing_g19_c01.avi 7 73 | Fencing/v_Fencing_g19_c02.avi 7 74 | Fencing/v_Fencing_g19_c03.avi 7 75 | Fencing/v_Fencing_g19_c04.avi 7 76 | FloorGymnastics/v_FloorGymnastics_g10_c01.avi 8 77 | FloorGymnastics/v_FloorGymnastics_g10_c02.avi 8 78 | FloorGymnastics/v_FloorGymnastics_g10_c03.avi 8 79 | FloorGymnastics/v_FloorGymnastics_g10_c04.avi 8 80 | FloorGymnastics/v_FloorGymnastics_g10_c05.avi 8 81 | FloorGymnastics/v_FloorGymnastics_g19_c01.avi 8 82 | FloorGymnastics/v_FloorGymnastics_g19_c02.avi 8 83 | FloorGymnastics/v_FloorGymnastics_g19_c03.avi 8 84 | FloorGymnastics/v_FloorGymnastics_g19_c04.avi 8 85 | 
GolfSwing/v_GolfSwing_g09_c01.avi 9 86 | GolfSwing/v_GolfSwing_g09_c02.avi 9 87 | GolfSwing/v_GolfSwing_g09_c03.avi 9 88 | GolfSwing/v_GolfSwing_g09_c04.avi 9 89 | GolfSwing/v_GolfSwing_g24_c01.avi 9 90 | GolfSwing/v_GolfSwing_g24_c02.avi 9 91 | GolfSwing/v_GolfSwing_g24_c03.avi 9 92 | GolfSwing/v_GolfSwing_g24_c04.avi 9 93 | GolfSwing/v_GolfSwing_g24_c05.avi 9 94 | GolfSwing/v_GolfSwing_g24_c06.avi 9 95 | GolfSwing/v_GolfSwing_g24_c07.avi 9 96 | HorseRiding/v_HorseRiding_g13_c01.avi 10 97 | HorseRiding/v_HorseRiding_g13_c02.avi 10 98 | HorseRiding/v_HorseRiding_g13_c03.avi 10 99 | HorseRiding/v_HorseRiding_g13_c04.avi 10 100 | HorseRiding/v_HorseRiding_g21_c01.avi 10 101 | HorseRiding/v_HorseRiding_g21_c02.avi 10 102 | HorseRiding/v_HorseRiding_g21_c03.avi 10 103 | HorseRiding/v_HorseRiding_g21_c04.avi 10 104 | HorseRiding/v_HorseRiding_g21_c05.avi 10 105 | HorseRiding/v_HorseRiding_g21_c06.avi 10 106 | IceDancing/v_IceDancing_g10_c01.avi 11 107 | IceDancing/v_IceDancing_g10_c02.avi 11 108 | IceDancing/v_IceDancing_g10_c03.avi 11 109 | IceDancing/v_IceDancing_g10_c04.avi 11 110 | IceDancing/v_IceDancing_g10_c05.avi 11 111 | IceDancing/v_IceDancing_g10_c06.avi 11 112 | IceDancing/v_IceDancing_g10_c07.avi 11 113 | IceDancing/v_IceDancing_g23_c01.avi 11 114 | IceDancing/v_IceDancing_g23_c02.avi 11 115 | IceDancing/v_IceDancing_g23_c03.avi 11 116 | IceDancing/v_IceDancing_g23_c04.avi 11 117 | IceDancing/v_IceDancing_g23_c05.avi 11 118 | IceDancing/v_IceDancing_g23_c06.avi 11 119 | IceDancing/v_IceDancing_g23_c07.avi 11 120 | LongJump/v_LongJump_g18_c01.avi 12 121 | LongJump/v_LongJump_g18_c02.avi 12 122 | LongJump/v_LongJump_g18_c03.avi 12 123 | LongJump/v_LongJump_g18_c04.avi 12 124 | LongJump/v_LongJump_g18_c05.avi 12 125 | LongJump/v_LongJump_g21_c01.avi 12 126 | LongJump/v_LongJump_g21_c02.avi 12 127 | LongJump/v_LongJump_g21_c03.avi 12 128 | LongJump/v_LongJump_g21_c04.avi 12 129 | PoleVault/v_PoleVault_g10_c01.avi 13 130 | PoleVault/v_PoleVault_g10_c02.avi 13 131 | PoleVault/v_PoleVault_g10_c03.avi 13 132 | PoleVault/v_PoleVault_g10_c04.avi 13 133 | PoleVault/v_PoleVault_g10_c05.avi 13 134 | PoleVault/v_PoleVault_g10_c06.avi 13 135 | PoleVault/v_PoleVault_g10_c07.avi 13 136 | PoleVault/v_PoleVault_g17_c01.avi 13 137 | PoleVault/v_PoleVault_g17_c02.avi 13 138 | PoleVault/v_PoleVault_g17_c03.avi 13 139 | PoleVault/v_PoleVault_g17_c04.avi 13 140 | PoleVault/v_PoleVault_g17_c05.avi 13 141 | PoleVault/v_PoleVault_g17_c06.avi 13 142 | PoleVault/v_PoleVault_g17_c07.avi 13 143 | RopeClimbing/v_RopeClimbing_g20_c01.avi 14 144 | RopeClimbing/v_RopeClimbing_g20_c02.avi 14 145 | RopeClimbing/v_RopeClimbing_g20_c03.avi 14 146 | RopeClimbing/v_RopeClimbing_g20_c04.avi 14 147 | RopeClimbing/v_RopeClimbing_g21_c01.avi 14 148 | RopeClimbing/v_RopeClimbing_g21_c02.avi 14 149 | RopeClimbing/v_RopeClimbing_g21_c03.avi 14 150 | RopeClimbing/v_RopeClimbing_g21_c04.avi 14 151 | SalsaSpin/v_SalsaSpin_g12_c01.avi 15 152 | SalsaSpin/v_SalsaSpin_g12_c02.avi 15 153 | SalsaSpin/v_SalsaSpin_g12_c03.avi 15 154 | SalsaSpin/v_SalsaSpin_g12_c04.avi 15 155 | SalsaSpin/v_SalsaSpin_g12_c05.avi 15 156 | SalsaSpin/v_SalsaSpin_g12_c06.avi 15 157 | SalsaSpin/v_SalsaSpin_g14_c01.avi 15 158 | SalsaSpin/v_SalsaSpin_g14_c02.avi 15 159 | SalsaSpin/v_SalsaSpin_g14_c03.avi 15 160 | SalsaSpin/v_SalsaSpin_g14_c04.avi 15 161 | SalsaSpin/v_SalsaSpin_g14_c05.avi 15 162 | SalsaSpin/v_SalsaSpin_g14_c06.avi 15 163 | SkateBoarding/v_SkateBoarding_g14_c01.avi 16 164 | SkateBoarding/v_SkateBoarding_g14_c02.avi 16 165 | 
SkateBoarding/v_SkateBoarding_g14_c03.avi 16 166 | SkateBoarding/v_SkateBoarding_g14_c04.avi 16 167 | SkateBoarding/v_SkateBoarding_g15_c01.avi 16 168 | SkateBoarding/v_SkateBoarding_g15_c02.avi 16 169 | SkateBoarding/v_SkateBoarding_g15_c03.avi 16 170 | SkateBoarding/v_SkateBoarding_g15_c04.avi 16 171 | SkateBoarding/v_SkateBoarding_g15_c05.avi 16 172 | SkateBoarding/v_SkateBoarding_g15_c06.avi 16 173 | Skiing/v_Skiing_g10_c01.avi 17 174 | Skiing/v_Skiing_g10_c02.avi 17 175 | Skiing/v_Skiing_g10_c03.avi 17 176 | Skiing/v_Skiing_g10_c04.avi 17 177 | Skiing/v_Skiing_g10_c05.avi 17 178 | Skiing/v_Skiing_g14_c01.avi 17 179 | Skiing/v_Skiing_g14_c02.avi 17 180 | Skiing/v_Skiing_g14_c03.avi 17 181 | Skiing/v_Skiing_g14_c04.avi 17 182 | Skijet/v_Skijet_g09_c01.avi 18 183 | Skijet/v_Skijet_g09_c02.avi 18 184 | Skijet/v_Skijet_g09_c03.avi 18 185 | Skijet/v_Skijet_g09_c04.avi 18 186 | Skijet/v_Skijet_g22_c01.avi 18 187 | Skijet/v_Skijet_g22_c02.avi 18 188 | Skijet/v_Skijet_g22_c03.avi 18 189 | Skijet/v_Skijet_g22_c04.avi 18 190 | SoccerJuggling/v_SoccerJuggling_g10_c01.avi 19 191 | SoccerJuggling/v_SoccerJuggling_g10_c02.avi 19 192 | SoccerJuggling/v_SoccerJuggling_g10_c03.avi 19 193 | SoccerJuggling/v_SoccerJuggling_g10_c04.avi 19 194 | SoccerJuggling/v_SoccerJuggling_g13_c01.avi 19 195 | SoccerJuggling/v_SoccerJuggling_g13_c02.avi 19 196 | SoccerJuggling/v_SoccerJuggling_g13_c03.avi 19 197 | SoccerJuggling/v_SoccerJuggling_g13_c04.avi 19 198 | SoccerJuggling/v_SoccerJuggling_g13_c05.avi 19 199 | Surfing/v_Surfing_g15_c01.avi 20 200 | Surfing/v_Surfing_g15_c02.avi 20 201 | Surfing/v_Surfing_g15_c03.avi 20 202 | Surfing/v_Surfing_g15_c04.avi 20 203 | Surfing/v_Surfing_g15_c05.avi 20 204 | Surfing/v_Surfing_g15_c06.avi 20 205 | Surfing/v_Surfing_g15_c07.avi 20 206 | Surfing/v_Surfing_g17_c01.avi 20 207 | Surfing/v_Surfing_g17_c02.avi 20 208 | Surfing/v_Surfing_g17_c03.avi 20 209 | Surfing/v_Surfing_g17_c04.avi 20 210 | Surfing/v_Surfing_g17_c05.avi 20 211 | Surfing/v_Surfing_g17_c06.avi 20 212 | Surfing/v_Surfing_g17_c07.avi 20 213 | TennisSwing/v_TennisSwing_g15_c01.avi 21 214 | TennisSwing/v_TennisSwing_g15_c02.avi 21 215 | TennisSwing/v_TennisSwing_g15_c03.avi 21 216 | TennisSwing/v_TennisSwing_g15_c04.avi 21 217 | TennisSwing/v_TennisSwing_g15_c05.avi 21 218 | TennisSwing/v_TennisSwing_g15_c06.avi 21 219 | TennisSwing/v_TennisSwing_g15_c07.avi 21 220 | TennisSwing/v_TennisSwing_g16_c01.avi 21 221 | TennisSwing/v_TennisSwing_g16_c02.avi 21 222 | TennisSwing/v_TennisSwing_g16_c03.avi 21 223 | TennisSwing/v_TennisSwing_g16_c04.avi 21 224 | TennisSwing/v_TennisSwing_g16_c05.avi 21 225 | TennisSwing/v_TennisSwing_g16_c06.avi 21 226 | TennisSwing/v_TennisSwing_g16_c07.avi 21 227 | TrampolineJumping/v_TrampolineJumping_g11_c01.avi 22 228 | TrampolineJumping/v_TrampolineJumping_g11_c02.avi 22 229 | TrampolineJumping/v_TrampolineJumping_g11_c03.avi 22 230 | TrampolineJumping/v_TrampolineJumping_g11_c04.avi 22 231 | TrampolineJumping/v_TrampolineJumping_g11_c05.avi 22 232 | TrampolineJumping/v_TrampolineJumping_g11_c06.avi 22 233 | TrampolineJumping/v_TrampolineJumping_g20_c01.avi 22 234 | TrampolineJumping/v_TrampolineJumping_g20_c02.avi 22 235 | TrampolineJumping/v_TrampolineJumping_g20_c03.avi 22 236 | TrampolineJumping/v_TrampolineJumping_g20_c04.avi 22 237 | TrampolineJumping/v_TrampolineJumping_g20_c05.avi 22 238 | VolleyballSpiking/v_VolleyballSpiking_g20_c01.avi 23 239 | VolleyballSpiking/v_VolleyballSpiking_g20_c02.avi 23 240 | VolleyballSpiking/v_VolleyballSpiking_g20_c03.avi 23 241 | 
VolleyballSpiking/v_VolleyballSpiking_g20_c04.avi 23 242 | VolleyballSpiking/v_VolleyballSpiking_g25_c01.avi 23 243 | VolleyballSpiking/v_VolleyballSpiking_g25_c02.avi 23 244 | VolleyballSpiking/v_VolleyballSpiking_g25_c03.avi 23 245 | VolleyballSpiking/v_VolleyballSpiking_g25_c04.avi 23 246 | WalkingWithDog/v_WalkingWithDog_g14_c01.avi 24 247 | WalkingWithDog/v_WalkingWithDog_g14_c02.avi 24 248 | WalkingWithDog/v_WalkingWithDog_g14_c03.avi 24 249 | WalkingWithDog/v_WalkingWithDog_g14_c04.avi 24 250 | WalkingWithDog/v_WalkingWithDog_g19_c01.avi 24 251 | WalkingWithDog/v_WalkingWithDog_g19_c02.avi 24 252 | WalkingWithDog/v_WalkingWithDog_g19_c03.avi 24 253 | WalkingWithDog/v_WalkingWithDog_g19_c04.avi 24 254 | WalkingWithDog/v_WalkingWithDog_g19_c05.avi 24 255 | -------------------------------------------------------------------------------- /tools/ffmpeg_reader.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module implements all the functions to read a video or a picture 3 | using ffmpeg. It is quite ugly, as there are many pitfalls to avoid 4 | 5 | Credit: Zulko (MoviePy) 6 | https://github.com/Zulko/moviepy/blob/master/moviepy/video/io/ffmpeg_reader.py 7 | 8 | Modified by Victor Campos to work with a different framerate than the original one. 9 | """ 10 | 11 | from __future__ import division 12 | 13 | 14 | import os 15 | import re 16 | import ipdb 17 | import logging 18 | import warnings 19 | import numpy as np 20 | import tensorflow as tf 21 | import subprocess as sp 22 | 23 | logging.captureWarnings(True) 24 | 25 | try: 26 | from subprocess import DEVNULL # py3k 27 | except ImportError: 28 | DEVNULL = open(os.devnull, 'wb') 29 | 30 | 31 | # Default path to FFmpeg binary 32 | FFMPEG_BIN = '/anaconda/bin/ffmpeg' 33 | 34 | FLAGS = tf.app.flags.FLAGS 35 | 36 | 37 | def get_ffmpeg_bin(): 38 | """Get path to FFmpeg binary.""" 39 | return FFMPEG_BIN 40 | 41 | 42 | def set_ffmpeg_bin(path): 43 | """Set path to FFmpeg binary.""" 44 | global FFMPEG_BIN 45 | FFMPEG_BIN = path 46 | 47 | 48 | def is_string(obj): 49 | """ Returns true if s is string or string-like object, 50 | compatible with Python 2 and Python 3.""" 51 | try: 52 | return isinstance(obj, basestring) 53 | except NameError: 54 | return isinstance(obj, str) 55 | 56 | 57 | def cvsecs(time): 58 | """ Will convert any time into seconds. 59 | Here are the accepted formats: 60 | >>> cvsecs(15.4) -> 15.4 # seconds 61 | >>> cvsecs( (1,21.5) ) -> 81.5 # (min,sec) 62 | >>> cvsecs( (1,1,2) ) -> 3662 # (hr, min, sec) 63 | >>> cvsecs('01:01:33.5') -> 3693.5 #(hr,min,sec) 64 | >>> cvsecs('01:01:33.045') -> 3693.045 65 | >>> cvsecs('01:01:33,5') #coma works too 66 | """ 67 | 68 | if is_string(time): 69 | if (',' not in time) and ('.' not in time): 70 | time = time + '.0' 71 | expr = r"(\d+):(\d+):(\d+)[,|.](\d+)" 72 | finds = re.findall(expr, time)[0] 73 | nums = list( map(float, finds) ) 74 | return ( 3600*int(finds[0]) 75 | + 60*int(finds[1]) 76 | + int(finds[2]) 77 | + nums[3]/(10**len(finds[3]))) 78 | 79 | elif isinstance(time, tuple): 80 | if len(time)== 3: 81 | hr, mn, sec = time 82 | elif len(time)== 2: 83 | hr, mn, sec = 0, time[0], time[1] 84 | return 3600*hr + 60*mn + sec 85 | 86 | else: 87 | return time 88 | 89 | 90 | def _load_video_ffmpeg(filename): 91 | """ 92 | Load a video as a numpy array using FFmpeg in [0, 255] RGB format. 
93 | :param filename: path to the video file 94 | :param random_chunk: grab frames starting from a random position 95 | :return: (video, length) tuple 96 | video: (n_frames, h, w, 3) numpy array containing video frames, as RGB in range [0, 255] 97 | height: frame height 98 | width: frame width 99 | length: number of non-zero frames loaded from the video (the rest of the sequence is zero-padded) 100 | """ 101 | if isinstance(filename, bytes): 102 | filename = filename.decode('utf-8') 103 | 104 | n_frames = -1 #FLAGS.num_frames 105 | random_chunk = False #FLAGS.random_chunks 106 | target_fps = -1 #FLAGS.fps 107 | 108 | # Get video params 109 | video_reader = FFMPEG_VideoReader(filename, target_fps=target_fps) 110 | w, h = video_reader.size 111 | fps = video_reader.fps 112 | if target_fps <= 0: 113 | target_fps = fps 114 | video_length = int(video_reader.nframes * target_fps / fps) # corrected number of frames 115 | 116 | # Determine starting and ending positions 117 | if n_frames <= 0 or video_length < n_frames: 118 | n_frames = video_length 119 | elif random_chunk: # start from a random position 120 | start_pos = random.randint(0, video_length - n_frames - 1) 121 | video_reader.get_frame(1. * start_pos / target_fps, fps=target_fps) 122 | 123 | # Load video chunk as numpy array 124 | video = np.zeros((n_frames, h, w, 3), dtype=np.float32) 125 | for idx in range(n_frames): 126 | video[idx, :, :, :] = video_reader.read_frame()[:, :, :3].astype(np.float32) 127 | 128 | video_reader.close() 129 | 130 | return video, h, w, n_frames 131 | 132 | 133 | def decode_video(filename): 134 | """ 135 | Decode frames from a video. Returns frames in [0, 255] RGB format. 136 | :param filename: string tensor, e.g. dequeue() op from a filenames queue 137 | :return: 138 | video: 4-D tensor containing frames of a video: [time, height, width, channel] 139 | height: frame height 140 | width: frame width 141 | length: number of non-zero frames loaded from the video (the rest of the sequence is zero-padded) 142 | """ 143 | return tf.py_func(_load_video_ffmpeg, [filename], [tf.float32, tf.int64, tf.int64, tf.int64], name='decode_mp4') 144 | 145 | 146 | class FFMPEG_VideoReader: 147 | def __init__(self, filename, print_infos=False, bufsize=None, 148 | pix_fmt="rgb24", check_duration=True, target_fps=-1): 149 | 150 | self.filename = filename 151 | infos = ffmpeg_parse_infos(filename, print_infos, check_duration) 152 | self.fps = infos['video_fps'] 153 | self.size = infos['video_size'] 154 | self.duration = infos['video_duration'] 155 | self.ffmpeg_duration = infos['duration'] 156 | self.nframes = infos['video_nframes'] 157 | 158 | self.infos = infos 159 | 160 | self.pix_fmt = pix_fmt 161 | if pix_fmt == 'rgba': 162 | self.depth = 4 163 | else: 164 | self.depth = 3 165 | 166 | if bufsize is None: 167 | w, h = self.size 168 | bufsize = self.depth * w * h + 100 169 | 170 | self.target_fps = target_fps 171 | 172 | self.bufsize = bufsize 173 | self.initialize() 174 | 175 | self.pos = 1 176 | self.lastread = self.read_frame() 177 | 178 | def initialize(self, starttime=0): 179 | """Opens the file, creates the pipe. 
""" 180 | 181 | self.close() # if any 182 | 183 | if starttime != 0: 184 | offset = min(1, starttime) 185 | i_arg = ['-ss', "%.06f" % (starttime - offset), 186 | '-i', self.filename, 187 | '-ss', "%.06f" % offset] 188 | else: 189 | i_arg = ['-i', self.filename] 190 | 191 | if self.target_fps > 0: 192 | cmd = ([get_ffmpeg_bin()] + i_arg + 193 | ['-loglevel', 'error', 194 | '-f', 'image2pipe', 195 | '-vf', 'fps=%d' % self.target_fps, 196 | "-pix_fmt", self.pix_fmt, 197 | '-vcodec', 'rawvideo', '-']) 198 | else: 199 | cmd = ([get_ffmpeg_bin()] + i_arg + 200 | ['-loglevel', 'error', 201 | '-f', 'image2pipe', 202 | "-pix_fmt", self.pix_fmt, 203 | '-vcodec', 'rawvideo', '-']) 204 | 205 | popen_params = {"bufsize": self.bufsize, 206 | "stdout": sp.PIPE, 207 | "stderr": sp.PIPE, 208 | "stdin": DEVNULL} 209 | 210 | if os.name == "nt": 211 | popen_params["creationflags"] = 0x08000000 212 | 213 | self.proc = sp.Popen(cmd, **popen_params) 214 | 215 | def skip_frames(self, n=1): 216 | """Reads and throws away n frames """ 217 | w, h = self.size 218 | for i in range(n): 219 | self.proc.stdout.read(self.depth * w * h) 220 | # self.proc.stdout.flush() 221 | self.pos += n 222 | 223 | def read_frame(self): 224 | w, h = self.size 225 | nbytes = self.depth * w * h 226 | 227 | s = self.proc.stdout.read(nbytes) 228 | if len(s) != nbytes: 229 | 230 | warnings.warn("Warning: in file %s, " % (self.filename) + 231 | "%d bytes wanted but %d bytes read," % (nbytes, len(s)) + 232 | "at frame %d/%d, at time %.02f/%.02f sec. " % ( 233 | self.pos, self.nframes, 234 | 1.0 * self.pos / self.fps, 235 | self.duration) + 236 | "Using the last valid frame instead.", 237 | UserWarning) 238 | 239 | if not hasattr(self, 'lastread'): 240 | raise IOError(("FFMPEG_VideoReader error: failed to read the first frame of " 241 | "video file %s. That might mean that the file is " 242 | "corrupted. That may also mean that you are using " 243 | "a deprecated version of FFMPEG. On Ubuntu/Debian " 244 | "for instance the version in the repos is deprecated. " 245 | "Please update to a recent version from the website.") % ( 246 | self.filename)) 247 | 248 | result = self.lastread 249 | 250 | else: 251 | 252 | result = np.fromstring(s, dtype='uint8') 253 | result.shape = (h, w, len(s) // (w * h)) # reshape((h, w, len(s)//(w*h))) 254 | self.lastread = result 255 | 256 | return result 257 | 258 | def get_frame(self, t, fps=None): 259 | """ Read a file video frame at time t. 260 | Note for coders: getting an arbitrary frame in the video with 261 | ffmpeg can be painfully slow if some decoding has to be done. 262 | This function tries to avoid fetching arbitrary frames 263 | whenever possible, by moving between adjacent frames. 264 | """ 265 | 266 | # these definitely need to be rechecked sometime. Seems to work. 267 | 268 | # I use that horrible '+0.00001' hack because sometimes due to numerical 269 | # imprecisions a 3.0 can become a 2.99999999... which makes the int() 270 | # go to the previous integer. This makes the fetching more robust in the 271 | # case where you get the nth frame by writing get_frame(n/fps). 
272 | 273 | if fps is None: 274 | fps = self.fps 275 | 276 | pos = int(fps * t + 0.00001) + 1 277 | 278 | if pos == self.pos: 279 | return self.lastread 280 | else: 281 | if (pos < self.pos) or (pos > self.pos + 100): 282 | self.initialize(t) 283 | self.pos = pos 284 | else: 285 | self.skip_frames(pos - self.pos - 1) 286 | result = self.read_frame() 287 | self.pos = pos 288 | return result 289 | 290 | def close(self): 291 | if hasattr(self, 'proc'): 292 | self.proc.terminate() 293 | self.proc.stdout.close() 294 | self.proc.stderr.close() 295 | del self.proc 296 | 297 | def __del__(self): 298 | self.close() 299 | if hasattr(self, 'lastread'): 300 | del self.lastread 301 | 302 | 303 | def ffmpeg_read_image(filename, with_mask=True): 304 | """ Read an image file (PNG, BMP, JPEG...). 305 | Wraps FFMPEG_Videoreader to read just one image. 306 | Returns an ImageClip. 307 | This function is not meant to be used directly in MoviePy, 308 | use ImageClip instead to make clips out of image files. 309 | Parameters 310 | ----------- 311 | filename 312 | Name of the image file. Can be of any format supported by ffmpeg. 313 | with_mask 314 | If the image has a transparency layer, ``with_mask=true`` will save 315 | this layer as the mask of the returned ImageClip 316 | """ 317 | if with_mask: 318 | pix_fmt = 'rgba' 319 | else: 320 | pix_fmt = "rgb24" 321 | reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt, check_duration=False) 322 | im = reader.lastread 323 | del reader 324 | return im 325 | 326 | 327 | def ffmpeg_parse_infos(filename, print_infos=False, check_duration=True): 328 | """Get file infos using ffmpeg. 329 | Returns a dictionnary with the fields: 330 | "video_found", "video_fps", "duration", "video_nframes", 331 | "video_duration", "audio_found", "audio_fps" 332 | "video_duration" is slightly smaller than "duration" to avoid 333 | fetching the uncomplete frames at the end, which raises an error. 
334 | """ 335 | 336 | # open the file in a pipe, provoke an error, read output 337 | is_GIF = filename.endswith('.gif') 338 | cmd = [get_ffmpeg_bin(), "-i", filename] 339 | if is_GIF: 340 | cmd += ["-f", "null", "/dev/null"] 341 | 342 | popen_params = {"bufsize": 10 ** 5, 343 | "stdout": sp.PIPE, 344 | "stderr": sp.PIPE, 345 | "stdin": DEVNULL} 346 | 347 | if os.name == "nt": 348 | popen_params["creationflags"] = 0x08000000 349 | 350 | proc = sp.Popen(cmd, **popen_params) 351 | 352 | proc.stdout.readline() 353 | proc.terminate() 354 | infos = proc.stderr.read().decode('utf8') 355 | del proc 356 | 357 | if print_infos: 358 | # print the whole info text returned by FFMPEG 359 | print(infos) 360 | 361 | lines = infos.splitlines() 362 | if "No such file or directory" in lines[-1]: 363 | raise IOError(("MoviePy error: the file %s could not be found !\n" 364 | "Please check that you entered the correct " 365 | "path.") % filename) 366 | 367 | result = dict() 368 | 369 | # get duration (in seconds) 370 | result['duration'] = None 371 | 372 | if check_duration: 373 | try: 374 | keyword = ('frame=' if is_GIF else 'Duration: ') 375 | line = [l for l in lines if keyword in l][0] 376 | match = re.findall("([0-9][0-9]:[0-9][0-9]:[0-9][0-9].[0-9][0-9])", line)[0] 377 | result['duration'] = cvsecs(match) 378 | except: 379 | raise IOError(("MoviePy error: failed to read the duration of file %s.\n" 380 | "Here are the file infos returned by ffmpeg:\n\n%s") % ( 381 | filename, infos)) 382 | 383 | # get the output line that speaks about video 384 | lines_video = [l for l in lines if ' Video: ' in l and re.search('\d+x\d+', l)] 385 | 386 | result['video_found'] = (lines_video != []) 387 | 388 | if result['video_found']: 389 | 390 | try: 391 | line = lines_video[0] 392 | 393 | # get the size, of the form 460x320 (w x h) 394 | match = re.search(" [0-9]*x[0-9]*(,| )", line) 395 | s = list(map(int, line[match.start():match.end() - 1].split('x'))) 396 | result['video_size'] = s 397 | except: 398 | raise IOError(("MoviePy error: failed to read video dimensions in file %s.\n" 399 | "Here are the file infos returned by ffmpeg:\n\n%s") % ( 400 | filename, infos)) 401 | 402 | # get the frame rate. Sometimes it's 'tbr', sometimes 'fps', sometimes 403 | # tbc, and sometimes tbc/2... 404 | # Current policy: Trust tbr first, then fps. If result is near from x*1000/1001 405 | # where x is 23,24,25,50, replace by x*1000/1001 (very common case for the fps). 406 | 407 | try: 408 | match = re.search("( [0-9]*.| )[0-9]* tbr", line) 409 | tbr = float(line[match.start():match.end()].split(' ')[1]) 410 | result['video_fps'] = tbr 411 | 412 | except: 413 | match = re.search("( [0-9]*.| )[0-9]* fps", line) 414 | result['video_fps'] = float(line[match.start():match.end()].split(' ')[1]) 415 | 416 | # It is known that a fps of 24 is often written as 24000/1001 417 | # but then ffmpeg nicely rounds it to 23.98, which we hate. 
418 | coef = 1000.0 / 1001.0 419 | fps = result['video_fps'] 420 | for x in [23, 24, 25, 30, 50]: 421 | if (fps != x) and abs(fps - x * coef) < .01: 422 | result['video_fps'] = x * coef 423 | 424 | if check_duration: 425 | result['video_nframes'] = int(result['duration'] * result['video_fps']) + 1 426 | result['video_duration'] = result['duration'] 427 | else: 428 | result['video_nframes'] = 1 429 | result['video_duration'] = None 430 | # We could have also recomputed the duration from the number 431 | # of frames, as follows: 432 | # >>> result['video_duration'] = result['video_nframes'] / result['video_fps'] 433 | 434 | lines_audio = [l for l in lines if ' Audio: ' in l] 435 | 436 | result['audio_found'] = lines_audio != [] 437 | 438 | if result['audio_found']: 439 | line = lines_audio[0] 440 | try: 441 | match = re.search(" [0-9]* Hz", line) 442 | result['audio_fps'] = int(line[match.start() + 1:match.end()]) 443 | except: 444 | result['audio_fps'] = 'unknown' 445 | 446 | return result -------------------------------------------------------------------------------- /tools/generate_tfrecord.py: -------------------------------------------------------------------------------- 1 | """ 2 | Based on: http://www.wildml.com/2016/08/rnns-in-tensorflow-a-practical-guide-and-undocumented-features/ 3 | """ 4 | 5 | # Note: check ranges. tf.decode_jpeg=[0,1], ffmpeg=[0,255] (JPEG encodes [0,255] uint8 images) 6 | 7 | from __future__ import absolute_import 8 | from __future__ import division 9 | from __future__ import print_function 10 | 11 | import os 12 | import re 13 | import sys 14 | import ipdb 15 | import glob 16 | import random 17 | import os.path 18 | import threading 19 | 20 | from datetime import datetime 21 | 22 | import numpy as np 23 | import scipy.misc as sm 24 | import tensorflow as tf 25 | import xml.etree.ElementTree as et 26 | 27 | from ffmpeg_reader import decode_video 28 | 29 | 30 | tf.app.flags.DEFINE_string('videos_directory', '../../dataset/UCF-101/', 'Video data directory') 31 | tf.app.flags.DEFINE_string('annotation_directory', '../../dataset/UCF101_24Action_Detection_Annotations/', 'Video annotation directory') 32 | tf.app.flags.DEFINE_string('input_file', '../dataset/testlist.txt', 'Text file with (filename, label) pairs') 33 | tf.app.flags.DEFINE_string('output_directory', '../dataset/UCF-101-tf-records', 'Output data directory') 34 | tf.app.flags.DEFINE_string('class_list', '../dataset/class_list.txt', 'File with the class names') 35 | tf.app.flags.DEFINE_string('name', 'UCF-24-test', 'Name for the subset') 36 | 37 | tf.app.flags.DEFINE_integer('num_shards', 25, 'Number of shards. Each job will process num_shards/num_jobs shards.') 38 | tf.app.flags.DEFINE_integer('num_threads', 1, 'Number of threads within this job to preprocess the videos.') 39 | tf.app.flags.DEFINE_integer('num_jobs', 1, 'How many jobs will process this dataset.') 40 | tf.app.flags.DEFINE_integer('job_id', 0, 'Job ID for the multi-job scenario. In range [0, num_jobs-1].') 41 | 42 | tf.app.flags.DEFINE_integer('resize_h', 128, 'Height after resize.') 43 | tf.app.flags.DEFINE_integer('resize_w', 128, 'Width after resize.') 44 | tf.app.flags.DEFINE_integer('label_offset', 1, 45 | 'Offset for class IDs. 
Use 1 to avoid confusion with zero-padded elements') 46 | 47 | FLAGS = tf.app.flags.FLAGS 48 | 49 | 50 | def _int64_feature(value): 51 | """Wrapper for inserting int64 features into Example proto.""" 52 | if not isinstance(value, list): 53 | value = [value] 54 | return tf.train.Feature(int64_list=tf.train.Int64List(value=value)) 55 | 56 | 57 | def _bytes_feature(value): 58 | """Wrapper for inserting bytes features into Example proto.""" 59 | if isinstance(value, list): 60 | value = value[0] 61 | return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value])) 62 | 63 | 64 | def _convert_to_sequential_example(filename, video_buffer, mask_buffer, label, text, height, width, sequence_length, sample_length=-1): 65 | """Build a SequenceExample proto for an example. 66 | Args: 67 | filename: string, path to a video file, e.g., '/path/to/example.avi' 68 | video_buffer: numpy array with the video frames, with dims [n_frames, height, width, n_channels] 69 | mask_buffer: activity masks of video frames 70 | label: integer or list of integers, identifier for the ground truth for the network 71 | text: string, unique human-readable, e.g. 'dog' 72 | height: integer, image height in pixels 73 | width: integer, image width in pixels 74 | sequence_length: real length of the data, i.e. number of frames that are not zero-padding 75 | sample_length: length of sampled clips from video, set to -1 if don't want sampling 76 | Returns: 77 | SequentialExample proto 78 | """ 79 | # Get sequence length 80 | full_length = len(video_buffer) 81 | assert len(video_buffer) == len(mask_buffer) 82 | 83 | example_list = [] 84 | if sample_length == -1: sample_length = full_length 85 | num_clips = full_length // sample_length 86 | for i in range(num_clips): 87 | # Create SequenceExample instance 88 | example = tf.train.SequenceExample() 89 | 90 | # Context features (non-sequential features) 91 | example.context.feature['height'].int64_list.value.append(height) 92 | example.context.feature['width'].int64_list.value.append(width) 93 | example.context.feature['sequence_length'].int64_list.value.append(sample_length) 94 | example.context.feature['filename'].bytes_list.value.append(str.encode(filename)) 95 | example.context.feature['text'].bytes_list.value.append(str.encode(text)) 96 | example.context.feature['label'].int64_list.value.append(label) 97 | 98 | # Sequential features 99 | frames = example.feature_lists.feature_list["frames"] 100 | masks = example.feature_lists.feature_list["masks"] 101 | 102 | for j in range(sample_length): 103 | frames.feature.add().bytes_list.value.append(video_buffer[i*sample_length+j]) # .tostring()) 104 | masks.feature.add().bytes_list.value.append(mask_buffer[i*sample_length+j]) 105 | 106 | example_list.append(example) 107 | 108 | return example_list 109 | 110 | 111 | def _convert_to_example(filename, video_buffer, label, text, height, width, sequence_length): 112 | """Deprecated: use _convert_to_sequential_example instead 113 | Build an Example proto for an example. 114 | Args: 115 | filename: string, path to a video file, e.g., '/path/to/example.avi' 116 | video_buffer: numpy array with the video frames, with dims [n_frames, height, width, n_channels] 117 | label: integer or list of integers, identifier for the ground truth for the network 118 | text: string, unique human-readable, e.g. 'dog' 119 | height: integer, image height in pixels 120 | width: integer, image width in pixels 121 | sequence_length: real length of the data, i.e. 
number of frames that are not zero-padding 122 | Returns: 123 | Example proto 124 | """ 125 | example = tf.train.Example(features=tf.train.Features(feature={ 126 | 'sequence_length': _int64_feature(sequence_length), 127 | 'height': _int64_feature(height), 128 | 'width': _int64_feature(width), 129 | 'class/label': _int64_feature(label), 130 | 'class/text': _bytes_feature(text), 131 | 'filename': _bytes_feature(os.path.basename(filename)), 132 | 'frames': _bytes_feature(video_buffer.tostring())})) 133 | 134 | return example 135 | 136 | 137 | class VideoCoder(object): 138 | """Helper class that provides TensorFlow image coding utilities.""" 139 | 140 | def __init__(self): 141 | # Create a single Session to run all image coding calls. 142 | self._sess = tf.Session() 143 | 144 | # Initializes function that decodes video 145 | self._video_path = tf.placeholder(dtype=tf.string) 146 | self._decode_video = decode_video(self._video_path) 147 | 148 | # Initialize function that resizes a frame 149 | self._resize_video_data = tf.placeholder(dtype=tf.float32, shape=[None, None, None, 3]) 150 | 151 | # Initialize function to JPEG-encode a frame 152 | self._raw_frame = tf.placeholder(dtype=tf.uint8, shape=[None, None, 3]) 153 | self._raw_mask = tf.placeholder(dtype=tf.uint8, shape=[None, None, 1]) 154 | self._encode_frame = tf.image.encode_jpeg(self._raw_frame, quality=100) 155 | self._encode_mask = tf.image.encode_png(self._raw_mask) 156 | 157 | def _resize_video(self, seq_length, new_height, new_width): 158 | resized_video = tf.image.resize_bilinear(self._resize_video_data, [new_height, new_width], 159 | align_corners=False) 160 | resized_video.set_shape([seq_length, new_height, new_width, 3]) 161 | return resized_video 162 | 163 | def decode_video(self, video_data): 164 | video, _, _, seq_length = self._sess.run(self._decode_video, 165 | feed_dict={self._video_path: video_data}) 166 | # video /= 255. 167 | raw_height, raw_width = video.shape[1], video.shape[2] 168 | if FLAGS.resize_h != -1: 169 | video = self._sess.run(self._resize_video(seq_length, FLAGS.resize_h, FLAGS.resize_w), 170 | feed_dict={self._resize_video_data: video}) 171 | assert len(video.shape) == 4 172 | assert video.shape[3] == 3 173 | return video, raw_height, raw_width, seq_length 174 | 175 | def encode_frame(self, raw_frame): 176 | return self._sess.run(self._encode_frame, feed_dict={self._raw_frame: raw_frame}) 177 | 178 | def encode_mask(self, raw_mask): 179 | return self._sess.run(self._encode_mask, feed_dict={self._raw_mask: raw_mask}) 180 | 181 | 182 | def _parse_annotation_xml(filepath): 183 | tree = et.parse(filepath) 184 | data = tree.getroot()[1] # 185 | for sf in data: # ... find first non-empty sourcefile node 186 | if len(list(sf)) != 0: 187 | break 188 | file = sf[0] # 189 | objs = sf[1:] # ... 
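    # A minimal sketch of the ViPER-style .xgtf layout this parser assumes; the tag
    # names are inferred from the XPath queries used here and below, so treat them as
    # an approximation rather than a specification:
    #   <viper>
    #     <config> ... </config>
    #     <data>
    #       <sourcefile>
    #         <file> <attribute name="NUMFRAMES"> <data:dvalue value="164"/> </attribute> </file>
    #         <object>
    #           <attribute name="Location">
    #             <data:bbox framespan="1:164" height="62" width="45" x="108" y="71"/>
    #           </attribute>
    #         </object>
    #       </sourcefile>
    #     </data>
    #   </viper>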
190 | 191 | num_objs = len(objs) 192 | num_frames = int(file.find("./*[@name='NUMFRAMES']/*[@value]").attrib['value']) 193 | parsed_bbx = np.zeros([num_frames, num_objs, 4]) 194 | for i, obj in enumerate(objs): # iterate nodes 195 | loc = obj.find("./*[@name='Location']") 196 | for bbx in loc: 197 | span = re.findall(r'\d+', bbx.attrib['framespan']) 198 | beg, end = int(span[0]), int(span[1]) 199 | h = int(bbx.attrib['height']) 200 | w = int(bbx.attrib['width']) 201 | x = int(bbx.attrib['x']) 202 | y = int(bbx.attrib['y']) 203 | parsed_bbx[beg-1:end, i] = [h, w, x, y] 204 | 205 | return parsed_bbx 206 | 207 | 208 | def _resize_bbx(parsed_bbx, frame_h, frame_w): 209 | ratio_h = FLAGS.resize_h / frame_h 210 | ratio_w = FLAGS.resize_w / frame_w 211 | parsed_bbx[:,:,0] = parsed_bbx[:,:,0] * ratio_h 212 | parsed_bbx[:,:,1] = parsed_bbx[:,:,1] * ratio_w 213 | parsed_bbx[:,:,2] = parsed_bbx[:,:,2] * ratio_w 214 | parsed_bbx[:,:,3] = parsed_bbx[:,:,3] * ratio_h 215 | return parsed_bbx 216 | 217 | 218 | def _bbx_to_mask(parsed_bbx, num_frames, frame_h, frame_w): 219 | 220 | if not num_frames == parsed_bbx.shape[0]: 221 | #print('[num_frames=%d bbx=%d]' %(num_frames, parsed_bbx.shape[0])) 222 | # align frames and bbx 223 | if num_frames > parsed_bbx.shape[0]: 224 | padding = np.zeros([num_frames-parsed_bbx.shape[0], \ 225 | parsed_bbx.shape[1], parsed_bbx.shape[2]]) 226 | parsed_bbx = np.concatenate([parsed_bbx, padding]) 227 | else: 228 | parsed_bbx = parsed_bbx[:num_frames] 229 | 230 | masks = np.zeros([num_frames, frame_h, frame_w, 1]) 231 | num_objs = parsed_bbx.shape[1] 232 | 233 | for i in range(num_frames): 234 | for j in range(num_objs): 235 | bbx = parsed_bbx[i, j] 236 | h, w, x, y = bbx[0], bbx[1], bbx[2], bbx[3] 237 | x_ = int(np.clip(x+w, 0, frame_w)) 238 | y_ = int(np.clip(y+h, 0, frame_h)) 239 | x = int(np.clip(x, 0, frame_w-1)) 240 | y = int(np.clip(y, 0, frame_h-1)) 241 | masks[i, y:y_, x:x_] = 1 242 | 243 | return masks 244 | 245 | 246 | def _process_video(filename, coder): 247 | """ 248 | Process a single video file using FFmpeg 249 | Args 250 | filename: path to the video file 251 | coder: instance of ImageCoder to provide TensorFlow image coding utils. 252 | Returns: 253 | video_buffer: numpy array with the video frames 254 | mask_buffer: activity mask of the video frames 255 | frame_h: integer, video height in pixels. 256 | frame_w: integer, width width in pixels. 
257 | seq_length: sequence length (non-zero frames) 258 | """ 259 | 260 | video, raw_h, raw_w, seq_length = coder.decode_video(filename) 261 | video = video.astype(np.uint8) 262 | assert len(video.shape) == 4 263 | assert video.shape[3] == 3 264 | frame_h, frame_w = video.shape[1], video.shape[2] 265 | 266 | # generate mask from annotations 267 | groups = filename.split('/') 268 | annot_file_name = groups[-1].split('.')[0] + '.xgtf' 269 | annot_file_path = os.path.join(FLAGS.annotation_directory, groups[-2], annot_file_name) 270 | parsed_bbx = _parse_annotation_xml(annot_file_path) 271 | if FLAGS.resize_h != -1: 272 | parsed_bbx = _resize_bbx(parsed_bbx, raw_h, raw_w) 273 | masks = _bbx_to_mask(parsed_bbx, seq_length, FLAGS.resize_h, FLAGS.resize_w) 274 | 275 | encoded_frames_seq = [] 276 | encoded_masks_seq = [] 277 | for idx in range(seq_length): 278 | encoded_frames_seq.append(coder.encode_frame(video[idx, :, :, :])) 279 | encoded_masks_seq.append(coder.encode_mask(masks[idx, :, :, :])) 280 | 281 | return encoded_frames_seq, encoded_masks_seq, frame_h, frame_w, np.asscalar(seq_length) 282 | 283 | 284 | def _process_video_files_batch(coder, thread_index, ranges, name, filenames, texts, labels, num_shards, 285 | job_index, num_jobs): 286 | """ 287 | Process and save list of videos as TFRecord in 1 thread. 288 | Args: 289 | coder: instance of VideoCoder to provide TensorFlow video coding utils. 290 | thread_index: integer, unique batch to run index is within [0, len(ranges)). 291 | ranges: list of pairs of integers specifying ranges of each batch to 292 | analyze in parallel. 293 | name: string, unique identifier specifying the data set 294 | filenames: list of strings; each string is a path to a video file 295 | texts: list of strings; each string is human readable, e.g. 'dog' 296 | labels: list of integer; each integer identifies the ground truth 297 | num_shards: integer number of shards for this data set. 298 | job_index: integer, unique job index in range [0, num_jobs-1] 299 | num_jobs: how many different jobs will process the same data 300 | """ 301 | assert not num_shards % num_jobs 302 | num_shards_per_job = num_shards / num_jobs 303 | # Each thread produces N shards where N = int(num_shards_per_job / num_threads). 304 | # For instance, if num_shards_per_job = 128, and the num_threads = 2, then the first 305 | # thread would produce shards [0, 64). 306 | num_threads = len(ranges) 307 | assert not num_shards_per_job % num_threads 308 | num_shards_per_batch = int(num_shards_per_job / num_threads) 309 | 310 | shard_ranges = np.linspace(ranges[thread_index][0], ranges[thread_index][1], num_shards_per_batch + 1).astype(int) 311 | num_files_in_thread = ranges[thread_index][1] - ranges[thread_index][0] 312 | 313 | counter = 0 314 | for s in range(num_shards_per_batch): 315 | # Generate a sharded version of the file name, e.g. 
'train-00002-of-00010' 316 | shard = thread_index * num_shards_per_batch + job_index * num_shards_per_job + s 317 | output_filename = '%s-%.5d-of-%.5d' % (name, shard, num_shards) 318 | output_file = os.path.join(FLAGS.output_directory, output_filename) 319 | writer = tf.python_io.TFRecordWriter(output_file) 320 | 321 | shard_counter = 0 322 | files_in_shard = np.arange(shard_ranges[s], shard_ranges[s + 1], dtype=int) 323 | for i in files_in_shard: 324 | filename = filenames[i] 325 | label = labels[i] 326 | text = texts[i] 327 | 328 | video_buffer, mask_buffer, height, width, seq_length = _process_video(filename, coder) 329 | 330 | if seq_length == 0: 331 | print('Skipping video with null length') 332 | continue 333 | 334 | example_list = _convert_to_sequential_example(filename, video_buffer, mask_buffer, label, text, height, width, seq_length, sample_length=32) 335 | for example in example_list: 336 | writer.write(example.SerializeToString()) 337 | shard_counter += 1 338 | counter += 1 339 | 340 | if not counter % 100: 341 | print('%s [thread %d]: Processed %d of %d videos in thread batch.' % 342 | (datetime.now(), thread_index, counter, num_files_in_thread)) 343 | sys.stdout.flush() 344 | 345 | print('%s [thread %d]: Wrote %d video chunks to %s' % 346 | (datetime.now(), thread_index, shard_counter, output_file)) 347 | sys.stdout.flush() 348 | shard_counter = 0 349 | print('%s [thread %d]: Wrote %d video chunks to %d shards.' % 350 | (datetime.now(), thread_index, counter, num_files_in_thread)) 351 | sys.stdout.flush() 352 | 353 | 354 | def _process_video_files(name, filenames, texts, labels, num_shards, job_index, num_jobs): 355 | """ 356 | Process and save list of videos as TFRecord of Example protos. 357 | Args: 358 | name: string, unique identifier specifying the data set 359 | filenames: list of strings; each string is a path to a video file 360 | texts: list of strings; each string is human readable, e.g. 'dog' 361 | labels: list of integer; each integer identifies the ground truth 362 | num_shards: integer number of shards for this data set. 363 | job_index: integer, unique job index in range [0, num_jobs-1] 364 | num_jobs: how many different jobs will process the same data 365 | """ 366 | assert len(filenames) == len(texts) 367 | assert len(filenames) == len(labels) 368 | 369 | # Break all examples into batches in two levels: first for jobs, then for threads within each job 370 | num_files = len(filenames) 371 | num_files_per_job = int(num_files / num_jobs) 372 | first_file = job_index * num_files_per_job 373 | last_file = min(num_files, (job_index + 1) * num_files_per_job) 374 | print('Job #%d will process files in range [%d,%d]' % (job_index, first_file, last_file - 1)) 375 | local_filenames = filenames[first_file:last_file] 376 | local_texts = texts[first_file:last_file] 377 | local_labels = labels[first_file:last_file] 378 | spacing = np.linspace(0, len(local_filenames), FLAGS.num_threads + 1).astype(np.int) 379 | ranges = [] 380 | threads = [] 381 | for i in range(len(spacing) - 1): 382 | ranges.append([spacing[i], spacing[i+1]]) 383 | 384 | # Launch a thread for each batch. 385 | print('Launching %d threads for spacings: %s' % (FLAGS.num_threads, ranges)) 386 | sys.stdout.flush() 387 | 388 | # Create a mechanism for monitoring when all threads are finished. 389 | coord = tf.train.Coordinator() 390 | 391 | # Create a generic TensorFlow-based utility for converting all image codings. 
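    # To make the thread fan-out concrete (hypothetical numbers): with 100 local files
    # and FLAGS.num_threads = 2, 'spacing' above is [0, 50, 100] and 'ranges' becomes
    # [[0, 50], [50, 100]]; the loop below starts one worker thread per range, each
    # writing its own subset of the output shards.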
392 | coder = VideoCoder() 393 | 394 | threads = [] 395 | for thread_index in range(len(ranges)): 396 | args = (coder, thread_index, ranges, name, local_filenames, local_texts, local_labels, num_shards, 397 | job_index, num_jobs) 398 | t = threading.Thread(target=_process_video_files_batch, args=args) 399 | t.start() 400 | threads.append(t) 401 | 402 | # Wait for all the threads to terminate. 403 | coord.join(threads) 404 | print('%s: Finished writing all %d videos in data set.' % 405 | (datetime.now(), len(local_filenames))) 406 | sys.stdout.flush() 407 | 408 | 409 | def _find_video_files(input_file, class_list_path, dataset_dir): 410 | """Build a list of all videos files and labels in the data set. 411 | Args: 412 | input_file: path to the file listing (path, anp_label, noun_label, adj_label) tuples 413 | class_list_path: path to the file with the class id -> class name mapping 414 | Returns: 415 | filenames: list of strings; each string is a path to a video file. 416 | texts: list of string; each string is the class name, e.g. 'playing_football' 417 | labels: list of integer; each integer identifies the ground truth label id 418 | """ 419 | lines = [line.strip() for line in open(input_file, 'r')] 420 | class_list = [line.strip().split()[0] for line in open(class_list_path, 'r')] 421 | filenames = list() 422 | texts = list() 423 | labels = list() 424 | for line in lines: 425 | video, class_id = _parse_line_ucf101(line) 426 | filenames.append(os.path.join(dataset_dir, video)) 427 | labels.append(int(class_id)) 428 | texts.append(class_list[int(class_id)-1]) 429 | 430 | # Shuffle the ordering of all video files in order to guarantee random ordering of the images with respect to 431 | # label in the saved TFRecord files. Make the randomization repeatable. 432 | shuffled_index = list(range(len(filenames))) 433 | random.seed(12345) 434 | random.shuffle(shuffled_index) 435 | 436 | filenames = [filenames[i] for i in shuffled_index] 437 | texts = [texts[i] for i in shuffled_index] 438 | labels = [labels[i] for i in shuffled_index] 439 | 440 | print('Found %d video files.' % len(filenames)) 441 | 442 | return filenames, texts, labels 443 | 444 | 445 | def _parse_line_ucf101(line): 446 | filename, class_id = line.split() 447 | return filename, class_id 448 | 449 | 450 | def _process_dataset(name, input_file, dataset_dir, num_shards, class_list_path, job_index, num_jobs): 451 | """Process a complete data set and save it as a TFRecord. 452 | Args: 453 | name: string, unique identifier specifying the data set. 454 | input_file: path to the file listing (path, anp_label, noun_label, adj_label) tuples 455 | num_shards: integer number of shards for this data set. 456 | class_list_path: string, path to the labels file. 457 | job_index: integer, unique job index in range [0, num_jobs-1] 458 | num_jobs: how many different jobs will process the same data 459 | """ 460 | filenames, texts, labels = _find_video_files(input_file, class_list_path, dataset_dir) 461 | _process_video_files(name, filenames, texts, labels, num_shards, job_index, num_jobs) 462 | 463 | 464 | def main(arg): 465 | assert not int(FLAGS.num_shards / FLAGS.num_jobs) % FLAGS.num_threads, ( 466 | 'Please make the FLAGS.num_threads commensurate with FLAGS.num_shards and FLAGS.num_jobs') 467 | 468 | if not os.path.exists(FLAGS.output_directory): 469 | os.makedirs(FLAGS.output_directory) 470 | print('Saving results to %s' % FLAGS.output_directory) 471 | 472 | # Run it! 
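    # An illustrative flag combination (hypothetical, not from the original source) that
    # satisfies the divisibility check above: --num_shards=25 --num_jobs=5 --num_threads=5
    # --job_id=0 makes each of the five jobs write 25 / 5 = 5 shards, one per thread,
    # with job_id selecting which fifth of the shuffled video list this run handles.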
473 | _process_dataset(FLAGS.name, FLAGS.input_file, FLAGS.videos_directory, FLAGS.num_shards, FLAGS.class_list, 474 | FLAGS.job_id, FLAGS.num_jobs) 475 | 476 | 477 | if __name__ == '__main__': 478 | tf.app.run(main) 479 | -------------------------------------------------------------------------------- /dataset/testlist.txt: -------------------------------------------------------------------------------- 1 | Basketball/v_Basketball_g01_c01.avi 1 2 | Basketball/v_Basketball_g01_c02.avi 1 3 | Basketball/v_Basketball_g01_c03.avi 1 4 | Basketball/v_Basketball_g01_c04.avi 1 5 | Basketball/v_Basketball_g01_c05.avi 1 6 | Basketball/v_Basketball_g01_c06.avi 1 7 | Basketball/v_Basketball_g01_c07.avi 1 8 | Basketball/v_Basketball_g02_c01.avi 1 9 | Basketball/v_Basketball_g02_c02.avi 1 10 | Basketball/v_Basketball_g02_c03.avi 1 11 | Basketball/v_Basketball_g02_c04.avi 1 12 | Basketball/v_Basketball_g02_c05.avi 1 13 | Basketball/v_Basketball_g02_c06.avi 1 14 | Basketball/v_Basketball_g03_c01.avi 1 15 | Basketball/v_Basketball_g03_c02.avi 1 16 | Basketball/v_Basketball_g03_c03.avi 1 17 | Basketball/v_Basketball_g03_c04.avi 1 18 | Basketball/v_Basketball_g03_c05.avi 1 19 | Basketball/v_Basketball_g03_c06.avi 1 20 | Basketball/v_Basketball_g04_c01.avi 1 21 | Basketball/v_Basketball_g04_c02.avi 1 22 | Basketball/v_Basketball_g04_c03.avi 1 23 | Basketball/v_Basketball_g04_c04.avi 1 24 | Basketball/v_Basketball_g05_c01.avi 1 25 | Basketball/v_Basketball_g05_c02.avi 1 26 | Basketball/v_Basketball_g05_c03.avi 1 27 | Basketball/v_Basketball_g05_c04.avi 1 28 | Basketball/v_Basketball_g06_c01.avi 1 29 | Basketball/v_Basketball_g06_c02.avi 1 30 | Basketball/v_Basketball_g06_c03.avi 1 31 | Basketball/v_Basketball_g06_c04.avi 1 32 | Basketball/v_Basketball_g07_c01.avi 1 33 | Basketball/v_Basketball_g07_c02.avi 1 34 | Basketball/v_Basketball_g07_c03.avi 1 35 | Basketball/v_Basketball_g07_c04.avi 1 36 | BasketballDunk/v_BasketballDunk_g01_c01.avi 2 37 | BasketballDunk/v_BasketballDunk_g01_c02.avi 2 38 | BasketballDunk/v_BasketballDunk_g01_c03.avi 2 39 | BasketballDunk/v_BasketballDunk_g01_c04.avi 2 40 | BasketballDunk/v_BasketballDunk_g01_c05.avi 2 41 | BasketballDunk/v_BasketballDunk_g01_c06.avi 2 42 | BasketballDunk/v_BasketballDunk_g01_c07.avi 2 43 | BasketballDunk/v_BasketballDunk_g02_c01.avi 2 44 | BasketballDunk/v_BasketballDunk_g02_c02.avi 2 45 | BasketballDunk/v_BasketballDunk_g02_c03.avi 2 46 | BasketballDunk/v_BasketballDunk_g02_c04.avi 2 47 | BasketballDunk/v_BasketballDunk_g03_c01.avi 2 48 | BasketballDunk/v_BasketballDunk_g03_c02.avi 2 49 | BasketballDunk/v_BasketballDunk_g03_c03.avi 2 50 | BasketballDunk/v_BasketballDunk_g03_c04.avi 2 51 | BasketballDunk/v_BasketballDunk_g03_c05.avi 2 52 | BasketballDunk/v_BasketballDunk_g03_c06.avi 2 53 | BasketballDunk/v_BasketballDunk_g04_c01.avi 2 54 | BasketballDunk/v_BasketballDunk_g04_c02.avi 2 55 | BasketballDunk/v_BasketballDunk_g04_c03.avi 2 56 | BasketballDunk/v_BasketballDunk_g04_c04.avi 2 57 | BasketballDunk/v_BasketballDunk_g05_c01.avi 2 58 | BasketballDunk/v_BasketballDunk_g05_c02.avi 2 59 | BasketballDunk/v_BasketballDunk_g05_c03.avi 2 60 | BasketballDunk/v_BasketballDunk_g05_c04.avi 2 61 | BasketballDunk/v_BasketballDunk_g05_c05.avi 2 62 | BasketballDunk/v_BasketballDunk_g05_c06.avi 2 63 | BasketballDunk/v_BasketballDunk_g06_c01.avi 2 64 | BasketballDunk/v_BasketballDunk_g06_c02.avi 2 65 | BasketballDunk/v_BasketballDunk_g06_c03.avi 2 66 | BasketballDunk/v_BasketballDunk_g06_c04.avi 2 67 | BasketballDunk/v_BasketballDunk_g07_c01.avi 2 68 | 
BasketballDunk/v_BasketballDunk_g07_c02.avi 2 69 | BasketballDunk/v_BasketballDunk_g07_c03.avi 2 70 | BasketballDunk/v_BasketballDunk_g07_c04.avi 2 71 | BasketballDunk/v_BasketballDunk_g07_c05.avi 2 72 | BasketballDunk/v_BasketballDunk_g07_c06.avi 2 73 | Biking/v_Biking_g01_c01.avi 3 74 | Biking/v_Biking_g01_c02.avi 3 75 | Biking/v_Biking_g01_c03.avi 3 76 | Biking/v_Biking_g01_c04.avi 3 77 | Biking/v_Biking_g02_c01.avi 3 78 | Biking/v_Biking_g02_c02.avi 3 79 | Biking/v_Biking_g02_c03.avi 3 80 | Biking/v_Biking_g02_c04.avi 3 81 | Biking/v_Biking_g02_c05.avi 3 82 | Biking/v_Biking_g02_c06.avi 3 83 | Biking/v_Biking_g02_c07.avi 3 84 | Biking/v_Biking_g03_c01.avi 3 85 | Biking/v_Biking_g03_c02.avi 3 86 | Biking/v_Biking_g03_c03.avi 3 87 | Biking/v_Biking_g03_c04.avi 3 88 | Biking/v_Biking_g04_c01.avi 3 89 | Biking/v_Biking_g04_c02.avi 3 90 | Biking/v_Biking_g04_c03.avi 3 91 | Biking/v_Biking_g04_c04.avi 3 92 | Biking/v_Biking_g04_c05.avi 3 93 | Biking/v_Biking_g05_c01.avi 3 94 | Biking/v_Biking_g05_c02.avi 3 95 | Biking/v_Biking_g05_c03.avi 3 96 | Biking/v_Biking_g05_c04.avi 3 97 | Biking/v_Biking_g05_c05.avi 3 98 | Biking/v_Biking_g05_c06.avi 3 99 | Biking/v_Biking_g05_c07.avi 3 100 | Biking/v_Biking_g06_c01.avi 3 101 | Biking/v_Biking_g06_c02.avi 3 102 | Biking/v_Biking_g06_c03.avi 3 103 | Biking/v_Biking_g06_c04.avi 3 104 | Biking/v_Biking_g06_c05.avi 3 105 | Biking/v_Biking_g07_c01.avi 3 106 | Biking/v_Biking_g07_c02.avi 3 107 | Biking/v_Biking_g07_c03.avi 3 108 | Biking/v_Biking_g07_c04.avi 3 109 | Biking/v_Biking_g07_c05.avi 3 110 | Biking/v_Biking_g07_c06.avi 3 111 | CliffDiving/v_CliffDiving_g01_c01.avi 4 112 | CliffDiving/v_CliffDiving_g01_c02.avi 4 113 | CliffDiving/v_CliffDiving_g01_c03.avi 4 114 | CliffDiving/v_CliffDiving_g01_c04.avi 4 115 | CliffDiving/v_CliffDiving_g01_c05.avi 4 116 | CliffDiving/v_CliffDiving_g01_c06.avi 4 117 | CliffDiving/v_CliffDiving_g02_c01.avi 4 118 | CliffDiving/v_CliffDiving_g02_c02.avi 4 119 | CliffDiving/v_CliffDiving_g02_c03.avi 4 120 | CliffDiving/v_CliffDiving_g02_c04.avi 4 121 | CliffDiving/v_CliffDiving_g03_c01.avi 4 122 | CliffDiving/v_CliffDiving_g03_c02.avi 4 123 | CliffDiving/v_CliffDiving_g03_c03.avi 4 124 | CliffDiving/v_CliffDiving_g03_c04.avi 4 125 | CliffDiving/v_CliffDiving_g03_c05.avi 4 126 | CliffDiving/v_CliffDiving_g04_c01.avi 4 127 | CliffDiving/v_CliffDiving_g04_c02.avi 4 128 | CliffDiving/v_CliffDiving_g04_c03.avi 4 129 | CliffDiving/v_CliffDiving_g04_c04.avi 4 130 | CliffDiving/v_CliffDiving_g05_c01.avi 4 131 | CliffDiving/v_CliffDiving_g05_c02.avi 4 132 | CliffDiving/v_CliffDiving_g05_c03.avi 4 133 | CliffDiving/v_CliffDiving_g05_c04.avi 4 134 | CliffDiving/v_CliffDiving_g05_c05.avi 4 135 | CliffDiving/v_CliffDiving_g05_c06.avi 4 136 | CliffDiving/v_CliffDiving_g05_c07.avi 4 137 | CliffDiving/v_CliffDiving_g06_c01.avi 4 138 | CliffDiving/v_CliffDiving_g06_c02.avi 4 139 | CliffDiving/v_CliffDiving_g06_c03.avi 4 140 | CliffDiving/v_CliffDiving_g06_c04.avi 4 141 | CliffDiving/v_CliffDiving_g06_c05.avi 4 142 | CliffDiving/v_CliffDiving_g06_c06.avi 4 143 | CliffDiving/v_CliffDiving_g06_c07.avi 4 144 | CliffDiving/v_CliffDiving_g07_c01.avi 4 145 | CliffDiving/v_CliffDiving_g07_c02.avi 4 146 | CliffDiving/v_CliffDiving_g07_c03.avi 4 147 | CliffDiving/v_CliffDiving_g07_c04.avi 4 148 | CliffDiving/v_CliffDiving_g07_c05.avi 4 149 | CliffDiving/v_CliffDiving_g07_c06.avi 4 150 | CricketBowling/v_CricketBowling_g01_c01.avi 5 151 | CricketBowling/v_CricketBowling_g01_c02.avi 5 152 | CricketBowling/v_CricketBowling_g01_c03.avi 5 153 | 
CricketBowling/v_CricketBowling_g01_c04.avi 5 154 | CricketBowling/v_CricketBowling_g01_c05.avi 5 155 | CricketBowling/v_CricketBowling_g01_c06.avi 5 156 | CricketBowling/v_CricketBowling_g01_c07.avi 5 157 | CricketBowling/v_CricketBowling_g02_c01.avi 5 158 | CricketBowling/v_CricketBowling_g02_c02.avi 5 159 | CricketBowling/v_CricketBowling_g02_c03.avi 5 160 | CricketBowling/v_CricketBowling_g02_c04.avi 5 161 | CricketBowling/v_CricketBowling_g02_c05.avi 5 162 | CricketBowling/v_CricketBowling_g02_c06.avi 5 163 | CricketBowling/v_CricketBowling_g02_c07.avi 5 164 | CricketBowling/v_CricketBowling_g03_c01.avi 5 165 | CricketBowling/v_CricketBowling_g03_c02.avi 5 166 | CricketBowling/v_CricketBowling_g03_c03.avi 5 167 | CricketBowling/v_CricketBowling_g03_c04.avi 5 168 | CricketBowling/v_CricketBowling_g04_c01.avi 5 169 | CricketBowling/v_CricketBowling_g04_c02.avi 5 170 | CricketBowling/v_CricketBowling_g04_c03.avi 5 171 | CricketBowling/v_CricketBowling_g04_c04.avi 5 172 | CricketBowling/v_CricketBowling_g04_c05.avi 5 173 | CricketBowling/v_CricketBowling_g05_c01.avi 5 174 | CricketBowling/v_CricketBowling_g05_c02.avi 5 175 | CricketBowling/v_CricketBowling_g05_c03.avi 5 176 | CricketBowling/v_CricketBowling_g05_c04.avi 5 177 | CricketBowling/v_CricketBowling_g06_c01.avi 5 178 | CricketBowling/v_CricketBowling_g06_c02.avi 5 179 | CricketBowling/v_CricketBowling_g06_c03.avi 5 180 | CricketBowling/v_CricketBowling_g06_c04.avi 5 181 | CricketBowling/v_CricketBowling_g06_c05.avi 5 182 | CricketBowling/v_CricketBowling_g07_c01.avi 5 183 | CricketBowling/v_CricketBowling_g07_c02.avi 5 184 | CricketBowling/v_CricketBowling_g07_c03.avi 5 185 | CricketBowling/v_CricketBowling_g07_c04.avi 5 186 | Diving/v_Diving_g01_c01.avi 6 187 | Diving/v_Diving_g01_c02.avi 6 188 | Diving/v_Diving_g01_c03.avi 6 189 | Diving/v_Diving_g01_c04.avi 6 190 | Diving/v_Diving_g01_c05.avi 6 191 | Diving/v_Diving_g01_c06.avi 6 192 | Diving/v_Diving_g01_c07.avi 6 193 | Diving/v_Diving_g02_c01.avi 6 194 | Diving/v_Diving_g02_c02.avi 6 195 | Diving/v_Diving_g02_c03.avi 6 196 | Diving/v_Diving_g02_c04.avi 6 197 | Diving/v_Diving_g02_c05.avi 6 198 | Diving/v_Diving_g02_c06.avi 6 199 | Diving/v_Diving_g02_c07.avi 6 200 | Diving/v_Diving_g03_c01.avi 6 201 | Diving/v_Diving_g03_c02.avi 6 202 | Diving/v_Diving_g03_c03.avi 6 203 | Diving/v_Diving_g03_c04.avi 6 204 | Diving/v_Diving_g03_c05.avi 6 205 | Diving/v_Diving_g03_c06.avi 6 206 | Diving/v_Diving_g03_c07.avi 6 207 | Diving/v_Diving_g04_c01.avi 6 208 | Diving/v_Diving_g04_c02.avi 6 209 | Diving/v_Diving_g04_c03.avi 6 210 | Diving/v_Diving_g04_c04.avi 6 211 | Diving/v_Diving_g04_c05.avi 6 212 | Diving/v_Diving_g04_c06.avi 6 213 | Diving/v_Diving_g04_c07.avi 6 214 | Diving/v_Diving_g05_c01.avi 6 215 | Diving/v_Diving_g05_c02.avi 6 216 | Diving/v_Diving_g05_c03.avi 6 217 | Diving/v_Diving_g05_c04.avi 6 218 | Diving/v_Diving_g05_c05.avi 6 219 | Diving/v_Diving_g05_c06.avi 6 220 | Diving/v_Diving_g06_c01.avi 6 221 | Diving/v_Diving_g06_c02.avi 6 222 | Diving/v_Diving_g06_c03.avi 6 223 | Diving/v_Diving_g06_c04.avi 6 224 | Diving/v_Diving_g06_c05.avi 6 225 | Diving/v_Diving_g06_c06.avi 6 226 | Diving/v_Diving_g06_c07.avi 6 227 | Diving/v_Diving_g07_c01.avi 6 228 | Diving/v_Diving_g07_c02.avi 6 229 | Diving/v_Diving_g07_c03.avi 6 230 | Diving/v_Diving_g07_c04.avi 6 231 | Fencing/v_Fencing_g01_c01.avi 7 232 | Fencing/v_Fencing_g01_c02.avi 7 233 | Fencing/v_Fencing_g01_c03.avi 7 234 | Fencing/v_Fencing_g01_c04.avi 7 235 | Fencing/v_Fencing_g01_c05.avi 7 236 | 
Fencing/v_Fencing_g01_c06.avi 7 237 | Fencing/v_Fencing_g02_c01.avi 7 238 | Fencing/v_Fencing_g02_c02.avi 7 239 | Fencing/v_Fencing_g02_c03.avi 7 240 | Fencing/v_Fencing_g02_c04.avi 7 241 | Fencing/v_Fencing_g02_c05.avi 7 242 | Fencing/v_Fencing_g03_c01.avi 7 243 | Fencing/v_Fencing_g03_c02.avi 7 244 | Fencing/v_Fencing_g03_c03.avi 7 245 | Fencing/v_Fencing_g03_c04.avi 7 246 | Fencing/v_Fencing_g03_c05.avi 7 247 | Fencing/v_Fencing_g04_c01.avi 7 248 | Fencing/v_Fencing_g04_c02.avi 7 249 | Fencing/v_Fencing_g04_c03.avi 7 250 | Fencing/v_Fencing_g04_c04.avi 7 251 | Fencing/v_Fencing_g04_c05.avi 7 252 | Fencing/v_Fencing_g05_c01.avi 7 253 | Fencing/v_Fencing_g05_c02.avi 7 254 | Fencing/v_Fencing_g05_c03.avi 7 255 | Fencing/v_Fencing_g05_c04.avi 7 256 | Fencing/v_Fencing_g05_c05.avi 7 257 | Fencing/v_Fencing_g06_c01.avi 7 258 | Fencing/v_Fencing_g06_c02.avi 7 259 | Fencing/v_Fencing_g06_c03.avi 7 260 | Fencing/v_Fencing_g06_c04.avi 7 261 | Fencing/v_Fencing_g07_c01.avi 7 262 | Fencing/v_Fencing_g07_c02.avi 7 263 | Fencing/v_Fencing_g07_c03.avi 7 264 | Fencing/v_Fencing_g07_c04.avi 7 265 | FloorGymnastics/v_FloorGymnastics_g01_c01.avi 8 266 | FloorGymnastics/v_FloorGymnastics_g01_c02.avi 8 267 | FloorGymnastics/v_FloorGymnastics_g01_c03.avi 8 268 | FloorGymnastics/v_FloorGymnastics_g01_c04.avi 8 269 | FloorGymnastics/v_FloorGymnastics_g01_c05.avi 8 270 | FloorGymnastics/v_FloorGymnastics_g02_c01.avi 8 271 | FloorGymnastics/v_FloorGymnastics_g02_c02.avi 8 272 | FloorGymnastics/v_FloorGymnastics_g02_c03.avi 8 273 | FloorGymnastics/v_FloorGymnastics_g02_c04.avi 8 274 | FloorGymnastics/v_FloorGymnastics_g03_c01.avi 8 275 | FloorGymnastics/v_FloorGymnastics_g03_c02.avi 8 276 | FloorGymnastics/v_FloorGymnastics_g03_c03.avi 8 277 | FloorGymnastics/v_FloorGymnastics_g03_c04.avi 8 278 | FloorGymnastics/v_FloorGymnastics_g04_c01.avi 8 279 | FloorGymnastics/v_FloorGymnastics_g04_c02.avi 8 280 | FloorGymnastics/v_FloorGymnastics_g04_c03.avi 8 281 | FloorGymnastics/v_FloorGymnastics_g04_c04.avi 8 282 | FloorGymnastics/v_FloorGymnastics_g04_c05.avi 8 283 | FloorGymnastics/v_FloorGymnastics_g05_c01.avi 8 284 | FloorGymnastics/v_FloorGymnastics_g05_c02.avi 8 285 | FloorGymnastics/v_FloorGymnastics_g05_c03.avi 8 286 | FloorGymnastics/v_FloorGymnastics_g05_c04.avi 8 287 | FloorGymnastics/v_FloorGymnastics_g06_c01.avi 8 288 | FloorGymnastics/v_FloorGymnastics_g06_c02.avi 8 289 | FloorGymnastics/v_FloorGymnastics_g06_c03.avi 8 290 | FloorGymnastics/v_FloorGymnastics_g06_c04.avi 8 291 | FloorGymnastics/v_FloorGymnastics_g06_c05.avi 8 292 | FloorGymnastics/v_FloorGymnastics_g06_c06.avi 8 293 | FloorGymnastics/v_FloorGymnastics_g06_c07.avi 8 294 | FloorGymnastics/v_FloorGymnastics_g07_c01.avi 8 295 | FloorGymnastics/v_FloorGymnastics_g07_c02.avi 8 296 | FloorGymnastics/v_FloorGymnastics_g07_c03.avi 8 297 | FloorGymnastics/v_FloorGymnastics_g07_c04.avi 8 298 | FloorGymnastics/v_FloorGymnastics_g07_c05.avi 8 299 | FloorGymnastics/v_FloorGymnastics_g07_c06.avi 8 300 | FloorGymnastics/v_FloorGymnastics_g07_c07.avi 8 301 | GolfSwing/v_GolfSwing_g01_c01.avi 9 302 | GolfSwing/v_GolfSwing_g01_c02.avi 9 303 | GolfSwing/v_GolfSwing_g01_c03.avi 9 304 | GolfSwing/v_GolfSwing_g01_c04.avi 9 305 | GolfSwing/v_GolfSwing_g01_c05.avi 9 306 | GolfSwing/v_GolfSwing_g01_c06.avi 9 307 | GolfSwing/v_GolfSwing_g02_c01.avi 9 308 | GolfSwing/v_GolfSwing_g02_c02.avi 9 309 | GolfSwing/v_GolfSwing_g02_c03.avi 9 310 | GolfSwing/v_GolfSwing_g02_c04.avi 9 311 | GolfSwing/v_GolfSwing_g03_c01.avi 9 312 | GolfSwing/v_GolfSwing_g03_c02.avi 9 313 | 
GolfSwing/v_GolfSwing_g03_c03.avi 9 314 | GolfSwing/v_GolfSwing_g03_c04.avi 9 315 | GolfSwing/v_GolfSwing_g03_c05.avi 9 316 | GolfSwing/v_GolfSwing_g03_c06.avi 9 317 | GolfSwing/v_GolfSwing_g03_c07.avi 9 318 | GolfSwing/v_GolfSwing_g04_c01.avi 9 319 | GolfSwing/v_GolfSwing_g04_c02.avi 9 320 | GolfSwing/v_GolfSwing_g04_c03.avi 9 321 | GolfSwing/v_GolfSwing_g04_c04.avi 9 322 | GolfSwing/v_GolfSwing_g04_c05.avi 9 323 | GolfSwing/v_GolfSwing_g04_c06.avi 9 324 | GolfSwing/v_GolfSwing_g05_c01.avi 9 325 | GolfSwing/v_GolfSwing_g05_c02.avi 9 326 | GolfSwing/v_GolfSwing_g05_c03.avi 9 327 | GolfSwing/v_GolfSwing_g05_c04.avi 9 328 | GolfSwing/v_GolfSwing_g05_c05.avi 9 329 | GolfSwing/v_GolfSwing_g05_c06.avi 9 330 | GolfSwing/v_GolfSwing_g05_c07.avi 9 331 | GolfSwing/v_GolfSwing_g06_c01.avi 9 332 | GolfSwing/v_GolfSwing_g06_c02.avi 9 333 | GolfSwing/v_GolfSwing_g06_c03.avi 9 334 | GolfSwing/v_GolfSwing_g06_c04.avi 9 335 | GolfSwing/v_GolfSwing_g07_c01.avi 9 336 | GolfSwing/v_GolfSwing_g07_c02.avi 9 337 | GolfSwing/v_GolfSwing_g07_c03.avi 9 338 | GolfSwing/v_GolfSwing_g07_c04.avi 9 339 | GolfSwing/v_GolfSwing_g07_c05.avi 9 340 | HorseRiding/v_HorseRiding_g01_c01.avi 10 341 | HorseRiding/v_HorseRiding_g01_c02.avi 10 342 | HorseRiding/v_HorseRiding_g01_c03.avi 10 343 | HorseRiding/v_HorseRiding_g01_c04.avi 10 344 | HorseRiding/v_HorseRiding_g01_c05.avi 10 345 | HorseRiding/v_HorseRiding_g01_c06.avi 10 346 | HorseRiding/v_HorseRiding_g01_c07.avi 10 347 | HorseRiding/v_HorseRiding_g02_c01.avi 10 348 | HorseRiding/v_HorseRiding_g02_c02.avi 10 349 | HorseRiding/v_HorseRiding_g02_c03.avi 10 350 | HorseRiding/v_HorseRiding_g02_c04.avi 10 351 | HorseRiding/v_HorseRiding_g02_c05.avi 10 352 | HorseRiding/v_HorseRiding_g02_c06.avi 10 353 | HorseRiding/v_HorseRiding_g02_c07.avi 10 354 | HorseRiding/v_HorseRiding_g03_c01.avi 10 355 | HorseRiding/v_HorseRiding_g03_c02.avi 10 356 | HorseRiding/v_HorseRiding_g03_c03.avi 10 357 | HorseRiding/v_HorseRiding_g03_c04.avi 10 358 | HorseRiding/v_HorseRiding_g03_c05.avi 10 359 | HorseRiding/v_HorseRiding_g03_c06.avi 10 360 | HorseRiding/v_HorseRiding_g03_c07.avi 10 361 | HorseRiding/v_HorseRiding_g04_c01.avi 10 362 | HorseRiding/v_HorseRiding_g04_c02.avi 10 363 | HorseRiding/v_HorseRiding_g04_c03.avi 10 364 | HorseRiding/v_HorseRiding_g04_c04.avi 10 365 | HorseRiding/v_HorseRiding_g04_c05.avi 10 366 | HorseRiding/v_HorseRiding_g04_c06.avi 10 367 | HorseRiding/v_HorseRiding_g04_c07.avi 10 368 | HorseRiding/v_HorseRiding_g05_c01.avi 10 369 | HorseRiding/v_HorseRiding_g05_c02.avi 10 370 | HorseRiding/v_HorseRiding_g05_c03.avi 10 371 | HorseRiding/v_HorseRiding_g05_c04.avi 10 372 | HorseRiding/v_HorseRiding_g05_c05.avi 10 373 | HorseRiding/v_HorseRiding_g05_c06.avi 10 374 | HorseRiding/v_HorseRiding_g05_c07.avi 10 375 | HorseRiding/v_HorseRiding_g06_c01.avi 10 376 | HorseRiding/v_HorseRiding_g06_c02.avi 10 377 | HorseRiding/v_HorseRiding_g06_c03.avi 10 378 | HorseRiding/v_HorseRiding_g06_c04.avi 10 379 | HorseRiding/v_HorseRiding_g06_c05.avi 10 380 | HorseRiding/v_HorseRiding_g06_c06.avi 10 381 | HorseRiding/v_HorseRiding_g06_c07.avi 10 382 | HorseRiding/v_HorseRiding_g07_c01.avi 10 383 | HorseRiding/v_HorseRiding_g07_c02.avi 10 384 | HorseRiding/v_HorseRiding_g07_c03.avi 10 385 | HorseRiding/v_HorseRiding_g07_c04.avi 10 386 | HorseRiding/v_HorseRiding_g07_c05.avi 10 387 | HorseRiding/v_HorseRiding_g07_c06.avi 10 388 | HorseRiding/v_HorseRiding_g07_c07.avi 10 389 | IceDancing/v_IceDancing_g01_c01.avi 11 390 | IceDancing/v_IceDancing_g01_c02.avi 11 391 | 
IceDancing/v_IceDancing_g01_c03.avi 11 392 | IceDancing/v_IceDancing_g01_c04.avi 11 393 | IceDancing/v_IceDancing_g01_c05.avi 11 394 | IceDancing/v_IceDancing_g01_c06.avi 11 395 | IceDancing/v_IceDancing_g01_c07.avi 11 396 | IceDancing/v_IceDancing_g02_c01.avi 11 397 | IceDancing/v_IceDancing_g02_c02.avi 11 398 | IceDancing/v_IceDancing_g02_c03.avi 11 399 | IceDancing/v_IceDancing_g02_c04.avi 11 400 | IceDancing/v_IceDancing_g02_c05.avi 11 401 | IceDancing/v_IceDancing_g02_c06.avi 11 402 | IceDancing/v_IceDancing_g02_c07.avi 11 403 | IceDancing/v_IceDancing_g03_c01.avi 11 404 | IceDancing/v_IceDancing_g03_c02.avi 11 405 | IceDancing/v_IceDancing_g03_c03.avi 11 406 | IceDancing/v_IceDancing_g03_c04.avi 11 407 | IceDancing/v_IceDancing_g03_c05.avi 11 408 | IceDancing/v_IceDancing_g03_c06.avi 11 409 | IceDancing/v_IceDancing_g04_c01.avi 11 410 | IceDancing/v_IceDancing_g04_c02.avi 11 411 | IceDancing/v_IceDancing_g04_c03.avi 11 412 | IceDancing/v_IceDancing_g04_c04.avi 11 413 | IceDancing/v_IceDancing_g04_c05.avi 11 414 | IceDancing/v_IceDancing_g04_c06.avi 11 415 | IceDancing/v_IceDancing_g04_c07.avi 11 416 | IceDancing/v_IceDancing_g05_c01.avi 11 417 | IceDancing/v_IceDancing_g05_c02.avi 11 418 | IceDancing/v_IceDancing_g05_c03.avi 11 419 | IceDancing/v_IceDancing_g05_c04.avi 11 420 | IceDancing/v_IceDancing_g05_c05.avi 11 421 | IceDancing/v_IceDancing_g05_c06.avi 11 422 | IceDancing/v_IceDancing_g06_c01.avi 11 423 | IceDancing/v_IceDancing_g06_c02.avi 11 424 | IceDancing/v_IceDancing_g06_c03.avi 11 425 | IceDancing/v_IceDancing_g06_c04.avi 11 426 | IceDancing/v_IceDancing_g06_c05.avi 11 427 | IceDancing/v_IceDancing_g06_c06.avi 11 428 | IceDancing/v_IceDancing_g07_c01.avi 11 429 | IceDancing/v_IceDancing_g07_c02.avi 11 430 | IceDancing/v_IceDancing_g07_c03.avi 11 431 | IceDancing/v_IceDancing_g07_c04.avi 11 432 | IceDancing/v_IceDancing_g07_c05.avi 11 433 | IceDancing/v_IceDancing_g07_c06.avi 11 434 | IceDancing/v_IceDancing_g07_c07.avi 11 435 | LongJump/v_LongJump_g01_c01.avi 12 436 | LongJump/v_LongJump_g01_c02.avi 12 437 | LongJump/v_LongJump_g01_c03.avi 12 438 | LongJump/v_LongJump_g01_c04.avi 12 439 | LongJump/v_LongJump_g01_c05.avi 12 440 | LongJump/v_LongJump_g01_c06.avi 12 441 | LongJump/v_LongJump_g01_c07.avi 12 442 | LongJump/v_LongJump_g02_c01.avi 12 443 | LongJump/v_LongJump_g02_c02.avi 12 444 | LongJump/v_LongJump_g02_c03.avi 12 445 | LongJump/v_LongJump_g02_c04.avi 12 446 | LongJump/v_LongJump_g02_c05.avi 12 447 | LongJump/v_LongJump_g03_c01.avi 12 448 | LongJump/v_LongJump_g03_c02.avi 12 449 | LongJump/v_LongJump_g03_c03.avi 12 450 | LongJump/v_LongJump_g03_c04.avi 12 451 | LongJump/v_LongJump_g03_c05.avi 12 452 | LongJump/v_LongJump_g03_c06.avi 12 453 | LongJump/v_LongJump_g04_c01.avi 12 454 | LongJump/v_LongJump_g04_c02.avi 12 455 | LongJump/v_LongJump_g04_c03.avi 12 456 | LongJump/v_LongJump_g04_c04.avi 12 457 | LongJump/v_LongJump_g04_c05.avi 12 458 | LongJump/v_LongJump_g04_c06.avi 12 459 | LongJump/v_LongJump_g04_c07.avi 12 460 | LongJump/v_LongJump_g05_c01.avi 12 461 | LongJump/v_LongJump_g05_c02.avi 12 462 | LongJump/v_LongJump_g05_c03.avi 12 463 | LongJump/v_LongJump_g05_c04.avi 12 464 | LongJump/v_LongJump_g05_c05.avi 12 465 | LongJump/v_LongJump_g06_c01.avi 12 466 | LongJump/v_LongJump_g06_c02.avi 12 467 | LongJump/v_LongJump_g06_c03.avi 12 468 | LongJump/v_LongJump_g06_c04.avi 12 469 | LongJump/v_LongJump_g07_c01.avi 12 470 | LongJump/v_LongJump_g07_c02.avi 12 471 | LongJump/v_LongJump_g07_c03.avi 12 472 | LongJump/v_LongJump_g07_c04.avi 12 473 | 
LongJump/v_LongJump_g07_c05.avi 12 474 | PoleVault/v_PoleVault_g01_c01.avi 13 475 | PoleVault/v_PoleVault_g01_c02.avi 13 476 | PoleVault/v_PoleVault_g01_c03.avi 13 477 | PoleVault/v_PoleVault_g01_c04.avi 13 478 | PoleVault/v_PoleVault_g01_c05.avi 13 479 | PoleVault/v_PoleVault_g02_c01.avi 13 480 | PoleVault/v_PoleVault_g02_c02.avi 13 481 | PoleVault/v_PoleVault_g02_c03.avi 13 482 | PoleVault/v_PoleVault_g02_c04.avi 13 483 | PoleVault/v_PoleVault_g02_c05.avi 13 484 | PoleVault/v_PoleVault_g02_c06.avi 13 485 | PoleVault/v_PoleVault_g02_c07.avi 13 486 | PoleVault/v_PoleVault_g03_c01.avi 13 487 | PoleVault/v_PoleVault_g03_c02.avi 13 488 | PoleVault/v_PoleVault_g03_c03.avi 13 489 | PoleVault/v_PoleVault_g03_c04.avi 13 490 | PoleVault/v_PoleVault_g03_c05.avi 13 491 | PoleVault/v_PoleVault_g03_c06.avi 13 492 | PoleVault/v_PoleVault_g03_c07.avi 13 493 | PoleVault/v_PoleVault_g04_c01.avi 13 494 | PoleVault/v_PoleVault_g04_c02.avi 13 495 | PoleVault/v_PoleVault_g04_c03.avi 13 496 | PoleVault/v_PoleVault_g04_c04.avi 13 497 | PoleVault/v_PoleVault_g04_c05.avi 13 498 | PoleVault/v_PoleVault_g04_c06.avi 13 499 | PoleVault/v_PoleVault_g04_c07.avi 13 500 | PoleVault/v_PoleVault_g05_c01.avi 13 501 | PoleVault/v_PoleVault_g05_c02.avi 13 502 | PoleVault/v_PoleVault_g05_c03.avi 13 503 | PoleVault/v_PoleVault_g05_c04.avi 13 504 | PoleVault/v_PoleVault_g05_c05.avi 13 505 | PoleVault/v_PoleVault_g06_c01.avi 13 506 | PoleVault/v_PoleVault_g06_c02.avi 13 507 | PoleVault/v_PoleVault_g06_c03.avi 13 508 | PoleVault/v_PoleVault_g06_c04.avi 13 509 | PoleVault/v_PoleVault_g06_c05.avi 13 510 | PoleVault/v_PoleVault_g07_c01.avi 13 511 | PoleVault/v_PoleVault_g07_c02.avi 13 512 | PoleVault/v_PoleVault_g07_c03.avi 13 513 | PoleVault/v_PoleVault_g07_c04.avi 13 514 | RopeClimbing/v_RopeClimbing_g01_c01.avi 14 515 | RopeClimbing/v_RopeClimbing_g01_c02.avi 14 516 | RopeClimbing/v_RopeClimbing_g01_c03.avi 14 517 | RopeClimbing/v_RopeClimbing_g01_c04.avi 14 518 | RopeClimbing/v_RopeClimbing_g02_c02.avi 14 519 | RopeClimbing/v_RopeClimbing_g02_c03.avi 14 520 | RopeClimbing/v_RopeClimbing_g02_c04.avi 14 521 | RopeClimbing/v_RopeClimbing_g02_c05.avi 14 522 | RopeClimbing/v_RopeClimbing_g02_c06.avi 14 523 | RopeClimbing/v_RopeClimbing_g03_c01.avi 14 524 | RopeClimbing/v_RopeClimbing_g03_c02.avi 14 525 | RopeClimbing/v_RopeClimbing_g03_c03.avi 14 526 | RopeClimbing/v_RopeClimbing_g03_c04.avi 14 527 | RopeClimbing/v_RopeClimbing_g04_c01.avi 14 528 | RopeClimbing/v_RopeClimbing_g04_c02.avi 14 529 | RopeClimbing/v_RopeClimbing_g04_c03.avi 14 530 | RopeClimbing/v_RopeClimbing_g04_c04.avi 14 531 | RopeClimbing/v_RopeClimbing_g05_c01.avi 14 532 | RopeClimbing/v_RopeClimbing_g05_c02.avi 14 533 | RopeClimbing/v_RopeClimbing_g05_c03.avi 14 534 | RopeClimbing/v_RopeClimbing_g05_c04.avi 14 535 | RopeClimbing/v_RopeClimbing_g05_c05.avi 14 536 | RopeClimbing/v_RopeClimbing_g05_c06.avi 14 537 | RopeClimbing/v_RopeClimbing_g05_c07.avi 14 538 | RopeClimbing/v_RopeClimbing_g06_c01.avi 14 539 | RopeClimbing/v_RopeClimbing_g06_c02.avi 14 540 | RopeClimbing/v_RopeClimbing_g06_c03.avi 14 541 | RopeClimbing/v_RopeClimbing_g06_c04.avi 14 542 | RopeClimbing/v_RopeClimbing_g07_c01.avi 14 543 | RopeClimbing/v_RopeClimbing_g07_c02.avi 14 544 | RopeClimbing/v_RopeClimbing_g07_c03.avi 14 545 | RopeClimbing/v_RopeClimbing_g07_c04.avi 14 546 | RopeClimbing/v_RopeClimbing_g07_c05.avi 14 547 | SalsaSpin/v_SalsaSpin_g01_c01.avi 15 548 | SalsaSpin/v_SalsaSpin_g01_c02.avi 15 549 | SalsaSpin/v_SalsaSpin_g01_c03.avi 15 550 | SalsaSpin/v_SalsaSpin_g01_c04.avi 15 551 | 
SalsaSpin/v_SalsaSpin_g01_c05.avi 15 552 | SalsaSpin/v_SalsaSpin_g01_c06.avi 15 553 | SalsaSpin/v_SalsaSpin_g01_c07.avi 15 554 | SalsaSpin/v_SalsaSpin_g02_c01.avi 15 555 | SalsaSpin/v_SalsaSpin_g02_c02.avi 15 556 | SalsaSpin/v_SalsaSpin_g02_c03.avi 15 557 | SalsaSpin/v_SalsaSpin_g02_c04.avi 15 558 | SalsaSpin/v_SalsaSpin_g02_c05.avi 15 559 | SalsaSpin/v_SalsaSpin_g02_c06.avi 15 560 | SalsaSpin/v_SalsaSpin_g02_c07.avi 15 561 | SalsaSpin/v_SalsaSpin_g03_c01.avi 15 562 | SalsaSpin/v_SalsaSpin_g03_c02.avi 15 563 | SalsaSpin/v_SalsaSpin_g03_c03.avi 15 564 | SalsaSpin/v_SalsaSpin_g03_c04.avi 15 565 | SalsaSpin/v_SalsaSpin_g03_c05.avi 15 566 | SalsaSpin/v_SalsaSpin_g03_c06.avi 15 567 | SalsaSpin/v_SalsaSpin_g04_c01.avi 15 568 | SalsaSpin/v_SalsaSpin_g04_c02.avi 15 569 | SalsaSpin/v_SalsaSpin_g04_c03.avi 15 570 | SalsaSpin/v_SalsaSpin_g04_c04.avi 15 571 | SalsaSpin/v_SalsaSpin_g04_c05.avi 15 572 | SalsaSpin/v_SalsaSpin_g04_c06.avi 15 573 | SalsaSpin/v_SalsaSpin_g05_c01.avi 15 574 | SalsaSpin/v_SalsaSpin_g05_c02.avi 15 575 | SalsaSpin/v_SalsaSpin_g05_c03.avi 15 576 | SalsaSpin/v_SalsaSpin_g05_c04.avi 15 577 | SalsaSpin/v_SalsaSpin_g05_c05.avi 15 578 | SalsaSpin/v_SalsaSpin_g05_c06.avi 15 579 | SalsaSpin/v_SalsaSpin_g06_c01.avi 15 580 | SalsaSpin/v_SalsaSpin_g06_c02.avi 15 581 | SalsaSpin/v_SalsaSpin_g06_c03.avi 15 582 | SalsaSpin/v_SalsaSpin_g06_c04.avi 15 583 | SalsaSpin/v_SalsaSpin_g06_c05.avi 15 584 | SalsaSpin/v_SalsaSpin_g07_c01.avi 15 585 | SalsaSpin/v_SalsaSpin_g07_c02.avi 15 586 | SalsaSpin/v_SalsaSpin_g07_c03.avi 15 587 | SalsaSpin/v_SalsaSpin_g07_c04.avi 15 588 | SalsaSpin/v_SalsaSpin_g07_c05.avi 15 589 | SalsaSpin/v_SalsaSpin_g07_c06.avi 15 590 | SkateBoarding/v_SkateBoarding_g01_c01.avi 16 591 | SkateBoarding/v_SkateBoarding_g01_c02.avi 16 592 | SkateBoarding/v_SkateBoarding_g01_c03.avi 16 593 | SkateBoarding/v_SkateBoarding_g01_c04.avi 16 594 | SkateBoarding/v_SkateBoarding_g02_c01.avi 16 595 | SkateBoarding/v_SkateBoarding_g02_c02.avi 16 596 | SkateBoarding/v_SkateBoarding_g02_c03.avi 16 597 | SkateBoarding/v_SkateBoarding_g02_c04.avi 16 598 | SkateBoarding/v_SkateBoarding_g02_c05.avi 16 599 | SkateBoarding/v_SkateBoarding_g02_c06.avi 16 600 | SkateBoarding/v_SkateBoarding_g03_c01.avi 16 601 | SkateBoarding/v_SkateBoarding_g03_c02.avi 16 602 | SkateBoarding/v_SkateBoarding_g03_c03.avi 16 603 | SkateBoarding/v_SkateBoarding_g03_c04.avi 16 604 | SkateBoarding/v_SkateBoarding_g04_c01.avi 16 605 | SkateBoarding/v_SkateBoarding_g04_c02.avi 16 606 | SkateBoarding/v_SkateBoarding_g04_c03.avi 16 607 | SkateBoarding/v_SkateBoarding_g04_c04.avi 16 608 | SkateBoarding/v_SkateBoarding_g04_c05.avi 16 609 | SkateBoarding/v_SkateBoarding_g05_c01.avi 16 610 | SkateBoarding/v_SkateBoarding_g05_c02.avi 16 611 | SkateBoarding/v_SkateBoarding_g05_c03.avi 16 612 | SkateBoarding/v_SkateBoarding_g05_c04.avi 16 613 | SkateBoarding/v_SkateBoarding_g06_c01.avi 16 614 | SkateBoarding/v_SkateBoarding_g06_c02.avi 16 615 | SkateBoarding/v_SkateBoarding_g06_c03.avi 16 616 | SkateBoarding/v_SkateBoarding_g06_c04.avi 16 617 | SkateBoarding/v_SkateBoarding_g07_c01.avi 16 618 | SkateBoarding/v_SkateBoarding_g07_c02.avi 16 619 | SkateBoarding/v_SkateBoarding_g07_c03.avi 16 620 | SkateBoarding/v_SkateBoarding_g07_c04.avi 16 621 | SkateBoarding/v_SkateBoarding_g07_c05.avi 16 622 | Skiing/v_Skiing_g01_c01.avi 17 623 | Skiing/v_Skiing_g01_c02.avi 17 624 | Skiing/v_Skiing_g01_c03.avi 17 625 | Skiing/v_Skiing_g01_c04.avi 17 626 | Skiing/v_Skiing_g01_c05.avi 17 627 | Skiing/v_Skiing_g01_c06.avi 17 628 | 
Skiing/v_Skiing_g02_c01.avi 17 629 | Skiing/v_Skiing_g02_c02.avi 17 630 | Skiing/v_Skiing_g02_c03.avi 17 631 | Skiing/v_Skiing_g02_c04.avi 17 632 | Skiing/v_Skiing_g02_c05.avi 17 633 | Skiing/v_Skiing_g03_c01.avi 17 634 | Skiing/v_Skiing_g03_c02.avi 17 635 | Skiing/v_Skiing_g03_c03.avi 17 636 | Skiing/v_Skiing_g03_c04.avi 17 637 | Skiing/v_Skiing_g03_c05.avi 17 638 | Skiing/v_Skiing_g03_c06.avi 17 639 | Skiing/v_Skiing_g03_c07.avi 17 640 | Skiing/v_Skiing_g04_c01.avi 17 641 | Skiing/v_Skiing_g04_c02.avi 17 642 | Skiing/v_Skiing_g04_c03.avi 17 643 | Skiing/v_Skiing_g04_c04.avi 17 644 | Skiing/v_Skiing_g04_c05.avi 17 645 | Skiing/v_Skiing_g04_c06.avi 17 646 | Skiing/v_Skiing_g04_c07.avi 17 647 | Skiing/v_Skiing_g05_c01.avi 17 648 | Skiing/v_Skiing_g05_c02.avi 17 649 | Skiing/v_Skiing_g05_c03.avi 17 650 | Skiing/v_Skiing_g05_c04.avi 17 651 | Skiing/v_Skiing_g06_c01.avi 17 652 | Skiing/v_Skiing_g06_c02.avi 17 653 | Skiing/v_Skiing_g06_c03.avi 17 654 | Skiing/v_Skiing_g06_c04.avi 17 655 | Skiing/v_Skiing_g06_c05.avi 17 656 | Skiing/v_Skiing_g06_c06.avi 17 657 | Skiing/v_Skiing_g06_c07.avi 17 658 | Skiing/v_Skiing_g07_c01.avi 17 659 | Skiing/v_Skiing_g07_c02.avi 17 660 | Skiing/v_Skiing_g07_c03.avi 17 661 | Skiing/v_Skiing_g07_c04.avi 17 662 | Skijet/v_Skijet_g01_c01.avi 18 663 | Skijet/v_Skijet_g01_c02.avi 18 664 | Skijet/v_Skijet_g01_c03.avi 18 665 | Skijet/v_Skijet_g01_c04.avi 18 666 | Skijet/v_Skijet_g02_c01.avi 18 667 | Skijet/v_Skijet_g02_c02.avi 18 668 | Skijet/v_Skijet_g02_c03.avi 18 669 | Skijet/v_Skijet_g02_c04.avi 18 670 | Skijet/v_Skijet_g03_c01.avi 18 671 | Skijet/v_Skijet_g03_c02.avi 18 672 | Skijet/v_Skijet_g03_c03.avi 18 673 | Skijet/v_Skijet_g03_c04.avi 18 674 | Skijet/v_Skijet_g04_c01.avi 18 675 | Skijet/v_Skijet_g04_c02.avi 18 676 | Skijet/v_Skijet_g04_c03.avi 18 677 | Skijet/v_Skijet_g04_c04.avi 18 678 | Skijet/v_Skijet_g05_c01.avi 18 679 | Skijet/v_Skijet_g05_c02.avi 18 680 | Skijet/v_Skijet_g05_c03.avi 18 681 | Skijet/v_Skijet_g05_c04.avi 18 682 | Skijet/v_Skijet_g06_c01.avi 18 683 | Skijet/v_Skijet_g06_c02.avi 18 684 | Skijet/v_Skijet_g06_c03.avi 18 685 | Skijet/v_Skijet_g06_c04.avi 18 686 | Skijet/v_Skijet_g07_c01.avi 18 687 | Skijet/v_Skijet_g07_c02.avi 18 688 | Skijet/v_Skijet_g07_c03.avi 18 689 | Skijet/v_Skijet_g07_c04.avi 18 690 | SoccerJuggling/v_SoccerJuggling_g01_c01.avi 19 691 | SoccerJuggling/v_SoccerJuggling_g01_c02.avi 19 692 | SoccerJuggling/v_SoccerJuggling_g01_c03.avi 19 693 | SoccerJuggling/v_SoccerJuggling_g01_c04.avi 19 694 | SoccerJuggling/v_SoccerJuggling_g01_c05.avi 19 695 | SoccerJuggling/v_SoccerJuggling_g02_c01.avi 19 696 | SoccerJuggling/v_SoccerJuggling_g02_c02.avi 19 697 | SoccerJuggling/v_SoccerJuggling_g02_c03.avi 19 698 | SoccerJuggling/v_SoccerJuggling_g02_c04.avi 19 699 | SoccerJuggling/v_SoccerJuggling_g02_c05.avi 19 700 | SoccerJuggling/v_SoccerJuggling_g02_c06.avi 19 701 | SoccerJuggling/v_SoccerJuggling_g03_c01.avi 19 702 | SoccerJuggling/v_SoccerJuggling_g03_c02.avi 19 703 | SoccerJuggling/v_SoccerJuggling_g03_c03.avi 19 704 | SoccerJuggling/v_SoccerJuggling_g03_c04.avi 19 705 | SoccerJuggling/v_SoccerJuggling_g04_c01.avi 19 706 | SoccerJuggling/v_SoccerJuggling_g04_c02.avi 19 707 | SoccerJuggling/v_SoccerJuggling_g04_c03.avi 19 708 | SoccerJuggling/v_SoccerJuggling_g04_c04.avi 19 709 | SoccerJuggling/v_SoccerJuggling_g04_c05.avi 19 710 | SoccerJuggling/v_SoccerJuggling_g04_c06.avi 19 711 | SoccerJuggling/v_SoccerJuggling_g05_c01.avi 19 712 | SoccerJuggling/v_SoccerJuggling_g05_c02.avi 19 713 | 
SoccerJuggling/v_SoccerJuggling_g05_c03.avi 19 714 | SoccerJuggling/v_SoccerJuggling_g05_c04.avi 19 715 | SoccerJuggling/v_SoccerJuggling_g05_c05.avi 19 716 | SoccerJuggling/v_SoccerJuggling_g05_c06.avi 19 717 | SoccerJuggling/v_SoccerJuggling_g06_c01.avi 19 718 | SoccerJuggling/v_SoccerJuggling_g06_c02.avi 19 719 | SoccerJuggling/v_SoccerJuggling_g06_c03.avi 19 720 | SoccerJuggling/v_SoccerJuggling_g06_c04.avi 19 721 | SoccerJuggling/v_SoccerJuggling_g06_c05.avi 19 722 | SoccerJuggling/v_SoccerJuggling_g07_c01.avi 19 723 | SoccerJuggling/v_SoccerJuggling_g07_c02.avi 19 724 | SoccerJuggling/v_SoccerJuggling_g07_c03.avi 19 725 | SoccerJuggling/v_SoccerJuggling_g07_c04.avi 19 726 | SoccerJuggling/v_SoccerJuggling_g07_c05.avi 19 727 | SoccerJuggling/v_SoccerJuggling_g07_c06.avi 19 728 | SoccerJuggling/v_SoccerJuggling_g07_c07.avi 19 729 | Surfing/v_Surfing_g01_c01.avi 20 730 | Surfing/v_Surfing_g01_c02.avi 20 731 | Surfing/v_Surfing_g01_c03.avi 20 732 | Surfing/v_Surfing_g01_c04.avi 20 733 | Surfing/v_Surfing_g01_c05.avi 20 734 | Surfing/v_Surfing_g01_c06.avi 20 735 | Surfing/v_Surfing_g01_c07.avi 20 736 | Surfing/v_Surfing_g02_c01.avi 20 737 | Surfing/v_Surfing_g02_c02.avi 20 738 | Surfing/v_Surfing_g02_c03.avi 20 739 | Surfing/v_Surfing_g02_c04.avi 20 740 | Surfing/v_Surfing_g02_c05.avi 20 741 | Surfing/v_Surfing_g02_c06.avi 20 742 | Surfing/v_Surfing_g03_c01.avi 20 743 | Surfing/v_Surfing_g03_c02.avi 20 744 | Surfing/v_Surfing_g03_c03.avi 20 745 | Surfing/v_Surfing_g03_c04.avi 20 746 | Surfing/v_Surfing_g04_c01.avi 20 747 | Surfing/v_Surfing_g04_c02.avi 20 748 | Surfing/v_Surfing_g04_c03.avi 20 749 | Surfing/v_Surfing_g04_c04.avi 20 750 | Surfing/v_Surfing_g05_c01.avi 20 751 | Surfing/v_Surfing_g05_c02.avi 20 752 | Surfing/v_Surfing_g05_c03.avi 20 753 | Surfing/v_Surfing_g05_c04.avi 20 754 | Surfing/v_Surfing_g06_c01.avi 20 755 | Surfing/v_Surfing_g06_c02.avi 20 756 | Surfing/v_Surfing_g06_c03.avi 20 757 | Surfing/v_Surfing_g06_c04.avi 20 758 | Surfing/v_Surfing_g07_c01.avi 20 759 | Surfing/v_Surfing_g07_c02.avi 20 760 | Surfing/v_Surfing_g07_c03.avi 20 761 | Surfing/v_Surfing_g07_c04.avi 20 762 | TennisSwing/v_TennisSwing_g01_c01.avi 21 763 | TennisSwing/v_TennisSwing_g01_c02.avi 21 764 | TennisSwing/v_TennisSwing_g01_c03.avi 21 765 | TennisSwing/v_TennisSwing_g01_c04.avi 21 766 | TennisSwing/v_TennisSwing_g01_c05.avi 21 767 | TennisSwing/v_TennisSwing_g01_c06.avi 21 768 | TennisSwing/v_TennisSwing_g01_c07.avi 21 769 | TennisSwing/v_TennisSwing_g02_c01.avi 21 770 | TennisSwing/v_TennisSwing_g02_c02.avi 21 771 | TennisSwing/v_TennisSwing_g02_c03.avi 21 772 | TennisSwing/v_TennisSwing_g02_c04.avi 21 773 | TennisSwing/v_TennisSwing_g02_c05.avi 21 774 | TennisSwing/v_TennisSwing_g02_c06.avi 21 775 | TennisSwing/v_TennisSwing_g02_c07.avi 21 776 | TennisSwing/v_TennisSwing_g03_c01.avi 21 777 | TennisSwing/v_TennisSwing_g03_c02.avi 21 778 | TennisSwing/v_TennisSwing_g03_c03.avi 21 779 | TennisSwing/v_TennisSwing_g03_c04.avi 21 780 | TennisSwing/v_TennisSwing_g03_c05.avi 21 781 | TennisSwing/v_TennisSwing_g03_c06.avi 21 782 | TennisSwing/v_TennisSwing_g03_c07.avi 21 783 | TennisSwing/v_TennisSwing_g04_c01.avi 21 784 | TennisSwing/v_TennisSwing_g04_c02.avi 21 785 | TennisSwing/v_TennisSwing_g04_c03.avi 21 786 | TennisSwing/v_TennisSwing_g04_c04.avi 21 787 | TennisSwing/v_TennisSwing_g04_c05.avi 21 788 | TennisSwing/v_TennisSwing_g04_c06.avi 21 789 | TennisSwing/v_TennisSwing_g04_c07.avi 21 790 | TennisSwing/v_TennisSwing_g05_c01.avi 21 791 | TennisSwing/v_TennisSwing_g05_c02.avi 21 792 | 
TennisSwing/v_TennisSwing_g05_c03.avi 21 793 | TennisSwing/v_TennisSwing_g05_c04.avi 21 794 | TennisSwing/v_TennisSwing_g05_c05.avi 21 795 | TennisSwing/v_TennisSwing_g05_c06.avi 21 796 | TennisSwing/v_TennisSwing_g05_c07.avi 21 797 | TennisSwing/v_TennisSwing_g06_c01.avi 21 798 | TennisSwing/v_TennisSwing_g06_c02.avi 21 799 | TennisSwing/v_TennisSwing_g06_c03.avi 21 800 | TennisSwing/v_TennisSwing_g06_c04.avi 21 801 | TennisSwing/v_TennisSwing_g06_c05.avi 21 802 | TennisSwing/v_TennisSwing_g06_c06.avi 21 803 | TennisSwing/v_TennisSwing_g06_c07.avi 21 804 | TennisSwing/v_TennisSwing_g07_c01.avi 21 805 | TennisSwing/v_TennisSwing_g07_c02.avi 21 806 | TennisSwing/v_TennisSwing_g07_c03.avi 21 807 | TennisSwing/v_TennisSwing_g07_c04.avi 21 808 | TennisSwing/v_TennisSwing_g07_c05.avi 21 809 | TennisSwing/v_TennisSwing_g07_c06.avi 21 810 | TennisSwing/v_TennisSwing_g07_c07.avi 21 811 | TrampolineJumping/v_TrampolineJumping_g01_c01.avi 22 812 | TrampolineJumping/v_TrampolineJumping_g01_c02.avi 22 813 | TrampolineJumping/v_TrampolineJumping_g01_c03.avi 22 814 | TrampolineJumping/v_TrampolineJumping_g01_c04.avi 22 815 | TrampolineJumping/v_TrampolineJumping_g02_c01.avi 22 816 | TrampolineJumping/v_TrampolineJumping_g02_c02.avi 22 817 | TrampolineJumping/v_TrampolineJumping_g02_c03.avi 22 818 | TrampolineJumping/v_TrampolineJumping_g02_c04.avi 22 819 | TrampolineJumping/v_TrampolineJumping_g02_c05.avi 22 820 | TrampolineJumping/v_TrampolineJumping_g02_c06.avi 22 821 | TrampolineJumping/v_TrampolineJumping_g03_c01.avi 22 822 | TrampolineJumping/v_TrampolineJumping_g03_c02.avi 22 823 | TrampolineJumping/v_TrampolineJumping_g03_c03.avi 22 824 | TrampolineJumping/v_TrampolineJumping_g03_c04.avi 22 825 | TrampolineJumping/v_TrampolineJumping_g04_c01.avi 22 826 | TrampolineJumping/v_TrampolineJumping_g04_c02.avi 22 827 | TrampolineJumping/v_TrampolineJumping_g04_c03.avi 22 828 | TrampolineJumping/v_TrampolineJumping_g04_c04.avi 22 829 | TrampolineJumping/v_TrampolineJumping_g04_c05.avi 22 830 | TrampolineJumping/v_TrampolineJumping_g05_c01.avi 22 831 | TrampolineJumping/v_TrampolineJumping_g05_c02.avi 22 832 | TrampolineJumping/v_TrampolineJumping_g05_c03.avi 22 833 | TrampolineJumping/v_TrampolineJumping_g05_c04.avi 22 834 | TrampolineJumping/v_TrampolineJumping_g06_c01.avi 22 835 | TrampolineJumping/v_TrampolineJumping_g06_c02.avi 22 836 | TrampolineJumping/v_TrampolineJumping_g06_c03.avi 22 837 | TrampolineJumping/v_TrampolineJumping_g06_c04.avi 22 838 | TrampolineJumping/v_TrampolineJumping_g07_c01.avi 22 839 | TrampolineJumping/v_TrampolineJumping_g07_c02.avi 22 840 | TrampolineJumping/v_TrampolineJumping_g07_c03.avi 22 841 | TrampolineJumping/v_TrampolineJumping_g07_c04.avi 22 842 | TrampolineJumping/v_TrampolineJumping_g07_c05.avi 22 843 | VolleyballSpiking/v_VolleyballSpiking_g01_c01.avi 23 844 | VolleyballSpiking/v_VolleyballSpiking_g01_c02.avi 23 845 | VolleyballSpiking/v_VolleyballSpiking_g01_c03.avi 23 846 | VolleyballSpiking/v_VolleyballSpiking_g01_c04.avi 23 847 | VolleyballSpiking/v_VolleyballSpiking_g02_c01.avi 23 848 | VolleyballSpiking/v_VolleyballSpiking_g02_c02.avi 23 849 | VolleyballSpiking/v_VolleyballSpiking_g02_c03.avi 23 850 | VolleyballSpiking/v_VolleyballSpiking_g02_c04.avi 23 851 | VolleyballSpiking/v_VolleyballSpiking_g03_c01.avi 23 852 | VolleyballSpiking/v_VolleyballSpiking_g03_c02.avi 23 853 | VolleyballSpiking/v_VolleyballSpiking_g03_c03.avi 23 854 | VolleyballSpiking/v_VolleyballSpiking_g03_c04.avi 23 855 | VolleyballSpiking/v_VolleyballSpiking_g04_c01.avi 23 856 | 
VolleyballSpiking/v_VolleyballSpiking_g04_c02.avi 23 857 | VolleyballSpiking/v_VolleyballSpiking_g04_c03.avi 23 858 | VolleyballSpiking/v_VolleyballSpiking_g04_c04.avi 23 859 | VolleyballSpiking/v_VolleyballSpiking_g04_c05.avi 23 860 | VolleyballSpiking/v_VolleyballSpiking_g04_c06.avi 23 861 | VolleyballSpiking/v_VolleyballSpiking_g04_c07.avi 23 862 | VolleyballSpiking/v_VolleyballSpiking_g05_c01.avi 23 863 | VolleyballSpiking/v_VolleyballSpiking_g05_c02.avi 23 864 | VolleyballSpiking/v_VolleyballSpiking_g05_c03.avi 23 865 | VolleyballSpiking/v_VolleyballSpiking_g05_c04.avi 23 866 | VolleyballSpiking/v_VolleyballSpiking_g05_c05.avi 23 867 | VolleyballSpiking/v_VolleyballSpiking_g06_c01.avi 23 868 | VolleyballSpiking/v_VolleyballSpiking_g06_c02.avi 23 869 | VolleyballSpiking/v_VolleyballSpiking_g06_c03.avi 23 870 | VolleyballSpiking/v_VolleyballSpiking_g06_c04.avi 23 871 | VolleyballSpiking/v_VolleyballSpiking_g07_c01.avi 23 872 | VolleyballSpiking/v_VolleyballSpiking_g07_c02.avi 23 873 | VolleyballSpiking/v_VolleyballSpiking_g07_c03.avi 23 874 | VolleyballSpiking/v_VolleyballSpiking_g07_c04.avi 23 875 | VolleyballSpiking/v_VolleyballSpiking_g07_c05.avi 23 876 | VolleyballSpiking/v_VolleyballSpiking_g07_c06.avi 23 877 | VolleyballSpiking/v_VolleyballSpiking_g07_c07.avi 23 878 | WalkingWithDog/v_WalkingWithDog_g01_c01.avi 24 879 | WalkingWithDog/v_WalkingWithDog_g01_c02.avi 24 880 | WalkingWithDog/v_WalkingWithDog_g01_c03.avi 24 881 | WalkingWithDog/v_WalkingWithDog_g01_c04.avi 24 882 | WalkingWithDog/v_WalkingWithDog_g02_c01.avi 24 883 | WalkingWithDog/v_WalkingWithDog_g02_c02.avi 24 884 | WalkingWithDog/v_WalkingWithDog_g02_c03.avi 24 885 | WalkingWithDog/v_WalkingWithDog_g02_c04.avi 24 886 | WalkingWithDog/v_WalkingWithDog_g02_c05.avi 24 887 | WalkingWithDog/v_WalkingWithDog_g02_c06.avi 24 888 | WalkingWithDog/v_WalkingWithDog_g03_c01.avi 24 889 | WalkingWithDog/v_WalkingWithDog_g03_c02.avi 24 890 | WalkingWithDog/v_WalkingWithDog_g03_c03.avi 24 891 | WalkingWithDog/v_WalkingWithDog_g03_c04.avi 24 892 | WalkingWithDog/v_WalkingWithDog_g03_c05.avi 24 893 | WalkingWithDog/v_WalkingWithDog_g04_c01.avi 24 894 | WalkingWithDog/v_WalkingWithDog_g04_c02.avi 24 895 | WalkingWithDog/v_WalkingWithDog_g04_c03.avi 24 896 | WalkingWithDog/v_WalkingWithDog_g04_c04.avi 24 897 | WalkingWithDog/v_WalkingWithDog_g04_c05.avi 24 898 | WalkingWithDog/v_WalkingWithDog_g05_c01.avi 24 899 | WalkingWithDog/v_WalkingWithDog_g05_c02.avi 24 900 | WalkingWithDog/v_WalkingWithDog_g05_c03.avi 24 901 | WalkingWithDog/v_WalkingWithDog_g05_c04.avi 24 902 | WalkingWithDog/v_WalkingWithDog_g05_c05.avi 24 903 | WalkingWithDog/v_WalkingWithDog_g06_c01.avi 24 904 | WalkingWithDog/v_WalkingWithDog_g06_c02.avi 24 905 | WalkingWithDog/v_WalkingWithDog_g06_c03.avi 24 906 | WalkingWithDog/v_WalkingWithDog_g06_c04.avi 24 907 | WalkingWithDog/v_WalkingWithDog_g06_c05.avi 24 908 | WalkingWithDog/v_WalkingWithDog_g07_c01.avi 24 909 | WalkingWithDog/v_WalkingWithDog_g07_c02.avi 24 910 | WalkingWithDog/v_WalkingWithDog_g07_c03.avi 24 911 | WalkingWithDog/v_WalkingWithDog_g07_c04.avi 24 912 | WalkingWithDog/v_WalkingWithDog_g07_c05.avi 24 913 | WalkingWithDog/v_WalkingWithDog_g07_c06.avi 24 914 | --------------------------------------------------------------------------------
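The split lists above use a simple two-column text format: a relative video path followed by a 1-based class index that matches `dataset/class_list.txt`. As a minimal sketch (not part of this repository), the snippet below shows how such a list could be parsed; the file name `dataset/vallist.txt` and the helper name `load_split_list` are assumptions for illustration only.

```
# Minimal sketch (assumption, not repository code) for reading a split list
# such as dataset/vallist.txt. Each non-empty line holds a relative video
# path and a 1-based class index matching dataset/class_list.txt.

def load_split_list(path):
    """Return a list of (video_path, class_id) tuples from a split file."""
    entries = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            video_path, class_id = line.rsplit(' ', 1)
            entries.append((video_path, int(class_id)))
    return entries


if __name__ == '__main__':
    # Hypothetical usage: count how many clips each class contributes
    # to the validation split.
    from collections import Counter

    entries = load_split_list('dataset/vallist.txt')
    counts = Counter(class_id for _, class_id in entries)
    print('%d clips across %d classes' % (len(entries), len(counts)))
```

Splitting on the last space (`rsplit`) keeps the path intact even if a directory name were ever to contain spaces, which is why it is used here instead of a plain `split`.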