├── .gitignore ├── LICENSE ├── README.md ├── chairs_split.txt ├── checkpoints ├── eighth │ ├── scv-chairs.pth │ └── scv-things.pth └── quarter │ ├── scv-chairs.pth │ ├── scv-kitti.pth │ ├── scv-sintel.pth │ └── scv-things.pth ├── core ├── __init__.py ├── compute_sparse_correlation.py ├── datasets.py ├── extractor.py ├── knn.py ├── sparsenet.py ├── update.py └── utils │ ├── __init__.py │ ├── augmentor.py │ ├── flow_viz.py │ ├── frame_utils.py │ └── utils.py ├── evaluate.py ├── evaluate_vis.py ├── scv.png ├── train.py └── train.sh /.gitignore: -------------------------------------------------------------------------------- 1 | *__pycache__ 2 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE 2 | Version 2, December 2004 3 | 4 | Copyright (C) 2004 Sam Hocevar 5 | 6 | Everyone is permitted to copy and distribute verbatim or modified 7 | copies of this license document, and changing it is allowed as long 8 | as the name is changed. 9 | 10 | DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE 11 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 12 | 13 | 0. You just DO WHAT THE FUCK YOU WANT TO. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Learning Optical Flow from a Few Matches 2 | This repository contains the source code for our paper: 3 | 4 | [Learning Optical Flow from a Few Matches](https://arxiv.org/abs/2104.02166)
5 | CVPR 2021
6 | **Shihao Jiang**, Yao Lu, Hongdong Li, Richard Hartley
7 | Australian National University (ANU)<br/>
8 | 9 | 10 | 11 | ## Requirements 12 | The code has been tested with PyTorch 1.6 and Cuda 10.1. 13 | ```Shell 14 | conda create --name scv 15 | conda activate scv 16 | conda install pytorch=1.6.0 torchvision=0.7.0 cudatoolkit=10.1 matplotlib tensorboard scipy opencv -c pytorch 17 | pip install faiss-gpu 18 | ``` 19 | 20 | ## Required Data 21 | To evaluate/train SCV, you will need to download the required datasets. 22 | * [FlyingChairs](https://lmb.informatik.uni-freiburg.de/resources/datasets/FlyingChairs.en.html#flyingchairs) 23 | * [FlyingThings3D](https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html) 24 | * [Sintel](http://sintel.is.tue.mpg.de/) 25 | * [KITTI](http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=flow) 26 | * [HD1K](http://hci-benchmark.iwr.uni-heidelberg.de/) (optional) 27 | 28 | 29 | By default `datasets.py` will search for the datasets in these locations. You can create symbolic links to wherever the datasets were downloaded in the `datasets` folder 30 | 31 | ```Shell 32 | ├── datasets 33 | ├── Sintel 34 | ├── test 35 | ├── training 36 | ├── KITTI 37 | ├── testing 38 | ├── training 39 | ├── devkit 40 | ├── FlyingChairs_release 41 | ├── data 42 | ├── FlyingThings3D 43 | ├── frames_cleanpass 44 | ├── frames_finalpass 45 | ├── optical_flow 46 | ``` 47 | 48 | ## Evaluation 49 | You can evaluate a trained model using `evaluate.py` 50 | ```Shell 51 | python evaluate.py --model=checkpoints/quarter/scv-chairs.pth --dataset=chairs 52 | ``` 53 | 54 | ## Training 55 | We used the following training schedule in our paper (2 GPUs). 56 | ```Shell 57 | ./train.sh 58 | ``` 59 | 60 | ## License 61 | WTFPL. See [LICENSE](LICENSE) file. 62 | 63 | ## Acknowledgement 64 | The overall code framework is adapted from [RAFT](https://github.com/princeton-vl/RAFT). We 65 | thank the authors for the contribution. 
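## Quick Inference Check
The sketch below is a minimal sanity check and is not one of the repository's scripts: it loads a released checkpoint into the quarter-resolution `SparseNet` and runs a single forward pass. It assumes a CUDA GPU with `faiss-gpu` installed, `core/` on `PYTHONPATH`, and RAFT-style checkpoint keys (possibly prefixed with `module.`); `num_k=32` and `iters=12` are illustrative values only.
```Python
import argparse
import torch

from sparsenet import SparseNet  # core/sparsenet.py

# Illustrative settings; num_k and mixed_precision are the two attributes the model reads.
args = argparse.Namespace(num_k=32, mixed_precision=False)
model = SparseNet(args).cuda().eval()

# Checkpoints may carry a DataParallel "module." prefix (RAFT-style); strip it if present.
state = torch.load('checkpoints/quarter/scv-chairs.pth', map_location='cuda')
state = {k[len('module.'):] if k.startswith('module.') else k: v for k, v in state.items()}
model.load_state_dict(state)

# Dummy image pair in [0, 255]; height and width should be divisible by the 4x model stride.
image1 = torch.rand(1, 3, 368, 496, device='cuda') * 255.0
image2 = torch.rand(1, 3, 368, 496, device='cuda') * 255.0

with torch.no_grad():
    flow = model(image1, image2, iters=12, test_mode=True)  # [1, 2, 368, 496]
print(flow.shape)
```
For benchmark numbers, use `evaluate.py` as shown above; this snippet only verifies that the checkpoint loads and the forward pass runs.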
66 | -------------------------------------------------------------------------------- /checkpoints/eighth/scv-chairs.pth: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zacjiang/SCV/9f809910e17125701b28dc1054fbc7648b801957/checkpoints/eighth/scv-chairs.pth -------------------------------------------------------------------------------- /checkpoints/eighth/scv-things.pth: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zacjiang/SCV/9f809910e17125701b28dc1054fbc7648b801957/checkpoints/eighth/scv-things.pth -------------------------------------------------------------------------------- /checkpoints/quarter/scv-chairs.pth: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zacjiang/SCV/9f809910e17125701b28dc1054fbc7648b801957/checkpoints/quarter/scv-chairs.pth -------------------------------------------------------------------------------- /checkpoints/quarter/scv-kitti.pth: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zacjiang/SCV/9f809910e17125701b28dc1054fbc7648b801957/checkpoints/quarter/scv-kitti.pth -------------------------------------------------------------------------------- /checkpoints/quarter/scv-sintel.pth: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zacjiang/SCV/9f809910e17125701b28dc1054fbc7648b801957/checkpoints/quarter/scv-sintel.pth -------------------------------------------------------------------------------- /checkpoints/quarter/scv-things.pth: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zacjiang/SCV/9f809910e17125701b28dc1054fbc7648b801957/checkpoints/quarter/scv-things.pth -------------------------------------------------------------------------------- /core/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zacjiang/SCV/9f809910e17125701b28dc1054fbc7648b801957/core/__init__.py -------------------------------------------------------------------------------- /core/compute_sparse_correlation.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from knn import knn_faiss_raw 3 | from utils.utils import coords_grid, coords_grid_y_first 4 | 5 | 6 | def normalize_coords(coords, H, W): 7 | """ Normalize coordinates based on feature map shape. coords: [B, 2, N]""" 8 | one = coords.new_tensor(1) 9 | size = torch.stack([one*W, one*H])[None] 10 | center = size / 2 11 | scaling = size.max(1, keepdim=True).values * 0.5 12 | return (coords - center[:, :, None]) / scaling[:, :, None] 13 | 14 | 15 | def compute_sparse_corr_init(fmap1, fmap2, k=32): 16 | """ 17 | Compute a cost volume containing the k-largest hypotheses for each pixel. 
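    fmap1, fmap2: feature maps of shape [B, C, H, W]; the k nearest neighbours per pixel are found with faiss.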
18 | Output: corr_mink 19 | """ 20 | B, C, H1, W1 = fmap1.shape 21 | H2, W2 = fmap2.shape[2:] 22 | N = H1 * W1 23 | 24 | fmap1, fmap2 = fmap1.view(B, C, -1), fmap2.view(B, C, -1) 25 | 26 | with torch.no_grad(): 27 | _, indices = knn_faiss_raw(fmap1, fmap2, k) # [B, k, H1*W1] 28 | 29 | indices_coord = indices.unsqueeze(1).expand(-1, 2, -1, -1) # [B, 2, k, H1*W1] 30 | coords0 = coords_grid_y_first(B, H2, W2).view(B, 2, 1, -1).expand(-1, -1, k, -1).to(fmap1.device) # [B, 2, k, H1*W1] 31 | coords1 = coords0.gather(3, indices_coord) # [B, 2, k, H1*W1] 32 | coords1 = coords1 - coords0 33 | 34 | # Append batch index 35 | batch_index = torch.arange(B).view(B, 1, 1, 1).expand(-1, -1, k, N).type_as(coords1) 36 | 37 | # Gather by indices from map2 and compute correlation volume 38 | fmap2 = fmap2.gather(2, indices.view(B, 1, -1).expand(-1, C, -1)).view(B, C, k, N) 39 | me_corr = torch.einsum('bcn,bckn->bkn', fmap1, fmap2).contiguous() / torch.sqrt(torch.tensor(C).float()) # [B, k, H1*W1] 40 | 41 | return me_corr, coords0, coords1, batch_index # coords: [B, 2, k, H1*W1] 42 | 43 | 44 | if __name__ == "__main__": 45 | torch.manual_seed(0) 46 | 47 | for _ in range(100): 48 | fmap1 = torch.randn(8, 256, 92, 124).cuda() 49 | fmap2 = torch.randn(8, 256, 92, 124).cuda() 50 | corr_me = compute_sparse_corr_init(fmap1, fmap2, k=16) 51 | 52 | # corr_dense = corr(fmap1, fmap2) 53 | # corr_max = torch.max(corr_dense, dim=3) 54 | 55 | -------------------------------------------------------------------------------- /core/datasets.py: -------------------------------------------------------------------------------- 1 | # Data loading based on https://github.com/NVIDIA/flownet2-pytorch 2 | 3 | import numpy as np 4 | import torch 5 | import torch.utils.data as data 6 | import torch.nn.functional as F 7 | 8 | import os 9 | import math 10 | import random 11 | from glob import glob 12 | import os.path as osp 13 | 14 | from utils import frame_utils 15 | from utils.augmentor import FlowAugmentor, SparseFlowAugmentor 16 | 17 | 18 | class FlowDataset(data.Dataset): 19 | def __init__(self, aug_params=None, sparse=False): 20 | self.augmentor = None 21 | self.sparse = sparse 22 | if aug_params is not None: 23 | if sparse: 24 | self.augmentor = SparseFlowAugmentor(**aug_params) 25 | else: 26 | self.augmentor = FlowAugmentor(**aug_params) 27 | 28 | self.is_test = False 29 | self.init_seed = False 30 | self.flow_list = [] 31 | self.image_list = [] 32 | self.extra_info = [] 33 | 34 | def __getitem__(self, index): 35 | 36 | if self.is_test: 37 | img1 = frame_utils.read_gen(self.image_list[index][0]) 38 | img2 = frame_utils.read_gen(self.image_list[index][1]) 39 | img1 = np.array(img1).astype(np.uint8)[..., :3] 40 | img2 = np.array(img2).astype(np.uint8)[..., :3] 41 | img1 = torch.from_numpy(img1).permute(2, 0, 1).float() 42 | img2 = torch.from_numpy(img2).permute(2, 0, 1).float() 43 | return img1, img2, self.extra_info[index] 44 | 45 | if not self.init_seed: 46 | worker_info = torch.utils.data.get_worker_info() 47 | if worker_info is not None: 48 | torch.manual_seed(worker_info.id) 49 | np.random.seed(worker_info.id) 50 | random.seed(worker_info.id) 51 | self.init_seed = True 52 | 53 | index = index % len(self.image_list) 54 | valid = None 55 | if self.sparse: 56 | flow, valid = frame_utils.readFlowKITTI(self.flow_list[index]) 57 | else: 58 | flow = frame_utils.read_gen(self.flow_list[index]) 59 | 60 | img1 = frame_utils.read_gen(self.image_list[index][0]) 61 | img2 = frame_utils.read_gen(self.image_list[index][1]) 62 | 63 | flow = 
np.array(flow).astype(np.float32) 64 | img1 = np.array(img1).astype(np.uint8) 65 | img2 = np.array(img2).astype(np.uint8) 66 | 67 | # grayscale images 68 | if len(img1.shape) == 2: 69 | img1 = np.tile(img1[...,None], (1, 1, 3)) 70 | img2 = np.tile(img2[...,None], (1, 1, 3)) 71 | else: 72 | img1 = img1[..., :3] 73 | img2 = img2[..., :3] 74 | 75 | if self.augmentor is not None: 76 | if self.sparse: 77 | img1, img2, flow, valid = self.augmentor(img1, img2, flow, valid) 78 | else: 79 | img1, img2, flow = self.augmentor(img1, img2, flow) 80 | 81 | img1 = torch.from_numpy(img1).permute(2, 0, 1).float() 82 | img2 = torch.from_numpy(img2).permute(2, 0, 1).float() 83 | flow = torch.from_numpy(flow).permute(2, 0, 1).float() 84 | 85 | if valid is not None: 86 | valid = torch.from_numpy(valid) 87 | else: 88 | valid = (flow[0].abs() < 1000) & (flow[1].abs() < 1000) 89 | 90 | return img1, img2, flow, valid.float()#, self.extra_info[index] 91 | 92 | def __rmul__(self, v): 93 | self.flow_list = v * self.flow_list 94 | self.image_list = v * self.image_list 95 | return self 96 | 97 | def __len__(self): 98 | return len(self.image_list) 99 | 100 | 101 | class MpiSintel(FlowDataset): 102 | def __init__(self, aug_params=None, split='training', root='datasets/Sintel', dstype='clean'): 103 | super(MpiSintel, self).__init__(aug_params) 104 | flow_root = osp.join(root, split, 'flow') 105 | image_root = osp.join(root, split, dstype) 106 | 107 | if split == 'test': 108 | self.is_test = True 109 | 110 | for scene in os.listdir(image_root): 111 | image_list = sorted(glob(osp.join(image_root, scene, '*.png'))) 112 | for i in range(len(image_list)-1): 113 | self.image_list += [ [image_list[i], image_list[i+1]] ] 114 | self.extra_info += [ (scene, i) ] # scene and frame_id 115 | 116 | if split != 'test': 117 | self.flow_list += sorted(glob(osp.join(flow_root, scene, '*.flo'))) 118 | 119 | 120 | class FlyingChairs(FlowDataset): 121 | def __init__(self, aug_params=None, split='training', root='datasets/FlyingChairs_release/data'): 122 | super(FlyingChairs, self).__init__(aug_params) 123 | 124 | images = sorted(glob(osp.join(root, '*.ppm'))) 125 | flows = sorted(glob(osp.join(root, '*.flo'))) 126 | assert (len(images)//2 == len(flows)) 127 | 128 | split_list = np.loadtxt('chairs_split.txt', dtype=np.int32) 129 | for i in range(len(flows)): 130 | xid = split_list[i] 131 | if (split=='training' and xid==1) or (split=='validation' and xid==2): 132 | self.flow_list += [ flows[i] ] 133 | self.image_list += [ [images[2*i], images[2*i+1]] ] 134 | 135 | 136 | class FlyingThings3D(FlowDataset): 137 | def __init__(self, aug_params=None, root='datasets/FlyingThings3D', dstype='frames_cleanpass'): 138 | super(FlyingThings3D, self).__init__(aug_params) 139 | 140 | for cam in ['left']: 141 | for direction in ['into_future', 'into_past']: 142 | image_dirs = sorted(glob(osp.join(root, dstype, 'TRAIN/*/*'))) 143 | image_dirs = sorted([osp.join(f, cam) for f in image_dirs]) 144 | 145 | flow_dirs = sorted(glob(osp.join(root, 'optical_flow/TRAIN/*/*'))) 146 | flow_dirs = sorted([osp.join(f, direction, cam) for f in flow_dirs]) 147 | 148 | for idir, fdir in zip(image_dirs, flow_dirs): 149 | images = sorted(glob(osp.join(idir, '*.png')) ) 150 | flows = sorted(glob(osp.join(fdir, '*.pfm')) ) 151 | for i in range(len(flows)-1): 152 | if direction == 'into_future': 153 | self.image_list += [ [images[i], images[i+1]] ] 154 | self.flow_list += [ flows[i] ] 155 | elif direction == 'into_past': 156 | self.image_list += [ [images[i+1], images[i]] ] 157 | 
self.flow_list += [ flows[i+1] ] 158 | 159 | 160 | class KITTI(FlowDataset): 161 | def __init__(self, aug_params=None, split='training', root='datasets/KITTI'): 162 | super(KITTI, self).__init__(aug_params, sparse=True) 163 | if split == 'testing': 164 | self.is_test = True 165 | 166 | root = osp.join(root, split) 167 | images1 = sorted(glob(osp.join(root, 'image_2/*_10.png'))) 168 | images2 = sorted(glob(osp.join(root, 'image_2/*_11.png'))) 169 | 170 | for img1, img2 in zip(images1, images2): 171 | frame_id = img1.split('/')[-1] 172 | self.extra_info += [ [frame_id] ] 173 | self.image_list += [ [img1, img2] ] 174 | 175 | if split == 'training': 176 | self.flow_list = sorted(glob(osp.join(root, 'flow_occ/*_10.png'))) 177 | 178 | 179 | class HD1K(FlowDataset): 180 | def __init__(self, aug_params=None, root='datasets/HD1k'): 181 | super(HD1K, self).__init__(aug_params, sparse=True) 182 | 183 | seq_ix = 0 184 | while 1: 185 | flows = sorted(glob(os.path.join(root, 'hd1k_flow_gt', 'flow_occ/%06d_*.png' % seq_ix))) 186 | images = sorted(glob(os.path.join(root, 'hd1k_input', 'image_2/%06d_*.png' % seq_ix))) 187 | 188 | if len(flows) == 0: 189 | break 190 | 191 | for i in range(len(flows)-1): 192 | self.flow_list += [flows[i]] 193 | self.image_list += [ [images[i], images[i+1]] ] 194 | 195 | seq_ix += 1 196 | 197 | 198 | def fetch_dataloader(args, TRAIN_DS='C+T+K+S+H'): 199 | """ Create the data loader for the corresponding training set """ 200 | 201 | if args.stage == 'chairs': 202 | aug_params = {'crop_size': args.image_size, 'min_scale': -0.1, 'max_scale': 1.0, 'do_flip': True} 203 | train_dataset = FlyingChairs(aug_params, split='training') 204 | 205 | elif args.stage == 'things': 206 | aug_params = {'crop_size': args.image_size, 'min_scale': -0.4, 'max_scale': 0.8, 'do_flip': True} 207 | clean_dataset = FlyingThings3D(aug_params, dstype='frames_cleanpass') 208 | final_dataset = FlyingThings3D(aug_params, dstype='frames_finalpass') 209 | train_dataset = clean_dataset + final_dataset 210 | 211 | elif args.stage == 'sintel': 212 | aug_params = {'crop_size': args.image_size, 'min_scale': -0.2, 'max_scale': 0.6, 'do_flip': True} 213 | things = FlyingThings3D(aug_params, dstype='frames_cleanpass') 214 | sintel_clean = MpiSintel(aug_params, split='training', dstype='clean') 215 | sintel_final = MpiSintel(aug_params, split='training', dstype='final') 216 | 217 | if TRAIN_DS == 'C+T+K+S+H': 218 | kitti = KITTI({'crop_size': args.image_size, 'min_scale': -0.3, 'max_scale': 0.5, 'do_flip': True}) 219 | hd1k = HD1K({'crop_size': args.image_size, 'min_scale': -0.5, 'max_scale': 0.2, 'do_flip': True}) 220 | train_dataset = 100*sintel_clean + 100*sintel_final + 200*kitti + 5*hd1k + things 221 | 222 | elif TRAIN_DS == 'C+T+K/S': 223 | train_dataset = 100*sintel_clean + 100*sintel_final + things 224 | 225 | elif args.stage == 'kitti': 226 | aug_params = {'crop_size': args.image_size, 'min_scale': -0.2, 'max_scale': 0.4, 'do_flip': False} 227 | train_dataset = KITTI(aug_params, split='training') 228 | 229 | train_loader = data.DataLoader(train_dataset, batch_size=args.batch_size, 230 | pin_memory=True, shuffle=True, num_workers=8, drop_last=True) 231 | 232 | print('Training with %d image pairs' % len(train_dataset)) 233 | return train_loader 234 | 235 | -------------------------------------------------------------------------------- /core/extractor.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.functional as F 
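# Encoders assembled from residual blocks: BasicEncoder downsamples to 1/8 resolution, BasicEncoderQuarter to 1/4.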
4 | 5 | 6 | class ResidualBlock(nn.Module): 7 | def __init__(self, in_planes, planes, norm_fn='group', stride=1): 8 | super(ResidualBlock, self).__init__() 9 | 10 | self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, padding=1, stride=stride) 11 | self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1) 12 | self.relu = nn.ReLU(inplace=True) 13 | 14 | num_groups = planes // 8 15 | 16 | if norm_fn == 'group': 17 | self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) 18 | self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) 19 | # if not stride == 1: 20 | self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) 21 | 22 | elif norm_fn == 'batch': 23 | self.norm1 = nn.BatchNorm2d(planes) 24 | self.norm2 = nn.BatchNorm2d(planes) 25 | # if not stride == 1: 26 | self.norm3 = nn.BatchNorm2d(planes) 27 | 28 | elif norm_fn == 'instance': 29 | self.norm1 = nn.InstanceNorm2d(planes) 30 | self.norm2 = nn.InstanceNorm2d(planes) 31 | # if not stride == 1: 32 | self.norm3 = nn.InstanceNorm2d(planes) 33 | 34 | elif norm_fn == 'none': 35 | self.norm1 = nn.Sequential() 36 | self.norm2 = nn.Sequential() 37 | # if not stride == 1: 38 | self.norm3 = nn.Sequential() 39 | 40 | # if stride == 1: 41 | # self.downsample = None 42 | # 43 | # else: 44 | self.downsample = nn.Sequential( 45 | nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm3) 46 | 47 | 48 | def forward(self, x): 49 | y = x 50 | y = self.relu(self.norm1(self.conv1(y))) 51 | y = self.relu(self.norm2(self.conv2(y))) 52 | 53 | if self.downsample is not None: 54 | x = self.downsample(x) 55 | 56 | return self.relu(x+y) 57 | 58 | 59 | class BottleneckBlock(nn.Module): 60 | def __init__(self, in_planes, planes, norm_fn='group', stride=1): 61 | super(BottleneckBlock, self).__init__() 62 | 63 | self.conv1 = nn.Conv2d(in_planes, planes//4, kernel_size=1, padding=0) 64 | self.conv2 = nn.Conv2d(planes//4, planes//4, kernel_size=3, padding=1, stride=stride) 65 | self.conv3 = nn.Conv2d(planes//4, planes, kernel_size=1, padding=0) 66 | self.relu = nn.ReLU(inplace=True) 67 | 68 | num_groups = planes // 8 69 | 70 | if norm_fn == 'group': 71 | self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4) 72 | self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4) 73 | self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) 74 | if not stride == 1: 75 | self.norm4 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) 76 | 77 | elif norm_fn == 'batch': 78 | self.norm1 = nn.BatchNorm2d(planes//4) 79 | self.norm2 = nn.BatchNorm2d(planes//4) 80 | self.norm3 = nn.BatchNorm2d(planes) 81 | if not stride == 1: 82 | self.norm4 = nn.BatchNorm2d(planes) 83 | 84 | elif norm_fn == 'instance': 85 | self.norm1 = nn.InstanceNorm2d(planes//4) 86 | self.norm2 = nn.InstanceNorm2d(planes//4) 87 | self.norm3 = nn.InstanceNorm2d(planes) 88 | if not stride == 1: 89 | self.norm4 = nn.InstanceNorm2d(planes) 90 | 91 | elif norm_fn == 'none': 92 | self.norm1 = nn.Sequential() 93 | self.norm2 = nn.Sequential() 94 | self.norm3 = nn.Sequential() 95 | if not stride == 1: 96 | self.norm4 = nn.Sequential() 97 | 98 | if stride == 1: 99 | self.downsample = None 100 | 101 | else: 102 | self.downsample = nn.Sequential( 103 | nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm4) 104 | 105 | def forward(self, x): 106 | y = x 107 | y = self.relu(self.norm1(self.conv1(y))) 108 | y = self.relu(self.norm2(self.conv2(y))) 109 | y = self.relu(self.norm3(self.conv3(y))) 110 
| 111 | if self.downsample is not None: 112 | x = self.downsample(x) 113 | 114 | return self.relu(x+y) 115 | 116 | 117 | class BasicEncoder(nn.Module): 118 | def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0): 119 | super(BasicEncoder, self).__init__() 120 | self.norm_fn = norm_fn 121 | 122 | if self.norm_fn == 'group': 123 | self.norm1 = nn.GroupNorm(num_groups=8, num_channels=64) 124 | 125 | elif self.norm_fn == 'batch': 126 | self.norm1 = nn.BatchNorm2d(64) 127 | 128 | elif self.norm_fn == 'instance': 129 | self.norm1 = nn.InstanceNorm2d(64) 130 | 131 | elif self.norm_fn == 'none': 132 | self.norm1 = nn.Sequential() 133 | 134 | self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3) 135 | self.relu1 = nn.ReLU(inplace=True) 136 | 137 | self.in_planes = 64 138 | self.layer1 = self._make_layer(64, stride=1) 139 | self.layer2 = self._make_layer(96, stride=2) 140 | self.layer3 = self._make_layer(128, stride=2) 141 | 142 | # output convolution 143 | self.conv2 = nn.Conv2d(128, output_dim, kernel_size=1) 144 | 145 | self.dropout = None 146 | if dropout > 0: 147 | self.dropout = nn.Dropout2d(p=dropout) 148 | 149 | for m in self.modules(): 150 | if isinstance(m, nn.Conv2d): 151 | nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') 152 | elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)): 153 | if m.weight is not None: 154 | nn.init.constant_(m.weight, 1) 155 | if m.bias is not None: 156 | nn.init.constant_(m.bias, 0) 157 | 158 | def _make_layer(self, dim, stride=1): 159 | layer1 = ResidualBlock(self.in_planes, dim, self.norm_fn, stride=stride) 160 | layer2 = ResidualBlock(dim, dim, self.norm_fn, stride=1) 161 | layers = (layer1, layer2) 162 | 163 | self.in_planes = dim 164 | return nn.Sequential(*layers) 165 | 166 | def forward(self, x): 167 | 168 | # if input is list, combine batch dimension 169 | is_list = isinstance(x, tuple) or isinstance(x, list) 170 | if is_list: 171 | batch_dim = x[0].shape[0] 172 | x = torch.cat(x, dim=0) 173 | 174 | x = self.conv1(x) 175 | x = self.norm1(x) 176 | x = self.relu1(x) 177 | 178 | x = self.layer1(x) 179 | x = self.layer2(x) 180 | x = self.layer3(x) 181 | 182 | x = self.conv2(x) 183 | 184 | if self.training and self.dropout is not None: 185 | x = self.dropout(x) 186 | 187 | if is_list: 188 | x = torch.split(x, [batch_dim, batch_dim], dim=0) 189 | 190 | return x 191 | 192 | 193 | class BasicEncoderQuarter(nn.Module): 194 | def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0): 195 | super(BasicEncoderQuarter, self).__init__() 196 | self.norm_fn = norm_fn 197 | 198 | if self.norm_fn == 'group': 199 | self.norm1 = nn.GroupNorm(num_groups=8, num_channels=64) 200 | 201 | elif self.norm_fn == 'batch': 202 | self.norm1 = nn.BatchNorm2d(64) 203 | 204 | elif self.norm_fn == 'instance': 205 | self.norm1 = nn.InstanceNorm2d(64) 206 | 207 | elif self.norm_fn == 'none': 208 | self.norm1 = nn.Sequential() 209 | 210 | self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3) 211 | self.relu1 = nn.ReLU(inplace=True) 212 | 213 | self.in_planes = 64 214 | self.layer1 = self._make_layer(64, stride=1) 215 | self.layer2 = self._make_layer(96, stride=2) 216 | self.layer3 = self._make_layer(128, stride=1) 217 | 218 | # output convolution 219 | self.conv2 = nn.Conv2d(128, output_dim, kernel_size=1) 220 | 221 | self.dropout = None 222 | if dropout > 0: 223 | self.dropout = nn.Dropout2d(p=dropout) 224 | 225 | for m in self.modules(): 226 | if isinstance(m, nn.Conv2d): 227 | 
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') 228 | elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)): 229 | if m.weight is not None: 230 | nn.init.constant_(m.weight, 1) 231 | if m.bias is not None: 232 | nn.init.constant_(m.bias, 0) 233 | 234 | def _make_layer(self, dim, stride=1): 235 | layer1 = ResidualBlock(self.in_planes, dim, self.norm_fn, stride=stride) 236 | layer2 = ResidualBlock(dim, dim, self.norm_fn, stride=1) 237 | layers = (layer1, layer2) 238 | 239 | self.in_planes = dim 240 | return nn.Sequential(*layers) 241 | 242 | def forward(self, x): 243 | 244 | # if input is list, combine batch dimension 245 | is_list = isinstance(x, tuple) or isinstance(x, list) 246 | if is_list: 247 | batch_dim = x[0].shape[0] 248 | x = torch.cat(x, dim=0) 249 | 250 | x = self.conv1(x) 251 | x = self.norm1(x) 252 | x = self.relu1(x) 253 | 254 | x = self.layer1(x) 255 | x = self.layer2(x) 256 | x = self.layer3(x) 257 | 258 | x = self.conv2(x) 259 | 260 | if self.training and self.dropout is not None: 261 | x = self.dropout(x) 262 | 263 | if is_list: 264 | x = torch.split(x, [batch_dim, batch_dim], dim=0) 265 | 266 | return x 267 | -------------------------------------------------------------------------------- /core/knn.py: -------------------------------------------------------------------------------- 1 | import faiss 2 | import torch 3 | 4 | res = faiss.StandardGpuResources() 5 | res.setDefaultNullStreamAllDevices() 6 | 7 | 8 | def swig_ptr_from_Tensor(x): 9 | """ gets a Faiss SWIG pointer from a pytorch tensor (on CPU or GPU) """ 10 | assert x.is_contiguous() 11 | 12 | if x.dtype == torch.float32: 13 | return faiss.cast_integer_to_float_ptr(x.storage().data_ptr() + x.storage_offset() * 4) 14 | 15 | if x.dtype == torch.int64: 16 | return faiss.cast_integer_to_long_ptr(x.storage().data_ptr() + x.storage_offset() * 8) 17 | 18 | raise Exception("tensor type not supported: {}".format(x.dtype)) 19 | 20 | 21 | def search_raw_array_pytorch(res, xb, xq, k, D=None, I=None, 22 | metric=faiss.METRIC_L2): 23 | """search xq in xb, without building an index""" 24 | assert xb.device == xq.device 25 | 26 | nq, d = xq.size() 27 | if xq.is_contiguous(): 28 | xq_row_major = True 29 | elif xq.t().is_contiguous(): 30 | xq = xq.t() # I initially wrote xq:t(), Lua is still haunting me :-) 31 | xq_row_major = False 32 | else: 33 | raise TypeError('matrix should be row or column-major') 34 | 35 | xq_ptr = swig_ptr_from_Tensor(xq) 36 | 37 | nb, d2 = xb.size() 38 | assert d2 == d 39 | if xb.is_contiguous(): 40 | xb_row_major = True 41 | elif xb.t().is_contiguous(): 42 | xb = xb.t() 43 | xb_row_major = False 44 | else: 45 | raise TypeError('matrix should be row or column-major') 46 | xb_ptr = swig_ptr_from_Tensor(xb) 47 | 48 | if D is None: 49 | D = torch.empty(nq, k, device=xb.device, dtype=torch.float32) 50 | else: 51 | assert D.shape == (nq, k) 52 | assert D.device == xb.device 53 | 54 | if I is None: 55 | I = torch.empty(nq, k, device=xb.device, dtype=torch.int64) 56 | else: 57 | assert I.shape == (nq, k) 58 | assert I.device == xb.device 59 | 60 | D_ptr = swig_ptr_from_Tensor(D) 61 | I_ptr = swig_ptr_from_Tensor(I) 62 | 63 | args = faiss.GpuDistanceParams() 64 | args.metric = metric 65 | args.k = k 66 | args.dims = d 67 | args.vectors = xb_ptr 68 | args.vectorsRowMajor = xb_row_major 69 | args.numVectors = nb 70 | args.queries = xq_ptr 71 | args.queriesRowMajor = xq_row_major 72 | args.numQueries = nq 73 | args.outDistances = D_ptr 74 | args.outIndices = I_ptr 75 | 
faiss.bfKnn(res, args) 76 | 77 | return D, I 78 | 79 | 80 | def knn_faiss_raw(fmap1, fmap2, k): 81 | 82 | b, ch, _ = fmap1.shape 83 | 84 | if b == 1: 85 | fmap1 = fmap1.view(ch, -1).t().contiguous() 86 | fmap2 = fmap2.view(ch, -1).t().contiguous() 87 | 88 | dist, indx = search_raw_array_pytorch(res, fmap2, fmap1, k, metric=faiss.METRIC_INNER_PRODUCT) 89 | 90 | dist = dist.t().unsqueeze(0).contiguous() 91 | indx = indx.t().unsqueeze(0).contiguous() 92 | else: 93 | fmap1 = fmap1.view(b, ch, -1).permute(0, 2, 1).contiguous() 94 | fmap2 = fmap2.view(b, ch, -1).permute(0, 2, 1).contiguous() 95 | dist = [] 96 | indx = [] 97 | for i in range(b): 98 | dist_i, indx_i = search_raw_array_pytorch(res, fmap2[i], fmap1[i], k, metric=faiss.METRIC_INNER_PRODUCT) 99 | dist_i = dist_i.t().unsqueeze(0).contiguous() 100 | indx_i = indx_i.t().unsqueeze(0).contiguous() 101 | dist.append(dist_i) 102 | indx.append(indx_i) 103 | dist = torch.cat(dist, dim=0) 104 | indx = torch.cat(indx, dim=0) 105 | return dist, indx 106 | -------------------------------------------------------------------------------- /core/sparsenet.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.functional as F 4 | 5 | from extractor import BasicEncoder, BasicEncoderQuarter 6 | from update import BasicUpdateBlock, BasicUpdateBlockQuarter 7 | from utils.utils import bilinear_sampler, coords_grid, coords_grid_y_first,\ 8 | upflow4, compute_interpolation_weights 9 | from knn import knn_faiss_raw 10 | 11 | autocast = torch.cuda.amp.autocast 12 | 13 | 14 | def compute_sparse_corr(fmap1, fmap2, k=32): 15 | """ 16 | Compute a cost volume containing the k-largest hypotheses for each pixel. 17 | Output: corr_mink 18 | """ 19 | B, C, H1, W1 = fmap1.shape 20 | H2, W2 = fmap2.shape[2:] 21 | N = H1 * W1 22 | 23 | fmap1, fmap2 = fmap1.view(B, C, -1), fmap2.view(B, C, -1) 24 | 25 | with torch.no_grad(): 26 | _, indices = knn_faiss_raw(fmap1, fmap2, k) # [B, k, H1*W1] 27 | 28 | indices_coord = indices.unsqueeze(1).expand(-1, 2, -1, -1) # [B, 2, k, H1*W1] 29 | coords0 = coords_grid_y_first(B, H2, W2).view(B, 2, 1, -1).expand(-1, -1, k, -1).to(fmap1.device) # [B, 2, k, H1*W1] 30 | coords1 = coords0.gather(3, indices_coord) # [B, 2, k, H1*W1] 31 | coords1 = coords1 - coords0 32 | 33 | # Append batch index 34 | batch_index = torch.arange(B).view(B, 1, 1, 1).expand(-1, -1, k, N).type_as(coords1) 35 | 36 | # Gather by indices from map2 and compute correlation volume 37 | fmap2 = fmap2.gather(2, indices.view(B, 1, -1).expand(-1, C, -1)).view(B, C, k, N) 38 | corr_sp = torch.einsum('bcn,bckn->bkn', fmap1, fmap2).contiguous() / torch.sqrt(torch.tensor(C).float()) # [B, k, H1*W1] 39 | 40 | return corr_sp, coords0, coords1, batch_index # coords: [B, 2, k, H1*W1] 41 | 42 | 43 | class FlowHead(nn.Module): 44 | def __init__(self, input_dim=256, batch_norm=True): 45 | super().__init__() 46 | if batch_norm: 47 | self.flowpredictor = nn.Sequential( 48 | nn.Conv2d(input_dim, 128, 3, padding=1), 49 | nn.BatchNorm2d(128), 50 | nn.ReLU(inplace=True), 51 | nn.Conv2d(128, 64, 3, padding=1), 52 | nn.BatchNorm2d(64), 53 | nn.ReLU(inplace=True), 54 | nn.Conv2d(64, 2, 3, padding=1) 55 | ) 56 | else: 57 | self.flowpredictor = nn.Sequential( 58 | nn.Conv2d(input_dim, 128, 3, padding=1), 59 | nn.ReLU(inplace=True), 60 | nn.Conv2d(128, 64, 3, padding=1), 61 | nn.ReLU(inplace=True), 62 | nn.Conv2d(64, 2, 3, padding=1) 63 | ) 64 | 65 | def forward(self, x): 66 | return self.flowpredictor(x) 67 
| 68 | 69 | class SparseNet(nn.Module): 70 | def __init__(self, args): 71 | super().__init__() 72 | self.args = args 73 | 74 | # feature network, context network, and update block 75 | self.fnet = BasicEncoderQuarter(output_dim=256, norm_fn='instance', dropout=False) 76 | self.cnet = BasicEncoderQuarter(output_dim=256, norm_fn='batch', dropout=False) 77 | 78 | # correlation volume encoder 79 | self.update_block = BasicUpdateBlockQuarter(self.args, hidden_dim=128, input_dim=405) 80 | 81 | def initialize_flow(self, img): 82 | """ Flow is represented as difference between two coordinate grids flow = coords1 - coords0""" 83 | N, C, H, W = img.shape 84 | coords0 = coords_grid(N, H//4, W//4).to(img.device) 85 | coords1 = coords_grid(N, H//4, W//4).to(img.device) 86 | 87 | # optical flow computed as difference: flow = coords1 - coords0 88 | return coords0, coords1 89 | 90 | def upsample_flow_quarter(self, flow, mask): 91 | """ Upsample flow field [H/4, W/4, 2] -> [H, W, 2] using convex combination """ 92 | N, _, H, W = flow.shape 93 | mask = mask.view(N, 1, 9, 4, 4, H, W) 94 | mask = torch.softmax(mask, dim=2) 95 | 96 | up_flow = F.unfold(4 * flow, [3,3], padding=1) 97 | up_flow = up_flow.view(N, 2, 9, 1, 1, H, W) 98 | 99 | up_flow = torch.sum(mask * up_flow, dim=2) 100 | up_flow = up_flow.permute(0, 1, 4, 2, 5, 3) 101 | return up_flow.reshape(N, 2, 4*H, 4*W) 102 | 103 | def forward(self, image1, image2, iters, flow_init=None, test_mode=False): 104 | """ Estimate optical flow between pair of frames """ 105 | 106 | image1 = 2 * (image1 / 255.0) - 1.0 107 | image2 = 2 * (image2 / 255.0) - 1.0 108 | 109 | image1 = image1.contiguous() 110 | image2 = image2.contiguous() 111 | 112 | # run the feature and context network 113 | with autocast(enabled=self.args.mixed_precision): 114 | fmap1, fmap2 = self.fnet([image1, image2]) 115 | cnet = self.cnet(image1) 116 | net, inp = torch.split(cnet, [128, 128], dim=1) 117 | net = torch.tanh(net) 118 | inp = torch.relu(inp) 119 | 120 | fmap1 = fmap1.float() 121 | fmap2 = fmap2.float() 122 | 123 | B, _, H1, W1 = fmap1.shape 124 | 125 | # GRU 126 | coords0, coords1 = self.initialize_flow(image1) 127 | 128 | if flow_init is not None: 129 | coords1 = coords1 + flow_init 130 | 131 | # Generate sparse cost volume for GRU 132 | corr_val, coords0_cv, coords1_cv, batch_index_cv = compute_sparse_corr(fmap1, fmap2, k=self.args.num_k) 133 | 134 | delta_flow = torch.zeros_like(coords0) 135 | 136 | flow_predictions = [] 137 | 138 | search_range = 4 139 | corr_val = corr_val.repeat(1, 4, 1) 140 | 141 | for itr in range(iters): 142 | with torch.no_grad(): 143 | 144 | # need to switch order of delta_flow, also note the minus sign 145 | coords1_cv = coords1_cv - delta_flow[:, [1, 0], :, :].view(B, 2, 1, -1) # [B, 2, k, H1*W1] 146 | 147 | mask_pyramid = [] 148 | weights_pyramid = [] 149 | coords_sparse_pyramid = [] 150 | 151 | # Create multi-scale displacements 152 | for i in range(5): 153 | coords1_sp = coords1_cv * 0.5**i 154 | weights, coords1_sp = compute_interpolation_weights(coords1_sp) 155 | mask = (coords1_sp[:, 0].abs() <= search_range) & (coords1_sp[:, 1].abs() <= search_range) 156 | batch_ind = batch_index_cv.permute(0, 2, 3, 1).repeat(1, 4, 1, 1)[mask] 157 | coords0_sp = coords0_cv.permute(0, 2, 3, 1).repeat(1, 4, 1, 1)[mask] 158 | coords1_sp = coords1_sp.permute(0, 2, 3, 1)[mask] 159 | 160 | coords1_sp = coords1_sp + search_range 161 | coords_sp = torch.cat([batch_ind, coords0_sp, coords1_sp], dim=1) 162 | coords_sparse_pyramid.append(coords_sp) 163 | 164 | 
mask_pyramid.append(mask) 165 | weights_pyramid.append(weights) 166 | 167 | corr_val_pyramid = [] 168 | for mask, weights in zip(mask_pyramid, weights_pyramid): 169 | corr_masked = (weights * corr_val)[mask].unsqueeze(1) 170 | corr_val_pyramid.append(corr_masked) 171 | 172 | sparse_tensor_pyramid = [torch.sparse.FloatTensor(coords_sp.t().long(), corr_resample, torch.Size([B, H1, W1, 9, 9, 1])).coalesce() 173 | for coords_sp, corr_resample in zip(coords_sparse_pyramid, corr_val_pyramid)] 174 | 175 | corr = torch.cat([sp.to_dense().view(B, H1, W1, -1) for sp in sparse_tensor_pyramid], dim=3).permute(0, 3, 1, 2) 176 | 177 | coords1 = coords1.detach() 178 | 179 | flow = coords1 - coords0 180 | 181 | # GRU Update 182 | with autocast(enabled=self.args.mixed_precision): 183 | 184 | # 4D net map to 2D dense vector 185 | net, up_mask, delta_flow = self.update_block(net, inp, corr, flow) 186 | 187 | # F(t+1) = F(t) + \Delta(t) 188 | coords1 = coords1 + delta_flow 189 | 190 | # upsample predictions 191 | if up_mask is None: 192 | flow_up = upflow4(coords1 - coords0) 193 | else: 194 | flow_up = self.upsample_flow_quarter(coords1 - coords0, up_mask) 195 | 196 | flow_predictions.append(flow_up) 197 | 198 | if test_mode: 199 | return flow_up 200 | 201 | return flow_predictions 202 | 203 | 204 | class SparseNetEighth(nn.Module): 205 | def __init__(self, args): 206 | super().__init__() 207 | self.args = args 208 | 209 | # feature network, context network, and update block 210 | self.fnet = BasicEncoder(output_dim=256, norm_fn='instance', dropout=False) 211 | self.cnet = BasicEncoder(output_dim=256, norm_fn='batch', dropout=False) 212 | 213 | # correlation volume encoder 214 | self.update_block = BasicUpdateBlock(self.args, hidden_dim=128, input_dim=405) 215 | 216 | def initialize_flow(self, img): 217 | """ Flow is represented as difference between two coordinate grids flow = coords1 - coords0""" 218 | N, C, H, W = img.shape 219 | coords0 = coords_grid(N, H//8, W//8).to(img.device) 220 | coords1 = coords_grid(N, H//8, W//8).to(img.device) 221 | 222 | # optical flow computed as difference: flow = coords1 - coords0 223 | return coords0, coords1 224 | 225 | def upsample_flow(self, flow, mask): 226 | """ Upsample flow field [H/8, W/8, 2] -> [H, W, 2] using convex combination """ 227 | N, _, H, W = flow.shape 228 | mask = mask.view(N, 1, 9, 8, 8, H, W) 229 | mask = torch.softmax(mask, dim=2) 230 | 231 | up_flow = F.unfold(8 * flow, [3,3], padding=1) 232 | up_flow = up_flow.view(N, 2, 9, 1, 1, H, W) 233 | 234 | up_flow = torch.sum(mask * up_flow, dim=2) 235 | up_flow = up_flow.permute(0, 1, 4, 2, 5, 3) 236 | return up_flow.reshape(N, 2, 8*H, 8*W) 237 | 238 | def forward(self, image1, image2, iters, flow_init=None, test_mode=False): 239 | """ Estimate optical flow between pair of frames """ 240 | 241 | image1 = 2 * (image1 / 255.0) - 1.0 242 | image2 = 2 * (image2 / 255.0) - 1.0 243 | 244 | image1 = image1.contiguous() 245 | image2 = image2.contiguous() 246 | 247 | # run the feature and context network 248 | with autocast(enabled=self.args.mixed_precision): 249 | fmap1, fmap2 = self.fnet([image1, image2]) 250 | cnet = self.cnet(image1) 251 | net, inp = torch.split(cnet, [128, 128], dim=1) 252 | net = torch.tanh(net) 253 | inp = torch.relu(inp) 254 | 255 | fmap1 = fmap1.float() 256 | fmap2 = fmap2.float() 257 | 258 | B, _, H1, W1 = fmap1.shape 259 | 260 | # GRU 261 | coords0, coords1 = self.initialize_flow(image1) 262 | 263 | if flow_init is not None: 264 | coords1 = coords1 + flow_init 265 | 266 | # Generate 
sparse cost volume for GRU 267 | corr_val, coords0_cv, coords1_cv, batch_index_cv = compute_sparse_corr(fmap1, fmap2, k=self.args.num_k) 268 | 269 | delta_flow = torch.zeros_like(coords0) 270 | 271 | flow_predictions = [] 272 | 273 | search_range = 4 274 | corr_val = corr_val.repeat(1, 4, 1) 275 | 276 | for itr in range(iters): 277 | with torch.no_grad(): 278 | 279 | # need to switch order of delta_flow, also note the minus sign 280 | coords1_cv = coords1_cv - delta_flow[:, [1, 0], :, :].view(B, 2, 1, -1) # [B, 2, k, H1*W1] 281 | 282 | mask_pyramid = [] 283 | weights_pyramid = [] 284 | coords_sparse_pyramid = [] 285 | 286 | # Create multi-scale displacements 287 | for i in range(5): 288 | coords1_sp = coords1_cv * 0.5**i 289 | weights, coords1_sp = compute_interpolation_weights(coords1_sp) 290 | mask = (coords1_sp[:, 0].abs() <= search_range) & (coords1_sp[:, 1].abs() <= search_range) 291 | batch_ind = batch_index_cv.permute(0, 2, 3, 1).repeat(1, 4, 1, 1)[mask] 292 | coords0_sp = coords0_cv.permute(0, 2, 3, 1).repeat(1, 4, 1, 1)[mask] 293 | coords1_sp = coords1_sp.permute(0, 2, 3, 1)[mask] 294 | 295 | coords1_sp = coords1_sp + search_range 296 | coords_sp = torch.cat([batch_ind, coords0_sp, coords1_sp], dim=1) 297 | coords_sparse_pyramid.append(coords_sp) 298 | 299 | mask_pyramid.append(mask) 300 | weights_pyramid.append(weights) 301 | 302 | corr_val_pyramid = [] 303 | for mask, weights in zip(mask_pyramid, weights_pyramid): 304 | corr_masked = (weights * corr_val)[mask].unsqueeze(1) 305 | corr_val_pyramid.append(corr_masked) 306 | 307 | sparse_tensor_pyramid = [torch.sparse.FloatTensor(coords_sp.t().long(), corr_resample, torch.Size([B, H1, W1, 9, 9, 1])).coalesce() 308 | for coords_sp, corr_resample in zip(coords_sparse_pyramid, corr_val_pyramid)] 309 | 310 | corr = torch.cat([sp.to_dense().view(B, H1, W1, -1) for sp in sparse_tensor_pyramid], dim=3).permute(0, 3, 1, 2) 311 | 312 | coords1 = coords1.detach() 313 | 314 | flow = coords1 - coords0 315 | 316 | # Not sure if it will affect results. 
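            # GRU update: encode the multi-scale cost slice together with the current flow and predict the flow increment (same step as the quarter-resolution model above).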
317 | with autocast(enabled=self.args.mixed_precision): 318 | 319 | # 4D net map to 2D dense vector 320 | net, up_mask, delta_flow = self.update_block(net, inp, corr, flow) 321 | 322 | # F(t+1) = F(t) + \Delta(t) 323 | coords1 = coords1 + delta_flow 324 | 325 | # upsample predictions 326 | if up_mask is None: 327 | flow_up = upflow4(coords1 - coords0) 328 | else: 329 | flow_up = self.upsample_flow(coords1 - coords0, up_mask) 330 | 331 | flow_predictions.append(flow_up) 332 | 333 | if test_mode: 334 | return flow_up 335 | 336 | return flow_predictions 337 | -------------------------------------------------------------------------------- /core/update.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.functional as F 4 | 5 | 6 | class FlowHead(nn.Module): 7 | def __init__(self, input_dim=128, hidden_dim=256): 8 | super(FlowHead, self).__init__() 9 | self.conv1 = nn.Conv2d(input_dim, hidden_dim, 3, padding=1) 10 | self.conv2 = nn.Conv2d(hidden_dim, 2, 3, padding=1) 11 | self.relu = nn.ReLU(inplace=True) 12 | 13 | def forward(self, x): 14 | return self.conv2(self.relu(self.conv1(x))) 15 | 16 | 17 | class ConvGRU(nn.Module): 18 | def __init__(self, hidden_dim=128, input_dim=192+128): 19 | super(ConvGRU, self).__init__() 20 | self.convz = nn.Conv2d(hidden_dim+input_dim, hidden_dim, 3, padding=1) 21 | self.convr = nn.Conv2d(hidden_dim+input_dim, hidden_dim, 3, padding=1) 22 | self.convq = nn.Conv2d(hidden_dim+input_dim, hidden_dim, 3, padding=1) 23 | 24 | def forward(self, h, x): 25 | hx = torch.cat([h, x], dim=1) 26 | 27 | z = torch.sigmoid(self.convz(hx)) 28 | r = torch.sigmoid(self.convr(hx)) 29 | q = torch.tanh(self.convq(torch.cat([r*h, x], dim=1))) 30 | 31 | h = (1-z) * h + z * q 32 | return h 33 | 34 | 35 | class SepConvGRU(nn.Module): 36 | def __init__(self, hidden_dim=128, input_dim=192+128): 37 | super(SepConvGRU, self).__init__() 38 | self.convz1 = nn.Conv2d(hidden_dim+input_dim, hidden_dim, (1,5), padding=(0,2)) 39 | self.convr1 = nn.Conv2d(hidden_dim+input_dim, hidden_dim, (1,5), padding=(0,2)) 40 | self.convq1 = nn.Conv2d(hidden_dim+input_dim, hidden_dim, (1,5), padding=(0,2)) 41 | 42 | self.convz2 = nn.Conv2d(hidden_dim+input_dim, hidden_dim, (5,1), padding=(2,0)) 43 | self.convr2 = nn.Conv2d(hidden_dim+input_dim, hidden_dim, (5,1), padding=(2,0)) 44 | self.convq2 = nn.Conv2d(hidden_dim+input_dim, hidden_dim, (5,1), padding=(2,0)) 45 | 46 | def forward(self, h, x): 47 | # horizontal 48 | hx = torch.cat([h, x], dim=1) 49 | z = torch.sigmoid(self.convz1(hx)) 50 | r = torch.sigmoid(self.convr1(hx)) 51 | q = torch.tanh(self.convq1(torch.cat([r*h, x], dim=1))) 52 | h = (1-z) * h + z * q 53 | 54 | # vertical 55 | hx = torch.cat([h, x], dim=1) 56 | z = torch.sigmoid(self.convz2(hx)) 57 | r = torch.sigmoid(self.convr2(hx)) 58 | q = torch.tanh(self.convq2(torch.cat([r*h, x], dim=1))) 59 | h = (1-z) * h + z * q 60 | 61 | return h 62 | 63 | 64 | class ResidualBlock(nn.Module): 65 | def __init__(self, in_planes, planes, dilation=(1, 1), kernel_size=3): 66 | super(ResidualBlock, self).__init__() 67 | 68 | self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=kernel_size, padding=(kernel_size-1)//2, dilation=dilation[0]) 69 | self.conv2 = nn.Conv2d(planes, planes, kernel_size=kernel_size, padding=(kernel_size-1)//2, dilation=dilation[1]) 70 | self.relu = nn.ReLU(inplace=True) 71 | 72 | self.projector = nn.Conv2d(in_planes, planes, kernel_size=1) 73 | 74 | def forward(self, x): 75 | y = x 76 | y = 
self.relu(self.conv1(y)) 77 | y = self.relu(self.conv2(y)) 78 | 79 | if self.projector is not None: 80 | x = self.projector(x) 81 | 82 | return self.relu(x + y) 83 | 84 | 85 | class BasicMotionEncoder(nn.Module): 86 | def __init__(self, args, input_dim=128): 87 | super().__init__() 88 | self.convc1 = nn.Conv2d(input_dim, 256, 1) 89 | self.convc2 = nn.Conv2d(256, 192, 3, padding=1) 90 | self.convf1 = nn.Conv2d(2, 128, 7, padding=3) 91 | self.convf2 = nn.Conv2d(128, 64, 3, padding=1) 92 | self.conv = nn.Conv2d(192+64, 128-2, 3, padding=1) 93 | 94 | def forward(self, flow, corr): 95 | cor = F.relu(self.convc1(corr)) 96 | cor = F.relu(self.convc2(cor)) 97 | flo = F.relu(self.convf1(flow)) 98 | flo = F.relu(self.convf2(flo)) 99 | 100 | cor_flo = torch.cat([cor, flo], dim=1) 101 | out = F.relu(self.conv(cor_flo)) 102 | return torch.cat([out, flow], dim=1) 103 | 104 | 105 | class BasicUpdateBlock(nn.Module): 106 | def __init__(self, args, hidden_dim=128, input_dim=128): 107 | super(BasicUpdateBlock, self).__init__() 108 | self.args = args 109 | self.encoder = BasicMotionEncoder(args, input_dim=input_dim) 110 | self.gru = SepConvGRU(hidden_dim=hidden_dim, input_dim=128+hidden_dim) 111 | self.flow_head = FlowHead(hidden_dim, hidden_dim=256) 112 | 113 | self.mask = nn.Sequential( 114 | nn.Conv2d(128, 256, 3, padding=1), 115 | nn.ReLU(inplace=True), 116 | nn.Conv2d(256, 64*9, 1, padding=0)) 117 | 118 | def forward(self, net, inp, corr, flow): 119 | motion_features = self.encoder(flow, corr) 120 | inp = torch.cat([inp, motion_features], dim=1) 121 | 122 | net = self.gru(net, inp) 123 | delta_flow = self.flow_head(net) 124 | 125 | # scale mask to balence gradients 126 | mask = .25 * self.mask(net) 127 | return net, mask, delta_flow 128 | 129 | 130 | class BasicUpdateBlockQuarter(nn.Module): 131 | def __init__(self, args, hidden_dim=128, input_dim=128): 132 | super(BasicUpdateBlockQuarter, self).__init__() 133 | self.args = args 134 | self.encoder = BasicMotionEncoder(args, input_dim=input_dim) 135 | self.gru = SepConvGRU(hidden_dim=hidden_dim, input_dim=128+hidden_dim) 136 | self.flow_head = FlowHead(input_dim=hidden_dim, hidden_dim=256) 137 | 138 | self.mask = nn.Sequential( 139 | nn.Conv2d(128, 256, 3, padding=1), 140 | nn.ReLU(inplace=True), 141 | nn.Conv2d(256, 16*9, 1, padding=0)) 142 | 143 | def forward(self, net, inp, corr, flow): 144 | motion_features = self.encoder(flow, corr) 145 | inp = torch.cat([inp, motion_features], dim=1) 146 | 147 | net = self.gru(net, inp) 148 | delta_flow = self.flow_head(net) 149 | 150 | # scale mask to balence gradients 151 | mask = .25 * self.mask(net) 152 | return net, mask, delta_flow 153 | 154 | -------------------------------------------------------------------------------- /core/utils/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zacjiang/SCV/9f809910e17125701b28dc1054fbc7648b801957/core/utils/__init__.py -------------------------------------------------------------------------------- /core/utils/augmentor.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import random 3 | import math 4 | from PIL import Image 5 | 6 | import cv2 7 | cv2.setNumThreads(0) 8 | cv2.ocl.setUseOpenCL(False) 9 | 10 | import torch 11 | from torchvision.transforms import ColorJitter 12 | import torch.nn.functional as F 13 | 14 | 15 | class FlowAugmentor: 16 | def __init__(self, crop_size, min_scale=-0.2, max_scale=0.5, do_flip=True): 17 | 18 | 
# spatial augmentation params 19 | self.crop_size = crop_size 20 | self.min_scale = min_scale 21 | self.max_scale = max_scale 22 | self.spatial_aug_prob = 0.8 23 | self.stretch_prob = 0.8 24 | self.max_stretch = 0.2 25 | 26 | # flip augmentation params 27 | self.do_flip = do_flip 28 | self.h_flip_prob = 0.5 29 | self.v_flip_prob = 0.1 30 | 31 | # photometric augmentation params 32 | self.photo_aug = ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.5/3.14) 33 | self.asymmetric_color_aug_prob = 0.2 34 | self.eraser_aug_prob = 0.5 35 | 36 | def color_transform(self, img1, img2): 37 | """ Photometric augmentation """ 38 | 39 | # asymmetric 40 | if np.random.rand() < self.asymmetric_color_aug_prob: 41 | img1 = np.array(self.photo_aug(Image.fromarray(img1)), dtype=np.uint8) 42 | img2 = np.array(self.photo_aug(Image.fromarray(img2)), dtype=np.uint8) 43 | 44 | # symmetric 45 | else: 46 | image_stack = np.concatenate([img1, img2], axis=0) 47 | image_stack = np.array(self.photo_aug(Image.fromarray(image_stack)), dtype=np.uint8) 48 | img1, img2 = np.split(image_stack, 2, axis=0) 49 | 50 | return img1, img2 51 | 52 | def eraser_transform(self, img1, img2, bounds=[50, 100]): 53 | """ Occlusion augmentation """ 54 | 55 | ht, wd = img1.shape[:2] 56 | if np.random.rand() < self.eraser_aug_prob: 57 | mean_color = np.mean(img2.reshape(-1, 3), axis=0) 58 | for _ in range(np.random.randint(1, 3)): 59 | x0 = np.random.randint(0, wd) 60 | y0 = np.random.randint(0, ht) 61 | dx = np.random.randint(bounds[0], bounds[1]) 62 | dy = np.random.randint(bounds[0], bounds[1]) 63 | img2[y0:y0+dy, x0:x0+dx, :] = mean_color 64 | 65 | return img1, img2 66 | 67 | def spatial_transform(self, img1, img2, flow): 68 | # randomly sample scale 69 | ht, wd = img1.shape[:2] 70 | min_scale = np.maximum( 71 | (self.crop_size[0] + 8) / float(ht), 72 | (self.crop_size[1] + 8) / float(wd)) 73 | 74 | scale = 2 ** np.random.uniform(self.min_scale, self.max_scale) 75 | scale_x = scale 76 | scale_y = scale 77 | if np.random.rand() < self.stretch_prob: 78 | scale_x *= 2 ** np.random.uniform(-self.max_stretch, self.max_stretch) 79 | scale_y *= 2 ** np.random.uniform(-self.max_stretch, self.max_stretch) 80 | 81 | scale_x = np.clip(scale_x, min_scale, None) 82 | scale_y = np.clip(scale_y, min_scale, None) 83 | 84 | if np.random.rand() < self.spatial_aug_prob: 85 | # rescale the images 86 | img1 = cv2.resize(img1, None, fx=scale_x, fy=scale_y, interpolation=cv2.INTER_LINEAR) 87 | img2 = cv2.resize(img2, None, fx=scale_x, fy=scale_y, interpolation=cv2.INTER_LINEAR) 88 | flow = cv2.resize(flow, None, fx=scale_x, fy=scale_y, interpolation=cv2.INTER_LINEAR) 89 | flow = flow * [scale_x, scale_y] 90 | 91 | if self.do_flip: 92 | if np.random.rand() < self.h_flip_prob: # h-flip 93 | img1 = img1[:, ::-1] 94 | img2 = img2[:, ::-1] 95 | flow = flow[:, ::-1] * [-1.0, 1.0] 96 | 97 | if np.random.rand() < self.v_flip_prob: # v-flip 98 | img1 = img1[::-1, :] 99 | img2 = img2[::-1, :] 100 | flow = flow[::-1, :] * [1.0, -1.0] 101 | 102 | y0 = np.random.randint(0, img1.shape[0] - self.crop_size[0]) 103 | x0 = np.random.randint(0, img1.shape[1] - self.crop_size[1]) 104 | 105 | img1 = img1[y0:y0+self.crop_size[0], x0:x0+self.crop_size[1]] 106 | img2 = img2[y0:y0+self.crop_size[0], x0:x0+self.crop_size[1]] 107 | flow = flow[y0:y0+self.crop_size[0], x0:x0+self.crop_size[1]] 108 | 109 | return img1, img2, flow 110 | 111 | def __call__(self, img1, img2, flow): 112 | img1, img2 = self.color_transform(img1, img2) 113 | img1, img2 = 
self.eraser_transform(img1, img2) 114 | img1, img2, flow = self.spatial_transform(img1, img2, flow) 115 | 116 | img1 = np.ascontiguousarray(img1) 117 | img2 = np.ascontiguousarray(img2) 118 | flow = np.ascontiguousarray(flow) 119 | 120 | return img1, img2, flow 121 | 122 | 123 | class SparseFlowAugmentor: 124 | def __init__(self, crop_size, min_scale=-0.2, max_scale=0.5, do_flip=False): 125 | # spatial augmentation params 126 | self.crop_size = crop_size 127 | self.min_scale = min_scale 128 | self.max_scale = max_scale 129 | self.spatial_aug_prob = 0.8 130 | self.stretch_prob = 0.8 131 | self.max_stretch = 0.2 132 | 133 | # flip augmentation params 134 | self.do_flip = do_flip 135 | self.h_flip_prob = 0.5 136 | self.v_flip_prob = 0.1 137 | 138 | # photometric augmentation params 139 | self.photo_aug = ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.3/3.14) 140 | self.asymmetric_color_aug_prob = 0.2 141 | self.eraser_aug_prob = 0.5 142 | 143 | def color_transform(self, img1, img2): 144 | image_stack = np.concatenate([img1, img2], axis=0) 145 | image_stack = np.array(self.photo_aug(Image.fromarray(image_stack)), dtype=np.uint8) 146 | img1, img2 = np.split(image_stack, 2, axis=0) 147 | return img1, img2 148 | 149 | def eraser_transform(self, img1, img2): 150 | ht, wd = img1.shape[:2] 151 | if np.random.rand() < self.eraser_aug_prob: 152 | mean_color = np.mean(img2.reshape(-1, 3), axis=0) 153 | for _ in range(np.random.randint(1, 3)): 154 | x0 = np.random.randint(0, wd) 155 | y0 = np.random.randint(0, ht) 156 | dx = np.random.randint(50, 100) 157 | dy = np.random.randint(50, 100) 158 | img2[y0:y0+dy, x0:x0+dx, :] = mean_color 159 | 160 | return img1, img2 161 | 162 | def resize_sparse_flow_map(self, flow, valid, fx=1.0, fy=1.0): 163 | ht, wd = flow.shape[:2] 164 | coords = np.meshgrid(np.arange(wd), np.arange(ht)) 165 | coords = np.stack(coords, axis=-1) 166 | 167 | coords = coords.reshape(-1, 2).astype(np.float32) 168 | flow = flow.reshape(-1, 2).astype(np.float32) 169 | valid = valid.reshape(-1).astype(np.float32) 170 | 171 | coords0 = coords[valid>=1] 172 | flow0 = flow[valid>=1] 173 | 174 | ht1 = int(round(ht * fy)) 175 | wd1 = int(round(wd * fx)) 176 | 177 | coords1 = coords0 * [fx, fy] 178 | flow1 = flow0 * [fx, fy] 179 | 180 | xx = np.round(coords1[:,0]).astype(np.int32) 181 | yy = np.round(coords1[:,1]).astype(np.int32) 182 | 183 | v = (xx > 0) & (xx < wd1) & (yy > 0) & (yy < ht1) 184 | xx = xx[v] 185 | yy = yy[v] 186 | flow1 = flow1[v] 187 | 188 | flow_img = np.zeros([ht1, wd1, 2], dtype=np.float32) 189 | valid_img = np.zeros([ht1, wd1], dtype=np.int32) 190 | 191 | flow_img[yy, xx] = flow1 192 | valid_img[yy, xx] = 1 193 | 194 | return flow_img, valid_img 195 | 196 | def spatial_transform(self, img1, img2, flow, valid): 197 | # randomly sample scale 198 | 199 | ht, wd = img1.shape[:2] 200 | min_scale = np.maximum( 201 | (self.crop_size[0] + 1) / float(ht), 202 | (self.crop_size[1] + 1) / float(wd)) 203 | 204 | scale = 2 ** np.random.uniform(self.min_scale, self.max_scale) 205 | scale_x = np.clip(scale, min_scale, None) 206 | scale_y = np.clip(scale, min_scale, None) 207 | 208 | if np.random.rand() < self.spatial_aug_prob: 209 | # rescale the images 210 | img1 = cv2.resize(img1, None, fx=scale_x, fy=scale_y, interpolation=cv2.INTER_LINEAR) 211 | img2 = cv2.resize(img2, None, fx=scale_x, fy=scale_y, interpolation=cv2.INTER_LINEAR) 212 | flow, valid = self.resize_sparse_flow_map(flow, valid, fx=scale_x, fy=scale_y) 213 | 214 | if self.do_flip: 215 | if np.random.rand() < 
0.5: # h-flip 216 | img1 = img1[:, ::-1] 217 | img2 = img2[:, ::-1] 218 | flow = flow[:, ::-1] * [-1.0, 1.0] 219 | valid = valid[:, ::-1] 220 | 221 | margin_y = 20 222 | margin_x = 50 223 | 224 | y0 = np.random.randint(0, img1.shape[0] - self.crop_size[0] + margin_y) 225 | x0 = np.random.randint(-margin_x, img1.shape[1] - self.crop_size[1] + margin_x) 226 | 227 | y0 = np.clip(y0, 0, img1.shape[0] - self.crop_size[0]) 228 | x0 = np.clip(x0, 0, img1.shape[1] - self.crop_size[1]) 229 | 230 | img1 = img1[y0:y0+self.crop_size[0], x0:x0+self.crop_size[1]] 231 | img2 = img2[y0:y0+self.crop_size[0], x0:x0+self.crop_size[1]] 232 | flow = flow[y0:y0+self.crop_size[0], x0:x0+self.crop_size[1]] 233 | valid = valid[y0:y0+self.crop_size[0], x0:x0+self.crop_size[1]] 234 | return img1, img2, flow, valid 235 | 236 | def __call__(self, img1, img2, flow, valid): 237 | img1, img2 = self.color_transform(img1, img2) 238 | img1, img2 = self.eraser_transform(img1, img2) 239 | img1, img2, flow, valid = self.spatial_transform(img1, img2, flow, valid) 240 | 241 | img1 = np.ascontiguousarray(img1) 242 | img2 = np.ascontiguousarray(img2) 243 | flow = np.ascontiguousarray(flow) 244 | valid = np.ascontiguousarray(valid) 245 | 246 | return img1, img2, flow, valid 247 | -------------------------------------------------------------------------------- /core/utils/flow_viz.py: -------------------------------------------------------------------------------- 1 | # Flow visualization code used from https://github.com/tomrunia/OpticalFlow_Visualization 2 | 3 | 4 | # MIT License 5 | # 6 | # Copyright (c) 2018 Tom Runia 7 | # 8 | # Permission is hereby granted, free of charge, to any person obtaining a copy 9 | # of this software and associated documentation files (the "Software"), to deal 10 | # in the Software without restriction, including without limitation the rights 11 | # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 12 | # copies of the Software, and to permit persons to whom the Software is 13 | # furnished to do so, subject to conditions. 14 | # 15 | # Author: Tom Runia 16 | # Date Created: 2018-08-03 17 | 18 | import numpy as np 19 | 20 | def make_colorwheel(): 21 | """ 22 | Generates a color wheel for optical flow visualization as presented in: 23 | Baker et al. "A Database and Evaluation Methodology for Optical Flow" (ICCV, 2007) 24 | URL: http://vision.middlebury.edu/flow/flowEval-iccv07.pdf 25 | 26 | Code follows the original C++ source code of Daniel Scharstein. 27 | Code follows the the Matlab source code of Deqing Sun. 
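    The wheel has RY + YG + GC + CB + BM + MR = 15 + 6 + 4 + 11 + 13 + 6 = 55 entries, one RGB colour per row.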
28 | 29 | Returns: 30 | np.ndarray: Color wheel 31 | """ 32 | 33 | RY = 15 34 | YG = 6 35 | GC = 4 36 | CB = 11 37 | BM = 13 38 | MR = 6 39 | 40 | ncols = RY + YG + GC + CB + BM + MR 41 | colorwheel = np.zeros((ncols, 3)) 42 | col = 0 43 | 44 | # RY 45 | colorwheel[0:RY, 0] = 255 46 | colorwheel[0:RY, 1] = np.floor(255*np.arange(0,RY)/RY) 47 | col = col+RY 48 | # YG 49 | colorwheel[col:col+YG, 0] = 255 - np.floor(255*np.arange(0,YG)/YG) 50 | colorwheel[col:col+YG, 1] = 255 51 | col = col+YG 52 | # GC 53 | colorwheel[col:col+GC, 1] = 255 54 | colorwheel[col:col+GC, 2] = np.floor(255*np.arange(0,GC)/GC) 55 | col = col+GC 56 | # CB 57 | colorwheel[col:col+CB, 1] = 255 - np.floor(255*np.arange(CB)/CB) 58 | colorwheel[col:col+CB, 2] = 255 59 | col = col+CB 60 | # BM 61 | colorwheel[col:col+BM, 2] = 255 62 | colorwheel[col:col+BM, 0] = np.floor(255*np.arange(0,BM)/BM) 63 | col = col+BM 64 | # MR 65 | colorwheel[col:col+MR, 2] = 255 - np.floor(255*np.arange(MR)/MR) 66 | colorwheel[col:col+MR, 0] = 255 67 | return colorwheel 68 | 69 | 70 | def flow_uv_to_colors(u, v, convert_to_bgr=False): 71 | """ 72 | Applies the flow color wheel to (possibly clipped) flow components u and v. 73 | 74 | According to the C++ source code of Daniel Scharstein 75 | According to the Matlab source code of Deqing Sun 76 | 77 | Args: 78 | u (np.ndarray): Input horizontal flow of shape [H,W] 79 | v (np.ndarray): Input vertical flow of shape [H,W] 80 | convert_to_bgr (bool, optional): Convert output image to BGR. Defaults to False. 81 | 82 | Returns: 83 | np.ndarray: Flow visualization image of shape [H,W,3] 84 | """ 85 | flow_image = np.zeros((u.shape[0], u.shape[1], 3), np.uint8) 86 | colorwheel = make_colorwheel() # shape [55x3] 87 | ncols = colorwheel.shape[0] 88 | rad = np.sqrt(np.square(u) + np.square(v)) 89 | a = np.arctan2(-v, -u)/np.pi 90 | fk = (a+1) / 2*(ncols-1) 91 | k0 = np.floor(fk).astype(np.int32) 92 | k1 = k0 + 1 93 | k1[k1 == ncols] = 0 94 | f = fk - k0 95 | for i in range(colorwheel.shape[1]): 96 | tmp = colorwheel[:,i] 97 | col0 = tmp[k0] / 255.0 98 | col1 = tmp[k1] / 255.0 99 | col = (1-f)*col0 + f*col1 100 | idx = (rad <= 1) 101 | col[idx] = 1 - rad[idx] * (1-col[idx]) 102 | col[~idx] = col[~idx] * 0.75 # out of range 103 | # Note the 2-i => BGR instead of RGB 104 | ch_idx = 2-i if convert_to_bgr else i 105 | flow_image[:,:,ch_idx] = np.floor(255 * col) 106 | return flow_image 107 | 108 | 109 | def flow_to_image(flow_uv, clip_flow=None, convert_to_bgr=False): 110 | """ 111 | Expects a two dimensional flow image of shape. 112 | 113 | Args: 114 | flow_uv (np.ndarray): Flow UV image of shape [H,W,2] 115 | clip_flow (float, optional): Clip maximum of flow values. Defaults to None. 116 | convert_to_bgr (bool, optional): Convert output image to BGR. Defaults to False. 
117 | 118 | Returns: 119 | np.ndarray: Flow visualization image of shape [H,W,3] 120 | """ 121 | assert flow_uv.ndim == 3, 'input flow must have three dimensions' 122 | assert flow_uv.shape[2] == 2, 'input flow must have shape [H,W,2]' 123 | if clip_flow is not None: 124 | flow_uv = np.clip(flow_uv, 0, clip_flow) 125 | u = flow_uv[:,:,0] 126 | v = flow_uv[:,:,1] 127 | rad = np.sqrt(np.square(u) + np.square(v)) 128 | rad_max = np.max(rad) 129 | epsilon = 1e-5 130 | u = u / (rad_max + epsilon) 131 | v = v / (rad_max + epsilon) 132 | return flow_uv_to_colors(u, v, convert_to_bgr) -------------------------------------------------------------------------------- /core/utils/frame_utils.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from PIL import Image 3 | from os.path import * 4 | import re 5 | 6 | import cv2 7 | cv2.setNumThreads(0) 8 | cv2.ocl.setUseOpenCL(False) 9 | 10 | TAG_CHAR = np.array([202021.25], np.float32) 11 | 12 | def readFlow(fn): 13 | """ Read .flo file in Middlebury format""" 14 | # Code adapted from: 15 | # http://stackoverflow.com/questions/28013200/reading-middlebury-flow-files-with-python-bytes-array-numpy 16 | 17 | # WARNING: this will work on little-endian architectures (eg Intel x86) only! 18 | # print 'fn = %s'%(fn) 19 | with open(fn, 'rb') as f: 20 | magic = np.fromfile(f, np.float32, count=1) 21 | if 202021.25 != magic: 22 | print('Magic number incorrect. Invalid .flo file') 23 | return None 24 | else: 25 | w = np.fromfile(f, np.int32, count=1) 26 | h = np.fromfile(f, np.int32, count=1) 27 | # print 'Reading %d x %d flo file\n' % (w, h) 28 | data = np.fromfile(f, np.float32, count=2*int(w)*int(h)) 29 | # Reshape data into 3D array (columns, rows, bands) 30 | # The reshape here is for visualization, the original code is (w,h,2) 31 | return np.resize(data, (int(h), int(w), 2)) 32 | 33 | def readPFM(file): 34 | file = open(file, 'rb') 35 | 36 | color = None 37 | width = None 38 | height = None 39 | scale = None 40 | endian = None 41 | 42 | header = file.readline().rstrip() 43 | if header == b'PF': 44 | color = True 45 | elif header == b'Pf': 46 | color = False 47 | else: 48 | raise Exception('Not a PFM file.') 49 | 50 | dim_match = re.match(rb'^(\d+)\s(\d+)\s$', file.readline()) 51 | if dim_match: 52 | width, height = map(int, dim_match.groups()) 53 | else: 54 | raise Exception('Malformed PFM header.') 55 | 56 | scale = float(file.readline().rstrip()) 57 | if scale < 0: # little-endian 58 | endian = '<' 59 | scale = -scale 60 | else: 61 | endian = '>' # big-endian 62 | 63 | data = np.fromfile(file, endian + 'f') 64 | shape = (height, width, 3) if color else (height, width) 65 | 66 | data = np.reshape(data, shape) 67 | data = np.flipud(data) 68 | return data 69 | 70 | def writeFlow(filename,uv,v=None): 71 | """ Write optical flow to file. 72 | 73 | If v is None, uv is assumed to contain both u and v channels, 74 | stacked in depth. 75 | Original code by Deqing Sun, adapted from Daniel Scharstein. 
76 | """ 77 | nBands = 2 78 | 79 | if v is None: 80 | assert(uv.ndim == 3) 81 | assert(uv.shape[2] == 2) 82 | u = uv[:,:,0] 83 | v = uv[:,:,1] 84 | else: 85 | u = uv 86 | 87 | assert(u.shape == v.shape) 88 | height,width = u.shape 89 | f = open(filename,'wb') 90 | # write the header 91 | f.write(TAG_CHAR) 92 | np.array(width).astype(np.int32).tofile(f) 93 | np.array(height).astype(np.int32).tofile(f) 94 | # arrange into matrix form 95 | tmp = np.zeros((height, width*nBands)) 96 | tmp[:,np.arange(width)*2] = u 97 | tmp[:,np.arange(width)*2 + 1] = v 98 | tmp.astype(np.float32).tofile(f) 99 | f.close() 100 | 101 | 102 | def readFlowKITTI(filename): 103 | flow = cv2.imread(filename, cv2.IMREAD_ANYDEPTH|cv2.IMREAD_COLOR) 104 | flow = flow[:,:,::-1].astype(np.float32) 105 | flow, valid = flow[:, :, :2], flow[:, :, 2] 106 | flow = (flow - 2**15) / 64.0 107 | return flow, valid 108 | 109 | def readDispKITTI(filename): 110 | disp = cv2.imread(filename, cv2.IMREAD_ANYDEPTH) / 256.0 111 | valid = disp > 0.0 112 | flow = np.stack([-disp, np.zeros_like(disp)], -1) 113 | return flow, valid 114 | 115 | 116 | def writeFlowKITTI(filename, uv): 117 | uv = 64.0 * uv + 2**15 118 | valid = np.ones([uv.shape[0], uv.shape[1], 1]) 119 | uv = np.concatenate([uv, valid], axis=-1).astype(np.uint16) 120 | cv2.imwrite(filename, uv[..., ::-1]) 121 | 122 | 123 | def read_gen(file_name, pil=False): 124 | ext = splitext(file_name)[-1] 125 | if ext == '.png' or ext == '.jpeg' or ext == '.ppm' or ext == '.jpg': 126 | return Image.open(file_name) 127 | elif ext == '.bin' or ext == '.raw': 128 | return np.load(file_name) 129 | elif ext == '.flo': 130 | return readFlow(file_name).astype(np.float32) 131 | elif ext == '.pfm': 132 | flow = readPFM(file_name).astype(np.float32) 133 | if len(flow.shape) == 2: 134 | return flow 135 | else: 136 | return flow[:, :, :-1] 137 | return [] -------------------------------------------------------------------------------- /core/utils/utils.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn.functional as F 3 | import numpy as np 4 | from scipy import interpolate 5 | from torch_scatter import scatter_softmax, scatter_add 6 | 7 | 8 | class InputPadder: 9 | """ Pads images such that dimensions are divisible by 8 """ 10 | def __init__(self, dims, mode='sintel'): 11 | self.ht, self.wd = dims[-2:] 12 | pad_ht = (((self.ht // 8) + 1) * 8 - self.ht) % 8 13 | pad_wd = (((self.wd // 8) + 1) * 8 - self.wd) % 8 14 | if mode == 'sintel': 15 | self._pad = [pad_wd//2, pad_wd - pad_wd//2, pad_ht//2, pad_ht - pad_ht//2] 16 | else: 17 | self._pad = [pad_wd//2, pad_wd - pad_wd//2, 0, pad_ht] 18 | 19 | def pad(self, *inputs): 20 | return [F.pad(x, self._pad, mode='replicate') for x in inputs] 21 | 22 | def unpad(self,x): 23 | ht, wd = x.shape[-2:] 24 | c = [self._pad[2], ht-self._pad[3], self._pad[0], wd-self._pad[1]] 25 | return x[..., c[0]:c[1], c[2]:c[3]] 26 | 27 | 28 | def forward_interpolate(flow): 29 | flow = flow.detach().cpu().numpy() 30 | dx, dy = flow[0], flow[1] 31 | 32 | ht, wd = dx.shape 33 | x0, y0 = np.meshgrid(np.arange(wd), np.arange(ht)) 34 | 35 | x1 = x0 + dx 36 | y1 = y0 + dy 37 | 38 | x1 = x1.reshape(-1) 39 | y1 = y1.reshape(-1) 40 | dx = dx.reshape(-1) 41 | dy = dy.reshape(-1) 42 | 43 | valid = (x1 > 0) & (x1 < wd) & (y1 > 0) & (y1 < ht) 44 | x1 = x1[valid] 45 | y1 = y1[valid] 46 | dx = dx[valid] 47 | dy = dy[valid] 48 | 49 | flow_x = interpolate.griddata( 50 | (x1, y1), dx, (x0, y0), method='nearest', fill_value=0) 51 | 
52 | flow_y = interpolate.griddata( 53 | (x1, y1), dy, (x0, y0), method='nearest', fill_value=0) 54 | 55 | flow = np.stack([flow_x, flow_y], axis=0) 56 | return torch.from_numpy(flow).float() 57 | 58 | 59 | def bilinear_sampler(img, coords, mode='bilinear', mask=False): 60 | """ Wrapper for grid_sample, uses pixel coordinates """ 61 | H, W = img.shape[-2:] 62 | xgrid, ygrid = coords.split([1,1], dim=-1) 63 | xgrid = 2*xgrid/(W-1) - 1 64 | ygrid = 2*ygrid/(H-1) - 1 65 | 66 | grid = torch.cat([xgrid, ygrid], dim=-1) 67 | img = F.grid_sample(img, grid, align_corners=True) 68 | 69 | if mask: 70 | mask = (xgrid > -1) & (ygrid > -1) & (xgrid < 1) & (ygrid < 1) 71 | return img, mask.float() 72 | 73 | return img 74 | 75 | 76 | def coords_grid(batch, ht, wd): 77 | coords = torch.meshgrid(torch.arange(ht), torch.arange(wd)) 78 | coords = torch.stack(coords[::-1], dim=0).float() 79 | return coords[None].expand(batch, -1, -1, -1) 80 | 81 | 82 | def coords_grid_y_first(batch, ht, wd): 83 | """Place y grid before x grid""" 84 | coords = torch.meshgrid(torch.arange(ht), torch.arange(wd)) 85 | coords = torch.stack(coords, dim=0).int() 86 | return coords[None].expand(batch, -1, -1, -1) 87 | 88 | 89 | def soft_argmax(corr_me, B, H1, W1): 90 | # Implement soft argmin 91 | coords, feats = corr_me.decomposed_coordinates_and_features 92 | 93 | # Computing soft argmin 94 | flow_pred = torch.zeros(B, 2, H1, W1).to(corr_me.device) 95 | for batch, (coord, feat) in enumerate(zip(coords, feats)): 96 | coord_img_1 = coord[:, :2].to(corr_me.device) 97 | coord_img_2 = coord[:, 2:].to(corr_me.device) 98 | # relative positions (flow hypotheses) 99 | rel_pos = (coord_img_2 - coord_img_1) 100 | # augmented indices 101 | aug_coord_img_1 = (coord_img_1[:, 0:1] * W1 + coord_img_1[:, 1:2]).long() 102 | # run softmax on the score 103 | weight = scatter_softmax(feat, aug_coord_img_1, dim=0) 104 | rel_pos_weighted = weight * rel_pos 105 | out = scatter_add(rel_pos_weighted, aug_coord_img_1, dim=0) 106 | # Need to permute (y, x) to (x, y) for flow 107 | flow_pred[batch] = out[:, [1,0]].view(H1, W1, 2).permute(2, 0, 1) 108 | return flow_pred 109 | 110 | 111 | def upflow8(flow, mode='bilinear'): 112 | new_size = (8 * flow.shape[2], 8 * flow.shape[3]) 113 | return 8 * F.interpolate(flow, size=new_size, mode=mode, align_corners=True) 114 | 115 | 116 | def upflow4(flow, mode='bilinear'): 117 | new_size = (4 * flow.shape[2], 4 * flow.shape[3]) 118 | return 4 * F.interpolate(flow, size=new_size, mode=mode, align_corners=True) 119 | 120 | 121 | def upflow2(flow, mode='bilinear'): 122 | new_size = (2 * flow.shape[2], 2 * flow.shape[3]) 123 | return 2 * F.interpolate(flow, size=new_size, mode=mode, align_corners=True) 124 | 125 | 126 | def downflow8(flow, mode='bilinear'): 127 | new_size = (flow.shape[2] // 8, flow.shape[3] // 8) 128 | return F.interpolate(flow, size=new_size, mode=mode, align_corners=True) / 8 129 | 130 | 131 | def downflow4(flow, mode='bilinear'): 132 | new_size = (flow.shape[2] // 4, flow.shape[3] // 4) 133 | return F.interpolate(flow, size=new_size, mode=mode, align_corners=True) / 4 134 | 135 | 136 | def compute_interpolation_weights(yx_warped): 137 | # yx_warped: [N, 2] 138 | y_warped = yx_warped[:, 0] 139 | x_warped = yx_warped[:, 1] 140 | 141 | # elementwise operations below 142 | y_f = torch.floor(y_warped) 143 | y_c = y_f + 1 144 | x_f = torch.floor(x_warped) 145 | x_c = x_f + 1 146 | 147 | w0 = (y_c - y_warped) * (x_c - x_warped) 148 | w1 = (y_warped - y_f) * (x_c - x_warped) 149 | w2 = (y_c - y_warped) * 
(x_warped - x_f) 150 | w3 = (y_warped - y_f) * (x_warped - x_f) 151 | 152 | weights = [w0, w1, w2, w3] 153 | indices = [torch.stack([y_f, x_f], dim=1), torch.stack([y_c, x_f], dim=1), 154 | torch.stack([y_f, x_c], dim=1), torch.stack([y_c, x_c], dim=1)] 155 | weights = torch.cat(weights, dim=1) 156 | indices = torch.cat(indices, dim=2) 157 | # indices = torch.cat(indices, dim=0) # [4*N, 2] 158 | 159 | return weights, indices 160 | 161 | # weights, indices = compute_interpolation_weights(xy_warped, b, h_i, w_i) 162 | 163 | 164 | def compute_inverse_interpolation_img(weights, indices, img, b, h_i, w_i): 165 | """ 166 | weights: [b, h*w] 167 | indices: [b, h*w] 168 | img: [b, h*w, a, b, c, ...] 169 | """ 170 | w0, w1, w2, w3 = weights 171 | ff_idx, cf_idx, fc_idx, cc_idx = indices 172 | 173 | k = len(img.size()) - len(w0.size()) 174 | img_0 = w0[(...,) + (None,) * k] * img 175 | img_1 = w1[(...,) + (None,) * k] * img 176 | img_2 = w2[(...,) + (None,) * k] * img 177 | img_3 = w3[(...,) + (None,) * k] * img 178 | 179 | img_out = torch.zeros(b, h_i * w_i, *img.shape[2:]).type_as(img) 180 | 181 | ff_idx = torch.clamp(ff_idx, min=0, max=h_i * w_i - 1) 182 | cf_idx = torch.clamp(cf_idx, min=0, max=h_i * w_i - 1) 183 | fc_idx = torch.clamp(fc_idx, min=0, max=h_i * w_i - 1) 184 | cc_idx = torch.clamp(cc_idx, min=0, max=h_i * w_i - 1) 185 | 186 | img_out.scatter_add_(1, ff_idx[(...,) + (None,) * k].expand_as(img_0), img_0) 187 | img_out.scatter_add_(1, cf_idx[(...,) + (None,) * k].expand_as(img_1), img_1) 188 | img_out.scatter_add_(1, fc_idx[(...,) + (None,) * k].expand_as(img_2), img_2) 189 | img_out.scatter_add_(1, cc_idx[(...,) + (None,) * k].expand_as(img_3), img_3) 190 | 191 | return img_out # [b, h_i*w_i, ...] 192 | -------------------------------------------------------------------------------- /evaluate.py: -------------------------------------------------------------------------------- 1 | import sys 2 | sys.path.append('core') 3 | 4 | import argparse 5 | import os 6 | import time 7 | import numpy as np 8 | import torch 9 | 10 | from sparsenet import SparseNet 11 | # from sparsenet import SparseNetEighth 12 | 13 | import datasets 14 | from utils import frame_utils 15 | 16 | from utils.utils import InputPadder, forward_interpolate 17 | 18 | 19 | @torch.no_grad() 20 | def create_sintel_submission(model, warm_start=False, output_path='sintel_submission'): 21 | """ Create submission for the Sintel leaderboard """ 22 | model.eval() 23 | for dstype in ['clean', 'final']: 24 | test_dataset = datasets.MpiSintel(split='test', aug_params=None, dstype=dstype) 25 | 26 | flow_prev, sequence_prev = None, None 27 | for test_id in range(len(test_dataset)): 28 | image1, image2, (sequence, frame) = test_dataset[test_id] 29 | if sequence != sequence_prev: 30 | flow_prev = None 31 | 32 | padder = InputPadder(image1.shape) 33 | image1, image2 = padder.pad(image1[None].to(f'cuda:{model.device_ids[0]}'), image2[None].to(f'cuda:{model.device_ids[0]}')) 34 | 35 | flow_low, flow_pr = model.module(image1, image2, iters=32, flow_init=flow_prev, test_mode=True) 36 | flow = padder.unpad(flow_pr[0]).permute(1, 2, 0).cpu().numpy() 37 | 38 | if warm_start: 39 | flow_prev = forward_interpolate(flow_low[0])[None].cuda() 40 | 41 | output_dir = os.path.join(output_path, dstype, sequence) 42 | output_file = os.path.join(output_dir, 'frame%04d.flo' % (frame+1)) 43 | 44 | if not os.path.exists(output_dir): 45 | os.makedirs(output_dir) 46 | 47 | frame_utils.writeFlow(output_file, flow) 48 | sequence_prev = sequence 49 | 50 | 51 | 
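# The KITTI helper below follows the same pattern as the Sintel one, but pads the inputs
# with InputPadder(mode='kitti') and writes each prediction through frame_utils.writeFlowKITTI,
# which stores the flow as a 16-bit image (64 * uv + 2**15) plus an all-ones validity channel.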
@torch.no_grad() 52 | def create_kitti_submission(model, output_path='kitti_submission'): 53 | """ Create submission for the Sintel leaderboard """ 54 | model.eval() 55 | test_dataset = datasets.KITTI(split='testing', aug_params=None) 56 | 57 | if not os.path.exists(output_path): 58 | os.makedirs(output_path) 59 | 60 | for test_id in range(len(test_dataset)): 61 | image1, image2, (frame_id, ) = test_dataset[test_id] 62 | padder = InputPadder(image1.shape, mode='kitti') 63 | image1, image2 = padder.pad(image1[None].to(f'cuda:{model.device_ids[0]}'), image2[None].to(f'cuda:{model.device_ids[0]}')) 64 | 65 | flow_pr = model.module(image1, image2) 66 | flow = padder.unpad(flow_pr[0]).permute(1, 2, 0).cpu().numpy() 67 | 68 | output_filename = os.path.join(output_path, frame_id) 69 | frame_utils.writeFlowKITTI(output_filename, flow) 70 | 71 | 72 | @torch.no_grad() 73 | def validate_chairs(model, iters=6): 74 | """ Perform evaluation on the FlyingChairs (test) split """ 75 | model.eval() 76 | epe_list = [] 77 | 78 | val_dataset = datasets.FlyingChairs(split='validation') 79 | for val_id in range(len(val_dataset)): 80 | image1, image2, flow_gt, _ = val_dataset[val_id] 81 | image1 = image1[None].cuda() 82 | image2 = image2[None].cuda() 83 | 84 | flow_pr = model(image1, image2, iters=iters, test_mode=True) 85 | epe = torch.sum((flow_pr[0].cpu() - flow_gt)**2, dim=0).sqrt() 86 | epe_list.append(epe.view(-1).numpy()) 87 | 88 | torch.cuda.empty_cache() 89 | 90 | epe = np.mean(np.concatenate(epe_list)) 91 | print("Validation Chairs EPE: %f" % epe) 92 | return {'chairs_epe': epe} 93 | 94 | 95 | @torch.no_grad() 96 | def validate_sintel(model, iters=6): 97 | """ Peform validation using the Sintel (train) split """ 98 | model.eval() 99 | results = {} 100 | for dstype in ['clean', 'final']: 101 | val_dataset = datasets.MpiSintel(split='training', dstype=dstype) 102 | epe_list = [] 103 | 104 | for val_id in range(len(val_dataset)): 105 | image1, image2, flow_gt, _ = val_dataset[val_id] 106 | image1 = image1[None].cuda() 107 | image2 = image2[None].cuda() 108 | 109 | padder = InputPadder(image1.shape) 110 | image1, image2 = padder.pad(image1, image2) 111 | 112 | flow_pr = model(image1, image2, iters=iters, test_mode=True) 113 | flow = padder.unpad(flow_pr[0]).cpu() 114 | 115 | epe = torch.sum((flow - flow_gt)**2, dim=0).sqrt() 116 | epe_list.append(epe.view(-1).numpy()) 117 | 118 | epe_all = np.concatenate(epe_list) 119 | 120 | epe = np.mean(epe_all) 121 | px1 = np.mean(epe_all<1) 122 | px3 = np.mean(epe_all<3) 123 | px5 = np.mean(epe_all<5) 124 | 125 | print("Validation (%s) EPE: %f, 1px: %f, 3px: %f, 5px: %f" % (dstype, epe, px1, px3, px5)) 126 | results[dstype] = np.mean(epe_list) 127 | 128 | return results 129 | 130 | 131 | @torch.no_grad() 132 | def validate_kitti(model, iters=6): 133 | """ Peform validation using the KITTI-2015 (train) split """ 134 | model.eval() 135 | val_dataset = datasets.KITTI(split='training') 136 | 137 | out_list, epe_list = [], [] 138 | for val_id in range(len(val_dataset)): 139 | image1, image2, flow_gt, valid_gt = val_dataset[val_id] 140 | image1 = image1[None].cuda() 141 | image2 = image2[None].cuda() 142 | 143 | padder = InputPadder(image1.shape, mode='kitti') 144 | image1, image2 = padder.pad(image1, image2) 145 | 146 | flow_pr = model(image1, image2, iters=iters, test_mode=True) 147 | flow = padder.unpad(flow_pr[0]).cpu() 148 | 149 | epe = torch.sum((flow - flow_gt)**2, dim=0).sqrt() 150 | mag = torch.sum(flow_gt**2, dim=0).sqrt() 151 | 152 | epe = epe.view(-1) 153 | mag = 
mag.view(-1) 154 | val = valid_gt.view(-1) >= 0.5 155 | 156 | out = ((epe > 3.0) & ((epe/mag) > 0.05)).float() 157 | epe_list.append(epe[val].mean().item()) 158 | out_list.append(out[val].cpu().numpy()) 159 | 160 | epe_list = np.array(epe_list) 161 | out_list = np.concatenate(out_list) 162 | 163 | epe = np.mean(epe_list) 164 | f1 = 100 * np.mean(out_list) 165 | 166 | print("Validation KITTI: %f, %f" % (epe, f1)) 167 | return {'kitti_epe': epe, 'kitti_f1': f1} 168 | 169 | 170 | if __name__ == '__main__': 171 | parser = argparse.ArgumentParser() 172 | parser.add_argument('--model', help="restore checkpoint") 173 | parser.add_argument('--dataset', help="dataset for evaluation") 174 | parser.add_argument('--num_k', type=int, default=8, 175 | help='number of hypotheses to compute for knn Faiss') 176 | parser.add_argument('--mixed_precision', default=True, help='use mixed precision') 177 | 178 | args = parser.parse_args() 179 | 180 | model = torch.nn.DataParallel(SparseNet(args)) 181 | model.load_state_dict(torch.load(args.model)) 182 | 183 | model.cuda() 184 | model.eval() 185 | 186 | # create_sintel_submission(model.module, warm_start=True) 187 | # create_kitti_submission(model.module) 188 | 189 | with torch.no_grad(): 190 | if args.dataset == 'chairs': 191 | validate_chairs(model.module, iters=12) 192 | 193 | elif args.dataset == 'sintel': 194 | validate_sintel(model.module, iters=32) 195 | 196 | elif args.dataset == 'kitti': 197 | validate_kitti(model.module, iters=24) 198 | -------------------------------------------------------------------------------- /evaluate_vis.py: -------------------------------------------------------------------------------- 1 | import sys 2 | sys.path.append('core') 3 | 4 | from PIL import Image 5 | import argparse 6 | import os 7 | import time 8 | import numpy as np 9 | import torch 10 | import torch.nn.functional as F 11 | import matplotlib.pyplot as plt 12 | 13 | from sparsenet import SparseNet 14 | 15 | import datasets 16 | from utils import flow_viz 17 | from utils import frame_utils 18 | 19 | from utils.utils import InputPadder, forward_interpolate 20 | 21 | import imageio 22 | from PIL import Image, ImageDraw, ImageFont 23 | 24 | import csv 25 | 26 | @torch.no_grad() 27 | def create_sintel_submission(model, warm_start=False, output_path='sintel_submission'): 28 | """ Create submission for the Sintel leaderboard """ 29 | model.eval() 30 | for dstype in ['clean', 'final']: 31 | test_dataset = datasets.MpiSintel(split='test', aug_params=None, dstype=dstype) 32 | 33 | flow_prev, sequence_prev = None, None 34 | for test_id in range(len(test_dataset)): 35 | image1, image2, (sequence, frame) = test_dataset[test_id] 36 | if sequence != sequence_prev: 37 | flow_prev = None 38 | 39 | padder = InputPadder(image1.shape) 40 | image1, image2 = padder.pad(image1[None].to(f'cuda:{model.device_ids[0]}'), image2[None].to(f'cuda:{model.device_ids[0]}')) 41 | 42 | flow_low, flow_pr = model.module(image1, image2, iters=32, flow_init=flow_prev, test_mode=True) 43 | flow = padder.unpad(flow_pr[0]).permute(1, 2, 0).cpu().numpy() 44 | 45 | if warm_start: 46 | flow_prev = forward_interpolate(flow_low[0])[None].cuda() 47 | 48 | output_dir = os.path.join(output_path, dstype, sequence) 49 | output_file = os.path.join(output_dir, 'frame%04d.flo' % (frame+1)) 50 | 51 | if not os.path.exists(output_dir): 52 | os.makedirs(output_dir) 53 | 54 | frame_utils.writeFlow(output_file, flow) 55 | sequence_prev = sequence 56 | 57 | 58 | @torch.no_grad() 59 | def create_sintel_submission_vis(model, 
warm_start=False, output_path='sintel_submission'): 60 | """ Create submission for the Sintel leaderboard """ 61 | model.eval() 62 | for dstype in ['clean', 'final']: 63 | test_dataset = datasets.MpiSintel(split='test', aug_params=None, dstype=dstype) 64 | 65 | flow_prev, sequence_prev = None, None 66 | for test_id in range(len(test_dataset)): 67 | image1, image2, (sequence, frame) = test_dataset[test_id] 68 | if sequence != sequence_prev: 69 | flow_prev = None 70 | 71 | padder = InputPadder(image1.shape) 72 | image1, image2 = padder.pad(image1[None].to(f'cuda:{model.device_ids[0]}'), image2[None].to(f'cuda:{model.device_ids[0]}')) 73 | 74 | flow_low, flow_pr = model.module(image1, image2, iters=32, flow_init=flow_prev, test_mode=True) 75 | flow = padder.unpad(flow_pr[0]).permute(1, 2, 0).cpu().numpy() 76 | 77 | # Visualizations 78 | flow_img = flow_viz.flow_to_image(flow) 79 | image = Image.fromarray(flow_img) 80 | if not os.path.exists(f'vis/RAFT/{dstype}/'): 81 | os.makedirs(f'vis/RAFT/{dstype}/flow') 82 | os.makedirs(f'vis/RAFT/{dstype}/error') 83 | 84 | if not os.path.exists(f'vis/ours/{dstype}/'): 85 | os.makedirs(f'vis/ours/{dstype}/flow') 86 | os.makedirs(f'vis/ours/{dstype}/error') 87 | 88 | if not os.path.exists(f'vis/gt/{dstype}/'): 89 | os.makedirs(f'vis/gt/{dstype}/flow') 90 | os.makedirs(f'vis/gt/{dstype}/image') 91 | 92 | image.save(f'vis/RAFT/{dstype}/flow/{test_id}.png') 93 | imageio.imwrite(f'vis/gt/{dstype}/image/{test_id}.png', image1[0].cpu().permute(1, 2, 0).numpy()) 94 | if warm_start: 95 | flow_prev = forward_interpolate(flow_low[0])[None].cuda() 96 | 97 | output_dir = os.path.join(output_path, dstype, sequence) 98 | output_file = os.path.join(output_dir, 'frame%04d.flo' % (frame+1)) 99 | 100 | if not os.path.exists(output_dir): 101 | os.makedirs(output_dir) 102 | 103 | frame_utils.writeFlow(output_file, flow) 104 | sequence_prev = sequence 105 | 106 | 107 | @torch.no_grad() 108 | def create_kitti_submission(model, output_path='kitti_submission'): 109 | """ Create submission for the Sintel leaderboard """ 110 | model.eval() 111 | test_dataset = datasets.KITTI(split='testing', aug_params=None) 112 | 113 | if not os.path.exists(output_path): 114 | os.makedirs(output_path) 115 | 116 | for test_id in range(len(test_dataset)): 117 | image1, image2, (frame_id, ) = test_dataset[test_id] 118 | padder = InputPadder(image1.shape, mode='kitti') 119 | image1, image2 = padder.pad(image1[None].to(f'cuda:{model.device_ids[0]}'), image2[None].to(f'cuda:{model.device_ids[0]}')) 120 | 121 | flow_pr = model.module(image1, image2) 122 | flow = padder.unpad(flow_pr[0]).permute(1, 2, 0).cpu().numpy() 123 | 124 | output_filename = os.path.join(output_path, frame_id) 125 | frame_utils.writeFlowKITTI(output_filename, flow) 126 | 127 | 128 | @torch.no_grad() 129 | def validate_chairs(model, iters=6): 130 | """ Perform evaluation on the FlyingChairs (test) split """ 131 | model.eval() 132 | epe_list = [] 133 | 134 | val_dataset = datasets.FlyingChairs(split='validation') 135 | for val_id in range(len(val_dataset)): 136 | image1, image2, flow_gt, _ = val_dataset[val_id] 137 | image1 = image1[None].to(f'cuda:{model.device_ids[0]}') 138 | image2 = image2[None].to(f'cuda:{model.device_ids[0]}') 139 | 140 | flow_pr = model.module(image1, image2, iters=iters) 141 | epe = torch.sum((flow_pr[0].cpu() - flow_gt)**2, dim=0).sqrt() 142 | epe_list.append(epe.view(-1).numpy()) 143 | 144 | torch.cuda.empty_cache() 145 | 146 | epe = np.mean(np.concatenate(epe_list)) 147 | print("Validation Chairs EPE: %f" % 
epe) 148 | return {'chairs_epe': epe} 149 | 150 | 151 | @torch.no_grad() 152 | def gen_sintel_image(): 153 | """ Peform validation using the Sintel (train) split """ 154 | for dstype in ['clean', 'final']: 155 | val_dataset = datasets.MpiSintel(split='training', dstype=dstype) 156 | for val_id in range(len(val_dataset)): 157 | image1, image2, flow_gt, _, (sequence, frame) = val_dataset[val_id] 158 | image1 = image1.byte().permute(1,2,0) 159 | imageio.imwrite(f'vis/image/{dstype}/{val_id}.png', image1) 160 | 161 | 162 | @torch.no_grad() 163 | def gen_sintel_gt(): 164 | """ Peform validation using the Sintel (train) split """ 165 | for dstype in ['clean', 'final']: 166 | val_dataset = datasets.MpiSintel(split='training', dstype=dstype) 167 | for val_id in range(len(val_dataset)): 168 | image1, image2, flow_gt, _, (sequence, frame) = val_dataset[val_id] 169 | 170 | # Visualizations 171 | output_flow = flow_gt.permute(1, 2, 0).numpy() 172 | flow_img = flow_viz.flow_to_image(output_flow) 173 | imageio.imwrite(f'vis/gt/{dstype}/{val_id}.png', flow_img) 174 | 175 | 176 | @torch.no_grad() 177 | def validate_sintel(model, warm_start=False, iters=6): 178 | """ Peform validation using the Sintel (train) split """ 179 | model.eval() 180 | results = {} 181 | font = ImageFont.truetype("FUTURAL.ttf", 40) 182 | for dstype in ['clean', 'final']: 183 | val_dataset = datasets.MpiSintel(split='training', dstype=dstype) 184 | epe_list = [] 185 | flow_prev, sequence_prev = None, None 186 | for val_id in range(len(val_dataset)): 187 | image1, image2, flow_gt, _, (sequence, frame) = val_dataset[val_id] 188 | image1 = image1[None].to(f'cuda:{model.device_ids[0]}') 189 | image2 = image2[None].to(f'cuda:{model.device_ids[0]}') 190 | if sequence != sequence_prev: 191 | flow_prev = None 192 | 193 | padder = InputPadder(image1.shape) 194 | image1, image2 = padder.pad(image1, image2) 195 | 196 | flow_low, flow_pr = model.module(image1, image2, iters=iters, flow_init=flow_prev, test_mode=True) 197 | flow = padder.unpad(flow_pr[0]).cpu() 198 | 199 | if warm_start: 200 | flow_prev = forward_interpolate(flow_low[0])[None].cuda() 201 | 202 | epe = torch.sum((flow - flow_gt)**2, dim=0).sqrt() 203 | epe_list.append(epe.view(-1).numpy()) 204 | 205 | sequence_prev = sequence 206 | 207 | # Visualizations 208 | output_flow = flow.permute(1, 2, 0).numpy() 209 | flow_img = flow_viz.flow_to_image(output_flow) 210 | image = Image.fromarray(flow_img) 211 | draw = ImageDraw.Draw(image) 212 | draw.text((10, 10), f'EPE: {epe.view(-1).mean().item():.2f}', (0, 0, 0), font=font) 213 | if not os.path.exists(f'vis/RAFT/{dstype}/'): 214 | os.makedirs(f'vis/RAFT/{dstype}/flow') 215 | os.makedirs(f'vis/RAFT/{dstype}/error') 216 | 217 | if not os.path.exists(f'vis/ours/{dstype}/'): 218 | os.makedirs(f'vis/ours/{dstype}/flow') 219 | os.makedirs(f'vis/ours/{dstype}/error') 220 | 221 | if not os.path.exists(f'vis/gt/{dstype}/'): 222 | os.makedirs(f'vis/gt/{dstype}/flow') 223 | os.makedirs(f'vis/gt/{dstype}/image') 224 | 225 | # image.save(f'vis/RAFT/{dstype}/flow/{val_id}_{epe.view(-1).mean().item():.3f}.png') 226 | # imageio.imwrite(f'vis/RAFT/{dstype}/error/{val_id}_{epe.view(-1).mean().item():.3f}.png', epe.numpy()) 227 | 228 | # image.save(f'vis/ours/{dstype}/flow/{val_id}_{epe.view(-1).mean().item():.3f}.png') 229 | # imageio.imwrite(f'vis/ours/{dstype}/error/{val_id}_{epe.view(-1).mean().item():.3f}.png', epe.numpy()) 230 | 231 | # flow_gt_vis = flow_gt.permute(1, 2, 0).numpy() 232 | # flow_gt_vis = flow_viz.flow_to_image(flow_gt_vis) 233 | # 
imageio.imwrite(f'vis/gt/{dstype}/flow/{val_id}.png', flow_gt_vis) 234 | # imageio.imwrite(f'vis/gt/{dstype}/image/{val_id}.png', image1[0].cpu().permute(1,2,0).numpy()) 235 | 236 | epe_all = np.concatenate(epe_list) 237 | 238 | epe = np.mean(epe_all) 239 | px1 = np.mean(epe_all<1) 240 | px3 = np.mean(epe_all<3) 241 | px5 = np.mean(epe_all<5) 242 | 243 | print("Validation (%s) EPE: %f, 1px: %f, 3px: %f, 5px: %f" % (dstype, epe, px1, px3, px5)) 244 | results[dstype] = np.mean(epe_list) 245 | 246 | return results 247 | 248 | 249 | @torch.no_grad() 250 | def validate_sintel_sequence(model, warm_start=False, iters=6): 251 | """ Peform validation using the Sintel (train) split """ 252 | model.eval() 253 | results = {} 254 | font = ImageFont.truetype("FUTURAL.ttf", 40) 255 | for dstype in ['clean', 'final']: 256 | val_dataset = datasets.MpiSintel(split='training', dstype=dstype) 257 | epe_list = [] 258 | flow_prev, sequence_prev = None, None 259 | 260 | all_seq_epe_list = [] 261 | per_seq_epe_list = [] 262 | for val_id in range(len(val_dataset)): 263 | image1, image2, flow_gt, _, (sequence, frame) = val_dataset[val_id] 264 | image1 = image1[None].to(f'cuda:{model.device_ids[0]}') 265 | image2 = image2[None].to(f'cuda:{model.device_ids[0]}') 266 | if sequence != sequence_prev: 267 | flow_prev = None 268 | if val_id != 0: 269 | all_seq_epe_list.append(per_seq_epe_list) 270 | per_seq_epe_list = [] 271 | 272 | padder = InputPadder(image1.shape) 273 | image1, image2 = padder.pad(image1, image2) 274 | 275 | flow_low, flow_pr = model.module(image1, image2, iters=iters, flow_init=flow_prev, test_mode=True) 276 | flow = padder.unpad(flow_pr[0]).cpu() 277 | 278 | if warm_start: 279 | flow_prev = forward_interpolate(flow_low[0])[None].cuda() 280 | 281 | epe = torch.sum((flow - flow_gt)**2, dim=0).sqrt() 282 | epe_list.append(epe.view(-1).numpy()) 283 | 284 | per_seq_epe_list.append(epe.view(-1).numpy().mean()) 285 | sequence_prev = sequence 286 | 287 | all_seq_epe_list.append(per_seq_epe_list) 288 | with open(f'{dstype}_seq_epe.csv', 'w') as f: 289 | wr = csv.writer(f) 290 | wr.writerows(all_seq_epe_list) 291 | 292 | epe_all = np.concatenate(epe_list) 293 | 294 | epe = np.mean(epe_all) 295 | px1 = np.mean(epe_all<1) 296 | px3 = np.mean(epe_all<3) 297 | px5 = np.mean(epe_all<5) 298 | 299 | print("Validation (%s) EPE: %f, 1px: %f, 3px: %f, 5px: %f" % (dstype, epe, px1, px3, px5)) 300 | results[dstype] = np.mean(epe_list) 301 | 302 | return results 303 | 304 | 305 | @torch.no_grad() 306 | def validate_kitti(model, iters=6): 307 | """ Peform validation using the KITTI-2015 (train) split """ 308 | model.eval() 309 | val_dataset = datasets.KITTI(split='training') 310 | 311 | out_list, epe_list = [], [] 312 | for val_id in range(len(val_dataset)): 313 | image1, image2, flow_gt, valid_gt = val_dataset[val_id] 314 | image1 = image1[None].to(f'cuda:{model.device_ids[0]}') 315 | image2 = image2[None].to(f'cuda:{model.device_ids[0]}') 316 | 317 | padder = InputPadder(image1.shape, mode='kitti') 318 | image1, image2 = padder.pad(image1, image2) 319 | 320 | _, flow_pr = model.module(image1, image2, iters=iters, test_mode=True) 321 | flow = padder.unpad(flow_pr[0]).cpu() 322 | 323 | epe = torch.sum((flow - flow_gt)**2, dim=0).sqrt() 324 | mag = torch.sum(flow_gt**2, dim=0).sqrt() 325 | 326 | epe = epe.view(-1) 327 | mag = mag.view(-1) 328 | val = valid_gt.view(-1) >= 0.5 329 | 330 | out = ((epe > 3.0) & ((epe/mag) > 0.05)).float() 331 | epe_list.append(epe[val].mean().item()) 332 | out_list.append(out[val].cpu().numpy()) 
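        # 'out' marks KITTI outliers: pixels whose EPE exceeds both 3 px and 5% of the
        # ground-truth flow magnitude; averaged over valid pixels it gives the f1 value reported below.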
333 | 334 | epe_list = np.array(epe_list) 335 | out_list = np.concatenate(out_list) 336 | 337 | epe = np.mean(epe_list) 338 | f1 = 100 * np.mean(out_list) 339 | 340 | print("Validation KITTI: %f, %f" % (epe, f1)) 341 | return {'kitti_epe': epe, 'kitti_f1': f1} 342 | 343 | 344 | if __name__ == '__main__': 345 | parser = argparse.ArgumentParser() 346 | parser.add_argument('--model', help="restore checkpoint") 347 | parser.add_argument('--dataset', help="dataset for evaluation") 348 | parser.add_argument('--num_k', type=int, default=32, 349 | help='number of hypotheses to compute for knn Faiss') 350 | parser.add_argument('--max_search_range', type=int, default=100, 351 | help='maximum search range for hypotheses in quarter resolution') 352 | parser.add_argument('--mixed_precision', default=True, help='use mixed precision') 353 | parser.add_argument('--small', action='store_true', help='use small model') 354 | parser.add_argument('--alternate_corr', action='store_true', help='use efficent correlation implementation') 355 | args = parser.parse_args() 356 | 357 | model = torch.nn.DataParallel(SparseNet(args)) 358 | model.load_state_dict(torch.load(args.model)) 359 | 360 | model.to(f'cuda:{model.device_ids[0]}') 361 | model.eval() 362 | 363 | # create_sintel_submission(model, warm_start=True) 364 | # create_kitti_submission(model) 365 | # create_sintel_submission_vis(model, warm_start=True) 366 | 367 | # gen_sintel_image() 368 | # gen_sintel_gt() 369 | 370 | with torch.no_grad(): 371 | if args.dataset == 'chairs': 372 | validate_chairs(model, iters=24) 373 | 374 | elif args.dataset == 'sintel': 375 | validate_sintel(model, warm_start=False, iters=32) 376 | # validate_sintel_sequence(model, warm_start=False, iters=32) 377 | 378 | elif args.dataset == 'kitti': 379 | validate_kitti(model, iters=24) 380 | -------------------------------------------------------------------------------- /scv.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zacjiang/SCV/9f809910e17125701b28dc1054fbc7648b801957/scv.png -------------------------------------------------------------------------------- /train.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function, division 2 | import sys 3 | sys.path.append('core') 4 | 5 | import argparse 6 | import os 7 | import cv2 8 | import time 9 | import numpy as np 10 | import matplotlib 11 | matplotlib.use('Agg') 12 | import matplotlib.pyplot as plt 13 | 14 | import torch 15 | import torch.nn as nn 16 | import torch.optim as optim 17 | 18 | from sparsenet import SparseNet 19 | from utils import flow_viz 20 | import datasets 21 | import evaluate 22 | 23 | from torch.cuda.amp import GradScaler 24 | 25 | # exclude extremly large displacements 26 | MAX_FLOW = 400 27 | 28 | 29 | def convert_flow_to_image(image1, flow): 30 | flow = flow.permute(1, 2, 0).cpu().numpy() 31 | flow_image = flow_viz.flow_to_image(flow) 32 | flow_image = cv2.resize(flow_image, (image1.shape[3], image1.shape[2])) 33 | return flow_image 34 | 35 | 36 | def count_parameters(model): 37 | return sum(p.numel() for p in model.parameters() if p.requires_grad) 38 | 39 | 40 | def sequence_loss(flow_preds, flow_gt, valid, gamma): 41 | """ Loss function defined over sequence of flow predictions """ 42 | 43 | n_predictions = len(flow_preds) 44 | flow_loss = 0.0 45 | 46 | # exclude invalid pixels and extremely large displacements 47 | valid = (valid >= 0.5) & 
((flow_gt**2).sum(dim=1).sqrt() < MAX_FLOW) 48 | 49 | for i in range(n_predictions): 50 | i_weight = gamma**(n_predictions - i - 1) 51 | i_loss = (flow_preds[i] - flow_gt).abs() 52 | flow_loss += i_weight * (valid[:, None] * i_loss).mean() 53 | 54 | epe = torch.sum((flow_preds[-1] - flow_gt)**2, dim=1).sqrt() 55 | epe = epe.view(-1)[valid.view(-1)] 56 | 57 | metrics = { 58 | 'epe': epe.mean().item(), 59 | '1px': (epe < 1).float().mean().item(), 60 | '3px': (epe < 3).float().mean().item(), 61 | '5px': (epe < 5).float().mean().item(), 62 | } 63 | 64 | return flow_loss, metrics 65 | 66 | 67 | def fetch_optimizer(args, model): 68 | """ Create the optimizer and learning rate scheduler """ 69 | optimizer = optim.AdamW(model.parameters(), lr=args.lr, weight_decay=args.wdecay, eps=args.epsilon) 70 | 71 | scheduler = optim.lr_scheduler.OneCycleLR(optimizer=optimizer, max_lr=args.lr, total_steps=args.num_steps+100, 72 | pct_start=0.05, cycle_momentum=False, anneal_strategy='linear') 73 | 74 | return optimizer, scheduler 75 | 76 | 77 | class Logger: 78 | def __init__(self, model, scheduler, args): 79 | self.model = model 80 | self.args = args 81 | self.scheduler = scheduler 82 | self.total_steps = 0 83 | self.running_loss_dict = {} 84 | self.train_epe_list = [] 85 | self.train_steps_list = [] 86 | self.val_steps_list = [] 87 | self.val_results_dict = {} 88 | 89 | def _print_training_status(self): 90 | metrics_data = [np.mean(self.running_loss_dict[k]) for k in sorted(self.running_loss_dict.keys())] 91 | training_str = "[{:6d}, {:10.7f}] ".format(self.total_steps+1, self.scheduler.get_lr()[0]) 92 | metrics_str = ("{:10.4f}, "*len(metrics_data[:-1])).format(*metrics_data[:-1]) 93 | 94 | # Compute time left 95 | time_left_sec = (self.args.num_steps - (self.total_steps+1)) * metrics_data[-1] 96 | time_left_sec = time_left_sec.astype(np.int) 97 | time_left_hms = "{:02d}h{:02d}m{:02d}s".format(time_left_sec // 3600, time_left_sec % 3600 // 60, time_left_sec % 3600 % 60) 98 | time_left_hms = f"{time_left_hms:>12}" 99 | # print the training status 100 | print(training_str + metrics_str + time_left_hms) 101 | 102 | # logging running loss to total loss 103 | self.train_epe_list.append(np.mean(self.running_loss_dict['epe'])) 104 | self.train_steps_list.append(self.total_steps) 105 | 106 | for key in self.running_loss_dict: 107 | self.running_loss_dict[key] = [] 108 | 109 | def push(self, metrics): 110 | self.total_steps += 1 111 | for key in metrics: 112 | if key not in self.running_loss_dict: 113 | self.running_loss_dict[key] = [] 114 | 115 | self.running_loss_dict[key].append(metrics[key]) 116 | 117 | if self.total_steps % self.args.print_freq == self.args.print_freq-1: 118 | self._print_training_status() 119 | self.running_loss_dict = {} 120 | 121 | 122 | def main(args): 123 | 124 | model = nn.DataParallel(SparseNet(args), device_ids=args.gpus) 125 | print(f"Parameter Count: {count_parameters(model)}") 126 | 127 | if args.restore_ckpt is not None: 128 | model.load_state_dict(torch.load(args.restore_ckpt), strict=False) 129 | 130 | model.cuda() 131 | model.train() 132 | 133 | # if args.stage != 'chairs': 134 | # model.module.freeze_bn() 135 | 136 | train_loader = datasets.fetch_dataloader(args) 137 | optimizer, scheduler = fetch_optimizer(args, model) 138 | 139 | scaler = GradScaler(enabled=args.mixed_precision) 140 | logger = Logger(model, scheduler, args) 141 | 142 | while logger.total_steps <= args.num_steps: 143 | train(model, train_loader, optimizer, scheduler, logger, scaler, args) 144 | if 
logger.total_steps >= args.num_steps: 145 | plot_train(logger, args) 146 | plot_val(logger, args) 147 | break 148 | 149 | PATH = args.output+f'/{args.name}.pth' 150 | torch.save(model.state_dict(), PATH) 151 | return PATH 152 | 153 | 154 | def train(model, train_loader, optimizer, scheduler, logger, scaler, args): 155 | for i_batch, data_blob in enumerate(train_loader): 156 | tic = time.time() 157 | image1, image2, flow, valid = [x.cuda() for x in data_blob] 158 | 159 | optimizer.zero_grad() 160 | 161 | flow_pred = model(image1, image2, iters=args.iters) 162 | 163 | loss, metrics = sequence_loss(flow_pred, flow, valid, args.gamma) 164 | scaler.scale(loss).backward() 165 | scaler.unscale_(optimizer) 166 | 167 | torch.nn.utils.clip_grad_norm_(model.parameters(), args.clip) 168 | scaler.step(optimizer) 169 | scheduler.step() 170 | scaler.update() 171 | toc = time.time() 172 | 173 | metrics['time'] = toc - tic 174 | logger.push(metrics) 175 | 176 | # Validate 177 | if logger.total_steps % args.val_freq == args.val_freq - 1: 178 | validate(model, args, logger) 179 | plot_train(logger, args) 180 | plot_val(logger, args) 181 | PATH = args.output + f'/{logger.total_steps+1}_{args.name}.pth' 182 | torch.save(model.state_dict(), PATH) 183 | 184 | if logger.total_steps >= args.num_steps: 185 | break 186 | 187 | 188 | def validate(model, args, logger): 189 | model.eval() 190 | results = {} 191 | 192 | # Evaluate results 193 | for val_dataset in args.validation: 194 | if val_dataset == 'chairs': 195 | results.update(evaluate.validate_chairs(model.module, args.iters)) 196 | elif val_dataset == 'sintel': 197 | results.update(evaluate.validate_sintel(model.module, args.iters)) 198 | elif val_dataset == 'kitti': 199 | results.update(evaluate.validate_kitti(model.module, args.iters)) 200 | 201 | # Record results in logger 202 | for key in results.keys(): 203 | if key not in logger.val_results_dict.keys(): 204 | logger.val_results_dict[key] = [] 205 | logger.val_results_dict[key].append(results[key]) 206 | 207 | logger.val_steps_list.append(logger.total_steps) 208 | model.train() 209 | 210 | 211 | def plot_val(logger, args): 212 | for key in logger.val_results_dict.keys(): 213 | # plot validation curve 214 | plt.figure() 215 | plt.plot(logger.val_steps_list, logger.val_results_dict[key]) 216 | plt.xlabel('x_steps') 217 | plt.ylabel(key) 218 | plt.title(f'Results for {key} for the validation set') 219 | plt.savefig(args.output+f"/{key}.png", bbox_inches='tight') 220 | plt.close() 221 | 222 | 223 | def plot_train(logger, args): 224 | # plot training curve 225 | plt.figure() 226 | plt.plot(logger.train_steps_list, logger.train_epe_list) 227 | plt.xlabel('x_steps') 228 | plt.ylabel('EPE') 229 | plt.title('Running training error (EPE)') 230 | plt.savefig(args.output+"/train_epe.png", bbox_inches='tight') 231 | plt.close() 232 | 233 | 234 | if __name__ == '__main__': 235 | parser = argparse.ArgumentParser() 236 | parser.add_argument('--name', default='bla', help="name your experiment") 237 | parser.add_argument('--stage', help="determines which dataset to use for training") 238 | parser.add_argument('--validation', type=str, nargs='+') 239 | parser.add_argument('--restore_ckpt', help="restore checkpoint") 240 | parser.add_argument('--small', action='store_true', help='use small model') 241 | parser.add_argument('--output', type=str, default='checkpoints', help='output directory to save checkpoints and plots') 242 | 243 | parser.add_argument('--lr', type=float, default=0.00002) 244 | 
parser.add_argument('--num_steps', type=int, default=100000) 245 | parser.add_argument('--batch_size', type=int, default=6) 246 | parser.add_argument('--image_size', type=int, nargs='+', default=[384, 512]) 247 | parser.add_argument('--gpus', type=int, nargs='+', default=[0, 1]) 248 | 249 | parser.add_argument('--wdecay', type=float, default=.00005) 250 | parser.add_argument('--epsilon', type=float, default=1e-8) 251 | parser.add_argument('--clip', type=float, default=1.0) 252 | parser.add_argument('--dropout', type=float, default=0.0) 253 | parser.add_argument('--upsample-learn', action='store_true', default=False, 254 | help='If True, use learned upsampling, otherwise, use bilinear upsampling.') 255 | parser.add_argument('--gamma', type=float, default=0.8, help='exponential weighting') 256 | parser.add_argument('--iters', type=int, default=6) 257 | parser.add_argument('--num_k', type=int, default=32, 258 | help='number of hypotheses to compute for knn Faiss') 259 | parser.add_argument('--max_search_range', type=int, default=100, 260 | help='maximum search range for hypotheses in quarter resolution') 261 | parser.add_argument('--val_freq', type=int, default=10000, 262 | help='validation frequency') 263 | parser.add_argument('--print_freq', type=int, default=100, 264 | help='printing frequency') 265 | 266 | parser.add_argument('--mixed_precision', default=True, help='use mixed precision') 267 | 268 | args = parser.parse_args() 269 | 270 | torch.manual_seed(1234) 271 | np.random.seed(1234) 272 | 273 | if not os.path.isdir(args.output): 274 | os.makedirs(args.output) 275 | 276 | main(args) 277 | -------------------------------------------------------------------------------- /train.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | python -u train.py --name scv-chairs --stage chairs --validation chairs --output outputs/chairs --num_steps 120000 --lr 0.00025 --image_size 368 496 --wdecay 0.0001 --gpus 0 1 --num_k 8 --batch_size 6 --iters 8 --val_freq 10000 --print_freq 100 3 | 4 | python -u train.py --name scv-things --stage things --validation sintel --output outputs/things --num_steps 120000 --lr 0.0001 --image_size 400 720 --wdecay 0.0001 --gpus 0 1 --num_k 8 --batch_size 4 --iters 8 --val_freq 10000 --print_freq 100 --checkpoint outputs/chairs/scv-chairs.pth 5 | 6 | python -u train.py --name scv-sintel --stage sintel --validation sintel --output outputs/sintel --num_steps 120000 --lr 0.00025 --image_size 368 768 --wdecay 0.0001 --gpus 0 1 --num_k 8 --batch_size 4 --iters 8 --val_freq 10000 --print_freq 100 --checkpoint outputs/things/scv-things.pth 7 | 8 | python -u train.py --name scv-kitti --stage kitti --validation kitti --output outputs/kitti --num_steps 120000 --lr 0.00025 --image_size 288 960 --wdecay 0.0001 --gpus 0 1 --num_k 8 --batch_size 4 --iters 8 --val_freq 10000 --print_freq 100 --checkpoint outputs/sintel/scv-sintel.pth 9 | --------------------------------------------------------------------------------
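The flow I/O and visualization helpers in `core/utils` can also be used on their own, outside of `evaluate.py`. The sketch below reads a Middlebury `.flo` file, maps it onto the color wheel with `flow_viz.flow_to_image`, and writes the result as a PNG. The `.flo` path and output filename are placeholders; it assumes the repository root as the working directory and `imageio` installed (it is already imported by `evaluate_vis.py`).
```python
import sys
sys.path.append('core')   # same import convention as evaluate.py

import imageio
from utils import flow_viz, frame_utils

# Read a ground-truth or predicted flow field in Middlebury .flo format.
# 'frame_0001.flo' is a placeholder path.
flow = frame_utils.readFlow('frame_0001.flo')    # (H, W, 2) float32

# flow_to_image normalizes by the maximum flow magnitude and applies the
# Baker et al. color wheel, returning an (H, W, 3) uint8 image.
flow_img = flow_viz.flow_to_image(flow)

imageio.imwrite('frame_0001_flow.png', flow_img)
```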