├── .gitignore ├── LICENSE ├── README.md └── tfwss ├── bboxes.py ├── dataset.py ├── img ├── 2008_000203.jpg ├── 2008_000219.jpg ├── 2008_000553.jpg ├── 2008_000581.jpg ├── 2008_000657.jpg ├── 2008_000727.jpg ├── 2008_000795.jpg ├── 2008_000811.jpg ├── 2008_000825.jpg ├── 2008_000839.jpg ├── 2008_000957.jpg ├── 2008_001113.jpg ├── 2008_001199.jpg ├── 2008_001867.jpg ├── 2008_002191.jpg ├── 2008_002673.jpg ├── 2008_003055.jpg ├── 2008_003141.jpg ├── data_files.png ├── vgg16.png ├── vgg_16_4chan_weak_dsn2_loss.png ├── vgg_16_4chan_weak_dsn3_loss.png ├── vgg_16_4chan_weak_dsn4_loss.png ├── vgg_16_4chan_weak_dsn5_loss.png ├── vgg_16_4chan_weak_main_loss.png └── vgg_16_4chan_weak_total_loss.png ├── model.py ├── model_test.ipynb ├── model_test.py ├── model_train.ipynb ├── model_train.py ├── models └── .gitignore ├── net_surgery.ipynb ├── segment.py ├── setup ├── dlubu36tfwss.yml ├── dlwin36tfwss.yml ├── requirements_ubu.txt └── requirements_win.txt ├── tools ├── inspect_checkpoint.py ├── inspect_vgg_16_3chan.bat ├── inspect_vgg_16_4chan.bat ├── vgg_16-conv1-conv1_1-weights.py ├── vgg_16_3chan-conv1_1-weights.txt └── vgg_16_4chan-conv1_1-weights.txt └── visualize.py /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | 28 | # PyInstaller 29 | # Usually these files are written by a python script from a template 30 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
31 | *.manifest 32 | *.spec 33 | 34 | # Installer logs 35 | pip-log.txt 36 | pip-delete-this-directory.txt 37 | 38 | # Unit test / coverage reports 39 | htmlcov/ 40 | .tox/ 41 | .coverage 42 | .coverage.* 43 | .cache 44 | nosetests.xml 45 | coverage.xml 46 | *.cover 47 | .hypothesis/ 48 | 49 | # Translations 50 | *.mo 51 | *.pot 52 | 53 | # Django stuff: 54 | *.log 55 | local_settings.py 56 | 57 | # Flask stuff: 58 | instance/ 59 | .webassets-cache 60 | 61 | # Scrapy stuff: 62 | .scrapy 63 | 64 | # Sphinx documentation 65 | docs/_build/ 66 | 67 | # PyBuilder 68 | target/ 69 | 70 | # Jupyter Notebook 71 | .ipynb_checkpoints 72 | 73 | # pyenv 74 | .python-version 75 | 76 | # celery beat schedule file 77 | celerybeat-schedule 78 | 79 | # SageMath parsed files 80 | *.sage.py 81 | 82 | # dotenv 83 | .env 84 | 85 | # virtualenv 86 | .venv 87 | venv/ 88 | ENV/ 89 | 90 | # Spyder project settings 91 | .spyderproject 92 | .spyproject 93 | 94 | # Rope project settings 95 | .ropeproject 96 | 97 | # mkdocs documentation 98 | /site 99 | 100 | # mypy 101 | .mypy_cache/ 102 | 103 | # Datasets/Models 104 | *.bak 105 | *.zip 106 | *.tar 107 | *.ffs_db 108 | Results/ 109 | .idea/ 110 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Phil Ferriere 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Weakly Supervised Segmentation with TensorFlow 2 | 3 | This repo contains a TensorFlow implementation of weakly supervised instance segmentation as described in **Simple Does It: Weakly Supervised Instance and Semantic Segmentation**, by Khoreva et al. (CVPR 2017). 4 | 5 | The idea behind *weakly supervised segmentation* is to **train a model using cheap-to-generate label approximations** (e.g., bounding boxes) as substitute/guiding labels for computer vision classification tasks that usually require very detailed labels. In *semantic labelling*, each image pixel is assigned to a specific class (e.g., boat, car, background, etc.). In *instance segmentation*, all the pixels belonging to the same object instance are given the same instance ID. 
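The sections below describe how each annotated bounding box is turned into an approximate training mask with GrabCut. As a point of reference, here is a minimal, box-initialized GrabCut sketch using OpenCV; the repo's own `segment.grabcut` helper may differ in its exact interface and post-processing, and the box format shown is an assumption:

```python
import cv2
import numpy as np

def grabcut_from_box(image_bgr, box, n_iter=5):
    """Box-initialized GrabCut. `box` is (x, y, width, height); returns a {0, 255} uint8 mask."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # internal GMM state required by OpenCV
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, box, bgd_model, fgd_model, n_iter, cv2.GC_INIT_WITH_RECT)
    # Keep definite + probable foreground; everything else becomes background
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)

# Hypothetical usage:
# img = cv2.imread('some_image.jpg')          # BGR image, as loaded by OpenCV
# seg = grabcut_from_box(img, (x, y, w, h))   # approximate mask used as a training label
```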
6 | 7 | Per [[2014a]](#2014a), pixelwise mask annotations are far more expensive to generate than object bounding box annotations (requiring up to 15x more time). Some models, like Simple Does It (SDI) [[2016a]](#2016a), claim they can use a weak supervision approach to reach **95% of the quality of the fully supervised model**, both for semantic labelling and instance segmentation. 8 | 9 | # Simple Does It (SDI) 10 | 11 | ## Experimental Setup for Instance Segmentation 12 | 13 | In weakly supervised instance segmentation, there are **no pixel-wise annotations** (i.e., no segmentation masks) that can be used to train a model. Yet, we aim to train a model that can still predict segmentation masks by only being given an **input image and bounding boxes for the objects of interest** in that image. 14 | 15 | The masks used for training are **generated** starting from individual object bounding boxes. For **each** annotated bounding box, we generate a segmentation mask using the **GrabCut method** (although any other method could be used), and train a convnet to regress from the image and bounding box information to the instance segmentation mask. 16 | 17 | Note that in the original paper, a more sophisticated segmenter is used (M∩G+). 18 | 19 | ## Network 20 | 21 | SDI validates its approach by repurposing two different segmentation architectures (DeepMask [[2015a]](#2015a) and DeepLab2 VGG-16 [[2016b]](#2016b)). Here we use the **OSVOS** FCN (see Section 3.1 of [[2016c]](#2016c)). 22 | 23 | ## Setup 24 | 25 | The code in this repo was developed and tested using Anaconda3 v.4.4.0. To reproduce our conda environment, please use the following files: 26 | 27 | *On Ubuntu:* 28 | - [dlubu36tfwss.yml](tfwss/setup/dlubu36tfwss.yml) 29 | - [requirements_ubu.txt](tfwss/setup/requirements_ubu.txt) 30 | 31 | *On Windows:* 32 | - [dlwin36tfwss.yml](tfwss/setup/dlwin36tfwss.yml) 33 | - [requirements_win.txt](tfwss/setup/requirements_win.txt) 34 | 35 | ## Jupyter Notebooks 36 | 37 | The recommended way to test this implementation is to use the following Jupyter notebooks: 38 | 39 | - [`VGG16 Net Surgery`](tfwss/net_surgery.ipynb): The weakly supervised segmentation techniques presented in the "Simple Does It" paper use a backbone convnet (either DeepLab or VGG16 network) **pre-trained on ImageNet**. This pre-trained network takes RGB images as input (W x H x 3). Remember that the weakly supervised version is trained using **4-channel inputs: RGB + a binary mask with a filled bounding box of the object instance**. Therefore, we need to **perform net surgery and create a 4-channel input version** of the VGG16 net, initialized with the 3-channel parameter values **except** for the additional convolutional filters (we use Gaussian initialization for them). 40 | - [`"Simple Does It" Grabcut Training for Instance Segmentation`](tfwss/model_train.ipynb): This notebook performs training of the SDI Grabcut weakly supervised model for **instance segmentation**. Following the instructions provided in Section *"6. Instance Segmentation Results"* of the **"Simple Does It"** paper, we use the Berkeley-augmented Pascal VOC segmentation dataset that provides per-instance segmentation masks for VOC2012 data. The Berkeley-augmented dataset can be downloaded from [here](http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz).
Again, the SDI Grabcut training is done using a **4-channel input** VGG16 network pre-trained on ImageNet, so make sure to run the [`VGG16 Net Surgery`](tfwss/net_surgery.ipynb) notebook first! 41 | - [`"Simple Does It" Weakly Supervised Instance Segmentation (Testing)`](tfwss/model_test.ipynb): The sample results shown in the notebook come from running our trained model on the **validation** split of the Berkeley-augmented dataset. 42 | 43 | ## Link to Pre-trained model and BK-VOC data files 44 | 45 | The pre-processed BK-VOC dataset, "grabcut" segmentations, and results, as well as the pre-trained model (`vgg_16_4chan_weak.ckpt-50000`), can be found [here](http://bit.ly/tf-wss): 46 | 47 | ![](tfwss/img/data_files.png) 48 | 49 | If you'd rather download the Berkeley-augmented Pascal VOC segmentation dataset that provides per-instance segmentation masks for VOC2012 data from its origin, click [here]( 50 | http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz). Then, execute lines similar to [these lines](https://github.com/philferriere/tfwss/blob/ff4a025b19c5fb381cba3a2e492777ca040e1c5b/tfwss/dataset.py#L453-L455) in `dataset.py` to generate the intermediary files used by this project: 51 | 52 | ```python 53 | if __name__ == '__main__': 54 | dataset = BKVOCDataset() 55 | dataset.prepare() 56 | ``` 57 | 58 | Make sure to set the paths at the [top](https://github.com/philferriere/tfwss/blob/ff4a025b19c5fb381cba3a2e492777ca040e1c5b/tfwss/dataset.py#L55-L58) of `dataset.py` to the correct location: 59 | 60 | ```python 61 | if sys.platform.startswith("win"): 62 | _BK_VOC_DATASET = "E:/datasets/bk-voc/benchmark_RELEASE/dataset" 63 | else: 64 | _BK_VOC_DATASET = '/media/EDrive/datasets/bk-voc/benchmark_RELEASE/dataset' 65 | ``` 66 | 67 | ## Training 68 | 69 | The fully supervised version of the instance segmentation network whose performance we're trying to match is trained using the RGB images as inputs. The weakly supervised version is trained using **4-channel inputs: RGB + a binary mask with a filled bounding box of the object instance**. In the latter case, the same RGB image may appear in several input samples (as many times as there are object instances associated with that RGB image). A short input-assembly sketch is shown further below. 70 | 71 | To be clear, the output labels used for training are **NOT** user-provided detailed groundtruth annotations. There are no such groundtruths in the weakly supervised scenario. Instead, the **labels are the segmentation masks generated using the GrabCut+ method**. The weakly supervised model is trained to regress from an image and bounding box information to a **generated** segmentation mask. 72 | 73 | ## Testing 74 | 75 | The sample results shown here come from running our trained model on the **validation** split of the Berkeley-augmented dataset (see the testing notebook). Below, we (very) subjectively categorize them as "pretty good" and "not so great".
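As promised in the Training section, here is a minimal, self-contained sketch of how one 4-channel sample (RGB plus a filled bounding-box mask) can be assembled. The `(y1, x1, y2, x2)` box convention with exclusive `y2`/`x2` follows `bboxes.extract_bbox`; the fill value of 255 and the helper name are assumptions (the repo builds its inputs via `segment.rect_mask` inside `BKVOCDataset._load_sample`):

```python
import numpy as np
from skimage.io import imread

def make_weak_input(image_path, bbox):
    """RGB image + filled bounding-box mask -> (H, W, 4) input sample."""
    rgb = imread(image_path)                           # (H, W, 3) uint8
    y1, x1, y2, x2 = bbox                              # y2/x2 are exclusive, as in extract_bbox()
    box_mask = np.zeros(rgb.shape[:2] + (1,), dtype=rgb.dtype)
    box_mask[y1:y2, x1:x2, 0] = 255                    # filled box as a binary mask channel
    return np.concatenate([rgb, box_mask], axis=-1)    # (H, W, 4)
```

In practice, the training notebook gets its batches from the dataset class, e.g. `inputs, labels = BKVOCDataset(phase='train').next_batch(batch_size)`, where `inputs` has shape `[batch_size, W, H, 4]` and `labels` holds the GrabCut masks.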
76 | 77 | ### Pretty good 78 | 79 | ![](tfwss/img/2008_000203.jpg) 80 | 81 | ![](tfwss/img/2008_000581.jpg) 82 | 83 | ![](tfwss/img/2008_000657.jpg) 84 | 85 | ![](tfwss/img/2008_000727.jpg) 86 | 87 | ![](tfwss/img/2008_000795.jpg) 88 | 89 | ![](tfwss/img/2008_000811.jpg) 90 | 91 | ![](tfwss/img/2008_000839.jpg) 92 | 93 | ![](tfwss/img/2008_001867.jpg) 94 | 95 | ![](tfwss/img/2008_002191.jpg) 96 | 97 | ![](tfwss/img/2008_003055.jpg) 98 | 99 | ![](tfwss/img/2008_003141.jpg) 100 | 101 | ### Not so great 102 | 103 | ![](tfwss/img/2008_000219.jpg) 104 | 105 | 106 | ![](tfwss/img/2008_000553.jpg) 107 | 108 | ![](tfwss/img/2008_000825.jpg) 109 | 110 | ![](tfwss/img/2008_000957.jpg) 111 | 112 | ![](tfwss/img/2008_001113.jpg) 113 | 114 | ![](tfwss/img/2008_001199.jpg) 115 | 116 | ![](tfwss/img/2008_002673.jpg) 117 | 118 | # References 119 | 120 | ## 2016 121 | - [2016a] Khoreva et al. 2016. Simple Does It: Weakly Supervised Instance and Semantic Segmentation. [[arXiv]](https://arxiv.org/abs/1603.07485) [[web]](https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/weakly-supervised-learning/simple-does-it-weakly-supervised-instance-and-semantic-segmentation/) 122 | - [2016b] Chen et al. 2016. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. [[arXiv]](https://arxiv.org/abs/1606.00915) 123 | - [2016c] Caelles et al. 2016. OSVOS: One-Shot Video Object Segmentation. [[arXiv]](https://arxiv.org/abs/1611.05198) 124 | 125 | ## 2015 126 | 127 | - [2015a] Pinheiro et al. 2015. DeepMask: Learning to Segment Object Candidates. [[arXiv]](https://arxiv.org/abs/1506.06204) 128 | 129 | ## 2014 130 | - [2014a] Lin et al. 2014. Microsoft COCO: Common Objects in Context. [[arXiv]](https://arxiv.org/abs/1405.0312) [[web]](http://cocodataset.org/#home) 131 | -------------------------------------------------------------------------------- /tfwss/bboxes.py: -------------------------------------------------------------------------------- 1 | """ 2 | bboxes.py 3 | 4 | Bounding box utility functions. 5 | 6 | Written by Phil Ferriere 7 | 8 | Licensed under the MIT License (see LICENSE for details) 9 | 10 | Based on: 11 | - https://github.com/matterport/Mask_RCNN/blob/master/utils.py 12 | Copyright (c) 2017 Matterport, Inc. / Written by Waleed Abdulla 13 | Licensed under the MIT License 14 | 15 | References for future work: 16 | - https://github.com/tensorflow/models/blob/master/research/object_detection/utils/np_box_ops.py 17 | https://github.com/tensorflow/models/blob/master/research/object_detection/utils/ops.py 18 | Copyright 2017 The TensorFlow Authors. All Rights Reserved. 19 | Licensed under the Apache License, Version 2.0 20 | - https://github.com/tangyuhao/DAVIS-2016-Chanllege-Solution/blob/master/Step1-SSD/tf_extended/bboxes.py 21 | https://github.com/tangyuhao/DAVIS-2016-Chanllege-Solution/blob/master/Step1-SSD/bounding_box.py 22 | Copyright (c) 2017 Paul Balanca / Written by Paul Balanca 23 | Licensed under the Apache License, Version 2.0, January 2004 24 | """ 25 | 26 | import numpy as np 27 | 28 | def extract_bbox(mask, order='y1x1y2x2'): 29 | """Compute bounding box from a mask. 30 | Param: 31 | mask: [height, width]. Mask pixels are either >0 or 0. 32 | order: ['y1x1y2x2' | ] 33 | Returns: 34 | bbox numpy array [y1, x1, y2, x2] or tuple x1, y1, x2, y2. 
35 | Based on: 36 | https://stackoverflow.com/questions/31400769/bounding-box-of-numpy-array 37 | """ 38 | horizontal_indicies = np.where(np.any(mask, axis=0))[0] 39 | vertical_indicies = np.where(np.any(mask, axis=1))[0] 40 | if horizontal_indicies.shape[0]: 41 | x1, x2 = horizontal_indicies[[0, -1]] 42 | y1, y2 = vertical_indicies[[0, -1]] 43 | # x2 and y2 should not be part of the box. Increment by 1. 44 | x2 += 1 45 | y2 += 1 46 | else: 47 | # No mask for this instance. Might happen due to 48 | # resizing or cropping. Set bbox to zeros 49 | x1, x2, y1, y2 = 0, 0, 0, 0 50 | if order == 'x1y1x2y2': 51 | return x1, y1, x2, y2 52 | else: 53 | return np.array([y1, x1, y2, x2]) 54 | 55 | def extract_bboxes(mask): 56 | """Compute bounding boxes from an array of masks. 57 | Params 58 | mask: [height, width, num_instances]. Mask pixels are either >0 or 0. 59 | Returns: 60 | bbox numpy arrays [num_instances, (y1, x1, y2, x2)]. 61 | """ 62 | boxes = np.zeros([mask.shape[-1], 4], dtype=np.int32) 63 | for i in range(mask.shape[-1]): 64 | boxes[i] = extract_bbox(mask[:, :, i]) 65 | return boxes.astype(np.int32) 66 | 67 | -------------------------------------------------------------------------------- /tfwss/dataset.py: -------------------------------------------------------------------------------- 1 | """ 2 | dataset.py 3 | 4 | Dataset utility functions and classes. 5 | 6 | Following the instructions provided in Section "6. Instance Segmentation Results" of the "Simple Does It" paper, we use 7 | the Berkeley-augmented Pascal VOC segmentation dataset that provides per-instance segmentation masks for VOC2012 data. 8 | The Berkley augmented dataset can be downloaded from here: 9 | http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz 10 | 11 | Written by Phil Ferriere 12 | 13 | Licensed under the MIT License (see LICENSE for details) 14 | 15 | Based on: 16 | - https://github.com/warmspringwinds/tf-image-segmentation/blob/master/tf_image_segmentation/utils/tf_records.py 17 | https://github.com/warmspringwinds/tf-image-segmentation/blob/master/tf_image_segmentation/utils/pascal_voc.py 18 | https://github.com/warmspringwinds/tf-image-segmentation/blob/master/tf_image_segmentation/recipes/pascal_voc/convert_pascal_voc_to_tfrecords.ipynb 19 | Copyright (c) 2017 Daniil Pakhomov / Written by Daniil Pakhomov 20 | Licensed under the MIT License 21 | 22 | More to look at later to add support for TFRecords: 23 | https://github.com/fperazzi/davis-2017/blob/master/python/lib/davis/dataset/base.py 24 | https://github.com/fperazzi/davis-2017/blob/master/python/lib/davis/dataset/loader.py 25 | https://github.com/kwotsin/create_tfrecords 26 | https://kwotsin.github.io/tech/2017/01/29/tfrecords.html 27 | http://yeephycho.github.io/2016/08/15/image-data-in-tensorflow/ 28 | - http://www.machinelearninguru.com/deep_learning/tensorflow/basics/tfrecord/tfrecord.html 29 | How to write into and read from a tfrecords file in TensorFlow 30 | Writeen by Hadi Kazemi 31 | - https://github.com/ferreirafabio/video2tfrecords/blob/master/video2tfrecords.py 32 | Copyright (c) 2017 Fábio Ferreira / Written Fábio Ferreira 33 | Licensed under the MIT License 34 | """ 35 | 36 | # TODO Add support for TFRecords 37 | 38 | from __future__ import absolute_import 39 | from __future__ import division 40 | from __future__ import print_function 41 | 42 | import os, sys 43 | import glob 44 | import warnings 45 | import tensorflow as tf 46 | import numpy as np 47 | from scipy.io import loadmat 48 | from 
tqdm import tqdm 49 | from skimage import img_as_ubyte 50 | from skimage.io import imread, imsave 51 | from bboxes import extract_bbox 52 | from segment import rect_mask, grabcut 53 | from visualize import draw_masks 54 | 55 | if sys.platform.startswith("win"): 56 | _BK_VOC_DATASET = "E:/datasets/bk-voc/benchmark_RELEASE/dataset" 57 | else: 58 | _BK_VOC_DATASET = '/media/EDrive/datasets/bk-voc/benchmark_RELEASE/dataset' 59 | 60 | _DBG_TRAIN_SET = -1 61 | 62 | _DEFAULT_BKVOC_OPTIONS = { 63 | 'in_memory': False, 64 | 'data_aug': False, 65 | 'use_cache': False, 66 | 'use_grabcut_labels': True} 67 | 68 | class BKVOCDataset(object): 69 | """Berkeley-augmented Pascal VOC 2012 segmentation dataset. 70 | """ 71 | 72 | def __init__(self, phase='train', dataset_root=_BK_VOC_DATASET, options=_DEFAULT_BKVOC_OPTIONS): 73 | """Initialize the Dataset object 74 | Args: 75 | phase: Possible options: 'train' or 'test' 76 | dataset_root: Path to the root of the dataset 77 | options: see below 78 | Options: 79 | in_memory: True loads all the training images upfront, False loads images in small batches 80 | data_aug: True adds augmented data to training set 81 | use_cache: True stores training files and augmented versions in npy file 82 | use_grabcut_labels: True computes magnitudes of forward and backward flows 83 | """ 84 | # Only options supported in this initial implementation 85 | assert (options == _DEFAULT_BKVOC_OPTIONS) 86 | 87 | # Save file and folder name 88 | self._dataset_root = dataset_root 89 | self._phase = phase 90 | self._options = options 91 | 92 | # Set paths and file names 93 | self._img_folder = self._dataset_root + '/img' 94 | self._mats_folder = self._dataset_root + '/inst' 95 | self._masks_folder = self._dataset_root + '/inst_masks' 96 | self._grabcuts_folder = self._dataset_root + '/inst_grabcuts' 97 | self.pred_masks_path = self._dataset_root + '/predicted_inst_masks' 98 | self.img_pred_masks_path = self._dataset_root + '/img_with_predicted_inst_masks' 99 | self._train_IDs_file = self._dataset_root + '/train.txt' 100 | self._test_IDs_file = self._dataset_root + '/val.txt' 101 | self._img_mask_pairs_file = self._dataset_root + '/img_mask_pairs.txt' 102 | self._train_img_mask_pairs_file = self._dataset_root + '/train_img_mask_pairs.txt' 103 | self._test_img_mask_pairs_file = self._dataset_root + '/val_img_mask_pairs.txt' 104 | 105 | # Load ID files 106 | if not self._load_img_mask_pairs_file(self._img_mask_pairs_file): 107 | self.prepare() 108 | 109 | # Init batch parameters 110 | if self._phase == 'train': 111 | self._load_img_mask_pairs_file(self._train_img_mask_pairs_file) 112 | self._grabcut_files = [self._grabcuts_folder + '/' + os.path.basename(img_mask_pair[1]) for img_mask_pair in self._img_mask_pairs] 113 | self._train_ptr = 0 114 | self.train_size = len(self._img_mask_pairs) if _DBG_TRAIN_SET == -1 else _DBG_TRAIN_SET 115 | self._train_idx = np.arange(self.train_size) 116 | np.random.seed(1) 117 | np.random.shuffle(self._train_idx) 118 | else: 119 | self._options['use_grabcut_labels'] = False 120 | self._load_img_mask_pairs_file(self._test_img_mask_pairs_file) 121 | self._test_ptr = 0 122 | self.test_size = len(self._img_mask_pairs) 123 | 124 | ### 125 | ### Input Samples and Labels Prep 126 | ### 127 | def prepare(self): 128 | """Do all the preprocessing needed before training/val/test samples can be generated. 
129 | """ 130 | # Convert instance masks stored in .mat files to .png files and compute their bboxes 131 | self._mat_masks_to_png() 132 | 133 | # Generate grabcuts, if they don't exist yet 134 | self._bboxes_to_grabcuts() 135 | 136 | # Generate train and test image/mask pair files, if they don't exist yet 137 | self._split_img_mask_pairs() 138 | 139 | def _save_img_mask_pairs_file(self): 140 | """Create the file that matches masks with their image file (and has bbox info) 141 | """ 142 | assert (len(self._mask_bboxes) == len(self._img_mask_pairs)) 143 | with open(self._img_mask_pairs_file, 'w') as img_mask_pairs_file: 144 | for img_mask_pair, mask_bbox in zip(self._img_mask_pairs, self._mask_bboxes): 145 | img_path = os.path.basename(img_mask_pair[0]) 146 | mask_path = os.path.basename(img_mask_pair[1]) 147 | line = '{}###{}###{}###{}###{}###{}\n'.format(img_path, mask_path, mask_bbox[0], 148 | mask_bbox[1], mask_bbox[2], mask_bbox[3]) 149 | img_mask_pairs_file.write(line) 150 | 151 | 152 | def _split_img_mask_pairs(self): 153 | """Create the training and test portions of the image mask pairs 154 | """ 155 | if os.path.exists(self._train_img_mask_pairs_file) and os.path.exists(self._test_img_mask_pairs_file): 156 | return False 157 | 158 | with open(self._train_IDs_file, 'r') as f: 159 | train_IDs = f.readlines() 160 | 161 | with open(self._test_IDs_file, 'r') as f: 162 | test_IDs = f.readlines() 163 | 164 | # Load complete list of entries and separate training and test entries 165 | assert(os.path.exists(self._img_mask_pairs_file)) 166 | with open(self._img_mask_pairs_file, 'r') as img_mask_pairs_file: 167 | lines = img_mask_pairs_file.readlines() 168 | train_lines = [] 169 | test_lines = [] 170 | for line in lines: 171 | splits = line.split('###') 172 | file_ID = '{}\n'.format(str(splits[0])[-15:-4]) 173 | if file_ID in train_IDs: 174 | train_lines.append(line) 175 | elif file_ID in test_IDs: 176 | test_lines.append(line) 177 | else: 178 | raise ValueError('Error in processing train/val text files.') 179 | 180 | # Save result 181 | with open(self._train_img_mask_pairs_file, 'w') as f: 182 | for line in train_lines: 183 | f.write(line) 184 | with open(self._test_img_mask_pairs_file, 'w') as f: 185 | for line in test_lines: 186 | f.write(line) 187 | return True 188 | 189 | 190 | def _load_img_mask_pairs_file(self, img_mask_pairs_path): 191 | """Load the file that matches masks with their image file (and has bbox info) 192 | Args: 193 | img_mask_pairs_path: path to file 194 | Returns: 195 | True if file was correctly loaded, False otherwise 196 | """ 197 | if os.path.exists(img_mask_pairs_path): 198 | with open(img_mask_pairs_path, 'r') as img_mask_pairs_file: 199 | lines = img_mask_pairs_file.readlines() 200 | self._img_mask_pairs = [] 201 | self._mask_bboxes = [] 202 | for line in lines: 203 | splits = line.split('###') 204 | img_path = self._img_folder + '/' + str(splits[0]) 205 | mask_path = self._masks_folder + '/' + str(splits[1]) 206 | self._img_mask_pairs.append((img_path, mask_path)) 207 | self._mask_bboxes.append((int(splits[2]), int(splits[3]), int(splits[4]), int(splits[5]))) 208 | return True 209 | return False 210 | 211 | 212 | def _mat_masks_to_png(self): 213 | """Converts instance masks stored in .mat files to .png files. 214 | PNG files are created in the same folder as where the .mat files are. 215 | If the name of this folder ends with "cls", class masks are created. 216 | If the name of this folder ends with "inst", instance masks are created. 
217 | 218 | Returns: 219 | True if files were created, False if the masks folder already contains PNG files 220 | """ 221 | mat_files = glob.glob(self._mats_folder + '/*.mat') 222 | 223 | # Build the list of image files for which we have mat masks 224 | key = os.path.basename(os.path.normpath(self._mats_folder)) 225 | if key == 'cls': 226 | key = 'GTcls' 227 | elif key == 'inst': 228 | key = 'GTinst' 229 | else: 230 | raise ValueError('ERR: Expected mask folder path to end with "/inst" or "/cls"') 231 | 232 | # Create output folder, if necessary 233 | img_files = [self._img_folder + '/' + os.path.basename(file).replace('.mat', '.jpg') for file in mat_files] 234 | if not os.path.exists(self._masks_folder): 235 | os.makedirs(self._masks_folder) 236 | 237 | # Generate image mask pairs and compute their bboxes 238 | self._img_mask_pairs = [] 239 | self._mask_bboxes = [] 240 | with warnings.catch_warnings(): 241 | warnings.simplefilter("ignore") 242 | for mat_file, img_file in tqdm(zip(mat_files, img_files), total=len(mat_files), ascii=True, ncols=80, 243 | desc='MAT to PNG masks'): 244 | mat = loadmat(mat_file, mat_dtype=True, squeeze_me=True, struct_as_record=False) 245 | masks = mat[key].Segmentation 246 | mask_file_basename = os.path.basename(mat_file) 247 | for instance in np.unique(masks)[1:]: 248 | mask_file = self._masks_folder + '/' + mask_file_basename[:-4] + '_' + str(int(instance)) + '.png' 249 | self._img_mask_pairs.append((img_file, mask_file)) 250 | # Build mask for object instance 251 | mask = img_as_ubyte(masks == instance) 252 | # Compute the mask's bbox 253 | self._mask_bboxes.append(extract_bbox(mask)) 254 | # Save the mask in PNG format to mask folder 255 | imsave(mask_file, mask) 256 | 257 | # Save the results to disk 258 | self._save_img_mask_pairs_file() 259 | 260 | return True 261 | 262 | 263 | def _bboxes_to_grabcuts(self): 264 | """Generate segmentation masks from images and bounding boxes using Grabcut. 
265 | """ 266 | mask_files = glob.glob(self._masks_folder + '/*.png') 267 | if os.path.exists(self._grabcuts_folder): 268 | self._grabcut_files = glob.glob(self._grabcuts_folder + '/*.png') 269 | if _DBG_TRAIN_SET == -1: 270 | if self._grabcut_files and len(self._grabcut_files) == len(mask_files): 271 | return False 272 | else: 273 | if self._grabcut_files and len(self._grabcut_files) >= _DBG_TRAIN_SET: 274 | return False 275 | 276 | # Create output folder, if necessary 277 | grabcut_files = [self._grabcuts_folder + '/' + os.path.basename(img_mask_pair[1]) for img_mask_pair in 278 | self._img_mask_pairs] 279 | if not os.path.exists(self._grabcuts_folder): 280 | os.makedirs(self._grabcuts_folder) 281 | 282 | # Run Grabcut on input data 283 | self._grabcut_files = [] 284 | with warnings.catch_warnings(): 285 | warnings.simplefilter("ignore") 286 | for img_mask_pair, mask_bbox, grabcut_file in tqdm(zip(self._img_mask_pairs, self._mask_bboxes, grabcut_files), 287 | total=len(self._img_mask_pairs), ascii=True, ncols=80, 288 | desc='Grabcuts'): 289 | # Continue generating grabcuts from where you last stopped after OpenCV crash 290 | if not os.path.exists(grabcut_file): 291 | # Use Grabcut to create a segmentation within the bbox 292 | mask = grabcut(img_mask_pair[0], mask_bbox) 293 | # Save the mask in PNG format to the grabcuts folder 294 | imsave(grabcut_file, mask) 295 | self._grabcut_files.append(grabcut_file) 296 | 297 | return True 298 | 299 | ### 300 | ### Batch Management 301 | ### 302 | def _load_sample(self, input_rgb_path, input_bbox, label_path=None): 303 | """Load a propertly formatted sample (input sample + associated label) 304 | In training mode, there is a label; in testimg mode, there isn't. 305 | Args: 306 | input_rgb_path: Path to RGB image 307 | input_bbox: Bounding box to convert to a binary mask 308 | label_path: Path to grabcut label, if any label 309 | Returns in training: 310 | input sample: RGB+bbox binary mask concatenated in format [W, H, 4] 311 | label: Grabcut segmentation in format [W, H, 1], if any label 312 | """ 313 | input_rgb = imread(input_rgb_path) 314 | input_shape = input_rgb.shape 315 | input_bin_mask = rect_mask((input_shape[0], input_shape[1], 1), input_bbox) 316 | assert (len(input_bin_mask.shape) == 3 and input_bin_mask.shape[2] == 1) 317 | input = np.concatenate((input_rgb, input_bin_mask), axis=-1) 318 | assert (len(input.shape) == 3 and input.shape[2] == 4) 319 | if label_path: 320 | label = imread(label_path) 321 | label = np.expand_dims(label, axis=-1) 322 | assert (len(label.shape) == 3 and label.shape[2] == 1) 323 | else: 324 | label = None 325 | return input, label 326 | 327 | def next_batch(self, batch_size, phase='train', segnet_stream='weak'): 328 | """Get next batch of image (path) and masks 329 | Args: 330 | batch_size: Size of the batch 331 | phase: Possible options:'train' or 'test' 332 | segnet_stream: Binary segmentation net stream ['weak'|'full'] 333 | Returns in training: 334 | inputs: Batch of 4-channel inputs (RGB+bbox binary mask) in format [batch_size, W, H, 4] 335 | labels: Batch of grabcut segmentations in format [batch_size, W, H, 1] 336 | Returns in testing: 337 | inputs: Batch of 4-channel inputs (RGB+bbox binary mask) in format [batch_size, W, H, 4] 338 | output_file: List of output file names that match the bbox file names 339 | """ 340 | assert (self._options['in_memory'] is False) # Only option supported at this point 341 | assert (segnet_stream == 'weak') # Only option supported at this point 342 | if phase == 
'train': 343 | inputs, labels = [], [] 344 | if self._train_ptr + batch_size < self.train_size: 345 | idx = np.array(self._train_idx[self._train_ptr:self._train_ptr + batch_size]) 346 | for l in idx: 347 | input, label = self._load_sample(self._img_mask_pairs[l][0], self._mask_bboxes[l], 348 | self._grabcut_files[l]) 349 | inputs.append(input) 350 | labels.append(label) 351 | self._train_ptr += batch_size 352 | else: 353 | old_idx = np.array(self._train_idx[self._train_ptr:]) 354 | np.random.shuffle(self._train_idx) 355 | new_ptr = (self._train_ptr + batch_size) % self.train_size 356 | idx = np.array(self._train_idx[:new_ptr]) 357 | inputs_1, labels_1, inputs_2, labels_2 = [], [], [], [] 358 | for l in old_idx: 359 | input, label = self._load_sample(self._img_mask_pairs[l][0], self._mask_bboxes[l], 360 | self._grabcut_files[l]) 361 | inputs_1.append(input) 362 | labels_1.append(label) 363 | for l in idx: 364 | input, label = self._load_sample(self._img_mask_pairs[l][0], self._mask_bboxes[l], 365 | self._grabcut_files[l]) 366 | inputs_2.append(input) 367 | labels_2.append(label) 368 | inputs = inputs_1 + inputs_2 369 | labels = labels_1 + labels_2 370 | self._train_ptr = new_ptr 371 | return np.asarray(inputs), np.asarray(labels) 372 | elif phase == 'test': 373 | inputs, output_files = [], [] 374 | if self._test_ptr + batch_size < self.test_size: 375 | for l in range(self._test_ptr, self._test_ptr + batch_size): 376 | input, _ = self._load_sample(self._img_mask_pairs[l][0], self._mask_bboxes[l]) 377 | output_file = os.path.basename(self._img_mask_pairs[l][1]) 378 | inputs.append(input) 379 | output_files.append(output_file) 380 | self._test_ptr += batch_size 381 | else: 382 | new_ptr = (self._test_ptr + batch_size) % self.test_size 383 | inputs_1, output_files_1, inputs_2, output_files_2 = [], [], [], [] 384 | for l in range(self._test_ptr, self.test_size): 385 | input, _ = self._load_sample(self._img_mask_pairs[l][0], self._mask_bboxes[l]) 386 | output_file = os.path.basename(self._img_mask_pairs[l][1]) 387 | inputs_1.append(input) 388 | output_files_1.append(output_file) 389 | for l in range(0, new_ptr): 390 | input, _ = self._load_sample(self._img_mask_pairs[l][0], self._mask_bboxes[l]) 391 | output_file = os.path.basename(self._img_mask_pairs[l][1]) 392 | inputs_2.append(input) 393 | output_files_2.append(output_file) 394 | inputs = inputs_1 + inputs_2 395 | output_files = output_files_1 + output_files_2 396 | self._test_ptr = new_ptr 397 | return np.asarray(inputs), output_files 398 | else: 399 | return None, None 400 | 401 | ### 402 | ### Debug utils 403 | ### 404 | def print_config(self): 405 | """Display configuration values.""" 406 | print("\nConfiguration:") 407 | for k, v in self._options.items(): 408 | print(" {:20} {}".format(k, v)) 409 | print(" {:20} {}".format('phase', self._phase)) 410 | print(" {:20} {}".format('samples', len(self._img_mask_pairs))) 411 | 412 | ### 413 | ### TODO TFRecords helpers 414 | ### See: 415 | ### https://github.com/fperazzi/davis-2017/blob/master/python/lib/davis/dataset/base.py 416 | ### https://github.com/fperazzi/davis-2017/blob/master/python/lib/davis/dataset/loader.py 417 | ### https://github.com/kwotsin/create_tfrecords 418 | ### https://kwotsin.github.io/tech/2017/01/29/tfrecords.html 419 | ### http://yeephycho.github.io/2016/08/15/image-data-in-tensorflow/ 420 | ### E:\repos\models-master\research\inception\inception\data\build_imagenet_data.py 421 | ### 
E:\repos\models-master\research\object_detection\dataset_tools\create_kitti_tf_record.py 422 | ### 423 | def _load_from_tfrecords(self): 424 | # TODO _load_from_tfrecords 425 | pass 426 | 427 | def _write_to_tfrecords(self): 428 | # TODO _write_to_tfrecords 429 | pass 430 | 431 | def combine_images_with_predicted_masks(self): 432 | """Build list of individual test immages with predicted masks overlayed.""" 433 | # Overlay masks on top of images 434 | prev_image, bboxes, masks = None, [], [] 435 | with tqdm(total=len(self._mask_bboxes), desc="Combining JPGs with predictions", ascii=True, ncols=80) as pbar: 436 | for img_mask_pair, bbox in zip(self._img_mask_pairs, self._mask_bboxes): 437 | pbar.update(1) 438 | if img_mask_pair[0] == prev_image: 439 | # Accumulate predicted masks and bbox instances belonging to the same image 440 | bboxes.append(bbox) 441 | masks.append(self.pred_masks_path + '/' + os.path.basename(img_mask_pair[1])) 442 | else: 443 | if prev_image: 444 | # Combine image, masks and bboxes in a single image and save the result to disk 445 | image = imread(prev_image) 446 | masks = np.asarray([imread(mask) for mask in masks]) 447 | masks = np.expand_dims(masks, axis=-1) 448 | draw_masks(image, np.asarray(bboxes), np.asarray(masks)) 449 | imsave(self.img_pred_masks_path + '/' + os.path.basename(prev_image), image) 450 | prev_image = img_mask_pair[0] 451 | bboxes = [bbox] 452 | masks = [self.pred_masks_path + '/' + os.path.basename(img_mask_pair[1])] 453 | 454 | # def test(): 455 | # dataset = BKVOCDataset() 456 | # dataset.print_config() 457 | # # WARNING: THE NEXT LINE WILL FORCE REGENERATION OF INTERMEDIARY FILES 458 | # # dataset.prepare() 459 | # 460 | # if __name__ == '__main__': 461 | # test() 462 | -------------------------------------------------------------------------------- /tfwss/img/2008_000203.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_000203.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_000219.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_000219.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_000553.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_000553.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_000581.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_000581.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_000657.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_000657.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_000727.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_000727.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_000795.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_000795.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_000811.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_000811.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_000825.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_000825.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_000839.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_000839.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_000957.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_000957.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_001113.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_001113.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_001199.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_001199.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_001867.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_001867.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_002191.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_002191.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_002673.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_002673.jpg -------------------------------------------------------------------------------- /tfwss/img/2008_003055.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_003055.jpg 
-------------------------------------------------------------------------------- /tfwss/img/2008_003141.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/2008_003141.jpg -------------------------------------------------------------------------------- /tfwss/img/data_files.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/data_files.png -------------------------------------------------------------------------------- /tfwss/img/vgg16.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/vgg16.png -------------------------------------------------------------------------------- /tfwss/img/vgg_16_4chan_weak_dsn2_loss.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/vgg_16_4chan_weak_dsn2_loss.png -------------------------------------------------------------------------------- /tfwss/img/vgg_16_4chan_weak_dsn3_loss.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/vgg_16_4chan_weak_dsn3_loss.png -------------------------------------------------------------------------------- /tfwss/img/vgg_16_4chan_weak_dsn4_loss.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/vgg_16_4chan_weak_dsn4_loss.png -------------------------------------------------------------------------------- /tfwss/img/vgg_16_4chan_weak_dsn5_loss.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/vgg_16_4chan_weak_dsn5_loss.png -------------------------------------------------------------------------------- /tfwss/img/vgg_16_4chan_weak_main_loss.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/vgg_16_4chan_weak_main_loss.png -------------------------------------------------------------------------------- /tfwss/img/vgg_16_4chan_weak_total_loss.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/philferriere/tfwss/5938f284606c839ea76c88b6e02c33280e2b79e7/tfwss/img/vgg_16_4chan_weak_total_loss.png -------------------------------------------------------------------------------- /tfwss/model.py: -------------------------------------------------------------------------------- 1 | """ 2 | model.py 3 | 4 | Segmentation backbone networks. 
5 | 6 | Written by Phil Ferriere 7 | 8 | Licensed under the MIT License (see LICENSE for details) 9 | 10 | Based on: 11 | - https://github.com/scaelles/OSVOS-TensorFlow/blob/master/osvos_parent_demo.py 12 | Written by Sergi Caelles (scaelles@vision.ee.ethz.ch) 13 | This file is part of the OSVOS paper presented in: 14 | Sergi Caelles, Kevis-Kokitsi Maninis, Jordi Pont-Tuset, Laura Leal-Taixe, Daniel Cremers, Luc Van Gool 15 | One-Shot Video Object Segmentation 16 | CVPR 2017 17 | Unknown code license 18 | 19 | References for future work: 20 | https://github.com/scaelles/OSVOS-TensorFlow 21 | http://localhost:8889/notebooks/models-master/research/slim/slim_walkthrough.ipynb 22 | https://github.com/bryanyzhu/two-stream-pytorch 23 | https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim 24 | https://github.com/kwotsin/TensorFlow-ENet/blob/master/predict_segmentation.py 25 | https://github.com/fperazzi/davis-2017/tree/master/python/lib/davis/measures 26 | https://github.com/suyogduttjain/fusionseg 27 | https://gist.github.com/omoindrot/dedc857cdc0e680dfb1be99762990c9c/ 28 | """ 29 | from __future__ import absolute_import 30 | from __future__ import division 31 | from __future__ import print_function 32 | 33 | import os, sys, warnings 34 | import numpy as np 35 | from datetime import datetime 36 | from skimage.io import imsave 37 | 38 | import tensorflow as tf 39 | from tensorflow.contrib.layers.python.layers import utils 40 | slim = tf.contrib.slim 41 | 42 | from tqdm import trange 43 | 44 | def backbone_arg_scope(weight_decay=0.0002): 45 | """Defines the network's arg scope. 46 | Args: 47 | weight_decay: The l2 regularization coefficient. 48 | Returns: 49 | An arg_scope. 50 | """ 51 | with slim.arg_scope([slim.conv2d, slim.convolution2d_transpose], 52 | activation_fn=tf.nn.relu, 53 | weights_initializer=tf.random_normal_initializer(stddev=0.001), 54 | weights_regularizer=slim.l2_regularizer(weight_decay), 55 | biases_initializer=tf.zeros_initializer(), 56 | biases_regularizer=None, 57 | padding='SAME') as arg_sc: 58 | return arg_sc 59 | 60 | 61 | def crop_features(feature, out_size): 62 | """Crop the center of a feature map 63 | This is necessary when large upsampling results in a (width x height) size larger than the original input. 64 | Args: 65 | feature: Feature map to crop 66 | out_size: Size of the output feature map 67 | Returns: 68 | Tensor that performs the cropping 69 | """ 70 | up_size = tf.shape(feature) 71 | ini_w = tf.div(tf.subtract(up_size[1], out_size[1]), 2) 72 | ini_h = tf.div(tf.subtract(up_size[2], out_size[2]), 2) 73 | slice_input = tf.slice(feature, (0, ini_w, ini_h, 0), (-1, out_size[1], out_size[2], -1)) 74 | # slice_input = tf.slice(feature, (0, ini_w, ini_w, 0), (-1, out_size[1], out_size[2], -1)) # Caffe cropping way 75 | return tf.reshape(slice_input, [int(feature.get_shape()[0]), out_size[1], out_size[2], int(feature.get_shape()[3])]) 76 | 77 | 78 | def backbone(inputs, segnet_stream='weak'): 79 | """Defines the backbone network (same as the OSVOS network, with variation in input size) 80 | Args: 81 | inputs: Tensorflow placeholder that contains the input image (either 3 or 4 channels) 82 | segnet_stream: Is this the 3-channel or the 4-channel input version? 
83 | Returns: 84 | net: Output Tensor of the network 85 | end_points: Dictionary with all Tensors of the network 86 | Reminder: 87 | This is how a VGG16 network looks like: 88 | 89 | Layer (type) Output Shape Param # Connected to 90 | ==================================================================================================== 91 | input_1 (InputLayer) (None, 480, 854, 3) 0 92 | ____________________________________________________________________________________________________ 93 | block1_conv1 (Convolution2D) (None, 480, 854, 64) 1792 input_1[0][0] 94 | ____________________________________________________________________________________________________ 95 | block1_conv2 (Convolution2D) (None, 480, 854, 64) 36928 block1_conv1[0][0] 96 | ____________________________________________________________________________________________________ 97 | block1_pool (MaxPooling2D) (None, 240, 427, 64) 0 block1_conv2[0][0] 98 | ____________________________________________________________________________________________________ 99 | block2_conv1 (Convolution2D) (None, 240, 427, 128) 73856 block1_pool[0][0] 100 | ____________________________________________________________________________________________________ 101 | block2_conv2 (Convolution2D) (None, 240, 427, 128) 147584 block2_conv1[0][0] 102 | ____________________________________________________________________________________________________ 103 | block2_pool (MaxPooling2D) (None, 120, 214, 128) 0 block2_conv2[0][0] 104 | ____________________________________________________________________________________________________ 105 | block3_conv1 (Convolution2D) (None, 120, 214, 256) 295168 block2_pool[0][0] 106 | ____________________________________________________________________________________________________ 107 | block3_conv2 (Convolution2D) (None, 120, 214, 256) 590080 block3_conv1[0][0] 108 | ____________________________________________________________________________________________________ 109 | block3_conv3 (Convolution2D) (None, 120, 214, 256) 590080 block3_conv2[0][0] 110 | ____________________________________________________________________________________________________ 111 | block3_conv4 (Convolution2D) (None, 120, 214, 256) 590080 block3_conv3[0][0] 112 | ____________________________________________________________________________________________________ 113 | block3_pool (MaxPooling2D) (None, 60, 107, 256) 0 block3_conv4[0][0] 114 | ____________________________________________________________________________________________________ 115 | block4_conv1 (Convolution2D) (None, 60, 107, 512) 1180160 block3_pool[0][0] 116 | ____________________________________________________________________________________________________ 117 | block4_conv2 (Convolution2D) (None, 60, 107, 512) 2359808 block4_conv1[0][0] 118 | ____________________________________________________________________________________________________ 119 | block4_conv3 (Convolution2D) (None, 60, 107, 512) 2359808 block4_conv2[0][0] 120 | ____________________________________________________________________________________________________ 121 | block4_conv4 (Convolution2D) (None, 60, 107, 512) 2359808 block4_conv3[0][0] 122 | ____________________________________________________________________________________________________ 123 | block4_pool (MaxPooling2D) (None, 30, 54, 512) 0 block4_conv4[0][0] 124 | ____________________________________________________________________________________________________ 125 | block5_conv1 (Convolution2D) (None, 30, 54, 512) 2359808 
block4_pool[0][0] 126 | ____________________________________________________________________________________________________ 127 | block5_conv2 (Convolution2D) (None, 30, 54, 512) 2359808 block5_conv1[0][0] 128 | ____________________________________________________________________________________________________ 129 | block5_conv3 (Convolution2D) (None, 30, 54, 512) 2359808 block5_conv2[0][0] 130 | ____________________________________________________________________________________________________ 131 | block5_conv4 (Convolution2D) (None, 30, 54, 512) 2359808 block5_conv3[0][0] 132 | ____________________________________________________________________________________________________ 133 | block5_pool (MaxPooling2D) (None, 15, 27, 512) 0 block5_conv4[0][0] 134 | ____________________________________________________________________________________________________ 135 | flatten (Flatten) (None, 207360) 0 block5_pool[0][0] 136 | ____________________________________________________________________________________________________ 137 | fc1 (Dense) (None, 4096) xxx flatten[0][0] 138 | ____________________________________________________________________________________________________ 139 | fc2 (Dense) (None, 4096) yyy fc1[0][0] 140 | ____________________________________________________________________________________________________ 141 | predictions (Dense) (None, 1000) zzz fc2[0][0] 142 | ==================================================================================================== 143 | Original Code: 144 | ETH Zurich 145 | """ 146 | im_size = tf.shape(inputs) 147 | 148 | with tf.variable_scope(segnet_stream, segnet_stream, [inputs]) as sc: 149 | end_points_collection = sc.name + '_end_points' 150 | # Collect outputs of all intermediate layers. 151 | # Make sure convolution and max-pooling layers use SAME padding by default 152 | # Also, group all end points in the same container/collection 153 | with slim.arg_scope([slim.conv2d, slim.max_pool2d], 154 | padding='SAME', 155 | outputs_collections=end_points_collection): 156 | 157 | # VGG16 stage 1 has 2 convolution blocks followed by max-pooling 158 | net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1') 159 | net = slim.max_pool2d(net, [2, 2], scope='pool1') 160 | 161 | # VGG16 stage 2 has 2 convolution blocks followed by max-pooling 162 | net_2 = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2') 163 | net = slim.max_pool2d(net_2, [2, 2], scope='pool2') 164 | 165 | # VGG16 stage 3 has 3 convolution blocks followed by max-pooling 166 | net_3 = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3') 167 | net = slim.max_pool2d(net_3, [2, 2], scope='pool3') 168 | 169 | # VGG16 stage 4 has 3 convolution blocks followed by max-pooling 170 | net_4 = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4') 171 | net = slim.max_pool2d(net_4, [2, 2], scope='pool4') 172 | 173 | # VGG16 stage 5 has 3 convolution blocks... 174 | net_5 = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5') 175 | # ...but here, it is not followed by max-pooling, as in the original VGG16 architecture. 176 | 177 | # This is where the specialization of the VGG network takes place, as described in DRIU and 178 | # OSVOS-S. The idea is to extract *side feature maps* and design *specialized layers* to perform 179 | # *deep supervision* targeted at a different task (here, segmentation) than the one used to 180 | # train the base network originally (i.e., large-scale natural image classification). 
181 | 182 | # As explained in DRIU, each specialized side output produces feature maps in 16 different channels, 183 | # which are resized to the original image size and concatenated, creating a volume of fine-to-coarse 184 | # feature maps. One last convolutional layer linearly combines the feature maps from the volume 185 | # created by the specialized side outputs into a regressed result. The convolutional layers employ 186 | # 3 x 3 convolutional filters for efficiency, except the ones used for linearly combining the outputs 187 | # (1 x 1 filters). 188 | 189 | with slim.arg_scope([slim.conv2d], activation_fn=None): 190 | 191 | # Convolve last layer of stage 2 (before max-pooling) -> side_2 (None, 240, 427, 16) 192 | side_2 = slim.conv2d(net_2, 16, [3, 3], scope='conv2_2_16') 193 | 194 | # Convolve last layer of stage 3 (before max-pooling) -> side_3 (None, 120, 214, 16) 195 | side_3 = slim.conv2d(net_3, 16, [3, 3], scope='conv3_3_16') 196 | 197 | # Convolve last layer of stage 4 (before max-pooling) -> side_4 (None, 60, 107, 16) 198 | side_4 = slim.conv2d(net_4, 16, [3, 3], scope='conv4_3_16') 199 | 200 | # Convolve last layer of stage 5 (no max-pooling here) -> side_5 (None, 30, 54, 16) 201 | side_5 = slim.conv2d(net_5, 16, [3, 3], scope='conv5_3_16') 202 | 203 | # The _s layers are the side outputs that will be used for deep supervision 204 | 205 | # Dim reduction - linearly combine side_2 feature maps -> side_2_s (None, 240, 427, 1) 206 | side_2_s = slim.conv2d(side_2, 1, [1, 1], scope='score-dsn_2') 207 | 208 | # Dim reduction - linearly combine side_3 feature maps -> side_3_s (None, 120, 214, 1) 209 | side_3_s = slim.conv2d(side_3, 1, [1, 1], scope='score-dsn_3') 210 | 211 | # Dim reduction - linearly combine side_4 feature maps -> side_4_s (None, 60, 107, 1) 212 | side_4_s = slim.conv2d(side_4, 1, [1, 1], scope='score-dsn_4') 213 | 214 | # Dim reduction - linearly combine side_5 feature maps -> side_5_s (None, 30, 54, 1) 215 | side_5_s = slim.conv2d(side_5, 1, [1, 1], scope='score-dsn_5') 216 | 217 | # As reported in OSVOS-S, upscaling operations take place wherever necessary, and feature 218 | # maps from the separate paths are concatenated to construct a volume with information from 219 | # different levels of detail. We linearly fuse the feature maps to a single output which has 220 | # the same dimensions as the input image. 221 | with slim.arg_scope([slim.convolution2d_transpose], 222 | activation_fn=None, biases_initializer=None, padding='VALID', 223 | outputs_collections=end_points_collection, trainable=False): 224 | 225 | # Upsample the side outputs for deep supervision and center-crop them to the same size as 226 | # the input. Note that this is straight upsampling (we're not trying to learn upsampling 227 | # filters), hence the trainable=False param.
228 | 229 | # Upsample side_2_s (None, 240, 427, 1) -> (None, 480, 854, 1) 230 | # Center-crop (None, 480, 854, 1) to original image size (None, 480, 854, 1) 231 | side_2_s = slim.convolution2d_transpose(side_2_s, 1, 4, 2, scope='score-dsn_2-up') 232 | side_2_s = crop_features(side_2_s, im_size) 233 | utils.collect_named_outputs(end_points_collection, segnet_stream + '/score-dsn_2-cr', side_2_s) 234 | 235 | # Upsample side_3_s (None, 120, 214, 1) -> (None, 484, 860, 1) 236 | # Center-crop (None, 484, 860, 1) to original image size (None, 480, 854, 1) 237 | side_3_s = slim.convolution2d_transpose(side_3_s, 1, 8, 4, scope='score-dsn_3-up') 238 | side_3_s = crop_features(side_3_s, im_size) 239 | utils.collect_named_outputs(end_points_collection, segnet_stream + '/score-dsn_3-cr', side_3_s) 240 | 241 | # Upsample side_4_s (None, 60, 117, 1) -> (None, 488, 864, 1) 242 | # Center-crop (None, 488, 864, 1) to original image size (None, 480, 854, 1) 243 | side_4_s = slim.convolution2d_transpose(side_4_s, 1, 16, 8, scope='score-dsn_4-up') 244 | side_4_s = crop_features(side_4_s, im_size) 245 | utils.collect_named_outputs(end_points_collection, segnet_stream + '/score-dsn_4-cr', side_4_s) 246 | 247 | # Upsample side_5_s (None, 30, 54, 1) -> (None, 496, 880, 1) 248 | # Center-crop (None, 496, 880, 1) to original image size (None, 480, 854, 1) 249 | side_5_s = slim.convolution2d_transpose(side_5_s, 1, 32, 16, scope='score-dsn_5-up') 250 | side_5_s = crop_features(side_5_s, im_size) 251 | utils.collect_named_outputs(end_points_collection, segnet_stream + '/score-dsn_5-cr', side_5_s) 252 | 253 | # Upsample the main outputs and center-crop them to the same size as the input 254 | # Note that this is straight upsampling (we're not trying to learn upsampling filters), 255 | # hence the trainable=False param. Then, concatenate them in a big volume of fine-to-coarse 256 | # feature maps of the same size.
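                # Note: because these '-up' layers are created with trainable=False, the optimizer never
                # touches their filters. Instead, they are explicitly filled with bilinear interpolation
                # kernels at session start-up via interp_surgery() / upsample_filt() (defined further down
                # and called from _train() and test()).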
257 | 258 | # Upsample side_2 (None, 240, 427, 16) -> side_2_f (None, 480, 854, 16) 259 | # Center-crop (None, 480, 854, 16) to original image size (None, 480, 854, 16) 260 | side_2_f = slim.convolution2d_transpose(side_2, 16, 4, 2, scope='score-multi2-up') 261 | side_2_f = crop_features(side_2_f, im_size) 262 | utils.collect_named_outputs(end_points_collection, segnet_stream + '/side-multi2-cr', side_2_f) 263 | 264 | # Upsample side_3 (None, 120, 214, 16) -> side_3_f (None, 488, 864, 16) 265 | # Center-crop (None, 488, 864, 16) to original image size (None, 480, 854, 16) 266 | side_3_f = slim.convolution2d_transpose(side_3, 16, 8, 4, scope='score-multi3-up') 267 | side_3_f = crop_features(side_3_f, im_size) 268 | utils.collect_named_outputs(end_points_collection, segnet_stream + '/side-multi3-cr', side_3_f) 269 | 270 | # Upsample side_4 (None, 60, 117, 16) -> side_4_f (None, 488, 864, 16) 271 | # Center-crop (None, 488, 864, 16) to original image size (None, 480, 854, 16) 272 | side_4_f = slim.convolution2d_transpose(side_4, 16, 16, 8, scope='score-multi4-up') 273 | side_4_f = crop_features(side_4_f, im_size) 274 | utils.collect_named_outputs(end_points_collection, segnet_stream + '/side-multi4-cr', side_4_f) 275 | 276 | # Upsample side_5 (None, 30, 54, 16) -> side_5_f (None, 496, 880, 16) 277 | # Center-crop (None, 496, 880, 16) to original image size (None, 480, 854, 16) 278 | side_5_f = slim.convolution2d_transpose(side_5, 16, 32, 16, scope='score-multi5-up') 279 | side_5_f = crop_features(side_5_f, im_size) 280 | utils.collect_named_outputs(end_points_collection, segnet_stream + '/side-multi5-cr', side_5_f) 281 | 282 | # Build the main volume concat_side (None, 496, 880, 16x4) 283 | concat_side = tf.concat([side_2_f, side_3_f, side_4_f, side_5_f], axis=3) 284 | 285 | # Dim reduction - linearly combine concat_side feature maps -> (None, 496, 880, 1) 286 | net = slim.conv2d(concat_side, 1, [1, 1], scope='upscore-fuse') 287 | 288 | # Note that the FC layers of the original VGG16 network are not part of the DRIU architecture 289 | 290 | end_points = slim.utils.convert_collection_to_dict(end_points_collection) 291 | return net, end_points 292 | 293 | 294 | def upsample_filt(size): 295 | factor = (size + 1) // 2 296 | if size % 2 == 1: 297 | center = factor - 1 298 | else: 299 | center = factor - 0.5 300 | og = np.ogrid[:size, :size] 301 | return (1 - abs(og[0] - center) / factor) * \ 302 | (1 - abs(og[1] - center) / factor) 303 | 304 | 305 | # Set deconvolutional layers to compute bilinear interpolation 306 | def interp_surgery(variables): 307 | interp_tensors = [] 308 | for v in variables: 309 | if '-up' in v.name: 310 | h, w, k, m = v.get_shape() 311 | tmp = np.zeros((m, k, h, w)) 312 | if m != k: 313 | raise ValueError('input + output channels need to be the same') 314 | if h != w: 315 | raise ValueError('filters need to be square') 316 | up_filter = upsample_filt(int(h)) 317 | tmp[range(m), range(k), :, :] = up_filter 318 | interp_tensors.append(tf.assign(v, tmp.transpose((2, 3, 1, 0)), validate_shape=True, use_locking=True)) 319 | return interp_tensors 320 | 321 | # TODO: Move preprocessing to Tensorflow API?
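# For reference, a small worked example (illustrative only) of the bilinear kernel that upsample_filt()
# produces for the smallest '-up' layer, 'score-dsn_2-up' (kernel size 4, stride 2):
#
#   >>> upsample_filt(4)
#   array([[ 0.0625,  0.1875,  0.1875,  0.0625],
#          [ 0.1875,  0.5625,  0.5625,  0.1875],
#          [ 0.1875,  0.5625,  0.5625,  0.1875],
#          [ 0.0625,  0.1875,  0.1875,  0.0625]])
#
# i.e. the outer product of [0.25, 0.75, 0.75, 0.25] with itself. interp_surgery() places this kernel on the
# channel diagonal of every '-up' filter, which is what turns those deconvolutions into fixed bilinear
# interpolators.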
322 | def preprocess_inputs(inputs, segnet_stream='weak'): 323 | """Preprocess the inputs to adapt them to the network requirements 324 | Args: 325 | inputs: Image batch we want to input to the network, as a (batch_size,W,H,3) or (batch_size,W,H,4) np array (4 channels for the 'weak' stream, 3 for 'full') 326 | Returns: 327 | Image batch ready to input to the network, with the channel means subtracted 328 | """ 329 | assert(len(inputs.shape) == 4) 330 | 331 | if segnet_stream == 'weak': 332 | new_inputs = np.subtract(inputs.astype(np.float32), np.array((104.00699, 116.66877, 122.67892, 128.), dtype=np.float32)) 333 | else: 334 | new_inputs = np.subtract(inputs.astype(np.float32), np.array((104.00699, 116.66877, 122.67892), dtype=np.float32)) 335 | # input = tf.subtract(tf.cast(input, tf.float32), np.array((104.00699, 116.66877, 122.67892), dtype=np.float32)) 336 | # input = np.expand_dims(input, axis=0) 337 | return new_inputs 338 | 339 | 340 | # TODO: Move preprocessing to Tensorflow API? 341 | def preprocess_labels(labels): 342 | """Preprocess the labels to adapt them to the loss computation requirements 343 | Args: 344 | labels: Label batch as a (batch_size,W,H) or (batch_size,W,H,1) numpy array 345 | Returns: 346 | Label batch ready to compute the loss, as (batch_size,W,H,1) 347 | """ 348 | assert(len(labels.shape) == 3 or len(labels.shape) == 4) 349 | 350 | max_mask = np.max(labels) * 0.5 351 | labels = np.greater(labels, max_mask).astype(np.float32) 352 | if len(labels.shape) == 3: 353 | labels = np.expand_dims(labels, axis=-1) 354 | # label = tf.cast(np.array(label), tf.float32) 355 | # max_mask = tf.multiply(tf.reduce_max(label), 0.5) 356 | # label = tf.cast(tf.greater(label, max_mask), tf.float32) 357 | # label = tf.expand_dims(tf.expand_dims(label, 0), 3) 358 | return labels 359 | 360 | 361 | def load_vgg_imagenet(ckpt_path, segnet_stream='weak'): 362 | """Initialize the network parameters from the VGG-16 pre-trained model provided by TF-SLIM 363 | Args: 364 | ckpt_path: Path to the checkpoint, either the 3-channel or 4-channel input version 365 | segnet_stream: Is this the 3-channel or the 4-channel input version?
366 | Returns: 367 | Function that takes a session and initializes the network 368 | """ 369 | assert(segnet_stream in ['weak','full']) 370 | reader = tf.train.NewCheckpointReader(ckpt_path) 371 | var_to_shape_map = reader.get_variable_to_shape_map() 372 | vars_corresp = dict() 373 | for v in var_to_shape_map: 374 | if "conv" in v: 375 | vars_corresp[v] = slim.get_model_variables(v.replace("vgg_16", segnet_stream))[0] 376 | init_fn = slim.assign_from_checkpoint_fn(ckpt_path, vars_corresp) 377 | return init_fn 378 | 379 | def class_balanced_cross_entropy_loss(output, label): 380 | """Define the class balanced cross entropy loss to train the network 381 | Args: 382 | output: Output of the network 383 | label: Ground truth label 384 | Returns: 385 | Tensor that evaluates the loss 386 | """ 387 | 388 | labels = tf.cast(tf.greater(label, 0.5), tf.float32) 389 | 390 | num_labels_pos = tf.reduce_sum(labels) 391 | num_labels_neg = tf.reduce_sum(1.0 - labels) 392 | num_total = num_labels_pos + num_labels_neg 393 | 394 | output_gt_zero = tf.cast(tf.greater_equal(output, 0), tf.float32) 395 | loss_val = tf.multiply(output, (labels - output_gt_zero)) - tf.log( 396 | 1 + tf.exp(output - 2 * tf.multiply(output, output_gt_zero))) 397 | 398 | loss_pos = tf.reduce_sum(-tf.multiply(labels, loss_val)) 399 | loss_neg = tf.reduce_sum(-tf.multiply(1.0 - labels, loss_val)) 400 | 401 | final_loss = num_labels_neg / num_total * loss_pos + num_labels_pos / num_total * loss_neg 402 | 403 | return final_loss 404 | 405 | 406 | def class_balanced_cross_entropy_loss_theoretical(output, label): 407 | """Theoretical version of the class balanced cross entropy loss to train the network (Produces unstable results) 408 | Args: 409 | output: Output of the network 410 | label: Ground truth label 411 | Returns: 412 | Tensor that evaluates the loss 413 | """ 414 | output = tf.nn.sigmoid(output) 415 | 416 | labels_pos = tf.cast(tf.greater(label, 0), tf.float32) 417 | labels_neg = tf.cast(tf.less(label, 1), tf.float32) 418 | 419 | num_labels_pos = tf.reduce_sum(labels_pos) 420 | num_labels_neg = tf.reduce_sum(labels_neg) 421 | num_total = num_labels_pos + num_labels_neg 422 | 423 | loss_pos = tf.reduce_sum(tf.multiply(labels_pos, tf.log(output + 0.00001))) 424 | loss_neg = tf.reduce_sum(tf.multiply(labels_neg, tf.log(1 - output + 0.00001))) 425 | 426 | final_loss = -num_labels_neg / num_total * loss_pos - num_labels_pos / num_total * loss_neg 427 | 428 | return final_loss 429 | 430 | 431 | def load_caffe_weights(weights_path): 432 | """Initialize the network parameters from a .npy caffe weights file 433 | Args: 434 | Path to the .npy file containing the value of the network parameters 435 | Returns: 436 | Function that takes a session and initializes the network 437 | """ 438 | osvos_weights = np.load(weights_path).item() 439 | vars_corresp = dict() 440 | vars_corresp['osvos/conv1/conv1_1/weights'] = osvos_weights['conv1_1_w'] 441 | vars_corresp['osvos/conv1/conv1_1/biases'] = osvos_weights['conv1_1_b'] 442 | vars_corresp['osvos/conv1/conv1_2/weights'] = osvos_weights['conv1_2_w'] 443 | vars_corresp['osvos/conv1/conv1_2/biases'] = osvos_weights['conv1_2_b'] 444 | 445 | vars_corresp['osvos/conv2/conv2_1/weights'] = osvos_weights['conv2_1_w'] 446 | vars_corresp['osvos/conv2/conv2_1/biases'] = osvos_weights['conv2_1_b'] 447 | vars_corresp['osvos/conv2/conv2_2/weights'] = osvos_weights['conv2_2_w'] 448 | vars_corresp['osvos/conv2/conv2_2/biases'] = osvos_weights['conv2_2_b'] 449 | 450 | vars_corresp['osvos/conv3/conv3_1/weights'] = 
osvos_weights['conv3_1_w'] 451 | vars_corresp['osvos/conv3/conv3_1/biases'] = osvos_weights['conv3_1_b'] 452 | vars_corresp['osvos/conv3/conv3_2/weights'] = osvos_weights['conv3_2_w'] 453 | vars_corresp['osvos/conv3/conv3_2/biases'] = osvos_weights['conv3_2_b'] 454 | vars_corresp['osvos/conv3/conv3_3/weights'] = osvos_weights['conv3_3_w'] 455 | vars_corresp['osvos/conv3/conv3_3/biases'] = osvos_weights['conv3_3_b'] 456 | 457 | vars_corresp['osvos/conv4/conv4_1/weights'] = osvos_weights['conv4_1_w'] 458 | vars_corresp['osvos/conv4/conv4_1/biases'] = osvos_weights['conv4_1_b'] 459 | vars_corresp['osvos/conv4/conv4_2/weights'] = osvos_weights['conv4_2_w'] 460 | vars_corresp['osvos/conv4/conv4_2/biases'] = osvos_weights['conv4_2_b'] 461 | vars_corresp['osvos/conv4/conv4_3/weights'] = osvos_weights['conv4_3_w'] 462 | vars_corresp['osvos/conv4/conv4_3/biases'] = osvos_weights['conv4_3_b'] 463 | 464 | vars_corresp['osvos/conv5/conv5_1/weights'] = osvos_weights['conv5_1_w'] 465 | vars_corresp['osvos/conv5/conv5_1/biases'] = osvos_weights['conv5_1_b'] 466 | vars_corresp['osvos/conv5/conv5_2/weights'] = osvos_weights['conv5_2_w'] 467 | vars_corresp['osvos/conv5/conv5_2/biases'] = osvos_weights['conv5_2_b'] 468 | vars_corresp['osvos/conv5/conv5_3/weights'] = osvos_weights['conv5_3_w'] 469 | vars_corresp['osvos/conv5/conv5_3/biases'] = osvos_weights['conv5_3_b'] 470 | 471 | vars_corresp['osvos/conv2_2_16/weights'] = osvos_weights['conv2_2_16_w'] 472 | vars_corresp['osvos/conv2_2_16/biases'] = osvos_weights['conv2_2_16_b'] 473 | vars_corresp['osvos/conv3_3_16/weights'] = osvos_weights['conv3_3_16_w'] 474 | vars_corresp['osvos/conv3_3_16/biases'] = osvos_weights['conv3_3_16_b'] 475 | vars_corresp['osvos/conv4_3_16/weights'] = osvos_weights['conv4_3_16_w'] 476 | vars_corresp['osvos/conv4_3_16/biases'] = osvos_weights['conv4_3_16_b'] 477 | vars_corresp['osvos/conv5_3_16/weights'] = osvos_weights['conv5_3_16_w'] 478 | vars_corresp['osvos/conv5_3_16/biases'] = osvos_weights['conv5_3_16_b'] 479 | 480 | vars_corresp['osvos/score-dsn_2/weights'] = osvos_weights['score-dsn_2_w'] 481 | vars_corresp['osvos/score-dsn_2/biases'] = osvos_weights['score-dsn_2_b'] 482 | vars_corresp['osvos/score-dsn_3/weights'] = osvos_weights['score-dsn_3_w'] 483 | vars_corresp['osvos/score-dsn_3/biases'] = osvos_weights['score-dsn_3_b'] 484 | vars_corresp['osvos/score-dsn_4/weights'] = osvos_weights['score-dsn_4_w'] 485 | vars_corresp['osvos/score-dsn_4/biases'] = osvos_weights['score-dsn_4_b'] 486 | vars_corresp['osvos/score-dsn_5/weights'] = osvos_weights['score-dsn_5_w'] 487 | vars_corresp['osvos/score-dsn_5/biases'] = osvos_weights['score-dsn_5_b'] 488 | 489 | vars_corresp['osvos/upscore-fuse/weights'] = osvos_weights['new-score-weighting_w'] 490 | vars_corresp['osvos/upscore-fuse/biases'] = osvos_weights['new-score-weighting_b'] 491 | return slim.assign_from_values_fn(vars_corresp) 492 | 493 | 494 | def parameter_lr(segnet_stream='weak'): 495 | """Specify the relative learning rate for every parameter. The final learning rate 496 | in every parameter will be the one defined here multiplied by the global one 497 | Args: 498 | segnet_stream: Is this the 3-channel or the 4-channel input version? 
499 | Returns: 500 | Dictionary with the relative learning rate for every parameter 501 | """ 502 | assert(segnet_stream in ['weak','full']) 503 | vars_corresp = dict() 504 | vars_corresp[segnet_stream + '/conv1/conv1_1/weights'] = 1 505 | vars_corresp[segnet_stream + '/conv1/conv1_1/biases'] = 2 506 | vars_corresp[segnet_stream + '/conv1/conv1_2/weights'] = 1 507 | vars_corresp[segnet_stream + '/conv1/conv1_2/biases'] = 2 508 | 509 | vars_corresp[segnet_stream + '/conv2/conv2_1/weights'] = 1 510 | vars_corresp[segnet_stream + '/conv2/conv2_1/biases'] = 2 511 | vars_corresp[segnet_stream + '/conv2/conv2_2/weights'] = 1 512 | vars_corresp[segnet_stream + '/conv2/conv2_2/biases'] = 2 513 | 514 | vars_corresp[segnet_stream + '/conv3/conv3_1/weights'] = 1 515 | vars_corresp[segnet_stream + '/conv3/conv3_1/biases'] = 2 516 | vars_corresp[segnet_stream + '/conv3/conv3_2/weights'] = 1 517 | vars_corresp[segnet_stream + '/conv3/conv3_2/biases'] = 2 518 | vars_corresp[segnet_stream + '/conv3/conv3_3/weights'] = 1 519 | vars_corresp[segnet_stream + '/conv3/conv3_3/biases'] = 2 520 | 521 | vars_corresp[segnet_stream + '/conv4/conv4_1/weights'] = 1 522 | vars_corresp[segnet_stream + '/conv4/conv4_1/biases'] = 2 523 | vars_corresp[segnet_stream + '/conv4/conv4_2/weights'] = 1 524 | vars_corresp[segnet_stream + '/conv4/conv4_2/biases'] = 2 525 | vars_corresp[segnet_stream + '/conv4/conv4_3/weights'] = 1 526 | vars_corresp[segnet_stream + '/conv4/conv4_3/biases'] = 2 527 | 528 | vars_corresp[segnet_stream + '/conv5/conv5_1/weights'] = 1 529 | vars_corresp[segnet_stream + '/conv5/conv5_1/biases'] = 2 530 | vars_corresp[segnet_stream + '/conv5/conv5_2/weights'] = 1 531 | vars_corresp[segnet_stream + '/conv5/conv5_2/biases'] = 2 532 | vars_corresp[segnet_stream + '/conv5/conv5_3/weights'] = 1 533 | vars_corresp[segnet_stream + '/conv5/conv5_3/biases'] = 2 534 | 535 | vars_corresp[segnet_stream + '/conv2_2_16/weights'] = 1 536 | vars_corresp[segnet_stream + '/conv2_2_16/biases'] = 2 537 | vars_corresp[segnet_stream + '/conv3_3_16/weights'] = 1 538 | vars_corresp[segnet_stream + '/conv3_3_16/biases'] = 2 539 | vars_corresp[segnet_stream + '/conv4_3_16/weights'] = 1 540 | vars_corresp[segnet_stream + '/conv4_3_16/biases'] = 2 541 | vars_corresp[segnet_stream + '/conv5_3_16/weights'] = 1 542 | vars_corresp[segnet_stream + '/conv5_3_16/biases'] = 2 543 | 544 | vars_corresp[segnet_stream + '/score-dsn_2/weights'] = 0.1 545 | vars_corresp[segnet_stream + '/score-dsn_2/biases'] = 0.2 546 | vars_corresp[segnet_stream + '/score-dsn_3/weights'] = 0.1 547 | vars_corresp[segnet_stream + '/score-dsn_3/biases'] = 0.2 548 | vars_corresp[segnet_stream + '/score-dsn_4/weights'] = 0.1 549 | vars_corresp[segnet_stream + '/score-dsn_4/biases'] = 0.2 550 | vars_corresp[segnet_stream + '/score-dsn_5/weights'] = 0.1 551 | vars_corresp[segnet_stream + '/score-dsn_5/biases'] = 0.2 552 | 553 | vars_corresp[segnet_stream + '/upscore-fuse/weights'] = 0.01 554 | vars_corresp[segnet_stream + '/upscore-fuse/biases'] = 0.02 555 | return vars_corresp 556 | 557 | 558 | def _train(dataset, initial_ckpt, supervison, learning_rate, logs_path, max_training_iters, save_step, display_step, 559 | global_step, segnet_stream='weak', iter_mean_grad=1, batch_size=1, momentum=0.9, resume_training=False, config=None, finetune=1, 560 | test_image_path=None, ckpt_name='weak'): 561 | """Train OSVOS 562 | Args: 563 | dataset: Reference to a Dataset object instance 564 | initial_ckpt: Path to the checkpoint to initialize the network (May be parent network or 
pre-trained Imagenet) 565 | supervison: Level of the side outputs supervision: 1-Strong 2-Weak 3-No supervision 566 | learning_rate: Value for the learning rate. It can be a number or an instance to a learning rate object. 567 | logs_path: Path to store the checkpoints 568 | max_training_iters: Number of training iterations 569 | save_step: A checkpoint will be created every save_steps 570 | display_step: Information of the training will be displayed every display_steps 571 | global_step: Reference to a Variable that keeps track of the training steps 572 | segnet_stream: Binary segmentation network stream; either "appearance stream" or "flow stream" ['weak'|'full'] 573 | iter_mean_grad: Number of gradient computations that are average before updating the weights 574 | batch_size: Size of the training batch 575 | momentum: Value of the momentum parameter for the Momentum optimizer 576 | resume_training: Boolean to try to restore from a previous checkpoint (True) or not (False) 577 | config: Reference to a Configuration object used in the creation of a Session 578 | finetune: Use to select the type of training, 0 for the parent network and 1 for finetunning 579 | test_image_path: If image path provided, every save_step the result of the network with this image is stored 580 | ckpt_name: Checkpoint name 581 | Returns: 582 | """ 583 | model_name = os.path.join(logs_path, ckpt_name+".ckpt") 584 | if config is None: 585 | config = tf.ConfigProto() 586 | config.gpu_options.allow_growth = True 587 | # config.log_device_placement = True 588 | config.allow_soft_placement = True 589 | 590 | tf.logging.set_verbosity(tf.logging.INFO) 591 | 592 | # Prepare the input data: 593 | # Section "3.3 Binary Segmentation" of the MaskRNN paper and "Figure 2" are inconsistent when it comes to describing 594 | # the inputs of the two-stream network. In this implementation, we chose the input of the appearance stream 595 | # 'weak' to be the concatenation of the current frame It and the warped prediction of the previous 596 | # frame's segmentation mask bt-1, denoted as Phit-1,t(bt-1). The warping function 597 | # Phit-1,t(.) transforms the input based on the optical flow fields from frame It-1 to 598 | # frame It. 599 | # We chose the input of the flow stream 'full' to be the concatenation of the magnitude of the flow field from 600 | # It-1 to It and It to frame It+1 and, again, the warped prediction 601 | # of the previous frame's segmentation mask bt-1. 602 | # The architecture of both streams is identical. 603 | assert(segnet_stream in ['weak','full']) 604 | if segnet_stream == 'weak': 605 | input_image = tf.placeholder(tf.float32, [batch_size, None, None, 4]) 606 | else: 607 | input_image = tf.placeholder(tf.float32, [batch_size, None, None, 3]) 608 | input_label = tf.placeholder(tf.float32, [batch_size, None, None, 1]) 609 | 610 | # Create the convnet 611 | with slim.arg_scope(backbone_arg_scope()): 612 | net, end_points = backbone(input_image, segnet_stream) 613 | 614 | # Print name and shape of each tensor. 
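    # (The end_points keys for the cropped side outputs follow the '<segnet_stream>/<alias>' pattern used
    # with collect_named_outputs() in the model, e.g. 'weak/score-dsn_2-cr'; these are the tensors the side
    # losses below are computed on.)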
615 | print("Network Layers:") 616 | for k, v in end_points.items(): 617 | print(' name = {}, shape = {}'.format(v.name, v.get_shape())) 618 | 619 | # Print name and shape of parameter nodes (values not yet initialized) 620 | print("Network Parameters:") 621 | for v in slim.get_model_variables(): 622 | print(' name = {}, shape = {}'.format(v.name, v.get_shape())) 623 | 624 | # Initialize weights from pre-trained model 625 | if finetune == 0: 626 | init_weights = load_vgg_imagenet(initial_ckpt, segnet_stream) 627 | 628 | # Define loss 629 | with tf.name_scope('losses'): 630 | if supervison == 1 or supervison == 2: 631 | dsn_2_loss = class_balanced_cross_entropy_loss(end_points[segnet_stream + '/score-dsn_2-cr'], input_label) 632 | tf.summary.scalar('dsn_2_loss', dsn_2_loss) 633 | dsn_3_loss = class_balanced_cross_entropy_loss(end_points[segnet_stream + '/score-dsn_3-cr'], input_label) 634 | tf.summary.scalar('dsn_3_loss', dsn_3_loss) 635 | dsn_4_loss = class_balanced_cross_entropy_loss(end_points[segnet_stream + '/score-dsn_4-cr'], input_label) 636 | tf.summary.scalar('dsn_4_loss', dsn_4_loss) 637 | dsn_5_loss = class_balanced_cross_entropy_loss(end_points[segnet_stream + '/score-dsn_5-cr'], input_label) 638 | tf.summary.scalar('dsn_5_loss', dsn_5_loss) 639 | 640 | main_loss = class_balanced_cross_entropy_loss(net, input_label) 641 | tf.summary.scalar('main_loss', main_loss) 642 | 643 | if supervison == 1: 644 | output_loss = dsn_2_loss + dsn_3_loss + dsn_4_loss + dsn_5_loss + main_loss 645 | elif supervison == 2: 646 | output_loss = 0.5 * dsn_2_loss + 0.5 * dsn_3_loss + 0.5 * dsn_4_loss + 0.5 * dsn_5_loss + main_loss 647 | elif supervison == 3: 648 | output_loss = main_loss 649 | else: 650 | sys.exit('Incorrect supervision id, select 1 for supervision of the side outputs, 2 for weak supervision ' 651 | 'of the side outputs and 3 for no supervision of the side outputs') 652 | total_loss = output_loss + tf.add_n(tf.losses.get_regularization_losses()) 653 | tf.summary.scalar('total_loss', total_loss) 654 | 655 | # Define optimization method 656 | with tf.name_scope('optimization'): 657 | tf.summary.scalar('learning_rate', learning_rate) 658 | optimizer = tf.train.MomentumOptimizer(learning_rate, momentum) 659 | grads_and_vars = optimizer.compute_gradients(total_loss) 660 | with tf.name_scope('grad_accumulator'): 661 | grad_accumulator = {} 662 | for ind in range(0, len(grads_and_vars)): 663 | if grads_and_vars[ind][0] is not None: 664 | grad_accumulator[ind] = tf.ConditionalAccumulator(grads_and_vars[ind][0].dtype) 665 | with tf.name_scope('apply_gradient'): 666 | layer_lr = parameter_lr(segnet_stream) 667 | grad_accumulator_ops = [] 668 | for var_ind, grad_acc in grad_accumulator.items(): # Phil: was: for var_ind, grad_acc in grad_accumulator.iteritems(): 669 | var_name = str(grads_and_vars[var_ind][1].name).split(':')[0] 670 | var_grad = grads_and_vars[var_ind][0] 671 | grad_accumulator_ops.append(grad_acc.apply_grad(var_grad * layer_lr[var_name], local_step=global_step)) 672 | with tf.name_scope('take_gradients'): 673 | mean_grads_and_vars = [] 674 | for var_ind, grad_acc in grad_accumulator.items(): # Phil: was: for var_ind, grad_acc in grad_accumulator.iteritems(): 675 | mean_grads_and_vars.append( 676 | (grad_acc.take_grad(iter_mean_grad), grads_and_vars[var_ind][1])) 677 | apply_gradient_op = optimizer.apply_gradients(mean_grads_and_vars, global_step=global_step) 678 | # Log training info 679 | merged_summary_op = tf.summary.merge_all() 680 | 681 | # Log evolution of test image 682 | 
if test_image_path is not None: 683 | probabilities = tf.nn.sigmoid(net) 684 | img_summary = tf.summary.image("Output probabilities", probabilities, max_outputs=1) 685 | # Initialize variables 686 | init = tf.global_variables_initializer() 687 | 688 | # Create objects to record timing and memory of the graph execution 689 | # run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE) # Option in the session options=run_options 690 | # run_metadata = tf.RunMetadata() # Option in the session run_metadata=run_metadata 691 | # summary_writer.add_run_metadata(run_metadata, 'step%d' % i) 692 | with tf.Session(config=config) as sess: 693 | print('Init variable') 694 | sess.run(init) 695 | 696 | # op to write logs to Tensorboard 697 | summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph()) 698 | 699 | # Create saver to manage checkpoints 700 | saver = tf.train.Saver(max_to_keep=None) 701 | 702 | last_ckpt_path = tf.train.latest_checkpoint(logs_path) 703 | if last_ckpt_path is not None and resume_training: 704 | # Load last checkpoint 705 | print('Initializing from previous checkpoint...') 706 | saver.restore(sess, last_ckpt_path) 707 | step = global_step.eval() + 1 708 | else: 709 | # Load pre-trained model 710 | if finetune == 0: 711 | print('Initializing from pre-trained imagenet model...') 712 | init_weights(sess) 713 | else: 714 | print('Initializing from specified pre-trained model...') 715 | # init_weights(sess) 716 | var_list = [] 717 | for var in tf.global_variables(): 718 | var_type = var.name.split('/')[-1] 719 | if 'weights' in var_type or 'bias' in var_type: 720 | var_list.append(var) 721 | saver_res = tf.train.Saver(var_list=var_list) 722 | saver_res.restore(sess, initial_ckpt) 723 | step = 1 724 | sess.run(interp_surgery(tf.global_variables())) 725 | print('Weights initialized') 726 | 727 | print('Start training') 728 | while step < max_training_iters + 1: 729 | # Average the gradient 730 | for _ in range(0, iter_mean_grad): 731 | batch_inputs, batch_labels = dataset.next_batch(batch_size, 'train', segnet_stream) 732 | inputs = preprocess_inputs(batch_inputs, segnet_stream) 733 | labels = preprocess_labels(batch_labels) 734 | run_res = sess.run([total_loss, merged_summary_op] + grad_accumulator_ops, 735 | feed_dict={input_image: inputs, input_label: labels}) 736 | batch_loss = run_res[0] 737 | summary = run_res[1] 738 | 739 | # Apply the gradients 740 | sess.run(apply_gradient_op) # Momentum updates here its statistics 741 | 742 | # Save summary reports 743 | summary_writer.add_summary(summary, step) 744 | 745 | # Display training status 746 | if step % display_step == 0: 747 | print("{} Iter {}: Training Loss = {:.4f}".format(datetime.now(), step, batch_loss)) 748 | 749 | # Save a checkpoint 750 | if step % save_step == 0: 751 | if test_image_path is not None: 752 | curr_output = sess.run(img_summary, feed_dict={input_image: preprocess_inputs(test_image_path, segnet_stream)}) 753 | summary_writer.add_summary(curr_output, step) 754 | save_path = saver.save(sess, model_name, global_step=global_step) 755 | print("Model saved in file: %s" % save_path) 756 | 757 | step += 1 758 | 759 | if (step - 1) % save_step != 0: 760 | save_path = saver.save(sess, model_name, global_step=global_step) 761 | print("Model saved in file: %s" % save_path) 762 | 763 | print('Finished training.') 764 | 765 | 766 | def train_parent(dataset, initial_ckpt, supervison, learning_rate, logs_path, max_training_iters, save_step, 767 | display_step, global_step, segnet_stream='full', 
iter_mean_grad=1, batch_size=1, momentum=0.9, resume_training=False, 768 | config=None, test_image_path=None, ckpt_name='full'): 769 | """Train OSVOS parent network 770 | Args: 771 | See _train() 772 | Returns: 773 | """ 774 | finetune = 0 775 | _train(dataset, initial_ckpt, supervison, learning_rate, logs_path, max_training_iters, save_step, display_step, 776 | global_step, segnet_stream, iter_mean_grad, batch_size, momentum, resume_training, config, finetune, test_image_path, 777 | ckpt_name) 778 | 779 | 780 | def train_finetune(dataset, initial_ckpt, supervison, learning_rate, logs_path, max_training_iters, save_step, 781 | display_step, global_step, segnet_stream='full', iter_mean_grad=1, batch_size=1, momentum=0.9, resume_training=False, 782 | config=None, test_image_path=None, ckpt_name='full'): 783 | """Finetune OSVOS 784 | Args: 785 | See _train() 786 | Returns: 787 | """ 788 | finetune = 1 789 | _train(dataset, initial_ckpt, supervison, learning_rate, logs_path, max_training_iters, save_step, display_step, 790 | global_step, segnet_stream, iter_mean_grad, batch_size, momentum, resume_training, config, finetune, test_image_path, 791 | ckpt_name) 792 | 793 | 794 | def test(dataset, checkpoint_file, pred_masks_path, img_pred_masks_path, segnet_stream='full', config=None): 795 | """Test one sequence 796 | Args: 797 | dataset: Reference to a Dataset object instance 798 | checkpoint_path: Path of the checkpoint to use for the evaluation 799 | segnet_stream: Binary segmentation network stream; either "appearance stream" or "flow stream" ['weak'|'full'] 800 | pred_masks_path: Path to save the individual predicted masks 801 | img_pred_masks_path: Path to save the composite of the input image overlayed with the predicted masks 802 | config: Reference to a Configuration object used in the creation of a Session 803 | Returns: 804 | """ 805 | if config is None: 806 | config = tf.ConfigProto() 807 | config.gpu_options.allow_growth = True 808 | # config.log_device_placement = True 809 | config.allow_soft_placement = True 810 | tf.logging.set_verbosity(tf.logging.INFO) 811 | 812 | # Input data 813 | assert(segnet_stream in ['weak','full']) 814 | batch_size = 1 815 | if segnet_stream == 'weak': 816 | input_image = tf.placeholder(tf.float32, [batch_size, None, None, 4]) 817 | else: 818 | input_image = tf.placeholder(tf.float32, [batch_size, None, None, 3]) 819 | 820 | # Create the convnet 821 | with slim.arg_scope(backbone_arg_scope()): 822 | net, end_points = backbone(input_image, segnet_stream) 823 | probabilities = tf.nn.sigmoid(net) 824 | global_step = tf.Variable(0, name='global_step', trainable=False) 825 | 826 | # Create a saver to load the network 827 | saver = tf.train.Saver([v for v in tf.global_variables() if '-up' not in v.name and '-cr' not in v.name]) 828 | 829 | if not os.path.exists(pred_masks_path): 830 | os.makedirs(pred_masks_path) 831 | if not os.path.exists(img_pred_masks_path): 832 | os.makedirs(img_pred_masks_path) 833 | 834 | with tf.Session(config=config) as sess: 835 | sess.run(tf.global_variables_initializer()) 836 | sess.run(interp_surgery(tf.global_variables())) 837 | saver.restore(sess, checkpoint_file) 838 | rounds, rounds_left = divmod(dataset.test_size, batch_size) 839 | if rounds_left: 840 | rounds += 1 841 | with warnings.catch_warnings(): 842 | warnings.simplefilter("ignore") 843 | for _round in trange(rounds, ascii=True, ncols=80, desc='Saving predictions as PNGs'): 844 | samples, output_files = dataset.next_batch(batch_size, 'test', segnet_stream) 845 | inputs 
= preprocess_inputs(samples, segnet_stream) 846 | masks = sess.run(probabilities, feed_dict={input_image: inputs}) 847 | masks = np.where(masks.astype(np.float32) < 162.0/255.0, 0, 255).astype('uint8') 848 | for mask, output_file in zip(masks, output_files): 849 | imsave(os.path.join(pred_masks_path, output_file), mask[:, :, 0]) 850 | -------------------------------------------------------------------------------- /tfwss/model_test.py: -------------------------------------------------------------------------------- 1 | """ 2 | model_test.py 3 | 4 | The SDI Grabcut testing is done using a model trained in the ["Simple Does It" Grabcut Training for Instance Segmentation](model_train.ipynb) notebook, so make sure you've run that notebook first! We test the model on the **validation** split of the Berkeley-augmented dataset. 5 | 6 | The Berkley augmented dataset can be downloaded from [here]( 7 | http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz). 8 | 9 | Written by Phil Ferriere 10 | 11 | Licensed under the MIT License (see LICENSE for details) 12 | 13 | Based on: 14 | - https://github.com/scaelles/OSVOS-TensorFlow/blob/master/osvos_demo.py 15 | Written by Sergi Caelles (scaelles@vision.ee.ethz.ch) 16 | This file is part of the OSVOS paper presented in: 17 | Sergi Caelles, Kevis-Kokitsi Maninis, Jordi Pont-Tuset, Laura Leal-Taixe, Daniel Cremers, Luc Van Gool 18 | One-Shot Video Object Segmentation 19 | CVPR 2017 20 | Unknown code license 21 | """ 22 | 23 | from __future__ import absolute_import 24 | from __future__ import division 25 | from __future__ import absolute_import 26 | 27 | import sys 28 | import tensorflow as tf 29 | slim = tf.contrib.slim 30 | 31 | # Import model files 32 | import model 33 | from dataset import BKVOCDataset 34 | 35 | # Parameters 36 | gpu_id = 0 37 | # Modify the value below to match the value of max_training_iters_3 in the training notebook! 38 | max_training_iters = 50000 39 | 40 | # Model paths 41 | segnet_stream = 'weak' 42 | ckpt_name = 'vgg_16_4chan_' + segnet_stream 43 | ckpt_path = 'models/' + ckpt_name + '/' + ckpt_name + '.ckpt-' + str(max_training_iters) 44 | 45 | # Load the Berkeley-augmented Pascal VOC 2012 segmentation dataset 46 | if sys.platform.startswith("win"): 47 | dataset_root = "E:/datasets/bk-voc/benchmark_RELEASE/dataset" 48 | else: 49 | dataset_root = '/media/EDrive/datasets/bk-voc/benchmark_RELEASE/dataset' 50 | dataset = BKVOCDataset(phase='test', dataset_root=dataset_root) 51 | 52 | # Display dataset configuration 53 | dataset.print_config() 54 | 55 | # Test the model 56 | with tf.Graph().as_default(): 57 | with tf.device('/gpu:' + str(gpu_id)): 58 | model.test(dataset, ckpt_path, dataset.pred_masks_path, dataset.img_pred_masks_path, segnet_stream) 59 | 60 | # Combine original images with predicted instance masks 61 | dataset.combine_images_with_predicted_masks() -------------------------------------------------------------------------------- /tfwss/model_train.py: -------------------------------------------------------------------------------- 1 | """ 2 | model_train.py 3 | 4 | This file performs training of the SDI Grabcut weakly supervised model for **instance segmentation**. 5 | Following the instructions provided in Section "6. Instance Segmentation Results" of the "Simple Does It" paper, we use 6 | the Berkeley-augmented Pascal VOC segmentation dataset that provides per-instance segmentation masks for VOC2012 data. 
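Training proceeds in three stages that differ only in the level of side-output supervision passed to
model.train_parent() (1: strong, 2: weak, 3: none). The second and third stages resume from the checkpoints
written by the previous stage (resume_training=True), and the stages run up to max_training_iters_1/2/3
iterations, respectively.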
7 | 8 | The Berkley augmented dataset can be downloaded from [here]( 9 | http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz) 10 | 11 | The SDI Grabcut training is done using a **4-channel input** VGG16 network pre-trained on ImageNet, so make sure to run 12 | the [`VGG16 Surgery`](vgg16_surgery.ipynb) notebook first! 13 | 14 | To monitor training, run: 15 | ``` 16 | # On Windows 17 | tensorboard --logdir E:\repos\tf-wss\tfwss\models\vgg_16_4chan_weak 18 | # On Ubuntu 19 | tensorboard --logdir /media/EDrive/repos/tf-wss/tfwss/models/vgg_16_4chan_weak 20 | http://:6006 21 | ``` 22 | 23 | Written by Phil Ferriere 24 | 25 | Licensed under the MIT License (see LICENSE for details) 26 | 27 | Based on: 28 | - https://github.com/scaelles/OSVOS-TensorFlow/blob/master/osvos_parent_demo.py 29 | Written by Sergi Caelles (scaelles@vision.ee.ethz.ch) 30 | This file is part of the OSVOS paper presented in: 31 | Sergi Caelles, Kevis-Kokitsi Maninis, Jordi Pont-Tuset, Laura Leal-Taixe, Daniel Cremers, Luc Van Gool 32 | One-Shot Video Object Segmentation 33 | CVPR 2017 34 | Unknown code license 35 | """ 36 | 37 | from __future__ import absolute_import 38 | from __future__ import division 39 | from __future__ import absolute_import 40 | 41 | import os 42 | import sys 43 | import tensorflow as tf 44 | slim = tf.contrib.slim 45 | 46 | # Import model files 47 | import model 48 | from dataset import BKVOCDataset 49 | 50 | # Model paths 51 | # Pre-trained VGG_16 downloaded from http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz 52 | imagenet_ckpt = 'models/vgg_16_4chan/vgg_16_4chan.ckpt' 53 | segnet_stream = 'weak' 54 | ckpt_name = 'vgg_16_4chan_' + segnet_stream 55 | logs_path = 'models/' + ckpt_name 56 | 57 | # Training parameters 58 | gpu_id = 0 59 | iter_mean_grad = 10 60 | max_training_iters_1 = 15000 61 | max_training_iters_2 = 30000 62 | max_training_iters_3 = 50000 63 | save_step = 5000 64 | test_image = None 65 | display_step = 100 66 | ini_lr = 1e-8 67 | boundaries = [10000, 15000, 25000, 30000, 40000] 68 | values = [ini_lr, ini_lr * 0.1, ini_lr, ini_lr * 0.1, ini_lr, ini_lr * 0.1] 69 | 70 | # Load the Berkeley-augmented Pascal VOC 2012 segmentation dataset 71 | if sys.platform.startswith("win"): 72 | dataset_root = "E:/datasets/bk-voc/benchmark_RELEASE/dataset" 73 | else: 74 | dataset_root = '/media/EDrive/datasets/bk-voc/benchmark_RELEASE/dataset' 75 | dataset = BKVOCDataset(phase='train', dataset_root=dataset_root) 76 | 77 | # Display dataset configuration 78 | dataset.print_config() 79 | 80 | # Train the network with strong side outputs supervision 81 | with tf.Graph().as_default(): 82 | with tf.device('/gpu:' + str(gpu_id)): 83 | global_step = tf.Variable(0, name='global_step', trainable=False) 84 | learning_rate = tf.train.piecewise_constant(global_step, boundaries, values) 85 | model.train_parent(dataset, imagenet_ckpt, 1, learning_rate, logs_path, max_training_iters_1, save_step, 86 | display_step, global_step, segnet_stream, iter_mean_grad=iter_mean_grad, test_image_path=test_image, 87 | ckpt_name=ckpt_name) 88 | # Train the network with weak side outputs supervision 89 | with tf.Graph().as_default(): 90 | with tf.device('/gpu:' + str(gpu_id)): 91 | global_step = tf.Variable(max_training_iters_1, name='global_step', trainable=False) 92 | learning_rate = tf.train.piecewise_constant(global_step, boundaries, values) 93 | model.train_parent(dataset, imagenet_ckpt, 2, learning_rate, logs_path, max_training_iters_2, save_step, 94 | 
display_step, global_step, segnet_stream, iter_mean_grad=iter_mean_grad, resume_training=True, 95 | test_image_path=test_image, ckpt_name=ckpt_name) 96 | # Train the network without side outputs supervision 97 | with tf.Graph().as_default(): 98 | with tf.device('/gpu:' + str(gpu_id)): 99 | global_step = tf.Variable(max_training_iters_2, name='global_step', trainable=False) 100 | learning_rate = tf.train.piecewise_constant(global_step, boundaries, values) 101 | model.train_parent(dataset, imagenet_ckpt, 3, learning_rate, logs_path, max_training_iters_3, save_step, 102 | display_step, global_step, segnet_stream, iter_mean_grad=iter_mean_grad, resume_training=True, 103 | test_image_path=test_image, ckpt_name=ckpt_name) 104 | 105 | 106 | -------------------------------------------------------------------------------- /tfwss/models/.gitignore: -------------------------------------------------------------------------------- 1 | # Ignore everything in this directory 2 | * 3 | # Except this file 4 | !.gitignore 5 | -------------------------------------------------------------------------------- /tfwss/net_surgery.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# VGG16 Net Surgery\n", 8 | "VGG16 Transfer Learning After 3-to-4-Channel Input Conversion" 9 | ] 10 | }, 11 | { 12 | "cell_type": "markdown", 13 | "metadata": {}, 14 | "source": [ 15 | "## Background\n", 16 | "The weakly supervised segmentation techniques presented in the \"Simply Does It\" paper use a backbone convnet (either DeepLab or VGG16 network) **pre-trained on ImageNet**. This pre-trained network takes RGB images as an input (W x H x 3). Remember that the weakly supervised version is trained using **4-channel inputs: RGB + a binary mask with a filled bounding box of the object instance**. Therefore, we need to **perform net surgery and create a 4-channel input version** of the VGG16 net, initialized with the 3-channel parameter values **except** for the additional convolutional filters (we use Gaussian initialization for them).\n", 17 | "\n", 18 | "Here's how a typical VGG16 convnet looks like:\n", 19 | "\n", 20 | "![](img/vgg16.png)\n", 21 | "\n", 22 | "In the picture above, we're modifying the first block on the left." 23 | ] 24 | }, 25 | { 26 | "cell_type": "code", 27 | "execution_count": 1, 28 | "metadata": { 29 | "collapsed": true 30 | }, 31 | "outputs": [], 32 | "source": [ 33 | "\"\"\"\n", 34 | "net_surgery.ipynb\n", 35 | "\n", 36 | "VGG16 Transfer Learning After 3-to-4-Channel Input Conversion\n", 37 | "\n", 38 | "Written by Phil Ferriere\n", 39 | "\n", 40 | "Licensed under the MIT License (see LICENSE for details)\n", 41 | "\n", 42 | "Based on:\n", 43 | " - https://github.com/minhnhat93/tf_object_detection_multi_channels/blob/master/edit_checkpoint.py\n", 44 | " Written by SNhat M. 
Nguyen\n", 45 | " Unknown code license\n", 46 | "\"\"\"\n", 47 | "from tensorflow.python import pywrap_tensorflow\n", 48 | "import numpy as np\n", 49 | "import tensorflow as tf" 50 | ] 51 | }, 52 | { 53 | "cell_type": "markdown", 54 | "metadata": {}, 55 | "source": [ 56 | "## Configuration" 57 | ] 58 | }, 59 | { 60 | "cell_type": "code", 61 | "execution_count": 2, 62 | "metadata": { 63 | "collapsed": true 64 | }, 65 | "outputs": [], 66 | "source": [ 67 | "num_input_channels = 4 # AStream uses 4-channel inputs\n", 68 | "init_method = 'gaussian' # ['gaussian'|'spread_average'|'zeros']\n", 69 | "input_path = 'models/vgg_16_3chan.ckpt' # copy of checkpoint in http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz\n", 70 | "output_path = 'models/vgg_16_4chan.ckpt'" 71 | ] 72 | }, 73 | { 74 | "cell_type": "markdown", 75 | "metadata": {}, 76 | "source": [ 77 | "## Surgery" 78 | ] 79 | }, 80 | { 81 | "cell_type": "markdown", 82 | "metadata": {}, 83 | "source": [ 84 | "Here are the VGG16 stage 1 parameters we'll want to modify:\n", 85 | "```\n", 86 | "(dlwin36tfwss) Phil@SERVERP E:\\repos\\tf-wss\\tfwss\\tools\n", 87 | "$ python -m inspect_checkpoint --file_name=../models/vgg_16_3chan.ckpt | grep -i conv1_1\n", 88 | "vgg_16/conv1/conv1_1/weights (DT_FLOAT) [3,3,3,64]\n", 89 | "vgg_16/conv1/conv1_1/biases (DT_FLOAT) [64]\n", 90 | "```\n", 91 | "First, let's find the correct tensor:" 92 | ] 93 | }, 94 | { 95 | "cell_type": "code", 96 | "execution_count": 3, 97 | "metadata": {}, 98 | "outputs": [ 99 | { 100 | "name": "stdout", 101 | "output_type": "stream", 102 | "text": [ 103 | "Loading checkpoint...\n", 104 | "...done loading checkpoint.\n", 105 | "Tensor vgg_16/conv1/conv1_1/weights of shape (3, 3, 3, 64) located.\n" 106 | ] 107 | } 108 | ], 109 | "source": [ 110 | "print('Loading checkpoint...')\n", 111 | "reader = pywrap_tensorflow.NewCheckpointReader(input_path)\n", 112 | "print('...done loading checkpoint.')\n", 113 | "\n", 114 | "var_to_shape_map = reader.get_variable_to_shape_map()\n", 115 | "var_to_edit_name = 'vgg_16/conv1/conv1_1/weights'\n", 116 | "\n", 117 | "for key in sorted(var_to_shape_map):\n", 118 | " if key != var_to_edit_name:\n", 119 | " var = tf.Variable(reader.get_tensor(key), name=key, dtype=tf.float32)\n", 120 | " else:\n", 121 | " var_to_edit = reader.get_tensor(var_to_edit_name)\n", 122 | " print('Tensor {} of shape {} located.'.format(var_to_edit_name, var_to_edit.shape))" 123 | ] 124 | }, 125 | { 126 | "cell_type": "markdown", 127 | "metadata": {}, 128 | "source": [ 129 | "Now, let's edit the tensor and initialize it according to the chosen init method:" 130 | ] 131 | }, 132 | { 133 | "cell_type": "code", 134 | "execution_count": 4, 135 | "metadata": { 136 | "collapsed": true 137 | }, 138 | "outputs": [], 139 | "source": [ 140 | "sess = tf.Session()\n", 141 | "if init_method != 'gaussian':\n", 142 | " print('Error: Unimplemented initialization method')\n", 143 | "new_channels_shape = list(var_to_edit.shape)\n", 144 | "new_channels_shape[2] = num_input_channels - 3\n", 145 | "gaussian_var = tf.random_normal(shape=new_channels_shape, stddev=0.001).eval(session=sess)\n", 146 | "new_var = np.concatenate([var_to_edit, gaussian_var], axis=2)\n", 147 | "new_var = tf.Variable(new_var, name=var_to_edit_name, dtype=tf.float32)" 148 | ] 149 | }, 150 | { 151 | "cell_type": "markdown", 152 | "metadata": {}, 153 | "source": [ 154 | "Finally, let's update the network parameters and the save the updated model to disk:" 155 | ] 156 | }, 157 | { 158 | "cell_type": "code", 159 | 
"execution_count": 5, 160 | "metadata": {}, 161 | "outputs": [ 162 | { 163 | "data": { 164 | "text/plain": [ 165 | "'models/vgg_16_4chan.ckpt'" 166 | ] 167 | }, 168 | "execution_count": 5, 169 | "metadata": {}, 170 | "output_type": "execute_result" 171 | } 172 | ], 173 | "source": [ 174 | "sess.run(tf.global_variables_initializer())\n", 175 | "saver = tf.train.Saver()\n", 176 | "saver.save(sess, output_path)" 177 | ] 178 | }, 179 | { 180 | "cell_type": "markdown", 181 | "metadata": {}, 182 | "source": [ 183 | "## Verification\n", 184 | "Verify the result of this surgery by looking at the output of the following commands:\n", 185 | "```\n", 186 | "$ python -m inspect_checkpoint --file_name=../models/vgg_16_3chan.ckpt --tensor_name=vgg_16/conv1/conv1_1/weights > vgg_16_3chan-conv1_1-weights.txt\n", 187 | "$ python -m inspect_checkpoint --file_name=../models/vgg_16_4chan.ckpt --tensor_name=vgg_16/conv1/conv1_1/weights > vgg_16_4chan-conv1_1-weights.txt\n", 188 | "```\n", 189 | "You should see the following values in the first filter:\n", 190 | "```\n", 191 | "# 3-channel VGG16\n", 192 | "3,3,3,0\n", 193 | "[[[ 0.4800154 0.55037946 0.42947057]\n", 194 | " [ 0.4085474 0.44007453 0.373467 ]\n", 195 | " [-0.06514555 -0.08138704 -0.06136011]]\n", 196 | "\n", 197 | " [[ 0.31047726 0.34573907 0.27476987]\n", 198 | " [ 0.05020237 0.04063221 0.03868078]\n", 199 | " [-0.40338343 -0.45350131 -0.36722335]]\n", 200 | "\n", 201 | " [[-0.05087169 -0.05863491 -0.05746817]\n", 202 | " [-0.28522751 -0.33066967 -0.26224968]\n", 203 | " [-0.41851634 -0.4850302 -0.35009676]]]\n", 204 | " \n", 205 | "# 4-channel VGG16\n", 206 | "3,3,4,0\n", 207 | "[[[ 4.80015397e-01 5.50379455e-01 4.29470569e-01 1.13388560e-04]\n", 208 | " [ 4.08547401e-01 4.40074533e-01 3.73466998e-01 7.61439209e-04]\n", 209 | " [ -6.51455522e-02 -8.13870355e-02 -6.13601133e-02 4.74345696e-04]]\n", 210 | "\n", 211 | " [[ 3.10477257e-01 3.45739067e-01 2.74769872e-01 4.11637186e-04]\n", 212 | " [ 5.02023660e-02 4.06322069e-02 3.86807770e-02 1.38304755e-03]\n", 213 | " [ -4.03383434e-01 -4.53501314e-01 -3.67223352e-01 1.28411280e-03]]\n", 214 | "\n", 215 | " [[ -5.08716851e-02 -5.86349145e-02 -5.74681684e-02 -6.34787197e-04]\n", 216 | " [ -2.85227507e-01 -3.30669671e-01 -2.62249678e-01 -1.77454809e-03]\n", 217 | " [ -4.18516338e-01 -4.85030204e-01 -3.50096762e-01 2.10441509e-03]]]\n", 218 | " \n", 219 | "```" 220 | ] 221 | } 222 | ], 223 | "metadata": { 224 | "kernelspec": { 225 | "display_name": "Python 3", 226 | "language": "python", 227 | "name": "python3" 228 | }, 229 | "language_info": { 230 | "codemirror_mode": { 231 | "name": "ipython", 232 | "version": 3 233 | }, 234 | "file_extension": ".py", 235 | "mimetype": "text/x-python", 236 | "name": "python", 237 | "nbconvert_exporter": "python", 238 | "pygments_lexer": "ipython3", 239 | "version": "3.6.1" 240 | } 241 | }, 242 | "nbformat": 4, 243 | "nbformat_minor": 2 244 | } 245 | -------------------------------------------------------------------------------- /tfwss/segment.py: -------------------------------------------------------------------------------- 1 | """ 2 | segment.py 3 | 4 | Segmentation utility functions. 5 | 6 | Written by Phil Ferriere 7 | 8 | Licensed under the MIT License (see LICENSE for details) 9 | """ 10 | 11 | # TODO The SDI paper uses Grabcut+ (Grabcut on HED boundaries). How difficult is it to implement? 12 | # TODO Try this version of Grabcut: https://github.com/meng-tang/KernelCut_ICCV15 ? 13 | # TODO Try this version of Grabcut: https://github.com/meng-tang/OneCut ? 
14 | 15 | import numpy as np 16 | import cv2 as cv 17 | 18 | _MIN_AREA = 9 19 | _ITER_COUNT = 5 20 | _RECT_SHRINK = 3 21 | 22 | def rect_mask(shape, bbox): 23 | """Given a bbox and a shape, creates a mask (white rectangle foreground, black background) 24 | Param: 25 | shape: shape (H,W) or (H,W,1) 26 | bbox: bbox numpy array [y1, x1, y2, x2] 27 | Returns: 28 | mask 29 | """ 30 | mask = np.zeros(shape[:2], np.uint8) 31 | mask[bbox[0]:bbox[2], bbox[1]:bbox[3]] = 255 32 | if len(shape) == 3 and shape[2] == 1: 33 | mask = np.expand_dims(mask, axis=-1) 34 | return mask 35 | 36 | def grabcut(img_path, bbox): 37 | """Use Grabcut to create a binary segmentation of an image within a bounding box 38 | Param: 39 | img: path to input image astype('uint8') 40 | bbox: bbox numpy array [y1, x1, y2, x2] 41 | Returns: 42 | mask with binary segmentation astype('uint8') 43 | Based on: 44 | https://docs.opencv.org/trunk/d8/d83/tutorial_py_grabcut.html 45 | """ 46 | img = cv.imread(img_path) 47 | width, height = bbox[3] - bbox[1], bbox[2] - bbox[0] 48 | if width * height < _MIN_AREA: 49 | # OpenCV's Grabcut breaks if the rectangle is too small! 50 | # This happens with instance mask 2008_002212_4.png 51 | # Fix: Draw a filled rectangle at that location, making the assumption everything in the rectangle is foreground 52 | assert(width*height > 0) 53 | mask = rect_mask(img.shape[:2], bbox) 54 | elif width * height == img.shape[0] * img.shape[1]: 55 | # If the rectangle covers the entire image, grabCut can't distinguish between background and foreground 56 | # because it assumes what's outside the rect is background (no "outside" if the rect is as large as the input) 57 | # This happens with instance mask 2008_002638_3.png 58 | # Crappy Fix: Shrink the rectangle corners by _RECT_SHRINK on all sides 59 | # Use Grabcut to create a segmentation within the bbox 60 | rect = (_RECT_SHRINK, _RECT_SHRINK, width - _RECT_SHRINK * 2, height - _RECT_SHRINK * 2) 61 | gct = np.zeros(img.shape[:2], np.uint8) 62 | bgdModel, fgdModel = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64) 63 | cv.grabCut(img, gct, rect, bgdModel,fgdModel, _ITER_COUNT, cv.GC_INIT_WITH_RECT) 64 | mask = np.where((gct == 2) | (gct == 0), 0, 255).astype('uint8') 65 | else: 66 | # Use Grabcut to create a segmentation within the bbox 67 | rect = (bbox[1], bbox[0], width, height) 68 | gct = np.zeros(img.shape[:2], np.uint8) 69 | bgdModel, fgdModel = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64) 70 | cv.grabCut(img, gct, rect, bgdModel,fgdModel, _ITER_COUNT, cv.GC_INIT_WITH_RECT) 71 | mask = np.where((gct == 2) | (gct == 0), 0, 255).astype('uint8') 72 | return mask 73 | -------------------------------------------------------------------------------- /tfwss/setup/dlubu36tfwss.yml: -------------------------------------------------------------------------------- 1 | name: dlubu36tfwss 2 | channels: 3 | - conda-forge 4 | - defaults 5 | dependencies: 6 | - backports=1.0=py36_1 7 | - backports.functools_lru_cache=1.4=py36_1 8 | - blas=1.1=openblas 9 | - bzip2=1.0.6=1 10 | - cairo=1.14.6=5 11 | - ffmpeg=3.2.4=3 12 | - fontconfig=2.12.1=6 13 | - freetype=2.7=2 14 | - gettext=0.19.8.1=0 15 | - giflib=5.1.4=0 16 | - glib=2.51.4=0 17 | - harfbuzz=1.3.4=2 18 | - hdf5=1.10.1=1 19 | - icu=58.2=0 20 | - jasper=1.900.1=4 21 | - jpeg=9b=2 22 | - libiconv=1.15=0 23 | - libtiff=4.0.7=1 24 | - libwebp=0.5.2=7 25 | - matplotlib=2.1.1=py36_0 26 | - openblas=0.2.20=7 27 | - opencv=3.3.0=py36_blas_openblas_203 28 | - pillow=4.3.0=py36_1 29 | - 
pixman=0.34.0=1 30 | - qt=5.6.2=6 31 | - scikit-learn=0.19.1=py36_blas_openblas_201 32 | - scipy=1.0.0=py36_blas_openblas_201 33 | - x264=20131217=3 34 | - backports.weakref=1.0rc1=py36_0 35 | - bleach=1.5.0=py36_0 36 | - certifi=2016.2.28=py36_0 37 | - cudatoolkit=8.0=3 38 | - cudnn=6.0.21=cuda8.0_0 39 | - cycler=0.10.0=py36_0 40 | - cython=0.26=py36_0 41 | - dbus=1.10.20=0 42 | - decorator=4.1.2=py36_0 43 | - entrypoints=0.2.3=py36_0 44 | - expat=2.1.0=0 45 | - gst-plugins-base=1.8.0=0 46 | - gstreamer=1.8.0=0 47 | - html5lib=0.9999999=py36_0 48 | - ipykernel=4.6.1=py36_0 49 | - ipython=6.1.0=py36_0 50 | - ipython_genutils=0.2.0=py36_0 51 | - ipywidgets=6.0.0=py36_0 52 | - jbig=2.1=0 53 | - jedi=0.10.2=py36_2 54 | - jinja2=2.9.6=py36_0 55 | - jsonschema=2.6.0=py36_0 56 | - jupyter=1.0.0=py36_3 57 | - jupyter_client=5.1.0=py36_0 58 | - jupyter_console=5.2.0=py36_0 59 | - jupyter_core=4.3.0=py36_0 60 | - libffi=3.2.1=1 61 | - libgcc=5.2.0=0 62 | - libgfortran=3.0.0=1 63 | - libpng=1.6.30=1 64 | - libprotobuf=3.4.0=0 65 | - libsodium=1.0.10=0 66 | - libxcb=1.12=1 67 | - libxml2=2.9.4=0 68 | - markdown=2.6.9=py36_0 69 | - markupsafe=1.0=py36_0 70 | - mistune=0.7.4=py36_0 71 | - mkl=2017.0.3=0 72 | - mkl-service=1.1.2=py36_3 73 | - nbconvert=5.2.1=py36_0 74 | - nbformat=4.4.0=py36_0 75 | - networkx=1.11=py36_0 76 | - notebook=5.0.0=py36_0 77 | - numpy=1.13.1=py36_0 78 | - olefile=0.44=py36_0 79 | - openssl=1.0.2l=0 80 | - pandas=0.20.3=py36_0 81 | - pandocfilters=1.4.2=py36_0 82 | - path.py=10.3.1=py36_0 83 | - pcre=8.39=1 84 | - pexpect=4.2.1=py36_0 85 | - pickleshare=0.7.4=py36_0 86 | - pip=9.0.1=py36_1 87 | - prompt_toolkit=1.0.15=py36_0 88 | - protobuf=3.4.0=py36_0 89 | - ptyprocess=0.5.2=py36_0 90 | - py=1.4.34=py36_0 91 | - pygments=2.2.0=py36_0 92 | - pyparsing=2.2.0=py36_0 93 | - pyqt=5.6.0=py36_2 94 | - pytest=3.2.1=py36_0 95 | - python=3.6.2=0 96 | - python-dateutil=2.6.1=py36_0 97 | - pytz=2017.2=py36_0 98 | - pywavelets=0.5.2=np113py36_0 99 | - pyzmq=16.0.2=py36_0 100 | - qtconsole=4.3.1=py36_0 101 | - readline=6.2=2 102 | - scikit-image=0.13.0=np113py36_0 103 | - setuptools=36.4.0=py36_1 104 | - simplegeneric=0.8.1=py36_1 105 | - sip=4.18=py36_0 106 | - six=1.10.0=py36_0 107 | - sqlite=3.13.0=0 108 | - tensorflow-gpu=1.3.0=0 109 | - tensorflow-gpu-base=1.3.0=py36cuda8.0cudnn6.0_1 110 | - tensorflow-tensorboard=0.1.5=py36_0 111 | - terminado=0.6=py36_0 112 | - testpath=0.3.1=py36_0 113 | - tk=8.5.18=0 114 | - tornado=4.5.2=py36_0 115 | - tqdm=4.15.0=py36_0 116 | - traitlets=4.3.2=py36_0 117 | - wcwidth=0.1.7=py36_0 118 | - werkzeug=0.12.2=py36_0 119 | - wheel=0.29.0=py36_0 120 | - widgetsnbextension=3.0.2=py36_0 121 | - xz=5.2.3=0 122 | - zeromq=4.1.5=0 123 | - zlib=1.2.11=0 124 | - pip: 125 | - backports.functools-lru-cache==1.4 126 | - imageio==2.2.0 127 | - ipython-genutils==0.2.0 128 | - jupyter-client==5.1.0 129 | - jupyter-console==5.2.0 130 | - jupyter-core==4.3.0 131 | - prompt-toolkit==1.0.15 132 | - pyflow==1.0 133 | - tensorflow==1.3.0 134 | prefix: /media/EDrive/toolkits.ubu/anaconda3-4.4.0/envs/dlubu36tfwss 135 | 136 | -------------------------------------------------------------------------------- /tfwss/setup/dlwin36tfwss.yml: -------------------------------------------------------------------------------- 1 | name: dlwin36tfwss 2 | channels: 3 | - conda-forge 4 | - defaults 5 | dependencies: 6 | - bzip2=1.0.6=vc14_1 7 | - ca-certificates=2017.11.5=0 8 | - ffmpeg=3.4.1=0 9 | - freetype=2.7=vc14_1 10 | - icu=58.2=vc14_0 11 | - jpeg=9b=vc14_2 12 | - 
jupyter_contrib_core=0.3.3=py36_0 13 | - jupyter_contrib_nbextensions=0.3.1=py36_0 14 | - jupyter_highlight_selected_word=0.0.10=py36_0 15 | - jupyter_latex_envs=1.3.8.2=py36_1 16 | - jupyter_nbextensions_configurator=0.2.7=py36_0 17 | - libgpuarray=0.7.5=vc14_0 18 | - libpng=1.6.28=vc14_1 19 | - libtiff=4.0.7=vc14_0 20 | - libwebp=0.5.2=vc14_7 21 | - opencv=3.3.0=py36_200 22 | - openssl=1.0.2n=vc14_0 23 | - pillow=4.3.0=py36_0 24 | - pygpu=0.7.5=py36_0 25 | - pyyaml=3.12=py36_1 26 | - qt=5.6.2=vc14_1 27 | - tk=8.5.19=vc14_1 28 | - vc=14=0 29 | - yaml=0.1.6=vc14_0 30 | - zlib=1.2.8=vc14_3 31 | - bleach=1.5.0=py36_0 32 | - colorama=0.3.9=py36h029ae33_0 33 | - cycler=0.10.0=py36_0 34 | - cython=0.26=py36_0 35 | - decorator=4.0.11=py36_0 36 | - entrypoints=0.2.2=py36_1 37 | - html5lib=0.999=py36_0 38 | - ipykernel=4.6.1=py36_0 39 | - ipython=6.1.0=py36_0 40 | - ipython_genutils=0.2.0=py36_0 41 | - ipywidgets=6.0.0=py36_0 42 | - jedi=0.10.2=py36_2 43 | - jinja2=2.9.6=py36_0 44 | - jsonschema=2.6.0=py36_0 45 | - jupyter=1.0.0=py36_3 46 | - jupyter_client=5.1.0=py36_0 47 | - jupyter_console=5.1.0=py36_0 48 | - jupyter_core=4.3.0=py36_0 49 | - libpython=2.0=py36_0 50 | - m2w64-binutils=2.25.1=5 51 | - m2w64-bzip2=1.0.6=6 52 | - m2w64-crt-git=5.0.0.4636.2595836=2 53 | - m2w64-gcc=5.3.0=6 54 | - m2w64-gcc-ada=5.3.0=6 55 | - m2w64-gcc-fortran=5.3.0=6 56 | - m2w64-gcc-libgfortran=5.3.0=6 57 | - m2w64-gcc-libs=5.3.0=7 58 | - m2w64-gcc-libs-core=5.3.0=7 59 | - m2w64-gcc-objc=5.3.0=6 60 | - m2w64-gmp=6.1.0=2 61 | - m2w64-headers-git=5.0.0.4636.c0ad18a=2 62 | - m2w64-isl=0.16.1=2 63 | - m2w64-libiconv=1.14=6 64 | - m2w64-libmangle-git=5.0.0.4509.2e5a9a2=2 65 | - m2w64-libwinpthread-git=5.0.0.4634.697f757=2 66 | - m2w64-make=4.1.2351.a80a8b8=2 67 | - m2w64-mpc=1.0.3=3 68 | - m2w64-mpfr=3.1.4=4 69 | - m2w64-pkg-config=0.29.1=2 70 | - m2w64-toolchain=5.3.0=7 71 | - m2w64-tools-git=5.0.0.4592.90b8472=2 72 | - m2w64-windows-default-manifest=6.4=3 73 | - m2w64-winpthreads-git=5.0.0.4634.697f757=2 74 | - m2w64-zlib=1.2.8=10 75 | - mako=1.0.6=py36_0 76 | - markupsafe=0.23=py36_2 77 | - matplotlib=2.0.2=np113py36_0 78 | - mistune=0.7.4=py36_0 79 | - mkl=2017.0.3=0 80 | - mkl-service=1.1.2=py36_3 81 | - msys2-conda-epoch=20160418=1 82 | - nbconvert=5.2.1=py36_0 83 | - nbformat=4.3.0=py36_0 84 | - networkx=1.11=py36_0 85 | - nose=1.3.7=py36_1 86 | - notebook=5.0.0=py36_0 87 | - numpy=1.13.0=py36_0 88 | - olefile=0.44=py36_0 89 | - pandas=0.20.3=py36_0 90 | - pandocfilters=1.4.1=py36_0 91 | - path.py=10.3.1=py36_0 92 | - pickleshare=0.7.4=py36_0 93 | - pip=9.0.1=py36_1 94 | - prompt_toolkit=1.0.14=py36_0 95 | - pygments=2.2.0=py36_0 96 | - pyparsing=2.2.0=py36_0 97 | - pyqt=5.6.0=py36_2 98 | - python=3.6.1=2 99 | - python-dateutil=2.6.0=py36_0 100 | - pytz=2017.2=py36_0 101 | - pywavelets=0.5.2=np113py36_0 102 | - pyzmq=16.0.2=py36_0 103 | - qtconsole=4.3.0=py36_0 104 | - scikit-image=0.13.0=np113py36_0 105 | - scikit-learn=0.19.0=np113py36_0 106 | - scipy=0.19.1=np113py36_0 107 | - setuptools=27.2.0=py36_1 108 | - simplegeneric=0.8.1=py36_1 109 | - sip=4.18=py36_0 110 | - six=1.10.0=py36_0 111 | - testpath=0.3.1=py36_0 112 | - tornado=4.5.1=py36_0 113 | - tqdm=4.19.4=py36h02a35f0_0 114 | - traitlets=4.3.2=py36_0 115 | - vs2015_runtime=14.0.25420=0 116 | - wcwidth=0.1.7=py36_0 117 | - wheel=0.29.0=py36_0 118 | - widgetsnbextension=2.0.0=py36_0 119 | - pip: 120 | - backports.weakref==1.0rc1 121 | - beautifulsoup4==4.6.0 122 | - certifi==2017.7.27.1 123 | - chardet==3.0.4 124 | - cliff==2.8.0 125 | - cmd2==0.7.7 126 | 
- cntk==2.0 127 | - configparser==3.5.0 128 | - cssselect==1.0.1 129 | - ez-setup==0.9 130 | - idna==2.6 131 | - imageio==2.2.0 132 | - ipython-genutils==0.2.0 133 | - jupyter-client==5.1.0 134 | - jupyter-console==5.1.0 135 | - jupyter-contrib-core==0.3.3 136 | - jupyter-contrib-nbextensions==0.3.1 137 | - jupyter-core==4.3.0 138 | - jupyter-highlight-selected-word==0.0.10 139 | - jupyter-latex-envs==1.3.8.2 140 | - jupyter-nbextensions-configurator==0.2.7 141 | - kaggle-cli==0.12.10 142 | - keras==2.0.5 143 | - lxml==4.0.0 144 | - markdown==2.6.9 145 | - mechanicalsoup==0.8.0 146 | - object-detection==0.1 147 | - pbr==3.1.1 148 | - prettytable==0.7.2 149 | - progressbar2==3.34.3 150 | - prompt-toolkit==1.0.14 151 | - protobuf==3.3.0 152 | - pyflow==1.0 153 | - pyperclip==1.5.32 154 | - python-utils==2.2.0 155 | - requests==2.18.4 156 | - stevedore==1.27.1 157 | - tensorflow-gpu==1.3.0 158 | - tensorflow-tensorboard==0.1.8 159 | - theano==0.9.0 160 | - urllib3==1.22 161 | - werkzeug==0.12.2 162 | prefix: e:\toolkits.win\anaconda3-4.4.0\envs\dlwin36tfwss 163 | 164 | -------------------------------------------------------------------------------- /tfwss/setup/requirements_ubu.txt: -------------------------------------------------------------------------------- 1 | # This file may be used to create an environment using: 2 | # $ conda create --name --file 3 | # platform: linux-64 4 | backports=1.0=py36_1 5 | backports.functools_lru_cache=1.4=py36_1 6 | backports.weakref=1.0rc1=py36_0 7 | blas=1.1=openblas 8 | bleach=1.5.0=py36_0 9 | bzip2=1.0.6=1 10 | cairo=1.14.6=5 11 | certifi=2016.2.28=py36_0 12 | cudatoolkit=8.0=3 13 | cudnn=6.0.21=cuda8.0_0 14 | cycler=0.10.0=py36_0 15 | cython=0.26=py36_0 16 | dbus=1.10.20=0 17 | decorator=4.1.2=py36_0 18 | entrypoints=0.2.3=py36_0 19 | expat=2.1.0=0 20 | ffmpeg=3.2.4=3 21 | fontconfig=2.12.1=6 22 | freetype=2.7=2 23 | gettext=0.19.8.1=0 24 | giflib=5.1.4=0 25 | glib=2.51.4=0 26 | gst-plugins-base=1.8.0=0 27 | gstreamer=1.8.0=0 28 | harfbuzz=1.3.4=2 29 | hdf5=1.10.1=1 30 | html5lib=0.9999999=py36_0 31 | icu=58.2=0 32 | imageio=2.2.0 33 | ipykernel=4.6.1=py36_0 34 | ipython=6.1.0=py36_0 35 | ipython_genutils=0.2.0=py36_0 36 | ipywidgets=6.0.0=py36_0 37 | jasper=1.900.1=4 38 | jbig=2.1=0 39 | jedi=0.10.2=py36_2 40 | jinja2=2.9.6=py36_0 41 | jpeg=9b=2 42 | jsonschema=2.6.0=py36_0 43 | jupyter=1.0.0=py36_3 44 | jupyter_client=5.1.0=py36_0 45 | jupyter_console=5.2.0=py36_0 46 | jupyter_core=4.3.0=py36_0 47 | libffi=3.2.1=1 48 | libgcc=5.2.0=0 49 | libgfortran=3.0.0=1 50 | libiconv=1.15=0 51 | libpng=1.6.30=1 52 | libprotobuf=3.4.0=0 53 | libsodium=1.0.10=0 54 | libtiff=4.0.7=1 55 | libwebp=0.5.2=7 56 | libxcb=1.12=1 57 | libxml2=2.9.4=0 58 | markdown=2.6.9=py36_0 59 | markupsafe=1.0=py36_0 60 | matplotlib=2.1.1=py36_0 61 | mistune=0.7.4=py36_0 62 | mkl=2017.0.3=0 63 | mkl-service=1.1.2=py36_3 64 | nbconvert=5.2.1=py36_0 65 | nbformat=4.4.0=py36_0 66 | networkx=1.11=py36_0 67 | notebook=5.0.0=py36_0 68 | numpy=1.13.1=py36_0 69 | olefile=0.44=py36_0 70 | openblas=0.2.20=7 71 | opencv=3.3.0=py36_blas_openblas_203 72 | openssl=1.0.2l=0 73 | pandas=0.20.3=py36_0 74 | pandocfilters=1.4.2=py36_0 75 | path.py=10.3.1=py36_0 76 | pcre=8.39=1 77 | pexpect=4.2.1=py36_0 78 | pickleshare=0.7.4=py36_0 79 | pillow=4.3.0=py36_1 80 | pip=9.0.1=py36_1 81 | pixman=0.34.0=1 82 | prompt_toolkit=1.0.15=py36_0 83 | protobuf=3.4.0=py36_0 84 | ptyprocess=0.5.2=py36_0 85 | py=1.4.34=py36_0 86 | pygments=2.2.0=py36_0 87 | pyparsing=2.2.0=py36_0 88 | pyqt=5.6.0=py36_2 89 | 
pytest=3.2.1=py36_0 90 | python=3.6.2=0 91 | python-dateutil=2.6.1=py36_0 92 | pytz=2017.2=py36_0 93 | pywavelets=0.5.2=np113py36_0 94 | pyzmq=16.0.2=py36_0 95 | qt=5.6.2=6 96 | qtconsole=4.3.1=py36_0 97 | readline=6.2=2 98 | scikit-image=0.13.0=np113py36_0 99 | scikit-learn=0.19.1=py36_blas_openblas_201 100 | scipy=1.0.0=py36_blas_openblas_201 101 | setuptools=36.4.0=py36_1 102 | simplegeneric=0.8.1=py36_1 103 | sip=4.18=py36_0 104 | six=1.10.0=py36_0 105 | sqlite=3.13.0=0 106 | tensorflow-gpu=1.3.0=0 107 | tensorflow-gpu-base=1.3.0=py36cuda8.0cudnn6.0_1 108 | tensorflow-tensorboard=0.1.5=py36_0 109 | terminado=0.6=py36_0 110 | testpath=0.3.1=py36_0 111 | tk=8.5.18=0 112 | tornado=4.5.2=py36_0 113 | tqdm=4.15.0=py36_0 114 | traitlets=4.3.2=py36_0 115 | wcwidth=0.1.7=py36_0 116 | werkzeug=0.12.2=py36_0 117 | wheel=0.29.0=py36_0 118 | widgetsnbextension=3.0.2=py36_0 119 | x264=20131217=3 120 | xz=5.2.3=0 121 | zeromq=4.1.5=0 122 | zlib=1.2.11=0 123 | -------------------------------------------------------------------------------- /tfwss/setup/requirements_win.txt: -------------------------------------------------------------------------------- 1 | # This file may be used to create an environment using: 2 | # $ conda create --name --file 3 | # platform: win-64 4 | bleach=1.5.0=py36_0 5 | bzip2=1.0.6=vc14_1 6 | ca-certificates=2017.11.5=0 7 | colorama=0.3.9=py36h029ae33_0 8 | cycler=0.10.0=py36_0 9 | cython=0.26=py36_0 10 | decorator=4.0.11=py36_0 11 | entrypoints=0.2.2=py36_1 12 | ffmpeg=3.4.1=0 13 | freetype=2.7=vc14_1 14 | html5lib=0.999=py36_0 15 | icu=58.2=vc14_0 16 | imageio=2.2.0 17 | ipykernel=4.6.1=py36_0 18 | ipython=6.1.0=py36_0 19 | ipython_genutils=0.2.0=py36_0 20 | ipywidgets=6.0.0=py36_0 21 | jedi=0.10.2=py36_2 22 | jinja2=2.9.6=py36_0 23 | jpeg=9b=vc14_2 24 | jsonschema=2.6.0=py36_0 25 | jupyter=1.0.0=py36_3 26 | jupyter_client=5.1.0=py36_0 27 | jupyter_console=5.1.0=py36_0 28 | jupyter_contrib_core=0.3.3=py36_0 29 | jupyter_contrib_nbextensions=0.3.1=py36_0 30 | jupyter_core=4.3.0=py36_0 31 | jupyter_highlight_selected_word=0.0.10=py36_0 32 | jupyter_latex_envs=1.3.8.2=py36_1 33 | jupyter_nbextensions_configurator=0.2.7=py36_0 34 | libgpuarray=0.7.5=vc14_0 35 | libpng=1.6.28=vc14_1 36 | libpython=2.0=py36_0 37 | libtiff=4.0.7=vc14_0 38 | libwebp=0.5.2=vc14_7 39 | m2w64-binutils=2.25.1=5 40 | m2w64-bzip2=1.0.6=6 41 | m2w64-crt-git=5.0.0.4636.2595836=2 42 | m2w64-gcc=5.3.0=6 43 | m2w64-gcc-ada=5.3.0=6 44 | m2w64-gcc-fortran=5.3.0=6 45 | m2w64-gcc-libgfortran=5.3.0=6 46 | m2w64-gcc-libs=5.3.0=7 47 | m2w64-gcc-libs-core=5.3.0=7 48 | m2w64-gcc-objc=5.3.0=6 49 | m2w64-gmp=6.1.0=2 50 | m2w64-headers-git=5.0.0.4636.c0ad18a=2 51 | m2w64-isl=0.16.1=2 52 | m2w64-libiconv=1.14=6 53 | m2w64-libmangle-git=5.0.0.4509.2e5a9a2=2 54 | m2w64-libwinpthread-git=5.0.0.4634.697f757=2 55 | m2w64-make=4.1.2351.a80a8b8=2 56 | m2w64-mpc=1.0.3=3 57 | m2w64-mpfr=3.1.4=4 58 | m2w64-pkg-config=0.29.1=2 59 | m2w64-toolchain=5.3.0=7 60 | m2w64-tools-git=5.0.0.4592.90b8472=2 61 | m2w64-windows-default-manifest=6.4=3 62 | m2w64-winpthreads-git=5.0.0.4634.697f757=2 63 | m2w64-zlib=1.2.8=10 64 | mako=1.0.6=py36_0 65 | markupsafe=0.23=py36_2 66 | matplotlib=2.0.2=np113py36_0 67 | mistune=0.7.4=py36_0 68 | mkl=2017.0.3=0 69 | mkl-service=1.1.2=py36_3 70 | msys2-conda-epoch=20160418=1 71 | nbconvert=5.2.1=py36_0 72 | nbformat=4.3.0=py36_0 73 | networkx=1.11=py36_0 74 | nose=1.3.7=py36_1 75 | notebook=5.0.0=py36_0 76 | numpy=1.13.0=py36_0 77 | olefile=0.44=py36_0 78 | opencv=3.3.0=py36_200 79 | openssl=1.0.2n=vc14_0 
80 | pandas=0.20.3=py36_0 81 | pandocfilters=1.4.1=py36_0 82 | path.py=10.3.1=py36_0 83 | pickleshare=0.7.4=py36_0 84 | pillow=4.3.0=py36_0 85 | pip=9.0.1=py36_1 86 | prompt_toolkit=1.0.14=py36_0 87 | pygments=2.2.0=py36_0 88 | pygpu=0.7.5=py36_0 89 | pyparsing=2.2.0=py36_0 90 | pyqt=5.6.0=py36_2 91 | python=3.6.1=2 92 | python-dateutil=2.6.0=py36_0 93 | pytz=2017.2=py36_0 94 | pywavelets=0.5.2=np113py36_0 95 | pyyaml=3.12=py36_1 96 | pyzmq=16.0.2=py36_0 97 | qt=5.6.2=vc14_1 98 | qtconsole=4.3.0=py36_0 99 | scikit-image=0.13.0=np113py36_0 100 | scikit-learn=0.19.0=np113py36_0 101 | scipy=0.19.1=np113py36_0 102 | setuptools=27.2.0=py36_1 103 | simplegeneric=0.8.1=py36_1 104 | sip=4.18=py36_0 105 | six=1.10.0=py36_0 106 | testpath=0.3.1=py36_0 107 | tk=8.5.19=vc14_1 108 | tornado=4.5.1=py36_0 109 | tqdm=4.19.4=py36h02a35f0_0 110 | traitlets=4.3.2=py36_0 111 | vc=14=0 112 | vs2015_runtime=14.0.25420=0 113 | wcwidth=0.1.7=py36_0 114 | wheel=0.29.0=py36_0 115 | widgetsnbextension=2.0.0=py36_0 116 | yaml=0.1.6=vc14_0 117 | zlib=1.2.8=vc14_3 118 | -------------------------------------------------------------------------------- /tfwss/tools/inspect_checkpoint.py: -------------------------------------------------------------------------------- 1 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | # ============================================================================== 15 | """A simple script for inspect checkpoint files.""" 16 | from __future__ import absolute_import 17 | from __future__ import division 18 | from __future__ import print_function 19 | 20 | import argparse 21 | import sys 22 | 23 | import numpy as np 24 | 25 | from tensorflow.python import pywrap_tensorflow 26 | from tensorflow.python.platform import app 27 | from tensorflow.python.platform import flags 28 | 29 | FLAGS = None 30 | 31 | 32 | def print_tensors_in_checkpoint_file(file_name, tensor_name, all_tensors, 33 | all_tensor_names): 34 | """Prints tensors in a checkpoint file. 35 | 36 | If no `tensor_name` is provided, prints the tensor names and shapes 37 | in the checkpoint file. 38 | 39 | If `tensor_name` is provided, prints the content of the tensor. 40 | 41 | Args: 42 | file_name: Name of the checkpoint file. 43 | tensor_name: Name of the tensor in the checkpoint file to print. 44 | all_tensors: Boolean indicating whether to print all tensors. 45 | all_tensor_names: Boolean indicating whether to print all tensor names. 
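Example (a minimal sketch mirroring tools/inspect_vgg_16_3chan.bat; it assumes
  the checkpoint file ../models/vgg_16_3chan.ckpt already exists, which is not
  shipped with the repo):

    python -m inspect_checkpoint --file_name=../models/vgg_16_3chan.ckpt --tensor_name=vgg_16/conv1/conv1_1/weights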
46 | """ 47 | try: 48 | reader = pywrap_tensorflow.NewCheckpointReader(file_name) 49 | debug_string = reader.debug_string() 50 | if all_tensors or all_tensor_names: 51 | var_to_shape_map = reader.get_variable_to_shape_map() 52 | for key in sorted(var_to_shape_map): 53 | print("tensor_name: ", key) 54 | if all_tensors: 55 | print(reader.get_tensor(key)) 56 | elif not tensor_name: 57 | print(reader.debug_string().decode("utf-8")) 58 | else: 59 | print("tensor_name: ", tensor_name) 60 | if tensor_name == "vgg_16/conv1/conv1_1/weights": 61 | tensor = reader.get_tensor(tensor_name) 62 | feature_maps = tensor.shape[-1] 63 | for feature_map in range(feature_maps): 64 | print("{},{},{},{}".format(tensor.shape[0], tensor.shape[1], tensor.shape[2], feature_map)) 65 | print(tensor[:,:,:,feature_map]) 66 | print(tensor.shape) 67 | print(tensor) 68 | else: 69 | print(reader.get_tensor(tensor_name)) 70 | except Exception as e: # pylint: disable=broad-except 71 | print(str(e)) 72 | if "corrupted compressed block contents" in str(e): 73 | print("It's likely that your checkpoint file has been compressed " 74 | "with SNAPPY.") 75 | if ("Data loss" in str(e) and 76 | (any([e in file_name for e in [".index", ".meta", ".data"]]))): 77 | proposed_file = ".".join(file_name.split(".")[0:-1]) 78 | v2_file_error_template = """ 79 | It's likely that this is a V2 checkpoint and you need to provide the filename 80 | *prefix*. Try removing the '.' and extension. Try: 81 | inspect checkpoint --file_name = {}""" 82 | print(v2_file_error_template.format(proposed_file)) 83 | 84 | 85 | def parse_numpy_printoption(kv_str): 86 | """Sets a single numpy printoption from a string of the form 'x=y'. 87 | 88 | See documentation on numpy.set_printoptions() for details about what values 89 | x and y can take. x can be any option listed there other than 'formatter'. 90 | 91 | Args: 92 | kv_str: A string of the form 'x=y', such as 'threshold=100000' 93 | 94 | Raises: 95 | argparse.ArgumentTypeError: If the string couldn't be used to set any 96 | nump printoption. 97 | """ 98 | k_v_str = kv_str.split("=", 1) 99 | if len(k_v_str) != 2 or not k_v_str[0]: 100 | raise argparse.ArgumentTypeError("'%s' is not in the form k=v." % kv_str) 101 | k, v_str = k_v_str 102 | printoptions = np.get_printoptions() 103 | if k not in printoptions: 104 | raise argparse.ArgumentTypeError("'%s' is not a valid printoption." % k) 105 | v_type = type(printoptions[k]) 106 | if v_type is type(None): 107 | raise argparse.ArgumentTypeError( 108 | "Setting '%s' from the command line is not supported." % k) 109 | try: 110 | v = ( 111 | v_type(v_str) 112 | if v_type is not bool else flags.BooleanParser().parse(v_str)) 113 | except ValueError as e: 114 | raise argparse.ArgumentTypeError(e.message) 115 | np.set_printoptions(**{k: v}) 116 | 117 | 118 | def main(unused_argv): 119 | if not FLAGS.file_name: 120 | print("Usage: inspect_checkpoint --file_name=checkpoint_file_name " 121 | "[--tensor_name=tensor_to_print] " 122 | "[--all_tensors] " 123 | "[--all_tensor_names] " 124 | "[--printoptions]") 125 | sys.exit(1) 126 | else: 127 | print_tensors_in_checkpoint_file(FLAGS.file_name, FLAGS.tensor_name, 128 | FLAGS.all_tensors, FLAGS.all_tensor_names) 129 | 130 | 131 | if __name__ == "__main__": 132 | parser = argparse.ArgumentParser() 133 | parser.register("type", "bool", lambda v: v.lower() == "true") 134 | parser.add_argument( 135 | "--file_name", 136 | type=str, 137 | default="", 138 | help="Checkpoint filename. 
" 139 | "Note, if using Checkpoint V2 format, file_name is the " 140 | "shared prefix between all files in the checkpoint.") 141 | parser.add_argument( 142 | "--tensor_name", 143 | type=str, 144 | default="", 145 | help="Name of the tensor to inspect") 146 | parser.add_argument( 147 | "--all_tensors", 148 | nargs="?", 149 | const=True, 150 | type="bool", 151 | default=False, 152 | help="If True, print the values of all the tensors.") 153 | parser.add_argument( 154 | "--all_tensor_names", 155 | nargs="?", 156 | const=True, 157 | type="bool", 158 | default=False, 159 | help="If True, print the names of all the tensors.") 160 | parser.add_argument( 161 | "--printoptions", 162 | nargs="*", 163 | type=parse_numpy_printoption, 164 | help="Argument for numpy.set_printoptions(), in the form 'k=v'.") 165 | FLAGS, unparsed = parser.parse_known_args() 166 | app.run(main=main, argv=[sys.argv[0]] + unparsed) 167 | -------------------------------------------------------------------------------- /tfwss/tools/inspect_vgg_16_3chan.bat: -------------------------------------------------------------------------------- 1 | python -m inspect_checkpoint --file_name=../models/vgg_16_3chan.ckpt --tensor_name=vgg_16/conv1/conv1_1/weights 2 | -------------------------------------------------------------------------------- /tfwss/tools/inspect_vgg_16_4chan.bat: -------------------------------------------------------------------------------- 1 | python -m inspect_checkpoint --file_name=../models/vgg_16_4chan.ckpt --tensor_name=vgg_16/conv1/conv1_1/weights 2 | -------------------------------------------------------------------------------- /tfwss/tools/vgg_16-conv1-conv1_1-weights.py: -------------------------------------------------------------------------------- 1 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | # ============================================================================== 15 | """A simple script for inspect checkpoint files.""" 16 | from __future__ import absolute_import 17 | from __future__ import division 18 | from __future__ import print_function 19 | 20 | import argparse 21 | import sys 22 | 23 | import numpy as np 24 | 25 | from tensorflow.python import pywrap_tensorflow 26 | from tensorflow.python.platform import app 27 | from tensorflow.python.platform import flags 28 | 29 | FLAGS = None 30 | 31 | 32 | def print_tensors_in_checkpoint_file(file_name, tensor_name, all_tensors, 33 | all_tensor_names): 34 | """Prints tensors in a checkpoint file. 35 | 36 | If no `tensor_name` is provided, prints the tensor names and shapes 37 | in the checkpoint file. 38 | 39 | If `tensor_name` is provided, prints the content of the tensor. 40 | 41 | Args: 42 | file_name: Name of the checkpoint file. 43 | tensor_name: Name of the tensor in the checkpoint file to print. 44 | all_tensors: Boolean indicating whether to print all tensors. 
45 | all_tensor_names: Boolean indicating whether to print all tensor names. 46 | """ 47 | try: 48 | reader = pywrap_tensorflow.NewCheckpointReader(file_name) 49 | debug_string = reader.debug_string() 50 | print(debug_string.decode("utf-8")) 51 | if all_tensors or all_tensor_names: 52 | var_to_shape_map = reader.get_variable_to_shape_map() 53 | for key in sorted(var_to_shape_map): 54 | print("tensor_name: ", key) 55 | if all_tensors: 56 | print(reader.get_tensor(key)) 57 | elif not tensor_name: 58 | print(reader.debug_string().decode("utf-8")) 59 | else: 60 | print("tensor_name: ", tensor_name) 61 | if tensor_name == "vgg_16/conv1/conv1_1/weights": 62 | tensor = reader.get_tensor(tensor_name) 63 | feature_maps = tensor.shape[-1] 64 | for feature_map in range(feature_maps): 65 | print("{},{},{},{}".format(tensor.shape[0], tensor.shape[1], tensor.shape[2], feature_map)) 66 | print(tensor[:,:,:,feature_map]) 67 | print(tensor.shape) 68 | print(tensor) 69 | else: 70 | print(reader.get_tensor(tensor_name)) 71 | except Exception as e: # pylint: disable=broad-except 72 | print(str(e)) 73 | if "corrupted compressed block contents" in str(e): 74 | print("It's likely that your checkpoint file has been compressed " 75 | "with SNAPPY.") 76 | if ("Data loss" in str(e) and 77 | (any([e in file_name for e in [".index", ".meta", ".data"]]))): 78 | proposed_file = ".".join(file_name.split(".")[0:-1]) 79 | v2_file_error_template = """ 80 | It's likely that this is a V2 checkpoint and you need to provide the filename 81 | *prefix*. Try removing the '.' and extension. Try: 82 | inspect checkpoint --file_name = {}""" 83 | print(v2_file_error_template.format(proposed_file)) 84 | 85 | 86 | def parse_numpy_printoption(kv_str): 87 | """Sets a single numpy printoption from a string of the form 'x=y'. 88 | 89 | See documentation on numpy.set_printoptions() for details about what values 90 | x and y can take. x can be any option listed there other than 'formatter'. 91 | 92 | Args: 93 | kv_str: A string of the form 'x=y', such as 'threshold=100000' 94 | 95 | Raises: 96 | argparse.ArgumentTypeError: If the string couldn't be used to set any 97 | nump printoption. 98 | """ 99 | k_v_str = kv_str.split("=", 1) 100 | if len(k_v_str) != 2 or not k_v_str[0]: 101 | raise argparse.ArgumentTypeError("'%s' is not in the form k=v." % kv_str) 102 | k, v_str = k_v_str 103 | printoptions = np.get_printoptions() 104 | if k not in printoptions: 105 | raise argparse.ArgumentTypeError("'%s' is not a valid printoption." % k) 106 | v_type = type(printoptions[k]) 107 | if v_type is type(None): 108 | raise argparse.ArgumentTypeError( 109 | "Setting '%s' from the command line is not supported." 
% k) 110 | try: 111 | v = ( 112 | v_type(v_str) 113 | if v_type is not bool else flags.BooleanParser().parse(v_str)) 114 | except ValueError as e: 115 | raise argparse.ArgumentTypeError(e.message) 116 | np.set_printoptions(**{k: v}) 117 | 118 | 119 | def main(unused_argv): 120 | if not FLAGS.file_name: 121 | print("Usage: inspect_checkpoint --file_name=checkpoint_file_name " 122 | "[--tensor_name=tensor_to_print] " 123 | "[--all_tensors] " 124 | "[--all_tensor_names] " 125 | "[--printoptions]") 126 | sys.exit(1) 127 | else: 128 | print_tensors_in_checkpoint_file(FLAGS.file_name, FLAGS.tensor_name, 129 | FLAGS.all_tensors, FLAGS.all_tensor_names) 130 | 131 | 132 | if __name__ == "__main__": 133 | parser = argparse.ArgumentParser() 134 | parser.register("type", "bool", lambda v: v.lower() == "true") 135 | parser.add_argument( 136 | "--file_name", 137 | type=str, 138 | default="", 139 | help="Checkpoint filename. " 140 | "Note, if using Checkpoint V2 format, file_name is the " 141 | "shared prefix between all files in the checkpoint.") 142 | parser.add_argument( 143 | "--tensor_name", 144 | type=str, 145 | default="", 146 | help="Name of the tensor to inspect") 147 | parser.add_argument( 148 | "--all_tensors", 149 | nargs="?", 150 | const=True, 151 | type="bool", 152 | default=False, 153 | help="If True, print the values of all the tensors.") 154 | parser.add_argument( 155 | "--all_tensor_names", 156 | nargs="?", 157 | const=True, 158 | type="bool", 159 | default=False, 160 | help="If True, print the names of all the tensors.") 161 | parser.add_argument( 162 | "--printoptions", 163 | nargs="*", 164 | type=parse_numpy_printoption, 165 | help="Argument for numpy.set_printoptions(), in the form 'k=v'.") 166 | FLAGS, unparsed = parser.parse_known_args() 167 | app.run(main=main, argv=[sys.argv[0]] + unparsed) 168 | -------------------------------------------------------------------------------- /tfwss/tools/vgg_16_3chan-conv1_1-weights.txt: -------------------------------------------------------------------------------- 1 | tensor_name: vgg_16/conv1/conv1_1/weights 2 | 3,3,3,0 3 | [[[ 0.4800154 0.55037946 0.42947057] 4 | [ 0.4085474 0.44007453 0.373467 ] 5 | [-0.06514555 -0.08138704 -0.06136011]] 6 | 7 | [[ 0.31047726 0.34573907 0.27476987] 8 | [ 0.05020237 0.04063221 0.03868078] 9 | [-0.40338343 -0.45350131 -0.36722335]] 10 | 11 | [[-0.05087169 -0.05863491 -0.05746817] 12 | [-0.28522751 -0.33066967 -0.26224968] 13 | [-0.41851634 -0.4850302 -0.35009676]]] 14 | 3,3,3,1 15 | [[[-0.17269668 0.02087744 0.11727387] 16 | [-0.17037505 0.04734124 0.16206263] 17 | [-0.15435153 0.04185439 0.135694 ]] 18 | 19 | [[-0.18760149 0.03104937 0.14835016] 20 | [-0.17757156 0.06581022 0.20229845] 21 | [-0.17439997 0.0462575 0.16168842]] 22 | 23 | [[-0.16600266 0.03167877 0.12934428] 24 | [-0.16666673 0.05471011 0.17157242] 25 | [-0.15704881 0.04231958 0.13871045]]] 26 | 3,3,3,2 27 | [[[ 3.75577137e-02 9.88311544e-02 3.40129584e-02] 28 | [ -4.96297423e-03 5.13819456e-02 1.70863140e-03] 29 | [ -1.38038069e-01 -1.01763301e-01 -1.15694344e-01]] 30 | 31 | [[ 1.66595340e-01 2.40750551e-01 1.61559835e-01] 32 | [ 1.51188180e-01 2.20311403e-01 1.56414255e-01] 33 | [ -1.09849639e-01 -6.67438358e-02 -8.99365395e-02]] 34 | 35 | [[ 1.56279504e-02 7.59588331e-02 1.29030216e-02] 36 | [ -7.96697661e-03 4.86797579e-02 5.44555223e-05] 37 | [ -1.49133086e-01 -1.12076312e-01 -1.25339806e-01]]] 38 | 3,3,3,3 39 | [[[ 0.40877539 0.43703237 0.35422093] 40 | [ 0.31449109 0.3227509 0.27866557] 41 | [ 0.34451026 0.35101369 0.30172354]] 42 | 43 
| [[ 0.09916737 0.09673885 0.09780094] 44 | [-0.39442134 -0.42466447 -0.362367 ] 45 | [-0.06177092 -0.08895405 -0.04607875]] 46 | 47 | [[-0.02741977 -0.0481492 -0.00097071] 48 | [-0.46822482 -0.51541698 -0.40317345] 49 | [-0.19285059 -0.23510846 -0.1468185 ]]] 50 | 3,3,3,4 51 | [[[-0.0820337 -0.10985146 -0.0865837 ] 52 | [ 0.25096548 0.23999788 0.19084392] 53 | [ 0.23059797 0.23852432 0.16842183]] 54 | 55 | [[-0.18142731 -0.23399651 -0.18646947] 56 | [ 0.24379836 0.20586449 0.17608799] 57 | [ 0.228274 0.21035554 0.15426615]] 58 | 59 | [[-0.27790195 -0.33689022 -0.23963833] 60 | [-0.11767372 -0.17719544 -0.14159909] 61 | [-0.06813156 -0.10954819 -0.10266907]]] 62 | 3,3,3,5 63 | [[[-0.04828076 -0.19237703 0.22968295] 64 | [-0.08605088 -0.22363968 0.23940392] 65 | [-0.04345802 -0.18737321 0.23243521]] 66 | 67 | [[-0.07164131 -0.21034837 0.25140768] 68 | [-0.10327227 -0.23609307 0.26766661] 69 | [-0.08243635 -0.22165568 0.23725881]] 70 | 71 | [[-0.04586786 -0.19264044 0.22514924] 72 | [-0.05622768 -0.19742602 0.25982696] 73 | [-0.04606855 -0.19427177 0.22045547]]] 74 | 3,3,3,6 75 | [[[ 0.06561838 0.09789737 0.06683242] 76 | [-0.14696138 -0.12558413 -0.11207201] 77 | [-0.21190998 -0.20913845 -0.16651477]] 78 | 79 | [[ 0.155517 0.20074116 0.14721949] 80 | [-0.07578589 -0.04312737 -0.05267247] 81 | [-0.19151597 -0.17975736 -0.1556235 ]] 82 | 83 | [[ 0.18386579 0.22895685 0.15202443] 84 | [ 0.10438651 0.14149222 0.0982329 ] 85 | [ 0.00406519 0.02033227 0.00739815]]] 86 | 3,3,3,7 87 | [[[ 0.11799728 0.07935189 0.04133838] 88 | [ 0.25529897 0.21280921 0.11657716] 89 | [ 0.26339853 0.23472066 0.10312013]] 90 | 91 | [[-0.16804008 -0.23328967 -0.21259114] 92 | [-0.0351117 -0.10727409 -0.14905244] 93 | [ 0.18590738 0.13385555 0.03825026]] 94 | 95 | [[-0.22125229 -0.28242558 -0.2132176 ] 96 | [-0.12856954 -0.19886224 -0.18567187] 97 | [ 0.09334721 0.04201774 -0.00163494]]] 98 | 3,3,3,8 99 | [[[ 0.16427684 -0.27517739 0.13026784] 100 | [ 0.16928768 -0.3185305 0.09586009] 101 | [ 0.16928303 -0.27461806 0.12636507]] 102 | 103 | [[ 0.19656844 -0.29317632 0.1113601 ] 104 | [ 0.20457552 -0.33554825 0.0772165 ] 105 | [ 0.19321002 -0.30037677 0.0990585 ]] 106 | 107 | [[ 0.17174917 -0.28164104 0.10635132] 108 | [ 0.21990669 -0.28050637 0.1134313 ] 109 | [ 0.16903633 -0.28820974 0.09442389]]] 110 | 3,3,3,9 111 | [[[-0.15978716 -0.14232883 0.03291035] 112 | [-0.27969706 -0.25114581 -0.05266473] 113 | [-0.17982842 -0.16095386 0.01590413]] 114 | 115 | [[-0.05638141 -0.01564573 0.15295148] 116 | [-0.14474234 -0.09185937 0.09797249] 117 | [-0.0629037 -0.02074183 0.14915155]] 118 | 119 | [[ 0.06816525 0.10518216 0.23163976] 120 | [ 0.04729915 0.09847552 0.24260002] 121 | [ 0.06334335 0.10148139 0.22880137]]] 122 | 3,3,3,10 123 | [[[ 0.14613639 0.17323188 0.09615457] 124 | [ 0.56498653 0.60851592 0.50021327] 125 | [ 0.09936478 0.11340355 0.04598916]] 126 | 127 | [[ 0.00304555 0.01206517 0.00384975] 128 | [ 0.50553119 0.52829504 0.48986775] 129 | [-0.03820084 -0.04311888 -0.04158864]] 130 | 131 | [[-0.45314637 -0.48918396 -0.38299575] 132 | [-0.36620376 -0.40672201 -0.31344005] 133 | [-0.45361367 -0.5010919 -0.38670486]]] 134 | 3,3,3,11 135 | [[[-0.09036129 0.04103169 -0.09222532] 136 | [ 0.13328667 0.29481015 0.10438558] 137 | [ 0.08771688 0.24502251 0.04211995]] 138 | 139 | [[-0.18831857 -0.05235679 -0.15983562] 140 | [ 0.08857661 0.25445628 0.08756892] 141 | [ 0.06015956 0.22273313 0.03877559]] 142 | 143 | [[-0.27611634 -0.16469611 -0.22494633] 144 | [-0.17987624 -0.04491104 -0.1523234 ] 145 | [-0.12226781 0.01077384 
-0.11743781]]] 146 | 3,3,3,12 147 | [[[-0.21903016 0.29249442 -0.15587331] 148 | [-0.19743554 0.34778324 -0.14431474] 149 | [-0.20970553 0.30743936 -0.14780511]] 150 | 151 | [[-0.18859522 0.35785919 -0.14120808] 152 | [-0.14462055 0.43766999 -0.10614073] 153 | [-0.17370623 0.37939447 -0.12375734]] 154 | 155 | [[-0.21276034 0.30069417 -0.17139398] 156 | [-0.19308928 0.35418957 -0.15895553] 157 | [-0.20633635 0.31385601 -0.15827182]]] 158 | 3,3,3,13 159 | [[[-0.12617548 0.01629656 0.0522968 ] 160 | [-0.22639902 -0.07863537 -0.01378012] 161 | [-0.30508557 -0.18746006 -0.11160712]] 162 | 163 | [[ 0.00424111 0.17765731 0.20315832] 164 | [-0.06015022 0.11844507 0.17187729] 165 | [-0.23216102 -0.08801441 -0.01970161]] 166 | 167 | [[ 0.03813203 0.20032384 0.20184605] 168 | [ 0.00298583 0.1722267 0.19776742] 169 | [-0.13173215 0.00345506 0.04263865]]] 170 | 3,3,3,14 171 | [[[-0.09937377 -0.12607214 -0.08324417] 172 | [-0.41710737 -0.45902365 -0.38120896] 173 | [-0.06171193 -0.08580325 -0.05585349]] 174 | 175 | [[-0.03084002 -0.04947374 -0.0433545 ] 176 | [-0.40418109 -0.43691039 -0.39929131] 177 | [ 0.03518116 0.02217908 0.01176218]] 178 | 179 | [[ 0.29788256 0.30938667 0.22901718] 180 | [ 0.22192094 0.2245709 0.15875629] 181 | [ 0.3406893 0.35814291 0.26349422]]] 182 | 3,3,3,15 183 | [[[ 0.16049297 -0.15555367 0.15728769] 184 | [ 0.15084587 -0.19292881 0.13965815] 185 | [ 0.12625746 -0.18910025 0.1296532 ]] 186 | 187 | [[ 0.10185438 -0.2362152 0.10083992] 188 | [ 0.03751706 -0.32983929 0.02901204] 189 | [ 0.08432751 -0.25150189 0.08919528]] 190 | 191 | [[ 0.1039431 -0.20190401 0.12384758] 192 | [ 0.09285374 -0.23962668 0.10433289] 193 | [ 0.10560406 -0.19676568 0.1297702 ]]] 194 | 3,3,3,16 195 | [[[ 0.18870655 -0.05317234 0.0291196 ] 196 | [ 0.18121564 -0.10486918 -0.03295433] 197 | [ 0.17780127 -0.05472881 0.0360858 ]] 198 | 199 | [[ 0.23172282 -0.09415262 -0.04473094] 200 | [ 0.2256189 -0.14862581 -0.11210624] 201 | [ 0.22518183 -0.09155737 -0.03498694]] 202 | 203 | [[ 0.21362288 -0.09909017 -0.0429165 ] 204 | [ 0.23785017 -0.12288104 -0.08188602] 205 | [ 0.20846486 -0.09609735 -0.03524959]]] 206 | 3,3,3,17 207 | [[[ 0.09347539 0.18129717 0.05094835] 208 | [ 0.11005884 0.19839597 0.08374372] 209 | [-0.08960549 -0.02546789 -0.09326147]] 210 | 211 | [[ 0.0987699 0.19233669 0.0711324 ] 212 | [ 0.11281546 0.20635575 0.10391945] 213 | [-0.1609457 -0.09300824 -0.14529482]] 214 | 215 | [[-0.07285507 -0.0007545 -0.08312926] 216 | [-0.10681842 -0.03500161 -0.09716237] 217 | [-0.21489841 -0.16189417 -0.18348503]]] 218 | 3,3,3,18 219 | [[[ 0.07497339 -0.2077193 0.19055748] 220 | [ 0.04183868 -0.25816104 0.16667616] 221 | [ 0.09044836 -0.19532511 0.19906574]] 222 | 223 | [[ 0.07823766 -0.21835856 0.2015547 ] 224 | [-0.01976684 -0.33544189 0.11178232] 225 | [ 0.09559622 -0.20320655 0.21125664]] 226 | 227 | [[ 0.08573886 -0.19978707 0.19310385] 228 | [ 0.03318957 -0.2700071 0.14746384] 229 | [ 0.09177572 -0.19609001 0.19074047]]] 230 | 3,3,3,19 231 | [[[ 0.12632439 -0.13894619 -0.04372229] 232 | [ 0.127719 -0.19135612 -0.1041602 ] 233 | [ 0.1158876 -0.14542769 -0.04712771]] 234 | 235 | [[ 0.19359826 -0.14635436 -0.08472029] 236 | [ 0.190134 -0.20704944 -0.1557896 ] 237 | [ 0.18273103 -0.15368845 -0.09054703]] 238 | 239 | [[ 0.1817631 -0.12267412 -0.06277645] 240 | [ 0.21130536 -0.14748918 -0.10147458] 241 | [ 0.17022918 -0.13097374 -0.07124022]]] 242 | 3,3,3,20 243 | [[[ 0.23849919 0.25539106 0.36180311] 244 | [ 0.14157274 0.15677808 0.30849633] 245 | [-0.00566976 -0.02011288 0.14056203]] 246 | 247 | [[ 
0.10057033 0.12054416 0.27560392] 248 | [-0.1311032 -0.11795881 0.09054149] 249 | [-0.27792889 -0.29411548 -0.07637139]] 250 | 251 | [[-0.02568433 -0.03019952 0.13497761] 252 | [-0.23363787 -0.24361543 -0.02560142] 253 | [-0.32948837 -0.36464703 -0.14163131]]] 254 | 3,3,3,21 255 | [[[ 0.35597178 0.3563149 0.23902874] 256 | [ 0.39788729 0.3799369 0.30987528] 257 | [ 0.08646969 0.0571688 0.06473621]] 258 | 259 | [[ 0.18229982 0.14790893 0.09006898] 260 | [-0.02528543 -0.08454139 -0.07610167] 261 | [-0.27923226 -0.34281766 -0.25570887]] 262 | 263 | [[ 0.03504165 -0.01092279 -0.00071148] 264 | [-0.21692428 -0.28586996 -0.20556365] 265 | [-0.33109641 -0.39586702 -0.25005817]]] 266 | 3,3,3,22 267 | [[[-0.12046139 0.00118498 0.16667192] 268 | [-0.17090142 -0.02594189 0.17097801] 269 | [-0.12404908 0.00151047 0.1687201 ]] 270 | 271 | [[-0.16372119 -0.01909573 0.17909338] 272 | [-0.21932983 -0.05162582 0.17934875] 273 | [-0.17791158 -0.03037101 0.17024809]] 274 | 275 | [[-0.13394448 -0.00957824 0.16053411] 276 | [-0.15728132 -0.01042137 0.1914428 ] 277 | [-0.13078149 -0.00378751 0.16880605]]] 278 | 3,3,3,23 279 | [[[ -3.08338255e-01 -4.87027854e-01 -1.80467457e-01] 280 | [ -2.37905011e-01 -4.09513474e-01 -1.17244996e-01] 281 | [ 1.08455405e-01 -4.31569032e-02 1.81487694e-01]] 282 | 283 | [[ -2.03620762e-01 -3.75806600e-01 -7.99025521e-02] 284 | [ -1.17175855e-01 -2.76997328e-01 -3.49949521e-04] 285 | [ 2.66444415e-01 1.30394563e-01 3.40175331e-01]] 286 | 287 | [[ 1.07057400e-01 -4.81424779e-02 1.84494838e-01] 288 | [ 2.03627706e-01 6.29161298e-02 2.77538925e-01] 289 | [ 3.68812591e-01 2.43745610e-01 4.04833347e-01]]] 290 | 3,3,3,24 291 | [[[-0.02083396 0.11533792 0.04404821] 292 | [-0.07155016 0.06701437 0.00966333] 293 | [-0.12228294 -0.00246835 -0.04592194]] 294 | 295 | [[-0.00051869 0.14725193 0.07451686] 296 | [-0.06439608 0.08504013 0.02745619] 297 | [-0.14637132 -0.01696968 -0.05888153]] 298 | 299 | [[-0.02194524 0.11463996 0.04025492] 300 | [-0.06314134 0.07585644 0.01520346] 301 | [-0.11968347 0.00035171 -0.04599162]]] 302 | 3,3,3,25 303 | [[[-0.15872632 0.1163794 0.09592837] 304 | [-0.16319121 0.13944666 0.12468038] 305 | [-0.15240695 0.12636176 0.10174061]] 306 | 307 | [[-0.16730706 0.13736778 0.12232669] 308 | [-0.17251123 0.16125675 0.15278567] 309 | [-0.16109975 0.1473745 0.12963735]] 310 | 311 | [[-0.17379802 0.10524599 0.07937991] 312 | [-0.17338306 0.13363452 0.11532211] 313 | [-0.17065473 0.11229908 0.08544833]]] 314 | 3,3,3,26 315 | [[[-0.07149437 -0.07708844 -0.04381131] 316 | [-0.46935585 -0.495031 -0.39102998] 317 | [-0.38805357 -0.4157187 -0.28564966]] 318 | 319 | [[ 0.22795013 0.24125536 0.18568173] 320 | [-0.10067864 -0.10921352 -0.1042131 ] 321 | [-0.11535279 -0.12957004 -0.08812878]] 322 | 323 | [[ 0.37457848 0.40863019 0.27207401] 324 | [ 0.32480803 0.34552643 0.249017 ] 325 | [ 0.22145008 0.23217745 0.17421368]]] 326 | 3,3,3,27 327 | [[[-0.05043453 0.10368702 -0.18848351] 328 | [ 0.02727965 0.17416753 -0.17792737] 329 | [-0.00554895 0.13986839 -0.18292557]] 330 | 331 | [[-0.00488439 0.14413531 -0.18481646] 332 | [ 0.12199405 0.26450121 -0.12894408] 333 | [ 0.05089047 0.19054219 -0.17017853]] 334 | 335 | [[-0.06530017 0.08363787 -0.19339779] 336 | [ 0.00711 0.14819314 -0.18588647] 337 | [-0.02640966 0.1133359 -0.19133255]]] 338 | 3,3,3,28 339 | [[[-0.03383944 -0.07185701 -0.04399502] 340 | [-0.00187677 -0.02275022 -0.03687759] 341 | [ 0.36880159 0.38729131 0.28088847]] 342 | 343 | [[-0.37024724 -0.42247528 -0.34596241] 344 | [-0.43076617 -0.46505883 -0.43290257] 345 | [ 
0.27394396 0.28503442 0.20495637]] 346 | 347 | [[-0.06858932 -0.10352251 -0.06763568] 348 | [-0.0553761 -0.07197934 -0.07852355] 349 | [ 0.31682923 0.33926243 0.24074145]]] 350 | 3,3,3,29 351 | [[[ 0.21956988 -0.00388477 -0.18985446] 352 | [ 0.27089405 0.02269586 -0.17633693] 353 | [ 0.14517576 -0.02902579 -0.16050746]] 354 | 355 | [[ 0.22818473 -0.01421229 -0.19384338] 356 | [ 0.2789039 0.01214919 -0.17836204] 357 | [ 0.14357196 -0.04767225 -0.17088714]] 358 | 359 | [[ 0.10133076 -0.06590685 -0.15913573] 360 | [ 0.10542412 -0.08424108 -0.18455751] 361 | [ 0.03615477 -0.07995966 -0.12442967]]] 362 | 3,3,3,30 363 | [[[-0.29206657 -0.26350829 -0.23386982] 364 | [-0.21533535 -0.16567972 -0.16781022] 365 | [-0.12312724 -0.07209669 -0.09850768]] 366 | 367 | [[-0.15050288 -0.09885683 -0.10912691] 368 | [ 0.14034545 0.2213829 0.16967893] 369 | [ 0.12203539 0.20242453 0.13140653]] 370 | 371 | [[-0.05022326 0.00267146 -0.04033394] 372 | [ 0.17961133 0.2615135 0.1801023 ] 373 | [ 0.13728672 0.21539773 0.1198751 ]]] 374 | 3,3,3,31 375 | [[[ 0.14893703 0.12447353 0.12836744] 376 | [ 0.13036445 0.09525505 0.08564197] 377 | [ 0.13415098 0.11141251 0.12136985]] 378 | 379 | [[ 0.14171803 0.11669213 0.11066163] 380 | [ 0.04218798 0.00535035 -0.0145823 ] 381 | [ 0.13311349 0.11018976 0.11101208]] 382 | 383 | [[ 0.15279444 0.14212981 0.15973611] 384 | [ 0.11152215 0.09029514 0.09495541] 385 | [ 0.13753545 0.12912421 0.15397617]]] 386 | 3,3,3,32 387 | [[[ 0.01326195 -0.0746256 -0.11059975] 388 | [ 0.03292726 -0.07831065 -0.1236757 ] 389 | [ 0.01893479 -0.07127871 -0.10691541]] 390 | 391 | [[ 0.0595953 -0.04749715 -0.0924182 ] 392 | [ 0.1232256 -0.00727669 -0.06347281] 393 | [ 0.04791402 -0.0613196 -0.10601826]] 394 | 395 | [[ 0.01089093 -0.07251924 -0.10283364] 396 | [ 0.0448981 -0.06059125 -0.10179035] 397 | [ 0.00816117 -0.07648329 -0.10756919]]] 398 | 3,3,3,33 399 | [[[-0.00503388 -0.15612163 0.00571336] 400 | [-0.03748786 -0.19752818 -0.0181762 ] 401 | [-0.14911501 -0.31025812 -0.12190244]] 402 | 403 | [[ 0.18421321 0.04034597 0.19312122] 404 | [ 0.26538953 0.11548385 0.28351158] 405 | [-0.04233449 -0.2031974 -0.02150796]] 406 | 407 | [[ 0.11910769 -0.02443784 0.11672159] 408 | [ 0.19854152 0.051978 0.20606373] 409 | [-0.01941619 -0.17438021 -0.00898326]]] 410 | 3,3,3,34 411 | [[[-0.27007344 -0.29000348 -0.24479656] 412 | [-0.30398133 -0.32222342 -0.26261711] 413 | [-0.39854556 -0.42748395 -0.33642569]] 414 | 415 | [[ 0.11472005 0.12724607 0.09554115] 416 | [ 0.37811118 0.40158507 0.37399763] 417 | [-0.10353793 -0.10683778 -0.08733696]] 418 | 419 | [[ 0.14936899 0.16730642 0.09352687] 420 | [ 0.40726802 0.43927687 0.36522156] 421 | [ 0.02333043 0.02671521 -0.00308948]]] 422 | 3,3,3,35 423 | [[[-0.40322903 -0.43226039 -0.30530283] 424 | [-0.19461407 -0.20673625 -0.16753504] 425 | [ 0.06705149 0.07580717 0.03397209]] 426 | 427 | [[-0.3044951 -0.32399613 -0.24142911] 428 | [ 0.15433477 0.16329655 0.14228928] 429 | [ 0.40919971 0.44448513 0.34415394]] 430 | 431 | [[-0.13797051 -0.14450736 -0.10461871] 432 | [ 0.13048458 0.14883979 0.09734739] 433 | [ 0.26570961 0.30314723 0.18610834]]] 434 | 3,3,3,36 435 | [[[-0.17164022 0.05063162 0.060938 ] 436 | [-0.20385186 0.04089997 0.06914271] 437 | [-0.17306712 0.05201071 0.06014954]] 438 | 439 | [[-0.203079 0.04168436 0.07318608] 440 | [-0.22849302 0.03889391 0.08916554] 441 | [-0.20274836 0.04436671 0.07417805]] 442 | 443 | [[-0.17413661 0.0492637 0.06187231] 444 | [-0.19977772 0.04500385 0.07536139] 445 | [-0.17028588 0.05473826 0.06654227]]] 446 | 3,3,3,37 447 | 
[[[-0.36889768 -0.43103257 -0.32816252] 448 | [-0.28371742 -0.35003701 -0.27985194] 449 | [-0.297187 -0.36224136 -0.29610091]] 450 | 451 | [[-0.08745761 -0.12045752 -0.09964942] 452 | [ 0.38095042 0.36192468 0.3341161 ] 453 | [ 0.05269272 0.02334622 0.00453637]] 454 | 455 | [[ 0.07536738 0.07618032 0.04458109] 456 | [ 0.5184412 0.53946865 0.45954201] 457 | [ 0.19605535 0.20172837 0.13463163]]] 458 | 3,3,3,38 459 | [[[ 0.13081503 0.29444188 0.1711953 ] 460 | [ 0.15459608 0.32921073 0.21669765] 461 | [-0.00133177 0.15041706 0.0564939 ]] 462 | 463 | [[-0.06252337 0.10120346 0.01644653] 464 | [-0.09860469 0.07380549 0.00429485] 465 | [-0.18317151 -0.03111723 -0.08407679]] 466 | 467 | [[-0.17010963 -0.03734917 -0.08887924] 468 | [-0.23584525 -0.095698 -0.12936987] 469 | [-0.23454525 -0.11113026 -0.13394564]]] 470 | 3,3,3,39 471 | [[[-0.0358882 -0.10993682 0.10265407] 472 | [-0.11990397 -0.19758695 0.04595486] 473 | [-0.04643798 -0.12538068 0.09067708]] 474 | 475 | [[-0.04197054 -0.11337133 0.12120296] 476 | [-0.17326267 -0.24885692 0.0188637 ] 477 | [-0.05966521 -0.13633575 0.10113112]] 478 | 479 | [[-0.03258639 -0.1041396 0.10223453] 480 | [-0.11105626 -0.18644011 0.04860276] 481 | [-0.05120771 -0.12792586 0.08034358]]] 482 | 3,3,3,40 483 | [[[-0.10626303 0.06623434 -0.11063665] 484 | [-0.11452698 0.06585026 -0.11243556] 485 | [-0.16372663 -0.0040107 -0.15568645]] 486 | 487 | [[-0.00123176 0.19449428 -0.01835439] 488 | [ 0.03562608 0.24023479 0.02441533] 489 | [-0.10281684 0.07850371 -0.10447415]] 490 | 491 | [[-0.01397882 0.17332828 -0.04090111] 492 | [ 0.00525392 0.20117153 -0.01628016] 493 | [-0.09490839 0.07894424 -0.10676192]]] 494 | 3,3,3,41 495 | [[[ 0.29136992 0.18029599 0.33012986] 496 | [ 0.31167251 0.20340009 0.35892653] 497 | [ 0.37733772 0.28342822 0.39616027]] 498 | 499 | [[-0.0971558 -0.23187284 -0.00446805] 500 | [-0.27026483 -0.40738681 -0.16933602] 501 | [ 0.15261276 0.03896909 0.21756372]] 502 | 503 | [[-0.22289325 -0.36571217 -0.11403126] 504 | [-0.40462691 -0.55175447 -0.28587604] 505 | [ 0.00768055 -0.11616419 0.08607976]]] 506 | 3,3,3,42 507 | [[[-0.06310713 -0.07049958 -0.05869728] 508 | [ 0.15044418 0.1689757 0.13775158] 509 | [ 0.25260803 0.28537536 0.22143658]] 510 | 511 | [[-0.2185995 -0.23056777 -0.19022377] 512 | [ 0.01303413 0.02520111 0.01994503] 513 | [ 0.21116725 0.24065673 0.19511278]] 514 | 515 | [[-0.22979894 -0.24971613 -0.19143637] 516 | [-0.14925547 -0.15264013 -0.12912531] 517 | [ 0.00594821 0.01558872 0.00047423]]] 518 | 3,3,3,43 519 | [[[ 0.28635997 0.30771789 0.18478717] 520 | [ 0.27824068 0.29837188 0.18296133] 521 | [ 0.30842188 0.34127289 0.21322653]] 522 | 523 | [[-0.02256021 -0.03182225 -0.06467631] 524 | [-0.30704486 -0.32256067 -0.33172512] 525 | [ 0.02450761 0.026387 -0.01420332]] 526 | 527 | [[-0.11940004 -0.14375187 -0.11015832] 528 | [-0.41201642 -0.44364116 -0.37998617] 529 | [-0.10298665 -0.11676502 -0.08947985]]] 530 | 3,3,3,44 531 | [[[-0.04254856 0.09669701 0.01050853] 532 | [-0.13263085 0.00467647 -0.05518027] 533 | [-0.25632083 -0.15041117 -0.1762652 ]] 534 | 535 | [[ 0.11691329 0.2796587 0.17095707] 536 | [ 0.06137704 0.22147244 0.13875549] 537 | [-0.21487495 -0.09147725 -0.13328704]] 538 | 539 | [[ 0.03554726 0.18363142 0.06857023] 540 | [ 0.00140742 0.14939182 0.05752702] 541 | [-0.17419812 -0.05940686 -0.11471854]]] 542 | 3,3,3,45 543 | [[[ 0.10345241 0.09883999 0.10668027] 544 | [ 0.33046445 0.35624322 0.2862331 ] 545 | [ 0.51294214 0.57510966 0.42510888]] 546 | 547 | [[-0.44129026 -0.48058259 -0.37709406] 548 | [-0.25068027 
-0.26492789 -0.24571872] 549 | [ 0.36411214 0.3975631 0.30716747]] 550 | 551 | [[-0.43794334 -0.4860552 -0.34069246] 552 | [-0.33834398 -0.36805609 -0.29558903] 553 | [ 0.15643641 0.16743517 0.13225588]]] 554 | 3,3,3,46 555 | [[[-0.02848322 -0.06726889 -0.13086924] 556 | [-0.07600795 -0.13530047 -0.1867422 ] 557 | [-0.20166892 -0.25851721 -0.25350815]] 558 | 559 | [[ 0.22551435 0.18651542 0.05354407] 560 | [ 0.30200261 0.24376494 0.11727266] 561 | [-0.06013079 -0.12498847 -0.17994316]] 562 | 563 | [[ 0.14379103 0.11588541 -0.01713191] 564 | [ 0.21982133 0.17528656 0.04505523] 565 | [-0.03164292 -0.08063878 -0.14532445]]] 566 | 3,3,3,47 567 | [[[-0.45139903 -0.4840571 -0.36283946] 568 | [-0.44357869 -0.46559554 -0.38248262] 569 | [-0.08717645 -0.09077421 -0.07348043]] 570 | 571 | [[-0.22162372 -0.23572764 -0.18524732] 572 | [-0.01118407 -0.00494583 -0.0081452 ] 573 | [ 0.24371201 0.26982284 0.21025793]] 574 | 575 | [[ 0.14303869 0.15393226 0.11510787] 576 | [ 0.39877 0.43593571 0.34456447] 577 | [ 0.40707463 0.45485801 0.32593292]]] 578 | 3,3,3,48 579 | [[[ 0.19618554 0.16756544 0.06440807] 580 | [-0.03699766 -0.09247749 -0.10362787] 581 | [-0.30172807 -0.3579697 -0.2615459 ]] 582 | 583 | [[ 0.34061837 0.30806005 0.16197859] 584 | [ 0.08399384 0.02136817 -0.0299311 ] 585 | [-0.33069053 -0.39859223 -0.33031601]] 586 | 587 | [[ 0.25599322 0.23466325 0.07784141] 588 | [ 0.1803335 0.13664511 0.05642072] 589 | [-0.08776058 -0.13477233 -0.11027577]]] 590 | 3,3,3,49 591 | [[[-0.08979315 -0.05524461 0.09335972] 592 | [ 0.05867117 0.13476901 0.26183617] 593 | [ 0.25323483 0.33324924 0.39673269]] 594 | 595 | [[-0.40399408 -0.36483973 -0.16163439] 596 | [-0.23614286 -0.15524386 0.02310665] 597 | [ 0.14306851 0.23351616 0.33817336]] 598 | 599 | [[-0.35029486 -0.33124816 -0.13694081] 600 | [-0.21698503 -0.1596313 0.014815 ] 601 | [ 0.06087554 0.12487411 0.22911766]]] 602 | 3,3,3,50 603 | [[[-0.15111087 0.12566034 0.00401619] 604 | [-0.16209759 0.13452438 0.01031087] 605 | [-0.14677018 0.13262685 0.00411675]] 606 | 607 | [[-0.15829873 0.13894165 0.0206385 ] 608 | [-0.15486029 0.16263415 0.04231073] 609 | [-0.15428965 0.14550267 0.0215735 ]] 610 | 611 | [[-0.15430553 0.12466028 0.00498014] 612 | [-0.16138081 0.13674538 0.01619651] 613 | [-0.14715308 0.13372743 0.01049088]]] 614 | 3,3,3,51 615 | [[[ 0.46370047 0.50040096 0.37814805] 616 | [ 0.16602094 0.15793252 0.14995824] 617 | [-0.14674309 -0.18845403 -0.09938256]] 618 | 619 | [[ 0.39816543 0.42490977 0.33295661] 620 | [-0.24064894 -0.26625356 -0.22060387] 621 | [-0.6122331 -0.67140007 -0.51870257]] 622 | 623 | [[ 0.32768586 0.35854527 0.27094221] 624 | [-0.03203399 -0.04496703 -0.01171813] 625 | [-0.30880859 -0.35057274 -0.22309828]]] 626 | 3,3,3,52 627 | [[[ 0.12699455 -0.07442727 0.05268804] 628 | [ 0.16893545 -0.0775722 0.02540543] 629 | [ 0.23803551 0.01520116 0.0882033 ]] 630 | 631 | [[-0.00122763 -0.23309128 -0.08019977] 632 | [ 0.00069239 -0.27801824 -0.15045896] 633 | [ 0.20508599 -0.04599127 0.04148736]] 634 | 635 | [[ 0.03577188 -0.15689564 0.00196651] 636 | [ 0.05411232 -0.18269643 -0.0474086 ] 637 | [ 0.16965567 -0.04256564 0.05644935]]] 638 | 3,3,3,53 639 | [[[-0.27397472 -0.3387222 -0.27078936] 640 | [-0.26331019 -0.32948962 -0.31478256] 641 | [ 0.06894964 0.0388309 -0.00562813]] 642 | 643 | [[-0.07121032 -0.1435205 -0.15887783] 644 | [ 0.01288261 -0.05691215 -0.13578036] 645 | [ 0.3038432 0.27266422 0.14166872]] 646 | 647 | [[ 0.12269582 0.07197306 -0.00540928] 648 | [ 0.21773337 0.17007834 0.03158833] 649 | [ 0.29993615 0.28292358 
0.1114843 ]]] 650 | 3,3,3,54 651 | [[[ 0.04517473 -0.02613011 0.11060741] 652 | [-0.24549749 -0.33279872 -0.14417367] 653 | [-0.27318993 -0.3677302 -0.16846347]] 654 | 655 | [[ 0.19594613 0.13500369 0.25348401] 656 | [-0.12340893 -0.20328592 -0.03425237] 657 | [-0.17490524 -0.26486534 -0.0812318 ]] 658 | 659 | [[ 0.30062667 0.24752659 0.32685903] 660 | [ 0.18325534 0.11749914 0.23576194] 661 | [ 0.09996969 0.02211057 0.15412465]]] 662 | 3,3,3,55 663 | [[[-0.23849334 -0.07352738 -0.11819526] 664 | [-0.17497575 0.02083533 -0.04717913] 665 | [-0.04208441 0.15315224 0.05607524]] 666 | 667 | [[-0.22775872 -0.04329246 -0.0987839 ] 668 | [-0.12246658 0.09528973 0.01390128] 669 | [ 0.03500566 0.25358799 0.1431932 ]] 670 | 671 | [[-0.13378069 0.04178403 -0.03451736] 672 | [-0.05737292 0.15019932 0.0509751 ] 673 | [ 0.04029984 0.24646994 0.12314192]]] 674 | 3,3,3,56 675 | [[[ 0.00912306 -0.11533686 0.04298935] 676 | [ 0.0872103 -0.04046623 0.09854051] 677 | [ 0.23265514 0.12966995 0.22941053]] 678 | 679 | [[-0.1649529 -0.30907845 -0.12695695] 680 | [-0.11551519 -0.26433125 -0.10364927] 681 | [ 0.19174919 0.07230287 0.18545516]] 682 | 683 | [[-0.05257837 -0.18305206 -0.01578351] 684 | [-0.00867471 -0.14473282 0.00424274] 685 | [ 0.1629665 0.05139726 0.15893888]]] 686 | 3,3,3,57 687 | [[[ 0.29967809 0.17063108 0.35323623] 688 | [ 0.02390032 -0.13316667 0.11756504] 689 | [-0.08351236 -0.24944878 0.02054979]] 690 | 691 | [[ 0.23045376 0.09270763 0.29886681] 692 | [-0.24196999 -0.41218945 -0.1286826 ] 693 | [-0.31117243 -0.48976371 -0.18665253]] 694 | 695 | [[ 0.27573535 0.14676033 0.3250291 ] 696 | [ 0.04112757 -0.11370508 0.12952094] 697 | [-0.04330288 -0.20563774 0.05386377]]] 698 | 3,3,3,58 699 | [[[-0.31998897 -0.36752653 -0.28468162] 700 | [-0.02897463 -0.04093431 -0.04440473] 701 | [ 0.15085261 0.1735898 0.11191057]] 702 | 703 | [[-0.31702453 -0.36213866 -0.28596532] 704 | [ 0.29317614 0.2975775 0.27352276] 705 | [ 0.48624769 0.5295403 0.44455817]] 706 | 707 | [[-0.35386804 -0.39189911 -0.30099481] 708 | [-0.02387705 -0.02742473 -0.02337684] 709 | [ 0.14334013 0.17104657 0.11767759]]] 710 | 3,3,3,59 711 | [[[ 0.08725975 0.13339569 -0.20289008] 712 | [ 0.12510653 0.1433036 -0.24480745] 713 | [ 0.08552386 0.13384549 -0.19338934]] 714 | 715 | [[ 0.14986272 0.16017723 -0.24688537] 716 | [ 0.19491087 0.17601621 -0.28674325] 717 | [ 0.14985105 0.16200341 -0.23610432]] 718 | 719 | [[ 0.11217672 0.14201447 -0.21393025] 720 | [ 0.15377352 0.15403908 -0.25577372] 721 | [ 0.11028167 0.14183886 -0.20606868]]] 722 | 3,3,3,60 723 | [[[-0.33848372 -0.36689544 -0.284944 ] 724 | [-0.20189942 -0.20320873 -0.18103454] 725 | [ 0.0333641 0.05208361 0.01963145]] 726 | 727 | [[-0.26034912 -0.27594826 -0.22900179] 728 | [ 0.14416429 0.16854356 0.14214948] 729 | [ 0.3995232 0.44915229 0.36932898]] 730 | 731 | [[-0.17121053 -0.18206213 -0.16244631] 732 | [ 0.10107949 0.12521495 0.07966521] 733 | [ 0.2568973 0.29974779 0.20952551]]] 734 | 3,3,3,61 735 | [[[-0.12713556 -0.08482055 -0.13224158] 736 | [-0.12236057 -0.09884989 -0.14820713] 737 | [-0.12940456 -0.09431243 -0.14015837]] 738 | 739 | [[-0.09883884 -0.06934191 -0.11431616] 740 | [-0.01406498 -0.0037898 -0.05200896] 741 | [-0.12568861 -0.10350242 -0.14594546]] 742 | 743 | [[-0.14974272 -0.10501725 -0.14144939] 744 | [-0.10978009 -0.08290239 -0.12272868] 745 | [-0.1568398 -0.11830606 -0.15309229]]] 746 | 3,3,3,62 747 | [[[-0.05029916 -0.05113892 -0.05334752] 748 | [-0.2764504 -0.29619575 -0.23530066] 749 | [-0.4622438 -0.50566256 -0.3776668 ]] 750 | 751 | [[ 0.40405855 
0.43711686 0.36549452] 752 | [ 0.25930083 0.26941243 0.25714901] 753 | [-0.31402633 -0.34579235 -0.27182356]] 754 | 755 | [[ 0.30680192 0.3395502 0.24109964] 756 | [ 0.27920374 0.2954661 0.24468745] 757 | [-0.1428743 -0.16705802 -0.13991733]]] 758 | 3,3,3,63 759 | [[[ 0.03489657 0.03749434 0.00757738] 760 | [-0.03907965 -0.07043571 -0.06303568] 761 | [-0.32398528 -0.38369432 -0.30050987]] 762 | 763 | [[ 0.3925612 0.41317144 0.339939 ] 764 | [ 0.42376447 0.4095059 0.37168267] 765 | [-0.23283976 -0.29248625 -0.23971818]] 766 | 767 | [[ 0.08827017 0.09863744 0.04556021] 768 | [ 0.09465253 0.074447 0.05329137] 769 | [-0.26969463 -0.32224196 -0.26507524]]] 770 | (3, 3, 3, 64) 771 | [[[[ 4.80015397e-01 -1.72696680e-01 3.75577137e-02 ..., 772 | -1.27135560e-01 -5.02991639e-02 3.48965675e-02] 773 | [ 5.50379455e-01 2.08774377e-02 9.88311544e-02 ..., 774 | -8.48205537e-02 -5.11389151e-02 3.74943428e-02] 775 | [ 4.29470569e-01 1.17273867e-01 3.40129584e-02 ..., 776 | -1.32241577e-01 -5.33475243e-02 7.57738389e-03]] 777 | 778 | [[ 4.08547401e-01 -1.70375049e-01 -4.96297423e-03 ..., 779 | -1.22360572e-01 -2.76450396e-01 -3.90796512e-02] 780 | [ 4.40074533e-01 4.73412387e-02 5.13819456e-02 ..., 781 | -9.88498852e-02 -2.96195745e-01 -7.04357103e-02] 782 | [ 3.73466998e-01 1.62062630e-01 1.70863140e-03 ..., 783 | -1.48207128e-01 -2.35300660e-01 -6.30356818e-02]] 784 | 785 | [[ -6.51455522e-02 -1.54351532e-01 -1.38038069e-01 ..., 786 | -1.29404560e-01 -4.62243795e-01 -3.23985279e-01] 787 | [ -8.13870355e-02 4.18543853e-02 -1.01763301e-01 ..., 788 | -9.43124294e-02 -5.05662560e-01 -3.83694321e-01] 789 | [ -6.13601133e-02 1.35693997e-01 -1.15694344e-01 ..., 790 | -1.40158370e-01 -3.77666801e-01 -3.00509870e-01]]] 791 | 792 | 793 | [[[ 3.10477257e-01 -1.87601492e-01 1.66595340e-01 ..., 794 | -9.88388434e-02 4.04058546e-01 3.92561197e-01] 795 | [ 3.45739067e-01 3.10493708e-02 2.40750551e-01 ..., 796 | -6.93419054e-02 4.37116861e-01 4.13171440e-01] 797 | [ 2.74769872e-01 1.48350164e-01 1.61559835e-01 ..., 798 | -1.14316158e-01 3.65494519e-01 3.39938998e-01]] 799 | 800 | [[ 5.02023660e-02 -1.77571565e-01 1.51188180e-01 ..., 801 | -1.40649760e-02 2.59300828e-01 4.23764467e-01] 802 | [ 4.06322069e-02 6.58102185e-02 2.20311403e-01 ..., 803 | -3.78979952e-03 2.69412428e-01 4.09505904e-01] 804 | [ 3.86807770e-02 2.02298447e-01 1.56414255e-01 ..., 805 | -5.20089604e-02 2.57149011e-01 3.71682674e-01]] 806 | 807 | [[ -4.03383434e-01 -1.74399972e-01 -1.09849639e-01 ..., 808 | -1.25688612e-01 -3.14026326e-01 -2.32839763e-01] 809 | [ -4.53501314e-01 4.62574959e-02 -6.67438358e-02 ..., 810 | -1.03502415e-01 -3.45792353e-01 -2.92486250e-01] 811 | [ -3.67223352e-01 1.61688417e-01 -8.99365395e-02 ..., 812 | -1.45945460e-01 -2.71823555e-01 -2.39718184e-01]]] 813 | 814 | 815 | [[[ -5.08716851e-02 -1.66002661e-01 1.56279504e-02 ..., 816 | -1.49742723e-01 3.06801915e-01 8.82701725e-02] 817 | [ -5.86349145e-02 3.16787697e-02 7.59588331e-02 ..., 818 | -1.05017252e-01 3.39550197e-01 9.86374393e-02] 819 | [ -5.74681684e-02 1.29344285e-01 1.29030216e-02 ..., 820 | -1.41449392e-01 2.41099641e-01 4.55602147e-02]] 821 | 822 | [[ -2.85227507e-01 -1.66666731e-01 -7.96697661e-03 ..., 823 | -1.09780088e-01 2.79203743e-01 9.46525261e-02] 824 | [ -3.30669671e-01 5.47101051e-02 4.86797579e-02 ..., 825 | -8.29023942e-02 2.95466095e-01 7.44469985e-02] 826 | [ -2.62249678e-01 1.71572417e-01 5.44555223e-05 ..., 827 | -1.22728683e-01 2.44687453e-01 5.32913655e-02]] 828 | 829 | [[ -4.18516338e-01 -1.57048807e-01 -1.49133086e-01 ..., 830 | 
-1.56839803e-01 -1.42874300e-01 -2.69694626e-01] 831 | [ -4.85030204e-01 4.23195846e-02 -1.12076312e-01 ..., 832 | -1.18306056e-01 -1.67058021e-01 -3.22241962e-01] 833 | [ -3.50096762e-01 1.38710454e-01 -1.25339806e-01 ..., 834 | -1.53092295e-01 -1.39917329e-01 -2.65075237e-01]]]] 835 | -------------------------------------------------------------------------------- /tfwss/tools/vgg_16_4chan-conv1_1-weights.txt: -------------------------------------------------------------------------------- 1 | tensor_name: vgg_16/conv1/conv1_1/weights 2 | 3,3,4,0 3 | [[[ 4.80015397e-01 5.50379455e-01 4.29470569e-01 1.13388560e-04] 4 | [ 4.08547401e-01 4.40074533e-01 3.73466998e-01 7.61439209e-04] 5 | [ -6.51455522e-02 -8.13870355e-02 -6.13601133e-02 4.74345696e-04]] 6 | 7 | [[ 3.10477257e-01 3.45739067e-01 2.74769872e-01 4.11637186e-04] 8 | [ 5.02023660e-02 4.06322069e-02 3.86807770e-02 1.38304755e-03] 9 | [ -4.03383434e-01 -4.53501314e-01 -3.67223352e-01 1.28411280e-03]] 10 | 11 | [[ -5.08716851e-02 -5.86349145e-02 -5.74681684e-02 -6.34787197e-04] 12 | [ -2.85227507e-01 -3.30669671e-01 -2.62249678e-01 -1.77454809e-03] 13 | [ -4.18516338e-01 -4.85030204e-01 -3.50096762e-01 2.10441509e-03]]] 14 | 3,3,4,1 15 | [[[-0.17269668 0.02087744 0.11727387 0.00146379] 16 | [-0.17037505 0.04734124 0.16206263 0.00219835] 17 | [-0.15435153 0.04185439 0.135694 0.00067962]] 18 | 19 | [[-0.18760149 0.03104937 0.14835016 -0.00090772] 20 | [-0.17757156 0.06581022 0.20229845 0.00072191] 21 | [-0.17439997 0.0462575 0.16168842 -0.00218071]] 22 | 23 | [[-0.16600266 0.03167877 0.12934428 0.00097403] 24 | [-0.16666673 0.05471011 0.17157242 -0.00084941] 25 | [-0.15704881 0.04231958 0.13871045 -0.00225278]]] 26 | 3,3,4,2 27 | [[[ 3.75577137e-02 9.88311544e-02 3.40129584e-02 -1.79303065e-03] 28 | [ -4.96297423e-03 5.13819456e-02 1.70863140e-03 -1.66041229e-03] 29 | [ -1.38038069e-01 -1.01763301e-01 -1.15694344e-01 -2.96418410e-04]] 30 | 31 | [[ 1.66595340e-01 2.40750551e-01 1.61559835e-01 -5.13275329e-04] 32 | [ 1.51188180e-01 2.20311403e-01 1.56414255e-01 5.95636084e-04] 33 | [ -1.09849639e-01 -6.67438358e-02 -8.99365395e-02 3.56495693e-05]] 34 | 35 | [[ 1.56279504e-02 7.59588331e-02 1.29030216e-02 -6.87221691e-05] 36 | [ -7.96697661e-03 4.86797579e-02 5.44555223e-05 -1.82305416e-03] 37 | [ -1.49133086e-01 -1.12076312e-01 -1.25339806e-01 1.73211924e-03]]] 38 | 3,3,4,3 39 | [[[ 4.08775389e-01 4.37032372e-01 3.54220927e-01 1.68607128e-03] 40 | [ 3.14491093e-01 3.22750896e-01 2.78665572e-01 9.05156601e-04] 41 | [ 3.44510257e-01 3.51013690e-01 3.01723540e-01 6.66802167e-04]] 42 | 43 | [[ 9.91673693e-02 9.67388451e-02 9.78009403e-02 7.00010452e-04] 44 | [ -3.94421339e-01 -4.24664468e-01 -3.62367004e-01 -4.01875332e-05] 45 | [ -6.17709160e-02 -8.89540538e-02 -4.60787490e-02 -2.63872411e-04]] 46 | 47 | [[ -2.74197720e-02 -4.81492020e-02 -9.70711466e-04 -5.63400914e-04] 48 | [ -4.68224823e-01 -5.15416980e-01 -4.03173447e-01 1.08675600e-03] 49 | [ -1.92850590e-01 -2.35108465e-01 -1.46818504e-01 7.03229918e-04]]] 50 | 3,3,4,4 51 | [[[ -8.20337012e-02 -1.09851457e-01 -8.65836963e-02 -2.04305374e-03] 52 | [ 2.50965476e-01 2.39997879e-01 1.90843925e-01 5.88585863e-05] 53 | [ 2.30597973e-01 2.38524318e-01 1.68421835e-01 2.94434140e-04]] 54 | 55 | [[ -1.81427315e-01 -2.33996511e-01 -1.86469465e-01 -2.92018929e-04] 56 | [ 2.43798360e-01 2.05864489e-01 1.76087990e-01 -4.28683939e-04] 57 | [ 2.28274003e-01 2.10355535e-01 1.54266149e-01 2.31265370e-03]] 58 | 59 | [[ -2.77901947e-01 -3.36890221e-01 -2.39638329e-01 1.78045273e-04] 60 | [ 
-1.17673725e-01 -1.77195445e-01 -1.41599089e-01 6.22104388e-04] 61 | [ -6.81315586e-02 -1.09548189e-01 -1.02669068e-01 -1.75137573e-03]]] 62 | 3,3,4,5 63 | [[[ -4.82807644e-02 -1.92377031e-01 2.29682952e-01 2.15341119e-04] 64 | [ -8.60508829e-02 -2.23639682e-01 2.39403918e-01 1.17204757e-03] 65 | [ -4.34580185e-02 -1.87373206e-01 2.32435212e-01 -6.19930623e-04]] 66 | 67 | [[ -7.16413110e-02 -2.10348368e-01 2.51407683e-01 3.38533107e-04] 68 | [ -1.03272267e-01 -2.36093074e-01 2.67666608e-01 1.57048204e-03] 69 | [ -8.24363455e-02 -2.21655682e-01 2.37258807e-01 3.23963293e-04]] 70 | 71 | [[ -4.58678640e-02 -1.92640439e-01 2.25149244e-01 4.36268369e-04] 72 | [ -5.62276803e-02 -1.97426021e-01 2.59826958e-01 6.57131139e-04] 73 | [ -4.60685454e-02 -1.94271773e-01 2.20455468e-01 1.63183745e-03]]] 74 | 3,3,4,6 75 | [[[ 6.56183809e-02 9.78973731e-02 6.68324158e-02 -6.48704765e-04] 76 | [ -1.46961376e-01 -1.25584126e-01 -1.12072006e-01 -5.52026147e-04] 77 | [ -2.11909980e-01 -2.09138453e-01 -1.66514769e-01 7.62175332e-05]] 78 | 79 | [[ 1.55516997e-01 2.00741157e-01 1.47219494e-01 -4.03717248e-04] 80 | [ -7.57858902e-02 -4.31273729e-02 -5.26724681e-02 6.76984346e-05] 81 | [ -1.91515967e-01 -1.79757357e-01 -1.55623496e-01 -1.53073983e-04]] 82 | 83 | [[ 1.83865786e-01 2.28956848e-01 1.52024433e-01 6.01732812e-04] 84 | [ 1.04386508e-01 1.41492218e-01 9.82329026e-02 1.06108702e-04] 85 | [ 4.06519324e-03 2.03322694e-02 7.39815086e-03 8.83699104e-04]]] 86 | 3,3,4,7 87 | [[[ 1.17997281e-01 7.93518871e-02 4.13383804e-02 -6.53948344e-04] 88 | [ 2.55298972e-01 2.12809205e-01 1.16577163e-01 9.55657379e-05] 89 | [ 2.63398528e-01 2.34720662e-01 1.03120133e-01 -1.66259753e-03]] 90 | 91 | [[ -1.68040082e-01 -2.33289674e-01 -2.12591141e-01 2.67472729e-04] 92 | [ -3.51117030e-02 -1.07274085e-01 -1.49052441e-01 4.56791495e-05] 93 | [ 1.85907379e-01 1.33855551e-01 3.82502601e-02 -6.93426467e-04]] 94 | 95 | [[ -2.21252292e-01 -2.82425582e-01 -2.13217601e-01 -7.53994274e-04] 96 | [ -1.28569543e-01 -1.98862240e-01 -1.85671866e-01 1.85738818e-03] 97 | [ 9.33472142e-02 4.20177393e-02 -1.63493946e-03 -1.26743238e-04]]] 98 | 3,3,4,8 99 | [[[ 1.64276838e-01 -2.75177389e-01 1.30267844e-01 -8.98452592e-04] 100 | [ 1.69287682e-01 -3.18530500e-01 9.58600938e-02 1.21853373e-03] 101 | [ 1.69283032e-01 -2.74618059e-01 1.26365066e-01 -8.26674979e-04]] 102 | 103 | [[ 1.96568444e-01 -2.93176323e-01 1.11360095e-01 1.73495704e-04] 104 | [ 2.04575524e-01 -3.35548252e-01 7.72164986e-02 1.49904896e-04] 105 | [ 1.93210021e-01 -3.00376773e-01 9.90585014e-02 1.44385325e-03]] 106 | 107 | [[ 1.71749175e-01 -2.81641036e-01 1.06351323e-01 6.83704930e-05] 108 | [ 2.19906688e-01 -2.80506372e-01 1.13431305e-01 1.56364834e-03] 109 | [ 1.69036329e-01 -2.88209736e-01 9.44238901e-02 -8.82330467e-04]]] 110 | 3,3,4,9 111 | [[[ -1.59787163e-01 -1.42328829e-01 3.29103470e-02 1.25879771e-04] 112 | [ -2.79697061e-01 -2.51145810e-01 -5.26647307e-02 -1.38905272e-03] 113 | [ -1.79828420e-01 -1.60953864e-01 1.59041267e-02 7.57250120e-04]] 114 | 115 | [[ -5.63814081e-02 -1.56457294e-02 1.52951479e-01 -5.09620120e-04] 116 | [ -1.44742340e-01 -9.18593705e-02 9.79724899e-02 1.54006563e-03] 117 | [ -6.29037023e-02 -2.07418278e-02 1.49151549e-01 1.05796522e-03]] 118 | 119 | [[ 6.81652501e-02 1.05182156e-01 2.31639758e-01 7.17531075e-04] 120 | [ 4.72991541e-02 9.84755233e-02 2.42600024e-01 5.18330839e-04] 121 | [ 6.33433536e-02 1.01481386e-01 2.28801370e-01 -4.48988343e-04]]] 122 | 3,3,4,10 123 | [[[ 1.46136388e-01 1.73231885e-01 9.61545706e-02 3.60121892e-04] 124 | [ 
5.64986527e-01 6.08515918e-01 5.00213265e-01 1.64276699e-03] 125 | [ 9.93647799e-02 1.13403551e-01 4.59891558e-02 -4.73600667e-04]] 126 | 127 | [[ 3.04555404e-03 1.20651731e-02 3.84974666e-03 -4.24425671e-04] 128 | [ 5.05531192e-01 5.28295040e-01 4.89867747e-01 1.22847687e-03] 129 | [ -3.82008366e-02 -4.31188755e-02 -4.15886380e-02 -6.48762856e-04]] 130 | 131 | [[ -4.53146368e-01 -4.89183962e-01 -3.82995754e-01 -1.94232992e-03] 132 | [ -3.66203755e-01 -4.06722009e-01 -3.13440055e-01 7.93474319e-04] 133 | [ -4.53613669e-01 -5.01091897e-01 -3.86704862e-01 -1.78481001e-04]]] 134 | 3,3,4,11 135 | [[[ -9.03612897e-02 4.10316885e-02 -9.22253206e-02 9.99996089e-04] 136 | [ 1.33286670e-01 2.94810146e-01 1.04385577e-01 8.10125668e-04] 137 | [ 8.77168775e-02 2.45022506e-01 4.21199463e-02 1.58907159e-03]] 138 | 139 | [[ -1.88318565e-01 -5.23567908e-02 -1.59835622e-01 6.35231903e-04] 140 | [ 8.85766149e-02 2.54456282e-01 8.75689238e-02 5.49569959e-04] 141 | [ 6.01595603e-02 2.22733125e-01 3.87755893e-02 -1.65053454e-04]] 142 | 143 | [[ -2.76116341e-01 -1.64696112e-01 -2.24946335e-01 1.00222249e-04] 144 | [ -1.79876238e-01 -4.49110419e-02 -1.52323395e-01 5.33286424e-04] 145 | [ -1.22267805e-01 1.07738422e-02 -1.17437810e-01 2.26041302e-05]]] 146 | 3,3,4,12 147 | [[[ -2.19030157e-01 2.92494416e-01 -1.55873314e-01 -1.26443093e-03] 148 | [ -1.97435543e-01 3.47783238e-01 -1.44314736e-01 -4.28979198e-04] 149 | [ -2.09705532e-01 3.07439357e-01 -1.47805110e-01 1.01733394e-03]] 150 | 151 | [[ -1.88595220e-01 3.57859194e-01 -1.41208082e-01 1.48400932e-03] 152 | [ -1.44620553e-01 4.37669992e-01 -1.06140725e-01 6.10694173e-04] 153 | [ -1.73706234e-01 3.79394472e-01 -1.23757340e-01 -2.07108460e-04]] 154 | 155 | [[ -2.12760344e-01 3.00694168e-01 -1.71393976e-01 4.91097395e-04] 156 | [ -1.93089277e-01 3.54189575e-01 -1.58955529e-01 1.27445601e-05] 157 | [ -2.06336349e-01 3.13856006e-01 -1.58271819e-01 -4.86146280e-04]]] 158 | 3,3,4,13 159 | [[[ -1.26175478e-01 1.62965637e-02 5.22967987e-02 -6.14640070e-04] 160 | [ -2.26399019e-01 -7.86353722e-02 -1.37801236e-02 -1.75511464e-04] 161 | [ -3.05085570e-01 -1.87460065e-01 -1.11607119e-01 -7.53149943e-05]] 162 | 163 | [[ 4.24111402e-03 1.77657306e-01 2.03158319e-01 1.75670406e-03] 164 | [ -6.01502173e-02 1.18445069e-01 1.71877295e-01 1.44856214e-03] 165 | [ -2.32161015e-01 -8.80144089e-02 -1.97016075e-02 -1.53572182e-03]] 166 | 167 | [[ 3.81320342e-02 2.00323835e-01 2.01846048e-01 -9.32616706e-04] 168 | [ 2.98582599e-03 1.72226697e-01 1.97767422e-01 -9.24926775e-04] 169 | [ -1.31732151e-01 3.45506333e-03 4.26386483e-02 -8.35052633e-04]]] 170 | 3,3,4,14 171 | [[[ -9.93737653e-02 -1.26072139e-01 -8.32441747e-02 -6.68326160e-04] 172 | [ -4.17107373e-01 -4.59023654e-01 -3.81208956e-01 2.96015671e-04] 173 | [ -6.17119335e-02 -8.58032480e-02 -5.58534861e-02 -3.15031299e-04]] 174 | 175 | [[ -3.08400188e-02 -4.94737402e-02 -4.33545001e-02 -4.67942562e-04] 176 | [ -4.04181093e-01 -4.36910391e-01 -3.99291307e-01 -3.73359333e-04] 177 | [ 3.51811610e-02 2.21790820e-02 1.17621832e-02 1.78375770e-03]] 178 | 179 | [[ 2.97882557e-01 3.09386671e-01 2.29017183e-01 -1.32277701e-03] 180 | [ 2.21920937e-01 2.24570900e-01 1.58756286e-01 -9.60782869e-04] 181 | [ 3.40689301e-01 3.58142912e-01 2.63494223e-01 4.22705838e-04]]] 182 | 3,3,4,15 183 | [[[ 1.60492972e-01 -1.55553669e-01 1.57287687e-01 4.79616247e-05] 184 | [ 1.50845870e-01 -1.92928806e-01 1.39658153e-01 -3.33883771e-04] 185 | [ 1.26257464e-01 -1.89100251e-01 1.29653201e-01 9.24483407e-04]] 186 | 187 | [[ 1.01854384e-01 -2.36215204e-01 
1.00839920e-01 5.37036336e-04] 188 | [ 3.75170559e-02 -3.29839289e-01 2.90120393e-02 -8.76780308e-04] 189 | [ 8.43275115e-02 -2.51501888e-01 8.91952813e-02 -1.34118670e-03]] 190 | 191 | [[ 1.03943102e-01 -2.01904014e-01 1.23847581e-01 1.77986373e-03] 192 | [ 9.28537399e-02 -2.39626676e-01 1.04332887e-01 2.17455742e-03] 193 | [ 1.05604060e-01 -1.96765676e-01 1.29770204e-01 1.52595970e-03]]] 194 | 3,3,4,16 195 | [[[ 1.88706547e-01 -5.31723350e-02 2.91195959e-02 -2.96750193e-04] 196 | [ 1.81215644e-01 -1.04869179e-01 -3.29543278e-02 -7.36831571e-05] 197 | [ 1.77801266e-01 -5.47288097e-02 3.60857993e-02 2.10519205e-03]] 198 | 199 | [[ 2.31722817e-01 -9.41526219e-02 -4.47309427e-02 1.49270883e-04] 200 | [ 2.25618899e-01 -1.48625806e-01 -1.12106241e-01 1.05418765e-03] 201 | [ 2.25181833e-01 -9.15573686e-02 -3.49869356e-02 -9.76169598e-04]] 202 | 203 | [[ 2.13622883e-01 -9.90901664e-02 -4.29164991e-02 -1.39784021e-03] 204 | [ 2.37850174e-01 -1.22881040e-01 -8.18860233e-02 1.93783152e-03] 205 | [ 2.08464861e-01 -9.60973501e-02 -3.52495946e-02 -1.46008283e-03]]] 206 | 3,3,4,17 207 | [[[ 9.34753865e-02 1.81297168e-01 5.09483516e-02 -1.07467186e-03] 208 | [ 1.10058844e-01 1.98395967e-01 8.37437212e-02 -1.92984566e-03] 209 | [ -8.96054879e-02 -2.54678875e-02 -9.32614729e-02 -9.66958876e-04]] 210 | 211 | [[ 9.87698957e-02 1.92336693e-01 7.11323991e-02 5.15427906e-04] 212 | [ 1.12815462e-01 2.06355751e-01 1.03919454e-01 1.00791702e-04] 213 | [ -1.60945699e-01 -9.30082351e-02 -1.45294815e-01 -1.03721151e-03]] 214 | 215 | [[ -7.28550702e-02 -7.54500623e-04 -8.31292570e-02 1.19632296e-03] 216 | [ -1.06818415e-01 -3.50016095e-02 -9.71623659e-02 -5.52321435e-04] 217 | [ -2.14898407e-01 -1.61894172e-01 -1.83485031e-01 9.86951636e-04]]] 218 | 3,3,4,18 219 | [[[ 7.49733895e-02 -2.07719296e-01 1.90557480e-01 2.20160233e-03] 220 | [ 4.18386832e-02 -2.58161038e-01 1.66676164e-01 -4.42939228e-04] 221 | [ 9.04483646e-02 -1.95325106e-01 1.99065745e-01 1.11128774e-03]] 222 | 223 | [[ 7.82376602e-02 -2.18358561e-01 2.01554701e-01 3.61813232e-04] 224 | [ -1.97668355e-02 -3.35441887e-01 1.11782320e-01 6.62604580e-04] 225 | [ 9.55962166e-02 -2.03206554e-01 2.11256638e-01 -4.22907207e-04]] 226 | 227 | [[ 8.57388601e-02 -1.99787065e-01 1.93103850e-01 3.27849266e-04] 228 | [ 3.31895687e-02 -2.70007104e-01 1.47463843e-01 -6.57329336e-04] 229 | [ 9.17757154e-02 -1.96090013e-01 1.90740466e-01 -9.36687924e-04]]] 230 | 3,3,4,19 231 | [[[ 1.26324385e-01 -1.38946190e-01 -4.37222868e-02 -4.38039133e-04] 232 | [ 1.27719000e-01 -1.91356122e-01 -1.04160197e-01 1.15812596e-04] 233 | [ 1.15887597e-01 -1.45427689e-01 -4.71277125e-02 6.91400492e-04]] 234 | 235 | [[ 1.93598256e-01 -1.46354362e-01 -8.47202912e-02 1.11973716e-03] 236 | [ 1.90134004e-01 -2.07049444e-01 -1.55789599e-01 2.52124225e-03] 237 | [ 1.82731032e-01 -1.53688446e-01 -9.05470252e-02 2.87229253e-04]] 238 | 239 | [[ 1.81763098e-01 -1.22674122e-01 -6.27764463e-02 -7.54579261e-04] 240 | [ 2.11305365e-01 -1.47489175e-01 -1.01474576e-01 5.26181073e-04] 241 | [ 1.70229182e-01 -1.30973741e-01 -7.12402165e-02 1.61059026e-03]]] 242 | 3,3,4,20 243 | [[[ 2.38499194e-01 2.55391061e-01 3.61803114e-01 -8.36465391e-04] 244 | [ 1.41572744e-01 1.56778082e-01 3.08496326e-01 7.94966472e-04] 245 | [ -5.66975819e-03 -2.01128796e-02 1.40562028e-01 -9.03783075e-04]] 246 | 247 | [[ 1.00570329e-01 1.20544158e-01 2.75603920e-01 1.57430663e-03] 248 | [ -1.31103203e-01 -1.17958814e-01 9.05414894e-02 -1.43037492e-03] 249 | [ -2.77928889e-01 -2.94115484e-01 -7.63713941e-02 3.42887943e-04]] 250 | 251 | 
[[ -2.56843269e-02 -3.01995203e-02 1.34977609e-01 -1.33765256e-03] 252 | [ -2.33637869e-01 -2.43615434e-01 -2.56014206e-02 -7.85776647e-04] 253 | [ -3.29488367e-01 -3.64647031e-01 -1.41631305e-01 2.26618038e-04]]] 254 | 3,3,4,21 255 | [[[ 3.55971783e-01 3.56314898e-01 2.39028737e-01 -8.56014085e-04] 256 | [ 3.97887290e-01 3.79936904e-01 3.09875280e-01 -3.26682384e-05] 257 | [ 8.64696875e-02 5.71688004e-02 6.47362098e-02 7.94421183e-04]] 258 | 259 | [[ 1.82299823e-01 1.47908926e-01 9.00689811e-02 -1.59950578e-04] 260 | [ -2.52854265e-02 -8.45413879e-02 -7.61016682e-02 -1.81999916e-04] 261 | [ -2.79232264e-01 -3.42817664e-01 -2.55708873e-01 -2.14731484e-03]] 262 | 263 | [[ 3.50416526e-02 -1.09227877e-02 -7.11482309e-04 -1.39723136e-03] 264 | [ -2.16924280e-01 -2.85869956e-01 -2.05563650e-01 8.17215012e-04] 265 | [ -3.31096411e-01 -3.95867020e-01 -2.50058174e-01 -1.13058719e-03]]] 266 | 3,3,4,22 267 | [[[-0.12046139 0.00118498 0.16667192 0.00209613] 268 | [-0.17090142 -0.02594189 0.17097801 0.00131281] 269 | [-0.12404908 0.00151047 0.1687201 0.00033247]] 270 | 271 | [[-0.16372119 -0.01909573 0.17909338 -0.00141767] 272 | [-0.21932983 -0.05162582 0.17934875 0.00051631] 273 | [-0.17791158 -0.03037101 0.17024809 -0.00088464]] 274 | 275 | [[-0.13394448 -0.00957824 0.16053411 -0.00060924] 276 | [-0.15728132 -0.01042137 0.1914428 -0.00101545] 277 | [-0.13078149 -0.00378751 0.16880605 0.00032277]]] 278 | 3,3,4,23 279 | [[[ -3.08338255e-01 -4.87027854e-01 -1.80467457e-01 -2.29462539e-03] 280 | [ -2.37905011e-01 -4.09513474e-01 -1.17244996e-01 1.07080594e-03] 281 | [ 1.08455405e-01 -4.31569032e-02 1.81487694e-01 6.29162823e-04]] 282 | 283 | [[ -2.03620762e-01 -3.75806600e-01 -7.99025521e-02 5.82857348e-04] 284 | [ -1.17175855e-01 -2.76997328e-01 -3.49949521e-04 1.77080766e-03] 285 | [ 2.66444415e-01 1.30394563e-01 3.40175331e-01 -2.87092553e-04]] 286 | 287 | [[ 1.07057400e-01 -4.81424779e-02 1.84494838e-01 -9.29470334e-05] 288 | [ 2.03627706e-01 6.29161298e-02 2.77538925e-01 8.56606639e-04] 289 | [ 3.68812591e-01 2.43745610e-01 4.04833347e-01 -4.62649332e-04]]] 290 | 3,3,4,24 291 | [[[-0.02083396 0.11533792 0.04404821 -0.00028313] 292 | [-0.07155016 0.06701437 0.00966333 0.00063241] 293 | [-0.12228294 -0.00246835 -0.04592194 -0.00059245]] 294 | 295 | [[-0.00051869 0.14725193 0.07451686 -0.00043084] 296 | [-0.06439608 0.08504013 0.02745619 0.00043707] 297 | [-0.14637132 -0.01696968 -0.05888153 -0.00030289]] 298 | 299 | [[-0.02194524 0.11463996 0.04025492 0.00104976] 300 | [-0.06314134 0.07585644 0.01520346 0.00091357] 301 | [-0.11968347 0.00035171 -0.04599162 -0.0001539 ]]] 302 | 3,3,4,25 303 | [[[-0.15872632 0.1163794 0.09592837 0.00170604] 304 | [-0.16319121 0.13944666 0.12468038 0.00100459] 305 | [-0.15240695 0.12636176 0.10174061 -0.00200998]] 306 | 307 | [[-0.16730706 0.13736778 0.12232669 0.0011739 ] 308 | [-0.17251123 0.16125675 0.15278567 -0.00089515] 309 | [-0.16109975 0.1473745 0.12963735 0.00054916]] 310 | 311 | [[-0.17379802 0.10524599 0.07937991 -0.00145024] 312 | [-0.17338306 0.13363452 0.11532211 0.00060638] 313 | [-0.17065473 0.11229908 0.08544833 -0.00101882]]] 314 | 3,3,4,26 315 | [[[ -7.14943707e-02 -7.70884380e-02 -4.38113064e-02 -8.62951740e-04] 316 | [ -4.69355851e-01 -4.95030999e-01 -3.91029984e-01 2.14594067e-03] 317 | [ -3.88053566e-01 -4.15718704e-01 -2.85649657e-01 8.57863983e-04]] 318 | 319 | [[ 2.27950126e-01 2.41255358e-01 1.85681731e-01 -1.22828793e-03] 320 | [ -1.00678638e-01 -1.09213524e-01 -1.04213096e-01 -1.61578169e-03] 321 | [ -1.15352795e-01 -1.29570037e-01 
-8.81287754e-02 -8.04079464e-04]] 322 | 323 | [[ 3.74578476e-01 4.08630192e-01 2.72074014e-01 2.63010035e-03] 324 | [ 3.24808031e-01 3.45526427e-01 2.49017000e-01 2.92870885e-04] 325 | [ 2.21450076e-01 2.32177451e-01 1.74213678e-01 -3.94798437e-04]]] 326 | 3,3,4,27 327 | [[[-0.05043453 0.10368702 -0.18848351 0.00157504] 328 | [ 0.02727965 0.17416753 -0.17792737 -0.00100168] 329 | [-0.00554895 0.13986839 -0.18292557 0.00047422]] 330 | 331 | [[-0.00488439 0.14413531 -0.18481646 0.0009619 ] 332 | [ 0.12199405 0.26450121 -0.12894408 -0.00077069] 333 | [ 0.05089047 0.19054219 -0.17017853 0.0005463 ]] 334 | 335 | [[-0.06530017 0.08363787 -0.19339779 -0.00103373] 336 | [ 0.00711 0.14819314 -0.18588647 0.00057899] 337 | [-0.02640966 0.1133359 -0.19133255 0.00121631]]] 338 | 3,3,4,28 339 | [[[ -3.38394381e-02 -7.18570054e-02 -4.39950228e-02 7.70598766e-04] 340 | [ -1.87677285e-03 -2.27502156e-02 -3.68775949e-02 -1.93937554e-03] 341 | [ 3.68801594e-01 3.87291312e-01 2.80888468e-01 -3.82736471e-04]] 342 | 343 | [[ -3.70247245e-01 -4.22475278e-01 -3.45962405e-01 -9.62856138e-05] 344 | [ -4.30766165e-01 -4.65058833e-01 -4.32902575e-01 3.37066100e-04] 345 | [ 2.73943961e-01 2.85034418e-01 2.04956368e-01 -1.57124632e-05]] 346 | 347 | [[ -6.85893223e-02 -1.03522509e-01 -6.76356778e-02 -1.89934915e-04] 348 | [ -5.53761013e-02 -7.19793364e-02 -7.85235465e-02 1.19243760e-03] 349 | [ 3.16829234e-01 3.39262426e-01 2.40741447e-01 -2.45282846e-03]]] 350 | 3,3,4,29 351 | [[[ 2.19569877e-01 -3.88477091e-03 -1.89854458e-01 -2.98061734e-03] 352 | [ 2.70894051e-01 2.26958580e-02 -1.76336929e-01 2.83600057e-05] 353 | [ 1.45175755e-01 -2.90257912e-02 -1.60507455e-01 -7.17236486e-04]] 354 | 355 | [[ 2.28184730e-01 -1.42122926e-02 -1.93843380e-01 1.24057068e-03] 356 | [ 2.78903902e-01 1.21491915e-02 -1.78362042e-01 -9.13927972e-04] 357 | [ 1.43571958e-01 -4.76722531e-02 -1.70887142e-01 -1.08572119e-03]] 358 | 359 | [[ 1.01330757e-01 -6.59068450e-02 -1.59135729e-01 -1.14139728e-03] 360 | [ 1.05424121e-01 -8.42410773e-02 -1.84557512e-01 -1.60673622e-03] 361 | [ 3.61547731e-02 -7.99596608e-02 -1.24429666e-01 -8.43273956e-05]]] 362 | 3,3,4,30 363 | [[[ -2.92066574e-01 -2.63508290e-01 -2.33869821e-01 -3.43597523e-04] 364 | [ -2.15335354e-01 -1.65679723e-01 -1.67810217e-01 -1.00738078e-03] 365 | [ -1.23127244e-01 -7.20966905e-02 -9.85076800e-02 1.13447628e-03]] 366 | 367 | [[ -1.50502875e-01 -9.88568291e-02 -1.09126911e-01 -1.06349599e-03] 368 | [ 1.40345454e-01 2.21382901e-01 1.69678926e-01 1.63291895e-03] 369 | [ 1.22035392e-01 2.02424526e-01 1.31406531e-01 4.09015425e-04]] 370 | 371 | [[ -5.02232574e-02 2.67145922e-03 -4.03339416e-02 -2.67298106e-04] 372 | [ 1.79611325e-01 2.61513501e-01 1.80102304e-01 3.02236411e-04] 373 | [ 1.37286723e-01 2.15397730e-01 1.19875103e-01 -5.29180455e-04]]] 374 | 3,3,4,31 375 | [[[ 0.14893703 0.12447353 0.12836744 0.0014211 ] 376 | [ 0.13036445 0.09525505 0.08564197 0.00077212] 377 | [ 0.13415098 0.11141251 0.12136985 -0.00080795]] 378 | 379 | [[ 0.14171803 0.11669213 0.11066163 -0.00092965] 380 | [ 0.04218798 0.00535035 -0.0145823 -0.0005929 ] 381 | [ 0.13311349 0.11018976 0.11101208 -0.0004348 ]] 382 | 383 | [[ 0.15279444 0.14212981 0.15973611 0.00023949] 384 | [ 0.11152215 0.09029514 0.09495541 0.00057058] 385 | [ 0.13753545 0.12912421 0.15397617 0.00021507]]] 386 | 3,3,4,32 387 | [[[ 1.32619515e-02 -7.46256039e-02 -1.10599749e-01 -1.18659718e-05] 388 | [ 3.29272598e-02 -7.83106536e-02 -1.23675704e-01 -5.80861561e-05] 389 | [ 1.89347882e-02 -7.12787062e-02 -1.06915414e-01 8.84614346e-05]] 
390 | 391 | [[ 5.95952980e-02 -4.74971496e-02 -9.24182013e-02 1.64628623e-03] 392 | [ 1.23225600e-01 -7.27669103e-03 -6.34728074e-02 3.75134186e-05] 393 | [ 4.79140207e-02 -6.13196008e-02 -1.06018260e-01 5.58325963e-04]] 394 | 395 | [[ 1.08909262e-02 -7.25192353e-02 -1.02833636e-01 -7.02959078e-04] 396 | [ 4.48981002e-02 -6.05912544e-02 -1.01790354e-01 -5.88551979e-04] 397 | [ 8.16116855e-03 -7.64832944e-02 -1.07569188e-01 -4.07984888e-04]]] 398 | 3,3,4,33 399 | [[[ -5.03388140e-03 -1.56121626e-01 5.71336085e-03 -1.03047758e-03] 400 | [ -3.74878608e-02 -1.97528183e-01 -1.81761980e-02 -3.39679216e-04] 401 | [ -1.49115011e-01 -3.10258120e-01 -1.21902436e-01 1.70111062e-03]] 402 | 403 | [[ 1.84213206e-01 4.03459668e-02 1.93121225e-01 3.41126637e-04] 404 | [ 2.65389532e-01 1.15483850e-01 2.83511579e-01 -1.14357950e-04] 405 | [ -4.23344895e-02 -2.03197405e-01 -2.15079561e-02 -1.46342453e-03]] 406 | 407 | [[ 1.19107686e-01 -2.44378392e-02 1.16721585e-01 -3.60236067e-04] 408 | [ 1.98541522e-01 5.19780032e-02 2.06063733e-01 1.02048100e-03] 409 | [ -1.94161944e-02 -1.74380213e-01 -8.98326188e-03 4.07086191e-04]]] 410 | 3,3,4,34 411 | [[[ -2.70073444e-01 -2.90003479e-01 -2.44796559e-01 3.46525281e-04] 412 | [ -3.03981334e-01 -3.22223425e-01 -2.62617111e-01 -6.67189539e-04] 413 | [ -3.98545563e-01 -4.27483946e-01 -3.36425692e-01 1.13204913e-03]] 414 | 415 | [[ 1.14720054e-01 1.27246067e-01 9.55411494e-02 7.82054412e-05] 416 | [ 3.78111184e-01 4.01585072e-01 3.73997629e-01 -1.52897136e-03] 417 | [ -1.03537932e-01 -1.06837779e-01 -8.73369649e-02 -5.45107236e-04]] 418 | 419 | [[ 1.49368986e-01 1.67306423e-01 9.35268700e-02 -1.85020035e-03] 420 | [ 4.07268018e-01 4.39276874e-01 3.65221560e-01 -9.20031220e-04] 421 | [ 2.33304277e-02 2.67152078e-02 -3.08947964e-03 -4.07330343e-04]]] 422 | 3,3,4,35 423 | [[[ -4.03229028e-01 -4.32260394e-01 -3.05302829e-01 8.80493317e-04] 424 | [ -1.94614068e-01 -2.06736252e-01 -1.67535037e-01 -7.16538518e-04] 425 | [ 6.70514852e-02 7.58071691e-02 3.39720920e-02 -5.80085871e-05]] 426 | 427 | [[ -3.04495096e-01 -3.23996127e-01 -2.41429105e-01 -2.14613974e-04] 428 | [ 1.54334769e-01 1.63296551e-01 1.42289281e-01 1.11243140e-03] 429 | [ 4.09199715e-01 4.44485128e-01 3.44153941e-01 2.03997377e-04]] 430 | 431 | [[ -1.37970507e-01 -1.44507363e-01 -1.04618713e-01 9.30070935e-04] 432 | [ 1.30484581e-01 1.48839787e-01 9.73473936e-02 -6.42454193e-04] 433 | [ 2.65709609e-01 3.03147227e-01 1.86108336e-01 6.36376673e-04]]] 434 | 3,3,4,36 435 | [[[ -1.71640217e-01 5.06316163e-02 6.09379970e-02 5.10957791e-04] 436 | [ -2.03851864e-01 4.08999696e-02 6.91427067e-02 1.17332523e-03] 437 | [ -1.73067123e-01 5.20107076e-02 6.01495430e-02 -1.23930804e-03]] 438 | 439 | [[ -2.03079000e-01 4.16843630e-02 7.31860846e-02 -1.14816020e-03] 440 | [ -2.28493020e-01 3.88939120e-02 8.91655385e-02 2.59644657e-05] 441 | [ -2.02748358e-01 4.43667136e-02 7.41780549e-02 9.21108003e-04]] 442 | 443 | [[ -1.74136609e-01 4.92637046e-02 6.18723109e-02 3.41892825e-04] 444 | [ -1.99777722e-01 4.50038463e-02 7.53613859e-02 1.30976562e-03] 445 | [ -1.70285881e-01 5.47382645e-02 6.65422678e-02 -1.35283673e-03]]] 446 | 3,3,4,37 447 | [[[ -3.68897676e-01 -4.31032568e-01 -3.28162521e-01 -2.54034647e-03] 448 | [ -2.83717424e-01 -3.50037009e-01 -2.79851943e-01 1.38248794e-03] 449 | [ -2.97187001e-01 -3.62241358e-01 -2.96100914e-01 4.97765373e-04]] 450 | 451 | [[ -8.74576122e-02 -1.20457523e-01 -9.96494219e-02 -1.29736750e-03] 452 | [ 3.80950421e-01 3.61924678e-01 3.34116101e-01 -9.54193529e-04] 453 | [ 5.26927151e-02 
2.33462155e-02 4.53636842e-03 5.85956499e-04]] 454 | 455 | [[ 7.53673762e-02 7.61803165e-02 4.45810929e-02 -1.73954968e-03] 456 | [ 5.18441200e-01 5.39468646e-01 4.59542006e-01 3.27636109e-04] 457 | [ 1.96055353e-01 2.01728374e-01 1.34631634e-01 -8.76455684e-04]]] 458 | 3,3,4,38 459 | [[[ 1.30815029e-01 2.94441879e-01 1.71195298e-01 5.33363898e-04] 460 | [ 1.54596075e-01 3.29210728e-01 2.16697648e-01 -7.60823663e-04] 461 | [ -1.33177440e-03 1.50417060e-01 5.64939007e-02 3.06655827e-04]] 462 | 463 | [[ -6.25233650e-02 1.01203464e-01 1.64465327e-02 -2.78871786e-03] 464 | [ -9.86046866e-02 7.38054886e-02 4.29484574e-03 -7.74785818e-04] 465 | [ -1.83171511e-01 -3.11172325e-02 -8.40767920e-02 3.30188836e-04]] 466 | 467 | [[ -1.70109630e-01 -3.73491682e-02 -8.88792425e-02 -1.00366866e-04] 468 | [ -2.35845253e-01 -9.56979990e-02 -1.29369870e-01 -6.96551695e-04] 469 | [ -2.34545246e-01 -1.11130260e-01 -1.33945644e-01 -1.14147365e-03]]] 470 | 3,3,4,39 471 | [[[-0.0358882 -0.10993682 0.10265407 0.0002848 ] 472 | [-0.11990397 -0.19758695 0.04595486 0.0002899 ] 473 | [-0.04643798 -0.12538068 0.09067708 0.00054781]] 474 | 475 | [[-0.04197054 -0.11337133 0.12120296 -0.00076238] 476 | [-0.17326267 -0.24885692 0.0188637 0.00069629] 477 | [-0.05966521 -0.13633575 0.10113112 0.00070712]] 478 | 479 | [[-0.03258639 -0.1041396 0.10223453 -0.00060281] 480 | [-0.11105626 -0.18644011 0.04860276 0.00034494] 481 | [-0.05120771 -0.12792586 0.08034358 -0.0018769 ]]] 482 | 3,3,4,40 483 | [[[-0.10626303 0.06623434 -0.11063665 0.0006883 ] 484 | [-0.11452698 0.06585026 -0.11243556 0.00086051] 485 | [-0.16372663 -0.0040107 -0.15568645 -0.00024142]] 486 | 487 | [[-0.00123176 0.19449428 -0.01835439 -0.00079336] 488 | [ 0.03562608 0.24023479 0.02441533 0.00092082] 489 | [-0.10281684 0.07850371 -0.10447415 0.00207832]] 490 | 491 | [[-0.01397882 0.17332828 -0.04090111 0.00137141] 492 | [ 0.00525392 0.20117153 -0.01628016 0.00260798] 493 | [-0.09490839 0.07894424 -0.10676192 0.00091297]]] 494 | 3,3,4,41 495 | [[[ 2.91369915e-01 1.80295989e-01 3.30129862e-01 2.84246518e-04] 496 | [ 3.11672509e-01 2.03400090e-01 3.58926535e-01 8.47124262e-04] 497 | [ 3.77337724e-01 2.83428222e-01 3.96160275e-01 6.08939619e-04]] 498 | 499 | [[ -9.71558020e-02 -2.31872842e-01 -4.46805311e-03 2.54500046e-04] 500 | [ -2.70264834e-01 -4.07386810e-01 -1.69336021e-01 6.65826723e-04] 501 | [ 1.52612761e-01 3.89690883e-02 2.17563719e-01 1.93231317e-04]] 502 | 503 | [[ -2.22893253e-01 -3.65712166e-01 -1.14031255e-01 1.15730823e-03] 504 | [ -4.04626906e-01 -5.51754475e-01 -2.85876036e-01 3.99426935e-04] 505 | [ 7.68054649e-03 -1.16164185e-01 8.60797614e-02 4.79181152e-04]]] 506 | 3,3,4,42 507 | [[[ -6.31071255e-02 -7.04995841e-02 -5.86972833e-02 2.03584961e-04] 508 | [ 1.50444180e-01 1.68975696e-01 1.37751579e-01 -1.60933132e-06] 509 | [ 2.52608031e-01 2.85375357e-01 2.21436575e-01 3.78645607e-04]] 510 | 511 | [[ -2.18599498e-01 -2.30567768e-01 -1.90223768e-01 -3.08311020e-04] 512 | [ 1.30341314e-02 2.52011139e-02 1.99450254e-02 1.53789570e-05] 513 | [ 2.11167246e-01 2.40656734e-01 1.95112780e-01 -3.14908684e-04]] 514 | 515 | [[ -2.29798943e-01 -2.49716133e-01 -1.91436365e-01 -5.31804864e-04] 516 | [ -1.49255469e-01 -1.52640134e-01 -1.29125312e-01 -4.80456598e-04] 517 | [ 5.94821200e-03 1.55887157e-02 4.74228873e-04 6.61657687e-05]]] 518 | 3,3,4,43 519 | [[[ 2.86359966e-01 3.07717890e-01 1.84787169e-01 -4.15943185e-04] 520 | [ 2.78240681e-01 2.98371881e-01 1.82961330e-01 -8.89639021e-04] 521 | [ 3.08421880e-01 3.41272891e-01 2.13226527e-01 -1.77734357e-03]] 
522 | 523 | [[ -2.25602072e-02 -3.18222530e-02 -6.46763146e-02 -6.38901838e-04] 524 | [ -3.07044864e-01 -3.22560668e-01 -3.31725121e-01 -9.96974530e-04] 525 | [ 2.45076064e-02 2.63870042e-02 -1.42033221e-02 1.75873283e-04]] 526 | 527 | [[ -1.19400039e-01 -1.43751875e-01 -1.10158324e-01 2.69284181e-04] 528 | [ -4.12016422e-01 -4.43641156e-01 -3.79986167e-01 -1.53055866e-04] 529 | [ -1.02986649e-01 -1.16765022e-01 -8.94798487e-02 1.34340650e-03]]] 530 | 3,3,4,44 531 | [[[ -4.25485559e-02 9.66970101e-02 1.05085261e-02 3.45907611e-04] 532 | [ -1.32630855e-01 4.67646774e-03 -5.51802665e-02 2.10194802e-03] 533 | [ -2.56320834e-01 -1.50411174e-01 -1.76265195e-01 7.73363747e-04]] 534 | 535 | [[ 1.16913289e-01 2.79658705e-01 1.70957074e-01 9.86250117e-04] 536 | [ 6.13770373e-02 2.21472442e-01 1.38755485e-01 2.21146998e-04] 537 | [ -2.14874953e-01 -9.14772451e-02 -1.33287042e-01 -7.00817109e-05]] 538 | 539 | [[ 3.55472639e-02 1.83631420e-01 6.85702264e-02 9.25188535e-04] 540 | [ 1.40741945e-03 1.49391815e-01 5.75270206e-02 1.17929047e-03] 541 | [ -1.74198121e-01 -5.94068617e-02 -1.14718542e-01 -2.23895680e-04]]] 542 | 3,3,4,45 543 | [[[ 1.03452407e-01 9.88399908e-02 1.06680267e-01 1.02086633e-03] 544 | [ 3.30464453e-01 3.56243223e-01 2.86233097e-01 -6.45530527e-04] 545 | [ 5.12942135e-01 5.75109661e-01 4.25108880e-01 7.19962816e-04]] 546 | 547 | [[ -4.41290259e-01 -4.80582595e-01 -3.77094060e-01 -7.36318616e-05] 548 | [ -2.50680268e-01 -2.64927894e-01 -2.45718718e-01 5.86840499e-04] 549 | [ 3.64112139e-01 3.97563100e-01 3.07167470e-01 8.69760115e-04]] 550 | 551 | [[ -4.37943339e-01 -4.86055195e-01 -3.40692461e-01 -2.04954722e-05] 552 | [ -3.38343978e-01 -3.68056089e-01 -2.95589030e-01 -1.69405341e-03] 553 | [ 1.56436414e-01 1.67435169e-01 1.32255882e-01 2.97567545e-04]]] 554 | 3,3,4,46 555 | [[[ -2.84832176e-02 -6.72688857e-02 -1.30869240e-01 -1.07683893e-03] 556 | [ -7.60079473e-02 -1.35300472e-01 -1.86742201e-01 1.33525417e-03] 557 | [ -2.01668918e-01 -2.58517206e-01 -2.53508151e-01 -7.27095117e-04]] 558 | 559 | [[ 2.25514352e-01 1.86515421e-01 5.35440668e-02 -1.09527413e-04] 560 | [ 3.02002609e-01 2.43764937e-01 1.17272660e-01 9.90731409e-04] 561 | [ -6.01307862e-02 -1.24988474e-01 -1.79943159e-01 -1.28030963e-03]] 562 | 563 | [[ 1.43791035e-01 1.15885414e-01 -1.71319097e-02 2.15743552e-04] 564 | [ 2.19821334e-01 1.75286561e-01 4.50552292e-02 -1.09832990e-03] 565 | [ -3.16429213e-02 -8.06387812e-02 -1.45324454e-01 1.00540719e-03]]] 566 | 3,3,4,47 567 | [[[ -4.51399028e-01 -4.84057099e-01 -3.62839460e-01 -1.16053503e-03] 568 | [ -4.43578690e-01 -4.65595543e-01 -3.82482618e-01 -1.48905128e-05] 569 | [ -8.71764496e-02 -9.07742083e-02 -7.34804273e-02 1.57327682e-03]] 570 | 571 | [[ -2.21623719e-01 -2.35727638e-01 -1.85247317e-01 -2.74451659e-03] 572 | [ -1.11840693e-02 -4.94583417e-03 -8.14520195e-03 4.15172108e-04] 573 | [ 2.43712008e-01 2.69822836e-01 2.10257933e-01 3.34921671e-04]] 574 | 575 | [[ 1.43038690e-01 1.53932258e-01 1.15107872e-01 2.60236207e-04] 576 | [ 3.98770005e-01 4.35935706e-01 3.44564468e-01 3.37867066e-04] 577 | [ 4.07074630e-01 4.54858005e-01 3.25932920e-01 -7.06413528e-04]]] 578 | 3,3,4,48 579 | [[[ 1.96185544e-01 1.67565435e-01 6.44080713e-02 -1.51823006e-05] 580 | [ -3.69976610e-02 -9.24774930e-02 -1.03627868e-01 1.20362774e-05] 581 | [ -3.01728070e-01 -3.57969701e-01 -2.61545897e-01 -1.20400800e-03]] 582 | 583 | [[ 3.40618372e-01 3.08060050e-01 1.61978588e-01 -1.10826944e-03] 584 | [ 8.39938372e-02 2.13681739e-02 -2.99310964e-02 5.44598850e-04] 585 | [ -3.30690533e-01 
-3.98592234e-01 -3.30316007e-01 1.01187581e-03]] 586 | 587 | [[ 2.55993217e-01 2.34663248e-01 7.78414086e-02 1.26060331e-03] 588 | [ 1.80333495e-01 1.36645108e-01 5.64207211e-02 2.80121923e-04] 589 | [ -8.77605826e-02 -1.34772331e-01 -1.10275768e-01 -5.07225923e-04]]] 590 | 3,3,4,49 591 | [[[ -8.97931457e-02 -5.52446060e-02 9.33597162e-02 -8.63154302e-04] 592 | [ 5.86711690e-02 1.34769008e-01 2.61836171e-01 -1.36802904e-03] 593 | [ 2.53234833e-01 3.33249241e-01 3.96732688e-01 -6.73012037e-05]] 594 | 595 | [[ -4.03994083e-01 -3.64839733e-01 -1.61634386e-01 1.29460590e-03] 596 | [ -2.36142859e-01 -1.55243859e-01 2.31066477e-02 2.83644127e-04] 597 | [ 1.43068507e-01 2.33516157e-01 3.38173360e-01 -2.10888265e-05]] 598 | 599 | [[ -3.50294858e-01 -3.31248164e-01 -1.36940807e-01 4.65010584e-04] 600 | [ -2.16985032e-01 -1.59631297e-01 1.48149990e-02 -5.43780334e-04] 601 | [ 6.08755350e-02 1.24874108e-01 2.29117662e-01 1.12339365e-03]]] 602 | 3,3,4,50 603 | [[[ -1.51110873e-01 1.25660345e-01 4.01619263e-03 -6.30480237e-04] 604 | [ -1.62097588e-01 1.34524375e-01 1.03108706e-02 -8.48751748e-04] 605 | [ -1.46770179e-01 1.32626846e-01 4.11674706e-03 8.10199243e-04]] 606 | 607 | [[ -1.58298731e-01 1.38941646e-01 2.06385013e-02 -1.72075198e-03] 608 | [ -1.54860288e-01 1.62634149e-01 4.23107333e-02 1.60069918e-04] 609 | [ -1.54289648e-01 1.45502672e-01 2.15735007e-02 -9.91763081e-04]] 610 | 611 | [[ -1.54305533e-01 1.24660276e-01 4.98014083e-03 2.20430922e-03] 612 | [ -1.61380813e-01 1.36745378e-01 1.61965080e-02 1.05267495e-03] 613 | [ -1.47153080e-01 1.33727431e-01 1.04908813e-02 2.68625724e-03]]] 614 | 3,3,4,51 615 | [[[ 4.63700473e-01 5.00400960e-01 3.78148049e-01 -8.68916803e-04] 616 | [ 1.66020945e-01 1.57932520e-01 1.49958238e-01 -2.80754035e-03] 617 | [ -1.46743089e-01 -1.88454032e-01 -9.93825570e-02 5.72784687e-04]] 618 | 619 | [[ 3.98165435e-01 4.24909770e-01 3.32956612e-01 1.16434961e-03] 620 | [ -2.40648940e-01 -2.66253561e-01 -2.20603868e-01 1.80966128e-03] 621 | [ -6.12233102e-01 -6.71400070e-01 -5.18702567e-01 -7.11895002e-04]] 622 | 623 | [[ 3.27685863e-01 3.58545274e-01 2.70942211e-01 -1.45243853e-03] 624 | [ -3.20339911e-02 -4.49670255e-02 -1.17181307e-02 2.01564608e-03] 625 | [ -3.08808595e-01 -3.50572735e-01 -2.23098278e-01 1.91191758e-03]]] 626 | 3,3,4,52 627 | [[[ 1.26994550e-01 -7.44272694e-02 5.26880398e-02 5.89157396e-04] 628 | [ 1.68935448e-01 -7.75721967e-02 2.54054330e-02 3.32750409e-04] 629 | [ 2.38035515e-01 1.52011551e-02 8.82033035e-02 1.32949976e-03]] 630 | 631 | [[ -1.22762937e-03 -2.33091280e-01 -8.01997706e-02 -1.47000363e-04] 632 | [ 6.92391593e-04 -2.78018236e-01 -1.50458962e-01 -7.25476537e-04] 633 | [ 2.05085993e-01 -4.59912717e-02 4.14873622e-02 -2.73849844e-04]] 634 | 635 | [[ 3.57718803e-02 -1.56895638e-01 1.96650671e-03 3.03490466e-04] 636 | [ 5.41123189e-02 -1.82696432e-01 -4.74085957e-02 -1.33917772e-03] 637 | [ 1.69655666e-01 -4.25656363e-02 5.64493537e-02 1.39901950e-03]]] 638 | 3,3,4,53 639 | [[[ -2.73974717e-01 -3.38722199e-01 -2.70789355e-01 -2.48061202e-04] 640 | [ -2.63310194e-01 -3.29489619e-01 -3.14782560e-01 1.05505611e-03] 641 | [ 6.89496398e-02 3.88309024e-02 -5.62812947e-03 2.02714931e-03]] 642 | 643 | [[ -7.12103248e-02 -1.43520504e-01 -1.58877835e-01 -5.88798197e-04] 644 | [ 1.28826099e-02 -5.69121465e-02 -1.35780364e-01 1.07920961e-03] 645 | [ 3.03843200e-01 2.72664219e-01 1.41668722e-01 1.17129879e-03]] 646 | 647 | [[ 1.22695819e-01 7.19730631e-02 -5.40927518e-03 6.83012622e-05] 648 | [ 2.17733368e-01 1.70078337e-01 3.15883271e-02 
1.63907965e-03] 649 | [ 2.99936146e-01 2.82923579e-01 1.11484297e-01 2.87137518e-04]]] 650 | 3,3,4,54 651 | [[[ 4.51747254e-02 -2.61301082e-02 1.10607408e-01 4.53318004e-04] 652 | [ -2.45497495e-01 -3.32798719e-01 -1.44173667e-01 -9.82096652e-04] 653 | [ -2.73189932e-01 -3.67730200e-01 -1.68463469e-01 -7.80823611e-05]] 654 | 655 | [[ 1.95946127e-01 1.35003686e-01 2.53484011e-01 1.28798978e-03] 656 | [ -1.23408929e-01 -2.03285918e-01 -3.42523716e-02 -1.92197680e-03] 657 | [ -1.74905241e-01 -2.64865339e-01 -8.12317953e-02 7.97121902e-04]] 658 | 659 | [[ 3.00626665e-01 2.47526586e-01 3.26859027e-01 3.12882505e-04] 660 | [ 1.83255345e-01 1.17499143e-01 2.35761940e-01 -3.43348976e-04] 661 | [ 9.99696925e-02 2.21105684e-02 1.54124647e-01 1.44580088e-04]]] 662 | 3,3,4,55 663 | [[[ -2.38493338e-01 -7.35273808e-02 -1.18195258e-01 -2.74479011e-04] 664 | [ -1.74975753e-01 2.08353326e-02 -4.71791252e-02 -1.16551993e-03] 665 | [ -4.20844108e-02 1.53152242e-01 5.60752414e-02 6.73301111e-05]] 666 | 667 | [[ -2.27758721e-01 -4.32924628e-02 -9.87838954e-02 7.38678849e-04] 668 | [ -1.22466579e-01 9.52897295e-02 1.39012812e-02 -8.74078425e-04] 669 | [ 3.50056551e-02 2.53587991e-01 1.43193200e-01 -3.58817342e-04]] 670 | 671 | [[ -1.33780688e-01 4.17840295e-02 -3.45173553e-02 2.64813454e-04] 672 | [ -5.73729165e-02 1.50199324e-01 5.09751029e-02 3.53883894e-04] 673 | [ 4.02998440e-02 2.46469945e-01 1.23141922e-01 -1.14076363e-04]]] 674 | 3,3,4,56 675 | [[[ 9.12305620e-03 -1.15336858e-01 4.29893471e-02 -3.20871099e-04] 676 | [ 8.72102976e-02 -4.04662266e-02 9.85405147e-02 -7.97601766e-04] 677 | [ 2.32655138e-01 1.29669949e-01 2.29410529e-01 5.63011330e-04]] 678 | 679 | [[ -1.64952904e-01 -3.09078455e-01 -1.26956955e-01 1.47141039e-03] 680 | [ -1.15515195e-01 -2.64331251e-01 -1.03649266e-01 -5.22812770e-04] 681 | [ 1.91749185e-01 7.23028705e-02 1.85455158e-01 1.26944564e-03]] 682 | 683 | [[ -5.25783673e-02 -1.83052063e-01 -1.57835074e-02 -3.48622212e-04] 684 | [ -8.67471471e-03 -1.44732818e-01 4.24273638e-03 -7.89923302e-04] 685 | [ 1.62966505e-01 5.13972640e-02 1.58938885e-01 5.54912949e-05]]] 686 | 3,3,4,57 687 | [[[ 2.99678087e-01 1.70631081e-01 3.53236228e-01 4.93234373e-04] 688 | [ 2.39003152e-02 -1.33166671e-01 1.17565043e-01 1.36533426e-03] 689 | [ -8.35123584e-02 -2.49448776e-01 2.05497947e-02 3.19052226e-04]] 690 | 691 | [[ 2.30453759e-01 9.27076265e-02 2.98866808e-01 -2.46528070e-03] 692 | [ -2.41969988e-01 -4.12189454e-01 -1.28682598e-01 1.44919509e-03] 693 | [ -3.11172426e-01 -4.89763707e-01 -1.86652526e-01 -3.41151550e-04]] 694 | 695 | [[ 2.75735348e-01 1.46760330e-01 3.25029105e-01 -1.83304795e-03] 696 | [ 4.11275662e-02 -1.13705084e-01 1.29520938e-01 1.45271164e-03] 697 | [ -4.33028750e-02 -2.05637738e-01 5.38637713e-02 -1.46628800e-03]]] 698 | 3,3,4,58 699 | [[[ -3.19988966e-01 -3.67526531e-01 -2.84681618e-01 3.56855599e-04] 700 | [ -2.89746299e-02 -4.09343056e-02 -4.44047265e-02 -7.52406311e-04] 701 | [ 1.50852606e-01 1.73589796e-01 1.11910567e-01 -4.74784407e-04]] 702 | 703 | [[ -3.17024529e-01 -3.62138659e-01 -2.85965323e-01 8.53169331e-05] 704 | [ 2.93176144e-01 2.97577500e-01 2.73522764e-01 8.69424606e-04] 705 | [ 4.86247689e-01 5.29540300e-01 4.44558173e-01 -1.00016082e-03]] 706 | 707 | [[ -3.53868037e-01 -3.91899109e-01 -3.00994813e-01 1.84323260e-04] 708 | [ -2.38770470e-02 -2.74247322e-02 -2.33768374e-02 7.26194179e-04] 709 | [ 1.43340126e-01 1.71046570e-01 1.17677592e-01 -1.18355372e-03]]] 710 | 3,3,4,59 711 | [[[ 8.72597471e-02 1.33395687e-01 -2.02890083e-01 6.81760372e-04] 712 | [ 
1.25106528e-01 1.43303603e-01 -2.44807452e-01 1.60608703e-04] 713 | [ 8.55238587e-02 1.33845493e-01 -1.93389341e-01 -3.75615957e-04]] 714 | 715 | [[ 1.49862722e-01 1.60177231e-01 -2.46885374e-01 -2.00135051e-03] 716 | [ 1.94910869e-01 1.76016212e-01 -2.86743253e-01 -1.39442753e-04] 717 | [ 1.49851054e-01 1.62003413e-01 -2.36104324e-01 -1.66748406e-03]] 718 | 719 | [[ 1.12176716e-01 1.42014474e-01 -2.13930249e-01 -3.65579093e-04] 720 | [ 1.53773516e-01 1.54039085e-01 -2.55773723e-01 -2.07287769e-04] 721 | [ 1.10281669e-01 1.41838863e-01 -2.06068680e-01 9.45783395e-04]]] 722 | 3,3,4,60 723 | [[[ -3.38483721e-01 -3.66895437e-01 -2.84943998e-01 -1.63135421e-03] 724 | [ -2.01899424e-01 -2.03208730e-01 -1.81034535e-01 2.16956134e-04] 725 | [ 3.33640985e-02 5.20836115e-02 1.96314529e-02 4.53040295e-04]] 726 | 727 | [[ -2.60349125e-01 -2.75948256e-01 -2.29001790e-01 -3.05505266e-04] 728 | [ 1.44164294e-01 1.68543562e-01 1.42149478e-01 6.61101949e-04] 729 | [ 3.99523199e-01 4.49152291e-01 3.69328976e-01 -9.12765565e-04]] 730 | 731 | [[ -1.71210527e-01 -1.82062134e-01 -1.62446305e-01 7.03925965e-04] 732 | [ 1.01079494e-01 1.25214949e-01 7.96652064e-02 3.60667356e-04] 733 | [ 2.56897300e-01 2.99747795e-01 2.09525511e-01 -8.12537255e-05]]] 734 | 3,3,4,61 735 | [[[ -1.27135560e-01 -8.48205537e-02 -1.32241577e-01 -6.98905438e-04] 736 | [ -1.22360572e-01 -9.88498852e-02 -1.48207128e-01 -1.89969665e-04] 737 | [ -1.29404560e-01 -9.43124294e-02 -1.40158370e-01 -4.36439761e-04]] 738 | 739 | [[ -9.88388434e-02 -6.93419054e-02 -1.14316158e-01 -5.72267636e-05] 740 | [ -1.40649760e-02 -3.78979952e-03 -5.20089604e-02 2.67642288e-04] 741 | [ -1.25688612e-01 -1.03502415e-01 -1.45945460e-01 -3.22441745e-04]] 742 | 743 | [[ -1.49742723e-01 -1.05017252e-01 -1.41449392e-01 -4.20855562e-04] 744 | [ -1.09780088e-01 -8.29023942e-02 -1.22728683e-01 6.55753247e-04] 745 | [ -1.56839803e-01 -1.18306056e-01 -1.53092295e-01 6.57919270e-04]]] 746 | 3,3,4,62 747 | [[[ -5.02991639e-02 -5.11389151e-02 -5.33475243e-02 1.13640062e-03] 748 | [ -2.76450396e-01 -2.96195745e-01 -2.35300660e-01 1.49572894e-04] 749 | [ -4.62243795e-01 -5.05662560e-01 -3.77666801e-01 6.33388772e-05]] 750 | 751 | [[ 4.04058546e-01 4.37116861e-01 3.65494519e-01 3.88983201e-04] 752 | [ 2.59300828e-01 2.69412428e-01 2.57149011e-01 1.10025972e-03] 753 | [ -3.14026326e-01 -3.45792353e-01 -2.71823555e-01 8.24989926e-04]] 754 | 755 | [[ 3.06801915e-01 3.39550197e-01 2.41099641e-01 4.98259964e-04] 756 | [ 2.79203743e-01 2.95466095e-01 2.44687453e-01 4.59204981e-04] 757 | [ -1.42874300e-01 -1.67058021e-01 -1.39917329e-01 -1.50147208e-03]]] 758 | 3,3,4,63 759 | [[[ 3.48965675e-02 3.74943428e-02 7.57738389e-03 6.42915606e-04] 760 | [ -3.90796512e-02 -7.04357103e-02 -6.30356818e-02 1.18428504e-03] 761 | [ -3.23985279e-01 -3.83694321e-01 -3.00509870e-01 -1.95312445e-04]] 762 | 763 | [[ 3.92561197e-01 4.13171440e-01 3.39938998e-01 4.46895836e-04] 764 | [ 4.23764467e-01 4.09505904e-01 3.71682674e-01 8.00598820e-04] 765 | [ -2.32839763e-01 -2.92486250e-01 -2.39718184e-01 -8.35361949e-04]] 766 | 767 | [[ 8.82701725e-02 9.86374393e-02 4.55602147e-02 -3.19052342e-04] 768 | [ 9.46525261e-02 7.44469985e-02 5.32913655e-02 1.62428978e-03] 769 | [ -2.69694626e-01 -3.22241962e-01 -2.65075237e-01 -9.13318290e-05]]] 770 | (3, 3, 4, 64) 771 | [[[[ 4.80015397e-01 -1.72696680e-01 3.75577137e-02 ..., 772 | -1.27135560e-01 -5.02991639e-02 3.48965675e-02] 773 | [ 5.50379455e-01 2.08774377e-02 9.88311544e-02 ..., 774 | -8.48205537e-02 -5.11389151e-02 3.74943428e-02] 775 | [ 4.29470569e-01 
1.17273867e-01 3.40129584e-02 ..., 776 | -1.32241577e-01 -5.33475243e-02 7.57738389e-03] 777 | [ 1.13388560e-04 1.46378891e-03 -1.79303065e-03 ..., 778 | -6.98905438e-04 1.13640062e-03 6.42915606e-04]] 779 | 780 | [[ 4.08547401e-01 -1.70375049e-01 -4.96297423e-03 ..., 781 | -1.22360572e-01 -2.76450396e-01 -3.90796512e-02] 782 | [ 4.40074533e-01 4.73412387e-02 5.13819456e-02 ..., 783 | -9.88498852e-02 -2.96195745e-01 -7.04357103e-02] 784 | [ 3.73466998e-01 1.62062630e-01 1.70863140e-03 ..., 785 | -1.48207128e-01 -2.35300660e-01 -6.30356818e-02] 786 | [ 7.61439209e-04 2.19835248e-03 -1.66041229e-03 ..., 787 | -1.89969665e-04 1.49572894e-04 1.18428504e-03]] 788 | 789 | [[ -6.51455522e-02 -1.54351532e-01 -1.38038069e-01 ..., 790 | -1.29404560e-01 -4.62243795e-01 -3.23985279e-01] 791 | [ -8.13870355e-02 4.18543853e-02 -1.01763301e-01 ..., 792 | -9.43124294e-02 -5.05662560e-01 -3.83694321e-01] 793 | [ -6.13601133e-02 1.35693997e-01 -1.15694344e-01 ..., 794 | -1.40158370e-01 -3.77666801e-01 -3.00509870e-01] 795 | [ 4.74345696e-04 6.79618970e-04 -2.96418410e-04 ..., 796 | -4.36439761e-04 6.33388772e-05 -1.95312445e-04]]] 797 | 798 | 799 | [[[ 3.10477257e-01 -1.87601492e-01 1.66595340e-01 ..., 800 | -9.88388434e-02 4.04058546e-01 3.92561197e-01] 801 | [ 3.45739067e-01 3.10493708e-02 2.40750551e-01 ..., 802 | -6.93419054e-02 4.37116861e-01 4.13171440e-01] 803 | [ 2.74769872e-01 1.48350164e-01 1.61559835e-01 ..., 804 | -1.14316158e-01 3.65494519e-01 3.39938998e-01] 805 | [ 4.11637186e-04 -9.07719019e-04 -5.13275329e-04 ..., 806 | -5.72267636e-05 3.88983201e-04 4.46895836e-04]] 807 | 808 | [[ 5.02023660e-02 -1.77571565e-01 1.51188180e-01 ..., 809 | -1.40649760e-02 2.59300828e-01 4.23764467e-01] 810 | [ 4.06322069e-02 6.58102185e-02 2.20311403e-01 ..., 811 | -3.78979952e-03 2.69412428e-01 4.09505904e-01] 812 | [ 3.86807770e-02 2.02298447e-01 1.56414255e-01 ..., 813 | -5.20089604e-02 2.57149011e-01 3.71682674e-01] 814 | [ 1.38304755e-03 7.21911609e-04 5.95636084e-04 ..., 815 | 2.67642288e-04 1.10025972e-03 8.00598820e-04]] 816 | 817 | [[ -4.03383434e-01 -1.74399972e-01 -1.09849639e-01 ..., 818 | -1.25688612e-01 -3.14026326e-01 -2.32839763e-01] 819 | [ -4.53501314e-01 4.62574959e-02 -6.67438358e-02 ..., 820 | -1.03502415e-01 -3.45792353e-01 -2.92486250e-01] 821 | [ -3.67223352e-01 1.61688417e-01 -8.99365395e-02 ..., 822 | -1.45945460e-01 -2.71823555e-01 -2.39718184e-01] 823 | [ 1.28411280e-03 -2.18071369e-03 3.56495693e-05 ..., 824 | -3.22441745e-04 8.24989926e-04 -8.35361949e-04]]] 825 | 826 | 827 | [[[ -5.08716851e-02 -1.66002661e-01 1.56279504e-02 ..., 828 | -1.49742723e-01 3.06801915e-01 8.82701725e-02] 829 | [ -5.86349145e-02 3.16787697e-02 7.59588331e-02 ..., 830 | -1.05017252e-01 3.39550197e-01 9.86374393e-02] 831 | [ -5.74681684e-02 1.29344285e-01 1.29030216e-02 ..., 832 | -1.41449392e-01 2.41099641e-01 4.55602147e-02] 833 | [ -6.34787197e-04 9.74029012e-04 -6.87221691e-05 ..., 834 | -4.20855562e-04 4.98259964e-04 -3.19052342e-04]] 835 | 836 | [[ -2.85227507e-01 -1.66666731e-01 -7.96697661e-03 ..., 837 | -1.09780088e-01 2.79203743e-01 9.46525261e-02] 838 | [ -3.30669671e-01 5.47101051e-02 4.86797579e-02 ..., 839 | -8.29023942e-02 2.95466095e-01 7.44469985e-02] 840 | [ -2.62249678e-01 1.71572417e-01 5.44555223e-05 ..., 841 | -1.22728683e-01 2.44687453e-01 5.32913655e-02] 842 | [ -1.77454809e-03 -8.49406992e-04 -1.82305416e-03 ..., 843 | 6.55753247e-04 4.59204981e-04 1.62428978e-03]] 844 | 845 | [[ -4.18516338e-01 -1.57048807e-01 -1.49133086e-01 ..., 846 | -1.56839803e-01 -1.42874300e-01 
-2.69694626e-01] 847 |    [ -4.85030204e-01   4.23195846e-02  -1.12076312e-01 ..., 848 |      -1.18306056e-01  -1.67058021e-01  -3.22241962e-01] 849 |    [ -3.50096762e-01   1.38710454e-01  -1.25339806e-01 ..., 850 |      -1.53092295e-01  -1.39917329e-01  -2.65075237e-01] 851 |    [  2.10441509e-03  -2.25277967e-03   1.73211924e-03 ..., 852 |       6.57919270e-04  -1.50147208e-03  -9.13318290e-05]]]] 853 | -------------------------------------------------------------------------------- /tfwss/visualize.py: -------------------------------------------------------------------------------- 1 | """ 2 | visualize.py 3 | 4 | Davis 2016 visualization helpers. 5 | 6 | Written by Phil Ferriere 7 | 8 | Licensed under the MIT License (see LICENSE for details) 9 | 10 | Based on: 11 | - https://github.com/matterport/Mask_RCNN/blob/master/visualize.py 12 | Copyright (c) 2017 Matterport, Inc. / Written by Waleed Abdulla 13 | Licensed under the MIT License 14 | 15 | References for future work: 16 | E:/repos/models-master/research/object_detection/utils/visualization_utils.py 17 | """ 18 | 19 | from __future__ import absolute_import 20 | from __future__ import division 21 | from __future__ import print_function 22 | 23 | from IPython.display import HTML 24 | import io, base64 25 | import imageio 26 | import numpy as np 27 | import random 28 | import colorsys 29 | import matplotlib.pyplot as plt 30 | 31 | def random_colors(N, bright=True, RGB_max=255): 32 | """ 33 | Generate random colors. To get visually distinct colors, generate them in HSV space then convert to RGB. 34 | Args: 35 | N: number of colors to generate. 36 | bright: set to True for bright colors. 37 | RGB_max: set to 1.0 or 255, based on the image type you're working with 38 | """ 39 | brightness = 1.0 if bright else 0.7 40 | hsv = [(i / N, 1, brightness) for i in range(N)] 41 | colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv)) 42 | colors = [(color[0] * RGB_max, color[1] * RGB_max, color[2] * RGB_max) for color in colors] 43 | random.shuffle(colors) 44 | return colors 45 | 46 | def display_images(images, titles=None, cols=4, cmap=None, norm=None, interpolation=None): 47 | """Display the given set of images, optionally with titles. 48 | Args: 49 | images: list or array of image tensors in HWC format. 50 | titles: optional. A list of titles to display with each image. 51 | cols: number of images per row 52 | cmap: Optional. Color map to use. For example, "Blues". 53 | norm: Optional. A Normalize instance to map values to colors. 54 | interpolation: Optional. Image interpolation to use for display. 55 | """ 56 | titles = titles if titles is not None else [""] * len(images) 57 | rows = len(images) // cols + 1 58 | width = 20 59 | plt.figure(figsize=(width, width * rows // cols)) 60 | i = 1 61 | for image, title in zip(images, titles): 62 | plt.subplot(rows, cols, i) 63 | # plt.title(title, fontsize=9) 64 | plt.axis('off') 65 | plt.imshow(image.astype(np.uint8), cmap=cmap, norm=norm, interpolation=interpolation) 66 | i += 1 67 | plt.tight_layout() 68 | plt.show() 69 | 70 | def draw_box(image, bbox, color, in_place=True): 71 | """Draw (in-place, or not) 2-pixel-wide bounding boxes on an image.
72 | Args: 73 | image: video frame (H,W,3) 74 | bbox: y1, x1, y2, x2 bounding box 75 | color: color list of 3 int values for RGB 76 | in_place: in place / copy flag 77 | Returns: 78 | image with bounding box 79 | """ 80 | y1, x1, y2, x2 = bbox 81 | result = image if in_place else np.copy(image) 82 | result[y1:y1 + 2, x1:x2] = color  # top edge 83 | result[y2:y2 + 2, x1:x2] = color  # bottom edge 84 | result[y1:y2, x1:x1 + 2] = color  # left edge 85 | result[y1:y2, x2:x2 + 2] = color  # right edge 86 | return result 87 | 88 | def draw_mask(image, mask, color, alpha=0.5, in_place=False): 89 | """Draw (in-place, or not) a mask on an image. 90 | Args: 91 | image: input image (H,W,3) 92 | mask: mask (H,W,1) 93 | color: color list of 3 int values for RGB 94 | alpha: alpha blending level 95 | in_place: in place / copy flag 96 | Returns: 97 | image with mask 98 | """ 99 | assert(len(image.shape) == len(mask.shape) == len(color) == 3) 100 | assert(image.shape[0] == mask.shape[0] and image.shape[1] == mask.shape[1]) 101 | threshold = (np.max(mask) - np.min(mask)) / 2  # midpoint of the mask's value range 102 | multiplier = 1 if np.amax(color) > 1 else 255  # scale the color up if it is expressed in [0, 1] 103 | masked_image = image if in_place else np.copy(image) 104 | for c in range(3): 105 | masked_image[:, :, c] = np.where(mask[:, :, 0] > threshold, 106 | masked_image[:, :, c] * 107 | (1 - alpha) + alpha * color[c] * multiplier, 108 | masked_image[:, :, c]) 109 | return masked_image 110 | 111 | def draw_masks(image, bboxes, masks, alpha=0.5, in_place=True): 112 | """Apply the given instance masks to the image and draw their bboxes. 113 | Args: 114 | image: input image (H, W, 3) 115 | bboxes: (num_instances, (y1, x1, y2, x2)) bounding boxes as numpy array 116 | masks: masks (num_instances, H, W, 1) as numpy array 117 | alpha: alpha blending level 118 | in_place: in place / copy flag 119 | Returns: 120 | image with masks overlaid 121 | """ 122 | # Number of instances 123 | num_instances = bboxes.shape[0] 124 | assert(num_instances == masks.shape[0]) 125 | 126 | # Make a copy of the input image, if requested 127 | masked_image = image if in_place else np.copy(image) 128 | 129 | # Draw bboxes and masks on the image, if the bbox is not empty, using a random color 130 | colors = random_colors(num_instances) 131 | for instance in range(num_instances): 132 | if not np.any(bboxes[instance]): 133 | continue 134 | color = colors[instance] 135 | draw_mask(masked_image, masks[instance], color, alpha=alpha, in_place=True) 136 | # draw_mask(masked_image, masks[instance, :, :, 0], color, alpha=alpha, in_place=True) 137 | draw_box(masked_image, bboxes[instance], color, in_place=True) 138 | 139 | return masked_image 140 | --------------------------------------------------------------------------------
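As a quick illustration of how the helpers in tfwss/visualize.py fit together, here is a minimal usage sketch. It is not part of the repository: the synthetic frame, box coordinates, and box-shaped masks are placeholder assumptions chosen only to match the (num_instances, (y1, x1, y2, x2)) and (num_instances, H, W, 1) formats that draw_masks expects.

import numpy as np
from visualize import display_images, draw_masks  # assumes the tfwss/ directory is on the path

# Placeholder 480x640 RGB frame; in practice this would be a real image loaded with imageio.imread().
image = np.zeros((480, 640, 3), dtype=np.uint8)
H, W = image.shape[:2]

# Two hypothetical instances: bounding boxes in (y1, x1, y2, x2) order...
bboxes = np.array([[40, 60, 200, 220], [120, 240, 300, 420]])

# ...and their binary masks in (num_instances, H, W, 1) format, as expected by draw_masks().
masks = np.zeros((len(bboxes), H, W, 1), dtype=np.uint8)
for i, (y1, x1, y2, x2) in enumerate(bboxes):
    masks[i, y1:y2, x1:x2, 0] = 1  # crude box-shaped masks, for illustration only

# Overlay the masks and their boxes on a copy of the frame, then show original and overlay side by side.
overlay = draw_masks(image, bboxes, masks, alpha=0.5, in_place=False)
display_images([image, overlay], cols=2)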