├── LICENSE
├── README.md
├── data
│   ├── Art.txt
│   ├── Clipart.txt
│   ├── Product.txt
│   ├── Real_World.txt
│   ├── VisDA2017_train.txt
│   ├── VisDA2017_valid.txt
│   ├── amazon.txt
│   ├── clipart_test.txt
│   ├── clipart_train.txt
│   ├── dslr.txt
│   ├── infograph_test.txt
│   ├── infograph_train.txt
│   ├── painting_test.txt
│   ├── painting_train.txt
│   ├── quickdraw_test.txt
│   ├── quickdraw_train.txt
│   ├── real_test.txt
│   ├── real_train.txt
│   ├── sketch_test.txt
│   ├── sketch_train.txt
│   └── webcam.txt
├── dataset
│   ├── augmentations.py
│   ├── data_list.py
│   └── data_provider.py
├── fig
│   ├── SSRT.png
│   └── SafeTraining.png
├── main_SSRT.domainnet.py
├── main_SSRT.office31.py
├── main_SSRT.office_home.py
├── main_SSRT.visda.py
├── main_ViT_baseline.domainnet.py
├── main_ViT_baseline.office31.py
├── main_ViT_baseline.office_home.py
├── main_ViT_baseline.visda.py
├── model
│   ├── SSRT.py
│   ├── ViT.py
│   ├── ViTgrl.py
│   ├── grl.py
│   └── helpers.py
├── requirements.txt
├── trainer
│   ├── argument_parser.py
│   ├── evaluate.py
│   └── train.py
└── utils
    └── utils.py
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2022 tsun
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # SSRT
2 | PyTorch implementation of SSRT.
3 | > [Safe Self-Refinement for Transformer-based Domain Adaptation](https://arxiv.org/abs/2204.07683)
4 | > Tao Sun, Cheng Lu, Tianshuo Zhang, and Haibin Ling
5 | > *CVPR 2022*
6 |
7 | ## Abstract
8 | Unsupervised Domain Adaptation (UDA) aims to leverage a label-rich source domain to solve tasks on a related unlabeled target domain. It is a challenging problem especially when a large domain gap lies between the source and target domains. In this paper we propose a novel solution named SSRT (Safe Self-Refinement for Transformer-based domain adaptation), which brings improvement from two aspects. First, encouraged by the success of vision transformers in various vision tasks, we arm SSRT with a transformer backbone. We find that the combination of vision transformer with simple adversarial adaptation surpasses best reported Convolutional Neural Network (CNN)-based results on the challenging DomainNet benchmark, showing its strong transferable feature representation. Second, to reduce the risk of model collapse and improve the effectiveness of knowledge transfer between domains with large gaps, we propose a Safe Self-Refinement strategy. Specifically, SSRT utilizes predictions of perturbed target domain data to refine the model. Since the model capacity of vision transformer is large and predictions in such challenging tasks can be noisy, a safe training mechanism is designed to adaptively adjust learning configuration. Extensive evaluations are conducted on several widely tested UDA benchmarks and SSRT achieves consistently the best performances, including 85.43% on Office-Home, 88.76% on VisDA-2017 and 45.2% on DomainNet.
9 |
10 |
11 |
12 | ## Usage
13 | ### Prerequisites
14 | ```shell
15 | We experimented with python==3.8, pytorch==1.8.0, cudatoolkit==11.1
16 | ```
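If you want to confirm that a local environment matches these versions, a quick check like the one below works (it only uses standard `platform`/`torch` attributes; nothing here is specific to this repo):
```python
# Quick environment check against the versions listed above.
import platform
import torch

print("python         :", platform.python_version())   # expect 3.8.x
print("pytorch        :", torch.__version__)            # expect 1.8.0
print("built with cuda:", torch.version.cuda)           # expect 11.1
print("cuda available :", torch.cuda.is_available())
```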
17 | ### Training
18 | 1. Clone this repository to a local directory
19 | ```shell
20 | git clone https://github.com/tsun/SSRT.git
21 | ```
22 | 2. Download the [Office-31](https://faculty.cc.gatech.edu/~judy/domainadapt/), [Office-Home](https://www.hemanthdv.org/officeHomeDataset.html), [VisDA-2017](https://ai.bu.edu/visda-2017/), and [DomainNet](http://ai.bu.edu/M3SDA/) datasets and extract them to ./data. (A quick sanity check of the extracted paths is sketched after these steps.)
23 |
24 | 3. To reproduce results in Tables 1-4 of the paper, run
25 | ```shell
26 | python main_SSRT.office31.py
27 | python main_SSRT.office_home.py
28 | python main_SSRT.visda.py
29 | python main_SSRT.domainnet.py
30 | ```
31 |
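Before launching training, it can help to verify that the extracted datasets line up with the image lists shipped in `./data` (each line there is `<relative image path> <class index>`). Below is a minimal sanity-check sketch, run from the repository root; only the file layout shown in this repo is assumed:
```python
# Check that the image paths referenced in ./data/*.txt exist after extraction.
import glob
import os

for list_file in sorted(glob.glob("./data/*.txt")):
    total = missing = 0
    with open(list_file) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            path = line.rsplit(" ", 1)[0]  # drop the trailing class index
            total += 1
            if not os.path.exists(path):
                missing += 1
    print(f"{list_file}: {total - missing}/{total} images found")
```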
32 | ## Acknowledgements
33 | - The implementation of Vision Transformer is adapted from the excellent [timm](https://github.com/rwightman/pytorch-image-models/tree/master/timm) library.
34 | - We thank the authors of the following open-sourced repos:
35 | [pytorch-image-models](https://github.com/rwightman/pytorch-image-models)
36 | [Transfer-Learning-Library](https://github.com/thuml/Transfer-Learning-Library)
37 | [implicit_alignment](https://github.com/xiangdal/implicit_alignment)
38 |
39 |
40 |
41 |
42 |
43 |
44 | ## Reference
45 | ```bibtex
46 | @inproceedings{sun2022safe,
47 | author = {Sun, Tao and Lu, Cheng and Zhang, Tianshuo and Ling, Haibin},
48 | title = {Safe Self-Refinement for Transformer-based Domain Adaptation},
49 | booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
50 | year = {2022}
51 | }
52 | ```
--------------------------------------------------------------------------------
/data/dslr.txt:
--------------------------------------------------------------------------------
1 | ./data/office31/dslr/images/calculator/frame_0001.jpg 5
2 | ./data/office31/dslr/images/calculator/frame_0002.jpg 5
3 | ./data/office31/dslr/images/calculator/frame_0003.jpg 5
4 | ./data/office31/dslr/images/calculator/frame_0004.jpg 5
5 | ./data/office31/dslr/images/calculator/frame_0005.jpg 5
6 | ./data/office31/dslr/images/calculator/frame_0006.jpg 5
7 | ./data/office31/dslr/images/calculator/frame_0007.jpg 5
8 | ./data/office31/dslr/images/calculator/frame_0008.jpg 5
9 | ./data/office31/dslr/images/calculator/frame_0009.jpg 5
10 | ./data/office31/dslr/images/calculator/frame_0010.jpg 5
11 | ./data/office31/dslr/images/calculator/frame_0011.jpg 5
12 | ./data/office31/dslr/images/calculator/frame_0012.jpg 5
13 | ./data/office31/dslr/images/ring_binder/frame_0001.jpg 24
14 | ./data/office31/dslr/images/ring_binder/frame_0002.jpg 24
15 | ./data/office31/dslr/images/ring_binder/frame_0003.jpg 24
16 | ./data/office31/dslr/images/ring_binder/frame_0004.jpg 24
17 | ./data/office31/dslr/images/ring_binder/frame_0005.jpg 24
18 | ./data/office31/dslr/images/ring_binder/frame_0006.jpg 24
19 | ./data/office31/dslr/images/ring_binder/frame_0007.jpg 24
20 | ./data/office31/dslr/images/ring_binder/frame_0008.jpg 24
21 | ./data/office31/dslr/images/ring_binder/frame_0009.jpg 24
22 | ./data/office31/dslr/images/ring_binder/frame_0010.jpg 24
23 | ./data/office31/dslr/images/printer/frame_0001.jpg 21
24 | ./data/office31/dslr/images/printer/frame_0002.jpg 21
25 | ./data/office31/dslr/images/printer/frame_0003.jpg 21
26 | ./data/office31/dslr/images/printer/frame_0004.jpg 21
27 | ./data/office31/dslr/images/printer/frame_0005.jpg 21
28 | ./data/office31/dslr/images/printer/frame_0006.jpg 21
29 | ./data/office31/dslr/images/printer/frame_0007.jpg 21
30 | ./data/office31/dslr/images/printer/frame_0008.jpg 21
31 | ./data/office31/dslr/images/printer/frame_0009.jpg 21
32 | ./data/office31/dslr/images/printer/frame_0010.jpg 21
33 | ./data/office31/dslr/images/printer/frame_0011.jpg 21
34 | ./data/office31/dslr/images/printer/frame_0012.jpg 21
35 | ./data/office31/dslr/images/printer/frame_0013.jpg 21
36 | ./data/office31/dslr/images/printer/frame_0014.jpg 21
37 | ./data/office31/dslr/images/printer/frame_0015.jpg 21
38 | ./data/office31/dslr/images/keyboard/frame_0001.jpg 11
39 | ./data/office31/dslr/images/keyboard/frame_0002.jpg 11
40 | ./data/office31/dslr/images/keyboard/frame_0003.jpg 11
41 | ./data/office31/dslr/images/keyboard/frame_0004.jpg 11
42 | ./data/office31/dslr/images/keyboard/frame_0005.jpg 11
43 | ./data/office31/dslr/images/keyboard/frame_0006.jpg 11
44 | ./data/office31/dslr/images/keyboard/frame_0007.jpg 11
45 | ./data/office31/dslr/images/keyboard/frame_0008.jpg 11
46 | ./data/office31/dslr/images/keyboard/frame_0009.jpg 11
47 | ./data/office31/dslr/images/keyboard/frame_0010.jpg 11
48 | ./data/office31/dslr/images/scissors/frame_0001.jpg 26
49 | ./data/office31/dslr/images/scissors/frame_0002.jpg 26
50 | ./data/office31/dslr/images/scissors/frame_0003.jpg 26
51 | ./data/office31/dslr/images/scissors/frame_0004.jpg 26
52 | ./data/office31/dslr/images/scissors/frame_0005.jpg 26
53 | ./data/office31/dslr/images/scissors/frame_0006.jpg 26
54 | ./data/office31/dslr/images/scissors/frame_0007.jpg 26
55 | ./data/office31/dslr/images/scissors/frame_0008.jpg 26
56 | ./data/office31/dslr/images/scissors/frame_0009.jpg 26
57 | ./data/office31/dslr/images/scissors/frame_0010.jpg 26
58 | ./data/office31/dslr/images/scissors/frame_0011.jpg 26
59 | ./data/office31/dslr/images/scissors/frame_0012.jpg 26
60 | ./data/office31/dslr/images/scissors/frame_0013.jpg 26
61 | ./data/office31/dslr/images/scissors/frame_0014.jpg 26
62 | ./data/office31/dslr/images/scissors/frame_0015.jpg 26
63 | ./data/office31/dslr/images/scissors/frame_0016.jpg 26
64 | ./data/office31/dslr/images/scissors/frame_0017.jpg 26
65 | ./data/office31/dslr/images/scissors/frame_0018.jpg 26
66 | ./data/office31/dslr/images/laptop_computer/frame_0001.jpg 12
67 | ./data/office31/dslr/images/laptop_computer/frame_0002.jpg 12
68 | ./data/office31/dslr/images/laptop_computer/frame_0003.jpg 12
69 | ./data/office31/dslr/images/laptop_computer/frame_0004.jpg 12
70 | ./data/office31/dslr/images/laptop_computer/frame_0005.jpg 12
71 | ./data/office31/dslr/images/laptop_computer/frame_0006.jpg 12
72 | ./data/office31/dslr/images/laptop_computer/frame_0007.jpg 12
73 | ./data/office31/dslr/images/laptop_computer/frame_0008.jpg 12
74 | ./data/office31/dslr/images/laptop_computer/frame_0009.jpg 12
75 | ./data/office31/dslr/images/laptop_computer/frame_0010.jpg 12
76 | ./data/office31/dslr/images/laptop_computer/frame_0011.jpg 12
77 | ./data/office31/dslr/images/laptop_computer/frame_0012.jpg 12
78 | ./data/office31/dslr/images/laptop_computer/frame_0013.jpg 12
79 | ./data/office31/dslr/images/laptop_computer/frame_0014.jpg 12
80 | ./data/office31/dslr/images/laptop_computer/frame_0015.jpg 12
81 | ./data/office31/dslr/images/laptop_computer/frame_0016.jpg 12
82 | ./data/office31/dslr/images/laptop_computer/frame_0017.jpg 12
83 | ./data/office31/dslr/images/laptop_computer/frame_0018.jpg 12
84 | ./data/office31/dslr/images/laptop_computer/frame_0019.jpg 12
85 | ./data/office31/dslr/images/laptop_computer/frame_0020.jpg 12
86 | ./data/office31/dslr/images/laptop_computer/frame_0021.jpg 12
87 | ./data/office31/dslr/images/laptop_computer/frame_0022.jpg 12
88 | ./data/office31/dslr/images/laptop_computer/frame_0023.jpg 12
89 | ./data/office31/dslr/images/laptop_computer/frame_0024.jpg 12
90 | ./data/office31/dslr/images/mouse/frame_0001.jpg 16
91 | ./data/office31/dslr/images/mouse/frame_0002.jpg 16
92 | ./data/office31/dslr/images/mouse/frame_0003.jpg 16
93 | ./data/office31/dslr/images/mouse/frame_0004.jpg 16
94 | ./data/office31/dslr/images/mouse/frame_0005.jpg 16
95 | ./data/office31/dslr/images/mouse/frame_0006.jpg 16
96 | ./data/office31/dslr/images/mouse/frame_0007.jpg 16
97 | ./data/office31/dslr/images/mouse/frame_0008.jpg 16
98 | ./data/office31/dslr/images/mouse/frame_0009.jpg 16
99 | ./data/office31/dslr/images/mouse/frame_0010.jpg 16
100 | ./data/office31/dslr/images/mouse/frame_0011.jpg 16
101 | ./data/office31/dslr/images/mouse/frame_0012.jpg 16
102 | ./data/office31/dslr/images/monitor/frame_0001.jpg 15
103 | ./data/office31/dslr/images/monitor/frame_0002.jpg 15
104 | ./data/office31/dslr/images/monitor/frame_0003.jpg 15
105 | ./data/office31/dslr/images/monitor/frame_0004.jpg 15
106 | ./data/office31/dslr/images/monitor/frame_0005.jpg 15
107 | ./data/office31/dslr/images/monitor/frame_0006.jpg 15
108 | ./data/office31/dslr/images/monitor/frame_0007.jpg 15
109 | ./data/office31/dslr/images/monitor/frame_0008.jpg 15
110 | ./data/office31/dslr/images/monitor/frame_0009.jpg 15
111 | ./data/office31/dslr/images/monitor/frame_0010.jpg 15
112 | ./data/office31/dslr/images/monitor/frame_0011.jpg 15
113 | ./data/office31/dslr/images/monitor/frame_0012.jpg 15
114 | ./data/office31/dslr/images/monitor/frame_0013.jpg 15
115 | ./data/office31/dslr/images/monitor/frame_0014.jpg 15
116 | ./data/office31/dslr/images/monitor/frame_0015.jpg 15
117 | ./data/office31/dslr/images/monitor/frame_0016.jpg 15
118 | ./data/office31/dslr/images/monitor/frame_0017.jpg 15
119 | ./data/office31/dslr/images/monitor/frame_0018.jpg 15
120 | ./data/office31/dslr/images/monitor/frame_0019.jpg 15
121 | ./data/office31/dslr/images/monitor/frame_0020.jpg 15
122 | ./data/office31/dslr/images/monitor/frame_0021.jpg 15
123 | ./data/office31/dslr/images/monitor/frame_0022.jpg 15
124 | ./data/office31/dslr/images/mug/frame_0001.jpg 17
125 | ./data/office31/dslr/images/mug/frame_0002.jpg 17
126 | ./data/office31/dslr/images/mug/frame_0003.jpg 17
127 | ./data/office31/dslr/images/mug/frame_0004.jpg 17
128 | ./data/office31/dslr/images/mug/frame_0005.jpg 17
129 | ./data/office31/dslr/images/mug/frame_0006.jpg 17
130 | ./data/office31/dslr/images/mug/frame_0007.jpg 17
131 | ./data/office31/dslr/images/mug/frame_0008.jpg 17
132 | ./data/office31/dslr/images/tape_dispenser/frame_0001.jpg 29
133 | ./data/office31/dslr/images/tape_dispenser/frame_0002.jpg 29
134 | ./data/office31/dslr/images/tape_dispenser/frame_0003.jpg 29
135 | ./data/office31/dslr/images/tape_dispenser/frame_0004.jpg 29
136 | ./data/office31/dslr/images/tape_dispenser/frame_0005.jpg 29
137 | ./data/office31/dslr/images/tape_dispenser/frame_0006.jpg 29
138 | ./data/office31/dslr/images/tape_dispenser/frame_0007.jpg 29
139 | ./data/office31/dslr/images/tape_dispenser/frame_0008.jpg 29
140 | ./data/office31/dslr/images/tape_dispenser/frame_0009.jpg 29
141 | ./data/office31/dslr/images/tape_dispenser/frame_0010.jpg 29
142 | ./data/office31/dslr/images/tape_dispenser/frame_0011.jpg 29
143 | ./data/office31/dslr/images/tape_dispenser/frame_0012.jpg 29
144 | ./data/office31/dslr/images/tape_dispenser/frame_0013.jpg 29
145 | ./data/office31/dslr/images/tape_dispenser/frame_0014.jpg 29
146 | ./data/office31/dslr/images/tape_dispenser/frame_0015.jpg 29
147 | ./data/office31/dslr/images/tape_dispenser/frame_0016.jpg 29
148 | ./data/office31/dslr/images/tape_dispenser/frame_0017.jpg 29
149 | ./data/office31/dslr/images/tape_dispenser/frame_0018.jpg 29
150 | ./data/office31/dslr/images/tape_dispenser/frame_0019.jpg 29
151 | ./data/office31/dslr/images/tape_dispenser/frame_0020.jpg 29
152 | ./data/office31/dslr/images/tape_dispenser/frame_0021.jpg 29
153 | ./data/office31/dslr/images/tape_dispenser/frame_0022.jpg 29
154 | ./data/office31/dslr/images/pen/frame_0001.jpg 19
155 | ./data/office31/dslr/images/pen/frame_0002.jpg 19
156 | ./data/office31/dslr/images/pen/frame_0003.jpg 19
157 | ./data/office31/dslr/images/pen/frame_0004.jpg 19
158 | ./data/office31/dslr/images/pen/frame_0005.jpg 19
159 | ./data/office31/dslr/images/pen/frame_0006.jpg 19
160 | ./data/office31/dslr/images/pen/frame_0007.jpg 19
161 | ./data/office31/dslr/images/pen/frame_0008.jpg 19
162 | ./data/office31/dslr/images/pen/frame_0009.jpg 19
163 | ./data/office31/dslr/images/pen/frame_0010.jpg 19
164 | ./data/office31/dslr/images/bike/frame_0001.jpg 1
165 | ./data/office31/dslr/images/bike/frame_0002.jpg 1
166 | ./data/office31/dslr/images/bike/frame_0003.jpg 1
167 | ./data/office31/dslr/images/bike/frame_0004.jpg 1
168 | ./data/office31/dslr/images/bike/frame_0005.jpg 1
169 | ./data/office31/dslr/images/bike/frame_0006.jpg 1
170 | ./data/office31/dslr/images/bike/frame_0007.jpg 1
171 | ./data/office31/dslr/images/bike/frame_0008.jpg 1
172 | ./data/office31/dslr/images/bike/frame_0009.jpg 1
173 | ./data/office31/dslr/images/bike/frame_0010.jpg 1
174 | ./data/office31/dslr/images/bike/frame_0011.jpg 1
175 | ./data/office31/dslr/images/bike/frame_0012.jpg 1
176 | ./data/office31/dslr/images/bike/frame_0013.jpg 1
177 | ./data/office31/dslr/images/bike/frame_0014.jpg 1
178 | ./data/office31/dslr/images/bike/frame_0015.jpg 1
179 | ./data/office31/dslr/images/bike/frame_0016.jpg 1
180 | ./data/office31/dslr/images/bike/frame_0017.jpg 1
181 | ./data/office31/dslr/images/bike/frame_0018.jpg 1
182 | ./data/office31/dslr/images/bike/frame_0019.jpg 1
183 | ./data/office31/dslr/images/bike/frame_0020.jpg 1
184 | ./data/office31/dslr/images/bike/frame_0021.jpg 1
185 | ./data/office31/dslr/images/punchers/frame_0001.jpg 23
186 | ./data/office31/dslr/images/punchers/frame_0002.jpg 23
187 | ./data/office31/dslr/images/punchers/frame_0003.jpg 23
188 | ./data/office31/dslr/images/punchers/frame_0004.jpg 23
189 | ./data/office31/dslr/images/punchers/frame_0005.jpg 23
190 | ./data/office31/dslr/images/punchers/frame_0006.jpg 23
191 | ./data/office31/dslr/images/punchers/frame_0007.jpg 23
192 | ./data/office31/dslr/images/punchers/frame_0008.jpg 23
193 | ./data/office31/dslr/images/punchers/frame_0009.jpg 23
194 | ./data/office31/dslr/images/punchers/frame_0010.jpg 23
195 | ./data/office31/dslr/images/punchers/frame_0011.jpg 23
196 | ./data/office31/dslr/images/punchers/frame_0012.jpg 23
197 | ./data/office31/dslr/images/punchers/frame_0013.jpg 23
198 | ./data/office31/dslr/images/punchers/frame_0014.jpg 23
199 | ./data/office31/dslr/images/punchers/frame_0015.jpg 23
200 | ./data/office31/dslr/images/punchers/frame_0016.jpg 23
201 | ./data/office31/dslr/images/punchers/frame_0017.jpg 23
202 | ./data/office31/dslr/images/punchers/frame_0018.jpg 23
203 | ./data/office31/dslr/images/back_pack/frame_0001.jpg 0
204 | ./data/office31/dslr/images/back_pack/frame_0002.jpg 0
205 | ./data/office31/dslr/images/back_pack/frame_0003.jpg 0
206 | ./data/office31/dslr/images/back_pack/frame_0004.jpg 0
207 | ./data/office31/dslr/images/back_pack/frame_0005.jpg 0
208 | ./data/office31/dslr/images/back_pack/frame_0006.jpg 0
209 | ./data/office31/dslr/images/back_pack/frame_0007.jpg 0
210 | ./data/office31/dslr/images/back_pack/frame_0008.jpg 0
211 | ./data/office31/dslr/images/back_pack/frame_0009.jpg 0
212 | ./data/office31/dslr/images/back_pack/frame_0010.jpg 0
213 | ./data/office31/dslr/images/back_pack/frame_0011.jpg 0
214 | ./data/office31/dslr/images/back_pack/frame_0012.jpg 0
215 | ./data/office31/dslr/images/desktop_computer/frame_0001.jpg 8
216 | ./data/office31/dslr/images/desktop_computer/frame_0002.jpg 8
217 | ./data/office31/dslr/images/desktop_computer/frame_0003.jpg 8
218 | ./data/office31/dslr/images/desktop_computer/frame_0004.jpg 8
219 | ./data/office31/dslr/images/desktop_computer/frame_0005.jpg 8
220 | ./data/office31/dslr/images/desktop_computer/frame_0006.jpg 8
221 | ./data/office31/dslr/images/desktop_computer/frame_0007.jpg 8
222 | ./data/office31/dslr/images/desktop_computer/frame_0008.jpg 8
223 | ./data/office31/dslr/images/desktop_computer/frame_0009.jpg 8
224 | ./data/office31/dslr/images/desktop_computer/frame_0010.jpg 8
225 | ./data/office31/dslr/images/desktop_computer/frame_0011.jpg 8
226 | ./data/office31/dslr/images/desktop_computer/frame_0012.jpg 8
227 | ./data/office31/dslr/images/desktop_computer/frame_0013.jpg 8
228 | ./data/office31/dslr/images/desktop_computer/frame_0014.jpg 8
229 | ./data/office31/dslr/images/desktop_computer/frame_0015.jpg 8
230 | ./data/office31/dslr/images/speaker/frame_0001.jpg 27
231 | ./data/office31/dslr/images/speaker/frame_0002.jpg 27
232 | ./data/office31/dslr/images/speaker/frame_0003.jpg 27
233 | ./data/office31/dslr/images/speaker/frame_0004.jpg 27
234 | ./data/office31/dslr/images/speaker/frame_0005.jpg 27
235 | ./data/office31/dslr/images/speaker/frame_0006.jpg 27
236 | ./data/office31/dslr/images/speaker/frame_0007.jpg 27
237 | ./data/office31/dslr/images/speaker/frame_0008.jpg 27
238 | ./data/office31/dslr/images/speaker/frame_0009.jpg 27
239 | ./data/office31/dslr/images/speaker/frame_0010.jpg 27
240 | ./data/office31/dslr/images/speaker/frame_0011.jpg 27
241 | ./data/office31/dslr/images/speaker/frame_0012.jpg 27
242 | ./data/office31/dslr/images/speaker/frame_0013.jpg 27
243 | ./data/office31/dslr/images/speaker/frame_0014.jpg 27
244 | ./data/office31/dslr/images/speaker/frame_0015.jpg 27
245 | ./data/office31/dslr/images/speaker/frame_0016.jpg 27
246 | ./data/office31/dslr/images/speaker/frame_0017.jpg 27
247 | ./data/office31/dslr/images/speaker/frame_0018.jpg 27
248 | ./data/office31/dslr/images/speaker/frame_0019.jpg 27
249 | ./data/office31/dslr/images/speaker/frame_0020.jpg 27
250 | ./data/office31/dslr/images/speaker/frame_0021.jpg 27
251 | ./data/office31/dslr/images/speaker/frame_0022.jpg 27
252 | ./data/office31/dslr/images/speaker/frame_0023.jpg 27
253 | ./data/office31/dslr/images/speaker/frame_0024.jpg 27
254 | ./data/office31/dslr/images/speaker/frame_0025.jpg 27
255 | ./data/office31/dslr/images/speaker/frame_0026.jpg 27
256 | ./data/office31/dslr/images/mobile_phone/frame_0001.jpg 14
257 | ./data/office31/dslr/images/mobile_phone/frame_0002.jpg 14
258 | ./data/office31/dslr/images/mobile_phone/frame_0003.jpg 14
259 | ./data/office31/dslr/images/mobile_phone/frame_0004.jpg 14
260 | ./data/office31/dslr/images/mobile_phone/frame_0005.jpg 14
261 | ./data/office31/dslr/images/mobile_phone/frame_0006.jpg 14
262 | ./data/office31/dslr/images/mobile_phone/frame_0007.jpg 14
263 | ./data/office31/dslr/images/mobile_phone/frame_0008.jpg 14
264 | ./data/office31/dslr/images/mobile_phone/frame_0009.jpg 14
265 | ./data/office31/dslr/images/mobile_phone/frame_0010.jpg 14
266 | ./data/office31/dslr/images/mobile_phone/frame_0011.jpg 14
267 | ./data/office31/dslr/images/mobile_phone/frame_0012.jpg 14
268 | ./data/office31/dslr/images/mobile_phone/frame_0013.jpg 14
269 | ./data/office31/dslr/images/mobile_phone/frame_0014.jpg 14
270 | ./data/office31/dslr/images/mobile_phone/frame_0015.jpg 14
271 | ./data/office31/dslr/images/mobile_phone/frame_0016.jpg 14
272 | ./data/office31/dslr/images/mobile_phone/frame_0017.jpg 14
273 | ./data/office31/dslr/images/mobile_phone/frame_0018.jpg 14
274 | ./data/office31/dslr/images/mobile_phone/frame_0019.jpg 14
275 | ./data/office31/dslr/images/mobile_phone/frame_0020.jpg 14
276 | ./data/office31/dslr/images/mobile_phone/frame_0021.jpg 14
277 | ./data/office31/dslr/images/mobile_phone/frame_0022.jpg 14
278 | ./data/office31/dslr/images/mobile_phone/frame_0023.jpg 14
279 | ./data/office31/dslr/images/mobile_phone/frame_0024.jpg 14
280 | ./data/office31/dslr/images/mobile_phone/frame_0025.jpg 14
281 | ./data/office31/dslr/images/mobile_phone/frame_0026.jpg 14
282 | ./data/office31/dslr/images/mobile_phone/frame_0027.jpg 14
283 | ./data/office31/dslr/images/mobile_phone/frame_0028.jpg 14
284 | ./data/office31/dslr/images/mobile_phone/frame_0029.jpg 14
285 | ./data/office31/dslr/images/mobile_phone/frame_0030.jpg 14
286 | ./data/office31/dslr/images/mobile_phone/frame_0031.jpg 14
287 | ./data/office31/dslr/images/paper_notebook/frame_0001.jpg 18
288 | ./data/office31/dslr/images/paper_notebook/frame_0002.jpg 18
289 | ./data/office31/dslr/images/paper_notebook/frame_0003.jpg 18
290 | ./data/office31/dslr/images/paper_notebook/frame_0004.jpg 18
291 | ./data/office31/dslr/images/paper_notebook/frame_0005.jpg 18
292 | ./data/office31/dslr/images/paper_notebook/frame_0006.jpg 18
293 | ./data/office31/dslr/images/paper_notebook/frame_0007.jpg 18
294 | ./data/office31/dslr/images/paper_notebook/frame_0008.jpg 18
295 | ./data/office31/dslr/images/paper_notebook/frame_0009.jpg 18
296 | ./data/office31/dslr/images/paper_notebook/frame_0010.jpg 18
297 | ./data/office31/dslr/images/ruler/frame_0001.jpg 25
298 | ./data/office31/dslr/images/ruler/frame_0002.jpg 25
299 | ./data/office31/dslr/images/ruler/frame_0003.jpg 25
300 | ./data/office31/dslr/images/ruler/frame_0004.jpg 25
301 | ./data/office31/dslr/images/ruler/frame_0005.jpg 25
302 | ./data/office31/dslr/images/ruler/frame_0006.jpg 25
303 | ./data/office31/dslr/images/ruler/frame_0007.jpg 25
304 | ./data/office31/dslr/images/letter_tray/frame_0001.jpg 13
305 | ./data/office31/dslr/images/letter_tray/frame_0002.jpg 13
306 | ./data/office31/dslr/images/letter_tray/frame_0003.jpg 13
307 | ./data/office31/dslr/images/letter_tray/frame_0004.jpg 13
308 | ./data/office31/dslr/images/letter_tray/frame_0005.jpg 13
309 | ./data/office31/dslr/images/letter_tray/frame_0006.jpg 13
310 | ./data/office31/dslr/images/letter_tray/frame_0007.jpg 13
311 | ./data/office31/dslr/images/letter_tray/frame_0008.jpg 13
312 | ./data/office31/dslr/images/letter_tray/frame_0009.jpg 13
313 | ./data/office31/dslr/images/letter_tray/frame_0010.jpg 13
314 | ./data/office31/dslr/images/letter_tray/frame_0011.jpg 13
315 | ./data/office31/dslr/images/letter_tray/frame_0012.jpg 13
316 | ./data/office31/dslr/images/letter_tray/frame_0013.jpg 13
317 | ./data/office31/dslr/images/letter_tray/frame_0014.jpg 13
318 | ./data/office31/dslr/images/letter_tray/frame_0015.jpg 13
319 | ./data/office31/dslr/images/letter_tray/frame_0016.jpg 13
320 | ./data/office31/dslr/images/file_cabinet/frame_0001.jpg 9
321 | ./data/office31/dslr/images/file_cabinet/frame_0002.jpg 9
322 | ./data/office31/dslr/images/file_cabinet/frame_0003.jpg 9
323 | ./data/office31/dslr/images/file_cabinet/frame_0004.jpg 9
324 | ./data/office31/dslr/images/file_cabinet/frame_0005.jpg 9
325 | ./data/office31/dslr/images/file_cabinet/frame_0006.jpg 9
326 | ./data/office31/dslr/images/file_cabinet/frame_0007.jpg 9
327 | ./data/office31/dslr/images/file_cabinet/frame_0008.jpg 9
328 | ./data/office31/dslr/images/file_cabinet/frame_0009.jpg 9
329 | ./data/office31/dslr/images/file_cabinet/frame_0010.jpg 9
330 | ./data/office31/dslr/images/file_cabinet/frame_0011.jpg 9
331 | ./data/office31/dslr/images/file_cabinet/frame_0012.jpg 9
332 | ./data/office31/dslr/images/file_cabinet/frame_0013.jpg 9
333 | ./data/office31/dslr/images/file_cabinet/frame_0014.jpg 9
334 | ./data/office31/dslr/images/file_cabinet/frame_0015.jpg 9
335 | ./data/office31/dslr/images/phone/frame_0001.jpg 20
336 | ./data/office31/dslr/images/phone/frame_0002.jpg 20
337 | ./data/office31/dslr/images/phone/frame_0003.jpg 20
338 | ./data/office31/dslr/images/phone/frame_0004.jpg 20
339 | ./data/office31/dslr/images/phone/frame_0005.jpg 20
340 | ./data/office31/dslr/images/phone/frame_0006.jpg 20
341 | ./data/office31/dslr/images/phone/frame_0007.jpg 20
342 | ./data/office31/dslr/images/phone/frame_0008.jpg 20
343 | ./data/office31/dslr/images/phone/frame_0009.jpg 20
344 | ./data/office31/dslr/images/phone/frame_0010.jpg 20
345 | ./data/office31/dslr/images/phone/frame_0011.jpg 20
346 | ./data/office31/dslr/images/phone/frame_0012.jpg 20
347 | ./data/office31/dslr/images/phone/frame_0013.jpg 20
348 | ./data/office31/dslr/images/bookcase/frame_0001.jpg 3
349 | ./data/office31/dslr/images/bookcase/frame_0002.jpg 3
350 | ./data/office31/dslr/images/bookcase/frame_0003.jpg 3
351 | ./data/office31/dslr/images/bookcase/frame_0004.jpg 3
352 | ./data/office31/dslr/images/bookcase/frame_0005.jpg 3
353 | ./data/office31/dslr/images/bookcase/frame_0006.jpg 3
354 | ./data/office31/dslr/images/bookcase/frame_0007.jpg 3
355 | ./data/office31/dslr/images/bookcase/frame_0008.jpg 3
356 | ./data/office31/dslr/images/bookcase/frame_0009.jpg 3
357 | ./data/office31/dslr/images/bookcase/frame_0010.jpg 3
358 | ./data/office31/dslr/images/bookcase/frame_0011.jpg 3
359 | ./data/office31/dslr/images/bookcase/frame_0012.jpg 3
360 | ./data/office31/dslr/images/projector/frame_0001.jpg 22
361 | ./data/office31/dslr/images/projector/frame_0002.jpg 22
362 | ./data/office31/dslr/images/projector/frame_0003.jpg 22
363 | ./data/office31/dslr/images/projector/frame_0004.jpg 22
364 | ./data/office31/dslr/images/projector/frame_0005.jpg 22
365 | ./data/office31/dslr/images/projector/frame_0006.jpg 22
366 | ./data/office31/dslr/images/projector/frame_0007.jpg 22
367 | ./data/office31/dslr/images/projector/frame_0008.jpg 22
368 | ./data/office31/dslr/images/projector/frame_0009.jpg 22
369 | ./data/office31/dslr/images/projector/frame_0010.jpg 22
370 | ./data/office31/dslr/images/projector/frame_0011.jpg 22
371 | ./data/office31/dslr/images/projector/frame_0012.jpg 22
372 | ./data/office31/dslr/images/projector/frame_0013.jpg 22
373 | ./data/office31/dslr/images/projector/frame_0014.jpg 22
374 | ./data/office31/dslr/images/projector/frame_0015.jpg 22
375 | ./data/office31/dslr/images/projector/frame_0016.jpg 22
376 | ./data/office31/dslr/images/projector/frame_0017.jpg 22
377 | ./data/office31/dslr/images/projector/frame_0018.jpg 22
378 | ./data/office31/dslr/images/projector/frame_0019.jpg 22
379 | ./data/office31/dslr/images/projector/frame_0020.jpg 22
380 | ./data/office31/dslr/images/projector/frame_0021.jpg 22
381 | ./data/office31/dslr/images/projector/frame_0022.jpg 22
382 | ./data/office31/dslr/images/projector/frame_0023.jpg 22
383 | ./data/office31/dslr/images/stapler/frame_0001.jpg 28
384 | ./data/office31/dslr/images/stapler/frame_0002.jpg 28
385 | ./data/office31/dslr/images/stapler/frame_0003.jpg 28
386 | ./data/office31/dslr/images/stapler/frame_0004.jpg 28
387 | ./data/office31/dslr/images/stapler/frame_0005.jpg 28
388 | ./data/office31/dslr/images/stapler/frame_0006.jpg 28
389 | ./data/office31/dslr/images/stapler/frame_0007.jpg 28
390 | ./data/office31/dslr/images/stapler/frame_0008.jpg 28
391 | ./data/office31/dslr/images/stapler/frame_0009.jpg 28
392 | ./data/office31/dslr/images/stapler/frame_0010.jpg 28
393 | ./data/office31/dslr/images/stapler/frame_0011.jpg 28
394 | ./data/office31/dslr/images/stapler/frame_0012.jpg 28
395 | ./data/office31/dslr/images/stapler/frame_0013.jpg 28
396 | ./data/office31/dslr/images/stapler/frame_0014.jpg 28
397 | ./data/office31/dslr/images/stapler/frame_0015.jpg 28
398 | ./data/office31/dslr/images/stapler/frame_0016.jpg 28
399 | ./data/office31/dslr/images/stapler/frame_0017.jpg 28
400 | ./data/office31/dslr/images/stapler/frame_0018.jpg 28
401 | ./data/office31/dslr/images/stapler/frame_0019.jpg 28
402 | ./data/office31/dslr/images/stapler/frame_0020.jpg 28
403 | ./data/office31/dslr/images/stapler/frame_0021.jpg 28
404 | ./data/office31/dslr/images/trash_can/frame_0001.jpg 30
405 | ./data/office31/dslr/images/trash_can/frame_0002.jpg 30
406 | ./data/office31/dslr/images/trash_can/frame_0003.jpg 30
407 | ./data/office31/dslr/images/trash_can/frame_0004.jpg 30
408 | ./data/office31/dslr/images/trash_can/frame_0005.jpg 30
409 | ./data/office31/dslr/images/trash_can/frame_0006.jpg 30
410 | ./data/office31/dslr/images/trash_can/frame_0007.jpg 30
411 | ./data/office31/dslr/images/trash_can/frame_0008.jpg 30
412 | ./data/office31/dslr/images/trash_can/frame_0009.jpg 30
413 | ./data/office31/dslr/images/trash_can/frame_0010.jpg 30
414 | ./data/office31/dslr/images/trash_can/frame_0011.jpg 30
415 | ./data/office31/dslr/images/trash_can/frame_0012.jpg 30
416 | ./data/office31/dslr/images/trash_can/frame_0013.jpg 30
417 | ./data/office31/dslr/images/trash_can/frame_0014.jpg 30
418 | ./data/office31/dslr/images/trash_can/frame_0015.jpg 30
419 | ./data/office31/dslr/images/bike_helmet/frame_0001.jpg 2
420 | ./data/office31/dslr/images/bike_helmet/frame_0002.jpg 2
421 | ./data/office31/dslr/images/bike_helmet/frame_0003.jpg 2
422 | ./data/office31/dslr/images/bike_helmet/frame_0004.jpg 2
423 | ./data/office31/dslr/images/bike_helmet/frame_0005.jpg 2
424 | ./data/office31/dslr/images/bike_helmet/frame_0006.jpg 2
425 | ./data/office31/dslr/images/bike_helmet/frame_0007.jpg 2
426 | ./data/office31/dslr/images/bike_helmet/frame_0008.jpg 2
427 | ./data/office31/dslr/images/bike_helmet/frame_0009.jpg 2
428 | ./data/office31/dslr/images/bike_helmet/frame_0010.jpg 2
429 | ./data/office31/dslr/images/bike_helmet/frame_0011.jpg 2
430 | ./data/office31/dslr/images/bike_helmet/frame_0012.jpg 2
431 | ./data/office31/dslr/images/bike_helmet/frame_0013.jpg 2
432 | ./data/office31/dslr/images/bike_helmet/frame_0014.jpg 2
433 | ./data/office31/dslr/images/bike_helmet/frame_0015.jpg 2
434 | ./data/office31/dslr/images/bike_helmet/frame_0016.jpg 2
435 | ./data/office31/dslr/images/bike_helmet/frame_0017.jpg 2
436 | ./data/office31/dslr/images/bike_helmet/frame_0018.jpg 2
437 | ./data/office31/dslr/images/bike_helmet/frame_0019.jpg 2
438 | ./data/office31/dslr/images/bike_helmet/frame_0020.jpg 2
439 | ./data/office31/dslr/images/bike_helmet/frame_0021.jpg 2
440 | ./data/office31/dslr/images/bike_helmet/frame_0022.jpg 2
441 | ./data/office31/dslr/images/bike_helmet/frame_0023.jpg 2
442 | ./data/office31/dslr/images/bike_helmet/frame_0024.jpg 2
443 | ./data/office31/dslr/images/headphones/frame_0001.jpg 10
444 | ./data/office31/dslr/images/headphones/frame_0002.jpg 10
445 | ./data/office31/dslr/images/headphones/frame_0003.jpg 10
446 | ./data/office31/dslr/images/headphones/frame_0004.jpg 10
447 | ./data/office31/dslr/images/headphones/frame_0005.jpg 10
448 | ./data/office31/dslr/images/headphones/frame_0006.jpg 10
449 | ./data/office31/dslr/images/headphones/frame_0007.jpg 10
450 | ./data/office31/dslr/images/headphones/frame_0008.jpg 10
451 | ./data/office31/dslr/images/headphones/frame_0009.jpg 10
452 | ./data/office31/dslr/images/headphones/frame_0010.jpg 10
453 | ./data/office31/dslr/images/headphones/frame_0011.jpg 10
454 | ./data/office31/dslr/images/headphones/frame_0012.jpg 10
455 | ./data/office31/dslr/images/headphones/frame_0013.jpg 10
456 | ./data/office31/dslr/images/desk_lamp/frame_0001.jpg 7
457 | ./data/office31/dslr/images/desk_lamp/frame_0002.jpg 7
458 | ./data/office31/dslr/images/desk_lamp/frame_0003.jpg 7
459 | ./data/office31/dslr/images/desk_lamp/frame_0004.jpg 7
460 | ./data/office31/dslr/images/desk_lamp/frame_0005.jpg 7
461 | ./data/office31/dslr/images/desk_lamp/frame_0006.jpg 7
462 | ./data/office31/dslr/images/desk_lamp/frame_0007.jpg 7
463 | ./data/office31/dslr/images/desk_lamp/frame_0008.jpg 7
464 | ./data/office31/dslr/images/desk_lamp/frame_0009.jpg 7
465 | ./data/office31/dslr/images/desk_lamp/frame_0010.jpg 7
466 | ./data/office31/dslr/images/desk_lamp/frame_0011.jpg 7
467 | ./data/office31/dslr/images/desk_lamp/frame_0012.jpg 7
468 | ./data/office31/dslr/images/desk_lamp/frame_0013.jpg 7
469 | ./data/office31/dslr/images/desk_lamp/frame_0014.jpg 7
470 | ./data/office31/dslr/images/desk_chair/frame_0001.jpg 6
471 | ./data/office31/dslr/images/desk_chair/frame_0002.jpg 6
472 | ./data/office31/dslr/images/desk_chair/frame_0003.jpg 6
473 | ./data/office31/dslr/images/desk_chair/frame_0004.jpg 6
474 | ./data/office31/dslr/images/desk_chair/frame_0005.jpg 6
475 | ./data/office31/dslr/images/desk_chair/frame_0006.jpg 6
476 | ./data/office31/dslr/images/desk_chair/frame_0007.jpg 6
477 | ./data/office31/dslr/images/desk_chair/frame_0008.jpg 6
478 | ./data/office31/dslr/images/desk_chair/frame_0009.jpg 6
479 | ./data/office31/dslr/images/desk_chair/frame_0010.jpg 6
480 | ./data/office31/dslr/images/desk_chair/frame_0011.jpg 6
481 | ./data/office31/dslr/images/desk_chair/frame_0012.jpg 6
482 | ./data/office31/dslr/images/desk_chair/frame_0013.jpg 6
483 | ./data/office31/dslr/images/bottle/frame_0001.jpg 4
484 | ./data/office31/dslr/images/bottle/frame_0002.jpg 4
485 | ./data/office31/dslr/images/bottle/frame_0003.jpg 4
486 | ./data/office31/dslr/images/bottle/frame_0004.jpg 4
487 | ./data/office31/dslr/images/bottle/frame_0005.jpg 4
488 | ./data/office31/dslr/images/bottle/frame_0006.jpg 4
489 | ./data/office31/dslr/images/bottle/frame_0007.jpg 4
490 | ./data/office31/dslr/images/bottle/frame_0008.jpg 4
491 | ./data/office31/dslr/images/bottle/frame_0009.jpg 4
492 | ./data/office31/dslr/images/bottle/frame_0010.jpg 4
493 | ./data/office31/dslr/images/bottle/frame_0011.jpg 4
494 | ./data/office31/dslr/images/bottle/frame_0012.jpg 4
495 | ./data/office31/dslr/images/bottle/frame_0013.jpg 4
496 | ./data/office31/dslr/images/bottle/frame_0014.jpg 4
497 | ./data/office31/dslr/images/bottle/frame_0015.jpg 4
498 | ./data/office31/dslr/images/bottle/frame_0016.jpg 4
499 |
--------------------------------------------------------------------------------
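Each line of the list file above pairs a relative image path with an integer class index (for example, `calculator` is class 5 in Office-31). `dataset/data_list.py` presumably builds its datasets from such lists; the snippet below is only a minimal, hypothetical sketch of that idea, not the repo's actual implementation:
```python
# Minimal sketch of a dataset backed by an image-list file such as data/dslr.txt.
# Hypothetical stand-in for dataset/data_list.py; class name and behavior are assumptions.
from PIL import Image
from torch.utils.data import Dataset


class ImageListDataset(Dataset):
    def __init__(self, list_path, transform=None):
        self.samples = []
        with open(list_path) as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                path, label = line.rsplit(" ", 1)
                self.samples.append((path, int(label)))
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        img = Image.open(path).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        return img, label


# Usage: ds = ImageListDataset("./data/dslr.txt"); len(ds) is 498 for the list above.
```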
/data/webcam.txt:
--------------------------------------------------------------------------------
1 | ./data/office31//webcam/images/calculator/frame_0001.jpg 5
2 | ./data/office31//webcam/images/calculator/frame_0002.jpg 5
3 | ./data/office31//webcam/images/calculator/frame_0003.jpg 5
4 | ./data/office31//webcam/images/calculator/frame_0004.jpg 5
5 | ./data/office31//webcam/images/calculator/frame_0005.jpg 5
6 | ./data/office31//webcam/images/calculator/frame_0006.jpg 5
7 | ./data/office31//webcam/images/calculator/frame_0007.jpg 5
8 | ./data/office31//webcam/images/calculator/frame_0008.jpg 5
9 | ./data/office31//webcam/images/calculator/frame_0009.jpg 5
10 | ./data/office31//webcam/images/calculator/frame_0010.jpg 5
11 | ./data/office31//webcam/images/calculator/frame_0011.jpg 5
12 | ./data/office31//webcam/images/calculator/frame_0012.jpg 5
13 | ./data/office31//webcam/images/calculator/frame_0013.jpg 5
14 | ./data/office31//webcam/images/calculator/frame_0014.jpg 5
15 | ./data/office31//webcam/images/calculator/frame_0015.jpg 5
16 | ./data/office31//webcam/images/calculator/frame_0016.jpg 5
17 | ./data/office31//webcam/images/calculator/frame_0017.jpg 5
18 | ./data/office31//webcam/images/calculator/frame_0018.jpg 5
19 | ./data/office31//webcam/images/calculator/frame_0019.jpg 5
20 | ./data/office31//webcam/images/calculator/frame_0020.jpg 5
21 | ./data/office31//webcam/images/calculator/frame_0021.jpg 5
22 | ./data/office31//webcam/images/calculator/frame_0022.jpg 5
23 | ./data/office31//webcam/images/calculator/frame_0023.jpg 5
24 | ./data/office31//webcam/images/calculator/frame_0024.jpg 5
25 | ./data/office31//webcam/images/calculator/frame_0025.jpg 5
26 | ./data/office31//webcam/images/calculator/frame_0026.jpg 5
27 | ./data/office31//webcam/images/calculator/frame_0027.jpg 5
28 | ./data/office31//webcam/images/calculator/frame_0028.jpg 5
29 | ./data/office31//webcam/images/calculator/frame_0029.jpg 5
30 | ./data/office31//webcam/images/calculator/frame_0030.jpg 5
31 | ./data/office31//webcam/images/calculator/frame_0031.jpg 5
32 | ./data/office31//webcam/images/ring_binder/frame_0001.jpg 24
33 | ./data/office31//webcam/images/ring_binder/frame_0002.jpg 24
34 | ./data/office31//webcam/images/ring_binder/frame_0003.jpg 24
35 | ./data/office31//webcam/images/ring_binder/frame_0004.jpg 24
36 | ./data/office31//webcam/images/ring_binder/frame_0005.jpg 24
37 | ./data/office31//webcam/images/ring_binder/frame_0006.jpg 24
38 | ./data/office31//webcam/images/ring_binder/frame_0007.jpg 24
39 | ./data/office31//webcam/images/ring_binder/frame_0008.jpg 24
40 | ./data/office31//webcam/images/ring_binder/frame_0009.jpg 24
41 | ./data/office31//webcam/images/ring_binder/frame_0010.jpg 24
42 | ./data/office31//webcam/images/ring_binder/frame_0011.jpg 24
43 | ./data/office31//webcam/images/ring_binder/frame_0012.jpg 24
44 | ./data/office31//webcam/images/ring_binder/frame_0013.jpg 24
45 | ./data/office31//webcam/images/ring_binder/frame_0014.jpg 24
46 | ./data/office31//webcam/images/ring_binder/frame_0015.jpg 24
47 | ./data/office31//webcam/images/ring_binder/frame_0016.jpg 24
48 | ./data/office31//webcam/images/ring_binder/frame_0017.jpg 24
49 | ./data/office31//webcam/images/ring_binder/frame_0018.jpg 24
50 | ./data/office31//webcam/images/ring_binder/frame_0019.jpg 24
51 | ./data/office31//webcam/images/ring_binder/frame_0020.jpg 24
52 | ./data/office31//webcam/images/ring_binder/frame_0021.jpg 24
53 | ./data/office31//webcam/images/ring_binder/frame_0022.jpg 24
54 | ./data/office31//webcam/images/ring_binder/frame_0023.jpg 24
55 | ./data/office31//webcam/images/ring_binder/frame_0024.jpg 24
56 | ./data/office31//webcam/images/ring_binder/frame_0025.jpg 24
57 | ./data/office31//webcam/images/ring_binder/frame_0026.jpg 24
58 | ./data/office31//webcam/images/ring_binder/frame_0027.jpg 24
59 | ./data/office31//webcam/images/ring_binder/frame_0028.jpg 24
60 | ./data/office31//webcam/images/ring_binder/frame_0029.jpg 24
61 | ./data/office31//webcam/images/ring_binder/frame_0030.jpg 24
62 | ./data/office31//webcam/images/ring_binder/frame_0031.jpg 24
63 | ./data/office31//webcam/images/ring_binder/frame_0032.jpg 24
64 | ./data/office31//webcam/images/ring_binder/frame_0033.jpg 24
65 | ./data/office31//webcam/images/ring_binder/frame_0034.jpg 24
66 | ./data/office31//webcam/images/ring_binder/frame_0035.jpg 24
67 | ./data/office31//webcam/images/ring_binder/frame_0036.jpg 24
68 | ./data/office31//webcam/images/ring_binder/frame_0037.jpg 24
69 | ./data/office31//webcam/images/ring_binder/frame_0038.jpg 24
70 | ./data/office31//webcam/images/ring_binder/frame_0039.jpg 24
71 | ./data/office31//webcam/images/ring_binder/frame_0040.jpg 24
72 | ./data/office31//webcam/images/printer/frame_0001.jpg 21
73 | ./data/office31//webcam/images/printer/frame_0002.jpg 21
74 | ./data/office31//webcam/images/printer/frame_0003.jpg 21
75 | ./data/office31//webcam/images/printer/frame_0004.jpg 21
76 | ./data/office31//webcam/images/printer/frame_0005.jpg 21
77 | ./data/office31//webcam/images/printer/frame_0006.jpg 21
78 | ./data/office31//webcam/images/printer/frame_0007.jpg 21
79 | ./data/office31//webcam/images/printer/frame_0008.jpg 21
80 | ./data/office31//webcam/images/printer/frame_0009.jpg 21
81 | ./data/office31//webcam/images/printer/frame_0010.jpg 21
82 | ./data/office31//webcam/images/printer/frame_0011.jpg 21
83 | ./data/office31//webcam/images/printer/frame_0012.jpg 21
84 | ./data/office31//webcam/images/printer/frame_0013.jpg 21
85 | ./data/office31//webcam/images/printer/frame_0014.jpg 21
86 | ./data/office31//webcam/images/printer/frame_0015.jpg 21
87 | ./data/office31//webcam/images/printer/frame_0016.jpg 21
88 | ./data/office31//webcam/images/printer/frame_0017.jpg 21
89 | ./data/office31//webcam/images/printer/frame_0018.jpg 21
90 | ./data/office31//webcam/images/printer/frame_0019.jpg 21
91 | ./data/office31//webcam/images/printer/frame_0020.jpg 21
92 | ./data/office31//webcam/images/keyboard/frame_0001.jpg 11
93 | ./data/office31//webcam/images/keyboard/frame_0002.jpg 11
94 | ./data/office31//webcam/images/keyboard/frame_0003.jpg 11
95 | ./data/office31//webcam/images/keyboard/frame_0004.jpg 11
96 | ./data/office31//webcam/images/keyboard/frame_0005.jpg 11
97 | ./data/office31//webcam/images/keyboard/frame_0006.jpg 11
98 | ./data/office31//webcam/images/keyboard/frame_0007.jpg 11
99 | ./data/office31//webcam/images/keyboard/frame_0008.jpg 11
100 | ./data/office31//webcam/images/keyboard/frame_0009.jpg 11
101 | ./data/office31//webcam/images/keyboard/frame_0010.jpg 11
102 | ./data/office31//webcam/images/keyboard/frame_0011.jpg 11
103 | ./data/office31//webcam/images/keyboard/frame_0012.jpg 11
104 | ./data/office31//webcam/images/keyboard/frame_0013.jpg 11
105 | ./data/office31//webcam/images/keyboard/frame_0014.jpg 11
106 | ./data/office31//webcam/images/keyboard/frame_0015.jpg 11
107 | ./data/office31//webcam/images/keyboard/frame_0016.jpg 11
108 | ./data/office31//webcam/images/keyboard/frame_0017.jpg 11
109 | ./data/office31//webcam/images/keyboard/frame_0018.jpg 11
110 | ./data/office31//webcam/images/keyboard/frame_0019.jpg 11
111 | ./data/office31//webcam/images/keyboard/frame_0020.jpg 11
112 | ./data/office31//webcam/images/keyboard/frame_0021.jpg 11
113 | ./data/office31//webcam/images/keyboard/frame_0022.jpg 11
114 | ./data/office31//webcam/images/keyboard/frame_0023.jpg 11
115 | ./data/office31//webcam/images/keyboard/frame_0024.jpg 11
116 | ./data/office31//webcam/images/keyboard/frame_0025.jpg 11
117 | ./data/office31//webcam/images/keyboard/frame_0026.jpg 11
118 | ./data/office31//webcam/images/keyboard/frame_0027.jpg 11
119 | ./data/office31//webcam/images/scissors/frame_0001.jpg 26
120 | ./data/office31//webcam/images/scissors/frame_0002.jpg 26
121 | ./data/office31//webcam/images/scissors/frame_0003.jpg 26
122 | ./data/office31//webcam/images/scissors/frame_0004.jpg 26
123 | ./data/office31//webcam/images/scissors/frame_0005.jpg 26
124 | ./data/office31//webcam/images/scissors/frame_0006.jpg 26
125 | ./data/office31//webcam/images/scissors/frame_0007.jpg 26
126 | ./data/office31//webcam/images/scissors/frame_0008.jpg 26
127 | ./data/office31//webcam/images/scissors/frame_0009.jpg 26
128 | ./data/office31//webcam/images/scissors/frame_0010.jpg 26
129 | ./data/office31//webcam/images/scissors/frame_0011.jpg 26
130 | ./data/office31//webcam/images/scissors/frame_0012.jpg 26
131 | ./data/office31//webcam/images/scissors/frame_0013.jpg 26
132 | ./data/office31//webcam/images/scissors/frame_0014.jpg 26
133 | ./data/office31//webcam/images/scissors/frame_0015.jpg 26
134 | ./data/office31//webcam/images/scissors/frame_0016.jpg 26
135 | ./data/office31//webcam/images/scissors/frame_0017.jpg 26
136 | ./data/office31//webcam/images/scissors/frame_0018.jpg 26
137 | ./data/office31//webcam/images/scissors/frame_0019.jpg 26
138 | ./data/office31//webcam/images/scissors/frame_0020.jpg 26
139 | ./data/office31//webcam/images/scissors/frame_0021.jpg 26
140 | ./data/office31//webcam/images/scissors/frame_0022.jpg 26
141 | ./data/office31//webcam/images/scissors/frame_0023.jpg 26
142 | ./data/office31//webcam/images/scissors/frame_0024.jpg 26
143 | ./data/office31//webcam/images/scissors/frame_0025.jpg 26
144 | ./data/office31//webcam/images/laptop_computer/frame_0001.jpg 12
145 | ./data/office31//webcam/images/laptop_computer/frame_0002.jpg 12
146 | ./data/office31//webcam/images/laptop_computer/frame_0003.jpg 12
147 | ./data/office31//webcam/images/laptop_computer/frame_0004.jpg 12
148 | ./data/office31//webcam/images/laptop_computer/frame_0005.jpg 12
149 | ./data/office31//webcam/images/laptop_computer/frame_0006.jpg 12
150 | ./data/office31//webcam/images/laptop_computer/frame_0007.jpg 12
151 | ./data/office31//webcam/images/laptop_computer/frame_0008.jpg 12
152 | ./data/office31//webcam/images/laptop_computer/frame_0009.jpg 12
153 | ./data/office31//webcam/images/laptop_computer/frame_0010.jpg 12
154 | ./data/office31//webcam/images/laptop_computer/frame_0011.jpg 12
155 | ./data/office31//webcam/images/laptop_computer/frame_0012.jpg 12
156 | ./data/office31//webcam/images/laptop_computer/frame_0013.jpg 12
157 | ./data/office31//webcam/images/laptop_computer/frame_0014.jpg 12
158 | ./data/office31//webcam/images/laptop_computer/frame_0015.jpg 12
159 | ./data/office31//webcam/images/laptop_computer/frame_0016.jpg 12
160 | ./data/office31//webcam/images/laptop_computer/frame_0017.jpg 12
161 | ./data/office31//webcam/images/laptop_computer/frame_0018.jpg 12
162 | ./data/office31//webcam/images/laptop_computer/frame_0019.jpg 12
163 | ./data/office31//webcam/images/laptop_computer/frame_0020.jpg 12
164 | ./data/office31//webcam/images/laptop_computer/frame_0021.jpg 12
165 | ./data/office31//webcam/images/laptop_computer/frame_0022.jpg 12
166 | ./data/office31//webcam/images/laptop_computer/frame_0023.jpg 12
167 | ./data/office31//webcam/images/laptop_computer/frame_0024.jpg 12
168 | ./data/office31//webcam/images/laptop_computer/frame_0025.jpg 12
169 | ./data/office31//webcam/images/laptop_computer/frame_0026.jpg 12
170 | ./data/office31//webcam/images/laptop_computer/frame_0027.jpg 12
171 | ./data/office31//webcam/images/laptop_computer/frame_0028.jpg 12
172 | ./data/office31//webcam/images/laptop_computer/frame_0029.jpg 12
173 | ./data/office31//webcam/images/laptop_computer/frame_0030.jpg 12
174 | ./data/office31//webcam/images/mouse/frame_0001.jpg 16
175 | ./data/office31//webcam/images/mouse/frame_0002.jpg 16
176 | ./data/office31//webcam/images/mouse/frame_0003.jpg 16
177 | ./data/office31//webcam/images/mouse/frame_0004.jpg 16
178 | ./data/office31//webcam/images/mouse/frame_0005.jpg 16
179 | ./data/office31//webcam/images/mouse/frame_0006.jpg 16
180 | ./data/office31//webcam/images/mouse/frame_0007.jpg 16
181 | ./data/office31//webcam/images/mouse/frame_0008.jpg 16
182 | ./data/office31//webcam/images/mouse/frame_0009.jpg 16
183 | ./data/office31//webcam/images/mouse/frame_0010.jpg 16
184 | ./data/office31//webcam/images/mouse/frame_0011.jpg 16
185 | ./data/office31//webcam/images/mouse/frame_0012.jpg 16
186 | ./data/office31//webcam/images/mouse/frame_0013.jpg 16
187 | ./data/office31//webcam/images/mouse/frame_0014.jpg 16
188 | ./data/office31//webcam/images/mouse/frame_0015.jpg 16
189 | ./data/office31//webcam/images/mouse/frame_0016.jpg 16
190 | ./data/office31//webcam/images/mouse/frame_0017.jpg 16
191 | ./data/office31//webcam/images/mouse/frame_0018.jpg 16
192 | ./data/office31//webcam/images/mouse/frame_0019.jpg 16
193 | ./data/office31//webcam/images/mouse/frame_0020.jpg 16
194 | ./data/office31//webcam/images/mouse/frame_0021.jpg 16
195 | ./data/office31//webcam/images/mouse/frame_0022.jpg 16
196 | ./data/office31//webcam/images/mouse/frame_0023.jpg 16
197 | ./data/office31//webcam/images/mouse/frame_0024.jpg 16
198 | ./data/office31//webcam/images/mouse/frame_0025.jpg 16
199 | ./data/office31//webcam/images/mouse/frame_0026.jpg 16
200 | ./data/office31//webcam/images/mouse/frame_0027.jpg 16
201 | ./data/office31//webcam/images/mouse/frame_0028.jpg 16
202 | ./data/office31//webcam/images/mouse/frame_0029.jpg 16
203 | ./data/office31//webcam/images/mouse/frame_0030.jpg 16
204 | ./data/office31//webcam/images/monitor/frame_0001.jpg 15
205 | ./data/office31//webcam/images/monitor/frame_0002.jpg 15
206 | ./data/office31//webcam/images/monitor/frame_0003.jpg 15
207 | ./data/office31//webcam/images/monitor/frame_0004.jpg 15
208 | ./data/office31//webcam/images/monitor/frame_0005.jpg 15
209 | ./data/office31//webcam/images/monitor/frame_0006.jpg 15
210 | ./data/office31//webcam/images/monitor/frame_0007.jpg 15
211 | ./data/office31//webcam/images/monitor/frame_0008.jpg 15
212 | ./data/office31//webcam/images/monitor/frame_0009.jpg 15
213 | ./data/office31//webcam/images/monitor/frame_0010.jpg 15
214 | ./data/office31//webcam/images/monitor/frame_0011.jpg 15
215 | ./data/office31//webcam/images/monitor/frame_0012.jpg 15
216 | ./data/office31//webcam/images/monitor/frame_0013.jpg 15
217 | ./data/office31//webcam/images/monitor/frame_0014.jpg 15
218 | ./data/office31//webcam/images/monitor/frame_0015.jpg 15
219 | ./data/office31//webcam/images/monitor/frame_0016.jpg 15
220 | ./data/office31//webcam/images/monitor/frame_0017.jpg 15
221 | ./data/office31//webcam/images/monitor/frame_0018.jpg 15
222 | ./data/office31//webcam/images/monitor/frame_0019.jpg 15
223 | ./data/office31//webcam/images/monitor/frame_0020.jpg 15
224 | ./data/office31//webcam/images/monitor/frame_0021.jpg 15
225 | ./data/office31//webcam/images/monitor/frame_0022.jpg 15
226 | ./data/office31//webcam/images/monitor/frame_0023.jpg 15
227 | ./data/office31//webcam/images/monitor/frame_0024.jpg 15
228 | ./data/office31//webcam/images/monitor/frame_0025.jpg 15
229 | ./data/office31//webcam/images/monitor/frame_0026.jpg 15
230 | ./data/office31//webcam/images/monitor/frame_0027.jpg 15
231 | ./data/office31//webcam/images/monitor/frame_0028.jpg 15
232 | ./data/office31//webcam/images/monitor/frame_0029.jpg 15
233 | ./data/office31//webcam/images/monitor/frame_0030.jpg 15
234 | ./data/office31//webcam/images/monitor/frame_0031.jpg 15
235 | ./data/office31//webcam/images/monitor/frame_0032.jpg 15
236 | ./data/office31//webcam/images/monitor/frame_0033.jpg 15
237 | ./data/office31//webcam/images/monitor/frame_0034.jpg 15
238 | ./data/office31//webcam/images/monitor/frame_0035.jpg 15
239 | ./data/office31//webcam/images/monitor/frame_0036.jpg 15
240 | ./data/office31//webcam/images/monitor/frame_0037.jpg 15
241 | ./data/office31//webcam/images/monitor/frame_0038.jpg 15
242 | ./data/office31//webcam/images/monitor/frame_0039.jpg 15
243 | ./data/office31//webcam/images/monitor/frame_0040.jpg 15
244 | ./data/office31//webcam/images/monitor/frame_0041.jpg 15
245 | ./data/office31//webcam/images/monitor/frame_0042.jpg 15
246 | ./data/office31//webcam/images/monitor/frame_0043.jpg 15
247 | ./data/office31//webcam/images/mug/frame_0001.jpg 17
248 | ./data/office31//webcam/images/mug/frame_0002.jpg 17
249 | ./data/office31//webcam/images/mug/frame_0003.jpg 17
250 | ./data/office31//webcam/images/mug/frame_0004.jpg 17
251 | ./data/office31//webcam/images/mug/frame_0005.jpg 17
252 | ./data/office31//webcam/images/mug/frame_0006.jpg 17
253 | ./data/office31//webcam/images/mug/frame_0007.jpg 17
254 | ./data/office31//webcam/images/mug/frame_0008.jpg 17
255 | ./data/office31//webcam/images/mug/frame_0009.jpg 17
256 | ./data/office31//webcam/images/mug/frame_0010.jpg 17
257 | ./data/office31//webcam/images/mug/frame_0011.jpg 17
258 | ./data/office31//webcam/images/mug/frame_0012.jpg 17
259 | ./data/office31//webcam/images/mug/frame_0013.jpg 17
260 | ./data/office31//webcam/images/mug/frame_0014.jpg 17
261 | ./data/office31//webcam/images/mug/frame_0015.jpg 17
262 | ./data/office31//webcam/images/mug/frame_0016.jpg 17
263 | ./data/office31//webcam/images/mug/frame_0017.jpg 17
264 | ./data/office31//webcam/images/mug/frame_0018.jpg 17
265 | ./data/office31//webcam/images/mug/frame_0019.jpg 17
266 | ./data/office31//webcam/images/mug/frame_0020.jpg 17
267 | ./data/office31//webcam/images/mug/frame_0021.jpg 17
268 | ./data/office31//webcam/images/mug/frame_0022.jpg 17
269 | ./data/office31//webcam/images/mug/frame_0023.jpg 17
270 | ./data/office31//webcam/images/mug/frame_0024.jpg 17
271 | ./data/office31//webcam/images/mug/frame_0025.jpg 17
272 | ./data/office31//webcam/images/mug/frame_0026.jpg 17
273 | ./data/office31//webcam/images/mug/frame_0027.jpg 17
274 | ./data/office31//webcam/images/tape_dispenser/frame_0001.jpg 29
275 | ./data/office31//webcam/images/tape_dispenser/frame_0002.jpg 29
276 | ./data/office31//webcam/images/tape_dispenser/frame_0003.jpg 29
277 | ./data/office31//webcam/images/tape_dispenser/frame_0004.jpg 29
278 | ./data/office31//webcam/images/tape_dispenser/frame_0005.jpg 29
279 | ./data/office31//webcam/images/tape_dispenser/frame_0006.jpg 29
280 | ./data/office31//webcam/images/tape_dispenser/frame_0007.jpg 29
281 | ./data/office31//webcam/images/tape_dispenser/frame_0008.jpg 29
282 | ./data/office31//webcam/images/tape_dispenser/frame_0009.jpg 29
283 | ./data/office31//webcam/images/tape_dispenser/frame_0010.jpg 29
284 | ./data/office31//webcam/images/tape_dispenser/frame_0011.jpg 29
285 | ./data/office31//webcam/images/tape_dispenser/frame_0012.jpg 29
286 | ./data/office31//webcam/images/tape_dispenser/frame_0013.jpg 29
287 | ./data/office31//webcam/images/tape_dispenser/frame_0014.jpg 29
288 | ./data/office31//webcam/images/tape_dispenser/frame_0015.jpg 29
289 | ./data/office31//webcam/images/tape_dispenser/frame_0016.jpg 29
290 | ./data/office31//webcam/images/tape_dispenser/frame_0017.jpg 29
291 | ./data/office31//webcam/images/tape_dispenser/frame_0018.jpg 29
292 | ./data/office31//webcam/images/tape_dispenser/frame_0019.jpg 29
293 | ./data/office31//webcam/images/tape_dispenser/frame_0020.jpg 29
294 | ./data/office31//webcam/images/tape_dispenser/frame_0021.jpg 29
295 | ./data/office31//webcam/images/tape_dispenser/frame_0022.jpg 29
296 | ./data/office31//webcam/images/tape_dispenser/frame_0023.jpg 29
297 | ./data/office31//webcam/images/pen/frame_0001.jpg 19
298 | ./data/office31//webcam/images/pen/frame_0002.jpg 19
299 | ./data/office31//webcam/images/pen/frame_0003.jpg 19
300 | ./data/office31//webcam/images/pen/frame_0004.jpg 19
301 | ./data/office31//webcam/images/pen/frame_0005.jpg 19
302 | ./data/office31//webcam/images/pen/frame_0006.jpg 19
303 | ./data/office31//webcam/images/pen/frame_0007.jpg 19
304 | ./data/office31//webcam/images/pen/frame_0008.jpg 19
305 | ./data/office31//webcam/images/pen/frame_0009.jpg 19
306 | ./data/office31//webcam/images/pen/frame_0010.jpg 19
307 | ./data/office31//webcam/images/pen/frame_0011.jpg 19
308 | ./data/office31//webcam/images/pen/frame_0012.jpg 19
309 | ./data/office31//webcam/images/pen/frame_0013.jpg 19
310 | ./data/office31//webcam/images/pen/frame_0014.jpg 19
311 | ./data/office31//webcam/images/pen/frame_0015.jpg 19
312 | ./data/office31//webcam/images/pen/frame_0016.jpg 19
313 | ./data/office31//webcam/images/pen/frame_0017.jpg 19
314 | ./data/office31//webcam/images/pen/frame_0018.jpg 19
315 | ./data/office31//webcam/images/pen/frame_0019.jpg 19
316 | ./data/office31//webcam/images/pen/frame_0020.jpg 19
317 | ./data/office31//webcam/images/pen/frame_0021.jpg 19
318 | ./data/office31//webcam/images/pen/frame_0022.jpg 19
319 | ./data/office31//webcam/images/pen/frame_0023.jpg 19
320 | ./data/office31//webcam/images/pen/frame_0024.jpg 19
321 | ./data/office31//webcam/images/pen/frame_0025.jpg 19
322 | ./data/office31//webcam/images/pen/frame_0026.jpg 19
323 | ./data/office31//webcam/images/pen/frame_0027.jpg 19
324 | ./data/office31//webcam/images/pen/frame_0028.jpg 19
325 | ./data/office31//webcam/images/pen/frame_0029.jpg 19
326 | ./data/office31//webcam/images/pen/frame_0030.jpg 19
327 | ./data/office31//webcam/images/pen/frame_0031.jpg 19
328 | ./data/office31//webcam/images/pen/frame_0032.jpg 19
329 | ./data/office31//webcam/images/bike/frame_0001.jpg 1
330 | ./data/office31//webcam/images/bike/frame_0002.jpg 1
331 | ./data/office31//webcam/images/bike/frame_0003.jpg 1
332 | ./data/office31//webcam/images/bike/frame_0004.jpg 1
333 | ./data/office31//webcam/images/bike/frame_0005.jpg 1
334 | ./data/office31//webcam/images/bike/frame_0006.jpg 1
335 | ./data/office31//webcam/images/bike/frame_0007.jpg 1
336 | ./data/office31//webcam/images/bike/frame_0008.jpg 1
337 | ./data/office31//webcam/images/bike/frame_0009.jpg 1
338 | ./data/office31//webcam/images/bike/frame_0010.jpg 1
339 | ./data/office31//webcam/images/bike/frame_0011.jpg 1
340 | ./data/office31//webcam/images/bike/frame_0012.jpg 1
341 | ./data/office31//webcam/images/bike/frame_0013.jpg 1
342 | ./data/office31//webcam/images/bike/frame_0014.jpg 1
343 | ./data/office31//webcam/images/bike/frame_0015.jpg 1
344 | ./data/office31//webcam/images/bike/frame_0016.jpg 1
345 | ./data/office31//webcam/images/bike/frame_0017.jpg 1
346 | ./data/office31//webcam/images/bike/frame_0018.jpg 1
347 | ./data/office31//webcam/images/bike/frame_0019.jpg 1
348 | ./data/office31//webcam/images/bike/frame_0020.jpg 1
349 | ./data/office31//webcam/images/bike/frame_0021.jpg 1
350 | ./data/office31//webcam/images/punchers/frame_0001.jpg 23
351 | ./data/office31//webcam/images/punchers/frame_0002.jpg 23
352 | ./data/office31//webcam/images/punchers/frame_0003.jpg 23
353 | ./data/office31//webcam/images/punchers/frame_0004.jpg 23
354 | ./data/office31//webcam/images/punchers/frame_0005.jpg 23
355 | ./data/office31//webcam/images/punchers/frame_0006.jpg 23
356 | ./data/office31//webcam/images/punchers/frame_0007.jpg 23
357 | ./data/office31//webcam/images/punchers/frame_0008.jpg 23
358 | ./data/office31//webcam/images/punchers/frame_0009.jpg 23
359 | ./data/office31//webcam/images/punchers/frame_0010.jpg 23
360 | ./data/office31//webcam/images/punchers/frame_0011.jpg 23
361 | ./data/office31//webcam/images/punchers/frame_0012.jpg 23
362 | ./data/office31//webcam/images/punchers/frame_0013.jpg 23
363 | ./data/office31//webcam/images/punchers/frame_0014.jpg 23
364 | ./data/office31//webcam/images/punchers/frame_0015.jpg 23
365 | ./data/office31//webcam/images/punchers/frame_0016.jpg 23
366 | ./data/office31//webcam/images/punchers/frame_0017.jpg 23
367 | ./data/office31//webcam/images/punchers/frame_0018.jpg 23
368 | ./data/office31//webcam/images/punchers/frame_0019.jpg 23
369 | ./data/office31//webcam/images/punchers/frame_0020.jpg 23
370 | ./data/office31//webcam/images/punchers/frame_0021.jpg 23
371 | ./data/office31//webcam/images/punchers/frame_0022.jpg 23
372 | ./data/office31//webcam/images/punchers/frame_0023.jpg 23
373 | ./data/office31//webcam/images/punchers/frame_0024.jpg 23
374 | ./data/office31//webcam/images/punchers/frame_0025.jpg 23
375 | ./data/office31//webcam/images/punchers/frame_0026.jpg 23
376 | ./data/office31//webcam/images/punchers/frame_0027.jpg 23
377 | ./data/office31//webcam/images/back_pack/frame_0001.jpg 0
378 | ./data/office31//webcam/images/back_pack/frame_0002.jpg 0
379 | ./data/office31//webcam/images/back_pack/frame_0003.jpg 0
380 | ./data/office31//webcam/images/back_pack/frame_0004.jpg 0
381 | ./data/office31//webcam/images/back_pack/frame_0005.jpg 0
382 | ./data/office31//webcam/images/back_pack/frame_0006.jpg 0
383 | ./data/office31//webcam/images/back_pack/frame_0007.jpg 0
384 | ./data/office31//webcam/images/back_pack/frame_0008.jpg 0
385 | ./data/office31//webcam/images/back_pack/frame_0009.jpg 0
386 | ./data/office31//webcam/images/back_pack/frame_0010.jpg 0
387 | ./data/office31//webcam/images/back_pack/frame_0011.jpg 0
388 | ./data/office31//webcam/images/back_pack/frame_0012.jpg 0
389 | ./data/office31//webcam/images/back_pack/frame_0013.jpg 0
390 | ./data/office31//webcam/images/back_pack/frame_0014.jpg 0
391 | ./data/office31//webcam/images/back_pack/frame_0015.jpg 0
392 | ./data/office31//webcam/images/back_pack/frame_0016.jpg 0
393 | ./data/office31//webcam/images/back_pack/frame_0017.jpg 0
394 | ./data/office31//webcam/images/back_pack/frame_0018.jpg 0
395 | ./data/office31//webcam/images/back_pack/frame_0019.jpg 0
396 | ./data/office31//webcam/images/back_pack/frame_0020.jpg 0
397 | ./data/office31//webcam/images/back_pack/frame_0021.jpg 0
398 | ./data/office31//webcam/images/back_pack/frame_0022.jpg 0
399 | ./data/office31//webcam/images/back_pack/frame_0023.jpg 0
400 | ./data/office31//webcam/images/back_pack/frame_0024.jpg 0
401 | ./data/office31//webcam/images/back_pack/frame_0025.jpg 0
402 | ./data/office31//webcam/images/back_pack/frame_0026.jpg 0
403 | ./data/office31//webcam/images/back_pack/frame_0027.jpg 0
404 | ./data/office31//webcam/images/back_pack/frame_0028.jpg 0
405 | ./data/office31//webcam/images/back_pack/frame_0029.jpg 0
406 | ./data/office31//webcam/images/desktop_computer/frame_0001.jpg 8
407 | ./data/office31//webcam/images/desktop_computer/frame_0002.jpg 8
408 | ./data/office31//webcam/images/desktop_computer/frame_0003.jpg 8
409 | ./data/office31//webcam/images/desktop_computer/frame_0004.jpg 8
410 | ./data/office31//webcam/images/desktop_computer/frame_0005.jpg 8
411 | ./data/office31//webcam/images/desktop_computer/frame_0006.jpg 8
412 | ./data/office31//webcam/images/desktop_computer/frame_0007.jpg 8
413 | ./data/office31//webcam/images/desktop_computer/frame_0008.jpg 8
414 | ./data/office31//webcam/images/desktop_computer/frame_0009.jpg 8
415 | ./data/office31//webcam/images/desktop_computer/frame_0010.jpg 8
416 | ./data/office31//webcam/images/desktop_computer/frame_0011.jpg 8
417 | ./data/office31//webcam/images/desktop_computer/frame_0012.jpg 8
418 | ./data/office31//webcam/images/desktop_computer/frame_0013.jpg 8
419 | ./data/office31//webcam/images/desktop_computer/frame_0014.jpg 8
420 | ./data/office31//webcam/images/desktop_computer/frame_0015.jpg 8
421 | ./data/office31//webcam/images/desktop_computer/frame_0016.jpg 8
422 | ./data/office31//webcam/images/desktop_computer/frame_0017.jpg 8
423 | ./data/office31//webcam/images/desktop_computer/frame_0018.jpg 8
424 | ./data/office31//webcam/images/desktop_computer/frame_0019.jpg 8
425 | ./data/office31//webcam/images/desktop_computer/frame_0020.jpg 8
426 | ./data/office31//webcam/images/desktop_computer/frame_0021.jpg 8
427 | ./data/office31//webcam/images/speaker/frame_0001.jpg 27
428 | ./data/office31//webcam/images/speaker/frame_0002.jpg 27
429 | ./data/office31//webcam/images/speaker/frame_0003.jpg 27
430 | ./data/office31//webcam/images/speaker/frame_0004.jpg 27
431 | ./data/office31//webcam/images/speaker/frame_0005.jpg 27
432 | ./data/office31//webcam/images/speaker/frame_0006.jpg 27
433 | ./data/office31//webcam/images/speaker/frame_0007.jpg 27
434 | ./data/office31//webcam/images/speaker/frame_0008.jpg 27
435 | ./data/office31//webcam/images/speaker/frame_0009.jpg 27
436 | ./data/office31//webcam/images/speaker/frame_0010.jpg 27
437 | ./data/office31//webcam/images/speaker/frame_0011.jpg 27
438 | ./data/office31//webcam/images/speaker/frame_0012.jpg 27
439 | ./data/office31//webcam/images/speaker/frame_0013.jpg 27
440 | ./data/office31//webcam/images/speaker/frame_0014.jpg 27
441 | ./data/office31//webcam/images/speaker/frame_0015.jpg 27
442 | ./data/office31//webcam/images/speaker/frame_0016.jpg 27
443 | ./data/office31//webcam/images/speaker/frame_0017.jpg 27
444 | ./data/office31//webcam/images/speaker/frame_0018.jpg 27
445 | ./data/office31//webcam/images/speaker/frame_0019.jpg 27
446 | ./data/office31//webcam/images/speaker/frame_0020.jpg 27
447 | ./data/office31//webcam/images/speaker/frame_0021.jpg 27
448 | ./data/office31//webcam/images/speaker/frame_0022.jpg 27
449 | ./data/office31//webcam/images/speaker/frame_0023.jpg 27
450 | ./data/office31//webcam/images/speaker/frame_0024.jpg 27
451 | ./data/office31//webcam/images/speaker/frame_0025.jpg 27
452 | ./data/office31//webcam/images/speaker/frame_0026.jpg 27
453 | ./data/office31//webcam/images/speaker/frame_0027.jpg 27
454 | ./data/office31//webcam/images/speaker/frame_0028.jpg 27
455 | ./data/office31//webcam/images/speaker/frame_0029.jpg 27
456 | ./data/office31//webcam/images/speaker/frame_0030.jpg 27
457 | ./data/office31//webcam/images/mobile_phone/frame_0001.jpg 14
458 | ./data/office31//webcam/images/mobile_phone/frame_0002.jpg 14
459 | ./data/office31//webcam/images/mobile_phone/frame_0003.jpg 14
460 | ./data/office31//webcam/images/mobile_phone/frame_0004.jpg 14
461 | ./data/office31//webcam/images/mobile_phone/frame_0005.jpg 14
462 | ./data/office31//webcam/images/mobile_phone/frame_0006.jpg 14
463 | ./data/office31//webcam/images/mobile_phone/frame_0007.jpg 14
464 | ./data/office31//webcam/images/mobile_phone/frame_0008.jpg 14
465 | ./data/office31//webcam/images/mobile_phone/frame_0009.jpg 14
466 | ./data/office31//webcam/images/mobile_phone/frame_0010.jpg 14
467 | ./data/office31//webcam/images/mobile_phone/frame_0011.jpg 14
468 | ./data/office31//webcam/images/mobile_phone/frame_0012.jpg 14
469 | ./data/office31//webcam/images/mobile_phone/frame_0013.jpg 14
470 | ./data/office31//webcam/images/mobile_phone/frame_0014.jpg 14
471 | ./data/office31//webcam/images/mobile_phone/frame_0015.jpg 14
472 | ./data/office31//webcam/images/mobile_phone/frame_0016.jpg 14
473 | ./data/office31//webcam/images/mobile_phone/frame_0017.jpg 14
474 | ./data/office31//webcam/images/mobile_phone/frame_0018.jpg 14
475 | ./data/office31//webcam/images/mobile_phone/frame_0019.jpg 14
476 | ./data/office31//webcam/images/mobile_phone/frame_0020.jpg 14
477 | ./data/office31//webcam/images/mobile_phone/frame_0021.jpg 14
478 | ./data/office31//webcam/images/mobile_phone/frame_0022.jpg 14
479 | ./data/office31//webcam/images/mobile_phone/frame_0023.jpg 14
480 | ./data/office31//webcam/images/mobile_phone/frame_0024.jpg 14
481 | ./data/office31//webcam/images/mobile_phone/frame_0025.jpg 14
482 | ./data/office31//webcam/images/mobile_phone/frame_0026.jpg 14
483 | ./data/office31//webcam/images/mobile_phone/frame_0027.jpg 14
484 | ./data/office31//webcam/images/mobile_phone/frame_0028.jpg 14
485 | ./data/office31//webcam/images/mobile_phone/frame_0029.jpg 14
486 | ./data/office31//webcam/images/mobile_phone/frame_0030.jpg 14
487 | ./data/office31//webcam/images/paper_notebook/frame_0001.jpg 18
488 | ./data/office31//webcam/images/paper_notebook/frame_0002.jpg 18
489 | ./data/office31//webcam/images/paper_notebook/frame_0003.jpg 18
490 | ./data/office31//webcam/images/paper_notebook/frame_0004.jpg 18
491 | ./data/office31//webcam/images/paper_notebook/frame_0005.jpg 18
492 | ./data/office31//webcam/images/paper_notebook/frame_0006.jpg 18
493 | ./data/office31//webcam/images/paper_notebook/frame_0007.jpg 18
494 | ./data/office31//webcam/images/paper_notebook/frame_0008.jpg 18
495 | ./data/office31//webcam/images/paper_notebook/frame_0009.jpg 18
496 | ./data/office31//webcam/images/paper_notebook/frame_0010.jpg 18
497 | ./data/office31//webcam/images/paper_notebook/frame_0011.jpg 18
498 | ./data/office31//webcam/images/paper_notebook/frame_0012.jpg 18
499 | ./data/office31//webcam/images/paper_notebook/frame_0013.jpg 18
500 | ./data/office31//webcam/images/paper_notebook/frame_0014.jpg 18
501 | ./data/office31//webcam/images/paper_notebook/frame_0015.jpg 18
502 | ./data/office31//webcam/images/paper_notebook/frame_0016.jpg 18
503 | ./data/office31//webcam/images/paper_notebook/frame_0017.jpg 18
504 | ./data/office31//webcam/images/paper_notebook/frame_0018.jpg 18
505 | ./data/office31//webcam/images/paper_notebook/frame_0019.jpg 18
506 | ./data/office31//webcam/images/paper_notebook/frame_0020.jpg 18
507 | ./data/office31//webcam/images/paper_notebook/frame_0021.jpg 18
508 | ./data/office31//webcam/images/paper_notebook/frame_0022.jpg 18
509 | ./data/office31//webcam/images/paper_notebook/frame_0023.jpg 18
510 | ./data/office31//webcam/images/paper_notebook/frame_0024.jpg 18
511 | ./data/office31//webcam/images/paper_notebook/frame_0025.jpg 18
512 | ./data/office31//webcam/images/paper_notebook/frame_0026.jpg 18
513 | ./data/office31//webcam/images/paper_notebook/frame_0027.jpg 18
514 | ./data/office31//webcam/images/paper_notebook/frame_0028.jpg 18
515 | ./data/office31//webcam/images/ruler/frame_0001.jpg 25
516 | ./data/office31//webcam/images/ruler/frame_0002.jpg 25
517 | ./data/office31//webcam/images/ruler/frame_0003.jpg 25
518 | ./data/office31//webcam/images/ruler/frame_0004.jpg 25
519 | ./data/office31//webcam/images/ruler/frame_0005.jpg 25
520 | ./data/office31//webcam/images/ruler/frame_0006.jpg 25
521 | ./data/office31//webcam/images/ruler/frame_0007.jpg 25
522 | ./data/office31//webcam/images/ruler/frame_0008.jpg 25
523 | ./data/office31//webcam/images/ruler/frame_0009.jpg 25
524 | ./data/office31//webcam/images/ruler/frame_0010.jpg 25
525 | ./data/office31//webcam/images/ruler/frame_0011.jpg 25
526 | ./data/office31//webcam/images/letter_tray/frame_0001.jpg 13
527 | ./data/office31//webcam/images/letter_tray/frame_0002.jpg 13
528 | ./data/office31//webcam/images/letter_tray/frame_0003.jpg 13
529 | ./data/office31//webcam/images/letter_tray/frame_0004.jpg 13
530 | ./data/office31//webcam/images/letter_tray/frame_0005.jpg 13
531 | ./data/office31//webcam/images/letter_tray/frame_0006.jpg 13
532 | ./data/office31//webcam/images/letter_tray/frame_0007.jpg 13
533 | ./data/office31//webcam/images/letter_tray/frame_0008.jpg 13
534 | ./data/office31//webcam/images/letter_tray/frame_0009.jpg 13
535 | ./data/office31//webcam/images/letter_tray/frame_0010.jpg 13
536 | ./data/office31//webcam/images/letter_tray/frame_0011.jpg 13
537 | ./data/office31//webcam/images/letter_tray/frame_0012.jpg 13
538 | ./data/office31//webcam/images/letter_tray/frame_0013.jpg 13
539 | ./data/office31//webcam/images/letter_tray/frame_0014.jpg 13
540 | ./data/office31//webcam/images/letter_tray/frame_0015.jpg 13
541 | ./data/office31//webcam/images/letter_tray/frame_0016.jpg 13
542 | ./data/office31//webcam/images/letter_tray/frame_0017.jpg 13
543 | ./data/office31//webcam/images/letter_tray/frame_0018.jpg 13
544 | ./data/office31//webcam/images/letter_tray/frame_0019.jpg 13
545 | ./data/office31//webcam/images/file_cabinet/frame_0001.jpg 9
546 | ./data/office31//webcam/images/file_cabinet/frame_0002.jpg 9
547 | ./data/office31//webcam/images/file_cabinet/frame_0003.jpg 9
548 | ./data/office31//webcam/images/file_cabinet/frame_0004.jpg 9
549 | ./data/office31//webcam/images/file_cabinet/frame_0005.jpg 9
550 | ./data/office31//webcam/images/file_cabinet/frame_0006.jpg 9
551 | ./data/office31//webcam/images/file_cabinet/frame_0007.jpg 9
552 | ./data/office31//webcam/images/file_cabinet/frame_0008.jpg 9
553 | ./data/office31//webcam/images/file_cabinet/frame_0009.jpg 9
554 | ./data/office31//webcam/images/file_cabinet/frame_0010.jpg 9
555 | ./data/office31//webcam/images/file_cabinet/frame_0011.jpg 9
556 | ./data/office31//webcam/images/file_cabinet/frame_0012.jpg 9
557 | ./data/office31//webcam/images/file_cabinet/frame_0013.jpg 9
558 | ./data/office31//webcam/images/file_cabinet/frame_0014.jpg 9
559 | ./data/office31//webcam/images/file_cabinet/frame_0015.jpg 9
560 | ./data/office31//webcam/images/file_cabinet/frame_0016.jpg 9
561 | ./data/office31//webcam/images/file_cabinet/frame_0017.jpg 9
562 | ./data/office31//webcam/images/file_cabinet/frame_0018.jpg 9
563 | ./data/office31//webcam/images/file_cabinet/frame_0019.jpg 9
564 | ./data/office31//webcam/images/phone/frame_0001.jpg 20
565 | ./data/office31//webcam/images/phone/frame_0002.jpg 20
566 | ./data/office31//webcam/images/phone/frame_0003.jpg 20
567 | ./data/office31//webcam/images/phone/frame_0004.jpg 20
568 | ./data/office31//webcam/images/phone/frame_0005.jpg 20
569 | ./data/office31//webcam/images/phone/frame_0006.jpg 20
570 | ./data/office31//webcam/images/phone/frame_0007.jpg 20
571 | ./data/office31//webcam/images/phone/frame_0008.jpg 20
572 | ./data/office31//webcam/images/phone/frame_0009.jpg 20
573 | ./data/office31//webcam/images/phone/frame_0010.jpg 20
574 | ./data/office31//webcam/images/phone/frame_0011.jpg 20
575 | ./data/office31//webcam/images/phone/frame_0012.jpg 20
576 | ./data/office31//webcam/images/phone/frame_0013.jpg 20
577 | ./data/office31//webcam/images/phone/frame_0014.jpg 20
578 | ./data/office31//webcam/images/phone/frame_0015.jpg 20
579 | ./data/office31//webcam/images/phone/frame_0016.jpg 20
580 | ./data/office31//webcam/images/bookcase/frame_0001.jpg 3
581 | ./data/office31//webcam/images/bookcase/frame_0002.jpg 3
582 | ./data/office31//webcam/images/bookcase/frame_0003.jpg 3
583 | ./data/office31//webcam/images/bookcase/frame_0004.jpg 3
584 | ./data/office31//webcam/images/bookcase/frame_0005.jpg 3
585 | ./data/office31//webcam/images/bookcase/frame_0006.jpg 3
586 | ./data/office31//webcam/images/bookcase/frame_0007.jpg 3
587 | ./data/office31//webcam/images/bookcase/frame_0008.jpg 3
588 | ./data/office31//webcam/images/bookcase/frame_0009.jpg 3
589 | ./data/office31//webcam/images/bookcase/frame_0010.jpg 3
590 | ./data/office31//webcam/images/bookcase/frame_0011.jpg 3
591 | ./data/office31//webcam/images/bookcase/frame_0012.jpg 3
592 | ./data/office31//webcam/images/projector/frame_0001.jpg 22
593 | ./data/office31//webcam/images/projector/frame_0002.jpg 22
594 | ./data/office31//webcam/images/projector/frame_0003.jpg 22
595 | ./data/office31//webcam/images/projector/frame_0004.jpg 22
596 | ./data/office31//webcam/images/projector/frame_0005.jpg 22
597 | ./data/office31//webcam/images/projector/frame_0006.jpg 22
598 | ./data/office31//webcam/images/projector/frame_0007.jpg 22
599 | ./data/office31//webcam/images/projector/frame_0008.jpg 22
600 | ./data/office31//webcam/images/projector/frame_0009.jpg 22
601 | ./data/office31//webcam/images/projector/frame_0010.jpg 22
602 | ./data/office31//webcam/images/projector/frame_0011.jpg 22
603 | ./data/office31//webcam/images/projector/frame_0012.jpg 22
604 | ./data/office31//webcam/images/projector/frame_0013.jpg 22
605 | ./data/office31//webcam/images/projector/frame_0014.jpg 22
606 | ./data/office31//webcam/images/projector/frame_0015.jpg 22
607 | ./data/office31//webcam/images/projector/frame_0016.jpg 22
608 | ./data/office31//webcam/images/projector/frame_0017.jpg 22
609 | ./data/office31//webcam/images/projector/frame_0018.jpg 22
610 | ./data/office31//webcam/images/projector/frame_0019.jpg 22
611 | ./data/office31//webcam/images/projector/frame_0020.jpg 22
612 | ./data/office31//webcam/images/projector/frame_0021.jpg 22
613 | ./data/office31//webcam/images/projector/frame_0022.jpg 22
614 | ./data/office31//webcam/images/projector/frame_0023.jpg 22
615 | ./data/office31//webcam/images/projector/frame_0024.jpg 22
616 | ./data/office31//webcam/images/projector/frame_0025.jpg 22
617 | ./data/office31//webcam/images/projector/frame_0026.jpg 22
618 | ./data/office31//webcam/images/projector/frame_0027.jpg 22
619 | ./data/office31//webcam/images/projector/frame_0028.jpg 22
620 | ./data/office31//webcam/images/projector/frame_0029.jpg 22
621 | ./data/office31//webcam/images/projector/frame_0030.jpg 22
622 | ./data/office31//webcam/images/stapler/frame_0001.jpg 28
623 | ./data/office31//webcam/images/stapler/frame_0002.jpg 28
624 | ./data/office31//webcam/images/stapler/frame_0003.jpg 28
625 | ./data/office31//webcam/images/stapler/frame_0004.jpg 28
626 | ./data/office31//webcam/images/stapler/frame_0005.jpg 28
627 | ./data/office31//webcam/images/stapler/frame_0006.jpg 28
628 | ./data/office31//webcam/images/stapler/frame_0007.jpg 28
629 | ./data/office31//webcam/images/stapler/frame_0008.jpg 28
630 | ./data/office31//webcam/images/stapler/frame_0009.jpg 28
631 | ./data/office31//webcam/images/stapler/frame_0010.jpg 28
632 | ./data/office31//webcam/images/stapler/frame_0011.jpg 28
633 | ./data/office31//webcam/images/stapler/frame_0012.jpg 28
634 | ./data/office31//webcam/images/stapler/frame_0013.jpg 28
635 | ./data/office31//webcam/images/stapler/frame_0014.jpg 28
636 | ./data/office31//webcam/images/stapler/frame_0015.jpg 28
637 | ./data/office31//webcam/images/stapler/frame_0016.jpg 28
638 | ./data/office31//webcam/images/stapler/frame_0017.jpg 28
639 | ./data/office31//webcam/images/stapler/frame_0018.jpg 28
640 | ./data/office31//webcam/images/stapler/frame_0019.jpg 28
641 | ./data/office31//webcam/images/stapler/frame_0020.jpg 28
642 | ./data/office31//webcam/images/stapler/frame_0021.jpg 28
643 | ./data/office31//webcam/images/stapler/frame_0022.jpg 28
644 | ./data/office31//webcam/images/stapler/frame_0023.jpg 28
645 | ./data/office31//webcam/images/stapler/frame_0024.jpg 28
646 | ./data/office31//webcam/images/trash_can/frame_0001.jpg 30
647 | ./data/office31//webcam/images/trash_can/frame_0002.jpg 30
648 | ./data/office31//webcam/images/trash_can/frame_0003.jpg 30
649 | ./data/office31//webcam/images/trash_can/frame_0004.jpg 30
650 | ./data/office31//webcam/images/trash_can/frame_0005.jpg 30
651 | ./data/office31//webcam/images/trash_can/frame_0006.jpg 30
652 | ./data/office31//webcam/images/trash_can/frame_0007.jpg 30
653 | ./data/office31//webcam/images/trash_can/frame_0008.jpg 30
654 | ./data/office31//webcam/images/trash_can/frame_0009.jpg 30
655 | ./data/office31//webcam/images/trash_can/frame_0010.jpg 30
656 | ./data/office31//webcam/images/trash_can/frame_0011.jpg 30
657 | ./data/office31//webcam/images/trash_can/frame_0012.jpg 30
658 | ./data/office31//webcam/images/trash_can/frame_0013.jpg 30
659 | ./data/office31//webcam/images/trash_can/frame_0014.jpg 30
660 | ./data/office31//webcam/images/trash_can/frame_0015.jpg 30
661 | ./data/office31//webcam/images/trash_can/frame_0016.jpg 30
662 | ./data/office31//webcam/images/trash_can/frame_0017.jpg 30
663 | ./data/office31//webcam/images/trash_can/frame_0018.jpg 30
664 | ./data/office31//webcam/images/trash_can/frame_0019.jpg 30
665 | ./data/office31//webcam/images/trash_can/frame_0020.jpg 30
666 | ./data/office31//webcam/images/trash_can/frame_0021.jpg 30
667 | ./data/office31//webcam/images/bike_helmet/frame_0001.jpg 2
668 | ./data/office31//webcam/images/bike_helmet/frame_0002.jpg 2
669 | ./data/office31//webcam/images/bike_helmet/frame_0003.jpg 2
670 | ./data/office31//webcam/images/bike_helmet/frame_0004.jpg 2
671 | ./data/office31//webcam/images/bike_helmet/frame_0005.jpg 2
672 | ./data/office31//webcam/images/bike_helmet/frame_0006.jpg 2
673 | ./data/office31//webcam/images/bike_helmet/frame_0007.jpg 2
674 | ./data/office31//webcam/images/bike_helmet/frame_0008.jpg 2
675 | ./data/office31//webcam/images/bike_helmet/frame_0009.jpg 2
676 | ./data/office31//webcam/images/bike_helmet/frame_0010.jpg 2
677 | ./data/office31//webcam/images/bike_helmet/frame_0011.jpg 2
678 | ./data/office31//webcam/images/bike_helmet/frame_0012.jpg 2
679 | ./data/office31//webcam/images/bike_helmet/frame_0013.jpg 2
680 | ./data/office31//webcam/images/bike_helmet/frame_0014.jpg 2
681 | ./data/office31//webcam/images/bike_helmet/frame_0015.jpg 2
682 | ./data/office31//webcam/images/bike_helmet/frame_0016.jpg 2
683 | ./data/office31//webcam/images/bike_helmet/frame_0017.jpg 2
684 | ./data/office31//webcam/images/bike_helmet/frame_0018.jpg 2
685 | ./data/office31//webcam/images/bike_helmet/frame_0019.jpg 2
686 | ./data/office31//webcam/images/bike_helmet/frame_0020.jpg 2
687 | ./data/office31//webcam/images/bike_helmet/frame_0021.jpg 2
688 | ./data/office31//webcam/images/bike_helmet/frame_0022.jpg 2
689 | ./data/office31//webcam/images/bike_helmet/frame_0023.jpg 2
690 | ./data/office31//webcam/images/bike_helmet/frame_0024.jpg 2
691 | ./data/office31//webcam/images/bike_helmet/frame_0025.jpg 2
692 | ./data/office31//webcam/images/bike_helmet/frame_0026.jpg 2
693 | ./data/office31//webcam/images/bike_helmet/frame_0027.jpg 2
694 | ./data/office31//webcam/images/bike_helmet/frame_0028.jpg 2
695 | ./data/office31//webcam/images/headphones/frame_0001.jpg 10
696 | ./data/office31//webcam/images/headphones/frame_0002.jpg 10
697 | ./data/office31//webcam/images/headphones/frame_0003.jpg 10
698 | ./data/office31//webcam/images/headphones/frame_0004.jpg 10
699 | ./data/office31//webcam/images/headphones/frame_0005.jpg 10
700 | ./data/office31//webcam/images/headphones/frame_0006.jpg 10
701 | ./data/office31//webcam/images/headphones/frame_0007.jpg 10
702 | ./data/office31//webcam/images/headphones/frame_0008.jpg 10
703 | ./data/office31//webcam/images/headphones/frame_0009.jpg 10
704 | ./data/office31//webcam/images/headphones/frame_0010.jpg 10
705 | ./data/office31//webcam/images/headphones/frame_0011.jpg 10
706 | ./data/office31//webcam/images/headphones/frame_0012.jpg 10
707 | ./data/office31//webcam/images/headphones/frame_0013.jpg 10
708 | ./data/office31//webcam/images/headphones/frame_0014.jpg 10
709 | ./data/office31//webcam/images/headphones/frame_0015.jpg 10
710 | ./data/office31//webcam/images/headphones/frame_0016.jpg 10
711 | ./data/office31//webcam/images/headphones/frame_0017.jpg 10
712 | ./data/office31//webcam/images/headphones/frame_0018.jpg 10
713 | ./data/office31//webcam/images/headphones/frame_0019.jpg 10
714 | ./data/office31//webcam/images/headphones/frame_0020.jpg 10
715 | ./data/office31//webcam/images/headphones/frame_0021.jpg 10
716 | ./data/office31//webcam/images/headphones/frame_0022.jpg 10
717 | ./data/office31//webcam/images/headphones/frame_0023.jpg 10
718 | ./data/office31//webcam/images/headphones/frame_0024.jpg 10
719 | ./data/office31//webcam/images/headphones/frame_0025.jpg 10
720 | ./data/office31//webcam/images/headphones/frame_0026.jpg 10
721 | ./data/office31//webcam/images/headphones/frame_0027.jpg 10
722 | ./data/office31//webcam/images/desk_lamp/frame_0001.jpg 7
723 | ./data/office31//webcam/images/desk_lamp/frame_0002.jpg 7
724 | ./data/office31//webcam/images/desk_lamp/frame_0003.jpg 7
725 | ./data/office31//webcam/images/desk_lamp/frame_0004.jpg 7
726 | ./data/office31//webcam/images/desk_lamp/frame_0005.jpg 7
727 | ./data/office31//webcam/images/desk_lamp/frame_0006.jpg 7
728 | ./data/office31//webcam/images/desk_lamp/frame_0007.jpg 7
729 | ./data/office31//webcam/images/desk_lamp/frame_0008.jpg 7
730 | ./data/office31//webcam/images/desk_lamp/frame_0009.jpg 7
731 | ./data/office31//webcam/images/desk_lamp/frame_0010.jpg 7
732 | ./data/office31//webcam/images/desk_lamp/frame_0011.jpg 7
733 | ./data/office31//webcam/images/desk_lamp/frame_0012.jpg 7
734 | ./data/office31//webcam/images/desk_lamp/frame_0013.jpg 7
735 | ./data/office31//webcam/images/desk_lamp/frame_0014.jpg 7
736 | ./data/office31//webcam/images/desk_lamp/frame_0015.jpg 7
737 | ./data/office31//webcam/images/desk_lamp/frame_0016.jpg 7
738 | ./data/office31//webcam/images/desk_lamp/frame_0017.jpg 7
739 | ./data/office31//webcam/images/desk_lamp/frame_0018.jpg 7
740 | ./data/office31//webcam/images/desk_chair/frame_0001.jpg 6
741 | ./data/office31//webcam/images/desk_chair/frame_0002.jpg 6
742 | ./data/office31//webcam/images/desk_chair/frame_0003.jpg 6
743 | ./data/office31//webcam/images/desk_chair/frame_0004.jpg 6
744 | ./data/office31//webcam/images/desk_chair/frame_0005.jpg 6
745 | ./data/office31//webcam/images/desk_chair/frame_0006.jpg 6
746 | ./data/office31//webcam/images/desk_chair/frame_0007.jpg 6
747 | ./data/office31//webcam/images/desk_chair/frame_0008.jpg 6
748 | ./data/office31//webcam/images/desk_chair/frame_0009.jpg 6
749 | ./data/office31//webcam/images/desk_chair/frame_0010.jpg 6
750 | ./data/office31//webcam/images/desk_chair/frame_0011.jpg 6
751 | ./data/office31//webcam/images/desk_chair/frame_0012.jpg 6
752 | ./data/office31//webcam/images/desk_chair/frame_0013.jpg 6
753 | ./data/office31//webcam/images/desk_chair/frame_0014.jpg 6
754 | ./data/office31//webcam/images/desk_chair/frame_0015.jpg 6
755 | ./data/office31//webcam/images/desk_chair/frame_0016.jpg 6
756 | ./data/office31//webcam/images/desk_chair/frame_0017.jpg 6
757 | ./data/office31//webcam/images/desk_chair/frame_0018.jpg 6
758 | ./data/office31//webcam/images/desk_chair/frame_0019.jpg 6
759 | ./data/office31//webcam/images/desk_chair/frame_0020.jpg 6
760 | ./data/office31//webcam/images/desk_chair/frame_0021.jpg 6
761 | ./data/office31//webcam/images/desk_chair/frame_0022.jpg 6
762 | ./data/office31//webcam/images/desk_chair/frame_0023.jpg 6
763 | ./data/office31//webcam/images/desk_chair/frame_0024.jpg 6
764 | ./data/office31//webcam/images/desk_chair/frame_0025.jpg 6
765 | ./data/office31//webcam/images/desk_chair/frame_0026.jpg 6
766 | ./data/office31//webcam/images/desk_chair/frame_0027.jpg 6
767 | ./data/office31//webcam/images/desk_chair/frame_0028.jpg 6
768 | ./data/office31//webcam/images/desk_chair/frame_0029.jpg 6
769 | ./data/office31//webcam/images/desk_chair/frame_0030.jpg 6
770 | ./data/office31//webcam/images/desk_chair/frame_0031.jpg 6
771 | ./data/office31//webcam/images/desk_chair/frame_0032.jpg 6
772 | ./data/office31//webcam/images/desk_chair/frame_0033.jpg 6
773 | ./data/office31//webcam/images/desk_chair/frame_0034.jpg 6
774 | ./data/office31//webcam/images/desk_chair/frame_0035.jpg 6
775 | ./data/office31//webcam/images/desk_chair/frame_0036.jpg 6
776 | ./data/office31//webcam/images/desk_chair/frame_0037.jpg 6
777 | ./data/office31//webcam/images/desk_chair/frame_0038.jpg 6
778 | ./data/office31//webcam/images/desk_chair/frame_0039.jpg 6
779 | ./data/office31//webcam/images/desk_chair/frame_0040.jpg 6
780 | ./data/office31//webcam/images/bottle/frame_0001.jpg 4
781 | ./data/office31//webcam/images/bottle/frame_0002.jpg 4
782 | ./data/office31//webcam/images/bottle/frame_0003.jpg 4
783 | ./data/office31//webcam/images/bottle/frame_0004.jpg 4
784 | ./data/office31//webcam/images/bottle/frame_0005.jpg 4
785 | ./data/office31//webcam/images/bottle/frame_0006.jpg 4
786 | ./data/office31//webcam/images/bottle/frame_0007.jpg 4
787 | ./data/office31//webcam/images/bottle/frame_0008.jpg 4
788 | ./data/office31//webcam/images/bottle/frame_0009.jpg 4
789 | ./data/office31//webcam/images/bottle/frame_0010.jpg 4
790 | ./data/office31//webcam/images/bottle/frame_0011.jpg 4
791 | ./data/office31//webcam/images/bottle/frame_0012.jpg 4
792 | ./data/office31//webcam/images/bottle/frame_0013.jpg 4
793 | ./data/office31//webcam/images/bottle/frame_0014.jpg 4
794 | ./data/office31//webcam/images/bottle/frame_0015.jpg 4
795 | ./data/office31//webcam/images/bottle/frame_0016.jpg 4
796 |
--------------------------------------------------------------------------------
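Each line in the data/*.txt lists above pairs a relative image path with an integer class index. A minimal parse of such a two-column list (make_dataset in dataset/data_list.py below does the equivalent), assuming data/webcam.txt is present and read from the repository root:

# illustrative parse of a "path label" list file
with open('data/webcam.txt') as f:
    samples = [(path, int(label)) for path, label in (line.split() for line in f if line.strip())]
print(len(samples), samples[0])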
/dataset/augmentations.py:
--------------------------------------------------------------------------------
1 | # code in this file is adapted from rpmcruz/autoaugment
2 | # https://github.com/rpmcruz/autoaugment/blob/master/transformations.py
3 | import random
4 |
5 | import PIL, PIL.ImageOps, PIL.ImageEnhance, PIL.ImageDraw
6 | import numpy as np
7 | import torch
8 | from PIL import Image
9 |
10 |
11 | def ShearX(img, v): # [-0.3, 0.3]
12 | assert -0.3 <= v <= 0.3
13 | if random.random() > 0.5:
14 | v = -v
15 | return img.transform(img.size, PIL.Image.AFFINE, (1, v, 0, 0, 1, 0))
16 |
17 |
18 | def ShearY(img, v): # [-0.3, 0.3]
19 | assert -0.3 <= v <= 0.3
20 | if random.random() > 0.5:
21 | v = -v
22 | return img.transform(img.size, PIL.Image.AFFINE, (1, 0, 0, v, 1, 0))
23 |
24 |
25 | def TranslateX(img, v): # [-150, 150] => percentage: [-0.45, 0.45]
26 | assert -0.45 <= v <= 0.45
27 | if random.random() > 0.5:
28 | v = -v
29 | v = v * img.size[0]
30 | return img.transform(img.size, PIL.Image.AFFINE, (1, 0, v, 0, 1, 0))
31 |
32 |
33 | def TranslateXabs(img, v):  # absolute translation in pixels, v >= 0
34 | assert 0 <= v
35 | if random.random() > 0.5:
36 | v = -v
37 | return img.transform(img.size, PIL.Image.AFFINE, (1, 0, v, 0, 1, 0))
38 |
39 |
40 | def TranslateY(img, v): # [-150, 150] => percentage: [-0.45, 0.45]
41 | assert -0.45 <= v <= 0.45
42 | if random.random() > 0.5:
43 | v = -v
44 | v = v * img.size[1]
45 | return img.transform(img.size, PIL.Image.AFFINE, (1, 0, 0, 0, 1, v))
46 |
47 |
48 | def TranslateYabs(img, v):  # absolute translation in pixels, v >= 0
49 | assert 0 <= v
50 | if random.random() > 0.5:
51 | v = -v
52 | return img.transform(img.size, PIL.Image.AFFINE, (1, 0, 0, 0, 1, v))
53 |
54 |
55 | def Rotate(img, v): # [-30, 30]
56 | assert -30 <= v <= 30
57 | if random.random() > 0.5:
58 | v = -v
59 | return img.rotate(v)
60 |
61 |
62 | def AutoContrast(img, _):
63 | return PIL.ImageOps.autocontrast(img)
64 |
65 |
66 | def Invert(img, _):
67 | return PIL.ImageOps.invert(img)
68 |
69 |
70 | def Equalize(img, _):
71 | return PIL.ImageOps.equalize(img)
72 |
73 |
74 | def Flip(img, _): # not from the paper
75 | return PIL.ImageOps.mirror(img)
76 |
77 |
78 | def Solarize(img, v): # [0, 256]
79 | assert 0 <= v <= 256
80 | return PIL.ImageOps.solarize(img, v)
81 |
82 |
83 | def SolarizeAdd(img, addition=0, threshold=128):
84 |     img_np = np.array(img).astype(int)
85 | img_np = img_np + addition
86 | img_np = np.clip(img_np, 0, 255)
87 | img_np = img_np.astype(np.uint8)
88 | img = Image.fromarray(img_np)
89 | return PIL.ImageOps.solarize(img, threshold)
90 |
91 |
92 | def Posterize(img, v): # [4, 8]
93 | v = int(v)
94 | v = max(1, v)
95 | return PIL.ImageOps.posterize(img, v)
96 |
97 |
98 | def Contrast(img, v): # [0.1,1.9]
99 | assert 0.1 <= v <= 1.9
100 | return PIL.ImageEnhance.Contrast(img).enhance(v)
101 |
102 |
103 | def Color(img, v): # [0.1,1.9]
104 | assert 0.1 <= v <= 1.9
105 | return PIL.ImageEnhance.Color(img).enhance(v)
106 |
107 |
108 | def Brightness(img, v): # [0.1,1.9]
109 | assert 0.1 <= v <= 1.9
110 | return PIL.ImageEnhance.Brightness(img).enhance(v)
111 |
112 |
113 | def Sharpness(img, v): # [0.1,1.9]
114 | assert 0.1 <= v <= 1.9
115 | return PIL.ImageEnhance.Sharpness(img).enhance(v)
116 |
117 |
118 | def Cutout(img, v): # [0, 60] => percentage: [0, 0.2]
119 | assert 0.0 <= v <= 0.2
120 | if v <= 0.:
121 | return img
122 |
123 | v = v * img.size[0]
124 | return CutoutAbs(img, v)
125 |
126 |
127 | def CutoutAbs(img, v): # [0, 60] => percentage: [0, 0.2]
128 | # assert 0 <= v <= 20
129 | if v < 0:
130 | return img
131 | w, h = img.size
132 | x0 = np.random.uniform(w)
133 | y0 = np.random.uniform(h)
134 |
135 | x0 = int(max(0, x0 - v / 2.))
136 | y0 = int(max(0, y0 - v / 2.))
137 | x1 = min(w, x0 + v)
138 | y1 = min(h, y0 + v)
139 |
140 | xy = (x0, y0, x1, y1)
141 | color = (125, 123, 114)
142 | # color = (0, 0, 0)
143 | img = img.copy()
144 | PIL.ImageDraw.Draw(img).rectangle(xy, color)
145 | return img
146 |
147 |
148 | def SamplePairing(imgs): # [0, 0.4]
149 | def f(img1, v):
150 | i = np.random.choice(len(imgs))
151 | img2 = PIL.Image.fromarray(imgs[i])
152 | return PIL.Image.blend(img1, img2, v)
153 |
154 | return f
155 |
156 |
157 | def Identity(img, v):
158 | return img
159 |
160 |
161 | def augment_list():  # 16 operations and their ranges
162 | # https://github.com/google-research/uda/blob/master/image/randaugment/policies.py#L57
163 | # l = [
164 | # (Identity, 0., 1.0),
165 | # (ShearX, 0., 0.3), # 0
166 | # (ShearY, 0., 0.3), # 1
167 | # (TranslateX, 0., 0.33), # 2
168 | # (TranslateY, 0., 0.33), # 3
169 | # (Rotate, 0, 30), # 4
170 | # (AutoContrast, 0, 1), # 5
171 | # (Invert, 0, 1), # 6
172 | # (Equalize, 0, 1), # 7
173 | # (Solarize, 0, 110), # 8
174 | # (Posterize, 4, 8), # 9
175 | # # (Contrast, 0.1, 1.9), # 10
176 | # (Color, 0.1, 1.9), # 11
177 | # (Brightness, 0.1, 1.9), # 12
178 | # (Sharpness, 0.1, 1.9), # 13
179 | # # (Cutout, 0, 0.2), # 14
180 | # # (SamplePairing(imgs), 0, 0.4), # 15
181 | # ]
182 |
183 | # https://github.com/tensorflow/tpu/blob/8462d083dd89489a79e3200bcc8d4063bf362186/models/official/efficientnet/autoaugment.py#L505
184 | l = [
185 | (AutoContrast, 0, 1),
186 | (Equalize, 0, 1),
187 | (Invert, 0, 1),
188 | (Rotate, 0, 30),
189 | (Posterize, 0, 4),
190 | (Solarize, 0, 256),
191 | (SolarizeAdd, 0, 110),
192 | (Color, 0.1, 1.9),
193 | (Contrast, 0.1, 1.9),
194 | (Brightness, 0.1, 1.9),
195 | (Sharpness, 0.1, 1.9),
196 | (ShearX, 0., 0.3),
197 | (ShearY, 0., 0.3),
198 | (CutoutAbs, 0, 40),
199 | (TranslateXabs, 0., 100),
200 | (TranslateYabs, 0., 100),
201 | ]
202 |
203 | return l
204 |
205 |
206 | # class Lighting(object):
207 | # """Lighting noise(AlexNet - style PCA - based noise)"""
208 | #
209 | # def __init__(self, alphastd, eigval, eigvec):
210 | # self.alphastd = alphastd
211 | # self.eigval = torch.Tensor(eigval)
212 | # self.eigvec = torch.Tensor(eigvec)
213 | #
214 | # def __call__(self, img):
215 | # if self.alphastd == 0:
216 | # return img
217 | #
218 | # alpha = img.new().resize_(3).normal_(0, self.alphastd)
219 | # rgb = self.eigvec.type_as(img).clone() \
220 | # .mul(alpha.view(1, 3).expand(3, 3)) \
221 | # .mul(self.eigval.view(1, 3).expand(3, 3)) \
222 | # .sum(1).squeeze()
223 | #
224 | # return img.add(rgb.view(3, 1, 1).expand_as(img))
225 |
226 |
227 | # class CutoutDefault(object):
228 | # """
229 | # Reference : https://github.com/quark0/darts/blob/master/cnn/utils.py
230 | # """
231 | # def __init__(self, length):
232 | # self.length = length
233 | #
234 | # def __call__(self, img):
235 | # h, w = img.size(1), img.size(2)
236 | # mask = np.ones((h, w), np.float32)
237 | # y = np.random.randint(h)
238 | # x = np.random.randint(w)
239 | #
240 | # y1 = np.clip(y - self.length // 2, 0, h)
241 | # y2 = np.clip(y + self.length // 2, 0, h)
242 | # x1 = np.clip(x - self.length // 2, 0, w)
243 | # x2 = np.clip(x + self.length // 2, 0, w)
244 | #
245 | # mask[y1: y2, x1: x2] = 0.
246 | # mask = torch.from_numpy(mask)
247 | # mask = mask.expand_as(img)
248 | # img *= mask
249 | # return img
250 |
251 |
252 | class RandAugment:
253 | def __init__(self, n, m):
254 | self.n = n
255 | self.m = m # [0, 30]
256 | self.augment_list = augment_list()
257 |
258 | def __call__(self, img):
259 |
260 | if self.n == 0:
261 | return img
262 |
263 | ops = random.choices(self.augment_list, k=self.n)
264 | for op, minval, maxval in ops:
265 | val = (float(self.m) / 30) * float(maxval - minval) + minval
266 | img = op(img, val)
267 |
268 | return img
--------------------------------------------------------------------------------
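RandAugment above samples n operations per call and scales each operation's magnitude by m/30. A minimal usage sketch, assuming the repository root is on PYTHONPATH; the (n, m) = (1, 2.0) setting mirrors the value ImageList uses below when rand_aug=True:

from PIL import Image
from torchvision import transforms as T
from dataset.augmentations import RandAugment

train_tf = T.Compose([
    T.Resize([256, 256]),
    T.CenterCrop(224),
    T.ToTensor(),
])
# prepend so the augmentation runs on the PIL image, before ToTensor
train_tf.transforms.insert(0, RandAugment(1, 2.0))
augmented = train_tf(Image.new('RGB', (300, 300)))  # dummy image, just to exercise the pipeline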
/dataset/data_list.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from PIL import Image
3 | import copy
4 | from .augmentations import RandAugment
5 |
6 | def make_dataset(image_list, labels):
7 | if labels:
8 | len_ = len(image_list)
9 | images = [(image_list[i].strip(), labels[i, :]) for i in range(len_)]
10 | else:
11 | if len(image_list[0].split()) > 2:
12 | images = [(val.split()[0], np.array([int(la) for la in val.split()[1:]])) for val in image_list]
13 | else:
14 | images = [(val.split()[0], int(val.split()[1])) for val in image_list]
15 | return images
16 |
17 |
18 | def pil_loader(path):
19 | # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
20 | with open(path, 'rb') as f:
21 | with Image.open(f) as img:
22 | return img.convert('RGB')
23 |
24 | def default_loader(path):
25 | return pil_loader(path)
26 |
27 |
28 | class ImageList(object):
29 | """A generic data loader where the images are arranged in this way: ::
30 | root/dog/xxx.png
31 | root/dog/xxy.png
32 | root/dog/xxz.png
33 | root/cat/123.png
34 | root/cat/nsdf3.png
35 | root/cat/asd932_.png
36 | Args:
37 | root (string): Root directory path.
38 |         transform (callable, optional): A function/transform that takes in a PIL image
39 |             and returns a transformed version. E.g., ``transforms.RandomCrop``
40 | target_transform (callable, optional): A function/transform that takes in the
41 | target and transforms it.
42 | loader (callable, optional): A function to load an image given its path.
43 | Attributes:
44 | classes (list): List of the class names.
45 | class_to_idx (dict): Dict with items (class_name, class_index).
46 | imgs (list): List of (image path, class_index) tuples
47 | """
48 |
49 | def __init__(self, image_list, labels=None, transform=None, target_transform=None,
50 | loader=default_loader, rand_aug=False):
51 | imgs = make_dataset(image_list, labels)
52 |         if len(imgs) == 0:
53 |             raise RuntimeError("Found 0 images in the provided image list")
54 |
55 | self.imgs = imgs
56 | self.transform = transform
57 | self.target_transform = target_transform
58 | self.loader = loader
59 | self.labels = [label for (_, label) in imgs]
60 | self.rand_aug = rand_aug
61 | if self.rand_aug:
62 | self.rand_aug_transform = copy.deepcopy(self.transform)
63 | self.rand_aug_transform.transforms.insert(0, RandAugment(1, 2.0))
64 |
65 | def __getitem__(self, index):
66 | """
67 | Args:
68 | index (int): Index
69 | Returns:
70 | tuple: (image, target) where target is class_index of the target class.
71 | """
72 | path, target = self.imgs[index]
73 | img_ = self.loader(path)
74 |         # fall back to the untransformed PIL image when no transform is given
75 |         img = self.transform(img_) if self.transform is not None else img_
76 | if self.target_transform is not None:
77 | target = self.target_transform(target)
78 |
79 | if self.rand_aug:
80 | rand_img = self.rand_aug_transform(img_)
81 | return img, target, index, rand_img
82 | else:
83 | return img, target, index
84 |
85 | def __len__(self):
86 | return len(self.imgs)
87 |
--------------------------------------------------------------------------------
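A minimal sketch of using ImageList directly (data_provider.py below builds it the same way), assuming the Office-31 images have been downloaded so that the paths inside data/webcam.txt resolve:

from torchvision import transforms as T
from dataset.data_list import ImageList

transform = T.Compose([T.Resize([256, 256]), T.CenterCrop(224), T.ToTensor()])
dataset = ImageList(open('data/webcam.txt').readlines(), transform=transform)
img, target, index = dataset[0]  # with rand_aug=True a fourth, RandAugment-ed view is also returned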
/dataset/data_provider.py:
--------------------------------------------------------------------------------
1 | from .data_list import ImageList
2 | import torch.utils.data as util_data
3 | from torchvision import transforms as T
4 |
5 | def get_dataloader_from_image_filepath(images_file_path, batch_size=32, resize_size=256, is_train=True, crop_size=224,
6 | center_crop=True, rand_aug=False, random_resized_crop=False, num_workers=4):
7 | if images_file_path is None:
8 | return None
9 |
10 | normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
11 | if is_train is not True:
12 | transformer = T.Compose([
13 | T.Resize([resize_size, resize_size]),
14 | T.CenterCrop(crop_size),
15 | T.ToTensor(),
16 | normalize])
17 | images = ImageList(open(images_file_path).readlines(), transform=transformer)
18 | images_loader = util_data.DataLoader(images, batch_size=batch_size, shuffle=False, num_workers=num_workers)
19 | else:
20 | if center_crop:
21 | transformer = T.Compose([T.Resize([resize_size, resize_size]),
22 | T.RandomHorizontalFlip(),
23 | T.CenterCrop(crop_size),
24 | T.ToTensor(),
25 | normalize])
26 | elif random_resized_crop:
27 | transformer = T.Compose([T.Resize([resize_size, resize_size]),
28 | T.RandomCrop(crop_size),
29 | T.RandomHorizontalFlip(),
30 | T.ToTensor(),
31 | normalize])
32 | else:
33 | transformer = T.Compose([T.Resize([resize_size, resize_size]),
34 | T.RandomResizedCrop(crop_size),
35 | T.RandomHorizontalFlip(),
36 | T.ToTensor(),
37 | normalize])
38 |
39 | images = ImageList(open(images_file_path).readlines(), transform=transformer, rand_aug=rand_aug)
40 | images_loader = util_data.DataLoader(images, batch_size=batch_size, shuffle=True, num_workers=num_workers, drop_last=True)
41 |
42 | return images_loader
43 |
44 |
45 | def get_dataloaders(args):
46 | dataloaders = {}
47 | source_train_loader = get_dataloader_from_image_filepath(args.source_path, batch_size=args.batch_size,
48 | center_crop=args.center_crop, num_workers=args.num_workers,
49 | random_resized_crop=args.random_resized_crop)
50 | target_train_loader = get_dataloader_from_image_filepath(args.target_path, batch_size=args.batch_size,
51 | center_crop=args.center_crop, num_workers=args.num_workers,
52 | rand_aug=args.rand_aug, random_resized_crop=args.random_resized_crop)
53 | source_val_loader = get_dataloader_from_image_filepath(args.source_path, batch_size=args.batch_size, is_train=False,
54 | num_workers=args.num_workers)
55 | target_val_loader = get_dataloader_from_image_filepath(args.target_path, batch_size=args.batch_size, is_train=False,
56 | num_workers=args.num_workers)
57 |
58 | if type(args.test_path) is list:
59 | test_loader = []
60 | for tst_addr in args.test_path:
61 | test_loader.append(get_dataloader_from_image_filepath(tst_addr, batch_size=args.batch_size, is_train=False,
62 | num_workers=args.num_workers))
63 | else:
64 | test_loader = get_dataloader_from_image_filepath(args.test_path, batch_size=args.batch_size, is_train=False,
65 | num_workers=args.num_workers)
66 | dataloaders["source_tr"] = source_train_loader
67 | dataloaders["target_tr"] = target_train_loader
68 | dataloaders["source_val"] = source_val_loader
69 | dataloaders["target_val"] = target_val_loader
70 | dataloaders["test"] = test_loader
71 |
72 | return dataloaders
73 |
74 |
75 | class ForeverDataIterator:
76 | r"""A data iterator that will never stop producing data"""
77 |
78 | def __init__(self, data_loader):
79 | self.data_loader = data_loader
80 | self.iter = iter(self.data_loader or [])
81 |
82 | def __next__(self):
83 | try:
84 | data = next(self.iter)
85 | except StopIteration:
86 | self.iter = iter(self.data_loader)
87 | data = next(self.iter)
88 | return data
89 |
90 | def __len__(self):
91 | return len(self.data_loader)
--------------------------------------------------------------------------------
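ForeverDataIterator simply restarts its DataLoader once it is exhausted, so a training loop can call next() for a fixed number of iterations per epoch without tracking epoch boundaries. A self-contained sketch with a toy loader (the tensors are placeholders, not repository data):

import torch
from torch.utils.data import DataLoader, TensorDataset
from dataset.data_provider import ForeverDataIterator

loader = DataLoader(TensorDataset(torch.randn(8, 3), torch.zeros(8)), batch_size=4)
it = ForeverDataIterator(loader)
for _ in range(5):        # more steps than the loader holds batches
    x, y = next(it)       # silently wraps around at the end of the loader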
/fig/SSRT.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tsun/SSRT/0bf39fd188d5f1ce12785ea94ae737eb55c3416a/fig/SSRT.png
--------------------------------------------------------------------------------
/fig/SafeTraining.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tsun/SSRT/0bf39fd188d5f1ce12785ea94ae737eb55c3416a/fig/SafeTraining.png
--------------------------------------------------------------------------------
/main_SSRT.domainnet.py:
--------------------------------------------------------------------------------
1 | from trainer.train import train_main
2 | import time
3 | import socket
4 | import os
5 |
6 | timestamp = time.strftime("%Y-%m-%d_%H.%M.%S", time.localtime())
7 | hostName = socket.gethostname()
8 | pid = os.getpid()
9 |
10 | domains = ['clipart', 'infograph', 'painting', 'quickdraw', 'real', 'sketch']
11 |
12 | for src in domains:
13 | for tgt in domains:
14 |
15 | if src == tgt:
16 | continue
17 |
18 | header = '''
19 | ++++++++++++++++++++++++++++++++++++++++++++++++
20 | {}
21 | ++++++++++++++++++++++++++++++++++++++++++++++++
22 | @{}:{}
23 | '''.format
24 |
25 | args = ['--model=SSRT',
26 | '--base_net=vit_base_patch16_224',
27 |
28 | '--gpu=0',
29 | '--timestamp={}'.format(timestamp),
30 |
31 | '--dataset=DomainNet',
32 | '--source_path=data/{}_train.txt'.format(src),
33 | '--target_path=data/{}_train.txt'.format(tgt),
34 | '--test_path=data/{}_test.txt'.format(tgt),
35 | '--batch_size=32',
36 |
37 | '--lr=0.004',
38 | '--train_epoch=40',
39 | '--save_epoch=40',
40 | '--eval_epoch=5',
41 | '--iters_per_epoch=1000',
42 |
43 | '--sr_loss_weight=0.2',
44 | '--sr_alpha=0.3',
45 | '--sr_layers=[0,4,8]',
46 | '--sr_epsilon=0.4',
47 |
48 | '--use_safe_training=True',
49 | '--adap_adjust_T=1000',
50 | '--adap_adjust_L=4',
51 |
52 | '--use_tensorboard=False',
53 | '--tensorboard_dir=tbs/SSRT',
54 | '--use_file_logger=True',
55 | '--log_dir=logs/SSRT']
56 | train_main(args, header('\n\t\t'.join(args), hostName, pid))
--------------------------------------------------------------------------------
/main_SSRT.office31.py:
--------------------------------------------------------------------------------
1 | from trainer.train import train_main
2 | import time
3 | import socket
4 | import os
5 |
6 | timestamp = time.strftime("%Y-%m-%d_%H.%M.%S", time.localtime())
7 | hostName = socket.gethostname()
8 | pid = os.getpid()
9 |
10 | domains = ['webcam', 'amazon', 'dslr']
11 |
12 | for src in domains:
13 | for tgt in domains:
14 |
15 | if src == tgt:
16 | continue
17 |
18 | header = '''
19 | ++++++++++++++++++++++++++++++++++++++++++++++++
20 | {}
21 | ++++++++++++++++++++++++++++++++++++++++++++++++
22 | @{}:{}
23 | '''.format
24 |
25 | args = ['--model=SSRT',
26 | '--base_net=vit_base_patch16_224',
27 |
28 | '--gpu=0',
29 | '--timestamp={}'.format(timestamp),
30 |
31 | '--dataset=Office-31',
32 | '--source_path=data/{}.txt'.format(src),
33 | '--target_path=data/{}.txt'.format(tgt),
34 | '--batch_size=32',
35 |
36 | '--lr=0.001',
37 | '--train_epoch=10',
38 | '--save_epoch=10',
39 | '--eval_epoch=2',
40 | '--iters_per_epoch=1000',
41 |
42 | '--sr_loss_weight=0.2',
43 | '--sr_alpha=0.2',
44 | '--sr_layers=[0,4,8]',
45 | '--sr_epsilon=0.4',
46 |
47 | '--use_safe_training=True',
48 | '--adap_adjust_T=1000',
49 | '--adap_adjust_L=4',
50 |
51 | '--use_tensorboard=False',
52 | '--tensorboard_dir=tbs/SSRT',
53 | '--use_file_logger=True',
54 | '--log_dir=logs/SSRT']
55 | train_main(args, header('\n\t\t'.join(args), hostName, pid))
--------------------------------------------------------------------------------
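Each launcher script assembles an argv-style flag list and hands it to trainer.train.train_main together with a header string. A hedged sketch of driving a single Office-31 transfer pair directly, assuming the flags omitted here fall back to the defaults in trainer/argument_parser.py (not shown in this section):

from trainer.train import train_main

args = ['--model=SSRT', '--base_net=vit_base_patch16_224', '--gpu=0',
        '--dataset=Office-31',
        '--source_path=data/amazon.txt', '--target_path=data/webcam.txt',
        '--batch_size=32', '--lr=0.001', '--train_epoch=10']
train_main(args, 'amazon -> webcam')  # second argument: header text (the scripts pass a formatted banner)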
/main_SSRT.office_home.py:
--------------------------------------------------------------------------------
1 | from trainer.train import train_main
2 | import time
3 | import socket
4 | import os
5 |
6 | timestamp = time.strftime("%Y-%m-%d_%H.%M.%S", time.localtime())
7 | hostName = socket.gethostname()
8 | pid = os.getpid()
9 |
10 | domains = ['Product', 'Clipart', 'Art', 'Real_World']
11 |
12 | for src in domains:
13 | for tgt in domains:
14 |
15 | if src == tgt:
16 | continue
17 |
18 | header = '''
19 | ++++++++++++++++++++++++++++++++++++++++++++++++
20 | {}
21 | ++++++++++++++++++++++++++++++++++++++++++++++++
22 | @{}:{}
23 | '''.format
24 |
25 | args = ['--model=SSRT',
26 | '--base_net=vit_base_patch16_224',
27 |
28 | '--gpu=0',
29 | '--timestamp={}'.format(timestamp),
30 |
31 | '--dataset=Office-Home',
32 | '--source_path=data/{}.txt'.format(src),
33 | '--target_path=data/{}.txt'.format(tgt),
34 | '--batch_size=32',
35 |
36 | '--lr=0.004',
37 | '--train_epoch=20',
38 | '--save_epoch=20',
39 | '--eval_epoch=5',
40 | '--iters_per_epoch=1000',
41 |
42 | '--sr_loss_weight=0.2',
43 | '--sr_alpha=0.3',
44 | '--sr_layers=[0,4,8]',
45 | '--sr_epsilon=0.4',
46 |
47 | '--use_safe_training=True',
48 | '--adap_adjust_T=1000',
49 | '--adap_adjust_L=4',
50 |
51 | '--use_tensorboard=False',
52 | '--tensorboard_dir=tbs/SSRT',
53 | '--use_file_logger=True',
54 | '--log_dir=logs/SSRT']
55 | train_main(args, header('\n\t\t'.join(args), hostName, pid))
--------------------------------------------------------------------------------
/main_SSRT.visda.py:
--------------------------------------------------------------------------------
1 | from trainer.train import train_main
2 | import time
3 | import socket
4 | import os
5 |
6 | timestamp = time.strftime("%Y-%m-%d_%H.%M.%S", time.localtime())
7 | hostName = socket.gethostname()
8 | pid = os.getpid()
9 |
10 | header = '''
11 | ++++++++++++++++++++++++++++++++++++++++++++++++
12 | {}
13 | ++++++++++++++++++++++++++++++++++++++++++++++++
14 | @{}:{}
15 | '''.format
16 |
17 | args = ['--model=SSRT',
18 | '--base_net=vit_base_patch16_224',
19 |
20 | '--gpu=0',
21 | '--timestamp={}'.format(timestamp),
22 |
23 | '--dataset=visda',
24 | '--source_path=data/VisDA2017_train.txt',
25 | '--target_path=data/VisDA2017_valid.txt',
26 | '--batch_size=32',
27 |
28 | '--lr=0.002',
29 | '--train_epoch=20',
30 | '--save_epoch=20',
31 | '--eval_epoch=5',
32 | '--iters_per_epoch=1000',
33 |
34 | '--sr_loss_weight=0.2',
35 | '--sr_alpha=0.3',
36 | '--sr_layers=[0,4,8]',
37 | '--sr_epsilon=0.4',
38 |
39 | '--use_safe_training=True',
40 | '--adap_adjust_T=1000',
41 | '--adap_adjust_L=4',
42 |
43 | '--use_tensorboard=False',
44 | '--tensorboard_dir=tbs/SSRT',
45 | '--use_file_logger=True',
46 | '--log_dir=logs/SSRT' ]
47 | train_main(args, header('\n\t'.join(args), hostName, pid))
--------------------------------------------------------------------------------
/main_ViT_baseline.domainnet.py:
--------------------------------------------------------------------------------
1 | from trainer.train import train_main
2 | import time
3 | import socket
4 | import os
5 |
6 | timestamp = time.strftime("%Y-%m-%d_%H.%M.%S", time.localtime())
7 | hostName = socket.gethostname()
8 | pid = os.getpid()
9 |
10 | domains = ['clipart', 'infograph', 'painting', 'quickdraw', 'real', 'sketch']
11 |
12 | for src in domains:
13 | for tgt in domains:
14 |
15 | if src == tgt:
16 | continue
17 |
18 | header = '''
19 | ++++++++++++++++++++++++++++++++++++++++++++++++
20 | {}
21 | ++++++++++++++++++++++++++++++++++++++++++++++++
22 | @{}:{}
23 | '''.format
24 |
25 | args = ['--model=ViTgrl',
26 | '--base_net=vit_base_patch16_224',
27 |
28 | '--gpu=0',
29 | '--timestamp={}'.format(timestamp),
30 |
31 | '--dataset=DomainNet',
32 | '--source_path=data/{}_train.txt'.format(src),
33 | '--target_path=data/{}_train.txt'.format(tgt),
34 | '--test_path=data/{}_test.txt'.format(tgt),
35 | '--batch_size=32',
36 |
37 | '--lr=0.004',
38 | '--train_epoch=40',
39 | '--save_epoch=40',
40 | '--eval_epoch=5',
41 | '--iters_per_epoch=1000',
42 |
43 | '--use_tensorboard=False',
44 | '--use_file_logger=True',
45 | '--log_dir=logs/ViTgrl']
46 | train_main(args, header('\n\t\t'.join(args), hostName, pid))
--------------------------------------------------------------------------------
/main_ViT_baseline.office31.py:
--------------------------------------------------------------------------------
1 | from trainer.train import train_main
2 | import time
3 | import socket
4 | import os
5 |
6 | timestamp = time.strftime("%Y-%m-%d_%H.%M.%S", time.localtime())
7 | hostName = socket.gethostname()
8 | pid = os.getpid()
9 |
10 | domains = ['webcam', 'amazon', 'dslr']
11 |
12 | for src in domains:
13 | for tgt in domains:
14 |
15 | if src == tgt:
16 | continue
17 |
18 | header = '''
19 | ++++++++++++++++++++++++++++++++++++++++++++++++
20 | {}
21 | ++++++++++++++++++++++++++++++++++++++++++++++++
22 | @{}:{}
23 | '''.format
24 |
25 | args = ['--model=ViTgrl',
26 | '--base_net=vit_base_patch16_224',
27 |
28 | '--gpu=0',
29 | '--timestamp={}'.format(timestamp),
30 |
31 | '--dataset=Office-31',
32 | '--source_path=data/{}.txt'.format(src),
33 | '--target_path=data/{}.txt'.format(tgt),
34 | '--batch_size=32',
35 |
36 | '--lr=0.001',
37 | '--train_epoch=10',
38 | '--save_epoch=10',
39 | '--eval_epoch=2',
40 | '--iters_per_epoch=1000',
41 |
42 | '--use_tensorboard=False',
43 | '--use_file_logger=True',
44 | '--log_dir=logs/ViTgrl']
45 | train_main(args, header('\n\t\t'.join(args), hostName, pid))
--------------------------------------------------------------------------------
/main_ViT_baseline.office_home.py:
--------------------------------------------------------------------------------
1 | from trainer.train import train_main
2 | import time
3 | import socket
4 | import os
5 |
6 | timestamp = time.strftime("%Y-%m-%d_%H.%M.%S", time.localtime())
7 | hostName = socket.gethostname()
8 | pid = os.getpid()
9 |
10 | domains = ['Product', 'Clipart', 'Art', 'Real_World']
11 |
12 | for src in domains:
13 | for tgt in domains:
14 |
15 | if src == tgt:
16 | continue
17 |
18 | header = '''
19 | ++++++++++++++++++++++++++++++++++++++++++++++++
20 | {}
21 | ++++++++++++++++++++++++++++++++++++++++++++++++
22 | @{}:{}
23 | '''.format
24 |
25 | args = ['--model=ViTgrl',
26 | '--base_net=vit_base_patch16_224',
27 |
28 | '--gpu=0',
29 | '--timestamp={}'.format(timestamp),
30 |
31 | '--dataset=Office-Home',
32 | '--source_path=data/{}.txt'.format(src),
33 | '--target_path=data/{}.txt'.format(tgt),
34 | '--batch_size=32',
35 |
36 | '--lr=0.004',
37 | '--train_epoch=20',
38 | '--save_epoch=20',
39 | '--eval_epoch=5',
40 | '--iters_per_epoch=1000',
41 |
42 | '--use_tensorboard=False',
43 | '--use_file_logger=True',
44 | '--log_dir=logs/ViTgrl']
45 | train_main(args, header('\n\t\t'.join(args), hostName, pid))
--------------------------------------------------------------------------------
/main_ViT_baseline.visda.py:
--------------------------------------------------------------------------------
1 | from trainer.train import train_main
2 | import time
3 | import socket
4 | import os
5 |
6 | timestamp = time.strftime("%Y-%m-%d_%H.%M.%S", time.localtime())
7 | hostName = socket.gethostname()
8 | pid = os.getpid()
9 |
10 | header = '''
11 | ++++++++++++++++++++++++++++++++++++++++++++++++
12 | {}
13 | ++++++++++++++++++++++++++++++++++++++++++++++++
14 | @{}:{}
15 | '''.format
16 |
17 | args = ['--model=ViTgrl',
18 | '--base_net=vit_base_patch16_224',
19 |
20 | '--gpu=0',
21 | '--timestamp={}'.format(timestamp),
22 |
23 | '--dataset=visda',
24 | '--source_path=data/VisDA2017_train.txt',
25 | '--target_path=data/VisDA2017_valid.txt',
26 | '--batch_size=32',
27 |
28 | '--lr=0.002',
29 | '--train_epoch=20',
30 | '--save_epoch=20',
31 | '--eval_epoch=5',
32 | '--iters_per_epoch=1000',
33 |
34 | '--use_tensorboard=False',
35 | '--use_file_logger=True',
36 |         '--log_dir=logs/ViTgrl']
37 | train_main(args, header('\n\t'.join(args), hostName, pid))
--------------------------------------------------------------------------------
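Each launcher script above (and the SSRT variants listed in the tree) follows the same pattern: assemble a list of CLI-style flags and hand it to `train_main` together with a log-header string. As a minimal sketch of a single run, assuming the repository root is the working directory and the Office-31 image lists under `data/` are in place, an amazon -> webcam baseline could be launched as follows (the flag values mirror `main_ViT_baseline.office31.py`; the header string is only printed to the log):

```python
import time
from trainer.train import train_main

timestamp = time.strftime("%Y-%m-%d_%H.%M.%S", time.localtime())

args = ['--model=ViTgrl',
        '--base_net=vit_base_patch16_224',
        '--gpu=0',
        '--timestamp={}'.format(timestamp),
        '--dataset=Office-31',
        '--source_path=data/amazon.txt',
        '--target_path=data/webcam.txt',
        '--batch_size=32',
        '--lr=0.001',
        '--train_epoch=10',
        '--save_epoch=10',
        '--eval_epoch=2',
        '--iters_per_epoch=1000',
        '--use_tensorboard=False',
        '--use_file_logger=True',
        '--log_dir=logs/ViTgrl']

train_main(args, 'amazon -> webcam baseline')  # second argument is just a header string for the log
```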
/model/SSRT.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 |
5 | from functools import partial
6 | import random
7 | import numpy as np
8 | import logging
9 |
10 | from model.ViT import Block, PatchEmbed, VisionTransformer, vit_model
11 | from model.grl import WarmStartGradientReverseLayer
12 |
13 | class VT(VisionTransformer):
14 | def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
15 | num_heads=12, mlp_ratio=4., qkv_bias=True,
16 | drop_rate=0., attn_drop_rate=0., drop_path_rate=0., distilled=False,
17 | args=None):
18 |
19 | super(VisionTransformer, self).__init__()
20 | self.num_classes = num_classes
21 | self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
22 | norm_layer = partial(nn.LayerNorm, eps=1e-6)
23 | self.distilled = distilled
24 |
25 | self.patch_embed = PatchEmbed(
26 | img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
27 | num_patches = self.patch_embed.num_patches
28 |
29 | self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
30 | if distilled:
31 | self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
32 | self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 2, embed_dim))
33 | else:
34 | self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
35 | self.pos_drop = nn.Dropout(p=drop_rate)
36 |
37 | dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
38 | self.blocks = nn.Sequential(*[
39 | Block(
40 | dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias,
41 | drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer)
42 | for i in range(depth)])
43 | self.norm = norm_layer(embed_dim)
44 |
45 | self.pre_logits = nn.Identity()
46 | self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()
47 | if distilled:
48 | self.head_dist = nn.Linear(self.embed_dim, self.num_classes) if num_classes > 0 else nn.Identity()
49 |
50 | self.sr_alpha = args.sr_alpha
51 | self.sr_layers = args.sr_layers
52 | self.sr_alpha_adap = self.sr_alpha
53 | self.iter_num = 0
54 |
55 |
56 | def forward_features(self, x):
57 | B = x.shape[0]
58 |
59 | if self.training and len(self.sr_layers) > 0:
60 | perturb_layer = random.choice(self.sr_layers)
61 | else:
62 | perturb_layer = None
63 |
64 | # perturbing raw input image
65 | if perturb_layer == -1:
66 |             idx = torch.flip(torch.arange(B // 2, B), dims=[0])  # reversed indices of the target half of the batch
67 |             xm = x[B // 2:] + (x[idx] - x[B // 2:]).detach() * self.sr_alpha_adap  # offset each target image towards another target image; the offset is detached so no gradient flows through it
68 |             x = torch.cat((x, xm))  # append the perturbed target images to the batch
69 |
70 | y = self.patch_embed(x)
71 |
72 | cls_tokens = self.cls_token.expand(y.shape[0], -1, -1) # stole cls_tokens impl from Phil Wang, thanks
73 |
74 | if self.distilled:
75 | dist_tokens = self.dist_token.expand(y.shape[0], -1, -1)
76 | y = torch.cat((cls_tokens, dist_tokens, y), dim=1)
77 | else:
78 | y = torch.cat((cls_tokens, y), dim=1)
79 | y = y + self.pos_embed
80 | y = self.pos_drop(y)
81 |
82 |
83 | for layer, blk in enumerate(self.blocks):
84 | if self.training:
85 |                 if layer == perturb_layer:  # inject the perturbation just before this transformer block
86 | idx = torch.flip(torch.arange(B // 2, B), dims=[0])
87 | ym = y[B // 2:] + (y[idx]-y[B // 2:]).detach() * self.sr_alpha_adap
88 | y = torch.cat((y, ym))
89 | y = blk(y)
90 | else:
91 | y = blk(y)
92 |
93 | y = self.norm(y)
94 | y = y[:, 0]
95 | self.iter_num += 1
96 |
97 | return y
98 |
99 |
100 | class SSRTNet(nn.Module):
101 | def __init__(self, base_net='vit_base_patch16_224', use_bottleneck=True, bottleneck_dim=1024, width=1024, class_num=31, args=None):
102 | super(SSRTNet, self).__init__()
103 |
104 | self.base_network = vit_model[base_net](pretrained=True, args=args, VisionTransformerModule=VT)
105 | self.use_bottleneck = use_bottleneck
106 | self.grl = WarmStartGradientReverseLayer(alpha=1.0, lo=0.0, hi=0.1, max_iters=1000, auto_step=True)
107 | if self.use_bottleneck:
108 | self.bottleneck_layer = [nn.Linear(self.base_network.embed_dim, bottleneck_dim), nn.BatchNorm1d(bottleneck_dim), nn.ReLU(), nn.Dropout(0.5)]
109 | self.bottleneck = nn.Sequential(*self.bottleneck_layer)
110 |
111 | classifier_dim = bottleneck_dim if use_bottleneck else self.base_network.embed_dim
112 | self.classifier_layer = [nn.Linear(classifier_dim, width), nn.ReLU(), nn.Dropout(0.5), nn.Linear(width, class_num)]
113 | self.classifier = nn.Sequential(*self.classifier_layer)
114 |
115 | self.discriminator_layer = [nn.Linear(classifier_dim, width), nn.ReLU(), nn.Dropout(0.5), nn.Linear(width, 1)]
116 | self.discriminator = nn.Sequential(*self.discriminator_layer)
117 |
118 | if self.use_bottleneck:
119 | self.bottleneck[0].weight.data.normal_(0, 0.005)
120 | self.bottleneck[0].bias.data.fill_(0.1)
121 |
122 | for dep in range(2):
123 | self.discriminator[dep * 3].weight.data.normal_(0, 0.01)
124 | self.discriminator[dep * 3].bias.data.fill_(0.0)
125 | self.classifier[dep * 3].weight.data.normal_(0, 0.01)
126 | self.classifier[dep * 3].bias.data.fill_(0.0)
127 |
128 | self.parameter_list = [
129 | {"params":self.base_network.parameters(), "lr":0.1},
130 | {"params":self.classifier.parameters(), "lr":1},
131 | {"params":self.discriminator.parameters(), "lr":1}]
132 | if self.use_bottleneck:
133 | self.parameter_list.extend([{"params":self.bottleneck.parameters(), "lr":1}])
134 |
135 |
136 | def forward(self, inputs):
137 | features = self.base_network.forward_features(inputs)
138 | if self.use_bottleneck:
139 | features = self.bottleneck(features)
140 |
141 | outputs_dc = self.discriminator(self.grl(features))
142 | outputs = self.classifier(features)
143 |
144 | if self.training:
145 | return features, outputs, outputs_dc
146 | else:
147 | return outputs
148 |
149 | class SSRT(object):
150 | def __init__(self, base_net='vit_base_patch16_224', bottleneck_dim=1024, class_num=31, use_gpu=True, args=None):
151 | self.net = SSRTNet(base_net, args.use_bottleneck, bottleneck_dim, bottleneck_dim, class_num, args)
152 |
153 | self.use_gpu = use_gpu
154 | self.is_train = False
155 | self.iter_num = 0
156 | self.class_num = class_num
157 | if self.use_gpu:
158 | self.net = self.net.cuda()
159 |
160 | self.use_safe_training = args.use_safe_training
161 | self.sr_loss_weight = args.sr_loss_weight
162 | self.sr_loss_weight_adap = self.sr_loss_weight
163 |
164 | if self.use_safe_training:
165 | self.snap_shot = None
166 | self.restore = False
167 | self.r = 0.0
168 | self.r_period = args.adap_adjust_T
169 | self.r_phase = 0
170 | self.r_mag = 1.0
171 | self.adap_adjust_T = args.adap_adjust_T
172 | self.adap_adjust_L = args.adap_adjust_L
173 | self.adap_adjust_append_last_subintervals = args.adap_adjust_append_last_subintervals
174 | self.adap_adjust_last_restore_iter = 0
175 | self.divs = []
176 | self.divs_last_period = None
177 |
178 |
179 | def to_dicts(self):
180 | return self.net.state_dict()
181 |
182 | def from_dicts(self, dicts):
183 | self.net.load_state_dict(dicts, strict=False)
184 |
185 | def get_adjust(self, iter):
186 | if iter >= self.r_period+self.r_phase:
187 | return self.r_mag
188 | return np.sin((iter-self.r_phase)/self.r_period*np.pi/2) * self.r_mag
189 |
190 | def save_snapshot(self):
191 | self.snap_shot = self.net.state_dict()
192 |
193 | def restore_snapshot(self):
194 | self.net.load_state_dict(self.snap_shot)
195 | self.adap_adjust_last_restore_iter = self.iter_num
196 |
197 | def check_div_drop(self):
198 | flag = False
199 |
200 | for l in range(self.adap_adjust_L+1):
201 |             chunk = np.power(2, l)  # split the recorded diversity values of this period into 2^l sub-intervals
202 | divs_ = np.array_split(np.array(self.divs), chunk)
203 | divs_ = [d.mean() for d in divs_]
204 |
205 | if self.adap_adjust_append_last_subintervals and self.divs_last_period is not None:
206 | divs_last_period = np.array_split(np.array(self.divs_last_period), chunk)
207 | divs_last_period = [d.mean() for d in divs_last_period]
208 | divs_.insert(0, divs_last_period[-1])
209 |
210 | for i in range(len(divs_)-1):
211 |                 if divs_[i+1] < divs_[i] - 1.0:  # prediction diversity dropped between consecutive sub-intervals
212 | flag = True
213 |
214 | if self.r <= 0.1:
215 | flag = False
216 |
217 | if flag:
218 | self.restore = True
219 | self.r_phase = self.iter_num
220 | if self.iter_num - self.adap_adjust_last_restore_iter <= self.r_period:
221 | self.r_period *= 2
222 |
223 |
224 | def get_sr_loss(self, out1, out2, sr_epsilon=0.4, sr_loss_p=0.5, args=None):
225 | prob1_t = F.softmax(out1, dim=1)
226 | prob2_t = F.softmax(out2, dim=1)
227 |
228 | prob1 = F.softmax(out1, dim=1)
229 | log_prob1 = F.log_softmax(out1, dim=1)
230 | prob2 = F.softmax(out2, dim=1)
231 | log_prob2 = F.log_softmax(out2, dim=1)
232 |
233 | if random.random() <= sr_loss_p:
234 | log_prob2 = F.log_softmax(out2, dim=1)
235 | mask1 = (prob1_t.max(-1)[0] > sr_epsilon).float()
236 | aug_loss = ((prob1 * (log_prob1 - log_prob2)).sum(-1) * mask1).sum() / (mask1.sum() + 1e-6)
237 | else:
238 | log_prob1 = F.log_softmax(out1, dim=1)
239 | mask2 = (prob2_t.max(-1)[0] > sr_epsilon).float()
240 | aug_loss = ((prob2 * (log_prob2 - log_prob1)).sum(-1) * mask2).sum() / (mask2.sum()+1e-6)
241 |
242 | if args.use_safe_training:
243 | self.r = self.get_adjust(self.iter_num)
244 | self.net.base_network.sr_alpha_adap = self.net.base_network.sr_alpha * self.r
245 | self.sr_loss_weight_adap = self.sr_loss_weight * self.r
246 |
247 |             div_unique = prob1.argmax(-1).unique().shape[0]  # number of distinct classes predicted for the target batch
248 | self.divs.append(div_unique)
249 |
250 | if (self.iter_num+1) % self.adap_adjust_T == 0 and self.iter_num > 0:
251 | self.check_div_drop()
252 | if not self.restore:
253 | self.divs_last_period = self.divs
254 |
255 | if args.use_tensorboard:
256 | args.writer.add_scalar('div_unique', div_unique, self.iter_num)
257 | args.writer.flush()
258 |
259 | return aug_loss
260 |
261 |
262 | def get_loss(self, inputs_source, inputs_target, labels_source, labels_target=None, args=None):
263 | if self.use_safe_training:
264 | if self.restore and self.iter_num > 0 and self.sr_loss_weight > 0:
265 | self.restore_snapshot()
266 | self.restore = False
267 | logging.info('Train iter={}:restore model snapshot:r={}'.format(self.iter_num, self.r))
268 |
269 | if self.iter_num % self.adap_adjust_T == 0 and self.sr_loss_weight > 0:
270 | self.save_snapshot()
271 | self.divs = []
272 | logging.info('Train iter={}:save model snapshot:r={}'.format(self.iter_num, self.r))
273 |
274 | inputs = torch.cat((inputs_source, inputs_target))
275 | _, outputs, outputs_dc = self.net(inputs)
276 |
277 | classification_loss = nn.CrossEntropyLoss()(outputs.narrow(0, 0, labels_source.size(0)), labels_source)
278 |
279 | domain_loss = 0.
280 | if args.domain_loss_weight > 0:
281 | domain_labels = torch.cat(
282 | (torch.ones(inputs_source.shape[0], device=inputs.device, dtype=torch.float),
283 | torch.zeros(inputs_target.shape[0], device=inputs.device, dtype=torch.float)),
284 | 0)
285 |             domain_loss = nn.BCELoss()(torch.sigmoid(outputs_dc.narrow(0, 0, inputs.size(0))).squeeze(), domain_labels) * 2
286 |
287 | total_loss = classification_loss * args.classification_loss_weight + domain_loss * args.domain_loss_weight
288 |
289 | sr_loss = 0.
290 | if args.sr_loss_weight > 0:
291 | outputs_tgt = outputs.narrow(0, labels_source.size(0), inputs.size(0)-labels_source.size(0))
292 | outputs_tgt_perturb = outputs.narrow(0, inputs.size(0),
293 | inputs.size(0) - labels_source.size(0))
294 |
295 | sr_loss = self.get_sr_loss(outputs_tgt, outputs_tgt_perturb, sr_epsilon=args.sr_epsilon,
296 | sr_loss_p=args.sr_loss_p, args=args)
297 | total_loss += self.sr_loss_weight_adap * sr_loss
298 |
299 | # mi loss
300 | if args.mi_loss_weight > 0:
301 | softmax_out = F.softmax(
302 |                 outputs.narrow(0, labels_source.size(0), inputs.size(0) - labels_source.size(0)), dim=1)
303 | entropy_loss = torch.mean(torch.sum(-softmax_out * torch.log(softmax_out+1e-6), dim=1))
304 | msoftmax = softmax_out.mean(dim=0)
305 | gentropy_loss = torch.sum(-msoftmax * torch.log(msoftmax + 1e-6))
306 | entropy_loss -= gentropy_loss
307 | total_loss += args.mi_loss_weight * entropy_loss
308 |
309 |
310 | if args.use_tensorboard:
311 | all_losses = {}
312 | all_losses.update({'classification_loss': classification_loss})
313 | all_losses.update({'domain_loss': domain_loss})
314 | all_losses.update({'sr_loss': sr_loss})
315 |
316 | for key, value in all_losses.items():
317 | if torch.is_tensor(value):
318 | args.writer.add_scalar(key, value.item(), self.iter_num)
319 | else:
320 | args.writer.add_scalar(key, value, self.iter_num)
321 |
322 | args.writer.add_scalar('sr_alpha_adap', self.net.base_network.sr_alpha_adap, self.iter_num)
323 | args.writer.add_scalar('sr_loss_weight_adap', self.sr_loss_weight_adap, self.iter_num)
324 |
325 | args.writer.flush()
326 |
327 | self.iter_num += 1
328 |
329 | return total_loss
330 |
331 |
332 | def predict(self, inputs, output='prob'):
333 | outputs = self.net(inputs)
334 | if output == 'prob':
335 |             softmax_outputs = F.softmax(outputs, dim=1)
336 | return softmax_outputs
337 | elif output == 'score':
338 | return outputs
339 | else:
340 | raise NotImplementedError('Invalid output')
341 |
342 | def get_parameter_list(self):
343 | return self.net.parameter_list
344 |
345 | def set_train(self, mode):
346 | self.net.train(mode)
347 | self.is_train = mode
348 |
--------------------------------------------------------------------------------
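The safe-training logic above scales both the perturbation magnitude (`sr_alpha_adap`) and the self-refinement loss weight (`sr_loss_weight_adap`) by a confidence factor r that ramps up sinusoidally (`get_adjust`); when `check_div_drop` detects a drop in prediction diversity, the model is restored from the last snapshot, the ramp restarts from the current iteration, and the period is doubled if restores occur within one period of each other. A standalone sketch of the ramp (the period of 1000 iterations below is only an illustrative value; in the code it is initialised from `--adap_adjust_T`):

```python
import numpy as np

# Sketch of SSRT's safe-training ramp r(t), mirroring SSRT.get_adjust above:
# r rises like sin(pi/2 * (t - phase) / period) from 0 to r_mag during one period,
# then stays at r_mag; sr_alpha_adap and sr_loss_weight_adap are multiplied by r.
def get_adjust(it, period=1000, phase=0, r_mag=1.0):
    if it >= period + phase:
        return r_mag
    return np.sin((it - phase) / period * np.pi / 2) * r_mag

for it in (0, 250, 500, 1000, 5000):
    print(it, round(get_adjust(it), 3))   # 0.0, 0.383, 0.707, 1.0, 1.0
```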
/model/ViT.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 |
5 | import math
6 | import logging
7 | from functools import partial
8 | from itertools import repeat
9 | import collections.abc
10 | from .helpers import _create_vision_transformer, trunc_normal_, _init_vit_weights, named_apply, _load_weights
11 |
12 | _logger = logging.getLogger(__name__)
13 |
14 | vit_model = {}
15 | def register_model(name):
16 | def re(cls):
17 | vit_model[name] = cls
18 | return cls
19 | return re
20 |
21 | def drop_path(x, drop_prob: float = 0., training: bool = False):
22 | """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
23 | This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
24 | the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
25 | See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
26 | changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
27 | 'survival rate' as the argument.
28 | """
29 | if drop_prob == 0. or not training:
30 | return x
31 | keep_prob = 1 - drop_prob
32 | shape = (x.shape[0],) + (1,) * (x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets
33 | random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
34 | random_tensor.floor_() # binarize
35 | output = x.div(keep_prob) * random_tensor
36 | return output
37 |
38 | class DropPath(nn.Module):
39 | """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
40 | """
41 | def __init__(self, drop_prob=None):
42 | super(DropPath, self).__init__()
43 | self.drop_prob = drop_prob
44 |
45 | def forward(self, x):
46 | return drop_path(x, self.drop_prob, self.training)
47 |
48 | def _ntuple(n):
49 | def parse(x):
50 | if isinstance(x, collections.abc.Iterable):
51 | return x
52 | return tuple(repeat(x, n))
53 | return parse
54 | to_2tuple = _ntuple(2)
55 |
56 | class PatchEmbed(nn.Module):
57 | """ 2D Image to Patch Embedding
58 | """
59 | def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768, norm_layer=None, flatten=True):
60 | super().__init__()
61 | img_size = to_2tuple(img_size)
62 | patch_size = to_2tuple(patch_size)
63 | self.img_size = img_size
64 | self.patch_size = patch_size
65 | self.grid_size = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
66 | self.num_patches = self.grid_size[0] * self.grid_size[1]
67 | self.flatten = flatten
68 |
69 | self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
70 | self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()
71 |
72 | def forward(self, x):
73 | B, C, H, W = x.shape
74 | assert H == self.img_size[0] and W == self.img_size[1], \
75 | f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
76 | x = self.proj(x)
77 | if self.flatten:
78 | x = x.flatten(2).transpose(1, 2) # BCHW -> BNC
79 | x = self.norm(x)
80 | return x
81 |
82 | class Mlp(nn.Module):
83 | def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
84 | super().__init__()
85 | out_features = out_features or in_features
86 | hidden_features = hidden_features or in_features
87 | self.fc1 = nn.Linear(in_features, hidden_features)
88 | self.act = act_layer()
89 | self.fc2 = nn.Linear(hidden_features, out_features)
90 | self.drop = nn.Dropout(drop)
91 |
92 | def forward(self, x):
93 | x = self.fc1(x)
94 | x = self.act(x)
95 | x = self.drop(x)
96 | x = self.fc2(x)
97 | x = self.drop(x)
98 | return x
99 |
100 | class Attention(nn.Module):
101 | def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.):
102 | super().__init__()
103 | self.num_heads = num_heads
104 | head_dim = dim // num_heads
105 | # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights
106 | self.scale = qk_scale or head_dim ** -0.5
107 |
108 | self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
109 | self.attn_drop = nn.Dropout(attn_drop)
110 | self.proj = nn.Linear(dim, dim)
111 | self.proj_drop = nn.Dropout(proj_drop)
112 |
113 | def forward(self, x):
114 | B, N, C = x.shape
115 | qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
116 | q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
117 |
118 | attn = (q @ k.transpose(-2, -1)) * self.scale
119 | attn = attn.softmax(dim=-1)
120 |
121 | attn = self.attn_drop(attn)
122 |
123 | x = (attn @ v).transpose(1, 2).reshape(B, N, C)
124 | x = self.proj(x)
125 | return x
126 |
127 | class Block(nn.Module):
128 |
129 | def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
130 | drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm, AttentionModule=Attention):
131 | super().__init__()
132 | self.norm1 = norm_layer(dim)
133 | self.attn = AttentionModule(
134 | dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
135 | # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
136 | self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
137 | self.norm2 = norm_layer(dim)
138 | mlp_hidden_dim = int(dim * mlp_ratio)
139 | self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
140 |
141 | def forward(self, x):
142 | y = self.attn(self.norm1(x))
143 |
144 | x = x + self.drop_path(y)
145 | x = x + self.drop_path(self.mlp(self.norm2(x)))
146 | return x
147 |
148 |
149 | class VisionTransformer(nn.Module):
150 | """ Vision Transformer
151 | A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale`
152 | - https://arxiv.org/abs/2010.11929
153 | Includes distillation token & head support for `DeiT: Data-efficient Image Transformers`
154 | - https://arxiv.org/abs/2012.12877
155 | """
156 |
157 | def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
158 | num_heads=12, mlp_ratio=4., qkv_bias=True, distilled=False,
159 | drop_rate=0., attn_drop_rate=0., drop_path_rate=0., embed_layer=PatchEmbed, norm_layer=None,
160 | act_layer=None, weight_init=''):
161 | """
162 | Args:
163 | img_size (int, tuple): input image size
164 | patch_size (int, tuple): patch size
165 | in_chans (int): number of input channels
166 | num_classes (int): number of classes for classification head
167 | embed_dim (int): embedding dimension
168 | depth (int): depth of transformer
169 | num_heads (int): number of attention heads
170 | mlp_ratio (int): ratio of mlp hidden dim to embedding dim
171 | qkv_bias (bool): enable bias for qkv if True
172 | distilled (bool): model includes a distillation token and head as in DeiT models
173 | drop_rate (float): dropout rate
174 | attn_drop_rate (float): attention dropout rate
175 | drop_path_rate (float): stochastic depth rate
176 | embed_layer (nn.Module): patch embedding layer
177 | norm_layer: (nn.Module): normalization layer
178 | weight_init: (str): weight init scheme
179 | """
180 | super().__init__()
181 | self.num_classes = num_classes
182 | self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
183 | self.num_tokens = 2 if distilled else 1
184 | norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
185 | act_layer = act_layer or nn.GELU
186 |
187 | self.patch_embed = embed_layer(
188 | img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
189 | num_patches = self.patch_embed.num_patches
190 |
191 | self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
192 | self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if distilled else None
193 | self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))
194 | self.pos_drop = nn.Dropout(p=drop_rate)
195 |
196 | dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
197 | self.blocks = nn.Sequential(*[
198 | Block(
199 | dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, drop=drop_rate,
200 | attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, act_layer=act_layer)
201 | for i in range(depth)])
202 | self.norm = norm_layer(embed_dim)
203 |
204 | # Classifier head(s)
205 | self.pre_logits = nn.Identity()
206 | self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()
207 | self.head_dist = None
208 | if distilled:
209 | self.head_dist = nn.Linear(self.embed_dim, self.num_classes) if num_classes > 0 else nn.Identity()
210 |
211 | self.init_weights(weight_init)
212 |
213 | def init_weights(self, mode=''):
214 | assert mode in ('jax', 'jax_nlhb', 'nlhb', '')
215 | head_bias = -math.log(self.num_classes) if 'nlhb' in mode else 0.
216 | trunc_normal_(self.pos_embed, std=.02)
217 | if self.dist_token is not None:
218 | trunc_normal_(self.dist_token, std=.02)
219 | if mode.startswith('jax'):
220 | # leave cls token as zeros to match jax impl
221 | named_apply(partial(_init_vit_weights, head_bias=head_bias, jax_impl=True), self)
222 | else:
223 | trunc_normal_(self.cls_token, std=.02)
224 | self.apply(_init_vit_weights)
225 |
226 | def _init_weights(self, m):
227 | # this fn left here for compat with downstream users
228 | _init_vit_weights(m)
229 |
230 | @torch.jit.ignore()
231 | def load_pretrained(self, checkpoint_path, prefix=''):
232 | _load_weights(self, checkpoint_path, prefix)
233 |
234 | @torch.jit.ignore
235 | def no_weight_decay(self):
236 | return {'pos_embed', 'cls_token', 'dist_token'}
237 |
238 | def get_classifier(self):
239 | if self.dist_token is None:
240 | return self.head
241 | else:
242 | return self.head, self.head_dist
243 |
244 | def reset_classifier(self, num_classes, global_pool=''):
245 | self.num_classes = num_classes
246 | self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
247 | if self.num_tokens == 2:
248 | self.head_dist = nn.Linear(self.embed_dim, self.num_classes) if num_classes > 0 else nn.Identity()
249 |
250 | def forward_features(self, x):
251 | x = self.patch_embed(x)
252 | cls_token = self.cls_token.expand(x.shape[0], -1, -1) # stole cls_tokens impl from Phil Wang, thanks
253 | if self.dist_token is None:
254 | x = torch.cat((cls_token, x), dim=1)
255 | else:
256 | x = torch.cat((cls_token, self.dist_token.expand(x.shape[0], -1, -1), x), dim=1)
257 | x = self.pos_drop(x + self.pos_embed)
258 | x = self.blocks(x)
259 | x = self.norm(x)
260 | return x[:, 0]
261 |
262 |
263 | @register_model('vit_base_patch16_224')
264 | def vit_base_patch16_224(pretrained=False, args=None, VisionTransformerModule=VisionTransformer):
265 | """ ViT-Base (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
266 | ImageNet-1k weights fine-tuned from in21k @ 224x224, source https://github.com/google-research/vision_transformer.
267 | """
268 | model_kwargs = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12, args=args, VisionTransformerModule=VisionTransformerModule)
269 | model = _create_vision_transformer('vit_base_patch16_224', pretrained=pretrained, **model_kwargs)
270 | return model
271 |
272 | @register_model('vit_small_patch16_224')
273 | def vit_small_patch16_224(pretrained=False, args=None, VisionTransformerModule=VisionTransformer):
274 | """ ViT-Small (ViT-S/16)
275 | NOTE I've replaced my previous 'small' model definition and weights with the small variant from the DeiT paper
276 | """
277 | model_kwargs = dict(patch_size=16, embed_dim=384, depth=12, num_heads=6, args=args, VisionTransformerModule=VisionTransformerModule)
278 | model = _create_vision_transformer('vit_small_patch16_224', pretrained=pretrained, **model_kwargs)
279 | return model
280 |
281 |
282 | class VT(VisionTransformer):
283 | def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
284 | num_heads=12, mlp_ratio=4., qkv_bias=True,
285 | drop_rate=0., attn_drop_rate=0., drop_path_rate=0., distilled=False,
286 | args=None):
287 |
288 | super(VisionTransformer, self).__init__()
289 | self.num_classes = num_classes
290 | self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
291 | norm_layer = partial(nn.LayerNorm, eps=1e-6)
292 | self.distilled = distilled
293 |
294 | self.patch_embed = PatchEmbed(
295 | img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
296 | num_patches = self.patch_embed.num_patches
297 |
298 | self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
299 | if distilled:
300 | self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
301 | self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 2, embed_dim))
302 | else:
303 | self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
304 | self.pos_drop = nn.Dropout(p=drop_rate)
305 |
306 | dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
307 | self.blocks = nn.Sequential(*[
308 | Block(
309 | dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias,
310 | drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer)
311 | for i in range(depth)])
312 | self.norm = norm_layer(embed_dim)
313 |
314 | self.pre_logits = nn.Identity()
315 | self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()
316 | if distilled:
317 | self.head_dist = nn.Linear(self.embed_dim, self.num_classes) if num_classes > 0 else nn.Identity()
318 |
319 |
320 | def forward_features(self, x):
321 | x = self.patch_embed(x)
322 | cls_tokens = self.cls_token.expand(x.shape[0], -1, -1) # stole cls_tokens impl from Phil Wang, thanks
323 |
324 | if self.distilled:
325 | dist_tokens = self.dist_token.expand(x.shape[0], -1, -1)
326 | x = torch.cat((cls_tokens, dist_tokens, x), dim=1)
327 | else:
328 | x = torch.cat((cls_tokens, x), dim=1)
329 | x = x + self.pos_embed
330 | x = self.pos_drop(x)
331 |
332 | for layer, blk in enumerate(self.blocks):
333 | x = blk(x)
334 |
335 | x = self.norm(x)
336 | x = x[:, 0]
337 |
338 | return x
339 |
340 |
341 | class ViTNet(nn.Module):
342 | def __init__(self, base_net='vit_base_patch16_224', use_bottleneck=True, bottleneck_dim=1024, width=1024, class_num=31, args=None):
343 | super(ViTNet, self).__init__()
344 |
345 | self.base_network = vit_model[base_net](pretrained=True, args=args, VisionTransformerModule=VT)
346 | self.use_bottleneck = use_bottleneck
347 |
348 | if self.use_bottleneck:
349 | self.bottleneck_layer = [nn.Linear(self.base_network.embed_dim, bottleneck_dim), nn.BatchNorm1d(bottleneck_dim), nn.ReLU(), nn.Dropout(0.5)]
350 | self.bottleneck = nn.Sequential(*self.bottleneck_layer)
351 |
352 | classifier_dim = bottleneck_dim if use_bottleneck else self.base_network.embed_dim
353 | self.classifier_layer = [nn.Linear(classifier_dim, width), nn.ReLU(), nn.Dropout(0.5), nn.Linear(width, class_num)]
354 | self.classifier = nn.Sequential(*self.classifier_layer)
355 |
356 | if self.use_bottleneck:
357 | self.bottleneck[0].weight.data.normal_(0, 0.005)
358 | self.bottleneck[0].bias.data.fill_(0.1)
359 |
360 | for dep in range(2):
361 | self.classifier[dep * 3].weight.data.normal_(0, 0.01)
362 | self.classifier[dep * 3].bias.data.fill_(0.0)
363 |
364 | self.parameter_list = [
365 | {"params":self.base_network.parameters(), "lr":0.1},
366 | {"params":self.classifier.parameters(), "lr":1}]
367 |
368 | if self.use_bottleneck:
369 | self.parameter_list.extend([{"params":self.bottleneck.parameters(), "lr":1}])
370 |
371 |
372 | def forward(self, inputs):
373 | features = self.base_network.forward_features(inputs)
374 | if self.use_bottleneck:
375 | features = self.bottleneck(features)
376 |
377 | outputs = self.classifier(features)
378 |
379 | return outputs
380 |
381 | class ViT(object):
382 | def __init__(self, base_net='vit_base_patch16_224', bottleneck_dim=1024, class_num=31, use_gpu=True, args=None):
383 |         self.c_net = ViTNet(base_net, args.use_bottleneck, bottleneck_dim, bottleneck_dim, class_num, args)
384 | self.use_gpu = use_gpu
385 | self.is_train = False
386 | self.class_num = class_num
387 | if self.use_gpu:
388 | self.c_net = self.c_net.cuda()
389 |
390 | def to_dicts(self):
391 | return self.c_net.state_dict()
392 |
393 | def from_dicts(self, dicts):
394 | self.c_net.load_state_dict(dicts)
395 |
396 | def get_loss(self, inputs_source, inputs_target, labels_source, labels_target, args=None):
397 |
398 | outputs = self.c_net(inputs_source)
399 |
400 | classifier_loss = nn.CrossEntropyLoss()(outputs, labels_source)
401 | total_loss = classifier_loss
402 |
403 | if args.use_tensorboard:
404 | all_losses = {}
405 | all_losses.update({'classifier_loss': classifier_loss})
406 |
407 | for key, value in all_losses.items():
408 | if torch.is_tensor(value):
409 | args.writer.add_scalar(key, value.item(), self.iter_num)
410 | else:
411 | args.writer.add_scalar(key, value, self.iter_num)
412 | args.writer.flush()
413 |
414 | return total_loss
415 |
416 | def __call__(self, inputs):
417 | return self.forward(inputs)
418 |
419 | def forward(self, inputs):
420 | outputs = self.c_net(inputs)
421 | return outputs
422 |
423 | def predict(self, inputs, domain='target', output='prob'):
424 | outputs = self.c_net(inputs)
425 | if output == 'prob':
426 |             softmax_outputs = F.softmax(outputs, dim=1)
427 | return softmax_outputs
428 | elif output == 'score':
429 | return outputs
430 | else:
431 | raise NotImplementedError('Invalid output')
432 |
433 | def get_parameter_list(self):
434 | return self.c_net.parameter_list
435 |
436 | def set_train(self, mode):
437 | self.c_net.train(mode)
438 | self.is_train = mode
439 |
--------------------------------------------------------------------------------
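For reference, the registered backbones can be instantiated directly from the `vit_model` registry above. A minimal shape-check sketch (using `pretrained=False` here only to skip the checkpoint download; the networks in this repository always pass `pretrained=True` and hand in the `VT` subclass as `VisionTransformerModule`):

```python
import torch
from model.ViT import vit_model, VT

# Build a ViT-B/16 backbone without loading the JAX weights and check the feature shape.
model = vit_model['vit_base_patch16_224'](pretrained=False, args=None, VisionTransformerModule=VT)
model.eval()

x = torch.randn(2, 3, 224, 224)     # two dummy 224x224 RGB images
feat = model.forward_features(x)    # class-token feature consumed by the bottleneck/classifier
print(feat.shape)                   # torch.Size([2, 768])
```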
/model/ViTgrl.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 |
5 | from model.ViT import VT, vit_model
6 | from model.grl import WarmStartGradientReverseLayer
7 |
8 |
9 | class ViTgrlNet(nn.Module):
10 | def __init__(self, base_net='vit_base_patch16_224', use_bottleneck=True, bottleneck_dim=1024, width=1024, class_num=31, args=None):
11 | super(ViTgrlNet, self).__init__()
12 |
13 | self.base_network = vit_model[base_net](pretrained=True, args=args, VisionTransformerModule=VT)
14 | self.use_bottleneck = use_bottleneck
15 | self.grl = WarmStartGradientReverseLayer(alpha=1.0, lo=0.0, hi=0.1, max_iters=1000, auto_step=True)
16 | if self.use_bottleneck:
17 | self.bottleneck_layer = [nn.Linear(self.base_network.embed_dim, bottleneck_dim), nn.BatchNorm1d(bottleneck_dim), nn.ReLU(), nn.Dropout(0.5)]
18 | self.bottleneck = nn.Sequential(*self.bottleneck_layer)
19 |
20 | classifier_dim = bottleneck_dim if use_bottleneck else self.base_network.embed_dim
21 | self.classifier_layer = [nn.Linear(classifier_dim, width), nn.ReLU(), nn.Dropout(0.5), nn.Linear(width, class_num)]
22 | self.classifier = nn.Sequential(*self.classifier_layer)
23 |
24 | self.discriminator_layer = [nn.Linear(classifier_dim, width), nn.ReLU(), nn.Dropout(0.5), nn.Linear(width, 1)]
25 | self.discriminator = nn.Sequential(*self.discriminator_layer)
26 |
27 | if self.use_bottleneck:
28 | self.bottleneck[0].weight.data.normal_(0, 0.005)
29 | self.bottleneck[0].bias.data.fill_(0.1)
30 |
31 | for dep in range(2):
32 | self.discriminator[dep * 3].weight.data.normal_(0, 0.01)
33 | self.discriminator[dep * 3].bias.data.fill_(0.0)
34 | self.classifier[dep * 3].weight.data.normal_(0, 0.01)
35 | self.classifier[dep * 3].bias.data.fill_(0.0)
36 |
37 | self.parameter_list = [
38 | {"params":self.base_network.parameters(), "lr":0.1},
39 | {"params":self.classifier.parameters(), "lr":1},
40 | {"params":self.discriminator.parameters(), "lr":1}]
41 | if self.use_bottleneck:
42 | self.parameter_list.extend([{"params":self.bottleneck.parameters(), "lr":1}])
43 |
44 |
45 | def forward(self, inputs):
46 | features = self.base_network.forward_features(inputs)
47 | if self.use_bottleneck:
48 | features = self.bottleneck(features)
49 |
50 | outputs = self.classifier(features)
51 |
52 | if self.training:
53 | outputs_dc = self.discriminator(self.grl(features))
54 | return outputs, outputs_dc
55 | else:
56 | return outputs
57 |
58 | class ViTgrl(object):
59 | def __init__(self, base_net='vit_base_patch16_224', bottleneck_dim=1024, class_num=31, use_gpu=True, args=None):
60 | self.c_net = ViTgrlNet(base_net, args.use_bottleneck, bottleneck_dim, bottleneck_dim, class_num, args)
61 | self.use_gpu = use_gpu
62 | self.is_train = False
63 | self.iter_num = 0
64 | self.class_num = class_num
65 | if self.use_gpu:
66 | self.c_net = self.c_net.cuda()
67 |
68 | def to_dicts(self):
69 | return self.c_net.state_dict()
70 |
71 | def from_dicts(self, dicts):
72 | self.c_net.load_state_dict(dicts)
73 |
74 | def get_loss(self, inputs_source, inputs_target, labels_source, labels_target, args=None):
75 |
76 | inputs = torch.cat((inputs_source, inputs_target))
77 | outputs, outputs_dc = self.c_net(inputs)
78 |
79 | classification_loss = nn.CrossEntropyLoss()(outputs.narrow(0, 0, labels_source.size(0)), labels_source)
80 |
81 | domain_loss = 0.
82 | if args.domain_loss_weight > 0:
83 | domain_labels = torch.cat(
84 | (torch.ones(inputs.shape[0] // 2, device=inputs.device, dtype=torch.float),
85 | torch.zeros(inputs.shape[0] // 2, device=inputs.device, dtype=torch.float)),
86 | 0)
87 |             domain_loss = nn.BCELoss()(torch.sigmoid(outputs_dc).squeeze(), domain_labels) * 2
88 |
89 | self.iter_num += 1
90 |
91 | total_loss = classification_loss * args.classification_loss_weight + domain_loss * args.domain_loss_weight
92 |
93 | if args.use_tensorboard:
94 | all_losses = {}
95 | all_losses.update({'classification_loss': classification_loss})
96 | all_losses.update({'domain_loss': domain_loss})
97 |
98 | for key, value in all_losses.items():
99 | if torch.is_tensor(value):
100 | args.writer.add_scalar(key, value.item(), self.iter_num)
101 | else:
102 | args.writer.add_scalar(key, value, self.iter_num)
103 | args.writer.flush()
104 |
105 | return total_loss
106 |
107 | def __call__(self, inputs):
108 | return self.forward(inputs)
109 |
110 | def forward(self, inputs):
111 | outputs = self.c_net(inputs)
112 | return outputs
113 |
114 | def predict(self, inputs, domain='target', output='prob'):
115 | outputs = self.c_net(inputs)
116 | if output == 'prob':
117 |             softmax_outputs = F.softmax(outputs, dim=1)
118 | return softmax_outputs
119 | elif output == 'score':
120 | return outputs
121 | else:
122 | raise NotImplementedError('Invalid output')
123 |
124 | def get_parameter_list(self):
125 | return self.c_net.parameter_list
126 |
127 | def set_train(self, mode):
128 | self.c_net.train(mode)
129 | self.is_train = mode
130 |
--------------------------------------------------------------------------------
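The adversarial term in `ViTgrl.get_loss` (and the analogous one in `SSRT.get_loss`) labels the source half of the batch 1 and the target half 0 for a single-logit discriminator that sits behind the gradient reversal layer. A toy sketch of that BCE term with stand-in logits (the batch of 4 source + 4 target samples is purely illustrative):

```python
import torch
import torch.nn as nn

logits_dc = torch.randn(8, 1)                               # stand-in discriminator outputs for 4 source + 4 target samples
domain_labels = torch.cat((torch.ones(4), torch.zeros(4)))  # source -> 1, target -> 0
domain_loss = nn.BCELoss()(torch.sigmoid(logits_dc).squeeze(), domain_labels) * 2
print(domain_loss.item())
```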
/model/grl.py:
--------------------------------------------------------------------------------
1 | from typing import Optional, Any, Tuple
2 | import numpy as np
3 | import torch.nn as nn
4 | from torch.autograd import Function
5 | import torch
6 |
7 |
8 | class GradientReverseFunction(Function):
9 |
10 | @staticmethod
11 | def forward(ctx: Any, input: torch.Tensor, coeff: Optional[float] = 1.) -> torch.Tensor:
12 | ctx.coeff = coeff
13 | output = input * 1.0
14 | return output
15 |
16 | @staticmethod
17 | def backward(ctx: Any, grad_output: torch.Tensor) -> Tuple[torch.Tensor, Any]:
18 | return grad_output.neg() * ctx.coeff, None
19 |
20 |
21 | class GradientReverseLayer(nn.Module):
22 | def __init__(self):
23 | super(GradientReverseLayer, self).__init__()
24 |
25 | def forward(self, *input):
26 | return GradientReverseFunction.apply(*input)
27 |
28 |
29 | class WarmStartGradientReverseLayer(nn.Module):
30 | """Gradient Reverse Layer :math:`\mathcal{R}(x)` with warm start
31 |
32 | The forward and backward behaviours are:
33 |
34 | .. math::
35 | \mathcal{R}(x) = x,
36 |
37 | \dfrac{ d\mathcal{R}} {dx} = - \lambda I.
38 |
39 | :math:`\lambda` is initiated at :math:`lo` and is gradually changed to :math:`hi` using the following schedule:
40 |
41 | .. math::
42 | \lambda = \dfrac{2(hi-lo)}{1+\exp(- α \dfrac{i}{N})} - (hi-lo) + lo
43 |
44 | where :math:`i` is the iteration step.
45 |
46 | Args:
47 | alpha (float, optional): :math:`α`. Default: 1.0
48 | lo (float, optional): Initial value of :math:`\lambda`. Default: 0.0
49 | hi (float, optional): Final value of :math:`\lambda`. Default: 1.0
50 | max_iters (int, optional): :math:`N`. Default: 1000
51 | auto_step (bool, optional): If True, increase :math:`i` each time `forward` is called.
52 | Otherwise use function `step` to increase :math:`i`. Default: False
53 | """
54 |
55 | def __init__(self, alpha: Optional[float] = 1.0, lo: Optional[float] = 0.0, hi: Optional[float] = 1.,
56 | max_iters: Optional[int] = 1000., auto_step: Optional[bool] = False):
57 | super(WarmStartGradientReverseLayer, self).__init__()
58 | self.alpha = alpha
59 | self.lo = lo
60 | self.hi = hi
61 | self.iter_num = 0
62 | self.max_iters = max_iters
63 | self.auto_step = auto_step
64 |
65 | def forward(self, input: torch.Tensor) -> torch.Tensor:
66 | """"""
67 |         coeff = float(
68 | 2.0 * (self.hi - self.lo) / (1.0 + np.exp(-self.alpha * self.iter_num / self.max_iters))
69 | - (self.hi - self.lo) + self.lo
70 | )
71 | if self.auto_step:
72 | self.step()
73 | return GradientReverseFunction.apply(input, coeff)
74 |
75 | def step(self):
76 | """Increase iteration number :math:`i` by 1"""
77 | self.iter_num += 1
78 |
--------------------------------------------------------------------------------
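To make the gradient reversal concrete: the function is the identity in the forward pass and multiplies incoming gradients by `-coeff` in the backward pass; `WarmStartGradientReverseLayer` only schedules that coefficient from `lo` towards `hi` over `max_iters` calls. A tiny sketch:

```python
import torch
from model.grl import GradientReverseFunction

x = torch.ones(3, requires_grad=True)
y = GradientReverseFunction.apply(x, 0.5)   # forward pass: y == x
y.sum().backward()
print(x.grad)                               # tensor([-0.5000, -0.5000, -0.5000]); gradients are flipped and scaled
```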
/model/helpers.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 | from torch.nn.init import _calculate_fan_in_and_fan_out
5 |
6 | import logging
7 | import os
8 | from typing import Optional, Callable
9 | import math
10 | import warnings
11 |
12 |
13 | _logger = logging.getLogger(__name__)
14 | from torch.hub import download_url_to_file, urlparse, HASH_REGEX
15 | try:
16 | from torch.hub import get_dir
17 | except ImportError:
18 | from torch.hub import _get_torch_home as get_dir
19 |
20 |
21 | def _no_grad_trunc_normal_(tensor, mean, std, a, b):
22 | # Cut & paste from PyTorch official master until it's in a few official releases - RW
23 | # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
24 | def norm_cdf(x):
25 | # Computes standard normal cumulative distribution function
26 | return (1. + math.erf(x / math.sqrt(2.))) / 2.
27 |
28 | if (mean < a - 2 * std) or (mean > b + 2 * std):
29 | warnings.warn("mean is more than 2 std from [a, b] in nn.init.trunc_normal_. "
30 | "The distribution of values may be incorrect.",
31 | stacklevel=2)
32 |
33 | with torch.no_grad():
34 | # Values are generated by using a truncated uniform distribution and
35 | # then using the inverse CDF for the normal distribution.
36 | # Get upper and lower cdf values
37 | l = norm_cdf((a - mean) / std)
38 | u = norm_cdf((b - mean) / std)
39 |
40 | # Uniformly fill tensor with values from [l, u], then translate to
41 | # [2l-1, 2u-1].
42 | tensor.uniform_(2 * l - 1, 2 * u - 1)
43 |
44 | # Use inverse cdf transform for normal distribution to get truncated
45 | # standard normal
46 | tensor.erfinv_()
47 |
48 | # Transform to proper mean, std
49 | tensor.mul_(std * math.sqrt(2.))
50 | tensor.add_(mean)
51 |
52 | # Clamp to ensure it's in the proper range
53 | tensor.clamp_(min=a, max=b)
54 | return tensor
55 |
56 |
57 | def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.):
58 | # type: (Tensor, float, float, float, float) -> Tensor
59 | r"""Fills the input Tensor with values drawn from a truncated
60 | normal distribution. The values are effectively drawn from the
61 | normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
62 | with values outside :math:`[a, b]` redrawn until they are within
63 | the bounds. The method used for generating the random values works
64 | best when :math:`a \leq \text{mean} \leq b`.
65 | Args:
66 | tensor: an n-dimensional `torch.Tensor`
67 | mean: the mean of the normal distribution
68 | std: the standard deviation of the normal distribution
69 | a: the minimum cutoff value
70 | b: the maximum cutoff value
71 | Examples:
72 | >>> w = torch.empty(3, 5)
73 | >>> nn.init.trunc_normal_(w)
74 | """
75 | return _no_grad_trunc_normal_(tensor, mean, std, a, b)
76 |
77 | def variance_scaling_(tensor, scale=1.0, mode='fan_in', distribution='normal'):
78 | fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
79 | if mode == 'fan_in':
80 | denom = fan_in
81 | elif mode == 'fan_out':
82 | denom = fan_out
83 | elif mode == 'fan_avg':
84 | denom = (fan_in + fan_out) / 2
85 |
86 | variance = scale / denom
87 |
88 | if distribution == "truncated_normal":
89 | # constant is stddev of standard normal truncated to (-2, 2)
90 | trunc_normal_(tensor, std=math.sqrt(variance) / .87962566103423978)
91 | elif distribution == "normal":
92 | tensor.normal_(std=math.sqrt(variance))
93 | elif distribution == "uniform":
94 | bound = math.sqrt(3 * variance)
95 | tensor.uniform_(-bound, bound)
96 | else:
97 | raise ValueError(f"invalid distribution {distribution}")
98 |
99 |
100 | def lecun_normal_(tensor):
101 | variance_scaling_(tensor, mode='fan_in', distribution='truncated_normal')
102 |
103 | def _init_vit_weights(module: nn.Module, name: str = '', head_bias: float = 0., jax_impl: bool = False):
104 | """ ViT weight initialization
105 | * When called without n, head_bias, jax_impl args it will behave exactly the same
106 | as my original init for compatibility with prev hparam / downstream use cases (ie DeiT).
107 | * When called w/ valid n (module name) and jax_impl=True, will (hopefully) match JAX impl
108 | """
109 | if isinstance(module, nn.Linear):
110 | if name.startswith('head'):
111 | nn.init.zeros_(module.weight)
112 | nn.init.constant_(module.bias, head_bias)
113 | elif name.startswith('pre_logits'):
114 | lecun_normal_(module.weight)
115 | nn.init.zeros_(module.bias)
116 | else:
117 | if jax_impl:
118 | nn.init.xavier_uniform_(module.weight)
119 | if module.bias is not None:
120 | if 'mlp' in name:
121 | nn.init.normal_(module.bias, std=1e-6)
122 | else:
123 | nn.init.zeros_(module.bias)
124 | else:
125 | trunc_normal_(module.weight, std=.02)
126 | if module.bias is not None:
127 | nn.init.zeros_(module.bias)
128 | elif jax_impl and isinstance(module, nn.Conv2d):
129 | # NOTE conv was left to pytorch default in my original init
130 | lecun_normal_(module.weight)
131 | if module.bias is not None:
132 | nn.init.zeros_(module.bias)
133 | elif isinstance(module, (nn.LayerNorm, nn.GroupNorm, nn.BatchNorm2d)):
134 | nn.init.zeros_(module.bias)
135 | nn.init.ones_(module.weight)
136 |
137 | @torch.no_grad()
138 | def _load_weights(model, checkpoint_path, prefix=''):
139 | """ Load weights from .npz checkpoints for official Google Brain Flax implementation
140 | """
141 | import numpy as np
142 |
143 | def _n2p(w, t=True):
144 | if w.ndim == 4 and w.shape[0] == w.shape[1] == w.shape[2] == 1:
145 | w = w.flatten()
146 | if t:
147 | if w.ndim == 4:
148 | w = w.transpose([3, 2, 0, 1])
149 | elif w.ndim == 3:
150 | w = w.transpose([2, 0, 1])
151 | elif w.ndim == 2:
152 | w = w.transpose([1, 0])
153 | return torch.from_numpy(w)
154 |
155 | w = np.load(checkpoint_path)
156 | if not prefix and 'opt/target/embedding/kernel' in w:
157 | prefix = 'opt/target/'
158 |
159 | if hasattr(model.patch_embed, 'backbone'):
160 | # hybrid
161 | backbone = model.patch_embed.backbone
162 | stem_only = not hasattr(backbone, 'stem')
163 | stem = backbone if stem_only else backbone.stem
164 | stem.conv.weight.copy_(adapt_input_conv(stem.conv.weight.shape[1], _n2p(w[f'{prefix}conv_root/kernel'])))
165 | stem.norm.weight.copy_(_n2p(w[f'{prefix}gn_root/scale']))
166 | stem.norm.bias.copy_(_n2p(w[f'{prefix}gn_root/bias']))
167 | if not stem_only:
168 | for i, stage in enumerate(backbone.stages):
169 | for j, block in enumerate(stage.blocks):
170 | bp = f'{prefix}block{i + 1}/unit{j + 1}/'
171 | for r in range(3):
172 | getattr(block, f'conv{r + 1}').weight.copy_(_n2p(w[f'{bp}conv{r + 1}/kernel']))
173 | getattr(block, f'norm{r + 1}').weight.copy_(_n2p(w[f'{bp}gn{r + 1}/scale']))
174 | getattr(block, f'norm{r + 1}').bias.copy_(_n2p(w[f'{bp}gn{r + 1}/bias']))
175 | if block.downsample is not None:
176 | block.downsample.conv.weight.copy_(_n2p(w[f'{bp}conv_proj/kernel']))
177 | block.downsample.norm.weight.copy_(_n2p(w[f'{bp}gn_proj/scale']))
178 | block.downsample.norm.bias.copy_(_n2p(w[f'{bp}gn_proj/bias']))
179 | embed_conv_w = _n2p(w[f'{prefix}embedding/kernel'])
180 | else:
181 | embed_conv_w = adapt_input_conv(
182 | model.patch_embed.proj.weight.shape[1], _n2p(w[f'{prefix}embedding/kernel']))
183 | model.patch_embed.proj.weight.copy_(embed_conv_w)
184 | model.patch_embed.proj.bias.copy_(_n2p(w[f'{prefix}embedding/bias']))
185 | model.cls_token.copy_(_n2p(w[f'{prefix}cls'], t=False))
186 | pos_embed_w = _n2p(w[f'{prefix}Transformer/posembed_input/pos_embedding'], t=False)
187 | if pos_embed_w.shape != model.pos_embed.shape:
188 | pos_embed_w = resize_pos_embed( # resize pos embedding when different size from pretrained weights
189 | pos_embed_w, model.pos_embed, getattr(model, 'num_tokens', 1), model.patch_embed.grid_size)
190 | model.pos_embed.copy_(pos_embed_w)
191 | model.norm.weight.copy_(_n2p(w[f'{prefix}Transformer/encoder_norm/scale']))
192 | model.norm.bias.copy_(_n2p(w[f'{prefix}Transformer/encoder_norm/bias']))
193 | if isinstance(model.head, nn.Linear) and model.head.bias.shape[0] == w[f'{prefix}head/bias'].shape[-1]:
194 | model.head.weight.copy_(_n2p(w[f'{prefix}head/kernel']))
195 | model.head.bias.copy_(_n2p(w[f'{prefix}head/bias']))
196 | if isinstance(getattr(model.pre_logits, 'fc', None), nn.Linear) and f'{prefix}pre_logits/bias' in w:
197 | model.pre_logits.fc.weight.copy_(_n2p(w[f'{prefix}pre_logits/kernel']))
198 | model.pre_logits.fc.bias.copy_(_n2p(w[f'{prefix}pre_logits/bias']))
199 | for i, block in enumerate(model.blocks.children()):
200 | block_prefix = f'{prefix}Transformer/encoderblock_{i}/'
201 | mha_prefix = block_prefix + 'MultiHeadDotProductAttention_1/'
202 | block.norm1.weight.copy_(_n2p(w[f'{block_prefix}LayerNorm_0/scale']))
203 | block.norm1.bias.copy_(_n2p(w[f'{block_prefix}LayerNorm_0/bias']))
204 | block.attn.qkv.weight.copy_(torch.cat([
205 | _n2p(w[f'{mha_prefix}{n}/kernel'], t=False).flatten(1).T for n in ('query', 'key', 'value')]))
206 | block.attn.qkv.bias.copy_(torch.cat([
207 | _n2p(w[f'{mha_prefix}{n}/bias'], t=False).reshape(-1) for n in ('query', 'key', 'value')]))
208 | block.attn.proj.weight.copy_(_n2p(w[f'{mha_prefix}out/kernel']).flatten(1))
209 | block.attn.proj.bias.copy_(_n2p(w[f'{mha_prefix}out/bias']))
210 | for r in range(2):
211 | getattr(block.mlp, f'fc{r + 1}').weight.copy_(_n2p(w[f'{block_prefix}MlpBlock_3/Dense_{r}/kernel']))
212 | getattr(block.mlp, f'fc{r + 1}').bias.copy_(_n2p(w[f'{block_prefix}MlpBlock_3/Dense_{r}/bias']))
213 | block.norm2.weight.copy_(_n2p(w[f'{block_prefix}LayerNorm_2/scale']))
214 | block.norm2.bias.copy_(_n2p(w[f'{block_prefix}LayerNorm_2/bias']))
215 |
216 |
217 | def named_apply(fn: Callable, module: nn.Module, name='', depth_first=True, include_root=False) -> nn.Module:
218 | if not depth_first and include_root:
219 | fn(module=module, name=name)
220 | for child_name, child_module in module.named_children():
221 | child_name = '.'.join((name, child_name)) if name else child_name
222 | named_apply(fn=fn, module=child_module, name=child_name, depth_first=depth_first, include_root=True)
223 | if depth_first and include_root:
224 | fn(module=module, name=name)
225 | return module
226 |
227 | def get_cache_dir(child_dir=''):
228 | """
229 | Returns the location of the directory where models are cached (and creates it if necessary).
230 | """
231 | # Issue warning to move data if old env is set
232 | if os.getenv('TORCH_MODEL_ZOO'):
233 | _logger.warning('TORCH_MODEL_ZOO is deprecated, please use env TORCH_HOME instead')
234 |
235 | hub_dir = get_dir()
236 | child_dir = () if not child_dir else (child_dir,)
237 | model_dir = os.path.join(hub_dir, 'checkpoints', *child_dir)
238 | os.makedirs(model_dir, exist_ok=True)
239 | return model_dir
240 |
241 | def download_cached_file(url, check_hash=True, progress=False):
242 | parts = urlparse(url)
243 | filename = os.path.basename(parts.path)
244 | cached_file = os.path.join(get_cache_dir(), filename)
245 | if not os.path.exists(cached_file):
246 | _logger.info('Downloading: "{}" to {}\n'.format(url, cached_file))
247 | hash_prefix = None
248 | if check_hash:
249 | r = HASH_REGEX.search(filename) # r is Optional[Match[str]]
250 | hash_prefix = r.group(1) if r else None
251 | download_url_to_file(url, cached_file, hash_prefix, progress=progress)
252 | return cached_file
253 |
254 | def load_custom_pretrained(model, default_cfg=None, load_fn=None, progress=False, check_hash=False):
255 | r"""Loads a custom (read non .pth) weight file
256 | Downloads checkpoint file into cache-dir like torch.hub based loaders, but calls
257 |     a passed-in custom load function, or the `load_pretrained` model member fn.
258 | If the object is already present in `model_dir`, it's deserialized and returned.
259 |     The default value of `model_dir` is ``<hub_dir>/checkpoints`` where
260 | `hub_dir` is the directory returned by :func:`~torch.hub.get_dir`.
261 | Args:
262 | model: The instantiated model to load weights into
263 | default_cfg (dict): Default pretrained model cfg
264 | load_fn: An external stand alone fn that loads weights into provided model, otherwise a fn named
265 |             'load_pretrained' on the model will be called if it exists
266 | progress (bool, optional): whether or not to display a progress bar to stderr. Default: False
267 | check_hash(bool, optional): If True, the filename part of the URL should follow the naming convention
268 |             ``filename-<sha256>.ext`` where ``<sha256>`` is the first eight or more
269 | digits of the SHA256 hash of the contents of the file. The hash is used to
270 | ensure unique names and to verify the contents of the file. Default: False
271 | """
272 | default_cfg = default_cfg or getattr(model, 'default_cfg', None) or {}
273 | pretrained_url = default_cfg.get('url', None)
274 | if not pretrained_url:
275 | _logger.warning("No pretrained weights exist for this model. Using random initialization.")
276 | return
277 | cached_file = download_cached_file(default_cfg['url'], check_hash=check_hash, progress=progress)
278 |
279 | if load_fn is not None:
280 | load_fn(model, cached_file)
281 | elif hasattr(model, 'load_pretrained'):
282 | model.load_pretrained(cached_file)
283 | else:
284 | _logger.warning("Valid function to load pretrained weights is not available, using random initialization.")
285 |
286 | def adapt_input_conv(in_chans, conv_weight):
287 | conv_type = conv_weight.dtype
288 | conv_weight = conv_weight.float() # Some weights are in torch.half, ensure it's float for sum on CPU
289 | O, I, J, K = conv_weight.shape
290 | if in_chans == 1:
291 | if I > 3:
292 | assert conv_weight.shape[1] % 3 == 0
293 | # For models with space2depth stems
294 | conv_weight = conv_weight.reshape(O, I // 3, 3, J, K)
295 | conv_weight = conv_weight.sum(dim=2, keepdim=False)
296 | else:
297 | conv_weight = conv_weight.sum(dim=1, keepdim=True)
298 | elif in_chans != 3:
299 | if I != 3:
300 | raise NotImplementedError('Weight format not supported by conversion.')
301 | else:
302 | # NOTE this strategy should be better than random init, but there could be other combinations of
303 | # the original RGB input layer weights that'd work better for specific cases.
304 | repeat = int(math.ceil(in_chans / 3))
305 | conv_weight = conv_weight.repeat(1, repeat, 1, 1)[:, :in_chans, :, :]
306 | conv_weight *= (3 / float(in_chans))
307 | conv_weight = conv_weight.to(conv_type)
308 | return conv_weight
309 |
310 | def resize_pos_embed(posemb, posemb_new, num_tokens=1, gs_new=()):
311 | # Rescale the grid of position embeddings when loading from state_dict. Adapted from
312 | # https://github.com/google-research/vision_transformer/blob/00883dd691c63a6830751563748663526e811cee/vit_jax/checkpoint.py#L224
313 | _logger.info('Resized position embedding: %s to %s', posemb.shape, posemb_new.shape)
314 | ntok_new = posemb_new.shape[1]
315 | if num_tokens:
316 | posemb_tok, posemb_grid = posemb[:, :num_tokens], posemb[0, num_tokens:]
317 | ntok_new -= num_tokens
318 | else:
319 | posemb_tok, posemb_grid = posemb[:, :0], posemb[0]
320 | gs_old = int(math.sqrt(len(posemb_grid)))
321 | if not len(gs_new): # backwards compatibility
322 | gs_new = [int(math.sqrt(ntok_new))] * 2
323 | assert len(gs_new) >= 2
324 | _logger.info('Position embedding grid-size from %s to %s', [gs_old, gs_old], gs_new)
325 | posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2)
326 | posemb_grid = F.interpolate(posemb_grid, size=gs_new, mode='bilinear')
327 | posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_new[0] * gs_new[1], -1)
328 | posemb = torch.cat([posemb_tok, posemb_grid], dim=1)
329 | return posemb
330 |
331 |
332 | def checkpoint_filter_fn(state_dict, model):
333 | """ convert patch embedding weight from manual patchify + linear proj to conv"""
334 | out_dict = {}
335 | if 'model' in state_dict:
336 | # For deit models
337 | state_dict = state_dict['model']
338 | for k, v in state_dict.items():
339 | if 'patch_embed.proj.weight' in k and len(v.shape) < 4:
340 | # For old models that I trained prior to conv based patchification
341 | O, I, H, W = model.patch_embed.proj.weight.shape
342 | v = v.reshape(O, -1, H, W)
343 | elif k == 'pos_embed' and v.shape != model.pos_embed.shape:
344 | # To resize pos embedding when using model at different size from pretrained weights
345 | v = resize_pos_embed(
346 | v, model.pos_embed, getattr(model, 'num_tokens', 1), model.patch_embed.grid_size)
347 | out_dict[k] = v
348 | return out_dict
349 |
350 | IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406)
351 | IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225)
352 | def _cfg(url='', **kwargs):
353 | return {
354 | 'url': url,
355 | 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None,
356 | 'crop_pct': .9, 'interpolation': 'bicubic', 'fixed_input_size': True,
357 | 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
358 | 'first_conv': 'patch_embed.proj', 'classifier': 'head',
359 | **kwargs
360 | }
361 |
362 | default_cfgs = {
363 | # patch models (weights from official Google JAX impl)
364 | 'vit_small_patch16_224': _cfg(
365 | url='https://storage.googleapis.com/vit_models/augreg/'
366 | 'S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_224.npz'),
367 | 'vit_base_patch16_224': _cfg(
368 | url='https://storage.googleapis.com/vit_models/augreg/'
369 | 'B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_224.npz'),
370 | }
371 |
372 | def _create_vision_transformer(variant, pretrained=False, distilled=False, **kwargs):
373 | default_cfg = default_cfgs[variant]
374 | default_num_classes = default_cfg['num_classes']
375 | default_img_size = default_cfg['input_size'][-1]
376 |
377 | num_classes = kwargs.pop('num_classes', default_num_classes)
378 | img_size = kwargs.pop('img_size', default_img_size)
379 |
380 | model_cls = kwargs.pop('VisionTransformerModule')
381 | model = model_cls(img_size=img_size, num_classes=num_classes, distilled=distilled, **kwargs)
382 | model.default_cfg = default_cfg
383 |
384 | if pretrained:
385 | load_custom_pretrained(model)
386 |
387 | return model
388 |
--------------------------------------------------------------------------------
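Note: `_create_vision_transformer` above expects the concrete transformer class to be passed in through the `VisionTransformerModule` keyword, and `pretrained=True` routes weight loading through `load_custom_pretrained`. A minimal usage sketch follows; the import path and class name `model.ViT.VisionTransformer` are assumptions for illustration, not confirmed by this file.

```python
# Hedged sketch: build a ViT-Base/16 backbone through the factory above.
# The imported class name is an assumption; only the keyword contract
# (img_size, num_classes, distilled, ...) is taken from _create_vision_transformer.
from model.ViT import VisionTransformer  # assumed class under model/

model = _create_vision_transformer(
    'vit_base_patch16_224',
    pretrained=True,                        # triggers load_custom_pretrained(model)
    VisionTransformerModule=VisionTransformer,
    img_size=224,
    num_classes=65,                         # e.g. Office-Home
)
```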
/requirements.txt:
--------------------------------------------------------------------------------
1 | tensorboard
2 | easydict
3 | scikit-learn
4 | matplotlib
5 | tqdm
--------------------------------------------------------------------------------
/trainer/argument_parser.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | from utils.utils import str2bool, str2list, strlist
3 | from torch.utils.tensorboard import SummaryWriter
4 | import time
5 |
6 | def argument_parse(args_):
7 | _timestamp = time.strftime("%Y-%m-%d_%H.%M.%S", time.localtime())
8 | parser = argparse.ArgumentParser()
9 | # model
10 | parser.add_argument('--model', default=None, type=str,
11 |                         help='UDA model (a module under model/, e.g. SSRT)')
12 | parser.add_argument('--base_net', default='vit_base_patch16_224', type=str,
13 |                         help='ViT backbone')
14 | parser.add_argument('--restore_checkpoint', default=None, type=str,
15 | help='checkpoint to restore weights')
16 | parser.add_argument('--use_bottleneck', default=True, type=str2bool,
17 |                         help='whether to use a bottleneck layer')
18 | parser.add_argument('--bottleneck_dim', default=None, type=int,
19 | help="the dim of the bottleneck layer")
20 |
21 | # dataset
22 | parser.add_argument('--dataset', default='Office-31', type=str,
23 | help='dataset')
24 | parser.add_argument('--source_path', default=None, type=str,
25 | help='path to source (train) image list')
26 | parser.add_argument('--target_path', default=None, type=str,
27 | help='path to target (train) image list')
28 | parser.add_argument('--test_path', default=None, type=strlist,
29 | help='path to (target) test image list')
30 | parser.add_argument('--rand_aug', default='False', type=str2bool,
31 |                         help='whether to use RandAugment for target images')
32 | parser.add_argument('--center_crop', default=False, type=str2bool,
33 |                         help='whether to use center crop for images')
34 | parser.add_argument('--random_resized_crop', default=False, type=str2bool,
35 |                         help='whether to use RandomResizedCrop for images')
36 | parser.add_argument('--num_workers', default=4, type=int,
37 | help='number of workers for dataloader')
38 |
39 | # training configuration
40 | parser.add_argument('--lr', default=0.004, type=float,
41 | help='learning rate')
42 | parser.add_argument('--lr_wd', default=0.0005, type=float,
43 | help='weight decay')
44 | parser.add_argument('--lr_momentum', default=0.9, type=float,
45 |                         help='SGD momentum')
46 | parser.add_argument('--lr_scheduler_gamma', default=0.001, type=float,
47 | help='lr scheduler gamma')
48 | parser.add_argument('--lr_scheduler_decay_rate', default=0.75, type=float,
49 | help='lr schedule decay rate')
50 | parser.add_argument('--lr_scheduler_rate', default=1, type=int,
51 | help='lr schedule rate')
52 | parser.add_argument('--batch_size', default=32, type=int,
53 | help='batch size')
54 | parser.add_argument('--class_num', default=-1, type=int,
55 | help='class number')
56 | parser.add_argument('--eval_source', default='True', type=str2bool,
57 |                         help='whether to evaluate on source data')
58 | parser.add_argument('--eval_target', default='True', type=str2bool,
59 |                         help='whether to evaluate on target data')
60 | parser.add_argument('--eval_test', default='True', type=str2bool,
61 |                         help='whether to evaluate on test data')
62 | parser.add_argument('--save_checkpoint', default='True', type=str2bool,
63 |                         help='whether to save checkpoints')
64 | parser.add_argument('--iters_per_epoch', default=1000, type=int,
65 | help='number of iterations per epoch')
66 | parser.add_argument('--save_epoch', default='50', type=int,
67 | help='interval of saving checkpoint')
68 | parser.add_argument('--eval_epoch', default='10', type=int,
69 | help='interval of evaluating')
70 | parser.add_argument('--train_epoch', default=50, type=int,
71 | help='number of training epochs')
72 |
73 | # environment
74 | parser.add_argument('--gpu_id', default='0', type=str,
75 | help='which gpu to use')
76 | parser.add_argument('--random_seed', default='0', type=int,
77 | help='random seed')
78 | parser.add_argument('--timestamp', default=_timestamp, type=str,
79 | help='timestamp')
80 |
81 | # tensorboard and logger
82 | parser.add_argument('--use_file_logger', default='True', type=str2bool,
83 |                         help='whether to use a file logger')
84 | parser.add_argument('--log_dir', default='log', type=str,
85 | help='logging directory')
86 | parser.add_argument('--use_tensorboard', default='False', type=str2bool,
87 |                         help='whether to use TensorBoard')
88 | parser.add_argument('--tensorboard_dir', default='tensorboard', type=str,
89 | help='tensorboard directory')
90 | parser.add_argument('--writer', default=None, type=SummaryWriter,
91 | help='tensorboard writer')
92 |
93 | # losses
94 | parser.add_argument('--classification_loss_weight', default='1.00', type=float,
95 | help='weight of semantic classification loss')
96 | parser.add_argument('--domain_loss_weight', default='1.00', type=float,
97 | help='weight of domain classification loss')
98 | parser.add_argument('--mi_loss_weight', default='0.00', type=float,
99 | help='weight of mutual information maximization loss')
100 |
101 | # self refinement
102 | parser.add_argument('--sr_alpha', default='0.3', type=float,
103 | help='self refinement alpha (perturbation magnitude)')
104 | parser.add_argument('--sr_layers', default='[0,4,8]', type=str2list,
105 | help='transformer layers to add perturbation (0 to 11; -1 means raw input images)')
106 | parser.add_argument('--sr_loss_p', default='0.5', type=float,
107 | help='self refinement loss sampling probability')
108 | parser.add_argument('--sr_loss_weight', default='0.2', type=float,
109 | help='weight of self refinement loss')
110 | parser.add_argument('--sr_epsilon', default='0.4', type=float,
111 | help='self refinement epsilon (confidence threshold)')
112 |
113 | # safe training
114 | parser.add_argument('--use_safe_training', default='True', type=str2bool,
115 |                         help='whether to use safe training')
116 | parser.add_argument('--adap_adjust_restore_optimizor', default='False', type=str2bool,
117 |                         help='whether to save and restore a snapshot of the optimizer')
118 | parser.add_argument('--adap_adjust_T', default='1000', type=int,
119 | help='adaptive adjustment T (interval of saving/restoring snapshot and detecting diversity drop)')
120 | parser.add_argument('--adap_adjust_L', default='4', type=int,
121 | help='adaptive adjustment L (multi-scale detection of diversity dropping)')
122 | parser.add_argument('--adap_adjust_append_last_subintervals', default='True', type=str2bool,
123 |                         help='whether to detect diversity drops over the last sub-intervals')
124 |
125 | args = parser.parse_args(args_)
126 |
127 | # default configurations
128 | if args.dataset == 'Office-31':
129 | class_num = 31
130 | bottleneck_dim = 1024
131 | center_crop = False
132 | elif args.dataset == 'Office-Home':
133 | class_num = 65
134 | bottleneck_dim = 2048
135 | center_crop = False
136 | elif args.dataset == 'visda':
137 | class_num = 12
138 | bottleneck_dim = 1024
139 | center_crop = True
140 | elif args.dataset == 'DomainNet':
141 | class_num = 345
142 | bottleneck_dim = 1024
143 | center_crop = False
144 | else:
145 | raise NotImplementedError('Unsupported dataset')
146 |
147 | args.bottleneck_dim = bottleneck_dim if args.bottleneck_dim is None else args.bottleneck_dim
148 | args.center_crop = center_crop if args.center_crop is None else args.center_crop
149 | args.class_num = class_num
150 |
151 | return args
--------------------------------------------------------------------------------
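`argument_parse` takes a plain list of CLI-style strings (as passed in from the `main_*.py` entry scripts) and fills in per-dataset defaults afterwards. A minimal sketch, with flag values chosen for illustration rather than taken from the published settings:

```python
# Illustrative only: parse a small Office-Home configuration.
# The dataset block above then sets class_num=65 and bottleneck_dim=2048.
from trainer.argument_parser import argument_parse

args = argument_parse([
    '--model', 'SSRT',
    '--dataset', 'Office-Home',
    '--source_path', 'data/Art.txt',
    '--target_path', 'data/Clipart.txt',
    '--test_path', 'data/Clipart.txt',
])
print(args.class_num, args.bottleneck_dim)  # 65 2048
```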
/trainer/evaluate.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import logging
3 | import sklearn
4 | import sklearn.metrics
5 | from utils.utils import parse_path
6 |
7 | def evaluate(model_instance, input_loader):
8 | ori_train_state = model_instance.is_train
9 | model_instance.set_train(False)
10 | num_iter = len(input_loader)
11 | iter_test = iter(input_loader)
12 | first_test = True
13 |
14 | with torch.no_grad():
15 | for i in range(num_iter):
16 |             data = next(iter_test)
17 | inputs = data[0]
18 | labels = data[1]
19 | if model_instance.use_gpu:
20 | inputs = inputs.cuda()
21 | labels = labels.cuda()
22 |
23 | probabilities = model_instance.predict(inputs)
24 |
25 | probabilities = probabilities.data.float()
26 | labels = labels.data.float()
27 | if first_test:
28 | all_probs = probabilities
29 | all_labels = labels
30 | first_test = False
31 | else:
32 | all_probs = torch.cat((all_probs, probabilities), 0)
33 | all_labels = torch.cat((all_labels, labels), 0)
34 |
35 | _, predict = torch.max(all_probs, 1)
36 | accuracy = torch.sum(torch.squeeze(predict).float() == all_labels) / float(all_labels.size()[0])
37 |
38 | avg_acc = sklearn.metrics.balanced_accuracy_score(all_labels.cpu().numpy(),
39 | torch.squeeze(predict).float().cpu().numpy())
40 |
41 | cm = sklearn.metrics.confusion_matrix(all_labels.cpu().numpy(),
42 | torch.squeeze(predict).float().cpu().numpy())
43 | accuracies = cm.diagonal() / cm.sum(1)
44 |
45 | model_instance.set_train(ori_train_state)
46 |
47 | # return {'accuracy': np.round(100*accuracy, decimals=2), 'per_class_accuracy': np.round(100*avg_acc, decimals=2)}
48 | return {'accuracy': accuracy.item(), 'per_class_accuracy': avg_acc, 'accuracies': accuracies}
49 |
50 | def format_evaluate_result(eval_result, flag=False):
51 | if flag:
52 | return 'Accuracy={}:Per-class accuracy={}:Accs={}'.format(eval_result['accuracy'],
53 | eval_result['per_class_accuracy'], eval_result['accuracies'])
54 | else:
55 | return 'Accuracy={}:Per-class accuracy={}'.format(eval_result['accuracy'], eval_result['per_class_accuracy'])
56 |
57 | def evaluate_all(model_instance, dataloaders, iter_num, args):
58 | flag = args.dataset=='visda'
59 | if args.eval_source:
60 | eval_result = evaluate(model_instance, dataloaders["source_val"])
61 | if args.use_tensorboard:
62 | args.writer.add_scalar('source_accuracy', eval_result['accuracy'], iter_num)
63 | args.writer.add_scalar('per_class_source_accuracy', eval_result['per_class_accuracy'], iter_num)
64 | args.writer.flush()
65 | print('\n')
66 | logging.info('Train epoch={}:Source {}'.format(iter_num, format_evaluate_result(eval_result, flag)))
67 |
68 | if args.eval_target and dataloaders["target_val"] is not None:
69 | eval_result = evaluate(model_instance, dataloaders["target_val"])
70 | if args.use_tensorboard:
71 | args.writer.add_scalar('target_accuracy', eval_result['accuracy'], iter_num)
72 | args.writer.add_scalar('per_class_target_accuracy', eval_result['per_class_accuracy'], iter_num)
73 | args.writer.flush()
74 | print('\n')
75 | logging.info('Train epoch={}:Target {}'.format(iter_num, format_evaluate_result(eval_result, flag)))
76 |
77 | if args.eval_test and dataloaders["test"] is not None:
78 | if type(dataloaders["test"]) is list or type(dataloaders["test"]) is tuple:
79 | for i, t_test_loader in enumerate(dataloaders["test"]):
80 | eval_result = evaluate(model_instance, t_test_loader)
81 | ext = parse_path(args.test_path[i])
82 | if args.use_tensorboard:
83 | args.writer.add_scalar('test_accuracy_{}'.format(ext), eval_result['accuracy'], iter_num)
84 | args.writer.add_scalar('per_class_test_accuracy_{}'.format(ext), eval_result['per_class_accuracy'], iter_num)
85 | args.writer.flush()
86 | print('\n')
87 | logging.info('Train epoch={}:Test {} {}'.format(iter_num, ext, format_evaluate_result(eval_result, flag)))
88 | else:
89 | eval_result = evaluate(model_instance, dataloaders["test"])
90 | if args.use_tensorboard:
91 | args.writer.add_scalar('test_accuracy', eval_result['accuracy'], iter_num)
92 | args.writer.add_scalar('per_class_test_accuracy', eval_result['per_class_accuracy'], iter_num)
93 | args.writer.flush()
94 | print('\n')
95 | logging.info('Train epoch={}:Test {}'.format(iter_num, format_evaluate_result(eval_result, flag)))
96 |
--------------------------------------------------------------------------------
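`evaluate` accumulates predictions over an entire loader and reports overall accuracy plus balanced (per-class) accuracy; `evaluate_all` fans this out over the source, target, and test loaders. A hedged usage sketch, assuming `model_instance`, `dataloaders`, and `args` were built as in `trainer/train.py`:

```python
# Illustrative only: evaluate on the target validation loader and log the
# formatted result; the VisDA flag additionally reports per-class accuracies.
result = evaluate(model_instance, dataloaders["target_val"])
logging.info(format_evaluate_result(result, flag=(args.dataset == 'visda')))
# e.g. 'Accuracy=0.85:Per-class accuracy=0.84' (placeholder numbers)
```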
/trainer/train.py:
--------------------------------------------------------------------------------
1 | import torch.nn as nn
2 | from torch.optim.lr_scheduler import LambdaLR
3 | from importlib import import_module
4 | import tqdm
5 | import warnings
6 | warnings.filterwarnings("ignore", category=UserWarning)
7 | import logging
8 | pil_logger = logging.getLogger('PIL')
9 | pil_logger.setLevel(logging.INFO)
10 |
11 | from utils.utils import *
12 | from trainer.argument_parser import argument_parse
13 | from trainer.evaluate import evaluate_all
14 | from dataset.data_provider import get_dataloaders, ForeverDataIterator
15 |
16 | torch.backends.cudnn.deterministic = True
17 | torch.backends.cudnn.benchmark = False
18 |
19 | def train_source(model_instance, dataloaders, optimizer, lr_scheduler, args):
20 | model_instance.set_train(True)
21 |     print("start training source model...")
22 | iter_per_epoch = args.iters_per_epoch
23 | max_iter = args.train_epoch * iter_per_epoch
24 | iter_num = 0
25 |
26 | total_progress_bar = tqdm.tqdm(desc='Train iter', total=max_iter, initial=0)
27 |
28 | iter_source = ForeverDataIterator(dataloaders["source_tr"])
29 | for epoch in range(args.train_epoch):
30 | for _ in tqdm.tqdm(
31 | range(iter_per_epoch),
32 | total=iter_per_epoch,
33 | desc='Train epoch = {}'.format(epoch), ncols=80, leave=False):
34 |
35 | datas = next(iter_source)
36 | inputs_source, labels_source, indexes_source = datas
37 |
38 | inputs_source = inputs_source.cuda()
39 | labels_source = labels_source.cuda()
40 |
41 | optimizer.zero_grad()
42 | outputs_source = model_instance.forward(inputs_source)
43 | classifier_loss = nn.CrossEntropyLoss()(outputs_source, labels_source)
44 | classifier_loss.backward()
45 | optimizer.step()
46 |
47 | iter_num += 1
48 | total_progress_bar.update(1)
49 |
50 |
51 | if (epoch+1) % args.eval_epoch == 0:
52 | evaluate_all(model_instance, dataloaders, epoch+1, args)
53 |
54 | if (epoch+1) % args.save_epoch == 0 and args.save_checkpoint:
55 | checkpoint_dir = "checkpoint_source/{}/".format(args.base_net)
56 | checkpoint_name = checkpoint_dir + args_to_str_src(args) + '_' + args.timestamp + '_' + str(
57 | args.random_seed) + '_epoch_' + str(epoch+1) + '.pth'
58 | if not os.path.exists(checkpoint_dir):
59 | os.makedirs(checkpoint_dir)
60 | save_checkpoint(model_instance, checkpoint_name)
61 | logging.info('Train iter={}:Checkpoint saved to {}'.format(epoch+1, checkpoint_name))
62 |
63 |     print('finished source training')
64 |
65 |
66 |
67 | def train(model_instance, dataloaders, optimizer, lr_scheduler, args):
68 | model_instance.set_train(True)
69 | logging.info("start training ...")
70 | iter_num = 0
71 | iter_per_epoch = args.iters_per_epoch
72 | max_iter = args.train_epoch * iter_per_epoch
73 | total_progress_bar = tqdm.tqdm(desc='Train iter', total=max_iter, initial=0)
74 |
75 | iter_source = ForeverDataIterator(dataloaders["source_tr"])
76 | iter_target = ForeverDataIterator(dataloaders["target_tr"])
77 |
78 | for epoch in range(args.train_epoch):
79 | for _ in tqdm.tqdm(
80 | range(iter_per_epoch),
81 | total=iter_per_epoch,
82 | desc='Train epoch = {}'.format(epoch), ncols=80, leave=False):
83 |
84 | inputs_source, labels_source, indexes_source = next(iter_source)
85 | if args.rand_aug:
86 | inputs_target, labels_target, indexes_target, inputs_rand_target = next(iter_target)
87 | else:
88 | inputs_target, labels_target, indexes_target = next(iter_target)
89 | inputs_rand_target = None
90 |
91 | inputs_source = inputs_source.cuda()
92 | inputs_target = inputs_target.cuda()
93 | labels_source = labels_source.cuda()
94 | labels_target = labels_target.cuda()
95 | if args.rand_aug:
96 | inputs_rand_target = inputs_rand_target.cuda()
97 |
98 | # safe training
99 | if args.use_safe_training and args.adap_adjust_restore_optimizor:
100 | if model_instance.restore and iter_num > 0 and args.sr_loss_weight > 0:
101 | optimizer.load_state_dict(optimizer_snapshot)
102 |                     logging.info('Train iter={}:restore optimizer snapshot'.format(iter_num))
103 |
104 | if iter_num % args.adap_adjust_T == 0 and args.sr_loss_weight > 0:
105 | optimizer_snapshot = optimizer.state_dict()
106 |                     logging.info('Train iter={}:save optimizer snapshot'.format(iter_num))
107 |
108 | optimizer.zero_grad()
109 | if args.rand_aug:
110 | total_loss = model_instance.get_loss(inputs_source, inputs_target, labels_source, labels_target,
111 | inputs_rand_target, args=args)
112 | else:
113 | total_loss = model_instance.get_loss(inputs_source, inputs_target, labels_source, labels_target, args=args)
114 | total_loss.backward()
115 | optimizer.step()
116 |
117 | if iter_num % args.lr_scheduler_rate == 0:
118 | lr_scheduler.step()
119 |
120 | iter_num += 1
121 | total_progress_bar.update(1)
122 |
123 | if (epoch+1) % args.eval_epoch == 0 and epoch!=0:
124 | evaluate_all(model_instance, dataloaders, (epoch+1), args)
125 |
126 | if (epoch+1) % args.save_epoch == 0 and args.save_checkpoint:
127 | checkpoint_dir = "./checkpoint/{}/".format(args.base_net)
128 | checkpoint_name = checkpoint_dir+args_to_str(args)+'_'+args.timestamp+'_'+ str(args.random_seed)+'_epoch_'+str(epoch+1)+'.pth'
129 | if not os.path.exists(checkpoint_dir):
130 | os.makedirs(checkpoint_dir)
131 | save_checkpoint(model_instance, checkpoint_name)
132 | logging.info('Train epoch={}:Checkpoint saved to {}'.format((epoch+1), checkpoint_name))
133 |
134 |
135 |     logging.info('finished training.')
136 |
137 |
138 | def _init_(args_, header):
139 | args = argument_parse(args_)
140 |
141 | resetRNGseed(args.random_seed)
142 | os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu_id
143 |
144 |     dir = '{}_{}'.format(args.timestamp, '_'.join([_ for _ in [args.model or args.base_net, parse_path(args.source_path),
145 | parse_path(args.target_path)] if _!='']))
146 |
147 | if not logger_init:
148 | init_logger(dir, args.use_file_logger, args.log_dir)
149 |
150 | if args.use_tensorboard:
151 | args.writer = init_tensorboard_writer(args.tensorboard_dir, dir + '_' + str(args.random_seed))
152 |
153 | logging.info(header)
154 | logging.info(args)
155 |
156 | return args
157 |
158 | def train_source_main(args_, header=''):
159 | args = _init_(args_, header)
160 |
161 | try:
162 | model_module = import_module('model.'+args.model)
163 | Model = getattr(model_module, args.model)
164 | model_instance = Model(base_net=args.base_net, bottleneck_dim=args.bottleneck_dim, use_gpu=True, class_num=args.class_num, args=args)
165 |     except (ImportError, AttributeError):
166 | raise NotImplementedError('Unsupported model')
167 |
168 | dataloaders = get_dataloaders(args)
169 | param_groups = model_instance.get_parameter_list()
170 |
171 | optimizer = torch.optim.SGD(param_groups, args.lr, momentum=args.lr_momentum, weight_decay=args.lr_wd, nesterov=True)
172 | lr_scheduler = LambdaLR(optimizer, lambda x: args.lr * (1. + args.lr_scheduler_gamma * float(x)) ** (-args.lr_scheduler_decay_rate))
173 |
174 | train_source(model_instance, dataloaders, optimizer=optimizer, lr_scheduler=lr_scheduler, args=args)
175 |
176 |
177 | def train_main(args_, header=''):
178 | args = _init_(args_, header)
179 |
180 | try:
181 | model_module = import_module('model.'+args.model)
182 | Model = getattr(model_module, args.model)
183 | model_instance = Model(base_net=args.base_net, bottleneck_dim=args.bottleneck_dim, use_gpu=True, class_num=args.class_num, args=args)
184 |     except (ImportError, AttributeError):
185 | raise NotImplementedError('Unsupported model')
186 |
187 | dataloaders = get_dataloaders(args)
188 | param_groups = model_instance.get_parameter_list()
189 |
190 | optimizer = torch.optim.SGD(param_groups, args.lr, momentum=args.lr_momentum, weight_decay=args.lr_wd, nesterov=True)
191 | lr_scheduler = LambdaLR(optimizer, lambda x: args.lr * (1. + args.lr_scheduler_gamma * float(x)) ** (-args.lr_scheduler_decay_rate))
192 |
193 | if args.restore_checkpoint is not None:
194 | load_checkpoint(model_instance, args.restore_checkpoint)
195 | logging.info('Model weights restored from: {}'.format(args.restore_checkpoint))
196 |
197 | train(model_instance, dataloaders, optimizer=optimizer, lr_scheduler=lr_scheduler, args=args)
--------------------------------------------------------------------------------
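Each `main_SSRT.*.py` script presumably assembles an argument list like the one below and hands it to `train_main` (or `train_source_main` for source-only pretraining). A minimal sketch with illustrative flags, not the paper's published settings:

```python
# Hypothetical entry point: adapt SSRT on Office-31, amazon -> webcam.
from trainer.train import train_main

if __name__ == '__main__':
    train_main([
        '--model', 'SSRT',
        '--dataset', 'Office-31',
        '--source_path', 'data/amazon.txt',
        '--target_path', 'data/webcam.txt',
        '--test_path', 'data/webcam.txt',
        '--gpu_id', '0',
    ], header='SSRT: Office-31 amazon -> webcam')
```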
/utils/utils.py:
--------------------------------------------------------------------------------
1 | import random
2 | import numpy as np
3 | import os
4 | import os.path as osp
5 |
6 | import torch
7 | import logging
8 | import argparse
9 |
10 | from collections import OrderedDict
11 |
12 | import pickle
13 | from torch.utils.tensorboard import SummaryWriter
14 | logger_init = False
15 |
16 | def init_logger(_log_file, use_file_logger=True, dir='log/'):
17 | if not os.path.exists(dir):
18 | os.makedirs(dir)
19 | log_file = osp.join(dir, _log_file + '.log')
20 | logger = logging.getLogger()
21 | for handler in logger.handlers[:]:
22 | logger.removeHandler(handler)
23 |
24 | logger.setLevel('DEBUG')
25 | BASIC_FORMAT = "%(asctime)s:%(levelname)s:%(message)s"
26 | DATE_FORMAT = '%Y-%m-%d %H.%M.%S'
27 | formatter = logging.Formatter(BASIC_FORMAT, DATE_FORMAT)
28 | chlr = logging.StreamHandler()
29 | chlr.setFormatter(formatter)
30 | logger.addHandler(chlr)
31 | if use_file_logger:
32 | fhlr = logging.FileHandler(log_file)
33 | fhlr.setFormatter(formatter)
34 | logger.addHandler(fhlr)
35 |
36 | global logger_init
37 | logger_init = True
38 |
39 | def init_tensorboard_writer(dir='tensorboard/', _writer_file=None):
40 | writer = SummaryWriter(osp.join(dir, _writer_file))
41 | return writer
42 |
43 | # set random number generators' seeds
44 | def resetRNGseed(seed):
45 | np.random.seed(seed)
46 | random.seed(seed)
47 | torch.manual_seed(seed)
48 | torch.cuda.manual_seed(seed)
49 | torch.cuda.manual_seed_all(seed)
50 |
51 | def str2bool(v):
52 | if isinstance(v, bool):
53 | return v
54 | if v.lower() in ('yes', 'true', 't', 'y', '1'):
55 | return True
56 | elif v.lower() in ('no', 'false', 'f', 'n', '0'):
57 | return False
58 | else:
59 | raise argparse.ArgumentTypeError('Boolean value expected.')
60 |
61 | def strlist(v):
62 | # just list
63 | if isinstance(v, list):
64 | return v
65 |
66 | # just string
67 | if '[' not in v or ']' not in v:
68 | return v
69 |
70 | v_list = v.strip('[]').split(',')
71 | v_list = [vi for vi in v_list]
72 | return v_list
73 |
74 | def str2list(v):
75 | if isinstance(v, list):
76 | return v
77 |
78 | if v == '[]':
79 | return []
80 |
81 | v_list = v.strip('[]').split(',')
82 | v_list = [int(vi) for vi in v_list]
83 | return v_list
84 |
85 | def save_checkpoint(model, filename):
86 | weight_dicts = model.to_dicts()
87 | with open(filename, "wb" ) as fc:
88 | pickle.dump(weight_dicts, fc)
89 |
90 | def load_checkpoint(model, filename):
91 | with open(filename, "rb" ) as fc:
92 | dicts = pickle.load(fc)
93 | try:
94 | model.from_dicts(dicts)
95 |     except Exception:
96 | new_dicts = []
97 | for _dict in dicts:
98 | new_dict = {}
99 | if isinstance(_dict, OrderedDict):
100 | for name, param in _dict.items():
101 | namel = name.split('.')
102 | key = '.'.join(namel[1:])
103 | new_dict.update({key: param})
104 | new_dicts.append(new_dict)
105 | else:
106 | new_dicts.append(_dict)
107 | model.from_dicts(new_dicts)
108 |
109 | def parse_path(path):
110 | if path is None:
111 | return ''
112 |
113 | return path.split('/')[-1].split('.')[0]
114 |
115 | def args_to_str_src(args):
116 | return '_'.join([args.model, args.dataset, parse_path(args.source_path)])
117 |
118 | def args_to_str(args):
119 | return '_'.join([args.model, args.dataset, parse_path(args.source_path), parse_path(args.target_path)])
120 |
121 |
122 |
--------------------------------------------------------------------------------
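The string-parsing helpers above drive most of the argparse `type=` hooks. A few quick sanity checks of their behaviour (values chosen only for illustration):

```python
# Expected behaviour of the parsing helpers, runnable once utils/utils.py
# is importable (it pulls in torch's SummaryWriter at import time).
from utils.utils import str2bool, str2list, strlist, parse_path

assert str2bool('true') is True
assert str2list('[0,4,8]') == [0, 4, 8]
assert str2list('[]') == []
assert strlist('[data/a.txt,data/b.txt]') == ['data/a.txt', 'data/b.txt']
assert parse_path('data/amazon.txt') == 'amazon'
```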