├── .gitignore
├── LICENSE
├── Readme.md
├── Samples
├── 5_tasks
│ ├── MNIST_permutations_task_0.png
│ ├── MNIST_permutations_task_1.png
│ ├── MNIST_permutations_task_2.png
│ ├── MNIST_permutations_task_3.png
│ ├── MNIST_permutations_task_4.png
│ ├── MNIST_rotations_task_0.png
│ ├── MNIST_rotations_task_1.png
│ ├── MNIST_rotations_task_2.png
│ ├── MNIST_rotations_task_3.png
│ └── MNIST_rotations_task_4.png
├── Readme.md
├── disjoint_10_tasks
│ ├── fashion_task_0.png
│ ├── fashion_task_1.png
│ ├── fashion_task_2.png
│ ├── fashion_task_3.png
│ ├── fashion_task_4.png
│ ├── fashion_task_5.png
│ ├── fashion_task_6.png
│ ├── fashion_task_7.png
│ ├── fashion_task_8.png
│ └── fashion_task_9.png
├── disjoint_5_tasks
│ ├── MNIST_task_0.png
│ ├── MNIST_task_1.png
│ ├── MNIST_task_2.png
│ ├── MNIST_task_3.png
│ └── MNIST_task_4.png
└── mnist_fellowship
│ ├── mnist_fellowship_task_0.png
│ ├── mnist_fellowship_task_1.png
│ └── mnist_fellowship_task_2.png
├── continuum
├── __init__.py
├── continuum_loader.py
├── continuumbuilder.py
├── data_utils.py
├── datasets
│ ├── LSUN.py
│ ├── __init__.py
│ ├── cifar10.py
│ ├── cifar100.py
│ ├── core50.py
│ ├── fashion.py
│ └── kmnist.py
├── disjoint.py
├── mnistfellowship.py
├── permutation_classes.t
├── permutations.py
└── rotations.py
├── doxygen_config
├── setup.py
└── tests
├── Readme.md
├── __init__.py
├── pytest.ini
├── test_Dataloader.py
├── test_disjoint.py
├── test_fellowship.py
├── test_permutations.py
├── test_rotations.py
├── test_task_sequences.py
└── utils_tests.py
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | *.egg-info/
24 | .installed.cfg
25 | *.egg
26 | MANIFEST
27 |
28 | # PyInstaller
29 | # Usually these files are written by a python script from a template
30 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
31 | *.manifest
32 | *.spec
33 |
34 | # Installer logs
35 | pip-log.txt
36 | pip-delete-this-directory.txt
37 |
38 | # Unit test / coverage reports
39 | htmlcov/
40 | .tox/
41 | .coverage
42 | .coverage.*
43 | .cache
44 | nosetests.xml
45 | coverage.xml
46 | *.cover
47 | .hypothesis/
48 | .pytest_cache/
49 |
50 | # Translations
51 | *.mo
52 | *.pot
53 |
54 | # Django stuff:
55 | *.log
56 | local_settings.py
57 | db.sqlite3
58 |
59 | # Flask stuff:
60 | instance/
61 | .webassets-cache
62 |
63 | # Scrapy stuff:
64 | .scrapy
65 |
66 | # Sphinx documentation
67 | docs/_build/
68 |
69 | # PyBuilder
70 | target/
71 |
72 | # Jupyter Notebook
73 | .ipynb_checkpoints
74 |
75 | # pyenv
76 | .python-version
77 |
78 | # celery beat schedule file
79 | celerybeat-schedule
80 |
81 | # SageMath parsed files
82 | *.sage.py
83 |
84 | # Environments
85 | .env
86 | .venv
87 | env/
88 | venv/
89 | ENV/
90 | env.bak/
91 | venv.bak/
92 |
93 | # Spyder project settings
94 | .spyderproject
95 | .spyproject
96 |
97 | # Rope project settings
98 | .ropeproject
99 |
100 | # mkdocs documentation
101 | /site
102 |
103 | # mypy
104 | .mypy_cache/
105 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019 Timothée Lesort
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/Readme.md:
--------------------------------------------------------------------------------
1 | ## Continuum: A dataloader for continual learning
2 |
3 | [](https://app.codacy.com/app/TLESORT/Continual_Learning_Data_Former?utm_source=github.com&utm_medium=referral&utm_content=TLESORT/Continual_Learning_Data_Former&utm_campaign=Badge_Grade_Dashboard)
4 | [](https://zenodo.org/badge/latestdoi/198824802)
5 |
6 |
7 | ### Intro
8 |
9 | This repository provides several scripts to create sequences of tasks for continual learning. The spirit is the following:
10 | instead of managing the sequence of tasks while learning, we create the sequence of tasks first and then load the tasks
11 | one by one during training.
12 |
13 | This makes programming easier and the code cleaner.
14 |
15 | ### Installation
16 |
17 | ```bash
18 | git clone https://github.com/TLESORT/Continual_Learning_Data_Former
19 | cd Continual_Learning_Data_Former
20 | pip install .
21 | ```
22 |
23 | ### A few possible invocations
24 |
25 | - Disjoint tasks
26 |
27 | ```python
28 | from continuum.disjoint import Disjoint
29 |
30 | # MNIST with 10 tasks of one class each
31 | continuum = Disjoint(path="./Data", dataset="MNIST", task_number=10, download=True, train=True)
32 | ```
33 | - Rotations tasks
34 |
35 | ```python
36 | from continuum.rotations import Rotations
37 |
38 | # MNIST with 5 tasks with various rotations
39 | continuum = Rotations(path="./Data", dataset="MNIST", tasks_number=5, download=True, train=True, min_rot=0.0,
40 | max_rot=90.0)
41 | ```
42 |
43 | - Permutations tasks
44 |
45 | ```python
46 | from continuum.permutations import Permutations
47 |
48 | # MNIST with 5 tasks with different permutations
49 | continuum = Permutations(path="./Data", dataset="MNIST", tasks_number=5, download=False, train=True)
50 | ```
51 |
52 | ### Usage example
53 |
54 | ```python
55 | import os
56 | from continuum.disjoint import Disjoint
57 | from torch.utils import data
58 | # create continuum dataset
59 | continuum = Disjoint(path=".", dataset="MNIST", task_number=10, download=True, train=True)
60 |
61 | # create pytorch dataloader
62 | train_loader = data.DataLoader(continuum, batch_size=64, shuffle=True, num_workers=6)
63 |
64 | # set the task to 0, for example
65 | continuum.set_task(0)
66 |
67 | # iterate on task 0
68 | for t, (data, target) in enumerate(train_loader):
69 | print(target)
70 |
71 | # change the task to 2, for example
72 | continuum.set_task(2)
73 |
74 | # iterate on task 2
75 | for t, (data, target) in enumerate(train_loader):
76 | print(target)
77 |
78 | # We can visualize samples from the sequence of tasks
79 | for i in range(10):
80 | continuum.set_task(i)
81 |
82 | folder = "./Samples/disjoint_10_tasks/"
83 |
84 | if not os.path.exists(folder):
85 | os.makedirs(folder)
86 |
87 | path_samples = os.path.join(folder, "MNIST_task_{}.png".format(i))
88 | continuum.visualize_sample(path_samples, number=100, shape=[28, 28, 1])
89 |
90 | ```
91 |
92 |
93 | ### Task sequences possibilities
94 |
95 | - **Disjoint tasks**: each task introduces new classes
96 | - **Rotations tasks**: each task has the same data but with a different rotation applied to the data points
97 | - **Permutations tasks**: each task has the same data but with a different permutation of the pixels
98 | - **MNIST Fellowship task**: each task is a new MNIST-like dataset (this sequence of tasks is an original contribution of this repository)
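All of these sequences share the same switching mechanism: build the tasks once, then move between them with `set_task`. A minimal stand-in sketching that pattern (a hypothetical toy class for illustration, not the library's actual API):

```python
# Minimal stand-in for the loaders above, illustrating the set_task
# pattern (hypothetical toy class, not the library's actual API).
class ToyContinuum:
    def __init__(self, tasks):
        # tasks: one (data, labels) pair per task
        self.tasks = tasks
        self.current_task = 0

    def set_task(self, index):
        # switch the active task; iteration then yields that task's data
        self.current_task = index
        return self

    def __len__(self):
        return len(self.tasks[self.current_task][0])

    def __getitem__(self, i):
        data, labels = self.tasks[self.current_task]
        return data[i], labels[i]


tasks = [([10, 11], [0, 0]), ([20, 21], [1, 1])]
loader = ToyContinuum(tasks)
loader.set_task(1)
x, y = loader[0]  # x == 20, y == 1: first sample of task 1
```

The same object can be wrapped in a PyTorch `DataLoader`; calling `set_task` changes which task the next epoch iterates over.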
99 |
100 | ### An example with MNIST 5 disjoint tasks
101 |
102 | ||||||
103 | |:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|
104 | |Task 0 | Task 1 | Task 2 | Task 3 | Task 4|
105 |
106 | More examples at [Samples](/Samples)
107 |
108 | ### Datasets
109 |
110 | - MNIST
111 | - Fashion-MNIST
112 | - KMNIST
113 | - CIFAR10
114 | - Core50/Core10
115 |
116 | ### Some supplementary options are possible
117 | - The number of tasks can be chosen (1, 3, 5 and 10 have been tested)
118 | - The class order can be shuffled for disjoint tasks
119 | - The magnitude of the rotations can be chosen for Rotation MNIST
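For example, assuming the rotation range is split evenly across tasks (an assumption for illustration; the library may sample the angles differently), `min_rot=0.0` and `max_rot=90.0` over 5 tasks would give:

```python
# Evenly spaced rotation angles across tasks. This spacing is an
# assumption for illustration; the library may choose per-task
# rotations differently.
min_rot, max_rot, n_tasks = 0.0, 90.0, 5

angles = [min_rot + i * (max_rot - min_rot) / (n_tasks - 1)
          for i in range(n_tasks)]
# angles == [0.0, 22.5, 45.0, 67.5, 90.0]
```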
120 |
121 |
122 |
123 |
124 |
125 | ### Citing the Project
126 |
127 | ```bibtex
128 | @software{timothee_lesort_2020_3605202,
129 | author = {Timothée LESORT},
130 | title = {Continual Learning Data Former},
131 | month = jan,
132 | year = 2020,
133 | publisher = {Zenodo},
134 | version = {v1.0},
135 | doi = {10.5281/zenodo.3605202},
136 | url = {https://doi.org/10.5281/zenodo.3605202}
137 | }
138 |
139 | ```
140 |
--------------------------------------------------------------------------------
/Samples/5_tasks/MNIST_permutations_task_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/5_tasks/MNIST_permutations_task_0.png
--------------------------------------------------------------------------------
/Samples/5_tasks/MNIST_permutations_task_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/5_tasks/MNIST_permutations_task_1.png
--------------------------------------------------------------------------------
/Samples/5_tasks/MNIST_permutations_task_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/5_tasks/MNIST_permutations_task_2.png
--------------------------------------------------------------------------------
/Samples/5_tasks/MNIST_permutations_task_3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/5_tasks/MNIST_permutations_task_3.png
--------------------------------------------------------------------------------
/Samples/5_tasks/MNIST_permutations_task_4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/5_tasks/MNIST_permutations_task_4.png
--------------------------------------------------------------------------------
/Samples/5_tasks/MNIST_rotations_task_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/5_tasks/MNIST_rotations_task_0.png
--------------------------------------------------------------------------------
/Samples/5_tasks/MNIST_rotations_task_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/5_tasks/MNIST_rotations_task_1.png
--------------------------------------------------------------------------------
/Samples/5_tasks/MNIST_rotations_task_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/5_tasks/MNIST_rotations_task_2.png
--------------------------------------------------------------------------------
/Samples/5_tasks/MNIST_rotations_task_3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/5_tasks/MNIST_rotations_task_3.png
--------------------------------------------------------------------------------
/Samples/5_tasks/MNIST_rotations_task_4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/5_tasks/MNIST_rotations_task_4.png
--------------------------------------------------------------------------------
/Samples/Readme.md:
--------------------------------------------------------------------------------
1 | # Sample Examples
2 |
3 | ## The MNIST Fellowship
4 |
5 | ||||
6 | |:-------------------------:|:-------------------------:|:-------------------------:|
7 | |Task 0 | Task 1 | Task 2|
8 |
9 | ## The Fashion-MNIST 10 disjoint tasks
10 |
11 | ||||||
12 | |:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|
13 | |Task 0 | Task 1 | Task 2 | Task 3 | Task 4|
14 |
15 | ||||||
16 | |:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|
17 | |Task 5 | Task 6 | Task 7 | Task 8 | Task 9|
18 |
19 | ## The Permutation MNIST 5 tasks
20 |
21 | ||||||
22 | |:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|
23 | |Task 0 | Task 1 | Task 2 | Task 3 | Task 4|
24 |
--------------------------------------------------------------------------------
/Samples/disjoint_10_tasks/fashion_task_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_10_tasks/fashion_task_0.png
--------------------------------------------------------------------------------
/Samples/disjoint_10_tasks/fashion_task_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_10_tasks/fashion_task_1.png
--------------------------------------------------------------------------------
/Samples/disjoint_10_tasks/fashion_task_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_10_tasks/fashion_task_2.png
--------------------------------------------------------------------------------
/Samples/disjoint_10_tasks/fashion_task_3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_10_tasks/fashion_task_3.png
--------------------------------------------------------------------------------
/Samples/disjoint_10_tasks/fashion_task_4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_10_tasks/fashion_task_4.png
--------------------------------------------------------------------------------
/Samples/disjoint_10_tasks/fashion_task_5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_10_tasks/fashion_task_5.png
--------------------------------------------------------------------------------
/Samples/disjoint_10_tasks/fashion_task_6.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_10_tasks/fashion_task_6.png
--------------------------------------------------------------------------------
/Samples/disjoint_10_tasks/fashion_task_7.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_10_tasks/fashion_task_7.png
--------------------------------------------------------------------------------
/Samples/disjoint_10_tasks/fashion_task_8.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_10_tasks/fashion_task_8.png
--------------------------------------------------------------------------------
/Samples/disjoint_10_tasks/fashion_task_9.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_10_tasks/fashion_task_9.png
--------------------------------------------------------------------------------
/Samples/disjoint_5_tasks/MNIST_task_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_5_tasks/MNIST_task_0.png
--------------------------------------------------------------------------------
/Samples/disjoint_5_tasks/MNIST_task_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_5_tasks/MNIST_task_1.png
--------------------------------------------------------------------------------
/Samples/disjoint_5_tasks/MNIST_task_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_5_tasks/MNIST_task_2.png
--------------------------------------------------------------------------------
/Samples/disjoint_5_tasks/MNIST_task_3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_5_tasks/MNIST_task_3.png
--------------------------------------------------------------------------------
/Samples/disjoint_5_tasks/MNIST_task_4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/disjoint_5_tasks/MNIST_task_4.png
--------------------------------------------------------------------------------
/Samples/mnist_fellowship/mnist_fellowship_task_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/mnist_fellowship/mnist_fellowship_task_0.png
--------------------------------------------------------------------------------
/Samples/mnist_fellowship/mnist_fellowship_task_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/mnist_fellowship/mnist_fellowship_task_1.png
--------------------------------------------------------------------------------
/Samples/mnist_fellowship/mnist_fellowship_task_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/Samples/mnist_fellowship/mnist_fellowship_task_2.png
--------------------------------------------------------------------------------
/continuum/__init__.py:
--------------------------------------------------------------------------------
1 | from .continuum_loader import ContinuumSetLoader
2 | from .disjoint import Disjoint
3 | from .mnistfellowship import MnistFellowship
4 | from .permutations import Permutations
5 | from .rotations import Rotations
--------------------------------------------------------------------------------
/continuum/continuum_loader.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import numpy as np
3 | from copy import deepcopy
4 | from torch.utils import data
5 | import torchvision.transforms.functional as TF
6 | from PIL import Image
7 | import os
8 | import random
9 |
10 | from continuum.data_utils import make_samples_batche, save_images
11 |
12 |
13 | class ContinuumSetLoader(data.Dataset):
14 | def __init__(self, data, transform=None, load_images=False, path=None):
15 |
16 | '''
17 |
18 | dataset.shape = [num_tasks, 3, image_number]
19 | dataset[0, 1, :]  # all data from task 0
20 | dataset[0, 2, :]  # all labels from task 0
21 |
22 | '''
23 |
24 | self.dataset = data
25 | self.n_tasks = len(self.dataset)
26 |
27 | self.current_task = 0
28 | self.transform = transform
29 | self.load_images = load_images
30 | self.path = path
31 | if self.load_images and self.path is None:
32 | raise Exception("[!] The path to the data needs to be defined")
33 | self.shape_img = None
34 |
35 | # Initialization
36 | self.all_task_IDs = []
37 | self.all_labels = []
38 | for ind_task in range(self.n_tasks):
39 |
40 | if self.load_images:
41 | list_data = range(len(self.dataset[ind_task][1]))
42 | else:
43 | list_data = range(self.dataset[ind_task][1].shape[0])
44 | list_labels = self.dataset[ind_task][2].tolist()
45 |
46 | # convert to dictionaries for the pytorch loader
47 | self.all_task_IDs.append({i: list_data[i] for i in range(0, len(list_data))})
48 | self.all_labels.append({i: list_labels[i] for i in range(0, len(list_labels))})
49 |
50 | # lists used by pytorch loader
51 | self.list_IDs = self.all_task_IDs[self.current_task]
52 | self.labels = self.all_labels[self.current_task]
53 |
54 | if not load_images:
55 | self.shape_img = list(self.dataset[self.current_task][1][0].shape)
56 |
57 | def __len__(self):
58 | return len(self.list_IDs)
59 |
60 | def get_num_tasks(self):
61 | return self.n_tasks
62 |
63 | def __getitem__(self, index):
64 | 'Generates one sample of data'
65 |
66 | # Select sample
67 | ID = self.list_IDs[index]
68 |
69 | # Load data and get label
70 |
71 | if self.load_images:
72 | X = Image.open(os.path.join(self.path, self.dataset[self.current_task][1][ID])).convert('RGB')
73 | else:
74 | # the data is already loaded in memory here, so this indexing may not be optimal
75 | X = self.dataset[self.current_task][1][ID]
76 | y = self.labels[ID]
77 |
78 | if self.transform is not None:
79 | if not self.load_images:
80 | X = TF.to_pil_image(X).convert('RGB')
81 | X = self.transform(X)
82 |
83 | return X, y
84 |
85 | def next(self):
86 | return self.__next__()
87 |
88 | def reset_labels(self):
89 |
90 | list_labels = self.dataset[self.current_task][2].tolist()
91 | self.all_labels[self.current_task] = {i: list_labels[i] for i in range(0, len(list_labels))}
92 | self.labels = self.all_labels[self.current_task]
93 |
94 | def set_task(self, new_task_index):
95 | """
96 | Set the active task.
97 | :param new_task_index: index of the task to switch to
98 | :return: self
99 | """
100 | self.current_task = new_task_index
101 | self.list_IDs = self.all_task_IDs[self.current_task]
102 | self.labels = self.all_labels[self.current_task]
103 | return self
104 |
105 | def shuffle_task(self):
106 | random.shuffle(self.list_IDs)
117 | return self
118 |
119 | def get_samples_from_ind(self, indices):
120 | batch = None
121 | labels = None
122 |
123 | for i, ind in enumerate(indices):
124 | # we need to use get item to have the transform used
125 | img, y = self.__getitem__(ind)
126 |
127 | if i == 0:
128 | if len(list(img.shape)) == 2:
129 | size_image = [1] + list(img.shape)
130 | else:
131 | size_image = list(img.shape)
132 | batch = torch.zeros(([len(indices)] + size_image))
133 | labels = torch.LongTensor(len(indices))
134 |
135 | batch[i] = img.clone()
136 | labels[i] = y
137 |
138 | return batch, labels
139 |
140 | def get_sample(self, number, shape):
141 | """
142 | This function returns a number of samples from the dataset.
143 | :param number: number of data points expected
144 | :return: FloatTensor on cpu of all samples
145 | """
146 | indices = (torch.randperm(len(self.list_IDs))[0:number]).tolist()
147 | return self.get_samples_from_ind(indices)
148 |
149 | def get_set(self, number, shape):
150 | """
151 | This function returns a number of samples from the dataset.
152 | :param number: number of data points expected
153 | :return: FloatTensor on cpu of all samples
154 | """
155 |
156 | if self.load_images:
157 | # the set is composed of path and not images
158 | indices = torch.randperm(len(self.labels))[0:number]
159 | batch = self.dataset[self.current_task][1][indices]
160 | labels = self.dataset[self.current_task][2][indices]
161 | else:
162 | batch, labels = self.get_sample(number, shape)
163 | return batch, labels
164 |
165 | def get_batch_from_label(self, label):
166 | """
167 | This function returns the samples from the dataset with a specific label.
168 | :param label: label to get data from
169 | :return: FloatTensor on cpu of all samples
170 | """
171 |
172 | indices = [i for i, id in enumerate(self.list_IDs) if self.labels[id] == label]
173 | return self.get_samples_from_ind(indices)
174 |
175 | def concatenate(self, new_data, task=0):
176 | '''
177 |
178 | :param new_data: data to add to the current task
179 | :return: the dataset with the supplementary data added
180 | '''
181 | new_data.sanity_check("before concatenate")
182 |
183 | self.list_IDs = self.all_task_IDs[self.current_task]
184 | self.labels = self.all_labels[self.current_task]
185 |
186 | # First update list
187 |
188 | list_len = len(self.list_IDs)
189 |
190 | new_size = len(new_data.list_IDs)
191 |
192 | if len(self.list_IDs) > 0:
193 | self.sanity_check("before concatenate")
194 |
195 | # the actual size of the dataset is not the same as the size self.list_IDs
196 | # some index might have been modified to artificially grow/reduce data size
197 | size_new_dataset = len(new_data.labels)
198 | actual_size_dataset = len(self.labels)
199 |
200 | for i in range(new_size):
201 | self.all_task_IDs[self.current_task][i + list_len] = new_data.list_IDs[i] + actual_size_dataset
202 | self.list_IDs = self.all_task_IDs[self.current_task]
203 |
204 | for i in range(size_new_dataset):
205 | self.all_labels[self.current_task][i + actual_size_dataset] = new_data.labels[i]
206 |
207 | # lists used by pytorch loader
208 | self.list_IDs = self.all_task_IDs[self.current_task]
209 | self.labels = self.all_labels[self.current_task]
210 |
211 | # then update data
212 |
213 | if self.load_images:
214 | # we concat to list of images
215 | self.dataset[self.current_task][1] = np.concatenate(
216 | (self.dataset[self.current_task][1], new_data.dataset[task][1]))
217 | else:
218 | shape = [-1] + self.shape_img
219 |
220 | self.dataset[self.current_task][1] = torch.cat(
221 | (self.dataset[self.current_task][1], new_data.dataset[task][1].view(shape)),
222 | 0)
223 | self.dataset[self.current_task][2] = torch.cat((self.dataset[self.current_task][2], new_data.dataset[task][2]),
224 | 0)
225 |
226 | self.sanity_check("after concatenate")
227 |
228 | return self
229 |
230 | def get_current_task(self):
231 | return self.current_task
232 |
233 | def save(self, path, force=False):
234 | torch.save(self.dataset, path)
235 |
236 | def visualize_sample(self, path, number, shape, class_=None):
237 |
238 | data, target = self.get_sample(number, shape)
239 |
240 | # get sample in order from 0 to 9
241 | target, order = target.sort()
242 | data = data[order]
243 |
244 | image_frame_dim = int(np.floor(np.sqrt(number)))
245 |
246 | if shape[2] == 1:
247 | data_np = data.numpy().reshape(number, shape[0], shape[1], shape[2])
248 | save_images(data_np[:image_frame_dim * image_frame_dim, :, :, :], [image_frame_dim, image_frame_dim],
249 | path)
250 | elif shape[2] == 3:
253 | data = data.numpy().reshape(number, shape[2], shape[1], shape[0])
255 |
256 | # remap between 0 and 1
259 |
260 | data = data / 2 + 0.5 # unnormalize
261 | make_samples_batche(data[:number], number, path)
262 | else:
263 | save_images(data[:image_frame_dim * image_frame_dim, :, :, :], [image_frame_dim, image_frame_dim],
264 | path)
265 |
266 | return data
267 |
268 | def visualize_reordered(self, path, number, shape, permutations):
269 |
270 | data = self.visualize_sample(path, number, shape)
271 |
272 | data = data.reshape(-1, shape[0] * shape[1] * shape[2])
273 | concat = deepcopy(data)
274 |
275 | image_frame_dim = int(np.floor(np.sqrt(number)))
276 |
277 | for i in range(1, self.n_tasks):
278 | _, inverse_permutation = permutations[i].sort()
279 | reordered_data = deepcopy(data.index_select(1, inverse_permutation))
280 | concat = torch.cat((concat, reordered_data), 0)
281 |
282 | if shape[2] == 1:
283 | concat = concat.numpy().reshape(number * self.n_tasks, shape[0], shape[1], shape[2])
284 | save_images(concat[:image_frame_dim * image_frame_dim * self.n_tasks, :, :, :],
285 | [self.n_tasks * image_frame_dim, image_frame_dim],
286 | path)
287 | else:
288 | concat = concat.numpy().reshape(number * self.n_tasks, shape[2], shape[1], shape[0])
289 | make_samples_batche(concat[:number * self.n_tasks], number * self.n_tasks, path)
290 |
291 | def increase_size(self, increase_factor):
292 | len_data = len(self.list_IDs)
293 | new_len = len_data * increase_factor
294 |
295 | # make the list grow (not the data)
296 | self.all_task_IDs[self.current_task] = {i: self.list_IDs[i % len_data] for i in range(new_len)}
297 | self.list_IDs = self.all_task_IDs[self.current_task]
298 |
299 | self.sanity_check("increase_size")
300 |
301 | return self
302 |
303 | def sub_sample(self, number):
304 | indices = (torch.randperm(len(self.list_IDs))[0:number]).tolist()
305 |
306 | # subsamples the list (not the data)
307 | self.all_task_IDs[self.current_task] = {i: self.list_IDs[indices[i]] for i in range(number)}
308 | self.list_IDs = self.all_task_IDs[self.current_task]
309 |
310 | return self
311 |
312 | def delete_class(self, class_ind):
313 | # keep all the classes different from class_ind
314 | # we delete only indexes and not data
315 | index2keep = {i: self.list_IDs[i] for i, _ in enumerate(self.list_IDs) if
316 | self.labels[self.list_IDs[i]] != class_ind}
317 | self.all_task_IDs[self.current_task] = {i: index2keep[key] for i, key in enumerate(index2keep.keys())}
318 | self.list_IDs = self.all_task_IDs[self.current_task]
319 |
320 | def delete_task(self, ind_task):
321 |
322 | self.current_task = ind_task
323 |
324 | self.dataset[ind_task][1] = torch.FloatTensor(0)
325 | self.dataset[ind_task][2] = torch.LongTensor(0)
326 |
327 | self.all_task_IDs[ind_task] = {}
328 | self.all_labels[ind_task] = {}
329 | # lists used by pytorch loader
330 | self.list_IDs = self.all_task_IDs[ind_task]
331 | self.labels = self.all_labels[ind_task]
332 |
333 | def sanity_check(self, origin):
334 |
335 | if self.load_images:
336 | size_data = len(self.dataset[self.current_task][1])
337 | else:
338 | size_data = self.dataset[self.current_task][1].size(0)
339 | size_label = self.dataset[self.current_task][2].size(0)
340 |
341 | biggest_data_id = self.list_IDs[max(self.list_IDs, key=self.list_IDs.get)]
342 |
343 | if not size_label == size_data:
344 | raise AssertionError("Sanity check size data ({}) vs label ({}) : {}".format(size_data, size_label, origin))
345 |
346 |         if not biggest_data_id + 1 == size_data:
347 |             raise AssertionError("Sanity check list_IDs ({}) vs data ({}) : {}".format(
348 |                 biggest_data_id + 1,
349 |                 size_data,
350 |                 origin))
351 |
352 | if not len(self.labels) == self.dataset[self.current_task][2].size(0):
353 | raise AssertionError("Sanity check list label ({}) vs label ({}) : {}".format(len(self.labels),
354 | size_label,
355 | origin))
356 |
--------------------------------------------------------------------------------
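The methods above (`increase_size`, `sub_sample`, `delete_class`) all work by remapping the `list_IDs` index dictionary rather than copying the stored tensors. A minimal standalone sketch of that idea, in plain Python with hypothetical helper names (no torch):

```python
def increase_ids(list_ids, factor):
    """Grow an index map by repeating IDs modulo the original length.

    Mirrors increase_size above: only the index list grows,
    the underlying data is untouched.
    """
    n = len(list_ids)
    return {i: list_ids[i % n] for i in range(n * factor)}


def sub_sample_ids(list_ids, picked):
    """Keep only the IDs at the given positions, reindexed from 0,
    as sub_sample does with a random permutation of positions."""
    return {i: list_ids[p] for i, p in enumerate(picked)}


ids = {0: 10, 1: 11, 2: 12}
grown = increase_ids(ids, 2)        # 6 entries, IDs repeat
kept = sub_sample_ids(ids, [2, 0])  # 2 entries, reindexed
```

The same pattern explains why these operations are cheap: the dataset tensors are shared across all index maps.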
/continuum/continuumbuilder.py:
--------------------------------------------------------------------------------
1 | import os.path
2 | import torch
3 | from copy import deepcopy
4 | from .continuum_loader import ContinuumSetLoader
5 | from .data_utils import load_data, check_and_Download_data, get_images_format
6 |
7 |
8 | class ContinuumBuilder(ContinuumSetLoader):
9 | '''Parent Class for Sequence Formers'''
10 |
11 | def __init__(self, path, dataset, tasks_number, scenario, num_classes, download=False, train=True, path_only=False, verbose=False):
12 |
13 | self.tasks_number = tasks_number
14 | self.num_classes = num_classes
15 | self.dataset = dataset
16 | self.i = os.path.join(path, "Datasets")
17 | self.o = os.path.join(path, "Continua", self.dataset)
18 | self.train = train
19 | self.imageSize, self.img_channels = get_images_format(self.dataset)
20 | self.scenario = scenario
21 | self.verbose = verbose
22 | self.path_only = path_only
23 | self.download = download
24 |
25 |         # if self.path_only, we don't load the data, just the paths;
26 |         # data will be loaded on the fly while learning.
27 |         # This is the "light" mode: such continual datasets are easy to generate and load
28 | if self.path_only:
29 | light_id = '_light'
30 | else:
31 | light_id = ''
32 |
33 | if not os.path.exists(self.o):
34 | os.makedirs(self.o)
35 |
36 | if self.train:
37 | self.out_file = os.path.join(self.o, '{}_{}_train{}.pt'.format(self.scenario, self.tasks_number, light_id))
38 | else:
39 | self.out_file = os.path.join(self.o, '{}_{}_test{}.pt'.format(self.scenario, self.tasks_number, light_id))
40 |
41 | check_and_Download_data(self.i, self.dataset, scenario=self.scenario)
42 |
43 | if self.download or not os.path.isfile(self.out_file):
44 | self.formating_data()
45 | else:
46 | self.continuum = torch.load(self.out_file)
47 |
48 | super(ContinuumBuilder, self).__init__(self.continuum)
49 |
50 | def select_index(self, ind_task, y):
51 | """
52 |         Select the data indices to keep for a given task, if a subset is needed
53 |         :param ind_task: task index in the sequence
54 |         :param y: data labels
55 |         :return: class min, class max, and indices of the data to keep
56 | """
57 | return 0, self.num_classes - 1, torch.arange(len(y))
58 |
59 | def transformation(self, ind_task, data):
60 | """
61 | Apply transformation to data if needed
62 | :param ind_task: task index in the sequence
63 | :param data: data to process
64 | :return: data post processing
65 | """
66 | if not ind_task < self.tasks_number:
67 | raise AssertionError("Error in task index")
68 | return deepcopy(data)
69 |
70 | def label_transformation(self, ind_task, label):
71 | """
72 | Apply transformation to label if needed
73 | :param ind_task: task index in the sequence
74 | :param label: label to process
75 |         :return: processed label
76 |         """
77 |         if not ind_task < self.tasks_number:
78 |             raise AssertionError("Error in task index")
79 | return label
80 |
81 | @staticmethod
82 | def get_valid_ind(i_tr):
83 |         # set aside part of the training data for validation
84 | len_valid = int(len(i_tr) * 0.2)
85 | indices = torch.randperm(len(i_tr))
86 |
87 | valid_ind = indices[:len_valid]
88 | train_ind = indices[len_valid:]
89 |
90 | i_va = i_tr[valid_ind]
91 | i_tr = i_tr[train_ind]
92 |
93 | return i_tr, i_va
94 |
95 | def create_task(self, ind_task, x_, y_):
96 |
97 | # select only the good classes
98 | class_min, class_max, id_ = self.select_index(ind_task, y_)
99 |
100 | x_select = x_[id_]
101 | y_select = y_[id_]
102 | x_t = self.transformation(ind_task, x_select)
103 | y_t = self.label_transformation(ind_task, y_select)
104 |
105 | if self.verbose and self.path_only:
106 | print("Task : {}".format(ind_task))
107 | ind = torch.randperm(len(x_t))[:10]
108 | print(x_t[ind])
109 |
110 | return class_min, class_max, x_t, y_t
111 |
112 | def prepare_formatting(self):
113 | pass
114 |
115 | def formating_data(self):
116 |
117 | self.prepare_formatting()
118 |
119 | # variable to save the sequence
120 | self.continuum = []
121 |
122 | x_, y_ = load_data(self.dataset, self.i, self.train)
123 |
124 | for ind_task in range(self.tasks_number):
125 |
126 | c1, c2, x_t, y_t = self.create_task(ind_task, x_, y_)
127 | self.continuum.append([(c1, c2), x_t, y_t])
128 |
129 | if self.verbose and not self.path_only:
130 | print(self.continuum[0][1].shape)
131 | print(self.continuum[0][1].mean())
132 | print(self.continuum[0][1].std())
133 |
134 | torch.save(self.continuum, self.out_file)
--------------------------------------------------------------------------------
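`get_valid_ind` above carves a validation split out of the train indices with a random permutation. A standalone sketch of the same 80/20 split using the standard library instead of `torch.randperm` (the function name is illustrative):

```python
import random


def split_train_valid(indices, valid_ratio=0.2, seed=0):
    # shuffle a copy, then split off the first valid_ratio share,
    # mirroring get_valid_ind above
    idx = list(indices)
    random.Random(seed).shuffle(idx)
    n_valid = int(len(idx) * valid_ratio)
    return idx[n_valid:], idx[:n_valid]


train_ind, valid_ind = split_train_valid(range(10))
```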
/continuum/data_utils.py:
--------------------------------------------------------------------------------
1 | import matplotlib as mpl
2 |
3 | mpl.use('Agg')
4 | import matplotlib.pyplot as plt
5 |
6 | import torch
7 | import os
8 | from torchvision import datasets, transforms
9 |
10 | import numpy as np
11 | import imageio
12 |
13 | from .datasets.LSUN import load_LSUN
14 | from .datasets.cifar10 import load_Cifar10
15 | from .datasets.cifar100 import load_Cifar100
16 | from .datasets.core50 import load_core50
17 | from .datasets.fashion import Fashion
18 | from .datasets.kmnist import Kmnist
19 |
20 |
21 | def get_images_format(dataset):
22 |
23 | if dataset == 'MNIST' or dataset == 'fashion' or dataset == 'mnishion' or "mnist" in dataset:
24 | imageSize = 28
25 | img_channels = 1
26 | elif dataset == 'cifar10' or dataset == 'cifar100':
27 | imageSize = 32
28 | img_channels = 3
29 | elif dataset == 'core10' or dataset == 'core50':
30 |         # core50 images are 128x128
31 | imageSize = 128
32 | img_channels = 3
33 | else:
34 | raise Exception("[!] There is no option for " + dataset)
35 |
36 | return imageSize, img_channels
37 |
38 |
39 | def check_args(args):
40 |
41 |
42 | if "mnist_fellowship" in args.task:
43 | args.dataset = "mnist_fellowship"
44 | if 'merge' in args.task:
45 | args.dataset = "mnist_fellowship_merge"
46 |
47 | return args
48 |
49 |
50 | def check_and_Download_data(folder, dataset, scenario):
51 | # download data if possible
52 | if dataset == 'MNIST' or dataset == 'mnishion' or "mnist_fellowship" in scenario:
53 | datasets.MNIST(folder, train=True, download=True, transform=transforms.ToTensor())
54 | if dataset == 'fashion' or dataset == 'mnishion' or "mnist_fellowship" in scenario:
55 | Fashion(os.path.join(folder, "fashion"), train=True, download=True, transform=transforms.ToTensor())
56 | # download data if possible
57 | if dataset == 'kmnist' or "mnist_fellowship" in scenario:
58 | Kmnist(os.path.join(folder, "kmnist"), train=True, download=True, transform=transforms.ToTensor())
59 | if dataset == 'core50' or dataset == 'core10':
60 | if not os.path.isdir(folder):
61 | print('This dataset should be downloaded manually')
62 |
63 | def load_data(dataset, path2data, train=True):
64 | if dataset == 'cifar10':
65 | path2data = os.path.join(path2data, dataset, "processed")
66 | x_, y_ = load_Cifar10(path2data, train)
67 |
68 | x_ = x_.float()
69 | elif dataset == 'cifar100':
70 | path2data = os.path.join(path2data, dataset, "processed")
71 | x_, y_ = load_Cifar100(path2data, train)
72 |
73 | x_ = x_.float()
74 | elif dataset == 'LSUN':
75 | x_, y_ = load_LSUN(path2data, train)
76 |
77 | x_ = x_.float()
78 | elif dataset == 'core50' or dataset == 'core10':
79 |
80 | x_, y_ = load_core50(dataset, path2data, train)
81 |
82 | elif 'mnist_fellowship' in dataset:
83 | # In this case data will be loaded later dataset by dataset
84 | return None, None
85 | else:
86 |
87 | if train:
88 | data_file = os.path.join(path2data, dataset, "processed", 'training.pt')
89 | else:
90 | data_file = os.path.join(path2data, dataset, "processed", 'test.pt')
91 |
92 | if not os.path.isfile(data_file):
93 | raise AssertionError("Missing file: {}".format(data_file))
94 |
95 | x_, y_ = torch.load(data_file)
96 | x_ = x_.float() / 255.0
97 |
98 | y_ = y_.view(-1).long()
99 |
100 | return x_, y_
101 |
102 |
103 | def visualize_batch(batch, number, shape, path):
104 | batch = batch.cpu().data
105 |
106 | image_frame_dim = int(np.floor(np.sqrt(number)))
107 |
108 | if shape[2] == 1:
109 | data_np = batch.numpy().reshape(number, shape[0], shape[1], shape[2])
110 | save_images(data_np[:image_frame_dim * image_frame_dim, :, :, :], [image_frame_dim, image_frame_dim],
111 | path)
112 | elif shape[2] == 3:
113 | data = batch.numpy().reshape(number, shape[2], shape[1], shape[0])
114 | make_samples_batche(data[:number], number, path)
115 | else:
116 | save_images(batch[:image_frame_dim * image_frame_dim, :, :, :], [image_frame_dim, image_frame_dim],
117 | path)
118 |
119 |
120 | def save_images(images, size, image_path):
121 | return imsave(images, size, image_path)
122 |
123 |
124 | def imsave(images, size, path):
125 | image = np.squeeze(merge(images, size))
126 | image -= np.min(image)
127 | image /= np.max(image) + 1e-12
128 | image = 255 * image # Now scale by 255
129 | image = image.astype(np.uint8)
130 | return imageio.imwrite(path, image)
131 |
132 |
133 | def merge(images, size):
134 | h, w = images.shape[1], images.shape[2]
135 | if (images.shape[3] in (3, 4)):
136 | c = images.shape[3]
137 | img = np.zeros((h * size[0], w * size[1], c))
138 | for idx, image in enumerate(images):
139 | i = idx % size[1]
140 | j = idx // size[1]
141 | img[j * h:j * h + h, i * w:i * w + w, :] = image
142 | return img
143 | elif images.shape[3] == 1:
144 | img = np.zeros((h * size[0], w * size[1]))
145 | for idx, image in enumerate(images):
146 | i = idx % size[1]
147 | j = idx // size[1]
148 | img[j * h:j * h + h, i * w:i * w + w] = image[:, :, 0]
149 | return img
150 | else:
151 | raise ValueError('in merge(images,size) images parameter ''must have dimensions: HxW or HxWx3 or HxWx4')
152 |
153 |
154 | def img_stretch(img):
155 | img = img.astype(float)
156 | img -= np.min(img)
157 | img /= np.max(img) + 1e-12
158 | return img
159 |
160 |
161 | def make_samples_batche(prediction, batch_size, filename_dest):
162 | plt.figure()
163 | batch_size_sqrt = int(np.sqrt(batch_size))
164 | input_channel = prediction[0].shape[0]
165 | input_dim = prediction[0].shape[1]
166 | prediction = np.clip(prediction, 0, 1)
167 | pred = np.rollaxis(prediction.reshape((batch_size_sqrt, batch_size_sqrt, input_channel, input_dim, input_dim)), 2,
168 | 5)
169 | pred = pred.swapaxes(2, 1)
170 | pred = pred.reshape((batch_size_sqrt * input_dim, batch_size_sqrt * input_dim, input_channel))
171 | fig, ax = plt.subplots(figsize=(batch_size_sqrt, batch_size_sqrt))
172 | ax.axis('off')
173 | ax.imshow(img_stretch(pred), interpolation='nearest')
174 | ax.grid()
175 | ax.set_xticks([])
176 | ax.set_yticks([])
177 | fig.savefig(filename_dest, bbox_inches='tight', pad_inches=0)
178 | plt.close(fig)
179 | plt.close()
180 |
--------------------------------------------------------------------------------
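`merge` above tiles a batch of images into a single grid image, row-major. A pure-Python sketch of the same tiling on nested lists (no NumPy), with hypothetical names:

```python
def merge_grid(images, rows, cols):
    # tile equally sized 2-D images (nested lists) into one grid, row-major,
    # mirroring merge() above: i = idx % cols is the column, j = idx // cols the row
    h, w = len(images[0]), len(images[0][0])
    grid = [[0] * (w * cols) for _ in range(h * rows)]
    for idx, img in enumerate(images):
        j, i = idx // cols, idx % cols
        for y in range(h):
            for x in range(w):
                grid[j * h + y][i * w + x] = img[y][x]
    return grid


# four 1x1 "images" tiled into a 2x2 grid
tiled = merge_grid([[[1]], [[2]], [[3]], [[4]]], 2, 2)
```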
/continuum/datasets/LSUN.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from torchvision import datasets, transforms
3 |
4 |
5 | def load_LSUN(path, train=True):
6 |     transform = transforms.Compose([
7 |         transforms.Resize((64, 64)),
8 |         transforms.ToTensor(),
9 |         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
10 |     ])
11 |
12 |     dataset_train = datasets.LSUN(root=path,
13 |                                   classes=['bridge_train', 'church_outdoor_train', 'classroom_train',
14 |                                            'dining_room_train', 'tower_train'], transform=transform)
15 |
16 |     dataset_test = datasets.LSUN(root=path,
17 |                                  classes=['bridge_val', 'church_outdoor_val', 'classroom_val',
18 |                                           'dining_room_val', 'tower_val'],
19 |                                  transform=transform)
20 |
21 |     data_size = 100000
22 |     test_size = 1000
23 |
24 |     tensor_data = torch.Tensor(data_size, 3, 64, 64)
25 |     tensor_label = torch.LongTensor(data_size)
26 |
27 |     tensor_test = torch.Tensor(test_size, 3, 64, 64)
28 |     tensor_label_test = torch.LongTensor(test_size)
29 |
30 |     for i in range(data_size):
31 |         tensor_data[i] = dataset_train[i][0]
32 |         tensor_label[i] = dataset_train[i][1]
33 |
34 |     for i in range(test_size):
35 |         tensor_test[i] = dataset_test[i][0]
36 |         tensor_label_test[i] = dataset_test[i][1]
37 |
38 |     return (tensor_data, tensor_label) if train else (tensor_test, tensor_label_test)
39 |
--------------------------------------------------------------------------------
/continuum/datasets/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/continuum/datasets/__init__.py
--------------------------------------------------------------------------------
/continuum/datasets/cifar10.py:
--------------------------------------------------------------------------------
1 |
2 | import torch
3 | from torchvision import datasets, transforms
4 |
5 |
6 | def load_Cifar10(path, train=True):
7 |     trans = transforms.Compose([
8 |         transforms.Resize(32),
9 |         transforms.ToTensor(),
10 |         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
11 |     ])
12 |
13 |     dataset_train = datasets.CIFAR10(root=path, train=True, download=True, transform=trans)
14 |     tensor_data = torch.Tensor(len(dataset_train), 3, 32, 32)
15 |
16 |     tensor_label = torch.LongTensor(len(dataset_train))
17 |
18 |     for i, (data, label) in enumerate(dataset_train):
19 |         tensor_data[i] = data
20 |         tensor_label[i] = label
21 |
22 |     dataset_test = datasets.CIFAR10(root=path, train=False, download=True, transform=trans)
23 |
24 |     tensor_test = torch.Tensor(len(dataset_test), 3, 32, 32)
25 |     tensor_label_test = torch.LongTensor(len(dataset_test))
26 |
27 |     for i, (data, label) in enumerate(dataset_test):
28 |         tensor_test[i] = data
29 |         tensor_label_test[i] = label
30 |
31 |     return (tensor_data, tensor_label) if train else (tensor_test, tensor_label_test)
--------------------------------------------------------------------------------
/continuum/datasets/cifar100.py:
--------------------------------------------------------------------------------
1 |
2 | import torch
3 | from torchvision import datasets, transforms
4 |
5 |
6 | def load_Cifar100(path, train=True):
7 |     trans = transforms.Compose([
8 |         transforms.Resize(32),
9 |         transforms.ToTensor(),
10 |         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
11 |     ])
12 |
13 |     dataset_train = datasets.CIFAR100(root=path, train=True, download=True, transform=trans)
14 |     tensor_data = torch.Tensor(len(dataset_train), 3, 32, 32)
15 |
16 |     tensor_label = torch.LongTensor(len(dataset_train))
17 |
18 |     for i, (data, label) in enumerate(dataset_train):
19 |         tensor_data[i] = data
20 |         tensor_label[i] = label
21 |
22 |     dataset_test = datasets.CIFAR100(root=path, train=False, download=True, transform=trans)
23 |
24 |     tensor_test = torch.Tensor(len(dataset_test), 3, 32, 32)
25 |     tensor_label_test = torch.LongTensor(len(dataset_test))
26 |
27 |     for i, (data, label) in enumerate(dataset_test):
28 |         tensor_test[i] = data
29 |         tensor_label_test[i] = label
30 |
31 |     return (tensor_data, tensor_label) if train else (tensor_test, tensor_label_test)
--------------------------------------------------------------------------------
/continuum/datasets/core50.py:
--------------------------------------------------------------------------------
1 | import os.path
2 | import torch
3 | import numpy as np
4 | import pickle as pkl
5 |
6 | def get_train_test_ind(paths):
7 | """
8 | Select from the list of all files the train and test files
9 | :param paths: all files
10 | :return: list of train and list of test data
11 | """
12 | list_train = []
13 | list_test = []
14 |
15 | for i, str_path in enumerate(paths):
16 | str_sequence = str_path.split('/')[0]
17 | int_sequence = int(str_sequence.replace('s', ''))
18 |
19 | # sequence 3,7,10 are for test as in the original paper
20 | if int_sequence == 3 or int_sequence == 7 or int_sequence == 10:
21 | list_test.append(i)
22 | elif int_sequence <= 11:
23 | list_train.append(i)
24 | else:
25 |             print("Unexpected sequence index: {}".format(str_sequence))
26 |
27 | return list_train, list_test
28 |
29 | def get_list_labels(paths, num_classes):
30 | """
31 | create a list with all labels from paths
32 | :param paths: path to all images
33 |     :param num_classes: number of classes considered (can be either 10 or 50)
34 | :return: the list of labels
35 | """
36 |
37 | # ex : paths[0] -> 's11/o1/C_11_01_000.png'
38 |
39 | # [o1, ..., o5] -> plug adapters -> label 1
40 | # [o6, ..., o10] -> mobile phones
41 | # [o11, ..., o15] -> scissors
42 | # [o16, ..., o20] -> light bulbs
43 | # [o21, ..., o25] -> cans
44 | # [o26, ..., o30] -> glasses
45 | # [o31, ..., o35] -> balls
46 | # [o36, ..., o40] -> markers
47 | # [o41, ..., o45] -> cups
48 | # [o46, ..., o50] -> remote controls
49 |
50 | list_labels = []
51 | for str_path in paths:
52 | # Ex: str_path = 's11/o1/C_11_01_000.png'
53 | str_label = str_path.split('/')[1] # -> 'o1'
54 | int_label = int(str_label.replace('o', '')) # -> 1
55 |
56 | # We remap from 1 to 50 from 0 to 9
57 | if num_classes == 10:
58 | list_labels.append((int_label - 1) // 5)
59 | else: # We remap from 1 to 50 from 0 to 49
60 | list_labels.append(int_label - 1)
61 |
62 | return list_labels
63 |
64 | def reduce_data_size(paths):
65 | """
66 |     Select one image out of 4 to reduce dataset size and redundancy
67 |     :param paths: all paths
68 |     :return: the reduced list of paths
69 | """
70 | new_path = []
71 | for i, path in enumerate(paths):
72 |         # we go from 20 Hz to 5 Hz following https://arxiv.org/pdf/1805.10966.pdf
73 | if i % 4 == 0:
74 | new_path.append(path)
75 | return new_path
76 |
77 | def create_set(image_path, path, paths, list_data, list_label, name):
78 | """
79 | Pick the right files, create a list with it and save it.
80 | :param image_path: path to the folder containing all images
81 |     :param path: path to the folder where results are saved
82 | :param paths: path inside image_path to all images
83 | :param list_data: list of index to select
84 | :param list_label: list of all labels
85 | :param name: name to give to the file to save
86 | :return: None
87 | """
88 |
89 | selected_labels = np.zeros(len(list_data))
90 | selected_path = []
91 |
92 | # train data
93 | for i, ind in enumerate(list_data):
94 | label = list_label[ind]
95 | selected_labels[i] = label
96 | selected_path.append(os.path.join(image_path, paths[ind]))
97 |
98 | save_path = path.replace("raw", "processed")
99 |
100 | if not os.path.exists(save_path):
101 | os.makedirs(save_path)
102 |
103 | np.savez(os.path.join(save_path, name), y=selected_labels, paths=selected_path)
104 |
105 | def check_data_avaibility(image_path, path_path):
106 | """
107 |     Check availability of main folders and files
108 | :param image_path: path to the folder containing all images
109 | :param path_path: path to the file containing path to all images
110 | :return: None
111 | """
112 |
113 | if not os.path.isfile(path_path):
114 |         raise AssertionError("paths.pkl has to be downloaded from https://vlomonaco.github.io/core50/index.html#dataset"
115 |                              " and put in '{}' ".format(path_path))
116 |
117 | # test if all folders exists
118 | folders_exists = True
119 |     # 11 sequences
120 |     for i in range(1, 12):
121 |         # 50 objects
122 |         for j in range(1, 51):
123 | folder = os.path.join(image_path, "s" + str(i), "o" + str(j))
124 | if not os.path.isdir(folder):
125 | print("Missing folder {}".format(folder))
126 | folders_exists = False
127 | if not folders_exists:
128 |         raise AssertionError("Some folders are missing; the corresponding data probably needs to be downloaded from"
129 |                              " https://vlomonaco.github.io/core50/index.html#dataset"
130 |                              " and put in {}".format(image_path))
131 |
132 |
133 | def create_data_sets(path, num_classes):
134 | """
135 |     Create the train and test sets for core50.
136 |     The data and paths.pkl need to be downloaded manually from "https://vlomonaco.github.io/core50/index.html#dataset"
137 | :param path: path to the folder with all data and paths.pkl
138 | :return: None
139 | """
140 |
141 | name_dataset = "core" + str(num_classes)
142 |
143 | if not (num_classes == 10 or num_classes == 50):
144 | raise AssertionError("Only 10 or 50 are possible here")
145 |
146 | image_path = path.replace("core10", "core50")
147 | path_path = os.path.join(path, 'paths.pkl').replace("core10", "core50")
148 |
149 | # check if main repository already exists
150 | check_data_avaibility(image_path, path_path)
151 |
152 |
153 | pkl_file = open(path_path, 'rb')
154 | paths = pkl.load(pkl_file)
155 |
156 | # Reduction of data size (because there is a lot of similarities between two images)
157 | paths = reduce_data_size(paths)
158 |
159 | # first : get labels
160 | list_label = get_list_labels(paths, num_classes)
161 |
162 | # second : separate test (sequences #3, #7, #10) from train
163 | list_train, list_test = get_train_test_ind(paths)
164 |
165 | print("We start creating the train set")
166 | create_set(image_path, path, paths, list_train, list_label, name=name_dataset + '_paths_train.npz')
167 | print("We start creating the test set")
168 | create_set(image_path, path, paths, list_test, list_label, name=name_dataset + '_paths_test.npz')
169 |
170 |
171 | def load_path(path):
172 | """
173 | Load the file containing the path to all data
174 | :param path: path to the file
175 | :return: list of files and a tensor of labels
176 | """
177 | path_tr = np.load(path)['paths']
178 | y_tr = np.load(path)['y']
179 | y_tr = y_tr.reshape((-1))
180 | y_tr = torch.Tensor(y_tr)
181 | return path_tr, y_tr
182 |
183 |
184 | def load_core50(dataset, path, train=True):
185 |     """
186 |     Load core50 data. We only process paths to the data, not the data itself, for efficiency.
187 |     :param dataset: tells whether we are loading core10 or core50
188 |     :param path: path to data
189 |     :param train: if True return the train set, otherwise the test set
190 |     :return: paths to the images and their labels
191 |     """
192 |
193 |     path_raw = os.path.join(path, dataset, "raw")
194 |     path = os.path.join(path, dataset, "processed")
195 |
196 |     path_train = os.path.join(path, '{}_paths_train.npz'.format(dataset))
197 |     path_test = os.path.join(path, '{}_paths_test.npz'.format(dataset))
198 |
199 |     if not (os.path.isfile(path_train) and os.path.isfile(path_test)):
200 |         # the processed path files are missing: build them first
201 |
202 |         if dataset == "core50":
203 |             create_data_sets(path_raw, 50)
204 |         elif dataset == "core10":
205 |             create_data_sets(path_raw, 10)
206 |         else:
207 |             raise AssertionError("Only core10 or core50 are possible here")
208 |
209 |     if train:
210 |         return load_path(path_train)
211 |
212 |     return load_path(path_test)
213 |
--------------------------------------------------------------------------------
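The CORe50 helpers above derive labels from object ids (`o1`..`o50`, five objects per category) and subsample the 20 Hz frame sequences down to 5 Hz. A standalone sketch of both rules (hypothetical function names):

```python
def object_to_label(object_id, num_classes):
    # map a CORe50 object id (1..50) to a class label, as in get_list_labels above:
    # core10 groups five consecutive objects per category, core50 keeps one class per object
    if num_classes == 10:
        return (object_id - 1) // 5
    return object_id - 1


def subsample_frames(paths, step=4):
    # keep one frame in `step`, as in reduce_data_size (20 Hz -> 5 Hz)
    return [p for i, p in enumerate(paths) if i % step == 0]
```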
/continuum/datasets/fashion.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | import torch.utils.data as data
3 | from PIL import Image
4 | import os
5 | import os.path
6 | import errno
7 | import torch
8 | import codecs
9 |
10 |
11 | class Fashion(data.Dataset):
12 | """`Fashion-MNIST Dataset.
13 | Args:
14 | root (string): Root directory of dataset where ``processed/training.pt``
15 | and ``processed/test.pt`` exist.
16 | train (bool, optional): If True, creates dataset from ``training.pt``,
17 | otherwise from ``test.pt``.
18 | download (bool, optional): If true, downloads the dataset from the internet and
19 | puts it in root directory. If dataset is already downloaded, it is not
20 | downloaded again.
21 | transform (callable, optional): A function/transform that takes in an PIL image
22 | and returns a transformed version. E.g, ``transforms.RandomCrop``
23 | target_transform (callable, optional): A function/transform that takes in the
24 | target and transforms it.
25 | """
26 | urls = [
27 | 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz',
28 | 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz',
29 | 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz',
30 | 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz',
31 | ]
32 | raw_folder = 'raw'
33 | processed_folder = 'processed'
34 | training_file = 'training.pt'
35 | test_file = 'test.pt'
36 |
37 | def __init__(self, root, train=True, transform=None, target_transform=None, download=False):
38 | self.root = os.path.expanduser(root)
39 | self.transform = transform
40 | self.target_transform = target_transform
41 | self.train = train # training set or test set
42 |
43 | if download:
44 | self.download()
45 |
46 | if not self._check_exists():
47 | raise RuntimeError('Dataset not found.' +
48 | ' You can use download=True to download it')
49 |
50 | if self.train:
51 | self.train_data, self.train_labels = torch.load(
52 | os.path.join(root, self.processed_folder, self.training_file))
53 | else:
54 | self.test_data, self.test_labels = torch.load(os.path.join(root, self.processed_folder, self.test_file))
55 |
56 | def __getitem__(self, index):
57 | """
58 | Args:
59 | index (int): Index
60 | Returns:
61 | tuple: (image, target) where target is index of the target class.
62 | """
63 | if self.train:
64 | img, target = self.train_data[index], self.train_labels[index]
65 | else:
66 | img, target = self.test_data[index], self.test_labels[index]
67 |
68 | # doing this so that it is consistent with all other datasets
69 | # to return a PIL Image
70 | img = Image.fromarray(img.numpy(), mode='L')
71 |
72 | if self.transform is not None:
73 | img = self.transform(img)
74 |
75 | if self.target_transform is not None:
76 | target = self.target_transform(target)
77 |
78 | return img, target
79 |
80 | def __len__(self):
81 | if self.train:
82 | return len(self.train_data)
83 | else:
84 | return len(self.test_data)
85 |
86 | def _check_exists(self):
87 |
88 | return os.path.exists(os.path.join(self.root, self.processed_folder, self.training_file)) and \
89 | os.path.exists(os.path.join(self.root, self.processed_folder, self.test_file))
90 |
91 | def download(self):
92 | """Download the Fashion MNIST data if it doesn't exist in processed_folder already."""
93 | from six.moves import urllib
94 | import gzip
95 |
96 | if self._check_exists():
97 | return
98 |
99 | # download files
100 | try:
101 | os.makedirs(os.path.join(self.root, self.raw_folder))
102 | os.makedirs(os.path.join(self.root, self.processed_folder))
103 | except OSError as e:
104 | if e.errno == errno.EEXIST:
105 | pass
106 | else:
107 | raise
108 |
109 | for url in self.urls:
110 | print('Downloading ' + url)
111 | data = urllib.request.urlopen(url)
112 | filename = url.rpartition('/')[2]
113 | file_path = os.path.join(self.root, self.raw_folder, filename)
114 | with open(file_path, 'wb') as f:
115 | f.write(data.read())
116 | with open(file_path.replace('.gz', ''), 'wb') as out_f, \
117 | gzip.GzipFile(file_path) as zip_f:
118 | out_f.write(zip_f.read())
119 | os.unlink(file_path)
120 |
121 | # process and save as torch files
122 | print('Processing...')
123 |
124 | training_set = (
125 | read_image_file(os.path.join(self.root, self.raw_folder, 'train-images-idx3-ubyte')),
126 | read_label_file(os.path.join(self.root, self.raw_folder, 'train-labels-idx1-ubyte'))
127 | )
128 | test_set = (
129 | read_image_file(os.path.join(self.root, self.raw_folder, 't10k-images-idx3-ubyte')),
130 | read_label_file(os.path.join(self.root, self.raw_folder, 't10k-labels-idx1-ubyte'))
131 | )
132 | with open(os.path.join(self.root, self.processed_folder, self.training_file), 'wb') as f:
133 | torch.save(training_set, f)
134 | with open(os.path.join(self.root, self.processed_folder, self.test_file), 'wb') as f:
135 | torch.save(test_set, f)
136 |
137 | print('Done!')
138 |
139 |
140 | def get_int(b):
141 | return int(codecs.encode(b, 'hex'), 16)
142 |
143 |
144 | def parse_byte(b):
145 | if isinstance(b, str):
146 | return ord(b)
147 | return b
148 |
149 |
150 | def read_label_file(path):
151 | with open(path, 'rb') as f:
152 | data = f.read()
153 |
154 | if not get_int(data[:4]) == 2049:
155 |         raise AssertionError("Wrong magic number for a label file")
156 |
157 | length = get_int(data[4:8])
158 | labels = [parse_byte(b) for b in data[8:]]
159 | if not len(labels) == length:
160 |         raise AssertionError("Wrong number of labels")
161 | return torch.LongTensor(labels)
162 |
163 |
164 | def read_image_file(path):
165 | with open(path, 'rb') as f:
166 | data = f.read()
167 | if not get_int(data[:4]) == 2051:
168 |         raise AssertionError("Wrong magic number for an image file")
169 | length = get_int(data[4:8])
170 | num_rows = get_int(data[8:12])
171 | num_cols = get_int(data[12:16])
172 | images = []
173 | idx = 16
174 | for _ in range(length):
175 | img = []
176 | images.append(img)
177 | for _ in range(num_rows):
178 | row = []
179 | img.append(row)
180 | for _ in range(num_cols):
181 | row.append(parse_byte(data[idx]))
182 | idx += 1
183 | if not len(images) == length:
184 |         raise AssertionError("Wrong number of images")
185 | return torch.ByteTensor(images).view(-1, 28, 28)
186 |
187 |
--------------------------------------------------------------------------------
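`get_int` above parses the big-endian integers of the IDX file headers through a hex round-trip. A sketch of the equivalent parsing with `int.from_bytes`, checked against the IDX magic numbers the reader expects (2049 for label files, 2051 for image files):

```python
def get_int_be(b):
    # big-endian unsigned int, equivalent to get_int's hex round-trip
    return int.from_bytes(b, byteorder='big')


label_magic = get_int_be(b'\x00\x00\x08\x01')  # IDX label file header
image_magic = get_int_be(b'\x00\x00\x08\x03')  # IDX image file header
```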
/continuum/datasets/kmnist.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | import os
3 |
4 | if os.path.exists("datasets"):
5 | from datasets.fashion import Fashion
6 | else:
7 | from ..datasets.fashion import Fashion
8 |
9 |
10 |
11 | class Kmnist(Fashion):
12 | """Kuzushiji-MNIST (10 classes, 28x28, 70k examples)
13 | Args:
14 | root (string): Root directory of dataset where ``processed/training.pt``
15 | and ``processed/test.pt`` exist.
16 | train (bool, optional): If True, creates dataset from ``training.pt``,
17 | otherwise from ``test.pt``.
18 | download (bool, optional): If true, downloads the dataset from the internet and
19 | puts it in root directory. If dataset is already downloaded, it is not
20 | downloaded again.
21 | transform (callable, optional): A function/transform that takes in an PIL image
22 | and returns a transformed version. E.g, ``transforms.RandomCrop``
23 | target_transform (callable, optional): A function/transform that takes in the
24 | target and transforms it.
25 | """
26 | urls = ['http://codh.rois.ac.jp/kmnist/dataset/kmnist/train-images-idx3-ubyte.gz',
27 | 'http://codh.rois.ac.jp/kmnist/dataset/kmnist/train-labels-idx1-ubyte.gz',
28 | 'http://codh.rois.ac.jp/kmnist/dataset/kmnist/t10k-images-idx3-ubyte.gz',
29 | 'http://codh.rois.ac.jp/kmnist/dataset/kmnist/t10k-labels-idx1-ubyte.gz']
30 |
31 |
--------------------------------------------------------------------------------
/continuum/disjoint.py:
--------------------------------------------------------------------------------
1 | from continuum.continuumbuilder import ContinuumBuilder
2 |
3 |
4 | class Disjoint(ContinuumBuilder):
5 |     """Scenario: each new task brings never-seen classes to learn. The code here allows choosing how many tasks
6 |     to split a dataset into, and therefore the number of classes per task.
7 |     This scenario tests algorithms when there is no intersection between tasks."""
8 |
9 | def __init__(self, path="./Data", dataset="MNIST", tasks_number=1, download=False, train=True):
10 | super(Disjoint, self).__init__(path=path,
11 | dataset=dataset,
12 | tasks_number=tasks_number,
13 | scenario="Disjoint",
14 | download=download,
15 | train=train,
16 | num_classes=10)
17 |
18 | def select_index(self, ind_task, y):
19 | cpt = int(self.num_classes / self.tasks_number)
20 |
21 |         if cpt <= 0:
22 |             raise AssertionError("Classes per task must be positive: tasks_number cannot exceed num_classes")
23 |
24 | class_min = ind_task * cpt
25 | class_max = (ind_task + 1) * cpt
26 |
27 | return class_min, class_max, ((y >= class_min) & (y < class_max)).nonzero().view(-1)
28 |
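The class-range arithmetic in `select_index` can be checked with a torch-free sketch (a hypothetical standalone function mirroring the method above, with plain lists in place of tensors):

```python
def select_index(ind_task, y, num_classes=10, tasks_number=5):
    # classes per task: e.g. 10 classes over 5 tasks -> 2 classes each
    cpt = num_classes // tasks_number
    if cpt <= 0:
        raise AssertionError("tasks_number must not exceed num_classes")
    class_min = ind_task * cpt
    class_max = (ind_task + 1) * cpt
    # keep only samples whose label falls in [class_min, class_max)
    indices = [i for i, label in enumerate(y) if class_min <= label < class_max]
    return class_min, class_max, indices

lo, hi, idx = select_index(1, [0, 1, 2, 3, 4, 5])  # task 1 -> classes 2 and 3
```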
--------------------------------------------------------------------------------
/continuum/mnistfellowship.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import os
3 |
4 | from .data_utils import load_data
5 | from .continuumbuilder import ContinuumBuilder
6 |
7 |
8 |
9 | class MnistFellowship(ContinuumBuilder):
10 | def __init__(self, path="./Archives/Data", tasks_number=3, merge=False, download=False, train=True):
11 |
12 | self.merge = merge
13 | if self.merge:
14 | self.scenario = "mnist_fellowship_merge"
15 | else:
16 | self.scenario = "mnist_fellowship"
17 |
18 | super(MnistFellowship, self).__init__(path=path,
19 | dataset="mnist_fellowship",
20 | tasks_number=tasks_number,
21 | scenario=self.scenario,
22 | download=download,
23 | train=train,
24 | num_classes=10)
25 |
26 | def select_index(self, ind_task, y):
27 |
28 | if not self.merge:
29 | class_min = self.num_classes * ind_task
30 | class_max = self.num_classes * (ind_task + 1) - 1
31 | else:
32 | class_min = 0
33 | class_max = self.num_classes - 1
34 | return class_min, class_max, torch.arange(len(y))
35 |
36 | def label_transformation(self, ind_task, label):
37 | """
38 | Apply transformation to label if needed
39 | :param ind_task: task index in the sequence
40 | :param label: label to process
41 | :return: data post processing
42 | """
43 |
44 |         # without merging, class 0 of the second task becomes class 10, class 1 -> class 11, ...
45 | if not self.merge:
46 | label = label + self.num_classes * ind_task
47 |
48 | return label
49 |
50 | def create_task(self, ind_task, x_, y_):
51 |
52 | if ind_task == 0: # MNIST
53 | self.dataset = 'MNIST'
54 | elif ind_task == 1: # fashion
55 | self.dataset = 'fashion'
56 | elif ind_task == 2: # kmnist
57 | self.dataset = 'kmnist'
58 |
59 | # we load a new dataset for each task
60 | x_, y_ = load_data(self.dataset, self.i)
61 |
62 | return super().create_task(ind_task, x_, y_)
63 |
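The label shift in `label_transformation` keeps class ids disjoint across the three datasets. A minimal sketch of that rule (hypothetical `shift_label` helper, assuming 10 classes per dataset):

```python
def shift_label(label, ind_task, num_classes=10, merge=False):
    # without merging, task 1's class 0 becomes 10, class 1 becomes 11, ...
    if merge:
        return label  # merged setup: all tasks share labels 0..9
    return label + num_classes * ind_task
```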
--------------------------------------------------------------------------------
/continuum/permutation_classes.t:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TLESORT/Continual_Learning_Data_Former/51b43d770d97e441bb6e63e0a568c2f3d5bc8866/continuum/permutation_classes.t
--------------------------------------------------------------------------------
/continuum/permutations.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import os
3 | from continuum.continuumbuilder import ContinuumBuilder
4 | from copy import deepcopy
5 |
6 |
7 | class Permutations(ContinuumBuilder):
8 |     '''Scenario: for each task all classes are available, but the pixels are permuted differently for each task.
9 |     The goal is to test algorithms when all data for each class are not available simultaneously and come from
10 |     different modes of the distribution (different pixel permutations).'''
11 |
12 | def __init__(self, path="./Data", dataset="MNIST", tasks_number=5, download=False, train=True):
13 | self.num_pixels = 0 # will be set in prepare_formatting
14 | self.perm_file = "" # will be set in prepare_formatting
15 | self.list_perm = []
16 |
17 | super(Permutations, self).__init__(path=path,
18 | dataset=dataset,
19 | tasks_number=tasks_number,
20 |                                            scenario="Permutations",
21 | download=download,
22 | train=train,
23 | num_classes=10)
24 |
25 | def prepare_formatting(self):
26 |
27 | self.num_pixels = self.imageSize * self.imageSize * self.img_channels
28 | self.perm_file = os.path.join(self.o, '{}_{}_train.pt'.format("ind_permutations", self.tasks_number))
29 |
30 | if os.path.isfile(self.perm_file):
31 | self.list_perm = torch.load(self.perm_file)
32 | else:
33 | p = torch.FloatTensor(range(self.num_pixels)).long()
34 | for _ in range(self.tasks_number):
35 | self.list_perm.append(p)
36 | p = torch.randperm(self.num_pixels).long().view(-1)
37 | torch.save(self.list_perm, self.perm_file)
38 |
39 | def transformation(self, ind_task, data):
40 | p = self.list_perm[ind_task]
41 |
42 | data = data.view(-1, self.num_pixels)
43 | return deepcopy(data).index_select(1, p).view(-1, self.img_channels, self.imageSize, self.imageSize)
44 |
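The permutation bookkeeping in `prepare_formatting` keeps the identity permutation for task 0 and draws a fresh random permutation for each later task. A torch-free sketch of that scheme (hypothetical `make_permutations` and `permute` helpers):

```python
import random

def make_permutations(num_pixels, tasks_number, seed=0):
    # task 0 keeps the identity permutation; later tasks shuffle pixel order
    rng = random.Random(seed)
    perms = []
    p = list(range(num_pixels))
    for _ in range(tasks_number):
        perms.append(p)
        p = list(range(num_pixels))
        rng.shuffle(p)
    return perms

def permute(flat_image, perm):
    # mirrors index_select along the flattened pixel dimension
    return [flat_image[j] for j in perm]
```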
--------------------------------------------------------------------------------
/continuum/rotations.py:
--------------------------------------------------------------------------------
1 | from torchvision import transforms
2 | import torch
3 | from continuum.continuumbuilder import ContinuumBuilder
4 |
5 |
6 | class Rotations(ContinuumBuilder):
7 |     '''Scenario: for each task all classes are available, but the data is rotated a bit more in each task.
8 |     The goal is to test algorithms when all data for each class are not available simultaneously and there is
9 |     a concept drift.'''
10 |
11 | def __init__(self, path="./Data", dataset="MNIST", tasks_number=1, rotation_number=None, download=False, train=True, min_rot=0.0,
12 | max_rot=90.0):
13 | self.max_rot = max_rot
14 | self.min_rot = min_rot
15 |
16 | if rotation_number is None:
17 | rotation_number = tasks_number
18 | self.rotation_number = rotation_number
19 |
20 | super(Rotations, self).__init__(path=path,
21 | dataset=dataset,
22 | tasks_number=tasks_number,
23 | scenario="Rotations",
24 | download=download,
25 | train=train,
26 | num_classes=10)
27 |
28 | def apply_rotation(self, data, min_rot, max_rot):
29 | transform = transforms.Compose(
30 | [transforms.RandomAffine(degrees=[min_rot, max_rot]),
31 | transforms.ToTensor()])
32 |
33 |         result = torch.FloatTensor(data.size(0), self.imageSize * self.imageSize)
34 |         for i in range(data.size(0)):
35 |             X = data[i].view(self.imageSize, self.imageSize)
36 |             X = transforms.ToPILImage()(X)
37 |             result[i] = transform(X).view(self.imageSize * self.imageSize)
38 |
39 | return result
40 |
41 | def transformation(self, ind_task, data):
42 | if ind_task is None:
43 | ind_task = self.current_task
44 |
45 |         delta_rot = 1.0 * (self.max_rot - self.min_rot) / self.rotation_number
46 | noise = 1.0 * delta_rot / 10.0
47 |
48 | min_rot = self.min_rot + (delta_rot * ind_task) - noise
49 | max_rot = self.min_rot + (delta_rot * ind_task) + noise
50 |
51 | return self.apply_rotation(data, min_rot, max_rot)
52 |
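The rotation window computed in `transformation` splits the [min_rot, max_rot] range into equal bands and narrows each task to a small angular band around its center. A sketch of that arithmetic (hypothetical `rotation_window` helper):

```python
def rotation_window(ind_task, tasks_number, min_rot=0.0, max_rot=90.0):
    # each task rotates around min_rot + ind_task * delta, +/- 10% of delta
    delta = (max_rot - min_rot) / tasks_number
    noise = delta / 10.0
    center = min_rot + delta * ind_task
    return center - noise, center + noise

lo, hi = rotation_window(2, 5)  # task 2 of 5: centered at 36 degrees
```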
--------------------------------------------------------------------------------
/doxygen_config:
--------------------------------------------------------------------------------
1 | # Doxyfile 1.8.13
2 |
3 | # This file describes the settings to be used by the documentation system
4 | # doxygen (www.doxygen.org) for a project.
5 | #
6 | # All text after a double hash (##) is considered a comment and is placed in
7 | # front of the TAG it is preceding.
8 | #
9 | # All text after a single hash (#) is considered a comment and will be ignored.
10 | # The format is:
11 | # TAG = value [value, ...]
12 | # For lists, items can also be appended using:
13 | # TAG += value [value, ...]
14 | # Values that contain spaces should be placed between quotes (\" \").
15 |
16 | #---------------------------------------------------------------------------
17 | # Project related configuration options
18 | #---------------------------------------------------------------------------
19 |
20 | # This tag specifies the encoding used for all characters in the config file
21 | # that follow. The default is UTF-8 which is also the encoding used for all text
22 | # before the first occurrence of this tag. Doxygen uses libiconv (or the iconv
23 | # built into libc) for the transcoding. See http://www.gnu.org/software/libiconv
24 | # for the list of possible encodings.
25 | # The default value is: UTF-8.
26 |
27 | DOXYFILE_ENCODING = UTF-8
28 |
29 | # The PROJECT_NAME tag is a single word (or a sequence of words surrounded by
30 | # double-quotes, unless you are using Doxywizard) that should identify the
31 | # project for which the documentation is generated. This name is used in the
32 | # title of most generated pages and in a few other places.
33 | # The default value is: My Project.
34 |
35 | PROJECT_NAME = "CONTINUAL LEARNING"
36 |
37 | # The PROJECT_NUMBER tag can be used to enter a project or revision number. This
38 | # could be handy for archiving the generated documentation or if some version
39 | # control system is used.
40 |
41 | PROJECT_NUMBER =
42 |
43 | # Using the PROJECT_BRIEF tag one can provide an optional one line description
44 | # for a project that appears at the top of each page and should give viewer a
45 | # quick idea about the purpose of the project. Keep the description short.
46 |
47 | PROJECT_BRIEF =
48 |
49 | # With the PROJECT_LOGO tag one can specify a logo or an icon that is included
50 | # in the documentation. The maximum height of the logo should not exceed 55
51 | # pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
52 | # the logo to the output directory.
53 |
54 | PROJECT_LOGO =
55 |
56 | # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
57 | # into which the generated documentation will be written. If a relative path is
58 | # entered, it will be relative to the location where doxygen was started. If
59 | # left blank the current directory will be used.
60 |
61 | OUTPUT_DIRECTORY =
62 |
63 | # If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
64 | # directories (in 2 levels) under the output directory of each output format and
65 | # will distribute the generated files over these directories. Enabling this
66 | # option can be useful when feeding doxygen a huge amount of source files, where
67 | # putting all generated files in the same directory would otherwise causes
68 | # performance problems for the file system.
69 | # The default value is: NO.
70 |
71 | CREATE_SUBDIRS = NO
72 |
73 | # If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII
74 | # characters to appear in the names of generated files. If set to NO, non-ASCII
75 | # characters will be escaped, for example _xE3_x81_x84 will be used for Unicode
76 | # U+3044.
77 | # The default value is: NO.
78 |
79 | ALLOW_UNICODE_NAMES = NO
80 |
81 | # The OUTPUT_LANGUAGE tag is used to specify the language in which all
82 | # documentation generated by doxygen is written. Doxygen will use this
83 | # information to generate all constant output in the proper language.
84 | # Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,
85 | # Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),
86 | # Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,
87 | # Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),
88 | # Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,
89 | # Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,
90 | # Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,
91 | # Ukrainian and Vietnamese.
92 | # The default value is: English.
93 |
94 | OUTPUT_LANGUAGE        = English
95 |
96 | # If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member
97 | # descriptions after the members that are listed in the file and class
98 | # documentation (similar to Javadoc). Set to NO to disable this.
99 | # The default value is: YES.
100 |
101 | BRIEF_MEMBER_DESC = YES
102 |
103 | # If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief
104 | # description of a member or function before the detailed description
105 | #
106 | # Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
107 | # brief descriptions will be completely suppressed.
108 | # The default value is: YES.
109 |
110 | REPEAT_BRIEF = YES
111 |
112 | # This tag implements a quasi-intelligent brief description abbreviator that is
113 | # used to form the text in various listings. Each string in this list, if found
114 | # as the leading text of the brief description, will be stripped from the text
115 | # and the result, after processing the whole list, is used as the annotated
116 | # text. Otherwise, the brief description is used as-is. If left blank, the
117 | # following values are used ($name is automatically replaced with the name of
118 | # the entity):The $name class, The $name widget, The $name file, is, provides,
119 | # specifies, contains, represents, a, an and the.
120 |
121 | ABBREVIATE_BRIEF = "The $name class" \
122 | "The $name widget" \
123 | "The $name file" \
124 | is \
125 | provides \
126 | specifies \
127 | contains \
128 | represents \
129 | a \
130 | an \
131 | the
132 |
133 | # If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
134 | # doxygen will generate a detailed section even if there is only a brief
135 | # description.
136 | # The default value is: NO.
137 |
138 | ALWAYS_DETAILED_SEC = NO
139 |
140 | # If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
141 | # inherited members of a class in the documentation of that class as if those
142 | # members were ordinary class members. Constructors, destructors and assignment
143 | # operators of the base classes will not be shown.
144 | # The default value is: NO.
145 |
146 | INLINE_INHERITED_MEMB = NO
147 |
148 | # If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
149 | # before files name in the file list and in the header files. If set to NO the
150 | # shortest path that makes the file name unique will be used
151 | # The default value is: YES.
152 |
153 | FULL_PATH_NAMES = YES
154 |
155 | # The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
156 | # Stripping is only done if one of the specified strings matches the left-hand
157 | # part of the path. The tag can be used to show relative paths in the file list.
158 | # If left blank the directory from which doxygen is run is used as the path to
159 | # strip.
160 | #
161 | # Note that you can specify absolute paths here, but also relative paths, which
162 | # will be relative from the directory where doxygen is started.
163 | # This tag requires that the tag FULL_PATH_NAMES is set to YES.
164 |
165 | STRIP_FROM_PATH =
166 |
167 | # The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
168 | # path mentioned in the documentation of a class, which tells the reader which
169 | # header file to include in order to use a class. If left blank only the name of
170 | # the header file containing the class definition is used. Otherwise one should
171 | # specify the list of include paths that are normally passed to the compiler
172 | # using the -I flag.
173 |
174 | STRIP_FROM_INC_PATH =
175 |
176 | # If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
177 | # less readable) file names. This can be useful is your file systems doesn't
178 | # support long names like on DOS, Mac, or CD-ROM.
179 | # The default value is: NO.
180 |
181 | SHORT_NAMES = NO
182 |
183 | # If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
184 | # first line (until the first dot) of a Javadoc-style comment as the brief
185 | # description. If set to NO, the Javadoc-style will behave just like regular Qt-
186 | # style comments (thus requiring an explicit @brief command for a brief
187 | # description.)
188 | # The default value is: NO.
189 |
190 | JAVADOC_AUTOBRIEF = NO
191 |
192 | # If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
193 | # line (until the first dot) of a Qt-style comment as the brief description. If
194 | # set to NO, the Qt-style will behave just like regular Qt-style comments (thus
195 | # requiring an explicit \brief command for a brief description.)
196 | # The default value is: NO.
197 |
198 | QT_AUTOBRIEF = NO
199 |
200 | # The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
201 | # multi-line C++ special comment block (i.e. a block of //! or /// comments) as
202 | # a brief description. This used to be the default behavior. The new default is
203 | # to treat a multi-line C++ comment block as a detailed description. Set this
204 | # tag to YES if you prefer the old behavior instead.
205 | #
206 | # Note that setting this tag to YES also means that rational rose comments are
207 | # not recognized any more.
208 | # The default value is: NO.
209 |
210 | MULTILINE_CPP_IS_BRIEF = NO
211 |
212 | # If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
213 | # documentation from any documented member that it re-implements.
214 | # The default value is: YES.
215 |
216 | INHERIT_DOCS = YES
217 |
218 | # If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new
219 | # page for each member. If set to NO, the documentation of a member will be part
220 | # of the file/class/namespace that contains it.
221 | # The default value is: NO.
222 |
223 | SEPARATE_MEMBER_PAGES = NO
224 |
225 | # The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
226 | # uses this value to replace tabs by spaces in code fragments.
227 | # Minimum value: 1, maximum value: 16, default value: 4.
228 |
229 | TAB_SIZE = 4
230 |
231 | # This tag can be used to specify a number of aliases that act as commands in
232 | # the documentation. An alias has the form:
233 | # name=value
234 | # For example adding
235 | # "sideeffect=@par Side Effects:\n"
236 | # will allow you to put the command \sideeffect (or @sideeffect) in the
237 | # documentation, which will result in a user-defined paragraph with heading
238 | # "Side Effects:". You can put \n's in the value part of an alias to insert
239 | # newlines.
240 |
241 | ALIASES =
242 |
243 | # This tag can be used to specify a number of word-keyword mappings (TCL only).
244 | # A mapping has the form "name=value". For example adding "class=itcl::class"
245 | # will allow you to use the command class in the itcl::class meaning.
246 |
247 | TCL_SUBST =
248 |
249 | # Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
250 | # only. Doxygen will then generate output that is more tailored for C. For
251 | # instance, some of the names that are used will be different. The list of all
252 | # members will be omitted, etc.
253 | # The default value is: NO.
254 |
255 | OPTIMIZE_OUTPUT_FOR_C = NO
256 |
257 | # Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
258 | # Python sources only. Doxygen will then generate output that is more tailored
259 | # for that language. For instance, namespaces will be presented as packages,
260 | # qualified scopes will look different, etc.
261 | # The default value is: NO.
262 |
263 | OPTIMIZE_OUTPUT_JAVA = NO
264 |
265 | # Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
266 | # sources. Doxygen will then generate output that is tailored for Fortran.
267 | # The default value is: NO.
268 |
269 | OPTIMIZE_FOR_FORTRAN = NO
270 |
271 | # Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
272 | # sources. Doxygen will then generate output that is tailored for VHDL.
273 | # The default value is: NO.
274 |
275 | OPTIMIZE_OUTPUT_VHDL = NO
276 |
277 | # Doxygen selects the parser to use depending on the extension of the files it
278 | # parses. With this tag you can assign which parser to use for a given
279 | # extension. Doxygen has a built-in mapping, but you can override or extend it
280 | # using this tag. The format is ext=language, where ext is a file extension, and
281 | # language is one of the parsers supported by doxygen: IDL, Java, Javascript,
282 | # C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
283 | # FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
284 | # Fortran. In the later case the parser tries to guess whether the code is fixed
285 | # or free formatted code, this is the default for Fortran type files), VHDL. For
286 | # instance to make doxygen treat .inc files as Fortran files (default is PHP),
287 | # and .f files as C (default is Fortran), use: inc=Fortran f=C.
288 | #
289 | # Note: For files without extension you can use no_extension as a placeholder.
290 | #
291 | # Note that for custom extensions you also need to set FILE_PATTERNS otherwise
292 | # the files are not read by doxygen.
293 |
294 | EXTENSION_MAPPING =
295 |
296 | # If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
297 | # according to the Markdown format, which allows for more readable
298 | # documentation. See http://daringfireball.net/projects/markdown/ for details.
299 | # The output of markdown processing is further processed by doxygen, so you can
300 | # mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
301 | # case of backward compatibilities issues.
302 | # The default value is: YES.
303 |
304 | MARKDOWN_SUPPORT = YES
305 |
306 | # When the TOC_INCLUDE_HEADINGS tag is set to a non-zero value, all headings up
307 | # to that level are automatically included in the table of contents, even if
308 | # they do not have an id attribute.
309 | # Note: This feature currently applies only to Markdown headings.
310 | # Minimum value: 0, maximum value: 99, default value: 0.
311 | # This tag requires that the tag MARKDOWN_SUPPORT is set to YES.
312 |
313 | TOC_INCLUDE_HEADINGS = 0
314 |
315 | # When enabled doxygen tries to link words that correspond to documented
316 | # classes, or namespaces to their corresponding documentation. Such a link can
317 | # be prevented in individual cases by putting a % sign in front of the word or
318 | # globally by setting AUTOLINK_SUPPORT to NO.
319 | # The default value is: YES.
320 |
321 | AUTOLINK_SUPPORT = YES
322 |
323 | # If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
324 | # to include (a tag file for) the STL sources as input, then you should set this
325 | # tag to YES in order to let doxygen match functions declarations and
326 | # definitions whose arguments contain STL classes (e.g. func(std::string);
327 | # versus func(std::string) {}). This also make the inheritance and collaboration
328 | # diagrams that involve STL classes more complete and accurate.
329 | # The default value is: NO.
330 |
331 | BUILTIN_STL_SUPPORT = NO
332 |
333 | # If you use Microsoft's C++/CLI language, you should set this option to YES to
334 | # enable parsing support.
335 | # The default value is: NO.
336 |
337 | CPP_CLI_SUPPORT = NO
338 |
339 | # Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
340 | # http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen
341 | # will parse them like normal C++ but will assume all classes use public instead
342 | # of private inheritance when no explicit protection keyword is present.
343 | # The default value is: NO.
344 |
345 | SIP_SUPPORT = NO
346 |
347 | # For Microsoft's IDL there are propget and propput attributes to indicate
348 | # getter and setter methods for a property. Setting this option to YES will make
349 | # doxygen to replace the get and set methods by a property in the documentation.
350 | # This will only work if the methods are indeed getting or setting a simple
351 | # type. If this is not the case, or you want to show the methods anyway, you
352 | # should set this option to NO.
353 | # The default value is: YES.
354 |
355 | IDL_PROPERTY_SUPPORT = YES
356 |
357 | # If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
358 | # tag is set to YES then doxygen will reuse the documentation of the first
359 | # member in the group (if any) for the other members of the group. By default
360 | # all members of a group must be documented explicitly.
361 | # The default value is: NO.
362 |
363 | DISTRIBUTE_GROUP_DOC = NO
364 |
365 | # If one adds a struct or class to a group and this option is enabled, then also
366 | # any nested class or struct is added to the same group. By default this option
367 | # is disabled and one has to add nested compounds explicitly via \ingroup.
368 | # The default value is: NO.
369 |
370 | GROUP_NESTED_COMPOUNDS = NO
371 |
372 | # Set the SUBGROUPING tag to YES to allow class member groups of the same type
373 | # (for instance a group of public functions) to be put as a subgroup of that
374 | # type (e.g. under the Public Functions section). Set it to NO to prevent
375 | # subgrouping. Alternatively, this can be done per class using the
376 | # \nosubgrouping command.
377 | # The default value is: YES.
378 |
379 | SUBGROUPING = YES
380 |
381 | # When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
382 | # are shown inside the group in which they are included (e.g. using \ingroup)
383 | # instead of on a separate page (for HTML and Man pages) or section (for LaTeX
384 | # and RTF).
385 | #
386 | # Note that this feature does not work in combination with
387 | # SEPARATE_MEMBER_PAGES.
388 | # The default value is: NO.
389 |
390 | INLINE_GROUPED_CLASSES = NO
391 |
392 | # When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
393 | # with only public data fields or simple typedef fields will be shown inline in
394 | # the documentation of the scope in which they are defined (i.e. file,
395 | # namespace, or group documentation), provided this scope is documented. If set
396 | # to NO, structs, classes, and unions are shown on a separate page (for HTML and
397 | # Man pages) or section (for LaTeX and RTF).
398 | # The default value is: NO.
399 |
400 | INLINE_SIMPLE_STRUCTS = NO
401 |
402 | # When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
403 | # enum is documented as struct, union, or enum with the name of the typedef. So
404 | # typedef struct TypeS {} TypeT, will appear in the documentation as a struct
405 | # with name TypeT. When disabled the typedef will appear as a member of a file,
406 | # namespace, or class. And the struct will be named TypeS. This can typically be
407 | # useful for C code in case the coding convention dictates that all compound
408 | # types are typedef'ed and only the typedef is referenced, never the tag name.
409 | # The default value is: NO.
410 |
411 | TYPEDEF_HIDES_STRUCT = NO
412 |
413 | # The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
414 | # cache is used to resolve symbols given their name and scope. Since this can be
415 | # an expensive process and often the same symbol appears multiple times in the
416 | # code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
417 | # doxygen will become slower. If the cache is too large, memory is wasted. The
418 | # cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
419 | # is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
420 | # symbols. At the end of a run doxygen will report the cache usage and suggest
421 | # the optimal cache size from a speed point of view.
422 | # Minimum value: 0, maximum value: 9, default value: 0.
423 |
424 | LOOKUP_CACHE_SIZE = 0
425 |
426 | #---------------------------------------------------------------------------
427 | # Build related configuration options
428 | #---------------------------------------------------------------------------
429 |
430 | # If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in
431 | # documentation are documented, even if no documentation was available. Private
432 | # class members and static file members will be hidden unless the
433 | # EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
434 | # Note: This will also disable the warnings about undocumented members that are
435 | # normally produced when WARNINGS is set to YES.
436 | # The default value is: NO.
437 |
438 | EXTRACT_ALL = NO
439 |
440 | # If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
441 | # be included in the documentation.
442 | # The default value is: NO.
443 |
444 | EXTRACT_PRIVATE = NO
445 |
446 | # If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
447 | # scope will be included in the documentation.
448 | # The default value is: NO.
449 |
450 | EXTRACT_PACKAGE = NO
451 |
452 | # If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
453 | # included in the documentation.
454 | # The default value is: NO.
455 |
456 | EXTRACT_STATIC = NO
457 |
458 | # If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
459 | # locally in source files will be included in the documentation. If set to NO,
460 | # only classes defined in header files are included. Does not have any effect
461 | # for Java sources.
462 | # The default value is: YES.
463 |
464 | EXTRACT_LOCAL_CLASSES = YES
465 |
466 | # This flag is only useful for Objective-C code. If set to YES, local methods,
467 | # which are defined in the implementation section but not in the interface are
468 | # included in the documentation. If set to NO, only methods in the interface are
469 | # included.
470 | # The default value is: NO.
471 |
472 | EXTRACT_LOCAL_METHODS = NO
473 |
474 | # If this flag is set to YES, the members of anonymous namespaces will be
475 | # extracted and appear in the documentation as a namespace called
476 | # 'anonymous_namespace{file}', where file will be replaced with the base name of
477 | # the file that contains the anonymous namespace. By default anonymous namespace
478 | # are hidden.
479 | # The default value is: NO.
480 |
481 | EXTRACT_ANON_NSPACES = NO
482 |
483 | # If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
484 | # undocumented members inside documented classes or files. If set to NO these
485 | # members will be included in the various overviews, but no documentation
486 | # section is generated. This option has no effect if EXTRACT_ALL is enabled.
487 | # The default value is: NO.
488 |
489 | HIDE_UNDOC_MEMBERS = NO
490 |
491 | # If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
492 | # undocumented classes that are normally visible in the class hierarchy. If set
493 | # to NO, these classes will be included in the various overviews. This option
494 | # has no effect if EXTRACT_ALL is enabled.
495 | # The default value is: NO.
496 |
497 | HIDE_UNDOC_CLASSES = NO
498 |
499 | # If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
500 | # (class|struct|union) declarations. If set to NO, these declarations will be
501 | # included in the documentation.
502 | # The default value is: NO.
503 |
504 | HIDE_FRIEND_COMPOUNDS = NO
505 |
506 | # If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
507 | # documentation blocks found inside the body of a function. If set to NO, these
508 | # blocks will be appended to the function's detailed documentation block.
509 | # The default value is: NO.
510 |
511 | HIDE_IN_BODY_DOCS = NO
512 |
513 | # The INTERNAL_DOCS tag determines if documentation that is typed after a
514 | # \internal command is included. If the tag is set to NO then the documentation
515 | # will be excluded. Set it to YES to include the internal documentation.
516 | # The default value is: NO.
517 |
518 | INTERNAL_DOCS = NO
519 |
520 | # If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
521 | # names in lower-case letters. If set to YES, upper-case letters are also
522 | # allowed. This is useful if you have classes or files whose names only differ
523 | # in case and if your file system supports case sensitive file names. Windows
524 | # and Mac users are advised to set this option to NO.
525 | # The default value is: system dependent.
526 |
527 | CASE_SENSE_NAMES = YES
528 |
529 | # If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
530 | # their full class and namespace scopes in the documentation. If set to YES, the
531 | # scope will be hidden.
532 | # The default value is: NO.
533 |
534 | HIDE_SCOPE_NAMES = NO
535 |
536 | # If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will
537 | # append additional text to a page's title, such as Class Reference. If set to
538 | # YES the compound reference will be hidden.
539 | # The default value is: NO.
540 |
541 | HIDE_COMPOUND_REFERENCE= NO
542 |
543 | # If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
544 | # the files that are included by a file in the documentation of that file.
545 | # The default value is: YES.
546 |
547 | SHOW_INCLUDE_FILES = YES
548 |
549 | # If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
550 | # grouped member an include statement to the documentation, telling the reader
551 | # which file to include in order to use the member.
552 | # The default value is: NO.
553 |
554 | SHOW_GROUPED_MEMB_INC = NO
555 |
556 | # If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
557 | # files with double quotes in the documentation rather than with sharp brackets.
558 | # The default value is: NO.
559 |
560 | FORCE_LOCAL_INCLUDES = NO
561 |
562 | # If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
563 | # documentation for inline members.
564 | # The default value is: YES.
565 |
566 | INLINE_INFO = YES
567 |
568 | # If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
569 | # (detailed) documentation of file and class members alphabetically by member
570 | # name. If set to NO, the members will appear in declaration order.
571 | # The default value is: YES.
572 |
573 | SORT_MEMBER_DOCS = YES
574 |
575 | # If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
576 | # descriptions of file, namespace and class members alphabetically by member
577 | # name. If set to NO, the members will appear in declaration order. Note that
578 | # this will also influence the order of the classes in the class list.
579 | # The default value is: NO.
580 |
581 | SORT_BRIEF_DOCS = NO
582 |
583 | # If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
584 | # (brief and detailed) documentation of class members so that constructors and
585 | # destructors are listed first. If set to NO the constructors will appear in the
586 | # respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
587 | # Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
588 | # member documentation.
589 | # Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
590 | # detailed member documentation.
591 | # The default value is: NO.
592 |
593 | SORT_MEMBERS_CTORS_1ST = NO
594 |
595 | # If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
596 | # of group names into alphabetical order. If set to NO the group names will
597 | # appear in their defined order.
598 | # The default value is: NO.
599 |
600 | SORT_GROUP_NAMES = NO
601 |
602 | # If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
603 | # fully-qualified names, including namespaces. If set to NO, the class list will
604 | # be sorted only by class name, not including the namespace part.
605 | # Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
606 | # Note: This option applies only to the class list, not to the alphabetical
607 | # list.
608 | # The default value is: NO.
609 |
610 | SORT_BY_SCOPE_NAME = NO
611 |
612 | # If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
613 | # type resolution of all parameters of a function it will reject a match between
614 | # the prototype and the implementation of a member function even if there is
615 | # only one candidate or it is obvious which candidate to choose by doing a
616 | # simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
617 | # accept a match between prototype and implementation in such cases.
618 | # The default value is: NO.
619 |
620 | STRICT_PROTO_MATCHING = NO
621 |
622 | # The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo
623 | # list. This list is created by putting \todo commands in the documentation.
624 | # The default value is: YES.
625 |
626 | GENERATE_TODOLIST = YES
627 |
628 | # The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test
629 | # list. This list is created by putting \test commands in the documentation.
630 | # The default value is: YES.
631 |
632 | GENERATE_TESTLIST = YES
633 |
634 | # The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
635 | # list. This list is created by putting \bug commands in the documentation.
636 | # The default value is: YES.
637 |
638 | GENERATE_BUGLIST = YES
639 |
640 | # The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
641 | # the deprecated list. This list is created by putting \deprecated commands in
642 | # the documentation.
643 | # The default value is: YES.
644 |
645 | GENERATE_DEPRECATEDLIST= YES
646 |
647 | # The ENABLED_SECTIONS tag can be used to enable conditional documentation
648 | # sections, marked by \if ... \endif and \cond
649 | # ... \endcond blocks.
650 |
651 | ENABLED_SECTIONS =
652 |
653 | # The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
654 | # initial value of a variable or macro / define can have for it to appear in the
655 | # documentation. If the initializer consists of more lines than specified here
656 | # it will be hidden. Use a value of 0 to hide initializers completely. The
657 | # appearance of the value of individual variables and macros / defines can be
658 | # controlled using \showinitializer or \hideinitializer command in the
659 | # documentation regardless of this setting.
660 | # Minimum value: 0, maximum value: 10000, default value: 30.
661 |
662 | MAX_INITIALIZER_LINES = 30
663 |
664 | # Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
665 | # the bottom of the documentation of classes and structs. If set to YES, the
666 | # list will mention the files that were used to generate the documentation.
667 | # The default value is: YES.
668 |
669 | SHOW_USED_FILES = YES
670 |
671 | # Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
672 | # will remove the Files entry from the Quick Index and from the Folder Tree View
673 | # (if specified).
674 | # The default value is: YES.
675 |
676 | SHOW_FILES = YES
677 |
678 | # Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
679 | # page. This will remove the Namespaces entry from the Quick Index and from the
680 | # Folder Tree View (if specified).
681 | # The default value is: YES.
682 |
683 | SHOW_NAMESPACES = YES
684 |
685 | # The FILE_VERSION_FILTER tag can be used to specify a program or script that
686 | # doxygen should invoke to get the current version for each file (typically from
687 | # the version control system). Doxygen will invoke the program by executing (via
688 | # popen()) the command <command> <input-file>, where <command> is the value of
689 | # the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file provided
690 | # by doxygen. Whatever the program writes to standard output is used as the file
691 | # version. For an example see the documentation.
692 |
693 | FILE_VERSION_FILTER =
694 |
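# As a sketch (assuming git is installed and available on the search path),
# each file could be labelled with its latest commit hash:
#
# FILE_VERSION_FILTER = "git log -n 1 --pretty=format:%h --"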
695 | # The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
696 | # by doxygen. The layout file controls the global structure of the generated
697 | # output files in an output format independent way. To create the layout file
698 | # that represents doxygen's defaults, run doxygen with the -l option. You can
699 | # optionally specify a file name after the option, if omitted DoxygenLayout.xml
700 | # will be used as the name of the layout file.
701 | #
702 | # Note that if you run doxygen from a directory containing a file called
703 | # DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
704 | # tag is left empty.
705 |
706 | LAYOUT_FILE =
707 |
708 | # The CITE_BIB_FILES tag can be used to specify one or more bib files containing
709 | # the reference definitions. This must be a list of .bib files. The .bib
710 | # extension is automatically appended if omitted. This requires the bibtex tool
711 | # to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info.
712 | # For LaTeX the style of the bibliography can be controlled using
713 | # LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
714 | # search path. See also \cite for info how to create references.
715 |
716 | CITE_BIB_FILES =
717 |
718 | #---------------------------------------------------------------------------
719 | # Configuration options related to warning and progress messages
720 | #---------------------------------------------------------------------------
721 |
722 | # The QUIET tag can be used to turn on/off the messages that are generated to
723 | # standard output by doxygen. If QUIET is set to YES this implies that the
724 | # messages are off.
725 | # The default value is: NO.
726 |
727 | QUIET = NO
728 |
729 | # The WARNINGS tag can be used to turn on/off the warning messages that are
730 | # generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
731 | # this implies that the warnings are on.
732 | #
733 | # Tip: Turn warnings on while writing the documentation.
734 | # The default value is: YES.
735 |
736 | WARNINGS = YES
737 |
738 | # If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
739 | # warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
740 | # will automatically be disabled.
741 | # The default value is: YES.
742 |
743 | WARN_IF_UNDOCUMENTED = YES
744 |
745 | # If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
746 | # potential errors in the documentation, such as not documenting some parameters
747 | # in a documented function, or documenting parameters that don't exist or using
748 | # markup commands wrongly.
749 | # The default value is: YES.
750 |
751 | WARN_IF_DOC_ERROR = YES
752 |
753 | # This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
754 | # are documented, but have no documentation for their parameters or return
755 | # value. If set to NO, doxygen will only warn about wrong or incomplete
756 | # parameter documentation, but not about the absence of documentation.
757 | # The default value is: NO.
758 |
759 | WARN_NO_PARAMDOC = NO
760 |
761 | # If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when
762 | # a warning is encountered.
763 | # The default value is: NO.
764 |
765 | WARN_AS_ERROR = NO
766 |
767 | # The WARN_FORMAT tag determines the format of the warning messages that doxygen
768 | # can produce. The string should contain the $file, $line, and $text tags, which
769 | # will be replaced by the file and line number from which the warning originated
770 | # and the warning text. Optionally the format may contain $version, which will
771 | # be replaced by the version of the file (if it could be obtained via
772 | # FILE_VERSION_FILTER).
773 | # The default value is: $file:$line: $text.
774 |
775 | WARN_FORMAT = "$file:$line: $text"
776 |
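# For example, an IDE-friendly variant (a hypothetical sketch; adjust to the
# format your editor expects for clickable warnings) could be:
#
# WARN_FORMAT = "$file($line) : $text"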
777 | # The WARN_LOGFILE tag can be used to specify a file to which warning and error
778 | # messages should be written. If left blank the output is written to standard
779 | # error (stderr).
780 |
781 | WARN_LOGFILE =
782 |
783 | #---------------------------------------------------------------------------
784 | # Configuration options related to the input files
785 | #---------------------------------------------------------------------------
786 |
787 | # The INPUT tag is used to specify the files and/or directories that contain
788 | # documented source files. You may enter file names like myfile.cpp or
789 | # directories like /usr/src/myproject. Separate the files or directories with
790 | # spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
791 | # Note: If this tag is empty the current directory is searched.
792 |
793 | INPUT =
794 |
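# For this repository, a sketch restricting doxygen to the Python package and
# the top-level Readme (paths relative to where doxygen is run) could be:
#
# INPUT = continuum Readme.md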
795 | # This tag can be used to specify the character encoding of the source files
796 | # that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
797 | # libiconv (or the iconv built into libc) for the transcoding. See the libiconv
798 | # documentation (see: http://www.gnu.org/software/libiconv) for the list of
799 | # possible encodings.
800 | # The default value is: UTF-8.
801 |
802 | INPUT_ENCODING = UTF-8
803 |
804 | # If the value of the INPUT tag contains directories, you can use the
805 | # FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
806 | # *.h) to filter out the source-files in the directories.
807 | #
808 | # Note that for custom extensions or not directly supported extensions you also
809 | # need to set EXTENSION_MAPPING for the extension otherwise the files are not
810 | # read by doxygen.
811 | #
812 | # If left blank the following patterns are tested: *.c, *.cc, *.cxx, *.cpp,
813 | # *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
814 | # *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
815 | # *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f95, *.f03, *.f08,
816 | # *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf and *.qsf.
817 |
818 | FILE_PATTERNS = *.c \
819 | *.cc \
820 | *.cxx \
821 | *.cpp \
822 | *.c++ \
823 | *.java \
824 | *.ii \
825 | *.ixx \
826 | *.ipp \
827 | *.i++ \
828 | *.inl \
829 | *.idl \
830 | *.ddl \
831 | *.odl \
832 | *.h \
833 | *.hh \
834 | *.hxx \
835 | *.hpp \
836 | *.h++ \
837 | *.cs \
838 | *.d \
839 | *.php \
840 | *.php4 \
841 | *.php5 \
842 | *.phtml \
843 | *.inc \
844 | *.m \
845 | *.markdown \
846 | *.md \
847 | *.mm \
848 | *.dox \
849 | *.py \
850 | *.pyw \
851 | *.f90 \
852 | *.f95 \
853 | *.f03 \
854 | *.f08 \
855 | *.f \
856 | *.for \
857 | *.tcl \
858 | *.vhd \
859 | *.vhdl \
860 | *.ucf \
861 | *.qsf
862 |
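# Since this project is pure Python plus markdown documentation, a minimal
# sketch narrowing the pattern list could be:
#
# FILE_PATTERNS = *.py *.md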
863 | # The RECURSIVE tag can be used to specify whether or not subdirectories should
864 | # be searched for input files as well.
865 | # The default value is: NO.
866 |
867 | RECURSIVE = YES
868 |
869 | # The EXCLUDE tag can be used to specify files and/or directories that should be
870 | # excluded from the INPUT source files. This way you can easily exclude a
871 | # subdirectory from a directory tree whose root is specified with the INPUT tag.
872 | #
873 | # Note that relative paths are relative to the directory from which doxygen is
874 | # run.
875 |
876 | EXCLUDE =
877 |
878 | # The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
879 | # directories that are symbolic links (a Unix file system feature) are excluded
880 | # from the input.
881 | # The default value is: NO.
882 |
883 | EXCLUDE_SYMLINKS = NO
884 |
885 | # If the value of the INPUT tag contains directories, you can use the
886 | # EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
887 | # certain files from those directories.
888 | #
889 | # Note that the wildcards are matched against the file with absolute path, so to
890 | # exclude all test directories for example use the pattern */test/*
891 |
892 | EXCLUDE_PATTERNS =
893 |
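# As a sketch, the test suite and byte-compiled caches of this repository
# could be skipped with:
#
# EXCLUDE_PATTERNS = */tests/* */__pycache__/*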
894 | # The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
895 | # (namespaces, classes, functions, etc.) that should be excluded from the
896 | # output. The symbol name can be a fully qualified name, a word, or if the
897 | # wildcard * is used, a substring. Examples: ANamespace, AClass,
898 | # AClass::ANamespace, ANamespace::*Test
899 | #
900 | # Note that the wildcards are matched against the file with absolute path, so to
901 | # exclude all test directories use the pattern */test/*
902 |
903 | EXCLUDE_SYMBOLS =
904 |
905 | # The EXAMPLE_PATH tag can be used to specify one or more files or directories
906 | # that contain example code fragments that are included (see the \include
907 | # command).
908 |
909 | EXAMPLE_PATH =
910 |
911 | # If the value of the EXAMPLE_PATH tag contains directories, you can use the
912 | # EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
913 | # *.h) to filter out the source-files in the directories. If left blank all
914 | # files are included.
915 |
916 | EXAMPLE_PATTERNS = *
917 |
918 | # If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
919 | # searched for input files to be used with the \include or \dontinclude commands
920 | # irrespective of the value of the RECURSIVE tag.
921 | # The default value is: NO.
922 |
923 | EXAMPLE_RECURSIVE = NO
924 |
925 | # The IMAGE_PATH tag can be used to specify one or more files or directories
926 | # that contain images that are to be included in the documentation (see the
927 | # \image command).
928 |
929 | IMAGE_PATH =
930 |
931 | # The INPUT_FILTER tag can be used to specify a program that doxygen should
932 | # invoke to filter for each input file. Doxygen will invoke the filter program
933 | # by executing (via popen()) the command:
934 | #
935 | #   <filter> <input-file>
936 | #
937 | # where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
938 | # name of an input file. Doxygen will then use the output that the filter
939 | # program writes to standard output. If FILTER_PATTERNS is specified, this tag
940 | # will be ignored.
941 | #
942 | # Note that the filter must not add or remove lines; it is applied before the
943 | # code is scanned, but not when the output code is generated. If lines are added
944 | # or removed, the anchors will not be placed correctly.
945 | #
946 | # Note that for custom extensions or not directly supported extensions you also
947 | # need to set EXTENSION_MAPPING for the extension otherwise the files are not
948 | # properly processed by doxygen.
949 |
950 | INPUT_FILTER =
951 |
952 | # The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
953 | # basis. Doxygen will compare the file name with each pattern and apply the
954 | # filter if there is a match. The filters are a list of the form: pattern=filter
955 | # (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
956 | # filters are used. If the FILTER_PATTERNS tag is empty or if none of the
957 | # patterns match the file name, INPUT_FILTER is applied.
958 | #
959 | # Note that for custom extensions or not directly supported extensions you also
960 | # need to set EXTENSION_MAPPING for the extension otherwise the files are not
961 | # properly processed by doxygen.
962 |
963 | FILTER_PATTERNS =
964 |
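# As a sketch, Python files could be routed through a hypothetical filter
# script (py_filter.sh is an assumed name, not part of this repository):
#
# FILTER_PATTERNS = *.py=./py_filter.sh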
965 | # If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
966 | # INPUT_FILTER) will also be used to filter the input files that are used for
967 | # producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
968 | # The default value is: NO.
969 |
970 | FILTER_SOURCE_FILES = NO
971 |
972 | # The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
973 | # pattern. A pattern will override the setting for FILTER_PATTERNS (if any) and
974 | # it is also possible to disable source filtering for a specific pattern using
975 | # *.ext= (so without naming a filter).
976 | # This tag requires that the tag FILTER_SOURCE_FILES is set to YES.
977 |
978 | FILTER_SOURCE_PATTERNS =
979 |
980 | # If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
981 | # is part of the input, its contents will be placed on the main page
982 | # (index.html). This can be useful if you have a project on for instance GitHub
983 | # and want to reuse the introduction page also for the doxygen output.
984 |
985 | USE_MDFILE_AS_MAINPAGE =
986 |
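# Since this repository ships a Readme.md (and *.md is in FILE_PATTERNS), a
# sketch reusing it as the documentation's main page could be:
#
# USE_MDFILE_AS_MAINPAGE = Readme.md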
987 | #---------------------------------------------------------------------------
988 | # Configuration options related to source browsing
989 | #---------------------------------------------------------------------------
990 |
991 | # If the SOURCE_BROWSER tag is set to YES then a list of source files will be
992 | # generated. Documented entities will be cross-referenced with these sources.
993 | #
994 | # Note: To get rid of all source code in the generated output, make sure that
995 | # VERBATIM_HEADERS is also set to NO.
996 | # The default value is: NO.
997 |
998 | SOURCE_BROWSER = NO
999 |
1000 | # Setting the INLINE_SOURCES tag to YES will include the body of functions,
1001 | # classes and enums directly into the documentation.
1002 | # The default value is: NO.
1003 |
1004 | INLINE_SOURCES = NO
1005 |
1006 | # Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
1007 | # special comment blocks from generated source code fragments. Normal C, C++ and
1008 | # Fortran comments will always remain visible.
1009 | # The default value is: YES.
1010 |
1011 | STRIP_CODE_COMMENTS = YES
1012 |
1013 | # If the REFERENCED_BY_RELATION tag is set to YES then for each documented
1014 | # function all documented functions referencing it will be listed.
1015 | # The default value is: NO.
1016 |
1017 | REFERENCED_BY_RELATION = NO
1018 |
1019 | # If the REFERENCES_RELATION tag is set to YES then for each documented function
1020 | # all documented entities called/used by that function will be listed.
1021 | # The default value is: NO.
1022 |
1023 | REFERENCES_RELATION = NO
1024 |
1025 | # If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
1026 | # to YES then the hyperlinks from functions in REFERENCES_RELATION and
1027 | # REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
1028 | # link to the documentation.
1029 | # The default value is: YES.
1030 |
1031 | REFERENCES_LINK_SOURCE = YES
1032 |
1033 | # If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
1034 | # source code will show a tooltip with additional information such as prototype,
1035 | # brief description and links to the definition and documentation. Since this
1036 | # will make the HTML file larger and loading of large files a bit slower, you
1037 | # can opt to disable this feature.
1038 | # The default value is: YES.
1039 | # This tag requires that the tag SOURCE_BROWSER is set to YES.
1040 |
1041 | SOURCE_TOOLTIPS = YES
1042 |
1043 | # If the USE_HTAGS tag is set to YES then the references to source code will
1044 | # point to the HTML generated by the htags(1) tool instead of doxygen built-in
1045 | # source browser. The htags tool is part of GNU's global source tagging system
1046 | # (see http://www.gnu.org/software/global/global.html). You will need version
1047 | # 4.8.6 or higher.
1048 | #
1049 | # To use it do the following:
1050 | # - Install the latest version of global
1051 | # - Enable SOURCE_BROWSER and USE_HTAGS in the config file
1052 | # - Make sure the INPUT points to the root of the source tree
1053 | # - Run doxygen as normal
1054 | #
1055 | # Doxygen will invoke htags (and that will in turn invoke gtags), so these
1056 | # tools must be available from the command line (i.e. in the search path).
1057 | #
1058 | # The result: instead of the source browser generated by doxygen, the links to
1059 | # source code will now point to the output of htags.
1060 | # The default value is: NO.
1061 | # This tag requires that the tag SOURCE_BROWSER is set to YES.
1062 |
1063 | USE_HTAGS = NO
1064 |
1065 | # If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
1066 | # verbatim copy of the header file for each class for which an include is
1067 | # specified. Set to NO to disable this.
1068 | # See also: Section \class.
1069 | # The default value is: YES.
1070 |
1071 | VERBATIM_HEADERS = YES
1072 |
1073 | # If the CLANG_ASSISTED_PARSING tag is set to YES then doxygen will use the
1074 | # clang parser (see: http://clang.llvm.org/) for more accurate parsing at the
1075 | # cost of reduced performance. This can be particularly helpful with template
1076 | # rich C++ code for which doxygen's built-in parser lacks the necessary type
1077 | # information.
1078 | # Note: The availability of this option depends on whether or not doxygen was
1079 | # generated with the -Duse-libclang=ON option for CMake.
1080 | # The default value is: NO.
1081 |
1082 | CLANG_ASSISTED_PARSING = NO
1083 |
1084 | # If clang assisted parsing is enabled you can provide the compiler with command
1085 | # line options that you would normally use when invoking the compiler. Note that
1086 | # the include paths will already be set by doxygen for the files and directories
1087 | # specified with INPUT and INCLUDE_PATH.
1088 | # This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.
1089 |
1090 | CLANG_OPTIONS =
1091 |
1092 | #---------------------------------------------------------------------------
1093 | # Configuration options related to the alphabetical class index
1094 | #---------------------------------------------------------------------------
1095 |
1096 | # If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
1097 | # compounds will be generated. Enable this if the project contains a lot of
1098 | # classes, structs, unions or interfaces.
1099 | # The default value is: YES.
1100 |
1101 | ALPHABETICAL_INDEX = YES
1102 |
1103 | # The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
1104 | # which the alphabetical index list will be split.
1105 | # Minimum value: 1, maximum value: 20, default value: 5.
1106 | # This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
1107 |
1108 | COLS_IN_ALPHA_INDEX = 5
1109 |
1110 | # In case all classes in a project start with a common prefix, all classes will
1111 | # be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
1112 | # can be used to specify a prefix (or a list of prefixes) that should be ignored
1113 | # while generating the index headers.
1114 | # This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
1115 |
1116 | IGNORE_PREFIX =
1117 |
1118 | #---------------------------------------------------------------------------
1119 | # Configuration options related to the HTML output
1120 | #---------------------------------------------------------------------------
1121 |
1122 | # If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
1123 | # The default value is: YES.
1124 |
1125 | GENERATE_HTML = YES
1126 |
1127 | # The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
1128 | # relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
1129 | # it.
1130 | # The default directory is: html.
1131 | # This tag requires that the tag GENERATE_HTML is set to YES.
1132 |
1133 | HTML_OUTPUT = html
1134 |
1135 | # The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
1136 | # generated HTML page (for example: .htm, .php, .asp).
1137 | # The default value is: .html.
1138 | # This tag requires that the tag GENERATE_HTML is set to YES.
1139 |
1140 | HTML_FILE_EXTENSION = .html
1141 |
1142 | # The HTML_HEADER tag can be used to specify a user-defined HTML header file for
1143 | # each generated HTML page. If the tag is left blank doxygen will generate a
1144 | # standard header.
1145 | #
1146 | # For the generated HTML to be valid, the header file must include any scripts
1147 | # and style sheets that doxygen needs, which depend on the configuration options
1148 | # used (e.g. the setting GENERATE_TREEVIEW). It is highly recommended to start
1149 | # with a default header using
1150 | # doxygen -w html new_header.html new_footer.html new_stylesheet.css
1151 | # YourConfigFile
1152 | # and then modify the file new_header.html. See also section "Doxygen usage"
1153 | # for information on how to generate the default header that doxygen normally
1154 | # uses.
1155 | # Note: The header is subject to change so you typically have to regenerate the
1156 | # default header when upgrading to a newer version of doxygen. For a description
1157 | # of the possible markers and block names see the documentation.
1158 | # This tag requires that the tag GENERATE_HTML is set to YES.
1159 |
1160 | HTML_HEADER =
1161 |
1162 | # The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
1163 | # generated HTML page. If the tag is left blank doxygen will generate a standard
1164 | # footer. See HTML_HEADER for more information on how to generate a default
1165 | # footer and what special commands can be used inside the footer. See also
1166 | # section "Doxygen usage" for information on how to generate the default footer
1167 | # that doxygen normally uses.
1168 | # This tag requires that the tag GENERATE_HTML is set to YES.
1169 |
1170 | HTML_FOOTER =
1171 |
1172 | # The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
1173 | # sheet that is used by each HTML page. It can be used to fine-tune the look of
1174 | # the HTML output. If left blank doxygen will generate a default style sheet.
1175 | # See also section "Doxygen usage" for information on how to generate the style
1176 | # sheet that doxygen normally uses.
1177 | # Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
1178 | # it is more robust and this tag (HTML_STYLESHEET) will in the future become
1179 | # obsolete.
1180 | # This tag requires that the tag GENERATE_HTML is set to YES.
1181 |
1182 | HTML_STYLESHEET =
1183 |
1184 | # The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
1185 | # cascading style sheets that are included after the standard style sheets
1186 | # created by doxygen. Using this option one can overrule certain style aspects.
1187 | # This is preferred over using HTML_STYLESHEET since it does not replace the
1188 | # standard style sheet and is therefore more robust against future updates.
1189 | # Doxygen will copy the style sheet files to the output directory.
1190 | # Note: The order of the extra style sheet files is of importance (e.g. the last
1191 | # style sheet in the list overrules the setting of the previous ones in the
1192 | # list). For an example see the documentation.
1193 | # This tag requires that the tag GENERATE_HTML is set to YES.
1194 |
1195 | HTML_EXTRA_STYLESHEET =
1196 |
1197 | # The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
1198 | # other source files which should be copied to the HTML output directory. Note
1199 | # that these files will be copied to the base HTML output directory. Use the
1200 | # $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
1201 | # files. In the HTML_STYLESHEET file, use the file name only. Also note that the
1202 | # files will be copied as-is; there are no commands or markers available.
1203 | # This tag requires that the tag GENERATE_HTML is set to YES.
1204 |
1205 | HTML_EXTRA_FILES =
1206 |
1207 | # The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
1208 | # will adjust the colors in the style sheet and background images according to
1209 | # this color. Hue is specified as an angle on a colorwheel, see
1210 | # http://en.wikipedia.org/wiki/Hue for more information. For instance the value
1211 | # 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
1212 | # is purple, and 360 is red again.
1213 | # Minimum value: 0, maximum value: 359, default value: 220.
1214 | # This tag requires that the tag GENERATE_HTML is set to YES.
1215 |
1216 | HTML_COLORSTYLE_HUE = 220
1217 |
1218 | # The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
1219 | # in the HTML output. For a value of 0 the output will use grayscales only. A
1220 | # value of 255 will produce the most vivid colors.
1221 | # Minimum value: 0, maximum value: 255, default value: 100.
1222 | # This tag requires that the tag GENERATE_HTML is set to YES.
1223 |
1224 | HTML_COLORSTYLE_SAT = 100
1225 |
1226 | # The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
1227 | # luminance component of the colors in the HTML output. Values below 100
1228 | # gradually make the output lighter, whereas values above 100 make the output
1229 | # darker. The value divided by 100 is the actual gamma applied, so 80 represents
1230 | # a gamma of 0.8, the value 220 represents a gamma of 2.2, and 100 does not
1231 | # change the gamma.
1232 | # Minimum value: 40, maximum value: 240, default value: 80.
1233 | # This tag requires that the tag GENERATE_HTML is set to YES.
1234 |
1235 | HTML_COLORSTYLE_GAMMA = 80
1236 |
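# The three HTML_COLORSTYLE tags above work together. For example, a muted
# green theme could be configured with (values illustrative only):
# HTML_COLORSTYLE_HUE = 120
# HTML_COLORSTYLE_SAT = 80
# HTML_COLORSTYLE_GAMMA = 80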
1237 | # If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
1238 | # page will contain the date and time when the page was generated. Setting this
1239 | # to YES can help to show when doxygen was last run and thus if the
1240 | # documentation is up to date.
1241 | # The default value is: NO.
1242 | # This tag requires that the tag GENERATE_HTML is set to YES.
1243 |
1244 | HTML_TIMESTAMP = NO
1245 |
1246 | # If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
1247 | # documentation will contain sections that can be hidden and shown after the
1248 | # page has loaded.
1249 | # The default value is: NO.
1250 | # This tag requires that the tag GENERATE_HTML is set to YES.
1251 |
1252 | HTML_DYNAMIC_SECTIONS = NO
1253 |
1254 | # With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
1255 | # shown in the various tree structured indices initially; the user can expand
1256 | # and collapse entries dynamically later on. Doxygen will expand the tree to
1257 | # such a level that at most the specified number of entries are visible (unless
1258 | # a fully collapsed tree already exceeds this amount). So setting the number of
1259 | # entries 1 will produce a full collapsed tree by default. 0 is a special value
1260 | # representing an infinite number of entries and will result in a full expanded
1261 | # tree by default.
1262 | # Minimum value: 0, maximum value: 9999, default value: 100.
1263 | # This tag requires that the tag GENERATE_HTML is set to YES.
1264 |
1265 | HTML_INDEX_NUM_ENTRIES = 100
1266 |
1267 | # If the GENERATE_DOCSET tag is set to YES, additional index files will be
1268 | # generated that can be used as input for Apple's Xcode 3 integrated development
1269 | # environment (see: http://developer.apple.com/tools/xcode/), introduced with
1270 | # OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a
1271 | # Makefile in the HTML output directory. Running make will produce the docset in
1272 | # that directory and running make install will install the docset in
1273 | # ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
1274 | # startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html
1275 | # for more information.
1276 | # The default value is: NO.
1277 | # This tag requires that the tag GENERATE_HTML is set to YES.
1278 |
1279 | GENERATE_DOCSET = NO
1280 |
1281 | # This tag determines the name of the docset feed. A documentation feed provides
1282 | # an umbrella under which multiple documentation sets from a single provider
1283 | # (such as a company or product suite) can be grouped.
1284 | # The default value is: Doxygen generated docs.
1285 | # This tag requires that the tag GENERATE_DOCSET is set to YES.
1286 |
1287 | DOCSET_FEEDNAME = "Doxygen generated docs"
1288 |
1289 | # This tag specifies a string that should uniquely identify the documentation
1290 | # set bundle. This should be a reverse domain-name style string, e.g.
1291 | # com.mycompany.MyDocSet. Doxygen will append .docset to the name.
1292 | # The default value is: org.doxygen.Project.
1293 | # This tag requires that the tag GENERATE_DOCSET is set to YES.
1294 |
1295 | DOCSET_BUNDLE_ID = org.doxygen.Project
1296 |
1297 | # The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
1298 | # the documentation publisher. This should be a reverse domain-name style
1299 | # string, e.g. com.mycompany.MyDocSet.documentation.
1300 | # The default value is: org.doxygen.Publisher.
1301 | # This tag requires that the tag GENERATE_DOCSET is set to YES.
1302 |
1303 | DOCSET_PUBLISHER_ID = org.doxygen.Publisher
1304 |
1305 | # The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
1306 | # The default value is: Publisher.
1307 | # This tag requires that the tag GENERATE_DOCSET is set to YES.
1308 |
1309 | DOCSET_PUBLISHER_NAME = Publisher
1310 |
1311 | # If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
1312 | # additional HTML index files: index.hhp, index.hhc, and index.hhk. The
1313 | # index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
1314 | # (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on
1315 | # Windows.
1316 | #
1317 | # The HTML Help Workshop contains a compiler that can convert all HTML output
1318 | # generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
1319 | # files have been the standard Windows help format since Windows 98, replacing
1320 | # the old Windows help format (.hlp). Compressed
1321 | # HTML files also contain an index, a table of contents, and you can search for
1322 | # words in the documentation. The HTML workshop also contains a viewer for
1323 | # compressed HTML files.
1324 | # The default value is: NO.
1325 | # This tag requires that the tag GENERATE_HTML is set to YES.
1326 |
1327 | GENERATE_HTMLHELP = NO
1328 |
1329 | # The CHM_FILE tag can be used to specify the file name of the resulting .chm
1330 | # file. You can add a path in front of the file if the result should not be
1331 | # written to the html output directory.
1332 | # This tag requires that the tag GENERATE_HTMLHELP is set to YES.
1333 |
1334 | CHM_FILE =
1335 |
1336 | # The HHC_LOCATION tag can be used to specify the location (absolute path
1337 | # including file name) of the HTML help compiler (hhc.exe). If non-empty,
1338 | # doxygen will try to run the HTML help compiler on the generated index.hhp.
1339 | # The file has to be specified with full path.
1340 | # This tag requires that the tag GENERATE_HTMLHELP is set to YES.
1341 |
1342 | HHC_LOCATION =
1343 |
1344 | # The GENERATE_CHI flag controls whether a separate .chi index file is generated
1345 | # (YES) or the index is included in the master .chm file (NO).
1346 | # The default value is: NO.
1347 | # This tag requires that the tag GENERATE_HTMLHELP is set to YES.
1348 |
1349 | GENERATE_CHI = NO
1350 |
1351 | # The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
1352 | # and project file content.
1353 | # This tag requires that the tag GENERATE_HTMLHELP is set to YES.
1354 |
1355 | CHM_INDEX_ENCODING =
1356 |
1357 | # The BINARY_TOC flag controls whether a binary table of contents is generated
1358 | # (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
1359 | # enables the Previous and Next buttons.
1360 | # The default value is: NO.
1361 | # This tag requires that the tag GENERATE_HTMLHELP is set to YES.
1362 |
1363 | BINARY_TOC = NO
1364 |
1365 | # The TOC_EXPAND flag can be set to YES to add extra items for group members to
1366 | # the table of contents of the HTML help documentation and to the tree view.
1367 | # The default value is: NO.
1368 | # This tag requires that the tag GENERATE_HTMLHELP is set to YES.
1369 |
1370 | TOC_EXPAND = NO
1371 |
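# As an illustration, a minimal .chm build on Windows might combine the tags
# above as follows (the hhc.exe path assumes a default HTML Help Workshop
# install; the .chm file name is a placeholder):
# GENERATE_HTMLHELP = YES
# CHM_FILE = project.chm
# HHC_LOCATION = "C:/Program Files (x86)/HTML Help Workshop/hhc.exe"
# BINARY_TOC = YES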
1372 | # If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
1373 | # QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
1374 | # can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
1375 | # (.qch) of the generated HTML documentation.
1376 | # The default value is: NO.
1377 | # This tag requires that the tag GENERATE_HTML is set to YES.
1378 |
1379 | GENERATE_QHP = NO
1380 |
1381 | # If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
1382 | # the file name of the resulting .qch file. The path specified is relative to
1383 | # the HTML output folder.
1384 | # This tag requires that the tag GENERATE_QHP is set to YES.
1385 |
1386 | QCH_FILE =
1387 |
1388 | # The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
1389 | # Project output. For more information please see Qt Help Project / Namespace
1390 | # (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).
1391 | # The default value is: org.doxygen.Project.
1392 | # This tag requires that the tag GENERATE_QHP is set to YES.
1393 |
1394 | QHP_NAMESPACE = org.doxygen.Project
1395 |
1396 | # The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
1397 | # Help Project output. For more information please see Qt Help Project / Virtual
1398 | # Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-
1399 | # folders).
1400 | # The default value is: doc.
1401 | # This tag requires that the tag GENERATE_QHP is set to YES.
1402 |
1403 | QHP_VIRTUAL_FOLDER = doc
1404 |
1405 | # If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
1406 | # filter to add. For more information please see Qt Help Project / Custom
1407 | # Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-
1408 | # filters).
1409 | # This tag requires that the tag GENERATE_QHP is set to YES.
1410 |
1411 | QHP_CUST_FILTER_NAME =
1412 |
1413 | # The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
1414 | # custom filter to add. For more information please see Qt Help Project / Custom
1415 | # Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-
1416 | # filters).
1417 | # This tag requires that the tag GENERATE_QHP is set to YES.
1418 |
1419 | QHP_CUST_FILTER_ATTRS =
1420 |
1421 | # The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
1422 | # project's filter section matches. Qt Help Project / Filter Attributes (see:
1423 | # http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).
1424 | # This tag requires that the tag GENERATE_QHP is set to YES.
1425 |
1426 | QHP_SECT_FILTER_ATTRS =
1427 |
1428 | # The QHG_LOCATION tag can be used to specify the location of Qt's
1429 | # qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
1430 | # generated .qhp file.
1431 | # This tag requires that the tag GENERATE_QHP is set to YES.
1432 |
1433 | QHG_LOCATION =
1434 |
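# As a sketch, a minimal Qt Compressed Help setup might combine the tags above
# as follows (the namespace, .qch file name and qhelpgenerator path are
# placeholders, not project defaults):
# GENERATE_QHP = YES
# QCH_FILE = project.qch
# QHP_NAMESPACE = org.example.project
# QHP_VIRTUAL_FOLDER = doc
# QHG_LOCATION = /usr/bin/qhelpgenerator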
1435 | # If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
1436 | # generated that, together with the HTML files, form an Eclipse help plugin. To
1437 | # install this plugin and make it available under the help contents menu in
1438 | # Eclipse, the contents of the directory containing the HTML and XML files needs
1439 | # to be copied into the plugins directory of eclipse. The name of the directory
1440 | # within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
1441 | # After copying Eclipse needs to be restarted before the help appears.
1442 | # The default value is: NO.
1443 | # This tag requires that the tag GENERATE_HTML is set to YES.
1444 |
1445 | GENERATE_ECLIPSEHELP = NO
1446 |
1447 | # A unique identifier for the Eclipse help plugin. When installing the plugin
1448 | # the directory name containing the HTML and XML files should also have this
1449 | # name. Each documentation set should have its own identifier.
1450 | # The default value is: org.doxygen.Project.
1451 | # This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.
1452 |
1453 | ECLIPSE_DOC_ID = org.doxygen.Project
1454 |
1455 | # If you want full control over the layout of the generated HTML pages it might
1456 | # be necessary to disable the index and replace it with your own. The
1457 | # DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
1458 | # of each HTML page. A value of NO enables the index and the value YES disables
1459 | # it. Since the tabs in the index contain the same information as the navigation
1460 | # tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
1461 | # The default value is: NO.
1462 | # This tag requires that the tag GENERATE_HTML is set to YES.
1463 |
1464 | DISABLE_INDEX = NO
1465 |
1466 | # The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
1467 | # structure should be generated to display hierarchical information. If the tag
1468 | # value is set to YES, a side panel will be generated containing a tree-like
1469 | # index structure (just like the one that is generated for HTML Help). For this
1470 | # to work a browser that supports JavaScript, DHTML, CSS and frames is required
1471 | # (i.e. any modern browser). Windows users are probably better off using the
1472 | # HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can
1473 | # further fine-tune the look of the index. As an example, the default style
1474 | # sheet generated by doxygen has an example that shows how to put an image at
1475 | # the root of the tree instead of the PROJECT_NAME. Since the tree basically has
1476 | # the same information as the tab index, you could consider setting
1477 | # DISABLE_INDEX to YES when enabling this option.
1478 | # The default value is: NO.
1479 | # This tag requires that the tag GENERATE_HTML is set to YES.
1480 |
1481 | GENERATE_TREEVIEW = NO
1482 |
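# For instance, the navigation-tree-only layout suggested above (side panel
# instead of the tab index) would be configured as:
# GENERATE_TREEVIEW = YES
# DISABLE_INDEX = YES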
1483 | # The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
1484 | # doxygen will group on one line in the generated HTML documentation.
1485 | #
1486 | # Note that a value of 0 will completely suppress the enum values from appearing
1487 | # in the overview section.
1488 | # Minimum value: 0, maximum value: 20, default value: 4.
1489 | # This tag requires that the tag GENERATE_HTML is set to YES.
1490 |
1491 | ENUM_VALUES_PER_LINE = 4
1492 |
1493 | # If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
1494 | # to set the initial width (in pixels) of the frame in which the tree is shown.
1495 | # Minimum value: 0, maximum value: 1500, default value: 250.
1496 | # This tag requires that the tag GENERATE_HTML is set to YES.
1497 |
1498 | TREEVIEW_WIDTH = 250
1499 |
1500 | # If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
1501 | # external symbols imported via tag files in a separate window.
1502 | # The default value is: NO.
1503 | # This tag requires that the tag GENERATE_HTML is set to YES.
1504 |
1505 | EXT_LINKS_IN_WINDOW = NO
1506 |
1507 | # Use this tag to change the font size of LaTeX formulas included as images in
1508 | # the HTML documentation. When you change the font size after a successful
1509 | # doxygen run you need to manually remove any form_*.png images from the HTML
1510 | # output directory to force them to be regenerated.
1511 | # Minimum value: 8, maximum value: 50, default value: 10.
1512 | # This tag requires that the tag GENERATE_HTML is set to YES.
1513 |
1514 | FORMULA_FONTSIZE = 10
1515 |
1516 | # Use the FORMULA_TRANSPARENT tag to determine whether or not the images
1517 | # generated for formulas are transparent PNGs. Transparent PNGs are not
1518 | # supported properly for IE 6.0, but are supported on all modern browsers.
1519 | #
1520 | # Note that when changing this option you need to delete any form_*.png files in
1521 | # the HTML output directory before the changes take effect.
1522 | # The default value is: YES.
1523 | # This tag requires that the tag GENERATE_HTML is set to YES.
1524 |
1525 | FORMULA_TRANSPARENT = YES
1526 |
1527 | # Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
1528 | # http://www.mathjax.org) which uses client side Javascript for the rendering
1529 | # instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
1530 | # installed or if you want the formulas to look prettier in the HTML output. When
1531 | # enabled you may also need to install MathJax separately and configure the path
1532 | # to it using the MATHJAX_RELPATH option.
1533 | # The default value is: NO.
1534 | # This tag requires that the tag GENERATE_HTML is set to YES.
1535 |
1536 | USE_MATHJAX = NO
1537 |
1538 | # When MathJax is enabled you can set the default output format to be used for
1539 | # the MathJax output. See the MathJax site (see:
1540 | # http://docs.mathjax.org/en/latest/output.html) for more details.
1541 | # Possible values are: HTML-CSS (which is slower, but has the best
1542 | # compatibility), NativeMML (i.e. MathML) and SVG.
1543 | # The default value is: HTML-CSS.
1544 | # This tag requires that the tag USE_MATHJAX is set to YES.
1545 |
1546 | MATHJAX_FORMAT = HTML-CSS
1547 |
1548 | # When MathJax is enabled you need to specify the location relative to the HTML
1549 | # output directory using the MATHJAX_RELPATH option. The destination directory
1550 | # should contain the MathJax.js script. For instance, if the mathjax directory
1551 | # is located at the same level as the HTML output directory, then
1552 | # MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
1553 | # Content Delivery Network so you can quickly see the result without installing
1554 | # MathJax. However, it is strongly recommended to install a local copy of
1555 | # MathJax from http://www.mathjax.org before deployment.
1556 | # The default value is: http://cdn.mathjax.org/mathjax/latest.
1557 | # This tag requires that the tag USE_MATHJAX is set to YES.
1558 |
1559 | MATHJAX_RELPATH = http://cdn.mathjax.org/mathjax/latest
1560 |
1561 | # The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
1562 | # extension names that should be enabled during MathJax rendering. For example
1563 | # MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
1564 | # This tag requires that the tag USE_MATHJAX is set to YES.
1565 |
1566 | MATHJAX_EXTENSIONS =
1567 |
1568 | # The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
1569 | # of code that will be used on startup of the MathJax code. See the MathJax site
1570 | # (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
1571 | # example see the documentation.
1572 | # This tag requires that the tag USE_MATHJAX is set to YES.
1573 |
1574 | MATHJAX_CODEFILE =
1575 |
1576 | # When the SEARCHENGINE tag is enabled doxygen will generate a search box for
1577 | # the HTML output. The underlying search engine uses javascript and DHTML and
1578 | # should work on any modern browser. Note that when using HTML help
1579 | # (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
1580 | # there is already a search function so this one should typically be disabled.
1581 | # For large projects the javascript based search engine can be slow; in that
1582 | # case enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to
1583 | # search using the keyboard; to jump to the search box use <access key> + S
1584 | # (what the <access key> is depends on the OS and browser, but it is typically
1585 | # <CTRL>, <ALT>/<option>