├── .gitattributes ├── .github └── ISSUE_TEMPLATE │ ├── bug_report.md │ └── feature_request.md ├── .gitignore ├── CHANGELOG.md ├── CONTRIBUTING.md ├── LICENSE ├── MANIFEST.in ├── README.md ├── bin └── epynn ├── dpynn └── README.md ├── environ.yml ├── epynn ├── commons │ ├── io.py │ ├── library.py │ ├── logs.py │ ├── loss.py │ ├── maths.py │ ├── metrics.py │ ├── models.py │ ├── plot.py │ └── schedule.py ├── convolution │ ├── backward.py │ ├── forward.py │ ├── models.py │ └── parameters.py ├── dense │ ├── backward.py │ ├── forward.py │ ├── models.py │ └── parameters.py ├── dropout │ ├── backward.py │ ├── forward.py │ ├── models.py │ └── parameters.py ├── embedding │ ├── backward.py │ ├── dataset.py │ ├── forward.py │ ├── models.py │ └── parameters.py ├── flatten │ ├── backward.py │ ├── forward.py │ ├── models.py │ └── parameters.py ├── gru │ ├── backward.py │ ├── forward.py │ ├── models.py │ └── parameters.py ├── initialize.py ├── lstm │ ├── backward.py │ ├── forward.py │ ├── models.py │ └── parameters.py ├── network │ ├── backward.py │ ├── evaluate.py │ ├── forward.py │ ├── hyperparameters.py │ ├── initialize.py │ ├── models.py │ ├── report.py │ └── training.py ├── pooling │ ├── backward.py │ ├── forward.py │ ├── models.py │ └── parameters.py ├── rnn │ ├── backward.py │ ├── forward.py │ ├── models.py │ └── parameters.py ├── settings.py └── template │ ├── backward.py │ ├── forward.py │ ├── models.py │ └── parameters.py ├── epynnlive ├── author_music │ ├── prepare_dataset.ipynb │ ├── prepare_dataset.py │ ├── settings.py │ ├── train.ipynb │ └── train.py ├── captcha_mnist │ ├── prepare_dataset.ipynb │ ├── prepare_dataset.py │ ├── settings.py │ ├── train.ipynb │ └── train.py ├── dummy_boolean │ ├── prepare_dataset.ipynb │ ├── prepare_dataset.py │ ├── settings.py │ ├── train.ipynb │ └── train.py ├── dummy_image │ ├── prepare_dataset.ipynb │ ├── prepare_dataset.py │ ├── settings.py │ ├── train.ipynb │ └── train.py ├── dummy_string │ ├── prepare_dataset.ipynb │ ├── prepare_dataset.py │ ├── settings.py │ ├── train.ipynb │ └── train.py ├── dummy_time │ ├── prepare_dataset.ipynb │ ├── prepare_dataset.py │ ├── settings.py │ ├── train.ipynb │ └── train.py └── ptm_protein │ ├── prepare_dataset.ipynb │ ├── prepare_dataset.py │ ├── settings.py │ ├── train.ipynb │ └── train.py ├── pyproject.toml ├── requirements.txt ├── setup.cfg └── setup.py /.gitattributes: -------------------------------------------------------------------------------- 1 | *.ipynb linguist-documentation 2 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug report 3 | about: Create a report to help us improve 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the bug** 11 | A clear and concise description of what the bug is. 12 | 13 | **To Reproduce** 14 | Steps to reproduce the behavior: 15 | 1. Go to '...' 16 | 2. Click on '....' 17 | 3. Scroll down to '....' 18 | 4. See error 19 | 20 | **Expected behavior** 21 | A clear and concise description of what you expected to happen. 22 | 23 | **Screenshots** 24 | If applicable, add screenshots to help explain your problem. 25 | 26 | **Desktop (please complete the following information):** 27 | - OS: [e.g. iOS] 28 | - Browser [e.g. chrome, safari] 29 | - Version [e.g. 22] 30 | 31 | **Smartphone (please complete the following information):** 32 | - Device: [e.g. iPhone6] 33 | - OS: [e.g. 
iOS8.1] 34 | - Browser [e.g. stock browser, safari] 35 | - Version [e.g. 22] 36 | 37 | **Additional context** 38 | Add any other context about the problem here. 39 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature request 3 | about: Suggest an idea for this project 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Is your feature request related to a problem? Please describe.** 11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] 12 | 13 | **Describe the solution you'd like** 14 | A clear and concise description of what you want to happen. 15 | 16 | **Describe alternatives you've considered** 17 | A clear and concise description of any alternative solutions or features you've considered. 18 | 19 | **Additional context** 20 | Add any other context or screenshots about the feature request here. 21 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | webupdate.sh 2 | *.pyc 3 | *.gz 4 | *.wav 5 | *.pickle 6 | *data*dat 7 | *.ipynb_checkpoints 8 | *.ipynb_checkpoints* 9 | *ipynb_checkpoints 10 | *png 11 | *pdf 12 | *eps 13 | nncheck/ 14 | docs/ 15 | test/ 16 | epynn.sh 17 | *dat 18 | nnlibs/statistics.py 19 | nnlive/statistics.py 20 | *.ai 21 | dist/ 22 | build/ 23 | *.egg-info/ 24 | *.egg 25 | __pycache__/ 26 | __init__.py 27 | */__init__.py 28 | statistics.py 29 | *statistics.py 30 | pypi.sh 31 | setup.py 32 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # EpyNN - Changelog 2 | 3 | ## Past releases 4 | 5 | ### 1.1 - Initial release with improvements 6 | 7 | * Change **nnlive** to **epynnlive**. 8 | * Implement new metrics: Negative Predictive Value (NPV), Specificity, F-Score. 9 | 10 | ### 1.0 - Initial release 11 | 12 | * **epynn** contains core API sources, documented at https://epynn.net/. 13 | * **dpynn** contains developmental API sources. 14 | * **epynnlive** contains live examples in Python and Jupyter notebook formats. 15 | * https://epynn.net/ contains extensive documentation. 16 | 17 | **1.0.4** 18 | 19 | * Change positive and negative label assignment in ``epynnlive`` examples. 20 | 21 | **1.0.3** 22 | 23 | * Fix regression in ``epynn.network.embedding.models.Embedding``: remove ``XY_encode`` argument. 24 | 25 | **1.0.2** 26 | 27 | * Patch *KeyError* exception from ``epynn.network.hyperparameters.model_hyperparameters()`` that arose when assigning layer-specific hyperparameters. 28 | 29 | **1.0.1** 30 | 31 | * Change **nnlibs** to **epynn**. 32 | * Implement **dpynn**. 33 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing 2 | 3 | Anyone can contribute to EpyNN development. 4 | 5 | The **epynn** module contains: 6 | 7 | * Layers and commons exhaustively described at [epynn.net](https://epynn.net/). 8 | * Sources required for the base API to function. 9 | 10 | The **dpynn** module contains: 11 | 12 | * Other layers and commons. 13 | * Base API development. 14 | 15 | New layers should be implemented in the **dpynn** module.
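A minimal sketch of what such a layer could look like — the class and module names below are hypothetical, not part of this repository — is a subclass of `epynn.commons.models.Layer` implementing the same hooks used by the built-in layers (`compute_shapes`, `initialize_parameters`, `forward`, `backward`, `compute_gradients`, `update_parameters`); the `epynn/template` package listed in the tree above appears to ship a scaffold for this purpose.

```python
# Hypothetical skeleton for a new dpynn layer; "MyLayer" is a placeholder name.
from epynn.commons.models import Layer


class MyLayer(Layer):
    """Pass-through layer prototype following the EpyNN layer interface."""

    def __init__(self, se_hPars=None):
        super().__init__(se_hPars)
        self.trainable = False   # No parameters to update

    def compute_shapes(self, A):
        self.fs['X'] = A.shape   # Record input shape

    def initialize_parameters(self):
        pass                     # No trainable parameters

    def forward(self, A):
        A = self.fc['X'] = self.fc['A'] = A
        return A                 # Identity forward pass

    def backward(self, dX):
        dX = self.bc['dX'] = dX
        return dX                # Identity backward pass

    def compute_gradients(self):
        pass                     # Nothing to compute

    def update_parameters(self):
        pass                     # Nothing to update
```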
16 | 17 | Practical examples should be implemented in the **epynnlive** module. 18 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include requirements.txt 2 | -------------------------------------------------------------------------------- /bin/epynn: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # EpyNN/bin/epynn 3 | # Standard library imports 4 | import shutil 5 | import glob 6 | import sys 7 | import os 8 | 9 | # Local application/library specific imports 10 | import epynn 11 | import epynnlive 12 | 13 | 14 | origin_epynn = os.path.dirname(epynn.__file__) 15 | origin_epynnlive = os.path.dirname(epynnlive.__file__) 16 | 17 | target_root = os.path.join(os.getcwd(), 'EpyNN') 18 | 19 | target_epynn = os.path.join(target_root, 'epynn') 20 | target_epynnlive = os.path.join(target_root, 'epynnlive') 21 | 22 | if os.path.exists(target_root): 23 | print('Target path %s already exists.' % (target_root)) 24 | sys.exit() 25 | 26 | os.mkdir(target_root) 27 | 28 | shutil.copytree(origin_epynn, target_epynn) 29 | shutil.copytree(origin_epynnlive, target_epynnlive) 30 | 31 | print("Copy epynn from %s to %s" % (origin_epynn, target_epynn)) 32 | print("Copy epynnlive from %s to %s" % (origin_epynnlive, target_epynnlive)) 33 | -------------------------------------------------------------------------------- /dpynn/README.md: -------------------------------------------------------------------------------- 1 | # DpyNN - EpyNN development 2 | -------------------------------------------------------------------------------- /environ.yml: -------------------------------------------------------------------------------- 1 | name: EpyNN dependencies 2 | 3 | channels: 4 | - conda-forge 5 | - defaults 6 | dependencies: 7 | - python-wget==3.2 8 | - cycler==0.10.0 9 | - kiwisolver==1.3.2 10 | - matplotlib==3.4.3 11 | - numpy==1.21.2 12 | - Pillow==8.3.1 13 | - Pygments==2.10.0 14 | - pyparsing==2.4.7 15 | - python-dateutil==2.8.2 16 | - six==1.16.0 17 | - tabulate==0.8.9 18 | - termcolor==1.1.0 19 | - texttable==1.6.4 20 | - jupyter==1.0.0 21 | - nbconvert==5.4.1 22 | - scipy==1.6.3 23 | -------------------------------------------------------------------------------- /epynn/commons/io.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/commons/io.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def index_elements_auto(X_data): 7 | """Determine the number of distinct elements and generate dictionaries for one-hot encoding of features or labels. 8 | 9 | :param X_data: Dataset containing sample features or sample labels. 10 | :type X_data: :class:`numpy.ndarray` 11 | 12 | :return: One-hot encoding converter. 13 | :rtype: dict[str or int or float, int] 14 | 15 | :return: One-hot decoding converter. 16 | :rtype: dict[int, str or int or float] 17 | 18 | :return: Vocabulary size.
19 | :rtype: int 20 | """ 21 | X_data = X_data.flatten().tolist() # All elements in 1D list 22 | 23 | elements = sorted(list(set(X_data))) # Unique elements list 24 | elements_size = len(elements) # Number of elements 25 | 26 | # Converters to encode and decode sequences 27 | element_to_idx = {w: i for i, w in enumerate(elements)} 28 | idx_to_element = {i: w for w, i in element_to_idx.items()} 29 | 30 | return element_to_idx, idx_to_element, elements_size 31 | 32 | 33 | def scale_features(X_data): 34 | """Scale input array within [0, 1]. 35 | 36 | :param X_data: Raw data. 37 | :type X_data: :class:`numpy.ndarray` 38 | 39 | :return: Normalized data. 40 | :rtype: :class:`numpy.ndarray` 41 | """ 42 | X_data = (X_data-np.min(X_data)) / (np.max(X_data)-np.min(X_data)) 43 | 44 | return X_data 45 | 46 | 47 | def one_hot_encode(i, elements_size): 48 | """Generate one-hot encoding array. 49 | 50 | :param i: One-hot index for current word. 51 | :type i: int 52 | 53 | :param elements_size: Number of keys in the word to index encoder. 54 | :type elements_size: int 55 | 56 | :return: One-hot encoding array for current word. 57 | :rtype: :class:`numpy.ndarray` 58 | """ 59 | one_hot = np.zeros(elements_size) 60 | 61 | one_hot[i] = 1.0 # Set 1 at index assigned to word 62 | 63 | return one_hot 64 | 65 | 66 | def one_hot_encode_sequence(sequence, element_to_idx, elements_size): 67 | """One-hot encode sequence. 68 | 69 | :param sequence: Sequential data. 70 | :type sequence: list or :class:`numpy.ndarray` 71 | 72 | :param element_to_idx: Converter with word as key and index as value. 73 | :type element_to_idx: dict[str or int or float, int] 74 | 75 | :param elements_size: Number of keys in converter. 76 | :type elements_size: int 77 | 78 | :return: One-hot encoded sequence. 79 | :rtype: :class:`numpy.ndarray` 80 | """ 81 | encoding = np.array([one_hot_encode(element_to_idx[word], elements_size) for word in sequence]) 82 | 83 | return encoding 84 | 85 | 86 | def one_hot_decode_sequence(sequence, idx_to_element): 87 | """One-hot decode sequence. 88 | 89 | :param sequence: One-hot encoded sequence. 90 | :type sequence: list or :class:`numpy.ndarray` 91 | 92 | :param idx_to_element: Converter with index as key and word as value. 93 | :type idx_to_element: dict[int, str or int or float] 94 | 95 | :return: One-hot decoded sequence. 96 | :rtype: list[str or int or float] 97 | """ 98 | decoding = [idx_to_element[np.argmax(encoded)] for encoded in sequence] 99 | 100 | return decoding 101 | 102 | 103 | def encode_dataset(X_data, element_to_idx, elements_size): 104 | """One-hot encode a set of sequences. 105 | 106 | :param X_data: Contains sequences. 107 | :type X_data: :class:`numpy.ndarray` 108 | 109 | :param element_to_idx: Converter with word as key and index as value. 110 | :type element_to_idx: dict[str or int or float, int] 111 | 112 | :param elements_size: Number of keys in converter. 113 | :type elements_size: int 114 | 115 | :return: One-hot encoded dataset. 116 | :rtype: list[:class:`numpy.ndarray`] 117 | """ 118 | X_encoded = [] 119 | 120 | # Iterate over sequences 121 | for i in range(X_data.shape[0]): 122 | 123 | sequence = X_data[i] # Retrieve sequence 124 | 125 | encoded_sequence = one_hot_encode_sequence(sequence, element_to_idx, elements_size) 126 | 127 | X_encoded.append(encoded_sequence) # Append to dataset of encoded sequences 128 | 129 | return X_encoded 130 | 131 | 132 | def padding(X_data, padding, forward=True): 133 | """Image padding. 
134 | 135 | :param X_data: Array representing a set of images. 136 | :type X_data: :class:`numpy.ndarray` 137 | 138 | :param padding: Number of zeros to add in each side of the image. 139 | :type padding: int 140 | 141 | :param forward: Set to False to remove padding, defaults to `True`. 142 | :type forward: bool, optional 143 | """ 144 | if padding and forward: 145 | # Pad image 146 | shape = ((0, 0), (padding, padding), (padding, padding), (0, 0)) 147 | X_data = np.pad(X_data, shape, mode='constant', constant_values=(0, 0)) 148 | 149 | elif padding and not forward: 150 | # Remove padding 151 | X_data = X_data[:, padding:-padding, padding:-padding, :] 152 | 153 | return X_data 154 | -------------------------------------------------------------------------------- /epynn/commons/library.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/commons/library.py 2 | # Standard library imports 3 | import pathlib 4 | import pickle 5 | import shutil 6 | import glob 7 | import os 8 | 9 | # Related third party imports 10 | import numpy as np 11 | 12 | # Local application/library specific imports 13 | from epynn.commons.logs import process_logs 14 | 15 | 16 | def read_pickle(f): 17 | """Read pickle binary file. 18 | 19 | :param f: Filename. 20 | :type f: str 21 | 22 | :return: File content. 23 | :rtype: Object 24 | """ 25 | with open(f, 'rb') as msg: 26 | c = pickle.load(msg) 27 | 28 | return c 29 | 30 | 31 | def read_file(f): 32 | """Read text file. 33 | 34 | :param f: Filename. 35 | :type f: str 36 | 37 | :return: File content. 38 | :rtype: str 39 | """ 40 | with open(f, 'r') as msg: 41 | c = msg.read() 42 | 43 | return c 44 | 45 | 46 | def write_pickle(f, c): 47 | """Write pickle binary file. 48 | 49 | :param f: Filename. 50 | :type f: str 51 | 52 | :param c: Content to write. 53 | :type c: Object 54 | """ 55 | with open(f, 'wb') as msg: 56 | pickle.dump(c,msg) 57 | 58 | return None 59 | 60 | 61 | def configure_directory(clear=False): 62 | """Configure working directory. 63 | 64 | :param clear: Remove and make directories, defaults to False. 65 | :type clear: bool, optional 66 | """ 67 | # Set paths for defaults directories 68 | datasets_path = os.path.join(os.getcwd(), 'datasets') 69 | models_path = os.path.join(os.getcwd(), 'models') 70 | plots_path = os.path.join(os.getcwd(), 'plots') 71 | 72 | # Iterate over directory paths 73 | for path in [datasets_path, models_path, plots_path]: 74 | 75 | # If clear set to True, remove directories 76 | if clear and os.path.exists(path): 77 | shutil.rmtree(path) 78 | process_logs('Remove: '+path, level=2) 79 | 80 | # Create directory if not existing 81 | if not os.path.exists(path): 82 | os.mkdir(path) 83 | process_logs('Make: '+path, level=1) 84 | 85 | return None 86 | 87 | 88 | def write_model(model, model_path=None): 89 | """Write EpyNN model on disk. 90 | 91 | :param model: An instance of EpyNN network object. 92 | :type model: :class:`epynn.network.models.EpyNN` 93 | 94 | :param model_path: Where to write model, defaults to `None` which sets path in `models` directory. 
95 | :type model_path: str or NoneType, optional 96 | """ 97 | data = { 98 | 'model': model, 99 | } 100 | 101 | if model_path: 102 | # If model_path not set to None, pass on user-defined path 103 | pass 104 | else: 105 | # Set default location and name to write model on disk 106 | model_path = os.path.join(os.getcwd(), 'models', model.uname) 107 | model_path = model_path+'.pickle' 108 | 109 | # Write model with pickle 110 | write_pickle(model_path, data) 111 | process_logs('Make: ' + model_path, level=1) 112 | 113 | return None 114 | 115 | 116 | def read_model(model_path=None): 117 | """Read EpyNN model from disk. 118 | 119 | :param model_path: Where to read model from, defaults to `None` which reads the last saved model in `models` directory. 120 | :type model_path: str or NoneType, optional 121 | """ 122 | if model_path: 123 | # If model_path not set to None, pass on user-defined path 124 | pass 125 | else: 126 | # Set default location and name to read the model from 127 | models_path = os.path.join(os.getcwd(), 'models', '*') 128 | model_path = max(glob.glob(models_path), key=os.path.getctime) 129 | 130 | model = read_pickle(model_path)['model'] 131 | 132 | return model 133 | 134 | 135 | def settings_verification(): 136 | """Import default :class:`epynn.settings.se_hPars` if not present in working directory. 137 | """ 138 | # Absolute path of epynn directory 139 | init_path = str(pathlib.Path(__file__).parent.parent.absolute()) 140 | 141 | # Copy defaults settings in working directory if not present 142 | if not os.path.exists('settings.py'): 143 | se_default_path = os.path.join(init_path, 'settings.py') 144 | shutil.copy(se_default_path, 'settings.py') 145 | 146 | return None 147 | -------------------------------------------------------------------------------- /epynn/commons/loss.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/commons/loss.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | # To prevent from divide floatting points errors 7 | E_SAFE = 1e-16 8 | 9 | 10 | def loss_functions(key=None, output_activation=None): 11 | """Callback function for loss. 12 | 13 | :param key: Name of the loss function, defaults to `None` which returns all functions. 14 | :type key: str, optional 15 | 16 | :param output_activation: Name of the activation function for output layer. 17 | :type output_activation: str, optional 18 | 19 | :raises Exception: If key is `CCE` and output activation is different from softmax. 20 | 21 | :raises Exception: If key is either `CCE`, `BCE` or `MSLE` and output activation is tanh. 22 | 23 | :return: Loss functions or computed loss. 24 | :rtype: dict[str, function] or :class:`numpy.ndarray` 25 | """ 26 | loss = { 27 | 'MSE': MSE, 28 | 'MAE': MAE, 29 | 'MSLE': MSLE, 30 | 'CCE': CCE, 31 | 'BCE': BCE, 32 | } 33 | # If key provided, returns output of function 34 | if key: 35 | loss = loss[key] 36 | 37 | if key == 'CCE' and output_activation != 'softmax': 38 | raise Exception('CCE can not be used with %s activation, \ 39 | please use softmax instead.' % output_activation) 40 | 41 | if key in ['CCE', 'BCE', 'MSLE'] and output_activation == 'tanh': 42 | raise Exception('%s contains log() not be used with %s activation, \ 43 | please change.' % (key, output_activation)) 44 | 45 | return loss 46 | 47 | 48 | def MSE(Y, A, deriv=False): 49 | """Mean Squared Error. 50 | 51 | :param Y: True labels for a set of samples. 52 | :type Y: :class:`numpy.ndarray` 53 | 54 | :param A: Output of forward propagation. 
55 | :type A: :class:`numpy.ndarray` 56 | 57 | :param deriv: To compute the derivative. 58 | :type deriv: bool, optional 59 | 60 | :return: Loss. 61 | :rtype: :class:`numpy.ndarray` 62 | """ 63 | U = A.shape[1] # Number of output nodes 64 | 65 | if not deriv: 66 | loss = 1. / U * np.sum((Y - A)**2, axis=1) 67 | 68 | elif deriv: 69 | loss = -2. / U * (Y-A) 70 | 71 | return loss 72 | 73 | 74 | def MAE(Y, A, deriv=False): 75 | """Mean Absolute Error. 76 | 77 | :param Y: True labels for a set of samples. 78 | :type Y: :class:`numpy.ndarray` 79 | 80 | :param A: Output of forward propagation. 81 | :type A: :class:`numpy.ndarray` 82 | 83 | :param deriv: To compute the derivative. 84 | :type deriv: bool, optional 85 | 86 | :return: Loss. 87 | :rtype: :class:`numpy.ndarray` 88 | """ 89 | U = A.shape[1] # Number of output nodes 90 | 91 | if not deriv: 92 | loss = 1. / U * np.sum(np.abs(Y-A), axis=1) 93 | 94 | elif deriv: 95 | loss = -1. / U * (Y-A) / (np.abs(Y-A)+E_SAFE) 96 | 97 | return loss 98 | 99 | 100 | def MSLE(Y, A, deriv=False): 101 | """Mean Squared Logarythmic Error. 102 | 103 | :param Y: True labels for a set of samples. 104 | :type Y: :class:`numpy.ndarray` 105 | 106 | :param A: Output of forward propagation. 107 | :type A: :class:`numpy.ndarray` 108 | 109 | :param deriv: To compute the derivative. 110 | :type deriv: bool, optional 111 | 112 | :return: Loss. 113 | :rtype: :class:`numpy.ndarray` 114 | """ 115 | U = A.shape[1] # Number of output nodes 116 | 117 | if not deriv: 118 | loss = 1. / U * np.sum(np.square(np.log1p(Y) - np.log1p(A)), axis=1) 119 | 120 | elif deriv: 121 | loss = -2. / U * (np.log1p(Y) - np.log1p(A)) / (A + 1.) 122 | 123 | return loss 124 | 125 | 126 | def CCE(Y, A, deriv=False): 127 | """Categorical Cross-Entropy. 128 | 129 | :param Y: True labels for a set of samples. 130 | :type Y: :class:`numpy.ndarray` 131 | 132 | :param A: Output of forward propagation. 133 | :type A: :class:`numpy.ndarray` 134 | 135 | :param deriv: To compute the derivative. 136 | :type deriv: bool, optional 137 | 138 | :return: Loss. 139 | :rtype: :class:`numpy.ndarray` 140 | """ 141 | U = A.shape[1] # Number of output nodes 142 | 143 | if not deriv: 144 | loss = -1. * np.sum(Y * np.log(A+E_SAFE), axis=1) 145 | 146 | elif deriv: 147 | loss = -1. * (Y / A) 148 | 149 | return loss 150 | 151 | 152 | def BCE(Y, A, deriv=False): 153 | """Binary Cross-Entropy. 154 | 155 | :param Y: True labels for a set of samples. 156 | :type Y: :class:`numpy.ndarray` 157 | 158 | :param A: Output of forward propagation. 159 | :type A: :class:`numpy.ndarray` 160 | 161 | :param deriv: To compute the derivative. 162 | :type deriv: bool, optional 163 | 164 | :return: Loss. 165 | :rtype: :class:`numpy.ndarray` 166 | """ 167 | U = A.shape[1] # Number of output nodes 168 | 169 | if not deriv: 170 | loss = -1. / U * np.sum(Y*np.log(A+E_SAFE) + (1-Y)*np.log((1-A)+E_SAFE), axis=1) 171 | 172 | elif deriv: 173 | loss = 1. / U * (A-Y) / (A - A*A + E_SAFE) 174 | 175 | return loss 176 | -------------------------------------------------------------------------------- /epynn/commons/metrics.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/commons/metrics.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def metrics_functions(key=None): 7 | """Callback function for metrics. 8 | 9 | :param key: Name of the metrics function, defaults to `None` which returns all functions. 10 | :type key: str, optional 11 | 12 | :return: Metrics functions or computed metrics. 
13 | :rtype: dict[str: function] or :class:`numpy.ndarray` 14 | """ 15 | metrics = { 16 | 'accuracy': accuracy, 17 | 'recall': recall, 18 | 'precision': precision, 19 | 'fscore': fscore, 20 | 'specificity': specificity, 21 | 'NPV': NPV, 22 | } 23 | # If key provided, returns output of function 24 | if key: 25 | metrics = metrics[key] 26 | 27 | return metrics 28 | 29 | 30 | def accuracy(Y, A): 31 | """Accuracy of prediction. 32 | 33 | :param Y: True labels for a set of samples. 34 | :type Y: :class:`numpy.ndarray` 35 | 36 | :param A: Output of forward propagation. 37 | :type A: :class:`numpy.ndarray` 38 | 39 | :return: Accuracy for each sample. 40 | :rtype: :class:`numpy.ndarray` 41 | """ 42 | encoded = (Y.shape[1] > 1) 43 | 44 | P = np.argmax(A, axis=1) if encoded else np.around(A) 45 | y = np.argmax(Y, axis=1) if encoded else Y 46 | 47 | accuracy = (P == y) 48 | 49 | return accuracy 50 | 51 | 52 | def recall(Y, A): 53 | """Fraction of positive instances retrieved over the total. 54 | 55 | :param Y: True labels for a set of samples. 56 | :type Y: :class:`numpy.ndarray` 57 | 58 | :param A: Output of forward propagation. 59 | :type A: :class:`numpy.ndarray` 60 | 61 | :return: Recall. 62 | :rtype: :class:`numpy.ndarray` 63 | """ 64 | encoded = (Y.shape[1] > 1) # Check if one-hot encoding of labels 65 | 66 | P = np.argmax(A, axis=1) if encoded else np.around(A) 67 | y = np.argmax(Y, axis=1) if encoded else Y 68 | 69 | tp = np.sum(np.where((P==0) & (y==0), 1, 0)) # True positive 70 | fp = np.sum(np.where((P==0) & (y==1), 1, 0)) # False positive 71 | tn = np.sum(np.where((P==1) & (y==1), 1, 0)) # True negative 72 | fn = np.sum(np.where((P==1) & (y==0), 1, 0)) # False negative 73 | 74 | recall = (tp / (tp+fn)) 75 | 76 | return recall 77 | 78 | 79 | def precision(Y, A): 80 | """Fraction of positive samples among retrieved instances. 81 | 82 | :param Y: True labels for a set of samples. 83 | :type Y: :class:`numpy.ndarray` 84 | 85 | :param A: Output of forward propagation. 86 | :type A: :class:`numpy.ndarray` 87 | 88 | :return: Precision. 89 | :rtype: :class:`numpy.ndarray` 90 | """ 91 | encoded = (Y.shape[1] > 1) # Check if one-hot encoding of labels 92 | 93 | P = np.argmax(A, axis=1) if encoded else np.around(A) 94 | y = np.argmax(Y, axis=1) if encoded else Y 95 | 96 | tp = np.sum(np.where((P==0) & (y==0), 1, 0)) # True positive 97 | fp = np.sum(np.where((P==0) & (y==1), 1, 0)) # False positive 98 | tn = np.sum(np.where((P==1) & (y==1), 1, 0)) # True negative 99 | fn = np.sum(np.where((P==1) & (y==0), 1, 0)) # False negative 100 | 101 | precision = (tp / (tp+fp)) 102 | 103 | return precision 104 | 105 | 106 | def NPV(Y, A): 107 | """Fraction of negative samples among excluded instances. 108 | 109 | :param Y: True labels for a set of samples. 110 | :type Y: :class:`numpy.ndarray` 111 | 112 | :param A: Output of forward propagation. 113 | :type A: :class:`numpy.ndarray` 114 | 115 | :return: Negative Predictive Value. 
116 | :rtype: :class:`numpy.ndarray` 117 | """ 118 | encoded = (Y.shape[1] > 1) # Check if one-hot encoding of labels 119 | 120 | P = np.argmax(A, axis=1) if encoded else np.around(A) 121 | y = np.argmax(Y, axis=1) if encoded else Y 122 | 123 | tp = np.sum(np.where((P==0) & (y==0), 1, 0)) # True positive 124 | fp = np.sum(np.where((P==0) & (y==1), 1, 0)) # False positive 125 | tn = np.sum(np.where((P==1) & (y==1), 1, 0)) # True negative 126 | fn = np.sum(np.where((P==1) & (y==0), 1, 0)) # False negative 127 | 128 | npv = (tn / (tn+fn)) 129 | 130 | return npv 131 | 132 | 133 | def fscore(Y, A): 134 | """F-Score that is the harmonic mean of recall and precision. 135 | 136 | :param Y: True labels for a set of samples. 137 | :type Y: :class:`numpy.ndarray` 138 | 139 | :param A: Output of forward propagation. 140 | :type A: :class:`numpy.ndarray` 141 | 142 | :return: F-score. 143 | :rtype: :class:`numpy.ndarray` 144 | """ 145 | encoded = (Y.shape[1] > 1) # Check if one-hot encoding of labels 146 | 147 | P = np.argmax(A, axis=1) if encoded else np.around(A) 148 | y = np.argmax(Y, axis=1) if encoded else Y 149 | 150 | tp = np.sum(np.where((P==0) & (y==0), 1, 0)) # True positive 151 | fp = np.sum(np.where((P==0) & (y==1), 1, 0)) # False positive 152 | tn = np.sum(np.where((P==1) & (y==1), 1, 0)) # True negative 153 | fn = np.sum(np.where((P==1) & (y==0), 1, 0)) # False negative 154 | 155 | fscore = (tp / (tp + 0.5*(fp+fn))) 156 | 157 | return fscore 158 | 159 | 160 | def specificity(Y, A): 161 | """Fraction of negative samples among excluded instances. 162 | 163 | :param Y: True labels for a set of samples. 164 | :type Y: :class:`numpy.ndarray` 165 | 166 | :param A: Output of forward propagation. 167 | :type A: :class:`numpy.ndarray` 168 | 169 | :return: Specificity. 170 | :rtype: :class:`numpy.ndarray` 171 | """ 172 | encoded = (Y.shape[1] > 1) # Check if one-hot encoding of labels 173 | 174 | P = np.argmax(A, axis=1) if encoded else np.around(A) 175 | y = np.argmax(Y, axis=1) if encoded else Y 176 | 177 | tp = np.sum(np.where((P==0) & (y==0), 1, 0)) # True positive 178 | fp = np.sum(np.where((P==0) & (y==1), 1, 0)) # False positive 179 | tn = np.sum(np.where((P==1) & (y==1), 1, 0)) # True negative 180 | fn = np.sum(np.where((P==1) & (y==0), 1, 0)) # False negative 181 | 182 | specificity = (tn / (tn+fp)) 183 | 184 | return specificity 185 | -------------------------------------------------------------------------------- /epynn/commons/models.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/commons/models.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | class Layer: 7 | """ 8 | Definition of a parent **base layer** prototype. Any given **layer** prototype inherits from this class and is defined with respect to a specific architecture (Dense, RNN, Convolution...). The **parent** base layer defines instance attributes common to any **child** layer prototype. 9 | """ 10 | 11 | def __init__(self, se_hPars=None): 12 | """Initialize instance variable attributes. 13 | 14 | :ivar d: Layer **dimensions** containing scalar quantities such as the number of nodes, hidden units, filters or samples. 15 | :vartype d: dict[str, int] 16 | 17 | :ivar fs: Layer **forward shapes** for parameters, input, output and processing intermediates. 18 | :vartype fs: dict[str, tuple[int]] 19 | 20 | :ivar p: Layer weight and bias **parameters**. These are the trainable parameters. 
21 | :vartype p: dict[str, :class:`numpy.ndarray`] 22 | 23 | :ivar fc: Layer **forward cache** related for input, output and processing intermediates. 24 | :vartype fc: dict[str, :class:`numpy.ndarray`] 25 | 26 | :ivar bs: Layer **backward shapes** for gradients, input, output and processing intermediates. 27 | :vartype bs: dict[str, tuple[int]] 28 | 29 | :ivar g: Layer **gradients** used to update the trainable parameters. 30 | :vartype g: dict[str, :class:`numpy.ndarray`] 31 | 32 | :ivar bc: Layer **backward cache** for input, output and processing intermediates. 33 | :vartype bc: dict[str, :class:`numpy.ndarray`] 34 | 35 | :ivar o: Other scalar quantities that do not fit within the above-described attributes (rarely used). 36 | :vartype o: dict[str, int] 37 | 38 | :ivar activation: Conveniency attribute containing names of activation gates and corresponding activation functions. 39 | :vartype activation: dict[str, str] 40 | 41 | :ivar se_hPars: Layer **hyper-parameters**. 42 | :vartype se_hPars: dict[str, str or int] 43 | """ 44 | self.d = {} 45 | self.fs = {} 46 | self.p = {} 47 | self.fc = {} 48 | self.bs = {} 49 | self.g = {} 50 | self.bc = {} 51 | self.o = {} 52 | 53 | self.activation = {} 54 | 55 | self.se_hPars = se_hPars 56 | 57 | return None 58 | 59 | def update_shapes(self, cache, shapes): 60 | """Update shapes from cache. 61 | 62 | :param cache: Cache from forward or backward propagation. 63 | :type cache: dict[str, :class:`numpy.ndarray`] 64 | 65 | :param shapes: Corresponding shapes. 66 | :type shapes: dict[str, tuple[int]] 67 | """ 68 | shapes.update({k:v.shape for k,v in cache.items()}) 69 | 70 | return None 71 | 72 | 73 | class dataSet: 74 | """ 75 | Definition of a dataSet object prototype. 76 | 77 | :param X_data: Set of sample features. 78 | :type X_data: list[list[int or float]] 79 | 80 | :param Y_data: Set of sample label, defaults to None. 81 | :type Y_data: list[list[int] or int] or NoneType, optional 82 | 83 | :param name: Name of set, defaults to 'dummy'. 84 | :type name: str, optional 85 | """ 86 | 87 | def __init__(self, 88 | X_data, 89 | Y_data=None, 90 | name='dummy'): 91 | """Initialize dataSet object. 92 | 93 | :ivar active: `True` if X_data is not empty. 94 | :vartype active: bool 95 | 96 | :ivar X: Set of sample features. 97 | :vartype X: :class:`numpy.ndarray` 98 | 99 | :ivar Y: Set of sample label. 100 | :vartype Y: :class:`numpy.ndarray` 101 | 102 | :ivar y: Set of single-digit sample label. 103 | :vartype y: :class:`numpy.ndarray` 104 | 105 | :ivar b: Balance of labels in set. 106 | :vartype b: dict[int: int] 107 | 108 | :ivar ids: Sample identifiers. 109 | :vartype ids: list[int] 110 | 111 | :ivar A: Output of forward propagation. 112 | :vartype A: :class:`numpy.ndarray` 113 | 114 | :ivar P: Label predictions. 115 | :vartype P: :class:`numpy.ndarray` 116 | 117 | :ivar name: Name of dataset. 
118 | :vartype name: str 119 | """ 120 | self.name = name 121 | 122 | self.active = True if len(X_data) > 0 else False 123 | 124 | # Set of sample features 125 | if self.active: 126 | 127 | # Vectorize X_data in NumPy array 128 | self.X = np.array(X_data) 129 | 130 | # Set of sample label 131 | if (hasattr(Y_data, 'shape') or Y_data) and self.active: 132 | 133 | # Vectorize Y_data in NumPy array 134 | Y_data = np.array(Y_data) 135 | 136 | # Check if labels are one-hot encoded or not 137 | is_encoded = True if Y_data.ndim == 2 else False 138 | 139 | # If not encoded, reshape from (N_SAMPLES,) to (N_SAMPLES, 1) 140 | self.Y = Y_data if is_encoded else np.expand_dims(Y_data, 1) 141 | 142 | # Retrieve single-digit label w. r. t. one-hot encoding 143 | self.y = np.argmax(Y_data, axis=1) if is_encoded else Y_data 144 | 145 | # Map single-digit label with representation in dataset 146 | self.b = {label:np.count_nonzero(self.y == label) for label in self.y} 147 | 148 | # Set sample identifiers 149 | self.ids = np.array([i for i in range(len(X_data))]) 150 | 151 | # Initialize empty arrays 152 | self.A = np.array([]) 153 | self.P = np.array([]) 154 | 155 | return None 156 | -------------------------------------------------------------------------------- /epynn/commons/plot.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/commons/plot.py 2 | # Standard library imports 3 | import os 4 | 5 | # Related third party imports 6 | from matplotlib import pyplot as plt 7 | 8 | # Local application/library specific imports 9 | from epynn.commons.logs import process_logs 10 | 11 | 12 | def pyplot_metrics(model, path): 13 | """Plot metrics/costs from training with matplotlib. 14 | 15 | :param model: An instance of EpyNN network object. 16 | :type model: :class:`epynn.meta.models.EpyNN` 17 | 18 | :param path: Write matplotlib plot. 19 | :type path: bool or NoneType 20 | """ 21 | plt.figure() 22 | 23 | metrics = model.metrics # Contains metrics and cost 24 | 25 | # Iterate over metrics/costs 26 | for s in metrics.keys(): 27 | 28 | # Iterate over active datasets 29 | for k, dset in enumerate(model.embedding.dsets): 30 | 31 | dname = dset.name 32 | 33 | x = [x for x in range(len(metrics[s][k]))] # X range 34 | 35 | y = metrics[s][k] # Y values from metrics[idx_dataset][idx_metrics] 36 | 37 | plt.plot(x, y, label=dname + ' ' + s) 38 | 39 | plt.legend() 40 | 41 | plt.xlabel('Epoch') 42 | plt.ylabel('Value') 43 | 44 | plt.title(model.uname) 45 | 46 | plt.show() 47 | 48 | # If path sets to None, set to defaults - Note path can be set to False, which makes it print only 49 | if path == None: 50 | path = 'plots' 51 | plot_path = os.path.join(os.getcwd(), path, model.uname) + '.png' 52 | 53 | if path: 54 | plt.savefig(plot_path) 55 | process_logs('Make: ' + plot_path, level=1) 56 | 57 | plt.close() 58 | 59 | return None 60 | -------------------------------------------------------------------------------- /epynn/commons/schedule.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/commons/schedule.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def schedule_functions(schedule, hPars): 7 | """Roots hyperparameters to relevant scheduler. 8 | 9 | :param schedule: Schedule mode. 10 | :type schedule: str 11 | 12 | :param hPars: Contains hyperparameters. 13 | :type hPars: tuple[int or float] 14 | 15 | :return: Scheduled learning rate. 
16 | :rtype: list[float] 17 | """ 18 | schedulers = { 19 | 'exp_decay': exp_decay, 20 | 'lin_decay': lin_decay, 21 | 'steady': steady, 22 | } 23 | 24 | lrate = schedulers[schedule](hPars) 25 | 26 | return lrate 27 | 28 | 29 | def exp_decay(hPars): 30 | """Exponential decay schedule for learning rate. 31 | 32 | :param hPars: Contains hyperparameters. 33 | :type hPars: tuple[int or float] 34 | 35 | :return: Scheduled learning rate. 36 | :rtype: list[float] 37 | """ 38 | e, lr, n, k, d, epc = hPars 39 | 40 | lrate = [lr * (1-d) ** (x//epc) * np.exp(-(x%epc) * k) for x in range(e)] 41 | 42 | return lrate 43 | 44 | 45 | def lin_decay(hPars): 46 | """Linear decay schedule for learning rate. 47 | 48 | :param hPars: Contains hyperparameters. 49 | :type hPars: tuple[int or float] 50 | 51 | :return: Scheduled learning rate. 52 | :rtype: list[float] 53 | """ 54 | e, lr, n, k, d, epc = hPars 55 | 56 | lrate = [lr / (1 + k*100*x) for x in range(e)] 57 | 58 | return lrate 59 | 60 | 61 | def steady(hPars): 62 | """Steady schedule for learning rate. 63 | 64 | :param hPars: Contains hyperparameters. 65 | :type hPars: tuple[int or float] 66 | 67 | :return: Scheduled learning rate. 68 | :rtype: list[float] 69 | """ 70 | e, lr, n, k, d, epc = hPars 71 | 72 | lrate = [lr for x in range(e)] 73 | 74 | return lrate 75 | -------------------------------------------------------------------------------- /epynn/convolution/backward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/convolution/backward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_backward(layer, dX): 7 | """Backward cache initialization. 8 | 9 | :param layer: An instance of convolution layer. 10 | :type layer: :class:`epynn.convolution.models.Convolution` 11 | 12 | :param dX: Output of backward propagation from next layer. 13 | :type dX: :class:`numpy.ndarray` 14 | 15 | :return: Input of backward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | """ 18 | dA = layer.bc['dA'] = dX 19 | 20 | return dA 21 | 22 | 23 | def convolution_backward(layer, dX): 24 | """Backward propagate error gradients to previous layer. 25 | """ 26 | # (1) Initialize cache 27 | dA = initialize_backward(layer, dX) # (m, oh, ow, u) 28 | 29 | # (2) Gradient of the loss w.r.t. 
Z 30 | dZ = layer.bc['dZ'] = ( 31 | dA 32 | * layer.activate(layer.fc['Z'], deriv=True) 33 | ) # dL/dZ 34 | 35 | # (3) Restore kernel dimensions 36 | dZb = dZ 37 | dZb = np.expand_dims(dZb, axis=3) 38 | dZb = np.expand_dims(dZb, axis=3) 39 | dZb = np.expand_dims(dZb, axis=3) 40 | # (m, oh, ow, 1, u) -> 41 | # (m, oh, ow, 1, 1, u) -> 42 | # (m, oh, ow, 1, 1, 1, u) 43 | 44 | # (4) Initialize backward output dL/dX 45 | dX = np.zeros_like(layer.fc['X']) # (m, h, w, d) 46 | 47 | # Iterate over forward output height 48 | for oh in range(layer.d['oh']): 49 | 50 | hs = oh * layer.d['sh'] 51 | he = hs + layer.d['fh'] 52 | 53 | # Iterate over forward output width 54 | for ow in range(layer.d['ow']): 55 | 56 | ws = ow * layer.d['sw'] 57 | we = ws + layer.d['fw'] 58 | 59 | # (5hw) Gradient of the loss w.r.t Xb 60 | dXb = dZb[:, oh, ow, :] * layer.p['W'] 61 | # (m, oh, ow, 1, 1, 1, u) - dZb 62 | # (m, 1, 1, 1, u) - dZb[:, h, w, :] 63 | # (fh, fw, d, u) - W 64 | 65 | # (6hw) Sum over units axis 66 | dX[:, hs:he, ws:we, :] += np.sum(dXb, axis=4) 67 | # (m, fh, fw, d, u) - dXb 68 | # (m, fh, fw, d) - np.sum(dXb, axis=4) 69 | 70 | layer.bc['dX'] = dX 71 | 72 | return dX 73 | -------------------------------------------------------------------------------- /epynn/convolution/forward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/convolution/forward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | # Local application/library specific imports 6 | from epynn.commons.io import padding 7 | 8 | 9 | def initialize_forward(layer, A): 10 | """Forward cache initialization. 11 | 12 | :param layer: An instance of convolution layer. 13 | :type layer: :class:`epynn.convolution.models.Convolution` 14 | 15 | :param A: Output of forward propagation from previous layer. 16 | :type A: :class:`numpy.ndarray` 17 | 18 | :return: Input of forward propagation for current layer. 19 | :rtype: :class:`numpy.ndarray` 20 | 21 | :return: Input of forward propagation for current layer. 22 | :rtype: :class:`numpy.ndarray` 23 | 24 | :return: Input blocks of forward propagation for current layer. 25 | :rtype: :class:`numpy.ndarray` 26 | """ 27 | X = layer.fc['X'] = padding(A, layer.d['p']) 28 | 29 | return X 30 | 31 | 32 | def convolution_forward(layer, A): 33 | """Forward propagate signal to next layer. 34 | """ 35 | # (1) Initialize cache and pad image 36 | X = initialize_forward(layer, A) # (m, h, w, d) 37 | 38 | # (2) Slice input w.r.t. 
filter size (fh, fw) and strides (sh, sw) 39 | Xb = np.array([[X[ :, h:h + layer.d['fh'], w:w + layer.d['fw'], :] 40 | # Inner loop 41 | # (m, h, w, d) -> 42 | # (ow, m, h, fw, d) 43 | for w in range(layer.d['w'] - layer.d['fw'] + 1) 44 | if w % layer.d['sw'] == 0] 45 | # Outer loop 46 | # (ow, m, h, fw, d) -> 47 | # (oh, ow, m, fh, fw, d) 48 | for h in range(layer.d['h'] - layer.d['fh'] + 1) 49 | if h % layer.d['sh'] == 0]) 50 | 51 | # (3) Bring back m along axis 0 52 | Xb = np.moveaxis(Xb, 2, 0) 53 | # (oh, ow, m, fh, fw, d) -> 54 | # (m, oh, ow, fh, fw, d) 55 | 56 | # (4) Add dimension for filter units (u) on axis 6 57 | Xb = layer.fc['Xb'] = np.expand_dims(Xb, axis=6) 58 | # (m, oh, ow, fh, fw, d) -> 59 | # (m, oh, ow, fh, fw, d, 1) 60 | 61 | # (5.1) Linear activation Xb -> Zb 62 | Zb = Xb * layer.p['W'] 63 | # (m, oh, ow, fh, fw, d, 1) - Xb 64 | # (fh, fw, d, u) - W 65 | 66 | # (5.2) Sum block products 67 | Z = np.sum(Zb, axis=(5, 4, 3)) 68 | # (m, oh, ow, fh, fw, d, u) - Zb 69 | # (m, oh, ow, fh, fw, u) - np.sum(Zb, axis=(5)) 70 | # (m, oh, mw, fh, u) - np.sum(Zb, axis=(5, 4)) 71 | # (m, oh, ow, u) - np.sum(Zb, axis=(5, 4, 3)) 72 | 73 | # (5.3) Add bias to linear activation product 74 | Z = layer.fc['Z'] = Z + layer.p['b'] 75 | 76 | # (6) Non-linear activation 77 | A = layer.fc['A'] = layer.activate(Z) 78 | 79 | return A # To next layer 80 | -------------------------------------------------------------------------------- /epynn/convolution/models.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/convolution/models.py 2 | # Local application/library specific imports 3 | from epynn.commons.models import Layer 4 | from epynn.commons.maths import ( 5 | relu, 6 | xavier, 7 | activation_tune, 8 | ) 9 | from epynn.convolution.forward import convolution_forward 10 | from epynn.convolution.backward import convolution_backward 11 | from epynn.convolution.parameters import ( 12 | convolution_compute_shapes, 13 | convolution_initialize_parameters, 14 | convolution_compute_gradients, 15 | convolution_update_parameters, 16 | ) 17 | 18 | 19 | class Convolution(Layer): 20 | """ 21 | Definition of a convolution layer prototype. 22 | 23 | :param unit_filters: Number of unit filters in convolution layer, defaults to 1. 24 | :type unit_filters: int, optional 25 | 26 | :param filter_size: Height and width for convolution window, defaults to `(3, 3)`. 27 | :type filter_size: int or tuple[int], optional 28 | 29 | :param strides: Height and width to shift the convolution window by, defaults to `None` which equals `filter_size`. 30 | :type strides: int or tuple[int], optional 31 | 32 | :param padding: Number of zeros to pad each features plane with, defaults to 0. 33 | :type padding: int, optional 34 | 35 | :param activate: Non-linear activation of unit filters, defaults to `relu`. 36 | :type activate: function, optional 37 | 38 | :param initialization: Weight initialization function for convolution layer, defaults to `xavier`. 39 | :type initialization: function, optional 40 | 41 | :param use_bias: Whether the layer uses bias, defaults to `True`. 42 | :type use_bias: bool, optional 43 | 44 | :param se_hPars: Layer hyper-parameters, defaults to `None` and inherits from model. 
45 | :type se_hPars: dict[str, str or float] or NoneType, optional 46 | """ 47 | 48 | def __init__(self, 49 | unit_filters=1, 50 | filter_size=(3, 3), 51 | strides=None, 52 | padding=0, 53 | activate=relu, 54 | initialization=xavier, 55 | use_bias=True, 56 | se_hPars=None): 57 | """Initialize instance variable attributes. 58 | """ 59 | super().__init__() 60 | 61 | filter_size = filter_size if isinstance(filter_size, tuple) else (filter_size, filter_size) 62 | strides = strides if isinstance(strides, tuple) else filter_size 63 | 64 | self.d['u'] = unit_filters 65 | self.d['fh'], self.d['fw'] = filter_size 66 | self.d['sh'], self.d['sw'] = strides 67 | self.d['p'] = padding 68 | self.activate = activate 69 | self.initialization = initialization 70 | self.use_bias = use_bias 71 | 72 | self.activation = { 'activate': activate.__name__ } 73 | self.trainable = True 74 | 75 | return None 76 | 77 | def compute_shapes(self, A): 78 | """Wrapper for :func:`epynn.convolution.parameters.convolution_compute_shapes()`. 79 | 80 | :param A: Output of forward propagation from previous layer. 81 | :type A: :class:`numpy.ndarray` 82 | """ 83 | convolution_compute_shapes(self, A) 84 | 85 | return None 86 | 87 | def initialize_parameters(self): 88 | """Wrapper for :func:`epynn.convolution.parameters.convolution_initialize_parameters()`. 89 | """ 90 | convolution_initialize_parameters(self) 91 | 92 | return None 93 | 94 | def forward(self, A): 95 | """Wrapper for :func:`epynn.convolution.forward.convolution_forward()`. 96 | 97 | :param A: Output of forward propagation from *previous* layer. 98 | :type A: :class:`numpy.ndarray` 99 | 100 | :return: Output of forward propagation for **current** layer. 101 | :rtype: :class:`numpy.ndarray` 102 | """ 103 | activation_tune(self.se_hPars) 104 | A = convolution_forward(self, A) 105 | self.update_shapes(self.fc, self.fs) 106 | 107 | return A 108 | 109 | def backward(self, dX): 110 | """Wrapper for :func:`epynn.convolution.backward.convolution_backward()`. 111 | 112 | :param dX: Output of backward propagation from next layer. 113 | :type dX: :class:`numpy.ndarray` 114 | 115 | :return: Output of backward propagation for current layer. 116 | :rtype: :class:`numpy.ndarray` 117 | """ 118 | activation_tune(self.se_hPars) 119 | dX = convolution_backward(self, dX) 120 | self.update_shapes(self.bc, self.bs) 121 | 122 | return dX 123 | 124 | def compute_gradients(self): 125 | """Wrapper for :func:`epynn.convolution.parameters.convolution_compute_gradients()`. 126 | """ 127 | convolution_compute_gradients(self) 128 | 129 | return None 130 | 131 | def update_parameters(self): 132 | """Wrapper for :func:`epynn.convolution.parameters.convolution_update_parameters()`. 133 | """ 134 | if self.trainable: 135 | convolution_update_parameters(self) 136 | 137 | return None 138 | -------------------------------------------------------------------------------- /epynn/convolution/parameters.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/convolution/parameters.py 2 | # Standard library imports 3 | import math 4 | 5 | # Related third party imports 6 | import numpy as np 7 | 8 | 9 | def convolution_compute_shapes(layer, A): 10 | """Compute forward shapes and dimensions for layer. 
11 | """ 12 | X = A # Input of current layer 13 | 14 | layer.fs['X'] = X.shape # (m, h, w, d) 15 | 16 | layer.d['m'] = layer.fs['X'][0] # Number of samples (m) 17 | layer.d['h'] = layer.fs['X'][1] # Height of features (h) 18 | layer.d['w'] = layer.fs['X'][2] # Width of features (w) 19 | layer.d['d'] = layer.fs['X'][3] # Depth of features (d) 20 | 21 | # Output height (oh) and width (ow) 22 | layer.d['oh'] = math.floor((layer.d['h']-layer.d['fh']) / layer.d['sh']) + 1 23 | layer.d['ow'] = math.floor((layer.d['w']-layer.d['fw']) / layer.d['sw']) + 1 24 | 25 | # Shapes for trainable parameters 26 | # filter_height (fh), filter_width (fw), features_depth (d), unit_filters (u) 27 | layer.fs['W'] = (layer.d['fh'], layer.d['fw'], layer.d['d'], layer.d['u']) 28 | layer.fs['b'] = (layer.d['u'], ) 29 | 30 | return None 31 | 32 | 33 | def convolution_initialize_parameters(layer): 34 | """Initialize parameters for layer. 35 | """ 36 | # For linear activation of inputs (Z) 37 | layer.p['W'] = layer.initialization(layer.fs['W'], rng=layer.np_rng) 38 | layer.p['b'] = np.zeros(layer.fs['b']) # Z = X * W + b 39 | 40 | return None 41 | 42 | 43 | def convolution_compute_gradients(layer): 44 | """Compute gradients with respect to weight and bias for layer. 45 | """ 46 | # Gradients initialization with respect to parameters 47 | for parameter in layer.p.keys(): 48 | gradient = 'd' + parameter 49 | layer.g[gradient] = np.zeros_like(layer.p[parameter]) 50 | 51 | Xb = layer.fc['Xb'] # Input blocks of forward propagation 52 | dZ = layer.bc['dZ'] # Gradient of the loss with respect to Z 53 | 54 | # Expand dZ dimensions with respect to Xb 55 | dZb = dZ 56 | dZb = np.expand_dims(dZb, axis=3) # (m, oh, ow, 1, u) 57 | dZb = np.expand_dims(dZb, axis=3) # (m, oh, ow, 1, 1, u) 58 | dZb = np.expand_dims(dZb, axis=3) # (m, oh, ow, 1, 1, 1, u) 59 | 60 | # (1) Gradient of the loss with respect to W, b 61 | dW = layer.g['dW'] = np.sum(dZb * Xb, axis=(2, 1, 0)) # (1.1) dL/dW 62 | db = layer.g['db'] = np.sum(dZb, axis=(2, 1, 0)) # (1.2) dL/db 63 | 64 | layer.g['db'] = db.squeeze() if layer.use_bias else 0. 65 | 66 | return None 67 | 68 | 69 | def convolution_update_parameters(layer): 70 | """Update parameters for layer. 71 | """ 72 | for gradient in layer.g.keys(): 73 | parameter = gradient[1:] 74 | # Update is driven by learning rate and gradients 75 | layer.p[parameter] -= layer.lrate[layer.e] * layer.g[gradient] 76 | 77 | return None 78 | -------------------------------------------------------------------------------- /epynn/dense/backward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/dense/backward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | # Local application/library specific imports 6 | from epynn.commons.maths import hadamard 7 | 8 | 9 | def initialize_backward(layer, dX): 10 | """Backward cache initialization. 11 | 12 | :param layer: An instance of dense layer. 13 | :type layer: :class:`epynn.dense.models.Dense` 14 | 15 | :param dX: Output of backward propagation from next layer. 16 | :type dX: :class:`numpy.ndarray` 17 | 18 | :return: Input of backward propagation for current layer. 19 | :rtype: :class:`numpy.ndarray` 20 | """ 21 | dA = layer.bc['dA'] = dX 22 | 23 | return dA 24 | 25 | 26 | def dense_backward(layer, dX): 27 | """Backward propagate error gradients to previous layer. 
28 | """ 29 | # (1) Initialize cache 30 | dA = initialize_backward(layer, dX) 31 | 32 | # (2) Gradient of the loss with respect to Z 33 | dZ = layer.bc['dZ'] = hadamard( 34 | dA, 35 | layer.activate(layer.fc['Z'], deriv=True) 36 | ) # dL/dZ 37 | 38 | # (3) Gradient of the loss with respect to X 39 | dX = layer.bc['dX'] = np.dot(dZ, layer.p['W'].T) # dL/dX 40 | 41 | return dX # To previous layer 42 | -------------------------------------------------------------------------------- /epynn/dense/forward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/dense/forward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_forward(layer, A): 7 | """Forward cache initialization. 8 | 9 | :param layer: An instance of dense layer. 10 | :type layer: :class:`epynn.dense.models.Dense` 11 | 12 | :param A: Output of forward propagation from previous layer. 13 | :type A: :class:`numpy.ndarray` 14 | 15 | :return: Input of forward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | """ 18 | X = layer.fc['X'] = A 19 | 20 | return X 21 | 22 | 23 | def dense_forward(layer, A): 24 | """Forward propagate signal to next layer. 25 | """ 26 | # (1) Initialize cache 27 | X = initialize_forward(layer, A) 28 | 29 | # (2) Linear activation X -> Z 30 | Z = layer.fc['Z'] = ( 31 | np.dot(X, layer.p['W']) 32 | + layer.p['b'] 33 | ) # This is the weighted sum 34 | 35 | # (3) Non-linear activation Z -> A 36 | A = layer.fc['A'] = layer.activate(Z) 37 | 38 | return A # To next layer 39 | -------------------------------------------------------------------------------- /epynn/dense/models.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/dense/models.py 2 | # Local application/library specific imports 3 | from epynn.commons.models import Layer 4 | from epynn.commons.maths import ( 5 | sigmoid, 6 | xavier, 7 | activation_tune, 8 | ) 9 | from epynn.dense.forward import dense_forward 10 | from epynn.dense.backward import dense_backward 11 | from epynn.dense.parameters import ( 12 | dense_compute_shapes, 13 | dense_initialize_parameters, 14 | dense_compute_gradients, 15 | dense_update_parameters 16 | ) 17 | 18 | 19 | class Dense(Layer): 20 | """ 21 | Definition of a dense layer prototype. 22 | 23 | :param units: Number of units in dense layer, defaults to 1. 24 | :type units: int, optional 25 | 26 | :param activate: Non-linear activation of units, defaults to `sigmoid`. 27 | :type activate: function, optional 28 | 29 | :param initialization: Weight initialization function for dense layer, defaults to `xavier`. 30 | :type initialization: function, optional 31 | 32 | :param se_hPars: Layer hyper-parameters, defaults to `None` and inherits from model. 33 | :type se_hPars: dict[str, str or float] or NoneType, optional 34 | """ 35 | 36 | def __init__(self, 37 | units=1, 38 | activate=sigmoid, 39 | initialization=xavier, 40 | se_hPars=None): 41 | """Initialize instance variable attributes. 42 | """ 43 | super().__init__(se_hPars) 44 | 45 | self.d['u'] = units 46 | self.activate = activate 47 | self.initialization = initialization 48 | 49 | self.activation = { 'activate': self.activate.__name__ } 50 | self.trainable = True 51 | 52 | return None 53 | 54 | def compute_shapes(self, A): 55 | """Wrapper for :func:`epynn.dense.parameters.dense_compute_shapes()`. 56 | 57 | :param A: Output of forward propagation from previous layer. 
58 | :type A: :class:`numpy.ndarray` 59 | """ 60 | dense_compute_shapes(self, A) 61 | 62 | return None 63 | 64 | def initialize_parameters(self): 65 | """Wrapper for :func:`epynn.dense.parameters.dense_initialize_parameters()`. 66 | """ 67 | dense_initialize_parameters(self) 68 | 69 | return None 70 | 71 | def forward(self, A): 72 | """Wrapper for :func:`epynn.dense.forward.dense_forward()`. 73 | 74 | :param A: Output of forward propagation from previous layer. 75 | :type A: :class:`numpy.ndarray` 76 | 77 | :return: Output of forward propagation for current layer. 78 | :rtype: :class:`numpy.ndarray` 79 | """ 80 | activation_tune(self.se_hPars) 81 | A = dense_forward(self, A) 82 | self.update_shapes(self.fc, self.fs) 83 | 84 | return A 85 | 86 | def backward(self, dX): 87 | """Wrapper for :func:`epynn.dense.backward.dense_backward()`. 88 | 89 | :param dX: Output of backward propagation from next layer. 90 | :type dX: :class:`numpy.ndarray` 91 | 92 | :return: Output of backward propagation for current layer. 93 | :rtype: :class:`numpy.ndarray` 94 | """ 95 | activation_tune(self.se_hPars) 96 | dX = dense_backward(self, dX) 97 | self.update_shapes(self.bc, self.bs) 98 | 99 | return dX 100 | 101 | def compute_gradients(self): 102 | """Wrapper for :func:`epynn.dense.parameters.dense_compute_gradients()`. 103 | """ 104 | dense_compute_gradients(self) 105 | 106 | return None 107 | 108 | def update_parameters(self): 109 | """Wrapper for :func:`epynn.dense.parameters.dense_update_parameters()`. 110 | """ 111 | if self.trainable: 112 | dense_update_parameters(self) 113 | 114 | return None 115 | -------------------------------------------------------------------------------- /epynn/dense/parameters.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/dense/parameters.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def dense_compute_shapes(layer, A): 7 | """Compute forward shapes and dimensions from input for layer. 8 | """ 9 | X = A # Input of current layer 10 | 11 | layer.fs['X'] = X.shape # (m, n) 12 | 13 | layer.d['m'] = layer.fs['X'][0] # Number of samples (m) 14 | layer.d['n'] = layer.fs['X'][1] # Number of features (n) 15 | 16 | # Shapes for trainable parameters Units (u) 17 | layer.fs['W'] = (layer.d['n'], layer.d['u']) # (n, u) 18 | layer.fs['b'] = (1, layer.d['u']) # (1, u) 19 | 20 | return None 21 | 22 | 23 | def dense_initialize_parameters(layer): 24 | """Initialize trainable parameters from shapes for layer. 25 | """ 26 | # For linear activation of inputs (Z) 27 | layer.p['W'] = layer.initialization(layer.fs['W'], rng=layer.np_rng) 28 | layer.p['b'] = np.zeros(layer.fs['b']) # Z = dot(X, W) + b 29 | 30 | return None 31 | 32 | 33 | def dense_compute_gradients(layer): 34 | """Compute gradients with respect to weight and bias for layer. 35 | """ 36 | X = layer.fc['X'] # Input of forward propagation 37 | dZ = layer.bc['dZ'] # Gradient of the loss with respect to Z 38 | 39 | # (1) Gradient of the loss with respect to W, b 40 | dW = layer.g['dW'] = np.dot(X.T, dZ) # (1.1) dL/dW 41 | db = layer.g['db'] = np.sum(dZ, axis=0) # (1.2) dL/db 42 | 43 | return None 44 | 45 | 46 | def dense_update_parameters(layer): 47 | """Update parameters from gradients for layer. 
48 | """ 49 | for gradient in layer.g.keys(): 50 | parameter = gradient[1:] 51 | # Update is driven by learning rate and gradients 52 | layer.p[parameter] -= layer.lrate[layer.e] * layer.g[gradient] 53 | 54 | return None 55 | -------------------------------------------------------------------------------- /epynn/dropout/backward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/dropout/backward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_backward(layer, dX): 7 | """Backward cache initialization. 8 | 9 | :param layer: An instance of dropout layer. 10 | :type layer: :class:`epynn.dropout.models.Dropout` 11 | 12 | :param dX: Output of backward propagation from next layer. 13 | :type dX: :class:`numpy.ndarray` 14 | 15 | :return: Input of backward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | """ 18 | dA = layer.bc['dA'] = dX 19 | 20 | return dA 21 | 22 | 23 | def dropout_backward(layer, dX): 24 | """Backward propagate error gradients to previous layer. 25 | """ 26 | # (1) Initialize cache 27 | dA = initialize_backward(layer, dX) 28 | 29 | # (2) Apply the dropout mask used in the forward pass 30 | dX = dA * layer.fc['D'] 31 | 32 | # (3) Scale up gradients 33 | dX /= (1 - layer.d['d']) 34 | 35 | layer.bc['dX'] = dX 36 | 37 | return dX # To previous layer 38 | -------------------------------------------------------------------------------- /epynn/dropout/forward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/dropout/forward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_forward(layer, A): 7 | """Forward cache initialization. 8 | 9 | :param layer: An instance of dropout layer. 10 | :type layer: :class:`epynn.dropout.models.Dropout` 11 | 12 | :param A: Output of forward propagation from previous layer. 13 | :type A: :class:`numpy.ndarray` 14 | 15 | :return: Input of forward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | """ 18 | X = layer.fc['X'] = A 19 | 20 | return X 21 | 22 | 23 | def dropout_forward(layer, A): 24 | """Forward propagate signal to next layer. 25 | """ 26 | # (1) Initialize cache 27 | X = initialize_forward(layer, A) 28 | 29 | # (2) Generate dropout mask 30 | D = layer.np_rng.uniform(0, 1, layer.fs['D']) 31 | 32 | # (3) Apply a step function with respect to drop_prob (k) 33 | D = layer.fc['D'] = (D > layer.d['d']) 34 | 35 | # (4) Drop data points 36 | A = X * D 37 | 38 | # (5) Scale up signal 39 | A /= (1 - layer.d['d']) 40 | 41 | return A # To next layer 42 | -------------------------------------------------------------------------------- /epynn/dropout/models.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/dropout/models.py 2 | # Local application/library specific imports 3 | from epynn.commons.models import Layer 4 | from epynn.dropout.forward import dropout_forward 5 | from epynn.dropout.backward import dropout_backward 6 | from epynn.dropout.parameters import ( 7 | dropout_compute_shapes, 8 | dropout_initialize_parameters, 9 | dropout_compute_gradients, 10 | dropout_update_parameters 11 | ) 12 | 13 | 14 | class Dropout(Layer): 15 | """ 16 | Definition of a dropout layer prototype. 17 | 18 | :param drop_prob: Probability to drop one data point from previous layer to next layer, defaults to 0.5. 
19 | :type drop_prob: float, optional 20 | 21 | :param axis: Compute and apply dropout mask along defined axis, defaults to all axis. 22 | :type axis: int or tuple[int], optional 23 | """ 24 | 25 | def __init__(self, drop_prob=0.5, axis=()): 26 | """Initialize instance variable attributes. 27 | """ 28 | super().__init__() 29 | 30 | axis = axis if isinstance(axis, tuple) else (axis,) 31 | 32 | self.d['d'] = drop_prob 33 | self.d['a'] = axis 34 | 35 | self.trainable = False 36 | 37 | return None 38 | 39 | def compute_shapes(self, A): 40 | """Wrapper for :func:`epynn.dropout.parameters.dropout_compute_shapes()`. 41 | 42 | :param A: Output of forward propagation from previous layer. 43 | :type A: :class:`numpy.ndarray` 44 | """ 45 | dropout_compute_shapes(self, A) 46 | 47 | return None 48 | 49 | def initialize_parameters(self): 50 | """Wrapper for :func:`epynn.dropout.parameters.dropout_initialize_parameters()`. 51 | """ 52 | dropout_initialize_parameters(self) 53 | 54 | return None 55 | 56 | def forward(self, A): 57 | """Wrapper for :func:`epynn.dropout.forward.dropout_forward()`. 58 | 59 | :param A: Output of forward propagation from previous layer. 60 | :type A: :class:`numpy.ndarray` 61 | 62 | :return: Output of forward propagation for current layer. 63 | :rtype: :class:`numpy.ndarray` 64 | """ 65 | self.compute_shapes(A) 66 | A = self.fc['A'] = dropout_forward(self, A) 67 | self.update_shapes(self.fc, self.fs) 68 | 69 | return A 70 | 71 | def backward(self, dX): 72 | """Wrapper for :func:`epynn.dropout.backward.dropout_backward()`. 73 | 74 | :param dX: Output of backward propagation from next layer. 75 | :type dX: :class:`numpy.ndarray` 76 | 77 | :return: Output of backward propagation for current layer. 78 | :rtype: :class:`numpy.ndarray` 79 | """ 80 | dX = dropout_backward(self, dX) 81 | self.update_shapes(self.bc, self.bs) 82 | 83 | return dX 84 | 85 | def compute_gradients(self): 86 | """Wrapper for :func:`epynn.dropout.parameters.dropout_compute_gradients()`. Dummy method, there are no gradients to compute in layer. 87 | """ 88 | dropout_compute_gradients(self) 89 | 90 | return None 91 | 92 | def update_parameters(self): 93 | """Wrapper for :func:`epynn.dropout.parameters.dropout_update_parameters()`. Dummy method, there are no parameters to update in layer. 94 | """ 95 | if self.trainable: 96 | dropout_update_parameters(self) 97 | 98 | return None 99 | -------------------------------------------------------------------------------- /epynn/dropout/parameters.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/dropout/paremeters.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def dropout_compute_shapes(layer, A): 7 | """Compute forward shapes and dimensions from input for layer. 8 | """ 9 | X = A # Input of current layer 10 | 11 | layer.fs['X'] = X.shape # (m, .. ) 12 | 13 | layer.d['m'] = layer.fs['X'][0] # Number of samples (m) 14 | layer.d['n'] = X.size // layer.d['m'] # Number of features (n) 15 | 16 | # Shape for dropout mask 17 | layer.fs['D'] = [1 if ax in layer.d['a'] else layer.fs['X'][ax] 18 | for ax in range(X.ndim)] 19 | 20 | return None 21 | 22 | 23 | def dropout_initialize_parameters(layer): 24 | """Initialize trainable parameters from shapes for layer. 25 | """ 26 | # No parameters to initialize for Dropout layer 27 | 28 | return None 29 | 30 | 31 | def dropout_compute_gradients(layer): 32 | """Compute gradients with respect to weight and bias for layer. 
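The shape computation above sets the mask size to 1 along every axis listed in `axis`, so a single mask is drawn and then broadcast along those axes. The same idea outside the layer machinery:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 10, 8))   # e.g. (m, s, e)
axis = (1,)                           # share one mask across axis 1
drop_prob = 0.2

mask_shape = [1 if ax in axis else X.shape[ax] for ax in range(X.ndim)]
D = rng.uniform(0, 1, mask_shape) > drop_prob   # shape (4, 1, 8)

A = X * D / (1 - drop_prob)   # NumPy broadcasting repeats the mask along axis 1
print(D.shape, A.shape)       # (4, 1, 8) (4, 10, 8)
```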
33 | """ 34 | # No gradients to update for Dropout layer 35 | 36 | return None 37 | 38 | 39 | def dropout_update_parameters(layer): 40 | """Update parameters from gradients for layer. 41 | """ 42 | # No parameters to update for Dropout layer 43 | 44 | return None 45 | -------------------------------------------------------------------------------- /epynn/embedding/backward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/embedding/backward.py 2 | 3 | 4 | def initialize_backward(layer, dX): 5 | """Backward cache initialization. 6 | 7 | :param layer: An instance of embedding layer. 8 | :type layer: :class:`epynn.embedding.models.Embedding` 9 | 10 | :param dX: Output of backward propagation from next layer. 11 | :type dX: :class:`numpy.ndarray` 12 | 13 | :return: Input of backward propagation for current layer. 14 | :rtype: :class:`numpy.ndarray` 15 | """ 16 | dA = layer.bc['dA'] = dX 17 | 18 | return dX 19 | 20 | 21 | def embedding_backward(layer, dX): 22 | """Backward propagate error gradients to previous layer. 23 | """ 24 | # (1) Initialize cache 25 | dA = initialize_backward(layer, dX) 26 | 27 | # (2) Pass backward 28 | dX = layer.bc['dX'] = dA 29 | 30 | return None # No previous layer 31 | -------------------------------------------------------------------------------- /epynn/embedding/dataset.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/embedding/dataset.py 2 | # Standard library imports 3 | import warnings 4 | 5 | # Related third party imports 6 | import numpy as np 7 | 8 | # Local application/library specific imports 9 | from epynn.commons.io import ( 10 | encode_dataset, 11 | scale_features, 12 | index_elements_auto, 13 | ) 14 | from epynn.commons.models import dataSet 15 | 16 | 17 | def embedding_check(X_data, Y_data=None, X_scale=False): 18 | """Pre-processing. 19 | 20 | :param X_data: Set of sample features. 21 | :type encode: list[list] or :class:`numpy.ndarray` 22 | 23 | :param Y_data: Set of samples label. 24 | :type encode: list[list[int] or int] or :class:`numpy.ndarray`, optional 25 | 26 | :param X_scale: Set to True to normalize sample features within [0, 1]. 27 | :type X_scale: bool, optional 28 | 29 | :return: Sample features and label. 30 | :rtype: tuple[:class:`numpy.ndarray`] 31 | """ 32 | if X_scale: 33 | # Array-wide normalization in [0, 1] 34 | X_data = scale_features(X_data) 35 | 36 | X_data = np.array(X_data) 37 | 38 | Y_data = np.array(Y_data) 39 | 40 | return X_data, Y_data 41 | 42 | 43 | def embedding_encode(layer, X_data, Y_data, X_encode, Y_encode): 44 | """One-hot encoding for samples features and label. 45 | 46 | :param layer: An instance of the :class:`epynn.embedding.models.Embedding` 47 | :type layer: :class:`epynn.embedding.models.Embedding` 48 | 49 | :param X_data: Set of sample features. 50 | :type encode: list[list] or :class:`numpy.ndarray` 51 | 52 | :param Y_data: Set of samples label. 53 | :type encode: list[list[int] or int] or :class:`numpy.ndarray` 54 | 55 | :param X_encode: Set to True to one-hot encode features. 56 | :type encode: bool 57 | 58 | :param Y_encode: Set to True to one-hot encode labels. 59 | :type encode: bool 60 | 61 | :return: Encoded set of sample features, if applicable. 62 | :rtype : :class:`numpy.ndarray` 63 | 64 | :return: Encoded set of sample label, if applicable. 
65 | :rtype : :class:`numpy.ndarray` 66 | """ 67 | # Features one-hot encoding 68 | if X_encode: 69 | layer.e2i, layer.i2e, layer.d['e'] = index_elements_auto(X_data) 70 | X_data = encode_dataset(X_data, layer.e2i, layer.d['e']) 71 | # Label one-hot encoding 72 | if Y_encode: 73 | num_classes = len(list(set(Y_data.flatten()))) 74 | Y_data = np.eye(num_classes)[Y_data] 75 | 76 | return X_data, Y_data 77 | 78 | 79 | def embedding_prepare(layer, X_data, Y_data): 80 | """Prepare dataset for Embedding layer object. 81 | 82 | :param layer: An instance of the :class:`epynn.embedding.models.Embedding` 83 | :type layer: :class:`epynn.embedding.models.Embedding` 84 | 85 | :param X_data: Set of sample features. 86 | :type encode: list[list] or :class:`numpy.ndarray` 87 | 88 | :param Y_data: Set of samples label. 89 | :type encode: list[list[int] or int] or :class:`numpy.ndarray` 90 | 91 | :return: All training, validation and testing sets along with batched training set 92 | :rtype: tuple[:class:`epynn.commons.models.dataSet`] 93 | """ 94 | # Embedding parameters 95 | se_dataset = layer.se_dataset 96 | 97 | # Pair-wise features-label list 98 | dataset = list(zip(X_data, Y_data)) 99 | 100 | # Split and separate features and label 101 | dtrain, dval, dtest = split_dataset(dataset, se_dataset) 102 | 103 | X_train, Y_train = zip(*dtrain) 104 | X_val, Y_val = zip(*dval) if dval else [(), ()] 105 | X_test, Y_test = zip(*dtest) if dtest else [(), ()] 106 | 107 | # Instantiate dataSet objects 108 | dtrain = dataSet(X_data=X_train, Y_data=Y_train, name='dtrain') 109 | dval = dataSet(X_data=X_val, Y_data=Y_val, name='dval') 110 | dtest = dataSet(X_data=X_test, Y_data=Y_test, name='dtest') 111 | 112 | embedded_data = (dtrain, dval, dtest) 113 | 114 | return embedded_data 115 | 116 | 117 | def split_dataset(dataset, se_dataset): 118 | """Split dataset in training, testing and validation sets. 119 | 120 | :param dataset: Dataset containing sample features and label 121 | :type dataset: tuple[list or :class:`numpy.ndarray`] 122 | 123 | :param se_dataset: Settings for sets preparation 124 | :type se_dataset: dict[str: int] 125 | 126 | :return: Training, testing and validation sets. 127 | :rtype: tuple[list] 128 | """ 129 | # Retrieve relative sizes 130 | dtrain_relative = se_dataset['dtrain_relative'] 131 | dval_relative = se_dataset['dval_relative'] 132 | dtest_relative = se_dataset['dtest_relative'] 133 | 134 | # Compute absolute sizes with respect to full dataset 135 | sum_relative = sum([dtrain_relative, dval_relative, dtest_relative]) 136 | 137 | dtrain_length = round(dtrain_relative / sum_relative * len(dataset)) 138 | dval_length = round(dval_relative / sum_relative * len(dataset)) 139 | dtest_length = round(dtest_relative / sum_relative * len(dataset)) 140 | 141 | # Slice full dataset 142 | dtrain = dataset[:dtrain_length] 143 | dval = dataset[dtrain_length:dtrain_length + dval_length] 144 | dtest = dataset[dtrain_length + dval_length:] 145 | 146 | return dtrain, dval, dtest 147 | 148 | 149 | def mini_batches(layer): 150 | """Shuffle and divide dataset in batches for each training epoch. 
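`split_dataset` slices one shuffled list into contiguous training, validation and testing parts whose lengths follow the relative sizes. The same arithmetic in isolation:

```python
dataset = list(range(20))                       # 20 (features, label) pairs
relative = {'dtrain': 2, 'dval': 1, 'dtest': 1}

total = sum(relative.values())
n_train = round(relative['dtrain'] / total * len(dataset))   # 10
n_val = round(relative['dval'] / total * len(dataset))       # 5

dtrain = dataset[:n_train]
dval = dataset[n_train:n_train + n_val]
dtest = dataset[n_train + n_val:]
print(len(dtrain), len(dval), len(dtest))   # 10 5 5
```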
151 | 152 | :param layer: An instance of the :class:`epynn.embedding.models.Embedding` 153 | :type layer: :class:`epynn.embedding.models.Embedding` 154 | 155 | :return: Batches made from dataset with respect to batch_size 156 | :rtype: list[Object] 157 | """ 158 | # Retrieve training set and make pair-wise features-label dataset 159 | dtrain_zip = layer.dtrain_zip 160 | 161 | batch_size = layer.se_dataset['batch_size'] 162 | 163 | # Shuffle dataset 164 | if hasattr(layer, 'np_rng'): 165 | warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning) 166 | layer.np_rng.shuffle(dtrain_zip) 167 | else: 168 | np.random.shuffle(dtrain_zip) 169 | 170 | # Compute number of batches w.r.t. batch_size 171 | if not batch_size: 172 | batch_size = len(dtrain_zip) 173 | 174 | n_batch = len(dtrain_zip) // batch_size 175 | 176 | if not n_batch: 177 | n_batch = 1 178 | 179 | # Slice to make sure split will result in equal division 180 | dtrain_zip = dtrain_zip[: n_batch * batch_size] 181 | 182 | X_train, Y_train = zip(*dtrain_zip) 183 | 184 | X_train = np.split(np.array(X_train), n_batch, axis=0) 185 | Y_train = np.split(np.array(Y_train), n_batch, axis=0) 186 | 187 | # Set into dataSet object 188 | batch_dtrain = [dataSet(X_data=X_batch, Y_data=Y_batch, name=str(i)) 189 | for i, (X_batch, Y_batch) in enumerate(zip(X_train, Y_train))] 190 | 191 | return batch_dtrain 192 | -------------------------------------------------------------------------------- /epynn/embedding/forward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/embedding/forward.py 2 | 3 | 4 | def initialize_forward(layer, A): 5 | """Forward cache initialization. 6 | 7 | :param layer: An instance of embedding layer. 8 | :type layer: :class:`epynn.embedding.models.Embedding` 9 | 10 | :param A: Output of forward propagation from previous layer. 11 | :type A: :class:`numpy.ndarray` 12 | 13 | :return: Input of forward propagation for current layer. 14 | :rtype: :class:`numpy.ndarray` 15 | """ 16 | X = layer.fc['X'] = A 17 | 18 | return X 19 | 20 | 21 | def embedding_forward(layer, A): 22 | """Forward propagate signal to next layer. 23 | """ 24 | # (1) Initialize cache 25 | X = initialize_forward(layer, A) 26 | 27 | # (2) Pass forward 28 | A = layer.fc['A'] = X 29 | 30 | return A # To next layer 31 | -------------------------------------------------------------------------------- /epynn/embedding/models.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/embedding/parameters.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | # Local application/library specific imports 6 | from epynn.commons.models import Layer 7 | from epynn.embedding.dataset import ( 8 | embedding_prepare, 9 | embedding_encode, 10 | embedding_check, 11 | mini_batches, 12 | ) 13 | from epynn.embedding.forward import embedding_forward 14 | from epynn.embedding.backward import embedding_backward 15 | from epynn.embedding.parameters import ( 16 | embedding_compute_shapes, 17 | embedding_initialize_parameters, 18 | embedding_compute_gradients, 19 | embedding_update_parameters 20 | ) 21 | 22 | 23 | class Embedding(Layer): 24 | """ 25 | Definition of an embedding layer prototype. 26 | 27 | :param X_data: Dataset containing samples features, defaults to `None` which returns an empty layer. 28 | :type X_data: list[list[float or str or list[float or str]]] or NoneType, optional 29 | 30 | :param Y_data: Dataset containing samples label, defaults to `None`. 
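`mini_batches` shuffles the training pairs, truncates them so the count is an exact multiple of `batch_size`, and lets `numpy.split` do the division. A compact standalone version of that logic:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.arange(23).reshape(23, 1)      # 23 samples, 1 feature
Y = np.arange(23)
batch_size = 5

order = rng.permutation(len(X))       # shuffle features and labels together
X, Y = X[order], Y[order]

n_batch = max(len(X) // batch_size, 1)
X, Y = X[:n_batch * batch_size], Y[:n_batch * batch_size]   # drop the remainder

X_batches = np.split(X, n_batch)
Y_batches = np.split(Y, n_batch)
print(len(X_batches), X_batches[0].shape)   # 4 (5, 1)
```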
31 | :type Y_data: list[int or list[int]] or NoneType, optional 32 | 33 | :param relative_size: For training, validation and testing sets. Defaults to `(2, 1, 1)` 34 | :type relative_size: tuple[int], optional 35 | 36 | :param batch_size: For training batches, defaults to None which makes a single batch out of the training data. 37 | :type batch_size: int or NoneType, optional 38 | 39 | :param X_encode: Set to True to one-hot encode features, default to `False`. 40 | :type encode: bool, optional 41 | 42 | :param Y_encode: Set to True to one-hot encode labels, default to `False`. 43 | :type encode: bool, optional 44 | 45 | :param X_scale: Normalize sample features within [0, 1], default to `False`. 46 | :type X_scale: bool, optional 47 | """ 48 | 49 | def __init__(self, 50 | X_data=None, 51 | Y_data=None, 52 | relative_size=(2, 1, 0), 53 | batch_size=None, 54 | X_encode=False, 55 | Y_encode=False, 56 | X_scale=False): 57 | """Initialize instance variable attributes. 58 | """ 59 | super().__init__() 60 | 61 | self.se_dataset = { 62 | 'dtrain_relative': relative_size[0], 63 | 'dval_relative': relative_size[1], 64 | 'dtest_relative': relative_size[2], 65 | 'batch_size': batch_size, 66 | 'X_scale': X_scale, 67 | 'X_encode': X_encode, 68 | 'Y_encode': Y_encode, 69 | } 70 | 71 | X_data, Y_data = embedding_check(X_data, Y_data, X_scale) 72 | 73 | X_data, Y_data = embedding_encode(self, X_data, Y_data, X_encode, Y_encode) 74 | 75 | embedded_data = embedding_prepare(self, X_data, Y_data) 76 | 77 | self.dtrain, self.dval, self.dtest = embedded_data 78 | 79 | # Keep non-empty datasets 80 | self.dsets = [self.dtrain, self.dval, self.dtest] 81 | self.dsets = [dset for dset in self.dsets if dset.active] 82 | 83 | self.trainable = False 84 | 85 | return None 86 | 87 | def training_batches(self, init=False): 88 | """Wrapper for :func:`epynn.embedding.dataset.mini_batches()`. 89 | 90 | :param init: Wether to prepare a zip of X and Y data, defaults to False. 91 | :type init: bool, optional 92 | """ 93 | if init: 94 | self.dtrain_zip = list(zip(self.dtrain.X, self.dtrain.Y)) 95 | 96 | self.batch_dtrain = mini_batches(self) 97 | 98 | return None 99 | 100 | def compute_shapes(self, A): 101 | """Wrapper for :func:`epynn.embedding.parameters.embedding_compute_shapes()`. 102 | 103 | :param A: Output of forward propagation from previous layer. 104 | :type A: :class:`numpy.ndarray` 105 | """ 106 | embedding_compute_shapes(self, A) 107 | 108 | return None 109 | 110 | def initialize_parameters(self): 111 | """Wrapper for :func:`epynn.embedding.parameters.embedding_initialize_parameters()`. 112 | """ 113 | embedding_initialize_parameters(self) 114 | 115 | return None 116 | 117 | def forward(self, A): 118 | """Wrapper for :func:`epynn.embedding.forward.embedding_forward()`. 119 | 120 | :param A: Output of forward propagation from *previous* layer. 121 | :type A: :class:`numpy.ndarray` 122 | 123 | :return: Output of forward propagation for **current** layer. 124 | :rtype: :class:`numpy.ndarray` 125 | """ 126 | A = embedding_forward(self, A) 127 | self.update_shapes(self.fc, self.fs) 128 | 129 | return A 130 | 131 | def backward(self, dX): 132 | """Wrapper for :func:`epynn.embedding.backward.embedding_backward()`. 133 | 134 | :param dX: Output of backward propagation from next layer. 135 | :type dX: :class:`numpy.ndarray` 136 | 137 | :return: Output of backward propagation for current layer. 
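Putting the constructor above to work only requires paired features and labels. A hedged usage sketch with synthetic data, assuming the resulting `dataSet` objects expose `.X` and `.Y` as they are used by `mini_batches` above (the data itself is made up):

```python
import numpy as np
from epynn.embedding.models import Embedding

X_data = np.random.uniform(0, 255, (100, 16))   # 100 samples, 16 features
Y_data = np.random.randint(0, 2, 100)           # binary labels

embedding = Embedding(X_data=X_data,
                      Y_data=Y_data,
                      relative_size=(2, 1, 1),   # train/val/test proportions
                      batch_size=32,
                      Y_encode=True,             # one-hot encode labels
                      X_scale=True)              # scale features within [0, 1]

print(len(embedding.dtrain.X), len(embedding.dval.X), len(embedding.dtest.X))
```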
138 | :rtype: :class:`numpy.ndarray` 139 | """ 140 | dX = embedding_backward(self, dX) 141 | self.update_shapes(self.bc, self.bs) 142 | 143 | return dX 144 | 145 | def compute_gradients(self): 146 | """Wrapper for :func:`epynn.embedding.parameters.embedding_compute_gradients()`. Dummy method, there are no gradients to compute in layer. 147 | """ 148 | embedding_compute_gradients(self) 149 | 150 | return None 151 | 152 | def update_parameters(self): 153 | """Wrapper for :func:`epynn.embedding.parameters.embedding_update_parameters()`. Dummy method, there are no parameters to update in layer. 154 | """ 155 | if self.trainable: 156 | embedding_update_parameters(self) 157 | 158 | return None 159 | -------------------------------------------------------------------------------- /epynn/embedding/parameters.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/embedding/parameters.py 2 | # Related third party imports 3 | 4 | 5 | def embedding_compute_shapes(layer, A): 6 | """Compute forward shapes and dimensions from input for layer. 7 | """ 8 | X = A # Input of current layer 9 | 10 | layer.fs['X'] = X.shape # (m, .. ) 11 | 12 | layer.d['m'] = layer.fs['X'][0] # Number of samples (m) 13 | layer.d['n'] = X.size // layer.d['m'] # Number of features (n) 14 | 15 | return None 16 | 17 | 18 | def embedding_initialize_parameters(layer): 19 | """Initialize parameters from shapes for layer. 20 | """ 21 | # No parameters to initialize for Embedding layer 22 | 23 | return None 24 | 25 | 26 | def embedding_compute_gradients(layer): 27 | """Compute gradients with respect to weight and bias for layer. 28 | """ 29 | # No gradients to compute for Embedding layer 30 | 31 | return None 32 | 33 | 34 | def embedding_update_parameters(layer): 35 | """Update parameters from gradients for layer. 36 | """ 37 | # No parameters to update for Embedding layer 38 | 39 | return None 40 | -------------------------------------------------------------------------------- /epynn/flatten/backward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/flatten/backward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_backward(layer, dX): 7 | """Backward cache initialization. 8 | 9 | :param layer: An instance of flatten layer. 10 | :type layer: :class:`epynn.flatten.models.Flatten` 11 | 12 | :param dX: Output of backward propagation from next layer. 13 | :type dX: :class:`numpy.ndarray` 14 | 15 | :return: Input of backward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | """ 18 | dA = layer.bc['dA'] = dX 19 | 20 | return dA 21 | 22 | 23 | def flatten_backward(layer, dX): 24 | """Backward propagate error gradients to previous layer. 25 | """ 26 | # (1) 27 | dA = initialize_backward(layer, dX) 28 | 29 | # (2) Reverse reshape (m, n) -> (m, ...) 30 | dX = layer.bc['dX'] = np.reshape(dA, layer.fs['X']) 31 | 32 | return dX # To previous layer 33 | -------------------------------------------------------------------------------- /epynn/flatten/forward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/flatten/forward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_forward(layer, A): 7 | """Forward cache initialization. 8 | 9 | :param layer: An instance of flatten layer. 10 | :type layer: :class:`epynn.flatten.models.Flatten` 11 | 12 | :param A: Output of forward propagation from previous layer. 
13 | :type A: :class:`numpy.ndarray` 14 | 15 | :return: Input of forward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | """ 18 | X = layer.fc['X'] = A 19 | 20 | return X 21 | 22 | 23 | def flatten_forward(layer,A): 24 | """Forward propagate signal to next layer. 25 | """ 26 | # (1) Initialize cache 27 | X = initialize_forward(layer, A) 28 | 29 | # (2) Reshape (m, ...) -> (m, n) 30 | A = layer.fc['A'] = np.reshape(X, layer.fs['A']) 31 | 32 | return A # To next layer 33 | -------------------------------------------------------------------------------- /epynn/flatten/models.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/flatten/models.py 2 | # Local application/library specific imports 3 | from epynn.commons.models import Layer 4 | from epynn.flatten.forward import flatten_forward 5 | from epynn.flatten.backward import flatten_backward 6 | from epynn.flatten.parameters import ( 7 | flatten_compute_shapes, 8 | flatten_initialize_parameters, 9 | flatten_compute_gradients, 10 | flatten_update_parameters 11 | ) 12 | 13 | 14 | class Flatten(Layer): 15 | """ 16 | Definition of a flatten layer prototype. 17 | """ 18 | 19 | def __init__(self): 20 | """Initialize instance variable attributes. 21 | """ 22 | super().__init__() 23 | 24 | self.trainable = False 25 | 26 | return None 27 | 28 | def compute_shapes(self, A): 29 | """Wrapper for :func:`epynn.flatten.parameters.flatten_compute_shapes()`. 30 | 31 | :param A: Output of forward propagation from previous layer. 32 | :type A: :class:`numpy.ndarray` 33 | """ 34 | flatten_compute_shapes(self, A) 35 | 36 | return None 37 | 38 | def initialize_parameters(self): 39 | """Wrapper for :func:`epynn.flatten.parameters.flatten_initialize_parameters()`. 40 | """ 41 | flatten_initialize_parameters(self) 42 | 43 | return None 44 | 45 | def forward(self, A): 46 | """Wrapper for :func:`epynn.flatten.forward.flatten_forward()`. 47 | 48 | :param A: Output of forward propagation from previous layer. 49 | :type A: :class:`numpy.ndarray` 50 | 51 | :return: Output of forward propagation for current layer. 52 | :rtype: :class:`numpy.ndarray` 53 | """ 54 | self.compute_shapes(A) 55 | A = flatten_forward(self, A) 56 | self.update_shapes(self.fc, self.fs) 57 | 58 | return A 59 | 60 | def backward(self, dX): 61 | """Wrapper for :func:`epynn.flatten.backward.flatten_backward()`. 62 | 63 | :param dX: Output of backward propagation from next layer. 64 | :type dX: :class:`numpy.ndarray` 65 | 66 | :return: Output of backward propagation for current layer. 67 | :rtype: :class:`numpy.ndarray` 68 | """ 69 | dX = flatten_backward(self, dX) 70 | self.update_shapes(self.bc, self.bs) 71 | 72 | return dX 73 | 74 | def compute_gradients(self): 75 | """Wrapper for :func:`epynn.flatten.parameters.flatten_compute_gradients()`. Dummy method, there are no gradients to compute in layer. 76 | """ 77 | flatten_compute_gradients(self) 78 | 79 | return None 80 | 81 | def update_parameters(self): 82 | """Wrapper for :func:`epynn.flatten.parameters.flatten_update_parameters()`. Dummy method, there are no parameters to update in layer. 
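Flatten is a pure reshape: the forward pass collapses everything but the sample axis, and the backward pass (shown earlier) restores the cached input shape. The round trip in isolation:

```python
import numpy as np

X = np.arange(24).reshape(2, 3, 4)      # (m, ...) input, here (2, 3, 4)
m, n = X.shape[0], X.size // X.shape[0]

A = X.reshape(m, n)                     # forward: (2, 12)
dX = A.reshape(X.shape)                 # backward: gradients back to (2, 3, 4)

assert A.shape == (2, 12) and (dX == X).all()
```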
83 | """ 84 | if self.trainable: 85 | flatten_update_parameters(self) 86 | 87 | return None 88 | -------------------------------------------------------------------------------- /epynn/flatten/parameters.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/flatten/parameters.py 2 | 3 | 4 | def flatten_compute_shapes(layer, A): 5 | """Compute forward shapes and dimensions from input for layer. 6 | """ 7 | X = A # Input of current layer 8 | 9 | layer.fs['X'] = X.shape # (m, ...) 10 | 11 | layer.d['m'] = layer.fs['X'][0] # Number of samples (m) 12 | layer.d['n'] = X.size // layer.d['m'] # Number of features (n) 13 | 14 | # Shape for output of forward propagation 15 | layer.fs['A'] = (layer.d['m'], layer.d['n']) 16 | 17 | return None 18 | 19 | 20 | def flatten_initialize_parameters(layer): 21 | """Initialize trainable parameters from shapes for layer. 22 | """ 23 | # No parameters to initialize for Flatten layer 24 | 25 | return None 26 | 27 | 28 | def flatten_compute_gradients(layer): 29 | """Compute gradients with respect to weight and bias for layer. 30 | """ 31 | # No gradients to compute for Flatten layer 32 | 33 | return None 34 | 35 | 36 | def flatten_update_parameters(layer): 37 | """Update parameters from gradients for layer. 38 | """ 39 | # No parameters to update for Flatten layer 40 | 41 | return None 42 | -------------------------------------------------------------------------------- /epynn/gru/backward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/gru/backward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_backward(layer, dX): 7 | """Backward cache initialization. 8 | 9 | :param layer: An instance of GRU layer. 10 | :type layer: :class:`epynn.gru.models.GRU` 11 | 12 | :param dX: Output of backward propagation from next layer. 13 | :type dX: :class:`numpy.ndarray` 14 | 15 | :return: Input of backward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | 18 | :return: Next hidden state initialized with zeros. 19 | :rtype: :class:`numpy.ndarray` 20 | """ 21 | if layer.sequences: 22 | dA = dX # Full length sequence 23 | elif not layer.sequences: 24 | dA = np.zeros(layer.fs['h']) # Empty full length sequence 25 | dA[:, -1] = dX # Assign to last index 26 | 27 | cache_keys = ['dh_', 'dh', 'dhn', 'dhh_', 'dz_', 'dr_'] 28 | layer.bc.update({k: np.zeros(layer.fs['h']) for k in cache_keys}) 29 | 30 | layer.bc['dA'] = dA 31 | layer.bc['dX'] = np.zeros(layer.fs['X']) # To previous layer 32 | 33 | dh = layer.bc['dh'][:, 0] # To previous step 34 | 35 | return dA, dh 36 | 37 | 38 | def gru_backward(layer, dX): 39 | """Backward propagate error gradients to previous layer. 
40 | """ 41 | # (1) Initialize cache and hidden state gradients 42 | dA, dh = initialize_backward(layer, dX) 43 | 44 | # Reverse iteration over sequence steps 45 | for s in reversed(range(layer.d['s'])): 46 | 47 | # (2s) Slice sequence (m, s, u) w.r.t step 48 | dA = layer.bc['dA'][:, s] # dL/dA 49 | 50 | # (3s) Retrieve previous hidden state 51 | dhn = layer.bc['dhn'][:, s] = dh # dL/dhn 52 | 53 | # (4s) Gradient of the loss w.r.t hidden state h_ 54 | dh_ = layer.bc['dh_'][:, s] = ( 55 | (dA + dhn) 56 | ) # dL/dh_ 57 | 58 | # (5s) Gradient of the loss w.r.t hidden hat hh_ 59 | dhh_ = layer.bc['dhh_'][:, s] = ( 60 | dh_ 61 | * (1-layer.fc['z'][:, s]) 62 | * layer.activate(layer.fc['hh_'][:, s], deriv=True) 63 | ) # dL/dhh_ 64 | 65 | # (6s) Gradient of the loss w.r.t update gate z_ 66 | dz_ = layer.bc['dz_'][:, s] = ( 67 | dh_ 68 | * (layer.fc['hp'][:, s]-layer.fc['hh'][:, s]) 69 | * layer.activate_update(layer.fc['z_'][:, s], deriv=True) 70 | ) # dL/dz_ 71 | 72 | # (7s) Gradient of the loss w.r.t reset gate 73 | dr_ = layer.bc['dr_'][:, s] = ( 74 | np.dot(dhh_, layer.p['Vhh'].T) 75 | * layer.fc['hp'][:, s] 76 | * layer.activate_reset(layer.fc['r_'][:, s], deriv=True) 77 | ) # dL/dr_ 78 | 79 | # (8s) Gradient of the loss w.r.t previous hidden state 80 | dh = layer.bc['dh'][:, s] = ( 81 | np.dot(dhh_, layer.p['Vhh'].T) * layer.fc['r'][:, s] 82 | + np.dot(dz_, layer.p['Vz'].T) + dh_ * layer.fc['z'][:, s] 83 | + np.dot(dr_, layer.p['Vr'].T) 84 | ) # dL/dh 85 | 86 | # (9s) Gradient of the loss w.r.t to X 87 | dX = layer.bc['dX'][:, s] = ( 88 | np.dot(dhh_, layer.p['Uhh'].T) 89 | + np.dot(dz_, layer.p['Uz'].T) 90 | + np.dot(dr_, layer.p['Ur'].T) 91 | ) # dL/dX 92 | 93 | dX = layer.bc['dX'] 94 | 95 | return dX # To previous layer 96 | -------------------------------------------------------------------------------- /epynn/gru/forward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/gru/forward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_forward(layer, A): 7 | """Forward cache initialization. 8 | 9 | :param layer: An instance of GRU layer. 10 | :type layer: :class:`epynn.gru.models.GRU` 11 | 12 | :param A: Output of forward propagation from previous layer. 13 | :type A: :class:`numpy.ndarray` 14 | 15 | :return: Input of forward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | 18 | :return: Previous hidden state initialized with zeros. 19 | :rtype: :class:`numpy.ndarray` 20 | """ 21 | X = layer.fc['X'] = A 22 | 23 | cache_keys = ['h', 'hp', 'hh_', 'hh', 'z', 'z_', 'r', 'r_'] 24 | layer.fc.update({k: np.zeros(layer.fs['h']) for k in cache_keys}) 25 | 26 | h = layer.fc['h'][:, 0] # Hidden state 27 | 28 | return X, h 29 | 30 | 31 | def gru_forward(layer, A): 32 | """Forward propagate signal to next layer. 
33 | """ 34 | # (1) Initialize cache and hidden state 35 | X, h = initialize_forward(layer, A) 36 | 37 | # Iterate over sequence steps 38 | for s in range(layer.d['s']): 39 | 40 | # (2s) Slice sequence (m, s, e) with respect to step 41 | X = layer.fc['X'][:, s] 42 | 43 | # (3s) Retrieve previous hidden state 44 | hp = layer.fc['hp'][:, s] = h 45 | 46 | # (4s) Activate reset gate 47 | r_ = layer.fc['r_'][:, s] = ( 48 | np.dot(X, layer.p['Ur']) 49 | + np.dot(hp, layer.p['Vr']) 50 | + layer.p['br'] 51 | ) # (4.1s) 52 | 53 | r = layer.fc['r'][:, s] = layer.activate_reset(r_) # (4.2s) 54 | 55 | # (5s) Activate update gate 56 | z_ = layer.fc['z_'][:, s] = ( 57 | np.dot(X, layer.p['Uz']) 58 | + np.dot(hp, layer.p['Vz']) 59 | + layer.p['bz'] 60 | ) # (5.1s) 61 | 62 | z = layer.fc['z'][:, s] = layer.activate_update(z_) # (5.2s) 63 | 64 | # (6s) Activate hidden hat 65 | hh_ = layer.fc['hh_'][:, s] = ( 66 | np.dot(X, layer.p['Uhh']) 67 | + np.dot(r * hp, layer.p['Vhh']) 68 | + layer.p['bhh'] 69 | ) # (6.1s) 70 | 71 | hh = layer.fc['hh'][:, s] = layer.activate(hh_) # (6.2s) 72 | 73 | # (7s) Compute current hidden state 74 | h = layer.fc['h'][:, s] = ( 75 | z*hp 76 | + (1-z)*hh 77 | ) 78 | 79 | # Return the last hidden state or the full sequence 80 | A = layer.fc['h'] if layer.sequences else layer.fc['h'][:, -1] 81 | 82 | return A # To next layer 83 | -------------------------------------------------------------------------------- /epynn/gru/models.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/gru/models.py 2 | # Local application/library specific imports 3 | from epynn.commons.models import Layer 4 | from epynn.commons.maths import ( 5 | tanh, 6 | sigmoid, 7 | orthogonal, 8 | clip_gradient, 9 | activation_tune, 10 | ) 11 | from epynn.gru.forward import gru_forward 12 | from epynn.gru.backward import gru_backward 13 | from epynn.gru.parameters import ( 14 | gru_compute_shapes, 15 | gru_initialize_parameters, 16 | gru_compute_gradients, 17 | gru_update_parameters, 18 | ) 19 | 20 | 21 | class GRU(Layer): 22 | """ 23 | Definition of a GRU layer prototype. 24 | 25 | :param units: Number of unit cells in GRU layer, defaults to 1. 26 | :type units: int, optional 27 | 28 | :param activate: Non-linear activation of hidden hat (hh) state, defaults to `tanh`. 29 | :type activate: function, optional 30 | 31 | :param activate_output: Non-linear activation of update gate, defaults to `sigmoid`. 32 | :type activate_output: function, optional 33 | 34 | :param activate_candidate: Non-linear activation of reset gate, defaults to `sigmoid`. 35 | :type activate_candidate: function, optional 36 | 37 | :param initialization: Weight initialization function for GRU layer, defaults to `orthogonal`. 38 | :type initialization: function, optional 39 | 40 | :param clip_gradients: May prevent exploding/vanishing gradients, defaults to `False`. 41 | :type clip_gradients: bool, optional 42 | 43 | :param sequences: Whether to return only the last hidden state or the full sequence, defaults to `False`. 44 | :type sequences: bool, optional 45 | 46 | :param se_hPars: Layer hyper-parameters, defaults to `None` and inherits from model. 47 | :type se_hPars: dict[str, str or float] or NoneType, optional 48 | """ 49 | 50 | def __init__(self, 51 | unit_cells=1, 52 | activate=tanh, 53 | activate_update=sigmoid, 54 | activate_reset=sigmoid, 55 | initialization=orthogonal, 56 | clip_gradients=False, 57 | sequences=False, 58 | se_hPars=None): 59 | """Initialize instance variable attributes. 
60 | """ 61 | super().__init__(se_hPars) 62 | 63 | self.d['u'] = unit_cells 64 | self.activate = activate 65 | self.activate_update = activate_update 66 | self.activate_reset = activate_reset 67 | self.initialization = initialization 68 | self.clip_gradients = clip_gradients 69 | self.sequences = sequences 70 | 71 | self.activation = { 72 | 'activate': self.activate.__name__, 73 | 'activate_update': self.activate_update.__name__, 74 | 'activate_reset': self.activate_reset.__name__, 75 | } 76 | self.trainable = True 77 | 78 | return None 79 | 80 | def compute_shapes(self, A): 81 | """Wrapper for :func:`epynn.gru.parameters.gru_compute_shapes()`. 82 | 83 | :param A: Output of forward propagation from previous layer. 84 | :type A: :class:`numpy.ndarray` 85 | """ 86 | gru_compute_shapes(self, A) 87 | 88 | return None 89 | 90 | def initialize_parameters(self): 91 | """Wrapper for :func:`epynn.gru.parameters.gru_initialize_parameters()`. 92 | """ 93 | gru_initialize_parameters(self) 94 | 95 | return None 96 | 97 | def forward(self, A): 98 | """Wrapper for :func:`epynn.gru.forward.gru_forward()`. 99 | 100 | :param A: Output of forward propagation from previous layer. 101 | :type A: :class:`numpy.ndarray` 102 | 103 | :return: Output of forward propagation for current layer. 104 | :rtype: :class:`numpy.ndarray` 105 | """ 106 | self.compute_shapes(A) 107 | activation_tune(self.se_hPars) 108 | A = self.fc['A'] = gru_forward(self, A) 109 | self.update_shapes(self.fc, self.fs) 110 | 111 | return A 112 | 113 | def backward(self, dX): 114 | """Wrapper for :func:`epynn.gru.backward.gru_backward()`. 115 | 116 | :param dX: Output of backward propagation from next layer. 117 | :type dX: :class:`numpy.ndarray` 118 | 119 | :return: Output of backward propagation for current layer. 120 | :rtype: :class:`numpy.ndarray` 121 | """ 122 | activation_tune(self.se_hPars) 123 | dX = gru_backward(self, dX) 124 | self.update_shapes(self.bc, self.bs) 125 | 126 | return dX 127 | 128 | def compute_gradients(self): 129 | """Wrapper for :func:`epynn.gru.parameters.gru_compute_gradients()`. 130 | """ 131 | gru_compute_gradients(self) 132 | 133 | if self.clip_gradients: 134 | clip_gradient(self) 135 | 136 | return None 137 | 138 | def update_parameters(self): 139 | """Wrapper for :func:`epynn.gru.parameters.gru_update_parameters()`. 140 | """ 141 | if self.trainable: 142 | gru_update_parameters(self) 143 | 144 | return None 145 | -------------------------------------------------------------------------------- /epynn/gru/parameters.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/gru/parameters.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def gru_compute_shapes(layer, A): 7 | """Compute forward shapes and dimensions from input for layer. 
8 | """ 9 | X = A # Input of current layer 10 | 11 | layer.fs['X'] = X.shape # (m, s, e) 12 | 13 | layer.d['m'] = layer.fs['X'][0] # Number of samples (m) 14 | layer.d['s'] = layer.fs['X'][1] # Steps in sequence (s) 15 | layer.d['e'] = layer.fs['X'][2] # Elements per step (e) 16 | 17 | # Parameter Shapes Unit cells (u) 18 | eu = (layer.d['e'], layer.d['u']) # (e, u) 19 | uu = (layer.d['u'], layer.d['u']) # (u, u) 20 | u1 = (1, layer.d['u']) # (1, u) 21 | # Update gate Reset gate Hidden hat 22 | layer.fs['Uz'] = layer.fs['Ur'] = layer.fs['Uhh'] = eu 23 | layer.fs['Vz'] = layer.fs['Vr'] = layer.fs['Vhh'] = uu 24 | layer.fs['bz'] = layer.fs['br'] = layer.fs['bhh'] = u1 25 | 26 | # Shape of hidden state (h) with respect to steps (s) 27 | layer.fs['h'] = (layer.d['m'], layer.d['s'], layer.d['u']) 28 | 29 | return None 30 | 31 | 32 | def gru_initialize_parameters(layer): 33 | """Initialize trainable parameters from shapes for layer. 34 | """ 35 | # For linear activation of update gate (z_) 36 | layer.p['Uz'] = layer.initialization(layer.fs['Uz'], rng=layer.np_rng) 37 | layer.p['Vz'] = layer.initialization(layer.fs['Vz'], rng=layer.np_rng) 38 | layer.p['bz'] = np.zeros(layer.fs['bz']) # dot(X, U) + dot(hp, V) + b 39 | 40 | # For linear activation of reset gate (r_) 41 | layer.p['Ur'] = layer.initialization(layer.fs['Ur'], rng=layer.np_rng) 42 | layer.p['Vr'] = layer.initialization(layer.fs['Vr'], rng=layer.np_rng) 43 | layer.p['br'] = np.zeros(layer.fs['br']) # dot(X, U) + dot(hp, V) + b 44 | 45 | # For linear activation of hidden hat (hh_) 46 | layer.p['Uhh'] = layer.initialization(layer.fs['Uhh'], rng=layer.np_rng) 47 | layer.p['Vhh'] = layer.initialization(layer.fs['Vhh'], rng=layer.np_rng) 48 | layer.p['bhh'] = np.zeros(layer.fs['bhh']) # dot(X, U) + dot(r * hp, V) + b 49 | 50 | return None 51 | 52 | 53 | def gru_compute_gradients(layer): 54 | """Compute gradients with respect to weight and bias for layer. 55 | """ 56 | # Gradients initialization with respect to parameters 57 | for parameter in layer.p.keys(): 58 | gradient = 'd' + parameter 59 | layer.g[gradient] = np.zeros_like(layer.p[parameter]) 60 | 61 | # Reverse iteration over sequence steps 62 | for s in reversed(range(layer.d['s'])): 63 | 64 | X = layer.fc['X'][:, s] # Input for current step 65 | hp = layer.fc['hp'][:, s] # Previous hidden state 66 | 67 | # (1) Gradients of the loss with respect to U, V, b 68 | dhh_ = layer.bc['dhh_'][:, s] # Gradient w.r.t hidden hat hh_ 69 | layer.g['dUhh'] += np.dot(X.T, dhh_) # (1.1) dL/dUhh 70 | layer.g['dVhh'] += np.dot((layer.fc['r'][:, s] * hp).T, dhh_) 71 | layer.g['dbhh'] += np.sum(dhh_, axis=0) # (1.3) dL/dbhh 72 | 73 | # (2) Gradients of the loss with respect to U, V, b 74 | dz_ = layer.bc['dz_'][:, s] # Gradient w.r.t update gate z_ 75 | layer.g['dUz'] += np.dot(X.T, dz_) # (2.1) dL/dUz 76 | layer.g['dVz'] += np.dot(hp.T, dz_) # (2.2) dL/dVz 77 | layer.g['dbz'] += np.sum(dz_, axis=0) # (2.3) dL/dbz 78 | 79 | # (3) Gradients of the loss with respect to U, V, b 80 | dr_ = layer.bc['dr_'][:, s] # Gradient w.r.t reset gate r_ 81 | layer.g['dUr'] += np.dot(X.T, dr_) # (3.1) dL/dUr 82 | layer.g['dVr'] += np.dot(hp.T, dr_) # (3.2) dL/dVr 83 | layer.g['dbr'] += np.sum(dr_, axis=0) # (3.3) dL/dbr 84 | 85 | return None 86 | 87 | 88 | def gru_update_parameters(layer): 89 | """Update parameters from gradients for layer. 
90 | """ 91 | for gradient in layer.g.keys(): 92 | parameter = gradient[1:] 93 | # Update is driven by learning rate and gradients 94 | layer.p[parameter] -= layer.lrate[layer.e] * layer.g[gradient] 95 | 96 | return None 97 | -------------------------------------------------------------------------------- /epynn/initialize.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/initialize.py 2 | # Local application/library specific imports 3 | from epynn.commons.logs import set_highlighted_excepthook 4 | from epynn.commons.library import settings_verification 5 | 6 | # Import epynn.settings in working directory if not present 7 | settings_verification() 8 | 9 | # Colored excepthook 10 | set_highlighted_excepthook() 11 | -------------------------------------------------------------------------------- /epynn/lstm/backward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/lstm/backward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_backward(layer, dX): 7 | """Backward cache initialization. 8 | 9 | :param layer: An instance of LSTM layer. 10 | :type layer: :class:`epynn.lstm.models.LSTM` 11 | 12 | :param dX: Output of backward propagation from next layer. 13 | :type dX: :class:`numpy.ndarray` 14 | 15 | :return: Input of backward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | 18 | :return: Next hidden state initialized with zeros. 19 | :rtype: :class:`numpy.ndarray` 20 | 21 | :return: Next memory state initialized with zeros. 22 | :rtype: :class:`numpy.ndarray` 23 | """ 24 | if layer.sequences: 25 | dA = dX # Full length sequence 26 | elif not layer.sequences: 27 | dA = np.zeros(layer.fs['h']) # Empty full length sequence 28 | dA[:, -1] = dX # Assign to last index 29 | 30 | cache_keys = [ 31 | 'dh_', 'dh', 'dhn', 32 | 'dC_', 'dC', 'dCn', 33 | 'do_', 'dg_', 'di_', 'df_', 'dz_' 34 | ] 35 | 36 | layer.bc.update({k: np.zeros(layer.fs['h']) for k in cache_keys}) 37 | 38 | layer.bc['dA'] = dA 39 | layer.bc['dX'] = np.zeros(layer.fs['X']) # To previous layer 40 | 41 | dh = layer.bc['dh'][:, 0] # To previous step 42 | dC = layer.bc['dC'][:, 0] # To previous step 43 | 44 | return dA, dh, dC 45 | 46 | 47 | def lstm_backward(layer, dX): 48 | """Backward propagate error gradients to previous layer. 49 | """ 50 | # (1) Initialize cache, hidden and memory state gradients 51 | dA, dh, dC = initialize_backward(layer, dX) 52 | 53 | # Reverse iteration over sequence steps 54 | for s in reversed(range(layer.d['s'])): 55 | 56 | # (2s) Slice sequence (m, s, u) w.r.t step 57 | dA = layer.bc['dA'][:, s] # dL/dA 58 | 59 | # (3s) Gradient of the loss w.r.t. 
next states 60 | dhn = layer.bc['dhn'][:, s] = dh # (3.1) dL/dhn 61 | dCn = layer.bc['dCn'][:, s] = dC # (3.2) dL/dCn 62 | 63 | # (4s) Gradient of the loss w.r.t hidden state h_ 64 | dh_ = layer.bc['dh_'][:, s] = ( 65 | (dA + dhn) 66 | ) # dL/dh_ 67 | 68 | # (5s) Gradient of the loss w.r.t memory state C_ 69 | dC_ = layer.bc['dC_'][:, s] = ( 70 | dh_ 71 | * layer.fc['o'][:, s] 72 | * layer.activate(layer.fc['C_'][:, s], deriv=True) 73 | + dCn 74 | ) # dL/dC_ 75 | 76 | # (6s) Gradient of the loss w.r.t output gate o_ 77 | do_ = layer.bc['do_'][:, s] = ( 78 | dh_ 79 | * layer.fc['C'][:, s] 80 | * layer.activate_output(layer.fc['o_'][:, s], deriv=True) 81 | ) # dL/do_ 82 | 83 | # (7s) Gradient of the loss w.r.t candidate g_ 84 | dg_ = layer.bc['dg_'][:, s] = ( 85 | dC_ 86 | * layer.fc['i'][:, s] 87 | * layer.activate_candidate(layer.fc['g_'][:, s], deriv=True) 88 | ) # dL/dg_ 89 | 90 | # (8s) Gradient of the loss w.r.t input gate i_ 91 | di_ = layer.bc['di_'][:, s] = ( 92 | dC_ 93 | * layer.fc['g'][:, s] 94 | * layer.activate_input(layer.fc['i_'][:, s], deriv=True) 95 | ) # dL/di_ 96 | 97 | # (9s) Gradient of the loss w.r.t forget gate f_ 98 | df_ = layer.bc['df_'][:, s] = ( 99 | dC_ 100 | * layer.fc['Cp_'][:, s] 101 | * layer.activate_forget(layer.fc['f_'][:, s], deriv=True) 102 | ) # dL/df_ 103 | 104 | # (10s) Gradient of the loss w.r.t memory state C 105 | dC = layer.bc['dC'][:, s] = ( 106 | dC_ 107 | * layer.fc['f'][:, s] 108 | ) # dL/dC 109 | 110 | # (11s) Gradient of the loss w.r.t hidden state h 111 | dh = layer.bc['dh'][:, s] = ( 112 | np.dot(do_, layer.p['Vo'].T) 113 | + np.dot(dg_, layer.p['Vg'].T) 114 | + np.dot(di_, layer.p['Vi'].T) 115 | + np.dot(df_, layer.p['Vf'].T) 116 | ) # dL/dh 117 | 118 | # (12s) Gradient of the loss w.r.t hidden state X 119 | dX = layer.bc['dX'][:, s] = ( 120 | np.dot(dg_, layer.p['Ug'].T) 121 | + np.dot(do_, layer.p['Uo'].T) 122 | + np.dot(di_, layer.p['Ui'].T) 123 | + np.dot(df_, layer.p['Uf'].T) 124 | ) # dL/dX 125 | 126 | dX = layer.bc['dX'] 127 | 128 | return dX # To previous layer 129 | -------------------------------------------------------------------------------- /epynn/lstm/forward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/lstm/forward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_forward(layer, A): 7 | """Forward cache initialization. 8 | 9 | :param layer: An instance of LSTM layer. 10 | :type layer: :class:`epynn.lstm.models.LSTM` 11 | 12 | :param A: Output of forward propagation from previous layer. 13 | :type A: :class:`numpy.ndarray` 14 | 15 | :return: Input of forward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | 18 | :return: Previous hidden state initialized with zeros. 19 | :rtype: :class:`numpy.ndarray` 20 | 21 | :return: Previous memory state initialized with zeros. 22 | :rtype: :class:`numpy.ndarray` 23 | """ 24 | X = layer.fc['X'] = A 25 | 26 | cache_keys = ['h', 'hp', 'o_', 'o', 'i_', 'i', 'f_', 'f', 'g_', 'g', 'C_', 'Cp_', 'C'] 27 | layer.fc.update({k: np.zeros(layer.fs['h']) for k in cache_keys}) 28 | 29 | h = layer.fc['h'][:, 0] # Hidden state 30 | C_ = layer.fc['C_'][:, 0] # Memory state 31 | 32 | return X, h, C_ 33 | 34 | 35 | def lstm_forward(layer, A): 36 | """Forward propagate signal to next layer. 
37 | """ 38 | # (1) Initialize cache, hidden and memory states 39 | X, h, C_ = initialize_forward(layer, A) 40 | 41 | # Iterate over sequence steps 42 | for s in range(layer.d['s']): 43 | 44 | # (2s) Slice sequence (m, s, e) w.r.t to step 45 | X = layer.fc['X'][:, s] 46 | 47 | # (3s) Retrieve previous states 48 | hp = layer.fc['hp'][:, s] = h # (3.1s) Hidden 49 | Cp_ = layer.fc['Cp_'][:, s] = C_ # (3.2s) Memory 50 | 51 | # (4s) Activate forget gate 52 | f_ = layer.fc['f_'][:, s] = ( 53 | np.dot(X, layer.p['Uf']) 54 | + np.dot(hp, layer.p['Vf']) 55 | + layer.p['bf'] 56 | ) # (4.1s) 57 | 58 | f = layer.fc['f'][:, s] = layer.activate_forget(f_) # (4.2s) 59 | 60 | # (5s) Activate input gate 61 | i_ = layer.fc['i_'][:, s] = ( 62 | np.dot(X, layer.p['Ui']) 63 | + np.dot(hp, layer.p['Vi']) 64 | + layer.p['bi'] 65 | ) # (5.1s) 66 | 67 | i = layer.fc['i'][:, s] = layer.activate_input(i_) # (5.2s) 68 | 69 | # (6s) Activate candidate 70 | g_ = layer.fc['g_'][:, s] = ( 71 | np.dot(X, layer.p['Ug']) 72 | + np.dot(hp, layer.p['Vg']) 73 | + layer.p['bg'] 74 | ) # (6.1s) 75 | 76 | g = layer.fc['g'][:, s] = layer.activate_candidate(g_) # (6.2s) 77 | 78 | # (7s) Activate output gate 79 | o_ = layer.fc['o_'][:, s] = ( 80 | np.dot(X, layer.p['Uo']) 81 | + np.dot(hp, layer.p['Vo']) 82 | + layer.p['bo'] 83 | ) # (7.1s) 84 | 85 | o = layer.fc['o'][:, s] = layer.activate_output(o_) # (7.2s) 86 | 87 | # (8s) Compute current memory state 88 | C_ = layer.fc['C_'][:, s] = ( 89 | Cp_ * f 90 | + i * g 91 | ) # (8.1s) 92 | 93 | C = layer.fc['C'][:, s] = layer.activate(C_) # (8.2s) 94 | 95 | # (9s) Compute current hidden state 96 | h = layer.fc['h'][:, s] = o * C 97 | 98 | # Return the last hidden state or the full sequence 99 | A = layer.fc['h'] if layer.sequences else layer.fc['h'][:, -1] 100 | 101 | return A # To next layer 102 | -------------------------------------------------------------------------------- /epynn/lstm/models.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/lstm/models.py 2 | # Local application/library specific imports 3 | from epynn.commons.models import Layer 4 | from epynn.commons.maths import ( 5 | tanh, 6 | sigmoid, 7 | orthogonal, 8 | clip_gradient, 9 | activation_tune, 10 | ) 11 | from epynn.lstm.forward import lstm_forward 12 | from epynn.lstm.backward import lstm_backward 13 | from epynn.lstm.parameters import ( 14 | lstm_compute_shapes, 15 | lstm_initialize_parameters, 16 | lstm_compute_gradients, 17 | lstm_update_parameters 18 | ) 19 | 20 | 21 | class LSTM(Layer): 22 | """ 23 | Definition of a LSTM layer prototype. 24 | 25 | :param units: Number of unit cells in LSTM layer, defaults to 1. 26 | :type units: int, optional 27 | 28 | :param activate: Non-linear activation of hidden and memory states, defaults to `tanh`. 29 | :type activate: function, optional 30 | 31 | :param activate_output: Non-linear activation of output gate, defaults to `sigmoid`. 32 | :type activate_output: function, optional 33 | 34 | :param activate_candidate: Non-linear activation of candidate, defaults to `tanh`. 35 | :type activate_candidate: function, optional 36 | 37 | :param activate_input: Non-linear activation of input gate, defaults to `sigmoid`. 38 | :type activate_input: function, optional 39 | 40 | :param activate_forget: Non-linear activation of forget gate, defaults to `sigmoid`. 41 | :type activate_forget: function, optional 42 | 43 | :param initialization: Weight initialization function for LSTM layer, defaults to `orthogonal`. 
44 | :type initialization: function, optional 45 | 46 | :param clip_gradients: May prevent exploding/vanishing gradients, defaults to `False`. 47 | :type clip_gradients: bool, optional 48 | 49 | :param sequences: Whether to return only the last hidden state or the full sequence, defaults to `False`. 50 | :type sequences: bool, optional 51 | 52 | :param se_hPars: Layer hyper-parameters, defaults to `None` and inherits from model. 53 | :type se_hPars: dict[str, str or float] or NoneType, optional 54 | """ 55 | 56 | def __init__(self, 57 | unit_cells=1, 58 | activate=tanh, 59 | activate_output=sigmoid, 60 | activate_candidate=tanh, 61 | activate_input=sigmoid, 62 | activate_forget=sigmoid, 63 | initialization=orthogonal, 64 | clip_gradients=False, 65 | sequences=False, 66 | se_hPars=None): 67 | """Initialize instance variable attributes. 68 | """ 69 | super().__init__(se_hPars) 70 | 71 | self.d['u'] = unit_cells 72 | self.activate = activate 73 | self.activate_output = activate_output 74 | self.activate_candidate = activate_candidate 75 | self.activate_input = activate_input 76 | self.activate_forget = activate_forget 77 | self.initialization = initialization 78 | self.clip_gradients = clip_gradients 79 | self.sequences = sequences 80 | 81 | self.activation = { 82 | 'activate': self.activate.__name__, 83 | 'activate_output': self.activate_output.__name__, 84 | 'activate_candidate': self.activate_candidate.__name__, 85 | 'activate_input': self.activate_input.__name__, 86 | 'activate_forget': self.activate_forget.__name__, 87 | } 88 | self.trainable = True 89 | 90 | return None 91 | 92 | def compute_shapes(self, A): 93 | """Is a wrapper for :func:`epynn.lstm.parameters.lstm_compute_shapes()`. 94 | 95 | :param A: Output of forward propagation from previous layer. 96 | :type A: :class:`numpy.ndarray` 97 | """ 98 | lstm_compute_shapes(self, A) 99 | 100 | return None 101 | 102 | def initialize_parameters(self): 103 | """Is a wrapper for :func:`epynn.lstm.parameters.lstm_initialize_parameters()`. 104 | """ 105 | lstm_initialize_parameters(self) 106 | 107 | return None 108 | 109 | def forward(self, A): 110 | """Is a wrapper for :func:`epynn.lstm.forward.lstm_forward()`. 111 | 112 | :param A: Output of forward propagation from previous layer. 113 | :type A: :class:`numpy.ndarray` 114 | 115 | :return: Output of forward propagation for current layer. 116 | :rtype: :class:`numpy.ndarray` 117 | """ 118 | self.compute_shapes(A) 119 | activation_tune(self.se_hPars) 120 | A = self.fc['A'] = lstm_forward(self, A) 121 | self.update_shapes(self.fc, self.fs) 122 | 123 | return A 124 | 125 | def backward(self, dX): 126 | """Is a wrapper for :func:`epynn.lstm.backward.lstm_backward()`. 127 | 128 | :param dX: Output of backward propagation from next layer. 129 | :type dX: :class:`numpy.ndarray` 130 | 131 | :return: Output of backward propagation for current layer. 132 | :rtype: :class:`numpy.ndarray` 133 | """ 134 | activation_tune(self.se_hPars) 135 | dX = lstm_backward(self, dX) 136 | self.update_shapes(self.bc, self.bs) 137 | 138 | return dX 139 | 140 | def compute_gradients(self): 141 | """Is a wrapper for :func:`epynn.lstm.parameters.lstm_compute_gradients()`. 142 | """ 143 | lstm_compute_gradients(self) 144 | 145 | if self.clip_gradients: 146 | clip_gradient(self) 147 | 148 | return None 149 | 150 | def update_parameters(self): 151 | """Is a wrapper for :func:`epynn.lstm.parameters.lstm_update_parameters()`. 
152 | """ 153 | if self.trainable: 154 | lstm_update_parameters(self) 155 | 156 | return None 157 | -------------------------------------------------------------------------------- /epynn/lstm/parameters.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/lstm/parameters.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def lstm_compute_shapes(layer, A): 7 | """Compute forward shapes and dimensions from input for layer. 8 | """ 9 | X = A # Input of current layer 10 | 11 | layer.fs['X'] = X.shape # (m, s, e) 12 | 13 | layer.d['m'] = layer.fs['X'][0] # Number of samples (m) 14 | layer.d['s'] = layer.fs['X'][1] # Steps in sequence (s) 15 | layer.d['e'] = layer.fs['X'][2] # Elements per step (e) 16 | 17 | # Parameter Shapes Unit cells (u) 18 | eu = (layer.d['e'], layer.d['u']) # (v, u) 19 | uu = (layer.d['u'], layer.d['u']) # (u, u) 20 | u1 = (1, layer.d['u']) # (1, u) 21 | # Forget gate Input gate Candidate Output gate 22 | layer.fs['Uf'] = layer.fs['Ui'] = layer.fs['Ug'] = layer.fs['Uo'] = eu 23 | layer.fs['Vf'] = layer.fs['Vi'] = layer.fs['Vg'] = layer.fs['Vo'] = uu 24 | layer.fs['bf'] = layer.fs['bi'] = layer.fs['bg'] = layer.fs['bo'] = u1 25 | 26 | # Shape of hidden (h) and memory (C) state with respect to steps (s) 27 | layer.fs['h'] = layer.fs['C'] = (layer.d['m'], layer.d['s'], layer.d['u']) 28 | 29 | return None 30 | 31 | 32 | def lstm_initialize_parameters(layer): 33 | """Initialize trainable parameters from shapes for layer. 34 | """ 35 | # For linear activation of forget gate (f_) 36 | layer.p['Uf'] = layer.initialization(layer.fs['Uf'], rng=layer.np_rng) 37 | layer.p['Vf'] = layer.initialization(layer.fs['Vf'], rng=layer.np_rng) 38 | layer.p['bf'] = np.zeros(layer.fs['bf']) # dot(X, U) + dot(hp, V) + b 39 | 40 | # For linear activation of input gate (i_) 41 | layer.p['Ui'] = layer.initialization(layer.fs['Ui'], rng=layer.np_rng) 42 | layer.p['Vi'] = layer.initialization(layer.fs['Vi'], rng=layer.np_rng) 43 | layer.p['bi'] = np.zeros(layer.fs['bi']) # dot(X, U) + dot(hp, V) + b 44 | 45 | # For linear activation of candidate (g_) 46 | layer.p['Ug'] = layer.initialization(layer.fs['Ug'], rng=layer.np_rng) 47 | layer.p['Vg'] = layer.initialization(layer.fs['Vg'], rng=layer.np_rng) 48 | layer.p['bg'] = np.zeros(layer.fs['bg']) # dot(X, U) + dot(hp, V) + b 49 | 50 | # For linear activation of output gate (o_) 51 | layer.p['Uo'] = layer.initialization(layer.fs['Uo'], rng=layer.np_rng) 52 | layer.p['Vo'] = layer.initialization(layer.fs['Vo'], rng=layer.np_rng) 53 | layer.p['bo'] = np.zeros(layer.fs['bo']) # dot(X, U) + dot(hp, V) + b 54 | 55 | return None 56 | 57 | 58 | def lstm_compute_gradients(layer): 59 | """Compute gradients with respect to weight and bias for layer. 
60 | """ 61 | # Gradients initialization with respect to parameters 62 | for parameter in layer.p.keys(): 63 | gradient = 'd' + parameter 64 | layer.g[gradient] = np.zeros_like(layer.p[parameter]) 65 | 66 | # Reverse iteration over sequence steps 67 | for s in reversed(range(layer.d['s'])): 68 | 69 | X = layer.fc['X'][:, s] # Input for current step 70 | hp = layer.fc['hp'][:, s] # Previous hidden state 71 | 72 | # (1) Gradients of the loss with respect to U, V, b 73 | do_ = layer.bc['do_'][:, s] # Gradient w.r.t output gate o_ 74 | layer.g['dUo'] += np.dot(X.T, do_) # (1.1) dL/dUo 75 | layer.g['dVo'] += np.dot(hp.T, do_) # (1.2) dL/dVo 76 | layer.g['dbo'] += np.sum(do_, axis=0) # (1.3) dL/dbo 77 | 78 | # (2) Gradients of the loss with respect to U, V, b 79 | dg_ = layer.bc['dg_'][:, s] # Gradient w.r.t candidate g_ 80 | layer.g['dUg'] += np.dot(X.T, dg_) # (2.1) dL/dUg 81 | layer.g['dVg'] += np.dot(hp.T, dg_) # (2.2) dL/dVg 82 | layer.g['dbg'] += np.sum(dg_, axis=0) # (2.3) dL/dbg 83 | 84 | # (3) Gradients of the loss with respect to U, V, b 85 | di_ = layer.bc['di_'][:, s] # Gradient w.r.t input gate i_ 86 | layer.g['dUi'] += np.dot(X.T, di_) # (3.1) dL/dUi 87 | layer.g['dVi'] += np.dot(hp.T, di_) # (3.2) dL/dVi 88 | layer.g['dbi'] += np.sum(di_, axis=0) # (3.3) dL/dbi 89 | 90 | # (4) Gradients of the loss with respect to U, V, b 91 | df_ = layer.bc['df_'][:, s] # Gradient w.r.t forget gate f_ 92 | layer.g['dUf'] += np.dot(X.T, df_) # (4.1) dL/dUf 93 | layer.g['dVf'] += np.dot(hp.T, df_) # (4.2) dL/dVf 94 | layer.g['dbf'] += np.sum(df_, axis=0) # (4.3) dL/dbf 95 | 96 | return None 97 | 98 | 99 | def lstm_update_parameters(layer): 100 | """Update parameters from gradients for layer. 101 | """ 102 | for gradient in layer.g.keys(): 103 | parameter = gradient[1:] 104 | # Update is driven by learning rate and gradients 105 | layer.p[parameter] -= layer.lrate[layer.e] * layer.g[gradient] 106 | 107 | return None 108 | -------------------------------------------------------------------------------- /epynn/network/backward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/network/backward.py 2 | 3 | 4 | def model_backward(model, dA): 5 | """Backward propagate error gradients from output to input layer. 6 | """ 7 | # By convention 8 | dX = dA 9 | 10 | # Iterate over reversed layers 11 | for layer in reversed(model.layers): 12 | 13 | # Layer returns dL/dX (dX) - layer.bs, layer.bc 14 | dX = layer.backward(dX) 15 | 16 | # Update values in layer.g 17 | layer.compute_gradients() 18 | 19 | # Update values in layer.p 20 | layer.update_parameters() 21 | 22 | return None 23 | -------------------------------------------------------------------------------- /epynn/network/evaluate.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/network/evaluate.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | # Local application/library specific imports 6 | from epynn.commons.loss import loss_functions 7 | from epynn.commons.metrics import metrics_functions 8 | 9 | 10 | def model_evaluate(model): 11 | """Compute metrics including cost for model. 12 | 13 | Will evaluate training, testing and validation sets against metrics set in model.se_config. 14 | 15 | :param model: An instance of EpyNN network. 
16 | :type model: :class:`epynn.network.models.EpyNN` 17 | """ 18 | 19 | # Callback functions for metrics and loss 20 | metrics = metrics_functions() 21 | metrics.update(loss_functions()) 22 | 23 | dsets = model.embedding.dsets 24 | 25 | # Iterate over dsets [dtrain, dval, dtest] 26 | for k, dset in enumerate(dsets): 27 | 28 | # Check if one-hot encoding 29 | encoded = (dset.Y.shape[1] > 1) 30 | 31 | # Output probs 32 | dset.A = model.forward(dset.X) 33 | 34 | # Decisions 35 | dset.P = np.argmax(dset.A, axis=1) if encoded else np.around(dset.A) 36 | 37 | # Iterate over selected metrics 38 | for s in model.metrics.keys(): 39 | 40 | m = metrics[s](dset.Y, dset.A) 41 | 42 | # Metrics such as precision/recall returned as scalar 43 | if m.ndim == 0: 44 | pass 45 | # Others returned as per-sample 1D array 46 | else: 47 | m = np.mean(m) # To scalar 48 | 49 | # Save value for metrics (s) for dset (k) 50 | model.metrics[s][k].append(m) 51 | 52 | return None 53 | 54 | 55 | def batch_evaluate(model, Y, A): 56 | """Compute metrics for current batch. 57 | 58 | Will evaluate current batch against accuracy and training loss. 59 | 60 | :param model: An instance of EpyNN network. 61 | :type model: :class:`epynn.network.models.EpyNN` 62 | 63 | :param Y: True labels for batch samples. 64 | :type Y: :class:`numpy.ndarray` 65 | 66 | :param A: Output of forward propagation for batch. 67 | :type A: :class:`numpy.ndarray` 68 | """ 69 | metrics = metrics_functions() 70 | 71 | # Per sample 1D array to scalar 72 | accuracy = np.mean(metrics['accuracy'](Y, A)) 73 | 74 | # Per sample 1D array to scalar 75 | cost = np.mean(model.training_loss(Y, A)) 76 | 77 | return accuracy, cost 78 | -------------------------------------------------------------------------------- /epynn/network/forward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/network/forward.py 2 | 3 | 4 | def model_forward(model, X): 5 | """Forward propagate input data from input to output layer. 6 | """ 7 | # By convention 8 | A = X 9 | 10 | # Iterate over layers 11 | for layer in model.layers: 12 | 13 | # For learning rate schedule 14 | layer.e = model.e 15 | 16 | # Layer returns A - layer.fs, layer.fc 17 | A = layer.forward(A) 18 | 19 | return A # To derivative of loss function 20 | -------------------------------------------------------------------------------- /epynn/network/hyperparameters.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/network/hyperparameters.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | # Local application/library specific imports 6 | from epynn.commons.schedule import schedule_functions 7 | 8 | 9 | def model_hyperparameters(model): 10 | """Set hyperparameters for each layer in model. 11 | 12 | :param model: An instance of EpyNN network. 13 | :type model: :class:`epynn.network.models.EpyNN` 14 | """ 15 | # Iterate over layers 16 | for layer in model.layers: 17 | 18 | # If no se_hPars provided for layer, then assign se_hPars from model 19 | if not layer.se_hPars: 20 | layer.se_hPars = model.se_hPars 21 | else: 22 | pass 23 | 24 | return None 25 | 26 | 27 | def model_learning_rate(model): 28 | """Schedule learning rate for each layer in model. 29 | 30 | :param model: An instance of EpyNN network. 
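The decision rule in model_evaluate() switches on whether labels are one-hot encoded: argmax over classes when Y has more than one column, rounding of the single probability column otherwise. A standalone illustration:

import numpy as np

A_onehot = np.array([[0.8, 0.2], [0.3, 0.7]])   # output probabilities, 2 classes
P_onehot = np.argmax(A_onehot, axis=1)          # -> [0, 1]

A_single = np.array([[0.1], [0.9]])             # single probability column
P_single = np.around(A_single)                  # -> [[0.], [1.]]

print(P_onehot, P_single.ravel())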
31 | :type model: :class:`epynn.network.models.EpyNN` 32 | """ 33 | # Iterate over layers 34 | for layer in model.layers: 35 | layer.se_hPars, layer.lrate = schedule_lrate(layer.se_hPars, model.epochs) 36 | 37 | return None 38 | 39 | 40 | def schedule_lrate(se_hPars, training_epochs): 41 | """Learning rate schedule. 42 | 43 | :param se_hPars: Hyperparameters settings for layer. 44 | :type se_hPars: dict 45 | 46 | :param training_epochs: Number of training epochs for model. 47 | :type training_epochs: int 48 | 49 | :return: Updated settings for layer hyperparameters. 50 | :rtype: dict 51 | 52 | :return: Scheduled learning rate for layer. 53 | :rtype: list 54 | """ 55 | # Extract hyperparameters 56 | e = se_hPars['epochs'] = training_epochs 57 | lr = se_hPars['learning_rate'] 58 | d = se_hPars['cycle_descent'] 59 | k = se_hPars['decay_k'] 60 | 61 | # Compute dependent hyperparameters 62 | epc = se_hPars['cycle_epochs'] if se_hPars['cycle_epochs'] else training_epochs 63 | se_hPars['cycle_number'] = n = 1 if not epc else e // epc 64 | 65 | # Default decay 66 | if k == 0: 67 | # ~ 1% of initial lr for last epoch in cycle 68 | k = se_hPars['decay_k'] = 5 / epc 69 | 70 | # Compute learning rate schedule 71 | hPars = (e, lr, n, k, d, epc) 72 | lrate = schedule_functions(se_hPars['schedule'], hPars) 73 | 74 | return se_hPars, lrate 75 | -------------------------------------------------------------------------------- /epynn/network/initialize.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/network/initialize.py 2 | # Standard library imports 3 | import sys 4 | 5 | # Related third party imports 6 | from termcolor import cprint 7 | import numpy as np 8 | 9 | 10 | def model_initialize(model, params=True, end='\n'): 11 | """Initialize EpyNN network. 12 | 13 | :param model: An instance of EpyNN network. 14 | :type model: :class:`epynn.network.models.EpyNN` 15 | 16 | :param params: Layer parameters initialization, defaults to `True`. 17 | :type params: bool, optional 18 | 19 | :param end: Wether to print every line for steps or overwrite, default to `\\n`. 20 | :type end: str in ['\\n', '\\r'] 21 | 22 | :raises Exception: If any layer other than Dense was provided with softmax activation. See :func:`epynn.maths.softmax`. 
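The default decay in schedule_lrate() above sets decay_k = 5 / cycle_epochs so that roughly 1% of the initial learning rate remains at the end of a cycle. Assuming an exponential decay of the form lr * exp(-k * epoch) (the exact schedules live in epynn.commons.schedule and are not shown here), a standalone sketch of that rationale:

import numpy as np

lr, epochs = 0.1, 100
k = 5 / epochs                                   # default decay_k
lrate = [lr * np.exp(-k * e) for e in range(epochs)]

print(lrate[0], lrate[-1])                       # 0.1 ... ~0.0007, i.e. below 1% of the initial rate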
23 | """ 24 | # Retrieve sample batch 25 | model.embedding.training_batches(init=True) 26 | batch_dtrain = model.embedding.batch_dtrain 27 | batch = batch_dtrain[0] 28 | A = batch.X # Features 29 | Y = batch.Y # Labels 30 | 31 | cprint('{: <100}'.format('--- EpyNN Check --- '), attrs=['bold'], end=end) 32 | 33 | # Iterate over layers 34 | for layer in model.layers: 35 | 36 | # Layer instance attributes 37 | layer.check = False 38 | layer.name = layer.__class__.__name__ 39 | 40 | cprint('Layer: ' + layer.name, attrs=['bold'], end=end) 41 | 42 | # Store layer information in model summary 43 | model.network[id(layer)]['Layer'] = layer.name 44 | model.network[id(layer)]['Activation'] = layer.activation 45 | model.network[id(layer)]['Dimensions'] = layer.d 46 | 47 | # Dense uses epynn.maths.hadamard to handle softmax derivative 48 | if 'softmax' in layer.activation.values() and layer.name != 'Dense': 49 | raise Exception('Softmax can not be used with %s, only with Dense' % layer.name) 50 | 51 | # Test layer.compute_shapes() method 52 | cprint('compute_shapes: ' + layer.name, 'green', attrs=['bold'], end=end) 53 | layer.compute_shapes(A) 54 | 55 | # Store forward shapes in model summary 56 | model.network[id(layer)]['FW_Shapes'] = layer.fs 57 | 58 | # Initialize trainable parameters 59 | if params: 60 | cprint('initialize_parameters: ' + layer.name, 'green', attrs=['bold'], end=end) 61 | layer.initialize_parameters() 62 | 63 | # Test layer.forward() method 64 | cprint('forward: ' + layer.name, 'green', attrs=['bold'], end=end) 65 | A = layer.forward(A) 66 | 67 | # Output shape 68 | print('shape:', layer.fs['A'], end=end) 69 | 70 | # Store updated forward shapes in model summary 71 | model.network[id(layer)]['FW_Shapes'] = layer.fs 72 | 73 | # Clear check 74 | delattr(layer, 'check') 75 | 76 | # Compute derivative of loss function 77 | dX = dA = model.training_loss(Y, A, deriv=True) 78 | 79 | # Iterate over reversed layers 80 | for layer in reversed(model.layers): 81 | 82 | # Set check attribute for layer 83 | layer.check = False 84 | 85 | cprint('Layer: ' + layer.name, attrs=['bold'], end=end) 86 | 87 | # Test layer.backward() method 88 | cprint('backward: ' + layer.name, 'cyan', attrs=['bold'], end=end) 89 | dX = layer.backward(dX) 90 | 91 | # Output shape 92 | print('shape:', layer.bs['dX'], end=end) 93 | 94 | # Store backward shapes in model summary 95 | model.network[id(layer)]['BW_Shapes'] = layer.bs 96 | 97 | # Test layer.compute_gradients() method 98 | cprint('compute_gradients: ' + layer.name, 'cyan', attrs=['bold'], end=end) 99 | layer.compute_gradients() 100 | 101 | # Clear check 102 | delattr(layer, 'check') 103 | 104 | cprint('{: <100}'.format('--- EpyNN Check OK! --- '), attrs=['bold'], end=end) 105 | 106 | # Initialize current epoch to zero 107 | model.e = 0 108 | 109 | return None 110 | 111 | 112 | def model_assign_seeds(model): 113 | """Seed model and layers with independant pseudo-random number generators. 114 | 115 | Model is seeded from user-input. Layers are seeded by incrementing the 116 | input by one in order to not generate same numbers for all objects 117 | 118 | :param model: An instance of EpyNN network. 
119 | :type model: :class:`epynn.network.models.EpyNN` 120 | """ 121 | seed = model.seed 122 | 123 | # If seed is not defined, seeding is random 124 | model.np_rng = np.random.default_rng(seed=seed) 125 | 126 | # Iterate over layers 127 | for layer in model.layers: 128 | 129 | # If seed is defined 130 | if seed: 131 | # We do not want the same seed for every object 132 | seed += 1 133 | 134 | # Seed layer 135 | layer.o['seed'] = seed 136 | layer.np_rng = np.random.default_rng(seed=layer.o['seed']) 137 | 138 | return None 139 | 140 | 141 | def model_initialize_exceptions(model, trace): 142 | """Handle error in model initialization and show logs. 143 | 144 | :param model: An instance of EpyNN network. 145 | :type model: :class:`epynn.network.models.EpyNN` 146 | 147 | :param trace: Traceback of fatal error. 148 | :type trace: traceback object 149 | """ 150 | cprint('\n/!\\ Initialization of EpyNN model failed - debug', 'red', attrs=['bold']) 151 | 152 | try: 153 | # Identify faulty layer 154 | layer = [layer for layer in model.layers if hasattr(layer, 'check')][0] 155 | 156 | # Update shapes from existing caches 157 | layer.update_shapes(layer.fc, layer.fs) 158 | layer.update_shapes(layer.bc, layer.bs) 159 | 160 | # Report debug information for faulty layer 161 | cprint('%s layer: ' % layer.name, 'red', attrs=['bold']) 162 | 163 | cprint('Known dimensions', 'white', attrs=['bold']) 164 | print(', '.join([k + ': ' + str(v) for k, v in layer.d.items()])) 165 | 166 | cprint('Known forward shapes', 'green', attrs=['bold']) 167 | print('\n'.join([k + ': ' + str(v) for k, v in layer.fs.items()])) 168 | 169 | cprint('Known backward shape', 'cyan', attrs=['bold']) 170 | print('\n'.join([k + ': ' + str(v) for k, v in layer.bs.items()])) 171 | 172 | except: 173 | pass 174 | 175 | # Report traceback of error and exit program 176 | cprint('System trace', 'red', attrs=['bold']) 177 | print(trace) 178 | 179 | sys.exit() 180 | -------------------------------------------------------------------------------- /epynn/network/report.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/network/report.py 2 | # Standard library imports 3 | import time 4 | 5 | # Related third party imports 6 | from tabulate import tabulate 7 | from termcolor import cprint 8 | 9 | # Local application/library specific imports 10 | from epynn.network.evaluate import batch_evaluate 11 | from epynn.commons.logs import ( 12 | current_logs, 13 | dsets_samples_logs, 14 | dsets_labels_logs, 15 | headers_logs, 16 | initialize_logs_print, 17 | layers_lrate_logs, 18 | layers_others_logs, 19 | network_logs, 20 | start_counter 21 | ) 22 | 23 | 24 | def model_report(model): 25 | """Report selected metrics for datasets at current epoch. 26 | 27 | :param model: An instance of EpyNN network object. 
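The seeding scheme in model_assign_seeds() gives the model and every layer an independent generator, incrementing the user seed so that no two objects draw identical streams. A standalone sketch:

import numpy as np

seed = 1
model_rng = np.random.default_rng(seed=seed)     # model-level generator

layer_rngs = []
for _ in range(3):                               # e.g. three layers
    if seed:
        seed += 1                                # 2, 3, 4 ... one distinct seed per layer
    layer_rngs.append(np.random.default_rng(seed=seed))

print([rng.integers(0, 10) for rng in layer_rngs])   # distinct but reproducible draws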
28 | :type model: :class:`epynn.network.models.EpyNN` 29 | """ 30 | # You may edit the colorscheme to fulfill your preference 31 | colors = [ 32 | 'white', 33 | 'green', 34 | 'red', 35 | 'magenta', 36 | 'cyan', 37 | 'yellow', 38 | 'blue', 39 | 'grey', 40 | ] 41 | 42 | # Rows in tabular report excluding headers 43 | size_table = 11 44 | 45 | # Initialize list of rows with headers 46 | if model.e == 0 or not hasattr(model, 'current_logs'): 47 | model.current_logs = [headers_logs(model, colors)] 48 | 49 | # Check if last epoch 50 | eLast = (model.e == model.epochs - 1) 51 | 52 | # Append row one every verboseth epoch or if last epoch 53 | if model.e % model.verbose == 0 or eLast: 54 | model.current_logs.append(current_logs(model, colors)) 55 | 56 | # Report on terminal 57 | if len(model.current_logs) == size_table + 1 or eLast: 58 | 59 | logs = tabulate(model.current_logs, 60 | headers="firstrow", 61 | numalign="center", 62 | stralign='center', 63 | tablefmt="pretty", 64 | ) 65 | 66 | print('\n') 67 | print (logs, flush=True) 68 | 69 | # Clear-up 70 | del model.current_logs 71 | 72 | return None 73 | 74 | 75 | def single_batch_report(model, batch, A): 76 | """Report accuracy and cost for current batch. 77 | 78 | :param model: An instance of EpyNN network. 79 | :type model: :class:`epynn.network.models.EpyNN` 80 | 81 | :param batch: An instance of batch dataSet. 82 | :type batch: :class:`epynn.commons.models.dataSet` 83 | 84 | :param A: Output of forward propagation for batch. 85 | :type A: :class:`numpy.ndarray` 86 | """ 87 | current = time.time() 88 | 89 | # Total elapsed time 90 | elapsed_time = round(current - model.ts, 2) 91 | 92 | # Time for one epoch based on current batch 93 | epoch_time = (current - model.cts) * len(model.embedding.batch_dtrain) 94 | model.cts = current 95 | 96 | # Epochs per second 97 | rate = round((model.e + 1) / (elapsed_time + 1e-16), 3) 98 | 99 | # Time until completion 100 | ttc = round((model.epochs - model.e + 1) / (rate + 1e-16)) 101 | 102 | # Accuracy and cost 103 | accuracy, cost = batch_evaluate(model, batch.Y, A) 104 | accuracy = round(accuracy, 3) 105 | cost = round(cost, 5) 106 | 107 | # Current batch numerical identifier 108 | batch_counter = batch.name + '/' + model.embedding.batch_dtrain[-1].name 109 | 110 | # Format and print data 111 | rate = '{:.2e}'.format(rate) 112 | 113 | log = ('Epoch %s - Batch %s - Accuracy: %s Cost: %s - TIME: %ss RATE: %se/s TTC: %ss' 114 | % (model.e, batch_counter, accuracy, cost, elapsed_time, rate, ttc)) 115 | 116 | cprint('{: <100}'.format(log), 'white', attrs=['bold'], end='\r', flush=True) 117 | 118 | return None 119 | 120 | 121 | def initialize_model_report(model, timeout): 122 | """Report exhaustive initialization logs for datasets, 123 | model architecture and shapes, layers hyperparameters. 124 | 125 | :param model: An instance of EpyNN network. 126 | :type model: :class:`epynn.network.models.EpyNN` 127 | 128 | :param timeout: Time to hold on initialization logs. 
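The tabular report assembled in model_report() treats the first accumulated row as the header. A minimal sketch of that tabulate call; the example columns below are invented for illustration, since the real rows come from headers_logs() and current_logs(), which are not shown here:

from tabulate import tabulate

rows = [
    ['epoch', 'lrate', 'accuracy (train/val)'],
    ['0', '1.0e-02', '0.55 / 0.52'],
    ['10', '6.1e-03', '0.83 / 0.79'],
]

print(tabulate(rows, headers="firstrow",
               numalign="center", stralign="center", tablefmt="pretty"))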
129 | :type timeout: int 130 | """ 131 | model.init_logs = [] 132 | 133 | # Dataset initialization logs 134 | dsets = model.embedding.dsets 135 | se_dataset = model.embedding.se_dataset 136 | 137 | model.init_logs.append(dsets_samples_logs(dsets, se_dataset)) 138 | model.init_logs.append(dsets_labels_logs(dsets)) 139 | 140 | # Model architecture and shapes initialization logs 141 | network = model.network 142 | 143 | model.init_logs.append(network_logs(network)) 144 | 145 | # Model and layer hyperparameters initialization logs 146 | layers = model.layers 147 | 148 | model.init_logs.append(layers_lrate_logs(layers)) 149 | model.init_logs.append(layers_others_logs(layers)) 150 | 151 | initialize_logs_print(model) 152 | 153 | start_counter(timeout) 154 | 155 | return None 156 | -------------------------------------------------------------------------------- /epynn/network/training.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/network/train.py 2 | 3 | 4 | def model_training(model): 5 | """Perform the training of the Neural Network. 6 | 7 | :param model: An instance of EpyNN network. 8 | :type model: :class:`epynn.network.models.EpyNN` 9 | """ 10 | # Iterate over training epochs 11 | for model.e in range(model.e, model.epochs): 12 | 13 | # Shuffle dtrain and prepare new batches 14 | model.embedding.training_batches() 15 | 16 | # Iterate over training batches 17 | for batch in model.embedding.batch_dtrain: 18 | 19 | # Pass through every layer.forward() methods 20 | A = model.forward(batch.X) 21 | 22 | # Compute derivative of loss 23 | dA = model.training_loss(batch.Y, A, deriv=True) 24 | 25 | # Pass through every layer.backward() methods 26 | model.backward(dA) 27 | 28 | # Accuracy and cost for batch 29 | model.batch_report(batch, A) 30 | 31 | # Selected metrics and costs for dsets 32 | model.evaluate() 33 | 34 | # Tabular report for dsets 35 | model.report() 36 | 37 | return None 38 | -------------------------------------------------------------------------------- /epynn/pooling/backward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/pool/backward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_backward(layer, dX): 7 | """Backward cache initialization. 8 | 9 | :param layer: An instance of pooling layer. 10 | :type layer: :class:`epynn.pooling.models.Pooling` 11 | 12 | :param dX: Output of backward propagation from next layer. 13 | :type dX: :class:`numpy.ndarray` 14 | 15 | :return: Input of backward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | """ 18 | dA = layer.bc['dA'] = dX 19 | 20 | return dA 21 | 22 | 23 | def pooling_backward(layer, dX): 24 | """Backward propagate error gradients to previous layer. 
25 | """ 26 | # (1) Initialize cache 27 | dA = initialize_backward(layer, dX) # (m, oh, ow, d) 28 | 29 | # (2) Restore pooling block axes 30 | dZ = dA 31 | dZ = np.expand_dims(dZ, axis=3) 32 | dZ = np.expand_dims(dZ, axis=3) 33 | # (m, oh, ow, d) -> 34 | # (m, oh, ow, 1, d) -> 35 | # (m, oh, ow, 1, 1, d) 36 | 37 | # (3) Initialize backward output dL/dX 38 | dX = np.zeros_like(layer.fc['X']) # (m, h, w, d) 39 | 40 | # Iterate over forward output height 41 | for oh in range(layer.d['oh']): 42 | 43 | hs = oh * layer.d['sh'] 44 | he = hs + layer.d['ph'] 45 | 46 | # Iterate over forward output width 47 | for ow in range(layer.d['ow']): 48 | 49 | ws = ow * layer.d['sw'] 50 | we = ws + layer.d['pw'] 51 | 52 | # (4hw) Retrieve input block 53 | Xb = layer.fc['Xb'][:, oh, ow, :, :, :] 54 | # (m, oh, ow, ph, pw, d) - Xb (array of blocks) 55 | # (m, ph, pw, d) - Xb (single block) 56 | 57 | # (5hw) Retrieve pooled value and restore block shape 58 | Zb = layer.fc['Z'][:, oh:oh+1, ow:ow+1, :] 59 | Zb = np.repeat(Zb, layer.d['ph'], axis=1) 60 | Zb = np.repeat(Zb, layer.d['pw'], axis=2) 61 | # (m, oh, ow, d) - Z 62 | # (m, 1, 1, d) - Zb -> np.repeat(Zb, pw, axis=1) 63 | # (m, ph, 1, d) -> np.repeat(Zb, pw, axis=2) 64 | # (m, ph, pw, d) 65 | 66 | # (6hw) Match pooled value in Zb against Xb 67 | mask = (Zb == Xb) 68 | 69 | # (7hw) Retrieve gradient w.r.t Z and restore block shape 70 | dZb = dZ[:, oh, ow, :] 71 | dZb = np.repeat(dZb, layer.d['ph'], 1) 72 | dZb = np.repeat(dZb, layer.d['pw'], 2) 73 | # (m, oh, ow, 1, 1, d) - dZ 74 | # (m, 1, 1, d) - dZb -> np.repeat(dZb, ph, axis=1) 75 | # (m, ph, 1, d) -> np.repeat(dZb, pw, axis=2) 76 | # (m, ph, pw, d) 77 | 78 | # (8hw) Keep dXb values for coordinates where Zb = Xb (mask) 79 | dXb = dZb * mask 80 | 81 | # (9hw) Gradient of the loss w.r.t Xb 82 | dX[:, hs:he, ws:we, :] += dXb 83 | # (m, ph, pw, d) - dX[:, hs:he, ws:we, :] 84 | # (m, ph, pw, d) - dXb 85 | 86 | layer.bc['dX'] = dX 87 | 88 | return dX 89 | -------------------------------------------------------------------------------- /epynn/pooling/forward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/pooling/forward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_forward(layer, A): 7 | """Forward cache initialization. 8 | 9 | :param layer: An instance of pooling layer. 10 | :type layer: :class:`epynn.pooling.models.Pooling` 11 | 12 | :param A: Output of forward propagation from previous layer. 13 | :type A: :class:`numpy.ndarray` 14 | 15 | :return: Input of forward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | 18 | :return: Input of forward propagation for current layer. 19 | :rtype: :class:`numpy.ndarray` 20 | 21 | :return: Input blocks of forward propagation for current layer. 22 | :rtype: :class:`numpy.ndarray` 23 | """ 24 | X = layer.fc['X'] = A 25 | 26 | return X 27 | 28 | 29 | def pooling_forward(layer, A): 30 | """Forward propagate signal to next layer. 31 | """ 32 | # (1) Initialize cache 33 | X = initialize_forward(layer, A) 34 | 35 | # (2) Slice input w.r.t. 
pool size (ph, pw) and strides (sh, sw) 36 | Xb = np.array([[X[ :, h:h + layer.d['ph'], w:w + layer.d['pw'], :] 37 | # Inner loop 38 | # (m, h, w, d) -> 39 | # (ow, m, h, pw, d) 40 | for w in range(layer.d['w'] - layer.d['pw'] + 1) 41 | if w % layer.d['sw'] == 0] 42 | # Outer loop 43 | # (ow, m, h, pw, d) -> 44 | # (oh, ow, m, ph, pw, d) 45 | for h in range(layer.d['h'] - layer.d['ph'] + 1) 46 | if h % layer.d['sh'] == 0]) 47 | 48 | # (3) Bring back m along axis 0 49 | Xb = layer.fc['Xb'] = np.moveaxis(Xb, 2, 0) 50 | # (oh, ow, m, ph, pw, d) -> 51 | # (m, oh, ow, ph, pw, d) 52 | 53 | # (4) Apply pooling operation on blocks 54 | Z = layer.pool(Xb, axis=(4, 3)) 55 | # (m, oh, ow, ph, pw, d) - Xb 56 | # (m, oh, ow, ph, d) - layer.pool(Xb, axis=4) 57 | # (m, oh, ow, d) - layer.pool(Xb, axis=(4, 3)) 58 | 59 | A = layer.fc['A'] = layer.fc['Z'] = Z 60 | 61 | return A # To next layer 62 | -------------------------------------------------------------------------------- /epynn/pooling/models.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/pool/models.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | # Local application/library specific imports 6 | from epynn.commons.models import Layer 7 | from epynn.pooling.forward import pooling_forward 8 | from epynn.pooling.backward import pooling_backward 9 | from epynn.pooling.parameters import ( 10 | pooling_compute_shapes, 11 | pooling_initialize_parameters, 12 | pooling_compute_gradients, 13 | pooling_update_parameters 14 | ) 15 | 16 | 17 | class Pooling(Layer): 18 | """ 19 | Definition of a pooling layer prototype. 20 | 21 | :param pool_size: Height and width for pooling window, defaults to `(2, 2)`. 22 | :type pool_size: int or tuple[int], optional 23 | 24 | :param strides: Height and width to shift the pooling window by, defaults to `None` which equals `pool_size`. 25 | :type strides: int or tuple[int], optional 26 | 27 | :param pool: Pooling activation of units, defaults to :func:`np.max`. Use one of min or max pooling. 28 | :type pool: function, optional 29 | """ 30 | 31 | def __init__(self, 32 | pool_size=(2, 2), 33 | strides=None, 34 | pool=np.max): 35 | 36 | super().__init__() 37 | 38 | pool_size = pool_size if isinstance(pool_size, tuple) else (pool_size, pool_size) 39 | strides = strides if isinstance(strides, tuple) else pool_size 40 | 41 | self.d['ph'], self.d['pw'] = pool_size 42 | self.d['sh'], self.d['sw'] = strides 43 | self.pool = pool 44 | 45 | self.activation = { 'pool': self.pool.__name__ } 46 | self.trainable = False 47 | 48 | return None 49 | 50 | def compute_shapes(self, A): 51 | """Wrapper for :func:`epynn.pooling.parameters.pooling_compute_shapes()`. 52 | 53 | :param A: Output of forward propagation from previous layer. 54 | :type A: :class:`numpy.ndarray` 55 | """ 56 | pooling_compute_shapes(self, A) 57 | 58 | return None 59 | 60 | def initialize_parameters(self): 61 | """Wrapper for :func:`epynn.pooling.parameters.initialize_parameters()`. 62 | """ 63 | pooling_initialize_parameters(self) 64 | 65 | return None 66 | 67 | def forward(self, A): 68 | """Wrapper for :func:`epynn.pooling.forward.pooling_forward()`. 69 | 70 | :param A: Output of forward propagation from previous layer. 71 | :type A: :class:`numpy.ndarray` 72 | 73 | :return: Output of forward propagation for current layer. 
74 | :rtype: :class:`numpy.ndarray` 75 | """ 76 | self.compute_shapes(A) 77 | A = pooling_forward(self, A) 78 | self.update_shapes(self.fc, self.fs) 79 | 80 | return A 81 | 82 | def backward(self, dX): 83 | """Wrapper for :func:`epynn.pooling.backward.pooling_backward()`. 84 | 85 | :param dX: Output of backward propagation from next layer. 86 | :type dX: :class:`numpy.ndarray` 87 | 88 | :return: Output of backward propagation for current layer. 89 | :rtype: :class:`numpy.ndarray` 90 | """ 91 | dX = pooling_backward(self, dX) 92 | self.update_shapes(self.bc, self.bs) 93 | 94 | return dX 95 | 96 | def compute_gradients(self): 97 | """Wrapper for :func:`epynn.pooling.parameters.pooling_compute_gradients()`. Dummy method, there are no gradients to compute in layer. 98 | """ 99 | pooling_compute_gradients(self) 100 | 101 | return None 102 | 103 | def update_parameters(self): 104 | """Wrapper for :func:`epynn.pooling.parameters.pooling_update_parameters()`. Dummy method, there are no parameters to update in layer. 105 | """ 106 | if self.trainable: 107 | pooling_update_parameters(self) 108 | 109 | return None 110 | -------------------------------------------------------------------------------- /epynn/pooling/parameters.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/pooling/parameters.py 2 | # Standard library imports 3 | import math 4 | 5 | 6 | def pooling_compute_shapes(layer, A): 7 | """Compute forward shapes and dimensions from input for layer. 8 | """ 9 | X = A # Input of current layer 10 | 11 | layer.fs['X'] = X.shape # (m, h, w, d) 12 | 13 | layer.d['m'] = layer.fs['X'][0] # Number of samples (m) 14 | layer.d['h'] = layer.fs['X'][1] # Height of features map (h) 15 | layer.d['w'] = layer.fs['X'][2] # Width of features map (w) 16 | layer.d['d'] = layer.fs['X'][3] # Depth of features map (d) 17 | 18 | # Output height (oh) and width (ow) 19 | layer.d['oh'] = math.floor((layer.d['h']-layer.d['ph']) / layer.d['sh']) + 1 20 | layer.d['ow'] = math.floor((layer.d['w']-layer.d['pw']) / layer.d['sw']) + 1 21 | 22 | return None 23 | 24 | 25 | def pooling_initialize_parameters(layer): 26 | """Initialize parameters from shapes for layer. 27 | """ 28 | # No parameters to initialize for Pooling layer 29 | 30 | return None 31 | 32 | 33 | def pooling_compute_gradients(layer): 34 | """Compute gradients with respect to weight and bias for layer. 35 | """ 36 | # No gradients to compute for Pooling layer 37 | 38 | return None 39 | 40 | 41 | def pooling_update_parameters(layer): 42 | """Update parameters from gradients for layer. 43 | """ 44 | # No parameters to update for Pooling layer 45 | 46 | return None 47 | -------------------------------------------------------------------------------- /epynn/rnn/backward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/rnn/backward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_backward(layer, dX): 7 | """Backward cache initialization. 8 | 9 | :param layer: An instance of RNN layer. 10 | :type layer: :class:`epynn.rnn.models.RNN` 11 | 12 | :param dX: Output of backward propagation from next layer. 13 | :type dX: :class:`numpy.ndarray` 14 | 15 | :return: Input of backward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | 18 | :return: Next hidden state initialized with zeros. 
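A worked example of the output dimensions computed in pooling_compute_shapes() above: a 28x28 feature map with a 2x2 window and stride 2 pools down to 14x14.

import math

h, w = 28, 28          # input height and width
ph, pw = 2, 2          # pool window
sh, sw = 2, 2          # strides (default to the window size)

oh = math.floor((h - ph) / sh) + 1    # 14
ow = math.floor((w - pw) / sw) + 1    # 14
print(oh, ow)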
19 | :rtype: :class:`numpy.ndarray` 20 | """ 21 | if layer.sequences: 22 | dA = dX # Full length sequence 23 | elif not layer.sequences: 24 | dA = np.zeros(layer.fs['h']) # Empty full length sequence 25 | dA[:, -1] = dX # Assign to last index 26 | 27 | cache_keys = ['dh_', 'dh', 'dhn'] 28 | layer.bc.update({k: np.zeros(layer.fs['h']) for k in cache_keys}) 29 | 30 | layer.bc['dA'] = dA 31 | layer.bc['dX'] = np.zeros(layer.fs['X']) # To previous layer 32 | 33 | dh = layer.bc['dh'][:, 0] # To previous step 34 | 35 | return dA, dh 36 | 37 | 38 | def rnn_backward(layer, dX): 39 | """Backward propagate error gradients to previous layer. 40 | """ 41 | # (1) Initialize cache and hidden state gradient 42 | dA, dh = initialize_backward(layer, dX) 43 | 44 | # Reverse iteration over sequence steps 45 | for s in reversed(range(layer.d['s'])): 46 | 47 | # (2s) Slice sequence (m, s, u) w.r.t step 48 | dA = layer.bc['dA'][:, s] # dL/dA 49 | 50 | # (3s) Gradient of the loss w.r.t. next hidden state 51 | dhn = layer.bc['dhn'][:, s] = dh # dL/dhn 52 | 53 | # (4s) Gradient of the loss w.r.t hidden state h_ 54 | dh_ = layer.bc['dh_'][:, s] = ( 55 | (dA + dhn) 56 | * layer.activate(layer.fc['h_'][:, s], deriv=True) 57 | ) # dL/dh_ - To parameters gradients 58 | 59 | # (5s) Gradient of the loss w.r.t hidden state h 60 | dh = layer.bc['dh'][:, s] = ( 61 | np.dot(dh_, layer.p['V'].T) 62 | ) # dL/dh - To previous step 63 | 64 | # (6s) Gradient of the loss w.r.t X 65 | dX = layer.bc['dX'][:, s] = ( 66 | np.dot(dh_, layer.p['U'].T) 67 | ) # dL/dX - To previous layer 68 | 69 | dX = layer.bc['dX'] 70 | 71 | return dX # To previous layer 72 | -------------------------------------------------------------------------------- /epynn/rnn/forward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/rnn/forward.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def initialize_forward(layer, A): 7 | """Forward cache initialization. 8 | 9 | :param layer: An instance of RNN layer. 10 | :type layer: :class:`epynn.rnn.models.RNN` 11 | 12 | :param A: Output of forward propagation from previous layer. 13 | :type A: :class:`numpy.ndarray` 14 | 15 | :return: Input of forward propagation for current layer. 16 | :rtype: :class:`numpy.ndarray` 17 | 18 | :return: Previous hidden state initialized with zeros. 19 | :rtype: :class:`numpy.ndarray` 20 | """ 21 | X = layer.fc['X'] = A 22 | 23 | cache_keys = ['h_', 'h', 'hp'] 24 | layer.fc.update({k: np.zeros(layer.fs['h']) for k in cache_keys}) 25 | 26 | h = layer.fc['h'][:, 0] # Hidden state 27 | 28 | return X, h 29 | 30 | 31 | def rnn_forward(layer, A): 32 | """Forward propagate signal to next layer. 
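For tanh activation, the derivative used in step (4s) of rnn_backward() is activate(h_, deriv=True) = 1 - tanh(h_)**2. A standalone NumPy sketch of one backward step, outside the library:

import numpy as np

m, e, u = 4, 3, 2                      # toy sizes
rng = np.random.default_rng(0)

X  = rng.normal(size=(m, e))           # input at step s
hp = rng.normal(size=(m, u))           # previous hidden state
U, V = rng.normal(size=(e, u)), rng.normal(size=(u, u))

h_ = X @ U + hp @ V                    # linear hidden state (bias omitted)

dA, dhn = rng.normal(size=(m, u)), rng.normal(size=(m, u))
dh_ = (dA + dhn) * (1.0 - np.tanh(h_) ** 2)   # dL/dh_  (4s)
dh  = dh_ @ V.T                                # to previous step  (5s)
dX  = dh_ @ U.T                                # to previous layer (6s)

print(dh.shape, dX.shape)              # (4, 2) (4, 3)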
33 | """ 34 | # (1) Initialize cache and hidden state 35 | X, h = initialize_forward(layer, A) 36 | 37 | # Iterate over sequence steps 38 | for s in range(layer.d['s']): 39 | 40 | # (2s) Slice sequence (m, s, e) with respect to step 41 | X = layer.fc['X'][:, s] 42 | 43 | # (3s) Retrieve previous hidden state 44 | hp = layer.fc['hp'][:, s] = h 45 | 46 | # (4s) Activate current hidden state 47 | h_ = layer.fc['h_'][:, s] = ( 48 | np.dot(X, layer.p['U']) 49 | + np.dot(hp, layer.p['V']) 50 | + layer.p['b'] 51 | ) # (4.1s) Linear 52 | 53 | h = layer.fc['h'][:, s] = layer.activate(h_) # (4.2s) Non-linear 54 | 55 | # Return the last hidden state or the full sequence 56 | A = layer.fc['h'] if layer.sequences else layer.fc['h'][:, -1] 57 | 58 | return A # To next layer 59 | -------------------------------------------------------------------------------- /epynn/rnn/models.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/rnn/models.py 2 | # Local application/library specific imports 3 | from epynn.commons.models import Layer 4 | from epynn.commons.maths import ( 5 | tanh, 6 | xavier, 7 | clip_gradient, 8 | activation_tune, 9 | ) 10 | from epynn.rnn.forward import rnn_forward 11 | from epynn.rnn.backward import rnn_backward 12 | from epynn.rnn.parameters import ( 13 | rnn_compute_shapes, 14 | rnn_initialize_parameters, 15 | rnn_compute_gradients, 16 | rnn_update_parameters 17 | ) 18 | 19 | 20 | class RNN(Layer): 21 | """ 22 | Definition of a RNN layer prototype. 23 | 24 | :param units: Number of unit cells in RNN layer, defaults to 1. 25 | :type units: int, optional 26 | 27 | :param activate: Non-linear activation of hidden state, defaults to `tanh`. 28 | :type activate: function, optional 29 | 30 | :param initialization: Weight initialization function for RNN layer, defaults to `xavier`. 31 | :type initialization: function, optional 32 | 33 | :param clip_gradients: May prevent exploding/vanishing gradients, defaults to `False`. 34 | :type clip_gradients: bool, optional 35 | 36 | :param sequences: Whether to return only the last hidden state or the full sequence, defaults to `False`. 37 | :type sequences: bool, optional 38 | 39 | :param se_hPars: Layer hyper-parameters, defaults to `None` and inherits from model. 40 | :type se_hPars: dict[str, str or float] or NoneType, optional 41 | """ 42 | 43 | def __init__(self, 44 | unit_cells=1, 45 | activate=tanh, 46 | initialization=xavier, 47 | clip_gradients=True, 48 | sequences=False, 49 | se_hPars=None): 50 | """Initialize instance variable attributes. 51 | """ 52 | super().__init__(se_hPars) 53 | 54 | self.d['u'] = unit_cells 55 | self.activate = activate 56 | self.initialization = initialization 57 | self.clip_gradients = clip_gradients 58 | self.sequences = sequences 59 | 60 | self.activation = { 'activate': self.activate.__name__ } 61 | self.trainable = True 62 | 63 | return None 64 | 65 | def compute_shapes(self, A): 66 | """Wrapper for :func:`epynn.rnn.parameters.rnn_compute_shapes()`. 67 | 68 | :param A: Output of forward propagation from previous layer. 69 | :type A: :class:`numpy.ndarray` 70 | """ 71 | rnn_compute_shapes(self, A) 72 | 73 | return None 74 | 75 | def initialize_parameters(self): 76 | """Wrapper for :func:`epynn.rnn.parameters.rnn_initialize_parameters()`. 77 | """ 78 | rnn_initialize_parameters(self) 79 | 80 | return None 81 | 82 | def forward(self, A): 83 | """Wrapper for :func:`epynn.rnn.forward.rnn_forward()`. 
84 | 85 | :param A: Output of forward propagation from previous layer. 86 | :type A: :class:`numpy.ndarray` 87 | 88 | :return: Output of forward propagation for current layer. 89 | :rtype: :class:`numpy.ndarray` 90 | """ 91 | self.compute_shapes(A) 92 | activation_tune(self.se_hPars) 93 | A = self.fc['A'] = rnn_forward(self, A) 94 | self.update_shapes(self.fc, self.fs) 95 | 96 | return A 97 | 98 | def backward(self, dX): 99 | """Wrapper for :func:`epynn.rnn.backward.rnn_backward()`. 100 | 101 | :param dX: Output of backward propagation from next layer. 102 | :type dX: :class:`numpy.ndarray` 103 | 104 | :return: Output of backward propagation for current layer. 105 | :rtype: :class:`numpy.ndarray` 106 | """ 107 | activation_tune(self.se_hPars) 108 | dX = rnn_backward(self, dX) 109 | self.update_shapes(self.bc, self.bs) 110 | 111 | return dX 112 | 113 | def compute_gradients(self): 114 | """Wrapper for :func:`epynn.rnn.parameters.rnn_compute_gradients()`. 115 | """ 116 | rnn_compute_gradients(self) 117 | 118 | if self.clip_gradients: 119 | clip_gradient(self) 120 | 121 | return None 122 | 123 | def update_parameters(self): 124 | """Wrapper for :func:`epynn.rnn.parameters.rnn_update_parameters()`. 125 | """ 126 | if self.trainable: 127 | rnn_update_parameters(self) 128 | 129 | return None 130 | -------------------------------------------------------------------------------- /epynn/rnn/parameters.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/rnn/parameters.py 2 | # Related third party imports 3 | import numpy as np 4 | 5 | 6 | def rnn_compute_shapes(layer, A): 7 | """Compute forward shapes and dimensions from input for layer. 8 | """ 9 | X = A # Input of current layer 10 | 11 | layer.fs['X'] = X.shape # (m, s, e) 12 | 13 | layer.d['m'] = layer.fs['X'][0] # Number of samples (m) 14 | layer.d['s'] = layer.fs['X'][1] # Steps in sequence (s) 15 | layer.d['e'] = layer.fs['X'][2] # Elements per step (e) 16 | 17 | # Shapes for trainable parameters Unit cells (u) 18 | layer.fs['U'] = (layer.d['e'], layer.d['u']) # (e, u) 19 | layer.fs['V'] = (layer.d['u'], layer.d['u']) # (u, u) 20 | layer.fs['b'] = (1, layer.d['u']) # (1, u) 21 | 22 | # Shape of hidden state (h) with respect to steps (s) 23 | layer.fs['h'] = (layer.d['m'], layer.d['s'], layer.d['u']) 24 | 25 | return None 26 | 27 | 28 | def rnn_initialize_parameters(layer): 29 | """Initialize trainable parameters from shapes for layer. 30 | """ 31 | # For linear activation of hidden state (h_) 32 | layer.p['U'] = layer.initialization(layer.fs['U'], rng=layer.np_rng) 33 | layer.p['V'] = layer.initialization(layer.fs['V'], rng=layer.np_rng) 34 | layer.p['b'] = np.zeros(layer.fs['b']) # dot(X, U) + dot(hp, V) + b 35 | 36 | return None 37 | 38 | 39 | def rnn_compute_gradients(layer): 40 | """Compute gradients with respect to weight and bias for layer. 
41 | """ 42 | # Gradients initialization with respect to parameters 43 | for parameter in layer.p.keys(): 44 | gradient = 'd' + parameter 45 | layer.g[gradient] = np.zeros_like(layer.p[parameter]) 46 | 47 | # Reverse iteration over sequence steps 48 | for s in reversed(range(layer.d['s'])): 49 | 50 | dh_ = layer.bc['dh_'][:, s] # Gradient w.r.t hidden state h_ 51 | X = layer.fc['X'][:, s] # Input for current step 52 | hp = layer.fc['hp'][:, s] # Previous hidden state 53 | 54 | # (1) Gradients of the loss with respect to U, V, b 55 | layer.g['dU'] += np.dot(X.T, dh_) # (1.1) dL/dU 56 | layer.g['dV'] += np.dot(hp.T, dh_) # (1.2) dL/dV 57 | layer.g['db'] += np.sum(dh_, axis=0) # (1.3) dL/db 58 | 59 | return None 60 | 61 | 62 | def rnn_update_parameters(layer): 63 | """Update parameters from gradients for layer. 64 | """ 65 | for gradient in layer.g.keys(): 66 | parameter = gradient[1:] 67 | # Update is driven by learning rate and gradients 68 | layer.p[parameter] -= layer.lrate[layer.e] * layer.g[gradient] 69 | 70 | return None 71 | -------------------------------------------------------------------------------- /epynn/settings.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/settings.py 2 | 3 | 4 | # HYPERPARAMETERS SETTINGS 5 | se_hPars = { 6 | # Schedule learning rate 7 | 'learning_rate': 0.1, 8 | 'schedule': 'steady', 9 | 'decay_k': 0, 10 | 'cycle_epochs': 0, 11 | 'cycle_descent': 0, 12 | # Tune activation function 13 | 'ELU_alpha': 1, 14 | 'LRELU_alpha': 0.3, 15 | 'softmax_temperature': 1, 16 | } 17 | """Hyperparameters dictionary settings. 18 | 19 | Set hyperparameters for model and layer. 20 | """ 21 | -------------------------------------------------------------------------------- /epynn/template/backward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/template/backward.py 2 | 3 | 4 | def initialize_backward(layer, dX): 5 | """Backward cache initialization. 6 | 7 | :param layer: An instance of template layer. 8 | :type layer: :class:`epynn.template.models.Template` 9 | 10 | :param dX: Output of backward propagation from next layer. 11 | :type dX: :class:`numpy.ndarray` 12 | 13 | :return: Input of backward propagation for current layer. 14 | :rtype: :class:`numpy.ndarray` 15 | """ 16 | dA = layer.bc['dA'] = dX 17 | 18 | return dA 19 | 20 | 21 | def template_backward(layer, dX): 22 | """Backward propagate error gradients to previous layer. 23 | """ 24 | # (1) Initialize cache 25 | dA = initialize_backward(layer, dX) 26 | 27 | # (2) Pass backward 28 | dX = layer.bc['dX'] = dA 29 | 30 | return dX # To previous layer 31 | -------------------------------------------------------------------------------- /epynn/template/forward.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/template/forward.py 2 | 3 | 4 | def initialize_forward(layer, A): 5 | """Forward cache initialization. 6 | 7 | :param layer: An instance of template layer. 8 | :type layer: :class:`epynn.template.models.Template` 9 | 10 | :param A: Output of forward propagation from previous layer. 11 | :type A: :class:`numpy.ndarray` 12 | 13 | :return: Input of forward propagation for current layer. 14 | :rtype: :class:`numpy.ndarray` 15 | """ 16 | X = layer.fc['X'] = A 17 | 18 | return X 19 | 20 | 21 | def template_forward(layer, A): 22 | """Forward propagate signal to next layer. 
23 | """ 24 | # (1) Initialize cache 25 | X = initialize_forward(layer, A) 26 | 27 | # (2) Pass forward 28 | A = layer.fc['A'] = X 29 | 30 | return A # To next layer 31 | -------------------------------------------------------------------------------- /epynn/template/models.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/template/parameters 2 | # Local application/library specific imports 3 | from epynn.commons.models import Layer 4 | from epynn.commons.maths import activation_tune 5 | from epynn.template.forward import template_forward 6 | from epynn.template.backward import template_backward 7 | from epynn.template.parameters import ( 8 | template_compute_shapes, 9 | template_initialize_parameters, 10 | template_compute_gradients, 11 | template_update_parameters 12 | ) 13 | 14 | 15 | class Template(Layer): 16 | """ 17 | Definition of a template layer prototype. This is a pass-through or inactive layer prototype which contains method definitions used for all active layers. For all layer prototypes, methods are wrappers of functions which contain the specific implementations. 18 | """ 19 | 20 | def __init__(self): 21 | """Initialize instance variable attributes. Extended with ``super().__init__()`` which calls :func:`epynn.commons.models.Layer.__init__()` defined in the parent class. 22 | 23 | :ivar trainable: Whether layer's parameters should be trainable. 24 | :vartype trainable: bool 25 | """ 26 | super().__init__() 27 | 28 | self.trainable = True 29 | 30 | return None 31 | 32 | def compute_shapes(self, A): 33 | """Is a wrapper for :func:`epynn.template.parameters.template_compute_shapes()`. 34 | 35 | :param A: Output of forward propagation from *previous* layer. 36 | :type A: :class:`numpy.ndarray` 37 | """ 38 | template_compute_shapes(self, A) 39 | 40 | return None 41 | 42 | def initialize_parameters(self): 43 | """Is a wrapper for :func:`epynn.template.parameters.template_initialize_parameters()`. 44 | """ 45 | template_initialize_parameters(self) 46 | 47 | return None 48 | 49 | def forward(self, A): 50 | """Is a wrapper for :func:`epynn.template.forward.template_forward()`. 51 | 52 | :param A: Output of forward propagation from *previous* layer. 53 | :type A: :class:`numpy.ndarray` 54 | 55 | :return: Output of forward propagation for **current** layer. 56 | :rtype: :class:`numpy.ndarray` 57 | """ 58 | self.compute_shapes(A) 59 | activation_tune(self.se_hPars) 60 | A = template_forward(self, A) 61 | self.update_shapes(self.fc, self.fs) 62 | 63 | return A 64 | 65 | def backward(self, dX): 66 | """Is a wrapper for :func:`epynn.template.backward.template_backward()`. 67 | 68 | :param dX: Output of backward propagation from *next* layer. 69 | :type dX: :class:`numpy.ndarray` 70 | 71 | :return: Output of backward propagation for **current** layer. 72 | :rtype: :class:`numpy.ndarray` 73 | """ 74 | activation_tune(self.se_hPars) 75 | dX = template_backward(self, dX) 76 | self.update_shapes(self.bc, self.bs) 77 | 78 | return dX 79 | 80 | def compute_gradients(self): 81 | """Is a wrapper for :func:`epynn.template.parameters.template_compute_gradients()`. Dummy method, there are no gradients to compute in layer. 82 | """ 83 | template_compute_gradients(self) 84 | 85 | return None 86 | 87 | def update_parameters(self): 88 | """Is a wrapper for :func:`epynn.template.parameters.template_update_parameters()`. Dummy method, there are no parameters to update in layer. 
89 | """ 90 | if self.trainable: 91 | template_update_parameters(self) 92 | 93 | return None 94 | -------------------------------------------------------------------------------- /epynn/template/parameters.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/template/parameters.py 2 | 3 | 4 | def template_compute_shapes(layer, A): 5 | """Compute forward shapes and dimensions from input for layer. 6 | """ 7 | X = A # Input of current layer 8 | 9 | layer.fs['X'] = X.shape # (m, .. ) 10 | 11 | layer.d['m'] = layer.fs['X'][0] # Number of samples (m) 12 | layer.d['n'] = X.size // layer.d['m'] # Number of features (n) 13 | 14 | return None 15 | 16 | 17 | def template_initialize_parameters(layer): 18 | """Initialize parameters from shapes for layer. 19 | """ 20 | # No parameters to initialize for Template layer 21 | 22 | return None 23 | 24 | 25 | def template_compute_gradients(layer): 26 | """Compute gradients with respect to weight and bias for layer. 27 | """ 28 | # No gradients to compute for Template layer 29 | 30 | return None 31 | 32 | 33 | def template_update_parameters(layer): 34 | """Update parameters from gradients for layer. 35 | """ 36 | # No parameters to update for Template layer 37 | 38 | return None 39 | -------------------------------------------------------------------------------- /epynnlive/author_music/prepare_dataset.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/author_music/prepare_dataset.py 2 | # Standard library imports 3 | import tarfile 4 | import random 5 | import glob 6 | import os 7 | 8 | # Related third party imports 9 | import wget 10 | import numpy as np 11 | from scipy.io import wavfile 12 | 13 | # Local application/library specific imports 14 | from epynn.commons.logs import process_logs 15 | 16 | 17 | def download_music(): 18 | """Download some guitar music. 19 | """ 20 | data_path = os.path.join('.', 'data') 21 | 22 | if not os.path.exists(data_path): 23 | 24 | # Download @url with wget 25 | url = 'https://synthase.s3.us-west-2.amazonaws.com/author_music.tar' 26 | fname = wget.download(url) 27 | 28 | # Extract archive 29 | tar = tarfile.open(fname).extractall('.') 30 | process_logs('Make: '+fname, level=1) 31 | 32 | # Clean-up 33 | os.remove(fname) 34 | 35 | return None 36 | 37 | 38 | def clips_music(wav_file, TIME=1, SAMPLING_RATE=10000): 39 | """Clip music and proceed with resampling. 40 | 41 | :param wav_file: The filename of .wav file which contains the music. 42 | :type wav_file: str 43 | 44 | :param SAMPLING_RATE: Sampling rate (Hz), default to 10000. 45 | :type SAMPLING_RATE: int 46 | 47 | :param TIME: Sampling time (s), defaults to 1. 48 | :type TIME: int 49 | 50 | :return: Clipped and re-sampled music. 51 | :rtype: list[:class:`numpy.ndarray`] 52 | """ 53 | # Number of features describing a sample 54 | N_FEATURES = int(SAMPLING_RATE * TIME) 55 | 56 | # Retrieve original sampling rate (Hz) and data 57 | wav_sampling_rate, wav_data = wavfile.read(wav_file) 58 | 59 | # 16-bits wav files - Pass all positive and norm. 
[0, 1] 60 | # wav_data = (wav_data + 32768.0) / (32768.0 * 2) 61 | wav_data = wav_data.astype('int64') 62 | wav_data = (wav_data + np.abs(np.min(wav_data))) 63 | wav_data = wav_data / np.max(wav_data) 64 | 65 | # Digitize in 4-bits signal 66 | n_bins = 16 67 | bins = [i / (n_bins - 1) for i in range(n_bins)] 68 | wav_data = np.digitize(wav_data, bins, right=True) 69 | 70 | # Compute step for re-sampling 71 | sampling_step = int(wav_sampling_rate / SAMPLING_RATE) 72 | 73 | # Re-sampling to avoid memory allocation errors 74 | wav_resampled = wav_data[::sampling_step] 75 | 76 | # Total duration (s) of original data 77 | wav_time = wav_data.shape[0] / wav_sampling_rate 78 | 79 | # Number of clips to slice from original data 80 | N_CLIPS = int(wav_time / TIME) 81 | 82 | # Make clips from data 83 | clips = [wav_resampled[i * N_FEATURES:(i+1) * N_FEATURES] for i in range(N_CLIPS)] 84 | 85 | return clips 86 | 87 | 88 | def prepare_dataset(N_SAMPLES=100): 89 | """Prepare a dataset of clipped music as NumPy arrays. 90 | 91 | :param N_SAMPLES: Number of clip samples to retrieve, defaults to 100. 92 | :type N_SAMPLES: int 93 | 94 | :return: Set of sample features. 95 | :rtype: tuple[:class:`numpy.ndarray`] 96 | 97 | :return: Set of single-digit sample label. 98 | :rtype: tuple[:class:`numpy.ndarray`] 99 | """ 100 | # Initialize X and Y datasets 101 | X_features = [] 102 | Y_label = [] 103 | 104 | wav_paths = os.path.join('data', '*', '*wav') 105 | 106 | wav_files = glob.glob(wav_paths) 107 | 108 | # Iterate over WAV_FILES 109 | for wav_file in wav_files: 110 | 111 | # Retrieve clips 112 | clips = clips_music(wav_file) 113 | 114 | # Iterate over clips 115 | for features in clips: 116 | 117 | # Clip is positive if played by true author (0) else negative (1) 118 | label = 0 if 'true' in wav_file else 1 119 | 120 | # Append sample features to X_features 121 | X_features.append(features) 122 | 123 | # Append sample label to Y_label 124 | Y_label.append(label) 125 | 126 | # Prepare X-Y pairwise dataset 127 | dataset = list(zip(X_features, Y_label)) 128 | 129 | # Shuffle dataset 130 | random.shuffle(dataset) 131 | 132 | # Truncate dataset to N_SAMPLES 133 | dataset = dataset[:N_SAMPLES] 134 | 135 | # Separate X-Y pairs 136 | X_features, Y_label = zip(*dataset) 137 | 138 | return X_features, Y_label 139 | -------------------------------------------------------------------------------- /epynnlive/author_music/settings.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/settings.py 2 | 3 | 4 | # HYPERPARAMETERS SETTINGS 5 | se_hPars = { 6 | # Schedule learning rate 7 | 'learning_rate': 0.1, 8 | 'schedule': 'steady', 9 | 'decay_k': 0, 10 | 'cycle_epochs': 0, 11 | 'cycle_descent': 0, 12 | # Tune activation function 13 | 'ELU_alpha': 0.01, 14 | 'LRELU_alpha': 0.01, 15 | 'softmax_temperature': 1, 16 | } 17 | """Hyperparameters dictionary settings. 18 | 19 | Set hyperparameters for model and layer. 
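The 4-bit quantization used in clips_music() above maps a signal normalized to [0, 1] onto 16 integer levels. A standalone check of that np.digitize call:

import numpy as np

signal = np.array([0.0, 0.04, 0.5, 0.96, 1.0])    # already normalized to [0, 1]

n_bins = 16
bins = [i / (n_bins - 1) for i in range(n_bins)]  # 0, 1/15, ..., 1

levels = np.digitize(signal, bins, right=True)
print(levels)                                     # [ 0  1  8 15 15] -> values in 0..15 (4 bits)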
20 | """ 21 | -------------------------------------------------------------------------------- /epynnlive/author_music/train.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/music_author/train.py 2 | # Standard library imports 3 | import random 4 | 5 | # Related third party imports 6 | import numpy as np 7 | 8 | # Local application/library specific imports 9 | import epynn.initialize 10 | from epynn.commons.maths import relu, softmax 11 | from epynn.commons.library import ( 12 | configure_directory, 13 | read_model, 14 | ) 15 | from epynn.network.models import EpyNN 16 | from epynn.embedding.models import Embedding 17 | from epynn.rnn.models import RNN 18 | from epynn.gru.models import GRU 19 | from epynn.flatten.models import Flatten 20 | from epynn.dropout.models import Dropout 21 | from epynn.dense.models import Dense 22 | from prepare_dataset import ( 23 | prepare_dataset, 24 | download_music, 25 | ) 26 | from settings import se_hPars 27 | 28 | 29 | ########################## CONFIGURE ########################## 30 | random.seed(1) 31 | 32 | np.set_printoptions(threshold=10) 33 | 34 | np.seterr(all='warn') 35 | np.seterr(under='ignore') 36 | 37 | configure_directory() 38 | 39 | 40 | ############################ DATASET ########################## 41 | download_music() 42 | 43 | X_features, Y_label = prepare_dataset(N_SAMPLES=256) 44 | 45 | 46 | ####################### BUILD AND TRAIN MODEL ################# 47 | embedding = Embedding(X_data=X_features, 48 | Y_data=Y_label, 49 | X_encode=True, 50 | Y_encode=True, 51 | batch_size=16, 52 | relative_size=(2, 1, 0)) 53 | 54 | 55 | ### Feed-forward 56 | 57 | # Model 58 | name = 'Flatten_Dense-64-relu_Dense-2-softmax' 59 | 60 | se_hPars['learning_rate'] = 0.01 61 | se_hPars['softmax_temperature'] = 5 62 | 63 | layers = [ 64 | embedding, 65 | Flatten(), 66 | Dense(64, relu), 67 | Dropout(0.5), 68 | Dense(2, softmax), 69 | ] 70 | 71 | model = EpyNN(layers=layers, name=name) 72 | 73 | model.initialize(loss='MSE', seed=1, metrics=['accuracy', 'recall', 'precision'], se_hPars=se_hPars.copy()) 74 | 75 | model.train(epochs=10, init_logs=False) 76 | 77 | 78 | ### Recurrent 79 | 80 | # Model 81 | name = 'RNN-1-Seq_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax' 82 | 83 | se_hPars['learning_rate'] = 0.01 84 | se_hPars['softmax_temperature'] = 1 85 | 86 | layers = [ 87 | embedding, 88 | RNN(1, sequences=True), 89 | Flatten(), 90 | Dense(64, relu), 91 | Dropout(0.5), 92 | Dense(2, softmax), 93 | ] 94 | 95 | model = EpyNN(layers=layers, name=name) 96 | 97 | model.initialize(loss='MSE', seed=1, metrics=['accuracy', 'recall', 'precision'], se_hPars=se_hPars.copy()) 98 | 99 | model.train(epochs=3, init_logs=False) 100 | 101 | 102 | # Model 103 | name = 'GRU-1-Seq_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax' 104 | 105 | se_hPars['learning_rate'] = 0.01 106 | se_hPars['softmax_temperature'] = 1 107 | 108 | layers = [ 109 | embedding, 110 | GRU(1, sequences=True), 111 | Flatten(), 112 | Dense(64, relu), 113 | Dropout(0.5), 114 | Dense(2, softmax), 115 | ] 116 | 117 | model = EpyNN(layers=layers, name=name) 118 | 119 | model.initialize(loss='MSE', seed=1, metrics=['accuracy', 'recall', 'precision'], se_hPars=se_hPars.copy()) 120 | 121 | model.train(epochs=5, init_logs=False) 122 | 123 | 124 | ### Write/read model 125 | 126 | model.write() 127 | # model.write(path=/your/custom/path) 128 | 129 | model = read_model() 130 | # model = read_model(path=/your/custom/path) 131 | 132 | 133 | ### Predict 134 | 
135 | X_features, _ = prepare_dataset(N_SAMPLES=10) 136 | 137 | dset = model.predict(X_features, X_encode=True) 138 | 139 | for n, pred, probs in zip(dset.ids, dset.P, dset.A): 140 | print(n, pred, probs) 141 | -------------------------------------------------------------------------------- /epynnlive/captcha_mnist/prepare_dataset.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/captcha_mnist/prepare_dataset.py 2 | # Standard library imports 3 | import tarfile 4 | import random 5 | import gzip 6 | import os 7 | 8 | # Related third party imports 9 | import wget 10 | import numpy as np 11 | 12 | # Local application/library specific imports 13 | from epynn.commons.logs import process_logs 14 | 15 | 16 | def download_mnist(): 17 | """Download a subset of the MNIST database. 18 | """ 19 | data_path = os.path.join('.', 'data') 20 | 21 | if not os.path.exists(data_path): 22 | 23 | # Download @url with wget 24 | url = 'https://synthase.s3.us-west-2.amazonaws.com/mnist_database.tar' 25 | fname = wget.download(url) 26 | 27 | # Extract archive 28 | tar = tarfile.open(fname).extractall('.') 29 | process_logs('Make: '+fname, level=1) 30 | 31 | # Clean-up 32 | os.remove(fname) 33 | 34 | return None 35 | 36 | 37 | def prepare_dataset(N_SAMPLES=100): 38 | """Prepare a dataset of hand-written digits as images. 39 | 40 | :param N_SAMPLES: Number of MNIST samples to retrieve, defaults to 100. 41 | :type N_SAMPLES: int 42 | 43 | :return: Set of sample features. 44 | :rtype: tuple[:class:`numpy.ndarray`] 45 | 46 | :return: Set of single-digit sample label. 47 | :rtype: tuple[:class:`numpy.ndarray`] 48 | """ 49 | # Process MNIST images 50 | img_file = gzip.open('data/train-images-idx3-ubyte.gz') 51 | 52 | header = img_file.read(16) 53 | image_size = int.from_bytes(header[8:12], byteorder='big') 54 | buf = img_file.read(image_size * image_size * N_SAMPLES) 55 | X_features = np.frombuffer(buf, dtype=np.uint8).astype(np.float32) 56 | X_features = X_features.reshape(N_SAMPLES, image_size, image_size, 1) 57 | 58 | # Process MNIST labels 59 | label_file = gzip.open('data/train-labels-idx1-ubyte.gz') 60 | 61 | header = label_file.read(8) 62 | buf = label_file.read(image_size * image_size * N_SAMPLES) 63 | Y_label = np.frombuffer(buf, dtype=np.uint8) 64 | 65 | # Prepare X-Y pairwise dataset 66 | dataset = list(zip(X_features, Y_label)) 67 | 68 | # Shuffle dataset 69 | random.shuffle(dataset) 70 | 71 | # Separate X-Y pairs 72 | X_features, Y_label = zip(*dataset) 73 | 74 | return X_features, Y_label 75 | -------------------------------------------------------------------------------- /epynnlive/captcha_mnist/settings.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/settings.py 2 | 3 | 4 | # HYPERPARAMETERS SETTINGS 5 | se_hPars = { 6 | # Schedule learning rate 7 | 'learning_rate': 0.1, 8 | 'schedule': 'steady', 9 | 'decay_k': 0.001, 10 | 'cycle_epochs': 0, 11 | 'cycle_descent': 0, 12 | # Tune activation function 13 | 'ELU_alpha': 0.01, 14 | 'LRELU_alpha': 0.01, 15 | 'softmax_temperature': 1, 16 | } 17 | """Hyperparameters dictionary settings. 18 | 19 | Set hyperparameters for model and layer. 
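The MNIST parsing in prepare_dataset() above reads the image size as a big-endian integer from the IDX header and reshapes the raw pixel buffer to (N_SAMPLES, size, size, 1) for the convolutional layers. A standalone sketch with a fabricated header and an all-zero stand-in buffer:

import numpy as np

header = bytes([0, 0, 8, 3, 0, 0, 234, 96, 0, 0, 0, 28, 0, 0, 0, 28])  # fake 16-byte IDX header
image_size = int.from_bytes(header[8:12], byteorder='big')             # 28

N_SAMPLES = 2
buf = bytes(image_size * image_size * N_SAMPLES)   # stand-in pixel buffer (all zeros)

X = np.frombuffer(buf, dtype=np.uint8).astype(np.float32)
X = X.reshape(N_SAMPLES, image_size, image_size, 1)
print(image_size, X.shape)             # 28 (2, 28, 28, 1)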
20 | """ 21 | -------------------------------------------------------------------------------- /epynnlive/captcha_mnist/train.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/captcha_mnist/train.py 2 | # Standard library imports 3 | import random 4 | 5 | # Related third party imports 6 | import numpy as np 7 | 8 | # Local application/library specific imports 9 | import epynn.initialize 10 | from epynn.commons.maths import relu, softmax 11 | from epynn.commons.library import ( 12 | configure_directory, 13 | read_model, 14 | ) 15 | from epynn.network.models import EpyNN 16 | from epynn.embedding.models import Embedding 17 | from epynn.convolution.models import Convolution 18 | from epynn.pooling.models import Pooling 19 | from epynn.flatten.models import Flatten 20 | from epynn.dropout.models import Dropout 21 | from epynn.dense.models import Dense 22 | from prepare_dataset import ( 23 | prepare_dataset, 24 | download_mnist, 25 | ) 26 | from settings import se_hPars 27 | 28 | 29 | ########################## CONFIGURE ########################## 30 | random.seed(1) 31 | np.random.seed(1) 32 | 33 | np.set_printoptions(threshold=10) 34 | 35 | np.seterr(all='warn') 36 | 37 | configure_directory() 38 | 39 | 40 | ############################ DATASET ########################## 41 | download_mnist() 42 | 43 | X_features, Y_label = prepare_dataset(N_SAMPLES=750) 44 | 45 | 46 | ####################### BUILD AND TRAIN MODEL ################# 47 | embedding = Embedding(X_data=X_features, 48 | Y_data=Y_label, 49 | X_scale=True, 50 | Y_encode=True, 51 | batch_size=32, 52 | relative_size=(2, 1, 0)) 53 | 54 | 55 | ### Feed-Forward 56 | 57 | # Model 58 | name = 'Flatten_Dropout-02_Dense-64-relu_Dropout-05_Dense-10-softmax' 59 | 60 | se_hPars['learning_rate'] = 0.001 61 | se_hPars['softmax_temperature'] = 5 62 | 63 | flatten = Flatten() 64 | 65 | dropout1 = Dropout(drop_prob=0.2) 66 | 67 | hidden_dense = Dense(64, relu) 68 | 69 | dropout2 = Dropout(drop_prob=0.5) 70 | 71 | dense = Dense(10, softmax) 72 | 73 | layers = [embedding, flatten, dropout1, hidden_dense, dropout2, dense] 74 | 75 | model = EpyNN(layers=layers, name=name) 76 | 77 | model.initialize(loss='CCE', seed=1, se_hPars=se_hPars.copy()) 78 | 79 | model.train(epochs=100, init_logs=False) 80 | 81 | model.plot(path=False) 82 | 83 | 84 | ### Convolutional Neural Network 85 | 86 | # Model 87 | name = 'Convolution-6-4_Pooling-2-2-Max_Flatten_Dense-10-softmax' 88 | 89 | se_hPars['learning_rate'] = 0.005 90 | se_hPars['softmax_temperature'] = 5 91 | 92 | convolution = Convolution(unit_filters=32, filter_size=(4, 4), activate=relu) 93 | 94 | pooling = Pooling(pool_size=(2, 2)) 95 | 96 | flatten = Flatten() 97 | 98 | dense = Dense(10, softmax) 99 | 100 | layers = [embedding, convolution, pooling, flatten, dense] 101 | 102 | model = EpyNN(layers=layers, name=name) 103 | 104 | model.initialize(loss='CCE', seed=1, se_hPars=se_hPars.copy()) 105 | 106 | model.train(epochs=100, init_logs=False) 107 | 108 | 109 | ### Write/read model 110 | 111 | model.write() 112 | # model.write(path=/your/custom/path) 113 | 114 | model = read_model() 115 | # model = read_model(path=/your/custom/path) 116 | 117 | 118 | ### Predict 119 | 120 | X_features, _ = prepare_dataset(N_SAMPLES=10) 121 | 122 | dset = model.predict(X_features) 123 | 124 | for n, pred, probs in zip(dset.ids, dset.P, dset.A): 125 | print(n, pred, probs) 126 | -------------------------------------------------------------------------------- 
/epynnlive/dummy_boolean/prepare_dataset.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/dummy_boolean/prepare_dataset.py 2 | # Standard library imports 3 | import random 4 | 5 | 6 | def features_boolean(N_FEATURES=11): 7 | """Generate dummy string features. 8 | 9 | :param N_FEATURES: Number of features, defaults to 11. 10 | :type N_FEATURES: int 11 | 12 | :return: random boolean features of length N_FEATURES. 13 | :rtype: list[bool] 14 | """ 15 | # Random choice True or False for N_FEATURES iterations 16 | features = [random.choice([True, False]) for j in range(N_FEATURES)] 17 | 18 | return features 19 | 20 | 21 | def label_features(features): 22 | """Prepare label associated with features. 23 | 24 | The dummy law is: 25 | 26 | More True = positive. 27 | More False = negative. 28 | 29 | :param features: random boolean features of length N_FEATURES. 30 | :type features: list[bool] 31 | 32 | :return: Single-digit label with respect to features. 33 | :rtype: int 34 | """ 35 | # Single-digit positive and negative labels 36 | p_label = 0 37 | n_label = 1 38 | 39 | # Test if features contains more True (0) 40 | if features.count(True) > features.count(False): 41 | label = p_label 42 | 43 | # Test if features contains more False (1) 44 | elif features.count(True) < features.count(False): 45 | label = n_label 46 | 47 | return label 48 | 49 | 50 | def prepare_dataset(N_SAMPLES=100): 51 | """Prepare a set of dummy boolean sample features and label. 52 | 53 | :param N_SAMPLES: Number of samples to generate, defaults to 100. 54 | :type N_SAMPLES: int 55 | 56 | :return: Set of sample features. 57 | :rtype: tuple[list[bool]] 58 | 59 | :return: Set of single-digit sample label. 60 | :rtype: tuple[int] 61 | """ 62 | # Initialize X and Y datasets 63 | X_features = [] 64 | Y_label = [] 65 | 66 | # Iterate over N_SAMPLES 67 | for i in range(N_SAMPLES): 68 | 69 | # Compute random boolean features 70 | features = features_boolean() 71 | 72 | # Retrieve label associated with features 73 | label = label_features(features) 74 | 75 | # Append sample features to X_features 76 | X_features.append(features) 77 | 78 | # Append sample label to Y_label 79 | Y_label.append(label) 80 | 81 | # Prepare X-Y pairwise dataset 82 | dataset = list(zip(X_features, Y_label)) 83 | 84 | # Shuffle dataset 85 | random.shuffle(dataset) 86 | 87 | # Separate X-Y pairs 88 | X_features, Y_label = zip(*dataset) 89 | 90 | return X_features, Y_label 91 | -------------------------------------------------------------------------------- /epynnlive/dummy_boolean/settings.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/dummy_boolean/settings.py 2 | 3 | 4 | # HYPERPARAMETERS SETTINGS 5 | se_hPars = { 6 | # Schedule learning rate 7 | 'learning_rate': 0.1, 8 | 'schedule_mode': 'steady', 9 | 'decay_k': 0.001, 10 | 'cycling_n': 1, 11 | 'descent_d': 1, 12 | # Tune activation function 13 | 'ELU_alpha': 0.01, 14 | 'LRELU_alpha': 0.01, 15 | 'softmax_temperature': 1, 16 | } 17 | """Hyperparameters dictionary settings. 18 | 19 | Set hyperparameters for model and layer. 
20 | """ 21 | -------------------------------------------------------------------------------- /epynnlive/dummy_boolean/train.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/dummy_boolean/train.py 2 | # Standard library imports 3 | import random 4 | 5 | # Related third party imports 6 | import numpy as np 7 | 8 | # Local application/library specific imports 9 | import epynn.initialize 10 | from epynn.commons.library import ( 11 | configure_directory, 12 | read_model, 13 | ) 14 | from epynn.network.models import EpyNN 15 | from epynn.embedding.models import Embedding 16 | from epynn.dense.models import Dense 17 | from prepare_dataset import prepare_dataset 18 | 19 | 20 | ########################## CONFIGURE ########################## 21 | random.seed(1) 22 | 23 | np.set_printoptions(threshold=10) 24 | 25 | np.seterr(all='warn') 26 | 27 | configure_directory(clear=True) # This is a dummy example 28 | 29 | 30 | ############################ DATASET ########################## 31 | X_features, Y_label = prepare_dataset(N_SAMPLES=50) 32 | 33 | 34 | ####################### BUILD AND TRAIN MODEL ################# 35 | 36 | embedding = Embedding(X_data=X_features, 37 | Y_data=Y_label, 38 | relative_size=(2, 1, 0)) 39 | 40 | 41 | ### Feed-Forward 42 | 43 | # Model 44 | name = 'Perceptron_Dense-1-sigmoid' 45 | 46 | dense = Dense() 47 | 48 | model = EpyNN(layers=[embedding, dense], name=name) 49 | 50 | model.train(epochs=100) 51 | 52 | model.plot(path=False) 53 | 54 | 55 | ### Write/read model 56 | 57 | model.write() 58 | # model.write(path=/your/custom/path) 59 | 60 | model = read_model() 61 | # model = read_model(path=/your/custom/path) 62 | 63 | 64 | ### Predict 65 | 66 | X_features, _ = prepare_dataset(N_SAMPLES=10) 67 | 68 | dset = model.predict(X_features) 69 | 70 | for n, pred, probs, features in zip(dset.ids, dset.P, dset.A, dset.X): 71 | print(n, pred, probs, features) 72 | -------------------------------------------------------------------------------- /epynnlive/dummy_image/prepare_dataset.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/dummy_image/prepare_dataset.py 2 | # Standard library imports 3 | import random 4 | 5 | # Related third party imports 6 | import numpy as np 7 | 8 | 9 | def features_image(WIDTH=28, HEIGHT=28): 10 | """Generate dummy image features. 11 | 12 | :param WIDTH: Image width, defaults to 28. 13 | :type WIDTH: int 14 | 15 | :param HEIGHT: Image height, defaults to 28. 16 | :type HEIGHT: int 17 | 18 | :return: Random image features of size N_FEATURES. 19 | :rtype: :class:`numpy.ndarray` 20 | 21 | :return: Non-random image features of size N_FEATURES. 
22 | :rtype: :class:`numpy.ndarray` 23 | """ 24 | # Number of channels is one for greyscale images 25 | DEPTH = 1 26 | 27 | # Number of features describing a sample 28 | N_FEATURES = WIDTH * HEIGHT * DEPTH 29 | 30 | # Number of distinct tones in features 31 | N_TONES = 16 32 | 33 | # Shades of grey 34 | GSCALE = [i for i in range(N_TONES)] 35 | 36 | # Random choice of shades for N_FEATURES iterations 37 | features = [random.choice(GSCALE) for j in range(N_FEATURES)] 38 | 39 | # Vectorization of features 40 | features = np.array(features).reshape(HEIGHT, WIDTH, DEPTH) 41 | 42 | # Masked features 43 | mask_on_features = features.copy() 44 | mask_on_features[np.random.randint(0, HEIGHT)] = np.zeros_like(features[0]) 45 | mask_on_features[:, np.random.randint(0, WIDTH)] = np.zeros_like(features[:, 0]) 46 | 47 | # Random choice between random image or masked image 48 | features = random.choice([features, mask_on_features]) 49 | 50 | return features, mask_on_features 51 | 52 | 53 | def label_features(features, mask_on_features): 54 | """Prepare label associated with features. 55 | 56 | The dummy law is: 57 | 58 | Image is NOT random = positive 59 | Image is random = negative 60 | 61 | :param features: Random image features of size N_FEATURES 62 | :type features: :class:`numpy.ndarray` 63 | 64 | :param mask_on_features: Non-random image features of size N_FEATURES 65 | :type mask_on_features: :class:`numpy.ndarray` 66 | 67 | :return: Single-digit label with respect to features 68 | :rtype: int 69 | """ 70 | # Single-digit positive and negative labels 71 | p_label = 0 72 | n_label = 1 73 | 74 | # Test if image is not random (0) 75 | if np.sum(features) == np.sum(mask_on_features): 76 | label = p_label 77 | 78 | # Test if image is random (1) 79 | elif np.sum(features) != np.sum(mask_on_features): 80 | label = n_label 81 | 82 | return label 83 | 84 | 85 | def prepare_dataset(N_SAMPLES=100): 86 | """Prepare a set of dummy time sample features and label. 87 | 88 | :param N_SAMPLES: Number of samples to generate, defaults to 100. 89 | :type N_SAMPLES: int 90 | 91 | :return: Set of sample features. 92 | :rtype: tuple[:class:`numpy.ndarray`] 93 | 94 | :return: Set of single-digit sample label. 
95 | :rtype: tuple[int] 96 | """ 97 | # Initialize X and Y datasets 98 | X_features = [] 99 | Y_label = [] 100 | 101 | # Iterate over N_SAMPLES 102 | for i in range(N_SAMPLES): 103 | 104 | # Compute random string features 105 | features, mask_on_features = features_image() 106 | 107 | # Retrieve label associated with features 108 | label = label_features(features, mask_on_features) 109 | 110 | # Append sample features to X_features 111 | X_features.append(features) 112 | 113 | # Append sample label to Y_label 114 | Y_label.append(label) 115 | 116 | # Prepare X-Y pairwise dataset 117 | dataset = list(zip(X_features, Y_label)) 118 | 119 | # Shuffle dataset 120 | random.shuffle(dataset) 121 | 122 | # Separate X-Y pairs 123 | X_features, Y_label = zip(*dataset) 124 | 125 | return X_features, Y_label 126 | -------------------------------------------------------------------------------- /epynnlive/dummy_image/settings.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/settings.py 2 | 3 | 4 | # HYPERPARAMETERS SETTINGS 5 | se_hPars = { 6 | # Schedule learning rate 7 | 'learning_rate': 0.1, 8 | 'schedule': 'steady', 9 | 'decay_k': 0, 10 | 'cycle_epochs': 0, 11 | 'cycle_descent': 0, 12 | # Tune activation function 13 | 'ELU_alpha': 0.01, 14 | 'LRELU_alpha': 0.01, 15 | 'softmax_temperature': 1, 16 | } 17 | """Hyperparameters dictionary settings. 18 | 19 | Set hyperparameters for model and layer. 20 | """ 21 | -------------------------------------------------------------------------------- /epynnlive/dummy_image/train.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/dummy_image/train.py 2 | # Standard library imports 3 | import random 4 | 5 | # Related third party imports 6 | import numpy as np 7 | 8 | # Local application/library specific imports 9 | import epynn.initialize 10 | from epynn.commons.maths import relu, softmax 11 | from epynn.commons.library import ( 12 | configure_directory, 13 | read_model, 14 | ) 15 | from epynn.network.models import EpyNN 16 | from epynn.embedding.models import Embedding 17 | from epynn.convolution.models import Convolution 18 | from epynn.pooling.models import Pooling 19 | from epynn.flatten.models import Flatten 20 | from epynn.dropout.models import Dropout 21 | from epynn.dense.models import Dense 22 | from prepare_dataset import prepare_dataset 23 | from settings import se_hPars 24 | 25 | 26 | ########################## CONFIGURE ########################## 27 | random.seed(1) 28 | np.random.seed(1) 29 | 30 | np.set_printoptions(threshold=10) 31 | 32 | np.seterr(all='warn') 33 | 34 | configure_directory() 35 | 36 | 37 | ############################ DATASET ########################## 38 | X_features, Y_label = prepare_dataset(N_SAMPLES=750) 39 | 40 | 41 | ####################### BUILD AND TRAIN MODEL ################# 42 | embedding = Embedding(X_data=X_features, 43 | Y_data=Y_label, 44 | X_scale=True, 45 | Y_encode=True, 46 | batch_size=32, 47 | relative_size=(2, 1, 0)) 48 | 49 | 50 | ### Feed-Forward 51 | 52 | # Model 53 | name = 'Flatten_Dropout-02_Dense-64-relu_Dropout-05_Dense-2-softmax' 54 | 55 | se_hPars['learning_rate'] = 0.01 56 | 57 | flatten = Flatten() 58 | 59 | dropout1 = Dropout(drop_prob=0.2) 60 | 61 | hidden_dense = Dense(64, relu) 62 | 63 | dropout2 = Dropout(drop_prob=0.5) 64 | 65 | dense = Dense(2, softmax) 66 | 67 | layers = [embedding, flatten, dropout1, hidden_dense, dropout2, dense] 68 | 69 | model = EpyNN(layers=layers, name=name) 
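# Sketch of what follows (assumptions, not confirmed by the sources shown here):
# initialize() compiles the model with a mean squared error loss over the one-hot
# encoded labels and the se_hPars copy as layer hyperparameters; seed=1 is assumed
# to make weight initialization reproducible between the feed-forward and
# convolutional models trained in this script.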
70 | 71 | model.initialize(loss='MSE', seed=1, se_hPars=se_hPars.copy()) 72 | 73 | model.train(epochs=100, init_logs=False) 74 | 75 | 76 | ### Convolutional Neural Network 77 | 78 | # Model 79 | name = 'Convolution-6-4_Pooling-2-Max_Flatten_Dense-2-softmax' 80 | 81 | se_hPars['learning_rate'] = 0.001 82 | 83 | convolution = Convolution(unit_filters=6, filter_size=(4, 4), activate=relu) 84 | 85 | pooling = Pooling(pool_size=(2, 2)) 86 | 87 | flatten = Flatten() 88 | 89 | dense = Dense(2, softmax) 90 | 91 | layers = [embedding, convolution, pooling, flatten, dense] 92 | 93 | model = EpyNN(layers=layers, name=name) 94 | 95 | model.initialize(loss='MSE', seed=1, se_hPars=se_hPars.copy()) 96 | 97 | model.train(epochs=100, init_logs=False) 98 | 99 | 100 | ### Write/read model 101 | 102 | model.write() 103 | # model.write(path=/your/custom/path) 104 | 105 | model = read_model() 106 | # model = read_model(path=/your/custom/path) 107 | 108 | 109 | ### Predict 110 | 111 | X_features, _ = prepare_dataset(N_SAMPLES=10) 112 | 113 | dset = model.predict(X_features) 114 | 115 | for n, pred, probs in zip(dset.ids, dset.P, dset.A): 116 | print(n, pred, probs) 117 | -------------------------------------------------------------------------------- /epynnlive/dummy_string/prepare_dataset.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/dummy_strings/prepare_dataset.py 2 | # Standard library imports 3 | import random 4 | 5 | 6 | def features_string(N_FEATURES=12): 7 | """Generate dummy string features. 8 | 9 | :param N_FEATURES: Number of features, defaults to 12. 10 | :type N_FEATURES: int 11 | 12 | :return: random string features of length N_FEATURES. 13 | :rtype: list[str] 14 | """ 15 | # List of words 16 | WORDS = ['A', 'T', 'G', 'C'] 17 | 18 | # Random choice of words for N_FEATURES iterations 19 | features = [random.choice(WORDS) for j in range(N_FEATURES)] 20 | 21 | return features 22 | 23 | 24 | def label_features(features): 25 | """Prepare label associated with features. 26 | 27 | The dummy law is: 28 | 29 | First and last elements are equal = positive. 30 | First and last elements are NOT equal = negative. 31 | 32 | :param features: random string features of length N_FEATURES. 33 | :type features: list[str] 34 | 35 | :return: Single-digit label with respect to features. 36 | :rtype: int 37 | """ 38 | # Single-digit positive and negative labels 39 | p_label = 0 40 | n_label = 1 41 | 42 | # Pattern associated with positive label (0) 43 | if features[0] == features[-1]: 44 | label = p_label 45 | 46 | # Other pattern associated with negative label (1) 47 | elif features[0] != features[-1]: 48 | label = n_label 49 | 50 | return label 51 | 52 | 53 | def prepare_dataset(N_SAMPLES=100): 54 | """Prepare a set of dummy string sample features and label. 55 | 56 | :param N_SAMPLES: Number of samples to generate, defaults to 100. 57 | :type N_SAMPLES: int 58 | 59 | :return: Set of sample features. 60 | :rtype: tuple[list[str]] 61 | 62 | :return: Set of single-digit sample label. 
63 | :rtype: tuple[int] 64 | """ 65 | # Initialize X and Y datasets 66 | X_features = [] 67 | Y_label = [] 68 | 69 | # Iterate over N_SAMPLES 70 | for i in range(N_SAMPLES): 71 | 72 | # Compute random string features 73 | features = features_string() 74 | 75 | # Retrieve label associated with features 76 | label = label_features(features) 77 | 78 | # Append sample features to X_features 79 | X_features.append(features) 80 | 81 | # Append sample label to Y_label 82 | Y_label.append(label) 83 | 84 | # Prepare X-Y pairwise dataset 85 | dataset = list(zip(X_features, Y_label)) 86 | 87 | # Shuffle dataset 88 | random.shuffle(dataset) 89 | 90 | # Separate X-Y pairs 91 | X_features, Y_label = zip(*dataset) 92 | 93 | return X_features, Y_label 94 | -------------------------------------------------------------------------------- /epynnlive/dummy_string/settings.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/settings.py 2 | 3 | 4 | # HYPERPARAMETERS SETTINGS 5 | se_hPars = { 6 | # Schedule learning rate 7 | 'learning_rate': 0.1, 8 | 'schedule': 'steady', 9 | 'decay_k': 0, 10 | 'cycle_epochs': 0, 11 | 'cycle_descent': 0, 12 | # Tune activation function 13 | 'ELU_alpha': 0.01, 14 | 'LRELU_alpha': 0.01, 15 | 'softmax_temperature': 1, 16 | } 17 | """Hyperparameters dictionary settings. 18 | 19 | Set hyperparameters for model and layer. 20 | """ 21 | -------------------------------------------------------------------------------- /epynnlive/dummy_string/train.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/dummy_string/train.py 2 | # Standard library imports 3 | import random 4 | 5 | # Related third party imports 6 | import numpy as np 7 | 8 | # Local application/library specific imports 9 | import epynn.initialize 10 | from epynn.commons.io import one_hot_decode_sequence 11 | from epynn.commons.maths import relu, softmax 12 | from epynn.commons.library import ( 13 | configure_directory, 14 | read_model, 15 | ) 16 | from epynn.network.models import EpyNN 17 | from epynn.embedding.models import Embedding 18 | from epynn.flatten.models import Flatten 19 | from epynn.rnn.models import RNN 20 | from epynn.gru.models import GRU 21 | from epynn.lstm.models import LSTM 22 | from epynn.dense.models import Dense 23 | from prepare_dataset import prepare_dataset 24 | from settings import se_hPars 25 | 26 | 27 | ########################## CONFIGURE ########################## 28 | random.seed(1) 29 | 30 | np.set_printoptions(threshold=10) 31 | 32 | np.seterr(all='warn') 33 | 34 | configure_directory() 35 | 36 | 37 | ############################ DATASET ########################## 38 | X_features, Y_label = prepare_dataset(N_SAMPLES=480) 39 | 40 | 41 | ####################### BUILD AND TRAIN MODEL ################# 42 | embedding = Embedding(X_data=X_features, 43 | Y_data=Y_label, 44 | X_encode=True, 45 | Y_encode=True, 46 | relative_size=(2, 1, 0)) 47 | 48 | ### Feed-Forward 49 | 50 | # Model 51 | name = 'Flatten_Dense-2-softmax' 52 | 53 | se_hPars['learning_rate'] = 0.001 54 | 55 | flatten = Flatten() 56 | 57 | dense = Dense(2, softmax) 58 | 59 | layers = [embedding, flatten, dense] 60 | 61 | model = EpyNN(layers=layers, name=name) 62 | 63 | model.initialize(loss='BCE', seed=1, se_hPars=se_hPars.copy()) 64 | 65 | model.train(epochs=50, init_logs=False) 66 | 67 | model.plot(path=False) 68 | 69 | 70 | ### Recurrent 71 | 72 | # Model 73 | name = 'RNN-1_Dense-2-softmax' 74 | 75 | se_hPars['learning_rate'] = 
0.001 76 | 77 | rnn = RNN(1) 78 | 79 | dense = Dense(2, softmax) 80 | 81 | layers = [embedding, rnn, dense] 82 | 83 | model = EpyNN(layers=layers, name=name) 84 | 85 | model.initialize(loss='BCE', seed=1, se_hPars=se_hPars.copy()) 86 | 87 | model.train(epochs=50, init_logs=False) 88 | 89 | model.plot(path=False) 90 | 91 | 92 | # Model 93 | name = 'LSTM-1_Dense-2-softmax' 94 | 95 | se_hPars['learning_rate'] = 0.005 96 | 97 | lstm = LSTM(1) 98 | 99 | dense = Dense(2, softmax) 100 | 101 | layers = [embedding, lstm, dense] 102 | 103 | model = EpyNN(layers=layers, name=name) 104 | 105 | model.initialize(loss='BCE', seed=1, se_hPars=se_hPars.copy()) 106 | 107 | model.train(epochs=50, init_logs=False) 108 | 109 | model.plot(path=False) 110 | 111 | 112 | # Model 113 | name = 'GRU-1_Dense-2-softmax' 114 | 115 | se_hPars['learning_rate'] = 0.005 116 | 117 | gru = GRU(1) 118 | 119 | flatten = Flatten() 120 | 121 | dense = Dense(2, softmax) 122 | 123 | layers = [embedding, gru, dense] 124 | 125 | model = EpyNN(layers=layers, name=name) 126 | 127 | model.initialize(loss='BCE', seed=1, se_hPars=se_hPars.copy()) 128 | 129 | model.train(epochs=50, init_logs=False) 130 | 131 | model.plot(path=False) 132 | 133 | 134 | ### Write/read model 135 | 136 | model.write() 137 | # model.write(path=/your/custom/path) 138 | 139 | model = read_model() 140 | # model = read_model(path=/your/custom/path) 141 | 142 | 143 | ### Predict 144 | 145 | X_features, _ = prepare_dataset(N_SAMPLES=10) 146 | 147 | dset = model.predict(X_features, X_encode=True) 148 | 149 | for n, pred, probs in zip(dset.ids, dset.P, dset.A): 150 | print(n, pred, probs) 151 | -------------------------------------------------------------------------------- /epynnlive/dummy_time/prepare_dataset.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/dummy_time/prepare_dataset.py 2 | # Standard library imports 3 | import random 4 | 5 | # Related third party imports 6 | import numpy as np 7 | 8 | 9 | def features_time(TIME=1, SAMPLING_RATE=128): 10 | """Generate dummy time features. 11 | 12 | Time features may be white noise or a sum with a pure sine-wave. 13 | 14 | The pure sin-wave has random frequency lower than SAMPLING_RATE // 4. 15 | 16 | :param SAMPLING_RATE: Sampling rate (Hz), defaults to 128. 17 | :type SAMPLING_RATE: int 18 | 19 | :param TIME: Sampling time (s), defaults to 1. 20 | :type TIME: int 21 | 22 | :return: Time features of shape (N_FEATURES,). 23 | :rtype: :class:`numpy.ndarray` 24 | 25 | :return: White noise of shape (N_FEATURES,). 26 | :rtype: :class:`numpy.ndarray` 27 | """ 28 | # Number of features describing a sample 29 | N_FEATURES = TIME * SAMPLING_RATE 30 | 31 | # Initialize features array 32 | features = np.linspace(0, TIME, N_FEATURES, endpoint=False) 33 | 34 | # Random choice of true signal frequency 35 | signal_frequency = random.uniform(0, SAMPLING_RATE // 4) 36 | 37 | # Generate pure sine wave of N_FEATURES points 38 | features = np.sin(2 * np.pi * signal_frequency * features) 39 | 40 | # Generate white noise 41 | white_noise = np.random.normal(0, scale=0.5, size=N_FEATURES) 42 | 43 | # Random choice between noisy signal or white noise 44 | features = random.choice([features + white_noise, white_noise]) 45 | 46 | return features, white_noise 47 | 48 | 49 | def label_features(features, white_noise): 50 | """Prepare label associated with features. 51 | 52 | The dummy law is: 53 | 54 | True signal in features = positive. 55 | No true signal in features = negative. 
56 | 57 | :return: Time features of shape (N_FEATURES,). 58 | :rtype: :class:`numpy.ndarray` 59 | 60 | :return: White noise of shape (N_FEATURES,). 61 | :rtype: :class:`numpy.ndarray` 62 | 63 | :return: Single-digit label with respect to features. 64 | :rtype: int 65 | """ 66 | # Single-digit positive and negative labels 67 | p_label = 0 68 | n_label = 1 69 | 70 | # Test if features contains signal (0) 71 | if any(features != white_noise): 72 | label = p_label 73 | 74 | # Test if features is equal to white noise (1) 75 | elif all(features == white_noise): 76 | label = n_label 77 | 78 | return label 79 | 80 | 81 | def prepare_dataset(N_SAMPLES=100): 82 | """Prepare a set of dummy time sample features and label. 83 | 84 | :param N_SAMPLES: Number of samples to generate, defaults to 100. 85 | :type N_SAMPLES: int 86 | 87 | :return: Set of sample features. 88 | :rtype: tuple[:class:`numpy.ndarray`] 89 | 90 | :return: Set of single-digit sample label. 91 | :rtype: tuple[int] 92 | """ 93 | # Initialize X and Y datasets 94 | X_features = [] 95 | Y_label = [] 96 | 97 | # Iterate over N_SAMPLES 98 | for i in range(N_SAMPLES): 99 | 100 | # Compute random time features 101 | features, white_noise = features_time() 102 | 103 | # Retrieve label associated with features 104 | label = label_features(features, white_noise) 105 | 106 | # From n measurements to n steps with 1 measurements 107 | features = np.expand_dims(features, 1) 108 | 109 | # Append sample features to X_features 110 | X_features.append(features) 111 | 112 | # Append sample label to Y_label 113 | Y_label.append(label) 114 | 115 | # Prepare X-Y pairwise dataset 116 | dataset = list(zip(X_features, Y_label)) 117 | 118 | # Shuffle dataset 119 | random.shuffle(dataset) 120 | 121 | # Separate X-Y pairs 122 | X_features, Y_label = zip(*dataset) 123 | 124 | return X_features, Y_label 125 | -------------------------------------------------------------------------------- /epynnlive/dummy_time/settings.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/settings.py 2 | 3 | 4 | # HYPERPARAMETERS SETTINGS 5 | se_hPars = { 6 | # Schedule learning rate 7 | 'learning_rate': 0.1, 8 | 'schedule': 'steady', 9 | 'decay_k': 0, 10 | 'cycle_epochs': 0, 11 | 'cycle_descent': 0, 12 | # Tune activation function 13 | 'ELU_alpha': 0.01, 14 | 'LRELU_alpha': 0.01, 15 | 'softmax_temperature': 1, 16 | } 17 | """Hyperparameters dictionary settings. 18 | 19 | Set hyperparameters for model and layer. 
20 | """ 21 | -------------------------------------------------------------------------------- /epynnlive/dummy_time/train.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/dummy_time/train.py 2 | # Standard library imports 3 | import random 4 | 5 | # Related third party imports 6 | import numpy as np 7 | 8 | # Local application/library specific imports 9 | import epynn.initialize 10 | from epynn.commons.maths import relu, softmax 11 | from epynn.commons.library import ( 12 | configure_directory, 13 | read_model, 14 | ) 15 | from epynn.network.models import EpyNN 16 | from epynn.dropout.models import Dropout 17 | from epynn.embedding.models import Embedding 18 | from epynn.flatten.models import Flatten 19 | from epynn.rnn.models import RNN 20 | from epynn.dense.models import Dense 21 | from prepare_dataset import prepare_dataset 22 | from settings import se_hPars 23 | 24 | 25 | ########################## CONFIGURE ########################## 26 | random.seed(1) 27 | np.random.seed(1) 28 | 29 | np.set_printoptions(threshold=10) 30 | 31 | np.seterr(all='warn') 32 | 33 | configure_directory() 34 | 35 | 36 | ############################ DATASET ########################## 37 | X_features, Y_label = prepare_dataset(N_SAMPLES=1024) 38 | 39 | 40 | ####################### BUILD AND TRAIN MODEL ################# 41 | embedding = Embedding(X_data=X_features, 42 | Y_data=Y_label, 43 | Y_encode=True, 44 | relative_size=(2, 1, 0)) 45 | 46 | ### Feed-forward 47 | 48 | # Model 49 | name = 'Flatten_Dense-64-relu_Dense-2-softmax' 50 | 51 | se_hPars['learning_rate'] = 0.005 52 | 53 | flatten = Flatten() 54 | 55 | hidden_dense = Dense(64, relu) 56 | 57 | dense = Dense(2, softmax) 58 | 59 | layers = [embedding, flatten, hidden_dense, dense] 60 | 61 | model = EpyNN(layers=layers, name=name) 62 | 63 | model.initialize(loss='MSE', seed=1, se_hPars=se_hPars.copy()) 64 | 65 | model.train(epochs=100, init_logs=False) 66 | 67 | model.plot(path=False) 68 | 69 | 70 | # Model 71 | name = 'Flatten_Dropout-02_Dense-64-relu_Dropout-05_Dense-2-softmax' 72 | 73 | se_hPars['learning_rate'] = 0.005 74 | 75 | flatten = Flatten() 76 | 77 | dropout1 = Dropout(drop_prob=0.2) 78 | 79 | hidden_dense = Dense(64, relu) 80 | 81 | dropout2 = Dropout(drop_prob=0.5) 82 | 83 | dense = Dense(2, softmax) 84 | 85 | layers = [embedding, flatten, dropout1, hidden_dense, dropout2, dense] 86 | 87 | model = EpyNN(layers=layers, name=name) 88 | 89 | model.initialize(loss='MSE', seed=1, se_hPars=se_hPars.copy()) 90 | 91 | model.train(epochs=100, init_logs=False) 92 | 93 | model.plot(path=False) 94 | 95 | 96 | ### Recurrent 97 | 98 | # Model 99 | name = 'RNN-10_Flatten_Dense-2-softmax' 100 | 101 | se_hPars['learning_rate'] = 0.01 102 | se_hPars['softmax_temperature'] = 5 103 | 104 | rnn = RNN(10) 105 | 106 | dense = Dense(2, softmax) 107 | 108 | layers = [embedding, rnn, dense] 109 | 110 | model = EpyNN(layers=layers, name=name) 111 | 112 | model.initialize(loss='MSE', seed=1, se_hPars=se_hPars.copy()) 113 | 114 | model.train(epochs=100, init_logs=False) 115 | 116 | 117 | # Model (SGD) 118 | embedding = Embedding(X_data=X_features, 119 | Y_data=Y_label, 120 | Y_encode=True, 121 | batch_size=32, 122 | relative_size=(2, 1, 0)) 123 | 124 | name = 'RNN-10_Flatten_Dense-2-softmax' 125 | 126 | se_hPars['learning_rate'] = 0.01 127 | se_hPars['softmax_temperature'] = 5 128 | 129 | rnn = RNN(10) 130 | 131 | dense = Dense(2, softmax) 132 | 133 | layers = [embedding, rnn, dense] 134 | 135 | model = 
EpyNN(layers=layers, name=name) 136 | 137 | model.initialize(loss='MSE', seed=1, se_hPars=se_hPars.copy(), end='\r') 138 | 139 | model.train(epochs=100, init_logs=False) 140 | 141 | model.plot(path=False) 142 | 143 | 144 | ### Write/read model 145 | 146 | model.write() 147 | # model.write(path=/your/custom/path) 148 | 149 | model = read_model() 150 | # model = read_model(path=/your/custom/path) 151 | 152 | 153 | ### Predict 154 | 155 | X_features, _ = prepare_dataset(N_SAMPLES=10) 156 | 157 | dset = model.predict(X_features) 158 | 159 | for n, pred, probs in zip(dset.ids, dset.P, dset.A): 160 | print(n, pred, probs) 161 | -------------------------------------------------------------------------------- /epynnlive/ptm_protein/prepare_dataset.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/ptm_protein/prepare_dataset.py 2 | # Standard library imports 3 | import tarfile 4 | import random 5 | import os 6 | 7 | # Related third party imports 8 | import wget 9 | 10 | # Local application/library specific imports 11 | from epynn.commons.library import read_file 12 | from epynn.commons.logs import process_logs 13 | 14 | 15 | def download_sequences(): 16 | """Download a set of peptide sequences. 17 | """ 18 | data_path = os.path.join('.', 'data') 19 | 20 | if not os.path.exists(data_path): 21 | 22 | # Download @url with wget 23 | url = 'https://synthase.s3.us-west-2.amazonaws.com/ptm_prediction_data.tar' 24 | fname = wget.download(url) 25 | 26 | # Extract archive 27 | tar = tarfile.open(fname).extractall('.') 28 | process_logs('Make: ' + fname, level=1) 29 | 30 | # Clean-up 31 | os.remove(fname) 32 | 33 | return None 34 | 35 | 36 | def prepare_dataset(N_SAMPLES=100): 37 | """Prepare a set of labeled peptides. 38 | 39 | :param N_SAMPLES: Number of peptide samples to retrieve, defaults to 100. 40 | :type N_SAMPLES: int 41 | 42 | :return: Set of peptides. 43 | :rtype: tuple[list[str]] 44 | 45 | :return: Set of single-digit peptides label. 
46 | :rtype: tuple[int] 47 | """ 48 | # Single-digit positive and negative labels 49 | p_label = 0 50 | n_label = 1 51 | 52 | # Positive data are Homo sapiens O-GlcNAcylated peptide sequences from oglcnac.mcw.edu 53 | path_positive = os.path.join('data', '21_positive.dat') 54 | 55 | # Negative data are peptide sequences presumably not O-GlcNAcylated 56 | path_negative = os.path.join('data', '21_negative.dat') 57 | 58 | # Read text files, each containing one sequence per line 59 | positive = [[list(x), p_label] for x in read_file(path_positive).splitlines()] 60 | negative = [[list(x), n_label] for x in read_file(path_negative).splitlines()] 61 | 62 | # Shuffle data to prevent from any sorting previously applied 63 | random.shuffle(positive) 64 | random.shuffle(negative) 65 | 66 | # Truncate to prepare a balanced dataset 67 | negative = negative[:len(positive)] 68 | 69 | # Prepare a balanced dataset 70 | dataset = positive + negative 71 | 72 | # Shuffle dataset 73 | random.shuffle(dataset) 74 | 75 | # Truncate dataset to N_SAMPLES 76 | dataset = dataset[:N_SAMPLES] 77 | 78 | # Separate X-Y pairs 79 | X_features, Y_label = zip(*dataset) 80 | 81 | return X_features, Y_label 82 | -------------------------------------------------------------------------------- /epynnlive/ptm_protein/settings.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynn/settings.py 2 | 3 | 4 | # HYPERPARAMETERS SETTINGS 5 | se_hPars = { 6 | # Schedule learning rate 7 | 'learning_rate': 0.1, 8 | 'schedule': 'steady', 9 | 'decay_k': 0, 10 | 'cycle_epochs': 0, 11 | 'cycle_descent': 0, 12 | # Tune activation function 13 | 'ELU_alpha': 0.01, 14 | 'LRELU_alpha': 0.01, 15 | 'softmax_temperature': 1, 16 | } 17 | """Hyperparameters dictionary settings. 18 | 19 | Set hyperparameters for model and layer. 
20 | """ 21 | -------------------------------------------------------------------------------- /epynnlive/ptm_protein/train.py: -------------------------------------------------------------------------------- 1 | # EpyNN/epynnlive/ptm_protein/train.py 2 | # Standard library imports 3 | import random 4 | 5 | # Related third party imports 6 | import numpy as np 7 | 8 | # Local application/library specific imports 9 | import epynn.initialize 10 | from epynn.commons.maths import relu, softmax 11 | from epynn.commons.library import ( 12 | configure_directory, 13 | read_model, 14 | write_model, 15 | ) 16 | from epynn.network.models import EpyNN 17 | from epynn.embedding.models import Embedding 18 | from epynn.lstm.models import LSTM 19 | from epynn.flatten.models import Flatten 20 | from epynn.dropout.models import Dropout 21 | from epynn.dense.models import Dense 22 | from prepare_dataset import ( 23 | prepare_dataset, 24 | download_sequences, 25 | ) 26 | from settings import se_hPars 27 | 28 | 29 | ########################## CONFIGURE ########################## 30 | random.seed(1) 31 | 32 | np.set_printoptions(threshold=10) 33 | 34 | np.seterr(all='warn') 35 | np.seterr(under='ignore') 36 | 37 | configure_directory() 38 | 39 | ############################ DATASET ########################## 40 | download_sequences() 41 | 42 | X_features, Y_label = prepare_dataset(N_SAMPLES=1280) 43 | 44 | ####################### BUILD AND TRAIN MODEL ################# 45 | embedding = Embedding(X_data=X_features, 46 | Y_data=Y_label, 47 | X_encode=True, 48 | Y_encode=True, 49 | batch_size=32, 50 | relative_size=(2, 1, 0)) 51 | 52 | 53 | ### Feed-Forward 54 | 55 | # Model 56 | name = 'Flatten_Dropout-02_Dense-64-relu_Dropout-03_Dense-2-softmax' 57 | 58 | se_hPars['learning_rate'] = 0.001 59 | 60 | flatten = Flatten() 61 | 62 | dropout1 = Dropout(drop_prob=0.2) 63 | 64 | hidden_dense = Dense(64, relu) 65 | 66 | dropout2 = Dropout(drop_prob=0.3) 67 | 68 | dense = Dense(2, softmax) 69 | 70 | layers = [embedding, flatten, dropout1, hidden_dense, dropout2, dense] 71 | 72 | model = EpyNN(layers=layers, name=name) 73 | 74 | model.initialize(loss='MSE', seed=1, se_hPars=se_hPars.copy()) 75 | 76 | model.train(epochs=100, init_logs=False) 77 | 78 | model.plot(path=False) 79 | 80 | 81 | ### Recurrent 82 | 83 | # Model 84 | name = 'LSTM-8_Dense-2-softmax' 85 | 86 | se_hPars['learning_rate'] = 0.1 87 | se_hPars['softmax_temperature'] = 5 88 | 89 | lstm = LSTM(8) 90 | 91 | dense = Dense(2, softmax) 92 | 93 | layers = [embedding, lstm, dense] 94 | 95 | model = EpyNN(layers=layers, name=name) 96 | 97 | model.initialize(loss='MSE', seed=1, se_hPars=se_hPars.copy()) 98 | 99 | model.train(epochs=20, init_logs=False) 100 | 101 | # Model 102 | name = 'LSTM-8-Seq_Flatten_Dense-2-softmax' 103 | 104 | se_hPars['learning_rate'] = 0.1 105 | se_hPars['softmax_temperature'] = 5 106 | 107 | lstm = LSTM(8, sequences=True) 108 | 109 | flatten = Flatten() 110 | 111 | dense = Dense(2, softmax) 112 | 113 | layers = [embedding, lstm, flatten, dense] 114 | 115 | model = EpyNN(layers=layers, name=name) 116 | 117 | model.initialize(loss='MSE', seed=1, se_hPars=se_hPars.copy()) 118 | 119 | model.train(epochs=20, init_logs=False) 120 | 121 | model.plot(path=False) 122 | 123 | # Model 124 | name = 'LSTM-8-Seq_Flatten_Dropout-05_Dense-64-relu_Dropout-05_Dense-2-softmax' 125 | 126 | se_hPars['learning_rate'] = 0.1 127 | se_hPars['softmax_temperature'] = 5 128 | 129 | layers = [ 130 | embedding, 131 | LSTM(8, sequences=True), 132 | Flatten(), 133 | 
Dropout(drop_prob=0.5), 134 | Dense(64, relu), 135 | Dropout(drop_prob=0.5), 136 | Dense(2, softmax), 137 | ] 138 | 139 | model = EpyNN(layers=layers, name=name) 140 | 141 | model.initialize(loss='MSE', seed=1, se_hPars=se_hPars.copy(), end='\r') 142 | 143 | model.train(epochs=20, init_logs=False) 144 | 145 | model.plot(path=False) 146 | 147 | 148 | ### Write/read model 149 | 150 | model.write() 151 | # model.write(path=/your/custom/path) 152 | 153 | model = read_model() 154 | # model = read_model(path=/your/custom/path) 155 | 156 | 157 | ### Predict 158 | 159 | X_features, _ = prepare_dataset(N_SAMPLES=10) 160 | 161 | dset = model.predict(X_features, X_encode=True) 162 | 163 | for n, pred, probs in zip(dset.ids, dset.P, dset.A): 164 | print(n, pred, probs) 165 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = [ 3 | "setuptools>=42", 4 | "wheel" 5 | ] 6 | build-backend = "setuptools.build_meta" 7 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | cycler==0.10.0 2 | kiwisolver==1.3.2 3 | numpy==1.21.2 4 | Pillow==8.3.1 5 | Pygments==2.10.0 6 | pyparsing==2.4.7 7 | python-dateutil==2.8.2 8 | six==1.16.0 9 | tabulate==0.8.9 10 | termcolor==1.1.0 11 | texttable==1.6.4 12 | jupyter==1.0.0 13 | nbconvert==5.4.1 14 | scipy 15 | wget==3.2 16 | utilsovs-pkg 17 | matplotlib 18 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [metadata] 2 | name = EpyNN 3 | version = 1.2.11 4 | author = Florian Malard and Stéphanie Olivier-Van Stichelen 5 | author_email = florian.malard@gmail.com 6 | description = EpyNN: Educational python for Neural Networks. 7 | long_description = file: README.md 8 | long_description_content_type = text/markdown 9 | license = GPL-3.0 License 10 | url = https://github.com/synthaze/EpyNN 11 | project_urls = 12 | Bug Tracker = https://github.com/synthaze/EpyNN/issues 13 | classifiers = 14 | License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+) 15 | Programming Language :: Python :: 3.7 16 | Operating System :: OS Independent 17 | 18 | [options] 19 | include_package_data=True 20 | scripts = 21 | bin/epynn 22 | package_dir = 23 | = . 24 | packages = find: 25 | python_requires = >=3.7 26 | 27 | [options.packages.find] 28 | where = . 29 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | from setuptools import setup 3 | 4 | 5 | with open('requirements.txt') as f: 6 | required = f.read().splitlines() 7 | 8 | setup(install_requires=required) 9 | --------------------------------------------------------------------------------
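Given the pyproject.toml, requirements.txt, setup.cfg and setup.py above, a local development install would typically be ``pip install -r requirements.txt`` followed by ``pip install -e .`` from the repository root (standard setuptools usage; not a command documented in the files shown here).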