├── .DS_Store
├── README.md
├── common
│   └── DeepGradientCompressionOptimizer.py
└── examples
    ├── .DS_Store
    ├── BERT
    │   ├── .dockerignore
    │   ├── .gitignore
    │   ├── .gitmodules
    │   ├── CONTRIBUTING.md
    │   ├── Dockerfile
    │   ├── LICENSE
    │   ├── NOTICE
    │   ├── README.md
    │   ├── __init__.py
    │   ├── data
    │   │   ├── README.md
    │   │   ├── bookcorpus
    │   │   │   ├── clean_and_merge_text.py
    │   │   │   ├── config.sh
    │   │   │   ├── create_pseudo_test_set.py
    │   │   │   ├── create_pseudo_test_set.sh
    │   │   │   ├── preprocessing.sh
    │   │   │   ├── preprocessing_test_set.sh
    │   │   │   ├── preprocessing_test_set_xargs_wrapper.sh
    │   │   │   ├── preprocessing_xargs_wrapper.sh
    │   │   │   ├── run_preprocessing.sh
    │   │   │   ├── sentence_segmentation_nltk.py
    │   │   │   └── shard_text_input_file.py
    │   │   ├── glue
    │   │   │   └── download_glue_data.py
    │   │   ├── images
    │   │   │   ├── bert_pipeline.png
    │   │   │   ├── trtis_base_summary.png
    │   │   │   ├── trtis_bs_1.png
    │   │   │   ├── trtis_bs_8.png
    │   │   │   ├── trtis_dynamic.png
    │   │   │   ├── trtis_ec_1.png
    │   │   │   ├── trtis_ec_4.png
    │   │   │   ├── trtis_large_summary.png
    │   │   │   └── trtis_static.png
    │   │   ├── pretrained_models_google
    │   │   │   └── download_models.py
    │   │   ├── squad
    │   │   │   └── squad_download.sh
    │   │   └── wikipedia_corpus
    │   │       ├── config.sh
    │   │       ├── create_pseudo_test_set.py
    │   │       ├── create_pseudo_test_set.sh
    │   │       ├── preprocessing.sh
    │   │       ├── preprocessing_test_set.sh
    │   │       ├── preprocessing_test_set_xargs_wrapper.sh
    │   │       ├── preprocessing_xargs_wrapper.sh
    │   │       ├── remove_tags_and_clean.py
    │   │       ├── run_preprocessing.sh
    │   │       ├── shard_text_input_file.py
    │   │       ├── wiki_sentence_segmentation_nltk.py
    │   │       ├── wiki_sentence_segmentation_spacy.py
    │   │       └── wiki_sentence_segmentation_spacy_pipe.py
    │   ├── extract_features.py
    │   ├── fp16_utils.py
    │   ├── fused_layer_norm.py
    │   ├── gpu_environment.py
    │   ├── modeling.py
    │   ├── modeling_test.py
    │   ├── multilingual.md
    │   ├── optimization.py
    │   ├── optimization_test.py
    │   ├── predicting_movie_reviews_with_bert_on_tf_hub.ipynb
    │   ├── requirements.txt
    │   ├── run_classifier.py
    │   ├── run_classifier_with_tfhub.py
    │   ├── run_pretraining.py
    │   ├── run_pretraining.sh
    │   ├── run_squad.py
    │   ├── run_squad_trtis_client.py
    │   ├── sample_text.txt
    │   ├── scripts
    │   │   ├── data_download.sh
    │   │   ├── data_download_helper.sh
    │   │   ├── docker
    │   │   │   ├── build.sh
    │   │   │   ├── launch.sh
    │   │   │   └── launch_server.sh
    │   │   ├── finetune_inference_benchmark.sh
    │   │   ├── finetune_train_benchmark.sh
    │   │   ├── run_glue.sh
    │   │   ├── run_pretraining.sh
    │   │   ├── run_squad.sh
    │   │   ├── run_squad_inference.sh
    │   │   └── trtis
    │   │       ├── export_model.sh
    │   │       ├── generate_figures.sh
    │   │       ├── run_client.sh
    │   │       ├── run_perf_client.sh
    │   │       ├── run_trtis.sh
    │   │       └── wait_for_trtis_server.sh
    │   ├── tokenization.py
    │   ├── tokenization_test.py
    │   └── utils
    │       ├── create_glue_data.py
    │       ├── create_pretraining_data.py
    │       ├── create_squad_data.py
    │       └── utils.py
    └── MNIST
        ├── mnist_estimator.py
        └── scripts
            └── run_mnist_estimator.sh
/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Tejalsjsu/DeepGradientCompression/f639ce40fed8acca60aa01b50c2100fb51294a6e/.DS_Store
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # DeepGradientCompression
2 | This is an implementation of the research paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training". Deep gradient compression is a technique in which gradients are compressed before they are sent. This greatly reduces the communication bandwidth and thus improves multi-node training.
3 |
4 | Motivation behind Deep Gradient Compression
5 |
6 | To decrease training time, we can increase the number of GPUs and the amount of computation. Ideally, adding 64 nodes should make training 64x faster, but in reality that speedup is hard to achieve: the more nodes there are, the more communication is required.
7 | Deep gradient compression tackles this by compressing the gradients before they are sent, greatly reducing the communication bandwidth and thereby improving multi-node training.
8 |
9 | Implementation
10 |
11 | I implemented a gradient sparsification approach: I sparsify the gradients by dropping the R% of gradients whose absolute value is below a threshold, which I call gradient dropping. This is slightly different from the approach proposed by Dryden et al. (2016), as I use a single absolute threshold instead of dropping a fixed number of positive and negative gradients separately; it is simpler and works just as well. Small gradients can accumulate over time, and simply dropping them could damage convergence, so gradient accumulation is used to avoid losing important gradients. Each task and its corresponding experiments are described below. All experiments use the DeepGradientCompressionOptimizer I created, which wraps Horovod's distributed optimizer. The core of the approach is sketched below.
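A minimal sketch of gradient dropping with accumulation (illustrative only, not the library code itself; `residual` is a hypothetical name for the running buffer of small gradients):

```python
import tensorflow as tf

def drop_gradients(grad, residual, threshold):
    # Add back the small gradients accumulated in earlier iterations.
    grad = grad + residual
    keep = tf.abs(grad) >= threshold
    # Entries above the threshold are sent; the rest stay in the local
    # residual so they accumulate instead of being lost.
    sent = tf.where(keep, grad, tf.zeros_like(grad))
    new_residual = tf.where(keep, tf.zeros_like(grad), grad)
    return sent, new_residual
```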
12 |
13 |
14 | Masking
15 | To test how the model behaves when gradients are dropped, masking was used: every absolute value less than the absolute threshold is set to zero, and the gradient is passed as-is to the original optimizer. For example, with a threshold of 1, all entries whose absolute value is less than 1 are set to 0, as in the snippet below.
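A minimal illustration of the masking step with a threshold of 1 (hypothetical values):

```python
import tensorflow as tf

grad = tf.constant([0.3, -1.5, 0.9, 2.0, -0.2])
threshold = 1.0
# Zero every entry whose absolute value is below the threshold.
mask = tf.cast(tf.abs(grad) >= threshold, grad.dtype)
masked_grad = grad * mask   # -> [0.0, -1.5, 0.0, 2.0, 0.0]
```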
16 |
17 |
18 | Single Node - MNIST
19 |
20 | % of gradients sent | Accuracy | Loss |
21 | --------------|-----------|--------|
22 | 100 (send all)| 0.9908 | 0.0285 |
23 | 10 | 0.9887 | 0.0331 |
24 | 5 | 0.9873 | 0.0381 |
25 | 3 | 0.9868 | 0.0396 |
26 | 2 | 0.9852 | 0.0446 |
27 | 1 | 0.9828 | 0.0543 |
28 | 0.9 | 0.9835 | 0.0519 |
29 | 0.5 | 0.9832 | 0.0468 |
30 | 0.3 | 0.9752 | 0.08 |
31 | 0.2 | 0.9452 | 0.1918 |
32 | 0.1 | 0.4099 | 2.2113 |
33 | 0 (Send None) | 0.2211 | 2.247 |
34 |
35 |
36 |
37 | Multi Node - MNIST (2 nodes)
38 |
39 | % of gradients sent | Accuracy | Loss |
40 | --------------|----------|---------|
41 | 100(send all) | 0.9908 | 0.021 |
42 | 95 | 0.9905 | 0.0266 |
43 | 80 | 0.9905 | 0.0292 |
44 | 70 | 0.9898 | 0.0306 |
45 | 50 | 0.9893 | 0.0304 |
46 | 30 | 0.9882 | 0.0345 |
47 | 1 | 0.9753 | 0.0788 |
48 | 0.5 | 0.9765 | 0.0772 |
49 | 0.2 | 0.9452 | 0.1918 |
50 | 0.1 | 0.2385 | 2.2694 |
51 | 0 (Send None) | 0.1386 | 2.2822 |
52 |
53 |
54 | The Library
55 |
56 | The DeepGradientCompressionOptimizer wraps tf.train.Optimizer, the base class for optimizers, and overrides the compute_gradients and apply_gradients methods.
57 |
58 | **compute_gradients** – computes the gradients and returns a list of (gradient, variable) pairs, where "gradient" is the gradient for "variable". In general a gradient can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable; here, the returned gradients are IndexedSlices objects.
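For illustration, the IndexedSlices for a length-5 gradient in which only two entries survive might look like this (hypothetical values):

```python
import tensorflow as tf

values = tf.constant([-1.5, 2.0])               # surviving gradient values
indices = tf.constant([1, 3], dtype=tf.int64)   # their positions in the flattened gradient
dense_shape = tf.constant([5], dtype=tf.int64)  # shape of the full dense gradient
sparse_grad = tf.IndexedSlices(values, indices, dense_shape)
```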
59 |
60 | **apply_gradients** – applies the gradients to the variables. This is the second part of minimize(); it returns an Operation that applies the gradients.
61 |
62 | How it can be used
63 |
64 |
65 |
66 | 1. Download the optimizer file and add it to your source code directory.
67 | 2. Import the class in the model file. The model should already be using Horovod's distributed optimizer.
68 | 3. Create a DeepGradientCompressionOptimizer that wraps the existing optimizer, and call its compute_gradients method.
69 | 4. Call the sparse_to_dense method to convert the returned tensors back to dense tensors.
70 | 5. Call apply_gradients (a full usage sketch follows below).
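Putting the steps together, a minimal sketch might look like the following (the toy model, import path, and hyperparameters are illustrative; assumes Horovod's TF1 API):

```python
import tensorflow as tf
import horovod.tensorflow as hvd
from DeepGradientCompressionOptimizer import DeepGradientCompressionOptimizer

hvd.init()

# Hypothetical toy model; any TF1 model with a loss tensor works the same way.
x = tf.placeholder(tf.float32, [None, 10])
y = tf.placeholder(tf.float32, [None, 1])
pred = tf.layers.dense(x, 1)
loss = tf.losses.mean_squared_error(y, pred)

opt = tf.train.AdamOptimizer(learning_rate=0.001 * hvd.size())
opt = hvd.DistributedOptimizer(opt)                   # step 2: Horovod's distributed optimizer
opt = DeepGradientCompressionOptimizer(opt)           # step 3: wrap it
grads_and_vars = opt.compute_gradients(loss)          # step 3: sparsified gradients
grads_and_vars = opt.sparse_to_dense(grads_and_vars)  # step 4: back to dense
train_op = opt.apply_gradients(grads_and_vars)        # step 5
```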
71 |
72 |
73 |
74 | *Use Cases*
75 |
76 | The optimizer can be used with any model that uses an optimizer. For example, I have used it with BERT and MNIST, and I am working on integrating it with ResNet.
77 |
78 | *Further Steps*
79 |
80 |
81 | 1. Optimizing thresholding - I am going through research papers that suggest optimal ways of finding the threshold. I am trying to implement sampling to reduce top-k selection time: sample only 0.1% to 1% of the gradients and perform top-k selection on the sample to estimate the threshold for the entire population. If the number of gradients exceeding the threshold is far larger than expected, a precise threshold is recalculated from the already-selected gradients. Hierarchically calculating the threshold in this way significantly reduces top-k selection time, which reduces overall training time while maintaining accuracy (see the sketch after this list).
82 | 2. Conduct experiments on BERT pre-training with 32 nodes.
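A sketch of the sampling idea described in step 1 (a hypothetical helper under the stated assumptions; samples roughly 1% of the gradients and estimates the top-k threshold from the sample):

```python
import tensorflow as tf

def estimate_threshold(flat_grad, keep_ratio=0.02, sample_ratio=0.01):
    # Estimate the top-k magnitude threshold from a small random sample
    # instead of running top-k selection over every gradient.
    n = tf.size(flat_grad)
    n_sample = tf.maximum(1, tf.cast(tf.cast(n, tf.float32) * sample_ratio, tf.int32))
    idx = tf.random_uniform([n_sample], maxval=n, dtype=tf.int32)
    sample = tf.abs(tf.gather(flat_grad, idx))
    k = tf.maximum(1, tf.cast(tf.cast(n_sample, tf.float32) * keep_ratio, tf.int32))
    top_k = tf.nn.top_k(sample, k=k).values
    return top_k[-1]   # smallest sampled top-k value approximates the global threshold
```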
83 |
84 |
85 |
86 |
--------------------------------------------------------------------------------
/common/DeepGradientCompressionOptimizer.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import re
3 | import numpy as np
4 | 
5 | class DeepGradientCompressionOptimizer(tf.train.Optimizer):
6 |     def __init__(self, optimizer, name=None, use_locking=False):
7 |         if name is None:
8 |             name = "DeepGradientCompressionOptimizer{}".format(type(optimizer).__name__)
9 | 
10 |         super(DeepGradientCompressionOptimizer, self).__init__(use_locking, name)
11 |         self._optimizer = optimizer
12 |         self._name = name
13 | 
14 |     def compute_gradients(self, *args, **kwargs):
15 |         grads_and_vars = self._optimizer.compute_gradients(*args, **kwargs)
16 |         grads_and_vars = [(g, v) for g, v in grads_and_vars if g is not None]
17 | 
18 |         # IndexSparsification starts over here
19 |         indexedSliced_grads = []
20 |         # Threshold at the 98th percentile of the first gradient's magnitudes,
21 |         # i.e. send roughly the top 2% of gradients; the same threshold is
22 |         # reused for every variable.
23 |         threshold = tf.contrib.distributions.percentile(tf.abs(grads_and_vars[0][0]), 98.0, interpolation='higher')
24 |         for grad, var in grads_and_vars:
25 |             # threshold = tf.contrib.distributions.percentile(tf.abs(grad), 90.0, interpolation='higher')
26 |             prev_grad = self._optimizer._get_or_make_slot(var, tf.zeros(tf.shape(grad), grad.dtype, 'prev_grad'), 'prev_grad', self._name)
27 |             grad = tf.math.add(grad, prev_grad)
28 | 
29 |             # Back up gradients that are below the threshold so they are
30 |             # accumulated and used in the next iteration.
31 |             bool_mask_less = tf.math.less(tf.abs(grad), threshold)
32 |             float_mask_less = tf.cast(bool_mask_less, grad.dtype)
33 |             backup_grads = tf.multiply(grad, float_mask_less)
34 |             # _get_or_make_slot only returns the existing slot, so the new
35 |             # residual must be written into it explicitly.
36 |             update_prev_grad = tf.assign(prev_grad, backup_grads)
37 | 
38 |             # Build an IndexedSlices holding only the surviving gradients.
39 |             with tf.control_dependencies([update_prev_grad]):
40 |                 flat_grad = tf.reshape(grad, [-1])
41 |             bool_mask = tf.math.greater(tf.abs(flat_grad), threshold)
42 |             indices = tf.reshape(tf.where(bool_mask), [-1])
43 |             values = tf.reshape(tf.gather(flat_grad, indices), [-1])
44 | 
45 |             indexed_slices = tf.IndexedSlices(values, indices, dense_shape=grad.shape)
46 |             indexedSliced_grads.append(indexed_slices)
47 |         # IndexSparsification ends over here
48 | 
49 |         return [(grad, gradvar[1]) for grad, gradvar in zip(indexedSliced_grads, grads_and_vars)]
50 | 
51 |     def apply_gradients(self, *args, **kwargs):
52 |         return self._optimizer.apply_gradients(*args, **kwargs)
53 | 
54 |     def get_slot(self, *args, **kwargs):
55 |         return self._optimizer.get_slot(*args, **kwargs)
56 | 
57 |     def get_slot_names(self, *args, **kwargs):
58 |         return self._optimizer.get_slot_names(*args, **kwargs)
59 | 
60 |     def variables(self, *args, **kwargs):
61 |         return self._optimizer.variables(*args, **kwargs)
62 | 
63 |     def _create_slots(self, var_list):
64 |         for v in var_list:
65 |             self._zeros_slot(v, "prev_grad", self._name)
66 | 
67 |     def sparse_to_dense(self, grads_and_vars):
68 |         # Convert the IndexedSlices produced by compute_gradients (or returned
69 |         # by Horovod) back to dense gradients for apply_gradients.
70 |         dense_grads = []
71 |         for sparse_grad, var in grads_and_vars:
72 |             if isinstance(sparse_grad, tf.IndexedSlices):
73 |                 indices = tf.cast(sparse_grad.indices, tf.int32)
74 |                 values = sparse_grad.values
75 |                 shape = sparse_grad.dense_shape
76 |                 dimensions, multiple = [], 1
77 |                 if shape is not None:
78 |                     dimensions = shape.as_list()
79 |                 if dimensions is not None:
80 |                     # multiple = total number of elements in the dense gradient
81 |                     if len(dimensions) == 1:
82 |                         multiple = dimensions[0]
83 |                     else:
84 |                         for dimension in dimensions:
85 |                             multiple = multiple * dimension
86 |                 # Scatter the kept values back into a flat dense tensor, then
87 |                 # restore the original shape.
88 |                 grad = tf.reshape(tf.sparse_to_dense(indices, [multiple], values, default_value=0, validate_indices=True, name=None), shape)
89 |                 dense_grads.append(grad)
90 |         grads_and_vars = [(grad, gradvar[1]) for grad, gradvar in zip(dense_grads, grads_and_vars)]
91 |         return grads_and_vars
92 | 
--------------------------------------------------------------------------------
/examples/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Tejalsjsu/DeepGradientCompression/f639ce40fed8acca60aa01b50c2100fb51294a6e/examples/.DS_Store
--------------------------------------------------------------------------------
/examples/BERT/.dockerignore:
--------------------------------------------------------------------------------
1 | .idea/
2 | .git/
3 | __pycache__/
4 | results/
5 | data/
6 | checkpoints/
7 |
--------------------------------------------------------------------------------
/examples/BERT/.gitignore:
--------------------------------------------------------------------------------
1 | # Initially taken from Github's Python gitignore file
2 |
3 | # Byte-compiled / optimized / DLL files
4 | __pycache__/
5 | *.py[cod]
6 | *$py.class
7 |
8 | # C extensions
9 | *.so
10 |
11 | #Data
12 | data/*/*/
13 | data/*/*.zip
14 |
15 | # Results
16 | results/
17 |
18 | # Distribution / packaging
19 | .Python
20 | build/
21 | develop-eggs/
22 | dist/
23 | downloads/
24 | eggs/
25 | .eggs/
26 | lib/
27 | lib64/
28 | parts/
29 | sdist/
30 | var/
31 | wheels/
32 | *.egg-info/
33 | .installed.cfg
34 | .vscode/
35 | *.egg
36 | MANIFEST
37 |
38 | # PyInstaller
39 | # Usually these files are written by a python script from a template
40 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
41 | *.manifest
42 | *.spec
43 |
44 | # Installer logs
45 | pip-log.txt
46 | pip-delete-this-directory.txt
47 |
48 | # Unit test / coverage reports
49 | htmlcov/
50 | .tox/
51 | .nox/
52 | .coverage
53 | .coverage.*
54 | .cache
55 | nosetests.xml
56 | coverage.xml
57 | *.cover
58 | .hypothesis/
59 | .pytest_cache/
60 |
61 | # Translations
62 | *.mo
63 | *.pot
64 |
65 | # Django stuff:
66 | *.log
67 | local_settings.py
68 | db.sqlite3
69 |
70 | # Flask stuff:
71 | instance/
72 | .webassets-cache
73 |
74 | # Scrapy stuff:
75 | .scrapy
76 |
77 | # Sphinx documentation
78 | docs/_build/
79 |
80 | # PyBuilder
81 | target/
82 |
83 | # Jupyter Notebook
84 | .ipynb_checkpoints
85 |
86 | # IPython
87 | profile_default/
88 | ipython_config.py
89 |
90 | # pyenv
91 | .python-version
92 |
93 | # celery beat schedule file
94 | celerybeat-schedule
95 |
96 | # SageMath parsed files
97 | *.sage.py
98 |
99 | # Environments
100 | .env
101 | .venv
102 | env/
103 | venv/
104 | ENV/
105 | env.bak/
106 | venv.bak/
107 |
108 | # Spyder project settings
109 | .spyderproject
110 | .spyproject
111 |
112 | # Rope project settings
113 | .ropeproject
114 |
115 | # mkdocs documentation
116 | /site
117 |
118 | # mypy
119 | .mypy_cache/
120 | .dmypy.json
121 | dmypy.json
122 |
123 | # Pyre type checker
124 | .pyre/
125 |
--------------------------------------------------------------------------------
/examples/BERT/.gitmodules:
--------------------------------------------------------------------------------
1 | [submodule "tensorrt-inference-server"]
2 | url = https://github.com/NVIDIA/tensorrt-inference-server.git
3 | path = tensorrt-inference-server
4 | branch = r19.06
5 |
6 |
7 |
--------------------------------------------------------------------------------
/examples/BERT/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # How to Contribute
2 |
3 | BERT needs to maintain permanent compatibility with the pre-trained model files,
4 | so we do not plan to make any major changes to this library (other than what was
5 | promised in the README). However, we can accept small patches related to
6 | re-factoring and documentation. To submit contributions, there are just a few
7 | small guidelines you need to follow.
8 |
9 | ## Contributor License Agreement
10 |
11 | Contributions to this project must be accompanied by a Contributor License
12 | Agreement. You (or your employer) retain the copyright to your contribution;
13 | this simply gives us permission to use and redistribute your contributions as
14 | part of the project. Head over to <https://cla.developers.google.com/> to see
15 | your current agreements on file or to sign a new one.
16 |
17 | You generally only need to submit a CLA once, so if you've already submitted one
18 | (even if it was for a different project), you probably don't need to do it
19 | again.
20 |
21 | ## Code reviews
22 |
23 | All submissions, including submissions by project members, require review. We
24 | use GitHub pull requests for this purpose. Consult
25 | [GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
26 | information on using pull requests.
27 |
28 | ## Community Guidelines
29 |
30 | This project follows
31 | [Google's Open Source Community Guidelines](https://opensource.google.com/conduct/).
32 |
--------------------------------------------------------------------------------
/examples/BERT/Dockerfile:
--------------------------------------------------------------------------------
1 | ARG FROM_IMAGE_NAME=nvcr.io/nvidia/tensorflow:19.06-py3
2 |
3 | FROM tensorrtserver_client as trt
4 |
5 | FROM ${FROM_IMAGE_NAME}
6 |
7 | RUN apt-get update && apt-get install -y pbzip2 pv bzip2
8 |
9 | RUN pip install toposort networkx pytest nltk tqdm html2text progressbar
10 |
11 | WORKDIR /workspace
12 | RUN git clone https://github.com/openai/gradient-checkpointing.git
13 | RUN git clone https://github.com/attardi/wikiextractor.git
14 | RUN git clone https://github.com/soskek/bookcorpus.git
15 |
16 | # Copy the perf_client over
17 | COPY --from=trt /workspace/build/perf_client /workspace/build/perf_client
18 |
19 | # Copy the python wheel and install with pip
20 | COPY --from=trt /workspace/build/dist/dist/tensorrtserver*.whl /tmp/
21 | RUN pip install /tmp/tensorrtserver*.whl && rm /tmp/tensorrtserver*.whl
22 |
23 |
24 | WORKDIR /workspace/bert
25 | COPY . .
26 |
27 | ENV PYTHONPATH=/workspace/bert
28 |
--------------------------------------------------------------------------------
/examples/BERT/LICENSE:
--------------------------------------------------------------------------------
1 |
2 | Apache License
3 | Version 2.0, January 2004
4 | http://www.apache.org/licenses/
5 |
6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
7 |
8 | 1. Definitions.
9 |
10 | "License" shall mean the terms and conditions for use, reproduction,
11 | and distribution as defined by Sections 1 through 9 of this document.
12 |
13 | "Licensor" shall mean the copyright owner or entity authorized by
14 | the copyright owner that is granting the License.
15 |
16 | "Legal Entity" shall mean the union of the acting entity and all
17 | other entities that control, are controlled by, or are under common
18 | control with that entity. For the purposes of this definition,
19 | "control" means (i) the power, direct or indirect, to cause the
20 | direction or management of such entity, whether by contract or
21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
22 | outstanding shares, or (iii) beneficial ownership of such entity.
23 |
24 | "You" (or "Your") shall mean an individual or Legal Entity
25 | exercising permissions granted by this License.
26 |
27 | "Source" form shall mean the preferred form for making modifications,
28 | including but not limited to software source code, documentation
29 | source, and configuration files.
30 |
31 | "Object" form shall mean any form resulting from mechanical
32 | transformation or translation of a Source form, including but
33 | not limited to compiled object code, generated documentation,
34 | and conversions to other media types.
35 |
36 | "Work" shall mean the work of authorship, whether in Source or
37 | Object form, made available under the License, as indicated by a
38 | copyright notice that is included in or attached to the work
39 | (an example is provided in the Appendix below).
40 |
41 | "Derivative Works" shall mean any work, whether in Source or Object
42 | form, that is based on (or derived from) the Work and for which the
43 | editorial revisions, annotations, elaborations, or other modifications
44 | represent, as a whole, an original work of authorship. For the purposes
45 | of this License, Derivative Works shall not include works that remain
46 | separable from, or merely link (or bind by name) to the interfaces of,
47 | the Work and Derivative Works thereof.
48 |
49 | "Contribution" shall mean any work of authorship, including
50 | the original version of the Work and any modifications or additions
51 | to that Work or Derivative Works thereof, that is intentionally
52 | submitted to Licensor for inclusion in the Work by the copyright owner
53 | or by an individual or Legal Entity authorized to submit on behalf of
54 | the copyright owner. For the purposes of this definition, "submitted"
55 | means any form of electronic, verbal, or written communication sent
56 | to the Licensor or its representatives, including but not limited to
57 | communication on electronic mailing lists, source code control systems,
58 | and issue tracking systems that are managed by, or on behalf of, the
59 | Licensor for the purpose of discussing and improving the Work, but
60 | excluding communication that is conspicuously marked or otherwise
61 | designated in writing by the copyright owner as "Not a Contribution."
62 |
63 | "Contributor" shall mean Licensor and any individual or Legal Entity
64 | on behalf of whom a Contribution has been received by Licensor and
65 | subsequently incorporated within the Work.
66 |
67 | 2. Grant of Copyright License. Subject to the terms and conditions of
68 | this License, each Contributor hereby grants to You a perpetual,
69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
70 | copyright license to reproduce, prepare Derivative Works of,
71 | publicly display, publicly perform, sublicense, and distribute the
72 | Work and such Derivative Works in Source or Object form.
73 |
74 | 3. Grant of Patent License. Subject to the terms and conditions of
75 | this License, each Contributor hereby grants to You a perpetual,
76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
77 | (except as stated in this section) patent license to make, have made,
78 | use, offer to sell, sell, import, and otherwise transfer the Work,
79 | where such license applies only to those patent claims licensable
80 | by such Contributor that are necessarily infringed by their
81 | Contribution(s) alone or by combination of their Contribution(s)
82 | with the Work to which such Contribution(s) was submitted. If You
83 | institute patent litigation against any entity (including a
84 | cross-claim or counterclaim in a lawsuit) alleging that the Work
85 | or a Contribution incorporated within the Work constitutes direct
86 | or contributory patent infringement, then any patent licenses
87 | granted to You under this License for that Work shall terminate
88 | as of the date such litigation is filed.
89 |
90 | 4. Redistribution. You may reproduce and distribute copies of the
91 | Work or Derivative Works thereof in any medium, with or without
92 | modifications, and in Source or Object form, provided that You
93 | meet the following conditions:
94 |
95 | (a) You must give any other recipients of the Work or
96 | Derivative Works a copy of this License; and
97 |
98 | (b) You must cause any modified files to carry prominent notices
99 | stating that You changed the files; and
100 |
101 | (c) You must retain, in the Source form of any Derivative Works
102 | that You distribute, all copyright, patent, trademark, and
103 | attribution notices from the Source form of the Work,
104 | excluding those notices that do not pertain to any part of
105 | the Derivative Works; and
106 |
107 | (d) If the Work includes a "NOTICE" text file as part of its
108 | distribution, then any Derivative Works that You distribute must
109 | include a readable copy of the attribution notices contained
110 | within such NOTICE file, excluding those notices that do not
111 | pertain to any part of the Derivative Works, in at least one
112 | of the following places: within a NOTICE text file distributed
113 | as part of the Derivative Works; within the Source form or
114 | documentation, if provided along with the Derivative Works; or,
115 | within a display generated by the Derivative Works, if and
116 | wherever such third-party notices normally appear. The contents
117 | of the NOTICE file are for informational purposes only and
118 | do not modify the License. You may add Your own attribution
119 | notices within Derivative Works that You distribute, alongside
120 | or as an addendum to the NOTICE text from the Work, provided
121 | that such additional attribution notices cannot be construed
122 | as modifying the License.
123 |
124 | You may add Your own copyright statement to Your modifications and
125 | may provide additional or different license terms and conditions
126 | for use, reproduction, or distribution of Your modifications, or
127 | for any such Derivative Works as a whole, provided Your use,
128 | reproduction, and distribution of the Work otherwise complies with
129 | the conditions stated in this License.
130 |
131 | 5. Submission of Contributions. Unless You explicitly state otherwise,
132 | any Contribution intentionally submitted for inclusion in the Work
133 | by You to the Licensor shall be under the terms and conditions of
134 | this License, without any additional terms or conditions.
135 | Notwithstanding the above, nothing herein shall supersede or modify
136 | the terms of any separate license agreement you may have executed
137 | with Licensor regarding such Contributions.
138 |
139 | 6. Trademarks. This License does not grant permission to use the trade
140 | names, trademarks, service marks, or product names of the Licensor,
141 | except as required for reasonable and customary use in describing the
142 | origin of the Work and reproducing the content of the NOTICE file.
143 |
144 | 7. Disclaimer of Warranty. Unless required by applicable law or
145 | agreed to in writing, Licensor provides the Work (and each
146 | Contributor provides its Contributions) on an "AS IS" BASIS,
147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
148 | implied, including, without limitation, any warranties or conditions
149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
150 | PARTICULAR PURPOSE. You are solely responsible for determining the
151 | appropriateness of using or redistributing the Work and assume any
152 | risks associated with Your exercise of permissions under this License.
153 |
154 | 8. Limitation of Liability. In no event and under no legal theory,
155 | whether in tort (including negligence), contract, or otherwise,
156 | unless required by applicable law (such as deliberate and grossly
157 | negligent acts) or agreed to in writing, shall any Contributor be
158 | liable to You for damages, including any direct, indirect, special,
159 | incidental, or consequential damages of any character arising as a
160 | result of this License or out of the use or inability to use the
161 | Work (including but not limited to damages for loss of goodwill,
162 | work stoppage, computer failure or malfunction, or any and all
163 | other commercial damages or losses), even if such Contributor
164 | has been advised of the possibility of such damages.
165 |
166 | 9. Accepting Warranty or Additional Liability. While redistributing
167 | the Work or Derivative Works thereof, You may choose to offer,
168 | and charge a fee for, acceptance of support, warranty, indemnity,
169 | or other liability obligations and/or rights consistent with this
170 | License. However, in accepting such obligations, You may act only
171 | on Your own behalf and on Your sole responsibility, not on behalf
172 | of any other Contributor, and only if You agree to indemnify,
173 | defend, and hold each Contributor harmless for any liability
174 | incurred by, or claims asserted against, such Contributor by reason
175 | of your accepting any such warranty or additional liability.
176 |
177 | END OF TERMS AND CONDITIONS
178 |
179 | APPENDIX: How to apply the Apache License to your work.
180 |
181 | To apply the Apache License to your work, attach the following
182 | boilerplate notice, with the fields enclosed by brackets "[]"
183 | replaced with your own identifying information. (Don't include
184 | the brackets!) The text should be enclosed in the appropriate
185 | comment syntax for the file format. We also recommend that a
186 | file or class name and description of purpose be included on the
187 | same "printed page" as the copyright notice for easier
188 | identification within third-party archives.
189 |
190 | Copyright [yyyy] [name of copyright owner]
191 |
192 | Licensed under the Apache License, Version 2.0 (the "License");
193 | you may not use this file except in compliance with the License.
194 | You may obtain a copy of the License at
195 |
196 | http://www.apache.org/licenses/LICENSE-2.0
197 |
198 | Unless required by applicable law or agreed to in writing, software
199 | distributed under the License is distributed on an "AS IS" BASIS,
200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
201 | See the License for the specific language governing permissions and
202 | limitations under the License.
203 |
--------------------------------------------------------------------------------
/examples/BERT/NOTICE:
--------------------------------------------------------------------------------
1 | BERT TensorFlow
2 |
3 | This repository includes software from https://github.com/google-research/bert
4 | licensed under the Apache License, Version 2.0 (the "License")
--------------------------------------------------------------------------------
/examples/BERT/__init__.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2018 The Google AI Language Team Authors.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 |
--------------------------------------------------------------------------------
/examples/BERT/data/README.md:
--------------------------------------------------------------------------------
1 | Steps to reproduce the datasets from the web
2 |
3 | 1) Build the container
4 | * docker build -t bert_tf .
5 | 2) Run the container interactively
6 | * nvidia-docker run -it --ipc=host bert_tf
7 | * Optional: Mount data volumes
8 | * -v yourpath:/workspace/bert/data/wikipedia_corpus/download
9 | * -v yourpath:/workspace/bert/data/wikipedia_corpus/extracted_articles
10 | * -v yourpath:/workspace/bert/data/wikipedia_corpus/raw_data
11 | * -v yourpath:/workspace/bert/data/wikipedia_corpus/intermediate_files
12 | * -v yourpath:/workspace/bert/data/wikipedia_corpus/final_text_file_single
13 | * -v yourpath:/workspace/bert/data/wikipedia_corpus/final_text_files_sharded
14 | * -v yourpath:/workspace/bert/data/wikipedia_corpus/final_tfrecords_sharded
15 | * -v yourpath:/workspace/bert/data/bookcorpus/download
16 | * -v yourpath:/workspace/bert/data/bookcorpus/final_text_file_single
17 | * -v yourpath:/workspace/bert/data/bookcorpus/final_text_files_sharded
18 | * -v yourpath:/workspace/bert/data/bookcorpus/final_tfrecords_sharded
19 | * Optional: Select visible GPUs
20 | * -e CUDA_VISIBLE_DEVICES=0
21 |
22 | **Inside of the container starting here**
23 | 3) Download pretrained weights (they contain vocab files for preprocessing)
24 | * cd data/pretrained_models_google && python3 download_models.py
25 | 4) "One-click" SQuAD download
26 | * cd /workspace/bert/data/squad && . squad_download.sh
27 | 5) "One-click" Wikipedia data download and prep (provides tfrecords)
28 | * Set your configuration in data/wikipedia_corpus/config.sh
29 | * cd /workspace/bert/data/wikipedia_corpus && ./run_preprocessing.sh
30 | 6) "One-click" BookCorpus data download and prep (provides tfrecords)
31 | * Set your configuration in data/bookcorpus/config.sh
32 | * cd /workspace/bert/data/bookcorpus && ./run_preprocessing.sh
33 |
--------------------------------------------------------------------------------
/examples/BERT/data/bookcorpus/clean_and_merge_text.py:
--------------------------------------------------------------------------------
1 | # NVIDIA
2 |
3 | import glob
4 | import os
5 |
6 | output_file = os.environ['WORKING_DIR'] + '/intermediate_files/bookcorpus.txt'
7 | download_path = os.environ['WORKING_DIR'] + '/download/'
8 |
9 | with open(output_file, "w") as ofile:
10 |     for filename in glob.glob(download_path + '*.txt', recursive=True):
11 |         with open(filename, mode='r', encoding="utf-8-sig") as file:
12 |             for line in file:
13 |                 if line.strip() != "":
14 |                     ofile.write(line.strip() + " ")
15 |         ofile.write("\n\n ")
16 |
--------------------------------------------------------------------------------
/examples/BERT/data/bookcorpus/config.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | set -e
4 |
5 | USE_BERT_LARGE=true
6 | MAX_SEQUENCE_LENGTH=512
7 | MAX_PREDICTIONS_PER_SEQUENCE=80
8 | MASKED_LM_PROB=0.15
9 | SEED=12345
10 | DUPE_FACTOR=5
11 | DO_LOWER_CASE="True"
12 | N_LINES_PER_SHARD_APPROX=396000 # Default=396000 creates 256 shards
13 |
14 | N_PROCS_PREPROCESS=4 # Adjust this based on memory requirements and available number of cores
15 | export WORKING_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
16 |
17 | BERT_BASE_DIR="${WORKING_DIR}/../pretrained_models_google/uncased_L-12_H-768_A-12"
18 | BERT_LARGE_DIR="${WORKING_DIR}/../pretrained_models_google/uncased_L-24_H-1024_A-16"
19 |
20 | if [ "$USE_BERT_LARGE" = true ] ; then
21 | VOCAB_FILE="${BERT_LARGE_DIR}/vocab.txt"
22 | else
23 | VOCAB_FILE="${BERT_BASE_DIR}/vocab.txt"
24 | fi
25 |
26 | OUTPUT_DIR="${WORKING_DIR}/final_tfrecords_sharded/bert_large_bookcorpus_seq_${MAX_SEQUENCE_LENGTH}_pred_${MAX_PREDICTIONS_PER_SEQUENCE}"
27 |
28 |
--------------------------------------------------------------------------------
/examples/BERT/data/bookcorpus/create_pseudo_test_set.py:
--------------------------------------------------------------------------------
1 | # NVIDIA
2 |
3 | import glob
4 | import os
5 | import random
6 | import shutil
7 |
8 | input_dir = os.environ['WORKING_DIR'] + '/final_text_files_sharded/'
9 | output_dir = os.environ['WORKING_DIR'] + '/test_set_text_files/'
10 |
11 | random.seed(13254)
12 | n_shards_to_keep = 3
13 |
14 | file_glob = glob.glob(input_dir + '/*', recursive=False)
15 | file_glob = random.sample(file_glob, n_shards_to_keep)
16 |
17 | for filename in file_glob:
18 |     shutil.copy(filename, output_dir)
19 |
--------------------------------------------------------------------------------
/examples/BERT/data/bookcorpus/create_pseudo_test_set.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | source /workspace/bert/data/bookcorpus/config.sh
4 |
5 | # Convert test set sharded text files into tfrecords that are ready for BERT pretraining
6 | echo "Creating test set tfrecords for each text shard"
7 | mkdir -p ${WORKING_DIR}/test_set_text_files
8 | mkdir -p ${WORKING_DIR}/test_set_tfrecords
9 | python3 ${WORKING_DIR}/create_pseudo_test_set.py
10 | . ${WORKING_DIR}/preprocessing_test_set_xargs_wrapper.sh ${N_PROCS_PREPROCESS}
11 |
--------------------------------------------------------------------------------
/examples/BERT/data/bookcorpus/preprocessing.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | SHARD_INDEX=${1}
4 | INPUT_FILE="${WORKING_DIR}/final_text_files_sharded/bookcorpus.segmented.part.${SHARD_INDEX}.txt"
5 |
6 | source /workspace/bert/data/bookcorpus/config.sh
7 |
8 | OUTPUT_DIR=${WORKING_DIR}/final_tfrecords_sharded
9 | mkdir -p ${OUTPUT_DIR}
10 |
11 | OUTPUT_FILE="${OUTPUT_DIR}/tf_examples.tfrecord000${SHARD_INDEX}"
12 |
13 | python /workspace/bert/utils/create_pretraining_data.py \
14 | --input_file=${INPUT_FILE} \
15 | --output_file=${OUTPUT_FILE} \
16 | --vocab_file=${VOCAB_FILE} \
17 | --do_lower_case=${DO_LOWER_CASE} \
18 | --max_seq_length=${MAX_SEQUENCE_LENGTH} \
19 | --max_predictions_per_seq=${MAX_PREDICTIONS_PER_SEQUENCE} \
20 | --masked_lm_prob=${MASKED_LM_PROB} \
21 | --random_seed=${SEED} \
22 | --dupe_factor=${DUPE_FACTOR}
23 |
24 |
--------------------------------------------------------------------------------
/examples/BERT/data/bookcorpus/preprocessing_test_set.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | INPUT_FILE=${1}
4 |
5 | source /workspace/bert/data/bookcorpus/config.sh
6 |
7 | OUTPUT_DIR=${WORKING_DIR}/test_set_tfrecords
8 | mkdir -p ${OUTPUT_DIR}
9 |
10 | #SHARD_INDEX=$(( echo ${INPUT_FILE} | egrep -o [0-9]+ ))
11 | SHARD_INDEX=$( eval echo ${INPUT_FILE} | sed -e s/[^0-9]//g )
12 | OUTPUT_FILE="${OUTPUT_DIR}/tf_examples.tfrecord000${SHARD_INDEX}"
13 |
14 | SEED=13254
15 |
16 | echo "Shard index ${SHARD_INDEX}"
17 |
18 | python /workspace/bert/utils/create_pretraining_data.py \
19 | --input_file=${INPUT_FILE} \
20 | --output_file=${OUTPUT_FILE} \
21 | --vocab_file=${VOCAB_FILE} \
22 | --do_lower_case=${DO_LOWER_CASE} \
23 | --max_seq_length=${MAX_SEQUENCE_LENGTH} \
24 | --max_predictions_per_seq=${MAX_PREDICTIONS_PER_SEQUENCE} \
25 | --masked_lm_prob=${MASKED_LM_PROB} \
26 | --random_seed=${SEED} \
27 | --dupe_factor=${DUPE_FACTOR}
28 |
29 |
--------------------------------------------------------------------------------
/examples/BERT/data/bookcorpus/preprocessing_test_set_xargs_wrapper.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | source /workspace/bert/data/bookcorpus/config.sh
4 |
5 | SHARD_COUNT=0
6 | rm -rf /workspace/bert/data/bookcorpus/xarg_list.txt
7 | touch /workspace/bert/data/bookcorpus/xarg_list.txt
8 | for file in /workspace/bert/data/bookcorpus/test_set_text_files/*; do
9 | echo ${file} >> /workspace/bert/data/bookcorpus/xarg_list.txt
10 | done
11 |
12 | xargs -n 1 --max-procs=${N_PROCS_PREPROCESS} --arg-file=/workspace/bert/data/bookcorpus/xarg_list.txt /workspace/bert/data/bookcorpus/preprocessing_test_set.sh
13 |
--------------------------------------------------------------------------------
/examples/BERT/data/bookcorpus/preprocessing_xargs_wrapper.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | source /workspace/bert/data/bookcorpus/config.sh
4 |
5 | SHARD_COUNT=0
6 | rm -rf /workspace/bert/data/bookcorpus/xarg_list.txt
7 | touch /workspace/bert/data/bookcorpus/xarg_list.txt
8 | for file in /workspace/bert/data/bookcorpus/final_text_files_sharded/*; do
9 | echo ${SHARD_COUNT} >> /workspace/bert/data/bookcorpus/xarg_list.txt
10 | SHARD_COUNT=$((SHARD_COUNT+1))
11 | done
12 |
13 | xargs -n 1 --max-procs=${N_PROCS_PREPROCESS} --arg-file=/workspace/bert/data/bookcorpus/xarg_list.txt /workspace/bert/data/bookcorpus/preprocessing.sh
14 |
--------------------------------------------------------------------------------
/examples/BERT/data/bookcorpus/run_preprocessing.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | source /workspace/bert/data/bookcorpus/config.sh
4 |
5 | # Download books
6 | mkdir -p download
7 | python3 /workspace/bookcorpus/download_files.py --list /workspace/bookcorpus/url_list.jsonl --out ${WORKING_DIR}/download --trash-bad-count
8 |
9 | # Clean and prep (one book per line)
10 | mkdir -p ${WORKING_DIR}/intermediate_files
11 | python3 ${WORKING_DIR}/clean_and_merge_text.py
12 |
13 | # Split books into one-sentence-per-line format for use with BERT scripts
14 | echo "Applying sentence segmentation to get one sentence per line"
15 | mkdir -p ${WORKING_DIR}/final_text_file_single
16 | python3 ${WORKING_DIR}/sentence_segmentation_nltk.py
17 | # Note: NLTK can be replaced with Spacy, although it is slower (2 variations provided)
18 |
19 | # Shard finalized text so that it has a chance of fitting in memory when creating pretraining data into tfrecords (choose appropriate number of shards for distributed training)
20 | echo "Shard text files - size is approximate to prevent splitting a book across shards"
21 | mkdir -p ${WORKING_DIR}/final_text_files_sharded
22 | python3 ${WORKING_DIR}/shard_text_input_file.py
23 |
24 | # Convert sharded text files into tfrecords that are ready for BERT pretraining
25 | echo "Creating tfrecords for each text shard"
26 | mkdir -p ${WORKING_DIR}/final_tfrecords_sharded
27 | . ${WORKING_DIR}/preprocessing_xargs_wrapper.sh ${N_PROCS_PREPROCESS}
28 |
29 |
--------------------------------------------------------------------------------
/examples/BERT/data/bookcorpus/sentence_segmentation_nltk.py:
--------------------------------------------------------------------------------
1 | # NVIDIA
2 |
3 | import nltk
4 | import os
5 |
6 | nltk.download('punkt')
7 |
8 | input_file = os.environ['WORKING_DIR'] + '/intermediate_files/bookcorpus.txt'
9 | output_file = os.environ['WORKING_DIR'] + '/final_text_file_single/bookcorpus.segmented.nltk.txt'
10 |
11 | doc_seperator = "\n"
12 |
13 | with open(input_file) as ifile:
14 |     with open(output_file, "w") as ofile:
15 |         for line in ifile:
16 |             if line != "\n":
17 |                 sent_list = nltk.tokenize.sent_tokenize(line)
18 |                 for sent in sent_list:
19 |                     ofile.write(sent + "\n")
20 |                 ofile.write(doc_seperator)
21 |
--------------------------------------------------------------------------------
/examples/BERT/data/bookcorpus/shard_text_input_file.py:
--------------------------------------------------------------------------------
1 | # NVIDIA
2 |
3 | import os
4 |
5 | input_file = os.environ['WORKING_DIR'] + '/final_text_file_single/bookcorpus.segmented.nltk.txt'
6 | output_file = os.environ['WORKING_DIR'] + '/final_text_files_sharded/bookcorpus.segmented.part.'
7 |
8 | doc_seperator = "\n"
9 |
10 | line_buffer = []
11 | shard_size = 396000 # Approximate, will split at next article break
12 | line_counter = 0
13 | shard_index = 0
14 |
15 | ifile_lines = 0
16 | with open(input_file) as ifile:
17 |     for line in ifile:
18 |         ifile_lines += 1
19 |
20 | print("Input file contains", ifile_lines, "lines.")
21 |
22 | iline_counter = 1
23 | with open(input_file) as ifile:
24 |     for line in ifile:
25 |         if line_counter < shard_size and iline_counter < ifile_lines:
26 |             line_buffer.append(line)
27 |             line_counter += 1
28 |             iline_counter += 1
29 |         elif line_counter >= shard_size and line != "\n" and iline_counter < ifile_lines:
30 |             line_buffer.append(line)
31 |             line_counter += 1
32 |             iline_counter += 1
33 |         else:
34 |             with open(output_file + str(shard_index) + ".txt", "w") as ofile:
35 |                 for oline in line_buffer:
36 |                     ofile.write(oline)
37 |             line_buffer = []
38 |             line_counter = 0
39 |             shard_index += 1
40 |
41 |
42 |
--------------------------------------------------------------------------------
/examples/BERT/data/glue/download_glue_data.py:
--------------------------------------------------------------------------------
1 | #
2 | #
3 | # @unpublished{wang2018glue
4 | # title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for
5 | # Natural Language Understanding}
6 | # author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill,
7 | # Felix and Levy, Omer and Bowman, Samuel R.}
8 | # note={arXiv preprint 1804.07461}
9 | # year={2018}
10 | # }
11 | #
12 | # Script for downloading all GLUE data.
13 | # Note: for legal reasons, we are unable to host MRPC.
14 | # You can either use the version hosted by the SentEval team, which is already tokenized,
15 | # or you can download the original data from (https://download.microsoft.com/download/D/4/6/D46FF87A-F6B9-4252-AA8B-3604ED519838/MSRParaphraseCorpus.msi) and extract the data from it manually.
16 | # For Windows users, you can run the .msi file. For Mac and Linux users, consider an external library such as 'cabextract' (see below for an example).
17 | # You should then rename and place specific files in a folder (see below for an example).
18 | # mkdir MRPC
19 | # cabextract MSRParaphraseCorpus.msi -d MRPC
20 | # cat MRPC/_2DEC3DBE877E4DB192D17C0256E90F1D | tr -d $'\r' > MRPC/msr_paraphrase_train.txt
21 | # cat MRPC/_D7B391F9EAFF4B1B8BCE8F21B20B1B61 | tr -d $'\r' > MRPC/msr_paraphrase_test.txt
22 | # rm MRPC/_*
23 | # rm MSRParaphraseCorpus.msi
24 |
25 |
26 | import os
27 | import sys
28 | import shutil
29 | import argparse
30 | import tempfile
31 | import urllib
32 | import io
33 | if sys.version_info >= (3, 0):
34 |     import urllib.request
35 | import zipfile
36 | 
37 | URLLIB = urllib
38 | if sys.version_info >= (3, 0):
39 |     URLLIB = urllib.request
40 |
41 | TASKS = ["CoLA", "SST", "MRPC", "QQP", "STS", "MNLI", "SNLI", "QNLI", "RTE", "WNLI", "diagnostic"]
42 | TASK2PATH = {"CoLA":'https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FCoLA.zip?alt=media&token=46d5e637-3411-4188-bc44-5809b5bfb5f4',
43 | "SST":'https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8',
44 | "MRPC":'https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-48f4-b431-7480817f1adc',
45 | "QQP":'https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FQQP.zip?alt=media&token=700c6acf-160d-4d89-81d1-de4191d02cb5',
46 | "STS":'https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSTS-B.zip?alt=media&token=bddb94a7-8706-4e0d-a694-1109e12273b5',
47 | "MNLI":'https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FMNLI.zip?alt=media&token=50329ea1-e339-40e2-809c-10c40afff3ce',
48 | "SNLI":'https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSNLI.zip?alt=media&token=4afcfbb2-ff0c-4b2d-a09a-dbf07926f4df',
49 | "QNLI":'https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FQNLI.zip?alt=media&token=c24cad61-f2df-4f04-9ab6-aa576fa829d0',
50 | "RTE":'https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb',
51 | "WNLI":'https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FWNLI.zip?alt=media&token=068ad0a0-ded7-4bd7-99a5-5e00222e0faf',
52 | "diagnostic":'https://storage.googleapis.com/mtl-sentence-representations.appspot.com/tsvsWithoutLabels%2FAX.tsv?GoogleAccessId=firebase-adminsdk-0khhl@mtl-sentence-representations.iam.gserviceaccount.com&Expires=2498860800&Signature=DuQ2CSPt2Yfre0C%2BiISrVYrIFaZH1Lc7hBVZDD4ZyR7fZYOMNOUGpi8QxBmTNOrNPjR3z1cggo7WXFfrgECP6FBJSsURv8Ybrue8Ypt%2FTPxbuJ0Xc2FhDi%2BarnecCBFO77RSbfuz%2Bs95hRrYhTnByqu3U%2FYZPaj3tZt5QdfpH2IUROY8LiBXoXS46LE%2FgOQc%2FKN%2BA9SoscRDYsnxHfG0IjXGwHN%2Bf88q6hOmAxeNPx6moDulUF6XMUAaXCSFU%2BnRO2RDL9CapWxj%2BDl7syNyHhB7987hZ80B%2FwFkQ3MEs8auvt5XW1%2Bd4aCU7ytgM69r8JDCwibfhZxpaa4gd50QXQ%3D%3D'}
53 |
54 | MRPC_TRAIN = 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt'
55 | MRPC_TEST = 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt'
56 |
57 | def download_and_extract(task, data_dir):
58 |     print("Downloading and extracting %s..." % task)
59 |     data_file = "%s.zip" % task
60 |     URLLIB.urlretrieve(TASK2PATH[task], data_file)
61 |     with zipfile.ZipFile(data_file) as zip_ref:
62 |         zip_ref.extractall(data_dir)
63 |     os.remove(data_file)
64 |     print("\tCompleted!")
65 |
66 | def format_mrpc(data_dir, path_to_data):
67 |     print("Processing MRPC...")
68 |     mrpc_dir = os.path.join(data_dir, "MRPC")
69 |     if not os.path.isdir(mrpc_dir):
70 |         os.mkdir(mrpc_dir)
71 |     if path_to_data:
72 |         mrpc_train_file = os.path.join(path_to_data, "msr_paraphrase_train.txt")
73 |         mrpc_test_file = os.path.join(path_to_data, "msr_paraphrase_test.txt")
74 |     else:
75 |         mrpc_train_file = os.path.join(mrpc_dir, "msr_paraphrase_train.txt")
76 |         mrpc_test_file = os.path.join(mrpc_dir, "msr_paraphrase_test.txt")
77 |         URLLIB.urlretrieve(MRPC_TRAIN, mrpc_train_file)
78 |         URLLIB.urlretrieve(MRPC_TEST, mrpc_test_file)
79 |     assert os.path.isfile(mrpc_train_file), "Train data not found at %s" % mrpc_train_file
80 |     assert os.path.isfile(mrpc_test_file), "Test data not found at %s" % mrpc_test_file
81 |     URLLIB.urlretrieve(TASK2PATH["MRPC"], os.path.join(mrpc_dir, "dev_ids.tsv"))
82 | 
83 |     dev_ids = []
84 |     with io.open(os.path.join(mrpc_dir, "dev_ids.tsv"), encoding='utf-8') as ids_fh:
85 |         for row in ids_fh:
86 |             dev_ids.append(row.strip().split('\t'))
87 | 
88 |     with io.open(mrpc_train_file, encoding='utf-8') as data_fh, \
89 |          io.open(os.path.join(mrpc_dir, "train.tsv"), 'w', encoding='utf-8') as train_fh, \
90 |          io.open(os.path.join(mrpc_dir, "dev.tsv"), 'w', encoding='utf-8') as dev_fh:
91 |         header = data_fh.readline()
92 |         train_fh.write(header)
93 |         dev_fh.write(header)
94 |         for row in data_fh:
95 |             label, id1, id2, s1, s2 = row.strip().split('\t')
96 |             if [id1, id2] in dev_ids:
97 |                 dev_fh.write("%s\t%s\t%s\t%s\t%s\n" % (label, id1, id2, s1, s2))
98 |             else:
99 |                 train_fh.write("%s\t%s\t%s\t%s\t%s\n" % (label, id1, id2, s1, s2))
100 | 
101 |     with io.open(mrpc_test_file, encoding='utf-8') as data_fh, \
102 |          io.open(os.path.join(mrpc_dir, "test.tsv"), 'w', encoding='utf-8') as test_fh:
103 |         header = data_fh.readline()
104 |         test_fh.write("index\t#1 ID\t#2 ID\t#1 String\t#2 String\n")
105 |         for idx, row in enumerate(data_fh):
106 |             label, id1, id2, s1, s2 = row.strip().split('\t')
107 |             test_fh.write("%d\t%s\t%s\t%s\t%s\n" % (idx, id1, id2, s1, s2))
108 |     print("\tCompleted!")
109 |
110 | def download_diagnostic(data_dir):
111 |     print("Downloading and extracting diagnostic...")
112 |     if not os.path.isdir(os.path.join(data_dir, "diagnostic")):
113 |         os.mkdir(os.path.join(data_dir, "diagnostic"))
114 |     data_file = os.path.join(data_dir, "diagnostic", "diagnostic.tsv")
115 |     URLLIB.urlretrieve(TASK2PATH["diagnostic"], data_file)
116 |     print("\tCompleted!")
117 |     return
118 | 
119 | def get_tasks(task_names):
120 |     task_names = task_names.split(',')
121 |     if "all" in task_names:
122 |         tasks = TASKS
123 |     else:
124 |         tasks = []
125 |         for task_name in task_names:
126 |             assert task_name in TASKS, "Task %s not found!" % task_name
127 |             tasks.append(task_name)
128 |     return tasks
129 | 
130 | def main(arguments):
131 |     parser = argparse.ArgumentParser()
132 |     parser.add_argument('-d', '--data_dir', help='directory to save data to', type=str, default='.')
133 |     parser.add_argument('-t', '--tasks', help='tasks to download data for as a comma separated string',
134 |                         type=str, default='all')
135 |     parser.add_argument('--path_to_mrpc', help='path to directory containing extracted MRPC data, msr_paraphrase_train.txt and msr_paraphrase_test.txt',
136 |                         type=str, default='')
137 |     args = parser.parse_args(arguments)
138 | 
139 |     if not os.path.isdir(args.data_dir):
140 |         os.mkdir(args.data_dir)
141 |     tasks = get_tasks(args.tasks)
142 | 
143 |     for task in tasks:
144 |         if task == 'MRPC':
145 |             format_mrpc(args.data_dir, args.path_to_mrpc)
146 |         elif task == 'diagnostic':
147 |             download_diagnostic(args.data_dir)
148 |         else:
149 |             download_and_extract(task, args.data_dir)
150 | 
151 | 
152 | if __name__ == '__main__':
153 |     sys.exit(main(sys.argv[1:]))
--------------------------------------------------------------------------------
/examples/BERT/data/images/bert_pipeline.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Tejalsjsu/DeepGradientCompression/f639ce40fed8acca60aa01b50c2100fb51294a6e/examples/BERT/data/images/bert_pipeline.png
--------------------------------------------------------------------------------
/examples/BERT/data/images/trtis_base_summary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Tejalsjsu/DeepGradientCompression/f639ce40fed8acca60aa01b50c2100fb51294a6e/examples/BERT/data/images/trtis_base_summary.png
--------------------------------------------------------------------------------
/examples/BERT/data/images/trtis_bs_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Tejalsjsu/DeepGradientCompression/f639ce40fed8acca60aa01b50c2100fb51294a6e/examples/BERT/data/images/trtis_bs_1.png
--------------------------------------------------------------------------------
/examples/BERT/data/images/trtis_bs_8.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Tejalsjsu/DeepGradientCompression/f639ce40fed8acca60aa01b50c2100fb51294a6e/examples/BERT/data/images/trtis_bs_8.png
--------------------------------------------------------------------------------
/examples/BERT/data/images/trtis_dynamic.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Tejalsjsu/DeepGradientCompression/f639ce40fed8acca60aa01b50c2100fb51294a6e/examples/BERT/data/images/trtis_dynamic.png
--------------------------------------------------------------------------------
/examples/BERT/data/images/trtis_ec_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Tejalsjsu/DeepGradientCompression/f639ce40fed8acca60aa01b50c2100fb51294a6e/examples/BERT/data/images/trtis_ec_1.png
--------------------------------------------------------------------------------
/examples/BERT/data/images/trtis_ec_4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Tejalsjsu/DeepGradientCompression/f639ce40fed8acca60aa01b50c2100fb51294a6e/examples/BERT/data/images/trtis_ec_4.png
--------------------------------------------------------------------------------
/examples/BERT/data/images/trtis_large_summary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Tejalsjsu/DeepGradientCompression/f639ce40fed8acca60aa01b50c2100fb51294a6e/examples/BERT/data/images/trtis_large_summary.png
--------------------------------------------------------------------------------
/examples/BERT/data/images/trtis_static.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Tejalsjsu/DeepGradientCompression/f639ce40fed8acca60aa01b50c2100fb51294a6e/examples/BERT/data/images/trtis_static.png
--------------------------------------------------------------------------------
/examples/BERT/data/pretrained_models_google/download_models.py:
--------------------------------------------------------------------------------
1 | # NVIDIA
2 |
3 | import hashlib
4 | import urllib.request
5 | import zipfile
6 |
7 | # Download urls
8 | model_urls = {
9 | 'bert_base_uncased' : ('https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip', 'uncased_L-12_H-768_A-12.zip'),
10 | 'bert_large_uncased' : ('https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-24_H-1024_A-16.zip', 'uncased_L-24_H-1024_A-16.zip'),
11 | 'bert_base_cased' : ('https://storage.googleapis.com/bert_models/2018_10_18/cased_L-12_H-768_A-12.zip', 'cased_L-12_H-768_A-12.zip'),
12 | 'bert_large_cased' : ('https://storage.googleapis.com/bert_models/2018_10_18/cased_L-24_H-1024_A-16.zip', 'cased_L-24_H-1024_A-16.zip'),
13 | 'bert_base_multilingual_cased' : ('https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip', 'multi_cased_L-12_H-768_A-12.zip'),
14 | 'bert_large_multilingual_uncased' : ('https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip', 'multilingual_L-12_H-768_A-12.zip'),
15 | 'bert_base_chinese' : ('https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip', 'chinese_L-12_H-768_A-12.zip')
16 | }
17 |
18 | # SHA256sum verification for file download integrity (and checking for changes from the download source over time)
19 | bert_base_uncased_sha = {
20 | 'bert_config.json' : '7b4e5f53efbd058c67cda0aacfafb340113ea1b5797d9ce6ee411704ba21fcbc',
21 | 'bert_model.ckpt.data-00000-of-00001' : '58580dc5e0bf0ae0d2efd51d0e8272b2f808857f0a43a88aaf7549da6d7a8a84',
22 | 'bert_model.ckpt.index' : '04c1323086e2f1c5b7c0759d8d3e484afbb0ab45f51793daab9f647113a0117b',
23 | 'bert_model.ckpt.meta' : 'dd5682170a10c3ea0280c2e9b9a45fee894eb62da649bbdea37b38b0ded5f60e',
24 | 'vocab.txt' : '07eced375cec144d27c900241f3e339478dec958f92fddbc551f295c992038a3',
25 | }
26 |
27 | bert_large_uncased_sha = {
28 | 'bert_config.json' : 'bfa42236d269e2aeb3a6d30412a33d15dbe8ea597e2b01dc9518c63cc6efafcb',
29 | 'bert_model.ckpt.data-00000-of-00001' : 'bc6b3363e3be458c99ecf64b7f472d2b7c67534fd8f564c0556a678f90f4eea1',
30 | 'bert_model.ckpt.index' : '68b52f2205ffc64dc627d1120cf399c1ef1cbc35ea5021d1afc889ffe2ce2093',
31 | 'bert_model.ckpt.meta' : '6fcce8ff7628f229a885a593625e3d5ff9687542d5ef128d9beb1b0c05edc4a1',
32 | 'vocab.txt' : '07eced375cec144d27c900241f3e339478dec958f92fddbc551f295c992038a3',
33 | }
34 |
35 | bert_base_cased_sha = {
36 | 'bert_config.json' : 'f11dfb757bea16339a33e1bf327b0aade6e57fd9c29dc6b84f7ddb20682f48bc',
37 | 'bert_model.ckpt.data-00000-of-00001' : '734d5a1b68bf98d4e9cb6b6692725d00842a1937af73902e51776905d8f760ea',
38 | 'bert_model.ckpt.index' : '517d6ef5c41fc2ca1f595276d6fccf5521810d57f5a74e32616151557790f7b1',
39 | 'bert_model.ckpt.meta' : '5f8a9771ff25dadd61582abb4e3a748215a10a6b55947cbb66d0f0ba1694be98',
40 | 'vocab.txt' : 'eeaa9875b23b04b4c54ef759d03db9d1ba1554838f8fb26c5d96fa551df93d02',
41 | }
42 |
43 | bert_large_cased_sha = {
44 | 'bert_config.json' : '7adb2125c8225da495656c982fd1c5f64ba8f20ad020838571a3f8a954c2df57',
45 | 'bert_model.ckpt.data-00000-of-00001' : '6ff33640f40d472f7a16af0c17b1179ca9dcc0373155fb05335b6a4dd1657ef0',
46 | 'bert_model.ckpt.index' : 'ef42a53f577fbe07381f4161b13c7cab4f4fc3b167cec6a9ae382c53d18049cf',
47 | 'bert_model.ckpt.meta' : 'd2ddff3ed33b80091eac95171e94149736ea74eb645e575d942ec4a5e01a40a1',
48 | 'vocab.txt' : 'eeaa9875b23b04b4c54ef759d03db9d1ba1554838f8fb26c5d96fa551df93d02',
49 | }
50 |
51 | bert_base_multilingual_cased_sha = {
52 | 'bert_config.json' : 'e76c3964bc14a8bb37a5530cdc802699d2f4a6fddfab0611e153aa2528f234f0',
53 | 'bert_model.ckpt.data-00000-of-00001' : '55b8a2df41f69c60c5180e50a7c31b7cdf6238909390c4ddf05fbc0d37aa1ac5',
54 | 'bert_model.ckpt.index' : '7d8509c2a62b4e300feb55f8e5f1eef41638f4998dd4d887736f42d4f6a34b37',
55 | 'bert_model.ckpt.meta' : '95e5f1997e8831f1c31e5cf530f1a2e99f121e9cd20887f2dce6fe9e3343e3fa',
56 | 'vocab.txt' : 'fe0fda7c425b48c516fc8f160d594c8022a0808447475c1a7c6d6479763f310c',
57 | }
58 |
59 | bert_large_multilingual_uncased_sha = {
60 | 'bert_config.json' : '49063bb061390211d2fdd108cada1ed86faa5f90b80c8f6fdddf406afa4c4624',
61 | 'bert_model.ckpt.data-00000-of-00001' : '3cd83912ebeb0efe2abf35c9f1d5a515d8e80295e61c49b75c8853f756658429',
62 | 'bert_model.ckpt.index' : '87c372c1a3b1dc7effaaa9103c80a81b3cbab04c7933ced224eec3b8ad2cc8e7',
63 | 'bert_model.ckpt.meta' : '27f504f34f02acaa6b0f60d65195ec3e3f9505ac14601c6a32b421d0c8413a29',
64 | 'vocab.txt' : '87b44292b452f6c05afa49b2e488e7eedf79ea4f4c39db6f2f4b37764228ef3f',
65 | }
66 |
67 | bert_base_chinese_sha = {
68 | 'bert_config.json' : '7aaad0335058e2640bcb2c2e9a932b1cd9da200c46ea7b8957d54431f201c015',
69 | 'bert_model.ckpt.data-00000-of-00001' : '756699356b78ad0ef1ca9ba6528297bcb3dd1aef5feadd31f4775d7c7fc989ba',
70 | 'bert_model.ckpt.index' : '46315546e05ce62327b3e2cd1bed22836adcb2ff29735ec87721396edb21b82e',
71 | 'bert_model.ckpt.meta' : 'c0f8d51e1ab986604bc2b25d6ec0af7fd21ff94cf67081996ec3f3bf5d823047',
72 | 'vocab.txt' : '45bbac6b341c319adc98a532532882e91a9cefc0329aa57bac9ae761c27b291c',
73 | }
74 |
75 | # Relate SHA to urls for loop below
76 | model_sha = {
77 | 'bert_base_uncased' : bert_base_uncased_sha,
78 | 'bert_large_uncased' : bert_large_uncased_sha,
79 | 'bert_base_cased' : bert_base_cased_sha,
80 | 'bert_large_cased' : bert_large_cased_sha,
81 | 'bert_base_multilingual_cased' : bert_base_multilingual_cased_sha,
82 | 'bert_large_multilingual_uncased' : bert_large_multilingual_uncased_sha,
83 | 'bert_base_chinese' : bert_base_chinese_sha
84 | }
85 |
86 | # Helper to get sha256sum of a file
87 | def sha256sum(filename):
88 | h = hashlib.sha256()
89 | b = bytearray(128*1024)
90 | mv = memoryview(b)
91 | with open(filename, 'rb', buffering=0) as f:
92 | for n in iter(lambda : f.readinto(mv), 0):
93 | h.update(mv[:n])
94 | return h.hexdigest()
95 |
96 | # Iterate over urls: download, unzip, verify sha256sum
97 | found_mismatch_sha = False
98 | for model in model_urls:
99 | url = model_urls[model][0]
100 | file = model_urls[model][1]
101 |
102 | print("Downloading", url)
103 | response = urllib.request.urlopen(url)
104 | with open(file, "wb") as handle:
105 | handle.write(response.read())
106 |
107 | print("Unzipping", file)
108 | zip_ref = zipfile.ZipFile(file, 'r')
109 | zip_ref.extractall()
110 | zip_ref.close()
111 |
112 | sha_dict = model_sha[model]
113 | for extracted_file in sha_dict:
114 | sha = sha_dict[extracted_file]
115 | if sha != sha256sum(file[:-4] + "/" + extracted_file):
116 | found_mismatch_sha = True
117 | print("SHA256sum does not match on file:", extracted_file, "from download url:", url)
118 | else:
119 | print(file[:-4] + "/" + extracted_file, "\t", "verified")
120 |
121 | if not found_mismatch_sha:
122 | print("All downloads pass sha256sum verification.")
123 |
124 |
--------------------------------------------------------------------------------
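The `sha256sum` helper above streams each file through SHA-256 in 128 KiB chunks, so even multi-gigabyte checkpoint shards verify in constant memory. A minimal sketch of the same pattern used to re-verify an already-extracted model directory against the digest tables, without re-downloading (`verify_model_dir` and the example directory name are illustrative, not part of the script):

```python
# Sketch: re-verify an extracted checkpoint directory against one of the
# digest tables from download_models.py, without downloading again.
import hashlib
import os

def sha256sum(filename, chunk_size=128 * 1024):
    h = hashlib.sha256()
    with open(filename, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

def verify_model_dir(model_dir, expected_shas):
    """Return names of files that are missing or fail verification."""
    bad = []
    for name, sha in expected_shas.items():
        path = os.path.join(model_dir, name)
        if not os.path.isfile(path) or sha256sum(path) != sha:
            bad.append(name)
    return bad

# e.g. verify_model_dir('uncased_L-12_H-768_A-12', bert_base_uncased_sha) == []
```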
/examples/BERT/data/squad/squad_download.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | echo "Downloading dataset for squad..."
4 |
5 | # Download SQuAD
6 |
7 | v1="v1.1"
8 | mkdir -p $v1
9 | wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json -O $v1/train-v1.1.json
10 | wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json -O $v1/dev-v1.1.json
11 | wget https://worksheets.codalab.org/rest/bundles/0xbcd57bee090b421c982906709c8c27e1/contents/blob/ -O $v1/evaluate-v1.1.py
12 |
13 | EXP_TRAIN_v1='981b29407e0affa3b1b156f72073b945 -'
14 | EXP_DEV_v1='3e85deb501d4e538b6bc56f786231552 -'
15 | EXP_EVAL_v1='afb04912d18ff20696f7f88eed49bea9 -'
16 | CALC_TRAIN_v1=`cat ${v1}/train-v1.1.json |md5sum`
17 | CALC_DEV_v1=`cat ${v1}/dev-v1.1.json |md5sum`
18 | CALC_EVAL_v1=`cat ${v1}/evaluate-v1.1.py |md5sum`
19 |
20 | v2="v2.0"
21 | mkdir -p $v2
22 | wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json -O $v2/train-v2.0.json
23 | wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json -O $v2/dev-v2.0.json
24 | wget https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/ -O $v2/evaluate-v2.0.py
25 |
26 | EXP_TRAIN_v2='62108c273c268d70893182d5cf8df740 -'
27 | EXP_DEV_v2='246adae8b7002f8679c027697b0b7cf8 -'
28 | EXP_EVAL_v2='ff23213bed5516ea4a6d9edb6cd7d627 -'
29 |
30 | CALC_TRAIN_v2=`cat ${v2}/train-v2.0.json |md5sum`
31 | CALC_DEV_v2=`cat ${v2}/dev-v2.0.json |md5sum`
32 | CALC_EVAL_v2=`cat ${v2}/evaluate-v2.0.py |md5sum`
33 |
34 | echo "Squad data download done!"
35 |
36 | echo "Verifying Dataset...."
37 |
38 | if [ "$EXP_TRAIN_v1" != "$CALC_TRAIN_v1" ]; then
39 | echo "train-v1.1.json is corrupted! md5sum doesn't match"
40 | fi
41 |
42 | if [ "$EXP_DEV_v1" != "$CALC_DEV_v1" ]; then
43 | echo "dev-v1.1.json is corrupted! md5sum doesn't match"
44 | fi
45 | if [ "$EXP_EVAL_v1" != "$CALC_EVAL_v1" ]; then
46 | echo "evaluate-v1.1.py is corrupted! md5sum doesn't match"
47 | fi
48 |
49 |
50 | if [ "$EXP_TRAIN_v2" != "$CALC_TRAIN_v2" ]; then
51 | echo "train-v2.0.json is corrupted! md5sum doesn't match"
52 | fi
53 | if [ "$EXP_DEV_v2" != "$CALC_DEV_v2" ]; then
54 | echo "dev-v2.0.json is corrupted! md5sum doesn't match"
55 | fi
56 | if [ "$EXP_EVAL_v2" != "$CALC_EVAL_v2" ]; then
57 | echo "evaluate-v2.0.py is corrupted! md5sum doesn't match"
58 | fi
59 |
60 | echo "SQuAD download complete!"
--------------------------------------------------------------------------------
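In the script above, the expected strings end in `' -'` because each file is piped into `md5sum` through `cat`, and `md5sum` prints `-` as the file name when hashing stdin. A minimal sketch of the same v1.1 check with `hashlib`, which yields only the hex digest (digests copied from the script; assumes it is run from the same working directory):

```python
# Sketch: cross-check the SQuAD v1.1 downloads with hashlib instead of md5sum.
import hashlib

EXPECTED = {
    'v1.1/train-v1.1.json': '981b29407e0affa3b1b156f72073b945',
    'v1.1/dev-v1.1.json': '3e85deb501d4e538b6bc56f786231552',
    'v1.1/evaluate-v1.1.py': 'afb04912d18ff20696f7f88eed49bea9',
}

def md5sum(path, chunk_size=1 << 20):
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

for path, digest in EXPECTED.items():
    print(path, 'ok' if md5sum(path) == digest else "md5sum doesn't match")
```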
/examples/BERT/data/wikipedia_corpus/config.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | set -e
4 |
5 | USE_BERT_LARGE=true
6 | MAX_SEQUENCE_LENGTH=512
7 | MAX_PREDICTIONS_PER_SEQUENCE=80
8 | MASKED_LM_PROB=0.15
9 | SEED=12345
10 | DUPE_FACTOR=5
11 | DO_LOWER_CASE="True"
12 | N_LINES_PER_SHARD_APPROX=396000 # Default=396000 creates 256 shards
13 |
14 | N_PROCS_PREPROCESS=4 # Adjust this based on memory requirements and available number of cores
15 | export WORKING_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
16 |
17 | WIKI_DUMP="ftp://ftpmirror.your.org/pub/wikimedia/dumps/enwiki/20190301/enwiki-20190301-pages-articles-multistream.xml.bz2"
18 | BERT_BASE_DIR="${WORKING_DIR}/../pretrained_models_google/uncased_L-12_H-768_A-12"
19 | BERT_LARGE_DIR="${WORKING_DIR}/../pretrained_models_google/uncased_L-24_H-1024_A-16"
20 |
21 | if [ "$USE_BERT_LARGE" = true ] ; then
22 | VOCAB_FILE="${BERT_LARGE_DIR}/vocab.txt"
23 | else
24 | VOCAB_FILE="${BERT_BASE_DIR}/vocab.txt"
25 | fi
26 |
27 | OUTPUT_DIR="${WORKING_DIR}/final_tfrecords_sharded/bert_large_wikipedia_seq_${MAX_SEQUENCE_LENGTH}_pred_${MAX_PREDICTIONS_PER_SEQUENCE}"
28 |
29 |
--------------------------------------------------------------------------------
/examples/BERT/data/wikipedia_corpus/create_pseudo_test_set.py:
--------------------------------------------------------------------------------
1 | # NVIDIA
2 |
3 | import glob
4 | import os
5 | import random
6 | import shutil
7 |
8 | input_dir = os.environ['WORKING_DIR'] + '/final_text_files_sharded/'
9 | output_dir = os.environ['WORKING_DIR'] + '/test_set_text_files/'
10 |
11 | random.seed(13254)
12 | n_shards_to_keep = 3
13 |
14 | file_glob = glob.glob(input_dir + '/*', recursive=False)
15 | file_glob = random.sample(file_glob, n_shards_to_keep)
16 |
17 | for filename in file_glob:
18 | shutil.copy(filename, output_dir)
19 |
--------------------------------------------------------------------------------
/examples/BERT/data/wikipedia_corpus/create_pseudo_test_set.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | source /workspace/bert/data/wikipedia_corpus/config.sh
4 |
5 | # Convert test set sharded text files into tfrecords that are ready for BERT pretraining
6 | echo "Creating test set tfrecords for each text shard"
7 | mkdir -p ${WORKING_DIR}/test_set_text_files
8 | mkdir -p ${WORKING_DIR}/test_set_tfrecords
9 | python3 ${WORKING_DIR}/create_pseudo_test_set.py
10 | . ${WORKING_DIR}/preprocessing_test_set_xargs_wrapper.sh ${N_PROCS_PREPROCESS}
11 |
--------------------------------------------------------------------------------
/examples/BERT/data/wikipedia_corpus/preprocessing.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | source /workspace/bert/data/wikipedia_corpus/config.sh
4 |
5 | SHARD_INDEX=${1}
6 | INPUT_FILE="${WORKING_DIR}/final_text_files_sharded/wikipedia.segmented.part.${SHARD_INDEX}.txt"
7 |
8 | OUTPUT_DIR=${WORKING_DIR}/final_tfrecords_sharded
9 | mkdir -p ${OUTPUT_DIR}
10 |
11 | OUTPUT_FILE="${OUTPUT_DIR}/tf_examples.tfrecord000${SHARD_INDEX}"
12 |
13 | python /workspace/bert/utils/create_pretraining_data.py \
14 | --input_file=${INPUT_FILE} \
15 | --output_file=${OUTPUT_FILE} \
16 | --vocab_file=${VOCAB_FILE} \
17 | --do_lower_case=${DO_LOWER_CASE} \
18 | --max_seq_length=${MAX_SEQUENCE_LENGTH} \
19 | --max_predictions_per_seq=${MAX_PREDICTIONS_PER_SEQUENCE} \
20 | --masked_lm_prob=${MASKED_LM_PROB} \
21 | --random_seed=${SEED} \
22 | --dupe_factor=${DUPE_FACTOR}
23 |
24 |
--------------------------------------------------------------------------------
/examples/BERT/data/wikipedia_corpus/preprocessing_test_set.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | INPUT_FILE=${1}
4 |
5 | source /workspace/bert/data/wikipedia_corpus/config.sh
6 |
7 | OUTPUT_DIR=${WORKING_DIR}/test_set_tfrecords
8 | mkdir -p ${OUTPUT_DIR}
9 |
10 | #SHARD_INDEX=$(( echo ${INPUT_FILE} | egrep -o [0-9]+ ))
11 | SHARD_INDEX=$( echo ${INPUT_FILE} | sed -e 's/[^0-9]//g' )
12 | OUTPUT_FILE="${OUTPUT_DIR}/tf_examples.tfrecord000${SHARD_INDEX}"
13 |
14 | SEED=13254
15 |
16 | echo "Shard index ${SHARD_INDEX}"
17 |
18 | python /workspace/bert/utils/create_pretraining_data.py \
19 | --input_file=${INPUT_FILE} \
20 | --output_file=${OUTPUT_FILE} \
21 | --vocab_file=${VOCAB_FILE} \
22 | --do_lower_case=${DO_LOWER_CASE} \
23 | --max_seq_length=${MAX_SEQUENCE_LENGTH} \
24 | --max_predictions_per_seq=${MAX_PREDICTIONS_PER_SEQUENCE} \
25 | --masked_lm_prob=${MASKED_LM_PROB} \
26 | --random_seed=${SEED} \
27 | --dupe_factor=${DUPE_FACTOR}
28 |
29 |
--------------------------------------------------------------------------------
/examples/BERT/data/wikipedia_corpus/preprocessing_test_set_xargs_wrapper.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | source /workspace/bert/data/wikipedia_corpus/config.sh
4 |
5 | SHARD_COUNT=0
6 | rm -rf /workspace/bert/data/wikipedia_corpus/xarg_list.txt
7 | touch /workspace/bert/data/wikipedia_corpus/xarg_list.txt
8 | for file in /workspace/bert/data/wikipedia_corpus/test_set_text_files/*; do
9 | echo ${file} >> /workspace/bert/data/wikipedia_corpus/xarg_list.txt
10 | done
11 |
12 | xargs -n 1 --max-procs=${N_PROCS_PREPROCESS} --arg-file=/workspace/bert/data/wikipedia_corpus/xarg_list.txt /workspace/bert/data/wikipedia_corpus/preprocessing_test_set.sh
13 |
--------------------------------------------------------------------------------
/examples/BERT/data/wikipedia_corpus/preprocessing_xargs_wrapper.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | source /workspace/bert/data/wikipedia_corpus/config.sh
4 |
5 | SHARD_COUNT=0
6 | rm -rf /workspace/bert/data/wikipedia_corpus/xarg_list.txt
7 | touch /workspace/bert/data/wikipedia_corpus/xarg_list.txt
8 | for file in /workspace/bert/data/wikipedia_corpus/final_text_files_sharded/*; do
9 | echo ${SHARD_COUNT} >> /workspace/bert/data/wikipedia_corpus/xarg_list.txt
10 | SHARD_COUNT=$((SHARD_COUNT+1))
11 | done
12 |
13 | xargs -n 1 --max-procs=${N_PROCS_PREPROCESS} --arg-file=/workspace/bert/data/wikipedia_corpus/xarg_list.txt /workspace/bert/data/wikipedia_corpus/preprocessing.sh
14 |
--------------------------------------------------------------------------------
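Both wrappers implement the same fan-out: enumerate one work item per line into `xarg_list.txt`, then let `xargs -n 1 --max-procs=${N_PROCS_PREPROCESS}` invoke the worker script once per item with bounded parallelism (shard indices for the main wrapper, file paths for the test-set wrapper). A minimal sketch of the equivalent pattern in Python, with paths mirroring the main wrapper:

```python
# Sketch: the xargs fan-out above via multiprocessing - one worker run per
# shard index, with at most N_PROCS_PREPROCESS running concurrently.
import os
import subprocess
from multiprocessing import Pool

N_PROCS = int(os.environ.get('N_PROCS_PREPROCESS', '4'))
SHARD_DIR = '/workspace/bert/data/wikipedia_corpus/final_text_files_sharded'
WORKER = '/workspace/bert/data/wikipedia_corpus/preprocessing.sh'

def run_shard(shard_index):
    subprocess.run(['bash', WORKER, str(shard_index)], check=True)

if __name__ == '__main__':
    n_shards = len(os.listdir(SHARD_DIR))
    with Pool(N_PROCS) as pool:
        pool.map(run_shard, range(n_shards))
```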
/examples/BERT/data/wikipedia_corpus/remove_tags_and_clean.py:
--------------------------------------------------------------------------------
1 | # NVIDIA
2 |
3 | import glob
4 | import os
5 |
6 | output_file = os.environ['WORKING_DIR'] + '/intermediate_files/wikipedia.txt'
7 |
8 | with open(output_file, "w") as ofile:
9 | for dirname in glob.glob('extracted_articles/*/', recursive=False):
10 | for filename in glob.glob(dirname + 'wiki_*', recursive=True):
11 | print(filename)
12 | article_lines = []
13 | article_open = False
14 |
15 | with open(filename, "r") as file:
16 | for line in file:
17 | if "" in line:
20 | article_open = False
21 | for oline in article_lines[1:]:
22 | if oline != "\n":
23 | ofile.write(oline.rstrip() + " ")
24 | ofile.write("\n\n")
25 | article_lines = []
26 | else:
27 | if article_open:
28 | article_lines.append(line)
29 |
30 |
31 |
--------------------------------------------------------------------------------
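The script above consumes WikiExtractor's "doc format", in which each article is wrapped in `<doc id="..." url="..." title="...">` ... `</doc>` markers; the markers toggle `article_open`, and `article_lines[1:]` skips the title line that WikiExtractor repeats as the first line of the body. A minimal sketch of that state machine on an inline example:

```python
# Sketch: the <doc>-delimited state machine from remove_tags_and_clean.py,
# run on a tiny inline sample of WikiExtractor "doc format" output.
sample = '''<doc id="1" url="u" title="Example">
Example
First paragraph.

Second paragraph.
</doc>
'''

article_lines, article_open, articles = [], False, []
for line in sample.splitlines(keepends=True):
    if "<doc id=" in line:
        article_open = True
    elif "</doc>" in line:
        article_open = False
        # Skip the repeated title line; join body lines into one article.
        articles.append(" ".join(
            l.rstrip() for l in article_lines[1:] if l != "\n"))
        article_lines = []
    elif article_open:
        article_lines.append(line)

print(articles)  # ['First paragraph. Second paragraph.']
```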
/examples/BERT/data/wikipedia_corpus/run_preprocessing.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | source /workspace/bert/data/wikipedia_corpus/config.sh
4 |
5 | # Note: There are several directories created to make it clear what has been performed at each stage of preprocessing. The intermediate files may be useful if you want to further clean/prepare/augment the data for your own applications.
6 | # NLTK was chosen as the default over spaCy simply due to speed of sentence segmentation on the large files.
7 |
8 | # Download Wikipedia dump file
9 | mkdir -p ${WORKING_DIR}/download
10 |
11 | # Not using --noclobber since it emits an error if exists (incompatible with bash 'set -e')
12 | echo "Downloading Wikidump"
13 | if [ ! -f ${WORKING_DIR}/download/wikidump.xml.bz2 ]; then
14 | cd ${WORKING_DIR}/download && wget -O wikidump.xml.bz2 ${WIKI_DUMP}
15 | fi
16 |
17 | # Extract dump
18 | echo "Extracting Wikidump"
19 | mkdir -p ${WORKING_DIR}/raw_data
20 | #cd ${WORKING_DIR}/raw_data && pv ${WORKING_DIR}/download/wikidump.xml.bz2 | pbzip2 -kdc > ${WORKING_DIR}/raw_data/wikidump.xml
21 | cd ${WORKING_DIR}/raw_data && pv ${WORKING_DIR}/download/wikidump.xml.bz2 | bunzip2 -kdc > ${WORKING_DIR}/raw_data/wikidump.xml
22 | #cd ${WORKING_DIR}/raw_data && bunzip2 -kdc ${WORKING_DIR}/download/wikidump.xml.bz2 > ${WORKING_DIR}/raw_data/wikidump.xml
23 |
24 | # Wikiextractor.py - Creates lots of folders/files in "doc format"
25 | echo "Running Wikiextractor"
26 | mkdir -p ${WORKING_DIR}/extracted_articles
27 | /workspace/wikiextractor/WikiExtractor.py ${WORKING_DIR}/raw_data/wikidump.xml -b 1000M --processes ${N_PROCS_PREPROCESS} -o ${WORKING_DIR}/extracted_articles
28 |
29 | # Remove XML Tags and extraneous titles (since they are not sentences)
30 | # Also clean to remove lines between paragraphs within article and use space-separated articles
31 | echo "Cleaning and formatting files (one article per line)"
32 | mkdir -p ${WORKING_DIR}/intermediate_files
33 | python3 ${WORKING_DIR}/remove_tags_and_clean.py
34 |
35 | # Split articles into one-sentence-per-line format for use with BERT scripts
36 | echo "Applying sentence segmentation to get one sentence per line"
37 | mkdir -p ${WORKING_DIR}/final_text_file_single
38 | python3 ${WORKING_DIR}/wiki_sentence_segmentation_nltk.py
39 | # Note: NLTK can be replaced with spaCy (2 variations provided), although spaCy is slower
40 |
41 | # Shard finalized text so that it has a chance of fitting in memory when creating pretraining data into tfrecords (choose appropriate number of shards for distributed training)
42 | echo "Shard text files - size is approximate to prevent splitting an article across shards"
43 | mkdir -p ${WORKING_DIR}/final_text_files_sharded
44 | python3 ${WORKING_DIR}/shard_text_input_file.py
45 |
46 | # Convert sharded text files into tfrecords that are ready for BERT pretraining
47 | echo "Creating tfrecords for each text shard"
48 | mkdir -p ${WORKING_DIR}/final_tfrecords_sharded
49 | . ${WORKING_DIR}/preprocessing_xargs_wrapper.sh ${N_PROCS_PREPROCESS}
50 |
--------------------------------------------------------------------------------
/examples/BERT/data/wikipedia_corpus/shard_text_input_file.py:
--------------------------------------------------------------------------------
1 | # NVIDIA
2 |
3 | import os
4 |
5 | input_file = os.environ['WORKING_DIR'] + '/final_text_file_single/wikipedia.segmented.nltk.txt'
6 | output_file = os.environ['WORKING_DIR'] + '/final_text_files_sharded/wikipedia.segmented.part.'
7 |
8 | doc_separator = "\n"  # note: unused below; article boundaries are detected via line == "\n" directly
9 |
10 | line_buffer = []
11 | shard_size = 396000 # Approximate, will split at next article break
12 | line_counter = 0
13 | shard_index = 0
14 |
15 | ifile_lines = 0
16 | with open(input_file) as ifile:
17 | for line in ifile:
18 | ifile_lines += 1
19 |
20 | print("Input file contains", ifile_lines, "lines.")
21 |
22 | iline_counter = 1
23 | with open(input_file) as ifile:
24 | for line in ifile:
25 | if line_counter < shard_size and iline_counter < ifile_lines:
26 | line_buffer.append(line)
27 | line_counter += 1
28 | iline_counter += 1
29 | elif line_counter >= shard_size and line != "\n" and iline_counter < ifile_lines:
30 | line_buffer.append(line)
31 | line_counter += 1
32 | iline_counter += 1
33 | else:
34 | with open(output_file + str(shard_index) + ".txt", "w") as ofile:
35 | for oline in line_buffer:
36 | ofile.write(oline)
37 | line_buffer = []
38 | line_counter = 0
39 | shard_index += 1
40 |
--------------------------------------------------------------------------------
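The sharding rule above: once `shard_size` lines have accumulated, keep appending until the next blank document-separator line, so an article is never split across shards (the separator itself is dropped at the boundary). The same rule in isolation, as a minimal sketch:

```python
# Sketch: the shard-boundary rule from shard_text_input_file.py in isolation.
# A shard closes only at a blank separator line once the threshold is reached.
def split_into_shards(lines, shard_size):
    shards, current = [], []
    for line in lines:
        if len(current) >= shard_size and line == "\n":
            shards.append(current)  # blank separator dropped at the boundary
            current = []
        else:
            current.append(line)
    if current:
        shards.append(current)
    return shards

docs = ["a\n", "\n", "b1\n", "b2\n", "\n", "c\n", "\n"]
print([len(s) for s in split_into_shards(docs, 2)])  # [4, 2]
```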
/examples/BERT/data/wikipedia_corpus/wiki_sentence_segmentation_nltk.py:
--------------------------------------------------------------------------------
1 | # NVIDIA
2 |
3 | import nltk
4 | import os
5 |
6 | nltk.download('punkt')
7 |
8 | input_file = os.environ['WORKING_DIR'] + '/intermediate_files/wikipedia.txt'
9 | output_file = os.environ['WORKING_DIR'] + '/final_text_file_single/wikipedia.segmented.nltk.txt'
10 |
11 | doc_separator = "\n"
12 |
13 | with open(input_file) as ifile:
14 | with open(output_file, "w") as ofile:
15 | for line in ifile:
16 | if line != "\n":
17 | sent_list = nltk.tokenize.sent_tokenize(line)
18 | for sent in sent_list:
19 | ofile.write(sent + "\n")
20 | ofile.write(doc_separator)
21 |
--------------------------------------------------------------------------------
/examples/BERT/data/wikipedia_corpus/wiki_sentence_segmentation_spacy.py:
--------------------------------------------------------------------------------
1 | # NVIDIA
2 |
3 | import os
4 | import spacy
5 |
6 | #spacy.prefer_gpu()
7 | spacy.require_gpu()
8 |
9 | input_file = os.environ['WORKING_DIR'] + '/intermediate_files/wikipedia.txt'
10 | output_file = os.environ['WORKING_DIR'] + '/final_text_file_single/wikipedia.segmented.txt'
11 |
12 | nlp = spacy.load('en_core_web_sm')
13 |
14 | doc_separator = "\n"
15 |
16 | with open(input_file) as ifile:
17 | with open(output_file, "w") as ofile:
18 | for line in ifile:
19 | if line != "\n":
20 | doc = nlp(line)
21 | for sent in doc.sents:
22 | ofile.write(sent.text + "\n")
23 |
--------------------------------------------------------------------------------
/examples/BERT/data/wikipedia_corpus/wiki_sentence_segmentation_spacy_pipe.py:
--------------------------------------------------------------------------------
1 | # NVIDIA
2 |
3 | import os
4 | import spacy
5 |
6 | #spacy.prefer_gpu()
7 | spacy.require_gpu()
8 |
9 | input_file = os.environ['WORKING_DIR'] + '/intermediate_files/wikipedia.txt'
10 | output_file = os.environ['WORKING_DIR'] + '/final_text_file_single/wikipedia.segmented.txt'
11 |
12 | nlp = spacy.load('en_core_web_sm')
13 |
14 | doc_separator = "\n"
15 |
16 | file_mem = []
17 |
18 | print("Reading file into memory.")
19 | with open(input_file) as ifile:
20 | for line in ifile:
21 | if line != "\n":
22 | file_mem.append(line)
23 |
24 | print("File read.")
25 | print("Starting nlp.pipe")
26 | docs = nlp.pipe(file_mem, batch_size=1000)
27 |
28 | print("Starting to write output")
29 | with open(output_file, "w") as ofile:
30 | for item in docs:
31 | for sent in item.sents:
32 | if sent.text != "\n":
33 | ofile.write(sent.text + "\n")
34 |
--------------------------------------------------------------------------------
/examples/BERT/extract_features.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2018 The Google AI Language Team Authors.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | """Extract pre-computed feature vectors from BERT."""
16 |
17 | from __future__ import absolute_import
18 | from __future__ import division
19 | from __future__ import print_function
20 |
21 | import codecs
22 | import collections
23 | import json
24 | import re
25 |
26 | import modeling
27 | import tokenization
28 | import tensorflow as tf
29 |
30 | flags = tf.flags
31 |
32 | FLAGS = flags.FLAGS
33 |
34 | flags.DEFINE_string("input_file", None, "")
35 |
36 | flags.DEFINE_string("output_file", None, "")
37 |
38 | flags.DEFINE_string("layers", "-1,-2,-3,-4", "")
39 |
40 | flags.DEFINE_string(
41 | "bert_config_file", None,
42 | "The config json file corresponding to the pre-trained BERT model. "
43 | "This specifies the model architecture.")
44 |
45 | flags.DEFINE_integer(
46 | "max_seq_length", 128,
47 | "The maximum total input sequence length after WordPiece tokenization. "
48 | "Sequences longer than this will be truncated, and sequences shorter "
49 | "than this will be padded.")
50 |
51 | flags.DEFINE_string(
52 | "init_checkpoint", None,
53 | "Initial checkpoint (usually from a pre-trained BERT model).")
54 |
55 | flags.DEFINE_string("vocab_file", None,
56 | "The vocabulary file that the BERT model was trained on.")
57 |
58 | flags.DEFINE_bool(
59 | "do_lower_case", True,
60 | "Whether to lower case the input text. Should be True for uncased "
61 | "models and False for cased models.")
62 |
63 | flags.DEFINE_integer("batch_size", 32, "Batch size for predictions.")
64 |
65 | flags.DEFINE_bool("use_tpu", False, "Whether to use TPU or GPU/CPU.")
66 |
67 | flags.DEFINE_string("master", None,
68 | "If using a TPU, the address of the master.")
69 |
70 | flags.DEFINE_integer(
71 | "num_tpu_cores", 8,
72 | "Only used if `use_tpu` is True. Total number of TPU cores to use.")
73 |
74 | flags.DEFINE_bool(
75 | "use_one_hot_embeddings", False,
76 | "If True, tf.one_hot will be used for embedding lookups, otherwise "
77 | "tf.nn.embedding_lookup will be used. On TPUs, this should be True "
78 | "since it is much faster.")
79 |
80 |
81 | class InputExample(object):
82 |
83 | def __init__(self, unique_id, text_a, text_b):
84 | self.unique_id = unique_id
85 | self.text_a = text_a
86 | self.text_b = text_b
87 |
88 |
89 | class InputFeatures(object):
90 | """A single set of features of data."""
91 |
92 | def __init__(self, unique_id, tokens, input_ids, input_mask, input_type_ids):
93 | self.unique_id = unique_id
94 | self.tokens = tokens
95 | self.input_ids = input_ids
96 | self.input_mask = input_mask
97 | self.input_type_ids = input_type_ids
98 |
99 |
100 | def input_fn_builder(features, seq_length):
101 | """Creates an `input_fn` closure to be passed to TPUEstimator."""
102 |
103 | all_unique_ids = []
104 | all_input_ids = []
105 | all_input_mask = []
106 | all_input_type_ids = []
107 |
108 | for feature in features:
109 | all_unique_ids.append(feature.unique_id)
110 | all_input_ids.append(feature.input_ids)
111 | all_input_mask.append(feature.input_mask)
112 | all_input_type_ids.append(feature.input_type_ids)
113 |
114 | def input_fn(params):
115 | """The actual input function."""
116 | batch_size = params["batch_size"]
117 |
118 | num_examples = len(features)
119 |
120 | # This is for demo purposes and does NOT scale to large data sets. We do
121 | # not use Dataset.from_generator() because that uses tf.py_func which is
122 | # not TPU compatible. The right way to load data is with TFRecordReader.
123 | d = tf.data.Dataset.from_tensor_slices({
124 | "unique_ids":
125 | tf.constant(all_unique_ids, shape=[num_examples], dtype=tf.int32),
126 | "input_ids":
127 | tf.constant(
128 | all_input_ids, shape=[num_examples, seq_length],
129 | dtype=tf.int32),
130 | "input_mask":
131 | tf.constant(
132 | all_input_mask,
133 | shape=[num_examples, seq_length],
134 | dtype=tf.int32),
135 | "input_type_ids":
136 | tf.constant(
137 | all_input_type_ids,
138 | shape=[num_examples, seq_length],
139 | dtype=tf.int32),
140 | })
141 |
142 | d = d.batch(batch_size=batch_size, drop_remainder=False)
143 | return d
144 |
145 | return input_fn
146 |
147 |
148 | def model_fn_builder(bert_config, init_checkpoint, layer_indexes, use_tpu,
149 | use_one_hot_embeddings):
150 | """Returns `model_fn` closure for TPUEstimator."""
151 |
152 | def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
153 | """The `model_fn` for TPUEstimator."""
154 |
155 | unique_ids = features["unique_ids"]
156 | input_ids = features["input_ids"]
157 | input_mask = features["input_mask"]
158 | input_type_ids = features["input_type_ids"]
159 |
160 | model = modeling.BertModel(
161 | config=bert_config,
162 | is_training=False,
163 | input_ids=input_ids,
164 | input_mask=input_mask,
165 | token_type_ids=input_type_ids,
166 | use_one_hot_embeddings=use_one_hot_embeddings)
167 |
168 | if mode != tf.estimator.ModeKeys.PREDICT:
169 | raise ValueError("Only PREDICT modes are supported: %s" % (mode))
170 |
171 | tvars = tf.trainable_variables()
172 | scaffold_fn = None
173 | (assignment_map,
174 | initialized_variable_names) = modeling.get_assignment_map_from_checkpoint(
175 | tvars, init_checkpoint)
176 | if use_tpu:
177 |
178 | def tpu_scaffold():
179 | tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
180 | return tf.train.Scaffold()
181 |
182 | scaffold_fn = tpu_scaffold
183 | else:
184 | tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
185 |
186 | tf.logging.info("**** Trainable Variables ****")
187 | for var in tvars:
188 | init_string = ""
189 | if var.name in initialized_variable_names:
190 | init_string = ", *INIT_FROM_CKPT*"
191 | tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape,
192 | init_string)
193 |
194 | all_layers = model.get_all_encoder_layers()
195 |
196 | predictions = {
197 | "unique_id": unique_ids,
198 | }
199 |
200 | for (i, layer_index) in enumerate(layer_indexes):
201 | predictions["layer_output_%d" % i] = all_layers[layer_index]
202 |
203 | output_spec = tf.contrib.tpu.TPUEstimatorSpec(
204 | mode=mode, predictions=predictions, scaffold_fn=scaffold_fn)
205 | return output_spec
206 |
207 | return model_fn
208 |
209 |
210 | def convert_examples_to_features(examples, seq_length, tokenizer):
211 | """Loads a data file into a list of `InputBatch`s."""
212 |
213 | features = []
214 | for (ex_index, example) in enumerate(examples):
215 | tokens_a = tokenizer.tokenize(example.text_a)
216 |
217 | tokens_b = None
218 | if example.text_b:
219 | tokens_b = tokenizer.tokenize(example.text_b)
220 |
221 | if tokens_b:
222 | # Modifies `tokens_a` and `tokens_b` in place so that the total
223 | # length is less than the specified length.
224 | # Account for [CLS], [SEP], [SEP] with "- 3"
225 | _truncate_seq_pair(tokens_a, tokens_b, seq_length - 3)
226 | else:
227 | # Account for [CLS] and [SEP] with "- 2"
228 | if len(tokens_a) > seq_length - 2:
229 | tokens_a = tokens_a[0:(seq_length - 2)]
230 |
231 | # The convention in BERT is:
232 | # (a) For sequence pairs:
233 | # tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
234 | # type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
235 | # (b) For single sequences:
236 | # tokens: [CLS] the dog is hairy . [SEP]
237 | # type_ids: 0 0 0 0 0 0 0
238 | #
239 | # Where "type_ids" are used to indicate whether this is the first
240 | # sequence or the second sequence. The embedding vectors for `type=0` and
241 | # `type=1` were learned during pre-training and are added to the wordpiece
242 | # embedding vector (and position vector). This is not *strictly* necessary
243 | # since the [SEP] token unambiguously separates the sequences, but it makes
244 | # it easier for the model to learn the concept of sequences.
245 | #
246 | # For classification tasks, the first vector (corresponding to [CLS]) is
247 | # used as the "sentence vector". Note that this only makes sense because
248 | # the entire model is fine-tuned.
249 | tokens = []
250 | input_type_ids = []
251 | tokens.append("[CLS]")
252 | input_type_ids.append(0)
253 | for token in tokens_a:
254 | tokens.append(token)
255 | input_type_ids.append(0)
256 | tokens.append("[SEP]")
257 | input_type_ids.append(0)
258 |
259 | if tokens_b:
260 | for token in tokens_b:
261 | tokens.append(token)
262 | input_type_ids.append(1)
263 | tokens.append("[SEP]")
264 | input_type_ids.append(1)
265 |
266 | input_ids = tokenizer.convert_tokens_to_ids(tokens)
267 |
268 | # The mask has 1 for real tokens and 0 for padding tokens. Only real
269 | # tokens are attended to.
270 | input_mask = [1] * len(input_ids)
271 |
272 | # Zero-pad up to the sequence length.
273 | while len(input_ids) < seq_length:
274 | input_ids.append(0)
275 | input_mask.append(0)
276 | input_type_ids.append(0)
277 |
278 | assert len(input_ids) == seq_length
279 | assert len(input_mask) == seq_length
280 | assert len(input_type_ids) == seq_length
281 |
282 | if ex_index < 5:
283 | tf.logging.info("*** Example ***")
284 | tf.logging.info("unique_id: %s" % (example.unique_id))
285 | tf.logging.info("tokens: %s" % " ".join(
286 | [tokenization.printable_text(x) for x in tokens]))
287 | tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
288 | tf.logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
289 | tf.logging.info(
290 | "input_type_ids: %s" % " ".join([str(x) for x in input_type_ids]))
291 |
292 | features.append(
293 | InputFeatures(
294 | unique_id=example.unique_id,
295 | tokens=tokens,
296 | input_ids=input_ids,
297 | input_mask=input_mask,
298 | input_type_ids=input_type_ids))
299 | return features
300 |
301 |
302 | def _truncate_seq_pair(tokens_a, tokens_b, max_length):
303 | """Truncates a sequence pair in place to the maximum length."""
304 |
305 | # This is a simple heuristic which will always truncate the longer sequence
306 | # one token at a time. This makes more sense than truncating an equal percent
307 | # of tokens from each, since if one sequence is very short then each token
308 | # that's truncated likely contains more information than a longer sequence.
309 | while True:
310 | total_length = len(tokens_a) + len(tokens_b)
311 | if total_length <= max_length:
312 | break
313 | if len(tokens_a) > len(tokens_b):
314 | tokens_a.pop()
315 | else:
316 | tokens_b.pop()
317 |
318 |
319 | def read_examples(input_file):
320 | """Read a list of `InputExample`s from an input file."""
321 | examples = []
322 | unique_id = 0
323 | with tf.gfile.GFile(input_file, "r") as reader:
324 | while True:
325 | line = tokenization.convert_to_unicode(reader.readline())
326 | if not line:
327 | break
328 | line = line.strip()
329 | text_a = None
330 | text_b = None
331 | m = re.match(r"^(.*) \|\|\| (.*)$", line)
332 | if m is None:
333 | text_a = line
334 | else:
335 | text_a = m.group(1)
336 | text_b = m.group(2)
337 | examples.append(
338 | InputExample(unique_id=unique_id, text_a=text_a, text_b=text_b))
339 | unique_id += 1
340 | return examples
341 |
342 |
343 | def main(_):
344 | tf.logging.set_verbosity(tf.logging.INFO)
345 |
346 | layer_indexes = [int(x) for x in FLAGS.layers.split(",")]
347 |
348 | bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
349 |
350 | tokenizer = tokenization.FullTokenizer(
351 | vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
352 |
353 | is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
354 | run_config = tf.contrib.tpu.RunConfig(
355 | master=FLAGS.master,
356 | tpu_config=tf.contrib.tpu.TPUConfig(
357 | num_shards=FLAGS.num_tpu_cores,
358 | per_host_input_for_training=is_per_host))
359 |
360 | examples = read_examples(FLAGS.input_file)
361 |
362 | features = convert_examples_to_features(
363 | examples=examples, seq_length=FLAGS.max_seq_length, tokenizer=tokenizer)
364 |
365 | unique_id_to_feature = {}
366 | for feature in features:
367 | unique_id_to_feature[feature.unique_id] = feature
368 |
369 | model_fn = model_fn_builder(
370 | bert_config=bert_config,
371 | init_checkpoint=FLAGS.init_checkpoint,
372 | layer_indexes=layer_indexes,
373 | use_tpu=FLAGS.use_tpu,
374 | use_one_hot_embeddings=FLAGS.use_one_hot_embeddings)
375 |
376 | # If TPU is not available, this will fall back to normal Estimator on CPU
377 | # or GPU.
378 | estimator = tf.contrib.tpu.TPUEstimator(
379 | use_tpu=FLAGS.use_tpu,
380 | model_fn=model_fn,
381 | config=run_config,
382 | predict_batch_size=FLAGS.batch_size)
383 |
384 | input_fn = input_fn_builder(
385 | features=features, seq_length=FLAGS.max_seq_length)
386 |
387 | with codecs.getwriter("utf-8")(tf.gfile.Open(FLAGS.output_file,
388 | "w")) as writer:
389 | for result in estimator.predict(input_fn, yield_single_examples=True):
390 | unique_id = int(result["unique_id"])
391 | feature = unique_id_to_feature[unique_id]
392 | output_json = collections.OrderedDict()
393 | output_json["linex_index"] = unique_id
394 | all_features = []
395 | for (i, token) in enumerate(feature.tokens):
396 | all_layers = []
397 | for (j, layer_index) in enumerate(layer_indexes):
398 | layer_output = result["layer_output_%d" % j]
399 | layers = collections.OrderedDict()
400 | layers["index"] = layer_index
401 | layers["values"] = [
402 | round(float(x), 6) for x in layer_output[i:(i + 1)].flat
403 | ]
404 | all_layers.append(layers)
405 | features = collections.OrderedDict()
406 | features["token"] = token
407 | features["layers"] = all_layers
408 | all_features.append(features)
409 | output_json["features"] = all_features
410 | writer.write(json.dumps(output_json) + "\n")
411 |
412 |
413 | if __name__ == "__main__":
414 | flags.mark_flag_as_required("input_file")
415 | flags.mark_flag_as_required("vocab_file")
416 | flags.mark_flag_as_required("bert_config_file")
417 | flags.mark_flag_as_required("init_checkpoint")
418 | flags.mark_flag_as_required("output_file")
419 | tf.app.run()
420 |
--------------------------------------------------------------------------------
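The script writes one JSON object per input line, each carrying per-token, per-layer feature vectors. A minimal sketch of reading that output and pulling the top-layer vector for the `[CLS]` token of each example (field names match the writer loop above; the output path is illustrative):

```python
# Sketch: read extract_features.py output (one JSON object per line) and
# take the first requested layer's vector for the leading [CLS] token.
import json

with open("output.jsonl") as f:
    for line in f:
        record = json.loads(line)
        cls_token = record["features"][0]   # first token is [CLS]
        top_layer = cls_token["layers"][0]  # layers follow FLAGS.layers order
        vector = top_layer["values"]
        print(record["linex_index"], top_layer["index"], len(vector))
```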
/examples/BERT/fp16_utils.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2018 The Google AI Language Team Authors.
3 | # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
4 | #
5 | # Licensed under the Apache License, Version 2.0 (the "License");
6 | # you may not use this file except in compliance with the License.
7 | # You may obtain a copy of the License at
8 | #
9 | # http://www.apache.org/licenses/LICENSE-2.0
10 | #
11 | # Unless required by applicable law or agreed to in writing, software
12 | # distributed under the License is distributed on an "AS IS" BASIS,
13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 | # See the License for the specific language governing permissions and
15 | # limitations under the License.
16 | import tensorflow as tf
17 | import numpy as np
18 |
19 |
20 | def float32_variable_storage_getter(getter, name, shape=None, dtype=None,
21 | initializer=None, regularizer=None,
22 | trainable=True,
23 | *args, **kwargs):
24 | """Custom variable getter that forces trainable variables to be stored in
25 | float32 precision and then casts them to the training precision.
26 | """
27 | storage_dtype = tf.float32 if trainable else dtype
28 | variable = getter(name, shape, dtype=storage_dtype,
29 | initializer=initializer, regularizer=regularizer,
30 | trainable=trainable,
31 | *args, **kwargs)
32 | if trainable and dtype != tf.float32:
33 | variable = tf.cast(variable, dtype)
34 | return variable
35 |
36 |
--------------------------------------------------------------------------------
/examples/BERT/fused_layer_norm.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2018 The Google AI Language Team Authors.
3 | # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
4 | #
5 | # Licensed under the Apache License, Version 2.0 (the "License");
6 | # you may not use this file except in compliance with the License.
7 | # You may obtain a copy of the License at
8 | #
9 | # http://www.apache.org/licenses/LICENSE-2.0
10 | #
11 | # Unless required by applicable law or agreed to in writing, software
12 | # distributed under the License is distributed on an "AS IS" BASIS,
13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 | # See the License for the specific language governing permissions and
15 | # limitations under the License.
16 |
17 | import collections
18 | import copy
19 | import json
20 | import math
21 | import re
22 | import six
23 | import tensorflow as tf
24 |
25 | from tensorflow.python.framework import ops
26 | from tensorflow.contrib.layers.python.layers import utils
27 | from tensorflow.contrib.framework.python.ops import variables
28 | from tensorflow.python.ops import init_ops
29 | import numpy
30 | from tensorflow.python.ops import array_ops
31 | from tensorflow.python.framework import dtypes
32 | from tensorflow.python.ops import nn
33 |
34 | def fused_layer_norm(inputs,
35 | center=True,
36 | scale=True,
37 | activation_fn=None,
38 | reuse=None,
39 | variables_collections=None,
40 | outputs_collections=None,
41 | trainable=True,
42 | begin_norm_axis=1,
43 | begin_params_axis=-1,
44 | scope=None,
45 | use_fused_batch_norm=False):
46 | with tf.variable_scope(
47 | scope, 'LayerNorm', [inputs], reuse=reuse) as sc:
48 | inputs = ops.convert_to_tensor(inputs)
49 | inputs_shape = inputs.shape
50 | inputs_rank = inputs_shape.ndims
51 | if inputs_rank is None:
52 | raise ValueError('Inputs %s has undefined rank.' % inputs.name)
53 | dtype = inputs.dtype.base_dtype
54 | if begin_norm_axis < 0:
55 | begin_norm_axis = inputs_rank + begin_norm_axis
56 | if begin_params_axis >= inputs_rank or begin_norm_axis >= inputs_rank:
57 | raise ValueError('begin_params_axis (%d) and begin_norm_axis (%d) '
58 | 'must be < rank(inputs) (%d)' %
59 | (begin_params_axis, begin_norm_axis, inputs_rank))
60 | params_shape = inputs_shape[begin_params_axis:]
61 | if not params_shape.is_fully_defined():
62 | raise ValueError(
63 | 'Inputs %s: shape(inputs)[%s:] is not fully defined: %s' %
64 | (inputs.name, begin_params_axis, inputs_shape))
65 | # Allocate parameters for the beta and gamma of the normalization.
66 | beta, gamma = None, None
67 | if center:
68 | beta_collections = utils.get_variable_collections(variables_collections,
69 | 'beta')
70 | beta = variables.model_variable(
71 | 'beta',
72 | shape=params_shape,
73 | dtype=dtype,
74 | initializer=init_ops.zeros_initializer(),
75 | collections=beta_collections,
76 | trainable=trainable)
77 | if scale:
78 | gamma_collections = utils.get_variable_collections(
79 | variables_collections, 'gamma')
80 | gamma = variables.model_variable(
81 | 'gamma',
82 | shape=params_shape,
83 | dtype=dtype,
84 | initializer=init_ops.ones_initializer(),
85 | collections=gamma_collections,
86 | trainable=trainable)
87 | if use_fused_batch_norm:
88 | # get static TensorShape if fully defined,
89 | # otherwise retrieve shape tensor
90 | norm_shape = inputs.shape[begin_norm_axis:]
91 | if norm_shape.is_fully_defined():
92 | bn_shape = [1, -1, 1, numpy.prod(norm_shape.as_list())]
93 | else:
94 | norm_shape = tf.shape(inputs)[begin_norm_axis:]
95 | bn_shape = [1, -1, 1, tf.reduce_prod(norm_shape)]
96 | if inputs.get_shape().is_fully_defined():
97 | outputs_shape = inputs.get_shape()
98 | else:
99 | outputs_shape = tf.shape(inputs)
100 | inputs = array_ops.reshape(inputs, bn_shape)
101 | if inputs.get_shape().is_fully_defined():
102 | # static inputs TensorShape fully defined after reshape.
103 | ones = array_ops.ones(inputs.get_shape()[1], dtype=dtypes.float32)
104 | zeros = array_ops.zeros(inputs.get_shape()[1], dtype=dtypes.float32)
105 | else:
106 | # static inputs TensorShape NOT fully defined after reshape.
107 | # must use dynamic shape, which means these input tensors
108 | # have to be created at runtime, which causes a slowdown.
109 | scale_shape = tf.shape(inputs)[1]
110 | ones = array_ops.ones(scale_shape, dtype=dtypes.float32)
111 | zeros = array_ops.zeros(scale_shape, dtype=dtypes.float32)
112 | outputs, mean, variance = nn.fused_batch_norm(
113 | inputs,
114 | ones, zeros,
115 | epsilon=1e-4,
116 | data_format="NCHW")
117 | outputs = array_ops.reshape(outputs, outputs_shape)
118 | if center and scale:
119 | outputs = outputs * gamma + beta
120 | elif center:
121 | outputs = outputs + beta
122 | elif scale:
123 | outputs = outputs * gamma
124 | else:
125 | # Calculate the moments on the last axis (layer activations).
126 | norm_axes = list(range(begin_norm_axis, inputs_rank))
127 | mean, variance = nn.moments(inputs, norm_axes, keep_dims=True)
128 | # Compute layer normalization using the batch_normalization function.
129 | variance_epsilon = 1e-4
130 | outputs = nn.batch_normalization(
131 | inputs,
132 | mean,
133 | variance,
134 | offset=beta,
135 | scale=gamma,
136 | variance_epsilon=variance_epsilon)
137 | outputs.set_shape(inputs_shape)
138 | if activation_fn is not None:
139 | outputs = activation_fn(outputs)
140 | return utils.collect_named_outputs(outputs_collections, sc.name, outputs)
141 |
142 |
--------------------------------------------------------------------------------
/examples/BERT/gpu_environment.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import tensorflow as tf
17 | import numpy as np
18 |
19 | def float32_variable_storage_getter(getter, name, shape=None, dtype=None,
20 | initializer=None, regularizer=None,
21 | trainable=True,
22 | *args, **kwargs):
23 | """Custom variable getter that forces trainable variables to be stored in
24 | float32 precision and then casts them to the training precision.
25 | """
26 | storage_dtype = tf.float32 if trainable else dtype
27 | variable = getter(name, shape, dtype=storage_dtype,
28 | initializer=initializer, regularizer=regularizer,
29 | trainable=trainable,
30 | *args, **kwargs)
31 | if trainable and dtype != tf.float32:
32 | variable = tf.cast(variable, dtype)
33 | return variable
34 |
35 | def get_custom_getter(compute_type):
36 | return float32_variable_storage_getter if compute_type == tf.float16 else None
37 |
--------------------------------------------------------------------------------
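`get_custom_getter` is meant to be installed on a variable scope so that a float16 compute graph keeps float32 master copies of its trainable variables, the usual mixed-precision setup. A minimal sketch of the intended wiring under TF 1.x (the scope and layer are illustrative):

```python
# Sketch: install the fp32-storage getter on a scope so variables are stored
# as float32 and cast to float16 for compute. TF 1.x graph-mode style.
import tensorflow as tf

from gpu_environment import get_custom_getter

compute_type = tf.float16
inputs = tf.placeholder(compute_type, shape=[None, 128])

with tf.variable_scope("model", custom_getter=get_custom_getter(compute_type)):
    # The dense kernel/bias are created through the getter: stored float32,
    # returned as float16 to match the inputs.
    hidden = tf.layers.dense(inputs, 256, activation=tf.nn.relu)
```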
/examples/BERT/modeling_test.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2018 The Google AI Language Team Authors.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | from __future__ import absolute_import
16 | from __future__ import division
17 | from __future__ import print_function
18 |
19 | import collections
20 | import json
21 | import random
22 | import re
23 |
24 | import modeling
25 | import six
26 | import tensorflow as tf
27 |
28 |
29 | class BertModelTest(tf.test.TestCase):
30 |
31 | class BertModelTester(object):
32 |
33 | def __init__(self,
34 | parent,
35 | batch_size=13,
36 | seq_length=7,
37 | is_training=True,
38 | use_input_mask=True,
39 | use_token_type_ids=True,
40 | vocab_size=99,
41 | hidden_size=32,
42 | num_hidden_layers=5,
43 | num_attention_heads=4,
44 | intermediate_size=37,
45 | hidden_act="gelu",
46 | hidden_dropout_prob=0.1,
47 | attention_probs_dropout_prob=0.1,
48 | max_position_embeddings=512,
49 | type_vocab_size=16,
50 | initializer_range=0.02,
51 | scope=None):
52 | self.parent = parent
53 | self.batch_size = batch_size
54 | self.seq_length = seq_length
55 | self.is_training = is_training
56 | self.use_input_mask = use_input_mask
57 | self.use_token_type_ids = use_token_type_ids
58 | self.vocab_size = vocab_size
59 | self.hidden_size = hidden_size
60 | self.num_hidden_layers = num_hidden_layers
61 | self.num_attention_heads = num_attention_heads
62 | self.intermediate_size = intermediate_size
63 | self.hidden_act = hidden_act
64 | self.hidden_dropout_prob = hidden_dropout_prob
65 | self.attention_probs_dropout_prob = attention_probs_dropout_prob
66 | self.max_position_embeddings = max_position_embeddings
67 | self.type_vocab_size = type_vocab_size
68 | self.initializer_range = initializer_range
69 | self.scope = scope
70 |
71 | def create_model(self):
72 | input_ids = BertModelTest.ids_tensor([self.batch_size, self.seq_length],
73 | self.vocab_size)
74 |
75 | input_mask = None
76 | if self.use_input_mask:
77 | input_mask = BertModelTest.ids_tensor(
78 | [self.batch_size, self.seq_length], vocab_size=2)
79 |
80 | token_type_ids = None
81 | if self.use_token_type_ids:
82 | token_type_ids = BertModelTest.ids_tensor(
83 | [self.batch_size, self.seq_length], self.type_vocab_size)
84 |
85 | config = modeling.BertConfig(
86 | vocab_size=self.vocab_size,
87 | hidden_size=self.hidden_size,
88 | num_hidden_layers=self.num_hidden_layers,
89 | num_attention_heads=self.num_attention_heads,
90 | intermediate_size=self.intermediate_size,
91 | hidden_act=self.hidden_act,
92 | hidden_dropout_prob=self.hidden_dropout_prob,
93 | attention_probs_dropout_prob=self.attention_probs_dropout_prob,
94 | max_position_embeddings=self.max_position_embeddings,
95 | type_vocab_size=self.type_vocab_size,
96 | initializer_range=self.initializer_range)
97 |
98 | model = modeling.BertModel(
99 | config=config,
100 | is_training=self.is_training,
101 | input_ids=input_ids,
102 | input_mask=input_mask,
103 | token_type_ids=token_type_ids,
104 | scope=self.scope)
105 |
106 | outputs = {
107 | "embedding_output": model.get_embedding_output(),
108 | "sequence_output": model.get_sequence_output(),
109 | "pooled_output": model.get_pooled_output(),
110 | "all_encoder_layers": model.get_all_encoder_layers(),
111 | }
112 | return outputs
113 |
114 | def check_output(self, result):
115 | self.parent.assertAllEqual(
116 | result["embedding_output"].shape,
117 | [self.batch_size, self.seq_length, self.hidden_size])
118 |
119 | self.parent.assertAllEqual(
120 | result["sequence_output"].shape,
121 | [self.batch_size, self.seq_length, self.hidden_size])
122 |
123 | self.parent.assertAllEqual(result["pooled_output"].shape,
124 | [self.batch_size, self.hidden_size])
125 |
126 | def test_default(self):
127 | self.run_tester(BertModelTest.BertModelTester(self))
128 |
129 | def test_config_to_json_string(self):
130 | config = modeling.BertConfig(vocab_size=99, hidden_size=37)
131 | obj = json.loads(config.to_json_string())
132 | self.assertEqual(obj["vocab_size"], 99)
133 | self.assertEqual(obj["hidden_size"], 37)
134 |
135 | def run_tester(self, tester):
136 | with self.test_session() as sess:
137 | ops = tester.create_model()
138 | init_op = tf.group(tf.global_variables_initializer(),
139 | tf.local_variables_initializer())
140 | sess.run(init_op)
141 | output_result = sess.run(ops)
142 | tester.check_output(output_result)
143 |
144 | self.assert_all_tensors_reachable(sess, [init_op, ops])
145 |
146 | @classmethod
147 | def ids_tensor(cls, shape, vocab_size, rng=None, name=None):
148 | """Creates a random int32 tensor of the shape within the vocab size."""
149 | if rng is None:
150 | rng = random.Random()
151 |
152 | total_dims = 1
153 | for dim in shape:
154 | total_dims *= dim
155 |
156 | values = []
157 | for _ in range(total_dims):
158 | values.append(rng.randint(0, vocab_size - 1))
159 |
160 | return tf.constant(value=values, dtype=tf.int32, shape=shape, name=name)
161 |
162 | def assert_all_tensors_reachable(self, sess, outputs):
163 | """Checks that all the tensors in the graph are reachable from outputs."""
164 | graph = sess.graph
165 |
166 | ignore_strings = [
167 | "^.*/assert_less_equal/.*$",
168 | "^.*/dilation_rate$",
169 | "^.*/Tensordot/concat$",
170 | "^.*/Tensordot/concat/axis$",
171 | "^testing/.*$",
172 | ]
173 |
174 | ignore_regexes = [re.compile(x) for x in ignore_strings]
175 |
176 | unreachable = self.get_unreachable_ops(graph, outputs)
177 | filtered_unreachable = []
178 | for x in unreachable:
179 | do_ignore = False
180 | for r in ignore_regexes:
181 | m = r.match(x.name)
182 | if m is not None:
183 | do_ignore = True
184 | if do_ignore:
185 | continue
186 | filtered_unreachable.append(x)
187 | unreachable = filtered_unreachable
188 |
189 | self.assertEqual(
190 | len(unreachable), 0, "The following ops are unreachable: %s" %
191 | (" ".join([x.name for x in unreachable])))
192 |
193 | @classmethod
194 | def get_unreachable_ops(cls, graph, outputs):
195 | """Finds all of the tensors in graph that are unreachable from outputs."""
196 | outputs = cls.flatten_recursive(outputs)
197 | output_to_op = collections.defaultdict(list)
198 | op_to_all = collections.defaultdict(list)
199 | assign_out_to_in = collections.defaultdict(list)
200 |
201 | for op in graph.get_operations():
202 | for x in op.inputs:
203 | op_to_all[op.name].append(x.name)
204 | for y in op.outputs:
205 | output_to_op[y.name].append(op.name)
206 | op_to_all[op.name].append(y.name)
207 | if str(op.type) == "Assign":
208 | for y in op.outputs:
209 | for x in op.inputs:
210 | assign_out_to_in[y.name].append(x.name)
211 |
212 | assign_groups = collections.defaultdict(list)
213 | for out_name in assign_out_to_in.keys():
214 | name_group = assign_out_to_in[out_name]
215 | for n1 in name_group:
216 | assign_groups[n1].append(out_name)
217 | for n2 in name_group:
218 | if n1 != n2:
219 | assign_groups[n1].append(n2)
220 |
221 | seen_tensors = {}
222 | stack = [x.name for x in outputs]
223 | while stack:
224 | name = stack.pop()
225 | if name in seen_tensors:
226 | continue
227 | seen_tensors[name] = True
228 |
229 | if name in output_to_op:
230 | for op_name in output_to_op[name]:
231 | if op_name in op_to_all:
232 | for input_name in op_to_all[op_name]:
233 | if input_name not in stack:
234 | stack.append(input_name)
235 |
236 | expanded_names = []
237 | if name in assign_groups:
238 | for assign_name in assign_groups[name]:
239 | expanded_names.append(assign_name)
240 |
241 | for expanded_name in expanded_names:
242 | if expanded_name not in stack:
243 | stack.append(expanded_name)
244 |
245 | unreachable_ops = []
246 | for op in graph.get_operations():
247 | is_unreachable = False
248 | all_names = [x.name for x in op.inputs] + [x.name for x in op.outputs]
249 | for name in all_names:
250 | if name not in seen_tensors:
251 | is_unreachable = True
252 | if is_unreachable:
253 | unreachable_ops.append(op)
254 | return unreachable_ops
255 |
256 | @classmethod
257 | def flatten_recursive(cls, item):
258 | """Flattens (potentially nested) a tuple/dictionary/list to a list."""
259 | output = []
260 | if isinstance(item, list):
261 | output.extend(item)
262 | elif isinstance(item, tuple):
263 | output.extend(list(item))
264 | elif isinstance(item, dict):
265 | for (_, v) in six.iteritems(item):
266 | output.append(v)
267 | else:
268 | return [item]
269 |
270 | flat_output = []
271 | for x in output:
272 | flat_output.extend(cls.flatten_recursive(x))
273 | return flat_output
274 |
275 |
276 | if __name__ == "__main__":
277 | tf.test.main()
278 |
--------------------------------------------------------------------------------
/examples/BERT/multilingual.md:
--------------------------------------------------------------------------------
1 | ## Models
2 |
3 | There are two multilingual models currently available. We do not plan to release
4 | more single-language models, but we may release `BERT-Large` versions of these
5 | two in the future:
6 |
7 | * **[`BERT-Base, Multilingual Cased (New, recommended)`](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip)**:
8 | 104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters
9 | * **[`BERT-Base, Multilingual Uncased (Orig, not recommended)`](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip)**:
10 | 102 languages, 12-layer, 768-hidden, 12-heads, 110M parameters
11 | * **[`BERT-Base, Chinese`](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip)**:
12 | Chinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads, 110M
13 | parameters
14 |
15 | **The `Multilingual Cased (New)` model also fixes normalization issues in many
16 | languages, so it is recommended in languages with non-Latin alphabets (and is
17 | often better for most languages with Latin alphabets). When using this model,
18 | make sure to pass `--do_lower_case=false` to `run_pretraining.py` and other
19 | scripts.**
20 |
21 | See the [list of languages](#list-of-languages) that the Multilingual model
22 | supports. The Multilingual model does include Chinese (and English), but if your
23 | fine-tuning data is Chinese-only, then the Chinese model will likely produce
24 | better results.
25 |
26 | ## Results
27 |
28 | To evaluate these systems, we use the
29 | [XNLI dataset](https://github.com/facebookresearch/XNLI), which is a
30 | version of [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) where the
31 | dev and test sets have been translated (by humans) into 15 languages. Note that
32 | the training set was *machine* translated (we used the translations provided by
33 | XNLI, not Google NMT). For clarity, we only report on 6 languages below:
34 |
35 |
36 |
37 | | System | English | Chinese | Spanish | German | Arabic | Urdu |
38 | | --------------------------------- | -------- | -------- | -------- | -------- | -------- | -------- |
39 | | XNLI Baseline - Translate Train | 73.7 | 67.0 | 68.8 | 66.5 | 65.8 | 56.6 |
40 | | XNLI Baseline - Translate Test | 73.7 | 68.3 | 70.7 | 68.7 | 66.8 | 59.3 |
41 | | BERT - Translate Train Cased | **81.9** | **76.6** | **77.8** | **75.9** | **70.7** | 61.6 |
42 | | BERT - Translate Train Uncased | 81.4 | 74.2 | 77.3 | 75.2 | 70.5 | 61.7 |
43 | | BERT - Translate Test Uncased | 81.4 | 70.1 | 74.9 | 74.4 | 70.4 | **62.1** |
44 | | BERT - Zero Shot Uncased | 81.4 | 63.8 | 74.3 | 70.5 | 62.1 | 58.3 |
45 |
46 |
47 |
48 | The first two rows are baselines from the XNLI paper and the last three rows are
49 | our results with BERT.
50 |
51 | **Translate Train** means that the MultiNLI training set was machine translated
52 | from English into the foreign language. So training and evaluation were both
53 | done in the foreign language. Unfortunately, training was done on
54 | machine-translated data, so it is impossible to quantify how much of the lower
55 | accuracy (compared to English) is due to the quality of the machine translation
56 | vs. the quality of the pre-trained model.
57 |
58 | **Translate Test** means that the XNLI test set was machine translated from the
59 | foreign language into English. So training and evaluation were both done on
60 | English. However, test evaluation was done on machine-translated English, so the
61 | accuracy depends on the quality of the machine translation system.
62 |
63 | **Zero Shot** means that the Multilingual BERT system was fine-tuned on English
64 | MultiNLI, and then evaluated on the foreign language XNLI test. In this case,
65 | machine translation was not involved at all in either the pre-training or
66 | fine-tuning.
67 |
68 | Note that the English result is worse than the 84.2 MultiNLI baseline because
69 | this training used Multilingual BERT rather than English-only BERT. This implies
70 | that for high-resource languages, the Multilingual model is somewhat worse than
71 | a single-language model. However, it is not feasible for us to train and
72 | maintain dozens of single-language models. Therefore, if your goal is to maximize
73 | performance with a language other than English or Chinese, you might find it
74 | beneficial to run pre-training for additional steps starting from our
75 | Multilingual model on data from your language of interest.
76 |
77 | Here is a comparison of training Chinese models with the Multilingual
78 | `BERT-Base` and Chinese-only `BERT-Base`:
79 |
80 | System | Chinese
81 | ----------------------- | -------
82 | XNLI Baseline | 67.0
83 | BERT Multilingual Model | 74.2
84 | BERT Chinese-only Model | 77.2
85 |
86 | Similar to English, the single-language model does 3% better than the
87 | Multilingual model.
88 |
89 | ## Fine-tuning Example
90 |
91 | The multilingual model does **not** require any special consideration or API
92 | changes. We did update the implementation of `BasicTokenizer` in
93 | `tokenization.py` to support Chinese character tokenization, so please update if
94 | you forked it. However, we did not change the tokenization API.
95 |
96 | To test the new models, we did modify `run_classifier.py` to add support for the
97 | [XNLI dataset](https://github.com/facebookresearch/XNLI). This is a 15-language
98 | version of MultiNLI where the dev/test sets have been human-translated, and the
99 | training set has been machine-translated.
100 |
101 | To run the fine-tuning code, please download the
102 | [XNLI dev/test set](https://s3.amazonaws.com/xnli/XNLI-1.0.zip) and the
103 | [XNLI machine-translated training set](https://s3.amazonaws.com/xnli/XNLI-MT-1.0.zip)
104 | and then unpack both .zip files into some directory `$XNLI_DIR`.
105 |
106 | To run fine-tuning on XNLI, use the command below. The language is hard-coded into `run_classifier.py`
107 | (Chinese by default), so please modify `XnliProcessor` if you want to run on
108 | another language.
109 |
110 | This is a large dataset, so training will take a few hours on a GPU
111 | (or about 30 minutes on a Cloud TPU). To run an experiment quickly for
112 | debugging, just set `num_train_epochs` to a small value like `0.1`.
113 |
114 | ```shell
115 | export BERT_BASE_DIR=/path/to/bert/chinese_L-12_H-768_A-12 # or multilingual_L-12_H-768_A-12
116 | export XNLI_DIR=/path/to/xnli
117 |
118 | python run_classifier.py \
119 | --task_name=XNLI \
120 | --do_train=true \
121 | --do_eval=true \
122 | --data_dir=$XNLI_DIR \
123 | --vocab_file=$BERT_BASE_DIR/vocab.txt \
124 | --bert_config_file=$BERT_BASE_DIR/bert_config.json \
125 | --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
126 | --max_seq_length=128 \
127 | --train_batch_size=32 \
128 | --learning_rate=5e-5 \
129 | --num_train_epochs=2.0 \
130 | --output_dir=/tmp/xnli_output/
131 | ```
132 |
133 | With the Chinese-only model, the results should look something like this:
134 |
135 | ```
136 | ***** Eval results *****
137 | eval_accuracy = 0.774116
138 | eval_loss = 0.83554
139 | global_step = 24543
140 | loss = 0.74603
141 | ```
142 |
143 | ## Details
144 |
145 | ### Data Source and Sampling
146 |
147 | The languages chosen were the
148 | [top 100 languages with the largest Wikipedias](https://meta.wikimedia.org/wiki/List_of_Wikipedias).
149 | The entire Wikipedia dump for each language (excluding user and talk pages) was
150 | taken as the training data for that language.
151 |
152 | However, the size of the Wikipedia for a given language varies greatly, and
153 | therefore low-resource languages may be "under-represented" in terms of the
154 | neural network model (under the assumption that languages are "competing" for
155 | limited model capacity to some extent).
156 |
157 | However, the size of a Wikipedia also correlates with the number of speakers of
158 | a language, and we also don't want to overfit the model by performing thousands
159 | of epochs over a tiny Wikipedia for a particular language.
160 |
161 | To balance these two factors, we performed exponentially smoothed weighting of
162 | the data during pre-training data creation (and WordPiece vocab creation). In
163 | other words, let's say that the probability of a language is *P(L)*, e.g.,
164 | *P(English) = 0.21* means that after concatenating all of the Wikipedias
165 | together, 21% of our data is English. We exponentiate each probability by some
166 | factor *S* and then re-normalize, and sample from that distribution. In our case
167 | we use *S=0.7*. So, high-resource languages like English will be under-sampled,
168 | and low-resource languages like Icelandic will be over-sampled. E.g., in the
169 | original distribution English would be sampled 1000x more than Icelandic, but
170 | after smoothing it's only sampled 100x more.
171 |
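    | For concreteness, here is a minimal Python sketch of this smoothing. The two
    | corpus sizes below are made-up numbers for illustration, not the real
    | Wikipedia sizes:
    | 
    | ```python
    | # Illustrative relative corpus sizes: English 1000x larger than Icelandic.
    | corpus_sizes = {"english": 1000.0, "icelandic": 1.0}
    | 
    | total = sum(corpus_sizes.values())
    | probs = {lang: size / total for lang, size in corpus_sizes.items()}
    | 
    | S = 0.7
    | smoothed = {lang: p ** S for lang, p in probs.items()}
    | norm = sum(smoothed.values())
    | sampling_probs = {lang: p / norm for lang, p in smoothed.items()}
    | 
    | # The English/Icelandic sampling ratio is now 1000**0.7, roughly 126x
    | # instead of 1000x, so low-resource languages are over-sampled.
    | print(sampling_probs)
    | ```
    | 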
172 | ### Tokenization
173 |
174 | For tokenization, we use a 110k shared WordPiece vocabulary. The word counts are
175 | weighted the same way as the data, so low-resource languages are upweighted by
176 | some factor. We intentionally do *not* use any marker to denote the input
177 | language (so that zero-shot training can work).
178 |
179 | Because Chinese (and Japanese Kanji and Korean Hanja) does not have whitespace
180 | characters, we add spaces around every character in the
181 | [CJK Unicode range](https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_\(Unicode_block\))
182 | before applying WordPiece. This means that Chinese is effectively
183 | character-tokenized. Note that the CJK Unicode block only includes
184 | Chinese-origin characters and does *not* include Hangul Korean or
185 | Katakana/Hiragana Japanese, which are tokenized with whitespace+WordPiece like
186 | all other languages.
187 |
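    | A simplified sketch of this space-padding (the full logic lives in
    | `BasicTokenizer` in `tokenization.py` and also covers the CJK extension
    | blocks):
    | 
    | ```python
    | def add_cjk_spaces(text):
    |   """Pad CJK ideographs with spaces so that WordPiece effectively
    |   tokenizes Chinese character-by-character. Simplified: only the main
    |   CJK Unified Ideographs block is checked here."""
    |   output = []
    |   for char in text:
    |     if 0x4E00 <= ord(char) <= 0x9FFF:  # CJK Unified Ideographs
    |       output.append(" " + char + " ")
    |     else:
    |       output.append(char)
    |   return "".join(output)
    | 
    | # add_cjk_spaces("BERT很强") -> "BERT 很  强 "
    | ```
    | 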
188 | For all other languages, we apply the
189 | [same recipe as English](https://github.com/google-research/bert#tokenization):
190 | (a) lower casing+accent removal, (b) punctuation splitting, (c) whitespace
191 | tokenization. We understand that accent markers have substantial meaning in some
192 | languages, but felt that the benefits of reducing the effective vocabulary make
193 | up for this. Generally the strong contextual models of BERT should make up for
194 | any ambiguity introduced by stripping accent markers.
195 |
196 | ### List of Languages
197 |
198 | The multilingual model supports the following languages. These languages were
199 | chosen because they are the top 100 languages with the largest Wikipedias:
200 |
201 | * Afrikaans
202 | * Albanian
203 | * Arabic
204 | * Aragonese
205 | * Armenian
206 | * Asturian
207 | * Azerbaijani
208 | * Bashkir
209 | * Basque
210 | * Bavarian
211 | * Belarusian
212 | * Bengali
213 | * Bishnupriya Manipuri
214 | * Bosnian
215 | * Breton
216 | * Bulgarian
217 | * Burmese
218 | * Catalan
219 | * Cebuano
220 | * Chechen
221 | * Chinese (Simplified)
222 | * Chinese (Traditional)
223 | * Chuvash
224 | * Croatian
225 | * Czech
226 | * Danish
227 | * Dutch
228 | * English
229 | * Estonian
230 | * Finnish
231 | * French
232 | * Galician
233 | * Georgian
234 | * German
235 | * Greek
236 | * Gujarati
237 | * Haitian
238 | * Hebrew
239 | * Hindi
240 | * Hungarian
241 | * Icelandic
242 | * Ido
243 | * Indonesian
244 | * Irish
245 | * Italian
246 | * Japanese
247 | * Javanese
248 | * Kannada
249 | * Kazakh
250 | * Kirghiz
251 | * Korean
252 | * Latin
253 | * Latvian
254 | * Lithuanian
255 | * Lombard
256 | * Low Saxon
257 | * Luxembourgish
258 | * Macedonian
259 | * Malagasy
260 | * Malay
261 | * Malayalam
262 | * Marathi
263 | * Minangkabau
264 | * Nepali
265 | * Newar
266 | * Norwegian (Bokmal)
267 | * Norwegian (Nynorsk)
268 | * Occitan
269 | * Persian (Farsi)
270 | * Piedmontese
271 | * Polish
272 | * Portuguese
273 | * Punjabi
274 | * Romanian
275 | * Russian
276 | * Scots
277 | * Serbian
278 | * Serbo-Croatian
279 | * Sicilian
280 | * Slovak
281 | * Slovenian
282 | * South Azerbaijani
283 | * Spanish
284 | * Sundanese
285 | * Swahili
286 | * Swedish
287 | * Tagalog
288 | * Tajik
289 | * Tamil
290 | * Tatar
291 | * Telugu
292 | * Turkish
293 | * Ukrainian
294 | * Urdu
295 | * Uzbek
296 | * Vietnamese
297 | * Volapük
298 | * Waray-Waray
299 | * Welsh
300 | * West Frisian
301 | * Western Punjabi
302 | * Yoruba
303 |
304 | The **Multilingual Cased (New)** release contains additionally **Thai** and
305 | **Mongolian**, which were not included in the original release.
306 |
--------------------------------------------------------------------------------
/examples/BERT/optimization.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2018 The Google AI Language Team Authors.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | """Functions and classes related to optimization (weight updates)."""
16 |
17 | from __future__ import absolute_import
18 | from __future__ import division
19 | from __future__ import print_function
20 | from DeepGradientCompressionOptimizer import DeepGradientCompressionOptimizer
21 |
22 | import re
23 | import tensorflow as tf
24 |
25 |
26 | def create_optimizer(loss, init_lr, num_train_steps, num_warmup_steps, hvd=None, manual_fp16=False, use_fp16=False):
27 | """Creates an optimizer training op."""
28 | global_step = tf.train.get_or_create_global_step()
29 |
30 | # avoid step change in learning rate at end of warmup phase
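    |   # With linear (power=1.0) decay starting from adjusted_init_lr, the decayed
    |   # rate at step num_warmup_steps is adjusted_init_lr * (1 - warmup/train),
    |   # which works out to exactly init_lr, the same value the linear warmup
    |   # ramp ends at, so the two schedules meet without a jump.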
31 | decayed_learning_rate_at_crossover_point = init_lr * (1.0-float(num_warmup_steps)/float(num_train_steps))
32 | adjusted_init_lr = init_lr * (init_lr / decayed_learning_rate_at_crossover_point)
33 | print('decayed_learning_rate_at_crossover_point = %e, adjusted_init_lr = %e' % (decayed_learning_rate_at_crossover_point, adjusted_init_lr))
34 |
35 | learning_rate = tf.constant(value=adjusted_init_lr, shape=[], dtype=tf.float32)
36 |
37 | # Implements linear decay of the learning rate.
38 | learning_rate = tf.train.polynomial_decay(
39 | learning_rate,
40 | global_step,
41 | num_train_steps,
42 | end_learning_rate=0.0,
43 | power=1.0,
44 | cycle=False)
45 |
46 | # Implements linear warmup. I.e., if global_step < num_warmup_steps, the
47 | # learning rate will be `global_step/num_warmup_steps * init_lr`.
48 | if num_warmup_steps:
49 | global_steps_int = tf.cast(global_step, tf.int32)
50 | warmup_steps_int = tf.constant(num_warmup_steps, dtype=tf.int32)
51 |
52 | global_steps_float = tf.cast(global_steps_int, tf.float32)
53 | warmup_steps_float = tf.cast(warmup_steps_int, tf.float32)
54 |
55 | warmup_percent_done = global_steps_float / warmup_steps_float
56 | warmup_learning_rate = init_lr * warmup_percent_done
57 |
58 | is_warmup = tf.cast(global_steps_int < warmup_steps_int, tf.float32)
59 | learning_rate = (
60 | (1.0 - is_warmup) * learning_rate + is_warmup * warmup_learning_rate)
61 |
62 | # It is recommended that you use this optimizer for fine tuning, since this
63 | # is how the model was trained (note that the Adam m/v variables are NOT
64 | # loaded from init_checkpoint.)
65 | optimizer = AdamWeightDecayOptimizer(
66 | learning_rate=learning_rate,
67 | weight_decay_rate=0.01,
68 | beta_1=0.9,
69 | beta_2=0.999,
70 | epsilon=1e-6,
71 | exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"])
72 |
73 | if hvd is not None:
74 | from horovod.tensorflow.compression import Compression
75 | optimizer = hvd.DistributedOptimizer(optimizer, sparse_as_dense=True, compression=Compression.none)
76 | if manual_fp16 or use_fp16:
77 | loss_scale_manager = tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager(init_loss_scale=2**32, incr_every_n_steps=1000, decr_every_n_nan_or_inf=2, decr_ratio=0.5)
78 | optimizer = tf.contrib.mixed_precision.LossScaleOptimizer(optimizer, loss_scale_manager)
79 |
80 | # wrap DeepGradientCompressionOptimizer around horovod Optimizer
81 | optimizer = DeepGradientCompressionOptimizer(optimizer)
82 |
83 |
84 | tvars = tf.trainable_variables()
85 | grads_and_vars = optimizer.compute_gradients(loss, tvars)
86 | grads_and_vars = [(g,v) for g,v in grads_and_vars if g is not None]
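    |   # sparse_to_dense is provided by the DeepGradientCompressionOptimizer
    |   # wrapper; as the name suggests, it converts sparse (IndexedSlices)
    |   # gradients to dense tensors so the clipping below sees dense gradients.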
87 | grads_and_vars = optimizer.sparse_to_dense(grads_and_vars)
88 | grads, tvars = list(zip(*grads_and_vars))
89 | all_are_finite = tf.reduce_all([tf.reduce_all(tf.is_finite(g)) for g in grads]) if manual_fp16 or use_fp16 else tf.constant(True, dtype=tf.bool)
90 |
91 | # This is how the model was pre-trained.
92 | # ensure global norm is a finite number
93 |   # to prevent clip_by_global_norm from having a hissy fit.
94 | (clipped_grads, _) = tf.clip_by_global_norm(
95 | grads, clip_norm=1.0,
96 | use_norm=tf.cond(
97 | all_are_finite,
98 | lambda: tf.global_norm(grads),
99 | lambda: tf.constant(1.0)))
100 |
101 | train_op = optimizer.apply_gradients(
102 | list(zip(clipped_grads, tvars)), global_step=global_step)
103 |
104 | # Normally the global step update is done inside of `apply_gradients`.
105 | # However, `AdamWeightDecayOptimizer` doesn't do this. But if you use
106 | # a different optimizer, you should probably take this line out.
107 | new_global_step = tf.cond(all_are_finite, lambda: global_step+1, lambda: global_step)
108 | new_global_step = tf.identity(new_global_step, name='step_update')
109 | train_op = tf.group(train_op, [global_step.assign(new_global_step)])
110 | return train_op
111 |
112 |
113 | class AdamWeightDecayOptimizer(tf.train.Optimizer):
114 | """A basic Adam optimizer that includes "correct" L2 weight decay."""
115 |
116 | def __init__(self,
117 | learning_rate,
118 | weight_decay_rate=0.0,
119 | beta_1=0.9,
120 | beta_2=0.999,
121 | epsilon=1e-6,
122 | exclude_from_weight_decay=None,
123 | name="AdamWeightDecayOptimizer"):
124 |     """Constructs an AdamWeightDecayOptimizer."""
125 | super(AdamWeightDecayOptimizer, self).__init__(False, name)
126 |
127 | self.learning_rate = tf.identity(learning_rate, name='learning_rate')
128 | self.weight_decay_rate = weight_decay_rate
129 | self.beta_1 = beta_1
130 | self.beta_2 = beta_2
131 | self.epsilon = epsilon
132 | self.exclude_from_weight_decay = exclude_from_weight_decay
133 |
134 | def apply_gradients(self, grads_and_vars, global_step=None, name=None,
135 | manual_fp16=False):
136 | """See base class."""
137 | assignments = []
138 | for (grad, param) in grads_and_vars:
139 | if grad is None or param is None:
140 | continue
141 |
142 | param_name = self._get_variable_name(param.name)
143 | has_shadow = manual_fp16 and param.dtype.base_dtype != tf.float32
144 | if has_shadow:
145 | # create shadow fp32 weights for fp16 variable
146 | param_fp32 = tf.get_variable(
147 | name=param_name + "/shadow",
148 | dtype=tf.float32,
149 | trainable=False,
150 | initializer=tf.cast(param.initialized_value(),tf.float32))
151 | else:
152 | param_fp32 = param
153 |
154 | m = tf.get_variable(
155 | name=param_name + "/adam_m",
156 | shape=param.shape.as_list(),
157 | dtype=tf.float32,
158 | trainable=False,
159 | initializer=tf.zeros_initializer())
160 | v = tf.get_variable(
161 | name=param_name + "/adam_v",
162 | shape=param.shape.as_list(),
163 | dtype=tf.float32,
164 | trainable=False,
165 | initializer=tf.zeros_initializer())
166 |
167 | # Standard Adam update.
168 | next_m = (
169 | tf.multiply(self.beta_1, m) + tf.multiply(1.0 - self.beta_1, grad))
170 | next_v = (
171 | tf.multiply(self.beta_2, v) + tf.multiply(1.0 - self.beta_2,
172 | tf.square(grad)))
173 |
174 | update = next_m / (tf.sqrt(next_v) + self.epsilon)
175 |
176 | # Just adding the square of the weights to the loss function is *not*
177 | # the correct way of using L2 regularization/weight decay with Adam,
178 | # since that will interact with the m and v parameters in strange ways.
179 | #
180 |       # Instead we want to decay the weights in a manner that doesn't interact
181 | # with the m/v parameters. This is equivalent to adding the square
182 | # of the weights to the loss with plain (non-momentum) SGD.
183 | if self._do_use_weight_decay(param_name):
184 | update += self.weight_decay_rate * param_fp32
185 |
186 | update_with_lr = self.learning_rate * update
187 |
188 | next_param = param_fp32 - update_with_lr
189 |
190 | if has_shadow:
191 | # cast shadow fp32 weights to fp16 and assign to trainable variable
192 | param.assign(tf.cast(next_param, param.dtype.base_dtype))
193 | assignments.extend(
194 | [param_fp32.assign(next_param),
195 | m.assign(next_m),
196 | v.assign(next_v)])
197 | return tf.group(*assignments, name=name)
198 |
199 | def _do_use_weight_decay(self, param_name):
200 | """Whether to use L2 weight decay for `param_name`."""
201 | if not self.weight_decay_rate:
202 | return False
203 | if self.exclude_from_weight_decay:
204 | for r in self.exclude_from_weight_decay:
205 | if re.search(r, param_name) is not None:
206 | return False
207 | return True
208 |
209 | def _get_variable_name(self, param_name):
210 | """Get the variable name from the tensor name."""
211 | m = re.match("^(.*):\\d+$", param_name)
212 | if m is not None:
213 | param_name = m.group(1)
214 | return param_name
215 |
--------------------------------------------------------------------------------
/examples/BERT/optimization_test.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2018 The Google AI Language Team Authors.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | from __future__ import absolute_import
16 | from __future__ import division
17 | from __future__ import print_function
18 |
19 | import optimization
20 | import tensorflow as tf
21 |
22 |
23 | class OptimizationTest(tf.test.TestCase):
24 |
25 | def test_adam(self):
26 | with self.test_session() as sess:
27 | w = tf.get_variable(
28 | "w",
29 | shape=[3],
30 | initializer=tf.constant_initializer([0.1, -0.2, -0.1]))
31 | x = tf.constant([0.4, 0.2, -0.5])
32 | loss = tf.reduce_mean(tf.square(x - w))
33 | tvars = tf.trainable_variables()
34 | grads = tf.gradients(loss, tvars)
35 | global_step = tf.train.get_or_create_global_step()
36 | optimizer = optimization.AdamWeightDecayOptimizer(learning_rate=0.2)
37 | train_op = optimizer.apply_gradients(zip(grads, tvars), global_step)
38 | init_op = tf.group(tf.global_variables_initializer(),
39 | tf.local_variables_initializer())
40 | sess.run(init_op)
41 | for _ in range(100):
42 | sess.run(train_op)
43 | w_np = sess.run(w)
44 | self.assertAllClose(w_np.flat, [0.4, 0.2, -0.5], rtol=1e-2, atol=1e-2)
45 |
46 |
47 | if __name__ == "__main__":
48 | tf.test.main()
49 |
--------------------------------------------------------------------------------
/examples/BERT/requirements.txt:
--------------------------------------------------------------------------------
1 | tensorflow >= 1.11.0 # CPU Version of TensorFlow.
2 | # tensorflow-gpu >= 1.11.0 # GPU version of TensorFlow.
3 |
--------------------------------------------------------------------------------
/examples/BERT/run_classifier_with_tfhub.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2018 The Google AI Language Team Authors.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | """BERT finetuning runner with TF-Hub."""
16 |
17 | from __future__ import absolute_import
18 | from __future__ import division
19 | from __future__ import print_function
20 |
21 | import os
22 | import optimization
23 | import run_classifier
24 | import tokenization
25 | import tensorflow as tf
26 | import tensorflow_hub as hub
27 |
28 | flags = tf.flags
29 |
30 | FLAGS = flags.FLAGS
31 |
32 | flags.DEFINE_string(
33 | "bert_hub_module_handle", None,
34 | "Handle for the BERT TF-Hub module.")
35 |
36 |
37 | def create_model(is_training, input_ids, input_mask, segment_ids, labels,
38 | num_labels, bert_hub_module_handle):
39 | """Creates a classification model."""
40 | tags = set()
41 | if is_training:
42 | tags.add("train")
43 | bert_module = hub.Module(bert_hub_module_handle, tags=tags, trainable=True)
44 | bert_inputs = dict(
45 | input_ids=input_ids,
46 | input_mask=input_mask,
47 | segment_ids=segment_ids)
48 | bert_outputs = bert_module(
49 | inputs=bert_inputs,
50 | signature="tokens",
51 | as_dict=True)
52 |
53 | # In the demo, we are doing a simple classification task on the entire
54 | # segment.
55 | #
56 | # If you want to use the token-level output, use
57 | # bert_outputs["sequence_output"] instead.
58 | output_layer = bert_outputs["pooled_output"]
59 |
60 | hidden_size = output_layer.shape[-1].value
61 |
62 | output_weights = tf.get_variable(
63 | "output_weights", [num_labels, hidden_size],
64 | initializer=tf.truncated_normal_initializer(stddev=0.02))
65 |
66 | output_bias = tf.get_variable(
67 | "output_bias", [num_labels], initializer=tf.zeros_initializer())
68 |
69 | with tf.variable_scope("loss"):
70 | if is_training:
71 | # I.e., 0.1 dropout
72 | output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
73 |
74 | logits = tf.matmul(output_layer, output_weights, transpose_b=True)
75 | logits = tf.nn.bias_add(logits, output_bias)
76 | probabilities = tf.nn.softmax(logits, axis=-1)
77 | log_probs = tf.nn.log_softmax(logits, axis=-1)
78 |
79 | one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
80 |
81 | per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
82 | loss = tf.reduce_mean(per_example_loss)
83 |
84 | return (loss, per_example_loss, logits, probabilities)
85 |
86 |
87 | def model_fn_builder(num_labels, learning_rate, num_train_steps,
88 | num_warmup_steps, use_tpu, bert_hub_module_handle):
89 | """Returns `model_fn` closure for TPUEstimator."""
90 |
91 | def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
92 | """The `model_fn` for TPUEstimator."""
93 |
94 | tf.logging.info("*** Features ***")
95 | for name in sorted(features.keys()):
96 | tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape))
97 |
98 | input_ids = features["input_ids"]
99 | input_mask = features["input_mask"]
100 | segment_ids = features["segment_ids"]
101 | label_ids = features["label_ids"]
102 |
103 | is_training = (mode == tf.estimator.ModeKeys.TRAIN)
104 |
105 | (total_loss, per_example_loss, logits, probabilities) = create_model(
106 | is_training, input_ids, input_mask, segment_ids, label_ids, num_labels,
107 | bert_hub_module_handle)
108 |
109 | output_spec = None
110 | if mode == tf.estimator.ModeKeys.TRAIN:
111 |       train_op = optimization.create_optimizer(
112 |           total_loss, learning_rate, num_train_steps, num_warmup_steps)  # this repo's create_optimizer takes hvd, not use_tpu, as its fifth argument
113 |
114 | output_spec = tf.contrib.tpu.TPUEstimatorSpec(
115 | mode=mode,
116 | loss=total_loss,
117 | train_op=train_op)
118 | elif mode == tf.estimator.ModeKeys.EVAL:
119 |
120 | def metric_fn(per_example_loss, label_ids, logits):
121 | predictions = tf.argmax(logits, axis=-1, output_type=tf.int32)
122 | accuracy = tf.metrics.accuracy(label_ids, predictions)
123 | loss = tf.metrics.mean(per_example_loss)
124 | return {
125 | "eval_accuracy": accuracy,
126 | "eval_loss": loss,
127 | }
128 |
129 | eval_metrics = (metric_fn, [per_example_loss, label_ids, logits])
130 | output_spec = tf.contrib.tpu.TPUEstimatorSpec(
131 | mode=mode,
132 | loss=total_loss,
133 | eval_metrics=eval_metrics)
134 | elif mode == tf.estimator.ModeKeys.PREDICT:
135 | output_spec = tf.contrib.tpu.TPUEstimatorSpec(
136 | mode=mode, predictions={"probabilities": probabilities})
137 | else:
138 | raise ValueError(
139 | "Only TRAIN, EVAL and PREDICT modes are supported: %s" % (mode))
140 |
141 | return output_spec
142 |
143 | return model_fn
144 |
145 |
146 | def create_tokenizer_from_hub_module(bert_hub_module_handle):
147 | """Get the vocab file and casing info from the Hub module."""
148 | with tf.Graph().as_default():
149 | bert_module = hub.Module(bert_hub_module_handle)
150 | tokenization_info = bert_module(signature="tokenization_info", as_dict=True)
151 | with tf.Session() as sess:
152 | vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
153 | tokenization_info["do_lower_case"]])
154 | return tokenization.FullTokenizer(
155 | vocab_file=vocab_file, do_lower_case=do_lower_case)
156 |
157 |
158 | def main(_):
159 | tf.logging.set_verbosity(tf.logging.INFO)
160 |
161 | processors = {
162 | "cola": run_classifier.ColaProcessor,
163 | "mnli": run_classifier.MnliProcessor,
164 | "mrpc": run_classifier.MrpcProcessor,
165 | }
166 |
167 | if not FLAGS.do_train and not FLAGS.do_eval:
168 | raise ValueError("At least one of `do_train` or `do_eval` must be True.")
169 |
170 | tf.gfile.MakeDirs(FLAGS.output_dir)
171 |
172 | task_name = FLAGS.task_name.lower()
173 |
174 | if task_name not in processors:
175 | raise ValueError("Task not found: %s" % (task_name))
176 |
177 | processor = processors[task_name]()
178 |
179 | label_list = processor.get_labels()
180 |
181 | tokenizer = create_tokenizer_from_hub_module(FLAGS.bert_hub_module_handle)
182 |
183 | tpu_cluster_resolver = None
184 | if FLAGS.use_tpu and FLAGS.tpu_name:
185 | tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
186 | FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project)
187 |
188 | is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
189 | run_config = tf.contrib.tpu.RunConfig(
190 | cluster=tpu_cluster_resolver,
191 | master=FLAGS.master,
192 | model_dir=FLAGS.output_dir,
193 | save_checkpoints_steps=FLAGS.save_checkpoints_steps,
194 | tpu_config=tf.contrib.tpu.TPUConfig(
195 | iterations_per_loop=FLAGS.iterations_per_loop,
196 | num_shards=FLAGS.num_tpu_cores,
197 | per_host_input_for_training=is_per_host))
198 |
199 | train_examples = None
200 | num_train_steps = None
201 | num_warmup_steps = None
202 | if FLAGS.do_train:
203 | train_examples = processor.get_train_examples(FLAGS.data_dir)
204 | num_train_steps = int(
205 | len(train_examples) / FLAGS.train_batch_size * FLAGS.num_train_epochs)
206 | num_warmup_steps = int(num_train_steps * FLAGS.warmup_proportion)
207 |
208 | model_fn = model_fn_builder(
209 | num_labels=len(label_list),
210 | learning_rate=FLAGS.learning_rate,
211 | num_train_steps=num_train_steps,
212 | num_warmup_steps=num_warmup_steps,
213 | use_tpu=FLAGS.use_tpu,
214 | bert_hub_module_handle=FLAGS.bert_hub_module_handle)
215 |
216 | # If TPU is not available, this will fall back to normal Estimator on CPU
217 | # or GPU.
218 | estimator = tf.contrib.tpu.TPUEstimator(
219 | use_tpu=FLAGS.use_tpu,
220 | model_fn=model_fn,
221 | config=run_config,
222 | train_batch_size=FLAGS.train_batch_size,
223 | eval_batch_size=FLAGS.eval_batch_size,
224 | predict_batch_size=FLAGS.predict_batch_size)
225 |
226 | if FLAGS.do_train:
227 | train_features = run_classifier.convert_examples_to_features(
228 | train_examples, label_list, FLAGS.max_seq_length, tokenizer)
229 | tf.logging.info("***** Running training *****")
230 | tf.logging.info(" Num examples = %d", len(train_examples))
231 | tf.logging.info(" Batch size = %d", FLAGS.train_batch_size)
232 | tf.logging.info(" Num steps = %d", num_train_steps)
233 | train_input_fn = run_classifier.input_fn_builder(
234 | features=train_features,
235 | seq_length=FLAGS.max_seq_length,
236 | is_training=True,
237 | drop_remainder=True)
238 | estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
239 |
240 | if FLAGS.do_eval:
241 | eval_examples = processor.get_dev_examples(FLAGS.data_dir)
242 | eval_features = run_classifier.convert_examples_to_features(
243 | eval_examples, label_list, FLAGS.max_seq_length, tokenizer)
244 |
245 | tf.logging.info("***** Running evaluation *****")
246 | tf.logging.info(" Num examples = %d", len(eval_examples))
247 | tf.logging.info(" Batch size = %d", FLAGS.eval_batch_size)
248 |
249 | # This tells the estimator to run through the entire set.
250 | eval_steps = None
251 | # However, if running eval on the TPU, you will need to specify the
252 | # number of steps.
253 | if FLAGS.use_tpu:
254 | # Eval will be slightly WRONG on the TPU because it will truncate
255 | # the last batch.
256 | eval_steps = int(len(eval_examples) / FLAGS.eval_batch_size)
257 |
258 | eval_drop_remainder = True if FLAGS.use_tpu else False
259 | eval_input_fn = run_classifier.input_fn_builder(
260 | features=eval_features,
261 | seq_length=FLAGS.max_seq_length,
262 | is_training=False,
263 | drop_remainder=eval_drop_remainder)
264 |
265 | result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps)
266 |
267 | output_eval_file = os.path.join(FLAGS.output_dir, "eval_results.txt")
268 | with tf.gfile.GFile(output_eval_file, "w") as writer:
269 | tf.logging.info("***** Eval results *****")
270 | for key in sorted(result.keys()):
271 | tf.logging.info(" %s = %s", key, str(result[key]))
272 | writer.write("%s = %s\n" % (key, str(result[key])))
273 |
274 | if FLAGS.do_predict:
275 | predict_examples = processor.get_test_examples(FLAGS.data_dir)
276 | if FLAGS.use_tpu:
277 | # Discard batch remainder if running on TPU
278 | n = len(predict_examples)
279 | predict_examples = predict_examples[:(n - n % FLAGS.predict_batch_size)]
280 |
281 | predict_file = os.path.join(FLAGS.output_dir, "predict.tf_record")
282 | run_classifier.file_based_convert_examples_to_features(
283 | predict_examples, label_list, FLAGS.max_seq_length, tokenizer,
284 | predict_file)
285 |
286 | tf.logging.info("***** Running prediction*****")
287 | tf.logging.info(" Num examples = %d", len(predict_examples))
288 | tf.logging.info(" Batch size = %d", FLAGS.predict_batch_size)
289 |
290 | predict_input_fn = run_classifier.file_based_input_fn_builder(
291 | input_file=predict_file,
292 | seq_length=FLAGS.max_seq_length,
293 | is_training=False,
294 | drop_remainder=FLAGS.use_tpu)
295 |
296 | result = estimator.predict(input_fn=predict_input_fn)
297 |
298 | output_predict_file = os.path.join(FLAGS.output_dir, "test_results.tsv")
299 | with tf.gfile.GFile(output_predict_file, "w") as writer:
300 | tf.logging.info("***** Predict results *****")
301 | for prediction in result:
302 | probabilities = prediction["probabilities"]
303 | output_line = "\t".join(
304 | str(class_probability)
305 | for class_probability in probabilities) + "\n"
306 | writer.write(output_line)
307 |
308 |
309 | if __name__ == "__main__":
310 | flags.mark_flag_as_required("data_dir")
311 | flags.mark_flag_as_required("task_name")
312 | flags.mark_flag_as_required("bert_hub_module_handle")
313 | flags.mark_flag_as_required("output_dir")
314 | tf.app.run()
315 |
--------------------------------------------------------------------------------
/examples/BERT/run_pretraining.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | mpiexec --allow-run-as-root --bind-to socket -np 8 python3 run_pretraining.py \
4 | --input_file=/workspace/data/bert_large_wikipedia_seq_512_pred_20/tf_examples.tfrecord* \
5 | --output_dir=/workspace/checkpoints/pretraining_base_output \
6 | --do_train=True \
7 | --do_eval=True \
8 | --bert_config_file=$BERT_BASE_DIR/bert_config.json \
9 | --train_batch_size=14 \
10 | --max_seq_length=512 \
11 | --max_predictions_per_seq=20 \
12 | --num_train_steps=250000 \
13 | --num_warmup_steps=10000 \
14 | --learning_rate=1e-4 \
15 | --use_fp16 \
16 | --use_xla \
17 | --report_loss \
18 | --horovod
19 |
20 |
--------------------------------------------------------------------------------
/examples/BERT/run_squad_trtis_client.py:
--------------------------------------------------------------------------------
1 | import modeling
2 | import tokenization
3 | from tensorrtserver.api import ProtocolType, InferContext, ServerStatusContext, grpc_service_pb2_grpc, grpc_service_pb2, model_config_pb2
4 | from utils.create_squad_data import *
5 | import grpc
6 | from run_squad import *
7 | import numpy as np
8 | import tqdm
9 |
10 | # Set this to either 'label_ids' for Google bert or 'unique_ids' for JoC
11 | label_id_key = "unique_ids"
12 |
13 | PendingResult = collections.namedtuple("PendingResult",
14 | ["async_id", "start_time", "inputs"])
15 |
16 | def batch(iterable, n=1):
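    |     """Yield dicts mapping input names to tuples of arrays, at most n features per batch."""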
17 | l = len(iterable)
18 | for ndx in range(0, l, n):
19 | label_ids_data = ()
20 | input_ids_data = ()
21 | input_mask_data = ()
22 | segment_ids_data = ()
23 | for i in range(0, min(n, l-ndx)):
24 | label_ids_data = label_ids_data + (np.array([iterable[ndx + i].unique_id], dtype=np.int32),)
25 | input_ids_data = input_ids_data+ (np.array(iterable[ndx + i].input_ids, dtype=np.int32),)
26 | input_mask_data = input_mask_data+ (np.array(iterable[ndx + i].input_mask, dtype=np.int32),)
27 | segment_ids_data = segment_ids_data+ (np.array(iterable[ndx + i].segment_ids, dtype=np.int32),)
28 |
29 | inputs_dict = {label_id_key: label_ids_data,
30 | 'input_ids': input_ids_data,
31 | 'input_mask': input_mask_data,
32 | 'segment_ids': segment_ids_data}
33 | yield inputs_dict
34 |
35 | def run_client():
36 |     """
37 |     Send the SQuAD evaluation features to the TRTIS server as asynchronous
38 |     inference requests, collect the raw start/end logits for each example,
39 |     and write the predictions under FLAGS.output_dir.
40 |     """
43 |
44 | tokenizer = tokenization.FullTokenizer(vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
45 |
46 |
47 | eval_examples = read_squad_examples(
48 | input_file=FLAGS.predict_file, is_training=False,
49 | version_2_with_negative=FLAGS.version_2_with_negative)
50 |
51 | eval_features = []
52 |
53 | def append_feature(feature):
54 | eval_features.append(feature)
55 |
56 | convert_examples_to_features(
57 | examples=eval_examples[0:],
58 | tokenizer=tokenizer,
59 | max_seq_length=FLAGS.max_seq_length,
60 | doc_stride=FLAGS.doc_stride,
61 | max_query_length=FLAGS.max_query_length,
62 | is_training=False,
63 | output_fn=append_feature)
64 |
65 | protocol_str = 'grpc' # http or grpc
66 | url = FLAGS.trtis_server_url
67 | verbose = True
68 | model_name = FLAGS.trtis_model_name
69 | model_version = FLAGS.trtis_model_version
70 | batch_size = FLAGS.predict_batch_size
71 |
72 | protocol = ProtocolType.from_str(protocol_str) # or 'grpc'
73 |
74 | ctx = InferContext(url, protocol, model_name, model_version, verbose)
75 |
76 | channel = grpc.insecure_channel(url)
77 |
78 | stub = grpc_service_pb2_grpc.GRPCServiceStub(channel)
79 |
80 | prof_request = grpc_service_pb2.server__status__pb2.model__config__pb2.ModelConfig()
81 |
82 | prof_response = stub.Profile(prof_request)
83 |
84 | status_ctx = ServerStatusContext(url, protocol, model_name=model_name, verbose=verbose)
85 |
86 | model_config_pb2.ModelConfig()
87 |
88 | status_result = status_ctx.get_server_status()
89 |
90 | outstanding = {}
91 | max_outstanding = 20
92 |
93 | sent_prog = tqdm.tqdm(desc="Send Requests", total=len(eval_features))
94 | recv_prog = tqdm.tqdm(desc="Recv Requests", total=len(eval_features))
95 |
96 | def process_outstanding(do_wait):
97 |
98 | if (len(outstanding) == 0):
99 | return
100 |
101 | ready_id = ctx.get_ready_async_request(do_wait)
102 |
103 | if (ready_id is None):
104 | return
105 |
106 | # If we are here, we got an id
107 | result = ctx.get_async_run_results(ready_id, False)
108 | stop = time.time()
109 |
110 | if (result is None):
111 | raise ValueError("Context returned null for async id marked as done")
112 |
113 | outResult = outstanding.pop(ready_id)
114 |
115 | time_list.append(stop - outResult.start_time)
116 |
117 | batch_count = len(outResult.inputs[label_id_key])
118 |
119 | for i in range(batch_count):
120 | unique_id = int(outResult.inputs[label_id_key][i][0])
121 | start_logits = [float(x) for x in result["start_logits"][i].flat]
122 | end_logits = [float(x) for x in result["end_logits"][i].flat]
123 | all_results.append(
124 | RawResult(
125 | unique_id=unique_id,
126 | start_logits=start_logits,
127 | end_logits=end_logits))
128 |
129 | recv_prog.update(n=batch_count)
130 |
131 | all_results = []
132 | time_list = []
133 |
134 | print("Starting Sending Requests....\n")
135 |
136 | all_results_start = time.time()
137 |
138 | for inputs_dict in batch(eval_features, batch_size):
139 |
140 | present_batch_size = len(inputs_dict[label_id_key])
141 |
142 | outputs_dict = {'start_logits': InferContext.ResultFormat.RAW,
143 | 'end_logits': InferContext.ResultFormat.RAW}
144 |
145 | start = time.time()
146 | async_id = ctx.async_run(inputs_dict, outputs_dict, batch_size=present_batch_size)
147 |
148 | outstanding[async_id] = PendingResult(async_id=async_id, start_time=start, inputs=inputs_dict)
149 |
150 | sent_prog.update(n=present_batch_size)
151 |
152 | # Try to process at least one response per request
153 | process_outstanding(len(outstanding) >= max_outstanding)
154 |
155 | tqdm.tqdm.write("All Requests Sent! Waiting for responses. Outstanding: {}.\n".format(len(outstanding)))
156 |
157 | # Now process all outstanding requests
158 | while (len(outstanding) > 0):
159 | process_outstanding(True)
160 |
161 | all_results_end = time.time()
162 | all_results_total = (all_results_end - all_results_start) * 1000.0
163 |
164 | print("-----------------------------")
165 | print("Individual Time Runs - Ignoring first two iterations")
166 | print("Total Time: {} ms".format(all_results_total))
167 | print("-----------------------------")
168 |
169 | print("-----------------------------")
170 |     print("Total Inference Time = %0.2f for "
171 | "Sentences processed = %d" % (sum(time_list), len(eval_features)))
172 | print("Throughput Average (sentences/sec) = %0.2f" % (len(eval_features) / all_results_total * 1000.0))
173 | print("-----------------------------")
174 |
175 | time_list.sort()
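    |     # With time_list sorted ascending, the max of the first k% of entries
    |     # is the k-th percentile latency reported below.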
176 |
177 | avg = np.mean(time_list)
178 | cf_95 = max(time_list[:int(len(time_list) * 0.95)])
179 | cf_99 = max(time_list[:int(len(time_list) * 0.99)])
180 | cf_100 = max(time_list[:int(len(time_list) * 1)])
181 | print("-----------------------------")
182 | print("Summary Statistics")
183 | print("Batch size =", FLAGS.predict_batch_size)
184 | print("Sequence Length =", FLAGS.max_seq_length)
185 | print("Latency Confidence Level 95 (ms) =", cf_95 * 1000)
186 | print("Latency Confidence Level 99 (ms) =", cf_99 * 1000)
187 | print("Latency Confidence Level 100 (ms) =", cf_100 * 1000)
188 | print("Latency Average (ms) =", avg * 1000)
189 | print("-----------------------------")
190 |
191 |
192 | output_prediction_file = os.path.join(FLAGS.output_dir, "predictions.json")
193 | output_nbest_file = os.path.join(FLAGS.output_dir, "nbest_predictions.json")
194 | output_null_log_odds_file = os.path.join(FLAGS.output_dir, "null_odds.json")
195 |
196 | write_predictions(eval_examples, eval_features, all_results,
197 | FLAGS.n_best_size, FLAGS.max_answer_length,
198 | FLAGS.do_lower_case, output_prediction_file,
199 | output_nbest_file, output_null_log_odds_file)
200 |
201 |
202 |
203 | if __name__ == "__main__":
204 | flags.mark_flag_as_required("vocab_file")
205 | flags.mark_flag_as_required("bert_config_file")
206 | flags.mark_flag_as_required("output_dir")
207 |
208 | run_client()
209 |
210 |
--------------------------------------------------------------------------------
/examples/BERT/sample_text.txt:
--------------------------------------------------------------------------------
1 | This text is included to make sure Unicode is handled properly: 力加勝北区ᴵᴺᵀᵃছজটডণত
2 | Text should be one-sentence-per-line, with empty lines between documents.
3 | This sample text is public domain and was randomly selected from Project Gutenberg.
4 |
5 | The rain had only ceased with the gray streaks of morning at Blazing Star, and the settlement awoke to a moral sense of cleanliness, and the finding of forgotten knives, tin cups, and smaller camp utensils, where the heavy showers had washed away the debris and dust heaps before the cabin doors.
6 | Indeed, it was recorded in Blazing Star that a fortunate early riser had once picked up on the highway a solid chunk of gold quartz which the rain had freed from its incumbering soil, and washed into immediate and glittering popularity.
7 | Possibly this may have been the reason why early risers in that locality, during the rainy season, adopted a thoughtful habit of body, and seldom lifted their eyes to the rifted or india-ink washed skies above them.
8 | "Cass" Beard had risen early that morning, but not with a view to discovery.
9 | A leak in his cabin roof,--quite consistent with his careless, improvident habits,--had roused him at 4 A. M., with a flooded "bunk" and wet blankets.
10 | The chips from his wood pile refused to kindle a fire to dry his bed-clothes, and he had recourse to a more provident neighbor's to supply the deficiency.
11 | This was nearly opposite.
12 | Mr. Cassius crossed the highway, and stopped suddenly.
13 | Something glittered in the nearest red pool before him.
14 | Gold, surely!
15 | But, wonderful to relate, not an irregular, shapeless fragment of crude ore, fresh from Nature's crucible, but a bit of jeweler's handicraft in the form of a plain gold ring.
16 | Looking at it more attentively, he saw that it bore the inscription, "May to Cass."
17 | Like most of his fellow gold-seekers, Cass was superstitious.
18 |
19 | The fountain of classic wisdom, Hypatia herself.
20 | As the ancient sage--the name is unimportant to a monk--pumped water nightly that he might study by day, so I, the guardian of cloaks and parasols, at the sacred doors of her lecture-room, imbibe celestial knowledge.
21 | From my youth I felt in me a soul above the matter-entangled herd.
22 | She revealed to me the glorious fact, that I am a spark of Divinity itself.
23 | A fallen star, I am, sir!' continued he, pensively, stroking his lean stomach--'a fallen star!--fallen, if the dignity of philosophy will allow of the simile, among the hogs of the lower world--indeed, even into the hog-bucket itself. Well, after all, I will show you the way to the Archbishop's.
24 | There is a philosophic pleasure in opening one's treasures to the modest young.
25 | Perhaps you will assist me by carrying this basket of fruit?' And the little man jumped up, put his basket on Philammon's head, and trotted off up a neighbouring street.
26 | Philammon followed, half contemptuous, half wondering at what this philosophy might be, which could feed the self-conceit of anything so abject as his ragged little apish guide;
27 | but the novel roar and whirl of the street, the perpetual stream of busy faces, the line of curricles, palanquins, laden asses, camels, elephants, which met and passed him, and squeezed him up steps and into doorways, as they threaded their way through the great Moon-gate into the ample street beyond, drove everything from his mind but wondering curiosity, and a vague, helpless dread of that great living wilderness, more terrible than any dead wilderness of sand which he had left behind.
28 | Already he longed for the repose, the silence of the Laura--for faces which knew him and smiled upon him; but it was too late to turn back now.
29 | His guide held on for more than a mile up the great main street, crossed in the centre of the city, at right angles, by one equally magnificent, at each end of which, miles away, appeared, dim and distant over the heads of the living stream of passengers, the yellow sand-hills of the desert;
30 | while at the end of the vista in front of them gleamed the blue harbour, through a network of countless masts.
31 | At last they reached the quay at the opposite end of the street;
32 | and there burst on Philammon's astonished eyes a vast semicircle of blue sea, ringed with palaces and towers.
33 | He stopped involuntarily; and his little guide stopped also, and looked askance at the young monk, to watch the effect which that grand panorama should produce on him.
34 |
--------------------------------------------------------------------------------
/examples/BERT/scripts/data_download.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | docker run --runtime=nvidia -v $PWD:/workspace/bert \
4 | --rm --shm-size=1g --ulimit memlock=-1 \
5 | --ulimit stack=67108864 --ipc=host -t -i \
6 | bert bash -c "bash scripts/data_download_helper.sh"
--------------------------------------------------------------------------------
/examples/BERT/scripts/data_download_helper.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # Download pretrained_models
4 | cd /workspace/bert/data/pretrained_models_google && python3 download_models.py
5 |
6 | # Download SQUAD
7 | cd /workspace/bert/data/squad && . squad_download.sh
8 |
9 | # Download GLUE
10 | cd /workspace/bert/data/glue && python3 download_glue_data.py
11 |
12 | # WIKI Download, set config in data/wikipedia_corpus/config.sh
13 | cd /workspace/bert/data/wikipedia_corpus && . run_preprocessing.sh
14 |
15 | cd /workspace/bert/data/bookcorpus && . run_preprocessing.sh
16 |
17 | cd /workspace/bert/data/glue && python3 download_glue_data.py
18 |
--------------------------------------------------------------------------------
/examples/BERT/scripts/docker/build.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | docker pull nvcr.io/nvidia/tensorrtserver:19.06-py3
4 |
5 | #The following has been commented out since we need fixes for the perf_client from Guan
6 | #Uncomment to enable building.
7 | #For now, the tensorrt_client can be downloaded from https://drive.google.com/drive/u/1/folders/1CeOMZbnFT1VUIlIMoDEZJb3kOKbXBDbZ
8 | git submodule update --init --recursive && cd tensorrt-inference-server && docker build -t tensorrtserver_client -f Dockerfile.client . && cd -
9 |
10 | docker build . --rm -t bert
11 |
--------------------------------------------------------------------------------
/examples/BERT/scripts/docker/launch.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | CMD=${@:-/bin/bash}
4 | NV_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES:-"all"}
5 |
6 |
7 | nvidia-docker run --rm -it \
8 | --net=host \
9 | --shm-size=1g \
10 | --ulimit memlock=-1 \
11 | --ulimit stack=67108864 \
12 | -e NVIDIA_VISIBLE_DEVICES=$NV_VISIBLE_DEVICES \
13 | -v $PWD:/workspace/bert \
14 | -v $PWD/results:/results \
15 | bert $CMD
16 |
--------------------------------------------------------------------------------
/examples/BERT/scripts/docker/launch_server.sh:
--------------------------------------------------------------------------------
1 | precision=${1:-"fp16"}
2 | NV_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES:-"all"}
3 |
4 | if [ "$precision" = "fp16" ] ; then
5 | echo "fp16 activated!"
6 | export TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE=1
7 | else
8 | echo "fp32 activated!"
9 | export TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE=0
10 | fi
11 |
12 | # Start TRTIS server in detached state
13 | nvidia-docker run -d --rm \
14 | --shm-size=1g \
15 | --ulimit memlock=-1 \
16 | --ulimit stack=67108864 \
17 | -p8000:8000 \
18 | -p8001:8001 \
19 | -p8002:8002 \
20 | --name trt_server_cont \
21 | -e NVIDIA_VISIBLE_DEVICES=$NV_VISIBLE_DEVICES \
22 | -e TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE \
23 | -v $PWD/results/trtis_models:/models \
24 | nvcr.io/nvidia/tensorrtserver:19.06-py3 trtserver --model-store=/models --strict-model-config=false
--------------------------------------------------------------------------------
/examples/BERT/scripts/finetune_inference_benchmark.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | bert_model=${1:-"large"}
4 | use_xla=${2:-"true"}
5 | task=${3:-"squad"}
6 |
7 | if [ "$bert_model" = "large" ] ; then
8 | export BERT_DIR=data/pretrained_models_google/uncased_L-24_H-1024_A-16
9 | else
10 | export BERT_DIR=data/pretrained_models_google/uncased_L-12_H-768_A-12
11 | fi
12 | echo "BERT directory set as " $BERT_DIR
13 |
14 | init_checkpoint="$BERT_DIR/bert_model.ckpt"
15 |
16 | if [ "$use_xla" = "true" ] ; then
17 | use_xla_tag="--use_xla"
18 | echo "XLA activated"
19 | else
20 | use_xla_tag=""
21 | fi
22 |
23 | #Edit to save logs & checkpoints in a different directory
24 | RESULTS_DIR=/results
25 | if [ ! -d "$RESULTS_DIR" ] ; then
26 | echo "Error! $RESULTS_DIR directory missing."
27 | exit -1
28 | fi
29 | echo "Results directory set as " $RESULTS_DIR
30 |
31 | LOGFILE="${RESULTS_DIR}/${task}_inference_benchmark_bert_${bert_model}.log"
32 | tmp_file="/tmp/${task}_inference_benchmark.log"
33 | if [ "$task" = "squad" ] ; then
34 | export SQUAD_DIR=data/squad/v1.1
35 |
36 | echo "Squad directory set as " $SQUAD_DIR
37 |
38 | echo "Inference performance benchmarking for BERT $bert_model from $BERT_DIR" >> $LOGFILE
39 | echo "Precision $precision" >> $LOGFILE
40 | echo "Sequence-Length Batch-size Precision Throughput-Average(sent/sec) Latency-Average(ms) Latency-50%(ms) Latency-90%(ms) Latency-95%(ms) Latency-99%(ms) Latency-100%(ms)" >> $LOGFILE
41 |
42 | for seq_len in 128 384; do
43 |
44 | for bs in 1 2 4 8; do
45 |
46 | for precision in fp16 fp32; do
47 |
48 |
49 | if [ "$precision" = "fp16" ] ; then
50 | echo "fp16 activated!"
51 | export TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE=1
52 | use_fp16="--use_fp16"
53 | else
54 | echo "fp32 activated!"
55 | export TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE=0
56 | use_fp16=""
57 | fi
58 |
59 | python run_squad.py \
60 | --vocab_file=$BERT_DIR/vocab.txt \
61 | --bert_config_file=$BERT_DIR/bert_config.json \
62 | --init_checkpoint=$init_checkpoint \
63 | --do_predict=True \
64 | --predict_file=$SQUAD_DIR/dev-v1.1.json \
65 | --predict_batch_size=$bs \
66 | --max_seq_length=$seq_len \
67 | --doc_stride=128 \
68 | --output_dir=${RESULTS_DIR} \
69 | "$use_fp16" \
70 | $use_xla_tag --num_eval_iterations=1024 |& tee $tmp_file
71 |
72 | perf=`cat $tmp_file | grep -F 'Throughput Average (sentences/sec)' | awk -F'= ' '{print $2}'`
73 | la=`cat $tmp_file | grep -F 'Latency Average (ms)' | awk -F'= ' '{print $2}'`
74 | l50=`cat $tmp_file | grep -F 'Latency Confidence Level 50 (ms)' | awk -F'= ' '{print $2}'`
75 | l90=`cat $tmp_file | grep -F 'Latency Confidence Level 90 (ms)' | awk -F'= ' '{print $2}'`
76 | l95=`cat $tmp_file | grep -F 'Latency Confidence Level 95 (ms)' | awk -F'= ' '{print $2}'`
77 | l99=`cat $tmp_file | grep -F 'Latency Confidence Level 99 (ms)' | awk -F'= ' '{print $2}'`
78 | l100=`cat $tmp_file | grep -F 'Latency Confidence Level 100 (ms)' | awk -F'= ' '{print $2}'`
79 |
80 | echo "$seq_len $bs $precision $perf $la $l50 $l90 $l95 $l99 $l100" >> $LOGFILE
81 |
82 | done
83 | done
84 | done
85 |
86 | else
87 |
88 | echo "Benchmarking for " $task "currently not supported. Sorry!"
89 |
90 | fi
--------------------------------------------------------------------------------
/examples/BERT/scripts/finetune_train_benchmark.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | bert_model=${1:-"large"}
4 | precision=${2:-"fp16"}
5 | use_xla=${3:-"true"}
6 | num_gpu=${4:-"8"}
7 | task=${5:-"squad"}
8 |
9 | if [ "$bert_model" = "large" ] ; then
10 | export BERT_DIR=data/pretrained_models_google/uncased_L-24_H-1024_A-16
11 | else
12 | export BERT_DIR=data/pretrained_models_google/uncased_L-12_H-768_A-12
13 | fi
14 |
15 | echo "BERT directory set as " $BERT_DIR
16 |
17 | init_checkpoint="$BERT_DIR/bert_model.ckpt"
18 | learning_rate=5e-6
19 |
20 | #Edit to save logs & checkpoints in a different directory
21 | RESULTS_DIR=/results
22 | if [ ! -d "$RESULTS_DIR" ] ; then
23 | echo "Error! $RESULTS_DIR directory missing."
24 | exit -1
25 | fi
26 | echo "Results directory set as " $RESULTS_DIR
27 |
28 | use_fp16=""
29 | if [ "$precision" = "fp16" ] ; then
30 | export TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE=1
31 | use_fp16="--use_fp16"
32 | fi
33 |
34 |
35 | if [ "$use_xla" = "true" ] ; then
36 | use_xla_tag="--use_xla"
37 | else
38 | use_xla_tag=""
39 | fi
40 |
41 | if [ $num_gpu -gt 1 ] ; then
42 | mpi_command="mpirun -np $num_gpu -H localhost:$num_gpu \
43 | --allow-run-as-root -bind-to none -map-by slot \
44 | -x NCCL_DEBUG=INFO \
45 | -x LD_LIBRARY_PATH \
46 | -x PATH -mca pml ob1 -mca btl ^openib"
47 | use_hvd="--horovod"
48 | else
49 | mpi_command=""
50 | use_hvd=""
51 | fi
52 |
53 | LOGFILE="${RESULTS_DIR}/${task}_training_benchmark_bert_${bert_model}_gpu_${num_gpu}.log"
54 |
55 | if [ "$task" = "squad" ] ; then
56 | export SQUAD_DIR=data/squad/v1.1
57 | epochs="2.0"
58 | echo "Squad directory set as " $SQUAD_DIR
59 |
60 | echo "Training performance benchmarking for BERT $bert_model from $BERT_DIR" >> $LOGFILE
61 | echo "Precision $precision" >> $LOGFILE
62 | echo "Sequence Length Batch size Performance(sent/sec)" >> $LOGFILE
63 |
64 | for seq_len in 128 384; do
65 |
66 | if [ "$seq_len" = "128" ] ; then
67 | doc_stride=64
68 | else
69 | doc_stride=128
70 | fi
71 |
72 | for batch_size in 1 2 4; do
73 | for precision in fp16 fp32; do
74 | res_dir=${RESULTS_DIR}/bert_${bert_model}_gpu_${num_gpu}_sl_${seq_len}_prec_${precision}_bs_${batch_size}
75 | tmp_file="${res_dir}/${task}_training_benchmark.log"
76 |     mkdir -p $res_dir
77 | if [ "$precision" = "fp16" ] ; then
78 | echo "fp16 activated!"
79 | export TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE=1
80 | use_fp16="--use_fp16"
81 | else
82 | echo "fp32 activated!"
83 | export TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE=0
84 | use_fp16=""
85 | fi
86 |
87 | $mpi_command python run_squad.py \
88 | --vocab_file=$BERT_DIR/vocab.txt \
89 | --bert_config_file=$BERT_DIR/bert_config.json \
90 | --init_checkpoint=$init_checkpoint \
91 | --do_train=True \
92 | --train_file=$SQUAD_DIR/train-v1.1.json \
93 | --train_batch_size=$batch_size \
94 | --learning_rate=$learning_rate \
95 | --num_train_epochs=$epochs \
96 | --max_seq_length=$seq_len \
97 | --doc_stride=$doc_stride \
98 | --output_dir=$res_dir \
99 |      $use_hvd \
100 |      $use_fp16 \
101 | $use_xla_tag |& tee $tmp_file
102 |
103 | perf=`cat $tmp_file | grep -F 'Training Performance' | awk -F'= ' '{print $2}'`
104 |     echo "$seq_len $batch_size $perf" >> $LOGFILE
105 |
106 | done
107 | done
108 | done
109 |
110 | else
111 |
112 | echo "Benchmarking for $task is currently not supported. Sorry!"
113 |
114 | fi
--------------------------------------------------------------------------------
/examples/BERT/scripts/run_glue.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | echo "Container nvidia build = " $NVIDIA_BUILD_ID
4 |
5 | batch_size=${1:-"32"}
6 | learning_rate=${2:-"2e-5"}
7 | precision=${3:-"fp16"}
8 | use_xla=${4:-"true"}
9 | num_gpu=${5:-"8"}
10 | seq_length=${6:-"128"}
11 | bert_model=${7:-"large"}
12 |
13 | if [ "$bert_model" = "large" ] ; then
14 | export BERT_DIR=data/pretrained_models_google/uncased_L-24_H-1024_A-16
15 | else
16 | export BERT_DIR=data/pretrained_models_google/uncased_L-12_H-768_A-12
17 | fi
18 | export GLUE_DIR=data/glue
19 |
20 | epochs=${8:-"3.0"}
21 | ws=${9:-"0.1"}
22 | init_checkpoint=${10:-"$BERT_DIR/bert_model.ckpt"}
23 |
24 | #Edit to save logs & checkpoints in a different directory
25 | RESULTS_DIR=/results
26 |
27 | if [ ! -d "$BERT_DIR" ] ; then
28 | echo "Error! $BERT_DIR directory missing. Please mount pretrained BERT dataset."
29 | exit -1
30 | fi
31 | if [ ! -d "$GLUE_DIR" ] ; then
32 |   echo "Error! $GLUE_DIR directory missing. Please mount GLUE dataset."
33 | exit -1
34 | fi
35 | if [ ! -d "$RESULTS_DIR" ] ; then
36 | echo "Error! $RESULTS_DIR directory missing."
37 | exit -1
38 | fi
39 |
40 | echo "GLUE directory set as " $GLUE_DIR " BERT directory set as " $BERT_DIR
41 | echo "Results directory set as " $RESULTS_DIR
42 |
43 | use_fp16=""
44 | if [ "$precision" = "fp16" ] ; then
45 | echo "fp16 activated!"
46 | export TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE=1
47 | use_fp16="--use_fp16"
48 | fi
49 |
50 | if [ "$use_xla" = "true" ] ; then
51 | use_xla_tag="--use_xla"
52 | echo "XLA activated"
53 | else
54 | use_xla_tag=""
55 | fi
56 |
57 | if [ $num_gpu -gt 1 ] ; then
58 | mpi_command="mpirun -np $num_gpu -H localhost:$num_gpu \
59 | --allow-run-as-root -bind-to none -map-by slot \
60 | -x NCCL_DEBUG=INFO \
61 | -x LD_LIBRARY_PATH \
62 | -x PATH -mca pml ob1 -mca btl ^openib"
63 | use_hvd="--horovod"
64 | else
65 | mpi_command=""
66 | use_hvd=""
67 | fi
68 |
69 | export GBS=$(expr $batch_size \* $num_gpu)
70 | printf -v TAG "tf_bert_%s_glue_1n_%s_gbs%d" "$bert_model" "$precision" $GBS
71 | DATESTAMP=`date +'%y%m%d%H%M%S'`
72 | RESULTS_DIR=${RESULTS_DIR}/${TAG}_${DATESTAMP}
73 | mkdir $RESULTS_DIR
74 | LOGFILE=$RESULTS_DIR/$TAG.$DATESTAMP.log
75 | printf "Saving checkpoints to %s\n" "$RESULTS_DIR"
76 | printf "Writing logs to %s\n" "$LOGFILE"
77 |
78 | $mpi_command python run_classifier.py \
79 | --task_name=MRPC \
80 | --do_train=true \
81 | --do_eval=true \
82 | --data_dir=$GLUE_DIR/MRPC \
83 | --vocab_file=$BERT_DIR/vocab.txt \
84 | --bert_config_file=$BERT_DIR/bert_config.json \
85 | --init_checkpoint=$init_checkpoint \
86 | --max_seq_length=$seq_length \
87 | --train_batch_size=$batch_size \
88 | --learning_rate=$learning_rate \
89 | --num_train_epochs=$epochs \
90 | --output_dir=$RESULTS_DIR \
91 |   $use_hvd \
92 |   $use_fp16 \
93 | $use_xla_tag --warmup_proportion=$ws |& tee $LOGFILE
--------------------------------------------------------------------------------
/examples/BERT/scripts/run_pretraining.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | echo "Container nvidia build = " $NVIDIA_BUILD_ID
4 |
5 | WIKI_DIR=/workspace/bert/data/wikipedia_corpus/final_tfrecords_sharded
6 | BOOKS_DIR=/workspace/bert/data/bookcorpus/final_tfrecords_sharded
7 | BERT_CONFIG=/workspace/bert/data/pretrained_models_google/uncased_L-24_H-1024_A-16/bert_config.json
8 |
9 | #Edit to save logs & checkpoints in a different directory
10 | RESULTS_DIR=/results
11 |
12 | if [ ! -d "$WIKI_DIR" ] ; then
13 | echo "Error! $WIKI_DIR directory missing. Please mount wikipedia dataset."
14 | exit -1
15 | else
16 | SOURCES="$WIKI_DIR/*"
17 | fi
18 | if [ ! -d "$BOOKS_DIR" ] ; then
19 | echo "Warning! $BOOKS_DIR directory missing. Training will proceed without book corpus."
20 | else
21 | SOURCES+=" $BOOKS_DIR/*"
22 | fi
23 | if [ ! -d "$RESULTS_DIR" ] ; then
24 | echo "Error! $RESULTS_DIR directory missing."
25 | exit -1
26 | fi
27 |
28 | if [ ! -f "$BERT_CONFIG" ] ; then
29 | echo "Error! BERT large configuration file not found at $BERT_CONFIG"
30 | exit -1
31 | fi
32 |
33 | train_batch_size=${1:-14}
34 | eval_batch_size=${2:-8}
35 | learning_rate=${3:-"1e-4"}
36 | precision=${4:-"manual_fp16"}
37 | use_xla=${5:-"true"}
38 | num_gpus=${6:-1}
39 | warmup_steps=${7:-"10000"}
40 | train_steps=${8:-1144000}
41 | save_checkpoints_steps=${9:-5000}
42 |
43 | PREC=""
44 | if [ "$precision" = "fp16" ] ; then
45 | PREC="--use_fp16"
46 | elif [ "$precision" = "fp32" ] ; then
47 | PREC=""
48 | elif [ "$precision" = "manual_fp16" ] ; then
49 | PREC="--manual_fp16"
50 | else
51 |    echo "Unknown <precision> argument: $precision"
52 | exit -2
53 | fi
54 |
55 | if [ "$use_xla" = "true" ] ; then
56 | PREC="$PREC --use_xla"
57 | echo "XLA activated"
58 | fi
59 |
60 | export GBS=$(expr $train_batch_size \* $num_gpus)
61 | printf -v TAG "tf_bert_pretraining_%s_gbs%d" "$precision" $GBS
62 | DATESTAMP=`date +'%y%m%d%H%M%S'`
63 | RESULTS_DIR=${RESULTS_DIR}/${TAG}_${DATESTAMP}
64 | mkdir -p $RESULTS_DIR && LOGFILE=$RESULTS_DIR/$TAG.$DATESTAMP.log
65 | printf "Saving checkpoints to %s\n" "$RESULTS_DIR"
66 | printf "Logs written to %s\n" "$LOGFILE"
67 |
68 | echo $SOURCES
69 | INPUT_FILES=$(eval ls $SOURCES | tr " " "\n" | awk '{printf "%s,",$1}' | sed s'/.$//')
70 | CMD="python3 /workspace/bert/run_pretraining.py"
71 | CMD+=" --input_file=$INPUT_FILES"
72 | CMD+=" --output_dir=$RESULTS_DIR"
73 | CMD+=" --bert_config_file=$BERT_CONFIG"
74 | CMD+=" --do_train=True"
75 | CMD+=" --do_eval=True"
76 | CMD+=" --train_batch_size=$train_batch_size"
77 | CMD+=" --eval_batch_size=$eval_batch_size"
78 | CMD+=" --max_seq_length=512"
79 | CMD+=" --max_predictions_per_seq=80"
80 | CMD+=" --num_train_steps=$train_steps"
81 | CMD+=" --num_warmup_steps=$warmup_steps"
82 | CMD+=" --save_checkpoints_steps=$save_checkpoints_steps"
83 | CMD+=" --learning_rate=$learning_rate"
84 | CMD+=" --report_loss"
85 | CMD+=" --horovod $PREC"
86 |
87 | if [ $num_gpus -gt 1 ] ; then
88 | CMD="mpiexec --allow-run-as-root -np $num_gpus --bind-to socket $CMD"
89 | fi
90 |
91 |
92 |
93 |
94 | set -x
95 | if [ -z "$LOGFILE" ] ; then
96 | $CMD
97 | else
98 | (
99 | $CMD
100 | ) |& tee $LOGFILE
101 | fi
102 | set +x
103 |
--------------------------------------------------------------------------------
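A note on the INPUT_FILES idiom in run_pretraining.sh above: the `eval ls | tr | awk | sed` pipeline packs every tfrecord shard into one comma-separated string, the format run_pretraining.py's --input_file flag expects. A minimal Python sketch of the same step (paths as in the script; the variable names are illustrative):

    import glob

    # Comma-join every shard under the source directories.
    sources = ["/workspace/bert/data/wikipedia_corpus/final_tfrecords_sharded/*",
               "/workspace/bert/data/bookcorpus/final_tfrecords_sharded/*"]
    input_files = ",".join(p for pattern in sources
                           for p in sorted(glob.glob(pattern)))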
/examples/BERT/scripts/run_squad.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | echo "Container nvidia build = " $NVIDIA_BUILD_ID
4 |
5 | batch_size=${1:-"8"}
6 | learning_rate=${2:-"5e-6"}
7 | precision=${3:-"fp16"}
8 | use_xla=${4:-"true"}
9 | num_gpu=${5:-"8"}
10 | seq_length=${6:-"384"}
11 | doc_stride=${7:-"128"}
12 | bert_model=${8:-"large"}
13 |
14 | if [ "$bert_model" = "large" ] ; then
15 | export BERT_DIR=data/pretrained_models_google/uncased_L-24_H-1024_A-16
16 | else
17 | export BERT_DIR=data/pretrained_models_google/uncased_L-12_H-768_A-12
18 | fi
19 |
20 | squad_version=${9:-"1.1"}
21 |
22 | export SQUAD_DIR=data/squad/v${squad_version}
23 | if [ "$squad_version" = "1.1" ] ; then
24 | version_2_with_negative="False"
25 | else
26 | version_2_with_negative="True"
27 | fi
28 |
29 | init_checkpoint=${10:-"$BERT_DIR/bert_model.ckpt"}
30 | epochs=${11:-"2.0"}
31 |
32 | #Edit to save logs & checkpoints in a different directory
33 | RESULTS_DIR=/results
34 |
35 | if [ ! -d "$SQUAD_DIR" ] ; then
36 | echo "Error! $SQUAD_DIR directory missing. Please mount SQuAD dataset."
37 | exit -1
38 | fi
39 | if [ ! -d "$BERT_DIR" ] ; then
40 | echo "Error! $BERT_DIR directory missing. Please mount pretrained BERT dataset."
41 | exit -1
42 | fi
43 | if [ ! -d "$RESULTS_DIR" ] ; then
44 | echo "Error! $RESULTS_DIR directory missing."
45 | exit -1
46 | fi
47 |
48 | echo "Squad directory set as " $SQUAD_DIR " BERT directory set as " $BERT_DIR
49 | echo "Results directory set as " $RESULTS_DIR
50 |
51 | use_fp16=""
52 | if [ "$precision" = "fp16" ] ; then
53 | echo "fp16 activated!"
54 | export TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE=1
55 | use_fp16="--use_fp16"
56 | fi
57 |
58 | if [ "$use_xla" = "true" ] ; then
59 | use_xla_tag="--use_xla"
60 | echo "XLA activated"
61 | else
62 | use_xla_tag=""
63 | fi
64 |
65 | if [ $num_gpu -gt 1 ] ; then
66 | mpi_command="mpirun -np $num_gpu -H localhost:$num_gpu \
67 | --allow-run-as-root -bind-to none -map-by slot \
68 | -x NCCL_DEBUG=INFO \
69 | -x LD_LIBRARY_PATH \
70 | -x PATH -mca pml ob1 -mca btl ^openib"
71 | use_hvd="--horovod"
72 | else
73 | mpi_command=""
74 | use_hvd=""
75 | fi
76 |
77 |
78 | export GBS=$(expr $batch_size \* $num_gpu)
79 | printf -v TAG "tf_bert_%s_squad_1n_%s_gbs%d" "$bert_model" "$precision" $GBS
80 | DATESTAMP=`date +'%y%m%d%H%M%S'`
81 |
82 | RESULTS_DIR=${RESULTS_DIR}/${TAG}_${DATESTAMP}
83 | mkdir $RESULTS_DIR
84 | LOGFILE=$RESULTS_DIR/$TAG.$DATESTAMP.log
85 | printf "Saving checkpoints to %s\n" "$RESULTS_DIR"
86 | printf "Writing logs to %s\n" "$LOGFILE"
87 |
88 | $mpi_command python run_squad.py \
89 | --vocab_file=$BERT_DIR/vocab.txt \
90 | --bert_config_file=$BERT_DIR/bert_config.json \
91 | --init_checkpoint=$init_checkpoint \
92 | --do_train=True \
93 | --train_file=$SQUAD_DIR/train-v${squad_version}.json \
94 | --do_predict=True \
95 | --predict_file=$SQUAD_DIR/dev-v${squad_version}.json \
96 | --train_batch_size=$batch_size \
97 | --learning_rate=$learning_rate \
98 | --num_train_epochs=$epochs \
99 | --max_seq_length=$seq_length \
100 | --doc_stride=$doc_stride \
101 | --save_checkpoints_steps 1000 \
102 | --output_dir=$RESULTS_DIR \
103 |   $use_hvd \
104 |   $use_fp16 \
105 | $use_xla_tag --version_2_with_negative=${version_2_with_negative} |& tee $LOGFILE
106 |
107 | python $SQUAD_DIR/evaluate-v${squad_version}.py $SQUAD_DIR/dev-v${squad_version}.json ${RESULTS_DIR}/predictions.json |& tee -a $LOGFILE
108 |
--------------------------------------------------------------------------------
/examples/BERT/scripts/run_squad_inference.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | echo "Container nvidia build = " $NVIDIA_BUILD_ID
4 |
5 | init_checkpoint=${1:-"/results/model.ckpt"}
6 | batch_size=${2:-"8"}
7 | precision=${3:-"fp16"}
8 | use_xla=${4:-"true"}
9 | seq_length=${5:-"384"}
10 | doc_stride=${6:-"128"}
11 | bert_model=${7:-"large"}
12 | squad_version=${8:-"1.1"}
13 |
14 | if [ "$bert_model" = "large" ] ; then
15 | export BERT_DIR=data/pretrained_models_google/uncased_L-24_H-1024_A-16
16 | else
17 | export BERT_DIR=data/pretrained_models_google/uncased_L-12_H-768_A-12
18 | fi
19 |
20 | export SQUAD_DIR=data/squad/v${squad_version}
21 | if [ "$squad_version" = "1.1" ] ; then
22 | version_2_with_negative="False"
23 | else
24 | version_2_with_negative="True"
25 | fi
26 |
27 | #Edit to save logs & checkpoints in a different directory
28 | RESULTS_DIR=/results
29 |
30 | if [ ! -d "$SQUAD_DIR" ] ; then
31 | echo "Error! $SQUAD_DIR directory missing. Please mount SQuAD dataset."
32 | exit -1
33 | fi
34 | if [ ! -d "$BERT_DIR" ] ; then
35 | echo "Error! $BERT_DIR directory missing. Please mount pretrained BERT dataset."
36 | exit -1
37 | fi
38 | if [ ! -d "$RESULTS_DIR" ] ; then
39 | echo "Error! $RESULTS_DIR directory missing."
40 | exit -1
41 | fi
42 |
43 | echo "Squad directory set as " $SQUAD_DIR " BERT directory set as " $BERT_DIR
44 | echo "Results directory set as " $RESULTS_DIR
45 |
46 | use_fp16=""
47 | if [ "$precision" = "fp16" ] ; then
48 | echo "fp16 activated!"
49 | export TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE=1
50 | use_fp16="--use_fp16"
51 | fi
52 |
53 | if [ "$use_xla" = "true" ] ; then
54 | use_xla_tag="--use_xla"
55 | echo "XLA activated"
56 | else
57 | use_xla_tag=""
58 | fi
59 |
60 | printf -v TAG "tf_bert_%s_squad_inf_1n_%s_gbs%d_ckpt_%s" "$bert_model" "$precision" $batch_size "$init_checkpoint"
61 | DATESTAMP=`date +'%y%m%d%H%M%S'`
62 | LOGFILE=$RESULTS_DIR/$TAG.$DATESTAMP.log
63 | printf "Writing logs to %s\n" "$LOGFILE"
64 |
65 | python run_squad.py \
66 | --vocab_file=$BERT_DIR/vocab.txt \
67 | --bert_config_file=$BERT_DIR/bert_config.json \
68 | --init_checkpoint=$init_checkpoint \
69 | --do_predict=True \
70 | --predict_file=$SQUAD_DIR/dev-v${squad_version}.json \
71 | --max_seq_length=$seq_length \
72 | --doc_stride=$doc_stride \
73 | --predict_batch_size=$batch_size \
74 | --output_dir=$RESULTS_DIR \
75 |   $use_fp16 \
76 | $use_xla_tag --version_2_with_negative=${version_2_with_negative}
77 |
78 | python $SQUAD_DIR/evaluate-v${squad_version}.py $SQUAD_DIR/dev-v${squad_version}.json $RESULTS_DIR/predictions.json
79 |
--------------------------------------------------------------------------------
/examples/BERT/scripts/trtis/export_model.sh:
--------------------------------------------------------------------------------
1 | init_checkpoint=${1:-"/results/model.ckpt"}
2 | batch_size=${2:-"8"}
3 | precision=${3:-"fp16"}
4 | use_xla=${4:-"true"}
5 | seq_length=${5:-"384"}
6 | doc_stride=${6:-"128"}
7 | BERT_DIR=${7:-"data/pretrained_models_google/uncased_L-24_H-1024_A-16"}
8 | trtis_model_version=${8:-1}
9 | trtis_model_name=${9:-"bert"}
10 | trtis_dyn_batching_delay=${10:-0}
11 | trtis_engine_count=${11:-1}
12 | trtis_model_overwrite=${12:-"False"}
13 |
14 | additional_args="--trtis_model_version=$trtis_model_version --trtis_model_name=$trtis_model_name --trtis_max_batch_size=$batch_size \
15 | --trtis_model_overwrite=$trtis_model_overwrite --trtis_dyn_batching_delay=$trtis_dyn_batching_delay \
16 | --trtis_engine_count=$trtis_engine_count"
17 |
18 | if [ "$precision" = "fp16" ] ; then
19 | echo "fp16 activated!"
20 | export TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE=1
21 | additional_args="$additional_args --use_fp16"
22 | fi
23 |
24 | if [ "$use_xla" = "true" ] ; then
25 | echo "XLA activated"
26 | additional_args="$additional_args --use_xla"
27 | fi
28 |
29 | echo "Additional args: $additional_args"
30 |
31 | bash scripts/docker/launch.sh \
32 | python run_squad.py \
33 | --vocab_file=${BERT_DIR}/vocab.txt \
34 | --bert_config_file=${BERT_DIR}/bert_config.json \
35 | --init_checkpoint=${init_checkpoint} \
36 | --max_seq_length=${seq_length} \
37 | --doc_stride=${doc_stride} \
38 | --predict_batch_size=${batch_size} \
39 | --output_dir=/results \
40 | --export_trtis=True \
41 | ${additional_args}
42 |
43 |
44 |
45 |
--------------------------------------------------------------------------------
/examples/BERT/scripts/trtis/generate_figures.sh:
--------------------------------------------------------------------------------
1 | # Set the number of devices to use
2 | export NVIDIA_VISIBLE_DEVICES=0
3 |
4 | # Always overwrite models to keep memory use low
5 | export TRTIS_MODEL_OVERWRITE=True
6 |
7 | bert_model=${1:-small}
8 | seq_length=${2:-128}
9 | precision=${3:-fp16}
10 | init_checkpoint=${4:-"/results/models/bert_tf_${bert_model}_${precision}_${seq_length}_v1/model.ckpt-5474"}
11 |
12 | MODEL_NAME="bert_${bert_model}_${seq_length}_${precision}"
13 |
14 | if [ "$bert_model" = "large" ] ; then
15 | export BERT_DIR=data/pretrained_models_google/uncased_L-24_H-1024_A-16
16 | else
17 | export BERT_DIR=data/pretrained_models_google/uncased_L-12_H-768_A-12
18 | fi
19 |
20 | doc_stride=128
21 | use_xla=true
22 | EXPORT_MODEL_ARGS="${precision} ${use_xla} ${seq_length} ${doc_stride} ${BERT_DIR} 1 ${MODEL_NAME}"
23 | PERF_CLIENT_ARGS="1000 10 20 localhost"
24 |
25 | # Start Server
26 | ./scripts/docker/launch_server.sh $precision
27 |
28 | # Restart Server
29 | restart_server() {
30 | docker kill trt_server_cont
31 | ./scripts/docker/launch_server.sh $precision
32 | }
33 |
34 | ############## Dynamic Batching Comparison ##############
35 | SERVER_BATCH_SIZE=8
36 | CLIENT_BATCH_SIZE=1
37 | TRTIS_ENGINE_COUNT=1
38 |
39 | # Dynamic batching 10 ms
40 | TRTIS_DYN_BATCHING_DELAY=10
41 | ./scripts/trtis/export_model.sh ${init_checkpoint} ${SERVER_BATCH_SIZE} ${EXPORT_MODEL_ARGS} ${TRTIS_DYN_BATCHING_DELAY} ${TRTIS_ENGINE_COUNT} ${TRTIS_MODEL_OVERWRITE}
42 | restart_server
43 | sleep 15
44 | ./scripts/trtis/run_perf_client.sh ${MODEL_NAME} 1 ${precision} ${CLIENT_BATCH_SIZE} ${PERF_CLIENT_ARGS}
45 |
46 | # Dynamic batching 5 ms
47 | TRTIS_DYN_BATCHING_DELAY=5
48 | ./scripts/trtis/export_model.sh ${init_checkpoint} ${SERVER_BATCH_SIZE} ${EXPORT_MODEL_ARGS} ${TRTIS_DYN_BATCHING_DELAY} ${TRTIS_ENGINE_COUNT} ${TRTIS_MODEL_OVERWRITE}
49 | restart_server
50 | sleep 15
51 | ./scripts/trtis/run_perf_client.sh ${MODEL_NAME} 1 ${precision} ${CLIENT_BATCH_SIZE} ${PERF_CLIENT_ARGS}
52 |
53 | # Dynamic batching 2 ms
54 | TRTIS_DYN_BATCHING_DELAY=2
55 | ./scripts/trtis/export_model.sh ${init_checkpoint} ${SERVER_BATCH_SIZE} ${EXPORT_MODEL_ARGS} ${TRTIS_DYN_BATCHING_DELAY} ${TRTIS_ENGINE_COUNT} ${TRTIS_MODEL_OVERWRITE}
56 | restart_server
57 | sleep 15
58 | ./scripts/trtis/run_perf_client.sh ${MODEL_NAME} 1 ${precision} ${CLIENT_BATCH_SIZE} ${PERF_CLIENT_ARGS}
59 |
60 |
61 | # Static Batching (i.e. Dynamic batching 0 ms)
62 | TRTIS_DYN_BATCHING_DELAY=0
63 | ./scripts/trtis/export_model.sh ${init_checkpoint} ${SERVER_BATCH_SIZE} ${EXPORT_MODEL_ARGS} ${TRTIS_DYN_BATCHING_DELAY} ${TRTIS_ENGINE_COUNT} ${TRTIS_MODEL_OVERWRITE}
64 | restart_server
65 | sleep 15
66 | ./scripts/trtis/run_perf_client.sh ${MODEL_NAME} 1 ${precision} ${CLIENT_BATCH_SIZE} ${PERF_CLIENT_ARGS}
67 |
68 |
69 | # ############## Engine Count Comparison ##############
70 | SERVER_BATCH_SIZE=1
71 | CLIENT_BATCH_SIZE=1
72 | TRTIS_DYN_BATCHING_DELAY=0
73 |
74 | # Engine Count = 4
75 | TRTIS_ENGINE_COUNT=4
76 | ./scripts/trtis/export_model.sh ${init_checkpoint} ${SERVER_BATCH_SIZE} ${EXPORT_MODEL_ARGS} ${TRTIS_DYN_BATCHING_DELAY} ${TRTIS_ENGINE_COUNT} ${TRTIS_MODEL_OVERWRITE}
77 | restart_server
78 | sleep 15
79 | ./scripts/trtis/run_perf_client.sh ${MODEL_NAME} 1 ${precision} ${CLIENT_BATCH_SIZE} ${PERF_CLIENT_ARGS}
80 |
81 | # Engine Count = 2
82 | TRTIS_ENGINE_COUNT=2
83 | ./scripts/trtis/export_model.sh ${init_checkpoint} ${SERVER_BATCH_SIZE} ${EXPORT_MODEL_ARGS} ${TRTIS_DYN_BATCHING_DELAY} ${TRTIS_ENGINE_COUNT} ${TRTIS_MODEL_OVERWRITE}
84 | restart_server
85 | sleep 15
86 | ./scripts/trtis/run_perf_client.sh ${MODEL_NAME} 1 ${precision} ${CLIENT_BATCH_SIZE} ${PERF_CLIENT_ARGS}
87 |
88 | # Engine Count = 1
89 | TRTIS_ENGINE_COUNT=1
90 | ./scripts/trtis/export_model.sh ${init_checkpoint} ${SERVER_BATCH_SIZE} ${EXPORT_MODEL_ARGS} ${TRTIS_DYN_BATCHING_DELAY} ${TRTIS_ENGINE_COUNT} ${TRTIS_MODEL_OVERWRITE}
91 | restart_server
92 | sleep 15
93 | ./scripts/trtis/run_perf_client.sh ${MODEL_NAME} 1 ${precision} ${CLIENT_BATCH_SIZE} ${PERF_CLIENT_ARGS}
94 |
95 |
96 | ############## Batch Size Comparison ##############
97 | # BATCH=1 Generate model and perf
98 | SERVER_BATCH_SIZE=1
99 | CLIENT_BATCH_SIZE=1
100 | TRTIS_ENGINE_COUNT=1
101 | TRTIS_DYN_BATCHING_DELAY=0
102 |
103 | ./scripts/trtis/export_model.sh ${init_checkpoint} ${SERVER_BATCH_SIZE} ${EXPORT_MODEL_ARGS} ${TRTIS_DYN_BATCHING_DELAY} ${TRTIS_ENGINE_COUNT} ${TRTIS_MODEL_OVERWRITE}
104 | restart_server
105 | sleep 15
106 | ./scripts/trtis/run_perf_client.sh ${MODEL_NAME} 1 ${precision} ${CLIENT_BATCH_SIZE} 1000 10 64 localhost
107 |
108 | # BATCH=2 Generate model and perf
109 | SERVER_BATCH_SIZE=2
110 | CLIENT_BATCH_SIZE=2
111 | ./scripts/trtis/export_model.sh ${init_checkpoint} ${SERVER_BATCH_SIZE} ${EXPORT_MODEL_ARGS} ${TRTIS_DYN_BATCHING_DELAY} ${TRTIS_ENGINE_COUNT} ${TRTIS_MODEL_OVERWRITE}
112 | restart_server
113 | sleep 15
114 | ./scripts/trtis/run_perf_client.sh ${MODEL_NAME} 1 ${precision} ${CLIENT_BATCH_SIZE} 1000 10 32 localhost
115 |
116 | # BATCH=4 Generate model and perf
117 | SERVER_BATCH_SIZE=4
118 | CLIENT_BATCH_SIZE=4
119 | ./scripts/trtis/export_model.sh ${init_checkpoint} ${SERVER_BATCH_SIZE} ${EXPORT_MODEL_ARGS} ${TRTIS_DYN_BATCHING_DELAY} ${TRTIS_ENGINE_COUNT} ${TRTIS_MODEL_OVERWRITE}
120 | restart_server
121 | sleep 15
122 | ./scripts/trtis/run_perf_client.sh ${MODEL_NAME} 1 ${precision} ${CLIENT_BATCH_SIZE} 1000 10 16 localhost
123 |
124 | # BATCH=8 Generate model and perf
125 | SERVER_BATCH_SIZE=8
126 | CLIENT_BATCH_SIZE=8
127 | ./scripts/trtis/export_model.sh ${init_checkpoint} ${SERVER_BATCH_SIZE} ${EXPORT_MODEL_ARGS} ${TRTIS_DYN_BATCHING_DELAY} ${TRTIS_ENGINE_COUNT} ${TRTIS_MODEL_OVERWRITE}
128 | restart_server
129 | sleep 15
130 | ./scripts/trtis/run_perf_client.sh ${MODEL_NAME} 1 ${precision} ${CLIENT_BATCH_SIZE} 1000 10 8 localhost
131 |
132 |
--------------------------------------------------------------------------------
/examples/BERT/scripts/trtis/run_client.sh:
--------------------------------------------------------------------------------
1 | batch_size=${1:-"8"}
2 | seq_length=${2:-"384"}
3 | doc_stride=${3:-"128"}
4 | trtis_version_name=${4:-"1"}
5 | trtis_model_name=${5:-"bert"}
6 | BERT_DIR=${6:-"data/pretrained_models_google/uncased_L-24_H-1024_A-16"}
7 | squad_version=${7:-"1.1"}
8 |
9 | export SQUAD_DIR=data/squad/v${squad_version}
10 | if [ "$squad_version" = "1.1" ] ; then
11 | version_2_with_negative="False"
12 | else
13 | version_2_with_negative="True"
14 | fi
15 |
16 | echo "Squad directory set as " $SQUAD_DIR
17 | if [ ! -d "$SQUAD_DIR" ] ; then
18 | echo "Error! $SQUAD_DIR directory missing. Please mount SQuAD dataset."
19 | exit -1
20 | fi
21 |
22 | bash scripts/docker/launch.sh \
23 | "python run_squad_trtis_client.py \
24 | --trtis_model_name=$trtis_model_name \
25 | --trtis_model_version=$trtis_version_name \
26 | --vocab_file=$BERT_DIR/vocab.txt \
27 | --bert_config_file=$BERT_DIR/bert_config.json \
28 | --predict_file=$SQUAD_DIR/dev-v${squad_version}.json \
29 | --predict_batch_size=$batch_size \
30 | --max_seq_length=${seq_length} \
31 | --doc_stride=${doc_stride} \
32 | --output_dir=/results \
33 | --version_2_with_negative=${version_2_with_negative}"
34 |
35 | bash scripts/docker/launch.sh "python $SQUAD_DIR/evaluate-v${squad_version}.py \
36 | $SQUAD_DIR/dev-v${squad_version}.json /results/predictions.json"
37 |
--------------------------------------------------------------------------------
/examples/BERT/scripts/trtis/run_perf_client.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 |
4 | MODEL_NAME=${1:-"bert"}
5 | MODEL_VERSION=${2:-1}
6 | precision=${3:-"fp16"}
7 | BATCH_SIZE=${4:-1}
8 | MAX_LATENCY=${5:-500}
9 | MAX_CLIENT_THREADS=${6:-10}
10 | MAX_CONCURRENCY=${7:-50}
11 | SERVER_HOSTNAME=${8:-"localhost"}
12 |
13 | if [[ $SERVER_HOSTNAME == *":"* ]]; then
14 | echo "ERROR! Do not include the port when passing the Server Hostname. These scripts require that the TRTIS HTTP endpoint is on Port 8000 and the gRPC endpoint is on Port 8001. Exiting..."
15 | exit 1
16 | fi
17 |
18 | if [ "$SERVER_HOSTNAME" = "localhost" ]
19 | then
20 | if [ ! "$(docker inspect -f "{{.State.Running}}" trt_server_cont)" = "true" ] ; then
21 |
22 | echo "Launching TRTIS server"
23 | bash scripts/docker/launch_server.sh $precision
24 | SERVER_LAUNCHED=true
25 |
26 | function cleanup_server {
27 | echo "Killing TRTIS server"
28 | docker kill trt_server_cont
29 | }
30 |
31 | # Ensure we cleanup the server on exit
32 | # trap "exit" INT TERM
33 | trap cleanup_server EXIT
34 | fi
35 | fi
36 |
37 | # Wait until the server is up: curl its health endpoints and sleep until it's ready
38 | bash scripts/trtis/wait_for_trtis_server.sh $SERVER_HOSTNAME
39 |
40 | TIMESTAMP=$(date "+%y%m%d_%H%M")
41 |
42 | bash scripts/docker/launch.sh mkdir -p /results/perf_client/${MODEL_NAME}
43 | OUTPUT_FILE_CSV="/results/perf_client/${MODEL_NAME}/results_${TIMESTAMP}.csv"
44 |
45 | ARGS="\
46 | --max-threads ${MAX_CLIENT_THREADS} \
47 | -m ${MODEL_NAME} \
48 | -x ${MODEL_VERSION} \
49 | -p 3000 \
50 | -d \
51 | -v \
52 | -i gRPC \
53 | -u ${SERVER_HOSTNAME}:8001 \
54 | -b ${BATCH_SIZE} \
55 | -l ${MAX_LATENCY} \
56 | -c ${MAX_CONCURRENCY} \
57 | -f ${OUTPUT_FILE_CSV}"
58 |
59 | echo "Using args: $(echo "$ARGS" | sed -e 's/ -/\n-/g')"
60 |
61 | bash scripts/docker/launch.sh /workspace/build/perf_client $ARGS
62 |
--------------------------------------------------------------------------------
/examples/BERT/scripts/trtis/run_trtis.sh:
--------------------------------------------------------------------------------
1 | init_checkpoint=${1:-"/results/model.ckpt"}
2 | batch_size=${2:-"8"}
3 | precision=${3:-"fp16"}
4 | use_xla=${4:-"true"}
5 | seq_length=${5:-"384"}
6 | doc_stride=${6:-"128"}
7 | bert_model=${7:-"large"}
8 | squad_version=${8:-"1.1"}
9 | trtis_version_name=${9:-1}
10 | trtis_model_name=${10:-"bert"}
11 | trtis_export_model=${11:-"true"}
12 | trtis_dyn_batching_delay=${12:-0}
13 | trtis_engine_count=${13:-1}
14 | trtis_model_overwrite=${14:-"False"}
15 |
16 | if [ "$bert_model" = "large" ] ; then
17 | export BERT_DIR=data/pretrained_models_google/uncased_L-24_H-1024_A-16
18 | else
19 | export BERT_DIR=data/pretrained_models_google/uncased_L-12_H-768_A-12
20 | fi
21 |
22 | if [ ! -d "$BERT_DIR" ] ; then
23 | echo "Error! $BERT_DIR directory missing. Please mount pretrained BERT dataset."
24 | exit -1
25 | fi
26 |
27 | # Need to ignore case on some variables
28 | trtis_export_model=$(echo "$trtis_export_model" | tr '[:upper:]' '[:lower:]')
29 |
30 | # Explicitly save this variable to pass down to new containers
31 | NV_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES:-"all"}
32 |
33 | echo " BERT directory set as " $BERT_DIR
34 | echo
35 | echo "Arguments:"
36 | echo " init_checkpoint = $init_checkpoint"
37 | echo " batch_size = $batch_size"
38 | echo " precision = $precision"
39 | echo " use_xla = $use_xla"
40 | echo " seq_length = $seq_length"
41 | echo " doc_stride = $doc_stride"
42 | echo " bert_model = $bert_model"
43 | echo " squad_version = $squad_version"
44 | echo " version_name = $trtis_version_name"
45 | echo " model_name = $trtis_model_name"
46 | echo " export_model = $trtis_export_model"
47 | echo
48 | echo "Env: "
49 | echo " NVIDIA_VISIBLE_DEVICES = $NV_VISIBLE_DEVICES"
50 | echo
51 |
52 | # Export Model in SavedModel format if enabled
53 | if [ "$trtis_export_model" = "true" ] ; then
54 | echo "Exporting model as: Name - $trtis_model_name Version - $trtis_version_name"
55 |
56 |    bash scripts/trtis/export_model.sh $init_checkpoint $batch_size $precision $use_xla $seq_length \
57 |       $doc_stride $BERT_DIR $trtis_version_name $trtis_model_name \
58 |       $trtis_dyn_batching_delay $trtis_engine_count $trtis_model_overwrite
59 | fi
60 |
61 | # Start TRTIS server in detached state
62 | bash scripts/docker/launch_server.sh $precision
63 |
64 | # Wait until the server is up: curl its health endpoints and sleep until it's ready
65 | bash scripts/trtis/wait_for_trtis_server.sh localhost
66 |
67 | # Start TRTIS client for inference and evaluate results
68 | bash scripts/trtis/run_client.sh $batch_size $seq_length $doc_stride $trtis_version_name $trtis_model_name \
69 | $BERT_DIR $squad_version
70 |
71 |
72 | #Kill the TRTIS Server
73 | docker kill trt_server_cont
74 |
--------------------------------------------------------------------------------
/examples/BERT/scripts/trtis/wait_for_trtis_server.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | SERVER_URI=${1:-"localhost"}
4 |
5 | echo "Waiting for TRTIS Server to be ready at http://$SERVER_URI:8000..."
6 |
7 | live_command="curl -m 1 -L -s -o /dev/null -w %{http_code} http://$SERVER_URI:8000/api/health/live"
8 | ready_command="curl -m 1 -L -s -o /dev/null -w %{http_code} http://$SERVER_URI:8000/api/health/ready"
9 |
10 | current_status=$($live_command)
11 |
12 | # First check liveness; if that passes, check readiness. If either check fails, keep looping.
13 | while [[ ${current_status} != "200" ]] || [[ $($ready_command) != "200" ]]; do
14 |
15 | printf "."
16 | sleep 1
17 | current_status=$($live_command)
18 | done
19 |
20 | echo "TRTIS Server is ready!"
--------------------------------------------------------------------------------
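For reference, a hedged Python sketch of the same polling loop as wait_for_trtis_server.sh, hitting the TRTIS HTTP health endpoints the script curls on port 8000 (the `requests` dependency and the function name are assumptions, not part of the repo):

    import time
    import requests

    def wait_for_trtis(host="localhost", port=8000):
        # Poll /api/health/live and /api/health/ready until both return 200.
        base = "http://%s:%d/api/health" % (host, port)
        while True:
            try:
                live = requests.get(base + "/live", timeout=1).status_code
                ready = requests.get(base + "/ready", timeout=1).status_code
                if live == 200 and ready == 200:
                    return
            except requests.RequestException:
                pass
            time.sleep(1)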
/examples/BERT/tokenization.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2018 The Google AI Language Team Authors.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | """Tokenization classes."""
16 |
17 | from __future__ import absolute_import
18 | from __future__ import division
19 | from __future__ import print_function
20 |
21 | import collections
22 | import unicodedata
23 | import six
24 | import tensorflow as tf
25 | import re
26 |
27 | def validate_case_matches_checkpoint(do_lower_case, init_checkpoint):
28 | """Checks whether the casing config is consistent with the checkpoint name."""
29 |
30 | # The casing has to be passed in by the user and there is no explicit check
31 | # as to whether it matches the checkpoint. The casing information probably
32 | # should have been stored in the bert_config.json file, but it's not, so
33 | # we have to heuristically detect it to validate.
34 |
35 | if not init_checkpoint:
36 | return
37 |
38 | m = re.match("^.*?([A-Za-z0-9_-]+)/bert_model.ckpt", init_checkpoint)
39 | if m is None:
40 | return
41 |
42 | model_name = m.group(1)
43 |
44 | lower_models = [
45 | "uncased_L-24_H-1024_A-16", "uncased_L-12_H-768_A-12",
46 | "multilingual_L-12_H-768_A-12", "chinese_L-12_H-768_A-12"
47 | ]
48 |
49 | cased_models = [
50 | "cased_L-12_H-768_A-12", "cased_L-24_H-1024_A-16",
51 | "multi_cased_L-12_H-768_A-12"
52 | ]
53 |
54 | is_bad_config = False
55 | if model_name in lower_models and not do_lower_case:
56 | is_bad_config = True
57 | actual_flag = "False"
58 | case_name = "lowercased"
59 | opposite_flag = "True"
60 |
61 | if model_name in cased_models and do_lower_case:
62 | is_bad_config = True
63 | actual_flag = "True"
64 | case_name = "cased"
65 | opposite_flag = "False"
66 |
67 | if is_bad_config:
68 | raise ValueError(
69 | "You passed in `--do_lower_case=%s` with `--init_checkpoint=%s`. "
70 | "However, `%s` seems to be a %s model, so you "
71 | "should pass in `--do_lower_case=%s` so that the fine-tuning matches "
72 |         "how the model was pre-trained. If this error is wrong, please "
73 | "just comment out this check." % (actual_flag, init_checkpoint,
74 | model_name, case_name, opposite_flag))
75 |
76 |
77 |
78 | def convert_to_unicode(text):
79 | """Converts `text` to Unicode (if it's not already), assuming utf-8 input."""
80 | if six.PY3:
81 | if isinstance(text, str):
82 | return text
83 | elif isinstance(text, bytes):
84 | return text.decode("utf-8", "ignore")
85 | else:
86 | raise ValueError("Unsupported string type: %s" % (type(text)))
87 | elif six.PY2:
88 | if isinstance(text, str):
89 | return text.decode("utf-8", "ignore")
90 | elif isinstance(text, unicode):
91 | return text
92 | else:
93 | raise ValueError("Unsupported string type: %s" % (type(text)))
94 | else:
95 |     raise ValueError("Not running on Python 2 or Python 3?")
96 |
97 |
98 | def printable_text(text):
99 | """Returns text encoded in a way suitable for print or `tf.logging`."""
100 |
101 | # These functions want `str` for both Python2 and Python3, but in one case
102 | # it's a Unicode string and in the other it's a byte string.
103 | if six.PY3:
104 | if isinstance(text, str):
105 | return text
106 | elif isinstance(text, bytes):
107 | return text.decode("utf-8", "ignore")
108 | else:
109 | raise ValueError("Unsupported string type: %s" % (type(text)))
110 | elif six.PY2:
111 | if isinstance(text, str):
112 | return text
113 | elif isinstance(text, unicode):
114 | return text.encode("utf-8")
115 | else:
116 | raise ValueError("Unsupported string type: %s" % (type(text)))
117 | else:
118 |     raise ValueError("Not running on Python 2 or Python 3?")
119 |
120 |
121 | def load_vocab(vocab_file):
122 | """Loads a vocabulary file into a dictionary."""
123 | vocab = collections.OrderedDict()
124 | index = 0
125 | with tf.gfile.GFile(vocab_file, "r") as reader:
126 | while True:
127 | token = convert_to_unicode(reader.readline())
128 | if not token:
129 | break
130 | token = token.strip()
131 | vocab[token] = index
132 | index += 1
133 | return vocab
134 |
135 |
136 | def convert_by_vocab(vocab, items):
137 | """Converts a sequence of [tokens|ids] using the vocab."""
138 | output = []
139 | for item in items:
140 | output.append(vocab[item])
141 | return output
142 |
143 |
144 | def convert_tokens_to_ids(vocab, tokens):
145 | return convert_by_vocab(vocab, tokens)
146 |
147 |
148 | def convert_ids_to_tokens(inv_vocab, ids):
149 | return convert_by_vocab(inv_vocab, ids)
150 |
151 |
152 | def whitespace_tokenize(text):
153 | """Runs basic whitespace cleaning and splitting on a piece of text."""
154 | text = text.strip()
155 | if not text:
156 | return []
157 | tokens = text.split()
158 | return tokens
159 |
160 |
161 | class FullTokenizer(object):
162 |   """Runs end-to-end tokenization."""
163 |
164 | def __init__(self, vocab_file, do_lower_case=True):
165 | self.vocab = load_vocab(vocab_file)
166 | self.inv_vocab = {v: k for k, v in self.vocab.items()}
167 | self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case)
168 | self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab)
169 |
170 | def tokenize(self, text):
171 | split_tokens = []
172 | for token in self.basic_tokenizer.tokenize(text):
173 | for sub_token in self.wordpiece_tokenizer.tokenize(token):
174 | split_tokens.append(sub_token)
175 |
176 | return split_tokens
177 |
178 | def convert_tokens_to_ids(self, tokens):
179 | return convert_by_vocab(self.vocab, tokens)
180 |
181 | def convert_ids_to_tokens(self, ids):
182 | return convert_by_vocab(self.inv_vocab, ids)
183 |
184 |
185 | class BasicTokenizer(object):
186 | """Runs basic tokenization (punctuation splitting, lower casing, etc.)."""
187 |
188 | def __init__(self, do_lower_case=True):
189 | """Constructs a BasicTokenizer.
190 |
191 | Args:
192 | do_lower_case: Whether to lower case the input.
193 | """
194 | self.do_lower_case = do_lower_case
195 |
196 | def tokenize(self, text):
197 | """Tokenizes a piece of text."""
198 | text = convert_to_unicode(text)
199 | text = self._clean_text(text)
200 |
201 | # This was added on November 1st, 2018 for the multilingual and Chinese
202 | # models. This is also applied to the English models now, but it doesn't
203 | # matter since the English models were not trained on any Chinese data
204 | # and generally don't have any Chinese data in them (there are Chinese
205 | # characters in the vocabulary because Wikipedia does have some Chinese
206 | # words in the English Wikipedia.).
207 | text = self._tokenize_chinese_chars(text)
208 |
209 | orig_tokens = whitespace_tokenize(text)
210 | split_tokens = []
211 | for token in orig_tokens:
212 | if self.do_lower_case:
213 | token = token.lower()
214 | token = self._run_strip_accents(token)
215 | split_tokens.extend(self._run_split_on_punc(token))
216 |
217 | output_tokens = whitespace_tokenize(" ".join(split_tokens))
218 | return output_tokens
219 |
220 | def _run_strip_accents(self, text):
221 | """Strips accents from a piece of text."""
222 | text = unicodedata.normalize("NFD", text)
223 | output = []
224 | for char in text:
225 | cat = unicodedata.category(char)
226 | if cat == "Mn":
227 | continue
228 | output.append(char)
229 | return "".join(output)
230 |
231 | def _run_split_on_punc(self, text):
232 | """Splits punctuation on a piece of text."""
233 | chars = list(text)
234 | i = 0
235 | start_new_word = True
236 | output = []
237 | while i < len(chars):
238 | char = chars[i]
239 | if _is_punctuation(char):
240 | output.append([char])
241 | start_new_word = True
242 | else:
243 | if start_new_word:
244 | output.append([])
245 | start_new_word = False
246 | output[-1].append(char)
247 | i += 1
248 |
249 | return ["".join(x) for x in output]
250 |
251 | def _tokenize_chinese_chars(self, text):
252 | """Adds whitespace around any CJK character."""
253 | output = []
254 | for char in text:
255 | cp = ord(char)
256 | if self._is_chinese_char(cp):
257 | output.append(" ")
258 | output.append(char)
259 | output.append(" ")
260 | else:
261 | output.append(char)
262 | return "".join(output)
263 |
264 | def _is_chinese_char(self, cp):
265 | """Checks whether CP is the codepoint of a CJK character."""
266 | # This defines a "chinese character" as anything in the CJK Unicode block:
267 | # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)
268 | #
269 | # Note that the CJK Unicode block is NOT all Japanese and Korean characters,
270 | # despite its name. The modern Korean Hangul alphabet is a different block,
271 | # as is Japanese Hiragana and Katakana. Those alphabets are used to write
272 | # space-separated words, so they are not treated specially and handled
273 |   # like all of the other languages.
274 | if ((cp >= 0x4E00 and cp <= 0x9FFF) or #
275 | (cp >= 0x3400 and cp <= 0x4DBF) or #
276 | (cp >= 0x20000 and cp <= 0x2A6DF) or #
277 | (cp >= 0x2A700 and cp <= 0x2B73F) or #
278 | (cp >= 0x2B740 and cp <= 0x2B81F) or #
279 | (cp >= 0x2B820 and cp <= 0x2CEAF) or
280 | (cp >= 0xF900 and cp <= 0xFAFF) or #
281 | (cp >= 0x2F800 and cp <= 0x2FA1F)): #
282 | return True
283 |
284 | return False
285 |
286 | def _clean_text(self, text):
287 | """Performs invalid character removal and whitespace cleanup on text."""
288 | output = []
289 | for char in text:
290 | cp = ord(char)
291 | if cp == 0 or cp == 0xfffd or _is_control(char):
292 | continue
293 | if _is_whitespace(char):
294 | output.append(" ")
295 | else:
296 | output.append(char)
297 | return "".join(output)
298 |
299 |
300 | class WordpieceTokenizer(object):
301 |   """Runs WordPiece tokenization."""
302 |
303 | def __init__(self, vocab, unk_token="[UNK]", max_input_chars_per_word=200):
304 | self.vocab = vocab
305 | self.unk_token = unk_token
306 | self.max_input_chars_per_word = max_input_chars_per_word
307 |
308 | def tokenize(self, text):
309 | """Tokenizes a piece of text into its word pieces.
310 |
311 | This uses a greedy longest-match-first algorithm to perform tokenization
312 | using the given vocabulary.
313 |
314 | For example:
315 | input = "unaffable"
316 | output = ["un", "##aff", "##able"]
317 |
318 | Args:
319 | text: A single token or whitespace separated tokens. This should have
320 |         already been passed through `BasicTokenizer`.
321 |
322 | Returns:
323 | A list of wordpiece tokens.
324 | """
325 |
326 | text = convert_to_unicode(text)
327 |
328 | output_tokens = []
329 | for token in whitespace_tokenize(text):
330 | chars = list(token)
331 | if len(chars) > self.max_input_chars_per_word:
332 | output_tokens.append(self.unk_token)
333 | continue
334 |
335 | is_bad = False
336 | start = 0
337 | sub_tokens = []
338 | while start < len(chars):
339 | end = len(chars)
340 | cur_substr = None
341 | while start < end:
342 | substr = "".join(chars[start:end])
343 | if start > 0:
344 | substr = "##" + substr
345 | if substr in self.vocab:
346 | cur_substr = substr
347 | break
348 | end -= 1
349 | if cur_substr is None:
350 | is_bad = True
351 | break
352 | sub_tokens.append(cur_substr)
353 | start = end
354 |
355 | if is_bad:
356 | output_tokens.append(self.unk_token)
357 | else:
358 | output_tokens.extend(sub_tokens)
359 | return output_tokens
360 |
361 |
362 | def _is_whitespace(char):
363 |   """Checks whether `char` is a whitespace character."""
364 |   # \t, \n, and \r are technically control characters but we treat them
365 |   # as whitespace since they are generally considered as such.
366 | if char == " " or char == "\t" or char == "\n" or char == "\r":
367 | return True
368 | cat = unicodedata.category(char)
369 | if cat == "Zs":
370 | return True
371 | return False
372 |
373 |
374 | def _is_control(char):
375 |   """Checks whether `char` is a control character."""
376 | # These are technically control characters but we count them as whitespace
377 | # characters.
378 | if char == "\t" or char == "\n" or char == "\r":
379 | return False
380 | cat = unicodedata.category(char)
381 | if cat in ("Cc", "Cf"):
382 | return True
383 | return False
384 |
385 |
386 | def _is_punctuation(char):
387 |   """Checks whether `char` is a punctuation character."""
388 | cp = ord(char)
389 | # We treat all non-letter/number ASCII as punctuation.
390 | # Characters such as "^", "$", and "`" are not in the Unicode
391 | # Punctuation class but we treat them as punctuation anyways, for
392 | # consistency.
393 | if ((cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or
394 | (cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126)):
395 | return True
396 | cat = unicodedata.category(char)
397 | if cat.startswith("P"):
398 | return True
399 | return False
400 |
--------------------------------------------------------------------------------
/examples/BERT/tokenization_test.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2018 The Google AI Language Team Authors.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | from __future__ import absolute_import
16 | from __future__ import division
17 | from __future__ import print_function
18 |
19 | import os
20 | import tempfile
21 | import tokenization
22 | import six
23 | import tensorflow as tf
24 |
25 |
26 | class TokenizationTest(tf.test.TestCase):
27 |
28 | def test_full_tokenizer(self):
29 | vocab_tokens = [
30 | "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn",
31 | "##ing", ","
32 | ]
33 | with tempfile.NamedTemporaryFile(delete=False) as vocab_writer:
34 |       vocab_writer.write("".join([x + "\n" for x in vocab_tokens]).encode("utf-8"))
35 |
36 | vocab_file = vocab_writer.name
37 |
38 | tokenizer = tokenization.FullTokenizer(vocab_file)
39 | os.unlink(vocab_file)
40 |
41 | tokens = tokenizer.tokenize(u"UNwant\u00E9d,running")
42 | self.assertAllEqual(tokens, ["un", "##want", "##ed", ",", "runn", "##ing"])
43 |
44 | self.assertAllEqual(
45 | tokenizer.convert_tokens_to_ids(tokens), [7, 4, 5, 10, 8, 9])
46 |
47 | def test_chinese(self):
48 | tokenizer = tokenization.BasicTokenizer()
49 |
50 | self.assertAllEqual(
51 | tokenizer.tokenize(u"ah\u535A\u63A8zz"),
52 | [u"ah", u"\u535A", u"\u63A8", u"zz"])
53 |
54 | def test_basic_tokenizer_lower(self):
55 | tokenizer = tokenization.BasicTokenizer(do_lower_case=True)
56 |
57 | self.assertAllEqual(
58 | tokenizer.tokenize(u" \tHeLLo!how \n Are yoU? "),
59 | ["hello", "!", "how", "are", "you", "?"])
60 | self.assertAllEqual(tokenizer.tokenize(u"H\u00E9llo"), ["hello"])
61 |
62 | def test_basic_tokenizer_no_lower(self):
63 | tokenizer = tokenization.BasicTokenizer(do_lower_case=False)
64 |
65 | self.assertAllEqual(
66 | tokenizer.tokenize(u" \tHeLLo!how \n Are yoU? "),
67 | ["HeLLo", "!", "how", "Are", "yoU", "?"])
68 |
69 | def test_wordpiece_tokenizer(self):
70 | vocab_tokens = [
71 | "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn",
72 | "##ing"
73 | ]
74 |
75 | vocab = {}
76 | for (i, token) in enumerate(vocab_tokens):
77 | vocab[token] = i
78 | tokenizer = tokenization.WordpieceTokenizer(vocab=vocab)
79 |
80 | self.assertAllEqual(tokenizer.tokenize(""), [])
81 |
82 | self.assertAllEqual(
83 | tokenizer.tokenize("unwanted running"),
84 | ["un", "##want", "##ed", "runn", "##ing"])
85 |
86 | self.assertAllEqual(
87 | tokenizer.tokenize("unwantedX running"), ["[UNK]", "runn", "##ing"])
88 |
89 | def test_convert_tokens_to_ids(self):
90 | vocab_tokens = [
91 | "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn",
92 | "##ing"
93 | ]
94 |
95 | vocab = {}
96 | for (i, token) in enumerate(vocab_tokens):
97 | vocab[token] = i
98 |
99 | self.assertAllEqual(
100 | tokenization.convert_tokens_to_ids(
101 | vocab, ["un", "##want", "##ed", "runn", "##ing"]), [7, 4, 5, 8, 9])
102 |
103 | def test_is_whitespace(self):
104 | self.assertTrue(tokenization._is_whitespace(u" "))
105 | self.assertTrue(tokenization._is_whitespace(u"\t"))
106 | self.assertTrue(tokenization._is_whitespace(u"\r"))
107 | self.assertTrue(tokenization._is_whitespace(u"\n"))
108 | self.assertTrue(tokenization._is_whitespace(u"\u00A0"))
109 |
110 | self.assertFalse(tokenization._is_whitespace(u"A"))
111 | self.assertFalse(tokenization._is_whitespace(u"-"))
112 |
113 | def test_is_control(self):
114 | self.assertTrue(tokenization._is_control(u"\u0005"))
115 |
116 | self.assertFalse(tokenization._is_control(u"A"))
117 | self.assertFalse(tokenization._is_control(u" "))
118 | self.assertFalse(tokenization._is_control(u"\t"))
119 | self.assertFalse(tokenization._is_control(u"\r"))
120 | self.assertFalse(tokenization._is_control(u"\U0001F4A9"))
121 |
122 | def test_is_punctuation(self):
123 | self.assertTrue(tokenization._is_punctuation(u"-"))
124 | self.assertTrue(tokenization._is_punctuation(u"$"))
125 | self.assertTrue(tokenization._is_punctuation(u"`"))
126 | self.assertTrue(tokenization._is_punctuation(u"."))
127 |
128 | self.assertFalse(tokenization._is_punctuation(u"A"))
129 | self.assertFalse(tokenization._is_punctuation(u" "))
130 |
131 |
132 | if __name__ == "__main__":
133 | tf.test.main()
134 |
--------------------------------------------------------------------------------
/examples/BERT/utils/utils.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import time
3 |
4 | # report latency and throughput during eval
5 | class LogEvalRunHook(tf.train.SessionRunHook):
6 | def __init__(self, global_batch_size, hvd_rank=-1):
7 | self.global_batch_size = global_batch_size
8 | self.hvd_rank = hvd_rank
9 | self.total_time = 0.0
10 | self.count = 0
11 | self.skipped = 0
12 | self.time_list = []
13 |
14 | def before_run(self, run_context):
15 | self.t0 = time.time()
16 |
17 | def after_run(self, run_context, run_values):
18 | elapsed_secs = time.time() - self.t0
19 | self.count += 1
20 |
21 |     # Exclude the first 2 (arbitrarily chosen) startup iterations from perf measurements
22 | if self.count <= 2:
23 | print("Skipping time record for ", self.count, " due to overhead")
24 | self.skipped += 1
25 | else:
26 | self.time_list.append(elapsed_secs)
27 | self.total_time += elapsed_secs
28 |
29 | # report throughput during training
30 | class LogTrainRunHook(tf.train.SessionRunHook):
31 | def __init__(self, global_batch_size, hvd_rank=-1, save_checkpoints_steps=1000):
32 | self.global_batch_size = global_batch_size
33 | self.hvd_rank = hvd_rank
34 | self.save_checkpoints_steps = save_checkpoints_steps
35 |
36 | self.total_time = 0.0
37 | self.count = 0 # Holds number of iterations, including skipped iterations for fp16 loss scaling
38 |
39 | def after_create_session(self, session, coord):
40 | self.init_global_step = session.run(tf.train.get_global_step())
41 |
42 | def before_run(self, run_context):
43 | self.t0 = time.time()
44 | return tf.train.SessionRunArgs(
45 | fetches=['step_update:0'])
46 |
47 | def after_run(self, run_context, run_values):
48 | elapsed_secs = time.time() - self.t0
49 | self.global_step = run_values.results[0]
50 | self.count += 1
51 |
52 | # Removing first step + first two steps after every checkpoint save
53 | if (self.global_step - self.init_global_step) % self.save_checkpoints_steps <= 1:
54 | print("Skipping time record for ", self.global_step, " due to checkpoint-saving/warmup overhead")
55 | else:
56 | self.total_time += elapsed_secs
57 |
58 | def end(self, session):
59 | num_global_steps = self.global_step - self.init_global_step
60 |
61 | self.skipped = (num_global_steps // self.save_checkpoints_steps) * 2 + \
62 | min(2, num_global_steps % self.save_checkpoints_steps) - 1
--------------------------------------------------------------------------------
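A worked example of the `skipped` arithmetic in LogTrainRunHook.end above (the numbers are illustrative, not from the repo): the hook drops any step whose offset from the start satisfies offset % save_checkpoints_steps <= 1, i.e. the first step and the two steps after every checkpoint save, since those carry warmup or checkpoint-saving overhead.

    save_checkpoints_steps = 1000
    num_global_steps = 2500

    # For 2500 steps with saves every 1000, the dropped offsets are
    # 1, 1000, 1001, 2000, 2001 -> 5 steps excluded from throughput stats.
    skipped = (num_global_steps // save_checkpoints_steps) * 2 \
              + min(2, num_global_steps % save_checkpoints_steps) - 1
    print(skipped)  # 2*2 + 2 - 1 = 5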
/examples/MNIST/mnist_estimator.py:
--------------------------------------------------------------------------------
1 | # Copyright 2018 Uber Technologies, Inc. All Rights Reserved.
2 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | """Convolutional Neural Network Estimator for MNIST, built with tf.layers."""
16 |
17 | from __future__ import absolute_import
18 | from __future__ import division
19 | from __future__ import print_function
20 | from DeepGradientCompressionOptimizer import DeepGradientCompressionOptimizer
21 |
22 | import os
23 | import errno
24 | import numpy as np
25 | import tensorflow as tf
26 | import horovod.tensorflow as hvd
27 |
28 | from tensorflow import keras
29 |
30 | tf.logging.set_verbosity(tf.logging.INFO)
31 |
32 |
33 | def cnn_model_fn(features, labels, mode):
34 | """Model function for CNN."""
35 | # Input Layer
36 | # Reshape X to 4-D tensor: [batch_size, width, height, channels]
37 | # MNIST images are 28x28 pixels, and have one color channel
38 | input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
39 |
40 | # Convolutional Layer #1
41 | # Computes 32 features using a 5x5 filter with ReLU activation.
42 | # Padding is added to preserve width and height.
43 | # Input Tensor Shape: [batch_size, 28, 28, 1]
44 | # Output Tensor Shape: [batch_size, 28, 28, 32]
45 | conv1 = tf.layers.conv2d(
46 | inputs=input_layer,
47 | filters=32,
48 | kernel_size=[5, 5],
49 | padding="same",
50 | activation=tf.nn.relu)
51 |
52 | # Pooling Layer #1
53 | # First max pooling layer with a 2x2 filter and stride of 2
54 | # Input Tensor Shape: [batch_size, 28, 28, 32]
55 | # Output Tensor Shape: [batch_size, 14, 14, 32]
56 | pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
57 |
58 | # Convolutional Layer #2
59 | # Computes 64 features using a 5x5 filter.
60 | # Padding is added to preserve width and height.
61 | # Input Tensor Shape: [batch_size, 14, 14, 32]
62 | # Output Tensor Shape: [batch_size, 14, 14, 64]
63 | conv2 = tf.layers.conv2d(
64 | inputs=pool1,
65 | filters=64,
66 | kernel_size=[5, 5],
67 | padding="same",
68 | activation=tf.nn.relu)
69 |
70 | # Pooling Layer #2
71 | # Second max pooling layer with a 2x2 filter and stride of 2
72 | # Input Tensor Shape: [batch_size, 14, 14, 64]
73 | # Output Tensor Shape: [batch_size, 7, 7, 64]
74 | pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
75 |
76 | # Flatten tensor into a batch of vectors
77 | # Input Tensor Shape: [batch_size, 7, 7, 64]
78 | # Output Tensor Shape: [batch_size, 7 * 7 * 64]
79 | pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
80 |
81 | # Dense Layer
82 | # Densely connected layer with 1024 neurons
83 | # Input Tensor Shape: [batch_size, 7 * 7 * 64]
84 | # Output Tensor Shape: [batch_size, 1024]
85 | dense = tf.layers.dense(inputs=pool2_flat, units=1024,
86 | activation=tf.nn.relu)
87 |
88 | # Add dropout operation; 0.6 probability that element will be kept
89 | dropout = tf.layers.dropout(
90 | inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)
91 |
92 | # Logits layer
93 | # Input Tensor Shape: [batch_size, 1024]
94 | # Output Tensor Shape: [batch_size, 10]
95 | logits = tf.layers.dense(inputs=dropout, units=10)
96 |
97 | predictions = {
98 | # Generate predictions (for PREDICT and EVAL mode)
99 | "classes": tf.argmax(input=logits, axis=1),
100 | # Add `softmax_tensor` to the graph. It is used for PREDICT and by the
101 | # `logging_hook`.
102 | "probabilities": tf.nn.softmax(logits, name="softmax_tensor")
103 | }
104 | if mode == tf.estimator.ModeKeys.PREDICT:
105 | return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
106 |
107 | # Calculate Loss (for both TRAIN and EVAL modes)
108 | onehot_labels = tf.one_hot(indices=tf.cast(labels, tf.int32), depth=10)
109 | loss = tf.losses.softmax_cross_entropy(
110 | onehot_labels=onehot_labels, logits=logits)
111 |
112 | # Configure the Training Op (for TRAIN mode)
113 | if mode == tf.estimator.ModeKeys.TRAIN:
114 | # Horovod: scale learning rate by the number of workers.
115 | optimizer = tf.train.MomentumOptimizer(
116 | learning_rate=0.001 * hvd.size(), momentum=0.9)
117 |
118 | # Horovod: add Horovod Distributed Optimizer.
119 | optimizer = hvd.DistributedOptimizer(optimizer)
120 |
121 | # wrap DeepGradientCompressionOptimizer around horovod Optimizer
122 | optimizer = DeepGradientCompressionOptimizer(optimizer)
123 |
124 | 
125 |     # Compute, filter, and densify gradients explicitly so the DGC
126 |     # wrapper sees every gradient before it is applied.
127 |     tvars = tf.trainable_variables()
128 |     grads_and_vars = optimizer.compute_gradients(loss, tvars)
129 |     grads_and_vars = [(g, v) for g, v in grads_and_vars if g is not None]
130 |     grads_and_vars = optimizer.sparse_to_dense(grads_and_vars)
131 |     grads, tvars = list(zip(*grads_and_vars))
132 |     train_op = optimizer.apply_gradients(
133 |         list(zip(grads, tvars)),
134 |         global_step=tf.train.get_or_create_global_step())
135 |     return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
136 | # Add evaluation metrics (for EVAL mode)
137 | eval_metric_ops = {
138 | "accuracy": tf.metrics.accuracy(
139 | labels=labels, predictions=predictions["classes"])}
140 | return tf.estimator.EstimatorSpec(
141 | mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
142 |
143 |
144 | def main(unused_argv):
145 | # Horovod: initialize Horovod.
146 | hvd.init()
147 |
148 | # Keras automatically creates a cache directory in ~/.keras/datasets for
149 | # storing the downloaded MNIST data. This creates a race
150 | # condition among the workers that share the same filesystem. If the
151 | # directory already exists by the time this worker gets around to creating
152 | # it, ignore the resulting exception and continue.
153 | cache_dir = os.path.join(os.path.expanduser('~'), '.keras', 'datasets')
154 | if not os.path.exists(cache_dir):
155 | try:
156 | os.mkdir(cache_dir)
157 | except OSError as e:
158 | if e.errno == errno.EEXIST and os.path.isdir(cache_dir):
159 | pass
160 | else:
161 | raise
162 |
163 | # Download and load MNIST dataset.
164 | (train_data, train_labels), (eval_data, eval_labels) = \
165 | keras.datasets.mnist.load_data('MNIST-data-%d' % hvd.rank())
166 |
167 | # The shape of downloaded data is (-1, 28, 28), hence we need to reshape it
168 | # into (-1, 784) to feed into our network. Also, need to normalize the
169 | # features between 0 and 1.
170 | train_data = np.reshape(train_data, (-1, 784)) / 255.0
171 | eval_data = np.reshape(eval_data, (-1, 784)) / 255.0
172 |
173 | # Horovod: pin GPU to be used to process local rank (one GPU per process)
174 | config = tf.ConfigProto()
175 | config.gpu_options.allow_growth = True
176 | config.gpu_options.visible_device_list = str(hvd.local_rank())
177 |
178 | # Horovod: save checkpoints only on worker 0 to prevent other workers from
179 | # corrupting them.
180 | model_dir = './mnist_convnet_model' if hvd.rank() == 0 else None
181 |
182 | # Create the Estimator
183 | mnist_classifier = tf.estimator.Estimator(
184 | model_fn=cnn_model_fn, model_dir=model_dir,
185 | config=tf.estimator.RunConfig(session_config=config))
186 |
187 | # Set up logging for predictions
188 | # Log the values in the "Softmax" tensor with label "probabilities"
189 | tensors_to_log = {"probabilities": "softmax_tensor"}
190 | logging_hook = tf.train.LoggingTensorHook(
191 | tensors=tensors_to_log, every_n_iter=500)
192 |
193 | # Horovod: BroadcastGlobalVariablesHook broadcasts initial variable states from
194 | # rank 0 to all other processes. This is necessary to ensure consistent
195 | # initialization of all workers when training is started with random weights or
196 | # restored from a checkpoint.
197 | bcast_hook = hvd.BroadcastGlobalVariablesHook(0)
198 |
199 | # Train the model
200 | train_input_fn = tf.estimator.inputs.numpy_input_fn(
201 | x={"x": train_data},
202 | y=train_labels,
203 | batch_size=100,
204 | num_epochs=None,
205 | shuffle=True)
206 |
207 | # Horovod: adjust number of steps based on number of GPUs.
208 | mnist_classifier.train(
209 | input_fn=train_input_fn,
210 | steps=20000 // hvd.size(),
211 | hooks=[logging_hook, bcast_hook])
212 |
213 | # Evaluate the model and print results
214 | eval_input_fn = tf.estimator.inputs.numpy_input_fn(
215 | x={"x": eval_data},
216 | y=eval_labels,
217 | num_epochs=1,
218 | shuffle=False)
219 | eval_results = mnist_classifier.evaluate(input_fn=eval_input_fn)
220 | print(eval_results)
221 |
222 |
223 | if __name__ == "__main__":
224 | tf.app.run()
225 |
--------------------------------------------------------------------------------
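To make the optimizer wrapping order above concrete (Momentum -> Horovod -> DeepGradientCompressionOptimizer), here is a hedged sketch of how a gradient-compression wrapper of this shape can work. The class name `TopKGradientOptimizer`, the `keep_ratio` parameter, and the plain top-k sparsification are illustrative assumptions, not the repo's actual implementation (which, as the estimator code shows, also provides a sparse_to_dense step):

    import tensorflow as tf

    class TopKGradientOptimizer(tf.train.Optimizer):
        """Illustrative wrapper: keep only the largest-magnitude gradient
        entries, then delegate to the wrapped (e.g. Horovod) optimizer."""

        def __init__(self, optimizer, keep_ratio=0.01, name="TopKGradient"):
            super(TopKGradientOptimizer, self).__init__(False, name)
            self._optimizer = optimizer
            self._keep_ratio = keep_ratio

        def compute_gradients(self, *args, **kwargs):
            grads_and_vars = self._optimizer.compute_gradients(*args, **kwargs)
            out = []
            for grad, var in grads_and_vars:
                if grad is None:
                    out.append((grad, var))
                    continue
                flat = tf.reshape(grad, [-1])
                k = tf.maximum(1, tf.cast(
                    tf.cast(tf.size(flat), tf.float32) * self._keep_ratio,
                    tf.int32))
                # Zero out every entry below the k-th largest magnitude.
                threshold = tf.nn.top_k(tf.abs(flat), k).values[-1]
                mask = tf.cast(tf.abs(grad) >= threshold, grad.dtype)
                out.append((grad * mask, var))
            return out

        def apply_gradients(self, *args, **kwargs):
            return self._optimizer.apply_gradients(*args, **kwargs)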
/examples/MNIST/scripts/run_mnist_estimator.sh:
--------------------------------------------------------------------------------
1 | startTime=${SECONDS}
2 |
3 | sudo rm -rf mnist_convnet_model
4 | source activate tensorflow_p36
5 |
6 |
7 | mpi_command="/home/ubuntu/anaconda3/envs/tensorflow_p36/bin/mpirun -np 8 -H localhost:8 \
8 | --allow-run-as-root -bind-to none -map-by slot \
9 | -x NCCL_DEBUG=INFO \
10 | -x LD_LIBRARY_PATH \
11 | -x NCCL_SOCKET_IFNAME=ens5 -mca btl_tcp_if_exclude lo,docker0 \
12 | -x PATH -mca pml ob1 -mca btl ^openib"
13 | use_hvd="--horovod"
14 |
15 |
16 | $mpi_command python mnist_estimator.py
17 |
18 |
19 | endTime=${SECONDS}
20 | diffTime=`expr ${endTime} - ${startTime}`
21 | echo "Diff Time: [${diffTime}]"
22 |
--------------------------------------------------------------------------------