├── .gitignore ├── .travis.yml ├── LICENSE ├── Makefile ├── README.md ├── demo.py ├── easy_tf_log.py ├── setup.py ├── tensorboard_screenshot.png └── tests.py /.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | __pycache__ 3 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: python 2 | jobs: 3 | include: 4 | - name: "TensorFlow 1" 5 | python: 3.7 6 | install: 7 | - pip install tensorflow==1.15 8 | - name: "TensorFlow 2" 9 | python: 3.8 10 | install: 11 | - pip install tensorflow 12 | install: 13 | - pip install . 14 | script: 15 | - python tests.py 16 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License 2 | 3 | Copyright (c) 2017 OpenAI (http://openai.com), 2018 Matthew Rahtz 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in 13 | all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 21 | THE SOFTWARE. 22 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | dist: easy_tf_log.py 2 | rm -rf dist 3 | python3 setup.py sdist bdist_wheel 4 | twine upload dist/* 5 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Easy TensorFlow Logging 2 | 3 | [![Build Status](https://travis-ci.com/mrahtz/easy-tf-log.svg?branch=master)](https://travis-ci.com/mrahtz/easy-tf-log) 4 | 5 | **Note: This is mainly designed for TensorFlow 1**; the logging API in TensorFlow 2 is significantly easier than in TensorFlow 1. This module *is* compatible with TensorFlow 2, but some features may not work with eager execution. 6 | 7 | Are you prototyping something and want to be able to _magically_ graph some value 8 | without going through all the usual steps to set up TensorFlow logging properly? 9 | 10 | `easy_tf_log` is a simple module to do just that. 11 | 12 | ``` 13 | from easy_tf_log import tflog 14 | ``` 15 | 16 | then you can do 17 | 18 | ``` 19 | for i in range(10): 20 | tflog('really_interesting_variable_name', i) 21 | ``` 22 | 23 | and you'll find a directory `logs` that you can point TensorBoard to 24 | 25 | `$ tensorboard --logdir logs` 26 | 27 | to get 28 | 29 | ![](https://github.com/mrahtz/easy-tf-log/blob/master/tensorboard_screenshot.png) 30 | 31 | See [`demo.py`](demo.py) for a full demo. 32 | 33 | Based on logging code from OpenAI's [baselines](https://github.com/openai/baselines). 
34 | 35 | ## Installation 36 | 37 | `pip install easy-tf-log` 38 | 39 | Note that TensorFlow must be installed separately. 40 | 41 | ## Documentation 42 | 43 | `easy-tf-log` supports logging using either a global logger or an instantiated logger object. 44 | 45 | The global logger is good for very quick prototypes, but for anything more complicated, 46 | you'll probably want to instantiate your own `Logger` object. 47 | 48 | ### Global logger 49 | 50 | * `easy_tf_log.tflog(key, value, step=None)` 51 | * Logs `value` (int or float) under the name `key` (string). 52 | * `step` (int) sets the step associated with `value` explicitly. 53 | If not specified, the step will increment on each call. 54 | * `easy_tf_log.set_dir(log_dir)` 55 | * Sets the global logger to log to the specified directory. 56 | * `log_dir` can be an absolute or a relative path. 57 | * `easy_tf_log.set_writer(writer)` 58 | * Sets the global logger to log using the specified `tf.summary.FileWriter` instance. 59 | 60 | By default (i.e. if `set_dir` is not called), the global logger logs to a `logs` directory 61 | automatically created in the working directory. 62 | 63 | ### Logger object 64 | 65 | * `logger = easy_tf_log.Logger(log_dir=None, writer=None)` 66 | * Create a `Logger`. 67 | * `log_dir`: an absolute or relative path specifying the directory to log to. 68 | * `writer`: an existing `tf.summary.FileWriter` instance to use for logging. 69 | * If neither `log_dir` nor `writer` is specified, the logger will log to a `logs` directory in the 70 | working directory. If both are specified, the constructor will raise a `ValueError`. 71 | * `logger.log_key_value(key, value, step=None)` 72 | * See `tflog`. 73 | * `logger.log_list_stats(key, values_list)` 74 | * Log the minimum, maximum, mean, and standard deviation of `values_list` (a list of ints or floats). 75 | * `logger.measure_rate(key, value)` 76 | * Log the rate at which `value` (int or float) changes per second.
77 | * The first call internally stores the time of the first value; 78 | the second call logs the change between the second value and the first value divided by the 79 | time between the calls; etc. 80 | * `logger.set_dir(log_dir)` 81 | * See `easy_tf_log.set_dir(log_dir)`. 82 | * `logger.set_writer(writer)` 83 | * See `easy_tf_log.set_writer(writer)`. 84 | * `logger.close()` 85 | * Flush logs and close the log file handle. 86 | -------------------------------------------------------------------------------- /demo.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import time 3 | 4 | import easy_tf_log 5 | 6 | # Logging using the global logger 7 | 8 | # Will log to automatically-created 'logs' directory 9 | for i in range(10): 10 | easy_tf_log.tflog('foo', i) 11 | for j in range(10, 20): 12 | easy_tf_log.tflog('bar', j) 13 | 14 | easy_tf_log.set_dir('logs2') 15 | 16 | for k in range(20, 30): 17 | easy_tf_log.tflog('baz', k) 18 | for l in range(5): 19 | easy_tf_log.tflog('qux', l, step=(10 * l)) 20 | 21 | # Logging using a Logger object 22 | 23 | logger = easy_tf_log.Logger(log_dir='logs3') 24 | 25 | for i in range(10): 26 | logger.log_key_value('quux', i) 27 | 28 | logger.log_list_stats('quuz', [1, 2, 3, 4, 5]) 29 | 30 | logger.measure_rate('corge', 10) 31 | time.sleep(1) 32 | logger.measure_rate('corge', 20) # Logged rate: (20 - 10) / 1 33 | time.sleep(2) 34 | logger.measure_rate('corge', 30) # Logged rate: (30 - 20) / 2 35 | -------------------------------------------------------------------------------- /easy_tf_log.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import time 4 | from typing import Union 5 | 6 | import numpy as np 7 | import tensorflow as tf 8 | from tensorflow.core.util import event_pb2 9 | from tensorflow.python.util import compat 10 | 11 | if tf.__version__ >= '2': 12 | import 
tensorflow.compat.v1.summary as tf1_summary 13 | import tensorflow.python._pywrap_events_writer as tf_pywrap 14 | else: 15 | import tensorflow.summary as tf1_summary 16 | import tensorflow.python.pywrap_tensorflow as tf_pywrap 17 | 18 | 19 | class EventsFileWriterWrapper: 20 | """ 21 | Rename EventsFileWriter's flush() and add_event() methods to be consistent 22 | with EventsWriter's methods. 23 | """ 24 | 25 | def __init__(self, events_file_writer): 26 | self.writer = events_file_writer 27 | 28 | def WriteEvent(self, event): 29 | self.writer.add_event(event) 30 | 31 | def Flush(self): 32 | self.writer.flush() 33 | 34 | 35 | class Logger(object): 36 | DEFAULT = None 37 | 38 | def __init__(self, log_dir=None, writer=None): 39 | self.key_steps = {} 40 | self.rate_values = {} 41 | self.writer = None # type: Union[None, tf_pywrap.EventsWriter, tf1_summary.FileWriter] 42 | 43 | if log_dir is None and writer is None: 44 | log_dir = 'logs' 45 | self.set_log_dir(log_dir) 46 | elif log_dir is not None and writer is None: 47 | self.set_log_dir(log_dir) 48 | elif log_dir is None and writer is not None: 49 | self.set_writer(writer) 50 | else: 51 | raise ValueError("Only one of log_dir or writer must be specified") 52 | 53 | def set_log_dir(self, log_dir): 54 | os.makedirs(log_dir, exist_ok=True) 55 | path = osp.join(log_dir, "events") 56 | # Why don't we just use an EventsFileWriter? 57 | # By default, we want to be fork-safe - we want to work even if we 58 | # create the writer in one process and try to use it in a forked 59 | # process. And because EventsFileWriter uses a subthread to do the 60 | # actual writing, EventsFileWriter /isn't/ fork-safe. 
61 | self.writer = tf_pywrap.EventsWriter(compat.as_bytes(path)) 62 | 63 | def set_writer(self, writer: tf1_summary.FileWriter): 64 | """ 65 | Set the log writer to an existing tf1_summary.FileWriter instance 66 | (so that you can write both TensorFlow summaries and easy_tf_log events to the same log file) 67 | """ 68 | self.writer = EventsFileWriterWrapper(writer) 69 | 70 | def logkv(self, k, v, step=None): 71 | self.log_key_value(k, v, step) 72 | 73 | def log_key_value(self, key, value, step=None): 74 | def summary_val(k, v): 75 | kwargs = {'tag': k, 'simple_value': float(v)} 76 | if tf.__version__ >= '2': 77 | return tf.compat.v1.Summary.Value(**kwargs) 78 | else: 79 | return tf.Summary.Value(**kwargs) 80 | 81 | summary = None 82 | if tf.__version__ >= '2': 83 | summary = tf.compat.v1.Summary(value=[summary_val(key, value)]) 84 | else: 85 | summary = tf.Summary(value=[summary_val(key, value)]) 86 | event = event_pb2.Event(wall_time=time.time(), summary=summary) 87 | # Use a separate step counter for each key 88 | if key not in self.key_steps: 89 | self.key_steps[key] = 0 90 | if step is not None: 91 | self.key_steps[key] = step 92 | event.step = self.key_steps[key] 93 | self.writer.WriteEvent(event) 94 | self.writer.Flush() 95 | self.key_steps[key] += 1 96 | 97 | def log_list_stats(self, key, values_list): 98 | for suffix, f in [('min', np.min), ('max', np.max), ('avg', np.mean), ('std', np.std)]: 99 | self.logkv(key + '_' + suffix, f(values_list)) 100 | 101 | def measure_rate(self, key, value): 102 | if key in self.rate_values: 103 | last_val, last_time = self.rate_values[key] 104 | interval = time.time() - last_time 105 | self.logkv(key + '_rate', (value - last_val) / interval) 106 | self.rate_values[key] = (value, time.time()) 107 | 108 | def close(self): 109 | if self.writer: 110 | self.writer.Close() 111 | self.writer = None 112 | 113 | 114 | def set_dir(log_dir): 115 | Logger.DEFAULT = Logger(log_dir=log_dir) 116 | 117 | 118 | def set_writer(writer): 119 | 
Logger.DEFAULT = Logger(writer=writer) 120 | 121 | 122 | def tflog(key, value, step=None): 123 | if not Logger.DEFAULT: 124 | set_dir('logs') 125 | Logger.DEFAULT.log_key_value(key, value, step) 126 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | import os.path as path 2 | 3 | from setuptools import setup 4 | 5 | with open('README.md', encoding='utf-8') as f: 6 | long_description = f.read() 7 | 8 | setup( 9 | name='easy_tf_log', 10 | version='1.12', 11 | description='TensorFlow logging made easy', 12 | long_description=long_description, 13 | long_description_content_type='text/markdown', 14 | url='https://github.com/mrahtz/easy-tf-log', 15 | author='Matthew Rahtz', 16 | author_email='matthew.rahtz@gmail.com', 17 | keywords='tensorflow graph graphs graphing', 18 | py_modules=['easy_tf_log'], 19 | install_requires=['numpy'], 20 | ) 21 | -------------------------------------------------------------------------------- /tensorboard_screenshot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrahtz/easy-tf-log/efee1d42a1ec56dd9f7abe2e5025bb0a499ffb96/tensorboard_screenshot.png -------------------------------------------------------------------------------- /tests.py: -------------------------------------------------------------------------------- 1 | import importlib 2 | import os 3 | import os.path as osp 4 | import queue 5 | import tempfile 6 | import time 7 | import unittest 8 | from multiprocessing import Queue, Process 9 | 10 | import numpy as np 11 | import tensorflow as tf 12 | 13 | import easy_tf_log 14 | 15 | if tf.__version__ >= '2': 16 | import tensorflow.compat.v1.train as tf_train 17 | # Needed for creation of a TensorFlow 1 `summary` op (which behaves 18 | # differently from a TensorFlow 2 `summary` op), and a TensorFlow 1 19 | # `FileWriter` (TensorFlow 2
does have `tf.summary.create_file_writer`, but 20 | # the object it returns seems to be slightly different - it doesn't have the 21 | # `add_summary` method.) 22 | import tensorflow.compat.v1.summary as tf1_summary 23 | # FileWriter is not compatible with eager execution. 24 | tf.compat.v1.disable_eager_execution() 25 | else: 26 | import tensorflow.train as tf_train 27 | import tensorflow.summary as tf1_summary 28 | 29 | 30 | class TestEasyTFLog(unittest.TestCase): 31 | 32 | def setUp(self): 33 | importlib.reload(easy_tf_log) 34 | print(self._testMethodName) 35 | 36 | def test_no_setup(self): 37 | """ 38 | Test that if tflog() is used without any extra setup, a directory 39 | 'logs' is created in the current directory containing the event file. 40 | """ 41 | with tempfile.TemporaryDirectory() as temp_dir: 42 | os.chdir(temp_dir) 43 | easy_tf_log.tflog('var', 0) 44 | self.assertEqual(os.listdir(), ['logs']) 45 | self.assertIn('events.out.tfevents', os.listdir('logs')[0]) 46 | 47 | def test_set_dir(self): 48 | """ 49 | Confirm that set_dir works. 50 | """ 51 | with tempfile.TemporaryDirectory() as temp_dir: 52 | os.chdir(temp_dir) 53 | easy_tf_log.set_dir('logs2') 54 | easy_tf_log.tflog('var', 0) 55 | self.assertEqual(os.listdir(), ['logs2']) 56 | self.assertIn('events.out.tfevents', os.listdir('logs2')[0]) 57 | 58 | def test_set_writer(self): 59 | """ 60 | Check that when using an EventFileWriter from a FileWriter, 61 | the resulting events file contains events from both the FileWriter 62 | and easy_tf_log.
63 | """ 64 | with tempfile.TemporaryDirectory() as temp_dir: 65 | os.chdir(temp_dir) 66 | writer = tf1_summary.FileWriter('logs') 67 | var = tf.Variable(0.0) 68 | summary_op = tf1_summary.scalar('tf_var', var) 69 | 70 | if tf.__version__ >= '2': 71 | sess = tf.compat.v1.Session() 72 | else: 73 | sess = tf.Session() 74 | 75 | sess.run(var.initializer) 76 | summary = sess.run(summary_op) 77 | writer.add_summary(summary) 78 | 79 | easy_tf_log.set_writer(writer.event_writer) 80 | easy_tf_log.tflog('easy-tf-log_var', 0) 81 | 82 | self.assertEqual(os.listdir(), ['logs']) 83 | event_filename = osp.join('logs', os.listdir('logs')[0]) 84 | self.assertIn('events.out.tfevents', event_filename) 85 | 86 | tags = set() 87 | for event in tf_train.summary_iterator(event_filename): 88 | for value in event.summary.value: 89 | tags.add(value.tag) 90 | self.assertIn('tf_var', tags) 91 | self.assertIn('easy-tf-log_var', tags) 92 | 93 | def test_full(self): 94 | """ 95 | Log a few values and check that the event file contain the expected 96 | values. 
97 | """ 98 | with tempfile.TemporaryDirectory() as temp_dir: 99 | os.chdir(temp_dir) 100 | 101 | for i in range(10): 102 | easy_tf_log.tflog('foo', i) 103 | for i in range(10): 104 | easy_tf_log.tflog('bar', i) 105 | 106 | event_filename = osp.join('logs', os.listdir('logs')[0]) 107 | event_n = 0 108 | for event in tf_train.summary_iterator(event_filename): 109 | if event_n == 0: # metadata 110 | event_n += 1 111 | continue 112 | if event_n <= 10: 113 | self.assertEqual(event.step, event_n - 1) 114 | self.assertEqual(event.summary.value[0].tag, "foo") 115 | self.assertEqual(event.summary.value[0].simple_value, 116 | float(event_n - 1)) 117 | if event_n > 10 and event_n <= 20: 118 | self.assertEqual(event.step, event_n - 10 - 1) 119 | self.assertEqual(event.summary.value[0].tag, "bar") 120 | self.assertEqual(event.summary.value[0].simple_value, 121 | float(event_n - 10 - 1)) 122 | event_n += 1 123 | 124 | def test_explicit_step(self): 125 | """ 126 | Log a few values explicitly setting the step number. 
127 | """ 128 | with tempfile.TemporaryDirectory() as temp_dir: 129 | os.chdir(temp_dir) 130 | 131 | for i in range(5): 132 | easy_tf_log.tflog('foo', i, step=(10 * i)) 133 | # These ones should continue from where the previous ones left off 134 | for i in range(5): 135 | easy_tf_log.tflog('foo', i) 136 | 137 | event_filename = osp.join('logs', os.listdir('logs')[0]) 138 | event_n = 0 139 | for event in tf_train.summary_iterator(event_filename): 140 | if event_n == 0: # metadata 141 | event_n += 1 142 | continue 143 | if event_n <= 5: 144 | self.assertEqual(event.step, 10 * (event_n - 1)) 145 | if event_n > 5 and event_n <= 10: 146 | self.assertEqual(event.step, 40 + (event_n - 5)) 147 | event_n += 1 148 | 149 | def test_fork(self): 150 | with tempfile.TemporaryDirectory() as temp_dir: 151 | easy_tf_log.set_dir(temp_dir) 152 | 153 | def f(queue): 154 | easy_tf_log.tflog('foo', 0) 155 | queue.put(True) 156 | 157 | q = Queue() 158 | Process(target=f, args=[q], daemon=True).start() 159 | try: 160 | q.get(timeout=1.0) 161 | except queue.Empty: 162 | self.fail("Process did not return") 163 | 164 | def test_measure_rate(self): 165 | with tempfile.TemporaryDirectory() as temp_dir: 166 | logger = easy_tf_log.Logger(log_dir=temp_dir) 167 | 168 | logger.measure_rate('foo', 0) 169 | time.sleep(1) 170 | logger.measure_rate('foo', 10) 171 | time.sleep(1) 172 | logger.measure_rate('foo', 25) 173 | 174 | event_filename = list(os.scandir(temp_dir))[0].path 175 | event_n = 0 176 | rates = [] 177 | for event in tf_train.summary_iterator(event_filename): 178 | if event_n == 0: # metadata 179 | event_n += 1 180 | continue 181 | rates.append(event.summary.value[0].simple_value) 182 | event_n += 1 183 | np.testing.assert_array_almost_equal(rates, [10., 15.], decimal=1) 184 | 185 | 186 | if __name__ == '__main__': 187 | unittest.main() 188 | --------------------------------------------------------------------------------