├── .gitignore
├── CHANGES
├── MANIFEST.in
├── Makefile
├── NOTES.release_howto
├── README.rst
├── asciitable
│   ├── __init__.py
│   ├── basic.py
│   ├── cds.py
│   ├── core.py
│   ├── daophot.py
│   ├── fixedwidth.py
│   ├── ipac.py
│   ├── latex.py
│   ├── memory.py
│   ├── ui.py
│   └── version.py
├── doc
│   ├── Makefile
│   ├── _templates
│   │   └── layout.html
│   ├── conf.py
│   ├── fixed_width_gallery.rst
│   └── index.rst
├── setup.py
├── t
│   ├── apostrophe.rdb
│   ├── apostrophe.tab
│   ├── bad.txt
│   ├── bars_at_ends.txt
│   ├── cds.dat
│   ├── cds
│   │   ├── glob
│   │   │   ├── ReadMe
│   │   │   └── lmxbrefs.dat
│   │   └── multi
│   │       ├── ReadMe
│   │       └── lhs2065.dat
│   ├── cds2.dat
│   ├── commented_header.dat
│   ├── commented_header2.dat
│   ├── continuation.dat
│   ├── daophot.dat
│   ├── fill_values.txt
│   ├── ipac.dat
│   ├── latex1.tex
│   ├── latex2.tex
│   ├── nls1_stackinfo.dbout
│   ├── no_data_cds.dat
│   ├── no_data_daophot.dat
│   ├── no_data_ipac.dat
│   ├── no_data_with_header.dat
│   ├── no_data_without_header.dat
│   ├── short.rdb
│   ├── short.tab
│   ├── simple.txt
│   ├── simple2.txt
│   ├── simple3.txt
│   ├── simple4.txt
│   ├── simple5.txt
│   ├── space_delim_blank_lines.txt
│   ├── space_delim_no_header.dat
│   ├── space_delim_no_names.dat
│   ├── test4.dat
│   ├── test5.dat
│   ├── vizier
│   │   ├── ReadMe
│   │   ├── table1.dat
│   │   └── table5.dat
│   ├── vots_spec.dat
│   └── whitespace.dat
└── test
    ├── __init__.py
    ├── common.py
    ├── test_cds_header_from_readme.py
    ├── test_fixedwidth.py
    ├── test_memory.py
    ├── test_read.py
    ├── test_types.py
    └── test_write.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
doc/
dist/
build/
*~
*.pyc
MANIFEST
genfromtext.api
asciitable.egg-info/
wing_asciitable.wpr

--------------------------------------------------------------------------------
/CHANGES:
--------------------------------------------------------------------------------
0.8.0
=====
Add support for reading and writing fixed width tables.

0.7.1
=====
Minor features and bug fix release:

- Add a method inconsistent_handler() to the BaseReader
  class as a hook to handle rows with an inconsistent number
  of data columns (contributed by Erik Tollerud).
- Output a more informative error message when guessing fails.
- Fix issues in column type handling, mostly related to the
  MemoryReader class which is used for writing tables.
- Fix a problem in guessing where user-supplied args were
  not filtering the guess possibilities correctly.
- Fix a problem reading a single-column, string-only table with
  MemoryReader on MacOS.

0.7.0.2
=======
Fixed CDS docstrings so they pass doctest.

Thanks to Sergio Pascual for sending this fix.

0.7.0.1
=======
Fixed setup.py to install the asciitable package instead of the asciitable.py module.

0.7.0
=====
This is a significant release with the following key features:

- Added support for reading and writing LaTeX tables
  (contributed by Moritz Guenther).
- Improved the CDS reader by better supporting multi-file
  tables (contributed by Frederic Grollier).
- Refactored the code into a package with functionally distinct modules.
- Added a "type" attribute to the Column class that provides the
  type of a column as IntType, FloatType, or StrType.

The new Latex and AASTex classes provide the ability to write
publication quality LaTeX tables in both the standard and AAS
"deluxetable" formats. These classes provide hooks to inject
additional LaTeX commands as needed for more complex tables.
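For example, a minimal sketch of writing a LaTeX table (illustrative data
only; asciitable.write() with Writer=asciitable.Latex is the feature
described above, and a dict of lists is one of the in-memory table forms
accepted via the Memory reader):

>>> import sys
>>> import asciitable
>>> data = {'obs': [1, 2], 'mag': [12.1, 13.5]}
>>> asciitable.write(data, sys.stdout, Writer=asciitable.Latex)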

API changes:
- Previously the read(), write(), get_reader() and get_writer()
  functions raised an exception for unrecognized keyword arguments. Now
  those extra arguments are passed on to the Reader class
  constructor. From the user perspective this means you can call
  read()/write() with class initialization arguments (see Cds and
  Latex for examples). For developers it means more flexibility
  in Reader classes.
- One minor API change is not backward compatible. When
  specifying custom column converters (e.g. to force a column
  that looks like integers to convert to floats) it was
  previously possible to provide either a list of converters or a
  single converter. Now you must always provide a list of
  converters, even if it has only one element. This was needed to
  support consistent assignment of the Column.type attribute.

Other minor fixes:
- Fixed a bug when the "end_line" parameter is passed a function.
- Fixed a bug where the RDB writer issued incorrect column types.

0.6.0
=====
Add support and testing for MacOS and Windows.

- Fixed an issue reading a table from a string under Windows.
- Added testing using Python 2.7 for Windows XP and MacOS 10.6.
- Changed Python 3 testing on linux from 3.1 to 3.2.
- Added pip and easy_install instructions to the installation documentation.
- Added __version__ variable to the module.
- Removed download_url from setup.py since the tar file is hosted on PyPI now.

0.5.2
=====
Enhance CDS reader support for missing values and fix a bug in NumpyOutputter
related to masking.

CDS reader
----------
CDS tables can contain missing values, realized as empty strings or as special
values. These special values are given in the first few characters of the
column description in the form ?=value. For columns with float values (type E
and F) asciitable now implements an automatic setting of fill_values that masks
these special values in a masked array. Nothing has changed for string or
integer columns as these do not have a natural value which can be used to
replace the masked or missing values in a transparent way.

Bug fix
-------
Fix a bug in NumpyOutputter when a mask is defined but no value is masked.

0.5.1
=====
This release is primarily an update to the CDS format reader with code
contributed by Prasanth Haridas Nair. It can now handle separate ReadMe and
data files. The ReadMe file specifies the column information for one or more
associated data files. For example:

>>> readme = "t/vizier/ReadMe"
>>> r = asciitable.Cds(readme)
>>> table = r.read("t/vizier/table1.dat")
>>> # table5.dat has the same ReadMe file
>>> table = r.read("t/vizier/table5.dat")

In addition a minor bug was fixed where blank table lines were not
being properly ignored.

0.5.0
=====
This release features a new function to guess the table format from the
supported formats within asciitable. This function is now called by default
within asciitable.read(). In addition:

- Added support for whitespace (tab or space) delimited tables by setting
  the delimiter parameter to "\s".
- Improved support for RDB tables by parsing the second line which specifies
  column type and (optionally) width. These values are written out if
  available when writing an RDB table.
- More rigorous checking of format compatibility for several table formats.

0.4.0
=====
Add capability to handle bad or missing values in the input table. This is
done with a new fill_values parameter that specifies replacement of
particular bad or missing values with new values. When used with NumPy
the default output will be a NumPy masked array.

Contributed by Moritz Guenther

0.3.1
=====
This release features the new capability to write ASCII tables using the same
basic infrastructure and API as for reading. Other updates include:

- Python 3 compatibility
- Significant documentation updates
- Improved test coverage
- New Reader class to read in-memory tables

0.2.5
=====
Primarily a documentation update, including much more detail on the parameters
for the read() function plus a bit more for other advanced features. There are
a couple of minor code fixes / updates:

- Gracefully read tables that have a header but no data lines.
- Update the Tab reader so all data value spaces are preserved
  (including leading/trailing)

0.2.2
=====
Add IPAC reader and reader comment_lines attribute plus doc updates.

- Add IPAC format reader

- Add 'comment_lines' attribute to BaseReader class to return all lines
  matching the header comment character.

- Made the table lines retrieved by the Inputter available as the
  Reader object 'lines' attribute in the BaseReader.read() function.

- Updates and reorganization of doc-strings and documentation

- Removed the non-working code related to mask support (try again later)

- Added 'meta' and 'keywords' attributes to BaseReader and added a Keywords
  class. This is a placeholder for future support of table metadata.

0.2.1
=====
Add support for reading CDS format files (header + data in one file)

0.2.0
=====
Updates to the way lines are processed and comments are handled, to support new formats.

- Splitter process_line and process_val and Inputter process_line are now object methods.
  This allows for settable parameters that control processing (e.g. continuation_char).

- Add BaseHeader and BaseData process_line methods that do comment and blank-line
  filtering as needed. Splitter process_line should no longer be used for this purpose.

- Add support for formats:
  - CommentedHeader (column def line begins with comment char)
  - DAOphot
  - Files with continuation lines via ContinuationLinesInputter class

- Add 'Inputter' as an option to get_reader() and read()

--------------------------------------------------------------------------------
/MANIFEST.in:
--------------------------------------------------------------------------------
include asciitable/*.py
include t/*
include t/cds/*/*
include t/vizier/*
include doc/*.rst
include CHANGES
include VERSION
include test/*.py
exclude t/*~

--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
PROJECT = asciitable
WWW = /proj/web-cxc-dmz/htdocs/contrib/$(PROJECT)

.PHONY: doc dist install

dist:
	rm -rf dist
	rm -f MANIFEST
	python setup.py sdist --format=gztar

pypi:
	rm -rf dist
	rm -f MANIFEST
	python setup.py sdist --format=zip upload
	python setup.py sdist --format=gztar upload

doc:
	cd doc; \
	make html

install: doc
	rsync -av doc/_build/html/ $(WWW)/

install_dev: doc dist
	rsync -av doc/_build/html/ $(WWW)/dev
	cp -p dist/$(PROJECT)-*.tar.gz $(WWW)/dev/downloads/
	cp -p dist/$(PROJECT)-*.tar.gz $(WWW)/dev/downloads/$(PROJECT).tar.gz

--------------------------------------------------------------------------------
/NOTES.release_howto:
--------------------------------------------------------------------------------
Release procedure
===================

Edit/develop in a new git branch
--------------------------------
git checkout -b <branch>
git push origin <branch>

Test
------
Coverage
^^^^^^^^^
(Using Python 2.7 on linux)
nosetests --with-coverage --cover-package=asciitable --cover-erase --cover-html

Linux
^^^^^^
/usr/bin/nosetests   # Python 2.4 on CentOS-5 system
set pyroot=$HOME/soft/ActivePython
$pyroot/2.6/bin/nosetests
$pyroot/2.7/bin/nosetests --with-doctest
$pyroot/2.7_bare/bin/nosetests
$pyroot/3.2/bin/nosetests

Mac / windows
^^^^^^^^^^^^^^
cd ~/git
[ git clone git@github.com:taldcroft/asciitable.git ]
cd asciitable
git checkout <branch>
git pull

Prepare for release
--------------------
- Update version in asciitable/version.py
- Update CHANGES
- Update setup.py
- Update README.rst
- Update index.rst
make doc

Make the source distribution and confirm the correct files
------------------------------------------------------------
make dist

Install the source distribution and test
--------------------------------------------------

cp dist/*.tar.gz /pool14/aldcroft/
cd /pool14/aldcroft
tar xvf asciitable-*.tar.gz
cd asciitable-*

/usr/bin/nosetests

more asciitable/version.py   # confirm __version__ !!

Install new source docs
---------------------------------------------
cd ~/git/asciitable
make install

- Confirm the site looks correct (version number, changes).
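As a quick sanity check that the installed package reports the new release
(a one-liner sketch; the __version__ attribute exists per the 0.6.0 changes):

python -c "import asciitable; print(asciitable.__version__)"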

Local git checkin, merge and github
-----------------------------------
git status
git commit -a
Insert text from CHANGES, starting with a single line "Release x.y.z" followed
by a blank line and then the changes.

git remote -v   # check origin
git push [--dry-run] origin <branch>

git checkout master
git fetch   # make sure nothing happened to master ??
git merge --no-ff <branch>
git tag -a <version>
Include CHANGES
git push [--dry-run] origin

Update PyPI
-----------
make pypi

- Download tarball and diff against the one in /pool14/aldcroft

Test install on mac:

pip install --upgrade asciitable
ipython
import asciitable
asciitable?

Astrolib SVN
------------
(Note: svn => /usr/local/bin/svn doesn't have http or https support)

cd /pool14/aldcroft
mkdir astrolib
cd astrolib
/usr/bin/svn checkout https://svn6.assembla.com/svn/astrolib/trunk/asciitable
rsync [--dry-run ] -av --size-only /pool14/aldcroft/asciitable-*/ asciitable/
cd asciitable
/usr/bin/svn add ...
/usr/bin/svn commit --username taldcroft
# Password is from "assembla" on LastPass, begins with L...
# Enter release commit message from git starting with:
asciitable <version>

Post announcements
-------------------
- Send mail to astropy@scipy.org bcc: pythonusers@head.cfa.harvard.edu
  Example::

    I'd like to announce the release of version 0.3.1 of asciitable, an
    extensible module for reading and writing ASCII tables. This release
    features the new capability to write ASCII tables using the same basic
    infrastructure and API as for reading.

    http://cxc.harvard.edu/contrib/asciitable/

    Other updates include:

    - Python 3 compatibility
    - Significant documentation updates
    - Improved test coverage
    - New Reader class to read in-memory tables

    Regards,
    Tom Aldcroft

* Post the same to comp.lang.python.announce and astropython.org
  http://groups.google.com/group/comp.lang.python.announce/topics?pli=1

--------------------------------------------------------------------------------
/README.rst:
--------------------------------------------------------------------------------
An extensible ASCII table reader and writer for Python 2 and 3.

Asciitable can read and write a wide range of ASCII table formats via built-in
Extension Reader Classes:

* `Basic`: basic table with customizable delimiters and header configurations
* `Cds`: `CDS format table <http://vizier.u-strasbg.fr/doc/catstd.htx>`_ (also Vizier and ApJ machine readable tables)
* `CommentedHeader`: column names given in a line that begins with the comment character
* `Daophot`: table from the IRAF DAOphot package
* `Ipac`: `IPAC format table <http://irsa.ipac.caltech.edu/applications/DDGEN/Doc/ipac_tbl.html>`_
* `Latex`: LaTeX tables (plain and AASTex)
* `Memory`: table already in memory (list of lists, dict of lists, etc)
* `NoHeader`: basic table with no header where columns are auto-named
* `Rdb`: tab-separated values with an extra line after the column definition line
* `Tab`: tab-separated values

At the top level asciitable looks like many other ASCII table interfaces
since it provides default read() and write() functions with long lists of
parameters to accommodate the many variations possible in commonly encountered
ASCII table formats.  Under the hood, however, asciitable is built on a
modular and extensible class structure.  The basic functionality required for
reading or writing a table is largely broken into independent base class
elements so that new formats can be accommodated by modifying the underlying
class methods as needed.

:Copyright: Smithsonian Astrophysical Observatory (2011)
:Author: Tom Aldcroft (aldcroft@head.cfa.harvard.edu)


--------------------------------------------------------------------------------
/asciitable/__init__.py:
--------------------------------------------------------------------------------
""" An extensible ASCII table reader and writer.

:Copyright: Smithsonian Astrophysical Observatory (2010)
:Author: Tom Aldcroft (aldcroft@head.cfa.harvard.edu)
"""
##
## Redistribution and use in source and binary forms, with or without
## modification, are permitted provided that the following conditions are met:
##     * Redistributions of source code must retain the above copyright
##       notice, this list of conditions and the following disclaimer.
##     * Redistributions in binary form must reproduce the above copyright
##       notice, this list of conditions and the following disclaimer in the
##       documentation and/or other materials provided with the distribution.
##     * Neither the name of the Smithsonian Astrophysical Observatory nor the
##       names of its contributors may be used to endorse or promote products
##       derived from this software without specific prior written permission.
##
## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
## DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
## DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

from asciitable.core import (has_numpy,
                             InconsistentTableError,
                             NoType, StrType, NumType, FloatType, IntType, AllType,
                             Column, Keyword,
                             BaseInputter, ContinuationLinesInputter,
                             BaseHeader,
                             BaseData,
                             BaseOutputter, NumpyOutputter, DictLikeNumpy,
                             BaseReader,
                             BaseSplitter, DefaultSplitter, WhitespaceSplitter,
                             convert_list, convert_numpy,
                             )
from asciitable.basic import (Basic, BasicReader,
                              Rdb, RdbReader,
                              Tab, TabReader,
                              NoHeader, NoHeaderReader,
                              CommentedHeader, CommentedHeaderReader)
from asciitable.cds import Cds, CdsReader
from asciitable.latex import Latex, LatexReader, AASTex, AASTexReader, latexdicts
from asciitable.ipac import Ipac, IpacReader
from asciitable.daophot import Daophot, DaophotReader
from asciitable.memory import Memory, MemoryReader
from asciitable.fixedwidth import (FixedWidth, FixedWidthNoHeader,
                                   FixedWidthTwoLine, FixedWidthSplitter,
                                   FixedWidthHeader, FixedWidthData)
from asciitable.ui import (set_guess, get_reader, read, get_writer, write)

from asciitable.version import version as __version__

--------------------------------------------------------------------------------
/asciitable/basic.py:
--------------------------------------------------------------------------------
"""Asciitable: an extensible ASCII table reader and writer.

basic.py:
  Basic table read / write functionality for simple character
  delimited files with various options for column header definition.

:Copyright: Smithsonian Astrophysical Observatory (2011)
:Author: Tom Aldcroft (aldcroft@head.cfa.harvard.edu)
"""

##
## Redistribution and use in source and binary forms, with or without
## modification, are permitted provided that the following conditions are met:
##     * Redistributions of source code must retain the above copyright
##       notice, this list of conditions and the following disclaimer.
##     * Redistributions in binary form must reproduce the above copyright
##       notice, this list of conditions and the following disclaimer in the
##       documentation and/or other materials provided with the distribution.
##     * Neither the name of the Smithsonian Astrophysical Observatory nor the
##       names of its contributors may be used to endorse or promote products
##       derived from this software without specific prior written permission.
##
## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
## DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
## DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

import re
import asciitable.core as core
from asciitable.core import io, next, izip, any

class Basic(core.BaseReader):
    """Read a character-delimited table with a single header line at the top
    followed by data lines to the end of the table.  Lines beginning with # as
    the first non-whitespace character are comments.  This reader is highly
    configurable.
    ::

        rdr = asciitable.get_reader(Reader=asciitable.Basic)
        rdr.header.splitter.delimiter = ' '
        rdr.data.splitter.delimiter = ' '
        rdr.header.start_line = 0
        rdr.data.start_line = 1
        rdr.data.end_line = None
        rdr.header.comment = r'\s*#'
        rdr.data.comment = r'\s*#'

    Example table::

        # Column definition is the first uncommented line
        # Default delimiter is the space character.
        apples oranges pears

        # Data starts after the header column definition, blank lines ignored
        1 2 3
        4 5 6
    """
    def __init__(self):
        core.BaseReader.__init__(self)
        self.header.splitter.delimiter = ' '
        self.data.splitter.delimiter = ' '
        self.header.start_line = 0
        self.data.start_line = 1
        self.header.comment = r'\s*#'
        self.header.write_comment = '# '
        self.data.comment = r'\s*#'
        self.data.write_comment = '# '

BasicReader = Basic

class NoHeader(BasicReader):
    """Read a table with no header line.  Columns are autonamed using
    header.auto_format which defaults to "col%d".  Otherwise this reader is
    the same as the :class:`Basic` class from which it is derived.  Example::

        # Table data
        1 2 "hello there"
        3 4 world
    """
    def __init__(self):
        BasicReader.__init__(self)
        self.header.start_line = None
        self.data.start_line = 0

NoHeaderReader = NoHeader

class CommentedHeaderHeader(core.BaseHeader):
    """Header class for which the column definition line starts with the
    comment character.  See the :class:`CommentedHeader` class for an example.
    """
    def process_lines(self, lines):
        """Return only lines that start with the comment regexp.  For these
        lines strip out the matching characters."""
        re_comment = re.compile(self.comment)
        for line in lines:
            match = re_comment.match(line)
            if match:
                yield line[match.end():]

    def write(self, lines):
        lines.append(self.write_comment + self.splitter.join([x.name for x in self.cols]))

class CommentedHeader(core.BaseReader):
    """Read a file where the column names are given in a line that begins with
    the header comment character.  `header_start` can be used to specify the
    line index of column names, and it can be a negative index (for example -1
    for the last commented line).  The default delimiter is the <space>
The default delimiter is the 114 | character.:: 115 | 116 | # col1 col2 col3 117 | # Comment line 118 | 1 2 3 119 | 4 5 6 120 | """ 121 | def __init__(self): 122 | core.BaseReader.__init__(self) 123 | self.header = CommentedHeaderHeader() 124 | self.header.data = self.data 125 | self.data.header = self.header 126 | self.header.splitter.delimiter = ' ' 127 | self.data.splitter.delimiter = ' ' 128 | self.header.start_line = 0 129 | self.data.start_line = 0 130 | self.header.comment = r'\s*#' 131 | self.header.write_comment = '# ' 132 | self.data.comment = r'\s*#' 133 | self.data.write_comment = '# ' 134 | 135 | CommentedHeaderReader = CommentedHeader 136 | 137 | class Tab(BasicReader): 138 | """Read a tab-separated file. Unlike the :class:`Basic` reader, whitespace is 139 | not stripped from the beginning and end of lines. By default whitespace is 140 | still stripped from the beginning and end of individual column values. 141 | 142 | Example:: 143 | 144 | col1 col2 col3 145 | # Comment line 146 | 1 2 5 147 | """ 148 | def __init__(self): 149 | BasicReader.__init__(self) 150 | self.header.splitter.delimiter = '\t' 151 | self.data.splitter.delimiter = '\t' 152 | # Don't strip line whitespace since that includes tabs 153 | self.header.splitter.process_line = None 154 | self.data.splitter.process_line = None 155 | # Don't strip data value whitespace since that is significant in TSV tables 156 | self.data.splitter.process_val = None 157 | self.data.splitter.skipinitialspace = False 158 | 159 | TabReader = Tab 160 | 161 | class Rdb(TabReader): 162 | """Read a tab-separated file with an extra line after the column definition 163 | line. The RDB format meets this definition. Example:: 164 | 165 | col1 col2 col3 166 | N S N 167 | 1 2 5 168 | 169 | In this reader the second line is just ignored. 170 | """ 171 | def __init__(self): 172 | TabReader.__init__(self) 173 | self.header = RdbHeader() 174 | self.header.start_line = 0 175 | self.header.comment = r'\s*#' 176 | self.header.write_comment = '# ' 177 | self.header.splitter.delimiter = '\t' 178 | self.header.splitter.process_line = None 179 | self.header.data = self.data 180 | self.data.header = self.header 181 | self.data.start_line = 2 182 | 183 | RdbReader = Rdb 184 | 185 | class RdbHeader(core.BaseHeader): 186 | col_type_map = {'n': core.NumType, 187 | 's': core.StrType} 188 | 189 | def get_type_map_key(self, col): 190 | return col.raw_type[-1] 191 | 192 | def get_cols(self, lines): 193 | """Initialize the header Column objects from the table ``lines``. 

        This is a specialized get_cols for the RDB type:
            Line 0: RDB col names
            Line 1: RDB col definitions
            Line 2+: RDB data rows

        :param lines: list of table lines
        :returns: None
        """
        header_lines = self.process_lines(lines)   # this is a generator
        header_vals_list = [hl for _, hl in zip(range(2), self.splitter(header_lines))]
        if len(header_vals_list) != 2:
            raise ValueError('RDB header requires 2 lines')
        self.names, raw_types = header_vals_list

        if len(self.names) != len(raw_types):
            raise ValueError('RDB header mismatch between number of column names and column types')

        if any(not re.match(r'\d*(N|S)$', x, re.IGNORECASE) for x in raw_types):
            raise ValueError('RDB type definitions do not all match [num](N|S): %s' % raw_types)

        self._set_cols_from_names()
        for col, raw_type in zip(self.cols, raw_types):
            col.raw_type = raw_type
            col.type = self.get_col_type(col)

    def write(self, lines):
        lines.append(self.splitter.join([x.name for x in self.cols]))
        rdb_types = []
        for col in self.cols:
            if issubclass(col.type, core.NumType):
                rdb_types.append('N')
            else:
                rdb_types.append('S')
        lines.append(self.splitter.join(rdb_types))

--------------------------------------------------------------------------------
/asciitable/cds.py:
--------------------------------------------------------------------------------
"""Asciitable: an extensible ASCII table reader and writer.

cds.py:
  Classes to read CDS / Vizier table format

:Copyright: Smithsonian Astrophysical Observatory (2011)
:Author: Tom Aldcroft (aldcroft@head.cfa.harvard.edu)
"""

##
## Redistribution and use in source and binary forms, with or without
## modification, are permitted provided that the following conditions are met:
##     * Redistributions of source code must retain the above copyright
##       notice, this list of conditions and the following disclaimer.
##     * Redistributions in binary form must reproduce the above copyright
##       notice, this list of conditions and the following disclaimer in the
##       documentation and/or other materials provided with the distribution.
##     * Neither the name of the Smithsonian Astrophysical Observatory nor the
##       names of its contributors may be used to endorse or promote products
##       derived from this software without specific prior written permission.
##
## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
## DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
## DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

import fnmatch
import itertools
import re

import asciitable.core as core
import asciitable.fixedwidth as fixedwidth

class CdsHeader(core.BaseHeader):
    col_type_map = {'e': core.FloatType,
                    'f': core.FloatType,
                    'i': core.IntType,
                    'a': core.StrType}

    def get_type_map_key(self, col):
        match = re.match(r'\d*(\S)', col.raw_type.lower())
        if not match:
            raise ValueError('Unrecognized CDS format "%s" for column "%s"' % (
                col.raw_type, col.name))
        return match.group(1)

    def __init__(self, readme=None):
        """Initialize ReadMe filename.

        :param readme: The ReadMe file to construct header from.
        :type readme: String

        CDS tables have their header information in a separate file
        named "ReadMe".  The ``get_cols`` method will read the contents
        of the ReadMe file given by ``self.readme`` and set the various
        properties needed to read the data file.  The data file name
        will be the ``table`` passed to the ``read`` method.
        """
        core.BaseHeader.__init__(self)
        self.readme = readme

    def get_cols(self, lines):
        """Initialize the header Column objects from the table ``lines`` for a CDS
        header.

        :param lines: list of table lines
        :returns: list of table Columns
        """
        # Read the header block for the table ``self.data.table_name`` from the
        # ReadMe file ``self.readme``.
        if self.readme and self.data.table_name:
            in_header = False
            f = open(self.readme, "r")
            # Header info is not in data lines but in a separate file.
            lines = []
            comment_lines = 0
            for line in f:
                line = line.strip()
                if in_header:
                    lines.append(line)
                    if line.startswith('------') or line.startswith('======='):
                        comment_lines += 1
                        if comment_lines == 3:
                            break
                else:
                    match = re.match(r'Byte-by-byte Description of file: (?P<name>.+)$',
                                     line, re.IGNORECASE)
                    if match:
                        # Split 'name' in case it contains multiple files
                        names = [s for s in re.split('[, ]+', match.group('name'))
                                 if s]
                        # Iterate on names to find if one matches the tablename
                        # including wildcards.
                        for pattern in names:
                            if fnmatch.fnmatch(self.data.table_name, pattern):
                                in_header = True
                                lines.append(line)
                                break

            else:
                raise core.InconsistentTableError("Can't find table {0} in {1}".format(
                    self.data.table_name, self.readme))
            f.close()

        for i_col_def, line in enumerate(lines):
            if re.match(r'Byte-by-byte Description', line, re.IGNORECASE):
                break

        re_col_def = re.compile(r"""\s*
                                    (?P<start> \d+ \s* -)? \s*
                                    (?P<end>   \d+)        \s+
                                    (?P<format> [\w.]+)    \s+
                                    (?P<units> \S+)        \s+
                                    (?P<name>  \S+)        \s+
                                    (?P<descr> \S.+)""",
                                re.VERBOSE)

        cols = []
        for i, line in enumerate(itertools.islice(lines, i_col_def+4, None)):
            if line.startswith('------') or line.startswith('======='):
                break
            match = re_col_def.match(line)
            if match:
                col = core.Column(name=match.group('name'), index=i)
                col.start = int(re.sub(r'[-\s]', '', match.group('start') or match.group('end'))) - 1
                col.end = int(match.group('end'))
                col.units = match.group('units')
                col.descr = match.group('descr')
                col.raw_type = match.group('format')
                col.type = self.get_col_type(col)

                match = re.match(r'\? (?P<equal> =)? (?P<nullval> \S*)', col.descr, re.VERBOSE)
                if match:
                    if issubclass(col.type, core.FloatType):
                        fillval = 'nan'
                    else:
                        fillval = '-999'
                    if match.group('nullval') == '':
                        col.null = ''
                    elif match.group('nullval') == '-':
                        col.null = '---'
                    else:
                        col.null = match.group('nullval')
                    self.data.fill_values.append((col.null, fillval, col.name))

                cols.append(col)
            else:  # could be a continuation of the previous col's description
                if cols:
                    cols[-1].descr += line.strip()
                else:
                    raise ValueError('Line "%s" not parsable as CDS header' % line)

        self.names = [x.name for x in cols]
        names = set(self.names)
        if self.include_names is not None:
            names.intersection_update(self.include_names)
        if self.exclude_names is not None:
            names.difference_update(self.exclude_names)

        self.cols = [x for x in cols if x.name in names]
        self.n_data_cols = len(self.cols)

        # Re-index the cols because the FixedWidthSplitter does NOT return the ignored
        # cols (as is the case for typical delimiter-based splitters)
        for i, col in enumerate(self.cols):
            col.index = i


class CdsData(core.BaseData):
    """CDS table data reader
    """
    splitter_class = fixedwidth.FixedWidthSplitter

    def process_lines(self, lines):
        """Skip over the CDS header by finding the last section delimiter"""
        # If the header has a ReadMe and data has a filename
        # then no need to skip, as the data lines do not have header
        # info.  The ``read`` method adds the table_name to the ``data``
        # attribute.
        if self.header.readme and self.table_name:
            return lines
        i_sections = [i for (i, x) in enumerate(lines)
                      if x.startswith('------') or x.startswith('=======')]
        if not i_sections:
            raise core.InconsistentTableError('No CDS section delimiter found')
        return lines[i_sections[-1]+1 : ]


class Cds(core.BaseReader):
    """Read a CDS format table: http://vizier.u-strasbg.fr/doc/catstd.htx.
    Example::

      Table: Spitzer-identified YSOs: Addendum
      ================================================================================
      Byte-by-byte Description of file: datafile3.txt
      --------------------------------------------------------------------------------
         Bytes Format Units  Label  Explanations
      --------------------------------------------------------------------------------
         1-  3 I3     ---    Index  Running identification number
         5-  6 I2     h      RAh    Hour of Right Ascension (J2000)
         8-  9 I2     min    RAm    Minute of Right Ascension (J2000)
        11- 15 F5.2   s      RAs    Second of Right Ascension (J2000)
      --------------------------------------------------------------------------------
        1 03 28 39.09

    **Basic usage**

    Use the ``asciitable.read()`` function as normal, with an optional ``readme``
    parameter indicating the CDS ReadMe file.  If not supplied it is assumed that
    the header information is at the top of the given table.  Examples::

      >>> import asciitable
      >>> table = asciitable.read("t/cds.dat")
      >>> table = asciitable.read("t/vizier/table1.dat", readme="t/vizier/ReadMe")
      >>> table = asciitable.read("t/cds/multi/lhs2065.dat", readme="t/cds/multi/ReadMe")
      >>> table = asciitable.read("t/cds/glob/lmxbrefs.dat", readme="t/cds/glob/ReadMe")

    **Using a reader object**

    When a ``Cds`` reader object is created with a ``readme`` parameter
    passed to it at initialization, then when the ``read`` method is
    executed with a table filename, the header information for the
    specified table is taken from the ``readme`` file.  An
    ``InconsistentTableError`` is raised if the ``readme`` file does not
    have header information for the given table.

    >>> readme = "t/vizier/ReadMe"
    >>> r = asciitable.get_reader(asciitable.Cds, readme=readme)
    >>> table = r.read("t/vizier/table1.dat")
    >>> # table5.dat has the same ReadMe file
    >>> table = r.read("t/vizier/table5.dat")

    If no ``readme`` parameter is specified, then the header
    information is assumed to be at the top of the given table.

    >>> r = asciitable.get_reader(asciitable.Cds)
    >>> table = r.read("t/cds.dat")
    >>> #The following gives InconsistentTableError, since no
    >>> #readme file was given and table1.dat does not have a header.
    >>> table = r.read("t/vizier/table1.dat")
    Traceback (most recent call last):
      ...
    InconsistentTableError: No CDS section delimiter found

    Caveats:

    * Format, Units, and Explanations are available in the ``Reader.cols`` attribute.
    * All of the other metadata defined by this format is ignored.

    Code contributions to enhance the parsing to include metadata in a Reader.meta
    attribute would be welcome.

    """
    def __init__(self, readme=None):
        core.BaseReader.__init__(self)
        self.header = CdsHeader(readme)
        self.data = CdsData()

    def write(self, table=None):
        """Not available for the Cds class (raises NotImplementedError)"""
        raise NotImplementedError

CdsReader = Cds

--------------------------------------------------------------------------------
/asciitable/daophot.py:
--------------------------------------------------------------------------------
"""Asciitable: an extensible ASCII table reader and writer.

daophot.py:
  Classes to read DAOphot table format

:Copyright: Smithsonian Astrophysical Observatory (2011)
:Author: Tom Aldcroft (aldcroft@head.cfa.harvard.edu)
"""

##
## Redistribution and use in source and binary forms, with or without
## modification, are permitted provided that the following conditions are met:
##     * Redistributions of source code must retain the above copyright
##       notice, this list of conditions and the following disclaimer.
##     * Redistributions in binary form must reproduce the above copyright
##       notice, this list of conditions and the following disclaimer in the
##       documentation and/or other materials provided with the distribution.
##     * Neither the name of the Smithsonian Astrophysical Observatory nor the
##       names of its contributors may be used to endorse or promote products
##       derived from this software without specific prior written permission.
##
## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
## DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
## DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

import re
import asciitable.core as core
import asciitable.basic as basic

class Daophot(core.BaseReader):
    """Read a DAOphot file.
    Example::

      #K MERGERAD   = INDEF                   scaleunit  %-23.7g
      #K IRAF = NOAO/IRAFV2.10EXPORT          version    %-23s
      #K USER = davis                         name       %-23s
      #K HOST = tucana                        computer   %-23s
      #
      #N ID    XCENTER   YCENTER   MAG         MERR          MSKY           NITER    \\
      #U ##    pixels    pixels    magnitudes  magnitudes    counts         ##       \\
      #F %-9d  %-10.3f   %-10.3f   %-12.3f     %-14.3f       %-15.7g        %-6d
      #
      #N         SHARPNESS   CHI         PIER  PERROR                                \\
      #U         ##          ##          ##    perrors                               \\
      #F         %-23.3f     %-12.3f     %-6d  %-13s
      #
      14       138.538   256.405   15.461      0.003         34.85955       4        \\
      -0.032      0.802       0     No_error

    The keywords defined in the #K records are available via the Daophot reader object::

      reader = asciitable.get_reader(Reader=asciitable.DaophotReader)
      data = reader.read('t/daophot.dat')
      for keyword in reader.keywords:
          print keyword.name, keyword.value, keyword.units, keyword.format

    """

    def __init__(self):
        core.BaseReader.__init__(self)
        self.header = DaophotHeader()
        self.inputter = core.ContinuationLinesInputter()
        self.data.splitter.delimiter = ' '
        self.data.start_line = 0
        self.data.comment = r'\s*#'

    def read(self, table):
        output = core.BaseReader.read(self, table)
        if core.has_numpy:
            reader = core._get_reader(Reader=basic.NoHeaderReader, comment=r'(?!#K)',
                                      names=['temp1', 'keyword', 'temp2', 'value', 'unit', 'format'])
            headerkeywords = reader.read(self.comment_lines)

            for line in headerkeywords:
                self.keywords.append(core.Keyword(line['keyword'], line['value'],
                                                  units=line['unit'], format=line['format']))
        self.table = output
        self.cols = self.header.cols

        return self.table

    def write(self, table=None):
        raise NotImplementedError

DaophotReader = Daophot

class DaophotHeader(core.BaseHeader):
    """Read the header from a file produced by the IRAF DAOphot routine."""
    def __init__(self):
        core.BaseHeader.__init__(self)
        self.comment = r'\s*#K'

    def get_cols(self, lines):
        """Initialize the header Column objects from the table ``lines`` for a DAOphot
        header.  The DAOphot header is specialized so that we just copy the entire BaseHeader
        get_cols routine and modify as needed.

        :param lines: list of table lines
        :returns: list of table Columns
        """

        self.names = []
        re_name_def = re.compile(r'#N([^#]+)#')
        for line in lines:
            if not line.startswith('#'):
                break  # End of header lines
            else:
                match = re_name_def.search(line)
                if match:
                    self.names.extend(match.group(1).split())

        if not self.names:
            raise core.InconsistentTableError('No column names found in DAOphot header')

        self._set_cols_from_names()

--------------------------------------------------------------------------------
/asciitable/fixedwidth.py:
--------------------------------------------------------------------------------
"""Asciitable: an extensible ASCII table reader and writer.

fixedwidth.py:
  Read or write a table with fixed width columns.

:Copyright: Smithsonian Astrophysical Observatory (2011)
:Author: Tom Aldcroft (aldcroft@head.cfa.harvard.edu)
"""

##
## Redistribution and use in source and binary forms, with or without
## modification, are permitted provided that the following conditions are met:
##     * Redistributions of source code must retain the above copyright
##       notice, this list of conditions and the following disclaimer.
##     * Redistributions in binary form must reproduce the above copyright
##       notice, this list of conditions and the following disclaimer in the
##       documentation and/or other materials provided with the distribution.
##     * Neither the name of the Smithsonian Astrophysical Observatory nor the
##       names of its contributors may be used to endorse or promote products
##       derived from this software without specific prior written permission.
##
## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
## DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
## DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

import re
import itertools
import asciitable.core as core
from asciitable.core import io, next, izip, any

class FixedWidthSplitter(core.BaseSplitter):
    """Split line based on fixed start and end positions for each ``col`` in
    ``self.cols``.

    This class requires that the Header class has defined ``col.start``
    and ``col.end`` for each column.  The reference to ``header.cols`` gets
    put in the splitter object by the base Reader.read() function just in time
    for splitting data lines by a ``data`` object.

    Note that the ``start`` and ``end`` positions are defined in the pythonic
    style so line[start:end] is the desired substring for a column.  This splitter
    class does not have a hook for ``process_lines`` since that is generally not
    useful for fixed-width input.
    """
    delimiter_pad = ''
    bookend = False

    def __call__(self, lines):
        for line in lines:
            vals = [line[x.start:x.end] for x in self.cols]
            if self.process_val:
                yield [self.process_val(x) for x in vals]
            else:
                yield vals

    def join(self, vals, widths):
        pad = self.delimiter_pad or ''
        delimiter = self.delimiter or ''
        padded_delim = pad + delimiter + pad
        if self.bookend:
            bookend_left = delimiter + pad
            bookend_right = pad + delimiter
        else:
            bookend_left = ''
            bookend_right = ''
        vals = [' ' * (width - len(val)) + val for val, width in zip(vals, widths)]
        return bookend_left + padded_delim.join(vals) + bookend_right


class FixedWidthHeader(core.BaseHeader):
    """Fixed width table header reader.

    The key settable class attributes are:

    :param auto_format: format string for auto-generating column names
    :param start_line: None, int, or a function of ``lines`` that returns None or int
    :param comment: regular expression for comment lines
    :param splitter_class: Splitter class for splitting data lines into columns
    :param names: list of names corresponding to each data column
    :param include_names: list of names to include in output (default=None selects all names)
    :param exclude_names: list of names to exclude from output (applied after ``include_names``)
    :param position_line: row index of line that specifies position (default = 1)
    :param position_char: character used to write the position line (default = "-")
    :param col_starts: list of start positions for each column (0-based counting)
    :param col_ends: list of end positions (inclusive) for each column
    :param delimiter_pad: padding around delimiter when writing (default = None)
    :param bookend: put the delimiter at start and end of line when writing (default = False)
    """

    position_line = None   # secondary header line position

    def get_line(self, lines, index):
        for i, line in enumerate(self.process_lines(lines)):
            if i == index:
                break
        else:  # No header line matching
            raise core.InconsistentTableError('No header line found in table')
        return line

    def get_cols(self, lines):
        """Initialize the header Column objects from the table ``lines``.

        Based on the previously set Header attributes find or create the column names.
        Sets ``self.cols`` with the list of Columns.  This list only includes the actual
        requested columns after filtering by the include_names and exclude_names
        attributes.  See ``self.names`` for the full list.

        :param lines: list of table lines
        :returns: None
        """

        # See "else" clause below for explanation of start_line and position_line
        start_line = core._get_line_index(self.start_line, self.process_lines(lines))
        position_line = core._get_line_index(self.position_line, self.process_lines(lines))

        # If start_line is None then there is no header line.  Column positions are
        # determined from the first data line and column names are either supplied by
        # the user or auto-generated.
        if start_line is None:
            if position_line is not None:
                raise ValueError("Cannot set position_line without also setting header_start")
            data_lines = self.data.process_lines(lines)
            if not data_lines:
                raise core.InconsistentTableError('No data lines found so cannot autogenerate column names')
            vals, starts, ends = self.get_fixedwidth_params(data_lines[0])

            if self.names is None:
                self.names = [self.auto_format % i for i in range(1, len(vals) + 1)]

        else:
            # This bit of code handles two cases:
            # start_line = <index> and position_line = None
            #    Single header line where that line is used to determine both the
            #    column positions and names.
            # start_line = <index1> and position_line = <index2>
            #    Two header lines where the first line defines the column names and
            #    the second line defines the column positions

            if position_line is not None:
                # Define self.col_starts and self.col_ends so that the call to
                # get_fixedwidth_params below will use those to find the header
                # column names.  Note that get_fixedwidth_params returns Python
                # slice col_ends but expects inclusive col_ends on input (for
                # a more intuitive user interface).
                line = self.get_line(lines, position_line)
                vals, self.col_starts, col_ends = self.get_fixedwidth_params(line)
                self.col_ends = [x - 1 for x in col_ends]

            # Get the header column names and column positions
            line = self.get_line(lines, start_line)
            vals, starts, ends = self.get_fixedwidth_params(line)

            # Possibly override the column names with user-supplied values
            if self.names is None:
                self.names = vals

        # Filter self.names using include_names and exclude_names, then create
        # the actual Column objects.
        self._set_cols_from_names()
        self.n_data_cols = len(self.cols)

        # Set column start and end positions.  Also re-index the cols because
        # the FixedWidthSplitter does NOT return the ignored cols (as is the
        # case for typical delimiter-based splitters)
        for i, col in enumerate(self.cols):
            col.start = starts[col.index]
            col.end = ends[col.index]
            col.index = i

    def get_fixedwidth_params(self, line):
        """Split ``line`` on the delimiter and determine column values and column
        start and end positions.  This might include null columns with zero length
        (e.g. for ``header row = "| col1 || col2 | col3 |"`` or
        ``header2_row = "----- ------- -----"``).  The null columns are
        stripped out.  Returns the values between delimiters and the corresponding
        start and end positions.

        :param line: input line
        :returns: (vals, starts, ends)
        """

        # If column positions are already specified then just use those, otherwise
        # figure out positions between delimiters.
        if self.col_starts is not None and self.col_ends is not None:
            starts = list(self.col_starts)   # could be any iterable, e.g. np.array
            ends = [x + 1 for x in self.col_ends]   # user supplies inclusive endpoint
            if len(starts) != len(ends):
                raise ValueError('Fixed width col_starts and col_ends must have the same length')
            vals = [line[start:end].strip() for start, end in zip(starts, ends)]
        else:
            # There might be a cleaner way to do this but it works...
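            # Walk the delimiter-split values, accumulating character positions:
            # each non-empty value closes a column ending at starts[-1] + len(val)
            # and opens the next start one character past it (skipping the
            # delimiter), while an empty value (two adjacent delimiters) just
            # advances the pending start position by one.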
            vals = line.split(self.splitter.delimiter)
            starts = [0]
            ends = []
            for val in vals:
                if val:
                    ends.append(starts[-1] + len(val))
                    starts.append(ends[-1] + 1)
                else:
                    starts[-1] += 1
            starts = starts[:-1]
            vals = [x.strip() for x in vals if x]
            if len(vals) != len(starts) or len(vals) != len(ends):
                raise core.InconsistentTableError('Error parsing fixed width header')

        return vals, starts, ends

    def write(self, lines):
        # Header line not written until data are formatted.  Until then it is
        # not known how wide each column will be for fixed width.
        pass


class FixedWidthData(core.BaseData):
    """Base table data reader.

    :param start_line: None, int, or a function of ``lines`` that returns None or int
    :param end_line: None, int, or a function of ``lines`` that returns None or int
    :param comment: Regular expression for comment lines
    :param splitter_class: Splitter class for splitting data lines into columns
    """

    splitter_class = FixedWidthSplitter

    def write(self, lines):
        formatters = []
        for col in self.cols:
            formatter = self.formats.get(col.name, self.default_formatter)
            if not hasattr(formatter, '__call__'):
                formatter = core._format_func(formatter)
            col.formatter = formatter

        vals_list = []
        # Col iterator does the formatting defined above so each val is a string
        # and vals is a tuple of strings for all columns of each row
        for vals in izip(*self.cols):
            vals_list.append(vals)

        for i, col in enumerate(self.cols):
            col.width = max([len(vals[i]) for vals in vals_list])
            if self.header.start_line is not None:
                col.width = max(col.width, len(col.name))

        widths = [col.width for col in self.cols]

        if self.header.start_line is not None:
            lines.append(self.splitter.join([col.name for col in self.cols], widths))

        if self.header.position_line is not None:
            char = self.header.position_char
            if len(char) != 1:
                raise ValueError('Position_char="%s" must be a single character' % char)
            vals = [char * col.width for col in self.cols]
            lines.append(self.splitter.join(vals, widths))

        for vals in vals_list:
            lines.append(self.splitter.join(vals, widths))

        return lines


class FixedWidth(core.BaseReader):
    """Read or write a fixed width table with a single header line that defines column
    names and positions.  Examples::

      # Bar delimiter in header and data

      | Col1 |   Col2      | Col3 |
      | 1.2  | hello there |    3 |
      | 2.4  | many words  |    7 |

      # Bar delimiter in header only

      Col1 |   Col2      | Col3
      1.2    hello there     3
      2.4    many words      7

      # No delimiter with column positions specified as input

      Col1       Col2Col3
       1.2hello there   3
       2.4many words    7

    See the :ref:`fixed_width_gallery` for specific usage examples.
292 | 293 | :param col_starts: list of start positions for each column (0-based counting) 294 | :param col_ends: list of end positions (inclusive) for each column 295 | :param delimiter_pad: padding around delimiter when writing (default = ' ') 296 | :param bookend: put the delimiter at start and end of line when writing (default = True) 297 | """ 298 | def __init__(self, col_starts=None, col_ends=None, delimiter_pad=' ', bookend=True): 299 | core.BaseReader.__init__(self) 300 | 301 | self.header = FixedWidthHeader() 302 | self.data = FixedWidthData() 303 | self.data.header = self.header 304 | self.header.data = self.data 305 | 306 | self.header.splitter.delimiter = '|' 307 | self.data.splitter.delimiter = '|' 308 | self.data.splitter.delimiter_pad = delimiter_pad 309 | self.data.splitter.bookend = bookend 310 | self.header.start_line = 0 311 | self.data.start_line = 1 312 | self.header.comment = r'\s*#' 313 | self.header.write_comment = '# ' 314 | self.data.comment = r'\s*#' 315 | self.data.write_comment = '# ' 316 | self.header.col_starts = col_starts 317 | self.header.col_ends = col_ends 318 | 319 | 320 | class FixedWidthNoHeader(FixedWidth): 321 | """Read or write a fixed width table which has no header line. Column 322 | names are either input (``names`` keyword) or auto-generated. Column 323 | positions are determined either by input (``col_starts`` and ``col_ends`` 324 | keywords) or by splitting the first data line. In the latter case a 325 | ``delimiter`` is required to split the data line. 326 | 327 | Examples:: 328 | 329 | # Bar delimiter in header and data 330 | 331 | | 1.2 | hello there | 3 | 332 | | 2.4 | many words | 7 | 333 | 334 | # Compact table having no delimiter and column positions specified as input 335 | 336 | 1.2hello there3 337 | 2.4many words 7 338 | 339 | This class is just a convenience wrapper around :class:`~asciitable.FixedWidth` 340 | but with ``header.start_line = None`` and ``data.start_line = 0``. 341 | 342 | See the :ref:`fixed_width_gallery` for specific usage examples. 343 | 344 | :param col_starts: list of start positions for each column (0-based counting) 345 | :param col_ends: list of end positions (inclusive) for each column 346 | :param delimiter_pad: padding around delimiter when writing (default = ' ') 347 | :param bookend: put the delimiter at start and end of line when writing (default = True) 348 | """ 349 | def __init__(self, col_starts=None, col_ends=None, delimiter_pad=' ', bookend=True): 350 | FixedWidth.__init__(self, col_starts, col_ends, 351 | delimiter_pad=delimiter_pad, bookend=bookend) 352 | self.header.start_line = None 353 | self.data.start_line = 0 354 | 355 | 356 | class FixedWidthTwoLine(FixedWidth): 357 | """Read or write a fixed width table which has two header lines. The first 358 | header line defines the column names and the second implicitly defines the 359 | column positions. Examples:: 360 | 361 | # Typical case with column extent defined by ---- under column names. 362 | 363 | col1 col2 <== header_start = 0 364 | ----- ------------ <== position_line = 1, position_char = "-" 365 | 1 bee flies <== data_start = 2 366 | 2 fish swims 367 | 368 | # Pretty-printed table 369 | 370 | +------+------------+ 371 | | Col1 | Col2 | 372 | +------+------------+ 373 | | 1.2 | "hello" | 374 | | 2.4 | there world| 375 | +------+------------+ 376 | 377 | See the :ref:`fixed_width_gallery` for specific usage examples.
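A minimal reading sketch (the table literal here is illustrative)::

    >>> import asciitable
    >>> table = ['col1  col2',
    ...          '----- ----------',
    ...          '1     bee flies',
    ...          '2     fish swims']
    >>> data = asciitable.read(table, Reader=asciitable.FixedWidthTwoLine)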
378 | 379 | :param position_line: row index of line that specifies position (default = 1) 380 | :param position_char: character used to write the position line (default = "-") 381 | :param delimiter_pad: padding around delimiter when writing (default = None) 382 | :param bookend: put the delimiter at start and end of line when writing (default = False) 383 | """ 384 | def __init__(self, position_line=1, position_char='-', delimiter_pad=None, bookend=False): 385 | FixedWidth.__init__(self, delimiter_pad=delimiter_pad, bookend=bookend) 386 | self.header.position_line = position_line 387 | self.header.position_char = position_char 388 | self.data.start_line = position_line + 1 389 | self.header.splitter.delimiter = ' ' 390 | self.data.splitter.delimiter = ' ' 391 | 392 | 393 | -------------------------------------------------------------------------------- /asciitable/ipac.py: -------------------------------------------------------------------------------- 1 | """Asciitable: an extensible ASCII table reader and writer. 2 | 3 | ipac.py: 4 | Classes to read IPAC table format 5 | 6 | :Copyright: Smithsonian Astrophysical Observatory (2011) 7 | :Author: Tom Aldcroft (aldcroft@head.cfa.harvard.edu) 8 | """ 9 | 10 | ## 11 | ## Redistribution and use in source and binary forms, with or without 12 | ## modification, are permitted provided that the following conditions are met: 13 | ## * Redistributions of source code must retain the above copyright 14 | ## notice, this list of conditions and the following disclaimer. 15 | ## * Redistributions in binary form must reproduce the above copyright 16 | ## notice, this list of conditions and the following disclaimer in the 17 | ## documentation and/or other materials provided with the distribution. 18 | ## * Neither the name of the Smithsonian Astrophysical Observatory nor the 19 | ## names of its contributors may be used to endorse or promote products 20 | ## derived from this software without specific prior written permission. 21 | ## 22 | ## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 23 | ## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 24 | ## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 25 | ## DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY 26 | ## DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 27 | ## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 28 | ## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 29 | ## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 30 | ## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 31 | ## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 32 | 33 | import asciitable.core as core 34 | import asciitable.fixedwidth as fixedwidth 35 | 36 | class Ipac(core.BaseReader): 37 | """Read an IPAC format table: 38 | http://irsa.ipac.caltech.edu/applications/DDGEN/Doc/ipac_tbl.html:: 39 | 40 | \\name=value 41 | \\ Comment 42 | | column1 | column2 | column3 | column4 | column5 | 43 | | double | double | int | double | char | 44 | | unit | unit | unit | unit | unit | 45 | | null | null | null | null | null | 46 | 2.0978 29.09056 73765 2.06000 B8IVpMnHg 47 | 48 | Or:: 49 | 50 | |-----ra---|----dec---|---sao---|------v---|----sptype--------| 51 | 2.09708 29.09056 73765 2.06000 B8IVpMnHg 52 | 53 | Caveats: 54 | 55 | * Data type, Units, and Null value specifications are ignored.
56 | * Keywords are ignored. 57 | * The IPAC spec requires the first two header lines but this reader only 58 | requires the initial column name definition line. 59 | 60 | Overcoming these limitations would not be difficult; code contributions 61 | are welcome from motivated users. 62 | """ 63 | def __init__(self): 64 | core.BaseReader.__init__(self) 65 | self.header = IpacHeader() 66 | self.data = IpacData() 67 | 68 | def write(self, table=None): 69 | """Not available for the Ipac class (raises NotImplementedError)""" 70 | raise NotImplementedError 71 | 72 | IpacReader = Ipac 73 | 74 | class IpacHeader(core.BaseHeader): 75 | """IPAC table header""" 76 | comment = r'\\' 77 | splitter_class = core.BaseSplitter 78 | col_type_map = {'int': core.IntType, 79 | 'long': core.IntType, 80 | 'double': core.FloatType, 81 | 'float': core.FloatType, 82 | 'real': core.FloatType, 83 | 'char': core.StrType, 84 | 'date': core.StrType, 85 | 'i': core.IntType, 86 | 'l': core.IntType, 87 | 'd': core.FloatType, 88 | 'f': core.FloatType, 89 | 'r': core.FloatType, 90 | 'c': core.StrType} 91 | 92 | def __init__(self): 93 | self.splitter = self.__class__.splitter_class() 94 | self.splitter.process_line = None 95 | self.splitter.process_val = None 96 | self.splitter.delimiter = '|' 97 | 98 | def process_lines(self, lines): 99 | """Generator to yield IPAC header lines, i.e. those starting and ending with 100 | delimiter character.""" 101 | delim = self.splitter.delimiter 102 | for line in lines: 103 | if line.startswith(delim) and line.endswith(delim): 104 | yield line.strip(delim) 105 | 106 | def get_cols(self, lines): 107 | """Initialize the header Column objects from the table ``lines``. 108 | 109 | Based on the previously set Header attributes find or create the column names. 110 | Sets ``self.cols`` with the list of Columns. This list only includes the actual 111 | requested columns after filtering by the include_names and exclude_names 112 | attributes. See ``self.names`` for the full list.
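For example (an illustrative sketch; the column names are hypothetical)::

    reader = asciitable.get_reader(Reader=asciitable.Ipac,
                                   include_names=['ra', 'dec'])
    data = reader.read('t/ipac.dat')
    # self.names then lists every column in the table, while self.cols
    # holds only the 'ra' and 'dec' Columns, re-indexed to 0 and 1.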
113 | 114 | :param lines: list of table lines 115 | :returns: list of table Columns 116 | """ 117 | header_lines = self.process_lines(lines) # generator returning valid header lines 118 | header_vals = [vals for vals in self.splitter(header_lines)] 119 | if len(header_vals) == 0: 120 | raise ValueError('At least one header line beginning and ending with delimiter required') 121 | elif len(header_vals) > 4: 122 | raise ValueError('More than four header lines were found') 123 | 124 | # Generate column definitions 125 | cols = [] 126 | start = 1 127 | for i, name in enumerate(header_vals[0]): 128 | col = core.Column(name=name.strip(' -'), index=i) 129 | col.start = start 130 | col.end = start + len(name) 131 | if len(header_vals) > 1: 132 | col.raw_type = header_vals[1][i].strip(' -') 133 | col.type = self.get_col_type(col) 134 | if len(header_vals) > 2: 135 | col.units = header_vals[2][i].strip() # Can't strip dashes here 136 | if len(header_vals) > 3: 137 | null = header_vals[3][i].strip() 138 | if null.lower() != 'null': 139 | col.null = null # Can't strip dashes here 140 | if issubclass(col.type, core.FloatType): 141 | fillval = 'nan' 142 | else: 143 | fillval = '-999' 144 | self.data.fill_values.append((col.null, fillval, col.name)) 145 | start = col.end + 1 146 | cols.append(col) 147 | 148 | # Standard column name filtering (include or exclude names) 149 | self.names = [x.name for x in cols] 150 | names = set(self.names) 151 | if self.include_names is not None: 152 | names.intersection_update(self.include_names) 153 | if self.exclude_names is not None: 154 | names.difference_update(self.exclude_names) 155 | 156 | # Generate final list of cols and re-index the cols because the 157 | # FixedWidthSplitter does NOT return the ignored cols (as is the 158 | # case for typical delimiter-based splitters) 159 | self.cols = [x for x in cols if x.name in names] 160 | for i, col in enumerate(self.cols): 161 | col.index = i 162 | 163 | class IpacData(core.BaseData): 164 | """IPAC table data reader""" 165 | splitter_class = fixedwidth.FixedWidthSplitter 166 | comment = r'[|\\]' 167 | 168 | -------------------------------------------------------------------------------- /asciitable/latex.py: -------------------------------------------------------------------------------- 1 | """Asciitable: an extensible ASCII table reader and writer. 2 | 3 | latex.py: 4 | Classes to read and write LaTeX tables 5 | 6 | :Copyright: Smithsonian Astrophysical Observatory (2011) 7 | :Author: Tom Aldcroft (aldcroft@head.cfa.harvard.edu) 8 | """ 9 | 10 | ## 11 | ## Redistribution and use in source and binary forms, with or without 12 | ## modification, are permitted provided that the following conditions are met: 13 | ## * Redistributions of source code must retain the above copyright 14 | ## notice, this list of conditions and the following disclaimer. 15 | ## * Redistributions in binary form must reproduce the above copyright 16 | ## notice, this list of conditions and the following disclaimer in the 17 | ## documentation and/or other materials provided with the distribution. 18 | ## * Neither the name of the Smithsonian Astrophysical Observatory nor the 19 | ## names of its contributors may be used to endorse or promote products 20 | ## derived from this software without specific prior written permission. 
21 | ## 22 | ## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 23 | ## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 24 | ## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 25 | ## DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY 26 | ## DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 27 | ## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 28 | ## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 29 | ## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 30 | ## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 31 | ## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 32 | 33 | import re 34 | import asciitable.core as core 35 | 36 | latexdicts = {'AA': {'tabletype': 'table', 37 | 'header_start': r'\hline \hline', 'header_end': r'\hline', 38 | 'data_end': r'\hline'}, 39 | 'doublelines': {'tabletype': 'table', 40 | 'header_start': r'\hline \hline', 'header_end': r'\hline\hline', 41 | 'data_end': r'\hline\hline'}, 42 | 'template': {'tabletype': 'tabletype', 'caption': 'caption', 43 | 'col_align': 'col_align', 'preamble': 'preamble', 'header_start': 'header_start', 44 | 'header_end': 'header_end', 'data_start': 'data_start', 45 | 'data_end': 'data_end', 'tablefoot': 'tablefoot', 'units': {'col1': 'unit of col1', 'col2': 'unit of col2'}} 46 | } 47 | 48 | def add_dictval_to_list(adict, key, alist): 49 | '''add a value from a dictionary to a list 50 | 51 | :param adict: dictionary 52 | :param key: key of value 53 | :param alist: list where value should be added 54 | ''' 55 | if key in adict.keys(): 56 | if type(adict[key]) == str: 57 | alist.append(adict[key]) 58 | else: 59 | alist.extend(adict[key]) 60 | 61 | def find_latex_line(lines, latex): 62 | '''Find the first line which matches a pattern 63 | 64 | :param lines: list of strings 65 | :param latex: search pattern 66 | :returns: line number or None, if no match was found 67 | ''' 68 | re_string = re.compile(latex.replace('\\', '\\\\')) 69 | for i,line in enumerate(lines): 70 | if re_string.match(line): 71 | return i 72 | else: 73 | return None 74 | 75 | 76 | class LatexHeader(core.BaseHeader): 77 | header_start = r'\begin{tabular}' 78 | 79 | def start_line(self, lines): 80 | line = find_latex_line(lines, self.header_start) 81 | if line is not None: 82 | return line + 1 83 | else: 84 | return None 85 | 86 | def write(self, lines): 87 | if not 'col_align' in self.latex.keys(): 88 | self.latex['col_align'] = len(self.cols) * 'c' 89 | 90 | lines.append(r'\begin{' + self.latex['tabletype'] + r'}') 91 | add_dictval_to_list(self.latex, 'preamble', lines) 92 | if 'caption' in self.latex.keys(): 93 | lines.append(r'\caption{' + self.latex['caption'] +'}') 94 | lines.append(self.header_start + r'{' + self.latex['col_align'] + r'}') 95 | add_dictval_to_list(self.latex, 'header_start', lines) 96 | lines.append(self.splitter.join([x.name for x in self.cols])) 97 | if 'units' in self.latex.keys(): 98 | lines.append(self.splitter.join([self.latex['units'].get(x.name, ' ') for x in self.cols])) 99 | add_dictval_to_list(self.latex, 'header_end', lines) 100 | 101 | 102 | 103 | class LatexData(core.BaseData): 104 | data_start = None 105 | data_end = r'\end{tabular}' 106 | 107 | def start_line(self, lines): 108 | if self.data_start: 109 | return find_latex_line(lines, self.data_start) 110 | else: 111 | return self.header.start_line(lines) + 1 112 |
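# Illustrative sketch of how the table boundaries are located (hypothetical input):
#     lines = [r'\begin{tabular}{cc}', r'a & b \\', r'1 & 2 \\', r'\end{tabular}']
# find_latex_line(lines, self.header.header_start) returns 0, so the header
# names are read from line 1 and, since data_start is None by default, the
# data is taken to start at line 2.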
113 | def end_line(self, lines): 114 | if self.data_end: 115 | return find_latex_line(lines, self.data_end) 116 | else: 117 | return None 118 | 119 | def write(self, lines): 120 | add_dictval_to_list(self.latex, 'data_start', lines) 121 | core.BaseData.write(self, lines) 122 | add_dictval_to_list(self.latex, 'data_end', lines) 123 | lines.append(self.data_end) 124 | add_dictval_to_list(self.latex, 'tablefoot', lines) 125 | lines.append(r'\end{' + self.latex['tabletype'] + '}') 126 | 127 | 128 | 129 | class LatexSplitter(core.BaseSplitter): 130 | '''Split LaTeX table data. Default delimiter is `&`. 131 | ''' 132 | delimiter = '&' 133 | 134 | def process_line(self, line): 135 | """Remove whitespace at the beginning or end of line. Also remove 136 | \\ at end of line""" 137 | line = line.split('%')[0] 138 | line = line.strip() 139 | if line[-2:] == r'\\': 140 | line = line.strip(r'\\') 141 | else: 142 | raise core.InconsistentTableError(r'Lines in LaTeX table have to end with \\') 143 | return line 144 | 145 | def process_val(self, val): 146 | """Remove whitespace and {} at the beginning or end of value.""" 147 | val = val.strip() 148 | if val and (val[0] == '{') and (val[-1] == '}'): 149 | val = val[1:-1] 150 | return val 151 | 152 | def join(self, vals): 153 | '''Join values together and add a few extra spaces for readability''' 154 | delimiter = ' ' + self.delimiter + ' ' 155 | return delimiter.join(str(x) for x in vals) + r' \\' 156 | 157 | class Latex(core.BaseReader): 158 | '''Write and read LaTeX tables. 159 | 160 | This class implements some LaTeX specific commands. 161 | Its main purpose is to write out a table in a form that LaTeX 162 | can compile. It is beyond the scope of this class to implement every possible 163 | LaTeX command; instead the focus is on generating syntactically valid LaTeX 164 | tables. 165 | This class can also read simple LaTeX tables (one line per table row, 166 | no ``\multicolumn`` or similar constructs); specifically, it can read the 167 | tables that it writes. 168 | 169 | When reading a LaTeX table, the following keywords are accepted: 170 | 171 | **ignore_latex_commands** : 172 | Lines starting with these LaTeX commands will be treated as comments (i.e. ignored). 173 | 174 | When writing a LaTeX table, some keywords can be used to customize the format. 175 | Care has to be taken here, because Python interprets ``\\`` in a string as an escape character. 176 | In order to pass this to the output either format your strings as raw strings with the ``r`` specifier 177 | or use a double ``\\\\``. 178 | Examples:: 179 | 180 | caption = r'My table \label{mytable}' 181 | caption = 'My table \\\\label{mytable}' 182 | 183 | **latexdict** : Dictionary of extra parameters for the LaTeX output 184 | * tabletype : used for first and last line of table. 185 | The default is ``\\begin{table}``. 186 | The following would generate a table that spans the whole page in a two-column document:: 187 | 188 | asciitable.write(data, sys.stdout, Writer = asciitable.Latex, 189 | latexdict = {'tabletype': 'table*'}) 190 | 191 | * col_align : Alignment of columns 192 | If not present all columns will be centered. 193 | 194 | * caption : Table caption (string or list of strings) 195 | This will appear above the table as it is the standard in many scientific publications.
196 | If you prefer a caption below the table, just write the full LaTeX command as 197 | ``latexdict['tablefoot'] = r'\caption{My table}'`` 198 | 199 | * preamble, header_start, header_end, data_start, data_end, tablefoot: Pure LaTeX 200 | Each one can be a string or a list of strings. These strings will be inserted into the table 201 | without any further processing. See the examples below. 202 | * units : dictionary of strings 203 | Keys in this dictionary should be names of columns. If present, 204 | a line in the LaTeX table directly below the column names is 205 | added, which contains the values of the dictionary. Example:: 206 | 207 | import asciitable 208 | import asciitable.latex 209 | import sys 210 | data = {'name': ['bike', 'car'], 'mass': [75,1200], 'speed': [10, 130]} 211 | asciitable.write(data, sys.stdout, Writer = asciitable.Latex, 212 | latexdict = {'units': {'mass': 'kg', 'speed': 'km/h'}}) 213 | 214 | If the column has no entry in the `units` dictionary, it defaults 215 | to `' '`. 216 | 217 | Run the following code to see where each element of the dictionary is inserted in the 218 | LaTeX table:: 219 | 220 | import asciitable 221 | import asciitable.latex 222 | import sys 223 | data = {'cola': [1,2], 'colb': [3,4]} 224 | asciitable.write(data, sys.stdout, Writer = asciitable.Latex, 225 | latexdict = asciitable.latex.latexdicts['template']) 226 | 227 | Some table styles are predefined in the dictionary ``asciitable.latex.latexdicts``. The following generates 228 | a table in the style preferred by A&A and some other journals:: 229 | 230 | asciitable.write(data, sys.stdout, Writer = asciitable.Latex, 231 | latexdict = asciitable.latex.latexdicts['AA']) 232 | 233 | As an example, this generates a table that spans all columns and is centered on the page:: 234 | 235 | asciitable.write(data, sys.stdout, Writer = asciitable.Latex, 236 | col_align = '|lr|', 237 | latexdict = {'preamble': r'\begin{center}', 'tablefoot': r'\end{center}', 238 | 'tabletype': 'table*'}) 239 | 240 | **caption** : Set table caption 241 | Shorthand for:: 242 | 243 | latexdict['caption'] = caption 244 | 245 | **col_align** : Set the column alignment. 246 | If not present this will be auto-generated for centered columns.
Shorthand for:: 247 | 248 | latexdict['col_align'] = col_align 249 | 250 | ''' 251 | 252 | def __init__(self, ignore_latex_commands = ['hline', 'vspace', 'tableline'], latexdict = {}, caption ='', col_align = None): 253 | 254 | core.BaseReader.__init__(self) 255 | self.header = LatexHeader() 256 | self.data = LatexData() 257 | 258 | self.header.splitter = LatexSplitter() 259 | self.data.splitter = LatexSplitter() 260 | self.data.header = self.header 261 | self.header.data = self.data 262 | self.latex = {} 263 | self.latex['tabletype'] = 'table' 264 | # The latex dict drives the format of the table and needs to be shared 265 | # with data and header 266 | self.header.latex = self.latex 267 | self.data.latex = self.latex 268 | self.latex['tabletype'] = 'table' 269 | self.latex.update(latexdict) 270 | if caption: self.latex['caption'] = caption 271 | if col_align: self.latex['col_align'] = col_align 272 | 273 | self.ignore_latex_commands = ignore_latex_commands 274 | self.header.comment = '%|' + '|'.join([r'\\' + command for command in self.ignore_latex_commands]) 275 | self.data.comment = self.header.comment 276 | 277 | 278 | def write(self, table=None): 279 | self.header.start_line = None 280 | self.data.start_line = None 281 | return core.BaseReader.write(self, table=table) 282 | 283 | 284 | LatexReader = Latex 285 | 286 | 287 | 288 | class AASTexHeader(LatexHeader): 289 | '''In a `deluxetable` some header keywords differ from standard LaTeX. 290 | 291 | This header is modified to take that into account. 292 | ''' 293 | header_start = r'\tablehead' 294 | 295 | def start_line(self, lines): 296 | return find_latex_line(lines, r'\tablehead') 297 | 298 | def write(self, lines): 299 | if not 'col_align' in self.latex.keys(): 300 | self.latex['col_align'] = len(self.cols) * 'c' 301 | 302 | lines.append(r'\begin{' + self.latex['tabletype'] + r'}{' + self.latex['col_align'] + r'}') 303 | add_dictval_to_list(self.latex, 'preamble', lines) 304 | if 'caption' in self.latex.keys(): 305 | lines.append(r'\tablecaption{' + self.latex['caption'] +'}') 306 | tablehead = ' & '.join([r'\colhead{' + x.name + '}' for x in self.cols]) 307 | if 'units' in self.latex.keys(): 308 | tablehead += r'\\ ' + (self.splitter.join([ self.latex['units'].get(x.name, ' ') for x in self.cols])) 309 | lines.append(r'\tablehead{' + tablehead + '}') 310 | 311 | 312 | class AASTexData(LatexData): 313 | '''In a `deluxetable` the data is enclosed in `\startdata` and `\enddata` 314 | ''' 315 | data_start = r'\startdata' 316 | data_end = r'\enddata' 317 | 318 | def start_line(self, lines): 319 | return find_latex_line(lines, self.data_start) + 1 320 | 321 | def write(self, lines): 322 | lines.append(self.data_start) 323 | core.BaseData.write(self, lines) 324 | lines.append(self.data_end) 325 | add_dictval_to_list(self.latex, 'tablefoot', lines) 326 | lines.append(r'\end{' + self.latex['tabletype'] + r'}') 327 | 328 | class AASTexHeaderSplitter(LatexSplitter): 329 | '''extract column names from a `deluxetable` 330 | 331 | This splitter expects the following LaTeX code **in a single line**: 332 | 333 | \tablehead{\colhead{col1} & ... 
& \colhead{coln}} 334 | ''' 335 | def process_line(self, line): 336 | """extract column names from tablehead 337 | """ 338 | line = line.split('%')[0] 339 | line = line.replace(r'\tablehead','') 340 | line = line.strip() 341 | if (line[0] =='{') and (line[-1] == '}'): 342 | line = line[1:-1] 343 | else: 344 | raise core.InconsistentTableError(r'\tablehead is missing {}') 345 | return line.replace(r'\colhead','') 346 | 347 | def join(self, vals): 348 | return ' & '.join([r'\colhead{' + str(x) + '}' for x in vals]) 349 | 350 | 351 | 352 | class AASTex(Latex): 353 | '''Write and read AASTeX tables. 354 | 355 | This class implements some AASTeX specific commands. 356 | AASTeX is used for the AAS (American Astronomical Society) 357 | publications like ApJ, ApJL and AJ. 358 | 359 | It derives from :class:`~asciitable.Latex` and accepts the same keywords (see :class:`~asciitable.Latex` for documentation). 360 | However, the keywords ``header_start``, ``header_end``, ``data_start`` and ``data_end`` in 361 | ``latexdict`` have no effect. 362 | ''' 363 | 364 | def __init__(self, **kwargs): 365 | Latex.__init__(self, **kwargs) 366 | self.header = AASTexHeader() 367 | self.data = AASTexData() 368 | self.header.comment = '%|' + '|'.join([r'\\' + command for command in self.ignore_latex_commands]) 369 | self.header.splitter = AASTexHeaderSplitter() 370 | self.data.splitter = LatexSplitter() 371 | self.data.comment = self.header.comment 372 | self.data.header = self.header 373 | self.header.data = self.data 374 | self.latex['tabletype'] = 'deluxetable' 375 | # The latex dict drives the format of the table and needs to be shared 376 | # with data and header 377 | self.header.latex = self.latex 378 | self.data.latex = self.latex 379 | 380 | AASTexReader = AASTex 381 | -------------------------------------------------------------------------------- /asciitable/memory.py: -------------------------------------------------------------------------------- 1 | """Asciitable: an extensible ASCII table reader and writer. 2 | 3 | memory.py: 4 | Classes to read table from in-memory data structure into 5 | asciitable format. This is used for writing tables. 6 | 7 | :Copyright: Smithsonian Astrophysical Observatory (2011) 8 | :Author: Tom Aldcroft (aldcroft@head.cfa.harvard.edu) 9 | """ 10 | 11 | ## 12 | ## Redistribution and use in source and binary forms, with or without 13 | ## modification, are permitted provided that the following conditions are met: 14 | ## * Redistributions of source code must retain the above copyright 15 | ## notice, this list of conditions and the following disclaimer. 16 | ## * Redistributions in binary form must reproduce the above copyright 17 | ## notice, this list of conditions and the following disclaimer in the 18 | ## documentation and/or other materials provided with the distribution. 19 | ## * Neither the name of the Smithsonian Astrophysical Observatory nor the 20 | ## names of its contributors may be used to endorse or promote products 21 | ## derived from this software without specific prior written permission. 22 | ## 23 | ## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 24 | ## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 25 | ## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 26 | ## DISCLAIMED. 
IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY 27 | ## DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 28 | ## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 29 | ## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 30 | ## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 31 | ## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 32 | ## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 33 | 34 | import asciitable.core as core 35 | import asciitable.basic as basic 36 | from asciitable.core import io, next, izip, any 37 | if core.has_numpy: 38 | import numpy 39 | 40 | class Memory(core.BaseReader): 41 | """Read a table from a data object in memory. Several input data formats are supported: 42 | 43 | **Output of asciitable.read()**:: 44 | 45 | table = asciitable.get_reader(Reader=asciitable.Daophot) 46 | data = table.read('t/daophot.dat') 47 | mem_data_from_table = asciitable.read(table, Reader=asciitable.Memory) 48 | mem_data_from_data = asciitable.read(data, Reader=asciitable.Memory) 49 | 50 | **Numpy structured array**:: 51 | 52 | data = numpy.zeros((2,), dtype=[('col1','i4'), ('col2','f4'), ('col3', 'a10')]) 53 | data[:] = [(1, 2., 'Hello'), (2, 3., "World")] 54 | mem_data = asciitable.read(data, Reader=asciitable.Memory) 55 | 56 | **Numpy masked structured array**:: 57 | 58 | data = numpy.ma.zeros((2,), dtype=[('col1','i4'), ('col2','f4'), ('col3', 'a10')]) 59 | data[:] = [(1, 2., 'Hello'), (2, 3., "World")] 60 | data['col2'] = numpy.ma.masked 61 | mem_data = asciitable.read(data, Reader=asciitable.Memory) 62 | 63 | In the current version all masked values will be converted to nan. 64 | 65 | **Sequence of sequences**:: 66 | 67 | data = [[1, 2, 3 ], 68 | [4, 5.2, 6.1 ], 69 | [8, 9, 'hello']] 70 | mem_data = asciitable.read(data, Reader=asciitable.Memory, names=('c1','c2','c3')) 71 | 72 | **Dict of sequences**:: 73 | 74 | data = {'c1': [1, 2, 3], 75 | 'c2': [4, 5.2, 6.1], 76 | 'c3': [8, 9, 'hello']} 77 | mem_data = asciitable.read(data, Reader=asciitable.Memory, names=('c1','c2','c3')) 78 | 79 | """ 80 | def __init__(self): 81 | self.header = MemoryHeader() 82 | self.data = MemoryData() 83 | self.inputter = MemoryInputter() 84 | self.outputter = core.BaseOutputter() 85 | self.meta = {} # Placeholder for storing table metadata 86 | self.keywords = [] # Table Keywords 87 | 88 | def read(self, table): 89 | self.data.header = self.header 90 | self.header.data = self.data 91 | 92 | self.lines = self.inputter.get_lines(table, self.header.names) 93 | self.data.get_data_lines(self.lines) 94 | self.header.get_cols(self.lines) 95 | cols = self.header.cols # header.cols corresponds to *output* columns requested 96 | n_data_cols = len(self.header.names) # header.names corresponds to *all* header columns in table 97 | self.data.splitter.cols = cols 98 | 99 | for i, str_vals in enumerate(self.data.get_str_vals()): 100 | if len(list(str_vals)) != n_data_cols: 101 | errmsg = ('Number of header columns (%d) inconsistent with ' 102 | 'data columns (%d) at data line %d\n' 103 | 'Header values: %s\n' 104 | 'Data values: %s' % (n_data_cols, len(str_vals), i, 105 | [x.name for x in cols], str_vals)) 106 | raise core.InconsistentTableError(errmsg) 107 | 108 | for col in cols: 109 | col.str_vals.append(str_vals[col.index]) 110 | 111 | self.data.masks(cols) 112 | self.cols = cols 113 | if hasattr(table, 'keywords'): 114 | self.keywords = table.keywords 115 | 116 |
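# Note: the default converters installed below are identity functions. The
# Memory reader handles values that are already typed Python or NumPy objects,
# so no string-to-type conversion should happen here (e.g., illustratively, a
# column holding [1, 2, 3] passes through unchanged instead of being parsed
# from strings).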
self.outputter.default_converters = [((lambda vals: vals), core.IntType), 117 | ((lambda vals: vals), core.FloatType), 118 | ((lambda vals: vals), core.StrType)] 119 | self.table = self.outputter(cols) 120 | self.cols = self.header.cols 121 | 122 | return self.table 123 | 124 | def write(self, table=None): 125 | """Not available for the Memory class (raises NotImplementedError)""" 126 | raise NotImplementedError 127 | 128 | MemoryReader = Memory 129 | 130 | class MemoryInputter(core.BaseInputter): 131 | """Get the lines from the table input and return an iterable object that contains the data lines. 132 | 133 | The input table can be one of: 134 | 135 | * asciitable Reader object 136 | * NumPy structured array 137 | * List of lists 138 | * Dict of lists 139 | """ 140 | def get_lines(self, table, names): 141 | """Get the lines from the ``table`` input. 142 | 143 | :param table: table input 144 | :param names: list of column names (only used for dict of lists to set column order) 145 | :returns: list of lines 146 | 147 | """ 148 | try: 149 | # If table is dict-like this will return the first key. 150 | # If table is list-like this will return the first row. 151 | first_row_or_key = next(iter(table)) 152 | except TypeError: 153 | # Not iterable, is it an asciitable Reader instance? 154 | if isinstance(table, core.BaseReader): 155 | lines = table.table 156 | else: 157 | # None of the possible choices so raise exception 158 | raise TypeError('Input table must be iterable or else be a Reader object') 159 | else: 160 | # table is iterable, now see if it is dict-like or list-like 161 | try: 162 | # If first_row_or_key is a row (in the case that table is 163 | # list-like) then this will raise exception 164 | table[first_row_or_key] 165 | except (TypeError, IndexError, ValueError): 166 | # Table is list-like (python list-of-lists or numpy recarray) 167 | lines = table 168 | else: 169 | # Table is dict-like. Turn this into a DictLikeNumpy that has 170 | # an API similar to a numpy recarray. 171 | lines = core.DictLikeNumpy(table) 172 | if names is None: 173 | lines.dtype.names = sorted(lines.keys()) 174 | else: 175 | lines.dtype.names = names 176 | 177 | # ``lines`` could now be one of the following iterable objects: 178 | # - NumPy recarray 179 | # - DictLikeNumpy object 180 | # - Python list of lists 181 | return lines 182 | 183 | def get_val_type(val): 184 | """Get the asciitable data type corresponding to ``val``. Try a series 185 | of possibilities, organized roughly by expected frequencies of types in 186 | data tables. 187 | """ 188 | # Try native python types 189 | if isinstance(val, float): 190 | return core.FloatType 191 | elif isinstance(val, int): 192 | return core.IntType 193 | elif isinstance(val, str): 194 | return core.StrType 195 | elif isinstance(val, core.long): 196 | return core.IntType 197 | elif isinstance(val, core.unicode): 198 | return core.StrType 199 | 200 | # Not a native Python type so try a NumPy type 201 | try: 202 | type_name = val.dtype.name 203 | except AttributeError: 204 | pass 205 | else: 206 | if 'int' in type_name: 207 | return core.IntType 208 | elif 'float' in type_name: 209 | return core.FloatType 210 | elif 'string' in type_name: 211 | return core.StrType 212 | 213 | # Nothing matched 214 | raise TypeError("Memory: could not infer type for data value '%s'" % val) 215 | 216 | def get_lowest_type(type_set): 217 | """Return the lowest common denominator among a set of asciitable Types, 218 | in order StrType, FloatType, IntType. 
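For example (illustrative)::

    get_lowest_type(set([core.IntType, core.FloatType]))   # -> core.FloatType
    get_lowest_type(set([core.StrType, core.IntType]))     # -> core.StrType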
219 | """ 220 | if core.StrType in type_set: 221 | return core.StrType 222 | elif core.FloatType in type_set: 223 | return core.FloatType 224 | elif core.IntType in type_set: 225 | return core.IntType 226 | 227 | raise ValueError("Type_set '%s' does not have expected values" % type_set) 228 | 229 | 230 | class MemoryHeader(core.BaseHeader): 231 | """Memory table header reader""" 232 | def __init__(self): 233 | pass 234 | 235 | def get_cols(self, lines): 236 | """Initialize the header Column objects from the table ``lines``. 237 | 238 | Based on the previously set Header attributes find or create the column names. 239 | Sets ``self.cols`` with the list of Columns. This list only includes the actual 240 | requested columns after filtering by the include_names and exclude_names 241 | attributes. See ``self.names`` for the full list. 242 | 243 | :param lines: list of table lines 244 | :returns: list of table Columns 245 | """ 246 | 247 | if self.names is None: 248 | # No column names supplied so first try to get them from NumPy structured array 249 | try: 250 | self.names = lines.dtype.names 251 | except AttributeError: 252 | # Still no col names available so auto-generate them 253 | try: 254 | first_data_vals = next(iter(lines)) 255 | except StopIteration: 256 | raise core.InconsistentTableError( 257 | 'No data lines found so cannot autogenerate column names') 258 | n_data_cols = len(first_data_vals) 259 | self.names = [self.auto_format % i for i in range(1, n_data_cols+1)] 260 | 261 | self._set_cols_from_names() 262 | 263 | # ``lines`` could be one of: NumPy recarray, DictLikeNumpy obj, Python 264 | # list of lists. If NumPy recarray then set col.type accordingly. In 265 | # the other two cases convert the data values to strings so the usual 266 | # data converter processing will get the correct type. 267 | if core.has_numpy and isinstance(lines, numpy.ndarray): 268 | for col in self.cols: 269 | type_name = lines[col.name].dtype.name 270 | if 'int' in type_name: 271 | col.type = core.IntType 272 | elif 'float' in type_name: 273 | col.type = core.FloatType 274 | elif 'str' in type_name: 275 | col.type = core.StrType 276 | else: 277 | # lines is a list of lists or DictLikeNumpy. 278 | col_types = {} 279 | col_indexes = [col.index for col in self.cols] 280 | for vals in lines: 281 | for col_index in col_indexes: 282 | val = vals[col_index] 283 | col_type_set = col_types.setdefault(col_index, set()) 284 | col_type_set.add(get_val_type(val)) 285 | for col in self.cols: 286 | col.type = get_lowest_type(col_types[col.index]) 287 | 288 | 289 | class MemorySplitter(core.BaseSplitter): 290 | """Splitter for data already in memory. It is assumed that ``lines`` are 291 | iterable and that each line (aka row) is also an iterable object that 292 | provides the column values for that row.""" 293 | def __call__(self, lines): 294 | for vals in lines: 295 | yield vals 296 | 297 | class MemoryData(core.BaseData): 298 | """Memory table data reader. Same as the BaseData reader but with a 299 | special splitter and a "pass-thru" process_lines function.""" 300 | 301 | splitter_class = MemorySplitter 302 | 303 | def process_lines(self, lines): 304 | return lines 305 | 306 | -------------------------------------------------------------------------------- /asciitable/ui.py: -------------------------------------------------------------------------------- 1 | """asciitable: an extensible ASCII table reader and writer. 2 | 3 | ui.py: 4 | Provides the main user functions for reading and writing tables. 
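A typical round trip looks like this (a sketch; the file names are illustrative)::

    import asciitable
    data = asciitable.read('mytable.dat')
    asciitable.write(data, 'mytable_out.dat')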
5 | 6 | :Copyright: Smithsonian Astrophysical Observatory (2010) 7 | :Author: Tom Aldcroft (aldcroft@head.cfa.harvard.edu) 8 | """ 9 | 10 | ## Copyright (c) 2010, Smithsonian Astrophysical Observatory 11 | ## All rights reserved. 12 | ## 13 | ## Redistribution and use in source and binary forms, with or without 14 | ## modification, are permitted provided that the following conditions are met: 15 | ## * Redistributions of source code must retain the above copyright 16 | ## notice, this list of conditions and the following disclaimer. 17 | ## * Redistributions in binary form must reproduce the above copyright 18 | ## notice, this list of conditions and the following disclaimer in the 19 | ## documentation and/or other materials provided with the distribution. 20 | ## * Neither the name of the Smithsonian Astrophysical Observatory nor the 21 | ## names of its contributors may be used to endorse or promote products 22 | ## derived from this software without specific prior written permission. 23 | ## 24 | ## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 25 | ## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 26 | ## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 27 | ## DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY 28 | ## DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 29 | ## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 30 | ## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 31 | ## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 32 | ## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 33 | ## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 34 | 35 | import re 36 | import os 37 | import sys 38 | 39 | import asciitable.core as core 40 | import asciitable.basic as basic 41 | import asciitable.cds as cds 42 | import asciitable.daophot as daophot 43 | import asciitable.ipac as ipac 44 | import asciitable.memory as memory 45 | from asciitable.core import next, izip, any 46 | import asciitable.latex as latex 47 | 48 | # Default setting for guess parameter in read() 49 | _GUESS = True 50 | def set_guess(guess): 51 | """Set the default value of the ``guess`` parameter for read() 52 | 53 | :param guess: New default ``guess`` value (True|False) 54 | """ 55 | global _GUESS 56 | _GUESS = guess 57 | 58 | def get_reader(Reader=None, Inputter=None, Outputter=None, numpy=True, **kwargs): 59 | """Initialize a table reader allowing for common customizations. Most of the 60 | default behavior for various parameters is determined by the Reader class.
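For example (a sketch; the file name is illustrative)::

    reader = asciitable.get_reader(Reader=asciitable.CommentedHeader, delimiter='|')
    data = reader.read('mytable.dat')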
61 | 62 | :param Reader: Reader class (default= :class:`BasicReader`) 63 | :param Inputter: Inputter class 64 | :param Outputter: Outputter class 65 | :param numpy: use the NumpyOutputter class else use BaseOutputter (default=True) 66 | :param delimiter: column delimiter string 67 | :param comment: regular expression defining a comment line in table 68 | :param quotechar: one-character string to quote fields containing special characters 69 | :param header_start: line index for the header line not counting comment lines 70 | :param data_start: line index for the start of data not counting comment lines 71 | :param data_end: line index for the end of data (can be negative to count from end) 72 | :param converters: dict of converters 73 | :param data_Splitter: Splitter class to split data columns 74 | :param header_Splitter: Splitter class to split header columns 75 | :param names: list of names corresponding to each data column 76 | :param include_names: list of names to include in output (default=None selects all names) 77 | :param exclude_names: list of names to exclude from output (applied after ``include_names``) 78 | :param fill_values: specification of fill values for bad or missing table values 79 | :param fill_include_names: list of names to include in fill_values (default=None selects all names) 80 | :param fill_exclude_names: list of names to exclude from fill_values (applied after ``fill_include_names``) 81 | """ 82 | # This function is a light wrapper around core._get_reader to provide a public interface 83 | # with a default Reader. 84 | if Reader is None: 85 | Reader = basic.BasicReader 86 | reader = core._get_reader(Reader, Inputter=Inputter, Outputter=Outputter, numpy=numpy, **kwargs) 87 | return reader 88 | 89 | def read(table, numpy=True, guess=None, **kwargs): 90 | """Read the input ``table``. If ``numpy`` is True (default) return the 91 | table in a numpy record array. Otherwise return the table as a dictionary 92 | of column objects using plain python lists to hold the data. Most of the 93 | default behavior for various parameters is determined by the Reader class.
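For example (a sketch; the table literal is illustrative)::

    lines = ['col1 col2', '1 2.3', '4 5.6']
    data = asciitable.read(lines)                # guess the format
    data = asciitable.read(lines, guess=False)   # plain Basic read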
94 | 95 | :param table: input table (file name, list of strings, or single newline-separated string) 96 | :param numpy: use the :class:`NumpyOutputter` class else use :class:`BaseOutputter` (default=True) 97 | :param guess: try to guess the table format (default=True) 98 | :param Reader: Reader class (default= :class:`~asciitable.BasicReader`) 99 | :param Inputter: Inputter class 100 | :param Outputter: Outputter class 101 | :param delimiter: column delimiter string 102 | :param comment: regular expression defining a comment line in table 103 | :param quotechar: one-character string to quote fields containing special characters 104 | :param header_start: line index for the header line not counting comment lines 105 | :param data_start: line index for the start of data not counting comment lines 106 | :param data_end: line index for the end of data (can be negative to count from end) 107 | :param converters: dict of converters 108 | :param data_Splitter: Splitter class to split data columns 109 | :param header_Splitter: Splitter class to split header columns 110 | :param names: list of names corresponding to each data column 111 | :param include_names: list of names to include in output (default=None selects all names) 112 | :param exclude_names: list of names to exclude from output (applied after ``include_names``) 113 | :param fill_values: specification of fill values for bad or missing table values 114 | :param fill_include_names: list of names to include in fill_values (default=None selects all names) 115 | :param fill_exclude_names: list of names to exclude from fill_values (applied after ``fill_include_names``) 116 | 117 | """ 118 | 119 | # Provide a simple way to choose between the two common outputters. If an Outputter is 120 | # supplied in kwargs that will take precedence. 121 | new_kwargs = {} 122 | if core.has_numpy and numpy: 123 | new_kwargs['Outputter'] = core.NumpyOutputter 124 | else: 125 | new_kwargs['Outputter'] = core.BaseOutputter 126 | new_kwargs.update(kwargs) 127 | 128 | if guess is None: 129 | guess = _GUESS 130 | if guess: 131 | dat = _guess(table, new_kwargs) 132 | else: 133 | reader = get_reader(**new_kwargs) 134 | dat = reader.read(table) 135 | return dat 136 | 137 | def _is_number(x): 138 | try: 139 | x = float(x) 140 | return True 141 | except ValueError: 142 | pass 143 | return False 144 | 145 | def _guess(table, read_kwargs): 146 | """Try to read the table using various sets of keyword args. First try the 147 | original args supplied in the read() call. Then try the standard guess 148 | keyword args. For each key/val pair specified explicitly in the read() 149 | call make sure that if there is a corresponding definition in the guess 150 | then it must have the same val. If not then skip this guess.""" 151 | 152 | # Keep a record of all failed guess kwargs 153 | failed_kwargs = [] 154 | 155 | # First try guessing 156 | for guess_kwargs in [read_kwargs.copy()] + _get_guess_kwargs_list(): 157 | guess_kwargs_ok = True # guess_kwargs are consistent with user_kwargs? 158 | for key, val in read_kwargs.items(): 159 | # Do guess_kwargs.update(read_kwargs) except that if guess_kwargs has 160 | # a conflicting key/val pair then skip this guess entirely. 161 | if key not in guess_kwargs: 162 | guess_kwargs[key] = val 163 | elif val != guess_kwargs[key]: 164 | guess_kwargs_ok = False 165 | break 166 | 167 | if not guess_kwargs_ok: 168 | # User-supplied kwarg is inconsistent with the guess-supplied kwarg, e.g.
169 | # user supplies delimiter="|" but the guess wants to try delimiter=" ", 170 | # so skip the guess entirely. 171 | continue 172 | 173 | try: 174 | reader = get_reader(**guess_kwargs) 175 | dat = reader.read(table) 176 | # When guessing impose additional requirements on column names and number of cols 177 | bads = [" ", ",", "|", "\t", "'", '"'] 178 | if (len(reader.cols) <= 1 or 179 | any(_is_number(col.name) or 180 | len(col.name) == 0 or 181 | col.name[0] in bads or 182 | col.name[-1] in bads for col in reader.cols)): 183 | raise ValueError 184 | return dat 185 | except (core.InconsistentTableError, ValueError, TypeError): 186 | failed_kwargs.append(guess_kwargs) 187 | pass 188 | else: 189 | # failed all guesses, try the original read_kwargs without column requirements 190 | try: 191 | reader = get_reader(**read_kwargs) 192 | return reader.read(table) 193 | except (core.InconsistentTableError, ValueError): 194 | failed_kwargs.append(read_kwargs) 195 | lines = ['\nERROR: Unable to guess table format with the guesses listed below:'] 196 | for kwargs in failed_kwargs: 197 | sorted_keys = sorted([x for x in sorted(kwargs) if x not in ('Reader', 'Outputter')]) 198 | reader_repr = repr(kwargs.get('Reader', basic.Basic)) 199 | keys_vals = ['Reader:' + re.search(r"\.(\w+)'>", reader_repr).group(1)] 200 | kwargs_sorted = ((key, kwargs[key]) for key in sorted_keys) 201 | keys_vals.extend(['%s: %s' % (key, repr(val)) for key, val in kwargs_sorted]) 202 | lines.append(' '.join(keys_vals)) 203 | lines.append('ERROR: Unable to guess table format with the guesses listed above.') 204 | lines.append('Check the table and try with guess=False and appropriate arguments to read()') 205 | raise core.InconsistentTableError('\n'.join(lines)) 206 | 207 | def _get_guess_kwargs_list(): 208 | guess_kwargs_list = [dict(Reader=basic.Rdb), 209 | dict(Reader=basic.Tab), 210 | dict(Reader=cds.Cds), 211 | dict(Reader=daophot.Daophot), 212 | dict(Reader=ipac.Ipac), 213 | dict(Reader=latex.Latex), 214 | dict(Reader=latex.AASTex) 215 | ] 216 | for Reader in (basic.CommentedHeader, basic.BasicReader, basic.NoHeader): 217 | for delimiter in ("|", ",", " ", "\s"): 218 | for quotechar in ('"', "'"): 219 | guess_kwargs_list.append(dict( 220 | Reader=Reader, delimiter=delimiter, quotechar=quotechar)) 221 | return guess_kwargs_list 222 | 223 | extra_writer_pars = ('delimiter', 'comment', 'quotechar', 'formats', 224 | 'names', 'include_names', 'exclude_names') 225 | 226 | def get_writer(Writer=None, **kwargs): 227 | """Initialize a table writer allowing for common customizations. Most of the 228 | default behavior for various parameters is determined by the Writer class. 229 | 230 | :param Writer: Writer class (default= :class:`~asciitable.Basic` ) 231 | :param delimiter: column delimiter string 232 | :param write_comment: string defining a comment line in table 233 | :param quotechar: one-character string to quote fields containing special characters 234 | :param formats: dict of format specifiers or formatting functions 235 | :param names: list of names corresponding to each data column 236 | :param include_names: list of names to include in output (default=None selects all names) 237 | :param exclude_names: list of names to exclude from output (applied after ``include_names``) 238 | """ 239 | if Writer is None: 240 | Writer = basic.Basic 241 | writer = core._get_writer(Writer, **kwargs) 242 | return writer 243 | 244 | def write(table, output=sys.stdout, Writer=None, **kwargs): 245 | """Write the input ``table`` to ``output``.
Most of the default behavior 246 | for various parameters is determined by the Writer class. 247 | 248 | :param table: input table (Reader object, NumPy struct array, list of lists, etc) 249 | :param output: output [filename, file-like object] (default = sys.stdout) 250 | :param Writer: Writer class (default= :class:`~asciitable.Basic` ) 251 | :param delimiter: column delimiter string 252 | :param write_comment: string defining a comment line in table 253 | :param quotechar: one-character string to quote fields containing special characters 254 | :param formats: dict of format specifiers or formatting functions 255 | :param names: list of names corresponding to each data column 256 | :param include_names: list of names to include in output (default=None selects all names) 257 | :param exclude_names: list of names to exclude from output (applied after ``include_names``) 258 | """ 259 | 260 | reader_kwargs = dict((key, val) for key, val in kwargs.items() 261 | if key in ('names', 'include_names', 'exclude_names')) 262 | if not isinstance(table, core.BaseReader) or reader_kwargs: 263 | reader = get_reader(Reader=memory.Memory, **reader_kwargs) 264 | reader.read(table) 265 | table = reader 266 | 267 | writer = get_writer(Writer=Writer, **kwargs) 268 | lines = writer.write(table) 269 | 270 | # Write the lines to output 271 | outstr = os.linesep.join(lines) 272 | if not hasattr(output, 'write'): 273 | output = open(output, 'w') 274 | output.write(outstr) 275 | output.write(os.linesep) 276 | output.close() 277 | else: 278 | output.write(outstr) 279 | output.write(os.linesep) 280 | 281 | -------------------------------------------------------------------------------- /asciitable/version.py: -------------------------------------------------------------------------------- 1 | """ 2 | Version numbering for this module. The `major`, `minor`, and `bugfix` variables 3 | hold the respective parts of the version number (bugfix is 0 if absent). The 4 | `release` variable is True if this is a release, and False if this is a 5 | development version. 6 | 7 | NOTE: this code was copied from astropy.version and simplified. Any license restrictions 8 | therein are applicable. 9 | """ 10 | 11 | version = '0.8.0' 12 | 13 | _versplit = version.replace('dev', '').split('.') 14 | major = int(_versplit[0]) 15 | minor = int(_versplit[1]) 16 | if len(_versplit) < 3: 17 | bugfix = 0 18 | else: 19 | bugfix = int(_versplit[2]) 20 | del _versplit 21 | 22 | release = not version.endswith('dev') 23 | 24 | def _get_git_devstr(): 25 | """Determines the number of revisions in this repository and returns "" if 26 | this is not possible. 27 | 28 | Returns 29 | ------- 30 | devstr : str 31 | A revision string (e.g. '-r123') to be appended to the version number string.
32 | 33 | """ 34 | from os import path 35 | from subprocess import Popen, PIPE 36 | from warnings import warn 37 | 38 | if release: 39 | raise ValueError('revision devstring should not be used in a release version') 40 | 41 | currdir = path.abspath(path.split(__file__)[0]) 42 | 43 | p = Popen(['git', 'rev-list', 'HEAD'], cwd=currdir, 44 | stdout=PIPE, stderr=PIPE, stdin=PIPE) 45 | stdout, stderr = p.communicate() 46 | 47 | if p.returncode != 0: 48 | return '' 49 | else: 50 | nrev = stdout.decode('ascii').count('\n') 51 | return '-r%i' % nrev 52 | 53 | if not release: 54 | version = version + _get_git_devstr() 55 | -------------------------------------------------------------------------------- /doc/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = -E 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | 9 | # Internal variables. 10 | PAPEROPT_a4 = -D latex_paper_size=a4 11 | PAPEROPT_letter = -D latex_paper_size=letter 12 | ALLSPHINXOPTS = -d _build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 13 | 14 | .PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest 15 | 16 | help: 17 | @echo "Please use \`make <target>' where <target> is one of" 18 | @echo " html to make standalone HTML files" 19 | @echo " dirhtml to make HTML files named index.html in directories" 20 | @echo " pickle to make pickle files" 21 | @echo " json to make JSON files" 22 | @echo " htmlhelp to make HTML files and a HTML help project" 23 | @echo " qthelp to make HTML files and a qthelp project" 24 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 25 | @echo " changes to make an overview of all changed/added/deprecated items" 26 | @echo " linkcheck to check all external links for integrity" 27 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 28 | 29 | clean: 30 | -rm -rf _build/* 31 | 32 | html: 33 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) _build/html 34 | @echo 35 | @echo "Build finished. The HTML pages are in _build/html." 36 | 37 | dirhtml: 38 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) _build/dirhtml 39 | @echo 40 | @echo "Build finished. The HTML pages are in _build/dirhtml." 41 | 42 | pickle: 43 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) _build/pickle 44 | @echo 45 | @echo "Build finished; now you can process the pickle files." 46 | 47 | json: 48 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) _build/json 49 | @echo 50 | @echo "Build finished; now you can process the JSON files." 51 | 52 | htmlhelp: 53 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) _build/htmlhelp 54 | @echo 55 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 56 | ".hhp project file in _build/htmlhelp." 57 | 58 | qthelp: 59 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) _build/qthelp 60 | @echo 61 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 62 | ".qhcp project file in _build/qthelp, like this:" 63 | @echo "# qcollectiongenerator _build/qthelp/asciitable.qhcp" 64 | @echo "To view the help file:" 65 | @echo "# assistant -collectionFile _build/qthelp/asciitable.qhc" 66 | 67 | latex: 68 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) _build/latex 69 | @echo 70 | @echo "Build finished; the LaTeX files are in _build/latex." 71 | @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ 72 | "run these through (pdf)latex."
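# For example, to build the HTML documentation into _build/html (run from this
# doc/ directory):
#
#     make html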
73 | 74 | changes: 75 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) _build/changes 76 | @echo 77 | @echo "The overview file is in _build/changes." 78 | 79 | linkcheck: 80 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) _build/linkcheck 81 | @echo 82 | @echo "Link check complete; look for any errors in the above output " \ 83 | "or in _build/linkcheck/output.txt." 84 | 85 | doctest: 86 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) _build/doctest 87 | @echo "Testing of doctests in the sources finished, look at the " \ 88 | "results in _build/doctest/output.txt." 89 | -------------------------------------------------------------------------------- /doc/_templates/layout.html: -------------------------------------------------------------------------------- 1 | {% extends "!layout.html" %} 2 | 3 | {%- block extrahead %} 4 | {{ super() }} 5 | 10 | {% endblock %} 11 | 12 | {% block footer %} 13 | {{ super() }} 14 | 29 | {% endblock %} 30 | -------------------------------------------------------------------------------- /doc/conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # 3 | # asciitable documentation build configuration file, created by 4 | # sphinx-quickstart on Thu Dec 24 12:16:14 2009. 5 | # 6 | # This file is execfile()d with the current directory set to its containing dir. 7 | # 8 | # Note that not all possible configuration values are present in this 9 | # autogenerated file. 10 | # 11 | # All configuration values have a default; values that are commented out 12 | # serve to show the default. 13 | 14 | import sys, os 15 | 16 | # If extensions (or modules to document with autodoc) are in another directory, 17 | # add these directories to sys.path here. If the directory is relative to the 18 | # documentation root, use os.path.abspath to make it absolute, like shown here. 19 | rootpath = os.path.abspath(os.path.join(os.path.dirname(__file__), '..')) 20 | sys.path.insert(0, rootpath) 21 | 22 | # -- General configuration ----------------------------------------------------- 23 | 24 | # Add any Sphinx extension module names here, as strings. They can be extensions 25 | # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 26 | extensions = ['sphinx.ext.autodoc'] 27 | 28 | # Add any paths that contain templates here, relative to this directory. 29 | templates_path = ['_templates'] 30 | 31 | # The suffix of source filenames. 32 | source_suffix = '.rst' 33 | 34 | # The encoding of source files. 35 | #source_encoding = 'utf-8' 36 | 37 | # The master toctree document. 38 | master_doc = 'index' 39 | 40 | # General information about the project. 41 | project = u'asciitable' 42 | copyright = u'2011, Tom Aldcroft' 43 | 44 | # The version info for the project you're documenting, acts as replacement for 45 | # |version| and |release|, also used in various other places throughout the 46 | # built documents. 47 | # 48 | # The full version, including alpha/beta/rc tags. 49 | from asciitable.version import version as release 50 | # The short X.Y version. 51 | version = '.'.join(release.split('.')[:2]) 52 | 53 | # The language for content autogenerated by Sphinx. Refer to documentation 54 | # for a list of supported languages. 55 | #language = None 56 | 57 | # There are two options for replacing |today|: either, you set today to some 58 | # non-false value, then it is used: 59 | #today = '' 60 | # Else, today_fmt is used as the format for a strftime call. 
61 | #today_fmt = '%B %d, %Y' 62 | 63 | # List of documents that shouldn't be included in the build. 64 | #unused_docs = [] 65 | 66 | # List of directories, relative to source directory, that shouldn't be searched 67 | # for source files. 68 | exclude_trees = ['_build'] 69 | 70 | # The reST default role (used for this markup: `text`) to use for all documents. 71 | #default_role = None 72 | 73 | # If true, '()' will be appended to :func: etc. cross-reference text. 74 | #add_function_parentheses = True 75 | 76 | # If true, the current module name will be prepended to all description 77 | # unit titles (such as .. function::). 78 | #add_module_names = True 79 | 80 | # If true, sectionauthor and moduleauthor directives will be shown in the 81 | # output. They are ignored by default. 82 | #show_authors = False 83 | 84 | # The name of the Pygments (syntax highlighting) style to use. 85 | pygments_style = 'sphinx' 86 | 87 | # A list of ignored prefixes for module index sorting. 88 | #modindex_common_prefix = [] 89 | 90 | 91 | # -- Options for HTML output --------------------------------------------------- 92 | 93 | # The theme to use for HTML and HTML Help pages. Major themes that come with 94 | # Sphinx are currently 'default' and 'sphinxdoc'. 95 | html_theme = 'default' 96 | 97 | # Theme options are theme-specific and customize the look and feel of a theme 98 | # further. For a list of options available for each theme, see the 99 | # documentation. 100 | #html_theme_options = {} 101 | 102 | # Add any paths that contain custom themes here, relative to this directory. 103 | #html_theme_path = [] 104 | 105 | # The name for this set of Sphinx documents. If None, it defaults to 106 | # "<project> v<release> documentation". 107 | #html_title = None 108 | 109 | # A shorter title for the navigation bar. Default is the same as html_title. 110 | #html_short_title = None 111 | 112 | # The name of an image file (relative to this directory) to place at the top 113 | # of the sidebar. 114 | #html_logo = None 115 | 116 | # The name of an image file (within the static path) to use as favicon of the 117 | # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 118 | # pixels large. 119 | #html_favicon = None 120 | 121 | # Add any paths that contain custom static files (such as style sheets) here, 122 | # relative to this directory. They are copied after the builtin static files, 123 | # so a file named "default.css" will overwrite the builtin "default.css". 124 | html_static_path = ['_static'] 125 | 126 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 127 | # using the given strftime format. 128 | #html_last_updated_fmt = '%b %d, %Y' 129 | 130 | # If true, SmartyPants will be used to convert quotes and dashes to 131 | # typographically correct entities. 132 | #html_use_smartypants = True 133 | 134 | # Custom sidebar templates, maps document names to template names. 135 | #html_sidebars = {} 136 | 137 | # Additional templates that should be rendered to pages, maps page names to 138 | # template names. 139 | #html_additional_pages = {} 140 | 141 | # If false, no module index is generated. 142 | #html_use_modindex = True 143 | 144 | # If false, no index is generated. 145 | #html_use_index = True 146 | 147 | # If true, the index is split into individual pages for each letter. 148 | #html_split_index = False 149 | 150 | # If true, links to the reST sources are added to the pages.
151 | html_show_sourcelink = False 152 | 153 | # If true, an OpenSearch description file will be output, and all pages will 154 | # contain a <link> tag referring to it. The value of this option must be the 155 | # base URL from which the finished HTML is served. 156 | #html_use_opensearch = '' 157 | 158 | # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). 159 | #html_file_suffix = '' 160 | 161 | # Output file base name for HTML help builder. 162 | htmlhelp_basename = 'asciitabledoc' 163 | 164 | 165 | # -- Options for LaTeX output -------------------------------------------------- 166 | 167 | # The paper size ('letter' or 'a4'). 168 | #latex_paper_size = 'letter' 169 | 170 | # The font size ('10pt', '11pt' or '12pt'). 171 | #latex_font_size = '10pt' 172 | 173 | # Grouping the document tree into LaTeX files. List of tuples 174 | # (source start file, target name, title, author, documentclass [howto/manual]). 175 | latex_documents = [ 176 | ('index', 'asciitable.tex', u'asciitable Documentation', 177 | u'Tom Aldcroft', 'manual'), 178 | ] 179 | 180 | # The name of an image file (relative to this directory) to place at the top of 181 | # the title page. 182 | #latex_logo = None 183 | 184 | # For "manual" documents, if this is true, then toplevel headings are parts, 185 | # not chapters. 186 | #latex_use_parts = False 187 | 188 | # Additional stuff for the LaTeX preamble. 189 | #latex_preamble = '' 190 | 191 | # Documents to append as an appendix to all manuals. 192 | #latex_appendices = [] 193 | 194 | # If false, no module index is generated. 195 | #latex_use_modindex = True 196 | -------------------------------------------------------------------------------- /doc/fixed_width_gallery.rst: -------------------------------------------------------------------------------- 1 | .. _fixed_width_gallery: 2 | 3 | Fixed-width Gallery 4 | ===================== 5 | 6 | Fixed-width tables are those where each column has the same width for every row 7 | in the table. This is commonly used to make tables easy to read for humans or 8 | FORTRAN codes. It also reduces issues with quoting and special characters, 9 | for example:: 10 | 11 | Col1 Col2 Col3 Col4 12 | ---- --------- ---- ---- 13 | 1.2 "hello" 1 a 14 | 2.4 's worlds 2 2 15 | 16 | There are a number of common variations in the formatting of fixed-width tables 17 | which :mod:`asciitable` can read and write. The most significant difference is 18 | whether there is no header line (:class:`~asciitable.FixedWidthNoHeader`), one 19 | header line (:class:`~asciitable.FixedWidth`), or two header lines 20 | (:class:`~asciitable.FixedWidthTwoLine`). Next, there are variations in the 21 | delimiter character, whether the delimiter appears on either end ("bookends"), 22 | and padding around the delimiter. 23 | 24 | Details are available in the class API documentation, but the easiest way to 25 | understand all the options and their interactions is by example. 26 | 27 | Reading 28 | -------- 29 | 30 | FixedWidth 31 | ^^^^^^^^^^^ 32 | 33 | **Nice, typical fixed format table** 34 | :: 35 | 36 | >>> import asciitable 37 | >>> table = """ 38 | ... # comment (with blank line above) 39 | ... | Col1 | Col2 | 40 | ... | 1.2 | "hello" | 41 | ... | 2.4 |'s worlds| 42 | ... """ 43 | >>> asciitable.read(table, Reader=asciitable.FixedWidth) 44 | rec.array([(1.2, '"hello"'), (2.4, "'s worlds")], 45 | dtype=[('Col1', '<f8'), ('Col2', '|S9')]) 46 | 47 | **Nice, typical fixed format table with col names provided** 48 | :: 49 | 50 | >>> table = """ 51 | ... # comment (with blank line above) 52 | ... | Col1 | Col2 | 53 | ... | 1.2 | "hello" | 54 | ... | 2.4 |'s worlds| 55 | ...
""" 56 | >>> asciitable.read(table, Reader=asciitable.FixedWidth, names=('name1', 'name2')) 57 | rec.array([(1.2, '"hello"'), (2.4, "'s worlds")], 58 | dtype=[('name1', '>> table = """ 64 | ... Col1 | Col2 | 65 | ... 1.2 "hello" 66 | ... 2.4 sdf's worlds 67 | ... """ 68 | >>> asciitable.read(table, Reader=asciitable.FixedWidth) 69 | rec.array([(1.2, '"hel'), (2.4, "df's wo")], 70 | dtype=[('Col1', '>> table = """ 76 | ... || Name || Phone || TCP|| 77 | ... | John | 555-1234 |192.168.1.10X| 78 | ... | Mary | 555-2134 |192.168.1.12X| 79 | ... | Bob | 555-4527 | 192.168.1.9X| 80 | ... """ 81 | >>> asciitable.read(table, Reader=asciitable.FixedWidth) 82 | rec.array([('John', '555-1234', '192.168.1.10'), 83 | ('Mary', '555-2134', '192.168.1.12'), 84 | ('Bob', '555-4527', '192.168.1.9')], 85 | dtype=[('Name', '|S4'), ('Phone', '|S8'), ('TCP', '|S12')]) 86 | 87 | **Table with space delimiter** 88 | :: 89 | 90 | >>> table = """ 91 | ... Name --Phone- ----TCP----- 92 | ... John 555-1234 192.168.1.10 93 | ... Mary 555-2134 192.168.1.12 94 | ... Bob 555-4527 192.168.1.9 95 | ... """ 96 | >>> asciitable.read(table, Reader=asciitable.FixedWidth, delimiter=' ') 97 | rec.array([('John', '555-1234', '192.168.1.10'), 98 | ('Mary', '555-2134', '192.168.1.12'), 99 | ('Bob', '555-4527', '192.168.1.9')], 100 | dtype=[('Name', '|S4'), ('--Phone-', '|S8'), ('----TCP-----', '|S12')]) 101 | 102 | **Table with no header row and auto-column naming.** 103 | 104 | Use header_start and data_start keywords to indicate no header line. 105 | :: 106 | 107 | >>> table = """ 108 | ... | John | 555-1234 |192.168.1.10| 109 | ... | Mary | 555-2134 |192.168.1.12| 110 | ... | Bob | 555-4527 | 192.168.1.9| 111 | ... """ 112 | >>> asciitable.read(table, Reader=asciitable.FixedWidth, 113 | ... header_start=None, data_start=0) 114 | rec.array([('John', '555-1234', '192.168.1.10'), 115 | ('Mary', '555-2134', '192.168.1.12'), 116 | ('Bob', '555-4527', '192.168.1.9')], 117 | dtype=[('col1', '|S4'), ('col2', '|S8'), ('col3', '|S12')]) 118 | 119 | **Table with no header row and with col names provided.** 120 | 121 | Second and third rows also have hanging spaces after final "|". Use header_start and data_start 122 | keywords to indicate no header line. 123 | :: 124 | 125 | >>> table = ["| John | 555-1234 |192.168.1.10|" 126 | ... "| Mary | 555-2134 |192.168.1.12| " 127 | ... "| Bob | 555-4527 | 192.168.1.9| "] 128 | >>> asciitable.read(table, Reader=asciitable.FixedWidth, 129 | ... header_start=None, data_start=0, 130 | ... names=('Name', 'Phone', 'TCP')) 131 | rec.array([('John', '555-1234', '192.168.1.10')], 132 | dtype=[('Name', '|S4'), ('Phone', '|S8'), ('TCP', '|S12')]) 133 | 134 | FixedWidthNoHeader 135 | ^^^^^^^^^^^^^^^^^^^ 136 | 137 | **Table with no header row and auto-column naming. Use the FixedWidthNoHeader 138 | convenience class.** 139 | :: 140 | 141 | >>> table = """ 142 | ... | John | 555-1234 |192.168.1.10| 143 | ... | Mary | 555-2134 |192.168.1.12| 144 | ... | Bob | 555-4527 | 192.168.1.9| 145 | ... """ 146 | >>> asciitable.read(table, Reader=asciitable.FixedWidthNoHeader) 147 | rec.array([('John', '555-1234', '192.168.1.10'), 148 | ('Mary', '555-2134', '192.168.1.12'), 149 | ('Bob', '555-4527', '192.168.1.9')], 150 | dtype=[('col1', '|S4'), ('col2', '|S8'), ('col3', '|S12')]) 151 | 152 | **Table with no delimiter with column start and end values specified.** 153 | 154 | This uses the col_starts and col_ends keywords. 
Note that the 155 | col_ends values are inclusive so a position range of 0 to 5 156 | will select the first 6 characters. 157 | :: 158 | 159 | >>> table = """ 160 | ... # 5 9 17 18 28 <== Column start / end indexes 161 | ... # | | || | <== Column separation positions 162 | ... John 555- 1234 192.168.1.10 163 | ... Mary 555- 2134 192.168.1.12 164 | ... Bob 555- 4527 192.168.1.9 165 | ... """ 166 | >>> asciitable.read(table, Reader=asciitable.FixedWidthNoHeader, 167 | ... names=('Name', 'Phone', 'TCP'), 168 | ... col_starts=(0, 9, 18), 169 | ... col_ends=(5, 17, 28), 170 | ... ) 171 | rec.array([('John', '555- 1234', '192.168.1.'), 172 | ('Mary', '555- 2134', '192.168.1.'), 173 | ('Bob', '555- 4527', '192.168.1')], 174 | dtype=[('Name', '|S4'), ('Phone', '|S9'), ('TCP', '|S10')]) 175 | 176 | FixedWidthTwoLine 177 | ^^^^^^^^^^^^^^^^^^^ 178 | 179 | **Typical fixed format table with two header lines with some cruft** 180 | :: 181 | 182 | >>> table = """ 183 | ... Col1 Col2 184 | ... ---- --------- 185 | ... 1.2xx"hello" 186 | ... 2.4 's worlds 187 | ... """ 188 | >>> asciitable.read(table, Reader=asciitable.FixedWidthTwoLine) 189 | rec.array([(1.2, '"hello"'), (2.4, "'s worlds")], 190 | dtype=[('Col1', '<f8'), ('Col2', '|S9')]) 191 | 192 | **Restructured text table** 193 | :: 194 | 195 | >>> table = """ 196 | ... ======= =========== 197 | ... Col1 Col2 198 | ... ======= =========== 199 | ... 1.2 "hello" 200 | ... 2.4 's worlds 201 | ... ======= =========== 202 | ... """ 203 | >>> asciitable.read(table, Reader=asciitable.FixedWidthTwoLine, 204 | ... header_start=1, position_line=2, data_end=-1) 205 | rec.array([(1.2, '"hello"'), (2.4, "'s worlds")], 206 | dtype=[('Col1', '<f8'), ('Col2', '|S9')]) 207 | 208 | **Table with the position line before the header line** 209 | :: 210 | 211 | >>> table = """ 212 | ... +------+----------+ 213 | ... | Col1 | Col2 | 214 | ... +------|----------+ 215 | ... | 1.2 | "hello" | 216 | ... | 2.4 | 's worlds| 217 | ... +------+----------+ 218 | ... """ 219 | >>> asciitable.read(table, Reader=asciitable.FixedWidthTwoLine, delimiter='+', 220 | ... header_start=1, position_line=0, data_start=3, data_end=-1) 221 | rec.array([(1.2, '"hello"'), (2.4, "'s worlds")], 222 | dtype=[('Col1', '<f8'), ('Col2', '|S9')]) 223 | 224 | Writing 225 | -------- 226 | 227 | FixedWidth 228 | ^^^^^^^^^^^ 229 | 230 | **Define input values for the write examples below.** 231 | :: 232 | 233 | >>> table = """ 234 | ... | Col1 | Col2 | Col3 | Col4 | 235 | ... | 1.2 | "hello" | 1 | a | 236 | ... | 2.4 | 's worlds | 2 | 2 | 237 | ... """ 238 | >>> dat = asciitable.read(table, Reader=asciitable.FixedWidth) 239 | 240 | **Write a table as a normal fixed width table.** 241 | :: 242 | 243 | >>> asciitable.write(dat, Writer=asciitable.FixedWidth) 244 | | Col1 | Col2 | Col3 | Col4 | 245 | | 1.2 | "hello" | 1 | a | 246 | | 2.4 | 's worlds | 2 | 2 | 247 | 248 | **Write a table as a fixed width table with no padding.** 249 | :: 250 | 251 | >>> asciitable.write(dat, Writer=asciitable.FixedWidth, delimiter_pad=None) 252 | |Col1| Col2|Col3|Col4| 253 | | 1.2| "hello"| 1| a| 254 | | 2.4|'s worlds| 2| 2| 255 | 256 | **Write a table as a fixed width table with no bookend.** 257 | :: 258 | 259 | >>> asciitable.write(dat, Writer=asciitable.FixedWidth, bookend=False) 260 | Col1 | Col2 | Col3 | Col4 261 | 1.2 | "hello" | 1 | a 262 | 2.4 | 's worlds | 2 | 2 263 | 264 | **Write a table as a fixed width table with no delimiter.** 265 | :: 266 | 267 | >>> asciitable.write(dat, Writer=asciitable.FixedWidth, bookend=False, delimiter=None) 268 | Col1 Col2 Col3 Col4 269 | 1.2 "hello" 1 a 270 | 2.4 's worlds 2 2 271 |
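272 | **Write a table to a file instead of stdout.** 273 | 274 | This sketch is illustrative rather than verified doctest output: write() 275 | accepts any writable file-like object as its second argument, and the 276 | file name used here is arbitrary. 277 | :: 278 | 279 | >>> asciitable.write(dat, open('fixed_width.dat', 'w'), 280 | ... Writer=asciitable.FixedWidth) 281 |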
282 | **Write a table as a fixed width table with formatting.** 283 | :: 284 | 285 | >>> asciitable.write(dat, Writer=asciitable.FixedWidth, 286 | ... formats={'Col1': '%-8.3f', 'Col2': '%-15s'}) 287 | | Col1 | Col2 | Col3 | Col4 | 288 | | 1.200 | "hello" | 1 | a | 289 | | 2.400 | 's worlds | 2 | 2 | 290 | 291 | FixedWidthNoHeader 292 | ^^^^^^^^^^^^^^^^^^^ 293 | 294 | **Write a table as a normal fixed width table.** 295 | :: 296 | 297 | >>> asciitable.write(dat, Writer=asciitable.FixedWidthNoHeader) 298 | | 1.2 | "hello" | 1 | a | 299 | | 2.4 | 's worlds | 2 | 2 | 300 | 301 | **Write a table as a fixed width table with no padding.** 302 | :: 303 | 304 | >>> asciitable.write(dat, Writer=asciitable.FixedWidthNoHeader, delimiter_pad=None) 305 | |1.2| "hello"|1|a| 306 | |2.4|'s worlds|2|2| 307 | 308 | **Write a table as a fixed width table with no bookend.** 309 | :: 310 | 311 | >>> asciitable.write(dat, Writer=asciitable.FixedWidthNoHeader, bookend=False) 312 | 1.2 | "hello" | 1 | a 313 | 2.4 | 's worlds | 2 | 2 314 | 315 | **Write a table as a fixed width table with no delimiter.** 316 | :: 317 | 318 | >>> asciitable.write(dat, Writer=asciitable.FixedWidthNoHeader, bookend=False, 319 | ... delimiter=None) 320 | 1.2 "hello" 1 a 321 | 2.4 's worlds 2 2 322 | 323 | FixedWidthTwoLine 324 | ^^^^^^^^^^^^^^^^^^^ 325 | 326 | **Write a table as a normal fixed width table.** 327 | :: 328 | 329 | >>> asciitable.write(dat, Writer=asciitable.FixedWidthTwoLine) 330 | Col1 Col2 Col3 Col4 331 | ---- --------- ---- ---- 332 | 1.2 "hello" 1 a 333 | 2.4 's worlds 2 2 334 | 335 | **Write a table as a fixed width table with space padding and '=' position_char.** 336 | :: 337 | 338 | >>> asciitable.write(dat, Writer=asciitable.FixedWidthTwoLine, 339 | ... delimiter_pad=' ', position_char='=') 340 | Col1 Col2 Col3 Col4 341 | ==== ========= ==== ==== 342 | 1.2 "hello" 1 a 343 | 2.4 's worlds 2 2 344 | 345 | **Write a table as a fixed width table with a '|' delimiter and bookends.** 346 | :: 347 | 348 | >>> asciitable.write(dat, Writer=asciitable.FixedWidthTwoLine, bookend=True, delimiter='|') 349 | |Col1| Col2|Col3|Col4| 350 | |----|---------|----|----| 351 | | 1.2| "hello"| 1| a| 352 | | 2.4|'s worlds| 2| 2| 353 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from distutils.core import setup 2 | import os 3 | 4 | long_description = """ 5 | Asciitable can read and write a wide range of ASCII table formats via built-in 6 | Extension Reader Classes: 7 | 8 | * **Basic**: basic table with customizable delimiters and header configurations 9 | * **Cds**: `CDS format table <http://vizier.u-strasbg.fr/doc/catstd.htx>`_ (also Vizier and ApJ machine readable tables) 10 | * **CommentedHeader**: column names given in a line that begins with the comment character 11 | * **Daophot**: table from the IRAF DAOphot package 12 | * **FixedWidth**: table with fixed-width columns 13 | * **Ipac**: `IPAC format table <http://irsa.ipac.caltech.edu/applications/DDGEN/Doc/ipac_tbl.html>`_ 14 | * **Latex**: LaTeX tables (plain and AASTex) 15 | * **Memory**: table already in memory (list of lists, dict of lists, etc) 16 | * **NoHeader**: basic table with no header where columns are auto-named 17 | * **Rdb**: tab-separated values with a column types line after the column names line 18 | * **Tab**: tab-separated values 19 | 20 | At the top level asciitable looks like many other ASCII table interfaces 21 | since it provides default read() and write() functions with long lists of 22 | parameters to accommodate the many variations possible in commonly encountered 23 | ASCII table formats. Under the hood, however, asciitable is built on a 24 | modular and extensible class structure.
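25 | For example, a minimal sketch of the top-level interface (the file names here 26 | are purely illustrative):: 27 | 28 | import asciitable 29 | data = asciitable.read('sources.dat') # the format is guessed by default 30 | asciitable.write(data, open('sources.rdb', 'w'), Writer=asciitable.Rdb) 31 | 32 |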
33 | The basic functionality required for reading or writing a table is largely 34 | broken into independent base class elements so that new formats can be 35 | accommodated by modifying the underlying class methods as needed. 36 | """ 37 | 38 | from asciitable.version import version 39 | 40 | setup(name='asciitable', 41 | version=version, 42 | description='Extensible ASCII table reader and writer', 43 | long_description=long_description, 44 | author='Tom Aldcroft', 45 | author_email='aldcroft@head.cfa.harvard.edu', 46 | url='http://cxc.harvard.edu/contrib/asciitable', 47 | license='BSD', 48 | platforms=['any'], 49 | classifiers=[ 50 | 'Development Status :: 5 - Production/Stable', 51 | 'Intended Audience :: Science/Research', 52 | 'License :: OSI Approved :: BSD License', 53 | 'Topic :: Scientific/Engineering', 54 | 'Topic :: Scientific/Engineering :: Astronomy', 55 | 'Topic :: Scientific/Engineering :: Physics', 56 | 'Programming Language :: Python :: 2', 57 | 'Programming Language :: Python :: 3', 58 | ], 59 | packages=['asciitable'], 60 | ) 61 | -------------------------------------------------------------------------------- /t/apostrophe.rdb: -------------------------------------------------------------------------------- 1 | # first comment 2 | agasc_id n_noids n_obs 3 | 11S N N 4 | jean's 1 1 5 | # second comment 6 | 335416352 3 8 7 | -------------------------------------------------------------------------------- /t/apostrophe.tab: -------------------------------------------------------------------------------- 1 | agasc_id n_noids n_obs 2 | jean's 1 1 3 | 335416352 3 8 4 | -------------------------------------------------------------------------------- /t/bad.txt: -------------------------------------------------------------------------------- 1 | # Extra column in last line 2 | "test 1a" test2 test3 test4 3 | # fun1 fun2 fun3 fun4 4 | top1 top2 top3 top4 5 | hat1 hat2 hat3 hat4 hat5 6 | 7 | -------------------------------------------------------------------------------- /t/bars_at_ends.txt: -------------------------------------------------------------------------------- 1 | |obsid | redshift | X | Y | object | rad| 2 | |3102 | 0.32 | 4167 | 4085 | Q1250+568-A | 9| 3 | |3102 | 0.32 | 4706 | 3916 | Q1250+568-B | 14 | 4 | |877 | 0.22 | 4378 | 3892 | 'Source 82' | 12.5 | 5 | -------------------------------------------------------------------------------- /t/cds.dat: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Title: Spitzer Observations of NGC 1333: A Study of Structure and Evolution 6 | in a Nearby Embedded Cluster 7 | Authors: Gutermuth R.A., Myers P.C., Megeath S.T., Allen L.E., Pipher J.L., 8 | Muzerolle J., Porras A., Winston E., Fazio G.
9 | Table: Spitzer-identified YSOs: Addendum 10 | ================================================================================ 11 | Byte-by-byte Description of file: datafile3.txt 12 | -------------------------------------------------------------------------------- 13 | Bytes Format Units Label Explanations 14 | -------------------------------------------------------------------------------- 15 | 1- 3 I3 --- Index Running identification number 16 | 5- 6 I2 h RAh Hour of Right Ascension (J2000) 17 | 8- 9 I2 min RAm Minute of Right Ascension (J2000) 18 | 11- 15 F5.2 s RAs Second of Right Ascension (J2000) 19 | - continuation of description 20 | 17 A1 --- DE- Sign of the Declination (J2000) 21 | 18- 19 I2 deg DEd Degree of Declination (J2000) 22 | 21- 22 I2 arcmin DEm Arcminute of Declination (J2000) 23 | 24- 27 F4.1 arcsec DEs Arcsecond of Declination (J2000) 24 | 29- 68 A40 --- Match Literature match 25 | 70- 75 A6 --- Class Source classification (1) 26 | 77-80 F4.2 mag AK ? The K band extinction (2) 27 | 82-86 F5.2 --- Fit ? Fit of IRAC photometry (3) 28 | -------------------------------------------------------------------------------- 29 | Note (1): Asterisks mark "deeply embedded" sources with questionable IRAC 30 | colors or incomplete IRAC photometry and relatively bright 31 | MIPS 24 micron photometry. 32 | Note (2): Only provided for sources with valid JHK_S_ photometry. 33 | Note (3): Defined as the slope of the linear least squares fit to the 34 | 3.6 - 8.0 micron SEDs in log{lambda} F_{lambda} vs log{lambda} space. 35 | Extinction is not accounted for in these values. High extinction can 36 | bias Fit to higher values. 37 | -------------------------------------------------------------------------------- 38 | 1 03 28 39.09 +31 06 01.9 I* 1.35 39 | -------------------------------------------------------------------------------- /t/cds/multi/ReadMe: -------------------------------------------------------------------------------- 1 | J/MNRAS/301/1031 High resolution spectra of VLM stars (Tinney+ 1998) 2 | ================================================================================ 3 | High resolution spectra of Very Low-Mass Stars 4 | Tinney C.G., Reid I.N. 5 | 6 | =1998MNRAS.301.1031T 7 | ================================================================================ 8 | ADC_Keywords: Stars, dwarfs ; Stars, late-type ; Spectroscopy 9 | 10 | Description: 11 | A high resolution optical spectral atlas for three very low-mass 12 | stars are provided, along with a high resolution observation of 13 | an atmospheric absorption calibrator. This is the data used to 14 | produce Figures 4-9 in the paper. 15 | 16 | These data were acquired with CASPEC on the ESO3.6m telescope. 17 | The FWHM resolution is 16km/s (eg. 0.043nm at 800nm), at a dispersion 18 | of 9km/s. Incomplete wavelength coverage produces inter-order gaps 19 | at wavelengths longer than 804.5nm. 
20 | 21 | Objects: 22 | --------------------------------------------------------------------- 23 | RA (2000) DE Designation(s) (File) 24 | --------------------------------------------------------------------- 25 | 16 55 35.7 -08 23 36 VB 8 = LHS 429 = Gl 644 C (vb8.dat) 26 | 08 53 36 -03 29 30 LHS 2065 = LP 666-9 (lhs2065.dat) 27 | 03 39 34.6 -35 25 51 LP 944-20 (lp944-20.dat) 28 | 05 45 59.9 -32 18 23 {mu} Col = HR 1996 = HD 38666 (mucol.dat) 29 | --------------------------------------------------------------------- 30 | 31 | File Summary: 32 | --------------------------------------------------------------------- 33 | FileName Lrecl Records Explanations 34 | --------------------------------------------------------------------- 35 | ReadMe 80 . This file 36 | vb8.dat 26 14390 Spectrum for VB8 37 | lhs2065.dat 26 14390 Spectrum for LHS2065 38 | lp944-20.dat 26 14390 Spectrum for LP944-20 39 | mucol.dat 23 14390 Atmospheric Spectrum for Mu Columbae 40 | --------------------------------------------------------------------- 41 | 42 | Byte-by-byte Description of file: vb8.dat, lhs2065.dat, lp944-20.dat 43 | ------------------------------------------------------------------------- 44 | Bytes Format Units Label Explanations 45 | ------------------------------------------------------------------------- 46 | 1- 12 F12.2 0.1nm Lambda Central wavelength of the flux bin 47 | 13- 26 A14.9 mJy Fnu Data in interorder gaps has value 0.0 48 | ------------------------------------------------------------------------- 49 | 50 | Byte-by-byte Description of file: mucol.dat 51 | ------------------------------------------------------------------------- 52 | Bytes Format Units Label Explanations 53 | ------------------------------------------------------------------------- 54 | 1- 12 F12.2 0.1nm Lambda Central wavelength of the flux bin 55 | 13- 23 F11.6 --- Fnu *Data in interorder gaps has value 0.0 56 | ------------------------------------------------------------------------- 57 | Note on Fnu: 58 | mJy which have been normalised to value 1.0 59 | in the continuum of the atmospheric standard star 60 | ------------------------------------------------------------------------- 61 | 62 | ================================================================================ 63 | (End) C.G. 
Tinney [AAO] 04-Feb-1999 64 | -------------------------------------------------------------------------------- /t/commented_header.dat: -------------------------------------------------------------------------------- 1 | # a b c 2 | # A comment line 3 | 1 2 3 4 | 4 5 6 5 | -------------------------------------------------------------------------------- /t/commented_header2.dat: -------------------------------------------------------------------------------- 1 | # A comment line 2 | # Another comment line 3 | # a b c 4 | 1 2 3 5 | 4 5 6 6 | -------------------------------------------------------------------------------- /t/continuation.dat: -------------------------------------------------------------------------------- 1 | 1 3 5 \ 2 | hello world 3 | 4 6 8 next \ 4 | line 5 | -------------------------------------------------------------------------------- /t/daophot.dat: -------------------------------------------------------------------------------- 1 | #K MERGERAD = INDEF scaleunit %-23.7g 2 | #K IRAF = NOAO/IRAFV2.10EXPORT version %-23s 3 | #K USER = davis name %-23s 4 | #K HOST = tucana computer %-23s 5 | #K DATE = 05-28-93 mm-dd-yy %-23s 6 | #K TIME = 14:46:13 hh:mm:ss %-23s 7 | #K PACKAGE = daophot name %-23s 8 | #K TASK = nstar name %-23s 9 | #K IMAGE = test imagename %-23s 10 | #K GRPFILE = test.psg.1 filename %-23s 11 | #K PSFIMAGE = test.psf.1 imagename %-23s 12 | #K NSTARFILE = test.nst.1 filename %-23s 13 | #K REJFILE = "hello world" filename %-23s 14 | #K SCALE = 1. units/pix %-23.7g 15 | #K DATAMIN = 50. counts %-23.7g 16 | #K DATAMAX = 24500. counts %-23.7g 17 | #K GAIN = 1. number %-23.7g 18 | #K READNOISE = 0. electrons %-23.7g 19 | #K OTIME = 00:07:59.0 timeunit %-23s 20 | #K XAIRMASS = 1.238106 number %-23.7g 21 | #K IFILTER = V filter %-23s 22 | #K RECENTER = yes switch %-23b 23 | #K FITSKY = no switch %-23b 24 | #K PSFMAG = 16.594 magnitude %-23.7g 25 | #K PSFRAD = 5. scaleunit %-23.7g 26 | #K FITRAD = 3. scaleunit %-23.7g 27 | #K MAXITER = 50 number %-23d 28 | #K MAXGROUP = 60 number %-23d 29 | #K FLATERROR = 0.75 percentage %-23.7g 30 | #K PROFERROR = 5. percentage %-23.7g 31 | #K CLIPEXP = 6 number %-23d 32 | #K CLIPRANGE = 2.5 sigma %-23.7g 33 | # 34 | #N ID XCENTER YCENTER MAG MERR MSKY NITER \ 35 | #U ## pixels pixels magnitudes magnitudes counts ## \ 36 | #F %-9d %-10.3f %-10.3f %-12.3f %-14.3f %-15.7g %-6d 37 | # 38 | #N SHARPNESS CHI PIER PERROR \ 39 | #U ## ## ## perrors \ 40 | #F %-23.3f %-12.3f %-6d %-13s 41 | # 42 | 14 138.538 256.405 15.461 0.003 34.85955 4 \ 43 | -0.032 0.802 0 No_error 44 | 18 18.114 280.170 22.329 0.206 30.12784 4 \ 45 | -2.544 1.104 0 No_error 46 | -------------------------------------------------------------------------------- /t/fill_values.txt: -------------------------------------------------------------------------------- 1 | a,b,c 2 | 1,2,3 3 | a,a,4 4 | -------------------------------------------------------------------------------- /t/ipac.dat: -------------------------------------------------------------------------------- 1 | \catalog = sao 2 | \date = "Wed Sp 20 09:48:36 1995" 3 | \mykeyword = 'Another way for defining keyvalue string' 4 | \ This is an example of a valid comment. 
5 | \ The 2nd data line is used to verify the exact column parsing 6 | \ (unclear if this is a valid for the IPAC format) 7 | | ra | dec | sai |-----v2---| sptype | 8 | | real | real | int | real | char | 9 | | unit | unit | unit | unit | ergs | 10 | | null | null | null | null | -999 | 11 | 2.09708 29.09056 73765 2.06000 B8IVpMnHg 12 | 12345678901234567890123456789012345678901234567890123456789012345 13 | -------------------------------------------------------------------------------- /t/latex1.tex: -------------------------------------------------------------------------------- 1 | \begin{table} 2 | \caption{\ion{Ne}{ix} Ly series and \ion{Mg}{xi} triplet fluxes (errors are 5$1\sigma$ confidence intervals) \label{tab:nely}} 3 | \begin{tabular}{lrr}\hline 4 | cola & colb & colc\\ 5 | \hline 6 | a & 1 & 2\\ 7 | b & 3 & 4\\ 8 | \hline 9 | \end{tabular} 10 | \end{table} -------------------------------------------------------------------------------- /t/latex2.tex: -------------------------------------------------------------------------------- 1 | \begin{deluxetable}{llrl} 2 | %\tabletypesize{\scriptsize} 3 | %\rotate 4 | \tablecaption{Log of observations\label{tab:obslog}} 5 | \tablewidth{0pt} 6 | \tablehead{\colhead{Facility} & \colhead{Id} & \colhead{exposure} & \colhead{date}} 7 | 8 | \startdata 9 | Chandra & \dataset[ADS/Sa.CXO#obs/06438]{ObsId 6438} & 23 ks & 2006-12-10\\ 10 | Spitzer & AOR 3656448 & 41.6 s & 2004-06-09\\ 11 | FLWO & filter: $B$ & 600 s & 2009-11-18\\ 12 | \enddata 13 | 14 | \end{deluxetable} 15 | -------------------------------------------------------------------------------- /t/no_data_cds.dat: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Title: Spitzer Observations of NGC 1333: A Study of Structure and Evolution 6 | in a Nearby Embedded Cluster 7 | Authors: Gutermuth R.A., Myers P.C., Megeath S.T., Allen L.E., Pipher J.L., 8 | Muzerolle J., Porras A., Winston E., Fazio G. 9 | Table: Spitzer-identified YSOs: Addendum 10 | ================================================================================ 11 | Byte-by-byte Description of file: datafile3.txt 12 | -------------------------------------------------------------------------------- 13 | Bytes Format Units Label Explanations 14 | -------------------------------------------------------------------------------- 15 | 1- 3 I3 --- Index Running identification number 16 | 5- 6 I2 h RAh Hour of Right Ascension (J2000) 17 | 8- 9 I2 min RAm Minute of Right Ascension (J2000) 18 | 11- 15 F5.2 s RAs Second of Right Ascension (J2000) 19 | - continuation of description 20 | 17 A1 --- DE- Sign of the Declination (J2000) 21 | 18- 19 I2 deg DEd Degree of Declination (J2000) 22 | 21- 22 I2 arcmin DEm Arcminute of Declination (J2000) 23 | 24- 27 F4.1 arcsec DEs Arcsecond of Declination (J2000) 24 | 29- 68 A40 --- Match Literature match 25 | 70- 75 A6 --- Class Source classification (1) 26 | 77-80 F4.2 mag AK ? The K band extinction (2) 27 | 82-86 F5.2 --- Fit ? Fit of IRAC photometry (3) 28 | -------------------------------------------------------------------------------- 29 | Note (1): Asterisks mark "deeply embedded" sources with questionable IRAC 30 | colors or incomplete IRAC photometry and relatively bright 31 | MIPS 24 micron photometry. 32 | Note (2): Only provided for sources with valid JHK_S_ photometry. 33 | Note (3): Defined as the slope of the linear least squares fit to the 34 | 3.6 - 8.0 micron SEDs in log{lambda} F_{lambda} vs log{lambda} space. 
35 | Extinction is not accounted for in these values. High extinction can 36 | bias Fit to higher values. 37 | -------------------------------------------------------------------------------- 38 | -------------------------------------------------------------------------------- /t/no_data_daophot.dat: -------------------------------------------------------------------------------- 1 | #K MERGERAD = INDEF scaleunit %-23.7g 2 | #N ID XCENTER YCENTER MAG MERR MSKY NITER \ 3 | #U ## pixels pixels magnitudes magnitudes counts ## \ 4 | #F %-9d %-10.3f %-10.3f %-12.3f %-14.3f %-15.7g %-6d # 5 | #N SHARPNESS CHI PIER PERROR \ 6 | #U ## ## ## perrors \ 7 | #F %-23.3f %-12.3f %-6d %-13s # 8 | -------------------------------------------------------------------------------- /t/no_data_ipac.dat: -------------------------------------------------------------------------------- 1 | \catalog = sao 2 | \date = "Wed Sp 20 09:48:36 1995" 3 | \mykeyword = 'Another way for defining keyvalue string' 4 | \ This is an example of a valid comment. 5 | \ The 2nd data line is used to verify the exact column parsing 6 | \ (unclear if this is a valid for the IPAC format) 7 | | ra | dec | sai |-----v2---| sptype | 8 | | real | real | int | real | char | 9 | | unit | unit | unit | unit | ergs | 10 | | null | null | null | null | -999 | 11 | -------------------------------------------------------------------------------- /t/no_data_with_header.dat: -------------------------------------------------------------------------------- 1 | a b c 2 | -------------------------------------------------------------------------------- /t/no_data_without_header.dat: -------------------------------------------------------------------------------- 1 | # blank data table 2 | 3 | -------------------------------------------------------------------------------- /t/short.rdb: -------------------------------------------------------------------------------- 1 | 2 | # blank lines 3 | 4 | agasc_id n_noids n_obs 5 | N N N 6 | 115345072 1 1 7 | # comment 8 | 335416352 3 8 9 | 266612160 1 1 10 | 645803280 1 1 11 | 117309912 1 1 12 | 114950920 1 1 13 | 335025040 2 24 14 | 15 | -------------------------------------------------------------------------------- /t/short.tab: -------------------------------------------------------------------------------- 1 | agasc_id n_noids n_obs 2 | 115345072 1 1 3 | 335416352 3 8 4 | 266612160 1 1 5 | 645803280 1 1 6 | 117309912 1 1 7 | 114950920 1 1 8 | 335025040 2 24 9 | -------------------------------------------------------------------------------- /t/simple.txt: -------------------------------------------------------------------------------- 1 | 'test 1a' test2 test3 test4 2 | # fun1 fun2 fun3 fun4 fun5 3 | top1 top2 top3 top4 4 | hat1 hat2 hat3 hat4 5 | -------------------------------------------------------------------------------- /t/simple2.txt: -------------------------------------------------------------------------------- 1 | obsid | redshift | X | Y | object | rad 2 | 3102 | 0.32 | 4167 | 4085 | Q1250+568-A | 9 3 | 3102 | 0.32 | 4706 | 3916 | Q1250+568-B | 14 4 | 877 | 0.22 | 4378 | 3892 | 'Source 82' | 12.5 5 | -------------------------------------------------------------------------------- /t/simple3.txt: -------------------------------------------------------------------------------- 1 | obsid|redshift|X|Y|object|rad 2 | 877|0.22|4378|3892|'Sou,rce82'|12.5 3 | 3102|0.32|4167|4085|Q1250+568-A|9 4 | -------------------------------------------------------------------------------- /t/simple4.txt: 
-------------------------------------------------------------------------------- 1 | 3102 | 0.32 | 4167 | 4085 | Q1250+568-A | 9 2 | 3102 | 0.32 | 4706 | 3916 | Q1250+568-B | 14 3 | 877 | 0.22 | 4378 | 3892 | 'Source 82' | 12.5 4 | -------------------------------------------------------------------------------- /t/simple5.txt: -------------------------------------------------------------------------------- 1 | # Purposely make an ill-formed data file (in last row) 2 | 3102 | 0.32 | 4167 | 4085 | Q1250+568-A | 9 3 | 3102 | 0.32 | 4706 | 3916 | Q1250+568-B | 14 4 | 877 | 4378 | 3892 | 'Source 82' | 12.5 5 | -------------------------------------------------------------------------------- /t/space_delim_blank_lines.txt: -------------------------------------------------------------------------------- 1 | obsid offset x y name oaa 2 | 3 | 3102 0.32 4167 4085 Q1250+568-A 9 4 | 3102 0.32 4706 3916 Q1250+568-B 14 5 | 877 0.22 4378 3892 "Source 82" 12.5 6 | 7 | 8 | 9 | -------------------------------------------------------------------------------- /t/space_delim_no_header.dat: -------------------------------------------------------------------------------- 1 | 1 3.4 hello 2 | 2 6.4 world 3 | -------------------------------------------------------------------------------- /t/space_delim_no_names.dat: -------------------------------------------------------------------------------- 1 | 1 2 2 | 3 4 3 | -------------------------------------------------------------------------------- /t/test5.dat: -------------------------------------------------------------------------------- 1 | # whitespace separated with lines to skip 2 | ------------------------------------------ 3 | zabs1.nh p1.gamma p1.ampl statname statval 4 | ------------------------------------------ 5 | 0.095196313612 1.29238107724 0.000709438701165 chi2xspecvar 455.385700456 6 | 0.0898827896112 1.27317260145 0.000703680688865 cstat 450.402806957 7 | 0.0845373292976 1.26032264432 0.000697817633266 chi2constvar 427.888401816 8 | 0.0813955290921 1.25278166998 0.000694773889339 chi2modvar 422.655226097 9 | 0.0837813193374 1.26108631851 0.000697168659777 cash -582096.060739 10 | 0.0877788113875 1.27498889089 0.000700963122261 chi2gehrels 336.255262001 11 | 0.0886095763534 1.27831934755 0.000702152760295 chi2datavar 427.87097831 12 | 0.0886062881606 1.27831561342 0.000702152575029 chi2xspecvar 427.870972282 13 | 0.0837839157029 1.26109967845 0.000697177275745 cstat 423.869897301 14 | 0.0848856095291 1.26216881055 0.000697245258092 chi2constvar 495.692552206 15 | 0.0834040516574 1.25034791909 0.000694504650678 chi2modvar 448.488349352 16 | 0.0863275923367 1.25920642303 0.000697302969088 cash -581109.867406 17 | 0.0910593842926 1.27434931431 0.000701687557965 chi2gehrels 362.107884887 18 | 0.0925984360666 1.27857224315 0.000703586368322 chi2datavar 467.653055046 19 | 0.0926057133247 1.27858701992 0.000703594356786 chi2xspecvar 467.653060082 20 | 0.0863257498551 1.259192667 0.000697300429366 cstat 451.536967896 21 | 0.0880503692681 1.2588289844 0.000698437310968 chi2constvar 439.513117058 22 | 0.0852962921333 1.25214407357 0.000696223065852 chi2modvar 443.456904712 23 | -------------------------------------------------------------------------------- /t/vizier/ReadMe: -------------------------------------------------------------------------------- 1 | J/A+A/511/A56 Abundances of five open clusters (Pancino+, 2010) 2 | ================================================================================ 3 | Chemical abundance analysis of the open clusters Cr 
110, NGC 2420, NGC 7789, 4 | and M 67 (NGC 2682). 5 | Pancino E., Carrera R., Rossetti, E., Gallart C. 6 | 7 | =2010A&A...511A..56P 8 | ================================================================================ 9 | ADC_Keywords: Clusters, open ; Stars, giant ; Equivalent widths ; Spectroscopy 10 | Keywords: stars: abundances - Galaxy: disk - 11 | open clusters and associations: general 12 | 13 | Abstract: 14 | The present number of Galactic open clusters that have high resolution 15 | abundance determinations, not only of [Fe/H], but also of other key 16 | elements, is largely insufficient to enable a clear modeling of the 17 | Galactic disk chemical evolution. To increase the number of Galactic 18 | open clusters with high quality measurements, we obtained high 19 | resolution (R~30000), high quality (S/N~50-100 per pixel), echelle 20 | spectra with the fiber spectrograph FOCES, at Calar Alto, Spain, for 21 | three red clump stars in each of five Open Clusters. We used the 22 | classical equivalent width analysis method to obtain accurate 23 | abundances of sixteen elements: Al, Ba, Ca, Co, Cr, Fe, La, Mg, Na, 24 | Nd, Ni, Sc, Si, Ti, V, and Y. We also derived the oxygen abundance 25 | using spectral synthesis of the 6300{AA} forbidden line. 26 | 27 | Description: 28 | Atomic data and equivalent widths for 15 red clump giants in 5 open 29 | clusters: Cr 110, NGC 2099, NGC 2420, M 67, NGC 7789. 30 | 31 | File Summary: 32 | -------------------------------------------------------------------------------- 33 | FileName Lrecl Records Explanations 34 | -------------------------------------------------------------------------------- 35 | ReadMe 80 . This file 36 | table1.dat 103 15 Observing logs and programme stars information 37 | table5.dat 56 5265 Atomic data and equivalent widths 38 | -------------------------------------------------------------------------------- 39 | 40 | See also: 41 | J/A+A/455/271 : Abundances of red giants in NGC 6441 (Gratton+, 2006) 42 | J/A+A/464/953 : Abundances of red giants in NGC 6441 (Gratton+, 2007) 43 | J/A+A/505/117 : Abund. 
of red giants in 15 globular clusters (Carretta+, 2009) 44 | 45 | Byte-by-byte Description of file: table1.dat 46 | -------------------------------------------------------------------------------- 47 | Bytes Format Units Label Explanations 48 | -------------------------------------------------------------------------------- 49 | 1- 7 A7 --- Cluster Cluster name 50 | 9- 12 I4 --- Star Star number within the cluster 51 | 14- 15 I2 h RAh Right ascension (J2000) 52 | 17- 18 I2 min RAm Right ascension (J2000) 53 | 20- 23 F4.1 s RAs Right ascension (J2000) 54 | 25 A1 --- DE- Declination sign (J2000) 55 | 26- 27 I2 deg DEd Declination (J2000) 56 | 29- 30 I2 arcmin DEm Declination (J2000) 57 | 32- 35 F4.1 arcsec DEs Declination (J2000) 58 | 37- 41 F5.2 mag Bmag B magnitude 59 | 43- 47 F5.2 mag Vmag V magnitude 60 | 49- 53 F5.2 mag Icmag ?=- Cousins I magnitude 61 | 55- 59 F5.2 mag Rmag ?=- R magnitude 62 | 61- 65 F5.2 mag Ksmag Ks magnitude 63 | 67 I1 --- NExp Number of exposures 64 | 69- 73 I5 s TExp Total exposure time 65 | 75- 77 I3 --- S/N Signal-to-nois ratio 66 | 79-103 A25 --- SName Simbad name 67 | -------------------------------------------------------------------------------- 68 | 69 | Byte-by-byte Description of file: table5.dat 70 | -------------------------------------------------------------------------------- 71 | Bytes Format Units Label Explanations 72 | -------------------------------------------------------------------------------- 73 | 1- 7 A7 --- Cluster Cluster name 74 | 9- 12 I4 --- Star Star number within the cluster 75 | 14- 20 F7.2 0.1nm Wave Wavelength in Angstroms 76 | 22- 23 A2 --- El Element name 77 | 24 I1 --- ion Ionization stage (1 for neutral element) 78 | 26- 30 F5.2 eV chiEx Excitation potential 79 | 32- 37 F6.2 --- loggf Logarithm of the oscillator strength 80 | 39- 43 F5.1 0.1pm EW ?=-9.9 Equivalent width (in mA) 81 | 46- 49 F4.1 0.1pm e_EW ?=-9.9 rms uncertainty on EW 82 | 51- 56 F6.3 --- Q ?=-9.999 DAOSPEC quality parameter Q 83 | (large values are bad) 84 | -------------------------------------------------------------------------------- 85 | 86 | Acknowledgements: 87 | Elena Pancino, elena.pancino(at)oabo.inaf.it 88 | ================================================================================ 89 | (End) Elena Pancino [INAF-OABo, Italy], Patricia Vannier [CDS] 23-Nov-2009 90 | -------------------------------------------------------------------------------- /t/vizier/table1.dat: -------------------------------------------------------------------------------- 1 | Cr110 2108 06 38 52.5 +02 01 58.4 14.79 13.35 --- --- 9.76 6 16200 70 Cl* Collinder 110 DI 2108 2 | Cr110 2129 06 38 41.1 +02 01 05.5 15.00 13.66 12.17 12.94 10.29 7 18900 70 Cl* Collinder 110 DI 2129 3 | Cr110 3144 06 38 30.3 +02 03 03.0 14.80 13.49 12.04 12.72 10.19 6 16195 65 Cl* Collinder 110 DI 3144 4 | NGC2099 67 05 52 16.6 +32 34 45.6 12.38 11.12 9.87 --- 8.17 3 3600 95 NGC 2099 67 5 | NGC2099 148 05 52 08.1 +32 30 33.1 12.36 11.09 --- --- 8.05 3 3600 105 NGC 2099 148 6 | NGC2099 508 05 52 33.2 +32 27 43.5 12.24 10.98 --- --- 7.92 3 3900 85 NGC 2099 508 7 | NGC2420 41 07 38 06.2 +21 36 54.7 13.75 12.67 11.61 12.13 10.13 5 9000 70 NGC 2420 41 8 | NGC2420 76 07 38 15.5 +21 38 01.8 13.65 12.66 11.65 12.14 10.31 5 9000 75 NGC 2420 76 9 | NGC2420 174 07 38 26.9 +21 38 24.8 13.41 12.40 --- --- 9.98 5 9000 60 NGC 2420 174 10 | NGC2682 141 08 51 22.8 +11 48 01.7 11.59 10.48 9.40 9.92 7.92 3 2700 85 Cl* NGC 2682 MMU 141 11 | NGC2682 223 08 51 43.9 +11 56 42.3 11.68 10.58 9.50 10.02 8.00 3 2700 85 
Cl* NGC 2682 MMU 223 12 | NGC2682 286 08 52 18.6 +11 44 26.3 11.53 10.47 9.43 9.93 7.92 3 2700 105 Cl* NGC 2682 MMU 286 13 | NGC7789 5237 23 56 50.6 +56 49 20.9 13.92 12.81 11.52 --- 9.89 5 9000 70 Cl* NGC 7789 G 5237 14 | NGC7789 7840 23 57 19.3 +56 40 51.5 14.03 12.82 11.49 --- 9.83 6 9000 75 Cl* NGC 7789 G 7840 15 | NGC7789 8556 23 57 27.6 +56 45 39.2 14.18 12.97 11.65 --- 10.03 3 5400 45 Cl* NGC 7789 G 8556 16 | -------------------------------------------------------------------------------- /t/vots_spec.dat: -------------------------------------------------------------------------------- 1 | #################################################################################### 2 | ## 3 | ## VOTable-Simple Specification 4 | ## 5 | ## This is the specification of the VOTable-Simple (VOTS) format, given as an 6 | ## example data table with comments and references. This data table format is 7 | ## intented to provide a way of specifying metadata and data for simple tabular 8 | ## data sets. This specification is intended as a subset of the VOTable data 9 | ## model and allow easy generation of a VOTable-compliant data structure. This 10 | ## provides a uniform starting point for generating table documentation and 11 | ## performing database table creation and ingest. 12 | ## 13 | ## A python application is available which uses the STILTS java package to 14 | ## convert from a VOTS format to any of the (many) output formats supported by 15 | ## STILTS. This application can also generate a documentation file (in 16 | ## reStructured Text format) or a Django model definition from a VOTS table. 17 | ## 18 | ## Key VOTable and STILTS references: 19 | ## Full spec: http://www.ivoa.net/Documents/latest/VOT.html 20 | ## Datatypes: http://www.ivoa.net/Documents/REC/VOTable/VOTable-20040811.html#ToC11 21 | ## FIELD def: http://www.ivoa.net/Documents/REC/VOTable/VOTable-20040811.html#ToC25 22 | ## STILTS : http://www.star.bris.ac.uk/~mbt/stilts/ 23 | ## 24 | ## The VOTable-Simple format consists of header information followed by the tabular 25 | ## data elements. The VOTS header lines are all preceded by a single '#' character. 26 | ## Comments are preceded by '##' at the beginning of a line. 27 | ## 28 | ## The VOTS header defines the metadata associated with the table. In the 29 | ## VOTable-Simple format words in all CAPS (followed by ::) refer to the 30 | ## corresponding metadata elements in the VOTable specification. For instance 31 | ## the DESCRIPTION:: keyword precedes the lines that are used in the VOTable 32 | ## element. The COOSYS::, PARAM::, and FIELD:: keywords are 33 | ## each followed by a whitespace-delimited table that defines the corresponding 34 | ## VOTable elements and attributes. 35 | ## 36 | ## The actual table data must follow the header and consist of space or tab delimited 37 | ## data fields. The chosen delimiter must be used consistently througout the table. 38 | ## 39 | ##---------------------------------------------------------------------------------- 40 | ## Table description, corresponding to the VOTable TABLE::DESCRIPTION element. 41 | ##---------------------------------------------------------------------------------- 42 | # DESCRIPTION:: 43 | # This is a sample table that shows a proposed format for generation of tables 44 | # for the C-COSMOS collaboration. 
This format is compatible with simple 'awk' or 45 | # S-mongo style processing but also allows full self-documentation and conversion 46 | # to more robust data formats (FITS, VOTable, postgres database ingest, etc). 47 | # 48 | ##---------------------------------------------------------------------------------- 49 | ## Coordinate system specification COOSYS. This is a "future" feature, as the 50 | ## current conversion code does not use this field. 51 | ##---------------------------------------------------------------------------------- 52 | # COOSYS:: 53 | # ID equinox epoch system 54 | # J2000 J2000. J2000. eq_FK5 55 | # 56 | ##---------------------------------------------------------------------------------- 57 | ## Set the TABLE::PARAM values, which are values that apply for the entire table. 58 | ##---------------------------------------------------------------------------------- 59 | # PARAM:: 60 | # name datatype value description 61 | # version string 1.1 'Table version' 62 | # date string 2007/12/01 'Table release date' 63 | # 64 | ##---------------------------------------------------------------------------------- 65 | ## Define the column names via the FIELD element. The attributes 'name', 66 | ## 'datatype', 'unit', and 'description' are required. Optional attributes are: 67 | ## 'width', 'precision', 'ucd', 'utype', 'ref', and 'type'. 68 | ## See http://www.ivoa.net/Documents/REC/VOTable/VOTable-20040811.html#ToC25 for 69 | ## the VOTable defintions. 70 | ## Allowed values of datatype are: 71 | ## boolean, unsignedByte, short, int, long, string, float, double 72 | ## Units: (from http://www.ivoa.net/Documents/REC/VOTable/VOTable-20040811.html#sec:unit) 73 | ## The quantities in a column of the table may be expressed in some physical 74 | ## unit, which is specified by the unit attribute of the FIELD. The syntax of 75 | ## the unit string is defined in reference [3]; it is basically written as a 76 | ## string without blanks or spaces, where the symbols . or * indicate a 77 | ## multiplication, / stands for the division, and no special symbol is 78 | ## required for a power. Examples are unit="m2" for m2, unit="cm-2.s-1.keV-1" 79 | ## for cm-2s-1keV-1, or unit="erg/s" for erg s-1. The references [3] provide 80 | ## also the list of the valid symbols, which is essentially restricted to the 81 | ## Systeme International (SI) conventions, plus a few astronomical extensions 82 | ## concerning units used for time, angular, distance and energy measurements. 83 | ##---------------------------------------------------------------------------------- 84 | # FIELD:: 85 | # name datatype unit ucd description 86 | # id int '' 'meta.id' 'C-COSMOS short identifier number' 87 | # name string '' '' 'C-COSMOS long identifier name' 88 | # ra double deg 'meta.cryptic' 'Right Ascension' 89 | # dec double deg '' Declination 90 | # flux float erg/cm2/s '' Flux 91 | # 92 | ##---------------------------------------------------------------------------------- 93 | ## Now the actual field data in the order specified by the FIELD:: list. 94 | ## The data fields can be separated by tabs or spaces. If using spaces, 95 | ## any fields that contain a space must be enclosed in single quotes. 
96 | ## 97 | 12 'CXOCS J193423+022312' 150.01212 2.52322 1.21e-13 98 | 13 'CXOCS J193322+024444' 150.02323 2.54444 1.21e-14 99 | 14 'CXOCS J195555+025555' 150.04444 2.55555 1.21e-15 100 | -------------------------------------------------------------------------------- /t/whitespace.dat: -------------------------------------------------------------------------------- 1 | "quoted colname with tab inside" col2 col3 2 | val1 "val2 with tab" 2 3 | val3 val4 3 4 | -------------------------------------------------------------------------------- /test/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/taldcroft/asciitable/e8958bdb728c414cbd5294acf60a51159ada79a6/test/__init__.py -------------------------------------------------------------------------------- /test/common.py: -------------------------------------------------------------------------------- 1 | import asciitable 2 | 3 | if asciitable.has_numpy: 4 | numpy_cases = (True, False) 5 | else: 6 | numpy_cases = (False,) 7 | 8 | def has_numpy_and_not_has_numpy(func): 9 | """Perform tests that should work for has_numpy==True and has_numpy==False""" 10 | def wrap(): 11 | for numpy_case in numpy_cases: 12 | has_numpy = asciitable.has_numpy 13 | asciitable.has_numpy = numpy_case 14 | asciitable.core.has_numpy = numpy_case 15 | try: 16 | func(numpy=numpy_case) 17 | finally: 18 | asciitable.has_numpy = has_numpy 19 | asciitable.core.has_numpy = has_numpy 20 | wrap.__name__ = func.__name__ 21 | return wrap 22 | 23 | def has_numpy(func): 24 | """Tests that will only succeed if has_numpy == True""" 25 | def wrap(): 26 | for numpy_case in numpy_cases: 27 | if numpy_case is True: 28 | func(numpy=numpy_case) 29 | wrap.__name__ = func.__name__ 30 | return wrap 31 | 32 | -------------------------------------------------------------------------------- /test/test_fixedwidth.py: -------------------------------------------------------------------------------- 1 | import re 2 | import glob 3 | from nose.tools import * 4 | 5 | import asciitable 6 | if asciitable.has_numpy: 7 | import numpy as np 8 | from asciitable.core import io 9 | 10 | from test.common import has_numpy_and_not_has_numpy, has_numpy 11 | 12 | def assert_equal_splitlines(arg1, arg2): 13 | assert_equal(arg1.splitlines(), arg2.splitlines()) 14 | 15 | @has_numpy_and_not_has_numpy 16 | def test_read_normal(numpy): 17 | """Nice, typical fixed format table""" 18 | table = """ 19 | # comment (with blank line above) 20 | | Col1 | Col2 | 21 | | 1.2 | "hello" | 22 | | 2.4 |'s worlds| 23 | """ 24 | reader = asciitable.get_reader(Reader=asciitable.FixedWidth) 25 | dat = reader.read(table) 26 | assert_equal(reader.header.colnames, ('Col1', 'Col2')) 27 | assert_almost_equal(dat[1][0], 2.4) 28 | assert_equal(dat[0][1], '"hello"') 29 | assert_equal(dat[1][1], "'s worlds") 30 | 31 | @has_numpy_and_not_has_numpy 32 | def test_read_normal_names(numpy): 33 | """Nice, typical fixed format table with col names provided""" 34 | table = """ 35 | # comment (with blank line above) 36 | | Col1 | Col2 | 37 | | 1.2 | "hello" | 38 | | 2.4 |'s worlds| 39 | """ 40 | reader = asciitable.get_reader(Reader=asciitable.FixedWidth, 41 | names=('name1', 'name2')) 42 | dat = reader.read(table) 43 | assert_equal(reader.header.colnames, ('name1', 'name2')) 44 | assert_almost_equal(dat[1][0], 2.4) 45 | 46 | @has_numpy_and_not_has_numpy 47 | def test_read_normal_names_include(numpy): 48 | """Nice, typical fixed format table with col names provided""" 49 | table = 
""" 50 | # comment (with blank line above) 51 | | Col1 | Col2 | Col3 | 52 | | 1.2 | "hello" | 3 | 53 | | 2.4 |'s worlds| 7 | 54 | """ 55 | reader = asciitable.get_reader(Reader=asciitable.FixedWidth, 56 | names=('name1', 'name2', 'name3'), 57 | include_names=('name1', 'name3')) 58 | dat = reader.read(table) 59 | assert_equal(reader.header.colnames, ('name1', 'name3')) 60 | assert_almost_equal(dat[1][0], 2.4) 61 | assert_equal(dat[0][1], 3) 62 | 63 | @has_numpy_and_not_has_numpy 64 | def test_read_normal_exclude(numpy): 65 | """Nice, typical fixed format table with col name excluded""" 66 | table = """ 67 | # comment (with blank line above) 68 | | Col1 | Col2 | 69 | | 1.2 | "hello" | 70 | | 2.4 |'s worlds| 71 | """ 72 | reader = asciitable.get_reader(Reader=asciitable.FixedWidth, 73 | exclude_names=('Col1',)) 74 | dat = reader.read(table) 75 | assert_equal(reader.header.colnames, ('Col2',)) 76 | assert_equal(dat[1][0], "'s worlds") 77 | 78 | @has_numpy_and_not_has_numpy 79 | def test_read_weird(numpy): 80 | """Weird input table with data values chopped by col extent """ 81 | table = """ 82 | Col1 | Col2 | 83 | 1.2 "hello" 84 | 2.4 sdf's worlds 85 | """ 86 | reader = asciitable.get_reader(Reader=asciitable.FixedWidth) 87 | dat = reader.read(table) 88 | assert_equal(reader.header.colnames, ('Col1', 'Col2')) 89 | assert_almost_equal(dat[1][0], 2.4) 90 | assert_equal(dat[0][1], '"hel') 91 | assert_equal(dat[1][1], "df's wo") 92 | 93 | @has_numpy_and_not_has_numpy 94 | def test_read_double(numpy): 95 | """Table with double delimiters""" 96 | table = """ 97 | || Name || Phone || TCP|| 98 | | John | 555-1234 |192.168.1.10X| 99 | | Mary | 555-2134 |192.168.1.12X| 100 | | Bob | 555-4527 | 192.168.1.9X| 101 | """ 102 | dat = asciitable.read(table, Reader=asciitable.FixedWidth, guess=False) 103 | assert_equal(tuple(dat.dtype.names), ('Name', 'Phone', 'TCP')) 104 | assert_equal(dat[1][0], "Mary") 105 | assert_equal(dat[0][1], "555-1234") 106 | assert_equal(dat[2][2], "192.168.1.9") 107 | 108 | @has_numpy_and_not_has_numpy 109 | def test_read_space_delimiter(numpy): 110 | """Table with space delimiter""" 111 | table = """ 112 | Name --Phone- ----TCP----- 113 | John 555-1234 192.168.1.10 114 | Mary 555-2134 192.168.1.12 115 | Bob 555-4527 192.168.1.9 116 | """ 117 | dat = asciitable.read(table, Reader=asciitable.FixedWidth, guess=False, delimiter=' ') 118 | assert_equal(tuple(dat.dtype.names), ('Name', '--Phone-', '----TCP-----')) 119 | assert_equal(dat[1][0], "Mary") 120 | assert_equal(dat[0][1], "555-1234") 121 | assert_equal(dat[2][2], "192.168.1.9") 122 | 123 | @has_numpy_and_not_has_numpy 124 | def test_read_no_header_autocolumn(numpy): 125 | """Table with no header row and auto-column naming""" 126 | table = """ 127 | | John | 555-1234 |192.168.1.10| 128 | | Mary | 555-2134 |192.168.1.12| 129 | | Bob | 555-4527 | 192.168.1.9| 130 | """ 131 | dat = asciitable.read(table, Reader=asciitable.FixedWidth, guess=False, 132 | header_start=None, data_start=0) 133 | assert_equal(tuple(dat.dtype.names), ('col1', 'col2', 'col3')) 134 | assert_equal(dat[1][0], "Mary") 135 | assert_equal(dat[0][1], "555-1234") 136 | assert_equal(dat[2][2], "192.168.1.9") 137 | 138 | @has_numpy_and_not_has_numpy 139 | def test_read_no_header_names(numpy): 140 | """Table with no header row and with col names provided. 
Second 141 | and third rows also have hanging spaces after final |.""" 142 | table = """ 143 | | John | 555-1234 |192.168.1.10| 144 | | Mary | 555-2134 |192.168.1.12| 145 | | Bob | 555-4527 | 192.168.1.9| 146 | """ 147 | dat = asciitable.read(table, Reader=asciitable.FixedWidth, guess=False, 148 | header_start=None, data_start=0, 149 | names=('Name', 'Phone', 'TCP')) 150 | assert_equal(tuple(dat.dtype.names), ('Name', 'Phone', 'TCP')) 151 | assert_equal(dat[1][0], "Mary") 152 | assert_equal(dat[0][1], "555-1234") 153 | assert_equal(dat[2][2], "192.168.1.9") 154 | 155 | @has_numpy_and_not_has_numpy 156 | def test_read_no_header_autocolumn_NoHeader(numpy): 157 | """Table with no header row and auto-column naming""" 158 | table = """ 159 | | John | 555-1234 |192.168.1.10| 160 | | Mary | 555-2134 |192.168.1.12| 161 | | Bob | 555-4527 | 192.168.1.9| 162 | """ 163 | dat = asciitable.read(table, Reader=asciitable.FixedWidthNoHeader) 164 | assert_equal(tuple(dat.dtype.names), ('col1', 'col2', 'col3')) 165 | assert_equal(dat[1][0], "Mary") 166 | assert_equal(dat[0][1], "555-1234") 167 | assert_equal(dat[2][2], "192.168.1.9") 168 | 169 | @has_numpy_and_not_has_numpy 170 | def test_read_no_header_names_NoHeader(numpy): 171 | """Table with no header row and with col names provided. Second 172 | and third rows also have hanging spaces after final |.""" 173 | table = """ 174 | | John | 555-1234 |192.168.1.10| 175 | | Mary | 555-2134 |192.168.1.12| 176 | | Bob | 555-4527 | 192.168.1.9| 177 | """ 178 | dat = asciitable.read(table, Reader=asciitable.FixedWidthNoHeader, 179 | names=('Name', 'Phone', 'TCP')) 180 | assert_equal(tuple(dat.dtype.names), ('Name', 'Phone', 'TCP')) 181 | assert_equal(dat[1][0], "Mary") 182 | assert_equal(dat[0][1], "555-1234") 183 | assert_equal(dat[2][2], "192.168.1.9") 184 | 185 | @has_numpy_and_not_has_numpy 186 | def test_read_col_starts(numpy): 187 | """Table with no delimiter with column start and end values specified.""" 188 | table = """ 189 | # 5 9 17 18 28 190 | # | | || | 191 | John 555- 1234 192.168.1.10 192 | Mary 555- 2134 192.168.1.12 193 | Bob 555- 4527 192.168.1.9 194 | """ 195 | dat = asciitable.read(table, Reader=asciitable.FixedWidthNoHeader, 196 | names=('Name', 'Phone', 'TCP'), 197 | col_starts=(0, 9, 18), 198 | col_ends=(5, 17, 28), 199 | ) 200 | assert_equal(tuple(dat.dtype.names), ('Name', 'Phone', 'TCP')) 201 | assert_equal(dat[0][1], "555- 1234") 202 | assert_equal(dat[1][0], "Mary") 203 | assert_equal(dat[1][2], "192.168.1.") 204 | assert_equal(dat[2][2], "192.168.1") # col_end=28 cuts this column off 205 | 206 | 207 | table = """\ 208 | | Col1 | Col2 | Col3 | Col4 | 209 | | 1.2 | "hello" | 1 | a | 210 | | 2.4 | 's worlds | 2 | 2 | 211 | """ 212 | dat = asciitable.read(table, Reader=asciitable.FixedWidth) 213 | 214 | @has_numpy_and_not_has_numpy 215 | def test_write_normal(numpy): 216 | """Write a table as a normal fixed width table.""" 217 | out = io.StringIO() 218 | asciitable.write(dat, out, Writer=asciitable.FixedWidth) 219 | assert_equal_splitlines(out.getvalue(), """\ 220 | | Col1 | Col2 | Col3 | Col4 | 221 | | 1.2 | "hello" | 1 | a | 222 | | 2.4 | 's worlds | 2 | 2 | 223 | """) 224 | 225 | @has_numpy_and_not_has_numpy 226 | def test_write_no_pad(numpy): 227 | """Write a table as a fixed width table with no padding.""" 228 | out = io.StringIO() 229 | asciitable.write(dat, out, Writer=asciitable.FixedWidth, delimiter_pad=None) 230 | assert_equal_splitlines(out.getvalue(), """\ 231 | |Col1| Col2|Col3|Col4| 232 | | 1.2| "hello"| 1| a| 233 | | 2.4|'s 
worlds| 2| 2| 234 | """) 235 | 236 | @has_numpy_and_not_has_numpy 237 | def test_write_no_bookend(numpy): 238 | """Write a table as a fixed width table with no bookend.""" 239 | out = io.StringIO() 240 | asciitable.write(dat, out, Writer=asciitable.FixedWidth, bookend=False) 241 | assert_equal_splitlines(out.getvalue(), """\ 242 | Col1 | Col2 | Col3 | Col4 243 | 1.2 | "hello" | 1 | a 244 | 2.4 | 's worlds | 2 | 2 245 | """) 246 | 247 | @has_numpy_and_not_has_numpy 248 | def test_write_no_delimiter(numpy): 249 | """Write a table as a fixed width table with no delimiter.""" 250 | out = io.StringIO() 251 | asciitable.write(dat, out, Writer=asciitable.FixedWidth, bookend=False, delimiter=None) 252 | assert_equal_splitlines(out.getvalue(), """\ 253 | Col1 Col2 Col3 Col4 254 | 1.2 "hello" 1 a 255 | 2.4 's worlds 2 2 256 | """) 257 | 258 | @has_numpy_and_not_has_numpy 259 | def test_write_noheader_normal(numpy): 260 | """Write a table as a normal fixed width table.""" 261 | out = io.StringIO() 262 | asciitable.write(dat, out, Writer=asciitable.FixedWidthNoHeader) 263 | assert_equal_splitlines(out.getvalue(), """\ 264 | | 1.2 | "hello" | 1 | a | 265 | | 2.4 | 's worlds | 2 | 2 | 266 | """) 267 | 268 | @has_numpy_and_not_has_numpy 269 | def test_write_noheader_no_pad(numpy): 270 | """Write a table as a fixed width table with no padding.""" 271 | out = io.StringIO() 272 | asciitable.write(dat, out, Writer=asciitable.FixedWidthNoHeader, delimiter_pad=None) 273 | assert_equal_splitlines(out.getvalue(), """\ 274 | |1.2| "hello"|1|a| 275 | |2.4|'s worlds|2|2| 276 | """) 277 | 278 | @has_numpy_and_not_has_numpy 279 | def test_write_noheader_no_bookend(numpy): 280 | """Write a table as a fixed width table with no bookend.""" 281 | out = io.StringIO() 282 | asciitable.write(dat, out, Writer=asciitable.FixedWidthNoHeader, bookend=False) 283 | assert_equal_splitlines(out.getvalue(), """\ 284 | 1.2 | "hello" | 1 | a 285 | 2.4 | 's worlds | 2 | 2 286 | """) 287 | 288 | @has_numpy_and_not_has_numpy 289 | def test_write_noheader_no_delimiter(numpy): 290 | """Write a table as a fixed width table with no delimiter.""" 291 | out = io.StringIO() 292 | asciitable.write(dat, out, Writer=asciitable.FixedWidthNoHeader, bookend=False, delimiter=None) 293 | assert_equal_splitlines(out.getvalue(), """\ 294 | 1.2 "hello" 1 a 295 | 2.4 's worlds 2 2 296 | """) 297 | 298 | @has_numpy_and_not_has_numpy 299 | def test_write_formats(numpy): 300 | """Write a table as a fixed width table with formats specified for some columns.""" 301 | out = io.StringIO() 302 | asciitable.write(dat, out, Writer=asciitable.FixedWidth, 303 | formats={'Col1': '%-8.3f', 'Col2': '%-15s'}) 304 | assert_equal_splitlines(out.getvalue(), """\ 305 | | Col1 | Col2 | Col3 | Col4 | 306 | | 1.200 | "hello" | 1 | a | 307 | | 2.400 | 's worlds | 2 | 2 | 308 | """) 309 | 310 | @has_numpy_and_not_has_numpy 311 | def test_read_twoline_normal(numpy): 312 | """Typical fixed format table with two header lines (with some cruft 313 | thrown in to test column positioning)""" 314 | table = """ 315 | Col1 Col2 316 | ---- --------- 317 | 1.2xx"hello" 318 | 2.4 's worlds 319 | """ 320 | dat = asciitable.read(table, Reader=asciitable.FixedWidthTwoLine) 321 | assert_equal(dat.dtype.names, ('Col1', 'Col2')) 322 | assert_almost_equal(dat[1][0], 2.4) 323 | assert_equal(dat[0][1], '"hello"') 324 | assert_equal(dat[1][1], "'s worlds") 325 | 
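# Illustrative sketch (assumes only the FixedWidthTwoLine defaults shown in
# the surrounding tests; the inline table and its values are hypothetical):
# the default header/position/data line layout also accepts a '='-underlined
# header, the same convention read with explicit options in the ReST test below.
@has_numpy_and_not_has_numpy
def test_read_twoline_equals_underline_sketch(numpy):
    table = """
Col1  Col2
====  ====
 1.2   3.4
 5.6   7.8
"""
    dat = asciitable.read(table, Reader=asciitable.FixedWidthTwoLine, numpy=numpy)
    assert_equal(dat.dtype.names, ('Col1', 'Col2'))
    assert_almost_equal(dat[1][1], 7.8)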
326 | @has_numpy_and_not_has_numpy 327 | def test_read_twoline_ReST(numpy): 328 | """Read a reStructuredText table""" 329 | table = """ 330 | ======= =========== 331 | Col1 Col2 332 | ======= =========== 333 | 1.2 "hello" 334 | 2.4 's worlds 335 | ======= =========== 336 | """ 337 | dat = asciitable.read(table, Reader=asciitable.FixedWidthTwoLine, 338 | header_start=1, position_line=2, data_end=-1) 339 | assert_equal(dat.dtype.names, ('Col1', 'Col2')) 340 | assert_almost_equal(dat[1][0], 2.4) 341 | assert_equal(dat[0][1], '"hello"') 342 | assert_equal(dat[1][1], "'s worlds") 343 | 344 | @has_numpy_and_not_has_numpy 345 | def test_read_twoline_human(numpy): 346 | """Read a text table designed for humans and test having the position line 347 | before the header line""" 348 | table = """ 349 | +------+----------+ 350 | | Col1 | Col2 | 351 | +------|----------+ 352 | | 1.2 | "hello" | 353 | | 2.4 | 's worlds| 354 | +------+----------+ 355 | """ 356 | dat = asciitable.read(table, Reader=asciitable.FixedWidthTwoLine, delimiter='+', 357 | header_start=1, position_line=0, data_start=3, data_end=-1) 358 | assert_equal(dat.dtype.names, ('Col1', 'Col2')) 359 | assert_almost_equal(dat[1][0], 2.4) 360 | assert_equal(dat[0][1], '"hello"') 361 | assert_equal(dat[1][1], "'s worlds") 362 | 363 | @has_numpy_and_not_has_numpy 364 | def test_write_twoline_normal(numpy): 365 | """Write a table as a normal fixed width table.""" 366 | out = io.StringIO() 367 | asciitable.write(dat, out, Writer=asciitable.FixedWidthTwoLine) 368 | assert_equal_splitlines(out.getvalue(), """\ 369 | Col1 Col2 Col3 Col4 370 | ---- --------- ---- ---- 371 | 1.2 "hello" 1 a 372 | 2.4 's worlds 2 2 373 | """) 374 | 375 | 376 | @has_numpy_and_not_has_numpy 377 | def test_write_twoline_no_pad(numpy): 378 | """Write a table as a fixed width table with a single-space delimiter pad and '=' as the position char.""" 379 | out = io.StringIO() 380 | asciitable.write(dat, out, Writer=asciitable.FixedWidthTwoLine, delimiter_pad=' ', 381 | position_char='=') 382 | assert_equal_splitlines(out.getvalue(), """\ 383 | Col1 Col2 Col3 Col4 384 | ==== ========= ==== ==== 385 | 1.2 "hello" 1 a 386 | 2.4 's worlds 2 2 387 | """) 388 | 389 | @has_numpy_and_not_has_numpy 390 | def test_write_twoline_no_bookend(numpy): 391 | """Write a table as a fixed width table with bookends and '|' as the delimiter.""" 392 | out = io.StringIO() 393 | asciitable.write(dat, out, Writer=asciitable.FixedWidthTwoLine, bookend=True, delimiter='|') 394 | assert_equal_splitlines(out.getvalue(), """\ 395 | |Col1| Col2|Col3|Col4| 396 | |----|---------|----|----| 397 | | 1.2| "hello"| 1| a| 398 | | 2.4|'s worlds| 2| 2| 399 | """) 400 | 401 | -------------------------------------------------------------------------------- /test/test_memory.py: -------------------------------------------------------------------------------- 1 | import re 2 | import glob 3 | from nose.tools import * 4 | 5 | import asciitable 6 | if asciitable.has_numpy: 7 | import numpy as np 8 | from test.common import has_numpy_and_not_has_numpy 9 | 10 | def _test_values_equal(data, mem_data, numpy): 11 | for colname in data.dtype.names: 12 | matches = data[colname] == mem_data[colname] 13 | if numpy: 14 | assert(matches.all()) 15 | else: 16 | assert(matches) 17 | 18 | @has_numpy_and_not_has_numpy 19 | def test_memory_from_table(numpy): 20 | table = asciitable.get_reader(numpy=numpy, Reader=asciitable.Daophot) 21 | data = table.read('t/daophot.dat') 22 | 23 | mem_table = asciitable.get_reader(Reader=asciitable.Memory, numpy=numpy) 24 | mem_data = mem_table.read(data) 25 | assert(data.dtype.names == mem_data.dtype.names) 26 | _test_values_equal(data, mem_data, numpy) 27 | 28 | mem_data = mem_table.read(mem_table) 29 | assert(data.dtype.names == 
mem_data.dtype.names) 30 | _test_values_equal(data, mem_data, numpy) 31 | 32 | @has_numpy_and_not_has_numpy 33 | def test_memory_from_LOL(numpy): 34 | data = [[1, 2, 3], [4, 5.2, 6.1], [8, 9, 'hello']] 35 | mem_table = asciitable.get_reader(Reader=asciitable.Memory, numpy=numpy) 36 | mem_data = mem_table.read(data) 37 | print(mem_data.dtype.names) 38 | assert(mem_data.dtype.names == ('col1', 'col2', 'col3')) 39 | if numpy: 40 | assert(mem_data[0][0] == 1) 41 | assert(mem_data[0][1] == 2) 42 | assert(mem_data[0][2] == '3') 43 | assert((mem_data['col2'] == np.array([2, 5.2, 9])).all()) 44 | assert((mem_data['col3'] == np.array([3, 6.1, 'hello'])).all()) 45 | else: 46 | assert(mem_data[0] == [1, 2, 3]) 47 | assert(mem_data['col2'] == [2, 5.2, 9]) 48 | assert(mem_data['col3'] == [3, 6.1, 'hello']) 49 | 50 | @has_numpy_and_not_has_numpy 51 | def test_memory_from_LOL2(numpy): 52 | data = [[1, 2, 3], [4, 5.2, 6.1], [8, 9, 'hello']] 53 | mem_table = asciitable.get_reader(Reader=asciitable.Memory, numpy=numpy, names=('c1','c2','c3')) 54 | mem_data = mem_table.read(data) 55 | print(mem_data.dtype.names) 56 | assert(mem_data.dtype.names == ('c1', 'c2', 'c3')) 57 | if numpy: 58 | assert(mem_data[0][0] == 1) 59 | assert(mem_data[0][1] == 2) 60 | assert(mem_data[0][2] == '3') 61 | assert((mem_data['c2'] == np.array([2, 5.2, 9])).all()) 62 | assert((mem_data['c3'] == np.array([3, 6.1, 'hello'])).all()) 63 | else: 64 | assert(mem_data[0] == [1, 2, 3]) 65 | assert(mem_data['c2'] == [2, 5.2, 9]) 66 | assert(mem_data['c3'] == [3, 6.1, 'hello']) 67 | 68 | @has_numpy_and_not_has_numpy 69 | def test_memory_from_DOL(numpy): 70 | data = {'c1': [1, 2, 3], 71 | 'c2': [4, 5.2, 6.1], 72 | 'c3': [8, 9, 'hello']} 73 | mem_table = asciitable.get_reader(Reader=asciitable.Memory, numpy=numpy, 74 | names=sorted(data.keys())) 75 | mem_data = mem_table.read(data) 76 | assert(mem_data.dtype.names == ('c1', 'c2', 'c3')) 77 | if numpy: 78 | assert(mem_data[0][0] == 1) 79 | assert(mem_data[0][1] == 4) 80 | assert(mem_data[0][2] == '8') 81 | assert((mem_data['c2'] == np.array([4, 5.2, 6.1])).all()) 82 | assert((mem_data['c3'] == np.array([8, 9, 'hello'])).all()) 83 | else: 84 | assert(mem_data[0] == [1, 4, 8]) 85 | assert(mem_data['c2'] == [4, 5.2, 6.1]) 86 | assert(mem_data['c3'] == [8, 9, 'hello']) 87 | -------------------------------------------------------------------------------- /test/test_read.py: -------------------------------------------------------------------------------- 1 | import re 2 | import glob 3 | import math 4 | from nose.tools import * 5 | 6 | import asciitable 7 | if asciitable.has_numpy: 8 | import numpy as np 9 | 10 | from test.common import has_numpy_and_not_has_numpy, has_numpy 11 | 12 | try: 13 | from math import isnan 14 | except ImportError: 15 | try: 16 | from numpy import isnan 17 | except ImportError: 18 | print('Tests requiring isnan will fail') 19 | 20 | @has_numpy_and_not_has_numpy 21 | def test_read_all_files(numpy): 22 | for testfile in get_testfiles(): 23 | print('\n\n******** READING %s' % testfile['name']) 24 | if testfile.get('requires_numpy') and not asciitable.has_numpy: 25 | continue 26 | for guess in (True, False): 27 | test_opts = testfile['opts'].copy() 28 | if 'guess' not in test_opts: 29 | test_opts['guess'] = guess 30 | table = asciitable.read(testfile['name'], numpy=numpy, **test_opts) 31 | assert_equal(table.dtype.names, testfile['cols']) 32 | for colname in table.dtype.names: 33 | assert_equal(len(table[colname]), testfile['nrows']) 34 | 
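# Illustrative sketch (assumes only the list-of-lines input and the format
# guessing exercised elsewhere in this module; the inline table is
# hypothetical): with no format keywords at all, read() guesses the format.
@has_numpy_and_not_has_numpy
def test_guess_inline_sketch(numpy):
    lines = ['a b c', '1 2 3', '4 5 6']
    data = asciitable.read(lines, numpy=numpy)
    assert_equal(data.dtype.names, ('a', 'b', 'c'))
    assert_equal(len(data), 2)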
35 | @has_numpy_and_not_has_numpy 36 | def test_guess_all_files(numpy): 37 | for testfile in get_testfiles(): 38 | if not testfile['opts'].get('guess', True): 39 | continue 40 | print('\n\n******** READING %s' % testfile['name']) 41 | if testfile.get('requires_numpy') and not asciitable.has_numpy: 42 | continue 43 | for filter_read_opts in (['Reader', 'delimiter', 'quotechar'], []): 44 | # Copy read options except for those in filter_read_opts 45 | guess_opts = dict((k, v) for k, v in testfile['opts'].items() 46 | if k not in filter_read_opts) 47 | table = asciitable.read(testfile['name'], numpy=numpy, guess=True, **guess_opts) 48 | assert_equal(table.dtype.names, testfile['cols']) 49 | for colname in table.dtype.names: 50 | assert_equal(len(table[colname]), testfile['nrows']) 51 | 52 | @has_numpy 53 | def test_daophot_header_keywords(numpy): 54 | reader = asciitable.get_reader(Reader=asciitable.DaophotReader, numpy=numpy) 55 | table = reader.read('t/daophot.dat') 56 | expected_keywords = (('NSTARFILE', 'test.nst.1', 'filename', '%-23s'), 57 | ('REJFILE', 'hello world', 'filename', '%-23s'), 58 | ('SCALE', '1.', 'units/pix', '%-23.7g'),) 59 | 60 | for name, value, units, format_ in expected_keywords: 61 | for keyword in reader.keywords: 62 | if keyword.name == name: 63 | assert_equal(keyword.value, value) 64 | assert_equal(keyword.units, units) 65 | assert_equal(keyword.format, format_) 66 | break 67 | else: 68 | raise ValueError('Keyword not found') 69 | 70 | 71 | @has_numpy_and_not_has_numpy 72 | @raises(asciitable.InconsistentTableError) 73 | def test_empty_table_no_header(numpy): 74 | table = asciitable.read('t/no_data_without_header.dat', Reader=asciitable.NoHeader, 75 | numpy=numpy, guess=False) 76 | 77 | @has_numpy_and_not_has_numpy 78 | @raises(asciitable.InconsistentTableError) 79 | def test_wrong_quote(numpy): 80 | table = asciitable.read('t/simple.txt', numpy=numpy, guess=False) 81 | 82 | @has_numpy_and_not_has_numpy 83 | @raises(asciitable.InconsistentTableError) 84 | def test_extra_data_col(numpy): 85 | table = asciitable.read('t/bad.txt', numpy=numpy) 86 | 87 | @has_numpy_and_not_has_numpy 88 | @raises(asciitable.InconsistentTableError) 89 | def test_extra_data_col2(numpy): 90 | table = asciitable.read('t/simple5.txt', delimiter='|', numpy=numpy) 91 | 92 | @has_numpy_and_not_has_numpy 93 | @raises(IOError) 94 | def test_missing_file(numpy): 95 | table = asciitable.read('does_not_exist', numpy=numpy) 96 | 97 | @has_numpy_and_not_has_numpy 98 | def test_set_names(numpy): 99 | names = ('c1','c2','c3', 'c4', 'c5', 'c6') 100 | include_names = ('c1', 'c3') 101 | exclude_names = ('c4', 'c5', 'c6') 102 | data = asciitable.read('t/simple3.txt', names=names, delimiter='|', numpy=numpy) 103 | assert_equal(data.dtype.names, names) 104 | 105 | @has_numpy_and_not_has_numpy 106 | def test_set_include_names(numpy): 107 | names = ('c1','c2','c3', 'c4', 'c5', 'c6') 108 | include_names = ('c1', 'c3') 109 | data = asciitable.read('t/simple3.txt', names=names, include_names=include_names, 110 | delimiter='|', numpy=numpy) 111 | assert_equal(data.dtype.names, include_names) 112 | 113 | @has_numpy_and_not_has_numpy 114 | def test_set_exclude_names(numpy): 115 | exclude_names = ('Y', 'object') 116 | data = asciitable.read('t/simple3.txt', exclude_names=exclude_names, delimiter='|', numpy=numpy) 117 | assert_equal(data.dtype.names, ('obsid', 'redshift', 'X', 'rad')) 118 | 119 | @has_numpy_and_not_has_numpy 120 | def test_custom_process_lines(numpy): 121 | def process_lines(lines): 122 | bars_at_ends = re.compile(r'^\| | \|$', 
re.VERBOSE) 123 | striplines = (x.strip() for x in lines) 124 | return [bars_at_ends.sub('', x) for x in striplines if len(x) > 0] 125 | reader = asciitable.get_reader(delimiter='|', numpy=numpy) 126 | reader.inputter.process_lines = process_lines 127 | data = reader.read('t/bars_at_ends.txt') 128 | assert_equal(data.dtype.names, ('obsid', 'redshift', 'X', 'Y', 'object', 'rad')) 129 | assert_equal(len(data), 3) 130 | 131 | @has_numpy_and_not_has_numpy 132 | def test_custom_process_line(numpy): 133 | def process_line(line): 134 | line_out = re.sub(r'^\|\s*', '', line.strip()) 135 | return line_out 136 | reader = asciitable.get_reader(data_start=2, delimiter='|', numpy=numpy) 137 | reader.header.splitter.process_line = process_line 138 | reader.data.splitter.process_line = process_line 139 | data = reader.read('t/nls1_stackinfo.dbout') 140 | cols = get_testfiles('t/nls1_stackinfo.dbout')['cols'] 141 | assert_equal(data.dtype.names, cols[1:]) 142 | 143 | @has_numpy_and_not_has_numpy 144 | def test_custom_splitters(numpy): 145 | reader = asciitable.get_reader(numpy=numpy) 146 | reader.header.splitter = asciitable.BaseSplitter() 147 | reader.data.splitter = asciitable.BaseSplitter() 148 | f = 't/test4.dat' 149 | data = reader.read(f) 150 | testfile = get_testfiles(f) 151 | assert_equal(data.dtype.names, testfile['cols']) 152 | assert_equal(len(data), testfile['nrows']) 153 | assert_almost_equal(data.field('zabs1.nh')[2], 0.0839710433091) 154 | assert_almost_equal(data.field('p1.gamma')[2], 1.25997502704) 155 | assert_almost_equal(data.field('p1.ampl')[2], 0.000696444029148) 156 | assert_equal(data.field('statname')[2], 'chi2modvar') 157 | assert_almost_equal(data.field('statval')[2], 497.56468441) 158 | 159 | @has_numpy_and_not_has_numpy 160 | def test_start_end(numpy): 161 | data = asciitable.read('t/test5.dat', header_start=1, data_start=3, data_end=-5, numpy=numpy) 162 | assert_equal(len(data), 13) 163 | assert_equal(data.field('statname')[0], 'chi2xspecvar') 164 | assert_equal(data.field('statname')[-1], 'chi2gehrels') 165 | 166 | @has_numpy 167 | def test_set_converters(numpy): 168 | converters = {'zabs1.nh': [asciitable.convert_numpy('int32'), 169 | asciitable.convert_numpy('float32')], 170 | 'p1.gamma': [asciitable.convert_numpy('str')] 171 | } 172 | data = asciitable.read('t/test4.dat', converters=converters, numpy=numpy) 173 | assert_equal(str(data['zabs1.nh'].dtype), 'float32') 174 | assert_equal(data['p1.gamma'][0], '1.26764544642') 175 | 176 | @has_numpy_and_not_has_numpy 177 | def test_from_string(numpy): 178 | f = 't/simple.txt' 179 | table = open(f).read() 180 | testfile = get_testfiles(f) 181 | data = asciitable.read(table, numpy=numpy, **testfile['opts']) 182 | assert_equal(data.dtype.names, testfile['cols']) 183 | assert_equal(len(data), testfile['nrows']) 184 | 185 | @has_numpy_and_not_has_numpy 186 | def test_from_filelike(numpy): 187 | f = 't/simple.txt' 188 | table = open(f) 189 | testfile = get_testfiles(f) 190 | data = asciitable.read(table, numpy=numpy, **testfile['opts']) 191 | assert_equal(data.dtype.names, testfile['cols']) 192 | assert_equal(len(data), testfile['nrows']) 193 | 194 | @has_numpy_and_not_has_numpy 195 | def test_from_lines(numpy): 196 | f = 't/simple.txt' 197 | table = open(f).readlines() 198 | testfile = get_testfiles(f) 199 | data = asciitable.read(table, numpy=numpy, **testfile['opts']) 200 | assert_equal(data.dtype.names, testfile['cols']) 201 | assert_equal(len(data), testfile['nrows']) 202 | 203 | @has_numpy_and_not_has_numpy 204 | def 
test_comment_lines(numpy): 205 | table = asciitable.get_reader(Reader=asciitable.RdbReader, numpy=numpy) 206 | data = table.read('t/apostrophe.rdb') 207 | assert_equal(table.comment_lines, ['# first comment', ' # second comment']) 208 | 209 | @has_numpy_and_not_has_numpy 210 | def test_fill_values(numpy): 211 | f = 't/fill_values.txt' 212 | testfile = get_testfiles(f) 213 | data = asciitable.read(f, numpy=numpy, fill_values=('a','1'), **testfile['opts']) 214 | if numpy: 215 | assert_true((data.mask['a']==[False,True]).all()) 216 | assert_true((data.data['a']==[1,1]).all()) 217 | assert_true((data.mask['b']==[False,True]).all()) 218 | assert_true((data.data['b']==[2,1]).all()) 219 | 220 | else: 221 | assert_equal(data['a'],[1,1]) 222 | assert_equal(data['b'],[2,1]) 223 | 224 | @has_numpy_and_not_has_numpy 225 | def test_fill_values_col(numpy): 226 | f = 't/fill_values.txt' 227 | testfile = get_testfiles(f) 228 | data = asciitable.read(f, numpy=numpy, fill_values=('a','1', 'b'), **testfile['opts']) 229 | check_fill_values(numpy, data) 230 | 231 | @has_numpy_and_not_has_numpy 232 | def test_fill_values_include_names(numpy): 233 | f = 't/fill_values.txt' 234 | testfile = get_testfiles(f) 235 | data = asciitable.read(f, numpy=numpy, fill_values=('a','1'), 236 | fill_include_names = ['b'], **testfile['opts']) 237 | check_fill_values(numpy, data) 238 | 239 | @has_numpy_and_not_has_numpy 240 | def test_fill_values_exclude_names(numpy): 241 | f = 't/fill_values.txt' 242 | testfile = get_testfiles(f) 243 | data = asciitable.read(f, numpy=numpy, fill_values=('a','1'), 244 | fill_exclude_names = ['a'], **testfile['opts']) 245 | check_fill_values(numpy, data) 246 | 247 | def check_fill_values(numpy, data): 248 | """Compare the array column by column with the expectation.""" 249 | if numpy: 250 | assert_true((data.mask['a']==[False,False]).all()) 251 | assert_true((data.data['a']==['1','a']).all()) 252 | assert_true((data.mask['b']==[False,True]).all()) 253 | assert_true((data.data['b']==[2,1]).all()) 254 | else: 255 | assert_equal(data['a'],['1','a']) 256 | assert_equal(data['b'],[2,1]) 257 | 258 | @has_numpy_and_not_has_numpy 259 | def test_fill_values_list(numpy): 260 | f = 't/fill_values.txt' 261 | testfile = get_testfiles(f) 262 | data = asciitable.read(f, numpy=numpy, fill_values=[('a','42'),('1','42','a')], 263 | **testfile['opts']) 264 | if numpy: 265 | assert_true((data.data['a']==[42,42]).all()) 266 | else: 267 | assert_equal(data['a'],[42,42]) 268 | 269 | @has_numpy_and_not_has_numpy 270 | def test_masking_Cds(numpy): 271 | f = 't/cds.dat' 272 | testfile = get_testfiles(f) 273 | data = asciitable.read(f, numpy=numpy, 274 | **testfile['opts']) 275 | if numpy: 276 | assert_true(data['AK'].mask[0]) 277 | assert_true(not data['Fit'].mask[0]) 278 | else: 279 | assert_true(isnan(data['AK'][0])) 280 | assert_true(not isnan(data['Fit'][0])) 281 | 282 | @has_numpy_and_not_has_numpy 283 | def test_set_guess_kwarg(numpy): 284 | """Read a file using guess with one of the typical guess_kwargs explicitly set.""" 285 | data = asciitable.read('t/space_delim_no_header.dat', numpy=numpy, 286 | delimiter=',', guess=True) 287 | assert(data.dtype.names == ('1 3.4 hello', )) 288 | assert(len(data) == 1) 289 | 290 | @has_numpy_and_not_has_numpy 291 | @raises(asciitable.InconsistentTableError) 292 | def test_read_rdb_wrong_type(numpy): 293 | """Read RDB data with inconsistent data type (expect failure)""" 294 | table = """col1\tcol2 295 | N\tN 296 | 1\tHello""" 297 | dat = asciitable.read(table, Reader=asciitable.Rdb) 298 | 
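# Illustrative sketch (assumes only the fill_values behavior exercised in the
# tests above; the 'N/A' sentinel, the -999 replacement and the inline table
# are hypothetical): fill_values also applies to an inline list-of-lines table.
@has_numpy_and_not_has_numpy
def test_fill_values_inline_sketch(numpy):
    lines = ['a,b', '1,2', 'N/A,4']
    data = asciitable.read(lines, numpy=numpy, delimiter=',',
                           fill_values=('N/A', '-999'))
    if numpy:
        # The replaced element is masked; the underlying data holds -999.
        assert_true((data.mask['a'] == [False, True]).all())
        assert_true((data.data['a'] == [1, -999]).all())
    else:
        assert_equal(data['a'], [1, -999])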
299 | def get_testfiles(name=None): 300 | """Set up information about the columns, number of rows, and reader params to 301 | read a bunch of test files and verify columns and number of rows.""" 302 | 303 | testfiles = [ 304 | {'cols': ('agasc_id', 'n_noids', 'n_obs'), 305 | 'name': 't/apostrophe.rdb', 306 | 'nrows': 2, 307 | 'opts': {'Reader': asciitable.Rdb}}, 308 | {'cols': ('agasc_id', 'n_noids', 'n_obs'), 309 | 'name': 't/apostrophe.tab', 310 | 'nrows': 2, 311 | 'opts': {'Reader': asciitable.Tab}}, 312 | {'cols': ('Index', 313 | 'RAh', 314 | 'RAm', 315 | 'RAs', 316 | 'DE-', 317 | 'DEd', 318 | 'DEm', 319 | 'DEs', 320 | 'Match', 321 | 'Class', 322 | 'AK', 323 | 'Fit'), 324 | 'name': 't/cds.dat', 325 | 'nrows': 1, 326 | 'opts': {'Reader': asciitable.Cds}}, 327 | {'cols': ('a', 'b', 'c'), 328 | 'name': 't/commented_header.dat', 329 | 'nrows': 2, 330 | 'opts': {'Reader': asciitable.CommentedHeader}}, 331 | {'cols': ('a', 'b', 'c'), 332 | 'name': 't/commented_header2.dat', 333 | 'nrows': 2, 334 | 'opts': {'Reader': asciitable.CommentedHeader, 'header_start': -1}}, 335 | {'cols': ('col1', 'col2', 'col3', 'col4', 'col5'), 336 | 'name': 't/continuation.dat', 337 | 'nrows': 2, 338 | 'opts': {'Inputter': asciitable.ContinuationLinesInputter, 339 | 'Reader': asciitable.NoHeader}}, 340 | {'cols': ('ID', 341 | 'XCENTER', 342 | 'YCENTER', 343 | 'MAG', 344 | 'MERR', 345 | 'MSKY', 346 | 'NITER', 347 | 'SHARPNESS', 348 | 'CHI', 349 | 'PIER', 350 | 'PERROR'), 351 | 'name': 't/daophot.dat', 352 | 'nrows': 2, 353 | 'requires_numpy': True, 354 | 'opts': {'Reader': asciitable.Daophot}}, 355 | {'cols': ('ra', 'dec', 'sai', 'v2', 'sptype'), 356 | 'name': 't/ipac.dat', 357 | 'nrows': 2, 358 | 'opts': {'Reader': asciitable.Ipac}}, 359 | {'cols': ('', 360 | 'objID', 361 | 'osrcid', 362 | 'xsrcid', 363 | 'SpecObjID', 364 | 'ra', 365 | 'dec', 366 | 'obsid', 367 | 'ccdid', 368 | 'z', 369 | 'modelMag_i', 370 | 'modelMagErr_i', 371 | 'modelMag_r', 372 | 'modelMagErr_r', 373 | 'expo', 374 | 'theta', 375 | 'rad_ecf_39', 376 | 'detlim90', 377 | 'fBlim90'), 378 | 'name': 't/nls1_stackinfo.dbout', 379 | 'nrows': 58, 380 | 'opts': {'data_start': 2, 'delimiter': '|', 'guess': False}}, 381 | {'cols': ('Index', 382 | 'RAh', 383 | 'RAm', 384 | 'RAs', 385 | 'DE-', 386 | 'DEd', 387 | 'DEm', 388 | 'DEs', 389 | 'Match', 390 | 'Class', 391 | 'AK', 392 | 'Fit'), 393 | 'name': 't/no_data_cds.dat', 394 | 'nrows': 0, 395 | 'opts': {'Reader': asciitable.Cds}}, 396 | {'cols': ('ID', 397 | 'XCENTER', 398 | 'YCENTER', 399 | 'MAG', 400 | 'MERR', 401 | 'MSKY', 402 | 'NITER', 403 | 'SHARPNESS', 404 | 'CHI', 405 | 'PIER', 406 | 'PERROR'), 407 | 'name': 't/no_data_daophot.dat', 408 | 'nrows': 0, 409 | 'requires_numpy': True, 410 | 'opts': {'Reader': asciitable.Daophot}}, 411 | {'cols': ('ra', 'dec', 'sai', 'v2', 'sptype'), 412 | 'name': 't/no_data_ipac.dat', 413 | 'nrows': 0, 414 | 'opts': {'Reader': asciitable.Ipac}}, 415 | {'cols': ('a', 'b', 'c'), 416 | 'name': 't/no_data_with_header.dat', 417 | 'nrows': 0, 418 | 'opts': {}}, 419 | {'cols': ('agasc_id', 'n_noids', 'n_obs'), 420 | 'name': 't/short.rdb', 421 | 'nrows': 7, 422 | 'opts': {'Reader': asciitable.Rdb}}, 423 | {'cols': ('agasc_id', 'n_noids', 'n_obs'), 424 | 'name': 't/short.tab', 425 | 'nrows': 7, 426 | 'opts': {'Reader': asciitable.Tab}}, 427 | {'cols': ('test 1a', 'test2', 'test3', 'test4'), 428 | 'name': 't/simple.txt', 429 | 'nrows': 2, 430 | 'opts': {'quotechar': "'"}}, 431 | {'cols': ('obsid', 'redshift', 'X', 'Y', 'object', 'rad'), 432 | 'name': 't/simple2.txt', 433 | 
'nrows': 3, 434 | 'opts': {'delimiter': '|'}}, 435 | {'cols': ('obsid', 'redshift', 'X', 'Y', 'object', 'rad'), 436 | 'name': 't/simple3.txt', 437 | 'nrows': 2, 438 | 'opts': {'delimiter': '|'}}, 439 | {'cols': ('col1', 'col2', 'col3', 'col4', 'col5', 'col6'), 440 | 'name': 't/simple4.txt', 441 | 'nrows': 3, 442 | 'opts': {'Reader': asciitable.NoHeader, 'delimiter': '|'}}, 443 | {'cols': ('col1', 'col2', 'col3'), 444 | 'name': 't/space_delim_no_header.dat', 445 | 'nrows': 2, 446 | 'opts': {}}, 447 | {'cols': ('obsid', 'offset', 'x', 'y', 'name', 'oaa'), 448 | 'name': 't/space_delim_blank_lines.txt', 449 | 'nrows': 3, 450 | 'opts': {}}, 451 | {'cols': ('zabs1.nh', 'p1.gamma', 'p1.ampl', 'statname', 'statval'), 452 | 'name': 't/test4.dat', 453 | 'nrows': 1172, 454 | 'opts': {}}, 455 | {'cols': ('a', 'b', 'c'), 456 | 'name': 't/fill_values.txt', 457 | 'nrows': 2, 458 | 'opts': {'delimiter': ','}}, 459 | {'name': 't/whitespace.dat', 460 | 'cols': ('quoted colname with tab\tinside', 'col2', 'col3'), 461 | 'nrows': 2, 462 | 'opts': {'delimiter': '\s'}}, 463 | {'cols': ('cola', 'colb', 'colc'), 464 | 'name': 't/latex1.tex', 465 | 'nrows': 2, 466 | 'opts': {'Reader': asciitable.Latex}}, 467 | {'cols': ('Facility', 'Id', 'exposure', 'date'), 468 | 'name': 't/latex2.tex', 469 | 'nrows': 3, 470 | 'opts': {'Reader': asciitable.AASTex}}, 471 | ] 472 | 473 | if name is not None: 474 | return [x for x in testfiles if x['name'] == name][0] 475 | else: 476 | return testfiles 477 | -------------------------------------------------------------------------------- /test/test_types.py: -------------------------------------------------------------------------------- 1 | import re 2 | import sys 3 | import glob 4 | import math 5 | from nose.tools import * 6 | 7 | try: 8 | import StringIO as io 9 | except ImportError: 10 | import io 11 | 12 | import asciitable 13 | if asciitable.has_numpy: 14 | import numpy as np 15 | 16 | from test.common import has_numpy_and_not_has_numpy, has_numpy 17 | 18 | @has_numpy_and_not_has_numpy 19 | def test_types_from_dat(numpy): 20 | if numpy: 21 | converters = {'a': [asciitable.convert_numpy(np.float)], 22 | 'e': [asciitable.convert_numpy(np.str)]} 23 | else: 24 | converters = {'a': [asciitable.convert_list(float)], 25 | 'e': [asciitable.convert_list(str)]} 26 | 27 | dat = asciitable.read(['a b c d e', '1 1 cat 2.1 4.2'], Reader=asciitable.Basic, 28 | converters=converters, numpy=numpy) 29 | 30 | reader = asciitable.get_reader(Reader=asciitable.Memory, numpy=numpy) 31 | reader.read(dat) 32 | 33 | print('numpy=%s' % numpy) 34 | print('dat=%s' % repr(dat)) 35 | print('reader.table=%s' % repr(reader.table)) 36 | print('types=%s' % repr([x.type for x in reader.cols])) 37 | 38 | assert_true(issubclass(reader.cols[0].type, asciitable.FloatType)) 39 | assert_true(issubclass(reader.cols[1].type, asciitable.IntType)) 40 | assert_true(issubclass(reader.cols[2].type, asciitable.StrType)) 41 | assert_true(issubclass(reader.cols[3].type, asciitable.FloatType)) 42 | assert_true(issubclass(reader.cols[4].type, asciitable.StrType)) 43 | 44 | @has_numpy_and_not_has_numpy 45 | def test_rdb_write_types(numpy): 46 | dat = asciitable.read(['a b c d', '1 1.0 cat 2.1'], Reader=asciitable.Basic, numpy=numpy) 47 | out = io.StringIO() 48 | asciitable.write(dat, out, Writer=asciitable.Rdb) 49 | outs = out.getvalue().splitlines() 50 | assert_equal(outs[1], 'N\tN\tS\tN') 51 | 52 | @has_numpy_and_not_has_numpy 53 | def test_ipac_read_types(numpy): 54 | table = r"""\ 55 | | ra | dec | sai |-----v2---| sptype | 56 
| | real | float | l | real | char | 57 | | unit | unit | unit | unit | ergs | 58 | | null | null | null | null | -999 | 59 | 2.09708 2956 73765 2.06000 B8IVpMnHg 60 | """ 61 | reader = asciitable.get_reader(Reader=asciitable.Ipac, numpy=numpy) 62 | dat = reader.read(table) 63 | types = [asciitable.FloatType, 64 | asciitable.FloatType, 65 | asciitable.IntType, 66 | asciitable.FloatType, 67 | asciitable.StrType] 68 | for (col, expected_type) in zip(reader.cols, types): 69 | assert_equal(col.type, expected_type) 70 | 71 | 72 | 73 | -------------------------------------------------------------------------------- /test/test_write.py: -------------------------------------------------------------------------------- 1 | import sys 2 | from nose.tools import * 3 | import asciitable 4 | 5 | try: 6 | import StringIO as io 7 | except ImportError: 8 | import io 9 | 10 | test_defs = [ 11 | dict(kwargs=dict(), 12 | out="""\ 13 | ID XCENTER YCENTER MAG MERR MSKY NITER SHARPNESS CHI PIER PERROR 14 | 14 138.538 256.405 15.461 0.003 34.85955 4 -0.032 0.802 0 No_error 15 | 18 18.114 280.17 22.329 0.206 30.12784 4 -2.544 1.104 0 No_error 16 | """ 17 | ), 18 | dict(kwargs=dict(formats={'XCENTER': '%12.1f', 19 | 'YCENTER': lambda x: round(x, 1)}, 20 | include_names=['XCENTER', 'YCENTER']), 21 | out="""\ 22 | XCENTER YCENTER 23 | " 138.5" 256.4 24 | " 18.1" 280.2 25 | """ 26 | ), 27 | dict(kwargs=dict(Writer=asciitable.Rdb, exclude_names=['CHI']), 28 | out="""\ 29 | ID XCENTER YCENTER MAG MERR MSKY NITER SHARPNESS PIER PERROR 30 | N N N N N N N N N S 31 | 14 138.538 256.405 15.461 0.003 34.85955 4 -0.032 0 No_error 32 | 18 18.114 280.17 22.329 0.206 30.12784 4 -2.544 0 No_error 33 | """ 34 | ), 35 | dict(kwargs=dict(Writer=asciitable.Tab), 36 | out="""\ 37 | ID XCENTER YCENTER MAG MERR MSKY NITER SHARPNESS CHI PIER PERROR 38 | 14 138.538 256.405 15.461 0.003 34.85955 4 -0.032 0.802 0 No_error 39 | 18 18.114 280.17 22.329 0.206 30.12784 4 -2.544 1.104 0 No_error 40 | """ 41 | ), 42 | dict(kwargs=dict(Writer=asciitable.NoHeader), 43 | out="""\ 44 | 14 138.538 256.405 15.461 0.003 34.85955 4 -0.032 0.802 0 No_error 45 | 18 18.114 280.17 22.329 0.206 30.12784 4 -2.544 1.104 0 No_error 46 | """ 47 | ), 48 | dict(kwargs=dict(Writer=asciitable.CommentedHeader), 49 | out="""\ 50 | # ID XCENTER YCENTER MAG MERR MSKY NITER SHARPNESS CHI PIER PERROR 51 | 14 138.538 256.405 15.461 0.003 34.85955 4 -0.032 0.802 0 No_error 52 | 18 18.114 280.17 22.329 0.206 30.12784 4 -2.544 1.104 0 No_error 53 | """ 54 | ), 55 | dict(kwargs=dict(Writer=asciitable.Latex), 56 | out="""\ 57 | \\begin{table} 58 | \\begin{tabular}{ccccccccccc} 59 | ID & XCENTER & YCENTER & MAG & MERR & MSKY & NITER & SHARPNESS & CHI & PIER & PERROR \\\\ 60 | 14 & 138.538 & 256.405 & 15.461 & 0.003 & 34.85955 & 4 & -0.032 & 0.802 & 0 & No_error \\\\ 61 | 18 & 18.114 & 280.17 & 22.329 & 0.206 & 30.12784 & 4 & -2.544 & 1.104 & 0 & No_error \\\\ 62 | \\end{tabular} 63 | \\end{table} 64 | """ 65 | ), 66 | dict(kwargs=dict(Writer=asciitable.AASTex), 67 | out="""\ 68 | \\begin{deluxetable}{ccccccccccc} 69 | \\tablehead{\\colhead{ID} & \\colhead{XCENTER} & \\colhead{YCENTER} & \\colhead{MAG} & \\colhead{MERR} & \\colhead{MSKY} & \\colhead{NITER} & \\colhead{SHARPNESS} & \\colhead{CHI} & \\colhead{PIER} & \\colhead{PERROR}} 70 | \\startdata 71 | 14 & 138.538 & 256.405 & 15.461 & 0.003 & 34.85955 & 4 & -0.032 & 0.802 & 0 & No_error \\\\ 72 | 18 & 18.114 & 280.17 & 22.329 & 0.206 & 30.12784 & 4 & -2.544 & 1.104 & 0 & No_error \\\\ 73 | \\enddata 74 | \\end{deluxetable} 75 | 
""" 76 | ), 77 | dict(kwargs=dict(Writer=asciitable.AASTex, caption = 'Mag values \\label{tab1}', latexdict = {'units':{'MAG': '[mag]', 'XCENTER': '[pixel]'}}), 78 | out="""\ 79 | \\begin{deluxetable}{ccccccccccc} 80 | \\tablecaption{Mag values \\label{tab1}} 81 | \\tablehead{\\colhead{ID} & \\colhead{XCENTER} & \\colhead{YCENTER} & \\colhead{MAG} & \\colhead{MERR} & \\colhead{MSKY} & \\colhead{NITER} & \\colhead{SHARPNESS} & \\colhead{CHI} & \\colhead{PIER} & \\colhead{PERROR}\\\\ \\colhead{ } & \\colhead{[pixel]} & \\colhead{ } & \\colhead{[mag]} & \\colhead{ } & \\colhead{ } & \\colhead{ } & \\colhead{ } & \\colhead{ } & \\colhead{ } & \\colhead{ }} 82 | \\startdata 83 | 14 & 138.538 & 256.405 & 15.461 & 0.003 & 34.85955 & 4 & -0.032 & 0.802 & 0 & No_error \\\\ 84 | 18 & 18.114 & 280.17 & 22.329 & 0.206 & 30.12784 & 4 & -2.544 & 1.104 & 0 & No_error \\\\ 85 | \\enddata 86 | \\end{deluxetable} 87 | """ 88 | ), 89 | dict(kwargs=dict(Writer=asciitable.Latex, caption = 'Mag values \\label{tab1}', latexdict = {'preamble':'\\begin{center}', 'tablefoot':'\\end{center}', 'data_end':['\\hline','\\hline'], 'units':{'MAG': '[mag]', 'XCENTER': '[pixel]'}}, col_align='|lcccccccccc|'), 90 | out="""\ 91 | \\begin{table} 92 | \\begin{center} 93 | \\caption{Mag values \\label{tab1}} 94 | \\begin{tabular}{|lcccccccccc|} 95 | ID & XCENTER & YCENTER & MAG & MERR & MSKY & NITER & SHARPNESS & CHI & PIER & PERROR \\\\ 96 | & [pixel] & & [mag] & & & & & & & \\\\ 97 | 14 & 138.538 & 256.405 & 15.461 & 0.003 & 34.85955 & 4 & -0.032 & 0.802 & 0 & No_error \\\\ 98 | 18 & 18.114 & 280.17 & 22.329 & 0.206 & 30.12784 & 4 & -2.544 & 1.104 & 0 & No_error \\\\ 99 | \\hline 100 | \\hline 101 | \\end{tabular} 102 | \\end{center} 103 | \\end{table} 104 | """ 105 | ), 106 | dict(kwargs=dict(Writer=asciitable.Latex, latexdict = asciitable.latexdicts['template']), 107 | out="""\ 108 | \\begin{tabletype} 109 | preamble 110 | \\caption{caption} 111 | \\begin{tabular}{col_align} 112 | header_start 113 | ID & XCENTER & YCENTER & MAG & MERR & MSKY & NITER & SHARPNESS & CHI & PIER & PERROR \\\\ 114 | & & & & & & & & & & \\\\ 115 | header_end 116 | data_start 117 | 14 & 138.538 & 256.405 & 15.461 & 0.003 & 34.85955 & 4 & -0.032 & 0.802 & 0 & No_error \\\\ 118 | 18 & 18.114 & 280.17 & 22.329 & 0.206 & 30.12784 & 4 & -2.544 & 1.104 & 0 & No_error \\\\ 119 | data_end 120 | \\end{tabular} 121 | tablefoot 122 | \\end{tabletype} 123 | """ 124 | ), 125 | 126 | ] 127 | 128 | def check_write_table(test_def, table): 129 | out = io.StringIO() 130 | asciitable.write(table, out, **test_def['kwargs']) 131 | print('Expected:\n%s' % test_def['out']) 132 | print('Actual:\n%s' % out.getvalue()) 133 | assert(out.getvalue().splitlines() == test_def['out'].splitlines()) 134 | 135 | def test_write_table(): 136 | table = asciitable.get_reader(Reader=asciitable.Daophot) 137 | data = table.read('t/daophot.dat') 138 | 139 | for test_def in test_defs: 140 | yield check_write_table, test_def, table 141 | yield check_write_table, test_def, data 142 | 143 | def test_write_table_no_numpy(): 144 | table = asciitable.get_reader(Reader=asciitable.Daophot, numpy=False) 145 | data = table.read('t/daophot.dat') 146 | 147 | for test_def in test_defs: 148 | yield check_write_table, test_def, table 149 | yield check_write_table, test_def, data 150 | 151 | --------------------------------------------------------------------------------