├── .gitignore ├── .travis.yml ├── CHANGELOG.rst ├── CONTRIBUTING.rst ├── LICENSE ├── README.rst ├── assets ├── usage-format.gif └── usage.gif ├── docs ├── Makefile ├── changelog.rst ├── conf.py ├── contributing.rst ├── help.rst ├── index.rst ├── installation.rst ├── introduction.rst ├── knownissues.rst ├── usage.rst └── wordnetcomparison.rst ├── nose.cfg ├── requirements-dev.txt ├── requirements.txt ├── setup.cfg ├── setup.py ├── tests ├── __init__.py └── tests.py └── vocabulary ├── __init__.py ├── responselib.py ├── version.py └── vocabulary.py /.gitignore: -------------------------------------------------------------------------------- 1 | #####=== Python ===##### 2 | 3 | .idea/ 4 | 5 | README.md 6 | 7 | # Byte-compiled / optimized / DLL files 8 | __pycache__/ 9 | *.py[cod] 10 | 11 | # C extensions 12 | *.so 13 | 14 | # Distribution / packaging 15 | .Python 16 | env/ 17 | build/ 18 | develop-eggs/ 19 | dist/ 20 | downloads/ 21 | eggs/ 22 | lib/ 23 | lib64/ 24 | parts/ 25 | sdist/ 26 | var/ 27 | *.egg-info/ 28 | .installed.cfg 29 | *.egg 30 | 31 | # PyInstaller 32 | # Usually these files are written by a python script from a template 33 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 34 | *.manifest 35 | *.spec 36 | 37 | # Installer logs 38 | pip-log.txt 39 | pip-delete-this-directory.txt 40 | 41 | # Unit test / coverage reports 42 | htmlcov/ 43 | .tox/ 44 | .coverage 45 | .cache 46 | nosetests.xml 47 | coverage.xml 48 | 49 | # Translations 50 | *.mo 51 | *.pot 52 | 53 | # Django stuff: 54 | *.log 55 | 56 | # Sphinx documentation 57 | docs/_build/ 58 | 59 | # PyBuilder 60 | target/ 61 | 62 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: python 2 | python: 3 | - "2.7" 4 | - "3.4" 5 | 6 | install: 7 | - "pip install -r requirements.txt" 8 | - "pip install ." 9 | 10 | notifications: 11 | webhooks: 12 | urls: 13 | - https://webhooks.gitter.im/e/d0c2d8b26723845bd158 ## for gitter chat 14 | on_success: change # options: [always|never|change] default: always 15 | on_failure: always # options: [always|never|change] default: always 16 | on_start: never # options: [always|never|change] default: always 17 | 18 | script: nosetests -c nose.cfg 19 | -------------------------------------------------------------------------------- /CHANGELOG.rst: -------------------------------------------------------------------------------- 1 | Changelog 2 | --------- 3 | 4 | 0.0.4 5 | ~~~~~ 6 | 7 | - ``JSON`` inconsistency fixed for the methods 8 | 9 | - ``Vocabulary.hyphenation()`` 10 | - ``Vocabulary.part_of_speech()`` 11 | - ``Vocabulary.meaning()`` 12 | 13 | 0.0.5 14 | ~~~~~ 15 | 16 | - Added `translate` module 17 | - Improved Documentation 18 | - Minor bug fixes 19 | 20 | 1.0.0 21 | ~~~~~ 22 | 23 | - Added support for specifying response format 24 | - Updated ``Vocabulary.pronunciation``, ``Vocabulary.antonym```, ```Vocabulary.part_of_speech``` to return a list of objects with apprioprate index 25 | 26 | 1.0.3 27 | ~~~~~ 28 | 29 | - Fixed `setup.py` import issue 30 | - API changes to importing the module 31 | ```from vocabulary.vocabulary import Vocabulary as vb``` instead of ```from vocabulary import Vocabulary as vb``` 32 | 33 | 1.0.4 34 | ~~~~~ 35 | 36 | - Fixed `setup.py` requirements.txt file not found issue by removing the logic to strip dependencies. Just a hack for now. Need to do it the older way. 
37 | - Fixed failing tests in travis due to older unchanged import of `vocabulary` 38 | -------------------------------------------------------------------------------- /CONTRIBUTING.rst: -------------------------------------------------------------------------------- 1 | Contributing 2 | ============ 3 | 4 | 1. Fork it. 5 | 6 | 2. Clone it 7 | 8 | create a `virtualenv `__ 9 | 10 | .. code:: bash 11 | 12 | $ virtualenv develop # Create virtual environment 13 | $ source develop/bin/activate # Change default python to virtual one 14 | (develop)$ git clone https://github.com/tasdikrahman/vocabulary.git 15 | (develop)$ cd vocabulary 16 | (develop)$ pip install -r requirements.txt # Install requirements for 'Vocabulary' in virtual environment 17 | 18 | Or, if ``virtualenv`` is not installed on your system: 19 | 20 | .. code:: bash 21 | 22 | $ wget https://raw.github.com/pypa/virtualenv/master/virtualenv.py 23 | $ python virtualenv.py develop # Create virtual environment 24 | $ source develop/bin/activate # Change default python to virtual one 25 | (develop)$ git clone https://github.com/tasdikrahman/vocabulary.git 26 | (develop)$ cd vocabulary 27 | (develop)$ pip install -r requirements.txt # Install requirements for 'Vocabulary' in virtual environment 28 | 29 | 3. Create your feature branch (``$ git checkout -b my-new-awesome-feature``) 30 | 31 | 4. Commit your changes (``$ git commit -am 'Added feature'``) 32 | 33 | 5. Run tests 34 | 35 | .. code:: bash 36 | 37 | (develop) $ ./tests.py -v 38 | 39 | Conform to `PEP8 `__ and if everything is running fine, integrate your feature 40 | 41 | 6. Push to the branch (``$ git push origin my-new-awesome-feature``) 42 | 43 | 7. Create new Pull Request 44 | 45 | Hack away! 46 | 47 | To do 48 | ~~~~~ 49 | 50 | - [X] Add translate module 51 | - [X] Add an option like `JSON=False` or `JSON=True` where the former returns a list object 52 | 53 | Tests 54 | ~~~~~ 55 | 56 | ``Vocabulary`` uses ``unittesting`` for testing purposes. 57 | 58 | Running the test cases 59 | 60 | .. code:: bash 61 | 62 | test_antonym_ant_key_error (tests.tests.TestModule) ... ok 63 | test_antonym_found (tests.tests.TestModule) ... ok 64 | test_antonym_not_found (tests.tests.TestModule) ... ok 65 | test_hyphenation_found (tests.tests.TestModule) ... ok 66 | test_hyphenation_not_found (tests.tests.TestModule) ... ok 67 | test_meaning_found (tests.tests.TestModule) ... ok 68 | test_meaning_key_error (tests.tests.TestModule) ... ok 69 | test_meaning_not_found (tests.tests.TestModule) ... ok 70 | test_partOfSpeech_found (tests.tests.TestModule) ... ok 71 | test_partOfSpeech_not_found (tests.tests.TestModule) ... ok 72 | test_pronunciation_found (tests.tests.TestModule) ... ok 73 | test_pronunciation_not_found (tests.tests.TestModule) ... ok 74 | test_respond_as_dict_1 (tests.tests.TestModule) ... ok 75 | test_respond_as_dict_2 (tests.tests.TestModule) ... ok 76 | test_respond_as_dict_3 (tests.tests.TestModule) ... ok 77 | test_respond_as_list_1 (tests.tests.TestModule) ... ok 78 | test_respond_as_list_2 (tests.tests.TestModule) ... ok 79 | test_respond_as_list_3 (tests.tests.TestModule) ... ok 80 | test_synonynm_empty_list (tests.tests.TestModule) ... ok 81 | test_synonynm_found (tests.tests.TestModule) ... ok 82 | test_synonynm_not_found (tests.tests.TestModule) ... ok 83 | test_synonynm_tuc_key_error (tests.tests.TestModule) ... ok 84 | test_translate_empty_list (tests.tests.TestModule) ... ok 85 | test_translate_found (tests.tests.TestModule) ... 
ok 86 | test_translate_not_found (tests.tests.TestModule) ... ok 87 | test_translate_tuc_key_error (tests.tests.TestModule) ... ok 88 | test_usageExample_empty_list (tests.tests.TestModule) ... ok 89 | test_usageExample_found (tests.tests.TestModule) ... ok 90 | test_usageExample_not_found (tests.tests.TestModule) ... ok 91 | 92 | ---------------------------------------------------------------------- 93 | Ran 29 tests in 0.015s 94 | 95 | OK 96 | 97 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | Copyright © 2015 Tasdik Rahman 3 | 4 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 5 | 6 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 7 | 8 | THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | .. figure:: http://i.imgur.com/ddxYie4.jpg 2 | :alt: 3 | 4 | Vocabulary 5 | ========== 6 | 7 | |PyPI version| |License| |Python Versions| |Build Status| |Requirements Status| |Gitter chat| 8 | 9 | A dictionary magician in the form of a module! 10 | 11 | :Author: Tasdik Rahman 12 | 13 | .. contents:: 14 | :backlinks: none 15 | 16 | .. sectnum:: 17 | 18 | What is it 19 | ---------- 20 | `[back to top] `__ 21 | 22 | For a given word, using ``Vocabulary``, you can get its 23 | 24 | - **Meaning** 25 | - **Synonyms** 26 | - **Antonyms** 27 | - **Part of speech** : whether the word is a ``noun``, ``interjection`` 28 | or an ``adverb`` et el 29 | - **Translate** : Translate a phrase from a source language to the desired language. 30 | - **Usage example** : a quick example on how to use the word in a 31 | sentence 32 | - **Pronunciation** 33 | - **Hyphenation** : shows the particular stress points(if any) 34 | 35 | Features 36 | -------- 37 | `[back to top] `__ 38 | 39 | - Written in uncomplicated ``Python`` 40 | - Returns ``JSON`` objects, ``PYTHON`` dictionaries and lists 41 | - Minimum dependencies ( just uses `requests `__ module ) 42 | - Easy to 43 | `install `__ 44 | - A decent substitute to ``Wordnet``\ (well almost!) Wanna see? Here is 45 | a `small comparison <#wordnet-comparison>`__ 46 | - Stupidly `easy to 47 | use `__ 48 | - Fast! 49 | - Supports 50 | 51 | - both, ``python2.*`` and ``python3.*`` 52 | - Works on Mac, Linux and Windows 53 | 54 | Why should I use Vocabulary 55 | --------------------------- 56 | `[back to top] `__ 57 | 58 | ``Wordnet`` is a great resource. No doubt about it! 
So why should you 59 | use ``Vocabulary`` when we already have ``Wordnet`` out there? 60 | 61 | Wordnet Comparison 62 | ~~~~~~~~~~~~~~~~~~ 63 | `[back to top] `__ 64 | 65 | Let's say you want to find out the synonyms for the word ``car``. 66 | 67 | - Using ``Wordnet`` 68 | 69 | .. code:: python 70 | 71 | >>> from nltk.corpus import wordnet 72 | >>> syns = wordnet.synsets('car') 73 | >>> syns[0].lemmas[0].name 74 | 'car' 75 | >>> [s.lemmas[0].name for s in syns] 76 | ['car', 'car', 'car', 'car', 'cable_car'] 77 | 78 | >>> [l.name for s in syns for l in s.lemmas] 79 | ['car', 'auto', 'automobile', 'machine', 'motorcar', 'car', 'railcar', 'railway_car', 'railroad_car', 'car', 'gondola', 'car', 'elevator_car', 'cable_car', 'car'] 80 | 81 | - Doing the same using ``Vocabulary`` 82 | 83 | .. code:: python 84 | 85 | >>> from vocabulary.vocabulary import Vocabulary as vb 86 | >>> vb.synonym("car") 87 | '[{ 88 | "seq": 0, 89 | "text": "automobile" 90 | }, { 91 | "seq": 1, 92 | "text": "cart" 93 | }, { 94 | "seq": 2, 95 | "text": "automotive" 96 | }, { 97 | "seq": 3, 98 | "text": "wagon" 99 | }, { 100 | "seq": 4, 101 | "text": "motor" 102 | }]' 103 | >>> ## load the json data 104 | >>> car_synonyms = json.loads(vb.synonym("car")) 105 | >>> type(car_synonyms) 106 | 107 | >>> 108 | 109 | So there you go. You get the data in an easy ``JSON`` format. 110 | 111 | You can go on comparing for the other methods too. 112 | 113 | Installation 114 | ------------ 115 | `[back to top] `__ 116 | 117 | Option 1: installing through `pip `__ (Suggested way) 118 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 119 | 120 | `pypi package link `__ 121 | 122 | ``$ pip install vocabulary`` 123 | 124 | If you are behind a proxy 125 | 126 | ``$ pip --proxy [username:password@]domain_name:port install vocabulary`` 127 | 128 | **Note:** If you get ``command not found`` then 129 | ``$ sudo apt-get install python-pip`` should fix that 130 | 131 | Option 2: Installing from source (Only if you must) 132 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 133 | 134 | .. code:: bash 135 | 136 | $ git clone https://github.com/tasdikrahman/vocabulary.git 137 | $ cd vocabulary/ 138 | $ pip install -r requirements.txt 139 | $ python setup.py install 140 | 141 | 142 | Demo 143 | ~~~~ 144 | `[back to top] `__ 145 | 146 | .. figure:: https://raw.githubusercontent.com/tasdikrahman/vocabulary/master/assets/usage.gif 147 | :alt: Demo link 148 | 149 | .. figure:: https://raw.githubusercontent.com/tasdikrahman/vocabulary/master/assets/usage-format.gif 150 | :alt: Demo link 151 | 152 | Documentation 153 | ------------- 154 | `[back to top] `__ 155 | 156 | For a detailed usage example, refer the `documentation at Read the Docs `__ 157 | 158 | Contributing 159 | ------------ 160 | `[back to top] `__ 161 | 162 | Please refer `Contributing page for details `__ 163 | 164 | 165 | Discuss 166 | ~~~~~~~ 167 | `[back to top] `__ 168 | 169 | Join us on our `Gitter channel `__ 170 | if you want to chat or if you have any questions in your mind. 171 | 172 | Contributers 173 | ~~~~~~~~~~~~ 174 | `[back to top] `__ 175 | 176 | - Huge shoutout to `@tenorz007 `__ for adding the ability to return the API response as different data structures. 177 | - Thanks to `Anton Relin `__ for adding the `translate `__ module. 
178 | - And a big shout out to all the `contributers `__ for their contributions 179 | 180 | Changelog 181 | --------- 182 | `[back to top] `__ 183 | 184 | Please refer `Changelog page for details `__ 185 | 186 | Bugs 187 | ---- 188 | `[back to top] `__ 189 | 190 | Please report the bugs at the `issue 191 | tracker `__ 192 | 193 | Similar 194 | ------- 195 | `[back to top] `__ 196 | 197 | Other similar software inspired by `Vocabulary `__ 198 | 199 | - `Vocabulary `__ : The ``Go lang`` port of this ``python`` counterpart 200 | - `woordy `__ : Gives back word translations 201 | - `guile-words `__ : The ``Guile Scheme`` port of this ``python`` counterpart 202 | 203 | Known Issues 204 | ~~~~~~~~~~~~ 205 | `[back to top] `__ 206 | 207 | - In **python2**, when using the method **Vocabulary.synonym()** or **Vocabulary.pronunciation()** 208 | 209 | .. code:: python 210 | 211 | >>> vb.synonym("car") 212 | [{ 213 | "seq": 0, 214 | "text": "automotive" 215 | }, { 216 | "seq": 1, 217 | "text": "motor" 218 | }, { 219 | "seq": 2, 220 | "text": "wagon" 221 | }, { 222 | "seq": 3, 223 | "text": "cart" 224 | }, { 225 | "seq": 4, 226 | "text": "automobile" 227 | }] 228 | >>> type(vb.pronunciation("hippopotamus")) 229 | 230 | >>> json.dumps(vb.pronunciation("hippopotamus")) 231 | '[{"raw": "(h\\u012dp\\u02cc\\u0259-p\\u014ft\\u02c8\\u0259-m\\u0259s)", "rawType": "ahd-legacy", "seq": 0}, {"raw": "HH IH2 P AH0 P AA1 T AH0 M AH0 S", "rawType": "arpabet", "seq": 1}]' 232 | >>> 233 | 234 | You are being returned a ``list`` object instead of a ``JSON`` object. 235 | When returning the latter, there are some ``unicode`` issues. A fix for 236 | this will be released soon. 237 | 238 | I may suggest `python-ftfy `__ which can help you in this matter. 239 | 240 | 241 | License : 242 | --------- 243 | `[back to top] `__ 244 | 245 | Built with ♥ by `Tasdik Rahman `__ under the `MIT License `__ © 246 | 247 | You can find a copy of the License at http://prodicus.mit-license.org/ 248 | 249 | Donation 250 | -------- 251 | 252 | |Paypal badge| 253 | 254 | |Instamojo| 255 | 256 | |gratipay| 257 | 258 | |patreon| 259 | 260 | .. |PyPI version| image:: https://img.shields.io/pypi/v/Vocabulary.svg 261 | :target: https://pypi.python.org/pypi/Vocabulary/1.0.2 262 | .. |License| image:: https://img.shields.io/pypi/l/vocabulary.svg 263 | :target: https://github.com/tasdikrahman/vocabulary/blob/master/LICENSE 264 | .. |Python Versions| image:: https://img.shields.io/pypi/pyversions/Vocabulary.svg 265 | .. |Build Status| image:: https://travis-ci.org/tasdikrahman/vocabulary.svg?branch=master 266 | :target: https://travis-ci.org/tasdikrahman/vocabulary 267 | .. |Gitter chat| image:: https://img.shields.io/gitter/room/gitterHQ/gitter.svg 268 | :alt: Join the chat at https://gitter.im/prodicus/vocabulary 269 | :target: https://gitter.im/prodicus/vocabulary?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge 270 | .. |Requirements Status| image:: https://requires.io/github/tasdikrahman/vocabulary/requirements.svg?branch=master 271 | :target: https://requires.io/github/tasdikrahman/vocabulary/requirements/?branch=master 272 | .. |Paypal badge| image:: https://www.paypalobjects.com/webstatic/mktg/logo/AM_mc_vs_dc_ae.jpg 273 | :target: https://www.paypal.me/tasdik 274 | .. |gratipay| image:: https://cdn.rawgit.com/gratipay/gratipay-badge/2.3.0/dist/gratipay.png 275 | :target: https://gratipay.com/tasdikrahman/ 276 | .. 
|Instamojo| image:: https://www.soldermall.com/images/pic-online-payment.jpg 277 | :target: https://www.instamojo.com/@tasdikrahman 278 | .. |patreon| image:: http://i.imgur.com/ICWPFOs.png 279 | :target: https://www.patreon.com/tasdikrahman/ 280 | -------------------------------------------------------------------------------- /assets/usage-format.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tasdikrahman/vocabulary/54403c5981af25dc3457796b57048ae27f09e9be/assets/usage-format.gif -------------------------------------------------------------------------------- /assets/usage.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tasdikrahman/vocabulary/54403c5981af25dc3457796b57048ae27f09e9be/assets/usage.gif -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = _build 9 | 10 | # User-friendly check for sphinx-build 11 | ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) 12 | $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) 13 | endif 14 | 15 | # Internal variables. 16 | PAPEROPT_a4 = -D latex_paper_size=a4 17 | PAPEROPT_letter = -D latex_paper_size=letter 18 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 19 | # the i18n builder cannot share the environment and doctrees with the others 20 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
21 | 22 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext 23 | 24 | help: 25 | @echo "Please use \`make ' where is one of" 26 | @echo " html to make standalone HTML files" 27 | @echo " dirhtml to make HTML files named index.html in directories" 28 | @echo " singlehtml to make a single large HTML file" 29 | @echo " pickle to make pickle files" 30 | @echo " json to make JSON files" 31 | @echo " htmlhelp to make HTML files and a HTML help project" 32 | @echo " qthelp to make HTML files and a qthelp project" 33 | @echo " applehelp to make an Apple Help Book" 34 | @echo " devhelp to make HTML files and a Devhelp project" 35 | @echo " epub to make an epub" 36 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 37 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 38 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" 39 | @echo " text to make text files" 40 | @echo " man to make manual pages" 41 | @echo " texinfo to make Texinfo files" 42 | @echo " info to make Texinfo files and run them through makeinfo" 43 | @echo " gettext to make PO message catalogs" 44 | @echo " changes to make an overview of all changed/added/deprecated items" 45 | @echo " xml to make Docutils-native XML files" 46 | @echo " pseudoxml to make pseudoxml-XML files for display purposes" 47 | @echo " linkcheck to check all external links for integrity" 48 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 49 | @echo " coverage to run coverage check of the documentation (if enabled)" 50 | 51 | clean: 52 | rm -rf $(BUILDDIR)/* 53 | 54 | html: 55 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 56 | @echo 57 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 58 | 59 | dirhtml: 60 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 61 | @echo 62 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 63 | 64 | singlehtml: 65 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 66 | @echo 67 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 68 | 69 | pickle: 70 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 71 | @echo 72 | @echo "Build finished; now you can process the pickle files." 73 | 74 | json: 75 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 76 | @echo 77 | @echo "Build finished; now you can process the JSON files." 78 | 79 | htmlhelp: 80 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 81 | @echo 82 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 83 | ".hhp project file in $(BUILDDIR)/htmlhelp." 84 | 85 | qthelp: 86 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 87 | @echo 88 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 89 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 90 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/vocabulary.qhcp" 91 | @echo "To view the help file:" 92 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/vocabulary.qhc" 93 | 94 | applehelp: 95 | $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp 96 | @echo 97 | @echo "Build finished. The help book is in $(BUILDDIR)/applehelp." 98 | @echo "N.B. You won't be able to view it unless you put it in" \ 99 | "~/Library/Documentation/Help or install it in your application" \ 100 | "bundle." 
101 | 102 | devhelp: 103 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 104 | @echo 105 | @echo "Build finished." 106 | @echo "To view the help file:" 107 | @echo "# mkdir -p $$HOME/.local/share/devhelp/vocabulary" 108 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/vocabulary" 109 | @echo "# devhelp" 110 | 111 | epub: 112 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 113 | @echo 114 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 115 | 116 | latex: 117 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 118 | @echo 119 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 120 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 121 | "(use \`make latexpdf' here to do that automatically)." 122 | 123 | latexpdf: 124 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 125 | @echo "Running LaTeX files through pdflatex..." 126 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 127 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 128 | 129 | latexpdfja: 130 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 131 | @echo "Running LaTeX files through platex and dvipdfmx..." 132 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja 133 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 134 | 135 | text: 136 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 137 | @echo 138 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 139 | 140 | man: 141 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 142 | @echo 143 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 144 | 145 | texinfo: 146 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 147 | @echo 148 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 149 | @echo "Run \`make' in that directory to run these through makeinfo" \ 150 | "(use \`make info' here to do that automatically)." 151 | 152 | info: 153 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 154 | @echo "Running Texinfo files through makeinfo..." 155 | make -C $(BUILDDIR)/texinfo info 156 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 157 | 158 | gettext: 159 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 160 | @echo 161 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 162 | 163 | changes: 164 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 165 | @echo 166 | @echo "The overview file is in $(BUILDDIR)/changes." 167 | 168 | linkcheck: 169 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 170 | @echo 171 | @echo "Link check complete; look for any errors in the above output " \ 172 | "or in $(BUILDDIR)/linkcheck/output.txt." 173 | 174 | doctest: 175 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 176 | @echo "Testing of doctests in the sources finished, look at the " \ 177 | "results in $(BUILDDIR)/doctest/output.txt." 178 | 179 | coverage: 180 | $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage 181 | @echo "Testing of coverage in the sources finished, look at the " \ 182 | "results in $(BUILDDIR)/coverage/python.txt." 183 | 184 | xml: 185 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml 186 | @echo 187 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml." 188 | 189 | pseudoxml: 190 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml 191 | @echo 192 | @echo "Build finished. 
The pseudo-XML files are in $(BUILDDIR)/pseudoxml." 193 | -------------------------------------------------------------------------------- /docs/changelog.rst: -------------------------------------------------------------------------------- 1 | ========= 2 | Changelog 3 | ========= 4 | 5 | 0.0.4 6 | ===== 7 | 8 | - ``JSON`` inconsistency fixed for the methods 9 | 10 | - ``Vocabulary.hyphenation()`` 11 | - ``Vocabulary.part_of_speech()`` 12 | - ``Vocabulary.meaning()`` 13 | 14 | 0.0.5 15 | ===== 16 | 17 | .. versionadded:: 0.0.5 18 | 19 | - Added ``Vocabulary.translate()`` 20 | - Improved Documentation 21 | - Minor bug fixes 22 | 23 | 1.0.0 24 | ~~~~~ 25 | 26 | .. versionadded:: 1.0.0 27 | 28 | - Added support for specifying response format 29 | - Updated ``Vocabulary.pronunciation``, ``Vocabulary.antonym```, ```Vocabulary.part_of_speech``` to return a list of objects with apprioprate index 30 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | # 4 | # vocabulary documentation build configuration file, created by 5 | # sphinx-quickstart on Sat Dec 12 08:15:29 2015. 6 | # 7 | # This file is execfile()d with the current directory set to its 8 | # containing dir. 9 | # 10 | # Note that not all possible configuration values are present in this 11 | # autogenerated file. 12 | # 13 | # All configuration values have a default; values that are commented out 14 | # serve to show the default. 15 | 16 | import sys 17 | import os 18 | import shlex 19 | 20 | sys.path.insert(0, os.path.abspath('..')) 21 | 22 | from vocabulary.version import VERSION, RELEASE 23 | 24 | # The version info for the project you're documenting, acts as replacement for 25 | # |version| and |release|, also used in various other places throughout the 26 | # built documents. 27 | # 28 | # The short X.Y version.'' 29 | version = VERSION 30 | # The full version, including alpha/beta/rc tags. 31 | release = RELEASE 32 | 33 | # If extensions (or modules to document with autodoc) are in another directory, 34 | # add these directories to sys.path here. If the directory is relative to the 35 | # documentation root, use os.path.abspath to make it absolute, like shown here. 36 | #sys.path.insert(0, os.path.abspath('.')) 37 | 38 | # -- General configuration ------------------------------------------------ 39 | 40 | # If your documentation needs a minimal Sphinx version, state it here. 41 | #needs_sphinx = '1.0' 42 | 43 | # Add any Sphinx extension module names here, as strings. They can be 44 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 45 | # ones. 46 | extensions = [ 47 | 'sphinx.ext.autodoc', 48 | 'sphinx.ext.intersphinx', 49 | 'sphinx.ext.todo', 50 | 'sphinx.ext.ifconfig', 51 | 'sphinx.ext.viewcode', 52 | ] 53 | 54 | # Add any paths that contain templates here, relative to this directory. 55 | templates_path = ['_templates'] 56 | 57 | # The suffix(es) of source filenames. 58 | # You can specify multiple suffix as a list of string: 59 | # source_suffix = ['.rst', '.md'] 60 | source_suffix = '.rst' 61 | 62 | # The encoding of source files. 63 | #source_encoding = 'utf-8-sig' 64 | 65 | # The master toctree document. 66 | master_doc = 'index' 67 | 68 | # General information about the project. 
69 | project = 'vocabulary' 70 | copyright = '2015, Tasdik Rahman' 71 | author = 'Tasdik Rahman' 72 | 73 | 74 | 75 | # The language for content autogenerated by Sphinx. Refer to documentation 76 | # for a list of supported languages. 77 | # 78 | # This is also used if you do content translation via gettext catalogs. 79 | # Usually you set "language" from the command line for these cases. 80 | language = None 81 | 82 | # There are two options for replacing |today|: either, you set today to some 83 | # non-false value, then it is used: 84 | #today = '' 85 | # Else, today_fmt is used as the format for a strftime call. 86 | #today_fmt = '%B %d, %Y' 87 | 88 | # List of patterns, relative to source directory, that match files and 89 | # directories to ignore when looking for source files. 90 | exclude_patterns = ['_build'] 91 | 92 | # The reST default role (used for this markup: `text`) to use for all 93 | # documents. 94 | #default_role = None 95 | 96 | # If true, '()' will be appended to :func: etc. cross-reference text. 97 | #add_function_parentheses = True 98 | 99 | # If true, the current module name will be prepended to all description 100 | # unit titles (such as .. function::). 101 | #add_module_names = True 102 | 103 | # If true, sectionauthor and moduleauthor directives will be shown in the 104 | # output. They are ignored by default. 105 | #show_authors = False 106 | 107 | # The name of the Pygments (syntax highlighting) style to use. 108 | pygments_style = 'sphinx' 109 | 110 | # A list of ignored prefixes for module index sorting. 111 | #modindex_common_prefix = [] 112 | 113 | # If true, keep warnings as "system message" paragraphs in the built documents. 114 | #keep_warnings = False 115 | 116 | # If true, `todo` and `todoList` produce output, else they produce nothing. 117 | todo_include_todos = True 118 | 119 | 120 | # -- Options for HTML output ---------------------------------------------- 121 | 122 | # The theme to use for HTML and HTML Help pages. See the documentation for 123 | # a list of builtin themes. 124 | html_theme = 'alabaster' 125 | 126 | # Theme options are theme-specific and customize the look and feel of a theme 127 | # further. For a list of options available for each theme, see the 128 | # documentation. 129 | #html_theme_options = {} 130 | 131 | # Add any paths that contain custom themes here, relative to this directory. 132 | #html_theme_path = [] 133 | 134 | # The name for this set of Sphinx documents. If None, it defaults to 135 | # " v documentation". 136 | #html_title = None 137 | 138 | # A shorter title for the navigation bar. Default is the same as html_title. 139 | #html_short_title = None 140 | 141 | # The name of an image file (relative to this directory) to place at the top 142 | # of the sidebar. 143 | #html_logo = None 144 | 145 | # The name of an image file (within the static path) to use as favicon of the 146 | # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 147 | # pixels large. 148 | #html_favicon = None 149 | 150 | # Add any paths that contain custom static files (such as style sheets) here, 151 | # relative to this directory. They are copied after the builtin static files, 152 | # so a file named "default.css" will overwrite the builtin "default.css". 153 | html_static_path = ['_static'] 154 | 155 | # Add any extra paths that contain custom files (such as robots.txt or 156 | # .htaccess) here, relative to this directory. These files are copied 157 | # directly to the root of the documentation. 
158 | #html_extra_path = [] 159 | 160 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 161 | # using the given strftime format. 162 | #html_last_updated_fmt = '%b %d, %Y' 163 | 164 | # If true, SmartyPants will be used to convert quotes and dashes to 165 | # typographically correct entities. 166 | #html_use_smartypants = True 167 | 168 | # Custom sidebar templates, maps document names to template names. 169 | #html_sidebars = {} 170 | 171 | # Additional templates that should be rendered to pages, maps page names to 172 | # template names. 173 | #html_additional_pages = {} 174 | 175 | # If false, no module index is generated. 176 | #html_domain_indices = True 177 | 178 | # If false, no index is generated. 179 | #html_use_index = True 180 | 181 | # If true, the index is split into individual pages for each letter. 182 | #html_split_index = False 183 | 184 | # If true, links to the reST sources are added to the pages. 185 | #html_show_sourcelink = True 186 | 187 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 188 | #html_show_sphinx = True 189 | 190 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 191 | #html_show_copyright = True 192 | 193 | # If true, an OpenSearch description file will be output, and all pages will 194 | # contain a tag referring to it. The value of this option must be the 195 | # base URL from which the finished HTML is served. 196 | #html_use_opensearch = '' 197 | 198 | # This is the file name suffix for HTML files (e.g. ".xhtml"). 199 | #html_file_suffix = None 200 | 201 | # Language to be used for generating the HTML full-text search index. 202 | # Sphinx supports the following languages: 203 | # 'da', 'de', 'en', 'es', 'fi', 'fr', 'h', 'it', 'ja' 204 | # 'nl', 'no', 'pt', 'ro', 'r', 'sv', 'tr' 205 | #html_search_language = 'en' 206 | 207 | # A dictionary with options for the search language support, empty by default. 208 | # Now only 'ja' uses this config value 209 | #html_search_options = {'type': 'default'} 210 | 211 | # The name of a javascript file (relative to the configuration directory) that 212 | # implements a search results scorer. If empty, the default will be used. 213 | #html_search_scorer = 'scorer.js' 214 | 215 | # Output file base name for HTML help builder. 216 | htmlhelp_basename = 'vocabularydoc' 217 | 218 | # -- Options for LaTeX output --------------------------------------------- 219 | 220 | latex_elements = { 221 | # The paper size ('letterpaper' or 'a4paper'). 222 | #'papersize': 'letterpaper', 223 | 224 | # The font size ('10pt', '11pt' or '12pt'). 225 | #'pointsize': '10pt', 226 | 227 | # Additional stuff for the LaTeX preamble. 228 | #'preamble': '', 229 | 230 | # Latex figure (float) alignment 231 | #'figure_align': 'htbp', 232 | } 233 | 234 | # Grouping the document tree into LaTeX files. List of tuples 235 | # (source start file, target name, title, 236 | # author, documentclass [howto, manual, or own class]). 237 | latex_documents = [ 238 | (master_doc, 'vocabulary.tex', 'vocabulary Documentation', 239 | 'Tasdik Rahman', 'manual'), 240 | ] 241 | 242 | # The name of an image file (relative to this directory) to place at the top of 243 | # the title page. 244 | #latex_logo = None 245 | 246 | # For "manual" documents, if this is true, then toplevel headings are parts, 247 | # not chapters. 248 | #latex_use_parts = False 249 | 250 | # If true, show page references after internal links. 
251 | #latex_show_pagerefs = False 252 | 253 | # If true, show URL addresses after external links. 254 | #latex_show_urls = False 255 | 256 | # Documents to append as an appendix to all manuals. 257 | #latex_appendices = [] 258 | 259 | # If false, no module index is generated. 260 | #latex_domain_indices = True 261 | 262 | 263 | # -- Options for manual page output --------------------------------------- 264 | 265 | # One entry per manual page. List of tuples 266 | # (source start file, name, description, authors, manual section). 267 | man_pages = [ 268 | (master_doc, 'vocabulary', 'vocabulary Documentation', 269 | [author], 1) 270 | ] 271 | 272 | # If true, show URL addresses after external links. 273 | #man_show_urls = False 274 | 275 | 276 | # -- Options for Texinfo output ------------------------------------------- 277 | 278 | # Grouping the document tree into Texinfo files. List of tuples 279 | # (source start file, target name, title, author, 280 | # dir menu entry, description, category) 281 | texinfo_documents = [ 282 | (master_doc, 'vocabulary', 'vocabulary Documentation', 283 | author, 'vocabulary', 'One line description of project.', 284 | 'Miscellaneous'), 285 | ] 286 | 287 | # Documents to append as an appendix to all manuals. 288 | #texinfo_appendices = [] 289 | 290 | # If false, no module index is generated. 291 | #texinfo_domain_indices = True 292 | 293 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 294 | #texinfo_show_urls = 'footnote' 295 | 296 | # If true, do not generate a @detailmenu in the "Top" node's menu. 297 | #texinfo_no_detailmenu = False 298 | 299 | 300 | # -- Options for Epub output ---------------------------------------------- 301 | 302 | # Bibliographic Dublin Core info. 303 | epub_title = project 304 | epub_author = author 305 | epub_publisher = author 306 | epub_copyright = copyright 307 | 308 | # The basename for the epub file. It defaults to the project name. 309 | #epub_basename = project 310 | 311 | # The HTML theme for the epub output. Since the default themes are not optimized 312 | # for small screen space, using the same theme for HTML and epub output is 313 | # usually not wise. This defaults to 'epub', a theme designed to save visual 314 | # space. 315 | #epub_theme = 'epub' 316 | 317 | # The language of the text. It defaults to the language option 318 | # or 'en' if the language is not set. 319 | #epub_language = '' 320 | 321 | # The scheme of the identifier. Typical schemes are ISBN or URL. 322 | #epub_scheme = '' 323 | 324 | # The unique identifier of the text. This can be a ISBN number 325 | # or the project homepage. 326 | #epub_identifier = '' 327 | 328 | # A unique identification for the text. 329 | #epub_uid = '' 330 | 331 | # A tuple containing the cover image and cover page html template filenames. 332 | #epub_cover = () 333 | 334 | # A sequence of (type, uri, title) tuples for the guide element of content.opf. 335 | #epub_guide = () 336 | 337 | # HTML files that should be inserted before the pages created by sphinx. 338 | # The format is a list of tuples containing the path and title. 339 | #epub_pre_files = [] 340 | 341 | # HTML files shat should be inserted after the pages created by sphinx. 342 | # The format is a list of tuples containing the path and title. 343 | #epub_post_files = [] 344 | 345 | # A list of files that should not be packed into the epub file. 346 | epub_exclude_files = ['search.html'] 347 | 348 | # The depth of the table of contents in toc.ncx. 
349 | #epub_tocdepth = 3 350 | 351 | # Allow duplicate toc entries. 352 | #epub_tocdup = True 353 | 354 | # Choose between 'default' and 'includehidden'. 355 | #epub_tocscope = 'default' 356 | 357 | # Fix unsupported image types using the Pillow. 358 | #epub_fix_images = False 359 | 360 | # Scale large images. 361 | #epub_max_image_width = 0 362 | 363 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 364 | #epub_show_urls = 'inline' 365 | 366 | # If false, no index is generated. 367 | #epub_use_index = True 368 | 369 | 370 | # Example configuration for intersphinx: refer to the Python standard library. 371 | intersphinx_mapping = {'https://docs.python.org/': None} 372 | -------------------------------------------------------------------------------- /docs/contributing.rst: -------------------------------------------------------------------------------- 1 | ============ 2 | Contributing 3 | ============ 4 | 5 | 1. Fork it. 6 | 7 | 2. Clone it 8 | 9 | create a `virtualenv `__ 10 | 11 | .. code-block:: bash 12 | 13 | $ virtualenv develop # Create virtual environment 14 | $ source develop/bin/activate # Change default python to virtual one 15 | (develop)$ git clone https://github.com/tasdikrahman/vocabulary.git 16 | (develop)$ cd vocabulary 17 | (develop)$ pip install -r requirements.txt # Install requirements for 'Vocabulary' in virtual environment 18 | 19 | Or, if ``virtualenv`` is not installed on your system: 20 | 21 | .. code-block:: bash 22 | 23 | $ wget https://raw.github.com/pypa/virtualenv/master/virtualenv.py 24 | $ python virtualenv.py develop # Create virtual environment 25 | $ source develop/bin/activate # Change default python to virtual one 26 | (develop)$ git clone https://github.com/tasdikrahman/vocabulary.git 27 | (develop)$ cd vocabulary 28 | (develop)$ pip install -r requirements.txt # Install requirements for 'Vocabulary' in virtual environment 29 | 30 | 3. Create your feature branch (``$ git checkout -b my-new-awesome-feature``) 31 | 32 | 4. Commit your changes (``$ git commit -am 'Added feature'``) 33 | 34 | 5. Run tests 35 | 36 | .. code-block:: bash 37 | 38 | (develop) $ ./tests.py -v 39 | 40 | Conform to `PEP8 `__ and if everything is running fine, integrate your feature 41 | 42 | 6. Push to the branch (``$ git push origin my-new-awesome-feature``) 43 | 44 | 7. Create new Pull Request 45 | 46 | Hack away! 47 | 48 | To do 49 | ===== 50 | 51 | - [X] Add translate module 52 | - [X] Add an option like `JSON=False` or `JSON=True` where the former returns a list object 53 | 54 | Tests 55 | ===== 56 | 57 | 58 | Running the test cases 59 | 60 | .. code-block:: bash 61 | 62 | $ ./tests.py -v 63 | test_antonym_ant_key_error (tests.tests.TestModule) ... ok 64 | test_antonym_found (tests.tests.TestModule) ... ok 65 | test_antonym_not_found (tests.tests.TestModule) ... ok 66 | test_hyphenation_found (tests.tests.TestModule) ... ok 67 | test_hyphenation_not_found (tests.tests.TestModule) ... ok 68 | test_meaning_found (tests.tests.TestModule) ... ok 69 | test_meaning_key_error (tests.tests.TestModule) ... ok 70 | test_meaning_not_found (tests.tests.TestModule) ... ok 71 | test_partOfSpeech_found (tests.tests.TestModule) ... ok 72 | test_partOfSpeech_not_found (tests.tests.TestModule) ... ok 73 | test_pronunciation_found (tests.tests.TestModule) ... ok 74 | test_pronunciation_not_found (tests.tests.TestModule) ... ok 75 | test_respond_as_dict_1 (tests.tests.TestModule) ... ok 76 | test_respond_as_dict_2 (tests.tests.TestModule) ... 
ok 77 | test_respond_as_dict_3 (tests.tests.TestModule) ... ok 78 | test_respond_as_list_1 (tests.tests.TestModule) ... ok 79 | test_respond_as_list_2 (tests.tests.TestModule) ... ok 80 | test_respond_as_list_3 (tests.tests.TestModule) ... ok 81 | test_synonynm_empty_list (tests.tests.TestModule) ... ok 82 | test_synonynm_found (tests.tests.TestModule) ... ok 83 | test_synonynm_not_found (tests.tests.TestModule) ... ok 84 | test_synonynm_tuc_key_error (tests.tests.TestModule) ... ok 85 | test_translate_empty_list (tests.tests.TestModule) ... ok 86 | test_translate_found (tests.tests.TestModule) ... ok 87 | test_translate_not_found (tests.tests.TestModule) ... ok 88 | test_translate_tuc_key_error (tests.tests.TestModule) ... ok 89 | test_usageExample_empty_list (tests.tests.TestModule) ... ok 90 | test_usageExample_found (tests.tests.TestModule) ... ok 91 | test_usageExample_not_found (tests.tests.TestModule) ... ok 92 | 93 | ---------------------------------------------------------------------- 94 | Ran 29 tests in 0.015s 95 | 96 | OK 97 | 98 | 99 | Discuss 100 | ======= 101 | 102 | Join us on our `Gitter channel `__ 103 | if you want to chat or if you have any questions. 104 | 105 | Building the docs 106 | ================= 107 | 108 | Install ``Sphinx`` with ``$ pip install -r requirements-dev.txt``, then build the HTML pages with 109 | 110 | .. code-block:: bash 111 | 112 | $ make html 113 | 114 | Contributors 115 | ============ 116 | 117 | - Huge shoutout to `@tenorz007 `__ for adding the ability to return the API response as different data structures. 118 | - Thanks to `Anton Relin `__ for adding the `translate()` module 119 | - A big shout out to all the `contributors `__ 120 | -------------------------------------------------------------------------------- /docs/help.rst: -------------------------------------------------------------------------------- 1 | ==== 2 | Help 3 | ==== 4 | 5 | 6 | If you need to see the usage for any of the methods, do a 7 | 8 | .. code-block:: python 9 | 10 | >>> from vocabulary.vocabulary import Vocabulary as vb 11 | >>> help(vb.translate) 12 | Help on function translate in module vocabulary.vocabulary: 13 | 14 | translate(phrase, source_lang, dest_lang) 15 | Gets the translations for a given word, and returns possibilites as a list 16 | Calls the glosbe API for getting the translation 17 | 18 | and languages should be specifed in 3-letter ISO 639-3 format, 19 | although many 2-letter codes (en, de, fr) will work. 20 | 21 | See http://en.wikipedia.org/wiki/List_of_ISO_639-3_codes for full list. 22 | 23 | :param phrase: word for which translation is being found 24 | :param source_lang: Translation from language 25 | :param dest_lang: Translation to language 26 | :returns: returns a json object 27 | (END) 28 | 29 | 30 | and so on for other functions 31 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | .. vocabulary documentation master file, created by 2 | sphinx-quickstart on Sat Dec 12 07:24:30 2015. 3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `toctree` directive. 5 | 6 | vocabulary 7 | ========== 8 | 9 | A dictionary magician in the form of a module 10 | 11 | |PyPI version| |License| |Python Versions| |Build Status| |Requirements Status| |Gitter chat| 12 | 13 | Contents: 14 | 15 | ..
toctree:: 16 | :maxdepth: 2 17 | 18 | introduction 19 | wordnetcomparison 20 | installation 21 | usage 22 | help 23 | contributing 24 | changelog 25 | knownissues 26 | 27 | Indices and tables 28 | ================== 29 | 30 | * :ref:`genindex` 31 | * :ref:`modindex` 32 | * :ref:`search` 33 | 34 | 35 | .. |PyPI version| image:: https://img.shields.io/pypi/v/Vocabulary.svg 36 | :target: https://img.shields.io/pypi/v/Vocabulary.svg 37 | .. |License| image:: https://img.shields.io/pypi/l/vocabulary.svg 38 | :target: https://img.shields.io/pypi/l/vocabulary.svg 39 | .. |Python Versions| image:: https://img.shields.io/pypi/pyversions/Vocabulary.svg 40 | .. |Build Status| image:: https://travis-ci.org/tasdikrahman/vocabulary.svg?branch=master 41 | .. |Gitter chat| image:: https://badges.gitter.im/Join%20Chat.svg 42 | :alt: Join the chat at https://gitter.im/prodicus/vocabulary 43 | :target: https://gitter.im/prodicus/vocabulary?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge 44 | .. |Requirements Status| image:: https://requires.io/github/tasdikrahman/vocabulary/requirements.svg?branch=master 45 | :target: https://requires.io/github/tasdikrahman/vocabulary/requirements/?branch=master 46 | -------------------------------------------------------------------------------- /docs/installation.rst: -------------------------------------------------------------------------------- 1 | 2 | 3 | ============ 4 | Installation 5 | ============ 6 | 7 | Option 1: installing through `pip `__ (Suggested way) 8 | ============================================================================================== 9 | 10 | `pypi package link `__ 11 | 12 | .. code-block:: bash 13 | 14 | $ pip install vocabulary 15 | 16 | If you are behind a proxy 17 | 18 | .. code-block:: bash 19 | 20 | $ pip --proxy [username:password@]domain_name:port install vocabulary 21 | 22 | .. Note:: If you get ``command not found`` then 23 | 24 | .. code-block:: bash 25 | 26 | $ sudo apt-get install python-pip 27 | 28 | should fix that 29 | 30 | Option 2: Installing from source 31 | ================================ 32 | 33 | .. code-block:: bash 34 | 35 | $ git clone https://github.com/tasdikrahman/vocabulary.git 36 | $ cd vocabulary/ 37 | $ pip install -r requirements.txt 38 | $ python setup.py install 39 | 40 | Upgrade 41 | ======= 42 | 43 | You can update to the latest version by doing a 44 | 45 | .. code-block:: bash 46 | 47 | $ pip install --upgrade vocabulary 48 | 49 | Uninstalling 50 | ============ 51 | 52 | .. code-block:: bash 53 | 54 | $ pip uninstall vocabulary 55 | -------------------------------------------------------------------------------- /docs/introduction.rst: -------------------------------------------------------------------------------- 1 | 2 | 3 | ============ 4 | Introduction 5 | ============ 6 | 7 | For a given word, using ``Vocabulary``, you can get it's 8 | 9 | - Meaning 10 | - Synonyms 11 | - Antonyms 12 | - Part of speech : whether the word is a ``noun``, ``interjection`` or an ``adverb`` et el 13 | - Translate : Translate a phrase from a source language to the desired language. 14 | - Usage example : a quick example on how to use the word in a sentence 15 | - Pronunciation 16 | - Hyphenation : shows the particular stress points(if any) 17 | 18 | 19 | Features 20 | ======== 21 | 22 | - Written in uncomplicated ``Python`` 23 | - Returns ``JSON`` objects 24 | - Minimum dependencies ( just uses `requests `__ ) 25 | - Easy to `install `__ 26 | - A decent substitute to ``Wordnet``\ (well almost!) 
Wanna see? Here is a `small comparison <#wordnet-comparison>`__ 27 | - Stupidly `easy to use `__ 28 | - Fast! 29 | - Supports 30 | 31 | - both, ``python2.*`` and ``python3.*`` 32 | - Works on Mac, Linux and Windows 33 | 34 | How does it work 35 | ================ 36 | 37 | Under the hood, it makes use of 4 awesome API's to give you consistent 38 | results. The API's being 39 | 40 | - Wordnik 41 | - Glosbe 42 | - BighugeLabs 43 | - Urbandict 44 | -------------------------------------------------------------------------------- /docs/knownissues.rst: -------------------------------------------------------------------------------- 1 | ============ 2 | Known Issues 3 | ============ 4 | 5 | When using the method ``pronunciation`` 6 | 7 | :: 8 | 9 | >>> vb.pronunciation("hippopotamus") 10 | [{'raw': '(hĭpˌə-pŏtˈə-məs)', 'rawType': 'ahd-legacy', 'seq': 0}, {'raw': 'HH IH2 P AH0 P AA1 T AH0 M AH0 S', 'rawType': 'arpabet', 'seq': 1}] 11 | >>> type(vb.pronunciation("hippopotamus")) 12 | 13 | >>> json.dumps(vb.pronunciation("hippopotamus")) 14 | '[{"raw": "(h\\u012dp\\u02cc\\u0259-p\\u014ft\\u02c8\\u0259-m\\u0259s)", "rawType": "ahd-legacy", "seq": 0}, {"raw": "HH IH2 P AH0 P AA1 T AH0 M AH0 S", "rawType": "arpabet", "seq": 1}]' 15 | >>> 16 | 17 | You are being returned a ``list`` object instead of a ``JSON`` object. 18 | When returning the latter, there are some ``unicode`` issues. A fix for 19 | this will be released soon. 20 | -------------------------------------------------------------------------------- /docs/usage.rst: -------------------------------------------------------------------------------- 1 | ============== 2 | Usage Examples 3 | ============== 4 | 5 | A Simple demonstration of the module 6 | 7 | .. code-block:: python 8 | 9 | ## Importing the module 10 | >>> from vocabulary.vocabulary import Vocabulary as vb 11 | 12 | ## Extracting "Meaning" 13 | >>> vb.meaning("hillbilly") 14 | '[{"text": "Someone who is from the hills; especially from a rural area, with a connotation of a lack of refinement or sophistication.", "seq": 0}, {"text": "someone who is from the hills", "seq": 1}, {"text": "A white person from the rural southern part of the United States.", "seq": 2}]' 15 | >>> 16 | 17 | ## "Synonym" 18 | >>> vb.synonym("hurricane") 19 | '[{"text": "storm", "seq": 0}, {"text": "tropical cyclone", "seq": 1}, {"text": "typhoon", "seq": 2}, {"text": "gale", "seq": 3}]' 20 | >>> 21 | 22 | ## "Antonym" 23 | >>> vb.antonym("respect") 24 | '[{"text": "disesteem"}, {"text": "disrespect"}]' 25 | >>> vb.antonym("insane") 26 | '[{"text": "sane"}]' 27 | 28 | ## "Part of Speech" 29 | >>> vb.part_of_speech("hello") 30 | '[{"text": "interjection", "example": "greeting", "seq": 0}, {"text": "verb-intransitive", "example": "To call.", "seq": 1}]' 31 | >>> 32 | 33 | ## "Usage Examples" 34 | >>> vb.usage_example("chicanery") 35 | '[{"text": "The Bush Administration is now the commander-in-theif (lower-case intentional) thanks to their chicanery.", "seq": 0}]' 36 | >>> 37 | 38 | ## "Pronunciation" 39 | >>> vb.pronunciation("hippopotamus") 40 | '[{'raw': '(hĭpˌə-pŏtˈə-məs)', 'rawType': 'ahd-legacy', 'seq': 0}, {'raw': 'HH IH2 P AH0 P AA1 T AH0 M AH0 S', 'rawType': 'arpabet', 'seq': 1}]' 41 | >>> 42 | 43 | ## "Hyphenation" 44 | >>> vb.hyphenation("hippopotamus") 45 | '[{"text": "hip", "type": "secondary stress", "seq": 0}, {"text": "po", "seq": 1}, {"text": "pot", "type": "stress", "seq": 2}, {"text": "a", "seq": 3}, {"text": "mus", "seq": 4}]' 46 | >>> vb.hyphenation("amazing") 47 | '[{"text": "a", "seq": 0}, 
{"text": "maz", "type": "stress", "seq": 1}, {"text": "ing", "seq": 2}]' 48 | >>> 49 | 50 | ## "Translate" 51 | >>> vb.translate("bread", "en","fra") 52 | '[{"seq": 0, "text": "pain"}, {"seq": 1, "text": "paner"}, {"seq": 2, "text": "pognon"}, {"seq": 3, "text": "fric"}, {"seq": 4, "text": "bl\\u00e9"}]' 53 | >>> vb.translate("goodbye", "en","es") 54 | '[{"seq": 0, "text": "hasta luego"}, {"seq": 1, "text": "vaya con Dios"}, {"seq": 2, "text": "despedida"}, {"seq": 3, "text": "adi\\u00f3s"}, {"seq": 4, "text": "vaya con dios"}, {"seq": 5, "text": "hasta la vista"}, {"seq": 6, "text": "nos vemos"}, {"seq": 7, "text": "adios"}, {"seq": 8, "text": "hasta pronto"}]' 55 | >>> 56 | 57 | ## "Response Formatting" 58 | >>> vb.antonym("love", format="dict") 59 | '{"text": "hate"}'' 60 | >>> vb.antonym("love", format="list") 61 | ["hate"] 62 | >>> vb.part_of_speech("code", format="dict") 63 | {0: {"text": "noun", "example": "A systematically arranged and comprehensive collection of laws."}} 64 | >>> vb.part_of_speech("code", format="list") 65 | [["noun", "A systematically arranged and comprehensive collection of laws."]] 66 | 67 | -------------------------------------------------------------------------------- /docs/wordnetcomparison.rst: -------------------------------------------------------------------------------- 1 | ================== 2 | Wordnet Comparison 3 | ================== 4 | 5 | ``Wordnet`` is a great resource. No doubt about it! So why should you 6 | use ``Vocabulary`` when we already have ``Wordnet`` out there? 7 | 8 | Let's say you want to find out the synonyms for the word ``car``. 9 | 10 | - Using ``Wordnet`` 11 | 12 | :: 13 | 14 | >>> from nltk.corpus import wordnet 15 | >>> syns = wordnet.synsets('car') 16 | >>> syns[0].lemmas[0].name 17 | 'car' 18 | >>> [s.lemmas[0].name for s in syns] 19 | ['car', 'car', 'car', 'car', 'cable_car'] 20 | 21 | >>> [l.name for s in syns for l in s.lemmas] 22 | ['car', 'auto', 'automobile', 'machine', 'motorcar', 'car', 'railcar', 'railway_car', 'railroad_car', 'car', 'gondola', 'car', 'elevator_car', 'cable_car', 'car'] 23 | 24 | - Doing the same using ``Vocabulary`` 25 | 26 | :: 27 | 28 | >>> from vocabulary import Vocabulary as vb 29 | >>> vb.synonym("car") 30 | '[{"seq": 0, "text": "automotive"}, {"seq": 1, "text": "motor"}, {"seq": 2, "text": "wagon"}, {"seq": 3, "text": "cart"}, {"seq": 4, "text": "automobile"}]' 31 | >>> ## load the json data 32 | >>> car_synonyms = json.loads(vb.synonym("car")) 33 | >>> type(car_synonyms) 34 | 35 | >>> 36 | 37 | So there you go. You get the data in an easy ``JSON`` format. 38 | 39 | You can go on comparing for the other methods too. 
-------------------------------------------------------------------------------- /nose.cfg: -------------------------------------------------------------------------------- 1 | [nosetests] 2 | verbosity=3 3 | exe=True 4 | -------------------------------------------------------------------------------- /requirements-dev.txt: -------------------------------------------------------------------------------- 1 | Sphinx==1.6.5 2 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | requests==2.18.4 2 | mock==2.0.0 3 | nose==1.3.7 4 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [metadata] 2 | description-file = README.md 3 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | import sys 5 | 6 | try: 7 | from os import path 8 | from setuptools import setup, find_packages 9 | except ImportError: 10 | from distutils.core import setup 11 | 12 | from vocabulary.version import VERSION 13 | __version__ = VERSION 14 | 15 | # here = path.abspath(path.dirname(__file__)) 16 | 17 | # # get the dependencies and installs 18 | # if sys.version_info[:2] <= (2, 7): 19 | # with open(path.join(here, 'requirements.txt')) as f: 20 | # all_reqs = f.read().split('\n') 21 | # else: 22 | # with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f: 23 | # all_reqs = f.read().split('\n') 24 | 25 | # install_requires = [x.strip() for x in all_reqs if 'git+' not in x] 26 | # dependency_links = [x.strip().replace('git+', '') for x in all_reqs if 'git+' not in x] 27 | 28 | try: 29 | if sys.version_info[:2] <= (2, 7): 30 | readme = open("README.rst") 31 | else: 32 | readme = open("README.rst", encoding="utf8") 33 | long_description = str(readme.read()) 34 | finally: 35 | readme.close() 36 | 37 | setup( 38 | name='Vocabulary', 39 | author='Tasdik Rahman', 40 | version=VERSION, 41 | author_email='tasdik95@gmail.com', 42 | description="Module to get meaning, synonym, antonym, part_of_speech, usage_example, pronunciation and hyphenation for a given word", 43 | long_description=long_description, 44 | url='https://github.com/tasdikrahman/vocabulary', 45 | license='MIT', 46 | install_requires=[ 47 | "requests==2.13.0", 48 | "mock==2.0.0" 49 | ], 50 | #dependency_links=dependency_links, 51 | # adding package data to it 52 | packages=find_packages(exclude=['contrib', 'docs']), 53 | download_url='https://github.com/tasdikrahman/vocabulary/tarball/' + __version__, 54 | classifiers=[ 55 | 'Intended Audience :: Developers', 56 | 'License :: OSI Approved :: MIT License', 57 | 'Natural Language :: English', 58 | 'Programming Language :: Python', 59 | 'Programming Language :: Python :: 2.7', 60 | 'Programming Language :: Python :: 3', 61 | 'Programming Language :: Python :: 3.4', 62 | ], 63 | keywords=['Dictionary', 'Vocabulary', 'simple dictionary', 'pydict', 'dictionary module'] 64 | ) 65 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tasdikrahman/vocabulary/54403c5981af25dc3457796b57048ae27f09e9be/tests/__init__.py 
-------------------------------------------------------------------------------- /tests/tests.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | from vocabulary.vocabulary import Vocabulary as vb 5 | from vocabulary.responselib import Response as rp 6 | import unittest 7 | import sys 8 | 9 | try: 10 | import simplejson as json 11 | except ImportError: 12 | import json 13 | try: 14 | from unittest import mock 15 | except Exception as e: 16 | import mock 17 | 18 | 19 | class TestModule(unittest.TestCase): 20 | """Checks for the sanity of all module methods""" 21 | 22 | @mock.patch('vocabulary.vocabulary.requests.get') 23 | def test_meaning_found(self, mock_api_call): 24 | res = { 25 | "tuc": [ 26 | { 27 | "meanings": [ 28 | { 29 | "language": "en", 30 | "text": "the act of singing with closed lips" 31 | } 32 | ] 33 | } 34 | ] 35 | } 36 | 37 | mock_api_call.return_value = mock.Mock() 38 | mock_api_call.return_value.status_code = 200 39 | mock_api_call.return_value.json.return_value = res 40 | 41 | expected_result = '[{"seq": 0, "text": "the act of singing with closed lips"}]' 42 | expected_result = json.dumps(json.loads(expected_result)) 43 | result = vb.meaning("humming") 44 | 45 | if sys.version_info[:2] <= (2, 7): 46 | self.assertItemsEqual(expected_result, result) 47 | else: 48 | self.assertCountEqual(expected_result, result) 49 | 50 | @mock.patch('vocabulary.vocabulary.requests.get') 51 | def test_meaning_not_found(self, mock_api_call): 52 | mock_api_call.return_value = mock.Mock() 53 | mock_api_call.return_value.status_code = 404 54 | 55 | self.assertFalse(vb.meaning("humming")) 56 | 57 | @mock.patch('vocabulary.vocabulary.requests.get') 58 | def test_meaning_key_error(self, mock_api_call): 59 | res = { 60 | "result": "ok", 61 | "phrase": "humming" 62 | } 63 | 64 | mock_api_call.return_value = mock.Mock() 65 | mock_api_call.return_value.status_code = 200 66 | mock_api_call.return_value.json.return_value = res 67 | 68 | expected_result = '[{"seq": 0, "text": "the act of singing with closed lips"}]' 69 | expected_result = json.dumps(json.loads(expected_result)) 70 | 71 | self.assertFalse(vb.meaning("humming")) 72 | 73 | @mock.patch('vocabulary.vocabulary.requests.get') 74 | def test_synonynm_found(self, mock_api_call): 75 | res = { 76 | "tuc": [ 77 | { 78 | "phrase": { 79 | "text": "get angry", 80 | "language": "en" 81 | } 82 | }, 83 | { 84 | "phrase": { 85 | "text": "mad", 86 | "language": "en" 87 | }, 88 | } 89 | ] 90 | } 91 | 92 | mock_api_call.return_value = mock.Mock() 93 | mock_api_call.return_value.status_code = 200 94 | mock_api_call.return_value.json.return_value = res 95 | 96 | expected_result = '[{"text": "get angry", "seq": 0}, {"text": "mad", "seq": 1}]' 97 | expected_result = json.dumps(json.loads(expected_result)) 98 | result = vb.synonym("angry") 99 | 100 | if sys.version_info[:2] <= (2, 7): 101 | self.assertItemsEqual(expected_result, result) 102 | else: 103 | self.assertCountEqual(expected_result, result) 104 | 105 | @mock.patch('vocabulary.vocabulary.requests.get') 106 | def test_synonynm_not_found(self, mock_api_call): 107 | mock_api_call.return_value = mock.Mock() 108 | mock_api_call.return_value.status_code = 404 109 | 110 | self.assertFalse(vb.synonym("angry")) 111 | 112 | @mock.patch('vocabulary.vocabulary.requests.get') 113 | def test_synonynm_tuc_key_error(self, mock_api_call): 114 | res = { 115 | "result": "ok", 116 | "phrase": "angry" 117 | } 118 | 119 | 
mock_api_call.return_value = mock.Mock() 120 | mock_api_call.return_value.status_code = 200 121 | mock_api_call.return_value.json.return_value = res 122 | 123 | self.assertFalse(vb.synonym("angry")) 124 | 125 | @mock.patch('vocabulary.vocabulary.requests.get') 126 | def test_synonynm_empty_list(self, mock_api_call): 127 | res = { 128 | "result": "ok", 129 | "tuc": [], 130 | "phrase": "angry" 131 | } 132 | 133 | mock_api_call.return_value = mock.Mock() 134 | mock_api_call.return_value.status_code = 200 135 | mock_api_call.return_value.json.return_value = res 136 | 137 | self.assertFalse(vb.synonym("angry")) 138 | 139 | @mock.patch('vocabulary.vocabulary.requests.get') 140 | def test_translate_found(self, mock_api_call): 141 | res = { 142 | "tuc": [ 143 | { 144 | "phrase": { 145 | "text": "anglais", 146 | "language": "fr" 147 | } 148 | }, 149 | { 150 | "phrase": { 151 | "text": "germanique", 152 | "language": "fr" 153 | }, 154 | } 155 | ] 156 | } 157 | 158 | mock_api_call.return_value = mock.Mock() 159 | mock_api_call.return_value.status_code = 200 160 | mock_api_call.return_value.json.return_value = res 161 | 162 | expected_result = '[{"text": "anglais", "seq": 0}, {"text": "germanique", "seq": 1}]' 163 | expected_result = json.dumps(json.loads(expected_result)) 164 | result = vb.translate("english", "en", "fr") 165 | 166 | if sys.version_info[:2] <= (2, 7): 167 | self.assertItemsEqual(expected_result, result) 168 | else: 169 | self.assertCountEqual(expected_result, result) 170 | 171 | @mock.patch('vocabulary.vocabulary.requests.get') 172 | def test_translate_not_found(self, mock_api_call): 173 | mock_api_call.return_value = mock.Mock() 174 | mock_api_call.return_value.status_code = 404 175 | 176 | self.assertFalse(vb.translate("english", "en", "fr")) 177 | 178 | @mock.patch('vocabulary.vocabulary.requests.get') 179 | def test_translate_tuc_key_error(self, mock_api_call): 180 | res = { 181 | "result": "ok", 182 | "phrase": "english" 183 | } 184 | 185 | mock_api_call.return_value = mock.Mock() 186 | mock_api_call.return_value.status_code = 200 187 | mock_api_call.return_value.json.return_value = res 188 | 189 | self.assertFalse(vb.translate("english", "en", "fr")) 190 | 191 | @mock.patch('vocabulary.vocabulary.requests.get') 192 | def test_translate_empty_list(self, mock_api_call): 193 | res = { 194 | "result": "ok", 195 | "tuc": [], 196 | "phrase": "english" 197 | } 198 | 199 | mock_api_call.return_value = mock.Mock() 200 | mock_api_call.return_value.status_code = 200 201 | mock_api_call.return_value.json.return_value = res 202 | 203 | self.assertFalse(vb.translate("english", "en", "fr")) 204 | 205 | @mock.patch('vocabulary.vocabulary.requests.get') 206 | def test_antonym_found(self, mock_api_call): 207 | res = { 208 | "noun": { 209 | "ant": ["hate", "dislike"] 210 | }, 211 | "verb": { 212 | "ant": ["hate", "hater"] 213 | } 214 | } 215 | 216 | mock_api_call.return_value = mock.Mock() 217 | mock_api_call.return_value.status_code = 200 218 | mock_api_call.return_value.json.return_value = res 219 | 220 | expected_result = '[{"text": "hate", "seq": 0}, {"text": "dislike", "seq": 1}, {"text": "hater", "seq": 2}]' 221 | result = vb.antonym("love") 222 | 223 | if sys.version_info[:2] <= (2, 7): 224 | self.assertItemsEqual(expected_result, result) 225 | else: 226 | self.assertCountEqual(expected_result, result) 227 | 228 | @mock.patch('vocabulary.vocabulary.requests.get') 229 | def test_antonym_not_found(self, mock_api_call): 230 | mock_api_call.return_value = mock.Mock() 231 | 
mock_api_call.return_value.status_code = 404 232 | 233 | self.assertFalse(vb.antonym("love")) 234 | 235 | @mock.patch('vocabulary.vocabulary.requests.get') 236 | def test_antonym_ant_key_error(self, mock_api_call): 237 | res = { 238 | "noun": {}, 239 | "verb": {} 240 | } 241 | 242 | mock_api_call.return_value = mock.Mock() 243 | mock_api_call.return_value.status_code = 200 244 | mock_api_call.return_value.json.return_value = res 245 | 246 | self.assertFalse(vb.antonym("love")) 247 | 248 | @mock.patch('vocabulary.vocabulary.requests.get') 249 | def test_partOfSpeech_found(self, mock_api_call): 250 | res = [ 251 | { 252 | "word": "hello", 253 | "partOfSpeech": "interjection", 254 | "text": "greeting" 255 | }, 256 | { 257 | "word": "hello", 258 | "partOfSpeech": "verb-intransitive", 259 | "text": "To call." 260 | } 261 | ] 262 | 263 | mock_api_call.return_value = mock.Mock() 264 | mock_api_call.return_value.status_code = 200 265 | mock_api_call.return_value.json.return_value = res 266 | 267 | expected_result = '[{"text": "interjection", "example": "greeting", "seq": 0}, {"text": "verb-intransitive", "example": "To call.", "seq": 1}]' 268 | result = vb.part_of_speech("hello") 269 | 270 | if sys.version_info[:2] <= (2, 7): 271 | self.assertItemsEqual(expected_result, result) 272 | else: 273 | self.assertCountEqual(expected_result, result) 274 | 275 | @mock.patch('vocabulary.vocabulary.requests.get') 276 | def test_partOfSpeech_not_found(self, mock_api_call): 277 | mock_api_call.return_value = mock.Mock() 278 | mock_api_call.return_value.status_code = 404 279 | 280 | self.assertFalse(vb.part_of_speech("hello")) 281 | 282 | @mock.patch('vocabulary.vocabulary.requests.get') 283 | def test_usageExample_found(self, mock_api_call): 284 | res = { 285 | "list": [ 286 | { 287 | "definition": "a small mound or hill", 288 | "thumbs_up": 18, 289 | "word": "hillock", 290 | "example": "I went to the to of the hillock to look around.", 291 | "thumbs_down": 3 292 | } 293 | ] 294 | } 295 | 296 | mock_api_call.return_value = mock.Mock() 297 | mock_api_call.return_value.status_code = 200 298 | mock_api_call.return_value.json.return_value = res 299 | 300 | expected_result = '[{"seq": 0, "text": "I went to the to of the hillock to look around."}]' 301 | result = vb.usage_example("hillock") 302 | 303 | if sys.version_info[:2] <= (2, 7): 304 | self.assertItemsEqual(expected_result, result) 305 | else: 306 | self.assertCountEqual(expected_result, result) 307 | 308 | @mock.patch('vocabulary.vocabulary.requests.get') 309 | def test_usageExample_not_found(self, mock_api_call): 310 | mock_api_call.return_value = mock.Mock() 311 | mock_api_call.return_value.status_code = 404 312 | 313 | self.assertFalse(vb.usage_example("hillock")) 314 | 315 | @mock.patch('vocabulary.vocabulary.requests.get') 316 | def test_usageExample_empty_list(self, mock_api_call): 317 | res = { 318 | "list": [ 319 | { 320 | "definition": "a small mound or hill", 321 | "thumbs_up": 0, 322 | "word": "hillock", 323 | "example": "I went to the to of the hillock to look around.", 324 | "thumbs_down": 3 325 | } 326 | ] 327 | } 328 | 329 | mock_api_call.return_value = mock.Mock() 330 | mock_api_call.return_value.status_code = 200 331 | mock_api_call.return_value.json.return_value = res 332 | 333 | self.assertFalse(vb.usage_example("hillock")) 334 | 335 | @mock.patch('vocabulary.vocabulary.requests.get') 336 | def test_pronunciation_found(self, mock_api_call): 337 | res = [ 338 | { 339 | "rawType": "ahd-legacy", 340 | "seq": 0, 341 | "raw": "hip" 342 | }, 343 
| { 344 | "rawType": "arpabet", 345 | "seq": 0, 346 | "raw": "HH IH2 P AH0 P AA1 T AH0 M AH0 S" 347 | } 348 | ] 349 | 350 | mock_api_call.return_value = mock.Mock() 351 | mock_api_call.return_value.status_code = 200 352 | mock_api_call.return_value.json.return_value = res 353 | 354 | expected_result = '[{"rawType": "ahd-legacy", "raw": "hip", "seq": 0}, {"rawType": "arpabet", "raw": "HH IH2 P AH0 P AA1 T AH0 M AH0 S", "seq": 1}]' 355 | result = vb.pronunciation("hippopotamus") 356 | 357 | if sys.version_info[:2] <= (2, 7): 358 | self.assertItemsEqual(expected_result, result) 359 | else: 360 | self.assertCountEqual(expected_result, result) 361 | 362 | @mock.patch('vocabulary.vocabulary.requests.get') 363 | def test_pronunciation_not_found(self, mock_api_call): 364 | mock_api_call.return_value = mock.Mock() 365 | mock_api_call.return_value.status_code = 404 366 | 367 | self.assertFalse(vb.pronunciation("hippopotamus")) 368 | 369 | @mock.patch('vocabulary.vocabulary.requests.get') 370 | def test_hyphenation_found(self, mock_api_call): 371 | res = [ 372 | { 373 | "seq": 0, 374 | "type": "secondary stress", 375 | "text": "hip" 376 | }, 377 | { 378 | "seq": 1, 379 | "text": "po" 380 | } 381 | ] 382 | 383 | mock_api_call.return_value = mock.Mock() 384 | mock_api_call.return_value.status_code = 200 385 | mock_api_call.return_value.json.return_value = res 386 | 387 | expected_result = '[{"seq": 0, "text": "hip", "type": "secondary stress"}, {"seq": 1, "text": "po"}]' 388 | result = vb.hyphenation("hippopotamus") 389 | 390 | if sys.version_info[:2] <= (2, 7): 391 | self.assertItemsEqual(expected_result, result) 392 | else: 393 | self.assertCountEqual(expected_result, result) 394 | 395 | @mock.patch('vocabulary.vocabulary.requests.get') 396 | def test_hyphenation_not_found(self, mock_api_call): 397 | mock_api_call.return_value = mock.Mock() 398 | mock_api_call.return_value.status_code = 404 399 | 400 | self.assertFalse(vb.hyphenation("hippopotamus")) 401 | 402 | def test_respond_as_dict_1(self): 403 | data = json.loads('[{"text": "hummus", "seq": 0}]') 404 | expected_result = {0: {"text": "hummus"}} 405 | result = rp().respond(data, 'dict') 406 | if sys.version_info[:2] <= (2, 7): 407 | self.assertItemsEqual(expected_result, result) 408 | else: 409 | self.assertCountEqual(expected_result, result) 410 | 411 | def test_respond_as_dict_2(self): 412 | data = json.loads('[{"text": "hummus", "seq": 0},{"text": "hummusy", "seq": 1}]') 413 | expected_result = {0: {"text": "hummus"}, 1: {"text": "hummusy"}} 414 | result = rp().respond(data, 'dict') 415 | if sys.version_info[:2] <= (2, 7): 416 | self.assertItemsEqual(expected_result, result) 417 | else: 418 | self.assertCountEqual(expected_result, result) 419 | 420 | def test_respond_as_dict_3(self): 421 | data = json.loads('{"text": ["hummus"]}') 422 | expected_result = {"text": "hummus"} 423 | result = rp().respond(data, 'dict') 424 | if sys.version_info[:2] <= (2, 7): 425 | self.assertItemsEqual(expected_result, result) 426 | else: 427 | self.assertCountEqual(expected_result, result) 428 | 429 | def test_respond_as_list_1(self): 430 | data = json.loads('[{"text": "hummus", "seq": 0}]') 431 | expected_result = ["hummus"] 432 | result = rp().respond(data, 'list') 433 | if sys.version_info[:2] <= (2, 7): 434 | self.assertItemsEqual(expected_result, result) 435 | else: 436 | self.assertCountEqual(expected_result, result) 437 | 438 | def test_respond_as_list_2(self): 439 | data = json.loads('[{"text": "hummus", "seq": 0},{"text": "hummusy", "seq": 1}]') 440 | 
expected_result = ["hummus", "hummusy"] 441 | result = rp().respond(data, 'list') 442 | if sys.version_info[:2] <= (2, 7): 443 | self.assertItemsEqual(expected_result, result) 444 | else: 445 | self.assertCountEqual(expected_result, result) 446 | 447 | def test_respond_as_list_3(self): 448 | data = json.loads('{"text": ["hummus"]}') 449 | expected_result = ["hummus"] 450 | result = rp().respond(data, 'list') 451 | if sys.version_info[:2] <= (2, 7): 452 | self.assertItemsEqual(expected_result, result) 453 | else: 454 | self.assertCountEqual(expected_result, result) 455 | 456 | 457 | if __name__ == "__main__": 458 | unittest.main() 459 | -------------------------------------------------------------------------------- /vocabulary/__init__.py: -------------------------------------------------------------------------------- 1 | """ 2 | Imports class 'Vocabulary' 3 | """ 4 | -------------------------------------------------------------------------------- /vocabulary/responselib.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | """ 5 | The MIT License (MIT) 6 | Copyright © 2017 Chizzy Alaedu 7 | 8 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and 9 | associated documentation files (the “Software”), to deal in the Software without restriction, including 10 | without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 11 | copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the 12 | following conditions: 13 | 14 | The above copyright notice and this permission notice shall be included in all copies or substantial 15 | portions of the Software. 16 | 17 | THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT 18 | LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN 19 | NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 20 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 21 | SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
22 | """ 23 | 24 | import json 25 | 26 | __version__ = '0.0.1' 27 | __author__ = "Chizzy Alaedu" 28 | 29 | 30 | class Response(object): 31 | """ 32 | | Private methods | Public methods | 33 | |:----------------------:|------------------| 34 | | __respond_with_dict() | respond() | 35 | | __respond_with_list() | | 36 | | | | 37 | """ 38 | 39 | def __respond_with_dict(self, data): 40 | """ 41 | Builds a python dictionary from a json object 42 | 43 | :param data: the json object 44 | :returns: a nested dictionary 45 | """ 46 | response = {} 47 | if isinstance(data, list): 48 | temp_data, data = data, {} 49 | for key, value in enumerate(temp_data): 50 | data[key] = value 51 | 52 | data.pop('seq', None) 53 | for index, item in data.items(): 54 | values = item 55 | if isinstance(item, list) or isinstance(item, dict): 56 | values = self.__respond_with_dict(item) 57 | 58 | if isinstance(values, dict) and len(values) == 1: 59 | (key, values), = values.items() 60 | response[index] = values 61 | 62 | return response 63 | 64 | def __respond_with_list(self, data): 65 | """ 66 | Builds a python list from a json object 67 | 68 | :param data: the json object 69 | :returns: a nested list 70 | """ 71 | response = [] 72 | if isinstance(data, dict): 73 | data.pop('seq', None) 74 | data = list(data.values()) 75 | 76 | for item in data: 77 | values = item 78 | if isinstance(item, list) or isinstance(item, dict): 79 | values = self.__respond_with_list(item) 80 | 81 | if isinstance(values, list) and len(values) == 1: 82 | response.extend(values) 83 | else: 84 | response.append(values) 85 | 86 | return response 87 | 88 | def respond(self, data, format='json'): 89 | """ 90 | Converts a json object to a python datastructure based on 91 | specified format 92 | 93 | :param data: the json object 94 | :param format: python datastructure type. Defaults to: "json" 95 | :returns: a python specified object 96 | """ 97 | dispatchers = { 98 | "dict": self.__respond_with_dict, 99 | "list": self.__respond_with_list 100 | } 101 | 102 | if not dispatchers.get(format, False): 103 | return json.dumps(data) 104 | 105 | return dispatchers[format](data) 106 | -------------------------------------------------------------------------------- /vocabulary/version.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | VERSION = '1.0.4' 4 | RELEASE = '10' 5 | -------------------------------------------------------------------------------- /vocabulary/vocabulary.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | """ 5 | The MIT License (MIT) 6 | Copyright © 2015 Tasdik Rahman 7 | 8 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and 9 | associated documentation files (the “Software”), to deal in the Software without restriction, including 10 | without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 11 | copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the 12 | following conditions: 13 | 14 | The above copyright notice and this permission notice shall be included in all copies or substantial 15 | portions of the Software. 16 | 17 | THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT 18 | LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN 19 | NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 20 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 21 | SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 22 | """ 23 | 24 | import json 25 | import requests 26 | import contextlib 27 | import sys 28 | 29 | from .responselib import Response 30 | 31 | from .version import VERSION, RELEASE 32 | 33 | 34 | @contextlib.contextmanager 35 | def try_URL(message='Connection Lost'): 36 | try: 37 | yield 38 | except requests.exceptions.ConnectionError: 39 | print(message) 40 | 41 | 42 | class Vocabulary(object): 43 | """ 44 | | Private methods | Public methods | 45 | |:-----------------:|------------------| 46 | | __get_api_link() | meaning() | 47 | | __return_json() | synonym() | 48 | | __parse_content() | antonym() | 49 | | __clean_dict() | usage_example() | 50 | | | hyphenation() | 51 | | | part_of_speech() | 52 | | | pronunciation() | 53 | | | translate() | 54 | """ 55 | 56 | __version__ = VERSION 57 | __release__ = RELEASE 58 | __author__ = "Tasdik Rahman" 59 | 60 | @staticmethod 61 | def __get_api_link(api): 62 | """ 63 | returns API links 64 | 65 | :param api: possible values are "wordnik", "glosbe", "urbandict", "bighugelabs" 66 | :returns: returns API links to urbandictionary, wordnik, glosbe, bighugelabs 67 | """ 68 | api_name2links = { 69 | "wordnik": "http://api.wordnik.com/v4/word.json/{word}/{action}?api_key=1e940957819058fe3ec7c59d43c09504b400110db7faa0509", 70 | "glosbe": "https://glosbe.com/gapi/translate?from={source_lang}&dest={dest_lang}&format=json&pretty=true&phrase={word}", 71 | "urbandict": "http://api.urbandictionary.com/v0/{action}?term={word}", 72 | "bighugelabs": "http://words.bighugelabs.com/api/2/eb4e57bb2c34032da68dfeb3a0578b68/{word}/json" 73 | } 74 | 75 | return api_name2links.get(api, False) 76 | 77 | @staticmethod 78 | def __return_json(url): 79 | """ 80 | Returns JSON data which is returned by querying the API service 81 | Called by 82 | - meaning() 83 | - synonym() 84 | 85 | :param url: the complete formatted url which is then queried using requests 86 | :returns: json content being fed by the API 87 | """ 88 | with try_URL(): 89 | response = requests.get(url) 90 | if response.status_code == 200: 91 | return response.json() 92 | else: 93 | return False 94 | 95 | @staticmethod 96 | def __parse_content(tuc_content, content_to_be_parsed): 97 | """ 98 | parses the passed "tuc_content" for 99 | - meanings 100 | - synonym 101 | received by querying the glosbe API 102 | 103 | Called by 104 | - meaning() 105 | - synonym() 106 | 107 | :param tuc_content: passed on the calling Function. A list object 108 | :param content_to_be_parsed: data to be parsed from. 
Whether to parse "tuc" for meanings or synonyms 109 | :returns: returns a list which contains the parsed data from "tuc" 110 | """ 111 | initial_parsed_content = {} 112 | 113 | i = 0 114 | for content_dict in tuc_content: 115 | if content_to_be_parsed in content_dict.keys(): 116 | contents_raw = content_dict[content_to_be_parsed] 117 | if content_to_be_parsed == "phrase": 118 | # for 'phrase', 'contents_raw' is a dictionary 119 | initial_parsed_content[i] = contents_raw['text'] 120 | i += 1 121 | elif content_to_be_parsed == "meanings": 122 | # for 'meanings', 'contents_raw' is a list 123 | for meaning_content in contents_raw: 124 | initial_parsed_content[i] = meaning_content['text'] 125 | i += 1 126 | 127 | final_parsed_content = {} 128 | # removing duplicates(if any) from the dictionary 129 | for key, value in initial_parsed_content.items(): 130 | if value not in final_parsed_content.values(): 131 | final_parsed_content[key] = value 132 | 133 | # calling __clean_dict 134 | 135 | formatted_list = Vocabulary.__clean_dict(final_parsed_content) 136 | return formatted_list 137 | 138 | @staticmethod 139 | def __clean_dict(dictionary): 140 | """ 141 | Takes the dictionary from __parse_content() and creates a well formatted list 142 | 143 | :param dictionary: unformatted dict 144 | :returns: a list which contains dict's as it's elements 145 | """ 146 | key_dict = {} 147 | value_dict = {} 148 | final_list = [] 149 | for key in dictionary.keys(): 150 | key_dict[key] = "seq" 151 | 152 | for value in dictionary.values(): 153 | value_dict[value] = "text" 154 | 155 | for (key1, value1), (key2, value2) in zip(key_dict.items(), value_dict.items()): 156 | final_list.append({value1: int(key1), value2: key2}) 157 | 158 | return final_list 159 | 160 | @staticmethod 161 | def meaning(phrase, source_lang="en", dest_lang="en", format="json"): 162 | """ 163 | make calls to the glosbe API 164 | 165 | :param phrase: word for which meaning is to be found 166 | :param source_lang: Defaults to : "en" 167 | :param dest_lang: Defaults to : "en" For eg: "fr" for french 168 | :param format: response structure type. Defaults to: "json" 169 | :returns: returns a json object as str, False if invalid phrase 170 | """ 171 | base_url = Vocabulary.__get_api_link("glosbe") 172 | url = base_url.format(word=phrase, source_lang=source_lang, dest_lang=dest_lang) 173 | json_obj = Vocabulary.__return_json(url) 174 | 175 | if json_obj: 176 | try: 177 | tuc_content = json_obj["tuc"] # "tuc_content" is a "list" 178 | except KeyError: 179 | return False 180 | '''get meanings''' 181 | meanings_list = Vocabulary.__parse_content(tuc_content, "meanings") 182 | return Response().respond(meanings_list, format) 183 | # print(meanings_list) 184 | # return json.dumps(meanings_list) 185 | else: 186 | return False 187 | 188 | @staticmethod 189 | def synonym(phrase, source_lang="en", dest_lang="en", format="json"): 190 | """ 191 | Gets the synonym for the given word and returns them (if any found) 192 | Calls the glosbe API for getting the related synonym 193 | 194 | :param phrase: word for which synonym is to be found 195 | :param source_lang: Defaults to : "en" 196 | :param dest_lang: Defaults to : "en" 197 | :param format: response structure type. 
Defaults to: "json" 198 | :returns: returns a json object as str, False if invalid phrase 199 | """ 200 | base_url = Vocabulary.__get_api_link("glosbe") 201 | url = base_url.format(word=phrase, source_lang=source_lang, dest_lang=dest_lang) 202 | json_obj = Vocabulary.__return_json(url) 203 | if json_obj: 204 | try: 205 | tuc_content = json_obj["tuc"] # "tuc_content" is a "list" 206 | except KeyError: 207 | return False 208 | synonyms_list = Vocabulary.__parse_content(tuc_content, "phrase") 209 | if synonyms_list: 210 | # return synonyms_list 211 | # return json.dumps(synonyms_list) 212 | return Response().respond(synonyms_list, format) 213 | else: 214 | return False 215 | 216 | else: 217 | return False 218 | 219 | # TO-DO: 220 | # if this gives me no results, will query "bighugelabs" 221 | 222 | @staticmethod 223 | def translate(phrase, source_lang, dest_lang, format="json"): 224 | """ 225 | Gets the translations for a given word, and returns possibilites as a list 226 | Calls the glosbe API for getting the translation 227 | 228 | and languages should be specifed in 3-letter ISO 639-3 format, 229 | although many 2-letter codes (en, de, fr) will work. 230 | 231 | See http://en.wikipedia.org/wiki/List_of_ISO_639-3_codes for full list. 232 | 233 | :param phrase: word for which translation is being found 234 | :param source_lang: Translation from language 235 | :param dest_lang: Translation to language 236 | :param format: response structure type. Defaults to: "json" 237 | :returns: returns a json object as str, False if invalid phrase 238 | """ 239 | base_url = Vocabulary.__get_api_link("glosbe") 240 | url = base_url.format(word=phrase, source_lang=source_lang, dest_lang=dest_lang) 241 | json_obj = Vocabulary.__return_json(url) 242 | if json_obj: 243 | try: 244 | tuc_content = json_obj["tuc"] # "tuc_content" is a "list" 245 | except KeyError: 246 | return False 247 | translations_list = Vocabulary.__parse_content(tuc_content, "phrase") 248 | if translations_list: 249 | # return synonyms_list 250 | # return json.dumps(translations_list) 251 | return Response().respond(translations_list, format) 252 | else: 253 | return False 254 | else: 255 | return False 256 | 257 | @staticmethod 258 | def antonym(phrase, format="json"): 259 | """ 260 | queries the bighugelabs API for the antonym. The results include 261 | - "syn" (synonym) 262 | - "ant" (antonym) 263 | - "rel" (related terms) 264 | - "sim" (similar terms) 265 | - "usr" (user suggestions) 266 | 267 | But currently parsing only the antonym as I have already done 268 | - synonym (using glosbe API) 269 | 270 | :param phrase: word for which antonym is to be found 271 | :param format: response structure type. 
Defaults to: "json" 272 | :returns: returns a json object 273 | :raises KeyError: returns False when no antonyms are found 274 | """ 275 | base_url = Vocabulary.__get_api_link("bighugelabs") 276 | url = base_url.format(word=phrase) 277 | json_obj = Vocabulary.__return_json(url) 278 | 279 | if not json_obj: 280 | return False 281 | 282 | result = [] 283 | visited = {} 284 | idx = 0 285 | for key in json_obj.keys(): 286 | antonyms = json_obj[key].get('ant', False) 287 | if not antonyms: 288 | continue 289 | 290 | for antonym in antonyms: 291 | if visited.get(antonym, False): 292 | continue 293 | 294 | result.append({'seq': idx, 'text': antonym}) 295 | idx += 1 296 | visited[antonym] = True 297 | 298 | if not result: 299 | return False 300 | 301 | return Response().respond(result, format) 302 | 303 | @staticmethod 304 | def part_of_speech(phrase, format='json'): 305 | """ 306 | querrying Wordnik's API for knowing whether the word is a noun, adjective and the like 307 | 308 | :params phrase: word for which part_of_speech is to be found 309 | :param format: response structure type. Defaults to: "json" 310 | :returns: returns a json object as str, False if invalid phrase 311 | """ 312 | # We get a list object as a return value from the Wordnik API 313 | base_url = Vocabulary.__get_api_link("wordnik") 314 | url = base_url.format(word=phrase.lower(), action="definitions") 315 | json_obj = Vocabulary.__return_json(url) 316 | 317 | if not json_obj: 318 | return False 319 | 320 | result = [] 321 | for idx, obj in enumerate(json_obj): 322 | text = obj.get('partOfSpeech', None) 323 | example = obj.get('text', None) 324 | result.append({"seq": idx, "text": text, "example": example}) 325 | 326 | return Response().respond(result, format) 327 | 328 | @staticmethod 329 | def usage_example(phrase, format='json'): 330 | """Takes the source phrase and queries it to the urbandictionary API 331 | 332 | :params phrase: word for which usage_example is to be found 333 | :param format: response structure type. Defaults to: "json" 334 | :returns: returns a json object as str, False if invalid phrase 335 | """ 336 | base_url = Vocabulary.__get_api_link("urbandict") 337 | url = base_url.format(action="define", word=phrase) 338 | word_examples = {} 339 | json_obj = Vocabulary.__return_json(url) 340 | if json_obj: 341 | examples_list = json_obj["list"] 342 | for i, example in enumerate(examples_list): 343 | if example["thumbs_up"] > example["thumbs_down"]: 344 | word_examples[i] = example["example"].replace("\r", "").replace("\n", "") 345 | if word_examples: 346 | # reforamatting "word_examples" using "__clean_dict()" 347 | # return json.dumps(Vocabulary.__clean_dict(word_examples)) 348 | # return Vocabulary.__clean_dict(word_examples) 349 | return Response().respond(Vocabulary.__clean_dict(word_examples), format) 350 | else: 351 | return False 352 | else: 353 | return False 354 | 355 | @staticmethod 356 | def pronunciation(phrase, format='json'): 357 | """ 358 | Gets the pronunciation from the Wordnik API 359 | 360 | :params phrase: word for which pronunciation is to be found 361 | :param format: response structure type. 
Defaults to: "json" 362 | :returns: returns a list object, False if invalid phrase 363 | """ 364 | base_url = Vocabulary.__get_api_link("wordnik") 365 | url = base_url.format(word=phrase.lower(), action="pronunciations") 366 | json_obj = Vocabulary.__return_json(url) 367 | if json_obj: 368 | ''' 369 | Refer : http://stackoverflow.com/q/18337407/3834059 370 | ''' 371 | ## TODO: Fix the unicode issue mentioned in 372 | ## https://github.com/tasdikrahman/vocabulary#181known-issues 373 | for idx, obj in enumerate(json_obj): 374 | obj['seq'] = idx 375 | 376 | if sys.version_info[:2] <= (2, 7): ## python2 377 | # return json_obj 378 | return Response().respond(json_obj, format) 379 | else: # python3 380 | # return json.loads(json.dumps(json_obj, ensure_ascii=False)) 381 | return Response().respond(json_obj, format) 382 | else: 383 | return False 384 | 385 | @staticmethod 386 | def hyphenation(phrase, format='json'): 387 | """ 388 | Returns back the stress points in the "phrase" passed 389 | 390 | :param phrase: word for which hyphenation is to be found 391 | :param format: response structure type. Defaults to: "json" 392 | :returns: returns a json object as str, False if invalid phrase 393 | """ 394 | base_url = Vocabulary.__get_api_link("wordnik") 395 | url = base_url.format(word=phrase.lower(), action="hyphenation") 396 | json_obj = Vocabulary.__return_json(url) 397 | if json_obj: 398 | # return json.dumps(json_obj) 399 | # return json_obj 400 | return Response().respond(json_obj, format) 401 | else: 402 | return False 403 | --------------------------------------------------------------------------------