├── .gitignore
├── README.md
├── README_ENG.md
├── examples
│   ├── 1_non_type.py
│   ├── 2_collector_type.py
│   ├── 3_specific_type.py
│   ├── 4_common_type.py
│   └── README.md
└── src
    ├── generator3
    │   ├── __init__.py
    │   ├── generator3.py
    │   └── pycharm_generator_utils
    │       ├── __init__.py
    │       ├── clr_tools.py
    │       ├── constants.py
    │       ├── module_redeclarator.py
    │       ├── pyparsing.py
    │       ├── pyparsing_py3.py
    │       └── util_methods.py
    ├── main.py
    ├── make_stubs.py
    └── utils
        ├── __init__.py
        ├── docopt.py
        ├── helper.py
        └── logger.py

/.gitignore:
--------------------------------------------------------------------------------
 1 | logs/
 2 | stubs/
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
 1 | Код репозитория основан на [ironpython-stubs](https://github.com/gtalarico/ironpython-stubs).
 2 | Выражаю [gtalarico](https://github.com/gtalarico) бесконечную благодарность за вклад в развитие сообщества разработчиков скриптов и плагинов для инженерных программ.
 3 | Я не стал делать форк его репозитория, потому что хочу внести кардинальные изменения,
 4 | к тому же, автор уже давно им не занимается, а преемник так и не был найден.
 5 | 
 6 | Так же, как и в базовом репозитории, "заглушки" могут быть использованы в различных текстовых редакторах, но все примеры будут рассматриваться в [visual studio code](https://code.visualstudio.com/).
 7 | 
 8 | На самом деле сгенерированный код нельзя называть "заглушками", но данное название уже прижилось в сообществе. Если хотите узнать про настоящие заглушки для python кода, то почитайте [тут](https://mypy.readthedocs.io/en/stable/stubs.html).
 9 | 
10 | ## Для пользователей
11 | Сгенерированные заглушки ищите в [релизах](https://github.com/BIMOpenGroup/revitapistubs/releases).
12 | Как вы можете заметить, они разделены на `common` и `revit`.
13 | В `common` хранятся заглушки системных библиотек Windows, пакеты IronPython, [Dynamo](https://github.com/DynamoDS) и [RPS](https://github.com/architecture-building-systems/revitpythonshell).
14 | В `revit` хранятся заглушки для нескольких версий библиотек `Revit API`.
15 | Чтобы не было конфликтов у анализатора кода, подключать заглушки надо с помощью двух путей.
16 | ```json
17 | // %APPDATA%\Code\User\settings.json
18 | "python.analysis.extraPaths": [
19 |     "ВАШ_ПУТЬ\\stubs\\common",
20 |     "ВАШ_ПУТЬ\\stubs\\revit\\2019"
21 | ],
22 | ```
23 | 
24 | При изменении версии Revit для конкретного проекта придётся повторить оба пути.
25 | ```json
26 | // .vscode/settings.json
27 | "python.analysis.extraPaths": [
28 |     "ВАШ_ПУТЬ\\stubs\\common",
29 |     "ВАШ_ПУТЬ\\stubs\\revit\\2021"
30 | ],
31 | ```
32 | 
33 | Подробнее о подключении дополнительных модулей читайте [тут](https://code.visualstudio.com/docs/python/editing).
34 | Примеры использования заглушек смотрите [тут](https://github.com/BIMOpenGroup/RevitAPIStubs/tree/master/examples).
35 | 
36 | Чтобы заглушки были максимально эффективными, их надо допиливать руками, но об этом подробнее в примерах.
37 | Надеюсь, в будущем это будет исправлено.
38 | Если выйдет новая версия Revit API, но при этом код генерации не изменится, то будет обновлён архив последнего релиза.
39 | 
40 | ## Для контрибьюторов
41 | Любая помощь приветствуется!
42 | Пишите вопросы, предложения, замечания. Даже если я не смогу что-то поправить вовремя, то, возможно, кто-то из сообщества поможет.
43 | 
44 | ### Общая информация
45 | Генерация заглушек возможна через консоль, и это выглядит предпочтительным и гибким вариантом, но для простоты я выбрал [RevitPythonShell](https://github.com/architecture-building-systems/revitpythonshell).
46 | Например, не надо заботиться о дополнительных зависимостях для `RevitAPIUI` или требовать пути для генерации,
47 | просто открываем нужный нам Revit и запускаем скрипт.
48 | К тому же, данный репозиторий создан для генерации заглушек только для Revit.
49 | 
50 | Из-за того, что `Pylance`, в отличие от `Jedi`, достаточно производительный, заглушки больше не нужно "минимизировать".
51 | Раньше для этого требовалось запускать скрипт, который обрабатывал сгенерированные заглушки и формировал `stubs.min`, теперь этого не требуется.
52 | 
53 | Решил разместить заглушки в релизах, потому что нет смысла хранить их в репозитории.
54 | Единственная причина хранения их в репозитории - если кто-то будет их обновлять/исправлять руками. Но это очень большой и ненужный труд. Лучше попробовать улучшить генератор.
55 | 
56 | ### Как генерировать заглушки
57 | В оболочке RPS запускаем `src/main.py`, изменив при этом путь к `src_dir`.
58 | Если запускать через кастомную кнопку на панели RPS, то в коде ничего менять не надо.
59 | Заглушки сгенерируются в корне репозитория.
60 | 
61 | ### Правила
62 | Как и практически в любом опенсорсе, тут принята система [Forking Workflow](https://www.atlassian.com/git/tutorials/comparing-workflows/forking-workflow).
63 | Тезисно:
64 | - Делаем форк репозитория;
65 | - Создаём ветку от `master`;
66 | - Делаем `pull request` в `upstream/master`.
67 | 
68 | Правила именования веток:
69 | - Стиль kebab-case;
70 | - Первым словом идёт задача fix/feature/refactor и т.д. Последующими - краткое описание, либо номер issue.
71 |   fix-iss57 / feature-iss14 / refactor-generator.
72 | 
73 | Просьба добавить папки различных IDE в свой глобальный gitignore.
74 | 
75 | ### Текущие задачи
76 | На данный момент приоритетной задачей является улучшение кода генератора.
--------------------------------------------------------------------------------
/README_ENG.md:
--------------------------------------------------------------------------------
 1 | The repository code is based on [ironpython-stubs](https://github.com/gtalarico/ironpython-stubs).
 2 | I am immensely grateful to [gtalarico](https://github.com/gtalarico) for his contribution to the community of developers writing scripts and plug-ins for engineering software.
 3 | I did not fork his repository because I want to make sweeping changes;
 4 | besides, the author has not maintained it for a long time and no successor has been found.
 5 | 
 6 | Just like in the base repository, the "stubs" can be used in various text editors, but all examples here use [Visual Studio Code](https://code.visualstudio.com/).
 7 | 
 8 | Strictly speaking, the generated code is not really "stubs", but the name has taken root in the community. If you want to learn about real stubs for Python code, read [here](https://mypy.readthedocs.io/en/stable/stubs.html).
 9 | 
10 | ## For users
11 | You can find the generated stubs in the [releases](https://github.com/BIMOpenGroup/revitapistubs/releases).
12 | As you can see, they are divided into `common` and `revit`.
13 | `common` contains stubs for the Windows system libraries, the IronPython packages, [Dynamo](https://github.com/DynamoDS) and [RPS](https://github.com/architecture-building-systems/revitpythonshell).
14 | `revit` contains stubs for several versions of the `Revit API` libraries.
15 | To avoid confusing the code analyzer, add the stubs via two separate paths.
16 | ```json
17 | // %APPDATA%\Code\User\settings.json
18 | "python.analysis.extraPaths": [
19 |     "YOUR_PATH\\stubs\\common",
20 |     "YOUR_PATH\\stubs\\revit\\2019"
21 | ],
22 | ```
23 | 
24 | If you change the Revit version for a particular project, you will have to repeat both paths.
25 | ```json
26 | // .vscode/settings.json
27 | "python.analysis.extraPaths": [
28 |     "YOUR_PATH\\stubs\\common",
29 |     "YOUR_PATH\\stubs\\revit\\2021"
30 | ],
31 | ```
32 | 
33 | Read more about adding extra module paths [here](https://code.visualstudio.com/docs/python/editing).
34 | See examples of using the stubs [here](https://github.com/BIMOpenGroup/RevitAPIStubs/tree/master/examples).
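
A quick way to check that the paths are picked up is to open any `.py` file in the workspace and start typing one of the imports used throughout the examples below (the assembly name and namespace are taken from those examples; nothing new is assumed here):

```py
import clr
clr.AddReference("RevitAPI")   # resolved against the stubs; Revit itself is not needed for code analysis
from Autodesk.Revit import DB  # typing "DB." should now bring up completions from the revit stubs
```

If no completions appear, re-check both `extraPaths` entries and give `Pylance` a moment to re-index.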
35 | 
36 | To get the most out of the stubs, they have to be polished by hand; more on this in the examples.
37 | Hopefully this will be fixed in the future.
38 | If a new version of the Revit API is released but the generation code does not change, the archive in the latest release will be updated.
39 | 
40 | ## For contributors
41 | Any help is welcome!
42 | Feel free to post questions, suggestions and comments. Even if I cannot fix something promptly, someone from the community may be able to help.
43 | 
44 | ### General information
45 | Stub generation is possible from the console, and that looks like the more flexible option, but for simplicity I chose [RevitPythonShell](https://github.com/architecture-building-systems/revitpythonshell).
46 | For example, there is no need to take care of additional dependencies for `RevitAPIUI` or to ask for generation paths:
47 | just open the required Revit version and run the script.
48 | Besides, this repository is meant to generate stubs for Revit only.
49 | 
50 | Because `Pylance`, unlike `Jedi`, is quite fast, the stubs no longer need to be "minimized".
51 | Previously, a separate script had to be run to process the generated stubs and produce `stubs.min`; this is no longer required.
52 | 
53 | I decided to publish the stubs in releases, because there is no point in storing them in the repository.
54 | The only reason to keep them there would be if someone updated/fixed them by hand, but that is a huge and unnecessary effort; improving the generator is a better use of time.
55 | 
56 | ### How to generate stubs
57 | In the RPS shell, run `src/main.py` after adjusting the `src_dir` path.
58 | If you run it through a custom button on the RPS panel, nothing in the code needs to change.
59 | The stubs are generated in the root of the repository.
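
For a feel of what the output looks like, a class in the generated stubs has roughly the shape sketched below. This is an illustrative reconstruction based on the behaviour described in `examples/README.md` (each method keeps the documented signature in its docstring, but its body is just `pass`), not an excerpt from an actual release:

```py
# Rough shape of a generated stub class (illustrative reconstruction, not real generator output).
class FilteredElementCollector(object):
    def OfClass(self, type):
        """ OfClass(self: FilteredElementCollector, type: Type) -> FilteredElementCollector """
        pass

    def WhereElementIsElementType(self):
        """ WhereElementIsElementType(self: FilteredElementCollector) -> FilteredElementCollector """
        pass
```

Because the bodies return nothing, the analyzer cannot chain calls or type loop variables on its own; the examples show how to patch this by hand.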
2 | """ 3 | 4 | import clr 5 | clr.AddReference("RevitAPI") 6 | from Autodesk.Revit import DB 7 | from rps import doc # перед запуском надо закомментировать 8 | 9 | views = DB.FilteredElementCollector(doc) \ 10 | .OfClass(DB.View3D) \ 11 | .WhereElementIsElementType() 12 | 13 | for el in views: 14 | print(el) 15 | -------------------------------------------------------------------------------- /examples/2_collector_type.py: -------------------------------------------------------------------------------- 1 | """Определение типа переменной цикла через инициализацию переменной до цикла. 2 | 3 | На код это не сильно повлияет, так как переменная цикла всё равно создаётся в 4 | глобальной области видимости и нет ничего страшного что ей сначала присваивается тип. 5 | Заметьте, если убрать комментарий, то Pylance не сможет определить тип для переменной цикла. 6 | """ 7 | 8 | import clr 9 | clr.AddReference("RevitAPI") 10 | from Autodesk.Revit import DB 11 | from rps import doc # перед запуском надо закомментировать 12 | 13 | views = DB.FilteredElementCollector(doc) \ 14 | .OfClass(DB.View3D) \ 15 | .WhereElementIsNotElementType() 16 | 17 | view = DB.View3D # type: DB.View3D 18 | for view in views: 19 | # Свойство ViewName является устаревшим и не поддерживается в RevitAPI начиная с 2020 версии. 20 | # Можете это проверить переключив заглушки для данного проекта 21 | print(view.ViewName) 22 | -------------------------------------------------------------------------------- /examples/3_specific_type.py: -------------------------------------------------------------------------------- 1 | """Определение типа переменной цикла обозначив тип для последовательности. 2 | 3 | Это выглядит более органично, хоть и добавляет вызов не обязательного метода ToElements. 4 | """ 5 | 6 | import clr 7 | clr.AddReference("RevitAPI") 8 | from Autodesk.Revit import DB 9 | from rps import doc # перед запуском надо закомментировать 10 | 11 | views = DB.FilteredElementCollector(doc) \ 12 | .OfClass(DB.View3D) \ 13 | .WhereElementIsNotElementType() \ 14 | .ToElements() # type: list[DB.View3D] 15 | 16 | for view in views: 17 | # Свойство ViewName является устаревшим и не поддерживается в RevitAPI начиная с 2020 версии. 18 | # Можете это проверить переключив заглушки для данного проекта 19 | print(view.ViewName) 20 | -------------------------------------------------------------------------------- /examples/4_common_type.py: -------------------------------------------------------------------------------- 1 | """Определение типа переменной цикла обозначив тип для последовательности. 2 | 3 | Можно сделать приведение к более общему типу, так как 4 | View3D наследник View, а View наследник Element. 5 | Но будьте осторожны, не ошибитесь в выборе базового типа. 6 | """ 7 | 8 | import clr 9 | clr.AddReference("RevitAPI") 10 | from Autodesk.Revit import DB 11 | from rps import doc # перед запуском надо закомментировать 12 | 13 | views = DB.FilteredElementCollector(doc) \ 14 | .OfClass(DB.View3D) \ 15 | .WhereElementIsNotElementType() \ 16 | .ToElements() # type: list[DB.Element] 17 | 18 | for el in views: 19 | print(el.Name) 20 | -------------------------------------------------------------------------------- /examples/README.md: -------------------------------------------------------------------------------- 1 | На данный момент генератор заглушек не очень хорошо работает. Он не позволяет "раскрыть их потенциал". 2 | Но мы можем это исправить вручную в тех местах, где нам это надо. 
60 | 
61 | But most of the time we send the collector into a loop, so the stub code has to be fixed again.
62 | ```py
63 | # "FilteredElementCollector" is not iterable
64 | # "__iter__" method does not return an object  Pylance(reportGeneralTypeIssues)
65 | for el in views:
66 |     print(el)  # (variable) el: Unknown
67 | ```
68 | 
69 | The first problem can be solved by changing `pass` in the `__iter__` method to `return iter()` (a sketch of this edit follows below).
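
A minimal sketch of that edit is shown here. Note that at runtime `iter()` without arguments raises a `TypeError`; the stub is only read by the analyzer, but passing an empty tuple keeps the module importable just in case:

```py
class FilteredElementCollector(object):
    def __iter__(self):
        return iter(())  # was `pass`; enough for Pylance to treat the collector as iterable
```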
70 | The second problem, however, can be solved in several ways. All of them are covered by the example modules in this folder; here I will only describe how it works.
71 | 
72 | Python has had the concept of "type hints" for a long time.
73 | They look slightly different in Python 2 and Python 3.
74 | In Python 3 they are part of the code:
75 | ```py
76 | number: int = 10
77 | ```
78 | 
79 | In Python 2 they are a comment:
80 | ```py
81 | number = 10  # type: int
82 | ```
83 | 
84 | In both cases the code analyzer will know which type the variable holds, even if it could not infer it before. This is useful not only for type inference but also for catching an error when the data type stored in the variable changes. That can happen by accident and cause problems in your code.
85 | You can read more about type hints for Python 2 [here](https://mypy.readthedocs.io/en/stable/cheat_sheet.html).
86 | 
87 | >Yes, Pylance could already tell that `number` holds a number; the example is deliberately kept this simple so that it contains nothing extra.
88 | 
89 | As mentioned above, for the ways to solve the problem
90 | ```py
91 | for el in views:
92 |     print(el)  # (variable) el: Unknown
93 | ```
94 | see the modules in this folder.
95 | 
--------------------------------------------------------------------------------
/src/generator3/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/BIMOpenGroup/RevitAPIStubs/aa871e6e2400226076fe941020efbab0bce30906/src/generator3/__init__.py
--------------------------------------------------------------------------------
/src/generator3/generator3.py:
--------------------------------------------------------------------------------
 1 | # encoding: utf-8
 2 | import atexit
 3 | import zipfile
 4 | import shutil
 5 | 
 6 | # TODO: Move all CLR-specific functions to clr_tools
 7 | 
 8 | from pycharm_generator_utils.module_redeclarator import *
 9 | from pycharm_generator_utils.util_methods import *
10 | from pycharm_generator_utils.constants import *
11 | from pycharm_generator_utils.clr_tools import *
12 | 
13 | 
14 | debug_mode = False
15 | 
16 | 
17 | def redo_module(module_name, outfile, module_file_name, doing_builtins):
18 |     # gobject does 'del _gobject' in its __init__.py, so the chained attribute lookup code
19 |     # fails to find 'gobject._gobject'. thus we need to pull the module directly out of
20 |     # sys.modules
21 |     mod = sys.modules.get(module_name)
22 |     mod_path = module_name.split('.')
23 |     if not mod and sys.platform == 'cli':
24 |         # "import System.Collections" in IronPython 2.7 doesn't actually put System.Collections in sys.modules
25 |         # instead, sys.modules['System'] get set to a Microsoft.Scripting.Actions.NamespaceTracker and Collections can be
26 |         # accessed as its attribute
27 |         mod = sys.modules[mod_path[0]]
28 |         for component in mod_path[1:]:
29 |             try:
30 |                 mod = getattr(mod, component)
31 |             except AttributeError:
32 |                 mod = None
33 |                 report("Failed to find CLR module " + module_name)
34 |                 break
35 |     if mod:
36 |         action("restoring")
37 |         r = ModuleRedeclarator(mod, outfile, module_file_name, doing_builtins=doing_builtins)
38 |         r.redo(module_name, ".".join(mod_path[:-1]) in MODULES_INSPECT_DIR)
39 |         action("flushing")
40 |         r.flush()
41 |     else:
42 |         report("Failed to find imported module in sys.modules " + module_name)
43 | 
44 | 
45 | # find_binaries functionality
46 | def cut_binary_lib_suffix(path, f):
47 |     """
48 |     @param path where f lives
49 |     @param f file name of a possible binary lib file (no path)
50 |     @return f without a binary suffix (that is, an importable name) if path+f is indeed a binary lib, or None.
51 |     Note: if for .pyc or .pyo file a .py is found, None is returned.
52 | """ 53 | if not f.endswith(".pyc") and not f.endswith(".typelib") and not f.endswith(".pyo") and not f.endswith(".so") and not f.endswith(".pyd"): 54 | return None 55 | ret = None 56 | match = BIN_MODULE_FNAME_PAT.match(f) 57 | if match: 58 | ret = match.group(1) 59 | modlen = len('module') 60 | retlen = len(ret) 61 | if ret.endswith('module') and retlen > modlen and f.endswith('.so'): # what for? 62 | ret = ret[:(retlen - modlen)] 63 | if f.endswith('.pyc') or f.endswith('.pyo'): 64 | fullname = os.path.join(path, f[:-1]) # check for __pycache__ is made outside 65 | if os.path.exists(fullname): 66 | ret = None 67 | pat_match = TYPELIB_MODULE_FNAME_PAT.match(f) 68 | if pat_match: 69 | ret = "gi.repository." + pat_match.group(1) 70 | return ret 71 | 72 | 73 | def is_posix_skipped_module(path, f): 74 | if os.name == 'posix': 75 | name = os.path.join(path, f) 76 | for mod in POSIX_SKIP_MODULES: 77 | if name.endswith(mod): 78 | return True 79 | return False 80 | 81 | 82 | def is_mac_skipped_module(path, f): 83 | fullname = os.path.join(path, f) 84 | m = MAC_STDLIB_PATTERN.match(fullname) 85 | if not m: return 0 86 | relpath = m.group(2) 87 | for module in MAC_SKIP_MODULES: 88 | if relpath.startswith(module): return 1 89 | return 0 90 | 91 | 92 | def is_skipped_module(path, f): 93 | return is_mac_skipped_module(path, f) or is_posix_skipped_module(path, f[:f.rindex('.')]) or 'pynestkernel' in f 94 | 95 | 96 | def is_module(d, root): 97 | return (os.path.exists(os.path.join(root, d, "__init__.py")) or 98 | os.path.exists(os.path.join(root, d, "__init__.pyc")) or 99 | os.path.exists(os.path.join(root, d, "__init__.pyo"))) 100 | 101 | 102 | def walk_python_path(path): 103 | for root, dirs, files in os.walk(path): 104 | if root.endswith('__pycache__'): 105 | continue 106 | dirs_copy = list(dirs) 107 | for d in dirs_copy: 108 | if d.endswith('__pycache__') or not is_module(d, root): 109 | dirs.remove(d) 110 | # some files show up but are actually non-existent symlinks 111 | yield root, [f for f in files if os.path.exists(os.path.join(root, f))] 112 | 113 | 114 | def list_binaries(paths): 115 | """ 116 | Finds binaries in the given list of paths. 117 | Understands nested paths, as sys.paths have it (both "a/b" and "a/b/c"). 118 | Tries to be case-insensitive, but case-preserving. 119 | @param paths: list of paths. 120 | @return: dict[module_name, full_path] 121 | """ 122 | SEP = os.path.sep 123 | res = {} # {name.upper(): (name, full_path)} # b/c windows is case-oblivious 124 | if not paths: 125 | return {} 126 | if IS_JAVA: # jython can't have binary modules 127 | return {} 128 | paths = sorted_no_case(paths) 129 | for path in paths: 130 | if path == os.path.dirname(sys.argv[0]): continue 131 | for root, files in walk_python_path(path): 132 | cutpoint = path.rfind(SEP) 133 | if cutpoint > 0: 134 | preprefix = path[(cutpoint + len(SEP)):] + '.' 135 | else: 136 | preprefix = '' 137 | prefix = root[(len(path) + len(SEP)):].replace(SEP, '.') 138 | if prefix: 139 | prefix += '.' 
140 | note("root: %s path: %s prefix: %s preprefix: %s", root, path, prefix, preprefix) 141 | for f in files: 142 | name = cut_binary_lib_suffix(root, f) 143 | if name and not is_skipped_module(root, f): 144 | note("cutout: %s", name) 145 | if preprefix: 146 | note("prefixes: %s %s", prefix, preprefix) 147 | pre_name = (preprefix + prefix + name).upper() 148 | if pre_name in res: 149 | res.pop(pre_name) # there might be a dupe, if paths got both a/b and a/b/c 150 | note("done with %s", name) 151 | the_name = prefix + name 152 | file_path = os.path.join(root, f) 153 | 154 | res[the_name.upper()] = (the_name, file_path, os.path.getsize(file_path), int(os.stat(file_path).st_mtime)) 155 | return list(res.values()) 156 | 157 | 158 | def list_sources(paths, target_path): 159 | #noinspection PyBroadException 160 | try: 161 | for path in paths: 162 | if path == os.path.dirname(sys.argv[0]): continue 163 | 164 | path = os.path.normpath(path) 165 | 166 | target_dir_path = '' 167 | extra_info = '' 168 | 169 | if path.endswith('.egg') and os.path.isfile(path): 170 | if target_path is not None: 171 | extra_info = '\t' + os.path.basename(path) 172 | say("%s\t%s\t%d%s", path, path, os.path.getsize(path), extra_info) 173 | else: 174 | target_dir_path = compute_path_hash(path) 175 | if target_path is not None: 176 | extra_info = '\t' + target_dir_path 177 | 178 | for root, files in walk_python_path(path): 179 | for name in files: 180 | if name.endswith('.py'): 181 | file_path = os.path.join(root, name) 182 | if target_path is not None: 183 | relpath = os.path.relpath(root, path) 184 | folder_path = os.path.join(target_path, target_dir_path, relpath) 185 | if not os.path.exists(folder_path): 186 | os.makedirs(folder_path) 187 | shutil.copyfile(file_path, os.path.join(folder_path, name)) 188 | say("%s\t%s\t%d%s", os.path.normpath(file_path), path, os.path.getsize(file_path), extra_info) 189 | say('END') 190 | sys.stdout.flush() 191 | except: 192 | import traceback 193 | 194 | traceback.print_exc() 195 | sys.exit(1) 196 | 197 | 198 | def compute_path_hash(path): 199 | # computes hash string of provided path 200 | h = 0 201 | for c in path: 202 | h = (31 * h + ord(c)) & 0xFFFFFFFF 203 | h = ((h + 0x80000000) & 0xFFFFFFFF) - 0x80000000 204 | return str(h) 205 | 206 | 207 | # noinspection PyBroadException 208 | def zip_sources(zip_path): 209 | if not os.path.exists(zip_path): 210 | os.makedirs(zip_path) 211 | 212 | zip_filename = os.path.normpath(os.path.sep.join([zip_path, "skeletons.zip"])) 213 | 214 | try: 215 | zip = zipfile.ZipFile(zip_filename, 'w', zipfile.ZIP_DEFLATED) 216 | except: 217 | zip = zipfile.ZipFile(zip_filename, 'w') 218 | 219 | try: 220 | try: 221 | while True: 222 | line = sys.stdin.readline() 223 | line = line.strip() 224 | 225 | if line == '-': 226 | break 227 | 228 | if line: 229 | # This line will break the split: 230 | # /.../dist-packages/setuptools/script template (dev).py setuptools/script template (dev).py 231 | split_items = line.split() 232 | if len(split_items) > 2: 233 | match_two_files = re.match(r'^(.+\.py)\s+(.+\.py)$', line) 234 | if not match_two_files: 235 | report("Error(zip_sources): invalid line '%s'" % line) 236 | continue 237 | split_items = match_two_files.group(1, 2) 238 | (path, arcpath) = split_items 239 | zip.write(path, arcpath) 240 | else: 241 | # busy waiting for input from PyCharm... 
242 | time.sleep(0.10) 243 | say('OK: ' + zip_filename) 244 | sys.stdout.flush() 245 | except: 246 | import traceback 247 | 248 | traceback.print_exc() 249 | say('Error creating archive.') 250 | 251 | sys.exit(1) 252 | finally: 253 | zip.close() 254 | 255 | 256 | # command-line interface 257 | # noinspection PyBroadException 258 | def process_one(name, mod_file_name, doing_builtins, subdir, quiet=False): 259 | """ 260 | Processes a single module named name defined in file_name (autodetect if not given). 261 | Returns True on success. 262 | """ 263 | if has_regular_python_ext(name): 264 | report("Ignored a regular Python file %r", name) 265 | return True 266 | if not quiet: 267 | say(name) 268 | sys.stdout.flush() 269 | action("doing nothing") 270 | 271 | try: 272 | fname = build_output_name(subdir, name) 273 | action("opening %r", fname) 274 | old_modules = list(sys.modules.keys()) 275 | imported_module_names = [] 276 | 277 | class MyFinder: 278 | #noinspection PyMethodMayBeStatic 279 | def find_module(self, fullname, path=None): 280 | if fullname != name: 281 | imported_module_names.append(fullname) 282 | return None 283 | 284 | my_finder = None 285 | if hasattr(sys, 'meta_path'): 286 | my_finder = MyFinder() 287 | sys.meta_path.append(my_finder) 288 | else: 289 | imported_module_names = None 290 | 291 | action("importing") 292 | __import__(name) # sys.modules will fill up with what we want 293 | 294 | if my_finder: 295 | sys.meta_path.remove(my_finder) 296 | if imported_module_names is None: 297 | imported_module_names = [m for m in sys.modules.keys() if m not in old_modules] 298 | 299 | redo_module(name, fname, mod_file_name, doing_builtins) 300 | # The C library may have called Py_InitModule() multiple times to define several modules (gtk._gtk and gtk.gdk); 301 | # restore all of them 302 | path = name.split(".") 303 | redo_imports = not ".".join(path[:-1]) in MODULES_INSPECT_DIR 304 | if imported_module_names and redo_imports: 305 | for m in sys.modules.keys(): 306 | if m.startswith("pycharm_generator_utils"): continue 307 | action("looking at possible submodule %r", m) 308 | # if module has __file__ defined, it has Python source code and doesn't need a skeleton 309 | if m not in old_modules and m not in imported_module_names and m != name and not hasattr( 310 | sys.modules[m], '__file__'): 311 | if not quiet: 312 | say(m) 313 | sys.stdout.flush() 314 | fname = build_output_name(subdir, m) 315 | action("opening %r", fname) 316 | try: 317 | redo_module(m, fname, mod_file_name, doing_builtins) 318 | finally: 319 | action("closing %r", fname) 320 | except: 321 | exctype, value = sys.exc_info()[:2] 322 | import pdb; pdb.set_trace() 323 | msg = "Failed to process %r while %s: %s" 324 | args = name, CURRENT_ACTION, str(value) 325 | report(msg, *args) 326 | if debug_mode: 327 | if sys.platform == 'cli': 328 | import traceback 329 | traceback.print_exc(file=sys.stderr) 330 | raise 331 | return False 332 | return True 333 | 334 | 335 | def get_help_text(): 336 | return ( 337 | #01234567890123456789012345678901234567890123456789012345678901234567890123456789 338 | 'Generates interface skeletons for python modules.' '\n' 339 | 'Usage: ' '\n' 340 | ' generator [options] [module_name [file_name]]' '\n' 341 | ' generator [options] -L ' '\n' 342 | 'module_name is fully qualified, and file_name is where the module is defined.' '\n' 343 | 'E.g. foo.bar /usr/lib/python/foo_bar.so' '\n' 344 | 'For built-in modules file_name is not provided.' 
'\n' 345 | 'Output files will be named as modules plus ".py" suffix.' '\n' 346 | 'Normally every name processed will be printed and stdout flushed.' '\n' 347 | 'directory_list is one string separated by OS-specific path separtors.' '\n' 348 | '\n' 349 | 'Options are:' '\n' 350 | ' -h -- prints this help message.' '\n' 351 | ' -d dir -- output dir, must be writable. If not given, current dir is used.' '\n' 352 | ' -b -- use names from sys.builtin_module_names' '\n' 353 | ' -q -- quiet, do not print anything on stdout. Errors still go to stderr.' '\n' 354 | ' -x -- die on exceptions with a stacktrace; only for debugging.' '\n' 355 | ' -v -- be verbose, print lots of debug output to stderr' '\n' 356 | ' -c modules -- import CLR assemblies with specified names' '\n' 357 | ' -p -- run CLR profiler ' '\n' 358 | ' -s path_list -- add paths to sys.path before run; path_list lists directories' '\n' 359 | ' separated by path separator char, e.g. "c:\\foo;d:\\bar;c:\\with space"' '\n' 360 | ' -L -- print version and then a list of binary module files found ' '\n' 361 | ' on sys.path and in directories in directory_list;' '\n' 362 | ' lines are "qualified.module.name /full/path/to/module_file.{pyd,dll,so}"' '\n' 363 | ' -S -- lists all python sources found in sys.path and in directories in directory_list\n' 364 | ' -z archive_name -- zip files to archive_name. Accepts files to be archived from stdin in format ' 365 | ) 366 | 367 | 368 | # if __name__ == "__main__": 369 | # from getopt import getopt 370 | 371 | # helptext = get_help_text() 372 | # opts, args = getopt(sys.argv[1:], "d:hbqxvc:ps:C:LSz") 373 | # opts = dict(opts) 374 | 375 | # quiet = '-q' in opts 376 | # _is_verbose = '-v' in opts 377 | # subdir = opts.get('-d', '') 378 | 379 | # if not opts or '-h' in opts: 380 | # say(helptext) 381 | # sys.exit(0) 382 | 383 | # if '-L' not in opts and '-b' not in opts and '-S' not in opts and not args: 384 | # report("Neither -L nor -b nor -S nor any module name given") 385 | # sys.exit(1) 386 | 387 | # if "-x" in opts: 388 | # debug_mode = True 389 | 390 | # # patch sys.path? 391 | # extra_path = opts.get('-s', None) 392 | # if extra_path: 393 | # source_dirs = extra_path.split(os.path.pathsep) 394 | # for p in source_dirs: 395 | # if p and p not in sys.path: 396 | # sys.path.append(p) # we need this to make things in additional dirs importable 397 | # note("Altered sys.path: %r", sys.path) 398 | 399 | # # find binaries? 
400 | # if "-L" in opts: 401 | # if len(args) > 0: 402 | # report("Expected no args with -L, got %d args", len(args)) 403 | # sys.exit(1) 404 | # say(VERSION) 405 | # results = list(list_binaries(sys.path)) 406 | # results.sort() 407 | # for name, path, size, last_modified in results: 408 | # say("%s\t%s\t%d\t%d", name, path, size, last_modified) 409 | # sys.exit(0) 410 | 411 | # if "-S" in opts: 412 | # if len(args) > 0: 413 | # report("Expected no args with -S, got %d args", len(args)) 414 | # sys.exit(1) 415 | # say(VERSION) 416 | # list_sources(sys.path, opts.get('-C', None)) 417 | # sys.exit(0) 418 | 419 | # if "-z" in opts: 420 | # if len(args) != 1: 421 | # report("Expected 1 arg with -z, got %d args", len(args)) 422 | # sys.exit(1) 423 | # zip_sources(args[0]) 424 | # sys.exit(0) 425 | 426 | # # build skeleton(s) 427 | 428 | # timer = Timer() 429 | # # determine names 430 | # if '-b' in opts: 431 | # if args: 432 | # report("No names should be specified with -b") 433 | # sys.exit(1) 434 | # names = list(sys.builtin_module_names) 435 | # if not BUILTIN_MOD_NAME in names: 436 | # names.append(BUILTIN_MOD_NAME) 437 | # if '__main__' in names: 438 | # names.remove('__main__') # we don't want ourselves processed 439 | # ok = True 440 | # for name in names: 441 | # ok = process_one(name, None, True, subdir) and ok 442 | # if not ok: 443 | # sys.exit(1) 444 | 445 | # else: 446 | # if len(args) > 2: 447 | # report("Only module_name or module_name and file_name should be specified; got %d args", len(args)) 448 | # sys.exit(1) 449 | # name = args[0] 450 | # if len(args) == 2: 451 | # mod_file_name = args[1] 452 | # else: 453 | # mod_file_name = None 454 | 455 | # if sys.platform == 'cli': 456 | # #noinspection PyUnresolvedReferences 457 | # import clr 458 | 459 | # refs = opts.get('-c', '') 460 | # if refs: 461 | # for ref in refs.split(';'): clr.AddReferenceByPartialName(ref) 462 | 463 | # if '-p' in opts: 464 | # atexit.register(print_profile) 465 | 466 | # # We take module name from import statement 467 | # name = get_namespace_by_name(name) 468 | 469 | # if not process_one(name, mod_file_name, False, subdir): 470 | # sys.exit(1) 471 | 472 | # say("Generation completed in %d ms", timer.elapsed()) 473 | -------------------------------------------------------------------------------- /src/generator3/pycharm_generator_utils/__init__.py: -------------------------------------------------------------------------------- 1 | __author__ = 'ktisha' 2 | -------------------------------------------------------------------------------- /src/generator3/pycharm_generator_utils/clr_tools.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | """ 3 | .NET (CLR) specific functions 4 | """ 5 | __author__ = 'Ilya.Kazakevich' 6 | 7 | 8 | def get_namespace_by_name(object_name): 9 | """ 10 | Gets namespace for full object name. Sometimes last element of name is module while it may be class. 11 | For System.Console returns System, for System.Web returns System.Web. 12 | Be sure all required assemblies are loaded (i.e. clr.AddRef.. 
is called) 13 | :param object_name: name to parse 14 | :return: namespace 15 | """ 16 | (imported_object, object_name) = _import_first(object_name) 17 | parts = object_name.partition(".") 18 | first_part = parts[0] 19 | remain_part = parts[2] 20 | 21 | while remain_part and type(_get_attr_by_name(imported_object, remain_part)) is type: # While we are in class 22 | remain_part = remain_part.rpartition(".")[0] 23 | 24 | if remain_part: 25 | return first_part + "." + remain_part 26 | else: 27 | return first_part 28 | 29 | 30 | def _import_first(object_name): 31 | """ 32 | Some times we can not import module directly. For example, Some.Class.InnerClass could not be imported: you need to import "Some.Class" 33 | or even "Some" instead. This function tries to find part of name that could be loaded 34 | 35 | :param object_name: name in dotted notation like "Some.Function.Here" 36 | :return: (imported_object, object_name): tuple with object and its name 37 | """ 38 | while object_name: 39 | try: 40 | return (__import__(object_name, globals=[], locals=[], fromlist=[]), object_name) 41 | except ImportError: 42 | object_name = object_name.rpartition(".")[0] # Remove rightest part 43 | raise Exception("No module name found in name " + object_name) 44 | 45 | 46 | def _get_attr_by_name(obj, name): 47 | """ 48 | Accepts chain of attributes in dot notation like "some.property.name" and gets them on object 49 | :param obj: object to introspec 50 | :param name: attribute name 51 | :return attribute 52 | 53 | >>> str(_get_attr_by_name("A", "__class__.__class__")) 54 | "" 55 | 56 | >>> str(_get_attr_by_name("A", "__class__.__len__.__class__")) 57 | "" 58 | """ 59 | result = obj 60 | parts = name.split('.') 61 | for part in parts: 62 | result = getattr(result, part) 63 | return result 64 | -------------------------------------------------------------------------------- /src/generator3/pycharm_generator_utils/constants.py: -------------------------------------------------------------------------------- 1 | import os 2 | import re 3 | import types 4 | import sys 5 | import string 6 | import time 7 | 8 | # !!! Don't forget to update VERSION and required_gen_version if necessary !!! 
9 | VERSION = "1.145" 10 | 11 | OUT_ENCODING = 'utf-8' 12 | 13 | version = ( 14 | (sys.hexversion & (0xff << 24)) >> 24, 15 | (sys.hexversion & (0xff << 16)) >> 16 16 | ) 17 | 18 | if version[0] >= 3: 19 | #noinspection PyUnresolvedReferences 20 | import builtins as the_builtins 21 | 22 | string = "".__class__ 23 | 24 | STR_TYPES = (getattr(the_builtins, "bytes"), str) 25 | 26 | NUM_TYPES = (int, float) 27 | SIMPLEST_TYPES = NUM_TYPES + STR_TYPES + (None.__class__,) 28 | EASY_TYPES = NUM_TYPES + STR_TYPES + (None.__class__, dict, tuple, list) 29 | 30 | def the_exec(source, context): 31 | exec (source, context) 32 | 33 | else: # < 3.0 34 | import __builtin__ as the_builtins 35 | 36 | STR_TYPES = (getattr(the_builtins, "unicode"), str) 37 | 38 | NUM_TYPES = (int, long, float) 39 | SIMPLEST_TYPES = NUM_TYPES + STR_TYPES + (types.NoneType,) 40 | EASY_TYPES = NUM_TYPES + STR_TYPES + (types.NoneType, dict, tuple, list) 41 | 42 | def the_exec(source, context): 43 | #noinspection PyRedundantParentheses 44 | exec (source) in context 45 | 46 | if version[0] == 2 and version[1] < 4: 47 | HAS_DECORATORS = False 48 | 49 | def lstrip(s, prefix): 50 | i = 0 51 | while s[i] == prefix: 52 | i += 1 53 | return s[i:] 54 | 55 | else: 56 | HAS_DECORATORS = True 57 | lstrip = string.lstrip 58 | 59 | # return type inference helper table 60 | INT_LIT = '0' 61 | FLOAT_LIT = '0.0' 62 | DICT_LIT = '{}' 63 | LIST_LIT = '[]' 64 | TUPLE_LIT = '()' 65 | BOOL_LIT = 'False' 66 | RET_TYPE = {# {'type_name': 'value_string'} lookup table 67 | # chaining 68 | "self": "self", 69 | "self.": "self", 70 | # int 71 | "int": INT_LIT, 72 | "Int": INT_LIT, 73 | "integer": INT_LIT, 74 | "Integer": INT_LIT, 75 | "short": INT_LIT, 76 | "long": INT_LIT, 77 | "number": INT_LIT, 78 | "Number": INT_LIT, 79 | # float 80 | "float": FLOAT_LIT, 81 | "Float": FLOAT_LIT, 82 | "double": FLOAT_LIT, 83 | "Double": FLOAT_LIT, 84 | "floating": FLOAT_LIT, 85 | # boolean 86 | "bool": BOOL_LIT, 87 | "boolean": BOOL_LIT, 88 | "Bool": BOOL_LIT, 89 | "Boolean": BOOL_LIT, 90 | "True": BOOL_LIT, 91 | "true": BOOL_LIT, 92 | "False": BOOL_LIT, 93 | "false": BOOL_LIT, 94 | # list 95 | 'list': LIST_LIT, 96 | 'List': LIST_LIT, 97 | '[]': LIST_LIT, 98 | # tuple 99 | "tuple": TUPLE_LIT, 100 | "sequence": TUPLE_LIT, 101 | "Sequence": TUPLE_LIT, 102 | # dict 103 | "dict": DICT_LIT, 104 | "Dict": DICT_LIT, 105 | "dictionary": DICT_LIT, 106 | "Dictionary": DICT_LIT, 107 | "map": DICT_LIT, 108 | "Map": DICT_LIT, 109 | "hashtable": DICT_LIT, 110 | "Hashtable": DICT_LIT, 111 | "{}": DICT_LIT, 112 | # "objects" 113 | "object": "object()", 114 | } 115 | if version[0] < 3: 116 | UNICODE_LIT = 'u""' 117 | BYTES_LIT = '""' 118 | RET_TYPE.update({ 119 | 'string': BYTES_LIT, 120 | 'String': BYTES_LIT, 121 | 'str': BYTES_LIT, 122 | 'Str': BYTES_LIT, 123 | 'character': BYTES_LIT, 124 | 'char': BYTES_LIT, 125 | 'unicode': UNICODE_LIT, 126 | 'Unicode': UNICODE_LIT, 127 | 'bytes': BYTES_LIT, 128 | 'byte': BYTES_LIT, 129 | 'Bytes': BYTES_LIT, 130 | 'Byte': BYTES_LIT, 131 | }) 132 | DEFAULT_STR_LIT = BYTES_LIT 133 | # also, files: 134 | RET_TYPE.update({ 135 | 'file': "file('/dev/null')", 136 | }) 137 | 138 | def ensureUnicode(data): 139 | if type(data) == str: 140 | return data.decode(OUT_ENCODING, 'replace') 141 | return unicode(data) 142 | else: 143 | UNICODE_LIT = '""' 144 | BYTES_LIT = 'b""' 145 | RET_TYPE.update({ 146 | 'string': UNICODE_LIT, 147 | 'String': UNICODE_LIT, 148 | 'str': UNICODE_LIT, 149 | 'Str': UNICODE_LIT, 150 | 'character': UNICODE_LIT, 151 | 'char': UNICODE_LIT, 
152 | 'unicode': UNICODE_LIT, 153 | 'Unicode': UNICODE_LIT, 154 | 'bytes': BYTES_LIT, 155 | 'byte': BYTES_LIT, 156 | 'Bytes': BYTES_LIT, 157 | 'Byte': BYTES_LIT, 158 | }) 159 | DEFAULT_STR_LIT = UNICODE_LIT 160 | # also, files: we can't provide an easy expression on py3k 161 | RET_TYPE.update({ 162 | 'file': None, 163 | }) 164 | 165 | def ensureUnicode(data): 166 | if type(data) == bytes: 167 | return data.decode(OUT_ENCODING, 'replace') 168 | return str(data) 169 | 170 | if version[0] > 2: 171 | import io # in 3.0 172 | 173 | #noinspection PyArgumentList 174 | fopen = lambda name, mode: io.open(name, mode, encoding=OUT_ENCODING) 175 | else: 176 | fopen = open 177 | 178 | if sys.platform == 'cli': 179 | #noinspection PyUnresolvedReferences 180 | from System import DateTime 181 | 182 | class Timer(object): 183 | def __init__(self): 184 | self.started = DateTime.Now 185 | 186 | def elapsed(self): 187 | return (DateTime.Now - self.started).TotalMilliseconds 188 | else: 189 | class Timer(object): 190 | def __init__(self): 191 | self.started = time.time() 192 | 193 | def elapsed(self): 194 | return int((time.time() - self.started) * 1000) 195 | 196 | IS_JAVA = hasattr(os, "java") 197 | 198 | BUILTIN_MOD_NAME = the_builtins.__name__ 199 | 200 | IDENT_PATTERN = "[A-Za-z_][0-9A-Za-z_]*" # re pattern for identifier 201 | NUM_IDENT_PATTERN = re.compile("([A-Za-z_]+)[0-9]?[A-Za-z_]*") # 'foo_123' -> $1 = 'foo_' 202 | STR_CHAR_PATTERN = "[0-9A-Za-z_.,\+\-&\*% ]" 203 | 204 | DOC_FUNC_RE = re.compile("(?:.*\.)?(\w+)\(([^\)]*)\).*") # $1 = function name, $2 = arglist 205 | 206 | SANE_REPR_RE = re.compile(IDENT_PATTERN + "(?:\(.*\))?") # identifier with possible (...), go catches 207 | 208 | IDENT_RE = re.compile("(" + IDENT_PATTERN + ")") # $1 = identifier 209 | 210 | STARS_IDENT_RE = re.compile("(\*?\*?" + IDENT_PATTERN + ")") # $1 = identifier, maybe with a * or ** 211 | 212 | IDENT_EQ_RE = re.compile("(" + IDENT_PATTERN + "\s*=)") # $1 = identifier with a following '=' 213 | 214 | SIMPLE_VALUE_RE = re.compile( 215 | "(\([+-]?[0-9](?:\s*,\s*[+-]?[0-9])*\))|" + # a numeric tuple, e.g. in pygame 216 | "([+-]?[0-9]+\.?[0-9]*(?:[Ee]?[+-]?[0-9]+\.?[0-9]*)?)|" + # number 217 | "('" + STR_CHAR_PATTERN + "*')|" + # single-quoted string 218 | '("' + STR_CHAR_PATTERN + '*")|' + # double-quoted string 219 | "(\[\])|" + 220 | "(\{\})|" + 221 | "(\(\))|" + 222 | "(True|False|None)" 223 | ) # $? = sane default value 224 | 225 | ########################### parsing ########################################################### 226 | if version[0] < 3: 227 | from pyparsing import * 228 | else: 229 | #noinspection PyUnresolvedReferences 230 | from pyparsing_py3 import * 231 | 232 | # grammar to parse parameter lists 233 | 234 | # // snatched from parsePythonValue.py, from pyparsing samples, copyright 2006 by Paul McGuire but under BSD license. 235 | # we don't suppress lots of punctuation because we want it back when we reconstruct the lists 236 | 237 | lparen, rparen, lbrack, rbrack, lbrace, rbrace, colon = map(Literal, "()[]{}:") 238 | 239 | integer = Combine(Optional(oneOf("+ -")) + Word(nums)).setName("integer") 240 | real = Combine(Optional(oneOf("+ -")) + Word(nums) + "." 
+ 241 | Optional(Word(nums)) + 242 | Optional(oneOf("e E") + Optional(oneOf("+ -")) + Word(nums))).setName("real") 243 | tupleStr = Forward() 244 | listStr = Forward() 245 | dictStr = Forward() 246 | 247 | boolLiteral = oneOf("True False") 248 | noneLiteral = Literal("None") 249 | 250 | listItem = real | integer | quotedString | unicodeString | boolLiteral | noneLiteral | \ 251 | Group(listStr) | tupleStr | dictStr 252 | 253 | tupleStr << ( Suppress("(") + Optional(delimitedList(listItem)) + 254 | Optional(Literal(",")) + Suppress(")") ).setResultsName("tuple") 255 | 256 | listStr << (lbrack + Optional(delimitedList(listItem) + 257 | Optional(Literal(","))) + rbrack).setResultsName("list") 258 | 259 | dictEntry = Group(listItem + colon + listItem) 260 | dictStr << (lbrace + Optional(delimitedList(dictEntry) + Optional(Literal(","))) + rbrace).setResultsName("dict") 261 | # \\ end of the snatched part 262 | 263 | # our output format is s-expressions: 264 | # (simple name optional_value) is name or name=value 265 | # (nested (simple ...) (simple ...)) is (name, name,...) 266 | # (opt ...) is [, ...] or suchlike. 267 | 268 | T_SIMPLE = 'Simple' 269 | T_NESTED = 'Nested' 270 | T_OPTIONAL = 'Opt' 271 | T_RETURN = "Ret" 272 | 273 | TRIPLE_DOT = '...' 274 | 275 | COMMA = Suppress(",") 276 | APOS = Suppress("'") 277 | QUOTE = Suppress('"') 278 | SP = Suppress(Optional(White())) 279 | 280 | ident = Word(alphas + "_", alphanums + "_-.").setName("ident") # we accept things like "foo-or-bar" 281 | decorated_ident = ident + Optional(Suppress(SP + Literal(":") + SP + ident)) # accept "foo: bar", ignore "bar" 282 | spaced_ident = Combine(decorated_ident + ZeroOrMore(Literal(' ') + decorated_ident)) # we accept 'list or tuple' or 'C struct' 283 | 284 | # allow quoted names, because __setattr__, etc docs use it 285 | paramname = spaced_ident | \ 286 | APOS + spaced_ident + APOS | \ 287 | QUOTE + spaced_ident + QUOTE 288 | 289 | parenthesized_tuple = ( Literal("(") + Optional(delimitedList(listItem, combine=True)) + 290 | Optional(Literal(",")) + Literal(")") ).setResultsName("(tuple)") 291 | 292 | initializer = (SP + Suppress("=") + SP + Combine(parenthesized_tuple | listItem | ident)).setName("=init") # accept foo=defaultfoo 293 | 294 | param = Group(Empty().setParseAction(replaceWith(T_SIMPLE)) + Combine(Optional(oneOf("* **")) + paramname) + Optional(initializer)) 295 | 296 | ellipsis = Group( 297 | Empty().setParseAction(replaceWith(T_SIMPLE)) + \ 298 | (Literal("..") + 299 | ZeroOrMore(Literal('.'))).setParseAction(replaceWith(TRIPLE_DOT)) # we want to accept both 'foo,..' and 'foo, ...' 300 | ) 301 | 302 | paramSlot = Forward() 303 | 304 | simpleParamSeq = ZeroOrMore(paramSlot + COMMA) + Optional(paramSlot + Optional(COMMA)) 305 | nestedParamSeq = Group( 306 | Suppress('(').setParseAction(replaceWith(T_NESTED)) + \ 307 | simpleParamSeq + Optional(ellipsis + Optional(COMMA) + Optional(simpleParamSeq)) + \ 308 | Suppress(')') 309 | ) # we accept "(a1, ... 
an)" 310 | 311 | paramSlot << (param | nestedParamSeq) 312 | 313 | optionalPart = Forward() 314 | 315 | paramSeq = simpleParamSeq + Optional(optionalPart) # this is our approximate target 316 | 317 | optionalPart << ( 318 | Group( 319 | Suppress('[').setParseAction(replaceWith(T_OPTIONAL)) + Optional(COMMA) + 320 | paramSeq + Optional(ellipsis) + 321 | Suppress(']') 322 | ) 323 | | ellipsis 324 | ) 325 | 326 | return_type = Group( 327 | Empty().setParseAction(replaceWith(T_RETURN)) + 328 | Suppress(SP + (Literal("->") | (Literal(":") + SP + Literal("return"))) + SP) + 329 | ident 330 | ) 331 | 332 | # this is our ideal target, with balancing paren and a multiline rest of doc. 333 | paramSeqAndRest = paramSeq + Suppress(')') + Optional(return_type) + Suppress(Optional(Regex(".*(?s)"))) 334 | ############################################################################################ 335 | 336 | 337 | # Some values are known to be of no use in source and needs to be suppressed. 338 | # Dict is keyed by module names, with "*" meaning "any module"; 339 | # values are lists of names of members whose value must be pruned. 340 | SKIP_VALUE_IN_MODULE = { 341 | "sys": ( 342 | "modules", "path_importer_cache", "argv", "builtins", 343 | "last_traceback", "last_type", "last_value", "builtin_module_names", 344 | ), 345 | "posix": ( 346 | "environ", 347 | ), 348 | "zipimport": ( 349 | "_zip_directory_cache", 350 | ), 351 | "*": (BUILTIN_MOD_NAME,) 352 | } 353 | # {"module": ("name",..)}: omit the names from the skeleton at all. 354 | OMIT_NAME_IN_MODULE = {} 355 | 356 | if version[0] >= 3: 357 | v = OMIT_NAME_IN_MODULE.get(BUILTIN_MOD_NAME, []) + ["True", "False", "None", "__debug__"] 358 | OMIT_NAME_IN_MODULE[BUILTIN_MOD_NAME] = v 359 | 360 | if IS_JAVA and version > (2, 4): # in 2.5.1 things are way weird! 361 | OMIT_NAME_IN_MODULE['_codecs'] = ['EncodingMap'] 362 | OMIT_NAME_IN_MODULE['_hashlib'] = ['Hash'] 363 | 364 | ADD_VALUE_IN_MODULE = { 365 | "sys": ("exc_value = Exception()", "exc_traceback=None"), # only present after an exception in current thread 366 | } 367 | 368 | # Some values are special and are better represented by hand-crafted constructs. 369 | # Dict is keyed by (module name, member name) and value is the replacement. 370 | REPLACE_MODULE_VALUES = { 371 | ("numpy.core.multiarray", "typeinfo"): "{}", 372 | ("psycopg2._psycopg", "string_types"): "{}", # badly mangled __eq__ breaks fmtValue 373 | ("PyQt5.QtWidgets", "qApp") : "QApplication()", # instead of None 374 | } 375 | if version[0] <= 2: 376 | REPLACE_MODULE_VALUES[(BUILTIN_MOD_NAME, "None")] = "object()" 377 | for std_file in ("stdin", "stdout", "stderr"): 378 | REPLACE_MODULE_VALUES[("sys", std_file)] = "open('')" # 379 | 380 | # Some functions and methods of some builtin classes have special signatures. 
381 | # {("class", "method"): ("signature_string")} 382 | PREDEFINED_BUILTIN_SIGS = { #TODO: user-skeleton 383 | ("type", "__init__"): "(cls, what, bases=None, dict=None)", # two sigs squeezed into one 384 | ("object", "__init__"): "(self)", 385 | ("object", "__new__"): "(cls, *more)", # only for the sake of parameter names readability 386 | ("object", "__subclasshook__"): "(cls, subclass)", # trusting PY-1818 on sig 387 | ("int", "__init__"): "(self, x, base=10)", # overrides a fake 388 | ("list", "__init__"): "(self, seq=())", 389 | ("tuple", "__init__"): "(self, seq=())", # overrides a fake 390 | ("set", "__init__"): "(self, seq=())", 391 | ("dict", "__init__"): "(self, seq=None, **kwargs)", 392 | ("property", "__init__"): "(self, fget=None, fset=None, fdel=None, doc=None)", 393 | # TODO: infer, doc comments have it 394 | ("dict", "update"): "(self, E=None, **F)", # docstring nearly lies 395 | (None, "zip"): "(seq1, seq2, *more_seqs)", 396 | (None, "range"): "(start=None, stop=None, step=None)", # suboptimal: allows empty arglist 397 | (None, "filter"): "(function_or_none, sequence)", 398 | (None, "iter"): "(source, sentinel=None)", 399 | (None, "getattr"): "(object, name, default=None)", 400 | ('frozenset', "__init__"): "(self, seq=())", 401 | ("bytearray", "__init__"): "(self, source=None, encoding=None, errors='strict')", 402 | } 403 | 404 | if version[0] < 3: 405 | PREDEFINED_BUILTIN_SIGS[ 406 | ("unicode", "__init__")] = "(self, string=u'', encoding=None, errors='strict')" # overrides a fake 407 | PREDEFINED_BUILTIN_SIGS[("super", "__init__")] = "(self, type1, type2=None)" 408 | PREDEFINED_BUILTIN_SIGS[ 409 | (None, "min")] = "(*args, **kwargs)" # too permissive, but py2.x won't allow a better sig 410 | PREDEFINED_BUILTIN_SIGS[(None, "max")] = "(*args, **kwargs)" 411 | PREDEFINED_BUILTIN_SIGS[("str", "__init__")] = "(self, string='')" # overrides a fake 412 | PREDEFINED_BUILTIN_SIGS[(None, "print")] = "(*args, **kwargs)" # can't do better in 2.x 413 | else: 414 | PREDEFINED_BUILTIN_SIGS[("super", "__init__")] = "(self, type1=None, type2=None)" 415 | PREDEFINED_BUILTIN_SIGS[(None, "min")] = "(*args, key=None)" 416 | PREDEFINED_BUILTIN_SIGS[(None, "max")] = "(*args, key=None)" 417 | PREDEFINED_BUILTIN_SIGS[ 418 | (None, "open")] = "(file, mode='r', buffering=None, encoding=None, errors=None, newline=None, closefd=True)" 419 | PREDEFINED_BUILTIN_SIGS[ 420 | ("str", "__init__")] = "(self, value='', encoding=None, errors='strict')" # overrides a fake 421 | PREDEFINED_BUILTIN_SIGS[("str", "format")] = "(self, *args, **kwargs)" 422 | PREDEFINED_BUILTIN_SIGS[ 423 | ("bytes", "__init__")] = "(self, value=b'', encoding=None, errors='strict')" # overrides a fake 424 | PREDEFINED_BUILTIN_SIGS[("bytes", "format")] = "(self, *args, **kwargs)" 425 | PREDEFINED_BUILTIN_SIGS[(None, "print")] = "(self, *args, sep=' ', end='\\n', file=None)" # proper signature 426 | 427 | if (2, 6) <= version < (3, 0): 428 | PREDEFINED_BUILTIN_SIGS[("unicode", "format")] = "(self, *args, **kwargs)" 429 | PREDEFINED_BUILTIN_SIGS[("str", "format")] = "(self, *args, **kwargs)" 430 | 431 | if version == (2, 5): 432 | PREDEFINED_BUILTIN_SIGS[("unicode", "splitlines")] = "(keepends=None)" # a typo in docstring there 433 | 434 | if version >= (2, 7): 435 | PREDEFINED_BUILTIN_SIGS[ 436 | ("enumerate", "__init__")] = "(self, iterable, start=0)" # dosctring omits this completely. 
437 | 438 | if version < (3, 3): 439 | datetime_mod = "datetime" 440 | else: 441 | datetime_mod = "_datetime" 442 | 443 | 444 | # NOTE: per-module signature data may be lazily imported 445 | # keyed by (module_name, class_name, method_name). PREDEFINED_BUILTIN_SIGS might be a layer of it. 446 | # value is ("signature", "return_literal") 447 | PREDEFINED_MOD_CLASS_SIGS = { #TODO: user-skeleton 448 | (BUILTIN_MOD_NAME, None, 'divmod'): ("(x, y)", "(0, 0)"), 449 | 450 | ("binascii", None, "hexlify"): ("(data)", BYTES_LIT), 451 | ("binascii", None, "unhexlify"): ("(hexstr)", BYTES_LIT), 452 | 453 | ("time", None, "ctime"): ("(seconds=None)", DEFAULT_STR_LIT), 454 | 455 | ("_struct", None, "pack"): ("(fmt, *args)", BYTES_LIT), 456 | ("_struct", None, "pack_into"): ("(fmt, buffer, offset, *args)", None), 457 | ("_struct", None, "unpack"): ("(fmt, string)", None), 458 | ("_struct", None, "unpack_from"): ("(fmt, buffer, offset=0)", None), 459 | ("_struct", None, "calcsize"): ("(fmt)", INT_LIT), 460 | ("_struct", "Struct", "__init__"): ("(self, fmt)", None), 461 | ("_struct", "Struct", "pack"): ("(self, *args)", BYTES_LIT), 462 | ("_struct", "Struct", "pack_into"): ("(self, buffer, offset, *args)", None), 463 | ("_struct", "Struct", "unpack"): ("(self, string)", None), 464 | ("_struct", "Struct", "unpack_from"): ("(self, buffer, offset=0)", None), 465 | 466 | (datetime_mod, "date", "__new__"): ("(cls, year=None, month=None, day=None)", None), 467 | (datetime_mod, "date", "fromordinal"): ("(cls, ordinal)", "date(1,1,1)"), 468 | (datetime_mod, "date", "fromtimestamp"): ("(cls, timestamp)", "date(1,1,1)"), 469 | (datetime_mod, "date", "isocalendar"): ("(self)", "(1, 1, 1)"), 470 | (datetime_mod, "date", "isoformat"): ("(self)", DEFAULT_STR_LIT), 471 | (datetime_mod, "date", "isoweekday"): ("(self)", INT_LIT), 472 | (datetime_mod, "date", "replace"): ("(self, year=None, month=None, day=None)", "date(1,1,1)"), 473 | (datetime_mod, "date", "strftime"): ("(self, format)", DEFAULT_STR_LIT), 474 | (datetime_mod, "date", "timetuple"): ("(self)", "(0, 0, 0, 0, 0, 0, 0, 0, 0)"), 475 | (datetime_mod, "date", "today"): ("(self)", "date(1, 1, 1)"), 476 | (datetime_mod, "date", "toordinal"): ("(self)", INT_LIT), 477 | (datetime_mod, "date", "weekday"): ("(self)", INT_LIT), 478 | (datetime_mod, "timedelta", "__new__" 479 | ): ( 480 | "(cls, days=None, seconds=None, microseconds=None, milliseconds=None, minutes=None, hours=None, weeks=None)", 481 | None), 482 | (datetime_mod, "datetime", "__new__" 483 | ): ( 484 | "(cls, year=None, month=None, day=None, hour=None, minute=None, second=None, microsecond=None, tzinfo=None)", 485 | None), 486 | (datetime_mod, "datetime", "astimezone"): ("(self, tz)", "datetime(1, 1, 1)"), 487 | (datetime_mod, "datetime", "combine"): ("(cls, date, time)", "datetime(1, 1, 1)"), 488 | (datetime_mod, "datetime", "date"): ("(self)", "datetime(1, 1, 1)"), 489 | (datetime_mod, "datetime", "fromtimestamp"): ("(cls, timestamp, tz=None)", "datetime(1, 1, 1)"), 490 | (datetime_mod, "datetime", "isoformat"): ("(self, sep='T')", DEFAULT_STR_LIT), 491 | (datetime_mod, "datetime", "now"): ("(cls, tz=None)", "datetime(1, 1, 1)"), 492 | (datetime_mod, "datetime", "strptime"): ("(cls, date_string, format)", DEFAULT_STR_LIT), 493 | (datetime_mod, "datetime", "replace" ): 494 | ( 495 | "(self, year=None, month=None, day=None, hour=None, minute=None, second=None, microsecond=None, tzinfo=None)", 496 | "datetime(1, 1, 1)"), 497 | (datetime_mod, "datetime", "time"): ("(self)", "time(0, 0)"), 498 | 
(datetime_mod, "datetime", "timetuple"): ("(self)", "(0, 0, 0, 0, 0, 0, 0, 0, 0)"), 499 | (datetime_mod, "datetime", "timetz"): ("(self)", "time(0, 0)"), 500 | (datetime_mod, "datetime", "utcfromtimestamp"): ("(self, timestamp)", "datetime(1, 1, 1)"), 501 | (datetime_mod, "datetime", "utcnow"): ("(cls)", "datetime(1, 1, 1)"), 502 | (datetime_mod, "datetime", "utctimetuple"): ("(self)", "(0, 0, 0, 0, 0, 0, 0, 0, 0)"), 503 | (datetime_mod, "time", "__new__"): ( 504 | "(cls, hour=None, minute=None, second=None, microsecond=None, tzinfo=None)", None), 505 | (datetime_mod, "time", "isoformat"): ("(self)", DEFAULT_STR_LIT), 506 | (datetime_mod, "time", "replace"): ( 507 | "(self, hour=None, minute=None, second=None, microsecond=None, tzinfo=None)", "time(0, 0)"), 508 | (datetime_mod, "time", "strftime"): ("(self, format)", DEFAULT_STR_LIT), 509 | (datetime_mod, "tzinfo", "dst"): ("(self, date_time)", INT_LIT), 510 | (datetime_mod, "tzinfo", "fromutc"): ("(self, date_time)", "datetime(1, 1, 1)"), 511 | (datetime_mod, "tzinfo", "tzname"): ("(self, date_time)", DEFAULT_STR_LIT), 512 | (datetime_mod, "tzinfo", "utcoffset"): ("(self, date_time)", INT_LIT), 513 | 514 | ("_io", None, "open"): ("(name, mode=None, buffering=None)", "file('/dev/null')"), 515 | ("_io", "FileIO", "read"): ("(self, size=-1)", DEFAULT_STR_LIT), 516 | ("_fileio", "_FileIO", "read"): ("(self, size=-1)", DEFAULT_STR_LIT), 517 | 518 | ("thread", None, "start_new"): ("(function, args, kwargs=None)", INT_LIT), 519 | ("_thread", None, "start_new"): ("(function, args, kwargs=None)", INT_LIT), 520 | 521 | ("itertools", "groupby", "__init__"): ("(self, iterable, key=None)", None), 522 | ("itertools", None, "groupby"): ("(iterable, key=None)", LIST_LIT), 523 | 524 | ("cStringIO", "OutputType", "seek"): ("(self, position, mode=0)", None), 525 | ("cStringIO", "InputType", "seek"): ("(self, position, mode=0)", None), 526 | 527 | # NOTE: here we stand on shaky ground providing sigs for 3rd-party modules, though well-known 528 | ("numpy.core.multiarray", "ndarray", "__array__"): ("(self, dtype=None)", None), 529 | ("numpy.core.multiarray", None, "arange"): ("(start=None, stop=None, step=None, dtype=None)", None), 530 | # same as range() 531 | ("numpy.core.multiarray", None, "set_numeric_ops"): ("(**ops)", None), 532 | ("numpy.random.mtrand", None, "rand"): ("(*dn)", None), 533 | ("numpy.random.mtrand", None, "randn"): ("(*dn)", None), 534 | ("numpy.core.multiarray", "ndarray", "reshape"): ("(self, shape, *shapes, order='C')", None), 535 | ("numpy.core.multiarray", "ndarray", "resize"): ("(self, *new_shape, refcheck=True)", None), 536 | } 537 | 538 | bin_collections_names = ['collections', '_collections'] 539 | 540 | for name in bin_collections_names: 541 | PREDEFINED_MOD_CLASS_SIGS[(name, "deque", "__init__")] = ("(self, iterable=(), maxlen=None)", None) 542 | PREDEFINED_MOD_CLASS_SIGS[(name, "defaultdict", "__init__")] = ("(self, default_factory=None, **kwargs)", None) 543 | 544 | if version[0] < 3: 545 | PREDEFINED_MOD_CLASS_SIGS[("exceptions", "BaseException", "__unicode__")] = ("(self)", UNICODE_LIT) 546 | PREDEFINED_MOD_CLASS_SIGS[("itertools", "product", "__init__")] = ("(self, *iterables, **kwargs)", LIST_LIT) 547 | else: 548 | PREDEFINED_MOD_CLASS_SIGS[("itertools", "product", "__init__")] = ("(self, *iterables, repeat=1)", LIST_LIT) 549 | 550 | if version[0] < 3: 551 | PREDEFINED_MOD_CLASS_SIGS[("PyQt4.QtCore", None, "pyqtSlot")] = ( 552 | "(*types, **keywords)", None) # doc assumes py3k syntax 553 | 554 | # known properties of 
modules 555 | # {{"module": {"class", "property" : ("letters", ("getter", "type"))}}, 556 | # where letters is any set of r,w,d (read, write, del) and "getter" is a source of typed getter. 557 | # if value is None, the property should be omitted. 558 | # read-only properties that return an object are not listed. 559 | G_OBJECT = ("lambda self: object()", None) 560 | G_TYPE = ("lambda self: type(object)", "type") 561 | G_DICT = ("lambda self: {}", "dict") 562 | G_STR = ("lambda self: ''", "string") 563 | G_TUPLE = ("lambda self: tuple()", "tuple") 564 | G_FLOAT = ("lambda self: 0.0", "float") 565 | G_INT = ("lambda self: 0", "int") 566 | G_BOOL = ("lambda self: True", "bool") 567 | 568 | KNOWN_PROPS = { 569 | BUILTIN_MOD_NAME: { 570 | ("object", '__class__'): ('r', G_TYPE), 571 | ('complex', 'real'): ('r', G_FLOAT), 572 | ('complex', 'imag'): ('r', G_FLOAT), 573 | ("file", 'softspace'): ('r', G_BOOL), 574 | ("file", 'name'): ('r', G_STR), 575 | ("file", 'encoding'): ('r', G_STR), 576 | ("file", 'mode'): ('r', G_STR), 577 | ("file", 'closed'): ('r', G_BOOL), 578 | ("file", 'newlines'): ('r', G_STR), 579 | ("slice", 'start'): ('r', G_INT), 580 | ("slice", 'step'): ('r', G_INT), 581 | ("slice", 'stop'): ('r', G_INT), 582 | ("super", '__thisclass__'): ('r', G_TYPE), 583 | ("super", '__self__'): ('r', G_TYPE), 584 | ("super", '__self_class__'): ('r', G_TYPE), 585 | ("type", '__basicsize__'): ('r', G_INT), 586 | ("type", '__itemsize__'): ('r', G_INT), 587 | ("type", '__base__'): ('r', G_TYPE), 588 | ("type", '__flags__'): ('r', G_INT), 589 | ("type", '__mro__'): ('r', G_TUPLE), 590 | ("type", '__bases__'): ('r', G_TUPLE), 591 | ("type", '__dictoffset__'): ('r', G_INT), 592 | ("type", '__dict__'): ('r', G_DICT), 593 | ("type", '__name__'): ('r', G_STR), 594 | ("type", '__weakrefoffset__'): ('r', G_INT), 595 | }, 596 | "exceptions": { 597 | ("BaseException", '__dict__'): ('r', G_DICT), 598 | ("BaseException", 'message'): ('rwd', G_STR), 599 | ("BaseException", 'args'): ('r', G_TUPLE), 600 | ("EnvironmentError", 'errno'): ('rwd', G_INT), 601 | ("EnvironmentError", 'message'): ('rwd', G_STR), 602 | ("EnvironmentError", 'strerror'): ('rwd', G_INT), 603 | ("EnvironmentError", 'filename'): ('rwd', G_STR), 604 | ("SyntaxError", 'text'): ('rwd', G_STR), 605 | ("SyntaxError", 'print_file_and_line'): ('rwd', G_BOOL), 606 | ("SyntaxError", 'filename'): ('rwd', G_STR), 607 | ("SyntaxError", 'lineno'): ('rwd', G_INT), 608 | ("SyntaxError", 'offset'): ('rwd', G_INT), 609 | ("SyntaxError", 'msg'): ('rwd', G_STR), 610 | ("SyntaxError", 'message'): ('rwd', G_STR), 611 | ("SystemExit", 'message'): ('rwd', G_STR), 612 | ("SystemExit", 'code'): ('rwd', G_OBJECT), 613 | ("UnicodeDecodeError", '__basicsize__'): None, 614 | ("UnicodeDecodeError", '__itemsize__'): None, 615 | ("UnicodeDecodeError", '__base__'): None, 616 | ("UnicodeDecodeError", '__flags__'): ('rwd', G_INT), 617 | ("UnicodeDecodeError", '__mro__'): None, 618 | ("UnicodeDecodeError", '__bases__'): None, 619 | ("UnicodeDecodeError", '__dictoffset__'): None, 620 | ("UnicodeDecodeError", '__dict__'): None, 621 | ("UnicodeDecodeError", '__name__'): None, 622 | ("UnicodeDecodeError", '__weakrefoffset__'): None, 623 | ("UnicodeEncodeError", 'end'): ('rwd', G_INT), 624 | ("UnicodeEncodeError", 'encoding'): ('rwd', G_STR), 625 | ("UnicodeEncodeError", 'object'): ('rwd', G_OBJECT), 626 | ("UnicodeEncodeError", 'start'): ('rwd', G_INT), 627 | ("UnicodeEncodeError", 'reason'): ('rwd', G_STR), 628 | ("UnicodeEncodeError", 'message'): ('rwd', G_STR), 629 | 
("UnicodeTranslateError", 'end'): ('rwd', G_INT), 630 | ("UnicodeTranslateError", 'encoding'): ('rwd', G_STR), 631 | ("UnicodeTranslateError", 'object'): ('rwd', G_OBJECT), 632 | ("UnicodeTranslateError", 'start'): ('rwd', G_INT), 633 | ("UnicodeTranslateError", 'reason'): ('rwd', G_STR), 634 | ("UnicodeTranslateError", 'message'): ('rwd', G_STR), 635 | }, 636 | '_ast': { 637 | ("AST", '__dict__'): ('rd', G_DICT), 638 | }, 639 | 'posix': { 640 | ("statvfs_result", 'f_flag'): ('r', G_INT), 641 | ("statvfs_result", 'f_bavail'): ('r', G_INT), 642 | ("statvfs_result", 'f_favail'): ('r', G_INT), 643 | ("statvfs_result", 'f_files'): ('r', G_INT), 644 | ("statvfs_result", 'f_frsize'): ('r', G_INT), 645 | ("statvfs_result", 'f_blocks'): ('r', G_INT), 646 | ("statvfs_result", 'f_ffree'): ('r', G_INT), 647 | ("statvfs_result", 'f_bfree'): ('r', G_INT), 648 | ("statvfs_result", 'f_namemax'): ('r', G_INT), 649 | ("statvfs_result", 'f_bsize'): ('r', G_INT), 650 | 651 | ("stat_result", 'st_ctime'): ('r', G_INT), 652 | ("stat_result", 'st_rdev'): ('r', G_INT), 653 | ("stat_result", 'st_mtime'): ('r', G_INT), 654 | ("stat_result", 'st_blocks'): ('r', G_INT), 655 | ("stat_result", 'st_gid'): ('r', G_INT), 656 | ("stat_result", 'st_nlink'): ('r', G_INT), 657 | ("stat_result", 'st_ino'): ('r', G_INT), 658 | ("stat_result", 'st_blksize'): ('r', G_INT), 659 | ("stat_result", 'st_dev'): ('r', G_INT), 660 | ("stat_result", 'st_size'): ('r', G_INT), 661 | ("stat_result", 'st_mode'): ('r', G_INT), 662 | ("stat_result", 'st_uid'): ('r', G_INT), 663 | ("stat_result", 'st_atime'): ('r', G_INT), 664 | }, 665 | "pwd": { 666 | ("struct_pwent", 'pw_dir'): ('r', G_STR), 667 | ("struct_pwent", 'pw_gid'): ('r', G_INT), 668 | ("struct_pwent", 'pw_passwd'): ('r', G_STR), 669 | ("struct_pwent", 'pw_gecos'): ('r', G_STR), 670 | ("struct_pwent", 'pw_shell'): ('r', G_STR), 671 | ("struct_pwent", 'pw_name'): ('r', G_STR), 672 | ("struct_pwent", 'pw_uid'): ('r', G_INT), 673 | 674 | ("struct_passwd", 'pw_dir'): ('r', G_STR), 675 | ("struct_passwd", 'pw_gid'): ('r', G_INT), 676 | ("struct_passwd", 'pw_passwd'): ('r', G_STR), 677 | ("struct_passwd", 'pw_gecos'): ('r', G_STR), 678 | ("struct_passwd", 'pw_shell'): ('r', G_STR), 679 | ("struct_passwd", 'pw_name'): ('r', G_STR), 680 | ("struct_passwd", 'pw_uid'): ('r', G_INT), 681 | }, 682 | "thread": { 683 | ("_local", '__dict__'): None 684 | }, 685 | "xxsubtype": { 686 | ("spamdict", 'state'): ('r', G_INT), 687 | ("spamlist", 'state'): ('r', G_INT), 688 | }, 689 | "zipimport": { 690 | ("zipimporter", 'prefix'): ('r', G_STR), 691 | ("zipimporter", 'archive'): ('r', G_STR), 692 | ("zipimporter", '_files'): ('r', G_DICT), 693 | }, 694 | "_struct": { 695 | ("Struct", "size"): ('r', G_INT), 696 | ("Struct", "format"): ('r', G_STR), 697 | }, 698 | datetime_mod: { 699 | ("datetime", "hour"): ('r', G_INT), 700 | ("datetime", "minute"): ('r', G_INT), 701 | ("datetime", "second"): ('r', G_INT), 702 | ("datetime", "microsecond"): ('r', G_INT), 703 | ("date", "day"): ('r', G_INT), 704 | ("date", "month"): ('r', G_INT), 705 | ("date", "year"): ('r', G_INT), 706 | ("time", "hour"): ('r', G_INT), 707 | ("time", "minute"): ('r', G_INT), 708 | ("time", "second"): ('r', G_INT), 709 | ("time", "microsecond"): ('r', G_INT), 710 | ("timedelta", "days"): ('r', G_INT), 711 | ("timedelta", "seconds"): ('r', G_INT), 712 | ("timedelta", "microseconds"): ('r', G_INT), 713 | }, 714 | } 715 | 716 | # Sometimes module X defines item foo but foo.__module__ == 'Y' instead of 'X'; 717 | # module Y just re-exports foo, 
and foo fakes being defined in Y. 718 | # We list all such Ys keyed by X, all fully-qualified names: 719 | # {"real_definer_module": ("fake_reexporter_module",..)} 720 | KNOWN_FAKE_REEXPORTERS = { 721 | "_collections": ('collections',), 722 | "_functools": ('functools',), 723 | "_socket": ('socket',), # .error, etc 724 | "pyexpat": ('xml.parsers.expat',), 725 | "_bsddb": ('bsddb.db',), 726 | "pysqlite2._sqlite": ('pysqlite2.dbapi2',), # errors 727 | "numpy.core.multiarray": ('numpy', 'numpy.core'), 728 | "numpy.core._dotblas": ('numpy', 'numpy.core'), 729 | "numpy.core.umath": ('numpy', 'numpy.core'), 730 | "gtk._gtk": ('gtk', 'gtk.gdk',), 731 | "gobject._gobject": ('gobject',), 732 | "gnomecanvas": ("gnome.canvas",), 733 | } 734 | 735 | KNOWN_FAKE_BASES = [] 736 | # list of classes that pretend to be base classes but are mere wrappers, and their defining modules 737 | # [(class, module),...] -- real objects, not names 738 | #noinspection PyBroadException 739 | try: 740 | #noinspection PyUnresolvedReferences 741 | import sip as sip_module # Qt specifically likes it 742 | 743 | if hasattr(sip_module, 'wrapper'): 744 | KNOWN_FAKE_BASES.append((sip_module.wrapper, sip_module)) 745 | if hasattr(sip_module, 'simplewrapper'): 746 | KNOWN_FAKE_BASES.append((sip_module.simplewrapper, sip_module)) 747 | del sip_module 748 | except: 749 | pass 750 | 751 | # This is a list of builtin classes to use fake init 752 | FAKE_BUILTIN_INITS = (tuple, type, int, str) 753 | if version[0] < 3: 754 | FAKE_BUILTIN_INITS = FAKE_BUILTIN_INITS + (getattr(the_builtins, "unicode"),) 755 | else: 756 | FAKE_BUILTIN_INITS = FAKE_BUILTIN_INITS + (getattr(the_builtins, "str"), getattr(the_builtins, "bytes")) 757 | 758 | # Some builtin methods are decorated, but this is hard to detect. 
759 | # {("class_name", "method_name"): "decorator"} 760 | KNOWN_DECORATORS = { 761 | ("dict", "fromkeys"): "staticmethod", 762 | ("object", "__subclasshook__"): "classmethod", 763 | ("bytearray", "fromhex"): "classmethod", 764 | ("bytes", "fromhex"): "classmethod", 765 | ("bytearray", "maketrans"): "staticmethod", 766 | ("bytes", "maketrans"): "staticmethod", 767 | ("int", "from_bytes"): "classmethod", 768 | ("float", "fromhex"): "staticmethod", 769 | } 770 | 771 | classobj_txt = ( #TODO: user-skeleton 772 | "class ___Classobj:" "\n" 773 | " '''A mock class representing the old style class base.'''" "\n" 774 | " __module__ = ''" "\n" 775 | " __class__ = None" "\n" 776 | "\n" 777 | " def __init__(self):" "\n" 778 | " pass" "\n" 779 | " __dict__ = {}" "\n" 780 | " __doc__ = ''" "\n" 781 | ) 782 | 783 | MAC_STDLIB_PATTERN = re.compile("/System/Library/Frameworks/Python\\.framework/Versions/(.+)/lib/python\\1/(.+)") 784 | MAC_SKIP_MODULES = ["test", "ctypes/test", "distutils/tests", "email/test", 785 | "importlib/test", "json/tests", "lib2to3/tests", 786 | "bsddb/test", 787 | "sqlite3/test", "tkinter/test", "idlelib", "antigravity"] 788 | 789 | POSIX_SKIP_MODULES = ["vtemodule", "PAMmodule", "_snackmodule", "/quodlibet/_mmkeys"] 790 | 791 | BIN_MODULE_FNAME_PAT = re.compile(r'([a-zA-Z_][0-9a-zA-Z_]*)\.(?:pyc|pyo|(?:(?:[a-zA-Z_0-9\-]+\.)?(?:so|pyd)))$') 792 | # possible binary module filename: letter, alphanum architecture per PEP-3149 793 | TYPELIB_MODULE_FNAME_PAT = re.compile("([a-zA-Z_]+[0-9a-zA-Z]*)[0-9a-zA-Z-.]*\\.typelib") 794 | 795 | MODULES_INSPECT_DIR = ['gi.repository'] 796 | -------------------------------------------------------------------------------- /src/generator3/pycharm_generator_utils/module_redeclarator.py: -------------------------------------------------------------------------------- 1 | import keyword 2 | 3 | from util_methods import * 4 | from constants import * 5 | 6 | 7 | class emptylistdict(dict): 8 | """defaultdict not available before 2.5; simplest reimplementation using [] as default""" 9 | 10 | def __getitem__(self, item): 11 | if item in self: 12 | return dict.__getitem__(self, item) 13 | else: 14 | it = [] 15 | self.__setitem__(item, it) 16 | return it 17 | 18 | class Buf(object): 19 | """Buffers data in a list, can write to a file. Indentation is provided externally.""" 20 | 21 | def __init__(self, indenter): 22 | self.data = [] 23 | self.indenter = indenter 24 | 25 | def put(self, data): 26 | if data: 27 | self.data.append(ensureUnicode(data)) 28 | 29 | def out(self, indent, *what): 30 | """Output the arguments, indenting as needed, and adding an eol""" 31 | self.put(self.indenter.indent(indent)) 32 | for item in what: 33 | self.put(item) 34 | self.put("\n") 35 | 36 | def flush_bytes(self, outfile): 37 | for data in self.data: 38 | outfile.write(data.encode(OUT_ENCODING, "replace")) 39 | 40 | def flush_str(self, outfile): 41 | for data in self.data: 42 | outfile.write(data) 43 | 44 | if version[0] < 3: 45 | flush = flush_bytes 46 | else: 47 | flush = flush_str 48 | 49 | def isEmpty(self): 50 | return len(self.data) == 0 51 | 52 | 53 | class ClassBuf(Buf): 54 | def __init__(self, name, indenter): 55 | super(ClassBuf, self).__init__(indenter) 56 | self.name = name 57 | 58 | #noinspection PyUnresolvedReferences,PyBroadException 59 | class ModuleRedeclarator(object): 60 | def __init__(self, module, outfile, mod_filename, indent_size=4, doing_builtins=False): 61 | """ 62 | Create new instance. 63 | @param module module to restore. 
64 | @param outfile output file, must be open and writable. 65 | @param mod_filename filename of binary module (the .dll or .so) 66 | @param indent_size amount of space characters per indent 67 | """ 68 | self.module = module 69 | self.outfile = outfile # where we finally write 70 | self.mod_filename = mod_filename 71 | # we write things into buffers out-of-order 72 | self.header_buf = Buf(self) 73 | self.imports_buf = Buf(self) 74 | self.functions_buf = Buf(self) 75 | self.classes_buf = Buf(self) 76 | self.classes_buffs = list() 77 | self.footer_buf = Buf(self) 78 | self.indent_size = indent_size 79 | self._indent_step = " " * self.indent_size 80 | self.split_modules = False 81 | # 82 | self.imported_modules = {"": the_builtins} # explicit module imports: {"name": module} 83 | self.hidden_imports = {} # {'real_mod_name': 'alias'}; we alias names with "__" since we don't want them exported 84 | # ^ used for things that we don't re-export but need to import, e.g. certain base classes in gnome. 85 | self._defined = {} # stores True for every name defined so far, to break circular refs in values 86 | self.doing_builtins = doing_builtins 87 | self.ret_type_cache = {} 88 | self.used_imports = emptylistdict() # qual_mod_name -> [imported_names,..]: actually used imported names 89 | 90 | def _initializeQApp4(self): 91 | try: # QtGui should be imported _before_ QtCore package. 92 | # This is done for the QWidget references from QtCore (such as QSignalMapper). Known bug in PyQt 4.7+ 93 | # Causes "TypeError: C++ type 'QWidget*' is not supported as a native Qt signal type" 94 | import PyQt4.QtGui 95 | except ImportError: 96 | pass 97 | 98 | # manually instantiate and keep reference to singleton QCoreApplication (we don't want it to be deleted during the introspection) 99 | # use QCoreApplication instead of QApplication to avoid blinking app in Dock on Mac OS 100 | try: 101 | from PyQt4.QtCore import QCoreApplication 102 | self.app = QCoreApplication([]) 103 | return 104 | except ImportError: 105 | pass 106 | 107 | def _initializeQApp5(self): 108 | try: 109 | from PyQt5.QtCore import QCoreApplication 110 | self.app = QCoreApplication([]) 111 | return 112 | except ImportError: 113 | pass 114 | 115 | def indent(self, level): 116 | """Return indentation whitespace for given level.""" 117 | return self._indent_step * level 118 | 119 | def flush(self): 120 | init = None 121 | try: 122 | if self.split_modules: 123 | mod_path = module_to_package_name(self.outfile) 124 | 125 | fname = build_output_name(mod_path, "__init__") 126 | init = fopen(fname, "w") 127 | for buf in (self.header_buf, self.imports_buf, self.functions_buf, self.classes_buf): 128 | buf.flush(init) 129 | 130 | data = "" 131 | for buf in self.classes_buffs: 132 | fname = build_output_name(mod_path, buf.name) 133 | dummy = fopen(fname, "w") 134 | self.header_buf.flush(dummy) 135 | self.imports_buf.flush(dummy) 136 | buf.flush(dummy) 137 | data += self.create_local_import(buf.name) 138 | dummy.close() 139 | 140 | init.write(data) 141 | self.footer_buf.flush(init) 142 | else: 143 | init = fopen(self.outfile, "w") 144 | for buf in (self.header_buf, self.imports_buf, self.functions_buf, self.classes_buf): 145 | buf.flush(init) 146 | 147 | for buf in self.classes_buffs: 148 | buf.flush(init) 149 | 150 | self.footer_buf.flush(init) 151 | 152 | finally: 153 | if init is not None and not init.closed: 154 | init.close() 155 | 156 | # Some builtin classes effectively change __init__ signature without overriding it. 
157 | # This callable serves as a placeholder to be replaced via REDEFINED_BUILTIN_SIGS 158 | def fake_builtin_init(self): 159 | pass # just a callable, sig doesn't matter 160 | 161 | fake_builtin_init.__doc__ = object.__init__.__doc__ # this forces class's doc to be used instead 162 | 163 | def create_local_import(self, name): 164 | if len(name.split(".")) > 1: return "" 165 | data = "from " 166 | if version[0] >= 3: 167 | data += "." 168 | data += name + " import " + name + "\n" 169 | return data 170 | 171 | def find_imported_name(self, item): 172 | """ 173 | Finds out how the item is represented in imported modules. 174 | @param item what to check 175 | @return qualified name (like "sys.stdin") or None 176 | """ 177 | # TODO: return a pair, not a glued string 178 | if not isinstance(item, SIMPLEST_TYPES): 179 | for mname in self.imported_modules: 180 | m = self.imported_modules[mname] 181 | for inner_name in m.__dict__: 182 | suspect = getattr(m, inner_name) 183 | if suspect is item: 184 | if mname: 185 | mname += "." 186 | elif self.module is the_builtins: # don't short-circuit builtins 187 | return None 188 | return mname + inner_name 189 | return None 190 | 191 | _initializers = ( 192 | (dict, "{}"), 193 | (tuple, "()"), 194 | (list, "[]"), 195 | ) 196 | 197 | def invent_initializer(self, a_type): 198 | """ 199 | Returns an innocuous initializer expression for a_type, or "None" 200 | """ 201 | for initializer_type, r in self._initializers: 202 | if initializer_type == a_type: 203 | return r 204 | # NOTE: here we could handle things like defaultdict, sets, etc if we wanted 205 | return "None" 206 | 207 | 208 | def fmt_value(self, out, p_value, indent, prefix="", postfix="", as_name=None, seen_values=None): 209 | """ 210 | Formats and outputs value (it occupies an entire line or several lines). 211 | @param out function that does output (a Buf.out) 212 | @param p_value the value. 213 | @param indent indent level. 214 | @param prefix text to print before the value 215 | @param postfix text to print after the value 216 | @param as_name hints which name are we trying to print; helps with circular refs. 217 | @param seen_values a list of keys we've seen if we're processing a dict 218 | """ 219 | SELF_VALUE = "" 220 | ERR_VALUE = "" 221 | if isinstance(p_value, SIMPLEST_TYPES): 222 | out(indent, prefix, reliable_repr(p_value), postfix) 223 | else: 224 | if sys.platform == "cli": 225 | imported_name = None 226 | else: 227 | imported_name = self.find_imported_name(p_value) 228 | if imported_name: 229 | out(indent, prefix, imported_name, postfix) 230 | # TODO: kind of self.used_imports[imported_name].append(p_value) but split imported_name 231 | # else we could potentially return smth we did not otherwise import. but not likely. 
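            # The else-branch below formats values that are neither simple nor importable by name:
            # lists/tuples and dicts are rendered recursively (guarding against self-references via
            # seen_values); anything else falls back to a repr-based literal or a None placeholder.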
232 | else: 233 | if isinstance(p_value, (list, tuple)): 234 | if not seen_values: 235 | seen_values = [p_value] 236 | if len(p_value) == 0: 237 | out(indent, prefix, repr(p_value), postfix) 238 | else: 239 | if isinstance(p_value, list): 240 | lpar, rpar = "[", "]" 241 | else: 242 | lpar, rpar = "(", ")" 243 | out(indent, prefix, lpar) 244 | for value in p_value: 245 | if value in seen_values: 246 | value = SELF_VALUE 247 | elif not isinstance(value, SIMPLEST_TYPES): 248 | seen_values.append(value) 249 | self.fmt_value(out, value, indent + 1, postfix=",", seen_values=seen_values) 250 | out(indent, rpar, postfix) 251 | elif isinstance(p_value, dict): 252 | if len(p_value) == 0: 253 | out(indent, prefix, repr(p_value), postfix) 254 | else: 255 | if not seen_values: 256 | seen_values = [p_value] 257 | out(indent, prefix, "{") 258 | keys = list(p_value.keys()) 259 | try: 260 | keys.sort() 261 | except TypeError: 262 | pass # unsortable keys happen, e,g, in py3k _ctypes 263 | for k in keys: 264 | value = p_value[k] 265 | 266 | try: 267 | is_seen = value in seen_values 268 | except: 269 | is_seen = False 270 | value = ERR_VALUE 271 | 272 | if is_seen: 273 | value = SELF_VALUE 274 | elif not isinstance(value, SIMPLEST_TYPES): 275 | seen_values.append(value) 276 | if isinstance(k, SIMPLEST_TYPES): 277 | self.fmt_value(out, value, indent + 1, prefix=repr(k) + ": ", postfix=",", 278 | seen_values=seen_values) 279 | else: 280 | # both key and value need fancy formatting 281 | self.fmt_value(out, k, indent + 1, postfix=": ", seen_values=seen_values) 282 | self.fmt_value(out, value, indent + 2, seen_values=seen_values) 283 | out(indent + 1, ",") 284 | out(indent, "}", postfix) 285 | else: # something else, maybe representable 286 | # look up this value in the module. 287 | if sys.platform == "cli": 288 | out(indent, prefix, "None", postfix) 289 | return 290 | found_name = "" 291 | for inner_name in self.module.__dict__: 292 | if self.module.__dict__[inner_name] is p_value: 293 | found_name = inner_name 294 | break 295 | if self._defined.get(found_name, False): 296 | out(indent, prefix, found_name, postfix) 297 | elif hasattr(self, "app"): 298 | return 299 | else: 300 | # a forward / circular declaration happens 301 | notice = "" 302 | try: 303 | representation = repr(p_value) 304 | except Exception: 305 | import traceback 306 | traceback.print_exc(file=sys.stderr) 307 | return 308 | real_value = cleanup(representation) 309 | if found_name: 310 | if found_name == as_name: 311 | notice = " # (!) real value is %r" % real_value 312 | real_value = "None" 313 | else: 314 | notice = " # (!) forward: %s, real value is %r" % (found_name, real_value) 315 | if SANE_REPR_RE.match(real_value): 316 | out(indent, prefix, real_value, postfix, notice) 317 | else: 318 | if not found_name: 319 | notice = " # (!) real value is %r" % real_value 320 | out(indent, prefix, "None", postfix, notice) 321 | 322 | def get_ret_type(self, attr): 323 | """ 324 | Returns a return type string as given by T_RETURN in tokens, or None 325 | """ 326 | if attr: 327 | ret_type = RET_TYPE.get(attr, None) 328 | if ret_type: 329 | return ret_type 330 | thing = getattr(self.module, attr, None) 331 | if thing: 332 | if not isinstance(thing, type) and is_callable(thing): # a function 333 | return None # TODO: maybe divinate a return type; see pygame.mixer.Channel 334 | return attr 335 | # adds no noticeable slowdown, I did measure. dch. 
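        # Fall back to the imported modules: a class found there becomes a qualified constructor
        # call, a plain attribute becomes a qualified reference, and bare callables are given up on;
        # results are memoized in ret_type_cache keyed by (imported_module_name, attr).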
336 | for im_name, im_module in self.imported_modules.items(): 337 | cache_key = (im_name, attr) 338 | cached = self.ret_type_cache.get(cache_key, None) 339 | if cached: 340 | return cached 341 | ret_type = getattr(im_module, attr, None) 342 | if ret_type: 343 | if isinstance(ret_type, type): 344 | # detect a constructor 345 | constr_args = detect_constructor(ret_type) 346 | if constr_args is None: 347 | constr_args = "*(), **{}" # a silly catch-all constructor 348 | reference = "%s(%s)" % (attr, constr_args) 349 | elif is_callable(ret_type): # a function, classes are ruled out above 350 | return None 351 | else: 352 | reference = attr 353 | if im_name: 354 | result = "%s.%s" % (im_name, reference) 355 | else: # built-in 356 | result = reference 357 | self.ret_type_cache[cache_key] = result 358 | return result 359 | # TODO: handle things like "[a, b,..] and (foo,..)" 360 | return None 361 | 362 | 363 | SIG_DOC_NOTE = "restored from __doc__" 364 | SIG_DOC_UNRELIABLY = "NOTE: unreliably restored from __doc__ " 365 | 366 | def restore_by_docstring(self, signature_string, class_name, deco=None, ret_hint=None): 367 | """ 368 | @param signature_string: parameter list extracted from the doc string. 369 | @param class_name: name of the containing class, or None 370 | @param deco: decorator to use 371 | @param ret_hint: return type hint, if available 372 | @return (reconstructed_spec, return_type, note) or (None, _, _) if failed. 373 | """ 374 | action("restoring func %r of class %r", signature_string, class_name) 375 | # parse 376 | parsing_failed = False 377 | ret_type = None 378 | try: 379 | # strict parsing 380 | tokens = paramSeqAndRest.parseString(signature_string, True) 381 | ret_name = None 382 | if tokens: 383 | ret_t = tokens[-1] 384 | if ret_t[0] is T_RETURN: 385 | ret_name = ret_t[1] 386 | ret_type = self.get_ret_type(ret_name) or self.get_ret_type(ret_hint) 387 | except ParseException: 388 | # it did not parse completely; scavenge what we can 389 | parsing_failed = True 390 | tokens = [] 391 | try: 392 | # most unrestrictive parsing 393 | tokens = paramSeq.parseString(signature_string, False) 394 | except ParseException: 395 | pass 396 | # 397 | seq = transform_seq(tokens) 398 | 399 | # add safe defaults for unparsed 400 | if parsing_failed: 401 | doc_node = self.SIG_DOC_UNRELIABLY 402 | starred = None 403 | double_starred = None 404 | for one in seq: 405 | if type(one) is str: 406 | if one.startswith("**"): 407 | double_starred = one 408 | elif one.startswith("*"): 409 | starred = one 410 | if not starred: 411 | seq.append("*args") 412 | if not double_starred: 413 | seq.append("**kwargs") 414 | else: 415 | doc_node = self.SIG_DOC_NOTE 416 | 417 | # add 'self' if needed YYY 418 | if class_name and (not seq or seq[0] != 'self'): 419 | first_param = propose_first_param(deco) 420 | if first_param: 421 | seq.insert(0, first_param) 422 | seq = make_names_unique(seq) 423 | return (seq, ret_type, doc_node) 424 | 425 | def parse_func_doc(self, func_doc, func_id, func_name, class_name, deco=None, sip_generated=False): 426 | """ 427 | @param func_doc: __doc__ of the function. 428 | @param func_id: name to look for as identifier of the function in docstring 429 | @param func_name: name of the function. 430 | @param class_name: name of the containing class, or None 431 | @param deco: decorator to use 432 | @return (reconstructed_spec, return_literal, note) or (None, _, _) if failed. 
433 | """ 434 | if sip_generated: 435 | overloads = [] 436 | for part in func_doc.split('\n'): 437 | signature = func_id + '(' 438 | i = part.find(signature) 439 | if i >= 0: 440 | overloads.append(part[i + len(signature):]) 441 | if len(overloads) > 1: 442 | docstring_results = [self.restore_by_docstring(overload, class_name, deco) for overload in overloads] 443 | ret_types = [] 444 | for result in docstring_results: 445 | rt = result[1] 446 | if rt and rt not in ret_types: 447 | ret_types.append(rt) 448 | if ret_types: 449 | ret_literal = " or ".join(ret_types) 450 | else: 451 | ret_literal = None 452 | param_lists = [result[0] for result in docstring_results] 453 | spec = build_signature(func_name, restore_parameters_for_overloads(param_lists)) 454 | return (spec, ret_literal, "restored from __doc__ with multiple overloads") 455 | 456 | # find the first thing to look like a definition 457 | prefix_re = re.compile("\s*(?:(\w+)[ \\t]+)?" + func_id + "\s*\(") # "foo(..." or "int foo(..." 458 | match = prefix_re.search(func_doc) # Note: this and previous line may consume up to 35% of time 459 | # parse the part that looks right 460 | if match: 461 | ret_hint = match.group(1) 462 | params, ret_literal, doc_note = self.restore_by_docstring(func_doc[match.end():], class_name, deco, ret_hint) 463 | spec = func_name + flatten(params) 464 | return (spec, ret_literal, doc_note) 465 | else: 466 | return (None, None, None) 467 | 468 | 469 | def is_predefined_builtin(self, module_name, class_name, func_name): 470 | return self.doing_builtins and module_name == BUILTIN_MOD_NAME and ( 471 | class_name, func_name) in PREDEFINED_BUILTIN_SIGS 472 | 473 | 474 | def redo_function(self, out, p_func, p_name, indent, p_class=None, p_modname=None, classname=None, seen=None): 475 | """ 476 | Restore function argument list as best we can. 477 | @param out output function of a Buf 478 | @param p_func function or method object 479 | @param p_name function name as known to owner 480 | @param indent indentation level 481 | @param p_class the class that contains this function as a method 482 | @param p_modname module name 483 | @param seen {id(func): name} map of functions already seen in the same namespace; 484 | id() because *some* functions are unhashable (eg _elementtree.Comment in py2.7) 485 | """ 486 | action("redoing func %r of class %r", p_name, p_class) 487 | if seen is not None: 488 | other_func = seen.get(id(p_func), None) 489 | if other_func and getattr(other_func, "__doc__", None) is getattr(p_func, "__doc__", None): 490 | # _bisect.bisect == _bisect.bisect_right in py31, but docs differ 491 | out(indent, p_name, " = ", seen[id(p_func)]) 492 | out(indent, "") 493 | return 494 | else: 495 | seen[id(p_func)] = p_name 496 | # real work 497 | if classname is None: 498 | classname = p_class and p_class.__name__ or None 499 | if p_class and hasattr(p_class, '__mro__'): 500 | sip_generated = [base_t for base_t in p_class.__mro__ if 'sip.simplewrapper' in str(base_t)] 501 | else: 502 | sip_generated = False 503 | deco = None 504 | deco_comment = "" 505 | mod_class_method_tuple = (p_modname, classname, p_name) 506 | ret_literal = None 507 | is_init = False 508 | # any decorators? 
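        # Decorator detection order: the KNOWN_DECORATORS table is consulted for builtins first,
        # then classmethod/staticmethod descriptors are read off the class __dict__, and __new__
        # is always emitted as a staticmethod.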
509 | action("redoing decos of func %r of class %r", p_name, p_class) 510 | if self.doing_builtins and p_modname == BUILTIN_MOD_NAME: 511 | deco = KNOWN_DECORATORS.get((classname, p_name), None) 512 | if deco: 513 | deco_comment = " # known case" 514 | elif p_class and p_name in p_class.__dict__: 515 | # detect native methods declared with METH_CLASS flag 516 | descriptor = p_class.__dict__[p_name] 517 | if p_name != "__new__" and type(descriptor).__name__.startswith('classmethod'): 518 | # 'classmethod_descriptor' in Python 2.x and 3.x, 'classmethod' in Jython 519 | deco = "classmethod" 520 | elif type(p_func).__name__.startswith('staticmethod'): 521 | deco = "staticmethod" 522 | if p_name == "__new__": 523 | deco = "staticmethod" 524 | deco_comment = " # known case of __new__" 525 | 526 | action("redoing innards of func %r of class %r", p_name, p_class) 527 | if deco and HAS_DECORATORS: 528 | out(indent, "@", deco, deco_comment) 529 | if inspect and inspect.isfunction(p_func): 530 | out(indent, "def ", p_name, restore_by_inspect(p_func), ": # reliably restored by inspect", ) 531 | out_doc_attr(out, p_func, indent + 1, p_class) 532 | elif self.is_predefined_builtin(*mod_class_method_tuple): 533 | spec, sig_note = restore_predefined_builtin(classname, p_name) 534 | out(indent, "def ", spec, ": # ", sig_note) 535 | out_doc_attr(out, p_func, indent + 1, p_class) 536 | elif sys.platform == 'cli' and is_clr_type(p_class): 537 | is_static, spec, sig_note = restore_clr(p_name, p_class) 538 | if is_static: 539 | out(indent, "@staticmethod") 540 | if not spec: return 541 | if sig_note: 542 | out(indent, "def ", spec, ": #", sig_note) 543 | else: 544 | out(indent, "def ", spec, ":") 545 | if not p_name in ['__gt__', '__ge__', '__lt__', '__le__', '__ne__', '__reduce_ex__', '__str__']: 546 | out_doc_attr(out, p_func, indent + 1, p_class) 547 | elif mod_class_method_tuple in PREDEFINED_MOD_CLASS_SIGS: 548 | sig, ret_literal = PREDEFINED_MOD_CLASS_SIGS[mod_class_method_tuple] 549 | if classname: 550 | ofwhat = "%s.%s.%s" % mod_class_method_tuple 551 | else: 552 | ofwhat = "%s.%s" % (p_modname, p_name) 553 | out(indent, "def ", p_name, sig, ": # known case of ", ofwhat) 554 | out_doc_attr(out, p_func, indent + 1, p_class) 555 | else: 556 | # __doc__ is our best source of arglist 557 | sig_note = "real signature unknown" 558 | spec = "" 559 | is_init = (p_name == "__init__" and p_class is not None) 560 | funcdoc = None 561 | if is_init and hasattr(p_class, "__doc__"): 562 | if hasattr(p_func, "__doc__"): 563 | funcdoc = p_func.__doc__ 564 | if funcdoc == object.__init__.__doc__: 565 | funcdoc = p_class.__doc__ 566 | elif hasattr(p_func, "__doc__"): 567 | funcdoc = p_func.__doc__ 568 | sig_restored = False 569 | action("parsing doc of func %r of class %r", p_name, p_class) 570 | if isinstance(funcdoc, STR_TYPES): 571 | (spec, ret_literal, more_notes) = self.parse_func_doc(funcdoc, p_name, p_name, classname, deco, 572 | sip_generated) 573 | if spec is None and p_name == '__init__' and classname: 574 | (spec, ret_literal, more_notes) = self.parse_func_doc(funcdoc, classname, p_name, classname, deco, 575 | sip_generated) 576 | sig_restored = spec is not None 577 | if more_notes: 578 | if sig_note: 579 | sig_note += "; " 580 | sig_note += more_notes 581 | if not sig_restored: 582 | # use an allow-all declaration 583 | decl = [] 584 | if p_class: 585 | first_param = propose_first_param(deco) 586 | if first_param: 587 | decl.append(first_param) 588 | decl.append("*args") 589 | decl.append("**kwargs") 590 | spec 
= p_name + "(" + ", ".join(decl) + ")" 591 | out(indent, "def ", spec, ": # ", sig_note) 592 | # to reduce size of stubs, don't output same docstring twice for class and its __init__ method 593 | if not is_init or funcdoc != p_class.__doc__: 594 | out_docstring(out, funcdoc, indent + 1) 595 | # body 596 | if ret_literal and not is_init: 597 | out(indent + 1, "return ", ret_literal) 598 | else: 599 | out(indent + 1, "pass") 600 | if deco and not HAS_DECORATORS: 601 | out(indent, p_name, " = ", deco, "(", p_name, ")", deco_comment) 602 | out(0, "") # empty line after each item 603 | 604 | 605 | def redo_class(self, out, p_class, p_name, indent, p_modname=None, seen=None, inspect_dir=False): 606 | """ 607 | Restores a class definition. 608 | @param out output function of a relevant buf 609 | @param p_class the class object 610 | @param p_name class name as known to owner 611 | @param indent indentation level 612 | @param p_modname name of module 613 | @param seen {class: name} map of classes already seen in the same namespace 614 | """ 615 | action("redoing class %r of module %r", p_name, p_modname) 616 | if seen is not None: 617 | if p_class in seen: 618 | out(indent, p_name, " = ", seen[p_class]) 619 | out(indent, "") 620 | return 621 | else: 622 | seen[p_class] = p_name 623 | bases = get_bases(p_class) 624 | base_def = "" 625 | skipped_bases = [] 626 | if bases: 627 | skip_qualifiers = [p_modname, BUILTIN_MOD_NAME, 'exceptions'] 628 | skip_qualifiers.extend(KNOWN_FAKE_REEXPORTERS.get(p_modname, ())) 629 | bases_list = [] # what we'll render in the class decl 630 | for base in bases: 631 | if [1 for (cls, mdl) in KNOWN_FAKE_BASES if cls == base and mdl != self.module]: 632 | # our base is a wrapper and our module is not its defining module 633 | skipped_bases.append(str(base)) 634 | continue 635 | # somehow import every base class 636 | base_name = base.__name__ 637 | qual_module_name = qualifier_of(base, skip_qualifiers) 638 | got_existing_import = False 639 | if qual_module_name: 640 | if qual_module_name in self.used_imports: 641 | import_list = self.used_imports[qual_module_name] 642 | if base in import_list: 643 | bases_list.append(base_name) # unqualified: already set to import 644 | got_existing_import = True 645 | if not got_existing_import: 646 | mangled_qualifier = "__" + qual_module_name.replace('.', '_') # foo.bar -> __foo_bar 647 | bases_list.append(mangled_qualifier + "." 
+ base_name) 648 | self.hidden_imports[qual_module_name] = mangled_qualifier 649 | else: 650 | bases_list.append(base_name) 651 | base_def = "(" + ", ".join(bases_list) + ")" 652 | 653 | if self.split_modules: 654 | for base in bases_list: 655 | local_import = self.create_local_import(base) 656 | if local_import: 657 | out(indent, local_import) 658 | out(indent, "class ", p_name, base_def, ":", 659 | skipped_bases and " # skipped bases: " + ", ".join(skipped_bases) or "") 660 | out_doc_attr(out, p_class, indent + 1) 661 | # inner parts 662 | methods = {} 663 | properties = {} 664 | others = {} 665 | we_are_the_base_class = p_modname == BUILTIN_MOD_NAME and p_name == "object" 666 | field_source = {} 667 | try: 668 | if hasattr(p_class, "__dict__") and not inspect_dir: 669 | field_source = p_class.__dict__ 670 | field_keys = field_source.keys() # Jython 2.5.1 _codecs fail here 671 | else: 672 | field_keys = dir(p_class) # this includes unwanted inherited methods, but no dict + inheritance is rare 673 | except: 674 | field_keys = () 675 | for item_name in field_keys: 676 | if item_name in ("__doc__", "__module__"): 677 | if we_are_the_base_class: 678 | item = "" # must be declared in base types 679 | else: 680 | continue # in all other cases must be skipped 681 | elif keyword.iskeyword(item_name): # for example, PyQt4 contains definitions of methods named 'exec' 682 | continue 683 | else: 684 | try: 685 | item = getattr(p_class, item_name) # let getters do the magic 686 | # item = field_source.get(item_name) # have it raw 687 | # if item is None: 688 | # continue 689 | except AttributeError: 690 | item = field_source.get(item_name) # have it raw 691 | if item is None: 692 | continue 693 | except Exception: 694 | continue 695 | if is_callable(item) and not isinstance(item, type): 696 | methods[item_name] = item 697 | elif is_property(item): 698 | properties[item_name] = item 699 | else: 700 | others[item_name] = item 701 | # 702 | if we_are_the_base_class: 703 | others["__dict__"] = {} # force-feed it, for __dict__ does not contain a reference to itself :) 704 | # add fake __init__s to have the right sig 705 | if p_class in FAKE_BUILTIN_INITS: 706 | methods["__init__"] = self.fake_builtin_init 707 | note("Faking init of %s", p_name) 708 | elif '__init__' not in methods: 709 | init_method = getattr(p_class, '__init__', None) 710 | if init_method: 711 | methods['__init__'] = init_method 712 | 713 | # 714 | seen_funcs = {} 715 | for item_name in sorted_no_case(methods.keys()): 716 | item = methods[item_name] 717 | try: 718 | self.redo_function(out, item, item_name, indent + 1, p_class, p_modname, classname=p_name, seen=seen_funcs) 719 | except: 720 | handle_error_func(item_name, out) 721 | # 722 | known_props = KNOWN_PROPS.get(p_modname, {}) 723 | a_setter = "lambda self, v: None" 724 | a_deleter = "lambda self: None" 725 | for item_name in sorted_no_case(properties.keys()): 726 | item = properties[item_name] 727 | prop_docstring = getattr(item, '__doc__', None) 728 | prop_key = (p_name, item_name) 729 | if prop_key in known_props: 730 | prop_descr = known_props.get(prop_key, None) 731 | if prop_descr is None: 732 | continue # explicitly omitted 733 | acc_line, getter_and_type = prop_descr 734 | if getter_and_type: 735 | getter, prop_type = getter_and_type 736 | else: 737 | getter, prop_type = None, None 738 | out(indent + 1, item_name, 739 | " = property(", format_accessors(acc_line, getter, a_setter, a_deleter), ")" 740 | ) 741 | if prop_type: 742 | if prop_docstring: 743 | out(indent + 1, 
'"""', prop_docstring) 744 | out(0, "") 745 | out(indent + 1, ':type: ', prop_type) 746 | out(indent + 1, '"""') 747 | else: 748 | out(indent + 1, '""":type: ', prop_type, '"""') 749 | out(0, "") 750 | else: 751 | out(indent + 1, item_name, " = property(lambda self: object(), lambda self, v: None, lambda self: None) # default") 752 | if prop_docstring: 753 | out(indent + 1, '"""', prop_docstring, '"""') 754 | out(0, "") 755 | if properties: 756 | out(0, "") # empty line after the block 757 | # 758 | for item_name in sorted_no_case(others.keys()): 759 | item = others[item_name] 760 | self.fmt_value(out, item, indent + 1, prefix=item_name + " = ") 761 | if p_name == "object": 762 | out(indent + 1, "__module__ = ''") 763 | if others: 764 | out(0, "") # empty line after the block 765 | # 766 | if not methods and not properties and not others: 767 | out(indent + 1, "pass") 768 | 769 | 770 | 771 | def redo_simple_header(self, p_name): 772 | """Puts boilerplate code on the top""" 773 | out = self.header_buf.out # 1st class methods rule :) 774 | out(0, "# encoding: %s" % OUT_ENCODING) # line 1 775 | # NOTE: maybe encoding should be selectable 776 | if hasattr(self.module, "__name__"): 777 | self_name = self.module.__name__ 778 | if self_name != p_name: 779 | mod_name = " calls itself " + self_name 780 | else: 781 | mod_name = "" 782 | else: 783 | mod_name = " does not know its name" 784 | out(0, "# module ", p_name, mod_name) # line 2 785 | 786 | BUILT_IN_HEADER = "(built-in)" 787 | if self.mod_filename: 788 | filename = self.mod_filename 789 | elif p_name in sys.builtin_module_names: 790 | filename = BUILT_IN_HEADER 791 | else: 792 | filename = getattr(self.module, "__file__", BUILT_IN_HEADER) 793 | 794 | out(0, "# from %s" % filename) # line 3 795 | out(0, "# by generator %s" % VERSION) # line 4 796 | if p_name == BUILTIN_MOD_NAME and version[0] == 2 and version[1] >= 6: 797 | out(0, "from __future__ import print_function") 798 | out_doc_attr(out, self.module, 0) 799 | 800 | 801 | def redo_imports(self): 802 | module_type = type(sys) 803 | for item_name in self.module.__dict__.keys(): 804 | try: 805 | item = self.module.__dict__[item_name] 806 | except: 807 | continue 808 | if type(item) is module_type: # not isinstance, py2.7 + PyQt4.QtCore on windows have a bug here 809 | self.imported_modules[item_name] = item 810 | self.add_import_header_if_needed() 811 | ref_notice = getattr(item, "__file__", str(item)) 812 | if hasattr(item, "__name__"): 813 | self.imports_buf.out(0, "import ", item.__name__, " as ", item_name, " # ", ref_notice) 814 | else: 815 | self.imports_buf.out(0, item_name, " = None # ??? name unknown; ", ref_notice) 816 | 817 | def add_import_header_if_needed(self): 818 | if self.imports_buf.isEmpty(): 819 | self.imports_buf.out(0, "") 820 | self.imports_buf.out(0, "# imports") 821 | 822 | 823 | def redo(self, p_name, inspect_dir): 824 | """ 825 | Restores module declarations. 826 | Intended for built-in modules and thus does not handle import statements. 
827 | @param p_name name of module 828 | """ 829 | action("redoing header of module %r %r", p_name, str(self.module)) 830 | 831 | if "pyqt4" in p_name.lower(): # qt4 specific patch 832 | self._initializeQApp4() 833 | elif "pyqt5" in p_name.lower(): # qt5 specific patch 834 | self._initializeQApp5() 835 | 836 | self.redo_simple_header(p_name) 837 | 838 | # find whatever other self.imported_modules the module knows; effectively these are imports 839 | action("redoing imports of module %r %r", p_name, str(self.module)) 840 | try: 841 | self.redo_imports() 842 | except: 843 | pass 844 | 845 | action("redoing innards of module %r %r", p_name, str(self.module)) 846 | 847 | module_type = type(sys) 848 | # group what we have into buckets 849 | vars_simple = {} 850 | vars_complex = {} 851 | funcs = {} 852 | classes = {} 853 | module_dict = self.module.__dict__ 854 | if inspect_dir: 855 | module_dict = dir(self.module) 856 | for item_name in module_dict: 857 | note("looking at %s", item_name) 858 | if item_name in ( 859 | "__dict__", "__doc__", "__module__", "__file__", "__name__", "__builtins__", "__package__"): 860 | continue # handled otherwise 861 | try: 862 | item = getattr(self.module, item_name) # let getters do the magic 863 | except AttributeError: 864 | if not item_name in self.module.__dict__: continue 865 | item = self.module.__dict__[item_name] # have it raw 866 | # check if it has percolated from an imported module 867 | except NotImplementedError: 868 | if not item_name in self.module.__dict__: continue 869 | item = self.module.__dict__[item_name] # have it raw 870 | 871 | # unless we're adamantly positive that the name was imported, we assume it is defined here 872 | mod_name = None # module from which p_name might have been imported 873 | # IronPython has non-trivial reexports in System module, but not in others: 874 | skip_modname = sys.platform == "cli" and p_name != "System" 875 | surely_not_imported_mods = KNOWN_FAKE_REEXPORTERS.get(p_name, ()) 876 | ## can't figure weirdness in some modules, assume no reexports: 877 | #skip_modname = skip_modname or p_name in self.KNOWN_FAKE_REEXPORTERS 878 | if not skip_modname: 879 | try: 880 | mod_name = getattr(item, '__module__', None) 881 | except: 882 | pass 883 | # we assume that module foo.bar never imports foo; foo may import foo.bar. (see pygame and pygame.rect) 884 | maybe_import_mod_name = mod_name or "" 885 | import_is_from_top = len(p_name) > len(maybe_import_mod_name) and p_name.startswith(maybe_import_mod_name) 886 | note("mod_name = %s, prospective = %s, from top = %s", mod_name, maybe_import_mod_name, import_is_from_top) 887 | want_to_import = False 888 | if (mod_name 889 | and mod_name != BUILTIN_MOD_NAME 890 | and mod_name != p_name 891 | and mod_name not in surely_not_imported_mods 892 | and not import_is_from_top 893 | ): 894 | # import looks valid, but maybe it's a .py file? we're certain not to import from .py 895 | # e.g. this rules out _collections import collections and builtins import site. 
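                # The try-block below imports the candidate module and inspects its __file__:
                # only names whose defining module is binary (no .py/.pyc source) are queued in
                # used_imports for a real import; everything else gets declared in this stub.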
896 | try: 897 | imported = __import__(mod_name) # ok to repeat, Python caches for us 898 | if imported: 899 | qualifiers = mod_name.split(".")[1:] 900 | for qual in qualifiers: 901 | imported = getattr(imported, qual, None) 902 | if not imported: 903 | break 904 | imported_path = (getattr(imported, '__file__', False) or "").lower() 905 | want_to_import = not (imported_path.endswith('.py') or imported_path.endswith('.pyc')) 906 | imported_name = getattr(imported, "__name__", None) 907 | if imported_name == p_name: 908 | want_to_import = False 909 | note("path of %r is %r, want? %s", mod_name, imported_path, want_to_import) 910 | except ImportError: 911 | want_to_import = False 912 | # NOTE: if we fail to import, we define 'imported' names here lest we lose them at all 913 | if want_to_import: 914 | import_list = self.used_imports[mod_name] 915 | if item_name not in import_list: 916 | import_list.append(item_name) 917 | if not want_to_import: 918 | if isinstance(item, type) or type(item).__name__ == 'classobj': 919 | classes[item_name] = item 920 | elif is_callable(item): # some classes are callable, check them before functions 921 | funcs[item_name] = item 922 | elif isinstance(item, module_type): 923 | continue # self.imported_modules handled above already 924 | else: 925 | if isinstance(item, SIMPLEST_TYPES): 926 | vars_simple[item_name] = item 927 | else: 928 | vars_complex[item_name] = item 929 | # sort and output every bucket 930 | action("outputting innards of module %r %r", p_name, str(self.module)) 931 | # 932 | omitted_names = OMIT_NAME_IN_MODULE.get(p_name, []) 933 | if vars_simple: 934 | out = self.functions_buf.out 935 | prefix = "" # try to group variables by common prefix 936 | PREFIX_LEN = 2 # default prefix length if we can't guess better 937 | out(0, "# Variables with simple values") 938 | for item_name in sorted_no_case(vars_simple.keys()): 939 | if item_name in omitted_names: 940 | out(0, "# definition of " + item_name + " omitted") 941 | continue 942 | item = vars_simple[item_name] 943 | # track the prefix 944 | if len(item_name) >= PREFIX_LEN: 945 | prefix_pos = string.rfind(item_name, "_") # most prefixes end in an underscore 946 | if prefix_pos < 1: 947 | prefix_pos = PREFIX_LEN 948 | beg = item_name[0:prefix_pos] 949 | if prefix != beg: 950 | out(0, "") # space out from other prefix 951 | prefix = beg 952 | else: 953 | prefix = "" 954 | # output 955 | replacement = REPLACE_MODULE_VALUES.get((p_name, item_name), None) 956 | if replacement is not None: 957 | out(0, item_name, " = ", replacement, " # real value of type ", str(type(item)), " replaced") 958 | elif is_skipped_in_module(p_name, item_name): 959 | t_item = type(item) 960 | out(0, item_name, " = ", self.invent_initializer(t_item), " # real value of type ", str(t_item), 961 | " skipped") 962 | else: 963 | self.fmt_value(out, item, 0, prefix=item_name + " = ") 964 | self._defined[item_name] = True 965 | out(0, "") # empty line after vars 966 | # 967 | if funcs: 968 | out = self.functions_buf.out 969 | out(0, "# functions") 970 | out(0, "") 971 | seen_funcs = {} 972 | for item_name in sorted_no_case(funcs.keys()): 973 | if item_name in omitted_names: 974 | out(0, "# definition of ", item_name, " omitted") 975 | continue 976 | item = funcs[item_name] 977 | try: 978 | self.redo_function(out, item, item_name, 0, p_modname=p_name, seen=seen_funcs) 979 | except: 980 | handle_error_func(item_name, out) 981 | else: 982 | self.functions_buf.out(0, "# no functions") 983 | # 984 | if classes: 985 | self.classes_buf.out(0, 
"# classes") 986 | self.classes_buf.out(0, "") 987 | seen_classes = {} 988 | # sort classes so that inheritance order is preserved 989 | cls_list = [] # items are (class_name, mro_tuple) 990 | for cls_name in sorted_no_case(classes.keys()): 991 | cls = classes[cls_name] 992 | ins_index = len(cls_list) 993 | for i in range(ins_index): 994 | maybe_child_bases = cls_list[i][1] 995 | if cls in maybe_child_bases: 996 | ins_index = i # we could not go farther than current ins_index 997 | break # ...and need not go fartehr than first known child 998 | cls_list.insert(ins_index, (cls_name, get_mro(cls))) 999 | self.split_modules = self.mod_filename and len(cls_list) >= 30 1000 | for item_name in [cls_item[0] for cls_item in cls_list]: 1001 | buf = ClassBuf(item_name, self) 1002 | self.classes_buffs.append(buf) 1003 | out = buf.out 1004 | if item_name in omitted_names: 1005 | out(0, "# definition of ", item_name, " omitted") 1006 | continue 1007 | item = classes[item_name] 1008 | self.redo_class(out, item, item_name, 0, p_modname=p_name, seen=seen_classes, inspect_dir=inspect_dir) 1009 | self._defined[item_name] = True 1010 | out(0, "") # empty line after each item 1011 | 1012 | if self.doing_builtins and p_name == BUILTIN_MOD_NAME and version[0] < 3: 1013 | # classobj still supported 1014 | txt = classobj_txt 1015 | self.classes_buf.out(0, txt) 1016 | 1017 | if self.doing_builtins and p_name == BUILTIN_MOD_NAME: 1018 | txt = create_generator() 1019 | self.classes_buf.out(0, txt) 1020 | txt = create_async_generator() 1021 | self.classes_buf.out(0, txt) 1022 | txt = create_function() 1023 | self.classes_buf.out(0, txt) 1024 | txt = create_method() 1025 | self.classes_buf.out(0, txt) 1026 | txt = create_coroutine() 1027 | self.classes_buf.out(0, txt) 1028 | 1029 | # Fake 1030 | if version[0] >= 3 or (version[0] == 2 and version[1] >= 6): 1031 | namedtuple_text = create_named_tuple() 1032 | self.classes_buf.out(0, namedtuple_text) 1033 | else: 1034 | self.classes_buf.out(0, "# no classes") 1035 | # 1036 | if vars_complex: 1037 | out = self.footer_buf.out 1038 | out(0, "# variables with complex values") 1039 | out(0, "") 1040 | for item_name in sorted_no_case(vars_complex.keys()): 1041 | if item_name in omitted_names: 1042 | out(0, "# definition of " + item_name + " omitted") 1043 | continue 1044 | item = vars_complex[item_name] 1045 | if str(type(item)) == "": 1046 | continue # this is an IronPython submodule, we mustn't generate a reference for it in the base module 1047 | replacement = REPLACE_MODULE_VALUES.get((p_name, item_name), None) 1048 | if replacement is not None: 1049 | out(0, item_name + " = " + replacement + " # real value of type " + str(type(item)) + " replaced") 1050 | elif is_skipped_in_module(p_name, item_name): 1051 | t_item = type(item) 1052 | out(0, item_name + " = " + self.invent_initializer(t_item) + " # real value of type " + str( 1053 | t_item) + " skipped") 1054 | else: 1055 | self.fmt_value(out, item, 0, prefix=item_name + " = ", as_name=item_name) 1056 | self._defined[item_name] = True 1057 | out(0, "") # empty line after each item 1058 | values_to_add = ADD_VALUE_IN_MODULE.get(p_name, None) 1059 | if values_to_add: 1060 | self.footer_buf.out(0, "# intermittent names") 1061 | for value in values_to_add: 1062 | self.footer_buf.out(0, value) 1063 | # imports: last, because previous parts could alter used_imports or hidden_imports 1064 | self.output_import_froms() 1065 | if self.imports_buf.isEmpty(): 1066 | self.imports_buf.out(0, "# no imports") 1067 | 
self.imports_buf.out(0, "") # empty line after imports 1068 | 1069 | def output_import_froms(self): 1070 | """Mention all imported names known within the module, wrapping as per PEP.""" 1071 | out = self.imports_buf.out 1072 | if self.used_imports: 1073 | self.add_import_header_if_needed() 1074 | for mod_name in sorted_no_case(self.used_imports.keys()): 1075 | import_names = self.used_imports[mod_name] 1076 | if import_names: 1077 | self._defined[mod_name] = True 1078 | right_pos = 0 # tracks width of list to fold it at right margin 1079 | import_heading = "from % s import (" % mod_name 1080 | right_pos += len(import_heading) 1081 | names_pack = [import_heading] 1082 | indent_level = 0 1083 | import_names = list(import_names) 1084 | import_names.sort() 1085 | for n in import_names: 1086 | self._defined[n] = True 1087 | len_n = len(n) 1088 | if right_pos + len_n >= 78: 1089 | out(indent_level, *names_pack) 1090 | names_pack = [n, ", "] 1091 | if indent_level == 0: 1092 | indent_level = 1 # all but first line is indented 1093 | right_pos = self.indent_size + len_n + 2 1094 | else: 1095 | names_pack.append(n) 1096 | names_pack.append(", ") 1097 | right_pos += (len_n + 2) 1098 | # last line is... 1099 | if indent_level == 0: # one line 1100 | names_pack[0] = names_pack[0][:-1] # cut off lpar 1101 | names_pack[-1] = "" # cut last comma 1102 | else: # last line of multiline 1103 | names_pack[-1] = ")" # last comma -> rpar 1104 | out(indent_level, *names_pack) 1105 | 1106 | out(0, "") # empty line after group 1107 | 1108 | if self.hidden_imports: 1109 | self.add_import_header_if_needed() 1110 | for mod_name in sorted_no_case(self.hidden_imports.keys()): 1111 | out(0, 'import ', mod_name, ' as ', self.hidden_imports[mod_name]) 1112 | out(0, "") # empty line after group 1113 | 1114 | 1115 | def module_to_package_name(module_name): 1116 | return re.sub(r"(.*)\.py$", r"\1", module_name) 1117 | -------------------------------------------------------------------------------- /src/generator3/pycharm_generator_utils/util_methods.py: -------------------------------------------------------------------------------- 1 | from constants import * 2 | import keyword 3 | 4 | try: 5 | import inspect 6 | except ImportError: 7 | inspect = None 8 | 9 | def create_named_tuple(): #TODO: user-skeleton 10 | return """ 11 | class __namedtuple(tuple): 12 | '''A mock base class for named tuples.''' 13 | 14 | __slots__ = () 15 | _fields = () 16 | 17 | def __new__(cls, *args, **kwargs): 18 | 'Create a new instance of the named tuple.' 19 | return tuple.__new__(cls, *args) 20 | 21 | @classmethod 22 | def _make(cls, iterable, new=tuple.__new__, len=len): 23 | 'Make a new named tuple object from a sequence or iterable.' 24 | return new(cls, iterable) 25 | 26 | def __repr__(self): 27 | return '' 28 | 29 | def _asdict(self): 30 | 'Return a new dict which maps field types to their values.' 31 | return {} 32 | 33 | def _replace(self, **kwargs): 34 | 'Return a new named tuple object replacing specified fields with new values.' 
35 | return self 36 | 37 | def __getnewargs__(self): 38 | return tuple(self) 39 | """ 40 | 41 | def create_generator(): 42 | # Fake 43 | if version[0] < 3: 44 | next_name = "next" 45 | else: 46 | next_name = "__next__" 47 | txt = """ 48 | class __generator(object): 49 | '''A mock class representing the generator function type.''' 50 | def __init__(self): 51 | self.gi_code = None 52 | self.gi_frame = None 53 | self.gi_running = 0 54 | 55 | def __iter__(self): 56 | '''Defined to support iteration over container.''' 57 | pass 58 | 59 | def %s(self): 60 | '''Return the next item from the container.''' 61 | pass 62 | """ % (next_name,) 63 | if version[0] >= 3 or (version[0] == 2 and version[1] >= 5): 64 | txt += """ 65 | def close(self): 66 | '''Raises new GeneratorExit exception inside the generator to terminate the iteration.''' 67 | pass 68 | 69 | def send(self, value): 70 | '''Resumes the generator and "sends" a value that becomes the result of the current yield-expression.''' 71 | pass 72 | 73 | def throw(self, type, value=None, traceback=None): 74 | '''Used to raise an exception inside the generator.''' 75 | pass 76 | """ 77 | return txt 78 | 79 | def create_async_generator(): 80 | # Fake 81 | txt = """ 82 | class __asyncgenerator(object): 83 | '''A mock class representing the async generator function type.''' 84 | def __init__(self): 85 | '''Create an async generator object.''' 86 | self.__name__ = '' 87 | self.__qualname__ = '' 88 | self.ag_await = None 89 | self.ag_frame = None 90 | self.ag_running = False 91 | self.ag_code = None 92 | 93 | def __aiter__(self): 94 | '''Defined to support iteration over container.''' 95 | pass 96 | 97 | def __anext__(self): 98 | '''Returns an awaitable, that performs one asynchronous generator iteration when awaited.''' 99 | pass 100 | 101 | def aclose(self): 102 | '''Returns an awaitable, that throws a GeneratorExit exception into generator.''' 103 | pass 104 | 105 | def asend(self, value): 106 | '''Returns an awaitable, that pushes the value object in generator.''' 107 | pass 108 | 109 | def athrow(self, type, value=None, traceback=None): 110 | '''Returns an awaitable, that throws an exception into generator.''' 111 | pass 112 | """ 113 | return txt 114 | 115 | def create_function(): 116 | txt = """ 117 | class __function(object): 118 | '''A mock class representing function type.''' 119 | 120 | def __init__(self): 121 | self.__name__ = '' 122 | self.__doc__ = '' 123 | self.__dict__ = '' 124 | self.__module__ = '' 125 | """ 126 | if version[0] == 2: 127 | txt += """ 128 | self.func_defaults = {} 129 | self.func_globals = {} 130 | self.func_closure = None 131 | self.func_code = None 132 | self.func_name = '' 133 | self.func_doc = '' 134 | self.func_dict = '' 135 | """ 136 | if version[0] >= 3 or (version[0] == 2 and version[1] >= 6): 137 | txt += """ 138 | self.__defaults__ = {} 139 | self.__globals__ = {} 140 | self.__closure__ = None 141 | self.__code__ = None 142 | self.__name__ = '' 143 | """ 144 | if version[0] >= 3: 145 | txt += """ 146 | self.__annotations__ = {} 147 | self.__kwdefaults__ = {} 148 | """ 149 | if version[0] >= 3 and version[1] >= 3: 150 | txt += """ 151 | self.__qualname__ = '' 152 | """ 153 | return txt 154 | 155 | def create_method(): 156 | txt = """ 157 | class __method(object): 158 | '''A mock class representing method type.''' 159 | 160 | def __init__(self): 161 | """ 162 | if version[0] == 2: 163 | txt += """ 164 | self.im_class = None 165 | self.im_self = None 166 | self.im_func = None 167 | """ 168 | if version[0] >= 3 or 
(version[0] == 2 and version[1] >= 6): 169 | txt += """ 170 | self.__func__ = None 171 | self.__self__ = None 172 | """ 173 | return txt 174 | 175 | 176 | def create_coroutine(): 177 | if version[0] == 3 and version[1] >= 5: 178 | return """ 179 | class __coroutine(object): 180 | '''A mock class representing coroutine type.''' 181 | 182 | def __init__(self): 183 | self.__name__ = '' 184 | self.__qualname__ = '' 185 | self.cr_await = None 186 | self.cr_frame = None 187 | self.cr_running = False 188 | self.cr_code = None 189 | 190 | def __await__(self): 191 | return [] 192 | 193 | def close(self): 194 | pass 195 | 196 | def send(self, value): 197 | pass 198 | 199 | def throw(self, type, value=None, traceback=None): 200 | pass 201 | """ 202 | return "" 203 | 204 | 205 | def _searchbases(cls, accum): 206 | # logic copied from inspect.py 207 | if cls not in accum: 208 | accum.append(cls) 209 | for x in cls.__bases__: 210 | _searchbases(x, accum) 211 | 212 | 213 | def get_mro(a_class): 214 | # logic copied from inspect.py 215 | """Returns a tuple of MRO classes.""" 216 | if hasattr(a_class, "__mro__"): 217 | return a_class.__mro__ 218 | elif hasattr(a_class, "__bases__"): 219 | bases = [] 220 | _searchbases(a_class, bases) 221 | return tuple(bases) 222 | else: 223 | return tuple() 224 | 225 | 226 | def get_bases(a_class): # TODO: test for classes that don't fit this scheme 227 | """Returns a sequence of class's bases.""" 228 | if hasattr(a_class, "__bases__"): 229 | return a_class.__bases__ 230 | else: 231 | return () 232 | 233 | 234 | def is_callable(x): 235 | return hasattr(x, '__call__') 236 | 237 | 238 | def sorted_no_case(p_array): 239 | """Sort an array case insensitevely, returns a sorted copy""" 240 | p_array = list(p_array) 241 | p_array = sorted(p_array, key=lambda x: x.upper()) 242 | return p_array 243 | 244 | 245 | def cleanup(value): 246 | result = [] 247 | prev = i = 0 248 | length = len(value) 249 | last_ascii = chr(127) 250 | while i < length: 251 | char = value[i] 252 | replacement = None 253 | if char == '\n': 254 | replacement = '\\n' 255 | elif char == '\r': 256 | replacement = '\\r' 257 | elif char < ' ' or char > last_ascii: 258 | replacement = '?' 
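`get_mro` above falls back to a recursive `__bases__` walk when a class does not expose `__mro__` (old-style or some CLR-backed types). That fallback yields a depth-first pre-order of the bases, which is enough for the generator's "is this a base of that" checks but is not the C3 linearization. A small self-contained comparison, with the two helpers copied from above:

```python
# Copies of _searchbases()/get_mro() from above, plus a diamond hierarchy to
# show that the __bases__ fallback is a DFS pre-order, not the C3 MRO.
def _searchbases(cls, accum):
    if cls not in accum:
        accum.append(cls)
        for base in cls.__bases__:
            _searchbases(base, accum)

def get_mro(a_class):
    if hasattr(a_class, "__mro__"):
        return a_class.__mro__        # usual case for new-style classes
    elif hasattr(a_class, "__bases__"):
        bases = []
        _searchbases(a_class, bases)  # fallback: DFS over __bases__
        return tuple(bases)
    return tuple()

class A(object): pass
class B(A): pass
class C(A): pass
class D(B, C): pass

print([c.__name__ for c in D.__mro__])   # ['D', 'B', 'C', 'A', 'object']
dfs = []
_searchbases(D, dfs)
print([c.__name__ for c in dfs])         # ['D', 'B', 'A', 'object', 'C']
```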
# NOTE: such chars are rare; long swaths could be precessed differently 259 | if replacement: 260 | result.append(value[prev:i]) 261 | result.append(replacement) 262 | i += 1 263 | return "".join(result) 264 | 265 | 266 | _prop_types = [type(property())] 267 | #noinspection PyBroadException 268 | try: 269 | _prop_types.append(types.GetSetDescriptorType) 270 | except: 271 | pass 272 | 273 | #noinspection PyBroadException 274 | try: 275 | _prop_types.append(types.MemberDescriptorType) 276 | except: 277 | pass 278 | 279 | _prop_types = tuple(_prop_types) 280 | 281 | 282 | def is_property(x): 283 | return isinstance(x, _prop_types) 284 | 285 | 286 | def sanitize_ident(x, is_clr=False): 287 | """Takes an identifier and returns it sanitized""" 288 | if x in ("class", "object", "def", "list", "tuple", "int", "float", "str", "unicode" "None"): 289 | return "p_" + x 290 | else: 291 | if is_clr: 292 | # it tends to have names like "int x", turn it to just x 293 | xs = x.split(" ") 294 | if len(xs) == 2: 295 | return sanitize_ident(xs[1]) 296 | return x.replace("-", "_").replace(" ", "_").replace(".", "_") # for things like "list-or-tuple" or "list or tuple" 297 | 298 | 299 | def reliable_repr(value): 300 | # some subclasses of built-in types (see PyGtk) may provide invalid __repr__ implementations, 301 | # so we need to sanitize the output 302 | if type(bool) == type and isinstance(value, bool): 303 | return repr(bool(value)) 304 | for num_type in NUM_TYPES: 305 | if isinstance(value, num_type): 306 | return repr(num_type(value)) 307 | return repr(value) 308 | 309 | 310 | def sanitize_value(p_value): 311 | """Returns p_value or its part if it represents a sane simple value, else returns 'None'""" 312 | if isinstance(p_value, STR_TYPES): 313 | match = SIMPLE_VALUE_RE.match(p_value) 314 | if match: 315 | return match.groups()[match.lastindex - 1] 316 | else: 317 | return 'None' 318 | elif isinstance(p_value, NUM_TYPES): 319 | return reliable_repr(p_value) 320 | elif p_value is None: 321 | return 'None' 322 | else: 323 | if hasattr(p_value, "__name__") and hasattr(p_value, "__module__") and p_value.__module__ == BUILTIN_MOD_NAME: 324 | return p_value.__name__ # float -> "float" 325 | else: 326 | return repr(repr(p_value)) # function -> "", etc 327 | 328 | 329 | def extract_alpha_prefix(p_string, default_prefix="some"): 330 | """Returns 'foo' for things like 'foo1' or 'foo2'; if prefix cannot be found, the default is returned""" 331 | match = NUM_IDENT_PATTERN.match(p_string) 332 | prefix = match and match.groups()[match.lastindex - 1] or None 333 | return prefix or default_prefix 334 | 335 | 336 | def report(msg, *data): 337 | """Say something at error level (stderr)""" 338 | sys.stderr.write(msg % data) 339 | sys.stderr.write("\n") 340 | 341 | 342 | def say(msg, *data): 343 | """Say something at info level (stdout)""" 344 | sys.stdout.write(msg % data) 345 | sys.stdout.write("\n") 346 | 347 | 348 | def transform_seq(results, toplevel=True): 349 | """Transforms a tree of ParseResults into a param spec string.""" 350 | is_clr = sys.platform == "cli" 351 | ret = [] # add here token to join 352 | for token in results: 353 | token_type = token[0] 354 | if token_type is T_SIMPLE: 355 | token_name = token[1] 356 | if len(token) == 3: # name with value 357 | if toplevel: 358 | ret.append(sanitize_ident(token_name, is_clr) + "=" + sanitize_value(token[2])) 359 | else: 360 | # smth like "a, (b1=1, b2=2)", make it "a, p_b" 361 | return ["p_" + results[0][1]] # NOTE: for each item of tuple, return the same name 
of its 1st item. 362 | elif token_name == TRIPLE_DOT: 363 | if toplevel and not has_item_starting_with(ret, "*"): 364 | ret.append("*more") 365 | else: 366 | # we're in a "foo, (bar1, bar2, ...)"; make it "foo, bar_tuple" 367 | return extract_alpha_prefix(results[0][1]) + "_tuple" 368 | else: # just name 369 | ret.append(sanitize_ident(token_name, is_clr)) 370 | elif token_type is T_NESTED: 371 | inner = transform_seq(token[1:], False) 372 | if len(inner) != 1: 373 | ret.append(inner) 374 | else: 375 | ret.append(inner[0]) # [foo] -> foo 376 | elif token_type is T_OPTIONAL: 377 | ret.extend(transform_optional_seq(token)) 378 | elif token_type is T_RETURN: 379 | pass # this is handled elsewhere 380 | else: 381 | raise Exception("This cannot be a token type: " + repr(token_type)) 382 | return ret 383 | 384 | 385 | def transform_optional_seq(results): 386 | """ 387 | Produces a string that describes the optional part of parameters. 388 | @param results must start from T_OPTIONAL. 389 | """ 390 | assert results[0] is T_OPTIONAL, "transform_optional_seq expects a T_OPTIONAL node, sees " + \ 391 | repr(results[0]) 392 | is_clr = sys.platform == "cli" 393 | ret = [] 394 | for token in results[1:]: 395 | token_type = token[0] 396 | if token_type is T_SIMPLE: 397 | token_name = token[1] 398 | if len(token) == 3: # name with value; little sense, but can happen in a deeply nested optional 399 | ret.append(sanitize_ident(token_name, is_clr) + "=" + sanitize_value(token[2])) 400 | elif token_name == '...': 401 | # we're in a "foo, [bar, ...]"; make it "foo, *bar" 402 | return ["*" + extract_alpha_prefix( 403 | results[1][1])] # we must return a seq; [1] is first simple, [1][1] is its name 404 | else: # just name 405 | ret.append(sanitize_ident(token_name, is_clr) + "=None") 406 | elif token_type is T_OPTIONAL: 407 | ret.extend(transform_optional_seq(token)) 408 | # maybe handle T_NESTED if such cases ever occur in real life 409 | # it can't be nested in a sane case, really 410 | return ret 411 | 412 | 413 | def flatten(seq): 414 | """Transforms tree lists like ['a', ['b', 'c'], 'd'] to strings like '(a, (b, c), d)', enclosing each tree level in parens.""" 415 | ret = [] 416 | for one in seq: 417 | if type(one) is list: 418 | ret.append(flatten(one)) 419 | else: 420 | ret.append(one) 421 | return "(" + ", ".join(ret) + ")" 422 | 423 | 424 | def make_names_unique(seq, name_map=None): 425 | """ 426 | Returns a copy of tree list seq where all clashing names are modified by numeric suffixes: 427 | ['a', 'b', 'a', 'b'] becomes ['a', 'b', 'a_1', 'b_1']. 428 | Each repeating name has its own counter in the name_map. 
429 | """ 430 | ret = [] 431 | if not name_map: 432 | name_map = {} 433 | for one in seq: 434 | if type(one) is list: 435 | ret.append(make_names_unique(one, name_map)) 436 | else: 437 | if keyword.iskeyword(one): 438 | one += "_" 439 | one_key = lstrip(one, "*") # starred parameters are unique sans stars 440 | if one_key in name_map: 441 | old_one = one_key 442 | one = one + "_" + str(name_map[old_one]) 443 | name_map[old_one] += 1 444 | else: 445 | name_map[one_key] = 1 446 | ret.append(one) 447 | return ret 448 | 449 | 450 | def has_item_starting_with(p_seq, p_start): 451 | for item in p_seq: 452 | if isinstance(item, STR_TYPES) and item.startswith(p_start): 453 | return True 454 | return False 455 | 456 | 457 | def out_docstring(out_func, docstring, indent): 458 | if not isinstance(docstring, str): return 459 | lines = docstring.strip().split("\n") 460 | if lines: 461 | if len(lines) == 1: 462 | out_func(indent, '""" ' + lines[0] + ' """') 463 | else: 464 | out_func(indent, '"""') 465 | for line in lines: 466 | try: 467 | out_func(indent, line) 468 | except UnicodeEncodeError: 469 | continue 470 | out_func(indent, '"""') 471 | 472 | def out_doc_attr(out_func, p_object, indent, p_class=None): 473 | the_doc = getattr(p_object, "__doc__", None) 474 | if the_doc: 475 | if p_class and the_doc == object.__init__.__doc__ and p_object is not object.__init__ and p_class.__doc__: 476 | the_doc = str(p_class.__doc__) # replace stock init's doc with class's; make it a certain string. 477 | the_doc += "\n# (copied from class doc)" 478 | out_docstring(out_func, the_doc, indent) 479 | else: 480 | out_func(indent, "# no doc") 481 | 482 | def is_skipped_in_module(p_module, p_value): 483 | """ 484 | Returns True if p_value's value must be skipped for module p_module. 485 | """ 486 | skip_list = SKIP_VALUE_IN_MODULE.get(p_module, []) 487 | if p_value in skip_list: 488 | return True 489 | skip_list = SKIP_VALUE_IN_MODULE.get("*", []) 490 | if p_value in skip_list: 491 | return True 492 | return False 493 | 494 | def restore_predefined_builtin(class_name, func_name): 495 | spec = func_name + PREDEFINED_BUILTIN_SIGS[(class_name, func_name)] 496 | note = "known special case of " + (class_name and class_name + "." or "") + func_name 497 | return (spec, note) 498 | 499 | def restore_by_inspect(p_func): 500 | """ 501 | Returns paramlist restored by inspect. 
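Taken together, `make_names_unique` and `flatten` (both shown above) turn a parsed parameter tree into the single spec string used for a skeleton signature: duplicate names get numeric suffixes, Python keywords get a trailing underscore, and nested tuples become nested parentheses. A hedged, self-contained demo with both helpers copied near-verbatim (the free-function `lstrip(one, "*")` is written as the string method here so the snippet runs on its own):

```python
# Near-verbatim copies of make_names_unique() and flatten() from above.
import keyword

def make_names_unique(seq, name_map=None):
    ret = []
    if not name_map:
        name_map = {}
    for one in seq:
        if type(one) is list:
            ret.append(make_names_unique(one, name_map))
        else:
            if keyword.iskeyword(one):
                one += "_"                    # 'lambda' -> 'lambda_'
            one_key = one.lstrip("*")         # starred params are unique sans stars
            if one_key in name_map:
                one = one + "_" + str(name_map[one_key])
                name_map[one_key] += 1
            else:
                name_map[one_key] = 1
            ret.append(one)
    return ret

def flatten(seq):
    ret = []
    for one in seq:
        ret.append(flatten(one) if type(one) is list else one)
    return "(" + ", ".join(ret) + ")"

params = ['self', 'value', ['x', 'y'], 'value', 'lambda']
print(flatten(make_names_unique(params)))
# (self, value, (x, y), value_1, lambda_)
```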
502 | """ 503 | args, varg, kwarg, defaults = inspect.getargspec(p_func) 504 | spec = [] 505 | if defaults: 506 | dcnt = len(defaults) - 1 507 | else: 508 | dcnt = -1 509 | args = args or [] 510 | args.reverse() # backwards, for easier defaults handling 511 | for arg in args: 512 | if dcnt >= 0: 513 | arg += "=" + sanitize_value(defaults[dcnt]) 514 | dcnt -= 1 515 | spec.insert(0, arg) 516 | if varg: 517 | spec.append("*" + varg) 518 | if kwarg: 519 | spec.append("**" + kwarg) 520 | return flatten(spec) 521 | 522 | def restore_parameters_for_overloads(parameter_lists): 523 | param_index = 0 524 | star_args = False 525 | optional = False 526 | params = [] 527 | while True: 528 | parameter_lists_copy = [pl for pl in parameter_lists] 529 | for pl in parameter_lists_copy: 530 | if param_index >= len(pl): 531 | parameter_lists.remove(pl) 532 | optional = True 533 | if not parameter_lists: 534 | break 535 | name = parameter_lists[0][param_index] 536 | for pl in parameter_lists[1:]: 537 | if pl[param_index] != name: 538 | star_args = True 539 | break 540 | if star_args: break 541 | if optional and not '=' in name: 542 | params.append(name + '=None') 543 | else: 544 | params.append(name) 545 | param_index += 1 546 | if star_args: 547 | params.append("*__args") 548 | return params 549 | 550 | def build_signature(p_name, params): 551 | return p_name + '(' + ', '.join(params) + ')' 552 | 553 | 554 | def propose_first_param(deco): 555 | """@return: name of missing first paramater, considering a decorator""" 556 | if deco is None: 557 | return "self" 558 | if deco == "classmethod": 559 | return "cls" 560 | # if deco == "staticmethod": 561 | return None 562 | 563 | def qualifier_of(cls, qualifiers_to_skip): 564 | m = getattr(cls, "__module__", None) 565 | if m in qualifiers_to_skip: 566 | return "" 567 | return m 568 | 569 | def handle_error_func(item_name, out): 570 | exctype, value = sys.exc_info()[:2] 571 | msg = "Error generating skeleton for function %s: %s" 572 | args = item_name, value 573 | report(msg, *args) 574 | out(0, "# " + msg % args) 575 | out(0, "") 576 | 577 | def format_accessors(accessor_line, getter, setter, deleter): 578 | """Nicely format accessors, like 'getter, fdel=deleter'""" 579 | ret = [] 580 | consecutive = True 581 | for key, arg, par in (('r', 'fget', getter), ('w', 'fset', setter), ('d', 'fdel', deleter)): 582 | if key in accessor_line: 583 | if consecutive: 584 | ret.append(par) 585 | else: 586 | ret.append(arg + "=" + par) 587 | else: 588 | consecutive = False 589 | return ", ".join(ret) 590 | 591 | 592 | def has_regular_python_ext(file_name): 593 | """Does name end with .py?""" 594 | return file_name.endswith(".py") 595 | # Note that the standard library on MacOS X 10.6 is shipped only as .pyc files, so we need to 596 | # have them processed by the generator in order to have any code insight for the standard library. 
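`restore_parameters_for_overloads` above is what merges the parameter lists of several CLR overloads into one Python signature: positions where every overload agrees keep the shared name, positions missing from shorter overloads get a `=None` default, and the first position where the overloads disagree collapses the rest into `*__args`. Below is a lightly tidied, runnable copy plus a demo; the parameter names in the sample lists are made up for illustration.

```python
# Copy of restore_parameters_for_overloads() from above, with a demo of how
# CLR overload parameter lists are merged into a single signature.
def restore_parameters_for_overloads(parameter_lists):
    param_index = 0
    star_args = False
    optional = False
    params = []
    while True:
        for pl in list(parameter_lists):
            if param_index >= len(pl):
                parameter_lists.remove(pl)
                optional = True
        if not parameter_lists:
            break
        name = parameter_lists[0][param_index]
        for pl in parameter_lists[1:]:
            if pl[param_index] != name:
                star_args = True
                break
        if star_args:
            break
        if optional and '=' not in name:
            params.append(name + '=None')
        else:
            params.append(name)
        param_index += 1
    if star_args:
        params.append("*__args")
    return params

# One overload stops after 'document', so the remaining shared names become optional.
print(restore_parameters_for_overloads([
    ['document', 'view', 'options'],
    ['document'],
    ['document', 'view', 'options'],
]))  # ['document', 'view=None', 'options=None']

# Overloads that disagree on a position collapse into *__args from there on.
print(restore_parameters_for_overloads([
    ['document', 'elementId'],
    ['document', 'reference'],
]))  # ['document', '*__args']
```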
597 | 598 | 599 | def detect_constructor(p_class): 600 | # try to inspect the thing 601 | constr = getattr(p_class, "__init__") 602 | if constr and inspect and inspect.isfunction(constr): 603 | args, _, _, _ = inspect.getargspec(constr) 604 | return ", ".join(args) 605 | else: 606 | return None 607 | 608 | ############## notes, actions ################################################################# 609 | _is_verbose = False # controlled by -v 610 | 611 | CURRENT_ACTION = "nothing yet" 612 | 613 | def action(msg, *data): 614 | global CURRENT_ACTION 615 | CURRENT_ACTION = msg % data 616 | note(msg, *data) 617 | 618 | def note(msg, *data): 619 | """Say something at debug info level (stderr)""" 620 | global _is_verbose 621 | if _is_verbose: 622 | sys.stderr.write(msg % data) 623 | sys.stderr.write("\n") 624 | 625 | 626 | ############## plaform-specific methods ####################################################### 627 | import sys 628 | if sys.platform == 'cli': 629 | #noinspection PyUnresolvedReferences 630 | import clr 631 | 632 | # http://blogs.msdn.com/curth/archive/2009/03/29/an-ironpython-profiler.aspx 633 | def print_profile(): 634 | data = [] 635 | data.extend(clr.GetProfilerData()) 636 | data.sort(lambda x, y: -cmp(x.ExclusiveTime, y.ExclusiveTime)) 637 | 638 | for pd in data: 639 | say('%s\t%d\t%d\t%d', pd.Name, pd.InclusiveTime, pd.ExclusiveTime, pd.Calls) 640 | 641 | def is_clr_type(clr_type): 642 | if not clr_type: return False 643 | try: 644 | clr.GetClrType(clr_type) 645 | return True 646 | except TypeError: 647 | return False 648 | 649 | def restore_clr(p_name, p_class): 650 | """ 651 | Restore the function signature by the CLR type signature 652 | :return (is_static, spec, sig_note) 653 | """ 654 | clr_type = clr.GetClrType(p_class) 655 | if p_name == '__new__': 656 | methods = [c for c in clr_type.GetConstructors()] 657 | if not methods: 658 | return False, p_name + '(self, *args)', 'cannot find CLR constructor' # "self" is always first argument of any non-static method 659 | else: 660 | methods = [m for m in clr_type.GetMethods() if m.Name == p_name] 661 | if not methods: 662 | bases = p_class.__bases__ 663 | if len(bases) == 1 and p_name in dir(bases[0]): 664 | # skip inherited methods 665 | return False, None, None 666 | return False, p_name + '(self, *args)', 'cannot find CLR method' 667 | # "self" is always first argument of any non-static method 668 | 669 | parameter_lists = [] 670 | for m in methods: 671 | parameter_lists.append([p.Name for p in m.GetParameters()]) 672 | params = restore_parameters_for_overloads(parameter_lists) 673 | is_static = False 674 | if not methods[0].IsStatic: 675 | params = ['self'] + params 676 | else: 677 | is_static = True 678 | return is_static, build_signature(p_name, params), None 679 | 680 | def build_output_name(dirname, qualified_name): 681 | qualifiers = qualified_name.split(".") 682 | if dirname and not dirname.endswith("/") and not dirname.endswith("\\"): 683 | dirname += os.path.sep # "a -> a/" 684 | for pathindex in range(len(qualifiers) - 1): # create dirs for all qualifiers but last 685 | subdirname = dirname + os.path.sep.join(qualifiers[0: pathindex + 1]) 686 | if not os.path.isdir(subdirname): 687 | action("creating subdir %r", subdirname) 688 | os.makedirs(subdirname) 689 | init_py = os.path.join(subdirname, "__init__.py") 690 | if os.path.isfile(subdirname + ".py"): 691 | os.rename(subdirname + ".py", init_py) 692 | elif not os.path.isfile(init_py): 693 | init = fopen(init_py, "w") 694 | init.close() 695 | target_name 
= dirname + os.path.sep.join(qualifiers) 696 | if os.path.isdir(target_name): 697 | fname = os.path.join(target_name, "__init__.py") 698 | else: 699 | fname = target_name + ".py" 700 | 701 | dirname = os.path.dirname(fname) 702 | 703 | if not os.path.isdir(dirname): 704 | os.makedirs(dirname) 705 | 706 | return fname 707 | -------------------------------------------------------------------------------- /src/main.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | import os 3 | import sys 4 | try: 5 | src_dir = os.path.dirname(__file__) 6 | except Exception: 7 | src_dir = r'D:\Development\Programming\Revit\IronPython\revit-api-stubs\src' 8 | if not os.path.exists(src_dir): 9 | raise ValueError('Некорректный путь к папке src') 10 | 11 | root_dir = os.path.dirname(src_dir) 12 | sys.path.extend([root_dir, src_dir]) 13 | os.chdir(root_dir) # Для того чтобы логгер нашёл корневую папку 14 | 15 | from make_stubs import crawl_loaded_references, create_stubs 16 | 17 | stubs_dir = os.path.join( 18 | root_dir, 'stubs', 'revit', __revit__.Application.VersionNumber # type: ignore 19 | ) 20 | 21 | for assembly_name in ['RevitAPI', 'RevitAPIUI']: 22 | assembly = crawl_loaded_references(assembly_name) 23 | for modules in assembly.values(): 24 | for namespace in modules.keys(): 25 | create_stubs(stubs_dir, namespace) 26 | -------------------------------------------------------------------------------- /src/make_stubs.py: -------------------------------------------------------------------------------- 1 | """ Stub Generator for IronPython 2 | 3 | Extended script based on script developed by Gary Edwards at: 4 | gitlab.com/reje/revit-python-stubs 5 | 6 | This is uses a slightly modify version of generator3, 7 | github.com/JetBrains/intellij-community/blob/master/python/helpers/generator3.py 8 | 9 | Iterates through a list of targeted assemblies and generates stub directories 10 | for the namespaces using pycharm's generator3. 11 | 12 | Note: 13 | Some files ended up too large for Jedi to handle and would cause 14 | memory errors and crashes - 1mb+ in a single files was enough to 15 | cause problems. To fix this, there is a separate module that creates 16 | a compressed version of the stubs, but it also split large file 17 | into separate files to deal with jedi. 18 | These directories will show up in the stubs as (X_parts) 19 | 20 | 21 | MIT LICENSE 22 | https://github.com/gtalarico/ironpython-stubs 23 | Gui Talarico 24 | """ 25 | 26 | import os 27 | import sys 28 | import json 29 | import time 30 | from collections import defaultdict 31 | from pprint import pprint 32 | import importlib 33 | 34 | import clr 35 | import System 36 | 37 | # Ensure Proper CWD is set. 
This ensures proper running from within Revit 38 | # os.chdir(os.path.dirname(__file__)) 39 | from utils.logger import logger 40 | from generator3.generator3 import process_one 41 | 42 | 43 | def is_namespace(something): 44 | """ Returns True if object is Namespace: Module """ 45 | if isinstance(something, type(System)): 46 | return True 47 | 48 | 49 | def load_assemblies(assembly_name): 50 | """ Load Assemblies using clr.AddReference """ 51 | try: 52 | logger.info('Adding Assembly [{}]'.format(assembly_name)) 53 | clr.AddReference(assembly_name) 54 | except Exception as errmsg: 55 | logger.error('Could not load assembly: {}'.format(assembly_name)) 56 | logger.error(errmsg) 57 | else: 58 | logger.info('Loaded [{}]'.format(assembly_name)) 59 | 60 | 61 | def iter_module(module_name, module, module_path=None, namespaces=None): 62 | """ 63 | Recursively iterate through all namespaces in assembly 64 | """ 65 | if not namespaces: 66 | namespaces = {} 67 | 68 | for submodule_name, submodule in vars(module).iteritems(): 69 | if not is_namespace(submodule): 70 | continue 71 | if module_path: 72 | submodule_path = '.'.join([module_path, submodule_name]) 73 | else: 74 | submodule_path = submodule_name 75 | namespaces[submodule_path] = repr(submodule) 76 | iter_module(submodule_name, submodule, submodule_path, 77 | namespaces=namespaces) 78 | return namespaces 79 | 80 | 81 | def crawl_loaded_references(target_assembly_name): 82 | """ Crawl Loaded assemblies to get Namespaces. """ 83 | namespaces_dict = {} 84 | for assembly in clr.References: 85 | assembly_name = assembly.GetName().Name 86 | assembly_path = assembly.CodeBase 87 | assembly_filename = os.path.basename(assembly_path) 88 | if assembly_name == target_assembly_name: 89 | logger.info('Parsing Assembly: {}'.format(assembly_name)) 90 | namespaces_dict[assembly_filename] = iter_module(assembly_name, assembly) 91 | else: 92 | logger.debug('Assembly Skipped.

Not in target list: {}'.format(assembly_name)) 93 | return namespaces_dict 94 | 95 | 96 | def stub_exists(output_dir, module_path): 97 | """ Check if Stub exists """ 98 | path = os.path.join(output_dir, *module_path.split('.')) 99 | exists = os.path.exists(path) or os.path.exists(path + '.py') 100 | return exists 101 | 102 | 103 | def create_stubs(output_dir, module_path): # type(str, str) -> None 104 | """ Actually Make Stubs """ 105 | try: 106 | logger.info('='*30) 107 | logger.info('Processing [{}]'.format(module_path)) 108 | process_one(module_path, None, True, output_dir) 109 | except Exception as errmsg: 110 | logger.error('Could not process module_path: {}'.format(module_path)) 111 | logger.error(errmsg) 112 | else: 113 | logger.info('Done') 114 | 115 | 116 | def delete_module(module_path): 117 | """ Delete Module after it has been processed """ 118 | try: 119 | del sys.modules[module_path] 120 | except Exception as exc: 121 | logger.debug('Could not delete: {}'.format(module_path)) 122 | else: 123 | logger.debug('Deleted Module: {}'.format(module_path)) 124 | 125 | 126 | def dump_json_log(namespaces_dict): 127 | json_dir = os.path.join(os.getcwd(), 'logs') 128 | # now = str(time.time()).split('.')[0] 129 | name = '-'.join(namespaces_dict.keys()) 130 | filepath = os.path.join(json_dir, '{}.json'.format(name)) 131 | with open(filepath, 'w') as fp: 132 | json.dump(namespaces_dict, fp, indent=2) 133 | 134 | 135 | def make(output_dir, assembly_or_builtin, overwrite=False, quiet=False): 136 | """ Main Processing Function """ 137 | assembly_dict = {} 138 | print('='*80) 139 | logger.info('Making [{}]'.format(assembly_or_builtin)) 140 | try: 141 | clr.AddReference(assembly_or_builtin) 142 | except: 143 | # clr did not work Worked, Try as BuiltIn 144 | importlib.import_module(assembly_or_builtin) 145 | builtin_name = assembly_or_builtin 146 | builtin_dict = {builtin_name: str(sys.modules[builtin_name])} 147 | assembly_dict = {'__builtins__': builtin_dict} 148 | else: 149 | # Clr Worked Parse as Assembly Name 150 | assembly_name = assembly_or_builtin 151 | load_assemblies(assembly_name) 152 | namespaces_dict = crawl_loaded_references(assembly_name) 153 | assembly_dict = namespaces_dict 154 | 155 | if not assembly_dict: 156 | raise Exception('No namspaces to process') 157 | 158 | modules = [d.keys() for d in assembly_dict.values()] 159 | logger.info('Modules and Assemblies Loaded: {}'.format(modules)) 160 | logger.debug(json.dumps(assembly_dict, indent=2, sort_keys=True)) 161 | 162 | if not quiet and raw_input('>>> Write Stubs ({}) [y/n] [n]:\n>>> '.format(output_dir)) != 'y': 163 | logger.info('No Stubs Created') 164 | else: 165 | for assembly, modules in assembly_dict.items(): 166 | for module_path in modules.keys(): 167 | if not stub_exists(output_dir, module_path) or overwrite: 168 | create_stubs(output_dir, module_path) 169 | else: 170 | logger.info('Skipping [{}] Already Exists'.format(module_path)) 171 | for module_path in modules.keys(): 172 | delete_module(module_path) 173 | 174 | logger.info('Stubs Created') 175 | return assembly_dict 176 | -------------------------------------------------------------------------------- /src/utils/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/BIMOpenGroup/RevitAPIStubs/aa871e6e2400226076fe941020efbab0bce30906/src/utils/__init__.py -------------------------------------------------------------------------------- /src/utils/docopt.py: 
-------------------------------------------------------------------------------- 1 | """Pythonic command-line interface parser that will make you smile. 2 | 3 | * http://docopt.org 4 | * Repository and issue-tracker: https://github.com/docopt/docopt 5 | * Licensed under terms of MIT license (see LICENSE-MIT) 6 | * Copyright (c) 2013 Vladimir Keleshev, vladimir@keleshev.com 7 | 8 | """ 9 | import sys 10 | import re 11 | 12 | 13 | __all__ = ['docopt'] 14 | __version__ = '0.6.2' 15 | 16 | 17 | class DocoptLanguageError(Exception): 18 | 19 | """Error in construction of usage-message by developer.""" 20 | 21 | 22 | class DocoptExit(SystemExit): 23 | 24 | """Exit in case user invoked program with incorrect arguments.""" 25 | 26 | usage = '' 27 | 28 | def __init__(self, message=''): 29 | SystemExit.__init__(self, (message + '\n' + self.usage).strip()) 30 | 31 | 32 | class Pattern(object): 33 | 34 | def __eq__(self, other): 35 | return repr(self) == repr(other) 36 | 37 | def __hash__(self): 38 | return hash(repr(self)) 39 | 40 | def fix(self): 41 | self.fix_identities() 42 | self.fix_repeating_arguments() 43 | return self 44 | 45 | def fix_identities(self, uniq=None): 46 | """Make pattern-tree tips point to same object if they are equal.""" 47 | if not hasattr(self, 'children'): 48 | return self 49 | uniq = list(set(self.flat())) if uniq is None else uniq 50 | for i, c in enumerate(self.children): 51 | if not hasattr(c, 'children'): 52 | assert c in uniq 53 | self.children[i] = uniq[uniq.index(c)] 54 | else: 55 | c.fix_identities(uniq) 56 | 57 | def fix_repeating_arguments(self): 58 | """Fix elements that should accumulate/increment values.""" 59 | either = [list(c.children) for c in self.either.children] 60 | for case in either: 61 | for e in [c for c in case if case.count(c) > 1]: 62 | if type(e) is Argument or type(e) is Option and e.argcount: 63 | if e.value is None: 64 | e.value = [] 65 | elif type(e.value) is not list: 66 | e.value = e.value.split() 67 | if type(e) is Command or type(e) is Option and e.argcount == 0: 68 | e.value = 0 69 | return self 70 | 71 | @property 72 | def either(self): 73 | """Transform pattern into an equivalent, with only top-level Either.""" 74 | # Currently the pattern will not be equivalent, but more "narrow", 75 | # although good enough to reason about list arguments. 
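One concrete effect of `fix_repeating_arguments` above: an element that occurs more than once in a usage pattern accumulates into a list (arguments) or a count (flags). A hedged usage sketch, assuming this vendored module is importable as `utils.docopt` (src/ is put on sys.path by src/main.py in this repo):

```python
# fix_repeating_arguments() turns a repeated <file> into a list-valued argument.
from utils.docopt import docopt

doc = """Usage: prog <file> <file>"""
print(docopt(doc, argv=['in.txt', 'out.txt']))
# expected: {'<file>': ['in.txt', 'out.txt']}
```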
76 | ret = [] 77 | groups = [[self]] 78 | while groups: 79 | children = groups.pop(0) 80 | types = [type(c) for c in children] 81 | if Either in types: 82 | either = [c for c in children if type(c) is Either][0] 83 | children.pop(children.index(either)) 84 | for c in either.children: 85 | groups.append([c] + children) 86 | elif Required in types: 87 | required = [c for c in children if type(c) is Required][0] 88 | children.pop(children.index(required)) 89 | groups.append(list(required.children) + children) 90 | elif Optional in types: 91 | optional = [c for c in children if type(c) is Optional][0] 92 | children.pop(children.index(optional)) 93 | groups.append(list(optional.children) + children) 94 | elif AnyOptions in types: 95 | optional = [c for c in children if type(c) is AnyOptions][0] 96 | children.pop(children.index(optional)) 97 | groups.append(list(optional.children) + children) 98 | elif OneOrMore in types: 99 | oneormore = [c for c in children if type(c) is OneOrMore][0] 100 | children.pop(children.index(oneormore)) 101 | groups.append(list(oneormore.children) * 2 + children) 102 | else: 103 | ret.append(children) 104 | return Either(*[Required(*e) for e in ret]) 105 | 106 | 107 | class ChildPattern(Pattern): 108 | 109 | def __init__(self, name, value=None): 110 | self.name = name 111 | self.value = value 112 | 113 | def __repr__(self): 114 | return '%s(%r, %r)' % (self.__class__.__name__, self.name, self.value) 115 | 116 | def flat(self, *types): 117 | return [self] if not types or type(self) in types else [] 118 | 119 | def match(self, left, collected=None): 120 | collected = [] if collected is None else collected 121 | pos, match = self.single_match(left) 122 | if match is None: 123 | return False, left, collected 124 | left_ = left[:pos] + left[pos + 1:] 125 | same_name = [a for a in collected if a.name == self.name] 126 | if type(self.value) in (int, list): 127 | if type(self.value) is int: 128 | increment = 1 129 | else: 130 | increment = ([match.value] if type(match.value) is str 131 | else match.value) 132 | if not same_name: 133 | match.value = increment 134 | return True, left_, collected + [match] 135 | same_name[0].value += increment 136 | return True, left_, collected 137 | return True, left_, collected + [match] 138 | 139 | 140 | class ParentPattern(Pattern): 141 | 142 | def __init__(self, *children): 143 | self.children = list(children) 144 | 145 | def __repr__(self): 146 | return '%s(%s)' % (self.__class__.__name__, 147 | ', '.join(repr(a) for a in self.children)) 148 | 149 | def flat(self, *types): 150 | if type(self) in types: 151 | return [self] 152 | return sum([c.flat(*types) for c in self.children], []) 153 | 154 | 155 | class Argument(ChildPattern): 156 | 157 | def single_match(self, left): 158 | for n, p in enumerate(left): 159 | if type(p) is Argument: 160 | return n, Argument(self.name, p.value) 161 | return None, None 162 | 163 | @classmethod 164 | def parse(class_, source): 165 | name = re.findall('(<\S*?>)', source)[0] 166 | value = re.findall('\[default: (.*)\]', source, flags=re.I) 167 | return class_(name, value[0] if value else None) 168 | 169 | 170 | class Command(Argument): 171 | 172 | def __init__(self, name, value=False): 173 | self.name = name 174 | self.value = value 175 | 176 | def single_match(self, left): 177 | for n, p in enumerate(left): 178 | if type(p) is Argument: 179 | if p.value == self.name: 180 | return n, Command(self.name, True) 181 | else: 182 | break 183 | return None, None 184 | 185 | 186 | class Option(ChildPattern): 187 | 
188 | def __init__(self, short=None, long=None, argcount=0, value=False): 189 | assert argcount in (0, 1) 190 | self.short, self.long = short, long 191 | self.argcount, self.value = argcount, value 192 | self.value = None if value is False and argcount else value 193 | 194 | @classmethod 195 | def parse(class_, option_description): 196 | short, long, argcount, value = None, None, 0, False 197 | options, _, description = option_description.strip().partition(' ') 198 | options = options.replace(',', ' ').replace('=', ' ') 199 | for s in options.split(): 200 | if s.startswith('--'): 201 | long = s 202 | elif s.startswith('-'): 203 | short = s 204 | else: 205 | argcount = 1 206 | if argcount: 207 | matched = re.findall('\[default: (.*)\]', description, flags=re.I) 208 | value = matched[0] if matched else None 209 | return class_(short, long, argcount, value) 210 | 211 | def single_match(self, left): 212 | for n, p in enumerate(left): 213 | if self.name == p.name: 214 | return n, p 215 | return None, None 216 | 217 | @property 218 | def name(self): 219 | return self.long or self.short 220 | 221 | def __repr__(self): 222 | return 'Option(%r, %r, %r, %r)' % (self.short, self.long, 223 | self.argcount, self.value) 224 | 225 | 226 | class Required(ParentPattern): 227 | 228 | def match(self, left, collected=None): 229 | collected = [] if collected is None else collected 230 | l = left 231 | c = collected 232 | for p in self.children: 233 | matched, l, c = p.match(l, c) 234 | if not matched: 235 | return False, left, collected 236 | return True, l, c 237 | 238 | 239 | class Optional(ParentPattern): 240 | 241 | def match(self, left, collected=None): 242 | collected = [] if collected is None else collected 243 | for p in self.children: 244 | m, left, collected = p.match(left, collected) 245 | return True, left, collected 246 | 247 | 248 | class AnyOptions(Optional): 249 | 250 | """Marker/placeholder for [options] shortcut.""" 251 | 252 | 253 | class OneOrMore(ParentPattern): 254 | 255 | def match(self, left, collected=None): 256 | assert len(self.children) == 1 257 | collected = [] if collected is None else collected 258 | l = left 259 | c = collected 260 | l_ = None 261 | matched = True 262 | times = 0 263 | while matched: 264 | # could it be that something didn't match but changed l or c? 
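`Option.parse` above converts one line of an options description into an `Option`: the short and long names are pulled from the left-hand part, an argument count is inferred from a trailing placeholder, and any `[default: ...]` in the help text becomes the option's value. A hedged sketch (import path `utils.docopt` assumed, per this repo's layout):

```python
from utils.docopt import Option

print(Option.parse('-h, --help  Show this screen.'))
# Option('-h', '--help', 0, False)
print(Option.parse('--speed=<kn>  Maximum speed [default: 10]'))
# Option(None, '--speed', 1, '10')
```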
265 | matched, l, c = self.children[0].match(l, c) 266 | times += 1 if matched else 0 267 | if l_ == l: 268 | break 269 | l_ = l 270 | if times >= 1: 271 | return True, l, c 272 | return False, left, collected 273 | 274 | 275 | class Either(ParentPattern): 276 | 277 | def match(self, left, collected=None): 278 | collected = [] if collected is None else collected 279 | outcomes = [] 280 | for p in self.children: 281 | matched, _, _ = outcome = p.match(left, collected) 282 | if matched: 283 | outcomes.append(outcome) 284 | if outcomes: 285 | return min(outcomes, key=lambda outcome: len(outcome[1])) 286 | return False, left, collected 287 | 288 | 289 | class TokenStream(list): 290 | 291 | def __init__(self, source, error): 292 | self += source.split() if hasattr(source, 'split') else source 293 | self.error = error 294 | 295 | def move(self): 296 | return self.pop(0) if len(self) else None 297 | 298 | def current(self): 299 | return self[0] if len(self) else None 300 | 301 | 302 | def parse_long(tokens, options): 303 | """long ::= '--' chars [ ( ' ' | '=' ) chars ] ;""" 304 | long, eq, value = tokens.move().partition('=') 305 | assert long.startswith('--') 306 | value = None if eq == value == '' else value 307 | similar = [o for o in options if o.long == long] 308 | if tokens.error is DocoptExit and similar == []: # if no exact match 309 | similar = [o for o in options if o.long and o.long.startswith(long)] 310 | if len(similar) > 1: # might be simply specified ambiguously 2+ times? 311 | raise tokens.error('%s is not a unique prefix: %s?' % 312 | (long, ', '.join(o.long for o in similar))) 313 | elif len(similar) < 1: 314 | argcount = 1 if eq == '=' else 0 315 | o = Option(None, long, argcount) 316 | options.append(o) 317 | if tokens.error is DocoptExit: 318 | o = Option(None, long, argcount, value if argcount else True) 319 | else: 320 | o = Option(similar[0].short, similar[0].long, 321 | similar[0].argcount, similar[0].value) 322 | if o.argcount == 0: 323 | if value is not None: 324 | raise tokens.error('%s must not have an argument' % o.long) 325 | else: 326 | if value is None: 327 | if tokens.current() is None: 328 | raise tokens.error('%s requires argument' % o.long) 329 | value = tokens.move() 330 | if tokens.error is DocoptExit: 331 | o.value = value if value is not None else True 332 | return [o] 333 | 334 | 335 | def parse_shorts(tokens, options): 336 | """shorts ::= '-' ( chars )* [ [ ' ' ] chars ] ;""" 337 | token = tokens.move() 338 | assert token.startswith('-') and not token.startswith('--') 339 | left = token.lstrip('-') 340 | parsed = [] 341 | while left != '': 342 | short, left = '-' + left[0], left[1:] 343 | similar = [o for o in options if o.short == short] 344 | if len(similar) > 1: 345 | raise tokens.error('%s is specified ambiguously %d times' % 346 | (short, len(similar))) 347 | elif len(similar) < 1: 348 | o = Option(short, None, 0) 349 | options.append(o) 350 | if tokens.error is DocoptExit: 351 | o = Option(short, None, 0, True) 352 | else: # why copying is necessary here? 
353 | o = Option(short, similar[0].long, 354 | similar[0].argcount, similar[0].value) 355 | value = None 356 | if o.argcount != 0: 357 | if left == '': 358 | if tokens.current() is None: 359 | raise tokens.error('%s requires argument' % short) 360 | value = tokens.move() 361 | else: 362 | value = left 363 | left = '' 364 | if tokens.error is DocoptExit: 365 | o.value = value if value is not None else True 366 | parsed.append(o) 367 | return parsed 368 | 369 | 370 | def parse_pattern(source, options): 371 | tokens = TokenStream(re.sub(r'([\[\]\(\)\|]|\.\.\.)', r' \1 ', source), 372 | DocoptLanguageError) 373 | result = parse_expr(tokens, options) 374 | if tokens.current() is not None: 375 | raise tokens.error('unexpected ending: %r' % ' '.join(tokens)) 376 | return Required(*result) 377 | 378 | 379 | def parse_expr(tokens, options): 380 | """expr ::= seq ( '|' seq )* ;""" 381 | seq = parse_seq(tokens, options) 382 | if tokens.current() != '|': 383 | return seq 384 | result = [Required(*seq)] if len(seq) > 1 else seq 385 | while tokens.current() == '|': 386 | tokens.move() 387 | seq = parse_seq(tokens, options) 388 | result += [Required(*seq)] if len(seq) > 1 else seq 389 | return [Either(*result)] if len(result) > 1 else result 390 | 391 | 392 | def parse_seq(tokens, options): 393 | """seq ::= ( atom [ '...' ] )* ;""" 394 | result = [] 395 | while tokens.current() not in [None, ']', ')', '|']: 396 | atom = parse_atom(tokens, options) 397 | if tokens.current() == '...': 398 | atom = [OneOrMore(*atom)] 399 | tokens.move() 400 | result += atom 401 | return result 402 | 403 | 404 | def parse_atom(tokens, options): 405 | """atom ::= '(' expr ')' | '[' expr ']' | 'options' 406 | | long | shorts | argument | command ; 407 | """ 408 | token = tokens.current() 409 | result = [] 410 | if token in '([': 411 | tokens.move() 412 | matching, pattern = {'(': [')', Required], '[': [']', Optional]}[token] 413 | result = pattern(*parse_expr(tokens, options)) 414 | if tokens.move() != matching: 415 | raise tokens.error("unmatched '%s'" % token) 416 | return [result] 417 | elif token == 'options': 418 | tokens.move() 419 | return [AnyOptions()] 420 | elif token.startswith('--') and token != '--': 421 | return parse_long(tokens, options) 422 | elif token.startswith('-') and token not in ('-', '--'): 423 | return parse_shorts(tokens, options) 424 | elif token.startswith('<') and token.endswith('>') or token.isupper(): 425 | return [Argument(tokens.move())] 426 | else: 427 | return [Command(tokens.move())] 428 | 429 | 430 | def parse_argv(tokens, options, options_first=False): 431 | """Parse command-line argument vector. 
432 | 433 | If options_first: 434 | argv ::= [ long | shorts ]* [ argument ]* [ '--' [ argument ]* ] ; 435 | else: 436 | argv ::= [ long | shorts | argument ]* [ '--' [ argument ]* ] ; 437 | 438 | """ 439 | parsed = [] 440 | while tokens.current() is not None: 441 | if tokens.current() == '--': 442 | return parsed + [Argument(None, v) for v in tokens] 443 | elif tokens.current().startswith('--'): 444 | parsed += parse_long(tokens, options) 445 | elif tokens.current().startswith('-') and tokens.current() != '-': 446 | parsed += parse_shorts(tokens, options) 447 | elif options_first: 448 | return parsed + [Argument(None, v) for v in tokens] 449 | else: 450 | parsed.append(Argument(None, tokens.move())) 451 | return parsed 452 | 453 | 454 | def parse_defaults(doc): 455 | # in python < 2.7 you can't pass flags=re.MULTILINE 456 | split = re.split('\n *(<\S+?>|-\S+?)', doc)[1:] 457 | split = [s1 + s2 for s1, s2 in zip(split[::2], split[1::2])] 458 | options = [Option.parse(s) for s in split if s.startswith('-')] 459 | #arguments = [Argument.parse(s) for s in split if s.startswith('<')] 460 | #return options, arguments 461 | return options 462 | 463 | 464 | def printable_usage(doc): 465 | # in python < 2.7 you can't pass flags=re.IGNORECASE 466 | usage_split = re.split(r'([Uu][Ss][Aa][Gg][Ee]:)', doc) 467 | if len(usage_split) < 3: 468 | raise DocoptLanguageError('"usage:" (case-insensitive) not found.') 469 | if len(usage_split) > 3: 470 | raise DocoptLanguageError('More than one "usage:" (case-insensitive).') 471 | return re.split(r'\n\s*\n', ''.join(usage_split[1:]))[0].strip() 472 | 473 | 474 | def formal_usage(printable_usage): 475 | pu = printable_usage.split()[1:] # split and drop "usage:" 476 | return '( ' + ' '.join(') | (' if s == pu[0] else s for s in pu[1:]) + ' )' 477 | 478 | 479 | def extras(help, version, options, doc): 480 | if help and any((o.name in ('-h', '--help')) and o.value for o in options): 481 | print(doc.strip("\n")) 482 | sys.exit() 483 | if version and any(o.name == '--version' and o.value for o in options): 484 | print(version) 485 | sys.exit() 486 | 487 | 488 | class Dict(dict): 489 | def __repr__(self): 490 | return '{%s}' % ',\n '.join('%r: %r' % i for i in sorted(self.items())) 491 | 492 | 493 | def docopt(doc, argv=None, help=True, version=None, options_first=False): 494 | """Parse `argv` based on command-line interface described in `doc`. 495 | 496 | `docopt` creates your command-line interface based on its 497 | description that you pass as `doc`. Such description can contain 498 | --options, , commands, which could be 499 | [optional], (required), (mutually | exclusive) or repeated... 500 | 501 | Parameters 502 | ---------- 503 | doc : str 504 | Description of your command-line interface. 505 | argv : list of str, optional 506 | Argument vector to be parsed. sys.argv[1:] is used if not 507 | provided. 508 | help : bool (default: True) 509 | Set to False to disable automatic help on -h or --help 510 | options. 511 | version : any object 512 | If passed, the object will be printed if --version is in 513 | `argv`. 514 | options_first : bool (default: False) 515 | Set to True to require options preceed positional arguments, 516 | i.e. to forbid options and positional arguments intermix. 517 | 518 | Returns 519 | ------- 520 | args : dict 521 | A dictionary, where keys are names of command-line elements 522 | such as e.g. "--verbose" and "", and values are the 523 | parsed values of those elements. 
524 | 525 | Example 526 | ------- 527 | >>> from docopt import docopt 528 | >>> doc = ''' 529 | Usage: 530 | my_program tcp <host> <port> [--timeout=<seconds>] 531 | my_program serial <port> [--baud=<n>] [--timeout=<seconds>] 532 | my_program (-h | --help | --version) 533 | 534 | Options: 535 | -h, --help Show this screen and exit. 536 | --baud=<n> Baudrate [default: 9600] 537 | ''' 538 | >>> argv = ['tcp', '127.0.0.1', '80', '--timeout', '30'] 539 | >>> docopt(doc, argv) 540 | {'--baud': '9600', 541 | '--help': False, 542 | '--timeout': '30', 543 | '--version': False, 544 | '<host>': '127.0.0.1', 545 | '<port>': '80', 546 | 'serial': False, 547 | 'tcp': True} 548 | 549 | See also 550 | -------- 551 | * For video introduction see http://docopt.org 552 | * Full documentation is available in README.rst as well as online 553 | at https://github.com/docopt/docopt#readme 554 | 555 | """ 556 | if argv is None: 557 | argv = sys.argv[1:] 558 | DocoptExit.usage = printable_usage(doc) 559 | options = parse_defaults(doc) 560 | pattern = parse_pattern(formal_usage(DocoptExit.usage), options) 561 | # [default] syntax for argument is disabled 562 | #for a in pattern.flat(Argument): 563 | # same_name = [d for d in arguments if d.name == a.name] 564 | # if same_name: 565 | # a.value = same_name[0].value 566 | argv = parse_argv(TokenStream(argv, DocoptExit), list(options), 567 | options_first) 568 | pattern_options = set(pattern.flat(Option)) 569 | for ao in pattern.flat(AnyOptions): 570 | doc_options = parse_defaults(doc) 571 | ao.children = list(set(doc_options) - pattern_options) 572 | #if any_options: 573 | # ao.children += [Option(o.short, o.long, o.argcount) 574 | # for o in argv if type(o) is Option] 575 | extras(help, version, argv, doc) 576 | matched, left, collected = pattern.fix().match(argv) 577 | if matched and left == []: # better error message if left?
578 | return Dict((a.name, a.value) for a in (pattern.flat() + collected)) 579 | raise DocoptExit() 580 | -------------------------------------------------------------------------------- /src/utils/helper.py: -------------------------------------------------------------------------------- 1 | import os 2 | import time 3 | from collections import defaultdict 4 | from ConfigParser import ConfigParser 5 | from .logger import logger 6 | 7 | class Timer(object): 8 | "Time and TimeIt Decorator" 9 | def __init__(self): 10 | self.start_time = time.time() 11 | 12 | def stop(self): 13 | end_time = time.time() 14 | duration = end_time - self.start_time 15 | return duration 16 | 17 | @staticmethod 18 | def time_function(name): 19 | def wrapper(func): 20 | def wrap(*ags, **kwargs): 21 | logger.debug('START: {}'.format(name)) 22 | t = Timer() 23 | rv = func(*ags, **kwargs) 24 | duration = t.stop() 25 | logger.debug('Done: {} sec'.format(duration)) 26 | return rv 27 | return wrap 28 | return wrapper 29 | -------------------------------------------------------------------------------- /src/utils/logger.py: -------------------------------------------------------------------------------- 1 | import os 2 | import logging 3 | import logging.config 4 | 5 | LOG_FOLDER = 'logs' 6 | LOG_FILE = 'log.log' 7 | LOG_LEVEL = 'INFO' 8 | LOG_LEVELS = { 9 | 50: 'CRITICAL', 10 | 40: 'ERROR', 11 | 30: 'WARNING', 12 | 20: 'INFO', 13 | 10: 'DEBUG', 14 | 0: 'NOTSET' 15 | } 16 | LOGGER_CONFIG = { 17 | "version": 1, 18 | "disable_existing_loggers": True, 19 | "handlers": { 20 | "console": { 21 | "class": "logging.StreamHandler", 22 | "formatter": "simple", 23 | }, 24 | "file_handler": { 25 | "class": "logging.FileHandler", 26 | "formatter": "simple", 27 | "filename": r"{}\{}".format(LOG_FOLDER, LOG_FILE), 28 | }, 29 | }, 30 | "formatters": { 31 | "simple": { 32 | "format": "[%(levelname)s] %(message)s [%(asctime)s]", 33 | "datefmt": "%Y:%m:%d - %I:%M:%S" 34 | } 35 | }, 36 | "loggers": { 37 | "": { 38 | "handlers": ["console", "file_handler"], 39 | "level": LOG_LEVEL 40 | } 41 | } 42 | } 43 | 44 | 45 | def enable_debug(): 46 | logger.setLevel(logging.DEBUG) 47 | 48 | 49 | if not os.path.exists(LOG_FOLDER): 50 | os.mkdir(LOG_FOLDER) 51 | 52 | logging.config.dictConfig(LOGGER_CONFIG) # type: ignore 53 | logger = logging.getLogger() 54 | logger.debug('** LOG LEVEL: {}'.format(LOG_LEVELS[logger.getEffectiveLevel()])) 55 | logger.enable_debug = enable_debug # type: ignore 56 | --------------------------------------------------------------------------------
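Finally, a hedged usage sketch for the `utils.helper.Timer` decorator and the logger configured in `utils/logger.py` above. It assumes Python/IronPython 2.7 (helper.py imports `ConfigParser`) and that the `src` directory is on `sys.path`, as `src/main.py` arranges; `do_work` is a made-up function for illustration.

```python
# Timer.time_function logs the start and the duration of the wrapped call at
# DEBUG level; logger.enable_debug() is the helper attached in utils/logger.py.
from utils.helper import Timer
from utils.logger import logger

logger.enable_debug()

@Timer.time_function('crawl namespaces')
def do_work():
    return sum(range(1000))

print(do_work())  # logs "START: crawl namespaces" and "Done: <seconds> sec", prints 499500
```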