├── .github
│   ├── FUNDING.yml
│   ├── ISSUE_TEMPLATE
│   │   └── bug-report.md
│   └── pictures
│       └── alipay.JPG
├── .gitignore
├── CITATION.cff
├── LICENSE
├── README.md
├── docs
│   ├── Author.md
│   ├── Changelog.md
│   ├── Install.md
│   ├── Makefile
│   ├── Quickstart.md
│   ├── Recommend.md
│   ├── State.md
│   ├── conf.py
│   ├── index.rst
│   ├── logo.png
│   ├── make.bat
│   ├── pikachu.jpg
│   ├── requirements.txt
│   ├── screenshot_characterize.gif
│   ├── screenshot_fastneuralstyletransfer.gif
│   ├── screenshot_photomosaic.png
│   └── screeshot_noteprocessor.png
├── pydrawing
│   ├── __init__.py
│   ├── asserts
│   │   ├── animecharacter.jpg
│   │   ├── badapple.mp4
│   │   ├── badapple_color.mp4
│   │   ├── bridge.png
│   │   ├── dog.jpg
│   │   ├── face.jpg
│   │   ├── monalisa.jpg
│   │   ├── note.jpg
│   │   └── zurich.jpg
│   ├── modules
│   │   ├── __init__.py
│   │   ├── beautifiers
│   │   │   ├── __init__.py
│   │   │   ├── base
│   │   │   │   ├── __init__.py
│   │   │   │   └── base.py
│   │   │   ├── beziercurve
│   │   │   │   ├── __init__.py
│   │   │   │   ├── beziercurve.py
│   │   │   │   └── potrace.exe
│   │   │   ├── cartoongan
│   │   │   │   ├── __init__.py
│   │   │   │   └── cartoongan.py
│   │   │   ├── cartoonise
│   │   │   │   ├── __init__.py
│   │   │   │   └── cartoonise.py
│   │   │   ├── cartoonizeface
│   │   │   │   ├── __init__.py
│   │   │   │   ├── cartoonizeface.py
│   │   │   │   ├── facedetector
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   └── facedetector.py
│   │   │   │   └── facesegmentor
│   │   │   │       ├── __init__.py
│   │   │   │       └── facesegmentor.py
│   │   │   ├── characterize
│   │   │   │   ├── __init__.py
│   │   │   │   └── characterize.py
│   │   │   ├── douyineffect
│   │   │   │   ├── __init__.py
│   │   │   │   └── douyineffect.py
│   │   │   ├── fastneuralstyletransfer
│   │   │   │   ├── __init__.py
│   │   │   │   └── fastneuralstyletransfer.py
│   │   │   ├── geneticfitting
│   │   │   │   ├── __init__.py
│   │   │   │   ├── geneticfittingcircle.py
│   │   │   │   └── geneticfittingpolygon.py
│   │   │   ├── glitch
│   │   │   │   ├── __init__.py
│   │   │   │   └── glitch.py
│   │   │   ├── nostalgicstyle
│   │   │   │   ├── __init__.py
│   │   │   │   └── nostalgicstyle.py
│   │   │   ├── noteprocessor
│   │   │   │   ├── __init__.py
│   │   │   │   └── noteprocessor.py
│   │   │   ├── oilpainting
│   │   │   │   ├── __init__.py
│   │   │   │   └── oilpainting.py
│   │   │   ├── pencildrawing
│   │   │   │   ├── __init__.py
│   │   │   │   ├── pencildrawing.py
│   │   │   │   └── textures
│   │   │   │       └── default.jpg
│   │   │   ├── photocorrection
│   │   │   │   ├── __init__.py
│   │   │   │   └── photocorrection.py
│   │   │   └── photomosaic
│   │   │       ├── __init__.py
│   │   │       └── photomosaic.py
│   │   └── utils
│   │       ├── __init__.py
│   │       ├── io.py
│   │       └── logger.py
│   └── pydrawing.py
├── requirements.txt
└── setup.py
/.github/FUNDING.yml:
--------------------------------------------------------------------------------
1 | patreon: CharlesPikachu
2 | ko_fi: charlespikachu
3 | custom: https://github.com/CharlesPikachu/Games/tree/master/.github/pictures/alipay.JPG
4 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug-report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug report
3 | about: What bug did you encounter
4 | title: "[BUG]"
5 | labels: ''
6 | assignees: ''
7 | ---
8 |
9 | **Environment (使用环境)**
10 |
11 | - Installation method (安装方式):
12 | - The version of pydrawing (版本号):
13 | - Operating system (操作系统):
14 | - Python version (Python版本):
15 |
16 | **Question description (问题描述)**
17 |
18 | **Screenshot (报错截图)**
19 |
20 | **Advice (修复建议)**
21 |
--------------------------------------------------------------------------------
/.github/pictures/alipay.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/.github/pictures/alipay.JPG
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | *.egg-info/
24 | .installed.cfg
25 | *.egg
26 | MANIFEST
27 |
28 | # PyInstaller
29 | # Usually these files are written by a python script from a template
30 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
31 | *.manifest
32 | *.spec
33 |
34 | # Installer logs
35 | pip-log.txt
36 | pip-delete-this-directory.txt
37 |
38 | # Unit test / coverage reports
39 | htmlcov/
40 | .tox/
41 | .coverage
42 | .coverage.*
43 | .cache
44 | nosetests.xml
45 | coverage.xml
46 | *.cover
47 | .hypothesis/
48 | .pytest_cache/
49 |
50 | # Translations
51 | *.mo
52 | *.pot
53 |
54 | # Django stuff:
55 | *.log
56 | local_settings.py
57 | db.sqlite3
58 |
59 | # Flask stuff:
60 | instance/
61 | .webassets-cache
62 |
63 |
64 | # Scrapy stuff:
65 | .scrapy
66 |
67 | # Sphinx documentation
68 | docs/_build/
69 |
70 | # PyBuilder
71 | target/
72 |
73 | # Jupyter Notebook
74 | .ipynb_checkpoints
75 |
76 | # pyenv
77 | .python-version
78 |
79 | # celery beat schedule file
80 | celerybeat-schedule
81 |
82 | # SageMath parsed files
83 | *.sage.py
84 |
85 | # Environments
86 | .env
87 | .venv
88 | env/
89 | venv/
90 | ENV/
91 | env.bak/
92 | venv.bak/
93 |
94 | # Spyder project settings
95 | .spyderproject
96 | .spyproject
97 |
98 | # Rope project settings
99 | .ropeproject
100 |
101 | # mkdocs documentation
102 | /site
103 |
104 | # mypy
105 | .mypy_cache/
106 |
107 |
--------------------------------------------------------------------------------
/CITATION.cff:
--------------------------------------------------------------------------------
1 | cff-version: 1.2.0
2 | message: "If you use this software, please cite it as below."
3 | authors:
4 | - name: "Zhenchao Jin"
5 | title: "Pydrawing: Beautify your image or video"
6 | date-released: 2022-01-18
7 | url: "https://github.com/CharlesPikachu/pydrawing"
8 | license: Apache-2.0
9 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |

3 |
4 |
5 |
6 | [](https://pydrawing.readthedocs.io/)
7 | [](https://pypi.org/project/pydrawing/)
8 | [](https://pypi.org/project/pydrawing)
9 | [](https://github.com/CharlesPikachu/pydrawing/blob/master/LICENSE)
10 | [](https://pypi.org/project/pydrawing/)
11 | [](https://pypi.org/project/pydrawing/)
12 | [](https://github.com/CharlesPikachu/pydrawing/issues)
13 | [](https://github.com/CharlesPikachu/pydrawing/issues)
14 |
15 | Documentation: https://pydrawing.readthedocs.io/
16 |
17 |
18 | # Pydrawing
19 | ```
20 | Beautify your image or video.
21 | If it is helpful to you, you can star this repository to keep track of the project. Thank you for your support.
22 | ```
23 |
24 |
25 | # Support List
26 | | Beautifier_EN | Introduction | Related Paper | Code | Beautifier_CN |
27 | | :----: | :----: | :----: | :----: | :----: |
28 | | glitch | [click](https://mp.weixin.qq.com/s/Yv0uPLsTGwVnj_PKqYCmAw) | N/A | [click](./pydrawing/modules/beautifiers/glitch) | 信号故障特效 |
29 | | douyineffect | [click](https://mp.weixin.qq.com/s/RRnrO2H84pvtUdDsAYD9Qg) | N/A | [click](./pydrawing/modules/beautifiers/douyineffect) | 抖音特效 |
30 | | characterize | [click](https://mp.weixin.qq.com/s/yaNQJyeUeisOenEeoVsgDg) | N/A | [click](./pydrawing/modules/beautifiers/characterize) | 视频转字符画 |
31 | | cartoonise | [click](https://mp.weixin.qq.com/s/efwNQl0JVJt6_x_evdL41A) | N/A | [click](./pydrawing/modules/beautifiers/cartoonise) | 图像卡通化 |
32 | | photomosaic | [click](https://mp.weixin.qq.com/s/BG1VW3jx0LUazhhifBapVw) | N/A | [click](./pydrawing/modules/beautifiers/photomosaic) | 拼马赛克图片 |
33 | | beziercurve | [click](https://mp.weixin.qq.com/s/SWpaTPw9tOLs5h1EgP30Vw) | N/A | [click](./pydrawing/modules/beautifiers/beziercurve) | 贝塞尔曲线画画 |
34 | | geneticfittingcircle | [click](https://mp.weixin.qq.com/s/L0z1ZO1Qztk0EF1KAMfmbA) | N/A | [click](./pydrawing/modules/beautifiers/geneticfitting) | 遗传算法拟合图像-圆形 |
35 | | geneticfittingpolygon | [click](https://mp.weixin.qq.com/s/L0z1ZO1Qztk0EF1KAMfmbA) | N/A | [click](./pydrawing/modules/beautifiers/geneticfitting) | 遗传算法拟合图像-多边形 |
36 | | nostalgicstyle | [click](https://mp.weixin.qq.com/s/yRCt69u_gzPI85-vOrb_sQ) | N/A | [click](./pydrawing/modules/beautifiers/nostalgicstyle) | 照片怀旧风格 |
37 | | photocorrection | [click](https://mp.weixin.qq.com/s/yRCt69u_gzPI85-vOrb_sQ) | N/A | [click](./pydrawing/modules/beautifiers/photocorrection) | 简单的照片矫正 |
38 | | pencildrawing | [click](https://mp.weixin.qq.com/s/K_2lGGlLKHIIm4iSg0xCUw) | [click](https://jiaya.me/archive/projects/pencilsketch/npar12_pencil.pdf) | [click](./pydrawing/modules/beautifiers/pencildrawing) | 铅笔素描画 |
39 | | cartoongan | [click](https://mp.weixin.qq.com/s/18fUOO5fH1PVUzTMNNCWwQ) | [click](https://openaccess.thecvf.com/content_cvpr_2018/CameraReady/2205.pdf) | [click](./pydrawing/modules/beautifiers/cartoongan) | 卡通GAN |
40 | | fastneuralstyletransfer | [click](https://mp.weixin.qq.com/s/Ed-1fWOIhI52G-Ugrv7n9Q) | [click](https://cs.stanford.edu/people/jcjohns/papers/eccv16/JohnsonECCV16.pdf) | [click](./pydrawing/modules/beautifiers/fastneuralstyletransfer) | 快速风格迁移 |
41 | | cartoonizeface | [click](https://mp.weixin.qq.com/s/L0z1ZO1Qztk0EF1KAMfmbA) | [click](https://arxiv.org/pdf/1907.10830.pdf) | [click](./pydrawing/modules/beautifiers/cartoonizeface) | 人脸卡通化 |
42 | | noteprocessor | [click](https://mp.weixin.qq.com/s/yRCt69u_gzPI85-vOrb_sQ) | [click](https://mzucker.github.io/2016/09/20/noteshrink.html) | [click](./pydrawing/modules/beautifiers/noteprocessor) | 手写笔记处理 |
43 | | oilpainting | [click](https://mp.weixin.qq.com/s/yRCt69u_gzPI85-vOrb_sQ) | [click](https://github.com/cyshih73/Faster-OilPainting/blob/master/Report.pdf) | [click](./pydrawing/modules/beautifiers/oilpainting) | 照片油画化 |
44 |
45 |
46 | # Install
47 |
48 | #### Preparation
49 | - [ffmpeg](https://ffmpeg.org/): ffmpeg should be installed and available on your PATH environment variable.
50 | - [PyTorch](https://pytorch.org/get-started/previous-versions/): To use some of the supported beautifiers (e.g., cartoongan), you need to install PyTorch and the corresponding dependencies by following the [tutorial](https://pytorch.org/get-started/previous-versions/).
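Before running pydrawing, it can help to confirm both prerequisites are in place. A minimal standard-library sketch (the helper name is ours, not part of pydrawing):

```python
import shutil
import importlib.util

def check_prerequisites():
    """Report whether ffmpeg is on PATH and whether PyTorch is importable."""
    return {
        'ffmpeg': shutil.which('ffmpeg') is not None,
        'torch': importlib.util.find_spec('torch') is not None,
    }

print(check_prerequisites())
```

If either entry is `False`, install the corresponding prerequisite before using the beautifiers that need it.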
51 |
52 | #### Pip install
53 | ```sh
54 | pip install pydrawing
55 | ```
56 |
57 | #### Source code install
58 | ```sh
59 | # (1) Offline
60 | git clone https://github.com/CharlesPikachu/pydrawing.git
61 | cd pydrawing && python setup.py install
62 | # (2) Online
63 | pip install git+https://github.com/CharlesPikachu/pydrawing.git@master
64 | ```
65 |
66 |
67 | # Quick Start
68 | ```python
69 | import random
70 | from pydrawing import pydrawing
71 |
72 | filepath = 'asserts/dog.jpg'
73 | config = {
74 | "savedir": "outputs",
75 | "savename": "output"
76 | }
77 | drawing_client = pydrawing.pydrawing()
78 | drawing_client.execute(filepath, random.choice(drawing_client.getallsupports()), config=config)
79 | ```
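The same `execute()` call can be applied to several beautifiers in a loop. Below is a sketch where `build_jobs` is a hypothetical helper (not part of pydrawing) that gives each beautifier its own `savename` so the outputs do not overwrite each other:

```python
def build_jobs(filepath, beautifiers, savedir='outputs'):
    # one (filepath, beautifier, config) job per beautifier, each with a distinct savename
    return [(filepath, b, {'savedir': savedir, 'savename': b}) for b in beautifiers]

# Hypothetical usage with the client from the example above:
# drawing_client = pydrawing.pydrawing()
# for path, beautifier, config in build_jobs('asserts/dog.jpg', drawing_client.getallsupports()):
#     drawing_client.execute(path, beautifier, config=config)
```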
80 |
81 |
82 | # Screenshot
83 |
84 |

85 |
86 |
87 |
88 |

89 |
90 |
91 |
92 |

93 |
94 |
95 |
96 |

97 |
98 |
99 |
100 |
101 | # Projects in Charles_pikachu
102 | - [Games](https://github.com/CharlesPikachu/Games): Create interesting games in pure Python.
103 | - [DecryptLogin](https://github.com/CharlesPikachu/DecryptLogin): APIs for logging in to some websites using requests.
104 | - [Musicdl](https://github.com/CharlesPikachu/musicdl): A lightweight music downloader written in pure Python.
105 | - [Videodl](https://github.com/CharlesPikachu/videodl): A lightweight video downloader written in pure Python.
106 | - [Pytools](https://github.com/CharlesPikachu/pytools): Some useful tools written in pure Python.
107 | - [PikachuWeChat](https://github.com/CharlesPikachu/pikachuwechat): Play WeChat with itchat-uos.
108 | - [Pydrawing](https://github.com/CharlesPikachu/pydrawing): Beautify your image or video.
109 | - [ImageCompressor](https://github.com/CharlesPikachu/imagecompressor): Image compressors written in pure Python.
110 | - [FreeProxy](https://github.com/CharlesPikachu/freeproxy): Collect free proxies from the internet.
111 | - [Paperdl](https://github.com/CharlesPikachu/paperdl): Search and download papers from specific websites.
112 | - [Sciogovterminal](https://github.com/CharlesPikachu/sciogovterminal): Browse "The State Council Information Office of the People's Republic of China" in the terminal.
113 | - [CodeFree](https://github.com/CharlesPikachu/codefree): Make no-code a reality.
114 | - [DeepLearningToys](https://github.com/CharlesPikachu/deeplearningtoys): Some deep learning toys implemented in PyTorch.
115 | - [DataAnalysis](https://github.com/CharlesPikachu/dataanalysis): Some data analysis projects in charles_pikachu.
116 | - [Imagedl](https://github.com/CharlesPikachu/imagedl): Search and download images from specific websites.
117 | - [Pytoydl](https://github.com/CharlesPikachu/pytoydl): A toy deep learning framework built upon NumPy.
118 | - [NovelDL](https://github.com/CharlesPikachu/noveldl): Search and download novels from specific websites.
119 |
120 |
121 | # More
122 | #### WeChat Official Accounts
123 | *Charles_pikachu*
124 | 
--------------------------------------------------------------------------------
/docs/Author.md:
--------------------------------------------------------------------------------
1 | # About the Author
2 |
3 | I am a student. My main research area is computer vision, and I am also interested in information security.
4 |
5 | My personal WeChat official account is: Charles_pikachu (feel free to search for and follow it, or search for "Charles的皮卡丘")
6 |
7 | My GitHub account is: [https://github.com/CharlesPikachu](https://github.com/CharlesPikachu) (feel free to follow)
8 |
9 | My Zhihu account is: [https://www.zhihu.com/people/charles_pikachu](https://www.zhihu.com/people/charles_pikachu) (feel free to follow)
10 |
11 | My Bilibili account is: [https://space.bilibili.com/406756145](https://space.bilibili.com/406756145) (feel free to follow)
12 |
13 | Personal email: charlesblwx@gmail.com
--------------------------------------------------------------------------------
/docs/Changelog.md:
--------------------------------------------------------------------------------
1 | # Changelog
2 |
3 | **2022-01-18**
4 |
5 | - Version: v0.1.0,
6 | - Changes: support multiple image/video beautification algorithms.
7 |
8 | **2022-01-19**
9 |
10 | - Version: v0.1.1 to v0.1.2,
11 | - Changes: optimize some algorithms and add support for the face cartoonization algorithm.
12 |
13 | **2022-01-25**
14 |
15 | - Version: v0.1.3 to v0.1.4,
16 | - Changes: add support for genetic-algorithm image fitting and Bezier-curve drawing; refactor the algorithms that require PyTorch so the remaining features stay usable when PyTorch is not installed, and fix the problem that models could not be imported without CUDA.
17 |
18 | **2022-01-26**
19 |
20 | - Version: v0.1.5,
21 | - Changes: fix the mismatch between the FPS of the input and output videos.
22 |
23 | **2022-02-27**
24 |
25 | - Version: v0.1.6,
26 | - Changes: add support for handwritten-note processing, nostalgic photo style and other algorithms.
27 |
28 | **2022-03-21**
29 |
30 | - Version: v0.1.7,
31 | - Changes: change the open-source license to Apache-2.0.
32 |
33 | **2022-03-24**
34 |
35 | - Version: v0.1.8,
36 | - Changes: add author information.
37 |
38 | **2022-04-23**
39 |
40 | - Version: v0.1.9,
41 | - Changes: photomosaic now supports multiple image types.
--------------------------------------------------------------------------------
/docs/Install.md:
--------------------------------------------------------------------------------
1 | # Install Pydrawing
2 |
3 |
4 | #### Requirements
5 |
6 | - Operating system: Linux, macOS or Windows
7 | - Python version: Python3.6+
8 | - ffmpeg: if the input video contains audio, [ffmpeg](https://ffmpeg.org/) is required for decoding, so make sure ffmpeg is installed and on your PATH.
9 | - PyTorch: to use algorithms such as CartoonGan, you need to install PyTorch>=1.0.0 and set up the corresponding environment; see the [official documentation](https://pytorch.org/get-started/locally/).
10 |
11 |
12 | #### Install via pip (recommended)
13 |
14 | Run the following command in a terminal (make sure python is on your PATH):
15 |
16 | ```sh
17 | pip install pydrawing --upgrade
18 | ```
19 |
20 |
21 | #### Install from source
22 |
23 | **1. Online installation**
24 |
25 | Run the following command to install directly from GitHub:
26 |
27 | ```sh
28 | pip install git+https://github.com/CharlesPikachu/pydrawing.git@master
29 | ```
30 |
31 | **2. Offline installation**
32 |
33 | Download the pydrawing source code with:
34 |
35 | ```sh
36 | git clone https://github.com/CharlesPikachu/pydrawing.git
37 | ```
38 |
39 | Then change into the pydrawing directory:
40 |
41 | ```sh
42 | cd pydrawing
43 | ```
44 |
45 | Finally, run the following command to install:
46 |
47 | ```sh
48 | python setup.py install
49 | ```
--------------------------------------------------------------------------------
/docs/Makefile:
--------------------------------------------------------------------------------
1 | # Minimal makefile for Sphinx documentation
2 | #
3 |
4 | # You can set these variables from the command line, and also
5 | # from the environment for the first two.
6 | SPHINXOPTS ?=
7 | SPHINXBUILD ?= sphinx-build
8 | SOURCEDIR = source
9 | BUILDDIR = build
10 |
11 | # Put it first so that "make" without argument is like "make help".
12 | help:
13 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
14 |
15 | .PHONY: help Makefile
16 |
17 | # Catch-all target: route all unknown targets to Sphinx using the new
18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
19 | %: Makefile
20 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
21 |
--------------------------------------------------------------------------------
/docs/Quickstart.md:
--------------------------------------------------------------------------------
1 | # Quick Start
2 |
3 |
4 | ## Supported Algorithms
5 |
6 | #### Image Cartoonization
7 |
8 | **1. Related Paper**
9 |
10 | N/A
11 |
12 | **2. WeChat Article Introduction**
13 |
14 | [Introduction](https://mp.weixin.qq.com/s/efwNQl0JVJt6_x_evdL41A)
15 |
16 | **3. Example**
17 |
18 | ```python
19 | from pydrawing import pydrawing
20 |
21 | config = {'mode': ['rgb', 'hsv'][0]}
22 | filepath = 'input.jpg'
23 | drawing_client = pydrawing.pydrawing()
24 | drawing_client.execute(filepath, 'cartoonise', config=config)
25 | ```
26 |
27 | **4. Config options**
28 |
29 | - savename: filename used when saving the result; default: "output";
30 | - savedir: directory used when saving the result; default: "outputs";
31 | - merge_audio: when processing a video, whether to merge the audio of the original video into the generated video; default: False;
32 | - mode: color space used for cartoonization, either "rgb" or "hsv"; default: "rgb".
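All beautifiers on this page share the same config pattern: keys you pass in `config` override the documented defaults. A minimal sketch of that merge (an illustration, not pydrawing's actual implementation):

```python
# documented defaults for the cartoonise beautifier
DEFAULTS = {'savename': 'output', 'savedir': 'outputs', 'merge_audio': False, 'mode': 'rgb'}

def build_config(user_config=None):
    # start from the documented defaults and let user-supplied keys win
    cfg = dict(DEFAULTS)
    cfg.update(user_config or {})
    return cfg

print(build_config({'mode': 'hsv'}))
```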
33 |
34 | #### Face Cartoonization
35 |
36 | **1. Related Paper**
37 |
38 | [Paper](https://arxiv.org/pdf/1907.10830.pdf)
39 |
40 | **2. WeChat Article Introduction**
41 |
42 | [Introduction](https://mp.weixin.qq.com/s/L0z1ZO1Qztk0EF1KAMfmbA)
43 |
44 | **3. Example**
45 |
46 | ```python
47 | from pydrawing import pydrawing
48 |
49 | config = {'use_face_segmentor': False}
50 | filepath = 'input.jpg'
51 | drawing_client = pydrawing.pydrawing()
52 | drawing_client.execute(filepath, 'cartoonizeface', config=config)
53 | ```
54 |
55 | **4. Config options**
56 |
57 | - savename: filename used when saving the result; default: "output";
58 | - savedir: directory used when saving the result; default: "outputs";
59 | - merge_audio: when processing a video, whether to merge the audio of the original video into the generated video; default: False;
60 | - use_cuda: whether the model uses CUDA acceleration; default: False;
61 | - use_face_segmentor: whether to use a face segmentor to further remove the background around the face; default: False.
62 |
63 | #### Pencil Drawing
64 |
65 | **1. Related Paper**
66 |
67 | [Paper](https://jiaya.me/archive/projects/pencilsketch/npar12_pencil.pdf)
68 |
69 | **2. WeChat Article Introduction**
70 |
71 | [Introduction](https://mp.weixin.qq.com/s/K_2lGGlLKHIIm4iSg0xCUw)
72 |
73 | **3. Example**
74 |
75 | ```python
76 | from pydrawing import pydrawing
77 |
78 | config = {'mode': ['gray', 'color'][0]}
79 | filepath = 'input.jpg'
80 | drawing_client = pydrawing.pydrawing()
81 | drawing_client.execute(filepath, 'pencildrawing', config=config)
82 | ```
83 |
84 | **4. Config options**
85 |
86 | - savename: filename used when saving the result; default: "output";
87 | - savedir: directory used when saving the result; default: "outputs";
88 | - merge_audio: when processing a video, whether to merge the audio of the original video into the generated video; default: False;
89 | - mode: whether the generated image is grayscale or color, either "gray" or "color"; default: "gray";
90 | - kernel_size_scale: pencil-stroke parameter; default: 1/40;
91 | - stroke_width: pencil-stroke parameter; default: 1;
92 | - color_depth: pencil-tone parameter; default: 1;
93 | - weights_color: pencil-tone parameter; default: [62, 30, 5];
94 | - weights_gray: pencil-tone parameter; default: [76, 22, 2];
95 | - texture_path: path to the texture image; by default the bundled "default.jpg" is used.
96 |
97 | #### CartoonGAN
98 |
99 | **1. Related Paper**
100 |
101 | [Paper](https://openaccess.thecvf.com/content_cvpr_2018/CameraReady/2205.pdf)
102 |
103 | **2. WeChat Article Introduction**
104 |
105 | [Introduction](https://mp.weixin.qq.com/s/18fUOO5fH1PVUzTMNNCWwQ)
106 |
107 | **3. Example**
108 | ```python
109 | from pydrawing import pydrawing
110 |
111 | config = {'style': ['Hayao', 'Hosoda', 'Paprika', 'Shinkai'][0]}
112 | filepath = 'input.jpg'
113 | drawing_client = pydrawing.pydrawing()
114 | drawing_client.execute(filepath, 'cartoongan', config=config)
115 | ```
116 |
117 | **4. Config options**
118 |
119 | - savename: filename used when saving the result; default: "output";
120 | - savedir: directory used when saving the result; default: "outputs";
121 | - merge_audio: when processing a video, whether to merge the audio of the original video into the generated video; default: False;
122 | - style: cartoon style, one of "Hayao", "Hosoda", "Paprika" and "Shinkai"; default: "Hosoda";
123 | - use_cuda: whether the model uses CUDA acceleration; default: True.
124 |
125 | #### 快速风格迁移
126 |
127 | **1.相关论文**
128 |
129 | [Paper](https://cs.stanford.edu/people/jcjohns/papers/eccv16/JohnsonECCV16.pdf)
130 |
131 | **2.公众号文章介绍**
132 |
133 | [Introduction](https://mp.weixin.qq.com/s/Ed-1fWOIhI52G-Ugrv7n9Q)
134 |
135 | **3.调用示例**
136 |
137 | ```python
138 | from pydrawing import pydrawing
139 |
140 | config = {'style': ['starrynight', 'cuphead', 'mosaic'][0]}
141 | filepath = 'input.jpg'
142 | drawing_client = pydrawing.pydrawing()
143 | drawing_client.execute(filepath, 'fastneuralstyletransfer', config=config)
144 | ```
145 |
146 | **4. config options**
147 |
148 | - savename: filename used when saving the result, defaults to "output";
149 | - savedir: directory used when saving the result, defaults to "outputs";
150 | - merge_audio: when processing a video, whether to merge the original audio into the generated video, defaults to "False";
151 | - style: the painting style to transfer, supports "starrynight", "cuphead" and "mosaic", defaults to "starrynight";
152 | - use_cuda: whether the model uses CUDA acceleration, defaults to "True".
153 |
154 | #### Douyin effect
155 |
156 | **1. Related paper**
157 |
158 | None yet
159 |
160 | **2. Introduction on the WeChat official account**
161 |
162 | [Introduction](https://mp.weixin.qq.com/s/RRnrO2H84pvtUdDsAYD9Qg)
163 |
164 | **3. Example usage**
165 |
166 | ```python
167 | from pydrawing import pydrawing
168 |
169 | filepath = 'input.jpg'
170 | drawing_client = pydrawing.pydrawing()
171 | drawing_client.execute(filepath, 'douyineffect')
172 | ```
173 |
174 | **4. config options**
175 |
176 | - savename: filename used when saving the result, defaults to "output";
177 | - savedir: directory used when saving the result, defaults to "outputs";
178 | - merge_audio: when processing a video, whether to merge the original audio into the generated video, defaults to "False".
179 |
180 | #### Video to character art
181 |
182 | **1. Related paper**
183 |
184 | None yet
185 |
186 | **2. Introduction on the WeChat official account**
187 |
188 | [Introduction](https://mp.weixin.qq.com/s/yaNQJyeUeisOenEeoVsgDg)
189 |
190 | **3. Example usage**
191 |
192 | ```python
193 | from pydrawing import pydrawing
194 |
195 | filepath = 'input.mp4'
196 | drawing_client = pydrawing.pydrawing()
197 | drawing_client.execute(filepath, 'characterize')
198 | ```
199 |
200 | **4. config options**
201 |
202 | - savename: filename used when saving the result, defaults to "output";
203 | - savedir: directory used when saving the result, defaults to "outputs";
204 | - merge_audio: when processing a video, whether to merge the original audio into the generated video, defaults to "False".
205 |
206 | #### Photomosaic
207 |
208 | **1. Related paper**
209 |
210 | None yet
211 |
212 | **2. Introduction on the WeChat official account**
213 |
214 | [Introduction](https://mp.weixin.qq.com/s/BG1VW3jx0LUazhhifBapVw)
215 |
216 | **3. Example usage**
217 |
218 | ```python
219 | from pydrawing import pydrawing
220 |
221 | config = {'src_images_dir': 'images', 'block_size': 15}
222 | filepath = 'input.jpg'
223 | drawing_client = pydrawing.pydrawing()
224 | drawing_client.execute(filepath, 'photomosaic', config=config)
225 | ```
226 |
227 | **4. config options**
228 |
229 | - savename: filename used when saving the result, defaults to "output";
230 | - savedir: directory used when saving the result, defaults to "outputs";
231 | - merge_audio: when processing a video, whether to merge the original audio into the generated video, defaults to "False";
232 | - block_size: size of each mosaic block, defaults to "15";
233 | - src_images_dir: directory of source images; make sure it contains plenty of images with varied colors so the mosaic turns out well.
234 |
235 | #### Glitch effect
236 |
237 | **1. Related paper**
238 |
239 | None yet
240 |
241 | **2. Introduction on the WeChat official account**
242 |
243 | [Introduction](https://mp.weixin.qq.com/s/Yv0uPLsTGwVnj_PKqYCmAw)
244 |
245 | **3. Example usage**
246 |
247 | ```python
248 | from pydrawing import pydrawing
249 |
250 | filepath = 'input.mp4'
251 | drawing_client = pydrawing.pydrawing()
252 | drawing_client.execute(filepath, 'glitch')
253 | ```
254 |
255 | **4. config options**
256 |
257 | - savename: filename used when saving the result, defaults to "output";
258 | - savedir: directory used when saving the result, defaults to "outputs";
259 | - merge_audio: when processing a video, whether to merge the original audio into the generated video, defaults to "False";
260 | - header_size: size of the file header, rarely needs changing, defaults to "200";
261 | - intensity: parameter controlling the random corruption, defaults to "0.1";
262 | - block_size: number of bytes read from the file at a time, defaults to "100".
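
The example above relies on the defaults; the sketch below spells out every documented option at its default value so the knobs are explicit (any omitted key keeps its default):

```python
# Sketch: full config for 'glitch'; every value shown is the documented default.
config = {
    'savename': 'output',  # filename used when saving the result
    'savedir': 'outputs',  # directory used when saving the result
    'merge_audio': False,  # merge original audio when the input is a video
    'header_size': 200,    # file-header size in bytes; rarely needs changing
    'intensity': 0.1,      # strength of the random corruption
    'block_size': 100,     # number of bytes read at a time
}
```

Passing this dict via `drawing_client.execute(filepath, 'glitch', config=config)` behaves the same as passing no config at all.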
263 |
264 | #### Drawing with Bézier curves
265 |
266 | **1. Related paper**
267 |
268 | None yet
269 |
270 | **2. Introduction on the WeChat official account**
271 |
272 | [Introduction](https://mp.weixin.qq.com/s/SWpaTPw9tOLs5h1EgP30Vw)
273 |
274 | **3. Example usage**
275 |
276 | ```python
277 | from pydrawing import pydrawing
278 |
279 | filepath = 'input.jpg'
280 | drawing_client = pydrawing.pydrawing()
281 | drawing_client.execute(filepath, 'beziercurve')
282 | ```
283 |
284 | **4. config options**
285 |
286 | - savename: filename used when saving the result, defaults to "output";
287 | - savedir: directory used when saving the result, defaults to "outputs";
288 | - merge_audio: when processing a video, whether to merge the original audio into the generated video, defaults to "False";
289 | - num_samples: number of sample points per curve, defaults to "15";
290 | - width: width used for the coordinate transform, defaults to "600";
291 | - height: height used for the coordinate transform, defaults to "600";
292 | - num_colors: number of colors used, defaults to "32".
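
Since the example above passes no config, the sketch below restates every documented option at its default value (omitted keys keep their defaults); pass it through `config=` as in the other modules:

```python
# Sketch: full config for 'beziercurve'; every value shown is the documented default.
config = {
    'savename': 'output',  # filename used when saving the result
    'savedir': 'outputs',  # directory used when saving the result
    'merge_audio': False,  # merge original audio when the input is a video
    'num_samples': 15,     # sample points per Bezier curve
    'width': 600,          # width used for the coordinate transform
    'height': 600,         # height used for the coordinate transform
    'num_colors': 32,      # number of colors (k-means clusters)
}
```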
293 |
294 | #### Genetic algorithm image fitting (circles)
295 |
296 | **1. Related paper**
297 |
298 | None yet
299 |
300 | **2. Introduction on the WeChat official account**
301 |
302 | [Introduction](https://mp.weixin.qq.com/s/L0z1ZO1Qztk0EF1KAMfmbA)
303 |
304 | **3. Example usage**
305 |
306 | ```python
307 | from pydrawing import pydrawing
308 |
309 | filepath = 'input.jpg'
310 | drawing_client = pydrawing.pydrawing()
311 | drawing_client.execute(filepath, 'geneticfittingcircle')
312 | ```
313 |
314 | **4. config options**
315 |
316 | - savename: filename used when saving the result, defaults to "output";
317 | - savedir: directory used when saving the result, defaults to "outputs";
318 | - merge_audio: when processing a video, whether to merge the original audio into the generated video, defaults to "False";
319 | - cache_dir: directory for intermediate results, defaults to "cache";
320 | - save_cache: whether to save intermediate results, defaults to "True";
321 | - init_cfg: initialization parameters of the algorithm, with the following defaults:
322 | ```python
323 | init_cfg = {
324 | 'num_populations': 10,
325 | 'init_num_circles': 1,
326 | 'num_generations': 1e5,
327 | 'print_interval': 1,
328 | 'mutation_rate': 0.1,
329 | 'selection_rate': 0.5,
330 | 'crossover_rate': 0.5,
331 | 'circle_cfg': {'radius_range': 50, 'radius_shift_range': 50, 'center_shift_range': 50, 'color_shift_range': 50},
332 | }
333 | ```
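
To tune the fitting, init_cfg can be placed inside config. The sketch below simply restates the documented defaults; whether a partial init_cfg is merged with the defaults is not documented, so passing the complete dict is the safe choice:

```python
# Sketch: config for 'geneticfittingcircle', restating the documented default init_cfg.
init_cfg = {
    'num_populations': 10,
    'init_num_circles': 1,
    'num_generations': 1e5,
    'print_interval': 1,
    'mutation_rate': 0.1,
    'selection_rate': 0.5,
    'crossover_rate': 0.5,
    'circle_cfg': {'radius_range': 50, 'radius_shift_range': 50, 'center_shift_range': 50, 'color_shift_range': 50},
}
config = {
    'cache_dir': 'cache',  # directory for intermediate results
    'save_cache': True,    # keep intermediate results
    'init_cfg': init_cfg,
}
```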
334 |
335 | #### Genetic algorithm image fitting (polygons)
336 |
337 | **1. Related paper**
338 |
339 | None yet
340 |
341 | **2. Introduction on the WeChat official account**
342 |
343 | [Introduction](https://mp.weixin.qq.com/s/L0z1ZO1Qztk0EF1KAMfmbA)
344 |
345 | **3. Example usage**
346 |
347 | ```python
348 | from pydrawing import pydrawing
349 |
350 | filepath = 'input.jpg'
351 | drawing_client = pydrawing.pydrawing()
352 | drawing_client.execute(filepath, 'geneticfittingpolygon')
353 | ```
354 |
355 | **4. config options**
356 |
357 | - savename: filename used when saving the result, defaults to "output";
358 | - savedir: directory used when saving the result, defaults to "outputs";
359 | - merge_audio: when processing a video, whether to merge the original audio into the generated video, defaults to "False";
360 | - cache_dir: directory for intermediate results, defaults to "cache";
361 | - save_cache: whether to save intermediate results, defaults to "True";
362 | - init_cfg: initialization parameters of the algorithm, with the following defaults:
363 | ```python
364 | init_cfg = {
365 | 'num_populations': 10,
366 | 'num_points_list': list(range(3, 40)),
367 | 'init_num_polygons': 1,
368 | 'num_generations': 1e5,
369 | 'print_interval': 1,
370 | 'mutation_rate': 0.1,
371 | 'selection_rate': 0.5,
372 | 'crossover_rate': 0.5,
373 | 'polygon_cfg': {'size': 50, 'shift_range': 50, 'point_range': 50, 'color_range': 50},
374 | }
375 | ```
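
As with the circle variant, init_cfg can be placed inside config. The sketch below restates the documented defaults; since it is not documented whether a partial init_cfg is merged with the defaults, the complete dict is passed:

```python
# Sketch: config for 'geneticfittingpolygon', restating the documented default init_cfg.
init_cfg = {
    'num_populations': 10,
    'num_points_list': list(range(3, 40)),
    'init_num_polygons': 1,
    'num_generations': 1e5,
    'print_interval': 1,
    'mutation_rate': 0.1,
    'selection_rate': 0.5,
    'crossover_rate': 0.5,
    'polygon_cfg': {'size': 50, 'shift_range': 50, 'point_range': 50, 'color_range': 50},
}
config = {
    'cache_dir': 'cache',  # directory for intermediate results
    'save_cache': True,    # keep intermediate results
    'init_cfg': init_cfg,
}
```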
376 |
377 | #### Nostalgic photo style
378 |
379 | **1. Related paper**
380 |
381 | None yet
382 |
383 | **2. Introduction on the WeChat official account**
384 |
385 | [Introduction](https://mp.weixin.qq.com/s/yRCt69u_gzPI85-vOrb_sQ)
386 |
387 | **3. Example usage**
388 |
389 | ```python
390 | from pydrawing import pydrawing
391 |
392 | filepath = 'input.jpg'
393 | drawing_client = pydrawing.pydrawing()
394 | drawing_client.execute(filepath, 'nostalgicstyle')
395 | ```
396 |
397 | **4. config options**
398 |
399 | - savename: filename used when saving the result, defaults to "output";
400 | - savedir: directory used when saving the result, defaults to "outputs";
401 | - merge_audio: when processing a video, whether to merge the original audio into the generated video, defaults to "False".
402 |
403 | #### Handwritten note processing
404 |
405 | **1. Related paper**
406 |
407 | [Paper](https://mzucker.github.io/2016/09/20/noteshrink.html)
408 |
409 | **2. Introduction on the WeChat official account**
410 |
411 | [Introduction](https://mp.weixin.qq.com/s/yRCt69u_gzPI85-vOrb_sQ)
412 |
413 | **3. Example usage**
414 |
415 | ```python
416 | from pydrawing import pydrawing
417 |
418 | config = {
419 | 'sat_threshold': 0.20,
420 | 'value_threshold': 0.25,
421 | 'num_colors': 8,
422 | 'sample_fraction': 0.05,
423 | 'white_bg': False,
424 | 'saturate': True,
425 | }
426 | filepath = 'input.jpg'
427 | drawing_client = pydrawing.pydrawing()
428 | drawing_client.execute(filepath, 'noteprocessor', config=config)
429 | ```
430 |
431 | **4. config options**
432 |
433 | - savename: filename used when saving the result, defaults to "output";
434 | - savedir: directory used when saving the result, defaults to "outputs";
435 | - merge_audio: when processing a video, whether to merge the original audio into the generated video, defaults to "False";
436 | - sat_threshold: background saturation threshold, defaults to "0.2";
437 | - value_threshold: background value (brightness) threshold, defaults to "0.25";
438 | - num_colors: number of output colors, defaults to "8";
439 | - sample_fraction: fraction of pixels to sample, defaults to "0.05";
440 | - white_bg: make the background white, defaults to "False";
441 | - saturate: fully saturate the output colors, defaults to "True".
442 |
443 | #### Oil painting effect
444 |
445 | **1. Related paper**
446 |
447 | [Paper](https://github.com/cyshih73/Faster-OilPainting/blob/master/Report.pdf)
448 |
449 | **2. Introduction on the WeChat official account**
450 |
451 | [Introduction](https://mp.weixin.qq.com/s/yRCt69u_gzPI85-vOrb_sQ)
452 |
453 | **3. Example usage**
454 |
455 | ```python
456 | from pydrawing import pydrawing
457 |
458 | config = {
459 | 'edge_operator': 'sobel',
460 | 'palette': 0,
461 | 'brush_width': 5,
462 | }
463 | filepath = 'input.jpg'
464 | drawing_client = pydrawing.pydrawing()
465 | drawing_client.execute(filepath, 'oilpainting', config=config)
466 | ```
467 |
468 | **4. config options**
469 |
470 | - savename: filename used when saving the result, defaults to "output";
471 | - savedir: directory used when saving the result, defaults to "outputs";
472 | - merge_audio: when processing a video, whether to merge the original audio into the generated video, defaults to "False";
473 | - brush_width: brush size, defaults to "5";
474 | - palette: palette colors, defaults to "0", which means using the actual colors of the original image;
475 | - edge_operator: edge-detection operator, supports "sobel", "prewitt", "scharr" and "roberts", defaults to "sobel".
476 |
477 | #### Simple photo correction
478 |
479 | **1. Related paper**
480 |
481 | None yet.
482 |
483 | **2. Introduction on the WeChat official account**
484 |
485 | [Introduction](https://mp.weixin.qq.com/s/yRCt69u_gzPI85-vOrb_sQ)
486 |
487 | **3. Example usage**
488 |
489 | ```python
490 | from pydrawing import pydrawing
491 |
492 | config = {
493 | 'epsilon_factor': 0.08,
494 | 'canny_boundaries': [100, 200],
495 | 'use_preprocess': False,
496 | }
497 | filepath = 'input.jpg'
498 | drawing_client = pydrawing.pydrawing()
499 | drawing_client.execute(filepath, 'photocorrection', config=config)
500 | ```
501 |
502 | **4. config options**
503 |
504 | - savename: filename used when saving the result, defaults to "output";
505 | - savedir: directory used when saving the result, defaults to "outputs";
506 | - merge_audio: when processing a video, whether to merge the original audio into the generated video, defaults to "False";
507 | - epsilon_factor: hyperparameter for polygon approximation, defaults to "0.08";
508 | - canny_boundaries: the two threshold values of the Canny edge detector, defaults to "[100, 200]";
509 | - use_preprocess: whether to preprocess the image before edge detection, defaults to "False".
510 |
511 |
512 | ## Run a random demo
513 |
514 | Save the following code to a file and run it:
515 |
516 | ```python
517 | import random
518 | from pydrawing import pydrawing
519 |
520 | filepath = 'asserts/dog.jpg'
521 | config = {
522 | "savedir": "outputs",
523 | "savename": "output"
524 | }
525 | drawing_client = pydrawing.pydrawing()
526 | drawing_client.execute(filepath, random.choice(drawing_client.getallsupports()), config=config)
527 | ```
528 |
529 | Some of the results:
530 |
531 |
532 |

533 |
534 |
535 |
536 |

537 |
538 |
539 |
540 |

541 |
542 |
543 |
544 |

545 |
546 |
--------------------------------------------------------------------------------
/docs/Recommend.md:
--------------------------------------------------------------------------------
1 | # Recommended projects
2 |
3 | - [Making mini games](https://github.com/CharlesPikachu/Games)
4 |
5 | - [Simulated login collection](https://github.com/CharlesPikachu/DecryptLogin)
6 |
7 | - [Music downloader](https://github.com/CharlesPikachu/musicdl)
8 |
9 | - [Video downloader](https://github.com/CharlesPikachu/videodl)
10 |
11 | - [Practical tools](https://github.com/CharlesPikachu/pytools)
12 |
13 | - [Having fun with WeChat](https://github.com/CharlesPikachu/pikachuwechat)
14 |
15 | - [Semantic image segmentation framework](https://github.com/SegmentationBLWX/sssegmentation)
16 |
17 | - [Beautify images or videos](https://github.com/CharlesPikachu/pydrawing)
18 |
19 | - [Image compression algorithms](https://github.com/CharlesPikachu/imagecompressor)
20 |
21 | - [Free proxy tool](https://github.com/CharlesPikachu/freeproxy)
22 |
23 | - [Beautiful star charts](https://github.com/CharlesPikachu/constellation)
24 |
25 | - [Paper downloader](https://github.com/CharlesPikachu/paperdl)
26 |
27 | - [Read news from the State Council Information Office of the PRC in the terminal](https://github.com/CharlesPikachu/sciogovterminal)
28 |
29 | - [Code freedom](https://github.com/CharlesPikachu/codefree)
30 |
31 | - [Deep learning toy projects](https://github.com/CharlesPikachu/deeplearningtoys)
32 |
33 | - [Small data-analysis projects](https://github.com/CharlesPikachu/dataanalysis)
34 |
35 | - [Image downloader](https://github.com/CharlesPikachu/imagedl)
36 |
37 | - [Build a deep learning framework from scratch](https://github.com/CharlesPikachu/pytoydl)
38 |
39 | - [Novel downloader](https://github.com/CharlesPikachu/noveldl)
40 |
--------------------------------------------------------------------------------
/docs/State.md:
--------------------------------------------------------------------------------
1 | # Project statement
2 |
3 |
4 |

5 |
6 |
7 |
8 | This project is intended only for Python enthusiasts to learn from; commercial use is prohibited. Please use it responsibly 🙂
--------------------------------------------------------------------------------
/docs/conf.py:
--------------------------------------------------------------------------------
1 | # Configuration file for the Sphinx documentation builder.
2 | #
3 | # This file only contains a selection of the most common options. For a full
4 | # list see the documentation:
5 | # https://www.sphinx-doc.org/en/master/usage/configuration.html
6 |
7 | # -- Path setup --------------------------------------------------------------
8 |
9 | # If extensions (or modules to document with autodoc) are in another directory,
10 | # add these directories to sys.path here. If the directory is relative to the
11 | # documentation root, use os.path.abspath to make it absolute, like shown here.
12 | #
13 | # import os
14 | # import sys
15 | # sys.path.insert(0, os.path.abspath('.'))
16 |
17 |
18 | # -- Project information -----------------------------------------------------
19 |
20 | project = 'pydrawing'
21 | copyright = '2022, Zhenchao Jin'
22 | author = 'Zhenchao Jin'
23 | release = '0.1.0'
24 |
25 | # -- General configuration ---------------------------------------------------
26 |
27 | # Add any Sphinx extension module names here, as strings. They can be
28 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
29 | # ones.
30 | extensions = [
31 | 'sphinx.ext.autodoc',
32 | 'sphinx.ext.napoleon',
33 | 'sphinx.ext.viewcode',
34 | 'recommonmark',
35 | 'sphinx_markdown_tables',
36 | ]
37 |
38 | # Add any paths that contain templates here, relative to this directory.
39 | templates_path = ['_templates']
40 |
41 | # The suffix(es) of source filenames.
42 | # You can specify multiple suffix as a list of string:
43 | #
44 | source_suffix = {
45 | '.rst': 'restructuredtext',
46 | '.md': 'markdown',
47 | }
48 |
49 | # The master toctree document.
50 | master_doc = 'index'
51 |
52 | # List of patterns, relative to source directory, that match files and
53 | # directories to ignore when looking for source files.
54 | # This pattern also affects html_static_path and html_extra_path.
55 | exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
56 |
57 |
58 | # -- Options for HTML output -------------------------------------------------
59 |
60 | # The theme to use for HTML and HTML Help pages. See the documentation for
61 | # a list of builtin themes.
62 | #
63 | html_theme = 'sphinx_rtd_theme'
64 |
65 | # Add any paths that contain custom static files (such as style sheets) here,
66 | # relative to this directory. They are copied after the builtin static files,
67 | # so a file named "default.css" will overwrite the builtin "default.css".
68 | html_static_path = ['_static']
69 |
70 | # For multi language
71 | # locale_dirs = ['locale/']
72 | # gettext_compact = False
--------------------------------------------------------------------------------
/docs/index.rst:
--------------------------------------------------------------------------------
1 | .. pydrawing documentation master file, created by
2 | sphinx-quickstart on Sat Feb 29 22:07:23 2020.
3 | You can adapt this file completely to your liking, but it should at least
4 | contain the root `toctree` directive.
5 |
6 | Pydrawing Chinese documentation
7 | ========================================
8 |
9 | .. toctree::
10 | :maxdepth: 2
11 |
12 | State.md
13 | Install.md
14 | Quickstart.md
15 | Changelog.md
16 | Recommend.md
17 | Author.md
--------------------------------------------------------------------------------
/docs/logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/docs/logo.png
--------------------------------------------------------------------------------
/docs/make.bat:
--------------------------------------------------------------------------------
1 | @ECHO OFF
2 |
3 | pushd %~dp0
4 |
5 | REM Command file for Sphinx documentation
6 |
7 | if "%SPHINXBUILD%" == "" (
8 | set SPHINXBUILD=sphinx-build
9 | )
10 | set SOURCEDIR=source
11 | set BUILDDIR=build
12 |
13 | if "%1" == "" goto help
14 |
15 | %SPHINXBUILD% >NUL 2>NUL
16 | if errorlevel 9009 (
17 | echo.
18 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
19 | echo.installed, then set the SPHINXBUILD environment variable to point
20 | echo.to the full path of the 'sphinx-build' executable. Alternatively you
21 | echo.may add the Sphinx directory to PATH.
22 | echo.
23 | echo.If you don't have Sphinx installed, grab it from
24 | echo.http://sphinx-doc.org/
25 | exit /b 1
26 | )
27 |
28 | %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
29 | goto end
30 |
31 | :help
32 | %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
33 |
34 | :end
35 | popd
36 |
--------------------------------------------------------------------------------
/docs/pikachu.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/docs/pikachu.jpg
--------------------------------------------------------------------------------
/docs/requirements.txt:
--------------------------------------------------------------------------------
1 | recommonmark
2 | sphinx==4.5.0
3 | sphinx_markdown_tables==0.0.12
4 | sphinx_rtd_theme
--------------------------------------------------------------------------------
/docs/screenshot_characterize.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/docs/screenshot_characterize.gif
--------------------------------------------------------------------------------
/docs/screenshot_fastneuralstyletransfer.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/docs/screenshot_fastneuralstyletransfer.gif
--------------------------------------------------------------------------------
/docs/screenshot_photomosaic.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/docs/screenshot_photomosaic.png
--------------------------------------------------------------------------------
/docs/screeshot_noteprocessor.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/docs/screeshot_noteprocessor.png
--------------------------------------------------------------------------------
/pydrawing/__init__.py:
--------------------------------------------------------------------------------
1 | '''title'''
2 | __title__ = 'pydrawing'
3 | '''description'''
4 | __description__ = 'Pydrawing: Beautify your image or video.'
5 | '''url'''
6 | __url__ = 'https://github.com/CharlesPikachu/pydrawing'
7 | '''version'''
8 | __version__ = '0.1.9'
9 | '''author'''
10 | __author__ = 'Zhenchao Jin'
11 | '''email'''
12 | __email__ = 'charlesblwx@gmail.com'
13 | '''license'''
14 | __license__ = 'Apache License 2.0'
15 | '''copyright'''
16 | __copyright__ = 'Copyright 2021-2022 Zhenchao Jin'
--------------------------------------------------------------------------------
/pydrawing/asserts/animecharacter.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/pydrawing/asserts/animecharacter.jpg
--------------------------------------------------------------------------------
/pydrawing/asserts/badapple.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/pydrawing/asserts/badapple.mp4
--------------------------------------------------------------------------------
/pydrawing/asserts/badapple_color.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/pydrawing/asserts/badapple_color.mp4
--------------------------------------------------------------------------------
/pydrawing/asserts/bridge.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/pydrawing/asserts/bridge.png
--------------------------------------------------------------------------------
/pydrawing/asserts/dog.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/pydrawing/asserts/dog.jpg
--------------------------------------------------------------------------------
/pydrawing/asserts/face.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/pydrawing/asserts/face.jpg
--------------------------------------------------------------------------------
/pydrawing/asserts/monalisa.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/pydrawing/asserts/monalisa.jpg
--------------------------------------------------------------------------------
/pydrawing/asserts/note.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/pydrawing/asserts/note.jpg
--------------------------------------------------------------------------------
/pydrawing/asserts/zurich.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/pydrawing/asserts/zurich.jpg
--------------------------------------------------------------------------------
/pydrawing/modules/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .utils import Logger
3 | from .beautifiers import (
4 | CartooniseBeautifier, PencilDrawingBeautifier, CartoonGanBeautifier, FastNeuralStyleTransferBeautifier,
5 | DouyinEffectBeautifier, CharacterizeBeautifier, PhotomosaicBeautifier, GlitchBeautifier, CartoonizeFaceBeautifier,
6 | GeneticFittingCircleBeautifier, GeneticFittingPolygonBeautifier, BezierCurveBeautifier, NostalgicstyleBeautifier,
7 | NoteprocessorBeautifier, OilpaintingBeautifier, PhotocorrectionBeautifier
8 | )
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .glitch import GlitchBeautifier
3 | from .cartoonise import CartooniseBeautifier
4 | from .cartoongan import CartoonGanBeautifier
5 | from .oilpainting import OilpaintingBeautifier
6 | from .beziercurve import BezierCurveBeautifier
7 | from .photomosaic import PhotomosaicBeautifier
8 | from .characterize import CharacterizeBeautifier
9 | from .douyineffect import DouyinEffectBeautifier
10 | from .noteprocessor import NoteprocessorBeautifier
11 | from .pencildrawing import PencilDrawingBeautifier
12 | from .cartoonizeface import CartoonizeFaceBeautifier
13 | from .nostalgicstyle import NostalgicstyleBeautifier
14 | from .photocorrection import PhotocorrectionBeautifier
15 | from .fastneuralstyletransfer import FastNeuralStyleTransferBeautifier
16 | from .geneticfitting import GeneticFittingCircleBeautifier, GeneticFittingPolygonBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/base/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .base import BaseBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/base/base.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Base class for all beautifiers
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import os
10 | import cv2
11 | import subprocess
12 | from tqdm import tqdm
13 | from ...utils import Images2VideoAndSave, SaveImage, ReadVideo
14 |
15 |
16 | '''Base class for all beautifiers'''
17 | class BaseBeautifier():
18 | def __init__(self, savedir='outputs', savename='output', **kwargs):
19 | self.savename, self.savedir = savename, savedir
20 | self.merge_audio, self.tmp_audio_path = False, 'cache.mp3'
21 | for key, value in kwargs.items(): setattr(self, key, value)
22 | '''process the input file'''
23 | def process(self, filepath, images=None):
24 | assert images is None or filepath is None, 'please input filepath or images rather than both'
25 | # process image / video
26 | if images is None:
27 | # --image
28 | if filepath.split('.')[-1].lower() in ['jpg', 'jpeg', 'png']:
29 | images = [cv2.imread(filepath)]
30 | # --video
31 | elif filepath.split('.')[-1].lower() in ['mp4', 'avi']:
32 | images, self.fps = ReadVideo(filepath)
33 | if self.merge_audio:
34 | p = subprocess.Popen(f'ffmpeg -i {filepath} -f mp3 {self.tmp_audio_path}')
35 | while True:
36 | if subprocess.Popen.poll(p) is not None: break
37 | # --unsupported file format
38 | else:
39 | raise RuntimeError('Unsupported file type %s...' % filepath.split('.')[-1])
40 | outputs, pbar = [], tqdm(images)
41 | for image in pbar:
42 | pbar.set_description('Process image')
43 | outputs.append(self.iterimage(image))
44 | if len(outputs) > 1:
45 | fps, ext = 25, 'avi'
46 | if hasattr(self, 'fps'): fps = self.fps
47 | if hasattr(self, 'ext'): ext = self.ext
48 | Images2VideoAndSave(outputs, savedir=self.savedir, savename=self.savename, fps=fps, ext=ext, logger_handle=self.logger_handle)
49 | else:
50 | ext = 'png'
51 | if hasattr(self, 'ext'): ext = self.ext
52 | SaveImage(outputs[0], savedir=self.savedir, savename=self.savename, ext=ext, logger_handle=self.logger_handle)
53 | # if there is audio, merge it into the new video
54 | if self.merge_audio:
55 | p = subprocess.Popen(f'ffmpeg -i {os.path.join(self.savedir, self.savename+f".{ext}")} -i {self.tmp_audio_path} -strict -2 -f mp4 {os.path.join(self.savedir, self.savename+"_audio.mp4")}')
56 | while True:
57 | if subprocess.Popen.poll(p) is not None: break
58 | os.remove(self.tmp_audio_path)
59 | self.logger_handle.info(f'Video with merged audio is saved into {os.path.join(self.savedir, self.savename+"_audio.mp4")}')
60 | '''iterate over images'''
61 | def iterimage(self, image):
62 | raise NotImplementedError('not to be implemented')
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/beziercurve/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .beziercurve import BezierCurveBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/beziercurve/beziercurve.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Drawing with Bezier curves
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import os
10 | import re
11 | import cv2
12 | import turtle
13 | import numpy as np
14 | from bs4 import BeautifulSoup
15 | from ..base import BaseBeautifier
16 |
17 |
18 | '''Drawing with Bezier curves'''
19 | class BezierCurveBeautifier(BaseBeautifier):
20 | def __init__(self, num_samples=15, width=600, height=600, num_colors=32, **kwargs):
21 | super(BezierCurveBeautifier, self).__init__(**kwargs)
22 | self.num_samples = num_samples
23 | self.width = width
24 | self.height = height
25 | self.num_colors = num_colors
26 | self.rootdir = os.path.split(os.path.abspath(__file__))[0]
27 | '''iterate over images'''
28 | def iterimage(self, image):
29 | data = image.reshape((-1, 3))
30 | data = np.float32(data)
31 | # k-means termination criteria (stop condition, max iterations, accuracy)
32 | criteria = (cv2.TERM_CRITERIA_EPS, 10, 1.0)
33 | # data, number of clusters, preset labels, termination criteria, number of kmeans attempts, how initial centers are chosen
34 | compactness, labels, centers = cv2.kmeans(data, self.num_colors, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
35 | centers = np.uint8(centers)
36 | data_compress = centers[labels.flatten()]
37 | img_new = data_compress.reshape(image.shape)
38 | count = 0
39 | for center in centers:
40 | count += 1
41 | part = cv2.inRange(img_new, center, center)
42 | part = cv2.bitwise_not(part)
43 | cv2.imwrite('.tmp.bmp', part)
44 | os.system(f'{os.path.join(self.rootdir, "potrace.exe")} .tmp.bmp -s --flat')
45 | if count == 1:
46 | self.drawsvg('.tmp.svg', '#%02x%02x%02x' % (center[2], center[1], center[0]), True)
47 | else:
48 | self.drawsvg('.tmp.svg', '#%02x%02x%02x' % (center[2], center[1], center[0]), False)
49 | os.remove('.tmp.bmp')
50 | os.remove('.tmp.svg')
51 | turtle.done()
52 | return image
53 | '''first-order Bezier'''
54 | def FirstOrderBezier(self, p0, p1, t):
55 | assert (len(p0) == 2 and len(p1) == 2) or (len(p0) == 1 and len(p1) == 1)
56 | if len(p0) == 2 and len(p1) == 2:
57 | return p0[0] * (1 - t) + p1[0] * t, p0[1] * (1 - t) + p1[1] * t
58 | else:
59 | return p0 * (1 - t) + p1 * t
60 | '''second-order Bezier'''
61 | def SecondOrderBezier(self, p0, p1, p2):
62 | turtle.goto(p0)
63 | turtle.pendown()
64 | for t in range(0, self.num_samples+1):
65 | p = self.FirstOrderBezier(self.FirstOrderBezier(p0, p1, t/self.num_samples), self.FirstOrderBezier(p1, p2, t/self.num_samples), t/self.num_samples)
66 | turtle.goto(p)
67 | turtle.penup()
68 | '''third-order Bezier'''
69 | def ThirdOrderBezier(self, p0, p1, p2, p3):
70 | p0 = -self.width / 2 + p0[0], self.height / 2 - p0[1]
71 | p1 = -self.width / 2 + p1[0], self.height / 2 - p1[1]
72 | p2 = -self.width / 2 + p2[0], self.height / 2 - p2[1]
73 | p3 = -self.width / 2 + p3[0], self.height / 2 - p3[1]
74 | turtle.goto(p0)
75 | turtle.pendown()
76 | for t in range(0, self.num_samples+1):
77 | p = self.FirstOrderBezier(
78 | self.FirstOrderBezier(self.FirstOrderBezier(p0, p1, t/self.num_samples), self.FirstOrderBezier(p1, p2, t/self.num_samples), t/self.num_samples),
79 | self.FirstOrderBezier(self.FirstOrderBezier(p1, p2, t/self.num_samples), self.FirstOrderBezier(p2, p3, t/self.num_samples), t/self.num_samples),
80 | t/self.num_samples
81 | )
82 | turtle.goto(p)
83 | turtle.penup()
84 | '''draw an SVG file'''
85 | def drawsvg(self, filename, color, is_first=True, speed=1000):
86 | svgfile = open(filename, 'r')
87 | soup = BeautifulSoup(svgfile.read(), 'lxml')
88 | height, width = float(soup.svg.attrs['height'][:-2]), float(soup.svg.attrs['width'][:-2])
89 | scale = tuple(map(float, re.findall(r'scale\((.*?)\)', soup.g.attrs['transform'])[0].split(',')))
90 | scale = scale[0], -scale[1]
91 | if is_first:
92 | turtle.setup(height=height, width=width)
93 | turtle.setworldcoordinates(-width/2, 300, width-width/2, -height+300)
94 | turtle.tracer(100)
95 | turtle.pensize(1)
96 | turtle.speed(speed)
97 | turtle.penup()
98 | turtle.color(color)
99 | for path in soup.find_all('path'):
100 | attrs = path.attrs['d'].replace('\n', ' ')
101 | attrs = attrs.split(' ')
102 | attrs_yield = self.yieldattrs(attrs)
103 | endl = ''
104 | for attr in attrs_yield:
105 | if attr == 'M':
106 | turtle.end_fill()
107 | x, y = attrs_yield.__next__() * scale[0], attrs_yield.__next__() * scale[1]
108 | turtle.penup()
109 | turtle.goto(-self.width/2+x, self.height/2-y)
110 | turtle.pendown()
111 | turtle.begin_fill()
112 | elif attr == 'm':
113 | turtle.end_fill()
114 | dx, dy = attrs_yield.__next__() * scale[0], attrs_yield.__next__() * scale[1]
115 | turtle.penup()
116 | turtle.goto(turtle.xcor()+dx, turtle.ycor()-dy)
117 | turtle.pendown()
118 | turtle.begin_fill()
119 | elif attr == 'C':
120 | p1 = attrs_yield.__next__() * scale[0], attrs_yield.__next__() * scale[1]
121 | p2 = attrs_yield.__next__() * scale[0], attrs_yield.__next__() * scale[1]
122 | p3 = attrs_yield.__next__() * scale[0], attrs_yield.__next__() * scale[1]
123 | turtle.penup()
124 | p0 = turtle.xcor() + self.width / 2, self.height / 2 - turtle.ycor()
125 | self.ThirdOrderBezier(p0, p1, p2, p3)
126 | endl = attr
127 | elif attr == 'c':
128 | turtle.penup()
129 | p0 = turtle.xcor() + self.width / 2, self.height / 2 - turtle.ycor()
130 | p1 = attrs_yield.__next__() * scale[0] + p0[0], attrs_yield.__next__() * scale[1] + p0[1]
131 | p2 = attrs_yield.__next__() * scale[0] + p0[0], attrs_yield.__next__() * scale[1] + p0[1]
132 | p3 = attrs_yield.__next__() * scale[0] + p0[0], attrs_yield.__next__() * scale[1] + p0[1]
133 | self.ThirdOrderBezier(p0, p1, p2, p3)
134 | endl = attr
135 | elif attr == 'L':
136 | x, y = attrs_yield.__next__() * scale[0], attrs_yield.__next__() * scale[1]
137 | turtle.pendown()
138 | turtle.goto(-self.width/2+x, self.height/2-y)
139 | turtle.penup()
140 | elif attr == 'l':
141 | dx, dy = attrs_yield.__next__() * scale[0], attrs_yield.__next__() * scale[1]
142 | turtle.pendown()
143 | turtle.goto(turtle.xcor()+dx, turtle.ycor()-dy)
144 | turtle.penup()
145 | endl = attr
146 | elif endl == 'C':
147 | p1 = attr * scale[0], attrs_yield.__next__() * scale[1]
148 | p2 = attrs_yield.__next__() * scale[0], attrs_yield.__next__() * scale[1]
149 | p3 = attrs_yield.__next__() * scale[0], attrs_yield.__next__() * scale[1]
150 | turtle.penup()
151 | p0 = turtle.xcor() + self.width / 2, self.height / 2 - turtle.ycor()
152 | self.ThirdOrderBezier(p0, p1, p2, p3)
153 | elif endl == 'c':
154 | turtle.penup()
155 | p0 = turtle.xcor() + self.width / 2, self.height / 2 - turtle.ycor()
156 | p1 = attr * scale[0] + p0[0], attrs_yield.__next__() * scale[1] + p0[1]
157 | p2 = attrs_yield.__next__() * scale[0] + p0[0], attrs_yield.__next__() * scale[1] + p0[1]
158 | p3 = attrs_yield.__next__() * scale[0] + p0[0], attrs_yield.__next__() * scale[1] + p0[1]
159 | self.ThirdOrderBezier(p0, p1, p2, p3)
160 | elif endl == 'L':
161 | x, y = attr * scale[0], attrs_yield.__next__() * scale[1]
162 | turtle.pendown()
163 | turtle.goto(-self.width/2+x, self.height/2-y)
164 | turtle.penup()
165 | elif endl == 'l':
166 | dx, dy = attr * scale[0], attrs_yield.__next__() * scale[1]
167 | turtle.pendown()
168 | turtle.goto(turtle.xcor()+dx, turtle.ycor()-dy)
169 | turtle.penup()
170 | turtle.penup()
171 | turtle.hideturtle()
172 | turtle.update()
173 | svgfile.close()
174 | '''attrs generator'''
175 | @staticmethod
176 | def yieldattrs(attrs):
177 | for attr in attrs:
178 | if attr[0].isalpha():
179 | yield attr[0]
180 | if len(attr) > 1: yield float(attr[1:])
181 | elif attr[-1].isalpha():
182 | yield float(attr[0: -1])
183 | # plain numbers, including decimals like "12.5" that the old str.isdigit check missed
184 | else:
185 | yield float(attr)
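The `ThirdOrderBezier` sampler above builds each cubic point from nested first-order interpolations (De Casteljau's scheme). A minimal standalone sketch of that construction, with hypothetical helper names not taken from the library:

```python
def first_order_bezier(p0, p1, t):
    # linear interpolation between two 2D points
    return (p0[0] + (p1[0] - p0[0]) * t, p0[1] + (p1[1] - p0[1]) * t)

def third_order_bezier_point(p0, p1, p2, p3, t):
    # two quadratic blends of consecutive control-point pairs...
    a = first_order_bezier(first_order_bezier(p0, p1, t), first_order_bezier(p1, p2, t), t)
    b = first_order_bezier(first_order_bezier(p1, p2, t), first_order_bezier(p2, p3, t), t)
    # ...then one final linear blend yields the cubic Bezier point
    return first_order_bezier(a, b, t)
```

At t=0 this returns p0 and at t=1 it returns p3, which is why the turtle path starts and ends exactly on the curve's endpoints.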
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/beziercurve/potrace.exe:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/pydrawing/modules/beautifiers/beziercurve/potrace.exe
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/cartoongan/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .cartoongan import CartoonGanBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/cartoongan/cartoongan.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Reproduce the paper "CartoonGAN: Generative Adversarial Networks for Photo Cartoonization"
4 | Author:
5 | Charles
6 | WeChat Official Account:
7 | Charles的皮卡丘
8 | '''
9 | import cv2
10 | import numpy as np
11 | from ..base import BaseBeautifier
12 | try:
13 | import torch
14 | import torch.nn as nn
15 | import torch.nn.functional as F
16 | import torch.utils.model_zoo as model_zoo
17 | import torchvision.transforms as transforms
18 | except ImportError:
19 | print('[Warning]: PyTorch and torchvision are not installed, "cartoongan" will not be available.')
20 |
21 |
22 | '''instance normalization'''
23 | class InstanceNormalization(nn.Module):
24 | def __init__(self, dim, eps=1e-9):
25 | super(InstanceNormalization, self).__init__()
26 | self.scale = nn.Parameter(torch.FloatTensor(dim))
27 | self.shift = nn.Parameter(torch.FloatTensor(dim))
28 | self.eps = eps
29 | '''call'''
30 | def __call__(self, x):
31 | n = x.size(2) * x.size(3)
32 | t = x.view(x.size(0), x.size(1), n)
33 | mean = torch.mean(t, 2).unsqueeze(2).unsqueeze(3).expand_as(x)
34 | var = torch.var(t, 2).unsqueeze(2).unsqueeze(3).expand_as(x) * ((n - 1) / float(n))
35 | scale_broadcast = self.scale.unsqueeze(1).unsqueeze(1).unsqueeze(0)
36 | scale_broadcast = scale_broadcast.expand_as(x)
37 | shift_broadcast = self.shift.unsqueeze(1).unsqueeze(1).unsqueeze(0)
38 | shift_broadcast = shift_broadcast.expand_as(x)
39 | out = (x - mean) / torch.sqrt(var + self.eps)
40 | out = out * scale_broadcast + shift_broadcast
41 | return out
42 |
43 |
44 | '''Network architecture, model adapted from: https://github.com/Yijunmaverick/CartoonGAN-Test-Pytorch-Torch'''
45 | class Transformer(nn.Module):
46 | def __init__(self):
47 | super(Transformer, self).__init__()
48 | self.refpad01_1 = nn.ReflectionPad2d(3)
49 | self.conv01_1 = nn.Conv2d(3, 64, 7)
50 | self.in01_1 = InstanceNormalization(64)
51 | self.conv02_1 = nn.Conv2d(64, 128, 3, 2, 1)
52 | self.conv02_2 = nn.Conv2d(128, 128, 3, 1, 1)
53 | self.in02_1 = InstanceNormalization(128)
54 | self.conv03_1 = nn.Conv2d(128, 256, 3, 2, 1)
55 | self.conv03_2 = nn.Conv2d(256, 256, 3, 1, 1)
56 | self.in03_1 = InstanceNormalization(256)
57 | self.refpad04_1 = nn.ReflectionPad2d(1)
58 | self.conv04_1 = nn.Conv2d(256, 256, 3)
59 | self.in04_1 = InstanceNormalization(256)
60 | self.refpad04_2 = nn.ReflectionPad2d(1)
61 | self.conv04_2 = nn.Conv2d(256, 256, 3)
62 | self.in04_2 = InstanceNormalization(256)
63 | self.refpad05_1 = nn.ReflectionPad2d(1)
64 | self.conv05_1 = nn.Conv2d(256, 256, 3)
65 | self.in05_1 = InstanceNormalization(256)
66 | self.refpad05_2 = nn.ReflectionPad2d(1)
67 | self.conv05_2 = nn.Conv2d(256, 256, 3)
68 | self.in05_2 = InstanceNormalization(256)
69 | self.refpad06_1 = nn.ReflectionPad2d(1)
70 | self.conv06_1 = nn.Conv2d(256, 256, 3)
71 | self.in06_1 = InstanceNormalization(256)
72 | self.refpad06_2 = nn.ReflectionPad2d(1)
73 | self.conv06_2 = nn.Conv2d(256, 256, 3)
74 | self.in06_2 = InstanceNormalization(256)
75 | self.refpad07_1 = nn.ReflectionPad2d(1)
76 | self.conv07_1 = nn.Conv2d(256, 256, 3)
77 | self.in07_1 = InstanceNormalization(256)
78 | self.refpad07_2 = nn.ReflectionPad2d(1)
79 | self.conv07_2 = nn.Conv2d(256, 256, 3)
80 | self.in07_2 = InstanceNormalization(256)
81 | self.refpad08_1 = nn.ReflectionPad2d(1)
82 | self.conv08_1 = nn.Conv2d(256, 256, 3)
83 | self.in08_1 = InstanceNormalization(256)
84 | self.refpad08_2 = nn.ReflectionPad2d(1)
85 | self.conv08_2 = nn.Conv2d(256, 256, 3)
86 | self.in08_2 = InstanceNormalization(256)
87 | self.refpad09_1 = nn.ReflectionPad2d(1)
88 | self.conv09_1 = nn.Conv2d(256, 256, 3)
89 | self.in09_1 = InstanceNormalization(256)
90 | self.refpad09_2 = nn.ReflectionPad2d(1)
91 | self.conv09_2 = nn.Conv2d(256, 256, 3)
92 | self.in09_2 = InstanceNormalization(256)
93 | self.refpad10_1 = nn.ReflectionPad2d(1)
94 | self.conv10_1 = nn.Conv2d(256, 256, 3)
95 | self.in10_1 = InstanceNormalization(256)
96 | self.refpad10_2 = nn.ReflectionPad2d(1)
97 | self.conv10_2 = nn.Conv2d(256, 256, 3)
98 | self.in10_2 = InstanceNormalization(256)
99 | self.refpad11_1 = nn.ReflectionPad2d(1)
100 | self.conv11_1 = nn.Conv2d(256, 256, 3)
101 | self.in11_1 = InstanceNormalization(256)
102 | self.refpad11_2 = nn.ReflectionPad2d(1)
103 | self.conv11_2 = nn.Conv2d(256, 256, 3)
104 | self.in11_2 = InstanceNormalization(256)
105 | self.deconv01_1 = nn.ConvTranspose2d(256, 128, 3, 2, 1, 1)
106 | self.deconv01_2 = nn.Conv2d(128, 128, 3, 1, 1)
107 | self.in12_1 = InstanceNormalization(128)
108 | self.deconv02_1 = nn.ConvTranspose2d(128, 64, 3, 2, 1, 1)
109 | self.deconv02_2 = nn.Conv2d(64, 64, 3, 1, 1)
110 | self.in13_1 = InstanceNormalization(64)
111 | self.refpad12_1 = nn.ReflectionPad2d(3)
112 | self.deconv03_1 = nn.Conv2d(64, 3, 7)
113 | '''forward'''
114 | def forward(self, x):
115 | y = F.relu(self.in01_1(self.conv01_1(self.refpad01_1(x))))
116 | y = F.relu(self.in02_1(self.conv02_2(self.conv02_1(y))))
117 | t04 = F.relu(self.in03_1(self.conv03_2(self.conv03_1(y))))
118 | y = F.relu(self.in04_1(self.conv04_1(self.refpad04_1(t04))))
119 | t05 = self.in04_2(self.conv04_2(self.refpad04_2(y))) + t04
120 | y = F.relu(self.in05_1(self.conv05_1(self.refpad05_1(t05))))
121 | t06 = self.in05_2(self.conv05_2(self.refpad05_2(y))) + t05
122 | y = F.relu(self.in06_1(self.conv06_1(self.refpad06_1(t06))))
123 | t07 = self.in06_2(self.conv06_2(self.refpad06_2(y))) + t06
124 | y = F.relu(self.in07_1(self.conv07_1(self.refpad07_1(t07))))
125 | t08 = self.in07_2(self.conv07_2(self.refpad07_2(y))) + t07
126 | y = F.relu(self.in08_1(self.conv08_1(self.refpad08_1(t08))))
127 | t09 = self.in08_2(self.conv08_2(self.refpad08_2(y))) + t08
128 | y = F.relu(self.in09_1(self.conv09_1(self.refpad09_1(t09))))
129 | t10 = self.in09_2(self.conv09_2(self.refpad09_2(y))) + t09
130 | y = F.relu(self.in10_1(self.conv10_1(self.refpad10_1(t10))))
131 | t11 = self.in10_2(self.conv10_2(self.refpad10_2(y))) + t10
132 | y = F.relu(self.in11_1(self.conv11_1(self.refpad11_1(t11))))
133 | y = self.in11_2(self.conv11_2(self.refpad11_2(y))) + t11
134 | y = F.relu(self.in12_1(self.deconv01_2(self.deconv01_1(y))))
135 | y = F.relu(self.in13_1(self.deconv02_2(self.deconv02_1(y))))
136 | y = torch.tanh(self.deconv03_1(self.refpad12_1(y)))
137 | return y
138 |
139 |
140 | '''Reproduce the paper "CartoonGAN: Generative Adversarial Networks for Photo Cartoonization"'''
141 | class CartoonGanBeautifier(BaseBeautifier):
142 | def __init__(self, style='Hosoda', use_cuda=True, **kwargs):
143 | super(CartoonGanBeautifier, self).__init__(**kwargs)
144 | self.model_urls = {
145 | 'Hayao': 'http://vllab1.ucmerced.edu/~yli62/CartoonGAN/pytorch_pth/Hayao_net_G_float.pth',
146 | 'Hosoda': 'http://vllab1.ucmerced.edu/~yli62/CartoonGAN/pytorch_pth/Hosoda_net_G_float.pth',
147 | 'Paprika': 'http://vllab1.ucmerced.edu/~yli62/CartoonGAN/pytorch_pth/Paprika_net_G_float.pth',
148 | 'Shinkai': 'http://vllab1.ucmerced.edu/~yli62/CartoonGAN/pytorch_pth/Shinkai_net_G_float.pth',
149 | }
150 | assert style in self.model_urls
151 | self.style = style
152 | self.use_cuda = use_cuda
153 | self.transformer = Transformer()
154 | self.transformer.load_state_dict(model_zoo.load_url(self.model_urls[style], map_location='cpu'))
155 | self.transformer.eval()
156 | if torch.cuda.is_available() and self.use_cuda: self.transformer = self.transformer.cuda()
157 | '''process the image'''
158 | def iterimage(self, image):
159 | input_image = transforms.ToTensor()(image).unsqueeze(0)
160 | input_image = -1 + 2 * input_image
161 | if torch.cuda.is_available() and self.use_cuda:
162 | input_image = input_image.cuda()
163 | with torch.no_grad():
164 | output_image = self.transformer(input_image)[0]
165 | output_image = output_image.data.cpu().float() * 0.5 + 0.5
166 | output_image = (output_image.numpy() * 255).astype(np.uint8)
167 | output_image = np.transpose(output_image, (1, 2, 0))
168 | output_image = cv2.resize(output_image, (image.shape[1], image.shape[0]))
169 | return output_image
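`iterimage` above depends on three fixed value ranges: `ToTensor()` yields [0, 1], the generator expects [-1, 1] (hence `-1 + 2 * input_image`), and the tanh output is mapped back to [0, 1] via `* 0.5 + 0.5` before scaling to uint8. A scalar sketch of that round trip (helper names are hypothetical, not part of the library):

```python
def to_model_range(x):
    # [0, 1] -> [-1, 1], as in "input_image = -1 + 2 * input_image"
    return -1 + 2 * x

def to_image_range(y):
    # [-1, 1] -> [0, 1], as in "output_image * 0.5 + 0.5"
    return y * 0.5 + 0.5
```

The two maps are exact inverses, so a pixel that passes through an identity generator would come back unchanged before the final `* 255` quantization.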
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/cartoonise/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .cartoonise import CartooniseBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/cartoonise/cartoonise.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Image cartoonisation
4 | Author:
5 | Charles
6 | WeChat Official Account:
7 | Charles的皮卡丘
8 | '''
9 | import cv2
10 | import numpy as np
11 | from ..base import BaseBeautifier
12 |
13 |
14 | '''Image cartoonisation'''
15 | class CartooniseBeautifier(BaseBeautifier):
16 | def __init__(self, mode='rgb', **kwargs):
17 | super(CartooniseBeautifier, self).__init__(**kwargs)
18 | assert mode in ['rgb', 'hsv']
19 | self.mode = mode
20 | '''process the image'''
21 | def iterimage(self, image):
22 | if self.mode == 'rgb':
23 | return self.processinrgb(image)
24 | elif self.mode == 'hsv':
25 | return self.processinhsv(image)
26 | '''process in RGB space'''
27 | def processinrgb(self, image):
28 | # Step1: edge-preserving denoising of the original image with a bilateral filter
29 | # --downsample
30 | image_bilateral = image
31 | for _ in range(2):
32 | image_bilateral = cv2.pyrDown(image_bilateral)
33 | # --apply the bilateral filter several times
34 | for _ in range(7):
35 | image_bilateral = cv2.bilateralFilter(image_bilateral, d=9, sigmaColor=9, sigmaSpace=7)
36 | # --upsample
37 | for _ in range(2):
38 | image_bilateral = cv2.pyrUp(image_bilateral)
39 | # Step2: convert the Step1 result to grayscale, then denoise it with a median filter
40 | image_gray = cv2.cvtColor(image_bilateral, cv2.COLOR_RGB2GRAY)
41 | image_median = cv2.medianBlur(image_gray, 7)
42 | # Step3: apply an adaptive threshold to the Step2 result to obtain the contours of the original image
43 | image_edge = cv2.adaptiveThreshold(image_median, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, blockSize=9, C=2)
44 | image_edge = cv2.cvtColor(image_edge, cv2.COLOR_GRAY2RGB)
45 | # Step4: merge the Step1 result with the Step3 contours to turn the photo into a cartoon
46 | image_cartoon = cv2.bitwise_and(image_bilateral, image_edge)
47 | # return
48 | return image_cartoon
49 | '''process in HSV space'''
50 | def processinhsv(self, image):
51 | # Step1: convert the image from BGR to HSV, then apply histogram equalization, median filtering and morphological transforms in HSV space
52 | image_hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
53 | h, s, v = cv2.split(image_hsv)
54 | # --histogram equalization
55 | v = cv2.equalizeHist(v)
56 | image_hsv = cv2.merge((h, s, v))
57 | # --median filtering
58 | image_hsv = cv2.medianBlur(image_hsv, 7)
59 | # --morphological transform (closing)
60 | kernel = np.ones((5, 5), np.uint8)
61 | image_hsv = cv2.morphologyEx(image_hsv, cv2.MORPH_CLOSE, kernel, iterations=2)
62 | # --median filtering
63 | image_hsv = cv2.medianBlur(image_hsv, 7)
64 | # Step2: apply an adaptive threshold to the Step1 result to obtain the contours of the original image
65 | image_mask = cv2.cvtColor(image_hsv, cv2.COLOR_HSV2BGR)
66 | image_mask = cv2.cvtColor(image_mask, cv2.COLOR_RGB2GRAY)
67 | image_mask = cv2.medianBlur(image_mask, 7)
68 | image_edge = cv2.adaptiveThreshold(image_mask, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, blockSize=9, C=2)
69 | image_edge = cv2.cvtColor(image_edge, cv2.COLOR_GRAY2RGB)
70 | # Step3: merge the Step2 contours with the original image to turn the photo into a cartoon
71 | image_cartoon = cv2.bitwise_and(image, image_edge)
72 | # return
73 | return image_cartoon
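Both pipelines above extract contours with `cv2.adaptiveThreshold(..., cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, blockSize=9, C=2)`: a pixel becomes 255 when it exceeds the mean of its local block minus the constant C, else 0. A pure-Python sketch of that rule (the function name is hypothetical; OpenCV's border handling and box filtering differ in the details):

```python
def adaptive_threshold_mean(gray, block_size=3, c=2):
    # gray: 2D list of intensities; returns a binary 2D list of 0/255
    h, w = len(gray), len(gray[0])
    half = block_size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # gather the local block, clipped at the image border
            vals = [gray[j][i]
                    for j in range(max(0, y - half), min(h, y + half + 1))
                    for i in range(max(0, x - half), min(w, x + half + 1))]
            mean = sum(vals) / len(vals)
            # THRESH_BINARY with a per-pixel threshold of (local mean - c)
            out[y][x] = 255 if gray[y][x] > mean - c else 0
    return out
```

On a uniform region every pixel passes the test (value > value - c), so flat areas stay white and only pixels noticeably darker than their neighborhood are marked as edge pixels — which is what makes the result look like ink outlines when ANDed with the smoothed image.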
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/cartoonizeface/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .cartoonizeface import CartoonizeFaceBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/cartoonizeface/cartoonizeface.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Face cartoonization
4 | Author:
5 | Charles
6 | WeChat Official Account:
7 | Charles的皮卡丘
8 | '''
9 | import cv2
10 | import numpy as np
11 | from PIL import Image
12 | from ..base import BaseBeautifier
13 | try:
14 | import torch
15 | import torch.nn as nn
16 | import torch.nn.functional as F
17 | import torch.utils.model_zoo as model_zoo
18 | from torch.nn.parameter import Parameter
19 | except ImportError:
20 | print('[Warning]: PyTorch is not installed, "cartoonizeface" will not be available.')
21 |
22 |
23 | '''ConvBlock'''
24 | class ConvBlock(nn.Module):
25 | def __init__(self, dim_in, dim_out):
26 | super(ConvBlock, self).__init__()
27 | self.dim_out = dim_out
28 | self.ConvBlock1 = nn.Sequential(
29 | nn.InstanceNorm2d(dim_in),
30 | nn.ReLU(True),
31 | nn.ReflectionPad2d(1),
32 | nn.Conv2d(dim_in, dim_out//2, kernel_size=3, stride=1, bias=False)
33 | )
34 | self.ConvBlock2 = nn.Sequential(
35 | nn.InstanceNorm2d(dim_out//2),
36 | nn.ReLU(True),
37 | nn.ReflectionPad2d(1),
38 | nn.Conv2d(dim_out//2, dim_out//4, kernel_size=3, stride=1, bias=False)
39 | )
40 | self.ConvBlock3 = nn.Sequential(
41 | nn.InstanceNorm2d(dim_out//4),
42 | nn.ReLU(True),
43 | nn.ReflectionPad2d(1),
44 | nn.Conv2d(dim_out//4, dim_out//4, kernel_size=3, stride=1, bias=False)
45 | )
46 | self.ConvBlock4 = nn.Sequential(
47 | nn.InstanceNorm2d(dim_in),
48 | nn.ReLU(True),
49 | nn.Conv2d(dim_in, dim_out, kernel_size=1, stride=1, bias=False)
50 | )
51 | '''forward'''
52 | def forward(self, x):
53 | residual = x
54 | x1 = self.ConvBlock1(x)
55 | x2 = self.ConvBlock2(x1)
56 | x3 = self.ConvBlock3(x2)
57 | out = torch.cat((x1, x2, x3), 1)
58 | if residual.size(1) != self.dim_out: residual = self.ConvBlock4(residual)
59 | return residual + out
60 |
61 |
62 | '''HourGlassBlock'''
63 | class HourGlassBlock(nn.Module):
64 | def __init__(self, dim_in, dim_out):
65 | super(HourGlassBlock, self).__init__()
66 | self.ConvBlock1_1 = ConvBlock(dim_in, dim_out)
67 | self.ConvBlock1_2 = ConvBlock(dim_out, dim_out)
68 | self.ConvBlock2_1 = ConvBlock(dim_out, dim_out)
69 | self.ConvBlock2_2 = ConvBlock(dim_out, dim_out)
70 | self.ConvBlock3_1 = ConvBlock(dim_out, dim_out)
71 | self.ConvBlock3_2 = ConvBlock(dim_out, dim_out)
72 | self.ConvBlock4_1 = ConvBlock(dim_out, dim_out)
73 | self.ConvBlock4_2 = ConvBlock(dim_out, dim_out)
74 | self.ConvBlock5 = ConvBlock(dim_out, dim_out)
75 | self.ConvBlock6 = ConvBlock(dim_out, dim_out)
76 | self.ConvBlock7 = ConvBlock(dim_out, dim_out)
77 | self.ConvBlock8 = ConvBlock(dim_out, dim_out)
78 | self.ConvBlock9 = ConvBlock(dim_out, dim_out)
79 | '''forward'''
80 | def forward(self, x):
81 | skip1 = self.ConvBlock1_1(x)
82 | down1 = F.avg_pool2d(x, 2)
83 | down1 = self.ConvBlock1_2(down1)
84 | skip2 = self.ConvBlock2_1(down1)
85 | down2 = F.avg_pool2d(down1, 2)
86 | down2 = self.ConvBlock2_2(down2)
87 | skip3 = self.ConvBlock3_1(down2)
88 | down3 = F.avg_pool2d(down2, 2)
89 | down3 = self.ConvBlock3_2(down3)
90 | skip4 = self.ConvBlock4_1(down3)
91 | down4 = F.avg_pool2d(down3, 2)
92 | down4 = self.ConvBlock4_2(down4)
93 | center = self.ConvBlock5(down4)
94 | up4 = self.ConvBlock6(center)
95 | up4 = F.interpolate(up4, scale_factor=2)
96 | up4 = skip4 + up4
97 | up3 = self.ConvBlock7(up4)
98 | up3 = F.interpolate(up3, scale_factor=2)
99 | up3 = skip3 + up3
100 | up2 = self.ConvBlock8(up3)
101 | up2 = F.interpolate(up2, scale_factor=2)
102 | up2 = skip2 + up2
103 | up1 = self.ConvBlock9(up2)
104 | up1 = F.interpolate(up1, scale_factor=2)
105 | up1 = skip1 + up1
106 | return up1
107 |
108 |
109 | '''HourGlass'''
110 | class HourGlass(nn.Module):
111 | def __init__(self, dim_in, dim_out, use_res=True):
112 | super(HourGlass, self).__init__()
113 | self.use_res = use_res
114 | self.HG = nn.Sequential(
115 | HourGlassBlock(dim_in, dim_out),
116 | ConvBlock(dim_out, dim_out),
117 | nn.Conv2d(dim_out, dim_out, kernel_size=1, stride=1, bias=False),
118 | nn.InstanceNorm2d(dim_out),
119 | nn.ReLU(True)
120 | )
121 | self.Conv1 = nn.Conv2d(dim_out, 3, kernel_size=1, stride=1)
122 | if self.use_res:
123 | self.Conv2 = nn.Conv2d(dim_out, dim_out, kernel_size=1, stride=1)
124 | self.Conv3 = nn.Conv2d(3, dim_out, kernel_size=1, stride=1)
125 | '''forward'''
126 | def forward(self, x):
127 | ll = self.HG(x)
128 | tmp_out = self.Conv1(ll)
129 | if self.use_res:
130 | ll = self.Conv2(ll)
131 | tmp_out_ = self.Conv3(tmp_out)
132 | return x + ll + tmp_out_
133 | else:
134 | return tmp_out
135 |
136 |
137 | '''ResnetBlock'''
138 | class ResnetBlock(nn.Module):
139 | def __init__(self, dim, use_bias=False):
140 | super(ResnetBlock, self).__init__()
141 | conv_block = []
142 | conv_block += [nn.ReflectionPad2d(1), nn.Conv2d(dim, dim, kernel_size=3, stride=1, padding=0, bias=use_bias), nn.InstanceNorm2d(dim), nn.ReLU(True)]
143 | conv_block += [nn.ReflectionPad2d(1), nn.Conv2d(dim, dim, kernel_size=3, stride=1, padding=0, bias=use_bias), nn.InstanceNorm2d(dim)]
144 | self.conv_block = nn.Sequential(*conv_block)
145 | '''forward'''
146 | def forward(self, x):
147 | out = x + self.conv_block(x)
148 | return out
149 |
150 |
151 | '''adaLIN'''
152 | class adaLIN(nn.Module):
153 | def __init__(self, num_features, eps=1e-5):
154 | super(adaLIN, self).__init__()
155 | self.eps = eps
156 | self.rho = Parameter(torch.Tensor(1, num_features, 1, 1))
157 | self.rho.data.fill_(0.9)
158 | '''forward'''
159 | def forward(self, input, gamma, beta):
160 | in_mean, in_var = torch.mean(input, dim=[2, 3], keepdim=True), torch.var(input, dim=[2, 3], keepdim=True)
161 | out_in = (input - in_mean) / torch.sqrt(in_var + self.eps)
162 | ln_mean, ln_var = torch.mean(input, dim=[1, 2, 3], keepdim=True), torch.var(input, dim=[1, 2, 3], keepdim=True)
163 | out_ln = (input - ln_mean) / torch.sqrt(ln_var + self.eps)
164 | out = self.rho.expand(input.shape[0], -1, -1, -1) * out_in + (1-self.rho.expand(input.shape[0], -1, -1, -1)) * out_ln
165 | out = out * gamma.unsqueeze(2).unsqueeze(3) + beta.unsqueeze(2).unsqueeze(3)
166 | return out
167 |
168 |
169 | '''SoftAdaLIN'''
170 | class SoftAdaLIN(nn.Module):
171 | def __init__(self, num_features, eps=1e-5):
172 | super(SoftAdaLIN, self).__init__()
173 | self.norm = adaLIN(num_features, eps)
174 | self.w_gamma = Parameter(torch.zeros(1, num_features))
175 | self.w_beta = Parameter(torch.zeros(1, num_features))
176 | self.c_gamma = nn.Sequential(nn.Linear(num_features, num_features), nn.ReLU(True), nn.Linear(num_features, num_features))
177 | self.c_beta = nn.Sequential(nn.Linear(num_features, num_features), nn.ReLU(True), nn.Linear(num_features, num_features))
178 | self.s_gamma = nn.Linear(num_features, num_features)
179 | self.s_beta = nn.Linear(num_features, num_features)
180 | '''forward'''
181 | def forward(self, x, content_features, style_features):
182 | content_gamma, content_beta = self.c_gamma(content_features), self.c_beta(content_features)
183 | style_gamma, style_beta = self.s_gamma(style_features), self.s_beta(style_features)
184 | w_gamma, w_beta = self.w_gamma.expand(x.shape[0], -1), self.w_beta.expand(x.shape[0], -1)
185 | soft_gamma = (1. - w_gamma) * style_gamma + w_gamma * content_gamma
186 | soft_beta = (1. - w_beta) * style_beta + w_beta * content_beta
187 | out = self.norm(x, soft_gamma, soft_beta)
188 | return out
189 |
190 |
191 | '''ResnetSoftAdaLINBlock'''
192 | class ResnetSoftAdaLINBlock(nn.Module):
193 | def __init__(self, dim, use_bias=False):
194 | super(ResnetSoftAdaLINBlock, self).__init__()
195 | self.pad1 = nn.ReflectionPad2d(1)
196 | self.conv1 = nn.Conv2d(dim, dim, kernel_size=3, stride=1, padding=0, bias=use_bias)
197 | self.norm1 = SoftAdaLIN(dim)
198 | self.relu1 = nn.ReLU(True)
199 | self.pad2 = nn.ReflectionPad2d(1)
200 | self.conv2 = nn.Conv2d(dim, dim, kernel_size=3, stride=1, padding=0, bias=use_bias)
201 | self.norm2 = SoftAdaLIN(dim)
202 | '''forward'''
203 | def forward(self, x, content_features, style_features):
204 | out = self.pad1(x)
205 | out = self.conv1(out)
206 | out = self.norm1(out, content_features, style_features)
207 | out = self.relu1(out)
208 | out = self.pad2(out)
209 | out = self.conv2(out)
210 | out = self.norm2(out, content_features, style_features)
211 | return out + x
212 |
213 |
214 | '''LIN'''
215 | class LIN(nn.Module):
216 | def __init__(self, num_features, eps=1e-5):
217 | super(LIN, self).__init__()
218 | self.eps = eps
219 | self.rho = Parameter(torch.Tensor(1, num_features, 1, 1))
220 | self.gamma = Parameter(torch.Tensor(1, num_features, 1, 1))
221 | self.beta = Parameter(torch.Tensor(1, num_features, 1, 1))
222 | self.rho.data.fill_(0.0)
223 | self.gamma.data.fill_(1.0)
224 | self.beta.data.fill_(0.0)
225 | '''forward'''
226 | def forward(self, input):
227 | in_mean, in_var = torch.mean(input, dim=[2, 3], keepdim=True), torch.var(input, dim=[2, 3], keepdim=True)
228 | out_in = (input - in_mean) / torch.sqrt(in_var + self.eps)
229 | ln_mean, ln_var = torch.mean(input, dim=[1, 2, 3], keepdim=True), torch.var(input, dim=[1, 2, 3], keepdim=True)
230 | out_ln = (input - ln_mean) / torch.sqrt(ln_var + self.eps)
231 | out = self.rho.expand(input.shape[0], -1, -1, -1) * out_in + (1-self.rho.expand(input.shape[0], -1, -1, -1)) * out_ln
232 | out = out * self.gamma.expand(input.shape[0], -1, -1, -1) + self.beta.expand(input.shape[0], -1, -1, -1)
233 | return out
234 |
235 |
236 | '''ResnetGenerator, model adapted from: https://github.com/minivision-ai/photo2cartoon'''
237 | class ResnetGenerator(nn.Module):
238 | def __init__(self, ngf=64, img_size=256, light=False):
239 | super(ResnetGenerator, self).__init__()
240 | self.light = light
241 | self.ConvBlock1 = nn.Sequential(
242 | nn.ReflectionPad2d(3),
243 | nn.Conv2d(3, ngf, kernel_size=7, stride=1, padding=0, bias=False),
244 | nn.InstanceNorm2d(ngf),
245 | nn.ReLU(True)
246 | )
247 | self.HourGlass1 = HourGlass(ngf, ngf)
248 | self.HourGlass2 = HourGlass(ngf, ngf)
249 | # Down-Sampling
250 | self.DownBlock1 = nn.Sequential(
251 | nn.ReflectionPad2d(1),
252 | nn.Conv2d(ngf, ngf*2, kernel_size=3, stride=2, padding=0, bias=False),
253 | nn.InstanceNorm2d(ngf * 2),
254 | nn.ReLU(True)
255 | )
256 | self.DownBlock2 = nn.Sequential(
257 | nn.ReflectionPad2d(1),
258 | nn.Conv2d(ngf*2, ngf*4, kernel_size=3, stride=2, padding=0, bias=False),
259 | nn.InstanceNorm2d(ngf*4),
260 | nn.ReLU(True)
261 | )
262 | # Encoder Bottleneck
263 | self.EncodeBlock1 = ResnetBlock(ngf*4)
264 | self.EncodeBlock2 = ResnetBlock(ngf*4)
265 | self.EncodeBlock3 = ResnetBlock(ngf*4)
266 | self.EncodeBlock4 = ResnetBlock(ngf*4)
267 | # Class Activation Map
268 | self.gap_fc = nn.Linear(ngf*4, 1)
269 | self.gmp_fc = nn.Linear(ngf*4, 1)
270 | self.conv1x1 = nn.Conv2d(ngf*8, ngf*4, kernel_size=1, stride=1)
271 | self.relu = nn.ReLU(True)
272 | # Gamma, Beta block
273 | if self.light:
274 | self.FC = nn.Sequential(nn.Linear(ngf*4, ngf*4), nn.ReLU(True), nn.Linear(ngf*4, ngf*4), nn.ReLU(True))
275 | else:
276 | self.FC = nn.Sequential(nn.Linear(img_size//4*img_size//4*ngf*4, ngf*4), nn.ReLU(True), nn.Linear(ngf*4, ngf*4), nn.ReLU(True))
277 | # Decoder Bottleneck
278 | self.DecodeBlock1 = ResnetSoftAdaLINBlock(ngf*4)
279 | self.DecodeBlock2 = ResnetSoftAdaLINBlock(ngf*4)
280 | self.DecodeBlock3 = ResnetSoftAdaLINBlock(ngf*4)
281 | self.DecodeBlock4 = ResnetSoftAdaLINBlock(ngf*4)
282 | # Up-Sampling
283 | self.UpBlock1 = nn.Sequential(
284 | nn.Upsample(scale_factor=2),
285 | nn.ReflectionPad2d(1),
286 | nn.Conv2d(ngf*4, ngf*2, kernel_size=3, stride=1, padding=0, bias=False),
287 | LIN(ngf*2),
288 | nn.ReLU(True)
289 | )
290 | self.UpBlock2 = nn.Sequential(
291 | nn.Upsample(scale_factor=2),
292 | nn.ReflectionPad2d(1),
293 | nn.Conv2d(ngf*2, ngf, kernel_size=3, stride=1, padding=0, bias=False),
294 | LIN(ngf),
295 | nn.ReLU(True)
296 | )
297 | self.HourGlass3 = HourGlass(ngf, ngf)
298 | self.HourGlass4 = HourGlass(ngf, ngf, False)
299 | self.ConvBlock2 = nn.Sequential(
300 | nn.ReflectionPad2d(3),
301 | nn.Conv2d(3, 3, kernel_size=7, stride=1, padding=0, bias=False),
302 | nn.Tanh()
303 | )
304 | '''forward'''
305 | def forward(self, x):
306 | x = self.ConvBlock1(x)
307 | x = self.HourGlass1(x)
308 | x = self.HourGlass2(x)
309 | x = self.DownBlock1(x)
310 | x = self.DownBlock2(x)
311 | x = self.EncodeBlock1(x)
312 | content_features1 = F.adaptive_avg_pool2d(x, 1).view(x.shape[0], -1)
313 | x = self.EncodeBlock2(x)
314 | content_features2 = F.adaptive_avg_pool2d(x, 1).view(x.shape[0], -1)
315 | x = self.EncodeBlock3(x)
316 | content_features3 = F.adaptive_avg_pool2d(x, 1).view(x.shape[0], -1)
317 | x = self.EncodeBlock4(x)
318 | content_features4 = F.adaptive_avg_pool2d(x, 1).view(x.shape[0], -1)
319 | gap = F.adaptive_avg_pool2d(x, 1)
320 | gap_logit = self.gap_fc(gap.view(x.shape[0], -1))
321 | gap_weight = list(self.gap_fc.parameters())[0]
322 | gap = x * gap_weight.unsqueeze(2).unsqueeze(3)
323 | gmp = F.adaptive_max_pool2d(x, 1)
324 | gmp_logit = self.gmp_fc(gmp.view(x.shape[0], -1))
325 | gmp_weight = list(self.gmp_fc.parameters())[0]
326 | gmp = x * gmp_weight.unsqueeze(2).unsqueeze(3)
327 | cam_logit = torch.cat([gap_logit, gmp_logit], 1)
328 | x = torch.cat([gap, gmp], 1)
329 | x = self.relu(self.conv1x1(x))
330 | heatmap = torch.sum(x, dim=1, keepdim=True)
331 | if self.light:
332 | x_ = F.adaptive_avg_pool2d(x, 1)
333 | style_features = self.FC(x_.view(x_.shape[0], -1))
334 | else:
335 | style_features = self.FC(x.view(x.shape[0], -1))
336 | x = self.DecodeBlock1(x, content_features4, style_features)
337 | x = self.DecodeBlock2(x, content_features3, style_features)
338 | x = self.DecodeBlock3(x, content_features2, style_features)
339 | x = self.DecodeBlock4(x, content_features1, style_features)
340 | x = self.UpBlock1(x)
341 | x = self.UpBlock2(x)
342 | x = self.HourGlass3(x)
343 | x = self.HourGlass4(x)
344 | out = self.ConvBlock2(x)
345 | return out, cam_logit, heatmap
346 |
347 |
348 | '''Face cartoonization'''
349 | class CartoonizeFaceBeautifier(BaseBeautifier):
350 | def __init__(self, use_cuda=False, use_face_segmentor=True, **kwargs):
351 | super(CartoonizeFaceBeautifier, self).__init__(**kwargs)
352 | from .facedetector import FaceDetector
353 | from .facesegmentor import FaceSegmentor
354 | self.model_urls = {
355 | 'transformer': 'https://github.com/CharlesPikachu/pydrawing/releases/download/checkpoints/cartoonizeface_transformer.pth',
356 | }
357 | self.use_cuda = use_cuda
358 | self.use_face_segmentor = use_face_segmentor
359 | self.face_detector = FaceDetector(use_cuda=(torch.cuda.is_available() and self.use_cuda))
360 | self.transformer = ResnetGenerator(ngf=32, img_size=256, light=True)
361 | self.transformer.load_state_dict(model_zoo.load_url(self.model_urls['transformer'], map_location='cpu')['genA2B'])
362 | self.transformer.eval()
363 | if use_face_segmentor:
364 | self.face_segmentor = FaceSegmentor()
365 | self.face_segmentor.eval()
366 | if torch.cuda.is_available() and self.use_cuda:
367 | self.transformer = self.transformer.cuda()
368 | if use_face_segmentor: self.face_segmentor = self.face_segmentor.cuda()
369 | '''process the image'''
370 | def iterimage(self, image):
371 | image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
372 | # extract the face
373 | face_rgb = self.face_detector(image)
374 | # segment the face
375 | if self.use_face_segmentor:
376 | face_rgb_for_seg = self.face_segmentor.preprocess(face_rgb)
377 | mask = self.face_segmentor(face_rgb_for_seg)
378 | mask = F.interpolate(mask, size=face_rgb.shape[:2][::-1], mode='bilinear', align_corners=False)
379 | mask = mask[0].argmax(0).cpu().numpy().astype(np.int32)
380 | mask = self.face_segmentor.getfacemask(mask)
381 | else:
382 | mask = np.ones(face_rgb.shape[:2]) * 255
383 | mask = mask[:, :, np.newaxis]
384 | face_rgba = np.dstack((face_rgb, mask))
385 | # preprocess the face
386 | face_rgba = cv2.resize(face_rgba, (256, 256), interpolation=cv2.INTER_AREA)
387 | face = face_rgba[:, :, :3].copy()
388 | mask = face_rgba[:, :, 3][:, :, np.newaxis].copy() / 255.
389 | face = (face * mask + (1 - mask) * 255) / 127.5 - 1
390 | face = np.transpose(face[np.newaxis, :, :, :], (0, 3, 1, 2)).astype(np.float32)
391 | face = torch.from_numpy(face).type(torch.FloatTensor)
392 | if torch.cuda.is_available() and self.use_cuda:
393 | face = face.cuda()
394 | # inference
395 | with torch.no_grad():
396 | face_cartoon = self.transformer(face)[0][0]
397 | # postprocess
398 | face_cartoon = np.transpose(face_cartoon.cpu().numpy(), (1, 2, 0))
399 | face_cartoon = (face_cartoon + 1) * 127.5
400 | face_cartoon = (face_cartoon * mask + 255 * (1 - mask)).astype(np.uint8)
401 | face_cartoon = cv2.cvtColor(face_cartoon, cv2.COLOR_RGB2BGR)
402 | # return
403 | return face_cartoon
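The masking arithmetic in `iterimage` above blends the face over a white background with the segmentation mask (scaled to [0, 1]), normalizes to [-1, 1] via `/ 127.5 - 1`, and maps the model output back with `(x + 1) * 127.5`. A per-pixel sketch of those two steps (helper names are hypothetical):

```python
def composite_and_normalize(pixel, alpha):
    # blend the pixel over a white (255) background with mask weight alpha in [0, 1],
    # then scale to [-1, 1] as in "(face * mask + (1 - mask) * 255) / 127.5 - 1"
    blended = pixel * alpha + (1 - alpha) * 255
    return blended / 127.5 - 1

def denormalize(value):
    # [-1, 1] -> [0, 255], as in "(face_cartoon + 1) * 127.5"
    return (value + 1) * 127.5
```

Wherever the mask is 0 (background), the normalized input is exactly 1.0 (pure white), so the generator only ever sees the segmented face against a white canvas; the output is blended over white again with the same mask.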
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/cartoonizeface/facedetector/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .facedetector import FaceDetector
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/cartoonizeface/facedetector/facedetector.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Face detection
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import cv2
10 | import ssl
11 | import math
12 | import numpy as np
13 | ssl._create_default_https_context = ssl._create_unverified_context
14 |
15 |
16 | '''FaceDetector'''
17 | class FaceDetector():
18 | def __init__(self, use_cuda, **kwargs):
19 | super(FaceDetector, self).__init__()
20 | try:
21 | import face_alignment
22 | except ImportError:
23 | raise RuntimeError('Please run "pip install face_alignment" to install "face_alignment"')
24 | device = 'cuda' if use_cuda else 'cpu'
25 | self.dlib_detector = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device=device, face_detector='dlib')
26 | '''forward'''
27 | def __call__(self, image):
28 | # obtain landmarks
29 | preds = self.dlib_detector.get_landmarks(image)
30 | landmarks = None
31 | if preds is None:
32 | raise RuntimeError('no faces are detected')
33 | elif len(preds) == 1:
34 | landmarks = preds[0]
35 | else:
36 | areas = []
37 | for pred in preds:
38 | landmarks_top = np.min(pred[:, 1])
39 | landmarks_bottom = np.max(pred[:, 1])
40 | landmarks_left = np.min(pred[:, 0])
41 | landmarks_right = np.max(pred[:, 0])
42 | areas.append((landmarks_bottom - landmarks_top) * (landmarks_right - landmarks_left))
43 | max_face_index = np.argmax(areas)
44 | landmarks = preds[max_face_index]
45 | # rotate
46 | left_eye_corner = landmarks[36]
47 | right_eye_corner = landmarks[45]
48 | radian = np.arctan((left_eye_corner[1] - right_eye_corner[1]) / (left_eye_corner[0] - right_eye_corner[0]))
49 | height, width, _ = image.shape
50 | cos = math.cos(radian)
51 | sin = math.sin(radian)
52 | new_w = int(width * abs(cos) + height * abs(sin))
53 | new_h = int(width * abs(sin) + height * abs(cos))
54 | Tx = new_w // 2 - width // 2
55 | Ty = new_h // 2 - height // 2
56 | M = np.array([[cos, sin, (1 - cos) * width / 2. - sin * height / 2. + Tx], [-sin, cos, sin * width / 2. + (1 - cos) * height / 2. + Ty]])
57 | image_rotate = cv2.warpAffine(image, M, (new_w, new_h), borderValue=(255, 255, 255))
58 | landmarks = np.concatenate([landmarks, np.ones((landmarks.shape[0], 1))], axis=1)
59 | landmarks_rotate = np.dot(M, landmarks.T).T
60 | # return
61 | return self.crop(image_rotate, landmarks_rotate)
62 | '''crop'''
63 | def crop(self, image, landmarks):
64 | landmarks_top = np.min(landmarks[:, 1])
65 | landmarks_bottom = np.max(landmarks[:, 1])
66 | landmarks_left = np.min(landmarks[:, 0])
67 | landmarks_right = np.max(landmarks[:, 0])
68 | top = int(landmarks_top - 0.8 * (landmarks_bottom - landmarks_top))
69 | bottom = int(landmarks_bottom + 0.3 * (landmarks_bottom - landmarks_top))
70 | left = int(landmarks_left - 0.3 * (landmarks_right - landmarks_left))
71 | right = int(landmarks_right + 0.3 * (landmarks_right - landmarks_left))
72 | if bottom - top > right - left:
73 | left -= ((bottom - top) - (right - left)) // 2
74 | right = left + (bottom - top)
75 | else:
76 | top -= ((right - left) - (bottom - top)) // 2
77 | bottom = top + (right - left)
78 | image_crop = np.ones((bottom - top + 1, right - left + 1, 3), np.uint8) * 255
79 | h, w = image.shape[:2]
80 | left_white = max(0, -left)
81 | left = max(0, left)
82 | right = min(right, w-1)
83 | right_white = left_white + (right-left)
84 | top_white = max(0, -top)
85 | top = max(0, top)
86 | bottom = min(bottom, h-1)
87 | bottom_white = top_white + (bottom - top)
88 | image_crop[top_white:bottom_white+1, left_white:right_white+1] = image[top:bottom+1, left:right+1].copy()
89 | return image_crop
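The rotation in `__call__` above builds a 2x3 affine matrix `M`, appends a homogeneous 1 to each landmark, and maps the points with `M @ [x, y, 1]^T` (lines 58-59). A minimal sketch of that mapping with a pure-translation matrix (the helper name is illustrative):

```python
import numpy as np

def transform_landmarks(M, landmarks):
    # append a homogeneous coordinate, then apply the 2x3 affine matrix
    homogeneous = np.concatenate([landmarks, np.ones((landmarks.shape[0], 1))], axis=1)
    return np.dot(M, homogeneous.T).T

# identity rotation with a translation of (+10, +5)
M = np.array([[1.0, 0.0, 10.0], [0.0, 1.0, 5.0]])
pts = np.array([[0.0, 0.0], [3.0, 4.0]])
moved = transform_landmarks(M, pts)
```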
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/cartoonizeface/facesegmentor/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .facesegmentor import FaceSegmentor
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/cartoonizeface/facesegmentor/facesegmentor.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Face segmentation
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import cv2
10 | import torch
11 | import numpy as np
12 | import torch.nn as nn
13 | import torch.utils.model_zoo as model_zoo
14 |
15 |
16 | '''config'''
17 | SEGMENTOR_CFG = {
18 | 'type': 'ce2p',
19 | 'benchmark': True,
20 | 'num_classes': -1,
21 | 'align_corners': False,
22 | 'is_multi_gpus': True,
23 | 'distributed': {'is_on': True, 'backend': 'nccl'},
24 | 'norm_cfg': {'type': 'batchnorm2d', 'opts': {}},
25 | 'act_cfg': {'type': 'leakyrelu', 'opts': {'negative_slope': 0.01, 'inplace': True}},
26 | 'backbone': {
27 | 'type': 'resnet101',
28 | 'series': 'resnet',
29 | 'pretrained': False,
30 | 'outstride': 16,
31 | 'use_stem': True,
32 | 'selected_indices': (0, 1, 2, 3),
33 | },
34 | 'ppm': {
35 | 'in_channels': 2048,
36 | 'out_channels': 512,
37 | 'pool_scales': [1, 2, 3, 6],
38 | },
39 | 'epm': {
40 | 'in_channels_list': [256, 512, 1024],
41 | 'hidden_channels': 256,
42 | 'out_channels': 2
43 | },
44 | 'shortcut': {
45 | 'in_channels': 256,
46 | 'out_channels': 48,
47 | },
48 | 'decoder':{
49 | 'stage1': {
50 | 'in_channels': 560,
51 | 'out_channels': 512,
52 | 'dropout': 0,
53 | },
54 | 'stage2': {
55 | 'in_channels': 1280,
56 | 'out_channels': 512,
57 | 'dropout': 0.1
58 | },
59 | },
60 | }
61 | SEGMENTOR_CFG.update(
62 | {
63 | 'num_classes': 20,
64 | 'backbone': {
65 | 'type': 'resnet50',
66 | 'series': 'resnet',
67 | 'pretrained': True,
68 | 'outstride': 8,
69 | 'use_stem': True,
70 | 'selected_indices': (0, 1, 2, 3),
71 | }
72 | }
73 | )
74 |
75 |
76 | '''FaceSegmentor'''
77 | class FaceSegmentor(nn.Module):
78 | def __init__(self, **kwargs):
79 | super(FaceSegmentor, self).__init__()
80 | try:
81 | from ssseg.modules.models.segmentors.ce2p import CE2P
82 | except ImportError:
83 | raise RuntimeError('Please run "pip install sssegmentation" to install "ssseg"')
84 | self.ce2p = CE2P(SEGMENTOR_CFG, mode='TEST')
85 | self.ce2p.load_state_dict(model_zoo.load_url('https://github.com/SegmentationBLWX/modelstore/releases/download/ssseg_ce2p/ce2p_resnet50os8_lip_train.pth', map_location='cpu')['model'])
86 | '''forward'''
87 | def forward(self, x):
88 | return self.ce2p(x)
89 | '''preprocess'''
90 | def preprocess(self, image):
91 | # Resize
92 | output_size = (473, 473)
93 | if image.shape[0] > image.shape[1]:
94 | dsize = min(output_size), max(output_size)
95 | else:
96 | dsize = max(output_size), min(output_size)
97 | image = cv2.resize(image, dsize=dsize, interpolation=cv2.INTER_LINEAR)
98 | # Normalize
99 | mean, std = np.array([123.675, 116.28, 103.53]), np.array([58.395, 57.12, 57.375])
100 | image = image.astype(np.float32)
101 | mean = np.float64(mean.reshape(1, -1))
102 | stdinv = 1 / np.float64(std.reshape(1, -1))
103 | cv2.cvtColor(image, cv2.COLOR_BGR2RGB, image)
104 | cv2.subtract(image, mean, image)
105 | cv2.multiply(image, stdinv, image)
106 | # ToTensor
107 | image = torch.from_numpy((image.transpose((2, 0, 1))).astype(np.float32))
108 | # Return
109 | return image.unsqueeze(0)
110 | '''get face mask'''
111 | def getfacemask(self, mask):
112 | output_mask = np.zeros(mask.shape[:2])
113 | face_idxs = [2, 13]
114 | for idx in face_idxs:
115 | output_mask[mask == idx] = 255
116 | return output_mask
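The `preprocess` method above applies the standard ImageNet-style per-channel normalization `(pixel - mean) / std` after the BGR-to-RGB conversion. A minimal float64 sketch of that normalization (the helper name is illustrative):

```python
import numpy as np

# per-channel RGB mean and std from the config above
mean = np.array([123.675, 116.28, 103.53])
std = np.array([58.395, 57.12, 57.375])

def normalize(image_rgb):
    # subtract the per-channel mean and divide by the per-channel std
    return (np.asarray(image_rgb, dtype=np.float64) - mean) / std

# a pixel equal to the channel means normalizes to exactly zero
out = normalize(np.array([[[123.675, 116.28, 103.53]]]))
```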
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/characterize/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .characterize import CharacterizeBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/characterize/characterize.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Convert video to character art
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import cv2
10 | import numpy as np
11 | from ..base import BaseBeautifier
12 | from PIL import Image, ImageFont, ImageDraw
13 |
14 |
15 | '''Convert video to character art'''
16 | class CharacterizeBeautifier(BaseBeautifier):
17 | CHARS = "$@B%8&WM#*oahkbdpqwmZO0QLCJUYXzcvunxrjft/\\|()1{}[]?-_+~<>i!lI;:,\"^`'. "
18 | def __init__(self, **kwargs):
19 | super(CharacterizeBeautifier, self).__init__(**kwargs)
20 | '''iterimage'''
21 | def iterimage(self, image):
22 | image = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
23 | # size of each character
24 | font = ImageFont.load_default().font
25 | font_w, font_h = font.getsize(self.CHARS[1])
26 | # resize the input so its dimensions are divisible by the character size
27 | image = image.resize((font_w * (image.width // font_w), font_h * (image.height // font_h)), Image.NEAREST)
28 | # original size
29 | h_ori, w_ori = image.height, image.width
30 | # resize
31 | image = image.resize((w_ori // font_w, h_ori // font_h), Image.NEAREST)
32 | h, w = image.height, image.width
33 | # convert image RGB values to characters
34 | txts, colors = '', []
35 | for i in range(h):
36 | for j in range(w):
37 | pixel = image.getpixel((j, i))
38 | colors.append(pixel[:3])
39 | txts += self.rgb2char(*pixel)
40 | image = Image.new('RGB', (w_ori, h_ori), (255, 255, 255))
41 | draw = ImageDraw.Draw(image)
42 | x = y = 0
43 | for j in range(len(txts)):
44 | if x == w_ori: x, y = 0, y + font_h
45 | draw.text((x, y), txts[j], font=font, fill=colors[j])
46 | x += font_w
47 | return cv2.cvtColor(np.asarray(image), cv2.COLOR_RGB2BGR)
48 | '''convert RGB to a character'''
49 | def rgb2char(self, r, g, b, alpha=256):
50 | if alpha == 0: return ''
51 | gray = int(0.2126 * r + 0.7152 * g + 0.0722 * b)
52 | return self.CHARS[gray % len(self.CHARS)]
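`rgb2char` above computes a Rec. 709 luminance value and uses it (modulo the ramp length) to index the character ramp. A minimal standalone sketch; `CHARS` here is a shortened hypothetical ramp, not the class's full one:

```python
# shortened character ramp, dark to light (illustrative only)
CHARS = "@#*+-. "

def rgb2char(r, g, b):
    # Rec. 709 luminance, yielding a value in 0..255
    gray = int(0.2126 * r + 0.7152 * g + 0.0722 * b)
    # index the ramp modulo its length, as in the class above
    return CHARS[gray % len(CHARS)]

darkest = rgb2char(0, 0, 0)  # gray = 0 -> first ramp character
```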
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/douyineffect/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .douyineffect import DouyinEffectBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/douyineffect/douyineffect.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Douyin (TikTok) effect for images
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import cv2
10 | import copy
11 | import numpy as np
12 | from PIL import Image
13 | from ..base import BaseBeautifier
14 |
15 |
16 | '''Douyin (TikTok) effect for images'''
17 | class DouyinEffectBeautifier(BaseBeautifier):
18 | def __init__(self, **kwargs):
19 | super(DouyinEffectBeautifier, self).__init__(**kwargs)
20 | '''iterimage'''
21 | def iterimage(self, image):
22 | image = cv2.cvtColor(image, cv2.COLOR_BGR2RGBA)
23 | # extract the R channel
24 | image_arr_r = copy.deepcopy(image)
25 | image_arr_r[:, :, 1:3] = 0
26 | # extract the G and B channels
27 | image_arr_gb = copy.deepcopy(image)
28 | image_arr_gb[:, :, 0] = 0
29 | # create canvases and paste the two layers with an offset
30 | image_r = Image.fromarray(image_arr_r).convert('RGBA')
31 | image_gb = Image.fromarray(image_arr_gb).convert('RGBA')
32 | canvas_r = Image.new('RGB', (image.shape[1], image.shape[0]), color=(0, 0, 0))
33 | canvas_gb = Image.new('RGB', (image.shape[1], image.shape[0]), color=(0, 0, 0))
34 | canvas_r.paste(image_r, (6, 6), image_r)
35 | canvas_gb.paste(image_gb, (0, 0), image_gb)
36 | output_image = np.array(canvas_gb) + np.array(canvas_r)
37 | output_image = cv2.cvtColor(output_image, cv2.COLOR_RGB2BGR)
38 | # return the result
39 | return output_image
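The effect above splits an image into a red-only layer and a green/blue layer; because the two layers occupy disjoint channels, summing them (after the offset paste) recombines the colors without overflow. A minimal numpy sketch of the channel split:

```python
import numpy as np

# uniform RGB test image
image = np.full((4, 4, 3), 100, dtype=np.uint8)

r_layer = image.copy()
r_layer[:, :, 1:3] = 0   # zero G and B -> red-only layer
gb_layer = image.copy()
gb_layer[:, :, 0] = 0    # zero R -> green/blue layer

# disjoint channels, so uint8 addition cannot overflow here
recombined = r_layer + gb_layer
```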
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/fastneuralstyletransfer/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .fastneuralstyletransfer import FastNeuralStyleTransferBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/fastneuralstyletransfer/fastneuralstyletransfer.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Reproduce the paper "Perceptual Losses for Real-Time Style Transfer and Super-Resolution"
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import cv2
10 | import numpy as np
11 | from ..base import BaseBeautifier
12 | try:
13 | import torch
14 | import torch.nn as nn
15 | import torch.nn.functional as F
16 | import torchvision.models as models
17 | import torch.utils.model_zoo as model_zoo
18 | import torchvision.transforms as transforms
19 | except ImportError:
20 | print('[Warning]: Pytorch and torchvision have not been installed, "fastneuralstyletransfer" will not be available.')
21 |
22 |
23 | '''ConvBlock'''
24 | class ConvBlock(nn.Module):
25 | def __init__(self, in_channels, out_channels, kernel_size, stride=1, upsample=False, normalize=True, relu=True):
26 | super(ConvBlock, self).__init__()
27 | self.upsample = upsample
28 | self.block = nn.Sequential(
29 | nn.ReflectionPad2d(kernel_size // 2), nn.Conv2d(in_channels, out_channels, kernel_size, stride)
30 | )
31 | self.norm = nn.InstanceNorm2d(out_channels, affine=True) if normalize else None
32 | self.relu = relu
33 | '''forward'''
34 | def forward(self, x):
35 | if self.upsample: x = F.interpolate(x, scale_factor=2)
36 | x = self.block(x)
37 | if self.norm is not None: x = self.norm(x)
38 | if self.relu: x = F.relu(x)
39 | return x
40 |
41 |
42 | '''ResidualBlock'''
43 | class ResidualBlock(nn.Module):
44 | def __init__(self, channels):
45 | super(ResidualBlock, self).__init__()
46 | self.block = nn.Sequential(
47 | ConvBlock(channels, channels, kernel_size=3, stride=1, normalize=True, relu=True),
48 | ConvBlock(channels, channels, kernel_size=3, stride=1, normalize=True, relu=False),
49 | )
50 | '''forward'''
51 | def forward(self, x):
52 | return self.block(x) + x
53 |
54 |
55 | '''TransformerNet, model adapted from: https://github.com/eriklindernoren/Fast-Neural-Style-Transfer'''
56 | class TransformerNet(nn.Module):
57 | def __init__(self):
58 | super(TransformerNet, self).__init__()
59 | self.model = nn.Sequential(
60 | ConvBlock(3, 32, kernel_size=9, stride=1),
61 | ConvBlock(32, 64, kernel_size=3, stride=2),
62 | ConvBlock(64, 128, kernel_size=3, stride=2),
63 | ResidualBlock(128),
64 | ResidualBlock(128),
65 | ResidualBlock(128),
66 | ResidualBlock(128),
67 | ResidualBlock(128),
68 | ConvBlock(128, 64, kernel_size=3, upsample=True),
69 | ConvBlock(64, 32, kernel_size=3, upsample=True),
70 | ConvBlock(32, 3, kernel_size=9, stride=1, normalize=False, relu=False),
71 | )
72 | '''forward'''
73 | def forward(self, x):
74 | return self.model(x)
75 |
76 |
77 | '''Reproduce the paper "Perceptual Losses for Real-Time Style Transfer and Super-Resolution"'''
78 | class FastNeuralStyleTransferBeautifier(BaseBeautifier):
79 | def __init__(self, style='starrynight', use_cuda=True, **kwargs):
80 | super(FastNeuralStyleTransferBeautifier, self).__init__(**kwargs)
81 | self.model_urls = {
82 | 'cuphead': 'https://github.com/CharlesPikachu/pydrawing/releases/download/checkpoints/fastneuralstyletransfer_cuphead.pth',
83 | 'mosaic': 'https://github.com/CharlesPikachu/pydrawing/releases/download/checkpoints/fastneuralstyletransfer_mosaic.pth',
84 | 'starrynight': 'https://github.com/CharlesPikachu/pydrawing/releases/download/checkpoints/fastneuralstyletransfer_starrynight.pth',
85 | }
86 | self.mean = np.array([0.485, 0.456, 0.406])
87 | self.std = np.array([0.229, 0.224, 0.225])
88 | self.preprocess = transforms.Compose([transforms.ToTensor(), transforms.Normalize(self.mean, self.std)])
89 | assert style in self.model_urls
90 | self.style = style
91 | self.use_cuda = use_cuda
92 | self.transformer = TransformerNet()
93 | self.transformer.load_state_dict(model_zoo.load_url(self.model_urls[style]))
94 | self.transformer.eval()
95 | if torch.cuda.is_available() and self.use_cuda: self.transformer = self.transformer.cuda()
96 | '''iterimage'''
97 | def iterimage(self, image):
98 | image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
99 | input_image = self.preprocess(image).unsqueeze(0)
100 | if torch.cuda.is_available() and self.use_cuda:
101 | input_image = input_image.cuda()
102 | with torch.no_grad():
103 | output_image = self.transformer(input_image)[0]
104 | output_image = output_image.data.cpu().float()
105 | for c in range(3):
106 | output_image[c, :].mul_(self.std[c]).add_(self.mean[c])
107 | output_image = output_image.mul(255).add_(0.5).clamp_(0, 255).permute(1, 2, 0).to('cpu', torch.uint8).numpy()
108 | output_image = cv2.cvtColor(output_image, cv2.COLOR_RGB2BGR)
109 | output_image = cv2.resize(output_image, (image.shape[1], image.shape[0]))
110 | return output_image
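The post-processing in `iterimage` above undoes `transforms.Normalize(mean, std)` by multiplying each channel by its std and adding back the mean, then rescales to bytes via `mul(255).add_(0.5).clamp_(0, 255)`. A minimal numpy sketch of that denormalization (the helper name is illustrative):

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

def denormalize(tensor_chw):
    # undo Normalize: x * std + mean, per channel (CHW layout)
    x = tensor_chw * std[:, None, None] + mean[:, None, None]
    # scale to 0..255 with +0.5 rounding and a clamp, then truncate to uint8
    return np.clip(x * 255 + 0.5, 0, 255).astype(np.uint8)

# a zero network output maps back to the per-channel means, scaled to bytes
out = denormalize(np.zeros((3, 1, 1)))
```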
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/geneticfitting/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .geneticfittingcircle import GeneticFittingCircleBeautifier
3 | from .geneticfittingpolygon import GeneticFittingPolygonBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/geneticfitting/geneticfittingcircle.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Paint with a genetic algorithm
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import os
10 | import cv2
11 | import copy
12 | import random
13 | import numpy as np
14 | from ...utils import checkdir
15 | from PIL import Image, ImageDraw
16 | from ..base import BaseBeautifier
17 |
18 |
19 | '''Circle'''
20 | class Circle():
21 | def __init__(self, radius_range=50, radius_shift_range=50, center_shift_range=50, color_shift_range=50, target_image=None):
22 | # attributes
23 | self.radius_range = radius_range
24 | self.radius_shift_range = radius_shift_range
25 | self.center_shift_range = center_shift_range
26 | self.color_shift_range = color_shift_range
27 | self.target_image = target_image
28 | # center
29 | image_width, image_height = target_image.size[:2]
30 | x, y = random.randint(0, int(image_width)), random.randint(0, int(image_height))
31 | self.image_width, self.image_height = image_width, image_height
32 | self.center = (x, y)
33 | # radius
34 | self.radius = random.randint(1, radius_range)
35 | # color
36 | r, g, b = np.asarray(target_image)[min(y, image_height-1), min(x, image_width-1)]
37 | self.color = (int(r), int(g), int(b), random.randint(0, 255))
38 | '''mutate'''
39 | def mutate(self):
40 | mutations = ['center', 'radius', 'color', 'reset']
41 | mutation_type = random.choice(mutations)
42 | # randomly shift the center
43 | if mutation_type == 'center':
44 | x_shift = int(random.randint(-self.center_shift_range, self.center_shift_range) * random.random())
45 | y_shift = int(random.randint(-self.center_shift_range, self.center_shift_range) * random.random())
46 | x = min(max(0, x_shift + self.center[0]), self.image_width)
47 | y = min(max(0, y_shift + self.center[1]), self.image_height)
48 | self.center = (x, y)
49 | # randomly change the radius
50 | elif mutation_type == 'radius':
51 | self.radius += int(random.randint(-self.radius_shift_range, self.radius_shift_range) * random.random())
52 | # randomly change the color
53 | elif mutation_type == 'color':
54 | self.color = tuple(c + int(random.randint(-self.color_shift_range, self.color_shift_range) * random.random()) for c in self.color)
55 | self.color = tuple(min(max(c, 0), 255) for c in self.color)
56 | # reset
57 | else:
58 | new_circle = Circle(
59 | radius_range=self.radius_range,
60 | radius_shift_range=self.radius_shift_range,
61 | center_shift_range=self.center_shift_range,
62 | color_shift_range=self.color_shift_range,
63 | target_image=self.target_image,
64 | )
65 | self.center = new_circle.center
66 | self.radius = new_circle.radius
67 | self.color = new_circle.color
68 |
69 |
70 | '''Paint with a genetic algorithm'''
71 | class GeneticFittingCircleBeautifier(BaseBeautifier):
72 | def __init__(self, init_cfg=None, cache_dir='cache', save_cache=True, **kwargs):
73 | super(GeneticFittingCircleBeautifier, self).__init__(**kwargs)
74 | if init_cfg is None:
75 | init_cfg = {
76 | 'num_populations': 10,
77 | 'init_num_circles': 1,
78 | 'num_generations': 1e5,
79 | 'print_interval': 1,
80 | 'mutation_rate': 0.1,
81 | 'selection_rate': 0.5,
82 | 'crossover_rate': 0.5,
83 | 'circle_cfg': {'radius_range': 50, 'radius_shift_range': 50, 'center_shift_range': 50, 'color_shift_range': 50},
84 | }
85 | self.init_cfg = init_cfg
86 | self.cache_dir = cache_dir
87 | self.save_cache = save_cache
88 | '''iterimage'''
89 | def iterimage(self, image):
90 | image = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
91 | # initialize
92 | populations = []
93 | for _ in range(self.init_cfg['num_populations']):
94 | population = []
95 | for _ in range(self.init_cfg['init_num_circles']):
96 | population.append(Circle(
97 | target_image=image,
98 | **self.init_cfg['circle_cfg']
99 | ))
100 | populations.append(population)
101 | # iterate
102 | mutation_rate = self.init_cfg['mutation_rate']
103 | for g in range(1, int(self.init_cfg['num_generations']+1)):
104 | fitnesses = []
105 | for idx, population in enumerate(copy.deepcopy(populations)):
106 | fitness_ori = self.calcfitnesses([population], image)[0]
107 | fitness = 0
108 | while fitness_ori > fitness:
109 | population_new = population + [Circle(target_image=image, **self.init_cfg['circle_cfg'])]
110 | fitness = self.calcfitnesses([population_new], image)[0]
111 | populations[idx] = population_new
112 | fitnesses.append(fitness)
113 | if g % self.init_cfg['print_interval'] == 0:
114 | if self.save_cache:
115 | population = populations[np.argmax(fitnesses)]
116 | output_image = self.draw(population, image)
117 | checkdir(self.cache_dir)
118 | output_image.save(os.path.join(self.cache_dir, f'cache_g{g}.png'))
119 | self.logger_handle.info(f'Generation: {g}, FITNESS: {max(fitnesses)}')
120 | num_populations = len(populations)
121 | # --natural selection
122 | populations = self.select(image, fitnesses, populations)
123 | # --crossover
124 | populations = self.crossover(populations, num_populations)
125 | # --mutation
126 | populations = self.mutate(image, populations, mutation_rate)
127 | # return the best solution
128 | population = populations[np.argmax(fitnesses)]
129 | output_image = self.draw(population, image)
130 | return cv2.cvtColor(np.asarray(output_image), cv2.COLOR_RGB2BGR)
131 | '''natural selection'''
132 | def select(self, image, fitnesses, populations):
133 | sorted_idx = np.argsort(fitnesses)[::-1]
134 | selected_populations = []
135 | selection_rate = self.init_cfg['selection_rate']
136 | for idx in range(int(len(populations) * selection_rate)):
137 | selected_idx = int(sorted_idx[idx])
138 | selected_populations.append(populations[selected_idx])
139 | return selected_populations
140 | '''crossover'''
141 | def crossover(self, populations, num_populations):
142 | indices = list(range(len(populations)))
143 | while len(populations) < num_populations:
144 | idx1 = random.choice(indices)
145 | idx2 = random.choice(indices)
146 | population1 = copy.deepcopy(populations[idx1])
147 | population2 = copy.deepcopy(populations[idx2])
148 | for circle_idx in range(len(population1)):
149 | if self.init_cfg['crossover_rate'] > random.random():
150 | population1[circle_idx] = population2[circle_idx]
151 | populations.append(population1)
152 | return populations
153 | '''mutate'''
154 | def mutate(self, target_image, populations, mutation_rate):
155 | populations_new = copy.deepcopy(populations)
156 | for idx, population in enumerate(populations):
157 | fitness_ori = self.calcfitnesses([population], target_image)[0]
158 | for circle in population:
159 | if mutation_rate > random.random():
160 | circle.mutate()
161 | fitness = self.calcfitnesses([population], target_image)[0]
162 | if fitness > fitness_ori: populations_new[idx] = population
163 | return populations_new
164 | '''calculate fitnesses'''
165 | def calcfitnesses(self, populations, target_image):
166 | fitnesses = []
167 | for idx in range(len(populations)):
168 | image = self.draw(populations[idx], target_image)
169 | fitnesses.append(self.calcsimilarity(image, target_image))
170 | return fitnesses
171 | '''draw'''
172 | def draw(self, population, target_image):
173 | image = Image.new('RGB', target_image.size, "#FFFFFF")
174 | for circle in population:
175 | item = Image.new('RGB', target_image.size, "#000000")
176 | mask = Image.new('L', target_image.size, 255)
177 | draw = ImageDraw.Draw(item)
178 | draw.ellipse((circle.center[0]-circle.radius, circle.center[1]-circle.radius, circle.center[0]+circle.radius, circle.center[1]+circle.radius), fill=circle.color)
179 | draw = ImageDraw.Draw(mask)
180 | draw.ellipse((circle.center[0]-circle.radius, circle.center[1]-circle.radius, circle.center[0]+circle.radius, circle.center[1]+circle.radius), fill=128)
181 | image = Image.composite(image, item, mask)
182 | return image
183 | '''calculate the similarity between two images'''
184 | def calcsimilarity(self, image, target_image):
185 | image = np.asarray(image) / 255.
186 | target_image = np.asarray(target_image) / 255.
187 | similarity = 1 - abs(image - target_image).mean()
188 | return similarity
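`calcsimilarity` above scores a candidate drawing as one minus the mean absolute pixel difference on [0, 1]-scaled values, so identical images score 1.0 and maximally different images score 0.0. A minimal numpy sketch (the helper name is illustrative):

```python
import numpy as np

def calc_similarity(a, b):
    # scale pixel values to [0, 1], then take 1 - mean absolute difference
    a = np.asarray(a) / 255.0
    b = np.asarray(b) / 255.0
    return 1 - np.abs(a - b).mean()

white = np.full((2, 2, 3), 255.0)
black = np.zeros((2, 2, 3))
same = calc_similarity(white, white)      # identical images
opposite = calc_similarity(white, black)  # maximally different images
```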
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/geneticfitting/geneticfittingpolygon.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Paint with a genetic algorithm
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import os
10 | import cv2
11 | import copy
12 | import random
13 | import numpy as np
14 | from ...utils import checkdir
15 | from PIL import Image, ImageDraw
16 | from ..base import BaseBeautifier
17 |
18 |
19 | '''Polygon'''
20 | class Polygon():
21 | def __init__(self, num_points=3, size=50, shift_range=50, point_range=50, color_range=50, target_image=None, **kwargs):
22 | # set attrs
23 | self.size = size
24 | self.num_points = num_points
25 | self.target_image = target_image
26 | self.shift_range = shift_range
27 | self.point_range = point_range
28 | self.color_range = color_range
29 | # points
30 | image_width, image_height = target_image.size[:2]
31 | x, y = random.randint(0, int(image_width)), random.randint(0, int(image_height))
32 | self.points = []
33 | for _ in range(self.num_points):
34 | self.points.append(((y + random.randint(-size, size), x + random.randint(-size, size))))
35 | # color
36 | point = random.choice(self.points)
37 | r, g, b = np.asarray(target_image)[min(point[0], image_height-1), min(point[1], image_width-1)]
38 | self.color = (int(r), int(g), int(b), random.randint(0, 255))
39 | '''mutate'''
40 | def mutate(self):
41 | mutations = ['shift', 'point', 'color', 'reset']
42 | mutation_type = random.choice(mutations)
43 | # shift the whole polygon
44 | if mutation_type == 'shift':
45 | x_shift = int(random.randint(-self.shift_range, self.shift_range) * random.random())
46 | y_shift = int(random.randint(-self.shift_range, self.shift_range) * random.random())
47 | self.points = [(x + x_shift, y + y_shift) for x, y in self.points]
48 | # randomly change one point
49 | elif mutation_type == 'point':
50 | index = random.choice(list(range(len(self.points))))
51 | self.points[index] = (
52 | self.points[index][0] + int(random.randint(-self.point_range, self.point_range) * random.random()),
53 | self.points[index][1] + int(random.randint(-self.point_range, self.point_range) * random.random()),
54 | )
55 | # randomly change the color
56 | elif mutation_type == 'color':
57 | self.color = tuple(c + int(random.randint(-self.color_range, self.color_range) * random.random()) for c in self.color)
58 | self.color = tuple(min(max(c, 0), 255) for c in self.color)
59 | # reset
60 | else:
61 | new_polygon = Polygon(
62 | num_points=max(self.num_points + random.choice([-1, 0, 1]), 3),
63 | size=self.size,
64 | shift_range=self.shift_range,
65 | point_range=self.point_range,
66 | color_range=self.color_range,
67 | target_image=self.target_image
68 | )
69 | self.points = new_polygon.points
70 | self.color = new_polygon.color
71 |
72 |
73 | '''Paint with a genetic algorithm'''
74 | class GeneticFittingPolygonBeautifier(BaseBeautifier):
75 | def __init__(self, init_cfg=None, cache_dir='cache', save_cache=True, **kwargs):
76 | super(GeneticFittingPolygonBeautifier, self).__init__(**kwargs)
77 | if init_cfg is None:
78 | init_cfg = {
79 | 'num_populations': 10,
80 | 'num_points_list': list(range(3, 40)),
81 | 'init_num_polygons': 1,
82 | 'num_generations': 1e5,
83 | 'print_interval': 1,
84 | 'mutation_rate': 0.1,
85 | 'selection_rate': 0.5,
86 | 'crossover_rate': 0.5,
87 | 'polygon_cfg': {'size': 50, 'shift_range': 50, 'point_range': 50, 'color_range': 50},
88 | }
89 | self.init_cfg = init_cfg
90 | self.cache_dir = cache_dir
91 | self.save_cache = save_cache
92 | '''iterimage'''
93 | def iterimage(self, image):
94 | image = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
95 | # initialize
96 | populations = []
97 | for _ in range(self.init_cfg['num_populations']):
98 | population = []
99 | for _ in range(self.init_cfg['init_num_polygons']):
100 | population.append(Polygon(
101 | target_image=image,
102 | num_points=random.choice(self.init_cfg['num_points_list']),
103 | **self.init_cfg['polygon_cfg']
104 | ))
105 | populations.append(population)
107 | # iterate
107 | mutation_rate = self.init_cfg['mutation_rate']
108 | for g in range(1, int(self.init_cfg['num_generations']+1)):
109 | fitnesses = []
110 | for idx, population in enumerate(copy.deepcopy(populations)):
111 | fitness_ori = self.calcfitnesses([population], image)[0]
112 | fitness = 0
113 | while fitness_ori > fitness:
114 | population_new = population + [Polygon(target_image=image, num_points=random.choice(self.init_cfg['num_points_list']), **self.init_cfg['polygon_cfg'])]
115 | fitness = self.calcfitnesses([population_new], image)[0]
116 | populations[idx] = population_new
117 | fitnesses.append(fitness)
118 | if g % self.init_cfg['print_interval'] == 0:
119 | if self.save_cache:
120 | population = populations[np.argmax(fitnesses)]
121 | output_image = self.draw(population, image)
122 | checkdir(self.cache_dir)
123 | output_image.save(os.path.join(self.cache_dir, f'cache_g{g}.png'))
124 | self.logger_handle.info(f'Generation: {g}, FITNESS: {max(fitnesses)}')
125 | num_populations = len(populations)
126 | # --natural selection
127 | populations = self.select(image, fitnesses, populations)
128 | # --crossover
129 | populations = self.crossover(populations, num_populations)
130 | # --mutation
131 | populations = self.mutate(image, populations, mutation_rate)
132 | # return the best solution
133 | population = populations[np.argmax(fitnesses)]
134 | output_image = self.draw(population, image)
135 | return cv2.cvtColor(np.asarray(output_image), cv2.COLOR_RGB2BGR)
136 | '''natural selection'''
137 | def select(self, image, fitnesses, populations):
138 | sorted_idx = np.argsort(fitnesses)[::-1]
139 | selected_populations = []
140 | selection_rate = self.init_cfg['selection_rate']
141 | for idx in range(int(len(populations) * selection_rate)):
142 | selected_idx = int(sorted_idx[idx])
143 | selected_populations.append(populations[selected_idx])
144 | return selected_populations
145 | '''crossover'''
146 | def crossover(self, populations, num_populations):
147 | indices = list(range(len(populations)))
148 | while len(populations) < num_populations:
149 | idx1 = random.choice(indices)
150 | idx2 = random.choice(indices)
151 | population1 = copy.deepcopy(populations[idx1])
152 | population2 = copy.deepcopy(populations[idx2])
153 | for polygon_idx in range(len(population1)):
154 | if self.init_cfg['crossover_rate'] > random.random():
155 | population1[polygon_idx] = population2[polygon_idx]
156 | populations.append(population1)
157 | return populations
158 | '''mutate'''
159 | def mutate(self, target_image, populations, mutation_rate):
160 | populations_new = copy.deepcopy(populations)
161 | for idx, population in enumerate(populations):
162 | fitness_ori = self.calcfitnesses([population], target_image)[0]
163 | for polygon in population:
164 | if mutation_rate > random.random():
165 | polygon.mutate()
166 | fitness = self.calcfitnesses([population], target_image)[0]
167 | if fitness > fitness_ori: populations_new[idx] = population
168 | return populations_new
169 | '''Compute fitness values'''
170 | def calcfitnesses(self, populations, target_image):
171 | fitnesses = []
172 | for idx in range(len(populations)):
173 | image = self.draw(populations[idx], target_image)
174 | fitnesses.append(self.calcsimilarity(image, target_image))
175 | return fitnesses
176 | '''Draw the image'''
177 | def draw(self, population, target_image):
178 | image = Image.new('RGB', target_image.size, "#FFFFFF")
179 | for polygon in population:
180 | item = Image.new('RGB', target_image.size, "#000000")
181 | mask = Image.new('L', target_image.size, 255)
182 | draw = ImageDraw.Draw(item)
183 | draw.polygon(polygon.points, polygon.color)
184 | draw = ImageDraw.Draw(mask)
185 | draw.polygon(polygon.points, fill=128)
186 | image = Image.composite(image, item, mask)
187 | return image
188 | '''Compute the similarity between two images'''
189 | def calcsimilarity(self, image, target_image):
190 | image = np.asarray(image) / 255.
191 | target_image = np.asarray(target_image) / 255.
192 | similarity = 1 - abs(image - target_image).mean()
193 | return similarity
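The fitness used by the genetic fitting above is simply one minus the mean absolute pixel difference after scaling both images into [0, 1]. A minimal standalone sketch of just this metric (the function name mirrors the method, but this is an illustrative extract, not the module's API):

```python
import numpy as np

def calcsimilarity(image, target_image):
    # scale both images to [0, 1] and score by mean absolute difference
    image = np.asarray(image, dtype=np.float64) / 255.
    target_image = np.asarray(target_image, dtype=np.float64) / 255.
    return 1 - np.abs(image - target_image).mean()

a = np.full((4, 4, 3), 255, dtype=np.uint8)
print(calcsimilarity(a, a))                  # identical images -> 1.0
print(calcsimilarity(a, np.zeros_like(a)))   # maximally different -> 0.0
```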
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/glitch/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .glitch import GlitchBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/glitch/glitch.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Signal-glitch effect
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import os
10 | import random
11 | from ...utils import checkdir
12 | from ..base import BaseBeautifier
13 |
14 |
15 | '''Signal-glitch effect'''
16 | class GlitchBeautifier(BaseBeautifier):
17 | def __init__(self, header_size=200, intensity=0.1, block_size=100, **kwargs):
18 | super(GlitchBeautifier, self).__init__(**kwargs)
19 | self.header_size, self.intensity, self.block_size = header_size, intensity, block_size
20 | '''Process the file'''
21 | def process(self, filepath):
22 | checkdir(self.savedir)
23 | ext = filepath.split('.')[-1]
24 | assert ext.lower() in ['mp4', 'avi']
25 | with open(filepath, 'rb') as fp_in:
26 | with open(os.path.join(self.savedir, f'{self.savename}.{ext}'), 'wb') as fp_out:
27 | fp_out.write(fp_in.read(self.header_size))
28 | while True:
29 | block_data = fp_in.read(self.block_size)
30 | if not block_data: break
31 | if random.random() < self.intensity / 100: block_data = os.urandom(len(block_data))
32 | fp_out.write(block_data)
33 | self.logger_handle.info(f'Video is saved into {self.savename}.{ext}')
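The glitch effect keeps the file header intact (so the container stays decodable) and then, with some probability, replaces each fixed-size block with random bytes. A rough in-memory sketch of the same idea, operating on a byte buffer instead of a file (names and defaults here are illustrative):

```python
import io
import os
import random

def glitch_bytes(data, header_size=16, intensity=50.0, block_size=8, seed=0):
    # keep the header intact, then randomly corrupt subsequent blocks
    rng = random.Random(seed)
    fp_in, out = io.BytesIO(data), bytearray(data[:header_size])
    fp_in.seek(header_size)
    while True:
        block = fp_in.read(block_size)
        if not block: break
        # replace the block with random bytes of the SAME length,
        # so the output stays exactly as long as the input
        if rng.random() < intensity / 100: block = os.urandom(len(block))
        out.extend(block)
    return bytes(out)

data = bytes(range(64))
glitched = glitch_bytes(data)
print(len(glitched) == len(data), glitched[:16] == data[:16])  # True True
```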
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/nostalgicstyle/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .nostalgicstyle import NostalgicstyleBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/nostalgicstyle/nostalgicstyle.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Nostalgic photo style
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import numpy as np
10 | from ..base import BaseBeautifier
11 |
12 |
13 | '''Nostalgic photo style'''
14 | class NostalgicstyleBeautifier(BaseBeautifier):
15 | def __init__(self, **kwargs):
16 | super(NostalgicstyleBeautifier, self).__init__(**kwargs)
17 | '''Process a single image'''
18 | def iterimage(self, image):
19 | image = image.astype(np.float32)
20 | image_processed = image.copy()
21 | image_processed[..., 0] = image[..., 2] * 0.272 + image[..., 1] * 0.534 + image[..., 0] * 0.131
22 | image_processed[..., 1] = image[..., 2] * 0.349 + image[..., 1] * 0.686 + image[..., 0] * 0.168
23 | image_processed[..., 2] = image[..., 2] * 0.393 + image[..., 1] * 0.769 + image[..., 0] * 0.189
24 | image_processed[image_processed > 255.0] = 255.0
25 | return image_processed
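The nostalgic effect is the classic sepia transform: each output channel is a fixed linear combination of the input R, G, B values, clipped to 255. A sketch of the same matrix applied in RGB channel order for readability (the class above works on OpenCV's BGR layout, hence its reversed indexing):

```python
import numpy as np

# sepia matrix: rows produce R', G', B' from (R, G, B)
SEPIA = np.array([[0.393, 0.769, 0.189],
                  [0.349, 0.686, 0.168],
                  [0.272, 0.534, 0.131]])

def nostalgic_rgb(image):
    # matrix-multiply every RGB pixel by the sepia matrix, then clip to 255
    out = image.astype(np.float32) @ SEPIA.T
    return np.clip(out, 0, 255).astype(np.uint8)

pixel = np.array([[[100, 150, 200]]], dtype=np.uint8)
print(nostalgic_rgb(pixel)[0, 0])  # -> [192 171 133]
```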
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/noteprocessor/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .noteprocessor import NoteprocessorBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/noteprocessor/noteprocessor.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Handwritten-note processing
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import os
10 | import cv2
11 | import numpy as np
12 | from PIL import Image
13 | from ..base import BaseBeautifier
14 | from scipy.cluster.vq import kmeans, vq
15 |
16 |
17 | '''Handwritten-note processing'''
18 | class NoteprocessorBeautifier(BaseBeautifier):
19 | def __init__(self, value_threshold=0.25, sat_threshold=0.20, num_colors=8, sample_fraction=0.05, white_bg=False, saturate=True, **kwargs):
20 | super(NoteprocessorBeautifier, self).__init__(**kwargs)
21 | self.num_colors = num_colors
22 | self.sample_fraction = sample_fraction
23 | self.value_threshold = value_threshold
24 | self.sat_threshold = sat_threshold
25 | self.white_bg = white_bg
26 | self.saturate = saturate
27 | '''Process a single image'''
28 | def iterimage(self, image):
29 | image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
30 | sampled_pixels = self.getsampledpixels(image, self.sample_fraction)
31 | palette = self.getpalette(sampled_pixels)
32 | labels = self.applypalette(image, palette)
33 | if self.saturate:
34 | palette = palette.astype(np.float32)
35 | pmin = palette.min()
36 | pmax = palette.max()
37 | palette = 255 * (palette - pmin) / (pmax - pmin)
38 | palette = palette.astype(np.uint8)
39 | if self.white_bg:
40 | palette = palette.copy()
41 | palette[0] = (255, 255, 255)
42 | image_processed = Image.fromarray(labels, 'P')
43 | image_processed.putpalette(palette.flatten())
44 | image_processed.save('tmp.png', dpi=(300, 300))
45 | image_processed = cv2.imread('tmp.png')
46 | os.remove('tmp.png')
47 | return image_processed
48 | '''Apply the palette to the given image: first set all background pixels to the background color, then use nearest-neighbor matching to map each foreground pixel to the closest palette color'''
49 | def applypalette(self, image, palette):
50 | bg_color = palette[0]
51 | fg_mask = self.getfgmask(bg_color, image)
52 | orig_shape = image.shape
53 | pixels = image.reshape((-1, 3))
54 | fg_mask = fg_mask.flatten()
55 | num_pixels = pixels.shape[0]
56 | labels = np.zeros(num_pixels, dtype=np.uint8)
57 | labels[fg_mask], _ = vq(pixels[fg_mask], palette)
58 | return labels.reshape(orig_shape[:-1])
59 | '''Sample a fixed fraction of the image's pixels, returned in random order'''
60 | def getsampledpixels(self, image, sample_fraction):
61 | pixels = image.reshape((-1, 3))
62 | num_pixels = pixels.shape[0]
63 | num_samples = int(num_pixels * sample_fraction)
64 | idx = np.arange(num_pixels)
65 | np.random.shuffle(idx)
66 | return pixels[idx[:num_samples]]
67 | '''Extract a palette from the sampled RGB values; the first entry is always the background color, the rest are determined by running K-means clustering on the foreground pixels'''
68 | def getpalette(self, samples, return_mask=False, kmeans_iter=40):
69 | bg_color = self.getbgcolor(samples, 6)
70 | fg_mask = self.getfgmask(bg_color, samples)
71 | centers, _ = kmeans(samples[fg_mask].astype(np.float32), self.num_colors-1, iter=kmeans_iter)
72 | palette = np.vstack((bg_color, centers)).astype(np.uint8)
73 | if not return_mask: return palette
74 | return palette, fg_mask
75 | '''Determine whether each pixel in a set of samples is foreground by comparing it to the background color; a pixel is classified as foreground if its value or saturation differs from the background by more than the corresponding threshold'''
76 | def getfgmask(self, bg_color, samples):
77 | s_bg, v_bg = self.rgbtosv(bg_color)
78 | s_samples, v_samples = self.rgbtosv(samples)
79 | s_diff = np.abs(s_bg - s_samples)
80 | v_diff = np.abs(v_bg - v_samples)
81 | return ((v_diff >= self.value_threshold) | (s_diff >= self.sat_threshold))
82 | '''Convert an RGB image or array of RGB colors to saturation and value, each returned as a separate 32-bit float array or value'''
83 | def rgbtosv(self, rgb):
84 | if not isinstance(rgb, np.ndarray): rgb = np.array(rgb)
85 | axis = len(rgb.shape) - 1
86 | cmax = rgb.max(axis=axis).astype(np.float32)
87 | cmin = rgb.min(axis=axis).astype(np.float32)
88 | delta = cmax - cmin
89 | saturation = delta.astype(np.float32) / cmax.astype(np.float32)
90 | saturation = np.where(cmax==0, 0, saturation)
91 | value = cmax / 255.0
92 | return saturation, value
93 | '''Get the background color of an image or array of RGB colors by grouping similar colors together and finding the most common one'''
94 | def getbgcolor(self, image, bits_per_channel=6):
95 | assert image.shape[-1] == 3
96 | image_quantized = self.quantize(image, bits_per_channel).astype(int)
97 | image_packed = self.packrgb(image_quantized)
98 | unique, counts = np.unique(image_packed, return_counts=True)
99 | packed_mode = unique[counts.argmax()]
100 | return self.unpackrgb(packed_mode)
101 | '''Reduce the number of bits per RGB channel in the given image'''
102 | def quantize(self, image, bits_per_channel=6):
103 | assert image.dtype == np.uint8
104 | shift = 8 - bits_per_channel
105 | halfbin = (1 << shift) >> 1
106 | return ((image.astype(int) >> shift) << shift) + halfbin
107 | '''Pack 24-bit RGB triplets into single integers; rgb may be a tuple or an array'''
108 | def packrgb(self, rgb):
109 | orig_shape = None
110 | if isinstance(rgb, np.ndarray):
111 | assert rgb.shape[-1] == 3
112 | orig_shape = rgb.shape[:-1]
113 | else:
114 | assert len(rgb) == 3
115 | rgb = np.array(rgb)
116 | rgb = rgb.astype(int).reshape((-1, 3))
117 | packed = (rgb[:, 0] << 16 | rgb[:, 1] << 8 | rgb[:, 2])
118 | if orig_shape is None: return packed
119 | return packed.reshape(orig_shape)
120 | '''Unpack an integer or array of integers into one or more 24-bit RGB values'''
121 | def unpackrgb(self, packed):
122 | orig_shape = None
123 | if isinstance(packed, np.ndarray):
124 | assert packed.dtype == int
125 | orig_shape = packed.shape
126 | packed = packed.reshape((-1, 1))
127 | rgb = ((packed >> 16) & 0xff, (packed >> 8) & 0xff, (packed) & 0xff)
128 | if orig_shape is None: return rgb
129 | return np.hstack(rgb).reshape(orig_shape + (3,))
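The background-color search above works on packed colors: each quantized 8-bit channel is shifted into its own byte of a 24-bit integer, so whole colors can be counted with a single `np.unique` call. A scalar sketch of the pack/unpack round trip (pure bit operations, separate from the array versions in the class):

```python
def pack_rgb(r, g, b):
    # shift each 8-bit channel into its byte of a 24-bit integer
    return (r << 16) | (g << 8) | b

def unpack_rgb(packed):
    # recover the three 8-bit channels from the packed integer
    return ((packed >> 16) & 0xff, (packed >> 8) & 0xff, packed & 0xff)

packed = pack_rgb(18, 52, 86)
print(hex(packed), unpack_rgb(packed))  # 0x123456 (18, 52, 86)
```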
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/oilpainting/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .oilpainting import OilpaintingBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/oilpainting/oilpainting.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Oil-painting effect for photos
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import cv2
10 | import random
11 | import numpy as np
12 | from scipy import ndimage
13 | from ..base import BaseBeautifier
14 |
15 |
16 | '''Oil-painting effect for photos'''
17 | class OilpaintingBeautifier(BaseBeautifier):
18 | def __init__(self, brush_width=5, palette=0, edge_operator='sobel', **kwargs):
19 | super(OilpaintingBeautifier, self).__init__(**kwargs)
20 | assert edge_operator in ['scharr', 'prewitt', 'sobel', 'roberts']
21 | self.brush_width = brush_width
22 | self.palette = palette
23 | self.edge_operator = edge_operator
24 | '''Process a single image'''
25 | def iterimage(self, image):
26 | # compute the image gradients
27 | r = 2 * int(image.shape[0] / 50) + 1
28 | gx, gy = self.getgradient(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), (r, r), self.edge_operator)
29 | gh = np.sqrt(np.sqrt(np.square(gx) + np.square(gy)))
30 | ga = (np.arctan2(gy, gx) / np.pi) * 180 + 90
31 | # all brush-stroke positions
32 | canvas = cv2.medianBlur(image, 11)
33 | order = self.getdraworder(image.shape[0], image.shape[1], scale=self.brush_width * 2)
34 | # draw ellipses
35 | colors = np.array(image, dtype=np.float64)
36 | for i, (y, x) in enumerate(order):
37 | length = int(round(self.brush_width + self.brush_width * gh[y, x]))
38 | if self.palette != 0:
39 | color = np.array([round(colors[y, x][0] / self.palette) * self.palette + random.randint(-5, 5), \
40 | round(colors[y, x][1] / self.palette) * self.palette + random.randint(-5, 5), \
41 | round(colors[y, x][2] / self.palette) * self.palette + random.randint(-5, 5)], dtype=np.float64)
42 | else:
43 | color = colors[y, x]
44 | cv2.ellipse(canvas, (x, y), (length, self.brush_width), ga[y, x], 0, 360, color, -1, cv2.LINE_AA)
45 | # return the result
46 | return canvas
47 | '''All brush-stroke positions'''
48 | def getdraworder(self, h, w, scale):
49 | order = []
50 | for i in range(0, h, scale):
51 | for j in range(0, w, scale):
52 | y = random.randint(-scale // 2, scale // 2) + i
53 | x = random.randint(-scale // 2, scale // 2) + j
54 | order.append((y % h, x % w))
55 | return order
56 | '''Prewitt operator'''
57 | def prewitt(self, img):
58 | img_gaussian = cv2.GaussianBlur(img, (3, 3), 0)
59 | kernelx = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]])
60 | kernely = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
61 | img_prewittx = cv2.filter2D(img_gaussian, -1, kernelx)
62 | img_prewitty = cv2.filter2D(img_gaussian, -1, kernely)
63 | return img_prewittx / 15.36, img_prewitty / 15.36
64 | '''Roberts operator'''
65 | def roberts(self, img):
66 | roberts_cross_v = np.array([[0, 0, 0], [0, 1, 0], [0, 0, -1]])
67 | roberts_cross_h = np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]])
68 | vertical = ndimage.convolve(img, roberts_cross_v)
69 | horizontal = ndimage.convolve(img, roberts_cross_h)
70 | return vertical / 50.0, horizontal / 50.0
71 | '''Get gradients using an edge-detection operator'''
72 | def getgradient(self, img_o, ksize, edge_operator):
73 | if edge_operator == 'scharr':
74 | X = cv2.Scharr(img_o, cv2.CV_32F, 1, 0) / 50.0
75 | Y = cv2.Scharr(img_o, cv2.CV_32F, 0, 1) / 50.0
76 | elif edge_operator == 'prewitt':
77 | X, Y = self.prewitt(img_o)
78 | elif edge_operator == 'sobel':
79 | X = cv2.Sobel(img_o, cv2.CV_32F, 1, 0, ksize=5) / 50.0
80 | Y = cv2.Sobel(img_o, cv2.CV_32F, 0, 1, ksize=5) / 50.0
81 | elif edge_operator == 'roberts':
82 | X, Y = self.roberts(img_o)
83 | X = cv2.GaussianBlur(X, ksize, 0)
84 | Y = cv2.GaussianBlur(Y, ksize, 0)
85 | return X, Y
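The stroke positions in `getdraworder` come from a coarse grid whose points are each jittered by up to half the grid spacing, with coordinates wrapped at the image borders. A standalone sketch of that sampling step (seeded here for reproducibility, which the module does not do):

```python
import random

def get_draw_order(h, w, scale, seed=0):
    # coarse grid of stroke centers, each jittered by up to scale // 2
    rng = random.Random(seed)
    order = []
    for i in range(0, h, scale):
        for j in range(0, w, scale):
            y = (i + rng.randint(-scale // 2, scale // 2)) % h
            x = (j + rng.randint(-scale // 2, scale // 2)) % w
            order.append((y, x))
    return order

order = get_draw_order(100, 100, 10)
print(len(order))  # 10 x 10 grid cells -> 100 stroke positions
print(all(0 <= y < 100 and 0 <= x < 100 for y, x in order))  # True
```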
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/pencildrawing/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .pencildrawing import PencilDrawingBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/pencildrawing/pencildrawing.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Reimplementation of the paper "Combining Sketch and Tone for Pencil Drawing Production"
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import os
10 | import cv2
11 | import math
12 | import numpy as np
13 | from PIL import Image
14 | from scipy import signal
15 | from ..base import BaseBeautifier
16 | from scipy.ndimage import interpolation
17 | from scipy.sparse.linalg import spsolve
18 | from scipy.sparse import csr_matrix, spdiags
19 |
20 |
21 | '''Image-processing utilities'''
22 | class ImageProcessor():
23 | '''Normalize pixel values into [0, 1]'''
24 | @staticmethod
25 | def im2double(img):
26 | if len(img.shape) == 2: return (img - img.min()) / (img.max() - img.min())
27 | else: return cv2.normalize(img.astype('float'), None, 0.0, 1.0, cv2.NORM_MINMAX)
28 | '''Laplacian distribution'''
29 | @staticmethod
30 | def Laplace(x, sigma=9):
31 | value = (1. / sigma) * math.exp(-(256 - x) / sigma) * (256 - x)
32 | return value
33 | '''Uniform distribution'''
34 | @staticmethod
35 | def Uniform(x, ua=105, ub=225):
36 | value = (1. / (ub - ua)) * (max(x - ua, 0) - max(x - ub, 0))
37 | return value
38 | '''Gaussian distribution'''
39 | @staticmethod
40 | def Gaussian(x, u=90, sigma=11):
41 | value = (1. / math.sqrt(2 * math.pi * sigma)) * math.exp(-((x - u) ** 2) / (2 * (sigma ** 2)))
42 | return value
43 | '''Stitch horizontally'''
44 | @staticmethod
45 | def horizontalStitch(img, width):
46 | img_stitch = img.copy()
47 | while img_stitch.shape[1] < width:
48 | window_size = int(round(img.shape[1] / 4.))
49 | left = img[:, (img.shape[1]-window_size): img.shape[1]]
50 | right = img[:, :window_size]
51 | aleft = np.zeros((left.shape[0], window_size))
52 | aright = np.zeros((left.shape[0], window_size))
53 | for i in range(window_size):
54 | aleft[:, i] = left[:, i] * (1 - (i + 1.) / window_size)
55 | aright[:, i] = right[:, i] * (i + 1.) / window_size
56 | img_stitch = np.column_stack((img_stitch[:, :(img_stitch.shape[1]-window_size)], aleft+aright, img_stitch[:, window_size: img_stitch.shape[1]]))
57 | img_stitch = img_stitch[:, :width]
58 | return img_stitch
59 | '''Stitch vertically'''
60 | @staticmethod
61 | def verticalStitch(img, height):
62 | img_stitch = img.copy()
63 | while img_stitch.shape[0] < height:
64 | window_size = int(round(img.shape[0] / 4.))
65 | up = img[(img.shape[0]-window_size): img.shape[0], :]
66 | down = img[0:window_size, :]
67 | aup = np.zeros((window_size, up.shape[1]))
68 | adown = np.zeros((window_size, up.shape[1]))
69 | for i in range(window_size):
70 | aup[i, :] = up[i, :] * (1 - (i + 1.) / window_size)
71 | adown[i, :] = down[i, :] * (i + 1.) / window_size
72 | img_stitch = np.row_stack((img_stitch[:img_stitch.shape[0]-window_size, :], aup+adown, img_stitch[window_size: img_stitch.shape[0], :]))
73 | img_stitch = img_stitch[:height, :]
74 | return img_stitch
75 |
76 |
77 | '''Reimplementation of the paper "Combining Sketch and Tone for Pencil Drawing Production"'''
78 | class PencilDrawingBeautifier(BaseBeautifier):
79 | def __init__(self, mode='gray', kernel_size_scale=1/40, stroke_width=1, color_depth=1, weights_color=[62, 30, 5], weights_gray=[76, 22, 2], texture_path=None, **kwargs):
80 | super(PencilDrawingBeautifier, self).__init__(**kwargs)
81 | assert mode in ['gray', 'color']
82 | self.rootdir = os.path.split(os.path.abspath(__file__))[0]
83 | self.image_processor = ImageProcessor()
84 | self.mode = mode
85 | # pencil-stroke parameters
86 | self.kernel_size_scale, self.stroke_width = kernel_size_scale, stroke_width
87 | # pencil-tone parameters
88 | self.weights_color, self.weights_gray, self.color_depth = weights_color, weights_gray, color_depth
89 | self.texture_path = texture_path if (texture_path is not None) and os.path.exists(texture_path) else os.path.join(self.rootdir, 'textures/default.jpg')
90 | '''Process a single image'''
91 | def iterimage(self, image):
92 | if self.mode == 'color':
93 | img = Image.fromarray(image)
94 | img_ycbcr = img.convert('YCbCr')
95 | img = np.ndarray((img.size[1], img.size[0], 3), 'u1', img_ycbcr.tobytes())
96 | img_out = img.copy()
97 | img_out.flags.writeable = True
98 | img_out[:, :, 0] = self.__strokeGeneration(img[:, :, 0]) * self.__toneGeneration(img[:, :, 0], 'color') * 255
99 | img_out = cv2.cvtColor(img_out, cv2.COLOR_YCR_CB2BGR)
100 | else:
101 | img = image
102 | img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
103 | img_s = self.__strokeGeneration(img)
104 | img_t = self.__toneGeneration(img)
105 | img_out = img_s * img_t * 255
106 | return img_out
107 | '''Pencil-stroke generation'''
108 | def __strokeGeneration(self, img):
109 | h, w = img.shape
110 | kernel_size = int(min(w, h) * self.kernel_size_scale)
111 | kernel_size += kernel_size % 2
112 | # compute gradients and their magnitude
113 | img_double = self.image_processor.im2double(img)
114 | dx = np.concatenate((np.abs(img_double[:, :-1]-img_double[:, 1:]), np.zeros((h, 1))), 1)
115 | dy = np.concatenate((np.abs(img_double[:-1, :]-img_double[1:, :]), np.zeros((1, w))), 0)
116 | img_gradient = np.sqrt(np.power(dx, 2) + np.power(dy, 2))
117 | # choose eight reference directions
118 | line_segments = np.zeros((kernel_size, kernel_size, 8))
119 | for i in [0, 1, 2, 7]:
120 | for x in range(kernel_size):
121 | y = round((x + 1 - kernel_size / 2) * math.tan(math.pi / 8 * i))
122 | y = kernel_size / 2 - y
123 | if y > 0 and y <= kernel_size:
124 | line_segments[int(y-1), x, i] = 1
125 | if i == 7:
126 | line_segments[:, :, 3] = np.rot90(line_segments[:, :, 7], -1)
127 | else:
128 | line_segments[:, :, i+4] = np.rot90(line_segments[:, :, i], 1)
129 | # response maps for the reference directions
130 | response_maps = np.zeros((h, w, 8))
131 | for i in range(8):
132 | response_maps[:, :, i] = signal.convolve2d(img_gradient, line_segments[:, :, i], 'same')
133 | response_maps_maxvalueidx = response_maps.argmax(axis=-1)
134 | # classify by picking the maximum response across all directions
135 | magnitude_maps = np.zeros_like(response_maps)
136 | for i in range(8):
137 | magnitude_maps[:, :, i] = img_gradient * (response_maps_maxvalueidx == i).astype('float')
138 | # line shaping
139 | stroke_maps = np.zeros_like(response_maps)
140 | for i in range(8):
141 | stroke_maps[:, :, i] = signal.convolve2d(magnitude_maps[:, :, i], line_segments[:, :, i], 'same')
142 | stroke_maps = stroke_maps.sum(axis=-1)
143 | stroke_maps = (stroke_maps - stroke_maps.min()) / (stroke_maps.max() - stroke_maps.min())
144 | stroke_maps = (1 - stroke_maps) ** self.stroke_width
145 | return stroke_maps
146 | '''Pencil-tone generation'''
147 | def __toneGeneration(self, img, mode=None):
148 | height, width = img.shape
149 | # histogram matching
150 | img_hist_match = self.__histogramMatching(img, mode) ** self.color_depth
151 | # obtain the texture
152 | texture = cv2.imread(self.texture_path)
153 | texture = cv2.cvtColor(texture, cv2.COLOR_BGR2GRAY)[99: texture.shape[0]-100, 99: texture.shape[1]-100]
154 | ratio = 0.2 * min(img.shape[0], img.shape[1]) / float(1024)
155 | texture = interpolation.zoom(texture, (ratio, ratio))
156 | texture = self.image_processor.im2double(texture)
157 | texture = self.image_processor.horizontalStitch(texture, img.shape[1])
158 | texture = self.image_processor.verticalStitch(texture, img.shape[0])
159 | size = img.size
160 | nzmax = 2 * (size-1)
161 | i = np.zeros((nzmax, 1))
162 | j = np.zeros((nzmax, 1))
163 | s = np.zeros((nzmax, 1))
164 | for m in range(1, nzmax+1):
165 | i[m-1] = int(math.ceil((m + 0.1) / 2)) - 1
166 | j[m-1] = int(math.ceil((m - 0.1) / 2)) - 1
167 | s[m-1] = -2 * (m % 2) + 1
168 | dx = csr_matrix((s.T[0], (i.T[0], j.T[0])), shape=(size, size))
169 | nzmax = 2 * (size - img.shape[1])
170 | i = np.zeros((nzmax, 1))
171 | j = np.zeros((nzmax, 1))
172 | s = np.zeros((nzmax, 1))
173 | for m in range(1, nzmax+1):
174 | i[m-1, :] = int(math.ceil((m - 1 + 0.1) / 2) + img.shape[1] * (m % 2)) - 1
175 | j[m-1, :] = math.ceil((m - 0.1) / 2) - 1
176 | s[m-1, :] = -2 * (m % 2) + 1
177 | dy = csr_matrix((s.T[0], (i.T[0], j.T[0])), shape=(size, size))
178 | texture_sparse = spdiags(np.log(np.reshape(texture.T, (1, texture.size), order="f") + 0.01), 0, size, size)
179 | img_hist_match1d = np.log(np.reshape(img_hist_match.T, (1, img_hist_match.size), order="f").T + 0.01)
180 | nat = texture_sparse.T.dot(img_hist_match1d)
181 | a = np.dot(texture_sparse.T, texture_sparse)
182 | b = dx.T.dot(dx)
183 | c = dy.T.dot(dy)
184 | mat = a + 0.2 * (b + c)
185 | beta1d = spsolve(mat, nat)
186 | beta = np.reshape(beta1d, (img.shape[0], img.shape[1]), order="c")
187 | tone = texture ** beta
188 | tone = (tone - tone.min()) / (tone.max() - tone.min())
189 | return tone
190 | '''Histogram matching'''
191 | def __histogramMatching(self, img, mode=None):
192 | weights = self.weights_color if mode == 'color' else self.weights_gray
193 | # image histogram
194 | histogram_img = cv2.calcHist([img], [0], None, [256], [0, 256])
195 | histogram_img.resize(histogram_img.size)
196 | histogram_img /= histogram_img.sum()
197 | histogram_img_cdf = np.cumsum(histogram_img)
198 | # natural-image histogram
199 | histogram_natural = np.zeros_like(histogram_img)
200 | for x in range(256):
201 | histogram_natural[x] = weights[0] * self.image_processor.Laplace(x) + weights[1] * self.image_processor.Uniform(x) + weights[2] * self.image_processor.Gaussian(x)
202 | histogram_natural /= histogram_natural.sum()
203 | histogram_natural_cdf = np.cumsum(histogram_natural)
204 | # perform the histogram matching
205 | img_hist_match = np.zeros_like(img)
206 | for x in range(img.shape[0]):
207 | for y in range(img.shape[1]):
208 | value = histogram_img_cdf[img[x, y]]
209 | img_hist_match[x, y] = (np.abs(histogram_natural_cdf-value)).argmin()
210 | img_hist_match = np.true_divide(img_hist_match, 255)
211 | return img_hist_match
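The target histogram that `__histogramMatching` matches against is a weighted mixture of the three tone distributions defined in `ImageProcessor` (a Laplacian for bright tones, a uniform ramp for mid tones, a Gaussian for dark tones), normalized to sum to one. A sketch of that construction using the class's grayscale default weights:

```python
import math

def laplace(x, sigma=9):
    return (1. / sigma) * math.exp(-(256 - x) / sigma) * (256 - x)

def uniform(x, ua=105, ub=225):
    return (1. / (ub - ua)) * (max(x - ua, 0) - max(x - ub, 0))

def gaussian(x, u=90, sigma=11):
    return (1. / math.sqrt(2 * math.pi * sigma)) * math.exp(-((x - u) ** 2) / (2 * sigma ** 2))

weights = [76, 22, 2]  # the class's weights_gray defaults
hist = [weights[0] * laplace(x) + weights[1] * uniform(x) + weights[2] * gaussian(x) for x in range(256)]
total = sum(hist)
hist = [h / total for h in hist]  # normalize, as __toneGeneration does before the CDF
print(abs(sum(hist) - 1.0) < 1e-9)  # True: a valid probability distribution
```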
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/pencildrawing/textures/default.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CharlesPikachu/pydrawing/e980ad9bf4cece42ff40ed2bc7bed7a155eabb8e/pydrawing/modules/beautifiers/pencildrawing/textures/default.jpg
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/photocorrection/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .photocorrection import PhotocorrectionBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/photocorrection/photocorrection.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Simple photo rectification
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import cv2
10 | import numpy as np
11 | from ..base import BaseBeautifier
12 | from imutils.perspective import four_point_transform
13 |
14 |
15 | '''Simple photo rectification'''
16 | class PhotocorrectionBeautifier(BaseBeautifier):
17 | def __init__(self, epsilon_factor=0.08, canny_boundaries=[100, 200], use_preprocess=False, **kwargs):
18 | super(PhotocorrectionBeautifier, self).__init__(**kwargs)
19 | self.epsilon_factor = epsilon_factor
20 | self.canny_boundaries = canny_boundaries
21 | self.use_preprocess = use_preprocess
22 | '''Process a single image'''
23 | def iterimage(self, image):
24 | # preprocessing
25 | if self.use_preprocess:
26 | image_edge = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
27 | image_edge = cv2.GaussianBlur(image_edge, (5, 5), 0)
28 | image_edge = cv2.dilate(image_edge, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
29 | else:
30 | image_edge = image.copy()
31 | image_edge = cv2.Canny(image_edge, self.canny_boundaries[0], self.canny_boundaries[1], 3)
32 | # find the largest contour
33 | cnts = cv2.findContours(image_edge.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
34 | cnts = cnts[0]
35 | if len(cnts) < 1: return image
36 | cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
37 | for cnt in cnts:
38 | peri = cv2.arcLength(cnt, True)
39 | approx = cv2.approxPolyDP(cnt, self.epsilon_factor * peri, True)
40 | if len(approx) == 4: break
41 | if len(approx) != 4: return image
42 | # rectify
43 | image_processed = four_point_transform(image, approx.reshape(4, 2))
44 | # return
45 | return image_processed
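Before warping, `four_point_transform` needs the four corners in a consistent order (top-left, top-right, bottom-right, bottom-left). A NumPy sketch of one common ordering heuristic, shown here as an illustration of the idea rather than imutils' exact implementation: the top-left corner has the smallest coordinate sum, the bottom-right the largest, and the top-right/bottom-left have the smallest/largest y - x difference.

```python
import numpy as np

def order_points(pts):
    # order a 4x2 point set as top-left, top-right, bottom-right, bottom-left
    pts = np.asarray(pts, dtype=np.float32)
    s = pts.sum(axis=1)                 # x + y per point
    d = np.diff(pts, axis=1).ravel()    # y - x per point
    return np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                     pts[np.argmax(s)], pts[np.argmax(d)]])

quad = [(10, 80), (90, 10), (10, 10), (90, 80)]
print(order_points(quad).tolist())  # tl, tr, br, bl
```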
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/photomosaic/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .photomosaic import PhotomosaicBeautifier
--------------------------------------------------------------------------------
/pydrawing/modules/beautifiers/photomosaic/photomosaic.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Photomosaic generation
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import cv2
10 | import glob
11 | import numpy as np
12 | from tqdm import tqdm
13 | from itertools import product
14 | from ..base import BaseBeautifier
15 |
16 |
17 | '''Photomosaic generation'''
18 | class PhotomosaicBeautifier(BaseBeautifier):
19 | def __init__(self, block_size=15, src_images_dir=None, **kwargs):
20 | super(PhotomosaicBeautifier, self).__init__(**kwargs)
21 | self.block_size = block_size
22 | self.src_images_dir = src_images_dir
23 | self.src_images, self.avg_colors = self.ReadSourceImages()
24 | '''Process a single image'''
25 | def iterimage(self, image):
26 | output_image = np.zeros(image.shape, np.uint8)
27 | src_images, avg_colors = self.src_images, self.avg_colors
28 | for i, j in tqdm(product(range(int(image.shape[1]/self.block_size)), range(int(image.shape[0]/self.block_size)))):
29 | block = image[j*self.block_size: (j+1)*self.block_size, i*self.block_size: (i+1)*self.block_size, :]
30 | avg_color = np.sum(np.sum(block, axis=0), axis=0) / (self.block_size * self.block_size)
31 | distances = np.linalg.norm(avg_color - avg_colors, axis=1)
32 | idx = np.argmin(distances)
33 | output_image[j*self.block_size: (j+1)*self.block_size, i*self.block_size: (i+1)*self.block_size, :] = src_images[idx]
34 | return output_image
35 | '''Read all source images and compute their average colors'''
36 | def ReadSourceImages(self):
37 | src_images, avg_colors = [], []
38 | for path in tqdm(sum([glob.glob(f'{self.src_images_dir}/*.{e}') for e in ['jpg', 'jpeg', 'png', 'gif']], [])):
39 | image = cv2.imread(path, cv2.IMREAD_COLOR)
40 | if image is None or image.shape[-1] != 3: continue
41 | image = cv2.resize(image, (self.block_size, self.block_size))
42 | avg_color = np.sum(np.sum(image, axis=0), axis=0) / (self.block_size * self.block_size)
43 | src_images.append(image)
44 | avg_colors.append(avg_color)
45 | return src_images, np.array(avg_colors)
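The core of the mosaic is the matching step: each block of the target image is replaced by the source tile whose average color is closest in Euclidean distance. A small sketch of just that nearest-average lookup (tile data is made up for illustration):

```python
import numpy as np

def best_tile(block_avg, tile_avgs):
    # index of the tile whose average color is closest in Euclidean distance
    distances = np.linalg.norm(np.asarray(block_avg) - np.asarray(tile_avgs), axis=1)
    return int(np.argmin(distances))

tile_avgs = [[0, 0, 0], [128, 128, 128], [255, 255, 255]]  # black, gray, white tiles
print(best_tile([120, 130, 125], tile_avgs))  # -> 1 (closest to mid gray)
print(best_tile([250, 251, 252], tile_avgs))  # -> 2 (closest to white)
```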
--------------------------------------------------------------------------------
/pydrawing/modules/utils/__init__.py:
--------------------------------------------------------------------------------
1 | '''initialize'''
2 | from .logger import Logger
3 | from .io import Images2VideoAndSave, SaveImage, ReadVideo, checkdir
--------------------------------------------------------------------------------
/pydrawing/modules/utils/io.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | IO-related utility functions
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import os
10 | import cv2
11 | from tqdm import tqdm
12 |
13 |
14 | '''Check whether the directory exists, creating it if not'''
15 | def checkdir(dirname):
16 | if os.path.exists(dirname): return True
17 | os.makedirs(dirname)
18 | return False
19 |
20 |
21 | '''Convert images into a video and save it'''
22 | def Images2VideoAndSave(images, savedir='outputs', savename='output', fps=25, ext='avi', logger_handle=None):
23 | checkdir(savedir)
24 | savepath = os.path.join(savedir, savename + f'.{ext}')
25 | fourcc = cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')
26 | video_writer = cv2.VideoWriter(savepath, fourcc, fps, (images[0].shape[1], images[0].shape[0]))
27 | pbar = tqdm(images)
28 | for image in pbar:
29 | pbar.set_description(f'Writing image to {savepath}')
30 | video_writer.write(image)
31 | video_writer.release()
32 | if logger_handle is not None: logger_handle.info(f'Video is saved into {savepath}')
32 |
33 |
34 | '''Save an image'''
35 | def SaveImage(image, savedir='outputs', savename='output', ext='png', logger_handle=None):
36 | checkdir(savedir)
37 | savepath = os.path.join(savedir, savename + f'.{ext}')
38 | cv2.imwrite(savepath, image)
39 | if logger_handle is not None: logger_handle.info(f'Image is saved into {savepath}')
40 |
41 |
42 | '''Read a video'''
43 | def ReadVideo(videopath):
44 | capture, images = cv2.VideoCapture(videopath), []
45 | fps = capture.get(cv2.CAP_PROP_FPS)
46 | while capture.isOpened():
47 | ret, frame = capture.read()
48 | if not ret: break
49 | images.append(frame)
50 | capture.release()
51 | return images, fps
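`checkdir` returns True when the directory already exists and otherwise creates it and returns False. A stdlib-only sketch exercising both branches inside a throwaway temporary directory:

```python
import os
import tempfile

def checkdir(dirname):
    # mirror of the utility: report existence, create the directory when missing
    if os.path.exists(dirname): return True
    os.makedirs(dirname)
    return False

with tempfile.TemporaryDirectory() as root:
    target = os.path.join(root, 'outputs')
    first, second = checkdir(target), checkdir(target)
print(first, second)  # False True: created on the first call, found on the second
```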
--------------------------------------------------------------------------------
/pydrawing/modules/utils/logger.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | Terminal logging utilities
4 | Author:
5 | Charles
6 | WeChat official account:
7 | Charles的皮卡丘
8 | '''
9 | import logging
10 |
11 |
12 | '''Logger class'''
13 | class Logger():
14 | def __init__(self, logfilepath, **kwargs):
15 | setattr(self, 'logfilepath', logfilepath)
16 | logging.basicConfig(
17 | level=logging.INFO,
18 | format='%(asctime)s %(levelname)-8s %(message)s',
19 | datefmt='%Y-%m-%d %H:%M:%S',
20 | handlers=[logging.FileHandler(logfilepath), logging.StreamHandler()],
21 | )
22 | @staticmethod
23 | def log(level, message):
24 | logging.log(level, message)
25 |     def debug(self, message, disable_print=False):
26 |         if disable_print:
27 |             with open(self.logfilepath, 'a') as fp:
28 |                 fp.write(message + '\n')
29 |         else:
30 |             Logger.log(logging.DEBUG, message)
31 |     def info(self, message, disable_print=False):
32 |         if disable_print:
33 |             with open(self.logfilepath, 'a') as fp:
34 |                 fp.write(message + '\n')
35 |         else:
36 |             Logger.log(logging.INFO, message)
37 |     def warning(self, message, disable_print=False):
38 |         if disable_print:
39 |             with open(self.logfilepath, 'a') as fp:
40 |                 fp.write(message + '\n')
41 |         else:
42 |             Logger.log(logging.WARNING, message)
43 |     def error(self, message, disable_print=False):
44 |         if disable_print:
45 |             with open(self.logfilepath, 'a') as fp:
46 |                 fp.write(message + '\n')
47 |         else:
48 |             Logger.log(logging.ERROR, message)
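A quick way to see what this class produces is to reproduce its handler-and-format setup on a named stdlib logger (rather than the root logger that `Logger.__init__` configures via `basicConfig`) and inspect the file it writes. The logger name and temp path below are illustrative, not part of the repo:

```python
import logging
import os
import tempfile

logfilepath = os.path.join(tempfile.mkdtemp(), 'demo.log')

# Same format/date settings as Logger.__init__, attached to a dedicated logger
# so this sketch does not disturb any existing root-logger configuration.
logger = logging.getLogger('pydrawing-demo')
logger.setLevel(logging.INFO)
handler = logging.FileHandler(logfilepath)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)-8s %(message)s', '%Y-%m-%d %H:%M:%S'))
logger.addHandler(handler)

logger.info('hello from the sketch')
handler.close()

with open(logfilepath) as fp:
    contents = fp.read()
print('hello from the sketch' in contents)  # True
```

Using a named logger instead of `basicConfig` also avoids the fact that `basicConfig` is a no-op once the root logger already has handlers.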
--------------------------------------------------------------------------------
/pydrawing/pydrawing.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 |     Beautify your photos or videos with Python
4 | Author:
5 | Charles
6 | WeChat Official Account:
7 | Charles的皮卡丘
8 | '''
9 | import warnings
10 | if __name__ == '__main__':
11 | from modules import *
12 | else:
13 | from .modules import *
14 | warnings.filterwarnings('ignore')
15 |
16 |
17 | '''Beautify your photos or videos with Python'''
18 | class pydrawing():
19 | def __init__(self, **kwargs):
20 | for key, value in kwargs.items(): setattr(self, key, value)
21 | self.supported_beautifiers = self.initializebeautifiers()
22 | self.logger_handle = Logger(kwargs.get('logfilepath', 'pydrawing.log'))
23 | print(self)
24 |     '''Run the specified beautifier'''
25 |     def execute(self, filepath='asserts/dog.jpg', beautifier_type=None, config=None):
26 |         assert beautifier_type in self.supported_beautifiers, 'unsupported beautifier_type %s' % beautifier_type
27 |         config = dict(config) if config is not None else {}
28 |         for key, value in [('savedir', 'outputs'), ('savename', 'output'), ('logger_handle', self.logger_handle)]:
29 |             config.setdefault(key, value)
30 | beautifier = self.supported_beautifiers[beautifier_type](**config)
31 | beautifier.process(filepath)
32 |     '''Get all supported beautifiers'''
33 | def getallsupports(self):
34 | return list(self.supported_beautifiers.keys())
35 |     '''Initialize the beautifiers'''
36 | def initializebeautifiers(self):
37 | supported_beautifiers = {
38 | 'glitch': GlitchBeautifier,
39 | 'cartoonise': CartooniseBeautifier,
40 | 'cartoongan': CartoonGanBeautifier,
41 | 'oilpainting': OilpaintingBeautifier,
42 | 'beziercurve': BezierCurveBeautifier,
43 | 'photomosaic': PhotomosaicBeautifier,
44 | 'characterize': CharacterizeBeautifier,
45 | 'douyineffect': DouyinEffectBeautifier,
46 | 'noteprocessor': NoteprocessorBeautifier,
47 | 'pencildrawing': PencilDrawingBeautifier,
48 | 'cartoonizeface': CartoonizeFaceBeautifier,
49 | 'nostalgicstyle': NostalgicstyleBeautifier,
50 | 'photocorrection': PhotocorrectionBeautifier,
51 | 'geneticfittingcircle': GeneticFittingCircleBeautifier,
52 | 'geneticfittingpolygon': GeneticFittingPolygonBeautifier,
53 | 'fastneuralstyletransfer': FastNeuralStyleTransferBeautifier,
54 | }
55 | return supported_beautifiers
56 | '''str'''
57 | def __str__(self):
58 | return 'Welcome to use Pydrawing!\nYou can visit https://github.com/CharlesPikachu/pydrawing for more details.'
59 |
60 |
61 | '''run'''
62 | if __name__ == '__main__':
63 | import random
64 | drawing_client = pydrawing()
65 | drawing_client.execute('asserts/dog.jpg', random.choice(drawing_client.getallsupports()))
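`pydrawing.execute` is a thin dispatcher: look up a class in the `supported_beautifiers` registry, instantiate it with the config dict, and call `process`. A stdlib-only sketch of the same registry pattern, with hypothetical toy "beautifiers" standing in for the real image-processing classes:

```python
# Toy stand-ins for the real beautifier classes; the names are illustrative.
class UppercaseBeautifier:
    def __init__(self, **config):
        self.config = config
    def process(self, text):
        return text.upper()

class ReverseBeautifier:
    def __init__(self, **config):
        self.config = config
    def process(self, text):
        return text[::-1]

class ToyClient:
    def __init__(self):
        # Same registry idea as pydrawing.initializebeautifiers():
        # map a string key to a class, not an instance.
        self.supported_beautifiers = {
            'uppercase': UppercaseBeautifier,
            'reverse': ReverseBeautifier,
        }
    def execute(self, data, beautifier_type, config=None):
        assert beautifier_type in self.supported_beautifiers, 'unsupported beautifier_type %s' % beautifier_type
        beautifier = self.supported_beautifiers[beautifier_type](**(config or {}))
        return beautifier.process(data)

client = ToyClient()
print(client.execute('dog', 'uppercase'))  # DOG
print(client.execute('dog', 'reverse'))    # god
```

Registering classes rather than instances is what lets `execute` construct a fresh beautifier per call with the caller's `config`, which is also how the real client keeps each run independent.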
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | opencv-python
2 | numpy
3 | tqdm
4 | pillow
5 | beautifulsoup4
6 | imutils
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | '''
2 | Function:
3 | setup the pydrawing
4 | Author:
5 | Charles
6 | WeChat Official Account:
7 | Charles的皮卡丘
8 | GitHub:
9 | https://github.com/CharlesPikachu
10 | '''
11 | import pydrawing
12 | from setuptools import setup, find_packages
13 |
14 |
15 | '''readme'''
16 | with open('README.md', 'r', encoding='utf-8') as f:
17 | long_description = f.read()
18 |
19 |
20 | '''package data'''
21 | package_data = {}
22 | package_data.update({
23 | 'pydrawing.modules.beautifiers.pencildrawing': ['textures/*']
24 | })
25 | package_data.update({
26 | 'pydrawing.modules.beautifiers.beziercurve': ['potrace.exe']
27 | })
28 |
29 |
30 | '''setup'''
31 | setup(
32 | name=pydrawing.__title__,
33 | version=pydrawing.__version__,
34 | description=pydrawing.__description__,
35 | long_description=long_description,
36 | long_description_content_type='text/markdown',
37 | classifiers=[
38 | 'License :: OSI Approved :: Apache Software License',
39 | 'Programming Language :: Python :: 3',
40 | 'Intended Audience :: Developers',
41 | 'Operating System :: OS Independent'
42 | ],
43 | author=pydrawing.__author__,
44 | url=pydrawing.__url__,
45 | author_email=pydrawing.__email__,
46 | license=pydrawing.__license__,
47 | include_package_data=True,
48 | package_data=package_data,
49 |     install_requires=[line.strip('\n') for line in open('requirements.txt', 'r', encoding='utf-8')],
50 | zip_safe=True,
51 | packages=find_packages(),
52 | )
--------------------------------------------------------------------------------