├── .gitignore
├── LICENSE
├── NikeMouth
│   ├── README.md
│   ├── imgs
│   │   ├── nike.png
│   │   ├── test.jpg
│   │   └── test_out.jpg
│   ├── main.py
│   ├── normal2nike.py
│   └── options.py
├── OneImage2Video
│   ├── README_CN.md
│   ├── imgs
│   │   ├── contributions.png
│   │   ├── contributions_120.png
│   │   └── test.jpg
│   ├── main.py
│   ├── pixel_imgs
│   │   ├── banana
│   │   │   ├── 00.png
│   │   │   └── 01.png
│   │   └── github
│   │       ├── 01.png
│   │       ├── 02.png
│   │       ├── 03.png
│   │       ├── 04.png
│   │       └── 05.png
│   └── school_badge.py
├── PlayMusic
│   ├── README.md
│   ├── music
│   │   ├── SchoolBell.txt
│   │   └── TwinkleTwinkleLittleStar.txt
│   └── notation.py
├── README.md
├── Util
│   ├── __init__.py
│   ├── array_operation.py
│   ├── dsp.py
│   ├── ffmpeg.py
│   ├── image_processing.py
│   ├── sound.py
│   └── util.py
└── imgs
    ├── DeepMosaics.jpg
    ├── GuiChu_余生一个浪.jpg
    ├── OneImage2Video_badapple.jpg
    ├── PlayMusic_小星星.jpg
    ├── Plogo.jpg
    ├── ShellPlayer_大威天龙.jpg
    └── ShellPlayer_教程.jpg
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | pip-wheel-metadata/
24 | share/python-wheels/
25 | *.egg-info/
26 | .installed.cfg
27 | *.egg
28 | MANIFEST
29 |
30 | # PyInstaller
31 | # Usually these files are written by a python script from a template
32 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
33 | *.manifest
34 | *.spec
35 |
36 | # Installer logs
37 | pip-log.txt
38 | pip-delete-this-directory.txt
39 |
40 | # Unit test / coverage reports
41 | htmlcov/
42 | .tox/
43 | .nox/
44 | .coverage
45 | .coverage.*
46 | .cache
47 | nosetests.xml
48 | coverage.xml
49 | *.cover
50 | *.py,cover
51 | .hypothesis/
52 | .pytest_cache/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | target/
76 |
77 | # Jupyter Notebook
78 | .ipynb_checkpoints
79 |
80 | # IPython
81 | profile_default/
82 | ipython_config.py
83 |
84 | # pyenv
85 | .python-version
86 |
87 | # pipenv
88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
91 | # install all needed dependencies.
92 | #Pipfile.lock
93 |
94 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow
95 | __pypackages__/
96 |
97 | # Celery stuff
98 | celerybeat-schedule
99 | celerybeat.pid
100 |
101 | # SageMath parsed files
102 | *.sage.py
103 |
104 | # Environments
105 | .env
106 | .venv
107 | env/
108 | venv/
109 | ENV/
110 | env.bak/
111 | venv.bak/
112 |
113 | # Spyder project settings
114 | .spyderproject
115 | .spyproject
116 |
117 | # Rope project settings
118 | .ropeproject
119 |
120 | # mkdocs documentation
121 | /site
122 |
123 | # mypy
124 | .mypy_cache/
125 | .dmypy.json
126 | dmypy.json
127 |
128 | # Pyre type checker
129 | .pyre/
130 |
131 | # myrules
132 | tmp/
133 | result/
134 | /DualVectorFoil
135 | /GuiChu
136 | /SkyScreen
137 | /Video
138 | /GhostWriter
139 | /pythontest.py
140 | /ShellPlayer
141 | /Plogo
142 | /test.py
143 |
144 | /PlayMusic/music
145 | /PlayMusic/test.py
146 |
147 | /OneImage2Video/pixel_imgs/university
148 |
149 | /NikeMouth/media
150 | /NikeMouth/test_media
151 | /NikeMouth/test.py
152 |
153 | *.mp4
154 | *.flv
155 | *.mp3
156 | *.wav
157 | *.exe
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2020 Hypo
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/NikeMouth/README.md:
--------------------------------------------------------------------------------
 1 | # NikeMouth
 2 | [More interesting code](https://github.com/HypoX64/bilibili)
 3 | ## Getting Started
 4 | ### Prerequisites
 5 | * python3
 6 | * [ffmpeg](http://ffmpeg.org/)
 7 | 
 8 | ### Dependencies
 9 | The code depends on opencv-python and face_recognition, both installable via pip:
10 | ```bash
11 | pip install opencv-python
12 | pip install face_recognition
13 | ```
14 | ### Clone this repository
15 | ```bash
16 | git clone https://github.com/HypoX64/bilibili.git
17 | cd bilibili/NikeMouth
18 | ```
19 | ### Run
20 | ```bash
21 | python main.py -m "path or directory of your images"
22 | ```
23 | 
24 | 
25 | ## More options
26 | 
27 | | Option | Description | Default |
28 | | :----------: | :------------------------: | :-------------------------------------: |
29 | | -m | image path or directory | './imgs/test.jpg' |
30 | | -mode | transform mode, all_face or only_mouth | all_face |
31 | | -o | output an image or a video | image |
32 | | -r | directory to store results | ./result |
33 | | -i | transform intensity | 1.0 |
34 | 
35 | 
--------------------------------------------------------------------------------
/NikeMouth/imgs/nike.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/NikeMouth/imgs/nike.png
--------------------------------------------------------------------------------
/NikeMouth/imgs/test.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/NikeMouth/imgs/test.jpg
--------------------------------------------------------------------------------
/NikeMouth/imgs/test_out.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/NikeMouth/imgs/test_out.jpg
--------------------------------------------------------------------------------
/NikeMouth/main.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import numpy as np
4 | import cv2
5 | import normal2nike
6 | sys.path.append("..")
7 | from Util import util,ffmpeg
8 | from options import Options
9 | opt = Options().getparse()
10 |
11 | util.file_init(opt)
12 |
13 | if os.path.isdir(opt.media):
14 | files = util.Traversal(opt.media)
15 | else:
16 | files = [opt.media]
17 |
18 | for file in files:
19 | img = cv2.imread(file)
20 | h,w = img.shape[:2]
21 | if opt.output == 'image':
22 | img = normal2nike.convert(img,opt.size,opt.intensity,opt.aspect_ratio,opt.ex_move,opt.mode)
23 | cv2.imwrite(os.path.join(opt.result_dir,os.path.basename(file)), img)
24 | elif opt.output == 'video':
25 | frame = int(opt.time*opt.fps)
26 | for i in range(frame):
27 | tmp = normal2nike.convert(img,opt.size,i*opt.intensity/frame,opt.aspect_ratio,
28 | opt.ex_move,opt.mode)[:4*(h//4),:4*(w//4)]
29 | cv2.imwrite(os.path.join('./tmp/output_imgs','%05d' % i+'.jpg'), tmp)
30 | cv2.imwrite(os.path.join(opt.result_dir,os.path.basename(file)), tmp)
31 | ffmpeg.image2video(
32 | opt.fps,
33 | './tmp/output_imgs/%05d.jpg',
34 | None,
35 | os.path.join(opt.result_dir,os.path.splitext(os.path.basename(file))[0]+'.mp4'))
36 |
37 | # cv2.namedWindow('image', cv2.WINDOW_NORMAL)
38 | # cv2.imshow('image',img)
39 | # cv2.waitKey(0)
40 | # cv2.destroyAllWindows()
--------------------------------------------------------------------------------
/NikeMouth/normal2nike.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import cv2
3 | import face_recognition
4 |
5 |
 6 | # Draw the delaunay triangles (debug helper)
 7 | def draw_delaunay(img, TriangleList, delaunay_color):
 8 | 
 9 |     size = img.shape
10 |     r = (0, 0, size[1], size[0])
11 | 
12 |     for t in TriangleList:
13 |         pt1 = (t[0], t[1])
14 |         pt2 = (t[2], t[3])
15 |         pt3 = (t[4], t[5])
16 | 
17 |         # if rect_contains(r, pt1) and rect_contains(r, pt2) and rect_contains(r, pt3):
18 |         cv2.line(img, pt1, pt2, delaunay_color, 1)
19 |         cv2.line(img, pt2, pt3, delaunay_color, 1)
20 |         cv2.line(img, pt3, pt1, delaunay_color, 1)
21 |
22 | def get_Ms(t1,t2):
23 | Ms = []
24 | for i in range(len(t1)):
25 | pts1 = np.array([[t1[i][0],t1[i][1]],[t1[i][2],t1[i][3]],[t1[i][4],t1[i][5]]]).astype(np.float32)
26 | pts2 = np.array([[t2[i][0],t2[i][1]],[t2[i][2],t2[i][3]],[t2[i][4],t2[i][5]]]).astype(np.float32)
27 | # print(pts1)
28 | Ms.append(cv2.getAffineTransform(pts1,pts2))
29 | return Ms
30 |
31 | def delaunay(h,w,points):
32 | subdiv = cv2.Subdiv2D((0,0,w,h))
33 | for i in range(len(points)):
34 | subdiv.insert(tuple(points[i]))
35 | TriangleList = subdiv.getTriangleList()
36 | return TriangleList
37 |
38 | def get_borderpoints(img):
39 | h,w = img.shape[:2]
40 | h,w = h-1,w-1
41 | points = [[0,0],[w//2,0],[w,0],[w,h//2],[w,h],[w//2,h],[0,h],[0,h//2]]
42 | return np.array(points)
43 |
44 | def get_masks(img,t):
45 | masks = np.zeros((len(t),img.shape[0],img.shape[1]), dtype=np.uint8)
46 | for i in range(len(t)):
47 |
48 | points = np.array([[t[i][0],t[i][1]],[t[i][2],t[i][3]],[t[i][4],t[i][5]]]).astype(np.int64)
49 | #print(points)
50 | masks[i] = cv2.fillConvexPoly(masks[i],points,(255))
51 | cv2.line(masks[i], tuple(points[0]), tuple(points[1]), (255), 3)
52 | cv2.line(masks[i], tuple(points[0]), tuple(points[2]), (255), 3)
53 | cv2.line(masks[i], tuple(points[1]), tuple(points[2]), (255), 3)
54 |
55 | #masks[i] = cv2.drawContours(masks[i], points, contourIdx=0, color=255, thickness=2)
56 | return masks
57 |
58 | def changeTriangleList(t,point,move):
59 | t = t.astype(np.int64)
60 | for i in range(len(t)):
61 | if t[i][0]==point[0] and t[i][1]==point[1]:
62 | t[i][0],t[i][1] = t[i][0]+move[0],t[i][1]+move[1]
63 | elif t[i][2]==point[0] and t[i][3]==point[1]:
64 | t[i][2],t[i][3] = t[i][2]+move[0],t[i][3]+move[1]
65 | elif t[i][4]==point[0] and t[i][5]==point[1]:
66 | t[i][4],t[i][5] = t[i][4]+move[0],t[i][5]+move[1]
67 | return t
68 |
69 | def replace_delaunay(img,Ms,masks):
70 | img_new = np.zeros_like(img)
71 | h,w = img.shape[:2]
72 | for i in range(len(Ms)):
73 | # _img = img.copy()
74 | mask = cv2.merge([masks[i], masks[i], masks[i]])
75 | mask_inv = cv2.bitwise_not(mask)
76 | tmp = cv2.warpAffine(img,Ms[i],(w,h),borderMode = cv2.BORDER_REFLECT_101,flags=cv2.INTER_CUBIC)
77 | tmp = cv2.bitwise_and(mask,tmp)
78 | img_new = cv2.bitwise_and(mask_inv,img_new)
79 | img_new = cv2.add(tmp,img_new)
80 | return img_new
81 |
82 | def get_nikemouth_landmark(src_landmark,alpha=1,aspect_ratio=1.0,mode = 'only_mouth'):
83 |
84 |
85 | nike = cv2.imread('./imgs/nike.png')
86 | landmark = face_recognition.face_landmarks(nike)[0]
87 | if mode == 'only_mouth':
88 | src_landmark = src_landmark[56:]
89 | nikemouth = np.array(landmark['top_lip']+landmark['bottom_lip'])
90 | else:
91 | src_landmark = src_landmark[25:]
92 | nikemouth = np.array(landmark['left_eyebrow']+landmark['right_eyebrow']+landmark['nose_bridge']+\
93 | landmark['nose_tip']+landmark['left_eye']+landmark['right_eye']+landmark['top_lip']+\
94 | landmark['bottom_lip'])
95 |
 96 | # Center the landmarks at the origin (subtract the mean)
97 | nikemouth = nikemouth-[np.mean(nikemouth[:,0]),np.mean(nikemouth[:,1])]
98 | nikemouth[:,0] = nikemouth[:,0]*aspect_ratio
 99 | # Measure mouth size
100 | nikemouth_h = np.max(nikemouth[:,1])-np.min(nikemouth[:,1])
101 | nikemouth_w = np.max(nikemouth[:,0])-np.min(nikemouth[:,0])
102 | src_h = np.max(src_landmark[:,1])-np.min(src_landmark[:,1])
103 | src_w = np.max(src_landmark[:,0])-np.min(src_landmark[:,0])
104 |
105 | # Adjust size and position
106 | beta = nikemouth_w/src_w
107 | nikemouth = alpha*nikemouth/beta+[np.mean(src_landmark[:,0]),np.mean(src_landmark[:,1])]
108 | return np.around(nikemouth,0)
109 |
110 |
111 |
112 | def convert(face,size=1,intensity=1,aspect_ratio=1.0,ex_move=[0,0],mode='all_face'):
113 |
114 | h,w = face.shape[:2]
115 | landmark = face_recognition.face_landmarks(face)[0]
116 | # print(landmark)
117 | landmark_src = np.array(landmark['chin']+landmark['left_eyebrow']+landmark['right_eyebrow']+\
118 | landmark['nose_bridge']+landmark['nose_tip']+landmark['left_eye']+landmark['right_eye']+landmark['top_lip']+\
119 | landmark['bottom_lip'])
120 |
121 | landmark_src = np.concatenate((get_borderpoints(face), landmark_src), axis=0)
122 | TriangleList_src = delaunay(h,w,landmark_src)
123 | TriangleList_dst = TriangleList_src.copy()
124 |
125 |
126 | nikemouth_landmark = get_nikemouth_landmark(landmark_src,alpha=size,aspect_ratio=aspect_ratio,mode = mode)
127 | if mode == 'only_mouth':
128 | for i in range(24):
129 | move = ex_move+(nikemouth_landmark[i]-landmark_src[56+i])*intensity
130 | TriangleList_dst = changeTriangleList(TriangleList_dst, landmark_src[56+i],move)
131 | else:
132 | for i in range(80-25):
133 | move = ex_move+(nikemouth_landmark[i]-landmark_src[25+i])*intensity
134 | TriangleList_dst = changeTriangleList(TriangleList_dst, landmark_src[25+i],move)
135 |
136 | Ms = get_Ms(TriangleList_src,TriangleList_dst)
137 | masks = get_masks(face, TriangleList_dst)
138 | face_new = replace_delaunay(face, Ms, masks)
139 |
140 | return face_new
141 |
142 | # # draw_delaunay(img, TriangleList, delaunay_color):
143 |
144 | # cv2.namedWindow('image', cv2.WINDOW_NORMAL)
145 | # cv2.imshow('image',face_new)
146 | # cv2.waitKey(0)
147 | # cv2.destroyAllWindows()
--------------------------------------------------------------------------------
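normal2nike.py warps each Delaunay triangle with `cv2.getAffineTransform`, which solves for the 2×3 matrix carrying one triangle's vertices onto another's. A NumPy-only sketch of that per-triangle solve, using hypothetical example triangles (no OpenCV required):

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Solve for the 2x3 affine matrix M with M @ [x, y, 1] = [x', y']
    for each of the three corresponding (src, dst) vertices."""
    src = np.asarray(src, dtype=np.float64)   # (3, 2)
    dst = np.asarray(dst, dtype=np.float64)   # (3, 2)
    A = np.hstack([src, np.ones((3, 1))])     # homogeneous rows [x, y, 1]
    # A @ M.T = dst  ->  M.T = solve(A, dst); valid whenever the source
    # triangle is non-degenerate (A invertible).
    return np.linalg.solve(A, dst).T          # (2, 3)

# Map the unit right triangle onto a copy scaled x2 and shifted by (2, 1).
src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 1), (4, 1), (2, 3)]
M = affine_from_triangles(src, dst)
print(M)  # [[2. 0. 2.]
          #  [0. 2. 1.]]
```

`cv2.warpAffine` then applies such a matrix to the whole image; the per-triangle masks in `replace_delaunay` restrict each result to its own triangle.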
/NikeMouth/options.py:
--------------------------------------------------------------------------------
1 | import argparse
2 |
3 | class Options():
4 | def __init__(self):
5 | self.parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
6 | self.initialized = False
7 |
8 | def initialize(self):
9 | self.parser.add_argument('-m','--media', type=str, default='./imgs/test.jpg',help='your image path or dir')
10 | self.parser.add_argument('-mode','--mode', type=str, default='all_face',help='all_face | only_mouth')
11 | self.parser.add_argument('-o','--output', type=str, default='image',help='output image or video')
12 | self.parser.add_argument('-t','--time', type=float, default=2.0,help='time of video')
13 | self.parser.add_argument('-f','--fps', type=float, default=25,help='fps of video')
14 | self.parser.add_argument('-s','--size', type=float, default=1.0,help='size of mouth')
15 | self.parser.add_argument('-i','--intensity', type=float, default=1.0,help='effect intensity')
16 | self.parser.add_argument('-a','--aspect_ratio', type=float, default=1.0,help='aspect ratio of mouth')
17 | self.parser.add_argument('-e','--ex_move', type=str, default='[0,0]',help='extra [x,y] movement of the mouth')
18 | self.parser.add_argument('-r','--result_dir', type=str, default='./result',help='directory to store results')
19 | # self.parser.add_argument('--temp_dir', type=str, default='./tmp',help='')
20 |
21 | self.initialized = True
22 |
23 | def getparse(self):
24 | if not self.initialized:
25 | self.initialize()
26 | self.opt = self.parser.parse_args()
27 | 
28 | # literal_eval parses the "[x,y]" list literal without executing arbitrary code (unlike eval)
29 | import ast
30 | self.opt.ex_move = ast.literal_eval(self.opt.ex_move)
31 | return self.opt
32 | 
33 | 
--------------------------------------------------------------------------------
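options.py turns the `-e/--ex_move` string `"[0,0]"` into a list with `eval`, which would execute any code passed on the command line. `ast.literal_eval` is a drop-in replacement for plain literals; a sketch of that parsing step (the simulated command line is illustrative):

```python
import argparse
import ast

parser = argparse.ArgumentParser()
parser.add_argument('-e', '--ex_move', type=str, default='[0,0]',
                    help='extra [x, y] offset, written as a Python list literal')
opt = parser.parse_args(['-e', '[5,-3]'])   # simulate a command line

# literal_eval only accepts literals (lists, numbers, strings, ...);
# anything else raises ValueError instead of executing code.
opt.ex_move = ast.literal_eval(opt.ex_move)
print(opt.ex_move)  # [5, -3]
```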
/OneImage2Video/README_CN.md:
--------------------------------------------------------------------------------
 1 | # OneImage2Video
 2 | Generate a video from one or more images
 3 | 
 4 | ## Getting Started
 5 | ### Prerequisites
 6 | * Linux or Windows
 7 | * python3
 8 | * [ffmpeg](http://ffmpeg.org/)
 9 | ```bash
10 | sudo apt-get install ffmpeg
11 | ```
12 | ### Dependencies
13 | The code depends on opencv-python, installable via pip:
14 | ```bash
15 | pip install opencv-python
16 | ```
17 | ### Clone this repository
18 | ```bash
19 | git clone https://github.com/HypoX64/bilibili.git
20 | cd bilibili/OneImage2Video
21 | ```
22 | ### Run
23 | * Edit your video path and image paths in main.py
24 | ```python
25 | # Watch Banana-kun on a GitHub contribution graph
26 | # pixel_imgs_dir = './pixel_imgs/github'
27 | # pixel_imgs_resize = 0 # resize pixel_imgs, if 0, do not resize
28 | # output_pixel_num = 52 # how many pixels across the output video's width
29 | # video_path = '../Video/素材/香蕉君/香蕉君_3.mp4'
30 | # inverse = False
31 | ```
32 | * run
33 | ```bash
34 | python main.py
35 | ```
36 |
37 |
38 |
--------------------------------------------------------------------------------
/OneImage2Video/imgs/contributions.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/OneImage2Video/imgs/contributions.png
--------------------------------------------------------------------------------
/OneImage2Video/imgs/contributions_120.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/OneImage2Video/imgs/contributions_120.png
--------------------------------------------------------------------------------
/OneImage2Video/imgs/test.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/OneImage2Video/imgs/test.jpg
--------------------------------------------------------------------------------
/OneImage2Video/main.py:
--------------------------------------------------------------------------------
1 | import os
2 | import cv2
3 | import numpy as np
4 |
5 | import sys
6 | sys.path.append("..")
7 | from Util import util,ffmpeg
8 | '''
9 | ,
10 | ,+nnDDDDn1+ n@@@1
11 | +&MMn 1D@@@@@@@@@@@@@&n, 1@@@@@1
12 | D@@@@@, nM@@@@@@@@@@@@@@@@@@&+ 1@@@@@@@1
13 | 1M@@@@@@, +M@@@@D,,,,,,,,,,,1@@@@@D +D@@@@@@@@@D+
14 | ,1nn, &@@@@@@@@nnnnnnnn1, +@@@@@@M&&&&1 ,&&&&M@@@@@@& ,nD&&M@@@@@@@@@@@@@@@@@M&&Dn,
15 | ,M@@@@1 ,@@@@@@@@@@@@@@@@@@@D @@@@@@@@@&1+, +1DM@@@@@@@@n &@@@@@@@@@@@@@@@@@@@@@@@@@@@&
16 | 1@@@@@1 ,@@@@@@@@@@@@@@@@@@@n n@@@@@@@D +n1 D1, +M@@@@@@M 1M@@@@@@@@@@@@@@@@@@@@@@@Mn
17 | 1@@@@@1 ,@@@@@@@@@@@@@@@@@@M D@@@@@@n ,M@@D ,@@@n ,@@@@@@@ 1M@@@@@@@@@@@@@@@@@@@Mn
18 | 1@@@@@1 ,@@@@@@@@@@@@@@@@@@+ 1@@@@@@, D@@@D ,@@@@ &@@@@@M D@@@@@@@@@@@@@@@@@D
19 | 1@@@@@1 ,@@@@@@@@@@@@@@@@@D M@@@@@1,&@@@D ,@@@@1,M@@@@@1 @@@@@@@@@@@@@@@@@,
20 | 1@@@@@1 ,@@@@@@@@@@@@@@@@@, +@@@@@@@@@@@D ,@@@@@@@@@@@D +@@@@@@@@@@@@@@@@@+
21 | 1@@@@@1 ,@@@@@@@@@@@@@@@@n ,M@@@@@@@@@&+n@@@@@@@@@@n n@@@@@@@@@@@@@@@@@D
22 | D@@@@1 ,@@@@@@@@@@@@@@M1 1M@@@@@@@@@@@@@@@@@@D, M@@@@@MD1+1nM@@@@@@
23 | ,++ ++++++++++++, +nM@@@@@@@@@@@MD1 nMMD1, 1DMMD
24 | ,+1nnnn1+,
25 | '''
26 |
27 | # Watch Banana-kun on a GitHub contribution graph
28 | # pixel_imgs_dir = './pixel_imgs/github'
29 | # pixel_imgs_resize = 0 # resize pixel_imgs, if 0, do not resize
30 | # output_pixel_num = 52 # how many pixels across the output video's width
31 | # video_path = '../Video/素材/香蕉君/香蕉君_3.mp4'
32 | # inverse = False
33 | 
34 | # Watch Banana-kun rendered with bananas
35 | pixel_imgs_dir = './pixel_imgs/banana'
36 | pixel_imgs_resize = 40 # resize pixel_imgs, if 0, do not resize
37 | output_pixel_num = 48 # how many pixels across the output video's width
38 | video_path = '../Video/素材/香蕉君/香蕉君_3.mp4'
39 | inverse = True
40 |
41 | # ------------------------- Load Blocks -------------------------
42 | pixels = []
43 | pixel_paths = os.listdir(pixel_imgs_dir)
44 | pixel_paths.sort()
45 | if inverse:
46 | pixel_paths.reverse()
47 | for path in pixel_paths:
48 | pixel = cv2.imread(os.path.join(pixel_imgs_dir,path))
49 | if pixel_imgs_resize != 0:
50 | pixel = cv2.resize(pixel,(pixel_imgs_resize,pixel_imgs_resize))
51 | pixels.append(pixel)
52 | pixel_size = pixels[0].shape[0]
53 | if len(pixels)>2:
54 | level = 255//len(pixels)
55 | else:
56 | level = 32
57 | print(pixels[0].shape)
58 |
59 | # ------------------------- Processing Video -------------------------
60 | fps,endtime,height,width = ffmpeg.get_video_infos(video_path)
61 | scale = height/width
62 |
63 | util.clean_tempfiles(False)
64 | util.makedirs('./tmp/vid2img')
65 | util.makedirs('./tmp/output_img')
66 | ffmpeg.video2image(video_path, './tmp/vid2img/%05d.png')
67 | ffmpeg.video2voice(video_path, './tmp/tmp.mp3')
68 |
69 | # ------------------------- Video2Block -------------------------
70 | print('Video2Block...')
71 | img_names = os.listdir('./tmp/vid2img')
72 | img_names.sort()
73 | for img_name in img_names:
74 | img = cv2.imread(os.path.join('./tmp/vid2img',img_name))
75 | img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
76 | img = cv2.resize(img, (output_pixel_num,int(output_pixel_num*scale)))
77 |
78 | h,w = img.shape
79 | out_img = np.zeros((h*pixel_size,w*pixel_size,3), dtype = np.uint8)
80 | for i in range(h):
81 | for j in range(w):
82 | index = np.clip(img[i,j]//level,0,len(pixels)-1)
83 | out_img[i*pixel_size:(i+1)*pixel_size,j*pixel_size:(j+1)*pixel_size] = pixels[index]
84 | out_img = out_img[:(h*pixel_size//2)*2,:(w*pixel_size//2)*2]
85 | cv2.imwrite(os.path.join('./tmp/output_img',img_name), out_img)
86 |
87 | # ------------------------- Block2Video -------------------------
88 | ffmpeg.image2video(fps, './tmp/output_img/%05d.png', './tmp/tmp.mp3', './result.mp4')
89 |
--------------------------------------------------------------------------------
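The core of the Video2Block loop is a brightness quantization: each grayscale value is divided by `level` to pick a pixel-art tile, clipped so the brightest values reuse the last tile. A small self-contained sketch of that mapping with hypothetical values:

```python
import numpy as np

num_tiles = 5
level = 255 // num_tiles            # 51: width of each brightness band

# A toy 2x3 grayscale frame (0 = dark, 255 = bright).
gray = np.array([[0, 60, 128],
                 [200, 254, 255]], dtype=np.uint8)

# Same mapping as main.py: brightness band -> tile index, clipped so
# a value of 255 reuses the last tile instead of indexing out of range.
indices = np.clip(gray // level, 0, num_tiles - 1)
print(indices)
# [[0 1 2]
#  [3 4 4]]
```

In main.py each index then selects a `pixel_size` × `pixel_size` image that is pasted into the corresponding block of the output frame.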
/OneImage2Video/pixel_imgs/banana/00.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/OneImage2Video/pixel_imgs/banana/00.png
--------------------------------------------------------------------------------
/OneImage2Video/pixel_imgs/banana/01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/OneImage2Video/pixel_imgs/banana/01.png
--------------------------------------------------------------------------------
/OneImage2Video/pixel_imgs/github/01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/OneImage2Video/pixel_imgs/github/01.png
--------------------------------------------------------------------------------
/OneImage2Video/pixel_imgs/github/02.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/OneImage2Video/pixel_imgs/github/02.png
--------------------------------------------------------------------------------
/OneImage2Video/pixel_imgs/github/03.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/OneImage2Video/pixel_imgs/github/03.png
--------------------------------------------------------------------------------
/OneImage2Video/pixel_imgs/github/04.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/OneImage2Video/pixel_imgs/github/04.png
--------------------------------------------------------------------------------
/OneImage2Video/pixel_imgs/github/05.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/OneImage2Video/pixel_imgs/github/05.png
--------------------------------------------------------------------------------
/OneImage2Video/school_badge.py:
--------------------------------------------------------------------------------
1 | import os
2 | import cv2
3 | import numpy as np
4 |
5 | import sys
6 | sys.path.append("..")
7 | from Util import util,ffmpeg
8 |
9 |
10 | # Watch Bad Apple rendered with school badges
11 | imgs_dir = './pixel_imgs/university/base'
12 | highlight_dir = './pixel_imgs/university/highlight'
13 | background_dir = './pixel_imgs/university/background'
14 | cut_size = 79
15 | pixel_resize = 0 # resize pixel_imgs, if 0, do not resize
16 | output_pixel_num = 18 # how many pixels across the output video's width
17 | video_path = '../Video/素材/bad_apple_bbkkbk/BadApple.flv'
18 | change_frame = 2
19 | # ------------------------- Load Blocks -------------------------
20 | pixels = []
21 | img_names = os.listdir(imgs_dir)
22 | img_names.sort()
23 | for name in img_names:
24 | img = cv2.imread(os.path.join(imgs_dir,name))
25 | for h in range(img.shape[0]//cut_size):
26 | for w in range(img.shape[1]//cut_size):
27 | pixel = img[h*cut_size:(h+1)*cut_size,w*cut_size:(w+1)*cut_size]
28 | if pixel_resize != 0:
29 | pixel = cv2.resize(pixel,(pixel_resize,pixel_resize),interpolation=cv2.INTER_AREA)
30 | pixels.append(pixel)
31 | pixel_size = pixels[0].shape[0]
32 |
33 | # highlight
34 | img_names = os.listdir(highlight_dir)
35 | img_names.sort()
36 | for name in img_names:
37 | pixel = cv2.imread(os.path.join(highlight_dir,name))
38 | pixel = cv2.resize(pixel,(pixel_size,pixel_size),interpolation=cv2.INTER_AREA)
39 | for i in range(10):
40 | pixels.append(pixel)
41 |
42 | pixels = np.array(pixels)
43 |
44 |
45 | # background
46 | background_name = os.listdir(background_dir)[0]
47 | background = cv2.imread(os.path.join(background_dir,background_name))
48 | background = cv2.resize(background,(pixel_size,pixel_size),interpolation=cv2.INTER_AREA)
49 |
50 |
51 | # ------------------------- Processing Video -------------------------
52 | fps,endtime,height,width = ffmpeg.get_video_infos(video_path)
53 | scale = height/width
54 |
55 | util.clean_tempfiles(False)
56 | util.makedirs('./tmp/vid2img')
57 | util.makedirs('./tmp/output_img')
58 | ffmpeg.video2image(video_path, './tmp/vid2img/%05d.png')
59 | ffmpeg.video2voice(video_path, './tmp/tmp.mp3')
60 |
61 | # ------------------------- Video2Block -------------------------
62 | print('Video2Block...')
63 | img_names = os.listdir('./tmp/vid2img')
64 | img_names.sort()
65 | frame = 0
66 | for img_name in img_names:
67 | img = cv2.imread(os.path.join('./tmp/vid2img',img_name))
68 | img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
69 | img = cv2.resize(img, (output_pixel_num,int(output_pixel_num*scale)),interpolation=cv2.INTER_AREA)
70 |
71 | h,w = img.shape
72 | if frame %change_frame == 0:
73 | indexs = np.random.randint(0, pixels.shape[0], (h,w)) # high bound is exclusive, so use shape[0] to allow every tile
74 | out_img = np.zeros((h*pixel_size,w*pixel_size,3), dtype = np.uint8)
75 | for i in range(h):
76 | for j in range(w):
77 | #index = np.clip(img[i,j]//level,0,len(pixels)-1)
78 | if img[i,j] < 64:
79 | out_img[i*pixel_size:(i+1)*pixel_size,j*pixel_size:(j+1)*pixel_size] = pixels[indexs[i,j]]
80 | else:
81 | out_img[i*pixel_size:(i+1)*pixel_size,j*pixel_size:(j+1)*pixel_size] = background
82 |
83 | out_img = out_img[:(h*pixel_size//2)*2,:(w*pixel_size//2)*2]
84 | cv2.imwrite(os.path.join('./tmp/output_img',img_name), out_img)
85 | frame += 1
86 | # ------------------------- Block2Video -------------------------
87 | ffmpeg.image2video(fps, './tmp/output_img/%05d.png', './tmp/tmp.mp3', './result.mp4')
88 |
--------------------------------------------------------------------------------
/PlayMusic/README.md:
--------------------------------------------------------------------------------
 1 | # Automatically generate music from numbered musical notation (jianpu)
 2 | ## Example: TwinkleTwinkleLittleStar.txt
 3 | ```python
 4 | D     # key signature
 5 | 2/4   # time signature
 6 | 108   # playback speed (BPM)
 7 | 0.8   # relative duration of each note
 8 | 1,1   # do, 1 beat
 9 | 1,1   # do, 1 beat
10 | 5,1   # so, 1 beat
11 | 5,1   # so, 1 beat
12 | 6,1   # la, 1 beat
13 | 6,1   # la, 1 beat
14 | 5,1   # so, 1 beat
15 | 1,0.5 # do, 0.5 beat
16 | -,1   # rest, 1 beat
17 | 1+    # do, one octave up, 1 beat
18 | 1--   # do, two octaves down, 1 beat
19 | ```
--------------------------------------------------------------------------------
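Behind this format, notation.py maps a scale degree plus the key signature to a frequency via a base-pitch table doubled once per octave; trailing `+`/`-` shift whole octaves. A simplified sketch of that lookup, following notation.py's tables but dropping its duration-scaling return value:

```python
# Degree -> frequency lookup, following notation.py's tables.
NOTES = ['C', 'D', 'E', 'F', 'G', 'A', 'B']
base_pitch = [16.352, 18.354, 20.602, 21.827, 24.500, 27.500, 30.868]

# OPS[i*7 + j]: the j-th natural note of octave i (each octave doubles).
OPS = [base_pitch[j] * 2 ** i for i in range(10) for j in range(7)]

def getfreq(note, key):
    """'1' is do in the given key; trailing '+'/'-' shift by whole octaves."""
    if note[0] == '-':
        return 0.0                                   # a rest
    index = 4 * 7 - 1 + NOTES.index(key) + int(note[0])
    if '+' in note:
        index += (len(note) - 1) * 7
    elif '-' in note:
        index -= (len(note) - 1) * 7
    return OPS[index]

print(round(getfreq('1', 'C'), 1))  # 261.6 (middle C)
```

So `1` in the key of C lands on middle C, and `1+` on the C an octave above it, exactly doubling the frequency.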
/PlayMusic/music/SchoolBell.txt:
--------------------------------------------------------------------------------
1 | C
2 | 4/4
3 | 50
4 | 1.5
5 | 3,1
6 | 1,1
7 | 2,1
8 | 5-,1
9 | -,1
10 | 5-,1
11 | 2,1
12 | 3,1
13 | 1,1
14 | -,1
15 | 3,1
16 | 2,1
17 | 1,1
18 | 5-,1
19 | -,1
20 | 5-,1
21 | 2,1
22 | 3,1
23 | 1,1
24 | -,1
25 | 1,1.25
26 | -,1
27 | 1,1.25
28 | -,1
29 | 1,1.25
30 |
--------------------------------------------------------------------------------
/PlayMusic/music/TwinkleTwinkleLittleStar.txt:
--------------------------------------------------------------------------------
1 | D
2 | 2/4
3 | 108
4 | 0.8
5 | 1,1
6 | 1,1
7 | 5,1
8 | 5,1
9 | 6,1
10 | 6,1
11 | 5,1
12 | -,1
13 | 4,1
14 | 4,1
15 | 3,1
16 | 3,0.5
17 | 2,1
18 | 2,1
19 | 1,1
20 | -,1
21 | 5,1
22 | 5,1
23 | 4,1
24 | 4,1
25 | 3,1
26 | 3,1
27 | 2,1
28 | -,1
29 | 5,1
30 | 5,1
31 | 4,1
32 | 4,1
33 | 3,1
34 | 3,1
35 | 2,1
36 | -,1
37 | 1,1
38 | 1,1
39 | 5,1
40 | 5,1
41 | 6,1
42 | 6,1
43 | 5,1
44 | -,1
45 | 4,1
46 | 4,1
47 | 3,1
48 | 3,0.5
49 | 3,0.5
50 | 2,1
51 | 2,1
52 | 1,1
53 | -,1
--------------------------------------------------------------------------------
/PlayMusic/notation.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import random
4 | import time
5 | import numpy as np
6 | import matplotlib.pylab as plt
7 |
8 | sys.path.append("..")
9 | from Util import util,ffmpeg,dsp,sound
10 | from Util import array_operation as arrop
11 |
12 |
13 | """Frequencies corresponding to each pitch
14 |  * In the key of C (1=C), C is sung as do; in 1=D, D is sung as do
15 |  * C D E F G A B -> do re mi fa so la xi
16 |  * C# D# F# A# are not supported yet
17 | """
18 | NOTES = ['C','D','E','F','G','A','B']
19 | base_pitch = [16.352,18.354,20.602,21.827,24.500,27.500,30.868]
20 | #Frequency in hertz (semitones above or below middle C)
21 | OPS = np.zeros(7*10)
22 | for i in range(10):
23 | for j in range(7):
24 | OPS[i*7+j] = base_pitch[j]*(2**i)
25 |
26 | def getfreq(note,PN):
27 | if note[0] == '-':
28 | return 0,1
29 | index = 4*7-1 + NOTES.index(PN) + int(note[0]) #1 = C or D.....
30 | if '+' in note:
31 | index += (len(note)-1)*7
32 | elif '-' in note:
33 | index -= (len(note)-1)*7
34 | freq = OPS[index]
35 | timescale = np.clip(1/(1.08**(index-33)),0,4.0)
36 | return freq,timescale
37 |
38 | def wave(f, fs, time, mode='sin'):
39 | length = int(fs*time)
40 | signal = dsp.wave(f, fs, time, mode)
41 | weight = np.zeros(length)
42 | weight[0:length//10] = np.hanning(length//10*2)[0:length//10]
43 | weight[length//10:] = np.hanning(int(length*0.9*2))[-(length-length//10):]
44 | return signal*weight
45 |
46 |
47 | """Beat strengths (accent pattern per time signature)
48 | strengths = {'2/4':['++',''],
49 | '3/4':['++','',''],
50 | '4/4':['++','','+',''],
51 | '3/8':['++','',''],
52 | '6/8':['++','','','+','','']}
53 | """
54 |
55 | def getstrength(i,BN,x):
56 | if int(BN[0]) == 2:
57 | if i%2 == 0:
58 | x = x*1.25
59 | elif int(BN[0]) == 3:
60 | if i%3 == 0:
61 | x = x*1.25
62 | elif int(BN[0]) == 4:
63 | if i%4 == 0:
64 | x = x*1.25
65 | elif i%4 == 2:
66 | x = x*1.125
67 | return x
68 |
69 | def readscore(path):
70 | notations = {}
71 | notations['data'] =[]
72 | for i,line in enumerate(open(path),0):
73 | line = line.strip('\n')
74 | if i==0:
75 | notations['PN'] = line
76 | elif i == 1:
77 | notations['BN'] = line
78 | elif i == 2:
79 | notations['BPM'] = float(line)
80 | elif i == 3:
81 | notations['PNLT'] = float(line)
82 | else:
83 | if len(line)>2:
84 | beat = line.split(';')
85 | part = []
86 |                 for j in range(len(beat)):
87 |                     part.append(beat[j].split('|'))
88 | notations['data'].append(part)
89 | return notations
90 |
91 |
92 | def notations2music(notations, mode = 'sin', isplot = False):
93 | BPM = notations['BPM']
94 | BN = notations['BN']
95 | PNLT = notations['PNLT']
96 | interval = 60.0/BPM
97 | fs = 44100
98 | time = 0
99 |
100 | music = np.zeros(int(fs*(len(notations['data'])+8)*interval))
101 |
102 | for section in range(len(notations['data'])):
103 | for beat in range(len(notations['data'][section])):
104 |
105 | _music = np.zeros(int(fs*PNLT*4))
106 | for part in range(len(notations['data'][section][beat])):
107 | _note = notations['data'][section][beat][part].split(',')[0]
108 | freq,timescale = getfreq(_note,notations['PN'])
109 | # print(timescale)
110 | if freq != 0:
111 | # print(len(_music[:int(PNLT*timescale*fs)]))
112 | # print(len(wave(freq, fs, PNLT*timescale, mode = mode)))
113 | _music[:int(PNLT*timescale*fs)] += wave(freq, fs, PNLT*timescale, mode = mode)
114 | # print(notations['data'][section][beat][part])
115 |
116 | music[int(time*fs):int(time*fs)+int(PNLT*fs*4)] += _music
117 | time += float(notations['data'][section][beat][0].split(',')[1])*interval
118 | # for i in range(len(notations['data'])):
119 | # for j in range(len(notations['data'][i])//2):
120 | # freq = getfreq(notations['data'][i][j*2],notations['PN'])
121 | # if freq != 0:
122 | # music[int(time*fs):int(time*fs)+int(PNLT*fs)] += wave(freq, fs, PNLT, mode = mode)
123 | # music[int(time*fs):int(time*fs)+int(PNLT*fs)] += getstrength(i,BN,wave(freq, fs, PNLT, mode = mode))
124 |
125 | # if isplot:
126 | # plot_data = music[int(time*fs):int(time*fs)+int(PNLT*fs)]
127 | # plt.clf()
128 | # plt.subplot(221)
129 | # plt.plot(np.linspace(0, len(plot_data)/fs,len(plot_data)),plot_data)
130 | # plt.title('Current audio waveform')
131 | # plt.xlabel('Time')
132 | # plt.ylim((-1.5,1.5))
133 |
134 | # plt.subplot(222)
135 | # _plot_data = plot_data[int(len(plot_data)*0.2):int(len(plot_data)*0.25)]
136 | # plt.plot(np.linspace(0, len(_plot_data)/fs,len(_plot_data)),_plot_data)
137 | # plt.title('Partial audio waveform')
138 | # plt.xlabel('Time')
139 | # plt.ylim((-1.5,1.5))
140 |
141 | # plt.subplot(223)
142 | # f,k = dsp.showfreq(plot_data, 44100, 2000)
143 | # plt.plot(f,k)
144 | # plt.title('FFT')
145 | # plt.xlabel('Hz')
146 | # plt.ylim((-1000,10000))
147 | # plt.pause(interval-0.125)
148 |
149 | #time += float(notations['data'][i][j*2+1])*interval
150 |
151 | return (arrop.sigmoid(music)-0.5)*65536
152 |
153 | notations = readscore('./music/SchoolBell.txt')
154 | print(notations)
155 | print(len(notations['data']))
156 | # sin triangle square
157 | music = notations2music(notations,mode='sin',isplot=False)
158 | sound.playtest(music)
159 |
160 | # import threading
161 | # t=threading.Thread(target=sound.playtest,args=(music,))
162 | # t.start()
163 | #music = notations2music(notations,mode='sin',isplot=True)
164 |
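The OPS table above spans ten octaves of the seven natural notes. A minimal standalone sketch (independent of this repo) reproduces the construction and confirms that A four octaves above A0 (27.5 Hz) lands on concert pitch 440 Hz:

```python
import numpy as np

# Base frequencies of C0..B0 in hertz, as in notation.py.
NOTES = ['C', 'D', 'E', 'F', 'G', 'A', 'B']
base_pitch = [16.352, 18.354, 20.602, 21.827, 24.500, 27.500, 30.868]

# Each octave doubles the frequency: OPS[i*7 + j] = base_pitch[j] * 2**i.
OPS = np.zeros(7 * 10)
for i in range(10):
    for j in range(7):
        OPS[i * 7 + j] = base_pitch[j] * (2 ** i)

# 'A' in the fifth octave row is A0 * 2**4 = concert pitch A4.
a4 = OPS[4 * 7 + NOTES.index('A')]
print(a4)  # -> 440.0
```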
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # bilibili
2 | ### The code has grown up and now helps the uploader make videos
3 |
4 | | Project (click the name to open the project page) | Demo (click the image or text to watch the video) | Tutorial (click the image or text to watch the video) |
5 | | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: |
6 | | [DeepMosaics](https://github.com/HypoX64/DeepMosaics)
Automatically remove or add mosaics in videos/images | / | [](https://www.bilibili.com/video/BV1LJ411c7Hg)
[One-click mosaic removal for photos and videos: a DeepMosaics tutorial](https://www.bilibili.com/video/BV1LJ411c7Hg) |
7 | | [Plogo](https://github.com/HypoX64/Plogo)
A Pornhub-style video template | / | [](https://www.bilibili.com/video/BV1gC4y1p7gC)
[\[Tutorial\] Make your own exclusive P-site intro in 5 minutes: template share](https://www.bilibili.com/video/BV1gC4y1p7gC) |
8 | | [ShellPlayer](https://github.com/HypoX64/ShellPlayer)
Watch videos in the terminal, in full color and with sound | [](https://www.bilibili.com/video/BV1dT4y1u76w)
[大威天龙](https://www.bilibili.com/video/BV1dT4y1u76w)
[鸡你太美](https://www.bilibili.com/video/BV1Ag4y1z7BB)
[黑人抬棺](https://www.bilibili.com/video/BV16V411C7xy) | [](https://www.bilibili.com/video/BV18V411r7U1)
[\[Tutorial\] How to turn a video into a colorful character animation with one click](https://www.bilibili.com/video/BV18V411r7U1) |
9 | | [OneImage2Video](https://github.com/HypoX64/bilibili/tree/master/OneImage2Video)
Generate a video from one or more images | [](https://www.bilibili.com/video/BV1dT4y1u76w)
[BadApple](https://www.bilibili.com/video/BV1w54y1z72s)
[香蕉君](https://www.bilibili.com/video/BV1Vg4y1v7s) | / |
10 | | [GuiChu](https://github.com/HypoX64/GuiChu)
A GuiChu (video remix) generator | [](https://www.bilibili.com/video/BV1c54y1Q7RG)
[你猫和老鼠X余生一个浪](https://www.bilibili.com/video/BV1c54y1Q7RG)
[克 罗 地 亚 灭 蚊 狂 想 曲](https://www.bilibili.com/video/BV1ft4y1X7q6) | / |
11 | | [PlayMusic](./PlayMusic/README.md)
Automatically generate music from numbered musical notation (jianpu) | [](https://www.bilibili.com/video/BV1Dv411z7H1)
[小星星](https://www.bilibili.com/video/BV1Dv411z7H1) | / |
12 | | [syseye](https://github.com/HypoX64/syseye)
A lightweight performance monitor for deep-learning servers | / | / |
13 |
14 |
--------------------------------------------------------------------------------
/Util/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/Util/__init__.py
--------------------------------------------------------------------------------
/Util/array_operation.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | def interp(y,length):
4 | xp = np.linspace(0, len(y)-1,num = len(y))
5 | fp = y
6 | x = np.linspace(0, len(y)-1,num = length)
7 | return np.interp(x, xp, fp)
8 |
9 | def sigmoid(x):
10 |     return 1.0 / (1.0 + np.exp(-x))
11 |
12 | def pad(data,padding):
13 | pad_data = np.zeros(padding, dtype=data.dtype)
14 | data = np.append(data, pad_data)
15 | return data
16 |
17 | def normliaze(data, mode = 'norm', sigma = 0, dtype=np.float32, truncated = 2):
18 | '''
19 | mode: norm | std | maxmin | 5_95
20 | dtype : np.float64,np.float16...
21 | '''
22 | data = data.astype(dtype)
23 | data_calculate = data.copy()
24 | if mode == 'norm':
25 | result = (data-np.mean(data_calculate))/sigma
26 | elif mode == 'std':
27 | mu = np.mean(data_calculate)
28 | sigma = np.std(data_calculate)
29 | result = (data - mu) / sigma
30 | elif mode == 'maxmin':
31 | result = (data-np.mean(data_calculate))/(max(np.max(data_calculate),np.abs(np.min(data_calculate))))
32 | elif mode == '5_95':
33 | data_sort = np.sort(data_calculate,axis=None)
34 | th5 = data_sort[int(0.05*len(data_sort))]
35 | th95 = data_sort[int(0.95*len(data_sort))]
36 | baseline = (th5+th95)/2
37 | sigma = (th95-th5)/2
38 | if sigma == 0:
39 | sigma = 1e-06
40 | result = (data-baseline)/sigma
41 |
42 | if truncated > 1:
43 | result = np.clip(result, (-truncated), (truncated))
44 |
45 | return result.astype(dtype)
46 |
47 | def diff1d(indata,stride=1,padding=1,bias=False):
48 |
49 | pad = np.zeros(padding)
50 | indata = np.append(indata, pad)
51 | if bias:
52 | if np.min(indata)<0:
53 | indata = indata - np.min(indata)
54 |
55 | outdata = np.zeros(int(len(indata)/stride)-1)
56 | for i in range(int(len(indata)/stride)-1):
57 | outdata[i]=indata[i*stride+stride]-indata[i*stride]
58 | return outdata
59 |
60 |
61 | def findpeak(indata, ismax=False, interval=10, threshold=0.1, reverse=False):
62 | '''
63 | return:indexs
64 | '''
65 | indexs = []
66 | if reverse:
67 | indata = -1*indata
68 | indexs = [0,len(indata)-1]
69 | else:
70 | indata = np.clip(indata, np.max(indata)*threshold, np.max(indata))
71 | diff = diff1d(indata)
72 | if ismax:
73 | return np.array([np.argmax(indata)])
74 |
75 | rise = True
76 | if diff[0] <=0:
77 | rise = False
78 | for i in range(len(diff)):
79 | if rise==True and diff[i]<=0:
80 | index = i
81 | ok_flag = True
82 | for x in range(interval):
83 | if indata[np.clip(index-x,0,len(indata)-1)]>indata[index] or indata[np.clip(index+x,0,len(indata)-1)]>indata[index]:
84 | ok_flag = False
85 | if ok_flag:
86 | indexs.append(index)
87 |
88 | if diff[i] <=0:
89 | rise = False
90 | else:
91 | rise = True
92 |
93 | return np.sort(np.array(indexs))
94 |
95 | def crop(data,indexs,size):
96 | if len(data) < size:
97 | data = pad(data, size-len(data))
98 |
99 | crop_result = []
100 | for index in indexs:
101 | if index-int(size/2)<0:
102 | crop_result.append(data[0:size])
103 |         elif index+int(size/2)>len(data):
104 | crop_result.append(data[len(data)-size:len(data)])
105 | else:
106 | crop_result.append(data[index-int(size/2):index+int(size/2)])
107 | return np.array(crop_result)
108 |
109 | def cropbyindex(data,indexs,reverse_indexs):
110 | crop_result = []
111 | crop_index = []
112 | for index in indexs:
113 | for i in range(len(reverse_indexs)-1):
114 | if reverse_indexs[i] < index:
115 | if reverse_indexs[i+1] > index:
116 | crop_result.append(data[reverse_indexs[i]:reverse_indexs[i+1]])
117 | crop_index.append([reverse_indexs[i],reverse_indexs[i+1]])
118 | return np.array(crop_result),np.array(crop_index)
119 |
120 | def get_crossing(line1,line2):
121 | cross_pos = []
122 | dif = line1-line2
123 | flag = 1
124 | if dif[0]<0:
125 | dif = -dif
126 | for i in range(int(len(dif))):
127 | if flag == 1:
128 | if dif[i] <= 0:
129 | cross_pos.append(i)
130 | flag = 0
131 | else:
132 | if dif[i] >= 0:
133 | cross_pos.append(i)
134 | flag = 1
135 | return cross_pos
136 |
137 | def get_y(indexs,fun):
138 | y = []
139 | for index in indexs:
140 | y.append(fun[index])
141 | return np.array(y)
142 |
143 | def fillnone(arr_in,flag,num = 7):
144 | arr = arr_in.copy()
145 | index = np.linspace(0,len(arr)-1,len(arr),dtype='int')
146 | cnt = 0
147 | for i in range(2,len(arr)-2):
148 | if arr[i] != flag:
149 | arr[i] = arr[i]
150 | if cnt != 0:
151 | if cnt <= num*2:
152 | arr[i-cnt:round(i-cnt/2)] = arr[i-cnt-1-2]
153 | arr[round(i-cnt/2):i] = arr[i+2]
154 | index[i-cnt:round(i-cnt/2)] = i-cnt-1-2
155 | index[round(i-cnt/2):i] = i+2
156 | else:
157 | arr[i-cnt:i-cnt+num] = arr[i-cnt-1-2]
158 | arr[i-num:i] = arr[i+2]
159 | index[i-cnt:i-cnt+num] = i-cnt-1-2
160 | index[i-num:i] = i+2
161 | cnt = 0
162 | else:
163 | cnt += 1
164 | return arr,index
165 |
166 |
167 | def closest(a,array):
168 | '''
169 | return:index
170 | '''
171 |
172 | corrcoefs = []
173 | for i in range(len(array)):
174 | #corrcoefs.append(np.linalg.norm(a-array[i], ord=1))
175 | corrcoefs.append(np.corrcoef(a,array[i])[0][1])
176 | return corrcoefs.index(np.max(corrcoefs))
177 |
178 | def match(src,dst):
179 | '''
180 | return:dstindexs
181 | '''
182 | indexs = []
183 | for i in range(len(src)):
184 | indexs.append(closest(src[i], dst))
185 | return np.array(indexs)
186 |
187 |
188 |
189 | def main():
190 | a = [0,2,4,6,8,10]
191 | print(interp(a, 6))
192 | if __name__ == '__main__':
193 | main()
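notation.py maps the mixed waveform into the signed 16-bit range with `(arrop.sigmoid(music)-0.5)*65536`. A standalone sketch of that soft-clipping step, assuming only numpy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Soft-clip an arbitrary float signal into the signed 16-bit range,
# mirroring `(sigmoid(music) - 0.5) * 65536` from notation.py.
def to_int16_range(signal):
    return (sigmoid(signal) - 0.5) * 65536

x = np.array([-100.0, 0.0, 100.0])
y = to_int16_range(x)
print(y)  # silence maps to 0; extremes saturate near +/-32768
```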
--------------------------------------------------------------------------------
/Util/dsp.py:
--------------------------------------------------------------------------------
1 | import scipy
2 | import scipy.signal
3 | import scipy.fftpack
4 | import numpy as np
5 | from .array_operation import *
6 | import matplotlib.pylab as plt
7 |
8 | def sin(f,fs,time,theta=0):
9 | x = np.linspace(0, 2*np.pi*f*time, int(fs*time))
10 | return np.sin(x+theta)
11 |
12 | def wave(f,fs,time,mode='sin'):
13 | f,fs,time = float(f),float(fs),float(time)
14 | if mode == 'sin':
15 | return sin(f,fs,time,theta=0)
16 | elif mode == 'square':
17 | half_T_num = int(time*f)*2 + 1
18 | half_T_point = int(fs/f/2)
19 | x = np.zeros(int(fs*time)+2*half_T_point)
20 | for i in range(half_T_num):
21 | if i%2 == 0:
22 | x[i*half_T_point:(i+1)*half_T_point] = -1
23 | else:
24 | x[i*half_T_point:(i+1)*half_T_point] = 1
25 | return x[:int(fs*time)]
26 | elif mode == 'triangle':
27 | half_T_num = int(time*f)*2 + 1
28 | half_T_point = int(fs/f/2)
29 | up = np.linspace(-1, 1, half_T_point)
30 | down = np.linspace(1, -1, half_T_point)
31 | x = np.zeros(int(fs*time)+2*half_T_point)
32 | for i in range(half_T_num):
33 | if i%2 == 0:
34 | x[i*half_T_point:(i+1)*half_T_point] = up.copy()
35 | else:
36 | x[i*half_T_point:(i+1)*half_T_point] = down.copy()
37 | return x[:int(fs*time)]
38 |
39 | def downsample(signal,fs1=0,fs2=0,alpha=0,mod = 'just_down'):
40 | if alpha ==0:
41 | alpha = int(fs1/fs2)
42 | if mod == 'just_down':
43 | return signal[::alpha]
44 | elif mod == 'avg':
45 | result = np.zeros(int(len(signal)/alpha))
46 | for i in range(int(len(signal)/alpha)):
47 | result[i] = np.mean(signal[i*alpha:(i+1)*alpha])
48 | return result
49 |
50 | def medfilt(signal,x):
51 | return scipy.signal.medfilt(signal,x)
52 |
53 | def cleanoffset(signal):
54 | return signal - np.mean(signal)
55 |
56 | def bpf(signal, fs, fc1, fc2, numtaps=3, mode='iir'):
57 | if mode == 'iir':
58 | b,a = scipy.signal.iirfilter(numtaps, [fc1,fc2], fs=fs)
59 | elif mode == 'fir':
60 | b = scipy.signal.firwin(numtaps, [fc1, fc2], pass_zero=False,fs=fs)
61 | a = 1
62 | return scipy.signal.lfilter(b, a, signal)
63 |
64 | def fft_filter(signal,fs,fc=[],type = 'bandpass'):
65 | '''
66 | signal: Signal
67 | fs: Sampling frequency
68 | fc: [fc1,fc2...] Cut-off frequency
69 | type: bandpass | bandstop
70 | '''
71 | k = []
72 | N=len(signal)#get N
73 |
74 | for i in range(len(fc)):
75 | k.append(int(fc[i]*N/fs))
76 |
77 | #FFT
78 | signal_fft=scipy.fftpack.fft(signal)
79 | #Frequency truncation
80 |
81 | if type == 'bandpass':
82 | a = np.zeros(N)
83 | for i in range(int(len(fc)/2)):
84 | a[k[2*i]:k[2*i+1]] = 1
85 | a[N-k[2*i+1]:N-k[2*i]] = 1
86 | elif type == 'bandstop':
87 | a = np.ones(N)
88 | for i in range(int(len(fc)/2)):
89 | a[k[2*i]:k[2*i+1]] = 0
90 | a[N-k[2*i+1]:N-k[2*i]] = 0
91 | signal_fft = a*signal_fft
92 | signal_ifft=scipy.fftpack.ifft(signal_fft)
93 | result = signal_ifft.real
94 | return result
95 |
96 | def basefreq(signal,fs,fc=0):
97 | if fc==0:
98 | kc = int(len(signal)/2)
99 | else:
100 | kc = int(len(signal)/fs*fc)
101 | length = len(signal)
102 | signal_fft = np.abs(scipy.fftpack.fft(signal))[:kc]
103 | _sum = np.sum(signal_fft)/2
104 | tmp_sum = 0
105 | for i in range(kc):
106 | tmp_sum += signal_fft[i]
107 | if tmp_sum > _sum:
108 | return i/(length/fs)
109 |
110 | def showfreq(signal,fs,fc=0):
111 | """
112 | return f,fft
113 | """
114 | if fc==0:
115 | kc = int(len(signal)/2)
116 | else:
117 | kc = int(len(signal)/fs*fc)
118 | signal_fft = np.abs(scipy.fftpack.fft(signal))
119 | f = np.linspace(0,fs/2,num=int(len(signal_fft)/2))
120 | return f[:kc],signal_fft[0:int(len(signal_fft)/2)][:kc]
121 |
122 | def rms(signal):
123 | signal = signal.astype('float64')
124 | return np.mean((signal*signal))**0.5
125 |
126 | def energy(signal,kernel_size,stride,padding = 0):
127 | _signal = np.zeros(len(signal)+padding)
128 | _signal[0:len(signal)] = signal
129 | signal = _signal
130 | out_len = int((len(signal)+1-kernel_size)/stride)
131 | energy = np.zeros(out_len)
132 | for i in range(out_len):
133 | energy[i] = rms(signal[i*stride:i*stride+kernel_size])
134 | return energy
135 |
136 | def envelope_demodulation(signal,kernel_size,alpha = 0.9,mod='max'):
137 | out_len = int(len(signal)/kernel_size)
138 | envelope = np.zeros(out_len)
139 | for i in range(out_len):
140 | # envelope[i] = np.max(signal[i*kernel_size:(i+1)*kernel_size])
141 | envelope[i] = np.sort(signal[i*kernel_size:(i+1)*kernel_size])[int(alpha*kernel_size)]
142 | return envelope
143 |
144 | def main():
145 |     print(downsample(np.arange(18), alpha=9))  # 'piano' is defined in sound.py; use a small array here
146 | if __name__ == '__main__':
147 | main()
148 |
149 |
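`fft_filter` implements its bandpass by zeroing FFT bins outside the cut-offs (including their mirrored counterparts) and inverse-transforming. A self-contained sketch of the same masking idea, using numpy's FFT in place of scipy.fftpack:

```python
import numpy as np

# Self-contained sketch of the FFT-mask bandpass in fft_filter():
# zero every bin outside [fc1, fc2] and its mirrored half, then ifft.
fs = 1000                                   # sampling frequency (Hz)
t = np.arange(fs) / fs                      # one second of samples
signal = np.sin(2*np.pi*50*t) + np.sin(2*np.pi*300*t)

N = len(signal)
k1, k2 = int(200 * N / fs), int(400 * N / fs)   # 200-400 Hz passband
mask = np.zeros(N)
mask[k1:k2] = 1
mask[N - k2:N - k1] = 1                     # keep the mirrored bins too

filtered = np.fft.ifft(mask * np.fft.fft(signal)).real

# The 50 Hz tone is removed; 300 Hz now dominates the spectrum.
dominant = np.argmax(np.abs(np.fft.fft(filtered))[:N // 2]) * fs / N
print(dominant)  # -> 300.0
```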
--------------------------------------------------------------------------------
/Util/ffmpeg.py:
--------------------------------------------------------------------------------
1 | import os,json
2 | import subprocess
3 |
4 | # ffmpeg 3.4.6
5 |
6 | def args2cmd(args):
7 | cmd = ''
8 | for arg in args:
9 | cmd += (arg+' ')
10 | return cmd
11 |
12 | def run(args,mode = 0):
13 |
14 | if mode == 0:
15 | cmd = args2cmd(args)
16 | os.system(cmd)
17 |
18 | elif mode == 1:
19 | '''
20 | out_string = os.popen(cmd_str).read()
21 | For chinese path in Windows
22 | https://blog.csdn.net/weixin_43903378/article/details/91979025
23 | '''
24 | cmd = args2cmd(args)
25 | stream = os.popen(cmd)._stream
26 | sout = stream.buffer.read().decode(encoding='utf-8')
27 | return sout
28 |
29 | elif mode == 2:
30 | cmd = args2cmd(args)
31 | p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
32 | sout = p.stdout.readlines()
33 | return sout
34 |
35 |
36 | def video2image(videopath, imagepath, fps=0, start_time='00:00:00', last_time='00:00:00'):
37 | args = ['ffmpeg', '-i', '"'+videopath+'"']
38 | if last_time != '00:00:00':
39 | args += ['-ss', start_time]
40 | args += ['-t', last_time]
41 | if fps != 0:
42 | args += ['-r', str(fps)]
43 |     args += ['-f', 'image2','-q:v','0',imagepath]
44 | run(args)
45 |
46 | def video2voice(videopath, voicepath, start_time='00:00:00', last_time='00:00:00'):
47 | args = ['ffmpeg', '-i', '"'+videopath+'"','-f mp3','-b:a 320k']
48 | if last_time != '00:00:00':
49 | args += ['-ss', start_time]
50 | args += ['-t', last_time]
51 | args += [voicepath]
52 | run(args)
53 |
54 | def image2video(fps,imagepath,voicepath,videopath):
55 | if voicepath != None:
56 | os.system('ffmpeg -y -r '+str(fps)+' -i '+imagepath+' -vcodec libx264 '+'./tmp/video_tmp.mp4')
57 | os.system('ffmpeg -i ./tmp/video_tmp.mp4 -i "'+voicepath+'" -vcodec copy -acodec aac '+videopath)
58 | else:
59 | os.system('ffmpeg -y -r '+str(fps)+' -i '+imagepath+' -vcodec libx264 '+videopath)
60 |
61 | def get_video_infos(videopath):
62 | args = ['ffprobe -v quiet -print_format json -show_format -show_streams', '-i', '"'+videopath+'"']
63 |
64 | out_string = run(args,mode=1)
65 |
66 | infos = json.loads(out_string)
67 | try:
68 | fps = eval(infos['streams'][0]['avg_frame_rate'])
69 | endtime = float(infos['format']['duration'])
70 | width = int(infos['streams'][0]['width'])
71 | height = int(infos['streams'][0]['height'])
72 | except Exception as e:
73 | fps = eval(infos['streams'][1]['r_frame_rate'])
74 | endtime = float(infos['format']['duration'])
75 | width = int(infos['streams'][1]['width'])
76 | height = int(infos['streams'][1]['height'])
77 |
78 | return fps,endtime,height,width
79 |
80 | def cut_video(in_path,start_time,last_time,out_path,vcodec='h265'):
81 | if vcodec == 'copy':
82 | os.system('ffmpeg -ss '+start_time+' -t '+last_time+' -i "'+in_path+'" -vcodec copy -acodec copy '+out_path)
83 | elif vcodec == 'h264':
84 | os.system('ffmpeg -ss '+start_time+' -t '+last_time+' -i "'+in_path+'" -vcodec libx264 -b 12M '+out_path)
85 | elif vcodec == 'h265':
86 | os.system('ffmpeg -ss '+start_time+' -t '+last_time+' -i "'+in_path+'" -vcodec libx265 -b 12M '+out_path)
87 |
88 | def continuous_screenshot(videopath,savedir,fps):
89 | '''
90 | videopath: input video path
91 | savedir: images will save here
92 | fps: save how many images per second
93 | '''
94 | videoname = os.path.splitext(os.path.basename(videopath))[0]
95 | os.system('ffmpeg -i "'+videopath+'" -vf fps='+str(fps)+' -q:v 0 '+savedir+'/'+videoname+'_%06d.jpg')
96 |
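`args2cmd` joins the argument list with spaces before handing it to `os.system`. A sketch of the same pattern that additionally quotes arguments containing spaces; `shlex.quote` is an addition of this example, not something the module uses:

```python
import shlex

# Join an ffmpeg-style argument list into one command string,
# quoting any argument (e.g. a path with spaces) that needs it.
def args2cmd(args):
    return ' '.join(shlex.quote(str(a)) for a in args)

cmd = args2cmd(['ffmpeg', '-i', 'my video.mp4', '-r', 25, 'out/%06d.jpg'])
print(cmd)  # -> ffmpeg -i 'my video.mp4' -r 25 out/%06d.jpg
```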
--------------------------------------------------------------------------------
/Util/image_processing.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import numpy as np
3 | import random
4 |
5 | import platform
6 | system_type = 'Linux'
7 | if 'Windows' in platform.platform():
8 | system_type = 'Windows'
9 |
10 | def imread(file_path,mod = 'normal'):
11 | '''
12 | mod = 'normal' | 'gray' | 'all'
13 | '''
14 | if system_type == 'Linux':
15 | if mod == 'normal':
16 | img = cv2.imread(file_path)
17 | elif mod == 'gray':
18 | img = cv2.imread(file_path,0)
19 | elif mod == 'all':
20 | img = cv2.imread(file_path,-1)
21 |
22 |     #For Chinese paths on Windows, use cv2.imdecode.
23 |     #Note: this loses EXIF data, which cannot easily be recovered.
24 | else:
25 | if mod == 'gray':
26 | img = cv2.imdecode(np.fromfile(file_path,dtype=np.uint8),0)
27 | else:
28 | img = cv2.imdecode(np.fromfile(file_path,dtype=np.uint8),-1)
29 |
30 | return img
31 |
32 | def imwrite(file_path,img):
33 | '''
34 |     In order to save images to Chinese paths on Windows;
35 |     this function is only used to save the final output images.
36 | '''
37 | if system_type == 'Linux':
38 | cv2.imwrite(file_path, img)
39 | else:
40 | cv2.imencode('.jpg', img)[1].tofile(file_path)
41 |
42 | def resize(img,size,interpolation=cv2.INTER_LINEAR):
43 | h, w = img.shape[:2]
44 | if np.min((w,h)) ==size:
45 | return img
46 | if w >= h:
47 | res = cv2.resize(img,(int(size*w/h), size),interpolation=interpolation)
48 | else:
49 | res = cv2.resize(img,(size, int(size*h/w)),interpolation=interpolation)
50 | return res
51 |
52 | def resize_like(img,img_like):
53 | h, w = img_like.shape[:2]
54 | img = cv2.resize(img, (w,h))
55 | return img
56 |
57 | def ch_one2three(img):
58 | #zeros = np.zeros(img.shape[:2], dtype = "uint8")
59 | # ret,thresh = cv2.threshold(img,127,255,cv2.THRESH_BINARY)
60 | res = cv2.merge([img, img, img])
61 | return res
62 |
63 | def color_adjust(img,alpha=1,beta=0,b=0,g=0,r=0,ran = False):
64 | '''
65 | g(x) = (1+α)g(x)+255*β,
66 | g(x) = g(x[:+b*255,:+g*255,:+r*255])
67 |
68 | Args:
69 | img : input image
70 | alpha : contrast
71 | beta : brightness
72 | b : blue hue
73 | g : green hue
74 | r : red hue
75 | ran : if True, randomly generated color correction parameters
76 |     Returns:
77 | img : output image
78 | '''
79 | img = img.astype('float')
80 | if ran:
81 | alpha = random.uniform(-0.2,0.2)
82 | beta = random.uniform(-0.2,0.2)
83 | b = random.uniform(-0.1,0.1)
84 | g = random.uniform(-0.1,0.1)
85 | r = random.uniform(-0.1,0.1)
86 | img = (1+alpha)*img+255.0*beta
87 | bgr = [b*255.0,g*255.0,r*255.0]
88 | for i in range(3): img[:,:,i]=img[:,:,i]+bgr[i]
89 |
90 | return (np.clip(img,0,255)).astype('uint8')
91 |
92 | def makedataset(target_image,orgin_image):
93 | target_image = resize(target_image,256)
94 | orgin_image = resize(orgin_image,256)
95 | img = np.zeros((256,512,3), dtype = "uint8")
96 | w = orgin_image.shape[1]
97 | img[0:256,0:256] = target_image[0:256,int(w/2-256/2):int(w/2+256/2)]
98 | img[0:256,256:512] = orgin_image[0:256,int(w/2-256/2):int(w/2+256/2)]
99 | return img
100 |
101 | def image2folat(img,ch):
102 | size=img.shape[0]
103 | if ch == 1:
104 | img = (img[:,:,0].reshape(1,size,size)/255.0).astype(np.float32)
105 | else:
106 | img = (img.transpose((2, 0, 1))/255.0).astype(np.float32)
107 | return img
108 |
109 | def spiltimage(img,size = 128):
110 | h, w = img.shape[:2]
111 | # size = min(h,w)
112 | if w >= h:
113 | img1 = img[:,0:size]
114 | img2 = img[:,w-size:w]
115 | else:
116 | img1 = img[0:size,:]
117 | img2 = img[h-size:h,:]
118 |
119 | return img1,img2
120 |
121 | def mergeimage(img1,img2,orgin_image,size = 128):
122 | h, w = orgin_image.shape[:2]
123 | new_img1 = np.zeros((h,w), dtype = "uint8")
124 | new_img2 = np.zeros((h,w), dtype = "uint8")
125 |
126 | # size = min(h,w)
127 | if w >= h:
128 | new_img1[:,0:size]=img1
129 | new_img2[:,w-size:w]=img2
130 | else:
131 | new_img1[0:size,:]=img1
132 | new_img2[h-size:h,:]=img2
133 | result_img = cv2.add(new_img1,new_img2)
134 | return result_img
135 |
136 | def find_best_ROI(mask):
137 | contours,hierarchy=cv2.findContours(mask, cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
138 | if len(contours)>0:
139 | areas = []
140 | for contour in contours:
141 | areas.append(cv2.contourArea(contour))
142 | index = areas.index(max(areas))
143 | mask = np.zeros_like(mask)
144 | mask = cv2.fillPoly(mask,[contours[index]],(255))
145 | return mask
146 |
147 | def boundingSquare(mask,Ex_mul):
148 | # thresh = mask_threshold(mask,10,threshold)
149 | area = mask_area(mask)
150 | if area == 0 :
151 | return 0,0,0,0
152 |
153 | x,y,w,h = cv2.boundingRect(mask)
154 |
155 | center = np.array([int(x+w/2),int(y+h/2)])
156 | size = max(w,h)
157 | point0=np.array([x,y])
158 | point1=np.array([x+size,y+size])
159 |
160 | h, w = mask.shape[:2]
161 | if size*Ex_mul > min(h, w):
162 | size = min(h, w)
163 | halfsize = int(min(h, w)/2)
164 | else:
165 | size = Ex_mul*size
166 | halfsize = int(size/2)
167 | size = halfsize*2
168 | point0 = center - halfsize
169 | point1 = center + halfsize
170 | if point0[0]<0:
171 | point0[0]=0
172 | point1[0]=size
173 | if point0[1]<0:
174 | point0[1]=0
175 | point1[1]=size
176 | if point1[0]>w:
177 | point1[0]=w
178 | point0[0]=w-size
179 | if point1[1]>h:
180 | point1[1]=h
181 | point0[1]=h-size
182 | center = ((point0+point1)/2).astype('int')
183 | return center[0],center[1],halfsize,area
184 |
185 | def mask_threshold(mask,blur,threshold):
186 | mask = cv2.threshold(mask,threshold,255,cv2.THRESH_BINARY)[1]
187 | mask = cv2.blur(mask, (blur, blur))
188 | mask = cv2.threshold(mask,threshold/5,255,cv2.THRESH_BINARY)[1]
189 | return mask
190 |
191 | def mask_area(mask):
192 | mask = cv2.threshold(mask,127,255,0)[1]
193 | # contours= cv2.findContours(mask,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)[1] #for opencv 3.4
194 |     contours= cv2.findContours(mask,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)[0] #updated for OpenCV 4.x
195 | try:
196 | area = cv2.contourArea(contours[0])
197 | except:
198 | area = 0
199 | return area
200 |
201 |
202 | def replace_mosaic(img_origin,img_fake,x,y,size,no_father):
203 | img_fake = resize(img_fake,size*2)
204 | if no_father:
205 | img_origin[y-size:y+size,x-size:x+size]=img_fake
206 | img_result = img_origin
207 | else:
208 | #color correction
209 | RGB_origin = img_origin[y-size:y+size,x-size:x+size].mean(0).mean(0)
210 | RGB_fake = img_fake.mean(0).mean(0)
211 | for i in range(3):img_fake[:,:,i] = np.clip(img_fake[:,:,i]+RGB_origin[i]-RGB_fake[i],0,255)
212 | #eclosion
213 | eclosion_num = int(size/5)
214 | entad = int(eclosion_num/2+2)
215 | mask = np.zeros(img_origin.shape, dtype='uint8')
216 | mask = cv2.rectangle(mask,(x-size+entad,y-size+entad),(x+size-entad,y+size-entad),(255,255,255),-1)
217 | mask = (cv2.blur(mask, (eclosion_num, eclosion_num)))
218 | mask = mask/255.0
219 |
220 | img_tmp = np.zeros(img_origin.shape)
221 | img_tmp[y-size:y+size,x-size:x+size]=img_fake
222 | img_result = img_origin.copy()
223 | img_result = (img_origin*(1-mask)+img_tmp*mask).astype('uint8')
224 | return img_result
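The brightness/contrast step of `color_adjust` is plain array arithmetic. A numpy-only sketch (no OpenCV required) of that part of the transform:

```python
import numpy as np

# Numpy-only sketch of the brightness/contrast step in color_adjust():
# g(x) = (1 + alpha) * g(x) + 255 * beta, then clip back to uint8.
def adjust(img, alpha=0.0, beta=0.0):
    out = (1 + alpha) * img.astype('float') + 255.0 * beta
    return np.clip(out, 0, 255).astype('uint8')

img = np.full((2, 2, 3), 100, dtype='uint8')
print(adjust(img, alpha=0.5, beta=0.1))  # every pixel becomes 175
```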
--------------------------------------------------------------------------------
/Util/sound.py:
--------------------------------------------------------------------------------
1 | import time
2 | import numpy as np
3 | import scipy.fftpack
4 | from .array_operation import *
5 | from .dsp import *
6 | from .util import *
7 | import librosa
8 | from scipy.io import wavfile
9 | import os
10 | import matplotlib.pylab as plt
11 |
12 | piano = np.array([0,27.5,29.1,30.9,32.7,34.6,36.7,38.9,41.2,
13 | 43.7,46.2,49.0,51.9,55.0,58.3,61.7,65.4,69.3,
14 | 73.4,77.8,82.4,87.3,92.5,98.0,103.8,110.0,116.5,
15 | 123.5,130.8,138.6,146.8,155.6,164.8,174.6,185.0,
16 | 196.0,207.7,220.0,233.1,246.9,261.6,277.2,293.7,
17 | 311.1,329.6,349.2,370.0,392.0,415.3,440.0,466.2,
18 | 493.9,523.3,554.4,587.3,622.3,659.3,698.5,740.0,
19 | 784.0,830.6,880.0,932.3,987.8,1047,1109,1175,1245,
20 | 1319,1397,1480,1568,1661,1760,1865,1976,2093,2217,
21 | 2349,2489,2637,2794,2960,3136,3322,3520,3729,3951,4186,4400])
22 |
23 | piano_10 = np.array([0,73.4,207.7,349.2,587.3,987.8,1245,1661,2093,2794,4400])
24 |
25 | #------------------------------IO------------------------------
26 | def numpy2voice(npdata):
27 | voice = np.zeros((len(npdata),2))
28 | voice[:,0] = npdata
29 | voice[:,1] = npdata
30 | return voice
31 |
32 | def load(path,ch = 0):
33 | freq,audio = wavfile.read(path)
34 | if ch == 0:
35 | audio = audio[:,0]
36 | elif ch == 1:
37 | audio = audio[:,1]
38 | return freq,audio.astype(np.float64)
39 |
40 | def write(npdata,path='./tmp/test_output.wav',freq = 44100):
41 | voice = numpy2voice(npdata)
42 | wavfile.write(path, freq, voice.astype(np.int16))
43 |
44 | def play(path):
45 | os.system("paplay "+path)
46 |
47 | def playtest(npdata,freq = 44100):
48 | time.sleep(0.5)
49 | makedirs('./tmp/')
50 | voice = numpy2voice(npdata)
51 | wavfile.write('./tmp/test_output.wav', freq, voice.astype(np.int16))
52 | play('./tmp/test_output.wav')
53 |
54 |
55 | #------------------------------DSP------------------------------
56 |
57 | def filter(audio,fc=[],fs=44100,win_length = 4096):
58 | for i in range(len(audio)//win_length):
59 | audio[i*win_length:(i+1)*win_length] = fft_filter(audio[i*win_length:(i+1)*win_length], fs=fs, fc=fc)
60 |
61 | return audio
62 |
63 | # def freq_correct(src,dst,fs=44100,alpha = 0.05,fc=3000):
64 | # src_freq = basefreq(src, 44100,3000)
65 | # dst_freq = basefreq(dst, 44100,3000)
66 | # offset = int((src_freq-dst_freq)/(src_freq*0.05))
67 | # out = librosa.effects.pitch_shift(dst.astype(np.float64), 44100, n_steps=offset)
68 | # #print('freqloss:',round((basefreq(out, 44100,3000)-basefreq(src, 44100,3000))/basefreq(src, 44100,3000),3))
69 | # return out.astype(np.int16)
70 |
71 | def freq_correct(src,dst,fs=44100,fc=3000,mode = 'normal',win_length = 1024, alpha = 1):
72 |
73 | out = np.zeros_like(src)
74 | try:
75 | if mode == 'normal':
76 | src_oct = librosa.hz_to_octs(basefreq(src, fs, fc))
77 | dst_oct = librosa.hz_to_octs(basefreq(dst, fs, fc))
78 | offset = (dst_oct-src_oct)*12*alpha
79 | out = librosa.effects.pitch_shift(src, 44100, n_steps=offset)
80 | elif mode == 'track':
81 | length = min([len(src),len(dst)])
82 | for i in range(length//win_length):
83 | src_oct = librosa.hz_to_octs(basefreq(src[i*win_length:(i+1)*win_length], fs, fc))
84 | dst_oct = librosa.hz_to_octs(basefreq(dst[i*win_length:(i+1)*win_length], fs, fc))
85 |
86 | offset = (dst_oct-src_oct)*12*alpha
87 | out[i*win_length:(i+1)*win_length] = librosa.effects.pitch_shift(src[i*win_length:(i+1)*win_length], 44100, n_steps=offset)
88 | return out
89 | except Exception as e:
90 | return src
91 |
92 | #print('freqloss:',round((basefreq(out, 44100,3000)-basefreq(src, 44100,3000))/basefreq(src, 44100,3000),3))
93 |
94 |
95 | def energy_correct(src,dst,mode = 'normal',win_length = 512,alpha=1):
96 | """
97 | mode: normal | track
98 | """
99 | out = np.zeros_like(src)
100 | if mode == 'normal':
101 | src_rms = rms(src)
102 | dst_rms = rms(dst)
103 | out = src*(dst_rms/src_rms)*alpha
104 | elif mode == 'track':
105 | length = min([len(src),len(dst)])
106 | tracks = []
107 | for i in range(length//win_length):
108 | src_rms = np.clip(rms(src[i*win_length:(i+1)*win_length]),1e-6,np.inf)
109 | dst_rms = rms(dst[i*win_length:(i+1)*win_length])
110 | tracks.append((dst_rms/src_rms)*alpha)
111 | tracks = np.clip(np.array(tracks),0.1,10)
112 | tracks = interp(tracks, length)
113 | out = src*tracks
114 |
115 | return np.clip(out,-32760,32760)
116 |
117 | def time_correct(src,dst,_min=0.25):
118 | src_time = len(src)
119 | dst_time = len(dst)
120 | rate = np.clip(src_time/dst_time,_min,100)
121 | out = librosa.effects.time_stretch(src,rate)
122 | return out
123 |
124 | def freqfeatures(signal,fs):
125 | signal = normliaze(signal,mode = '5_95',truncated=100)
126 | signal_fft = np.abs(scipy.fftpack.fft(signal))
127 | length = len(signal)
128 | features = []
129 | for i in range(len(piano_10)-1):
130 | k1 = int(length/fs*piano_10[i])
131 | k2 = int(length/fs*piano_10[i+1])
132 | features.append(np.mean(signal_fft[k1:k2]))
133 | return np.array(features)
134 |
135 |
136 |
137 | def main():
138 | xp = [1, 2, 3]
139 | fp = [3, 2, 0]
140 | x = [0, 1, 1.5, 2.72, 3.14]
141 | print(np.interp(x, xp, fp))
142 |
143 | if __name__ == '__main__':
144 | main()
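`energy` frames the signal into fixed windows and takes the RMS of each, which is also how `energy_correct`'s 'track' mode measures loudness. A standalone sketch of that framing; the window count here is chosen so the trailing full window is always included:

```python
import numpy as np

# Standalone sketch of rms()/energy(): frame the signal into windows
# of kernel_size samples and take the RMS of each frame.
def rms(signal):
    signal = signal.astype('float64')
    return np.mean(signal * signal) ** 0.5

def energy(signal, kernel_size, stride):
    # Window count chosen so the final full window is included.
    out_len = (len(signal) - kernel_size) // stride + 1
    return np.array([rms(signal[i * stride:i * stride + kernel_size])
                     for i in range(out_len)])

x = np.concatenate([np.full(512, 2.0), np.full(512, 4.0)])
print(energy(x, kernel_size=512, stride=512))  # -> [2. 4.]
```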
--------------------------------------------------------------------------------
/Util/util.py:
--------------------------------------------------------------------------------
1 | import os
2 | import shutil
3 | def Traversal(filedir):
4 | file_list=[]
5 | for root,dirs,files in os.walk(filedir):
6 | for file in files:
7 | file_list.append(os.path.join(root,file))
8 |         # os.walk already descends into every subdirectory,
9 |         # so no explicit recursion over dirs is needed here.
10 | return file_list
11 |
12 | def is_img(path):
13 | ext = os.path.splitext(path)[1]
14 | ext = ext.lower()
15 | if ext in ['.jpg','.png','.jpeg','.bmp']:
16 | return True
17 | else:
18 | return False
19 |
20 | def is_video(path):
21 | ext = os.path.splitext(path)[1]
22 | ext = ext.lower()
23 | if ext in ['.mp4','.flv','.avi','.mov','.mkv','.wmv','.rmvb','.mts']:
24 | return True
25 | else:
26 | return False
27 |
28 | def is_imgs(paths):
29 | tmp = []
30 | for path in paths:
31 | if is_img(path):
32 | tmp.append(path)
33 | return tmp
34 |
35 | def is_videos(paths):
36 | tmp = []
37 | for path in paths:
38 | if is_video(path):
39 | tmp.append(path)
40 | return tmp
41 |
42 | def writelog(path,log):
43 |     # append one line; 'with' guarantees the handle is closed
44 |     with open(path,'a') as f:
45 |         f.write(log+'\n')
46 |
47 |
48 | def makedirs(path):
49 | if os.path.isdir(path):
50 |         print(path,'already exists')
51 | else:
52 | os.makedirs(path)
53 | print('makedir:',path)
54 |
55 | def clean_tempfiles(tmp_init=True):
56 | if os.path.isdir('./tmp'):
57 | shutil.rmtree('./tmp')
58 | if tmp_init:
59 | os.makedirs('./tmp')
60 | os.makedirs('./tmp/video_voice')
61 | os.makedirs('./tmp/music/')
62 | os.makedirs('./tmp/video_imgs')
63 | os.makedirs('./tmp/output_imgs')
64 |
65 | def file_init(opt):
66 | if not os.path.isdir(opt.result_dir):
67 | os.makedirs(opt.result_dir)
68 | print('makedir:',opt.result_dir)
69 | clean_tempfiles()
70 |
71 | def get_bar(percent,num = 25):
72 | bar = '['
73 | for i in range(num):
74 | if i < round(percent/(100/num)):
75 | bar += '#'
76 | else:
77 | bar += '-'
78 | bar += ']'
79 | return bar+' '+str(round(percent,2))+'%'
80 |
81 | def copyfile(src,dst):
82 | try:
83 | shutil.copyfile(src, dst)
84 | except Exception as e:
85 | print(e)
86 |
87 | def second2stamp(s):
88 | floats = s
89 | h = int(s/3600)
90 | s = int(s%3600)
91 | m = int(s/60)
92 | s = int(s%60)
93 |     floats = floats - int(floats) + s  # fractional part + whole seconds
94 |
95 |     return "%02d:%02d:%06.3f" % (h, m, floats)  # %06.3f zero-pads seconds below 10
96 |
--------------------------------------------------------------------------------
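The `second2stamp` conversion above can be checked with a self-contained sketch. This re-implementation is for illustration only; it uses `%06.3f` so that seconds below 10 keep their leading zero (e.g. 7.5 becomes "07.500"):

```python
def second2stamp(s):
    # convert seconds (float) to an "HH:MM:SS.mmm" timestamp
    h = int(s // 3600)
    m = int((s % 3600) // 60)
    sec = s % 60
    # %06.3f pads the whole field to 6 chars: "07.500", not "7.500"
    return "%02d:%02d:%06.3f" % (h, m, sec)

print(second2stamp(3661.5))  # → 01:01:01.500
print(second2stamp(7.25))    # → 00:00:07.250
```

Timestamps in this format are what ffmpeg-style tools expect for seek offsets, which is the usual reason to keep the zero padding.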
/imgs/DeepMosaics.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/imgs/DeepMosaics.jpg
--------------------------------------------------------------------------------
/imgs/GuiChu_余生一个浪.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/imgs/GuiChu_余生一个浪.jpg
--------------------------------------------------------------------------------
/imgs/OneImage2Video_badapple.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/imgs/OneImage2Video_badapple.jpg
--------------------------------------------------------------------------------
/imgs/PlayMusic_小星星.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/imgs/PlayMusic_小星星.jpg
--------------------------------------------------------------------------------
/imgs/Plogo.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/imgs/Plogo.jpg
--------------------------------------------------------------------------------
/imgs/ShellPlayer_大威天龙.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/imgs/ShellPlayer_大威天龙.jpg
--------------------------------------------------------------------------------
/imgs/ShellPlayer_教程.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HypoX64/bilibili/992029667ad37d7d03131aa2c4c9923da6cca6f2/imgs/ShellPlayer_教程.jpg
--------------------------------------------------------------------------------