├── output └── readme.txt ├── input ├── dog.png ├── mm.png ├── pp.jpg ├── two.jpg ├── dawei.png ├── erika.png ├── house.png ├── reika.png ├── cat_up.png ├── kasumi.png ├── misuzu.png ├── satomi.png ├── satomi2.png └── ztfn_up.png ├── Supplementary-Material ├── .DS_Store ├── erika.gif ├── erika.jpg ├── kasumi.gif ├── kasumi.jpg ├── satomi2.gif └── satomi2.jpg ├── Resize.py ├── deblue.py ├── quicksort.py ├── README.md ├── LDR.py ├── drawpatch.py ├── tools.py ├── tone.py ├── sketch_demo.ipynb ├── genStroke_origin.py ├── statistics.py ├── simulate_.py ├── ETF └── edge_tangent_flow.py ├── simulate.py ├── simulate_paper.py ├── sketch_movie.ipynb ├── process _Orthogonal.py ├── process.py ├── process_fix_dir.py ├── process_random.py ├── draw.py ├── dog.py ├── girl.py ├── process_order.py ├── cat.py └── LICENSE /output/readme.txt: -------------------------------------------------------------------------------- 1 | output files 2 | -------------------------------------------------------------------------------- /input/dog.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/dog.png -------------------------------------------------------------------------------- /input/mm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/mm.png -------------------------------------------------------------------------------- /input/pp.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/pp.jpg -------------------------------------------------------------------------------- /input/two.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/two.jpg -------------------------------------------------------------------------------- /input/dawei.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/dawei.png -------------------------------------------------------------------------------- /input/erika.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/erika.png -------------------------------------------------------------------------------- /input/house.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/house.png -------------------------------------------------------------------------------- /input/reika.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/reika.png -------------------------------------------------------------------------------- /input/cat_up.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/cat_up.png -------------------------------------------------------------------------------- /input/kasumi.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/kasumi.png -------------------------------------------------------------------------------- /input/misuzu.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/misuzu.png -------------------------------------------------------------------------------- /input/satomi.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/satomi.png -------------------------------------------------------------------------------- /input/satomi2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/satomi2.png -------------------------------------------------------------------------------- /input/ztfn_up.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/input/ztfn_up.png -------------------------------------------------------------------------------- /Supplementary-Material/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/Supplementary-Material/.DS_Store -------------------------------------------------------------------------------- /Supplementary-Material/erika.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/Supplementary-Material/erika.gif -------------------------------------------------------------------------------- /Supplementary-Material/erika.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/Supplementary-Material/erika.jpg -------------------------------------------------------------------------------- /Supplementary-Material/kasumi.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/Supplementary-Material/kasumi.gif -------------------------------------------------------------------------------- /Supplementary-Material/kasumi.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/Supplementary-Material/kasumi.jpg -------------------------------------------------------------------------------- /Supplementary-Material/satomi2.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/Supplementary-Material/satomi2.gif -------------------------------------------------------------------------------- /Supplementary-Material/satomi2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale/main/Supplementary-Material/satomi2.jpg -------------------------------------------------------------------------------- /Resize.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | 4 | input_path = './input/ztfn1.png' 5 | output_path = './output' 6 | 7 | min_length = 320 8 | img = cv2.imread(input_path, cv2.IMREAD_COLOR) 9 | (h,w,c) = img.shape 10 | if h= pivot['importance']: 9 | 10 | i = i+1 11 | arr[i],arr[j] = arr[j],arr[i] 12 | 13 | arr[i+1],arr[high] = arr[high],arr[i+1] 14 | return ( i+1 ) 15 | 16 | 17 | # arr[] --> 排序数组 18 | # low --> 起始索引 19 | # high --> 结束索引 20 | 21 | # 快速排序函数 22 | def quickSort(arr,low,high): 23 | if low < high: 24 | 25 | pi = partition(arr,low,high) 26 | 27 | quickSort(arr, low, pi-1) 28 | quickSort(arr, pi+1, high) 29 | 30 | 31 | 32 | # print ("排序后的数组:") 33 | # for i in range(n): 34 | # print ("%d" %arr[i]), -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # sketch_demo.ipynb 2 |  2020/12に論文が提出された、画像から鉛筆画を制作プロセスを含めて生成する技術のサンプルコードです。 3 | 4 |  sketch_demo.ipynb ファイルをクリックするとJupyter notebook形式のファイルが表示されます。先頭にある「Open in Colab」ボタンを押すと、Google Colab上で実行できます。使用環境は、Google Colab が動作すれば、どんなものでも構いません。詳細は、cedro-blogを参照下さい。 5 | 6 | 7 | ### Results 8 |
9-18 | [Results: three rows of paired sample images; the original HTML <img> markup was lost in extraction and only the alt text "cat" remains]
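A minimal usage sketch of the calls the notebook makes (taken from the cells of `sketch_demo.ipynb` further down in this repository; run it from the repository root so `draw.py` and the `input/` folder are on the path):

```python
# Minimal usage sketch mirroring sketch_demo.ipynb.
from draw import *   # draw.py exposes draw(file_name, n, period)

file_name = 'kasumi.png'   # any image placed in the input/ folder
n = 10                     # number of grayscale quantization levels
period = 5                 # stroke width (line period)

# Writes intermediate frames to output/kasumi/process/ and the final
# results (result_RGB.jpg etc.) to output/kasumi/.
draw(file_name, n, period)
```

The notebook then assembles the saved frames into a video with `ffmpeg -r 30 -i output/kasumi/process/%04d.jpg -vcodec libx264 -pix_fmt yuv420p output/kasumi/out.mp4`.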
19 | 20 | -------------------------------------------------------------------------------- /LDR.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | from matplotlib import pyplot as plt 4 | import torch 5 | import torch.nn as nn 6 | import torch.nn.functional as F 7 | from PIL import Image 8 | 9 | from simulate import * 10 | from drawpatch import * 11 | 12 | 13 | def LDR(img, n): 14 | Interval = 255.0/n 15 | img = np.float32(img) 16 | img = np.uint8(img/Interval) 17 | img = np.clip(img,0,n-1) 18 | img = np.uint8((img+0.5)*Interval) 19 | 20 | return img 21 | 22 | 23 | def HistogramEqualization(img,clipLimit=2, tileGridSize=(10,10)): 24 | # create a CLAHE object (Arguments are optional). 25 | clahe = cv2.createCLAHE(clipLimit=clipLimit, tileGridSize=tileGridSize) 26 | img = clahe.apply(img) 27 | return img 28 | 29 | if __name__ == '__main__': 30 | img_path = './input/jiangwen/010s.jpg' 31 | img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE) 32 | 33 | img = HistogramEqualization(img) 34 | # img = LDR(img) 35 | # LDR_single(img, 8) 36 | 37 | # # 核的大小和形状 38 | # kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2)) 39 | # # 开操作 40 | # img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel, iterations=3) 41 | # cv2.imshow('MORPH_OPEN', img) 42 | # cv2.waitKey(0) 43 | 44 | # LDR_single_add(img, 8) 45 | 46 | s = SideWindowFilter(radius=1, iteration=1) 47 | img = torch.tensor(img, dtype=torch.float32) 48 | 49 | if len(img.size()) == 2: 50 | h, w = img.size() 51 | img = img.view(-1, 1, h, w) 52 | else: 53 | c, h, w = img.size() 54 | img = img.view(-1, c, h, w) 55 | print('img size ', img.shape) 56 | 57 | res = s.forward(img) 58 | print('res = ', res.shape) 59 | if res.size(1) == 2: 60 | res = np.transpose(np.squeeze(res.data.numpy()), (1, 2, 0)) 61 | else: 62 | res = np.squeeze(res.data.numpy()) 63 | 64 | 65 | for n in range(8,11): 66 | img = LDR(res, n) 67 | # cv2.imshow("LDR",img) 68 | # cv2.waitKey(0) 69 | cv2.imwrite("D:/ECCV2020/input/jiangwen/LDR{}.jpg".format(n),img) 70 | 71 | print("done") 72 | 73 | 74 | -------------------------------------------------------------------------------- /drawpatch.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | from matplotlib import pyplot as plt 4 | from simulate import * 5 | 6 | 7 | def rotate(image, angle, scale=1.0, pad_color=255): 8 | # grab the dimensions of the image and then determine the center 9 | (h, w) = image.shape[:2] 10 | (cX, cY) = (w // 2, h // 2) 11 | 12 | # grab the rotation matrix (applying the negative of the angle to rotate clockwise) 13 | # scale can be adjusted 14 | M = cv2.getRotationMatrix2D(center=(cX, cY), angle=angle, scale=scale) 15 | # M.shape = 2 * 3 16 | cos = np.abs(M[0, 0]) 17 | sin = np.abs(M[0, 1]) 18 | # cos sin 0 19 | # -sin cos 0 20 | # 0 0 1 21 | 22 | # compute the new size of the image 23 | nW = int((h * sin) + (w * cos)) 24 | nH = int((h * cos) + (w * sin)) 25 | 26 | # adjust the rotation matrix to take into account translation 27 | M[0, 2] += (nW / 2) - cX 28 | M[1, 2] += (nH / 2) - cY 29 | 30 | # compute the new origin_point of the image 31 | origin_point = np.array([nW/2.0-M[0,1]*h/2.0, nH/2.0-M[1,1]*h/2.0]) 32 | origin_point = np.round(origin_point) 33 | origin_point = (int(origin_point[0]), int(origin_point[1])) 34 | 35 | # perform the actual rotation and return the image 36 | result = cv2.warpAffine(src=image, M=M, dsize=(nW, nH), 
borderValue=(pad_color,pad_color,pad_color)) 37 | 38 | # result[:,origin_point[0]] = 0 39 | # result[origin_point[1],:] = 0 40 | return result, origin_point 41 | 42 | 43 | def drawpatch(canvas, patch_size, angle, scale, location, grayscale): 44 | distribution = ChooseDistribution(period=7,Grayscale=grayscale) 45 | patch = GetParallel(distribution=distribution, height = patch_size[0], length = patch_size[1], period=distribution.shape[0]) 46 | 47 | imgRotation, origin_point = rotate(image=patch, angle=angle, scale=scale) 48 | 49 | (h,w) = imgRotation.shape 50 | (H,W) = canvas.shape 51 | pad_canvas = np.zeros((2*h+canvas.shape[0], 2*w+canvas.shape[1]), dtype=np.uint8) 52 | pad_canvas[h:h+H, w:w+W] = canvas 53 | 54 | Aligned_point = [w+location[0]-origin_point[0], h+location[1]-origin_point[1]] 55 | 56 | m = np.minimum(imgRotation, pad_canvas[Aligned_point[1]:Aligned_point[1]+h, Aligned_point[0]:Aligned_point[0]+w]) 57 | pad_canvas[Aligned_point[1]:Aligned_point[1]+h, Aligned_point[0]:Aligned_point[0]+w] = m 58 | 59 | return pad_canvas[h:h+H, w:w+W] 60 | 61 | 62 | 63 | 64 | if __name__ == '__main__': 65 | np.random.seed(1945) 66 | canvas = Gassian((500,400), mean=250, var = 3) 67 | ##################################################################### 68 | ####################### args ######################## 69 | ##################################################################### 70 | sequence = ( 71 | [(1000,1000),0 ,1.0,(500,0),208+32], 72 | [(1225,1225),75 ,1.0,(-91,341),208+32], 73 | [(1450,1450),135 ,1.0,(0,1000),208+32], 74 | ) 75 | ##################################################################### 76 | ##################################################################### 77 | ##################################################################### 78 | for j in range(16): 79 | canvas = Gassian((1000,1000), mean=250, var = 3) 80 | for i in sequence: 81 | canvas = drawpatch(canvas=canvas, patch_size=i[0], angle=i[1], scale=i[2], location=i[3], grayscale=j*16) 82 | 83 | # cv2.imshow("drawpatch",canvas) 84 | # cv2.waitKey(0) 85 | cv2.imwrite("D:/ECCV2020/simu_patch/{}.jpg".format(j),canvas) 86 | 87 | print("done") 88 | 89 | 90 | -------------------------------------------------------------------------------- /tools.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | from matplotlib import pyplot as plt 4 | import math 5 | 6 | 7 | def get_start_end(mask): 8 | lines=[] 9 | Flag = True # no new interval 10 | for i in range(mask.shape[0]): 11 | if Flag == True: 12 | if mask[i]==1: 13 | if len(lines)>0 and i-lines[-1][1]<=1: ####### too close 14 | Flag = False 15 | continue 16 | else: 17 | lines.append([i,i]) 18 | Flag = False 19 | else: 20 | continue 21 | else: 22 | if mask[i]==1: 23 | continue 24 | else: 25 | lines[-1][1]=i 26 | Flag = True 27 | 28 | if Flag == False: 29 | lines[-1][1]=i 30 | 31 | return lines 32 | 33 | def rotateImg(img, angle): 34 | row, col = img.shape 35 | M = cv2.getRotationMatrix2D((row / 2 , col / 2 ), angle, 1) 36 | res = cv2.warpAffine(img, M, (row, col)) 37 | return res 38 | 39 | def get_directions(Num_choose, dirNum, img): 40 | height,width = img.shape 41 | img = np.float32(img)/255.0 42 | # print("Input height: %d, width: %d"%(height,width)) 43 | 44 | imX = np.append(np.absolute(img[:, 0 : width - 1] - img[:, 1 : width]), np.zeros((height, 1)), axis = 1) 45 | imY = np.append(np.absolute(img[0 : height - 1, :] - img[1 : height, :]), np.zeros((1, width)), axis = 0) 46 | 47 | img_gradient = 
np.sqrt((imX ** 2 + imY ** 2)) 48 | mask = (img_gradient-0.02)>0 49 | # cv2.imshow('mask',np.uint8(mask*255)) 50 | # img_gradient = imX + imY 51 | 52 | #filter kernel size 53 | tempsize = 0 54 | if height > width: 55 | tempsize = width 56 | else: 57 | tempsize = height 58 | tempsize /= 30 # according to the paper, the kernelsize is 1/30 of the side length 59 | 60 | halfKsize = int(tempsize / 2) 61 | if halfKsize < 1: 62 | halfKsize = 1 63 | if halfKsize > 9: 64 | halfKsize = 9 65 | kernalsize = halfKsize * 2 + 1 66 | # print("Kernel Size = %s" %(kernalsize)) 67 | 68 | ############################################################## 69 | ############### Here we generate the kernal ################## 70 | ############################################################## 71 | kernel = np.zeros((dirNum, kernalsize, kernalsize)) 72 | kernel [0,halfKsize,:] = 1.0 73 | for i in range(0,dirNum): 74 | kernel[i,:,:] = temp = rotateImg(kernel[0,:,:], i * 180 / dirNum) 75 | kernel[i,:,:] *= kernalsize/np.sum(kernel[i]) 76 | 77 | 78 | #filter gradient map in different directions 79 | print("Filtering Gradient Images in different directions ...") 80 | response = np.zeros((dirNum, height, width)) 81 | for i in range(dirNum): 82 | ker = kernel[i,:,:]; 83 | response[i, :, :] = cv2.filter2D(img_gradient, -1, ker) 84 | 85 | cv2.waitKey(0) 86 | 87 | #divide gradient map into different sub-map 88 | print("Caculating direction classification ...") 89 | direction = np.zeros(( height, width)) 90 | for x in range(width): 91 | for y in range(height): 92 | direction[y, x] = np.argmax(response[:,y,x]) 93 | #direction = direction*mask 94 | 95 | dirs = np.zeros(dirNum) 96 | for i in range (dirNum): 97 | dirs[i]=np.sum((direction-i)==0) 98 | sort_dirs = np.sort(dirs,axis=0) 99 | print(dirs,sort_dirs) 100 | angles = [] 101 | for i in range(Num_choose): 102 | for j in range (dirNum): 103 | if sort_dirs[-1-i]==dirs[j]: 104 | angles.append(j*180/dirNum) 105 | continue 106 | return angles 107 | 108 | if __name__ == '__main__': 109 | input_path = './input/lena.jpg' 110 | img = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE) 111 | angles = get_directions(4,12,img) 112 | -------------------------------------------------------------------------------- /tone.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | import math 4 | from LDR import LDR 5 | 6 | def transferTone(img): 7 | ho = np.zeros( 256 ) 8 | po = np.zeros( 256 ) 9 | for i in range(256 ): 10 | po[i] = np.sum(img == i) 11 | po = po / np.sum(po) 12 | 13 | #caculate original cumulative histogram 14 | ho[0] = po[0] 15 | for i in range(1,256): 16 | ho[i] = ho[i - 1] + po[i] 17 | 18 | #use parameter from paper. 
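    # Tone model used below: the target histogram is a weighted mixture of three
    # layer distributions,
    #     p(v) = (omiga1*p1(v) + omiga2*p2(v) + omiga3*p3(v)) / 100
    # where p1(v) = (1/9)*exp(-(255-v)/9) models the bright layer, p2 is uniform
    # on [105, 225] for the mid tones, and p3 is a Gaussian centred at 90
    # (sigma = 11) for the dark strokes. The image's cumulative histogram ho is
    # then matched against the cumulative histogram of p to remap every pixel.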
19 | omiga1 = 76 20 | omiga2 = 22 21 | omiga3 = 2 22 | p1 = lambda x : (1 / 9.0) * np.exp(-(255 - x) / 9.0) 23 | p2 = lambda x : (1.0 / (225 - 105)) * (x >= 105 and x <= 225) 24 | p3 = lambda x : (1.0 / np.sqrt(2 * math.pi *11) ) * np.exp(-((x - 90) ** 2) / float((2 * (11 **2)))) 25 | p = lambda x : (omiga1 * p1(x) + omiga2 * p2(x) + omiga3 * p3(x)) * 0.01 26 | 27 | prob = np.zeros(256) 28 | total = 0 29 | for i in range(256): 30 | prob[i] = p(i) 31 | total = total + prob[i] 32 | prob = prob / total 33 | 34 | #caculate new cumulative histogram 35 | histo = np.zeros(256) 36 | histo[0] = prob[0] 37 | for i in range(1, 256): 38 | histo[i] = histo[i - 1] + prob[i] 39 | 40 | Iadjusted = np.zeros((img.shape[0], img.shape[1])) 41 | for x in range(img.shape[0]): 42 | for y in range(img.shape[1]): 43 | histogram_value = ho[img[x,y]] 44 | i = np.argmin(np.absolute(histo - histogram_value)) 45 | Iadjusted[x, y] = i 46 | 47 | Iadjusted = np.uint8(Iadjusted) 48 | 49 | # cv2.imshow('adjust tone', Iadjusted) 50 | cv2.waitKey(0) 51 | J = Iadjusted 52 | J = cv2.blur(Iadjusted, (3, 3)) 53 | # cv2.imshow('blurred adjust tone', J) 54 | cv2.waitKey(1) 55 | return J 56 | 57 | 58 | 59 | def LDR_single(img,n,output_path): 60 | Interval = 250.0/n 61 | img = np.float32(img) 62 | img = np.uint8(img/Interval) 63 | img = np.clip(img,0,n-1) 64 | 65 | for i in range (n): 66 | mask = (img-i == 0) 67 | tone = np.uint8(i*Interval*mask + (1-mask)*255) 68 | cv2.imwrite(output_path + "/tone{}.png".format(i),tone) 69 | 70 | # cv2.imwrite("D:/ECCV2020/input/lilianjie/eeee.png",eeee) 71 | return 72 | 73 | def LDR_single_add(img,n,output_path): 74 | Interval = 250.0/n 75 | img = np.float32(img) 76 | img = np.uint8(img/Interval) 77 | img = np.clip(img,0,n-1) 78 | # img = np.float32(img) 79 | # eeee = img*0 80 | mask_add = img*0 81 | for i in range (n): 82 | mask = (img-i == 0) 83 | mask_add += mask 84 | cv2.imwrite(output_path +"/mask/mask{}.png".format(i),np.uint8(mask_add*255)) 85 | tone = np.uint8((i+0.5)*Interval*mask_add + (1-mask_add)*255) 86 | # cv2.imshow('tone{}'.format(i), tone) 87 | # cv2.waitKey(0) 88 | cv2.imwrite(output_path +"/mask/tone_cumulate{}.png".format(i),tone) 89 | 90 | # cv2.imwrite("D:/ECCV2020/input/lilianjie/eeee.png",eeee) 91 | return 92 | 93 | # def LDR_single_add(img,n1,n2,output_path): 94 | # Interval = 250.0/n1 95 | # img = np.float32(img) 96 | # img = np.uint8(img/Interval) 97 | # # img = np.clip(img,0,n1-1) 98 | # # img = np.float32(img) 99 | # # eeee = img*0 100 | 101 | # for i in range (n1): 102 | # if i \"Open" 11 | ] 12 | }, 13 | { 14 | "cell_type": "markdown", 15 | "metadata": { 16 | "id": "x-qhLXg0tCNj" 17 | }, 18 | "source": [ 19 | "# セットアップ" 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": null, 25 | "metadata": { 26 | "id": "lRmfEjSOz5XP" 27 | }, 28 | "outputs": [], 29 | "source": [ 30 | "# githubからコードをコピー\n", 31 | "!git clone https://github.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale.git\n", 32 | "%cd Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": { 38 | "id": "j7qPY1YDCfPD" 39 | }, 40 | "source": [ 41 | "# コード本体" 42 | ] 43 | }, 44 | { 45 | "cell_type": "code", 46 | "execution_count": null, 47 | "metadata": { 48 | "id": "V8GmCzCYFNX9" 49 | }, 50 | "outputs": [], 51 | "source": [ 52 | "# ------ スケッチプロセスを静止画に変換 ------\n", 53 | "from draw import *\n", 54 | "from PIL import Image\n", 55 | "import glob\n", 56 | "\n", 57 | "# 
setting\n", 58 | "file_name = 'kasumi.png' # inputフォルダーにある画像名の指定\n", 59 | "n = 10 # グレースケール量子化次数\n", 60 | "period = 5 # 線(ストローク)幅 \n", 61 | "\n", 62 | "# drawing\n", 63 | "draw(file_name, n, period)" 64 | ] 65 | }, 66 | { 67 | "cell_type": "code", 68 | "execution_count": null, 69 | "metadata": { 70 | "id": "-KqwKeC86Vse" 71 | }, 72 | "outputs": [], 73 | "source": [ 74 | "# ------ 静止画をmp4に変換 ------\n", 75 | "# 必要に応じて、output/***/process/%04d.jpg と output/***/out.mp4 の***部分を修正して下さい。\n", 76 | "\n", 77 | "!ffmpeg -r 30 -i output/kasumi/process/%04d.jpg -vcodec libx264 -pix_fmt yuv420p output/kasumi/out.mp4" 78 | ] 79 | }, 80 | { 81 | "cell_type": "code", 82 | "execution_count": null, 83 | "metadata": { 84 | "id": "DSCZdbO7I2Uw" 85 | }, 86 | "outputs": [], 87 | "source": [ 88 | "# ------ mp4の再生 ------\n", 89 | "from IPython.display import HTML\n", 90 | "from base64 import b64encode\n", 91 | "\n", 92 | "mp4 = open('./output/'+file_name[:-4]+'/out.mp4', 'rb').read()\n", 93 | "data_url = 'data:video/mp4;base64,' + b64encode(mp4).decode()\n", 94 | "HTML(f\"\"\"\n", 95 | "\"\"\")" 98 | ] 99 | }, 100 | { 101 | "cell_type": "code", 102 | "execution_count": null, 103 | "metadata": { 104 | "id": "eSE9kJCYO2Ba" 105 | }, 106 | "outputs": [], 107 | "source": [ 108 | "# ------ 変換前後の画像比較 ------\n", 109 | "import cv2\n", 110 | "import numpy as np\n", 111 | "import matplotlib.pyplot as plt\n", 112 | "\n", 113 | "img1 = cv2.imread('./output/'+file_name[:-4]+'/input.jpg')\n", 114 | "src1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)\n", 115 | "img2 = cv2.imread('./output/'+file_name[:-4]+'/result_RGB.jpg')\n", 116 | "src2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)\n", 117 | "\n", 118 | "fig = plt.figure(figsize=(15,15))\n", 119 | "plt.subplot(1,2,1)\n", 120 | "plt.imshow(src1), plt.title('input')\n", 121 | "plt.axis('off')\n", 122 | "plt.subplot(1,2,2)\n", 123 | "plt.imshow(src2), plt.title('generated')\n", 124 | "plt.axis('off')\n", 125 | "plt.show()" 126 | ] 127 | } 128 | ], 129 | "metadata": { 130 | "accelerator": "GPU", 131 | "colab": { 132 | "include_colab_link": true, 133 | "name": "sketch_demo", 134 | "provenance": [] 135 | }, 136 | "kernelspec": { 137 | "display_name": "Python 3", 138 | "language": "python", 139 | "name": "python3" 140 | }, 141 | "language_info": { 142 | "codemirror_mode": { 143 | "name": "ipython", 144 | "version": 3 145 | }, 146 | "file_extension": ".py", 147 | "mimetype": "text/x-python", 148 | "name": "python", 149 | "nbconvert_exporter": "python", 150 | "pygments_lexer": "ipython3", 151 | "version": "3.7.9" 152 | } 153 | }, 154 | "nbformat": 4, 155 | "nbformat_minor": 1 156 | } 157 | -------------------------------------------------------------------------------- /genStroke_origin.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | from matplotlib import pyplot as plt 4 | 5 | import math 6 | import sys 7 | import os 8 | 9 | # compute the kernal of different direction 10 | def rotateImg(img, angle): 11 | row, col = img.shape 12 | M = cv2.getRotationMatrix2D((row / 2 , col / 2 ), angle, 1) 13 | res = cv2.warpAffine(img, M, (row, col)) 14 | return res 15 | 16 | 17 | # compute and get the stroke of the raw img 18 | def genStroke(img, dirNum, verbose = False): 19 | height , width = img.shape[0], img.shape[1] 20 | img = np.float32(img) / 255.0 21 | print("Input height: %d, width: %d"%(height,width)) 22 | 23 | print("PreProcessing Images, denoising ...") 24 | img = cv2.medianBlur(img, 3) 25 | # if verbose == True: 26 | # 
cv2.imshow('blurred image', np.uint8(img*255)) 27 | # cv2.waitKey(0) 28 | 29 | print("Generating Gradient Images ...") 30 | imX = np.append(np.absolute(img[:, 0 : width - 1] - img[:, 1 : width]), np.zeros((height, 1)), axis = 1) 31 | imY = np.append(np.absolute(img[0 : height - 1, :] - img[1 : height, :]), np.zeros((1, width)), axis = 0) 32 | ############################################################## 33 | ##### Here we have many methods to generate gradient ##### 34 | ############################################################## 35 | img_gradient = np.sqrt((imX ** 2 + imY ** 2)) 36 | img_gradient = imX + imY 37 | if verbose == True: 38 | # cv2.imshow('gradient image', np.uint8(255-img_gradient*255)) 39 | cv2.imwrite('output/grad.jpg',np.uint8(255-img_gradient*255)) 40 | cv2.waitKey(0) 41 | 42 | 43 | 44 | #filter kernel size 45 | tempsize = 0 46 | if height > width: 47 | tempsize = width 48 | else: 49 | tempsize = height 50 | tempsize /= 30 51 | ##################################################################### 52 | # according to the paper, the kernelsize is 1/30 of the side length 53 | ##################################################################### 54 | halfKsize = int(tempsize / 2) 55 | if halfKsize < 1: 56 | halfKsize = 1 57 | if halfKsize > 9: 58 | halfKsize = 9 59 | kernalsize = halfKsize * 2 + 1 60 | print("Kernel Size = %s" %(kernalsize)) 61 | 62 | 63 | 64 | ############################################################## 65 | ############### Here we generate the kernal ################## 66 | ############################################################## 67 | kernel = np.zeros((dirNum, kernalsize, kernalsize)) 68 | kernel [0,halfKsize,:] = 1.0 69 | for i in range(0,dirNum): 70 | kernel[i,:,:] = temp = rotateImg(kernel[0,:,:], i * 180 / dirNum) 71 | kernel[i,:,:] *= kernalsize/np.sum(kernel[i]) 72 | # print(np.sum(kernel[i])) 73 | if verbose == True: 74 | # print(kernel[i]) 75 | title = 'line kernel %d'%i 76 | # cv2.imshow( title, np.uint8(temp*255)) 77 | cv2.waitKey(0) 78 | 79 | ##################################################### 80 | # cv2.filter2D() 其实做的是correlate而不是conv 81 | # correlate 相当于 kernal 旋转180° 的 conv 82 | # 但是我们的kernal是中心对称的,所以不影响 83 | ##################################################### 84 | 85 | #filter gradient map in different directions 86 | print("Filtering Gradient Images in different directions ...") 87 | response = np.zeros((dirNum, height, width)) 88 | for i in range(dirNum): 89 | ker = kernel[i,:,:]; 90 | response[i, :, :] = cv2.filter2D(img_gradient, -1, ker) 91 | if verbose == True: 92 | for i in range(dirNum): 93 | title = 'response %d'%i 94 | # cv2.imshow(title, np.uint8(response[i,:,:]*255)) 95 | cv2.waitKey(0) 96 | 97 | 98 | 99 | #divide gradient map into different sub-map 100 | print("Caculating Gradient classification ...") 101 | Cs = np.zeros((dirNum, height, width)) 102 | for x in range(width): 103 | for y in range(height): 104 | i = np.argmax(response[:,y,x]) 105 | Cs[i, y, x] = img_gradient[y,x] 106 | if verbose == True: 107 | for i in range(dirNum): 108 | title = 'max_response %d'%i 109 | # cv2.imshow(title, np.uint8(Cs[i,:,:]*255)) 110 | cv2.waitKey(0) 111 | 112 | 113 | 114 | #generate line shape 115 | print("Generating shape Lines ...") 116 | spn = np.zeros((dirNum, height, width)) 117 | for i in range(dirNum): 118 | ker = kernel[i,:,:]; 119 | spn[i, :, :] = cv2.filter2D(Cs[i], -1, ker) 120 | sp = np.sum(spn, axis = 0) 121 | 122 | sp = sp * np.power(img_gradient, 0.4) 123 | ################# 这里怎么理解看论文 ################# 124 | sp 
= (sp - np.min(sp)) / (np.max(sp) - np.min(sp)) 125 | S = 1 - sp 126 | # if verbose == True: 127 | # cv2.imshow('raw stroke', np.uint8(S*255)) 128 | # cv2.waitKey(0) 129 | 130 | return S 131 | 132 | if __name__ == '__main__': 133 | 134 | img_path = './input/1.jpg' 135 | img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE) 136 | stroke = genStroke(img, 18, False) 137 | #stroke = stroke*(np.exp(stroke)-np.exp(1)+1) 138 | stroke=np.power(stroke, 3) 139 | # stroke=(stroke - np.min(stroke)) / (np.max(stroke) - np.min(stroke)) # Deepen the edges 140 | stroke = np.uint8(stroke*255) 141 | 142 | cv2.imwrite('output/edge.jpg',stroke) 143 | # cv2.imshow('stroke', stroke) 144 | cv2.waitKey(0) 145 | 146 | -------------------------------------------------------------------------------- /statistics.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | from matplotlib import pyplot as plt 4 | 5 | 6 | if __name__ == '__main__': 7 | 8 | img_path = './Patch/014.png' 9 | img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE) 10 | 11 | mean = np.zeros(img.shape[1]) 12 | var = np.zeros(img.shape[1]) 13 | peak_mean = np.zeros(img.shape[1]) 14 | peak_var = np.zeros(img.shape[1]) 15 | valley_mean = np.zeros(img.shape[1]) 16 | valley_var = np.zeros(img.shape[1]) 17 | period = np.zeros(img.shape[1]) 18 | 19 | 20 | ###########-------Column-------############### 21 | for i in range (img.shape[1]): 22 | column = img[:,i] 23 | # plt.plot(column) 24 | # plt.show() 25 | 26 | mean[i] = np.mean(column) 27 | var[i] = np.var(column) 28 | print("Column:",i,"Mean:{:.1f}".format(mean[i]),"Var:{:.1f}".format(var[i])) 29 | 30 | print("Mean:{:.1f}".format(mean.mean()),"Var:{:.1f}".format(var.mean())) 31 | 32 | 33 | ##########---------period--------############### 34 | for i in range (img.shape[1]): 35 | column = img[:,i] 36 | peak_list = [] 37 | valley_list = [] 38 | period_list = [] 39 | 40 | peak_last = 255 41 | valley_last = 0 42 | for j in range(column.shape[0]): 43 | if j>0 and jcolumn[j-1] and column[j]>column[j+1]: 46 | if valley_last+ 25 < column[j]:##################################### 47 | peak_list.append(column[j]) 48 | peak_last = column[j] 49 | # valley 50 | else: 51 | if column[j]0 and jcolumn[j-1] and column[j]>column[j+1]: 108 | if valley_last+ 25 < column[j]:##################################### 109 | peak_list.append(column[j]) 110 | peak_last = column[j] 111 | # valley 112 | else: 113 | if column[j]= 0 and j+radios <= column.shape[0]-1: 119 | t = column[j-radios:j+radios+1] 120 | t = t[np.newaxis,:] 121 | periods.append(t) 122 | 123 | period = np.concatenate((periods),axis=0) 124 | period_stastic_mean = np.mean(period,axis=0) 125 | period_stastic_var = np.var(period,axis=0) 126 | plt.plot(period_stastic_mean) 127 | plt.plot(period_stastic_var) 128 | plt.show() 129 | print("period_stastic_mean\n",period_stastic_mean) 130 | print("period_stastic_var\n",period_stastic_var) 131 | 132 | 133 | 134 | 135 | 136 | 137 | 138 | 139 | 140 | 141 | 142 | 143 | 144 | 145 | 146 | 147 | 148 | 149 | -------------------------------------------------------------------------------- /simulate_.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | from matplotlib import pyplot as plt 4 | import math 5 | 6 | def Gassian(size, mean = 0, var = 0): 7 | norm = np.random.randn(*size) 8 | denorm = norm * np.sqrt(var) + mean 9 | return np.uint8(np.round(np.clip(denorm,0,255))) 10 | 11 | def 
Getline(distribution, length): 12 | period = distribution.shape[0] 13 | if length < 100: # if length is too short, lines are Aligned 14 | patch = Gassian((2*period, length), mean=250, var = 3) 15 | begin = 0 16 | end = 1 17 | for i in range(period): 18 | patch[i]=Gassian((1,length), mean=distribution[i,0], var=distribution[i,1]) 19 | 20 | else: # if length is't too short, lines is't Aligned 21 | patch = Gassian((2*period, length+4*period), mean=250, var = 3) 22 | 23 | begin = Gassian((1,1), mean=2.0*period, var=2*period) 24 | # egin = Gassian((1,1), mean=2.0*period, var=0) 25 | begin = np.uint8(np.round(np.clip(begin,0,4*period))) 26 | begin = int(begin[0,0]) 27 | end = Gassian((1,1), mean=2.0*period, var=2*period) 28 | # end = Gassian((1,1), mean=2.0*period, var=0) 29 | end = np.uint8(np.round(np.clip(end,1,4*period+1))) 30 | end = int(end[0,0]) 31 | 32 | real_length = length+4*period-end-begin 33 | for i in range(period): 34 | patch[i,begin:-end]=Gassian((1,real_length), mean=distribution[i,0], var=distribution[i,1]) 35 | 36 | patch = Attenuation(patch, period=period, distribution=distribution,begin=begin, end=end) 37 | patch = Distortion(patch, begin=begin, end=end) 38 | 39 | return np.uint8(np.round(np.clip(patch,0,255))) 40 | 41 | def Attenuation(patch, period, distribution, begin, end): 42 | order = int((patch.shape[1]-begin-end)/2)+1 43 | radius = (period-1)/2 44 | canvas = Gassian((patch.shape[0], patch.shape[1]), mean=250, var=3) 45 | patch = np.float32(patch) 46 | canvas = np.float32(canvas) 47 | for i in range(begin, patch.shape[1]-end+1): 48 | for j in range(period): 49 | a = np.abs((1.0-(i-begin)/order)**2)/3 50 | b = np.abs((1.0-j/radius)**2)*1 51 | patch[j,i] += (canvas[j,i]-patch[j,i])*np.sqrt(a+b)/1.5 52 | # patch[j,i] += 0.75*(canvas[j,i]-patch[j,i]) * (np.abs((1.0-(i-begin)/order)**2))**0.5 53 | 54 | return np.uint8(np.round(np.clip(patch,0,255))) 55 | 56 | 57 | def Distortion(patch,begin,end): 58 | height = patch.shape[0] 59 | length = patch.shape[1] 60 | patch = np.float32(patch) 61 | patch_copy = patch.copy() 62 | 63 | central = ((length-begin-end)/2+begin) + np.random.randn()*length/15 64 | # central = ((length-begin-end)/2+begin) 65 | if length>100: 66 | radius = length**2/(2*height) 67 | else: 68 | radius = 100**2/(2*height) 69 | for i in range(length): 70 | offset = ((central-i)**2)/(2*radius) 71 | int_offset = int(offset) 72 | decimal_offset = offset-int_offset 73 | for j in range(height): 74 | if j>int_offset: 75 | patch[j,i]=int(decimal_offset*patch_copy[j-1-int_offset,i]+(1-decimal_offset)*patch_copy[j-int_offset,i]) 76 | else: 77 | patch[j,i]= np.random.randn() * np.sqrt(3) + 250 78 | 79 | return np.uint8(np.round(np.clip(patch,0,255))) 80 | 81 | def GetParallel(distribution, height, length, period): 82 | if length<100: # constant length 83 | canvas = Gassian((height+2*period,length), mean=250, var = 3) 84 | else: # variable length 85 | canvas = Gassian((height+2*period,length+4*period), mean=250, var = 3) 86 | 87 | distensce = Gassian((1,int(height/period)+1), mean = period, var = period/5) 88 | # distensce = Gassian((1,int(height/period)+1), mean = period, var = 0) 89 | distensce = np.uint8(np.round(np.clip(distensce, period*0.8,period*1.25))) 90 | 91 | begin = 0 92 | for i in np.squeeze(distensce).tolist(): 93 | newline = Getline(distribution=distribution, length=length) 94 | h,w = newline.shape 95 | # cv2.imshow('line', newline) 96 | # cv2.waitKey(0) 97 | # cv2.imwrite("D:/ECCV2020/simu_patch/Line3.jpg",newline) 98 | 99 | if begin < height: 100 | m = 
np.minimum(canvas[begin:(begin + h),:], newline) 101 | canvas[begin:(begin + h),:] = m 102 | begin += i 103 | else: 104 | break 105 | 106 | return canvas[:height,:] 107 | 108 | def ChooseDistribution(period, Grayscale): 109 | distribution = np.zeros((period,2)) 110 | c = period/2.0 111 | difference = 250-Grayscale 112 | for i in range(distribution.shape[0]): 113 | distribution[i][0] = Grayscale + difference*abs(i-c)/c 114 | distribution[i][1] = np.cos((i-c)/c*(0.5*3.1415929))*difference+difference**2/300 115 | 116 | # distribution[i][0] -= np.cos((i-4)/4.0*(0.5*3.1415929))*difference 117 | # distribution[i][1] += np.cos((i-4)/4.0*(0.5*3.1415929))*difference 118 | 119 | return distribution 120 | 121 | 122 | if __name__ == '__main__': 123 | np.random.seed(1500) 124 | canvas = Gassian((400,300), mean=250, var = 3) 125 | 126 | # distribution = np.array([[245,31],[238,27],[218,48],[205,33],[214,38],[234,24],[240,42]]) 127 | ################################################### 128 | ################################################### 129 | ################################################### 130 | period = 8 131 | Grayscale = 160 132 | H,L = (100,150) 133 | ################################################### 134 | ################################################### 135 | ################################################### 136 | distribution = ChooseDistribution(period=period, Grayscale=Grayscale) 137 | print(distribution) 138 | patch = GetParallel(distribution=distribution, height=H, length=L, period=period) 139 | (h,w) = patch.shape 140 | # patch = GetOffsetParallel(offset=4, distribution=distribution, patch_size=(40,200), period_mean=distribution.shape[0], period_var=1) 141 | # (h,w) = patch.shape 142 | # canvas[400-int(h/2):400-int(h/2)+h,300-int(w/2):300-int(w/2)+w] = patch 143 | 144 | # cv2.imshow('Parallel', patch[:, 2*distribution.shape[0]:w-2*distribution.shape[0]]) 145 | # cv2.imshow('Parallel', patch) 146 | cv2.waitKey(0) 147 | cv2.imwrite("D:/ECCV2020/simu_patch/Parallel4.jpg",patch) 148 | print("done") 149 | -------------------------------------------------------------------------------- /ETF/edge_tangent_flow.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import math 3 | import numpy as np 4 | import torch 5 | import torch.nn as nn 6 | # np.set_printoptions(threshold=sys.maxsize) 7 | 8 | class ETF(): 9 | def __init__(self, input_path, output_path, dir_num, kernel_radius, iter_time, background_dir=None): 10 | 11 | img = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE) 12 | self.origin_shape = img.shape 13 | (h,w) = img.shape 14 | if h>w: 15 | img = cv2.resize(img,(int(512*w/h),512)) 16 | else: 17 | img = cv2.resize(img,(512,int(512*h/w))) 18 | self.shape = img.shape 19 | self.kernel_size = kernel_radius*2+1 20 | self.kernel_radius = kernel_radius 21 | self.iter_time = iter_time 22 | self.output_path = output_path 23 | self.dir_num = dir_num 24 | self.background_dir = background_dir 25 | 26 | img = cv2.copyMakeBorder(img, kernel_radius, kernel_radius, kernel_radius, kernel_radius, cv2.BORDER_REPLICATE) 27 | img_normal = cv2.normalize(img.astype("float32"), None, 0.0, 1.0, cv2.NORM_MINMAX) 28 | 29 | x_der = cv2.Sobel(img_normal, cv2.CV_32FC1, 1, 0, ksize=5) 30 | y_der = cv2.Sobel(img_normal, cv2.CV_32FC1, 0, 1, ksize=5) 31 | 32 | x_der = torch.from_numpy(x_der) + 1e-12 33 | y_der = torch.from_numpy(y_der) + 1e-12 34 | 35 | gradient_magnitude = torch.sqrt(x_der**2.0 + y_der**2.0) 36 | gradient_norm = 
gradient_magnitude/gradient_magnitude.max() 37 | 38 | x_norm = x_der/(gradient_magnitude) 39 | y_norm = y_der/(gradient_magnitude) 40 | 41 | # rotate 90 degrees counter-clockwise 42 | self.x_norm = -y_norm 43 | self.y_norm = x_norm 44 | 45 | self.gradient_norm = gradient_norm 46 | self.gradient_magnitude = gradient_magnitude 47 | 48 | def Ws(self): 49 | kernels = torch.ones((*self.shape,self.kernel_size,self.kernel_size)) 50 | # radius = central = (self.kernel_size-1)/2 51 | # for i in range(self.kernel_size): 52 | # for j in range(self.kernel_size): 53 | # if (i-central)**2+(i-central)**2 <= radius**2: 54 | # self.flow_field[x][y] 55 | return kernels 56 | 57 | def Wm(self): 58 | kernels = torch.ones((*self.shape,self.kernel_size,self.kernel_size)) 59 | 60 | eta = 1 # Specified in paper 61 | (h,w) = self.shape 62 | x = self.gradient_norm[self.kernel_radius:-self.kernel_radius,self.kernel_radius:-self.kernel_radius] 63 | for i in range(self.kernel_size): 64 | for j in range(self.kernel_size): 65 | y = self.gradient_norm[i:i+h,j:j+w] 66 | kernels[:,:,i,j] = (1/2) * (1 + torch.tanh(eta*(y - x))) 67 | return kernels 68 | 69 | def Wd(self): 70 | kernels = torch.ones((*self.shape,self.kernel_size,self.kernel_size)) 71 | 72 | (h,w) = self.shape 73 | X_x = self.x_norm[self.kernel_radius:-self.kernel_radius,self.kernel_radius:-self.kernel_radius] 74 | X_y = self.y_norm[self.kernel_radius:-self.kernel_radius,self.kernel_radius:-self.kernel_radius] 75 | 76 | for i in range(self.kernel_size): 77 | for j in range(self.kernel_size): 78 | Y_x = self.x_norm[i:i+h,j:j+w] 79 | Y_y = self.y_norm[i:i+h,j:j+w] 80 | kernels[:,:,i,j] = X_x*Y_x + X_y*Y_y 81 | 82 | return torch.abs(kernels), torch.sign(kernels) 83 | 84 | def forward(self): 85 | Ws = self.Ws() 86 | Wm = self.Wm() 87 | for iter_time in range(self.iter_time): 88 | Wd, phi = self.Wd() 89 | kernels = phi*Ws*Wm*Wd 90 | 91 | x_magnitude = (self.gradient_norm*self.x_norm).unsqueeze(0).unsqueeze(0) 92 | # print(x_magnitude.min()) 93 | y_magnitude = (self.gradient_norm*self.y_norm).unsqueeze(0).unsqueeze(0) 94 | 95 | x_patch = torch.nn.functional.unfold(x_magnitude, (self.kernel_size, self.kernel_size)) 96 | y_patch = torch.nn.functional.unfold(y_magnitude, (self.kernel_size, self.kernel_size)) 97 | 98 | x_patch = x_patch.view(self.kernel_size, self.kernel_size,*self.shape) 99 | y_patch = y_patch.view(self.kernel_size, self.kernel_size,*self.shape) 100 | 101 | x_patch = x_patch.permute(2,3,0,1) 102 | y_patch = y_patch.permute(2,3,0,1) 103 | 104 | 105 | x_result = (x_patch*kernels).sum(-1).sum(-1) 106 | y_result = (y_patch*kernels).sum(-1).sum(-1) 107 | 108 | magnitude = torch.sqrt(x_result**2.0 + y_result**2.0) 109 | x_norm = x_result/magnitude 110 | y_norm = y_result/magnitude 111 | 112 | self.x_norm[self.kernel_radius:-self.kernel_radius,self.kernel_radius:-self.kernel_radius] = x_norm 113 | self.y_norm[self.kernel_radius:-self.kernel_radius,self.kernel_radius:-self.kernel_radius] = y_norm 114 | 115 | self.save(x_norm,y_norm) 116 | return None 117 | 118 | def save(self,x,y): 119 | x = nn.functional.interpolate(x.unsqueeze(0).unsqueeze(0),[*self.origin_shape],mode='nearest') 120 | y = nn.functional.interpolate(y.unsqueeze(0).unsqueeze(0),[*self.origin_shape],mode='nearest') 121 | x = x.squeeze() 122 | y = y.squeeze() 123 | x[x==0] += 1e-12 124 | 125 | tan = -y/x 126 | angle = torch.atan(tan) 127 | angle = 180*angle/math.pi 128 | if self.background_dir!=None: 129 | t = 
self.gradient_magnitude[self.kernel_radius:-self.kernel_radius,self.kernel_radius:-self.kernel_radius] 130 | t = nn.functional.interpolate(t.unsqueeze(0).unsqueeze(0),[*self.origin_shape], mode='bilinear') 131 | t = t.squeeze() 132 | a = t.min() 133 | b= t.max() 134 | angle[t<0.4] = self.background_dir 135 | 136 | length = 180/self.dir_num 137 | for i in range(self.dir_num): 138 | if i==0: 139 | minimum = -90 140 | maximum = -90+length/2 141 | mask1 = 255*(((angle>minimum)+(angle==minimum))*(angleminimum)+(angle==minimum)) 145 | mask = mask1 + mask2 146 | cv2.imwrite(self.output_path+'/dir_mask{}.png'.format(i),np.uint8(mask.numpy())) 147 | else: 148 | minimum = -90+(i-1/2)*length 149 | maximum = minimum + length 150 | mask = 255*(((angle>minimum)+(angle==minimum))*(angle100: 67 | radius = length**2/(4*height) 68 | else: 69 | radius = length**2/(2*height) 70 | 71 | for i in range(length): 72 | offset = ((central-i)**2)/(2*radius) 73 | int_offset = int(offset) 74 | decimal_offset = offset-int_offset 75 | for j in range(height): 76 | if j>int_offset: 77 | patch[j,i]=int(decimal_offset*patch_copy[j-1-int_offset,i]+(1-decimal_offset)*patch_copy[j-int_offset,i]) 78 | else: 79 | patch[j,i]= np.random.randn() * np.sqrt(3) + 250 80 | 81 | patch_copy = patch.copy() 82 | if length>100: 83 | for i in range(length): 84 | offset = ((central-i)**2)/(2*radius) 85 | int_offset = int(offset) 86 | decimal_offset = offset-int_offset 87 | for j in range(patch.shape[0]): 88 | if j>int_offset: 89 | patch[j,i]=int(decimal_offset*patch_copy[j-1-int_offset,i]+(1-decimal_offset)*patch_copy[j-int_offset,i]) 90 | else: 91 | patch[j,i]= np.random.randn() * np.sqrt(3) + 250 92 | # else: 93 | # radius = length**2/(4*height) 94 | # for i in range(length): 95 | # offset = ((central-i)**2)/(2*radius) 96 | # int_offset = int(offset) 97 | # decimal_offset = offset-int_offset 98 | # for j in range(patch.shape[0]): 99 | # if j>int_offset: 100 | # patch[j,i]=int(decimal_offset*patch_copy[j-1-int_offset,i]+(1-decimal_offset)*patch_copy[j-int_offset,i]) 101 | # else: 102 | # patch[j,i]= np.random.randn() * np.sqrt(3) + 250 103 | 104 | 105 | return np.uint8(np.round(np.clip(patch,0,255))) 106 | 107 | def GetParallel(distribution, height, length, period): 108 | if length100: 67 | radius = length**2/(4*height) 68 | else: 69 | radius = length**2/(2*height) 70 | 71 | for i in range(length): 72 | offset = ((central-i)**2)/(2*radius) 73 | int_offset = int(offset) 74 | decimal_offset = offset-int_offset 75 | for j in range(height): 76 | if j>int_offset: 77 | patch[j,i]=int(decimal_offset*patch_copy[j-1-int_offset,i]+(1-decimal_offset)*patch_copy[j-int_offset,i]) 78 | else: 79 | patch[j,i]= np.random.randn() * np.sqrt(3) + 250 80 | 81 | patch_copy = patch.copy() 82 | if length>100: 83 | for i in range(length): 84 | offset = ((central-i)**2)/(2*radius) 85 | int_offset = int(offset) 86 | decimal_offset = offset-int_offset 87 | for j in range(patch.shape[0]): 88 | if j>int_offset: 89 | patch[j,i]=int(decimal_offset*patch_copy[j-1-int_offset,i]+(1-decimal_offset)*patch_copy[j-int_offset,i]) 90 | else: 91 | patch[j,i]= np.random.randn() * np.sqrt(3) + 250 92 | # else: 93 | # radius = length**2/(4*height) 94 | # for i in range(length): 95 | # offset = ((central-i)**2)/(2*radius) 96 | # int_offset = int(offset) 97 | # decimal_offset = offset-int_offset 98 | # for j in range(patch.shape[0]): 99 | # if j>int_offset: 100 | # patch[j,i]=int(decimal_offset*patch_copy[j-1-int_offset,i]+(1-decimal_offset)*patch_copy[j-int_offset,i]) 101 | # else: 102 | 
# patch[j,i]= np.random.randn() * np.sqrt(3) + 250 103 | 104 | 105 | return np.uint8(np.round(np.clip(patch,0,255))) 106 | 107 | def GetParallel(distribution, height, length, period): 108 | if length\"Open" 37 | ] 38 | }, 39 | { 40 | "cell_type": "markdown", 41 | "metadata": { 42 | "id": "x-qhLXg0tCNj" 43 | }, 44 | "source": [ 45 | "# セットアップ" 46 | ] 47 | }, 48 | { 49 | "cell_type": "code", 50 | "metadata": { 51 | "id": "lRmfEjSOz5XP" 52 | }, 53 | "source": [ 54 | "# githubからコードをコピー\n", 55 | "!git clone https://github.com/cedro3/Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale.git\n", 56 | "%cd Sketch-Generation-with-Drawing-Process-Guided-by-Vector-Flow-and-Grayscale" 57 | ], 58 | "execution_count": null, 59 | "outputs": [] 60 | }, 61 | { 62 | "cell_type": "markdown", 63 | "metadata": { 64 | "id": "j7qPY1YDCfPD" 65 | }, 66 | "source": [ 67 | "## ビデオから静止画を切り出し\n", 68 | "・video_file = でビデオを指定" 69 | ] 70 | }, 71 | { 72 | "cell_type": "code", 73 | "metadata": { 74 | "id": "L8sqM9xtG3Sk" 75 | }, 76 | "source": [ 77 | "import os\n", 78 | "import shutil\n", 79 | "import cv2\n", 80 | "\n", 81 | "# imagesフォルダーリセット\n", 82 | "if os.path.isdir('input'):\n", 83 | " shutil.rmtree('input') \n", 84 | "os.makedirs('input', exist_ok=True)\n", 85 | "\n", 86 | "# ビデオから静止画を切り出す関数 \n", 87 | "def video_2_images(video_file= './movie.mp4', \n", 88 | " image_dir='./input/', \n", 89 | " image_file='%s.png'):\n", 90 | " \n", 91 | " # Initial setting\n", 92 | " i = 0\n", 93 | " interval = 3\n", 94 | " length = 300 # リミッター\n", 95 | " \n", 96 | " cap = cv2.VideoCapture(video_file)\n", 97 | " while(cap.isOpened()):\n", 98 | " flag, frame = cap.read() \n", 99 | " if flag == False: \n", 100 | " break\n", 101 | " if i == length*interval:\n", 102 | " break\n", 103 | " if i % interval == 0: \n", 104 | " cv2.imwrite(image_dir+image_file % str(int(i/interval)).zfill(6), frame)\n", 105 | " i += 1 \n", 106 | " cap.release() \n", 107 | "\n", 108 | "video_2_images()" 109 | ], 110 | "execution_count": null, 111 | "outputs": [] 112 | }, 113 | { 114 | "cell_type": "markdown", 115 | "metadata": { 116 | "id": "LbtRUSbPHshh" 117 | }, 118 | "source": [ 119 | "## 静止画をスケッチ画に変換" 120 | ] 121 | }, 122 | { 123 | "cell_type": "code", 124 | "metadata": { 125 | "id": "YvIhbwDPH2Rt" 126 | }, 127 | "source": [ 128 | "from draw import *\n", 129 | "from PIL import Image\n", 130 | "import os\n", 131 | "\n", 132 | "# output フォルダーリセット\n", 133 | "if os.path.isdir('output'):\n", 134 | " shutil.rmtree('output') \n", 135 | "os.makedirs('output', exist_ok=True)\n", 136 | "\n", 137 | "files = os.listdir('./input')\n", 138 | "files.sort()\n", 139 | "num = len(files)\n", 140 | "print('total = ', num)\n", 141 | "\n", 142 | "n=10\n", 143 | "period=5\n", 144 | "for file in files:\n", 145 | " file_name = file\n", 146 | " draw(file_name, n, period)" 147 | ], 148 | "execution_count": null, 149 | "outputs": [] 150 | }, 151 | { 152 | "cell_type": "markdown", 153 | "metadata": { 154 | "id": "AFwNLqPwJCUf" 155 | }, 156 | "source": [ 157 | "## モノクロ動画の作成" 158 | ] 159 | }, 160 | { 161 | "cell_type": "code", 162 | "metadata": { 163 | "id": "SCaGT9-1NP-S" 164 | }, 165 | "source": [ 166 | "# スケッチ画を集約(モノクロ)\n", 167 | "import os\n", 168 | "import shutil\n", 169 | "\n", 170 | "# final フォルダーリセット\n", 171 | "if os.path.isdir('final'):\n", 172 | " shutil.rmtree('final') \n", 173 | "os.makedirs('final', exist_ok=True)\n", 174 | "\n", 175 | "for i in range(num):\n", 176 | " files = os.listdir('./output/'+str(i).zfill(6)+'/process')\n", 177 | " files.sort()\n", 
178 | " file = files[-1]\n", 179 | " print(str(i).zfill(6), file)\n", 180 | " shutil.copy('./output/'+str(i).zfill(6)+'/process/'+file, './final/'+str(i).zfill(6)+'.jpg')" 181 | ], 182 | "execution_count": null, 183 | "outputs": [] 184 | }, 185 | { 186 | "cell_type": "code", 187 | "metadata": { 188 | "id": "tDvRo0QjM7F7" 189 | }, 190 | "source": [ 191 | "# スケッチ画から動画を作成(モノクロ)\n", 192 | "# output.mp4 削除\n", 193 | "if os.path.exists('./output.mp4'):\n", 194 | " os.remove('./output.mp4')\n", 195 | "\n", 196 | "!ffmpeg -r 10 -i final/%06d.jpg -vcodec libx264 -pix_fmt yuv420p output.mp4" 197 | ], 198 | "execution_count": null, 199 | "outputs": [] 200 | }, 201 | { 202 | "cell_type": "code", 203 | "metadata": { 204 | "id": "M2VqTplVIvQI" 205 | }, 206 | "source": [ 207 | "# mp4動画の再生再生(モノクロ)\n", 208 | "from IPython.display import HTML\n", 209 | "from base64 import b64encode\n", 210 | "\n", 211 | "mp4 = open('./output.mp4', 'rb').read()\n", 212 | "data_url = 'data:video/mp4;base64,' + b64encode(mp4).decode()\n", 213 | "HTML(f\"\"\"\n", 214 | "\"\"\")" 217 | ], 218 | "execution_count": null, 219 | "outputs": [] 220 | }, 221 | { 222 | "cell_type": "markdown", 223 | "metadata": { 224 | "id": "XUIDs-k1I8gn" 225 | }, 226 | "source": [ 227 | "## カラー動画の作成" 228 | ] 229 | }, 230 | { 231 | "cell_type": "code", 232 | "metadata": { 233 | "id": "0_rpq_AdHFuA" 234 | }, 235 | "source": [ 236 | "# スケッチ画を集約(カラー)\n", 237 | "import os\n", 238 | "import shutil\n", 239 | "\n", 240 | "# final フォルダーリセット\n", 241 | "if os.path.isdir('finalc'):\n", 242 | " shutil.rmtree('finalc') \n", 243 | "os.makedirs('finalc', exist_ok=True)\n", 244 | "\n", 245 | "for i in range(num):\n", 246 | " shutil.copy('./output/'+str(i).zfill(6)+'/result_RGB.jpg', './finalc/'+str(i).zfill(6)+'.jpg')" 247 | ], 248 | "execution_count": null, 249 | "outputs": [] 250 | }, 251 | { 252 | "cell_type": "code", 253 | "metadata": { 254 | "id": "6eZcxQADGwv_" 255 | }, 256 | "source": [ 257 | "# スケッチ画から動画を作成(カラー)\n", 258 | "# output.mp4 削除\n", 259 | "if os.path.exists('./outputc.mp4'):\n", 260 | " os.remove('./outputc.mp4')\n", 261 | "\n", 262 | "!ffmpeg -r 10 -i finalc/%06d.jpg -vcodec libx264 -pix_fmt yuv420p outputc.mp4" 263 | ], 264 | "execution_count": null, 265 | "outputs": [] 266 | }, 267 | { 268 | "cell_type": "code", 269 | "metadata": { 270 | "id": "qgt0jPE9I5M3" 271 | }, 272 | "source": [ 273 | "# mp4動画の再生再生(カラー)\n", 274 | "from IPython.display import HTML\n", 275 | "from base64 import b64encode\n", 276 | "\n", 277 | "mp4 = open('./outputc.mp4', 'rb').read()\n", 278 | "data_url = 'data:video/mp4;base64,' + b64encode(mp4).decode()\n", 279 | "HTML(f\"\"\"\n", 280 | "\"\"\")" 283 | ], 284 | "execution_count": null, 285 | "outputs": [] 286 | } 287 | ] 288 | } -------------------------------------------------------------------------------- /process _Orthogonal.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | import torch 4 | import torch.nn as nn 5 | import torch.nn.functional as F 6 | import random 7 | import time 8 | import os 9 | 10 | from LDR import * 11 | from tone import * 12 | from genStroke_origin import * 13 | from drawpatch import rotate 14 | from tools import * 15 | from ETF.edge_tangent_flow import * 16 | from deblue import deblue 17 | from quicksort import * 18 | 19 | # args 20 | input_path = './input/1.jpg' 21 | output_path = './output' 22 | 23 | np.random.seed(0) 24 | n = 12 # Quantization order 25 | period = 5 # line period 26 | direction = 10 # num of dir 27 | 
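# Overview of the main loop below: the ETF (edge tangent flow) filter produces
# `direction` orientation masks (output/mask/dir_mask*.png) that assign each
# pixel an orientation bin; for every bin the image is rotated so its strokes
# can be drawn as horizontal lines, optionally CLAHE-equalized, quantized into
# `n` tone levels (LDR), and hatched tone by tone with simulated strokes of
# width `period`. Every `Freq` strokes an intermediate frame goes to
# output/process/, and the final drawing is multiplied by an edge map raised to
# the power `deepen`, then recombined with the original colors via YUV.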
Freq = 100 # save every(freq) lines drawn 28 | deepen = 2 # for edge 29 | transTone = False # for Tone 30 | kernel_radius = 3 # for ETF 31 | iter_time = 15 # for ETF 32 | background_dir = None # for ETF 33 | CLAHE = True 34 | edge_CLAHE = False 35 | draw_new = True 36 | 37 | 38 | if __name__ == '__main__': 39 | ####### ETF ####### 40 | time_start=time.time() 41 | ETF_filter = ETF(input_path=input_path, output_path=output_path+'/mask',\ 42 | dir_num=direction, kernel_radius=kernel_radius, iter_time=iter_time, background_dir=background_dir) 43 | ETF_filter.forward() 44 | print('ETF done') 45 | 46 | 47 | input_img = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE) 48 | (h0,w0) = input_img.shape 49 | cv2.imwrite(output_path + "/input_gray.png", input_img) 50 | # if h0>w0: 51 | # input_img = cv2.resize(input_img,(int(256*w0/h0),256)) 52 | # else: 53 | # input_img = cv2.resize(input_img,(256,int(256*h0/w0))) 54 | # (h0,w0) = input_img.shape 55 | 56 | if transTone == True: 57 | input_img = transferTone(input_img) 58 | 59 | now_ = np.uint8(np.ones((h0,w0)))*255 60 | step = 0 61 | if draw_new==True: 62 | time_start=time.time() 63 | for dirs in range(direction): 64 | 65 | angle = -90+dirs*180/direction 66 | if angle<0: 67 | orthogonal = angle +90 68 | else: 69 | orthogonal = angle -90 70 | img,_ = rotate(input_img, -orthogonal) 71 | 72 | ############ Adjust Histogram ############ 73 | if CLAHE==True: 74 | img = HistogramEqualization(img) 75 | # cv2.imshow('HistogramEqualization', res) 76 | # cv2.waitKey(0) 77 | # cv2.imwrite(output_path + "/HistogramEqualization.png", res) 78 | print('HistogramEqualization done') 79 | 80 | ############ Quantization ############ 81 | ldr = LDR(img, n) 82 | # cv2.imshow('Quantization', ldr) 83 | # cv2.waitKey(0) 84 | cv2.imwrite(output_path + "/Quantization.png", ldr) 85 | 86 | # LDR_single(ldr,n,output_path) # debug 87 | ############ Cumulate ############ 88 | LDR_single_add(ldr,n,output_path) 89 | print('Quantization done') 90 | 91 | 92 | # get tone 93 | (h,w) = ldr.shape 94 | canvas = Gassian((h+4*period,w+4*period), mean=250, var = 3) 95 | 96 | 97 | for j in range(n): 98 | print('tone:',j) 99 | distribution = ChooseDistribution(period=period,Grayscale=j*256/n) 100 | mask = cv2.imread(output_path + '/mask/mask{}.png'.format(j),cv2.IMREAD_GRAYSCALE)/255 101 | dir_mask = cv2.imread(output_path + '/mask/dir_mask{}.png'.format(dirs),cv2.IMREAD_GRAYSCALE) 102 | dir_mask,_ = rotate(dir_mask, -orthogonal, pad_color=0) 103 | dir_mask[dir_mask<128]=0 104 | dir_mask[dir_mask>127]=1 105 | 106 | distensce = Gassian((1,int(h/period)+4), mean = period, var = 1) 107 | distensce = np.uint8(np.round(np.clip(distensce, period*0.8, period*1.25))) 108 | raw = -int(period/2) 109 | 110 | for i in np.squeeze(distensce).tolist(): 111 | if raw < h: 112 | y = raw + 2*period 113 | raw += i 114 | for interval in get_start_end(mask[y-2*period]*dir_mask[y-2*period]): 115 | 116 | begin = interval[0] 117 | end = interval[1] 118 | 119 | length = end - begin 120 | 121 | begin -= 1*period 122 | end += 1*period 123 | length = end - begin 124 | 125 | newline = Getline(distribution=distribution, length=length) 126 | if length<1000 or begin == -2*period or end == w-1+2*period: 127 | temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] 128 | m = np.minimum(temp, newline[:,:temp.shape[1]]) 129 | canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] = m 130 | else: 131 | temp = 
canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] 132 | m = np.minimum(temp, newline) 133 | canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] = m 134 | 135 | 136 | if step % Freq == 0: 137 | if step > Freq: # not first time 138 | before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 139 | now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], orthogonal) 140 | (H,W) = now.shape 141 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 142 | now = np.minimum(before,now) 143 | else: # first time to save 144 | now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], orthogonal) 145 | (H,W) = now.shape 146 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 147 | 148 | cv2.imwrite(output_path + "/process/{0:04d}.png".format(int(step/Freq)), now) 149 | # cv2.imshow('step', canvas) 150 | # cv2.waitKey(0) 151 | 152 | now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], orthogonal) 153 | (H,W) = now.shape 154 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 155 | now = np.minimum(now,now_) 156 | step += 1 157 | # cv2.imshow('step', now_) 158 | cv2.waitKey(1) 159 | now_ = now 160 | 161 | now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], orthogonal) 162 | (H,W) = now.shape 163 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 164 | cv2.imwrite(output_path + "/pro/{}_{}.png".format(dirs,j), now) 165 | 166 | now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], orthogonal) 167 | (H,W) = now.shape 168 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 169 | cv2.imwrite(output_path + "/{:.1f}.png".format(angle), now) 170 | cv2.destroyAllWindows() 171 | 172 | time_end=time.time() 173 | print('total time',time_end-time_start) 174 | print('stoke number',step) 175 | cv2.imwrite(output_path + "/draw.png", now_) 176 | # cv2.imshow('draw', now_) 177 | cv2.waitKey(0) 178 | 179 | 180 | 181 | ############ gen edge ########### 182 | 183 | # device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 184 | # pc = PencilDraw(device=device, gammaS=1) 185 | # pc(input_path) 186 | # edge = cv2.imread('output/Edge.png', cv2.IMREAD_GRAYSCALE) 187 | 188 | edge = genStroke(input_img,18) 189 | edge = np.power(edge, deepen) 190 | edge = np.uint8(edge*255) 191 | if edge_CLAHE==True: 192 | edge = HistogramEqualization(edge) 193 | 194 | cv2.imwrite(output_path + '/edge.png', edge) 195 | # cv2.imshow("edge",edge) 196 | cv2.waitKey(0) 197 | 198 | ############# merge ############# 199 | edge = np.float32(edge) 200 | now_ = cv2.imread(output_path + "/draw.png", cv2.IMREAD_GRAYSCALE) 201 | result = res_cross= np.float32(now_) 202 | 203 | result[1:,1:] = np.uint8(edge[:-1,:-1] * res_cross[1:,1:]/255) 204 | result[0] = np.uint8(edge[0] * res_cross[0]/255) 205 | result[:,0] = np.uint8(edge[:,0] * res_cross[:,0]/255) 206 | result = edge*res_cross/255 207 | result=np.uint8(result) 208 | 209 | cv2.imwrite(output_path + '/result.png', result) 210 | # cv2.imshow("result",result) 211 | cv2.waitKey(0) 212 | 213 | # deblue 214 | deblue(result, output_path) 215 | 216 | # RGB 217 | img_rgb_original = cv2.imread(input_path, cv2.IMREAD_COLOR) 218 | cv2.imwrite(output_path + "/input.png", img_rgb_original) 219 | img_yuv = cv2.cvtColor(img_rgb_original, cv2.COLOR_BGR2YUV) 220 | img_yuv[:,:,0] = result 221 | img_rgb = 
cv2.cvtColor(img_yuv, cv2.COLOR_YUV2BGR) 222 | 223 | # cv2.imshow("RGB",img_rgb) 224 | cv2.waitKey(0) 225 | cv2.imwrite(output_path + "/result_RGB.png",img_rgb) 226 | 227 | # extract drawing process 228 | extract_pro(input_path=input_path, output_path=output_path+'/step', Quantization=n, direction=direction) 229 | -------------------------------------------------------------------------------- /process.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | import math 4 | import torch 5 | import torch.nn as nn 6 | import torchvision.transforms as transforms 7 | import torch.nn.functional as F 8 | from PIL import Image , ImageFilter 9 | import matplotlib.pyplot as plt 10 | import time 11 | import sys 12 | import os 13 | 14 | from LDR import * 15 | from tone import * 16 | from genStroke_origin import * 17 | 18 | from drawpatch import rotate 19 | from tools import * 20 | from ETF.edge_tangent_flow import * 21 | from Edge.pencil import * 22 | from deblue import deblue 23 | from extract import extract_pro 24 | 25 | # args 26 | input_path = './input/jm.png' 27 | output_path = './output' 28 | 29 | np.random.seed(1) 30 | n = 7 # Quantization order 31 | period = 4 # line period 32 | direction = 10 # num of dir 33 | Freq = 100 # save every(freq) lines drawn 34 | deepen = 1 # for edge 35 | transTone = False # for Tone 36 | kernel_radius = 3 # for ETF 37 | iter_time = 15 # for ETF 38 | background_dir = None # for ETF 39 | CLAHE = True 40 | edge_CLAHE = True 41 | draw_new = True 42 | 43 | 44 | 45 | if __name__ == '__main__': 46 | ####### ETF ####### 47 | time_start=time.time() 48 | ETF_filter = ETF(input_path=input_path, output_path=output_path+'/mask',\ 49 | dir_num=direction, kernel_radius=kernel_radius, iter_time=iter_time, background_dir=background_dir) 50 | ETF_filter.forward() 51 | print('ETF done') 52 | 53 | 54 | input_img = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE) 55 | (h0,w0) = input_img.shape 56 | cv2.imwrite(output_path + "/input_gray.png", input_img) 57 | # if h0>w0: 58 | # input_img = cv2.resize(input_img,(int(256*w0/h0),256)) 59 | # else: 60 | # input_img = cv2.resize(input_img,(256,int(256*h0/w0))) 61 | # (h0,w0) = input_img.shape 62 | 63 | if transTone == True: 64 | input_img = transferTone(input_img) 65 | 66 | now_ = np.uint8(np.ones((h0,w0)))*255 67 | step = 0 68 | if draw_new==True: 69 | time_start=time.time() 70 | for dirs in range(direction): 71 | 72 | angle = -90+dirs*180/direction 73 | img,_ = rotate(input_img, -angle) 74 | 75 | ############ Adjust Histogram ############ 76 | if CLAHE==True: 77 | img = HistogramEqualization(img) 78 | # cv2.imshow('HistogramEqualization', res) 79 | # cv2.waitKey(0) 80 | # cv2.imwrite(output_path + "/HistogramEqualization.png", res) 81 | print('HistogramEqualization done') 82 | 83 | ############ Quantization ############ 84 | ldr = LDR(img, n) 85 | # cv2.imshow('Quantization', ldr) 86 | # cv2.waitKey(0) 87 | cv2.imwrite(output_path + "/Quantization.png", ldr) 88 | 89 | # LDR_single(ldr,n,output_path) # debug 90 | ############ Cumulate ############ 91 | LDR_single_add(ldr,n,output_path) 92 | print('Quantization done') 93 | 94 | 95 | # get tone 96 | (h,w) = ldr.shape 97 | canvas = Gassian((h+4*period,w+4*period), mean=250, var = 3) 98 | 99 | 100 | for j in range(n): 101 | print('tone:',j) 102 | distribution = ChooseDistribution(period=period,Grayscale=j*256/n) 103 | mask = cv2.imread(output_path + '/mask/mask{}.png'.format(j),cv2.IMREAD_GRAYSCALE)/255 104 | dir_mask = 
cv2.imread(output_path + '/mask/dir_mask{}.png'.format(dirs),cv2.IMREAD_GRAYSCALE) 105 | # if angle==0: 106 | # dir_mask[::] = 255 107 | dir_mask,_ = rotate(dir_mask, -angle, pad_color=0) 108 | dir_mask[dir_mask<128]=0 109 | dir_mask[dir_mask>127]=1 110 | 111 | distensce = Gassian((1,int(h/period)+4), mean = period, var = 1) 112 | distensce = np.uint8(np.round(np.clip(distensce, period*0.8, period*1.25))) 113 | raw = -int(period/2) 114 | 115 | for i in np.squeeze(distensce).tolist(): 116 | if raw < h: 117 | y = raw + 2*period 118 | raw += i 119 | for interval in get_start_end(mask[y-2*period]*dir_mask[y-2*period]): 120 | 121 | begin = interval[0] 122 | end = interval[1] 123 | 124 | length = end - begin 125 | 126 | begin -= 2*period 127 | end += 2*period 128 | length = end - begin 129 | 130 | newline = Getline(distribution=distribution, length=length) 131 | if length<1000 or begin == -2*period or end == w-1+2*period: 132 | temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] 133 | m = np.minimum(temp, newline[:,:temp.shape[1]]) 134 | canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] = m 135 | else: 136 | temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] 137 | m = np.minimum(temp, newline) 138 | canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] = m 139 | 140 | 141 | if step % Freq == 0: 142 | if step > Freq: # not first time 143 | before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 144 | now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 145 | (H,W) = now.shape 146 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 147 | now = np.minimum(before,now) 148 | else: # first time to save 149 | now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 150 | (H,W) = now.shape 151 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 152 | 153 | cv2.imwrite(output_path + "/process/{0:04d}.png".format(int(step/Freq)), now) 154 | # cv2.imshow('step', canvas) 155 | # cv2.waitKey(0) 156 | 157 | now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 158 | (H,W) = now.shape 159 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 160 | now = np.minimum(now,now_) 161 | step += 1 162 | # cv2.imshow('step', now_) 163 | cv2.waitKey(1) 164 | now_ = now 165 | 166 | now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 167 | (H,W) = now.shape 168 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 169 | cv2.imwrite(output_path + "/pro/{}_{}.png".format(dirs,j), now) 170 | 171 | now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 172 | (H,W) = now.shape 173 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 174 | cv2.imwrite(output_path + "/{:.1f}.png".format(angle), now) 175 | cv2.destroyAllWindows() 176 | 177 | time_end=time.time() 178 | print('total time',time_end-time_start) 179 | print('stoke number',step) 180 | cv2.imwrite(output_path + "/draw.png", now_) 181 | # cv2.imshow('draw', now_) 182 | cv2.waitKey(0) 183 | 184 | 185 | 186 | ############ gen edge ########### 187 | 188 | # device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 189 | # pc = PencilDraw(device=device, gammaS=1) 190 | # pc(input_path) 191 | # edge = cv2.imread('output/Edge.png', cv2.IMREAD_GRAYSCALE) 192 | 193 | edge = 
genStroke(input_img,18) 194 | edge = np.power(edge, deepen) 195 | edge = np.uint8(edge*255) 196 | if edge_CLAHE==True: 197 | edge = HistogramEqualization(edge) 198 | 199 | cv2.imwrite(output_path + '/edge.png', edge) 200 | # cv2.imshow("edge",edge) 201 | cv2.waitKey(0) 202 | 203 | ############# merge ############# 204 | edge = np.float32(edge) 205 | now_ = cv2.imread(output_path + "/draw.png", cv2.IMREAD_GRAYSCALE) 206 | result = res_cross= np.float32(now_) 207 | 208 | result[1:,1:] = np.uint8(edge[:-1,:-1] * res_cross[1:,1:]/255) 209 | result[0] = np.uint8(edge[0] * res_cross[0]/255) 210 | result[:,0] = np.uint8(edge[:,0] * res_cross[:,0]/255) 211 | result = edge*res_cross/255 212 | result=np.uint8(result) 213 | 214 | cv2.imwrite(output_path + '/result.png', result) 215 | # cv2.imshow("result",result) 216 | cv2.waitKey(0) 217 | 218 | # deblue 219 | deblue(result, output_path) 220 | 221 | # RGB 222 | img_rgb_original = cv2.imread(input_path, cv2.IMREAD_COLOR) 223 | cv2.imwrite(output_path + "/input.png", img_rgb_original) 224 | img_yuv = cv2.cvtColor(img_rgb_original, cv2.COLOR_BGR2YUV) 225 | img_yuv[:,:,0] = result 226 | img_rgb = cv2.cvtColor(img_yuv, cv2.COLOR_YUV2BGR) 227 | 228 | # cv2.imshow("RGB",img_rgb) 229 | cv2.waitKey(0) 230 | cv2.imwrite(output_path + "/result_RGB.png",img_rgb) 231 | 232 | # extract drawing process 233 | extract_pro(input_path=input_path, output_path=output_path+'/step', Quantization=n, direction=direction) 234 | -------------------------------------------------------------------------------- /process_fix_dir.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | import torch 4 | import torch.nn as nn 5 | import torch.nn.functional as F 6 | import random 7 | import time 8 | import os 9 | 10 | from LDR import * 11 | from tone import * 12 | from genStroke_origin import * 13 | from drawpatch import rotate 14 | from tools import * 15 | from ETF.edge_tangent_flow import * 16 | from deblue import deblue 17 | from quicksort import * 18 | 19 | # args 20 | input_path = './input/3.jpg' 21 | output_path = './output' 22 | 23 | np.random.seed(1) 24 | n = 16 # Quantization order 25 | period = 5 # line period 26 | direction = 10 # num of dir 27 | Freq = 100 # save every(freq) lines drawn 28 | deepen = 1 # for edge 29 | transTone = False # for Tone 30 | kernel_radius = 3 # for ETF 31 | iter_time = 15 # for ETF 32 | background_dir = None # for ETF 33 | CLAHE = True 34 | edge_CLAHE = True 35 | draw_new = True 36 | angle = 45 37 | 38 | 39 | if __name__ == '__main__': 40 | ####### ETF ####### 41 | time_start=time.time() 42 | # ETF_filter = ETF(input_path=input_path, output_path=output_path+'/mask',\ 43 | # dir_num=direction, kernel_radius=kernel_radius, iter_time=iter_time, background_dir=background_dir) 44 | # ETF_filter.forward() 45 | # print('ETF done') 46 | 47 | 48 | input_img = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE) 49 | (h0,w0) = input_img.shape 50 | cv2.imwrite(output_path + "/input_gray.png", input_img) 51 | # if h0>w0: 52 | # input_img = cv2.resize(input_img,(int(256*w0/h0),256)) 53 | # else: 54 | # input_img = cv2.resize(input_img,(256,int(256*h0/w0))) 55 | # (h0,w0) = input_img.shape 56 | 57 | if transTone == True: 58 | input_img = transferTone(input_img) 59 | 60 | now_ = np.uint8(np.ones((h0,w0)))*255 61 | step = 0 62 | if draw_new==True: 63 | time_start=time.time() 64 | stroke_sequence=[] 65 | stroke_temp={'angle':None, 'grayscale':None, 'row':None, 'begin':None, 'end':None} 66 | # for dirs 
in range(direction): 67 | # angle = -90+dirs*180/direction 68 | print('angle:', angle) 69 | stroke_temp['angle'] = angle 70 | img,_ = rotate(input_img, -angle) 71 | 72 | ############ Adjust Histogram ############ 73 | if CLAHE==True: 74 | img = HistogramEqualization(img) 75 | # cv2.imshow('HistogramEqualization', res) 76 | # cv2.waitKey(0) 77 | cv2.imwrite(output_path + "/HistogramEqualization.png", img) 78 | print('HistogramEqualization done') 79 | 80 | ############ Quantization ############ 81 | ldr = LDR(img, n) 82 | # cv2.imshow('Quantization', ldr) 83 | # cv2.waitKey(0) 84 | cv2.imwrite(output_path + "/Quantization.png", ldr) 85 | 86 | # LDR_single(ldr,n,output_path) # debug 87 | ############ Cumulate ############ 88 | LDR_single_add(ldr,n,output_path) 89 | print('Quantization done') 90 | 91 | 92 | # get tone 93 | (h,w) = ldr.shape 94 | canvas = Gassian((h+4*period,w+4*period), mean=250, var = 3) 95 | 96 | 97 | for j in range(n): 98 | # print('tone:',j) 99 | # distribution = ChooseDistribution(period=period,Grayscale=j*256/n) 100 | stroke_temp['grayscale'] = j*256/n 101 | mask = cv2.imread(output_path + '/mask/mask{}.png'.format(j),cv2.IMREAD_GRAYSCALE)/255 102 | #dir_mask = cv2.imread(output_path + '/mask/dir_mask{}.png'.format(dirs),cv2.IMREAD_GRAYSCALE) 103 | # if angle==0: 104 | # dir_mask[::] = 255 105 | # dir_mask,_ = rotate(dir_mask, -angle, pad_color=0) 106 | # dir_mask[dir_mask<128]=0 107 | # dir_mask[dir_mask>127]=1 108 | 109 | distensce = Gassian((1,int(h/period)+4), mean = period, var = 1) 110 | distensce = np.uint8(np.round(np.clip(distensce, period*0.8, period*1.25))) 111 | raw = -int(period/2) 112 | 113 | for i in np.squeeze(distensce).tolist(): 114 | if raw < h: 115 | y = raw + 2*period # y < h+2*period 116 | raw += i 117 | for interval in get_start_end(mask[y-2*period]): 118 | 119 | begin = interval[0] 120 | end = interval[1] 121 | 122 | # length = end - begin 123 | 124 | begin -= 2*period 125 | end += 2*period 126 | 127 | length = end - begin 128 | stroke_temp['begin'] = begin 129 | stroke_temp['end'] = end 130 | stroke_temp['row'] = y-int(period/2) 131 | 132 | stroke_sequence.append(stroke_temp.copy()) 133 | # newline = Getline(distribution=distribution, length=length) 134 | # if length<1000 or begin == -2*period or end == w-1+2*period: 135 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] 136 | # m = np.minimum(temp, newline[:,:temp.shape[1]]) 137 | # canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] = m 138 | # else: 139 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] 140 | # m = np.minimum(temp, newline) 141 | # canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] = m 142 | 143 | 144 | # if step % Freq == 0: 145 | # if step > Freq: # not first time 146 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 147 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 148 | # (H,W) = now.shape 149 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 150 | # now = np.minimum(before,now) 151 | # else: # first time to save 152 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 153 | # (H,W) = now.shape 154 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 155 | 156 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(int(step/Freq)), now) 157 | # # 
cv2.imshow('step', canvas) 158 | # # cv2.waitKey(0) 159 | 160 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 161 | # (H,W) = now.shape 162 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 163 | # now = np.minimum(now,now_) 164 | # step += 1 165 | # cv2.imshow('step', now_) 166 | # cv2.waitKey(1) 167 | # now_ = now 168 | 169 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 170 | # (H,W) = now.shape 171 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 172 | # cv2.imwrite(output_path + "/pro/{}_{}.png".format(dirs,j), now) 173 | 174 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 175 | # (H,W) = now.shape 176 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 177 | # cv2.imwrite(output_path + "/{:.1f}.png".format(angle), now) 178 | # cv2.destroyAllWindows() 179 | 180 | time_end=time.time() 181 | print('total time',time_end-time_start) 182 | print('stoke number',len(stroke_sequence)) 183 | # cv2.imwrite(output_path + "/draw.png", now_) 184 | # cv2.imshow('draw', now_) 185 | # cv2.waitKey(0) 186 | 187 | 188 | # random.shuffle(stroke_sequence) 189 | result = Gassian((h0,w0), mean=250, var = 3) 190 | canvases = [] 191 | 192 | #for dirs in range(direction): 193 | # angle = -90+dirs*180/direction 194 | canvas,_ = rotate(result, -angle) 195 | # (h,w) = canvas.shape 196 | canvas = np.pad(canvas, pad_width=2*period, mode='constant', constant_values=(255,255)) 197 | canvases.append(canvas) 198 | 199 | 200 | 201 | for stroke_temp in stroke_sequence: 202 | angle = stroke_temp['angle'] 203 | dirs = int((angle+90)*direction/180) 204 | grayscale = stroke_temp['grayscale'] 205 | distribution = ChooseDistribution(period=period,Grayscale=grayscale) 206 | row = stroke_temp['row'] 207 | begin = stroke_temp['begin'] 208 | end = stroke_temp['end'] 209 | length = end - begin 210 | 211 | newline = Getline(distribution=distribution, length=length) 212 | 213 | # canvas = canvases[dirs] 214 | 215 | if length<1000 or begin == -2*period or end == w-1+2*period: 216 | temp = canvas[row:row+2*period,2*period+begin:2*period+end] 217 | m = np.minimum(temp, newline[:,:temp.shape[1]]) 218 | canvas[row:row+2*period,2*period+begin:2*period+end] = m 219 | # else: 220 | # temp = canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] 221 | # m = np.minimum(temp, newline) 222 | # canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] = m 223 | 224 | now,_ = rotate(canvas[2*period:-2*period,2*period:-2*period], angle) 225 | (H,W) = now.shape 226 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 227 | result = np.minimum(now,result) 228 | # cv2.imshow('step', result) 229 | cv2.waitKey(1) 230 | 231 | step += 1 232 | if step % Freq == 0: 233 | # if step > Freq: # not first time 234 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 235 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 236 | # (H,W) = now.shape 237 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 238 | # now = np.minimum(before,now) 239 | # else: # first time to save 240 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 241 | # (H,W) = now.shape 242 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 243 | 244 | cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(int(step/Freq)), result) 
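# --- Illustrative sketch (not part of process_fix_dir.py): how strokes are composited ---
# Every stroke above is merged into the canvas with np.minimum, so the darker pixel
# always wins and overlapping pencil marks can only darken, never erase, earlier ones.
# A minimal, runnable example of that min-compositing rule, with made-up array values:
import numpy as np

canvas_demo = np.full((4, 8), 255, dtype=np.uint8)    # blank white canvas
stroke_demo = np.full((4, 8), 255, dtype=np.uint8)
stroke_demo[1:3, 2:6] = 90                            # one dark horizontal stroke
canvas_demo = np.minimum(canvas_demo, stroke_demo)    # keep the darker value per pixel
print(canvas_demo.min(), canvas_demo.max())           # -> 90 255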
245 | # cv2.imshow('step', canvas) 246 | # cv2.waitKey(0) 247 | if step % Freq != 0: 248 | step = int(step/Freq)+1 249 | cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(step), result) 250 | 251 | cv2.destroyAllWindows() 252 | time_end=time.time() 253 | print('total time',time_end-time_start) 254 | print('stoke number',len(stroke_sequence)) 255 | cv2.imwrite(output_path + '/draw.png', result) 256 | 257 | 258 | 259 | ############ gen edge ########### 260 | 261 | # device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 262 | # pc = PencilDraw(device=device, gammaS=1) 263 | # pc(input_path) 264 | # edge = cv2.imread('output/Edge.png', cv2.IMREAD_GRAYSCALE) 265 | 266 | edge = genStroke(input_img,18) 267 | edge = np.power(edge, deepen) 268 | edge = np.uint8(edge*255) 269 | if edge_CLAHE==True: 270 | edge = HistogramEqualization(edge) 271 | 272 | cv2.imwrite(output_path + '/edge.png', edge) 273 | # cv2.imshow("edge",edge) 274 | cv2.waitKey(0) 275 | 276 | ############# merge ############# 277 | edge = np.float32(edge) 278 | now_ = cv2.imread(output_path + "/draw.png", cv2.IMREAD_GRAYSCALE) 279 | result = res_cross= np.float32(now_) 280 | 281 | result[1:,1:] = np.uint8(edge[:-1,:-1] * res_cross[1:,1:]/255) 282 | result[0] = np.uint8(edge[0] * res_cross[0]/255) 283 | result[:,0] = np.uint8(edge[:,0] * res_cross[:,0]/255) 284 | result = edge*res_cross/255 285 | result=np.uint8(result) 286 | 287 | cv2.imwrite(output_path + '/result.png', result) 288 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(step+1), result) 289 | # cv2.imshow("result",result) 290 | cv2.waitKey(0) 291 | 292 | # deblue 293 | deblue(result, output_path) 294 | 295 | # RGB 296 | img_rgb_original = cv2.imread(input_path, cv2.IMREAD_COLOR) 297 | cv2.imwrite(output_path + "/input.png", img_rgb_original) 298 | img_yuv = cv2.cvtColor(img_rgb_original, cv2.COLOR_BGR2YUV) 299 | img_yuv[:,:,0] = result 300 | img_rgb = cv2.cvtColor(img_yuv, cv2.COLOR_YUV2BGR) 301 | 302 | # cv2.imshow("RGB",img_rgb) 303 | cv2.waitKey(0) 304 | cv2.imwrite(output_path + "/result_RGB.png",img_rgb) 305 | -------------------------------------------------------------------------------- /process_random.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | import torch 4 | import torch.nn as nn 5 | import torch.nn.functional as F 6 | import random 7 | import time 8 | import os 9 | 10 | from LDR import * 11 | from tone import * 12 | from genStroke_origin import * 13 | from drawpatch import rotate 14 | from tools import * 15 | from ETF.edge_tangent_flow import * 16 | from deblue import deblue 17 | from quicksort import * 18 | 19 | # args 20 | input_path = './input/cat_up.png' 21 | output_path = './output' 22 | 23 | np.random.seed(1) 24 | n = 6 # Quantization order 25 | period = 4 # line period 26 | direction = 10 # num of dir 27 | Freq = 100 # save every(freq) lines drawn 28 | deepen = 1 # for edge 29 | transTone = False # for Tone 30 | kernel_radius = 3 # for ETF 31 | iter_time = 15 # for ETF 32 | background_dir = 45 # for ETF 33 | CLAHE = True 34 | edge_CLAHE = True 35 | draw_new = True 36 | random_order = True 37 | 38 | 39 | 40 | if __name__ == '__main__': 41 | ####### ETF ####### 42 | time_start=time.time() 43 | ETF_filter = ETF(input_path=input_path, output_path=output_path+'/mask',\ 44 | dir_num=direction, kernel_radius=kernel_radius, iter_time=iter_time, background_dir=background_dir) 45 | ETF_filter.forward() 46 | print('ETF done') 47 | 48 | 49 | 
input_img = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE) 50 | (h0,w0) = input_img.shape 51 | cv2.imwrite(output_path + "/input_gray.png", input_img) 52 | # if h0>w0: 53 | # input_img = cv2.resize(input_img,(int(256*w0/h0),256)) 54 | # else: 55 | # input_img = cv2.resize(input_img,(256,int(256*h0/w0))) 56 | # (h0,w0) = input_img.shape 57 | 58 | if transTone == True: 59 | input_img = transferTone(input_img) 60 | 61 | now_ = np.uint8(np.ones((h0,w0)))*255 62 | step = 0 63 | if draw_new==True: 64 | time_start=time.time() 65 | stroke_sequence=[] 66 | stroke_temp={'angle':None, 'grayscale':None, 'row':None, 'begin':None, 'end':None} 67 | for dirs in range(direction): 68 | angle = -90+dirs*180/direction 69 | print('angle:', angle) 70 | stroke_temp['angle'] = angle 71 | img,_ = rotate(input_img, -angle) 72 | 73 | ############ Adjust Histogram ############ 74 | if CLAHE==True: 75 | img = HistogramEqualization(img) 76 | # cv2.imshow('HistogramEqualization', res) 77 | # cv2.waitKey(0) 78 | # cv2.imwrite(output_path + "/HistogramEqualization.png", res) 79 | print('HistogramEqualization done') 80 | 81 | ############ Quantization ############ 82 | ldr = LDR(img, n) 83 | # cv2.imshow('Quantization', ldr) 84 | # cv2.waitKey(0) 85 | cv2.imwrite(output_path + "/Quantization.png", ldr) 86 | 87 | # LDR_single(ldr,n,output_path) # debug 88 | ############ Cumulate ############ 89 | LDR_single_add(ldr,n,output_path) 90 | print('Quantization done') 91 | 92 | 93 | # get tone 94 | (h,w) = ldr.shape 95 | canvas = Gassian((h+4*period,w+4*period), mean=250, var = 3) 96 | 97 | 98 | for j in range(n): 99 | # print('tone:',j) 100 | # distribution = ChooseDistribution(period=period,Grayscale=j*256/n) 101 | stroke_temp['grayscale'] = j*256/n 102 | mask = cv2.imread(output_path + '/mask/mask{}.png'.format(j),cv2.IMREAD_GRAYSCALE)/255 103 | dir_mask = cv2.imread(output_path + '/mask/dir_mask{}.png'.format(dirs),cv2.IMREAD_GRAYSCALE) 104 | # if angle==0: 105 | # dir_mask[::] = 255 106 | dir_mask,_ = rotate(dir_mask, -angle, pad_color=0) 107 | dir_mask[dir_mask<128]=0 108 | dir_mask[dir_mask>127]=1 109 | 110 | distensce = Gassian((1,int(h/period)+4), mean = period, var = 1) 111 | distensce = np.uint8(np.round(np.clip(distensce, period*0.8, period*1.25))) 112 | raw = -int(period/2) 113 | 114 | for i in np.squeeze(distensce).tolist(): 115 | if raw < h: 116 | y = raw + 2*period # y < h+2*period 117 | raw += i 118 | for interval in get_start_end(mask[y-2*period]*dir_mask[y-2*period]): 119 | 120 | begin = interval[0] 121 | end = interval[1] 122 | 123 | # length = end - begin 124 | 125 | begin -= 2*period 126 | end += 2*period 127 | 128 | length = end - begin 129 | stroke_temp['begin'] = begin 130 | stroke_temp['end'] = end 131 | stroke_temp['row'] = y-int(period/2) 132 | 133 | stroke_sequence.append(stroke_temp.copy()) 134 | # newline = Getline(distribution=distribution, length=length) 135 | # if length<1000 or begin == -2*period or end == w-1+2*period: 136 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] 137 | # m = np.minimum(temp, newline[:,:temp.shape[1]]) 138 | # canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] = m 139 | # else: 140 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] 141 | # m = np.minimum(temp, newline) 142 | # canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] = m 143 | 144 | 145 | # if step % Freq == 0: 146 | # if step > Freq: # not first time 
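# --- Illustrative sketch (not part of process_random.py): deferred, shuffled rendering ---
# Each record appended to stroke_sequence is a plain dict (angle, grayscale, row, begin, end).
# Collecting every stroke first and drawing them later is what lets this variant shuffle the
# drawing order when random_order is True. A tiny sketch of that idea with invented values
# (the field names match the dict used above; the numbers are only for the demo):
import random

demo_strokes = [
    {'angle': -90.0, 'grayscale': 0.0,   'row': 12, 'begin': 3,  'end': 40},
    {'angle': -18.0, 'grayscale': 85.3,  'row': 50, 'begin': 8,  'end': 25},
    {'angle': 54.0,  'grayscale': 170.7, 'row': 7,  'begin': 14, 'end': 60},
]
random.seed(0)
random.shuffle(demo_strokes)                 # random drawing order, as when random_order == True
for s in demo_strokes:
    print(s['angle'], s['end'] - s['begin'])  # e.g. draw a stroke of this length at this angle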
147 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 148 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 149 | # (H,W) = now.shape 150 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 151 | # now = np.minimum(before,now) 152 | # else: # first time to save 153 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 154 | # (H,W) = now.shape 155 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 156 | 157 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(int(step/Freq)), now) 158 | # # cv2.imshow('step', canvas) 159 | # # cv2.waitKey(0) 160 | 161 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 162 | # (H,W) = now.shape 163 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 164 | # now = np.minimum(now,now_) 165 | # step += 1 166 | # cv2.imshow('step', now_) 167 | # cv2.waitKey(1) 168 | # now_ = now 169 | 170 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 171 | # (H,W) = now.shape 172 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 173 | # cv2.imwrite(output_path + "/pro/{}_{}.png".format(dirs,j), now) 174 | 175 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 176 | # (H,W) = now.shape 177 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 178 | # cv2.imwrite(output_path + "/{:.1f}.png".format(angle), now) 179 | # cv2.destroyAllWindows() 180 | 181 | time_end=time.time() 182 | print('total time',time_end-time_start) 183 | print('stoke number',len(stroke_sequence)) 184 | # cv2.imwrite(output_path + "/draw.png", now_) 185 | # cv2.imshow('draw', now_) 186 | # cv2.waitKey(0) 187 | 188 | if random_order == True: 189 | random.shuffle(stroke_sequence) 190 | result = Gassian((h0,w0), mean=250, var = 3) 191 | canvases = [] 192 | 193 | for dirs in range(direction): 194 | angle = -90+dirs*180/direction 195 | canvas,_ = rotate(result, -angle) 196 | # (h,w) = canvas.shape 197 | canvas = np.pad(canvas, pad_width=2*period, mode='constant', constant_values=(255,255)) 198 | canvases.append(canvas) 199 | 200 | 201 | 202 | for stroke_temp in stroke_sequence: 203 | angle = stroke_temp['angle'] 204 | dirs = int((angle+90)*direction/180) 205 | grayscale = stroke_temp['grayscale'] 206 | distribution = ChooseDistribution(period=period,Grayscale=grayscale) 207 | row = stroke_temp['row'] 208 | begin = stroke_temp['begin'] 209 | end = stroke_temp['end'] 210 | length = end - begin 211 | 212 | newline = Getline(distribution=distribution, length=length) 213 | 214 | canvas = canvases[dirs] 215 | 216 | if length<1000 or begin == -2*period or end == w-1+2*period: 217 | temp = canvas[row:row+2*period,2*period+begin:2*period+end] 218 | m = np.minimum(temp, newline[:,:temp.shape[1]]) 219 | canvas[row:row+2*period,2*period+begin:2*period+end] = m 220 | # else: 221 | # temp = canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] 222 | # m = np.minimum(temp, newline) 223 | # canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] = m 224 | 225 | now,_ = rotate(canvas[2*period:-2*period,2*period:-2*period], angle) 226 | (H,W) = now.shape 227 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 228 | result = np.minimum(now,result) 229 | # cv2.imshow('step', result) 230 | cv2.waitKey(1) 231 | 232 | step += 1 233 | if step % Freq == 0: 234 | # if 
step > Freq: # not first time 235 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 236 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 237 | # (H,W) = now.shape 238 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 239 | # now = np.minimum(before,now) 240 | # else: # first time to save 241 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 242 | # (H,W) = now.shape 243 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 244 | 245 | cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(int(step/Freq)), result) 246 | # cv2.imshow('step', canvas) 247 | # cv2.waitKey(0) 248 | if step % Freq != 0: 249 | step = int(step/Freq)+1 250 | cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(step), result) 251 | 252 | cv2.destroyAllWindows() 253 | time_end=time.time() 254 | print('total time',time_end-time_start) 255 | print('stoke number',len(stroke_sequence)) 256 | cv2.imwrite(output_path + '/draw.png', result) 257 | 258 | 259 | 260 | ############ gen edge ########### 261 | 262 | # device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 263 | # pc = PencilDraw(device=device, gammaS=1) 264 | # pc(input_path) 265 | # edge = cv2.imread('output/Edge.png', cv2.IMREAD_GRAYSCALE) 266 | 267 | edge = genStroke(input_img,18) 268 | edge = np.power(edge, deepen) 269 | edge = np.uint8(edge*255) 270 | if edge_CLAHE==True: 271 | edge = HistogramEqualization(edge) 272 | 273 | cv2.imwrite(output_path + '/edge.png', edge) 274 | # cv2.imshow("edge",edge) 275 | cv2.waitKey(0) 276 | 277 | ############# merge ############# 278 | edge = np.float32(edge) 279 | now_ = cv2.imread(output_path + "/draw.png", cv2.IMREAD_GRAYSCALE) 280 | result = res_cross= np.float32(now_) 281 | 282 | result[1:,1:] = np.uint8(edge[:-1,:-1] * res_cross[1:,1:]/255) 283 | result[0] = np.uint8(edge[0] * res_cross[0]/255) 284 | result[:,0] = np.uint8(edge[:,0] * res_cross[:,0]/255) 285 | result = edge*res_cross/255 286 | result=np.uint8(result) 287 | 288 | cv2.imwrite(output_path + '/result.png', result) 289 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(step+1), result) 290 | # cv2.imshow("result",result) 291 | cv2.waitKey(0) 292 | 293 | # deblue 294 | deblue(result, output_path) 295 | 296 | # RGB 297 | img_rgb_original = cv2.imread(input_path, cv2.IMREAD_COLOR) 298 | cv2.imwrite(output_path + "/input.png", img_rgb_original) 299 | img_yuv = cv2.cvtColor(img_rgb_original, cv2.COLOR_BGR2YUV) 300 | img_yuv[:,:,0] = result 301 | img_rgb = cv2.cvtColor(img_yuv, cv2.COLOR_YUV2BGR) 302 | 303 | # cv2.imshow("RGB",img_rgb) 304 | cv2.waitKey(0) 305 | cv2.imwrite(output_path + "/result_RGB.png",img_rgb) 306 | -------------------------------------------------------------------------------- /draw.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | import torch 4 | import torch.nn as nn 5 | import torch.nn.functional as F 6 | import random 7 | import time 8 | import os 9 | 10 | from LDR import * 11 | from tone import * 12 | from genStroke_origin import * 13 | from drawpatch import rotate 14 | from tools import * 15 | from ETF.edge_tangent_flow import * 16 | from deblue import deblue 17 | from quicksort import * 18 | 19 | 20 | def draw(file_name, n, period): 21 | # initalize 22 | input_path = './input/'+file_name 23 | output_path = './output' 24 | 25 | np.random.seed(1) 26 | # n = 10 # 
Quantization order 27 | # period = 5 # line period 28 | direction = 10 # num of dir 29 | Freq = 100 # save every(freq) lines drawn 30 | deepen = 1 # for edge 31 | transTone = False # for Tone8 32 | kernel_radius = 3 # for ETF 33 | iter_time = 15 # for ETF 34 | background_dir = None # for ETF 35 | CLAHE = True 36 | edge_CLAHE = True 37 | draw_new = True 38 | random_order = False 39 | ETF_order = True 40 | process_visible = True 41 | 42 | file_name = os.path.basename(input_path) 43 | file_name = file_name.split('.')[0] 44 | print(file_name) 45 | output_path = output_path+"/"+file_name 46 | if not os.path.exists(output_path): 47 | os.makedirs(output_path) 48 | os.makedirs(output_path+"/mask") 49 | os.makedirs(output_path+"/process") 50 | 51 | ####### ETF ####### 52 | time_start=time.time() 53 | ETF_filter = ETF(input_path=input_path, output_path=output_path+'/mask',\ 54 | dir_num=direction, kernel_radius=kernel_radius, iter_time=iter_time, background_dir=background_dir) 55 | ETF_filter.forward() 56 | print('ETF done') 57 | 58 | 59 | input_img = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE) 60 | (h0,w0) = input_img.shape 61 | cv2.imwrite(output_path + "/input_gray.jpg", input_img) 62 | # if h0>w0: 63 | # input_img = cv2.resize(input_img,(int(256*w0/h0),256)) 64 | # else: 65 | # input_img = cv2.resize(input_img,(256,int(256*h0/w0))) 66 | # (h0,w0) = input_img.shape 67 | 68 | if transTone == True: 69 | input_img = transferTone(input_img) 70 | 71 | now_ = np.uint8(np.ones((h0,w0)))*255 72 | step = 0 73 | if draw_new==True: 74 | time_start=time.time() 75 | stroke_sequence=[] 76 | stroke_temp={'angle':None, 'grayscale':None, 'row':None, 'begin':None, 'end':None} 77 | for dirs in range(direction): 78 | angle = -90+dirs*180/direction 79 | print('angle:', angle) 80 | stroke_temp['angle'] = angle 81 | img,_ = rotate(input_img, -angle) 82 | 83 | ############ Adjust Histogram ############ 84 | if CLAHE==True: 85 | img = HistogramEqualization(img) 86 | # cv2.imshow('HistogramEqualization', res) 87 | # cv2.waitKey(0) 88 | # cv2.imwrite(output_path + "/HistogramEqualization.png", res) 89 | print('HistogramEqualization done') 90 | 91 | ########### gredient ####### 92 | img_pad = cv2.copyMakeBorder(img, 2*period, 2*period, 2*period, 2*period, cv2.BORDER_REPLICATE) 93 | img_normal = cv2.normalize(img_pad.astype("float32"), None, 0.0, 1.0, cv2.NORM_MINMAX) 94 | 95 | x_der = cv2.Sobel(img_normal, cv2.CV_32FC1, 1, 0, ksize=5) 96 | y_der = cv2.Sobel(img_normal, cv2.CV_32FC1, 0, 1, ksize=5) 97 | 98 | x_der = torch.from_numpy(x_der) + 1e-12 99 | y_der = torch.from_numpy(y_der) + 1e-12 100 | 101 | gradient_magnitude = torch.sqrt(x_der**2.0 + y_der**2.0) 102 | gradient_norm = gradient_magnitude/gradient_magnitude.max() 103 | 104 | ############ Quantization ############ 105 | ldr = LDR(img, n) 106 | # cv2.imshow('Quantization', ldr) 107 | # cv2.waitKey(0) 108 | cv2.imwrite(output_path + "/Quantization.png", ldr) 109 | 110 | # LDR_single(ldr,n,output_path) # debug 111 | ############ Cumulate ############ 112 | LDR_single_add(ldr,n,output_path) 113 | print('Quantization done') 114 | 115 | 116 | # get tone 117 | (h,w) = ldr.shape 118 | canvas = Gassian((h+4*period,w+4*period), mean=250, var = 3) 119 | 120 | 121 | for j in range(n): 122 | # print('tone:',j) 123 | # distribution = ChooseDistribution(period=period,Grayscale=j*256/n) 124 | stroke_temp['grayscale'] = j*256/n 125 | mask = cv2.imread(output_path + '/mask/mask{}.png'.format(j),cv2.IMREAD_GRAYSCALE)/255 126 | dir_mask = cv2.imread(output_path + 
'/mask/dir_mask{}.png'.format(dirs),cv2.IMREAD_GRAYSCALE) 127 | # if angle==0: 128 | # dir_mask[::] = 255 129 | dir_mask,_ = rotate(dir_mask, -angle, pad_color=0) 130 | dir_mask[dir_mask<128]=0 131 | dir_mask[dir_mask>127]=1 132 | 133 | distensce = Gassian((1,int(h/period)+4), mean = period, var = 1) 134 | distensce = np.uint8(np.round(np.clip(distensce, period*0.8, period*1.25))) 135 | raw = -int(period/2) 136 | 137 | for i in np.squeeze(distensce).tolist(): 138 | if raw < h: 139 | y = raw + 2*period # y < h+2*period 140 | raw += i 141 | for interval in get_start_end(mask[y-2*period]*dir_mask[y-2*period]): 142 | 143 | begin = interval[0] 144 | end = interval[1] 145 | 146 | # length = end - begin 147 | 148 | begin -= 2*period 149 | end += 2*period 150 | 151 | length = end - begin 152 | stroke_temp['begin'] = begin 153 | stroke_temp['end'] = end 154 | stroke_temp['row'] = y-int(period/2) 155 | #print(gradient_norm[y,interval[0]+2*period:interval[1]+2*period]) 156 | stroke_temp['importance'] = (255-stroke_temp['grayscale'])*torch.sum(gradient_norm[y:y+period,interval[0]+2*period:interval[1]+2*period]).numpy() 157 | 158 | stroke_sequence.append(stroke_temp.copy()) 159 | # newline = Getline(distribution=distribution, length=length) 160 | # if length<1000 or begin == -2*period or end == w-1+2*period: 161 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] 162 | # m = np.minimum(temp, newline[:,:temp.shape[1]]) 163 | # canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] = m 164 | # else: 165 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] 166 | # m = np.minimum(temp, newline) 167 | # canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] = m 168 | 169 | 170 | # if step % Freq == 0: 171 | # if step > Freq: # not first time 172 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 173 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 174 | # (H,W) = now.shape 175 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 176 | # now = np.minimum(before,now) 177 | # else: # first time to save 178 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 179 | # (H,W) = now.shape 180 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 181 | 182 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(int(step/Freq)), now) 183 | # # cv2.imshow('step', canvas) 184 | # # cv2.waitKey(0) 185 | 186 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 187 | # (H,W) = now.shape 188 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 189 | # now = np.minimum(now,now_) 190 | # step += 1 191 | # cv2.imshow('step', now_) 192 | # cv2.waitKey(1) 193 | # now_ = now 194 | 195 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 196 | # (H,W) = now.shape 197 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 198 | # cv2.imwrite(output_path + "/pro/{}_{}.png".format(dirs,j), now) 199 | 200 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 201 | # (H,W) = now.shape 202 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 203 | # cv2.imwrite(output_path + "/{:.1f}.png".format(angle), now) 204 | # cv2.destroyAllWindows() 205 | 206 | time_end=time.time() 207 | print('total 
time',time_end-time_start) 208 | print('stoke number',len(stroke_sequence)) 209 | # cv2.imwrite(output_path + "/draw.png", now_) 210 | # cv2.imshow('draw', now_) 211 | # cv2.waitKey(0) 212 | 213 | if random_order == True: 214 | random.shuffle(stroke_sequence) 215 | 216 | 217 | if ETF_order == True: 218 | random.shuffle(stroke_sequence) 219 | quickSort(stroke_sequence,0,len(stroke_sequence)-1) 220 | result = Gassian((h0,w0), mean=250, var = 3) 221 | canvases = [] 222 | 223 | 224 | for dirs in range(direction): 225 | angle = -90+dirs*180/direction 226 | canvas,_ = rotate(result, -angle) 227 | # (h,w) = canvas.shape 228 | canvas = np.pad(canvas, pad_width=2*period, mode='constant', constant_values=(255,255)) 229 | canvases.append(canvas) 230 | 231 | 232 | 233 | for stroke_temp in stroke_sequence: 234 | angle = stroke_temp['angle'] 235 | dirs = int((angle+90)*direction/180) 236 | grayscale = stroke_temp['grayscale'] 237 | distribution = ChooseDistribution(period=period,Grayscale=grayscale) 238 | row = stroke_temp['row'] 239 | begin = stroke_temp['begin'] 240 | end = stroke_temp['end'] 241 | length = end - begin 242 | 243 | newline = Getline(distribution=distribution, length=length) 244 | 245 | canvas = canvases[dirs] 246 | 247 | if length<1000 or begin == -2*period or end == w-1+2*period: 248 | temp = canvas[row:row+2*period,2*period+begin:2*period+end] 249 | m = np.minimum(temp, newline[:,:temp.shape[1]]) 250 | canvas[row:row+2*period,2*period+begin:2*period+end] = m 251 | # else: 252 | # temp = canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] 253 | # m = np.minimum(temp, newline) 254 | # canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] = m 255 | 256 | now,_ = rotate(canvas[2*period:-2*period,2*period:-2*period], angle) 257 | (H,W) = now.shape 258 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 259 | result = np.minimum(now,result) 260 | if process_visible == True: 261 | # cv2.imshow('step', result) 262 | cv2.waitKey(1) 263 | 264 | step += 1 265 | if step % Freq == 0: 266 | # if step > Freq: # not first time 267 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 268 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 269 | # (H,W) = now.shape 270 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 271 | # now = np.minimum(before,now) 272 | # else: # first time to save 273 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 274 | # (H,W) = now.shape 275 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 276 | 277 | cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(int(step/Freq)), result) 278 | # cv2.imshow('step', canvas) 279 | # cv2.waitKey(0) 280 | if step % Freq != 0: 281 | step = int(step/Freq)+1 282 | cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(step), result) 283 | 284 | cv2.destroyAllWindows() 285 | time_end=time.time() 286 | print('total time',time_end-time_start) 287 | print('stoke number',len(stroke_sequence)) 288 | cv2.imwrite(output_path + '/draw.jpg', result) 289 | 290 | 291 | 292 | ############ gen edge ########### 293 | 294 | # device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 295 | # pc = PencilDraw(device=device, gammaS=1) 296 | # pc(input_path) 297 | # edge = cv2.imread('output/Edge.png', cv2.IMREAD_GRAYSCALE) 298 | 299 | edge = genStroke(input_img,18) 300 | edge = np.power(edge, deepen) 301 | edge = 
np.uint8(edge*255) 302 | if edge_CLAHE==True: 303 | edge = HistogramEqualization(edge) 304 | 305 | cv2.imwrite(output_path + '/edge.jpg', edge) 306 | # cv2.imshow("edge",edge) 307 | 308 | 309 | ############# merge ############# 310 | edge = np.float32(edge) 311 | now_ = cv2.imread(output_path + "/draw.jpg", cv2.IMREAD_GRAYSCALE) 312 | result = res_cross= np.float32(now_) 313 | 314 | result[1:,1:] = np.uint8(edge[:-1,:-1] * res_cross[1:,1:]/255) 315 | result[0] = np.uint8(edge[0] * res_cross[0]/255) 316 | result[:,0] = np.uint8(edge[:,0] * res_cross[:,0]/255) 317 | result = edge*res_cross/255 318 | result=np.uint8(result) 319 | 320 | cv2.imwrite(output_path + '/result.jpg', result) 321 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(step+1), result) 322 | # cv2.imshow("result",result) 323 | 324 | 325 | # deblue 326 | deblue(result, output_path) 327 | 328 | # RGB 329 | img_rgb_original = cv2.imread(input_path, cv2.IMREAD_COLOR) 330 | cv2.imwrite(output_path + "/input.jpg", img_rgb_original) 331 | img_yuv = cv2.cvtColor(img_rgb_original, cv2.COLOR_BGR2YUV) 332 | img_yuv[:,:,0] = result 333 | img_rgb = cv2.cvtColor(img_yuv, cv2.COLOR_YUV2BGR) 334 | 335 | # cv2.imshow("RGB",img_rgb) 336 | cv2.waitKey(0) 337 | cv2.imwrite(output_path + "/result_RGB.jpg",img_rgb) -------------------------------------------------------------------------------- /dog.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | import torch 4 | import torch.nn as nn 5 | import torch.nn.functional as F 6 | import random 7 | import time 8 | import os 9 | 10 | from LDR import * 11 | from tone import * 12 | from genStroke_origin import * 13 | from drawpatch import rotate 14 | from tools import * 15 | from ETF.edge_tangent_flow import * 16 | from deblue import deblue 17 | from quicksort import * 18 | 19 | # args 20 | input_path = './input/dog.png' 21 | output_path = './output' 22 | 23 | np.random.seed(1) 24 | n = 7 # Quantization order 25 | period = 4 # line period 26 | direction = 10 # num of dir 27 | Freq = 100 # save every(freq) lines drawn 28 | deepen = 1 # for edge 29 | transTone = False # for Tone8 30 | kernel_radius = 3 # for ETF 31 | iter_time = 15 # for ETF 32 | background_dir = 45 # for ETF 33 | CLAHE = True 34 | edge_CLAHE = True 35 | draw_new = True 36 | random_order = False 37 | ETF_order = True 38 | process_visible = True 39 | 40 | if __name__ == '__main__': 41 | 42 | file_name = os.path.basename(input_path) 43 | file_name = file_name.split('.')[0] 44 | print(file_name) 45 | output_path = output_path+"/"+file_name 46 | if not os.path.exists(output_path): 47 | os.makedirs(output_path) 48 | os.makedirs(output_path+"/mask") 49 | os.makedirs(output_path+"/process") 50 | 51 | ####### ETF ####### 52 | time_start=time.time() 53 | ETF_filter = ETF(input_path=input_path, output_path=output_path+'/mask',\ 54 | dir_num=direction, kernel_radius=kernel_radius, iter_time=iter_time, background_dir=background_dir) 55 | ETF_filter.forward() 56 | print('ETF done') 57 | 58 | 59 | input_img = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE) 60 | (h0,w0) = input_img.shape 61 | cv2.imwrite(output_path + "/input_gray.jpg", input_img) 62 | # if h0>w0: 63 | # input_img = cv2.resize(input_img,(int(256*w0/h0),256)) 64 | # else: 65 | # input_img = cv2.resize(input_img,(256,int(256*h0/w0))) 66 | # (h0,w0) = input_img.shape 67 | 68 | if transTone == True: 69 | input_img = transferTone(input_img) 70 | 71 | now_ = np.uint8(np.ones((h0,w0)))*255 72 | step = 0 73 | if 
draw_new==True: 74 | time_start=time.time() 75 | stroke_sequence=[] 76 | stroke_temp={'angle':None, 'grayscale':None, 'row':None, 'begin':None, 'end':None} 77 | for dirs in range(direction): 78 | angle = -90+dirs*180/direction 79 | print('angle:', angle) 80 | stroke_temp['angle'] = angle 81 | img,_ = rotate(input_img, -angle) 82 | 83 | ############ Adjust Histogram ############ 84 | if CLAHE==True: 85 | img = HistogramEqualization(img) 86 | # cv2.imshow('HistogramEqualization', res) 87 | # cv2.waitKey(0) 88 | # cv2.imwrite(output_path + "/HistogramEqualization.png", res) 89 | print('HistogramEqualization done') 90 | 91 | ########### gredient ####### 92 | img_pad = cv2.copyMakeBorder(img, 2*period, 2*period, 2*period, 2*period, cv2.BORDER_REPLICATE) 93 | img_normal = cv2.normalize(img_pad.astype("float32"), None, 0.0, 1.0, cv2.NORM_MINMAX) 94 | 95 | x_der = cv2.Sobel(img_normal, cv2.CV_32FC1, 1, 0, ksize=5) 96 | y_der = cv2.Sobel(img_normal, cv2.CV_32FC1, 0, 1, ksize=5) 97 | 98 | x_der = torch.from_numpy(x_der) + 1e-12 99 | y_der = torch.from_numpy(y_der) + 1e-12 100 | 101 | gradient_magnitude = torch.sqrt(x_der**2.0 + y_der**2.0) 102 | gradient_norm = gradient_magnitude/gradient_magnitude.max() 103 | 104 | ############ Quantization ############ 105 | ldr = LDR(img, n) 106 | # cv2.imshow('Quantization', ldr) 107 | # cv2.waitKey(0) 108 | cv2.imwrite(output_path + "/Quantization.png", ldr) 109 | 110 | # LDR_single(ldr,n,output_path) # debug 111 | ############ Cumulate ############ 112 | LDR_single_add(ldr,n,output_path) 113 | print('Quantization done') 114 | 115 | 116 | # get tone 117 | (h,w) = ldr.shape 118 | canvas = Gassian((h+4*period,w+4*period), mean=250, var = 3) 119 | 120 | 121 | for j in range(n): 122 | # print('tone:',j) 123 | # distribution = ChooseDistribution(period=period,Grayscale=j*256/n) 124 | stroke_temp['grayscale'] = j*256/n 125 | mask = cv2.imread(output_path + '/mask/mask{}.png'.format(j),cv2.IMREAD_GRAYSCALE)/255 126 | dir_mask = cv2.imread(output_path + '/mask/dir_mask{}.png'.format(dirs),cv2.IMREAD_GRAYSCALE) 127 | # if angle==0: 128 | # dir_mask[::] = 255 129 | dir_mask,_ = rotate(dir_mask, -angle, pad_color=0) 130 | dir_mask[dir_mask<128]=0 131 | dir_mask[dir_mask>127]=1 132 | 133 | distensce = Gassian((1,int(h/period)+4), mean = period, var = 1) 134 | distensce = np.uint8(np.round(np.clip(distensce, period*0.8, period*1.25))) 135 | raw = -int(period/2) 136 | 137 | for i in np.squeeze(distensce).tolist(): 138 | if raw < h: 139 | y = raw + 2*period # y < h+2*period 140 | raw += i 141 | for interval in get_start_end(mask[y-2*period]*dir_mask[y-2*period]): 142 | 143 | begin = interval[0] 144 | end = interval[1] 145 | 146 | # length = end - begin 147 | 148 | begin -= 2*period 149 | end += 2*period 150 | 151 | length = end - begin 152 | stroke_temp['begin'] = begin 153 | stroke_temp['end'] = end 154 | stroke_temp['row'] = y-int(period/2) 155 | #print(gradient_norm[y,interval[0]+2*period:interval[1]+2*period]) 156 | stroke_temp['importance'] = (255-stroke_temp['grayscale'])*torch.sum(gradient_norm[y:y+period,interval[0]+2*period:interval[1]+2*period]).numpy() 157 | 158 | stroke_sequence.append(stroke_temp.copy()) 159 | # newline = Getline(distribution=distribution, length=length) 160 | # if length<1000 or begin == -2*period or end == w-1+2*period: 161 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] 162 | # m = np.minimum(temp, newline[:,:temp.shape[1]]) 163 | # 
canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] = m 164 | # else: 165 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] 166 | # m = np.minimum(temp, newline) 167 | # canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] = m 168 | 169 | 170 | # if step % Freq == 0: 171 | # if step > Freq: # not first time 172 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 173 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 174 | # (H,W) = now.shape 175 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 176 | # now = np.minimum(before,now) 177 | # else: # first time to save 178 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 179 | # (H,W) = now.shape 180 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 181 | 182 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(int(step/Freq)), now) 183 | # # cv2.imshow('step', canvas) 184 | # # cv2.waitKey(0) 185 | 186 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 187 | # (H,W) = now.shape 188 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 189 | # now = np.minimum(now,now_) 190 | # step += 1 191 | # cv2.imshow('step', now_) 192 | # cv2.waitKey(1) 193 | # now_ = now 194 | 195 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 196 | # (H,W) = now.shape 197 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 198 | # cv2.imwrite(output_path + "/pro/{}_{}.png".format(dirs,j), now) 199 | 200 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 201 | # (H,W) = now.shape 202 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 203 | # cv2.imwrite(output_path + "/{:.1f}.png".format(angle), now) 204 | # cv2.destroyAllWindows() 205 | 206 | time_end=time.time() 207 | print('total time',time_end-time_start) 208 | print('stoke number',len(stroke_sequence)) 209 | # cv2.imwrite(output_path + "/draw.png", now_) 210 | # cv2.imshow('draw', now_) 211 | # cv2.waitKey(0) 212 | 213 | if random_order == True: 214 | random.shuffle(stroke_sequence) 215 | 216 | 217 | if ETF_order == True: 218 | random.shuffle(stroke_sequence) 219 | quickSort(stroke_sequence,0,len(stroke_sequence)-1) 220 | result = Gassian((h0,w0), mean=250, var = 3) 221 | canvases = [] 222 | 223 | 224 | for dirs in range(direction): 225 | angle = -90+dirs*180/direction 226 | canvas,_ = rotate(result, -angle) 227 | # (h,w) = canvas.shape 228 | canvas = np.pad(canvas, pad_width=2*period, mode='constant', constant_values=(255,255)) 229 | canvases.append(canvas) 230 | 231 | 232 | 233 | for stroke_temp in stroke_sequence: 234 | angle = stroke_temp['angle'] 235 | dirs = int((angle+90)*direction/180) 236 | grayscale = stroke_temp['grayscale'] 237 | distribution = ChooseDistribution(period=period,Grayscale=grayscale) 238 | row = stroke_temp['row'] 239 | begin = stroke_temp['begin'] 240 | end = stroke_temp['end'] 241 | length = end - begin 242 | 243 | newline = Getline(distribution=distribution, length=length) 244 | 245 | canvas = canvases[dirs] 246 | 247 | if length<1000 or begin == -2*period or end == w-1+2*period: 248 | temp = canvas[row:row+2*period,2*period+begin:2*period+end] 249 | m = np.minimum(temp, newline[:,:temp.shape[1]]) 250 | 
canvas[row:row+2*period,2*period+begin:2*period+end] = m 251 | # else: 252 | # temp = canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] 253 | # m = np.minimum(temp, newline) 254 | # canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] = m 255 | 256 | now,_ = rotate(canvas[2*period:-2*period,2*period:-2*period], angle) 257 | (H,W) = now.shape 258 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 259 | result = np.minimum(now,result) 260 | if process_visible == True: 261 | # cv2.imshow('step', result) 262 | cv2.waitKey(1) 263 | 264 | step += 1 265 | if step % Freq == 0: 266 | # if step > Freq: # not first time 267 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 268 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 269 | # (H,W) = now.shape 270 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 271 | # now = np.minimum(before,now) 272 | # else: # first time to save 273 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 274 | # (H,W) = now.shape 275 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 276 | 277 | cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(int(step/Freq)), result) 278 | # cv2.imshow('step', canvas) 279 | # cv2.waitKey(0) 280 | if step % Freq != 0: 281 | step = int(step/Freq)+1 282 | cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(step), result) 283 | 284 | cv2.destroyAllWindows() 285 | time_end=time.time() 286 | print('total time',time_end-time_start) 287 | print('stoke number',len(stroke_sequence)) 288 | cv2.imwrite(output_path + '/draw.jpg', result) 289 | 290 | 291 | 292 | ############ gen edge ########### 293 | 294 | # device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 295 | # pc = PencilDraw(device=device, gammaS=1) 296 | # pc(input_path) 297 | # edge = cv2.imread('output/Edge.png', cv2.IMREAD_GRAYSCALE) 298 | 299 | edge = genStroke(input_img,18) 300 | edge = np.power(edge, deepen) 301 | edge = np.uint8(edge*255) 302 | if edge_CLAHE==True: 303 | edge = HistogramEqualization(edge) 304 | 305 | cv2.imwrite(output_path + '/edge.jpg', edge) 306 | # cv2.imshow("edge",edge) 307 | 308 | 309 | ############# merge ############# 310 | edge = np.float32(edge) 311 | now_ = cv2.imread(output_path + "/draw.jpg", cv2.IMREAD_GRAYSCALE) 312 | result = res_cross= np.float32(now_) 313 | 314 | result[1:,1:] = np.uint8(edge[:-1,:-1] * res_cross[1:,1:]/255) 315 | result[0] = np.uint8(edge[0] * res_cross[0]/255) 316 | result[:,0] = np.uint8(edge[:,0] * res_cross[:,0]/255) 317 | result = edge*res_cross/255 318 | result=np.uint8(result) 319 | 320 | cv2.imwrite(output_path + '/result.jpg', result) 321 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(step+1), result) 322 | # cv2.imshow("result",result) 323 | 324 | 325 | # deblue 326 | deblue(result, output_path) 327 | 328 | # RGB 329 | img_rgb_original = cv2.imread(input_path, cv2.IMREAD_COLOR) 330 | cv2.imwrite(output_path + "/input.jpg", img_rgb_original) 331 | img_yuv = cv2.cvtColor(img_rgb_original, cv2.COLOR_BGR2YUV) 332 | img_yuv[:,:,0] = result 333 | img_rgb = cv2.cvtColor(img_yuv, cv2.COLOR_YUV2BGR) 334 | 335 | # cv2.imshow("RGB",img_rgb) 336 | cv2.waitKey(0) 337 | cv2.imwrite(output_path + "/result_RGB.jpg",img_rgb) 338 | -------------------------------------------------------------------------------- /girl.py: 
-------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | import torch 4 | import torch.nn as nn 5 | import torch.nn.functional as F 6 | import random 7 | import time 8 | import os 9 | 10 | from LDR import * 11 | from tone import * 12 | from genStroke_origin import * 13 | from drawpatch import rotate 14 | from tools import * 15 | from ETF.edge_tangent_flow import * 16 | from deblue import deblue 17 | from quicksort import * 18 | 19 | # args 20 | input_path = './input/ztfn_up.png' 21 | output_path = './output' 22 | 23 | np.random.seed(1) 24 | n = 10 # Quantization order 25 | period = 4 # line period 26 | direction = 10 # num of dir 27 | Freq = 100 # save every(freq) lines drawn 28 | deepen = 1 # for edge 29 | transTone = False # for Tone8 30 | kernel_radius = 3 # for ETF 31 | iter_time = 15 # for ETF 32 | background_dir = 45 # for ETF 33 | CLAHE = True 34 | edge_CLAHE = True 35 | draw_new = True 36 | random_order = False 37 | ETF_order = True 38 | process_visible = True 39 | 40 | if __name__ == '__main__': 41 | 42 | file_name = os.path.basename(input_path) 43 | file_name = file_name.split('.')[0] 44 | print(file_name) 45 | output_path = output_path+"/"+file_name 46 | if not os.path.exists(output_path): 47 | os.makedirs(output_path) 48 | os.makedirs(output_path+"/mask") 49 | os.makedirs(output_path+"/process") 50 | 51 | ####### ETF ####### 52 | time_start=time.time() 53 | ETF_filter = ETF(input_path=input_path, output_path=output_path+'/mask',\ 54 | dir_num=direction, kernel_radius=kernel_radius, iter_time=iter_time, background_dir=background_dir) 55 | ETF_filter.forward() 56 | print('ETF done') 57 | 58 | 59 | input_img = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE) 60 | (h0,w0) = input_img.shape 61 | cv2.imwrite(output_path + "/input_gray.jpg", input_img) 62 | # if h0>w0: 63 | # input_img = cv2.resize(input_img,(int(256*w0/h0),256)) 64 | # else: 65 | # input_img = cv2.resize(input_img,(256,int(256*h0/w0))) 66 | # (h0,w0) = input_img.shape 67 | 68 | if transTone == True: 69 | input_img = transferTone(input_img) 70 | 71 | now_ = np.uint8(np.ones((h0,w0)))*255 72 | step = 0 73 | if draw_new==True: 74 | time_start=time.time() 75 | stroke_sequence=[] 76 | stroke_temp={'angle':None, 'grayscale':None, 'row':None, 'begin':None, 'end':None} 77 | for dirs in range(direction): 78 | angle = -90+dirs*180/direction 79 | print('angle:', angle) 80 | stroke_temp['angle'] = angle 81 | img,_ = rotate(input_img, -angle) 82 | 83 | ############ Adjust Histogram ############ 84 | if CLAHE==True: 85 | img = HistogramEqualization(img) 86 | # cv2.imshow('HistogramEqualization', res) 87 | # cv2.waitKey(0) 88 | # cv2.imwrite(output_path + "/HistogramEqualization.png", res) 89 | print('HistogramEqualization done') 90 | 91 | ########### gredient ####### 92 | img_pad = cv2.copyMakeBorder(img, 2*period, 2*period, 2*period, 2*period, cv2.BORDER_REPLICATE) 93 | img_normal = cv2.normalize(img_pad.astype("float32"), None, 0.0, 1.0, cv2.NORM_MINMAX) 94 | 95 | x_der = cv2.Sobel(img_normal, cv2.CV_32FC1, 1, 0, ksize=5) 96 | y_der = cv2.Sobel(img_normal, cv2.CV_32FC1, 0, 1, ksize=5) 97 | 98 | x_der = torch.from_numpy(x_der) + 1e-12 99 | y_der = torch.from_numpy(y_der) + 1e-12 100 | 101 | gradient_magnitude = torch.sqrt(x_der**2.0 + y_der**2.0) 102 | gradient_norm = gradient_magnitude/gradient_magnitude.max() 103 | 104 | ############ Quantization ############ 105 | ldr = LDR(img, n) 106 | # cv2.imshow('Quantization', ldr) 107 | # cv2.waitKey(0) 108 | 
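        # (Added explanatory comment, not in the original script.) LDR(img, n) appears
        # to quantize the rotated, CLAHE-equalized image into n discrete tone levels;
        # LDR_single_add(ldr, n, output_path) then writes one mask per tone level into
        # output/<name>/mask (mask0.png ... mask{n-1}.png). The tone loop below pairs
        # each mask{j}.png with the ETF direction mask dir_mask{dirs}.png, so darker
        # regions are presumably covered by more tone layers and therefore receive
        # more overlapping strokes.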
cv2.imwrite(output_path + "/Quantization.png", ldr) 109 | 110 | # LDR_single(ldr,n,output_path) # debug 111 | ############ Cumulate ############ 112 | LDR_single_add(ldr,n,output_path) 113 | print('Quantization done') 114 | 115 | 116 | # get tone 117 | (h,w) = ldr.shape 118 | canvas = Gassian((h+4*period,w+4*period), mean=250, var = 3) 119 | 120 | 121 | for j in range(n): 122 | # print('tone:',j) 123 | # distribution = ChooseDistribution(period=period,Grayscale=j*256/n) 124 | stroke_temp['grayscale'] = j*256/n 125 | mask = cv2.imread(output_path + '/mask/mask{}.png'.format(j),cv2.IMREAD_GRAYSCALE)/255 126 | dir_mask = cv2.imread(output_path + '/mask/dir_mask{}.png'.format(dirs),cv2.IMREAD_GRAYSCALE) 127 | # if angle==0: 128 | # dir_mask[::] = 255 129 | dir_mask,_ = rotate(dir_mask, -angle, pad_color=0) 130 | dir_mask[dir_mask<128]=0 131 | dir_mask[dir_mask>127]=1 132 | 133 | distensce = Gassian((1,int(h/period)+4), mean = period, var = 1) 134 | distensce = np.uint8(np.round(np.clip(distensce, period*0.8, period*1.25))) 135 | raw = -int(period/2) 136 | 137 | for i in np.squeeze(distensce).tolist(): 138 | if raw < h: 139 | y = raw + 2*period # y < h+2*period 140 | raw += i 141 | for interval in get_start_end(mask[y-2*period]*dir_mask[y-2*period]): 142 | 143 | begin = interval[0] 144 | end = interval[1] 145 | 146 | # length = end - begin 147 | 148 | begin -= 2*period 149 | end += 2*period 150 | 151 | length = end - begin 152 | stroke_temp['begin'] = begin 153 | stroke_temp['end'] = end 154 | stroke_temp['row'] = y-int(period/2) 155 | #print(gradient_norm[y,interval[0]+2*period:interval[1]+2*period]) 156 | stroke_temp['importance'] = (255-stroke_temp['grayscale'])*torch.sum(gradient_norm[y:y+period,interval[0]+2*period:interval[1]+2*period]).numpy() 157 | 158 | stroke_sequence.append(stroke_temp.copy()) 159 | # newline = Getline(distribution=distribution, length=length) 160 | # if length<1000 or begin == -2*period or end == w-1+2*period: 161 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] 162 | # m = np.minimum(temp, newline[:,:temp.shape[1]]) 163 | # canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] = m 164 | # else: 165 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] 166 | # m = np.minimum(temp, newline) 167 | # canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] = m 168 | 169 | 170 | # if step % Freq == 0: 171 | # if step > Freq: # not first time 172 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 173 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 174 | # (H,W) = now.shape 175 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 176 | # now = np.minimum(before,now) 177 | # else: # first time to save 178 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 179 | # (H,W) = now.shape 180 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 181 | 182 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(int(step/Freq)), now) 183 | # # cv2.imshow('step', canvas) 184 | # # cv2.waitKey(0) 185 | 186 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 187 | # (H,W) = now.shape 188 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 189 | # now = np.minimum(now,now_) 190 | # step += 1 191 | # cv2.imshow('step', now_) 192 | 
# cv2.waitKey(1) 193 | # now_ = now 194 | 195 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 196 | # (H,W) = now.shape 197 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 198 | # cv2.imwrite(output_path + "/pro/{}_{}.png".format(dirs,j), now) 199 | 200 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 201 | # (H,W) = now.shape 202 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 203 | # cv2.imwrite(output_path + "/{:.1f}.png".format(angle), now) 204 | # cv2.destroyAllWindows() 205 | 206 | time_end=time.time() 207 | print('total time',time_end-time_start) 208 | print('stoke number',len(stroke_sequence)) 209 | # cv2.imwrite(output_path + "/draw.png", now_) 210 | # cv2.imshow('draw', now_) 211 | # cv2.waitKey(0) 212 | 213 | if random_order == True: 214 | random.shuffle(stroke_sequence) 215 | 216 | 217 | if ETF_order == True: 218 | random.shuffle(stroke_sequence) 219 | quickSort(stroke_sequence,0,len(stroke_sequence)-1) 220 | result = Gassian((h0,w0), mean=250, var = 3) 221 | canvases = [] 222 | 223 | 224 | for dirs in range(direction): 225 | angle = -90+dirs*180/direction 226 | canvas,_ = rotate(result, -angle) 227 | # (h,w) = canvas.shape 228 | canvas = np.pad(canvas, pad_width=2*period, mode='constant', constant_values=(255,255)) 229 | canvases.append(canvas) 230 | 231 | 232 | 233 | for stroke_temp in stroke_sequence: 234 | angle = stroke_temp['angle'] 235 | dirs = int((angle+90)*direction/180) 236 | grayscale = stroke_temp['grayscale'] 237 | distribution = ChooseDistribution(period=period,Grayscale=grayscale) 238 | row = stroke_temp['row'] 239 | begin = stroke_temp['begin'] 240 | end = stroke_temp['end'] 241 | length = end - begin 242 | 243 | newline = Getline(distribution=distribution, length=length) 244 | 245 | canvas = canvases[dirs] 246 | 247 | if length<1000 or begin == -2*period or end == w-1+2*period: 248 | temp = canvas[row:row+2*period,2*period+begin:2*period+end] 249 | m = np.minimum(temp, newline[:,:temp.shape[1]]) 250 | canvas[row:row+2*period,2*period+begin:2*period+end] = m 251 | # else: 252 | # temp = canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] 253 | # m = np.minimum(temp, newline) 254 | # canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] = m 255 | 256 | now,_ = rotate(canvas[2*period:-2*period,2*period:-2*period], angle) 257 | (H,W) = now.shape 258 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 259 | result = np.minimum(now,result) 260 | if process_visible == True: 261 | # cv2.imshow('step', result) 262 | cv2.waitKey(1) 263 | 264 | step += 1 265 | if step % Freq == 0: 266 | # if step > Freq: # not first time 267 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 268 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 269 | # (H,W) = now.shape 270 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 271 | # now = np.minimum(before,now) 272 | # else: # first time to save 273 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 274 | # (H,W) = now.shape 275 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 276 | 277 | cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(int(step/Freq)), result) 278 | # cv2.imshow('step', canvas) 279 | # cv2.waitKey(0) 280 | if step % Freq != 0: 281 | step = int(step/Freq)+1 282 | 
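        # (Added explanatory comment, not in the original script.) Every Freq strokes the
        # loop above saves a snapshot of result to output/<name>/process/NNNN.jpg; this
        # final write flushes the last partial batch so the saved sequence ends with the
        # finished drawing (presumably the frames that sketch_movie.ipynb assembles into
        # the drawing-process animation).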
cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(step), result) 283 | 284 | cv2.destroyAllWindows() 285 | time_end=time.time() 286 | print('total time',time_end-time_start) 287 | print('stoke number',len(stroke_sequence)) 288 | cv2.imwrite(output_path + '/draw.jpg', result) 289 | 290 | 291 | 292 | ############ gen edge ########### 293 | 294 | # device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 295 | # pc = PencilDraw(device=device, gammaS=1) 296 | # pc(input_path) 297 | # edge = cv2.imread('output/Edge.png', cv2.IMREAD_GRAYSCALE) 298 | 299 | edge = genStroke(input_img,18) 300 | edge = np.power(edge, deepen) 301 | edge = np.uint8(edge*255) 302 | if edge_CLAHE==True: 303 | edge = HistogramEqualization(edge) 304 | 305 | cv2.imwrite(output_path + '/edge.jpg', edge) 306 | # cv2.imshow("edge",edge) 307 | 308 | 309 | ############# merge ############# 310 | edge = np.float32(edge) 311 | now_ = cv2.imread(output_path + "/draw.jpg", cv2.IMREAD_GRAYSCALE) 312 | result = res_cross= np.float32(now_) 313 | 314 | result[1:,1:] = np.uint8(edge[:-1,:-1] * res_cross[1:,1:]/255) 315 | result[0] = np.uint8(edge[0] * res_cross[0]/255) 316 | result[:,0] = np.uint8(edge[:,0] * res_cross[:,0]/255) 317 | result = edge*res_cross/255 318 | result=np.uint8(result) 319 | 320 | cv2.imwrite(output_path + '/result.jpg', result) 321 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(step+1), result) 322 | # cv2.imshow("result",result) 323 | 324 | 325 | # deblue 326 | deblue(result, output_path) 327 | 328 | # RGB 329 | img_rgb_original = cv2.imread(input_path, cv2.IMREAD_COLOR) 330 | cv2.imwrite(output_path + "/input.jpg", img_rgb_original) 331 | img_yuv = cv2.cvtColor(img_rgb_original, cv2.COLOR_BGR2YUV) 332 | img_yuv[:,:,0] = result 333 | img_rgb = cv2.cvtColor(img_yuv, cv2.COLOR_YUV2BGR) 334 | 335 | # cv2.imshow("RGB",img_rgb) 336 | cv2.waitKey(0) 337 | cv2.imwrite(output_path + "/result_RGB.jpg",img_rgb) 338 | -------------------------------------------------------------------------------- /process_order.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | import torch 4 | import torch.nn as nn 5 | import torch.nn.functional as F 6 | import random 7 | import time 8 | import os 9 | 10 | from LDR import * 11 | from tone import * 12 | from genStroke_origin import * 13 | from drawpatch import rotate 14 | from tools import * 15 | from ETF.edge_tangent_flow import * 16 | from deblue import deblue 17 | from quicksort import * 18 | 19 | # args 20 | input_path = './input/your file' 21 | output_path = './output' 22 | 23 | np.random.seed(1) 24 | n = 10 # Quantization order 25 | period = 5 # line period 26 | direction = 10 # num of dir 27 | Freq = 100 # save every(freq) lines drawn 28 | deepen = 1 # for edge 29 | transTone = False # for Tone8 30 | kernel_radius = 3 # for ETF 31 | iter_time = 15 # for ETF 32 | background_dir = None # for ETF 33 | CLAHE = True 34 | edge_CLAHE = True 35 | draw_new = True 36 | random_order = False 37 | ETF_order = True 38 | process_visible = True 39 | 40 | if __name__ == '__main__': 41 | 42 | file_name = os.path.basename(input_path) 43 | file_name = file_name.split('.')[0] 44 | print(file_name) 45 | output_path = output_path+"/"+file_name 46 | if not os.path.exists(output_path): 47 | os.makedirs(output_path) 48 | os.makedirs(output_path+"/mask") 49 | os.makedirs(output_path+"/process") 50 | 51 | ####### ETF ####### 52 | time_start=time.time() 53 | ETF_filter = ETF(input_path=input_path, 
output_path=output_path+'/mask',\ 54 | dir_num=direction, kernel_radius=kernel_radius, iter_time=iter_time, background_dir=background_dir) 55 | ETF_filter.forward() 56 | print('ETF done') 57 | 58 | 59 | input_img = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE) 60 | (h0,w0) = input_img.shape 61 | cv2.imwrite(output_path + "/input_gray.jpg", input_img) 62 | # if h0>w0: 63 | # input_img = cv2.resize(input_img,(int(256*w0/h0),256)) 64 | # else: 65 | # input_img = cv2.resize(input_img,(256,int(256*h0/w0))) 66 | # (h0,w0) = input_img.shape 67 | 68 | if transTone == True: 69 | input_img = transferTone(input_img) 70 | 71 | now_ = np.uint8(np.ones((h0,w0)))*255 72 | step = 0 73 | if draw_new==True: 74 | time_start=time.time() 75 | stroke_sequence=[] 76 | stroke_temp={'angle':None, 'grayscale':None, 'row':None, 'begin':None, 'end':None} 77 | for dirs in range(direction): 78 | angle = -90+dirs*180/direction 79 | print('angle:', angle) 80 | stroke_temp['angle'] = angle 81 | img,_ = rotate(input_img, -angle) 82 | 83 | ############ Adjust Histogram ############ 84 | if CLAHE==True: 85 | img = HistogramEqualization(img) 86 | # cv2.imshow('HistogramEqualization', res) 87 | # cv2.waitKey(0) 88 | # cv2.imwrite(output_path + "/HistogramEqualization.png", res) 89 | print('HistogramEqualization done') 90 | 91 | ########### gredient ####### 92 | img_pad = cv2.copyMakeBorder(img, 2*period, 2*period, 2*period, 2*period, cv2.BORDER_REPLICATE) 93 | img_normal = cv2.normalize(img_pad.astype("float32"), None, 0.0, 1.0, cv2.NORM_MINMAX) 94 | 95 | x_der = cv2.Sobel(img_normal, cv2.CV_32FC1, 1, 0, ksize=5) 96 | y_der = cv2.Sobel(img_normal, cv2.CV_32FC1, 0, 1, ksize=5) 97 | 98 | x_der = torch.from_numpy(x_der) + 1e-12 99 | y_der = torch.from_numpy(y_der) + 1e-12 100 | 101 | gradient_magnitude = torch.sqrt(x_der**2.0 + y_der**2.0) 102 | gradient_norm = gradient_magnitude/gradient_magnitude.max() 103 | 104 | ############ Quantization ############ 105 | ldr = LDR(img, n) 106 | # cv2.imshow('Quantization', ldr) 107 | # cv2.waitKey(0) 108 | cv2.imwrite(output_path + "/Quantization.png", ldr) 109 | 110 | # LDR_single(ldr,n,output_path) # debug 111 | ############ Cumulate ############ 112 | LDR_single_add(ldr,n,output_path) 113 | print('Quantization done') 114 | 115 | 116 | # get tone 117 | (h,w) = ldr.shape 118 | canvas = Gassian((h+4*period,w+4*period), mean=250, var = 3) 119 | 120 | 121 | for j in range(n): 122 | # print('tone:',j) 123 | # distribution = ChooseDistribution(period=period,Grayscale=j*256/n) 124 | stroke_temp['grayscale'] = j*256/n 125 | mask = cv2.imread(output_path + '/mask/mask{}.png'.format(j),cv2.IMREAD_GRAYSCALE)/255 126 | dir_mask = cv2.imread(output_path + '/mask/dir_mask{}.png'.format(dirs),cv2.IMREAD_GRAYSCALE) 127 | # if angle==0: 128 | # dir_mask[::] = 255 129 | dir_mask,_ = rotate(dir_mask, -angle, pad_color=0) 130 | dir_mask[dir_mask<128]=0 131 | dir_mask[dir_mask>127]=1 132 | 133 | distensce = Gassian((1,int(h/period)+4), mean = period, var = 1) 134 | distensce = np.uint8(np.round(np.clip(distensce, period*0.8, period*1.25))) 135 | raw = -int(period/2) 136 | 137 | for i in np.squeeze(distensce).tolist(): 138 | if raw < h: 139 | y = raw + 2*period # y < h+2*period 140 | raw += i 141 | for interval in get_start_end(mask[y-2*period]*dir_mask[y-2*period]): 142 | 143 | begin = interval[0] 144 | end = interval[1] 145 | 146 | # length = end - begin 147 | 148 | begin -= 2*period 149 | end += 2*period 150 | 151 | length = end - begin 152 | stroke_temp['begin'] = begin 153 | stroke_temp['end'] = end 154 | 
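                            # (Added explanatory comment, not in the original script.) Each stroke
                            # records its tone layer, direction, row, and span; 'importance' below is
                            # (255 - grayscale) * the summed normalized gradient magnitude over the
                            # strip the stroke covers, so strokes from darker tone layers and from
                            # edge-rich areas score higher. With ETF_order enabled, quickSort()
                            # presumably orders stroke_sequence by this key so the most salient
                            # strokes are drawn first.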
stroke_temp['row'] = y-int(period/2) 155 | #print(gradient_norm[y,interval[0]+2*period:interval[1]+2*period]) 156 | stroke_temp['importance'] = (255-stroke_temp['grayscale'])*torch.sum(gradient_norm[y:y+period,interval[0]+2*period:interval[1]+2*period]).numpy() 157 | 158 | stroke_sequence.append(stroke_temp.copy()) 159 | # newline = Getline(distribution=distribution, length=length) 160 | # if length<1000 or begin == -2*period or end == w-1+2*period: 161 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] 162 | # m = np.minimum(temp, newline[:,:temp.shape[1]]) 163 | # canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] = m 164 | # else: 165 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] 166 | # m = np.minimum(temp, newline) 167 | # canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] = m 168 | 169 | 170 | # if step % Freq == 0: 171 | # if step > Freq: # not first time 172 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 173 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 174 | # (H,W) = now.shape 175 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 176 | # now = np.minimum(before,now) 177 | # else: # first time to save 178 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 179 | # (H,W) = now.shape 180 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 181 | 182 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(int(step/Freq)), now) 183 | # # cv2.imshow('step', canvas) 184 | # # cv2.waitKey(0) 185 | 186 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 187 | # (H,W) = now.shape 188 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 189 | # now = np.minimum(now,now_) 190 | # step += 1 191 | # cv2.imshow('step', now_) 192 | # cv2.waitKey(1) 193 | # now_ = now 194 | 195 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 196 | # (H,W) = now.shape 197 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 198 | # cv2.imwrite(output_path + "/pro/{}_{}.png".format(dirs,j), now) 199 | 200 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 201 | # (H,W) = now.shape 202 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 203 | # cv2.imwrite(output_path + "/{:.1f}.png".format(angle), now) 204 | # cv2.destroyAllWindows() 205 | 206 | time_end=time.time() 207 | print('total time',time_end-time_start) 208 | print('stoke number',len(stroke_sequence)) 209 | # cv2.imwrite(output_path + "/draw.png", now_) 210 | # cv2.imshow('draw', now_) 211 | # cv2.waitKey(0) 212 | 213 | if random_order == True: 214 | random.shuffle(stroke_sequence) 215 | 216 | 217 | if ETF_order == True: 218 | random.shuffle(stroke_sequence) 219 | quickSort(stroke_sequence,0,len(stroke_sequence)-1) 220 | result = Gassian((h0,w0), mean=250, var = 3) 221 | canvases = [] 222 | 223 | 224 | for dirs in range(direction): 225 | angle = -90+dirs*180/direction 226 | canvas,_ = rotate(result, -angle) 227 | # (h,w) = canvas.shape 228 | canvas = np.pad(canvas, pad_width=2*period, mode='constant', constant_values=(255,255)) 229 | canvases.append(canvas) 230 | 231 | 232 | 233 | for stroke_temp in stroke_sequence: 234 | angle = stroke_temp['angle'] 235 | dirs = 
int((angle+90)*direction/180) 236 | grayscale = stroke_temp['grayscale'] 237 | distribution = ChooseDistribution(period=period,Grayscale=grayscale) 238 | row = stroke_temp['row'] 239 | begin = stroke_temp['begin'] 240 | end = stroke_temp['end'] 241 | length = end - begin 242 | 243 | newline = Getline(distribution=distribution, length=length) 244 | 245 | canvas = canvases[dirs] 246 | 247 | if length<1000 or begin == -2*period or end == w-1+2*period: 248 | temp = canvas[row:row+2*period,2*period+begin:2*period+end] 249 | m = np.minimum(temp, newline[:,:temp.shape[1]]) 250 | canvas[row:row+2*period,2*period+begin:2*period+end] = m 251 | # else: 252 | # temp = canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] 253 | # m = np.minimum(temp, newline) 254 | # canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] = m 255 | 256 | now,_ = rotate(canvas[2*period:-2*period,2*period:-2*period], angle) 257 | (H,W) = now.shape 258 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 259 | result = np.minimum(now,result) 260 | if process_visible == True: 261 | # cv2.imshow('step', result) 262 | cv2.waitKey(1) 263 | 264 | step += 1 265 | if step % Freq == 0: 266 | # if step > Freq: # not first time 267 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 268 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 269 | # (H,W) = now.shape 270 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 271 | # now = np.minimum(before,now) 272 | # else: # first time to save 273 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 274 | # (H,W) = now.shape 275 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 276 | 277 | cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(int(step/Freq)), result) 278 | # cv2.imshow('step', canvas) 279 | # cv2.waitKey(0) 280 | if step % Freq != 0: 281 | step = int(step/Freq)+1 282 | cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(step), result) 283 | 284 | cv2.destroyAllWindows() 285 | time_end=time.time() 286 | print('total time',time_end-time_start) 287 | print('stoke number',len(stroke_sequence)) 288 | cv2.imwrite(output_path + '/draw.jpg', result) 289 | 290 | 291 | 292 | ############ gen edge ########### 293 | 294 | # device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 295 | # pc = PencilDraw(device=device, gammaS=1) 296 | # pc(input_path) 297 | # edge = cv2.imread('output/Edge.png', cv2.IMREAD_GRAYSCALE) 298 | 299 | edge = genStroke(input_img,18) 300 | edge = np.power(edge, deepen) 301 | edge = np.uint8(edge*255) 302 | if edge_CLAHE==True: 303 | edge = HistogramEqualization(edge) 304 | 305 | cv2.imwrite(output_path + '/edge.jpg', edge) 306 | # cv2.imshow("edge",edge) 307 | 308 | 309 | ############# merge ############# 310 | edge = np.float32(edge) 311 | now_ = cv2.imread(output_path + "/draw.jpg", cv2.IMREAD_GRAYSCALE) 312 | result = res_cross= np.float32(now_) 313 | 314 | result[1:,1:] = np.uint8(edge[:-1,:-1] * res_cross[1:,1:]/255) 315 | result[0] = np.uint8(edge[0] * res_cross[0]/255) 316 | result[:,0] = np.uint8(edge[:,0] * res_cross[:,0]/255) 317 | result = edge*res_cross/255 318 | result=np.uint8(result) 319 | 320 | cv2.imwrite(output_path + '/result.jpg', result) 321 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(step+1), result) 322 | # cv2.imshow("result",result) 323 | 324 | 325 | # deblue 326 | deblue(result, 
output_path) 327 | 328 | # RGB 329 | img_rgb_original = cv2.imread(input_path, cv2.IMREAD_COLOR) 330 | cv2.imwrite(output_path + "/input.jpg", img_rgb_original) 331 | img_yuv = cv2.cvtColor(img_rgb_original, cv2.COLOR_BGR2YUV) 332 | img_yuv[:,:,0] = result 333 | img_rgb = cv2.cvtColor(img_yuv, cv2.COLOR_YUV2BGR) 334 | 335 | # cv2.imshow("RGB",img_rgb) 336 | cv2.waitKey(0) 337 | cv2.imwrite(output_path + "/result_RGB.jpg",img_rgb) 338 | -------------------------------------------------------------------------------- /cat.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | import torch 4 | import torch.nn as nn 5 | import torch.nn.functional as F 6 | import random 7 | import time 8 | import os 9 | 10 | from LDR import * 11 | from tone import * 12 | from genStroke_origin import * 13 | from drawpatch import rotate 14 | from tools import * 15 | from ETF.edge_tangent_flow import * 16 | from deblue import deblue 17 | from quicksort import * 18 | 19 | # args 20 | input_path = './input/cat_up.png' 21 | output_path = './output' 22 | 23 | np.random.seed(1) 24 | n = 6 # Quantization order 25 | period = 4 # line period 26 | direction = 10 # num of dir 27 | Freq = 100 # save every(freq) lines drawn 28 | deepen = 1 # for edge 29 | transTone = False # for Tone8 30 | kernel_radius = 3 # for ETF 31 | iter_time = 15 # for ETF 32 | background_dir = 45 # for ETF 33 | CLAHE = True 34 | edge_CLAHE = True 35 | draw_new = True 36 | random_order = False 37 | ETF_order = True 38 | process_visible = True 39 | 40 | if __name__ == '__main__': 41 | 42 | file_name = os.path.basename(input_path) 43 | file_name = file_name.split('.')[0] 44 | print(file_name) 45 | output_path = output_path+"/"+file_name 46 | if not os.path.exists(output_path): 47 | os.makedirs(output_path) 48 | os.makedirs(output_path+"/mask") 49 | os.makedirs(output_path+"/process") 50 | 51 | ####### ETF ####### 52 | time_start=time.time() 53 | ETF_filter = ETF(input_path=input_path, output_path=output_path+'/mask',\ 54 | dir_num=direction, kernel_radius=kernel_radius, iter_time=iter_time, background_dir=background_dir) 55 | ETF_filter.forward() 56 | print('ETF done') 57 | 58 | 59 | input_img = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE) 60 | (h0,w0) = input_img.shape 61 | cv2.imwrite(output_path + "/input_gray.jpg", input_img) 62 | # if h0>w0: 63 | # input_img = cv2.resize(input_img,(int(256*w0/h0),256)) 64 | # else: 65 | # input_img = cv2.resize(input_img,(256,int(256*h0/w0))) 66 | # (h0,w0) = input_img.shape 67 | 68 | if transTone == True: 69 | input_img = transferTone(input_img) 70 | 71 | now_ = np.uint8(np.ones((h0,w0)))*255 72 | step = 0 73 | if draw_new==True: 74 | time_start=time.time() 75 | stroke_sequence=[] 76 | stroke_temp={'angle':None, 'grayscale':None, 'row':None, 'begin':None, 'end':None} 77 | for dirs in range(direction): 78 | angle = -90+dirs*180/direction 79 | print('angle:', angle) 80 | stroke_temp['angle'] = angle 81 | img,_ = rotate(input_img, -angle) 82 | 83 | ############ Adjust Histogram ############ 84 | if CLAHE==True: 85 | img = HistogramEqualization(img) 86 | # cv2.imshow('HistogramEqualization', res) 87 | # cv2.waitKey(0) 88 | # cv2.imwrite(output_path + "/HistogramEqualization.png", res) 89 | print('HistogramEqualization done') 90 | 91 | ########### gredient ####### 92 | img_pad = cv2.copyMakeBorder(img, 2*period, 2*period, 2*period, 2*period, cv2.BORDER_REPLICATE) 93 | img_normal = cv2.normalize(img_pad.astype("float32"), None, 0.0, 1.0, 
cv2.NORM_MINMAX) 94 | 95 | x_der = cv2.Sobel(img_normal, cv2.CV_32FC1, 1, 0, ksize=5) 96 | y_der = cv2.Sobel(img_normal, cv2.CV_32FC1, 0, 1, ksize=5) 97 | 98 | x_der = torch.from_numpy(x_der) + 1e-12 99 | y_der = torch.from_numpy(y_der) + 1e-12 100 | 101 | gradient_magnitude = torch.sqrt(x_der**2.0 + y_der**2.0) 102 | gradient_norm = gradient_magnitude/gradient_magnitude.max() 103 | 104 | ############ Quantization ############ 105 | ldr = LDR(img, n) 106 | # cv2.imshow('Quantization', ldr) 107 | # cv2.waitKey(0) 108 | cv2.imwrite(output_path + "/Quantization.png", ldr) 109 | 110 | # LDR_single(ldr,n,output_path) # debug 111 | ############ Cumulate ############ 112 | LDR_single_add(ldr,n,output_path) 113 | print('Quantization done') 114 | 115 | 116 | # get tone 117 | (h,w) = ldr.shape 118 | canvas = Gassian((h+4*period,w+4*period), mean=250, var = 3) 119 | 120 | 121 | for j in range(n): 122 | # print('tone:',j) 123 | # distribution = ChooseDistribution(period=period,Grayscale=j*256/n) 124 | stroke_temp['grayscale'] = j*256/n 125 | mask = cv2.imread(output_path + '/mask/mask{}.png'.format(j),cv2.IMREAD_GRAYSCALE)/255 126 | dir_mask = cv2.imread(output_path + '/mask/dir_mask{}.png'.format(dirs),cv2.IMREAD_GRAYSCALE) 127 | # if angle==0: 128 | # dir_mask[::] = 255 129 | dir_mask,_ = rotate(dir_mask, -angle, pad_color=0) 130 | dir_mask[dir_mask<128]=0 131 | dir_mask[dir_mask>127]=1 132 | 133 | distensce = Gassian((1,int(h/period)+4), mean = period, var = 1) 134 | distensce = np.uint8(np.round(np.clip(distensce, period*0.8, period*1.25))) 135 | raw = -int(period/2) 136 | 137 | for i in np.squeeze(distensce).tolist(): 138 | if raw < h: 139 | y = raw + 2*period # y < h+2*period 140 | raw += i 141 | for interval in get_start_end(mask[y-2*period]*dir_mask[y-2*period]): 142 | 143 | begin = interval[0] 144 | end = interval[1] 145 | 146 | # length = end - begin 147 | 148 | begin -= 2*period 149 | end += 2*period 150 | 151 | length = end - begin 152 | stroke_temp['begin'] = begin 153 | stroke_temp['end'] = end 154 | stroke_temp['row'] = y-int(period/2) 155 | #print(gradient_norm[y,interval[0]+2*period:interval[1]+2*period]) 156 | stroke_temp['importance'] = (255-stroke_temp['grayscale'])*torch.sum(gradient_norm[y:y+period,interval[0]+2*period:interval[1]+2*period]).numpy() 157 | 158 | stroke_sequence.append(stroke_temp.copy()) 159 | # newline = Getline(distribution=distribution, length=length) 160 | # if length<1000 or begin == -2*period or end == w-1+2*period: 161 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] 162 | # m = np.minimum(temp, newline[:,:temp.shape[1]]) 163 | # canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin:2*period+end] = m 164 | # else: 165 | # temp = canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] 166 | # m = np.minimum(temp, newline) 167 | # canvas[y-int(period/2):y-int(period/2)+2*period,2*period+begin-2*period:2*period+end+2*period] = m 168 | 169 | 170 | # if step % Freq == 0: 171 | # if step > Freq: # not first time 172 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 173 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 174 | # (H,W) = now.shape 175 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 176 | # now = np.minimum(before,now) 177 | # else: # first time to save 178 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 179 | # (H,W) = 
now.shape 180 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 181 | 182 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(int(step/Freq)), now) 183 | # # cv2.imshow('step', canvas) 184 | # # cv2.waitKey(0) 185 | 186 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 187 | # (H,W) = now.shape 188 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 189 | # now = np.minimum(now,now_) 190 | # step += 1 191 | # cv2.imshow('step', now_) 192 | # cv2.waitKey(1) 193 | # now_ = now 194 | 195 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 196 | # (H,W) = now.shape 197 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 198 | # cv2.imwrite(output_path + "/pro/{}_{}.png".format(dirs,j), now) 199 | 200 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 201 | # (H,W) = now.shape 202 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 203 | # cv2.imwrite(output_path + "/{:.1f}.png".format(angle), now) 204 | # cv2.destroyAllWindows() 205 | 206 | time_end=time.time() 207 | print('total time',time_end-time_start) 208 | print('stoke number',len(stroke_sequence)) 209 | # cv2.imwrite(output_path + "/draw.png", now_) 210 | # cv2.imshow('draw', now_) 211 | # cv2.waitKey(0) 212 | 213 | if random_order == True: 214 | random.shuffle(stroke_sequence) 215 | 216 | 217 | if ETF_order == True: 218 | random.shuffle(stroke_sequence) 219 | quickSort(stroke_sequence,0,len(stroke_sequence)-1) 220 | result = Gassian((h0,w0), mean=250, var = 3) 221 | canvases = [] 222 | 223 | 224 | for dirs in range(direction): 225 | angle = -90+dirs*180/direction 226 | canvas,_ = rotate(result, -angle) 227 | # (h,w) = canvas.shape 228 | canvas = np.pad(canvas, pad_width=2*period, mode='constant', constant_values=(255,255)) 229 | canvases.append(canvas) 230 | 231 | 232 | 233 | for stroke_temp in stroke_sequence: 234 | angle = stroke_temp['angle'] 235 | dirs = int((angle+90)*direction/180) 236 | grayscale = stroke_temp['grayscale'] 237 | distribution = ChooseDistribution(period=period,Grayscale=grayscale) 238 | row = stroke_temp['row'] 239 | begin = stroke_temp['begin'] 240 | end = stroke_temp['end'] 241 | length = end - begin 242 | 243 | newline = Getline(distribution=distribution, length=length) 244 | 245 | canvas = canvases[dirs] 246 | 247 | if length<1000 or begin == -2*period or end == w-1+2*period: 248 | temp = canvas[row:row+2*period,2*period+begin:2*period+end] 249 | m = np.minimum(temp, newline[:,:temp.shape[1]]) 250 | canvas[row:row+2*period,2*period+begin:2*period+end] = m 251 | # else: 252 | # temp = canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] 253 | # m = np.minimum(temp, newline) 254 | # canvas[row:row+2*period,2*period+begin-2*period:2*period+end+2*period] = m 255 | 256 | now,_ = rotate(canvas[2*period:-2*period,2*period:-2*period], angle) 257 | (H,W) = now.shape 258 | now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 259 | result = np.minimum(now,result) 260 | if process_visible == True: 261 | # cv2.imshow('step', result) 262 | cv2.waitKey(1) 263 | 264 | step += 1 265 | if step % Freq == 0: 266 | # if step > Freq: # not first time 267 | # before = cv2.imread(output_path + "/process/{0:04d}.png".format(int(step/Freq)-1), cv2.IMREAD_GRAYSCALE) 268 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 269 | # (H,W) = now.shape 270 | # now = 
now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 271 | # now = np.minimum(before,now) 272 | # else: # first time to save 273 | # now,_ = rotate(canvas[2*period:2*period+h,2*period:2*period+w], angle) 274 | # (H,W) = now.shape 275 | # now = now[int((H-h0)/2):int((H-h0)/2)+h0, int((W-w0)/2):int((W-w0)/2)+w0] 276 | 277 | cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(int(step/Freq)), result) 278 | # cv2.imshow('step', canvas) 279 | # cv2.waitKey(0) 280 | if step % Freq != 0: 281 | step = int(step/Freq)+1 282 | cv2.imwrite(output_path + "/process/{0:04d}.jpg".format(step), result) 283 | 284 | cv2.destroyAllWindows() 285 | time_end=time.time() 286 | print('total time',time_end-time_start) 287 | print('stoke number',len(stroke_sequence)) 288 | cv2.imwrite(output_path + '/draw.jpg', result) 289 | 290 | 291 | 292 | ############ gen edge ########### 293 | 294 | # device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 295 | # pc = PencilDraw(device=device, gammaS=1) 296 | # pc(input_path) 297 | # edge = cv2.imread('output/Edge.png', cv2.IMREAD_GRAYSCALE) 298 | 299 | edge = genStroke(input_img,18) 300 | edge = np.power(edge, deepen) 301 | edge = np.uint8(edge*255) 302 | if edge_CLAHE==True: 303 | edge = HistogramEqualization(edge) 304 | 305 | cv2.imwrite(output_path + '/edge.jpg', edge) 306 | # cv2.imshow("edge",edge) 307 | 308 | 309 | ############# merge ############# 310 | edge = np.float32(edge) 311 | now_ = cv2.imread(output_path + "/draw.jpg", cv2.IMREAD_GRAYSCALE) 312 | result = res_cross= np.float32(now_) 313 | 314 | result[1:,1:] = np.uint8(edge[:-1,:-1] * res_cross[1:,1:]/255) 315 | result[0] = np.uint8(edge[0] * res_cross[0]/255) 316 | result[:,0] = np.uint8(edge[:,0] * res_cross[:,0]/255) 317 | result = edge*res_cross/255 318 | result=np.uint8(result) 319 | 320 | cv2.imwrite(output_path + '/result.jpg', result) 321 | # cv2.imwrite(output_path + "/process/{0:04d}.png".format(step+1), result) 322 | # cv2.imshow("result",result) 323 | 324 | 325 | # deblue 326 | deblue(result, output_path) 327 | 328 | # RGB 329 | img_rgb_original = cv2.imread(input_path, cv2.IMREAD_COLOR) 330 | cv2.imwrite(output_path + "/input.jpg", img_rgb_original) 331 | img_yuv = cv2.cvtColor(img_rgb_original, cv2.COLOR_BGR2YUV) 332 | img_yuv[:,:,0] = result 333 | img_rgb = cv2.cvtColor(img_yuv, cv2.COLOR_YUV2BGR) 334 | 335 | # cv2.imshow("RGB",img_rgb) 336 | cv2.waitKey(0) 337 | cv2.imwrite(output_path + "/result_RGB.jpg",img_rgb) 338 | 339 | 340 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 2, June 1991 3 | 4 | Copyright (C) 1989, 1991 Free Software Foundation, Inc., 5 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA 6 | Everyone is permitted to copy and distribute verbatim copies 7 | of this license document, but changing it is not allowed. 8 | 9 | Preamble 10 | 11 | The licenses for most software are designed to take away your 12 | freedom to share and change it. By contrast, the GNU General Public 13 | License is intended to guarantee your freedom to share and change free 14 | software--to make sure the software is free for all its users. This 15 | General Public License applies to most of the Free Software 16 | Foundation's software and to any other program whose authors commit to 17 | using it. 
(Some other Free Software Foundation software is covered by 18 | the GNU Lesser General Public License instead.) You can apply it to 19 | your programs, too. 20 | 21 | When we speak of free software, we are referring to freedom, not 22 | price. Our General Public Licenses are designed to make sure that you 23 | have the freedom to distribute copies of free software (and charge for 24 | this service if you wish), that you receive source code or can get it 25 | if you want it, that you can change the software or use pieces of it 26 | in new free programs; and that you know you can do these things. 27 | 28 | To protect your rights, we need to make restrictions that forbid 29 | anyone to deny you these rights or to ask you to surrender the rights. 30 | These restrictions translate to certain responsibilities for you if you 31 | distribute copies of the software, or if you modify it. 32 | 33 | For example, if you distribute copies of such a program, whether 34 | gratis or for a fee, you must give the recipients all the rights that 35 | you have. You must make sure that they, too, receive or can get the 36 | source code. And you must show them these terms so they know their 37 | rights. 38 | 39 | We protect your rights with two steps: (1) copyright the software, and 40 | (2) offer you this license which gives you legal permission to copy, 41 | distribute and/or modify the software. 42 | 43 | Also, for each author's protection and ours, we want to make certain 44 | that everyone understands that there is no warranty for this free 45 | software. If the software is modified by someone else and passed on, we 46 | want its recipients to know that what they have is not the original, so 47 | that any problems introduced by others will not reflect on the original 48 | authors' reputations. 49 | 50 | Finally, any free program is threatened constantly by software 51 | patents. We wish to avoid the danger that redistributors of a free 52 | program will individually obtain patent licenses, in effect making the 53 | program proprietary. To prevent this, we have made it clear that any 54 | patent must be licensed for everyone's free use or not licensed at all. 55 | 56 | The precise terms and conditions for copying, distribution and 57 | modification follow. 58 | 59 | GNU GENERAL PUBLIC LICENSE 60 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 61 | 62 | 0. This License applies to any program or other work which contains 63 | a notice placed by the copyright holder saying it may be distributed 64 | under the terms of this General Public License. The "Program", below, 65 | refers to any such program or work, and a "work based on the Program" 66 | means either the Program or any derivative work under copyright law: 67 | that is to say, a work containing the Program or a portion of it, 68 | either verbatim or with modifications and/or translated into another 69 | language. (Hereinafter, translation is included without limitation in 70 | the term "modification".) Each licensee is addressed as "you". 71 | 72 | Activities other than copying, distribution and modification are not 73 | covered by this License; they are outside its scope. The act of 74 | running the Program is not restricted, and the output from the Program 75 | is covered only if its contents constitute a work based on the 76 | Program (independent of having been made by running the Program). 77 | Whether that is true depends on what the Program does. 78 | 79 | 1. 
You may copy and distribute verbatim copies of the Program's 80 | source code as you receive it, in any medium, provided that you 81 | conspicuously and appropriately publish on each copy an appropriate 82 | copyright notice and disclaimer of warranty; keep intact all the 83 | notices that refer to this License and to the absence of any warranty; 84 | and give any other recipients of the Program a copy of this License 85 | along with the Program. 86 | 87 | You may charge a fee for the physical act of transferring a copy, and 88 | you may at your option offer warranty protection in exchange for a fee. 89 | 90 | 2. You may modify your copy or copies of the Program or any portion 91 | of it, thus forming a work based on the Program, and copy and 92 | distribute such modifications or work under the terms of Section 1 93 | above, provided that you also meet all of these conditions: 94 | 95 | a) You must cause the modified files to carry prominent notices 96 | stating that you changed the files and the date of any change. 97 | 98 | b) You must cause any work that you distribute or publish, that in 99 | whole or in part contains or is derived from the Program or any 100 | part thereof, to be licensed as a whole at no charge to all third 101 | parties under the terms of this License. 102 | 103 | c) If the modified program normally reads commands interactively 104 | when run, you must cause it, when started running for such 105 | interactive use in the most ordinary way, to print or display an 106 | announcement including an appropriate copyright notice and a 107 | notice that there is no warranty (or else, saying that you provide 108 | a warranty) and that users may redistribute the program under 109 | these conditions, and telling the user how to view a copy of this 110 | License. (Exception: if the Program itself is interactive but 111 | does not normally print such an announcement, your work based on 112 | the Program is not required to print an announcement.) 113 | 114 | These requirements apply to the modified work as a whole. If 115 | identifiable sections of that work are not derived from the Program, 116 | and can be reasonably considered independent and separate works in 117 | themselves, then this License, and its terms, do not apply to those 118 | sections when you distribute them as separate works. But when you 119 | distribute the same sections as part of a whole which is a work based 120 | on the Program, the distribution of the whole must be on the terms of 121 | this License, whose permissions for other licensees extend to the 122 | entire whole, and thus to each and every part regardless of who wrote it. 123 | 124 | Thus, it is not the intent of this section to claim rights or contest 125 | your rights to work written entirely by you; rather, the intent is to 126 | exercise the right to control the distribution of derivative or 127 | collective works based on the Program. 128 | 129 | In addition, mere aggregation of another work not based on the Program 130 | with the Program (or with a work based on the Program) on a volume of 131 | a storage or distribution medium does not bring the other work under 132 | the scope of this License. 133 | 134 | 3. 
You may copy and distribute the Program (or a work based on it, 135 | under Section 2) in object code or executable form under the terms of 136 | Sections 1 and 2 above provided that you also do one of the following: 137 | 138 | a) Accompany it with the complete corresponding machine-readable 139 | source code, which must be distributed under the terms of Sections 140 | 1 and 2 above on a medium customarily used for software interchange; or, 141 | 142 | b) Accompany it with a written offer, valid for at least three 143 | years, to give any third party, for a charge no more than your 144 | cost of physically performing source distribution, a complete 145 | machine-readable copy of the corresponding source code, to be 146 | distributed under the terms of Sections 1 and 2 above on a medium 147 | customarily used for software interchange; or, 148 | 149 | c) Accompany it with the information you received as to the offer 150 | to distribute corresponding source code. (This alternative is 151 | allowed only for noncommercial distribution and only if you 152 | received the program in object code or executable form with such 153 | an offer, in accord with Subsection b above.) 154 | 155 | The source code for a work means the preferred form of the work for 156 | making modifications to it. For an executable work, complete source 157 | code means all the source code for all modules it contains, plus any 158 | associated interface definition files, plus the scripts used to 159 | control compilation and installation of the executable. However, as a 160 | special exception, the source code distributed need not include 161 | anything that is normally distributed (in either source or binary 162 | form) with the major components (compiler, kernel, and so on) of the 163 | operating system on which the executable runs, unless that component 164 | itself accompanies the executable. 165 | 166 | If distribution of executable or object code is made by offering 167 | access to copy from a designated place, then offering equivalent 168 | access to copy the source code from the same place counts as 169 | distribution of the source code, even though third parties are not 170 | compelled to copy the source along with the object code. 171 | 172 | 4. You may not copy, modify, sublicense, or distribute the Program 173 | except as expressly provided under this License. Any attempt 174 | otherwise to copy, modify, sublicense or distribute the Program is 175 | void, and will automatically terminate your rights under this License. 176 | However, parties who have received copies, or rights, from you under 177 | this License will not have their licenses terminated so long as such 178 | parties remain in full compliance. 179 | 180 | 5. You are not required to accept this License, since you have not 181 | signed it. However, nothing else grants you permission to modify or 182 | distribute the Program or its derivative works. These actions are 183 | prohibited by law if you do not accept this License. Therefore, by 184 | modifying or distributing the Program (or any work based on the 185 | Program), you indicate your acceptance of this License to do so, and 186 | all its terms and conditions for copying, distributing or modifying 187 | the Program or works based on it. 188 | 189 | 6. Each time you redistribute the Program (or any work based on the 190 | Program), the recipient automatically receives a license from the 191 | original licensor to copy, distribute or modify the Program subject to 192 | these terms and conditions. 
You may not impose any further 193 | restrictions on the recipients' exercise of the rights granted herein. 194 | You are not responsible for enforcing compliance by third parties to 195 | this License. 196 | 197 | 7. If, as a consequence of a court judgment or allegation of patent 198 | infringement or for any other reason (not limited to patent issues), 199 | conditions are imposed on you (whether by court order, agreement or 200 | otherwise) that contradict the conditions of this License, they do not 201 | excuse you from the conditions of this License. If you cannot 202 | distribute so as to satisfy simultaneously your obligations under this 203 | License and any other pertinent obligations, then as a consequence you 204 | may not distribute the Program at all. For example, if a patent 205 | license would not permit royalty-free redistribution of the Program by 206 | all those who receive copies directly or indirectly through you, then 207 | the only way you could satisfy both it and this License would be to 208 | refrain entirely from distribution of the Program. 209 | 210 | If any portion of this section is held invalid or unenforceable under 211 | any particular circumstance, the balance of the section is intended to 212 | apply and the section as a whole is intended to apply in other 213 | circumstances. 214 | 215 | It is not the purpose of this section to induce you to infringe any 216 | patents or other property right claims or to contest validity of any 217 | such claims; this section has the sole purpose of protecting the 218 | integrity of the free software distribution system, which is 219 | implemented by public license practices. Many people have made 220 | generous contributions to the wide range of software distributed 221 | through that system in reliance on consistent application of that 222 | system; it is up to the author/donor to decide if he or she is willing 223 | to distribute software through any other system and a licensee cannot 224 | impose that choice. 225 | 226 | This section is intended to make thoroughly clear what is believed to 227 | be a consequence of the rest of this License. 228 | 229 | 8. If the distribution and/or use of the Program is restricted in 230 | certain countries either by patents or by copyrighted interfaces, the 231 | original copyright holder who places the Program under this License 232 | may add an explicit geographical distribution limitation excluding 233 | those countries, so that distribution is permitted only in or among 234 | countries not thus excluded. In such case, this License incorporates 235 | the limitation as if written in the body of this License. 236 | 237 | 9. The Free Software Foundation may publish revised and/or new versions 238 | of the General Public License from time to time. Such new versions will 239 | be similar in spirit to the present version, but may differ in detail to 240 | address new problems or concerns. 241 | 242 | Each version is given a distinguishing version number. If the Program 243 | specifies a version number of this License which applies to it and "any 244 | later version", you have the option of following the terms and conditions 245 | either of that version or of any later version published by the Free 246 | Software Foundation. If the Program does not specify a version number of 247 | this License, you may choose any version ever published by the Free Software 248 | Foundation. 249 | 250 | 10. 
If you wish to incorporate parts of the Program into other free 251 | programs whose distribution conditions are different, write to the author 252 | to ask for permission. For software which is copyrighted by the Free 253 | Software Foundation, write to the Free Software Foundation; we sometimes 254 | make exceptions for this. Our decision will be guided by the two goals 255 | of preserving the free status of all derivatives of our free software and 256 | of promoting the sharing and reuse of software generally. 257 | 258 | NO WARRANTY 259 | 260 | 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY 261 | FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN 262 | OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES 263 | PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED 264 | OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 265 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS 266 | TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE 267 | PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, 268 | REPAIR OR CORRECTION. 269 | 270 | 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 271 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR 272 | REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, 273 | INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING 274 | OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED 275 | TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY 276 | YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER 277 | PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE 278 | POSSIBILITY OF SUCH DAMAGES. 279 | 280 | END OF TERMS AND CONDITIONS 281 | 282 | How to Apply These Terms to Your New Programs 283 | 284 | If you develop a new program, and you want it to be of the greatest 285 | possible use to the public, the best way to achieve this is to make it 286 | free software which everyone can redistribute and change under these terms. 287 | 288 | To do so, attach the following notices to the program. It is safest 289 | to attach them to the start of each source file to most effectively 290 | convey the exclusion of warranty; and each file should have at least 291 | the "copyright" line and a pointer to where the full notice is found. 292 | 293 | 294 | Copyright (C) 295 | 296 | This program is free software; you can redistribute it and/or modify 297 | it under the terms of the GNU General Public License as published by 298 | the Free Software Foundation; either version 2 of the License, or 299 | (at your option) any later version. 300 | 301 | This program is distributed in the hope that it will be useful, 302 | but WITHOUT ANY WARRANTY; without even the implied warranty of 303 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 304 | GNU General Public License for more details. 305 | 306 | You should have received a copy of the GNU General Public License along 307 | with this program; if not, write to the Free Software Foundation, Inc., 308 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 309 | 310 | Also add information on how to contact you by electronic and paper mail. 
311 | 312 | If the program is interactive, make it output a short notice like this 313 | when it starts in an interactive mode: 314 | 315 | Gnomovision version 69, Copyright (C) year name of author 316 | Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 317 | This is free software, and you are welcome to redistribute it 318 | under certain conditions; type `show c' for details. 319 | 320 | The hypothetical commands `show w' and `show c' should show the appropriate 321 | parts of the General Public License. Of course, the commands you use may 322 | be called something other than `show w' and `show c'; they could even be 323 | mouse-clicks or menu items--whatever suits your program. 324 | 325 | You should also get your employer (if you work as a programmer) or your 326 | school, if any, to sign a "copyright disclaimer" for the program, if 327 | necessary. Here is a sample; alter the names: 328 | 329 | Yoyodyne, Inc., hereby disclaims all copyright interest in the program 330 | `Gnomovision' (which makes passes at compilers) written by James Hacker. 331 | 332 | <signature of Ty Coon>, 1 April 1989 333 | Ty Coon, President of Vice 334 | 335 | This General Public License does not permit incorporating your program into 336 | proprietary programs. If your program is a subroutine library, you may 337 | consider it more useful to permit linking proprietary applications with the 338 | library. If this is what you want to do, use the GNU Lesser General 339 | Public License instead of this License. 340 | --------------------------------------------------------------------------------