├── .directory
├── .gitignore
├── LICENSE
├── README.md
├── bash
│   ├── byr.sh
│   ├── convert.sh
│   └── girl-atlas.sh
├── else
│   ├── malicious.exe
│   └── malicious.txt
├── exe
│   ├── 5tps-v0.3.exe
│   ├── 5tps-v0.4.exe
│   └── hahaha.exe
├── python
│   ├── .directory
│   ├── 5tps.py
│   ├── agwg.py
│   ├── captcha-cl.py
│   ├── captcha_cl
│   │   ├── __init__.py
│   │   ├── __main__.py
│   │   ├── test1.gif
│   │   ├── test2.gif
│   │   ├── test3.gif
│   │   └── trainset.json.bz2
│   ├── coursera.py
│   ├── getphotos.py
│   ├── girl-atlas.py
│   ├── i2a.py
│   ├── image2css.py
│   ├── metagoofil.py
│   ├── missile.py
│   ├── renren.py
│   ├── tagcloud.py
│   ├── xiqu.py
│   └── yapmg.py
└── update.sh
/.directory:
--------------------------------------------------------------------------------
1 | [Dolphin]
2 | Timestamp=2013,7,2,10,33,49
3 | Version=3
4 |
5 | [Settings]
6 | HiddenFilesShown=true
7 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | update.sh
2 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (c) 2012-2013 Reverland
2 |
3 | Permission is hereby granted, free of charge, to any person obtaining
4 | a copy of this software and associated documentation files (the
5 | "Software"), to deal in the Software without restriction, including
6 | without limitation the rights to use, copy, modify, merge, publish,
7 | distribute, sublicense, and/or sell copies of the Software, and to
8 | permit persons to whom the Software is furnished to do so, subject to
9 | the following conditions:
10 |
11 | The above copyright notice and this permission notice shall be
12 | included in all copies or substantial portions of the Software.
13 |
14 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
15 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
16 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
17 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
18 | LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
19 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
20 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
21 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Collection of my scripts
2 |
3 | ---
4 | ## Python
5 |
6 | * renren.py
7 | - A script to visualize renren friendship
8 | - Dependencies: networkx matplotlib
9 | - Usage: edit `renren.py` for username and password
10 | - [Demo](http://reverland.org/python/2013/02/05/visualize-the-friendship-of-renren/)
11 | * tagcloud.py
12 |     - Visualize your command history as a tag cloud; it also works on other texts.
13 |     - Optional dependencies: pytagcloud
14 |     - Usage: `history > hist.txt && python tagcloud.py hist.txt`
15 | - [Demo](http://reverland.org/python/2013/01/28/visualize-your-shell-history/)
16 | * captcha.py
17 |     - Thanks to [xenon](http://github.com/xen0n), who modularized it.
18 |     - A script to crack captchas of `正方教务系统` (the ZFsoft educational administration system); a trainset is included.
19 |
20 | Test 95 items
21 | Right: 95
22 | Wrong: 0
23 | Success rate: 100
24 |
25 | - Dependencies: PIL
26 | - Usage: `python captcha.py file.gif`
27 | * getphotos.py
28 | - Download photos in Renren for specific user.
29 |     - Usage: provides several functions for downloading photos; see the docstrings for more.
30 | * yapmg.py
31 |     - Yet Another PhotoMosaic Generator written in Python. *SPECIAL*: supports the [chaos](http://www.fmedda.com/en/mosaic/chaos) style as well as the classic style.
32 | - Dependency: PIL
33 |     - Usage: provides several functions to manipulate photos and generate a photomosaic; see the source for more.
34 |     - Warning: *ALL* non-PNG images will be converted to PNG and the originals removed. Only jpeg/jpg/png are supported for now.
35 | - [Demo](http://reverland.org/python/2013/02/19/yet-another-photomosaic-generator/)
36 | * i2a.py
37 |     - Convert an image to ASCII art; matrix and ascii styles are supported.
38 | - Dependency: PIL
39 | - Usage: `python i2a.py filename fontsize scale style`
40 | - [Demo](http://reverland.org/python/2013/02/25/generate-ascii-images-like-the-matrix/)
41 | * missile.py
42 |     - Provides a class to facilitate missile flight simulation. (JUST FOR FUN!)
43 | - [Demo](http://reverland.org/python/2013/03/02/python/)
44 | * image2css.py
45 |     - Generate the CSS code to draw a given image.
46 | - Dependency: PIL
47 | - Usage: `python image2css.py file [ratio]`
48 | - [Demo](http://reverland.org/python/2013/03/07/image-to-css/)
49 | * girl-atlas.py
50 | - Download images from http://girl-atlas.com
51 |     - Usage: `python girl-atlas.py -h`. Only downloading by tag or album is supported for now.
52 | * coursera.py
53 |     - Download videos for courses that don't allow downloads.
54 | - Usage: Edit your email/password/proxy/video\_url/auth\_url in the source file
55 | * agwg.py[X]
56 | - Wait, I have a new idea...
57 |     - Ghost-writing generator for payloads: poorly written, meant to fool reverse engineers and anti-virus software... It will generate a new file.
58 | - Usage: `python agwg.py file.asm`
59 | * metagoofil.py
60 | - I just wanna reinvent the wheels...
61 | * 5tps.py
62 |     - Interactive download script for 我听评书网 (www.5tps.com); packaged binaries are under `exe/`.
63 |
64 | ## Bash
65 |
66 | * byr.sh
67 |     - Telnet to bbs.byr.cn and keep the session from being kicked off for idling.
68 | - Dependency: expect
69 |     - Usage: edit your username/password and run `expect byr.sh`
70 | * convert.sh[X]
71 |     - Recursively convert all SWF files in the current directory into MP3. **Warning: it will remove all SWF files and may damage your data; back up before you use it!**
72 | - Dependency: ffmpeg
73 |     - Usage: `sh convert.sh`
74 |
--------------------------------------------------------------------------------
/bash/byr.sh:
--------------------------------------------------------------------------------
1 | #! /usr/bin/expect
2 |
3 | spawn luit -encoding gbk telnet bbs.byr.cn
4 |
5 | expect ":" {
6 | send "username\n"
7 |
8 | send "password\n"
9 | }
10 | interact {
11 | timeout 30 {send "\014"}
12 | }
13 |
--------------------------------------------------------------------------------
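
For reference, a rough Python equivalent of byr.sh, assuming the third-party `pexpect` package and a `telnet` client are installed (the GBK handling that `luit` provides is left out). It logs in and then sends Ctrl-L every 30 seconds so the BBS does not drop the idle session; unlike the expect script it is not interactive, and the credentials are placeholders.

    import time
    import pexpect

    child = pexpect.spawn('telnet bbs.byr.cn')
    child.expect(':')             # wait for the login prompt
    child.sendline('username')    # placeholder, edit before use
    child.sendline('password')    # placeholder, edit before use

    while True:                   # keep-alive loop instead of `interact`
        time.sleep(30)
        child.send('\x0c')        # Ctrl-L, the same refresh byte byr.sh sends
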
/bash/convert.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | SAVEIFS=$IFS
3 | IFS=$(echo -en "\n\b")
4 |
5 | function swf_to_mp3_convert() {
6 | for file in `ls "$1"`
7 | do
8 | if [ -d "$1"/"$file" ]
9 | then
10 | swf_to_mp3_convert "$1""/""$file"
11 | else
12 | if [ ${file##*.} = 'swf' ]
13 | then
14 | echo "Converting " "$1""/""$file"
15 | ffmpeg -i "$1""/""$file" "$1""/"`basename "$file" .swf`".mp3"
16 | rm "$1"/"$file"
17 | fi
18 | fi
19 | done
20 | }
21 |
22 | swf_to_mp3_convert ./
23 | IFS=$SAVEIFS
24 |
--------------------------------------------------------------------------------
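
The same recursive SWF-to-MP3 walk, sketched in Python with `os.walk` under the assumption that ffmpeg is on PATH. Unlike convert.sh it does not delete the source files, so it is a safer way to test the conversion first.

    import os
    import subprocess

    def swf_to_mp3(root='.'):
        """Convert every .swf under root to an .mp3 next to it (keeps the .swf)."""
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name.lower().endswith('.swf'):
                    src = os.path.join(dirpath, name)
                    dst = os.path.splitext(src)[0] + '.mp3'
                    print('Converting %s' % src)
                    subprocess.call(['ffmpeg', '-i', src, dst])

    if __name__ == '__main__':
        swf_to_mp3('.')
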
/bash/girl-atlas.sh:
--------------------------------------------------------------------------------
1 | #! /bin/sh
2 | # a list of tags to download
3 | 
4 | export http_proxy="http://127.0.0.1:1998"
5 |
6 | #--------------------------------------------------
7 | # Chapter 1
8 | # Models
9 | #------------------------------------------------
10 |
11 | #----------
12 | # Konno Anna (今野杏南), Japanese model and TV personality, born June 15, 1989 in Kanagawa Prefecture; height 165cm, measurements B86(F)-W59-H83. She started out in 2006, while still in her third year of junior high, shooting advertisements and appearing in variety shows and stage plays under her real name 戸井田杏奈, and later re-emerged with a mature, fuller image once she had grown up.
13 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/526
14 | #----------
15 |
16 | #----------
17 | # 星名美津紀
18 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/722
19 | #----------
20 |
21 | #----------
22 | # Aizawa Rina (逢泽莉娜 / 逢沢りな), from Osaka Prefecture, Japan, born July 28, 1991; 163cm, measurements B78cm, W58cm, H80cm. A well-known Japanese model, sweet, lovely and charming.
23 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/6
24 | #----------
25 |
26 | #----------
27 | # 中村静香
28 | # A Japanese all-around entertainer and gravure model.
29 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/40
30 | #----------
31 |
32 | #----------
33 | # 渡辺麻友
34 | # A core member of AKB48, Japan's leading national girl idol group
35 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/57
36 | #----------
37 |
38 | #----------
39 | # Kashiwagi Yuki (柏木由纪, かしわぎ ゆき), born July 15, 1991, is a leading member of the Japanese national idol group AKB48, captain of Team B, and one of the three members of the AKB48 sub-unit フレンチ・キス (roughly "French Kiss"). She placed third in AKB48's fourth general election and belongs to the agency BISCUIT ENTERTAINMENT (under WATANABE ENTERTAINMENT). Height 165cm, measurements B75cm, W54cm, H81cm.
40 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/98
41 | #----------
42 |
43 | #----------
44 | # Komatsu Ayaka (小松彩夏), Japanese actress, singer and model, born July 29, 1986. Her debut and best-known work is the live-action Pretty Guardian Sailor Moon. She entered show business after modelling for CANDY magazine in 2003. Measurements B80 W58 H89.
45 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/64
46 | #----------
47 |
48 | #--------------------------------------------------
49 | # Chapter 2
50 | # Magazines
51 | #------------------------------------------------
52 |
53 | #----------
54 | # WPB-net: one of Japan's leading online gravure photo publishers, with high-quality photos and sweet, attractive models. Compared with other well-known Japanese gravure publishers such as YS Web, Sabra.net and Bomb.tv, it pays more attention to the aesthetics of the photography, and each issue carries more photos.
55 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/96
56 | #----------
57 |
58 | #----------
59 | # Graphis bills itself as the "Graphis NO.1 gravure web site" and holds an unshakable place among photo fans. Its sets pair attractive models with very aesthetic photography; many well-known actresses such as 愛田由, 古都光, 吉沢明歩, 櫻朱音, 神谷姬, 穗花 and 喜多村麻衣 have shot for Graphis. In terms of resources it is a thoroughly professional gravure site that few others can match. Its content falls into a few series: Graphis Gals, the main and most classic series, with fresh, pretty models and very aesthetic images; First Gravure (初脱ぎ娘), girls shooting nude photos for the first time; G-Motion, more explicit adult-rated video gravure without sex scenes; G-Cinema, the same as G-Motion; and Feti Style, a series built around legs, bust, hips and other body parts, close to artistic nude photography, where figure matters more than face.
60 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/154
61 | #----------
62 |
63 | #----------
64 | # ROSI is a well-known Chinese gravure series, with several categories such as stockings, uniforms, lingerie, leg shots and creative shoots; the photography is polished and shows off the models' charm. The one drawback is that the models are all anonymous.
65 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/166
66 | #----------
67 |
68 | #----------
69 | # Beautyleg is a Taiwanese e-magazine of leg-model photos and videos, built mainly around its officially released leg photo sets, plus photos taken at some local press events (the Beautyleg news photos); official videos are also released periodically. The series is famous for legs, so leg shape comes first; some models have plain faces, but overall they carry themselves well.
70 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/186
71 | #----------
72 |
73 | #----------
74 | # S-Cute, a well-known Japanese gravure photo site. As the name suggests, its models are cute-style girls. Image quality is high and close to the original (almost no software color processing); the photos have a plain, elegant look with a quiet, harmonious feel.
75 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/937
76 | #----------
77 |
78 | #----------
79 | # image.tv
80 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/28
81 | #----------
82 |
83 | #----------
84 | # x-city
85 | python girl-atlas.py -p http://127.0.0.1:1998 -t http://girl-atlas.com/t/1147
86 | #----------
87 |
88 |
--------------------------------------------------------------------------------
/else/malicious.exe:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/reverland/scripts/c7ad9005b26adcd5df8726b3c3d86a7f4a19e513/else/malicious.exe
--------------------------------------------------------------------------------
/else/malicious.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/reverland/scripts/c7ad9005b26adcd5df8726b3c3d86a7f4a19e513/else/malicious.txt
--------------------------------------------------------------------------------
/exe/5tps-v0.3.exe:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/reverland/scripts/c7ad9005b26adcd5df8726b3c3d86a7f4a19e513/exe/5tps-v0.3.exe
--------------------------------------------------------------------------------
/exe/5tps-v0.4.exe:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/reverland/scripts/c7ad9005b26adcd5df8726b3c3d86a7f4a19e513/exe/5tps-v0.4.exe
--------------------------------------------------------------------------------
/exe/hahaha.exe:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/reverland/scripts/c7ad9005b26adcd5df8726b3c3d86a7f4a19e513/exe/hahaha.exe
--------------------------------------------------------------------------------
/python/.directory:
--------------------------------------------------------------------------------
1 | [Dolphin]
2 | Timestamp=2013,2,18,16,0,32
3 | Version=3
4 |
5 | [Settings]
6 | HiddenFilesShown=true
7 |
--------------------------------------------------------------------------------
/python/5tps.py:
--------------------------------------------------------------------------------
1 | #! /bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | """
5 | a script to download 评书 from www.5tps.com
6 |
7 | changelog:
8 | v0.1 initial, search api
9 | v0.2 warning updates pingshu
10 | v0.3 fix not right parser -郭德纲
11 | v0.4 fix for search url change, search results turn page won't fix.
12 | """
13 |
14 | __version__ = '0.4'
15 | __author__ = 'Liu Yuyang'
16 | __license__ = 'BSD'
17 |
18 | import sys
19 | import os
20 | import re
21 | import cookielib
22 | import urllib
23 | import urllib2
24 | from time import sleep
25 |
26 | DEBUG = 0
27 |
28 |
29 | def dprint(string):
30 | """debug for print"""
31 | if DEBUG:
32 | print string
33 |
34 |
35 | class C5tps(object):
36 | """5tps网"""
37 | def __init__(self, name):
38 | super(C5tps, self).__init__()
39 | self.name = name.decode(sys.getfilesystemencoding())
40 | self.url = "http://www.5tps.com"
41 | self.cj = cookielib.CookieJar()
42 | self.opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(self.cj))
43 | urllib2.install_opener(self.opener)
44 | self.link, self.path = self.grep()
45 |
46 | def get_html(self, url):
47 | try:
48 | res = urllib.urlopen(url)
49 | html = res.read().decode('gbk')
50 | except:
51 | return None
52 | return html
53 |
54 | def search(self):
55 | data = urllib.urlencode({'keyword': self.name.encode('gbk')})
56 | try:
57 | res = urllib.urlopen(self.url + '/so.asp', data)
58 | html = res.read().decode('gbk')
59 | except:
60 | html = None
61 | return html
62 |
63 | def grep(self):
64 | print u"[+] 正在搜索相关评书..."
65 | html = self.search()
66 | if not html:
67 | print u"[!] 网络不佳,无法连接"
68 | html = self.search()
69 | if not html:
70 | print u"依旧无法连接,五秒后退出"
71 | sleep(5)
72 | exit(0)
73 | links = re.findall(u'...', html)  # [regex and lines 74-78 truncated in this dump]
79 | if len(links) > 1:
80 | for i, l in zip(range(len(links)), links):
81 | l = re.search(u'html>(.*' + self.name + u'.*)', l).group(1)
82 | # remove spans
83 | l = l.replace(u"", "")
84 | l = l.replace(u"", "")
85 | if re.search(u'font', l):
86 | l += u'<---------!未完结'
87 | l = l.replace(u"(", u"(更新到")
88 | l = l.replace(u"", "")
89 | print u"[%d]: %s" % (i, l)
90 | try:
91 | n = int(raw_input(u">> 键入想要下载评书的序号:(默认为0,回车确认)".encode(sys.getfilesystemencoding())))
92 | except ValueError:
93 | n = 0
94 | link = links[n]
95 | else:
96 | link = links[0]
97 | name = re.search(u'html>(.*' + self.name + u'.*)', link).group(1)
98 | try:
99 | m = re.search(u".*", name).group(0)
100 | except:
101 | m = None
102 | if m:
103 | name = name.replace(m, "")
104 | name = name.replace(u"", "")
105 | name = name.replace(u"", "")
106 | dprint(name)
107 | link = re.search(u'href=([^>]+)', link).group(1)
108 | return (link, name)
109 |
110 | def download(self, path):
111 | print u"[+] 正在解析专辑地址,请稍候..."
112 | html = self.get_html(self.url + self.link)
113 | if not html:
114 | print u"[!] 网络不佳,无法解析专辑下载地址。正在重新尝试"
115 | try:
116 | html = self.get_html(self.url + self.link)
117 | except Exception, e:
118 | dprint(e)
119 | dprint(u"[!] 无法解析专辑地址,五秒后退出")
120 | sleep(5)
121 | links = re.findall(u"href='([^']+)'>...", html)  # [pattern and lines 122-158 truncated in this dump]
159 | if len(sys.argv) > 2:
160 | path = sys.argv[2]
161 | else:
162 | path = x.path
163 | try:
164 | os.mkdir(path)
165 | except:
166 | print u"[!] 不能新建文件夹"
167 | if os.path.exists(path):
168 | print u"[!] 文件夹已存在!"
169 | x.download(path)
170 |
--------------------------------------------------------------------------------
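
The networking setup the script relies on, reduced to a minimal sketch (Python 2, like the rest of the repo): install a cookie-aware urllib2 opener and decode pages as GBK, since www.5tps.com serves GBK-encoded HTML.

    import cookielib
    import urllib2

    cj = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
    urllib2.install_opener(opener)   # later urllib2 calls share the cookie jar

    html = opener.open('http://www.5tps.com').read().decode('gbk')
    print(len(html))
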
/python/agwg.py:
--------------------------------------------------------------------------------
1 | #! /bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | import os
5 | import re
6 | import random
7 |
8 |
9 | def parse(lines, reg, dg):
10 | new_lines = []
11 | for line in lines:
12 | new_lines += xor_match(line, reg, dg)
13 | return new_lines
14 |
15 |
16 | def xor_match(directive, reg, dg, iteration=2):
17 | """Too much iteration may cause jmp false immediate overflow :i8 0c4h """
18 | m = re.search(r'xor\s+' + reg + ',\s*' + reg, directive)
19 | ins = dg.method
20 | if m:
21 | new_directive = []
22 | for i in range(iteration):
23 | i = random.randrange(0, len(ins))
24 | if ins[i] == 'push_pop':
25 | new_directive += dg.push_pop(reg)
26 | elif ins[i] == 'mov_to_reg':
27 | new_directive += dg.mov_to_reg(reg)
28 | elif ins[i] == 'save_other_reg':
29 | new_directive += dg.save_other_reg(reg)
30 | elif ins[i] == 'inc_reg':
31 | new_directive += dg.save_other_reg(reg)
32 | elif ins[i] == 'dec_reg':
33 | new_directive += dg.save_other_reg(reg)
34 | else:
35 | pass
36 | new_directive += [directive]
37 | else:
38 | new_directive = [directive]
39 | return new_directive
40 |
41 |
42 | class DG(object):
43 | """generate instructions"""
44 | def __init__(self):
45 | super(DG, self).__init__()
46 | self.regs = ["eax", "ebx", "ecx", "edx", "esi", "edi"]
47 | self.method = ["push_pop", "mov_to_reg", "save_other_reg", "inc_reg", "dec_reg"]
48 |
49 | def push_pop(self, reg):
50 | directive = [
51 | " push " + reg + '\n',
52 | " pop " + reg + '\n']
53 | return directive
54 |
55 | def mov_to_reg(self, reg):
56 | i = random.randrange(0, len(self.regs))
57 | if self.regs[i] != reg:
58 | directive = [" mov " + reg + ', ' + self.regs[i] + '\n']
59 | else:
60 | directive = self.mov_to_reg(reg)
61 | return directive
62 |
63 | def save_other_reg(self, reg):
64 | i = random.randrange(0, 5)
65 | if self.regs[i] != reg:
66 | directive = [
67 | " mov " + reg + ', ' + self.regs[i] + '\n', # move some random reg into reg
68 | " xor " + self.regs[i] + ", " + self.regs[i] + '\n', # zero out reg
69 | " mov " + self.regs[i] + ', ' + reg + '\n']
70 | else:
71 | directive = self.save_other_reg(reg)
72 | return directive
73 |
74 | def inc(self, reg):
75 | directive = " inc " + reg + '\n'
76 | return [directive]
77 |
78 | def dec(self, reg):
79 | directive = " dec " + reg + '\n'
80 | return [directive]
81 |
82 |
83 | if __name__ == "__main__":
84 | filename = os.sys.argv[1]
85 | with open(filename, 'rb') as f:
86 | lines = f.readlines()
87 | dg = DG()
88 | # Iterate lines for specific reg
89 | new_lines = lines
90 | for reg in dg.regs:
91 | new_lines = parse(new_lines, reg, dg)
92 | # Once you’ve finished mangling your ASM code you need to add the following two lines to the top of
93 | # the file for it to build correctly.
94 | new_lines = [
95 | ".section '.text' rwx\n",
96 | ".entrypoint\n"] + new_lines
97 | with open(filename.split('.')[0] + "_gw." + filename.split('.')[-1], 'wb') as f:
98 | f.writelines(new_lines)
99 |
--------------------------------------------------------------------------------
/python/captcha_cl/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | u'''
5 | A script to crack captchas of `正方教务系统`.
6 | Greatly inspired by "http://www.wausita.com/captcha/"
7 | rewrite by reverland
8 |
9 | Usage:
10 |
11 | >>> load_trainset()
12 | True
13 | >>> crack_file(u'test1.gif')
14 | u'48353'
15 | >>> crack_file(u'test2.gif')
16 | u'85535'
17 | >>> crack_buf(open('test3.gif', 'rb').read())
18 | u'23297'
19 |
20 | '''
21 |
22 | from __future__ import unicode_literals, division, print_function
23 |
24 | import sys
25 | import os
26 | import math
27 | import bz2
28 | import json
29 |
30 | try:
31 | import cStringIO as StringIO
32 | except ImportError:
33 | import StringIO
34 |
35 | from PIL import Image
36 |
37 | _TRAINSET = []
38 |
39 | PKG_PATH = os.path.realpath(os.path.split(__file__)[0])
40 | TRAINSET_FILENAME = 'trainset.json.bz2'
41 |
42 |
43 | def process_image(im):
44 | '''Get an image file and return a black-white Image object'''
45 | im2 = Image.new("P", im.size, 255)
46 | for x in range(im.size[0]):
47 | for y in range(im.size[1]):
48 | pix = im.getpixel((x, y))
49 | if pix != 212 and pix != 169 and pix != 255 and pix != 40:
50 | # these are the numbers to get
51 | im2.putpixel((x, y), 0)
52 | return im2
53 |
54 |
55 | def image_from_file(filename):
56 | im = Image.open(filename)
57 | return process_image(im.convert('P'))
58 |
59 |
60 | def image_from_buffer(buf):
61 | fp = StringIO.StringIO(buf)
62 | im = Image.open(fp)
63 | return process_image(im)
64 |
65 |
66 | def find_letters(im):
67 | '''Find where letters begin and end'''
68 | inletter = False
69 | foundletter = False
70 | start = 0
71 | end = 0
72 | letters = []
73 | for x in range(im.size[0]): # slice across
74 | for y in range(im.size[1]): # slice down
75 | pix = im.getpixel((x, y))
76 | if pix != 255:
77 | inletter = True
78 | if foundletter is False and inletter is True:
79 | foundletter = True
80 | start = x
81 | if foundletter is True and inletter is False:
82 | foundletter = False
83 | end = x
84 | letters.append((start, end))
85 | inletter = False
86 | return letters
87 |
88 |
89 | def magnitude(lst):
90 |     '''Calculate the magnitude (Euclidean norm) of a list'''
91 | total = 0
92 | for c in lst:
93 | total += c ** 2
94 | return math.sqrt(total)
95 |
96 |
97 | def relation(lst1, lst2):
98 |     '''Calculate the cosine similarity between two given lists, following the vector space model.
99 |     Truncate to the shorter length when they differ.'''
100 | dot_multiply = 0
101 | if len(lst1) == len(lst2):
102 | for i in xrange(len(lst1)):
103 | dot_multiply += lst1[i] * lst2[i]
104 | cosin = dot_multiply / (magnitude(lst1) * (magnitude(lst2)))
105 | elif len(lst1) > len(lst2):
106 | lst1 = lst1[:len(lst2)]
107 | cosin = relation(lst1, lst2)
108 | else:
109 | lst2 = lst2[:len(lst1)]
110 | cosin = relation(lst1, lst2)
111 | return cosin
112 |
113 |
114 | def buildvector(im):
115 | '''buildvector for Image object, return list'''
116 | lst = []
117 | for p in im.getdata():
118 | lst.append(p)
119 | return lst
120 |
121 |
122 | ## Functions to crack
123 | def guess(im, trainset):
124 |     '''guess which character the image shows and return a tuple (relation, character)'''
125 | guess = []
126 | for image in trainset:
127 | for k, v in image.items():
128 | guess.append((relation(v, buildvector(im)), k))
129 | guess.sort(reverse=True)
130 | return guess[0]
131 |
132 |
133 | ## Crack image
134 | def crack_image(im):
135 | trainset = _TRAINSET[0]
136 | letters = find_letters(im)
137 | results = ''
138 | for letter in letters:
139 | im2 = im.crop((letter[0], 0, letter[1], im.size[1]))
140 | digit = guess(im2, trainset)[1]
141 | results += digit
142 | return results
143 |
144 |
145 | def crack_file(filename):
146 | '''receive filename and trainset, give out crack result'''
147 |
148 | return crack_image(image_from_file(filename))
149 |
150 |
151 | def crack_buf(buf):
152 | '''Crack image stored in a memory buffer.
153 |
154 | The image must be a 60x22 GIF, as generated by the ZFsoft
155 | system.
156 |
157 | '''
158 |
159 | return crack_image(image_from_buffer(buf))
160 |
161 |
162 | def load_trainset(filename=TRAINSET_FILENAME):
163 | if _TRAINSET:
164 | return True
165 |
166 | path = os.path.join(PKG_PATH, filename)
167 | with open(path, 'rb') as fp:
168 | trainset_jsonbz2 = fp.read()
169 |
170 | _TRAINSET.append(json.loads(bz2.decompress(trainset_jsonbz2)))
171 | return True
172 |
173 |
174 | # vim:set ai et ts=4 sw=4 sts=4 fenc=utf-8:
175 |
--------------------------------------------------------------------------------
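
The matching above boils down to cosine similarity between flattened pixel vectors. A self-contained sketch of that comparison, mirroring magnitude() and relation() without PIL or a trainset:

    import math

    def magnitude(vec):
        return math.sqrt(sum(c ** 2 for c in vec))

    def cosine(v1, v2):
        n = min(len(v1), len(v2))            # truncate to the shorter vector
        v1, v2 = v1[:n], v2[:n]
        dot = sum(a * b for a, b in zip(v1, v2))
        return dot / (magnitude(v1) * magnitude(v2))

    print(cosine([0, 255, 255, 0], [0, 255, 255, 0]))   # identical vectors -> 1.0
    print(cosine([0, 255, 255, 0], [255, 0, 0, 255]))   # no overlap -> 0.0
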
/python/captcha_cl/__main__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | from __future__ import unicode_literals, division, print_function
5 |
6 |
7 | if __name__ == '__main__':
8 | import sys
9 |
10 | if len(sys.argv) == 1:
11 | # run doctests
12 | import doctest
13 | import __init__
14 | fails, count = doctest.testmod(__init__)
15 | sys.exit(0 if fails == 0 else 1)
16 |
17 | from __init__ import load_trainset, crack_file
18 | load_trainset()
19 | for filename in sys.argv[1:]:
20 | result = crack_file(filename)
21 | print(result)
22 |
23 |
24 | # vim:set ai et ts=4 sw=4 sts=4 fenc=utf-8:
25 |
--------------------------------------------------------------------------------
/python/captcha_cl/test1.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/reverland/scripts/c7ad9005b26adcd5df8726b3c3d86a7f4a19e513/python/captcha_cl/test1.gif
--------------------------------------------------------------------------------
/python/captcha_cl/test2.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/reverland/scripts/c7ad9005b26adcd5df8726b3c3d86a7f4a19e513/python/captcha_cl/test2.gif
--------------------------------------------------------------------------------
/python/captcha_cl/test3.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/reverland/scripts/c7ad9005b26adcd5df8726b3c3d86a7f4a19e513/python/captcha_cl/test3.gif
--------------------------------------------------------------------------------
/python/captcha_cl/trainset.json.bz2:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/reverland/scripts/c7ad9005b26adcd5df8726b3c3d86a7f4a19e513/python/captcha_cl/trainset.json.bz2
--------------------------------------------------------------------------------
/python/coursera.py:
--------------------------------------------------------------------------------
1 | import requests
2 | import re
3 | import string
4 | import os
5 |
6 | '''
7 | This script downloads videos for the course 'High Performance Scientific Computing' on Coursera.
8 | For some reason these downloads are not allowed by the course's download policy.
9 | That doesn't agree with the spirit of Coursera, and it makes studying online harder for many people.
10 | So I wrote a script to download the videos for myself.
11 | 
12 | PLEASE DO NOT REDISTRIBUTE THE VIDEOS IF YOU DOWNLOAD THEM.
13 | AND WITH NO WARRANTY.
14 | 
15 | Edit your email and password to log in and authenticate.
16 | Whether or not you are behind some walls, change the proxy to suit yourself.
17 |
18 | GPLv2 license if there must be one.
19 |
20 | '''
21 |
22 | auth_url = 'https://class.coursera.org/scicomp-001/auth/auth_redirector?type=login&subtype=normal&visiting=https%3A%2F%2Fclass.coursera.org%2Fscicomp-001%2Flecture%2Findex'
23 | video_url = 'https://class.coursera.org/scicomp-001/lecture/index'
24 | proxy = {'https': 'http://127.0.0.1:1998'}
25 | email = 'reverland@reverland.org'
26 | password = 'xxxxxx'
27 |
28 |
29 | def login(email, password, proxy=None):
30 | login_page = 'https://www.coursera.org/maestro/api/user/login'
31 | payload = {'email_address': email, 'password': password}
32 | headers = {'Referer': 'https://www.coursera.org/account/signin',
33 | 'Cookie': 'csrftoken=VE3o3jqpPUbv16YcMTtBFOvl',
34 | 'X-CSRFToken': 'VE3o3jqpPUbv16YcMTtBFOvl',
35 | 'Accept': '*/*',
36 | 'Accept-Encoding': 'gzip, deflate',
37 | 'Accept-Language': 'en-us,en;q=0.5',
38 | 'Cache-Control': 'no-cache',
39 | 'Connection': 'keep-alive',
40 | 'Content-Length': '54',
41 | 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:19.0) Gecko/20130303 Firefox/19.0',
42 | 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'}
43 | r = s.post(login_page, data=payload, proxies=proxy, headers=headers, verify=False)
44 | return r.text
45 |
46 |
47 | def auth(auth_url, proxy):
48 | r = s.get(auth_url, proxies=proxy)
49 | return r
50 |
51 |
52 | def parse_links(video_url, proxy):
53 | # get video html file
54 | r = s.get(video_url, proxies=proxy)
55 | # parse videos urls
56 | links = re.findall(r'data-modal-iframe="([^"]+)"', r.text)
57 | return links
58 |
59 |
60 | def download_with_links(links, proxy):
61 | for link in links:
62 | r = s.get(link, proxies=proxy)
63 | src = re.search('([^<]+)\s+', r.text, re.S).group(1)).replace('/', ' or ')
65 | print ">> Download %s now..." % name
66 | if os.path.exists(name):
67 | print ">> Warning: %s exists. skip" % name
68 | continue
69 | with open(name, 'wb') as f:
70 | r = requests.get(src, proxies=proxy)
71 | f.write(r.content)
72 |
73 |
74 | if __name__ == '__main__':
75 | # initial a session
76 | s = requests.Session()
77 | print "Log in with your coursera account now...\n"
78 | login(email, password, proxy)
79 | # auth with course
80 | print "Authenticate your account for the course"
81 | auth(auth_url, proxy)
82 | print "success..."
83 | # parse links
84 | links = parse_links(video_url, proxy)
85 | # download files
86 | print "Begin downloading..."
87 | download_with_links(links, proxy)
88 |
--------------------------------------------------------------------------------
/python/getphotos.py:
--------------------------------------------------------------------------------
1 | #! /bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | '''
5 | A script to download all photos of a specific user on Renren.
6 | All photos will be downloaded into the current working directory, organized by album name.
7 | It will not download `手机相册` or `头像相册` for now; maybe I'll add `手机相册` later, maybe not.
8 | NOTICE: albums with the same name will be downloaded into the same directory.
9 | '''
10 |
11 | import os
12 | import urllib
13 | import urllib2
14 | import cookielib
15 | import time
16 | import re
17 | #import cPickle as p
18 |
19 | __author__ = '''Reverland (lhtlyy@gmail.com)'''
20 |
21 |
22 | def download_my_photos(uid):
23 | get_photos(uid)
24 | return 1
25 |
26 |
27 | def download_friends_photos(uid):
28 | get_photos(uid, others=True)
29 |
30 |
31 | def get_photos(uid, others=False):
32 | '''Get photos, if you want to get others' photos, change `others=True`'''
33 | if others is True:
34 | album_list_page = 'http://photo.renren.com/photo/' + str(uid) + '/album/relatives'
35 | else:
36 | album_list_page = 'http://photo.renren.com/photo/' + str(uid) + '/?__view=async-html'
37 | res = urllib2.urlopen(album_list_page)
38 | html = res.read()
39 |     album_list = re.findall('\n\n...\n+([^<]+)', html, re.S)  # [pattern truncated in this dump]
40 | for album_link, album_name in album_list:
41 | print 'Downloading album %s now' % album_name
42 | if not os.path.exists(album_name):
43 | os.mkdir(album_name)
44 | res = urllib2.urlopen(album_link)
45 | html = res.read()
46 | photo_links = re.findall(",large:'([^']+)", html)
47 | count = 1
48 | for photo in photo_links:
49 | print "Downloading %d photo(s) from album %s" % (count, album_name)
50 | urllib.urlretrieve(photo, album_name + '/' + str(time.time()) + '.jpg')
51 |             print "Downloaded successfully"
52 | count += 1
53 | print 'Album %s downloaded' % album_name
54 |
55 |
56 | def login(username, password):
57 | """log in and return uid"""
58 | logpage = "http://www.renren.com/ajaxLogin/login"
59 | data = {'email': username, 'password': password}
60 | login_data = urllib.urlencode(data)
61 | cj = cookielib.CookieJar()
62 | opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
63 | urllib2.install_opener(opener)
64 | res = opener.open(logpage, login_data)
65 | print "Login now ..."
66 | html = res.read()
67 | #print html
68 |
69 | # Get uid
70 | print "Getting user id of you now"
71 | res = urllib2.urlopen("http://www.renren.com/home")
72 | html = res.read()
73 | # print html
74 | uid = re.search("'ruid':'(\d+)'", html).group(1)
75 | # print uid
76 | print "Login and got uid successfully"
77 | return uid
78 |
79 |
80 | # Notice:You must login first!!!!!!!!
81 | if __name__ == '__main__':
82 | username = '****'
83 | password = '****'
84 | uid = login(username, password)
85 | download_my_photos(uid)
86 |
--------------------------------------------------------------------------------
/python/girl-atlas.py:
--------------------------------------------------------------------------------
1 | #! /bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | """
5 | A little script to download images from 'http://girl-atlas.com/'
6 | Written for fun by Reverland(lhtlyy@gmail.com)
7 | Free to use and with no warranty, BSD License.
8 | """
9 |
10 | from gevent import monkey
11 | monkey.patch_all()
12 | import requests
13 | #import urllib
14 | import os
15 | #import urllib2
16 | import re
17 | import argparse
18 | parser = argparse.ArgumentParser()
19 | #import gevent
20 |
21 |
22 | # def http_proxy_handlers(proxy):
23 | # """dirty work, change opener"""
24 | # proxy_handler = urllib2.ProxyHandler({"http": proxy})
25 | # opener = urllib2.build_opener(proxy_handler)
26 | # urllib2.install_opener(opener)
27 |
28 |
29 | def get_html(url, proxy=None):
30 | """get html"""
31 | #req = urllib2.Request(url)
32 | #res = urllib2.urlopen(req)
33 | # print "pull html file"
34 | #html = res.read()
35 | # print "end of pulling"
36 | r = requests.get(url, proxies=proxy)
37 | html = r.text
38 | return html
39 |
40 |
41 | def parse_image_urls(html, pattern):
42 | """parse_image_urls and store them into a list"""
43 | result = pattern.findall(html)
44 | return result
45 |
46 |
47 | def download(result, path):
48 | """dirty works, download all files in (name, url) list"""
49 | for name, url in result:
50 | print ">>> Begin download %s..." % name
51 | if os.path.exists(path + name + '.jpeg'):
52 | print "%s exists, skip..." % name
53 | continue
54 | try:
55 | r = requests.get(url)
56 | with open(path + name + '.jpeg', 'wb') as f:
57 | f.write(r.content)
58 | #urllib.urlretrieve(url, path + name + '.jpeg')
59 | print ">>> %s downloaded" % name
60 | except Exception, e:
61 | print "%s" % e
62 | continue
63 |
64 |
65 | def download_by_url(url, path='./'):
66 |     pattern = re.compile('...')  # [image regex truncated in this dump]
67 | result = parse_image_urls(get_html(url, proxy), pattern)
68 | download(result, path)
69 |
70 |
71 | def download_by_tags(url):
72 | """download by tags"""
73 | html = get_html(url, proxy)
74 | next_page = re.compile("Next").search(html)
75 | # print "next", next
76 | while next_page:
77 | url = 'http://girl-atlas.com' + next_page.group(1)
78 | # print url
79 | html_next = get_html(url, proxy)
80 | html = html + html_next
81 | next_page = re.compile("Next").search(html_next)
82 | pattern = re.compile("(.+)")
83 | result = parse_image_urls(html, pattern)
84 | # good enough with gevent, however I comment it.
85 | # jobs = [gevent.spawn(download_by_url, url, name + '/') for (url, name) in result]
86 | for album_url, album_name in result:
87 | album_name = album_name.replace('/', '')
88 | try:
89 | os.mkdir(album_name)
90 | except:
91 | pass
92 | # gevent.joinall(jobs)
93 | download_by_url(album_url, album_name + '/')
94 |
95 | # download_by_tags('http://girl-atlas.com/t/40')
96 | # download_by_tags('http://girl-atlas.com/t/722')
97 | # download_by_tags('http://girl-atlas.com/t/937')
98 | # download_by_tags('http://girl-atlas.com/t/6')
99 |
100 | if __name__ == "__main__":
101 | proxy = None
102 | parser.add_argument('-t', help="Download by tags,url format must be like this: 'http://girl-atlas.com/t/6'")
103 | parser.add_argument('-a', help="Download by albums,url format must be like this: 'http://girl-atlas.com/a/10130108074800002185'")
104 | parser.add_argument('-p', help="Enable proxy, like 'http://127.0.0.1:1998'")
105 | args = parser.parse_args()
106 | # print args
107 | if args.p:
108 | if re.search('http', args.p):
109 | proxy = {'http': args.p}
110 | else:
111 | print "Not implement, No proxy used"
112 | if args.a:
113 | print ">>>Downloading from %s now" % args.a
114 | download_by_url(args.a)
115 | elif args.t:
116 | print ">>>Downloading from %s now" % args.t
117 | download_by_tags(args.t)
118 |
--------------------------------------------------------------------------------
/python/i2a.py:
--------------------------------------------------------------------------------
1 | #! /bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | """
5 | Turn images into acsii.
6 | """
7 |
8 | __author__ = 'Reverland (lhtlyy@gmail.com)'
9 |
10 | import Image
11 | import ImageOps
12 | import sys
13 |
14 |
15 | filename = 'a.jpg'
16 |
17 |
18 | def makeHTMLbox(body, fontsize, imagesize):
19 | """takes one long string of words and a width(px) then put them in an HTML box"""
20 |     boxStr = """...%s..."""  # [HTML box template truncated in this dump]
22 | return boxStr % (fontsize, '%', imagesize[0], body)
23 |
24 |
25 | def makeHTMLascii(body, color):
26 |     """take words and a color, and create an HTML word in that color"""
27 | #num = str(random.randint(0,255))
28 | # return random color for every tags
29 | color = 'rgb(%s, %s, %s)' % color
30 | # get the html data
31 |     wordStr = '...%s...'  # [HTML span template truncated in this dump]
32 | return wordStr % (color, body)
33 |
34 |
35 | def i2m(im, fontsize):
36 | """turn an image into ascii like matrix"""
37 | im = im.convert('L')
38 | im = ImageOps.autocontrast(im)
39 | im.thumbnail((im.size[0] / fontsize, im.size[1] / fontsize))
40 | string = ''
41 | colors = [(0, i, 0) for i in range(0, 256, 17)]
42 | words = '据说只有到了十五字才会有经验的'
43 | for y in range(im.size[1]):
44 | for x in range(im.size[0]):
45 | p = im.getpixel((x, y))
46 | i = 14
47 | while i >= 0:
48 | if p >= i * 17:
49 | s = makeHTMLascii(words[3 * i:3 * (i + 1)], colors[i])
50 | break
51 | i -= 1
52 | if x % im.size[0] == 0 and y > 0:
53 |                 s = s + '<br/>'  # start a new row in the HTML output
54 | string = string + s
55 | return string
56 |
57 |
58 | def i2a(im, fontsize):
59 | """turn an image into ascii with colors"""
60 | im = im.convert('RGB')
61 | im = ImageOps.autocontrast(im)
62 | im.thumbnail((im.size[0] / fontsize, im.size[1] / fontsize))
63 | string = ''
64 | for y in range(im.size[1]):
65 | for x in range(im.size[0]):
66 | c = im.getpixel((x, y))
67 | # print c
68 | s = makeHTMLascii('翻', c)
69 | if x % im.size[0] == 0 and y > 0:
70 |                 s = s + '<br/>'  # start a new row in the HTML output
71 | string = string + s
72 | return string
73 |
74 |
75 | def getHTMLascii(filename, fontsize, style='matrix', outputfile='a.html', scale=1):
76 | """Got html ascii image"""
77 | im = Image.open(filename)
78 | size = (int(im.size[0] * scale), int(im.size[1] * scale))
79 | im.thumbnail(size, Image.ANTIALIAS)
80 | if style == 'matrix':
81 | ascii = makeHTMLbox(i2m(im, fontsize), fontsize, im.size)
82 | elif style == 'ascii':
83 | ascii = makeHTMLbox(i2a(im, fontsize), fontsize, im.size)
84 | else:
85 | print "Just support ascii and matrix now, fall back to matrix"
86 | ascii = makeHTMLbox(i2m(im, fontsize), fontsize, im.size)
87 | with open(outputfile, 'wb') as f:
88 | f.write(ascii)
89 | return 1
90 |
91 |
92 | if __name__ == '__main__':
93 | if sys.argv[1] == '--help' or sys.argv[1] == '-h':
94 | print """Usage:python i2a.py filename fontsize [optional-parameter]
95 | optional-parameter:
96 | scale -- between (0, 1)
97 | style -- matrix or ascii"""
98 | else:
99 | filename = sys.argv[1]
100 | try:
101 | fontsize = int(sys.argv[2])
102 | except:
103 | fontsize = int(raw_input('input fontsize please:'))
104 | try:
105 | scale = float(sys.argv[3])
106 | except:
107 | scale = 1
108 | try:
109 | style = sys.argv[4]
110 | except:
111 | style = 'matrix'
112 | getHTMLascii(filename, fontsize, scale=scale, style=style)
113 |
--------------------------------------------------------------------------------
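
A minimal sketch of the mapping i2m() performs: shrink the image, bucket each grayscale value into a handful of brightness levels, and emit one character per pixel. The character ramp and the `from PIL import Image` import are illustrative assumptions; the script itself emits Chinese characters wrapped in HTML spans.

    from PIL import Image   # assumption: Pillow-style import; the script uses `import Image`

    RAMP = ' .:-=+*#%@'      # one character per brightness bucket, dark to light

    def image_to_text(im, width=40):
        im = im.convert('L')                      # grayscale, like i2m()
        im.thumbnail((width, width))
        rows = []
        for y in range(im.size[1]):
            row = ''
            for x in range(im.size[0]):
                p = im.getpixel((x, y))           # 0..255
                row += RAMP[p * (len(RAMP) - 1) // 255]
            rows.append(row)
        return '\n'.join(rows)

    print(image_to_text(Image.new('L', (8, 8), 200)))   # an 8x8 light-gray test image
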
/python/image2css.py:
--------------------------------------------------------------------------------
1 | #! /bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | """
5 | Script to turn image into css
6 | """
7 |
8 | import Image
9 | import sys
10 |
11 | __author__ = "Reverland (lhtlyy@gmail.com)"
12 |
13 |
14 | def getcss(im):
15 | """docstring for get"""
16 | css = """position: absolute;
17 | top: 30px;
18 | left: 30px;
19 | width: 0;
20 | height: 0;
21 | box-shadow:
22 | """
23 | string = '%dpx %dpx 0px 1px rgb%s,\n'
24 | for y in range(0, im.size[1], 1):
25 | for x in range(0, im.size[0], 1):
26 | if im.size[1] - y <= 1 and im.size[0] - x <= 1:
27 | string = '%dpx %dpx 0px 1px rgb%s;\n'
28 | color = im.getpixel((x, y))
29 | css += string % (x, y, color)
30 | return css
31 |
32 |
33 | def gethtml(css):
34 | """docstring for gethtml"""
35 |     html = """
36 |     <!-- [HTML wrapper truncated in this dump] -->
37 |     %s
38 |     """ % css
39 | return html
40 |
41 | if __name__ == '__main__':
42 | filename = sys.argv[1]
43 | try:
44 | ratio = sys.argv[2]
45 | except:
46 | ratio = 1.0
47 | #outfile = sys.argv[3]
48 | im = Image.open(filename)
49 | size = (int(ratio * im.size[0]), int(ratio * im.size[1]))
50 | im.thumbnail(size)
51 | html = gethtml(getcss(im))
52 | print html
53 | # with open(outfile, 'wb') as f:
54 | # f.write(html)
55 |
--------------------------------------------------------------------------------
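
A tiny worked example of what getcss() emits: one box-shadow entry per pixel, offset by that pixel's (x, y) position. This is pure Python over a hard-coded 2x2 "image" (no PIL needed); the `%dpx %dpx 0px 1px rgb%s` template is the one used above.

    pixels = {(0, 0): (255, 0, 0), (1, 0): (0, 255, 0),
              (0, 1): (0, 0, 255), (1, 1): (0, 0, 0)}     # a 2x2 "image"

    shadows = ['%dpx %dpx 0px 1px rgb%s' % (x, y, pixels[(x, y)])
               for y in range(2) for x in range(2)]
    print('box-shadow:\n    ' + ',\n    '.join(shadows) + ';')
    # box-shadow:
    #     0px 0px 0px 1px rgb(255, 0, 0),
    #     1px 0px 0px 1px rgb(0, 255, 0),
    #     0px 1px 0px 1px rgb(0, 0, 255),
    #     1px 1px 0px 1px rgb(0, 0, 0);
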
/python/metagoofil.py:
--------------------------------------------------------------------------------
1 | #! /bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | '''
5 | Something like metagoofil
6 | Nothing... I just wrote it to work with Google only...
7 | Practice for OOP
8 | '''
9 |
--------------------------------------------------------------------------------
/python/missile.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 | import numpy as np
3 | tolerance = 1e-1
4 | radius = np.pi
5 |
6 | # missile 1
7 | x_m1, y_m1 = -np.pi, 0
8 | v_m1 = 5
9 |
10 | # missile 2
11 | x_m2, y_m2 = 0, np.pi
12 | v_m2 = v_m1
13 | # missile 3
14 | x_m3, y_m3 = np.pi, 0
15 | v_m3 = v_m1
16 | # missile 4
17 | x_m4, y_m4 = 0, -np.pi
18 | v_m4 = v_m1
19 |
20 | plt.figure(figsize=(10, 10), dpi=80)
21 | plt.title(" missile flight simulator ", fontsize=40)
22 | plt.xlim(-4, 4)
23 | plt.ylim(-4, 4)
24 | #plt.xticks([])
25 | #plt.yticks([])
26 |
27 | # set spines
28 | ax = plt.gca()
29 | ax.spines['right'].set_color('none')
30 | ax.spines['top'].set_color('none')
31 | ax.xaxis.set_ticks_position('bottom')
32 | ax.spines['bottom'].set_position(('data', 0))
33 | ax.yaxis.set_ticks_position('left')
34 | ax.spines['left'].set_position(('data', 0))
35 | plt.xticks([-np.pi, -np.pi / 2, 0, np.pi / 2, np.pi], [r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$'])
36 | plt.yticks([-np.pi, -np.pi / 2, 0, np.pi / 2, np.pi], [r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$'])
37 |
38 | plt.annotate('missile start point', xy=(x_m1, y_m1), xycoords='data',
39 | xytext=(+15, +15), textcoords='offset points', fontsize=12,
40 | arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
41 |
42 | # alpha labels
43 | for label in ax.get_xticklabels() + ax.get_yticklabels():
44 | label.set_fontsize(16)
45 | label.set_bbox(dict(facecolor='white', edgecolor='None', alpha=0.65))
46 |
47 |
48 | class ob(object):
49 | """docstring for ob"""
50 | def __init__(self, x, y):
51 | self.x = x
52 | self.y = y
53 |
54 |
55 | class missile(ob):
56 | """docstring for missile"""
57 | def __init__(self, x, y):
58 | super(missile, self).__init__(x, y)
59 |
60 | def forward(self, v, target):
61 | """docstring for forward"""
62 | if self.x < target.x:
63 | alpha = np.arctan((target.y - self.y) / (target.x - self.x))
64 | elif self.x > target.x:
65 | alpha = np.pi + np.arctan((target.y - self.y) / (target.x - self.x))
66 | elif self.x == target.x and self.y < target.y:
67 | alpha = np.pi / 2
68 | else:
69 | alpha = -np.pi / 2
70 | self.x = self.x + v * 0.01 * np.cos(alpha)
71 | self.y = self.y + v * 0.01 * np.sin(alpha)
72 | return self.x, self.y
73 |
74 | def distance(self, target):
75 | """docstring for distance"""
76 | return np.sqrt((self.x - target.x) ** 2 + (self.y - target.y) ** 2)
77 |
78 |
79 | class target(ob):
80 | """docstring for target"""
81 | def __init__(self, x, y):
82 | super(target, self).__init__(x, y)
83 |
84 | def newposition(self, x, y):
85 | """docstring for newposition"""
86 | self.x = x
87 | self.y = y
88 |
89 | m1 = missile(x_m1, y_m1)
90 | m2 = missile(x_m2, y_m2)
91 | m3 = missile(x_m3, y_m3)
92 | m4 = missile(x_m4, y_m4)
93 |
94 | while True:
95 | if m1.distance(m2) < tolerance or m1.distance(m3) < tolerance or m1.distance(m4) < tolerance:
96 | print "collision"
97 | plt.plot(x_m1, y_m1, 'o')
98 | plt.annotate('crash point', xy=(x_m1, y_m1), xycoords='data',
99 | xytext=(+15, +15), textcoords='offset points', fontsize=12,
100 | arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
101 | plt.pause(0.1)
102 | plt.show()
103 | break
104 | elif m3.distance(m2) < tolerance or m3.distance(m4) < tolerance:
105 | print "collision"
106 | plt.plot(x_m3, y_m3, 'o')
107 | plt.annotate('crash point', xy=(x_m3, y_m3), xycoords='data',
108 | xytext=(+15, +15), textcoords='offset points', fontsize=12,
109 | arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
110 | plt.pause(0.1)
111 |         plt.show()
112 | break
113 | x_m1, y_m1 = m1.forward(v_m1, m2)
114 | x_m2, y_m2 = m2.forward(v_m2, m3)
115 | x_m3, y_m3 = m3.forward(v_m3, m4)
116 | x_m4, y_m4 = m4.forward(v_m4, m1)
117 | #print alpha, beta
118 | plt.plot(x_m1, y_m1, 'bx', alpha=.5)
119 | plt.plot(x_m2, y_m2, 'k*', alpha=.5)
120 | plt.plot(x_m3, y_m3, 'r.', alpha=.5)
121 | plt.plot(x_m4, y_m4, 'gp', alpha=.5)
122 | plt.legend(("missile1", "missile2", "missile3", "missile4"), loc="upper left", prop={'size': 12})
123 | plt.pause(0.1)
124 |
--------------------------------------------------------------------------------
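
The chase logic with the plotting stripped away: each missile turns toward its target and advances v*dt along that heading, as in missile.forward(). This sketch simplifies the original in two ways: it uses np.arctan2 instead of the quadrant branching, and it updates all four positions simultaneously.

    import numpy as np

    def step(p, target, v, dt=0.01):
        alpha = np.arctan2(target[1] - p[1], target[0] - p[0])   # heading toward target
        return (p[0] + v * dt * np.cos(alpha), p[1] + v * dt * np.sin(alpha))

    pts = [(-np.pi, 0.0), (0.0, np.pi), (np.pi, 0.0), (0.0, -np.pi)]   # square start
    v, tolerance = 5.0, 1e-1

    # each missile chases the next; they spiral inward until two of them meet
    while np.hypot(pts[0][0] - pts[1][0], pts[0][1] - pts[1][1]) > tolerance:
        pts = [step(pts[i], pts[(i + 1) % 4], v) for i in range(4)]
    print('collision near (%.3f, %.3f)' % pts[0])
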
/python/renren.py:
--------------------------------------------------------------------------------
1 | #! /bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | '''
5 | A python script to visualize your social network of renren.
6 | Inspired by:
7 | http://www.voidspace.org.uk/python/articles/urllib2.shtml
8 | http://www.voidspace.org.uk/python/articles/cookielib.shtml
9 | http://blog.csdn.net/lkkang/article/details/7362888
10 | http://cos.name/2011/04/exploring-renren-social-network/
11 | '''
12 |
13 | import urllib
14 | import urllib2
15 | import cookielib
16 | import re
17 | import cPickle as p
18 | import networkx as nx
19 | import matplotlib.pyplot as plt
20 |
21 | __author__ = """Reverland (lhtlyy@gmail.com)"""
22 |
23 | # Control parameters,EDIT here!
24 | ## Login
25 | USERNAME = '***'
26 | PASSWORD = '***'
27 | ## Control Graphs, Edit for better graphs as you need
28 | LABEL_FLAG = True  # Whether to show labels. NOTE: configure your matplotlibrc for Chinese characters.
29 | REMOVE_ISOLATED = True  # Whether to remove isolated nodes (connected components with no more than ISO_LEVEL nodes)
30 | DIFFERENT_SIZE = True  # Scale node sizes individually; bigger means more shared friends
31 | ISO_LEVEL = 10
32 | NODE_SIZE = 40 # Default node size
33 |
34 |
35 | def login(username, password):
36 | """log in and return uid"""
37 | logpage = "http://www.renren.com/ajaxLogin/login"
38 | data = {'email': username, 'password': password}
39 | login_data = urllib.urlencode(data)
40 | cj = cookielib.CookieJar()
41 | opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
42 | urllib2.install_opener(opener)
43 | res = opener.open(logpage, login_data)
44 | print "Login now ..."
45 | #html = res.read()
46 | #print html
47 |
48 | # Get uid
49 | print "Getting user id of you now"
50 | res = urllib2.urlopen("http://www.renren.com/home")
51 | html = res.read()
52 | # print html
53 | uid = re.search("'ruid':'(\d+)'", html).group(1)
54 | # print uid
55 | print "Login and got uid successfully"
56 | return uid
57 |
58 |
59 | def getfriends(uid):
60 | """Get the uid's friends and return the dict with uid as key,name as value."""
61 | print "Get %s 's friend list" % str(uid)
62 | pagenum = 0
63 | dict1 = {}
64 | while True:
65 | targetpage = "http://friend.renren.com/GetFriendList.do?curpage=" + str(pagenum) + "&id=" + str(uid)
66 | res = urllib2.urlopen(targetpage)
67 | html = res.read()
68 |
69 |         pattern = '...'  # [regex truncated in this dump; it captures (friend id, friend name) pairs]
70 |
71 | m = re.findall(pattern, html)
72 | #print len(m)
73 | if len(m) == 0:
74 | break
75 | for i in range(0, len(m)):
76 | no = m[i][0]
77 | uname = m[i][1]
78 | #print uname, no
79 | dict1[no] = uname
80 | pagenum += 1
81 | print "Got %s 's friends list successfully." % str(uid)
82 | return dict1
83 |
84 |
85 | def getdict(uid):
86 | """cache dict of uid in the disk."""
87 | try:
88 | with open(str(uid) + '.txt', 'r') as f:
89 | dict_uid = p.load(f)
90 | except:
91 | with open(str(uid) + '.txt', 'w') as f:
92 | p.dump(getfriends(uid), f)
93 | dict_uid = getdict(uid)
94 | return dict_uid
95 |
96 |
97 | def getrelations(uid1, uid2):
98 | """receive two user id, If they are friends, return 1, otherwise 0."""
99 | dict_uid1 = getdict(uid1)
100 | if uid2 in dict_uid1:
101 | return 1
102 | else:
103 | return 0
104 |
105 |
106 | def getgraph(username, password):
107 | """Get the Graph Object and return it.
108 | You must specify a Chinese font such as `SimHei` in ~/.matplotlib/matplotlibrc"""
109 | uid = login(username, password)
110 | dict_root = getdict(uid) # Get root tree
111 |
112 | G = nx.Graph() # Create a Graph object
113 | for uid1, uname1 in dict_root.items():
114 | # Encode Chinese characters for matplotlib **IMPORTANT**
115 | # if you want to draw Chinese labels,
116 | uname1 = unicode(uname1, 'utf8')
117 | G.add_node(uname1)
118 | for uid2, uname2 in dict_root.items():
119 | uname2 = unicode(uname2, 'utf8')
120 | # Not necessary for networkx
121 | if uid2 == uid1:
122 | continue
123 | if getrelations(uid1, uid2):
124 | G.add_edge(uname1, uname2)
125 |
126 | return G
127 |
128 |
129 | def draw_graph(username, password, filename='graph.txt', label_flag=True, remove_isolated=True, different_size=True, iso_level=10, node_size=40):
130 | """Reading data from file and draw the graph.If not exists, create the file and re-scratch data from net"""
131 | print "Generating graph..."
132 | try:
133 | with open(filename, 'r') as f:
134 | G = p.load(f)
135 | except:
136 | G = getgraph(username, password)
137 | with open(filename, 'w') as f:
138 | p.dump(G, f)
139 | #nx.draw(G)
140 | # Judge whether remove the isolated point from graph
141 | if remove_isolated is True:
142 | H = nx.empty_graph()
143 | for SG in nx.connected_component_subgraphs(G):
144 | if SG.number_of_nodes() > iso_level:
145 | H = nx.union(SG, H)
146 | G = H
147 |     # Adjust graph for better presentation
148 | if different_size is True:
149 | L = nx.degree(G)
150 | G.dot_size = {}
151 | for k, v in L.items():
152 | G.dot_size[k] = v
153 | node_size = [G.dot_size[v] * 10 for v in G]
154 | pos = nx.spring_layout(G, iterations=50)
155 | nx.draw_networkx_edges(G, pos, alpha=0.2)
156 | nx.draw_networkx_nodes(G, pos, node_size=node_size, node_color='r', alpha=0.3)
157 | # Judge whether shows label
158 | if label_flag is True:
159 | nx.draw_networkx_labels(G, pos, alpha=0.5)
160 | #nx.draw_graphviz(G)
161 | plt.show()
162 |
163 | return G
164 |
165 | if __name__ == "__main__":
166 | G = draw_graph(USERNAME, PASSWORD, filename='graph.txt', label_flag=LABEL_FLAG, remove_isolated=REMOVE_ISOLATED, different_size=DIFFERENT_SIZE, iso_level=ISO_LEVEL, node_size=NODE_SIZE)
167 |
--------------------------------------------------------------------------------
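
A sketch of the post-processing draw_graph() applies, run on a toy graph instead of scraped data. It assumes networkx 2.x, where connected_component_subgraphs() (used above) no longer exists, so small components are filtered via connected_components() instead.

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([('a', 'b'), ('b', 'c'), ('a', 'c'), ('d', 'e')])

    # drop small connected components (the ISO_LEVEL idea), here anything under 3 nodes
    big = [c for c in nx.connected_components(G) if len(c) >= 3]
    H = G.subgraph(set().union(*big)).copy() if big else nx.empty_graph()

    # scale node size by degree, as DIFFERENT_SIZE does, then lay the graph out
    sizes = [10 * d for _, d in H.degree()]
    pos = nx.spring_layout(H, iterations=50)
    print(sorted(H.nodes()), sizes)
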
/python/tagcloud.py:
--------------------------------------------------------------------------------
1 | #import string
2 | import random
3 | import sys
4 | #from pytagcloud import create_tag_image, make_tags
5 | #from pytagcloud.lang.counter import get_tag_counts
6 |
7 | filename = sys.argv[1]
8 |
9 | # check your own parameter
10 | # 400 0 9 1 4 for Gresburg.txt
11 | # default for command line
12 | boxsize = 600
13 | basescale = 10
14 | fontScale = 0.5
15 | omitnumber = 5
16 | omitlen = 0
17 |
18 |
19 | def cmd2string(filename):
20 | '''accept the filename and return the string of cmd'''
21 | chist = []
22 |
23 | # Open the file and store the history in chist
24 | with open(filename, 'r') as f:
25 | chist = f.readlines()
26 | # print chist
27 |
28 | for i in range(len(chist)):
29 | chist[i] = chist[i].split()
30 | chist[i] = chist[i][1]
31 | ss = ''
32 | for w in chist:
33 | if w != 'sudo' and w != 'pacman':
34 | ss = ss + ' ' + w
35 |
36 | return ss
37 |
38 |
39 | def string2dict(string, dic):
40 | """split a string into a dict record its frequent"""
41 | wl = string.split()
42 | for w in wl:
43 | if w == '\n':
44 | continue
45 | # if len(w) <= 3:
46 | # continue
47 | if w not in dic:
48 | dic[w] = 1
49 | else:
50 | dic[w] += 1
51 | return dic
52 |
53 |
54 | def makeHTMLbox(body, width):
55 | """takes one long string of words and a width(px) then put them in an HTML box"""
56 |     boxStr = """...%s..."""  # [HTML box template truncated in this dump]
58 | return boxStr % (str(width), body)
59 |
60 |
61 | def makeHTMLword(body, fontsize):
62 | """take words and fontsize, and create an HTML word in that fontsize."""
63 | #num = str(random.randint(0,255))
64 | # return random color for every tags
65 | color = 'rgb(%s, %s, %s)' % (str(random.randint(0, 255)), str(random.randint(0, 255)), str(random.randint(0, 255)))
66 | # get the html data
67 |     wordStr = '...%s...'  # [HTML span template truncated in this dump]
68 | return wordStr % (str(fontsize), color, body)
69 |
70 |
71 | #def generatetagcloud(string, filename):
72 | #    """accept a string and generate a tag cloud using pytagcloud"""
73 | #    tags = make_tags(get_tag_counts(string), minsize=10, maxsize=120)
74 | #    create_tag_image(tags, filename.split('.')[0] + '.' + 'png', background=(0, 0, 0), size=(800, 600), fontname='Droid Sans', rectangular=False)
75 |
76 |
77 | def main():
78 | # get the html data first
79 | wd = {}
80 | s = cmd2string(filename)
81 | wd = string2dict(s, wd)
82 | vkl = [(k, v) for k, v in wd.items() if v >= omitnumber and len(k) > omitlen] # kick off less used cmd
83 | words = ""
84 | for w, c in vkl:
85 | words += makeHTMLword(w, int(c * fontScale + basescale)) # These parameter looks good
86 | html = makeHTMLbox(words, boxsize)
87 | # dump it to a file
88 | with open(filename.split('.')[0] + '.' + 'html', 'wb') as f:
89 | f.write(html)
90 | return html
91 |
92 |
93 | #generatetagcloud(file2string(filename), filename)
94 | #print "suscessfully created"
95 | if __name__ == "__main__":
96 | main()
97 |
--------------------------------------------------------------------------------
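
The heart of tagcloud.py in a few lines: count word frequencies and turn each count into a font size with `count * fontScale + basescale`. collections.Counter stands in for string2dict(), and the span markup is only illustrative, since the script's own HTML template is truncated in this dump.

    from collections import Counter

    text = 'ls cd ls vim git git git ls'
    basescale, font_scale = 10, 0.5                 # same knobs as above

    counts = Counter(text.split())
    spans = ['<span style="font-size:%dpx">%s</span>'
             % (int(c * font_scale + basescale), w)
             for w, c in counts.most_common()]
    print('\n'.join(spans))
    # ls and git get 11px spans, the one-off commands get 10px
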
/python/xiqu.py:
--------------------------------------------------------------------------------
1 | #! /bin/env python
2 | # -*- coding: gbk -*-
3 |
4 | """Download xiqu"""
5 |
6 | import urllib
7 | import urllib2
8 | import gevent
9 | from gevent import monkey
10 | import re
11 | from urlparse import urlparse
12 | from posixpath import basename
13 |
14 |
15 | monkey.patch_all()
16 |
17 |
18 | def worker(reg, url):
19 | """docstring for worker"""
20 | res = urllib.urlopen(url)
21 |     reg3 = re.compile('...')  # [pattern and lines 22-56 truncated in this dump]
57 | #reg2 = re.compile('[^<]+([d+])')
58 | #res = urllib.urlopen("http://www.00394.net/yuju/yujump3/list_17_1.html")
59 | #html = res.read()
60 | #print html
61 | #page_num = re.findall(reg2, html)
62 | page_num = 9
63 | for i in range(page_num):
64 | url = "http://www.00394.net/yuju/yujump3/list_17_" + str(i + 1) + ".html"
65 | musicArray = worker(reg, url)
66 | print musicArray
67 | jobs = []
68 | for (name, path) in musicArray:
69 | jobs.append(gevent.spawn(grun, path, name))
70 | gevent.joinall(jobs)
71 |
--------------------------------------------------------------------------------
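
The concurrency pattern xiqu.py uses, reduced to a sketch (Python 2, matching the script): monkey-patch the standard library, spawn one greenlet per download, and wait for them all. The URLs and filenames here are placeholders, not the ones the script scrapes.

    from gevent import monkey
    monkey.patch_all()

    import gevent
    import urllib

    def fetch(url, name):
        urllib.urlretrieve(url, name)   # blocking I/O, but cooperative after patching
        print('done %s' % name)

    files = [('http://example.com/a.mp3', 'a.mp3'),
             ('http://example.com/b.mp3', 'b.mp3')]
    jobs = [gevent.spawn(fetch, url, name) for url, name in files]
    gevent.joinall(jobs)
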
/python/yapmg.py:
--------------------------------------------------------------------------------
1 | #! /bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 |
5 | '''
6 | YAPMG -- Yet Another PhotoMosaic Generator written in Python
7 | '''
8 |
9 | import Image
10 | import ImageOps
11 | import os
12 | import random
13 | import ImageStat
14 | import cPickle as p
15 |
16 | __author__ = "Reverland (lhtlyy@gmail.com)"
17 |
18 |
19 | def add_frame(image):
20 | '''Add frame for image.'''
21 | im = ImageOps.expand(image, border=int(0.01 * max(image.size)), fill=0xffffff)
22 | return im
23 |
24 |
25 | def rotate_image(image, degree):
26 | '''Rotate images for specific degree. Expand to show all'''
27 | if image.mode != 'RGBA':
28 | image = image.convert('RGBA')
29 | im = image.rotate(degree, expand=1)
30 | return im
31 |
32 |
33 | def drop_shadow(image, offset, border=0, shadow_color=0x444444):
34 | """Add shadows for image"""
35 |     # Calculate size
36 | fullWidth = image.size[0] + abs(offset[0]) + 2 * border
37 | fullHeight = image.size[1] + abs(offset[1]) + 2 * border
38 | # Create shadow, hardcode color
39 | shadow = Image.new('RGBA', (fullWidth, fullHeight), (0, 0, 0))
40 | # Place the shadow, with required offset
41 | shadowLeft = border + max(offset[0], 0) # if <0, push the rest of the image right
42 | shadowTop = border + max(offset[1], 0) # if <0, push the rest of the image down
43 | shadow.paste(shadow_color, [shadowLeft, shadowTop, shadowLeft + image.size[0], shadowTop + image.size[1]])
44 | shadow_mask = shadow.convert("L")
45 | # Paste the original image on top of the shadow
46 | imgLeft = border - min(offset[0], 0) # if the shadow offset was <0, push right
47 | imgTop = border - min(offset[1], 0) # if the shadow offset was <0, push down
48 | shadow.putalpha(shadow_mask)
49 | shadow.paste(image, (imgLeft, imgTop))
50 | return shadow
51 |
52 |
53 | def process_image(filename, newname):
54 | '''Convert the image to PNG so transparency is supported.'''
55 | if filename.split('.')[-1] != 'png':
56 | print filename
57 | im = Image.open(filename)
58 | im.save(newname + '.png')
59 | print "processing image file %s" % filename
60 | return 1
61 |
62 |
63 | def process_directory(path):
64 | os.chdir(path)
65 | count = 1
66 | for filename in os.listdir(path):
67 | ext = filename.split('.')[-1]
68 | if ext == 'jpeg' or ext == 'jpg':
69 | try:
70 | process_image(filename, str(count))
71 | os.remove(filename)
72 | count += 1
73 | except:
74 | print "Can't process %s" % filename
75 | continue
76 | return 1
77 |
78 |
79 | def thumbnail(im, size):
80 | """Thumbnail the image, converting to RGBA if needed."""
81 | if im.mode != 'RGBA':
82 | im = im.convert('RGBA')
83 | im.thumbnail(size, Image.ANTIALIAS)
84 | return im
85 |
86 |
87 | # Just for fun
88 | def chao_image(path, size=(800, 800), thumbnail_size=(50, 50), shadow_offset=(10, 10), background_color=0xffffff):
89 | image_all = Image.new('RGB', size, background_color)
90 | for image in os.listdir(path):
91 | if image.split('.')[-1] == 'png':
92 | im = Image.open(image)
93 | degree = random.randint(-30, 30)
94 | im = thumbnail(rotate_image(drop_shadow(add_frame(im), shadow_offset), degree), thumbnail_size)
95 | image_all.paste(im, (random.randint(-thumbnail_size[0], size[0]), random.randint(-thumbnail_size[1], size[1])), im)
96 | return image_all
97 |
98 |
99 | ## May not be useful
100 | def rgb2xyz(im):
101 | """Convert RGB to the CIE XYZ color space."""
102 | rgb2xyz_matrix = (0.412453, 0.357580, 0.180423, 0, 0.212671, 0.715160, 0.072169, 0, 0.019334, 0.119193, 0.950227, 0)
103 | out = im.convert("RGB", rgb2xyz_matrix)
104 | return out
105 |
106 |
107 | def average_image(im):
108 | """return average (r,g,b) for image"""
109 | color_vector = [int(x) for x in ImageStat.Stat(im).mean]
110 | return color_vector
111 |
112 |
113 | def compare_vectors(v1, v2):
114 | """Return the squared Euclidean distance between two equal-length vectors."""
115 | if len(v1) == len(v2):
116 | distance = 0
117 | for i in xrange(len(v1)):
118 | distance += (v1[i] - v2[i]) ** 2
119 | return distance
120 | else:
121 | print "vectors do not match in dimensions"
122 |
123 |
124 | #for r, g, b in list(im.getdata())
125 | def tile_dict(path):
126 | """Return list of average (R,G,B) for image in this path as dict."""
127 | dic = {}
128 | for image in os.listdir(path):
129 | if image.split('.')[-1] == 'png':
130 | try:
131 | im = Image.open(image)
132 | except:
133 | print "image file %s cannot be opened" % image
134 | continue
135 | if im.mode != 'RGB':
136 | im = im.convert('RGB')
137 | dic[image] = average_image(im)
138 | return dic
139 |
140 |
141 | def thumbnail_background(im, scale):
142 | """Thumbnail the background image and return its new size."""
143 | newsize = im.size[0] / scale, im.size[1] / scale
144 | im.thumbnail(newsize)
145 | print 'thumbnail size (= number of tiles): %d X %d' % im.size
146 | return im.size
147 |
148 |
149 | def find_similar(lst, dic):
150 | """For a color lst [R, G, B], find which entries in dic are most similar. Return the 10 closest as (score, filename) pairs."""
151 | similar = {}
152 | for k, v in dic.items():
153 | similar[k] = compare_vectors(v, lst)
154 | # if len(v) != len(lst):
155 | # print v, len(v), lst, len(lst)
156 | similar = [(v, k) for k, v in similar.items()]  # (score, filename) pairs; keying a dict by score would lose ties
157 | similar.sort()
158 | return similar[:10]
159 |
160 |
161 | def get_image_list(im, dic):
162 | """Given the thumbnailed image and the tile dict, return the chosen tile filenames in pixel order (as a list)."""
163 | lst = list(im.getdata())
164 | tiles = []
165 | for i in range(len(lst)):
166 | #print find_similar(lst[i], dic)[random.randrange(10)][1]
167 | tiles.append(find_similar(lst[i], dic)[random.randrange(10)][1])
168 | return tiles
169 |
170 |
171 | def paste_chaos(image, tiles, size, shadow_off_set=(30, 30)):
172 | """Paste tiles in a scattered (chaos) layout. size is the thumbnailed background size, i.e. how many tiles per row and column."""
173 | # image_all = Image.new('RGB', image.size, 0xffffff)
174 | image_all = image
175 | lst = range(len(tiles))
176 | random.shuffle(lst)
177 | fragment_size = (image.size[0] / size[0], image.size[1] / size[1])
178 | print 'tile size %d X %d' % fragment_size
179 | print 'number of tiles per iteration: %d' % len(lst)
180 | for i in lst:
181 | im = Image.open(tiles[i])
182 | degree = random.randint(-20, 20)
183 | im = thumbnail(rotate_image(drop_shadow(add_frame(im), shadow_off_set), degree), (fragment_size[0] * 3 / 2, fragment_size[1] * 3 / 2))
184 | x = i % size[0] * fragment_size[0] + random.randrange(-fragment_size[0] / 2, fragment_size[0] / 2)  # grid column plus random jitter
185 | y = i / size[0] * fragment_size[1] + random.randrange(-fragment_size[1] / 2, fragment_size[1] / 2)  # grid row plus random jitter
186 | # print x, y
187 | image_all.paste(im, (x, y), im)
188 | return image_all
189 |
190 |
191 | def paste_classic(image, tiles, size):  # paste tiles on a regular grid, no rotation or shadow
192 | fragment_size = (image.size[0] / size[0], image.size[1] / size[1])
193 | for i in range(len(tiles)):
194 | x = i % size[0] * fragment_size[0]
195 | y = i / size[0] * fragment_size[1]
196 | im = Image.open(tiles[i])
197 | im = thumbnail(im, (fragment_size[0] * 3 / 2, fragment_size[1] * 3 / 2))
198 | image.paste(im, (x, y), im)
199 | return image
200 |
201 |
202 | def main(filename, tile_size, scale, iteration=1, style='classic', path='./'):
203 | # 0. open the big image to be turned into a mosaic
204 | print "open %s" % filename
205 | im = Image.open(filename)
206 | # 1. convert tile images in the directory to PNG to support transparency
207 | print "process directory %s" % path
208 | process_directory(path)
209 | # 2. build (or load a cached) average-color dict for the tiles in path
210 | print "get tile dict for path `%s`" % path
211 | try:
212 | with open('dic.txt', 'rb') as f:
213 | dic = p.load(f)
214 | except:
215 | dic = tile_dict(path)
216 | with open('dic.txt', 'wb') as f:
217 | p.dump(dic, f)
218 | # 3. thumbnail the big image for comparison
219 | print "thumbnail background for compare"
220 | # tile_size = 30  # shrink the original to 1/tile_size; this is also the size of each tile
221 | # scale = 3  # magnification factor applied to the original image
222 | big_size = im.size[0] * scale, im.size[1] * scale
223 | im_big = Image.new('RGB', big_size, 0xffffff)
224 | imb_t_size = thumbnail_background(im, tile_size)
225 | print "how many tiles: %d X %d" % imb_t_size
226 | print 'number of iterations: %d' % iteration
227 | if style == 'chaos':
228 | for i in range(iteration):
229 | print 'iteration: %d' % (i + 1)
230 | # 4. get the list of small tile images for the mosaic
231 | print "get pic list"
232 | im_tiles = get_image_list(im, dic)
233 | # 5. paste in chaos style
234 | print "generate final image"
235 | im_big = paste_chaos(im_big, im_tiles, imb_t_size)
236 | elif style == 'classic':
237 | im_tiles = get_image_list(im, dic)
238 | im_big = paste_classic(im_big, im_tiles, imb_t_size)
239 | return im_big
240 |
241 |
242 | if __name__ == '__main__':
243 | #im = main('../mm.jpg', 30, 5, 2)
244 | im = main('../mm.jpg', 30, 5)
245 | im.save('../final3.png')
246 | im.show()
247 |
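248 | # Chaos-style example (a sketch mirroring the call above; the file names and the
249 | # iteration count are placeholders):
250 | #   im = main('../mm.jpg', tile_size=30, scale=5, iteration=2, style='chaos')
251 | #   im.save('../final_chaos.png')
252 | # tile_size=30 shrinks the background to 1/30, so every thumbnail pixel becomes
253 | # one tile; scale=5 makes the output canvas 5x the original size.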
--------------------------------------------------------------------------------
/update.sh:
--------------------------------------------------------------------------------
1 | #! /bin/sh
2 |
3 | git add .
4 | git commit -m "update"
5 | git push github master
6 |
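7 | # Assumes a remote named `github` exists; it can be added once with, e.g.:
8 | #   git remote add github git@github.com:<user>/<repo>.git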
--------------------------------------------------------------------------------