├── .gitignore
├── 0xMirror
│   ├── Readme.md
│   ├── Screenshot.png
│   └── mirror.py
├── Facebook
│   ├── .gitignore
│   ├── Conversations
│   │   ├── .gitignore
│   │   ├── config.py.example
│   │   ├── messages.py
│   │   ├── output.py
│   │   └── plot-count.py
│   ├── Readme.md
│   └── Wall Posts
│       └── get_posts.py
├── Find Untagged MP3s
│   ├── Readme.md
│   └── find.py
├── Geeks For Geeks
│   ├── .gitignore
│   ├── Readme.md
│   ├── g4g.py
│   ├── links.py
│   └── screenshot.png
├── Github Contributions
│   ├── .gitignore
│   ├── Readme.md
│   └── contribs.py
├── Goodreads Quotes
│   ├── GoodReads-Quotes.py
│   └── Readme.md
├── LICENSE
├── Last.fm Backup
│   ├── .gitignore
│   ├── Readme.md
│   └── last-backup.py
├── Last.fm Plays
│   ├── HelperFunctions.py
│   ├── Readme.md
│   ├── ScrobblesToday.py
│   ├── TopTracks-All.py
│   └── TopTracks.py
├── MB Chatlogs
│   ├── .gitignore
│   ├── Download Logs.py
│   ├── HelperFunctions.py
│   └── Readme.md
├── MITx - 6.00.1x Solutions
│   ├── Quiz
│   │   └── Test.py
│   ├── Week 2 PSet 1
│   │   ├── W2 PS1 - 1 - Count Vowels.py
│   │   ├── W2 PS1 - 2 - Count bob.py
│   │   └── W2 PS1 - 3 - Longest Alphabetical.py
│   └── Week 2 PSet 2
│       ├── W2 PS2 - 1 - Calculating the Minimum.py
│       └── W2 PS2 - 2 - Paying off the debt.py
├── README.md
├── Rename TED
│   ├── rename_ted.py
│   └── ted_talks_list.py
├── Shell History
│   ├── Readme.md
│   └── history.py
├── Sphinx Linkfix
│   ├── Readme.md
│   └── linkfix.py
├── Sublime Text 3 Plugins
│   ├── Commands.sublime-commands
│   ├── OpenFolder.py
│   ├── README.md
│   └── TimeStamp.py
├── WP XML to Octopress MD
│   ├── Readme.md
│   ├── Wordpress Export.xml
│   └── XML to MD.py
└── py.ico
/.gitignore:
--------------------------------------------------------------------------------
1 | *.sublime-project
2 | *.sublime-workspace
3 |
4 | Help/
5 | /Help/
6 | MP3 Metadata/mutagen3
7 |
8 | *.txt
9 |
10 | *.xml
11 |
12 | *.py[cod]
13 |
14 | # C extensions
15 | *.so
16 |
17 | # Packages
18 | *.egg
19 | *.egg-info
20 | dist
21 | build
22 | eggs
23 | parts
24 | bin
25 | var
26 | sdist
27 | develop-eggs
28 | .installed.cfg
29 | lib
30 | lib64
31 |
32 | # Installer logs
33 | pip-log.txt
34 |
35 | # Unit test / coverage reports
36 | .coverage
37 | .tox
38 | nosetests.xml
39 |
40 | # Translations
41 | *.mo
42 |
43 | # Mr Developer
44 | .mr.developer.cfg
45 | .project
46 | .pydevproject
47 |
--------------------------------------------------------------------------------
/0xMirror/Readme.md:
--------------------------------------------------------------------------------
1 | # 0xMirror
2 |
3 | Creates a zero byte mirror of a folder at another folder.
4 |
5 | 
6 |
7 | __Why?__
8 |
9 | My laptop conked out once, and I had no idea _what_ exactly I would lose if it never booted up again.
10 |
11 | __Shout-outs:__
12 |
13 | [@iCHAIT](http://github.com/ichait/) for https://github.com/iCHAIT/0xMirror/
14 |
15 | [@kwikadi](https://github.com/kwikadi/) for https://github.com/kwikadi/Raid-X/
16 |
17 | [@nickedes](https://github.com/nickedes/) for https://gist.github.com/nickedes/751a971019da869008e6/
18 |
19 | ## Todo
20 |
21 | * Use `scandir` instead of `os.walk`
22 |
23 | * Support for multiple drives
24 |
25 | * Fix the bug that requires the destination to not be inside the source
26 |
27 | * Create a zip file (while mirroring) and copy the file to the Dropbox folder on completion.
28 |
29 | * A Progress Bar? (OH YEAH!!)
30 |
31 | * Display Stats?
32 | * Number of files/folders mirrored.
33 | * ~~Errors, if any?~~
34 |
35 | * Output the filepaths to a text file?
36 |
--------------------------------------------------------------------------------
/0xMirror/Screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dufferzafar/Python-Scripts/4816a0efa25de6e0c9c4cc5ad86d10cef871e15d/0xMirror/Screenshot.png
--------------------------------------------------------------------------------
/0xMirror/mirror.py:
--------------------------------------------------------------------------------
1 | """
2 | Create a zero byte mirror of src_root at dest_root.
3 |
4 | `ignore_dirs` are not traversed.
5 | """
6 |
7 | import os
8 | import time
9 | import shutil
10 |
11 | src_root = "D:\\Documents"
12 | dest_root = "F:\\Mirror"
13 |
14 | ignore_dirs = ['.git', 'env', "__pycache__", "node_modules", "$RECYCLE.BIN", "System Volume Information"]
15 |
16 | # Todo: Used only while debugging
17 | if os.path.exists(dest_root):
18 | shutil.rmtree(dest_root)
19 |
20 | # For files that errored
21 | errors = []
22 |
23 | # Use for execution time calculation
24 | start_time = time.time()
25 |
26 | # Todo: Replace walk with scandir?
27 | for cur_dir, dirs, files in os.walk(src_root):
28 |
29 | # Removing entries from `dirs` results in those dirs
30 | # not being traversed.
31 | for ignore in ignore_dirs:
32 | if ignore in dirs:
33 | dirs.remove(ignore)
34 |
35 | # Bug: Both the paths should end with a slash
36 | # or this will fail
37 | dest_dir = cur_dir.replace(src_root, dest_root)
38 |
39 | # Create empty directories
40 | for dirname in dirs:
41 | destination = os.path.join(dest_dir, dirname)
42 |
43 | # Note: What if an error occurs here?
44 | if not os.path.exists(destination):
45 | os.makedirs(destination)
46 |
47 | # Create empty files at destination
48 | for filename in files:
49 | destination = os.path.join(dest_dir, filename)
50 |
51 | try:
52 | # Pythonic way to `touch` a file?
53 | f = open(destination, "w")
54 | f.close()
55 | # Fix the timestamps of the files at destination
56 | stinfo = os.stat(os.path.join(cur_dir, filename))
57 | os.utime(destination, (stinfo.st_atime, stinfo.st_mtime))
58 | except:
59 | errors.append(os.path.join(cur_dir, filename))
60 |
61 | # All Done!
62 | print("Mirror completed in %.2f seconds with %d errors.\n" %
63 | ((time.time() - start_time), len(errors)))
64 |
65 | if errors:
66 | print("Files that errored:\n\n", '\n'.join(errors))
67 |
--------------------------------------------------------------------------------
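The 0xMirror Readme's todo list suggests switching from `os.walk` to `scandir`, and mirror.py wonders in a comment about a Pythonic way to `touch` a file. A minimal sketch of what that could look like, assuming Python 3.5+ (where `os.scandir` and `pathlib` are built in); the function signature and the ignore list are illustrative, not the script's actual interface:

```python
import os
from pathlib import Path

def mirror(src, dest, ignore_dirs=frozenset({".git", "env", "__pycache__"})):
    os.makedirs(dest, exist_ok=True)

    for entry in os.scandir(src):
        target = os.path.join(dest, entry.name)

        if entry.is_dir(follow_symlinks=False):
            if entry.name not in ignore_dirs:
                mirror(entry.path, target, ignore_dirs)
        else:
            # Path.touch() is the "Pythonic touch" the script's comment asks about
            Path(target).touch()
            # Copy the original timestamps, reusing the stat scandir already did
            st = entry.stat()
            os.utime(target, (st.st_atime, st.st_mtime))
```

Reusing `entry.stat()` would also avoid the extra `os.stat` call the current version makes for every file.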
/Facebook/.gitignore:
--------------------------------------------------------------------------------
1 | # Downloaded Data
2 | Data*/
3 |
4 | # Various Access Tokens
5 | tokens.py
6 |
7 | ###Python###
8 |
9 | # Byte-compiled / optimized / DLL files
10 | __pycache__/
11 | *.py[cod]
12 |
13 | # C extensions
14 | *.so
15 |
16 | # Distribution / packaging
17 | .Python
18 | env/
19 | build/
20 | develop-eggs/
21 | dist/
22 | downloads/
23 | eggs/
24 | lib/
25 | lib64/
26 | parts/
27 | sdist/
28 | var/
29 | *.egg-info/
30 | .installed.cfg
31 | *.egg
32 |
33 | # PyInstaller
34 | # Usually these files are written by a python script from a template
35 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
36 | *.manifest
37 | *.spec
38 |
39 | # Installer logs
40 | pip-log.txt
41 | pip-delete-this-directory.txt
42 |
43 | # Unit test / coverage reports
44 | htmlcov/
45 | .tox/
46 | .coverage
47 | .cache
48 | nosetests.xml
49 | coverage.xml
50 |
51 | # Translations
52 | *.mo
53 | *.pot
54 |
55 | # Django stuff:
56 | *.log
57 |
58 | # Sphinx documentation
59 | docs/_build/
60 |
61 | # PyBuilder
62 | target/
63 |
--------------------------------------------------------------------------------
/Facebook/Conversations/.gitignore:
--------------------------------------------------------------------------------
1 | # Random stuff I've been testing
2 | *.txt
3 |
4 | # File containing Configuration
5 | config.py
6 |
7 | # Conversation Dumps
8 | Messages/
9 |
10 | # Plots
11 | *.png
12 |
--------------------------------------------------------------------------------
/Facebook/Conversations/config.py.example:
--------------------------------------------------------------------------------
1 | # Your Facebook User ID
2 | me = "100002385400990"
3 |
4 | # Set the value you see in 'Form Data'
5 | form_data = {
6 | "__a": "",
7 | "__dyn": "",
8 | "__req": "",
9 | "__rev": "",
10 | "__user": "",
11 | "client": "web_messenger",
12 | "fb_dtsg": "",
13 | "ttstamp": ""
14 | }
15 |
16 | headers = {
17 | # Set the value you see in 'Request Headers'
18 | "cookie": "",
19 | # You don't have to modify these, but feel free to.
20 | "accept": "*/*",
21 | "accept-encoding": "gzip,deflate",
22 | "accept-language": "en-US,en;q=0.8",
23 | "cache-control": "no-cache",
24 | "content-type": "application/x-www-form-urlencoded",
25 | "origin": "https://www.facebook.com",
26 | "pragma": "no-cache",
27 | "referer": "https://www.facebook.com/messages/zuck",
28 | "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.122 Safari/537.36"
29 | }
30 |
31 | # You'll have to generate a list of friend IDs on your own.
32 | friends = {
33 | "facebok_id_of_friend": "name_of_friend"
34 | }
35 |
--------------------------------------------------------------------------------
/Facebook/Conversations/messages.py:
--------------------------------------------------------------------------------
1 | """
2 | Backup your entire facebook conversations.
3 |
4 | Friend list is read from a config file.
5 | """
6 |
7 | import json
8 | import os
9 | import requests
10 | import sys
11 | import time
12 |
13 | from config import friends, form_data, headers
14 |
15 |
16 | # Can be used to download a range of messages
17 | limit = 2000
18 |
19 | # Used to find the end of conversation thread
20 | END_MARK = "end_of_history"
21 |
22 | FB_URL = "https://www.facebook.com/ajax/mercury/thread_info.php"
23 |
24 | ROOT = "Messages"
25 |
26 |
27 | def mkdir(f):
28 | """ Create a directory if it doesn't already exist. """
29 | if not os.path.exists(f):
30 | os.makedirs(f)
31 |
32 |
33 | def fetch(data):
34 | """ Fetch data from Facebook. """
35 |
36 | offset = [v for k, v in data.items() if 'offset' in k][0]
37 | limit = [v for k, v in data.items() if 'limit' in k][0]
38 |
39 | print("\t%6d - %6d" % (offset, offset+limit))
40 |
41 | response = requests.post(FB_URL, data=data, headers=headers)
42 |
43 | # Account for 'for (;;);' in the beginning of the text.
44 | return response.content[9:]
45 |
46 |
47 | def confirm(question):
48 |     """ Ask a yes/no question and return the answer as a bool. """
49 | valid = {'y': True, 'n': False}
50 | while True:
51 | sys.stdout.write(question + " [y/n]: ")
52 | choice = raw_input().lower()
53 | if choice in valid:
54 | return valid[choice]
55 | else:
56 | sys.stdout.write("Please respond with 'y' or 'n'.\n")
57 |
58 |
59 | if __name__ == '__main__':
60 |
61 | for friend_id, friend_name in friends.items():
62 |
63 |         if not confirm("Fetch conversation with '%s'?" % friend_name):
64 | continue
65 |
66 | print("Retrieving Messages of: %s" % friend_name)
67 |
68 | # Setup data directory
69 | dirname = os.path.join(ROOT, friend_name)
70 | mkdir(dirname)
71 |
72 |         # These parameters need to be reset every time
73 | offset = 0
74 | timestamp = 0
75 | data = {"payload": ""}
76 |
77 | # We want it ALL!
78 | while END_MARK not in data['payload']:
79 |
80 | form_data["messages[user_ids][%s][offset]" % friend_id] = offset
81 | form_data["messages[user_ids][%s][limit]" % friend_id] = limit
82 | form_data["messages[user_ids][%s][timestamp]" % friend_id] = str(timestamp)
83 |
84 | content = fetch(form_data)
85 |
86 | # Handle facebook rate limits
87 | while not content:
88 | print("Facebook Rate Limit Reached. Retrying after 30 secs")
89 | time.sleep(30)
90 | content = fetch(form_data)
91 |
92 | # Build JSON representation
93 | data = json.loads(content)
94 |
95 | # Dump Data
96 | filename = "%s.json" % (limit+offset)
97 | with open(os.path.join(dirname, filename), "w") as op:
98 | json.dump(data, op, indent=2)
99 |
100 | # Next!
101 | offset = offset + limit
102 | timestamp = data['payload']['actions'][0]['timestamp']
103 | time.sleep(2)
104 |
105 | # Make the form_data usable for the next user
106 | form_data.pop("messages[user_ids][%s][offset]" % friend_id)
107 | form_data.pop("messages[user_ids][%s][limit]" % friend_id)
108 |
109 | print("\t-----END-----")
110 |
--------------------------------------------------------------------------------
/Facebook/Conversations/output.py:
--------------------------------------------------------------------------------
1 | """
2 | Read the FB conversation dumps and write a nice log file.
3 |
4 | You should run messages.py first.
5 | """
6 |
7 | import os
8 | import json
9 | from collections import namedtuple
10 |
11 | from config import friends, me
12 |
13 | ROOT = "Messages"
14 |
15 | Message = namedtuple('Message', ['author', 'body'])
16 |
17 | for friend in os.listdir(ROOT):
18 |
19 | messages = {}
20 |
21 | # Read all the files of a friend & build a hashmap
22 | for file in os.listdir(os.path.join(ROOT, friend)):
23 |
24 | with open(os.path.join(ROOT, friend, file)) as inp:
25 |
26 | data = json.load(inp)
27 | for act in data['payload']['actions']:
28 |
29 | # BUG: Why wouldn't body be present?
30 | if 'body' in act:
31 | author = act['author'].replace('fbid:', '')
32 |
33 | if author == me:
34 | author = "Me"
35 | else:
36 | author = friends[author][:4]
37 |
38 | messages[act['timestamp']] = Message(author, act['body'])
39 |
40 | # Sort by timestamp and iterate
41 | for stamp in sorted(messages.keys())[:25]:
42 | m = messages[stamp]
43 | print('%s:\t%s' % (m.author, m.body.encode('ascii', 'ignore')))
44 |
45 | print('---------------------------------------------------')
46 |
--------------------------------------------------------------------------------
/Facebook/Conversations/plot-count.py:
--------------------------------------------------------------------------------
1 | """
2 | Plot the number of messages sent to / received from a friend.
3 |
4 | You should run messages.py first.
5 | """
6 |
7 | import os
8 | import json
9 |
10 | import time
11 | from datetime import datetime as DT
12 |
13 | import matplotlib.pyplot as plt
14 | import matplotlib.dates as mdates
15 |
16 | from messages import mkdir
17 |
18 | ROOT = "Messages"
19 | date_format = "%Y-%m-%d"
20 |
21 |
22 | def pretty_epoch(epoch, fmt):
23 | """ Convert timestamp to a pretty format. """
24 | return time.strftime(fmt, time.localtime(epoch))
25 |
26 |
27 | if __name__ == '__main__':
28 |
29 | mkdir('Plot')
30 |
31 | for friend in os.listdir(ROOT):
32 |
33 | print("Processing conversation with %s" % friend)
34 |
35 | messages = {}
36 |
37 | # Read all the files of a friend & build a hashmap
38 | for file in os.listdir(os.path.join(ROOT, friend)):
39 |
40 | with open(os.path.join(ROOT, friend, file)) as inp:
41 |
42 | data = json.load(inp)
43 | for act in data['payload']['actions']:
44 |
45 | # BUG: Why wouldn't body be present?
46 | if 'body' in act:
47 |
48 | # Facebook uses timestamps with 13 digits for milliseconds
49 | # precision, while Python only needs the first 10 digits.
50 | date = pretty_epoch(act['timestamp'] // 1000, date_format)
51 |
52 | if date in messages:
53 | messages[date] += 1
54 | else:
55 | messages[date] = 1
56 |
57 | # Begin creating a new plot
58 | plt.figure()
59 |
60 |         # Prepare the data
61 | x, y = [], []
62 | for date in messages.keys():
63 | x.append(mdates.date2num(DT.strptime(date, date_format)))
64 | y.append(messages[date])
65 |
66 | # Use custom date format
67 | plt.gca().xaxis.set_major_formatter(mdates.DateFormatter("%Y-%m"))
68 | plt.gca().xaxis.set_major_locator(mdates.MonthLocator())
69 |
70 | # Plot!
71 | plt.plot_date(x, y)
72 |
73 | # Ensure that the x-axis ticks don't overlap
74 | plt.gcf().autofmt_xdate()
75 | plt.gcf().set_size_inches(17, 9)
76 |
77 | # Save plot
78 | plt.title("Conversation with %s" % friend)
79 | plt.savefig("Plot/%s.png" % friend)
80 |
--------------------------------------------------------------------------------
/Facebook/Readme.md:
--------------------------------------------------------------------------------
1 | # Facebook Data Backup
2 |
3 | Fetch all sorts of data using the Graph API.
4 |
5 | # Plots
6 |
7 | * split data points into msgs sent & received
8 |
9 | * plot multiple friends on a single chart
10 | * with legends
11 |
12 | * use a smooth continuous line
13 |
14 | * day-of-week: Do we chat more on weekends?
15 |
16 | * most chatted week!
17 |
--------------------------------------------------------------------------------
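The Facebook Readme above lists splitting the plot into messages sent vs. received as a todo. A rough sketch of how the per-day counts could be split, reusing the Messages/&lt;friend&gt;/*.json dumps and the `me` id from config.py that output.py and plot-count.py already rely on; the JSON layout is assumed to match what those scripts read:

```python
import os
import json
import time
from collections import Counter

from config import me

ROOT = "Messages"

def daily_counts(friend):
    """Count messages per day, split into sent and received."""
    sent, received = Counter(), Counter()

    for file in os.listdir(os.path.join(ROOT, friend)):
        with open(os.path.join(ROOT, friend, file)) as inp:
            for act in json.load(inp)['payload']['actions']:
                if 'body' not in act:
                    continue

                # Facebook timestamps carry milliseconds; drop them
                day = time.strftime("%Y-%m-%d", time.localtime(act['timestamp'] // 1000))

                if act['author'].replace('fbid:', '') == me:
                    sent[day] += 1
                else:
                    received[day] += 1

    return sent, received
```

The two counters could then be fed to `plot_date` as separate series with a legend, which would cover the next item on the list as well.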
/Facebook/Wall Posts/get_posts.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | import time
4 |
5 | import facebook
6 | import requests
7 |
8 | from tokens import *
9 |
10 | graph = facebook.GraphAPI(USER_TOKEN)
11 | profile = graph.get_object("me")
12 | posts = graph.get_connections(profile['id'], 'posts')  # needed on a fresh run; overwritten when resuming from disk
13 |
14 | count = 1
15 | while True:
16 |
17 | filename = "Data/Posts/%d.json" % count
18 |
19 | # Write JSON to Disk, if it doesn't already exist.
20 | if not os.path.isfile(filename):
21 |
22 | with open(filename, "w") as out:
23 | json.dump(posts, out)
24 | print("Written data to: %s" % filename)
25 |
26 | nxt = posts['paging']['next']
27 | posts = requests.get(nxt).json()
28 |
29 | else:
30 |
31 | with open(filename) as inp:
32 | posts = json.load(inp)
33 | print("Loaded data from: %s" % filename)
34 |
35 | count += 1
36 |
37 | # There are two reasons for an empty response:
38 | #
39 | # 1. We got rate limited.
40 | # 2. This was actually the last post.
41 | if not posts["data"]:
42 |
43 | # Bug: An extraneous post gets created
44 | os.remove(filename)
45 | count -= 1
46 | print("Removed: %s" % filename)
47 |
48 | for interval in (120, 240, 480, 960):
49 | posts = requests.get(nxt).json()
50 |
51 | if not posts["data"]:
52 | print("Got an empty Response. Retrying in %d seconds." % interval)
53 | time.sleep(interval)
54 | else:
55 | break
56 | else:
57 | print("The End.")
58 | break
59 |
--------------------------------------------------------------------------------
/Find Untagged MP3s/Readme.md:
--------------------------------------------------------------------------------
1 | # Find Untagged MP3s
2 |
3 | I use [MusicBrainz Picard](http://picard.musicbrainz.org/) to tag all my downloaded music.
4 |
5 | This script finds songs that are not yet tagged and moves them to a separate folder.
6 |
--------------------------------------------------------------------------------
/Find Untagged MP3s/find.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | import re
4 | import os
5 | import glob
6 | import sys
7 |
8 | from mutagen.mp3 import MP3
9 |
10 | # Taken from flask_uuid: http://git.io/vmecV
11 | UUID_RE = re.compile(
12 | r'^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')
13 |
14 | # Musicbrainz Recording ID
15 | # https://picard.musicbrainz.org/docs/mappings/
16 | ufid = u'UFID:http://musicbrainz.org'
17 |
18 |
19 | def is_tagged(file):
20 | """ Determine whether an MP3 file is tagged. """
21 | tags = MP3(file)
22 | if ufid not in tags:
23 | return False
24 | else:
25 | return re.match(UUID_RE, tags[ufid].data) is not None
26 |
27 |
28 | def confirm(question):
29 | """ Confirm whether the files should be moved. """
30 | valid = {'y': True, 'n': False}
31 | while True:
32 | sys.stdout.write(question + " [y/n]: ")
33 | choice = raw_input().lower()
34 | if choice in valid:
35 | return valid[choice]
36 | else:
37 | sys.stdout.write("Please respond with 'y' or 'n'.\n")
38 |
39 |
40 | # Note: Should I just load them in Picard?
41 | def move(files):
42 | """ Move a list of files to 'Untagged' directory. """
43 | destination = "Untagged"
44 | if not os.path.exists(destination):
45 | os.mkdir(destination)
46 |
47 | for file in files:
48 | os.rename(file, os.path.join(destination, file))
49 |
50 | if __name__ == '__main__':
51 |
52 | # List of files found
53 | untagged = []
54 |
55 | # Todo: Handle files other than mp3
56 | for file in glob.glob("*.mp3"):
57 | if not is_tagged(file):
58 | print("Found: %s" % file)
59 | untagged.append(file)
60 |
61 | if confirm("\nMove untagged files?"):
62 | move(untagged)
63 |
--------------------------------------------------------------------------------
/Geeks For Geeks/.gitignore:
--------------------------------------------------------------------------------
1 | *.pdf
2 | *.zip
3 |
--------------------------------------------------------------------------------
/Geeks For Geeks/Readme.md:
--------------------------------------------------------------------------------
1 | # Geeks For Geeks Scraper
2 |
3 | Create PDFs from posts on Geeks for Geeks.
4 |
5 | 
6 |
7 | ## Todo
8 |
9 | * Handle Images
10 |
11 | * Improve Quality of the output
12 | * Syntax Highlight Code?
13 |
--------------------------------------------------------------------------------
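One way to approach the "Syntax Highlight Code?" todo above would be to run each post's `<pre>` blocks through Pygments before handing the HTML to `print_pdf()` in g4g.py. A sketch, assuming Pygments is installed and the snippets are C/C++:

```python
from bs4 import BeautifulSoup

from pygments import highlight
from pygments.lexers import CppLexer
from pygments.formatters import HtmlFormatter

def highlight_code(html):
    """Replace <pre> blocks with Pygments-highlighted HTML."""
    soup = BeautifulSoup(html, "html.parser")

    # noclasses=True inlines the styles, so no CSS has to survive the Qt printer
    formatter = HtmlFormatter(noclasses=True)

    for pre in soup.find_all("pre"):
        colored = highlight(pre.get_text(), CppLexer(), formatter)
        pre.replace_with(BeautifulSoup(colored, "html.parser"))

    return str(soup)
```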
/Geeks For Geeks/g4g.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import requests
4 |
5 | from bs4 import BeautifulSoup
6 |
7 | from PyQt4.QtGui import QTextDocument, QPrinter, QApplication
8 | from PyQt4.QtWebKit import QWebView
9 |
10 |
11 | def parse(url):
12 | """ Fetch a URL and return post content. """
13 |
14 | # Soupify!
15 | response = requests.get(url)
16 |
17 | if not response.ok:
18 | return ''
19 |
20 | soup = BeautifulSoup(response.text)
21 |
22 | # Remove big stuff!
23 | for elem in soup.findAll(['script', 'style', 'ins', 'iframe']):
24 | elem.extract()
25 |
26 | for elem in soup.select('div.comments-main'):
27 | elem.extract()
28 |
29 | content = soup.find(id="content")
30 | html = content.decode_contents()
31 |
32 | # # Details, baby!
33 | # block = ['div', 'Related Topics', 'twitter-share-button', 'g:plusone',
34 | # 'Please write comments']
35 |
36 | # lines = html.splitlines(True)
37 |
38 | # for line in lines:
39 | # for item in block:
40 | # if item in line:
41 | # lines.remove(line)
42 | # break
43 |
44 | # html = ''.join(lines)
45 | return html
46 |
47 |
48 | def print_pdf(html, filename):
49 | """ Print HTML to PDF. """
50 |
51 | wv = QWebView()
52 | wv.setHtml(html)
53 |
54 | # doc = QTextDocument()
55 | # doc.setHtml(html)
56 |
57 | printer = QPrinter()
58 | printer.setOutputFileName(filename)
59 | printer.setOutputFormat(QPrinter.PdfFormat)
60 | printer.setPageSize(QPrinter.A4)
61 | printer.setPageMargins(15, 15, 15, 15, QPrinter.Millimeter)
62 |
63 | # doc.print_(printer)
64 | wv.print_(printer)
65 |
66 | print("PDF Generated: " + filename)
67 |
68 | if __name__ == '__main__':
69 | app = QApplication(sys.argv)
70 |
71 | from links import topics, topic_sets
72 |
73 | # Fetch all set pages
74 | for topic in topic_sets:
75 | if os.path.isfile(topic[0] + ".pdf"):
76 | print("Skipping: " + topic[0])
77 | continue
78 |
79 | print("Working on: " + topic[0])
80 |
81 | html = ""
82 | # Fetch each set
83 | for num in range(1, topic[2] + 1):
84 | html += parse(topic[1] % num)
85 |
86 | print_pdf(html, topic[0] + ".pdf")
87 |
88 | # Fetch pages grouped by topics
89 | for topic in topics:
90 | if os.path.isfile(topic + ".pdf"):
91 | print("Skipping: " + topic)
92 | continue
93 |
94 | print("Working on: " + topic)
95 |
96 | html = ""
97 | for page in topics[topic]:
98 | html += parse(page)
99 |
100 | print_pdf(html, topic + ".pdf")
101 |
102 | QApplication.exit()
103 |
--------------------------------------------------------------------------------
/Geeks For Geeks/links.py:
--------------------------------------------------------------------------------
1 | topic_sets = [
2 | ("Analysis of Algorithms",
3 | "http://www.geeksforgeeks.org/analysis-algorithm-set-%d/", 5),
4 | ("Greedy Algorithms",
5 | "http://www.geeksforgeeks.org/greedy-algorithms-set-%d/", 7),
6 | ("Dynamic Programming",
7 | "http://www.geeksforgeeks.org/dynamic-programming-set-%d/", 21),
8 | ("Output of CPP Programs",
9 | "http://www.geeksforgeeks.org/output-of-c-program-set-%d/", 25),
10 | ("Output of C Programs",
11 | "http://www.geeksforgeeks.org/output-of-a-program-set-%d/", 28)
12 | ]
13 |
14 | topics = {
15 | "C vs C++": [
16 | "http://www.geeksforgeeks.org/extern-c-in-c/",
17 | "http://www.geeksforgeeks.org/g-fact-12-2/",
18 | "http://www.geeksforgeeks.org/write-c-program-produce-different-result-c/",
19 | "http://www.geeksforgeeks.org/g-fact-54/"
20 | ],
21 |
22 | "Reference Variables": [
23 | "http://www.geeksforgeeks.org/references-in-c/",
24 | "http://www.geeksforgeeks.org/g-fact-25/",
25 | "http://www.geeksforgeeks.org/when-do-we-pass-arguments-by-reference-or-pointer/"
26 | ],
27 |
28 | "Function Overloading in C++": [
29 | "http://www.geeksforgeeks.org/function-overloading-in-c/",
30 | "http://www.geeksforgeeks.org/function-overloading-and-const-functions/",
31 | "http://www.geeksforgeeks.org/g-fact-75/",
32 | "http://www.geeksforgeeks.org/does-overloading-work-with-inheritance/",
33 | "http://www.geeksforgeeks.org/can-main-overloaded-c/",
34 | "http://geeksquiz.com/default-arguments-c/"
35 | ],
36 |
37 | "new and delete": [
38 | "http://www.geeksforgeeks.org/malloc-vs-new/",
39 | "http://www.geeksforgeeks.org/g-fact-30/",
40 | "http://www.geeksforgeeks.org/g-fact-76/",
41 | "http://www.geeksforgeeks.org/can-a-c-class-have-an-object-of-self-type/",
42 | "http://www.geeksforgeeks.org/why-is-the-size-of-an-empty-class-not-zero-in-c/",
43 | "http://www.geeksforgeeks.org/some-interesting-facts-about-static-member-functions-in-c/",
44 | "http://www.geeksforgeeks.org/stati/"
45 | ],
46 |
47 | "Pointer": [
48 | "http://www.geeksforgeeks.org/this-pointer-in-c/",
49 | "http://www.geeksforgeeks.org/g-fact-77/",
50 | "http://www.geeksforgeeks.org/delete-this-in-c/",
51 | ],
52 |
53 | "Constructor and Destructor": [
54 | "http://geeksquiz.com/constructors-c/",
55 | "http://geeksquiz.com/copy-constructor-in-cpp/",
56 | "http://geeksquiz.com/destructors-c/",
57 | "http://www.geeksforgeeks.org/g-fact-26/",
58 | "http://www.geeksforgeeks.org/g-fact-22/",
59 | "http://www.geeksforgeeks.org/g-fact-13/",
60 | "http://www.geeksforgeeks.org/g-fact-32/",
61 | "http://www.geeksforgeeks.org/when-do-we-use-initializer-list-in-c/",
62 | "http://www.geeksforgeeks.org/c-internals-default-constructors-set-1/",
63 | "http://www.geeksforgeeks.org/private-destructor/",
64 | "http://www.geeksforgeeks.org/playing-with-destructors-in-c/",
65 | "http://www.geeksforgeeks.org/copy-elision-in-c/",
66 | "http://www.geeksforgeeks.org/c-default-constructor-built-in-types/",
67 | "http://www.geeksforgeeks.org/does-compiler-always-create-a-copy-constructor/",
68 | "http://www.geeksforgeeks.org/copy-constructor-argument-const/",
69 | "http://www.geeksforgeeks.org/advanced-c-virtual-constructor/",
70 | "http://www.geeksforgeeks.org/advanced-c-virtual-copy-constructor/",
71 | "http://www.geeksforgeeks.org/c-internals-default-constructors-set-1/",
72 | "http://www.geeksforgeeks.org/static-objects-destroyed/",
73 | "http://www.geeksforgeeks.org/possible-call-constructor-destructor-explicitly/"
74 | ],
75 |
76 | "Inheritance": [
77 | "http://www.geeksforgeeks.org/g-fact-4/",
78 | "http://www.geeksforgeeks.org/virtual-functions-and-runtime-polymorphism-in-c-set-1-introduction/",
79 | "http://www.geeksforgeeks.org/multiple-inheritance-in-c/",
80 | "http://www.geeksforgeeks.org/what-happens-when-more-restrictive-access-is-given-in-a-derived-class-method-in-c/",
81 | "http://www.geeksforgeeks.org/object-slicing-in-c/",
82 | "http://www.geeksforgeeks.org/g-fact-89/"
83 | ]
84 | }
85 |
--------------------------------------------------------------------------------
/Geeks For Geeks/screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dufferzafar/Python-Scripts/4816a0efa25de6e0c9c4cc5ad86d10cef871e15d/Geeks For Geeks/screenshot.png
--------------------------------------------------------------------------------
/Github Contributions/.gitignore:
--------------------------------------------------------------------------------
1 | Data*/
2 |
--------------------------------------------------------------------------------
/Github Contributions/Readme.md:
--------------------------------------------------------------------------------
1 | # Github Contributions
2 |
3 | Fetch & Store all your Github contributions.
4 |
--------------------------------------------------------------------------------
/Github Contributions/contribs.py:
--------------------------------------------------------------------------------
1 | import os
2 | import re
3 | import requests
4 |
5 | from bs4 import BeautifulSoup
6 |
7 |
8 | data = "Data"
9 | user = "dufferzafar"
10 | github = "https://github.com"
11 |
12 |
13 | def fetch_pages(years):
14 | """Fetch contribution pages from github and store them as html files."""
15 |
16 | def process_url(url, out):
17 | """Download a page and extract out the contributions."""
18 |
19 | print("%s\t====>\t%s" % (url, out))
20 |
21 | response = requests.get(url)
22 |
23 | soup = BeautifulSoup(response.text)
24 | contribs = soup.find_all('div', class_="contribution-activity-listing")[0]
25 | html = contribs.decode_contents()
26 |
27 | with open(out, "w") as out:
28 | out.write(html)
29 |
30 | gh_url = ("https://github.com/%s?tab=contributions"
31 | "&from=%d-%d-01&to=%d-%d-01")
32 |
33 | for year in years:
34 |
35 | # Create the destination folder
36 | folder = os.path.join(data, str(year))
37 | if not os.path.exists(folder):
38 | os.makedirs(folder)
39 |
40 | for month in range(1, 13):
41 | out = os.path.join(folder, str(month) + ".htm")
42 |
43 | if month == 12:
44 | params = (user, year, month, year+1, 1)
45 | else:
46 | params = (user, year, month, year, month+1)
47 |
48 | process_url(gh_url % params, out)
49 |
50 |
51 | def list_contributions(years):
52 | """
53 | List all repositories to which contributions have been made.
54 |
55 | You can then fetch commits off those repos.
56 | """
57 |
58 | repos = []
59 | issues = []
60 | pulls = []
61 |
62 | def process_file(file_):
63 | with open(file_) as inp:
64 | soup = BeautifulSoup(inp.read())
65 |
66 | for link in soup.find_all("a", class_="title"):
67 |
68 | url = link.get('href')
69 | if not url:
70 | continue
71 |
72 | if '/commits' in url:
73 | if url not in repos:
74 | repos.append(url)
75 |
76 | # Bug: Github website has endpoint 'pull'
77 | # while the API has 'pulls'. WUT!
78 | elif '/pull' in url:
79 | if url not in pulls:
80 | pulls.append(url)
81 |
82 | elif '/issues' in url:
83 | if url not in issues:
84 | issues.append(url)
85 |
86 | else:
87 | # Let's hope we never reach here!
88 | print(url)
89 |
90 | for year in years:
91 | for month in range(1, 13):
92 | process_file(os.path.join(data, str(year), str(month)) + ".htm")
93 |
94 | return repos, issues, pulls
95 |
96 | if __name__ == '__main__':
97 | # fetch_pages(range(2015, 2016))
98 | rpo, iss, pll = list_contributions([2012, 2013])
99 |
100 | # Worked on repos
101 | for r in sorted(rpo):
102 | print(github + r)
103 |
104 | print("\n")
105 |
106 | # Issues reported
107 | for i in sorted(iss):
108 | print(github + i)
109 |
110 | print("\n")
111 |
112 | # Pulls created
113 | for p in sorted(pll):
114 | print(github + p)
115 |
--------------------------------------------------------------------------------
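The docstring of `list_contributions()` notes that commits can then be fetched off the collected repos. A sketch of that next step against the public GitHub API; the exact href format ("/&lt;owner&gt;/&lt;repo&gt;/commits?author=...") and the use of unauthenticated requests (which are heavily rate limited) are assumptions:

```python
import requests

def fetch_commits(repo_link, author="dufferzafar"):
    """Turn a scraped repo link into a list of commits via the GitHub API."""
    # "/owner/repo/commits?author=..."  ->  "owner/repo"
    repo = repo_link.lstrip("/").split("/commits")[0]

    url = "https://api.github.com/repos/%s/commits" % repo
    return requests.get(url, params={"author": author}).json()

# Example: commits = fetch_commits(rpo[0])
```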
/Goodreads Quotes/GoodReads-Quotes.py:
--------------------------------------------------------------------------------
1 | ##################################################
2 | #
3 | # Goodreads Quotes Scraper
4 | #
5 | # Scrapes out all the quotes from GoodReads Quotes
6 | # URL. The results are stored in JSON/XML format.
7 | #
8 | # @dufferzafar
9 | #
10 | ##################################################
11 |
12 | # Required Libraries
13 | import urllib.request
14 | import os.path
15 | import re
16 |
17 | # Output Format
18 | import json
19 |
20 | # Make sure you have BeautifulSoup installed
21 | from bs4 import BeautifulSoup
22 |
23 | # The URL
24 | url = "https://www.goodreads.com/quotes/list/18654747-shadab-zafar"
25 |
26 | # The URL will be saved to this file
27 | fileName = "GRQuotesPage1.html"
28 |
29 | # Download the file only if it does not exist.
30 | if os.path.isfile(fileName):
31 | print("File Exists. Will Be Read.\n")
32 | else:
33 |     print("File Does Not Exist. Will Be Downloaded.")
34 | site = urllib.request.urlopen(url)
35 | data = site.read()
36 |
37 | f = open(fileName, "wb")
38 | f.write(data)
39 | f.close()
40 | print("File Downloaded.")
41 |
42 | # Create the soup.
43 | f = open(fileName)
44 | soup = BeautifulSoup(f)
45 | f.close()
46 |
47 | # The Debug file
48 | opFile = open("debug.txt", 'w')
49 |
50 | # User Metadata
51 | title = soup.find('title').string.replace("\n", " ")
52 | titleScrape = re.findall('\((.*?)\)', title)
53 |
54 | # Username and Total Quotes
55 | user = titleScrape[0]
56 | totalQuotes = re.search('(\d+)$', titleScrape[2]).group(1)
57 |
58 | # While Testing and Debugging
59 | # quit()
60 |
61 | # Quote text, author name and URL
62 | quoteText = soup.findAll('div', attrs={'class':'quoteText'})
63 |
64 | # print (len(quoteText))
65 |
66 | # Quote URL
67 | quoteFooterRight = soup.findAll('div', attrs={'class':'right'})
68 |
69 | # Begin Scraping
70 | for (q,r) in zip(quoteText, quoteFooterRight):
71 |
72 | quote = q.contents[0].encode('ascii', 'ignore').decode('ascii', 'ignore')
73 |
74 | qLink = q.find('a')
75 | authorUrl = qLink.get('href')
76 | author = qLink.getText()
77 |
78 | rLink = r.find('a')
79 | quoteUrl = rLink.get('href')
80 |
81 | # json.dumps()
82 |
83 | # print(quoteUrl)
84 |     opFile.write("Quote = " + re.sub(" +", " ", quote.replace("\n", "")) + "\n")
85 |     opFile.write("QuoteURL = " + quoteUrl + "\n")
86 |     opFile.write("Author = " + author + "\n")
87 |     opFile.write("AuthorURL = " + authorUrl + "\n\n\n")
88 |
--------------------------------------------------------------------------------
/Goodreads Quotes/Readme.md:
--------------------------------------------------------------------------------
1 | # Quotes from Goodreads
2 |
3 | Scrapes all quotes from a goodreads user's list and displays them, beautifully.
4 |
5 | ## Table of Contents
6 |
7 | * [Stuff to do](#todo)
8 | * [Changelog](#changelog)
9 |
10 | ## Todo
11 |
12 | * All pages from a profile
13 | * ?page=i
14 |
15 | * Scrape
16 | * Quote Text, URL
17 | * Author Text, URL
18 |
19 | * Output
20 | * JSON
21 | * XML
22 |
23 | ## Changelog
24 |
25 | * All data Scraped, currently outputs to a simplistic txt format.
26 |
27 | * Quote Text, Author's Name and URL have been scraped.
28 |
--------------------------------------------------------------------------------
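A rough sketch of the "all pages" and "JSON output" todos above, reusing the `quoteText`/`right` selectors from GoodReads-Quotes.py; the page count and the profile URL are assumptions to adjust for your own list:

```python
import json
import urllib.request

from bs4 import BeautifulSoup

url = "https://www.goodreads.com/quotes/list/18654747-shadab-zafar?page=%d"
quotes = []

for page in range(1, 5):  # number of pages on the profile
    soup = BeautifulSoup(urllib.request.urlopen(url % page).read(), "html.parser")

    for q, r in zip(soup.find_all('div', class_='quoteText'),
                    soup.find_all('div', class_='right')):
        quotes.append({
            "quote": q.contents[0].strip(),
            "author": q.find('a').get_text(),
            "author_url": q.find('a').get('href'),
            "quote_url": r.find('a').get('href'),
        })

with open("quotes.json", "w") as out:
    json.dump(quotes, out, indent=2)
```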
/LICENSE:
--------------------------------------------------------------------------------
1 | This is free and unencumbered software released into the public domain.
2 |
3 | Anyone is free to copy, modify, publish, use, compile, sell, or
4 | distribute this software, either in source code form or as a compiled
5 | binary, for any purpose, commercial or non-commercial, and by any
6 | means.
7 |
8 | In jurisdictions that recognize copyright laws, the author or authors
9 | of this software dedicate any and all copyright interest in the
10 | software to the public domain. We make this dedication for the benefit
11 | of the public at large and to the detriment of our heirs and
12 | successors. We intend this dedication to be an overt act of
13 | relinquishment in perpetuity of all present and future rights to this
14 | software under copyright law.
15 |
16 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
17 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
18 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
19 | IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
20 | OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
21 | ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
22 | OTHER DEALINGS IN THE SOFTWARE.
23 |
24 | For more information, please refer to <https://unlicense.org>
25 |
--------------------------------------------------------------------------------
/Last.fm Backup/.gitignore:
--------------------------------------------------------------------------------
1 | *.tsv
2 | *.csv
3 | *.db
4 |
--------------------------------------------------------------------------------
/Last.fm Backup/Readme.md:
--------------------------------------------------------------------------------
1 | # Last.fm Backup
2 |
3 | Back up Last.fm data - scrobbles, loved/banned tracks, etc.
4 |
5 | Modified the [original script](https://gitorious.org/fmthings/lasttolibre/blobs/master/lastexport.py) from the [lasttolibre](https://gitorious.org/fmthings/lasttolibre) project to suit my usecase.
6 |
7 | ## Todo
8 |
9 | * Incremental Backups: Only back up *new* data. Use timestamps to figure out up to what moment we already have data, and stop there.
10 |
11 | * Only backs up scrobbles currently, but has support for other stuff too. Need to test it out.
12 |
13 | * Data is currently saved as TSV. I think a SQL DB would be better because I'd be able to run queries.
14 |
15 |
--------------------------------------------------------------------------------
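A minimal sketch of the "SQL DB" idea from the todo list above: load the TSV written by last-backup.py into SQLite so it can be queried. The column order follows `parse_track()` and the file name is the one hard-coded in the script:

```python
import csv
import sqlite3

conn = sqlite3.connect("scrobbles.db")
conn.execute("""CREATE TABLE IF NOT EXISTS scrobbles
                (uts INTEGER, track TEXT, artist TEXT, album TEXT,
                 track_mbid TEXT, artist_mbid TEXT, album_mbid TEXT)""")

with open("scrobbles-backup-year.tsv") as tsv:
    rows = csv.reader(tsv, delimiter="\t")
    conn.executemany("INSERT INTO scrobbles VALUES (?, ?, ?, ?, ?, ?, ?)", rows)

conn.commit()
conn.close()
```

Plays per artist are then a single query away, e.g. `SELECT artist, COUNT(*) FROM scrobbles GROUP BY artist ORDER BY 2 DESC;`.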
/Last.fm Backup/last-backup.py:
--------------------------------------------------------------------------------
1 | """
2 | Script for exporting tracks through audioscrobbler API.
3 |
4 | Almost all of the work was originally done by the libre.fm people.
5 | https://gitorious.org/fmthings/lasttolibre/
6 | """
7 |
8 | import urllib2
9 | import urllib
10 |
11 | import re
12 | import time
13 | import xml.etree.ElementTree as ET
14 |
15 |
16 | def connect_server(username, startpage, tracktype='recenttracks'):
17 | """ Connect to server and get a XML page."""
18 | baseurl = 'http://ws.audioscrobbler.com/2.0/?'
19 | urlvars = dict(method='user.get%s' % tracktype,
20 | api_key='da70281f2f464cfaa4638c4bfe820f9a',
21 | user=username,
22 | page=startpage,
23 | limit=50)
24 |
25 | url = baseurl + urllib.urlencode(urlvars)
26 | for interval in (1, 5, 10, 62, 240):
27 | try:
28 | f = urllib2.urlopen(url)
29 | break
30 | except Exception, e:
31 | last_exc = e
32 | print "Exception occured, retrying in %ds: %s" % (interval, e)
33 | time.sleep(interval)
34 | else:
35 | print "Failed to open page %s" % urlvars['page']
36 | raise last_exc
37 |
38 | response = f.read()
39 | f.close()
40 |
41 | # Bad hack to fix bad xml
42 | response = re.sub('\xef\xbf\xbe', '', response)
43 |
44 | # Save raw backup xmls
45 | # with open("XMLs/" + str(urlvars['page'])+".xml", "w") as backup:
46 | # backup.write(response)
47 |
48 | return response
49 |
50 |
51 | def get_pageinfo(response, tracktype='recenttracks'):
52 |     """Check how many pages of tracks the user has."""
53 | xmlpage = ET.fromstring(response)
54 | totalpages = xmlpage.find(tracktype).attrib.get('totalPages')
55 | return int(totalpages)
56 |
57 |
58 | def get_tracklist(response):
59 | """Read XML page and get a list of tracks and their info."""
60 | xmlpage = ET.fromstring(response)
61 | tracklist = xmlpage.getiterator('track')
62 | return tracklist
63 |
64 |
65 | def parse_track(trackelement):
66 | """Extract info from every track entry and output to list."""
67 | if trackelement.find('artist').getchildren():
68 | # artist info is nested in loved/banned tracks xml
69 | artistname = trackelement.find('artist').find('name').text
70 | artistmbid = trackelement.find('artist').find('mbid').text
71 | else:
72 | artistname = trackelement.find('artist').text
73 | artistmbid = trackelement.find('artist').get('mbid')
74 |
75 | if trackelement.find('album') is None:
76 | # no album info for loved/banned tracks
77 | albumname = ''
78 | albummbid = ''
79 | else:
80 | albumname = trackelement.find('album').text
81 | albummbid = trackelement.find('album').get('mbid')
82 |
83 | trackname = trackelement.find('name').text
84 | trackmbid = trackelement.find('mbid').text
85 | date = trackelement.find('date').get('uts')
86 |
87 | output = [date, trackname, artistname,
88 | albumname, trackmbid, artistmbid, albummbid]
89 |
90 | for i, v in enumerate(output):
91 | if v is None:
92 | output[i] = ''
93 |
94 | return output
95 |
96 |
97 | def get_tracks(username, startpage=1, tracktype='recenttracks'):
98 | page = startpage
99 | response = connect_server(username, page, tracktype)
100 | totalpages = get_pageinfo(response, tracktype)
101 |
102 | if startpage > totalpages:
103 | raise ValueError(
104 |             "First page (%s) is higher than total pages (%s)." % (startpage, totalpages))
105 |
106 | while page <= totalpages:
107 | # Skip connect if on first page, already have that one stored.
108 |
109 | if page > startpage:
110 | response = connect_server(username, page, tracktype)
111 |
112 | tracklist = get_tracklist(response)
113 |
114 | tracks = []
115 | for trackelement in tracklist:
116 | # do not export the currently playing track.
117 | if not trackelement.attrib.has_key("nowplaying") \
118 | or not trackelement.attrib["nowplaying"]:
119 | tracks.append(parse_track(trackelement))
120 |
121 | yield page, totalpages, tracks
122 |
123 | page += 1
124 | time.sleep(.5)
125 |
126 |
127 | def write_tracks(tracks, outfileobj):
128 | """Write tracks to an open file"""
129 | for fields in tracks:
130 | outfileobj.write(("\t".join(fields) + "\n").encode('utf-8'))
131 |
132 |
133 | if __name__ == "__main__":
134 | username = "dufferzafar"
135 | output_file = "scrobbles-backup-year.tsv"
136 |
137 | # Todo: Rather than using a start page, use timestamps
138 | # of the already existing data to find the start/end
139 | startpage = 1
140 |
141 | # Can be used for loved/banned tracks etc.
142 | # Todo: Read the API and backup other stuff as well.
143 | infotype = "recenttracks"
144 |
145 | trackdict = {}
146 | page = startpage
147 | totalpages = -1
148 |
149 | # Key used for loved/banned
150 | dict_key = 0
151 |
152 | try:
153 | for page, totalpages, tracks in get_tracks(username, startpage, tracktype=infotype):
154 | print "Got page %s of %s.." % (page, totalpages)
155 | for track in tracks:
156 | if infotype == 'recenttracks':
157 | trackdict.setdefault(track[0], track)
158 | else:
159 | # Can not use loved/banned tracks as
160 | # it's not unique
161 | dict_key += 1
162 | trackdict.setdefault(dict_key, track)
163 | except ValueError, e:
164 | exit(e)
165 | except Exception:
166 | raise
167 | finally:
168 |
169 | with open(output_file, 'a') as backup:
170 |
171 | # Newest First
172 | tracks = sorted(trackdict.values(), reverse=True)
173 |
174 | # Todo: Save this data in some better format
175 | write_tracks(tracks, backup)
176 |
177 | print "Wrote page %s-%s of %s to file %s" % (startpage, page, totalpages, output_file)
178 |
--------------------------------------------------------------------------------
/Last.fm Plays/HelperFunctions.py:
--------------------------------------------------------------------------------
1 | import os
2 | import urllib.request
3 |
4 | def downloadToFile(url, fileName):
5 | """
6 | Download a url to file.
7 |
8 |     Does nothing if the file already exists.
9 | """
10 |
11 |     # Check if the file exists
12 | if not os.path.isfile(fileName):
13 |
14 | try:
15 | site = urllib.request.urlopen(url)
16 | data = site.read()
17 |
18 | f = open(fileName, "wb")
19 | f.write(data)
20 | f.close()
21 | # print("File Downloaded.")
22 | return 1
23 |
24 | except:
25 | return 0
26 |
--------------------------------------------------------------------------------
/Last.fm Plays/Readme.md:
--------------------------------------------------------------------------------
1 | # Last.fm Plays
2 |
3 | Scripts that make use of the last.fm API.
4 |
5 | ## List of scripts
6 |
7 | * [Top Tracks Playlist](#top)
8 | * [Scrobbles Today](#today)
9 |
10 | ## Top Tracks Playlist
11 |
12 | Get top tracks of an artist and then locate them on the filesystem.
13 |
14 | Usage Scenario:
15 |
16 | I mostly download the entire discography of artists (long live the pirates!) and then can't decide which song I should listen to first.
17 |
18 | This script downloads the list of top tracks (say 20) of an artist from last.fm. It then searches for those mp3 files on your hard drive and creates an m3u playlist with the tracks it finds.
19 |
20 | As a result I have a playlist of the best tracks of an artist.
21 |
22 | ## Scrobbles Today
23 |
24 | Last.fm has nice statistics options, so you can view how many songs you've listened to in, say, a week or a month.
25 |
26 | But surprisingly you can't view the number of songs you have listened to today (or can you?)
27 |
28 | This script solves that.
29 |
--------------------------------------------------------------------------------
/Last.fm Plays/ScrobblesToday.py:
--------------------------------------------------------------------------------
1 | from datetime import datetime as DT
2 | from datetime import timedelta
3 | import time
4 | import urllib.request
5 | import urllib.parse
6 | import xml.etree.ElementTree as ET
7 |
8 | # from datetime import timedelta
9 | # http://ws.audioscrobbler.com/2.0/?method=user.getInfo&user=dufferzafar&api_key=da70281f2f464cfaa4638c4bfe820f9a
10 | # http://ws.audioscrobbler.com/2.0/?method=user.getFriends&user=dufferzafar&recenttracks=0&api_key=da70281f2f464cfaa4638c4bfe820f9a
11 |
12 | # The timestamp of today
13 | # unixTimeStamp = int(time.mktime(time.strptime(DT.strftime(DT.today() - timedelta(days=0), '%Y-%m-%d'), '%Y-%m-%d'))) - time.timezone
14 | unixTS1 = int(time.mktime(time.strptime(DT.strftime(DT.today() - timedelta(days=2), '%Y-%m-%d'), '%Y-%m-%d'))) - time.timezone
15 | unixTS0 = int(time.mktime(time.strptime(DT.strftime(DT.today(), '%Y-%m-%d'), '%Y-%m-%d'))) - time.timezone
16 | unixTS2 = int(time.mktime(time.strptime(DT.strftime(DT.today() + timedelta(days=1), '%Y-%m-%d'), '%Y-%m-%d'))) - time.timezone
17 |
18 | # Don't abuse, Please.
19 | api_key = "da70281f2f464cfaa4638c4bfe820f9a"
20 |
21 | # Last.fm API
22 | root_url = "http://ws.audioscrobbler.com/2.0/?"
23 | params = urllib.parse.urlencode({'method': 'user.getRecentTracks', 'user': 'dufferzafar', 'from': unixTS1, 'to': unixTS2, 'api_key': api_key})
24 | params2 = urllib.parse.urlencode({'method': 'user.getRecentTracks', 'user': 'shivamrana', 'from': unixTS1, 'to': unixTS2, 'api_key': api_key})
25 | # The file where lastfm response will be saved
26 | # print("Downloading Data...")
27 |
28 | site = urllib.request.urlopen(root_url+params)
29 | site2 = urllib.request.urlopen(root_url+params2)
30 |
31 | # Build a parse tree
32 | node = ET.fromstring(site.read()).find("recenttracks")
33 | node2 = ET.fromstring(site2.read()).find("recenttracks")
34 | print("Shadab = " + node.attrib['total'])
35 | print("Shivam = " + node2.attrib['total'])
36 |
--------------------------------------------------------------------------------
/Last.fm Plays/TopTracks-All.py:
--------------------------------------------------------------------------------
1 | import os
2 | import urllib.parse
3 | import xml.etree.ElementTree as ET
4 |
5 | import HelperFunctions
6 |
7 | # Number of tracks to download
8 | limit = "20"
9 |
10 | # Don't abuse, Please.
11 | api_key = "da70281f2f464cfaa4638c4bfe820f9a"
12 |
13 | # Last.fm API
14 | root_url = "http://ws.audioscrobbler.com/2.0/?"
15 |
16 | # Discography Folder
17 | for artist in os.listdir("G:\\Music\\#More"):
18 | if '#' not in artist:
19 |
20 | if os.path.isdir("G:\\Music\\#More\\" + artist):
21 | artistFolder = "G:\\Music\\#More\\" + artist
22 | else:
23 | continue
24 |
25 | # The file where lastfm response will be saved
26 | fileName = "Artists\\" + artist + " - Top " + limit + ".xml"
27 |
28 | print("Downloading File... " + artist)
29 |
30 | # Download to file
31 | params = urllib.parse.urlencode({'method': 'artist.gettoptracks', 'artist': artist, 'api_key': api_key, 'limit': limit})
32 | if (HelperFunctions.downloadToFile(root_url+params, fileName) == 0):
33 | print("Skipping... " + artist)
34 | print("=====================")
35 | continue
36 |
37 | # Build a parse tree
38 | tree = ET.parse(fileName).find("toptracks")
39 |
40 | # Output playlist file
41 | playlist = artistFolder + "\\" + artist + " - Top " + limit + ".m3u"
42 | op = open(playlist, "w")
43 |
44 | print("Building Playlist... " + artist)
45 |
46 |         # Iterate through the XML
47 | for item in tree.findall("track"):
48 | track = item.find("name").text
49 |
50 | # Walk in the directory
51 | for dirpath, dirnames, files in os.walk(artistFolder):
52 | for filename in files:
53 | if track.lower() in filename.lower():
54 | op.write(os.path.join(dirpath, filename))
55 | op.write("\n")
56 |
57 | # Cleanup
58 | op.close()
59 |
60 | print("=====================")
61 |
62 | # Whew!!
63 | print("All Done!")
64 |
--------------------------------------------------------------------------------
/Last.fm Plays/TopTracks.py:
--------------------------------------------------------------------------------
1 | import os
2 | import urllib.parse
3 | import xml.etree.ElementTree as ET
4 |
5 | # import subprocess
6 |
7 | import HelperFunctions
8 |
9 | # Script Mode
10 | # list / playlist
11 | mode = "playlist"
12 |
13 | # The artist to look for
14 | artist = "Stars"
15 |
16 | # Number of tracks to download
17 | limit = "20"
18 |
19 | # Don't abuse, Please.
20 | api_key = "da70281f2f464cfaa4638c4bfe820f9a"
21 |
22 | # Last.fm API
23 | root_url = "http://ws.audioscrobbler.com/2.0/?"
24 |
25 | # Discography Folder
26 | if mode == "playlist":
27 | if os.path.isdir("F:\\More Music\\" + artist):
28 | artistFolder = "F:\\More Music\\" + artist
29 | elif os.path.isdir("G:\\Music\\" + artist):
30 | artistFolder = "G:\\Music\\" + artist
31 | else:
32 | print("Where to look for music files?")
33 | quit()
34 |
35 | # The file where lastfm response will be saved
36 | fileName = "Artists\\" + artist + " - Top " + limit + ".xml"
37 |
38 | print("Downloading File... " + artist)
39 |
40 | # Download to file
41 | params = urllib.parse.urlencode({'method': 'artist.gettoptracks', 'artist': artist, 'api_key': api_key, 'limit': limit})
42 | HelperFunctions.downloadToFile(root_url+params, fileName)
43 |
44 | # Build a parse tree
45 | tree = ET.parse(fileName).find("toptracks")
46 | # Decide what to do
47 | if mode == "list":
48 | # os.system('clear')
49 | # subprocess.call("cls", shell=True) # windows
50 |
51 | for item in tree.findall("track"):
52 | track = item.find("name").text
53 | playcount = item.find("playcount").text
54 | listeners = item.find("listeners").text
55 | print(track, playcount, listeners)
56 |
57 | elif mode == "playlist":
58 | # Output playlist file
59 | playlist = artistFolder + "\\" + artist + " - Top " + limit + ".m3u"
60 | op = open(playlist, "w")
61 |
62 | print("Building Playlist... " + artist)
63 |
64 |     # Iterate through the XML
65 | for item in tree.findall("track"):
66 | track = item.find("name").text
67 |
68 | # Walk in the directory
69 | for dirpath, dirnames, files in os.walk(artistFolder):
70 | for filename in files:
71 | if track.lower() in filename.lower():
72 | op.write(os.path.join(dirpath, filename))
73 | op.write("\n")
74 |
75 | # Cleanup
76 | op.close()
77 |
78 | print("=====================")
79 |
80 | # Play Some Music!!
81 | # os.startfile(playlist)
82 |
83 | # Whew!!
84 | print("All Done!")
85 |
--------------------------------------------------------------------------------
/MB Chatlogs/.gitignore:
--------------------------------------------------------------------------------
1 | # Downloaded Chatlogs are text files
2 | *.txt
3 |
--------------------------------------------------------------------------------
/MB Chatlogs/Download Logs.py:
--------------------------------------------------------------------------------
1 | import os
2 | import HelperFunctions as HF
3 |
4 | usrRoot = "http://chatlogs.musicbrainz.org/musicbrainz"
5 | devRoot = "http://chatlogs.musicbrainz.org/musicbrainz-devel"
6 |
7 | year = "2015"
8 |
9 | for m in range(1, 7):
10 | for d in range(1, 32):
11 |
12 | mon = "0" + str(m) if m < 10 else str(m)
13 | day = "0" + str(d) if d < 10 else str(d)
14 |
15 | try:
16 | os.mkdir("Devel/" + mon)
17 | except:
18 | pass
19 |
20 | try:
21 | os.mkdir("Usr/" + mon)
22 | except:
23 | pass
24 |
25 | fileName = mon + "/" + year + "-" + mon + "-" + day + ".txt"
26 |
27 | devUrl = devRoot + "/" + year + "/" + year + "-" + fileName
28 | usrUrl = usrRoot + "/" + year + "/" + year + "-" + fileName
29 |
30 | print("Dnld Devel... " + fileName)
31 | HF.downloadToFile(devUrl, "Devel/" + fileName)
32 | print("Dnld User... " + "Usr/" + fileName)
33 | HF.downloadToFile(usrUrl, "Usr/" + fileName)
34 |
35 | print("=======================================")
36 |
37 |
38 | # Phew!!
39 | print("All Done!")
40 |
--------------------------------------------------------------------------------
/MB Chatlogs/HelperFunctions.py:
--------------------------------------------------------------------------------
1 | import os
2 | import urllib.request
3 |
4 | def downloadToFile(url, fn):
5 | """
6 | Download a url to file.
7 |
8 |     Does nothing if the file already exists.
9 | """
10 |
11 |     # Check if the file exists
12 | if not os.path.isfile(fn):
13 |
14 | try:
15 | site = urllib.request.urlopen(url)
16 | data = site.read()
17 |
18 | f = open(fn, "wb")
19 | f.write(data)
20 | f.close()
21 | # print("File Downloaded.")
22 | return 1
23 |
24 | except:
25 | return 0
26 |
--------------------------------------------------------------------------------
/MB Chatlogs/Readme.md:
--------------------------------------------------------------------------------
1 | # IRC Chatlogs Downloader
2 |
3 | A script used to download MusicBrainz IRC (both #musicbrainz and #musicbrainz-devel) chatlogs. I actually needed to find out what was being discussed on the channels about GSoC.
4 |
5 | Everything is now just a 'grep' away.
6 |
7 | ## Table of Contents
8 |
9 | * [Stuff to do](#todo)
10 | * [Changelog](#changelog)
11 |
12 | ## Todo
13 |
14 | * Can the code be improved? shortened?
15 | * Why multiple try/excepts??
16 |
17 | * Download in parallel?
18 |
19 | ## Changelog
20 |
21 | 12/2/2014 :
22 |
23 | * Basic downloader
24 |
--------------------------------------------------------------------------------
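For the "Download in parallel?" todo above, a sketch that fetches several chatlog days at once with a thread pool, reusing `downloadToFile` from HelperFunctions.py and the URL layout from Download Logs.py. The pool size is arbitrary and the Usr/&lt;month&gt; folders are assumed to already exist:

```python
from multiprocessing.dummy import Pool  # threads, not processes

import HelperFunctions as HF

root = "http://chatlogs.musicbrainz.org/musicbrainz"
year = "2015"

# Build (url, destination) pairs for the first half of the year
jobs = []
for m in range(1, 7):
    for d in range(1, 32):
        mon, day = "%02d" % m, "%02d" % d
        name = year + "-" + mon + "-" + day + ".txt"
        url = root + "/" + year + "/" + year + "-" + mon + "/" + name
        jobs.append((url, "Usr/" + mon + "/" + name))

with Pool(8) as pool:
    pool.starmap(HF.downloadToFile, jobs)
```

A thread pool is enough here since the work is network bound.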
/MITx - 6.00.1x Solutions/Quiz/Test.py:
--------------------------------------------------------------------------------
1 | myDict = {}
2 | myDict[('a', 'b')] = 'foo'
3 |
--------------------------------------------------------------------------------
/MITx - 6.00.1x Solutions/Week 2 PSet 1/W2 PS1 - 1 - Count Vowels.py:
--------------------------------------------------------------------------------
1 | s = 'aeious'
2 |
3 | s= s.lower()
4 |
5 | count = 0
6 |
7 | for a in range(len(s)):
8 | if s[a] in 'aeiou':
9 | count += 1
10 |
11 | print("Number of vowels: " + str(count))
12 |
--------------------------------------------------------------------------------
/MITx - 6.00.1x Solutions/Week 2 PSet 1/W2 PS1 - 2 - Count bob.py:
--------------------------------------------------------------------------------
1 | s = 'asdbobkhjbobkolbobplobob'
2 | s = 'azcbobobegghakl'
3 |
4 | s = s.lower()
5 |
6 | count = 0
7 | p = 0
8 |
9 | while s.find('bob', p) != -1:
10 | p = s.find('bob', p)
11 | p += 1
12 | count += 1
13 |
14 | print("Number of times bob occurs is: " + str(count))
15 |
--------------------------------------------------------------------------------
/MITx - 6.00.1x Solutions/Week 2 PSet 1/W2 PS1 - 3 - Longest Alphabetical.py:
--------------------------------------------------------------------------------
1 | s = 'abcdefqazacd'
2 | s = 'azcbobobegghakl'
3 | s = 'abcbc'
4 |
5 | s = s.lower()
6 |
7 | opMax = ""
8 | op = s[0]
9 |
10 | for a in range(1, len(s)):
11 | if ord(s[a]) >= ord(s[a-1]):
12 | op = op + s[a]
13 | else:
14 | if len(op) > len(opMax):
15 | opMax = op
16 | op = s[a]
17 |
18 | if len(op) > len(opMax):
19 | opMax = op
20 |
21 | print("Longest substring in alphabetical order is: " + opMax)
22 |
--------------------------------------------------------------------------------
/MITx - 6.00.1x Solutions/Week 2 PSet 2/W2 PS2 - 1 - Calculating the Minimum.py:
--------------------------------------------------------------------------------
1 | balance = 4842
2 | annualInterestRate = 0.2
3 | monthlyPaymentRate = 0.04
4 |
5 | bal = balance
6 | sumPayments = 0
7 |
8 | for i in range(1,13):
9 |
10 | # Minimum payment paid
11 | minPayment = monthlyPaymentRate * bal
12 |
13 | # Sum the payments
14 | sumPayments += minPayment
15 |
16 | # Your unpaid balance
17 | uBal = bal - minPayment
18 |
19 | # The new balance
20 | bal = uBal + (uBal * annualInterestRate/12.0)
21 |
22 | # Let's get this done away with
23 | print("Month: " + str(i))
24 | print("Minimum monthly payment: " + str(round(minPayment, 2)))
25 | print("Remaining balance: " + str(round(bal, 2)))
26 |
27 | print("Total paid: " + str(round(sumPayments, 2)))
28 | print("Remaining balance: " + str(round(bal, 2)))
29 |
--------------------------------------------------------------------------------
/MITx - 6.00.1x Solutions/Week 2 PSet 2/W2 PS2 - 2 - Paying off the debt.py:
--------------------------------------------------------------------------------
1 | balance = 3926
2 | annualInterestRate = 0.2
3 |
4 | for minPayment in range(10, 1000, 10):
5 |
6 | bal = balance
7 | flag = 0
8 |
9 | for i in range(1,13):
10 | # Your unpaid balance
11 | uBal = bal - minPayment
12 |
13 | # The new balance
14 | bal = uBal + (uBal * annualInterestRate/12.0)
15 |
16 | # Has the balance been paid?
17 | if (bal <= 0):
18 | flag = 1
19 | break
20 |
21 |     # Lowest payment found, exit the outer loop
22 | if flag:
23 | break
24 |
25 | print("Lowest Payment: " + str(minPayment))
26 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Python-Scripts
2 |
3 | After some initial reluctance, I've finally begun to code in Python.
4 |
5 | Here are some of the scripts I've managed to write. Most of them are 'quick-and-dirty' and were created for a very specific use-case, so they may not be of much use as-is. But you are free to edit any of them to suit your needs.
6 |
7 | ## List of scripts
8 |
9 | * [0xMirror](#mirror)
10 | * [Batch Edit MP3 Metadata](#meta)
11 | * [Find Untagged MP3s](#untagged)
12 | * [Geeks for Geeks Scraper](#g4g)
13 | * [Github Contributions](#github)
14 | * [Goodreads Quotes](#gr)
15 | * [Last.fm Plays](#lfm-plays)
16 | * [Last.fm Backup](#lfm-backup)
17 | * [MITx Solutions](#mitx)
18 | * [MusicBrainz IRC Chatlogs Downloader](#irc)
19 | * [Network Usage Analyst](#netuse)
20 | * [Networx XML Parser](#networx)
21 | * [Sphinx Linkfix](#linkfix)
22 | * [Sublime Text 3 Plugins](#sublime)
23 | * [WP XML to Octopress MD](#wp)
24 |
25 | # 0xMirror
26 |
27 | A script to create a zero-byte mirror of an entire hard disk.
28 |
29 | **Tech:** scandir
30 |
31 | # Batch Edit MP3 Metadata
32 |
33 | Use Mutagen to modify the artist tag of multiple MP3 files.
34 |
35 | **Tech:** Mutagen.
36 |
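A minimal sketch of the Mutagen call involved (the folder and artist name below are placeholders, not the script's actual arguments):

```python
import os
from mutagen.easyid3 import EasyID3

folder = "."           # placeholder: directory containing the MP3s
new_artist = "Artist"  # placeholder: the artist tag to write

for name in os.listdir(folder):
    if name.lower().endswith(".mp3"):
        audio = EasyID3(os.path.join(folder, name))
        audio["artist"] = new_artist  # EasyID3 maps this to the TPE1 frame
        audio.save()
```
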
37 | # Find Untagged MP3s
38 |
39 | Find all songs in the current directory that have not been tagged with MusicBrainz IDs and optionally move them to a separate folder.
40 |
41 | **Tech:** Mutagen. MBIDs.
42 |
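The check itself can be roughly this small (a sketch, assuming the MusicBrainz track ID is exposed through EasyID3's `musicbrainz_trackid` key, which is where MusicBrainz-tagged files normally keep it):

```python
import os
from mutagen.easyid3 import EasyID3

def is_untagged(path):
    """True if the file carries no MusicBrainz track ID."""
    try:
        return not EasyID3(path).get("musicbrainz_trackid")
    except Exception:
        return True  # a file with no ID3 tag at all counts as untagged

untagged = [f for f in os.listdir(".")
            if f.lower().endswith(".mp3") and is_untagged(f)]
print("\n".join(untagged))
```
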
43 | # Geeks for Geeks Scraper
44 |
45 | Create nice PDFs from posts on Geeks for Geeks.
46 |
47 | **Tech:** BeautifulSoup, Printing html to pdf using QTextDocument.
48 |
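The HTML-to-PDF step boils down to something like this (a sketch assuming PyQt4; the HTML string would come from the scraped post):

```python
from PyQt4.QtGui import QApplication, QTextDocument, QPrinter

app = QApplication([])  # Qt wants an application object before GUI classes are used

html = "<h1>Sample post</h1><p>Scraped content goes here.</p>"  # placeholder

doc = QTextDocument()
doc.setHtml(html)

printer = QPrinter()
printer.setOutputFormat(QPrinter.PdfFormat)
printer.setOutputFileName("post.pdf")

doc.print_(printer)  # render the document straight into the PDF
```
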
49 | # Github Contributions
50 |
51 | Fetch all of the previous year's contributions from GitHub (issues, pull requests, etc.)
52 |
53 | **Tech:** Basic Web Scraping using Beautiful Soup.
54 |
55 | # Goodreads Quotes
56 |
57 | A script to download all the quotes I've liked on Goodreads. The plan was to create an offline database that I could edit.
58 | 
59 | I couldn't decide exactly how to go about it, so this is only half-done.
60 |
61 | **Tech:** BeautifulSoup to parse the webpage downloaded.
62 |
63 | # Last.fm Backup
64 |
65 | A script to back up my last.fm scrobbles and loved/banned tracks.
66 |
67 | **Tech:** XML. CSV. sqlite.
68 |
69 | # Last.fm Plays
70 |
71 | I am an avid user of the last.fm service. These scripts interact with last.fm's API.
72 |
73 | **TopTracks.py**
74 |
75 | Creates a local playlist from Top 20 tracks of an artist.
76 |
77 | Useful when you have a huge collection of songs and you can't decide what to listen to.
78 |
79 | **ScrobblesToday.py**
80 |
81 | View the number of songs you have listened to today.
82 |
83 | **Tech:** Parse XML responses from the API. os.walk() to find mp3 files matching the criteria.
84 |
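ScrobblesToday is essentially one API call; a sketch of the idea (the username and API key are placeholders, and `user.getrecenttracks` with a `from` timestamp is the relevant last.fm method):

```python
import time
import urllib.request
import xml.etree.ElementTree as ET

USER = "your-username"    # placeholder
API_KEY = "your-api-key"  # placeholder

midnight = int(time.time()) - int(time.time()) % 86400  # start of today (UTC)
url = ("http://ws.audioscrobbler.com/2.0/?method=user.getrecenttracks"
       "&user=%s&api_key=%s&from=%d&limit=200" % (USER, API_KEY, midnight))

root = ET.parse(urllib.request.urlopen(url)).getroot()
print("Scrobbles today:", len(root.findall(".//track")))
```
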
85 | # MITx Solutions
86 |
87 | Set of solutions to the 6.00.1x course from EdX.
88 |
89 | https://courses.edx.org/courses/MITx/6.00.1x/3T2013/courseware/
90 |
91 | I left the course midway, as I often do.
92 |
93 | # MusicBrainz IRC Chatlogs Downloader
94 |
95 | Script used to download IRC Chatlogs of #musicbrainz and #musicbrainz-devel.
96 |
97 | **Tech:** urllib
98 |
99 | # Networx XML Parser
100 |
101 | Parses [Networx](http://www.softperfect.com/products/networx) backup XMLs and outputs the data in js format.
102 |
103 | **Tech:** datetime module. XML parsing.
104 |
105 | This script has been moved to a new repository - [Internet-Usage](http://github.com/dufferzafar/internet-usage).
106 |
107 | # Network Usage Analyst
108 |
109 | I have a cron job set up that dumps my network usage to files.
110 | 
111 | This script reads those files and reports figures such as data downloaded this month, data remaining, and suggested daily usage.
112 |
113 | # Sphinx Linkfix
114 |
115 | Uses linkcheck's output file to fix links in docs.
116 |
117 | Originally created for [this issue](https://github.com/scrapy/scrapy/issues/606).
118 |
119 | # Sublime Text 3 Plugins
120 |
121 | Small plugins that I've written/copied for Sublime Text.
122 |
123 | # WP XML to Octopress MD
124 |
125 | I used this script to migrate my blog from WordPress to Octopress.
126 | 
127 | It creates individual blog posts in Markdown format from a WordPress export file (XML).
128 |
129 | **Tech:** XML Parsing. Namespace Dictionaries.
130 |
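The namespace dictionary is the only non-obvious bit; here's a sketch of how the export can be walked with ElementTree (the URIs are the standard WXR 1.2 ones, and the file name matches the sample export in that folder):

```python
import xml.etree.ElementTree as ET

ns = {
    "content": "http://purl.org/rss/1.0/modules/content/",
    "wp": "http://wordpress.org/export/1.2/",
}

root = ET.parse("Wordpress Export.xml").getroot()

for item in root.iter("item"):
    title = item.findtext("title")
    slug = item.findtext("wp:post_name", namespaces=ns)
    body = item.findtext("content:encoded", namespaces=ns) or ""
    print(slug, "-", title, "(%d chars)" % len(body))
```
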
--------------------------------------------------------------------------------
/Rename TED/rename_ted.py:
--------------------------------------------------------------------------------
1 | """
2 | TED talk Renamer.
3 |
4 | 'JamesGeary_2009G-480p.mp4' becomes 'James Geary - Metaphorically speaking.mp4'
5 | """
6 |
7 | import re
8 | import os
9 |
10 | from ted_talks_list import talks
11 |
12 | from fuzzywuzzy import process
13 |
14 | ted_folder = os.path.expanduser("~/Videos/TED/")
15 |
16 | if __name__ == '__main__':
17 |
18 | for file in os.listdir(ted_folder):
19 |
20 | name, ext = os.path.splitext(file)
21 | name = name.replace("-480p", "")
22 | name = re.sub(r"_\d{4}P", "", name) # Remove year '_2014P'
23 |
24 | talk = process.extractOne(name, talks, score_cutoff=50)
25 | if talk:
26 | newfile = "%s%s" % (talk[0], ext)
27 |
28 | os.rename(
29 | os.path.join(ted_folder, file),
30 | os.path.join(ted_folder, newfile)
31 | )
32 |
33 | print("Convert: '%s' --> '%s' [%d]" %
34 | (file, newfile, talk[1]))
35 | else:
36 | print("Skipping: %s" % file)
37 |
--------------------------------------------------------------------------------
/Shell History/Readme.md:
--------------------------------------------------------------------------------
1 | # Shell History
2 |
3 | Know what commands I've run in the last few days.
4 |
5 | Currently the script just spits out raw commands from last week, along with their date.
6 |
7 | ## Todo
8 |
9 | * Better Output
10 |
11 | * Filter by command
12 |     * `if command.startswith(input)` or `if input in command`? (see the sketch below)
13 |
14 | * Remove extraneous aliases?
15 | * Or read in a list of aliases and then expand them
16 |
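A sketch of what the command filter could look like once the history is loaded (substring matching assumed; `history` is the list of `(timestamp, command)` tuples that `history.py` already builds):

```python
def filter_history(history, query):
    """Keep only the entries whose command contains the query string."""
    return [(stamp, cmd) for stamp, cmd in history if query in cmd]

# e.g. show only git invocations
# for stamp, cmd in filter_history(history, "git"):
#     print(stamp, cmd)
```
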
--------------------------------------------------------------------------------
/Shell History/history.py:
--------------------------------------------------------------------------------
1 | import re
2 | import os
3 | import time
4 | from itertools import groupby
5 |
6 |
7 | def pretty_epoch(epoch, format_):
8 | """ Convert timestamp to a pretty format. """
9 | return time.strftime(format_, time.localtime(epoch))
10 |
11 |
12 | def is_in_last_days(stamp, n):
13 | """
14 | Check whether the timestamp was created in the last n days.
15 |
16 | I guess the 'right' way to do this would be by using timedelta
17 | from the datetime module, but this felt 'ok' to me.
18 | """
19 |     return time.time() - stamp < n * (24 * 60 * 60)
20 |
21 | # History contains tuples - (timestamp, command)
22 | history = []
23 |
24 | # Read ZSH History
25 | with open(os.path.expanduser("~/.zsh_history")) as inp:
26 | for line in inp.readlines():
27 | m = re.match(r":\s(\d+):\d+;(.*)", line)
28 | if m:
29 | history.append((int(m.group(1)), m.group(2)))
30 |
31 | # Read Fish History
32 | with open(os.path.expanduser("~/.config/fish/fish_history")) as inp:
33 | for line in inp.readlines():
34 | if line.startswith("- cmd:"):
35 | cmd = line[7:]
36 | elif line.startswith(" when:"):
37 | when = line[8:]
38 | history.append((int(when), cmd.strip()))
39 |
40 | # Filter out unwanted entries
41 | history = [item for item in history if is_in_last_days(item[0], 7)]
42 |
43 | # Sort history by timestamp
44 | history.sort(key=lambda x: x[0])
45 |
46 | # Group history entries by date
47 | groups = groupby(history, lambda item: pretty_epoch(item[0], "%a, %d %b"))
48 |
49 | # Print!
50 | for date, items in groups:
51 | print(date)
52 |
53 | for item in items:
54 | print(pretty_epoch(item[0], "%I:%M %p"), item[1])
55 |
--------------------------------------------------------------------------------
/Sphinx Linkfix/Readme.md:
--------------------------------------------------------------------------------
1 | # Linkfix
2 |
3 | Linkfix - a companion to sphinx's linkcheck builder.
4 |
5 | Uses linkcheck's output file to fix links in docs.
6 |
7 | Originally created for [this issue](https://github.com/scrapy/scrapy/issues/606).
8 |
9 |
--------------------------------------------------------------------------------
/Sphinx Linkfix/linkfix.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | """
4 |
5 | Linkfix - a companion to sphinx's linkcheck builder.
6 |
7 | Uses linkcheck's output file to fix links in docs.
8 |
9 | Originally created for this issue:
10 | https://github.com/scrapy/scrapy/issues/606
11 |
12 | """
13 |
14 | import re
15 |
16 | # Used for remembering the file (and its contents)
17 | # so we don't have to open the same file again.
18 | _filename = None
19 | _contents = None
20 |
21 | # A regex that matches standard linkcheck output lines
22 | line_re = re.compile(r'(.*)\:\d+\:\s\[(.*)\]\s(?:(.*)\sto\s(.*)|(.*))')
23 |
24 | # Read lines from the linkcheck output file
25 | try:
26 | with open("build/linkcheck/output.txt") as out:
27 | output_lines = out.readlines()
28 | except IOError:
29 | print("linkcheck output not found; please run linkcheck first.")
30 | exit(1)
31 |
32 | # For every line, fix the respective file
33 | for line in output_lines:
34 | match = re.match(line_re, line)
35 |
36 | if match:
37 | newfilename = match.group(1)
38 | errortype = match.group(2)
39 |
40 | # Broken links can't be fixed and
41 |         # I am not sure what to do with the local ones.
42 | if errortype.lower() in ["broken", "local"]:
43 | print("Not Fixed: " + line)
44 | else:
45 | # If this is a new file
46 | if newfilename != _filename:
47 |
48 | # Update the previous file
49 | if _filename:
50 | with open(_filename, "w") as _file:
51 | _file.write(_contents)
52 |
53 | _filename = newfilename
54 |
55 | # Read the new file to memory
56 | with open(_filename) as _file:
57 | _contents = _file.read()
58 |
59 | _contents = _contents.replace(match.group(3), match.group(4))
60 | else:
61 | # We don't understand what the current line means!
62 | print("Not Understood: " + line)
63 |
--------------------------------------------------------------------------------
/Sublime Text 3 Plugins/Commands.sublime-commands:
--------------------------------------------------------------------------------
1 | [
2 | { "caption": "Insert: Timestamp", "command": "timestamp" },
3 | { "caption": "Open Folder: Project", "command": "open_project_folder" },
4 | { "caption": "Open Folder: File ", "command": "open_file_folder" },
5 | ]
6 |
--------------------------------------------------------------------------------
/Sublime Text 3 Plugins/OpenFolder.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sublime, sublime_plugin
3 | from subprocess import call
4 |
5 | class openFolderCommand( sublime_plugin.WindowCommand ):
6 |     def run( self ):
7 |         if self.window.active_view() is None:
8 |             return
9 | 
10 |         try:
11 |             call([ "explorer", self.window.folders()[0] ])  # Open the first project folder in Explorer
12 |         except IndexError:  # No project folders open; fall back to the current file's folder
13 |             call([ "explorer", os.path.dirname(self.window.active_view().file_name()) ])
14 |
--------------------------------------------------------------------------------
/Sublime Text 3 Plugins/README.md:
--------------------------------------------------------------------------------
1 | # Sublime Text Plugins
2 |
3 | Simple-ish Sublime Text 3 plugins that I've coded.
4 |
5 | ## Table of Contents
6 |
7 | * [Stuff to do](#todo)
8 | * [Changelog](#changelog)
9 |
10 | ## Todo
11 |
12 | * Universal Launch:
13 |     Run files directly from Sublime Text (similar to build, but without having to change the build system every time). See the sketch below.
14 |
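A rough sketch of what such a plugin could look like (the command name and the mapping from file extension to interpreter are assumptions, not an existing plugin):

```python
import sublime_plugin

class UniversalLaunchCommand(sublime_plugin.WindowCommand):
    """Run the current file without touching the build system."""

    def run(self):
        view = self.window.active_view()
        if view is None or view.file_name() is None:
            return

        path = view.file_name()
        cmd = ["python", path] if path.endswith(".py") else [path]

        # 'exec' is Sublime's built-in build-output command
        self.window.run_command("exec", {"cmd": cmd})
```
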
15 | ## Changelog
16 |
17 | * Open File/Project Folder
18 | * Insert Timestamp
19 |
--------------------------------------------------------------------------------
/Sublime Text 3 Plugins/TimeStamp.py:
--------------------------------------------------------------------------------
1 | import sublime, sublime_plugin
2 | from datetime import datetime
3 |
4 | class TimestampCommand(sublime_plugin.TextCommand):
5 | def run(self, edit):
6 | stamp = datetime.now().strftime("%d/%m/%Y-%H:%M")
7 | for r in self.view.sel():
8 | if r.empty():
9 | self.view.insert (edit, r.a, stamp)
10 | else:
11 | self.view.replace(edit, r, stamp)
12 |
--------------------------------------------------------------------------------
/WP XML to Octopress MD/Readme.md:
--------------------------------------------------------------------------------
1 | # WP XML to Octopress Posts
2 |
3 | Convert a wordpress exported XML to Octopress markdown files.
4 |
5 | I used it once to migrate my WP blog to Octopress and haven't needed it since.
6 |
7 | ## Table of Contents
8 |
9 | * [Stuff to do](#todo)
10 | * [Changelog](#changelog)
11 |
12 | ## Todo
13 |
14 | * Add html2text support (see the sketch below)
15 |
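A sketch of where html2text could slot in, assuming the post body is already available as an HTML string:

```python
import html2text

converter = html2text.HTML2Text()
converter.body_width = 0  # don't hard-wrap the generated Markdown

html = "<p>Post body pulled from <code>content:encoded</code></p>"  # placeholder
print(converter.handle(html))
```
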
16 | ## Changelog
17 |
18 | Initial Working Version
19 |
--------------------------------------------------------------------------------
/WP XML to Octopress MD/Wordpress Export.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
27 |
28 |
29 | Duffer's Log
30 | http://dufferzafar.engineerinme.com/blog
31 | Life. Philosophy. Code.
32 | Thu, 10 Oct 2013 05:19:46 +0000
33 | en-US
34 | 1.2
35 | http://dufferzafar.engineerinme.com/blog
36 | http://dufferzafar.engineerinme.com/blog
37 |
38 | 1admindufferzafar0@gmail.com
39 |
40 |
41 | http://wordpress.org/?v=3.6.1
42 |
43 |
44 | Blogging begins...
45 | http://dufferzafar.engineerinme.com/blog/blogging-begins/
46 | Sun, 19 May 2013 16:36:23 +0000
47 | admin
48 | http://engineerinme.com/dufferzafar/metroMe/htm/blog/?p=5
49 |
50 |
51 |
52 | 7
53 | 2013-05-19 16:36:23
54 | 2013-05-19 16:36:23
55 | open
56 | open
57 | blogging-begins
58 | publish
59 | 0
60 | 0
61 | post
62 |
63 | 0
64 |
65 |
66 | _edit_last
67 |
68 |
69 |
70 |
71 | Hi! from Lumia 520.
72 | http://dufferzafar.engineerinme.com/blog/hi-from-lumia-520/
73 | Mon, 17 Jun 2013 08:11:57 +0000
74 | admin
75 | http://engineerinme.com/dufferzafar/metroMe/htm/blog/?p=18
76 |
77 | WordPress for Windows Phone 8.
78 |
79 | This is great.]]>
80 |
81 | 18
82 | 2013-06-17 08:11:57
83 | 2013-06-17 08:11:57
84 | open
85 | open
86 | hi-from-lumia-520
87 | publish
88 | 0
89 | 0
90 | post
91 |
92 | 0
93 |
94 |
95 | _wpas_done_all
96 |
97 |
98 |
99 | _edit_last
100 |
101 |
102 |
103 |
104 | Offline capabilities of the WordPress App.
105 | http://dufferzafar.engineerinme.com/blog/offline-capabilities-of-the-wordpress-app/
106 | Tue, 18 Jun 2013 03:27:32 +0000
107 | admin
108 | http://engineerinme.com/dufferzafar/metroMe/htm/blog/?p=19
109 |
110 | Thankfully the application supports a local draft mode wherein you can save drafts and then publish them once you are online.
]]>
111 |
112 | 19
113 | 2013-06-18 03:27:32
114 | 2013-06-18 03:27:32
115 | open
116 | open
117 | offline-capabilities-of-the-wordpress-app
118 | publish
119 | 0
120 | 0
121 | post
122 |
123 | 0
124 |
125 |
126 | _wpas_done_all
127 |
128 |
129 |
130 | _edit_last
131 |
132 |
133 |
134 |
135 | Good artists copy, great artists steal
136 | http://dufferzafar.engineerinme.com/blog/good-artists-copy-great-artists-steal/
137 | Fri, 02 Aug 2013 15:52:06 +0000
138 | admin
139 | http://dufferzafar.engineerinme.com/blog/?p=20
140 |
141 |
148 |
149 | 20
150 | 2013-08-02 15:52:06
151 | 2013-08-02 15:52:06
152 | open
153 | open
154 | good-artists-copy-great-artists-steal
155 | publish
156 | 0
157 | 0
158 | post
159 |
160 | 0
161 |
162 |
163 |
164 | _edit_last
165 |
166 |
167 |
168 |
169 | Everything by me is unlicensed
170 | http://dufferzafar.engineerinme.com/blog/everything-by-me-is-unlicensed/
171 | Sat, 03 Aug 2013 15:55:31 +0000
172 | admin
173 | http://dufferzafar.engineerinme.com/blog/?p=22
174 |
175 | unlicensed. That includes the stuff I code, design or write.
178 |
179 | You can copy anything that I claim to be mine and use it however you like. I don’t even demand credit, although it’d be insanely great if you cited me.
180 |
181 | P.S.: Sometimes I feel even unlicense is superfluous. Here’s my take at a license –
182 |
I did what I had to. It’s your turn bitch! Use your imagination.....
]]>
183 |
184 | 22
185 | 2013-08-03 15:55:31
186 | 2013-08-03 15:55:31
187 | open
188 | open
189 | everything-by-me-is-unlicensed
190 | publish
191 | 0
192 | 0
193 | post
194 |
195 | 0
196 |
197 |
198 | _edit_last
199 |
200 |
201 |
202 |
203 | 1 - Getting a feel of minimalism
204 | http://dufferzafar.engineerinme.com/blog/1-getting-a-feel-of-minimalism/
205 | Tue, 06 Aug 2013 16:02:45 +0000
206 | admin
207 | http://dufferzafar.engineerinme.com/blog/?p=25
208 |
209 | this gem and It almost kills me. I know this is exactly what I want, Just know it.
218 |
219 | Fifteen minutes in and I already have a homepage, not bad. Problem one solved. Which brings me to problem two: Where am I gonna put links to other sections of the site? The footer just seems unintuitive and I don’t want the traditional navigation bar design. This really is a problem and I am out of ideas. What’d I do?
220 |
221 | Google. This time with a slight change of keywords “Minimalist web design”. I open an article called “Showcase of Super Clean Minimalist Web Designs”. The first link in the article solves the problem and I suddenly get a few ideas, I perfectly visualize them and am convinced that this is going somewhere.
222 |
223 | That’s when I begin writing this article. And it takes me about an hour to write up till this very point. Writing is hard.]]>
224 |
225 | 25
226 | 2013-08-06 16:02:45
227 | 2013-08-06 16:02:45
228 | open
229 | open
230 | 1-getting-a-feel-of-minimalism
231 | publish
232 | 0
233 | 0
234 | post
235 |
236 | 0
237 |
238 |
239 | _edit_last
240 |
241 |
242 |
243 | _wp_old_slug
244 |
245 |
246 |
247 |
248 | Why metroMe sucks?
249 | http://dufferzafar.engineerinme.com/blog/why-metrome-sucks/
250 | Mon, 12 Aug 2013 16:06:09 +0000
251 | admin
252 | http://dufferzafar.engineerinme.com/blog/?p=27
253 |
254 | ]]>
271 |
272 | 27
273 | 2013-08-12 16:06:09
274 | 2013-08-12 16:06:09
275 | open
276 | open
277 | why-metrome-sucks
278 | publish
279 | 0
280 | 0
281 | post
282 |
283 | 0
284 |
285 |
286 | _edit_last
287 |
288 |
289 |
290 |
291 | The New Plan
292 | http://dufferzafar.engineerinme.com/blog/the-new-plan/
293 | Tue, 13 Aug 2013 16:46:56 +0000
294 | admin
295 | http://dufferzafar.engineerinme.com/blog/?p=50
296 |
297 | Design Guidelines
298 |
Responsive from the start
299 | You probably have seen responsive web in action but might not be aware of its presence.
300 |
Responsive web design (RWD) is a web design approach aimed at crafting sites to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices (from desktop computer monitors to mobile phones). - Wikipedia
301 | I’ll try to ensure that miniMe looks just as beautiful on a mobile device as on a large screen.
302 |
303 | Another aspect that I think could come under responsiveness is Browser independence. What good is a website that works only on one browser.
304 |
Copy Less, Code More
305 | I have a habit of copying code from multiple sources and putting it all together to make things work. I now intend to change it, not that I don’t find it useful anymore but I just plan to actually code most of the stuff by myself – something I hardly do.
306 |
Minimalism Everywhere
307 | If you still haven’t figured out - miniMe stands for Minimalist Me. So minimalism is going to play a central role. It’d be everywhere, starting right from the design – the looks – to the back end – the code.
308 |
309 | I plan to use minimum number of external dependencies, plugins and templates. This is going to be hard.
310 |
Unison with Wordpress
311 | I’ve come to realise that when it comes to CMS – Wordpress just rocks. So I plan to link miniMe with wordpress perfectly. I still haven’t figured out the entire model but this has to be done.]]>
312 |
313 | 50
314 | 2013-08-13 16:46:56
315 | 2013-08-13 16:46:56
316 | open
317 | open
318 | the-new-plan
319 | publish
320 | 0
321 | 0
322 | post
323 |
324 | 0
325 |
326 |
327 | _edit_last
328 |
329 |
330 |
331 |
332 | Readability: Read without distractions
333 | http://dufferzafar.engineerinme.com/blog/readability-read-without-distractions/
334 | Wed, 14 Aug 2013 22:49:05 +0000
335 | admin
336 | http://dufferzafar.engineerinme.com/blog/?p=53
337 |
338 | I've just found a great way of reading web articles, by stripping them down just to the core: the central text.
339 |
340 | Enter Readability.
341 |
342 | Readability is a simple add-on for browsers designed to create a distraction free reading mode. It's a very simple, yet exceptionally useful. All you need to do is, head over to their website, sign up for an account, and add a plugin to your browser. Once it is added, just surf the net like you usually do and click the button whenever you find a web page that you want formatted in a clean, readable fashion. And then, read away.
343 |
344 | Readability is very customizable, offering the ability to change the font type and size as well as the column width of the text within the page. You can also change the background color to simulate a night mode, perfect for late night reading.
345 |
346 | It also maintains a reading list, categorizing articles that you haven't read, and archiving the ones you have. You can also tag articles which adds another realm to categorizing, you can then sort articles by your tags.
347 |
348 | Sadly, but not surprisingly, there's no windows phone app. So, I can't catch up to my reading on a my phone but Android and iOS users, as always, can. I hate you guys.
349 |
350 | Update:
351 |
352 | I must've been feeling pretty sleepy when I first wrote this because just a little googling resulted in a few unofficial readability clients for Windows Phone. I tested atleast three of them, and none has everything that you'd expect from a readability app but one of them stands out as it comes with offline cache support, so you can read even when you are disconnected, that's exactly what I need.
353 |
354 | You can download Now Readable and wish that the developer adds some more features like font size, background color, and most importantly - landscape mode support.
355 |
356 | Update 2:
357 |
358 | Turns out, Now Readable's sync feature has a few bugs due to which I was not able to sync my reading list. I was not going to sit quiet about it, So I contacted its developer @chustar and he hopes to fix the bug and release an update. Yay!]]>
359 |
360 | 53
361 | 2013-08-14 22:49:05
362 | 2013-08-14 22:49:05
363 | open
364 | open
365 | readability-read-without-distractions
366 | publish
367 | 0
368 | 0
369 | post
370 |
371 | 0
372 |
373 |
374 |
375 | _edit_last
376 |
377 |
378 |
379 |
380 | Bookviser: Enjoying a good book on windows phone
381 | http://dufferzafar.engineerinme.com/blog/bookviser-enjoying-a-good-book-on-windows-phone/
382 | Wed, 14 Aug 2013 23:42:53 +0000
383 | admin
384 | http://dufferzafar.engineerinme.com/blog/?p=61
385 |
386 | Last Opened Books displays a tile view of the books you recently read with their cover pages to help you recognize them easily. Right from here you will be able to manage them i.e. Pin to Start Menu, Delete or Open the book. Nice and Simple.
395 |
396 | You can add new books by searching online catalogs, which include sites like Project Gutenberg and FeedBooks, or by directly downloading from the inbuilt web browser. None of these methods suit my needs. And this is where things get a bit frustrating...
397 |
398 | I rely on skydrive to sync my books because it offers the flexibility to add books that I've downloaded from torrents (long live the pirates) or converted from a PDF. I upload the ePub file to skydrive and then sync it with the app. So basically, to add a book (from torrent) you first have to download it on your computer, then upload it to skydrive and then again download it on your device. Three times bandwidth usage. Sob.
399 |
400 | I am wishing for an update that'd add features like - Adding a book from SD Card, PDF Support etc. Till then, can't do anything but read another book.
401 |
402 | ]]>
403 |
404 | 61
405 | 2013-08-14 23:42:53
406 | 2013-08-14 23:42:53
407 | open
408 | open
409 | bookviser-enjoying-a-good-book-on-windows-phone
410 | publish
411 | 0
412 | 0
413 | post
414 |
415 | 0
416 |
417 |
418 |
419 |
420 | _edit_last
421 |
422 |
423 |
424 |
425 | Source Code Editors: Sublime text and Notepad++
426 | http://dufferzafar.engineerinme.com/blog/source-code-editors-sublime-text-and-notepad/
427 | Thu, 15 Aug 2013 08:39:20 +0000
428 | admin
429 | http://dufferzafar.engineerinme.com/blog/?p=70
430 |
431 | For a coder, the choice of a text editor is almost a political statement. Everybody has, and should have, their own personal preferences. Part of the appeal of the editor comes from its leanness and simplicity. Here are two of the best code editors, although I'd highly recommend sublime text, I just can't seem to let go of Notepad++. Old habits die hard, as they say.
432 |
Sublime Text
433 | Sublime Text certainly feels lean on the surface, with no toolbars or configuration dialogs. It's also very, very fast. But that simplicity is only skin-deep: dig in just a bit, and you'll find yourself immersed in plugins, keyboard shortcuts, and more.
434 |
435 | Looks: The most noticeable feature, and probably the only thing you'll notice at first glance, is the mini-map which runs along the right-side gutter of the editing pane. It can be used as a visual scrollbar when working on long files. The other thing that catches attention is the night-mode styled theme which is easy on the eyes, if you want daylight mode or want to experiment with some other colors, goto Preferences - Color Schemes, and select something that suits you.
436 |
437 | There's also a Distraction Free Mode, which hides away all the crap and takes you into a fullscreen mode where you can just code and code and code.
438 |
439 | Commands: The single most important feature that you'll be using is the command palette. Press Ctrl+Shift+P and you can control every single function, plugin, menu item, and what not by just typing in the some text. Seriously Cool.
440 |
441 | Extensibilty: Sublime Text has some great plugins, the most important of which is called Package Control, it lets you easily search, download and install other plugins right from the editor. It took me around five seconds to add autohotkey syntax highlight support, and that too, without ever leaving the editor.
442 |
443 | These are some of the most basic features, I'll be adding the rest as and when I find them.
444 |
Notepad++
445 | The only reason I am adding NPP here is because no matter how much I resist the temptation, I find myself opening it every now and then. I've been using it for some two years now and am just addicted to the view it presents. I'm used to the default syntax highlighting - Green represents comments, Blue is for functions etc.
446 |
447 | Probably the only plus point with notepad++ is that its free and open source. But that matters only if you have a high conscience or, for some other reason, have vowed not to use pirated software. In which case, I'll salute you and ridicule you at the same time.
448 |
449 | I am not saying that Notepad++ is in any way less equipped than sublime text. It's been there since 2003 (that's a f'king decade) and is still going strong, Don Ho updates it frequently and it has more features than you'll probably ever need.
450 |
451 | But when you are trying something new, you might as well try something that's more sexy. And Sublime Text is sexy as hell.
452 |
453 | [nggallery id=2]]]>
454 |
455 | 70
456 | 2013-08-15 08:39:20
457 | 2013-08-15 08:39:20
458 | open
459 | open
460 | source-code-editors-sublime-text-and-notepad
461 | publish
462 | 0
463 | 0
464 | post
465 |
466 | 0
467 |
468 |
469 | _edit_last
470 |
471 |
472 |
473 |
474 | Notepad++ Uninstalled
475 | http://dufferzafar.engineerinme.com/blog/notepad-uninstalled/
476 | Mon, 19 Aug 2013 13:31:12 +0000
477 | admin
478 | http://dufferzafar.engineerinme.com/blog/?p=88
479 |
480 |
481 |
482 | 88
483 | 2013-08-19 13:31:12
484 | 2013-08-19 13:31:12
485 | open
486 | open
487 | notepad-uninstalled
488 | publish
489 | 0
490 | 0
491 | post
492 |
493 | 0
494 |
495 |
496 |
497 | _edit_last
498 |
499 |
500 |
501 |
502 | Being Sublimed!
503 | http://dufferzafar.engineerinme.com/blog/being-sublimed/
504 | Thu, 22 Aug 2013 14:36:52 +0000
505 | admin
506 | http://dufferzafar.engineerinme.com/blog/?p=90
507 |
508 |
510 |
Disabled Update Check. Duh!
511 |
Remove trailing spaces from your file using a setting.
512 |
Installed Package Control, Autohotkey, INI Syntax Packages
513 |
Added "Open With Sublime" to windows context menu. A REG File - Modifies settings in the registry but comes in handy. You can open any file, folder right from the explorer.
514 |
Ctrl+Shift+S - Runs any open file. Super Useful. Comes from an autohotkey script I wrote - Windows Helper. I'd share it once I clean up the code a bit.
515 |
SideBarEnhancements - Does what it says.
516 |
517 | I also did a setup of C++, finally. Turbo was beginning to get on my nerves. I installed MinGW, doing which was straightforward - you just need to extract the archive, well, that and the path variable modification. But it was worth it, I can now build/run C files right from sublime text.
518 |
519 | Emmet was the next plugin I installed, It is basically an abbreviation engine which allows you to expand expressions. Superb for high-speed HTML, or any other structured code format, coding and editing.
520 |
521 | Gist is another plugin that comes in handy. You just have to type "Create Gist" in the command palette, the plugin does the rest and copies the url of the gist to clipboard, ready for sharing.]]>
522 |
523 | 90
524 | 2013-08-22 14:36:52
525 | 2013-08-22 14:36:52
526 | open
527 | open
528 | being-sublimed
529 | publish
530 | 0
531 | 0
532 | post
533 |
534 | 0
535 |
536 |
537 | _edit_last
538 |
539 |
540 |
541 |
542 | 2 - Gathering Resources
543 | http://dufferzafar.engineerinme.com/blog/?p=97
544 | Wed, 30 Nov -0001 00:00:00 +0000
545 | admin
546 | http://dufferzafar.engineerinme.com/blog/?p=97
547 |
548 | This is the second article in the developing miniMe series. Here's the previous one: Getting a feel of minimalism.
Nothing major happens here, I'll just be gathering stuff that I plan to use - Scripts, Snippets, Graphics, etc.
I've already finalized my homepage, it'll be based on Manuel's
]]>
549 |
550 | 97
551 | 2013-08-22 16:15:48
552 | 0000-00-00 00:00:00
553 | open
554 | open
555 |
556 | draft
557 | 0
558 | 0
559 | post
560 |
561 | 0
562 |
563 |
564 | _edit_last
565 |
566 |
567 |
568 |
569 | Photo Albums On Windows Phone
570 | http://dufferzafar.engineerinme.com/blog/?p=112
571 | Wed, 30 Nov -0001 00:00:00 +0000
572 | admin
573 | http://dufferzafar.engineerinme.com/blog/?p=112
574 |
575 |
576 |
577 | 112
578 | 2013-09-10 14:21:57
579 | 0000-00-00 00:00:00
580 | open
581 | open
582 | photo-albums-nuances
583 | draft
584 | 0
585 | 0
586 | post
587 |
588 | 0
589 |
590 |
591 | _edit_last
592 |
593 |
594 |
595 |
596 | miniMe Paused - Cryptex Resumed
597 | http://dufferzafar.engineerinme.com/blog/minime-paused-cryptex-resumed/
598 | Thu, 12 Sep 2013 09:51:18 +0000
599 | admin
600 | http://dufferzafar.engineerinme.com/blog/?p=118
601 |
602 |
615 |
616 | 118
617 | 2013-09-12 09:51:18
618 | 2013-09-12 09:51:18
619 | open
620 | open
621 | minime-paused-cryptex-resumed
622 | publish
623 | 0
624 | 0
625 | post
626 |
627 | 0
628 |
629 |
630 | _edit_last
631 |
632 |
633 |
634 |
635 | Hacking WTF!
636 | http://dufferzafar.engineerinme.com/blog/hacking-wtf/
637 | Sat, 28 Sep 2013 08:56:18 +0000
638 | admin
639 | http://dufferzafar.engineerinme.com/blog/?p=120
640 |
641 |
652 |
653 | 120
654 | 2013-09-28 08:56:18
655 | 2013-09-28 08:56:18
656 | open
657 | open
658 | hacking-wtf
659 | private
660 | 0
661 | 0
662 | post
663 |
664 | 0
665 |
666 |
667 | _edit_last
668 |
669 |
670 |
671 |
672 | Excerpts from iWoz
673 | http://dufferzafar.engineerinme.com/blog/excerpts-from-iwoz/
674 | Sat, 28 Sep 2013 09:04:46 +0000
675 | admin
676 | http://dufferzafar.engineerinme.com/blog/?p=122
677 |
678 |
683 |
684 | 122
685 | 2013-09-28 09:04:46
686 | 2013-09-28 09:04:46
687 | open
688 | open
689 | excerpts-from-iwoz
690 | private
691 | 0
692 | 0
693 | post
694 |
695 | 0
696 |
697 |
698 | _edit_last
699 |
700 |
701 |
702 |
703 | Code sharing and my Github repositories
704 | http://dufferzafar.engineerinme.com/blog/code-sharing-and-my-github-repositories/
705 | Sun, 29 Sep 2013 17:01:22 +0000
706 | admin
707 | http://dufferzafar.engineerinme.com/blog/?p=124
708 |
709 | Win-Butler last week by merging some of my existing scripts. Most of the code was already written and all I had to do was assemble the chunks, remove crap and add comments. It took me about three hours to do so but I am fairly happy with the output. The script is clean and can be easily edited. Also, it felt great to churn out some AHK after a long time.
724 |
725 | I've polished Autohotkey-Scripts - redundant scripts have been removed, new ones added. I've also added readmes to some the of scripts. There's still a lot of work to be done - I need to further cleanup most of the code, add remaining readmes. I also have some viruses/pranks scripts written that I need to refine and push.
726 |
727 | AMS-Projects has been updated with the readme. I don't plan to further improve anything.
728 |
729 | libLua is a real mess. There's no documentation on what functions are available, the code files themselves are scattered and need to be standardized. I think I should separate bigNum into a separate repository as it really is worthy of some attention.
730 |
731 | Project-Euler-Solutions - The only reason I uploaded these solutions was because they were solved in pure Lua. I coded from scratch any dependency that I needed (bigNum is an example.) This repo is a mess too. I can scrub most of the solutions and could probably increase the efficiency of some.
732 |
733 | Shar - This algorithm is so naive that I wouldn't even have uploaded the code had it been in any other language. But you can't just throw away code written in Lua. As of now the cipher is amateurish and there isn't much hope for it but still I think I'll rewrite the code someday, write a readme too. Till then...]]>
734 |
735 | 124
736 | 2013-09-29 17:01:22
737 | 2013-09-29 17:01:22
738 | open
739 | open
740 | code-sharing-and-my-github-repositories
741 | publish
742 | 0
743 | 0
744 | post
745 |
746 | 0
747 |
748 |
749 | _edit_last
750 |
751 |
752 |
753 | _wp_old_slug
754 |
755 |
756 |
757 |
758 | When life gives you lemons...
759 | http://dufferzafar.engineerinme.com/blog/when-life-gives-you-lemons/
760 | Wed, 02 Oct 2013 10:12:58 +0000
761 | admin
762 | http://dufferzafar.engineerinme.com/blog/?p=131
763 |
764 | this.) It's all there on the hard drive, so I could just bring it all back (in theory.) I started Recuva - a free tool to recover deleted data. It took about a minute to find tonnes of deleted shit on my "C:" drive but sadly, and not surprisingly, the files I needed weren't listed. Welcome to the real world.
771 |
772 | Recuva also provides an option to Deep scan your drives but I've used that stuff before -- It'd have taken about an hour to find the files and the chances of recovery would still have been feeble.
773 |
774 | So, having nothing left to try, I synced the folder with Github but it was about a day old and everything I had done in the past twenty hours was gone. Shit Happens. It's like clearing some checkpoints and then forgetting to save the game. Now you'll have to replay the levels and so I did...
775 |
776 | I did everything all over again -- wrote the code, renamed the files, some more shit. It turned out to be fairly okay if you ask me.
777 |
778 | But now, as I am writing this, I'm really glad that it all happened because -
779 |
780 | A. I got to write this blog post, I love writing and all I need is a topic I sincerely care about.
781 |
782 | B. I thought of a nifty bug in the code I was writing -- had something to do with disabled javascript in the user's browser.
783 |
784 | C. Most importantly, I learnt the importance of backups and why they are as essential as everybody claims them to be. So I decided that I'd backup my stuff too. But come on, you don't expect me to copy folders to some other place every time I update something. Nope, I am gonna write a script to do that.
785 |
786 | I have figured out all the internals. I'll code it in AHK (duh!) and it'll be timer based -- backups will be made every fifteen minutes or so. I am going to use 7zip to compress the folder and copy it to some place that I don't normally visit (what if I end up deleting the backups too? *shudder*)
787 |
788 | The script will maintain at least 5 different snapshots and would delete the older one as soon as the new one comes in. I'll start by adding the core functionality first to get stuff working and would then work on additional features. It sounds fairly simple on paper. Let's see how it turns out...
789 |
I believe when life gives you lemons, you should make lemonade...and try to find someone whose life has given them vodka, and have a party.
790 | -- Ron White