├── gif_slicer ├── .gitignore ├── test_turn_catalogue_image_into_gif.py ├── turn_into_gif.py ├── turn_catalogue_image_into_gif.py └── run.sh ├── aws ├── .gitignore ├── runner.Dockerfile ├── builder.Dockerfile ├── run.sh ├── get_all_route53_record_sets.py ├── Makefile ├── print_ssm_name_tree.py ├── get_aws_icons.py ├── find_sns_topics_with_no_subscription.go ├── issue_temporary_credentials.py ├── fetch_all_cloudwatch.py ├── tail_cloudwatch_events.py ├── list_memory_cpu_of_fargate_tasks.go ├── templates │ └── sqs_dashboard.html └── find_ecs_bottlenecks.py ├── scripts ├── .gitignore └── count_pdf_pages.py ├── safari-stylesheet ├── .gitignore ├── style.scss ├── _pinboard.scss ├── README.md └── Makefile ├── services ├── kindle │ ├── .gitignore │ ├── kindle_dedrm │ │ ├── README.txt │ │ ├── kgenpids.py │ │ └── alfcrypto.c │ └── dedrm.py ├── dreamwidth │ ├── .gitignore │ ├── requirements.in │ ├── _helpers.py │ ├── requirements.txt │ ├── backup_comments.py │ ├── backup_posts.py │ ├── markdownify.py │ ├── build_reading_page_rss.py │ ├── post_entry.py │ └── _api.py ├── youtube │ ├── rename_downloads.py │ └── get_automated_youtube_transcript.py └── backblaze │ └── add_backblaze_exclusions.py ├── adventofcode ├── 2017 │ ├── .gitignore │ ├── runner.Dockerfile │ ├── builder.Dockerfile │ ├── day1.go │ ├── run.sh │ └── Makefile └── 2018 │ ├── 9.txt │ ├── run.sh │ ├── helpers.rb │ ├── 6.txt │ ├── day1.rb │ ├── day2.rb │ ├── day5.rb │ ├── day8.rb │ ├── day3.rb │ ├── day10.rb │ ├── day4.rb │ ├── day9.rb │ ├── day7.rb │ ├── day6.rb │ ├── 7.txt │ └── 2.txt ├── backups ├── deviantart │ ├── requirements.in │ ├── requirements.txt │ └── save_image.py └── flickr │ ├── requirements.in │ ├── requirements.txt │ └── save_image.py ├── style.css ├── repros ├── 2019-05-concurrent-try │ ├── project │ │ └── build.properties │ ├── build.sbt │ └── src │ │ └── main │ │ └── scala │ │ └── Main.scala ├── 2019-05-scanamo-conditional-update │ ├── project │ │ └── build.properties │ ├── docker-compose.yml │ ├── build.sbt │ └── src │ │ └── main │ │ └── scala │ │ ├── Main.scala │ │ └── Helpers.scala ├── 2019-11-ureq-cookies │ ├── Cargo.toml │ ├── src │ │ └── main.rs │ └── server.py ├── 2019-02-rack-test-bug │ ├── Dockerfile │ ├── Makefile │ ├── server.rb │ ├── test.rb │ └── README.md ├── 2019-01-jackson-elasticsearch-bug │ ├── build.sbt │ ├── run.py │ ├── src │ │ └── main │ │ │ └── scala │ │ │ └── Main.scala │ └── README.md ├── 2019-12-transfer-manager-upload │ ├── README.md │ ├── dependency-reduced-pom.xml │ ├── src │ │ └── main │ │ │ └── java │ │ │ └── repro │ │ │ └── Repro.java │ └── pom.xml └── README.md ├── wellcome ├── .gitignore └── pprint_scala.py ├── wallpapers ├── wallpaper_aro.jpg ├── wallpaper_pan.jpg ├── wallpaper_delta.jpg ├── wallpaper_armenia.jpg ├── wallpaper_rainbow.jpg ├── wallpaper_mystery_1.jpg ├── wallpaper_mystery_2.jpg ├── wallpaper_mystery_3.jpg ├── wallpaper_mystery_4.jpg ├── wallpaper_genderfluid.jpg └── make_iphone_rainbow_wallpaper.py ├── Makefile ├── dockerfiles ├── primitive.Dockerfile ├── README.md └── Makefile ├── textmate-config ├── regexes.md ├── Makefile └── Global.tmProperties ├── .github └── delete_branches.workflow ├── xtinamas.sql ├── download_atv_screensavers ├── Cargo.toml ├── README.md └── src │ └── main.rs ├── truthy.py ├── textmate-snippets ├── pytest parametrize.tmSnippet ├── os.walk snippet.tmSnippet └── pytest-raises.tmSnippet ├── pathscripts ├── open_private_browsing ├── pretty_parens └── concatgpx ├── quasirandom.py ├── manhattan.py ├── fun ├── self_deprecating_1.py └── 
self_deprecating_2.py ├── LICENSE ├── urllib3.html ├── README.md ├── hide_all_blue_ticks.js ├── prime_factors.py ├── setColoursToCellValues.gs ├── game_of_life.py └── .gitignore /gif_slicer/.gitignore: -------------------------------------------------------------------------------- 1 | *.gif 2 | -------------------------------------------------------------------------------- /aws/.gitignore: -------------------------------------------------------------------------------- 1 | *.out 2 | *.log 3 | -------------------------------------------------------------------------------- /scripts/.gitignore: -------------------------------------------------------------------------------- 1 | _pdf_cache.json 2 | -------------------------------------------------------------------------------- /safari-stylesheet/.gitignore: -------------------------------------------------------------------------------- 1 | style.css 2 | -------------------------------------------------------------------------------- /services/kindle/.gitignore: -------------------------------------------------------------------------------- 1 | docstore_password.txt -------------------------------------------------------------------------------- /adventofcode/2017/.gitignore: -------------------------------------------------------------------------------- 1 | .docker 2 | *.out 3 | -------------------------------------------------------------------------------- /safari-stylesheet/style.scss: -------------------------------------------------------------------------------- 1 | @import "_pinboard.scss"; 2 | -------------------------------------------------------------------------------- /backups/deviantart/requirements.in: -------------------------------------------------------------------------------- 1 | bs4 2 | hyperlink 3 | urllib3 4 | -------------------------------------------------------------------------------- /style.css: -------------------------------------------------------------------------------- 1 | div[aria-label="Trending"] { 2 | display: none; 3 | } -------------------------------------------------------------------------------- /adventofcode/2018/9.txt: -------------------------------------------------------------------------------- 1 | 424 players; last marble is worth 71144 points 2 | -------------------------------------------------------------------------------- /backups/flickr/requirements.in: -------------------------------------------------------------------------------- 1 | bs4 2 | hyperlink 3 | urllib3 4 | xmltodict 5 | -------------------------------------------------------------------------------- /repros/2019-05-concurrent-try/project/build.properties: -------------------------------------------------------------------------------- 1 | sbt.version=1.1.2 2 | -------------------------------------------------------------------------------- /wellcome/.gitignore: -------------------------------------------------------------------------------- 1 | *.txt 2 | *.json 3 | *.json.gz 4 | *.csv 5 | *.xml 6 | -------------------------------------------------------------------------------- /repros/2019-05-scanamo-conditional-update/project/build.properties: -------------------------------------------------------------------------------- 1 | sbt.version=1.1.2 2 | -------------------------------------------------------------------------------- /services/dreamwidth/.gitignore: -------------------------------------------------------------------------------- 1 | backup_dreamwidth.sh 2 | *.md 3 | *.json 4 | *.html 5 | *.txt 6 | 
-------------------------------------------------------------------------------- /services/dreamwidth/requirements.in: -------------------------------------------------------------------------------- 1 | bs4 2 | click 3 | feedgenerator 4 | requests 5 | pytest 6 | -------------------------------------------------------------------------------- /wallpapers/wallpaper_aro.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexwlchan/junkdrawer/HEAD/wallpapers/wallpaper_aro.jpg -------------------------------------------------------------------------------- /wallpapers/wallpaper_pan.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexwlchan/junkdrawer/HEAD/wallpapers/wallpaper_pan.jpg -------------------------------------------------------------------------------- /wallpapers/wallpaper_delta.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexwlchan/junkdrawer/HEAD/wallpapers/wallpaper_delta.jpg -------------------------------------------------------------------------------- /wallpapers/wallpaper_armenia.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexwlchan/junkdrawer/HEAD/wallpapers/wallpaper_armenia.jpg -------------------------------------------------------------------------------- /wallpapers/wallpaper_rainbow.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexwlchan/junkdrawer/HEAD/wallpapers/wallpaper_rainbow.jpg -------------------------------------------------------------------------------- /safari-stylesheet/_pinboard.scss: -------------------------------------------------------------------------------- 1 | #pinboard { 2 | a:hover { 3 | text-decoration: underline !important; 4 | } 5 | } 6 | -------------------------------------------------------------------------------- /wallpapers/wallpaper_mystery_1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexwlchan/junkdrawer/HEAD/wallpapers/wallpaper_mystery_1.jpg -------------------------------------------------------------------------------- /wallpapers/wallpaper_mystery_2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexwlchan/junkdrawer/HEAD/wallpapers/wallpaper_mystery_2.jpg -------------------------------------------------------------------------------- /wallpapers/wallpaper_mystery_3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexwlchan/junkdrawer/HEAD/wallpapers/wallpaper_mystery_3.jpg -------------------------------------------------------------------------------- /wallpapers/wallpaper_mystery_4.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexwlchan/junkdrawer/HEAD/wallpapers/wallpaper_mystery_4.jpg -------------------------------------------------------------------------------- /wallpapers/wallpaper_genderfluid.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexwlchan/junkdrawer/HEAD/wallpapers/wallpaper_genderfluid.jpg -------------------------------------------------------------------------------- 
/aws/runner.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM alpine 2 | 3 | RUN apk add --update ca-certificates 4 | 5 | ENV AWS_REGION=eu-west-1 6 | 7 | VOLUME ["/bin"] 8 | WORKDIR /bin 9 | -------------------------------------------------------------------------------- /repros/2019-05-concurrent-try/build.sbt: -------------------------------------------------------------------------------- 1 | name := "2019_05_concurrent_try" 2 | 3 | version := "1.0.0" 4 | 5 | organization := "edu.self" 6 | scalaVersion := "2.12.6" 7 | -------------------------------------------------------------------------------- /adventofcode/2017/runner.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM alpine 2 | 3 | RUN apk add --update ca-certificates 4 | 5 | ENV AWS_REGION=eu-west-1 6 | 7 | VOLUME ["/bin"] 8 | WORKDIR /bin 9 | -------------------------------------------------------------------------------- /adventofcode/2018/run.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | set -o errexit 4 | set -o nounset 5 | 6 | docker run --rm --tty --volume "$(pwd)":/data --workdir /data ruby:alpine ruby "day$1.rb" 7 | -------------------------------------------------------------------------------- /repros/2019-05-scanamo-conditional-update/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "2.1" 2 | services: 3 | dynamodb: 4 | image: "peopleperhour/dynamodb" 5 | ports: 6 | - "8000:8000" 7 | -------------------------------------------------------------------------------- /adventofcode/2018/helpers.rb: -------------------------------------------------------------------------------- 1 | def solution(day, title, answer1, answer2) 2 | puts "--- Day #{day}: #{title} ---" 3 | puts "#1) #{answer1}" 4 | puts "#2) #{answer2}" 5 | puts "" 6 | end 7 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | ROOT = $(shell git rev-parse --show-toplevel) 2 | 3 | include safari-stylesheet/Makefile 4 | include textmate-config/Makefile 5 | 6 | safari-stylesheet/style.css: $(ROOT)/safari-stylesheet/style.css 7 | -------------------------------------------------------------------------------- /aws/builder.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang:alpine 2 | 3 | RUN apk add --update git 4 | RUN go get github.com/aws/aws-sdk-go 5 | 6 | VOLUME ["/src"] 7 | WORKDIR /src 8 | 9 | ENTRYPOINT ["go", "build"] 10 | -------------------------------------------------------------------------------- /dockerfiles/primitive.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang:1.8-alpine 2 | 3 | RUN apk update && apk add git 4 | RUN go get -u github.com/fogleman/primitive 5 | 6 | WORKDIR /data 7 | VOLUME /data 8 | 9 | ENTRYPOINT ["primitive"] -------------------------------------------------------------------------------- /repros/2019-11-ureq-cookies/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "ureq-cookies" 3 | version = "0.1.0" 4 | authors = ["Alex Chan "] 5 | edition = "2018" 6 | 7 | [dependencies] 8 | ureq = "0.11.2" 9 | --------------------------------------------------------------------------------
/adventofcode/2017/builder.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang:alpine 2 | 3 | RUN apk add --update git 4 | RUN go get github.com/aws/aws-sdk-go 5 | 6 | VOLUME ["/src"] 7 | WORKDIR /src 8 | 9 | ENTRYPOINT ["go", "build"] 10 | -------------------------------------------------------------------------------- /textmate-config/regexes.md: -------------------------------------------------------------------------------- 1 | # Regexes used for find and replace 2 | 3 | For fixing up Python test assertions: 4 | 5 | self.assertEquals?\(([a-zA-Z_.0-9\[\]\(\)'"]+),\s*([a-zA-Z_.0-9\[\]\(\)'"]+)\) 6 | assert $1 == $2 7 | -------------------------------------------------------------------------------- /.github/delete_branches.workflow: -------------------------------------------------------------------------------- 1 | workflow "on pull request merge, delete the branch" { 2 | on = "pull_request" 3 | resolves = ["branch cleanup"] 4 | } 5 | 6 | action "branch cleanup" { 7 | uses = "jessfraz/branch-cleanup-action@master" 8 | secrets = ["GITHUB_TOKEN"] 9 | } -------------------------------------------------------------------------------- /xtinamas.sql: -------------------------------------------------------------------------------- 1 | WITH RECURSIVE 2 | cnt(x) 3 | AS 4 | ( 5 | VALUES(1) 6 | UNION ALL 7 | SELECT x+1 FROM cnt 8 | LIMIT length("Happy Xtina-mas 2019!") 9 | ) 10 | SELECT 11 | sum(unicode(substr("Happy Xtina-mas 2019!", x, 1)) / 100.0) 12 | FROM cnt; 13 | --------------------------------------------------------------------------------
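A quick note on what xtinamas.sql evaluates to: the recursive CTE counts x from 1 up to the length of the message, and the SELECT sums each character's Unicode code point divided by 100. A minimal Python equivalent (not part of the repo):

```python
# Same calculation as xtinamas.sql: sum each character's code point / 100.
message = "Happy Xtina-mas 2019!"
print(sum(ord(c) / 100 for c in message))
```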
/safari-stylesheet/README.md: -------------------------------------------------------------------------------- 1 | # safari-stylesheet 2 | 3 | This directory has the custom CSS stylesheet I use in Safari. 4 | 5 | To compile the stylesheet, run the following command in the root of the repo: 6 | 7 | ```console 8 | $ make safari-stylesheet/style.css 9 | ``` 10 | -------------------------------------------------------------------------------- /repros/2019-05-scanamo-conditional-update/build.sbt: -------------------------------------------------------------------------------- 1 | name := "2019_05_scanamo_conditional_update" 2 | 3 | version := "1.0.0" 4 | 5 | organization := "edu.self" 6 | scalaVersion := "2.12.6" 7 | 8 | libraryDependencies := Seq( 9 | "org.scanamo" %% "scanamo" % "1.0.0-M9" 10 | ) 11 | -------------------------------------------------------------------------------- /download_atv_screensavers/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "download_atv_screensavers" 3 | version = "0.1.0" 4 | authors = ["Alex Chan "] 5 | 6 | [dependencies] 7 | docopt = "1" 8 | reqwest = "0.8.5" 9 | serde = "1.0.64" 10 | serde_derive = "1.0.64" 11 | serde_json = "1.0.19" 12 | -------------------------------------------------------------------------------- /safari-stylesheet/Makefile: -------------------------------------------------------------------------------- 1 | ROOT = $(shell git rev-parse --show-toplevel) 2 | SAFARI = $(ROOT)/safari-stylesheet 3 | 4 | include $(ROOT)/dockerfiles/Makefile 5 | 6 | $(SAFARI)/style.css: $(ROOT)/.docker/lessc $(wildcard $(SAFARI)/*.scss) 7 | docker run --volume $(SAFARI):/data alexwlchan/lessc style.scss style.css 8 | -------------------------------------------------------------------------------- /repros/2019-02-rack-test-bug/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM alpine 2 | 3 | RUN echo "install: --no-rdoc --no-ri" > /root/.gemrc 4 | 5 | RUN apk add --update build-base ruby ruby-dev 6 | RUN gem install json minitest rack rack-test 7 | 8 | EXPOSE 8282 9 | 10 | VOLUME ["/data"] 11 | WORKDIR /data 12 | 13 | ENTRYPOINT ["ruby"] 14 | -------------------------------------------------------------------------------- /repros/2019-11-ureq-cookies/src/main.rs: -------------------------------------------------------------------------------- 1 | extern crate ureq; 2 | 3 | fn main() { 4 | let mut agent = ureq::agent(); 5 | 6 | agent.set("Cookie", "name=value"); 7 | agent.set("Cookie", "name2=value2"); 8 | 9 | let resp = agent.get("http://127.0.0.1:5000/").call(); 10 | 11 | println!("{:?}", resp); 12 | } 13 | -------------------------------------------------------------------------------- /backups/deviantart/requirements.txt: -------------------------------------------------------------------------------- 1 | # 2 | # This file is autogenerated by pip-compile 3 | # To update, run: 4 | # 5 | # pip-compile 6 | # 7 | beautifulsoup4==4.8.0 # via bs4 8 | bs4==0.0.1 9 | hyperlink==19.0.0 10 | idna==2.8 # via hyperlink 11 | soupsieve==1.9.4 # via beautifulsoup4 12 | urllib3==1.25.6 13 | -------------------------------------------------------------------------------- /repros/2019-02-rack-test-bug/Makefile: -------------------------------------------------------------------------------- 1 | IMAGE = alexwlchan/rack-test-bug 2 | 3 | .docker: Dockerfile 4 | docker build -t $(IMAGE) .
5 | touch .docker 6 | 7 | run: .docker 8 | docker run --volume $(CURDIR):/data --publish 8282:8282 $(IMAGE) server.rb 9 | 10 | test: .docker 11 | docker run --volume $(CURDIR):/data $(IMAGE) test.rb 12 | -------------------------------------------------------------------------------- /adventofcode/2017/day1.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "os" 6 | ) 7 | 8 | func main() { 9 | s := "world" 10 | 11 | if len(os.Args) > 1 { 12 | s = os.Args[1] 13 | } 14 | 15 | fmt.Printf("Hello, %v!", s) 16 | fmt.Println("") 17 | 18 | if s == "fail" { 19 | os.Exit(30) 20 | } 21 | } 22 | -------------------------------------------------------------------------------- /backups/flickr/requirements.txt: -------------------------------------------------------------------------------- 1 | # 2 | # This file is autogenerated by pip-compile 3 | # To update, run: 4 | # 5 | # pip-compile 6 | # 7 | beautifulsoup4==4.8.0 # via bs4 8 | bs4==0.0.1 9 | hyperlink==19.0.0 10 | idna==2.8 # via hyperlink 11 | soupsieve==1.9.4 # via beautifulsoup4 12 | urllib3==1.25.6 13 | xmltodict==0.12.0 14 | -------------------------------------------------------------------------------- /repros/2019-01-jackson-elasticsearch-bug/build.sbt: -------------------------------------------------------------------------------- 1 | name := "2019_01_jackson_elasticsearch_bug" 2 | 3 | version := "1.0.0" 4 | 5 | organization := "edu.self" 6 | scalaVersion := "2.12.6" 7 | 8 | libraryDependencies := Seq( 9 | "com.sksamuel.elastic4s" %% "elastic4s-core" % "6.5.0", 10 | "com.sksamuel.elastic4s" %% "elastic4s-http" % "6.5.0" 11 | ) 12 | -------------------------------------------------------------------------------- /repros/2019-02-rack-test-bug/server.rb: -------------------------------------------------------------------------------- 1 | require 'rack' 2 | 3 | class ExampleServer 4 | def call(env) 5 | ["200", {}, "hello world"] 6 | end 7 | end 8 | 9 | 10 | if __FILE__ == $0 11 | app = Rack::Builder.new do 12 | use Rack::Reloader 13 | run ExampleServer.new 14 | end.to_app 15 | 16 | Rack::Server.start(app: app, Port: 8282, Host: "0.0.0.0") 17 | end 18 | -------------------------------------------------------------------------------- /truthy.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ 3 | https://twitter.com/nedbat/status/1542851970184216583 4 | """ 5 | 6 | def true_or_false(value): 7 | """ 8 | Say if a value is "true" or "false". 9 | 10 | >>> true_or_false(1 + 1 == 2) 11 | 'True' 12 | 13 | >>> true_or_false('spam' not in 'spam, spam, eggs and spam') 14 | 'False' 15 | """ 16 | return "FTarlusee"[int(bool(value)) :: 2] 17 | --------------------------------------------------------------------------------
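The string-slicing trick in truthy.py works because "FTarlusee" interleaves the letters of "False" (even indices) with "True" (odd indices), so a step-2 slice starting at 0 or 1 picks out one word or the other. A quick demonstration (not part of the repo):

```python
# "FTarlusee" = "False" and "True" zipped together, letter by letter.
assert "FTarlusee"[0::2] == "False"  # falsy values start the slice at 0
assert "FTarlusee"[1::2] == "True"   # truthy values start it at 1
```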
" " | awk '{print $1}') 10 | 11 | make "$NAME.out" 12 | make "$ROOT"/.docker/gorunner 13 | 14 | docker run --rm \ 15 | --volume "$AWS":/bin \ 16 | --volume ~/.aws:/root/.aws \ 17 | alexwlchan/gorunner /bin/"$NAME".out 18 | -------------------------------------------------------------------------------- /adventofcode/2017/run.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | set -o errexit 4 | set -o nounset 5 | 6 | ROOT="$(git rev-parse --show-toplevel)" 7 | DIR="$ROOT"/2017 8 | 9 | NAME=$(echo "$1" | tr "." " " | awk '{print $1}') 10 | 11 | make --silent "$NAME.out" 12 | make --silent "$DIR"/.docker/gorunner 13 | 14 | docker run --rm \ 15 | --volume "$DIR":/bin \ 16 | --workdir /bin \ 17 | alexwlchan/gorunner /bin/"$NAME".out 18 | -------------------------------------------------------------------------------- /aws/get_all_route53_record_sets.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | import json 5 | import sys 6 | 7 | import boto3 8 | 9 | client = boto3.client("route53") 10 | 11 | paginator = client.get_paginator("list_resource_record_sets") 12 | 13 | for page in paginator.paginate(HostedZoneId=sys.argv[1]): 14 | for record_set in page["ResourceRecordSets"]: 15 | print(json.dumps(record_set)) 16 | -------------------------------------------------------------------------------- /repros/2019-02-rack-test-bug/test.rb: -------------------------------------------------------------------------------- 1 | require 'minitest/autorun' 2 | require 'rack/test' 3 | 4 | require_relative './server' 5 | 6 | class ExampleServerTest < Minitest::Test 7 | include Rack::Test::Methods 8 | 9 | def app 10 | ExampleServer.new 11 | end 12 | 13 | def test_get 14 | get '/foo' 15 | assert_equal 200, last_response.status 16 | assert_equal "hello world", last_response.body 17 | end 18 | end 19 | -------------------------------------------------------------------------------- /repros/2019-11-ureq-cookies/server.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | import flask 5 | 6 | app = flask.Flask(__name__) 7 | 8 | 9 | @app.route("/") 10 | def index(): 11 | print("\nGot a request!") 12 | print("Headers: %r" % dict(flask.request.headers)) 13 | print("Cookies: %r" % flask.request.cookies) 14 | return "hello world" 15 | 16 | 17 | if __name__ == "__main__": 18 | app.run(port=5000) 19 | -------------------------------------------------------------------------------- /textmate-config/Makefile: -------------------------------------------------------------------------------- 1 | ROOT = $(shell git rev-parse --show-toplevel) 2 | 3 | TM_CONFIG = $(ROOT)/textmate-config 4 | 5 | APP_SUPPORT = ~/Library/Application\ Support/TextMate 6 | 7 | 8 | $(APP_SUPPORT)/Global.tmProperties: $(TM_CONFIG)/Global.tmProperties 9 | rm -f $(APP_SUPPORT)/Global.tmProperties 10 | ln -s $(TM_CONFIG)/Global.tmProperties $(APP_SUPPORT)/Global.tmProperties 11 | 12 | 13 | textmate-shortcuts: $(APP_SUPPORT)/Global.tmProperties 14 | -------------------------------------------------------------------------------- /repros/2019-01-jackson-elasticsearch-bug/run.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | import secrets 5 | 6 | import requests 7 | 8 | 9 | INDEX_NAME = secrets.token_hex(8) 10 | 
print("Index name is %s" % INDEX_NAME) 11 | 12 | 13 | requests.put("http://localhost:9200/%s" % INDEX_NAME).raise_for_status() 14 | 15 | resp = requests.get( 16 | "http://localhost:9200/%s/_search" % INDEX_NAME, 17 | json={"from": -10} 18 | ) 19 | print(resp.text) 20 | -------------------------------------------------------------------------------- /repros/2019-12-transfer-manager-upload/README.md: -------------------------------------------------------------------------------- 1 | # transfer-manager-upload 2 | 3 | I was seeing a bug in some Scala tests where calling `AmazonS3.putObject()` would never assign a storage class to an object, even when one was set on the request. I wasn't sure if this was a bug in the AWS SDK (unlikely) or the test container we were using (more likely), so I wrote a Java repro against the real S3. 4 | 5 | This code sets the correct storage class when writing to real S3, so there's a bug somewhere in our test container. 6 | -------------------------------------------------------------------------------- /download_atv_screensavers/README.md: -------------------------------------------------------------------------------- 1 | # download_atv_screensavers 2 | 3 | This is a tool for downloading Apple TV screensavers to a local directory. 4 | 5 | ## Installation 6 | 7 | You need Rust installed, then inside this directory run: 8 | 9 | ```console 10 | $ cargo install 11 | ``` 12 | 13 | This will build the tool, and install it in `~/.cargo/bin`. 14 | 15 | ## Usage 16 | 17 | Pass the directory to save screensavers in as the `--dir` argument: 18 | 19 | ```console 20 | $ ~/.cargo/bin/download_atv_screensavers --dir=/home/screensavers 21 | ``` 22 | -------------------------------------------------------------------------------- /textmate-snippets/pytest parametrize.tmSnippet: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | content 6 | @pytest.mark.parametrize("${1:args}", [${2:data}]) 7 | name 8 | pytest parametrize 9 | scope 10 | source.python 11 | tabTrigger 12 | @param 13 | uuid 14 | EAC7D9C1-B1E7-491C-BB33-97FAE171DA14 15 | 16 | 17 | -------------------------------------------------------------------------------- /textmate-snippets/os.walk snippet.tmSnippet: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | content 6 | for dirname, ${1:dirpath}, ${2:filenames} in os.walk(${3}): 7 | ${4} 8 | name 9 | os.walk 10 | scope 11 | source.python 12 | tabTrigger 13 | owalk 14 | uuid 15 | B069AB5E-D793-4E05-A03D-43A7DCE10D91 16 | 17 | 18 | -------------------------------------------------------------------------------- /textmate-snippets/pytest-raises.tmSnippet: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | content 6 | with pytest.raises(${1:ExceptionClass})${2: as exc}: 7 | ${3:pass} 8 | name 9 | with pytest.raises(...) 
/services/youtube/rename_downloads.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | import os 5 | import sys 6 | 7 | 8 | if __name__ == "__main__": 9 | try: 10 | path = sys.argv[1] 11 | except IndexError: 12 | sys.exit("%s <path>" % __file__) 13 | 14 | filenames = os.listdir(path) 15 | common_prefix = os.path.commonprefix(filenames) 16 | 17 | for f in sorted(filenames): 18 | name, ext = os.path.splitext(f) 19 | # Strip the shared prefix and the trailing video ID that youtube-dl 20 | # appends (11 characters plus its leading separator = 12 chars). 21 | new_f = name[len(common_prefix):-12] + ext 22 | print(new_f) 23 | os.rename( 24 | os.path.join(path, f), 25 | os.path.join(path, new_f) 26 | ) 27 | -------------------------------------------------------------------------------- /aws/Makefile: -------------------------------------------------------------------------------- 1 | ROOT = $(shell git rev-parse --show-toplevel) 2 | AWS = $(ROOT)/aws 3 | 4 | 5 | $(ROOT)/.docker/gobuilder: $(AWS)/builder.Dockerfile 6 | docker build --tag alexwlchan/gobuilder --file builder.Dockerfile $(AWS) 7 | mkdir -p $(ROOT)/.docker 8 | touch $(ROOT)/.docker/gobuilder 9 | 10 | $(ROOT)/.docker/gorunner: $(AWS)/runner.Dockerfile 11 | docker build --tag alexwlchan/gorunner --file runner.Dockerfile $(AWS) 12 | mkdir -p $(ROOT)/.docker 13 | touch $(ROOT)/.docker/gorunner 14 | 15 | %.out: $(ROOT)/.docker/gobuilder %.go 16 | docker run --rm --volume $(AWS):/src --workdir /src alexwlchan/gobuilder \ 17 | -o /src/$@ /src/$(shell basename $@ .out).go 18 | -------------------------------------------------------------------------------- /adventofcode/2017/Makefile: -------------------------------------------------------------------------------- 1 | ROOT = $(shell git rev-parse --show-toplevel) 2 | DIR = $(ROOT)/adventofcode/2017 3 | 4 | 5 | $(DIR)/.docker/gobuilder: $(DIR)/builder.Dockerfile 6 | docker build --tag alexwlchan/gobuilder --file builder.Dockerfile $(DIR) 7 | mkdir -p $(DIR)/.docker 8 | touch $(DIR)/.docker/gobuilder 9 | 10 | $(DIR)/.docker/gorunner: $(DIR)/runner.Dockerfile 11 | docker build --tag alexwlchan/gorunner --file runner.Dockerfile $(DIR) 12 | mkdir -p $(DIR)/.docker 13 | touch $(DIR)/.docker/gorunner 14 | 15 | %.out: $(DIR)/.docker/gobuilder %.go 16 | docker run --rm --volume $(DIR):/src --workdir /src alexwlchan/gobuilder \ 17 | -o /src/$@ /src/$(shell basename $@ .out).go 18 | -------------------------------------------------------------------------------- /adventofcode/2018/6.txt: -------------------------------------------------------------------------------- 1 | 227, 133 2 | 140, 168 3 | 99, 112 4 | 318, 95 5 | 219, 266 6 | 134, 144 7 | 306, 301 8 | 189, 188 9 | 58, 334 10 | 337, 117 11 | 255, 73 12 | 245, 144 13 | 102, 257 14 | 255, 353 15 | 303, 216 16 | 141, 167 17 | 40, 321 18 | 201, 50 19 | 60, 188 20 | 132, 74 21 | 125, 199 22 | 176, 307 23 | 204, 218 24 | 338, 323 25 | 276, 278 26 | 292, 229 27 | 109, 228 28 | 85, 305 29 | 86, 343 30 | 97, 254 31 | 182, 151 32 | 110, 292 33 | 285, 124 34 | 43, 223 35 | 153, 188 36 | 285, 136 37 | 334, 203 38 | 84, 243 39 | 92, 185 40 | 330, 223 41 | 259, 275 42 | 106, 199 43 | 183, 205 44 | 188, 212 45 | 231, 150 46 | 158, 95 47 | 174, 212 48 | 279, 97 49 | 172, 131 50 | 247, 320 51 | -------------------------------------------------------------------------------- /dockerfiles/README.md:
-------------------------------------------------------------------------------- 1 | # dockerfiles 2 | 3 | This directory contains Dockerfiles for Docker images that I find useful. 4 | 5 | I run a lot of programs in Docker rather than installing them directly on a machine (e.g. with brew or apt-get), which has a few benefits: 6 | 7 | * I can see exactly what files are being written/edited on the host 8 | * I can use packages/tools that are installed with package managers I don't use very often, e.g. npm or golang, and not have to worry about getting the package manager working 9 | * When I get a new machine, I have them available as soon as I install Docker 10 | 11 | In my fishconfig, I've got some shell functions that call out to these images, building them first if necessary. 12 | --------------------------------------------------------------------------------
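For illustration, here is roughly what one of those wrapper functions does, sketched in Python rather than fish (the real functions live in the separate fishconfig repo; the -i/-o/-n flags belong to fogleman/primitive itself):

```python
import os
import subprocess

def run_primitive(*args):
    """Run fogleman/primitive via the alexwlchan/primitive image,
    mounting the current directory at the image's /data volume."""
    subprocess.run(
        ["docker", "run", "--rm",
         "--volume", f"{os.getcwd()}:/data",
         "alexwlchan/primitive", *args],
        check=True,
    )

# e.g. redraw input.png using 100 primitive shapes
run_primitive("-i", "input.png", "-o", "output.png", "-n", "100")
```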
/pathscripts/open_private_browsing: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env osascript 2 | 3 | on openPrivateBrowsingWindow(urlToOpen) 4 | tell application "Safari" 5 | activate 6 | 7 | tell application "System Events" 8 | click menu item "New Private Window" of ¬ 9 | menu "File" of menu bar 1 of ¬ 10 | application process "Safari" 11 | end tell 12 | 13 | -- The frontmost window is the private Browsing window that just got 14 | -- opened -- change the URL to the one we want to open. 15 | tell window 1 to set properties of current tab to {URL:urlToOpen} 16 | end tell 17 | end openPrivateBrowsingWindow 18 | 19 | on run argv 20 | openPrivateBrowsingWindow(argv) 21 | end run 22 | -------------------------------------------------------------------------------- /services/dreamwidth/_helpers.py: -------------------------------------------------------------------------------- 1 | # -*- encoding: utf-8 2 | 3 | import datetime as dt 4 | import re 5 | 6 | import pytest 7 | 8 | 9 | def parse_date(date_s, time_s): 10 | amended_date_s = re.sub( 11 | r'(1st|2nd|3rd|\dth)', 12 | lambda m: m.group()[:-2], 13 | date_s 14 | ) 15 | 16 | return dt.datetime.strptime( 17 | amended_date_s + ' ' + time_s, 18 | '%b. %d, %Y %H:%M %p' 19 | ) 20 | 21 | 22 | @pytest.mark.parametrize('date_s, time_s, expected_dt', [ 23 | ('Mar. 17, 2019', '12:14 am', dt.datetime(2019, 3, 17, 12, 14, 0)) 24 | ]) 25 | def test_parse_date(date_s, time_s, expected_dt): 26 | assert parse_date(date_s, time_s) == expected_dt 27 | -------------------------------------------------------------------------------- /gif_slicer/test_turn_catalogue_image_into_gif.py: -------------------------------------------------------------------------------- 1 | # -*- encoding: utf-8 2 | 3 | import pytest 4 | 5 | from turn_catalogue_image_into_gif import parse_catalogue_id 6 | 7 | 8 | @pytest.mark.parametrize('arg,expected_id', [ 9 | ("https://wellcomecollection.org/works/rq48tke5", "rq48tke5"), 10 | ("https://wellcomecollection.org/works/rq48tke5?query=cabbage&page=1", "rq48tke5"), 11 | ("https://api.wellcomecollection.org/catalogue/v2/works/ecdebckk?include=identifiers,subjects", "ecdebckk"), 12 | ("https://api.wellcomecollection.org/catalogue/v1/works/ecdebckk", "ecdebckk"), 13 | ("a22au6yn", "a22au6yn"), 14 | ]) 15 | def test_parse_catalogue_id(arg, expected_id): 16 | assert parse_catalogue_id(arg) == expected_id 17 | -------------------------------------------------------------------------------- /pathscripts/pretty_parens: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import re, sys 4 | 5 | src = sys.argv[1] 6 | 7 | lines = [] 8 | curr_line = "" 9 | indent = 0 10 | 11 | for char in src: 12 | if char == "(": 13 | curr_line += char 14 | lines.append(curr_line) 15 | indent += 1 16 | curr_line = " " * indent 17 | continue 18 | elif char == ")": 19 | lines.append(curr_line) 20 | indent -= 1 21 | curr_line = " " * indent + char 22 | continue 23 | elif char == ",": 24 | curr_line += char 25 | lines.append(curr_line) 26 | curr_line = " " * indent 27 | else: 28 | curr_line += char 29 | 30 | lines.append(curr_line) 31 | print(re.sub(r"\(\s+\)", "()", "\n".join(lines))) -------------------------------------------------------------------------------- /services/dreamwidth/requirements.txt: -------------------------------------------------------------------------------- 1 | # 2 | # This file is autogenerated by pip-compile 3 | # To update, run: 4 | # 5 | # pip-compile 6 | # 7 | atomicwrites==1.3.0 # via pytest 8 | attrs==19.1.0 # via pytest 9 | beautifulsoup4==4.7.1 # via bs4 10 | bs4==0.0.1 11 | certifi==2019.3.9 # via requests 12 | chardet==3.0.4 # via requests 13 | click==7.0 14 | feedgenerator==1.9 15 | idna==2.8 # via requests 16 | more-itertools==6.0.0 # via pytest 17 | pluggy==0.9.0 # via pytest 18 | py==1.8.0 # via pytest 19 | pytest==4.3.1 20 | pytz==2018.9 # via feedgenerator 21 | requests==2.21.0 22 | six==1.12.0 # via feedgenerator, pytest 23 | soupsieve==1.8 # via beautifulsoup4 24 | urllib3==1.24.1 # via requests 25 | -------------------------------------------------------------------------------- /dockerfiles/Makefile: -------------------------------------------------------------------------------- 1 | ROOT = $(shell git rev-parse --show-toplevel) 2 | DOCKERFILES = $(ROOT)/dockerfiles 3 | 4 | IMAGES = \ 5 | atool \ 6 | dos2unix \ 7 | ffmpeg \ 8 | gotutorial \ 9 | lessc \ 10 | primitive \ 11 | sass \ 12 | tiny_elastic \ 13 | travis \ 14 | tree \ 15 | woff2 16 | 17 | 18 | define __template 19 | $(ROOT)/.docker/$(1): $(DOCKERFILES)/$(1).Dockerfile 20 | docker build --tag alexwlchan/$(1) --file $(DOCKERFILES)/$(1).Dockerfile $(DOCKERFILES) 21 | mkdir -p $(ROOT)/.docker 22 | touch $(ROOT)/.docker/$(1) 23 | 24 | docker-$(1)-build: $(ROOT)/.docker/$(1) 25 | endef 26 | 27 | 28 | $(foreach img,$(IMAGES),$(eval $(call __template,$(img))))
 -------------------------------------------------------------------------------- /quasirandom.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | """ 4 | Used to generate data that looks a bit like the real thing, but is totally 5 | meaningless. 6 | 7 | e.g. example API keys for screenshots. 8 | 9 | """ 10 | 11 | import random 12 | import string 13 | import sys 14 | 15 | 16 | def choice(xs): 17 | while True: 18 | r = random.choice(xs) 19 | if r not in {'0', 'o', 'O', '1', 'l', 'I'}: 20 | return r 21 | 22 | 23 | def _randomize_char(c): 24 | if c.isupper(): 25 | return choice(string.ascii_uppercase) 26 | elif c.islower(): 27 | return choice(string.ascii_lowercase) 28 | elif c.isnumeric(): 29 | return choice(string.digits[1:]) 30 | else: 31 | # leave punctuation, spaces etc. unchanged 32 | return c 33 | 34 | 35 | def randomize(s): 36 | return ''.join(_randomize_char(c) for c in s) 37 | 38 | 39 | if __name__ == '__main__': 40 | print(randomize(sys.argv[1])) 41 | -------------------------------------------------------------------------------- /manhattan.py: -------------------------------------------------------------------------------- 1 | import inspect 2 | import os 3 | 4 | _lines = set() 5 | 6 | 7 | class CriticalityError(Exception): 8 | pass 9 | 10 | 11 | class DemonHemisphere: 12 | """ 13 | Do not create multiple instances of this class on adjacent lines. 14 | """ 15 | def __init__(self): 16 | frame = inspect.currentframe() 17 | outer_frame = inspect.getouterframes(frame)[-1].frame 18 | current_line_no = outer_frame.f_lineno 19 | 20 | if ( 21 | current_line_no in _lines or 22 | (current_line_no - 1) in _lines 23 | or (current_line_no + 1) in _lines 24 | ): 25 | try: 26 | os.unlink(inspect.getmodule(outer_frame).__file__) 27 | except FileNotFoundError: 28 | pass 29 | raise CriticalityError("BOOM!") 30 | 31 | _lines.add(current_line_no) 32 | --------------------------------------------------------------------------------
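manhattan.py leans on frame introspection to find the source line that constructed the object. A small sketch of the same trick (not from the repo), showing how inspect exposes a caller's line number:

```python
import inspect

def caller_lineno():
    """Return the line number of whoever called this function."""
    frame = inspect.currentframe()
    # Index 0 is this frame; index 1 is the immediate caller.
    # (manhattan.py uses [-1], i.e. the outermost frame.)
    return inspect.getouterframes(frame)[1].lineno

print(caller_lineno())
```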
/repros/README.md: -------------------------------------------------------------------------------- 1 | # repros 2 | 3 | When I'm trying to reproduce or track down a bug, it's often useful to build a minimal example. 4 | This helps check it really is a bug (and not my mistake), and it's a good starting point for a bug report. 5 | 6 | This folder has some of those minimal examples, both for posterity and to give me a starting point the next time I need a minimal Scala/Ruby/whatever project. 7 | 8 | Not every example becomes a bug report! 9 | 10 | Often while trying to debug a simpler example, I discover a mistake in my own understanding. 11 | Then I don't need to file a bug report, but I keep the project around anyway. 12 | 13 | ## Bugs filed from these examples: 14 | 15 | * [rack-test/rack-test#241](https://github.com/rack-test/rack-test/issues/241) – Possible bug with rack/test's handling of String response bodies 16 | 17 | * [algesten/ureq#23](https://github.com/algesten/ureq/issues/23) – I can't send cookies with Agent::set_cookie() 18 | -------------------------------------------------------------------------------- /fun/self_deprecating_1.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | # https://twitter.com/ticky/status/1141370988820480001 5 | 6 | import warnings 7 | 8 | 9 | def self_deprecating(*args, **kwargs): 10 | print("Called self_deprecating with %r, %r" % (args, kwargs)) 11 | 12 | if not hasattr(globals()["self_deprecating"], "deprecated"): 13 | def original_self_deprecating(): 14 | pass 15 | 16 | original_self_deprecating.__code__ = self_deprecating.__code__ 17 | 18 | def new_self_deprecating(*args, **kwargs): 19 | warnings.warn( 20 | "The self_deprecating function is deprecated", 21 | DeprecationWarning) 22 | return original_self_deprecating(*args, **kwargs) 23 | 24 | globals()["self_deprecating"] = new_self_deprecating 25 | globals()["self_deprecating"].deprecated = True 26 | 27 | 28 | self_deprecating("hello", "ticky") 29 | self_deprecating("hello", "alex") 30 | -------------------------------------------------------------------------------- /services/dreamwidth/backup_comments.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | """ 4 | Usage: backup_comments.py <username> <password> 5 | 6 | A script for backing up recent and posted Dreamwidth comments. 7 | """ 8 | 9 | import datetime as dt 10 | import os 11 | import sys 12 | 13 | from _api import DreamwidthSession 14 | 15 | 16 | if __name__ == "__main__": 17 | if len(sys.argv) != 3: 18 | sys.exit(__doc__.strip().splitlines()[0]) 19 | 20 | username = sys.argv[1] 21 | password = sys.argv[2] 22 | 23 | backup_dir = os.path.join(os.environ["HOME"], "Documents", "backups", "dreamwidth_comments") 24 | os.makedirs(backup_dir, exist_ok=True) 25 | 26 | sess = DreamwidthSession(username, password) 27 | 28 | resp = sess.get("https://www.dreamwidth.org/comments/recent", params={"show": 100}) 29 | open(os.path.join(backup_dir, "recent_" + dt.datetime.now().isoformat() + ".html"), "w").write(resp.text) 30 | 31 | resp = sess.get("https://www.dreamwidth.org/comments/posted", params={"show": 100}) 32 | open(os.path.join(backup_dir, "posted_" + dt.datetime.now().isoformat() + ".html"), "w").write(resp.text) 33 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2018 Alex Chan 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 4 | 5 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 6 | 7 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 8 | -------------------------------------------------------------------------------- /repros/2019-05-concurrent-try/src/main/scala/Main.scala: -------------------------------------------------------------------------------- 1 | // Experiments for https://stackoverflow.com/q/56245298/1558022 2 | 3 | import scala.concurrent.ExecutionContext.Implicits.global 4 | import scala.concurrent.Future 5 | import scala.util.{Failure, Success, Try} 6 | 7 | object Main extends App { 8 | def greet(name: String): Try[Unit] = Try { 9 | println(s"Hello $name!") 10 | Thread.sleep(1000) 11 | println(s"Goodbye $name!") 12 | () 13 | } 14 | 15 | val x: Seq[Future[Unit]] = Seq("faythe", "grace", "heidi", "ivan", "judy").map { name => 16 | Future(name).map { 17 | greet(_) match { 18 | case Success(()) => () 19 | case Failure(err) => throw err 20 | } 21 | } 22 | } 23 | 24 | // val futures: Seq[Future[Unit]] = Seq( 25 | // Future.fromTry { greet("alice") }, 26 | // Future.fromTry { greet("bob") }, 27 | // Future.fromTry { greet("carol") }, 28 | // Future.fromTry { greet("dave") }, 29 | // Future.fromTry { greet("eve") }, 30 | // ) 31 | 32 | // 33 | // Seq("alice", "bob", "carol", "dave", "eve").map { name => 34 | // Future.fromTry { greet(name) } 35 | // } 36 | } 37 | -------------------------------------------------------------------------------- /repros/2019-05-scanamo-conditional-update/src/main/scala/Main.scala: -------------------------------------------------------------------------------- 1 | import com.amazonaws.services.dynamodbv2.AmazonDynamoDB 2 | import org.scanamo.{Scanamo, Table} 3 | import org.scanamo.auto._ 4 | import org.scanamo.syntax._ 5 | 6 | 7 | trait Named { 8 | val name: String 9 | } 10 | 11 | case class Version(number: Int) 12 | 13 | case class VersionedObject(name: String, version: Version) extends Named 14 | 15 | 16 | object Main extends Helpers with App { 17 | val client: AmazonDynamoDB = createDynamoDBClient() 18 | 19 | // Creates a table with a single hash key "name" 20 | val tableName: String = createTable(client) 21 | println(s"*** The new table is called $tableName") 22 | 23 | val table = Table[VersionedObject](tableName) 24 | 25 | val box = VersionedObject( 26 | name = "box", 27 | version = Version(number = 3) 28 | ) 29 | 30 | // Store the box in the table 31 | storeT(client, tableName, box) 32 | 33 | // Now do a conditional update. By conditioning on 34 | // 35 | // version.number < box.version.number 36 | // 37 | // this update should fail. 
38 | // 39 | val conditionalPut = 40 | table 41 | .given('version \ 'number < box.version.number) 42 | .put(box) 43 | val conditionalPutResult = Scanamo.exec(client)(conditionalPut) 44 | println(s"*** Result of putting the same box: $conditionalPutResult") 45 | } 46 | -------------------------------------------------------------------------------- /repros/2019-12-transfer-manager-upload/dependency-reduced-pom.xml: -------------------------------------------------------------------------------- 1 | <?xml version="1.0" encoding="UTF-8"?> 2 | <project xmlns="http://maven.apache.org/POM/4.0.0"> 3 | <modelVersion>4.0.0</modelVersion> 4 | <groupId>net.alexwlchan</groupId> 5 | <artifactId>transfer-manager-upload-repo</artifactId> 6 | <version>1.0.0</version> 7 | <build> 8 | <plugins> 9 | <plugin> 10 | <artifactId>maven-shade-plugin</artifactId> 11 | <version>2.1</version> 12 | <executions> 13 | <execution> 14 | <phase>package</phase> 15 | <goals> 16 | <goal>shade</goal> 17 | </goals> 18 | <configuration> 19 | <transformers> 20 | <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"> 21 | <mainClass>repro.Repro</mainClass> 22 | </transformer> 23 | </transformers> 24 | </configuration> 25 | </execution> 26 | </executions> 27 | </plugin> 28 | </plugins> 29 | </build> 30 | <properties> 31 | <maven.compiler.source>1.8</maven.compiler.source> 32 | <maven.compiler.target>1.8</maven.compiler.target> 33 | </properties> 34 | </project> 35 | 36 | -------------------------------------------------------------------------------- /fun/self_deprecating_2.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | # https://twitter.com/ticky/status/1141370988820480001 5 | 6 | import inspect 7 | 8 | 9 | def self_deprecating(*args, **kwargs): 10 | print("Called self_deprecating with %r, %r" % (args, kwargs)) 11 | 12 | with open(__file__) as self_file: 13 | source_lines = self_file.readlines() 14 | 15 | name_of_this_function = inspect.stack()[0].function 16 | 17 | matching_lines = [ 18 | idx 19 | for idx, line in enumerate(source_lines) 20 | if line.startswith("def %s(" % name_of_this_function) 21 | ] 22 | assert len(matching_lines) == 1 23 | first_line_of_function = matching_lines[0] 24 | 25 | if "warnings.warn" in source_lines[first_line_of_function + 1].strip(): 26 | return 27 | else: 28 | source_lines.insert( 29 | first_line_of_function + 1, 30 | ' import warnings; ' 31 | 'warnings.warn(' 32 | '"The self_deprecating function is deprecated", DeprecationWarning)\n' 33 | ) 34 | 35 | with open(__file__, "w") as self_file: 36 | self_file.write("".join(source_lines)) 37 | 38 | 39 | if __name__ == "__main__": 40 | self_deprecating("hello", "ticky") 41 | import self_deprecating_2 as s 42 | self_deprecating = s.self_deprecating 43 | self_deprecating("hello", "alex") 44 | -------------------------------------------------------------------------------- /services/youtube/get_automated_youtube_transcript.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | import contextlib 5 | import os 6 | import subprocess 7 | import sys 8 | import tempfile 9 | 10 | 11 | @contextlib.contextmanager 12 | def working_directory(path): 13 | prev_cwd = os.getcwd() 14 | os.chdir(path) 15 | yield 16 | os.chdir(prev_cwd) 17 | 18 | 19 | try: 20 | youtube_url = sys.argv[1] 21 | except IndexError: 22 | sys.exit('Usage: %s <url>' % __file__) 23 | 24 | 25 | with working_directory(tempfile.mkdtemp()): 26 | subprocess.check_call([ 27 | 'youtube-dl', 28 | '--write-auto-sub', '--skip-download', 29 | 30 | # This option is busted because I'm using --skip-download. 31 | # In an ideal world, I'd download direct to srt, because there are 32 | # really good libraries for parsing srt in Python, but not vtt.
33 | # 34 | # Bug report: https://github.com/rg3/youtube-dl/issues/9073 35 | # '--convert-subs', 'srt', 36 | 37 | youtube_url 38 | ]) 39 | 40 | # Check we have exactly one file saved, and it's the auto-generated .vtt subtitle file. 41 | assert len(os.listdir('.')) == 1 42 | assert os.listdir('.')[0].endswith('.vtt') 43 | 44 | vtt_path = os.listdir('.')[0] 45 | srt_path = vtt_path.replace('.vtt', '.srt') 46 | 47 | subprocess.check_call(['ffmpeg', '-i', vtt_path, srt_path]) 48 | -------------------------------------------------------------------------------- /urllib3.html: -------------------------------------------------------------------------------- 1 | Experimenting with urllib3 logo ideas, per urllib3/urllib3#1597. [The rest of the file was an HTML/SVG logo sketch; its markup is lost, and only stray text nodes ("u", "r", "b", "3") survive.] -------------------------------------------------------------------------------- /pathscripts/concatgpx: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Based on https://gis.stackexchange.com/q/231341/26054 3 | 4 | import datetime 5 | import sys 6 | 7 | import gpxpy 8 | import gpxpy.gpx 9 | import tqdm 10 | 11 | 12 | if __name__ == '__main__': 13 | print("Apple Health routes/") 14 | print("/Date Time Latitude Longitude Elevation") 15 | for path in tqdm.tqdm(sys.argv[1:]): 16 | 17 | with open(path) as infile: 18 | gpx = gpxpy.parse(infile) 19 | 20 | for track in gpx.tracks: 21 | for segment in track.segments: 22 | for point in segment.points: 23 | print(f"{point.time} {point.latitude} {point.longitude} {point.elevation}") 24 | 25 | # print(path) 26 | 27 | # # Parsing an existing file: 28 | # # ------------------------- 29 | # gpx_file = open(r'C:\Current.gpx', 'r') 30 | # 31 | # gpx = gpxpy.parse(gpx_file) 32 | # with open(r'C:\Projects\mytrack.txt', 'w') as f: 33 | # f.write('bla bla') 34 | # f.write('/\n') 35 | # f.write('/Date Time Latitude Longitude Elevation\n') 36 | # 37 | # for track in gpx.tracks: 38 | # track.adjust_time(datetime.timedelta(hours=-6)) 39 | # for segment in track.segments: 40 | # for point in segment.points: 41 | # newtime = point.time - datetime.timedelta(hours=6) 42 | # f.write('{0} {1} {2} {3}\n'.format(point.time, point.latitude, point.longitude, point.elevation)) 43 | -------------------------------------------------------------------------------- /services/dreamwidth/backup_posts.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | """ 4 | Usage: backup_posts.py <username> <password> 5 | 6 | A script for backing up Dreamwidth posts using the XML-RPC API.
7 | 8 | """ 9 | 10 | import json 11 | import os 12 | import sys 13 | 14 | from _api import BinaryEncoder, DreamwidthAPI 15 | 16 | 17 | if __name__ == "__main__": 18 | if len(sys.argv) != 3: 19 | sys.exit(__doc__.strip().splitlines()[0]) 20 | 21 | username = sys.argv[1] 22 | password = sys.argv[2] 23 | 24 | backup_dir = os.path.join(os.environ["HOME"], "Documents", "backups", "dreamwidth") 25 | json_backup_dir = os.path.join(backup_dir, "json") 26 | html_backup_dir = os.path.join(backup_dir, "html") 27 | 28 | os.makedirs(json_backup_dir, exist_ok=True) 29 | os.makedirs(html_backup_dir, exist_ok=True) 30 | 31 | api = DreamwidthAPI(username, password) 32 | 33 | for post in api.get_all_posts(): 34 | post_id = post["itemid"] 35 | json_path = os.path.join(json_backup_dir, f"{post_id}.json") 36 | 37 | if os.path.exists(json_path): 38 | break 39 | 40 | html_path = os.path.join(html_backup_dir, f"{post_id}.html") 41 | try: 42 | open(html_path, "w").write(post["event"]) 43 | except TypeError: 44 | open(html_path, "wb").write(post["event"].data) 45 | 46 | json_string = json.dumps(post, indent=2, sort_keys=True, cls=BinaryEncoder) 47 | open(json_path, "w").write(json_string) 48 | -------------------------------------------------------------------------------- /adventofcode/2018/day1.rb: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env ruby 2 | 3 | require "set" 4 | require "test/unit" 5 | 6 | require_relative "./helpers" 7 | 8 | 9 | def calculate_frequency(changes) 10 | changes 11 | .map { |s| s.to_i } 12 | .inject(:+) 13 | end 14 | 15 | 16 | def find_first_duplicate_frequency(changes) 17 | current_freq = 0 18 | already_seen = Set.new([0]) 19 | while true 20 | changes 21 | .map { |s| s.to_i } 22 | .each { |freq| 23 | current_freq += freq 24 | if already_seen.include? 
current_freq 25 | return current_freq 26 | else 27 | already_seen.add(current_freq) 28 | end 29 | } 30 | end 31 | end 32 | 33 | 34 | class TestDay1 < Test::Unit::TestCase 35 | def test_examples 36 | assert_equal calculate_frequency("+1, +1, +1".split(", ")), 3 37 | assert_equal calculate_frequency("+1, +1, -2".split(", ")), 0 38 | assert_equal calculate_frequency("-1, -2, -3".split(", ")), -6 39 | 40 | assert_equal find_first_duplicate_frequency("+1, -1".split(", ")), 0 41 | assert_equal find_first_duplicate_frequency("+3, +3, +4, -2, -4".split(", ")), 10 42 | assert_equal find_first_duplicate_frequency("-6, +3, +8, +5, -6".split(", ")), 5 43 | assert_equal find_first_duplicate_frequency("+7, +7, -2, -7, -4".split(", ")), 14 44 | end 45 | end 46 | 47 | 48 | if __FILE__ == $0 49 | input = File.read("1.txt").split("\n") 50 | answer1 = calculate_frequency(input) 51 | answer2 = find_first_duplicate_frequency(input) 52 | solution("1", "Chronal Calibration", answer1, answer2) 53 | end 54 | -------------------------------------------------------------------------------- /aws/print_ssm_name_tree.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | import boto3 5 | 6 | 7 | def all_parameters(ssm_client): 8 | paginator = ssm_client.get_paginator("describe_parameters") 9 | 10 | for page in paginator.paginate(): 11 | yield from page["Parameters"] 12 | 13 | 14 | def pprint_nested_tree(tree, is_root=True): 15 | lines = [] 16 | 17 | if is_root: 18 | lines.append(".") 19 | 20 | entries = sorted(tree.items()) 21 | 22 | for i, (key, nested_tree) in enumerate(entries, start=1): 23 | if i == len(entries): 24 | lines.append("└── " + key) 25 | lines.extend([ 26 | " " + l for l in pprint_nested_tree(nested_tree, is_root=False) 27 | ]) 28 | else: 29 | lines.append("├── " + key) 30 | lines.extend([ 31 | "│ " + l for l in pprint_nested_tree(nested_tree, is_root=False) 32 | ]) 33 | 34 | return lines 35 | 36 | 37 | if __name__ == "__main__": 38 | ssm_client = boto3.client("ssm") 39 | 40 | all_names = [param["Name"] for param in all_parameters(ssm_client)] 41 | 42 | tree = {} 43 | 44 | for name in all_names: 45 | d = tree 46 | for component in name.strip("/").split("/"): 47 | try: 48 | d = d[component] 49 | except KeyError: 50 | d[component] = {} 51 | d = d[component] 52 | 53 | print("\n".join(pprint_nested_tree(tree))) 54 | --------------------------------------------------------------------------------
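A worked example of the tree-building loop in print_ssm_name_tree.py, with dict.setdefault standing in for the script's try/except (the parameter names are made up):

```python
names = ["/app/api_key", "/app/db/password", "/app/db/user"]

tree = {}
for name in names:
    d = tree
    for component in name.strip("/").split("/"):
        d = d.setdefault(component, {})

# tree == {"app": {"api_key": {}, "db": {"password": {}, "user": {}}}}
# pprint_nested_tree(tree) then renders:
#
# .
# └── app
#     ├── api_key
#     └── db
#         ├── password
#         └── user
```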
| if scala_src[:closing]: 42 | print(" " * indent + scala_src[:closing]) 43 | indent -= 2 44 | print(" " * indent + ")", end="") 45 | scala_src = scala_src[closing + 1:] 46 | 47 | try: 48 | if scala_src[0] == ",": 49 | print(",", end="") 50 | scala_src = scala_src[1:] 51 | except IndexError: 52 | pass 53 | 54 | print() 55 | -------------------------------------------------------------------------------- /services/kindle/kindle_dedrm/README.txt: -------------------------------------------------------------------------------- 1 | Vendored version of https://github.com/ch33s3w0rm/kindle_dedrm 2 | 3 | Taken at commit 7773a22 4 | 5 | ==== 6 | 7 | kindle_dedrm: Python 2.x script for removing DRM from Kindle and .prc e-books 8 | 9 | To use, install Python 2.x (recommended: 2.7, 2.6 or 2.5), download the 10 | kindle_dedrm script from here: 11 | https://github.com/ch33s3w0rm/kindle_dedrm/raw/master/release/kindle_dedrm 12 | , and run it in the command-line. There is no graphical user interface. 13 | 14 | Compatible with Linux, Mac OS X, Windows and any operating system Python 2.x 15 | runs on. 16 | 17 | For maximum speed, use Python 2.7, 2.6, 2.5 or 2.4 (the latter with the 18 | ctypes extension installed) on Linux, Mac OS X or Windows, on i386 (x86) or 19 | amd64 (x86_64). You also get maximum speed with Python 2.4 on Linux i386, 20 | because the dl module is used. 21 | 22 | To use it with Python 2.5 or 2.4 or Windows, you have to unzip the 23 | kindle_dedrm script first. 24 | 25 | Based on tools_v5.3.1 by Apprentice Alf: 26 | http://apprenticealf.wordpress.com/2012/09/10/drm-removal-tools-for-ebooks/ 27 | 28 | FAQ 29 | ~~~ 30 | Q1. Why not use the K4MobiDeDRM Calibre plugin (part of the tools archive 31 | http://apprenticealf.wordpress.com/2012/09/10/drm-removal-tools-for-ebooks/ 32 | ) instead? 33 | 34 | A1. Use that if you like it. The most important differences are: 35 | kindle_dedrm is a command-line tool supporting fully automated batch 36 | operation, and it has much fewer dependencies than K4MobiDeDRM, i.e. it 37 | can run on systems to which it is not feasible to install Calibre. 38 | 39 | __EOF__ 40 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # junkdrawer 2 | 3 | This repo is a collection of things that don't have a better place to go. 4 | Dotfiles, scripts, Dockerfiles, config, and so on. 5 | 6 | The name comes from [@betsythemuffin](https://twitter.com/betsythemuffin/status/1003313844108824584): 7 | 8 | > Underrated programming techniques: name something (a step definitions file, a class, a directory, whatever....) "junk drawer." 9 | > 10 | > Not "util" or something else that kind-of means "junk drawer" but pretends to dignity. 11 | > 12 | > Be bold. Admit what you're doing. Do it with joy. 13 | 14 | **Nothing in this repo is documented or supported.** 15 | 16 | It's a collection of stuff that's useful for me, but not necessarily anybody else. 17 | If I thought something was useful enough to be worth documenting or supporting properly, I'd put it in its own repo. 18 | 19 | Pick through what you find here, but don't be surprised if it's hard to follow or seems odd. 20 | It made sense to me when I wrote it (probably), that's it. 
21 | 22 | ## No longer in this repo 23 | 24 | Some of what used to be in this junk drawer has "graduated" into its own, standalone repositories: 25 | 26 | - [alexwlchan/fishconfig](https://github.com/alexwlchan/fishconfig) contains my shell config files, including my prompt 27 | - [alexwlchan/pathscripts](https://github.com/alexwlchan/pathscripts) contains a bunch of personal scripts I keep in my $PATH, many of which started in this repo 28 | - [alexwlchan/lorenz-wheels](https://github.com/alexwlchan/lorenz-wheels) contains some illustrations of Lorenz cipher machine wheels, which I originally called "Colossus wheels" 29 | 30 | ## License 31 | 32 | MIT. 33 | -------------------------------------------------------------------------------- /aws/get_aws_icons.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | import contextlib 5 | import os 6 | import tempfile 7 | from urllib.request import urlretrieve 8 | import zipfile 9 | 10 | 11 | # https://aws.amazon.com/architecture/icons/ 12 | ZIP_URL = ( 13 | 'https://s3-us-west-2.amazonaws.com/awswebanddesign/Architecture+Icons/' 14 | 'AWS-Arch-Icon-Sets_Feb-18/PNG%2C+SVG%2C+EPS_18.02.22.zip' 15 | ) 16 | 17 | 18 | @contextlib.contextmanager 19 | def working_directory(path): 20 | prev_cwd = os.getcwd() 21 | os.chdir(path) 22 | try: 23 | yield 24 | finally: 25 | os.chdir(prev_cwd) 26 | 27 | 28 | tmpdir = tempfile.mkdtemp() 29 | 30 | with working_directory(tempfile.mkdtemp()): 31 | urlretrieve(ZIP_URL, 'icons.zip') 32 | PATH = os.path.join(os.getcwd(), 'icons.zip') 33 | 34 | 35 | with zipfile.ZipFile(PATH) as zf: 36 | for zip_info in zf.infolist(): 37 | filename = zip_info.filename 38 | if filename.startswith(('.', '__MACOSX')): 39 | continue 40 | 41 | if not filename.endswith('_LARGE.png'): 42 | continue 43 | 44 | if ('GRAYSCALE_' in filename) or ('GRAYSCALE-' in filename): 45 | continue 46 | 47 | out_name = os.path.basename(filename.replace('_LARGE', '')) 48 | out_dirname, out_filename = out_name.split('_', 1) 49 | 50 | if out_filename.startswith('Amazon'): 51 | out_filename = out_filename[len('Amazon'):] 52 | if out_filename.startswith('AWS'): 53 | out_filename = out_filename[len('AWS'):] 54 | 55 | out_dir = os.path.join('aws_icons', out_dirname) 56 | os.makedirs(out_dir, exist_ok=True) 57 | 58 | path = zf.extract(zip_info, path=tmpdir) 59 | os.rename(path, os.path.join(out_dir, out_filename)) 60 | -------------------------------------------------------------------------------- /repros/2019-02-rack-test-bug/README.md: -------------------------------------------------------------------------------- 1 | Consider the following server and test: 2 | 3 | ```ruby 4 | # server.rb 5 | require 'rack' 6 | 7 | class ExampleServer 8 | def call(env) 9 | ["200", {}, "hello world"] 10 | end 11 | end 12 | 13 | 14 | if __FILE__ == $0 15 | app = Rack::Builder.new do 16 | use Rack::Reloader 17 | run ExampleServer.new 18 | end.to_app 19 | 20 | Rack::Server.start(app: app, Port: 8282, Host: "0.0.0.0") 21 | end 22 | ``` 23 | 24 | ```ruby 25 | # test.rb 26 | require 'minitest/autorun' 27 | require 'rack/test' 28 | 29 | require_relative './server' 30 | 31 | class ExampleServerTest < Minitest::Test 32 | include Rack::Test::Methods 33 | 34 | def app 35 | ExampleServer.new 36 | end 37 | 38 | def test_get 39 | get '/foo' 40 | assert_equal 200, last_response.status 41 | assert_equal "hello world", last_response.body 42 | end 43 | end 44 | ``` 45 | 46 | There's a bug in `server.rb` – the 
`"hello world"` should be wrapped in an array, like so: 47 | 48 | ```diff 49 | - ["200", {}, "hello world"] 50 | + ["200", {}, ["hello world"]] 51 | ``` 52 | 53 | If you run the test, it passes, but running the server directly and cURLing the `/` endpoint gets a NoMethodError: 54 | 55 | ``` 56 | [2018-11-26 12:33:13] ERROR NoMethodError: undefined method `each' for "hello world":String 57 | /usr/lib/ruby/gems/2.4.0/gems/rack-2.0.6/lib/rack/handler/webrick.rb:110:in `service' 58 | /usr/lib/ruby/2.4.0/webrick/httpserver.rb:140:in `service' 59 | /usr/lib/ruby/2.4.0/webrick/httpserver.rb:96:in `run' 60 | /usr/lib/ruby/2.4.0/webrick/server.rb:308:in `block in start_thread' 61 | 172.17.0.1 - - [26/Nov/2018:12:33:13 UTC] "GET / HTTP/1.1" 500 338 62 | ``` 63 | 64 | I'd guess that rack-test is making requests in a different way to the WEBrick server, and so it's missing this issue. 65 | -------------------------------------------------------------------------------- /repros/2019-12-transfer-manager-upload/src/main/java/repro/Repro.java: -------------------------------------------------------------------------------- 1 | package repro; 2 | 3 | import com.amazonaws.services.s3.AmazonS3; 4 | import com.amazonaws.services.s3.AmazonS3ClientBuilder; 5 | import com.amazonaws.services.s3.model.ObjectMetadata; 6 | import com.amazonaws.services.s3.model.PutObjectRequest; 7 | import com.amazonaws.services.s3.model.StorageClass; 8 | 9 | import java.io.ByteArrayInputStream; 10 | import java.io.InputStream; 11 | import java.nio.charset.StandardCharsets; 12 | import java.time.format.DateTimeFormatter; 13 | import java.time.LocalDateTime; 14 | 15 | public class Repro { 16 | public static void main(String[] args) { 17 | AmazonS3 s3Client = AmazonS3ClientBuilder 18 | .standard() 19 | .withRegion("eu-west-1") 20 | .build(); 21 | 22 | DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd.HH-mm-ss"); 23 | LocalDateTime now = LocalDateTime.now(); 24 | 25 | String bucket = "wellcomecollection-storage-infra"; 26 | 27 | String putKey = "storage-class.put." + fmt.format(now); 28 | String copyKey = "storage-class.copy." + fmt.format(now); 29 | 30 | StorageClass storageClass = StorageClass.ReducedRedundancy; 31 | 32 | InputStream stream = new ByteArrayInputStream( 33 | "Hello world".getBytes(StandardCharsets.UTF_8) 34 | ); 35 | 36 | // PUT the object into S3. 
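// (Note: the PUT below hard-codes StorageClass.ReducedRedundancy in
// withStorageClass; the storageClass variable above is only used for
// the log message, and copyKey is set up for a follow-up copy request
// that doesn't appear in this file.)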
37 | 38 | System.out.println( 39 | "Uploading an object to s3://" + bucket + "/" + putKey + 40 | " with storage class " + storageClass 41 | ); 42 | 43 | PutObjectRequest putRequest = 44 | new PutObjectRequest(bucket, putKey, stream, new ObjectMetadata()) 45 | .withStorageClass(StorageClass.ReducedRedundancy); 46 | 47 | s3Client.putObject(putRequest); 48 | } 49 | }
-------------------------------------------------------------------------------- /gif_slicer/turn_into_gif.py: --------------------------------------------------------------------------------
1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | """ 4 | Usage: turn_into_gif.py <path> --rows=<rows> --columns=<columns> 5 | """ 6 | 7 | import os 8 | import subprocess 9 | import tempfile 10 | 11 | import docopt 12 | from PIL import Image 13 | 14 | 15 | def crop_areas(im, *, rows, columns): 16 | segment_width = im.width / columns 17 | segment_height = im.height / rows 18 | for row_idx in range(rows): 19 | for col_idx in range(columns): 20 | yield ( 21 | col_idx * segment_width, 22 | row_idx * segment_height, 23 | (col_idx + 1) * segment_width, 24 | (row_idx + 1) * segment_height 25 | ) 26 | 27 | 28 | def create_frames(im, **kwargs): 29 | tmp_dir = tempfile.mkdtemp() 30 | 31 | areas = crop_areas(im, **kwargs) 32 | 33 | for idx, area in enumerate(areas): 34 | frame = im.crop(area) 35 | frame_path = os.path.join(tmp_dir, 'frame%d.jpg' % idx) 36 | frame.save(frame_path) 37 | yield frame_path 38 | 39 | 40 | def create_gif(path, row_count, column_count): 41 | assert path.endswith(".jpg") 42 | im = Image.open(path) 43 | 44 | cmd = ["convert"] 45 | 46 | for frame in create_frames(im, rows=row_count, columns=column_count): 47 | cmd.append(frame) 48 | 49 | gif_path = path.replace(".jpg", ".gif") 50 | cmd.append(gif_path) 51 | subprocess.check_call(cmd) 52 | 53 | return gif_path 54 | 55 | 56 | if __name__ == "__main__": 57 | args = docopt.docopt(__doc__) 58 | 59 | path = args["<path>"] 60 | row_count = int(args["--rows"]) 61 | column_count = int(args["--columns"]) 62 | 63 | gif_path = create_gif( 64 | path=path, 65 | row_count=row_count, 66 | column_count=column_count 67 | ) 68 | 69 | print(gif_path) 70 |
-------------------------------------------------------------------------------- /aws/find_sns_topics_with_no_subscription.go: --------------------------------------------------------------------------------
1 | // I've had issues at work where SNS topics can get separated from their 2 | // subscriptions, and then stuff sent to that topic silently disappears. 3 | // Sadness. 4 | // 5 | // This script prints the ARN of every topic in SNS which doesn't have 6 | // any subscriptions. Useful for debugging this problem!
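// The approach: one paginated ListTopics pass seeds a map of
// topicArn -> 0, then one paginated ListSubscriptions pass increments
// the count for each subscription's topic ARN, and anything still at
// zero is printed. A subscription whose topic has been deleted just
// adds its own map entry with a non-zero count, so it never gets printed.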
7 | // 8 | 9 | package main 10 | 11 | import ( 12 | "fmt" 13 | "github.com/aws/aws-sdk-go/aws/session" 14 | "github.com/aws/aws-sdk-go/service/sns" 15 | "os" 16 | ) 17 | 18 | func main() { 19 | sess := session.Must(session.NewSession()) 20 | snsClient := sns.New(sess) 21 | 22 | subscriptionCountsByTopicArn := make(map[string]int) 23 | 24 | listTopicsParams := &sns.ListTopicsInput{} 25 | listTopicsErr := snsClient.ListTopicsPages( 26 | listTopicsParams, 27 | func(page *sns.ListTopicsOutput, lastPage bool) bool { 28 | for _, topic := range page.Topics { 29 | subscriptionCountsByTopicArn[*topic.TopicArn] = 0 30 | } 31 | return true 32 | }) 33 | 34 | if listTopicsErr != nil { 35 | fmt.Printf("Error describing topics: %v\n", listTopicsErr) 36 | os.Exit(1) 37 | } 38 | 39 | listSubscriptionParams := &sns.ListSubscriptionsInput{} 40 | listSubscriptionsErr := snsClient.ListSubscriptionsPages( 41 | listSubscriptionParams, 42 | func(page *sns.ListSubscriptionsOutput, lastPage bool) bool { 43 | for _, subscription := range page.Subscriptions { 44 | subscriptionCountsByTopicArn[*subscription.TopicArn] += 1 45 | } 46 | return true 47 | }) 48 | 49 | if listSubscriptionsErr != nil { 50 | fmt.Printf("Error describing subscriptions: %v\n", listSubscriptionsErr) 51 | os.Exit(1) 52 | } 53 | 54 | for topicArn, subscriptionCount := range subscriptionCountsByTopicArn { 55 | if subscriptionCount == 0 { 56 | fmt.Println(topicArn) 57 | } 58 | } 59 | } 60 |
-------------------------------------------------------------------------------- /repros/2019-12-transfer-manager-upload/pom.xml: --------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?> 2 | <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 3 | xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> 4 | <modelVersion>4.0.0</modelVersion> 5 | 6 | <groupId>net.alexwlchan</groupId> 7 | <artifactId>transfer-manager-upload-repo</artifactId> 8 | <packaging>jar</packaging> 9 | <version>1.0.0</version> 10 | 11 | <properties> 12 | <maven.compiler.source>1.8</maven.compiler.source> 13 | <maven.compiler.target>1.8</maven.compiler.target> 14 | </properties> 15 | 16 | <dependencies> 17 | <dependency> 18 | <groupId>com.amazonaws</groupId> 19 | <artifactId>aws-java-sdk-s3</artifactId> 20 | <version>1.11.693</version> 21 | </dependency> 22 | </dependencies> 23 | 24 | <build> 25 | <plugins> 26 | <plugin> 27 | <groupId>org.apache.maven.plugins</groupId> 28 | <artifactId>maven-shade-plugin</artifactId> 29 | <version>2.1</version> 30 | <executions> 31 | <execution> 32 | <phase>package</phase> 33 | <goals> 34 | <goal>shade</goal> 35 | </goals> 36 | <configuration> 37 | <transformers> 38 | <transformer 39 | implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"> 40 | <mainClass>repro.Repro</mainClass> 41 | </transformer> 42 | </transformers> 43 | </configuration> 44 | </execution> 45 | </executions> 46 | </plugin> 47 | </plugins> 48 | </build> 49 | </project> 50 |
-------------------------------------------------------------------------------- /services/dreamwidth/markdownify.py: --------------------------------------------------------------------------------
1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | """ 4 | Convert a Markdown file into HTML suitable for use on Dreamwidth. 5 | 6 | Installation: 7 | 8 | 1. Copy the script 9 | 2. $ pip install markdown2 10 | 11 | Usage: 12 | 13 | $ python markdownify.py <file> 14 | 15 | Notes: 16 | 17 | - You can use @name in the Markdown file to get a <user name=name> link. 18 | 19 | """ 20 | 21 | import re 22 | import sys 23 | 24 | import markdown2 25 | 26 | 27 | # Username restrictions (https://www.dreamwidth.org/support/faqbrowse?faqid=64): 28 | # 29 | # 25 or fewer characters with letters, numbers, and hyphens (-) only, with the 30 | # first and last characters of the username being letters and numbers only 31 | # 32 | # Experimenting with "Rename Journal" suggests that single-char usernames 33 | # are reserved. 34 | # 35 | # The regex doesn't check first/last characters because it's my own fault if 36 | # I put that in a tag. 37 | # 38 | USER_TAG = re.compile(r"@(?P<name>[A-Za-z0-9-_]{1,25})") 39 | 40 | 41 | def md_to_dreamwidth_html(md_src): 42 | # Convert the Markdown to HTML 43 | html = markdown2.markdown(md_src, extras=["smarty-pants"]) 44 | 45 | # The HTML includes linebreaks, which the Dreamwidth editor interprets as 46 | # "add a linebreak to the post".
Ditch those by compressing all the newlines. 47 | # Because I use semantic linebreaks, add a space between lines -- 48 | # just not a newline. 49 | no_newlines_html = " ".join(html.splitlines()) 50 | 51 | # Close up opening/closing </p><p> tags 52 | compact_html = no_newlines_html.replace("</p> <p>", "</p><p>") 53 | 54 | # Render any "@name" as <user name=name> tags 55 | with_name_tags_html = USER_TAG.sub(r'<user name=\g<name>>', compact_html) 56 | 57 | return with_name_tags_html 58 | 59 | 60 | if __name__ == "__main__": 61 | try: 62 | path = sys.argv[1] 63 | except IndexError: 64 | sys.exit("Usage: %s <file>" % __file__) 65 | 66 | md_src = open(path).read() 67 | print(md_to_dreamwidth_html(md_src)) 68 |
-------------------------------------------------------------------------------- /aws/issue_temporary_credentials.py: --------------------------------------------------------------------------------
1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | import configparser 5 | import os 6 | import sys 7 | 8 | import boto3 9 | import click 10 | 11 | 12 | def get_credentials(*, account_id, role_name): 13 | iam_client = boto3.client("iam") 14 | sts_client = boto3.client("sts") 15 | 16 | username = iam_client.get_user()["User"]["UserName"] 17 | 18 | role_arn = f"arn:aws:iam::{account_id}:role/{role_name}" 19 | role_session_name = f"{username}@{role_name}.{account_id}" 20 | 21 | resp = sts_client.assume_role( 22 | RoleArn=role_arn, 23 | RoleSessionName=role_session_name 24 | ) 25 | 26 | return resp["Credentials"] 27 | 28 | 29 | def update_credentials_file(*, profile_name, credentials): 30 | aws_dir = os.path.join(os.environ["HOME"], ".aws") 31 | 32 | credentials_path = os.path.join(aws_dir, "credentials") 33 | config = configparser.ConfigParser() 34 | config.read(credentials_path) 35 | 36 | if profile_name not in config.sections(): 37 | config.add_section(profile_name) 38 | 39 | assert profile_name in config.sections() 40 | 41 | config[profile_name]["aws_access_key_id"] = credentials["AccessKeyId"] 42 | config[profile_name]["aws_secret_access_key"] = credentials["SecretAccessKey"] 43 | config[profile_name]["aws_session_token"] = credentials["SessionToken"] 44 | 45 | config.write(open(credentials_path, "w"), space_around_delimiters=False) 46 | 47 | 48 | @click.command() 49 | @click.option("--account_id", required=True) 50 | @click.option("--role_name", required=True) 51 | @click.option("--profile_name") 52 | def save_assumed_role_credentials(account_id, role_name, profile_name): 53 | if profile_name is None: 54 | profile_name = account_id 55 | 56 | credentials = get_credentials( 57 | account_id=account_id, 58 | role_name=role_name 59 | ) 60 | 61 | update_credentials_file(profile_name=profile_name, credentials=credentials) 62 | 63 | 64 | if __name__ == "__main__": 65 | save_assumed_role_credentials() 66 |
-------------------------------------------------------------------------------- /hide_all_blue_ticks.js: --------------------------------------------------------------------------------
1 | function dimTheBlueTicks() { 2 | var articles = document.getElementsByTagName("article"); 3 | 4 | // Basic gist: on the Twitter timeline, individual tweets are

<article>s, 5 | // and quoted tweets are <div>
's with role="blockquote". 6 | // 7 | // For each tweet: 8 | // 9 | // - How many verified users are in the overall tweet? 10 | // - How many verified users are in the quoted tweet (if any)? 11 | // 12 | // If the only verified user is in the quoted tweet, just dim that; otherwise 13 | // dim the whole tweet. 14 | for (var i = 0; i < articles.length; i++) { 15 | var verifiedCount = 0; 16 | var svgs = articles[i].getElementsByTagName("svg"); 17 | for (var j = 0; j < svgs.length; j++) { 18 | if (svgs[j].ariaLabel == "Verified account") { 19 | verifiedCount += 1; 20 | } 21 | } 22 | 23 | var quotedVerifiedCount = 0; 24 | 25 | if (articles[i].querySelectorAll('[role="blockquote"]').length > 0) { 26 | var blockquote = articles[i].querySelectorAll('[role="blockquote"]')[0]; 27 | var blockquoteSvgs = blockquote.getElementsByTagName("svg"); 28 | for (var k = 0; k < blockquoteSvgs.length; k++) { 29 | if (blockquoteSvgs[k].ariaLabel == "Verified account") { 30 | quotedVerifiedCount += 1; 31 | } 32 | } 33 | } 34 | 35 | if (quotedVerifiedCount >= 1 && verifiedCount == 1) { 36 | // Dim the inner element rather than the whole tweet, so you still 37 | // get a visible border around the whole quoted tweet. 38 | blockquote.getElementsByTagName("div")[0].style.opacity = 0.15; 39 | } else if (verifiedCount >= 1) { 40 | articles[i].style.opacity = 0.15; 41 | } 42 | } 43 | 44 | // Run repeatedly -- the timeline has infinite scroll, so we need to dim 45 | // newly-loaded tweets as necessary. 46 | setTimeout(function() { dimTheBlueTicks() }, 100); 47 | } 48 | 49 | dimTheBlueTicks(); 50 | -------------------------------------------------------------------------------- /adventofcode/2018/day2.rb: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env ruby 2 | 3 | require "test/unit" 4 | 5 | require_relative "./helpers" 6 | 7 | 8 | def calculate_letter_frequency(s) 9 | tally = Hash.new(0) 10 | s.each_char { |c| tally[c] += 1 } 11 | tally 12 | end 13 | 14 | 15 | def calculate_checksum(strings) 16 | frequencies = strings 17 | .map { |s| calculate_letter_frequency(s) } 18 | .map { |t| t.values } 19 | 20 | double_letters = frequencies.select { |f| f.include? 2 }.size 21 | triple_letters = frequencies.select { |f| f.include? 3 }.size 22 | 23 | double_letters * triple_letters 24 | end 25 | 26 | 27 | def find_differing_strings(strings) 28 | # First check all the strings are the same length 29 | s_length = strings[0].length 30 | strings.each { |s| raise "Inconsistent lengths" unless s.length == s_length } 31 | 32 | # Now we iterate through each index. To find similar strings, we replace the 33 | # nth character with ?, and stuff them in a counter. If any string appears twice 34 | # in the counter, we know we've found the correct string. 35 | (0..s_length).map { |index| 36 | counter = Hash.new(0) 37 | strings 38 | .map { |s| 39 | t = s.dup 40 | t[index] = "?" 
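# e.g. at index 2, "fghij" and "fguij" both become "fg?ij", so the
# counter sees that masked string twice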
41 | t 42 | } 43 | .map { |s| counter[s] += 1} 44 | if counter.values.max == 2 45 | counter.map { |k, v| 46 | if v >= 2 47 | return k.sub("?", "") 48 | end 49 | } 50 | end 51 | } 52 | end 53 | 54 | 55 | class TestDay2 < Test::Unit::TestCase 56 | def test_examples 57 | assert_equal calculate_checksum([ 58 | "abcdef", 59 | "bababc", 60 | "abbcde", 61 | "abcccd", 62 | "aabcdd", 63 | "abcdee", 64 | "ababab" 65 | ]), 12 66 | 67 | assert_equal find_differing_strings([ 68 | "abcde", 69 | "fghij", 70 | "klmno", 71 | "pqrst", 72 | "fguij", 73 | "axcye", 74 | "wvxyz", 75 | ]), "fgij" 76 | end 77 | end 78 | 79 | 80 | if __FILE__ == $0 81 | input = File.read("2.txt").split("\n") 82 | answer1 = calculate_checksum(input) 83 | answer2 = find_differing_strings(input) 84 | solution("2", "Inventory Management System", answer1, answer2) 85 | end 86 | -------------------------------------------------------------------------------- /adventofcode/2018/day5.rb: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env ruby 2 | 3 | require "test/unit" 4 | 5 | require_relative "./helpers" 6 | 7 | 8 | ALPHABET = "abcdefghijklmnopqrstuvwxyz" 9 | 10 | 11 | def build_polymer_regexp 12 | # Construct a mega-regex of the form 13 | # 14 | # (aA|Aa|bB|Bb...) 15 | # 16 | regex_components = [] 17 | ALPHABET.each_char { |c| 18 | regex_components << "#{c}#{c.upcase}" 19 | regex_components << "#{c.upcase}#{c}" 20 | } 21 | 22 | Regexp.new "(#{regex_components.join("|")})" 23 | end 24 | 25 | 26 | POLYMER_REGEXP = build_polymer_regexp 27 | 28 | 29 | def react_polymer(p) 30 | curr_size = p.size 31 | 32 | # Now iterate repeatedly until the polymer `p` is no longer changing size -- 33 | # then we know we're done. 34 | # 35 | # This could be inefficient if the polymer is nested in a nasty way, e.g. 36 | # 37 | # abcdefgGFEDCBA 38 | # 39 | # Oh well, I'll live with it. 40 | while true 41 | p = p.gsub(POLYMER_REGEXP, "") 42 | if p.size == curr_size 43 | return p 44 | else 45 | curr_size = p.size 46 | end 47 | end 48 | end 49 | 50 | 51 | def find_smallest_polymer_size_with_missing_unit(p) 52 | # Start by reducing the polymer as much as possible -- removing extra units 53 | # won't affect this reduction, but gives us a smaller starting point 54 | # when we do remove units. 
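# (This should be safe: removing every copy of one unit type from an
# already-reacted polymer and reacting again gives the same result as
# removing it from the original input, because reactions only ever
# delete adjacent pairs and can't be undone.)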
55 | reduced_p = react_polymer(p) 56 | 57 | ALPHABET 58 | .each_char 59 | .map { |c| 60 | modified_p = reduced_p.gsub(c, "").gsub(c.upcase, "") 61 | react_polymer(modified_p).size 62 | } 63 | .min 64 | end 65 | 66 | 67 | class TestDay5 < Test::Unit::TestCase 68 | def test_examples 69 | assert_equal react_polymer("aA"), "" 70 | assert_equal react_polymer("abBA"), "" 71 | assert_equal react_polymer("abAB"), "abAB" 72 | assert_equal react_polymer("aabAAB"), "aabAAB" 73 | 74 | assert_equal find_smallest_polymer_size_with_missing_unit("dabAcCaCBAcCcaDA"), 4 75 | end 76 | end 77 | 78 | 79 | if __FILE__ == $0 80 | input = File.read("5.txt").strip 81 | 82 | answer1 = react_polymer(input).size 83 | answer2 = find_smallest_polymer_size_with_missing_unit(input) 84 | 85 | solution("5", "Alchemical Reduction", answer1, answer2) 86 | end 87 |
-------------------------------------------------------------------------------- /services/kindle/dedrm.py: --------------------------------------------------------------------------------
1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | import contextlib 5 | import os 6 | import pathlib 7 | import subprocess 8 | import sys 9 | 10 | 11 | @contextlib.contextmanager 12 | def tmp_home_dir(): 13 | outdir = pathlib.Path.home() / ".ebooks" 14 | outdir.mkdir() 15 | 16 | try: 17 | yield outdir 18 | finally: 19 | for f in os.listdir(outdir): 20 | (outdir / f).unlink() 21 | 22 | outdir.rmdir() 23 | 24 | 25 | def check_call(cmd): 26 | if "--verbose" in sys.argv: 27 | subprocess.check_call(cmd) 28 | else: 29 | subprocess.check_call(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) 30 | 31 | 32 | if __name__ == "__main__": 33 | try: 34 | azw_path = sys.argv[1] 35 | except IndexError: 36 | sys.exit(f"Usage: {__file__} <azw_path>") 37 | 38 | root = pathlib.Path(__file__).parent.resolve() 39 | 40 | with tmp_home_dir() as outdir: 41 | print("*** Stripping DRM from file") 42 | check_call( 43 | [ 44 | "python2.7", 45 | str(root / "kindle_dedrm" / "kindle_dedrm.py"), 46 | "--kindle=B00E 1510 1483 0G04", 47 | f"--outdir={outdir}", 48 | azw_path 49 | ] 50 | ) 51 | print("*** Successfully stripped DRM from file") 52 | 53 | out_files = os.listdir(outdir) 54 | assert len(out_files) == 1, out_files 55 | mobi_filename = pathlib.Path(out_files[0]) 56 | epub_filename = mobi_filename.with_suffix(".epub") 57 | 58 | print("*** Converting .azw file to .epub") 59 | check_call([ 60 | "docker", "run", 61 | "--volume", f"{outdir}:/ebooks", 62 | "--entrypoint", "ebook-convert", 63 | "regueiro/calibre-server", 64 | f"/ebooks/{mobi_filename}", f"/ebooks/{epub_filename}" 65 | ]) 66 | print("*** Successfully converted to .epub") 67 | 68 | check_call(["open", outdir]) 69 | 70 | title = input("What is the name of the book? ") 71 | tags = input("How do you want to tag the book? ") 72 | source_url = input("What is the source of the book?
(optional) ") 73 | 74 | # TODO: Upload to docstore 75 | -------------------------------------------------------------------------------- /repros/2019-05-scanamo-conditional-update/src/main/scala/Helpers.scala: -------------------------------------------------------------------------------- 1 | import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials} 2 | import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration 3 | import com.amazonaws.services.dynamodbv2.model._ 4 | import com.amazonaws.services.dynamodbv2.util.TableUtils.waitUntilActive 5 | import com.amazonaws.services.dynamodbv2.{AmazonDynamoDB, AmazonDynamoDBClientBuilder} 6 | import org.scanamo.{DynamoFormat, Scanamo, Table} 7 | import org.scanamo.syntax._ 8 | 9 | import scala.util.Random 10 | 11 | trait Helpers { 12 | def createDynamoDBClient(): AmazonDynamoDB = 13 | AmazonDynamoDBClientBuilder.standard 14 | .withCredentials( 15 | new AWSStaticCredentialsProvider( 16 | new BasicAWSCredentials("accessKey", "secretKey") 17 | ) 18 | ) 19 | .withEndpointConfiguration( 20 | new EndpointConfiguration("http://localhost:8000", "localhost") 21 | ) 22 | .build() 23 | 24 | def createTable(client: AmazonDynamoDB): String = { 25 | val name = Random.alphanumeric.take(10) mkString 26 | 27 | client.createTable( 28 | new CreateTableRequest() 29 | .withTableName(name) 30 | .withKeySchema(new KeySchemaElement() 31 | .withAttributeName("name") 32 | .withKeyType(KeyType.HASH)) 33 | .withAttributeDefinitions( 34 | new AttributeDefinition() 35 | .withAttributeName("name") 36 | .withAttributeType("S") 37 | ) 38 | .withProvisionedThroughput(new ProvisionedThroughput() 39 | .withReadCapacityUnits(1L) 40 | .withWriteCapacityUnits(1L) 41 | ) 42 | ) 43 | 44 | waitUntilActive(client, name) 45 | 46 | name 47 | } 48 | 49 | def storeT[T <: Named]( 50 | client: AmazonDynamoDB, 51 | tableName: String, 52 | t: T)( 53 | implicit evidence: DynamoFormat[T]): Unit = { 54 | val table = Table[T](tableName) 55 | 56 | val putOp = table.put(t) 57 | val putResult = Scanamo.exec(client)(putOp) 58 | println(s"*** Result of putting t: $putResult") 59 | 60 | val getOp = table.get('name -> t.name) 61 | val getResult = Scanamo.exec(client)(getOp) 62 | val storedBox = getResult.get.right.get 63 | println(s"*** Stored value of t: $storedBox") 64 | } 65 | } 66 | -------------------------------------------------------------------------------- /repros/2019-01-jackson-elasticsearch-bug/src/main/scala/Main.scala: -------------------------------------------------------------------------------- 1 | import com.sksamuel.elastic4s.http.ElasticDsl._ 2 | import com.sksamuel.elastic4s.http.index.IndexResponse 3 | import com.sksamuel.elastic4s.http.search.SearchResponse 4 | import com.sksamuel.elastic4s.http.{ElasticClient, Response} 5 | import com.sksamuel.elastic4s.searches.SearchRequest 6 | import com.sksamuel.elastic4s.searches.sort.SortOrder 7 | import org.apache.http.auth.{AuthScope, UsernamePasswordCredentials} 8 | import org.apache.http.HttpHost 9 | import org.apache.http.impl.client.BasicCredentialsProvider 10 | import org.apache.http.impl.nio.client.HttpAsyncClientBuilder 11 | import org.elasticsearch.client.RestClient 12 | import org.elasticsearch.client.RestClientBuilder.HttpClientConfigCallback 13 | 14 | import scala.util.Random 15 | 16 | 17 | class ElasticCredentials(username: String, password: String) 18 | extends HttpClientConfigCallback { 19 | val credentials = new UsernamePasswordCredentials(username, password) 20 | val credentialsProvider = new 
BasicCredentialsProvider() 21 | credentialsProvider.setCredentials(AuthScope.ANY, credentials) 22 | 23 | override def customizeHttpClient( 24 | httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = { 25 | httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider) 26 | } 27 | } 28 | 29 | 30 | object Example extends App { 31 | val indexName = "testing_" + Random.alphanumeric.take(10).mkString.toLowerCase 32 | println(s"The index is $indexName") 33 | 34 | // Set up authentication for the Elasticsearch client. 35 | val restClient = RestClient 36 | .builder(new HttpHost("localhost", 9200, "http")) 37 | .setHttpClientConfigCallback(new ElasticCredentials("elastic", "changeme")) 38 | .build() 39 | 40 | val client: ElasticClient = ElasticClient.fromRestClient(restClient) 41 | 42 | val indexResponse: Response[IndexResponse] = 43 | client 44 | .execute { indexInto(indexName / "doctype").fields("name" -> "alex") } 45 | .await 46 | println(indexResponse) 47 | 48 | val searchRequest = 49 | search(indexName) 50 | .from(-10) 51 | 52 | println(s"@@AWLC searchRequest = ${searchRequest.show}") 53 | 54 | val response: Response[SearchResponse] = 55 | client 56 | .execute { searchRequest } 57 | .await 58 | 59 | println(response.error) 60 | 61 | client.close() 62 | } 63 |
-------------------------------------------------------------------------------- /gif_slicer/turn_catalogue_image_into_gif.py: --------------------------------------------------------------------------------
1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | """ 4 | Usage: turn_catalogue_image_into_gif.py <catalogue_id> --rows=<rows> --columns=<columns> 5 | """ 6 | 7 | import os 8 | import re 9 | import tempfile 10 | from urllib.parse import urlparse 11 | 12 | import docopt 13 | import requests 14 | 15 | from turn_into_gif import create_gif 16 | 17 | 18 | def parse_catalogue_id(arg): 19 | if arg.startswith(( 20 | "https://wellcomecollection.org/works", 21 | "https://api.wellcomecollection.org/catalogue", 22 | )): 23 | return os.path.basename(urlparse(arg).path) 24 | elif re.match(r"^[a-z0-9]{8}$", arg): 25 | return arg 26 | else: 27 | raise ValueError("Unrecognised catalogue ID: %r" % arg) 28 | 29 | 30 | def get_miro_id(catalogue_id): 31 | resp = requests.get( 32 | f"https://api.wellcomecollection.org/catalogue/v2/works/{catalogue_id}", 33 | params={"include": "identifiers"} 34 | ) 35 | resp.raise_for_status() 36 | 37 | identifiers = resp.json()["identifiers"] 38 | miro_ids = [ 39 | i 40 | for i in identifiers 41 | if i["identifierType"]["id"] == "miro-image-number" 42 | ] 43 | assert len(miro_ids) == 1 44 | return miro_ids[0]["value"] 45 | 46 | 47 | def download_url(url): 48 | out_dir = tempfile.mkdtemp() 49 | out_path = os.path.join(out_dir, os.path.basename(url)) 50 | 51 | resp = requests.get(url, stream=True) 52 | resp.raise_for_status() 53 | with open(out_path, "wb") as out_file: 54 | for chunk in resp.iter_content(chunk_size=512): 55 | if chunk: 56 | out_file.write(chunk) 57 | 58 | return out_path 59 | 60 | 61 | if __name__ == '__main__': 62 | args = docopt.docopt(__doc__) 63 | 64 | catalogue_id_arg = args["<catalogue_id>"] 65 | row_count = int(args["--rows"]) 66 | column_count = int(args["--columns"]) 67 | 68 | catalogue_id = parse_catalogue_id(catalogue_id_arg) 69 | miro_id = get_miro_id(catalogue_id) 70 | 71 | jpeg_path = download_url( 72 | f"https://iiif.wellcomecollection.org/image/{miro_id}.jpg/full/full/0/default.jpg" 73 | ) 74 | 75 | gif_path = create_gif( 76 | path=jpeg_path, 77 | row_count=row_count, 78 | column_count=column_count 79 | ) 80
| actual_gif_path = f"{miro_id}.gif" 81 | 82 | os.rename(gif_path, actual_gif_path) 83 | print(actual_gif_path) 84 | -------------------------------------------------------------------------------- /prime_factors.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | """ 4 | Some experiments in finding prime factors, inspired by reading 5 | https://golem.ph.utexas.edu/category/2019/07/the_riemann_hypothesis_says_50.html 6 | 7 | A chance to play with itertools. 8 | """ 9 | 10 | import collections 11 | import itertools 12 | 13 | 14 | def prime_factors(n): 15 | """Generate the prime factors of n.""" 16 | def n_is_divisible_by(i): 17 | return n % i == 0 18 | 19 | i = 2 20 | while i * i <= n: 21 | if n_is_divisible_by(i): 22 | n //= i 23 | yield i 24 | else: 25 | i += 1 26 | 27 | if n > 1: 28 | yield n 29 | 30 | 31 | def prod(iterable): 32 | """Multiply all the numbers in iterable together.""" 33 | result = 1 34 | for i in iterable: 35 | result *= i 36 | return result 37 | 38 | 39 | def divisors(n): 40 | """Generate the divisors of n.""" 41 | pf = prime_factors(n) 42 | 43 | # The fundamental theorem of arithmetic states that any integer has 44 | # a prime factorisation which is unique, up to ordering. 45 | # 46 | # So we can decompose any number into a product of the form 47 | # 48 | # n = p_1^k_1 * p_2^k_2 * ... p_i^k_i 49 | # 50 | # e.g. 51 | # 52 | # 2160 = 2^4 * 3^3 * 5 53 | # 54 | # First, get a list of prime factors with their multiplicity, 55 | # e.g. 2160 = 2^4 * 3^3 * 5, so this gives [(2, 4), (3, 3), (5, 1)] 56 | factors_with_multiplicity = collections.Counter(pf) 57 | 58 | # Now, get a list of powers of each prime factor that appear in 59 | # a divisor of n. 60 | # 61 | # For example, any divisor of 2160 must have one of the following 62 | # in its prime factorisation 63 | # 64 | # 2^0 = 1, 2^1 = 2, 2^2 = 4, 2^3 = 8, 2^4 = 16 65 | # 66 | # so the list of factors becomes: 67 | # 68 | # [ 69 | # [(2, 0), (2, 1), (2, 2), (2, 3), (2, 4)], 70 | # [(3, 0), (3, 1), (3, 2), (3, 3)], 71 | # [(5, 0), (5, 1)], 72 | # ] 73 | # 74 | powers = [ 75 | tuple(factor ** i for i in range(count + 1)) 76 | for factor, count in factors_with_multiplicity.items() 77 | ] 78 | 79 | # And now consider each combination of them in turn using itertools: 80 | for prime_power_set in itertools.product(*powers): 81 | yield prod(prime_power_set) 82 | 83 | 84 | for n in (27, 293, 1416, 4716): 85 | print(n, list(divisors(n))) 86 | -------------------------------------------------------------------------------- /setColoursToCellValues.gs: -------------------------------------------------------------------------------- 1 | // Macro to use in Google Sheets for changing the fill colour of a cell 2 | // to its current value. 3 | // 4 | // If you have a range of cells that contain colour values (e.g. hexadecimal colours), 5 | // select the cells and then invoke this macro -- their fill colours will be 6 | // updated to match their value! 7 | // 8 | // To use: 9 | // 10 | // 1. Inside a Google Sheets spreadsheet, select the menu bar item 11 | // Tools > Script Editor 12 | // 13 | // 2. In the Script Editor, select the menu bar item 14 | // File > New > Script file 15 | // Call it something like setColoursToCellValues.gs. 16 | // 17 | // 3. Copy/paste the code from this file into your new script file. 18 | // Select the menu bar item: 19 | // File > Save 20 | // 21 | // 4. 
Go back to your spreadsheet, select the menu bar item 22 | // Tools > Macros > Import 23 | // You'll be shown a list of files and functions. Find the one named 24 | // "setColoursToCellValues", and click "Add Function". 25 | // 26 | // 5. Select the cells you want to set the fill colour of, then select 27 | // the menu bar item 28 | // Tools > Macros > setColoursToCellValues 29 | // 30 | // See the macro in action: https://twitter.com/alexwlchan/status/1211790869097046016 31 | // 32 | 33 | function setColoursToCellValues() { 34 | var sheet = SpreadsheetApp.getActiveSpreadsheet(); 35 | 36 | // Get all the selections in the current spreadsheet. 37 | // 38 | // e.g. if you have selected the first ten rows of the first column, 39 | // this contains the range A1:A10. 40 | var ranges = sheet.getSelection().getActiveRangeList().getRanges(); 41 | 42 | for (var rangeIndex = 0; rangeIndex < ranges.length; rangeIndex++) { 43 | var currentRange = ranges[rangeIndex]; 44 | 45 | for (var row = 1; row <= currentRange.getNumRows(); row++) { 46 | for (var column = 1; column <= currentRange.getNumColumns(); column++) { 47 | 48 | // Looks for hexadecimal colour strings that are *not* preceded by 49 | // a leading hash, e.g. 00a123, ff0000, 00FFA1. 50 | // 51 | // When you call cell.setBackground(), if you pass a hex value it 52 | // wants a leading hash, so we need to add it. 53 | var regex = new RegExp("^[0-9a-fA-F]{6}$"); 54 | 55 | var cell = currentRange.getCell(row, column); 56 | 57 | var color = cell.getValue(); 58 | 59 | if (regex.exec(color) !== null) { 60 | color = "#" + color; 61 | } 62 | 63 | cell.setBackground(color); 64 | } 65 | } 66 | } 67 | } 68 | -------------------------------------------------------------------------------- /game_of_life.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 -*- 3 | """ 4 | A quick Game of Life implementation I wrote for a practice interview. 5 | 6 | Stuff that's quite neat (imo): 7 | 8 | * Emoji! 
9 | * Rather than printing a new board every time, it resets the existing board 10 | with some ASCII control characters (think \r but better) 11 | 12 | """ 13 | 14 | import collections 15 | import random 16 | import sys 17 | 18 | 19 | WIDTH = 30 20 | HEIGHT = 30 21 | 22 | 23 | class Cell(object): 24 | def __init__(self, is_alive): 25 | self.is_alive = is_alive 26 | 27 | def __repr__(self): 28 | return '%s(is_alive=%r)' % (type(self).__name__, self.is_alive) 29 | 30 | 31 | def random_grid(): 32 | grid = {} 33 | for x in range(WIDTH): 34 | for y in range(HEIGHT): 35 | grid[(x, y)] = Cell(is_alive=random.choice((True, False))) 36 | return grid 37 | 38 | 39 | def neighbours(x, y): 40 | result = [] 41 | for x_diff in (-1, 0, 1): 42 | for y_diff in (-1, 0, 1): 43 | p = (x + x_diff, y + y_diff) 44 | if p[0] < 0: 45 | p = (WIDTH - 1, p[1]) 46 | elif p[0] >= WIDTH: 47 | p = (0, p[1]) 48 | if p[1] < 0: 49 | p = (p[0], HEIGHT - 1) 50 | elif p[1] >= HEIGHT: 51 | p = (p[0], 0) 52 | if p != (x, y): 53 | result.append(p) 54 | return result 55 | 56 | 57 | def next_grid(grid): 58 | new_grid = {} 59 | for coordinate, cell in grid.items(): 60 | nghbrs = sum([grid[n].is_alive for n in neighbours(*coordinate)]) 61 | if cell.is_alive: 62 | is_alive = (nghbrs in (2, 3)) 63 | else: 64 | is_alive = (nghbrs == 3) 65 | new_grid[coordinate] = Cell(is_alive=is_alive) 66 | return new_grid 67 | 68 | 69 | def print_grid(grid): 70 | for row_idx in range(HEIGHT): 71 | row = dict((c, cell) for (c, cell) in grid.items() if c[1] == row_idx) 72 | row = [cell for c, cell in sorted(row.items(), key=lambda x: x[0][0])] 73 | row = [C if cell.is_alive else '·' for cell in row] 74 | print(' '.join(row)) 75 | 76 | 77 | while True: 78 | # C = random.choice(['🐞', '🍏', '🐱', '♠️', '💜', '💎', '🍊']) 79 | C = "X" 80 | 81 | grid = random_grid() 82 | 83 | for _ in range(1000): 84 | try: 85 | print_grid(grid) 86 | grid = next_grid(grid) 87 | 88 | import time 89 | time.sleep(0.25) 90 | print('\033[F' * (HEIGHT + 1)) 91 | except KeyboardInterrupt: 92 | sys.exit(0) 93 |
-------------------------------------------------------------------------------- /download_atv_screensavers/src/main.rs: --------------------------------------------------------------------------------
1 | extern crate docopt; 2 | 3 | extern crate reqwest; 4 | extern crate serde; 5 | extern crate serde_json; 6 | 7 | #[macro_use] 8 | extern crate serde_derive; 9 | 10 | 11 | use std::io::prelude::*; 12 | use std::io; 13 | use std::fs::{create_dir, File}; 14 | use std::path::Path; 15 | 16 | use docopt::Docopt; 17 | 18 | 19 | const USAGE: &'static str = " 20 | Download Apple TV screensavers. 21 | 22 | Usage: 23 | download_atv_screensavers --dir=<dir> 24 | download_atv_screensavers (-h | --help) 25 | 26 | Options: 27 | -h --help Show this screen. 28 | --dir=<dir> Directory to save the screensavers to.
29 | "; 30 | 31 | 32 | #[derive(Deserialize)] 33 | struct Asset { 34 | url: String, 35 | #[serde(rename = "accessibilityLabel")] accessibility_label: String, 36 | #[serde(rename = "timeOfDay")] time_of_day: String, 37 | } 38 | 39 | 40 | #[derive(Deserialize)] 41 | struct AssetCollection { 42 | assets: Vec<Asset>, 43 | } 44 | 45 | 46 | #[derive(Deserialize)] 47 | struct Args { 48 | flag_dir: String, 49 | } 50 | 51 | 52 | fn main() -> Result<(), serde_json::Error> { 53 | let download_url = "http://a1.phobos.apple.com/us/r1000/000/Features/atv/AutumnResources/videos/entries.json"; 54 | 55 | let args: Args = Docopt::new(USAGE) 56 | .and_then(|d| d.deserialize()) 57 | .unwrap_or_else(|e| e.exit()); 58 | 59 | let text = reqwest::get(download_url).unwrap() 60 | .text().unwrap(); 61 | 62 | let all_collections: Vec<AssetCollection> = serde_json::from_str(&text)?; 63 | 64 | let all_assets = all_collections.iter() 65 | .map(|collection| collection.assets.iter()) 66 | .flat_map(|it| it.clone()); 67 | 68 | let out_dir = Path::new(&args.flag_dir); 69 | match create_dir(out_dir) { 70 | Ok(_) => (), 71 | Err(_) => () 72 | }; 73 | 74 | let mut count = 0; 75 | for asset in all_assets { 76 | count += 1; 77 | print!("."); 78 | io::stdout().flush().unwrap(); 79 | 80 | let extension = Path::new(&asset.url) 81 | .extension() 82 | .unwrap() 83 | .to_str() 84 | .unwrap(); 85 | let name = format!("{:03}-{}-{}.{}", count, asset.accessibility_label, asset.time_of_day, extension); 86 | 87 | let out_path = out_dir.join(name); 88 | let mut file = File::create(out_path).unwrap(); 89 | let mut video = reqwest::get(&asset.url).unwrap(); 90 | video.copy_to(&mut file).unwrap(); 91 | } 92 | 93 | print!("\n"); 94 | println!("Downloaded {} videos to {}", count, args.flag_dir); 95 | 96 | Ok(()) 97 | } 98 |
-------------------------------------------------------------------------------- /aws/fetch_all_cloudwatch.py: --------------------------------------------------------------------------------
1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | """Print log event messages from a CloudWatch log group. 4 | 5 | Usage: print_log_events.py <group> [--start=<start>] [--end=<end>] 6 | print_log_events.py -h --help 7 | 8 | Options: 9 | <group> Name of the CloudWatch log group. 10 | --start=<start> Only print events with a timestamp after this time. 11 | --end=<end> Only print events with a timestamp before this time. 12 | -h --help Show this screen. 13 | 14 | """ 15 | 16 | import boto3 17 | import docopt 18 | import maya 19 | 20 | 21 | def get_log_events(log_group, start_time=None, end_time=None): 22 | """Generate all the log events from a CloudWatch group. 23 | 24 | :param log_group: Name of the CloudWatch log group. 25 | :param start_time: Only fetch events with a timestamp after this time. 26 | Expressed as the number of milliseconds after midnight Jan 1 1970. 27 | :param end_time: Only fetch events with a timestamp before this time. 28 | Expressed as the number of milliseconds after midnight Jan 1 1970.
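(For example, midnight UTC on 1 Jan 2019 is 1546300800000.)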
29 | 30 | """ 31 | client = boto3.client('logs') 32 | kwargs = { 33 | 'logGroupName': log_group, 34 | 'limit': 10000, 35 | } 36 | 37 | if start_time is not None: 38 | kwargs['startTime'] = start_time 39 | if end_time is not None: 40 | kwargs['endTime'] = end_time 41 | 42 | while True: 43 | resp = client.filter_log_events(**kwargs) 44 | yield from resp['events'] 45 | try: 46 | kwargs['nextToken'] = resp['nextToken'] 47 | except KeyError: 48 | break 49 | 50 | 51 | def milliseconds_since_epoch(time_string): 52 | dt = maya.when(time_string) 53 | seconds = dt.epoch 54 | return seconds * 1000 55 | 56 | 57 | if __name__ == '__main__': 58 | args = docopt.docopt(__doc__) 59 | 60 | log_group = args['<group>'] 61 | 62 | if args['--start']: 63 | try: 64 | start_time = milliseconds_since_epoch(args['--start']) 65 | except ValueError: 66 | exit(f'Invalid datetime input as --start: {args["--start"]}') 67 | else: 68 | start_time = None 69 | 70 | if args['--end']: 71 | try: 72 | end_time = milliseconds_since_epoch(args['--end']) 73 | except ValueError: 74 | exit(f'Invalid datetime input as --end: {args["--end"]}') 75 | else: 76 | end_time = None 77 | 78 | logs = get_log_events( 79 | log_group=log_group, 80 | start_time=start_time, 81 | end_time=end_time 82 | ) 83 | for event in logs: 84 | print(event['message'].rstrip()) 85 |
-------------------------------------------------------------------------------- /aws/tail_cloudwatch_events.py: --------------------------------------------------------------------------------
1 | #!/usr/bin/env python 2 | """ 3 | The Archivematica CloudWatch logs are very chatty until something breaks, at 4 | which point you get a traceback and then all output stops. 5 | 6 | This script acts like "tail" for CloudWatch -- it just shows you the last 10 seconds 7 | of events, which usually has enough detail to debug the issue. 8 | 9 | """ 10 | 11 | import datetime 12 | import sys 13 | 14 | import boto3 15 | import humanize 16 | import inquirer 17 | import iterfzf 18 | 19 | 20 | def get_log_group_names(client): 21 | paginator = client.get_paginator("describe_log_groups") 22 | 23 | for page in paginator.paginate(): 24 | for log_group in page["logGroups"]: 25 | yield log_group["logGroupName"] 26 | 27 | 28 | def get_log_streams(client, *, log_group_name): 29 | paginator = client.get_paginator("describe_log_streams") 30 | 31 | for page in paginator.paginate( 32 | logGroupName=log_group_name, orderBy="LastEventTime", descending=True 33 | ): 34 | yield from page["logStreams"] 35 | 36 | 37 | def get_last_ten_seconds_from_stream(client, *, log_group_name, log_stream): 38 | # This timestamp is milliseconds since the epoch, so to wind back 10 seconds 39 | # is 10 * 1000 milliseconds. 40 | start_time = log_stream["lastEventTimestamp"] - 10 * 1000 41 | 42 | paginator = client.get_paginator("filter_log_events") 43 | 44 | for page in paginator.paginate( 45 | logGroupName=log_group_name, 46 | logStreamNames=[log_stream["logStreamName"]], 47 | startTime=start_time, 48 | ): 49 | yield from page["events"] 50 | 51 | 52 | if __name__ == "__main__": 53 | client = boto3.client("logs") 54 | 55 | try: 56 | log_group_name = sys.argv[1] 57 | except IndexError: 58 | log_group_name = iterfzf.iterfzf( 59 | get_log_group_names(client), prompt="What log group do you want to tail?
" 60 | ) 61 | 62 | all_streams = get_log_streams(client, log_group_name=log_group_name) 63 | 64 | choices = {} 65 | for _ in range(5): 66 | stream = next(all_streams) 67 | dt = datetime.datetime.fromtimestamp(stream["lastEventTimestamp"] / 1000) 68 | name = stream["logStreamName"] 69 | choices[f"{name} ({humanize.naturaltime(dt)})"] = stream 70 | 71 | questions = [ 72 | inquirer.List( 73 | "log_stream_id", 74 | message="Which log stream would you like to tail?", 75 | choices=choices, 76 | ) 77 | ] 78 | 79 | answers = inquirer.prompt(questions) 80 | 81 | log_stream = choices[answers["log_stream_id"]] 82 | 83 | for event in get_last_ten_seconds_from_stream( 84 | client, log_group_name=log_group_name, log_stream=log_stream 85 | ): 86 | print(event["message"]) 87 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .docker 2 | .DS_Store 3 | fishd* 4 | travis_token.txt 5 | .idea 6 | 7 | # Created by https://www.gitignore.io/api/osx,rust,python 8 | 9 | ### OSX ### 10 | *.DS_Store 11 | .AppleDouble 12 | .LSOverride 13 | 14 | # Icon must end with two \r 15 | Icon 16 | 17 | # Thumbnails 18 | ._* 19 | 20 | # Files that might appear in the root of a volume 21 | .DocumentRevisions-V100 22 | .fseventsd 23 | .Spotlight-V100 24 | .TemporaryItems 25 | .Trashes 26 | .VolumeIcon.icns 27 | .com.apple.timemachine.donotpresent 28 | 29 | # Directories potentially created on remote AFP share 30 | .AppleDB 31 | .AppleDesktop 32 | Network Trash Folder 33 | Temporary Items 34 | .apdisk 35 | 36 | ### Python ### 37 | # Byte-compiled / optimized / DLL files 38 | __pycache__/ 39 | *.py[cod] 40 | *$py.class 41 | 42 | # C extensions 43 | *.so 44 | 45 | # Distribution / packaging 46 | .Python 47 | build/ 48 | develop-eggs/ 49 | dist/ 50 | downloads/ 51 | eggs/ 52 | .eggs/ 53 | lib/ 54 | lib64/ 55 | parts/ 56 | sdist/ 57 | var/ 58 | wheels/ 59 | *.egg-info/ 60 | .installed.cfg 61 | *.egg 62 | 63 | # PyInstaller 64 | # Usually these files are written by a python script from a template 65 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
66 | *.manifest 67 | *.spec 68 | 69 | # Installer logs 70 | pip-log.txt 71 | pip-delete-this-directory.txt 72 | 73 | # Unit test / coverage reports 74 | htmlcov/ 75 | .tox/ 76 | .coverage 77 | .coverage.* 78 | .cache 79 | .pytest_cache/ 80 | nosetests.xml 81 | coverage.xml 82 | *.cover 83 | .hypothesis/ 84 | 85 | # Translations 86 | *.mo 87 | *.pot 88 | 89 | # Flask stuff: 90 | instance/ 91 | .webassets-cache 92 | 93 | # Scrapy stuff: 94 | .scrapy 95 | 96 | # Sphinx documentation 97 | docs/_build/ 98 | 99 | # PyBuilder 100 | target/ 101 | 102 | # Jupyter Notebook 103 | .ipynb_checkpoints 104 | 105 | # pyenv 106 | .python-version 107 | 108 | # celery beat schedule file 109 | celerybeat-schedule.* 110 | 111 | # SageMath parsed files 112 | *.sage.py 113 | 114 | # Environments 115 | .env 116 | .venv 117 | env/ 118 | venv/ 119 | ENV/ 120 | env.bak/ 121 | venv.bak/ 122 | 123 | # Spyder project settings 124 | .spyderproject 125 | .spyproject 126 | 127 | # Rope project settings 128 | .ropeproject 129 | 130 | # mkdocs documentation 131 | /site 132 | 133 | # mypy 134 | .mypy_cache/ 135 | 136 | ### Rust ### 137 | # Generated by Cargo 138 | # will have compiled files and executables 139 | /target/ 140 | 141 | # Remove Cargo.lock from gitignore if creating an executable, leave it for libraries 142 | # More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html 143 | Cargo.lock 144 | 145 | # These are backup files generated by rustfmt 146 | **/*.rs.bk 147 | 148 | 149 | # End of https://www.gitignore.io/api/osx,rust,python 150 |
-------------------------------------------------------------------------------- /wallpapers/make_iphone_rainbow_wallpaper.py: --------------------------------------------------------------------------------
1 | #!/usr/bin/env python 2 | 3 | from PIL import Image, ImageDraw 4 | 5 | 6 | def create_wallpaper(stripes, background_color="#000000"): 7 | """ 8 | Create a ``PIL.Image`` instance with the data for this wallpaper. 9 | """ 10 | # The dimensions of an iPhone X wallpaper are 1440 x 2560. 11 | im = Image.new("RGB", size=(1440, 2560), color=background_color) 12 | 13 | draw = ImageDraw.Draw(im) 14 | 15 | # When measuring the stripes on a JPEG of the iPhone X wallpaper: 16 | # 17 | # - Each stripe was 110 pixels high 18 | # - The midpoint of the stripes (between orange/red) on the left-hand 19 | # side was 1754 pixels down 20 | # - The midpoint of the stripes (between orange/red) on the right-hand 21 | # side was 1202 pixels down 22 | # 23 | stripe_height = 110 24 | left_hand_midpoint = 1754 25 | right_hand_midpoint = 1202 26 | 27 | total_stripe_height = stripe_height * len(stripes) 28 | left_hand_top = left_hand_midpoint - (total_stripe_height / 2) 29 | right_hand_top = right_hand_midpoint - (total_stripe_height / 2) 30 | 31 | # Each stripe is a parallelogram. 32 | # 33 | # The points start in the top left-hand corner, and work clockwise 34 | # around the shape.
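# e.g. for the first stripe (i = 0), the corners are
# (0, left_hand_top), (1440, right_hand_top),
# (1440, right_hand_top + stripe_height) and (0, left_hand_top + stripe_height).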
35 | for i, color in enumerate(stripes): 36 | draw.polygon( 37 | [ 38 | (0, left_hand_top + stripe_height * i), 39 | (1440, right_hand_top + stripe_height * i), 40 | (1440, right_hand_top + stripe_height * (i + 1)), 41 | (0, left_hand_top + stripe_height * (i + 1)), 42 | ], 43 | fill=color 44 | ) 45 | 46 | return im 47 | 48 | 49 | if __name__ == '__main__': 50 | for name, stripes in [ 51 | ("mystery_1", ["#F996B9", "#FFFFFF", "#CA28E3", "#333333", "#5861CD"]), 52 | ("mystery_2", ["#D90012", "#D90012", "#0033A0", "#0033A0", "#F2A802", "#F2A802"]), 53 | ("mystery_3", ["#5BBD60", "#BAD897", "#ffffff", "#BABABA", "#333333"]), 54 | ("mystery_4", ["#fcd827", "#0423d2", "#0423d2", "#0423d2", "#fcd827"]) 55 | ]: 56 | im = create_wallpaper(stripes=stripes) 57 | im.save(f"wallpaper_{name}.jpg") 58 | 59 | for name, stripes in [ 60 | ("rainbow", ["#61BB46", "#FCB829", "#F5801E", "#DE393D", "#943B96", "#049CDC"]), 61 | ("genderfluid", ["#F996B9", "#FFFFFF", "#CA28E3", "#333333", "#5861CD"]), 62 | ("pan", ["#F53DA2", "#F53DA2", "#F5DA3D", "#F5DA3D", "#3DBEF5", "#3DBEF5"]), 63 | ("armenia", ["#D90012", "#D90012", "#0033A0", "#0033A0", "#F2A802", "#F2A802"]), 64 | ("aro", ["#5BBD60", "#BAD897", "#ffffff", "#BABABA", "#333333"]), 65 | ("delta", ["#fcd827", "#0423d2", "#0423d2", "#0423d2", "#fcd827"]) 66 | ]: 67 | im = create_wallpaper(stripes=stripes) 68 | im.save(f"wallpaper_{name}.jpg") 69 | -------------------------------------------------------------------------------- /scripts/count_pdf_pages.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | import collections 5 | import datetime 6 | import itertools 7 | import json 8 | import os 9 | import sys 10 | 11 | import PyPDF2 12 | import tqdm 13 | 14 | 15 | try: 16 | CACHE = json.load(open("_pdf_cache.json")) 17 | except FileNotFoundError: 18 | CACHE = {} 19 | 20 | 21 | def draw_bar_chart(data): 22 | # https://alexwlchan.net/2018/05/ascii-bar-charts/ 23 | max_value = max(count for _, count in data) 24 | increment = max_value / 25 25 | 26 | longest_label_length = max(len(label) for label, _ in data) 27 | 28 | for label, count in data: 29 | 30 | # The ASCII block elements come in chunks of 8, so we work out how 31 | # many fractions of 8 we need. 32 | # https://en.wikipedia.org/wiki/Block_Elements 33 | bar_chunks, remainder = divmod(int(count * 8 / increment), 8) 34 | 35 | # First draw the full width chunks 36 | bar = '█' * bar_chunks 37 | 38 | # Then add the fractional part. The Unicode code points for 39 | # block elements are (8/8), (7/8), (6/8), ... , so we need to 40 | # work backwards. 
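# e.g. remainder == 3 is a three-eighths bar: chr(ord('█') + (8 - 3))
# is U+258D LEFT THREE EIGHTHS BLOCK, '▍'.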
41 | if remainder > 0: 42 | bar += chr(ord('█') + (8 - remainder)) 43 | 44 | # If the bar is empty, add a left one-eighth block 45 | bar = bar or '▏' 46 | 47 | print(f'{label.rjust(longest_label_length)} ▏ {count:#5d} {bar}') 48 | 49 | 50 | def get_pdfs(search_root): 51 | for root, _, filenames in os.walk(search_root): 52 | for f in filenames: 53 | if not f.lower().endswith(".pdf"): 54 | continue 55 | yield os.path.join(search_root, root, f) 56 | 57 | 58 | if __name__ == "__main__": 59 | search_roots = [os.path.realpath(p) for p in sys.argv[1:]] 60 | 61 | if not search_roots: 62 | sys.exit(f"Usage: {__file__} ...") 63 | 64 | counts = collections.defaultdict(int) 65 | 66 | pdf_paths = list(itertools.chain( 67 | *(get_pdfs(root) for root in search_roots) 68 | )) 69 | 70 | for pdf_path in tqdm.tqdm(pdf_paths): 71 | if pdf_path not in CACHE: 72 | date = datetime.datetime.fromtimestamp(os.stat(pdf_path).st_mtime) 73 | with open(pdf_path, "rb") as pdf: 74 | reader = PyPDF2.PdfFileReader(pdf) 75 | page_count = reader.getNumPages() 76 | 77 | CACHE[pdf_path] = { 78 | "date": date.strftime("%Y-%m-%d"), 79 | "page_count": page_count 80 | } 81 | 82 | data = CACHE[pdf_path] 83 | counts[data["date"]] += data["page_count"] 84 | 85 | print("") 86 | 87 | draw_bar_chart(sorted(counts.items())) 88 | print("─" * 40) 89 | print(f"TOTAL ▏ {sum(counts.values()):#5d}") 90 | 91 | with open("_pdf_cache.json", "w") as f: 92 | f.write(json.dumps(CACHE)) 93 | -------------------------------------------------------------------------------- /services/kindle/kindle_dedrm/kgenpids.py: -------------------------------------------------------------------------------- 1 | # Modified: most of the code was removed. 2 | 3 | import binascii 4 | try: 5 | import hashlib 6 | sha1 = hashlib.sha1 7 | del hashlib 8 | except ImportError: # No hashlib in Python 2.4. 
9 | import sha 10 | sha1 = sha.sha 11 | del sha 12 | 13 | global charMap1 14 | global charMap3 15 | global charMap4 16 | 17 | charMap3 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/" 18 | charMap4 = "ABCDEFGHIJKLMNPQRSTUVWXYZ123456789" 19 | 20 | def SHA1(message): 21 | ctx = sha1() 22 | ctx.update(message) 23 | return ctx.digest() 24 | 25 | # Returns two bit at offset from a bit field 26 | def getTwoBitsFromBitField(bitField,offset): 27 | byteNumber = offset // 4 28 | bitPosition = 6 - 2*(offset % 4) 29 | return ord(bitField[byteNumber]) >> bitPosition & 3 30 | 31 | # Returns the six bits at offset from a bit field 32 | def getSixBitsFromBitField(bitField,offset): 33 | offset *= 3 34 | value = (getTwoBitsFromBitField(bitField,offset) <<4) + (getTwoBitsFromBitField(bitField,offset+1) << 2) +getTwoBitsFromBitField(bitField,offset+2) 35 | return value 36 | 37 | # 8 bits to six bits encoding from hash to generate PID string 38 | def encodePID(hash): 39 | global charMap3 40 | PID = "" 41 | for position in range (0,8): 42 | PID += charMap3[getSixBitsFromBitField(hash,position)] 43 | return PID 44 | 45 | def crc32(s): 46 | return (~binascii.crc32(s,-1))&0xFFFFFFFF 47 | 48 | # convert from 8 digit PID to 10 digit PID with checksum 49 | def checksumPid(s): 50 | global charMap4 51 | crc = crc32(s) 52 | crc = crc ^ (crc >> 16) 53 | res = s 54 | l = len(charMap4) 55 | for i in (0,1): 56 | b = crc & 0xff 57 | pos = (b // l) ^ (b % l) 58 | res += charMap4[pos%l] 59 | crc >>= 8 60 | return res 61 | 62 | 63 | # old kindle serial number to fixed pid 64 | def pidFromSerial(s, l): 65 | global charMap4 66 | crc = crc32(s) 67 | arr1 = [0]*l 68 | for i in xrange(len(s)): 69 | arr1[i%l] ^= ord(s[i]) 70 | crc_bytes = [crc >> 24 & 0xff, crc >> 16 & 0xff, crc >> 8 & 0xff, crc & 0xff] 71 | for i in xrange(l): 72 | arr1[i] ^= crc_bytes[i&3] 73 | pid = "" 74 | for i in xrange(l): 75 | b = arr1[i] & 0xff 76 | pid+=charMap4[(b >> 7) + ((b >> 5 & 3) ^ (b & 0x1f))] 77 | return pid 78 | 79 | 80 | # Parse the EXTH header records and use the Kindle serial number to calculate the book pid. 81 | def getKindlePid(pidlst, rec209, token, serialnum): 82 | # Compute book PID 83 | pidHash = SHA1(serialnum+rec209+token) 84 | bookPID = encodePID(pidHash) 85 | bookPID = checksumPid(bookPID) 86 | pidlst.append(bookPID) 87 | 88 | # compute fixed pid for old pre 2.5 firmware update pid as well 89 | bookPID = pidFromSerial(serialnum, 7) + "*" 90 | bookPID = checksumPid(bookPID) 91 | pidlst.append(bookPID) 92 | 93 | return pidlst 94 | -------------------------------------------------------------------------------- /services/dreamwidth/build_reading_page_rss.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | """ 4 | Usage: build_reading_page_rss.py --username= --password= 5 | """ 6 | 7 | import sys 8 | 9 | import bs4 10 | import click 11 | import feedgenerator 12 | 13 | from _api import DreamwidthSession 14 | from _helpers import parse_date 15 | 16 | 17 | @click.command() 18 | @click.option("--username", required=True) 19 | @click.option("--password", required=True) 20 | def build_reading_page_rss(username, password): 21 | sess = DreamwidthSession(username, password) 22 | 23 | # Okay, now get the reading page. This should contain the most recent 24 | # 1000 entries from the last 14 days, which is plenty. 25 | resp = sess.get('https://%s.dreamwidth.org/read' % username) 26 | assert "You're viewing your Reading Page." 
in resp.text 27 | 28 | reading_soup = bs4.BeautifulSoup(resp.text, 'html.parser') 29 | entries = reading_soup.find('div', attrs={'id': 'entries'}) 30 | entry_wrappers = entries.findAll('div', attrs={'class': 'entry-wrapper'}) 31 | 32 | # Now get the feed items. This is just a pile of messy HTML parsing. 33 | # 34 | # Important note: because this RSS feed may be exposed outside Dreamwidth 35 | # (and thus outside the Dreamwidth access controls), it shouldn't include 36 | # overly sensitive information. In particular, NO POST CONTENT. This is 37 | # just some metadata I'm hoping isn't too secret -- in particular, the same 38 | # stuff you get from notification emails. 39 | # 40 | feed = feedgenerator.Rss201rev2Feed( 41 | title="%s's Dreamwidth reading page" % username, 42 | link='https://%s.dreamwidth.org/' % username, 43 | description="Entries on %s's reading page" % username, 44 | language='en' 45 | ) 46 | 47 | for e in entry_wrappers: 48 | h3_title = e.find('h3', attrs={'class': 'entry-title'}) 49 | title = h3_title.find('a').text 50 | url = h3_title.find('a').attrs['href'] 51 | 52 | datetime_span = e.find('span', attrs={'class': 'datetime'}) 53 | date_span = datetime_span.find('span', attrs={'class': 'date'}).text 54 | time_span = datetime_span.find('span', attrs={'class': 'time'}).text 55 | pubdate = parse_date(date_span, time_span) 56 | 57 | poster = e.find('span', attrs={'class': 'poster'}).text 58 | 59 | try: 60 | tags = [ 61 | li.find('a').text for li in 62 | e.find('div', attrs={'class': 'tag'}).find('ul').findAll('li') 63 | ] 64 | description = 'This post is tagged with %s' % ', '.join(tags) 65 | except AttributeError: 66 | tags = [] 67 | description = '(This post is untagged)' 68 | 69 | feed.add_item( 70 | title=title, 71 | link=url, 72 | pubdate=pubdate, 73 | author_name=poster, 74 | description=description 75 | ) 76 | 77 | feed.write(sys.stdout, 'utf-8') 78 | 79 | 80 | if __name__ == '__main__': 81 | build_reading_page_rss() 82 | -------------------------------------------------------------------------------- /services/kindle/kindle_dedrm/alfcrypto.c: -------------------------------------------------------------------------------- 1 | // Most of this file is copy-pasted from the alfcrypto sources. 
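// (Orientation note, not from the upstream sources: two ciphers follow --
// PC1, the Pukall Cipher 1, which takes a 16-byte key, and the Topaz
// stream cipher; both are used by Kindle ebook DRM.)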
2 | 3 | typedef struct _TpzCtx { 4 | unsigned int v[2]; 5 | } TpzCtx; 6 | 7 | //implementation of Pukall Cipher 1 8 | unsigned char *PC1(const unsigned char *key, unsigned int klen, const unsigned char *src, 9 | unsigned char *dest, unsigned int len, int decryption) { 10 | unsigned int sum1 = 0; 11 | unsigned int sum2 = 0; 12 | unsigned int keyXorVal = 0; 13 | unsigned short wkey[8]; 14 | unsigned int i; 15 | if (klen != 16) { 16 | // fprintf(stderr, "Bad key length!\n"); 17 | return (void*)0; 18 | } 19 | for (i = 0; i < 8; i++) { 20 | wkey[i] = (key[i * 2] << 8) | key[i * 2 + 1]; 21 | } 22 | for (i = 0; i < len; i++) { 23 | unsigned int temp1 = 0; 24 | unsigned int byteXorVal = 0; 25 | unsigned int j, curByte; 26 | for (j = 0; j < 8; j++) { 27 | temp1 ^= wkey[j]; 28 | sum2 = (sum2 + j) * 20021 + sum1; 29 | sum1 = (temp1 * 346) & 0xFFFF; 30 | sum2 = (sum2 + sum1) & 0xFFFF; 31 | temp1 = (temp1 * 20021 + 1) & 0xFFFF; 32 | byteXorVal ^= temp1 ^ sum2; 33 | } 34 | curByte = src[i]; 35 | if (!decryption) { 36 | keyXorVal = curByte * 257; 37 | } 38 | curByte = ((curByte ^ (byteXorVal >> 8)) ^ byteXorVal) & 0xFF; 39 | if (decryption) { 40 | keyXorVal = curByte * 257; 41 | } 42 | for (j = 0; j < 8; j++) { 43 | wkey[j] ^= keyXorVal; 44 | } 45 | dest[i] = curByte; 46 | } 47 | return dest; 48 | } 49 | 50 | // 51 | // Context initialisation for the Topaz Crypto 52 | // 53 | void topazCryptoInit(TpzCtx *ctx, const unsigned char *key, int klen) { 54 | int i = 0; 55 | ctx->v[0] = 0x0CAFFE19E; 56 | 57 | for (i = 0; i < klen; i++) { 58 | ctx->v[1] = ctx->v[0]; 59 | ctx->v[0] = ((ctx->v[0] >> 2) * (ctx->v[0] >> 7)) ^ 60 | (key[i] * key[i] * 0x0F902007); 61 | } 62 | } 63 | 64 | // 65 | // decrypt data with the context prepared by topazCryptoInit() 66 | // 67 | 68 | void topazCryptoDecrypt(const TpzCtx *ctx, const unsigned char *in, unsigned char *out, int len) { 69 | unsigned int ctx1 = ctx->v[0]; 70 | unsigned int ctx2 = ctx->v[1]; 71 | int i; 72 | for (i = 0; i < len; i++) { 73 | unsigned char m = in[i] ^ (ctx1 >> 3) ^ (ctx2 << 3); 74 | ctx2 = ctx1; 75 | ctx1 = ((ctx1 >> 2) * (ctx1 >> 7)) ^ (m * m * 0x0F902007); 76 | out[i] = m; 77 | } 78 | } 79 | 80 | // Trampoline for `import dl' and qsort. 81 | 82 | struct Item { 83 | int (*f)(int, int, int, int, int, int, int, int, int, int); 84 | int g[10]; 85 | int retval; 86 | }; 87 | 88 | int cmp(struct Item *a, struct Item *b) { 89 | if (a->f == (void*)0) { 90 | a = b; 91 | } 92 | if (a->f != (void*)0) { 93 | a->retval = a->f(a->g[0], a->g[1], a->g[2], a->g[3], a->g[4], a->g[5], 94 | a->g[6], a->g[7], a->g[8], a->g[9]); 95 | a->f = (void*)0; 96 | } 97 | return 0; 98 | } 99 | -------------------------------------------------------------------------------- /services/dreamwidth/post_entry.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | import datetime as dt 5 | import json 6 | import sys 7 | import webbrowser 8 | 9 | import click 10 | 11 | from _api import DreamwidthAPI 12 | from markdownify import md_to_dreamwidth_html 13 | 14 | 15 | # First get the post source and convert to HTML 16 | try: 17 | path = sys.argv[1] 18 | except IndexError: 19 | sys.exit("Usage: %s <path>" % __file__) 20 | 21 | md_src = open(path).read() 22 | html_src = md_to_dreamwidth_html(md_src) 23 | 24 | 25 | api = DreamwidthAPI(**json.load(open("auth.json"))) 26 | 27 | 28 | # Work out what the access groups are, and where this post should 29 | # be posted.
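# (Each choice pairs a human-readable label with the security parameters
# that get merged into the data for the "postevent" XML-RPC call below.)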
30 | choices = [ 31 | ("public", {"security": "public"}), 32 | ("private", {"security": "private"}), 33 | ] 34 | 35 | for access_group in api.get_custom_access_groups(): 36 | choices.append( 37 | (access_group["name"] + " [filter]", 38 | { 39 | "security": "usemask", 40 | 41 | # I'm a bit iffy about this code, because the description isn't great. 42 | # What if you have more than 30 trust groups?? 43 | # 44 | # A 32-bit unsigned integer representing which of the user's 45 | # trust groups are allowed to view this post. Turn bit 0 on to 46 | # allow anyone with basic access to read it. Otherwise, turn 47 | # bit 1-30 on for every trust group that should be allowed to 48 | # read it. Bit 31 is reserved. 49 | # 50 | "allowmask": 2 ** access_group["id"] 51 | }) 52 | ) 53 | 54 | numbered_choices = { 55 | i: (name, data) 56 | for (i, (name, data)) in enumerate(choices, start=1) 57 | } 58 | 59 | 60 | prompt_suffix = "\n".join( 61 | "%2d. %s" % (i, name) 62 | for (i, (name, _)) in numbered_choices.items() 63 | ) 64 | 65 | group_id = click.prompt( 66 | "Where should this be posted?", 67 | type=click.Choice([str(key) for key in sorted(numbered_choices)]), 68 | value_proc=lambda t: int(t), 69 | prompt_suffix=":\n" + prompt_suffix + "\n" 70 | ) 71 | 72 | group_name, security_data = numbered_choices[group_id] 73 | 74 | 75 | # Now construct the post data. 76 | now = dt.datetime.now() 77 | 78 | title = click.prompt("What is the post title?") 79 | tags = click.prompt("What are the post tags?", default="").split(",") 80 | 81 | print("\n=== About to post: ===") 82 | print("title: %s" % title) 83 | print("tags: %s" % ", ".join(tags)) 84 | print("to group: %s" % group_name) 85 | print("") 86 | print(md_src[:95] + "...") 87 | print("") 88 | 89 | resp = click.confirm("Ready to post?", abort=True) 90 | 91 | data = { 92 | "event": html_src, 93 | "subject": title, 94 | "year": now.strftime("%Y"), 95 | "mon": now.strftime("%m"), 96 | "day": now.strftime("%d"), 97 | "hour": now.strftime("%H"), 98 | "min": now.strftime("%M"), 99 | "props": { 100 | "taglist": ",".join(tags) 101 | } 102 | } 103 | 104 | data.update(**security_data) 105 | 106 | resp = api.call_endpoint("postevent", data=data) 107 | print(resp["url"]) 108 | webbrowser.open(resp["url"]) 109 | -------------------------------------------------------------------------------- /gif_slicer/run.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | set -o errexit 4 | set -o nounset 5 | 6 | # Lemon 7 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/t2h4c4n3" --rows=4 --columns=6 8 | 9 | # Granadilla 10 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/h9uqpwcc" --rows=4 --columns=7 11 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/vucfggbh" --rows=4 --columns=7 12 | 13 | # Persimmon 14 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/u6n4x6at" --rows=4 --columns=5 15 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/dw58q6nu" --rows=4 --columns=5 16 | 17 | # Artichoke 18 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/b9g485fs" --rows=4 --columns=7 19 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/ffffayxy" --rows=4 --columns=7 20 | 21 | # Tomato 22 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/nmppywpm" --rows=4 --columns=6 23 | python 
turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/vd6zqnsc" --rows=4 --columns=6 24 | 25 | # Guava 26 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/fwbqfd89" --rows=4 --columns=6 27 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/v9x893kv" --rows=4 --columns=6 28 | 29 | # Passion fruit 30 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/xkn4pw2u" --rows=4 --columns=6 31 | 32 | # Garlic 33 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/qbv99v5k" --rows=4 --columns=4 34 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/vwp2yvdd" --rows=4 --columns=4 35 | 36 | # Kiwi 37 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/aqzd3qh7" --rows=4 --columns=7 38 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/n8xrxskt" --rows=4 --columns=7 39 | 40 | # Star fruit 41 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/e9x9hrte" --rows=4 --columns=7 42 | 43 | # Pumpkin 44 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/rb4sm6fd" --rows=3 --columns=6 45 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/z3cja6w4" --rows=3 --columns=6 46 | 47 | # Mandarin orange 48 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/sxm89b3x" --rows=4 --columns=7 49 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/semnhwc6" --rows=4 --columns=7 50 | 51 | # Cabbage 52 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/rq48tke5" --rows=4 --columns=7 53 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/kgz34m4e" --rows=4 --columns=7 54 | 55 | # Strawberry 56 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/jt4qn2qc" --rows=4 --columns=5 57 | 58 | # Apple 59 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/mwn9qf6m" --rows=4 --columns=7 60 | 61 | # Squash 62 | python turn_catalogue_image_into_gif.py "https://wellcomecollection.org/works/x6vcmzba" --rows=4 --columns=6 -------------------------------------------------------------------------------- /adventofcode/2018/day8.rb: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env ruby 2 | 3 | require "test/unit" 4 | 5 | require_relative "./helpers" 6 | 7 | 8 | class Node 9 | attr_accessor :child_nodes 10 | attr_accessor :metadata_entries 11 | 12 | def initialize(child_nodes: [], metadata_entries: []) 13 | @child_nodes = child_nodes 14 | @metadata_entries = metadata_entries 15 | end 16 | 17 | def ==(other) 18 | other.class == self.class && other.child_nodes == self.child_nodes && other.metadata_entries == self.metadata_entries 19 | end 20 | 21 | # How many values does this take up in the original license file? 22 | def size 23 | 2 + child_nodes.map { |n| n.size }.sum + metadata_entries.size 24 | end 25 | 26 | def to_s 27 | "" 28 | end 29 | 30 | def all_metadata_entries 31 | all = metadata_entries 32 | child_nodes.each { |n| 33 | all += n.all_metadata_entries 34 | } 35 | all 36 | end 37 | 38 | def value 39 | if child_nodes.size == 0 40 | metadata_entries.sum 41 | else 42 | # Remember to subtract 1 -- Ruby arrays are indexed from 0, not 1. 
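      # (So a metadata entry of 1 refers to child_nodes[0]; entries that
      # point past the last child come back as nil and are filtered out.)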
43 | metadata_entries 44 | .map { |m| child_nodes[m - 1] } 45 | .select { |maybe_child| !maybe_child.nil? } 46 | .map { |child| child.value } 47 | .sum 48 | end 49 | end 50 | end 51 | 52 | 53 | # Parse the first node in a list of numbers. This method may not parse all 54 | # the numbers, if the first node finishes before the end of the list. 55 | def parse_first_node(numbers) 56 | child_node_count = numbers[0] 57 | metadata_count = numbers[1] 58 | 59 | # How far through the list are we? 60 | index = 2 61 | 62 | if child_node_count == 0 63 | Node.new(metadata_entries: numbers.slice(index, metadata_count)) 64 | else 65 | child_nodes = [] 66 | (1..child_node_count).each { |_| 67 | child_nodes << parse_first_node(numbers[index..-1]) 68 | index += child_nodes[-1].size 69 | } 70 | Node.new( 71 | child_nodes: child_nodes, 72 | metadata_entries: numbers.slice(index, metadata_count) 73 | ) 74 | end 75 | end 76 | 77 | 78 | class TestDay8 < Test::Unit::TestCase 79 | def test_examples 80 | node_B = Node.new(metadata_entries: [10, 11, 12]) 81 | node_D = Node.new(metadata_entries: [99]) 82 | node_C = Node.new(child_nodes: [node_D], metadata_entries: [2]) 83 | node_A = Node.new(child_nodes: [node_B, node_C], metadata_entries: [1, 1, 2]) 84 | 85 | assert_equal parse_first_node([0, 3, 10, 11, 12]), node_B 86 | assert_equal parse_first_node([1, 1, 0, 1, 99, 2]), node_C 87 | assert_equal parse_first_node([2, 3, 0, 3, 10, 11, 12, 1, 1, 0, 1, 99, 2, 1, 1, 2]), node_A 88 | 89 | assert_equal node_B.value, 33 90 | assert_equal node_D.value, 99 91 | assert_equal node_C.value, 0 92 | assert_equal node_A.value, 66 93 | end 94 | end 95 | 96 | 97 | if __FILE__ == $0 98 | input = File.read("8.txt").split(" ").map { |s| s.to_i } 99 | root = parse_first_node(input) 100 | 101 | answer1 = root.all_metadata_entries.sum 102 | answer2 = root.value 103 | 104 | solution("8", "Memory Maneuver", answer1, answer2) 105 | end 106 | -------------------------------------------------------------------------------- /adventofcode/2018/day3.rb: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env ruby 2 | 3 | require "set" 4 | require "test/unit" 5 | 6 | require_relative "./helpers" 7 | 8 | 9 | def find_single_claim_area(left_offset, top_offset, width, height) 10 | coords = Set.new([]) 11 | (1..width).map { |w| 12 | (1..height).map { |h| 13 | coords.add([left_offset + w, top_offset + h]) 14 | } 15 | } 16 | raise "Incorrect size" unless coords.size == width * height 17 | coords 18 | end 19 | 20 | 21 | def tally_areas(all_claims) 22 | used_areas = Hash.new(0) 23 | all_claims.map { |claim| 24 | claimed_area = find_single_claim_area( 25 | claim["left_offset"], 26 | claim["top_offset"], 27 | claim["width"], 28 | claim["height"] 29 | ) 30 | 31 | claimed_area.map { |coord| 32 | used_areas[coord] += 1 33 | } 34 | } 35 | used_areas 36 | end 37 | 38 | 39 | def count_overlapping_claims(all_claims) 40 | tally_areas(all_claims).values.select { |v| v > 1 }.size 41 | end 42 | 43 | 44 | def find_non_overlapping_claim(all_claims) 45 | used_areas = tally_areas(all_claims) 46 | 47 | all_claims.each { |claim| 48 | claimed_area = find_single_claim_area( 49 | claim["left_offset"], 50 | claim["top_offset"], 51 | claim["width"], 52 | claim["height"] 53 | ) 54 | 55 | has_overlap = false 56 | claimed_area.map { |c| 57 | if used_areas[c] > 1 58 | has_overlap = true 59 | break 60 | end 61 | } 62 | 63 | if !has_overlap 64 | return claim["claim_id"] 65 | end 66 | } 67 | end 68 | 69 | 70 | class TestDay3 < 
Test::Unit::TestCase 71 | def test_examples 72 | assert_equal find_single_claim_area(3, 2, 5, 4), Set.new([ 73 | [4, 3], [5, 3], [6, 3], [7, 3], [8, 3], 74 | [4, 4], [5, 4], [6, 4], [7, 4], [8, 4], 75 | [4, 5], [5, 5], [6, 5], [7, 5], [8, 5], 76 | [4, 6], [5, 6], [6, 6], [7, 6], [8, 6], 77 | ]) 78 | 79 | assert_equal find_single_claim_area(3, 1, 4, 4), Set.new([ 80 | [4, 2], [5, 2], [6, 2], [7, 2], 81 | [4, 3], [5, 3], [6, 3], [7, 3], 82 | [4, 4], [5, 4], [6, 4], [7, 4], 83 | [4, 5], [5, 5], [6, 5], [7, 5], 84 | ]) 85 | 86 | test_claims = [ 87 | {"claim_id" => "1", "left_offset" => 1, "top_offset" => 3, "width" => 4, "height" => 4}, 88 | {"claim_id" => "2", "left_offset" => 3, "top_offset" => 1, "width" => 4, "height" => 4}, 89 | {"claim_id" => "3", "left_offset" => 5, "top_offset" => 5, "width" => 2, "height" => 2}, 90 | ] 91 | 92 | assert_equal count_overlapping_claims(test_claims), 4 93 | 94 | assert_equal find_non_overlapping_claim(test_claims), "3" 95 | end 96 | end 97 | 98 | 99 | CLAIM_RE = /^\#(?<claim_id>\d+) @ (?<left_offset>\d+),(?<top_offset>\d+): (?<width>\d+)x(?<height>\d+)$/ 100 | 101 | 102 | if __FILE__ == $0 103 | input = File.read("3.txt").split("\n") 104 | 105 | claims = input 106 | .map { |line| CLAIM_RE.match(line) } 107 | .map { |m| m.named_captures } 108 | .each { |data| 109 | ["left_offset", "top_offset", "width", "height"].each { |key| 110 | data[key] = data[key].to_i 111 | } 112 | } 113 | 114 | answer1 = count_overlapping_claims(claims) 115 | answer2 = find_non_overlapping_claim(claims) 116 | solution("3", "No Matter How You Slice It", answer1, answer2) 117 | end 118 | -------------------------------------------------------------------------------- /adventofcode/2018/day10.rb: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env ruby 2 | 3 | require "set" 4 | require "test/unit" 5 | 6 | require_relative "./helpers" 7 | 8 | 9 | INPUT_RE = /position=<(?<x>[\d\s\-]+), (?<y>[\d\s\-]+)> velocity=<(?<v_x>[\d\s\-]+), (?<v_y>[\d\s\-]+)>/ 10 | 11 | 12 | def parse_input(s) 13 | m = INPUT_RE.match(s) 14 | { 15 | "x" => m["x"].to_i, 16 | "y" => m["y"].to_i, 17 | "v_x" => m["v_x"].to_i, 18 | "v_y" => m["v_y"].to_i, 19 | } 20 | end 21 | 22 | 23 | def find_time_of_message(lights) 24 | t = 0 25 | y_heights = [] 26 | 27 | while true 28 | y_coords = lights.map { |coord| 29 | coord["y"] + coord["v_y"] * t 30 | } 31 | 32 | # When the message is visible, the y-coordinates will be as tight as possible. 33 | # When it diverges, the y-coordinates will start to increase again. 34 | y_heights << y_coords.max - y_coords.min 35 | 36 | if y_heights.size < 3 37 | next 38 | else 39 | if y_heights[-1] > y_heights[-2] and y_heights[-2] < y_heights[-3] 40 | return t - 1 41 | end 42 | end 43 | 44 | t += 1 45 | end 46 | end 47 | 48 | 49 | def draw_message(lights, t: 0) 50 | message_coords = lights 51 | .map { |c| [c["x"] + c["v_x"] * t, c["y"] + c["v_y"] * t] } 52 | .to_set 53 | 54 | x_values = message_coords.map { |coord| coord[0] } 55 | y_values = message_coords.map { |coord| coord[1] } 56 | 57 | # Find the bounds of the box to draw. The -1/+1 ensures an empty border 58 | # around the edge to make the image easier to see. 59 | x_min = x_values.min - 1 60 | x_max = x_values.max + 1 61 | 62 | y_min = y_values.min - 1 63 | y_max = y_values.max + 1 64 | 65 | (y_min..y_max).map { |y| 66 | (x_min..x_max).map { |x| 67 | if message_coords.include? 
[x, y] 68 | print "# " 69 | else 70 | print "· " 71 | end 72 | 73 | if x == x_max 74 | puts "\n" 75 | end 76 | } 77 | } 78 | end 79 | 80 | 81 | class TestDay9 < Test::Unit::TestCase 82 | def test_examples 83 | example = """position=< 9, 1> velocity=< 0, 2> 84 | position=< 7, 0> velocity=<-1, 0> 85 | position=< 3, -2> velocity=<-1, 1> 86 | position=< 6, 10> velocity=<-2, -1> 87 | position=< 2, -4> velocity=< 2, 2> 88 | position=<-6, 10> velocity=< 2, -2> 89 | position=< 1, 8> velocity=< 1, -1> 90 | position=< 1, 7> velocity=< 1, 0> 91 | position=<-3, 11> velocity=< 1, -2> 92 | position=< 7, 6> velocity=<-1, -1> 93 | position=<-2, 3> velocity=< 1, 0> 94 | position=<-4, 3> velocity=< 2, 0> 95 | position=<10, -3> velocity=<-1, 1> 96 | position=< 5, 11> velocity=< 1, -2> 97 | position=< 4, 7> velocity=< 0, -1> 98 | position=< 8, -2> velocity=< 0, 1> 99 | position=<15, 0> velocity=<-2, 0> 100 | position=< 1, 6> velocity=< 1, 0> 101 | position=< 8, 9> velocity=< 0, -1> 102 | position=< 3, 3> velocity=<-1, 1> 103 | position=< 0, 5> velocity=< 0, -1> 104 | position=<-2, 2> velocity=< 2, 0> 105 | position=< 5, -2> velocity=< 1, 2> 106 | position=< 1, 4> velocity=< 2, 1> 107 | position=<-2, 7> velocity=< 2, -2> 108 | position=< 3, 6> velocity=<-1, -1> 109 | position=< 5, 0> velocity=< 1, 0> 110 | position=<-6, 0> velocity=< 2, 0> 111 | position=< 5, 9> velocity=< 1, -2> 112 | position=<14, 7> velocity=<-2, 0> 113 | position=<-3, 6> velocity=< 2, -1>""" 114 | 115 | lights = example.split("\n").map { |s| 116 | parse_input(s) 117 | } 118 | 119 | assert_equal find_time_of_message(lights), 3 120 | end 121 | end 122 | 123 | 124 | if __FILE__ == $0 125 | inputs = File.read("10.txt").split("\n").map { |s| parse_input(s) } 126 | 127 | puts "--- Day 10: The Stars Align ---" 128 | 129 | t = find_time_of_message(inputs) 130 | 131 | draw_message(inputs, t: t) 132 | 133 | puts "You need to wait #{t} seconds" 134 | end 135 | -------------------------------------------------------------------------------- /services/dreamwidth/_api.py: -------------------------------------------------------------------------------- 1 | # -*- encoding: utf-8 2 | """ 3 | Dreamwidth API wrapper class. 4 | 5 | Useful links: 6 | - https://dw-dev-training.dreamwidth.org/58924.html?thread=383532 7 | - https://github.com/ziarkaen/dreamwidth-gsoc/blob/2f73355b60d59288bc78671cebe901879121fe8a/dreamwidth-library/dreamwidth2.py 8 | - http://wiki.dwscoalition.org/wiki/index.php/XML-RPC_Protocol 9 | - https://www.livejournal.com/doc/server/ljp.csp.xml-rpc.protocol.html 10 | 11 | """ 12 | 13 | import json 14 | 15 | try: 16 | from xmlrpclib import Binary, ServerProxy 17 | except ImportError: # Python 3 18 | from xmlrpc.client import Binary, ServerProxy 19 | 20 | import requests 21 | 22 | 23 | class DreamwidthAPI: 24 | def __init__(self, username, password): 25 | self.username = username 26 | self.password = password 27 | 28 | self.server = ServerProxy("https://www.dreamwidth.org/interface/xmlrpc") 29 | 30 | def call_endpoint(self, method_name, data=None): 31 | if data is None: 32 | data = {} 33 | 34 | # See http://wiki.dwscoalition.org/wiki/index.php/XML-RPC_Protocol_Method:_login 35 | data.update( 36 | { 37 | "username": self.username, 38 | "password": self.password, 39 | "auth_method": "clear", 40 | "ver": "1", 41 | } 42 | ) 43 | 44 | method = getattr(self.server, "LJ.XMLRPC." 
+ method_name) 45 | return method(data) 46 | 47 | def get_all_posts(self): 48 | data = {"selecttype": "lastn", "howmany": 50} 49 | 50 | # I'm doing my own book-keeping of event IDs to ensure that this function 51 | # never returns a duplicate item, even if we mess up the API call and 52 | # end up retrieving an item more than once. 53 | # 54 | # This will happen on the ``beforedate`` boundary, because I deliberately 55 | # fudge the date slightly to ensure we're getting everything before *or on* 56 | # the time specified by ``beforedate``. 57 | seen_event_ids = set() 58 | 59 | while True: 60 | resp = self.call_endpoint("getevents", data=data) 61 | 62 | # If we've seen every event in this array already, we must be at 63 | # the end of the journal. Abort! 64 | if all(event["itemid"] in seen_event_ids for event in resp["events"]): 65 | break 66 | 67 | for event in resp["events"]: 68 | event_id = event["itemid"] 69 | if event_id not in seen_event_ids: 70 | yield event 71 | seen_event_ids.add(event_id) 72 | 73 | # This ensures that if there were multiple posts at the same time as 74 | # the earliest event in the response, we'll get all of them. 75 | sorted_logtimes = sorted(set(event["logtime"] for event in resp["events"])) 76 | data["beforedate"] = sorted_logtimes[1] 77 | 78 | def get_custom_access_groups(self): 79 | return self.call_endpoint("gettrustgroups")["trustgroups"] 80 | 81 | 82 | class BinaryEncoder(json.JSONEncoder): 83 | def default(self, obj): 84 | if isinstance(obj, Binary): 85 | return obj.data.decode("utf8") 86 | return super().default(obj) 87 | 88 | 89 | class DreamwidthSession: 90 | def __init__(self, username, password): 91 | self.sess = requests.Session() 92 | 93 | resp = self.sess.post( 94 | "https://www.dreamwidth.org/login", 95 | data={"user": username, "password": password, "action:login": "Log in"}, 96 | ) 97 | resp.raise_for_status() 98 | 99 | # Dreamwidth always returns an HTTP 200, even if login fails. This is a 100 | # better way to check if login succeeded. 101 | assert "Welcome back to Dreamwidth!" 
in resp.text 102 | 103 | def get(self, *args, **kwargs): 104 | return self.sess.get(*args, **kwargs) 105 | -------------------------------------------------------------------------------- /backups/flickr/save_image.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | import datetime as dt 5 | import json 6 | import pathlib 7 | import sys 8 | import tempfile 9 | 10 | import bs4 11 | import hyperlink 12 | import urllib3 13 | import xmltodict 14 | 15 | 16 | BACKUP_ROOT = pathlib.Path.home() / "Documents" / "backups" / "flickr" 17 | 18 | 19 | def get_canonical_url(url): 20 | http = urllib3.PoolManager() 21 | 22 | seen_urls = {url} 23 | 24 | url = url.split("/in/")[0] 25 | 26 | while True: 27 | resp = http.request( 28 | "GET", url, 29 | redirect=False, 30 | headers={"User-Agent": "urllib3"} 31 | ) 32 | 33 | try: 34 | url = resp.headers["Location"] 35 | except KeyError: 36 | return url 37 | 38 | if url in seen_urls: 39 | raise ValueError(f"Circular redirect: {url}") 40 | else: 41 | seen_urls.add(url) 42 | 43 | 44 | def build_url(base, params): 45 | u = hyperlink.URL.from_text(base) 46 | for k, v in params.items(): 47 | u = u.set(k, v) 48 | return u 49 | 50 | 51 | def get_oembed_data(canonical_url): 52 | http = urllib3.PoolManager() 53 | 54 | request_url = build_url( 55 | "https://www.flickr.com/services/oembed/", 56 | params={"url": canonical_url} 57 | ) 58 | 59 | oembed_resp = http.request("GET", str(request_url)) 60 | assert oembed_resp.status == 200, oembed_resp.status 61 | return xmltodict.parse(oembed_resp.data)["oembed"] 62 | 63 | 64 | def get_description(url): 65 | http = urllib3.PoolManager() 66 | 67 | resp = http.request("GET", url) 68 | assert resp.status == 200, resp.status 69 | 70 | soup = bs4.BeautifulSoup(resp.data, "html.parser") 71 | return soup.find("meta", attrs={"name": "description"}).attrs["content"] 72 | 73 | 74 | def get_backup_dir(canonical_url, oembed_data): 75 | parsed_url = hyperlink.URL.from_text(canonical_url.strip("/")) 76 | 77 | assert parsed_url.path[0] == "photos", parsed_url 78 | 79 | flickr_id = parsed_url.path[-1] 80 | assert flickr_id.isnumeric() 81 | 82 | author_url = oembed_data["author_url"].strip("/") 83 | creator = hyperlink.URL.from_text(author_url).path[-1] 84 | 85 | return BACKUP_ROOT / f"{creator}-{flickr_id}" 86 | 87 | 88 | def save_image(out_dir, oembed_data): 89 | http = urllib3.PoolManager() 90 | 91 | img_url = oembed_data["url"] 92 | filename = hyperlink.URL.from_text(img_url).path[-1] 93 | 94 | # See https://stackoverflow.com/q/17285464/1558022 95 | resp = http.request("GET", img_url, preload_content=False) 96 | 97 | with (out_dir / filename).open("wb") as out: 98 | while True: 99 | data = resp.read(1024) 100 | if not data: 101 | break 102 | out.write(data) 103 | 104 | resp.release_conn() 105 | 106 | 107 | if __name__ == "__main__": 108 | try: 109 | url = sys.argv[1] 110 | except IndexError: 111 | sys.exit(f"Usage: {__file__} <url>") 112 | 113 | canonical_url = get_canonical_url(url) 114 | oembed_data = get_oembed_data(canonical_url) 115 | 116 | backup_dir = get_backup_dir(canonical_url, oembed_data) 117 | backup_dir.parent.mkdir(exist_ok=True) 118 | 119 | if backup_dir.exists(): 120 | print("Already saved!") 121 | sys.exit(0) 122 | 123 | description = get_description(canonical_url) 124 | 125 | flickr_info = { 126 | "url": url, 127 | "canonical_url": canonical_url, 128 | "saved_at": dt.datetime.now().isoformat(), 129 | "description": description, 130 | "oembed_data": 
oembed_data 131 | } 132 | 133 | json_string = json.dumps(flickr_info, indent=2, sort_keys=True) 134 | 135 | tmp_dir = pathlib.Path(tempfile.mkdtemp()) 136 | (tmp_dir / "info.json").write_text(json_string) 137 | 138 | save_image(tmp_dir, oembed_data) 139 | 140 | tmp_dir.rename(backup_dir) 141 | print(backup_dir) 142 | -------------------------------------------------------------------------------- /adventofcode/2018/day4.rb: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env ruby 2 | 3 | require "test/unit" 4 | 5 | require_relative "./helpers" 6 | 7 | 8 | def tally_sleep_minutes(events) 9 | tally = Hash.new(nil) 10 | 11 | fell_asleep = nil 12 | 13 | events 14 | .each { |e| 15 | if e["event"] == "falls asleep" 16 | fell_asleep = e["minute"] 17 | elsif e["event"] == "wakes up" 18 | (fell_asleep...e["minute"]).each { |min| 19 | if !tally.key?(e["guard"]) 20 | tally[e["guard"]] = Hash.new(0) 21 | end 22 | tally[e["guard"]][min] += 1 23 | } 24 | end 25 | } 26 | 27 | tally 28 | end 29 | 30 | 31 | def find_guard_who_spends_most_minutes_asleep(tally) 32 | minutes_asleep_by_guard = tally 33 | .map { |guard, sleep_minutes| [guard, sleep_minutes.values.sum] } 34 | 35 | minutes_asleep_by_guard.max_by { |guard, time_asleep| time_asleep }[0] 36 | end 37 | 38 | 39 | def find_guard_who_sleeps_most_on_same_minute(tally) 40 | tally 41 | .map { |guard, minutes| [guard, minutes.values.max] } 42 | .max_by { |guard, max_sleep_count| max_sleep_count }[0] 43 | end 44 | 45 | 46 | def find_most_frequent_sleep_minute(tally, guard) 47 | tally[guard].max_by { |min, count| count }[0] 48 | end 49 | 50 | 51 | def find_minute_most_often_asleep(tally, guard) 52 | tally[guard].max_by { |min, count| count }[0] 53 | end 54 | 55 | 56 | DATE_RE = /^\[(?<date>\d{4}\-\d{2}\-\d{2}) \d{2}:(?<minute>\d{2})\] (?<event>Guard #(?<guard>\d+) begins shift|falls asleep|wakes up)/ 57 | 58 | 59 | def parse_input(input) 60 | curr_guard = nil 61 | input 62 | .sort 63 | .select { |line| !line.strip.empty? } 64 | .map { |line| DATE_RE.match(line.strip) } 65 | .map { |m| m.named_captures } 66 | .each { |h| 67 | # Make sure that every event has an associated guard; use the last 68 | # guard we saw starting a shift.
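      # (e.g. a "falls asleep" line carries no guard ID of its own, so it
      # inherits the ID from the preceding "Guard #N begins shift" line.)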
69 | if h["guard"] != nil 70 | curr_guard = h["guard"] 71 | else 72 | h["guard"] = curr_guard 73 | end 74 | 75 | h["minute"] = h["minute"].to_i 76 | } 77 | end 78 | 79 | 80 | class TestDay4 < Test::Unit::TestCase 81 | def test_examples 82 | input = """ 83 | [1518-11-01 00:00] Guard #10 begins shift 84 | [1518-11-01 00:05] falls asleep 85 | [1518-11-01 00:25] wakes up 86 | [1518-11-01 00:30] falls asleep 87 | [1518-11-01 00:55] wakes up 88 | [1518-11-01 23:58] Guard #99 begins shift 89 | [1518-11-02 00:40] falls asleep 90 | [1518-11-02 00:50] wakes up 91 | [1518-11-03 00:05] Guard #10 begins shift 92 | [1518-11-03 00:24] falls asleep 93 | [1518-11-03 00:29] wakes up 94 | [1518-11-04 00:02] Guard #99 begins shift 95 | [1518-11-04 00:36] falls asleep 96 | [1518-11-04 00:46] wakes up 97 | [1518-11-05 00:03] Guard #99 begins shift 98 | [1518-11-05 00:45] falls asleep 99 | [1518-11-05 00:55] wakes up 100 | """.split("\n") 101 | 102 | events = parse_input(input) 103 | tally = tally_sleep_minutes(events) 104 | 105 | assert_equal find_guard_who_spends_most_minutes_asleep(tally), "10" 106 | assert_equal find_minute_most_often_asleep(tally, "10"), 24 107 | 108 | assert_equal find_guard_who_sleeps_most_on_same_minute(tally), "99" 109 | assert_equal find_most_frequent_sleep_minute(tally, "99"), 45 110 | end 111 | end 112 | 113 | 114 | if __FILE__ == $0 115 | input = File.read("4.txt").split("\n") 116 | 117 | events = parse_input(input) 118 | tally = tally_sleep_minutes(events) 119 | 120 | most_asleep_guard = find_guard_who_spends_most_minutes_asleep(tally) 121 | minutes_asleep = find_minute_most_often_asleep(tally, most_asleep_guard) 122 | answer1 = most_asleep_guard.to_i * minutes_asleep 123 | 124 | guard_sleeps_most_on_same_minute = find_guard_who_sleeps_most_on_same_minute(tally) 125 | most_sleep_minute = find_most_frequent_sleep_minute(tally, guard_sleeps_most_on_same_minute) 126 | answer2 = guard_sleeps_most_on_same_minute.to_i * most_sleep_minute 127 | 128 | solution("4", "Repose Record", answer1, answer2) 129 | end 130 | -------------------------------------------------------------------------------- /textmate-config/Global.tmProperties: -------------------------------------------------------------------------------- 1 | # Version 1.0 -- Generated content! 
2 | exclude = '{*.{o,pyc},Icon\r,CVS,_darcs,_MTN,\{arch\},blib,*\~.nib,__pycache__,target,node_modules}' 3 | fontName = PragmataPro 4 | fontSize = 15 5 | include = {*,.mergify.yml,.tm_properties,.travis*,.github*,.gitignore,.scalafmt.conf,.coveragerc,.wellcome_project} 6 | showInvisibles = false 7 | showWrapColumn = true 8 | softTabs = true 9 | softWrap = true 10 | tabSize = 2 11 | theme = A4299D9B-1DE5-4BC4-87F6-A757E71B1597 12 | wrapColumn = 89 13 | 14 | [ *.md ] 15 | fileType = text.html.markdown 16 | 17 | [ *.py ] 18 | fileType = source.python 19 | 20 | [ *.rs ] 21 | fileType = source.rust 22 | 23 | [ *.cfg ] 24 | fileType = source.python 25 | 26 | [ *.ini ] 27 | fileType = source.ini 28 | 29 | [ *.sbt ] 30 | fileType = source.scala 31 | 32 | [ *.svg ] 33 | fileType = text.xml 34 | 35 | [ *.ttx ] 36 | fileType = text.xml 37 | 38 | [ *.txt ] 39 | fileType = text.plain 40 | 41 | [ *.cxml ] 42 | fileType = text.xml 43 | 44 | [ *.fish ] 45 | fileType = source.shell 46 | 47 | [ *.json ] 48 | fileType = source.json 49 | 50 | [ *.scpt ] 51 | fileType = source.applescript 52 | 53 | [ *.ipynb ] 54 | fileType = source.json 55 | 56 | [ *.scala ] 57 | fileType = source.scala 58 | 59 | [ *.py.tpl ] 60 | fileType = source.python 61 | 62 | [ *.Makefile ] 63 | fileType = source.makefile 64 | 65 | [ Dockerfile ] 66 | fileType = source.dockerfile 67 | 68 | [ Jenkinsfile ] 69 | fileType = source.groovy 70 | 71 | [ *.svg.template ] 72 | fileType = text.xml 73 | 74 | [ managed_schema ] 75 | fileType = text.xml 76 | 77 | [ source ] 78 | showInvisibles = false 79 | softTabs = true 80 | softWrap = false 81 | tabSize = 2 82 | 83 | [ source.css ] 84 | softWrap = true 85 | tabSize = 2 86 | 87 | [ source.css.less ] 88 | tabSize = 2 89 | 90 | [ source.dockerfile ] 91 | softWrap = false 92 | tabSize = 2 93 | 94 | [ source.ini ] 95 | showInvisibles = true 96 | softTabs = true 97 | softWrap = false 98 | 99 | [ source.js ] 100 | showInvisibles = true 101 | softWrap = true 102 | tabSize = 2 103 | 104 | [ source.json ] 105 | softWrap = true 106 | tabSize = 2 107 | 108 | [ source.makefile ] 109 | softWrap = false 110 | tabSize = 4 111 | 112 | [ source.python ] 113 | showInvisibles = false 114 | softWrap = false 115 | 116 | [ source.python.django ] 117 | softWrap = false 118 | tabSize = 2 119 | 120 | [ source.rust ] 121 | tabSize = 2 122 | 123 | [ source.scss ] 124 | tabSize = 2 125 | 126 | [ source.shell ] 127 | showInvisibles = false 128 | softWrap = false 129 | tabSize = 2 130 | 131 | [ source.terraform ] 132 | tabSize = 2 133 | 134 | [ source.yaml ] 135 | showInvisibles = false 136 | softTabs = true 137 | tabSize = 2 138 | 139 | [ text ] 140 | showInvisibles = true 141 | softTabs = true 142 | tabSize = 4 143 | 144 | [ text.html ] 145 | showInvisibles = true 146 | softTabs = true 147 | softWrap = true 148 | tabSize = 2 149 | 150 | [ text.html.basic ] 151 | softWrap = true 152 | 153 | [ text.html.markdown ] 154 | showInvisibles = true 155 | softTabs = true 156 | softWrap = true 157 | tabSize = 2 158 | 159 | [ text.plain ] 160 | showInvisibles = false 161 | softWrap = true 162 | tabSize = 4 163 | 164 | [ text.restructuredtext ] 165 | softWrap = true 166 | tabSize = 3 167 | 168 | [ text.tex ] 169 | softWrap = true 170 | tabSize = 2 171 | 172 | [ text.tex.latex ] 173 | softWrap = true 174 | tabSize = 2 175 | 176 | [ text.xml ] 177 | softWrap = true 178 | 179 | [ text.xml.plist ] 180 | softWrap = true 181 | -------------------------------------------------------------------------------- 
/aws/list_memory_cpu_of_fargate_tasks.go: -------------------------------------------------------------------------------- 1 | // To track our Fargate costs and find services which have an overly generous 2 | // provision of CPU/memory, this script prints a list of all our active task 3 | // definitions, along with their CPU/memory limits. 4 | // 5 | 6 | package main 7 | 8 | import ( 9 | "fmt" 10 | "github.com/aws/aws-sdk-go/aws/session" 11 | "github.com/aws/aws-sdk-go/service/ecs" 12 | "os" 13 | ) 14 | 15 | func getClusterArns(client *ecs.ECS) []*string { 16 | var clusterArns []*string 17 | 18 | client.ListClustersPages( 19 | &ecs.ListClustersInput{}, 20 | func(page *ecs.ListClustersOutput, _ bool) bool { 21 | for _, clusterArn := range page.ClusterArns { 22 | clusterArns = append(clusterArns, clusterArn) 23 | } 24 | return true 25 | }) 26 | 27 | return clusterArns 28 | } 29 | 30 | func getServiceArns(client *ecs.ECS, clusterArns []*string) []*string { 31 | var serviceArns []*string 32 | 33 | for _, clusterArn := range clusterArns { 34 | params := &ecs.ListServicesInput { 35 | Cluster: clusterArn, 36 | } 37 | 38 | client.ListServicesPages( 39 | params, 40 | func(page *ecs.ListServicesOutput, _ bool) bool { 41 | for _, serviceArn := range page.ServiceArns { 42 | serviceArns = append(serviceArns, serviceArn) 43 | } 44 | return true 45 | }) 46 | } 47 | 48 | return serviceArns 49 | } 50 | 51 | func getTaskDefinitionArns(client *ecs.ECS, clusterArns []*string) []*string { 52 | var taskDefinitionArns []*string 53 | 54 | for _, clusterArn := range clusterArns { 55 | serviceArns := getServiceArns(client, []*string{clusterArn}) 56 | 57 | // We have one cluster which doesn't contain any services; we should skip 58 | // to the next cluster. 59 | // 60 | // Note: this presents as a slightly confusing error, as the AWS API complains 61 | // you didn't supply a "Services" field -- you did, but it was empty! 62 | // 63 | if len(serviceArns) == 0 { 64 | continue 65 | } 66 | 67 | params := &ecs.DescribeServicesInput { 68 | Cluster: clusterArn, 69 | Services: serviceArns, 70 | } 71 | 72 | describeServicesOutput, err := client.DescribeServices(params) 73 | 74 | if err != nil { 75 | fmt.Printf("Error describing services: %v\n", err) 76 | os.Exit(1) 77 | } 78 | 79 | for _, service := range describeServicesOutput.Services { 80 | taskDefinitionArns = append(taskDefinitionArns, service.TaskDefinition) 81 | } 82 | } 83 | 84 | return taskDefinitionArns 85 | } 86 | 87 | func main() { 88 | sess := session.Must(session.NewSession()) 89 | client := ecs.New(sess) 90 | 91 | clusterArns := getClusterArns(client) 92 | 93 | for _, taskDefinitionArn := range getTaskDefinitionArns(client, clusterArns) { 94 | params := &ecs.DescribeTaskDefinitionInput { 95 | TaskDefinition: taskDefinitionArn, 96 | } 97 | 98 | taskDefinitionOutput, err := client.DescribeTaskDefinition(params) 99 | 100 | if err != nil { 101 | fmt.Printf("Error describing task definition %v: %v\n", taskDefinitionArn, err) 102 | os.Exit(1) 103 | } 104 | 105 | taskDefinition := taskDefinitionOutput.TaskDefinition 106 | 107 | // Only Fargate tasks have a top-level CPU definition. We could drill down 108 | // and inspect ECS tasks as well, but as the vast majority of our ECS tasks 109 | // are on Fargate, I'll skip that for now.
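	// (For reference: a Fargate task definition carries string values such as
	// Cpu = "256" and Memory = "512"; EC2-only task definitions typically omit them.)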
110 | for _, compatibility := range taskDefinition.Compatibilities { 111 | if *compatibility == "FARGATE" { 112 | fmt.Printf("%5s\t%5s\t%s:%d\n", 113 | *taskDefinition.Cpu, 114 | *taskDefinition.Memory, 115 | *taskDefinition.Family, 116 | *taskDefinition.Revision) 117 | break 118 | } 119 | } 120 | } 121 | } 122 | -------------------------------------------------------------------------------- /backups/deviantart/save_image.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- encoding: utf-8 3 | 4 | import datetime as dt 5 | import json 6 | import pathlib 7 | import sys 8 | import tempfile 9 | 10 | import bs4 11 | import hyperlink 12 | import urllib3 13 | 14 | 15 | BACKUP_ROOT = pathlib.Path.home() / "Documents" / "backups" / "deviantart" 16 | 17 | 18 | def build_url(base, params): 19 | u = hyperlink.URL.from_text(base) 20 | for k, v in params.items(): 21 | u = u.set(k, v) 22 | return u 23 | 24 | 25 | def get_canonical_url(url): 26 | http = urllib3.PoolManager() 27 | 28 | seen_urls = {url} 29 | 30 | while True: 31 | resp = http.request( 32 | "GET", url, 33 | redirect=False, 34 | headers={"User-Agent": "urllib3"} 35 | ) 36 | 37 | try: 38 | url = resp.headers["Location"] 39 | except KeyError: 40 | return url 41 | 42 | if url in seen_urls: 43 | raise ValueError(f"Circular redirect: {url}") 44 | else: 45 | seen_urls.add(url) 46 | 47 | 48 | def get_backup_dir(canonical_url, oembed_data): 49 | page_url = hyperlink.URL.from_text(canonical_url) 50 | 51 | artist = oembed_data["author_name"] 52 | deviantart_slug = page_url.path[-1] 53 | 54 | return BACKUP_ROOT / artist / deviantart_slug 55 | 56 | 57 | def get_oembed_data(canonical_url): 58 | http = urllib3.PoolManager() 59 | 60 | request_url = build_url( 61 | "https://backend.deviantart.com/oembed", 62 | params={"url": canonical_url} 63 | ) 64 | 65 | oembed_resp = http.request("GET", str(request_url)) 66 | assert oembed_resp.status == 200, oembed_resp 67 | return json.loads(oembed_resp.data) 68 | 69 | 70 | def save_image(out_dir, oembed_data): 71 | http = urllib3.PoolManager() 72 | 73 | img_url = oembed_data["url"] 74 | filename = hyperlink.URL.from_text(img_url).path[-1] 75 | 76 | # See https://stackoverflow.com/q/17285464/1558022 77 | resp = http.request("GET", img_url, preload_content=False) 78 | 79 | with (out_dir / filename).open("wb") as out: 80 | while True: 81 | data = resp.read(1024) 82 | if not data: 83 | break 84 | out.write(data) 85 | 86 | resp.release_conn() 87 | 88 | 89 | def get_page_state(canonical_url): 90 | http = urllib3.PoolManager() 91 | 92 | resp = http.request("GET", canonical_url, headers={"User-Agent": "urllib3"}) 93 | assert resp.status == 200, resp.status 94 | 95 | # We're looking for a line that goes 96 | # 97 | # window.__INITIAL_STATE__ = JSON.parse(…); 98 | # 99 | # which contains a heap of useful data.
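    # (Illustrative shape only -- the real blob is much bigger:
    #     window.__INITIAL_STATE__ = JSON.parse("{\"@@entities\": ...}");
    # hence the double json.loads below: once for the JS string literal,
    # and once for the JSON document inside it.)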
100 | soup = bs4.BeautifulSoup(resp.data, "html.parser") 101 | js_lines = soup.find("body").find("script").text.splitlines() 102 | 103 | initial_state = next( 104 | line.strip() 105 | for line in js_lines 106 | if line.strip().startswith("window.__INITIAL_STATE__") 107 | ) 108 | json_string = initial_state[len('window.__INITIAL_STATE__ = JSON.parse('):-len(');')] 109 | 110 | return json.loads(json.loads(json_string)) 111 | 112 | 113 | if __name__ == "__main__": 114 | try: 115 | url = sys.argv[1] 116 | except IndexError: 117 | sys.exit(f"Usage: {__file__} <url>") 118 | 119 | canonical_url = get_canonical_url(url) 120 | 121 | oembed_data = get_oembed_data(canonical_url) 122 | 123 | backup_dir = get_backup_dir(canonical_url, oembed_data) 124 | backup_dir.parent.mkdir(exist_ok=True) 125 | 126 | page_state = get_page_state(canonical_url) 127 | 128 | if backup_dir.exists(): 129 | print("Already saved!") 130 | sys.exit(0) 131 | 132 | extended_deviation = page_state["@@entities"]["deviationExtended"] 133 | assert len(extended_deviation) == 1 134 | extended_deviation = list(extended_deviation.values())[0] 135 | del extended_deviation["relatedStreams"] 136 | 137 | deviation_info = { 138 | "url": url, 139 | "canonical_url": canonical_url, 140 | "saved_at": dt.datetime.now().isoformat(), 141 | "extended_deviation": extended_deviation, 142 | "oembed_data": oembed_data 143 | } 144 | 145 | json_string = json.dumps(deviation_info, indent=2, sort_keys=True) 146 | 147 | tmp_dir = pathlib.Path(tempfile.mkdtemp()) 148 | (tmp_dir / "info.json").write_text(json_string) 149 | 150 | save_image(tmp_dir, oembed_data) 151 | 152 | tmp_dir.rename(backup_dir) 153 | print(backup_dir) 154 | -------------------------------------------------------------------------------- /services/backblaze/add_backblaze_exclusions.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ 3 | Script for adding one-off folder exclusions to BackBlaze. 4 | 5 | Call it by passing a list of paths you want to exclude as command-line 6 | arguments, e.g. 7 | 8 | python add_backblaze_exclusions.py /path/to/exclude/1 /path/to/exclude/2 9 | 10 | It saves a copy of your bzinfo.xml backup rules before editing. 11 | 12 | macOS only. 13 | 14 | """ 15 | 16 | 17 | import datetime 18 | import os 19 | import pathlib 20 | import shutil 21 | import subprocess 22 | import sys 23 | from typing import Iterator, List 24 | 25 | from lxml import etree 26 | 27 | 28 | def get_dirs_to_exclude(argv: List[str]) -> Iterator[pathlib.Path]: 29 | """ 30 | Given a list of command-line arguments (e.g. from sys.argv), get 31 | a list of directory paths to exclude. 32 | """ 33 | for dirname in argv: 34 | path = pathlib.Path(dirname).resolve() 35 | 36 | if not os.path.isdir(path): 37 | print(f"Skipping {dirname}; no such directory", file=sys.stderr) 38 | continue 39 | 40 | yield path 41 | 42 | 43 | def add_exclusion(*, root: etree.ElementTree, exclude_dir: pathlib.Path): 44 | """ 45 | Given the parsed XML from bzinfo.xml and the path to a directory, 46 | add that directory to the list of exclusions. 47 | """ 48 | # The filter inside this XML file is something of the form 49 | # 50 | # <do_backup> 51 | # <bzdirfilter dir="/path/to/exclude" whichfiles="none" /> 52 | # ... 53 | # </do_backup> 54 | # 55 | # so we want to find this do_backup tag, then add the bzdirfilter elements.
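    # (xpath() returns a list of matching elements; a healthy bzinfo.xml
    # should contain exactly one <do_backup> block, which we check below.)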
56 | do_backup_elements = root.xpath(".//do_backup") 57 | 58 | if len(do_backup_elements) != 1: 59 | raise ValueError("Did not find exactly one <do_backup> element in bzinfo.xml") 60 | 61 | do_backup = do_backup_elements[0] 62 | 63 | # If this directory has already been excluded, we can skip adding it again. 64 | # Note: directory names are case insensitive. 65 | already_excluded = { 66 | dirname.lower() 67 | for dirname in do_backup.xpath('./bzdirfilter[@whichfiles="none"]/@dir') 68 | } 69 | 70 | if str(exclude_dir).lower() in already_excluded: 71 | print(f"{exclude_dir} is already excluded in bzinfo.xml") 72 | return 73 | 74 | # TODO: Look for the case where a parent is excluded, e.g. if /a/b/c is 75 | # already excluded, we can safely skip adding an exclusion for /a/b/c/d/e. 76 | 77 | # Create the new exclusion tag. 78 | dirfilter = etree.SubElement(do_backup, "bzdirfilter") 79 | dirfilter.set("dir", str(exclude_dir).lower()) 80 | dirfilter.set("whichfiles", "none") 81 | 82 | # Sort the list of exclusions. This isn't strictly necessary, but makes 83 | # the file a little easier to read and work with. 84 | do_backup[:] = sorted( 85 | do_backup.xpath("./bzdirfilter"), key=lambda f: f.attrib["dir"] 86 | ) 87 | 88 | 89 | def save_backup_copy(bzinfo_path: str) -> str: 90 | """ 91 | Save a backup copy of the bzinfo.xml file before making edits. 92 | """ 93 | today = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S") 94 | backup_path = f"bzinfo.{today}.xml" 95 | 96 | shutil.copyfile(bzinfo_path, backup_path) 97 | return backup_path 98 | 99 | 100 | def restart_backblaze(): 101 | """ 102 | Restart all the BackBlaze processes. 103 | """ 104 | for cmd in [ 105 | ["sudo", "killall", "bzfilelist"], 106 | ["sudo", "killall", "bzserv"], 107 | ["sudo", "killall", "bztransmit"], 108 | ["killall", "bzbmenu"], 109 | # The exclusion list doesn't get reloaded in System Preferences 110 | # when the process restarts; we have to quit and reopen SysPrefs.
111 | ["killall", "System Preferences"], 112 | ["open", "-a", "BackBlaze.app"], 113 | ]: 114 | subprocess.call(cmd, stderr=subprocess.DEVNULL) 115 | 116 | 117 | if __name__ == "__main__": 118 | dirs_to_exclude = get_dirs_to_exclude(sys.argv[1:]) 119 | 120 | bzinfo_path = "/Volumes/Macintosh HD/Library/Backblaze.bzpkg/bzdata/bzinfo.xml" 121 | 122 | backup_path = save_backup_copy(bzinfo_path) 123 | print(f"*** Saved backup copy of bzinfo.xml to {backup_path}") 124 | 125 | root = etree.parse(bzinfo_path) 126 | 127 | for exclude_dir in dirs_to_exclude: 128 | add_exclusion(root=root, exclude_dir=exclude_dir) 129 | 130 | print("*** Writing new exclusions to bzinfo.xml") 131 | with open(bzinfo_path, "wb") as outfile: 132 | root.write(outfile, pretty_print=True, xml_declaration=True) 133 | 134 | print("*** Restarting BackBlaze") 135 | restart_backblaze() 136 | -------------------------------------------------------------------------------- /adventofcode/2018/day9.rb: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env ruby 2 | 3 | require "test/unit" 4 | 5 | require_relative "./helpers" 6 | 7 | 8 | INPUT_RE = /(?\d+) players; last marble is worth (?\d+) points/ 9 | 10 | 11 | def parse_input(s) 12 | m = INPUT_RE.match(s) 13 | { 14 | "players" => m["players"].to_i, 15 | "last_marble" => m["last_marble"].to_i 16 | } 17 | end 18 | 19 | 20 | class Node 21 | attr_accessor :value, :clockwise, :counter_clockwise 22 | def initialize(value, clockwise: nil, counter_clockwise: nil) 23 | @value = value 24 | @clockwise = clockwise 25 | @counter_clockwise = counter_clockwise 26 | end 27 | 28 | def ==(other) 29 | self.value == other.value 30 | end 31 | 32 | def to_s 33 | "Node(#{@value}, clockwise = #{@clockwise.value}, counter_clockwise = #{@counter_clockwise.value})" 34 | end 35 | end 36 | 37 | 38 | class MarbleCircle 39 | attr_accessor :current 40 | def initialize 41 | @current = Node.new(0) 42 | @current.clockwise = @current 43 | @current.counter_clockwise = @current 44 | end 45 | 46 | def play_marble(value) 47 | if value % 23 == 0 48 | # We step back 7 marbles counter-clockwise, then remove it from the circle. 49 | (1..7).map { |_| 50 | @current = @current.counter_clockwise 51 | } 52 | 53 | # We now have a setup like follows 54 | # 55 | # A -> c-wise cc-wise <- B 56 | # cc-wise <- (about to be removed) -> c-wise 57 | # 58 | about_to_be_removed = @current 59 | marble_A = @current.counter_clockwise 60 | marble_B = @current.clockwise 61 | 62 | marble_A.clockwise = marble_B 63 | marble_B.counter_clockwise = marble_A 64 | 65 | @current = @current.clockwise 66 | [value, about_to_be_removed.value] 67 | else 68 | # The node places the new marble at a point between 1 and 2 marbles clockwise 69 | # of the current marble -- so advance one point clockwise first. 
70 | @current = @current.clockwise 71 | 72 | # The existing setup is as follows: 73 | # 74 | # @current -> c-wise cc-wise <- A 75 | # cc-wise <- (about to be inserted) -> c-wise 76 | marble_A = @current.clockwise 77 | new_marble = Node.new( 78 | value, 79 | clockwise: marble_A, 80 | counter_clockwise: @current 81 | ) 82 | 83 | @current.clockwise = new_marble 84 | marble_A.counter_clockwise = new_marble 85 | 86 | @current = new_marble 87 | 88 | nil 89 | end 90 | end 91 | 92 | def to_s 93 | values = [] 94 | looking_at = @current 95 | while true 96 | looking_at = looking_at.clockwise 97 | values << looking_at.value 98 | if looking_at == @current 99 | break 100 | end 101 | end 102 | 103 | while values[0] != 0 104 | values << values.delete_at(0) 105 | end 106 | 107 | rv = "" 108 | values.map { |v| 109 | if v == @current.value 110 | rv += "(#{v})" 111 | else 112 | rv += " #{v} " 113 | end 114 | } 115 | rv 116 | end 117 | end 118 | 119 | 120 | def play_game(players: 1, last_marble: 1) 121 | circle = MarbleCircle.new 122 | 123 | scores = (0...players).map { |p| [p, Array.new] }.to_h 124 | 125 | (1..last_marble).each { |m| 126 | score = circle.play_marble(m) 127 | if !score.nil? 128 | scores[m % players] += score 129 | end 130 | } 131 | 132 | scores.map { |player, values| [player, values.sum] }.to_h 133 | end 134 | 135 | 136 | def get_high_score(input_data) 137 | scores = play_game( 138 | players: input_data["players"], 139 | last_marble: input_data["last_marble"] 140 | ) 141 | scores.values.max 142 | end 143 | 144 | 145 | class TestDay9 < Test::Unit::TestCase 146 | def test_examples 147 | assert_equal get_high_score(parse_input("9 players; last marble is worth 25 points")), 32 148 | assert_equal get_high_score(parse_input("10 players; last marble is worth 1618 points")), 8317 149 | assert_equal get_high_score(parse_input("13 players; last marble is worth 7999 points")), 146373 150 | assert_equal get_high_score(parse_input("17 players; last marble is worth 1104 points")), 2764 151 | assert_equal get_high_score(parse_input("21 players; last marble is worth 6111 points")), 54718 152 | assert_equal get_high_score(parse_input("30 players; last marble is worth 5807 points")), 37305 153 | end 154 | end 155 | 156 | 157 | if __FILE__ == $0 158 | input = parse_input(File.read("9.txt")) 159 | 160 | answer1 = get_high_score(input) 161 | 162 | input["last_marble"] *= 100 163 | answer2 = get_high_score(input) 164 | 165 | solution("9", "Marble Mania", answer1, answer2) 166 | end 167 | -------------------------------------------------------------------------------- /adventofcode/2018/day7.rb: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env ruby 2 | 3 | require "set" 4 | require "test/unit" 5 | 6 | require_relative "./helpers" 7 | 8 | 9 | REQUIREMENT_RE = /Step (?<blocker>[A-Z]) must be finished before step (?<step>[A-Z]) can begin\./ 10 | 11 | 12 | def parse_input(input) 13 | dependencies = Hash.new() 14 | 15 | input 16 | .strip 17 | .split("\n") 18 | .map { |line| REQUIREMENT_RE.match(line) } 19 | .each { |match| 20 | step = match["step"] 21 | blocker = match["blocker"] 22 | 23 | if dependencies.key? step 24 | dependencies[step].add(blocker) 25 | else 26 | dependencies[step] = Set[blocker] 27 | end 28 | 29 | if !dependencies.key? 
blocker 30 | dependencies[blocker] = Set.new 31 | end 32 | } 33 | 34 | dependencies 35 | end 36 | 37 | 38 | def find_correct_order(dependencies) 39 | remaining = Hash.new 40 | dependencies.each { |k, v| remaining[k] = v.clone } 41 | procedure = "" 42 | 43 | while remaining.size > 0 44 | # Find all the steps which are "available" -- that is, nothing in the 45 | # remaining tree depends on them. 46 | available_steps = remaining 47 | .select { |step, blockers| blockers.size == 0 } 48 | .map { |step, blockers| step } 49 | .sort 50 | 51 | # Now we take the first available step, add it to the procedure, and 52 | # remove it from any remaining steps that contain it. 53 | next_step = available_steps[0] 54 | procedure += next_step 55 | 56 | remaining.delete(next_step) 57 | 58 | remaining 59 | .each { |step, blockers| blockers.delete? next_step } 60 | end 61 | 62 | procedure 63 | end 64 | 65 | 66 | def length_of_step(c, base_task_time) 67 | c.ord - ("A".ord - (base_task_time + 1)) 68 | end 69 | 70 | 71 | def find_parallel_runtime(dependencies, elf_count: 0, base_task_time: 0) 72 | remaining = Hash.new 73 | dependencies.each { |k, v| remaining[k] = v.clone } 74 | elves = (1..elf_count) 75 | .map { |i| {"id" => i, "current_step" => nil, "remaining" => nil} } 76 | 77 | t = 0 78 | 79 | # We continue while there are still steps to start, or there are elves 80 | # who are doing some current work. 81 | while remaining.size > 0 || elves.select { |e| !e["current_step"].nil? }.size > 0 82 | 83 | # Go through and take one second off each task the elves are 84 | # currently working on. If one of them has finished a task, we 85 | # can unassign them and mark that task as unblocked. 86 | elves 87 | .select { |e| !e["current_step"].nil? } 88 | .each { |e| 89 | e["remaining"] -= 1 90 | if e["remaining"] == 0 91 | remaining.each { |step, blockers| blockers.delete? e["current_step"] } 92 | e["current_step"] = nil 93 | e["remaining"] = nil 94 | end 95 | } 96 | 97 | # Work through the elves that don't have anything to do right now, 98 | # and assign them a task. 99 | while ( 100 | elves.select { |e| e["current_step"].nil? }.size > 0 && 101 | remaining.select { |step, blockers| blockers.size == 0 }.size > 0) 102 | elf = elves 103 | .select { |e| e["current_step"].nil? } 104 | .first 105 | step = remaining 106 | .select { |step, blockers| blockers.size == 0 } 107 | .map { |step, blockers| step } 108 | .sort 109 | .first 110 | 111 | # length_of_step counts step "A" as (base_task_time + 1) seconds, 112 | # step "B" as one more, and so on. 113 | elf["current_step"] = step 114 | elf["remaining"] = length_of_step(step, base_task_time) 115 | 116 | # And now we remove the task from "remaining" so nobody else 117 | # can claim it. 118 | remaining.delete(step) 119 | end 120 | 121 | t += 1 122 | end 123 | 124 | # The first pass through the loop only assigns work -- no time passes -- so t ends up one too high. 125 | t - 1 126 | end 127 | 128 | 129 | 130 | class TestDay7 < Test::Unit::TestCase 131 | def test_examples 132 | dependencies = parse_input(""" 133 | Step C must be finished before step A can begin. 134 | Step C must be finished before step F can begin. 135 | Step A must be finished before step B can begin. 136 | Step A must be finished before step D can begin. 137 | Step B must be finished before step E can begin. 138 | Step D must be finished before step E can begin. 139 | Step F must be finished before step E can begin.
140 | """) 141 | 142 | assert_equal find_correct_order(dependencies), "CABDFE" 143 | 144 | assert_equal length_of_step("A", 60), 61 145 | assert_equal length_of_step("Z", 60), 86 146 | 147 | assert_equal find_parallel_runtime(dependencies, elf_count: 2), 15 148 | end 149 | end 150 | 151 | 152 | if __FILE__ == $0 153 | input = File.read("7.txt") 154 | dependencies = parse_input(input) 155 | 156 | answer1 = find_correct_order(dependencies) 157 | answer2 = find_parallel_runtime(dependencies, elf_count: 4, base_task_time: 60) 158 | 159 | solution("7", "The Sum of Its Parts", answer1, answer2) 160 | end 161 | -------------------------------------------------------------------------------- /adventofcode/2018/day6.rb: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env ruby 2 | 3 | require "test/unit" 4 | 5 | require_relative "./helpers" 6 | 7 | 8 | def manhattan_distance(a, b) 9 | (a[0] - b[0]).abs + (a[1] - b[1]).abs 10 | end 11 | 12 | 13 | def find_largest_finite_area(coords) 14 | # Anything outside the bounding box is part of an infinite area, and so 15 | # we can restrict our attention to this box. 16 | min_x = coords.map { |c| c[0] }.min 17 | max_x = coords.map { |c| c[0] }.max 18 | 19 | min_y = coords.map { |c| c[1] }.min 20 | max_y = coords.map { |c| c[1] }.max 21 | 22 | closest_coords = Hash.new() 23 | 24 | # First we walk the entire board, and at each point we compute the Manhattan 25 | # distance between it and the named coordinates. 26 | # 27 | # We store the IDs in a hash so we can tell them apart. 28 | # 29 | (min_x..max_x).each { |x| 30 | (min_y..max_y).each { |y| 31 | current_coord = [x, y] 32 | 33 | distances_to_named_coords = coords.each_with_index 34 | .map { |coord, index| [index, manhattan_distance(current_coord, coord)] } 35 | .to_h 36 | 37 | closest_named, distance = distances_to_named_coords.min_by { |k, v| v } 38 | 39 | # If this is a uniquely close point, we count it -- if it's equidistant 40 | # to two or more named coordinates, we don't. 41 | if distances_to_named_coords.values.count(distance) == 1 42 | closest_coords[current_coord] = closest_named 43 | else 44 | closest_coords[current_coord] = nil 45 | end 46 | } 47 | } 48 | 49 | # Now we find which of the named coordinates has the most "closest" points. 50 | counter = Hash.new(0) 51 | closest_coords.each { |_, label| counter[label] += 1 } 52 | 53 | # 'nil' is used for all the points that are equidistant to two or more 54 | # named coordinates, but we don't care about those. 55 | if counter.key? nil 56 | counter.delete(nil) 57 | end 58 | 59 | # If an area reaches the edge of the board, it must be infinite, so we 60 | # discard its named coordinate. 61 | (min_x..max_x).each { |x| 62 | [min_y, max_y].each { |y| 63 | 64 | # If we've already eliminated all but one area, that must be the only 65 | # finite area. (Assuming the puzzle is soluble!) 66 | if counter.size == 1 67 | break 68 | end 69 | 70 | if counter.key? closest_coords[[x, y]] 71 | counter.delete(closest_coords[[x, y]]) 72 | end 73 | } 74 | } 75 | 76 | (min_y..max_y).each { |y| 77 | [min_x, max_x].each { |x| 78 | if counter.size == 1 79 | break 80 | end 81 | 82 | if counter.key? closest_coords[[x, y]] 83 | counter.delete(closest_coords[[x, y]]) 84 | end 85 | } 86 | } 87 | 88 | counter.values.max 89 | end 90 | 91 | 92 | def count_coords_in_safe_region(coords, safe_distance) 93 | # We assume the safe region doesn't extend beyond the bounding box of the 94 | # named coordinates, so we only need to examine points inside it.
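  # For example, with the six example coordinates from the puzzle, the point
  # (4, 3) has a total distance of 5 + 6 + 4 + 2 + 3 + 10 = 30, which is under
  # the example threshold of 32, so it sits inside the safe region.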
95 | min_x = coords.map { |c| c[0] }.min 96 | max_x = coords.map { |c| c[0] }.max 97 | 98 | min_y = coords.map { |c| c[1] }.min 99 | max_y = coords.map { |c| c[1] }.max 100 | 101 | safe_count = 0 102 | 103 | # We walk the entire board, and at each point we sum the Manhattan 104 | # distances between it and all of the named coordinates. 105 | # 106 | # Points whose total distance is under the safe threshold count. 107 | # 108 | (min_x..max_x).each { |x| 109 | (min_y..max_y).each { |y| 110 | current_coord = [x, y] 111 | 112 | distances_to_named_coords = coords.each_with_index 113 | .map { |coord, index| [index, manhattan_distance(current_coord, coord)] } 114 | .to_h 115 | 116 | if distances_to_named_coords.values.sum < safe_distance 117 | safe_count += 1 118 | end 119 | } 120 | } 121 | 122 | safe_count 123 | end 124 | 125 | 126 | class TestDay6 < Test::Unit::TestCase 127 | def test_examples 128 | assert_equal find_largest_finite_area([ 129 | [1, 1], 130 | [1, 6], 131 | [8, 3], 132 | [3, 4], 133 | [5, 5], 134 | [8, 9], 135 | ]), 17 136 | 137 | assert_equal count_coords_in_safe_region([ 138 | [1, 1], 139 | [1, 6], 140 | [8, 3], 141 | [3, 4], 142 | [5, 5], 143 | [8, 9], 144 | ], 32), 16 145 | end 146 | 147 | def test_ignores_infinite_areas 148 | # On this grid, only the area "c" counts, because it's the only finite 149 | # area. 150 | # 151 | # aaaAaaa..bbB 152 | # daaaaa.cc.bb 153 | # ddaaa.cccc.b 154 | # ddd..cccCcc. 155 | # ddddd.cccc.e 156 | # dddddd.cc.ee 157 | # ddDdddd..eeE 158 | # 159 | assert_equal find_largest_finite_area([ 160 | [1, 1], # A 161 | [9, 1], # B 162 | [6, 4], # C 163 | [1, 7], # D 164 | [9, 7] # E 165 | ]), 18 166 | end 167 | end 168 | 169 | 170 | if __FILE__ == $0 171 | input = File.read("6.txt").split("\n") 172 | .map { |line| line.split(",").map { |s| s.to_i } } 173 | 174 | answer1 = find_largest_finite_area(input) 175 | answer2 = count_coords_in_safe_region(input, 10000) 176 | 177 | solution("6", "Chronal Coordinates", answer1, answer2) 178 | end 179 | -------------------------------------------------------------------------------- /adventofcode/2018/7.txt: -------------------------------------------------------------------------------- 1 | Step I must be finished before step Q can begin. 2 | Step B must be finished before step O can begin. 3 | Step J must be finished before step M can begin. 4 | Step W must be finished before step Y can begin. 5 | Step U must be finished before step X can begin. 6 | Step T must be finished before step Q can begin. 7 | Step G must be finished before step M can begin. 8 | Step K must be finished before step C can begin. 9 | Step F must be finished before step Z can begin. 10 | Step D must be finished before step A can begin. 11 | Step N must be finished before step Y can begin. 12 | Step Y must be finished before step Q can begin. 13 | Step Q must be finished before step Z can begin. 14 | Step V must be finished before step E can begin. 15 | Step A must be finished before step X can begin. 16 | Step E must be finished before step C can begin. 17 | Step O must be finished before step R can begin. 18 | Step P must be finished before step L can begin. 19 | Step H must be finished before step R can begin. 20 | Step M must be finished before step R can begin. 21 | Step C must be finished before step Z can begin. 22 | Step R must be finished before step L can begin. 23 | Step L must be finished before step S can begin. 24 | Step S must be finished before step X can begin. 25 | Step Z must be finished before step X can begin.
26 | Step T must be finished before step O can begin. 27 | Step D must be finished before step Z can begin. 28 | Step P must be finished before step R can begin. 29 | Step M must be finished before step Z can begin. 30 | Step L must be finished before step Z can begin. 31 | Step W must be finished before step N can begin. 32 | Step Q must be finished before step R can begin. 33 | Step P must be finished before step C can begin. 34 | Step U must be finished before step O can begin. 35 | Step F must be finished before step O can begin. 36 | Step K must be finished before step X can begin. 37 | Step G must be finished before step K can begin. 38 | Step M must be finished before step C can begin. 39 | Step Y must be finished before step Z can begin. 40 | Step A must be finished before step O can begin. 41 | Step D must be finished before step P can begin. 42 | Step K must be finished before step S can begin. 43 | Step I must be finished before step E can begin. 44 | Step G must be finished before step F can begin. 45 | Step S must be finished before step Z can begin. 46 | Step N must be finished before step V can begin. 47 | Step F must be finished before step D can begin. 48 | Step A must be finished before step Z can begin. 49 | Step F must be finished before step X can begin. 50 | Step T must be finished before step Y can begin. 51 | Step W must be finished before step H can begin. 52 | Step D must be finished before step H can begin. 53 | Step W must be finished before step G can begin. 54 | Step J must be finished before step X can begin. 55 | Step T must be finished before step X can begin. 56 | Step U must be finished before step R can begin. 57 | Step O must be finished before step P can begin. 58 | Step L must be finished before step X can begin. 59 | Step I must be finished before step B can begin. 60 | Step M must be finished before step L can begin. 61 | Step C must be finished before step R can begin. 62 | Step R must be finished before step X can begin. 63 | Step F must be finished before step N can begin. 64 | Step V must be finished before step H can begin. 65 | Step K must be finished before step A can begin. 66 | Step W must be finished before step O can begin. 67 | Step U must be finished before step Q can begin. 68 | Step O must be finished before step C can begin. 69 | Step K must be finished before step V can begin. 70 | Step R must be finished before step S can begin. 71 | Step E must be finished before step S can begin. 72 | Step J must be finished before step A can begin. 73 | Step E must be finished before step X can begin. 74 | Step K must be finished before step Y can begin. 75 | Step Y must be finished before step X can begin. 76 | Step P must be finished before step Z can begin. 77 | Step W must be finished before step X can begin. 78 | Step Y must be finished before step A can begin. 79 | Step V must be finished before step X can begin. 80 | Step O must be finished before step M can begin. 81 | Step I must be finished before step J can begin. 82 | Step W must be finished before step L can begin. 83 | Step I must be finished before step G can begin. 84 | Step D must be finished before step O can begin. 85 | Step D must be finished before step N can begin. 86 | Step M must be finished before step X can begin. 87 | Step I must be finished before step R can begin. 88 | Step Y must be finished before step M can begin. 89 | Step F must be finished before step M can begin. 90 | Step U must be finished before step M can begin. 
91 | Step Y must be finished before step H can begin. 92 | Step K must be finished before step D can begin. 93 | Step N must be finished before step O can begin. 94 | Step H must be finished before step S can begin. 95 | Step G must be finished before step L can begin. 96 | Step T must be finished before step D can begin. 97 | Step J must be finished before step N can begin. 98 | Step K must be finished before step M can begin. 99 | Step K must be finished before step P can begin. 100 | Step E must be finished before step R can begin. 101 | Step N must be finished before step H can begin. 102 | -------------------------------------------------------------------------------- /aws/templates/sqs_dashboard.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | SQS queues 5 | 6 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | Amazon-Simple-Queue-Service-SQS 59 | 60 | 61 | SQS queues 62 | 63 | 64 | Amazon-Simple-Queue-Service-SQS 65 | 66 | 67 |
68 | 69 | 70 | {% for queue_name, queue_data in display_results.items()|sort %} 71 | 72 | 73 | {{ queue_name }} 74 | {% if queue_data.dlq != None %} 75 | / dlq 76 | {% endif %} 77 | {{ queue_data.queue | intcomma }} / {{ queue_data.dlq | intcomma }} 78 | {% if not queue_data|is_empty %} 79 | 80 | 81 | 82 | {% endif %} 83 | 84 | {% endfor %} 85 |
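The template relies on two custom Jinja filters, `is_empty` and `intcomma`, plus a `display_results` mapping of queue names to message counts. A minimal rendering harness might look like this sketch -- the filter implementations, loader path, and sample data are assumptions, not code from this repo:

```python
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("aws/templates"))

# Hypothetical filter implementations -- the real ones live in whatever
# script renders this template, which isn't part of this listing.
env.filters["intcomma"] = lambda n: f"{n:,}" if n is not None else "-"
env.filters["is_empty"] = lambda q: not q.get("queue") and not q.get("dlq")

template = env.get_template("sqs_dashboard.html")

# display_results maps queue names to their counts, as the for-loop expects.
print(template.render(display_results={
    "my-queue": {"queue": 14203, "dlq": 3},
}))
```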
86 | 87 | 88 | 89 | -------------------------------------------------------------------------------- /aws/find_ecs_bottlenecks.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | """ 3 | Find the CPU/memory bottlenecks in an ECS cluster. 4 | 5 | This script will look for ECS clusters in your AWS account, ask you to pick 6 | one, then show you peak CPU/memory utilisation over the last 24 hours. 7 | It's a good way to identify apps that might be under-provisioned, and benefit 8 | from being given more resources. 9 | 10 | Python 3.6+. 11 | """ 12 | 13 | import datetime 14 | import os 15 | 16 | import boto3 17 | import inquirer 18 | import termcolor 19 | 20 | 21 | cloudwatch = boto3.client("cloudwatch") 22 | ecs = boto3.client("ecs") 23 | 24 | 25 | def _get_max_metric_value( 26 | *, metric_name, cluster_name, service_name, start_time, end_time 27 | ): 28 | """ 29 | Look up the maximum value of an ECS metric between the given start and end times. 30 | """ 31 | resp = cloudwatch.get_metric_statistics( 32 | Namespace="AWS/ECS", 33 | MetricName=metric_name, 34 | Dimensions=[ 35 | {"Name": "ClusterName", "Value": cluster_name}, 36 | {"Name": "ServiceName", "Value": service_name}, 37 | ], 38 | StartTime=start_time, 39 | EndTime=end_time, 40 | Period=3600, 41 | Statistics=["Maximum"], 42 | ) 43 | 44 | try: 45 | return max(dp["Maximum"] for dp in resp["Datapoints"]) 46 | except ValueError: 47 | return 0.0 48 | 49 | 50 | def get_max_cpu_utilisation(**kwargs): 51 | return _get_max_metric_value(metric_name="CPUUtilization", **kwargs) 52 | 53 | 54 | def get_max_memory_utilisation(**kwargs): 55 | return _get_max_metric_value(metric_name="MemoryUtilization", **kwargs) 56 | 57 | 58 | def list_services(cluster): 59 | """Generates the ARN of every service in an ECS cluster.""" 60 | paginator = ecs.get_paginator("list_services") 61 | 62 | for page in paginator.paginate(cluster=cluster): 63 | yield from page["serviceArns"] 64 | 65 | 66 | def list_clusters(): 67 | """Generates the ARN of every ECS cluster in an account.""" 68 | paginator = ecs.get_paginator("list_clusters") 69 | 70 | for page in paginator.paginate(): 71 | yield from page["clusterArns"] 72 | 73 | 74 | def choose_cluster(): 75 | """ 76 | Get a list of the ECS clusters running in this account, and choose 77 | one to inspect – possibly asking the user to choose from a list. 78 | """ 79 | all_clusters = list(list_clusters()) 80 | 81 | if len(all_clusters) == 0: 82 | raise RuntimeError("No ECS clusters found in account!") 83 | elif len(all_clusters) == 1: 84 | return all_clusters[0] 85 | else: 86 | # AWS cluster ARNs are of the form 87 | # 88 | # arn:aws:ecs:eu-west-1:{account_id}:cluster/{cluster_name} 89 | # 90 | # Although we'll use the full cluster ARN to list the services it 91 | # contains, we can ask the user to select a cluster based on the 92 | # names alone. 93 | clusters = { 94 | cluster_arn.split("/")[-1]: cluster_arn for cluster_arn in all_clusters 95 | } 96 | 97 | question = inquirer.List( 98 | "cluster_name", 99 | message="Which cluster do you want to inspect?", 100 | choices=sorted(clusters.keys()), 101 | ) 102 | 103 | answers = inquirer.prompt([question]) 104 | cluster_name = answers["cluster_name"] 105 | return clusters[cluster_name] 106 | 107 | 108 | def draw_bar_chart(data): 109 | # A lot of our services have a common prefix in the name, e.g. everything 110 | # in the storage-prod cluster is called storage-prod_register, 111 | # storage-prod_replicator, and so on.
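    # Stripping that prefix leaves just "register", "replicator", and so on.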
112 | # 113 | # While helpful for disambiguating in absolute terms, it's visual noise here. 114 | # Remove any common prefix. 115 | common_label_prefix = os.path.commonprefix([label for label, _ in data]) 116 | data = [ 117 | (label[len(common_label_prefix) :].lstrip("-").lstrip("_"), value) 118 | for label, value in data 119 | ] 120 | 121 | # The values are percentages so should max out at 100, but somewhere 122 | # in CloudWatch they occasionally come out as slightly over 100. 123 | # Treat the top value as 110 to allow a bit of slop. 124 | max_value = 110.0 125 | 126 | increment = max_value / 25 127 | 128 | longest_label_length = max(len(label) for label, _ in data) 129 | 130 | for label, count in sorted(data): 131 | 132 | # The Unicode block elements come in chunks of 8, so we work out how 133 | # many fractions of 8 we need. 134 | # https://en.wikipedia.org/wiki/Block_Elements 135 | bar_chunks, remainder = divmod(int(count * 8 / increment), 8) 136 | 137 | # First draw the full width chunks 138 | bar = "█" * bar_chunks 139 | 140 | # Then add the fractional part. The Unicode code points for 141 | # block elements are (8/8), (7/8), (6/8), ... , so we need to 142 | # work backwards. 143 | if remainder > 0: 144 | bar += chr(ord("█") + (8 - remainder)) 145 | 146 | # If the bar is empty, add a left one-eighth block 147 | bar = bar or "▏" 148 | 149 | line = f"{label.ljust(longest_label_length)} ▏ {count:#5.1f}% {bar}" 150 | 151 | if count > 95: 152 | print(termcolor.colored(line, "red")) 153 | else: 154 | print(line) 155 | 156 | 157 | if __name__ == "__main__": 158 | cluster_arn = choose_cluster() 159 | 160 | cpu_stats = [] 161 | memory_stats = [] 162 | 163 | cluster_name = cluster_arn.split("/")[-1] 164 | 165 | now = datetime.datetime.now() 166 | start_time = now - datetime.timedelta(hours=24) 167 | 168 | for service_arn in list_services(cluster=cluster_name): 169 | # AWS service ARNs are of the form 170 | # 171 | # arn:aws:ecs:eu-west-1:{account_id}:service/{service_name} 172 | # 173 | # We only want the service name for the CloudWatch metric.
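        # e.g. an ARN ending in "service/api" gives a service_name of "api".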
174 | service_name = service_arn.split("/")[-1] 175 | 176 | max_cpu = get_max_cpu_utilisation( 177 | cluster_name=cluster_name, 178 | service_name=service_name, 179 | start_time=start_time, 180 | end_time=now, 181 | ) 182 | cpu_stats.append((service_name, max_cpu)) 183 | 184 | max_memory = get_max_memory_utilisation( 185 | cluster_name=cluster_name, 186 | service_name=service_name, 187 | start_time=start_time, 188 | end_time=now, 189 | ) 190 | memory_stats.append((service_name, max_memory)) 191 | 192 | print("=== CPU stats ===") 193 | draw_bar_chart(cpu_stats) 194 | 195 | print("") 196 | 197 | print("=== Memory stats ===") 198 | draw_bar_chart(memory_stats) 199 | -------------------------------------------------------------------------------- /adventofcode/2018/2.txt: -------------------------------------------------------------------------------- 1 | bpacnmelhhzpygfsjoxtvkwuor 2 | biacnmelnizqygfsjoctvkwudr 3 | bpaccmllhizyygfsjoxtvkwudr 4 | rpacnmelhizqsufsjoxtvkwudr 5 | bfacnmelhizqygfsjoxtvwwudp 6 | bpacnmelhizqynfsjodtvkyudr 7 | bpafnmelhizqpgfsjjxtvkwudr 8 | bpackmelhizcygfsjoxtvkwudo 9 | bmacnmilhizqygfsjoltvkwudr 10 | bpafnmelhizuygfsjoxtvkwsdr 11 | boacnmylhizqygfsjoxtvxwudr 12 | bpbcjmelhizqygfsjoxtgkwudr 13 | bpacnmglhizqygfsjixtlkwudr 14 | bpacnmclhizqygfsjoxtvkwtqr 15 | bpacnmelhczqygtsjoptvkwudr 16 | bpacnmelhizqywfsaoxtvkbudr 17 | apacnmelhizqygcsjoxtvkwhdr 18 | bpacnmelrizqygfsbpxtvkwudr 19 | tpkcnmelpizqygfsjoxtvkwudr 20 | bpacnmelhizqlgfsjobtmkwudr 21 | npacnmelhizqygffjoxtvkwudf 22 | bpacnmeehqzqygqsjoxtvkwudr 23 | bpecnmelhizqigfsjvxtvkwudr 24 | bpacnmelhizqysfsjoxtvkdfdr 25 | bpacnfelhkzqygfsjoxtvkwfdr 26 | bpacnbelvizqygfsjoxthkwudr 27 | bpacnoelhizqygfejoxtvkwudn 28 | bpacnmelhizqygfzpkxtvkwudr 29 | bpahnmelhizqyufsjoxmvkwudr 30 | bpacnmelhizqygfsnoxtvkwmmr 31 | bpacnmelhizqygfsjoatvkludf 32 | bpacnmylhizqygfsjlxtvksudr 33 | bpacnmekhpzqygysjoxtvkwudr 34 | bpacnselhizqogfswoxtvkwudr 35 | bpacnmelhizqprfsjoxwvkwudr 36 | bpatnmelhinqygfsjoctvkwudr 37 | bpacnqelhqzqygfsxoxtvkwudr 38 | bpabnmelhiyqygfsjoxtykwudr 39 | bpacnivlhizqygfsjoxtviwudr 40 | bpkcnmylhizqygfsjoxtvkwcdr 41 | bpafnmflhizqygtsjoxtvkwudr 42 | bpachmelhizqygfsjixtvkwudg 43 | bpacymelhizqygfsjoxtykwuar 44 | bpacnkelhizqdgfsjoxtskwudr 45 | bpacnmezhizqggbsjoxtvkwudr 46 | bpacnmqlhizqygrsjoxzvkwudr 47 | bpaczmelhizqyhfsjoxfvkwudr 48 | bdacnmelhyzqygusjoxtvkwudr 49 | bpacbmelhizqywfsjostvkwudr 50 | bpacnmelhihzygfstoxtvkwudr 51 | bpactmelhizqygfsjcxtvkwydr 52 | bkacnmethizqytfsjoxtvkwudr 53 | bpacnmalhizqydfskoxtvkwudr 54 | spacnmelbizqygfsjoxdvkwudr 55 | lpalnmelhizoygfsjoxtvkwudr 56 | bpacjmeghizqygfsjoxtviwudr 57 | bpacnmeqhizxygfsjoxgvkwudr 58 | bpacnmelhizqygosjoxtvkkuhr 59 | bpacnmelhiznbxfsjoxtvkwudr 60 | bgacnmelhizqygfsjbxivkwudr 61 | bpacnmelhizqygfjjowtvswudr 62 | bpacnmelhizqygfsjovtgkmudr 63 | bpacnmelcmzqygfspoxtvkwudr 64 | bpvcnmelhizqyvfcjoxtvkwudr 65 | bpacnmeahizqjgfsjoxtvkwukr 66 | bpacnoelwizqygfsjoxtvkaudr 67 | xpacnmelhizqygfsjoxdvkwedr 68 | mpacnmelqizqygfsjoxtvkwudx 69 | bppcnmelhizqygfsjfxtvkhudr 70 | bpacnmclhizqyhfsjaxtvkwudr 71 | opacsmelhizqygfsjmxtvkwudr 72 | bpafnmelhizqjgfsjoxtvkrudr 73 | bpdcnmilhizqygfsjoxtvkludr 74 | bpainmelhizqygfsjtntvkwudr 75 | bradnmelhizqygfsjextvkwudr 76 | bpacnmelhizqygfmsoxtvkwudg 77 | bpacneelhizqygvrjoxtvkwudr 78 | bpacnpelhizqygfsjoxyvkwudf 79 | bpacnmelhizqygfsqoqtvkwodr 80 | bpacnmelhizjyghsjoxcvkwudr 81 | bpacnmelmibqygfsjoxtvnwudr 82 | jpacnmelaizqygfwjoxtvkwudr 83 | zpachmelhizqygfsjsxtvkwudr 84 | bpacnmelfizqykfsjomtvkwudr 85 | 
bpacnmllwizqygfsjoxtvkwusr 86 | bpaynmelhizqygfsjoxtvowcdr 87 | jpacnmqlhizqygfsjoxtvknudr 88 | bpacxmelhizqyffsjoxtvkwugr 89 | apawnmelhizqygfsjtxtvkwudr 90 | mpacnmelhitqigfsjoxtvkwudr 91 | bpacnmelhhzqygfsjoxtvkyzdr 92 | gpacnmelhizqynfsjoxtvkwudm 93 | bnacnkelhizqygfsjoxtpkwudr 94 | bpacnmelfizqygfsumxtvkwudr 95 | bpacnmelhisqygfsjohtvowudr 96 | bpacnmelhimqygxsjoxtvkwudn 97 | bpscnmeliizqygfsjoxtvkwunr 98 | qpacnmelhizqycfsjoxtvkwndr 99 | bpacnmelhijqygfsjohtvkyudr 100 | bpacnmelhizqykfsjkxtvknudr 101 | bpacnqilhizqygfsjoxtvkoudr 102 | bpacnmelhizqzgmsjoxtvkwurr 103 | bpdcnmelhizqygfsjoutukwudr 104 | bpecnmeghizqygfsjoxgvkwudr 105 | bpicnmelhizqygfrjoxtvlwudr 106 | bpacnmelhizfygfsroxtvkwodr 107 | buacnmelhizqygjsjoxtvkvudr 108 | bpacnmelhixqykfsjoxtvrwudr 109 | bpacnmelhizqygvejcxtvkwudr 110 | bpacnmjlhizqylfsjoxtvkwuor 111 | qpacnmelhizqygfsjoxfdkwudr 112 | bpfcnmemhizqygfsjoxtvknudr 113 | bpacnmelhizqoffsjqxtvkwudr 114 | hpacnielhiqqygfsjoxtvkwudr 115 | gpacnmelhizqygfsewxtvkwudr 116 | bpacnmellizqylxsjoxtvkwudr 117 | bpacnmenhizqymfsjoxtvkmudr 118 | bpacnfelhizqygcsjoltvkwudr 119 | bpacnmelhqqqygfsjoxtvkuudr 120 | bplgnmelhiqqygfsjoxtvkwudr 121 | bpacnzelhizqygfgjoxtvnwudr 122 | bpacnmelhizqygfsjoktvknunr 123 | bpacnmdlhioqygfnjoxtvkwudr 124 | epacnmelwizqyjfsjoxtvkwudr 125 | bpacxmelhazfygfsjoxtvkwudr 126 | bpacnmejhezqygfsjoxtskwudr 127 | bpacnqelhihqyzfsjoxtvkwudr 128 | bpacnbelhizqyrfsjoxtvkmudr 129 | bpacnmelhizqygfsjoxtylwzdr 130 | bpacnmelwizqygfsjodtvkhudr 131 | bpacnnelhizqygfsjoxtwkwadr 132 | bpacimelhizqygfsnoxtvkwuor 133 | bpacnmelhizqyaasjoxtlkwudr 134 | bpacnmelhizqyeffjoxtvkwuds 135 | bpacnmenhizqygxscoxtvkwudr 136 | bpacnmelhidqygfsjowtskwudr 137 | bpacnmeliizqygfsjoxhvkwucr 138 | bpacimelhizqygfsjoxtvktuwr 139 | bpainmelhhzqygfsjzxtvkwudr 140 | bpacamelhizqygfsjogtvkwbdr 141 | bpccnmelgizqygfsjoxtykwudr 142 | bpacnmelhizwegfsjoxtvkwadr 143 | bpackmelhbzqygqsjoxtvkwudr 144 | bpacymeihizqyffsjoxtvkwudr 145 | bpacnielhczqygfsjoxtvkwudk 146 | bpacnmejhizqygffjoxjvkwudr 147 | ppacnmelhizqygfsjoxtigwudr 148 | bpjcnmolhizqygfsjoxtvkwndr 149 | bpacnmelcizqygrsjoxtakwudr 150 | cpawnmelhizqygfsjoxmvkwudr 151 | bwacnmelhizqygesjoxtakwudr 152 | bpacnmelhizqygfsjexsvkwddr 153 | bpaunmelhiuqygfsjoxtvkwtdr 154 | bpacnmellimqygfsjextvkwudr 155 | bpacnmerhizqygfsaoxvvkwudr 156 | bpacnmglhizqygfsjixtukwudr 157 | ppacnmelhizqygfsjoxtvkdudp 158 | bpacnmedhizqygukjoxtvkwudr 159 | bpccnmelhizqngfsjoxtvkwadr 160 | bgacnmeldizqygfscoxtvkwudr 161 | bpacngelhizsygfsjoxtvkwkdr 162 | bpacnpelhizqygfsjoxctkwudr 163 | bpacnmylhizqygfcjoxtvkwmdr 164 | npacnmelhizqygfsjoxtwkwuds 165 | bpaxnmelhizqydfsjoxyvkwudr 166 | bpacnhelhizjygfsjoxtvkmudr 167 | bpacnkelhczqygfnjoxtvkwudr 168 | bfacnmelhizrygfsjoxtvkwodr 169 | bpycnmelhizqygfofoxtvkwudr 170 | qpacpselhizqygfsjoxtvkwudr 171 | bpvcnmelhezqygfsjoxttkwudr 172 | bpacnmwlhizqygfijoxtmkwudr 173 | bsacnmelhikqygfsjoxttkwudr 174 | bpccnxelhizqyafsjoxtvkwudr 175 | bpacnmelhizqygfswhxtvewudr 176 | vpacnmzlhizqygfsvoxtvkwudr 177 | bpacnmelhihqygfsjoxtvkqurr 178 | bpacnmelhixqygazjoxtvkwudr 179 | bpavnmelhizqygfsjozpvkwudr 180 | bpacnmclhizuygfsjoxmvkwudr 181 | bpacnmelhizryufsjoxtkkwudr 182 | bpacnmelhtzqygfsjobtvkwufr 183 | bpacnmelhizqmlfsjoxtvkwudq 184 | bpaaneelhizqygfsjlxtvkwudr 185 | bpacnmelhxzqygfsjoxthkwuhr 186 | bpacnmeshizqygfcjoxtvkwude 187 | bpacnzqlhizqygfsxoxtvkwudr 188 | bgaanmelhizqycfsjoxtvkwudr 189 | bpacnmexhizqygfsroxtvkwudn 190 | bpmmnmelhizqygfajoxtvkwudr 191 | bpacnmelhizqylfsjoxtckwhdr 192 | bpicnmelhizqyrfsjoxtvkwudi 193 | 
zpacnmelhizvycfsjoxtvkwudr 194 | bpamnmkllizqygfsjoxtvkwudr 195 | bpacnmelhrzqyrfsjoxgvkwudr 196 | bpadnmelhczqygfsjoxtlkwudr 197 | bpacrmelhizqygrsjoxtvkiudr 198 | lpacnmelhizqygfsjoxtgkwxdr 199 | fpacnmalhiuqygfsjoxtvkwudr 200 | bpacnmelhizqygfsjixtvfwcdr 201 | bpccnmelhxzqygfkjoxtvkwudr 202 | bpacnmepaizqygfsjoctvkwudr 203 | tpacnmelhivqygfsxoxtvkwudr 204 | kpacnfelhitqygfsjoxtvkwudr 205 | baacnzelhizqygfsjoxtvkwudx 206 | bcycnmeghizqygfsjoxtvkwudr 207 | wpacotelhizqygfsjoxtvkwudr 208 | bpacnmsshizqygrsjoxtvkwudr 209 | blacnmelhizqygfsjoxtykwvdr 210 | bkacnmelhizqygfsjoxuvkludr 211 | bpacnmelhizaugfsjoxtvhwudr 212 | fpavnmelhizqygfsgoxtvkwudr 213 | bpachmelnizqygfsjextvkwudr 214 | bpacnmelhizqpgfsjoxtvkwldu 215 | bpacnmelhizqygfsloftvywudr 216 | bpacntelhvzqygfejoxtvkwudr 217 | bpacnmeldizqygfsjmxtvkdudr 218 | byacnmelhizqygfsjsxtvkwudh 219 | bpacnmellizqygssxoxtvkwudr 220 | bpacnmelhizqygfsjootvknuir 221 | bpacnmelhitqjgfsjoxivkwudr 222 | bpacnmelhazaygfsjoxtvfwudr 223 | bpacnzenhizqygfsjzxtvkwudr 224 | bpacnmelhizqypfsdoxtvkwuar 225 | bpannmelhizqygnsjoxtvkwndr 226 | bracnmeldizsygfsjoxtvkwudr 227 | bpacnmelhizwygfsjugtvkwudr 228 | bpatnmelhizqygfsjoytvkwulr 229 | upacnmelhizqygfsjurtvkwudr 230 | bpaenmezhizqygfsjostvkwudr 231 | bpacnmelhizpygfsjodhvkwudr 232 | bpacnmelhizqygfsjogtvkguwr 233 | bpacnmelhisqygfsjoxtpkuudr 234 | bxacnmelhizqygfsjdxtvkfudr 235 | bpacnmelhizqygfsjohqvkwudu 236 | bzacnmtlhizqygfsjoxsvkwudr 237 | bpacnmplhixrygfsjoxtvkwudr 238 | bpacnmelhizqhgfsjomtvkwudg 239 | bpacnmezhizqygfsjxxtykwudr 240 | bpacnmwlhizqygfujoxtzkwudr 241 | tpacnmelhizqygfsjoxkvpwudr 242 | bpawsmenhizqygfsjoxtvkwudr 243 | bpacnmelhizqtgfsjoxttkwuqr 244 | bpkcbmelhizqygfsjoxtvkwucr 245 | bpacfmekhizqygfsjoxtvkwuds 246 | bpacnmethizqynfajoxtvkwudr 247 | bpocnmclhizqygfsjoxtvkwukr 248 | zpacnmwlhizqygfsjoxzvkwudr 249 | bpacpoelhqzqygfsjoxtvkwudr 250 | bpacnlelhizqyzfsjoxtvkwukr 251 | -------------------------------------------------------------------------------- /repros/2019-01-jackson-elasticsearch-bug/README.md: -------------------------------------------------------------------------------- 1 | # 2019-01-jackson-elasticsearch-bug 2 | 3 | This was an attempt to reproduce a bug we were seeing in the catalogue API ([wellcometrust/platform#3233](https://github.com/wellcometrust/platform/issues/3233)), where making the following query to Elasticsearch: 4 | 5 | ```json 6 | { 7 | "query": { 8 | "bool": { 9 | "filter": [ 10 | { 11 | "term": { 12 | "type": { 13 | "value": "IdentifiedWork" 14 | } 15 | } 16 | } 17 | ] 18 | } 19 | }, 20 | "from": -1863463012, 21 | "size": 100, 22 | "sort": [ 23 | { 24 | "canonicalId": { 25 | "order": "asc" 26 | } 27 | } 28 | ] 29 | } 30 | ``` 31 | 32 | would throw the following stack trace: 33 | 34 | ``` 35 | Cannot deserialize instance of `java.lang.String` out of START_OBJECT token 36 | at [Source: (byte[])"{"root_cause":[{"type":"illegal_argument_exception","reason":"numHits must be > 0; please use TotalHitCountCollector if you just need the total hit count"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"srw2krjhxh","node":"FxaQqjdVTuCjXsVhuEHKWQ","reason":{"type":"illegal_argument_exception","reason":"numHits must be > 0; please use TotalHitCountCollector if you just need the total hit count"}}],"caused_"[truncated 298 bytes]; line: 1, column: 657] (through reference chain: 
com.sksamuel.elastic4s.http.ElasticError["caused_by"]->com.sksamuel.elastic4s.http.ElasticError$CausedBy["caused_by"]) 37 | com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of `java.lang.String` out of START_OBJECT token 38 | at [Source: (byte[])"{"root_cause":[{"type":"illegal_argument_exception","reason":"numHits must be > 0; please use TotalHitCountCollector if you just need the total hit count"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"srw2krjhxh","node":"FxaQqjdVTuCjXsVhuEHKWQ","reason":{"type":"illegal_argument_exception","reason":"numHits must be > 0; please use TotalHitCountCollector if you just need the total hit count"}}],"caused_"[truncated 298 bytes]; line: 1, column: 657] (through reference chain: com.sksamuel.elastic4s.http.ElasticError["caused_by"]->com.sksamuel.elastic4s.http.ElasticError$CausedBy["caused_by"]) 39 | at com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:63) 40 | at com.fasterxml.jackson.databind.DeserializationContext.reportInputMismatch(DeserializationContext.java:1342) 41 | at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1138) 42 | at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1092) 43 | at com.fasterxml.jackson.databind.deser.std.StringDeserializer.deserialize(StringDeserializer.java:63) 44 | at com.fasterxml.jackson.databind.deser.std.StringDeserializer.deserialize(StringDeserializer.java:10) 45 | at com.fasterxml.jackson.databind.deser.SettableAnyProperty.deserialize(SettableAnyProperty.java:154) 46 | at com.fasterxml.jackson.databind.deser.SettableAnyProperty.deserializeAndSet(SettableAnyProperty.java:134) 47 | at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownVanilla(BeanDeserializerBase.java:1561) 48 | at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:258) 49 | at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:441) 50 | at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1287) 51 | at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:326) 52 | at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159) 53 | at com.fasterxml.jackson.module.scala.deser.OptionDeserializer.deserialize(OptionDeserializerModule.scala:60) 54 | at com.fasterxml.jackson.module.scala.deser.OptionDeserializer.deserialize(OptionDeserializerModule.scala:11) 55 | at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:530) 56 | at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:528) 57 | at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:417) 58 | at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1287) 59 | at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:326) 60 | at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159) 61 | at 
com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4013) 62 | at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3121) 63 | at com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper.readValue(ScalaObjectMapper.scala:202) 64 | at com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper.readValue$(ScalaObjectMapper.scala:201) 65 | at com.sksamuel.elastic4s.json.JacksonSupport$$anon$1.readValue(JacksonSupport.scala:11) 66 | at com.sksamuel.elastic4s.http.ElasticError$.parse(ElasticError.scala:37) 67 | at com.sksamuel.elastic4s.http.DefaultResponseHandler.handle(ResponseHandler.scala:49) 68 | at com.sksamuel.elastic4s.http.ElasticClient.$anonfun$execute$1(ElasticClient.scala:33) 69 | at scala.util.Success.$anonfun$map$1(Try.scala:251) 70 | at scala.util.Success.map(Try.scala:209) 71 | at scala.concurrent.Future.$anonfun$map$1(Future.scala:288) 72 | at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29) 73 | at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29) 74 | at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) 75 | at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402) 76 | at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) 77 | at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) 78 | at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) 79 | at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) 80 | ``` 81 | 82 | The specific error that triggers this behaviour from Elasticsearch is as follows: 83 | 84 | ```json 85 | { 86 | "error": { 87 | "root_cause": [ 88 | { 89 | "type": "illegal_argument_exception", 90 | "reason": "numHits must be > 0; please use TotalHitCountCollector if you just need the total hit count" 91 | } 92 | ], 93 | "type": "search_phase_execution_exception", 94 | "reason": "all shards failed", 95 | "phase": "query", 96 | "grouped": true, 97 | "failed_shards": [ 98 | { 99 | "shard": 0, 100 | "index": "7cd0277a406af3b1", 101 | "node": "FxaQqjdVTuCjXsVhuEHKWQ", 102 | "reason": { 103 | "type": "illegal_argument_exception", 104 | "reason": "numHits must be > 0; please use TotalHitCountCollector if you just need the total hit count" 105 | } 106 | } 107 | ], 108 | "caused_by": { 109 | "type": "illegal_argument_exception", 110 | "reason": "numHits must be > 0; please use TotalHitCountCollector if you just need the total hit count", 111 | "caused_by": { 112 | "type": "illegal_argument_exception", 113 | "reason": "numHits must be > 0; please use TotalHitCountCollector if you just need the total hit count" 114 | } 115 | } 116 | }, 117 | "status": 400 118 | } 119 | ``` 120 | 121 | Poking around with a Python script (see `run.py`), I realised that the simplest request that triggered this particular response was: 122 | 123 | ```http 124 | GET /indexName/_search 125 | {"from": -10} 126 | ``` 127 | 128 | Making this query using elastic4s (see `Main.scala`) worked fine, but threw the JSON error in the context of the API tests. 129 | I eventually realised that we were running an outdated version of elastic4s in the API -- 6.4.0, whereas our Elastic Cloud clusters are at least 6.5.0. --------------------------------------------------------------------------------
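For reference, a probe in the spirit of `run.py` (whose contents aren't included in this listing) might look like the following sketch -- the local endpoint and the literal index name are assumptions:

```python
import json

import requests

# Elasticsearch accepts search bodies on POST; a negative "from" is the
# minimal trigger for the "numHits must be > 0" error described above.
resp = requests.post(
    "http://localhost:9200/indexName/_search",  # assumed local cluster
    json={"from": -10},
)

print(resp.status_code)  # expect 400
print(json.dumps(resp.json(), indent=2))  # the error document shown above
```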