├── .hgignore
├── README.md
├── bin
│   ├── artifetch
│   ├── compare-scores
│   ├── debug
│   ├── derange
│   ├── em
│   ├── get-taskcluster-logs
│   ├── import-tool
│   ├── jj-import-hg
│   ├── json
│   ├── landed
│   ├── mk-task-runner
│   ├── mkgist
│   ├── re-ssh-agent
│   ├── rr-exits
│   ├── run-taskcluster-job
│   ├── sum-minor
│   ├── traverse.py
│   ├── viewsetup
│   ├── wig
│   └── wpaste
├── conf
│   ├── Q-Tps-alloc.query
│   ├── Q-awsy-baseJS.query
│   ├── Q-awsy-logBase.query
│   ├── Q-awsy-rawBaseJS.query
│   ├── gdbinit
│   ├── gdbinit.gecko
│   ├── gdbinit.gecko.py
│   ├── gdbinit.misc
│   ├── gdbinit.pahole.py
│   ├── gdbinit.py
│   ├── gdbinit.rr
│   ├── gdbinit.rr.py
│   ├── gdbinit.sfink
│   ├── gdbinit.symbols.py
│   ├── gdbstart.py
│   ├── hgrc
│   ├── jj-config.toml
│   ├── shrc
│   ├── sysctl.conf
│   └── wpaste
│       └── pbmo.conf
├── data
│   ├── ggc.html
│   └── jib.pnm
├── doc
│   ├── VirtualAndPhysicalWindows.md
│   ├── examples
│   │   ├── Q-awsy-baseJS.txt
│   │   ├── Q-awsy-logBase-grouped.txt
│   │   ├── Q-awsy-logBase.txt
│   │   └── Q-awsy-rawBaseJS.txt
│   ├── gc-ubench.org
│   ├── hazards.html
│   └── hazards.org
└── mozilla.md

/.hgignore:
--------------------------------------------------------------------------------
1 | glob:*~
2 | 


--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
  1 | Steve Fink's collection of random tools
  2 | 
  3 | These are tools that I think might be useful to other people.
  4 | 
  5 | ----------------------------------------------------------------------
  6 | 
  7 | Tools included:
  8 | 
  9 | - artifetch : Retrieve artifacts from pushes, according to a flexible query spec. Example: give me the performance score from all runs (replicates) of the "id-getter-5.html" Talos subtest from a fzf-selected set of pushes.
 10 | - landed : Prune changesets that have landed, setting their successors to the landed
 11 |   revisions.
 12 | - run-taskcluster-job : Run taskcluster jobs in a local Docker container.
 13 | - get-taskcluster-logs : Retrieve groups of log files from a push by scraping taskcluster
 14 | - em / vs : Open emacs or VSCode on the files touched by a patch, on a relevant
 15 |   line number
 16 | - viewsetup : Construct a virtual disk that exposes selected portions of a local disk,
 17 |   to allow running a Windows install either physically or virtually
 18 | - json : Interactive navigation of a JSON file
 19 | - debug : Start up a debugger within emacs on various types of files
 20 | - rr-exits : List out all rr recordings with their worst exit codes
 21 | - traverse.py : Gecko-specific, sorta. Utility for traversing a callgraph.
 22 | - wig : Patch harder
 23 | 
 24 | ----------------------------------------------------------------------
 25 | 
 26 | Configuration files:
 27 | 
 28 | I also have a set of gdb initialization files that I version-control here.
 29 | 
 30 | - gdbstart.py : gdb init file that loads all of the below gdb startup files (except for gdbinit.sfink)
 31 | - gdbinit : basic gdb configuration
 32 | - gdbinit.py : gdb python init file, defines some miscellany
 33 | - gdbinit.symbols.py : Ted Mielczarek's source server integration for gdb
 34 | - gdbinit.pahole.py : pahole and overlap commands, loaded by gdbstart.py
 35 | - gdbinit.gecko.py : configuration to assist with debugging Gecko and SpiderMonkey
 36 | - gdbinit.misc : some miscellaneous gdb helper commands
 37 | - gdbinit.rr.py : gdb helper commands for running under rr (lots here!)
 38 | - gdbinit.sfink : a couple of things that depend on my personal file layout
 39 | 
 40 | The easiest way to use these is to create a `~/.gdbinit` file with something
 41 | like the following, with the appropriate path to your sfink-tools checkout:
 42 | 
 43 |     source ~/checkouts/sfink-tools/conf/gdbstart.py
 44 | 
 45 | That will load all of the above except for `gdbinit.sfink`. Alternatively, you
 46 | could just source the individual files you want to use from the above list.
 47 | 
 48 | Other configuration files:
 49 | 
 50 | - hgrc : Mercurial configuration
 51 | 
 52 | I use this via a symlink from ~/.hgrc.
 53 | 
 54 | ----------------------------------------------------------------------
 55 | 
 56 | landed - Prune patches that have landed, setting their successors to the landed
 57 | revisions.
 58 | 
 59 | Typical usage:
 60 | 
 61 |     hg pull
 62 |     landed
 63 | 
 64 | That will look at the non-public (aka draft, probably) ancestors of your
 65 | checked out revision, and scan for matching phabricator revisions (or commit
 66 | messages, if phabricator revisions are not present) within the landed tree.
 67 | You'll want to download the latest set of landed changes first, so they exist
 68 | locally.
 69 | 
 70 | You can also do this in a more targeted way:
 71 | 
 72 |     landed -r 30deabdff172
 73 | 
 74 | (or a revspec matching multiple patches).
 75 | 
 76 | Note that this will not rebase any orphaned patches for you, so if you are
 77 | pruning landed patches with descendants that have not yet been landed, you will
 78 | need to rebase them (eg by running `hg evolve` or `hg evolve -a` or whatever.)
 79 | 
 80 | ----------------------------------------------------------------------
 81 | 
 82 | run-taskcluster-job : Run taskcluster jobs in a local Docker container.
 83 | 
 84 |     run-taskcluster-job --log-task-id a5gT2XbUSGuBd-IMAjjTUw
 85 | 
 86 | to replicate task a5gT2XbUSGuBd-IMAjjTUw locally. The above command will
 87 | 
 88 | - download the log file for that task
 89 | - find the line that says the task ID of the toolchain task that generated the
 90 |   image that it is running
 91 | - use `mach taskcluster-load-image` to pull down that image
 92 | - once you have the image, use `--task-id` in later runs to avoid re-downloading things
 93 | - download the task description from taskcluster to extract out the command that
 94 |   is to be executed and the environment variables
 95 | - execute the image (run `$COMMAND` from within the image to run the default command,
 96 |   or `echo $COMMAND` to inspect and modify it.)
 97 | 
 98 | Note that $COMMAND will probably execute `run-task` with a gecko revision,
 99 | which will start out by pulling down the whole tree. This is large and will
100 | take a while. (Avoiding this requires hacking the script a bit;
101 | https://bugzilla.mozilla.org/show_bug.cgi?id=1605232 was an early attempt at
102 | that.)
103 | 
104 | ----------------------------------------------------------------------
105 | 
106 | em / vs - Edit files relevant to a patch
107 | 
108 | Run your $EDITOR (defaulting to emacs) on the given files, or on the files
109 | touched by the changes described by the given revision.
110 | 
111 | If $EDITOR is unset, then `em` will default to `emacs` and `vs` will default to
112 | `vscode` (you will need to create a symlink from `vs` -> `em`).
113 | 
114 | If you are using vscode remote editing, you will want to install this on the
115 | remote machine and run it from within a terminal there.
116 | 
117 | 1. `em foo.txt:33` will run `emacs +33 foo.txt`
118 |    so will `em foo.txt:33:` (easier cut & paste of trailing colon for error messages)
119 |    and foo.txt will be found anywhere in the current hg tree (if not in cwd)
120 | 2. `em` with no args will run emacs on the files changed in the cwd, or if none, then
121 |    by the cwd's parent hg rev
122 | 3. `em 01faf51a0acc` will run emacs on the files changed by that hg rev.
123 | 4. `em foo.txt.rej` will run emacs on both foo.txt and foo.txt.rej, but at the lines
124 |    containing the first patch hunk and the line number of the original that it
125 |    applies to (ok, so this is probably where this script jumped the shark.)
126 | 
127 | The fancy line number stuff does not apply to all possible editors. emacs and
128 | vscode are fully supported, though vscode's behavior is a little erratic. vi
129 | will only set the position for the first file.
130 | 
131 | Sorry, no git support.
132 | 
133 | ----------------------------------------------------------------------
134 | 
135 | get-taskcluster-logs - Retrieve groups of log files from a push by scraping taskcluster
136 | 
137 | Typical example: copy link location to a taskcluster push (what you get from
138 | clicking on the date for a push), and run
139 | 
140 |     get-taskcluster-logs '<push-url>'
141 | 
142 | Alternatively, use the topmost revision of a push with the -r flag:
143 | 
144 |     get-taskcluster-logs -r <revision>
145 | 
146 | By default, this downloads all logs for all Talos jobs in that push, and stores
147 | them in individual text files under a new directory.
148 | 
149 | See --help for additional options and usage.
150 | 
151 | ----------------------------------------------------------------------
152 | 
153 | json - Interactive navigation of a JSON file
154 | 
155 | Created to explore a problem with a large sessionstore.js file. It mimics a
156 | UNIX shell prompt, allowing you to cd, ls, grep, and similar.
157 | 
158 | Requires the Perl module 'JSON'. Installable on Fedora with
159 | 
160 |     dnf install perl-JSON
161 | 
162 | Run json --help for a full help message. Here's an excerpt:
163 | 
164 | `Usage: json [initial-path]`
165 | 
166 |     ls [PATH]              - show contents of structure
167 |     cd PATH                - change current view to PATH
168 |     cat [PATH]             - display the value at the given PATH
169 |     delete SPEC            - delete the given key or range of keys (see below
170 |                              for a description of SPEC)
171 |     set KEY VALUE          - modify an existing value (VALUE may optionally
172 |                              be quoted)
173 |     mv PATH PATH           - move a subtree
174 |     grep [-l] PATTERN PATH - search for PATTERN in given PATH
175 |     write [-pretty] [FILENAME]
176 |                            - write out the whole structure as JSON. Use '-' as
177 |                              FILENAME to write to stdout.
178 |     pretty                 - prettyprint current structure to stdout
179 |     size PATH              - display how many bytes the JSON of the substructure
180 |                              at PATH would take up
181 |     load [FILENAME]        - load in the given JSON file (reload current file
182 |                              if no filename given)
183 | 
184 | ----------------------------------------------------------------------
185 | 
186 | debug - Start up a debugger within emacs on various types of files
187 | 
188 | `debug --help` for usage.
189 | 
190 | Usual usage is to prepend whatever command you want to debug with 'debug'.
191 | 
192 | Examples:
193 | 
194 | - `debug firefox -no-remote -P BugPictures`
195 | 
196 |   runs firefox within gdb within emacs, with the given arguments
197 | 
198 | - `debug -i firefox -no-remote -P NakedBugPictures`
199 | 
200 |   same, but stops at the gdb prompt before running firefox
201 | 
202 | - `debug somescript.pl x y z`
203 | 
204 |   runs somescript.pl within perldb within emacs, with the given arguments
205 | 
206 | - `debug --record js testfile.js`
207 | 
208 |   records `js testfile.js` with rr, then replays the recording in gdb in emacs
209 | 
210 | The script goes to insane lengths to figure out what you really meant to run.
211 | For example, if you alias ff in your shell to 'firefox -no-remote', you can
212 | just do
213 | 
214 |     debug ff
215 | 
216 | It will discover that there's no command ff in $PATH and start up a subshell,
217 | look for the alias 'ff', and use that command instead.
218 | 
219 | ----------------------------------------------------------------------
220 | 
221 | traverse.py - various traversals over the known portion of a callgraph.
222 | 
223 | The callgraph is in the format generated by the rooting hazard analysis.
224 | 
225 | Commands:
226 | 
227 |     help
228 |     resolve
229 |     callers
230 |     callees
231 |     route - Find a route from SOURCE to DEST [avoiding FUNC]
232 |     quit
233 |     allcounts
234 |     reachable
235 |     rootpaths
236 |     canreach
237 |     manyroutes
238 |     roots
239 |     routes
240 |     verbose
241 |     callee
242 |     caller
243 |     edges
244 |     output
245 | 
246 | Use `help <command>` to figure out what they do; I'm not going to spend time doing that right now.
247 | 
248 | ----------------------------------------------------------------------
249 | 
250 | wig - Apply a patch loosely. Works if the surrounding code has changed.
251 | 
252 | My usual use is to do some VCS command that spits out .rej files, then do `wig
253 | file1.rej` followed by `wig file2.rej` etc. That lets me see any failures one
254 | at a time. But the tool also supports scanning for all reject files.


--------------------------------------------------------------------------------
/bin/compare-scores:
--------------------------------------------------------------------------------
 1 | #!/usr/bin/perl
 2 | 
 3 | use Getopt::Long;
 4 | 
 5 | use strict;
 6 | my %score;
 7 | 
 8 | my $prename;
 9 | my $postname;
10 | GetOptions("pre|0=s" => \$prename,
11 |            "post|1=s" => \$postname,
12 |            "help|h!" => \&usage);
13 | 
14 | sub usage {
15 |     print <<"END";
16 | $0 --pre=<prename> --post=<postname> results.txt
17 | where <prename> and <postname> are labels embedded in results.txt, which has the format
18 | 
19 |     name=SomeLabel
20 | 
21 |     SomeScore: 83242
22 |     AnotherScore: 8311
23 | 
24 |     name=AnotherLabel
25 | 
26 |     SomeScore: 63213
27 |     AnotherScore: 7311
28 | 
29 | If --pre (aka -0) and/or --post (aka -1) are not passed, they'll be guessed
30 | from the order of the results.txt file.
31 | END
32 | 
33 |     exit(1);
34 | }
35 | 
36 | my @names;
37 | my $which;
38 | while(<>) {
39 |     if (/name=(.*)/) {
40 |         $which = $1;
41 |         push @names, $1;
42 |     } elsif (/^Iteration (\d+)\s+([\d.]+)/) {
43 |         $score{$which}{$1} = $2;
44 |     } elsif (/^(\w+)[^:]*: (\d+)/) {
45 |         $score{$which}{$1} = $2;
46 |     }
47 | }
48 | 
49 | $prename ||= shift(@names);
50 | die "$prename not found" if ! exists $score{$prename};
51 | $score{pre} = $score{$prename};
52 | 
53 | $postname ||= shift(@names);
54 | die "$postname not found" if ! exists $score{$postname};
55 | $score{post} = $score{$postname};
56 | 
57 | my $maxlen = 0;
58 | foreach (keys %{ $score{pre} }) {
59 |     $maxlen = length if length > $maxlen;
60 | }
61 | 
62 | sub compare {
63 |     return int($a) ? $a <=> $b : $a cmp $b;
64 | }
65 | 
66 | print "$prename -> $postname\n";
67 | print "\n";
68 | 
69 | foreach (sort compare keys %{ $score{pre} }) {
70 |     my ($pre, $post) = ($score{pre}{$_}, $score{post}{$_});
71 |     my $delta = $post - $pre;
72 |     printf("% ${maxlen}s: %6.0f -> %6.0f = %+6.0f (%+5.1f%%)\n",
73 |            $_, $pre, $post, $delta, 100 * $delta / $pre);
74 | }
75 | 
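76 | # A sketch of the expected output, using the scores from the usage example
77 | # above with --pre=SomeLabel --post=AnotherLabel:
78 | #
79 | #   SomeLabel -> AnotherLabel
80 | #
81 | #      SomeScore:  83242 ->  63213 = -20029 (-24.1%)
82 | #   AnotherScore:   8311 ->   7311 =  -1000 (-12.0%)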


--------------------------------------------------------------------------------
/bin/derange:
--------------------------------------------------------------------------------
  1 | #!/usr/bin/perl
  2 | 
  3 | # Recover the panel on a single display, briefly:
  4 | # wmctrl -x -r xfce4-panel.Xfce4-panel -e 0,0,1200,384,3
  5 | 
  6 | use strict;
  7 | use warnings;
  8 | use Getopt::Long;
  9 | 
 10 | my $do_get;
 11 | my $do_single;
 12 | my $do_shrink;
 13 | my $do_std_3mon;
 14 | my $do_std_2mon;
 15 | my $do_std_1mon;
 16 | my $do_panel;
 17 | my $do_wtf;
 18 | my $above;
 19 | my $fmt_cmd;
 20 | GetOptions("g|get!" => \$do_get,
 21 |            "s|single!" => \$do_single,
 22 |            "shrink|resize!" => \$do_shrink,
 23 |            "xr" => \$do_std_3mon,
 24 |            "xr3" => \$do_std_3mon,
 25 |            "xr2" => \$do_std_2mon,
 26 |            "xr1" => \$do_std_1mon,
 27 |            "panel" => \$do_panel,
 28 |            "above!" => \$above,
 29 |            "cmd!" => \$fmt_cmd,
 30 |            "wtf" => \$do_wtf,
 31 |           )
 32 |   or die "bad dog!\n";
 33 | 
 34 | if ($do_wtf) {
 35 |     system("xrandr --output DP-1 --mode 1920x1200");
 36 |     system("xrandr --output DP-2 --mode 1920x1080");
 37 |     exit(0);
 38 | }
 39 | 
 40 | # If nothing specified, figure out appropriate default based on connected
 41 | # displays.
 42 | if (!$do_std_1mon && !$do_single && !$do_std_3mon) {
 43 |     my $displays = get_displays();
 44 |     my $num_displays = 0 + keys %$displays;
 45 |     my $num_on_displays = () = grep { defined } values %$displays;
 46 |     print("num displays = $num_displays num connected = $num_on_displays\n");
 47 |     $do_std_1mon = 1 if $num_displays == 1;
 48 |     $do_std_2mon = 1 if $num_displays == 2 && $num_on_displays != 2;
 49 |     $do_std_3mon = 1 if $num_displays == 3 && $num_on_displays != 3;
 50 | }
 51 | 
 52 | if ($do_std_1mon) {
 53 |     xrandr_1mon();
 54 |     $do_single = 1;
 55 |     $do_shrink = 1;
 56 | }
 57 | 
 58 | if ($do_panel) {
 59 |     exec("wmctrl", "-x",
 60 |          "-r" => "xfce4-panel.Xfce4-panel",
 61 |          "-e" => "0,0,1200,384,3")
 62 |         or die;
 63 | }
 64 | 
 65 | sub get_displays {
 66 |     my %displays;
 67 |     
 68 |     open(my $fh, "-|", "xrandr");
 69 |     while(<$fh>) {
 70 |         my ($display, $connection, $size) = /^(\S+) (connected|disconnected) (?:(?:\w+ )?(\d+x\d+\+\d+\+\d+))?/;
 71 |         next if !$display;
 72 |         if ($connection eq 'connected') {
 73 |             $displays{$display} = $size;
 74 |         }
 75 |     }
 76 |     return \%displays;
 77 | }
 78 | 
 79 | sub get_workspaces {
 80 |     my @workspaces;
 81 | 
 82 |     open(my $fh, "wmctrl -d |");
 83 |     while(<$fh>) {
 84 |         chomp;
 85 |         my ($n, $active, $dims, $vp, $wapos, $wadims, $name) =
 86 |           /^(\d+)\s+(\S)\s+DG: (\S+)\s+VP: (\S+)\s+WA: (\S+) (\S+)\s+(.*)/;
 87 |         my $ws = { 'active' => ($active eq '*'),
 88 |                    'name' => $name };
 89 |         my ($w, $h) = split(/x/, $dims);
 90 |         $ws->{desktop_geometry} = [ $w, $h ];
 91 |         my ($x, $y) = split(/,/, $wadims);
 92 |         $ws->{viewport_position} = [ $x, $y ];
 93 |         ($x, $y) = split(/,/, $wapos);
 94 |         $ws->{workarea_pos} = [ $x, $y ];
 95 |         ($w, $h) = split(/x/, $wadims);
 96 |         $ws->{workarea_geometry} = [ $w, $h ];
 97 |         $workspaces[$n] = $ws;
 98 |     }
 99 | 
100 |     return \@workspaces;
101 | }
102 | 
103 | my $workspace;
104 | 
105 | {
106 |     my $workspaces = get_workspaces();
107 |     ($workspace) = grep { $_->{active} } @$workspaces;
108 |     print("viewport size = $workspace->{workarea_geometry}[0] x $workspace->{workarea_geometry}[1]\n");
109 | }
110 | 
111 | sub get_windows {
112 |     my %windows;
113 | 
114 |     open(my $fh, "wmctrl -x -l -G |");
115 |     while(<$fh>) {
116 |         my ($id, $desktop, $x, $y, $w, $h, $name) = split(/\s+/, $_);
117 |         $windows{$id} = { id => $id,
118 |                           desktop => $desktop,
119 |                           pos => [ $x, $y ],
120 |                           size => [ $w, $h ],
121 |                           name => $name,
122 |                         };
123 |     }
124 | 
125 |     return \%windows;
126 | }
127 | 
128 | sub coord {
129 |     return join(",", @{ shift() });
130 | }
131 | 
132 | my %windows = %{ get_windows() };
133 | 
134 | # Made-up numbers that I pretended to understand and give names to.
135 | #
136 | # If I send something to x=0, it goes right to the edge (yay!) and wmctrl -l
137 | # reports it as being at 6.
138 | #
139 | # If I send something to y=0..30, it ends up right underneath the panel, and
140 | # reports y=76. If I send it to some other y, it reports y+46.
141 | #
142 | # Widths: set => report
143 | #  900 => 896
144 | # 1900 => 1897
145 | # 1920 => 1918
146 | #  100 => 98
147 | #   80 => 77
148 | #   40 => 49 from above, 40 from below
149 | # 0-35 => 35
150 | #   36 => 36
151 | #
152 | my $border = [ 6, 46 ];
153 | 
154 | xrandr_2mon() if $do_std_2mon;
155 | xrandr_3mon() if $do_std_3mon;
156 |     
157 | if ($do_std_3mon) {
158 |     exit 0;
159 | }
160 | 
161 | if ($do_get || $do_single || $do_shrink) {
162 |     while (my ($id, $win) = each %windows) {
163 |         next if $win->{name} =~ /xfce4-panel/;
164 |         my @pos = @{ $win->{pos} };
165 | 
166 |         my $h = $win->{size}[1];
167 |         my $pos = coord(\@pos);
168 |         if ($do_shrink) {
169 |             my $ws_height = $workspace->{workarea_geometry}[1];
170 |             if ($h > $ws_height - $pos[1] + $border->[1]) {
171 |                 # Extends too far down.
172 |                 $h = $ws_height - $pos[1];
173 |             }
174 |         }
175 |         my $size = coord([ $win->{size}[0], $h ]);
176 | 
177 |         if ($do_shrink) {
178 |             print "$pos[1] + $win->{size}[1] > $workspace->{workarea_geometry}[1]\n";
179 |             if ($pos[1] + $win->{size}[1] > $workspace->{workarea_geometry}[1]) {
180 |                 print("shrinking $win->{name}\n");
181 |                 system("wmctrl", "-x", "-r", $win->{name}, "-e", "0,$pos,$size");
182 |                 system("wmctrl", "-i", "-r", $id, "-e", "0,$pos,$size");
183 |             }
184 |         }
185 | 
186 |         if (@ARGV) {
187 |             my $found = 0;
188 |             for (@ARGV) {
189 |                 $found ||= (index($win->{name}, $_) != -1);
190 |             }
191 |             next if ! $found;
192 |         }
193 | 
194 |         if ($do_get) {
195 |             if ($fmt_cmd) {
196 |                 print("  wmctrl -x -r $win->{name} -e 0,$pos,$size\n");
197 |             } else {
198 |                 print <<"END";
199 |         { name => '$win->{name}',
200 |           coords => '0,$pos,$size' },
201 | END
202 |             }
203 |         }
204 |     }
205 | 
206 |     if ($do_get || $do_shrink) {
207 |         exit(0);
208 |     }
209 | }
210 | 
211 | my %DB = (
212 |     'single' => [
213 |         { name => 'nvidia-settings.Nvidia-settings',
214 |           coords => '0,862,1433,499,316' },
215 |         { name => 'Irc.Chatzilla',
216 |           coords => '0,228,76,1692,1015' },
217 |         { name => 'Navigator.Firefox',
218 |           coords => '0,0,0,1914,1131' },
219 |     ],
220 |     'above' => [
221 |         { name => 'Navigator.Firefox',
222 |           coords => '0,0,0,1914,1131' },
223 |         { name => 'Irc.Chatzilla',
224 |           coords => '0,0,1200,1400,1050' },
225 |         { name => 'Mail.Thunderbird',
226 |           coords => '0,0,0,1914,1134' },
227 |     ],
228 |     'dual' => [
229 |         { name => 'Navigator.Firefox',
230 |           coords => '0,0,0,1914,1131' },
231 |         { name => 'Irc.Chatzilla',
232 |           coords => '0,1926,0,1429,1050' },
233 |         { name => 'Mail.Thunderbird',
234 |           coords => '0,0,0,1914,1134' },
235 |         { name => 'gkrellm.Gkrellm',
236 |           coords => '0,3361,0,212,749' },
237 |     ],
238 |     'triple' => [
239 |         { name => 'Mail.Thunderbird',
240 |           coords => '0,6,76,1914,1020' },
241 |         { name => 'Navigator.Firefox',
242 |           coords => '0,6,76,1914,1020' },
243 |         { name => 'gkrellm.Gkrellm',
244 |           coords => '0,0,1200,212,1055' },
245 |         { name => 'Irc.Chatzilla',
246 |           coords => '0,2202,46,1638,1200' },
247 |         { name => 'VidyoDesktop.VidyoDesktop',
248 |           coords => '0,1546,1246,374,400' },
249 |         ],
250 | );
251 | 
252 | my %no_decorations = ( 'gkrellm.Gkrellm' => 1 );
253 | 
254 | my $positions;
255 | 
256 | my $displays = get_xrandr();
257 | 
258 | if ($do_single) {
259 |     $positions = $DB{single};
260 | } elsif ($above) {
261 |     $positions = $DB{above};
262 | } elsif (keys %$displays == 2) {
263 |     system("xrandr", "--output", "DP-0", "--left-of", "LVDS-0");
264 |     $positions = $DB{dual};
265 | } elsif (keys %$displays == 3) {
266 |     #system("xrandr", "--output", "DP-0", "--left-of", "LVDS-0");
267 |     $positions = $DB{triple};
268 | }
269 | 
270 | if ($positions) {
271 |     for my $win (@$positions) {
272 |         my ($gravity, $x, $y, $w, $h) = split(/,/, $win->{coords});
273 | 
274 |         unless ($no_decorations{$win->{name}}) {
275 |             $x -= $border->[0];
276 |             $y -= $border->[1];
277 |         }
278 | 
279 |         my $coords = "$gravity,$x,$y,$w,$h";
280 |         print join(" ", "wmctrl", "-x", "-r" => $win->{name}, "-e" => $coords, "\n");
281 |         system("wmctrl", "-x", "-r" => $win->{name}, "-e" => $coords);
282 |     }
283 | }
284 | 
285 | my %arrangement = (
286 |     'Navigator.Firefox' => 0,
287 |     'Irc.Chatzilla' => ($do_single ? 2 : -1),
288 |     'Mail.Thunderbird' => 1,
289 |     'gkrellm.Gkrellm' => ($do_single ? 3 : -1),
290 |     'VidyoDesktop.VidyoDesktop' => -1,
291 | );
292 | 
293 | while (my ($name, $ws) = each %arrangement) {
294 |     print join(" ", "wmctrl", "-x", "-r" => $name, "-t" => $ws), "\n";
295 |     system("wmctrl", "-x", "-r" => $name, "-t" => $ws);
296 | }
297 | 
298 | # wmctrl -r DailyLog -t -1
299 | 
300 | unless ($do_single) {
301 |     system("wmctrl", "-x", "-r" => "gkrellm.Gkrellm", "-b" => "add,skip_taskbar");
302 |     system("wmctrl", "-x", "-r" => "Irc.Chatzilla", "-b" => "add,skip_taskbar");
303 | }
304 | 
305 | sub get_xrandr {
306 |     my %displays;
307 | 
308 |     open(my $fh, "xrandr |");
309 |     while(<$fh>) {
310 |         if (/^(\S+) connected (?:primary )?(\S+)/) {
311 |             $displays{$1} = $2;
312 |         }
313 |     }
314 | 
315 |     return \%displays;
316 | }
317 | 
318 | sub xrandr_3mon {
319 |     print("Doing xrandr configuration for 3 displays\n");
320 |     system("xrandr --output eDP-1 --pos 0x1200 --scale 0.5x0.5 --below DP-1 --output DP-1 --pos 1920x0 --output DP-2 --pos 0x120");
321 |     #system("xrandr --output eDP-1 --pos 0x1200 --scale 0.5x0.5 --output DP-1 --pos 1920x0 --output DP-2 --pos 0x120");
322 |     #system("xrandr --output VGA-0 --mode 1920x1200 --output LVDS-0 --mode 1920x1080 --output DP-0 --mode 1920x1080");
323 |     #system("xrandr --output VGA-0 --output LVDS-0 --below VGA-0 --output DP-0 --right-of VGA-0");
324 | }
325 | 
326 | sub xrandr_2mon {
327 |     print("Doing xrandr configuration for 2 displays\n");
328 |     system("xrandr --output eDP-1 --pos 0x1200 --scale 0.5x0.5 --output DP-1 --pos 0x0");
329 |     #system("xrandr --output eDP-1 --pos 0x120 --scale 0.5x0.5 --output DP-1 --pos 0x1200");
330 | }
331 | 
332 | sub xrandr_1mon {
333 |     print("Doing xrandr configuration for 1 display\n");
334 |     system("xrandr --output eDP-1 --scale 0.5x0.5");
335 | }
336 | 
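337 | # A few example invocations of the modes above (display names like DP-1 and
338 | # eDP-1 are whatever xrandr reports on my machines, so adjust to taste):
339 | #
340 | #   derange              # pick a layout based on how many displays are connected
341 | #   derange --xr3        # force the 3-monitor xrandr layout
342 | #   derange --shrink     # shrink windows that extend below the work area
343 | #   derange --get --cmd  # dump current window geometry as wmctrl commands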


--------------------------------------------------------------------------------
/bin/em:
--------------------------------------------------------------------------------
  1 | #!/usr/bin/perl
  2 | 
  3 | # 1. em foo.txt:33 will run emacs +33 foo.txt
  4 | #    so will em foo.txt:33: (easier cut & paste of trailing colon for error messages)
  5 | #    and foo.txt will be found anywhere in the current hg tree (if not in cwd)
  6 | # 2. em with no args will run emacs on the files changed in the cwd, or if none, then
  7 | #    by the cwd's parent rev
  8 | # 3. em 01faf51a0acc will run emacs on the files changed by that rev.
  9 | # 4. em foo.txt.rej will run emacs on both foo.txt and foo.txt.rej, but at the lines
 10 | #    containing the first patch hunk and the line number of the original that it
 11 | #    applies to (ok, so this is probably where this script jumped the shark.)
 12 | 
 13 | # TODO: Rewrite in Python, purely to be able to use argparse. (Yeah, I could
 14 | # use Getopt::Long. argparse has better help message output.)
 15 | 
 16 | use strict;
 17 | use warnings;
 18 | 
 19 | use File::Basename qw(basename);
 20 | 
 21 | my $ROOTDIR;
 22 | my $TOPROOTDIR;
 23 | 
 24 | my @to_edit; # [  ]
 25 | my $magic = 0;
 26 | my $revision;
 27 | my $base = basename($0);
 28 | my $editor = ($base eq 'vs' ? 'vscode' : $ENV{EDITOR}) || "emacs";
 29 | 
 30 | my $verbose;
 31 | my $use_client;
 32 | 
 33 | my @args;
 34 | my $i = 0;
 35 | while ($i < @ARGV) {
 36 |     if ($ARGV[$i] eq '-v' || $ARGV[$i] eq '--verbose') {
 37 |         $verbose = 1;
 38 |     } elsif ($ARGV[$i] eq '-c' || $ARGV[$i] eq '--client') {
 39 |         $use_client = 1;
 40 |     } elsif ($ARGV[$i] eq '-r' || $ARGV[$i] eq '--revision') {
 41 |         $revision = $ARGV[++$i] or die "missing revision\n";
 42 |     } elsif ($ARGV[$i] eq '-e' || $ARGV[$i] eq '--editor') {
 43 |         $editor = $ARGV[++$i] or die "missing editor\n";
 44 |     } else {
 45 |         push @args, $ARGV[$i];
 46 |     }
 47 | } continue {
 48 |     ++$i;
 49 | }
 50 | 
 51 | ARG: for my $arg (@args) {
 52 |     my $lineno;
 53 |     if ($arg =~ /^[-+]/) {
 54 | 	push @to_edit, [ $arg ];
 55 |         next;
 56 |     }
 57 | 
 58 |     # Check for filename:lineno or filename:lineno:colno, with optional
 59 |     # trailing colon.
 60 |     if ($arg =~ /(.*?):(\d+)(:\d+)?:?$/) {
 61 |         # Might be filename:77 or filename:77: (the latter comes from a simple
 62 |         # copy/paste of an error message). Convert to opening the appropriate
 63 |         # line.
 64 |         print "Command line contained filename:lineno, adding +$2\n"
 65 |             if $verbose;
 66 | 	$lineno = $2;
 67 |         $arg = $1;
 68 |     }
 69 | 
 70 |     # Check for a path relative to the hg root, or failing that, anywhere in
 71 |     # the repo.
 72 |     if (! -r $arg) {
 73 |         chomp($ROOTDIR = $TOPROOTDIR ||= qx(hg root));
 74 |         if (-r "$ROOTDIR/$arg") {
 75 |             $arg = "$ROOTDIR/$arg";
 76 |         } else {
 77 | 	    chomp(my $path = qx(hg files "relglob:$arg"));
 78 | 	    if ($path ne '' && -r $path) {
 79 | 		$arg = $path;
 80 | 	    } elsif ($arg =~ /^[\da-f]{1,40}$/) {
 81 | 		$revision = $arg;
 82 | 		next;
 83 | 	    }
 84 |         }
 85 |     }
 86 | 
 87 |     # Check for a reject file.
 88 |     if ($arg =~ /(.*)\.rej$/) {
 89 |         print "Found reject file\n"
 90 |             if $verbose;
 91 |         my $orig = $1;
 92 |         open(my $fh, "<", $arg) or die "open $arg: $!";
 93 |         my $hunkstart;
 94 |         my $context = 0;
 95 |         while(<$fh>) {
 96 |             if (/^\@\@ -(\d+)/) {
 97 |                 $hunkstart = $1;
 98 |             } elsif (defined($hunkstart)) {
 99 |                 if (/^ /) {
100 |                     ++$context;
101 |                 } else {
102 |                     # Open the original file at the first changed line number,
103 |                     # and the reject file at the first hunk.
104 | 		    push @to_edit, [ $orig, $hunkstart + $context ];
105 | 		    push @to_edit, [ $arg, $context + 3 + 1 ];
106 |                     next ARG;
107 |                 }
108 |             }
109 |         }
110 | 
111 | 	push @to_edit, [$orig], [$arg, $lineno]; # $lineno is probably undef
112 |         $magic = 1;
113 |     } else {
114 | 	push @to_edit, [ $arg, $lineno ];
115 |     }
116 | }
117 | 
118 | if (@to_edit == 0) {
119 |     chomp($ROOTDIR = $TOPROOTDIR ||= qx(hg root));
120 | 
121 |     my @files;
122 | 
123 |     # If no revision was given, check working directory.
124 |     if (! defined $revision) {
125 |         print "Looking for changes in working directory...\n"
126 |             if $verbose;
127 |         chomp(@files = qx(hg diff | diffstat -l -p1));
128 |     }
129 | 
130 |     if (@files) {
131 |         push @to_edit, map { ["$ROOTDIR/$_"] } @files;
132 |     } else {
133 |         $revision //= '.';
134 |         print "Using changes from $revision\n"
135 |             if $verbose;
136 |         open(my $diff, "hg export --hidden $revision |");
137 |         my $curfile;
138 |         my $startline;
139 |         while(<$diff>) {
140 |             chomp;
141 |             if (m!^\+\+\+ b/(.*)!) {
142 |                 $curfile = $1;
143 |             } elsif ($curfile && /^@@ -\d+,\d+ \+(\d+)/) {
144 |                 $startline = $1;
145 |                 print "Found diff chunk starting at +$1, scanning...\n"
146 |                     if $verbose;
147 |             } elsif ($curfile && defined $startline) {
148 |                 if (/^[\-+]/) {
149 |                     print "found first change at line $startline\n"
150 |                         if $verbose;
151 | 		    push @to_edit, [ "$ROOTDIR/$curfile", $startline ];
152 |                     undef $startline;
153 |                     undef $curfile;
154 |                 } else {
155 |                     $startline++;
156 |                 }
157 |             }
158 |         }
159 |     }
160 | }
161 | 
162 | my $cmd;
163 | my @cmd_args;
164 | if ($editor eq 'emacs') {
165 |     $cmd = $use_client ? "emacsclient" : "emacs";
166 |     foreach (@to_edit) {
167 | 	my ($file, $lineno) = @$_;
168 | 	push @cmd_args, "+$lineno" if $lineno;
169 | 	push @cmd_args, $file;
170 |     }
171 | } elsif ($editor eq 'vscode' || $editor eq 'code') {
172 |     $cmd = 'code';
173 |     push @cmd_args, '-r';
174 |     foreach (@to_edit) {
175 |         my ($file, $lineno) = @$_;
176 |         push @cmd_args, "-g", join(":", grep { $_ } $file, $lineno);
177 |     }
178 | } else {
179 |     $cmd = $editor;
180 |     push @cmd_args, map { $_->[0] } @to_edit;
181 |     if ($editor =~ /vi/ && $to_edit[0][1]) {
182 |         unshift @cmd_args, "+$to_edit[0][1]";
183 |     }
184 | }
185 | 
186 | print "Running: $cmd @cmd_args\n" if $magic or $verbose;
187 | exec($cmd, @cmd_args);
188 | 


--------------------------------------------------------------------------------
/bin/get-taskcluster-logs:
--------------------------------------------------------------------------------
  1 | #!/usr/bin/python3
  2 | 
  3 | import argparse
  4 | import os
  5 | import re
  6 | import requests
  7 | import sys
  8 | 
  9 | from collections import defaultdict
 10 | 
 11 | parser = argparse.ArgumentParser(description="Download logs from taskcluster into a new directory named push<push_id>-<rev>")
 12 | parser.add_argument(
 13 |     '--revision', '-r', metavar='REV', type=str,
 14 |     help='download logs for this revision')
 15 | parser.add_argument(
 16 |     '--repository', '--repo', '-R', metavar='REPO', type=str,
 17 |     default='try', help='repository (default: try)')
 18 | parser.add_argument(
 19 |     '--project', '-p', metavar='PROJECT', type=str,
 20 |     default='try', help='project name')
 21 | parser.add_argument(
 22 |     '--group', '-g', metavar='GROUP', type=str,
 23 |     default='Talos', help='restrict to jobs with job_group_name containing GROUP (default: Talos). List available groups with --list.')
 24 | parser.add_argument(
 25 |     '--type', metavar='TYPE', type=str,
 26 |     default=None, help='restrict to jobs with job_type_name containing TYPE')
 27 | parser.add_argument(
 28 |     '--list', '--list-groups', action='store_true',
 29 |     help='display a list of all available job groups')
 30 | parser.add_argument(
 31 |     '--list-all', action='store_true',
 32 |     help='display a list of all available job groups and job types')
 33 | parser.add_argument(
 34 |     '--verbose', '-v', action='store_true',
 35 |     default=False, help='verbose logging')
 36 | parser.add_argument(
 37 |     'url', type=str, nargs='?',
 38 |     default=None, help='push url (if given, revision and/or repository will be extracted)')
 39 | 
 40 | args = parser.parse_args()
 41 | 
 42 | if args.url is not None:
 43 |     m = re.search(r'revision=([0-9a-fA-F]+)', args.url)
 44 |     if not m:
 45 |         print('If an argument is given, it must be the URL of a push (it should have &revision=... in it somewhere)')
 46 |         sys.exit(1)
 47 |     args.revision=m.group(1)
 48 |     m = re.search(r'\brepo=([\w\-]+)', args.url)
 49 |     if m and args.repository == 'try':
 50 |         args.repository = m.group(1)
 51 | 
 52 | if not (args.repository and args.revision):
 53 |     print("Not enough params given")
 54 |     sys.exit(1)
 55 | 
 56 | def fetch_page(url, desc=None, **kwargs):
 57 |     if args.verbose:
 58 |         print("Fetching {}".format(url))
 59 |     r = requests.get(url, headers={'User-Agent': 'log-batch-fetcher/thatbastard/sfink'}, **kwargs)
 60 |     if r.status_code != 200:
 61 |         print("Error: Failed to fetch {}, status code {}".format(" ".join(filter(None, [desc, "page " + url])), r.status_code))
 62 |         sys.exit(1)
 63 |     return r
 64 | 
 65 | def generate_jobs(project, push_id):
 66 |     count=200
 67 |     job_list_url_format = 'https://treeherder.mozilla.org/api/project/{project}/jobs/?push_id={push_id}&count={count}&offset={offset}'
 68 |     offset = 0
 69 |     while True:
 70 |         job_list_url = job_list_url_format.format(project=project, push_id=push_id, count=count, offset=offset)
 71 |         r = fetch_page(job_list_url, "job list")
 72 |         d = r.json()
 73 |         for res in d['results']:
 74 |             yield res
 75 |         if len(d['results']) < count:
 76 |             break
 77 |         offset += count
 78 | 
 79 | def get_log_info(job):
 80 |     log_url = 'https://treeherder.mozilla.org/api/project/{project}/job-log-url/?job_id={job_id}'.format(project=args.project, job_id=job['id'])
 81 |     r = fetch_page(log_url, "job")
 82 |     for log in r.json():
 83 |         if log['name'] == 'builds-4h':
 84 |             return {
 85 |                 'job_id': job['id'],
 86 |                 'job_type_name': job['job_type_name'],
 87 |                 'log_url': log['url'],
 88 |             }
 89 |     raise Exception("Did not find a log tagged with name 'builds-4h' in job {}".format(job['id']))
 90 | 
 91 | def generate_logs(project, push_id):
 92 |     for job in generate_jobs(args.project, push_id):
 93 |         if args.group not in job['job_group_name']:
 94 |             continue
 95 |         if args.type is None or args.type in job['job_type_name']:
 96 |             yield get_log_info(job)
 97 | 
 98 | def get_names(project, push_id):
 99 |     groups = defaultdict(int)
100 |     types = defaultdict(int)
101 |     for job in generate_jobs(args.project, push_id):
102 |         groups[job['job_group_name']] += 1
103 |         types[job['job_type_name']] += 1
104 |     return groups, types
105 | 
106 | push_url = 'https://treeherder.mozilla.org/api/project/{project}/push/?revision={rev}'.format(project=args.project, rev=args.revision)
107 | r = fetch_page(push_url, "push info")
108 | d = r.json()
109 | if not d['results']:
110 |     print("No push found for project={} rev={}".format(args.project, args.revision))
111 |     sys.exit(1)
112 | push_id = d['results'][0]['id']
113 | if args.verbose:
114 |     print("Found push id {} for {}".format(push_id, args.revision))
115 | 
116 | push_dir = "push{push_id}-{rev}".format(push_id=push_id, rev=args.revision[0:12])
117 | try:
118 |     os.mkdir(push_dir)
119 | except OSError:
120 |     pass
121 | 
122 | if args.list or args.list_all:
123 |     (groups, types) = get_names(args.project, push_id)
124 |     if args.list_all:
125 |         print("Types:")
126 |         for name, count in types.items():
127 |             print("{} x {}".format(count, name))
128 |     print("Groups:")
129 |     for name, count in groups.items():
130 |         print("{} x {}".format(count, name))
131 |     sys.exit(0)
132 | 
133 | outfiles = []
134 | for loginfo in generate_logs(args.project, push_id):
135 |     log_name = "job{id}-{jobtype}.txt".format(
136 |         jobtype=loginfo['job_type_name'].replace('/', '_'),
137 |         id=loginfo['job_id']
138 |     )
139 |     filename = os.path.join(push_dir, log_name)
140 |     r = fetch_page(loginfo['log_url'], stream=True)
141 |     with open(filename, "wb") as fh:
142 |         for chunk in r.iter_content(chunk_size=1048576):
143 |             if chunk:
144 |                 fh.write(chunk)
145 |     print("Wrote " + filename)
146 |     outfiles.append(filename)
147 | 
148 | print("Wrote {} log files to {}/".format(len(outfiles), push_dir))
149 | 
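150 | # A sketch of typical use (the revision here is a placeholder):
151 | #
152 | #   get-taskcluster-logs -r 0123456789ab            # all Talos logs for the push
153 | #   get-taskcluster-logs -r 0123456789ab --list     # show the available job groups
154 | #   get-taskcluster-logs -r 0123456789ab -g Raptor  # restrict to another group
155 | #
156 | # Logs are written one file per job into a new push<push_id>-<rev> directory.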


--------------------------------------------------------------------------------
/bin/import-tool:
--------------------------------------------------------------------------------
 1 | #!/bin/sh
 2 | 
 3 | set -e
 4 | 
 5 | SRC_TOOL="$1"
 6 | IMPORT_BIN_DIR="$(dirname "$(realpath "$0")")"
 7 | 
 8 | IMPORT_DIR="$(dirname "$IMPORT_BIN_DIR")"
 9 | base="$(basename "$SRC_TOOL")"
10 | type=bin
11 | 
12 | if ! [ -x "$SRC_TOOL" ]; then
13 |     SRC_TOOL="$HOME/bin/$base"
14 |     type=bin
15 | fi
16 | 
17 | if ! [ -x "$SRC_TOOL" ]; then
18 |     SRC_TOOL="$HOME/$base"
19 |     type=conf
20 | fi
21 | 
22 | if ! [ -f "$SRC_TOOL" ]; then
23 |     echo "$SRC_TOOL not found" >&2
24 |     exit 1
25 | fi
26 | 
27 | if [ $type = conf ]; then
28 |     DEST="$IMPORT_DIR/$type/${base#.}"
29 | else
30 |     DEST="$IMPORT_DIR/$type/$base"
31 | fi
32 | 
33 | abs="$(realpath "$DEST")"
34 | rel="$(realpath "$abs" --relative-to="$(dirname "$SRC_TOOL")")"
35 | [ -n "$abs" ] && [ -n "$rel" ]
36 | mv "$SRC_TOOL" "$DEST"
37 | ln -s "$rel" "$SRC_TOOL"
38 | 
39 | echo "Imported $base into $DEST"
40 | 
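41 | # Example: pull an existing tool out of ~/bin into this repo, leaving a
42 | # symlink behind at its old location:
43 | #
44 | #   import-tool ~/bin/mkgist   # moves it to bin/mkgist, symlinks ~/bin/mkgist
45 | #
46 | # Non-executable dotfiles fall through to conf/ with the leading dot stripped,
47 | # so importing ~/.hgrc would produce conf/hgrc.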


--------------------------------------------------------------------------------
/bin/jj-import-hg:
--------------------------------------------------------------------------------
  1 | #!/usr/bin/python
  2 | 
  3 | # Import hg revisions into jj.
  4 | 
  5 | # TODO:
  6 | # [x] handle single linear series
  7 | # [x] better solution to finding the base. (Can't avoid mapping base hg rev.)
  8 | #     - in actual git repo for jj (not workspace), git cinnabar hg2git
  9 | #     - use that as base
 10 | # [x] do a full tree:
 11 | #     - grab a set of revisions
 12 | #     - for any parents not in the set, hg2git and put into an "existing" set (I guess a map hg => git|change)
 13 | #     - while there is a revision with all parents existing
 14 | #       jj new  # <-- will need changes
 15 | #       import patch
 16 | #       add resulting change to existing, where the id is the corresponding hg revision
 17 | # [ ] expire or mark as imported the hg origination
 18 | # [ ] also import any obsolete revs that descend from immutable revs? (optional!)
 19 | # [x] error out usefully when importing a changeset with unknown (non-trunk) parent
 20 | 
 21 | import argparse
 22 | from collections import defaultdict
 23 | from datetime import datetime
 24 | import logging
 25 | import os
 26 | from pathlib import Path
 27 | import re
 28 | import shlex
 29 | import subprocess
 30 | import sys
 31 | 
 32 | base_env = os.environ
 33 | git_repo = None
 34 | 
 35 | parser = argparse.ArgumentParser("migrate hg revisions to jj changes")
 36 | parser.add_argument("--verbose", "-v", default=0, action="count",
 37 |     help="verbose output")
 38 | parser.add_argument("--bookmark", "--book", "--label",
 39 |     help="create a bookmark pointing to the last new change created")
 40 | parser.add_argument("--map", default="/tmp/hg-to-git.txt",
 41 |     help="filename of hg rev -> jj (git) commit mapping")
 42 | parser.add_argument("hg_revset", help="revset to migrate")
 43 | parser.add_argument("dir", help="directory containing hg or jj checkout (current working directory will be used for the other)")
 44 | 
 45 | args = parser.parse_args()
 46 | 
 47 | logger = logging.getLogger("hg-to-jj")
 48 | if args.verbose == 1:
 49 |     logger.setLevel(logging.INFO)
 50 | elif args.verbose > 1:
 51 |     logger.setLevel(logging.DEBUG)
 52 | 
 53 | cwd = Path.cwd()
 54 | if (Path(args.dir) / ".hg").is_dir():
 55 |     setattr(args, "hgdir", args.dir)
 56 |     if not (cwd / ".jj").is_dir():
 57 |         logger.critical("current working directory should be jj workspace root")
 58 |     setattr(args, "jjdir", cwd)
 59 | elif (Path(args.dir) / ".jj").is_dir():
 60 |     setattr(args, "jjdir", args.dir)
 61 |     if not (cwd / ".hg").is_dir():
 62 |         logger.critical("current working directory should be hg root")
 63 |     setattr(args, "hgdir", cwd)
 64 | else:
 65 |     logger.critical(f"directory {args.dir} appears to be neither an hg nor jj checkout")
 66 |     sys.exit(1)
 67 | 
 68 | def run_jj(cmd, **kwargs):
 69 |     logger.debug("RUNNING: " + shlex.join(cmd))
 70 |     return subprocess.check_output(cmd, cwd=args.jjdir, text=True, **kwargs)
 71 | 
 72 | def pipe_to_jj(cmd, **kwargs):
 73 |     logger.debug("RUNNING: | " + shlex.join(cmd))
 74 |     return subprocess.Popen(cmd, cwd=args.jjdir, text=True, stdin=subprocess.PIPE, **kwargs)
 75 | 
 76 | def run_hg(cmd, **kwargs):
 77 |     logger.debug("RUNNING: " + shlex.join(cmd))
 78 |     return subprocess.check_output(cmd, cwd=args.hgdir, text=True, **kwargs)
 79 | 
 80 | def pipe_hg(cmd, **kwargs):
 81 |     logger.debug("RUNNING: " + shlex.join(cmd) + " |")
 82 |     process = subprocess.Popen(cmd, cwd=args.hgdir, text=True, stdout=subprocess.PIPE, **kwargs)
 83 |     return process.stdout
 84 | 
 85 | def find_git_repo():
 86 |     root = run_jj(["jj", "workspace", "root"]).rstrip()
 87 |     root = Path(root)
 88 |     logger.debug(f"Workspace root = {root}")
 89 |     if (root / ".jj/repo").is_dir():
 90 |         jj_repo = root / ".jj/repo"
 91 |     else:
 92 |         with open(root / ".jj/repo", "rt") as fh:
 93 |             jj_repo = Path(fh.read())
 94 |     logger.debug(f"{jj_repo=}")
 95 |     with open(jj_repo / "store/git_target") as fh:
 96 |         git_target = fh.read()
 97 |     logger.debug(f"{git_target=}")
 98 |     git_repo = (jj_repo / "store" / git_target).resolve()
 99 |     logger.info(f"{git_repo=}")
100 |     return git_repo
101 | 
102 | # Generator that parses hg commit into a stream of action events.
103 | def parse_rev(pipe):
104 |     line = pipe.readline()
105 | 
106 |     def read_until(pattern, process=lambda s: None):
107 |         nonlocal line
108 |         while line != '':
109 |             m = re.match(pattern, line)
110 |             line = pipe.readline()
111 |             if m:
112 |                 return m
113 |             else:
114 |                 process(line)
115 | 
116 |     topic = None
117 | 
118 |     def grab_topic(line):
119 |         if m := re.match(r'# EXP-Topic (.*)', line):
120 |             nonlocal topic
121 |             topic = m.group(1)
122 | 
123 |     if m := read_until(r'# User (.*?) <(.*?)>$'):
124 |         yield ('user', m.group(1), m.group(2))
125 |     if m := read_until(r'# Date \d+'):
126 |         m = re.match(r'# +(.*)', line)
127 |         line = pipe.readline()
128 |         yield ('date', m.group(1))
129 |     if m := read_until(r'(?s)([^#].*)', grab_topic):
130 |         if topic is not None:
131 |             yield('topic', topic)
132 |         desc = m.group(1)
133 |         while line != '' and not line.startswith("diff"):
134 |             desc += "\n" + line
135 |             line = pipe.readline()
136 |         yield('description', desc)
137 |     yield('patch', line, pipe)
138 | 
139 | def hg_to_git(rev):
140 |     return subprocess.check_output(["git", "cinnabar", "hg2git", rev], cwd=git_repo, text=True).rstrip()
141 | 
142 | def resolve_hg_revset(revset):
143 |     revs = []
144 |     for line in pipe_hg(["hg", "log", "-r", revset, "-T", "{node|short}\\n"]):
145 |         revs.append(line.rstrip())
146 |     return revs
147 | 
148 | def migrate(args):
149 |     import_revs = resolve_hg_revset(args.hg_revset)
150 | 
151 |     hg2git = {}
152 |     for base in resolve_hg_revset(f"parents(roots({'+'.join(import_revs)}))"):
153 |         git = hg_to_git(base)
154 |         if git == "0000000000000000000000000000000000000000":
155 |             logger.critical(f"parent revision {base} not found in jj checkout")
156 |             sys.exit(1)
157 |         hg2git[base] = git
158 | 
159 |     parents = {}
160 |     for line in pipe_hg(["hg", "log", "-r", '+'.join(import_revs), "-T", "{node|short} {p1.node|short} {p2.node|short}\\n"]):
161 |         (node, p1, p2) = line.rstrip().split()
162 |         parents[node] = [p for p in [p1, p2] if p != "000000000000"]
163 | 
164 |     print(f"Importing hg revs: {import_revs}")
165 |     topics = set()
166 | 
167 |     N = len(import_revs)
168 |     while import_revs:
169 |         logger.info(f"Processing {N - len(import_revs) + 1}/{N}...")
170 | 
171 |         # Find a rev whose parents are all known. Prefer going in the order of
172 |         # the list, because that will likely be entirely or partly
173 |         # ancestor-first.
174 |         chosen = None
175 |         for i, rev in enumerate(import_revs):
176 |             if all(p in hg2git for p in parents[rev]):
177 |                 chosen = rev
178 |                 import_revs.pop(i)
179 |                 break
180 |         else:
181 |             raise Exception(f"(internal error) No revisions with all parents known! todo={import_revs}")
182 | 
183 |         bases = [hg2git[p] for p in parents[chosen]]
184 | 
185 |         logger.info(f"  importing patch from revision {chosen} with existing parents {bases}")
186 |         run_jj(["jj", "new", *bases])
187 |         for item in parse_rev(pipe_hg(["hg", "export", chosen])):
188 |             if item[0] == 'user':
189 |                 (action, user, email) = item
190 |             elif item[0] == 'date':
191 |                 (action, date) = item
192 |             elif item[0] == 'topic':
193 |                 (action, topic) = item
194 |                 topics.add(topic)
195 |             elif item[0] == 'description':
196 |                 (action, description) = item
197 |                 dt = datetime.strptime(date, "%a %b %d %H:%M:%S %Y %z")
198 |                 iso_format = dt.strftime("%Y-%m-%dT%H:%M:%S%z")
199 |                 timestamp = iso_format[:-2] + ":" + iso_format[-2:]
200 |                 env = base_env.copy()
201 |                 env.update({
202 |                     "JJ_USER": user,
203 |                     "JJ_EMAIL": email,
204 |                     "JJ_TIMESTAMP": timestamp
205 |                 })
206 |                 run_jj(["jj", "describe", "--reset-author", "-m", description.rstrip()], env=env)
207 |             elif item[0] == 'patch':
208 |                 (action, line, input) = item
209 |                 logger.debug(f"PATCHING hg rev={chosen}")
210 |                 process = subprocess.Popen(["patch", "-p1"],
211 |                                            cwd=args.jjdir, text=True,
212 |                                            stdin=subprocess.PIPE)
213 |                 process.stdin.write(line)
214 |                 for line in input:
215 |                     process.stdin.write(line)
216 |                 process.stdin.close()
217 |                 retcode = process.wait()
218 |                 if retcode != 0:
219 |                     raise Exception("patch failed. Your base must be wrong?")
220 |             else:
221 |                 raise Exception(f"wtf is {item[0]}??")
222 | 
223 |         git_chosen = run_jj(["jj", "log", "-r", "@", "-T", "change_id.short(8)", "--no-graph"]).rstrip()
224 |         hg2git[chosen] = git_chosen
225 | 
226 |     run_jj(["jj", "new"])
227 | 
228 |     with Path(args.map).open(mode="w") as fh:
229 |         for hg, git in hg2git.items():
230 |             print(f"{hg} {git}", file=fh)
231 |     print(f"Wrote {args.map}")
232 | 
233 |     topic = args.bookmark
234 |     if not topic:
235 |         if len(topics) == 1:
236 |             topic = list(topics)[0]
237 |         elif len(topics) > 1:
238 |             logger.warning("Multiple topics found, not creating a bookmark")
239 |             logger.warning(f"Topics found: {topics}")
240 |     if topic:
241 |         run_jj(["jj", "bookmark", "create", "-r", "@-", topic])
242 | 
243 | git_repo = find_git_repo()
244 | start_op = run_jj(["jj", "op", "log", "-n1", "-T", "id.short(20)", "--no-graph"]).rstrip()
245 | 
246 | try:
247 |     migrate(args)
248 | except Exception:
249 |     logger.exception(f"Restoring to op {start_op}")
250 |     run_jj(["jj", "op", "restore", start_op])
251 |     sys.exit(1)
252 | 
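253 | # A sketch of typical use, run from the jj workspace root (the hg path is a
254 | # placeholder):
255 | #
256 | #   jj-import-hg "draft() and ancestors(.)" ~/src/mozilla-hg
257 | #
258 | # On success the hg -> change id map is written out (see --map, default
259 | # /tmp/hg-to-git.txt); on any failure the repo is restored to the starting jj
260 | # operation recorded above.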


--------------------------------------------------------------------------------
/bin/landed:
--------------------------------------------------------------------------------
  1 | #!python3
  2 | 
  3 | # - Find the most recent public changesets for a given set of local draft changesets.
  4 | # - Given two sets of changesets (one public, one draft), hg prune --pair the appropriate ones.
  5 | # - Rebase the remaining patches, if any
  6 | 
  7 | import argparse
  8 | import datetime
  9 | import json
 10 | import os
 11 | import re
 12 | import shlex
 13 | import subprocess
 14 | import sys
 15 | import textwrap
 16 | 
 17 | from collections import namedtuple
 18 | 
 19 | Node = namedtuple("Node", ["rev", "phase", "bug", "num", "reviewers", "key", "desc"])
 20 | 
 21 | p = argparse.ArgumentParser(
 22 |     usage='landed [options]', description='''
 23 | Take a set of draft revisions and find their landed equivalents, then output a
 24 | command that prunes the given revisions and sets the landed equivalents as
 25 | their successors.
 26 | 
 27 | Changesets are matched up by the phabricator revision ID in their comments, if
 28 | any. Otherwise, use the first line of their descriptions (with reviewer
 29 | metadata stripped).
 30 | 
 31 | The usual usage is to just run `landed` with no arguments from a directory
 32 | based on a stack of patches, some of which have landed already. That will loop
 33 | over all non-public ancestors and scan through mozilla-central to find patches
 34 | with matching descriptions that have already landed, and prune the local
 35 | patches while setting their successors to their already-landed equivalents.
 36 | 
 37 | By default, only the last 2 years of history will be considered (to speed up
 38 | the fairly common case where not all changesets are found.)
 39 | 
 40 | More complex example: landed -r .^^^::. --user=sfink --branch=autoland
 41 | 
 42 | Note that this will not rebase any orphaned patches for you, so if you are
 43 | pruning landed patches whose descendants have not yet landed, you will need to
 44 | rebase them (eg by running `hg evolve` or `hg evolve -a` or whatever.) '''
 45 | )
 46 | 
 47 | DEFAULT_REVSET = "not public() and ancestors(.)"
 48 | 
 49 | g = p.add_argument_group('specifying revisions')
 50 | g.add_argument("--former", "--draft", "--local", "--revisions", "-r",
 51 |                default=DEFAULT_REVSET,
 52 |                help="The revset for the revisions to prune")
 53 | g.add_argument("--topic", "-t",
 54 |                default=None,
 55 |                help="Attempt to prune all revisions in TOPIC")
 56 | g.add_argument("--landed", "--public", "--successors", "-s",
 57 |                help="The revset for the successor revisions that have landed")
 58 | g.add_argument("--user",
 59 |                help="A userid to scan to find landed revs")
 60 | g.add_argument("--branch", "-b", default='central',
 61 |                help="Label of a branch to scan for landed revisions")
 62 | g.add_argument("--landed-from",
 63 |                help="Parse this file to extract the landed revisions")
 64 | g.add_argument("--limit", "-l", type=int,
 65 |                help="Do not look more than LIMIT revisions back. Default is to defer to --datelimit.")
 66 | g.add_argument("--datelimit", type=int,
 67 |                help="Do not look more than LIMIT days back, 0 to remove limit. Default is 2 years.")
 68 | 
 69 | p.add_argument("--verbose", "-v", action="store_true",
 70 |                help="Verbose output")
 71 | p.add_argument("--debug", "-D", action="store_true",
 72 |                help="Debugging output")
 73 | 
 74 | g = p.add_argument_group('output syntax')
 75 | g.add_argument("--numeric", "-n", action="store_true",
 76 |                help="Use local numeric changeset numbers instead of hashes")
 77 | g.add_argument("--exec", action="store_true", default=None,
 78 |                help="Run the command instead of just printing it out")
 79 | g.add_argument("--noexec", dest="exec", action="store_false",
 80 |                help="Print the command, do not prompt to execute it")
 81 | 
 82 | args = p.parse_args()
 83 | 
 84 | if args.former != DEFAULT_REVSET and args.topic:
 85 |     raise Exception("-r and -t are mutually exclusive")
 86 | 
 87 | if args.topic:
 88 |     args.former = f"topic('{args.topic}')"
 89 | 
 90 | # If no other limit is requested, look back 2 years.
 91 | if not args.limit and not args.datelimit:
 92 |     args.datelimit = 365 * 2
 93 | 
 94 | wrapper = textwrap.TextWrapper(subsequent_indent='      ',
 95 |                                width=int(os.getenv('COLUMNS', '80')) - 2)
 96 | 
 97 | # Generator that processes the JSON output of `hg log` and yields revisions.
 98 | def gen_revisions(lineiter):
 99 |     stanza = None
100 |     for line in lineiter:
101 |         if stanza is None:
102 |             assert(line.strip() == "[")
103 |             stanza = ''
104 |         elif line.strip() == "]":
105 |             break
106 |         else:
107 |             stanza += line
108 |             if line.strip("\n") in (" },", " }"):
109 |                 try:
110 |                     yield json.loads(stanza.rstrip("\n,"))
111 |                 except Exception as e:
112 |                     print("Invalid JSON output from hg log: " + str(e),
113 |                           file=sys.stderr)
114 |                     print(stanza)
115 |                     raise e
116 |                 stanza = ''
117 | 
118 | 
119 | def display(desc, headerlen):
120 |     # The first line must be shortened by `headerlen` chars.
121 |     header = '.' * headerlen
122 |     return wrapper.fill(header + desc)[headerlen:]
123 | 
124 | 
125 | def gather_revisions(revset, limit=None, datelimit=None, query=None):
126 |     revs = {}
127 |     lookup = {}
128 |     if query:
129 |         lookup = {n.key: n for n in query.values()}
130 | 
131 |     cmd = [
132 |         "hg", "log",
133 |         "-r", revset,
134 |         "-T", "json"
135 |     ]
136 |     if limit:
137 |         cmd.extend(["-l", str(limit)])
138 |     if args.debug:
139 |         print(f"Running {' '.join(shlex.quote(s) for s in cmd)}")
140 | 
141 |     earliest = None
142 |     if datelimit:
143 |         earliest = datetime.datetime.now() - datetime.timedelta(days=datelimit)
144 |     report_interval = 100 if args.user else 10000
145 |     n = 0
146 |     extra = {}
147 |     if os.name == 'nt':
148 |         # hg is very noisy on Windows when you close its output before it's done.
149 |         extra['stderr'] = subprocess.DEVNULL
150 |     process = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True, encoding='utf-8', **extra)
151 |     try:
152 |         for info in gen_revisions(iter(process.stdout.readline, '')):
153 |             n += 1
154 |             if n % report_interval == 0:
155 |                 if query:
156 |                     print(f"..found {len(revs)}/{len(query)} after processing {n} revisions..", end="\r")
157 |                 else:
158 |                     print(f"..found {len(revs)} after processing {n} revisions..", end="\r")
159 | 
160 |             desc = info["desc"]
161 |             m = re.match(r"(.*)[\w\W]*?(?:Differential Revision: .*(D\d+))?$", desc)
162 |             if not m:
163 |                 raise Exception(f"invalid format: '{desc}'")
164 |             phabrev = None
165 |             if len(m.groups()) > 1:
166 |                 phabrev = m.group(2)
167 | 
168 |             m = re.match(r"[bB]ug (\d+)[^\w\[]*(.*)", desc)
169 |             if m:
170 |                 bug = m.group(1)
171 |             else:
172 |                 bug = None
173 | 
174 |             desc = desc.splitlines()[0]
175 | 
176 |             m = re.match(r"(.*?)\s*r[?=]([\w., ]+)$", desc)
177 |             if m:
178 |                 desc = m.group(1)
179 |                 reviewers = m.group(2)
180 |             else:
181 |                 reviewers = None
182 | 
183 |             key = phabrev or desc
184 |             rev = info["node"][:12]
185 |             node = Node(rev,
186 |                         info["phase"], bug, info["rev"], reviewers,
187 |                         key, desc)
188 | 
189 |             if lookup and key in lookup:
190 |                 former = lookup.pop(key)
191 |                 if former.bug != node.bug:
192 |                     print(f"\nWarning: landed {node.rev} as bug {node.bug}, draft rev is bug {former.bug}")
193 |                     print("  - " + desc)
194 |                 if args.verbose:
195 |                     print(f'\nfound {node.rev} ("{display(desc, 19)}")')
196 |                 revs[rev] = node
197 |             elif not lookup:
198 |                 revs[rev] = node
199 | 
200 |             if earliest and datetime.datetime.fromtimestamp(info['date'][0]) < earliest:
201 |                 print(f"Terminating search because the date limit was reached after {n} revisions (see --datelimit and/or --limit).")
202 |                 break
203 | 
204 |             if query and not lookup:
205 |                 # If we have a query, then lookup will have everything in the
206 |                 # query minus what we have found so far. So here, stop early
207 |                 # because we found everything.
208 |                 break
209 |     except KeyboardInterrupt:
210 |         print()  # Prevent ^C output from mixing with following text.
211 |         pass
212 | 
213 |     # Probably have \r at end of previous line.
214 |     print()
215 | 
216 |     if lookup:
217 |         print("Failed to find:")
218 |         for node in lookup.values():
219 |             print(f'  {node.rev} "{display(node.desc, 15)}"')
220 | 
221 |     return revs
222 | 
223 | 
224 | def associate_revisions(former, landed):
225 |     '''Find the matching subsets of the two input dicts, joined on their
226 |     descriptions. Discard any nonmatching elements, and return them as a pair
227 |     of value vectors, ordered by the .num field of the `former`'s values.
228 |     '''
229 | 
230 |     # I could just sort by changeset number, but this is not robust in
231 |     # situations where earlier patches in a stack were backed out and re-landed
232 |     # while later ones were not. Join them up by description, using the
233 |     # ordering of the revisions to prune.
234 |     #
235 |     # Example: A1 B A1' A2 (where A1' is the backout of A1, and A2 is an updated A1)
236 |     # Sorting by changeset number would produce [B, A2]. When correlating with
237 |     # [landedA, landedB], this would get the matching wrong.
238 |     oldv = sorted(former.values(), key=lambda n: n.num)
239 |     bykey = {n.key: n for n in oldv}
240 |     newv = [n for n in landed.values() if n.key in bykey]
241 |     newv.sort(key=lambda n: bykey[n.key].num)
242 |     old_used = set(bykey[n.key] for n in newv)
243 |     oldv = [n for n in oldv if n in old_used]
244 |     return oldv, newv
245 | 
246 | 
247 | if args.verbose:
248 |     print("Gathering revisions to prune...")
249 | former = gather_revisions(args.former)
250 | print(f"Gathered {len(former)} revisions to obsolete")
251 | 
252 | if args.landed_from:
253 |     pieces = []
254 |     with open(args.landed_from, "rt") as fh:
255 |         for line in fh:
256 |             # a revision url, as if it were cut & paste from an automated bug
257 |             # comment
258 |             m = re.search(r'/rev/(\w+)', line)
259 |             if m:
260 |                 pieces.append(m.group(1))
261 |                 # a short hash
262 |             else:
263 |                 m = re.match(r'^([a-f0-9]{12})$', line)
264 |                 if m:
265 |                     pieces.append(m.group(1))
266 |                 else:
267 |                     if args.debug:
268 |                         print(f"Ignoring: {line}")
269 |                         landed = gather_revisions("+".join(pieces), args.limit, args.datelimit)
270 | elif args.landed:
271 |     landed = gather_revisions(args.landed, args.limit, args.datelimit)
272 | else:
273 |     print(f"Scanning {args.branch} for matching ancestor revisions...")
274 |     revspec = f"reverse(ancestors({args.branch})) and public()"
275 |     if args.user:
276 |         revspec += f" and user('{args.user}')"
277 |     landed = gather_revisions(revspec, args.limit, args.datelimit, query=former)
278 |     print(f"Found {len(landed)}/{len(former)} successor revisions")
279 |     if not landed:
280 |         sys.exit(1)
281 | 
282 | if args.debug:
283 |     print(f"old = {former.keys()}\n")
284 |     print(f"new = {landed.keys()}\n")
285 | 
286 | oldv, newv = associate_revisions(former, landed)
287 | 
288 | if any(n.phase == 'public' for n in oldv):
289 |     print("This command is only for obsoleting draft revs")
290 |     sys.exit(1)
291 | 
292 | if any(n.phase != 'public' for n in newv):
293 |     print("Cannot obsolete public revs")
294 |     sys.exit(1)
295 | 
296 | failed = False
297 | for i in range(len(oldv)):
298 |     old = oldv[i]
299 |     new = newv[i]
300 |     print(f"  {old.rev} -> {new.rev} {new.desc}")
301 |     olddesc = re.sub(r' r=\S+', '', old.desc)
302 |     newdesc = re.sub(r' r=\S+', '', new.desc)
303 |     if olddesc != newdesc:
304 |         print(f"\nCowardly refusing to obsolete\n  {display(old.desc, 2)}\nwith\n  {display(new.desc, 2)}\nbecause the descriptions are not identical.")
305 |         if input("Use it anyway? (y/n) ").startswith("y"):
306 |             continue
307 |         failed = True
308 | 
309 | if failed:
310 |     sys.exit("Exiting due to mismatch")
311 | 
312 | 
313 | def vec2revset(vec):
314 |     seq = []
315 |     for node in vec:
316 |         if not seq:
317 |             seq.append([node, node])
318 |         elif int(node.num) == int(seq[-1][1].num) + 1:
319 |             seq[-1][1] = node
320 |         else:
321 |             seq.append([node, node])
322 | 
323 |     if args.numeric:
324 |         return '+'.join([first.num if first == last
325 |                          else f"{first.num}::{last.num}"
326 |                          for first, last in seq])
327 |     else:
328 |         return '+'.join([first.rev if first == last
329 |                          else f"{first.rev}::{last.rev}"
330 |                          for first, last in seq])
331 | 
332 | 
333 | oldrevset = vec2revset(oldv)
334 | newrevset = vec2revset(newv)
335 | 
336 | #old_descendants = sorted(
337 | #    gather_revisions(f"descendants({oldv[0].rev})").values(),
338 | #    key=lambda v: v.num
339 | #)
340 | 
341 | print()
342 | cmd = ["hg", "prune", "--pair", "-r", oldrevset, "--succ", newrevset]
343 | print("COMMAND: " + " ".join(cmd))
344 | 
345 | oldrevs = set(node.rev for node in oldv)
346 | #remnant = [node for node in old_descendants if node.rev not in oldrevs]
347 | remnant = [node for node in former.values() if node.rev not in oldrevs]
348 | 
349 | if len(remnant) > 0:
350 |     # Options:
351 |     # - collapse the stack (if relevant) and rebase onto current tip
352 |     # - collapse the stack (if relevant) and rebase onto latest landed
353 |     # - rebase everything onto its successor
354 |     # - leave it alone, rebase nothing
355 |     # - collapse the stack (if relevant) and rebase onto original base
356 |     #   aka just remove the obsoleted things (bad idea if they depend
357 |     #   on them in some way)
358 |     #
359 |     # Consider doing a per-patch selection. (So if something is a failed attempt,
360 |     # leave it in place, but rebase everything else.)
361 |     #
362 |     # Consider only looking at the former revs.
363 | 
364 |     new_base = max(newv, key=lambda e: e.num)
365 | 
366 |     nodes = gather_revisions(f"last(public() and ancestors({oldv[0].rev}))")
367 |     if len(nodes.keys()) != 1:
368 |         print(f"Failed to identify src base rev")
369 |         sys.exit(1)
370 |     src_base = next(iter(nodes.values()))
371 |     print(f"..src_base = {src_base.rev}")
372 | 
373 |     nodes = gather_revisions(f"last(public() and ancestors({newv[0].rev}))")
374 |     if len(nodes.keys()) != 1:
375 |         print(f"Failed to identify dest base rev")
376 |         sys.exit(1)
377 |     dst_base = next(iter(nodes.values()))
378 |     print(f"..dst_base = {src_base.rev}")
379 | 
380 | if args.exec is None:
381 |     args.exec = input("Run the above command? (y/n) ") == "y"
382 | if args.exec:
383 |     subprocess.check_call(cmd)
384 | else:
385 |     print("(Copy & paste the above command, or rerun with --exec)")
386 | 
387 | # Any changesets based on an obsoleted revset?
388 | if len(remnant) > 0:
389 |     nodes = gather_revisions(args.branch)
390 |     if len(nodes) != 1:
391 |         print(f"Failed to identify tip of {args.branch}")
392 |         sys.exit(1)
393 |     branch_head = next(iter(nodes.values()))
394 |     print(f"..branch_head = {src_base.rev}")
395 | 
396 |     print(f"After pruning those revisions, there will be {len(remnant)} orphaned changeset(s):")
397 |     subprocess.check_call([
398 |         "hg", "log",
399 |         "--template", "{node|short} {desc|firstline} {instabilities}\n",
400 |         "--graph",
401 |         "-r", vec2revset(remnant)
402 |     ])
403 |     #for r in remnant:
404 |     #    print(f"  {r.rev} {r.desc}")
405 | 
406 |     #p = subprocess.Popen(["hg", "fxheads", "-T", '{node|short} {join(fxheads, " ")}\\n'], stdout=subprocess.PIPE, text=True)
407 |     #out = p.communicate()
408 | 
409 |     if len(set([src_base.rev, dst_base.rev, branch_head.rev])) == 1:
410 |         base = src_base
411 |     else:
412 |         print("What would you like to rebase them onto?")
413 |         print(f"1. Current branch head ({branch_head.rev})")
414 |         print(f"2. Landed parent ({dst_base.rev})")
415 |         print(f"3. Former base (just remove obsoleted revs from current stack) ({src_base.rev}")
416 |         base_choice = input("Rebase destination> ")
417 |         try:
418 |             base_choice = int(base_choice)
419 |         except ValueError:
420 |             base_choice = 0
421 |         if base_choice < 1 or base_choice > 3:
422 |             print("Invalid option.")
423 |             sys.exit(1)
424 |         base = (None, branch_head.rev, dst_base.rev, src_base.rev)[base_choice]
425 | 
426 |     cmd = ["hg", "rebase", "-d", base, "-r", vec2revset(remnant)]
427 |     print("COMMAND: " + " ".join(cmd))
428 |     if input("Run the above command? (y/n) ") == "y":
429 |         subprocess.check_call(cmd)
430 | 


--------------------------------------------------------------------------------
/bin/mk-task-runner:
--------------------------------------------------------------------------------
 1 | #!/usr/bin/python3
 2 | 
 3 | import argparse
 4 | import json
 5 | import os
 6 | import requests
 7 | import shlex
 8 | import sys
 9 | 
10 | parser = argparse.ArgumentParser(description='generate a shell script to run a taskcluster job')
11 | parser.add_argument('--root-url', '-u', default='https://firefox-ci-tc.services.mozilla.com',
12 |                     help='taskcluster root URL')
13 | parser.add_argument('task', help='ID of task to replicate')
14 | 
15 | # FIXME!!!
16 | parser.add_argument('--source', '-s', default='/home/sfink/src/mozilla2',
17 |                     help='source directory')
18 | parser.add_argument('--sourcename', default='source2',
19 |                     help='source directory in container')
20 | 
21 | opts = parser.parse_args()
22 | 
23 | task_url = 'https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/{}'.format(opts.task)
24 | payload = requests.get(task_url).json()["payload"]
25 | payload["env"]["TASKCLUSTER_ROOT_URL"] = opts.root_url
26 | HOME = payload["env"].setdefault("HOME", "/builds/worker")
27 | # payload["env"]["GECKO_PATH"] = '/builds/worker/source'
28 | SRC = os.path.join(HOME, opts.sourcename)
29 | 
30 | with open("run-task.sh", "w") as fh:
31 |     print('#!/bin/bash -x', file=fh)
32 | 
33 |     print('''\
34 | if [ -z "$container" ]; then
35 |   echo "Running outside of a container."
36 |   if [ $# -ne 1 ]; then
37 |     echo "container name must be given on command line" >&2
38 |     exit 1
39 |   fi
40 |   #exec sudo podman exec -ti "$1" -w {HOME} -u worker bash
41 |   exec sudo podman exec -ti "$1" bash
42 | fi
43 | '''.format(HOME=HOME), file=fh)
44 | 
45 |     # Write out the environment settings.
46 |     for k, v in payload["env"].items():
47 |         print("export {}={}".format(shlex.quote(k), shlex.quote(v)), file=fh)
48 | 
49 |     print('cd {}'.format(HOME), file=fh)
50 |     print('mkdir $UPLOAD_DIR')
51 |     print('if [ -d {}/{}/gcc ]; then unset MOZ_FETCHES; fi'.format(HOME, payload["env"]["MOZ_FETCHES_DIR"]), file=fh)
52 | 
53 |     # Write out the command to execute.
54 |     command = []
55 |     for i, cmd in enumerate(payload["command"]):
56 |         if i == 0:
57 |             command.append(SRC + '/taskcluster/scripts/run-task')
58 |             command.append('--keep')
59 |             command.append('--existing-gecko-checkout=' + SRC)
60 |         else:
61 |             command.append(cmd)
62 | 
63 |     command_str = ' '.join(shlex.quote(x)
64 |                            for x
65 |                            in command
66 |                            if 'fetch-hgfingerprint' not in x)
67 |     print('''\
68 | if [ "$1" = "--shell" ]; then
69 |   echo "Would have run:"
70 |   echo -- {command}
71 | else
72 |   rm -rf {HOME}/workspace/*
73 |   {command} >&1 | tee build.log
74 |   echo "Running post-job shell"
75 | fi
76 | export PS1='task \h:\w\$ '
77 | exec bash
78 | '''.format(command=command_str, HOME=HOME), file=fh)
79 | 
80 | os.chmod("run-task.sh", 0o777)
81 | print("Wrote run-task.sh")
82 | 


--------------------------------------------------------------------------------
/bin/mkgist:
--------------------------------------------------------------------------------
  1 | #!/usr/bin/python3
  2 | 
  3 | import argparse
  4 | import json
  5 | import os
  6 | import re
  7 | import requests
  8 | import sys
  9 | 
 10 | from os.path import join, basename, dirname, exists
 11 | 
 12 | # Sample ~/.config/mkgist.json file:
 13 | # {
 14 | #     "version": 1,
 15 | #     "authtoken": "feeddeadbeef2daddeadbeefdeaddeadbeefdad1"
 16 | # }
 17 | 
 18 | parser = argparse.ArgumentParser(description = 'Create or edit a gist')
 19 | parser.add_argument('-u', '--url', default="https://api.github.com/gists",
 20 |                     help='github gist api url')
 21 | parser.add_argument('--token', type=str,
 22 |                     help='auth token')
 23 | parser.add_argument('-f', '--filename', default='text.txt',
 24 |                     help='name of file to create')
 25 | parser.add_argument('-d', '--description', default='mkgist-created gist',
 26 |                     help='description of gist')
 27 | parser.add_argument('--secret', action='store_true',
 28 |                     help='create a secret gist instead of a public one')
 29 | parser.add_argument('--update', default='',
 30 |                     help='update an existing gist instead of creating a new one')
 31 | parser.add_argument('-a', '--all', action='store_true',
 32 |                     help='output all URLs')
 33 | parser.add_argument('data', nargs='*',
 34 |                     help='actual data to post')
 35 | 
 36 | args = parser.parse_args()
 37 | 
 38 | cfgpath = os.path.expanduser("~/.config/mkgist.json")
 39 | with open(cfgpath) as fh:
 40 |     cfg = json.load(fh)
 41 |     assert cfg['version'] == 1
 42 | 
 43 | if args.token is None:
 44 |     args.token = cfg['authtoken']
 45 | 
 46 | if len(args.data) == 1 and exists(args.data[0]):
 47 |     print("This script takes the data on the command line or stdin. You passed a filename. Don't do that. I'll let it go this time.")
 48 |     filename = args.data[0]
 49 |     with open(filename, "r") as fh:
 50 |         args.data = [ fh.read() ]
 51 |     if args.filename == 'text.txt':
 52 |         args.filename = os.path.basename(filename)
 53 | 
 54 | if not args.data:
 55 |     print("Reading data from stdin")
 56 |     args.data = [ sys.stdin.read() ]
 57 | 
 58 | payload = {
 59 |     "description": args.description,
 60 |     "public": not args.secret,
 61 |     "files": {
 62 |         args.filename: {
 63 |             "content": ' '.join(args.data)
 64 |         },
 65 |     }
 66 | }
 67 | 
 68 | auth = {'Authorization': 'bearer ' + args.token}
 69 | 
 70 | if args.update:
 71 |     print("Updating gist")
 72 |     if m := re.search(r'[0-9a-f]{32}', args.update):
 73 |         args.update = m.group(0)
 74 |     r = requests.patch(args.url + '/' + args.update, headers=auth, data=json.dumps(payload))
 75 | else:
 76 |     print("Posting gist")
 77 |     r = requests.post(args.url, headers=auth, data=json.dumps(payload))
 78 | obj = r.json()
 79 | with open("/tmp/mkgist.raw", "w") as fh:
 80 |     fh.write(r.text)
 81 | 
 82 | if 'errors' in obj or not obj.get('status', '200').startswith('2'):
 83 |     print('Error: {}'.format(obj['message']), file=sys.stderr)
 84 |     for error in obj.get('errors', []):
 85 |         print('  {}'.format(json.dumps(error)))
 86 |     sys.exit(1)
 87 | 
 88 | raw_url = obj['files'][args.filename]['raw_url']
 89 | short_raw = join(dirname(dirname(raw_url)), basename(raw_url))
 90 | 
 91 | if args.all:
 92 |     fields = obj.copy()
 93 |     fields.update(obj['files'][args.filename])
 94 |     fields['short_raw'] = short_raw
 95 |     fields['git_push_url'] = fields['git_pull_url'].replace('https://', 'ssh://git@')
 96 |     print('''\
 97 | short    : {short_raw}
 98 | html     : {html_url}
 99 | json     : {url}
100 | pull     : {git_pull_url}
101 | push     : {git_push_url}
102 | comments : {comments_url}
103 | raw      : {raw_url}
104 |     '''.format(**fields))
105 | else:
106 |     print(short_raw)
107 | 


--------------------------------------------------------------------------------
/bin/re-ssh-agent:
--------------------------------------------------------------------------------
 1 | #!/usr/bin/perl -w
 2 | 
 3 | my $pidof = "/bin/pidof";
 4 | $pidof = "/sbin/pidof" if ! -x $pidof;
 5 | 
 6 | # Find the ssh-agent pid.
 7 | chomp(my $pids = `$pidof ssh-agent`);
 8 | my @pids = split(/ /, $pids);
 9 | if (@pids == 0) {
10 |   exec("/usr/bin/ssh-agent") or die "Cannot run ssh-agent";
11 | } elsif (@pids > 1) {
12 |   print STDERR "Multiple ssh-agent processes found.\n";
13 | }
14 | 
15 | my $pid = $pids[0];
16 | 
17 | my @socks = glob("/tmp/ssh-*/agent.*");
18 | my $sock;
19 | 
20 | my @ssh_socks;
21 | 
22 | if (@socks == 1) {
23 |     ($sock) = @socks;
24 | } elsif (@socks > 1) {
25 |     SOCKSEARCH: for my $s (@socks) {
26 |         my ($sock_pid) = $s =~ /agent.(\d+)$/;
27 | 	if (! $pid) {
28 | 	    # Check for SSH-forwarded agent
29 | 	    chomp(my $exe = qx(sudo readlink /proc/$sock_pid/exe/ 2>&1));
30 | 	    if ($exe =~ /\bsshd$/) {
31 | 		push @ssh_socks, $s;
32 | 	    }
33 | 	    next SOCKSEARCH;
34 | 	}
35 | 
36 | 	# Check whether candidate socket pid is the process listed
37 | 	# after --exit-with-session
38 | 	my $cmd = qx(cat /proc/$pid/cmdline);
39 | 	my @cmd = split(/\0/, $cmd);
40 | 	for (0 .. $#cmd) {
41 | 	    if ($cmd[$_] eq '--exit-with-session') {
42 | 		my $control_cmd = $cmd[$_+1];
43 | 		if (qx(ps ww $sock_pid) =~ /\Q$control_cmd\E/) {
44 | 		    $sock = $_;
45 | 		    last SOCKSEARCH;
46 | 		}
47 | 	    }
48 | 	}
49 | 
50 |         # Find the agent socket from lsof output.
51 |         #my $self = $$;
52 |         #my ($shell) = `cat /proc/$self/status` =~ /PPid:\s*(\d+)/;
53 |         #my ($sshd) = `cat /proc/$shell/status` =~ /PPid:\s*(\d+)/;
54 |         ($sock) = `sudo lsof -p $sock_pid 2>/dev/null` =~ m!(/tmp/ssh-XXX\w+/agent\.\d+)!;
55 |         last SOCKSEARCH if ($sock);
56 | 
57 | 	# $sock_pid is ancestor of $pid?
58 | 	my $p = $pid;
59 | 	while ($p != 1) {
60 | 	    if ($p == $sock_pid) {
61 | 		$sock = $_;
62 | 		last SOCKSEARCH;
63 | 	    }
64 | 	    if (qx(ps --no-heading -o ppid $p) =~ /(\d+)/) {
65 | 		$p = $1;
66 | 	    } else {
67 | 		last;
68 | 	    }
69 | 	}
70 |     }
71 | }
72 | 
73 | if (!$sock && @ssh_socks == 1) {
74 |     ($sock) = @ssh_socks;
75 | }
76 | 
77 | if (defined $sock) {
78 |     print "SSH_AUTH_SOCK=$sock; export SSH_AUTH_SOCK\n";
79 | } else {
80 |     if ($pid) {
81 | 	die "Unable to find socket for ssh-agent pid $pid";
82 |     } else {
83 | 	die "Unable to find ssh-forwarded socket";
84 |     }
85 | }
86 | 
87 | if (defined $pid) {
88 |     print "SSH_AGENT_PID=$pid; export SSH_AGENT_PID\n";
89 | }
90 | 


--------------------------------------------------------------------------------
/bin/rr-exits:
--------------------------------------------------------------------------------
 1 | #!/usr/bin/perl
 2 | 
 3 | use strict;
 4 | 
 5 | my $top = "$ENV{HOME}/.rr";
 6 | opendir(my $dir, $top);
 7 | for my $tracedir (readdir($dir)) {
 8 |     next if $tracedir =~ /^\./;
 9 |     next if ! -d "$top/$tracedir";
10 |     next if $tracedir eq 'latest-trace';
11 |     open(my $fh, "rr ps $tracedir 2>/dev/null |");
12 |     my ($worst, $worst_st);
13 |     my $zeropid;
14 |     while(<$fh>) {
15 |         chomp;
16 |         my (undef, undef, $status) = split(/\s+/, $_);
17 |         next if $status eq 'EXIT';
18 |         if ($status == 0) {
19 |             $zeropid = $_;
20 |             next;
21 |         }
22 | 
23 |         $worst //= $_;
24 |         $worst_st //= $status;
25 |         if ($status < $worst_st) {
26 |             $worst_st = $status;
27 |             $worst = $_;
28 |         }
29 |     }
30 |     $worst ||= $zeropid;
31 |     if ($worst) {
32 |         printf("% 20s %s\n", $tracedir, $worst);
33 |     }
34 | }
35 | 


--------------------------------------------------------------------------------
/bin/run-taskcluster-job:
--------------------------------------------------------------------------------
  1 | #!/usr/bin/python
  2 | 
  3 | import argparse
  4 | import json
  5 | import os
  6 | import re
  7 | import requests
  8 | import subprocess
  9 | import shlex
 10 | import sys
 11 | import textwrap
 12 | 
 13 | DEFAULT_ENV_FILE = "/tmp/task_env.sh"
 14 | DEFAULT_IMAGE = "docker.io/library/debian10-amd64-build:latest"
 15 | ARTIFACT_URL = "https://firefoxci.taskcluster-artifacts.net"
 16 | ROOT_URL = "https://firefox-ci-tc.services.mozilla.com"
 17 | 
 18 | 
 19 | class HelpFormatter(argparse.HelpFormatter):
 20 |     '''Formatter class that preserves blank lines but still allows reflowing'''
 21 |     def _fill_text(self, text, width, indent):
 22 |         if not text.startswith('[keep-blank-lines]'):
 23 |             return super()._fill_text(text, width, indent)
 24 | 
 25 |         text = text.replace('[keep-blank-lines]', '').lstrip()
 26 | 
 27 |         chunks = [[]]
 28 |         for raw in text.splitlines():
 29 |             if raw == '':
 30 |                 chunks.append([])
 31 |             else:
 32 |                 chunks[-1].append(raw)
 33 | 
 34 |         formatted = ''
 35 |         for chunk in chunks:
 36 |             formatted += textwrap.fill(
 37 |                 ' '.join(chunk),
 38 |                 width,
 39 |                 initial_indent=indent,
 40 |                 subsequent_indent=indent
 41 |             ) + "\n\n"
 42 |         return formatted
 43 | 
 44 | 
 45 | parser = argparse.ArgumentParser(
 46 |     description='Run a taskcluster job in a local docker container.',
 47 |     epilog='''[keep-blank-lines]
 48 | Basic usage is to pass --log-task-id with the task ID that you are trying to
 49 | replicate. But you will probably end up wanting to re-run it, and perhaps
 50 | mount your Gecko checkout into the container instead of checking out one
 51 | from scratch (which is slow and burns lots of disk space).
 52 | 
 53 | For re-running, you can use --container (with no argument) and it will present
 54 | a list of available containers. Hopefully you have few enough that you can tell
 55 | which one it is!
 56 | 
 57 | For mounting your gecko checkout, pass
 58 | 
 59 |     --mount =/builds/worker/checkouts/gecko
 60 | 
 61 | and note that this will remove the `--gecko-checkout=...` portion of $COMMAND.
 62 | 
 63 | Once you have a shell running within the container, you may use $COMMAND
 64 | to run the task. You may want to set
 65 | 
 66 |     MOZ_FETCHES='[]'
 67 | 
 68 | after the first run to avoid re-fetching lots of dependencies.''',
 69 |     formatter_class=HelpFormatter,
 70 | )
 71 | 
 72 | parser.add_argument("--log-task-id", metavar='TASK_ID',
 73 |                     help="The task you are trying to replicate. Its log file "
 74 |                     "will be scanned for the task ID that provided the base "
 75 |                     "image to run.")
 76 | parser.add_argument("--load-task-id", metavar='TASK_ID',
 77 |                     help="The toolchain task that generated the image to use. "
 78 |                     "This will be passed to `mach load-taskcluster-image`.")
 79 | parser.add_argument("--task-id",
 80 |                     help="The task you are trying to replicate. Use this "
 81 |                     "instead of --log-task-id if you have already pulled "
 82 |                     "down the image.")
 83 | parser.add_argument("--image", nargs="?", const="infer", default=None,
 84 |                     help="The image to create a new docker container out of, "
 85 |                     "omit IMAGE to select from available")
 86 | parser.add_argument("--container", nargs="?", const="infer", default=None,
 87 |                     help="An existing container to run a shell in, omit "
 88 |                     "CONTAINER to select from available")
 89 | parser.add_argument("--env-file",
 90 |                     help="shell script to set env vars for the container. "
 91 |                     "Normally auto-generated")
 92 | parser.add_argument("--mount", nargs="*",
 93 |                     help="files or directories to mount into the container, "
 94 |                     "in the format /outer/path=/inner/path")
 95 | parser.add_argument("--root-url", default=ROOT_URL,
 96 |                     help=f"taskcluster root url (default {ROOT_URL})")
 97 | parser.add_argument("--verbose", "-v", default=0, action="count", help="Verbose output")
 98 | args = parser.parse_args()
 99 | 
100 | if args.log_task_id:
101 |     print("Grabbing the log file for a run of a task and extracting the docker image task ID")
102 |     log_url = f"{ARTIFACT_URL}/{args.log_task_id}/0/public/logs/live_backing.log"
103 |     log = requests.get(log_url).text
104 |     m = re.search(r'Downloading artifact "public/image.tar.zst" from task ID: (.*)\.\n', log)
105 |     if not m:
106 |         m = re.search(r"Image 'public/image.tar.zst' from task '(.*?)' loaded", log)
107 |         if not m:
108 |             print("Could not find image download line in log file")
109 |             sys.exit(1)
110 | 
111 |     args.load_task_id = m.group(1)
112 |     args.task_id = args.log_task_id
113 | 
114 | if args.load_task_id:
115 |     print(f"Loading taskcluster image '{args.load_task_id}'")
116 |     out = subprocess.check_output(["mach", "taskcluster-load-image",
117 |                                    "--task-id", args.load_task_id]).decode()
118 |     if m := re.search(r'Loaded image: (\S+)', out):
119 |         args.image = m.group(1)
120 |     if m := re.search(r'Found docker image: (\S+)', out):
121 |         args.image = m.group(1)
122 | 
123 | if args.task_id and not args.env_file:
124 |     args.env_file = DEFAULT_ENV_FILE
125 |     print(f"Extracting env settings from task and storing in {args.env_file}")
126 |     task = requests.get(f"{args.root_url}/api/queue/v1/task/{args.task_id}").json()
127 |     payload = task["payload"]
128 |     env = payload["env"]
129 | 
130 |     command = shlex.quote(shlex.join(payload['command']))
131 |     for mount in args.mount:
132 |         if mount.startswith("/builds/worker/checkouts/gecko="):
133 |             command = re.sub(r'--gecko-checkout=\S+', '', command)
134 | 
135 |     with open(args.env_file, "wt") as fh:
136 |         for k, v in env.items():
137 |             print(f"export {k}={shlex.quote(v)}", file=fh)
138 |         print(f"export COMMAND={command}", file=fh)
139 |         print(f"export TASKCLUSTER_ROOT_URL={args.root_url}", file=fh)
140 |     print(f"Wrote {args.env_file}")
141 | 
142 | if not args.env_file and os.path.exists(DEFAULT_ENV_FILE):
143 |     args.env_file = DEFAULT_ENV_FILE
144 | 
145 | 
146 | def choose(prompt, descriptions):
147 |     if len(descriptions) == 1:
148 |         return 0
149 |     while True:
150 |         print(prompt)
151 |         for i, desc in enumerate(descriptions, 1):
152 |             print(f"({i}) {desc}")
153 |         response = input()
154 |         idx = int(response)
155 |         if idx > 0 and idx <= len(descriptions):
156 |             return idx - 1
157 | 
158 | 
159 | start_container = False
160 | if args.container == "infer":
161 |     containers = []
162 |     cmd = ["docker", "container", "ps", "-a", "--format", "{{json .}}"]
163 |     if args.verbose > 0:
164 |         print(" ".join(cmd))
165 |     for line in subprocess.check_output(cmd, text=True).splitlines():
166 |         containers.append(json.loads(line))
167 | 
168 |     def describe(c):
169 |         return f"container {c['ID']} using image {c['Image']} state={c['State']} running {c['Command']}"
170 | 
171 |     idx = choose("Choose from the following containers:", [describe(c) for c in containers])
172 |     args.container = containers[idx]["ID"]
173 |     start_container = containers[idx]["State"] != "running"
174 | 
175 | if not args.container and args.image == "infer":
176 |     images = []
177 |     cmd = ["docker", "images", "--format", "{{json .}}"]
178 |     if args.verbose > 0:
179 |         print(" ".join(cmd))
180 |     for line in subprocess.check_output(cmd, text=True).splitlines():
181 |         images.append(json.loads(line))
182 |     idx = choose(
183 |         "Choose from the following images:",
184 |         [f"{image['ID']} (repo={image['Repository']})" for image in images]
185 |     )
186 |     args.image = images[idx]["ID"]
187 | 
188 | if args.image:
189 |     print(f"Running a new container in docker image {args.image}")
190 |     cmd = [
191 |         "docker", "run", "-ti",
192 |         "--cap-add=SYS_PTRACE",
193 |         "--security-opt", "seccomp=unconfined",
194 |     ]
195 |     if args.env_file:
196 |         print("Note that the command will be stored in the $COMMAND env var")
197 |         print("Once the shell starts, it can be executed by typing $COMMAND:")
198 |         cmd += ["-v", f"{args.env_file}:/etc/profile.d/task.sh:z"]
199 |     # Oops... I kinda forgot about this hack...
200 |     if os.path.exists("/home/sfink/bin"):
201 |         cmd += ["-v", "/home/sfink/bin:/usr/local/bin:z"]
202 |     for mount in args.mount:
203 |         outer, inner = mount.split("=")
204 |         cmd += ["-v", f"{outer}:{inner}:z"]
205 |     cmd += [args.image, "bash", "-l"]
206 |     if args.verbose > 0:
207 |         print(" ".join(cmd))
208 |     subprocess.call(cmd)
209 | elif args.container:
210 |     print(f"Running a shell in docker container {args.container}")
211 |     if start_container:
212 |         cmd = ["docker", "start", "-a", "-i", args.container]
213 |     else:
214 |         cmd = ["docker", "exec", "-ti", args.container, "bash", "-l"]
215 |     if args.verbose > 0:
216 |         print(" ".join(cmd))
217 |     subprocess.call(cmd)
218 | 


--------------------------------------------------------------------------------
/bin/sum-minor:
--------------------------------------------------------------------------------
  1 | #!/usr/bin/perl
  2 | 
  3 | use strict;
  4 | use warnings;
  5 | use Getopt::Long;
  6 | 
  7 | my @modes;
  8 | my $action = 'median';
  9 | my $whichrun;
 10 | GetOptions("median!" => sub { $action = 'median'  },
 11 |            "mean!" => sub { $action = 'mean'  },
 12 |            "--mode=s" => \@modes,
 13 |            "--run=s" => \$whichrun,
 14 |     );
 15 | 
 16 | # Default: look for runs where nursery was either on or off.
 17 | @modes = qw(on off) if @modes == 0;
 18 | 
 19 | my %data; # { "mode-variant" => [run: { field name => [value] } ] }
 20 | my @FIELDNAMES; # [ field name, raw or "nursery.whatever" ]
 21 | my %nonnumeric;
 22 | my %ALLFIELDS; # Set of all field names
 23 | 
 24 | my @order;
 25 | my %seen;
 26 | 
 27 | my %found_mode;
 28 | 
 29 | my @variants = @ARGV ? @ARGV : qw(inbound jit);
 30 | 
 31 | if (-d "results") {
 32 |     my $dir;
 33 |     if (defined $whichrun) {
 34 |         ($dir) = grep { -d $_ } ($whichrun, "results/$whichrun", "results/run$whichrun");
 35 |     } else {
 36 |         my ($latest) = qx(ls -t results);
 37 |         chomp($latest);
 38 |         $dir = "results/$latest";
 39 |         print "Reporting on $dir\n";
 40 |     }
 41 |     chdir($dir) or die "cd $dir: $!";
 42 | }
 43 | 
 44 | for my $variant (@variants) {
 45 |     MODE: for my $mode (@modes) {
 46 |         for my $n (1..5) {
 47 |             my $logfile = "$variant.$mode.$n.log";
 48 |             next MODE if ! -e $logfile;
 49 |             $found_mode{$mode} = 1;
 50 |             open(FILE, "<", $logfile) or die "open $logfile: $!";
 51 |             while() {
 52 |                 if (/^MinorGC:/) {
 53 |                     my @F = split;
 54 |                     if (!@FIELDNAMES) {
 55 |                         @FIELDNAMES = map { "nursery.$_" } @F;
 56 |                     }
 57 |                     next if $F[1] eq 'Reason';
 58 |                     for my $i (0 .. $#F) {
 59 |                         my $field = $FIELDNAMES[$i];
 60 |                         push @{ $data{"$mode-$variant"}[$n]{$field} }, $F[$i];
 61 |                         $nonnumeric{$field} = 1 if $F[$i] !~ /^\d+$/;
 62 |                         $ALLFIELDS{$field} = 1;
 63 |                         push @order, $field if ! $seen{$field}++ && !$nonnumeric{$field};
 64 |                     }
 65 |                 }
 66 |             }
 67 |             
 68 |             $logfile = "$variant.$mode.$n.txt";
 69 |             open(FILE, "<", $logfile) or die "open $logfile: $!";
 70 |             while() {
 71 |                 if (/^(\w+)(?: \([^)]*\))?: (\d+)/) {
 72 |                     $data{"$mode-$variant"}[$n]{$1} = [ $2 ];
 73 |                     $ALLFIELDS{$1} = 1;
 74 |                     push @order, $1 if ! $seen{$1}++;
 75 |                 }
 76 |             }
 77 |         }
 78 |     }
 79 | }
 80 | 
 81 | sub compute_average {
 82 |     my ($rundata, $field) = @_;
 83 |     my @sums;
 84 |     for my $n (1..5) {
 85 |         my $sum = 0;
 86 |         $sum += $_ foreach @{ $rundata->[$n]{$field} };
 87 |         push @sums, $sum;
 88 |     }
 89 | 
 90 |     if ($action eq 'median') {
 91 |         return (sort { $a <=> $b } @sums)[int(@sums / 2)];
 92 |     } elsif ($action eq 'mean') {
 93 |         my $sum = 0;
 94 |         $sum += $_ foreach @sums;
 95 |         return $sum / @sums;
 96 |     } else {
 97 |         return 'fnord';
 98 |     }
 99 | }
100 | 
101 | if (keys %found_mode == 0) {
102 |     die "No results found for any mode!\n";
103 | }
104 | 
105 | for my $mode (sort keys %found_mode) {
106 |     print "$action with nursery strings $mode of ", join(" -> ", @variants), "\n";
107 |     for my $field (@order) {
108 |         next if $field eq 'nursery.Size'; # This is a sum of nursery sizes across the run. Not helpful.
109 |         my $base = compute_average($data{"$mode-$variants[0]"}, $field);
110 |         my $base_total_time = compute_average($data{"$mode-$variants[0]"}, 'nursery.total');
111 |         my $istime = $field =~ /^nursery\./;
112 |         next if $istime && $base < 500;
113 |         next if $istime && $base < 0.01 * $base_total_time;
114 |         (my $pfield = $field) =~ s/nursery./(nursery) /;
115 |         printf "% 20s: ", $pfield;
116 |         for my $i (0 .. $#variants) {
117 |             print " -> " if $i;
118 |             my $score = compute_average($data{"$mode-$variants[$i]"}, $field);
119 |             printf("% 8d", $score);
120 |             if ($i) {
121 |                 my $delta = $score - $base;
122 |                 printf " % +6d (%+ 5.1f%%) (%+ 6.2f%%)", $delta, $delta / $base * 100, $delta / $base_total_time * 100;
123 |                 print(($istime xor $delta > 0) ? " improvement" : " regression");
124 |             }
125 |         }
126 |         print "\n";
127 |     }
128 | }
129 | 


--------------------------------------------------------------------------------
/bin/viewsetup:
--------------------------------------------------------------------------------
  1 | #!/usr/bin/python
  2 | 
  3 | import argparse
  4 | import json
  5 | import os
  6 | import re
  7 | import subprocess
  8 | 
  9 | from collections import defaultdict
 10 | 
 11 | KiB = 2 ** 10
 12 | MiB = 2 ** 20
 13 | GiB = 2 ** 30
 14 | 
 15 | allowed_actions = [
 16 |     'create-mapping',
 17 |     'create-md',
 18 |     'create-vmdk',
 19 |     'list',
 20 |     'remove',
 21 |     'all'
 22 | ]
 23 | 
 24 | 
 25 | def abort(msg):
 26 |     import sys
 27 |     print(msg, file=sys.stderr)
 28 |     sys.exit(1)
 29 | 
 30 | 
 31 | def get_disks():
 32 |     lsblk = json.loads(subprocess.check_output(["lsblk", "-J"], text=True))
 33 |     return [
 34 |         d['name']
 35 |         for d in lsblk['blockdevices']
 36 |         if d['type'] == 'disk' and d.get('mountpoint') is None and d.get('mountpoints', [None])[0] is None
 37 |     ]
 38 | 
 39 | 
 40 | disks = get_disks()
 41 | disk = None if len(disks) > 1 else "/dev/" + disks[0]
 42 | CFG_DIR = os.path.join(os.getenv("HOME"), ".config", "diskviews")
 43 | 
 44 | parser = argparse.ArgumentParser('setup a view of a disk')
 45 | parser.add_argument('--action', '--actions', default='create-md',
 46 |                     help='comma-separated actions to perform, from: ' + ' '.join(allowed_actions))
 47 | parser.add_argument('--map', action='store_const', dest='action', const='create-mapping',
 48 |                     help='alias for --action=create-mapping')
 49 | parser.add_argument('--vmdk', action='store_const', dest='action', const='create-vmdk',
 50 |                     help='alias for --action=create-vmdk')
 51 | parser.add_argument('--remove', action='store_const', dest='action', const='remove',
 52 |                     help='alias for --action=remove')
 53 | parser.add_argument('--list', action='store_const', dest='action', const='list',
 54 |                     help='alias for --action=list')
 55 | parser.add_argument('--device', '-d', default=disk,
 56 |                     help='(whole) disk to create a view of')
 57 | parser.add_argument('--name', '-n', default=None, dest='_name', metavar='NAME',
 58 |                     help='name to use in generated files, defaults to basename of device')
 59 | parser.add_argument('--dir', '-o', default=None,
 60 |                     help=f"directory storing view configuration dirs, default is {CFG_DIR}/(name)")
 61 | parser.add_argument('--force', '-f', action='store_true', default=False,
 62 |                     help='overwrite existing files')
 63 | parser.add_argument('--auto', '-a', action='store_true', default=False,
 64 |                     help='choose default disposition for all partitions')
 65 | parser.add_argument('name', nargs='?',
 66 |                     help='name of view to create or access (same as --name NAME)')
 67 | 
 68 | args = parser.parse_args()
 69 | 
 70 | # Allow -n/--name option as well as first unnamed argument.
 71 | if args.name is None:
 72 |     args.name = args._name
 73 | 
 74 | # Default the name based on the directory.
 75 | if args.name is None and args.dir is not None:
 76 |     args.name = os.path.basename(args.dir)
 77 | 
 78 | # Default the name based on the device.
 79 | if args.name is None and args.device is not None:
 80 |     args.name = os.path.basename(args.device)
 81 | 
 82 | # Default the directory based on the name.
 83 | if args.dir is None and args.name is not None:
 84 |     args.dir = os.path.join(CFG_DIR, args.name)
 85 | 
 86 | # If the directory is still unknown, look at existing names. If there is only
 87 | # one, use it, otherwise abort.
 88 | if args.dir is None:
 89 |     names = [ent.name for ent in os.scandir(CFG_DIR) if ent.is_dir()]
 90 |     if len(names) == 1:
 91 |         args.name = names[0]
 92 |         args.dir = os.path.join(CFG_DIR, args.name)
 93 |     elif len(names) == 0:
 94 |         abort(f"No name or device given, and {CFG_DIR} has no names yet")
 95 |     else:
 96 |         abort(f"No --name or --device given. Choose name from: {' '.join(names)}")
 97 | 
 98 | # If there is already a slices file, use it to set the device.
 99 | if args.device is None and args.dir is not None and os.path.exists(args.dir):
100 |     with open(os.path.join(args.dir, 'slices.json')) as fh:
101 |         data = json.load(fh)
102 |         args.device = data['device']
103 |         if args.name is None:
104 |             args.name = os.path.basename(args.device)
105 | 
106 | if args.device is None:
107 |     diskdevs = ' '.join("/dev/" + d for d in disks)
108 |     abort(f"Use -d (--device) to select from available disks: {diskdevs}")
109 | 
110 | if args.dir is None:
111 |     args.dir = os.path.join(CFG_DIR, args.name)
112 | 
113 | actions = set(args.action.split(','))
114 | for action in actions:
115 |     if action not in set(allowed_actions):
116 |         raise Exception(f"invalid action '{action}'")
117 | 
118 | dm_dev = f"/dev/mapper/{args.name}_view"
119 | print(f"Using {dm_dev} for device path")
120 | slices_filename = os.path.join(args.dir, 'slices.json')
121 | 
122 | 
123 | def run(cmd, quiet=False, output=False):
124 |     if not quiet:
125 |         print(" ".join(cmd))
126 |     if output:
127 |         return subprocess.check_output(cmd, text=True)
128 |     else:
129 |         return subprocess.check_call(cmd)
130 | 
131 | 
132 | def read_mtab():
133 |     mounts = {}
134 |     with open("/etc/mtab", "rt") as fh:
135 |         for line in fh.readlines():
136 |             device, mountpoint, fstype, flags, n1, n2 = line.rstrip().split(" ")
137 |             if device == fstype:
138 |                 continue  # Does not use a device eg proc
139 |             mounts[device] = {
140 |                 'mountpoint': mountpoint,
141 |                 'fstype': fstype,
142 |                 'flags': set(flags.split(','))
143 |             }
144 |     return mounts
145 | 
146 | 
147 | def read_partitions(device):
148 |     mounts = read_mtab()
149 |     info = defaultdict(dict)
150 |     fdisk = json.loads(run(["sudo", "sfdisk", "-J", device], quiet=True, output=True))
151 |     info['unit'] = fdisk['partitiontable']['unit']
152 |     if info['unit'] != 'sectors':
153 |         raise Exception(f"script only handles units of sectors, not {info['unit']}")
154 |     info['sector-size'] = fdisk['partitiontable']['sectorsize']
155 |     for p in fdisk['partitiontable']['partitions']:
156 |         part = {
157 |             'device': p['node'],
158 |             'start': p['start'],
159 |             'size': p['size'],
160 |             'end': p['start'] + p['size'],
161 |             'gptname': p.get('name'),
162 |             # FIXME: Have not looked at example with multiple attributes.
163 |             'attrs': set(p.get('attrs', '').split(',')),
164 |             'mount': mounts.get(p['node']),
165 |         }
166 |         info['partitions'][p['node']] = part
167 | 
168 |     for line in run(["sudo", "sfdisk", "-l", device], quiet=True, output=True).splitlines():
169 |         if m := re.match(r'(/dev/\S+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\S+)\s*(.*)', line):
170 |             dev, start, end, sectors, size, type_ = m.groups()
171 |             info['partitions'][dev]['type'] = type_
172 | 
173 |     info['end'] = int(
174 |         run(['sudo', 'blockdev', '--getsz', device], quiet=True, output=True).rstrip()
175 |     )
176 | 
177 |     info['ordered-partitions'] = sorted(
178 |         info['partitions'].keys(),
179 |         key=lambda k: info['partitions'][k]['start']
180 |     )
181 | 
182 |     return info
183 | 
184 | 
185 | def check_new_file(filename):
186 |     if os.path.exists(filename) and not args.force:
187 |         raise Exception(f"{filename} already exists, use -f (--force) to overwrite")
188 | 
189 | 
190 | def copy_chunk(src, dst, offset, count, blocksize):
191 |     check_new_file(dst)
192 |     run([
193 |         'sudo',
194 |         'dd',
195 |         f"if={src}",
196 |         f"of={dst}",
197 |         f"bs={blocksize}",
198 |         f"count={count}",
199 |         f"skip={offset}",
200 |         'conv=sparse',
201 |     ])
202 | 
203 | 
204 | def human_bytes(b):
205 |     if b < 4000:
206 |         return f"{b} bytes"
207 |     b = b / 1024
208 |     if b < 4000:
209 |         return f"{b:.1f} KB"
210 |     b = b / 1024
211 |     if b < 4000:
212 |         return f"{b:.1f} MB"
213 |     b = b / 1024
214 |     if b < 4000:
215 |         return f"{b:.1f} GB"
216 |     b = b / 1024
217 |     return f"{b:.1f} TB"
218 | 
219 | 
220 | def guess_disposition(part):
221 |     known_types = {
222 |         'efi system partition': 'system partition',
223 |         'microsoft reserved partition': 'windows system partition',
224 |     }
225 | 
226 |     known = known_types.get((part['gptname'] or '').lower())
227 |     if known:
228 |         return ('clone', known)
229 | 
230 |     if (part['gptname'] or '').lower() == 'basic data partition':
231 |         if 'RequiredPartition' in part.get('attrs', set()):
232 |             return ('clone', 'Windows partition with RequiredPartition attr')
233 | 
234 |     mount = part.get('mount')
235 |     if mount and mount['mountpoint'].startswith('/boot'):
236 |         return ('clone', f"cloned {mount['mountpoint']} partition")
237 | 
238 |     if mount:
239 |         return ('mask', "mounted partition, mask with zeroes")
240 | 
241 |     if 'microsoft' in part.get('type', '').lower():
242 |         return ('expose', "Windows partition to expose")
243 | 
244 |     if 'lvm' in part.get('type', '').lower():
245 |         return ('mask', "LVM partition, masking it off")
246 | 
247 |     if part.get('gptname'):
248 |         return ('expose', "assumed to be Windows or system partition to expose")
249 | 
250 |     return ('mask', "other partition to mask with zeroes")
251 | 
252 | 
253 | def make_zeroes(dst, count, blocksize):
254 |     check_new_file(dst)
255 |     run([
256 |         'sudo',
257 |         'dd',
258 |         "if=/dev/zero",
259 |         f"of={dst}",
260 |         f"bs={blocksize}",
261 |         "count=0",
262 |         f"seek={count}",
263 |         'conv=sparse',
264 |     ])
265 | 
266 | 
267 | def describe(names):
268 |     names = [os.path.basename(n) for n in names]
269 | 
270 |     # If there is a device and its partitions, tack the partition indicators
271 |     # onto the device (eg /dev/nvme0n1,/dev/nvme0n1p1,/dev/nvme0n1p2 ->
272 |     # /dev/nvme0n1p1p2).
273 |     shortest = sorted(names, key=lambda s: len(s))[0]
274 |     if m := re.search(r'(p\d+)$', shortest):
275 |         shortest = shortest[0:-len(m.group(1))]
276 |     parts = []
277 |     for name in names:
278 |         if name == shortest:
279 |             # Just drop the main device; it'll be a gap.
280 |             continue
281 |         if not name.startswith(shortest):
282 |             parts = None
283 |             break
284 |         rest = name[len(shortest):]
285 |         if re.search(r'p\d+$', rest):
286 |             parts.append(rest)
287 |     if parts is not None:
288 |         return shortest + "".join(parts)
289 |     else:
290 |         return ",".join(names)
291 | 
292 | 
293 | if 'create-mapping' in actions or 'all' in actions:
294 |     info = read_partitions(args.device)
295 |     maskid = [0]
296 | 
297 |     def process_range(slices):
298 |         disposition = slices[0]['disposition']
299 |         if disposition == 'gap' and len(slices) > 1:
300 |             disposition = slices[0]['disposition'] = slices[1]['disposition']
301 | 
302 |         sectors = slices[-1]['end'] - slices[0]['start']
303 | 
304 |         if disposition == 'mask':
305 |             filename = os.path.join(args.dir, f"zero{maskid[0]}.dat")
306 |             maskid[0] += 1
307 |             make_zeroes(filename, sectors, info['sector-size'])
308 |             slices[0]['filename'] = filename
309 | 
310 |         elif disposition in ('clone', 'gap'):
311 |             name = describe([slice['device'] for slice in slices])
312 |             filename = os.path.join(args.dir, f"{name}.dat")
313 |             copy_chunk(args.device, filename, slices[0]['start'], sectors, info['sector-size'])
314 |             slices[0]['filename'] = filename
315 | 
316 |         else:
317 |             assert disposition == 'expose'
318 |             slices[0]['filename'] = slices[0]['device']
319 | 
320 |         slices[-1]['range-filename'] = slices[0]['filename']
321 | 
322 |     def make_files_for_slices(slices):
323 |         # Merge consecutive slices with the same non-expose disposition.
324 |         range = [slices[0]]
325 |         for slice in slices[1:]:
326 |             disposition = slice['disposition']
327 |             if range[0]['disposition'] == disposition and disposition != 'expose':
328 |                 range.append(slice)
329 |             elif range[0]['disposition'] == 'gap' and disposition in ('clone', 'mask'):
330 |                 # Attach gap to next range.
331 |                 range[0]['disposition'] = disposition
332 |                 range.append(slice)
333 |             elif disposition == 'gap' and range[-1]['disposition'] != 'expose':
334 |                 # Special case: a gap after an expose is necessary for
335 |                 # alignment, and cannot be combined.
336 |                 if range[-1]['disposition'] == 'expose':
337 |                     process_range(range)
338 |                     range = [slice]
339 |                     slice['disposition'] = 'clone'
340 |                 else:
341 |                     # Attach gap to previous range.
342 |                     slice['disposition'] = range[-1]['disposition']
343 |                     range.append(slice)
344 |             else:
345 |                 process_range(range)
346 |                 range = [slice]
347 |         if range:
348 |             process_range(range)
349 | 
350 |     os.makedirs(args.dir, exist_ok=True)
351 |     check_new_file(slices_filename)
352 | 
353 |     mounts = read_mtab()
354 |     slices = []
355 |     prev_end = 0
356 |     for device in info['ordered-partitions']:
357 |         part = info['partitions'][device]
358 |         disposition, why = guess_disposition(part)
359 | 
360 |         typestr = "" if not part.get('type') else " " + part['type']
361 |         print(f"{device}:{typestr}")
362 |         if mounts.get(device):
363 |             mount = mounts[device]['mountpoint']
364 |             print(f"  {mounts[device]['fstype']} filesystem mounted at {mount}")
365 |         bytes = part['size'] * info['sector-size']
366 |         print(f"  sectors {part['start']}-{part['end']-1}, {human_bytes(bytes)}")
367 |         print(f"  GPT partition name: {part['gptname']}")
368 |         if args.auto:
369 |             print(f"  automatically chosen disposition is {disposition}: \"{why}\"")
370 |         else:
371 |             print(f"  default disposition is {disposition}: \"{why}\"")
372 |         while not args.auto:
373 |             answer = input(f"disposition (one of expose, mask, clone) (default {disposition})> ")
374 |             if answer != "":
375 |                 if answer in ('expose', 'mask', 'clone'):
376 |                     disposition = answer
377 |                     break
378 |                 else:
379 |                     print("invalid disposition")
380 |             else:
381 |                 break
382 |         print()
383 | 
384 |         gap = part['start'] - prev_end
385 |         if gap > 0:
386 |             slices.append({
387 |                 'disposition': 'gap',
388 |                 'description': 'gap between partitions',
389 |                 'device': args.device,
390 |                 'sectors': gap,
391 |                 'size': human_bytes(gap * info['sector-size']),
392 |                 'start': prev_end,
393 |                 'end': part['start'],
394 |                 'type': 'file'
395 |             })
396 | 
397 |         slices.append({
398 |             'disposition': disposition,
399 |             'device': device,
400 |             'sectors': part['size'],
401 |             'bytes': part['size'] * info['sector-size'],
402 |             'start': part['start'],
403 |             'end': part['end'],
404 |         })
405 | 
406 |         if disposition == 'expose':
407 |             slices[-1].update({
408 |                 'description': f"exposed partition {device}",
409 |                 'device': device,
410 |                 'type': 'partition',
411 |             })
412 |         elif disposition == 'mask':
413 |             slices[-1].update({
414 |                 'description': f"masked-off partition {device}",
415 |                 'type': 'file',
416 |             })
417 |         elif disposition == 'clone':
418 |             slices[-1].update({
419 |                 'description': f"partition {device} cloned to a file",
420 |                 'type': 'file',
421 |             })
422 |         else:
423 |             raise Exception(f"unknown disposition '{disposition}'")
424 | 
425 |         prev_end = part['end']
426 | 
427 |     gap = info['end'] - slices[-1]['end']
428 |     if gap > 0:
429 |         slices.append({
430 |             'disposition': 'clone',
431 |             'description': 'gap after last partition, containing master GPT',
432 |             'device': args.device,
433 |             'sectors': gap,
434 |             'bytes': gap * info['sector-size'],
435 |             'start': slices[-1]['end'],
436 |             'end': info['end'],
437 |             'type': 'file'
438 |         })
439 | 
440 |     make_files_for_slices(slices)
441 | 
442 |     with open(slices_filename, "w") as fh:
443 |         fh.write(json.dumps({'device': args.device, 'slices': slices}, indent=4))
444 |     print(f"Wrote {slices_filename}")
445 | 
446 | if 'create-md' in actions or 'all' in actions:
447 |     with open(slices_filename, 'r') as fh:
448 |         data = json.load(fh)
449 |         slices = data['slices']
450 | 
451 |     if os.path.exists(dm_dev):
452 |         abort(f"{dm_dev} already exists")
453 | 
454 |     print("Setting up loopback devices")
455 |     loopbacks = {}
456 |     for slice in slices:
457 |         if slice['type'] == 'file' and slice.get('filename'):
458 |             loop = run([
459 |                 'sudo',
460 |                 'losetup',
461 |                 '-f',
462 |                 '--show',
463 |                 slice['filename']
464 |             ], output=True).rstrip()
465 |             slice['loopback'] = loop
466 |             loopbacks[slice['filename']] = loop
467 | 
468 |     dmconfig_filename = os.path.join(args.dir, "dmconfig.txt")
469 |     with open(dmconfig_filename, "wt") as fh:
470 |         offset = 0
471 |         for slice in slices:
472 |             filename = slice.get('range-filename')
473 |             if filename is None:
474 |                 continue
475 |             dev = loopbacks.get(filename, filename)
476 |             sectors = slice['end'] - offset
477 |             print(f"{offset} {sectors} linear {dev} 0", file=fh)
478 |             offset = slice['end']
479 |     print(f"Wrote {dmconfig_filename}")
480 | 
481 |     try:
482 |         run(['sh', '-c', f"sudo dmsetup create {args.name}_view < {dmconfig_filename}"])
483 |     except Exception:
484 |         subprocess.call(['sudo', 'dmsetup', 'remove', f"{args.name}_view"])  # may not exist; ignore failure
485 |         run(['sudo', 'losetup', '-d'] + list(loopbacks.values()))
486 |         raise
487 |     user = subprocess.check_output(['id', '-nu'], text=True).rstrip()
488 |     group = subprocess.check_output(['id', '-ng'], text=True).rstrip()
489 |     run(['sudo', 'chown', f"{user}:{group}", dm_dev])
490 |     run(['sudo', 'chmod', '0666', dm_dev])
491 | 
492 |     orig_size = run(['sudo', 'blockdev', '--getsz', args.device], output=True).strip()
493 |     new_size = run(['sudo', 'blockdev', '--getsz', dm_dev], output=True).strip()
494 | 
495 |     print(f"Size of original, in sectors: {orig_size}")
496 |     print(f"Size of new device, in sectors: {new_size}")
497 | 
498 | if 'create-vmdk' in actions or 'all' in actions:
499 |     vmdk_filename = os.path.join(args.dir, f"{args.name}.vmdk")
500 |     run([
501 |         'VBoxManage', 'internalcommands', 'createrawvmdk',
502 |         '-filename', vmdk_filename, '-rawdisk', dm_dev
503 |     ])
504 | 
505 | 
506 | def get_devices(dmfile):
507 |     devices = []
508 |     output = run(['sudo', 'dmsetup', 'deps', dmfile], output=True)
509 |     for major, minor in re.findall(r'\((\d+), (\d+)\)', output):
510 |         major = int(major)
511 |         minor = int(minor)
512 |         if major == 7:  # loopback devices have major number 7
513 |             devices.append(f"/dev/loop{minor}")
514 |         else:
515 |             found = False
516 |             for ent in os.listdir("/dev"):
517 |                 path = f"/dev/{ent}"
518 |                 st = os.lstat(path)
519 |                 if major == os.major(st.st_rdev) and minor == os.minor(st.st_rdev):
520 |                     devices.append(path)
521 |                     found = True
522 |                     break
523 |             if not found:
524 |                 devices.append(f"DEV[{major},{minor}]")
525 |     return devices
526 | 
527 | 
528 | if 'list' in actions or 'all' in actions:
529 |     for device in get_devices(dm_dev):
530 |         if device.startswith('/dev/loop'):
531 |             run(['losetup', device], quiet=True)
532 |         else:
533 |             print(device)
534 | 
535 | if 'remove' in actions:
536 |     devices = get_devices(dm_dev)
537 |     run(['sudo', 'dmsetup', 'remove', dm_dev])
538 |     for device in devices:
539 |         if device.startswith('/dev/loop'):
540 |             run(['sudo', 'losetup', '-d', device])
541 | 


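The dmconfig.txt written by the create-md step above is a standard
device-mapper "linear" table: one line per slice, mapping a run of sectors of
the composite device onto either a loopback device (for masked/cloned slices
and gaps) or the real partition (for exposed ones). A hypothetical table for
a small disk with one exposed partition sandwiched between two file-backed
slices might look like:

    0 2048 linear /dev/loop0 0
    2048 204800 linear /dev/sda1 0
    206848 1024 linear /dev/loop1 0

which is exactly what the script feeds to `sudo dmsetup create NAME_view`.
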
--------------------------------------------------------------------------------
/bin/wig:
--------------------------------------------------------------------------------
  1 | #!/usr/bin/env python2
  2 | 
  3 | # This is a snapshot of http://people.mozilla.com/~tschneidereit/wig
  4 | # Please get the latest version from there. The old 'wig' is at
  5 | # http://people.mozilla.com/~sfink/data/oldwig
  6 | 
  7 | import argparse
  8 | import fnmatch
  9 | import os
 10 | import sys
 11 | 
 12 | from distutils.spawn import find_executable
 13 | from glob import glob
 14 | from subprocess import check_output
 15 | 
 16 | parser = argparse.ArgumentParser(description="""
 17 | Uses 'wiggle' to apply the reject files left by conflicts during
 18 | mercurial merges to your source tree. Run it from anywhere underneath
 19 | the hg root; the main point of this script is to figure out the
 20 | right path and call wiggle with the correct magic options.
 21 | 
 22 | Example:
 23 | ~/moz/js/src% wig js/src/
 24 | wiggle --replace js/src/jsfun.cpp js/src/jsfun.cpp.rej
 25 | 1 unresolved conflict
 26 | 4 already-applied changes ignored
 27 | wiggle --replace js/src/jsscript.h js/src/jsscript.h.rej
 28 | 2 already-applied changes ignored
 29 | 
 30 | The 'unresolved conflict' means that wiggle failed to find a way to
 31 | cram the patch in, and you'll need to look at js/src/jsfun.cpp and
 32 | search for '<<<<' to find the conflict markers.
 33 | 
 34 | wiggle doesn't fail very often, unless there's a real conflict.
 35 | It can be a little overeager, and in particular it's easy to get
 36 | a function duplicated.
 37 | 
 38 | Just run |hg diff| after you're done wiggling, and it'll show you
 39 | just the changes that wiggle did (plus any hand editing). Usually
 40 | this is much smaller than the original patch, assuming most of it
 41 | applied ok.
 42 | 
 43 | The reject arguments can be either *.rej files, the names of files
 44 | rejects are to be wiggled into, or directories containing rejects.
 45 | 
 46 | If given directories, wig recurses into the sub-directories to
 47 | find as many rejects as possible. This can be changed using the
 48 | -s/--shallow option.
 49 | 
 50 | For successful wiggles, wig deletes the *.porig and *.rej files.
 51 | This can be prevented using the -k/--keep-backup option.
 52 | """, formatter_class=argparse.RawTextHelpFormatter)
 53 | parser.add_argument('reject', nargs='+',
 54 |                    help='Files or directories to wiggle')
 55 | parser.add_argument('-s', '--shallow', action='store_const', const=True,
 56 |                    help="Don't recurse into sub-directories")
 57 | parser.add_argument('-k', '--keep-backup', action='store_const', const=True,
 58 |                    help="Don't delete *.porig and *.rej files upon completion")
 59 | args = parser.parse_args()
 60 | 
 61 | if not find_executable('wiggle'):
 62 |   exit("I can't find wiggle. Please don't tell me you didn't even install it.\nYou can find it at http://neil.brown.name/wiggle/, you know?")
 63 | 
 64 | def processFile(path):
 65 |   path = os.path.splitext(path)[0]
 66 |   filename = os.path.relpath(path)
 67 |   if not os.path.exists(filename):
 68 |     exit("Error: can't find '%s'" % filename)
 69 |   if not os.path.exists(filename + '.rej'):
 70 |     exit("Error: can't find '%s'" % (filename + '.rej'))
 71 |   print "wiggle --replace %s %s" % (filename, filename + '.rej')
 72 |   return os.system("wiggle --replace '%s' '%s'" % (path, path + '.rej')) == 0
 73 | 
 74 | # recursively find all rejects
 75 | targets = []
 76 | root = None
 77 | for reject_arg in args.reject:
 78 |   reject = os.path.join(os.getcwd(), reject_arg)
 79 |   if not os.path.exists(reject):
 80 |     if root is None:
 81 |       root = check_output(['hg', 'root']).strip()
 82 |     reject = os.path.join(root, reject_arg)
 83 |     if not os.path.exists(reject):
 84 |       print "Error: can't find '%s'" % reject_arg
 85 |       exit(2)
 86 | 
 87 |   if os.path.isdir(reject):
 88 |     if args.shallow:
 89 |       for filename in glob(reject + '/*.rej'):
 90 |         targets.append(os.path.abspath(filename))
 91 |     else:
 92 |       for walkroot, dirs, files in os.walk(reject):
 93 |         for filename in fnmatch.filter(files, '*.rej'):
 94 |           targets.append(os.path.abspath(walkroot + '/' + filename))
 95 | 
 96 |   else:
 97 |     if os.path.splitext(reject)[1] != '.rej':
 98 |       reject += '.rej'
 99 |     targets.append(reject)
100 | 
101 | # prune duplicates
102 | seen = set()
103 | targets = [ x for x in targets if x not in seen and not seen.add(x)]
104 | 
105 | # wiggle
106 | for target in targets:
107 |   ok = processFile(target)
108 |   if ok and not args.keep_backup:
109 |     os.remove(target)
110 |     os.remove(os.path.splitext(target)[0] + '.porig')
111 | 
112 | 
113 | 
114 | # Note: I'm 100% convinced this is a very long-winded implementation.
115 | # Consider the roughly equivalent, if less flexible, bash version:
116 | # dir=$(pwd)
117 | # unset a i
118 | # while IFS= read -r -d $'\0' file; do
119 | #     f=$dir/${file%.rej}
120 | #     rm -f "$f.porig"
121 | #     echo wiggling "$f"
122 | #     wiggle --replace "$f" "$f.rej"
123 | #     rm -f "$f.porig" "$f.rej"
124 | # done < <(find . -name \*.rej -print0)
125 | 


--------------------------------------------------------------------------------
/conf/Q-Tps-alloc.query:
--------------------------------------------------------------------------------
 1 | # Processed with `artifetch --query `
 2 | 
 3 | # The semantics of this query are kind of loose. Some things have priority,
 4 | # in which case conflicting entries will just be ignored. Maybe I'll fix that
 5 | # someday. For now, comment everything out to use the fuzzy selector.
 6 | pushes:
 7 |   # Range match from (internal) push IDs. Get them from --list-pushes.
 8 |   # Note that the tasks in a push will expire, and you'll get something
 9 |   # like: "ResourceNotFound: `PIemJYs6RSadJuXCK_LCSA` does not correspond to a task that exists."
10 |   #ids: 1115321::1115344
11 | 
12 |   # Match against the comment in the youngest revision in the push.
13 |   #comment: "ctor "
14 | 
15 |   # Not a full revset, just 1 or more revs separated with `+`.
16 |   #rev: 4ec21b918c44+a2664bf7445f
17 | 
18 |   # Bring up a fzf menu to select from the last N pushes. This is the default.
19 |   #choose-from: 20
20 | 
21 | artifact: /perfherder-data/
22 | 
23 | metric:
24 |   json:
25 |     match-key: "suites[].subtests[].name"
26 |     match-value: "id-getter-5.html"
27 |     value: "$.replicates[]"
28 |   output:
29 |     style: gnuplot
30 | 


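For orientation, the perfherder-data artifact that this query walks is JSON
shaped roughly like the following (a trimmed sketch; names and numbers are
made up):

    {"suites": [{"name": "tp5o",
                 "subtests": [{"name": "id-getter-5.html",
                               "value": 701.0,
                               "replicates": [712.5, 701.0, 698.2]}]}]}

`match-key: "suites[].subtests[].name"` selects the subtest whose name equals
the match-value, and `value: "$.replicates[]"` then yields each replicate
relative to that match.
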
--------------------------------------------------------------------------------
/conf/Q-awsy-baseJS.query:
--------------------------------------------------------------------------------
 1 | # Processed with `artifetch --query `
 2 | 
 3 | # The semantics of this query are kind of loose. Some things have priority,
 4 | # in which case conflicting entries will just be ignored. Maybe I'll fix that
 5 | # someday.
 6 | pushes:
 7 |   # Range match from (internal) push IDs. Get them from --list-pushes.
 8 |   #ids: 1115321::1115344
 9 | 
10 |   # Match against comment of final commit in the push.
11 |   #comment: "ctor "
12 | 
13 |   # Not a full revset, just 1 or more revs separated with `+`.
14 |   #rev: 4ec21b918c44+a2664bf7445f
15 | 
16 |   # Bring up a fzf menu to select from the last N pushes. This is the default.
17 |   choose-from: 20
18 | 
19 | jobs:
20 |   symbol: "SY(ab)"
21 |   limit-per-push: 1
22 | 
23 | # Substring match on the artifact URL.
24 | artifact: /perfherder.data/
25 | 
26 | metric:
27 |   json:
28 |     match-key-1: "suites[].name"
29 |     match-value-1: "Base Content JS"
30 |     match-key-2: "$1.subtests[].name"
31 |     match-value-2: "After tabs open [+30s, forced GC]"
32 |     value: "$2.value"
33 |   output:
34 |     style: gnuplot
35 |     job-header: "# push {push_idx}: {push_desc}"
36 |     format: "{push_idx} {value}"
37 | 


--------------------------------------------------------------------------------
/conf/Q-awsy-logBase.query:
--------------------------------------------------------------------------------
 1 | # Processed with `artifetch --query `
 2 | 
 3 | jobs:
 4 |   symbol: "SY(ab)"
 5 |   limit-per-push: 1
 6 | 
 7 | artifact: gecko.log
 8 | 
 9 | artifacts:
10 |   - run-pre.log
11 |   - run-aggressive.log
12 |   - run-superaggressive.log
13 |   - run.log
14 | 
15 | metric:
16 |   text:
17 |     match-key: '/STRSTAT ([-\d]+): ([-\d]+) ([-\d]+) ([-\d]+) ([-\d]+)/'
18 |     value: "$3"
19 |     label:
20 |       pid: "$1"  # Hm... this is colliding syntax!
21 |       old: "$2"
22 |       length: "$4"
23 |       good: "$5"
24 |   output:
25 |     style: formatted
26 |     job-header: "# job {job_desc} on push {push_id}: {push_desc}\n# {push_url}\n"
27 |     format: "{push_idx} {pid} {sum(old)} {sum(value)}"
28 |     groupby: ["push", "pid"]
29 |     #format: "{push_idx} pid={pid} len={length} (good {good}) built with {old} -> usable={value}"
30 | 


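As a worked (invented) example of the text metric above, a log line such as

    STRSTAT 4242: 117 5309 64 100

would bind pid=4242 ($1), old=117 ($2), value=5309 ($3), length=64 ($4), and
good=100 ($5); the output stage then sums `old` and `value` within each
(push, pid) group.
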
--------------------------------------------------------------------------------
/conf/Q-awsy-rawBaseJS.query:
--------------------------------------------------------------------------------
 1 | # Processed with `artifetch --query `
 2 | #
 3 | # The semantics of this query are kind of loose. Some things have priority,
 4 | # in which case conflicting entries will just be ignored. Maybe I'll fix that
 5 | # someday.
 6 | 
 7 | pushes:
 8 |   # Bring up a fzf menu to select from the last N pushes.
 9 |   choose-from: 20
10 | 
11 | jobs:
12 |   symbol: "SY(ab)"
13 |   limit-per-push: 1
14 | 
15 | artifact: /memory-report-TabsOpenForceGC/
16 | 
17 | # awsy looks for files named TabsOpenForceGC-* and takes the last one
18 | # (update_checkpoint_paths). It generates one total per process, and takes the
19 | # median: https://bit.ly/3QOAkng
20 | #
21 | # Note that the awsy scripts can be run on the command line! So eg
22 | # mach python testing/awsy/awsy/parse_about_memory.py /tmp/memory-report.json.gz js-main-runtime/ --proc-filter="web "
23 | 
24 | metric:
25 |   json:
26 |     match-key-1: "reports[].path"
27 |     match-value-1: /^js-main-runtime//
28 |     match-key-2: "$.process"
29 |     match-value-2: /^(?:web |Web Content)/
30 |     value: "$.amount"
31 |     # For each value found, additionally retrieve the following labels relative to where the value was found,
32 |     # for use in the output.format pattern.
33 |     label:
34 |       mempath: "$.path"
35 |       process: "$2.process"
36 |   output:
37 |     style: formatted
38 |     job-header: "# push {push_idx}: {push_desc} [job={job_desc}]\n# {push_url}"
39 |     label-header:
40 |       process: "# process {process_idx} = {process} (job {job_id} <{job_url}>)\n# {filename}"
41 |     #format: "{process_idx} {value} # {mempath}"
42 |     groupby: ["process_idx"]
43 |     format: "{push_idx} {sum(value)}"
44 | 


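The artifact being queried here is a gzipped about:memory dump; each entry of
its `reports` array looks roughly like this (values invented):

    {"process": "Web Content (pid 1234)",
     "path": "js-main-runtime/runtime/gc/nursery-committed",
     "kind": 1, "units": 0, "amount": 524288, "description": "..."}

match-value-1 keeps only js-main-runtime/ paths, match-value-2 restricts to
content processes, and the output stage sums the matching amounts per process.
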
--------------------------------------------------------------------------------
/conf/gdbinit:
--------------------------------------------------------------------------------
 1 | # Basic gdb configuration
 2 | 
 3 | set unwindonsignal on
 4 | 
 5 | set debug-file-directory /usr/lib/debug
 6 | 
 7 | python import os
 8 | python import sys
 9 | 
10 | # Show the concrete types behind nsIFoo
11 | set print object on
12 | 
13 | # Static members are much too noisy in many classes.
14 | set print static-members off
15 | 
16 | set python print-stack full
17 | 
18 | set debuginfod enabled on
19 | 
20 | # Stolen from chromium gdbinit: multithreaded symbol loading.
21 | maint set worker-threads unlimited
22 | 


--------------------------------------------------------------------------------
/conf/gdbinit.gecko:
--------------------------------------------------------------------------------
 1 | # .gdbinit file for debugging Mozilla code (Gecko, SpiderMonkey)
 2 | 
 3 | define pmethod
 4 |         p/a *(void**)(*((PRUint64*)mCallee) + 8 * mVTableIndex)
 5 | end
 6 | 
 7 | define showstring
  8 |   x/($arg0.Length())s $arg0.BeginReading()
 9 | end
10 | 
11 | define watchmark
12 |   # First arg is the gc cell address
13 |   # Second arg is the color
14 |   #
15 |   # Note that it is often handy to make the resulting watchpoint conditional on
16 |   # having a matching address (since it will be breaking for anything sharing the
17 |   # mark word)
18 |   set $word = js::debug::GetMarkWordAddress($arg0)
19 |   set $mask = js::debug::GetMarkMask($arg0, $arg1)
20 |   watch -l *$word
21 | end
22 | 
23 | define manualmark
24 |   # Same args as watchmark
25 |   set $addr=(uintptr_t)$arg0
26 |   set $bit=($addr & js::gc::ChunkMask) / js::gc::CellBytesPerMarkBit + $arg1
27 |   set $bitmap=(uintptr_t*)(($addr & ~js::gc::ChunkMask) | js::gc::ChunkMarkBitmapOffset)
28 |   set $mask=((uintptr_t)1) << ($bit % 64)
29 |   set $word=&$bitmap[$bit / 64]
30 | end
31 | 
32 | define getheap
33 |   p *(js::gc::ChunkLocation*)(((uint64_t)$arg0) & ~js::gc::ChunkMask | js::gc::ChunkLocationOffset)
34 | end
35 | 
36 | define markinfo
37 |   p js::debug::GetMarkInfo((js::gc::Cell*)$arg0)
38 | end
39 | 
40 | define proxyhandler
41 |   p ((js::detail::ProxyDataLayout)((void**)$arg0)[2]).handler
42 | end
43 | 
44 | define ccwtarget
45 |   p js::UncheckedUnwrapWithoutExpose((JSObject*)$arg0)
46 | end
47 | 
48 | # Set of functions for tracking JIT code back to its creator.
49 | 
50 | define codeloc
51 |   set $code_=$arg0
52 |   watch *(void**)$code_
53 | end
54 | 
55 | define eccopy
56 |   set $offset_=(long)$code_ - (long)dst
57 |   echo offset=
 58 |   p/x $offset_
59 |   set $src_ = m_formatter.m_buffer.m_buffer.mBegin
60 |   set $precode_ = (long)$src_ + $offset_
61 |   watch *(void**)$precode_
62 | end
63 | 
64 | define realloc
65 |   set $precode_ = (long)aPtr + $offset_
66 |   watch *(void**)$precode_
67 | end
68 | 
69 | define tcellzone
70 |   set $addr=(uintptr_t)$arg0
71 |   set $arena=(js::gc::Arena*)($addr & ~js::gc::ArenaMask)
72 |   set $zone=$arena->zone
73 |   p $zone
74 | end
75 | 


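Example invocations of a few of these helpers (the addresses and variable
names here are illustrative):

    (gdb) showstring mName            # print the characters of an ns[C]String
    (gdb) markinfo 0x7f30deadbe00     # GC mark state of a cell
    (gdb) ccwtarget wrapper           # unwrap a cross-compartment wrapper
    (gdb) proxyhandler proxy          # the handler behind a proxy object
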
--------------------------------------------------------------------------------
/conf/gdbinit.gecko.py:
--------------------------------------------------------------------------------
  1 | import gdb, os, re
  2 | from collections import defaultdict
  3 | from os.path import abspath, dirname, expanduser
  4 | gdb.execute("source {}/gdbinit.gecko".format(abspath(expanduser(dirname(__file__)))))
  5 | ######################################################################
  6 | 
  7 | def find_nearest_index(searchkey, collection, key=lambda x: x):
  8 |     '''Find the last index in the list with an entry not greater than the given item, but if multiple indexes have the same key, return the index of the first one. There must be a better way of saying that.'''
  9 |     for i, element in enumerate(collection):
 10 |         extracted = key(element)
 11 |         if extracted == searchkey:
 12 |             return i
 13 |         elif extracted > searchkey:
 14 |             return i - 1
 15 |     return len(collection) - 1
 16 | 
 17 | def find_nearest(searchkey, collection, key=lambda x: x, default=None):
 18 |     s = sorted(collection, key=key)
 19 |     nearest = find_nearest_index(searchkey, s, key)
 20 |     if nearest < 0:
 21 |         return default
 22 | 
 23 |     if isinstance(collection, list):
 24 |         # key maps an element of the collection to its sort key
 25 |         # s is a sorted version of collection
 26 |         return s[nearest]
 27 |     else:
 28 |         # key() maps a key in the collection to its sort key
 29 |         # s is a sorted list of keys from collection
 30 |         return collection[s[nearest]]
 31 | 
 32 | class JITInstructionMap(gdb.Command):
 33 |     """Given a log file generated with ION_SPEW_FILENAME=spew.log and IONFLAGS=codegenmap, look at the current $pc and set a breakpoint on the code that generated it."""
 34 |     def __init__(self):
 35 |         gdb.Command.__init__(self, "jitwhere", gdb.COMMAND_USER)
 36 |         self.scripts = None
 37 |         self.codemap = None
 38 | 
 39 |         # Load from ION_SPEW_FILENAME, which is not at all guaranteed to be the
 40 |         # same value that was used when the file was generated.
 41 |         self.spewfile = os.getenv("ION_SPEW_FILENAME", "spew.log")
 42 | 
 43 |         self.editor = os.environ.get('EDITOR', 'emacs')
 44 | 
 45 |         self.kidpids = set()
 46 | 
 47 |     def load_spew(self):
 48 |         scripts = {}
 49 |         self.codemap = defaultdict(dict)
 50 | 
 51 |         current_compilation = None
 52 |         with open(self.spewfile, "r") as spew:
 53 |             lineno = 0
 54 |             for line in spew:
 55 |                 lineno += 1
 56 |                 m = re.search(r'\[Codegen\].*\(raw ([\da-f]+)\) for compilation (\d+)', line)
 57 |                 if m:
 58 |                     scripts[int(m.group(1), 16)] = current_compilation
 59 |                 m = re.search(r'\[Codegen\] # Emitting .*compilation (\d+)', line)
 60 |                 if m:
 61 |                     current_compilation = int(m.group(1))
 62 |                 m = re.search(r'\[Codegen\] \@(\d+)', line)
 63 |                 if m:
 64 |                     self.codemap[current_compilation][int(m.group(1))] = lineno
 65 | 
 66 |         return [(code,scripts[code]) for code in sorted(scripts.keys())]
 67 | 
 68 |     def reap(self):
 69 |         while True:
 70 |             try:
 71 |                 (pid, status, rusage) = os.wait3(os.WNOHANG)
 72 |                 if pid == 0:
 73 |                     # Have child that is still running.
 74 |                     break
 75 |                 self.kidpids.remove(pid)
 76 |             except ChildProcessError:
 77 |                 break
 78 |             except KeyError:
 79 |                 # Not ours, but oh well.
 80 |                 pass
 81 | 
 82 |     def invoke(self, arg, from_tty):
 83 |         self.reap()
 84 | 
 85 |         self.scripts = self.scripts or self.load_spew()
 86 |         if not self.scripts:
 87 |             print("no compiled scripts found")
 88 |             return
 89 |         pc = int(gdb.selected_frame().read_register("pc"))
 90 |         (code, compilation) = find_nearest(pc, self.scripts,
 91 |                                            key=lambda x: x[0],
 92 |                                            default=(None, None))
 93 |         if code is None:
 94 |             print("No compiled script found")
 95 |             return
 96 | 
 97 |         offset = pc - code
 98 |         lineno = find_nearest(offset, self.codemap[compilation])
 99 |         print("pc %x at %x + %d, compilation id %d, is on line %s" % (pc, code, offset, compilation, lineno))
100 |         args = [ self.editor ]
101 |         if 'emacs' in self.editor and lineno is not None:
102 |             args.append("+" + str(lineno))
103 |         args.append(self.spewfile)
104 |         pid = os.spawnlp(os.P_NOWAIT, self.editor, *args)
105 |         self.kidpids.add(pid)
106 | 
107 | JITInstructionMap()
108 | 


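A typical jitwhere workflow, using the spew variables the docstring mentions
(the paths and the output line are illustrative):

    $ ION_SPEW_FILENAME=spew.log IONFLAGS=codegenmap gdb --args js --ion-eager test.js
    (gdb) run
    ...stops somewhere inside JIT code...
    (gdb) jitwhere
    pc 7f3a0e40 at 7f3a0e00 + 64, compilation id 12, is on line 3456

after which jitwhere spawns $EDITOR on spew.log, positioned at that line when
the editor is emacs.
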
--------------------------------------------------------------------------------
/conf/gdbinit.misc:
--------------------------------------------------------------------------------
 1 | # Various miscellaneous gdb helper functions
 2 | 
 3 | define watchofs
 4 |   # Usage: watchofs <address> <offset>
 5 |   watch -l *(void**)((char*)$arg0 + (size_t)$arg1)
 6 | end
 7 | 
 8 | #def reload
 9 | #  python reload($arg0)
10 | #end
11 | 
12 | define loudstep
13 |   disp/i $pc
14 |   set $i = 0
15 |   while ($i < $arg0)
16 |     si
17 |     set $i = $i + 1
18 |   end
19 | end
20 | 
21 | # construct <type>
22 | # Do Not Trust
23 | #
24 | # Alternative approach that has worked:
25 | #  p (nsCString*)malloc(sizeof(nsCString))
26 | #  p $->nsTString()
27 | #
28 | define construct
29 |   p $obj = ($arg0 *) operator new(sizeof($arg0), malloc(sizeof($arg0)))
30 | end
31 | 


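Example uses (the offset and count are made up):

    (gdb) watchofs obj 0x18     # watch the word 0x18 bytes into *obj
    (gdb) loudstep 5            # single-step 5 instructions, disassembling each
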
--------------------------------------------------------------------------------
/conf/gdbinit.pahole.py:
--------------------------------------------------------------------------------
  1 | # 'pahole' and 'offset' commands for examining types
  2 | 
  3 | # Copyright (C) 2008, 2009, 2012 Free Software Foundation, Inc.
  4 | 
  5 | # This program is free software; you can redistribute it and/or modify
  6 | # it under the terms of the GNU General Public License as published by
  7 | # the Free Software Foundation; either version 3 of the License, or
  8 | # (at your option) any later version.
  9 | #
 10 | # This program is distributed in the hope that it will be useful,
 11 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
 12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 13 | # GNU General Public License for more details.
 14 | #
 15 | # You should have received a copy of the GNU General Public License
 16 | # along with this program.  If not, see <http://www.gnu.org/licenses/>.
 17 | 
 18 | import gdb
 19 | import re
 20 | 
 21 | from enum import Enum
 22 | 
 23 | class TraversalNodeType(Enum):
 24 |     SIMPLE = 1
 25 |     START_STRUCT = 2
 26 |     END_STRUCT = 3
 27 |     HOLE = 4
 28 | 
 29 | def type_to_string(type):
 30 |     # I had some complicated code to display template parameters here, but it
 31 |     # doesn't work quite right and on further reflection the actual problem is
 32 |     # a gdb bug anyway. So stop trying.
 33 |     #
 34 |     # See https://sourceware.org/bugzilla/show_bug.cgi?id=23545
 35 | 
 36 |     name = str(type)
 37 |     csu = 'struct' if type.code == gdb.TYPE_CODE_STRUCT else 'union'
 38 |     if name.startswith(csu + ' '):
 39 |         return name
 40 |     else:
 41 |         return '%s %s' % (csu, name)
 42 | 
 43 | def traverse_type(type, max_level=0, name_anon=False):
 44 | 
 45 |     def calc_sizeof(type):
 46 |         '''Same as type.sizeof, except do not inflate empty structs to 1 byte.'''
 47 |         type = type.strip_typedefs()
 48 |         if type.sizeof != 1 or type.code != gdb.TYPE_CODE_STRUCT:
 49 |             return type.sizeof
 50 |         size = 0
 51 |         for field in type.fields():
 52 |             size += calc_sizeof(field.type)
 53 |         return size
 54 | 
 55 |     def traverse(type, parent, level, field_name, top_bitpos, size_bits, bitpos):
 56 |         stripped_type = type.strip_typedefs()
 57 | 
 58 |         if parent is None:
 59 |             path = 'this'
 60 |         elif field_name:
 61 |             path = parent['path'] + '.' + field_name
 62 |         else:
 63 |             if name_anon:
 64 |                 anon = '<anonymous union>' if type.code == gdb.TYPE_CODE_UNION else '<anonymous struct>'
 65 |                 path = parent['path'] + '.' + anon
 66 |             else:
 67 |                 path = parent['path']
 68 | 
 69 |         info = {
 70 |             'type': type,
 71 |             #'name': type.name or type.tag or stripped_type.name or stripped_type.tag,
 72 |             'name': type_to_string(type),
 73 |             'field_name': field_name,
 74 |             'level': level,
 75 |             'parent': parent,
 76 |             'top_bitpos': top_bitpos,
 77 |             'bitpos': bitpos,
 78 |             'size_bits': size_bits,
 79 |             'path': path,
 80 |             'truncated': (max_level and level >= max_level),
 81 |         }
 82 | 
 83 |         if stripped_type.code not in (gdb.TYPE_CODE_STRUCT, gdb.TYPE_CODE_UNION):
 84 |             # For now, treat everything but class/struct/union as a simple type.
 85 |             info['node_type'] = TraversalNodeType.SIMPLE
 86 |             yield info
 87 |             return
 88 | 
 89 |         info['node_type'] = TraversalNodeType.START_STRUCT
 90 |         yield info
 91 | 
 92 |         base_counter = 0
 93 |         bitpos = 0
 94 |         for field in stripped_type.fields():
 95 |             # Skip static fields.
 96 |             if not hasattr(field, 'bitpos'):
 97 |                 continue
 98 | 
 99 |             # Allow limiting the depth of traversal.
100 |             if max_level and info['level'] >= max_level:
101 |                 continue
102 | 
103 |             ftype = field.type.strip_typedefs()
104 |             fsize = calc_sizeof(ftype)
105 |             fbitpos = field.bitpos if fsize > 0 else bitpos
106 |             if bitpos != fbitpos:
107 |                 yield {
108 |                     'node_type': TraversalNodeType.HOLE,
109 |                     'type': '',
110 |                     'name': '<%d-bit hole>' % (fbitpos - bitpos),
111 |                     'field_name': None,
112 |                     'level': level + 1,
113 |                     'parent': info,
114 |                     'top_bitpos': top_bitpos + bitpos,
115 |                     'bitpos': bitpos,
116 |                     'size_bits': fbitpos - bitpos,
117 |                     'path': path,
118 |                     'next_field': field.name,
119 |                 }
120 | 
121 |             # Advance past the hole, to the start of the field.
122 |             bitpos = fbitpos
123 | 
124 |             if field.bitsize > 0:
125 |                 fieldsize = field.bitsize
126 |             else:
127 |                 # TARGET_CHAR_BIT here...
128 |                 fieldsize = 8 * fsize
129 | 
130 |             field_name = field.name
131 |             if field.is_base_class:
132 |                 field_name = ''
133 |                 base_counter += 1
134 | 
135 |             yield from traverse(field.type, info, level + 1, field_name, top_bitpos + bitpos, fieldsize, bitpos)
136 | 
137 |             if stripped_type.code == gdb.TYPE_CODE_STRUCT:
138 |                 bitpos += fieldsize
139 | 
140 |         info['node_type'] = TraversalNodeType.END_STRUCT
141 |         yield info
142 | 
143 |     yield from traverse(type,
144 |                         parent=None,
145 |                         level=0,
146 |                         field_name=None,
147 |                         top_bitpos=0,
148 |                         size_bits=type.sizeof*8,
149 |                         bitpos=0)
150 | 
151 | class Pahole (gdb.Command):
152 |     """Show the holes in a structure.
153 | This command takes a single argument, a type name.
154 | It prints the type, including any holes it finds.
155 | It accepts an optional max-depth argument:
156 |   `pahole/1 mytype` will not recurse into contained structs."""
157 | 
158 |     def __init__ (self):
159 |         super (Pahole, self).__init__ ("pahole", gdb.COMMAND_NONE,
160 |                                        gdb.COMPLETE_SYMBOL)
161 | 
162 |     def invoke (self, arg, from_tty):
163 |         max_level = 0
164 |         if arg.startswith("/"):
165 |             m = re.match(r'^/(\d+) +', arg)
166 |             if m:
167 |                 max_level = int(m.group(1), 0)
168 |                 arg = arg[m.span()[1]:]
169 | 
170 |         type = gdb.lookup_type(arg)
171 |         type = type.strip_typedefs ()
172 |         if type.code not in (gdb.TYPE_CODE_STRUCT, gdb.TYPE_CODE_UNION):
173 |             raise TypeError('%s is not a class/struct/union type' % arg)
174 | 
175 |         info_pattern = '    %4d %4d : '
176 |         inner_info_pattern = '%4d%+4d %4d : '
177 |         empty_inner_info_pattern = '         %4d : '
178 |         header_len = len(inner_info_pattern % (0, 0, 0))
179 |         print('  offset size')
180 |         for info in traverse_type(type, max_level=max_level):
181 |             nt = info['node_type']
182 |             sofar = 0
183 |             if nt != TraversalNodeType.END_STRUCT:
184 |                 bytepos = int(info['bitpos'] / 8)
185 |                 top_bytepos = int(info['top_bitpos'] / 8)
186 |                 bytesize = int(info['size_bits'] / 8)
187 |                 if info['level'] > 1:
188 |                     if bytesize == 0:
189 |                         out = empty_inner_info_pattern % bytesize
190 |                     else:
191 |                         out = inner_info_pattern % (top_bytepos - bytepos, bytepos, bytesize)
192 |                 else:
193 |                     out = info_pattern % (bytepos, bytesize)
194 |                 sofar = len(out)
195 |                 print(out, end="")
196 | 
197 |             indent = ' ' * (2 * info['level'])
198 |             if nt == TraversalNodeType.START_STRUCT:
199 |                 desc = indent
200 |                 if info['field_name']:
201 |                     desc += '%s : ' % info['field_name']
202 |                 desc += info['name']
203 |                 if not info['truncated']:
204 |                     desc += ' {'
205 |                 print(desc)
206 |             elif nt == TraversalNodeType.END_STRUCT:
207 |                 if not info['truncated']:
208 |                     print('%s%s} %s' % (' ' * (header_len - sofar), indent, info['name'] or ''))
209 |             elif nt == TraversalNodeType.SIMPLE:
210 |                 print('%s%s : %s' % (indent, info['field_name'], info['type']))
211 |             elif nt == TraversalNodeType.HOLE:
212 |                 parent_name = (info['parent'] or {}).get('name', None)
213 |                 where = 'in ' + parent_name if parent_name else ''
214 |                 print("--> %d bit hole %s <--" % (info['size_bits'], where))
215 | 
216 | Pahole()
217 | 
218 | class TypeOffset (gdb.Command):
219 |     """Displays the fields at the given offset (in bytes) of a type.
220 | The optional /N parameter determines the size of the region inspected;
221 | defaults to the size of a pointer."""
222 | 
223 |     default_width = gdb.lookup_type("void").pointer().sizeof
224 | 
225 |     def __init__ (self):
226 |         super (TypeOffset, self).__init__ ("offset", gdb.COMMAND_NONE,
227 |                                        gdb.COMPLETE_SYMBOL)
228 | 
229 |     def invoke (self, arg, from_tty):
230 |         width = gdb.lookup_type("void").pointer().sizeof
231 |         m = re.match(r'/(\d+) ', arg)
232 |         if m:
233 |             width = int(m.group(1), 0)
234 |             arg = arg[m.span()[1]:]
235 |         (offset, typename) = arg.split(" ")
236 |         offset = int(offset, 0)
237 | 
238 |         type = gdb.lookup_type(typename)
239 |         type = type.strip_typedefs ()
240 |         if type.code not in (gdb.TYPE_CODE_STRUCT, gdb.TYPE_CODE_UNION):
241 |             raise TypeError('%s is not a class/struct/union type' % arg)
242 | 
243 |         begin, end = offset, offset + width - 1
244 |         print("Scanning byte offsets %d..%d" % (begin, end))
245 |         for info in traverse_type(type, name_anon=True):
246 |             if info['node_type'] == TraversalNodeType.END_STRUCT:
247 |                 continue
248 |             if 'top_bitpos' not in info or 'size_bits' not in info:
249 |                 continue
250 | 
251 |             # Not all that interesting to say that the whole type overlaps.
252 |             if info['level'] == 0:
253 |                 continue
254 | 
255 |             (bytepos, bytesize) = (int(info['top_bitpos']/8), int(info['size_bits']/8))
256 |             fend = bytepos + bytesize - 1
257 |             if fend < begin:
258 |                 continue
259 |             if bytepos > end:
260 |                 continue
261 | 
262 |             name_of_type = info.get('name') or type_to_string(info['type'])
263 |             if info['node_type'] == TraversalNodeType.HOLE:
264 |                 name_of_type += " in " + (info['parent'] or {}).get('name', 'struct')
265 |                 if info['next_field']:
266 |                     name_of_type += " before field '" + info['next_field'] + "'"
267 |             print('overlap at byte %d..%d with %s : %s' % (bytepos, fend, info['path'], name_of_type))
268 | 
269 | TypeOffset()
270 | 


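Example invocations (the type name is a placeholder):

    (gdb) pahole JSObject        # recursive layout, with holes called out
    (gdb) pahole/1 JSObject      # don't recurse into contained structs
    (gdb) offset 16 JSObject     # which fields overlap byte offset 16?
    (gdb) offset /8 16 JSObject  # same, scanning an 8-byte window
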
--------------------------------------------------------------------------------
/conf/gdbinit.py:
--------------------------------------------------------------------------------
  1 | # $_when_ticks function.
  2 | # $_when functions
  3 | # set rrprompt on
  4 | # now
  5 | # set logfile /tmp/mylog.txt
  6 | # log some message
  7 | # log -d[ump]
  8 | # log -s[orted]
  9 | # log -e[dit]
 10 | 
 11 | import gdb
 12 | import os
 13 | import re
 14 | 
 15 | class PythonPrint(gdb.Command):
 16 |     """Print the value of the python expression given"""
 17 |     def __init__(self, name="pp"):
 18 |         gdb.Command.__init__(self, name, gdb.COMMAND_USER)
 19 | 
 20 |     def invoke(self, arg, from_tty):
 21 |         print(eval(arg))
 22 | 
 23 | PythonPrint("pp")
 24 | PythonPrint("pprint")
 25 | 
 26 | ######################################################################
 27 | 
 28 | # pdo expr
 29 | # Example: pdo p data[[i for i in range(10,20)]].key
 30 | # Example: pdo p {i for i in range(10,20)}*{j for j in [1, -1]}
 31 | # Special forms:
 32 | #  10..20 - equivalent to [i for i in range(10, 20)]
 33 | class PDo(gdb.Command):
 34 |     """Repeat gdb command, substituted with Python list expressions"""
 35 |     def __init__(self, name):
 36 |         gdb.Command.__init__(self, name, gdb.COMMAND_USER)
 37 | 
 38 |     def commands(self, cmd):
 39 |         inbrackets = True
 40 |         m = re.match(r'^(.*?)\[\[(.*?)\]\](.*)$', cmd)
 41 |         if not m:
 42 |             m = re.match(r'^(.*?)\{(.*?)\}(.*)$', cmd)
 43 |             if not m:
 44 |                 yield(cmd)
 45 |                 return
 46 |             inbrackets = False
 47 |         (pre, expr, post) = m.groups()
 48 | 
 49 |         values = None
 50 |         m = re.match(r'(.*?)\.\.(.*)', expr)
 51 |         if m:
 52 |             start, limit = int(m.group(1)), int(m.group(2))
 53 |             values = range(start, limit)
 54 |         else:
 55 |             values = eval('[' + expr + ']')
 56 | 
 57 |         for v in values:
 58 |             if inbrackets:
 59 |                 yield from self.commands(pre + '[' + str(v) + ']' + post)
 60 |             else:
 61 |                 yield from self.commands(pre + str(v) + post)
 62 | 
 63 |     def invoke(self, arg, from_tty):
 64 |         opts = ""
 65 |         if arg.startswith("/"):
 66 |             rest = arg.index(" ")
 67 |             opts = arg[1:rest]
 68 |             arg = arg[rest+1:]
 69 |         verbose = "v" in opts
 70 | 
 71 |         for cmd in self.commands(arg):
 72 |             if verbose:
 73 |                 gdb.write("(pdo) " + cmd + "\n")
 74 |             gdb.execute(cmd)
 75 | 
 76 | PDo("pdo")
 77 | 
 78 | ######################################################################
 79 | 
 80 | # reappend "stem" "tail" [limit]
 81 | # Example: reappend "p obj->shape" "->parent" 3
 82 | class RepeatedAppend(gdb.Command):
 83 |   """Run a command, appending a "tail" to the command on every iteration, until an error or [limit] is reached"""
 84 |   def __init__(self, name="reappend"):
 85 |     gdb.Command.__init__(self, name, gdb.COMMAND_USER)
 86 | 
 87 |   def invoke(self, arg, from_tty):
 88 |     args = gdb.string_to_argv(arg)
 89 |     cmd = args[0]
 90 |     tail = args[1]
 91 |     limit = int(args[2]) if len(args) > 2 else 9999
 92 |     for i in range(limit):
 93 |       # print("Executing %s + %s x %d" % (args[0], args[1], limit))
 94 |       gdb.execute(cmd)
 95 |       cmd = cmd + tail
 96 | 
 97 | RepeatedAppend("reappend")
 98 | 
 99 | ######################################################################
100 | 
101 | # Polyfill gdb.set_convenience_variable()
102 | 
103 | # The gdb version I was developing with originally did not have the convenience
104 | # variable APIs that were added later. So this is a workaround, where I create
105 | # a gdb function that returns a value, and set it via gdb.execute.
106 | class ValueHolderHack(gdb.Function):
107 |     def __init__(self):
108 |         super(ValueHolderHack, self).__init__('__lastval')
109 |         self.value = None
110 | 
111 |     def invoke(self, *args):
112 |         return self.value
113 | 
114 | valueHolderHack = ValueHolderHack()
115 | 
116 | def set_convenience_variable_hack(name, value):
117 |     valueHolderHack.value = value
118 |     gdb.execute("set ${}=$__lastval()".format(name), from_tty=False, to_string=True)
119 | 
120 | if not hasattr(gdb, 'set_convenience_variable'):
121 |     setattr(gdb, 'set_convenience_variable', set_convenience_variable_hack)
122 | 
123 | ######################################################################
124 | 
125 | class Labels(dict):
126 |     def __init__(self):
127 |         # Substitution pattern maintenance -- this class keeps a compiled regex
128 |         # 'pattern' up to date with its set of keys. The pattern is lazily
129 |         # generated whenever it's needed and the set of keys has changed since
130 |         # the last time it was rebuilt.
131 |         self.dirty = True
132 |         self.pattern()
133 | 
134 |         # This class supports a single external consumer that feeds off a "log"
135 |         # of added keys. Every key added will be appended to a list that is
136 |         # cleared out when flush_added() is called to retrieve all adds since
137 |         # the previous call. (If the class is clear()ed, 'added' will be
138 |         # reset.)
139 |         self.added = []
140 | 
141 |         # When initially loading a log file, we might have labels with types
142 |         # that have not been loaded yet. Keep these labels in a pending list
143 |         # and try to apply them again every time we load a new objfile.
144 |         self.pending_labels = []
145 | 
146 |     def label(self, token, name, typestr, gdbval=None, report=True):
147 |         """Set a label named `name` to the value `token` (probably a numeric
148 |         value) cast according to `typestr`, which is a raw cast expression.
149 |         gdbval is... figuring that out now."""
150 |         #print("Setting label {} := {} of type {} gdbval={}".format(token, name, typestr, gdbval))
151 |         if gdbval is None:
152 |             try:
153 |                 # Look for a pointer type, eg in `(JSObject *) 0xdeadbeef`
154 |                 if m := re.match(r'(.*?)( *\**)$', typestr):
155 |                     t, ptrs = m.groups()
156 |                 else:
157 |                     t, ptrs = (typestr, '')
158 |                 gdbval = gdb.parse_and_eval(f"('{t}'{ptrs}) {token}")
159 |             except gdb.error as e:
160 |                 # This can happen if we load in a set of labels before the type
161 |                 # exists.
162 |                 #
163 |                 # TODO: Report on unknown types at a reasonable time.
164 | 
165 |                 #gdb.write("unknown type: " + str(e) + "\n")
166 |                 #gdb.write(" -->" + f"('{t}'{ptrs}) {token}" + "<--\n")
167 |                 self.pending_labels.append((token, name, typestr))
168 |                 return False
169 |         self[token] = (name, typestr)
170 |         if report:
171 |             print(f"all occurrences of {token} will be replaced with ${name} of type {typestr}")
172 |         gdb.set_convenience_variable(name, gdbval)
173 |         return True
174 | 
175 |     def __setitem__(self, key, pair):
176 |         key = self.canon(key)
177 |         if dict.get(self, key) == pair:
178 |             return
179 |         #print(f"all occurrences of {key} will be replaced with ${pair[0]} of type {pair[1]}")
180 |         
181 |         # Remove all old keys referring to $name so that the old key will not be replaced
182 |         # with the updated $name.
183 |         deadkeys = {k for k, (name, t) in self.items() if name == pair[0]}
184 |         for deadkey in deadkeys:
185 |             del self[deadkey]
186 | 
187 |         dict.__setitem__(self, key, pair)
188 |         self.added.append(key)
189 |         self.dirty = True
190 | 
191 |     def clear(self):
192 |         dict.clear(self)
193 |         self.added, self.dirty = [], True
194 | 
195 |     def canon(self, s):
196 |         try:
197 |             n = int(s, 0)
198 |             if n < 0:
199 |                 return "%#x" % (n & 0xffffffffffffffff)
200 |             else:
201 |                 return "%#x" % n
202 |         except ValueError:
203 |             return s
204 | 
205 |     def flush_added(self):
206 |         '''Retrieve the list of entries added since the last call to this method.'''
207 |         ret = [(k, self[k]) for k in self.added if k in self]
208 |         self.added = []
209 |         return ret
210 | 
211 |     def __delitem__(self, key):
212 |         dict.__delitem__(self, key)
213 |         self.dirty = True
214 | 
215 |     def __getitem__(self, key):
216 |         return dict.__getitem__(self, self.canon(key))
217 | 
218 |     def __contains__(self, key):
219 |         return dict.__contains__(self, self.canon(key))
220 | 
221 |     def get(self, text, default=None, verbose=False):
222 |         rep = dict.get(self, self.canon(text), default)
223 |         if rep == default:
224 |             return default
225 |         return "%s [[$%s]]" % (text, rep[0]) if verbose else "$" + rep[0]
226 | 
227 |     def copy(self):
228 |         c = Labels()
229 |         c.update(self)
230 |         return c
231 | 
232 |     def pattern(self):
233 |         if self.dirty:
234 |             #print("Rebuilding pattern with {} replacements".format(len(self)))
235 |             if len(self) == 0:
236 |                 # Pattern that never matches
237 |                 self.repPattern = re.compile(r'^(?!.).')
238 |             else:
239 |                 # This requires word boundaries, and does not match words
240 |                 # starting with '$' (to avoid replacing eg $3, though honestly
241 |                 # if you set a label for 3 you kind of deserve what you get.)
242 |                 reps = []
243 |                 for key in self.keys():
244 |                     reps.append(key)
245 |                     reps.append(str(int(key, 16)))
246 |                 self.repPattern = re.compile(r'\b(?<!\$)(?:' + '|'.join(reps) + r')\b')
247 |             self.dirty = False
248 |         return self.repPattern
249 | 
250 |     def apply(self, text, verbose=False):
251 |         '''Rewrite text, replacing every labeled value with its $name.'''
252 |         return self.pattern().sub(
253 |             lambda m: self.get(m.group(0), m.group(0), verbose=verbose), text)
254 | 
255 | labels = Labels()
256 | 
257 | def index_of_first(s, chars, start=0):
258 |     '''Position of whichever of `chars` occurs first in s at or after
259 |     `start`, or None if none of them appear.'''
260 |     positions = [p for p in (s.find(c, start) for c in chars) if p >= 0]
261 |     return min(positions) if positions else None
262 | 
263 | ######################################################################
264 | 
265 | class LabelCmd(gdb.Command):
266 |     """label NAME=VALUE
267 | 
268 |     All occurrences of VALUE will be replaced with $NAME. Also, the convenience variable $NAME will be set to VALUE.
298 | 
299 |     Example:
300 | 
301 |       label GLOBAL=cx->global()
302 | 
303 |     will evaluate the expression `cx->global()` to something like
304 | 
305 |       (JSObject*) 0xabcd0123efef0800
306 | 
307 |     and now later on when the expression `obj` happens to evaluate to the same object,
308 | 
309 |       gdb> p obj
310 |       $1 = (JSObject *) $GLOBAL
311 |       gdb> p $GLOBAL
312 |       $2 = (JSObject *) $GLOBAL
313 |    """
314 | 
315 |     def __init__(self, name):
316 |         super(LabelCmd, self).__init__(name, gdb.COMMAND_USER, gdb.COMPLETE_NONE)
317 | 
318 |     def invoke(self, arg, from_tty):
319 |         if len(arg) == 0:
320 |             self.show_all_labels()
321 |             return
322 | 
323 |         if arg.startswith('variable '):
324 |             start=len('variable ')
325 |             pos = index_of_first(arg, [' ', '='], start)
326 |             if pos is None:
327 |                 gdb.write("invalid usage\n")
328 |                 return
329 |             self.set_label(arg[start:pos], arg[pos+1:])
330 |             return
331 | 
332 |         pos = index_of_first(arg, [' ', '='])
333 |         if pos is not None:
334 |             name, val = (arg[0:pos], arg[pos+1:])
335 |             self.set_label(name, val.lstrip())
336 |             return
337 | 
338 |         self.get_label(arg)
339 | 
340 |     def get_label(self, name):
341 |         for key, (n, t) in labels.items():
342 |             if n == name:
343 |                 gdb.write(f"{name} = ({t}) {key}\n")
344 |                 break
345 |         else:
346 |             gdb.write("Label '{}' not found\n".format(name))
347 | 
348 |     def prefer_prettyprinted(self, t):
349 |         s = str(t)
350 |         return s.startswith("JS::Handle") or s.startswith("JS::Value")
351 | 
352 |     def set_label(self, name, value):
353 |         if re.fullmatch(r'0x[0-9a-fA-F]+', value) or value.lstrip('-').isdecimal():
354 |             if int(value, 0) != 0:
355 |                 labels[value] = (name, 'void*')
356 |                 return
357 | 
358 |         v = gdb.parse_and_eval(value)
359 | 
360 |         # FIXME! If there is a pretty printer for v that displays a different
361 |         # hex value than its address, then we will label using that instead.
362 |         # (Example: Symbol displays its desc address, though in the 0x0 case we
363 |         # will now skip that..)
364 | 
365 |         # First, attempt to cast to void*, unless the special case code says this
366 |         # type should prefer prettyprinting.
367 |         valstr = None
368 |         try:
369 |             if not self.prefer_prettyprinted(v.type):
370 |                 valstr = str(v.cast(gdb.lookup_type('void').pointer()))
371 |         except Exception as e:
372 |             pass
373 | 
374 |         if valstr is None:
375 |             # Fall back on the (possibly prettyprinted) output.
376 |             valstr = str(v)
377 | 
378 |         m = re.search(r'0x[0-9a-fA-F]+', valstr)
379 |         if not m:
380 |             m = re.search(r'-?[0-9]{4,20}', valstr)
381 |         if not m or m.group(0) == '0' or m.group(0) == '0x0':
382 |             gdb.write("No labelable value found in " + valstr + "\n")
383 |             return
384 |         numeric = m.group(0)
385 | 
386 |         # gdb.write("gots %s, setting labels[%s] = %s\n" % (str(v), m.group(0), value))
387 | 
388 |         # label $3 SOMETHING
389 |         # should set $SOMETHING to the actual value of $3
390 | 
391 |         # If the numeric value is preceded by something that looks like a cast to a pointer, use the cast as the type.
392 |         # gdb.write("pattern = " + r'\(([\w:]+ *\*+)\) *' + numeric + "\n")
393 |         # gdb.write("valstr = " + valstr + "\n")
394 |         if mm := re.search(r'\(([^\(\)]+ *\*+)\) *' + numeric, valstr):
395 |             gdb.write("  type = " + mm.group(1) + "\n")
396 |             labels.label(numeric, name, mm.group(1), gdbval=v)
397 |         else:
398 |             labels.label(m.group(0), name, str(v.type), gdbval=v)
399 | 
400 |     def show_all_labels(self):
401 |         seen = set()
402 |         for key, (name, t) in labels.items():
403 |             if name not in seen:
404 |                 seen.add(name)
405 |                 gdb.write(f"${name} = ({t}) {key}\n")
406 | 
407 | LabelCmd('label')
408 | 
409 | class UnlabelCmd(gdb.Command):
410 |     def __init__(self, name):
411 |         super(UnlabelCmd, self).__init__(name, gdb.COMMAND_USER, gdb.COMPLETE_NONE)
412 | 
413 |     def invoke(self, arg, from_tty):
414 |         deadname = arg
415 |         deadkeys = {k: name for k, (name, t) in labels.items() if name == deadname}
416 |         for key in deadkeys.keys():
417 |             del labels[key]
418 | 
419 | UnlabelCmd('unlabel')
420 | 
421 | class util:
422 |     def split_command_arg(arg, allow_dash=False):
423 |         options = []
424 |         if arg.startswith("/"):
425 |             pos = arg.index(" ") if " " in arg else len(arg)
426 |             options.extend(arg[1:pos])
427 |             arg = arg[pos+1:]
428 |         elif allow_dash and arg.startswith("-"):
429 |             # Support multiple options: -foo -bar
430 |             all_options = []
431 |             while arg.startswith('-'):
432 |                 pos = arg.index(" ") if " " in arg else len(arg)
433 |                 options.append(arg[1:pos])
434 |                 if pos == len(arg):
435 |                     arg = ''
436 |                     break
437 |                 arg = arg[pos+1:]
438 | 
439 |         return options, arg
440 | 
441 |     def evaluate(expr, replace=True, brieftype=True):
442 |         fmt = {}
443 |         if m := re.search(r':(\w+)$', expr):
444 |             expr = expr[:-len(m[0])]
445 |             flags = m[1]
446 |             if 'r' in flags:
447 |                 fmt['raw'] = True
448 |                 flags = flags.replace('r', '')
449 |             if len(flags) == 1:
450 |                 fmt['format'] = flags
451 |         try:
452 |             v = gdb.parse_and_eval(expr)
453 |         except gdb.error as e:
454 |             print("Invalid embedded expression «{}»".format(expr))
455 |             raise e
456 |         s = v.format_string(**fmt)
457 |         t = v.type
458 |         ts = str(t)
459 | 
460 |         # Ugh. In some situations, the value will be prefixed with its type,
461 |         # and others it will not. Enough will not that I wanted to add it in.
462 | 
463 |         if s.startswith("("):
464 |             return s
465 |         BORING_TYPES = ("int", "unsigned int", "uint32_t", "int32_t", "uint64_t", "int64_t")
466 |         if ts in BORING_TYPES:
467 |             return s
468 |         # If the type name is in the value, as in js::gc::CellColor::Black,
469 |         # then we don't need to see the cast.
470 |         if ts in s:
471 |             return s
472 |         if brieftype:
473 |             ots = ts
474 |             ts = ts.replace('const ', '')
475 |             ts = ts.replace(' const', '')
476 |             ts = re.sub(r'\w+::', '', ts)
477 |             ts = ts.replace(' *', '*')
478 |             # Same check as above, but sometimes the type gets aliased into a
479 |             # different namespace. So try even harder to throw it out.
480 |             if ts in s:
481 |                 return s
482 |         return "(%s) %s" % (ts, s)
483 | 
484 | class PrintCmd(gdb.Command):
485 |     """\
486 | like gdb's builtin 'print' function, with label replacements and special syntax.
487 | 
488 | Any substring that matches a label SOMELABEL will be replaced with the
489 | literal string `$SOMELABEL`.
490 | 
491 | If `m..n` is found anywhere in the string, the print will be repeated for
492 | every number in that range.
493 | 
494 | If `{substr}**n` is found in the string, then substr will be repeated n
495 | times.
496 | """
497 | 
498 |     def __init__(self, name):
499 |         super(PrintCmd, self).__init__(name, gdb.COMMAND_USER, gdb.COMPLETE_COMMAND)
500 | 
501 |     def enumerateExprs(self, expr):
502 |         m = re.match(r'(.*?)(\w+)\.\.(\w+)(.*)', expr)
503 |         if m:
504 |             start = gdb.parse_and_eval(m.group(2))
505 |             end = gdb.parse_and_eval(m.group(3))
506 |             for i in range(start, end):
507 |                 newExpr = m.group(1) + str(i) + m.group(4)
508 |                 yield from self.enumerateExprs(newExpr)
509 |             return
510 | 
511 |         m = re.match(r'(.*?)\{(.*?)\}\*\*(\d+)(.*)', expr)
512 |         if m:
513 |             start, subexpr, n, rest = m.groups()
514 |             n = int(n)
515 |             newExpr = start + ''.join(subexpr for _ in range(n)) + rest
516 |             yield from self.enumerateExprs(newExpr)
517 |             return
518 | 
519 |         yield expr
520 |         return
521 | 
522 |     def invoke(self, arg, from_tty):
523 |         # Format for x command is:  oxdutfaicsz bhwg
524 |         # but print command is only 1oxdutfaicsz
525 |         # ...but /r also exists; it skips pretty printers.
526 |         # We add
527 |         #   v = verbose
528 |         # and augment
529 |         #   r = raw
530 |         # to skip label substitutions.
531 | 
532 |         opts, arg = util.split_command_arg(arg)
533 |         fmt = ''.join(o[0] for o in opts)
534 |         verbose = 'v' in fmt
535 |         raw = 'r' in fmt
536 | 
537 |         fmt = fmt.replace('v', '')
538 |         fmtStr = "/" + fmt if fmt else ''
539 | 
540 |         for e in self.enumerateExprs(arg):
541 |             try:
542 |                 v = gdb.parse_and_eval(e)
543 |             except gdb.error as exc:
544 |                 gdb.write(str(exc) + "\n")
545 |                 return
546 |             gdb.set_convenience_variable('__expr', v)
547 |             output = gdb.execute("print" + fmtStr + " $__expr",
548 |                                  from_tty, to_string=True)
549 |             if not raw:
550 |                 output = labels.apply(output, verbose)
551 |             gdb.write(output)
552 | 
553 | PrintCmd('p')
554 | 


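A few example commands defined by this file (the expressions are
illustrative):

    (gdb) pdo p data[[10..13]].key     # runs p data[10].key, data[11].key, data[12].key
    (gdb) reappend "p obj->shape" "->parent" 3
    (gdb) label GLOBAL=cx->global()    # later output shows $GLOBAL instead of the raw address
    (gdb) p vec.mBegin[0..3]           # PrintCmd expands the range itself
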
--------------------------------------------------------------------------------
/conf/gdbinit.rr:
--------------------------------------------------------------------------------
1 | define rfin
2 |   reverse-finish
3 | end
4 | 


--------------------------------------------------------------------------------
/conf/gdbinit.rr.py:
--------------------------------------------------------------------------------
  1 | # $_when_ticks function.
  2 | # $_when functions
  3 | # set rrprompt on
  4 | # now
  5 | # set logfile /tmp/mylog.json
  6 | # log some message
  7 | # log -unsorted
  8 | # log -sorted
  9 | # log -edit
 10 | 
 11 | import gdb
 12 | import io  # used by SharedFile below
 13 | import json
 14 | import os
 15 | import random
 16 | import re
 16 | 
 17 | from os.path import abspath, dirname, expanduser
 18 | from os import environ as env
 19 | 
 20 | gdb.execute("source {}/gdbinit.rr".format(abspath(expanduser(dirname(__file__)))))
 21 | 
 22 | # Cache the session ID in the environment to allow hot-reloading of this file.
 23 | # It might be better to hang it somewhere like `gdb.sessionkey` instead?
 24 | RUN_ID = os.environ.setdefault("__RRSESSION", "RRSESSION-" + str(random.random()))
 25 | RUNNING_RR = None
 26 | 
 27 | def running_rr():
 28 |     '''Detect whether running under rr.'''
 29 |     global RUNNING_RR
 30 |     if RUNNING_RR is not None:
 31 |         return RUNNING_RR
 32 |     RUNNING_RR = os.environ.get('GDB_UNDER_RR', False)
 33 |     return RUNNING_RR
 34 | 
 35 | 
 36 | def thread_id():
 37 |     thread = gdb.selected_thread()
 38 |     if thread is None:
 39 |         return None
 40 |     gtid = thread.global_num
 41 |     return f"T{gtid}"
 42 | 
 43 | 
 44 | def thread_detail():
 45 |     thread = gdb.selected_thread()
 46 |     if thread is None:
 47 |         return None
 48 |     return thread.details
 49 | 
 50 | 
 51 | def target_id():
 52 |     thread = gdb.selected_thread()
 53 |     if thread is None:
 54 |         return None
 55 |     return thread.ptid[0]
 56 | 
 57 | 
 58 | def setup_log_dir():
 59 |     share_dir = None
 60 |     if 'RR_LOGS' in env:
 61 |         share_dir = env['RR_LOGS']
 62 |     else:
 63 |         # If ~/.local/share exists, use that as the default location of
 64 |         # rr-logs/. (If it does not exist, don't create it!)
 65 |         share_root = os.path.join(env['HOME'], ".local", "share")
 66 |         if os.path.exists(share_root):
 67 |             share_dir = os.path.join(share_root, "rr-logs")
 68 | 
 69 |     if share_dir is not None:
 70 |         os.makedirs(share_dir, exist_ok=True)
 71 |         return share_dir
 72 | 
 73 |     return os.environ['HOME']
 74 | 
 75 | DEFAULT_LOG_DIR = setup_log_dir()
 76 | 
 77 | 
 78 | def when():
 79 |     when = gdb.execute("when", False, True)
 80 |     m = re.search(r'(\d+)', when)
 81 |     if not m:
 82 |         raise Exception("when returned invalid string")
 83 |     return int(m.group(1))
 84 | 
 85 | 
 86 | def when_ticks():
 87 |     when = gdb.execute("when-ticks", False, True)
 88 |     m = re.search(r'(\d+)', when)
 89 |     if not m:
 90 |         raise Exception("when-ticks returned invalid string")
 91 |     return int(m.group(1))
 92 | 
 93 | 
 94 | def now():
 95 |     return "%s:%s/%s" % (thread_id(), when(), when_ticks())
 96 | 
 97 | 
 98 | def nowTuple():
 99 |     return (when(), thread_id(), when_ticks())
100 | 
101 | 
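    | # The rr-aware prompt looks like "(rr T1:1500/123456) ": thread id,
    | # event number, and tick count (illustrative values).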
102 | def rrprompt(current_prompt):
103 |     return "(rr " + now() + ") "
104 | 
105 | 
106 | class ParameterRRPrompt(gdb.Parameter):
107 |     def __init__(self):
108 |         super(ParameterRRPrompt, self).__init__('rrprompt', gdb.COMMAND_SUPPORT, gdb.PARAM_BOOLEAN)
109 |         self.orig_prompt = gdb.prompt_hook
110 | 
111 |     def get_set_string(self):
112 |         gdb.prompt_hook = self.orig_prompt
113 |         if self.value:
114 |             if running_rr():
115 |                 gdb.prompt_hook = rrprompt
116 |                 return "rr-aware prompt enabled"
117 |             else:
118 |                 return "not running rr"
119 |         else:
120 |             return "rr-aware prompt disabled"
121 | 
122 |     def get_show_string(self, svalue):
123 |         return svalue
124 | 
125 | 
126 | class PythonWhenTicks(gdb.Function):
127 |     """$_when_ticks - return the numeric output of rr's 'when-ticks' command
128 | Usage:
129 |     $_when_ticks()
130 | """
131 | 
132 |     def __init__(self):
133 |         super(PythonWhenTicks, self).__init__('_when_ticks')
134 | 
135 |     def invoke(self):
136 |         return when_ticks()
137 | 
138 | 
139 | class PythonWhen(gdb.Function):
140 |     """$_when - return the numeric output of rr's 'when' command
141 | Usage:
142 |     $_when()
143 | """
144 | 
145 |     def __init__(self):
146 |         super(PythonWhen, self).__init__('_when')
147 | 
148 |     def invoke(self):
149 |         return when()
150 | 
151 | 
152 | class PythonNow(gdb.Command):
153 |     """Output the current position as <thread>:<event>/<ticks>."""
154 |     def __init__(self):
155 |         gdb.Command.__init__(self, "now", gdb.COMMAND_USER)
156 | 
157 |     def invoke(self, arg, from_tty):
158 |         try:
159 |             gdb.write(now() + "\n")
160 |         except gdb.error:
161 |             gdb.write("?? when/when-ticks unavailable (not running under rr?)\n")
162 | 
163 | 
164 | class SharedFile(io.TextIOWrapper):
165 |     def __init__(self, filename):
166 |         self.fh = open(filename, "ba+")
167 |         super(SharedFile, self).__init__(self.fh)
168 |         self.last_known_size = self.seek(0, 2)
169 | 
170 |     def changed(self):
171 |         return self.last_known_size != self.seek(0, 2)
172 | 
173 |     def record_end(self):
174 |         self.last_known_size = self.tell()
175 | 
176 |     def write(self, buffer):
177 |         nbytes = super(SharedFile, self).write(buffer)
178 |         self.record_end()
179 |         return nbytes
180 | 
181 | 
182 | # Generator that yields a sequence of actions read from the given log file.
183 | # Each line must be a valid JSON document.
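    | # Illustrative examples (fields as written by the log machinery below;
    | # some fields are optional):
    | #   {"type": "label", "key": "myobj", "value": "0x7fff5500", "datatype": "JSObject *"}
    | #   {"type": "log", "event": 1500, "thread": "T1", "ticks": 123456, "message": "hit it"}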
184 | def log_actions(fh):
185 |     lineno = 0
186 |     for line in fh:
187 |         lineno += 1
188 |         data = json.loads(line)
189 |         data['lineno'] = lineno
190 |         yield data
191 | 
192 | 
193 | class ParameterLogQuiet(gdb.Parameter):
194 |     quiet = False
195 | 
196 |     def __init__(self):
197 |         # FIXME: Rename from 'logging', try to use nested command stuff?
198 |         super(ParameterLogQuiet, self).__init__('logquiet', gdb.COMMAND_SUPPORT, gdb.PARAM_BOOLEAN)
199 | 
200 |     def get_set_string(self):
201 |         ParameterLogQuiet.quiet = self.value
202 |         return "logging is " + ("quiet" if ParameterLogQuiet.quiet else "noisy")
203 | 
204 |     def get_show_string(self, svalue):
205 |         return "logging is " + ("quiet" if ParameterLogQuiet.quiet else "noisy")
206 | 
207 | 
208 | class PythonLog(gdb.Command):
209 |     """Append current event/tick-count with message to log file
210 | 
211 |     log MSG - append MSG to the log file
212 |     log/d PAT - delete log messages containing the substring PAT
213 |     log/p MSG - display the message (with replacements) without logging it
214 |     log/s - display log messages sorted by execution timestamp (default)
215 |     log/r - do not replace labels in the output (similar to p/r)
216 |     log/v - verbose mode, showing original text with label replacements
217 |     log/e - edit the log file and reload
218 |     log/g WHEN - seek to the time of the given log message (WHEN is @<index> or c<checkpoint>)"""
219 |     def __init__(self):
220 |         gdb.Command.__init__(self, "log", gdb.COMMAND_USER)
221 |         self.LogFile = None
222 |         self.ExpectedSize = None
223 |         gdb.events.before_prompt.connect(lambda: self.sync_log())
224 | 
225 |     def openlog(self, filename, quiet=False):
226 |         first_open = not self.LogFile
227 | 
228 |         if self.LogFile:
229 |             self.LogFile.close()
230 | 
231 |         self.LogFile = SharedFile(filename)
232 |         if not quiet:
233 |             gdb.write("Logging to %s\n" % (self.LogFile.name,))
234 | 
235 |         if first_open:
236 |             #print("Syncing with log for the first time")
237 |             self.sync_log()
238 |         labels.clear()
239 | 
240 |         self.LogFile.seek(0)
241 | 
242 |         # Load all 'label' actions in the log.
243 |         for action in log_actions(self.LogFile):
244 |             if action['type'] == 'label':
245 |                 labels.label(action['key'], action['value'], action['datatype'], report=False)
246 | 
247 |         labels.flush_added()
248 |         self.LogFile.record_end()
249 | 
250 |     def stoplog(self):
251 |         self.LogFile.close()
252 |         self.LogFile = None
253 | 
254 |     def default_log_filename(self):
255 |         return os.path.join(DEFAULT_LOG_DIR, f"rr-session-{target_id()}.json")
256 | 
257 |     def sync_log(self):
258 |         '''Add any new labels to the log, and grab any updates if another process updated the file.'''
259 |         if not self.LogFile:
260 |             # Note: can't really just open the log immediately, because we
261 |             # won't have the type info for the replacements when gdb first gets
262 |             # going.
263 |             #
264 |             #print("Checking changed: no log yet")
265 |             return
266 | 
267 |         added = labels.flush_added()
268 |         #print("grabbing new labels: {}".format([v for k, (v, t) in added]))
269 |         for k, (v, t) in added:
270 |             #print("writing {} -> ({}) {} to log".format(k, t, v))
271 |             json.dump({'type': 'label', 'key': k, 'value': v, 'datatype': t}, self.LogFile)
272 |             self.LogFile.write("\n")
273 | 
274 |         if self.LogFile.changed():
275 |             #print("Checking changed: yes (or dirty)")
276 |             self.openlog(self.LogFile.name, quiet=False)  # TEMP! FIXME!
277 | 
278 |     def invoke(self, arg, from_tty):
279 |         # We probably ought to flush out dirty labels here.
280 |         if self.LogFile is None:
281 |             self.openlog(self.default_log_filename())
282 | 
283 |         opts, arg = util.split_command_arg(arg, allow_dash=True)
284 |         # print("after split, opt={} arg={}".format(opts, arg))
285 | 
286 |         do_addentry = False
287 |         dump_args = {'sort': True}
288 |         do_print = False
289 |         do_dump = True
290 |         do_log = True
291 |         raw = False
291 | 
292 |         if arg:
293 |             do_addentry = True
294 |             do_dump = False
295 | 
296 |         for opt in opts:
297 |             if 'sorted'.startswith(opt):
298 |                 # log/s : same as log with no options, display log in execution order.
299 |                 dump_args['sort'] = True
300 |                 do_dump = True
301 |             elif 'verbose'.startswith(opt):
302 |                 # log/v : display log in execution order, with replacements and originals
303 |                 dump_args['sort'] = True
304 |                 dump_args['replace'] = True
305 |                 dump_args['verbose'] = True
306 |             elif 'unsorted'.startswith(opt):
307 |                 # log/u : display log in entry order
308 |                 dump_args['sort'] = False
309 |                 do_dump = True
310 |             elif 'edit'.startswith(opt):
311 |                 # log/e : edit the log in $EDITOR
312 |                 self.edit()
313 |                 return
314 |             elif 'delete'.startswith(opt):
315 |                 # log/d : delete log messages containing substring
316 |                 self.delete(arg)
317 |                 return
318 |             elif 'raw'.startswith(opt):
319 |                 # log/r : do not do any label replacements in message
320 |                 dump_args['replace'] = False
321 |                 raw = True
322 |             elif 'print-only'.startswith(opt):
323 |                 # log/p : display the log message without logging it permanently
324 |                 do_print = True
325 |                 do_dump = False
326 |                 do_log = False
326 |             elif 'goto'.startswith(opt):
327 |                 # log/g : seek to the time of a log entry
328 |                 self.goto(arg)
329 |                 return
330 |             else:
331 |                 gdb.write("unknown log option '{}'\n".format(opt))
332 | 
333 |         if do_addentry:
334 |             out = self.process_message(arg)
335 |             if not raw:
336 |                 out = labels.apply(out, verbose=False)
337 |             # If any substitutions were made, display the resulting log message.
338 |             do_print = do_print or not ParameterLogQuiet.quiet
339 |             if out != arg:
340 |                 do_print = True
341 |             if do_log and self.LogFile:
342 |                 gdb_out = gdb.execute("checkpoint", to_string=True)
343 |                 action = {'type': 'log', 'event': when(), 'thread': thread_id(), 'tname': thread_detail(), 'ticks': when_ticks(), 'message': out}
344 |                 if m := re.search(r'Checkpoint (\d+)', gdb_out):
345 |                     action['checkpoint'] = m.group(1)
346 |                     action['session'] = RUN_ID
347 |                 json.dump(action, self.LogFile)
348 |                 self.LogFile.write("\n")
349 | 
350 |         if do_dump:
351 |             self.dump(**dump_args)
352 | 
353 |         if do_print and do_addentry:  # 'out' exists only when a message was given
354 |             gdb.write(out + "\n")
355 | 
356 |     def process_message(self, message):
357 |         # Replace {expr} with the result of evaluating the (gdb) expression expr.
358 |         # Allow one level of curly bracket nesting within expr.
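    |         # e.g. 'len={vec.length()} on $thread' -> 'len=17 on T2' (illustrative).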
359 |         out = re.sub(r'\{((?:\{[^\}]*\}|\\\}|[^\}])*)\}',
360 |                      lambda m: util.evaluate(m.group(1)),
361 |                      message)
362 | 
363 |         # Replace $thread with "T3", where 3 is the gdb's notion of thread number.
364 |         out = out.replace("$thread", thread_id())
365 | 
366 |         # Let gdb handle other $ vars.
367 |         return re.sub(r'(\$\w+)', lambda m: util.evaluate(m.group(1)), out)
368 | 
369 |     def write_message(self, message, index=None, verbose=False):
370 |         if len(message) == 7:
371 |             (event, thread, ticks, lineno, msg, checkpoint, session) = message
372 |         elif len(message) == 6:
373 |             (event, ticks, lineno, msg, checkpoint, session) = message
374 |             thread = "T?"
375 | 
376 |         if verbose:
377 |             gdb.write(f"{thread}:{event}/{ticks} ")
378 |         if checkpoint is not None and RUN_ID == session:
379 |             gdb.write(f"[c{checkpoint}] ")
380 |         elif index is not None:
381 |             gdb.write(f"[@{index}] ")
382 | 
383 |         gdb.write(msg + "\n")
384 | 
385 |     def build_messages(self, replace=True, verbose=False):
386 |         if not self.LogFile:
387 |             gdb.write("No log file open\n")
388 |             return
389 | 
390 |         self.LogFile.seek(0)
391 | 
392 |         messages = []
393 |         for action in log_actions(self.LogFile):
394 |             if action['type'] != 'log':
395 |                 continue
396 | 
397 |             message = action['message']
398 |             if replace:
399 |                 message = labels.apply(message, verbose)
400 |             action.setdefault('thread', 'T?')
401 | 
402 |             messages.append((action['event'], action['thread'], action['ticks'], action['lineno'], message, action.get('checkpoint'), action.get('session')))
403 | 
404 |         return messages
405 | 
406 |     def dump(self, sort=False, replace=True, verbose=False):
407 |         messages = self.build_messages(replace=replace, verbose=verbose)
408 |         if messages is None:
409 |             return
410 | 
411 |         now = nowTuple()
412 |         place = -1
413 |         if sort:
414 |             messages.sort()
415 |             for i, message in enumerate(messages):
416 |                 when = message[0:3]
417 |                 #gdb.write(f"((now={now} when={when})) ")
418 |                 if when == now:
419 |                     gdb.write("=> ")
420 |                     now = None
421 |                 elif now is not None and when > now:
422 |                     gdb.write("=> (now)\n   ")
423 |                     now = None
424 |                 else:
425 |                     gdb.write("   ")
426 |                 self.write_message(message, index=i, verbose=verbose)
427 |         else:
428 |             for message in messages:
429 |                 self.write_message(message, verbose=verbose)
430 | 
431 |     def edit(self):
432 |         if not self.LogFile:
433 |             gdb.write("No log file open\n")
434 |             return
435 | 
436 |         filename = self.LogFile.name
437 |         self.LogFile.close()
438 |         if os.environ.get("INSIDE_EMACS"):
439 |             pass  # Use emacsclient if possible.
440 |         os.system(os.environ.get('EDITOR', 'emacs') + " " + filename)
441 |         self.openlog(filename, quiet=True)
442 | 
443 |     def delete(self, filter_out):
444 |         if not self.LogFile:
445 |             gdb.write("No log file open\n")
446 |             return
447 | 
448 |         count = 0
449 | 
450 |         filename = self.LogFile.name
451 |         self.LogFile.close()
452 |         tempfilename = filename + ".tmp"
453 |         with open(tempfilename, "wt") as outfh, open(filename, "rt") as infh:
454 |             for action in log_actions(infh):
455 |                 if action['type'] == 'log' and filter_out in action['message']:
456 |                     count += 1
457 |                 else:
458 |                     json.dump(action, outfh)
459 |                     outfh.write("\n")
460 |         os.rename(filename, filename + ".old")
461 |         os.rename(tempfilename, filename)
462 |         self.openlog(filename, quiet=True)
463 | 
464 |         gdb.write(f"Deleted {count} {'entry' if count == 1 else 'entries'}\n")
465 | 
466 |     def goto(self, where):
467 |         if where.startswith("@"):
468 |             index = int(where[1:])
469 |             messages = self.build_messages(replace=False, verbose=False)
470 |             if messages is None:
471 |                 return
472 |             messages.sort()
473 |             event, thread, ticks = messages[index][0:3]
474 |             if thread_id() != thread:
475 |                 gdb.execute(f"thread {thread[1:]}")
476 |             gdb.execute(f"seek {ticks}")
477 |             return
478 | 
479 |         if where.startswith("c"):
480 |             checkpoint = int(where[1:])
481 |         else:
482 |             checkpoint = int(where)
483 |         gdb.execute(f"restart {checkpoint}")
484 | 
485 | 
486 | class ParameterLogFile(gdb.Parameter):
487 |     def __init__(self, logger):
488 |         super(ParameterLogFile, self).__init__('logfile', gdb.COMMAND_SUPPORT, gdb.PARAM_STRING)
489 |         self.logger = logger
490 | 
491 |     def get_set_string(self):
492 |         if self.value:
493 |             self.logger.openlog(self.value)
494 |             return "logging to %s" % self.value
495 |         else:
496 |             return "logging stopped"
497 | 
498 |     def get_show_string(self, svalue):
499 |         if not self.logger.LogFile:
500 |             return ""
501 |         return self.logger.LogFile.name
502 | 
503 | 
504 | # Create gdb commands.
505 | ParameterLogFile(PythonLog())
506 | ParameterLogQuiet()
507 | if running_rr():
508 |     ParameterRRPrompt()
509 |     PythonWhenTicks()
510 |     PythonWhen()
511 |     PythonNow()
512 | 


--------------------------------------------------------------------------------
/conf/gdbinit.sfink:
--------------------------------------------------------------------------------
 1 | # Seems a little unsafe; this is a gdb performance tweak. See
 2 | # https://robert.ocallahan.org/2020/03/debugging-gdb-using-rr-ptrace-emulation.html
 3 | maint set catch-demangler-crashes off
 4 | 
 5 | add-auto-load-safe-path ~/src
 6 | add-auto-load-safe-path ~/.rr/
 7 | 
 8 | define empretty
 9 |   python import mozilla.autoload
10 |   python mozilla.autoload.register(gdb.current_objfile())
11 | end
12 | define pretty
13 |   python sys.path.insert(0, '/home/sfink/src/mozilla/js/src/gdb')
14 |   empretty
15 | end
16 | define pretty2
17 |   python sys.path.insert(0, '/home/sfink/src/mozilla2/js/src/gdb')
18 |   empretty
19 | end
20 | define pretty3
21 |   python sys.path.insert(0, '/home/sfink/src/mozilla3/js/src/gdb')
22 |   empretty
23 | end
24 | define pretty4
25 |   python sys.path.insert(0, '/home/sfink/src/mozilla4/js/src/gdb')
26 |   empretty
27 | end
28 | 
29 | define mlabel
30 |   set $_VP=vp
31 |   python
32 | import re
33 | argc = int(gdb.parse_and_eval("argc"))
34 | for i in range(3, argc + 2, 2):
35 |   namer = f"$_VP[{i}]"
36 |   m = re.search(r'::Value\("(.*?)"', str(gdb.parse_and_eval(namer)))
37 |   if not m:
38 |     print(f"Failed to match: {namer}")
39 |     continue
40 |   name = m.group(1)
41 |   setter = f"label {name}=$_VP[{i+1}].toGCThing()"
42 |   gdb.execute(setter)
43 | end
44 | end
45 | document mlabel
46 | Special-purpose tool for grabbing out things passed to Math.sin(0, "name1", val1, "name2", ...) and converting them to labels.
47 | end
48 | 


--------------------------------------------------------------------------------
/conf/gdbinit.symbols.py:
--------------------------------------------------------------------------------
  1 | # Any copyright is dedicated to the Public Domain.
  2 | # http://creativecommons.org/publicdomain/zero/1.0/
  3 | #
  4 | # A GDB Python script to fetch debug symbols from the Mozilla symbol server.
  5 | #
  6 | # To use, run `source /path/to/symbols.py` in GDB 7.9 or newer, or
  7 | # put that in your ~/.gdbinit.
  8 | 
  9 | # THIS FILE WRITTEN BY Ted Mielczarek AND IMPORTED FROM https://gist.github.com/luser/193572147c401c8a965c
 10 | 
 11 | from __future__ import print_function
 12 | 
 13 | import gzip
 14 | import io
 15 | import itertools
 16 | import os
 17 | import shutil
 18 | import sys
 19 | try:
 20 |     from urllib.request import urlopen
 21 |     from urllib.parse import urljoin, quote
 22 | except ImportError:
 23 |     from urllib2 import urlopen
 24 |     from urllib import quote
 25 |     from urlparse import urljoin
 26 | 
 27 | SYMBOL_SERVER_URL = 'https://s3-us-west-2.amazonaws.com/org.mozilla.crash-stats.symbols-public/v1/'
 28 | #SYMBOL_SERVER_URL = 'https://symbolication.services.mozilla.com/symbolication/'
 29 | 
 30 | debug_dir = os.path.join(os.environ['HOME'], '.cache', 'gdb')
 31 | cache_dir = os.path.join(debug_dir, '.build-id')
 32 | 
 33 | def munge_build_id(build_id):
 34 |     '''
 35 |     Breakpad stuffs the build id into a GUID struct so the bytes are
 36 |     flipped from the standard presentation.
 37 |     '''
 38 |     b = list(map(''.join, list(zip(*[iter(build_id.upper())]*2))))
 39 |     return ''.join(itertools.chain(reversed(b[:4]), reversed(b[4:6]),
 40 |                                    reversed(b[6:8]), b[8:16])) + '0'
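    | # Illustrative: munge_build_id('0123456789abcdef0123456789abcdef01234567')
    | # yields '67452301AB89EFCD0123456789ABCDEF0'.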
 41 | 
 42 | def try_fetch_symbols(filename, build_id, destination):
 43 |     print('try_fetch_symbols(filename={}, build_id={}, dest={})'.format(filename, build_id, destination))
 44 |     debug_file = os.path.join(destination, build_id[:2], build_id[2:] + '.debug')
 45 |     if os.path.exists(debug_file):
 46 |         return debug_file
 47 |     try:
 48 |         d = os.path.dirname(debug_file)
 49 |         if not os.path.isdir(d):
 50 |             os.makedirs(d)
 51 |     except OSError:
 52 |         pass
 53 |     path = os.path.join(filename, munge_build_id(build_id), filename + '.dbg.gz')
 54 |     url = urljoin(SYMBOL_SERVER_URL, quote(path))
 55 |     try:
 56 |         u = urlopen(url)
 57 |         if u.getcode() != 200:
 58 |             print('  GET {} returned code {}'.format(url, u.getcode()))
 59 |             return None
 60 |         print('Fetching symbols from {0}'.format(url))
 61 |         with open(debug_file, 'wb') as f, gzip.GzipFile(fileobj=io.BytesIO(u.read()), mode='r') as z:
 62 |             shutil.copyfileobj(z, f)
 63 |             return debug_file
 64 |     except Exception as e:
 65 |         print('  failed with exception: ' + str(e))
 66 |         return None
 67 | 
 68 | 
 69 | def is_moz_binary(filename):
 70 |     '''
 71 |     Try to determine if a file lives in a Firefox install dir, to save
 72 |     HTTP requests for things that aren't going to work.
 73 |     '''
 74 |     # The linux-gate VDSO doesn't have a real filename.
 75 |     if not os.path.isfile(filename):
 76 |         return False
 77 |     while True:
 78 |         filename = os.path.dirname(filename)
 79 |         if filename == '/':
 80 |             return False
 81 |         if os.path.isfile(os.path.join(filename, 'run-mozilla.sh')):
 82 |             return True
 83 | 
 84 | 
 85 | def fetch_symbols_for(objfile):
 86 |     build_id = objfile.build_id if hasattr(objfile, 'build_id') else None
 87 |     if getattr(objfile, 'owner', None) is not None or any(o.owner == objfile for o in gdb.objfiles()):
 88 |         # This is either a separate debug file or this file already
 89 |         # has symbols in a separate debug file.
 90 |         return
 91 |     if build_id and is_moz_binary(objfile.filename):
 92 |         debug_file = try_fetch_symbols(os.path.basename(objfile.filename), build_id, cache_dir)
 93 |         if debug_file:
 94 |             objfile.add_separate_debug_file(debug_file)
 95 | 
 96 | 
 97 | def new_objfile(event):
 98 |     fetch_symbols_for(event.new_objfile)
 99 | 
100 | 
101 | def fetch_symbols():
102 |     '''
103 |     Try to fetch symbols for all loaded modules.
104 |     '''
105 |     for objfile in gdb.objfiles():
106 |         fetch_symbols_for(objfile)
107 | 
108 | # Create our debug cache dir.
109 | try:
110 |     if not os.path.isdir(cache_dir):
111 |         os.makedirs(cache_dir)
112 | except OSError:
113 |     pass
114 | 
115 | # Set it as a debug-file-directory.
116 | try:
117 |     dirs = gdb.parameter('debug-file-directory').split(':')
118 | except gdb.error:
119 |     dirs = []
120 | if debug_dir not in dirs:
121 |     dirs.append(debug_dir)
122 |     gdb.execute('set debug-file-directory %s' % ':'.join(dirs))
123 | 
124 | gdb.events.new_objfile.connect(new_objfile)
125 | 


--------------------------------------------------------------------------------
/conf/gdbstart.py:
--------------------------------------------------------------------------------
 1 | import os
 2 | SFINK_TOOLS_DIR=os.path.abspath(os.path.dirname(os.path.expanduser(__file__)))
 3 | 
 4 | gdb.execute("source {}/gdbinit".format(SFINK_TOOLS_DIR))
 5 | gdb.execute("source {}/gdbinit.py".format(SFINK_TOOLS_DIR))
 6 | gdb.execute("source {}/gdbinit.symbols.py".format(SFINK_TOOLS_DIR))
 7 | gdb.execute("source {}/gdbinit.pahole.py".format(SFINK_TOOLS_DIR))
 8 | gdb.execute("source {}/gdbinit.gecko.py".format(SFINK_TOOLS_DIR))
 9 | gdb.execute("source {}/gdbinit.misc".format(SFINK_TOOLS_DIR))
10 | gdb.execute("source {}/gdbinit.rr.py".format(SFINK_TOOLS_DIR))
11 | 
12 | def breakpoint_handler(event):
13 |     if not isinstance(event, gdb.BreakpointEvent):
14 |         return
15 |     bpnums = [b.number for b in event.breakpoints]
16 |     old = getattr(event, "old_val", "(N/A)")
17 |     new = getattr(event, "new_val", "(N/A)")
18 |     nums = ' '.join(str(n) for n in bpnums)
19 |     print(f"stopped at breakpoint {nums}: {old} -> {new}")
20 | 
21 | gdb.events.stop.connect(breakpoint_handler)
22 | 


--------------------------------------------------------------------------------
/conf/hgrc:
--------------------------------------------------------------------------------
  1 | [ui]
  2 | ###merge = kdiff3
  3 | #merge = meld
  4 | #merge = :merge3
  5 | merge = diffmerge
  6 | #merge = :vscode
  7 | #merge = code
  8 | #traceback = True
  9 | #verbose = True
 10 | #debug = True
 11 | interface = curses
 12 | interface.histedit = curses
 13 | mergemarkers = detailed
 14 | 
 15 | # Change the default of various commands. See https://www.mercurial-scm.org/wiki/FriendlyHGPlan
 16 | tweakdefaults = true
 17 | 
 18 | [defaults]
 19 | #commit = -v
 20 | diff = -U 8 -p
 21 | qdiff = -U 8 -p
 22 | qnew = -U
 23 | qexport = -v
 24 | qbackout = -U
 25 | purge = --no-confirm
 26 | 
 27 | [phases]
 28 | publishing = False
 29 | 
 30 | [format]
 31 | generaldelta = True
 32 | 
 33 | [mq]
 34 | secret = False
 35 | keepchanges = True
 36 | 
 37 | [mqext]
 38 | mqcommit = auto
 39 | 
 40 | [patch]
 41 | maxfuzz = 10
 42 | 
 43 | [extensions]
 44 | # Basic functionality improvements
 45 | progress =
 46 | 
 47 | # Standard additional commands
 48 | patchbomb =
 49 | #mq =
 50 | rebase =
 51 | relink =
 52 | graphlog =
 53 | convert =
 54 | transplant =
 55 | share =
 56 | histedit =
 57 | shelve =
 58 | hggit = ~/lib/hg/hg-git/hggit
 59 | show =
 60 | absorb =
 61 | 
 62 | # Additional functionality
 63 | # fsmonitor =
 64 | blackbox =
 65 | journal =
 66 | extdiff =
 67 | 
 68 | # Nonstandard additional commands
 69 | #qbackout = ~/lib/version-control-tools/hgext/qbackout/
 70 | #mqext = ~/lib/version-control-tools/hgext/mqext
 71 | evolve = ~/lib/hg/evolve/hgext3rd/evolve
 72 | topic = ~/lib/hg/evolve/hgext3rd/topic
 73 | #hgsubversion = ~/lib/hg/hgsubversion/hgsubversion
 74 | 
 75 | # Facebook stuff
 76 | # smartlog = ~/lib/hg/hg-experimental/hgext3rd/smartlog.py
 77 | # githelp = ~/lib/hg/hg-experimental/hgext3rd/githelp.py
 78 | #chistedit = ~/lib/hg/hg-experimental/hgext3rd/chistedit.py
 79 | 
 80 | # Mozilla/bugzilla/tryserver integration
 81 | #qimportbz = ~/.mozbuild/version-control-tools/hgext/qimportbz
 82 | #qimportbz = ~/lib/version-control-tools/hgext/qimportbz
 83 | #trychooser = ~/lib/hg/trychooser/
 84 | mozext = ~/lib/version-control-tools/hgext/mozext/
 85 | # Note: uses mozautomation from v-c-t, so after mozext
 86 | bzexport = ~/lib/hg/bzexport/
 87 | #phabsend-moz = ~/.mozbuild/phabsend-moz/phabricator.py
 88 | phabsend-moz = ~/.mozbuild/phabsend-moz/mozphabricator.py
 89 | #phabricator =
 90 | 
 91 | # Note: if I use the .mozbuild version, it seems to get confused between
 92 | # mozhg/util.py versions from ~/lib and ~/.mozbuild.
 93 | firefoxtree = ~/.mozbuild/version-control-tools/hgext/firefoxtree
 94 | 
 95 | # format-source = ~/.mozbuild/version-control-tools/hgext/format-source
 96 | push-to-try = ~/.mozbuild/version-control-tools/hgext/push-to-try
 97 | #clang-format = ~/.mozbuild/version-control-tools/hgext/clang-format
 98 | 
 99 | # Commented out because if it is not in use, I am getting TypeError: Template.append: cmd must be a string
100 | cmdconvert = ~/lib/hg/cmdconvert
101 | 
102 | [alias]
103 | ##################### aliases I actually use #####################
104 | 
105 | ls = ![[ -n "$1" ]] && r="$1" || r=.; $HG log -r "with_parents(not public() and ::$r)" --template list
106 | sl = ls
107 | 
108 | lg = log --template list --graph
109 | lgt = lg -r 'topobranch(.)'
110 | lgtopic = !if [[ -n  "$1" ]]; then $HG lg -r "topic('$1')"; else $HG lg -r 'topic(.)'; fi
111 | lgtt = lgtopic
112 | 
113 | lst = topics --age
114 | 
115 | che = chistedit -r 'not public() and ancestors(.)'
116 | he = histedit -r 'not public() and ancestors(.)'
117 | 
118 | advance = !while $HG next --evolve; do :; done
119 | 
120 | geckoversion = !$HG cat -r $1 'path:config/milestone.txt' | tail -1
121 | 
122 | lsbranch = ![[ -n "$1" ]] && r="$1" || r=.; $HG log -r "with_parents((::$r + descendants($r)) and not public())" --template list
123 | 
124 | entopic = topic -r 'ancestors(.) and not public()'
125 | 
126 | submit = phabsend
127 | phsend = phabsend
128 | phupdate = phabupdate
129 | phread = phabread
130 | phquery = phabqquery
131 | 
132 | yeet = phexport
133 | 
134 | ######### aliases I would use if I remembered they existed #######
135 | 
136 | file = files "relglob:$1"
137 | phases = log --template='{node|short} {phase} {desc|firstline}\n'
138 | recommit = !$HG uncommit --all && $HG amend -i
139 | 
140 | interdiff = !set -x; $HG export --hidden $1 > /tmp/left.diff; $HG export --hidden $2 > /tmp/right.diff; interdiff /tmp/left.diff /tmp/right.diff
141 | 
142 | # `hg diffpast 3 .` will look at the interdiff between the predecessor^4 and predecessor^3 of `.`
143 | diffpast = !set -x; n=$1; rev=$2; rrev=$2; while [[ $n -gt 0 ]]; do rrev="$rev"; rev="predecessors($rev)"; n=$(( $n - 1 )); done; $HG --hidden diff --from "$rev" --to "$$rrev"
144 | 
145 | ###### aliases for scenarios that I don't run into anymore ######
146 | 
147 | # qedit: bring up a text editor on the patch series file, marking applied
148 | # patches as unrearrangable
149 | 
150 | qedit = !S=$(hg root --mq)/series; cp $S{,.bak} && perl -pale 'BEGIN { chomp(@a = qx(hg qapplied -q)); die if $?; @a{@a}=(); }; s/^/# (applied) / if exists $a{$F[0]}' $S > $S.new && ${EDITOR-vim} $S.new && sed -e 's/^# .applied. //' $S.new > $S
151 | 
152 | b2t = topics $1 -r 'allbook($1)'
153 | 
154 | simple_lls = !$HG ls $1 | tac | perl -lne 'print ".~$. $_"' | tac
155 | lls = !$HG ls $1 | tac | perl -lne '$n = $. - 1; print sprintf "%-4s %s", $. < 5 ? "." . "^" x $n : ".~$n", " $_"' | tac
156 | 
157 | ############### aliases I keep around to learn from ##############
158 | 
159 | # See gitremotedelete in ~/.config/hg/hgrc
160 | 
161 | # For Callek, really. I use `em`.
162 | workon = !bash -c 'cd $($HG root) && $EDITOR $($HG status -n -m -a ${1+--change $1})' -- "$@"
163 | 
164 | # From junw, mostly what evolve does:
165 | # evolve=rebase -r 'orphan()-obsolete()' -d 'max((successors(max(roots(ALLSRC) & ::SRC)^)-obsolete())::)'
166 | 
167 | # Usage: hg enbug <bugnum>   # desc should be oneword
168 | #
169 | # If it doesn't already have one, add a bug number to the commit message. Also,
170 | # if there is an active bookmark, rename it to include the bug, as
171 | # bug.<bugnum>.<bookmark>.
172 | enbug = !msg=$($HG log -r . --template '{desc}\n'); echo "$msg" | grep -q "^Bug" || $HG commit --amend -m "Bug $1 - $msg"; book=$($HG log -r . --template '{activebookmark}\n'); [ -n "$book" ] && $HG book -m "$book" "bug.$1.$book"
173 | 
174 | amendbug = !msg=$($HG log -r . --template '{desc}\n'); if echo "$msg" | grep -q "^Bug"; then echo "Message already contains bug number"; else $HG amend -m "Bug $1 - $msg" ; fi
175 | ambug = amendbug
176 | 
177 | yay = !msg="$($HG log -r . --template '{desc}\n'), r=$1"; $HG commit --amend -m "$msg"; echo "$msg"
178 | 
179 | reb = rebase -d rebase_default
180 | rebase! = rebase -d rebase_default
181 | wip = log --graph --rev=wip --template=wip
182 | smart-annotate = annotate -w --skip ignored_changesets
183 | 
184 | [revsetalias]
185 | with_parents(s) = parents(s) or s
186 | 
187 | npkw($1) = not public() and keyword($1)
188 | 
189 | whichbook($1) = last(descendants($1))
190 | wip = (parents(not public()) or not public() or . or (head() and branch(default))) and (not obsolete() or orphan()^) and not closed() and not (fxheads() - date(-90))
191 | 
192 | twig($1) = with_parents(descendants(first(not public() and ::$1)))
193 | .twig = twig(.)
194 | 
195 | # Where did this come from?
196 | rbhead = heads(descendants((parents(ancestor(ancestors(.) and not public())))) and public())
197 | live = reverse(::. and not public()) + parents(::. and not public())
198 | 
199 | local = reverse(ancestor(.+inbound)::.)
200 | 
201 | # From IRC
202 | nexttag($1) = first($1:: and tag())
203 | 
204 | # I made this one up
205 | workparent = last(ancestors(.) and public())
206 | rebase_default = heads(descendants(workparent) and public())
207 | 
208 | workheads = heads(descendants(parents(not public() and ancestors(.)) and public()))
209 | 
210 | my($1) = not public() and $1
211 | lineage(r) = ancestors(r) + descendants(r)
212 | 
213 | allbook($1) = my(lineage(bookmark($1)))
214 | alltopic($1) = my(lineage(topic($1)))
215 | allbranch($1) = my(lineage($1))
216 | topobranch($1) = descendants(my(ancestors($1)))
217 | ignored_changesets = desc("ignore-this-changeset") or extdata(get_ignored_changesets)
218 | 
219 | # Intended for rebasing a new ministack on top of where it was inserted.
220 | sibling($1) = children(p1($1)) - $1
221 | siblingo($1) = children(p1($1)) + children(allpredecessors($1)) - $1
222 | 
223 | cousin($1) = last(not public() and (children(ancestors($1)) - ancestors($1)))
224 | 
225 | [extdata]
226 | get_ignored_changesets = shell:cat `hg root`/.hg-annotate-ignore-revs 2> /dev/null || true
227 | 
228 | [diff]
229 | git = 1
230 | showfunc = 1
231 | nodates = 1
232 | unified = 8
233 | 
234 | [paths]
235 | unified = https://hg.mozilla.org/mozilla-unified
236 | 
237 | [web]
238 | #cacerts = /etc/mercurial/hgrc.d/cacert.pem
239 | cacerts = /etc/pki/tls/certs/ca-bundle.crt
240 | #cacerts = .ssh/mozilla-root.crt
241 | 
242 | [merge-tools]
243 | #kdiff3.args = --auto --L1 common --L2 pulled --L3 mq $base $local $other -o $output -cs SyncMode=1
244 | kdiff3.executable = ~/bin/kdiff3-wrapper
245 | kdiff3.args = --auto --L1 prepatch --L2 tochange --L3 postpatch $base $local $other -o $output --auto --cs SyncMode=1
246 | kdiff3.gui = True
247 | kdiff3.premerge = True
248 | kdiff3.binary = False
249 | 
250 | meld.gui = True
251 | meld.executable = /usr/bin/env
252 | meld.args = GDK_BACKEND=x11 meld -o $output $local $base $other
253 | 
254 | diffmerge.gui = True
255 | diffmerge.executable = diffmerge
256 | 
257 | vscode.regkey = SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{EA457B21-F73E-494C-ACAB-524FDE069978}_is1
258 | vscode.regname = DisplayIcon
259 | vscode.args = --wait $output
260 | vscode.binary = False
261 | vscode.gui = True
262 | vscode.checkconflicts = True
263 | vscode.premerge = keep
264 | 
265 | code.priority = 100
266 | code.premerge = True
267 | code.args = --wait --merge $other $local $base $output
268 | 
269 | [bugzilla]
270 | url = https://bugzilla.mozilla.org
271 | # apikey in ~/.config/hg/hgrc
272 | 
273 | [bzexport]
274 | #submit-method = bugzilla
275 | 
276 | update-patch = True
277 | unified = 10
278 | 
279 | [qimportbz]
280 | patch_format = bug-%(bugnum)s-%(desc)s
281 | 
282 | [mozext]
283 | skip_relbranch_bookmarks = True
284 | disable_local_database = False
285 | 
286 | [progress]
287 | delay = 1
288 | 
289 | [pager]
290 | # display colors when using pager
291 | pager = LESS='RF' less
292 | 
293 | [templatealias]
294 | l_normal(s) = label('tags.normal', s)
295 | 
296 | [templates]
297 | wip = '{label("wip.branch", if(branches,"{branches} "))}{label(ifeq(graphnode,"x","wip.obsolete","wip.{phase}"),"{rev}:{node|short}")}{label("wip.user", " {author|user}")}{label("wip.tags", if(tags," {tags}"))}{label("wip.tags", if(fxheads," {fxheads}"))}{if(bookmarks," ")}{label("wip.bookmarks", if(bookmarks,bookmarks))}{label(ifcontains(rev, revset("parents()"), "wip.here"), " {desc|firstline}")}'
298 | 
299 | topic_str = "{ifeq(topic, '', '', '[{topic}]')}"
300 | bookmarks_str = "{join(bookmarks % 'B({bookmark})', ' ')}"
301 | tags_str = "{join(tags % '{ifeq(tag, 'tip', '', 't({tag})')}', ' ')}"
302 | node_str = "{ifeq(topicidx, '', '', 's{topicidx} ')}{node|short}"
303 | 
304 | list = "{label('changeset.{phase}', node_str)} {l_normal(topic_str)} {desc|firstline} {l_normal(tags_str)} {l_normal(bookmarks_str)} {instabilities}\n"
305 | 
306 | fulldesc = "{desc}\n"
307 | 
308 | [bundleclone]
309 | prefers = uc2region=us-west-1
310 | 
311 | [color]
312 | mode = terminfo
313 | #mode = ansi
314 | 
315 | #Custom colours
316 | color.orange = 202
317 | color.lightyellow = 191
318 | color.darkorange = 220
319 | color.brightyellow = 226
320 | 
321 | #Colours for each label
322 | log.branch = cyan
323 | log.summary = lightyellow
324 | log.description = lightyellow
325 | log.bookmark = green
326 | log.tag = darkorange
327 | log.graph = blue
328 | 
329 | changeset.public = orange bold
330 | changeset.secret = blue bold
331 | changeset.draft = brightyellow bold
332 | 
333 | desc.here = bold blue_background
334 | 
335 | diff.trailingwhitespace = bold red_background
336 | qseries.applied = yellow bold underline
337 | qseries.unapplied = bold
338 | 
339 | wip.bookmarks = yellow underline
340 | wip.branch = yellow
341 | wip.draft = green
342 | wip.here = red
343 | wip.obsolete = none
344 | wip.public = blue
345 | wip.tags = yellow
346 | wip.user = magenta
347 | 
348 | [rebase]
349 | # Turned this off because it can cause topics to disappear during rebase.
350 | # (as of evolve 8.4.0-ish.)
351 | #
352 | # re-enabling to see if it is better now.
353 | experimental.inmemory = true
354 | 
355 | [extdiff]
356 | #  adds a hg vsd command to open side by side diffs of individual files
357 | # in VS Code.
358 | vsd = code --wait --diff
359 | 
360 | # difftastic, with paged colors
361 | df = difft
362 | df.paged-command-options = --color=always
363 | 
364 | [experimental]
365 | graphshorten = true
366 | worddiff = true
367 | 
368 | [hggit]
369 | usephases = True
370 | 
371 | [phabsend]
372 | setbaseparent = true
373 | basepolicy = samebug
374 | #basepolicy = any
375 | amend = true
376 | 
377 | # Personal configuration is in ~/.config/hg/hgrc
378 | 


--------------------------------------------------------------------------------
/conf/jj-config.toml:
--------------------------------------------------------------------------------
  1 | # Note: this config file contains no user-specific config.
  2 | # I put that in a separate user.toml file, and made my
  3 | # ~/.config/jj/config.toml be a directory containing
  4 | # user.toml and this file. A cleaner way would be to set
  5 | # JJ_CONFIG=$HOME/.config/jj.d and put the files there.
  6 | 
  7 | [ui.diff]
  8 | format = "git"
  9 | 
 10 | [git]
 11 | change_id = true
 12 | 
 13 | [snapshot]
 14 | auto-track = 'all() ~ glob:"**/*.sf.txt"'
 15 | 
 16 | [experimental-advance-branches]
 17 | enabled-branches = ["glob:GH.*"]
 18 | 
 19 | [revsets]
 20 | log = 'reachable(@, mutable())'
 21 | 
 22 | [revset-aliases]
 23 | current = 'latest((@ | @-) & ~empty())'
 24 | 'closest_bookmark(x)' = 'heads(::x & bookmarks())'
 25 | 
 26 | junk = '(mutable() & empty()) ~ working_copies() ~ parents(..) ~ bookmarks()'
 27 | 
 28 | branchroots = 'trunk()+ & trunk()::bookmarks(glob:"T.*")'
 29 | 'branchroot(x)' = 'trunk()+ & trunk()::bookmarks(x)'
 30 | 
 31 | # non-trunk ancestors of T.foo
 32 | 'topic(x)' = 'trunk()+::x'
 33 | 
 34 | 'mut(s)' = 'mutable() & description(s)'
 35 | 
 36 | # does not work! T.a ... T.b ... T.c ... trunk. xtopic(T.b) needs to exclude T.b+::T.a without excluding T.c::T.b.
 37 | 'other_topics(x)' = 'bookmarks(glob:"T.*") ~ x'
 38 | 'exclusive_topic(x)' = 'topic(x) ~ trunk()::other_topics(x)'
 39 | 'xtopic(x)' = 'exclusive_topic(x)'
 40 | 
 41 | # These are not quite the same.
 42 | 'why_immutable(r)' = '(r & immutable()) | roots(r:: & immutable_heads())'
 43 | 'why_in(r, domain)' = '(r & domain) | roots(r:: & heads(domain))'
 44 | 
 45 | [templates]
 46 | draft_commit_description = '''
 47 |     concat(
 48 |     coalesce(description, "\n"),
 49 |     surround(
 50 |         "\nJJ: This commit contains the following changes:\n", "",
 51 |         indent("JJ:     ", diff.stat(72)),
 52 |     ),
 53 |     "\nJJ: ignore-rest\n",
 54 |     diff.git(),
 55 |     )
 56 | '''
 57 | 
 58 | [template-aliases]
 59 | brief = 'brief_line ++ "\n"'
 60 | 
 61 | # Removed:
 62 | #  format_short_commit_id(commit_id),
 63 | 
 64 | brief_line = '''
 65 | separate(" ",
 66 |   format_short_change_id_with_hidden_and_divergent_info(self),
 67 |   self.bookmarks(),
 68 |   self.tags(),
 69 |   self.working_copies(),
 70 |   if(empty, label("empty", "(no changes)")),
 71 |   if(description,
 72 |     description.first_line(),
 73 |     label(if(empty, "empty"), description_placeholder),
 74 |   ),
 75 | )
 76 | '''
 77 | 
 78 | 'format_timestamp(timestamp)' = 'timestamp.ago()'
 79 | 
 80 | 'hyperlink(url, text)' = '''
 81 |     concat(
 82 |       raw_escape_sequence("\e]8;;" ++ url ++ "\e\\"),
 83 |       text,
 84 |       raw_escape_sequence("\e]8;;\e\\"),
 85 |     )
 86 | '''
 87 | 
 88 | [aliases]
 89 | 
 90 | info = ["log", "--no-graph"]
 91 | 
 92 | lg = ["log", "-T", "brief"]
 93 | 
 94 | lst = ["info", "-T", "brief", "-r", "heads(ancestors(visible_heads() ~ immutable(), 2) ~ (empty() & description(exact:'')))"]
 95 | 
 96 | tug = ["bookmark", "move", "--from", "closest_bookmark(@-)", "--to", "current"]
 97 | 
 98 | book = ["bookmark"]
 99 | 
100 | drop = ["abandon"]
101 | 
102 | ls = ["util", "exec", "--", "bash", "-c", """
103 |     rev="${1:-@}"
104 |     jj log -r "ancestors(::$rev ~ immutable(), 2) | $rev:: | $rev" -Tbrief
105 | """, "jj-alias"]
106 | 
107 | "show-" = ["show", "-r", "@-"]
108 | 
109 | "desc-" = ["describe", "-r", "@-"]
110 | 
111 | # jj rebase -s <rev> -d "all:<rev>-" -d <new parent>
112 | addparent = ["util", "exec", "--", "bash", "-c", '''
113 |     jj rebase -s $1 -d "all:$1-" -d $2
114 | ''', "jj-addparent"]
115 | 
116 | # jj rebase -s <rev> -d "all:<rev>- ~ <parent to remove>"
117 | rmparent = ["util", "exec", "--", "bash", "-c", '''
118 |     jj rebase -s $1 -d "all:$1- ~ $2"
119 | ''', "jj-rmparent"]
120 | 
121 | #bookmark.shove = ["bookmark", "move", "--allow-backwards"]
122 | 
123 | #shove = ["b", "--ignore-immutable", "shove", "--hard"]
124 | 
125 | export = ["util", "exec", "--", "python", "-c", '''
126 | if "indentation makes this more readable":
127 |     import argparse
128 |     import os
129 |     import sys
130 |     from subprocess import check_output, run, Popen, PIPE
131 | 
132 |     parser = argparse.ArgumentParser()
133 |     parser.add_argument("first", help="First patch to export. Should not be a merge.")
134 |     parser.add_argument("last", nargs="?", help="Last patch to export.")
135 |     parser.add_argument("--output", "-o", metavar="FILENAME", help="write the output to this file, not valid with --import")
136 |     parser.add_argument("--bookmark", "-b", help="bookmark to create in destination, only valid with --import")
137 |     parser.add_argument("--stack", action='store_true', help="import immutable stack that includes `first`")
138 |     parser.add_argument("--import", dest="destdir", nargs="?", const=os.path.expanduser("~/src/mozilla-ff/"), help="import patch(es) into repo at this path.")
139 |     parser.add_argument("--dry-run", action='store_true', help="only display command to execute")
140 |     args = parser.parse_args()
141 |     want_export_only = any(bool(opt) for opt in (args.output,))
142 |     want_import = any(bool(opt) for opt in (args.destdir, args.bookmark))
143 |     if want_import and want_export_only:
144 |         print("both export-only and export-import options given", file=sys.stderr)
145 |         sys.exit(1)
146 |     def commit(rev):
147 |         if args.dry_run:
148 |             print(f"resolving {rev} ->", end=" ", flush=True)
149 |             run(["jj", "--color=never", "--no-pager", "log", "--no-graph", "-r", rev, "-Tchange_id.short() ++ '\n'"], text=True)
150 |         return check_output(["jj", "log", "--no-graph", "-r", rev, "-Tcommit_id.short(16)"], text=True)
151 | 
152 |     last = None
153 |     if args.stack:
154 |         first = commit(f"roots(::{args.first} & mutable())")
155 |         last = commit(f"latest(heads({args.first}:: & mutable()))")
156 |     elif args.last:
157 |         first = commit(args.first)
158 |         last = commit(args.last)
159 |     else:
160 |         first = commit(args.first)
161 |         last = first
162 |     if not args.last and not args.bookmark:
163 |         args.bookmark = args.first
164 |     cmd = ["git", "format-patch", "--notes", f"{first}^..{last}"]
165 |     if args.output:
166 |         cmd.append(f"--output={args.output}")
167 |     else:
168 |         cmd.append("--stdout")
169 |     if args.dry_run:
170 |         import shlex
171 |         print(f"Command:\n  {shlex.join(cmd)}")
172 |         sys.exit(0)
173 |     if not args.destdir:
174 |         os.execvp(cmd[0], cmd)
175 |     process = Popen(cmd, stdout=PIPE, text=True)
176 |     process = run(["git", "am"], stdin=process.stdout, cwd=args.destdir)
177 |     if process.returncode != 0:
178 |         print(f"Export to {args.destdir} failed! Aborting import (running `git am --abort`).")
179 |         check_output(["git", "am", "--abort"], cwd=args.destdir)
180 |     else:
181 |         check_output(["jj", "git", "import"], cwd=args.destdir)
182 |         check_output(["jj", "book", "create", "-r@-", args.bookmark], cwd=args.destdir)
183 | ''']
184 | 
185 | _export = ["util", "exec", "--", "bash", "-e", "-c", """
186 |     TO_FF=0
187 |     if [[ $1 = --ff ]]; then
188 |       TO_FF=1
189 |       shift
190 |     elif [[ $1 = -o ]] || [[ $1 = --output ]]; then
191 |       OUTPUT="--output=$2"
192 |       shift
193 |       shift
194 |     else
195 |       OUTPUT="--stdout"
196 |     fi
197 |     REV="${1:-@}"
198 |     GITBASE=$(jj log --no-graph -r "roots(::$REV & mutable())" -T "commit_id")
199 |     GITHEAD=$(jj log --no-graph -r "latest(($REV | $REV-) ~ empty())" -T "commit_id")
200 |     if [[ $TO_FF = 0 ]]; then
201 |       exec git format-patch $OUTPUT --notes "$GITBASE"^.."$GITHEAD"
202 |     else
203 |       git format-patch --stdout --notes "$GITBASE"^.."$GITHEAD" | ( cd $HOME/src/mozilla-ff; git am || git am --abort )
204 |       echo "Tacked onto previous @"
205 |     fi
206 | """, "jj-export"]
207 | 
208 | phab = ["util", "exec", "--", "bash", "-c", """
209 |     REV="${1:-@}"
210 |     URL=$(jj log -r"$REV" -T "description" | perl -lne 'print $1 if /Differential Revision: (https.+)/')
211 |     if [[ -n "$URL" ]]; then
212 |         echo "Opening $URL"
213 |         code --openExternal "$URL"
214 |     else
215 |         echo "Unable to find phabricator revision URL for $REV" >&2
216 |     fi
217 | """, "jj-phab"]
218 | 
219 | # "Back" button for the working directory: go back to the last change that @ was
220 | # editing.
221 | back = ["util", "exec", "--", "bash", "-c", """
222 |     resolve () { jj log --no-graph -r@ -T'change_id.short() ++ "\\n"' "$@"; }
223 |     current=$(resolve)
224 |     jj op log --no-graph -T 'id.short() ++ "\\n"' | while read op; do
225 |         old=$(resolve --at-op $op)
226 |         if [[ $old != $current ]]; then
227 |             if ! jj edit $old 2>/dev/null; then
228 |                 old_commit=$(jj evolog -r $old --at-op $op --no-graph -T 'commit_id.short()')
229 |                 jj edit $old_commit
230 |             fi
231 |             exit 0
232 |         fi
233 |     done
234 | """, "jj-back"]
235 | 
236 | # Massively overcomplicated? Does moz-phab already do all this?
237 | #
238 | # https://github.com/erichdongubler-mozilla/review/pull/1
239 | #
240 | # moz-phab will request confirmation before doing anything, so
241 | # there is no need for a --dry-run flag.
242 | #
243 | # Use `jj yeet --debug` to see the input lines being processed.
244 | #
245 | # Extra command line options get passed to moz-phab. This only looks
246 | # at a linear stack of patches ending in @-
247 | #
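    | # Usage sketch (illustrative): `jj yeet` submits the stack ending at @-;
    | # `jj yeet xyz` submits the stack ending at change xyz; any remaining
    | # options are passed through to moz-phab unchanged.
    | #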
248 | yeet = ["util", "exec", "--", "python", "-c", '''
249 | if "indentation makes this more readable":
250 |     import re
251 |     import shlex
252 |     import sys
253 |     from subprocess import run, Popen, PIPE
254 | 
255 |     # Skip -c and empty string, which probably should not be there.
256 |     args = sys.argv[1:]
257 |     debug = "--debug" in args
258 |     args = [a for a in args if a != "--debug"]
259 | 
260 |     # Here is where it gets weird. If the first option does not start with a
261 |     # dash, interpret it to mean that we want to yeet the stack ending at the
262 |     # given change/commit. But note that all remaining args are still sent to
263 |     # moz-phab.
264 |     target = "latest((@ | @-) ~ empty())"
265 |     if args and not args[0].startswith("-"):
266 |         target = args.pop(0)
267 | 
268 |     #run(["jj", "bookmark", "move", "-B", "moz-phab", "--to", target], check=True)
269 | 
270 |     bug = False
271 |     upstream = None
272 |     cmd = ["jj", "log", "-r", f"::({target}) ~ immutable()", "-T", "commit_id.short()++' '++change_id.shortest()++' '++description.first_line()++'\n'", "--no-graph"]
273 |     process = Popen(cmd, stdout=PIPE, text=True)
274 |     keep = []
275 |     earliest = None
276 |     latest = None  # aka @-
277 |     for line in process.stdout:
278 |         line = line.rstrip("\r\n")
279 |         if debug:
280 |             print(f"line is <<{line}>>")
281 |         rev, change, desc = line.split(' ', 2)
282 |         latest = latest or change
283 |         upstream = rev
284 |         rev_bug = None
285 |         if m := re.match(r'[bB]ug (\d+)', desc):
286 |             rev_bug = m.group(1)
287 | 
288 |         # If not grabbing the first rev, and either the bug number changed or
289 |         # we went from None (no bug) -> None, done.
290 |         if bug is not False and ((bug != rev_bug) or (rev_bug is None)):
291 |             break
292 |         bug = rev_bug
293 |         keep.append(line)
294 |         earliest = change
295 | 
296 |     if bug is None:
297 |         print(f"submitting rev with no bug number with upstream {upstream}")
298 |     else:
299 |         print(f"submitting revs for bug {bug} with upstream {upstream}")
300 |     for line in keep:
301 |         print(f"  {line}")
302 | 
303 |     cmd = ["moz-phab", "submit", "--upstream", upstream, earliest, latest] + args
304 |     print(shlex.join(["Running:"] + cmd))
305 |     run(cmd)
306 | ''']
307 | 


--------------------------------------------------------------------------------
/conf/shrc:
--------------------------------------------------------------------------------
 1 | # This file is normally loaded via `source ~/path/to/sfink-tools/conf/shrc`
 2 | # from .zshrc/.bashrc/whatever.
 3 | 
 4 | # `mc <name>`: Set MOZCONFIG to the current hg (or jj) root's mozconfig.<name>
 5 | # file. If the file does not exist, will ask if you want to use the most
 6 | # recently edited one from a sibling directory.
 7 | #
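   | # Usage sketch (illustrative): `mc` alone reports MOZCONFIG and lists the
   | # available mozconfig.* files when it is unset; `mc opt` sets MOZCONFIG to
   | # $root/mozconfig.opt and exports $objdir and $JS derived from it.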
 8 | function mc () {
 9 |     local root
10 |     root="$(jj root 2>/dev/null)"
11 |     if [ $? != 0 ]; then
12 |         root="$(hg root 2>/dev/null)"
13 |     fi
14 |     if [ $# = 0 ]; then
15 |         { echo -n "MOZCONFIG is " >&2; echo "${MOZCONFIG:-unset}"; }
16 |     fi
17 |     if [ $# = 0 ] || [[ $1 = "-l" ]]; then
18 |         if [ -z "$MOZCONFIG" ]; then
19 |             local n
20 |             n=0
21 |             for f in $(ls "$root" | fgrep mozconfig. | fgrep -v '~'); do
22 |                 [ $n -eq 0 ] && echo "available:"
23 |                 n=$(( $n + 1 ))
24 |                 echo "  ${f#*mozconfig.}"
25 |             done
26 |             [ $n -eq 0 ] && echo "no mozconfig.* files available in $root"
27 |         fi
28 |         return
29 |     fi
30 | 
31 |     local mozconfig
32 | 
33 |     if [[ $1 = "-s" ]] || [[ $1 = "." ]]; then
34 |       mozconfig="$root/"$(ls "$root" | fgrep mozconfig. | fgrep -v '~' | fzf)
35 |     else
36 |       mozconfig="$root/mozconfig.$1"
37 |     fi
38 | 
39 |     if ! [ -f "$mozconfig" ]; then
40 |         echo "Warning: $mozconfig does not exist" >&2
41 |         local tmp
42 |         tmp=$(ls -tr "$(dirname "$root")"/*/mozconfig.$1 | tail -1)
43 |         if [ -z "$tmp" ]; then
44 |             echo "No mozconfig.$1 found" >&2
45 |             return
46 |         fi
47 |         echo -n "Use $tmp? (y/n) " >&2
48 |         read REPLY
49 |         if [[ "${REPLY#y}" == "$REPLY" ]]; then
50 |             return
51 |         fi
52 |         echo "Copying $tmp to $root"
53 |         cp "$tmp" "$root"
54 |         mozconfig="$root/mozconfig.$1"
55 |     fi
56 | 
57 |     local _objdir
58 |     _objdir=$(env topsrcdir="$root" perl -lne 'if (/MOZ_OBJDIR\s*=\s*(.*)/) { $_ = $1; s!\@TOPSRCDIR\@!$ENV{topsrcdir}!; print }' "$mozconfig")
59 |     if [ -n "$_objdir" ]; then
60 |       export objdir="$_objdir"
61 |       export JS="$objdir/dist/bin/js"
62 |     fi
63 | 
64 |     export MOZCONFIG="$mozconfig"
65 |     { echo -n "MOZCONFIG is now " >&2; echo "$MOZCONFIG"; }
66 | }
67 | 
68 | function reconnect() {
69 |     eval $(re-ssh-agent)
70 |     export DISPLAY=localhost:10.0 # TEMPORARY HACK
71 | }
72 | 


--------------------------------------------------------------------------------
/conf/sysctl.conf:
--------------------------------------------------------------------------------
 1 | #  2 =   0x2 - enable control of console logging level
 2 | #  4 =   0x4 - enable control of keyboard (SAK, unraw)
 3 | #  8 =   0x8 - enable debugging dumps of processes etc.
 4 | # 16 =  0x10 - enable sync command
 5 | # 32 =  0x20 - enable remount read-only
 6 | # 64 =  0x40 - enable signalling of processes (term, kill, oom-kill)
 7 | #128 =  0x80 - allow reboot/poweroff
 8 | #256 = 0x100 - allow nicing of all RT tasks
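   | # 0xfe = 2+4+8+16+32+64+128: everything listed above except RT-task nicing.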
 9 | kernel.sysrq = 0xfe
10 | 
11 | # Transparent huge pages have been known to murder the system when
12 | # copying large stuff to USB because some random allocation triggers a
13 | # synchronous writeback to free up enough contiguous pages to make a
14 | # hugepage, whether or not the allocator cares.
15 | #
16 | # https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#thp-sysfs
17 | kernel.mm.transparent_hugepage.defrag = defer+madvise
18 | 
19 | # Another source for USB copy triggered freezes. The default dirty
20 | # bytes values are based on percentage of memory, which with lots of
21 | # memory and a slow device, can translate to very long pauses.
22 | #
23 | # https://unix.stackexchange.com/questions/107703/why-is-my-pc-freezing-while-im-copying-a-file-to-a-pendrive/107722#107722
24 | vm.dirty_background_bytes = 0x1000000
25 | vm.dirty_bytes = 0x4000000
26 | 
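As a sanity check on those hex constants: 0xfe is the sum of every sysrq
option bit listed above except 0x100 (RT-task nicing), and the dirty-writeback
thresholds come to 16 MiB (background) and 64 MiB (foreground):

    printf '%#x\n' $(( 0x2 + 0x4 + 0x8 + 0x10 + 0x20 + 0x40 + 0x80 ))  # 0xfe
    echo $(( 0x1000000 >> 20 ))   # 16 (MiB), vm.dirty_background_bytes
    echo $(( 0x4000000 >> 20 ))   # 64 (MiB), vm.dirty_bytes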


--------------------------------------------------------------------------------
/conf/wpaste/pbmo.conf:
--------------------------------------------------------------------------------
 1 | #!/bin/bash
 2 | 
 3 | # put this file in /etc/wgetpaste.d/ or ~/.wgetpaste.d/ to add https://pastebin.mozilla.org/ to the list of available services
 4 | 
 5 | # add pbmo service
 6 | SERVICES="${SERVICES} pbmo"
 7 | ENGINE_pbmo=pbmo
 8 | URL_pbmo="https://pastebin.mozilla.org/ pastebin.php"
 9 | DEFAULT_LANGUAGE_pbmo="Plain Text"
10 | 
11 | # add pastebin engine
12 | LANGUAGES_pbmo="Plain%Text ActionScript Ada Apache%Log%File AppleScript Assembly%(NASM) \
13 | ASP Bash C C%for%Macs CAD%DCL CAD%Lisp C++ C# ColdFusion CSS D Delphi Diff DOS Eiffel Fortran \
14 | FreeBasic Game%Maker HTML%4.0%Strict INI%file Java Javascript Lisp Lua MatLab Microprocessor%ASM \
15 | MySQL NullSoft%Installer Objective%C OCaml Openoffice.org%BASIC Oracle%8 Pascal Perl PHP Python \
16 | QBasic Robots.txt Ruby Scheme Smarty SQL TCL VB VB.NET VisualFoxPro XML"
17 | LANGUAGE_VALUES_pbmo="text actionscript ada apache applescript asm asp bash c c_mac caddcl \
18 | cadlisp cpp csharp cfm css d delphi diff dos eiffel fortran freebasic gml html4strict ini java \
19 | javascript lisp lua matlab mpasm mysql nsis objc ocaml oobas oracle8 pascal perl php python \
20 | qbasic robots ruby scheme smarty sql tcl vb vbnet visualfoxpro xml"
21 | EXPIRATIONS_pbmo="Never 1%day 1%month"
22 | EXPIRATION_VALUES_pbmo="f d m"
23 | POST_pbmo="paste=Send&parent_pid= poster % format expiry % code2"
24 | 
25 | REGEX_RAW_pbmo='s|^\(https\?://[^/]*/\)\([0-9]*\)$|\1pastebin.php?dl=\2|'
26 | 
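With this file in place, stock wgetpaste (whose -s flag selects a configured
service) should offer the Mozilla pastebin by name; a minimal sketch, with an
illustrative file name:

    wgetpaste -s pbmo build.log    # upload a file to pastebin.mozilla.org
    dmesg | wgetpaste -s pbmo      # or paste whatever is on stdin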


--------------------------------------------------------------------------------
/data/jib.pnm:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hotsphink/sfink-tools/3d2885b345afdaf44049b2531c9934ec1450e882/data/jib.pnm


--------------------------------------------------------------------------------
/doc/VirtualAndPhysicalWindows.md:
--------------------------------------------------------------------------------
 1 | - basic setup
 2 |   - boot into Windows, set it up
 3 |   - bring up cmd prompt
 4 |   - shutdown /s /f /t 0
 5 | - boot into Fedora 34 live CD
 6 | - try, and fail, to check access to the Windows partition (optional!)
 7 |   - sudo mount /dev/nvme0n1p3 /mnt
 8 |     - will fail because of hiberfile
 9 |   - sudo umount /mnt
10 |   - sudo ntfs-3g -o remove_hiberfile /dev/nvme0n1p3 /mnt
11 |   - ls /mnt
12 |   - sudo umount /mnt
13 | - resize the Windows partition (a condensed command sketch appears at the end of this doc)
14 |   - record the partition table
15 |     - turn on networking (right click top right to set up wifi)
16 |     - sfdisk -d /dev/nvme0n1 > ThreadRipper-partitions.txt
17 |     - scp ThreadRipper-partitions.txt you@yourhostip:
18 |   - first, shrink the FS more than desired, to avoid dangerous math:
19 |     - sudo ntfsresize -s 50G /dev/nvme0n1p3
20 |   - shrink the main Windows partition
21 |     - cp ThreadRipper-partitions.txt ThreadRipper-partitions-new.txt
22 |     - edit ThreadRipper-partitions-new.txt
23 |     - reduce the size of the largest partition (partition 3)
24 |       - I dropped mine to 200GB, base 10. (DO NOT GO BELOW THE 50GB USED ABOVE!)
25 |         - 200LL * 1000 * 1000 * 1000 / 512 = 390625000
26 |     - sudo sfdisk /dev/nvme0n1 < ThreadRipper-partitions-new.txt
27 |     - Note: once we add in new partitions, the partition table is going to
28 |       be out of order and tools may whine about this. It might be better to
29 |       allow it to reorder, but for now I'm leaving it out of order.
30 |   - re-expand the FS into the available space
31 |     - sudo ntfsresize -f /dev/nvme0n1p3
32 |   - reboot into Windows to allow it to clear the scary NEEDS CHECKING bits
33 |     - shutdown while holding the shift key, to avoid hibernation.
34 |     - the hibernation file will linger anyway, just to scare you (Linux won't
35 |       mount the partition without being told to erase the hiberfile).
36 | - install Linux into the space made available
37 |   - boot up the Linux live CD again
38 |   - install to hard drive
39 |     - I chose 'Automatic' for Storage Configuration, and checked 'Encrypt my data'
40 |   - finish the installation. It takes far less time than I expected.
41 |   - reboot, choose the Windows option from the boot menu to ensure it still works
42 |     - when rebooting from Windows, remember to hold down Shift
43 | - install VirtualBox
44 |   - reboot into Linux
45 |   - do whatever initial setup you want. I did `sudo dnf update`, at least.
46 |   - open Firefox, search for "install VirtualBox" or go to https://www.virtualbox.org/wiki/Linux_Downloads
47 |   - get it for your distribution
48 |   - get the extension pack, listed at https://www.virtualbox.org/wiki/Downloads
49 | - construct a virtual disk pointing to your actual disk
50 |   - get my `viewsetup` utility:
51 |     - hg clone https://hg.sr.ht/~sfink/sfink-tools
52 |     - get it from `sfink-tools/bin/viewsetup`
53 |     - or it's a single file, so you could just grab it from https://hg.sr.ht/~sfink/sfink-tools/raw/bin/viewsetup?rev=tip
54 |   - create a disk description that exposes the Windows partitions and masks off the live
55 |     Linux partition you're running from:
56 |     - `viewsetup --map --auto --name ssd`
57 |   - create /dev/mapper/ssd_view, a virtual block device that cobbles together the above "slices":
58 |     - `viewsetup ssd`
59 |   - create a VirtualBox disk descriptor that uses it:
60 |     - `viewsetup --action create-vmdk ssd`
61 | - get VirtualBox working with Secure Boot
62 |   - Secure Boot requires signing the vbox kernel modules
63 |     - you could try to follow https://stackoverflow.com/questions/61248315/sign-virtual-box-modules-vboxdrv-vboxnetflt-vboxnetadp-vboxpci-centos-8
64 |     - but you'll need to re-sign on every update
65 |   - I gave up and disabled secure boot in the BIOS
66 | - create a Windows VM
67 |   - New
68 |   - expert mode or advanced mode or whatever it's called
69 |   - Name: whatever (I used "Local Windows", which is not the greatest name)
70 |   - Version: Windows 10 (64-bit)
71 |   - Use an existing virtual hard disk
72 |     - navigate to the VMDK created by viewsetup above (`~/.config/diskviews/ssd/ssd.vmdk`)
73 |   - enable EFI
74 |   - use PIIX3 for Chipset (in System/Motherboard)
75 |   - use PIIX4 for storage controller (not NVMe for some reason...?)
76 |     - https://stackoverflow.com/questions/61248315/sign-virtual-box-modules-vboxdrv-vboxnetflt-vboxnetadp-vboxpci-centos-8
77 |       gives a potential fix, haven't tried it
78 |   - when you boot, it will require you to reset your PIN. :-(
79 | - Ongoing
80 |   - whenever you reboot, you'll need to recreate /dev/mapper/ssd_view with
81 |     - `viewsetup ssd` (same as `--action create-md`)
82 | 
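Condensed, the resize dance above amounts to the following sketch (device
names match this walkthrough; double-check every number against your own disk
before running any of it):

    disk=/dev/nvme0n1
    part=${disk}p3
    sudo ntfsresize -s 50G "$part"              # shrink the FS well below target
    sudo sfdisk -d "$disk" > partitions.txt     # dump the partition table
    # new size of partition 3: 200 GB (base 10) in 512-byte sectors
    echo $(( 200 * 1000 * 1000 * 1000 / 512 ))  # => 390625000
    sudo sfdisk "$disk" < partitions-new.txt    # apply the edited copy
    sudo ntfsresize -f "$part"                  # re-expand the FS to fill it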


--------------------------------------------------------------------------------
/doc/examples/Q-awsy-baseJS.txt:
--------------------------------------------------------------------------------
1 | # push: aggressive3: base ; awsy ; Pushed via `mach try again`
2 | 0 1595056.0
3 | 
4 | # push: aggressive3: aggressive ; awsy ; Pushed via `mach try again`
5 | 1 1605856.0
6 | 
7 | # push: aggressive3: aggressive + shrinkwrap ; awsy ; Pushed via `mach try again`
8 | 2 1603496.0
9 | 


--------------------------------------------------------------------------------
/doc/examples/Q-awsy-logBase-grouped.txt:
--------------------------------------------------------------------------------
 1 | # job SY(ab) on push 1132677: aggressive3: base ; awsy ; Pushed via `mach try again`
 2 | # https://treeherder.mozilla.org/jobs?repo=try&revision=ae84fb7aa56cfe72a80441ef838b924529cfa568
 3 | 0 1222 1128512 1013824
 4 | 0 1336 2048 2304
 5 | 0 1362 1536 1904
 6 | 0 1396 1536 1904
 7 | 0 1399 1536 1904
 8 | 0 1415 1536 1904
 9 | 0 1507 1536 1904
10 | 0 1534 1536 1904
11 | 0 1561 1536 1904
12 | 0 1588 1536 1904
13 | 0 1623 1536 1904
14 | 0 1657 1536 1904
15 | 0 1685 1536 1904
16 | 
17 | # job SY(ab) on push 1132678: aggressive3: aggressive ; awsy ; Pushed via `mach try again`
18 | # https://treeherder.mozilla.org/jobs?repo=try&revision=7ea00bffc641f9229fbf0337257661e1d850d68a
19 | 1 1219 5122112 1916160
20 | 1 1334 10240 13056
21 | 1 1360 9216 12656
22 | 1 1402 9216 12656
23 | 1 1400 9216 12656
24 | 1 1426 9216 12656
25 | 1 1509 9216 12656
26 | 1 1536 9216 12656
27 | 1 1563 9216 12656
28 | 1 1598 9216 12656
29 | 1 1625 9216 12656
30 | 1 1659 9216 12656
31 | 1 1687 9216 12656
32 | 
33 | # job SY(ab) on push 1132680: aggressive3: aggressive + shrinkwrap ; awsy ; Pushed via `mach try again`
34 | # https://treeherder.mozilla.org/jobs?repo=try&revision=d22359d41f5c00655e71466c5050a73c139f74c0
35 | 2 1219 5646272 1928080
36 | 2 1334 10240 13056
37 | 2 1360 9216 12656
38 | 2 1398 9216 12656
39 | 2 1401 9216 12656
40 | 2 1456 9216 12656
41 | 2 1503 9216 12656
42 | 2 1530 9216 12656
43 | 2 1557 9216 12656
44 | 2 1592 9216 12656
45 | 2 1619 9216 12656
46 | 2 1653 9216 12656
47 | 2 1681 9216 12656
48 | 


--------------------------------------------------------------------------------
/doc/gc-ubench.org:
--------------------------------------------------------------------------------
  1 | * Purpose(s)
  2 | 
  3 | - Be able to investigate kernels of GC scheduling and performance issues from
  4 |   the JS shell, by mimicking browser behavior.
  5 | - Validate improvements
  6 | - Compare different JS engines' behavior to look for outliers and low-hanging
  7 |   fruit
  8 | - (inherited) Visually display GC behavior to use ʜᴏᴏ-ᴍᴀɴ pattern matching
  9 | 
 10 | * Architecture
 11 | 
 12 | So far, I have really only been considering animation cases, where we are
 13 | trying to maintain a decent frame rate. The whole architecture is based on
 14 | frames: during a frame, you will do some amount of work ("mutator" or
 15 | "allocation load"). You then may decide to wait some amount of time before the
 16 | next frame.
 17 | 
 18 | ** Mutators/Allocation Loads
 19 | 
 20 | This is meant to be a microbenchmark suite, so we can focus on specific types
 21 | of allocations (foreground-finalized vs background-finalized, WeakMaps, etc.)
 22 | 
 23 | A directory ~benchmarks/~ contains the 18 mutators I have defined so far. Not
 24 | all of them run in the shell; eg, there are mutators that just allocate text
 25 | nodes in the DOM.
 26 | 
 27 | Each mutator is expected to do some amount of allocation (configured by the
 28 | rest of the system), then return. That is the ~garbagePerFrame~ value.
 29 | 
 30 | But if you simply allocated some garbage on every frame, you'd mostly be
 31 | testing the nursery (for nursery-allocatable types). So all of the allocated
 32 | data gets thrown into a pile, and you keep some number of piles around all the
 33 | time (creating a new one and expiring the oldest on every frame.)
 34 | 
 35 | ** Host objects
 36 | 
 37 | Access to host-specific functionality: how to suspend, what data collection
 38 | mechanisms are available. How to imitate an event-loop turn [or whatever the
 39 | correct phrasing is], so that ~Promises~ and ~WeakRefs~ can work.
 40 | 
 41 | Note that this is a source of differences between engines, because I don't know
 42 | how to do those things in v8. When it starts to matter, I'll pester shu.
 43 | 
 44 | ** Scheduler: when to run frame code
 45 | 
 46 | Mimics my naive view of how the browser schedules things.
 47 | 
 48 | Try to maintain 60fps. Do some work, check whether there's still time left
 49 | until the next frame. If so, wait.
 50 | 
 51 | - SpiderMonkey has a test function ~sleep()~
 52 | - For V8, I use ~Atomics.wait~ on a ~SharedArrayBuffer~
 53 | 
 54 | The wait allows any background threads to continue running.
 55 | 
 56 | There is another scheduler you can choose that waits until the next 60fps
 57 | frame, even if you overran the previous. Selected on the command line with
 58 | ~--sched=vsync~.
 59 | 
 60 | ** Sequencers: orchestrating multiple trials
 61 | 
 62 | Each mutator must run for long enough to observe its longer-term performance.
 63 | For the simplest test, you would just specify a set of mutators and it would
 64 | run each one for ~D~ seconds and gather performance metrics.
 65 | 
 66 | The sequencer is the object that manages beginning a new mutator, letting it do
 67 | its per-frame processing, then stopping it and moving onto the next.
 68 | 
 69 | Sequencers can be placed inside of other sequencers. For example, the basic
 70 | "run these mutators" scenario involves populating a ~ChainSequencer~ with a
 71 | series of ~SingleMutatorSequencer~. In the code, this is called a "trial".
 72 | 
 73 | You could have a ~ChainSequencer~ of ~ChainSequencer~s, except there's no
 74 | reason to, so you can't get there from the command line. This flexibility is
 75 | used by the more sophisticated sequencers.
 76 | 
 77 | *** Find50Sequencer
 78 | 
 79 | A more advanced case is the ~Find50Sequencer~, which tries to find a value of
 80 | ~garbagePerFrame~ that results in 50% of frames being dropped. (Yes, it might
 81 | make more sense to be looking for 2% frame drop if you're asking "how much can
 82 | this handle and not look like laggy crap?" But 50% is nice for seeing how
 83 | things fall apart.)
 84 | 
 85 | ~Find50Sequencer~ runs a trial with one value of the ~garbagePerFrame~ setting,
 86 | measures the frame drop rate, and then either increases or decreases
 87 | ~garbagePerFrame~ and tries again.
 88 | 
 89 | Currently, it does a simple-minded binary search: if you drop fewer than 50%
 90 | of frames, pile on more load. This is sensitive to getting lucky or unlucky on a
 91 | trial, but in practice in SpiderMonkey it's been remarkably stable even with
 92 | short trial durations. V8 is a very different story -- some things are fairly
 93 | stable, but many things vary widely.
 94 | 
 95 | I intend to replace or augment it with a linear regression.
 96 | 
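In shell pseudocode, that search loop amounts to something like the following
(~run_trial~ is an invented stand-in for running one trial and printing the
percentage of dropped frames):

#+BEGIN_SRC sh
# Hypothetical sketch: binary-search garbagePerFrame for ~50% dropped frames.
lo=0; hi=8000000; load=8000
while (( hi - lo > 1000 )); do
    drop=$(run_trial --garbage-per-frame "$load")   # invented helper
    if (( drop < 50 )); then lo=$load; else hi=$load; fi
    load=$(( (lo + hi) / 2 ))
done
echo "~50% frame drop near garbagePerFrame=$load"
#+END_SRC
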
 97 | ** PerfTracker
 98 | 
 99 | This measures how long the mutator runs, how much time we waited, how many
100 | minor and major GCs have happened, etc. At the end of a trial, it computes
101 | frame droppage and feeds the result back into eg ~Find50Sequencer~.
102 | 
103 | In the Web UI, there is also a ~FrameHistory~ that gathers a histogram of the
104 | inter-frame pause times during a trial. I haven't done anything with this in
105 | the shell yet.
106 | 
107 | ** Output
108 | 
109 | Currently, there is some output to stdout to show the parameter settings it is
110 | trying and the basic result, in addition to verbose JSON output written to a
111 | file intended for consumption by some future tool.
112 | 
113 | It is also useful to run perf on simple runs to gather performance counters.
114 | There is no integration that would allow you to separate out the perfcounter
115 | events by trial, though.
116 | 
117 | ** Web UI
118 | 
119 | Originally, this was all intended to be purely a visual tool. That's still the
120 | most fun way to run this.
121 | 
122 | It uses ~requestAnimationFrame~ to schedule the mutator work.
123 | 
124 | It runs in both Chrome and Firefox, with Firefox displaying additional data in
125 | the chart:
126 |  - when major and minor GCs happened
127 |  - memory usage, including (stale) thresholds
128 | 
129 | Much of the functionality is shared between the Web and shell front-ends, but
130 | each has quite a bit unique to it still.
131 | 
132 | 
133 | * Sample Results (do not trust)
134 | 
135 | #+BEGIN_EXAMPLE
136 | 
137 | deepWeakMap                   : SM/V8=1000/44000 = 44.0x worse
138 | globalArrayArrayLiteral       : SM/V8=1000000/2500000 = 2.5x worse
139 | globalArrayBuffer             : SM/V8=4000000/2000 = 2000.0x better
140 | globalArrayFgFinalized        : SM/V8=48000/14000 = 3.4x better
141 | globalArrayLargeArray         : SM/V8=3000000/800000 = 3.8x better
142 | globalArrayNewObject          : SM/V8=128000/2700000 = 21.1x worse
143 | globalArrayObjectLiteral      : SM/V8=384000/1300000 = 3.4x worse
144 | largeArrayPropertyAndElements : SM/V8=48000/68000 = 1.4x worse
145 | pairCyclicWeakMap             : SM/V8=10000/34000 = 3.4x worse
146 | propertyTreeSplitting         : SM/V8=8000/36000 = 4.5x worse
147 | selfCyclicWeakMap             : SM/V8=10000/26000 = 2.6x worse
148 | 
149 | #+END_EXAMPLE
150 | 
151 | * Future
152 | 
153 | ** Known Issues
154 | 
155 | - Too much variance to be useful on many v8 runs.
156 | - Effects of one trial can bleed into the next (eg, garbage builds up). We
157 |   should GC between trials, but I'll need to be sure to do that in v8 as well.
158 | 
159 | ** Future Work
160 | 
161 | - For short runs, force a GC to be included in the timing
162 | - On SpiderMonkey, get an exact measurement of time spent GCing.
163 |   - using a stats mailbox approach
164 | - On SpiderMonkey, figure out what was happening when a frame deadline was
165 |   missed.
166 | - on SpiderMonkey and V8 (if I can figure out how), ensure that a trial has
167 |   seen a nontrivial amount of GC action so I'm not just benchmarking the
168 |   mutators.
169 | - Add in JSC, Node, ...?
170 | - Mainly: I need to use it to explore actual examples, and figure out what else
171 |   is needed from that.
172 | 
173 | * Usage
174 | ** Current shell help
175 | 
176 | Usage: JS shell microbenchmark runner
177 |   --help          display this help message
178 |   --duration, -d  how long to run mutators for (in seconds) (default '8')
179 |   --sched         frame scheduler (one of 'keepup', 'vsync') (default 'keepup')
180 |   --sequencer     mutator sequencer (one of 'cycle', 'find50') (default 'cycle')
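
So a shell run hunting for the 50% breakpoint could look like the following
(the runner's filename is a guess; substitute whatever entry point your
checkout uses, and the flags are those from the help text above):

#+BEGIN_SRC sh
js gc-ubench.js --duration 8 --sched vsync --sequencer find50
#+END_SRC
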
181 | ** Web UI
182 | 
183 | You need to load it via a server, because dynamic import doesn't work with
184 | file:/// URLs.
185 | 
186 | From the ~js/src/devtools/gc-ubench~ directory, run either
187 | 
188 |     ~python3 -mhttp.server~
189 | 
190 | or
191 | 
192 |     ~python2 -mSimpleHTTPServer~
193 | 
194 | and load ~http://localhost:8000/~.
195 | 


--------------------------------------------------------------------------------