├── .gitignore
├── README.md
├── example.xml
├── genconflicts.py
├── genpubs.py
└── template.html
/.gitignore:
--------------------------------------------------------------------------------
1 | publications.bib
2 | publications.xml
3 | nebelwelt.html
4 | hexhive.html
5 | genhp.sh
6 | files/*
7 | cameraready/*
8 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # AutoBib: a quick and dirty hack to keep your publications up to date
2 |
3 | Like most academics, I struggled to keep my personal homepage, my group
4 | homepage, and my BibTeX file up to date with all the papers we publish. Instead
5 | of letting the individual pages drift vastly out of date, I created this quick
6 | and ugly hack to generate bib files and static HTML pages for all the different targets.
7 |
8 | ## Initial setup
9 |
10 | * All the magic is in the poorly documented `genpubs.py`.
11 | * Create a folder `files/` with all PDFs in it (named YYVenue.pdf, e.g., 18CCS.pdf)
12 | * Create a `publications.xml` with all your publications in it (follow the somewhat documented `example.xml`; a minimal sketch of the expected structure follows this list)
13 | * Create a `template.html` that fits your homepage(s). `genpubs.py` replaces the `###CONTENT###` marker in the template with the generated publication list.
14 |
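The element and attribute names that `genpubs.py` and `genconflicts.py` actually read are: a `venues` table (each entry carrying a `short` attribute) and a `publications` list of `publication` elements with `year` and `type` attributes plus `title`, `authors`, `shortvenue`, and optional `doi`, `filename`, `note`, `stats`, and `links` children. As a quick reference (the authoritative format is `example.xml`), the sketch below generates a minimal `publications.xml` with Python's `ElementTree`. The root tag and the `author`/`venue` child tag names are placeholders; the scripts only iterate over children and never look those names up.

```python
#!/usr/bin/env python3
# Sketch: generate a minimal publications.xml that genpubs.py/genconflicts.py can parse.
# The root tag and the <author>/<venue> child tag names are arbitrary placeholders.
import xml.etree.ElementTree as ET

root = ET.Element('pubdb')

venues = ET.SubElement(root, 'venues')
ccs = ET.SubElement(venues, 'venue', short='CCS')  # 'short' is the key referenced by <shortvenue>
ccs.text = 'ACM Conference on Computer and Communication Security'

pubs = ET.SubElement(root, 'publications')
pub = ET.SubElement(pubs, 'publication', year='2018', type='conference')
ET.SubElement(pub, 'title').text = 'Block Oriented Programming: Automating Data-Only Attacks'
authors = ET.SubElement(pub, 'authors')
for name in ('Kyriakos Ispoglou', 'Bader AlBassam', 'Trent Jaeger', 'Mathias Payer'):
    ET.SubElement(authors, 'author').text = name
ET.SubElement(pub, 'shortvenue').text = 'CCS'  # must match a 'short' attribute above
ET.SubElement(pub, 'doi').text = '10.1145/3243734.3243739'

ET.ElementTree(root).write('publications.xml', encoding='utf-8', xml_declaration=True)
```

Optional pieces that `genpubs.py` also picks up when present: a `filename` child (overrides the derived PDF name), a `note`, a `stats` element with `accept`/`submissions` attributes (for acceptance rates), a `links` list, and `key`/`presentation`/`report` attributes on the `publication` element.
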
15 | ## Adding a publication
16 | * Place the PDF in the `files/` folder.
17 | * Update the `publications.xml` with the new publication.
18 | * Create a BibTeX file: `./genpubs.py -T bib -p publications.xml -o publications.bib`
19 | * Create an HTML file: `./genpubs.py -t hexhive.html -p publications.xml -o ~/repos/hexhive/publications/index.html`
20 | * Locally, I run all these commands as part of scripts that pull the
21 |   homepages, regenerate the HTML files, and rsync the different directories/files
22 |   to make sure everything is up to date; a sketch of such a wrapper is shown below.
23 |
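For illustration, here is a minimal sketch of such a wrapper in Python. The repository path and the rsync destination are placeholders, not a real setup; adapt them to your homepage layout.

```python
#!/usr/bin/env python3
# Sketch of a wrapper: pull the homepage repo, regenerate the outputs, rsync them.
# HOMEPAGE and RSYNC_TARGET are placeholders; adjust them to your setup.
import os
import subprocess

HOMEPAGE = os.path.expanduser('~/repos/hexhive')      # checkout of the (group) homepage
RSYNC_TARGET = 'user@example.org:/var/www/homepage/'  # placeholder web server

def run(*cmd):
    print('+ ' + ' '.join(cmd))
    subprocess.run(cmd, check=True)

# Get the latest version of the homepage before patching in the publication list.
run('git', '-C', HOMEPAGE, 'pull')

# Regenerate the BibTeX file and the static publication page.
run('./genpubs.py', '-T', 'bib', '-p', 'publications.xml', '-o', 'publications.bib')
run('./genpubs.py', '-t', 'hexhive.html', '-p', 'publications.xml',
    '-o', os.path.join(HOMEPAGE, 'publications', 'index.html'))

# Push the regenerated page and the PDFs to the web server.
run('rsync', '-az', HOMEPAGE + '/', RSYNC_TARGET)
run('rsync', '-az', 'files/', RSYNC_TARGET + 'publications/files/')
```
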
24 | ## Q&A
25 | * Why? Yes, it's somewhat over-engineered, but I've been using these scripts
26 |   since 2014 and after the initial coding they have saved me a lot of
27 |   time.
28 |
29 | ## Author
30 |
31 | [Mathias Payer](mailto:mathias.payer@nebelwelt.net)
32 |
33 | Dual-licensed under the GPL and the whatever license: if it breaks, you got to
34 | keep the pieces. If it is helpful to you, buy me a beer the next time we meet.
35 |
--------------------------------------------------------------------------------
/example.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 | Open Textbook
4 | The Continuing Arms Race
5 | ACM Symp. on Information, Computer and Communications Security
6 | Workshop on Architectural and Microarchitectural Support for Binary Translation
7 | arXiv Technical Report
8 | Usenix Annual Technical Conference
9 | Balkan Computer Congress
10 | BlackHat Europe
11 | International Conference on Compiler Construction
12 | Chaos Communication Congress
13 | ACM Conference on Computer and Communication Security
14 | ACM Conference on Data and Application Security and Privacy
15 | ACM Computing Surveys
16 | AOSD workshop on Domain-Specific Aspect Languages
17 | Conference on Detection of Intrusions and Malware and Vulnerability Assessment
18 | European Symposium on Research in Computer Security
19 | Int'l. Symp. on Eng. Secure Software and Systems
20 | IEEE European Symposium on Security and Privacy
21 | Forming an Ecosystem Around Software Transformation
22 | Usenix Workshop on Hot Topics in Software Upgrades
23 | ACM Internet Measurement Conference
24 | ACM SIGPLAN International Symposium on Memory Management
25 | International Symposium on Performance Analysis of Systems and Software
26 | Language-theoretic Security IEEE Security and Privacy Workshop
27 | Network and Distributed System Security Symposium
28 | IEEE International Symposium on Security and Privacy
29 | Usenix Symposium on Operating Systems Design and Implementation
30 | ACM International Conference on Programming Language Design and Implementation
31 | Program Protection and Reverse Engineering Workshop
32 | IEEE Conference on Privacy, Security, and Trust
33 | Usenix Security Symposium
34 | IEEE Security and Privacy Magazine
35 | International Workshop on Security and Trust Management
36 | Symposium on Security for Asia Network + 360
37 | ACM International Systems and Storage Conference
38 | Technical Report
39 | Transportation Research Board
40 | IEEE Transactions on Information Forensics and Security
41 | IEEE Transactions on Software Engineering
42 | ACM International Conference on Virtual Execution Environments
43 | Usenix Workshop on Offensive Technologies
44 |
45 |
46 |
47 |
63 |
64 | Title of the publication
65 |
66 | First author
67 | Second author
68 |
69 | ReferenceIntoVenueTableAbove
70 |
71 | TheDoi
72 |
73 | The abstract
74 |
75 |
76 |
77 |
78 |
79 |
80 |
81 | Milkomeda: Safeguarding the Mobile GPU Interface Using WebGL Security Checks
82 |
83 | Zhihao Yao
84 | Saeed Mirzamohammadi
85 | Ardalan Amiri Sani
86 | Mathias Payer
87 |
88 | CCS
89 | 10.1145/3243734.3243772
90 |
91 |
92 | GPU-accelerated graphics is commonly used in mobile applications.
93 | Unfortunately, the graphics interface exposes a large amount of potentially
94 | vulnerable kernel code (i.e., the GPU device driver) to untrusted applications.
95 | This broad attack surface has resulted in numerous reported vulnerabilities
96 | that are exploitable from unprivileged mobile apps. We observe that web
97 | browsers have faced and addressed the exact same problem in WebGL, a framework
98 | used by web apps for graphics acceleration. Web browser vendors have developed
99 | and deployed a plethora of security checks for the WebGL interface.
100 |
101 | We introduce Milkomeda, a system solution for automatically repurposing WebGL
102 | security checks to safeguard the mobile graphics interface. We show that these
103 | checks can be used with minimal modifications (which we have automated using a
104 | tool called CheckGen), significantly reducing the engineering effort.
105 | Moreover, we demonstrate an in-process shield space for deploying these checks
106 | for mobile applications. Compared to the multi-process architecture used by
107 | web browsers to protect the integrity of the security checks, our solution
108 | improves the graphics performance by eliminating the need for Inter-Process
109 | Communication and shared memory data transfer, while providing integrity
110 | guarantees for the evaluation of security checks. Our evaluation shows that
111 | Milkomeda achieves close-to-native GPU performance at reasonably increased CPU
112 | utilization.
113 |
114 |
115 | CCS2
116 |
117 |
118 |
119 |
120 |
121 |
122 |
123 | Block Oriented Programming: Automating Data-Only Attacks
124 |
125 | Kyriakos Ispoglou
126 | Bader AlBassam
127 | Trent Jaeger
128 | Mathias Payer
129 |
130 | CCS
131 | 10.1145/3243734.3243739
132 |
133 |
134 | With the widespread deployment of Control-Flow Integrity (CFI), control-flow
135 | hijacking attacks, and consequently code reuse attacks, are significantly more
136 | difficult. CFI limits control flow to well-known locations, severely
137 | restricting arbitrary code execution. Assessing the remaining attack surface of
138 | an application under advanced control-flow hijack defenses such as CFI and
139 | shadow stacks remains an open problem.
140 |
141 | We introduce BOPC, a mechanism to automatically assess whether an attacker can
142 | execute arbitrary code on a binary hardened with CFI/shadow stack defenses.
143 | BOPC computes exploits for a target program from payload specifications written
144 | in a Turing-complete, high-level language called SPL that abstracts away
145 | architecture and program-specific details. SPL payloads are compiled into a
146 | program trace that executes the desired behavior on top of the target binary.
147 | The input for BOPC is an SPL payload, a starting point (e.g., from a fuzzer
148 | crash) and an arbitrary memory write primitive that allows application state
149 | corruption. To map SPL payloads to a program trace, BOPC introduces Block
150 | Oriented Programming (BOP), a new code reuse technique that utilizes entire
151 | basic blocks as gadgets along valid execution paths in the program, i.e.,
152 | without violating CFI or shadow stack policies. We find that the problem of
153 | mapping payloads to program traces is NP-hard, so BOPC first reduces the search
154 | space by pruning infeasible paths and then uses heuristics to guide the search
155 | to probable paths. BOPC encodes the BOP payload as a set of memory writes.
156 |
157 | We execute 13 SPL payloads applied to 10 popular applications. BOPC
158 | successfully finds payloads and complex execution traces -- which would likely
159 | not have been found through manual analysis -- while following the target's
160 | Control-Flow Graph under an ideal CFI policy in 81% of the cases.
161 |
162 |
163 |
164 |
165 |
166 |
167 |
168 |
169 |
170 |
171 | Software Security: Principles, Policies, and Protection (SS3P)
172 |
173 | Mathias Payer
174 |
175 | SS3P
176 |
177 |
178 |
179 |
180 |
181 |
182 | Type Confusion: Discovery, Abuse, Protection
183 |
184 | Mathias Payer
185 |
186 | SyScan360
187 | Symposium on Security for Asia Network + 360
188 |
189 |
190 |
191 |
192 |
193 |
194 |
195 |
196 | How Memory Safety Violations Enable Exploitation of Programs
197 |
198 | Mathias Payer
199 |
200 | ArmsRace
201 | 978-1-97000-183-9
202 | 10.1145/3129743.3129745
203 |
204 |
205 |
206 | Control-Flow Integrity: Precision, Security, and Performance
207 |
208 | Nathan Burow
209 | Scott A. Carr
210 | Joseph Nash
211 | Per Larsen
212 | Michael Franz
213 | Stefan Brunthaler
214 | Mathias Payer
215 |
216 | CSUR
217 | 10.1109/TSE.2016.2625248
218 |
219 | Memory corruption errors in C/C++ programs remain the most common source of
220 | security vulnerabilities in today's systems. Control-flow hijacking attacks
221 | exploit memory corruption vulnerabilities to divert program execution away from
222 | the intended control flow. Researchers have spent more than a decade studying
223 | and refining defenses based on Control-Flow Integrity (CFI), and this technique
224 | is now integrated into several production compilers. However, so far no study
225 | has systematically compared the various proposed CFI mechanisms, nor is there
226 | any protocol on how to compare such mechanisms.
227 |
228 | We compare a broad range of CFI mechanisms using a unified nomenclature based on
229 | (i) a qualitative discussion of the conceptual security guarantees, (ii) a
230 | quantitative security evaluation, and (iii) an empirical evaluation of their
231 | performance in the same test environment. For each mechanism, we evaluate
232 | (i) protected types of control-flow transfers, (ii) the precision of the
233 | protection for forward and backward edges. For open-source compiler-based
234 | implementations, we additionally evaluate (iii) the generated equivalence
235 | classes and target sets, and (iv) the runtime performance.
236 |
237 |
238 |
239 |
240 |
241 |
242 |
243 | libdetox: A Framework for Online Program Transformation
244 |
245 | Mathias Payer
246 |
247 | FEAST
248 | Forming an Ecosystem Around Software Transformation
249 |
250 | Software is commonly available in binary form. Yet, the consumer would often
251 | like to gather information about the application, e.g., what functionality is
252 | available and needed or what security mechanisms are active. In secure
253 | environments, the code must also be hardened against attacks. So far,
254 | existing binary analysis and translation mechanisms are often ad-hoc and only
255 | target one aspect of the problem.
256 |
257 | We propose libdetox, a principled framework for continuous binary analysis and
258 | instrumentation. Our framework builds on an efficient binary translator and a
259 | trusted program loader to enable the collection of vast information which is
260 | later used for binary hardening. We present several runtime monitors such as a
261 | shadow stack, control-flow integrity, system call monitor, or on-the-fly patch
262 | application.
263 |
264 |
265 |
266 |
267 | Safe Loading and Efficient Runtime Confinement: A Foundation for Secure Execution
268 |
269 | Mathias Payer
270 |
271 | ETH Zurich Dr. sc. Thesis
272 |
273 | Protecting running applications is a hard problem. Many applications are written in a low-level language and are prone to exploits. Bugs can be used to exploit the application and to run malicious code. A rigorous code review is often not possible due to the size and the complexity of the applications. Even a detailed code review does not guarantee that all bugs in the application are found.
274 |
275 | This thesis presents a model for the secure execution of untrusted code. The model assumes that the application code contains bugs but that the application is not malicious (i.e., malware). The application is safe if the model protects from all attack vectors through code-based or data-based exploits in the untrusted code. The model verifies all code prior to execution and ensures that no unchecked control flow transfers are possible. An important design decision is to use a dynamic approach for the implementation with minimal impact on the original applications. Binary only applications are executed without static recompilation or changes to the compiler toolchain (e.g., no recompilation is needed and features like dynamically loaded libraries, lazy binding, or hand written assembly code are still usable).
276 |
277 | A dynamic, transparent sandbox in user-space loads and verifies code using binary translation. A secure loader starts the sandbox and bootstraps the application and all needed libraries in the sandbox. The sandbox checks the application code before it is executed and adds security guards during the translation. The combination of the secure loader and the sandbox protects from code-oriented exploits. System calls are redirected by the sandbox to a policy-based system call authorization layer that verifies every system call towards a policy. Every control flow transfer in the application code is verified using a dynamic control flow model. Control flow transfers to illegal locations or instructions that are not legal in the application stop the program. The combination of a system call policy and control flow integrity protects the application from code-based and data-based exploits.
278 |
279 | A prototype implementation is used to evaluate the performance and effectiveness of the proposed model. We show that the overhead for our prototype implementation is low and that the model protects from all code-based exploits. The control flow model restricts the attack space for data-based attacks and restricts control flow transfers of the application to well-known and valid locations. The small and modular trusted computing base enables code reviews and allows additional security modules (e.g., a module that detects file-based race conditions).
280 |
281 | thesis-payerm
282 |
283 |
284 |
285 |
286 |
--------------------------------------------------------------------------------
/genconflicts.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 | # -*- coding: utf-8 -*-
3 |
4 | __author__ = "Mathias Payer "
5 | __description__ = "Script to generate the recent list of conflicts based on the XML."
6 |
7 | import xml.etree.ElementTree as ET
8 | import sys
9 | from argparse import ArgumentParser
10 | import datetime
11 |
12 | def getConflicts(xmldoc, from_year):
13 |     coauthors = set()
14 |     for e in xmldoc.findall('publications/publication'):
15 |         if int(e.attrib['year']) >= from_year:
16 |             for author in e.find('authors'):
17 |                 coauthors.add(author.text)
18 |     return coauthors
19 |
20 |
21 | if __name__ == "__main__":
22 |     parser = ArgumentParser(description=__description__)
23 |     parser.add_argument('-p', '--publications', type=str, metavar='publications file', help='XML file containing publications', required=False, default='publications.xml')
24 |     parser.add_argument('-a', '--adviser', type=str, metavar='adviser file', help='Text file with adviser relationship', required=False, default='adviser.txt')
25 |     parser.add_argument('-y', '--years', type=int, metavar='year count', help='Get conflicts for N years, default: 2', required=False, default=2)
26 |     args = parser.parse_args()
27 |
28 |     pubs = ET.parse(args.publications)
29 |
30 |     from_year = datetime.date.today().year - args.years
31 |
32 |     conflicts = getConflicts(pubs, from_year)
33 |
34 |     for author in open(args.adviser, 'r').readlines():
35 |         conflicts.add(author.strip())
36 |
37 |     for author in sorted(conflicts):
38 |         print(author)
--------------------------------------------------------------------------------
/genpubs.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 | # -*- coding: utf-8 -*-
3 |
4 | __author__ = "Mathias Payer "
5 | __description__ = "Script to generate the publication html structure based on the XML."
6 |
7 | import xml.etree.ElementTree as ET
8 | import sys
9 | from argparse import ArgumentParser
10 |
11 | venues = {}
12 | def parseVenues(xmldoc):
13 |     for e in xmldoc.findall('venues/*'):
14 |         venues[e.attrib['short']] = e.text
15 |
16 | def handleBibs(xmldoc, venue):
17 |     name = 'inproceedings'
18 |     #if venue == 'report':
19 |     #    name = 'misc'
20 |     if venue == 'thesis':
21 |         name = 'thesis'
22 |     if venue == 'magazine':
23 |         name = 'article'
24 |     counter = 0
25 |     ret = ''
26 |     for e in xmldoc.findall('publications/publication/[@type="'+venue+'"]'):  # one BibTeX entry per publication of this type
27 |         counter = counter + 1
28 |         ret += '@' + name + '{'
29 |         filename = ''
30 |         if e.find('filename') != None:
31 |             filename = e.attrib['year'][2:] + e.find('filename').text
32 |         else:
33 |             filename = e.attrib['year'][2:] + e.find('shortvenue').text
34 |         first = ''
35 |         authors = ''
36 |         for author in e.find('authors'):
37 |             if first == '':
38 |                 # TODO must be last index
39 |                 first = author.text.strip().lower() #[author.text.index(' ')+1:].lower()
40 |                 while first.find(' ') != -1:
41 |                     first = first[first.index(' ')+1:]
42 |                 while first.find('\'') != -1:
43 |                     first = first[first.index('\'')+1:]
44 |             authors += author.text + ' and '
45 |         authors = authors[:-5]  # drop the trailing ' and '
46 |         key = ''
47 |         if venue != 'thesis':
48 |             if 'key' in e.attrib:
49 |                 key = e.attrib['key']
50 |             else:
51 |                 key = e.find('shortvenue').text.lower()
52 |             ret += first + e.attrib['year'][2:] + key + ',\n'
53 |         else:
54 |             ret += first + e.attrib['year'][2:] + ',\n'
55 |         ret += ' author = {' + authors + '},\n'
56 |         ret += ' title = {{' + e.find('title').text + '}},\n'
57 |         ret += ' year = {' + e.attrib['year'] + '},\n'
58 |         if name == 'inproceedings' and venue != 'report':
59 |             #ret += ' booktitle = ' + e.find('shortvenue').text + ',\n'
60 |             ret += ' booktitle = {' + venues[e.find('shortvenue').text] + '},\n'
61 |         if name == 'article':
62 |             #ret += ' journal = ' + e.find('shortvenue').text + ',\n'
63 |             ret += ' journal = {' + venues[e.find('shortvenue').text] + '},\n'
64 |         if venue == 'report':
65 |             if 'report' in e.attrib:
66 |                 ret += ' booktitle = {' + venues[e.find('shortvenue').text] + '},\n'
67 |                 #ret += ' booktitle = ' + e.find('shortvenue').text + ',\n'
68 |             else:
69 |                 ret += ' booktitle = {' + venues[e.find('shortvenue').text]
70 |                 ret += ' \\url{http://nebelwelt.net/publications/files/' + filename + '.pdf}},\n'
71 |         if e.find('doi') != None:
72 |             ret += ' doi = {' + e.find('doi').text + '},\n'
73 |         if e.find('stats') != None:
74 |             stats = e.find('stats')
75 |             details = ''
76 |             if 'accept' in stats.attrib:
77 |                 rate = str(float(stats.attrib['accept'])/float(stats.attrib['submissions'])*100)
78 |                 rate = rate[0:rate.find('.')]
79 |                 details = '{}\\% acceptance rate -- {}/{}'.format(rate, stats.attrib['accept'], stats.attrib['submissions'])
80 |             note = ''
81 |             if e.find('note') != None:
82 |                 note = '\\textbf{' + e.find('note').text + '}'
83 |             if note != '' and details != '':
84 |                 note = note + ', '
85 |             ret += ' pages = {(' + note + details + ')},\n'
86 |         ret += ' keywords = {' + venue + '},\n'
87 |         if venue == 'thesis':
88 |             if counter == 1:
89 |                 ret += ' type = {PhD Thesis},\n'
90 |             if counter == 2:
91 |                 ret += ' type = {Master Thesis},\n'
92 |             if counter > 2:
93 |                 ret += ' type = {Bachelor Project Thesis},\n'
94 |         ret += '}\n\n'
95 |     return ret
96 |
97 | def handlePublications(xmldoc, venue, title):
98 |     ret = '<h2>' + title + '</h2>\n'  # NOTE: HTML markup in this function is representative; adapt tags/links to your site
99 |     prevyear = '0'
100 |     for e in xmldoc.findall('publications/publication/[@type="'+venue+'"]'):
101 |         # Print title
102 |         ret += '<p>\n'
103 |         if e.find('filename') != None:
104 |             filename = e.attrib['year'][2:] + e.find('filename').text
105 |         else:
106 |             filename = e.attrib['year'][2:] + e.find('shortvenue').text
107 |         if e.find('note') != None:
108 |             note = '<b>' + e.find('note').text + '</b>'
109 |         else:
110 |             note = ''
111 |         if not 'report' in e.attrib:
112 |             ret += '<a href="files/' + filename + '.pdf">' + e.find('title').text + '</a>'  # link to the local PDF
113 |         else:
114 |             ret += '<a href="' + e.attrib['report'] + '">' + e.find('title').text + '</a>'  # assumption: 'report' attribute holds an external URL
115 |         if prevyear != e.attrib['year']:
116 |             prevyear = e.attrib['year']
117 |             ret += '<b>'+prevyear+'</b> '
118 |         else:
119 |             ret += ' '
120 |         # Print authors
121 |         for author in e.find('authors')[:-1]:
122 |             ret += author.text + ', '
123 |         if len(e.find('authors')) > 1:
124 |             ret += 'and '
125 |         ret += e.find('authors')[-1].text + '. '
126 |         # Print Venue
127 |         if e.find('shortvenue') != None:
128 |             if e.find('shortvenue').text not in venues:
129 |                 print("Oops, {} is not in venues.".format(e.find('shortvenue').text))
130 |             ret += '<i>In ' + e.find('shortvenue').text + "'" + e.attrib['year'][2:] + ': '
131 |             ret += venues[e.find('shortvenue').text] + ', ' + e.attrib['year'] + '</i>'
132 |         else:
133 |             ret += '<i>In ' + e.find('venue').text + '</i>'
134 |         # Do we have any additional remarks (links, notes, presentation)?
135 |         addon = ''
136 |         if 'presentation' in e.attrib:
137 |             addon = '<a href="' + e.attrib['presentation'] + '">presentation</a>, '
138 |         if note != '':
139 |             addon = addon + ' ' + note + ', '
140 |         addon = addon.lstrip()
141 |         if e.find('links') != None:
142 |             for link in e.find('links'):
143 |                 addon = addon + '<a href="' + link.text + '">' + link.attrib['name'] + '</a>, '
144 |         if e.find('doi') != None:
145 |             addon = addon + '<a href="https://doi.org/' + e.find('doi').text + '">DOI</a>, '
146 |         if addon != '':
147 |             addon = addon[0:-2]
148 |             ret += ' (' + addon + ')'
149 |         ret += '</p>'
150 |     return ret
151 |
152 | if __name__ == "__main__":
153 |     parser = ArgumentParser(description=__description__)
154 |     parser.add_argument('-t', '--template', type=str, metavar='template filename', help='Filename for the template to use.', required=False)
155 |     parser.add_argument('-o', '--out', type=str, metavar='output filename', help='Filename to write output', required=True)
156 |     parser.add_argument('-p', '--publications', type=str, metavar='publications file', help='XML file containing publications', required=False, default='publications.xml')
157 |     parser.add_argument('-T', '--type', type=str, metavar='type', help='Output type. Values: {html | bib}', required=False, default='html')
158 |     args = parser.parse_args()
159 |
160 |     pubs = ET.parse(args.publications)
161 |     parseVenues(pubs)
162 |
163 |     ret = ''
164 |     if args.type == 'html':
165 |         txt = open(args.template).read()
166 |         ret += txt[0:txt.find('###CONTENT###')]
167 |         ret += handlePublications(pubs, 'conference', 'Conference Proceedings')
168 |         ret += handlePublications(pubs, 'journal', 'Journal and Magazine Publications')
169 |         ret += handlePublications(pubs, 'workshop', 'Workshop Proceedings')
170 |         ret += handlePublications(pubs, 'collection', 'Books and Chapters')
171 |         ret += handlePublications(pubs, 'report', 'Technical Reports and Hacker Conferences')
172 |         if args.template == "nebelwelt.html":
173 |             ret += handlePublications(pubs, 'thesis', 'Theses')
174 |         # TODO: student theses
175 |         ret += txt[txt.find('###CONTENT###')+13:]  # 13 == len('###CONTENT###')
176 |     if args.type == 'bib':
177 |         #for i in venues:
178 |         #    ret +='@string{' + i + '="' + venues[i] + '"}\n'
179 |         #ret += '\n\n'
180 |         ret += handleBibs(pubs, 'collection')
181 |         ret += handleBibs(pubs, 'journal')
182 |         ret += handleBibs(pubs, 'conference')
183 |         ret += handleBibs(pubs, 'workshop')
184 |         ret += handleBibs(pubs, 'report')
185 |         ret += handleBibs(pubs, 'thesis')
186 |
187 |     with open(args.out, 'w') as f:
188 |         f.write(ret)
189 |
--------------------------------------------------------------------------------
/template.html:
--------------------------------------------------------------------------------
1 | <!DOCTYPE html>
2 | <html>
3 | <head>
4 |   <title>Template: Publications</title>
5 | </head>
6 | <body>
7 | <!-- genpubs.py replaces the marker below with the generated publication lists -->
8 | ###CONTENT###
9 | </body>
10 | </html>