├── .gitignore
├── README.md
├── callback_plugins
│   └── human_log.py
├── handlers
│   ├── grub2-regen.yml
│   ├── recalc-services.yml
│   ├── redo-initramfs.yml
│   └── testboot.yml
├── library
│   ├── gconf.py
│   └── kernel_module.py
├── roles
│   ├── deploy-ssl-certs
│   │   ├── README.md
│   │   ├── defaults
│   │   │   └── main.yml
│   │   ├── scripts
│   │   │   └── setvars.py
│   │   └── tasks
│   │       └── main.yml
│   ├── distupgrade
│   │   ├── README.md
│   │   ├── handlers
│   │   │   ├── main.yml
│   │   │   └── testboot.yml
│   │   └── tasks
│   │       ├── cleanup.yml
│   │       ├── main.yml
│   │       ├── prepare.yml
│   │       ├── reboot.yml
│   │       ├── testboot.yml
│   │       ├── upgrade.yml
│   │       └── variables.yml
│   ├── generic-rpm-install
│   │   ├── defaults
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   ├── install-packages
│   │   └── tasks
│   │       └── main.yml
│   ├── mailserver
│   │   ├── README.md
│   │   ├── defaults
│   │   │   └── main.yml
│   │   ├── files
│   │   │   ├── dovecot
│   │   │   │   └── var
│   │   │   │       └── lib
│   │   │   │           └── sieve
│   │   │   │               ├── after.d
│   │   │   │               │   └── README.md
│   │   │   │               ├── before.d
│   │   │   │               │   └── README.md
│   │   │   │               ├── global
│   │   │   │               │   └── README.md
│   │   │   │               └── imapsieve
│   │   │   │                   ├── report-ham.sieve
│   │   │   │                   └── report-spam.sieve
│   │   │   └── postgrey
│   │   │       └── postgreylocal.te
│   │   ├── tasks
│   │   │   ├── accounts.yml
│   │   │   ├── dkim.yml
│   │   │   ├── dovecot.yml
│   │   │   ├── main.yml
│   │   │   ├── packages.yml
│   │   │   ├── postfix.yml
│   │   │   ├── postgrey.yml
│   │   │   ├── services.yml
│   │   │   ├── spf.yml
│   │   │   └── test.yml
│   │   └── templates
│   │       ├── dovecot
│   │       │   ├── etc
│   │       │   │   └── dovecot
│   │       │   │       └── local.conf
│   │       │   ├── usr
│   │       │   │   └── local
│   │       │   │       └── libexec
│   │       │   │           └── sieve
│   │       │   │               ├── learn
│   │       │   │               └── spamclassifier
│   │       │   └── var
│   │       │       └── lib
│   │       │           └── sieve
│   │       │               ├── after.d
│   │       │               │   └── spamclassifier.sieve.j2
│   │       │               └── before.d
│   │       │                   └── spamclassifier.sieve.j2
│   │       ├── opendkim
│   │       │   └── etc
│   │       │       ├── opendkim.conf
│   │       │       └── opendkim
│   │       │           ├── KeyTable
│   │       │           └── SigningTable
│   │       ├── postfix
│   │       │   └── etc
│   │       │       └── postfix
│   │       │           ├── main.cf
│   │       │           ├── master.cf
│   │       │           └── virtual
│   │       └── python-policyd-spf
│   │           └── etc
│   │               └── python-policyd-spf
│   │                   └── policyd-spf.conf
│   ├── prosodyxmpp
│   │   ├── README.md
│   │   ├── TODO.md
│   │   ├── defaults
│   │   │   └── main.yml
│   │   ├── library
│   │   │   └── prosodyusers.py
│   │   ├── tasks
│   │   │   ├── main.yml
│   │   │   └── modules-from-mercurial.yml
│   │   └── templates
│   │       └── etc
│   │           └── prosody
│   │               ├── modules.cfg.lua.j2
│   │               ├── ports.cfg.lua.j2
│   │               └── prosody.cfg.lua.j2
│   ├── qubeskde
│   │   ├── README.md
│   │   ├── defaults
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   ├── selinux-module
│   │   ├── README.md
│   │   ├── defaults
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   ├── spectrum2
│   │   ├── README.md
│   │   ├── defaults
│   │   │   └── main.yml
│   │   ├── files
│   │   │   └── patches
│   │   │       ├── 000-libevent.diff
│   │   │       └── 001-logging.diff
│   │   ├── tasks
│   │   │   ├── install-from-git.yml
│   │   │   └── main.yml
│   │   └── templates
│   │       └── etc
│   │           └── spectrum2
│   │               ├── backend-logging.cfg
│   │               ├── logging.cfg
│   │               └── transports
│   │                   └── spectrum.cfg
│   ├── updates
│   │   ├── handlers
│   │   │   ├── main.yml
│   │   │   ├── recalc-services.yml
│   │   │   └── testboot.yml
│   │   └── tasks
│   │       ├── grub2-regen.yml
│   │       ├── main.yml
│   │       └── testboot.yml
│   └── zfsupdates
│       ├── README.md
│       ├── defaults
│       │   └── main.yml
│       ├── files
│       │   ├── RPM-GPG-KEY-zfsonlinux
│       │   └── zfs.repo
│       ├── library
│       │   └── blockinfile.py
│       └── tasks
│           ├── deploy-zfs-repo.yml
│           ├── deploy-zfs-stage-1.yml
│           ├── deploy-zfs-stage-2.yml
│           └── main.yml
└── tasks
    ├── grub2-regen.yml
    ├── redo-initramfs.yml
    └── testboot.yml
/.gitignore:
--------------------------------------------------------------------------------
1 | *.pyc
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | Sample Ansible playbooks
2 | ============================================
3 | 
4 | Here are a few sample playbooks. These assume that you already have an
5 | Ansible setup going. You can then drop the files of the example
6 | directly into your setup.
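For instance, once the `roles/`, `library/`, and `handlers/` directories are merged into your tree, one of the roles described below can be applied with a minimal playbook along these lines (the `allmachines` host group is a hypothetical stand-in for your own inventory, not something defined by this repository):

```yaml
# Illustrative only: "allmachines" is a hypothetical inventory group.
- hosts: allmachines
  become: yes
  roles:
    - updates
```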
7 | 
8 | Roles:
9 | 
10 | * [roles/mailserver/](roles/mailserver/) sets up a mail server with Dovecot, Postfix,
11 |   the SSL certificates you provide yourself, SPF, and the DKIM
12 |   certificates that you create.
13 | * [roles/zfsupdates/](roles/zfsupdates/) aids you in keeping ZFS up to date on your
14 |   Fedora system that boots from ZFS.
15 | * [roles/updates/](roles/updates/) updates your system packages.
16 | * [roles/distupgrade/](roles/distupgrade/) upgrades your Fedora machine to a newer
17 |   release, taking appropriate recovery precautions.
18 | * [roles/prosodyxmpp/](roles/prosodyxmpp/) sets up Prosody XMPP on your Fedora
19 |   box, with up-to-date XEPs for use with modern XMPP clients.
20 | * [roles/qubeskde/](roles/qubeskde/) sets up KDE on any Qubes OS dom0,
21 |   since Qubes OS decided to stop shipping KDE in the dom0.
22 | * [roles/install-packages](roles/install-packages/) deploys a package
23 |   to either a dom0 or a regular machine.
24 | * [roles/selinux-module](roles/selinux-module/) deploys an SELinux module type
25 |   enforcement file (`.te`) and activates it.
26 | 
27 | Modules:
28 | * [library/gconf.py](library/gconf.py) changes GConf settings on GNOME desktops,
29 |   both user settings and system-wide defaults.
30 | * [library/kernel_module.py](library/kernel_module.py) loads kernel modules and
31 |   ensures they are loaded after a reboot. It uses `/etc/modules-load.d` for that
32 |   purpose.
33 | 
34 | See the `README.md` files inside each directory, where they exist.
35 | 
36 | Callback plugins:
37 | * [callback_plugins/human_log.py](callback_plugins/human_log.py) logs playbooks
38 |   in a YAML-ish colorized format. *This plugin is unmaintained.*
39 |   *Please use the Ansible built-in* `yaml` *callback plugin instead.*
40 | 
41 | Licensing
42 | ---------
43 | 
44 | Everything in this repository is distributed under the GNU GPL v2
45 | or later.
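Module usage example
--------------------

The two bundled modules are invoked like any other task. The sketch below simply combines the `EXAMPLES` blocks from `library/gconf.py` and `library/kernel_module.py`; nothing here goes beyond what those files document:

```yaml
# Load the uinput kernel module now, and persist it via /etc/modules-load.d.
- kernel_module:
    name: uinput
    state: loaded

# Set a system-wide GConf default.
- gconf:
    key: /apps/desktop/gnome/interface/can_change_accels
    state: present
    value: true
    type: boolean
    mode: default
```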
46 | -------------------------------------------------------------------------------- /callback_plugins/human_log.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | import collections 5 | import copy 6 | import yaml 7 | 8 | from io import StringIO 9 | 10 | from ansible import constants as C 11 | from ansible.parsing.yaml.dumper import AnsibleDumper 12 | from ansible.plugins.callback.default import CallbackModule as CallbackModule_default 13 | from ansible.module_utils._text import to_bytes 14 | from ansible.utils.color import stringc 15 | try: 16 | from ansible.utils.unsafe_proxy import AnsibleUnsafeText 17 | except ImportError: 18 | from ansible.vars.unsafe_proxy import AnsibleUnsafeText 19 | 20 | class LiteralText(str): pass 21 | 22 | 23 | def ColorLiteralText(text, color): 24 | return LiteralText(stringc(text, color)) 25 | 26 | 27 | class MyDumper(AnsibleDumper): 28 | 29 | def __init__(self, *a, **kw): 30 | AnsibleDumper.__init__(self, *a, default_flow_style=False, **kw) 31 | 32 | def process_scalar(self): 33 | if isinstance(self.event.value, LiteralText): 34 | return self.process_literal_text() 35 | return AnsibleDumper.process_scalar(self) 36 | 37 | def check_simple_key(self): 38 | if isinstance(self.event, yaml.events.ScalarEvent): 39 | if self.analysis is None: 40 | self.analysis = self.analyze_scalar(self.event.value) 41 | if self.analysis and self.analysis.scalar and isinstance(self.analysis.scalar, LiteralText): 42 | return True 43 | return AnsibleDumper.check_simple_key(self) 44 | 45 | def process_literal_text(self): 46 | if self.analysis is None: 47 | self.analysis = self.analyze_scalar(self.event.value) 48 | if self.style is None: 49 | self.style = self.choose_scalar_style() 50 | split = (not self.simple_key_context) 51 | if "\n" in self.analysis.scalar: 52 | self.write_literal(self.analysis.scalar) 53 | else: 54 | self.write_plain(self.analysis.scalar, 
split) 55 | 56 | self.analysis = None 57 | self.style = None 58 | 59 | def represent_scalar(self, tag, value, style=None): 60 | def should_use_block(value): 61 | if isinstance(value, LiteralText): 62 | return True 63 | for c in u"\u000a\u000d\u001c\u001d\u001e\u0085\u2028\u2029\n\r": 64 | if c in value: 65 | return True 66 | return False 67 | 68 | if style is None: 69 | if should_use_block(value): 70 | style='|' 71 | else: 72 | style = self.default_style 73 | 74 | if not isinstance(value, LiteralText): 75 | value = value.splitlines(True) 76 | for n, v in enumerate(value[:]): 77 | if v.endswith(" \n"): 78 | while v.endswith(" \n"): 79 | v = v[:-2] + "\n" 80 | elif v.endswith(" "): 81 | while v.endswith(" "): 82 | v = v[:-1] 83 | value[n] = v 84 | value = "".join(value) 85 | 86 | node = yaml.representer.ScalarNode(tag, value, style=style) 87 | if self.alias_key is not None: 88 | self.represented_objects[self.alias_key] = node 89 | return node 90 | 91 | def represent_dict(self, data): 92 | return self.represent_mapping( 93 | yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, 94 | data.items() 95 | ) 96 | 97 | def represent_ansitext(self, data): 98 | return data 99 | 100 | 101 | MyDumper.add_representer( 102 | collections.OrderedDict, 103 | lambda d, data: d.represent_dict(data) 104 | ) 105 | 106 | MyDumper.add_representer( 107 | AnsibleUnsafeText, 108 | lambda d, data: d.represent_scalar('tag:yaml.org,2002:str', data) 109 | ) 110 | 111 | MyDumper.add_representer( 112 | LiteralText, 113 | lambda d, data: d.represent_scalar('tag:yaml.org,2002:str', data) 114 | ) 115 | 116 | 117 | def indent(text, with_): 118 | text = "".join([with_ + s for s in text.splitlines(True)]) 119 | return text 120 | 121 | 122 | OUTPUT_OMITTED = "OUTPUT OMITTED" 123 | 124 | 125 | def human_log(res, task, host, color, indent_with=" ", prefix="", is_handler=False): 126 | res = copy.deepcopy(res) 127 | item = None 128 | item_label = None 129 | 130 | if hasattr(res, "get"): 131 | if 
res.get("_ansible_no_log") or "_ansible_verbose_override" in res: 132 | res = OUTPUT_OMITTED 133 | 134 | if hasattr(res, "get"): 135 | if "_ansible_ignore_errors" in res: 136 | del res["_ansible_ignore_errors"] 137 | if "_ansible_no_log" in res: 138 | del res["_ansible_no_log"] 139 | 140 | item = res.get("item") 141 | if item is not None: 142 | del res["item"] 143 | item_label = res.get("_ansible_item_label") 144 | if item_label is not None: 145 | del res["_ansible_item_label"] 146 | # The following block transmutes with_items items into a nicer output. 147 | if hasattr(item, "keys") and set(item.keys()) == set(["key", "value"]): 148 | res["item_value"] = item["value"] 149 | item = item["key"] 150 | 151 | if task.action == "service" and "status" in res: 152 | if hasattr(res["status"], "keys") and len(res["status"]) > 3: 153 | del res["status"] 154 | for x in ['stdout', 'stderr', 'include_variables']: 155 | if x in res and res.get(x): 156 | res[x] = LiteralText(res[x]) 157 | elif x in res: 158 | del res[x] 159 | unreachable = bool(res.get("unreachable")) 160 | skipped = bool(res.get("skipped")) 161 | failed = bool(res.get("failed")) 162 | changed = bool(res.get("changed")) 163 | for o, n in [("_ansible_notify", "notified")]: 164 | if o in res: 165 | res[n] = res[o] 166 | del res[o] 167 | if "warnings" in res: 168 | if not res["warnings"]: 169 | del res["warnings"] 170 | else: 171 | warnings = res['warnings'] 172 | del res['warnings'] 173 | for i, v in enumerate(warnings): 174 | warnings[i] = LiteralText(stringc(v, C.COLOR_WARN)) 175 | res[stringc("warnings", C.COLOR_WARN)] = warnings 176 | for banned in ["invocation", "stdout_lines", "stderr_lines", 177 | "changed", "failed", "skipped", "unreachable", 178 | "_ansible_delegated_vars", "_ansible_parsed", 179 | "_ansible_item_result", "_ansible_verbose_always"]: 180 | if banned in res: 181 | del res[banned] 182 | 183 | if unreachable: 184 | res = res["msg"] 185 | elif skipped: 186 | if "skip_reason" in res: 187 | res = 
res["skip_reason"] 188 | if res == "Conditional check failed": 189 | res = OUTPUT_OMITTED 190 | elif failed: 191 | if len(res) == 1: 192 | res = res[res.keys()[0]] 193 | elif (len(res.keys()) == 1 194 | and res.keys()[0] not in ["_ansible_notify", "notified"] 195 | and task.action != "debug"): 196 | # Simplify. If the result contains only one key and value 197 | # pair, then make the result the value instead. 198 | # Subject to some exceptions, of course. 199 | res = res[res.keys()[0]] 200 | 201 | if "include_role" in res: 202 | res["include_role"] = "%s" % res["include_role"] 203 | 204 | if item is not None or item_label is not None: 205 | if not item and item_label is not None: 206 | item = item_label 207 | if res and res != OUTPUT_OMITTED: 208 | try: 209 | res = collections.OrderedDict({host: {item: res}}) 210 | except TypeError: 211 | res = collections.OrderedDict({host: {str(item): res}}) 212 | else: 213 | try: 214 | res = collections.OrderedDict({host: item}) 215 | except TypeError: 216 | res = collections.OrderedDict({host: str(item)}) 217 | else: 218 | if res and res != OUTPUT_OMITTED: 219 | res = collections.OrderedDict({host: res}) 220 | else: 221 | res = host 222 | 223 | banner = task.get_name() 224 | if ":" in banner: 225 | banner = banner.replace(u":", u"—") 226 | if is_handler: 227 | typ = "handler" 228 | elif banner == "include_role": 229 | typ = "include_role" 230 | else: 231 | typ = "task" 232 | if task.get_path(): 233 | path = " @ " + task.get_path() 234 | else: 235 | path = "" 236 | banner = banner + stringc(" (%s%s)" % (typ, path), color="bright gray") 237 | if prefix: 238 | banner = prefix + " " + banner 239 | if isinstance(res, (basestring, int, float)): 240 | res = ColorLiteralText(res, color) 241 | elif len(res) == 1 and hasattr(res, "items"): 242 | k ,v = res.keys()[0], res.values()[0] 243 | del res[k] 244 | if hasattr(v, "items"): 245 | v = dict((ColorLiteralText(x, color), y) for x, y in v.items()) 246 | res[ColorLiteralText(k, color)] = 
v 247 | banner = LiteralText(stringc(banner, color)) 248 | res = {banner: res} 249 | c = StringIO() 250 | d = MyDumper(c) 251 | d.open() 252 | d.represent(res) 253 | d.close() 254 | c.seek(0) 255 | res_text = c.read().strip() 256 | if indent_with: 257 | res_text = indent(res_text, indent_with) 258 | return res_text 259 | 260 | 261 | class CallbackModule(CallbackModule_default): 262 | 263 | CALLBACK_VERSION = 2.0 264 | CALLBACK_TYPE = 'stdout' 265 | CALLBACK_NAME = 'human_log' 266 | 267 | def __init__(self, *args, **kwargs): 268 | CallbackModule_default.__init__(self, *args, **kwargs) 269 | self.__handlers = {} 270 | 271 | def v2_on_file_diff(self, result): 272 | return 273 | 274 | def v2_playbook_on_task_start(self, task, is_conditional): 275 | pass 276 | 277 | def _hostname(self, result): 278 | delegated_vars = result._result.get('_ansible_delegated_vars', None) 279 | if delegated_vars: 280 | hostname = "%s <- %s" % (delegated_vars['ansible_host'], result._host.get_name()) 281 | else: 282 | hostname = "%s" % (result._host.get_name(),) 283 | return hostname 284 | 285 | def v2_runner_on_failed(self, result, ignore_errors=False): 286 | prefix = u"\u2718" 287 | if 'exception' in result._result: 288 | msg = "An exception occurred during task execution. The full traceback is:\n" + result._result['exception'] 289 | hostname = self._hostname(result) 290 | self._display.display(human_log(msg, result._task, 291 | hostname, C.COLOR_ERROR, prefix=prefix)) 292 | 293 | if result._task.loop and 'results' in result._result: 294 | # This is not a call to v2_runner_item_on_... 295 | # so we ignore it. 
296 | return 297 | hostname = self._hostname(result) 298 | if ignore_errors: 299 | prefix = u"\u2718 (ignored)" 300 | self._display.display(human_log(result._result, result._task, 301 | hostname, C.COLOR_ERROR, prefix=prefix)) 302 | 303 | def v2_runner_on_ok(self, result): 304 | delegated_vars = result._result.get('_ansible_delegated_vars', None) 305 | color = C.COLOR_CHANGED if result._result.get('changed', False) else C.COLOR_OK 306 | prefix = u"\u21BA" if result._result.get('changed', False) else u'\u2713' 307 | 308 | if 'diff' in result._result and result._result['diff']: 309 | if result._result.get('changed', False): 310 | diff = self._get_diff(result._result['diff']) 311 | result._result['diff'] = diff 312 | if isinstance(result._result['diff'], basestring): 313 | result._result['diff'] = LiteralText("".join(result._result['diff'])) 314 | if hasattr(result._result['diff'], "get"): 315 | if result._result['diff'].get("before") and result._result['diff'].get("after"): 316 | if result._result['diff'].get("before") == result._result['diff'].get("after"): 317 | del result._result['diff']['before'] 318 | del result._result['diff']['after'] 319 | if not result._result['diff']: 320 | del result._result['diff'] 321 | 322 | if result._task.loop and 'results' in result._result: 323 | # This is not a call to v2_runner_item_on_... 324 | # so we ignore it. 325 | return 326 | hostname = self._hostname(result) 327 | self._display.display(human_log(result._result, result._task, 328 | hostname, color, prefix=prefix, 329 | is_handler=result._task in self.__handlers)) 330 | 331 | def v2_runner_on_skipped(self, result): 332 | if result._task.loop and 'results' in result._result: 333 | # This is not a call to v2_runner_item_on_... 334 | # so we ignore it. 
335 | return 336 | prefix = u"\u23E9" 337 | hostname = self._hostname(result) 338 | self._display.display(human_log(result._result, result._task, 339 | hostname, color=C.COLOR_SKIP, prefix=prefix)) 340 | 341 | def v2_runner_on_unreachable(self, result): 342 | prefix = u"\U0001F6C7" 343 | hostname = self._hostname(result) 344 | self._display.display(human_log(result._result, result._task, 345 | hostname, color=C.COLOR_UNREACHABLE, prefix=prefix)) 346 | 347 | def v2_playbook_on_play_start(self, play): 348 | name = play.get_name().strip() 349 | if not name: 350 | msg = u"Play:" 351 | else: 352 | msg = u"Play on %s:" % name 353 | self._display.display(msg, C.COLOR_VERBOSE) 354 | self._play = play 355 | 356 | def v2_playbook_on_handler_task_start(self, task): 357 | self.__handlers[task] = True 358 | 359 | def v2_runner_item_on_ok(self, result): 360 | self.v2_runner_on_ok(result) 361 | 362 | def v2_runner_item_on_failed(self, result): 363 | self.v2_runner_on_failed(result) 364 | 365 | def v2_runner_item_on_skipped(self, result): 366 | self.v2_runner_on_skipped(result) 367 | 368 | # def v2_playbook_on_include(self, included_file): 369 | # msg = ' for: %s' % (", ".join([h.name for h in included_file._hosts]), ) 370 | # color = 'bright gray' 371 | # self._display.display(msg, color=color) 372 | -------------------------------------------------------------------------------- /handlers/grub2-regen.yml: -------------------------------------------------------------------------------- 1 | ../tasks/grub2-regen.yml -------------------------------------------------------------------------------- /handlers/recalc-services.yml: -------------------------------------------------------------------------------- 1 | - name: recalc services that need to be restarted 2 | shell: if test -x /usr/local/bin/needs-restart-collector ; then systemctl restart needs-restart-collector ; fi 3 | -------------------------------------------------------------------------------- 
/handlers/redo-initramfs.yml:
--------------------------------------------------------------------------------
1 | ../tasks/redo-initramfs.yml
--------------------------------------------------------------------------------
/handlers/testboot.yml:
--------------------------------------------------------------------------------
1 | - name: unmount boot
2 |   shell: |
3 |     for mntpnt in /boot/efi /boot ; do
4 |       umount $mntpnt
5 |       sync
6 |     done
7 | 
--------------------------------------------------------------------------------
/library/gconf.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf-8 -*-
3 | 
4 | # (c) 2017, Rudd-O. GPLv2+.
5 | 
6 | import subprocess
7 | 
8 | DOCUMENTATION = """
9 | ---
10 | module: gconf
11 | author: Rudd-O
12 | short_description: Deploy gconf settings (user settings or system-wide defaults).
13 | description:
14 |     - This module will allow you to deploy gconf settings.
15 | options:
16 |     key:
17 |         required: true
18 |     value:
19 |         required: false
20 |         description:
21 |             - Specify the value that the key should have.
22 |               Only required when state=present.
23 |     type:
24 |         required: false
25 |         description:
26 |             - Specify the type of the value associated with the key.
27 |               Only required when state=present.
28 |     mode:
29 |         required: false
30 |         default: user
31 |         choices: [ "user", "default" ]  # "mandatory" not yet implemented
32 |         description:
33 |             - Specify "user" to change the user configuration belonging to
34 |               the user that Ansible is logged in as, or "default" to change
35 |               system-wide default configuration.
36 |     state:
37 |         required: false
38 |         choices: [ "present", "absent" ]
39 |         default: "present"
40 | """
41 | 
42 | EXAMPLES = r"""
43 | - gconf:
44 |     key: /apps/desktop/gnome/interface/can_change_accels
45 |     state: present
46 |     value: true
47 |     type: boolean
48 |     mode: default
49 | """
50 | 
51 | defaultmode = ["gconftool-2", "--direct",
52 |                "--config-source",
53 |                "xml:readwrite:/etc/gconf/gconf.xml.defaults"]
54 | usermode = ["gconftool-2"]
55 | 
56 | 
57 | def present(module, key, type_, value, mode):
58 |     if value == "True" and type_ == "boolean":
59 |         value = "true"
60 |     elif value == "False" and type_ == "boolean":
61 |         value = "false"
62 |     elif not hasattr(value, "endswith"):
63 |         value = str(value)
64 |     changed = False
65 |     msg = "Key %s is already at its desired value" % (key,)
66 | 
67 |     cmd = defaultmode if mode == "default" else usermode
68 |     p = subprocess.check_output(cmd + ["--get", key])[:-1].decode()  # decode for Python 3
69 |     if p != value:
70 |         if not module.check_mode:
71 |             p = subprocess.check_output(cmd + ["--type", type_,
72 |                                                "--set", key,
73 |                                                value])
74 |         changed = True
75 |         msg = "Key %s has been set to %s" % (key, value)
76 |     module.exit_json(changed=changed, msg=msg)
77 | 
78 | 
79 | def absent(module, key, mode):
80 |     changed = False
81 |     msg = "Key %s is already not set" % (key,)
82 | 
83 |     cmd = defaultmode if mode == "default" else usermode
84 |     p = subprocess.check_output(cmd + ["--get", key])[:-1].decode()  # decode for Python 3
85 |     if p != "":
86 |         if not module.check_mode:
87 |             p = subprocess.check_output(cmd + ["--unset",
88 |                                                key])  # "value" was an undefined name here
89 |         changed = True
90 |         msg = "Key %s has been unset" % (key,)
91 |     module.exit_json(changed=changed, msg=msg)
92 | 
93 | 
94 | def main():
95 |     module = AnsibleModule(
96 |         argument_spec=dict(
97 |             key=dict(required=True),
98 |             type=dict(required=False),
99 |             value=dict(required=False),
100 |             state=dict(default='present', choices=['present', 'absent']),
101 |             mode=dict(default='user', choices=['user', 'default']),
102 |         ),
103 |         supports_check_mode=True
104 | 
    )
105 | 
106 |     params = module.params
107 |     if params["state"] == "present":
108 |         if params['type'] is None:  # the key always exists in params, so test its value
109 |             module.fail_json(msg='the "type" parameter is required when state is "present"')
110 |         if params['value'] is None:
111 |             module.fail_json(msg='the "value" parameter is required when state is "present"')
112 |         present(module, params['key'], params['type'], params['value'], params['mode'])
113 |     else:
114 |         absent(module, params['key'], params['mode'])
115 | 
116 | 
117 | # import module snippets
118 | from ansible.module_utils.basic import *
119 | from ansible.module_utils.splitter import *
120 | 
121 | 
122 | main()
123 | 
--------------------------------------------------------------------------------
/library/kernel_module.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf-8 -*-
3 | 
4 | # (c) 2017, Rudd-O. GPLv2+.
5 | 
6 | import errno, os  # "os" added: os.chmod is used below
7 | import re
8 | import subprocess
9 | 
10 | 
11 | DOCUMENTATION = """
12 | ---
13 | module: kernel_module
14 | author: Rudd-O
15 | short_description: Load kernel modules now and on boot
16 | description:
17 |     - This module will allow you to configure kernel module loading.
18 | options:
19 |     name:
20 |         required: true
21 |     state:
22 |         required: false
23 |         choices: [ "loaded" ]
24 |         default: "loaded"
25 | """
26 | 
27 | EXAMPLES = r"""
28 | - kernel_module:
29 |     name: uinput
30 |     state: loaded
31 | """
32 | 
33 | 
34 | NOTHING_TO_DO = "Nothing to do"
35 | 
36 | 
37 | def loaded(module, name):
38 |     changed = False
39 |     msg = NOTHING_TO_DO
40 | 
41 |     present = False
42 |     with open('/proc/modules') as modules:
43 |         module_name = name.replace('-', '_') + ' '
44 |         for line in modules:
45 |             if line.startswith(module_name):
46 |                 present = True
47 |                 break
48 |     if not present:
49 |         if not module.check_mode:
50 |             p = subprocess.Popen(["modprobe", "--", name],
51 |                                  stdout=subprocess.PIPE,
52 |                                  stderr=subprocess.STDOUT)
53 |             out = p.communicate()[0].decode().strip()
54 |             ret = p.wait()
55 |             if ret != 0:
56 |                 return module.fail_json(rc=ret, msg=out)
57 |         changed = True
58 |         msg = "Module %s loaded" % name
59 | 
60 |     sanitized = re.sub(r'\W+', '', name)
61 |     sanitized = "/etc/modules-load.d/%s.conf" % sanitized
62 |     try:
63 |         f = open(sanitized)  # was the Python 2-only builtin file()
64 |         text = f.read()
65 |         f.close()
66 |     except IOError as e:
67 |         if e.errno != errno.ENOENT:
68 |             raise
69 |         text = ""
70 |     if text != name:
71 |         if not module.check_mode:
72 |             f = open(sanitized, "w")
73 |             f.write(name)
74 |             f.close()
75 |             os.chmod(sanitized, 0o644)
76 |         changed = True
77 |         if msg != NOTHING_TO_DO:
78 |             msg = msg + ", %s created" % sanitized
79 |         else:
80 |             msg = "%s created" % sanitized
81 | 
82 |     module.exit_json(changed=changed, msg=msg)
83 | 
84 | 
85 | def main():
86 |     module = AnsibleModule(
87 |         argument_spec=dict(
88 |             name=dict(required=True),
89 |             state=dict(default='loaded', choices=['loaded']),
90 |         ),
91 |         supports_check_mode=True
92 |     )
93 | 
94 |     params = module.params
95 |     name = params["name"]
96 |     if params["state"] == "loaded":
97 |         loaded(module, name)
98 | 
99 | 
100 | # import module snippets
101 | from ansible.module_utils.basic import *
102 | from ansible.module_utils.splitter import *
103 | 
104 | 
105 | main()
106 | 
--------------------------------------------------------------------------------
/roles/deploy-ssl-certs/README.md:
--------------------------------------------------------------------------------
1 | # Deploy SSL certificates
2 | 
3 | This Ansible playbook deploys sets of SSL certificates to a server, following
4 | best practices.
5 | 
6 | ## Usage
7 | 
8 | Here is how you use this role in an example playbook:
9 | 
10 | ```yaml
11 | - hosts: mywebserver
12 |   become: True
13 |   vars:
14 |     ssl:
15 |       example.com:
16 |         key: '{{ lookup("file", "secrets/example.com.key") }}'
17 |         intermediates:
18 |         - '{{ lookup("file", "secrets/example.com.ca-bundle") }}'
19 |         certificate: '{{ lookup("file", "secrets/example.com.crt") }}'
20 |   tasks:
21 |   - name: deploy SSL certificates
22 |     include_role:
23 |       name: deploy-ssl-certs
24 |   - name: visualize SSL certificate path
25 |     debug: msg="The path to the deployed certificate is {{ sslconf['example.com'].certificate.path }}"
26 | ```
27 | 
28 | See the file `defaults/main.yml` for more information on how to configure the
29 | role from your playbook. Note that you may have to configure Ansible's
30 | `hash_behaviour` setting to `merge` dictionaries.
31 | 
32 | ## Results
33 | 
34 | The results of the deployment are stored in the variable
35 | `sslconf`, which you can use in later stages of your playbook.
Said variable
36 | looks like this:
37 | 
38 | ```
39 | changed: <True or False>
40 | result:
41 |   <certificate name>:
42 |     key:
43 |       path: <path to the deployed key>
44 |     certificate:
45 |       path: <path to the deployed certificate>
46 |     assembled_certificate:
47 |       path: <path to the certificate with intermediates appended>
48 | ```
49 | 
--------------------------------------------------------------------------------
/roles/deploy-ssl-certs/defaults/main.yml:
--------------------------------------------------------------------------------
1 | ssl:
2 |   options:
3 |     certificates_dir: /etc/pki/tls/certs
4 |     keys_dir: /etc/pki/tls/private
5 |   # Here is an example of how to deploy a key and a certificate:
6 |   # example.com:
7 |   #   # Key is meant to hold the key in === BEGIN PRIVATE KEY === format.
8 |   #   key: '{{ lookup("file", "secrets/example.com.key") }}'
9 |   #   # Certificate is meant to hold the certificate corresponding to the key
10 |   #   # above, again in === BEGIN CERTIFICATE === format.
11 |   #   certificate: '{{ lookup("file", "secrets/example.com.crt") }}'
12 |   #   # Intermediates are meant to hold intermediate certificate bundles
13 |   #   # in === BEGIN CERTIFICATE === format.
14 |   #   intermediates:
15 |   #     # The order matters. At the bottom of the stack must be the one closest to the root of trust.
16 |   #     - '{{ lookup("file", "secrets/example.com.ca-bundle") }}'
17 |   #   # Should you want to grant read access to a UNIX user in the system,
18 |   #   # you can define this (optional) variable to the user in question.
19 | # user_may_read: apache 20 | -------------------------------------------------------------------------------- /roles/deploy-ssl-certs/scripts/setvars.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import os 4 | import sys 5 | import json 6 | 7 | 8 | i = json.loads(sys.argv[1]) 9 | 10 | safeify = lambda x: x.replace(os.path.sep, "_").replace( 11 | os.path.pardir, "_" * len(os.path.pardir) 12 | ) 13 | incertdir = lambda x: os.path.join(i["options"]["certificates_dir"], x) 14 | inkeydir = lambda x: os.path.join(i["options"]["keys_dir"], x) 15 | 16 | o = {} 17 | for k, v in list(i.items()): 18 | if k == "options": 19 | continue 20 | o[k] = {} 21 | o[k]["key"] = {} 22 | o[k]["key"]["path"] = inkeydir(safeify(k) + ".key") 23 | o[k]["key"]["data"] = v["key"] 24 | o[k]["user_may_read"] = v.get("user_may_read", "") 25 | o[k]["certificate"] = {} 26 | o[k]["certificate"]["path"] = incertdir(safeify(k) + ".crt") 27 | o[k]["certificate"]["data"] = v["certificate"] 28 | o[k]["assembled_certificate"] = {} 29 | o[k]["assembled_certificate"]["path"] = incertdir( 30 | "assembled_" + safeify(k) + ".crt" 31 | ) 32 | o[k]["intermediate_certificates"] = [] 33 | for n, int in enumerate(v["intermediates"]): 34 | o[k]["intermediate_certificates"].append({}) 35 | o[k]["intermediate_certificates"][-1]["path"] = incertdir( 36 | ("intermediate_%s_" % n) + safeify(k) + ".crt" 37 | ) 38 | o[k]["intermediate_certificates"][-1]["data"] = int 39 | 40 | print(json.dumps(o), end=" ") 41 | -------------------------------------------------------------------------------- /roles/deploy-ssl-certs/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - script: scripts/setvars.py {{ ssl|to_json|quote }} 2 | check_mode: no 3 | delegate_to: localhost 4 | register: _ssl_dict 5 | no_log: true 6 | changed_when: false 7 | tags: always 8 | 9 | - set_fact: 10 | _ssl_dict: '{{ 
_ssl_dict.stdout|from_json }}' 11 | no_log: true 12 | tags: always 13 | 14 | - name: install SSL key 15 | copy: 16 | content: '{{ _ssl_dict[item].key.data }}' 17 | dest: '{{ _ssl_dict[item].key.path }}' 18 | owner: root 19 | group: root 20 | with_items: '{{ _ssl_dict }}' 21 | register: ssl_key_deploy 22 | 23 | - name: grant default permissions to SSL key 24 | shell: chmod 0400 {{ _ssl_dict[item].key.path|quote }} 25 | with_items: '{{ _ssl_dict }}' 26 | when: ssl_key_deploy.changed 27 | 28 | - name: grant permission to read to the user 29 | acl: 30 | path: '{{ _ssl_dict[item].key.path }}' 31 | entity: '{{ _ssl_dict[item].user_may_read }}' 32 | etype: user 33 | permissions: r 34 | state: present 35 | when: _ssl_dict[item].user_may_read != "" 36 | with_items: '{{ _ssl_dict }}' 37 | register: ssl_key_acl 38 | 39 | - name: install SSL host certificate 40 | copy: 41 | content: '{{ _ssl_dict[item].certificate.data }}' 42 | dest: '{{ _ssl_dict[item].certificate.path }}' 43 | mode: 0644 44 | owner: root 45 | group: root 46 | with_items: '{{ _ssl_dict }}' 47 | no_log: true 48 | register: ssl_cert_deploy 49 | 50 | - name: install SSL intermediate certificates 51 | copy: 52 | content: '{{ item[1].data }}' 53 | dest: '{{ item[1].path }}' 54 | mode: 0644 55 | owner: root 56 | group: root 57 | with_subelements: 58 | - '{{ _ssl_dict }}' 59 | - intermediate_certificates 60 | no_log: true 61 | register: ssl_intermediate_deploy 62 | 63 | - name: assemble certificate chain 64 | shell: | 65 | tmpfile=`mktemp` 66 | trap "rm -f $tmpfile" EXIT 67 | cat {{ _ssl_dict[item].certificate.path|quote }} > "$tmpfile" 68 | {% for int in _ssl_dict[item].intermediate_certificates %} 69 | echo >> "$tmpfile" 70 | cat {{ int.path|quote }} >> "$tmpfile" 71 | {% endfor %} 72 | if ! 
cmp "$tmpfile" {{ _ssl_dict[item].assembled_certificate.path|quote }} 73 | then 74 | {% if not ansible_check_mode %} 75 | cat "$tmpfile" > {{ _ssl_dict[item].assembled_certificate.path|quote }} 76 | {% endif %} 77 | echo CHANGED 78 | fi 79 | with_items: '{{ _ssl_dict }}' 80 | register: ssl_cert_assemble 81 | check_mode: no 82 | no_log: true 83 | changed_when: "'CHANGED' in ssl_cert_assemble.stdout" 84 | 85 | - name: detect if SSL configuration changed 86 | set_fact: 87 | sslconf: 88 | changed: '{{ ssl_key_acl.changed|default(False) or ssl_key_deploy.changed|default(False) or ssl_cert_deploy.changed|default(False) or ssl_intermediate_deploy.changed|default(False) or ssl_cert_assemble.changed|default(False) }}' 89 | result: '{{ _ssl_dict }}' 90 | no_log: true 91 | tags: always 92 | -------------------------------------------------------------------------------- /roles/distupgrade/README.md: -------------------------------------------------------------------------------- 1 | # Perform a distribution upgrade on your Fedora / Debian system 2 | 3 | This Ansible playbook safely upgrades your Fedora or Debian system 4 | to the next distribution release. Here are a few noteworthy points: 5 | 6 | * Your root ZFS dataset will be snapshotted prior to the upgrade, 7 | if your system has ZFS on root. 8 | * The debug console shell will be enabled during the upgrade, so 9 | you can break into the machine (Ctrl+Alt+F9 after switch root) 10 | if the upgrade fails and leaves you with an unbootable system. 11 | It will then be disabled after the upgrade is complete. 12 | * The recipe is idempotent, even across failures, so you can run 13 | it multiple times until the upgrade completes. 14 | 15 | ## Instructions 16 | 17 | Usage: 18 | 19 | Every time you want to upgrade your distribution, run the playbook `distupgrade` against it. 20 | Your task must set `become: yes` for this to be effective. 21 | 22 | You can run this against Qubes templates as well.
Use the 23 | [Ansible Qubes](https://github.com/Rudd-O/ansible-qubes) toolkit to do so. 24 | 25 | That's all! 26 | -------------------------------------------------------------------------------- /roles/distupgrade/handlers/main.yml: -------------------------------------------------------------------------------- 1 | - include: testboot.yml 2 | -------------------------------------------------------------------------------- /roles/distupgrade/handlers/testboot.yml: -------------------------------------------------------------------------------- 1 | ../../../handlers/testboot.yml -------------------------------------------------------------------------------- /roles/distupgrade/tasks/cleanup.yml: -------------------------------------------------------------------------------- 1 | - name: reread variables 2 | action: setup 3 | 4 | - include_role: 5 | name: updates 6 | 7 | - name: remove distrover file 8 | file: name=/.distupgrade state=absent 9 | 10 | - name: disable debug shell 11 | service: name=debug-shell state=stopped enabled=no 12 | when: debug_shell 13 | 14 | - name: create dbus-broker override directory 15 | file: 16 | name: /etc/systemd/system/{{ item }} 17 | state: directory 18 | with_items: 19 | - dbus-broker.service.d 20 | - systemd-resolved.service.d 21 | when: ansible_distribution_version|int == 30 22 | 23 | - name: create dbus-broker override 24 | copy: 25 | content: | 26 | [Service] 27 | ProtectKernelTunables=no 28 | PrivateDevices=no 29 | dest: /etc/systemd/system/{{ item }}/protectkerneltunables.conf 30 | with_items: 31 | - dbus-broker.service.d 32 | - systemd-resolved.service.d 33 | when: ansible_distribution_version|int == 30 34 | 35 | - name: fix contexts of root directories 36 | shell: | 37 | mkdir /tmp/fixcontexts 38 | mount -o bind / /tmp/fixcontexts 39 | cd /tmp/fixcontexts 40 | ls -lZd * | grep unlabeled && { 41 | chcon -t device_t dev 42 | chcon -t home_root_t home 43 | chcon -t root_t proc sys 44 | chcon -t var_run_t run 45 | } || : 46 |
when: ansible_distribution_version|int == 30 47 | 48 | - name: eliminate dbus-broker override 49 | file: 50 | name: /etc/systemd/system/{{ item }}/protectkerneltunables.conf 51 | state: absent 52 | with_items: 53 | - dbus-broker.service.d 54 | - systemd-resolved.service.d 55 | when: ansible_distribution_version|int != 30 56 | 57 | - name: set SELinux to enforcing 58 | shell: | 59 | set -e 60 | test -f /etc/selinux/config || exit 0 61 | sed -i 's/=permissive/=enforcing/' /etc/selinux/config 62 | if grep =enforcing /etc/selinux/config ; then 63 | setenforce Enforcing 64 | fi 65 | -------------------------------------------------------------------------------- /roles/distupgrade/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - include: testboot.yml 2 | 3 | - include: variables.yml 4 | when: ansible_distribution in "Fedora Qubes Debian Ubuntu" 5 | tags: 6 | - after 7 | 8 | - include: prepare.yml 9 | when: ansible_distribution in "Fedora Qubes Debian Ubuntu" 10 | 11 | - include: upgrade.yml 12 | when: ansible_distribution in "Fedora Qubes Debian Ubuntu" and targetdistrover|int != ansible_distribution_version|int 13 | 14 | - include: cleanup.yml 15 | when: ansible_distribution in "Fedora Qubes Debian Ubuntu" 16 | tags: 17 | - after 18 | -------------------------------------------------------------------------------- /roles/distupgrade/tasks/prepare.yml: -------------------------------------------------------------------------------- 1 | - name: snapshot ZFS root file system 2 | shell: | 3 | rootdataset=$(zfs list / -H -o name) || exit 0 4 | sname="$rootdataset@distupgrade-from-{{ distrover }}-to-{{ targetdistrover }}" 5 | zfs list "$sname" -H -o name || { 6 | zfs snapshot "$sname" || exit 5 7 | echo CHANGEDCHANGED >&2 8 | } 9 | register: zfssnap 10 | changed_when: '"CHANGEDCHANGED" in zfssnap.stderr' 11 | 12 | - name: enable debug shell 13 | service: name=debug-shell state=started enabled=yes 14 | when: debug_shell 15 
| 16 | - name: set SELinux to permissive 17 | shell: | 18 | test -f /etc/selinux/config || exit 0 19 | grep =enforcing /etc/selinux/config 20 | sed -i 's/=enforcing/=permissive/' /etc/selinux/config 21 | register: permissive 22 | changed_when: '"enforcing" in permissive.stdout' 23 | when: ansible_distribution_version|int >= 29 24 | -------------------------------------------------------------------------------- /roles/distupgrade/tasks/reboot.yml: -------------------------------------------------------------------------------- 1 | - name: fail reboot when incompatible 2 | fail: msg='This task is incompatible with ansible_connection={{ ansible_connection }}' 3 | when: ansible_connection != "ssh" and ansible_connection != "qubes" and ansible_connection != "smart" 4 | 5 | - name: run reboot command 6 | shell: 'sleep 2 && {{ reboot_command|default("reboot") }}' 7 | ignore_errors: true 8 | async: 1 9 | poll: 0 10 | 11 | - name: locally notify about reboot 12 | local_action: shell zenity --notification --text "Machine "{{ inventory_hostname|quote }}" is shutting down. Ensure it decrypts if it is encrypted." 
|| true 13 | 14 | - block: 15 | - name: power off VM via dom0 16 | shell: qvm-shutdown --wait {{ inventory_hostname|quote }} 17 | delegate_to: '{{ (qubes|default({})).dom0_vm|default(qubes_dom0_vm) }}' 18 | become: False 19 | register: shutdownviadom0 20 | failed_when: 'shutdownviadom0.rc|default(None) != 0 and "ERROR: VM already stopped" not in shutdownviadom0.stderr' 21 | when: '(qubes|default({})).dom0_vm|default(qubes_dom0_vm|default(""))' 22 | - meta: clear_host_errors 23 | when: ansible_connection|default("") == "qubes" 24 | 25 | - block: 26 | - name: wait for server to come back after reboot 27 | local_action: wait_for port=22 delay=15 timeout={{ timeout|default(600) }} search_regex=OpenSSH host={{ ansible_ssh_host|default(inventory_hostname) }} state=started 28 | retries: '{{ (86400 / (timeout|default(600)|int))|int }}' 29 | when: ansible_connection != "qubes" 30 | register: waitalive 31 | failed_when: 'waitalive.failed|default(False) and "reset by peer" not in waitalive.module_stderr|default("")' 32 | become: False 33 | 34 | - name: wait for server to come back after reboot, take 2 35 | local_action: wait_for port=22 delay=15 timeout={{ timeout|default(600) }} search_regex=OpenSSH host={{ ansible_ssh_host|default(inventory_hostname) }} state=started 36 | retries: '{{ (86400 / (timeout|default(600)|int))|int }}' 37 | when: ansible_connection != "qubes" and ("reset by peer" in waitalive.module_stderr|default("")) 38 | become: False 39 | 40 | - name: wait for VM to come back after reboot 41 | raw: bash -c 'sleep 1 ; echo -n yes' 42 | register: after_reboot 43 | retries: '{{ (timeout|default(600) / 5)|int }}' 44 | until: '"yes" in after_reboot.stdout|default("")' 45 | delay: 5 46 | when: ansible_connection == "qubes" 47 | become: False 48 | when: reboot_command|default("reboot") != "poweroff" 49 | -------------------------------------------------------------------------------- /roles/distupgrade/tasks/testboot.yml: 
-------------------------------------------------------------------------------- 1 | ../../../tasks/testboot.yml -------------------------------------------------------------------------------- /roles/distupgrade/tasks/upgrade.yml: -------------------------------------------------------------------------------- 1 | - block: 2 | - name: update initial packages in dnf-upgrade mode 3 | shell: dnf --releasever {{ targetdistrover|int }} update -y {{ base_packages }} 4 | - name: execute upgrade in dnf-upgrade mode 5 | shell: dnf upgrade -y --releasever {{ targetdistrover|int }} 6 | when: method|default(None) == "dnf-upgrade" 7 | 8 | - name: execute upgrade in distro-sync mode 9 | shell: dnf distro-sync -y --allowerasing --releasever {{ targetdistrover|int }} 10 | when: method|default(None) in "distro-sync" 11 | 12 | - name: execute upgrade in dist-upgrade mode 13 | shell: | 14 | set -e 15 | {% set targetcodename = debian_distro_map[targetdistrover|int] %} 16 | {% for num, codename in debian_distro_map.items() %} 17 | {% if distrover|int == num|int %} 18 | for f in /etc/apt/sources.list /etc/apt/sources.list.d/*.list ; do 19 | test -f "$f" || continue 20 | if grep -q {{ codename }} "$f" ; then 21 | sed -i 's/{{ codename }}/{{ targetcodename }}/g' "$f" 22 | fi 23 | done 24 | {% endif %} 25 | {% endfor %} 26 | apt-get update 27 | DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::="--force-confold" --force-yes -fuy dist-upgrade 28 | when: method|default(None) in "dist-upgrade" 29 | 30 | - block: 31 | - name: update initial packages in qubes-dom0-update mode 32 | shell: qubes-dom0-update -y {{ base_packages }} --releasever {{ targetdistrover|int }} 33 | register: qubes_dom0_base_package_upgrade 34 | changed_when: '"Nothing to do" not in qubes_dom0_base_package_upgrade.stdout' 35 | - name: execute upgrade in qubes-dom0-update mode 36 | shell: qubes-dom0-update -y --releasever {{ targetdistrover|int }} 37 | when: method|default(None) == "qubes-dom0-update" 38 | 39 | - 
block: 40 | - name: install system upgrade plugin 41 | package: name=dnf-plugin-system-upgrade state=present 42 | - name: execute upgrade in system-upgrade mode 43 | shell: dnf system-upgrade download -y --allowerasing --best --releasever {{ targetdistrover|int }} 44 | - include: reboot.yml 45 | vars: 46 | reboot_command: dnf system-upgrade reboot 47 | timeout: 7200 48 | when: method|default(None) in "system-upgrade" 49 | -------------------------------------------------------------------------------- /roles/distupgrade/tasks/variables.yml: -------------------------------------------------------------------------------- 1 | - name: record source distribution version upgrade and request relabeling if needed 2 | shell: | 3 | set -e 4 | created= 5 | if ! test -f /.distupgrade ; then 6 | created=true 7 | echo CHANGED >> /dev/stderr 8 | echo {{ ansible_distribution_version|int }} > /.distupgrade 9 | echo {{ ansible_distribution_version|int + 1 }} >> /.distupgrade 10 | {% if ansible_check_mode|default(False) %}selinuxenabled && touch /.autorelabel || true{% endif %} 11 | 12 | fi 13 | cat /.distupgrade 14 | 15 | {% if ansible_check_mode|default(False) %}test -n "$created" && rm -f /.distupgrade || true{% endif %} 16 | register: distrovercontent 17 | check_mode: no 18 | changed_when: '"CHANGED" in distrovercontent.stderr' 19 | 20 | - name: record if there is a kernel package 21 | shell: | 22 | {% if ansible_distribution in "Fedora Qubes" %} 23 | rpm -qa | grep -q '^kernel-[0123456789]' && echo YES || echo NO 24 | {% elif ansible_distribution in "Debian Ubuntu" %} 25 | dpkg -l | grep -q linux-kernel && echo YES || echo NO 26 | {% endif %} 27 | check_mode: no 28 | register: kernelpkgcontent 29 | changed_when: False 30 | 31 | - name: set variables up 32 | set_fact: 33 | distrover: '{{ distrovercontent.stdout_lines[0] }}' 34 | targetdistrover: '{{ distrovercontent.stdout_lines[1] }}' 35 | kernelpkgavailable: '{{ "YES" in kernelpkgcontent.stdout }}' 36 | base_packages: 
fedora-release dnf systemd 37 | 38 | - name: set debug_shell / method to dnf-upgrade / system-upgrade / distro-sync / qubes-dom0-update 39 | set_fact: 40 | debug_shell: '{% if (ansible_distribution in "Fedora" and distrover|int >= 23) 41 | or (ansible_distribution in "Debian Ubuntu") %}yes{% else %}no{% endif %}' 42 | method: '{% if ansible_distribution in "Fedora" 43 | and distrover|int == 22 44 | %}dnf-upgrade{% 45 | elif ansible_distribution in "Fedora" 46 | and distrover|int >= 23 47 | and ( 48 | kernelpkgavailable 49 | and (qubes.vm_type|default("")) not in ["TemplateVM", "StandaloneVM"] 50 | ) 51 | %}system-upgrade{% 52 | elif ansible_distribution in "Fedora" 53 | and distrover|int >= 23 and ( 54 | not kernelpkgavailable 55 | or (qubes.vm_type|default("")) in ["TemplateVM", "StandaloneVM"] 56 | ) 57 | %}distro-sync{% 58 | elif ansible_distribution in "Qubes" 59 | %}qubes-dom0-update{% 60 | elif ansible_distribution in "Debian Ubuntu" 61 | %}dist-upgrade{% 62 | endif 63 | %}' 64 | 65 | - name: set method to apt-get dist-upgrade 66 | set_fact: 67 | debian_distro_map: 68 | 7: wheezy 69 | 8: jessie 70 | 9: stretch 71 | 10: buster 72 | when: method == "dist-upgrade" 73 | -------------------------------------------------------------------------------- /roles/generic-rpm-install/defaults/main.yml: -------------------------------------------------------------------------------- 1 | generic_rpm_install: 2 | dnf_repo_base_url: http://dnf-updates.dragonfear/ 3 | dnf_repo_name: dnf-updates 4 | configrepo: True 5 | gpgcheck: False 6 | # gpgkey: key material, multiline, ASCII armored, must pass this if gpgcheck is true 7 | # package: you must pass this, or a packages variable with a list 8 | # register_var: result_var_name 9 | -------------------------------------------------------------------------------- /roles/generic-rpm-install/tasks/main.yml: -------------------------------------------------------------------------------- 1 | # Conforms to the latest Deployment 
policies.md in the documentation. 2 | 3 | - block: 4 | - name: construct callee parameters 5 | set_fact: 6 | _generic_rpm_install: 7 | _packages: '{{ packages|default([package]) }}' 8 | check_mode: no 9 | tags: always 10 | 11 | - name: create repo file 12 | copy: 13 | content: | 14 | [{{ generic_rpm_install.dnf_repo_name }}] 15 | name={{ generic_rpm_install.dnf_repo_name }} for $releasever - $basearch 16 | baseurl={{ generic_rpm_install.dnf_repo_base_url }}/{% if ansible_distribution in "Qubes" %}q{% else %}fc{% endif %}$releasever/ 17 | enabled=1 18 | gpgcheck={% if generic_rpm_install.gpgcheck %}1{% else %}0{% endif %} 19 | 20 | {% if generic_rpm_install.gpgcheck %} 21 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-{{ generic_rpm_install.dnf_repo_name }} 22 | {% endif %} 23 | 24 | metadata_expire=30 25 | 26 | dest: /etc/yum.repos.d/{{ generic_rpm_install.dnf_repo_name }}.repo 27 | mode: 0644 28 | owner: root 29 | group: root 30 | when: generic_rpm_install.configrepo and __generic_rpm_install_already_setup is not defined 31 | register: repofile 32 | 33 | - name: copy RPM signing key 34 | copy: 35 | content: "{{ generic_rpm_install.gpgkey }}" 36 | dest: /etc/pki/rpm-gpg/RPM-GPG-KEY-{{ generic_rpm_install.dnf_repo_name }} 37 | owner: root 38 | group: root 39 | mode: 0644 40 | when: generic_rpm_install.configrepo and generic_rpm_install.gpgcheck and __generic_rpm_install_already_setup is not defined 41 | 42 | - import_role: 43 | name: install-packages 44 | vars: 45 | state: latest 46 | packages: '{{ _generic_rpm_install["_packages"] }}' 47 | 48 | - set_fact: 49 | __generic_rpm_install: {} 50 | __generic_rpm_install_already_setup: True 51 | '{{ register_var|default("generic_rpm_install") }}': '{{ package_install }}' 52 | check_mode: no 53 | no_log: yes 54 | 55 | when: package is defined or packages is defined 56 | tags: generic_rpm_install 57 | -------------------------------------------------------------------------------- /roles/install-packages/tasks/main.yml: 
-------------------------------------------------------------------------------- 1 | - block: 2 | - name: install software (dom0) 3 | become: true 4 | shell: rpm -q {{ " ".join(packages) }} && echo "Nothing to do" || qubes-dom0-update -y {{ (options|default([])|join(" ")) }} {{ " ".join(packages) }} 5 | changed_when: '"Nothing to do" not in qubes_dom0_install.stdout' 6 | register: qubes_dom0_install 7 | notify: '{{ notify|default(omit) }}' 8 | - name: set package_install variable 9 | set_fact: 10 | package_install: '{{ qubes_dom0_install|default(None) }}' 11 | no_log: yes 12 | check_mode: no 13 | when: ansible_distribution == "Qubes" 14 | 15 | - block: 16 | - name: install software (non-dom0) 17 | become: true 18 | package: 19 | name: '{{ packages }}' 20 | state: '{{ state|default("present") }}' 21 | register: normal_package_install 22 | notify: '{{ notify|default(omit) }}' 23 | - name: set package_install variable 24 | set_fact: 25 | package_install: '{{ normal_package_install|default(None) }}' 26 | no_log: yes 27 | check_mode: no 28 | when: ansible_distribution != "Qubes" 29 | -------------------------------------------------------------------------------- /roles/mailserver/README.md: -------------------------------------------------------------------------------- 1 | # Complete mail server with support for DKIM, SPF, spam classification and Sieve rules 2 | 3 | **Warning: this role is currently unmaintained.** The replacement is a SaltStack 4 | formula here: https://github.com/Rudd-O/saltstack-automation/tree/master/formulas/email . 5 | 6 | This Ansible role deploys a full SMTP server and IMAP server, compliant 7 | with all the best practices for spam-free and good-delivery e-mail processing, 8 | including (but not limited to) the following features: 9 | 10 | - DKIM validation of incoming mail and signing of outgoing mail. 11 | - SPF validation of incoming mail. 
12 | - Automatic spam classification and dead-easy reclassification: just drag mail to / from the SPAM folder to teach the system what is and isn't spam. 13 | - Support for Sieve rules so you can filter your mail on the server. 14 | - Greylisting to cut down on low-rent spammers who try to spam you only once. 15 | 16 | Consult the *Setup* section below to configure your own mail server using this 17 | Ansible role. Also consult the `defaults/main.yml` file to understand 18 | the variables that influence the role's behavior; that file documents 19 | each variable and what it does. 20 | 21 | ## Operation 22 | 23 | ### Receiving mail: the pipeline 24 | 25 | An e-mail received by your server, and meant to reach one of the accounts 26 | you set up, traverses through this pipeline: 27 | 28 | 1. It is received by Postfix. 29 | 2. Postfix pushes it through the greylisting policy service. 30 | * If the policy service declines to receive it, then Postfix replies 31 | to the sender's server with the customary greylisting temporary failure. 32 | 3. Postfix then pushes it through the SPF verifier. 33 | * The SPF verifier only adds headers to the message. It does not reject 34 | any e-mail at this stage. 35 | 4. Then Postfix pushes it through the DKIM signature verifier. 36 | * The DKIM verifier only adds headers to the message. It does not reject 37 | any e-mail at this stage. 38 | 5. It is pushed through `bogofilter-dovecot-deliver`: 39 | * The program pipes the mail through the spam classifier (`bogofilter`), 40 | * then the program pipes the mail through to the Dovecot LDA. 41 | 6. Dovecot LDA runs the e-mail through the following Sieve scripts: 42 | * `/var/lib/sieve/before.d/*.sieve`, which includes the spam classifier 43 | ruleset that places spam into the *SPAM* folder. 44 | * Your own account's `~/.dovecot.sieve`, which contains the rules you 45 | have created using your own mail client.
46 | * `/var/lib/sieve/after.d/*.sieve`, which runs after your own Sieve rules. 47 | 7. Based on the decision taken by these scripts, Dovecot LDA delivers the 48 | e-mail to the right folder. Should no decision be taken by these scripts, 49 | then the e-mail is delivered to the *INBOX* folder of your account. 50 | 51 | ### Greylisting 52 | 53 | This section assumes that greylisting is activated (see `defaults/main.yml` 54 | for the right knob to turn it off). 55 | 56 | Your new mail server will reject any and all e-mails from senders it has not 57 | seen before, for 60 seconds. This is *normal*. The sending servers will retry after a 58 | few minutes, after which the greylisting process will determine that the e-mail 59 | is legitimate. 60 | 61 | If you want to whitelist specific servers or domains for sending you e-mail 62 | without greylisting, you can do so by adding rules to the file 63 | `/etc/postfix/postgrey_whitelist_clients.local`. See the documentation for 64 | *Postgrey whitelist* online to understand what to put in that file. 65 | 66 | ### Spam handling 67 | 68 | #### Automatic classification 69 | 70 | Spam classification happens upon delivery, where Dovecot's deliver agent 71 | automatically runs unclassified mail through the `spamclassifier` sieve 72 | (stored at `/var/lib/sieve/before.d/spamclassifier.sieve`) which in turn 73 | runs it through the `spamclassifier` filter (stored at 74 | `/usr/local/libexec/sieve/spamclassifier`). The filter runs `bogofilter` 75 | with the appropriate options to detect the spamicity of the message. 76 | Once the filter is done and has added the spamicity headers to the 77 | incoming message, the `spamclassifier` sieve places the resulting 78 | message in the appropriate SPAM box, or continues processing other sieve 79 | scripts until the message ends in the appropriate mailbox (usually INBOX).
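As an illustration, a before-script of this kind can be as small as a header test plus a `fileinto`. The following is only a sketch (the role's actual `spamclassifier.sieve` template may differ), assuming `bogofilter` has already stamped the message with its customary `X-Bogosity` header:

```sieve
require ["fileinto"];

# If the classifier marked the message as spam, file it into the SPAM
# folder and stop; otherwise processing falls through to the user's own
# rules and, eventually, delivery to INBOX.
if header :contains "X-Bogosity" "Spam" {
    fileinto "SPAM";
    stop;
}
```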
80 | 81 | Because the rule that classifies mail as spam executes before your own Sieve 82 | rules, all of your e-mail will go through the classifier. This may mean that, 83 | during the first few days, `bogofilter` will have to catch up with what *you* 84 | understand as spam and not spam, and a few e-mails will be misclassified. 85 | Worry not, as the retraining process is very easy (see the next section) and `bogofilter` 86 | learns very quickly what counts as spam and what doesn't. 87 | 88 | **SPF and DKIM interaction with spam handling**: `bogofilter` takes into 89 | account mail headers when deciding what is spam and what isn't, so the headers 90 | added by the SPF and DKIM validators will inform `bogofilter` quite reliably as 91 | to the legitimacy of the e-mail it's receiving for classification. 92 | 93 | #### Reclassification and retraining 94 | 95 | If the classifier has made a mistake, you can reclassify e-mails as ham or as 96 | spam by simply using your mail client as follows: 97 | 98 | Move them from the folder they are stored in, into the folder *SPAM* 99 | (for e-mail that was wrongly classified as proper e-mail) or into any other 100 | folder that isn't the *Trash* folder (for e-mail wrongly classified as spam). 101 | 102 | When you move mails to *SPAM*, the server automatically runs them 103 | through the `bogofilter` classifier again, telling the classifier to deem those 104 | messages as spam. Then, the server files the messages into the *SPAM* folder. 105 | 106 | When you move mails out of *SPAM*, the server automatically runs them 107 | through the classifier, deeming them as ham (not spam); immediately after that, 108 | the server saves the message in the destination folder. 109 | 110 | You'll discover quite quickly that `bogofilter` learns really well what 111 | qualifies as spam and what does not, according to your own criteria. It's 112 | almost magic.
After a few days, pretty much every e-mail will be correctly 113 | classified, with a false positive and false negative rate of less than 0.1%. 114 | 115 | **Technical note**: The reclassification is mediated by global `imapsieve` 116 | filters (deployed to `/var/lib/sieve/imapsieve`) that intercept message 117 | moves to and from the *SPAM* folder, and then pipe the contents of the moved 118 | message to `learn-ham` or `learn-spam` (both programs deployed to 119 | `/usr/local/libexec/sieve`). 120 | 121 | All reclassification events are noted to the user's systemd journal, with tag 122 | `bogofilter-reclassify`. No personal information is sent to the 123 | journal. You can verify that classification is working properly by simply 124 | running `journalctl -fa` as the user who classifies the email, or 125 | as root. The events will appear in real-time, and if there is any problem 126 | with the reclassifier, an error will be logged. 127 | 128 | By default, incoming mail will be classified as spam or ham, but their 129 | contents will not be registered as either in the `bogofilter` database. 130 | This prevents false positives and negatives. If you wish to turn on 131 | automatic registration of incoming mail's contents as SPAM or ham, the 132 | role variable `spam.autoregister_incoming_mail` can be turned on (set to 133 | True), and then incoming mail will automatically be recorded as spam or 134 | ham based on the decisions that `bogofilter` makes at the time. This can 135 | speed up the training process, but it can also cause `bogofilter` to 136 | incorrectly learn what is spam and what isn't. 137 | 138 | ## Setup 139 | 140 | ### Variables setup 141 | 142 | Target your mail server with the Ansible variables mentioned in the 143 | `defaults/main.yml` file. Customize these variables to your needs. 144 | See the file in question for documentation of the variables. 
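As an illustration only (hypothetical domain names, not a tested configuration), a minimal vars sketch covering the mandatory `mail` keys documented in `defaults/main.yml` might look like:

```yaml
mail:
  # Must be the mail server's own name; it should also match the
  # domain name of the SSL certificate you deploy for it.
  hostname: mailserver.example.com
  domain: example.com
  origin: example.com
  destination_domains:
    - mailserver.example.com
    - example.com
```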
145 | 146 | ### DKIM 147 | 148 | For each domain that your server will send mail as, you must generate a 149 | DKIM key pair. Then, you must do two things: 150 | 151 | 1. Deploy the public key to the DNS server that answers queries for 152 | the domain. 153 | 2. Add a variable with the corresponding private key to your Ansible 154 | setup that will run this role. 155 | 156 | #### Key generation 157 | 158 | Suppose in your Ansible node you create folder `secrets/dkim`, relative 159 | to your playbook that includes this role. 160 | 161 | Change into that directory. Then, for each domain, run the following script: 162 | 163 | ```bash 164 | DOMAIN= 165 | mkdir $DOMAIN 166 | opendkim-genkey -D $DOMAIN -d $DOMAIN -s default -b 2048 167 | ``` 168 | 169 | That process will create the necessary key pairs. 170 | Note that the program `opendkim-genkey` must be 171 | installed on the Ansible master node; it is available 172 | in the `opendkim` package. 173 | 174 | Now you can confirm that the necessary keys are in `secrets/dkim/$DOMAIN`. 175 | The file `default.private` contains the private key. The file `default.txt` 176 | contains the public key, in BIND record format. 177 | 178 | #### Setup of public key in DNS server 179 | 180 | For each domain, update its DNS server to include the record stored in 181 | `secrets/dkim/$DOMAIN/default.txt`. If you are running BIND, it's just 182 | a one-line addition to the respective zone file. The important goal is 183 | to publish the contents of the `default.txt` file (in each key 184 | subdirectory) via DNS, following the DNS portion of the guide here: 185 | https://support.dnsimple.com/articles/dkim-record/ . 186 | Failure to do so will cause other mail servers to mark your outgoing e-mails 187 | from these domains as not authentic (improperly signed by DKIM).
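If your DNS provider is not BIND and expects the raw TXT value rather than a zone-file line, the quoted chunks that `opendkim-genkey` writes into `default.txt` simply need to be concatenated. A small helper sketch (not part of this role; the sample record below is hypothetical and truncated):

```python
import re


def bind_txt_value(record: str) -> str:
    """Concatenate the quoted string chunks of a BIND TXT record.

    opendkim-genkey splits long keys across several quoted strings;
    many hosted DNS panels expect the single joined value instead.
    """
    return "".join(re.findall(r'"([^"]*)"', record))


# Hypothetical, truncated record for illustration only:
record = 'default._domainkey IN TXT ( "v=DKIM1; k=rsa; " "p=MIGfMA0GCSqG..." )'
print(bind_txt_value(record))  # v=DKIM1; k=rsa; p=MIGfMA0GCSqG...
```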
188 | 189 | #### Setup of private key in role configuration 190 | 191 | Now register the private keys as a var in your playbook or vars file: 192 | 193 | ```yaml 194 | dkim: 195 | domain1.com: '{{ lookup("file", "secrets/dkim/domain1/default.private") }}' 196 | domain2.com: '{{ lookup("file", "secrets/dkim/domain2/default.private") }}' 197 | #...and so on and so forth... 198 | ``` 199 | -------------------------------------------------------------------------------- /roles/mailserver/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | mail: 3 | message_size_limit: '{{ 60 * 1024 * 1024 }}' 4 | mailbox_size_limit: '{{ 1024 * 1024 * 1024 }}' 5 | recipient_delimiter: '+' 6 | # The following variables are mandatory: 7 | # hostname: mailserver.domain.com 8 | # domain: domain.com 9 | # origin: domain.com 10 | # destination_domains: 11 | # - mailserver.domain.com 12 | # - domain.com 13 | # - bond.name 14 | mailbox_type: mbox 15 | 16 | spam: 17 | # autoregister_incoming_mail will cause incoming mail to be registered in 18 | # the spam / ham wordlists automatically, based on whether bogofilter 19 | # classified it as spam or ham. This maps to the -u option of 20 | # bogofilter (see the man page). Turning this on may cause many 21 | # false positives and negatives if the filters aren't trained by you 22 | # frequently by reclassifying mail into the SPAM box or other 23 | # mailboxes. 24 | autoregister_incoming_mail: False 25 | # greylisting will cause incoming mail to be greylisted (temporarily 26 | # rejected) if the sender / server has not been seen before. 27 | greylisting: True 28 | # filing_after_user_scripts, if true, will cause the spam filing 29 | # behavior (file mail into SPAM if it is detected to be spam) 30 | # to occur after the user sieve scripts have executed.
31 | filing_after_user_scripts: False 32 | 33 | # dkim: 34 | # # You must list the private DKIM key of every domain you will be sending mail as 35 | # # and receiving from in the following variables. 36 | # # 37 | # # For more information, see the README.md file. 38 | # domain.com: '{{ lookup("file", "secrets/dkim/domain.com/default.private") }}' 39 | # bond.name: '{{ lookup("file", "secrets/dkim/bond.name/default.private") }}' 40 | 41 | # ssl: 42 | # # You must list the certificate and key for the mail server here. 43 | # # 44 | # # The certificate domain name must match the domain 45 | # # name of the server in mail.hostname, which should 46 | # # also be the same name below. 47 | # # 48 | # # For more info, check the deploy-ssl-certs role in the repository that contains this role. 49 | # mailserver.domain.com: 50 | # key: '{{ lookup("file", "secrets/mailserver.domain.com.key") }}' 51 | # intermediates: 52 | # # The order matters. At the bottom of the stack must be the one closest to the root of trust. 53 | # - /etc/pki/tls/certs/CABUNDLE.crt 54 | # certificate: '{{ lookup("file", "secrets/mailserver_domain_com.crt") }}' 55 | # assembled: '{{ lookup("file", "secrets/assembled_mailserver.domain.com.crt") }}' 56 | 57 | # unix_users: 58 | # - name: james 59 | # gecos: James Bond 60 | # password_unencrypted: password 61 | # password: $5$h/passwordpasswordpassword/password. 
62 | # addresses: 63 | # # - james@domain.com is tacit 64 | # - aliastojames@domain.com 65 | # - james@bond.name 66 | # - name: john 67 | # gecos: John the Ripper role account 68 | # addresses: 69 | # # - john@domain.com is tacit 70 | # password: $9$YWpasswordpasswordpassword/password/ 71 | # forwardings: 72 | # - name: notalist@domain.com 73 | # addresses: 74 | # - james@domain.com 75 | # - john@hotmail.com 76 | # - name: shawna@goody.com 77 | # addresses: 78 | # - shawna@gmail.com 79 | -------------------------------------------------------------------------------- /roles/mailserver/files/dovecot/var/lib/sieve/after.d/README.md: -------------------------------------------------------------------------------- 1 | Put scripts to execute after the user rules here. 2 | -------------------------------------------------------------------------------- /roles/mailserver/files/dovecot/var/lib/sieve/before.d/README.md: -------------------------------------------------------------------------------- 1 | Put scripts to execute before the user rules here. 2 | -------------------------------------------------------------------------------- /roles/mailserver/files/dovecot/var/lib/sieve/global/README.md: -------------------------------------------------------------------------------- 1 | Put scripts to execute here if the user has no user rules. 2 | Refer to the Dovecot Pigeonhole Sieve documentation for more info on global rules. 
3 | -------------------------------------------------------------------------------- /roles/mailserver/files/dovecot/var/lib/sieve/imapsieve/report-ham.sieve: -------------------------------------------------------------------------------- 1 | require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables", "imap4flags"]; 2 | 3 | if environment :matches "imap.mailbox" "*" { 4 | set "mailbox" "${1}"; 5 | } 6 | 7 | if string "${mailbox}" "Trash" { 8 | stop; 9 | } 10 | 11 | pipe :copy "learn-ham"; 12 | -------------------------------------------------------------------------------- /roles/mailserver/files/dovecot/var/lib/sieve/imapsieve/report-spam.sieve: -------------------------------------------------------------------------------- 1 | require ["vnd.dovecot.pipe", "copy", "imap4flags"]; 2 | 3 | pipe :copy "learn-spam"; 4 | -------------------------------------------------------------------------------- /roles/mailserver/files/postgrey/postgreylocal.te: -------------------------------------------------------------------------------- 1 | module postgreylocal 1.2; 2 | 3 | require { 4 | type postgrey_t; 5 | type bin_t; 6 | class file execute; 7 | } 8 | 9 | allow postgrey_t bin_t:file execute; 10 | -------------------------------------------------------------------------------- /roles/mailserver/tasks/accounts.yml: -------------------------------------------------------------------------------- 1 | # This is now fully migrated to the email.mda formula. 
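The `password` values consumed by the tasks below are pre-hashed crypt strings (like the `$5$…` sha256-crypt examples in the defaults). A hedged sketch of how such a hash can be produced with Ansible's `password_hash` filter — the task name and plaintext are illustrative only, and this helper is not part of the role:

```yaml
# Example only: emit a sha512-crypt hash suitable for the unix_users
# password field, rather than storing a plaintext password.
- name: show an example password hash
  debug:
    msg: "{{ 'correct horse battery staple' | password_hash('sha512') }}"
```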
2 | 3 | - name: create user 4 | user: 5 | name: "{{ item.name }}" 6 | createhome: yes 7 | comment: "{{ item.gecos }}" 8 | password: "{{ item.password | mandatory }}" 9 | with_items: '{{ unix_users }}' 10 | 11 | - name: create legacy user inbox 12 | file: dest=/var/mail/{{ item.name }} owner={{ item.name }} mode=0660 state=file 13 | with_items: '{{ unix_users }}' 14 | 15 | - name: create user maildir 16 | file: dest=/home/{{ item.name }}/mail owner={{ item.name }} group={{ item.name }} mode=0700 state=directory 17 | with_items: '{{ unix_users }}' 18 | 19 | - name: create user inbox 20 | shell: test -f ~/mail/inbox || { touch ~/mail/inbox && chmod 600 ~/mail/inbox && echo CREATED ; } 21 | become: True 22 | become_user: '{{ item.name }}' 23 | register: create_inbox 24 | changed_when: '"CREATED" in create_inbox.stdout' 25 | with_items: '{{ unix_users }}' 26 | 27 | - name: create user spambox 28 | shell: test -f ~/mail/SPAM || { touch ~/mail/SPAM && chmod 600 ~/mail/SPAM && echo CREATED ; } 29 | become: True 30 | become_user: '{{ item.name }}' 31 | register: create_spambox 32 | changed_when: '"CREATED" in create_spambox.stdout' 33 | with_items: '{{ unix_users }}' 34 | -------------------------------------------------------------------------------- /roles/mailserver/tasks/dkim.yml: -------------------------------------------------------------------------------- 1 | # This is now fully mirrored in the email.mta SaltStack formula. 
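The `dkim` variable expects a per-domain private key under `secrets/dkim/<domain>/default.private`. A hedged sketch of generating such a key pair with `opendkim-genkey` (domain and paths are illustrative; this one-off helper is not part of the role):

```yaml
# Hypothetical helper: writes default.private plus default.txt, the
# latter containing the TXT record to publish in the domain's DNS.
- name: generate a DKIM key pair for domain.com
  command: opendkim-genkey -b 2048 -d domain.com -s default -D secrets/dkim/domain.com
  args:
    creates: secrets/dkim/domain.com/default.private
  delegate_to: localhost
```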
2 | 3 | - name: install opendkim configuration files 4 | template: src=templates/opendkim{{ item }} dest={{ item }} mode=0644 owner=root group=root 5 | with_items: 6 | - /etc/opendkim.conf 7 | - /etc/opendkim/KeyTable 8 | - /etc/opendkim/SigningTable 9 | register: dkim_config 10 | - name: create socket directory for opendkim 11 | file: 12 | state: directory 13 | owner: opendkim 14 | group: mail 15 | mode: 0750 16 | name: /var/spool/opendkim/socket 17 | - name: create PKI directory for opendkim 18 | file: 19 | state: directory 20 | owner: root 21 | group: root 22 | mode: 0755 23 | name: /etc/pki/dkim 24 | - name: create PKI subdirectories for opendkim 25 | file: 26 | state: directory 27 | owner: root 28 | group: root 29 | mode: 0755 30 | name: /etc/pki/dkim/{{ item }} 31 | with_items: '{{ dkim }}' 32 | - name: deploy opendkim private keys 33 | copy: 34 | content: '{{ dkim[item] }}' 35 | dest: /etc/pki/dkim/{{ item }}/default.private 36 | mode: 0640 37 | owner: root 38 | group: opendkim 39 | with_items: '{{ dkim }}' 40 | register: dkim_privkeys 41 | -------------------------------------------------------------------------------- /roles/mailserver/tasks/dovecot.yml: -------------------------------------------------------------------------------- 1 | # This is now fully mirrored in the email.mda SaltStack formula. 
 2 | 3 | - name: symlink sieve plugins # MIGRATED 4 | shell: | 5 | set -ex 6 | cd /usr/lib64/dovecot/sieve 7 | for plugin in *.so ; do 8 | test -f "../$plugin" && continue || true 9 | ln -s "sieve/$plugin" ../"$plugin" 10 | echo CHANGED 11 | done 12 | register: sieve_symlinks 13 | changed_when: '"CHANGED" in sieve_symlinks.stdout' 14 | tags: 15 | - dovecotplugins 16 | 17 | - name: create Diffie-Hellman SSL parameters file # MIGRATED 18 | shell: | 19 | rm -f /var/lib/dovecot/ssl-parameters.dat 20 | test -f /etc/dovecot/dh.pem || { 21 | openssl dhparam 2048 > /etc/dovecot/dh.pem 22 | chmod 440 /etc/dovecot/dh.pem 23 | echo CHANGED 24 | } 25 | register: dhparam 26 | changed_when: '"CHANGED" in dhparam.stdout' 27 | tags: doveconf 28 | 29 | - name: install dovecot configuration files # MIGRATED 30 | template: src=templates/dovecot{{ item }} dest={{ item }} mode=0644 owner=root group=root 31 | with_items: 32 | - /etc/dovecot/local.conf 33 | register: dovecot_config 34 | tags: doveconf 35 | 36 | - name: create dovecot sieve global directories # MIGRATED 37 | file: name={{ item }} state=directory owner=root group=root mode=0755 setype=dovecot_etc_t seuser=system_u 38 | with_items: 39 | - /var/lib/sieve 40 | - /var/lib/sieve/before.d 41 | - /var/lib/sieve/after.d 42 | - /var/lib/sieve/global 43 | - /var/lib/sieve/imapsieve 44 | tags: sieve 45 | 46 | - name: create dovecot sieve directory for programs to be used with pipe and filter # MIGRATED 47 | file: name={{ item }} state=directory owner=root group=root mode=0755 48 | with_items: 49 | - /usr/local/libexec/sieve 50 | tags: sieve 51 | 52 | - name: deploy dovecot global scripts # MIGRATED 53 | copy: 54 | src: files/dovecot/var/lib/sieve/{{ item }}/ 55 | dest: /var/lib/sieve/{{ item }}/ 56 | owner: root 57 | group: root 58 | mode: 0644 59 | setype: dovecot_etc_t 60 | seuser: system_u 61 | with_items: 62 | - before.d 63 | - after.d 64 | - global 65 | - imapsieve 66 | register: sieve_global 67 | tags: sieve 68 | 69 | - name: deploy 
dovecot spam classifier script # MIGRATED 70 | template: 71 | src: templates/dovecot/var/lib/sieve/{{ item }}/spamclassifier.sieve.j2 72 | dest: /var/lib/sieve/{{ item }}/spamclassifier.sieve 73 | owner: root 74 | group: root 75 | mode: 0644 76 | setype: dovecot_etc_t 77 | seuser: system_u 78 | with_items: 79 | - before.d 80 | - after.d 81 | register: sieve_global_spamclassifier 82 | tags: sieve 83 | 84 | - name: compile dovecot global scripts # MIGRATED 85 | shell: | 86 | set -e 87 | for item in before.d after.d global imapsieve ; do 88 | cd /var/lib/sieve/$item 89 | for script in *.sieve ; do 90 | if ! test -f "$script" ; then continue ; fi 91 | compiled=$(echo "$script" | sed 's/.sieve$/.svbin/') 92 | agescript=`stat -c "%Y" "$script"` 93 | agecompiled=`stat -c "%Y" "$compiled" || echo 0` 94 | if [ "$agescript" -gt "$agecompiled" ] ; then 95 | sievec -x '+vnd.dovecot.pipe +vnd.dovecot.execute +vnd.dovecot.filter' "$script" 96 | chcon -u system_u -t dovecot_etc_t "$compiled" 97 | echo CHANGED recompiled "$script" 98 | fi 99 | done 100 | for compiled in *.svbin ; do 101 | if ! test -f "$compiled" ; then continue ; fi 102 | if ! 
test -f $(echo "$compiled" | sed 's/.svbin$/.sieve/') ; then 103 | rm -f "$compiled" 104 | echo CHANGED deleted "$compiled" 105 | fi 106 | done 107 | done 108 | register: sieve_global_compile 109 | changed_when: '"CHANGED" in sieve_global_compile.stdout' 110 | tags: 111 | - sieve 112 | - sievec 113 | 114 | - name: install ham / spam classifier # MIGRATED 115 | template: src=templates/dovecot/usr/local/libexec/sieve/spamclassifier dest=/usr/local/libexec/sieve/spamclassifier mode=0755 owner=root group=root 116 | tags: 117 | - sieve 118 | - classifiers 119 | 120 | - name: install ham / spam reclassifiers # MIGRATED 121 | template: src=templates/dovecot/usr/local/libexec/sieve/learn dest=/usr/local/libexec/sieve/learn-{{ item }} mode=0755 owner=root group=root 122 | with_items: 123 | - ham 124 | - spam 125 | tags: 126 | - sieve 127 | - classifiers 128 | -------------------------------------------------------------------------------- /roles/mailserver/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - include: packages.yml 2 | tags: 3 | - packages 4 | - dovecot 5 | - postfix 6 | - spf 7 | - dkim 8 | 9 | - import_tasks: postgrey.yml 10 | tags: 11 | - postgrey 12 | 13 | - import_role: 14 | name: deploy-ssl-certs 15 | tags: ssl 16 | 17 | - import_tasks: spf.yml 18 | tags: 19 | - spf 20 | 21 | - import_tasks: dkim.yml 22 | tags: 23 | - dkim 24 | 25 | - import_tasks: dovecot.yml 26 | tags: 27 | - dovecot 28 | 29 | - import_tasks: postfix.yml 30 | tags: 31 | - postfix 32 | 33 | - import_tasks: accounts.yml 34 | tags: 35 | - accounts 36 | 37 | - import_tasks: services.yml 38 | 39 | - import_tasks: test.yml 40 | tags: 41 | - test 42 | -------------------------------------------------------------------------------- /roles/mailserver/tasks/packages.yml: -------------------------------------------------------------------------------- 1 | # This is now fully mirrored in the email.mta SaltStack formula. 
2 | 3 | - name: install required software 4 | package: name={{ item }} state=present 5 | with_items: 6 | - postgrey # ported 7 | - postfix # ported 8 | - mailx # no need to port 9 | - bogofilter 10 | - dovecot # ported 11 | - dovecot-pigeonhole # ported 12 | - ca-certificates # ported 13 | - pypolicyd-spf # ported 14 | - opendkim # ported 15 | - name: remove sendmail # not to port 16 | package: name=sendmail state=absent 17 | -------------------------------------------------------------------------------- /roles/mailserver/tasks/postfix.yml: -------------------------------------------------------------------------------- 1 | # This is now fully mirrored in the email.mta SaltStack formula. 2 | 3 | - name: install postfix configuration files 4 | template: src=templates/postfix/etc/postfix/{{ item }} dest=/etc/postfix/{{ item }} mode=0644 owner=root group=root 5 | with_items: 6 | - main.cf 7 | - master.cf 8 | - virtual 9 | register: postfix_config 10 | 11 | - name: regenerate postfix hashmaps 12 | shell: postmap /etc/postfix/virtual 13 | when: postfix_config.changed 14 | 15 | - name: enable catch-all for root mail 16 | lineinfile: 17 | state: present 18 | dest: /etc/aliases 19 | regexp: '^root: ' 20 | line: 'root: {{ unix_users[0]["name"] }}' 21 | register: enable_catchall 22 | 23 | - name: regenerate aliases 24 | shell: newaliases 25 | when: enable_catchall.changed 26 | -------------------------------------------------------------------------------- /roles/mailserver/tasks/postgrey.yml: -------------------------------------------------------------------------------- 1 | # This is now fully mirrored in the email.mta SaltStack formula. 
2 | 3 | - name: enable postgrey local SELinux module 4 | include_role: 5 | name: selinux-module 6 | vars: 7 | policy_file: files/postgrey/postgreylocal.te 8 | state: enabled 9 | tags: 10 | - selinux 11 | when: spam.greylisting 12 | 13 | - name: enable postgrey 14 | service: name=postgrey enabled={{ spam.greylisting }} state={% if spam.greylisting %}started{% else %}stopped{% endif %} 15 | -------------------------------------------------------------------------------- /roles/mailserver/tasks/services.yml: -------------------------------------------------------------------------------- 1 | # This is now fully mirrored in the email.mta SaltStack formula. 2 | 3 | - name: enable postfix 4 | service: 5 | name: postfix 6 | enabled: yes 7 | state: '{% if postfix_config.changed|default(False) or (sslconf is defined and sslconf.changed) %}re{% endif %}started' 8 | tags: postfix 9 | 10 | - name: enable opendkim 11 | service: 12 | name: opendkim 13 | enabled: yes 14 | state: '{% if dkim_config.changed|default(False) or dkim_privkeys.changed|default(False) %}re{% endif %}started' 15 | tags: dkim 16 | 17 | - name: enable dovecot 18 | service: 19 | name: dovecot 20 | enabled: yes 21 | state: '{% if dovecot_config.changed|default(False) or (sslconf is defined and sslconf.changed) %}re{% endif %}started' 22 | tags: dovecot 23 | -------------------------------------------------------------------------------- /roles/mailserver/tasks/spf.yml: -------------------------------------------------------------------------------- 1 | # This is now fully mirrored in the email.mta SaltStack formula. 
2 | 3 | - name: create policyd-spf group 4 | group: 5 | name: policyd-spf 6 | system: yes 7 | - name: create policyd-spf user 8 | user: 9 | name: policyd-spf 10 | createhome: yes 11 | comment: "SPF policyd user" 12 | system: yes 13 | home: /var/lib/policyd-spf 14 | group: policyd-spf 15 | - name: install policyd-spf configuration files 16 | template: src=templates/python-policyd-spf/etc/python-policyd-spf/{{ item }} dest=/etc/python-policyd-spf/{{ item }} mode=0644 owner=root group=root 17 | with_items: 18 | - policyd-spf.conf 19 | register: spf_config 20 | -------------------------------------------------------------------------------- /roles/mailserver/tasks/test.yml: -------------------------------------------------------------------------------- 1 | - name: test the SMTP and IMAP servers work 2 | shell: openssl s_client -starttls {{ item }} -connect localhost:{{ item }} -verify 5 -verify_return_error 3 | changed_when: False 4 | with_items: 5 | - imap 6 | - smtp 7 | check_mode: no 8 | -------------------------------------------------------------------------------- /roles/mailserver/templates/dovecot/etc/dovecot/local.conf: -------------------------------------------------------------------------------- 1 | auth_mechanisms = plain login 2 | lda_mailbox_autocreate = yes 3 | lda_mailbox_autosubscribe = yes 4 | listen = * 5 | {% if mail.mailbox_type == "maildir" -%} 6 | {# You can migrate your mailboxes with dsync -u yourusername mirror mbox:~/mail -#} 7 | {# after stopping Postfix, reconfiguring and restarting Dovecot with the maildir -#} 8 | {# mailbox type, and then the mailboxes will be migrated from ~/mail mbox to maildir. 
-#} 9 | mail_location = maildir:~/Maildir 10 | namespace inbox { 11 | separator = "/" 12 | } 13 | {% else -%} 14 | mail_location = mbox:~/mail 15 | {% endif -%} 16 | passdb { 17 | driver = pam 18 | } 19 | plugin { 20 | sieve = ~/.dovecot.sieve 21 | sieve_dir = ~/sieve 22 | sieve_before = /var/lib/sieve/before.d 23 | sieve_after = /var/lib/sieve/after.d 24 | sieve_global = /var/lib/sieve/global 25 | 26 | sieve_plugins = sieve_imapsieve sieve_extprograms 27 | imapsieve_mailbox1_name = SPAM 28 | imapsieve_mailbox1_causes = COPY 29 | imapsieve_mailbox1_before = file:/var/lib/sieve/imapsieve/report-spam.sieve 30 | 31 | imapsieve_mailbox2_name = * 32 | imapsieve_mailbox2_from = SPAM 33 | imapsieve_mailbox2_causes = COPY 34 | imapsieve_mailbox2_before = file:/var/lib/sieve/imapsieve/report-ham.sieve 35 | 36 | sieve_pipe_bin_dir = /usr/local/libexec/sieve 37 | sieve_filter_bin_dir = /usr/local/libexec/sieve 38 | 39 | sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.execute +vnd.dovecot.filter 40 | } 41 | protocols = imap sieve 42 | service auth { 43 | unix_listener /var/spool/postfix/private/auth { 44 | group = postfix 45 | mode = 0666 46 | user = postfix 47 | } 48 | unix_listener auth-userdb { 49 | mode = 0660 50 | } 51 | } 52 | service imap-login { 53 | inet_listener imaps { 54 | port = 993 55 | ssl = yes 56 | } 57 | } 58 | service managesieve-login { 59 | inet_listener sieve { 60 | port = 4190 61 | } 62 | } 63 | userdb { 64 | driver = passwd 65 | } 66 | passdb { 67 | driver = pam 68 | } 69 | protocol lda { 70 | mail_plugins = sieve 71 | } 72 | protocol imap { 73 | mail_plugins = $mail_plugins imap_sieve 74 | mail_max_userip_connections = 20 75 | } 76 | disable_plaintext_auth = yes 77 | ssl = required 78 | ssl_cert = <{{ sslconf.result[mail.hostname].assembled_certificate.path }} 79 | ssl_key = <{{ sslconf.result[mail.hostname].key.path }} 80 | ssl_dh = </etc/dovecot/dh.pem # >Dovecot 2.2.6 84 | # ssl_dh_parameters_length = 4096 # >Dovecot 2.2 85 | mail_access_groups=mail 86 | 
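A bad render of local.conf would take IMAP down on the next dovecot restart, so a syntax check before the service task fires can be worthwhile. A hedged sketch of such a task (not part of this role, assuming `doveconf` is on the PATH):

```yaml
# Example only: doveconf -n parses the full configuration and exits
# non-zero on syntax errors, failing the play before any restart.
- name: validate dovecot configuration
  command: doveconf -n
  changed_when: false
```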
-------------------------------------------------------------------------------- /roles/mailserver/templates/dovecot/usr/local/libexec/sieve/learn: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | export PATH=/usr/sbin:/usr/bin:/sbin:/bin 4 | tmp=`mktemp` 5 | trap "rm -f $tmp" EXIT 6 | ret=0 7 | cat > "$tmp" || ret=$? 8 | if [ "$ret" != "0" ] ; then 9 | logger -p user.error -t bogofilter "Error buffering message to mark as {{ item }}: exited with status $ret" 10 | exit "$ret" 11 | fi 12 | 13 | mid=$(grep ^Message-ID: < "$tmp" | head -1 | cut -d '<' -f 2 | cut -d '>' -f 1) 14 | if [ "$mid" == "" ] ; then mid="with no discernible ID" ; fi 15 | logger --id=$$ -p user.info -t bogofilter "Marking message $mid as {{ item }}." 16 | 17 | tmp2=`mktemp` 18 | trap "rm -f $tmp $tmp2" EXIT 19 | bogofilter -e {% if item == "ham" %}-Sn{% else %}-Ns{% endif %} -M -C -I "$tmp" 2> "$tmp2" 20 | ret=$? 21 | if [ "$ret" != "0" ] ; then 22 | output=`cat "$tmp2"` 23 | logger --id=$$ -p user.error -t bogofilter "Error marking message $mid as {{ item }}: exited with status $ret, output: $output" 24 | exit "$ret" 25 | fi 26 | -------------------------------------------------------------------------------- /roles/mailserver/templates/dovecot/usr/local/libexec/sieve/spamclassifier: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | export PATH=/usr/sbin:/usr/bin:/sbin:/bin 4 | tmp=`mktemp` 5 | trap "rm -f $tmp" EXIT 6 | ret=0 7 | cat > "$tmp" || ret=$? 8 | if [ "$ret" != "0" ] ; then 9 | logger -p user.error -t bogofilter "Error buffering incoming message: exited with status $ret" 10 | exit "$ret" 11 | fi 12 | 13 | mid=$(grep ^Message-ID: < "$tmp" | head -1 | cut -d '<' -f 2 | cut -d '>' -f 1) 14 | if [ "$mid" == "" ] ; then mid="with no discernible ID" ; fi 15 | logger --id=$$ -p user.info -t bogofilter "Classifying message $mid as either spam or ham." 
16 | 17 | tmp2=`mktemp` 18 | trap "rm -f $tmp $tmp2" EXIT 19 | bogofilter -p -e {% if spam.autoregister_incoming_mail|default(False) %}-u {% endif %}-l -C -I "$tmp" 2> "$tmp2" 20 | ret=$? 21 | if [ "$ret" != "0" ] ; then 22 | output=`cat "$tmp2"` 23 | logger --id=$$ -p user.error -t bogofilter "Error classifying message $mid: exited with status $ret, output: $output" 24 | exit "$ret" 25 | fi 26 | -------------------------------------------------------------------------------- /roles/mailserver/templates/dovecot/var/lib/sieve/after.d/spamclassifier.sieve.j2: -------------------------------------------------------------------------------- 1 | {% if spam.filing_after_user_scripts %} 2 | require ["fileinto"]; 3 | 4 | {# Keep this snippet in sync with same-named file in sibling folder. #} 5 | if anyof ( 6 | header :contains ["X-Spam-Flag"] ["yes"], 7 | header :contains ["X-Bogosity"] ["Spam,"] 8 | ) { 9 | fileinto "SPAM"; 10 | } 11 | {% endif %} -------------------------------------------------------------------------------- /roles/mailserver/templates/dovecot/var/lib/sieve/before.d/spamclassifier.sieve.j2: -------------------------------------------------------------------------------- 1 | require ["fileinto", "vnd.dovecot.filter"]; 2 | 3 | # We must run the spamclassifier filter before we run the user 4 | # scripts, otherwise the spam headers will not be available. 5 | filter "spamclassifier"; 6 | 7 | {% if not spam.filing_after_user_scripts %} 8 | {# Keep this snippet in sync with same-named file in sibling folder. 
#} 9 | if anyof ( 10 | header :contains ["X-Spam-Flag"] ["yes"], 11 | header :contains ["X-Bogosity"] ["Spam,"] 12 | ) { 13 | fileinto "SPAM"; 14 | } 15 | {% endif %} -------------------------------------------------------------------------------- /roles/mailserver/templates/opendkim/etc/opendkim.conf: -------------------------------------------------------------------------------- 1 | ## BASIC OPENDKIM CONFIGURATION FILE 2 | ## See opendkim.conf(5) or /usr/share/doc/opendkim/opendkim.conf.sample for more 3 | 4 | ## BEFORE running OpenDKIM you must: 5 | 6 | ## - make your MTA (Postfix, Sendmail, etc.) aware of OpenDKIM 7 | ## - generate keys for your domain (if signing) 8 | ## - edit your DNS records to publish your public keys (if signing) 9 | 10 | ## See /usr/share/doc/opendkim/INSTALL for detailed instructions. 11 | 12 | ## DEPRECATED CONFIGURATION OPTIONS 13 | ## 14 | ## The following configuration options are no longer valid. They should be 15 | ## removed from your existing configuration file to prevent potential issues. 16 | ## Failure to do so may result in opendkim being unable to start. 17 | ## 18 | ## Removed in 2.10.0: 19 | ## AddAllSignatureResults 20 | ## ADSPAction 21 | ## ADSPNoSuchDomain 22 | ## BogusPolicy 23 | ## DisableADSP 24 | ## LDAPSoftStart 25 | ## LocalADSP 26 | ## NoDiscardableMailTo 27 | ## On-PolicyError 28 | ## SendADSPReports 29 | ## UnprotectedPolicy 30 | 31 | ## CONFIGURATION OPTIONS 32 | 33 | ## Specifies the path to the process ID file. 34 | PidFile /var/run/opendkim/opendkim.pid 35 | 36 | ## Selects operating modes. Valid modes are s (sign) and v (verify). Default is v. 37 | ## Must be changed to s (sign only) or sv (sign and verify) in order to sign outgoing 38 | ## messages. 39 | Mode sv 40 | 41 | ## Log activity to the system log. 42 | Syslog yes 43 | 44 | ## Log additional entries indicating successful signing or verification of messages. 
45 | SyslogSuccess yes 46 | 47 | ## If logging is enabled, include detailed logging about why or why not a message was 48 | ## signed or verified. This causes an increase in the amount of log data generated 49 | ## for each message, so set this to No (or comment it out) if it gets too noisy. 50 | LogWhy yes 51 | 52 | ## Attempt to become the specified user before starting operations. 53 | UserID opendkim:opendkim 54 | 55 | ## Create a socket through which your MTA can communicate. 56 | Socket local:/var/spool/opendkim/socket/opendkim.sock 57 | 58 | ## Required to use local socket with MTAs that access the socket as a non- 59 | ## privileged user (e.g. Postfix) 60 | Umask 000 61 | 62 | ## This specifies a text file in which to store DKIM transaction statistics. 63 | ## OpenDKIM must be manually compiled with --enable-stats to enable this feature. 64 | # Statistics /var/spool/opendkim/stats.dat 65 | 66 | ## Specifies whether or not the filter should generate report mail back 67 | ## to senders when verification fails and an address for such a purpose 68 | ## is provided. See opendkim.conf(5) for details. 69 | SendReports yes 70 | 71 | ## Specifies the sending address to be used on From: headers of outgoing 72 | ## failure reports. By default, the e-mail address of the user executing 73 | ## the filter is used (executing_user@hostname). 74 | # ReportAddress "Example.com Postmaster" 75 | 76 | ## Add a DKIM-Filter header field to messages passing through this filter 77 | ## to identify messages it has processed. 78 | SoftwareHeader yes 79 | 80 | ## SIGNING OPTIONS 81 | 82 | ## Selects the canonicalization method(s) to be used when signing messages. 83 | Canonicalization relaxed/relaxed 84 | 85 | ## Domain(s) whose mail should be signed by this filter. Mail from other domains will 86 | ## be verified rather than being signed. Uncomment and use your domain name. 87 | ## This parameter is not required if a SigningTable is in use. 
88 | # Domain example.com 89 | 90 | ## Defines the name of the selector to be used when signing messages. 91 | Selector default 92 | 93 | ## Specifies the minimum number of key bits for acceptable keys and signatures. 94 | MinimumKeyBits 1024 95 | 96 | ## Gives the location of a private key to be used for signing ALL messages. This 97 | ## directive is ignored if KeyTable is enabled. 98 | KeyFile /etc/opendkim/keys/default.private 99 | 100 | ## Gives the location of a file mapping key names to signing keys. In simple terms, 101 | ## this tells OpenDKIM where to find your keys. If present, overrides any KeyFile 102 | ## directive in the configuration file. Requires SigningTable be enabled. 103 | KeyTable /etc/opendkim/KeyTable 104 | 105 | # KeyTable (dataset) 106 | # Gives the location of a file mapping key names to signing keys. If present, overrides any KeyFile setting in the configuration file. The data 107 | # set named here maps each key name to three values: (a) the name of the domain to use in the signature's "d=" value; (b) the name of the selector 108 | # to use in the signature's "s=" value; and (c) either a private key or a path to a file containing a private key. If the first value consists 109 | # solely of a percent sign ("%") character, it will be replaced by the apparent domain of the sender when generating a signature. If the third 110 | # value starts with a slash ("/") character, or "./" or "../", then it is presumed to refer to a file from which the private key should be read, 111 | # otherwise it is itself a PEM-encoded private key or a base64-encoded DER private key; a "%" in the third value in this case will be replaced by 112 | # the apparent domain name of the sender. The SigningTable (see below) is used to select records from this table to be used to add signatures based 113 | # on the message sender. 
114 | # 115 | # sample: 116 | # 117 | # default._domainkey.example.com example.com:default:/etc/opendkim/keys/default.private 118 | 119 | 120 | ## Defines a table used to select one or more signatures to apply to a message based 121 | ## on the address found in the From: header field. In simple terms, this tells 122 | ## OpenDKIM how to use your keys. Requires KeyTable be enabled. 123 | SigningTable refile:/etc/opendkim/SigningTable 124 | 125 | ## Identifies a set of "external" hosts that may send mail through the server as one 126 | ## of the signing domains without credentials as such. 127 | # ExternalIgnoreList refile:/etc/opendkim/TrustedHosts 128 | 129 | ## Identifies a set "internal" hosts whose mail should be signed rather than verified. 130 | # InternalHosts refile:/etc/opendkim/TrustedHosts 131 | 132 | ## Contains a list of IP addresses, CIDR blocks, hostnames or domain names 133 | ## whose mail should be neither signed nor verified by this filter. See man 134 | ## page for file format. 135 | # PeerList X.X.X.X 136 | 137 | ## Always oversign From (sign using actual From and a null From to prevent 138 | ## malicious signatures header fields (From and/or others) between the signer 139 | ## and the verifier. From is oversigned by default in the Fedora package 140 | ## because it is often the identity key used by reputation systems and thus 141 | ## somewhat security sensitive. 142 | OversignHeaders From 143 | 144 | ## Instructs the DKIM library to maintain its own local cache of keys and 145 | ## policies retrieved from DNS, rather than relying on the nameserver for 146 | ## caching service. Useful if the nameserver being used by the filter is 147 | ## not local. 
148 | # QueryCache yes -------------------------------------------------------------------------------- /roles/mailserver/templates/opendkim/etc/opendkim/KeyTable: -------------------------------------------------------------------------------- 1 | # OPENDKIM KEY TABLE 2 | # To use this file, uncomment the #KeyTable option in /etc/opendkim.conf, 3 | # then uncomment the following line and replace example.com with your domain 4 | # name, then restart OpenDKIM. Additional keys may be added on separate lines. 5 | 6 | {% for domain in dkim %} 7 | default._domainkey.{{ domain }} {{ domain }}:default:/etc/pki/dkim/{{ domain }}/default.private 8 | {% endfor %} 9 | -------------------------------------------------------------------------------- /roles/mailserver/templates/opendkim/etc/opendkim/SigningTable: -------------------------------------------------------------------------------- 1 | # OPENDKIM SIGNING TABLE 2 | # This table controls how to apply one or more signatures to outgoing messages based 3 | # on the address found in the From: header field. In simple terms, this tells 4 | # OpenDKIM "how" to apply your keys. 5 | 6 | # To use this file, uncomment the SigningTable option in /etc/opendkim.conf, 7 | # then uncomment one of the usage examples below and replace example.com with your 8 | # domain name, then restart OpenDKIM. 9 | 10 | # WILDCARD EXAMPLE 11 | # Enables signing for any address on the listed domain(s), but will work only if 12 | # "refile:/etc/opendkim/SigningTable" is included in /etc/opendkim.conf. 13 | # Create additional lines for additional domains. 14 | 15 | {% for domain in dkim %} 16 | *@{{ domain }} default._domainkey.{{ domain }} 17 | {% endfor %} 18 | 19 | # NON-WILDCARD EXAMPLE 20 | # If "file:" (instead of "refile:") is specified in /etc/opendkim.conf, then 21 | # wildcards will not work. 
Instead, full user@host is checked first, then simply host, 22 | # then user@.domain (with all superdomains checked in sequence, so "foo.example.com" 23 | # would first check "user@foo.example.com", then "user@.example.com", then "user@.com"), 24 | # then .domain, then user@*, and finally *. See the opendkim.conf(5) man page under 25 | # "SigningTable" for more details. 26 | 27 | #example.com default._domainkey.example.com -------------------------------------------------------------------------------- /roles/mailserver/templates/postfix/etc/postfix/master.cf: -------------------------------------------------------------------------------- 1 | # 2 | # Postfix master process configuration file. For details on the format 3 | # of the file, see the master(5) manual page (command: "man 5 master"). 4 | # 5 | # Do not forget to execute "postfix reload" after editing this file. 6 | # 7 | # ========================================================================== 8 | # service type private unpriv chroot wakeup maxproc command + args 9 | # (yes) (yes) (yes) (never) (100) 10 | # ========================================================================== 11 | smtp inet n - n - - smtpd 12 | submission inet n - n - - smtpd 13 | -o syslog_name=postfix/submission 14 | -o smtpd_tls_security_level=encrypt 15 | -o smtpd_sasl_auth_enable=yes 16 | -o smtpd_client_restrictions=permit_sasl_authenticated,reject 17 | # -o milter_macro_daemon_name=ORIGINATING 18 | pickup fifo n - n 60 1 pickup 19 | cleanup unix n - n - 0 cleanup 20 | qmgr fifo n - n 300 1 qmgr 21 | tlsmgr unix - - n 1000? 1 tlsmgr 22 | rewrite unix - - n - - trivial-rewrite 23 | bounce unix - - n - 0 bounce 24 | defer unix - - n - 0 bounce 25 | trace unix - - n - 0 bounce 26 | verify unix - - n - 1 verify 27 | flush unix n - n 1000? 
0 flush 28 | proxymap unix - - n - - proxymap 29 | proxywrite unix - - n - 1 proxymap 30 | smtp unix - - n - - smtp 31 | relay unix - - n - - smtp 32 | # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 33 | showq unix n - n - - showq 34 | error unix - - n - - error 35 | retry unix - - n - - error 36 | discard unix - - n - - discard 37 | local unix - n n - - local 38 | virtual unix - n n - - virtual 39 | lmtp unix - - n - - lmtp 40 | anvil unix - - n - 1 anvil 41 | scache unix - - n - 1 scache 42 | 43 | # SPF: 44 | policyd-spf unix - n n - 0 spawn user=policyd-spf argv=/usr/libexec/postfix/policyd-spf 45 | #smtp inet n - n - 1 postscreen 46 | #smtpd pass - - n - - smtpd 47 | #dnsblog unix - - n - 0 dnsblog 48 | #tlsproxy unix - - n - 0 tlsproxy 49 | #smtps inet n - n - - smtpd 50 | # -o syslog_name=postfix/smtps 51 | # -o smtpd_tls_wrappermode=yes 52 | # -o smtpd_sasl_auth_enable=yes 53 | # -o smtpd_client_restrictions=permit_sasl_authenticated,reject 54 | # -o milter_macro_daemon_name=ORIGINATING 55 | #628 inet n - n - - qmqpd 56 | #qmgr fifo n - n 300 1 oqmgr 57 | # 58 | # ==================================================================== 59 | # Interfaces to non-Postfix software. Be sure to examine the manual 60 | # pages of the non-Postfix software to find out what options it wants. 61 | # 62 | # Many of the following services use the Postfix pipe(8) delivery 63 | # agent. See the pipe(8) man page for information about ${recipient} 64 | # and other message envelope options. 65 | # ==================================================================== 66 | # 67 | # maildrop. See the Postfix MAILDROP_README file for details. 
68 | # Also specify in main.cf: maildrop_destination_recipient_limit=1 69 | # 70 | #maildrop unix - n n - - pipe 71 | # flags=DRhu user=vmail argv=/usr/local/bin/maildrop -d ${recipient} 72 | # 73 | # ==================================================================== 74 | # 75 | # Recent Cyrus versions can use the existing "lmtp" master.cf entry. 76 | # 77 | # Specify in cyrus.conf: 78 | # lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4 79 | # 80 | # Specify in main.cf one or more of the following: 81 | # mailbox_transport = lmtp:inet:localhost 82 | # virtual_transport = lmtp:inet:localhost 83 | # 84 | # ==================================================================== 85 | # 86 | # Cyrus 2.1.5 (Amos Gouaux) 87 | # Also specify in main.cf: cyrus_destination_recipient_limit=1 88 | # 89 | #cyrus unix - n n - - pipe 90 | # user=cyrus argv=/usr/lib/cyrus-imapd/deliver -e -r ${sender} -m ${extension} ${user} 91 | # 92 | # ==================================================================== 93 | # 94 | # Old example of delivery via Cyrus. 95 | # 96 | #old-cyrus unix - n n - - pipe 97 | # flags=R user=cyrus argv=/usr/lib/cyrus-imapd/deliver -e -m ${extension} ${user} 98 | # 99 | # ==================================================================== 100 | # 101 | # See the Postfix UUCP_README file for configuration details. 102 | # 103 | #uucp unix - n n - - pipe 104 | # flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) 105 | # 106 | # ==================================================================== 107 | # 108 | # Other external delivery methods. 109 | # 110 | #ifmail unix - n n - - pipe 111 | # flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) 112 | # 113 | #bsmtp unix - n n - - pipe 114 | # flags=Fq. 
user=bsmtp argv=/usr/local/sbin/bsmtp -f $sender $nexthop $recipient 115 | # 116 | #scalemail-backend unix - n n - 2 pipe 117 | # flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store 118 | # ${nexthop} ${user} ${extension} 119 | # 120 | #mailman unix - n n - - pipe 121 | # flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py 122 | # ${nexthop} ${user} 123 | postlog unix-dgram n - n - 1 postlogd 124 | -------------------------------------------------------------------------------- /roles/mailserver/templates/postfix/etc/postfix/virtual: -------------------------------------------------------------------------------- 1 | {% for user in unix_users %}{% if user.addresses is defined %} 2 | {% for address in user["addresses"] %} 3 | {{ address }} {{ user["name"] }} 4 | {% endfor %} 5 | {% endif %}{% endfor %} 6 | {% for forwarding in forwardings %} 7 | {{ forwarding["name"] }} {{ forwarding["addresses"]|join(', ') }} 8 | {% endfor %} -------------------------------------------------------------------------------- /roles/mailserver/templates/python-policyd-spf/etc/python-policyd-spf/policyd-spf.conf: -------------------------------------------------------------------------------- 1 | # Amount of debugging information logged. 0 logs no debugging messages 2 | # 5 includes all debug messages. 3 | debugLevel = 1 4 | 5 | # HELO check rejection policy. Options are: 6 | # HELO_reject = SPF_Not_Pass (default) - Reject if result not Pass/None/Tempfail. 7 | # HELO_reject = Softfail - Reject if result Softfail and Fail 8 | # HELO_reject = Fail - Reject on HELO Fail 9 | # HELO_reject = Null - Only reject HELO Fail for Null sender (SPF Classic) 10 | # HELO_reject = False - Never reject/defer on HELO, append header only. 11 | # HELO_reject = No_Check - Never check HELO. 12 | HELO_reject = False 13 | 14 | # HELO pass restriction policy. 15 | # HELO_pass_restriction = helo_passed_spf - Apply the given restriction when 16 | # the HELO checking result is Pass. 
The given restriction must be an 17 | # action as defined for a Postfix SMTP server access table access(5). 18 | #HELO_pass_restriction 19 | 20 | # Mail From rejection policy. Options are: 21 | # Mail_From_reject = SPF_Not_Pass - Reject if result not Pass/None/Tempfail. 22 | # Mail_From_reject = Softfail - Reject if result Softfail and Fail 23 | # Mail_From_reject = Fail - Reject on Mail From Fail (default) 24 | # Mail_From_reject = False - Never reject/defer on Mail From, append header only 25 | # Mail_From_reject = No_Check - Never check Mail From/Return Path. 26 | Mail_From_reject = False 27 | 28 | # Reject only from domains that send no mail. Options are: 29 | # No_Mail = False - Normal SPF record processing (default) 30 | # No_Mail = True - Only reject for "v=spf1 -all" records 31 | 32 | # Mail From pass restriction policy. 33 | # Mail_From_pass_restriction = mfrom_passed_spf - Apply the given 34 | # restriction when the Mail From checking result is Pass. The given 35 | # restriction must be an action as defined for a Postfix SMTP server 36 | # access table access(5). 37 | #Mail_From_pass_restriction 38 | 39 | # Reject mail for Neutral/Softfail results for these domains. 40 | # Receiver policy option to reject mail from certain domains when SPF is not 41 | # Pass/None even if their SPF record does not produce a Fail result. This 42 | # option does not change the effect of PermError_reject or TempError_Defer. 43 | # Reject_Not_Pass_Domains = aol.com,hotmail.com 44 | 45 | # Policy for rejecting due to SPF PermError. Options are: 46 | # PermError_reject = True 47 | # PermError_reject = False 48 | PermError_reject = False 49 | 50 | # Policy for deferring messages due to SPF TempError. Options are: 51 | # TempError_Defer = True 52 | # TempError_Defer = False 53 | TempError_Defer = False 54 | 55 | # Prospective SPF checking - Check to see if mail sent from the defined IP 56 | # address would pass.
57 | # Prospective = 192.168.0.4 58 | 59 | # Do not check SPF for localhost addresses - add to skip_addresses to 60 | # skip SPF for internal networks if desired. Defaults are standard IPv4 and 61 | # IPv6 localhost addresses. 62 | skip_addresses = 127.0.0.0/8,::ffff:127.0.0.0/104,::1 63 | 64 | # Whitelist: CIDR Notation list of IP addresses not to check SPF for. 65 | # Example (default is no whitelist): 66 | # Whitelist = 192.168.0.0/31,192.168.1.12 67 | 68 | # Domain_Whitelist: List of domains whose sending IPs (defined by passing 69 | # their SPF check) should be whitelisted from SPF. 70 | # Example (default is no domain whitelist): 71 | # Domain_Whitelist = pobox.com,trustedforwarder.org 72 | 73 | # Domain_Whitelist_PTR: List of domains to whitelist against SPF checks based 74 | # on PTR match. 75 | # Example (default is no PTR whitelist): 76 | # Domain_Whitelist_PTR = yahoo.com 77 | 78 | # Type of header to insert to document SPF result. Can be Received-SPF (SPF) 79 | # or Authentication Results (AR). It cannot be both. 80 | # Examples: (default is Received-SPF): 81 | # Header_Type = AR 82 | # Header_Type = SPF 83 | 84 | # Every Authentication-Results header field has an authentication identifier 85 | # field ('Authserv_Id'). This is similar in syntax to a fully-qualified domain 86 | # name. See policyd-spf.conf.5 and RFC 7001 paragraph 2.4 for details. 87 | # Default is None. Authserv-Id must be provided if Header_Type 'AR' is used. 88 | # Authserv_Id = mx.example.com 89 | 90 | # RFC 7208 recommends an elapsed time limit for SPF checks of at least 20 91 | # seconds. Lookup_Time allows the maximum time (seconds) to be adjusted. 20 92 | # seconds is the default. 93 | # Lookup_Time = 20 94 | 95 | # RFC 7208 adds a new processing limit called "void lookup limit" (See section 96 | # 4.6.4). Default is 2, but it can be adjusted. Only relevant for pyspf 97 | # 2.0.9 and later. Ignored for earlier releases.
98 | # Void_Limit = 2 -------------------------------------------------------------------------------- /roles/prosodyxmpp/README.md: -------------------------------------------------------------------------------- 1 | # Complete XMPP server with updated support for modern XEPs — ideal for Conversations 2 | 3 | This Ansible role deploys a full Prosody XMPP server on your Fedora 25 4 | (or higher) server. It also deploys a number of modules to make the operation 5 | of the server battery-efficient and compatible with [excellent features of 6 | modern chat clients such as 7 | Conversations](https://github.com/siacs/Conversations/blob/master/README.md#xmpp-features), 8 | including: 9 | 10 | * [push support](https://github.com/siacs/Conversations#how-do-xep-0357-push-notifications-work) to greatly minimize idle battery use (via module `cloud_notify`); 11 | * [stream management](https://xmpp.org/extensions/xep-0198.html) to improve operation in flaky connectivity (via module `smacks`); 12 | * [message archive management](https://xmpp.org/extensions/xep-0313.html) to receive queued messages after network changes (via module `mam`); 13 | * [HTTP file upload](https://xmpp.org/extensions/xep-0363.html) to support (end-to-end encrypted) file transfer to (even offline) users (via module `http_upload`); 14 | * [message carbons](https://xmpp.org/extensions/xep-0280.html) to get messages across all your devices (via module `carbons`); 15 | * [client state indication](https://xmpp.org/extensions/xep-0352.html) for superior battery life (via module `csi`); 16 | * [simple blocking](https://xmpp.org/extensions/xep-0191.html) to make it easy to block annoying people (via module `blocking`); 17 | * and more!
18 | 19 | This is the [compliance tester](https://github.com/iNPUTmice/ComplianceTester#usage) 20 | result for a server configured with the defaults this role ships with, 21 | and whose DNS server is configured according to the recommendations in 22 | this guide: 23 | 24 | ``` 25 | Use compliance suite 'Conversations Compliance Suite' to test example.com 26 | 27 | Server is Prosody 0.10.0 28 | running XEP-0115: Entity Capabilities… PASSED 29 | running XEP-0163: Personal Eventing Protocol… PASSED 30 | running Roster Versioning… PASSED 31 | running XEP-0280: Message Carbons… PASSED 32 | running XEP-0191: Blocking Command… PASSED 33 | running XEP-0045: Multi-User Chat… PASSED 34 | running XEP-0198: Stream Management… PASSED 35 | running XEP-0313: Message Archive Management… PASSED 36 | running XEP-0352: Client State Indication… PASSED 37 | running XEP-0363: HTTP File Upload… PASSED 38 | running XEP-0065: SOCKS5 Bytestreams (Proxy)… PASSED 39 | running XEP-0357: Push Notifications… PASSED 40 | running XEP-0368: SRV records for XMPP over TLS… PASSED 41 | running XEP-0384: OMEMO Encryption… PASSED 42 | running XEP-0313: Message Archive Management (MUC)… FAILED 43 | passed 14/15 44 | ``` 45 | 46 | ## Configuration details 47 | 48 | See the file `defaults/main.yml` for more information on how to configure the 49 | role from your playbook. Note that you may have to set Ansible's 50 | `hash_behaviour` to `merge` so that dictionaries are merged. 51 | 52 | At the very minimum, you will have to: 53 | 54 | * define the `ssl` tree of settings for the domain you will run — the keys of 55 | the `ssl` tree define the domain name of your server; 56 | * add a few JIDs with their initial passwords under `xmpp.jids`. 57 | 58 | *Removing accounts:* setting a JID's password to `None` or the empty string causes 59 | the account to be deleted when the role is run.
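For instance, a minimal `xmpp.jids` override might look like this (the account names and passwords are placeholders):

```yaml
xmpp:
  jids:
    # Created with this initial password if the account does not exist yet.
    alice@example.com: some-initial-password
    # An empty string (or None) deletes the account if it exists.
    bob@example.com: ""
```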
60 | 61 | ## Conference support 62 | 63 | Setting the variable `xmpp.conference.enabled` to True enables the MUC 64 | (multi-user chat) feature. The variable `xmpp.conference.subdomain` 65 | controls the subdomain used to name the MUC endpoint (it defaults to `conference`). 66 | Each one of your virtual hosts (derived from the `ssl` variable) will gain 67 | a conference endpoint named after the subdomain. 68 | 69 | You need a DNS entry for the subdomain (`conference.example.com`) if you 70 | want people from other XMPP servers to join your conferences. It should probably 71 | be a `CNAME` DNS entry pointing to your XMPP server's `A` record. See below 72 | in the *DNS notes* section. 73 | 74 | The variable `xmpp.conference.creators` can be set to "everyone" if you would 75 | like anyone to create conference rooms, "admins" if you would like only the 76 | server administrators to create conference rooms, or "local" if you'd like only 77 | people with JIDs on your server to create conference rooms. 78 | 79 | ## External components 80 | 81 | You can list external components (gateways) and their credentials under the 82 | `xmpp.components` variable (see `defaults/main.yml` for a sample). 83 | A component will be added for each virtual host of your XMPP server. 84 | The JID of the component will be the concatenation of the key and the domain 85 | of the virtual host. The password of the component will be the value 86 | you listed for that key. 87 | 88 | Here is a sample: 89 | 90 | ```yaml 91 | ssl: 92 | ... 93 | example.com: 94 | ... 95 | xmpp: 96 | ... 97 | components: 98 | fbgateway: ARSNE2ki24n908rARTN2e34ni23 99 | # This would declare a component JID "fbgateway.example.com" 100 | # in the Prosody configuration. 101 | ... 102 | ``` 103 | 104 | Do not forget that you need a DNS entry for the component as well. This 105 | entry should probably be created similarly to the DNS entry for the 106 | conference service.
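Assuming your XMPP server's `A` record is named `xmpp` (as in the *DNS notes* section below), the zone-file addition for the `fbgateway` sample above could be as simple as:

```
fbgateway IN CNAME xmpp
```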
107 | 108 | Once you have listed your components and applied the configuration, you 109 | can set up the external components such as [Spectrum.im](http://spectrum.im). 110 | [This repository has a role](../spectrum2/) which will set up Spectrum for you 111 | as an external component on the same server as your Prosody XMPP server. 112 | 113 | ## Prerequisites 114 | 115 | ### Firewall notes 116 | 117 | You will need to open TCP ports on the machine that will act as the XMPP server: 118 | 119 | * 5000 120 | * 5222 121 | * 5223 122 | * 5269 123 | * 5281 124 | 125 | ### SELinux notes 126 | 127 | This role works out of the box with the SELinux policy as it ships in Fedora. 128 | 129 | Changes to any of the three default ports in variables `xmpp.ports.*` will 130 | likely cause Prosody to fail to listen on said ports. You will have to add a 131 | type enforcement exception for your desired port. 132 | 133 | Consult role `selinux-module` in this repo for more information on how to 134 | add exceptions for services. 135 | 136 | ### SSL notes 137 | 138 | See the instructions for the `mailserver` role in this repo to understand how 139 | to configure the SSL certificates for your XMPP server. 140 | 141 | The SSL certificate must be for the "virtual domain" of your server, not for 142 | the hostname or IP of the actual machine that will run the XMPP service. 143 | See https://prosody.im/doc/certificates for more information on that. 144 | 145 | ### DNS notes 146 | 147 | You will need to add DNS records for clients to find your server. 148 | 149 | Assuming your server (and the SSL certificate) are `example.com`, here is a 150 | good example of how the DNS records in the `example.com` zone file would look: 151 | 152 | ``` 153 | $ttl 38400 154 | @ IN SOA ns1.example.com. example.example.com. ( 155 | 2017020603 156 | 1800 157 | 1800 158 | 604800 159 | 1800 ) 160 | 161 | IN NS ns1.example.com. 162 | IN NS ns1.backupdns.com. 163 | 164 | IN A 1.2.3.4 165 | 166 | . 167 | . 168 | .
169 | 170 | xmpp IN A 1.2.3.4 171 | _xmpp-client._tcp IN SRV 5 5 5222 xmpp.example.com. 172 | _xmpp-server._tcp IN SRV 5 5 5269 xmpp.example.com. 173 | _xmpps-client._tcp IN SRV 0 5 5223 xmpp.example.com. 174 | conference IN CNAME xmpp 175 | . 176 | . 177 | . 178 | ``` 179 | 180 | As you can see, you have an `A` record for your XMPP server, followed 181 | by a number of `SRV` records pointing to your `A` record, then 182 | a `CNAME` record for the conference server. These records prioritize 183 | client connections over SSL rather than STARTTLS, for added privacy. 184 | 185 | For the purposes of SSL validation, clients will *not* treat your 186 | server as if its domain was the full `A` record `xmpp.example.com` 187 | — they will instead treat your server's domain as `example.com`. 188 | 189 | Of course, if you set any of the `xmpp.ports.*` variables to `False` 190 | (which disables the use of the port set to `False`), then remove 191 | the corresponding DNS record for that port. Correspondingly, if you change 192 | any of these ports, you should adjust your DNS configuration to match. 193 | 194 | Naturally, if you disable conference support, or change the subdomain 195 | used for the conference server, you must either remove the DNS entry 196 | for it, or alter it to match the subdomain. 197 | 198 | See https://prosody.im/doc/dns for more information on the matter. 199 | 200 | ## Usage 201 | 202 | Check that you have met the prerequisites. 203 | 204 | Create a playbook that includes this role. 205 | 206 | In that playbook, override the default variables available in 207 | `defaults/main.yml` with variables of your own that make sense for 208 | your use case. 209 | 210 | Then simply run your new playbook against your intended Fedora server.
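As a sketch, a minimal playbook might look like the following (the host group `xmppservers`, the domain, and all paths are placeholders; see `defaults/main.yml` for the full set of variables):

```yaml
- hosts: xmppservers
  become: true
  roles:
    - prosodyxmpp
  vars:
    ssl:
      example.com:
        key: /etc/pki/tls/private/example.com.key
        user_may_read: prosody
        certificate: /etc/pki/tls/certs/example_com.crt
        assembled: /etc/pki/tls/certs/assembled_example.com.crt
    xmpp:
      jids:
        alice@example.com: some-initial-password
```

Remember that, for these `vars` to merge with the role's defaults rather than replace them, Ansible's `hash_behaviour` setting must be `merge`.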
211 | -------------------------------------------------------------------------------- /roles/prosodyxmpp/TODO.md: -------------------------------------------------------------------------------- 1 | * Add support for MUC logging and configurability of the defaults 2 | https://modules.prosody.im/mod_mam_muc.html 3 | -------------------------------------------------------------------------------- /roles/prosodyxmpp/defaults/main.yml: -------------------------------------------------------------------------------- 1 | xmpp: 2 | # Use this variable to add JIDs (accounts) to your server. 3 | # Note that the password set here is only the initial password. 4 | # Example: 5 | # jids: 6 | # user@example.com: passwordinplaintext 7 | jids: {} 8 | ports: 9 | client_starttls: 5222 10 | client_ssl: 5223 11 | server: 5269 12 | # SSL configuration -- https://prosody.im/doc/advanced_ssl_config 13 | # ssl: 14 | # protocol: tlsv1_2 15 | # See README.md Conference support subheading for info on the following conference variables: 16 | conference: 17 | subdomain: conference 18 | creators: local 19 | enabled: True 20 | # The following key/value list causes external components to be 21 | # added to your VirtualHosts as . with password 22 | # . 23 | # components: 24 | # spectrum: spectrumpassword 25 | global_stanzas: 26 | http_max_content_size: 104857600 27 | http_upload_file_size_limit: 104857600 28 | # You can use this to change the components listen port, but watch 29 | # out for SELinux access denied listening on a different port. 30 | # components_listen_port: 5347 31 | modules_origin: 32 | source: https://hg.prosody.im/prosody-modules/ 33 | revision: 81956bb99289 34 | modules: 35 | # Generally required 36 | - roster # ; -- Allow users to have a roster. Recommended ;) 37 | - saslauth # ; -- Authentication for clients and servers. Recommended if you want to log in. 
38 | - tls # ; -- Add support for secure TLS on c2s/s2s connections 39 | - dialback # ; -- s2s dialback support 40 | - disco # ; -- Service discovery 41 | # Not essential, but recommended 42 | - private # ; -- Private XML storage (for room bookmarks, etc.) 43 | - vcard # ; -- Allow users to set vCards 44 | # Nice to have 45 | - version # ; -- Replies to server version requests 46 | - uptime # ; -- Report how long server has been running 47 | - time # ; -- Let others know the time here on this server 48 | - ping # ; -- Replies to XMPP pings with pongs 49 | - pep # ; -- Enables users to publish their mood, activity, playing music and more 50 | - register # ; -- Allow users to register on this server using a client and change passwords 51 | # compliance with Conversations as per https://github.com/siacs/Conversations 52 | - blocklist 53 | - privacy_lists 54 | - blocking 55 | - proxy65 56 | - smacks 57 | - smacks_offline 58 | - carbons 59 | # XEP-0313: Message Archive Management. 60 | - mam 61 | # XEP-0313: Message Archive Management (MUC). 62 | - mam_muc 63 | - csi 64 | - filter_chatstates 65 | - throttle_presence 66 | - http_upload 67 | - cloud_notify 68 | # omemo_all_access disables access control for all OMEMO PEP nodes 69 | # (=all nodes in the namespace of `eu.siacs.conversations.axolotl.*`), 70 | # giving everyone access to the OMEMO key material. 71 | # See https://hg.prosody.im/prosody-modules/file/tip/mod_omemo_all_access/README.markdown 72 | - omemo_all_access 73 | extra_modules: [] 74 | logging: 75 | # Logging to /var/log/prosody/prosody.log can be info or debug. 76 | prosody_log: info 77 | # Logging to syslog can be info or debug. 78 | syslog: info 79 | # Use the following variables to set up the certificates for your server. 80 | # An example follows: 81 | # ssl: 82 | # example.com: 83 | # key: /etc/pki/tls/private/example.com.key 84 | # user_may_read: prosody 85 | # intermediates: 86 | # # The order matters. 
At the bottom of the stack must be the one closest to the root of trust. 87 | # - /etc/pki/tls/certs/example_com.ca-bundle 88 | # certificate: /etc/pki/tls/certs/example_com.crt 89 | # assembled: /etc/pki/tls/certs/assembled_example.com.crt 90 | ssl: {} 91 | -------------------------------------------------------------------------------- /roles/prosodyxmpp/library/prosodyusers.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | 4 | # (c) 2017, Rudd-O. GPLv2+. 5 | 6 | import errno 7 | import glob 8 | import os 9 | import re 10 | import subprocess 11 | try: 12 | from urllib import unquote 13 | except ImportError: 14 | from urllib.parse import unquote 15 | 16 | 17 | DOCUMENTATION = """ 18 | --- 19 | module: prosodyusers 20 | author: Rudd-O 21 | short_description: manage Prosody users. 22 | description: 23 | - This module allows you to manage Prosody users on a server. 24 | options: 25 | jids: 26 | required: true 27 | """ 28 | 29 | EXAMPLES = "" 30 | 31 | 32 | def process(module, jids): 33 | changed = False 34 | rets = {"changed": False, "msg": []} 35 | for jid, password in jids.items(): 36 | accts = glob.glob("/var/lib/prosody/*/accounts/*.dat") 37 | accts = [unquote(s[len("/var/lib/prosody/"):-4]) for s in accts] 38 | accts = [(x.split(os.path.sep)[0], x.split(os.path.sep)[-1]) for x in accts] 39 | user, domain = jid.split("@", 1) 40 | exists = False 41 | for existing_domain, existing_user in accts: 42 | if existing_domain == domain and existing_user == user.lower(): 43 | exists = True 44 | if password not in (None, "None", "") and not exists: 45 | if not module.check_mode: 46 | cmd = ["prosodyctl", "adduser", jid] 47 | p = subprocess.Popen(cmd, 48 | stdin=subprocess.PIPE, 49 | stdout=subprocess.PIPE, 50 | stderr=subprocess.STDOUT) 51 | stdout, _ = p.communicate(("%s\n%s\n" % (password, password)).encode("utf-8")) 52 | ret = p.wait() 53 | if ret != 0: 54 | if stdout == b"That 
user already exists\n": 55 | cmd = ["prosodyctl", "passwd", jid] 56 | p = subprocess.Popen(cmd, 57 | stdin=subprocess.PIPE, 58 | stdout=subprocess.PIPE, 59 | stderr=subprocess.STDOUT) 60 | stdout, _ = p.communicate(("%s\n%s\n" % (password, password)).encode("utf-8")) 61 | ret = p.wait() 62 | if ret != 0: 63 | module.fail_json(rc=ret, msg="While running %s: %s" % (cmd, stdout)) 64 | else: 65 | rets['changed'] = True 66 | rets['msg'].append("changed password of account %s@%s" % (user, domain)) 67 | else: 68 | module.fail_json(rc=ret, msg="While running %s: %s" % (cmd, stdout)) 69 | else: 70 | rets['changed'] = True 71 | rets['msg'].append("created new account %s@%s" % (user, domain)) 72 | elif password in (None, "None", "") and exists: 73 | if not module.check_mode: 74 | cmd = ["prosodyctl", "deluser", jid] 75 | p = subprocess.Popen(cmd, 76 | stdin=subprocess.PIPE, 77 | stdout=subprocess.PIPE, 78 | stderr=subprocess.STDOUT) 79 | stdout, _ = p.communicate(("%s\n%s\n" % (password, password)).encode("utf-8")) 80 | ret = p.wait() 81 | if ret != 0: 82 | module.fail_json(rc=ret, msg="While running %s: %s" % (cmd, stdout)) 83 | rets['changed'] = True 84 | rets['msg'].append("deleted account %s@%s" % (user, domain)) 85 | if rets['msg']: 86 | rets['msg'] = ", ".join(rets['msg']) 87 | else: 88 | rets['msg'] = "Nothing to do" 89 | module.exit_json(**rets) 90 | 91 | 92 | def main(): 93 | module = AnsibleModule( 94 | argument_spec=dict( 95 | jids=dict(required=True, type="raw"), 96 | ), 97 | supports_check_mode=True 98 | ) 99 | 100 | params = module.params 101 | jids = params["jids"] 102 | process(module, jids) 103 | 104 | 105 | # import module snippets 106 | from ansible.module_utils.basic import * 107 | from ansible.module_utils.splitter import * 108 | 109 | 110 | main() 111 | -------------------------------------------------------------------------------- /roles/prosodyxmpp/tasks/main.yml: -------------------------------------------------------------------------------- 1 | -
name: install prosody 2 | package: name=prosody state=present 3 | tags: package 4 | 5 | - name: install prosody modules from package 6 | import_role: 7 | name: generic-rpm-install 8 | vars: 9 | packages: [prosody-modules] 10 | register_var: installprosodymodules 11 | when: xmpp.modules_origin == "package" 12 | tags: modules 13 | 14 | - name: install prosody modules from Mercurial 15 | include: modules-from-mercurial.yml 16 | when: xmpp.modules_origin != "package" 17 | tags: modules 18 | 19 | - import_role: 20 | name: deploy-ssl-certs 21 | tags: ssl 22 | 23 | # This task will change every time the modules on disk have changed... 24 | - name: scavenge module info 25 | shell: | 26 | {% for module in xmpp.modules + xmpp.extra_modules %} 27 | if test -d /usr/share/prosody-modules/mod_{{ module|quote }}/ ; then 28 | mod=/usr/share/prosody-modules/mod_{{ module|quote }}/ 29 | elif test -d /usr/lib64/prosody/modules/mod_{{ module|quote }}/ ; then 30 | mod=/usr/lib64/prosody/modules/mod_{{ module|quote }}/ 31 | else 32 | mod=/usr/lib64/prosody/modules/mod_{{ module|quote }}.lua 33 | fi 34 | ls -l --sort=time "$mod" || { 35 | echo Module mod_{{ module|quote }} does not exist in /usr/share/prosody-modules or /usr/lib64/prosody/modules >&2 36 | exit 48 37 | } 38 | {% endfor %} 39 | register: prosodymodulestat 40 | check_mode: no 41 | changed_when: False 42 | tags: 43 | - modules 44 | - config 45 | 46 | # ...and the side effect of the prior task changing, is that this task will change too. 
47 | - name: configure prosody modules and ports 48 | template: 49 | src: etc/prosody/{{ item }}.cfg.lua.j2 50 | dest: /etc/prosody/{{ item }}.cfg.lua 51 | owner: root 52 | group: prosody 53 | mode: 0640 54 | with_items: 55 | - modules 56 | - ports 57 | register: prosodyconfigrestarts 58 | tags: 59 | - modules 60 | - config 61 | 62 | - name: configure prosody 63 | template: 64 | src: etc/prosody/prosody.cfg.lua.j2 65 | dest: /etc/prosody/prosody.cfg.lua 66 | owner: root 67 | group: prosody 68 | mode: 0640 69 | register: prosodyconfig 70 | tags: config 71 | 72 | - name: enable and start prosody 73 | service: 74 | name: prosody 75 | state: '{% if 76 | prosodyconfigrestarts.changed 77 | or 78 | sslconf.changed 79 | %}restarted{% elif 80 | prosodyconfig.changed 81 | %}reloaded{% 82 | else 83 | %}started{% endif %}' 84 | enabled: yes 85 | tags: service 86 | 87 | - name: configure users 88 | prosodyusers: 89 | jids: '{{ xmpp.jids }}' 90 | register: prosodyusers 91 | tags: users 92 | -------------------------------------------------------------------------------- /roles/prosodyxmpp/tasks/modules-from-mercurial.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: install mercurial 4 | package: name=mercurial state=present 5 | 6 | - name: create prosody source modules directories 7 | file: 8 | name: '{{ item }}' 9 | state: directory 10 | mode: 0775 11 | owner: root 12 | group: nobody 13 | with_items: 14 | - /usr/src/prosody 15 | 16 | - name: clone modules directory 17 | hg: 18 | repo: '{{ xmpp.modules_origin.source }}' 19 | clone: yes 20 | dest: /usr/src/prosody/modules 21 | revision: '{{ xmpp.modules_origin.revision }}' 22 | become: true 23 | become_user: nobody 24 | register: modulesclone 25 | 26 | - name: create prosody config modules directory 27 | file: 28 | name: /usr/share/prosody-modules 29 | state: directory 30 | mode: 0755 31 | owner: root 32 | group: root 33 | 34 | - name: symlink obtained modules 35 | 
shell: | 36 | set -e 37 | for mod in $( ( cd /usr/src/prosody/modules ; ls -1d mod_* ; cd /usr/share/prosody-modules ; ls -1d mod_* ) | sort | uniq ) ; do 38 | if test -d /usr/src/prosody/modules/"$mod" && test -d /usr/share/prosody-modules/"$mod" ; then continue ; fi 39 | if test -d /usr/src/prosody/modules/"$mod" ; then 40 | ln -s /usr/src/prosody/modules/"$mod" /usr/share/prosody-modules/"$mod" 41 | echo "CHANGED: linked $mod" >&2 42 | elif ! test -d /usr/src/prosody/modules/"$mod" ; then 43 | rm -f /usr/share/prosody-modules/"$mod" 44 | echo "CHANGED: unlinked $mod" >&2 45 | fi 46 | done 47 | register: linkmodules 48 | changed_when: '"CHANGED" in linkmodules.stderr' 49 | -------------------------------------------------------------------------------- /roles/prosodyxmpp/templates/etc/prosody/modules.cfg.lua.j2: -------------------------------------------------------------------------------- 1 | -- This is the list of modules Prosody will load on startup. 2 | -- It looks for mod_modulename.lua in the plugins folder, so make sure that exists too. 3 | -- Documentation on modules can be found at: http://prosody.im/doc/modules 4 | modules_enabled = { 5 | 6 | -- These are commented by default as they have a performance impact 7 | --"privacy"; -- Support privacy lists 8 | --"compression"; -- Stream compression (Note: Requires installed lua-zlib RPM package) 9 | 10 | -- Admin interfaces 11 | --"admin_adhoc"; -- Allows administration via an XMPP client that supports ad-hoc commands 12 | --"admin_telnet"; -- Opens telnet console interface on localhost port 5582 13 | 14 | -- HTTP modules 15 | --"bosh"; -- Enable BOSH clients, aka "Jabber over HTTP" 16 | --"http_files"; -- Serve static files from a directory over HTTP 17 | 18 | -- Other specific functionality 19 | "posix"; -- POSIX functionality, sends server to background, enables syslog, etc. 
20 | --"groups"; -- Shared roster support 21 | --"announce"; -- Send announcement to all online users 22 | --"welcome"; -- Welcome users who register accounts 23 | --"watchregistrations"; -- Alert admins of registrations 24 | --"motd"; -- Send a message to users when they log in 25 | --"legacyauth"; -- Legacy authentication. Only used by some old clients and bots. 26 | 27 | -- Custom: compliance with Conversations as per https://github.com/siacs/Conversations 28 | {% for m in xmpp.modules + xmpp.extra_modules %} 29 | "{{ m.replace('"', '\\"') }}"; 30 | {% endfor %} 31 | }; 32 | 33 | -- module configuration and timestamps follow. 34 | {% for m in prosodymodulestat.stdout_lines %} 35 | -- {{ m }} 36 | {% endfor %} 37 | 38 | -- These modules are auto-loaded, but should you want 39 | -- to disable them then uncomment them here: 40 | modules_disabled = { 41 | -- "offline"; -- Store offline messages 42 | -- "c2s"; -- Handle client connections 43 | -- "s2s"; -- Handle server-to-server connections 44 | }; 45 | -------------------------------------------------------------------------------- /roles/prosodyxmpp/templates/etc/prosody/ports.cfg.lua.j2: -------------------------------------------------------------------------------- 1 | c2s_ports = { {% if xmpp.ports.client_starttls %}{{ xmpp.ports.client_starttls }}{% endif %} }; 2 | legacy_ssl_ports = { {% if xmpp.ports.client_ssl %}{{ xmpp.ports.client_ssl }}{% endif %} }; 3 | s2s_ports = { {% if xmpp.ports.server %}{{ xmpp.ports.server }}{% endif %} }; 4 | -------------------------------------------------------------------------------- /roles/prosodyxmpp/templates/etc/prosody/prosody.cfg.lua.j2: -------------------------------------------------------------------------------- 1 | -- Prosody Example Configuration File 2 | -- 3 | -- Information on configuring Prosody can be found on our 4 | -- website at http://prosody.im/doc/configure 5 | -- 6 | -- Tip: You can check that the syntax of this file is correct 7 | -- when you have 
finished by running: luac -p prosody.cfg.lua 8 | -- If there are any errors, it will let you know what and where 9 | -- they are, otherwise it will keep quiet. 10 | -- 11 | -- The only thing left to do is rename this file to remove the .dist ending, and fill in the 12 | -- blanks. Good luck, and happy Jabbering! 13 | 14 | 15 | ---------- Server-wide settings ---------- 16 | -- Settings in this section apply to the whole server and are the default settings 17 | -- for any virtual hosts 18 | 19 | -- This is a (by default, empty) list of accounts that are admins 20 | -- for the server. Note that you must create the accounts separately 21 | -- (see http://prosody.im/doc/creating_accounts for info) 22 | -- Example: admins = { "user1@example.com", "user2@example.net" } 23 | admins = { {% for jid in (xmpp|default({})).admin_jids|default([]) %}"{{ jid.replace('"', '\\"') }}"{% if not loop.last %}, {% endif %}{% endfor %} } 24 | 25 | -- Enable use of libevent for better performance under high load 26 | -- For more information see: http://prosody.im/doc/libevent 27 | --use_libevent = true; 28 | 29 | plugin_paths = { "/usr/share/prosody-modules" } 30 | 31 | Include "modules.cfg.lua" 32 | 33 | -- Disable account creation by default, for security 34 | -- For more information see http://prosody.im/doc/creating_accounts 35 | allow_registration = false; 36 | 37 | -- These are the SSL/TLS-related settings. If you don't want 38 | -- to use SSL/TLS, you may comment or remove this 39 | {% for host, data in sslconf.result.items() %} 40 | {% if loop.first %} 41 | ssl = { 42 | key = "{{ data.key.path }}"; 43 | certificate = "{{ data.assembled_certificate.path }}"; 44 | {% if xmpp.ssl.protocol|default(False) != False %}protocol = "{{ xmpp.ssl.protocol }}";{% endif %} 45 | } 46 | {% endif %} 47 | {% endfor %} 48 | 49 | -- Force clients to use encrypted connections? This option will 50 | -- prevent clients from authenticating unless they are using encryption. 
51 | 52 | c2s_require_encryption = true 53 | 54 | -- Force certificate authentication for server-to-server connections? 55 | -- This provides ideal security, but requires servers you communicate 56 | -- with to support encryption AND present valid, trusted certificates. 57 | -- NOTE: Your version of LuaSec must support certificate verification! 58 | -- For more information see http://prosody.im/doc/s2s#security 59 | 60 | s2s_secure_auth = false 61 | 62 | -- Many servers don't support encryption or have invalid or self-signed 63 | -- certificates. You can list domains here that will not be required to 64 | -- authenticate using certificates. They will be authenticated using DNS. 65 | 66 | s2s_insecure_domains = { "gmail.com", "icequake.net" } 67 | 68 | -- Even if you leave s2s_secure_auth disabled, you can still require valid 69 | -- certificates for some domains by specifying a list here. 70 | 71 | --s2s_secure_domains = { "jabber.org" } 72 | 73 | -- Select the authentication backend to use. The 'internal' providers 74 | -- use Prosody's configured data storage to store the authentication data. 75 | -- To allow Prosody to offer secure authentication mechanisms to clients, the 76 | -- default provider stores passwords in plaintext. If you do not trust your 77 | -- server please see http://prosody.im/doc/modules/mod_auth_internal_hashed 78 | -- for information about using the hashed backend. 79 | 80 | authentication = "internal_plain" 81 | 82 | -- Select the storage backend to use. By default Prosody uses flat files 83 | -- in its configured data directory, but it also supports more backends 84 | -- through modules. An "sql" backend is included by default, but requires 85 | -- additional dependencies. See http://prosody.im/doc/storage for more info. 
86 | 87 | --storage = "sql" -- Default is "internal" (Note: "sql" requires installed 88 | -- lua-dbi RPM package) 89 | 90 | -- For the "sql" backend, you can uncomment *one* of the below to configure: 91 | --sql = { driver = "SQLite3", database = "prosody.sqlite" } -- Default. 'database' is the filename. 92 | --sql = { driver = "MySQL", database = "prosody", username = "prosody", password = "secret", host = "localhost" } 93 | --sql = { driver = "PostgreSQL", database = "prosody", username = "prosody", password = "secret", host = "localhost" } 94 | 95 | -- Logging configuration 96 | -- For advanced logging see http://prosody.im/doc/logging 97 | log = { 98 | -- Log everything of level "info" and higher (that is, all except "debug" messages) 99 | -- to /var/log/prosody/prosody.log and errors also to /var/log/prosody/prosody.err 100 | {% if xmpp.logging.prosody_log == "debug" %}debug{% else %}info{% endif %} = "/var/log/prosody/prosody.log"; -- Change 'info' to 'debug' for verbose logging 101 | error = "/var/log/prosody/prosody.err"; -- Log errors also to file 102 | {% if xmpp.logging.syslog == "debug" %}debug{% else %}info{% endif %} = "*syslog"; -- Log errors also to syslog 103 | -- log = "*console"; -- Log to the console, useful for debugging with daemonize=false 104 | } 105 | 106 | -- POSIX configuration, see also http://prosody.im/doc/modules/mod_posix 107 | pidfile = "/run/prosody/prosody.pid"; 108 | --daemonize = false -- Default is "true" 109 | 110 | ------ Additional config files ------ 111 | -- For organizational purposes you may prefer to add VirtualHost and 112 | -- Component definitions in their own config files. 
This line includes 113 | -- all config files in /etc/prosody/conf.d/ 114 | 115 | -- Include "conf.d/*.cfg.lua" 116 | 117 | {% for k, v in xmpp.global_stanzas.items() %} 118 | {{ k }} = {{ v }} 119 | {% endfor %} 120 | 121 | Include "ports.cfg.lua" 122 | 123 | {% for host, data in sslconf.result.items() %} 124 | VirtualHost "{{ host.replace('"', '\\"') }}" 125 | enabled = true -- Set to false to disable this host 126 | 127 | -- Assign this host a certificate for TLS, otherwise it would use the one 128 | -- set in the global section (if any). 129 | -- Note that old-style SSL on port 5223 only supports one certificate, and will always 130 | -- use the global one. 131 | ssl = { 132 | key = "{{ data.key.path }}"; 133 | certificate = "{{ data.assembled_certificate.path }}"; 134 | {% if xmpp.ssl.protocol|default(False) != False %}protocol = "{{ xmpp.ssl.protocol }}";{% endif %} 135 | } 136 | 137 | {% if xmpp.conference.enabled %} 138 | Component "{{ xmpp.conference.subdomain.replace('"', '\\"') }}.{{ host.replace('"', '\\"') }}" "muc" 139 | {% endif %} 140 | 141 | {% for k, v in xmpp.components.items() %} 142 | Component "{{ k.replace('"', '\\"') }}.{{ host.replace('"', '\\"') }}" 143 | component_secret = "{{ v.replace('"', '\\"') }}" 144 | 145 | {% endfor %} 146 | 147 | {% endfor %} 148 | -------------------------------------------------------------------------------- /roles/qubeskde/README.md: -------------------------------------------------------------------------------- 1 | # Install KDE on your Qubes OS system 2 | 3 | This Ansible playbook installs KDE on your Qubes OS dom0, and performs 4 | a few fixes that are necessary for KDE to work properly under Qubes OS. 5 | 6 | ## Instructions 7 | 8 | Enable Ansible Qubes on your Qubes OS system, as per [the instructions found 9 | here](https://github.com/Rudd-O/ansible-qubes/blob/master/doc/Enhance%20your%20Ansible%20with%20Ansible%20Qubes.md). Ensure that the step *Allow `managevm` to 10 | manage `dom0`* is followed.
11 | 12 | Now, from your VM where Ansible is set up and ready to fire, add: 13 | 14 | ``` 15 | dom0 ansible_connection=qubes 16 | ``` 17 | 18 | to your Ansible `hosts` file. 19 | 20 | You are now ready to fire. Run the playbook `role-qubeskde.yml` — it will 21 | deploy KDE on your `dom0`. 22 | 23 | After this, simply log off. 24 | 25 | At the login screen, select *Plasma* using the top-right corner session 26 | selector. Then finish logging in. 27 | 28 | That's all! 29 | -------------------------------------------------------------------------------- /roles/qubeskde/defaults/main.yml: -------------------------------------------------------------------------------- 1 | packages: 2 | - plasma-workspace 3 | - plasma-systemsettings 4 | - plasma-breeze 5 | - plasma-breeze-qubes 6 | - kde-settings-plasma 7 | - plasma-desktop 8 | - kwin 9 | - kscreen 10 | - kde-cli-tools 11 | - adwaita-gtk2-theme 12 | - breeze-icon-theme 13 | - colord-kde 14 | - dolphin 15 | - kcm-gtk 16 | - kcm_systemd 17 | - kde-runtime 18 | - kde-settings-pulseaudio 19 | - kde-style-breeze 20 | - kdelibs 21 | - kdepasswd 22 | - kdeplasma-addons 23 | - kdialog 24 | - khelpcenter 25 | - khotkeys 26 | - kinfocenter 27 | - kmenuedit 28 | - konsole5 29 | - kscreen 30 | - ksnapshot 31 | - ksysguard 32 | - kwin 33 | - mesa-libEGL 34 | - phonon-backend-gstreamer 35 | - phonon-qt5-backend-gstreamer 36 | - plasma-breeze 37 | - plasma-desktop 38 | - plasma-desktop-doc 39 | - plasma-pa 40 | - plasma-user-manager 41 | - plasma-workspace 42 | - plasma-workspace-drkonqi 43 | - polkit-kde 44 | - qt5-qdbusviewer 45 | - qt5-qtbase-gui 46 | - qt5-qtdeclarative 47 | - qubes-kde-dom0 48 | - sddm 49 | - sddm-breeze 50 | - sddm-kcm 51 | - sni-qt 52 | - xorg-x11-drv-libinput 53 | - xsettings-kde 54 | -------------------------------------------------------------------------------- /roles/qubeskde/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - include_role: 2 | name: 
install-packages 3 | when: ansible_distribution == "Qubes" 4 | tags: 5 | - packages 6 | - name: fix PAM settings for kscreensaver so the screen can be unlocked 7 | shell: | 8 | cd /etc/pam.d 9 | test -f kscreensaver || { 10 | echo ALREADY 11 | exit 0 12 | } 13 | diff kscreensaver xscreensaver > /dev/null && { 14 | echo ALREADY 15 | } || { 16 | test -f kscreensaver.bak || cp -f kscreensaver kscreensaver.bak 17 | cat xscreensaver > kscreensaver 18 | } 19 | become: true 20 | register: kscreensaverfix 21 | changed_when: '"ALREADY" not in kscreensaverfix.stdout' 22 | when: ansible_distribution == "Qubes" 23 | - name: enable OTP usage 24 | ini_file_without_spaces: 25 | dest: /etc/sddm.conf 26 | section: XDisplay 27 | option: ServerArguments 28 | value: "-nolisten tcp -background none" 29 | become: true 30 | tags: 31 | - sddm 32 | - name: disable lightdm 33 | service: 34 | name: lightdm 35 | enabled: no 36 | tags: 37 | - sddm 38 | - name: enable sddm 39 | service: 40 | name: sddm 41 | enabled: yes 42 | tags: 43 | - sddm 44 | 45 | #- name: fix color palette 46 | # shell: test -d "$HOME"/.local/share/qubes-kde || { echo YES ; /usr/bin/qubes-generate-color-palette ; } 47 | # become: false 48 | # register: dom0palette 49 | # changed_when: '"YES" in dom0palette.stdout' 50 | # when: ansible_distribution == "Qubes" 51 | #- name: configure SDDM 52 | # 53 | -------------------------------------------------------------------------------- /roles/selinux-module/README.md: -------------------------------------------------------------------------------- 1 | # SELinux module configurator 2 | 3 | This Ansible role deploys a `.te` (type enforcement) SELinux policy module 4 | to a running server, and activates it. Alternatively, with `state` set 5 | to `disabled`, the specific module will be disabled. 6 | 7 | ## Variables 8 | 9 | - `policy_file`: path to a local policy file (`.te`) that will be 10 | deployed to the server. Files get deployed to 11 | `/etc/selinux/targeted/local`.
Alternatively, to do multiple 12 | files at once, use variable: 13 | - `policy_files`: a list of paths to local policy files. Mutually 14 | exclusive with `policy_file`. 15 | - `state`: either `enabled` or `disabled` for the respective 16 | action w.r.t. the module. 17 | 18 | The name of the module in the target system will be the same 19 | as the name of the file (specifically, its basename), without the 20 | `.te` extension. 21 | 22 | ## Generating and using a module 23 | 24 | To generate a module, you can use the `audit2allow` program. 25 | 26 | Then, deploy the generated `.te` file to your Ansible repository. 27 | 28 | Then, install it with your playbook as follows (example): 29 | 30 | - name: do selinux workaround for NginX to Icecast 31 | include_role: 32 | name: selinux-module 33 | vars: 34 | policy_file: files/myplaybook/my-policy-module.te 35 | state: enabled 36 | tags: 37 | - selinux 38 | 39 | ## Caution 40 | 41 | Ensure, prior to deploying a SELinux policy module, that another 42 | module with the same name is not already loaded in the target 43 | system. You can find out which modules are loaded by using the 44 | command `semodule -l` on the target system.
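The naming rule described above can be sketched as a short shell snippet; the path used here is a hypothetical example, not a file shipped with this role:

```shell
# The module name this role uses is the basename of the policy
# file, minus its .te extension.  "my-policy-module.te" is a
# hypothetical example path.
pfile="files/myplaybook/my-policy-module.te"
modname=$(basename "$pfile" .te)
echo "$modname"   # prints: my-policy-module
```

That resulting name is what to look for in the output of `semodule -l` before deploying, to avoid a collision with an already-loaded module.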
45 | -------------------------------------------------------------------------------- /roles/selinux-module/defaults/main.yml: -------------------------------------------------------------------------------- 1 | # policy_file: path/to/policyfile/file.te 2 | # # or 3 | # policy_files: 4 | # - path/to/policy/file1.te 5 | # - path/to/policy/file2.te 6 | # state: enabled 7 | -------------------------------------------------------------------------------- /roles/selinux-module/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: ensure the proper packages are available 4 | package: 5 | name: '{{ item }}' 6 | state: present 7 | with_items: 8 | - policycoreutils 9 | - policycoreutils-python-utils 10 | 11 | - name: disable modules from policy group 12 | shell: | 13 | {% for pfile in policy_files|default([policy_file]) %} 14 | modname=$( echo {{ pfile|basename|quote }} | sed 's/\.te$//' ) 15 | if semodule -l | grep -qFx "$modname" ; then 16 | {% if not ansible_check_mode %} 17 | semodule -r "$modname" 18 | rm -f /etc/selinux/targeted/local/"$modname".pp 19 | rm -f /etc/selinux/targeted/local/"$modname".mod 20 | rm -f /etc/selinux/targeted/local/"$modname".te 21 | {% endif %} 22 | echo DISABLED 23 | fi 24 | {% endfor %} 25 | when: state|default("enabled") == "disabled" 26 | register: disablemod 27 | changed_when: '"DISABLED" in disablemod.stdout' 28 | check_mode: no 29 | 30 | - name: create local policy directory 31 | file: 32 | state: directory 33 | dest: /etc/selinux/targeted/local 34 | mode: 0755 35 | owner: root 36 | group: root 37 | when: state|default("enabled") != "disabled" 38 | 39 | - name: copy policies 40 | template: 41 | src: '{{ item }}' 42 | dest: '/etc/selinux/targeted/local/{{ item|basename }}' 43 | mode: 0644 44 | owner: root 45 | group: root 46 | register: copy_policy_modules 47 | with_items: '{{ policy_files|default([policy_file]) }}' 48 | when: state|default("enabled") != "disabled" 49 | 50 | -
name: compile policies 51 | shell: | 52 | cd /etc/selinux/targeted/local/ 53 | {% for pfile in policy_files|default([policy_file]) %} 54 | a={{ pfile|basename|quote }} 55 | mod=$(echo $a | sed 's/\.te/.mod/') 56 | pp=$(echo $a | sed 's/\.te/.pp/') 57 | checkmodule -M -m -o "$mod" "$a" && semodule_package -o "$pp" -m "$mod" || { 58 | rm -f "$pp" "$mod" "$a" 59 | exit 4 60 | } 61 | {% endfor %} 62 | register: compile_policy_modules 63 | when: copy_policy_modules.changed and state|default("enabled") != "disabled" 64 | 65 | - name: load policies 66 | shell: | 67 | {% for pfile in policy_files|default([policy_file]) %} 68 | a={{ pfile|basename|quote }} 69 | pp=$(echo $a | sed 's/\.te/.pp/') 70 | semodule -i /etc/selinux/targeted/local/"$pp" || { exit 8 ; } 71 | {% endfor %} 72 | when: compile_policy_modules.changed and state|default("enabled") != "disabled" 73 | -------------------------------------------------------------------------------- /roles/spectrum2/README.md: -------------------------------------------------------------------------------- 1 | # Spectrum2 IM transport for interoperability between XMPP and other protocols. 2 | 3 | This Ansible role deploys a full [Spectrum2 IM](http://spectrum.im/) server on 4 | your Fedora 25 (or higher) server. Together with 5 | [the Prosody XMPP role](../prosodyxmpp/), it gives you interoperability 6 | between your XMPP server and many other non-XMPP protocols. 7 | 8 | ## Setup 9 | 10 | ### The basics 11 | 12 | This software works in concert with an XMPP server to provide gateway services 13 | to other IM and non-IM services: 14 | 15 | * Your XMPP server runs normally on a machine. 16 | * This role deploys the Spectrum2 frontend and backends. 17 | * The Spectrum2 frontend that gets deployed connects 18 | to your XMPP Server. 19 | * The Spectrum2 backends (services, see below) connect to 20 | the Spectrum2 frontend. 
21 | * The Spectrum2 frontend mediates the communications between its 22 | own backends and your XMPP server, presenting those backends as 23 | *services* (also known as *transports*) to your XMPP server. 24 | * You — the end user — gain the ability to use those services 25 | right from your XMPP client. 26 | 27 | ``` 28 | --------------- --------------- 29 | | Your client | ----> | XMPP server | 30 | --------------- --------------- 31 | | 32 | v 33 | --------------- 34 | | Spectrum2 | 35 | /------------ | frontend | 36 | | --------------- 37 | | | 38 | v v 39 | -------------- --------------- 40 | | Spectrum2 | | Spectrum2 | 41 | | backend | | backend | 42 | -------------- --------------- 43 | | | 44 | v v 45 | -------------- --------------- 46 | | Google | | Twitter | 47 | | Hangouts | | | 48 | -------------- --------------- 49 | ``` 50 | 51 | This role is in charge of setting up the Spectrum2 frontend and the Spectrum2 52 | backends of your choice. After that, you are in charge of registering your 53 | accounts with the running backends (see usage instructions below). 54 | 55 | ### Configuration details 56 | 57 | See the file `defaults/main.yml` for more information on how to configure the 58 | role from your playbook. Note that you may have to set Ansible's 59 | `hash_behaviour` configuration option to `merge` so that dictionaries are merged. 60 | 61 | At the very minimum, you will have to: 62 | 63 | * clone the `xmpp.services` subtree example, then 64 | * rename the sample service from `SAMPLE_xmpp` to the subdomain you want for 65 | the Spectrum service, then 66 | * set up the `component_jid` and `component_password` variables. 67 | * The component JID is the full JID of the component as declared in your 68 | XMPP server configuration. 69 | * The component password is the password that corresponds to the component 70 | declared in your XMPP server configuration. 71 | * These two values must have been created in your XMPP server configuration 72 | prior to launching Spectrum via this role.
73 | * Check out the [`prosodyxmpp` role](../prosodyxmpp/) for an easy way to set 74 | up a compliant XMPP server with support for external components such as 75 | Spectrum. 76 | 77 | *Removing services:* setting a service's value to `None`, 78 | `False`, or an empty string causes the service to be stopped and removed 79 | when the role is run. 80 | 81 | Once you're satisfied with the configuration, apply the role to 82 | your XMPP server. 83 | 84 | ### Setting up DNS for your gateway 85 | 86 | You need a DNS entry for the subdomain (`.example.com`) if you 87 | want to be able to log in to the Spectrum2 component. Given that this role 88 | is designed to run on the same machine as [the `prosodyxmpp` 89 | role](../prosodyxmpp/), the DNS record should probably be a `CNAME` pointing to your 90 | XMPP server's `A` record. See the `README.md` file for the `prosodyxmpp` 91 | role for more information. 92 | 93 | ## Using the transports 94 | 95 | Most clients don't support registering with transports, though they will 96 | operate normally once you've registered. Fortunately, there are some 97 | which do support registering with the transport. Let's see how. 98 | 99 | Using a compliant client like Gajim, log onto your XMPP account as usual. 100 | 101 | Then, open the *Discover services* dialog. You will see the transport listed 102 | there. 103 | 104 | Register with a transport now. It will prompt you for the credentials of 105 | the service you're planning to log onto. 106 | 107 | For example, if you're logging in to Google Hangouts, you'll want to specify 108 | your `@gmail.com` address as the XMPP JID, and your Google password. 109 | If you are using Google, ensure you have either activated 110 | *Allow less secure apps*, or created an application password specific 111 | to the transport. Also make sure to pay attention to your Gmail inbox, 112 | as Google may decide it doesn't accept the IP address from where the 113 | transport is logging on.
If that happens, tell Google it's fine, then 114 | log out of your XMPP server, and then log into it again in 15 minutes. 115 | You should receive a notification that your Google Hangouts contacts 116 | are being added to your XMPP account's roster. 117 | 118 | Repeat the same process for the other transports. 119 | 120 | That's it! 121 | -------------------------------------------------------------------------------- /roles/spectrum2/defaults/main.yml: -------------------------------------------------------------------------------- 1 | spectrum2: 2 | # Can also be: 3 | # software_origin: package 4 | # See README for more information. 5 | software_origin: 6 | source: https://github.com/hanzz/spectrum2/ 7 | revision: aaf3ead9f21a8e8d463b36f34a378fa7aaf43f76 8 | # Only used if software_origin != 'package'. 9 | build_dir: /usr/src/spectrum2 10 | # Only used if software_origin != 'package'. 11 | install_prefix: /usr/local 12 | user: spectrum2 13 | group: spectrum2 14 | log_level: warn 15 | services: 16 | # A service with False as its value causes the deletion of 17 | # the service. 18 | nonexistent: False 19 | # A service prefixed with SAMPLE_ will be ignored. 20 | # You should define your own services based on this 21 | # template. 22 | SAMPLE_xmpp: 23 | # Most of these options are documented at 24 | # http://spectrum.im/documentation/frontends/xmpp.html 25 | # The name of user/group Spectrum runs as. 26 | # JID that the Spectrum instance will log in to the XMPP server as. 27 | # Usually a subdomain of the XMPP server. 28 | component_jid: spectrum.example.com 29 | # Optional: a JID that can administrate the service. 30 | admin_jid: admin@example.com 31 | # Domain names of XMPP servers allowed to use this gateway. 32 | # Serves as whitelist when enable_public_registration is 0. 33 | # See the README.md file for how to set up DNS records. 34 | allowed_servers: 35 | - example.org 36 | # Password used to connect to the XMPP server.
37 | component_password: password 38 | # XMPP server to which Spectrum connects in gateway mode. 39 | # The default assumes Spectrum runs on the same host as 40 | # the XMPP server it connects to. 41 | xmpp_server_address: localhost 42 | # XMPP server port. This is the port your server is 43 | # configured to listen for XMPP components. 44 | xmpp_server_port: 5347 45 | # Interface on which Spectrum listens for backends. 46 | backend_listen_address: localhost 47 | # Port on which Spectrum listens for backends. 48 | # By default Spectrum chooses random backend port and there's 49 | # no need to change it normally. But we want to support SELinux 50 | # so we will need to fix the port. 51 | backend_listen_port: 10001 52 | # Number of users per one legacy network backend. 53 | # With Skype it's always 1. 54 | users_per_backend: 10 55 | backend: 56 | # Component of the path to backend binary. 57 | # Will resolve to /usr/bin/spectrum2_<< name >>_backend 58 | name: libpurple 59 | # name: libcommuni 60 | libpurple: 61 | # For /usr/bin/spectrum2_libpurple_backend 62 | # This is the protocol ID that libpurple will configure. 63 | protocol_id: prpl-jabber 64 | # protocol_id: prpl-msn 65 | # protocol_id: prpl-icq 66 | # skypeweb plugin 67 | # protocol_id: prpl-skypeweb 68 | # facebook plugin 69 | # protocol: prpl-facebook 70 | identity: 71 | # Name of Spectrum instance in service discovery. 72 | name: Spectrum XMPP transport 73 | # Type of transport ("msn", "icq", "xmpp"). 74 | # Check http://xmpp.org/registrar/disco-categories.html#gateway 75 | type: xmpp 76 | database: 77 | # Database backend type. 78 | # "sqlite3", "mysql", "pqxx", or "none" without database backend. 79 | type: sqlite3 80 | # SQLite3 defaults to %{_localstatedir}/spectrum2/$jid/database.sql 81 | # You can specify a different path (writable to Spectrum IM) 82 | # if you so choose. 83 | # Other databases require the name. 84 | # name: /var/lib/spectrum2/$jid/database.sql 85 | # Database server. 
Unneeded for SQLite3. 86 | # server: localhost 87 | # Database server port. Unneeded for SQLite3. 88 | # port: 0 89 | # Database server user. Unneeded for SQLite3. 90 | # user: user 91 | # Database server password. Unneeded for SQLite3. 92 | # password: password 93 | # Prefix used for database tables. Useful for shared databases. 94 | # tables_prefix: spectrum_ 95 | # Database connection string. Only for pqxx. Overrides 96 | # database server, user, password. 97 | # connectionstring: host=localhost user=spectrum password=secret 98 | registration: 99 | # Enable public registration. See option service.allowed_servers here. 100 | enable_public_registration: 0 101 | # The following suboptions set text to display on the user registration form. 102 | form_display: 103 | username_label: 'XMPP JID (e.g. user@server.tld):' 104 | instructions: 'Enter your remote XMPP JID and password as well as your local username and password' 105 | local_username_label: 'Local username (without @server.tld):' 106 | # If true, a local Jabber account is needed 107 | # for transport registration; the idea is to enable public registration 108 | # from other servers, but only for users who already have local accounts. 109 | # require_local_account: 0 110 | # local_account_server: localhost 111 | # local_account_server_timeout: 10000 112 | -------------------------------------------------------------------------------- /roles/spectrum2/files/patches/000-libevent.diff: -------------------------------------------------------------------------------- 1 | diff -up backends/libpurple/geventloop.cpp backends/libpurple/geventloop.cpp 2 | --- backends/libpurple/geventloop.cpp 2017-05-14 04:33:57.316887473 +0000 3 | +++ backends/libpurple/geventloop.cpp 2017-05-14 04:34:16.575887473 +0000 4 | @@ -25,7 +25,7 @@ 5 | #undef write 6 | #endif 7 | #ifdef WITH_LIBEVENT 8 | -#include "event.h" 9 | +#include <event.h> 10 | #endif 11 | 12 | #include "purple_defs.h" 13 |
-------------------------------------------------------------------------------- /roles/spectrum2/files/patches/001-logging.diff: -------------------------------------------------------------------------------- 1 | diff -up ./spectrum_manager/src/managerconfig.cpp ./spectrum_manager/src/managerconfig.cpp 2 | --- ./spectrum_manager/src/managerconfig.cpp 2017-05-14 10:30:44.172000000 +0000 3 | +++ ./spectrum_manager/src/managerconfig.cpp 2017-05-14 10:31:16.039000000 +0000 4 | @@ -44,7 +44,7 @@ bool ManagerConfig::load(const std::stri 5 | ("database.password", value()->default_value(""), "Database Password.") 6 | ("database.port", value()->default_value(0), "Database port.") 7 | ("database.prefix", value()->default_value(""), "Prefix of tables in database") 8 | - ("logging.config", value()->default_value("/etc/spectrum2/manager_logging.cfg"), "Logging configuration file") 9 | + ("logging.config", value()->default_value("/etc/spectrum2/manager-logging.cfg"), "Logging configuration file") 10 | ; 11 | 12 | store(parse_config_file(ifs, opts), m_variables); 13 | diff -up ./spectrum/src/backend-logging.cfg ./spectrum/src/backend-logging.cfg 14 | --- ./spectrum/src/backend-logging.cfg 2017-05-14 10:32:36.566000000 +0000 15 | +++ ./spectrum/src/backend-logging.cfg 2017-05-14 10:33:12.054000000 +0000 16 | @@ -1,4 +1,4 @@ 17 | -log4j.rootLogger=debug, R 18 | +log4j.rootLogger=warn, R 19 | 20 | log4j.appender.R=org.apache.log4j.RollingFileAppender 21 | log4j.appender.R.File=/var/log/spectrum2/${jid}/backends/backend-${id}.log 22 | diff -up ./spectrum/src/logging.cfg ./spectrum/src/logging.cfg 23 | --- ./spectrum/src/logging.cfg 2017-05-14 10:32:47.524000000 +0000 24 | +++ ./spectrum/src/logging.cfg 2017-05-14 10:33:12.055000000 +0000 25 | @@ -1,4 +1,4 @@ 26 | -log4j.rootLogger=debug, R 27 | +log4j.rootLogger=warn, R 28 | 29 | log4j.appender.R=org.apache.log4j.RollingFileAppender 30 | log4j.appender.R.File=/var/log/spectrum2/${jid}/spectrum2.log 31 | diff -up 
./spectrum/src/manager-logging.cfg ./spectrum/src/manager-logging.cfg 32 | --- ./spectrum/src/manager-logging.cfg 2017-05-14 10:32:41.750000000 +0000 33 | +++ ./spectrum/src/manager-logging.cfg 2017-05-14 10:33:46.084000000 +0000 34 | @@ -1,4 +1,4 @@ 35 | -log4j.rootLogger=debug, R 36 | +log4j.rootLogger=warn, R 37 | 38 | log4j.appender.R=org.apache.log4j.RollingFileAppender 39 | log4j.appender.R.File=/var/log/spectrum2/spectrum_manager.log 40 | -------------------------------------------------------------------------------- /roles/spectrum2/tasks/install-from-git.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: install build dependencies 4 | package: name={{ item }} state=present 5 | with_items: 6 | - git 7 | - libidn-devel 8 | - swiften-devel 9 | - libpurple-devel 10 | - protobuf-devel 11 | - libcommuni-devel 12 | - boost-devel 13 | - libpqxx-devel 14 | - popt-devel 15 | - libevent-devel 16 | - mariadb-devel 17 | - log4cxx-devel 18 | - openssl-devel 19 | - sqlite-devel 20 | - libcurl-devel 21 | - qt-devel 22 | - avahi-devel 23 | - cmake 24 | - rsync 25 | tags: deps 26 | 27 | - name: create spectrum2 build directory 28 | file: 29 | name: '{{ item }}' 30 | state: directory 31 | mode: 0775 32 | owner: root 33 | group: nobody 34 | with_items: 35 | - '{{ spectrum2.build_dir }}' 36 | 37 | - block: 38 | - name: clone spectrum2 project 39 | git: 40 | repo: '{{ spectrum2.software_origin.source }}' 41 | dest: '{{ spectrum2.build_dir }}/git' 42 | version: '{{ spectrum2.software_origin.revision }}' 43 | register: gitclone 44 | 45 | - name: copy patches 46 | copy: 47 | src: patches/ 48 | dest: '{{ spectrum2.build_dir }}/patches/' 49 | register: syncpatches 50 | 51 | - name: clear build flag 52 | shell: rm -f {{ spectrum2.build_dir|quote }}/.built 53 | when: gitclone.changed or syncpatches.changed 54 | 55 | - name: get build flag 56 | shell: | 57 | x=$(cat {{ spectrum2.build_dir|quote }}/.built) 58 | y={{
spectrum2.install_prefix|quote }} 59 | if [ "$x" == "$y" ] ; then echo YES ; else echo NO ; fi 60 | register: getbuildflag 61 | check_mode: no 62 | changed_when: no 63 | 64 | - name: copy source to work directory 65 | shell: rsync -a --delete {{ spectrum2.build_dir|quote }}/git/ {{ spectrum2.build_dir|quote }}/workdir/ 66 | register: rsyncsource 67 | when: '"NO" in getbuildflag.stdout' 68 | 69 | - name: apply patches 70 | shell: | 71 | set -e 72 | cd {{ spectrum2.build_dir|quote }}/workdir 73 | for patch in ../patches/*.diff ; do 74 | patch -p0 < "$patch" 75 | done 76 | register: patch 77 | when: '"NO" in getbuildflag.stdout' 78 | 79 | - name: configure and build 80 | shell: | 81 | set -e 82 | mkdir -p {{ spectrum2.build_dir|quote }}/build 83 | cd {{ spectrum2.build_dir|quote }}/build 84 | cmake ../workdir -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX='{{ spectrum2.install_prefix }}' 85 | make -j3 86 | echo {{ spectrum2.install_prefix|quote }} > {{ spectrum2.build_dir|quote }}/.built 87 | register: configbuild 88 | when: '"NO" in getbuildflag.stdout' 89 | tags: 90 | - build 91 | become: true 92 | become_user: nobody 93 | 94 | - name: install 95 | shell: | 96 | set -e 97 | cd {{ spectrum2.build_dir|quote }}/build 98 | tmpdir=`mktemp -d` 99 | cleanup() { rm -rf "$tmpdir" ; } 100 | trap cleanup EXIT 101 | make install DESTDIR="$tmpdir" 102 | getent group {{ spectrum2.group|quote }} >/dev/null || groupadd -r {{ spectrum2.group|quote }} 103 | getent passwd {{ spectrum2.user|quote }} >/dev/null || \ 104 | useradd -r -g {{ spectrum2.user|quote }} -d /var/lib/spectrum2 \ 105 | -s /sbin/nologin \ 106 | -c "spectrum XMPP transport" {{ spectrum2.user|quote }} 107 | rsync -a --ignore-existing "$tmpdir"/etc/spectrum2/ /etc/spectrum2/ 108 | chown -R root:{{ spectrum2.group|quote }} /etc/spectrum2/*.cfg* /etc/spectrum2/transports/*.cfg* 109 | mkdir -p {{ spectrum2.install_prefix|quote }} 110 | chmod 640 /etc/spectrum2/*.cfg* /etc/spectrum2/transports/*.cfg* 111 | for x in bin
include lib ; do 112 | x2="$x" 113 | arch=`arch` 114 | if [ "$x" == "lib" -a "$arch" == "x86_64" ] ; then 115 | x2=lib64 116 | fi 117 | prefix={{ spectrum2.install_prefix|quote }} 118 | rsync -a "$tmpdir"{{ spectrum2.install_prefix }}/$x/ {{ spectrum2.install_prefix }}/$x2/ 119 | if [ "$x" == "lib" -a "$prefix" != "/usr" ] ; then 120 | echo {{ spectrum2.install_prefix }}/$x2 > /etc/ld.so.conf.d/spectrum2.conf 121 | ldconfig 122 | fi 123 | done 124 | rsync -a "$tmpdir"/var/lib/spectrum2_manager/ /var/lib/spectrum2_manager/ 125 | dirs="/var/log/spectrum2 /var/run/spectrum2 /var/lib/spectrum2" 126 | mkdir -p $dirs 127 | chmod 770 $dirs /var/lib/spectrum2_manager/ 128 | chown {{ spectrum2.user|quote }}:{{ spectrum2.group|quote }} $dirs /var/lib/spectrum2_manager 129 | register: install 130 | tags: 131 | - install 132 | -------------------------------------------------------------------------------- /roles/spectrum2/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: install Spectrum2 IM 2 | import_role: 3 | name: generic-rpm-install 4 | vars: 5 | project: spectrum2 6 | package: spectrum2 7 | when: spectrum2.software_origin == "package" 8 | tags: 9 | - package 10 | 11 | - name: install Spectrum2 from Git source 12 | include: install-from-git.yml 13 | when: spectrum2.software_origin != "package" 14 | tags: 15 | - source 16 | 17 | - name: configure Spectrum2 configuration file 18 | template: 19 | src: etc/spectrum2/transports/spectrum.cfg 20 | dest: /etc/spectrum2/transports/{{ item.key }}.cfg 21 | owner: root 22 | group: '{{ spectrum2.group }}' 23 | mode: 0640 24 | with_dict: '{{ spectrum2.services }}' 25 | vars: 26 | service: '{{ item.value }}' 27 | when: service and not item.key.startswith("SAMPLE_") 28 | register: addservice 29 | tags: 30 | - config 31 | 32 | - name: delete Spectrum2 configuration file 33 | file: 34 | name: /etc/spectrum2/transports/{{ item.key }}.cfg 35 | state: absent 36 | with_dict: '{{
spectrum2.services }}' 37 | vars: 38 | service: '{{ item.value }}' 39 | when: not service 40 | register: deleteservice 41 | tags: 42 | - config 43 | 44 | - name: configure Spectrum2 logging 45 | template: 46 | src: etc/spectrum2/{{ item }}.cfg 47 | dest: /etc/spectrum2/{{ item }}.cfg 48 | owner: root 49 | group: '{{ spectrum2.group }}' 50 | mode: 0640 51 | with_items: 52 | - logging 53 | - backend-logging 54 | register: changelogs 55 | tags: 56 | - config 57 | 58 | - name: enable and start Spectrum2 59 | service: 60 | name: spectrum2 61 | state: '{% if 62 | addservice.changed 63 | or 64 | changelogs.changed 65 | or 66 | deleteservice.changed 67 | %}restarted{% 68 | else 69 | %}started{% endif %}' 70 | enabled: yes 71 | tags: service 72 | -------------------------------------------------------------------------------- /roles/spectrum2/templates/etc/spectrum2/backend-logging.cfg: -------------------------------------------------------------------------------- 1 | log4j.rootLogger={{ spectrum2.log_level }}, R 2 | 3 | log4j.appender.R=org.apache.log4j.RollingFileAppender 4 | log4j.appender.R.File=/var/log/spectrum2/${jid}/backends/backend-${id}.log 5 | 6 | log4j.appender.R.MaxFileSize=10000KB 7 | # Keep one backup file 8 | log4j.appender.R.MaxBackupIndex=1 9 | 10 | log4j.appender.R.layout=org.apache.log4j.PatternLayout 11 | log4j.appender.R.layout.ConversionPattern=${pid}: %d %-5p %c: %m%n 12 | -------------------------------------------------------------------------------- /roles/spectrum2/templates/etc/spectrum2/logging.cfg: -------------------------------------------------------------------------------- 1 | log4j.rootLogger={{ spectrum2.log_level }}, R 2 | 3 | log4j.appender.R=org.apache.log4j.RollingFileAppender 4 | log4j.appender.R.File=/var/log/spectrum2/${jid}/spectrum2.log 5 | 6 | log4j.appender.R.MaxFileSize=10000KB 7 | # Keep one backup file 8 | log4j.appender.R.MaxBackupIndex=1 9 | 10 | log4j.appender.R.layout=org.apache.log4j.PatternLayout 11 | 
log4j.appender.R.layout.ConversionPattern=%d %-5p %c: %m%n 12 | 13 | # Disable XML category 14 | log4j.category.Component.XML = OFF 15 | -------------------------------------------------------------------------------- /roles/spectrum2/templates/etc/spectrum2/transports/spectrum.cfg: -------------------------------------------------------------------------------- 1 | [service] 2 | user={{ spectrum2.user }} 3 | group={{ spectrum2.group }} 4 | jid={{ service.component_jid }} 5 | {% if service.admin_jid is defined %} 6 | admin_jid={{ service.admin_jid }} 7 | {% endif %} 8 | password={{ service.component_password }} 9 | 10 | server_mode=0 11 | server={{ service.xmpp_server_address }} 12 | port = {{ service.xmpp_server_port }} 13 | backend_host = {{ service.backend_listen_address }} 14 | backend_port={{ service.backend_listen_port }} 15 | 16 | {% for server in service.allowed_servers %} 17 | allowed_servers={{ server }} 18 | {% endfor %} 19 | 20 | {% if "skype" in service.backend %} 21 | users_per_backend=1 22 | {% else %} 23 | users_per_backend={{ service.users_per_backend }} 24 | {% endif %} 25 | 26 | backend=/usr/bin/spectrum2_{{ service.backend.name }}_backend 27 | # Libpurple protocol-id for spectrum_libpurple_backend 28 | protocol={{ service.backend.libpurple.protocol_id }} 29 | 30 | # protocol=any means that user sets his protocol in his JID which has to be 31 | # in following format: protocol.username@domain.tld 32 | # So for example: prpl-jabber.hanzz.k%gmail.com@domain.tld 33 | #protocol=any 34 | 35 | [identity] 36 | name={{ service.identity.name }} 37 | type={{ service.identity.type }} 38 | category=gateway 39 | 40 | [logging] 41 | config = /etc/spectrum2/logging.cfg 42 | backend_config = /etc/spectrum2/backend-logging.cfg 43 | 44 | [database] 45 | type = {{ service.database.type }} 46 | database = {{ service.database.name|default("/var/lib/spectrum2/$jid/database.sql") }} 47 | 48 | {% if service.database.server is defined %} 49 | server={{ 
service.database.server }} 50 | {% endif %} 51 | 52 | {% if service.database.port is defined %} 53 | port={{ service.database.port }} 54 | {% endif %} 55 | 56 | {% if service.database.user is defined %} 57 | user={{ service.database.user }} 58 | {% endif %} 59 | 60 | {% if service.database.password is defined %} 61 | password={{ service.database.password }} 62 | {% endif %} 63 | 64 | {% if service.database.tables_prefix is defined %} 65 | prefix={{ service.database.tables_prefix }} 66 | {% endif %} 67 | 68 | {% if service.database.connection_string is defined %} 69 | connectionstring={{ service.database.connection_string }} 70 | {% endif %} 71 | 72 | [registration] 73 | enable_public_registration={{ service.registration.enable_public_registration }} 74 | username_label={{ service.registration.form_display.username_label }} 75 | instructions={{ service.registration.form_display.instructions }} 76 | 77 | {% if service.registration.require_local_account is defined %} 78 | require_local_account={{ service.registration.require_local_account }} 79 | {% endif %} 80 | {% if service.registration.form_display.local_username_label is defined %} 81 | local_username_label={{ service.registration.form_display.local_username_label }} 82 | {% endif %} 83 | {% if service.registration.local_account_server is defined %} 84 | local_account_server={{ service.registration.local_account_server }} 85 | {% endif %} 86 | {% if service.registration.local_account_server_timeout is defined %} 87 | local_account_server_timeout={{ service.registration.local_account_server_timeout }} 88 | {% endif %} 89 | -------------------------------------------------------------------------------- /roles/updates/handlers/main.yml: -------------------------------------------------------------------------------- 1 | - include: testboot.yml 2 | - include: recalc-services.yml 3 | -------------------------------------------------------------------------------- /roles/updates/handlers/recalc-services.yml: 
-------------------------------------------------------------------------------- 1 | ../../../handlers/recalc-services.yml -------------------------------------------------------------------------------- /roles/updates/handlers/testboot.yml: -------------------------------------------------------------------------------- 1 | ../../../handlers/testboot.yml -------------------------------------------------------------------------------- /roles/updates/tasks/grub2-regen.yml: -------------------------------------------------------------------------------- 1 | ../../../tasks/grub2-regen.yml -------------------------------------------------------------------------------- /roles/updates/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - include: testboot.yml 3 | tags: 4 | - zfs 5 | 6 | - name: test for ZFS 7 | shell: which zfs && echo YES || echo NO 8 | changed_when: false 9 | register: zfstest 10 | check_mode: no 11 | tags: 12 | - zfs 13 | 14 | - import_role: 15 | name: zfsupdates 16 | vars: 17 | deployzfs_stage: one 18 | when: '"YES" in zfstest.stdout' 19 | 20 | - name: update yum packages for non-qubes dom0s 21 | package: state=latest name=* enablerepo='qubes-vm-*-security-testing' 22 | when: ansible_distribution in "Fedora" 23 | register: do_updates_dnf 24 | notify: 25 | - recalc services that need to be restarted 26 | 27 | - name: update apt packages 28 | apt: upgrade=yes 29 | when: ansible_distribution in ("Debian", "Ubuntu") 30 | register: do_updates_apt 31 | notify: 32 | - recalc services that need to be restarted 33 | 34 | - name: update yum packages for qubes dom0s 35 | shell: qubes-dom0-update -y --enablerepo=qubes-dom0-security-testing 36 | when: ansible_distribution in "Qubes" 37 | register: do_updates_dom0 38 | failed_when: do_updates_dom0.rc != 0 39 | changed_when: '"Running transaction" in do_updates_dom0.stdout' 40 | notify: 41 | - recalc services that need to be restarted 42 | 43 | - name: redo 
akmods 44 | shell: if test -f /usr/sbin/akmods ; then akmods --force ; fi 45 | when: '"kmod-nvidia" in "".join((do_updates_dnf.results|default([]) + do_updates_dom0.stdout_lines|default([])))' 46 | register: redo_akmods 47 | 48 | - import_role: 49 | name: zfsupdates 50 | vars: 51 | deployzfs_stage: two 52 | when: '"YES" in zfstest.stdout' 53 | 54 | - name: is this Qubes template VM 55 | shell: test -f /run/qubes/this-is-templatevm && echo YES || echo NO 56 | changed_when: False 57 | check_mode: no 58 | register: templatevm 59 | 60 | - name: clean downloaded yum packages 61 | shell: test -f /usr/bin/dnf && { dnf clean packages ; } || { yum clean packages ; } 62 | when: ansible_distribution in "Fedora" and "YES" in templatevm.stdout and (do_updates_dnf.changed|default(False)) 63 | register: clean_updates 64 | changed_when: "'0 packages removed' not in clean_updates.stdout" 65 | 66 | - set_fact: 67 | update_results: '{{ ((do_updates_apt|default({"results": []})).results|default([]) 68 | + (do_updates_dnf|default({"results": []})).results|default([]) 69 | + (do_updates_dom0|default({"stdout_lines": []})).stdout_lines|default([]))|join("\n") }}' 70 | grub_set_default_zero: yes 71 | tags: 72 | - grub 73 | 74 | - block: 75 | - include: grub2-regen.yml 76 | when: ansible_distribution in "Fedora Qubes Debian Ubuntu" and ("linux-" in update_results or "kernel" in update_results or "xen" in update_results) 77 | tags: 78 | - grub 79 | -------------------------------------------------------------------------------- /roles/updates/tasks/testboot.yml: -------------------------------------------------------------------------------- 1 | ../../../tasks/testboot.yml -------------------------------------------------------------------------------- /roles/zfsupdates/README.md: -------------------------------------------------------------------------------- 1 | # Maintain your ZFS updated 2 | 3 | This Ansible role installs ZFS on your Fedora system, and makes sure that 4 | the ZFS 
packages on your system are always up-to-date, down to the contents 5 | of the initial RAM disks that are necessary to boot from a ZFS root file 6 | system. It takes care of a few steps that a normal DNF update does not: 7 | 8 | * Setting up DNF to keep old kernel-devel packages installed. 9 | * Wiping old DKMS-built modules and refreshing them with the new 10 | ZFS modules shipped with the ZFS updates. 11 | * Regenerating the initial RAM disks so they will contain ZFS and your 12 | system will boot reliably from a ZFS root. 13 | 14 | The updated packages are fetched from the official 15 | [ZFS on Linux repo](https://github.com/zfsonlinux/zfs/wiki/Fedora). 16 | 17 | This role is ideal for keeping systems built by 18 | [zfs-fedora-installer](https://github.com/Rudd-O/zfs-fedora-installer) 19 | up-to-date, and I myself use it for that. 20 | 21 | ## Instructions 22 | 23 | Preparation: 24 | 25 | 1. [Build and install the `grub-zfs-fixer` package on your target Fedora system](https://github.com/Rudd-O/zfs-fedora-installer/tree/master/grub-zfs-fixer). You can skip this step if your system was deployed using the `zfs-fedora-installer` system, as that system already performs this step. 26 | 27 | Usage: 28 | 29 | Every time you want to update ZFS on your target system, run this role against it. 30 | 31 | If you have another playbook that performs system updates, make sure to include this role as follows: 32 | 33 | ``` 34 | ... 35 | ... 36 | 37 | - include_role: 38 | name: zfsupdates 39 | vars: 40 | deployzfs_stage: one 41 | 42 | - name: task that updates your system 43 | package: name=* state=latest 44 | 45 | - include_role: 46 | name: zfsupdates 47 | vars: 48 | deployzfs_stage: two 49 | 50 | ... 51 | ... 52 | ``` 53 | 54 | That's all!
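After an update run, it can be worth spot-checking by hand that the regenerated initial RAM disks really do contain the ZFS module (this mirrors the sanity check the role itself performs). A minimal sketch, assuming dracut's `lsinitrd` is installed and images follow the usual `/boot/initramfs-<version>.img` naming; the function name is illustrative and not part of the role:

```shell
# initramfs_has_zfs: reads `lsinitrd <image>` output on stdin and succeeds
# only if a ZFS kernel module appears in the listing.
initramfs_has_zfs() {
    grep -q 'zfs\.ko'
}

# On a real system (as root), check the running kernel's initramfs:
#   lsinitrd "/boot/initramfs-$(uname -r).img" | initramfs_has_zfs \
#       && echo "ZFS present" || echo "ZFS missing"
```

If the check fails after an upgrade, rerunning the role regenerates the initramfs.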
55 | -------------------------------------------------------------------------------- /roles/zfsupdates/defaults/main.yml: -------------------------------------------------------------------------------- 1 | use_generic_rpm_install: False 2 | zfs_packages: 3 | - libuutil3 4 | - libnvpair3 5 | - libzpool4 6 | - libzfs4 7 | - zfs 8 | - zfs-dkms 9 | - zfs-dracut 10 | -------------------------------------------------------------------------------- /roles/zfsupdates/files/RPM-GPG-KEY-zfsonlinux: -------------------------------------------------------------------------------- 1 | -----BEGIN PGP PUBLIC KEY BLOCK----- 2 | Version: GnuPG v1.4.13 (GNU/Linux) 3 | 4 | mQENBFFLdrcBCADLsZ7FG2e2Cr/ZV6J52+CyzXXmmmIt2ibYJdflxhSaaxhuag6k 5 | 1OgWEM1R/igjWD3im66O99m30+XeDqDwwBC2flplTlAM5cVb2Q37M+q+LyGLaJgL 6 | JKIkHWfK3/arJO+QY5K2hxzvKUAO1ZJa5OYQMJmKIxLzKz3SX6YnRTtE7ohDq/bU 7 | F32ysIrW549XZUpFX4DYCbR9IEaF5kCvus4FwidBTHDC5aWKvb7qStaL/Yo9koV0 8 | CADl4/nCNAKTcmRoo8Pz+zFwFFNLKRdwu58IefLgkqon7RQBDhwMqKPs1Kw14RwJ 9 | HUv3HYw1JleFmkXv2hn1BMa0ct7jUrvenFBNABEBAAG0IVpGUyBvbiBMaW51eCA8 10 | emZzQHpmc29ubGludXgub3JnPokBOAQTAQIAIgUCUUt2twIbAwYLCQgHAwIGFQgC 11 | CQoLBBYCAwECHgECF4AACgkQqdWhwPFKtiAHWgf/SVXT92gs02HlJsz3h+vmHKHH 12 | 8esxHq8DzG9PaBTyeWLB6mMuN5IQ6Kbtpy44xYCyPyBo+MEoFyPwJXw4qU7th/NX 13 | fAaohXTT8KltyKYsibotdeUHGE4G/7ilbQl9kknlmbig9M16RCnRCDxBRiLrvdaQ 14 | X84YhQUlV2CUShRevuogNbcfORViF8jGb0vkRRTJFhsfZpq7XmW53Q3RXrzoe5Pf 15 | t9NzV2Tlx9ohyGGpPVGO4MgJ028vtoUSIVGA2a9Vg1dwKJkhZ/VvgeleZGds+jIP 16 | 0SpXCNJHgxTaE5AJ71GC8OiDst8b/syYmRFX4P1ioah9m7X3ImHiX4tZJc+Ee7kB 17 | DQRRS3a3AQgAtGrh/OjWeqyUAbw8aO6ew28u+wG0GOEaNdMPEm9120uM3XoEHxg7 18 | FpixPJHj6u9VfhZvHBQOEYiZ0sIY7qj/0wyifsTFYjSZrSCJJHJpbM4SnflTkD43 19 | uJlvcUmqMv0vfCnkaMIO71I1xqPlgOYxOQlenttcI+5xEVzv78cSoQOdddCdGFMs 20 | mdrfxh5NWJR6ehIEI1JQTl6iuZt8wJ/Tgqk2btyDYpDKvDdKLjkBpCbTwsVtQa0x 21 | 2/EvuSMbnu8rqqzqqVsOoBMzDi1ksxm1kC4fXmI9av2X+dGpGJNnn5AqIQAKsETE 22 | 8L4Ajzo8Tk2aaq2ase0i9sNdnsRtYGOjiwARAQABiQEfBBgBAgAJBQJRS3a3AhsM 23 | 
AAoJEKnVocDxSrYgJXYH/3gAPOr7LwA4p3BhV6NIwIaMGmGY+1dbMp/OxB+mFuOS 24 | NTTCKsBGUchGVFYjSlBtden07S8HNNTWB+bWLVfRTTgRkqomxp0DMMOZ8ry317l+ 25 | cDKVRXMPEZvX9567q1PAOGDiGxiE8296ZUp9/hSFkOqv1sdp+HSM6KVMb4MP4Sx0 26 | +sAwcEumIQAKgXMzDLdpoPDrFnoAAfUmQfpddd7NKch0NJAhdlPtQryFpKdnmvpQ 27 | oLINrelqJxuVMo0hd7q0Xc/vJT0s6pe0f0fXdqXy1ijD3qAewXLZHO3XGVSF8fLY 28 | Q5XFCu4KH2vNBmn0lZxVX5BWm3R2M5XfuT/CJYHO+mk= 29 | =A19+ 30 | -----END PGP PUBLIC KEY BLOCK----- 31 | -------------------------------------------------------------------------------- /roles/zfsupdates/files/zfs.repo: -------------------------------------------------------------------------------- 1 | [zfs] 2 | name=ZFS on Linux for Fedora $releasever 3 | baseurl=http://download.zfsonlinux.org/fedora/$releasever/$basearch/ 4 | enabled=1 5 | metadata_expire=7d 6 | gpgcheck=1 7 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux 8 | 9 | [zfs-source] 10 | name=ZFS on Linux for Fedora $releasever - Source 11 | baseurl=http://download.zfsonlinux.org/fedora/$releasever/SRPMS/ 12 | enabled=0 13 | gpgcheck=1 14 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux 15 | 16 | [zfs-testing] 17 | name=ZFS on Linux for Fedora $releasever - Testing 18 | baseurl=http://download.zfsonlinux.org/fedora-testing/$releasever/$basearch/ 19 | enabled=0 20 | metadata_expire=7d 21 | gpgcheck=1 22 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux 23 | 24 | [zfs-testing-source] 25 | name=ZFS on Linux for Fedora $releasever - Testing Source 26 | baseurl=http://download.zfsonlinux.org/fedora-testing/$releasever/SRPMS/ 27 | enabled=0 28 | gpgcheck=1 29 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux 30 | -------------------------------------------------------------------------------- /roles/zfsupdates/library/blockinfile.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | # Ansible blockinfile module 5 | # 6 | # Licensed under GPL version 3 or 
later 7 | # (c) 2014 YAEGASHI Takeshi 8 | # (c) 2013 Evan Kaufman 9 | 10 | import re 11 | import os 12 | import tempfile 13 | 14 | DOCUMENTATION = """ 15 | --- 16 | module: blockinfile 17 | author: YAEGASHI Takeshi 18 | short_description: Insert/update/remove a text block 19 | surrounded by marker lines. 20 | version_added: 0.0 21 | description: 22 | - 'This module will insert/update/remove a block of multi-line text 23 | surrounded by customizable marker lines 24 | (default: "# {BEGIN/END} ANSIBLE MANAGED BLOCK"). 25 | Some functionality is taken from M(replace) module by Evan Kaufman.' 26 | options: 27 | dest: 28 | required: true 29 | aliases: [ name, destfile ] 30 | description: 31 | - The file to modify. 32 | marker: 33 | required: false 34 | default: "# {mark} ANSIBLE MANAGED BLOCK" 35 | description: 36 | - The marker line template. 37 | "{mark}" will be replaced with "BEGIN" or "END". 38 | content: 39 | required: false 40 | default: "" 41 | description: 42 | - The text to insert inside the marker lines. 43 | If it's empty string, marker lines will also be removed. 44 | create: 45 | required: false 46 | default: "no" 47 | choices: [ "yes", "no" ] 48 | description: 49 | - Create a new file if it doesn't exist. 50 | backup: 51 | required: false 52 | default: "no" 53 | choices: [ "yes", "no" ] 54 | description: 55 | - Create a backup file including the timestamp information so you can 56 | get the original file back if you somehow clobbered it incorrectly. 57 | validate: 58 | required: false 59 | description: 60 | - validation to run before copying into place 61 | required: false 62 | default: None 63 | others: 64 | description: 65 | - All arguments accepted by the M(file) module also work here. 
66 | required: false 67 | """ 68 | 69 | EXAMPLES = r""" 70 | - blockinfile: dest=/etc/ssh/sshd_config content="Match User ansible-agent\nPasswordAuthentication no" 71 | 72 | - blockinfile: | 73 | dest=/etc/network/interfaces backup=yes 74 | content="iface eth0 inet static 75 | address 192.168.0.1 76 | netmask 255.255.255.0" 77 | 78 | - blockinfile: | 79 | dest=/var/www/html/index.html backup=yes 80 | marker="<!-- {mark} ANSIBLE MANAGED BLOCK -->" 81 | content="
<h1>Welcome to {{ansible_hostname}}</h1>
" 82 | """ 83 | 84 | def write_changes(module,contents,dest): 85 | 86 | tmpfd, tmpfile = tempfile.mkstemp() 87 | f = os.fdopen(tmpfd,'w') 88 | f.write(contents) 89 | f.close() 90 | 91 | validate = module.params.get('validate', None) 92 | valid = not validate 93 | if validate: 94 | if "%s" not in validate: 95 | module.fail_json(msg="validate must contain %%s: %s" % (validate)) 96 | (rc, out, err) = module.run_command(validate % tmpfile) 97 | valid = rc == 0 98 | if rc != 0: 99 | module.fail_json(msg='failed to validate: ' 100 | 'rc:%s error:%s' % (rc,err)) 101 | if valid: 102 | module.atomic_move(tmpfile, dest) 103 | 104 | def check_file_attrs(module, changed, message): 105 | 106 | file_args = module.load_file_common_arguments(module.params) 107 | if module.set_file_attributes_if_different(file_args, False): 108 | 109 | if changed: 110 | message += " and " 111 | changed = True 112 | message += "ownership, perms or SE linux context changed" 113 | 114 | return message, changed 115 | 116 | def main(): 117 | module = AnsibleModule( 118 | argument_spec=dict( 119 | dest=dict(required=True, aliases=['name', 'destfile']), 120 | marker=dict(default='# {mark} ANSIBLE MANAGED BLOCK', type='str'), 121 | content=dict(default='', type='str'), 122 | create=dict(default='no', choices=['yes', 'no', True, False], type='bool'), 123 | backup=dict(default='no', choices=['yes', 'no', True, False], type='bool'), 124 | validate=dict(default=None, type='str'), 125 | ), 126 | add_file_common_args=True, 127 | supports_check_mode=True 128 | ) 129 | 130 | params = module.params 131 | dest = os.path.expanduser(params['dest']) 132 | 133 | if os.path.isdir(dest): 134 | module.fail_json(rc=256, msg='Destination %s is a directory !' % dest) 135 | 136 | if not os.path.exists(dest): 137 | if not module.boolean(params['create']): 138 | module.fail_json(rc=257, msg='Destination %s does not exist !' 
% dest) 139 | contents = '' 140 | else: 141 | f = open(dest, 'r') 142 | contents = f.read() 143 | f.close() 144 | 145 | mfunc = lambda x: re.sub(r'{mark}', x, params['marker'], 0) 146 | markers = tuple(map(mfunc, ("BEGIN", "END"))) 147 | markers_escaped = tuple(map(lambda x: re.escape(x), markers)) 148 | if params['content'] == '': 149 | repl = '' 150 | else: 151 | repl = '%s\n%s\n%s\n' % (markers[0], params['content'], markers[1]) 152 | mre = re.compile('^%s\n(.*\n)*%s\n' % markers_escaped, re.MULTILINE) 153 | result = re.subn(mre, repl, contents, 0) 154 | if result[1] == 0 and repl != '': 155 | mre = re.compile('(\n)?\Z', re.MULTILINE) 156 | result = re.subn(mre, '\n%s' % repl, contents, 0) 157 | if result[1] > 0 and contents != result[0]: 158 | msg = '%s replacements made' % result[1] 159 | changed = True 160 | else: 161 | msg = '' 162 | changed = False 163 | 164 | if changed and not module.check_mode: 165 | if module.boolean(params['backup']) and os.path.exists(dest): 166 | module.backup_local(dest) 167 | write_changes(module, result[0], dest) 168 | 169 | msg, changed = check_file_attrs(module, changed, msg) 170 | module.exit_json(changed=changed, msg=msg) 171 | 172 | # this is magic, see lib/ansible/module_common.py 173 | #<<INCLUDE_ANSIBLE_MODULE_COMMON>> 174 | 175 | main() 176 | 177 | -------------------------------------------------------------------------------- /roles/zfsupdates/tasks/deploy-zfs-repo.yml: -------------------------------------------------------------------------------- 1 | - block: 2 | - shell: rpm -q zfs-release || true 3 | check_mode: no 4 | register: zfsreleasepkg 5 | changed_when: False 6 | 7 | - copy: src=files/zfs.repo dest=/etc/yum.repos.d/zfs.repo mode=0644 owner=root group=root 8 | when: '"is not installed" in zfsreleasepkg.stdout' 9 | 10 | - copy: src=files/RPM-GPG-KEY-zfsonlinux dest=/etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux mode=0644 owner=root group=root 11 | when: '"is not installed" in zfsreleasepkg.stdout' 12 | 13 | - package: name=zfs-release state=latest
14 | when: '"is not installed" in zfsreleasepkg.stdout' 15 | when: not use_generic_rpm_install|bool 16 | 17 | - block: 18 | - name: Deploy ZFS repository using generic RPM install 19 | import_role: 20 | name: generic-rpm-install 21 | vars: 22 | project: zfsupdates 23 | packages: 24 | - coreutils 25 | when: use_generic_rpm_install|bool 26 | -------------------------------------------------------------------------------- /roles/zfsupdates/tasks/deploy-zfs-stage-1.yml: -------------------------------------------------------------------------------- 1 | - name: discover package manager config 2 | shell: if test -f /etc/dnf/dnf.conf ; then echo -n /etc/dnf/dnf.conf ; else echo -n /etc/yum.conf ; fi 3 | check_mode: no 4 | changed_when: False 5 | register: pkgconffile 6 | tags: 7 | - yum 8 | - ini 9 | 10 | - name: kernel and kernel devel both must be installable without upgrade 11 | ini_file: 12 | dest: "{{ pkgconffile.stdout }}" 13 | section: main 14 | option: installonlypkgs 15 | value: kernel, kernel-modules, kernel-modules-extra, kernel-core, kernel-devel 16 | when: "ansible_distribution != 'Qubes'" 17 | tags: 18 | - yum 19 | - ini 20 | 21 | - name: kernel and kernel devel both must be installable without upgrade 22 | shell: | 23 | if ! 
grep -q "ANSIBLE BEGIN" "{{ pkgconffile.stdout }}" ; then 24 | sed -i "s/### QUBES END ###/### QUBES END ###\n### ANSIBLE BEGIN ###\n### ANSIBLE END ###/" "{{ pkgconffile.stdout }}" 25 | echo CHANGED 26 | fi 27 | register: insertmarker 28 | changed_when: "'CHANGED' in insertmarker.stdout" 29 | when: "ansible_distribution == 'Qubes'" 30 | tags: 31 | - yum 32 | - ini 33 | 34 | - name: kernel and kernel devel both must be installable without upgrade 35 | blockinfile: 36 | dest: "{{ pkgconffile.stdout }}" 37 | marker: "### ANSIBLE {mark} ###" 38 | content: "installonlypkgs=kernel, kernel-modules, kernel-modules-extra, kernel-core, kernel-devel, kernel-qubes-vm, kernel-latest, kernel-latest-devel" 39 | when: "ansible_distribution == 'Qubes'" 40 | tags: 41 | - yum 42 | - ini 43 | 44 | - name: test for build dependencies in dom0 45 | action: shell rpm -q {{ packages|join(" ") }} || echo NO 46 | vars: 47 | packages: 48 | - automake 49 | - rpm-build 50 | - libtool 51 | - libuuid-devel 52 | - libblkid-devel 53 | - zlib-devel 54 | - make 55 | - ksh 56 | - elfutils-libelf-devel 57 | register: can_build 58 | when: "ansible_distribution == 'Qubes'" 59 | changed_when: False 60 | check_mode: no 61 | 62 | - name: install build dependencies in dom0 63 | shell: qubes-dom0-update -y git automake rpm-build libtool libuuid-devel libblkid-devel zlib-devel make ksh elfutils-libelf-devel 64 | when: "ansible_distribution == 'Qubes' and 'NO' in can_build.stdout" 65 | 66 | - name: upgrade ZFS RPMs (Fedora) 67 | package: name={{ item }} state=latest 68 | with_items: '{{ zfs_packages }}' 69 | register: upgrade_zfs_nondom0 70 | 71 | - name: upgrade ZFS RPMs (Qubes) 72 | shell: qubes-dom0-update -y {{ " ".join(zfs_packages) }} 73 | when: ansible_distribution == "Qubes" 74 | register: upgrade_zfs_dom0 75 | changed_when: 'upgrade_zfs_dom0.stdout.find("Nothing to do.") == -1' 76 | 77 | - set_fact: 78 | upgrade_zfs_nondom0_check: '{{ upgrade_zfs_nondom0.changed|default(False) }}' 79 | 
upgrade_zfs_dom0_check: '{{ upgrade_zfs_dom0 is defined and upgrade_zfs_dom0.changed|default(False) }}' 80 | upgrade_zfs: '{{ upgrade_zfs_nondom0.changed|default(False) or (upgrade_zfs_dom0 is defined and upgrade_zfs_dom0.changed|default(False)) }}' 81 | 82 | - name: ensure that ZFS unit files are active 83 | shell: | 84 | for a in zfs-import-cache.service zfs-import-scan.service zfs-mount.service zfs-share.service zfs-zed.service zfs.target zfs-import.target ; do 85 | systemctl preset "$a" 86 | done 87 | register: unitpresets 88 | changed_when: '"systemd" in unitpresets.stderr' 89 | 90 | - name: test that GRUB ZFS fixer RPM is installed 91 | shell: rpm -q grub-zfs-fixer && echo YES || echo NO 92 | register: grubzfsfixertest 93 | changed_when: False 94 | check_mode: no 95 | tags: 96 | - grub 97 | 98 | - name: upgrade GRUB ZFS fixer RPMs 99 | package: name=grub-zfs-fixer state=latest 100 | register: upgrade_grub 101 | when: "'YES' in grubzfsfixertest.stdout" 102 | tags: 103 | - grub 104 | 105 | - name: remove zfs-test RPM 106 | package: name=zfs-test state=absent 107 | 108 | - name: sanity-check grub2-mkconfig 109 | shell: | 110 | set -e 111 | grep "This program was patched" /usr/sbin/grub2-mkconfig 112 | grep "This program was patched" /etc/grub.d/10_linux 113 | grep "This program was patched" /etc/grub.d/20_linux_xen 114 | when: "'YES' in grubzfsfixertest.stdout" 115 | changed_when: False 116 | tags: 117 | - grub 118 | 119 | - name: touch marker file after executed upgrade 120 | shell: touch /.zfsreinstallneeded 121 | when: upgrade_zfs 122 | -------------------------------------------------------------------------------- /roles/zfsupdates/tasks/deploy-zfs-stage-2.yml: -------------------------------------------------------------------------------- 1 | - name: obtain kernel versions 2 | shell: '(rpm -q kernel-devel kernel-latest-devel | grep -v "is not installed" || true) | sed s/kernel-devel-// | sed s/kernel-latest-devel-//' 3 | register: kernelversuncooked 4 | 
check_mode: no 5 | changed_when: false 6 | 7 | - name: set kernel versions fact 8 | set_fact: 9 | kernelvers: '{{ kernelversuncooked.stdout_lines }}' 10 | 11 | - name: make sure that all ramdisks have ZFS 12 | shell: | 13 | set -e 14 | vers={{ kernelvers|join(" ")|quote }} 15 | for ver in $vers ; do 16 | if rpm -q kernel-$ver > /dev/null || rpm -q kernel-core-$ver > /dev/null || rpm -q kernel-latest-$ver > /dev/null ; then 17 | true 18 | else 19 | continue 20 | fi 21 | lsinitrd /boot/initramfs-$ver.img 2>/dev/null | grep -q zfs.ko >&2 || { touch /.zfsreinstallneeded ; echo NO ; } 22 | lsinitrd /boot/initramfs-$ver.img 2>/dev/null | grep -qq etc/crypttab >&2 || { touch /.zfsreinstallneeded ; echo NO ; } 23 | done 24 | register: all_ramdisks_have_zfs 25 | changed_when: "'NO' in all_ramdisks_have_zfs.stdout" 26 | tags: 27 | - zfs 28 | - dkms 29 | 30 | - name: check ZFS reinstall marker 31 | shell: if test -f /.zfsreinstallneeded ; then echo YES ; else echo NO ; fi 32 | check_mode: no 33 | register: mustrecompile 34 | changed_when: "'YES' in mustrecompile.stdout" 35 | tags: 36 | - zfs 37 | - dkms 38 | 39 | - name: eliminate obsolete versions of DKMS ZFS 40 | shell: | 41 | test -d /var/lib/dkms/{{ item }}/ || exit 0 42 | cd /var/lib/dkms/{{ item }}/ 43 | for f in */source ; do 44 | if [ ! 
-f "$f/dkms.conf" ] ; then 45 | rm -rf $(dirname "$f") 46 | echo ELIMINATED "$f" 47 | fi 48 | done 49 | register: deleted_obsolete 50 | changed_when: '"ELIMINATED" in deleted_obsolete.stdout' 51 | with_items: 52 | - zfs 53 | tags: 54 | - zfs 55 | - olddkms 56 | 57 | - name: uninstall old ZFS 58 | shell: | 59 | set -ex 60 | vers={{ kernelvers|join(" ")|quote }} 61 | ret=0 62 | modtime() { 63 | stat "$1" -c "%Y" 64 | } 65 | for ver in $vers ; do 66 | # we only uninstall kernel modules from kernels who still can have modules rebuilt 67 | # kernels that do not have the main or the devel packages cannot be rebuilt 68 | # so we skip them 69 | if rpm -q kernel-$ver > /dev/null || rpm -q kernel-core-$ver > /dev/null || rpm -q kernel-latest-$ver > /dev/null ; then 70 | true 71 | else 72 | echo No kernel $ver >&2 73 | continue 74 | fi 75 | for module in zfs ; do 76 | for action in uninstall remove ; do 77 | dkmsver=`(ls --sort=time -d1 /usr/src/$module-* || true) | head -1` ; dkmsver=${dkmsver#/usr/src/$module-} 78 | test -d "/usr/src/$module-$dkmsver" || continue 79 | out=`dkms status -m $module -k $ver -v $dkmsver` 80 | echo "====== output of dkms status for kernel $ver =======" >&2 81 | echo "$out" >&2 82 | echo "====================================================" >&2 83 | if [ "$out" != "" ] ; then 84 | modtime_source=$( modtime "/usr/src/$module-$dkmsver" ) || { 85 | echo "/usr/src/$module-$dkmsver" does not exist >&2 86 | modtime_source=0 87 | } 88 | modtime_module=$( modtime "/usr/lib/modules/$ver/extra/$module.ko.xz" ) || { 89 | echo "/usr/lib/modules/$ver/extra/$module.ko.xz" does not exist >&2 90 | modtime_module=0 91 | } 92 | if [ $modtime_module -gt $modtime_source ] ; then 93 | echo "/usr/lib/modules/$ver/extra/$module.ko.xz" is newer than "/usr/src/$module-$dkmsver" >&2 94 | continue 95 | fi 96 | if [ $action == uninstall ] ; then 97 | out=`dkms status -m $module -k $ver -v $dkmsver` 98 | echo "$out" | grep -q ': installed' || { echo $module is not installed 
>&2 ; continue ; } 99 | elif [ $action == remove ] ; then 100 | out=`dkms status -m $module -k $ver -v $dkmsver` 101 | echo "$out" | grep -Eq ': (built|added)' || { echo $module is not built or added >&2 ; continue ; } 102 | fi 103 | out=`dkms $action --directive REMAKE_INITRD=no -m $module -k $ver -v $dkmsver 2>&1` || ret=$? 104 | echo "====== output of dkms $action for kernel $ver ======" >&2 105 | echo "$out" >&2 106 | echo "====================================================" >&2 107 | echo "$out" | grep -q "is not currently installed" && continue || true 108 | echo "$out" | grep -q "original module was found for this module on this kernel" && continue || true 109 | echo "$out" | grep -q "is no instance of" && continue || true 110 | echo "$out" | grep -q "failed to remove.*No such file or directory" && continue || true 111 | echo "$out" | grep -q "DKMS: uninstall completed" && { echo YES ; continue ; } || true 112 | if [ x$ret != x0 ] ; then echo "DKMS $action for $ver exited with return code $ret" >&2 ; exit $ret ; fi 113 | echo YES 114 | fi 115 | done 116 | done 117 | done 118 | register: uninstall_old_zfs 119 | when: mustrecompile.changed 120 | changed_when: "'YES' in uninstall_old_zfs.stdout" 121 | tags: 122 | - zfs 123 | - dkms 124 | 125 | - name: dkms install recently-installed ZFS 126 | shell: | 127 | set -ex 128 | vers={{ kernelvers|join(" ")|quote }} 129 | ret=0 130 | for ver in $vers ; do 131 | if rpm -q kernel-$ver > /dev/null || rpm -q kernel-core-$ver > /dev/null || rpm -q kernel-latest-$ver > /dev/null ; then 132 | true 133 | else 134 | echo No kernel $ver >&2 135 | continue 136 | fi 137 | for module in zfs ; do 138 | echo Installing $module for kernel $ver >&2 139 | dkmsver=`ls --sort=time -d1 /usr/src/$module-* | head -1` ; dkmsver=${dkmsver#/usr/src/$module-} 140 | test -n "$dkmsver" 141 | dkms install --directive REMAKE_INITRD=no -m $module -k "$ver" -v "$dkmsver" || exit $? 
142 | echo YES dkms install "$ver" 143 | done 144 | done 145 | register: install_zfs 146 | when: mustrecompile.changed 147 | changed_when: "'YES' in install_zfs.stdout" 148 | tags: 149 | - zfs 150 | - dkms 151 | 152 | - name: remove weak-modules 153 | shell: | 154 | set -ex 155 | vers={{ kernelvers|join(" ")|quote }} 156 | for ver in $vers ; do 157 | for module in zfs ; do 158 | if test -f /usr/lib/modules/"$ver"/weak-updates/"$module".ko.xz ; then 159 | weak-modules --remove-kernel {% if ansible_check_mode %}--dry-run {% endif %}"$ver" 160 | {% if ansible_check_mode %}echo {% endif %}depmod -a "$ver" 161 | echo YES weak-modules remove kernel "$ver" 162 | fi 163 | done 164 | done 165 | check_mode: no 166 | register: weak_modules 167 | when: mustrecompile.changed 168 | changed_when: "'YES' in weak_modules.stdout" 169 | tags: 170 | - zfs 171 | - dkms 172 | 173 | - include: ../../../tasks/redo-initramfs.yml 174 | when: mustrecompile.changed or upgrade_zfs 175 | tags: 176 | - dkms 177 | 178 | - name: sanity-check initramfs 179 | shell: | 180 | set -e 181 | vers={{ kernelvers|join(" ")|quote }} 182 | for ver in $vers ; do 183 | if rpm -q kernel-$ver > /dev/null || rpm -q kernel-core-$ver > /dev/null || rpm -q kernel-latest-$ver > /dev/null ; then 184 | true 185 | else 186 | continue 187 | fi 188 | lsinitrd /boot/initramfs-$ver.img 2>/dev/null | grep zfs.ko >&2 || { echo "kernel-$ver initramfs does not contain zfs.ko" >&2 ; exit 1 ; } 189 | lsinitrd /boot/initramfs-$ver.img 2>/dev/null | grep etc/crypttab >&2 || { echo "kernel-$ver initramfs does not contain crypttab" >&2 ; exit 1 ; } 190 | done 191 | check_mode: no 192 | changed_when: False 193 | tags: 194 | - zfs 195 | - dkms 196 | 197 | - name: remove ZFS reinstall marker 198 | file: name=/.zfsreinstallneeded state=absent 199 | tags: 200 | - zfs 201 | -------------------------------------------------------------------------------- /roles/zfsupdates/tasks/main.yml: 
-------------------------------------------------------------------------------- 1 | - block: 2 | 3 | - import_tasks: tasks/deploy-zfs-repo.yml 4 | when: deployzfs_stage|default("one") == "one" 5 | 6 | - import_tasks: tasks/deploy-zfs-stage-1.yml 7 | when: deployzfs_stage|default("one") == "one" 8 | 9 | - import_tasks: tasks/deploy-zfs-stage-2.yml 10 | when: deployzfs_stage|default("two") == "two" 11 | 12 | when: ansible_distribution in "Fedora Qubes" 13 | tags: 14 | - zfs 15 | -------------------------------------------------------------------------------- /tasks/grub2-regen.yml: -------------------------------------------------------------------------------- 1 | - name: regenerate GRUB configuration 2 | shell: | 3 | if [ -f /usr/sbin/grub{% if ansible_distribution in "Fedora Qubes" %}2{% endif %}-mkconfig ] ; then 4 | if [ -f /etc/grub{% if ansible_distribution in "Fedora Qubes" %}2{% endif %}-efi.cfg ] ; then 5 | GRUB_VERSION=2 6 | GRUB_MENU_LST=/etc/grub{% if ansible_distribution in "Fedora Qubes" %}2{% endif %}-efi.cfg 7 | GRUB_MKCONFIG=/usr/sbin/grub{% if ansible_distribution in "Fedora Qubes" %}2{% endif %}-mkconfig 8 | elif [ -f /etc/grub{% if ansible_distribution in "Fedora Qubes" %}2{% endif %}.cfg ] ; then 9 | GRUB_VERSION=2 10 | GRUB_MENU_LST=/etc/grub{% if ansible_distribution in "Fedora Qubes" %}2{% endif %}.cfg 11 | GRUB_MKCONFIG=/usr/sbin/grub{% if ansible_distribution in "Fedora Qubes" %}2{% endif %}-mkconfig 12 | fi 13 | elif [ -f /usr/sbin/grubby -a -f /boot/grub/grub.conf ] ; then 14 | GRUB_VERSION=1 15 | GRUB_MENU_LST=/boot/grub/grub.conf 16 | GRUB_MKCONFIG=/bin/true 17 | fi 18 | GRUB_SET_DEFAULT=/usr/sbin/grub{% if ansible_distribution in "Fedora Qubes" %}2{% endif %}-set-default 19 | {% if ansible_check_mode %}echo {% endif %}$GRUB_MKCONFIG -o $GRUB_MENU_LST || exit $? 
20 | test -f /usr/sbin/grubby && {% if ansible_check_mode %}echo {% endif %}/usr/sbin/grubby --set-default-index=0 || {% if ansible_check_mode %}echo {% endif %}$GRUB_SET_DEFAULT 0 21 | check_mode: no 22 | when: qubes is not defined or qubes.vm_type is not defined 23 | tags: 24 | - grub 25 | -------------------------------------------------------------------------------- /tasks/redo-initramfs.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: redo initramfs 4 | shell: | 5 | set -e 6 | cd /boot 7 | kvers= 8 | for kver in initramfs-*.img initramfs-*.img.knowngood /lib/modules/* ; do 9 | kver=$(basename "$kver") 10 | kver=${kver#initramfs-} 11 | kver=${kver%.knowngood} 12 | kver=${kver%.img} 13 | if echo "$kvers" | grep -q "$kver" ; then 14 | true 15 | else 16 | kvers="$kvers $kver" 17 | fi 18 | done 19 | set -x 20 | for kver in $kvers ; do 21 | if [ -z "$kver" ] ; then continue ; fi 22 | libmodules=/lib/modules/"$kver" 23 | initramfs=initramfs-$kver.img 24 | knowngood=$initramfs.knowngood 25 | vmlinuz=vmlinuz-$kver 26 | if [ ! -e "$libmodules" ] ; then 27 | echo "$libmodules does not exist, continuing" >&2 28 | continue 29 | fi 30 | if [ ! -f "$vmlinuz" ] ; then 31 | echo "$vmlinuz does not exist, removing obsolete initramfs and /lib/modules" >&2 32 | rm -fr "$initramfs" "$initramfs.knowngood" "$libmodules" 33 | echo CHANGED removed obsolete files 34 | continue 35 | fi 36 | if [ ! 
-f "$knowngood" ] ; then 37 | cp -f "$initramfs" "$knowngood" && echo CHANGED backup made of "$initramfs" 38 | fi 39 | done 40 | dracut -f --regenerate-all && echo CHANGED regenerated all initial RAM disks 41 | register: regenramfs 42 | when: ansible_distribution in "Fedora Qubes" 43 | changed_when: "'CHANGED' in regenramfs.stdout" 44 | -------------------------------------------------------------------------------- /tasks/testboot.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: test that boot is mounted, abort otherwise 3 | shell: | 4 | for mntpnt in /boot /boot/efi ; do 5 | if grep -q $mntpnt /etc/fstab 6 | then 7 | if ! mountpoint $mntpnt 8 | then 9 | mount $mntpnt 10 | sleep 1 11 | mountpoint $mntpnt || { 12 | # Oh shit the mount unit is actually associated with a device unit 13 | # whose /sys path is obsolete. Tell systemd to reload the units. 14 | systemctl --system daemon-reload 15 | mount $mntpnt 16 | mountpoint $mntpnt || exit 4 17 | } 18 | echo CHANGED 19 | fi 20 | else 21 | true 22 | fi 23 | done 24 | register: testboot 25 | changed_when: testboot.stdout.find("CHANGED") != -1 26 | notify: unmount boot 27 | tags: 28 | - zfs 29 | - grub 30 | --------------------------------------------------------------------------------
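
The kernel-version derivation loop in tasks/redo-initramfs.yml above can be exercised standalone. The sketch below is a hypothetical illustration: the input names are examples rather than real files under /boot, and the deduplication uses an exact-match `case` pattern instead of the original `grep -q` substring test (which would let a collected "5.14" swallow a later "5.14.3").

```shell
# Hypothetical standalone sketch of the version-derivation loop from
# tasks/redo-initramfs.yml; input names are examples, not real files.
kvers=
for name in initramfs-5.14.3.img initramfs-5.14.3.img.knowngood 5.15.0 ; do
  kver=$(basename "$name")
  kver=${kver#initramfs-}      # strip the initramfs- prefix, if present
  kver=${kver%.knowngood}      # strip the .knowngood backup suffix, if present
  kver=${kver%.img}            # strip the .img suffix, if present
  case " $kvers " in
    *" $kver "*) ;;            # exact match: version already collected
    *) kvers="$kvers $kver" ;;
  esac
done
echo "$kvers"                  # -> " 5.14.3 5.15.0"
```

Both initramfs images and their .knowngood backups collapse to one version entry, so each kernel is regenerated at most once by the subsequent dracut pass.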