├── images
│   └── JSAC2020_demo2.gif
├── .gitignore
├── .gitattributes
├── README.md
├── LICENSE
├── monitorappconv.py
├── openbsmconv.py
└── norimaci.py

/images/JSAC2020_demo2.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mnrkbys/norimaci/HEAD/images/JSAC2020_demo2.gif
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | ._*
2 | .DS_Store
3 | .history/
4 | .vscode/
5 | data/
6 | failed_data/
7 | noriben_sample/
8 | old/
--------------------------------------------------------------------------------
/.gitattributes:
--------------------------------------------------------------------------------
1 | # Auto detect text files and perform LF normalization
2 | * text=auto
3 | *.md text eol=lf
4 | .git* text eol=lf
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Norimaci
2 | 
3 | "Norimaci" is a simple and lightweight malware analysis sandbox for macOS. This tool was inspired by "[Noriben](https://github.com/Rurik/Noriben)". Norimaci uses the features of OpenBSM or Monitor.app to monitor macOS system activity, instead of the Sysinternals Process Monitor (procmon) that Noriben relies on.
4 | 
5 | Norimaci consists of three Python scripts:
6 | 
7 | - norimaci.py : Main script
8 | - openbsmconv.py : OpenBSM audit log converter
9 | - monitorappconv.py : Monitor.app data converter
10 | 
11 | OpenBSM is a framework for auditing activity on macOS. Please see [its web site](http://www.trustedbsd.org/openbsm.html) for details.
12 | 
13 | Monitor.app is a free tool made by FireEye. Please see [its web site](https://www.fireeye.com/services/freeware/monitor.html) for details.
14 | 
15 | ## Why "Norimaci"?
16 | 
17 | My former colleague (@cci_forensics) suggested this name.
18 | 
19 | "Norimaci" is a coined word combining "Noriben" and "Macintosh". It is pronounced "Norimaki" and represents "のり巻き" in Japanese, which means "sushi roll" in English.
20 | 
21 | Noriben is a Japanese-style lunch box that consists of minimal ingredients. The ingredients of norimaki are similar to those of noriben (seaweed, rice, and whatever else you prefer).
22 | 
23 | So, I decided to name this tool "Norimaci".
24 | 
25 | ## Requirements
26 | 
27 | - OS X 10.6 or later (I tested on macOS 10.13 - 10.15)
28 | - VMware Fusion, Parallels, VirtualBox, etc.
29 | - Python 3.5 or later
30 | 
31 | ### Optional requirements
32 | 
33 | - [Monitor.app](https://www.fireeye.com/services/freeware/monitor.html)
34 | 
35 | **Note that Monitor.app supports only macOS 10.12 - 10.14. You do not need to install it if you execute malware on macOS 10.15 or later; use OpenBSM instead.**
36 | 
37 | If you use Norimaci with Monitor.app, you have to install the libraries below from their source repositories or pip.
38 | 
39 | - [py-applescript](https://github.com/rdhyee/py-applescript)
40 | - [PyObjC](https://bitbucket.org/ronaldoussoren/pyobjc)
41 | - [dnslib](https://bitbucket.org/paulc/dnslib/)
42 | 
43 | ## Preparation
44 | 
45 | ### Build virtual machines to execute malware
46 | 
47 | You have to build a macOS VM to execute malware samples. In addition, it is highly recommended to build another VM to provide fake Internet connections, because many malware samples attempt to connect to their servers (e.g. C2 servers).
48 | 
49 | PolarProxy and INetSim are very useful tools for providing fake HTTP/HTTPS and DNS services. Please refer to the [NETRESEC blog](https://www.netresec.com/?page=Blog&month=2019-12&post=Installing-a-Fake-Internet-with-INetSim-and-PolarProxy) to build a fake Internet.
50 | 
51 | ### Edit /etc/security/audit_control
52 | 
53 | If you use OpenBSM to monitor system activities, you have to modify the /etc/security/audit_control file as shown below.
54 | By default, OpenBSM records audit logs only for login and authentication events, but Norimaci needs more kinds of audit events (file creation, file deletion, process execution, networking, etc.).
55 | 
56 | The computer has to be rebooted after the modification to apply the settings.
57 | 
58 | ```
59 | #
60 | # $P4: //depot/projects/trustedbsd/openbsm/etc/audit_control#8 $
61 | #
62 | dir:/var/audit
63 | flags:lo,aa,fc,fd,pc,nt,ex <- edit here like this
64 | minfree:5
65 | naflags:lo,aa,fc,fd,pc,nt,ex <- edit here like this
66 | policy:cnt,argv
67 | filesz:2M
68 | expire-after:10M
69 | superuser-set-sflags-mask:has_authenticated,has_console_access
70 | superuser-clear-sflags-mask:has_authenticated,has_console_access
71 | member-set-sflags-mask:
72 | member-clear-sflags-mask:has_authenticated
73 | ```
74 | 
75 | ## Usage
76 | 
77 | ### Basic usage with OpenBSM (most standard usage)
78 | 
79 | 1. Run norimaci.py with sudo.
80 | 2. Run a malware sample (you can run any type of malware: DMG, PKG, Mach-O binary, and so on).
81 | 3. Wait for a while (until the malware achieves its goal).
82 | 4. Press "Ctrl + C" at the appropriate time in the terminal where Norimaci is running.
83 | 5. Two kinds of reports are generated (Norimaci_dd_Mon_yy__hh_mm_ffffff.txt and Norimaci_dd_Mon_yy__hh_mm_ffffff_timeline.csv).
84 | 6. Review the reports with your favorite tools (e.g. text editors, grep, less, etc.).
85 | 
86 | ```bash
87 | $ sudo python3 ./norimaci.py -m openbsm -o ./out/
88 | Password:
89 | 
90 | --===[ Norimaci v0.1.0
91 | --===[ Minoru Kobayashi [@unkn0wnbit]
92 | [*] Launching OpenBSM agent...
93 | [*] When runtime is complete, press CTRL+C to stop logging.
94 | ^C
95 | [*] Termination of OpenBSM agent commencing... please wait
96 | [*] Converting OpenBSM data ...
97 | [*] Loading converted macOS activity data ...
98 | [*] Saving report to: /Users/macforensics/tools/norimaci/out/Norimaci_14_Jan_20__15_55_093219.txt
99 | [*] Saving timeline to: /Users/macforensics/tools/norimaci/out/Norimaci_14_Jan_20__15_55_093219_timeline.csv
100 | ```
101 | 
102 | ### Basic usage with Monitor.app
103 | 
104 | Note: Monitor.app cannot run on macOS 10.15, but it works fine on macOS 10.14 and earlier.
105 | 
106 | 1. Run norimaci.py with sudo.
107 | 2. Enter a password after Norimaci launches Monitor.app (Monitor.app needs a password to install its kext).
108 | 3. Run a malware sample (you can run any type of malware: DMG, PKG, Mach-O binary, and so on).
109 | 4. Wait for a while (until the malware achieves its goal).
110 | 5. Press "Ctrl + C" at the appropriate time in the terminal where Norimaci is running.
111 | 6. Two kinds of reports are generated (Norimaci_dd_Mon_yy__hh_mm_ffffff.txt and Norimaci_dd_Mon_yy__hh_mm_ffffff_timeline.csv).
112 | 7. Review the reports with your favorite tools (e.g. text editors, grep, less, etc.).
113 | 
114 | ### Help of scripts
115 | 
116 | - norimaci.py
117 | 
118 | ```bash
119 | $ python3 ./norimaci.py -h
120 | 
121 | --===[ Norimaci v0.1.0
122 | --===[ Minoru Kobayashi [@unkn0wnbit]
123 | usage: norimaci.py [-h] [-m MONITOR] [-j JSON] [-bl OPENBSM_LOG] [-p PROCLIST]
124 |                    [-ml MONITORAPP_LOG] [-o OUTPUT] [--force] [--debug]
125 | 
126 | Light weight sandbox which works with OpenBSM or Fireeye's Monitor.app
127 | 
128 | optional arguments:
129 |   -h, --help            show this help message and exit
130 |   -m MONITOR, --monitor MONITOR
131 |                         Specify a program to monitor macOS activity. You can
132 |                         choose 'openbsm' or 'monitorapp'.
133 |   -j JSON, --json JSON  Path to a JSON file which is converted by
134 |                         'openbsmconv.py' or 'monitorappconv.py'.
135 |   -bl OPENBSM_LOG, --openbsm-log OPENBSM_LOG
136 |                         Path to an OpenBSM log file.
137 |   -p PROCLIST, --proclist PROCLIST
138 |                         Path to a process list file to process OpenBSM log
A file which has ".proclist" extnsion would be 140 | used, if this option is not specified. 141 | -ml MONITORAPP_LOG, --monitorapp-log MONITORAPP_LOG 142 | Path to a Monitor.app data file. 143 | -o OUTPUT, --output OUTPUT 144 | Path to an output directory. 145 | --force Enable to overwrite output files. 146 | --debug Enable debug mode. 147 | ``` 148 | 149 | - openbsmconv.py 150 | 151 | ```bash 152 | $ python3 ./openbsmconv.py -h 153 | usage: openbsmconv.py [-h] [-f FILE] [-p PROCLIST] [-o OUT] [-c] [-rp] 154 | [--with-failure] [--with-failure-socket] [--force] 155 | [--debug] 156 | 157 | Converts OpenBSM log file to JSON format. 158 | 159 | optional arguments: 160 | -h, --help show this help message and exit 161 | -f FILE, --file FILE Path to a bsm log file 162 | -p PROCLIST, --proclist PROCLIST 163 | Path to a process list file 164 | -o OUT, --out OUT Path to an output file 165 | -c, --console Output JSON data to stdout. 166 | -rp, --use-running-proclist 167 | Use current running process list instead of a existing 168 | process list file. And, the process list is saved to a 169 | file which places in the same directory of '--file' or 170 | to a file which specified '--proclist'. 171 | --with-failure Output records which has a failure status too. 172 | --with-failure-socket 173 | Output records which has a failure status too (related 174 | socket() syscall only). 175 | --force Enable to overwrite an existing output file. 176 | --debug Enable debug mode. 177 | ``` 178 | 179 | - monitorappconv.py 180 | 181 | ```bash 182 | $ python3 ./monitorappconv.py -h 183 | usage: monitorappconv.py [-h] [-f FILE] [-o OUT] [-c] [--force] [--debug] 184 | 185 | Parses data of Fireeye Monitor.app and converts it to JSON format. Please note 186 | that strings in JSON data are saved as UTF-8. 187 | 188 | optional arguments: 189 | -h, --help show this help message and exit 190 | -f FILE, --file FILE Path to a saved data of Monitor.app. 191 | -o OUT, --out OUT Path to an output file. 
192 |   -c, --console         Output JSON data to stdout.
193 |   --force               Enable to overwrite an output file.
194 |   --debug               Enable debug mode.
195 | ```
196 | 
197 | ## Demo
198 | 
199 | Analyzing AppleJeus.A on macOS 10.15 Catalina with Norimaci. This demo movie was made for the Japan Security Analyst Conference 2020 (JSAC2020).
200 | 
201 | ![Norimaci demo](images/JSAC2020_demo2.gif)
202 | 
203 | ## Installation
204 | 
205 | ```bash
206 | git clone https://github.com/mnrkbys/norimaci.git
207 | ```
208 | 
209 | ## Future Work
210 | 
211 | - [ ] YARA scanning
212 | - [ ] VirusTotal scanning
213 | 
214 | ## Author
215 | 
216 | [Minoru Kobayashi](https://twitter.com/unkn0wnbit)
217 | 
218 | ## License
219 | 
220 | [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0)
221 | 
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 | 
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 | 
7 | 1. Definitions.
8 | 
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 | 
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 | 
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. 
For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. 
If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. 
You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. 
(Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /monitorappconv.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # 3 | # monitorappconv.py 4 | # Parses data of Fireeye Monitor.app and converts it to JSON format. 5 | # 6 | # Copyright 2020 Minoru Kobayashi (@unkn0wnbit) 7 | # 8 | # Licensed under the Apache License, Version 2.0 (the "License"); 9 | # you may not use this file except in compliance with the License. 10 | # You may obtain a copy of the License at 11 | # 12 | # http://www.apache.org/licenses/LICENSE-2.0 13 | # 14 | # Unless required by applicable law or agreed to in writing, software 15 | # distributed under the License is distributed on an "AS IS" BASIS, 16 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 17 | # See the License for the specific language governing permissions and 18 | # limitations under the License. 
19 | # 20 | 21 | import os 22 | import sys 23 | import argparse 24 | import struct 25 | import json 26 | 27 | try: 28 | import dnslib 29 | except ImportError: 30 | sys.exit("Import Error: dnslib is not installed.\n\ 31 | Get it from https://bitbucket.org/paulc/dnslib/ or from pip.") 32 | 33 | debug_mode = False 34 | 35 | # data type definitions 36 | # record delimiter 37 | record_delimiter = 0x92 38 | 39 | # record types 40 | record_type = {} 41 | record_type['info'] = {} 42 | record_type['info']['str'] = b'osx.agent.info' 43 | record_type['info']['post'] = 0x81 44 | record_type['file_write'] = {} 45 | record_type['file_write']['str'] = b'osx.agent.file.write' 46 | record_type['file_write']['post'] = 0x8B 47 | record_type['file_rename'] = {} 48 | record_type['file_rename']['str'] = b'osx.agent.file.rename' 49 | record_type['file_rename']['post'] = 0x8C 50 | record_type['kext_load'] = {} 51 | record_type['kext_load']['str'] = b'osx.agent.kext.load' 52 | record_type['kext_load']['post'] = 0x87 53 | record_type['dylib_load'] = {} 54 | record_type['dylib_load']['str'] = b'osx.agent.dylib.load' 55 | record_type['dylib_load']['post'] = 0x87 56 | record_type['procexec'] = {} 57 | record_type['procexec']['str'] = b'osx.agent.procexec' 58 | record_type['procexec']['post'] = 0x8D 59 | record_type['socket_connection'] = {} 60 | record_type['socket_connection']['str'] = b'osx.agent.socket.connection' 61 | record_type['socket_connection']['post'] = 0x8A 62 | record_type['dns_request'] = {} 63 | record_type['dns_request']['str'] = b'osx.agent.socket.dns.request' 64 | record_type['dns_request']['post'] = 0x86 65 | record_type['dns_reply'] = {} 66 | record_type['dns_reply']['str'] = b'osx.agent.socket.dns.reply' 67 | record_type['dns_reply']['post'] = 0x86 68 | record_type['tty'] = {} 69 | record_type['tty']['str'] = b'osx.agent.tty' 70 | record_type['tty']['post'] = 0x87 71 | 72 | # elements (common) 73 | epost_xor_len = -1 74 | epost_str_len = -2 75 | epost_num = -4 76 | 
epost_bytes = -8 77 | 78 | element_type = {} 79 | element_type['procname'] = {} 80 | element_type['procname']['str'] = b'procname' 81 | element_type['procname']['post'] = epost_xor_len 82 | element_type['pprocname'] = {} 83 | element_type['pprocname']['str'] = b'pprocname' 84 | element_type['pprocname']['post'] = epost_xor_len 85 | element_type['pid'] = {} 86 | element_type['pid']['str'] = b'pid' 87 | element_type['pid']['post'] = epost_num 88 | element_type['ppid'] = {} 89 | element_type['ppid']['str'] = b'ppid' 90 | element_type['ppid']['post'] = epost_num 91 | element_type['uid'] = {} 92 | element_type['uid']['str'] = b'uid' 93 | element_type['uid']['post'] = epost_num 94 | element_type['gid'] = {} 95 | element_type['gid']['str'] = b'gid' 96 | element_type['gid']['post'] = epost_num 97 | element_type['timestamp'] = {} 98 | element_type['timestamp']['str'] = b'timestamp' 99 | element_type['timestamp']['post'] = epost_num 100 | element_type['timestamp_ns'] = {} 101 | element_type['timestamp_ns']['str'] = b'timestamp_ns' 102 | element_type['timestamp_ns']['post'] = epost_num 103 | element_type['egid'] = {} 104 | element_type['egid']['str'] = b'egid' 105 | element_type['egid']['post'] = epost_num 106 | element_type['euid'] = {} 107 | element_type['euid']['str'] = b'euid' 108 | element_type['euid']['post'] = epost_num 109 | element_type['bytes'] = {} 110 | element_type['bytes']['str'] = b'bytes' 111 | element_type['bytes']['post'] = epost_bytes 112 | 113 | # element sub-type (common) 114 | element_type_str = {} 115 | element_type_str['str_1byte'] = 0xD9 # D9 116 | element_type_str['str_2byte'] = 0xDA # DA 117 | element_type_str['str_with_null'] = 0xC4 # C4 118 | 119 | element_type_num = {} 120 | element_type_num['num_1byte'] = 0xCC 121 | element_type_num['num_2byte'] = 0xCD 122 | element_type_num['num_4byte'] = 0xCE 123 | 124 | element_type_bytes = {} 125 | element_type_bytes['bytes_1byte'] = 0xC4 # C4 126 | 127 | # elements (information) 128 | element_type['msg'] = 
{} 129 | element_type['msg']['str'] = b'msg' 130 | element_type['msg']['post'] = epost_str_len 131 | 132 | # elements (file manipulation / process execution) 133 | element_type['oldpath'] = {} 134 | element_type['oldpath']['str'] = b'oldpath' 135 | element_type['oldpath']['post'] = epost_str_len 136 | element_type['newpath'] = {} 137 | element_type['newpath']['str'] = b'newpath' 138 | element_type['newpath']['post'] = epost_str_len 139 | element_type['path'] = {} 140 | element_type['path']['str'] = b'path' 141 | element_type['path']['post'] = epost_str_len 142 | element_type['is64'] = {} 143 | element_type['is64']['str'] = b'is64' 144 | element_type['is64']['post'] = epost_num 145 | element_type['argc'] = {} 146 | element_type['argc']['str'] = b'argc' 147 | element_type['argc']['post'] = epost_num 148 | element_type['argv'] = {} 149 | element_type['argv']['str'] = b'argv' 150 | element_type['argv']['post'] = epost_str_len 151 | 152 | # elements (socket) 153 | element_type['version'] = {} 154 | element_type['version']['str'] = b'version' 155 | element_type['version']['post'] = epost_num 156 | element_type['direction'] = {} 157 | element_type['direction']['str'] = b'direction' 158 | element_type['direction']['post'] = epost_xor_len 159 | element_type['srcport'] = {} 160 | element_type['srcport']['str'] = b'srcport' 161 | element_type['srcport']['post'] = epost_num 162 | element_type['dstport'] = {} 163 | element_type['dstport']['str'] = b'dstport' 164 | element_type['dstport']['post'] = epost_num 165 | element_type['srcip'] = {} 166 | element_type['srcip']['str'] = b'srcip' 167 | element_type['srcip']['post'] = epost_xor_len 168 | element_type['dstip'] = {} 169 | element_type['dstip']['str'] = b'dstip' 170 | element_type['dstip']['post'] = epost_xor_len 171 | element_type['proto'] = {} 172 | element_type['proto']['str'] = b'proto' 173 | element_type['proto']['post'] = epost_xor_len 174 | 175 | # elements (TTY) 176 | element_type['dev'] = {} 177 | 
element_type['dev']['str'] = b'dev' 178 | element_type['dev']['post'] = epost_num 179 | element_type['operation'] = {} 180 | element_type['operation']['str'] = b'operation' 181 | element_type['operation']['post'] = epost_xor_len 182 | 183 | 184 | # setup arguments 185 | def parse_arguments(): 186 | parser = argparse.ArgumentParser(description="Parses data of Fireeye Monitor.app and converts it to JSON format.\n\ 187 | Please note that strings in JSON data are saved as UTF-8.") 188 | parser.add_argument('-f', '--file', action='store', type=str, 189 | help='Path to a saved data of Monitor.app.') 190 | parser.add_argument('-o', '--out', action='store', type=str, 191 | help='Path to an output file.') 192 | parser.add_argument('-c', '--console', action='store_true', default=False, 193 | help='Output JSON data to stdout.') 194 | parser.add_argument('--force', action='store_true', default=False, 195 | help='Enable to overwrite an output file.') 196 | parser.add_argument('--debug', action='store_true', default=False, 197 | help='Enable debug mode.') 198 | args = parser.parse_args() 199 | 200 | return args 201 | 202 | def check_arguments(args): 203 | if args.file == None: 204 | sys.exit('Need to specify save file of Monitor.app.\nPlease confirm options with "-h".') 205 | 206 | if args.out == None and args.console == False: 207 | sys.exit('Need to specify output direction (file and/or console).\nPlease confirm options with "-h".') 208 | 209 | global debug_mode 210 | debug_mode = args.debug 211 | 212 | 213 | def parse_saved_data(data, current_pos): 214 | record_head = -1 215 | record_tail = -1 216 | pos = current_pos 217 | 218 | while pos <= len(data): 219 | if pos == len(data): 220 | if debug_mode: 221 | print("record_tail = {}".format(len(data))) 222 | return data[record_head:len(data)], -1 223 | 224 | if data[pos] == record_delimiter and data[pos+2:pos+12] == b'osx.agent.' 
and record_head == -1: 225 | record_head = pos 226 | record_tail = record_head + 1 227 | if debug_mode: 228 | print("record_head = {}".format(record_head)) 229 | 230 | elif (data[pos] == record_delimiter and data[pos+2:pos+12] == b'osx.agent.' and record_head != -1) or (pos == len(data)): 231 | record_tail = pos 232 | if debug_mode: 233 | print("record_tail = {}".format(record_tail)) 234 | break 235 | 236 | pos = pos + 1 237 | 238 | return data[record_head:record_tail], pos 239 | 240 | 241 | def check_record_type(rtype, record): 242 | rtype_length = record[0] ^ 0xA0 243 | if record[1:rtype_length+1] == record_type[rtype]['str'] and record[1+rtype_length] == record_type[rtype]['post']: 244 | return rtype 245 | 246 | else: 247 | return None 248 | 249 | 250 | def parse_element(elements): 251 | while True: 252 | for etype in element_type.keys(): 253 | element_value, element_size = check_element_type_value(etype, elements) 254 | if element_size: 255 | elements = elements[element_size:] 256 | return etype, element_value, elements 257 | 258 | print("-"*40) 259 | print("Unknown Element Type : {}".format(hex(elements[0]))) 260 | print("Raw data : {}".format(elements)) 261 | print("Cancel to parse this element.\n") 262 | return None, None, None 263 | 264 | 265 | def parse_dns_packet(dns_packet): 266 | dns_entry = dnslib.DNSRecord.parse(dns_packet) 267 | dns_query = dns_entry.get_q().get_qname().idna() 268 | dns_replies = [str(x) for x in dns_entry.rr] 269 | if debug_mode: 270 | print("DNS Query : {}".format(dns_query)) 271 | print("DNS Replies : {}".format(dns_replies)) 272 | return {'dns_query': dns_query, 'dns_replies': dns_replies} 273 | 274 | 275 | def check_element_type_value(etype, elements): 276 | etype_length = elements[0] ^ 0xA0 277 | 278 | if elements[1:etype_length+1] == element_type[etype]['str']: 279 | if debug_mode: 280 | print("Element Type : {}".format(etype)) 281 | eheader_size = 1 + etype_length 282 | 283 | if element_type[etype]['post'] == epost_xor_len: 
284 | length = elements[eheader_size] ^ 0xA0 285 | string = elements[eheader_size+1:eheader_size+1+length] 286 | if debug_mode: 287 | print("Length(XOR) : {}".format(length)) 288 | print("String : {}".format(string)) 289 | return string.decode('utf-8'), eheader_size + 1 + length 290 | 291 | elif element_type[etype]['post'] == epost_str_len: 292 | if elements[eheader_size] == element_type_str['str_1byte'] or elements[eheader_size] == element_type_str['str_with_null']: 293 | length = elements[eheader_size+1] 294 | string = elements[eheader_size+2:eheader_size+2+length] 295 | if debug_mode: 296 | print("Length(1byte) : {}".format(length)) 297 | print("String : {}".format(string)) 298 | return string.decode('utf-8'), eheader_size + 2 + length 299 | elif elements[eheader_size] == element_type_str['str_2byte']: 300 | length = struct.unpack_from(">H", elements[eheader_size+1:], 0)[0] 301 | string = elements[eheader_size+3:eheader_size+3+length] 302 | if debug_mode: 303 | print("Length(2byte) : {}".format(length)) 304 | print("String : {}".format(string)) 305 | return string.decode('utf-8'), eheader_size + 3 + length 306 | else: 307 | length = elements[eheader_size] ^ 0xA0 308 | string = elements[eheader_size+1:eheader_size+1+length] 309 | if debug_mode: 310 | print("Length : {}".format(length)) 311 | print("String : {}".format(string)) 312 | return string.decode('utf-8'), eheader_size + 1 + length 313 | 314 | elif element_type[etype]['post'] == epost_num: 315 | if elements[eheader_size] == element_type_num['num_1byte']: 316 | number = elements[eheader_size+1] 317 | if debug_mode: 318 | print("Number(1byte) : {}".format(number)) 319 | return number, eheader_size + 1 + 1 320 | elif elements[eheader_size] == element_type_num['num_2byte']: 321 | number = struct.unpack_from(">H", elements[eheader_size+1:], 0)[0] 322 | if debug_mode: 323 | print("Number(2byte) : {}".format(number)) 324 | return number, eheader_size + 1 + 2 325 | elif elements[eheader_size] == 
element_type_num['num_4byte']: 326 | number = struct.unpack_from(">I", elements[eheader_size+1:], 0)[0] 327 | if debug_mode: 328 | print("Number(4byte) : {}".format(number)) 329 | return number, eheader_size + 1 + 4 330 | else: 331 | number = elements[eheader_size] 332 | if debug_mode: 333 | print("Number : {}".format(number)) 334 | return number, eheader_size + 1 335 | 336 | elif element_type[etype]['post'] == epost_bytes: 337 | if elements[eheader_size] in element_type_bytes.values(): 338 | length = elements[eheader_size+1] 339 | dns_packet = elements[eheader_size+2:eheader_size+2+length] 340 | if debug_mode: 341 | print("Length : {}".format(length)) 342 | print("Raw data : {}".format(dns_packet)) 343 | parsed_dns_packet = parse_dns_packet(dns_packet) 344 | return parsed_dns_packet, eheader_size + 2 + length 345 | 346 | else: 347 | print("-"*40) 348 | print("Unknown Element Value Type : {}".format(hex(elements[0]))) 349 | print("Raw data : {}\n".format(elements)) 350 | return None, None 351 | 352 | else: 353 | return None, None 354 | 355 | 356 | def parse_record(record): 357 | # remove record delimiter 358 | record = record[1:] 359 | parsed_elements = {} 360 | for rtype in record_type.keys(): 361 | if check_record_type(rtype, record): 362 | if debug_mode: 363 | print("Record Type : {}".format(str(rtype))) 364 | elements = record[1+len(record_type[rtype]['str'])+1:] 365 | while len(elements) > 0: 366 | etype, element_value, elements = parse_element(elements) 367 | if None in [etype, element_value, elements]: 368 | break 369 | if (rtype == 'dns_request' or rtype == 'dns_reply') and etype == 'bytes': 370 | etype = 'dns' 371 | parsed_elements[etype] = element_value 372 | 373 | return {**{'record_type': rtype}, **parsed_elements} 374 | 375 | print("-"*40) 376 | print("Unknown Record Type : {}".format(hex(record[0]))) 377 | print("Raw data : {}\n".format(record)) 378 | return None 379 | 380 | 381 | def main(): 382 | args = parse_arguments() 383 | check_arguments(args) 
384 | 385 | if os.path.exists(os.path.abspath(args.file)): 386 | with open(args.file, 'rb') as fp: 387 | data = fp.read() 388 | else: 389 | sys.exit("{} does not exist.".format(args.file)) 390 | 391 | if args.out: 392 | if os.path.exists(os.path.abspath(args.out)) and not args.force: 393 | sys.exit("{} already exists.".format(args.out)) 394 | out_file = open(args.out, 'wt') 395 | 396 | # parse Monitor.app's data 397 | current_pos = 0 398 | record_num = 0 399 | records = dict() 400 | while current_pos < len(data): 401 | if debug_mode: 402 | print("-"*40) 403 | record, current_pos = parse_saved_data(data, current_pos) 404 | if debug_mode: 405 | print("raw record : {}".format(record)) 406 | records['record_' + str(record_num)] = {'record_num': record_num, **parse_record(record)} 407 | 408 | # output JSON data 409 | if args.console: 410 | print(json.dumps(records['record_' + str(record_num)], ensure_ascii=False, indent=4)) 411 | 412 | # save JSON data 413 | if args.out: 414 | json.dump(records['record_' + str(record_num)], out_file, ensure_ascii=False) 415 | out_file.write('\n') 416 | 417 | record_num += 1 418 | 419 | if current_pos == -1: 420 | break 421 | 422 | if args.out: 423 | out_file.close() 424 | 425 | return 0 426 | 427 | 428 | if __name__ == "__main__": 429 | if sys.version_info[0:2] >= (3, 0): 430 | sys.exit(main()) 431 | else: 432 | sys.exit("This script needs Python 3.x") 433 | -------------------------------------------------------------------------------- /openbsmconv.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # 3 | # openbsmconv.py 4 | # Converts an OpenBSM log file created with 'praudit -ls /dev/auditpipe' to JSON format. 5 | # 6 | # Copyright 2020 Minoru Kobayashi (@unkn0wnbit) 7 | # 8 | # Licensed under the Apache License, Version 2.0 (the "License"); 9 | # you may not use this file except in compliance with the License.
10 | # You may obtain a copy of the License at 11 | # 12 | # http://www.apache.org/licenses/LICENSE-2.0 13 | # 14 | # Unless required by applicable law or agreed to in writing, software 15 | # distributed under the License is distributed on an "AS IS" BASIS, 16 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 17 | # See the License for the specific language governing permissions and 18 | # limitations under the License. 19 | # 20 | 21 | import argparse 22 | import json 23 | import os 24 | import platform 25 | import sys 26 | import datetime 27 | import struct 28 | import pwd 29 | import grp 30 | 31 | # global variables 32 | with_failure = False 33 | with_failure_socket = False 34 | debug_mode = False 35 | file_debug = None 36 | proc_stat = dict() 37 | 38 | 39 | # setup arguments 40 | def parse_arguments(): 41 | parser = argparse.ArgumentParser(description="Converts an OpenBSM log file to JSON format.") 42 | parser.add_argument('-f', '--file', action='store', default=None, 43 | help='Path to a BSM log file') 44 | parser.add_argument('-p', '--proclist', action='store', default=None, 45 | help='Path to a process list file') 46 | parser.add_argument('-o', '--out', action='store', default=None, 47 | help='Path to an output file') 48 | parser.add_argument('-c', '--console', action='store_true', default=False, 49 | help='Output JSON data to stdout.') 50 | parser.add_argument('-rp', '--use-running-proclist', action='store_true', default=False, 51 | help='Use the currently running process list instead of an existing process list file. The process list is saved to a file in the same directory as \'--file\', or to the file specified by \'--proclist\'.') 52 | parser.add_argument('--with-failure', action='store_true', default=False, 53 | help='Also output records which have a failure status.') 54 | parser.add_argument('--with-failure-socket', action='store_true', default=False, 55 | help='Also output records which have a failure status (socket()-related syscalls only).') 56 | parser.add_argument('--force', action='store_true', default=False, 57 | help='Allow overwriting an existing output file.') 58 | parser.add_argument('--debug', action='store_true', default=False, help='Enable debug mode.') 59 | args = parser.parse_args() 60 | 61 | return args 62 | 63 | 64 | def check_arguments(args): 65 | global with_failure 66 | global with_failure_socket 67 | global debug_mode 68 | global file_debug 69 | 70 | if args.file: 71 | file_openbsm_log = os.path.abspath(args.file) 72 | if not os.path.exists(file_openbsm_log): 73 | sys.exit('Error: OpenBSM log file does not exist: {}'.format(file_openbsm_log)) 74 | file_out = os.path.splitext(file_openbsm_log)[0] + '.json' 75 | file_proclist = os.path.splitext(file_openbsm_log)[0] + '.proclist' 76 | file_debug = os.path.splitext(file_openbsm_log)[0] + '.log' 77 | else: 78 | sys.exit('You must specify the \'--file\' option') 79 | 80 | if args.out: 81 | file_out = os.path.abspath(args.out) 82 | 83 | if os.path.exists(file_out) and not (args.force or args.console): 84 | sys.exit('Error: output file already exists: {}'.format(file_out)) 85 | 86 | if args.proclist and args.use_running_proclist: 87 | sys.exit('You cannot specify both \'--proclist\' and \'--use-running-proclist\'.') 88 | 89 | if args.proclist: 90 | file_proclist = os.path.abspath(args.proclist) 91 | 92 | if os.path.exists(file_proclist) and not args.use_running_proclist: 93 | load_proclist(file_proclist) 94 | elif args.use_running_proclist: 95 | if not get_proclist(): 96 | sys.exit('Error:
Cannot get the current process list.') 97 | 98 | if os.path.exists(file_proclist) and not args.force: 99 | sys.exit('Error: process list file already exists: {}'.format(file_proclist)) 100 | else: 101 | save_proclist(file_proclist) 102 | else: 103 | sys.exit('Error: process list file does not exist: {}'.format(file_proclist)) 104 | 105 | if args.with_failure: 106 | with_failure = True 107 | with_failure_socket = True 108 | elif args.with_failure_socket: 109 | with_failure_socket = True 110 | # with_failure = args.with_failure 111 | # with_failure_socket = args.with_failure_socket 112 | debug_mode = args.debug 113 | 114 | return file_openbsm_log, file_out 115 | 116 | 117 | def dbg_print(msg): 118 | if msg and debug_mode: 119 | print('{}'.format(msg)) 120 | if file_debug: 121 | open(file_debug, 'a').write('{}\n'.format(msg)) 122 | return True 123 | 124 | return False 125 | 126 | 127 | def get_proclist(): 128 | # https://codeday.me/jp/qa/20190310/380909.html 129 | try: 130 | proc_stat[0] = dict() 131 | proc_stat[0]['procname'] = 'kernel_task' 132 | proc_stat[0]['ppid'] = 0 133 | 134 | process_list = [(int(pid), int(ppid), comm) for pid, ppid, comm in [x.strip().split(maxsplit=2) for x in os.popen('ps -Ao pid,ppid,comm')][1:]] 135 | for pid, ppid, procname in process_list: 136 | proc_stat[pid] = dict() 137 | proc_stat[pid]['procname'] = procname 138 | proc_stat[pid]['ppid'] = ppid 139 | 140 | proc_stat[1]['xpcproxy_child_pid'] = list() 141 | return True 142 | except KeyError: 143 | return False 144 | 145 | 146 | def save_proclist(file_proclist): 147 | try: 148 | global proc_stat 149 | with open(file_proclist, 'wt') as fp: 150 | json.dump(proc_stat, fp, ensure_ascii=False, indent=4) 151 | return True 152 | except OSError as err: 153 | sys.exit(err) 154 | 155 | 156 | def load_proclist(file_proclist): 157 | try: 158 | global proc_stat 159 | with open(file_proclist, 'rt') as fp: 160 | proclist = json.load(fp) 161 | for pid, stats in proclist.items(): 162 | 
proc_stat[int(pid)] = dict() 163 | for key, val in stats.items(): 164 | proc_stat[int(pid)][key] = val 165 | proc_stat[1]['xpcproxy_child_pid'] = list() 166 | return True 167 | except (OSError, KeyError) as err: 168 | sys.exit(err) 169 | 170 | 171 | def get_proc_info(pid, element): 172 | if pid == 1 and element == 'ppid': 173 | return 1 174 | 175 | if (pid in proc_stat) and (element in proc_stat[pid]): 176 | return proc_stat[pid][element] 177 | elif element == 'ppid': 178 | return 0 179 | else: 180 | return 'unknown' 181 | 182 | 183 | def convert_user_to_id(username): 184 | # https://stackoverflow.com/questions/421618/python-script-to-list-users-and-groups 185 | try: 186 | return int(pwd.getpwnam(username)[2]) 187 | except KeyError: 188 | sys.exit('Error: unknown username: {}'.format(username)) 189 | 190 | 191 | def convert_group_to_id(groupname): 192 | try: 193 | return int(grp.getgrnam(groupname)[2]) 194 | except KeyError: 195 | sys.exit('Error: unknown groupname: {}'.format(groupname)) 196 | 197 | 198 | def get_event_name(record): 199 | try: 200 | header, x, version, event_name, payload = record.split(',', 4) 201 | if header == 'header': 202 | if version == '11': 203 | return event_name, payload 204 | else: 205 | sys.exit('Error: version is invalid: {}'.format(version)) 206 | else: 207 | sys.exit('Error: header is invalid: {}'.format(header)) 208 | except ValueError as err: 209 | sys.exit('Error: get_event_name(): {}\nrecord: {}'.format(err, record)) 210 | 211 | 212 | def get_signed_int(return_code): 213 | return struct.unpack('<i', struct.pack('<I', return_code))[0] ... while len(_payload) > 0: 230 | _tag, _payload = _payload.split(',', 1) 231 | if _tag == 'path': 232 | _path, _payload = get_attributes_path(_payload, attr_num) 233 | check_attr += 1 234 | if payload_backup: 235 | payload_backup += ',' + _tag + ',' + _path 236 | else: 237 | payload_backup = _tag + ',' + _path 238 | elif _tag in attr_num.keys(): 239 | if payload_backup: 240 | payload_backup += ',' + _tag + ',' + ','.join(_payload.split(',', 
attr_num[_tag])[:attr_num[_tag]]) 241 | else: 242 | payload_backup = _tag + ',' + ','.join(_payload.split(',', attr_num[_tag])[:attr_num[_tag]]) 243 | _payload = _payload.split(',', attr_num[_tag])[attr_num[_tag]] 244 | check_attr += 1 245 | else: 246 | path += ',' + _tag 247 | 248 | if check_attr == 2: 249 | break 250 | return path, payload_backup + ',' + _payload 251 | 252 | 253 | def get_attributes(tags, payload): 254 | tag_attr = dict() 255 | payload_orig = payload 256 | attr_num = { 257 | 'argument': 3, 258 | 'path': 1, 259 | 'subject': 9, 260 | 'return': 2, 261 | 'attribute': 6, 262 | 'process_ex': 9, 263 | 'identity': 6, 264 | 'trailer': 1, 265 | 'arbitrary': 4, 266 | 'exit': 2, 267 | 'socket-unix': 2, 268 | 'socket-inet': 3, 269 | 'socket-inet6': 3, 270 | 'text': 1, 271 | } 272 | 273 | while len(payload) > 0: 274 | attrib_list = list() 275 | tag, payload = payload.split(',', 1) 276 | if tag in tags and tag != 'exec arg': 277 | attrib_list = payload.split(',', attr_num[tag])[:attr_num[tag]] 278 | if tag not in tag_attr: 279 | tag_attr[tag] = list() 280 | 281 | if tag == 'path': 282 | path, payload = get_attributes_path(payload, attr_num) 283 | tag_attr[tag].append(path) 284 | continue 285 | elif tag == 'argument': 286 | tag_attr[tag].append(attrib_list) 287 | else: 288 | tag_attr[tag] = attrib_list 289 | 290 | payload = payload.split(',', attr_num[tag])[attr_num[tag]] 291 | elif tag == 'exec arg': 292 | tag_attr['exec arg'] = list() 293 | while True: 294 | _tag, payload = payload.split(',', 1) 295 | if _tag == 'path': 296 | check = list() 297 | path, __payload = get_attributes_path(payload, attr_num) 298 | check.append(path) 299 | for i in list(range(3)): 300 | __tag, __payload = __payload.split(',', 1) 301 | if __tag == 'path': 302 | path, __payload = get_attributes_path(__payload, attr_num) 303 | check.append(__tag) 304 | check.append(path) 305 | else: 306 | check.append(__tag) 307 | 308 | if (check[0].startswith(('/', './')) and check[1] == 'path' and 
check[2].startswith(('/', './')) and check[3] == 'attribute') or \ 309 | (check[0].startswith(('/', './')) and check[1] == 'arbitrary' and check[2] == 'hex' and check[3] == 'byte') or \ 310 | (check[0].startswith(('/', './')) and check[1] == 'subject'): 311 | payload = _tag + ',' + payload 312 | break 313 | else: 314 | tag_attr['exec arg'].append(_tag) 315 | else: 316 | if tag not in attr_num: 317 | sys.exit('Error: Unknown tag: {} / {}'.format(tag, payload_orig)) 318 | payload = payload.split(',', attr_num[tag])[attr_num[tag]] 319 | 320 | dbg_print('tag_attr : {}'.format(tag_attr)) 321 | return tag_attr 322 | 323 | 324 | def set_json_record(record_num, record_type, timestamp, msec, tag_attr): 325 | try: 326 | json_record = dict() 327 | pid = int(tag_attr['subject'][5]) 328 | ppid = get_proc_info(pid, 'ppid') 329 | json_record['record_num'] = record_num 330 | json_record['record_type'] = record_type 331 | json_record['procname'] = get_proc_info(pid, 'procname') 332 | json_record['pprocname'] = get_proc_info(ppid, 'procname') 333 | json_record['timestamp'] = timestamp 334 | json_record['timestamp_ns'] = msec * 1000000 335 | json_record['pid'] = pid 336 | json_record['ppid'] = ppid 337 | json_record['uid'] = convert_user_to_id(tag_attr['subject'][3]) 338 | json_record['gid'] = convert_group_to_id(tag_attr['subject'][4]) 339 | json_record['euid'] = convert_user_to_id(tag_attr['subject'][1]) 340 | json_record['egid'] = convert_group_to_id(tag_attr['subject'][2]) 341 | return json_record 342 | except KeyError: 343 | return None 344 | 345 | 346 | def convert_bsm_events(bsm_file): 347 | global proc_stat 348 | 349 | aue_file_write_events = [ 350 | # 'AUE_OPEN_RC', 'AUE_OPEN_RTC', 351 | 'AUE_OPEN_WC', 'AUE_OPEN_WTC', 352 | 'AUE_OPEN_RWC', 'AUE_OPEN_WRTC', 353 | # 'AUE_OPENAT_RC', 'AUE_OPENAT_RTC', 354 | 'AUE_OPENAT_WC', 'AUE_OPENAT_WTC', 355 | 'AUE_OPENAT_RWC', 'AUE_OPENAT_WRTC', 356 | # 'AUE_OPEN_EXTENDED_RC', 'AUE_OPEN_EXTENDED_RTC', 357 | 'AUE_OPEN_EXTENDED_WC', 
'AUE_OPEN_EXTENDED_WTC', 358 | 'AUE_OPEN_EXTENDED_RWC', 'AUE_OPEN_EXTENDED_WRTC', 359 | # 'AUE_OPENBYID_RC', 'AUE_OPENBYID_RTC', 360 | 'AUE_OPENBYID_WC', 'AUE_OPENBYID_WTC', 361 | 'AUE_OPENBYID_RWC', 'AUE_OPENBYID_WRTC', 362 | 'AUE_LINK', 'AUE_LINKAT', 'AUE_SYMLINK', 363 | ] 364 | 365 | aue_file_rename_events = [ 366 | 'AUE_RENAME', 'AUE_RENAMEAT', 367 | ] 368 | 369 | aue_file_delete_events = [ 370 | 'AUE_UNLINK', 'AUE_UNLINKAT', 371 | ] 372 | 373 | aue_mkdir_events = [ 374 | 'AUE_MKDIR', 'AUE_MKDIRAT', 'AUE_MKDIR_EXTENDED', 375 | ] 376 | 377 | aue_rmdir_events = [ 378 | 'AUE_RMDIR', 379 | ] 380 | 381 | aue_fork_events = [ 382 | 'AUE_FORK', 'AUE_VFORK', 'AUE_FORK1', 'AUE_DARWIN_RFORK', 'AUE_RFORK', 'AUE_PDFORK', 383 | ] 384 | 385 | aue_execve_events = [ 386 | 'AUE_EXECVE', 'AUE_MAC_EXECVE', 'AUE_FEXECVE', 387 | ] 388 | 389 | aue_posix_spawn_events = [ 390 | 'AUE_POSIX_SPAWN', 391 | ] 392 | 393 | aue_exit_events = [ 394 | 'AUE_EXIT', 395 | ] 396 | 397 | aue_socket_events = [ 398 | 'AUE_SOCKET', 399 | ] 400 | 401 | aue_connect_events = [ 402 | 'AUE_CONNECT', 403 | ] 404 | 405 | aue_send_events = [ 406 | # 'AUE_SEND', 407 | 'AUE_SENDTO', 408 | 'AUE_SENDMSG', 409 | ] 410 | 411 | aue_recv_events = [ 412 | # 'AUE_RECV', 413 | 'AUE_RECVFROM', 414 | 'AUE_RECVMSG', 415 | ] 416 | 417 | aue_bind_events = [ 418 | 'AUE_BIND', 419 | ] 420 | 421 | aue_listen_events = [ 422 | 'AUE_LISTEN', 423 | ] 424 | 425 | aue_accept_events = [ 426 | 'AUE_ACCEPT', 427 | ] 428 | 429 | aue_close_events = [ 430 | 'AUE_CLOSE', 431 | 'AUE_CLOSEFROM', 432 | ] 433 | 434 | record_num = 0 435 | json_record = list() 436 | with open(bsm_file) as fp: 437 | for record in fp: 438 | record = record.strip() 439 | event_name, payload = get_event_name(record) 440 | timestamp, msec, payload = get_record_timestamp(payload) 441 | if event_name in aue_file_write_events: 442 | tag_attr = get_attributes(['path', 'subject', 'return'], payload) 443 | if get_signed_int(int(tag_attr['return'][1])) != -1 or with_failure: 
444 | json_record.append(set_json_record(record_num, 'file_write', timestamp, msec, tag_attr)) 445 | json_record[-1]['path'] = tag_attr['path'][0] 446 | record_num += 1 447 | 448 | if get_signed_int(int(tag_attr['return'][1])) == -1 and with_failure: 449 | json_record[-1]['result'] = 'failure' 450 | json_record[-1]['reason'] = tag_attr['return'][0] 451 | 452 | elif event_name in aue_file_rename_events: 453 | tag_attr = get_attributes(['path', 'subject', 'return'], payload) 454 | if get_signed_int(int(tag_attr['return'][1])) == 0 or with_failure: 455 | json_record.append(set_json_record(record_num, 'file_rename', timestamp, msec, tag_attr)) 456 | record_num += 1 457 | 458 | if get_signed_int(int(tag_attr['return'][1])) == 0: 459 | if len(tag_attr['path']) == 2: 460 | json_record[-1]['oldpath'] = tag_attr['path'][0] 461 | json_record[-1]['newpath'] = tag_attr['path'][1] 462 | elif len(tag_attr['path']) == 3: 463 | json_record[-1]['oldpath'] = tag_attr['path'][0] 464 | json_record[-1]['newpath'] = tag_attr['path'][2] 465 | elif len(tag_attr['path']) == 4: 466 | json_record[-1]['oldpath'] = tag_attr['path'][1] 467 | json_record[-1]['newpath'] = tag_attr['path'][3] 468 | else: 469 | sys.exit('Error: Unknown rename record: {}'.format(payload)) 470 | elif with_failure: 471 | json_record[-1]['oldpath'] = tag_attr['path'][0] 472 | json_record[-1]['newpath'] = '-' 473 | json_record[-1]['result'] = 'failure' 474 | json_record[-1]['reason'] = tag_attr['return'][0] 475 | 476 | elif event_name in aue_file_delete_events: 477 | tag_attr = get_attributes(['path', 'subject', 'return'], payload) 478 | if get_signed_int(int(tag_attr['return'][1])) == 0 or with_failure: 479 | json_record.append(set_json_record(record_num, 'file_delete', timestamp, msec, tag_attr)) 480 | record_num += 1 481 | 482 | if get_signed_int(int(tag_attr['return'][1])) == 0: 483 | json_record[-1]['path'] = tag_attr['path'][1] 484 | elif with_failure: 485 | idx = len(tag_attr['path']) - 1 486 | 
json_record[-1]['path'] = tag_attr['path'][idx] 487 | json_record[-1]['result'] = 'failure' 488 | json_record[-1]['reason'] = tag_attr['return'][0] 489 | 490 | elif event_name in aue_mkdir_events: 491 | tag_attr = get_attributes(['path', 'subject', 'return'], payload) 492 | if get_signed_int(int(tag_attr['return'][1])) == 0 or with_failure: 493 | json_record.append(set_json_record(record_num, 'folder_create', timestamp, msec, tag_attr)) 494 | json_record[-1]['path'] = tag_attr['path'][0] 495 | record_num += 1 496 | 497 | if get_signed_int(int(tag_attr['return'][1])) == -1 and with_failure: 498 | json_record[-1]['result'] = 'failure' 499 | json_record[-1]['reason'] = tag_attr['return'][0] 500 | 501 | elif event_name in aue_rmdir_events: 502 | tag_attr = get_attributes(['path', 'subject', 'return'], payload) 503 | if get_signed_int(int(tag_attr['return'][1])) == 0 or with_failure: 504 | json_record.append(set_json_record(record_num, 'folder_delete', timestamp, msec, tag_attr)) 505 | json_record[-1]['path'] = tag_attr['path'][1] 506 | record_num += 1 507 | 508 | if get_signed_int(int(tag_attr['return'][1])) == -1 and with_failure: 509 | json_record[-1]['result'] = 'failure' 510 | json_record[-1]['reason'] = tag_attr['return'][0] 511 | 512 | elif event_name in aue_fork_events: 513 | tag_attr = get_attributes(['subject', 'return'], payload) 514 | if get_signed_int(int(tag_attr['return'][1])) != -1 or with_failure: 515 | child_pid = int(tag_attr['return'][1]) 516 | pid = int(tag_attr['subject'][5]) 517 | if child_pid > 0: 518 | proc_stat[child_pid] = dict() 519 | proc_stat[child_pid]['ppid'] = pid # This operation is correct!! 
520 | if 'procname' in proc_stat[pid]: 521 | proc_stat[child_pid]['procname'] = proc_stat[pid]['procname'] 522 | 523 | elif event_name in aue_execve_events: 524 | tag_attr = get_attributes(['exec arg', 'path', 'subject', 'return'], payload) 525 | if get_signed_int(int(tag_attr['return'][1])) != -1 or with_failure: 526 | pid = int(tag_attr['subject'][5]) 527 | if pid in proc_stat: 528 | proc_stat[pid]['procname'] = tag_attr['path'][1] 529 | json_record.append(set_json_record(record_num, 'procexec', timestamp, msec, tag_attr)) 530 | json_record[-1]['path'] = tag_attr['path'][1] 531 | json_record[-1]['argc'] = len(tag_attr['exec arg']) 532 | tag_attr['exec arg'][0] = tag_attr['path'][1] 533 | json_record[-1]['argv'] = ' '.join(tag_attr['exec arg']) 534 | record_num += 1 535 | 536 | if get_signed_int(int(tag_attr['return'][1])) == -1 and with_failure: 537 | json_record[-1]['result'] = 'failure' 538 | json_record[-1]['reason'] = tag_attr['return'][0] 539 | 540 | elif event_name in aue_posix_spawn_events: 541 | tag_attr = get_attributes(['argument', 'exec arg', 'subject', 'return', 'identity'], payload) 542 | if get_signed_int(int(tag_attr['return'][1])) != 0 and not with_failure: 543 | continue 544 | 545 | if 'argument' in tag_attr: 546 | pid = int(tag_attr['subject'][5]) 547 | child_pid = int(tag_attr['argument'][0][1], 16) 548 | if pid != 1: 549 | proc_stat[child_pid] = dict() 550 | proc_stat[child_pid]['procname'] = tag_attr['exec arg'][0] 551 | proc_stat[child_pid]['ppid'] = pid 552 | elif pid == 1 and tag_attr['exec arg'][0] == 'xpcproxy': 553 | proc_stat[1]['xpcproxy_child_pid'].append(child_pid) 554 | proc_stat[child_pid] = dict() 555 | proc_stat[child_pid]['procname'] = tag_attr['exec arg'][0] 556 | proc_stat[child_pid]['ppid'] = pid 557 | json_record.append(set_json_record(record_num, 'procexec', timestamp, msec, tag_attr)) 558 | # modify the PID and PPID that have already been set 559 | json_record[-1]['pid'] = child_pid 560 | json_record[-1]['ppid'] = pid 561 | 
json_record[-1]['path'] = tag_attr['exec arg'][0] 562 | json_record[-1]['argc'] = len(tag_attr['exec arg']) 563 | json_record[-1]['argv'] = ' '.join(tag_attr['exec arg']) 564 | record_num += 1 565 | elif 'exec arg' in tag_attr: 566 | pid = int(tag_attr['subject'][5]) 567 | proc_stat[pid] = dict() 568 | proc_stat[pid]['procname'] = tag_attr['exec arg'][0] 569 | # if pid in proc_stat[1]['xpcproxy_child_pid']: 570 | if pid in proc_stat[1]['xpcproxy_child_pid'] or tag_attr['identity'][1] == 'com.apple.xpc.proxy': 571 | proc_stat[1]['xpcproxy_child_pid'].append(pid) 572 | proc_stat[pid]['ppid'] = 1 573 | else: 574 | proc_stat[pid]['ppid'] = 0 575 | json_record.append(set_json_record(record_num, 'procexec', timestamp, msec, tag_attr)) 576 | json_record[-1]['path'] = tag_attr['exec arg'][0] 577 | json_record[-1]['argc'] = len(tag_attr['exec arg']) 578 | json_record[-1]['argv'] = ' '.join(tag_attr['exec arg']) 579 | record_num += 1 580 | 581 | if get_signed_int(int(tag_attr['return'][1])) != 0 and with_failure: 582 | json_record[-1]['result'] = 'failure' 583 | json_record[-1]['reason'] = tag_attr['return'][0] 584 | 585 | elif event_name in aue_exit_events: 586 | tag_attr = get_attributes(['subject'], payload) 587 | pid = int(tag_attr['subject'][5]) 588 | if 'xpcproxy_child_pid' in proc_stat[1] and pid in proc_stat[1]['xpcproxy_child_pid']: 589 | proc_stat[1]['xpcproxy_child_pid'].remove(pid) 590 | 591 | elif event_name in aue_socket_events: 592 | tag_attr = get_attributes(['argument', 'subject', 'return', 'identity'], payload) 593 | if get_signed_int(int(tag_attr['return'][1])) == -1 and not with_failure_socket: 594 | continue 595 | 596 | pid = int(tag_attr['subject'][5]) 597 | socket_domain = int(tag_attr['argument'][0][1], 16) 598 | socket_type = int(tag_attr['argument'][1][1], 16) 599 | socket_protocol = int(tag_attr['argument'][2][1], 16) 600 | file_desc = int(tag_attr['return'][1]) 601 | id = tag_attr['identity'][1] 602 | 603 | if pid not in proc_stat: 604 | 
proc_stat[pid] = dict() 605 | if 'socket' not in proc_stat[pid]: 606 | proc_stat[pid]['socket'] = dict() 607 | if file_desc not in proc_stat[pid]['socket']: 608 | proc_stat[pid]['socket'][file_desc] = dict() 609 | if 'socket_stat' not in proc_stat[pid]['socket'][file_desc]: 610 | proc_stat[pid]['socket'][file_desc]['socket_stat'] = 'socket' 611 | 612 | # Guess a protocol 613 | if socket_domain == 2 and socket_type == 1 and socket_protocol == 1: 614 | proc_stat[pid]['socket'][file_desc]['protocol'] = 'icmp' 615 | elif socket_domain == 2 and socket_type == 1 and socket_protocol in (0, 17) or \ 616 | socket_domain == 26 and socket_type == 1 and socket_protocol in (0, 17): 617 | proc_stat[pid]['socket'][file_desc]['protocol'] = 'udp' 618 | # Apple daemons => 2:2:0 619 | # Google Chrome => 2:2:6 620 | elif socket_domain == 2 and socket_type == 1 and socket_protocol in (0, 6) or \ 621 | socket_domain == 2 and socket_type == 2 and socket_protocol == 0 or \ 622 | socket_domain == 2 and socket_type == 2 and socket_protocol == 6 or \ 623 | socket_domain == 700 and socket_type == 1 and socket_protocol == 2: 624 | proc_stat[pid]['socket'][file_desc]['protocol'] = 'tcp' 625 | # Ignore socket-unix 626 | elif socket_domain == 1: 627 | pass 628 | # Ignore VMware Tools 629 | elif id == 'com.vmware.vmware-tools-daemon': 630 | pass 631 | else: 632 | protocol = '{}:{}:{}'.format(socket_domain, socket_type, socket_protocol) 633 | proc_stat[pid]['socket'][file_desc]['protocol'] = protocol 634 | dbg_print('Unknown protocol domain:type:protocol => {} / {}'.format(protocol, record)) 635 | 636 | # use this information for debugging unknown protocols 637 | proc_stat[pid]['socket'][file_desc]['socket_param'] = '{}:{}:{}'.format(socket_domain, socket_type, socket_protocol) 638 | 639 | elif event_name in aue_connect_events: 640 | tag_attr = get_attributes(['argument', 'socket-inet', 'socket-inet6', 'subject', 'return'], payload) 641 | if get_signed_int(int(tag_attr['return'][1])) == -1 and not 
with_failure_socket: 642 | continue 643 | 644 | if 'socket-inet' in tag_attr: 645 | socket_inet = 'socket-inet' 646 | version = 4 647 | elif 'socket-inet6' in tag_attr: 648 | socket_inet = 'socket-inet6' 649 | version = 6 650 | else: 651 | continue 652 | 653 | pid = int(tag_attr['subject'][5]) 654 | file_desc = int(tag_attr['argument'][0][1], 16) 655 | dstip = tag_attr[socket_inet][2] 656 | dstport = int(tag_attr[socket_inet][1]) 657 | 658 | if file_desc not in proc_stat[pid]['socket']: 659 | proc_stat[pid]['socket'][file_desc] = dict() 660 | proc_stat[pid]['socket'][file_desc]['protocol'] = 'unknown' 661 | proc_stat[pid]['socket'][file_desc]['dstip'] = dstip 662 | proc_stat[pid]['socket'][file_desc]['dstport'] = dstport 663 | 664 | json_record.append(set_json_record(record_num, 'socket_connection', timestamp, msec, tag_attr)) 665 | json_record[-1]['version'] = version 666 | json_record[-1]['direction'] = 'out' 667 | json_record[-1]['dstip'] = dstip 668 | json_record[-1]['dstport'] = dstport 669 | json_record[-1]['proto'] = proc_stat[pid]['socket'][file_desc]['protocol'] 670 | if 'socket_param' in proc_stat[pid]['socket'][file_desc]: 671 | json_record[-1]['socket_param'] = proc_stat[pid]['socket'][file_desc]['socket_param'] 672 | record_num += 1 673 | 674 | if get_signed_int(int(tag_attr['return'][1])) == -1 and with_failure_socket: 675 | json_record[-1]['result'] = 'failure' 676 | json_record[-1]['reason'] = tag_attr['return'][0] 677 | 678 | elif event_name in aue_send_events: 679 | tag_attr = get_attributes(['argument', 'socket-inet', 'socket-inet6', 'subject', 'return'], payload) 680 | if get_signed_int(int(tag_attr['return'][1])) == -1 and not with_failure_socket: 681 | continue 682 | 683 | if 'socket-inet' in tag_attr: 684 | socket_inet = 'socket-inet' 685 | version = 4 686 | elif 'socket-inet6' in tag_attr: 687 | socket_inet = 'socket-inet6' 688 | version = 6 689 | else: 690 | continue 691 | 692 | pid = int(tag_attr['subject'][5]) 693 | file_desc = 
int(tag_attr['argument'][0][1], 16) 694 | dstip = tag_attr[socket_inet][2] 695 | dstport = int(tag_attr[socket_inet][1]) 696 | 697 | if 'socket' not in proc_stat[pid]: 698 | proc_stat[pid]['socket'] = dict() 699 | 700 | if file_desc not in proc_stat[pid]['socket']: 701 | proc_stat[pid]['socket'][file_desc] = dict() 702 | proc_stat[pid]['socket'][file_desc]['protocol'] = 'unknown' 703 | proc_stat[pid]['socket'][file_desc]['dstip'] = dstip 704 | proc_stat[pid]['socket'][file_desc]['dstport'] = dstport 705 | 706 | json_record.append(set_json_record(record_num, 'socket_connection', timestamp, msec, tag_attr)) 707 | json_record[-1]['version'] = version 708 | json_record[-1]['direction'] = 'out' 709 | json_record[-1]['dstip'] = dstip 710 | json_record[-1]['dstport'] = dstport 711 | json_record[-1]['proto'] = proc_stat[pid]['socket'][file_desc]['protocol'] 712 | if 'socket_param' in proc_stat[pid]['socket'][file_desc]: 713 | json_record[-1]['socket_param'] = proc_stat[pid]['socket'][file_desc]['socket_param'] 714 | record_num += 1 715 | 716 | if get_signed_int(int(tag_attr['return'][1])) == -1 and with_failure_socket: 717 | json_record[-1]['reason'] = tag_attr['return'][0] 718 | json_record[-1]['result'] = 'failure' 719 | 720 | elif event_name in aue_recv_events: 721 | tag_attr = get_attributes(['argument', 'socket-inet', 'socket-inet6', 'subject', 'return'], payload) 722 | if get_signed_int(int(tag_attr['return'][1])) == -1 and not with_failure_socket: 723 | continue 724 | 725 | if 'socket-inet' in tag_attr: 726 | socket_inet = 'socket-inet' 727 | version = 4 728 | elif 'socket-inet6' in tag_attr: 729 | socket_inet = 'socket-inet6' 730 | version = 6 731 | else: 732 | continue 733 | 734 | pid = int(tag_attr['subject'][5]) 735 | file_desc = int(tag_attr['argument'][0][1], 16) 736 | srcip = tag_attr[socket_inet][2] 737 | srcport = int(tag_attr[socket_inet][1]) 738 | 739 | if 'socket' not in proc_stat[pid]: 740 | proc_stat[pid]['socket'] = dict() 741 | 742 | if file_desc not 
in proc_stat[pid]['socket']: 743 | proc_stat[pid]['socket'][file_desc] = dict() 744 | proc_stat[pid]['socket'][file_desc]['protocol'] = 'unknown' 745 | proc_stat[pid]['socket'][file_desc]['srcip'] = srcip 746 | proc_stat[pid]['socket'][file_desc]['srcport'] = srcport 747 | 748 | if 'socket_stat' in proc_stat[pid]['socket'][file_desc]: 749 | if proc_stat[pid]['socket'][file_desc]['socket_stat'] == 'bind': 750 | proc_stat[pid]['socket'][file_desc]['socket_stat'] = 'recv' 751 | proc_stat[pid]['socket'][file_desc]['protocol'] = 'udp' 752 | 753 | json_record.append(set_json_record(record_num, 'socket_connection', timestamp, msec, tag_attr)) 754 | json_record[-1]['version'] = version 755 | json_record[-1]['direction'] = 'in' 756 | json_record[-1]['srcip'] = srcip 757 | json_record[-1]['srcport'] = srcport 758 | json_record[-1]['proto'] = proc_stat[pid]['socket'][file_desc]['protocol'] 759 | if 'socket_param' in proc_stat[pid]['socket'][file_desc]: 760 | json_record[-1]['socket_param'] = proc_stat[pid]['socket'][file_desc]['socket_param'] 761 | record_num += 1 762 | 763 | if get_signed_int(int(tag_attr['return'][1])) == -1 and with_failure_socket: 764 | json_record[-1]['reason'] = tag_attr['return'][0] 765 | json_record[-1]['result'] = 'failure' 766 | 767 | elif event_name in aue_bind_events: 768 | tag_attr = get_attributes(['argument', 'socket-inet', 'socket-inet6', 'subject', 'return'], payload) 769 | if get_signed_int(int(tag_attr['return'][1])) == -1 and not with_failure_socket: 770 | continue 771 | 772 | if 'socket-inet' in tag_attr: 773 | socket_inet = 'socket-inet' 774 | elif 'socket-inet6' in tag_attr: 775 | socket_inet = 'socket-inet6' 776 | else: 777 | continue 778 | 779 | pid = int(tag_attr['subject'][5]) 780 | file_desc = int(tag_attr['argument'][0][1], 16) 781 | dstip = tag_attr[socket_inet][2] 782 | dstport = int(tag_attr[socket_inet][1]) 783 | 784 | if 'socket_stat' in proc_stat[pid]['socket'][file_desc]: 785 | if 
proc_stat[pid]['socket'][file_desc]['socket_stat'] == 'socket': 786 | proc_stat[pid]['socket'][file_desc]['socket_stat'] = 'bind' 787 | 788 | proc_stat[pid]['socket'][file_desc]['dstip'] = dstip 789 | proc_stat[pid]['socket'][file_desc]['dstport'] = dstport 790 | 791 | elif event_name in aue_listen_events: 792 | tag_attr = get_attributes(['argument', 'subject', 'return'], payload) 793 | if get_signed_int(int(tag_attr['return'][1])) == -1 and not with_failure_socket: 794 | continue 795 | 796 | pid = int(tag_attr['subject'][5]) 797 | file_desc = int(tag_attr['argument'][0][1], 16) 798 | 799 | if 'socket_stat' in proc_stat[pid]['socket'][file_desc]: 800 | if proc_stat[pid]['socket'][file_desc]['socket_stat'] == 'bind': 801 | proc_stat[pid]['socket'][file_desc]['socket_stat'] = 'listen' 802 | 803 | elif event_name in aue_accept_events: 804 | tag_attr = get_attributes(['argument', 'socket-inet', 'socket-inet6', 'subject', 'return'], payload) 805 | if get_signed_int(int(tag_attr['return'][1])) == -1 and not with_failure_socket: 806 | continue 807 | 808 | if 'socket-inet' in tag_attr: 809 | socket_inet = 'socket-inet'; version = 4  # 'version' is read below when the JSON record is built 810 | elif 'socket-inet6' in tag_attr: 811 | socket_inet = 'socket-inet6'; version = 6 812 | else: 813 | continue 814 | 815 | pid = int(tag_attr['subject'][5]) 816 | file_desc_listen = int(tag_attr['argument'][0][1], 16) 817 | file_desc = get_signed_int(int(tag_attr['return'][1])) 818 | srcip = tag_attr[socket_inet][2] 819 | srcport = int(tag_attr[socket_inet][1]) 820 | 821 | if 'socket_stat' in proc_stat[pid]['socket'][file_desc_listen]: 822 | if proc_stat[pid]['socket'][file_desc_listen]['socket_stat'] == 'listen': 823 | if file_desc not in proc_stat[pid]['socket']: 824 | proc_stat[pid]['socket'][file_desc] = dict(proc_stat[pid]['socket'][file_desc_listen])  # copy, so the listening socket keeps its 'listen' state 825 | proc_stat[pid]['socket'][file_desc]['socket_stat'] = 'accept' 826 | proc_stat[pid]['socket'][file_desc]['protocol'] = 'tcp' 827 | 828 | json_record.append(set_json_record(record_num, 'socket_connection', timestamp, 
msec, tag_attr)) 829 | json_record[-1]['version'] = version 830 | json_record[-1]['direction'] = 'in' 831 | json_record[-1]['srcip'] = srcip 832 | json_record[-1]['srcport'] = srcport 833 | json_record[-1]['proto'] = proc_stat[pid]['socket'][file_desc]['protocol'] 834 | if 'socket_param' in proc_stat[pid]['socket'][file_desc]: 835 | json_record[-1]['socket_param'] = proc_stat[pid]['socket'][file_desc]['socket_param'] 836 | record_num += 1 837 | 838 | if get_signed_int(int(tag_attr['return'][1])) == -1 and with_failure_socket: 839 | json_record[-1]['result'] = 'failure' 840 | json_record[-1]['reason'] = tag_attr['return'][0] 841 | 842 | elif event_name in aue_close_events: 843 | pass 844 | 845 | return json_record 846 | 847 | 848 | def main(): 849 | args = parse_arguments() 850 | file_openbsm_log, file_out = check_arguments(args) 851 | 852 | bsm_events_json = convert_bsm_events(file_openbsm_log) 853 | 854 | if args.console: 855 | for bsm_event in bsm_events_json: 856 | print(json.dumps(bsm_event, ensure_ascii=False, indent=4)) 857 | 858 | if args.out: 859 | try: 860 | with open(file_out, 'wt') as out_fp: 861 | for bsm_event in bsm_events_json: 862 | json.dump(bsm_event, out_fp, ensure_ascii=False) 863 | out_fp.write('\n') 864 | except OSError as err: 865 | sys.exit(err) 866 | 867 | return 0 868 | 869 | 870 | if __name__ == "__main__": 871 | if platform.system() != 'Darwin': 872 | sys.exit('This script supports macOS only.') 873 | 874 | if sys.version_info[0:2] >= (3, 5): 875 | sys.exit(main()) 876 | else: 877 | sys.exit('This script needs Python 3.5 or later') 878 | -------------------------------------------------------------------------------- /norimaci.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # 3 | # norimaci.py 4 | # Simple and lightweight malware analysis sandbox for macOS. 5 | # This script offers features similar to "Noriben.py".
6 | # 7 | # Copyright 2020 Minoru Kobayashi (@unkn0wnbit) 8 | # 9 | # Licensed under the Apache License, Version 2.0 (the "License"); 10 | # you may not use this file except in compliance with the License. 11 | # You may obtain a copy of the License at 12 | # 13 | # http://www.apache.org/licenses/LICENSE-2.0 14 | # 15 | # Unless required by applicable law or agreed to in writing, software 16 | # distributed under the License is distributed on an "AS IS" BASIS, 17 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 18 | # See the License for the specific language governing permissions and 19 | # limitations under the License. 20 | # 21 | 22 | import os 23 | import sys 24 | import argparse 25 | import datetime 26 | import json 27 | import subprocess 28 | import time 29 | import codecs 30 | import hashlib 31 | import re 32 | import traceback 33 | 34 | try: 35 | import applescript 36 | has_applescript = True 37 | except ImportError: 38 | has_applescript = False 39 | 40 | has_internet = False 41 | 42 | config = { 43 | 'monitor_app': '/Applications/Monitor.app', 44 | 'praudit': '/usr/sbin/praudit', 45 | 'openbsm_data_conv': './openbsmconv.py', 46 | 'monitor_data_conv': './monitorappconv.py', 47 | 'debug': False, 48 | 'haedless': False, 49 | 'troubleshoot': False, 50 | 'timeout_seconds': 0, 51 | 'virustotal_api_key': '', 52 | 'yara_folder': '', 53 | 'hash_type': 'SHA256', 54 | 'txt_extension': 'txt', 55 | 'output_folder': '', 56 | 'global_whitelist_append': '', 57 | } 58 | 59 | whitelist_process = [ 60 | {'record_type': 'info'}, 61 | {'record_type': 'procexec', 'path': r'/usr/libexec/xpcproxy', 'ppid': 1}, # for Monitor.app 62 | {'record_type': 'procexec', 'procname': r'/sbin/launchd', 'path': r'xpcproxy', 'ppid': 1}, # for OpenBSM 63 | {'record_type': 'procexec', 'path': r'(/System/Library/CoreServices/)?iconservicesagent?', 'ppid': 1}, 64 | {'record_type': 'file_write', 'procname': r'(/System/Library/CoreServices/)?iconservicesagent?', 'ppid': 
1}, 65 | {'record_type': 'file_rename', 'procname': r'(/System/Library/CoreServices/)?iconservicesagent?', 'ppid': 1}, 66 | {'record_type': 'folder_create', 'procname': r'(/System/Library/CoreServices/)?iconservicesagent?', 'ppid': 1}, 67 | {'record_type': 'folder_delete', 'procname': r'(/System/Library/CoreServices/)?iconservicesagent?', 'ppid': 1}, 68 | {'record_type': 'procexec', 'path': r'(/System/Library/CoreServices/)?iconservicesd', 'ppid': 1}, 69 | {'record_type': 'file_write', 'procname': r'(/System/Library/CoreServices/)?iconservicesd', 'ppid': 1}, 70 | {'record_type': 'file_rename', 'procname': r'(/System/Library/CoreServices/)?iconservicesd', 'ppid': 1}, 71 | {'record_type': 'procexec', 'path': r'/usr/libexec/periodic-wrapper', 'ppid': 1}, 72 | {'record_type': 'file_write', 'procname': r'(/usr/sbin/)?cfprefsd', 'ppid': 1}, 73 | {'record_type': 'file_rename', 'procname': r'(/usr/sbin/)?cfprefsd', 'ppid': 1}, 74 | {'record_type': 'file_delete', 'procname': r'(/usr/sbin/)?cfprefsd', 'ppid': 1}, 75 | {'record_type': 'file_write', 'procname': r'(/System/Library/Frameworks/OpenGL\.framework/Versions/A/Libraries/)?CVMServer', 'ppid': 1}, 76 | {'record_type': 'file_delete', 'procname': r'(/System/Library/Frameworks/OpenGL\.framework/Versions/A/Libraries/)?CVMServer', 'ppid': 1}, 77 | {'record_type': 'file_write', 'procname': r'(/System/Library/PrivateFrameworks/XprotectFramework\.framework/Versions/A/XPCServices/XprotectService\.xpc/Contents/MacOS/XprotectService/)?XprotectService', 'ppid': 1}, 78 | {'record_type': 'file_rename', 'procname': r'(/System/Library/PrivateFrameworks/XprotectFramework\.framework/Versions/A/XPCServices/XprotectService\.xpc/Contents/MacOS/XprotectService/)?XprotectService', 'ppid': 1}, 79 | {'record_type': 'file_write', 'procname': r'(/usr/libexec/)?logd', 'ppid': 1}, 80 | {'record_type': 'file_rename', 'procname': r'(/usr/libexec/)?logd', 'ppid': 1}, 81 | # 'Spotlight', 82 | {'record_type': 'file_write', 'procname': r'corespotlightd', 
'ppid': 1}, 83 | {'record_type': 'file_write', 'procname': r'(/System/Library/Frameworks/CoreServices\.framework/Frameworks/Metadata\.framework/Support/)?mds', 'ppid': 1}, 84 | {'record_type': 'file_rename', 'procname': r'(/System/Library/Frameworks/CoreServices\.framework/Frameworks/Metadata\.framework/Support/)?mds', 'ppid': 1}, 85 | {'record_type': 'file_delete', 'procname': r'(/System/Library/Frameworks/CoreServices\.framework/Frameworks/Metadata\.framework/Support/)?mds', 'ppid': 1}, 86 | {'record_type': 'file_write', 'procname': r'(/System/Library/Frameworks/CoreServices\.framework/Frameworks/Metadata\.framework/Versions/A/Support/)?mds_stores', 'ppid': 1}, 87 | {'record_type': 'procexec', 'path': r'/System/Library/Frameworks/CoreServices\.framework/(Versions/A/)?Frameworks/Metadata\.framework/Versions/A/Support/mdworker', 'ppid': 1}, 88 | {'record_type': 'file_write', 'procname': r'mdworker', 'ppid': 1}, 89 | {'record_type': 'file_rename', 'procname': r'mdworker', 'ppid': 1}, 90 | {'record_type': 'procexec', 'path': r'/System/Library/Frameworks/CoreServices\.framework/(Versions/A/)?Frameworks/Metadata\.framework/Versions/A/Support/mdworker_shared', 'ppid': 1}, 91 | {'record_type': 'dylib_load', 'procname': r'mdworker_shared', 'ppid': 1}, 92 | {'record_type': 'file_write', 'procname': r'mdworker_shared', 'ppid': 1}, 93 | {'record_type': 'file_rename', 'procname': r'mdworker_shared', 'ppid': 1}, 94 | {'record_type': 'file_write', 'procname': r'(/usr/libexec/)?lsd', 'ppid': 1}, 95 | {'record_type': 'file_rename', 'procname': r'(/usr/libexec/)?lsd', 'ppid': 1}, 96 | {'record_type': 'procexec', 'procname': r'(/usr/libexec/)?trustd', 'ppid': 1}, 97 | {'record_type': 'socket_connection', 'procname': r'(/usr/libexec/)?trustd', 'ppid': 1}, 98 | {'record_type': 'file_write', 'procname': r'(/usr/libexec/)?trustd', 'ppid': 1}, 99 | {'record_type': 'file_rename', 'procname': r'(/usr/libexec/)?trustd', 'ppid': 1}, 100 | {'record_type': 'file_write', 'procname': r'fud', 
'ppid': 1}, 101 | {'record_type': 'file_rename', 'procname': r'fud', 'ppid': 1}, 102 | {'record_type': 'file_write', 'procname': r'(/usr/libexec/)?nehelper', 'ppid': 1}, 103 | {'record_type': 'file_rename', 'procname': r'(/usr/libexec/)?nehelper', 'ppid': 1}, 104 | {'record_type': 'folder_create', 'procname': r'(/usr/libexec/)?nehelper', 'ppid': 1}, 105 | {'record_type': 'folder_delete', 'procname': r'(/usr/libexec/)?nehelper', 'ppid': 1}, 106 | {'record_type': 'file_write', 'procname': r'pbs', 'ppid': 1}, 107 | {'record_type': 'file_rename', 'procname': r'pbs', 'ppid': 1}, 108 | {'record_type': 'file_write', 'procname': r'mobileassetd', 'ppid': 1}, 109 | {'record_type': 'file_rename', 'procname': r'mobileassetd', 'ppid': 1}, 110 | {'record_type': 'dylib_load', 'procname': r'com\.apple\.MediaL', 'path': r'/Library/Application Support/iLifeMediaBrowser/Plug-Ins/iLMB.+', 'ppid': 1}, 111 | {'record_type': 'file_write', 'procname': r'/System/Library/CoreServices/sharedfilelistd', 'ppid': 1}, 112 | {'record_type': 'file_rename', 'procname': r'/System/Library/CoreServices/sharedfilelistd', 'ppid': 1}, 113 | {'record_type': 'file_write', 'procname': r'/System/Library/Frameworks/MediaLibrary.framework/Versions/A/XPCServices/com.apple.MediaLibraryService.xpc/Contents/MacOS/com.apple.MediaLibraryService', 'ppid': 1}, 114 | {'record_type': 'file_rename', 'procname': r'/System/Library/Frameworks/MediaLibrary.framework/Versions/A/XPCServices/com.apple.MediaLibraryService.xpc/Contents/MacOS/com.apple.MediaLibraryService', 'ppid': 1}, 115 | {'record_type': 'file_delete', 'procname': r'/System/Library/Frameworks/MediaLibrary.framework/Versions/A/XPCServices/com.apple.MediaLibraryService.xpc/Contents/MacOS/com.apple.MediaLibraryService', 'ppid': 1}, 116 | {'record_type': 'folder_create', 'procname': r'/System/Library/Frameworks/MediaLibrary.framework/Versions/A/XPCServices/com.apple.MediaLibraryService.xpc/Contents/MacOS/com.apple.MediaLibraryService', 'ppid': 1}, 117 | 
{'record_type': 'procexec', 'path': r'/usr/libexec/AssetCache/AssetCache', 'ppid': 1}, 118 | {'record_type': 'file_write', 'procname': r'/usr/libexec/AssetCache/AssetCache', 'ppid': 1}, 119 | {'record_type': 'folder_create', 'procname': r'/usr/libexec/AssetCache/AssetCache', 'ppid': 1}, 120 | ] 121 | 122 | whitelist_file = [ 123 | {'record_type': 'file_write', 'path': r'.+/\.DS_Store$'}, 124 | {'record_type': 'file_write', 'path': r'/Users/.+/Library/Containers/com\.apple\.Safari/Data/Library/Caches/com\.apple\.Safari/WebKitCache/Version \d+/Records/.+'}, 125 | {'record_type': 'file_rename', 'oldpath': r'/Users/.+/Library/Containers/com\.apple\.Safari/Data/Library/Caches/com\.apple\.Safari/WebKitCache/Version \d+/Records/.+'}, 126 | {'record_type': 'file_rename', 'newpath': r'/Users/.+/Library/Containers/com\.apple\.Safari/Data/Library/Caches/com\.apple\.Safari/WebKitCache/Version \d+/Records/.+'}, 127 | {'record_type': 'file_write', 'path': r'/private/var/folders/.+/.+/T(/.+)?/TemporaryItems/\(A Document Being Saved By .+\)/.+'}, 128 | {'record_type': 'file_rename', 'oldpath': r'/private/var/folders/.+/.+/T(/.+)?/TemporaryItems/\(A Document Being Saved By .+\)/.+'}, 129 | {'record_type': 'dylib_load', 'path': r'/private/var/db/CVMS/cvmsCodeSignObj.+'}, 130 | ] 131 | 132 | whitelist_hash = [] 133 | 134 | # initialize with empty list 135 | auto_whitelist_pid = list() 136 | 137 | __VERSION__ = '0.1.0' 138 | virustotal_upload = True if config['virustotal_api_key'] else False 139 | use_virustotal = True if config['virustotal_api_key'] and has_internet else False 140 | file_debug = None 141 | time_exec = 0 142 | time_process = 0 143 | 144 | # initialize with empty dict 145 | # {pid: process full path} 146 | process_full_path = dict() 147 | # {path: True or False} 148 | process_codesign_verify = dict() 149 | 150 | 151 | def match_whitelist(whitelist, record): 152 | for whitelist_entry in whitelist: 153 | match_num = 0 154 | for element_type in whitelist_entry.keys(): 155 
| # The elements below have integer-type values as their data 156 | if element_type in ['pid', 'ppid', 'uid', 'gid', 'euid', 'egid', 'is64', 'argc', 'version', 'srcport', 'dstport', 'dev']: 157 | if element_type in record and record[element_type] == whitelist_entry[element_type]: 158 | match_num = match_num + 1 159 | else: 160 | break 161 | else: 162 | try: 163 | if element_type in record and re.match(whitelist_entry[element_type], record[element_type]): 164 | match_num = match_num + 1 165 | else: 166 | break 167 | except Exception: 168 | dbg_print('[!] Error found while processing filters.\r\nFilter:\t{}\r\nEvent:\t{}'.format(whitelist_entry[element_type], record)) 169 | dbg_print(traceback.format_exc()) 170 | return False 171 | 172 | if match_num == len(whitelist_entry.keys()): 173 | dbg_print("----- Filtered!! ----- {}".format(record)) 174 | return True 175 | 176 | dbg_print("----- NOT Filtered!! ----- {}".format(record)) 177 | return False 178 | 179 | 180 | def match_auto_whitelist(auto_whitelist, record): 181 | if 'ppid' in record and record['ppid'] in auto_whitelist: 182 | dbg_print("----- Filtered!! 
(Auto) ----- {}".format(record)) 183 | return True 184 | else: 185 | return False 186 | 187 | 188 | def check_persistence_path(path): 189 | persistence_path_list = [ 190 | r'(/System)?/Library/LaunchDaemons/.+\.plist$', 191 | r'(/System)?/Library/LaunchAgents/.+\.plist$', 192 | r'/Users/.+/Library/LaunchAgents/.+\.plist$', 193 | r'(/private)?/var/.+/Library/LaunchAgents/.+\.plist$', 194 | r'(/private)?/var/db/com\.apple\.xpc\.launchd/disabled\..+\.plist', 195 | r'(/private)?/var/at/tabs/.+', 196 | r'(/System)?/Library/ScriptingAdditions/.+\.osax$', 197 | r'(/System)?/Library/StartupItems/.+', 198 | r'(/private)?/etc/periodic\.conf$', 199 | r'(/private)?/etc/periodic/.+/.+', 200 | r'(/private)?/etc/.*\.local$', 201 | r'(/private)?/etc/rc\.common$', 202 | r'(/private)?/etc/emond\.d/.+', 203 | r'(/Users/.+|(/private)?/var)/Library/Preferences/com\.apple\.loginitems\.plist$', 204 | r'/Users/.+/Library/Application Support/com\.apple\.backgroundtaskmanagementagent/backgrounditems\.btm$', 205 | ] 206 | 207 | for persistence_path in persistence_path_list: 208 | if re.match(persistence_path, path): 209 | return True 210 | 211 | return False 212 | 213 | 214 | def decode_json_obj(data): 215 | try: 216 | record, index = json.JSONDecoder().raw_decode(data) 217 | if config['debug']: 218 | print("{} : {}".format(record, index)) 219 | return record, index 220 | except json.JSONDecodeError: 221 | raise 222 | 223 | 224 | # load JSON data stream from a file 225 | def load_json_file(file_path, whitelist, auto_whitelist): 226 | event_records = [] 227 | read_data_size = 0 228 | 229 | try: 230 | fp = codecs.open(file_path, 'r', 'utf-8') 231 | json_file_size = os.path.getsize(file_path) 232 | except OSError as err: 233 | sys.exit('[!] 
Fatal: Error in load_json_file(): {}'.format(err)) 234 | 235 | data = fp.read(4096) 236 | read_data_size = len(data) 237 | data = data.replace('\n', '') 238 | while True: 239 | if data or read_data_size < json_file_size: 240 | try: 241 | record, index = decode_json_obj(data) 242 | # if match_auto_whitelist(auto_whitelist, record) or match_whitelist(whitelist, record): 243 | if match_whitelist(whitelist, record): 244 | data = data[index:] 245 | if ('pid' in record and record['pid'] not in auto_whitelist) and \ 246 | ('procname' in record and record['procname'] != r'/usr/libexec/xpcproxy') and \ 247 | ('procname' in record and record['procname'] != r'/sbin/launchd'): 248 | auto_whitelist.append(record['pid']) 249 | dbg_print("Appended to auto_whitelist : {}".format(auto_whitelist)) 250 | continue 251 | else: 252 | if record['record_type'] == 'procexec': 253 | process_full_path[record['pid']] = record['path'] 254 | event_records.append(record) 255 | data = data[index:] 256 | except json.JSONDecodeError: 257 | tmp = fp.read(4096) 258 | read_data_size += len(tmp) 259 | tmp = tmp.replace('\n', '') 260 | data += tmp 261 | else: 262 | fp.close() 263 | break 264 | 265 | return event_records 266 | 267 | 268 | def get_proc_list(): 269 | # https://codeday.me/jp/qa/20190310/380909.html 270 | try: 271 | proc_stat = dict() 272 | proc_stat[0] = dict() 273 | proc_stat[0]['procname'] = 'kernel_task' 274 | proc_stat[0]['ppid'] = 0 275 | 276 | process_list = [(int(pid), int(ppid), comm) for pid, ppid, comm in [x.strip().split(maxsplit=2) for x in os.popen('ps -Ao pid,ppid,comm')][1:]] 277 | for pid, ppid, procname in process_list: 278 | proc_stat[pid] = dict() 279 | proc_stat[pid]['procname'] = procname 280 | proc_stat[pid]['ppid'] = ppid 281 | return proc_stat 282 | except (OSError, KeyError) as err: 283 | sys.exit(err) 284 | 285 | 286 | def save_proc_list(file_proc_list, proc_stat): 287 | try: 288 | with open(file_proc_list, 'wt') as fp: 289 | json.dump(proc_stat, fp, 
ensure_ascii=False, indent=4) 290 | return True 291 | except OSError as err: 292 | sys.exit(err) 293 | 294 | 295 | def launch_openbsm(fp): 296 | global time_exec 297 | time_exec = time.time() 298 | 299 | try: 300 | return subprocess.Popen([config['praudit'], '-ls', '/dev/auditpipe'], stdout=fp) 301 | except (OSError, ValueError) as err: 302 | sys.exit('[!] Fatal: Error in launch_openbsm: {}'.format(err)) 303 | 304 | 305 | def terminate_openbsm(proc_openbsm, fp): 306 | global time_exec 307 | time_exec = time.time() - time_exec 308 | 309 | try: 310 | proc_openbsm.terminate() 311 | returncode = None 312 | # time.sleep(2) 313 | while returncode is None: 314 | time.sleep(0.1) 315 | returncode = proc_openbsm.poll() 316 | fp.close() 317 | # return proc_openbsm.returncode 318 | return returncode 319 | except Exception as err: 320 | sys.exit('[!] Fatal: Error in terminate_openbsm: {}'.format(err)) 321 | 322 | 323 | def launch_monitor_app(): 324 | global time_exec 325 | time_exec = time.time() 326 | 327 | if not os.path.exists(config['monitor_app']): 328 | sys.exit('[!] 
Monitor.app does not exist: {}'.format(config['monitor_app'])) 329 | 330 | scpt = applescript.AppleScript(''' 331 | tell application "Monitor" 332 | activate 333 | delay 1 334 | end tell 335 | ''') 336 | scpt.run() 337 | toggle_monitoring() 338 | 339 | 340 | def terminate_monitor_app(): 341 | global time_exec 342 | time_exec = time.time() - time_exec 343 | 344 | scpt = applescript.AppleScript(''' 345 | tell application "Monitor" 346 | -- activate 347 | delay 1 348 | end tell 349 | ''') 350 | scpt.run() 351 | toggle_monitoring() 352 | 353 | 354 | def save_monitor_app_data(output_folder, filename): 355 | scpt1 = (''' 356 | tell application "System Events" 357 | tell process "Monitor" 358 | set frontmost to true 359 | -- activate 360 | delay 1 361 | key code 1 using {shift down, command down} -- Save As 362 | ''') 363 | scpt2 = (''' 364 | -- delay 0.5 365 | -- keystroke "/" 366 | delay 0.5 367 | keystroke "{}" 368 | delay 0.5 369 | key code 36 -- Enter 370 | delay 1 371 | keystroke "{}" 372 | delay 0.5 373 | key code 36 -- Enter 374 | end tell 375 | end tell 376 | '''.format(output_folder, filename)) 377 | scpt = applescript.AppleScript(scpt1 + scpt2) 378 | scpt.run() 379 | 380 | 381 | def quit_monitor_app(): 382 | scpt = applescript.AppleScript(''' 383 | tell application "Monitor" 384 | delay 1 385 | activate 386 | quit 387 | end tell 388 | ''') 389 | scpt.run() 390 | 391 | 392 | def toggle_monitoring(): 393 | scpt = applescript.AppleScript(''' 394 | tell application "System Events" 395 | tell process "Monitor" 396 | tell window "Monitor" 397 | click checkbox 4 398 | -- checkbox 0 : Filters Process Events 399 | -- checkbox 1 : Filters Process Events 400 | -- checkbox 2 : Filters File Events 401 | -- checkbox 3 : Filters Network Events 402 | -- checkbox 4 : Monitor button 403 | -- checkbox 5 : Scroll Enable/Disable 404 | -- button 1 : Clear Log 405 | end tell 406 | end tell 407 | end tell 408 | ''') 409 | scpt.run() 410 | 411 | 412 | def get_session_name(): 413 | 
return datetime.datetime.now().strftime('%d_%b_%y__%H_%M_%f') 414 | 415 | 416 | def get_script_dir(): 417 | return os.path.dirname(os.path.abspath(sys.argv[0])) 418 | 419 | 420 | def launch_data_converter(monitor, file_monitor, file_json, file_proclist=None): 421 | script_dir = get_script_dir() 422 | try: 423 | if monitor == 'openbsm': 424 | if file_proclist: 425 | converter_cmd = ('python3', os.path.join(script_dir, config['openbsm_data_conv']), '-p', file_proclist, '--with-failure-socket') 426 | else: 427 | converter_cmd = ('python3', os.path.join(script_dir, config['openbsm_data_conv']), '--with-failure-socket') 428 | elif monitor == 'monitorapp': 429 | converter_cmd = ('python3', os.path.join(script_dir, config['monitor_data_conv'])) 430 | else: 431 | sys.exit('[!] Error: Unknown monitor type: {}'.format(monitor)) 432 | 433 | try: 434 | subprocess.run([*converter_cmd, '-f', file_monitor, '-o', file_json, '--force'], check=True) 435 | except subprocess.CalledProcessError as err: 436 | sys.exit('[!] Fatal: Failed to run data converter: {}'.format(err)) 437 | except OSError as err: 438 | sys.exit('[!] Fatal: Error in launch_data_converter: {}'.format(err)) 439 | 440 | 441 | def calc_file_hash(file): 442 | if config['hash_type'] == 'MD5': 443 | return hashlib.md5(codecs.open(file, 'rb').read()).hexdigest() 444 | elif config['hash_type'] == 'SHA1': 445 | return hashlib.sha1(codecs.open(file, 'rb').read()).hexdigest() 446 | elif config['hash_type'] == 'SHA256': 447 | return hashlib.sha256(codecs.open(file, 'rb').read()).hexdigest() 448 | 449 | 450 | def virustotal_query_hash(hashval): 451 | pass 452 | 453 | 454 | # verify code signature of external command file 455 | def verify_codesign(cmd): 456 | try: 457 | if not subprocess.call(['/usr/bin/codesign', '--verify', cmd]): 458 | return True 459 | else: 460 | sys.exit('[!] Fatal: Failed to verify code signature: {}'.format(cmd)) 461 | except OSError as err: 462 | sys.exit('[!] 
Fatal: Error in verify_codesign(): {}'.format(err)) 463 | 464 | 465 | def dbg_print(msg): 466 | if msg and config['debug']: 467 | print('{}'.format(msg)) 468 | if file_debug: 469 | codecs.open(file_debug, 'a', 'utf-8').write('{}\n'.format(msg)) 470 | return True 471 | 472 | return False 473 | 474 | 475 | def parse_dns_reply(dns_reply): 476 | ip_addresses = list() 477 | query_host = dns_reply['dns']['dns_query'] 478 | 479 | for dns_response in dns_reply['dns']['dns_replies']: 480 | host, ttl, dns_class, record_type, address = dns_response.split() 481 | if host == query_host: 482 | if record_type == 'CNAME': 483 | query_host = address 484 | elif record_type == 'A': 485 | ip_addresses.append(address) 486 | 487 | return '|'.join(ip_addresses) 488 | 489 | 490 | def analyze_events(event_records, report, timeline): 491 | report_process = list() 492 | report_file = list() 493 | report_kext = list() 494 | report_dylib = list() 495 | # report_plist = list() 496 | report_persistence = list() 497 | report_network = list() 498 | report_dns = list() 499 | report_tty = list() 500 | report_error = list() 501 | remote_servers = list() 502 | 503 | time_parse_start = time.time() 504 | 505 | for event in event_records: 506 | outputtext = '' 507 | tl_text = '' 508 | date_stamp = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(event['timestamp'])) + '.' 
+ format(event['timestamp_ns'], '09d') 509 | 510 | if event['pid'] in process_full_path: 511 | procname = process_full_path[event['pid']] 512 | else: 513 | procname = event['procname'] 514 | 515 | if event['record_type'] == 'procexec': 516 | if event['ppid'] in process_full_path: 517 | pprocname = process_full_path[event['ppid']] 518 | else: 519 | pprocname = event['pprocname'] 520 | 521 | argv = event['argv'].replace('\x00', ' ').strip() 522 | outputtext = '[CreateProcess] {}:{} > "{}"\t[Child PID: {}]'.format(pprocname, event['ppid'], argv, event['pid']) 523 | tl_text = '{},Process,CreateProcess,{},{},{},{}'.format(date_stamp, pprocname, event['ppid'], argv, event['pid']) 524 | report_process.append(outputtext) 525 | timeline.append(tl_text) 526 | 527 | elif event['record_type'] == 'file_write': 528 | path = event['path'] 529 | yara_hits = '' 530 | # if config['yara_folder'] and yara_rules: 531 | # yara_hits = yara_filescan(path, yara_rules) 532 | 533 | if os.path.isdir(path): 534 | outputtext = '[CreateFolder] {}:{} > {}'.format(procname, event['pid'], path) 535 | tl_text = '{},File,CreateFolder,{},{},{}'.format(date_stamp, procname, event['pid'], path) 536 | report_file.append(outputtext) 537 | timeline.append(tl_text) 538 | else: 539 | try: 540 | hashval = calc_file_hash(path) 541 | if hashval in whitelist_hash: 542 | dbg_print('[_] Skipping hash: {}'.format(hashval)) 543 | continue 544 | 545 | av_hits = '' 546 | if use_virustotal and has_internet: 547 | av_hits = virustotal_query_hash(hashval) 548 | 549 | outputtext = '[CreateFile] {}:{} > {}\t[{}: {}]{}{}'.format(procname, event['pid'], path, config['hash_type'], hashval, yara_hits, av_hits) 550 | tl_text = '{},File,CreateFile,{},{},{},{},{},{},{}'.format(date_stamp, procname, event['pid'], path, config['hash_type'], hashval, yara_hits, av_hits) 551 | report_file.append(outputtext) 552 | timeline.append(tl_text) 553 | 554 | if check_persistence_path(path): 555 | outputtext = '[Persistence] {}:{} > {}\t[{}: 
{}]{}{}'.format(procname, event['pid'], path, config['hash_type'], hashval, yara_hits, av_hits)
556 |                     tl_text = '{},File,Persistence,{},{},{},{},{},{},{}'.format(date_stamp, procname, event['pid'], path, config['hash_type'], hashval, yara_hits, av_hits)
557 |                     report_persistence.append(outputtext)
558 |                     timeline.append(tl_text)
559 | 
560 |             except (IndexError, IOError):
561 |                 outputtext = '[CreateFile] {}:{} > {}\t[File no longer exists]'.format(procname, event['pid'], path)
562 |                 tl_text = '{},File,CreateFile,{},{},{},N/A'.format(date_stamp, procname, event['pid'], path)
563 |                 report_file.append(outputtext)
564 |                 timeline.append(tl_text)
565 | 
566 |                 if check_persistence_path(path):
567 |                     outputtext = '[Persistence] {}:{} > {}\t[File no longer exists]'.format(procname, event['pid'], path)
568 |                     tl_text = '{},File,Persistence,{},{},{},N/A'.format(date_stamp, procname, event['pid'], path)
569 |                     report_persistence.append(outputtext)
570 |                     timeline.append(tl_text)
571 | 
572 |         elif event['record_type'] == 'file_rename':
573 |             outputtext = '[RenameFile] {}:{} > {} => {}'.format(procname, event['pid'], event['oldpath'], event['newpath'])
574 |             tl_text = '{},File,RenameFile,{},{},{},{}'.format(date_stamp, procname, event['pid'], event['oldpath'], event['newpath'])
575 |             report_file.append(outputtext)
576 |             timeline.append(tl_text)
577 | 
578 |             if check_persistence_path(event['newpath']):
579 |                 outputtext = '[Persistence] {}:{} > {} => {}'.format(procname, event['pid'], event['oldpath'], event['newpath'])
580 |                 tl_text = '{},File,Persistence,{},{},{},{}'.format(date_stamp, procname, event['pid'], event['oldpath'], event['newpath'])
581 |                 report_persistence.append(outputtext)
582 |                 timeline.append(tl_text)
583 | 
584 |         elif event['record_type'] == 'file_delete':
585 |             path = event['path']
586 |             outputtext = '[DeleteFile] {}:{} > {}'.format(procname, event['pid'], path)
587 |             tl_text = '{},File,DeleteFile,{},{},{}'.format(date_stamp, procname, event['pid'], path)
588 |             report_file.append(outputtext)
589 |             timeline.append(tl_text)
590 | 
591 |         elif event['record_type'] == 'folder_create':
592 |             path = event['path']
593 |             outputtext = '[CreateFolder] {}:{} > {}'.format(procname, event['pid'], path)
594 |             tl_text = '{},File,CreateFolder,{},{},{}'.format(date_stamp, procname, event['pid'], path)
595 |             report_file.append(outputtext)
596 |             timeline.append(tl_text)
597 | 
598 |         elif event['record_type'] == 'folder_delete':
599 |             path = event['path']
600 |             outputtext = '[DeleteFolder] {}:{} > {}'.format(procname, event['pid'], path)
601 |             tl_text = '{},File,DeleteFolder,{},{},{}'.format(date_stamp, procname, event['pid'], path)
602 |             report_file.append(outputtext)
603 |             timeline.append(tl_text)
604 | 
605 |         elif event['record_type'] == 'kext_load':
606 |             outputtext = '[LoadKext] {}:{} == {}'.format(procname, event['pid'], event['path'])
607 |             tl_text = '{},File,LoadKext,{},{},{}'.format(date_stamp, procname, event['pid'], event['path'])
608 |             report_kext.append(outputtext)
609 |             timeline.append(tl_text)
610 | 
611 |         elif event['record_type'] == 'dylib_load':
612 |             outputtext = '[LoadDylib] {}:{} == {}'.format(procname, event['pid'], event['path'])
613 |             tl_text = '{},File,LoadDylib,{},{},{}'.format(date_stamp, procname, event['pid'], event['path'])
614 |             report_dylib.append(outputtext)
615 |             timeline.append(tl_text)
616 | 
617 |         elif event['record_type'] == 'socket_connection':
618 |             if event['proto'] == 'tcp' and event['direction'] == 'out':
619 |                 outputtext = '[TCP] {}:{} > {}:{}'.format(procname, event['pid'], event['dstip'], event['dstport'])
620 |                 if outputtext not in report_network:
621 |                     report_network.append(outputtext)
622 |                 tl_text = '{},Network,TCP Send,{},{},{},{}'.format(date_stamp, procname, event['pid'], event['dstip'], event['dstport'])
623 |                 timeline.append(tl_text)
624 | 
625 |             elif event['proto'] == 'tcp' and event['direction'] == 'in':
626 |                 outputtext = '[TCP] {}:{} > {}:{}'.format(event['srcip'], event['srcport'], procname, event['pid'])
627 |                 if outputtext not in report_network:
628 |                     report_network.append(outputtext)
629 |                 tl_text = '{},Network,TCP Receive,{},{},{},{}'.format(date_stamp, procname, event['pid'], event['srcip'], event['srcport'])
630 |                 timeline.append(tl_text)
631 | 
632 |             elif event['proto'] == 'udp' and event['direction'] == 'out':
633 |                 outputtext = '[UDP] {}:{} > {}:{}'.format(procname, event['pid'], event['dstip'], event['dstport'])
634 |                 if outputtext not in report_network:
635 |                     report_network.append(outputtext)
636 |                 tl_text = '{},Network,UDP Send,{},{},{},{}'.format(date_stamp, procname, event['pid'], event['dstip'], event['dstport'])
637 |                 timeline.append(tl_text)
638 | 
639 |             elif event['proto'] == 'udp' and event['direction'] == 'in':
640 |                 outputtext = '[UDP] {}:{} > {}:{}'.format(event['srcip'], event['srcport'], procname, event['pid'])
641 |                 if outputtext not in report_network:
642 |                     report_network.append(outputtext)
643 |                 tl_text = '{},Network,UDP Receive,{},{},{},{}'.format(date_stamp, procname, event['pid'], event['srcip'], event['srcport'])
644 |                 timeline.append(tl_text)
645 | 
646 |             elif event['proto'] == 'icmp' and event['direction'] == 'out':
647 |                 outputtext = '[ICMP] {}:{} > {}:{}'.format(procname, event['pid'], event['dstip'], event['dstport'])
648 |                 if outputtext not in report_network:
649 |                     report_network.append(outputtext)
650 |                 tl_text = '{},Network,ICMP Send,{},{},{},{}'.format(date_stamp, procname, event['pid'], event['dstip'], event['dstport'])
651 |                 timeline.append(tl_text)
652 | 
653 |             elif event['proto'] == 'icmp' and event['direction'] == 'in':
654 |                 outputtext = '[ICMP] {}:{} > {}:{}'.format(event['srcip'], event['srcport'], procname, event['pid'])
655 |                 if outputtext not in report_network:
656 |                     report_network.append(outputtext)
657 |                 tl_text = '{},Network,ICMP Receive,{},{},{},{}'.format(date_stamp, procname, event['pid'], event['srcip'], event['srcport'])
658 |                 timeline.append(tl_text)
659 | 
660 |             elif event['proto'] == 'unknown' and event['direction'] == 'out':
661 |                 outputtext = '[TCP|UDP] {}:{} > {}:{}'.format(procname, event['pid'], event['dstip'], event['dstport'])
662 |                 if outputtext not in report_network:
663 |                     report_network.append(outputtext)
664 |                 tl_text = '{},Network,TCP|UDP Send,{},{},{},{}'.format(date_stamp, procname, event['pid'], event['dstip'], event['dstport'])
665 |                 timeline.append(tl_text)
666 | 
667 |             elif event['proto'] == 'unknown' and event['direction'] == 'in':
668 |                 outputtext = '[TCP|UDP] {}:{} > {}:{}'.format(event['srcip'], event['srcport'], procname, event['pid'])
669 |                 if outputtext not in report_network:
670 |                     report_network.append(outputtext)
671 |                 tl_text = '{},Network,TCP|UDP Receive,{},{},{},{}'.format(date_stamp, procname, event['pid'], event['srcip'], event['srcport'])
672 |                 timeline.append(tl_text)
673 | 
674 |             else:
675 |                 report_error.append('Unknown protocol type: {},{}'.format(event['proto'], event['direction']))
676 | 
677 |         elif event['record_type'] == 'dns_request':
678 |             outputtext = '[DNS] {}:{} ? {}'.format(procname, event['pid'], event['dns']['dns_query'])
679 |             if outputtext not in report_dns:
680 |                 report_dns.append(outputtext)
681 | 
682 |             tl_text = '{},Network,DNS Query,{},{},{}'.format(date_stamp, procname, event['pid'], event['dns']['dns_query'])
683 |             timeline.append(tl_text)
684 | 
685 |         elif event['record_type'] == 'dns_reply':
686 |             outputtext = '[DNS] {}:{} ? {} => {}'.format(procname, event['pid'], event['dns']['dns_query'], parse_dns_reply(event))
687 |             if outputtext not in report_dns:
688 |                 report_dns.append(outputtext)
689 | 
690 |             tl_text = '{},Network,DNS Response,{},{},{},{}'.format(date_stamp, procname, event['pid'], event['dns']['dns_query'], parse_dns_reply(event))
691 |             timeline.append(tl_text)
692 | 
693 |         elif event['record_type'] == 'tty':
694 |             if event['operation'] == 'create':
695 |                 outputtext = '[CreateTTY] {}:{} > TTY:{}'.format(procname, event['pid'], event['dev'])
696 |                 tl_text = '{},TTY,Create,{},{},{}'.format(date_stamp, procname, event['pid'], event['dev'])
697 | 
698 |             elif event['operation'] == 'close':
699 |                 outputtext = '[CloseTTY] {}:{} > TTY:{}'.format(procname, event['pid'], event['dev'])
700 |                 tl_text = '{},TTY,Close,{},{},{}'.format(date_stamp, procname, event['pid'], event['dev'])
701 | 
702 |             else:
703 |                 report_error.append('Unknown TTY operation: {}'.format(event['operation']))
704 |                 continue  # outputtext/tl_text are not set for an unknown operation
705 |             report_tty.append(outputtext)
706 |             timeline.append(tl_text)
707 | 
708 |         else:
709 |             report_error.append("Unknown record: {}".format(event))
710 | 
711 |         if ('dstip' in event) and (event['dstip'] not in remote_servers):
712 |             if event['dstip'] != '127.0.0.1' and event['dstip'] != '::1':
713 |                 remote_servers.append(event['dstip'])
714 | 
715 |         if ('srcip' in event) and (event['srcip'] not in remote_servers):
716 |             if event['srcip'] != '127.0.0.1' and event['srcip'] != '::1':
717 |                 remote_servers.append(event['srcip'])
718 | 
719 |     time_parse_end = time.time()
720 | 
721 |     report.append('-=] Sandbox Analysis Report generated by Norimaci v{}'.format(__VERSION__))
722 |     report.append('-=] Developed by Minoru Kobayashi: @unkn0wnbit')
723 |     report.append('-=] The latest release can be found at https://github.com/mnrkbys/Norimaci')
724 |     report.append('')
725 | 
726 |     if time_exec:
727 |         report.append('-=] Execution time: %0.2f seconds' % time_exec)
728 |     if time_process:
729 |         report.append('-=] Processing time: %0.2f seconds' % time_process)
730 | 
731 |     time_analyze = time_parse_end - time_parse_start
732 |     report.append('-=] Analysis time: %0.2f seconds' % time_analyze)
733 |     report.append('')
734 | 
735 |     report.append('Processes Created:')
736 |     report.append('==================')
737 |     dbg_print('[*] Writing %d Process Events results to report' % (len(report_process)))
738 |     for event in report_process:
739 |         report.append(event)
740 | 
741 |     report.append('')
742 |     report.append('File Activity:')
743 |     report.append('==================')
744 |     dbg_print('[*] Writing %d Filesystem Events results to report' % (len(report_file)))
745 |     for event in report_file:
746 |         report.append(event)
747 | 
748 |     report.append('')
749 |     report.append('dylib Files:')
750 |     report.append('==================')
751 |     dbg_print('[*] Writing %d dylib Files results to report' % (len(report_dylib)))
752 |     for dylib_file in sorted(report_dylib):
753 |         report.append(dylib_file)
754 | 
755 |     report.append('')
756 |     report.append('kext Files:')
757 |     report.append('==================')
758 |     dbg_print('[*] Writing %d kext Files results to report' % (len(report_kext)))
759 |     for kext_file in sorted(report_kext):
760 |         report.append(kext_file)
761 | 
762 |     # report_plist
763 |     # report.append('')
764 |     # report.append('plist Files:')
765 |     # report.append('==================')
766 |     # dbg_print('[*] Writing %d plist Files results to report' % (len(report_plist)))
767 |     # for plist_file in sorted(report_plist):
768 |     #     report.append(plist_file)
769 | 
770 |     report.append('')
771 |     report.append('Network Traffic:')
772 |     report.append('==================')
773 |     dbg_print('[*] Writing %d Network Events results to report' % (len(report_network)))
774 |     for event in report_network:
775 |         report.append(event)
776 | 
777 |     report.append('')
778 |     report.append('DNS Queries:')
779 |     report.append('==================')
780 |     dbg_print('[*] Writing %d DNS Queries results to report' % (len(report_dns)))
781 |     for dns_query in sorted(report_dns):
782 |         report.append(dns_query)
783 | 
784 |     report.append('')
785 |     report.append('Unique Hosts:')
786 |     report.append('==================')
787 |     dbg_print('[*] Writing %d Remote Servers results to report' % (len(remote_servers)))
788 |     for server in sorted(remote_servers):
789 |         report.append(server)
790 | 
791 |     report.append('')
792 |     report.append('Persistence:')
793 |     report.append('==================')
794 |     dbg_print('[*] Writing %d Persistence results to report' % (len(report_persistence)))
795 |     for persistence in sorted(report_persistence):
796 |         report.append(persistence)
797 | 
798 |     report.append('')
799 |     report.append('TTY:')
800 |     report.append('==================')
801 |     dbg_print('[*] Writing %d TTY results to report' % (len(report_tty)))
802 |     for tty in sorted(report_tty):
803 |         report.append(tty)
804 | 
805 |     if report_error:
806 |         report.append('\r\n\r\n\r\n\r\n\r\n\r\nERRORS DETECTED')
807 |         report.append('The following items could not be parsed correctly:')
808 |         dbg_print('[*] Writing %d Output Errors results to report' % (len(report_error)))
809 |         for error in report_error:
810 |             report.append(error)
811 | 
812 | 
813 | def main():
814 |     global file_debug
815 |     global time_process
816 |     report = list()
817 |     timeline = list()
818 | 
819 |     print('\n--===[ Norimaci v{}'.format(__VERSION__))
820 |     print('--===[ Minoru Kobayashi [@unkn0wnbit]')
821 | 
822 |     # setup arguments
823 |     parser = argparse.ArgumentParser(description="Lightweight sandbox which works with OpenBSM or FireEye's Monitor.app")
824 |     parser.add_argument('-m', '--monitor', action='store', type=str, default=None,
825 |                         help='Specify a program to monitor macOS activity. You can choose \'openbsm\' or \'monitorapp\'.')
826 |     parser.add_argument('-j', '--json', action='store', type=str,
827 |                         help='Path to a JSON file which is converted by \'openbsmconv.py\' or \'monitorappconv.py\'.')
828 |     parser.add_argument('-bl', '--openbsm-log', action='store', type=str,
829 |                         help='Path to an OpenBSM log file.')
830 |     parser.add_argument('-p', '--proclist', action='store', default=None,
831 |                         help='Path to a process list file to process an OpenBSM log file. A file with the ".proclist" extension is used if this option is not specified.')
832 |     parser.add_argument('-ml', '--monitorapp-log', action='store', type=str,
833 |                         help='Path to a Monitor.app data file.')
834 |     parser.add_argument('-o', '--output', action='store', type=str,
835 |                         help='Path to an output directory.')
836 |     parser.add_argument('--force', action='store_true', default=False,
837 |                         help='Overwrite existing output files.')
838 |     parser.add_argument('--debug', action='store_true', default=False,
839 |                         help='Enable debug mode.')
840 |     args = parser.parse_args()
841 | 
842 |     if args.monitor not in (None, 'openbsm', 'monitorapp'):
843 |         sys.exit('The \'--monitor\' option must be \'openbsm\' or \'monitorapp\'.')
844 | 
845 |     if args.json and (args.openbsm_log or args.monitorapp_log):
846 |         sys.exit('You cannot specify \'--json\' together with \'--openbsm-log\' or \'--monitorapp-log\'.')
847 | 
848 |     if args.monitor == 'openbsm' and (args.openbsm_log or args.proclist):
849 |         sys.exit('You cannot specify \'--monitor openbsm\' with \'--openbsm-log\' or \'--proclist\' at the same time.')
850 | 
851 |     if args.monitor == 'monitorapp' and not has_applescript:
852 |         sys.exit('Import Error: py-applescript and PyObjC are not installed.\n\
853 | py-applescript and PyObjC are needed to work in cooperation with Monitor.app.\n\
854 | Get them from https://github.com/rdhyee/py-applescript and https://bitbucket.org/ronaldoussoren/pyobjc \n\
855 | or from pip.')
856 | 
857 |     if args.monitor == 'monitorapp' and args.monitorapp_log:
858 |         sys.exit('You cannot specify both \'--monitor monitorapp\' and \'--monitorapp-log\' at the same time.')
859 | 
860 |     if not (args.json or args.openbsm_log or args.monitorapp_log) and os.getuid() != 0:
861 |         sys.exit('This script needs root privilege.')
862 | 
863 |     config['debug'] = args.debug
864 | 
865 |     if args.output:
866 |         config['output_folder'] = os.path.abspath(args.output)
867 |         if not os.path.exists(config['output_folder']):
868 |             try:
869 |                 os.makedirs(config['output_folder'])
870 |             except OSError:
871 |                 sys.exit('[!] Fatal: Unable to create output directory: {}'.format(config['output_folder']))
872 |     else:
873 |         config['output_folder'] = get_script_dir()
874 |     dbg_print('[*] Log output directory: {}'.format(config['output_folder']))
875 | 
876 |     if args.json:
877 |         if os.path.exists(args.json):
878 |             file_json = os.path.abspath(args.json)
879 |             if not args.output:
880 |                 config['output_folder'] = os.path.dirname(file_json)
881 |             file_basename = os.path.splitext(os.path.basename(args.json))[0]
882 |             file_txt = os.path.join(config['output_folder'], file_basename + '.txt')
883 |             file_timeline = os.path.join(config['output_folder'], file_basename + '_timeline.csv')
884 |             file_debug = os.path.join(config['output_folder'], file_basename + '.log')
885 |         else:
886 |             sys.exit("[!] JSON file does not exist: {}\n".format(args.json))
887 |     elif args.openbsm_log:
888 |         if os.path.exists(args.openbsm_log):
889 |             file_openbsm_log = os.path.abspath(args.openbsm_log)
890 |             if args.proclist:
891 |                 file_proc_list = os.path.abspath(args.proclist)
892 |             else:
893 |                 file_proc_list = os.path.splitext(file_openbsm_log)[0] + '.proclist'
894 |             if not os.path.exists(file_proc_list):
895 |                 sys.exit('[!] Fatal: process list file does not exist: {}'.format(file_proc_list))
896 |             if not args.output:
897 |                 config['output_folder'] = os.path.dirname(file_openbsm_log)
898 |             file_basename = os.path.splitext(os.path.basename(args.openbsm_log))[0]
899 |             file_json = os.path.join(config['output_folder'], file_basename + '.json')
900 |             file_txt = os.path.join(config['output_folder'], file_basename + '.txt')
901 |             file_timeline = os.path.join(config['output_folder'], file_basename + '_timeline.csv')
902 |             file_debug = os.path.join(config['output_folder'], file_basename + '.log')
903 |         else:
904 |             sys.exit("[!] OpenBSM log file does not exist: {}\n".format(args.openbsm_log))
905 |     elif args.monitorapp_log:
906 |         if os.path.exists(args.monitorapp_log):
907 |             file_monitorapp_log = os.path.abspath(args.monitorapp_log)
908 |             if not args.output:
909 |                 config['output_folder'] = os.path.dirname(file_monitorapp_log)
910 |             file_basename = os.path.splitext(os.path.basename(args.monitorapp_log))[0]
911 |             file_json = os.path.join(config['output_folder'], file_basename + '.json')
912 |             file_txt = os.path.join(config['output_folder'], file_basename + '.txt')
913 |             file_timeline = os.path.join(config['output_folder'], file_basename + '_timeline.csv')
914 |             file_debug = os.path.join(config['output_folder'], file_basename + '.log')
915 |         else:
916 |             sys.exit("[!] Monitor.app data file does not exist: {}\n".format(args.monitorapp_log))
917 | 
918 |     if not args.json:
919 |         if not (args.openbsm_log or args.monitorapp_log):
920 |             session_id = get_session_name()
921 |             file_json = os.path.join(config['output_folder'], 'Norimaci_{}.json'.format(session_id))
922 |             file_txt = os.path.join(config['output_folder'], 'Norimaci_{}.{}'.format(session_id, config['txt_extension']))
923 |             file_timeline = os.path.join(config['output_folder'], 'Norimaci_{}_timeline.csv'.format(session_id))
924 |             file_debug = os.path.join(config['output_folder'], 'Norimaci_{}.log'.format(session_id))
925 | 
926 |             if args.monitor == 'openbsm':
927 |                 file_openbsm_log = os.path.join(config['output_folder'], 'Norimaci_{}.bsm'.format(session_id))
928 |                 file_proc_list = os.path.join(config['output_folder'], 'Norimaci_{}.proclist'.format(session_id))
929 |                 fp_openbsm_log = open(file_openbsm_log, 'wt')
930 |                 print("[*] Launching OpenBSM agent...")
931 |                 save_proc_list(file_proc_list, get_proc_list())
932 |                 proc_openbsm = launch_openbsm(fp_openbsm_log)
933 |             elif args.monitor == 'monitorapp':
934 |                 file_monitorapp_log = os.path.join(config['output_folder'], 'Norimaci_{}.mon'.format(session_id))
935 |                 print("[*] Launching Monitor.app...")
936 |                 launch_monitor_app()
937 | 
938 |         if args.monitor in ('openbsm', 'monitorapp'):
939 |             if config['timeout_seconds']:
940 |                 print('[*] Running for %d seconds. Press Ctrl-C to stop logging early.' % (config['timeout_seconds']))
941 |                 # Print a small progress indicator, for those REALLY long time.sleeps.
942 |                 try:
943 |                     for i in range(config['timeout_seconds']):
944 |                         progress = (100 / config['timeout_seconds']) * i
945 |                         sys.stdout.write('\r%d%% complete' % progress)
946 |                         sys.stdout.flush()
947 |                         time.sleep(1)
948 |                 except KeyboardInterrupt:
949 |                     pass
950 |             else:
951 |                 print('[*] When runtime is complete, press CTRL+C to stop logging.')
952 |                 try:
953 |                     while True:
954 |                         time.sleep(100)
955 |                 except KeyboardInterrupt:
956 |                     pass
957 | 
958 |         if args.monitor == 'openbsm':
959 |             print('\n[*] Termination of OpenBSM agent commencing... please wait')
960 |             returncode = terminate_openbsm(proc_openbsm, fp_openbsm_log)
961 |             if returncode != -2:  # -2 = SIGINT
962 |                 sys.exit('[!] Fatal: OpenBSM agent did not terminate properly: {}'.format(returncode))
963 |         elif args.monitor == 'monitorapp':
964 |             print('\n[*] Termination of Monitor.app commencing... please wait')
965 |             terminate_monitor_app()
966 |             save_monitor_app_data(config['output_folder'], 'Norimaci_{}.mon'.format(session_id))
967 |             quit_monitor_app()
968 | 
969 |         time_convert_start = time.time()
970 | 
971 |         if args.monitor == 'openbsm' or args.openbsm_log:
972 |             print('[*] Converting OpenBSM data ...')
973 |             launch_data_converter('openbsm', file_openbsm_log, file_json, file_proc_list)
974 |         elif args.monitor == 'monitorapp' or args.monitorapp_log:
975 |             print('[*] Converting Monitor.app data ...')
976 |             launch_data_converter('monitorapp', file_monitorapp_log, file_json)
977 | 
978 |         time_convert_end = time.time()
979 |         time_process = time_convert_end - time_convert_start
980 | 
981 |     print('[*] Loading converted macOS activity data ...')
982 |     event_records = load_json_file(file_json, whitelist_process + whitelist_file, auto_whitelist_pid)
983 | 
984 |     analyze_events(event_records, report, timeline)
985 | 
986 |     print('[*] Saving report to: {}'.format(file_txt))
987 |     codecs.open(file_txt, 'w', 'utf-8').write('\r\n'.join(report))
988 | 
989 |     print('[*] Saving timeline to: {}'.format(file_timeline))
990 |     codecs.open(file_timeline, 'w', 'utf-8').write('\r\n'.join(timeline))
991 | 
992 |     return 0
993 | 
994 | 
995 | if __name__ == "__main__":
996 |     if sys.version_info[0:2] >= (3, 5):
997 |         sys.exit(main())
998 |     else:
999 |         sys.exit("This script requires Python 3.5 or later.")
1000 | 
--------------------------------------------------------------------------------