├── .imgs
│   └── logo.png
├── 00_Setup
│   ├── README.md
│   ├── Vagrantfile
│   ├── Vagrantfile.no_ansible
│   ├── playbook.yml
│   └── roles
│       └── setupvm
│           ├── README.md
│           ├── defaults
│           │   └── main.yml
│           ├── files
│           │   ├── docker-compose.yml
│           │   └── env
│           ├── handlers
│           │   └── main.yml
│           ├── meta
│           │   └── main.yml
│           ├── tasks
│           │   └── main.yml
│           ├── tests
│           │   ├── inventory
│           │   └── test.yml
│           └── vars
│               └── main.yml
├── 01_Automating_Collection
│   ├── README.md
│   ├── config.ini
│   └── malshare_download.py
├── 02_Data_Extraction_and_Visualization
│   ├── .imgs
│   │   ├── 00_login.png
│   │   ├── 01_upload.png
│   │   ├── 02_upload.png
│   │   ├── 03_default_values.png
│   │   ├── 04_create_data_view.png
│   │   ├── 05_create_data_view.png
│   │   ├── 06_data.png
│   │   └── 10_pdb_mandiant.png
│   ├── README.md
│   └── apt1.json
├── 03_Public_Feeds
│   ├── README.md
│   ├── ha-parse.py
│   └── otx-parse.py
├── Arch_Cloud_Labs_Malware_Analysis_Platform.pdf
├── LICENSE
└── README.md
/.imgs/logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/archcloudlabs/HackSpaceCon_Malware_Analysis_Course/f2478b8837d7c673c7c5f08548878501b07115ee/.imgs/logo.png
--------------------------------------------------------------------------------
/00_Setup/README.md:
--------------------------------------------------------------------------------
1 | ## Installing Vagrant & Setting Up Infrastructure
2 | For the lab, we'll use Vagrant to quickly set up our environment for analysis.
3 | [Vagrant](https://www.vagrantup.com/) is a utility for managing virtual machines. You can find the installers for your Operating System [here](https://www.vagrantup.com/downloads). Vagrant makes it easy to create repeatable virtual machine environments, and we'll be using it in this course to provision an Ubuntu 20.04 virtual machine as our analysis machine. The particular box we're using is defined via ```config.vm.box```.
4 |
5 | ```sh
6 | config.vm.box = "generic/ubuntu2004"
7 | ```
8 |
9 |
10 | In the Vagrantfile, the "libvirt" VM provider is specified to spin up a virtual machine. If you're using VirtualBox, you should change this to, you guessed it, "virtualbox", as shown in the code block below. Vagrant supports numerous other providers, which can be found in the official documentation [here](https://www.vagrantup.com/docs/providers). Below the provider statement, we allocate 4GB of RAM for the virtual machine.
11 |
12 | ```sh
13 | #config.vm.provider "libvirt" do |vb|
14 | config.vm.provider "virtualbox" do |vb|
15 | vb.memory = "4096"
16 | end
17 | ```
18 |
19 |
20 | Two Vagrantfiles exist: one that leverages Ansible and another that just executes shell commands.
21 | If you do not have Ansible installed for your Operating System, you can keep things simple by leveraging the "[shell](https://www.vagrantup.com/docs/provisioning/shell)" provisioner. This provisioner executes the commands listed within the *shell* block inside the virtual machine. Numerous provisioners exist to configure virtual machines (Ansible, Chef, etc.). If you **do** have Ansible installed, move on to the next section!
22 |
23 | ```sh
24 | config.vm.provision "shell", inline: <<-SHELL
25 | apt-get update -y;
26 | apt-get install -y radare2 git tmux vim;
27 | SHELL
28 | ```
29 |
30 | ## Ansible & Vagrant
31 | The provision block of the Vagrantfile specifies the Ansible provisioner.
32 | This lets the end user write Ansible playbooks that install and configure the virtual machine automatically when it is launched.
33 | ```sh
34 | config.vm.provision "ansible" do |ansible|
35 | ansible.verbose = "v"
36 | ansible.playbook = "playbook.yml"
37 | end
38 | ```
39 |
40 | Let's examine the playbook.yml file by executing ```cat playbook.yml```.
41 | ```yaml
42 | - hosts: all
43 | become: true
44 | roles:
45 | - roles/setupvm
46 | ```
47 |
48 | The role "setupvm" contains a subfolder called "tasks." This contains a series of tasks to be executed in the Vagrant VM.
49 | ```yaml
50 | ---
51 | - name: Install Packages for Malware Analysis
52 | apt:
53 | name: "{{ item }}" # item will be substituted with any word in the loop directive below.
54 | update_cache: yes # this is equivalent to apt-get update
55 | loop:
56 | - vim
57 | - tmux
58 | - radare2
59 | - python3
60 | - python3-pip
61 | - yara
62 | - unzip
63 | ```
64 |
65 | Feel free to add packages here so your malware analysis VM has the tools you want!
66 | Look at ```./roles/setupvm/tasks/main.yml``` to see the packages that provision the target host.
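For example, if you wanted the VM to also include ```binwalk``` and ```upx-ucl``` (hypothetical additions, not part of the course role), you could extend the loop in ```./roles/setupvm/tasks/main.yml``` like so:

```yaml
- name: Install Packages for Malware Analysis
  apt:
    name: "{{ item }}"
    update_cache: yes   # equivalent to apt-get update
  loop:
    - vim
    - tmux
    - radare2
    - python3
    - python3-pip
    - yara
    - unzip
    - binwalk   # example addition
    - upx-ucl   # example addition
```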
67 |
68 | ## Starting Vagrant!
69 | After making any modifications (provider/RAM/etc.), go forth and spin up your virtual machine! To start your VM, simply execute
70 |
71 | ``` sh
72 | vagrant up
73 | ```
74 |
75 | If the Vagrant box "generic/ubuntu2004" has not been downloaded yet, it'll take a moment to download prior to executing the provision commands. Afterwards, however, it'll execute much faster. To check on the status of your virtual machine, execute "vagrant status". You should see something like the code block below showing that it's successfully running. Note, yours may say "virtualbox" and not "libvirt".
76 |
77 | ``` sh
78 |
79 | ☁ 00_Setup [main] ⚡ vagrant status
80 | Current machine states:
81 |
82 | default running (libvirt)
83 |
84 | The Libvirt domain ()is running. To stop this machine, you can run
85 | `vagrant halt`. To destroy the machine, you can run `vagrant destroy`.
86 | ```
87 |
88 | ## Accessing your virtual machine.
89 | We'll access the virtual machine via Vagrant's built-in ssh command.
90 | Vagrant leverages a private key stored in "~/.vagrant.d/insecure_private_key" to perform ssh-based authentication.
91 |
92 | ``` sh
93 | vagrant ssh
94 | ```
95 |
96 | If successful, you should find yourself with shell access to the vagrant machine and can now access the lab machine that will be used for the course.
97 |
98 | ```
99 | ☁ 00_Setup [main] ⚡ vagrant ssh
100 | Last login: Tue Feb 15 23:56:12 2022 from 192.168.121.1
101 | vagrant@bsidesroc:~$ id
102 | uid=1000(vagrant) gid=1000(vagrant) groups=1000(vagrant),4(adm),24(cdrom),30(dip),46(plugdev),111(lxd),117(lpadmin),118(sambashare)
103 | vagrant@bsidesroc:~$
104 | ```
105 |
106 | ## Stopping the Virtual Machine
107 | To shut off the virtual machine, execute "vagrant halt":
108 | ```sh
109 | vagrant halt
110 | ```
111 |
112 | ## Destroying the Virtual Machine
113 | When you're done, you can "clean up" by deleting the virtual machine via:
114 | ```sh
115 | vagrant destroy
116 | ```
117 |
118 | ## Beyond The Class
119 | * Add additional packages you're interested in to ```./roles/setupvm/tasks/main.yml```.
120 |
--------------------------------------------------------------------------------
/00_Setup/Vagrantfile:
--------------------------------------------------------------------------------
1 | # -*- mode: ruby -*-
2 | # vi: set ft=ruby :
3 | # Note, default credentials are vagrant/vagrant
4 |
5 | Vagrant.configure("2") do |config|
6 | config.vm.box = "generic/ubuntu2004"
7 |
8 | # this is for Kibana
9 | config.vm.network "forwarded_port", guest: 5601, host: 5601
10 | config.vm.hostname = "hackspacecon"
11 |
12 | #config.vm.provider "virtualbox" do |vb|
13 | config.vm.provider "libvirt" do |vb|
14 | vb.memory = "4096"
15 | end
16 |
17 | # provisioning Vagrant
18 | config.vm.provision "ansible" do |ansible|
19 | ansible.verbose = "v"
20 | ansible.playbook = "playbook.yml"
21 | end
22 | end
23 |
--------------------------------------------------------------------------------
/00_Setup/Vagrantfile.no_ansible:
--------------------------------------------------------------------------------
1 | # -*- mode: ruby -*-
2 | # vi: set ft=ruby :
3 | # Note, default credentials are vagrant/vagrant
4 |
5 | Vagrant.configure("2") do |config|
6 | # The most common configuration options are documented and commented below.
7 | # For a complete reference, please see the online documentation at
8 | # https://docs.vagrantup.com.
9 | # Every Vagrant development environment requires a box. You can search for
10 | # boxes at https://vagrantcloud.com/search.
11 | config.vm.box = "generic/ubuntu2004"
12 | config.vm.network "forwarded_port", guest: 5601, host: 5601
13 |
14 | # Share an additional folder to the guest VM. The first argument is
15 | # the path on the host to the actual folder. The second argument is
16 | # the path on the guest to mount the folder. And the optional third
17 | # argument is a set of non-required options.
18 | #config.vm.synced_folder "../", "/home/vagrant/bsides-roc-class"
19 |
20 | # Provider-specific configuration so you can fine-tune various
21 | # backing providers for Vagrant. These expose provider-specific options.
22 | # Example for VirtualBox:
23 | config.vm.hostname = "hackspacecon"
24 | config.vm.provider "virtualbox" do |vb|
25 | #config.vm.provider "libvirt" do |vb|
26 | vb.memory = "4096"
27 | end
28 |
29 | config.vm.provision "shell", inline: <<-SHELL
30 | sudo apt-get update -y && apt-get install -y radare2 wget curl default-jdk unzip python3-pip yara autoconf \
31 | jq vim tmux python3 docker-compose p7zip-full libfuzzy-dev libyara-dev;
32 | curl -fsSL https://get.docker.com | bash;
33 | sudo usermod -aG docker vagrant;
34 | sudo pip3 install docker-compose;
35 | sudo pip3 install r2pipe;
36 | sudo echo 'vm.max_map_count=262144' >> /etc/sysctl.conf;
37 | sudo sysctl -p;
38 |
39 | git clone https://github.com/ytisf/theZoo;
41 | git clone --recursive https://github.com/archcloudlabs/r2elk
42 | rm ./r2elk/rules/malware/MALW_AZORULT.yar;
43 | cd ./r2elk; pip3 install -r requirements.txt;
44 |
45 | chown vagrant:vagrant -R /home/vagrant;
46 |
47 |
48 | usermod -aG docker vagrant;
49 | SHELL
50 | end
51 |
--------------------------------------------------------------------------------
/00_Setup/playbook.yml:
--------------------------------------------------------------------------------
1 | ---
2 | - hosts: all
3 | become: true
4 | roles:
5 | - roles/setupvm
6 |
--------------------------------------------------------------------------------
/00_Setup/roles/setupvm/README.md:
--------------------------------------------------------------------------------
1 | Role Name
2 | =========
3 |
4 | A brief description of the role goes here.
5 |
6 | Requirements
7 | ------------
8 |
9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.
10 |
11 | Role Variables
12 | --------------
13 |
14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.
15 |
16 | Dependencies
17 | ------------
18 |
19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.
20 |
21 | Example Playbook
22 | ----------------
23 |
24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:
25 |
26 | - hosts: servers
27 | roles:
28 | - { role: username.rolename, x: 42 }
29 |
30 | License
31 | -------
32 |
33 | BSD
34 |
35 | Author Information
36 | ------------------
37 |
38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed).
39 |
--------------------------------------------------------------------------------
/00_Setup/roles/setupvm/defaults/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | # defaults file for setupvm
3 |
--------------------------------------------------------------------------------
/00_Setup/roles/setupvm/files/docker-compose.yml:
--------------------------------------------------------------------------------
1 | version: "2.2"
2 |
3 | services:
4 | setup:
5 | image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
6 | volumes:
7 | - certs:/usr/share/elasticsearch/config/certs
8 | user: "0"
9 | command: >
10 | bash -c '
11 | if [ x${ELASTIC_PASSWORD} == x ]; then
12 | echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
13 | exit 1;
14 | elif [ x${KIBANA_PASSWORD} == x ]; then
15 | echo "Set the KIBANA_PASSWORD environment variable in the .env file";
16 | exit 1;
17 | fi;
18 | if [ ! -f certs/ca.zip ]; then
19 | echo "Creating CA";
20 | bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
21 | unzip config/certs/ca.zip -d config/certs;
22 | fi;
23 | if [ ! -f certs/certs.zip ]; then
24 | echo "Creating certs";
25 | echo -ne \
26 | "instances:\n"\
27 | " - name: es01\n"\
28 | " dns:\n"\
29 | " - es01\n"\
30 | " - localhost\n"\
31 | " ip:\n"\
32 | " - 127.0.0.1\n"\
33 | " - name: es02\n"\
34 | " dns:\n"\
35 | " - es02\n"\
36 | " - localhost\n"\
37 | " ip:\n"\
38 | " - 127.0.0.1\n"\
39 | " - name: es03\n"\
40 | " dns:\n"\
41 | " - es03\n"\
42 | " - localhost\n"\
43 | " ip:\n"\
44 | " - 127.0.0.1\n"\
45 | > config/certs/instances.yml;
46 | bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
47 | unzip config/certs/certs.zip -d config/certs;
48 | fi;
49 | echo "Setting file permissions"
50 | chown -R root:root config/certs;
51 | find . -type d -exec chmod 750 \{\} \;;
52 | find . -type f -exec chmod 640 \{\} \;;
53 | echo "Waiting for Elasticsearch availability";
54 | until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
55 | echo "Setting kibana_system password";
56 | until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
57 | echo "All done!";
58 | '
59 | healthcheck:
60 | test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
61 | interval: 1s
62 | timeout: 5s
63 | retries: 120
64 |
65 | es01:
66 | depends_on:
67 | setup:
68 | condition: service_healthy
69 | image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
70 | volumes:
71 | - certs:/usr/share/elasticsearch/config/certs
72 | - esdata01:/usr/share/elasticsearch/data
73 | ports:
74 | - ${ES_PORT}:9200
75 | environment:
76 | - node.name=es01
77 | - cluster.name=${CLUSTER_NAME}
78 | - cluster.initial_master_nodes=es01
79 | # - cluster.initial_master_nodes=es01,es02,es03
80 | # - discovery.seed_hosts=es02,es03
81 | - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
82 | - bootstrap.memory_lock=true
83 | - xpack.security.enabled=true
84 | - xpack.security.http.ssl.enabled=true
85 | - xpack.security.http.ssl.key=certs/es01/es01.key
86 | - xpack.security.http.ssl.certificate=certs/es01/es01.crt
87 | - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
88 | - xpack.security.http.ssl.verification_mode=certificate
89 | - xpack.security.transport.ssl.enabled=true
90 | - xpack.security.transport.ssl.key=certs/es01/es01.key
91 | - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
92 | - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
93 | - xpack.security.transport.ssl.verification_mode=certificate
94 | - xpack.license.self_generated.type=${LICENSE}
95 | mem_limit: ${MEM_LIMIT}
96 | ulimits:
97 | memlock:
98 | soft: -1
99 | hard: -1
100 | healthcheck:
101 | test:
102 | [
103 | "CMD-SHELL",
104 | "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
105 | ]
106 | interval: 10s
107 | timeout: 10s
108 | retries: 120
109 |
110 | # es02:
111 | # depends_on:
112 | # - es01
113 | # image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
114 | # volumes:
115 | # - certs:/usr/share/elasticsearch/config/certs
116 | # - esdata02:/usr/share/elasticsearch/data
117 | # environment:
118 | # - node.name=es02
119 | # - cluster.name=${CLUSTER_NAME}
120 | # - cluster.initial_master_nodes=es01,es02,es03
121 | # - discovery.seed_hosts=es01,es03
122 | # - bootstrap.memory_lock=true
123 | # - xpack.security.enabled=true
124 | # - xpack.security.http.ssl.enabled=true
125 | # - xpack.security.http.ssl.key=certs/es02/es02.key
126 | # - xpack.security.http.ssl.certificate=certs/es02/es02.crt
127 | # - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
128 | # - xpack.security.http.ssl.verification_mode=certificate
129 | # - xpack.security.transport.ssl.enabled=true
130 | # - xpack.security.transport.ssl.key=certs/es02/es02.key
131 | # - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
132 | # - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
133 | # - xpack.security.transport.ssl.verification_mode=certificate
134 | # - xpack.license.self_generated.type=${LICENSE}
135 | # mem_limit: ${MEM_LIMIT}
136 | # ulimits:
137 | # memlock:
138 | # soft: -1
139 | # hard: -1
140 | # healthcheck:
141 | # test:
142 | # [
143 | # "CMD-SHELL",
144 | # "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
145 | # ]
146 | # interval: 10s
147 | # timeout: 10s
148 | # retries: 120
149 |
150 | # es03:
151 | # depends_on:
152 | # - es02
153 | # image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
154 | # volumes:
155 | # - certs:/usr/share/elasticsearch/config/certs
156 | # - esdata03:/usr/share/elasticsearch/data
157 | # environment:
158 | # - node.name=es03
159 | # - cluster.name=${CLUSTER_NAME}
160 | # - cluster.initial_master_nodes=es01,es02,es03
161 | # - discovery.seed_hosts=es01,es02
162 | # - bootstrap.memory_lock=true
163 | # - xpack.security.enabled=true
164 | # - xpack.security.http.ssl.enabled=true
165 | # - xpack.security.http.ssl.key=certs/es03/es03.key
166 | # - xpack.security.http.ssl.certificate=certs/es03/es03.crt
167 | # - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
168 | # - xpack.security.http.ssl.verification_mode=certificate
169 | # - xpack.security.transport.ssl.enabled=true
170 | # - xpack.security.transport.ssl.key=certs/es03/es03.key
171 | # - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
172 | # - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
173 | # - xpack.security.transport.ssl.verification_mode=certificate
174 | # - xpack.license.self_generated.type=${LICENSE}
175 | # mem_limit: ${MEM_LIMIT}
176 | # ulimits:
177 | # memlock:
178 | # soft: -1
179 | # hard: -1
180 | # healthcheck:
181 | # test:
182 | # [
183 | # "CMD-SHELL",
184 | # "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
185 | # ]
186 | # interval: 10s
187 | # timeout: 10s
188 | # retries: 120
189 |
190 | kibana:
191 | depends_on:
192 | es01:
193 | condition: service_healthy
194 | image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
195 | volumes:
196 | - certs:/usr/share/kibana/config/certs
197 | - kibanadata:/usr/share/kibana/data
198 | ports:
199 | - ${KIBANA_PORT}:5601
200 | environment:
201 | - SERVERNAME=kibana
202 | - ELASTICSEARCH_HOSTS=https://es01:9200
203 | - ELASTICSEARCH_USERNAME=kibana_system
204 | - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
205 | - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
206 | mem_limit: ${MEM_LIMIT}
207 | healthcheck:
208 | test:
209 | [
210 | "CMD-SHELL",
211 | "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
212 | ]
213 | interval: 10s
214 | timeout: 10s
215 | retries: 120
216 |
217 | volumes:
218 | certs:
219 | driver: local
220 | esdata01:
221 | driver: local
222 | kibanadata:
223 | driver: local
224 |
--------------------------------------------------------------------------------
/00_Setup/roles/setupvm/files/env:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | # Password for the 'elastic' user (at least 6 characters)
3 | ELASTIC_PASSWORD=hackspacecon
4 |
5 | # Password for the 'kibana_system' user (at least 6 characters)
6 | KIBANA_PASSWORD=hackspacecon
7 |
8 | # Version of Elastic products
9 | STACK_VERSION=8.1.0
10 |
11 | # Set the cluster name
12 | CLUSTER_NAME=docker-cluster
13 |
14 | # Set to 'basic' or 'trial' to automatically start the 30-day trial
15 | #LICENSE=basic
16 | LICENSE=trial
17 |
18 | # Port to expose Elasticsearch HTTP API to the host
19 | ES_PORT=9200
20 | #ES_PORT=127.0.0.1:9200
21 |
22 | # Port to expose Kibana to the host
23 | KIBANA_PORT=5601
24 | #KIBANA_PORT=80
25 |
26 | # Increase or decrease based on the available host memory (in bytes)
27 | MEM_LIMIT=1073741824
28 |
--------------------------------------------------------------------------------
/00_Setup/roles/setupvm/handlers/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | # handlers file for setupvm
3 |
--------------------------------------------------------------------------------
/00_Setup/roles/setupvm/meta/main.yml:
--------------------------------------------------------------------------------
1 | galaxy_info:
2 | author: your name
3 | description: your role description
4 | company: your company (optional)
5 |
6 | # If the issue tracker for your role is not on github, uncomment the
7 | # next line and provide a value
8 | # issue_tracker_url: http://example.com/issue/tracker
9 |
10 | # Choose a valid license ID from https://spdx.org - some suggested licenses:
11 | # - BSD-3-Clause (default)
12 | # - MIT
13 | # - GPL-2.0-or-later
14 | # - GPL-3.0-only
15 | # - Apache-2.0
16 | # - CC-BY-4.0
17 | license: license (GPL-2.0-or-later, MIT, etc)
18 |
19 | min_ansible_version: 2.1
20 |
21 | # If this a Container Enabled role, provide the minimum Ansible Container version.
22 | # min_ansible_container_version:
23 |
24 | #
25 | # Provide a list of supported platforms, and for each platform a list of versions.
26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'.
27 | # To view available platforms and versions (or releases), visit:
28 | # https://galaxy.ansible.com/api/v1/platforms/
29 | #
30 | # platforms:
31 | # - name: Fedora
32 | # versions:
33 | # - all
34 | # - 25
35 | # - name: SomePlatform
36 | # versions:
37 | # - all
38 | # - 1.0
39 | # - 7
40 | # - 99.99
41 |
42 | galaxy_tags: []
43 | # List tags for your role here, one per line. A tag is a keyword that describes
44 | # and categorizes the role. Users find roles by searching for tags. Be sure to
45 | # remove the '[]' above, if you add tags to this list.
46 | #
47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
48 | # Maximum 20 tags per role.
49 |
50 | dependencies: []
51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above,
52 | # if you add dependencies to this list.
53 |
--------------------------------------------------------------------------------
/00_Setup/roles/setupvm/tasks/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | # tasks file for setupvm
3 | # - name: Install Docker from script
4 |
5 | - name: Install Docker from script
6 | ansible.builtin.raw: "curl https://get.docker.com | bash"
7 |
8 | - name: Install Packages
9 | apt:
10 | name: "{{ item }}"
11 | update_cache: yes
12 | loop:
13 | - jq
14 | - vim
15 | - tmux
16 | - radare2
17 | - python3
18 | - python3-pip
19 | - yara
20 | - unzip
21 | - docker-compose
22 | - p7zip-full
23 | - libfuzzy-dev
24 | - libyara-dev
25 |
26 | - name: Add vagrant user to group
27 | ansible.builtin.user:
28 | name: vagrant
29 | groups: docker
30 | append: true
31 |
32 | - name: Install Python packages via pip
33 | ansible.builtin.pip:
34 | name: "{{ item }}"
35 | loop:
36 | - r2pipe
37 | - lief
38 |
39 | - name: Set Sysctl VM_MAX_MAP Count
40 | ansible.posix.sysctl:
41 | name: vm.max_map_count
42 | value: "262144"
43 | sysctl_set: true
44 | reload: true
45 |
46 | - name: Copy Docker-Compose
47 | ansible.builtin.copy:
48 | src: ./files/docker-compose.yml
49 | dest: /home/vagrant/
50 |
51 | - name: Copy Docker-Compose ENV
52 | ansible.builtin.copy:
53 | src: ./files/env
54 | dest: /home/vagrant/.env
55 |
56 | # this can cause a hang, so we're commenting it out
57 | #- name: docker-compose up
58 | # ansible.builtin.command: docker compose up -d
59 | # retries: 5
60 | # delay: 5
61 |
62 | - name: Git clone repos (retry on failure)
63 | ansible.builtin.git:
64 | repo: "{{ item }}"
65 | dest: "/home/vagrant/{{ item.split('/')[-1] }}"
66 | retries: 3
67 | delay: 3
68 | debugger: on_failed
69 | loop:
70 | - https://github.com/archcloudlabs/r2elk
71 | tags:
72 | - git
73 |
74 | - name: Remove cuckoo yara rules
75 | ansible.builtin.command: rm ./r2elk/rules/malware/MALW_AZORULT.yar
76 | tags:
77 | - git
78 |
79 | - name: Install dependencies for r2elk
80 | ansible.builtin.pip:
81 | requirements: /home/vagrant/r2elk/requirements.txt
82 | become: false
83 | tags:
84 | - r2elk
85 |
86 | - name: Fix permissions
87 | ansible.builtin.command: chown vagrant:vagrant -R /home/vagrant/
88 | become: true
89 |
--------------------------------------------------------------------------------
/00_Setup/roles/setupvm/tests/inventory:
--------------------------------------------------------------------------------
1 | localhost
2 |
3 |
--------------------------------------------------------------------------------
/00_Setup/roles/setupvm/tests/test.yml:
--------------------------------------------------------------------------------
1 | ---
2 | - hosts: localhost
3 | remote_user: root
4 | roles:
5 | - setupvm
6 |
--------------------------------------------------------------------------------
/00_Setup/roles/setupvm/vars/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | # vars file for setupvm
3 |
--------------------------------------------------------------------------------
/01_Automating_Collection/README.md:
--------------------------------------------------------------------------------
1 | ## Automating Collection - Malshare Account
2 | Before we can automate downloading daily dumps from Malshare, we need to create an account.
3 | 1. Go to [malshare.com](https://malshare.com/)
4 | 2. Click on [register](https://malshare.com/register.php)
5 | 3. Enter a name and e-mail address.
6 | 4. Get your official API key to interact with the API service.
7 |
8 | ## Building Your Own Download Tool
9 | The Malshare API is [well documented](https://malshare.com/doc.php) and easy to implement in a variety of languages.
10 | Arch Cloud Labs has a tool that leverages multiple malware providers' APIs called [mquery](https://github.com/archcloudlabs/mquery).
11 | For the sake of this lab, we'll build a simple download utility in Python to continue our focus on building multiple skill sets.
12 |
13 | If you're running short on time, feel free to use the ```malshare_download.py``` script provided for you!
14 | Just update the ```config.ini``` file with your malshare API key.
15 |
16 |
17 | ### Copying files into the running Vagrant machine
18 | To copy the script into the Vagrant machine you can either leverage the Vagrant upload command or scp (secure copy).
19 | The output below shows the vagrant upload command. You have to be in the same folder as your ```Vagrantfile```.
20 |
21 | ```
22 | ➜ 00_Setup git:(main) ✗ vagrant upload ../01_Automating_Collection/malshare_download.py
23 | Uploading ../01_Automating_Collection/malshare_download.py to malshare_download.py
24 | Upload has completed successfully!
25 | ```
26 |
27 | To leverage scp, execute the command below and replace ```vagrant_ip_goes_here``` with the IP address of your Vagrant machine.
28 | To get the IP address of your Vagrant machine, execute ```ifconfig``` from a command shell within Vagrant.
29 |
30 | ``` sh
31 | $> scp malshare_download.py vagrant@vagrant_ip_goes_here:
32 | ```
33 |
34 | ## Getting The Latest Hashes
35 | The following code block shows how to fetch the latest hashes in Python.
36 | Feel free to try on your own before looking at the provided annotated solution.
37 |
38 | Python Example for Querying API
39 |
40 | ```python
41 | #!/usr/bin/env python3
42 | import sys
43 | import configparser
44 | import json
45 |
46 | try:
47 | import requests
48 | except ImportError as err:
49 | print(f"error: {err}")
50 | sys.exit(1)
51 |
52 | config = configparser.ConfigParser()
53 | config.read("config.ini") # config file that contains API keys
54 | apikey = config.get("DEFAULT", "MALSHARE_API")
55 |
56 | LATEST_SAMPLES_URL = "https://www.malshare.com/api.php?api_key=API_KEY&action=getlist".replace("API_KEY", apikey) # URL for upstream Malshare API
57 |
58 |
59 | def get_latest_hashes(hashAlgo="sha1"):
60 | """
61 | Grab the latest hashes (md5/sha1/sha256) from the Malshare API.
62 |
63 |     hashAlgo: string value to indicate the hash algorithm to use.
64 | return: List of hashes
65 | """
66 | latest_hashes = []
67 | req = requests.get(LATEST_SAMPLES_URL)
68 | if req.status_code == 200:
69 | data = json.loads(req.content)
70 |         latest_hashes = [sample.get(hashAlgo) for sample in data]
71 | return latest_hashes
72 |
73 |
74 | if __name__ == "__main__":
75 |
76 | hashes = get_latest_hashes()
77 | print(hashes) # print list of hashes
78 | ```
79 |
80 |
81 |
82 | ### Getting The Latest Samples
83 | Now that we have a list of the latest hashes, we can start to download them!
84 | The Malshare API endpoint looks as follows:
85 |
86 | ``` sh
87 |
88 | https://www.malshare.com/api.php?api_key=API_KEY_GOES_HERE&action=getfile&hash=MALWARE_HASH_GOES_HERE
89 |
90 | ```
91 | Take a stab at developing it, before taking a look below!
92 |
93 |
94 |
95 | ```python
96 |
97 | def download_sample(listOfSamples: list):
98 | """
99 | Download a sample from malshare based on hash
100 |
101 |     Samples are written to ./malware/; returns None.
102 | """
103 | if listOfSamples is None:
104 | logging.error("No samples to download")
105 | return None
106 |
107 | __create_path__(path="./malware")
108 |
109 | for sample in listOfSamples:
110 | sample_url = DOWNLOAD_SAMPLE_URL.replace("MAL_HASH", sample)
111 | logging.info(f"Downloading: {sample}")
112 | try:
113 | req = requests.get(sample_url)
114 | if req.status_code == 200:
115 | #data = __inline_patch__(req.content) # optional
116 | with open(f"./malware/{sample}", "wb") as fout:
117 | fout.write(req.content)
118 | except requests.ConnectionError as conn_error:
119 | logging.error(f"Trouble making request to {sample_url} - {conn_error}")
120 | pass
121 |
122 | except requests.ConnectTimeout as conn_error:
123 | logging.error(f"Trouble making request to {sample_url} - {conn_error}")
124 | pass
125 |
126 | ```
127 |
128 |
129 |
130 | ## Automating the Collection
131 | Combining both of these functions, we can fetch and download the daily samples.
132 | Not all samples are PEs and ELFs, however; there could be GIFs/JPEGs/XLS/docs/etc. being uploaded.
133 | Consider how you would filter those out as a "beyond the course" objective.
134 | A list of "magic bytes" can be found on [Wikipedia](https://en.wikipedia.org/wiki/List_of_file_signatures).
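As a sketch of that filtering idea (a hypothetical helper, not part of the provided script), checking the first few bytes of a downloaded sample is enough to separate PEs and ELFs from everything else:

```python
#!/usr/bin/env python3
# Minimal magic-byte check: PE files start with "MZ", ELF files with 0x7f "ELF".

def identify_format(data: bytes) -> str:
    """Return "pe", "elf", or "unknown" based on the sample's magic bytes."""
    if data[:2] == b"MZ":
        return "pe"
    if data[:4] == b"\x7fELF":
        return "elf"
    return "unknown"

if __name__ == "__main__":
    print(identify_format(b"MZ\x90\x00"))       # prints "pe"
    print(identify_format(b"\x7fELF\x02\x01"))  # prints "elf"
    print(identify_format(b"GIF89a"))           # prints "unknown"
```

You could call a helper like this on ```req.content``` inside ```download_sample``` and skip anything that comes back "unknown".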
135 |
136 | Next, we'll simply run our code to grab the hashes and save the samples to disk.
137 | Again, if you're running short on time, feel free to use the ```malshare_download.py``` script provided for you!
138 | You can always come back and create a new tool later!
139 |
140 | ## Beyond The Course
141 | The provided code was *very* minimal, but it accomplished the goal.
142 | Here are some additional tasks to consider beyond the course to further build Software Development skills:
143 | * What if you wanted to schedule this script to run every day? How would you do it?
144 | * How could the script be modified to download malware samples in parallel?
145 | * What about running this in a Kubernetes cluster? (*Checkout Minikube to run something locally*)
146 | * How can we add a "safety header" to prevent accidental execution?
147 | * What would we have to do with our tools during analysis?
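For the parallel-download question, one hedged sketch uses ```concurrent.futures``` from the standard library; ```fetch_sample``` below is a stand-in for the per-hash request/write logic in ```download_sample```, not code from the provided script:

``` python
from concurrent.futures import ThreadPoolExecutor

def fetch_sample(sample_hash: str) -> str:
    # Stand-in for the per-sample request/write logic in download_sample().
    return f"downloaded {sample_hash}"

def download_samples_parallel(hashes: list, workers: int = 4) -> list:
    """Run the per-sample downloads concurrently; the work is I/O bound,
    so a thread pool is a reasonable fit."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch_sample, hashes))
```

```pool.map``` preserves input order, so results line up with the hash list.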
148 |
--------------------------------------------------------------------------------
/01_Automating_Collection/config.ini:
--------------------------------------------------------------------------------
1 | [DEFAULT]
2 | MALSHARE_API = API_KEY_GOES_HERE
3 |
--------------------------------------------------------------------------------
/01_Automating_Collection/malshare_download.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | """
3 | Example malshare downloader code.
4 | API can be found at: https://malshare.com/doc.php
5 | """
6 |
7 | import sys
8 | import configparser
9 | import json
10 | import logging
11 | import pathlib
12 |
13 | try:
14 | import requests
15 | except ImportError as err:
16 | print(f"error: {err}")
17 | sys.exit(1)
18 |
19 | FORMAT = '%(asctime)s %(message)s'
20 | logging.basicConfig(format=FORMAT)
21 |
22 |
23 | # By default the root logger is set to WARNING and all loggers you define
24 | # inherit that value. Here we set the root logger to INFO. This logging
25 | # level is automatically inherited by all existing and new sub-loggers
26 | # that do not set a less verbose level.
27 | logging.root.setLevel(logging.INFO)
28 | log = logging.getLogger()
29 |
30 | config = configparser.ConfigParser()
31 | config.read("config.ini")
32 | apikey = config.get("DEFAULT", "MALSHARE_API")
33 |
34 | LATEST_SAMPLES_URL = "https://www.malshare.com/api.php?api_key=API_KEY&action=getlist".replace("API_KEY", apikey)
35 | DOWNLOAD_SAMPLE_URL = "https://www.malshare.com/api.php?api_key=API_KEY&action=getfile&hash=MAL_HASH".replace("API_KEY", apikey)
36 |
37 |
38 | def __create_path__(path="./malware"):
39 | _path = pathlib.Path(path)
40 |     if not _path.exists():
41 | _path.mkdir()
42 |
43 |
44 | def __inline_patch__(byteObj):
45 | """
46 | Patch the binary object w/ machine id that does not exist to prevent execution.
47 | this is optional, but given to you as a choice!
48 |
49 | return byte array (ELF/PE/etc...)
50 | """
51 | data = bytearray(byteObj) # convert bytes to bytearray
52 | if byteObj[0:2] == b'MZ':
53 | log.info("Identified MZ header")
54 |
55 | elif byteObj[0:4] == b'\x7fELF':
56 | log.info("Identified ELF header - patching w/ 00")
57 |         # data[18] = 0 # zero the low byte of e_machine (offsets 18-19 in the ELF header)
58 |         # data[19] = 0 # zero the high byte; an unknown machine type prevents execution
59 |
60 | return data
61 |
62 |
63 | def get_latest_hashes(hashAlgo="sha1"):
64 | """
65 | Grab the latest hashes (md5/sha1/sha256) from the Malshare API.
66 |
67 |     hashAlgo: string value to indicate the hash algorithm to use.
68 | return: List of hashes
69 | """
70 | latest_hashes = []
71 | req = requests.get(LATEST_SAMPLES_URL)
72 | if req.status_code == 200:
73 | data = json.loads(req.content)
74 |         latest_hashes = [sample.get(hashAlgo) for sample in data]
75 | else:
76 | log.error(f"Error obtaining latest hashes: {req.status_code} - {req.content}")
77 |
78 | return latest_hashes # return all the hashes
79 |
80 |
81 | def download_sample(listOfSamples: list):
82 | """
83 | Download a sample from malshare based on hash
84 |
85 | return: byte array to write to file
86 | """
87 | if listOfSamples is None:
88 |         log.error("No samples to download")
89 | return None
90 |
91 | __create_path__(path="./malware")
92 |
93 | for sample in listOfSamples:
94 | sample_url = DOWNLOAD_SAMPLE_URL.replace("MAL_HASH", sample)
95 |         log.info(f"Downloading: {sample}")
96 | try:
97 | req = requests.get(sample_url)
98 | if req.status_code == 200:
99 | #data = __inline_patch__(req.content) # optional
100 | with open(f"./malware/{sample}", "wb") as fout:
101 | fout.write(req.content)
102 |         except requests.RequestException as conn_error:
103 |             # requests.ConnectTimeout subclasses ConnectionError, so a single
104 |             # RequestException handler covers connection and timeout failures.
105 |             log.error(f"Trouble making request to {sample_url} - {conn_error}")
109 |
110 |
111 | if __name__ == "__main__":
112 | hashes = get_latest_hashes()
113 | download_sample(hashes)
114 |
--------------------------------------------------------------------------------
/02_Data_Extraction_and_Visualization/.imgs/00_login.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/archcloudlabs/HackSpaceCon_Malware_Analysis_Course/f2478b8837d7c673c7c5f08548878501b07115ee/02_Data_Extraction_and_Visualization/.imgs/00_login.png
--------------------------------------------------------------------------------
/02_Data_Extraction_and_Visualization/.imgs/01_upload.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/archcloudlabs/HackSpaceCon_Malware_Analysis_Course/f2478b8837d7c673c7c5f08548878501b07115ee/02_Data_Extraction_and_Visualization/.imgs/01_upload.png
--------------------------------------------------------------------------------
/02_Data_Extraction_and_Visualization/.imgs/02_upload.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/archcloudlabs/HackSpaceCon_Malware_Analysis_Course/f2478b8837d7c673c7c5f08548878501b07115ee/02_Data_Extraction_and_Visualization/.imgs/02_upload.png
--------------------------------------------------------------------------------
/02_Data_Extraction_and_Visualization/.imgs/03_default_values.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/archcloudlabs/HackSpaceCon_Malware_Analysis_Course/f2478b8837d7c673c7c5f08548878501b07115ee/02_Data_Extraction_and_Visualization/.imgs/03_default_values.png
--------------------------------------------------------------------------------
/02_Data_Extraction_and_Visualization/.imgs/04_create_data_view.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/archcloudlabs/HackSpaceCon_Malware_Analysis_Course/f2478b8837d7c673c7c5f08548878501b07115ee/02_Data_Extraction_and_Visualization/.imgs/04_create_data_view.png
--------------------------------------------------------------------------------
/02_Data_Extraction_and_Visualization/.imgs/05_create_data_view.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/archcloudlabs/HackSpaceCon_Malware_Analysis_Course/f2478b8837d7c673c7c5f08548878501b07115ee/02_Data_Extraction_and_Visualization/.imgs/05_create_data_view.png
--------------------------------------------------------------------------------
/02_Data_Extraction_and_Visualization/.imgs/06_data.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/archcloudlabs/HackSpaceCon_Malware_Analysis_Course/f2478b8837d7c673c7c5f08548878501b07115ee/02_Data_Extraction_and_Visualization/.imgs/06_data.png
--------------------------------------------------------------------------------
/02_Data_Extraction_and_Visualization/.imgs/10_pdb_mandiant.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/archcloudlabs/HackSpaceCon_Malware_Analysis_Course/f2478b8837d7c673c7c5f08548878501b07115ee/02_Data_Extraction_and_Visualization/.imgs/10_pdb_mandiant.png
--------------------------------------------------------------------------------
/02_Data_Extraction_and_Visualization/README.md:
--------------------------------------------------------------------------------
1 | ## Data Extraction & Labeling
2 | With malware downloaded, we'll now want to parse our data and ingest it into Elasticsearch to start identifying trends, hunting for related samples, etc... This is a great opportunity to start thinking about building **your own** tools to learn about parsing different file formats.
3 | For the sake of time, we'll use [r2elk](https://www.github.com/archcloudlabs/r2elk), but consider coming back to this portion after the workshop and exploring more data extraction opportunities.
4 |
5 | R2ELK is a utility made by Arch Cloud Labs that leverages radare2 for metadata extraction. This includes:
6 | * File name
7 | * File format
8 | * MD5 hash
9 | * SHA1 hash
10 | * SSDeep Hash
11 | * ImpHash: Import hash calculation.
12 | * Architecture: Architecture the binary is compiled for.
13 | * Binary size: Size of the binary
14 | * Programming language Used (identified by r2)
15 | * Compiler info: Compiler name and version used.
16 | * Compiled time: When the binary was compiled.
17 | * Stripped: Whether the binary has been stripped of debug symbols.
18 | * Static: Whether the binary is statically or dynamically linked.
19 | * Signed: Whether the binary is cryptographically signed.
20 | * Strings: The first 100 strings within a binary.
21 | * PDB file paths: The PDB file path (Windows only).
22 | * Base address: The base address the binary is loaded at.
23 | * Imports: Additional libraries that are imported.
24 | * Exports: Functions that are exported.
25 | * Yara Rule matching (Optional).
26 |
27 | Gathering all of this data into a central database (Elasticsearch) allows us, as researchers, to start identifying trends across daily downloads. These trends can then help identify what makes for good blog posts and interesting research, and are a great way to start reversing different kinds of malware.
28 |
29 | ### Manual Installation
30 |
31 | The initial Ansible script in [00_Setup](../00_Setup/README.md) should have already cloned a repo called "r2elk" into the vagrant user's home folder.
32 | If not, you can go forth and install manually via the following commands:
33 |
34 | ``` sh
35 | git clone https://github.com/archcloudlabs/r2elk;
36 | cd r2elk;
37 | python3 -m pip install -r requirements.txt;
38 | sudo apt install yara libfuzzy-dev libyara-dev jq;
39 | ```
40 |
41 | ## APT1 Malware Samples
42 | In order to ensure we all have the same samples, we'll be looking at the APT-1 malware data set.
43 | This data set is available through multiple online providers, but it's nicely curated on No Starch Press' Malware Data Science book's companion website [here](https://drive.google.com/open?id=11kvnSB9yQLaIdRI47z0PNjAyhXH6h0UK).
44 |
45 | After downloading the zip file, copy it into the Vagrant virtual machine via the ```scp``` utility or via Vagrant's upload sub-command.
46 | The example below shows how to copy the file, but you'll have to change the IP address to your Vagrant IP.
47 | ```
48 | $> scp malware_data_science_entrypoints_redacted.zip vagrant@192.168.124.66:
49 | ```
50 | or
51 |
52 | ```
53 | $> vagrant upload malware_data_science_entrypoints_redacted.zip
54 | ```
55 |
56 | Next, **within the Vagrant machine**, we'll unzip the file.
57 |
58 | ``` sh
59 | vagrant@hackspacecon:~$ unzip malware_data_science_entrypoints_redacted.zip
60 | ```
61 |
62 | Chapter 4 contains the malware samples we'll be looking at.
63 |
64 | The path is: ```/home/vagrant/malware_data_science/ch4/data/APT1_MALWARE_FAMILIES```
65 | Change your working directory to this path.
66 |
67 | ```
68 | cd /home/vagrant/malware_data_science/ch4/data/APT1_MALWARE_FAMILIES;
69 | ```
70 |
71 | Then, we'll create a new directory in our home folder for JUST the malware samples.
72 |
73 | ``` sh
74 | mkdir ~/apt1;
75 | ```
76 |
77 | Finally, we'll copy all of the binaries from the ```APT1_MALWARE_FAMILIES``` sub-folders to ```~/apt1``` to make it easy for ```r2elk``` to run against.
78 |
79 | ``` sh
80 | vagrant@hackspacecon:~/malware_data_science/ch4/data/APT1_MALWARE_FAMILIES$ find . -type f -print0 | xargs -0 grep -rail 'dos mode' | xargs -I {} cp -u {} ~/apt1
81 | ```
82 |
83 | At this point you should have a directory called "apt1" in the home folder of the vagrant machine with APT1 samples.
84 | An example of said output is shown below.
85 |
86 | ``` sh
87 | vagrant@hackspacecon:~/apt1$ pwd
88 | /home/vagrant/apt1
89 | vagrant@hackspacecon:~/apt1$ ls -l
90 | total 19444
91 | -rwxr-x--- 1 vagrant vagrant 7168 Apr 9 13:28 1F2EB7B090018D975E6D9B40868C94CA
92 | -rwxr-x--- 1 vagrant vagrant 8192 Apr 9 13:28 33DE5067A433A6EC5C328067DC18EC37
93 | -rwxr-x--- 1 vagrant vagrant 46592 Apr 9 13:28 36CD49AD631E99125A3BB2786E405CEA
94 | -rwxr-x--- 1 vagrant vagrant 8192 Apr 9 13:28 65018CD542145A3792BA09985734C12A
95 | -rwxr-x--- 1 vagrant vagrant 8192 Apr 9 13:28 650A6FCA433EE243391E4B4C11F09438
96 | -rwxr-x--- 1 vagrant vagrant 8192 Apr 9 13:28 6FAA4740F99408D4D2DDDD0B09BBDEFD
97 | -rwxr-x--- 1 vagrant vagrant 8192 Apr 9 13:28 785003A405BC7A4EBCBB21DDB757BF3F
98 | -rwxr-x--- 1 vagrant vagrant 7168 Apr 9 13:28 8442AE37B91F279A9F06DE4C60B286A3
99 | -rwxr-x--- 1 vagrant vagrant 8192 Apr 9 13:28 99A39866A657A10949FCB6D634BB30D5
100 | ```
101 |
102 | *Note: I've contacted an author of Malware Data Science for permission to reference their book's samples here*
103 |
104 | ## Yara & R2ELK
105 | **If we're running low on time, apt1.json in this folder can be used to skip this step**
106 |
107 | The r2elk repo uses the yara-rules repository as a git [submodule](https://git-scm.com/book/en/v2/Git-Tools-Submodules) to easily point at a set of rules for labeling purposes. First, we need to create the ".yar" file that we'll use. From the ```r2elk``` directory, execute ```./rules/index_gen.sh``` to generate a single ```index.yar``` file.
108 |
109 | The Ansible script should have completed this for you, but if not you can execute the ```index_gen.sh``` command below manually.
110 | The output on your console should be similar to the output below.
111 |
112 | ``` sh
113 | vagrant@hackspacecon:~/r2elk$ pwd
114 | /home/vagrant/r2elk
115 | vagrant@hackspacecon:~/r2elk$ ./rules/index_gen.sh
116 | **************************
117 | Yara-Rules
118 | Index generator
119 | **************************
120 | [+] Generating rules index...
121 | [+] Generating index_w_mobile...
122 | [+] Generating index...
123 | ```
124 |
125 | Now, we'll point r2elk at a directory of malware (```-d```), specify the ```index.yar``` file for Yara scanning, and redirect the output to ```malware.json```:
126 | ``` sh
127 | $> python3 r2elk.py -d ~/malware_samples -y index.yar -v > malware.json
128 | ```
129 |
130 | Use the APT-1 data set from the link above to generate ```apt1.json```.
131 | ``` sh
132 | $> python3 r2elk.py -d ~/path_to_apt1_samples_here -y index.yar -v > apt1.json
133 | ```
134 |
135 | Now that your ```apt1.json``` file has been created, let's get it into Kibana for analysis!
136 | The next two sections below show importing data through curl as well as the Kibana UI.
137 |
138 | ### Importing Data into Kibana from Host Machine
139 |
140 | The Kibana interface allows for ingestion of JSON and CSV file formats. It can automatically detect field types and create a dashboard for you to search as well. First, log in to the Kibana interface by browsing to http://localhost:5601; the Vagrantfile should have already configured port forwarding, allowing you to access Kibana. The default credentials are ```elastic/hackspacecon```.
141 |
142 | 
143 |
144 | 1. Next, select the "upload a file" option that's shown below
145 |
146 | 
147 |
148 | 2. Then upload the ```malware.json``` file previously created.
149 |
150 | 
151 |
152 | 3. Kibana will ingest the data into Elasticsearch, creating the backend schema for you based on the data types it recognizes.
153 | Feel free to browse the results before clicking import in the bottom left.
154 |
155 | 
156 |
157 | 4. Kibana will now prompt you to name the index that was just created and offer to create a data view.
158 | Give the index a unique name, ensure the data view option is selected, and then click import.
159 |
160 | 
161 |
162 | 5. If successful, you should see something similar to the image below.
163 |
164 | 
165 |
166 | 6. Finally, select view in Discover to start examining your data set.
167 |
168 | 
169 |
170 | ## Recreating Analysis of "Devils In The Details"
171 | The blog post "[Definitive Dossier of Devilish Debug Details – Part One: PDB Paths and Malware](https://www.mandiant.com/resources/blog/definitive-dossier-of-devilish-debug-details-part-one-pdb-paths-malware)" by Mandiant discusses the PDB artifact within Windows PE files to link together samples. We'll leverage the APT1 data set just like the blog and identify samples with a shared PDB file path.
172 |
173 | 1. Browse to the Analytics -> Discover page.
174 | 2. Filter on field ```has_debug_string: true``` and then add ```dbg_file``` to the document view.
175 | 3. Add the field ```md5``` to the document view.
176 |
177 | * At this point we have identified different samples based on unique md5 hashes.
178 | Multiple artifacts are required to build a strong argument that a given malware sample was likely developed by the same threat actor/toolkit/etc...
179 |
180 | 4. Add the field ```imphash``` and ```yara_rules``` to the document view.
181 |
182 | With PDB paths, ImpHashes, and Yara rule hits, what similarities can be identified across these samples?
183 |
184 | 
185 |
186 |
187 |
188 | * PDB paths and ImpHashes
189 |
190 |
191 |
192 |
193 | Now that we've recreated said analysis, apply this methodology to recently uploaded Malshare samples and see what unique trends you can identify!
194 |
195 |
196 | ## Beyond The Course
197 | The manual ingestion is fine, but what if we wanted to automate this? How can we automatically ingest the data from r2elk (*or maybe your own tool*) into Elasticsearch? From the data set you've ingested, go ahead and answer the following questions to help build familiarity with Kibana:
198 |
199 | * What Yara rules appeared the most?
200 | * How many samples had PDB paths?
201 | * How many ```ImpHashes``` were the same?
202 | * How would we import this data **directly** into Elasticsearch?
203 | - Filebeat? Python script? curl?
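As one possible direction for the direct-ingestion question, the sketch below builds an Elasticsearch ```_bulk``` NDJSON body from r2elk-style documents. The index name and the commented-out POST are assumptions for illustration, assuming Elasticsearch is reachable on port 9200 with the lab's ```elastic/hackspacecon``` credentials:

``` python
import json

def to_bulk_payload(docs: list, index: str = "apt1") -> str:
    """Build an NDJSON _bulk body: an action line, then the document, per doc."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # the _bulk API requires a trailing newline

# Untested sketch of the POST itself (assumes `requests` is imported and
# Elasticsearch listens locally on 9200):
# requests.post("http://localhost:9200/_bulk",
#               data=to_bulk_payload(docs),
#               headers={"Content-Type": "application/x-ndjson"},
#               auth=("elastic", "hackspacecon"))
```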
204 |
--------------------------------------------------------------------------------
/03_Public_Feeds/README.md:
--------------------------------------------------------------------------------
1 | ## Public Feeds
2 |
3 | With Elasticsearch up and running, we'll start exploring downloading and ingesting some public feeds.
4 | Specifically we'll start with Hybrid-Analysis. Hybrid-Analysis provides a lot of fields to parse.
5 | In this example we'll only parse a handful to get started, and leave more for after the class.
6 |
7 | ## Getting Started with Hybrid-Analysis
8 |
9 | Hybrid-Analysis is a VirusTotal competitor that publishes a public feed at "https://hybrid-analysis.com/feed?json".
10 | This feed contains data from publicly submitted URLs and binaries. Best of all, no authentication required!
11 | Data includes:
12 | * Domains queried.
13 | * Processes spawned.
14 | * Files dropped.
15 | * A **boat load** more.
16 |
17 | The purpose of this portion of the lab is to expose you to Hybrid-Analysis as a data source to draw forensic artifacts from.
18 |
19 | Python Example for Querying API
20 |
21 | ``` python
22 | #!/usr/bin/env python3
23 | import sys
24 | import json
25 | from dataclasses import dataclass
26 | try:
27 | import requests
28 | except ImportError as err:
29 |     print(f"Error importing: {err}")
30 | sys.exit(1)
31 |
32 | url = "https://hybrid-analysis.com/feed?json"
33 |
34 | @dataclass
35 | class malwareStruct:
36 |     md5: str = None
37 |     sha1: str = None
38 |     sha256: str = None
39 |     tags: str = None
40 |     domains: list = None
41 |     process_list: str = None
42 |     threatscore: int = None
43 |     vt: int = None
44 |     ms: int = None
45 |     ips: list = None
46 |
47 | def pulsedive(ip):
48 | """
49 |     query an IP address against PulseDive's API
50 |
51 | returns JSON string
52 | """
53 | req = requests.get(f"https://pulsedive.com/api/explore.php?q={ip}")
54 | if req.status_code == 200:
55 | return (json.loads(req.content))
56 |
57 |
58 | def process_data(data):
59 | """
60 | get data from hybrid-analysis public feed and then enrich/parse it.
61 | data: JSON blob of data from hybrid-analysis
62 |
63 | return: None
64 | """
65 | for entry in data.get('data'):
66 |         malObj = malwareStruct()  # instantiate per entry; fields default to None
67 |
68 | malObj.md5 = entry.get('md5')
69 | malObj.sha256 = entry.get('sha256')
70 | malObj.tags = entry.get('tags')
71 | malObj.domains = entry.get('domains')
72 | malObj.process_list = entry.get('process_list')
73 | malObj.threatscore = entry.get('threatscore')
74 | malObj.vt = entry.get('vt_detect')
75 | malObj.ms = entry.get('ms_detect')
76 | malObj.ips = entry.get('hosts')
77 |
78 | __printer_helper__(malObj)
79 |
80 |         #if malObj.ips is not None:
81 |         #    [print(pulsedive(ip)) for ip in malObj.ips]
82 |
83 | def __printer_helper__(malObj):
84 | """
85 | Helper function to print dataclass
86 | """
87 | print(f"{malObj.md5}, {malObj.domains}, {malObj.ips}, {malObj.threatscore}")
88 |
89 | if __name__ == "__main__":
90 |
91 | headers = {"User-Agent": "Arch Cloud Labs"}
92 | req = requests.get(url, headers=headers)
93 | if req.status_code == 200:
94 | data = json.loads(req.content)
95 | process_data(data)
96 | else:
97 |         print(f"Error making request - status code: {req.status_code}")
98 | sys.exit(1)
99 | ```
100 |
101 |
102 |
103 | ## Enriching Data w/ Free Providers
104 | [GreyNoise](https://www.greynoise.io/), [Pulse Dive](https://pulsedive.com/), and [IPInfo](https://ipinfo.io/) are all data providers that have a free tier for independent researchers at the time of this workshop. The data they provide are complementary to each other, and I encourage you to explore all of them to understand their use cases.
105 |
106 | * [GreyNoise](https://www.greynoise.io): Provides insight into what's just noise on the internet vs. mass exploitation attempts, and things to pay attention to.
107 |     * *Note: the free tier limits the number of API requests per day.*
108 |
109 | * [Pulse Dive](https://pulsedive.com): CTI provider offering historical context around a given IP address/domain, plus geolocation.
110 |
111 | * [IP Info](https://ipinfo.com): Geolocation provider for IP addresses.
112 |     * *Note: requires a free signup for an API key.*
113 |
114 | Given the data we just fetched from Hybrid-Analysis, let's enrich some of the IP data with historic threat intel data!
115 | First, we'll modify the code above to enrich the IPs with PulseDive. PulseDive provides some historical data, classification (malicious/threat actor), and geolocation.
116 |
117 | Python Example for PulseDive API
118 |
119 | ``` python
120 | def pulsedive(ip):
121 | req = requests.get(f"https://pulsedive.com/api/explore.php?q={ip}")
122 | if req.status_code == 200:
123 | print(json.loads(req.content))
124 | ```
125 |
126 |
127 | Next, let's further enrich this with Grey Noise!
128 |
129 | Python Example for GreyNoise API
130 |
131 | ``` python
132 |
133 | def greynoise(ip):
134 | req = requests.get(f"https://api.greynoise.io/v3/community/{ip}")
135 | if req.status_code == 200:
136 | print(json.loads(req.content))
137 | ```
138 |
139 | An example output can be seen below. You may notice an interesting tag of "riot".
140 | Per [GreyNoise's documentation](https://docs.greynoise.io/docs/riot-data), riot is:
141 | > a new GreyNoise feature that informs users about IPs used by common business services that are almost certainly not attacking you
142 |
143 | This is a great feature of GreyNoise's API to help filter out IPs that are just noise.
144 |
145 | ``` bash
146 | {'ip': '142.250.189.161', 'noise': False, 'riot': True, 'classification': 'benign', 'name': 'Google APIs and Services', 'link': 'https://viz.greynoise.io/riot/142.250.189.161', 'last_seen': '2023-04-11', 'message': 'Success'}
147 | 2fb8d55e61a21e32fd228023748e614f, ['static.axept.io', 't4523b89a.emailsys1c.net', 'www.googleadservices.com', 'www.gstatic.com', 'www.rapidmail.de'], ['185.71.125.3', '142.250.72.195', '142.250.189.162', '13.227.74.3'], 26
148 | None
149 | {'ip': '142.250.72.195', 'noise': False, 'riot': True, 'classification': 'benign', 'name': 'Google APIs and Services', 'link': 'https://viz.greynoise.io/riot/142.250.72.195', 'last_seen': '2023-04-11', 'message': 'Success'}
150 | {'ip': '142.250.189.162', 'noise': False, 'riot': True, 'classification': 'benign', 'name': 'Google APIs and Services', 'link': 'https://viz.greynoise.io/riot/142.250.189.162', 'last_seen': '2023-04-11', 'message': 'Success'}
151 | {
152 |
153 | ```
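Building on the riot flag, a small filter like the sketch below (the helper name and sample records are illustrative, not part of the provided scripts) could drop benign business-service IPs before any further enrichment:

``` python
def filter_noise(results: list) -> list:
    """Keep only GreyNoise community results not flagged as riot or benign."""
    return [r for r in results
            if r and not r.get("riot") and r.get("classification") != "benign"]
```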
154 |
155 |
156 |
157 | Congrats on making it to this point! You've completed the course, and I hope you enjoyed it!
158 | Want to let me know something? Open an issue, and please share with your friends if this helped!
159 |
160 | ## Beyond The Class
161 |
162 | * Update the previous code to pull out additional fields you're interested in to ingest into Elasticsearch.
163 | * Explore Alien Vault's Open Threat Exchange API for pulling in threat data of interest to you!
164 | * Up until this point we've been doing "ad-hoc" Python scripts for the course, go back and integrate the scripts into a larger application!
165 |
--------------------------------------------------------------------------------
/03_Public_Feeds/ha-parse.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | import sys
3 | import json
4 | from dataclasses import dataclass
5 | try:
6 | import requests
7 | except ImportError as err:
8 |     print(f"Error importing: {err}")
9 | sys.exit(1)
10 |
11 | url = "https://hybrid-analysis.com/feed?json"
12 |
13 | @dataclass
14 | class malwareStruct:
15 |     md5: str = None
16 |     sha1: str = None
17 |     sha256: str = None
18 |     tags: str = None
19 |     domains: list = None
20 |     process_list: str = None
21 |     threatscore: int = None
22 |     vt: int = None
23 |     ms: int = None
24 |     ips: list = None
25 |
26 | def pulsedive(ip):
27 | """
28 | query an IP address against PulseDive's API
29 |
30 | returns JSON string
31 | """
32 | req = requests.get(f"https://pulsedive.com/api/explore.php?q={ip}")
33 | if req.status_code == 200:
34 | return (json.loads(req.content))
35 |
36 | def greynoise(ip):
37 | """
38 | query an IP address against GreyNoise's API
39 |
40 | returns JSON string
41 | """
42 | req = requests.get(f"https://api.greynoise.io/v3/community/{ip}")
43 | if req.status_code == 200:
44 | return (json.loads(req.content))
45 |
46 |
47 | def process_data(data):
48 | """
49 | get data from hybrid-analysis public feed and then enrich/parse it.
50 | data: JSON blob of data from hybrid-analysis
51 |
52 | return: None
53 | """
54 | for entry in data.get('data'):
55 |         malObj = malwareStruct()  # instantiate per entry; fields default to None
56 |
57 | malObj.md5 = entry.get('md5')
58 | malObj.sha256 = entry.get('sha256')
59 | malObj.tags = entry.get('tags')
60 | malObj.domains = entry.get('domains')
61 | malObj.process_list = entry.get('process_list')
62 | malObj.threatscore = entry.get('threatscore')
63 | malObj.vt = entry.get('vt_detect')
64 | malObj.ms = entry.get('ms_detect')
65 | malObj.ips = entry.get('hosts')
66 |
67 | __printer_helper__(malObj)
68 |
69 | if malObj.ips is not None:
70 | [print(greynoise(ip)) for ip in malObj.ips]
71 |
72 | def __printer_helper__(malObj):
73 | """
74 | Helper function to print dataclass
75 | """
76 | print(f"{malObj.md5}, {malObj.domains}, {malObj.ips}, {malObj.threatscore}")
77 |
78 | if __name__ == "__main__":
79 |
80 | headers = {"User-Agent": "Arch Cloud Labs"}
81 | req = requests.get(url, headers=headers)
82 | if req.status_code == 200:
83 | data = json.loads(req.content)
84 | process_data(data)
85 | else:
86 |         print(f"Error making request - status code: {req.status_code}")
87 | sys.exit(1)
88 |
--------------------------------------------------------------------------------
/03_Public_Feeds/otx-parse.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | import sys
3 | try:
4 | from OTXv2 import OTXv2
5 | from OTXv2 import IndicatorTypes
6 | except ImportError as ie:
7 | print(f"Error importing {ie}")
8 | sys.exit(1)
9 |
10 | API_KEY = "PUT_API_KEY_HERE"
11 | otx = OTXv2(API_KEY)
12 |
13 | # Get all the indicators associated with a pulse
14 | # Example Pulse: https://otx.alienvault.com/pulse/63fe959838b0af26d53e1e61
15 | # ^ the pulse ID is this part of the URL
16 | indicators = otx.get_pulse_indicators("63fe959838b0af26d53e1e61")
17 | for indicator in indicators:
18 | print(indicator)
19 |
--------------------------------------------------------------------------------
/Arch_Cloud_Labs_Malware_Analysis_Platform.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/archcloudlabs/HackSpaceCon_Malware_Analysis_Course/f2478b8837d7c673c7c5f08548878501b07115ee/Arch_Cloud_Labs_Malware_Analysis_Platform.pdf
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | 
2 |
3 | ## About The Course
4 |
5 | Do you have the desire to grow your skills in Malware Analysis, RE, and Software Engineering beyond just following tutorials?
6 |
7 | [Arch Cloud Labs](https://www.archcloudlabs.com) was built on running honeypots and analyzing malware samples in a homelab environment, and this course draws on that experience to offer a unique way to build those skills.
8 |
9 | This course offers a quick taste of building a malware analysis pipeline in your homelab, enabling you to recreate analysis done by large firms and inspiring you to perform analysis of your own.
10 |
11 | Each section is designed to get you started, and then leave you with questions to go and explore on your own.
12 |
13 | This combination of guided and experimental learning has worked well in previous courses, and Arch Cloud Labs hopes you find it enjoyable.
14 |
15 | This repo is Apache 2.0 licensed and dedicated to all those who have contributed malware samples, blogs, tweets, YouTube videos, conference talks, and more. Thank you for your contributions.
16 |
17 | ## Disclaimer
18 | **These are real malware samples!**
19 |
20 | **Do NOT run them unless you are absolutely sure of what you are doing!**
21 |
22 | **Arch Cloud Labs is not responsible for any damages.**
23 |
--------------------------------------------------------------------------------