├── .gitignore
├── README.md
├── analytics
│   ├── Dockerfile
│   ├── README.md
│   ├── aws_credentials.txt
│   ├── docker_build.sh
│   ├── entrypoint.sh
│   ├── environment.env
│   ├── requirements.txt
│   └── src
│       ├── beaconer.py
│       ├── cloud_sniper_800.png
│       ├── pdf_reporter.py
│       └── results_generator.py
├── cloud-sniper
│   ├── cloudwatch-event-target.tf
│   ├── cloudwatch_event_rule.tf
│   ├── data.tf
│   ├── dynamo.tf
│   ├── iam-policy.tf
│   ├── iam-role-policy-attachment.tf
│   ├── iam-role-policy.tf
│   ├── iam_group.tf
│   ├── iam_group_policy_attachment.tf
│   ├── kinesis_firehose_delivery_stream.tf
│   ├── lambda-funtion.tf
│   ├── lambda-permission.tf
│   ├── lambdas
│   │   └── functions
│   │       ├── cloud-sniper-tagging
│   │       │   └── cloud_sniper_tagging_ir.py
│   │       └── cloud-sniper
│   │           └── cloud_sniper.py
│   ├── outputs.tf
│   ├── role.tf
│   ├── s3_bucket.tf
│   ├── sqs_queue.tf
│   ├── sqs_queue_policy.tf
│   ├── variables.tf
│   └── waf.tf
├── images
│   ├── deployment.png
│   └── logo.png
├── playbooks
│   ├── cloudsniper_playbook_nist.md
│   └── cloudsniper_playbook_nist.tf
└── wiki
    └── WIKI.md
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | *.zip
3 | .idea
4 | .terraform
5 | main.tf
6 | .terraform/
7 | terraform.tfstate
8 | terraform.tfstate.backup
9 | terraform.tfvars
10 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 | ### [Cloud Sniper has a new home!! Click here to visit!!](https://github.com/cloud-sniper/cloud-sniper)
3 |
4 |
5 | 
6 |
7 | ## *Cloud Security Operations*
8 |
9 | ### *What is Cloud Sniper?*
10 |
11 | ***Cloud Sniper*** is a platform designed to manage *Security Operations* in cloud environments. It is an open platform that enables responding to security incidents by accurately analyzing and correlating native cloud artifacts. It is meant to be used as a *Virtual Security Operations Center (vSOC)* to detect and remediate security incidents, providing complete visibility of the company's cloud security posture.
12 |
13 | With this platform, you get complete and comprehensive management of security incidents, reducing the cost of having a group of level-1 security analysts hunting for cloud-based *Indicators of Compromise (IOC)*. These *IOCs*, if not correlated, make complex attacks difficult to detect. At the same time, ***Cloud Sniper*** enables advanced security analysts to integrate the platform with external forensic or incident-response tools to provide security feeds into the platform.
14 |
15 | The cloud-based platform is deployed automatically and provides complete and native integration with all the necessary information sources, avoiding the problems many vendors face when deploying or collecting data.
16 |
17 | ***Cloud Sniper*** receives cloud-native and third-party feeds and automatically responds, protecting your infrastructure and building a knowledge database of the *IOCs* affecting your platform. This is the best way to gain visibility in environments where information can be limited by the *Shared Responsibility Model* enforced by cloud providers.
18 |
19 | To detect advanced attack techniques, which may easily be overlooked, the ***Cloud Sniper Analytics*** module correlates events to generate *IOCs*. These give visibility into complex artifacts, helping both to stop the attack and to analyze the attacker's *TTPs*.
20 |
21 | ***Cloud Sniper*** is currently available for *AWS*, but it will be extended to other cloud platforms.
22 |
23 | ### *Automatic infrastructure deployment (for AWS)*
24 |
25 | 
26 |
27 | ### WIKI => [HOW IT WORKS](wiki/WIKI.md)
28 |
29 | ### Cloud Sniper releases
30 |
31 | 1. Automatic Incident and Response
32 |     1. WAF filtering
33 |     2. NACLs filtering
34 |     3. IOCs knowledge database
35 |     4. Tactics, Techniques and Procedures (TTPs) used by the attacker
36 | 2. Security playbooks
37 |     1. NIST approach
38 | 3. Automatic security tagging
39 | 4. Cloud Sniper Analytics
40 |     1. Beaconing detection with VPC Flow Logs (C2 detection analytics)
41 |
42 | ### Upcoming Features and Integrations
43 |
44 | 1. Security playbooks for cloud-based environments
45 | 2. Centralized security incident management for multiple accounts (web management UI)
46 | 3. WAF analytics
47 | 4. Case management (automatic case creation)
48 | 5. IOCs enrichment and Threat Intelligence feeds
49 | 6. Automatic security reports based on well-known security standards (NIST)
50 | 7. Integration with third-party security tools (DFIR)
51 |
--------------------------------------------------------------------------------
/analytics/Dockerfile:
--------------------------------------------------------------------------------
1 | ARG BASE_CONTAINER=python:3
2 |
3 | FROM $BASE_CONTAINER
4 |
5 |
6 |
7 | COPY requirements.txt .
8 | RUN pip install --no-cache-dir -r requirements.txt
9 | RUN pip install awscli
10 |
11 | COPY aws_credentials.txt /root/.aws/credentials
12 |
13 | WORKDIR /usr/src/beaconer
14 |
15 | COPY entrypoint.sh ./
16 | COPY environment.env ./
17 | COPY ./src/* ./
18 |
19 | ENTRYPOINT ["/bin/bash", "entrypoint.sh"]
20 |
--------------------------------------------------------------------------------
/analytics/README.md:
--------------------------------------------------------------------------------
1 | # Cloud Sniper Analytics module
2 |
3 | Ideally, this should be deployed with Terraform.
4 |
5 | Currently, VPC Flow Logs are expected to be stored on S3, and the beaconing module looks for beaconing patterns in those flows.
6 |
7 | To configure it, update:
8 | * aws_credentials.txt: your AWS credentials for boto3
9 | * environment.env: the S3 bucket and key prefixes to read the VPC flows from and to write the generated report to
10 |
11 | Create the Docker image on an EC2 instance (`sudo sh docker_build.sh`) and run the container (`sudo docker run cloudsniper/beaconer`). A PDF report is stored in the configured S3 path.
12 |
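13 | Once the container finishes, the generated report can also be pulled back from S3 without opening the console. The snippet below is a minimal, illustrative boto3 sketch (not part of the module); the bucket and prefix are assumed to match the values in environment.env:
14 |
15 | ```python
16 | import boto3
17 |
18 | # Assumed to mirror environment.env; adjust to your deployment.
19 | S3_BUCKET = "cs-ekoparty"
20 | S3_OUTPUT_PATH = "vpc-flowgs_analysis/"
21 |
22 | s3 = boto3.client("s3")
23 |
24 | # List the objects written by beaconer.py and download the most recent one.
25 | objects = s3.list_objects_v2(Bucket=S3_BUCKET, Prefix=S3_OUTPUT_PATH).get("Contents", [])
26 | if objects:
27 |     latest = max(objects, key=lambda o: o["LastModified"])["Key"]
28 |     s3.download_file(S3_BUCKET, latest, latest.split("/")[-1])
29 |     print("Downloaded", latest)
30 | else:
31 |     print("No reports found under", S3_OUTPUT_PATH)
32 | ```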
--------------------------------------------------------------------------------
/analytics/aws_credentials.txt:
--------------------------------------------------------------------------------
1 | [default]
2 | aws_access_key_id =
3 | aws_secret_access_key =
4 |
5 |
--------------------------------------------------------------------------------
/analytics/docker_build.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | DEFAULT_IMAGE_NAME=cloudsniper/beaconer
4 |
5 | usage()
6 | {
7 | echo "Usage: docker_build.sh [image_name]"
8 | echo -e "\timage_name is optional (default ${DEFAULT_IMAGE_NAME})"
9 | }
10 |
11 | case $1 in
12 | -h | --help )
13 | usage
14 | exit 0
15 | esac
16 |
17 | SCRIPTPATH="$( cd "$(dirname "$0")" ; pwd -P )"
18 | echo "Building the docker image with Dockerfile defined at $SCRIPTPATH"
19 |
20 | if [ "$#" -ge 2 ]; then
21 |     usage; exit 1
22 | fi
23 |
24 | IMAGE_NAME=${DEFAULT_IMAGE_NAME}
25 | if [ "$#" -eq 1 ]; then
26 | IMAGE_NAME=$1
27 | fi
28 |
29 | echo docker build $SCRIPTPATH -t $IMAGE_NAME
30 | docker build $SCRIPTPATH -t $IMAGE_NAME
31 |
--------------------------------------------------------------------------------
/analytics/entrypoint.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | source environment.env
4 |
5 | python beaconer.py ${S3_BUCKET} ${S3_INPUT_PATH} ${S3_OUTPUT_PATH}
6 |
--------------------------------------------------------------------------------
/analytics/environment.env:
--------------------------------------------------------------------------------
1 | export S3_BUCKET=cs-ekoparty
2 | export S3_INPUT_PATH=vpc-flows/
3 | export S3_OUTPUT_PATH=vpc-flowgs_analysis/
4 |
--------------------------------------------------------------------------------
/analytics/requirements.txt:
--------------------------------------------------------------------------------
1 | boto3==1.9.109
2 | ipaddress==1.0.22
3 | matplotlib==3.0.2
4 | numpy==1.15.4
5 | pandas==0.23.4
6 | reportlab==3.5.13
7 | scikit-learn==0.20.1
8 | scipy==1.1.0
9 | sklearn==0.0
10 |
--------------------------------------------------------------------------------
/analytics/src/beaconer.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # coding: utf-8
3 |
4 | import os
5 | import argparse
6 | import scipy.stats
7 | import ipaddress
8 | import numpy as np
9 | import pandas as pd
10 | import gzip
11 | import datetime
12 | import boto3
13 |
14 | import results_generator
15 |
16 |
17 | TMP_DOWNLOAD_DIR = "/tmp/s3_download"
18 | TMP_REPORT_DIR = "/tmp/report"
19 | PROCESS_LAST_HOURS = 10
20 | now = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
21 | RESULTS_FNAME = "%s.json" % now
22 | RESULTS_FNAME_PDF = "%s.pdf" % now
23 |
24 | FLOW_COLUMNS = [
25 | "date",
26 | "version",
27 | "account-id",
28 | "interface-id",
29 | "srcaddr",
30 | "dstaddr",
31 | "srcport",
32 | "dstport",
33 | "protocol",
34 | "packets",
35 | "bytes",
36 | "start",
37 | "end",
38 | "action",
39 | "log-status",
40 | ]
41 |
42 | def get_args():
43 | parser = argparse.ArgumentParser()
44 | parser.add_argument("s3_bucket", help="AWS S3 bucket name where to read the vpc flows from and write the results")
45 |     parser.add_argument("s3_input_path", help="path in the s3_bucket to look for the data")
46 |     parser.add_argument("s3_output_path", help="path in the s3_bucket to write the results")
47 |
48 | return parser.parse_args()
49 |
50 |
51 | def load_data(s3_bucket, s3_input_path):
52 | s3 = boto3.resource('s3')
53 |
54 | bucket = s3.Bucket(name=s3_bucket)
55 | prefix = s3_input_path
56 | if not prefix.endswith("/"): prefix += "/"
57 |
58 | if not os.path.exists(TMP_DOWNLOAD_DIR):
59 | os.mkdir(TMP_DOWNLOAD_DIR)
60 | for i, s3_file_obj in enumerate(bucket.objects.filter(Prefix=prefix)):
61 | bucket.download_file(s3_file_obj.key, TMP_DOWNLOAD_DIR + "/%06d" % i + ".log.gz")
62 |         if i == 110: break  # cap the number of flow log files downloaded per run
63 |
64 | data = []
65 | for fname in sorted(os.listdir(TMP_DOWNLOAD_DIR)):
66 | if not fname.endswith(".log.gz"):
67 | continue
68 | with gzip.open(os.path.join(TMP_DOWNLOAD_DIR, fname), 'r') as fd:
69 | first_line = True
70 | for line in fd:
71 | if first_line:
72 | first_line = False
73 | continue
74 | data.append(line.decode("utf-8").strip().split(" "))
75 |
76 | if len(data[0]) == len(FLOW_COLUMNS):
77 | df = pd.DataFrame(data, columns=FLOW_COLUMNS)
78 | df.drop(['date'], axis=1, inplace=True)
79 | else:
80 | df = pd.DataFrame(data, columns=FLOW_COLUMNS[1:])
81 | return df
82 |
83 |
84 | def filter_format_data(df):
85 | df = df[df.srcaddr != "-"]
86 | df = df[df.dstaddr != "-"]
87 | df.drop(["version", "account-id", "interface-id", "srcport"], axis=1, inplace=True)
88 | df = df.replace("-", np.nan)
89 | df = df.replace("-", np.nan)
90 | df[["dstport", "protocol", "packets", "bytes", "start", "end"]] = \
91 | df[["dstport", "protocol", "packets", "bytes", "start", "end"]].apply(pd.to_numeric)
92 | return df
93 |
94 |
95 | def sort_data(df):
96 | df['datetime'] = pd.to_datetime(df.start, unit='s')
97 | # TODO: should we process just the last hours?
98 | if PROCESS_LAST_HOURS:
99 | last_N_hs = max(df.datetime) - datetime.timedelta(hours=PROCESS_LAST_HOURS)
100 | df = df[df.datetime >= last_N_hs]
101 | df = df.set_index('datetime')
102 | df.sort_index(inplace=True)
103 | return df.reset_index(level=0)
104 |
105 |
106 | def filter_useless_data(df):
107 | # Requirements
108 | # * srcIP should be private
109 |     # * dstport <= 1024 and dstport != 123 (NTP)
110 | df = df[df.srcaddr.map(lambda x: ipaddress.ip_address(x).is_private)]
111 | df = df[df.dstport <= 1024]
112 | df = df[df.dstport != 123]
113 | return df
114 |
115 |
116 | def filter_unfrequent_data(df):
117 | # remove communications if there were less than 24 snippets
118 | selection = df.groupby(["srcaddr", "dstaddr", "dstport"])
119 | df = selection.filter(lambda x: len(x) >= 24)
120 | df = df.reset_index(level=0)#, inplace=True)
121 | return df
122 |
123 |
124 | def get_features(groups):
125 | features = {}
126 | histograms = {}
127 |
128 | def compute_features(series):
129 | res = []
130 | res.append(scipy.stats.skew(series, bias=False))
131 | res.append(np.std(series))
132 | res.append(series.kurt())
133 | return res
134 |
135 | for (srcaddr, dstaddr, port), traffic in groups:
136 | deltas = traffic.datetime - traffic.datetime.shift(1)
137 | deltas = deltas.dt.seconds/60
138 | deltas = deltas.fillna(0).astype(int)
139 | packets = traffic.packets.astype(int)
140 | bytes_ = traffic.bytes.astype(int)
141 | cond = deltas > 0
142 | deltas = deltas[cond]
143 | packets = packets[cond]
144 | bytes_ = bytes_[cond]
145 | if deltas.size == 0 or packets.size == 0 or bytes_.size == 0:
146 | continue
147 | ftrs = [compute_features(x) for x in [deltas, packets, bytes_]]
148 | ftrs = [x for sublist in ftrs for x in sublist] # flatten
149 | if (srcaddr, dstaddr, port) not in features:
150 | features[(srcaddr, dstaddr, port)] = []
151 | features[(srcaddr, dstaddr, port)].append(ftrs)
152 | histograms[(srcaddr, dstaddr, port)] = (deltas, packets, bytes_)
153 | return features, histograms
154 |
155 |
156 | # Heuristics based classification
157 | def classify(features):
158 | res = []
159 | for k in features:
160 |         counter = 0
161 |         feature = features[k][0]
162 |         counter += feature[1] < 100 # deltas std
163 |         counter += abs(feature[3]) < 10 # packets skew
164 |         counter += feature[4] < 50 # packets std
165 |         counter += abs(feature[5]) < 100 or np.isnan(feature[5]) # packets kurt
166 |         counter += feature[7] < 5000 # bytes std
167 |         counter += feature[8] < 50 or np.isnan(feature[8]) # bytes kurt
168 |         if counter >= 5:
169 |             res.append(k)
170 | return res
171 |
172 |
173 | def get_report(groups, histograms, keys):
174 | r = results_generator.ResultsGenerator()
175 |
176 | for (srcaddr, dstaddr, port) in keys:
177 | first = groups.get_group((srcaddr, dstaddr, port)).iloc[0]
178 | last = groups.get_group((srcaddr, dstaddr, port)).iloc[-1]
179 |
180 | deltas, packets, bytes_ = histograms[(srcaddr, dstaddr, port)]
181 | comm_hist = results_generator.Histogram()
182 | comm_hist.set_points(deltas.index.tolist(), deltas.values.tolist(), [True] * len(deltas))
183 | pack_hist = results_generator.Histogram()
184 | pack_hist.set_points(packets.index.tolist(), packets.values.tolist(), [True] * len(packets))
185 | bytes_hist = results_generator.Histogram()
186 | bytes_hist.set_points(bytes_.index.tolist(), bytes_.values.tolist(), [True] * len(bytes_))
187 |
188 | r.add_communication("", "", "", srcaddr, dstaddr, np.asscalar(port),
189 | "", first.datetime.ctime(), last.datetime.ctime(),
190 | comm_hist, pack_hist, bytes_hist)
191 | return r
192 |
193 |
194 | def write_results(report, s3_bucket, s3_output_fname):
195 | s3 = boto3.resource('s3')
196 | s3.Object(s3_bucket, s3_output_fname).put(Body=report)
197 |
198 |
199 | def write_pdf_report(grouped_df, bad_stuff, s3_bucket, s3_output_fname):
200 | if not os.path.exists(TMP_REPORT_DIR):
201 | os.mkdir(TMP_REPORT_DIR)
202 | tmp_fname = os.path.join(TMP_REPORT_DIR, RESULTS_FNAME_PDF)
203 | create_pdf_report(tmp_fname, grouped_df, bad_stuff)
204 | s3 = boto3.resource('s3')
205 | s3.meta.client.upload_file(tmp_fname, s3_bucket, s3_output_fname)
206 |
207 |
208 | def create_pdf_report(tmp_fname, grouped_df, bad_stuff):
209 | REPORT_TITLE = "CLOUD SNIPER®"
210 | REPORT_SUBTITLE = "BEACONING DETECTION REPORT"
211 | REPORT_DATE = datetime.datetime.now().strftime("%B %d, %Y (at %H:%M)")
212 |
213 | import io
214 | from reportlab.lib.pagesizes import letter
215 | from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer, Image, PageBreak
216 | from reportlab.lib.styles import getSampleStyleSheet
217 | from reportlab.lib.units import inch
218 | import numpy as np
219 | import matplotlib.pyplot as plt
220 | import matplotlib.dates as mdates
221 |
222 |
223 | def add_text_to_doc(text, style="Normal", fontsize=12):
224 | Story.append(Spacer(1, 12))
225 |         ptext = "<font size={}>{}</font>".format(fontsize, text)
226 | Story.append(Paragraph(ptext, styles[style]))
227 | Story.append(Spacer(1, 12))
228 |
229 | def plot_hist(x, xlabel, ylabel, title, color='b'):
230 | fig, ax = plt.subplots(figsize=(7, 1.8))
231 | plt.hist(x, bins=300, facecolor=color, rwidth=2)
232 | date_fmt = mdates.DateFormatter('%H:%M')
233 | ax.xaxis.set_major_formatter(date_fmt)
234 | ax.set(title=title)
235 | ax.set_xlabel(xlabel)
236 | ax.set_ylabel(ylabel)
237 |
238 | buf = io.BytesIO()
239 | plt.savefig(buf, format='png', dpi=300)
240 | buf.seek(0)
241 | plt.close()
242 | return buf
243 |
244 | styles=getSampleStyleSheet()
245 | doc = SimpleDocTemplate(tmp_fname,pagesize=letter,
246 | rightMargin=inch/2,leftMargin=inch/2,
247 | topMargin=72,bottomMargin=18)
248 | # Title section
249 | Story=[]
250 | Story.append(Image("./cloud_sniper_800.png", width=5*inch, height=1*inch))
251 | Story.append(Spacer(1, 200))
252 | # add_text_to_doc(REPORT_TITLE, style="Title", fontsize=24)
253 | add_text_to_doc(REPORT_SUBTITLE, style="Title", fontsize=24)
254 | add_text_to_doc("Report generated on " + REPORT_DATE, style="Title", fontsize=14)
255 |
256 | # Nothing to report
257 | if not bad_stuff:
258 | add_text_to_doc("No suspicious beaconing communications found :)")
259 |
260 | # Reporting for every suspicious communication
261 | image_buffers = []
262 | hist_size = (7*inch, 1.5*inch)
263 | for (srcaddr, dstaddr, dstport) in bad_stuff:
264 | Story.append(PageBreak())
265 | # subtitle describing the communication IP -> IP (port number)
266 | comm_description = "%s -> %s (port %s)" %(srcaddr, dstaddr, dstport)
267 | add_text_to_doc(comm_description, style="Title", fontsize=14)
268 | df = grouped_df.get_group((srcaddr, dstaddr, dstport))
269 | # plotting delta time
270 | plot = plot_hist(list(df.datetime), "Time (s)", "Frequency",
271 | "Communications in the last %i hours" %PROCESS_LAST_HOURS)
272 | image_buffers.append(plot)
273 | im = Image(image_buffers[-1], hist_size[0], hist_size[1])
274 | Story.append(im)
275 | # # plotting packets
276 | # hist = get_list_from_histogram(histograms["packets"])
277 | # plot = plot_hist(hist, "Number of packets sent", "Frequency",
278 | # "Packets sent distribution", color=(1,.5,0))
279 | # image_buffers.append(plot)
280 | # im = Image(image_buffers[-1], hist_size[0], hist_size[1])
281 | # Story.append(im)
282 | # # plotting bytes
283 | # hist = get_list_from_histogram(histograms["bytes"])
284 | # plot = plot_hist(hist, "Number of bytes sent", "Frequency",
285 | # "Bytes sent distribution", color='m')
286 | # image_buffers.append(plot)
287 | # im = Image(image_buffers[-1], hist_size[0], hist_size[1])
288 | # Story.append(im)
289 |
290 | doc.build(Story)
291 | for image_buffer in image_buffers:
292 | image_buffer.close()
293 |
294 |
295 | if __name__ == "__main__":
296 | args = get_args()
297 | df = load_data(args.s3_bucket, args.s3_input_path)
298 | df = filter_format_data(df)
299 | df = sort_data(df)
300 | df = filter_useless_data(df)
301 | df = filter_unfrequent_data(df)
302 | groups = df.groupby(["srcaddr", "dstaddr", "dstport"])
303 | features, histograms = get_features(groups)
304 | bad_stuff_keys = classify(features)
305 | report = get_report(groups, histograms, bad_stuff_keys)
306 | s3_output_path = args.s3_output_path
307 | if not s3_output_path.endswith("/"): s3_output_path += "/"
308 |     write_results(report.to_json(), args.s3_bucket, s3_output_path + RESULTS_FNAME)
309 |     write_pdf_report(groups, bad_stuff_keys, args.s3_bucket, s3_output_path + RESULTS_FNAME_PDF)
310 |
311 |
--------------------------------------------------------------------------------
/analytics/src/cloud_sniper_800.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nicolasriverocorvalan/cloud-sniper/def926d45e13c1084cd20e007023e110faf8dabb/analytics/src/cloud_sniper_800.png
--------------------------------------------------------------------------------
/analytics/src/pdf_reporter.py:
--------------------------------------------------------------------------------
1 | from datetime import datetime
2 | import io
3 |
4 | from reportlab.lib.pagesizes import letter
5 | from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer, Image, PageBreak
6 | from reportlab.lib.styles import getSampleStyleSheet
7 | from reportlab.lib.units import inch
8 |
9 | import numpy as np
10 | import matplotlib.pyplot as plt
11 |
12 |
13 |
14 | REPORT_TITLE = "CLOUD SNIPER®"
15 | REPORT_SUBTITLE = "BEACONING DETECTION REPORT"
16 | REPORT_DATE = datetime.now().strftime("%B %d, %Y (at %H:%M)")
17 |
18 |
19 | def get_list_from_histogram(histogram):
20 | x = []
21 | for entry in histogram:
22 | x += entry["frequency"] * [entry["value"]]
23 | return x
24 |
25 |
26 | def plot_hist(x, xlabel, ylabel, title, color='b'):
27 | n, bins, patches = plt.hist(x, facecolor=color, alpha=0.75)
28 |
29 | plt.xlabel(xlabel)
30 | plt.ylabel(ylabel)
31 | plt.title(title)
32 | plt.grid(True)
33 |
34 | buf = io.BytesIO()
35 | plt.savefig(buf, format='png', dpi=300)
36 | buf.seek(0)
37 | plt.close()
38 |
39 | return buf
40 |
41 |
42 | def create_report(fname, report):
43 | def add_text_to_doc(text, style="Normal", fontsize=12):
44 | Story.append(Spacer(1, 12))
45 |         ptext = "<font size={}>{}</font>".format(fontsize, text)
46 | Story.append(Paragraph(ptext, styles[style]))
47 | Story.append(Spacer(1, 12))
48 |
49 | styles=getSampleStyleSheet()
50 | doc = SimpleDocTemplate(fname,pagesize=letter,
51 | rightMargin=inch/2,leftMargin=inch/2,
52 | topMargin=72,bottomMargin=18)
53 | # Title section
54 | Story=[]
55 | Story.append(Spacer(1, 200))
56 | add_text_to_doc(REPORT_TITLE, style="Title", fontsize=24)
57 | add_text_to_doc(REPORT_SUBTITLE, style="Title", fontsize=24)
58 | add_text_to_doc("Report generated on " + REPORT_DATE, style="Title", fontsize=14)
59 |
60 | # Nothing to report
61 | if not report.get("suspiciousCommunications"):
62 | add_text_to_doc("No suspicious beaconing communications found :)")
63 |
64 | # Reporting for every suspicious communication
65 | image_buffers = []
66 | hist_size = (4.5*inch, 2.7*inch)
67 | for comm in report.get("suspiciousCommunications"):
68 | Story.append(PageBreak())
69 | # subtitle describing the communication IP -> IP (port number)
70 | comm_description = "%s -> %s (port %s)" %(comm["srcIp"], comm["dstIp"], comm["dstPort"])
71 | add_text_to_doc(comm_description, style="Title", fontsize=14)
72 | histograms = comm["histograms"]
73 | # plotting delta time
74 | hist = get_list_from_histogram(histograms["communicationDeltaTime"])
75 | plot = plot_hist(hist, "Delta time (s)", "Frequency",
76 | "Delta time between consecutive communications")
77 | image_buffers.append(plot)
78 | im = Image(image_buffers[-1], hist_size[0], hist_size[1])
79 | Story.append(im)
80 | # plotting packets
81 | hist = get_list_from_histogram(histograms["packets"])
82 | plot = plot_hist(hist, "Number of packets sent", "Frequency",
83 | "Packets sent distribution", (1,.5,0))
84 | image_buffers.append(plot)
85 | im = Image(image_buffers[-1], hist_size[0], hist_size[1])
86 | Story.append(im)
87 | # plotting bytes
88 | hist = get_list_from_histogram(histograms["bytes"])
89 | plot = plot_hist(hist, "Number of bytes sent", "Frequency",
90 | "Bytes sent distribution", 'm')
91 | image_buffers.append(plot)
92 | im = Image(image_buffers[-1], hist_size[0], hist_size[1])
93 | Story.append(im)
94 |
95 | doc.build(Story)
96 | for image_buffer in image_buffers:
97 | image_buffer.close()
98 |
99 |
--------------------------------------------------------------------------------
/analytics/src/results_generator.py:
--------------------------------------------------------------------------------
1 | import json
2 |
3 | class Histogram():
4 | def __init__(self):
5 | self.__points = []
6 | def add(self, value, freq, is_anomaly):
7 | self.__points.append((value, freq, is_anomaly))
8 | def set_points(self, values, frequencies, is_anomaly):
9 | for i in range(len(values)):
10 | self.__points.append((values[i], frequencies[i], is_anomaly[i]))
11 | def to_dict(self):
12 | res = [{
13 | "value": x[0],
14 | "frequency": x[1],
15 | "anomaly": x[2]
16 | } for x in self.__points]
17 | return res
18 |
19 | class ResultsGenerator():
20 | def __init__(self):
21 | self.__version = "0.0.1"
22 | self.__communications = []
23 | def add_communication(self, version, account, interface, src_ip, dst_ip, dst_port,
24 | protocol, start_date, end_date, comm_hist, pack_hist, bytes_hist):
25 | self.__communications.append({
26 | "version": version,
27 | "accountId": account,
28 | "interfaceId": interface,
29 | "srcIp": src_ip,
30 | "dstIp": dst_ip,
31 | "dstPort": dst_port,
32 | "protocol": protocol,
33 | "startDate": start_date,
34 | "endDate": end_date,
35 | "histograms": {
36 | "communicationDeltaTime": comm_hist.to_dict(),
37 | "packets": pack_hist.to_dict(),
38 | "bytes": bytes_hist.to_dict()
39 | }
40 | })
41 | def to_dict(self):
42 | return {
43 | "version": self.__version,
44 | "suspiciousCommunications": self.__communications
45 | }
46 | def to_json(self):
47 | return json.dumps(self.to_dict(), indent=4)
48 |
49 | if __name__ == "__main__":
50 | r = ResultsGenerator()
51 | comm_hist = Histogram()
52 | comm_hist.set_points([1,2,3],[100,10,3],[False, False, False])
53 | pack_hist = Histogram()
54 | bytes_hist = Histogram()
55 | r.add_communication("2", "1234", "123", "1.2.3.4", "4.3.2.1",
56 | 123, 6, "2018/12/20 13:40", "2018/12/21 10:37",
57 | comm_hist, pack_hist, bytes_hist)
58 | print(r.to_json())
--------------------------------------------------------------------------------
/cloud-sniper/cloudwatch-event-target.tf:
--------------------------------------------------------------------------------
1 | resource "aws_cloudwatch_event_target" "aws_cloudwatch_event_target_cloud_sniper" {
2 | rule = "${aws_cloudwatch_event_rule.aws_cloudwatch_event_rule_cloud_sniper.name}"
3 | arn = "${aws_sqs_queue.sqs_queue_cloud_sniper.arn}"
4 | }
5 |
6 | resource "aws_cloudwatch_event_target" "aws_cloudwatch_event_rule_schedule_cloud_sniper" {
7 | rule = "${aws_cloudwatch_event_rule.aws_cloudwatch_event_rule_schedule_cloud_sniper.name}"
8 | arn = "${aws_lambda_function.lambda_function_cloud_sniper.arn}"
9 | }
10 |
11 | resource "aws_cloudwatch_event_target" "cloudwatch_event_target_cloud_sniper_tagging" {
12 | rule = "${aws_cloudwatch_event_rule.cloudwatch_event_rule_cloud_sniper_tagging.name}"
13 | arn = "${aws_lambda_function.lambda_function_cloud_sniper_tagging_ir.arn}"
14 | }
15 |
--------------------------------------------------------------------------------
/cloud-sniper/cloudwatch_event_rule.tf:
--------------------------------------------------------------------------------
1 | resource "aws_cloudwatch_event_rule" "aws_cloudwatch_event_rule_cloud_sniper" {
2 | name = "aws_cloudwatch_event_rule_cloud_sniper"
3 | description = "aws_cloudwatch_event_rule_cloud_sniper"
4 |
5 |   event_pattern = <
--------------------------------------------------------------------------------
/cloud-sniper/lambdas/functions/cloud-sniper/cloud_sniper.py:
--------------------------------------------------------------------------------
477 |             try:
478 |                 network_acl = r_ec2.NetworkAcl(r['nacl_id'])
479 |                 response2 = network_acl.delete_entry(
480 |                     Egress=False,
481 |                     RuleNumber=int(r['rule_no'])
482 |                 )
483 |
484 |                 if response2['ResponseMetadata']['HTTPStatusCode'] == 200:
485 |                     log.info('NACL rule deleted')
486 |
487 |                 else:
488 |                     log.info('Failed to delete the rule')
489 |
490 |             except Exception as e:
491 |                 log.info("Failed to instantiate resource NetworkAcl: %s" % e)
492 |
493 |     except Exception as e:
494 |         log.info("NACls could not be deleted: %s" % e)
495 |
496 |
497 | def cloud_sniper(event, context):
498 |
499 |     global message
500 |
501 |     log.info("GuardDuty findings: %s" % json.dumps(event))
502 |
503 |     try:
504 |         message = read_sqs()
505 |         if message:
506 |             search_ioc()
507 |             incident_and_response()
508 |             clean_nacls()
509 |             delete_sqs()
510 |
511 |             log.info("Properly processed findings")
512 |         else:
513 |             log.info("There is no new message in the queue")
514 |
515 |     except Exception as e:
516 |         log.error('Failure to process GuardDuty finding: %s' % e)
517 |
--------------------------------------------------------------------------------
/cloud-sniper/outputs.tf:
--------------------------------------------------------------------------------
1 | output "bucket_arn" {
2 | value = "${aws_s3_bucket.s3_bucket_cloud_sniper_data_store.arn}"
3 | }
4 |
--------------------------------------------------------------------------------
/cloud-sniper/role.tf:
--------------------------------------------------------------------------------
1 | resource "aws_iam_role" "role_cloud_sniper" {
2 | name = "role_cloud_sniper"
3 | path = "/"
4 | assume_role_policy = "${data.aws_iam_policy_document.iam_policy_lambda.json}"
5 | }
6 |
7 | resource "aws_iam_role" "iam_role_firehose_waf" {
8 | name = "iam_role_firehose_waf"
9 | assume_role_policy = "${data.aws_iam_policy_document.iam_policy_firehose.json}"
10 | }
11 |
12 | resource "aws_iam_role" "role_cloud_sniper_tagging" {
13 | name = "role_cloud_sniper_tagging"
14 | path = "/"
15 |
16 |   assume_role_policy = <
--------------------------------------------------------------------------------
/playbooks/cloudsniper_playbook_nist.md:
--------------------------------------------------------------------------------
6 | ID.AM
7 | The data, personnel, devices, systems, and facilities that enable the organization to achieve business purposes are identified and managed consistent with their relative importance to business objectives and the organization’s risk strategy
8 |
9 |
10 | 1. **IAM**
11 |
12 |     1. Checks whether IAM groups have at least one IAM user
13 |     **#IDENTIFY**
14 |
15 |     2. Checks whether IAM users are members of at least one IAM group
16 |     **#IDENTIFY**
17 |
18 |     3. Checks that none of the IAM users have policies attached. IAM users must inherit permissions from IAM groups or roles
19 |     **#IDENTIFY**
20 |
21 |     4. Checks whether your AWS Identity and Access Management (IAM) users have passwords or active access keys that have not been used within the specified number of days you provided
22 |     **#IDENTIFY**
23 |
24 | 2. **Security groups**
25 |
26 |     1. Checks that the default security group of any VPC does not allow inbound or outbound traffic
27 |     **#IDENTIFY**
28 |
29 | 3. **ACM Certificates**
30 |
31 |     1. Checks whether ACM Certificates in your account are marked for expiration within the specified number of days
32 |     **#IDENTIFY**
33 |
34 | 4. **KMS**
35 |
36 |     1. Checks whether the active access keys are rotated within the number of days specified in maxAccessKeyAge
37 |     **#IDENTIFY**
38 |
39 | 5. **CloudTrail**
40 |
41 |     1. Checks whether CloudTrail is enabled in your account
42 |     **#IDENTIFY**
43 |
44 |     2. Checks whether CloudTrail trails are configured to send logs to CloudWatch Logs
45 |     **#IDENTIFY**
46 |
47 |     3. Checks whether CloudTrail is configured to use server-side encryption (SSE-KMS)
48 |     **#IDENTIFY**
49 |
50 |     4. Checks whether CloudTrail creates a signed digest file with logs
51 |     **#IDENTIFY**
52 |
53 |     5. Checks that there is at least one multi-region CloudTrail
54 |     **#IDENTIFY**
55 |
56 | 6. **GuardDuty**
57 |
58 |     1. Checks whether GuardDuty is enabled in your account and region
59 |     **#IDENTIFY**
60 |
61 |
62 |
63 |
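64 | These checks are implemented as AWS Config managed rules (see cloudsniper_playbook_nist.tf). As a rough illustration only, a boto3 sketch along the following lines could poll their compliance status; the rule names are the ones defined in the Terraform file, and the snippet is not part of the playbook itself:
65 |
66 | ```python
67 | import boto3
68 |
69 | # IAM rule names as declared in cloudsniper_playbook_nist.tf (illustrative subset).
70 | RULES = [
71 |     "iam-group-has-users-check",
72 |     "iam-user-group-membership-check",
73 |     "iam-user-no-policies-check",
74 |     "iam-user-unused-credentials-check",
75 | ]
76 |
77 | config = boto3.client("config")
78 |
79 | # Ask AWS Config for the current compliance state of each playbook rule.
80 | resp = config.describe_compliance_by_config_rule(ConfigRuleNames=RULES)
81 | for item in resp.get("ComplianceByConfigRules", []):
82 |     name = item["ConfigRuleName"]
83 |     state = item.get("Compliance", {}).get("ComplianceType", "INSUFFICIENT_DATA")
84 |     print(name, state)
85 | ```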
--------------------------------------------------------------------------------
/playbooks/cloudsniper_playbook_nist.tf:
--------------------------------------------------------------------------------
1 | resource "aws_config_config_rule" "iam-group-has-users-check" {
2 | name = "iam-group-has-users-check"
3 | description = "Checks whether IAM groups have at least one IAM user"
4 |
5 | source {
6 | owner = "AWS"
7 | source_identifier = "IAM_GROUP_HAS_USERS_CHECK"
8 | }
9 | }
10 |
11 | resource "aws_config_config_rule" "iam-user-group-membership-check" {
12 | name = "iam-user-group-membership-check"
13 | description = "Checks whether IAM users are members of at least one IAM group"
14 |
15 | source {
16 | owner = "AWS"
17 | source_identifier = "IAM_USER_GROUP_MEMBERSHIP_CHECK"
18 | }
19 | }
20 |
21 | resource "aws_config_config_rule" "iam-user-no-policies-check" {
22 | name = "iam-user-no-policies-check"
23 | description = "Checks that none of the IAM users have policies attached. IAM users must inherit permissions from IAM groups or roles"
24 |
25 | source {
26 | owner = "AWS"
27 | source_identifier = "IAM_USER_NO_POLICIES_CHECK"
28 | }
29 | }
30 |
31 | resource "aws_config_config_rule" "iam-user-unused-credentials-check" {
32 | name = "iam-user-unused-credentials-check"
33 |   description = "Checks whether your AWS Identity and Access Management (IAM) users have passwords or active access keys that have not been used within the specified number of days you provided"
34 |
35 | source {
36 | owner = "AWS"
37 | source_identifier = "IAM_USER_UNUSED_CREDENTIALS_CHECK"
38 | }
39 |
40 | input_parameters = <