├── .gitattributes
├── Chapter_1
│   ├── bayes.py
│   └── cape_python.png
├── Chapter_10
│   ├── 1_capture.py
│   ├── 2_train.py
│   ├── 3_predict.py
│   ├── demming_tester
│   │   ├── test (1).jpg
│   │   ├── test (1).png
│   │   ├── test (10).PNG
│   │   ├── test (11).PNG
│   │   ├── test (12).PNG
│   │   ├── test (2).PNG
│   │   ├── test (2).jpg
│   │   ├── test (3).PNG
│   │   ├── test (3).jpg
│   │   ├── test (4).PNG
│   │   ├── test (4).jpg
│   │   ├── test (5).PNG
│   │   ├── test (5).jpg
│   │   ├── test (6).PNG
│   │   ├── test (6).jpg
│   │   ├── test (7).PNG
│   │   ├── test (7).jpg
│   │   ├── test (8).PNG
│   │   ├── test (8).jpg
│   │   └── test (9).PNG
│   ├── demming_trainer
│   │   ├── demming.1.1.jpg
│   │   ├── demming.1.10.jpg
│   │   ├── demming.1.11.jpg
│   │   ├── demming.1.12.jpg
│   │   ├── demming.1.13.jpg
│   │   ├── demming.1.14.jpg
│   │   ├── demming.1.15.jpg
│   │   ├── demming.1.16.jpg
│   │   ├── demming.1.17.jpg
│   │   ├── demming.1.18.jpg
│   │   ├── demming.1.19.jpg
│   │   ├── demming.1.2.jpg
│   │   ├── demming.1.20.jpg
│   │   ├── demming.1.21.jpg
│   │   ├── demming.1.22.jpg
│   │   ├── demming.1.23.jpg
│   │   ├── demming.1.24.jpg
│   │   ├── demming.1.25.jpg
│   │   ├── demming.1.26.jpg
│   │   ├── demming.1.27.jpg
│   │   ├── demming.1.28.jpg
│   │   ├── demming.1.29.jpg
│   │   ├── demming.1.3.jpg
│   │   ├── demming.1.30.jpg
│   │   ├── demming.1.4.jpg
│   │   ├── demming.1.5.jpg
│   │   ├── demming.1.6.jpg
│   │   ├── demming.1.7.jpg
│   │   ├── demming.1.8.jpg
│   │   └── demming.1.9.jpg
│   ├── lab_access_log.txt
│   ├── lbph_trainer.yml
│   └── tone.wav
├── Chapter_11
│   ├── census_data_popl_2010.csv
│   ├── choropleth.py
│   └── hv_texas_example.py
├── Chapter_12
│   ├── line_compare.py
│   └── pond_sim.py
├── Chapter_2
│   ├── file_loader.py
│   ├── hound.txt
│   ├── lost.txt
│   ├── practice_heatmap_semicolon.py
│   ├── practice_hound_dispersion.py
│   ├── stylometry.py
│   └── war.txt
├── Chapter_3
│   ├── bed_summary.py
│   ├── dream_summary.py
│   ├── holmes.png
│   ├── hound.txt
│   └── wc_hound.py
├── Chapter_4
│   ├── allied_attack_plan.txt
│   ├── lost.txt
│   ├── practice_WWII_words.py
│   ├── practice_barchart.py
│   └── rebecca.py
├── Chapter_5
│   ├── blink_comparator.py
│   ├── montages
│   │   ├── montage_left.JPG
│   │   ├── montage_right.JPG
│   │   ├── practice_montage_aligner.py
│   │   └── practice_montage_difference_finder.py
│   ├── night_1
│   │   ├── 1_bright_transient_left.png
│   │   ├── 2_dim_transient_left.png
│   │   ├── 3_diff_exposures_left.png
│   │   ├── 4_single_transient_left.png
│   │   ├── 5_no_transient_left.png
│   │   └── 6_bright_transient_neg_left.png
│   ├── night_1_2_transients
│   │   ├── 1_bright_transient_left_registered_DECTECTED.png
│   │   ├── 2_dim_transient_left_registered_DECTECTED.png
│   │   ├── 3_diff_exposures_left_registered_DECTECTED.png
│   │   └── 4_single_transient_left_registered_DECTECTED.png
│   ├── night_1_registered
│   │   ├── 1_bright_transient_left_registered.png
│   │   ├── 2_dim_transient_left_registered.png
│   │   ├── 3_diff_exposures_left_registered.png
│   │   ├── 4_single_transient_left_registered.png
│   │   ├── 5_no_transient_left_registered.png
│   │   └── 6_bright_transient_neg_left_registered.png
│   ├── night_1_registered_transients
│   │   ├── 1_bright_transient_left_registered.png
│   │   ├── 2_dim_transient_left_registered.png
│   │   ├── 3_diff_exposures_left_registered.png
│   │   ├── 4_single_transient_left_registered.png
│   │   ├── 5_no_transient_left_registered.png
│   │   └── 6_bright_transient_neg_left_registered.png
│   ├── night_2
│   │   ├── 1_bright_transient_right.png
│   │   ├── 2_dim_transient_right.png
│   │   ├── 3_diff_exposures_right.png
│   │   ├── 4_single_transient_right.png
│   │   ├── 5_no_transient_right.png
│   │   └── 6_bright_transient_neg_right.png
│   ├── practice_orbital_path.py
│   └── transient_detector.py
├── Chapter_6
│   ├── apollo_8_free_return.py
│   ├── earth_100x100.gif
│   ├── moon_27x27.gif
│   ├── practice_grav_assist_intersecting.py
│   ├── practice_grav_assist_stationary.py
│   └── search_pattern
│       ├── helicopter_left.gif
│       ├── helicopter_right.gif
│       ├── practice_search_pattern.py
│       └── seaman.gif
├── Chapter_7
│   ├── Mars_Global_Geology_Mariner9_1024.jpg
│   ├── geo_thresh.jpg
│   ├── mola_1024x501.png
│   ├── mola_1024x512_200mp.jpg
│   ├── mola_color_1024x506.png
│   ├── practice_3d_plotting.py
│   ├── practice_confirm_drawing_part_of_image.py
│   ├── practice_geo_map_step_1of2.py
│   ├── practice_geo_map_step_2of2.py
│   ├── practice_profile_olympus.py
│   └── site_selector.py
├── Chapter_8
│   ├── br549_pixelated
│   │   ├── pixelated_br549_01.png
│   │   ├── pixelated_br549_02.png
│   │   ├── pixelated_br549_03.png
│   │   ├── pixelated_br549_04.png
│   │   ├── pixelated_br549_05.png
│   │   ├── pixelated_br549_06.png
│   │   ├── pixelated_br549_07.png
│   │   ├── pixelated_br549_08.png
│   │   ├── pixelated_br549_09.png
│   │   ├── pixelated_br549_10.png
│   │   ├── pixelated_br549_11.png
│   │   ├── pixelated_br549_12.png
│   │   ├── pixelated_br549_13.png
│   │   ├── pixelated_br549_14.png
│   │   ├── pixelated_br549_15.png
│   │   ├── pixelated_br549_16.png
│   │   ├── pixelated_br549_17.png
│   │   ├── pixelated_br549_18.png
│   │   ├── pixelated_br549_19.png
│   │   ├── pixelated_br549_20.png
│   │   ├── pixelated_br549_21.png
│   │   ├── pixelated_br549_22.png
│   │   ├── pixelated_br549_23.png
│   │   ├── pixelated_br549_24.png
│   │   ├── pixelated_br549_25.png
│   │   ├── pixelated_br549_26.png
│   │   ├── pixelated_br549_27.png
│   │   ├── pixelated_br549_28.png
│   │   ├── pixelated_br549_29.png
│   │   ├── pixelated_br549_30.png
│   │   ├── pixelated_br549_31.png
│   │   ├── pixelated_br549_32.png
│   │   ├── pixelated_br549_33.png
│   │   └── pixelated_br549_34.png
│   ├── earth_east.png
│   ├── earth_west.png
│   ├── limb_darkening.png
│   ├── pixelated_earth_east.png
│   ├── pixelated_earth_west.png
│   ├── pixelator.py
│   ├── pixelator_saturated_only.py
│   ├── practice_alien_armada.py
│   ├── practice_asteroids.py
│   ├── practice_length_of_day.py
│   ├── practice_limb_darkening.py
│   ├── practice_planet_moon.py
│   ├── practice_tabbys_star.py
│   └── transit.py
├── Chapter_9
│   ├── corridor_5
│   │   ├── frame01.PNG
│   │   ├── frame02.png
│   │   ├── frame03.png
│   │   ├── frame04.png
│   │   ├── frame05.png
│   │   ├── frame06.png
│   │   ├── frame07.png
│   │   ├── frame08.png
│   │   └── frame09.png
│   ├── empty_corridor.png
│   ├── gunfire.wav
│   ├── practice_blur.py
│   ├── sentry.py
│   ├── sentry_for_mac_bug.py
│   ├── tone.wav
│   └── video_face_detect.py
└── README.md

/.gitattributes:
--------------------------------------------------------------------------------
# Auto detect text files and perform LF normalization
* text=auto
--------------------------------------------------------------------------------
/Chapter_1/bayes.py:
--------------------------------------------------------------------------------
import sys
import random
import itertools
import numpy as np
import cv2 as cv

MAP_FILE = 'cape_python.png'

# Assign search area (SA) corner point locations based on image pixels.
SA1_CORNERS = (130, 265, 180, 315)  # (UL-X, UL-Y, LR-X, LR-Y)
SA2_CORNERS = (80, 255, 130, 305)  # (UL-X, UL-Y, LR-X, LR-Y)
SA3_CORNERS = (105, 205, 155, 255)  # (UL-X, UL-Y, LR-X, LR-Y)


class Search():
    """Bayesian Search & Rescue game with 3 search areas."""

    def __init__(self, name):
        self.name = name
        self.img = cv.imread(MAP_FILE, cv.IMREAD_COLOR)
        if self.img is None:
            print('Could not load map file {}'.format(MAP_FILE),
                  file=sys.stderr)
            sys.exit(1)

        # Set placeholders for sailor's actual location.
        self.area_actual = 0
        self.sailor_actual = [0, 0]  # As "local" coords within search area.

        # Create numpy arrays for each search area by indexing image array.
        self.sa1 = self.img[SA1_CORNERS[1] : SA1_CORNERS[3],
                            SA1_CORNERS[0] : SA1_CORNERS[2]]

        self.sa2 = self.img[SA2_CORNERS[1] : SA2_CORNERS[3],
                            SA2_CORNERS[0] : SA2_CORNERS[2]]

        self.sa3 = self.img[SA3_CORNERS[1] : SA3_CORNERS[3],
                            SA3_CORNERS[0] : SA3_CORNERS[2]]

        # Set initial per-area target probabilities for finding sailor.
        self.p1 = 0.2
        self.p2 = 0.5
        self.p3 = 0.3

        # Initialize search effectiveness probabilities.
        self.sep1 = 0
        self.sep2 = 0
        self.sep3 = 0

    def draw_map(self, last_known):
        """Display basemap with scale, last known xy location, search areas."""
        # Draw the scale bar.
        cv.line(self.img, (20, 370), (70, 370), (0, 0, 0), 2)
        cv.putText(self.img, '0', (8, 370), cv.FONT_HERSHEY_PLAIN, 1, (0, 0, 0))
        cv.putText(self.img, '50 Nautical Miles', (71, 370),
                   cv.FONT_HERSHEY_PLAIN, 1, (0, 0, 0))

        # Draw and number the search areas.
        cv.rectangle(self.img, (SA1_CORNERS[0], SA1_CORNERS[1]),
                     (SA1_CORNERS[2], SA1_CORNERS[3]), (0, 0, 0), 1)
        cv.putText(self.img, '1',
                   (SA1_CORNERS[0] + 3, SA1_CORNERS[1] + 15),
                   cv.FONT_HERSHEY_PLAIN, 1, 0)
        cv.rectangle(self.img, (SA2_CORNERS[0], SA2_CORNERS[1]),
                     (SA2_CORNERS[2], SA2_CORNERS[3]), (0, 0, 0), 1)
        cv.putText(self.img, '2',
                   (SA2_CORNERS[0] + 3, SA2_CORNERS[1] + 15),
                   cv.FONT_HERSHEY_PLAIN, 1, 0)
        cv.rectangle(self.img, (SA3_CORNERS[0], SA3_CORNERS[1]),
                     (SA3_CORNERS[2], SA3_CORNERS[3]), (0, 0, 0), 1)
        cv.putText(self.img, '3',
                   (SA3_CORNERS[0] + 3, SA3_CORNERS[1] + 15),
                   cv.FONT_HERSHEY_PLAIN, 1, 0)

        # Post the last known location of the sailor.
        cv.putText(self.img, '+', last_known,
                   cv.FONT_HERSHEY_PLAIN, 1, (0, 0, 255))
        cv.putText(self.img, '+ = Last Known Position', (274, 355),
                   cv.FONT_HERSHEY_PLAIN, 1, (0, 0, 255))
        cv.putText(self.img, '* = Actual Position', (275, 370),
                   cv.FONT_HERSHEY_PLAIN, 1, (255, 0, 0))

        cv.imshow('Search Area', self.img)
        cv.moveWindow('Search Area', 750, 10)
        cv.waitKey(500)

    def sailor_final_location(self, num_search_areas):
        """Return the actual x,y location of the missing sailor."""
        # Find sailor coordinates with respect to any Search Area sub-array.
        self.sailor_actual[0] = np.random.choice(self.sa1.shape[1])
        self.sailor_actual[1] = np.random.choice(self.sa1.shape[0])

        # Pick a search area at random.
        area = int(random.triangular(1, num_search_areas + 1))

        # Convert local search area coordinates to map coordinates.
        if area == 1:
            x = self.sailor_actual[0] + SA1_CORNERS[0]
            y = self.sailor_actual[1] + SA1_CORNERS[1]
            self.area_actual = 1
        elif area == 2:
            x = self.sailor_actual[0] + SA2_CORNERS[0]
            y = self.sailor_actual[1] + SA2_CORNERS[1]
            self.area_actual = 2
        elif area == 3:
            x = self.sailor_actual[0] + SA3_CORNERS[0]
            y = self.sailor_actual[1] + SA3_CORNERS[1]
            self.area_actual = 3
        return x, y

    def calc_search_effectiveness(self):
        """Set decimal search effectiveness value per search area."""
        self.sep1 = random.uniform(0.2, 0.9)
        self.sep2 = random.uniform(0.2, 0.9)
        self.sep3 = random.uniform(0.2, 0.9)

    def conduct_search(self, area_num, area_array, effectiveness_prob):
        """Return search results and list of searched coordinates."""
        local_y_range = range(area_array.shape[0])
        local_x_range = range(area_array.shape[1])
        coords = list(itertools.product(local_x_range, local_y_range))
        random.shuffle(coords)
        coords = coords[:int((len(coords) * effectiveness_prob))]
        loc_actual = (self.sailor_actual[0], self.sailor_actual[1])
        if area_num == self.area_actual and loc_actual in coords:
            return 'Found in Area {}.'.format(area_num), coords
        return 'Not Found', coords

    def revise_target_probs(self):
        """Update area target probabilities based on search effectiveness."""
        denom = self.p1 * (1 - self.sep1) + self.p2 * (1 - self.sep2) \
                + self.p3 * (1 - self.sep3)
        self.p1 = self.p1 * (1 - self.sep1) / denom
        self.p2 = self.p2 * (1 - self.sep2) / denom
        self.p3 = self.p3 * (1 - self.sep3) / denom


def draw_menu(search_num):
    """Print menu of choices for conducting area searches."""
    print('\nSearch {}'.format(search_num))
    print(
        """
Choose next areas to search:

0 - Quit
1 - Search Area 1 twice
2 - Search Area 2 twice
3 - Search Area 3 twice
4 - Search Areas 1 & 2
5 - Search Areas 1 & 3
6 - Search Areas 2 & 3
7 - Start Over
        """
    )


def main():
    app = Search('Cape_Python')
    app.draw_map(last_known=(160, 290))
    sailor_x, sailor_y = app.sailor_final_location(num_search_areas=3)
    print("-" * 65)
    print("\nInitial Target (P) Probabilities:")
    print("P1 = {:.3f}, P2 = {:.3f}, P3 = {:.3f}".format(app.p1, app.p2, app.p3))
    search_num = 1

    while True:
        app.calc_search_effectiveness()
        draw_menu(search_num)
        choice = input("Choice: ")

        if choice == "0":
            sys.exit()

        elif choice == "1":
            results_1, coords_1 = app.conduct_search(1, app.sa1, app.sep1)
            results_2, coords_2 = app.conduct_search(1, app.sa1, app.sep1)
            app.sep1 = (len(set(coords_1 + coords_2))) / (len(app.sa1)**2)
            app.sep2 = 0
            app.sep3 = 0

        elif choice == "2":
            results_1, coords_1 = app.conduct_search(2, app.sa2, app.sep2)
            results_2, coords_2 = app.conduct_search(2, app.sa2, app.sep2)
            app.sep1 = 0
            app.sep2 = (len(set(coords_1 + coords_2))) / (len(app.sa2)**2)
            app.sep3 = 0

        elif choice == "3":
            results_1, coords_1 = app.conduct_search(3, app.sa3, app.sep3)
            results_2, coords_2 = app.conduct_search(3, app.sa3, app.sep3)
            app.sep1 = 0
            app.sep2 = 0
            app.sep3 = (len(set(coords_1 + coords_2))) / (len(app.sa3)**2)

        elif choice == "4":
            results_1, coords_1 = app.conduct_search(1, app.sa1, app.sep1)
            results_2, coords_2 = app.conduct_search(2, app.sa2, app.sep2)
            app.sep3 = 0

        elif choice == "5":
            results_1, coords_1 = app.conduct_search(1, app.sa1, app.sep1)
            results_2, coords_2 = app.conduct_search(3, app.sa3, app.sep3)
            app.sep2 = 0

        elif choice == "6":
            results_1, coords_1 = app.conduct_search(2, app.sa2, app.sep2)
            results_2, coords_2 = app.conduct_search(3, app.sa3, app.sep3)
            app.sep1 = 0

        elif choice == "7":
            main()

        else:
            print("\nSorry, but that isn't a valid choice.", file=sys.stderr)
            continue

        app.revise_target_probs()  # Use Bayes' rule to update target probs.

        print("\nSearch {} Results 1 = {}"
              .format(search_num, results_1), file=sys.stderr)
        print("Search {} Results 2 = {}\n"
              .format(search_num, results_2), file=sys.stderr)
        print("Search {} Effectiveness (E):".format(search_num))
        print("E1 = {:.3f}, E2 = {:.3f}, E3 = {:.3f}"
              .format(app.sep1, app.sep2, app.sep3))

        # Print target probabilities if sailor is not found else show position.
        if results_1 == 'Not Found' and results_2 == 'Not Found':
            print("\nNew Target Probabilities (P) for Search {}:"
                  .format(search_num + 1))
            print("P1 = {:.3f}, P2 = {:.3f}, P3 = {:.3f}"
                  .format(app.p1, app.p2, app.p3))
        else:
            cv.circle(app.img, (sailor_x, sailor_y), 3, (255, 0, 0), -1)
            cv.imshow('Search Area', app.img)
            cv.waitKey(1500)
            main()
        search_num += 1


if __name__ == '__main__':
    main()
--------------------------------------------------------------------------------
/Chapter_1/cape_python.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_1/cape_python.png
--------------------------------------------------------------------------------
/Chapter_10/1_capture.py:
--------------------------------------------------------------------------------
import os
import pyttsx3
import cv2 as cv
from playsound import playsound

# Set up audio instructions.
engine = pyttsx3.init()
engine.setProperty('rate', 145)
engine.setProperty('volume', 1.0)  # Max is 1.0.

# Set up audio tone.
root_dir = os.path.abspath('.')
tone_path = os.path.join(root_dir, 'tone.wav')

# Set up path to OpenCV's Haar Cascades.
path = "C:/Python372/Lib/site-packages/cv2/data/"
face_detector = cv.CascadeClassifier(path +
                                     'haarcascade_frontalface_default.xml')

# Prepare webcam.
cap = cv.VideoCapture(0)
if not cap.isOpened():
    print("Could not open video device.")
cap.set(3, 640)  # Frame width.
cap.set(4, 480)  # Frame height.

# Provide instructions.
engine.say("Enter your information when prompted on screen. \
Then remove glasses and look directly at webcam. \
Make multiple faces including normal, happy, sad, sleepy. \
Continue until you hear the tone.")
engine.runAndWait()
name = input("\nEnter last name: ")
user_id = input("Enter assigned ID Number: ")
print("\nCapturing face. Look at the camera now!")

# Create a folder to hold the images.
if not os.path.isdir('trainer'):
    os.mkdir('trainer')
os.chdir('trainer')

frame_count = 0

while True:
    # Capture frame-by-frame for total of 30 frames.
    _, frame = cap.read()
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    face_rects = face_detector.detectMultiScale(gray, scaleFactor=1.2,
                                                minNeighbors=5)
    for (x, y, w, h) in face_rects:
        frame_count += 1
        cv.imwrite(str(name) + '.' + str(user_id) + '.'
                   + str(frame_count) + '.jpg', gray[y:y+h, x:x+w])
        cv.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv.imshow('image', frame)  # Best to have open folder with thumbnails.
        cv.waitKey(400)
    if frame_count >= 30:
        break

print("\nImage collection complete. Exiting...")
playsound(tone_path, block=False)
cap.release()
cv.destroyAllWindows()
--------------------------------------------------------------------------------
/Chapter_10/2_train.py:
--------------------------------------------------------------------------------
import os
import numpy as np
import cv2 as cv

# Set up path to OpenCV's Haar Cascades for face detection.
cascade_path = "C:/Python372/Lib/site-packages/cv2/data/"
face_detector = cv.CascadeClassifier(cascade_path +
                                     'haarcascade_frontalface_default.xml')

# Set up path to training images and prepare names/labels.
# For the pre-loaded Demming images, use the demming_trainer folder.
# To use your own face, use the trainer folder.

#train_path = './trainer'
train_path = './demming_trainer'
image_paths = [os.path.join(train_path, f) for f in os.listdir(train_path)]
images, labels = [], []

# Extract face rectangles and assign numerical labels.
for image in image_paths:
    train_image = cv.imread(image, cv.IMREAD_GRAYSCALE)
    label = int(os.path.split(image)[-1].split('.')[1])
    name = os.path.split(image)[-1].split('.')[0]
    frame_num = os.path.split(image)[-1].split('.')[2]
    faces = face_detector.detectMultiScale(train_image)
    for (x, y, w, h) in faces:
        images.append(train_image[y:y + h, x:x + w])
        labels.append(label)
        print(f"Preparing training images for {name}.{label}.{frame_num}")
        cv.imshow("Training Image", train_image[y:y + h, x:x + w])
        cv.waitKey(50)

cv.destroyAllWindows()

# Perform the training.
recognizer = cv.face.LBPHFaceRecognizer_create()
recognizer.train(images, np.array(labels))
recognizer.write('lbph_trainer.yml')
print("Training complete. Exiting...")
--------------------------------------------------------------------------------
/Chapter_10/3_predict.py:
--------------------------------------------------------------------------------
import os
from datetime import datetime
import cv2 as cv

names = {1: "Demming"}  # Update with your ID and name.

# Set up face detector path.
cascade_path = "C:/Python372/Lib/site-packages/cv2/data/"
face_detector = cv.CascadeClassifier(cascade_path +
                                     'haarcascade_frontalface_default.xml')

# Set up face recognizer and load trained data.
recognizer = cv.face.LBPHFaceRecognizer_create()
recognizer.read('lbph_trainer.yml')

# Set up test data.
# Use the tester folder for your face.
# Use the demming_tester folder for pre-loaded images.

#test_path = './tester'
test_path = './demming_tester'
image_paths = [os.path.join(test_path, f) for f in os.listdir(test_path)]

# Loop through test images and predict faces.
for image in image_paths:
    predict_image = cv.imread(image, cv.IMREAD_GRAYSCALE)
    faces = face_detector.detectMultiScale(predict_image,
                                           scaleFactor=1.05,
                                           minNeighbors=5)
    for (x, y, w, h) in faces:
        print(f"\nAccess requested at {datetime.now()}.")
        face = cv.resize(predict_image[y:y + h, x:x + w], (100, 100))
        predicted_id, dist = recognizer.predict(face)
        if predicted_id == 1 and dist <= 95:
            name = names[predicted_id]
            print("{} identified as {} with distance={}"
                  .format(image, name, round(dist, 1)))
            print(f"Access granted to {name} at {datetime.now()}.",
                  file=open('lab_access_log.txt', 'a'))
        else:
            name = 'unknown'
            print(f"{image} is {name}.")
        cv.rectangle(predict_image, (x, y), (x + w, y + h), 255, 2)
        cv.putText(predict_image, name, (x + 1, y + h - 5),
                   cv.FONT_HERSHEY_SIMPLEX, 0.5, 255, 1)
    cv.imshow('ID', predict_image)
    cv.waitKey(2000)
cv.destroyAllWindows()
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (1).jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (1).jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (1).png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (1).png
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (10).PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (10).PNG
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (11).PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (11).PNG
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (12).PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (12).PNG
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (2).PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (2).PNG
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (2).jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (2).jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (3).PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (3).PNG
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (3).jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (3).jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (4).PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (4).PNG
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (4).jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (4).jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (5).PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (5).PNG
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (5).jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (5).jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (6).PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (6).PNG
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (6).jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (6).jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (7).PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (7).PNG
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (7).jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (7).jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (8).PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (8).PNG
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (8).jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (8).jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_tester/test (9).PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_tester/test (9).PNG
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.1.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.10.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.10.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.11.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.11.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.12.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.12.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.13.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.13.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.14.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.14.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.15.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.15.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.16.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.16.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.17.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.17.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.18.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.18.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.19.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.19.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.2.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.20.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.20.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.21.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.21.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.22.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.22.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.23.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.23.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.24.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.24.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.25.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.25.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.26.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.26.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.27.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.27.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.28.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.28.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.29.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.29.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.3.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.30.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.30.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.4.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.5.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.5.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.6.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.6.jpg
--------------------------------------------------------------------------------
/Chapter_10/demming_trainer/demming.1.7.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.7.jpg -------------------------------------------------------------------------------- /Chapter_10/demming_trainer/demming.1.8.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.8.jpg -------------------------------------------------------------------------------- /Chapter_10/demming_trainer/demming.1.9.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/demming_trainer/demming.1.9.jpg -------------------------------------------------------------------------------- /Chapter_10/lab_access_log.txt: -------------------------------------------------------------------------------- 1 | Access granted to Demming at 2020-02-12 08:56:48.713599. 
2 | 3 | -------------------------------------------------------------------------------- /Chapter_10/tone.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_10/tone.wav -------------------------------------------------------------------------------- /Chapter_11/census_data_popl_2010.csv: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_11/census_data_popl_2010.csv -------------------------------------------------------------------------------- /Chapter_11/choropleth.py: -------------------------------------------------------------------------------- 1 | from os.path import abspath 2 | import webbrowser 3 | import pandas as pd 4 | import holoviews as hv 5 | from holoviews import opts 6 | hv.extension('bokeh') 7 | from bokeh.sampledata.us_counties import data as counties 8 | 9 | df = pd.read_csv('census_data_popl_2010.csv', encoding="ISO-8859-1") 10 | 11 | df = pd.DataFrame(df, 12 | columns= 13 | ['Target Geo Id2', 14 | 'Geographic area.1', 15 | 'Density per square mile of land area - Population']) 16 | 17 | df.rename(columns = 18 | {'Target Geo Id2':'fips', 19 | 'Geographic area.1': 'County', 20 | 'Density per square mile of land area - Population':'Density'}, 21 | inplace = True) 22 | 23 | print(f"\nInitial popl data:\n {df.head()}") 24 | print(f"Shape of df = {df.shape}\n") 25 | 26 | # Remove non-county rows from data frame. 27 | df = df[df['fips'] > 100] 28 | print(f"Popl data with non-county rows removed:\n {df.head()}") 29 | print(f"Shape of df = {df.shape}\n") 30 | 31 | # Create columns for state and county id numbers. 
32 | df['state_id'] = (df['fips'] // 1000).astype('int64') 33 | df['cid'] = (df['fips'] % 1000).astype('int64') 34 | print(f"Popl data with new ID columns:\n {df.head()}") 35 | print(f"Shape of df = {df.shape}\n") 36 | print("df info:") 37 | print(df.info()) # Lists data types of each column. 38 | 39 | # Check that 5-digit fips handled correctly. 40 | print("\nPopl data at row 500:") 41 | print(df.loc[500]) 42 | 43 | # Make dictionary of state_id, cid tuple vs popl density. 44 | state_ids = df.state_id.tolist() 45 | cids = df.cid.tolist() 46 | den = df.Density.tolist() 47 | tuple_list = tuple(zip(state_ids, cids)) 48 | popl_dens_dict = dict(zip(tuple_list, den)) 49 | 50 | # Exclude states & territories not part of conterminous US. 51 | EXCLUDED = ('ak', 'hi', 'pr', 'gu', 'vi', 'mp', 'as') 52 | 53 | counties = [dict(county, Density=popl_dens_dict[cid]) 54 | for cid, county in counties.items() 55 | if county["state"] not in EXCLUDED] 56 | 57 | choropleth = hv.Polygons(counties, 58 | ['lons', 'lats'], 59 | [('detailed name', 'County'), 'Density']) 60 | 61 | choropleth.opts(opts.Polygons(logz=True, 62 | tools=['hover'], 63 | xaxis=None, yaxis=None, 64 | show_grid=False, show_frame=False, 65 | width=1100, height=700, 66 | colorbar=True, toolbar='above', 67 | color_index='Density', cmap='Greys', line_color=None, 68 | title='2010 Population Density per Square Mile of Land Area' 69 | )) 70 | 71 | hv.save(choropleth, 'choropleth.html', backend='bokeh') 72 | url = abspath('choropleth.html') 73 | webbrowser.open(url) 74 | -------------------------------------------------------------------------------- /Chapter_11/hv_texas_example.py: -------------------------------------------------------------------------------- 1 | from os.path import abspath 2 | import webbrowser 3 | import holoviews as hv 4 | from holoviews import opts 5 | hv.extension('bokeh') 6 | from bokeh.sampledata.us_counties import data as counties 7 | from bokeh.sampledata.unemployment import data as unemployment
8 | 9 | counties = [dict(county, Unemployment=unemployment[cid]) 10 | for cid, county in counties.items() 11 | if county["state"] == "tx"] 12 | 13 | choropleth = hv.Polygons(counties, ['lons', 'lats'], 14 | [('detailed name', 'County'), 'Unemployment']) 15 | 16 | choropleth.opts(opts.Polygons(logz=True, 17 | tools=['hover'], 18 | xaxis=None, yaxis=None, 19 | show_grid=False, 20 | show_frame=False, 21 | width=500, height=500, 22 | color_index='Unemployment', 23 | colorbar=True, toolbar='above', 24 | line_color='white')) 25 | 26 | hv.save(choropleth, 'texas.html', backend='bokeh') 27 | url = abspath('texas.html') 28 | webbrowser.open(url) 29 | -------------------------------------------------------------------------------- /Chapter_12/line_compare.py: -------------------------------------------------------------------------------- 1 | """Compare times required for turtle to draw lines at different orientations.""" 2 | from time import perf_counter 3 | import statistics 4 | import turtle 5 | 6 | turtle.setup(1200, 600) 7 | screen = turtle.Screen() 8 | 9 | ANGLES = (0, 3.695220532) # In degrees. 10 | NUM_RUNS = 20 11 | SPEED = 0 12 | 13 | for angle in ANGLES: 14 | times = [] 15 | for _ in range(NUM_RUNS): 16 | line = turtle.Turtle() 17 | line.speed(SPEED) 18 | line.hideturtle() 19 | line.penup() 20 | line.lt(angle) 21 | line.setpos(-470, 0) 22 | line.pendown() 23 | line.showturtle() 24 | start_time = perf_counter() 25 | line.fd(962) 26 | end_time = perf_counter() 27 | times.append(end_time - start_time) 28 | 29 | line_ave = statistics.mean(times) 30 | print("Angle {} degrees: average time for {} runs at speed {} = {:.5f}" 31 | .format(angle, NUM_RUNS, SPEED, line_ave)) 32 | -------------------------------------------------------------------------------- /Chapter_12/pond_sim.py: -------------------------------------------------------------------------------- 1 | import turtle 2 | 3 | # Draw Yertle's pond. 
4 | pond = turtle.Screen() 5 | pond.setup(600, 400) 6 | pond.bgcolor('light blue') 7 | pond.title("Yertle's Pond") 8 | 9 | # Draw Yertle's mud island. 10 | mud = turtle.Turtle('circle') 11 | mud.shapesize(stretch_wid=5, stretch_len=5, outline=None) 12 | mud.pencolor('tan') 13 | mud.fillcolor('tan') 14 | 15 | # Draw a floating log. 16 | SIDE = 80 17 | ANGLE = 90 18 | log = turtle.Turtle() 19 | log.hideturtle() 20 | log.pencolor('peru') 21 | log.fillcolor('peru') 22 | log.speed(0) 23 | log.penup() 24 | log.setpos(215, -30) 25 | log.lt(45) 26 | log.begin_fill() 27 | for _ in range(2): 28 | log.fd(SIDE) 29 | log.lt(ANGLE) 30 | log.fd(SIDE / 4) 31 | log.lt(ANGLE) 32 | log.end_fill() 33 | 34 | # Put a knothole in the log. 35 | knot = turtle.Turtle() 36 | knot.hideturtle() 37 | knot.speed(0) 38 | knot.penup() 39 | knot.setpos(245, 5) 40 | knot.begin_fill() 41 | knot.circle(5) 42 | knot.end_fill() 43 | 44 | # Draw Yertle the Turtle. 45 | yertle = turtle.Turtle('turtle') 46 | yertle.color('green') 47 | yertle.speed(1) # Slowest. 
48 | yertle.fd(200) 49 | yertle.lt(180) 50 | yertle.fd(200) 51 | yertle.rt(176) 52 | yertle.fd(205) 53 | -------------------------------------------------------------------------------- /Chapter_2/file_loader.py: -------------------------------------------------------------------------------- 1 | """Read a text file and return its contents as a single string.""" 2 | 3 | def text_to_string(filename): 4 | strings = [] 5 | with open(filename) as f: 6 | strings.append(f.read()) 7 | return '\n'.join(strings) 8 | -------------------------------------------------------------------------------- /Chapter_2/hound.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_2/hound.txt -------------------------------------------------------------------------------- /Chapter_2/practice_heatmap_semicolon.py: -------------------------------------------------------------------------------- 1 | """Make a heatmap of punctuation.""" 2 | import math 3 | from string import punctuation 4 | import nltk 5 | import numpy as np 6 | import matplotlib.pyplot as plt 7 | from matplotlib.colors import ListedColormap 8 | import seaborn as sns 9 | 10 | 11 | # Install seaborn using: pip install seaborn. 12 | 13 | PUNCT_SET = set(punctuation) 14 | 15 | def main(): 16 | # Load text files into dictionary by author. 17 | strings_by_author = dict() 18 | strings_by_author['doyle'] = text_to_string('hound.txt') 19 | strings_by_author['wells'] = text_to_string('war.txt') 20 | strings_by_author['unknown'] = text_to_string('lost.txt') 21 | 22 | # Tokenize text strings preserving only punctuation marks. 23 | punct_by_author = make_punct_dict(strings_by_author) 24 | 25 | # Convert punctuation marks to numerical values and plot heatmaps.
26 | plt.ion() 27 | for author in punct_by_author: 28 | heat = convert_punct_to_number(punct_by_author, author) 29 | arr = np.array((heat[:6561])) # trim to largest size for square array 30 | arr_reshaped = arr.reshape(int(math.sqrt(len(arr))), 31 | int(math.sqrt(len(arr)))) 32 | fig, ax = plt.subplots(figsize=(7, 7)) 33 | sns.heatmap(arr_reshaped, 34 | cmap=ListedColormap(['blue', 'yellow']), 35 | square=True, 36 | ax=ax) 37 | ax.set_title('Heatmap Semicolons {}'.format(author)) 38 | plt.show() 39 | 40 | def text_to_string(filename): 41 | """Read a text file and return a string.""" 42 | with open(filename) as infile: 43 | return infile.read() 44 | 45 | def make_punct_dict(strings_by_author): 46 | """Return dictionary of tokenized punctuation by corpus by author.""" 47 | punct_by_author = dict() 48 | for author in strings_by_author: 49 | tokens = nltk.word_tokenize(strings_by_author[author]) 50 | punct_by_author[author] = ([token for token in tokens 51 | if token in PUNCT_SET]) 52 | print("Number punctuation marks in {} = {}" 53 | .format(author, len(punct_by_author[author]))) 54 | return punct_by_author 55 | 56 | def convert_punct_to_number(punct_by_author, author): 57 | """Return list of punctuation marks converted to numerical values.""" 58 | heat_vals = [] 59 | for char in punct_by_author[author]: 60 | if char == ';': 61 | value = 1 62 | else: 63 | value = 2 64 | heat_vals.append(value) 65 | return heat_vals 66 | 67 | if __name__ == '__main__': 68 | main() 69 | -------------------------------------------------------------------------------- /Chapter_2/practice_hound_dispersion.py: -------------------------------------------------------------------------------- 1 | """Use NLP (nltk) to make dispersion plot.""" 2 | import matplotlib.pyplot as plt 3 | from nltk.draw.dispersion import dispersion_plot 4 | 5 | def text_to_string(filename): 6 | """Read a text file and return a string.""" 7 | with open(filename) as infile: 8 | return infile.read() 9 | 10 | corpus = 
text_to_string('hound.txt') 11 | import nltk  # Needed for word_tokenize() and Text() below. 12 | tokens = nltk.word_tokenize(corpus) 13 | tokens = nltk.Text(tokens) # NLTK wrapper for automatic text analysis. 14 | words = ['Holmes', 'Watson', 'Mortimer', 'Henry', 'Barrymore', 'Stapleton', 'Selden', 'hound'] 15 | ax = dispersion_plot(tokens, words) 16 | # Correct current bug in NLTK dispersion_plot that reverses label order by mistake: 17 | ax.set_yticks(list(range(len(words))), list(reversed(words)), color="C0") 18 | -------------------------------------------------------------------------------- /Chapter_2/stylometry.py: -------------------------------------------------------------------------------- 1 | # NOTE: The stopwords and parts of speech functions 2 | # changed with the 3rd printing of the book. 3 | 4 | 5 | from collections import Counter 6 | import matplotlib.pyplot as plt 7 | import nltk 8 | from nltk.corpus import stopwords 9 | 10 | LINES = ['-', ':', '--'] # Line style for plots. 11 | 12 | def main(): 13 | # Load text files into dictionary by author. 14 | strings_by_author = dict() 15 | strings_by_author['doyle'] = text_to_string('hound.txt') 16 | strings_by_author['wells'] = text_to_string('war.txt') 17 | strings_by_author['unknown'] = text_to_string('lost.txt') 18 | 19 | # Check results of reading files. 20 | print(strings_by_author['doyle'][:300]) 21 | 22 | # Tokenize text strings and run stylometric tests.
23 | words_by_author = make_word_dict(strings_by_author) 24 | len_shortest_corpus = find_shortest_corpus(words_by_author) 25 | 26 | word_length_test(words_by_author, len_shortest_corpus) 27 | stopwords_test(words_by_author, len_shortest_corpus) 28 | parts_of_speech_test(words_by_author, len_shortest_corpus) 29 | vocab_test(words_by_author) 30 | jaccard_test(words_by_author, len_shortest_corpus) 31 | 32 | def text_to_string(filename): 33 | """Read a text file and return a string.""" 34 | with open(filename) as infile: 35 | return infile.read() 36 | 37 | def make_word_dict(strings_by_author): 38 | """Return dictionary of tokenized words by corpus by author.""" 39 | words_by_author = dict() 40 | for author in strings_by_author: 41 | tokens = nltk.word_tokenize(strings_by_author[author]) 42 | words_by_author[author] = ([token.lower() for token in tokens 43 | if token.isalpha()]) 44 | return words_by_author 45 | 46 | def find_shortest_corpus(words_by_author): 47 | """Return length of shortest corpus.""" 48 | word_count = [] 49 | for author in words_by_author: 50 | word_count.append(len(words_by_author[author])) 51 | print('\nNumber of words for {} = {}\n'. 
52 | format(author, len(words_by_author[author]))) 53 | len_shortest_corpus = min(word_count) 54 | print('length shortest corpus = {}\n'.format(len_shortest_corpus)) 55 | return len_shortest_corpus 56 | 57 | def word_length_test(words_by_author, len_shortest_corpus): 58 | """Plot word length freq by author, truncated to shortest corpus length.""" 59 | by_author_length_freq_dist = dict() 60 | plt.figure(1) 61 | plt.ion() 62 | for i, author in enumerate(words_by_author): 63 | word_lengths = [len(word) for word in words_by_author[author] 64 | [:len_shortest_corpus]] 65 | by_author_length_freq_dist[author] = nltk.FreqDist(word_lengths) 66 | by_author_length_freq_dist[author].plot(15, 67 | linestyle=LINES[i], 68 | label=author, 69 | title='Word Length') 70 | plt.legend() 71 | ## plt.show() # Uncomment to see plot while coding function. 72 | 73 | def stopwords_test(words_by_author, len_shortest_corpus): 74 | """Plot stopwords freq by author, truncated to shortest corpus length.""" 75 | fdist = dict() 76 | plt.figure(2) 77 | stop_words = stopwords.words('english') 78 | 79 | for i, author in enumerate(words_by_author): 80 | stopwords_by_author = [word for word in words_by_author[author] 81 | [:len_shortest_corpus] if word in stop_words] 82 | fdist[author] = {word: stopwords_by_author.count(word) for word in 83 | stop_words[:50]} # Use first 50 of 179 stopwords. 84 | k, v = list(fdist[author].keys()), list(fdist[author].values()) 85 | plt.plot(k, v, label=author, linestyle=LINES[i], lw=1) 86 | 87 | ## plt.xticks([]) # Turn off labels if plotting >50 stopwords. 
88 | plt.title('First 50 Stopwords') 89 | plt.legend() 90 | plt.xticks(rotation=90) 91 | ## plt.show() 92 | 93 | def parts_of_speech_test(words_by_author, len_shortest_corpus): 94 | """Plot author use of parts-of-speech such as nouns, verbs, adverbs, etc.""" 95 | fdist = dict() 96 | colors = ['k', 'lightgrey', 'grey'] 97 | plt.figure(3) 98 | 99 | for i, author in enumerate(words_by_author): 100 | pos_by_author = [pos[1] for pos in nltk.pos_tag(words_by_author[author] 101 | [:len_shortest_corpus])] 102 | fdist[author] = Counter(pos_by_author) 103 | k, v = list(fdist[author].keys()), list(fdist[author].values()) 104 | plt.plot(k, v, linestyle='', marker='^', c=colors[i], label=author) 105 | 106 | plt.title('Parts of Speech') 107 | plt.legend() 108 | plt.xticks(rotation=90) 109 | ## plt.show() 110 | 111 | def vocab_test(words_by_author): 112 | """Compare author vocabularies using the Chi Squared statistical test.""" 113 | chisquared_by_author = dict() 114 | 115 | for author in words_by_author: 116 | if author != 'unknown': 117 | # Combine corpus for author & unknown & find 1000 most-common words. 118 | combined_corpus = (words_by_author[author] + 119 | words_by_author['unknown']) 120 | author_proportion = (len(words_by_author[author])/ 121 | len(combined_corpus)) 122 | combined_freq_dist = nltk.FreqDist(combined_corpus) 123 | most_common_words = list(combined_freq_dist.most_common(1000)) 124 | chisquared = 0 125 | 126 | # Calculate observed vs. expected word counts.
127 | for word, combined_count in most_common_words: 128 | observed_count_author = words_by_author[author].count(word) 129 | expected_count_author = combined_count * author_proportion 130 | chisquared += ((observed_count_author - 131 | expected_count_author)**2 / 132 | expected_count_author) 133 | chisquared_by_author[author] = chisquared 134 | print('Chi-squared for {} = {:.1f}'.format(author, chisquared)) 135 | 136 | 137 | most_likely_author = min(chisquared_by_author, key=chisquared_by_author.get) 138 | print('Most-likely author by vocabulary is {}\n'.format(most_likely_author)) 139 | 140 | def jaccard_test(words_by_author, len_shortest_corpus): 141 | """Calculate Jaccard similarity of each known corpus to unknown corpus.""" 142 | jaccard_by_author = dict() 143 | unique_words_unknown = set(words_by_author['unknown'] 144 | [:len_shortest_corpus]) 145 | authors = (author for author in words_by_author if author != 'unknown') 146 | for author in authors: 147 | unique_words_author = set(words_by_author[author][:len_shortest_corpus]) 148 | shared_words = unique_words_author.intersection(unique_words_unknown) 149 | jaccard_sim = (float(len(shared_words))/ (len(unique_words_author) + 150 | len(unique_words_unknown) - 151 | len(shared_words))) 152 | jaccard_by_author[author] = jaccard_sim 153 | print('Jaccard Similarity for {} = {}'.format(author, jaccard_sim)) 154 | 155 | most_likely_author = max(jaccard_by_author, key=jaccard_by_author.get) 156 | print('Most-likely author by similarity is {}'.format(most_likely_author)) 157 | 158 | if __name__ == '__main__': 159 | main() 160 | -------------------------------------------------------------------------------- /Chapter_2/war.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_2/war.txt -------------------------------------------------------------------------------- 
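A quick sanity check of the Jaccard measure used by `jaccard_test()` in stylometry.py: the same shared-over-union formula can be exercised on toy word lists. This is a hedged sketch — the two "corpora" below are invented for illustration and are not data from the book.

```python
# Sketch: Jaccard similarity on toy corpora, mirroring the formula in
# stylometry.py: |shared| / (|A| + |B| - |shared|) over unique words.

def jaccard(words_a, words_b):
    """Return the Jaccard similarity of two lists of word tokens."""
    set_a, set_b = set(words_a), set(words_b)
    shared = set_a & set_b
    return len(shared) / (len(set_a) + len(set_b) - len(shared))

doyle = ['the', 'hound', 'ran', 'on', 'the', 'moor']        # Toy corpus.
unknown = ['the', 'dinosaur', 'ran', 'on', 'the', 'plateau']  # Toy corpus.

# Unique sets share {'the', 'ran', 'on'}: 3 / (5 + 5 - 3) = 3/7.
print(f"Jaccard similarity = {jaccard(doyle, unknown):.3f}")  # 0.429
```

Note that duplicates drop out when the lists become sets, which is why stylometry.py truncates both corpora to the shortest corpus length before comparing — longer texts would otherwise accumulate more unique words and skew the score.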
/Chapter_3/bed_summary.py: -------------------------------------------------------------------------------- 1 | import requests 2 | import bs4 3 | from nltk.tokenize import sent_tokenize 4 | from gensim.summarization import summarize 5 | 6 | url = 'https://jamesclear.com/great-speeches/make-your-bed-by-admiral-william-h-mcraven' 7 | 8 | page = requests.get(url) 9 | page.raise_for_status() 10 | soup = bs4.BeautifulSoup(page.text, 'html.parser') 11 | p_elems = [element.text for element in soup.find_all('p')] 12 | speech = ' '.join(p_elems) # Be sure to join using a space! 13 | 14 | print("\nSummary of Make Your Bed speech:") 15 | print(summarize(speech, word_count=225)) # Note: This is an update to the 1st printing 16 | 17 | 18 | -------------------------------------------------------------------------------- /Chapter_3/dream_summary.py: -------------------------------------------------------------------------------- 1 | """ 2 | Gensim 4.0, released March 25, 2021, dropped the Summarization module. 3 | To run this program install Gensim 3.8.3 (https://pypi.org/project/gensim/3.8.3/) 4 | """ 5 | 6 | from collections import Counter 7 | import re 8 | import requests 9 | import bs4 10 | import nltk 11 | from nltk.corpus import stopwords 12 | 13 | def main(): 14 | # Use webscraping to obtain the text. 15 | url = 'http://www.analytictech.com/mb021/mlk.htm' 16 | page = requests.get(url) 17 | page.raise_for_status() 18 | soup = bs4.BeautifulSoup(page.text, 'html.parser') 19 | p_elems = [element.text for element in soup.find_all('p')] 20 | 21 | speech = ' '.join(p_elems) # Make sure to join on a space! 22 | 23 | # Fix typos, remove extra spaces, digits, and punctuation. 24 | speech = speech.replace(')mowing', 'knowing') 25 | speech = re.sub(r'\s+', ' ', speech) 26 | speech_edit = re.sub('[^a-zA-Z]', ' ', speech) 27 | speech_edit = re.sub(r'\s+', ' ', speech_edit) 28 | 29 | # Request input.
30 | while True: 31 | max_words = input("Enter max words per sentence for summary: ") 32 | num_sents = input("Enter number of sentences for summary: ") 33 | if max_words.isdigit() and num_sents.isdigit(): 34 | break 35 | else: 36 | print("\nInput must be in whole numbers.\n") 37 | 38 | # Run functions to generate sentence scores. 39 | speech_edit_no_stop = remove_stop_words(speech_edit) 40 | word_freq = get_word_freq(speech_edit_no_stop) 41 | sent_scores = score_sentences(speech, word_freq, max_words) 42 | 43 | # Print the top-ranked sentences. 44 | counts = Counter(sent_scores) 45 | summary = counts.most_common(int(num_sents)) 46 | print("\nSUMMARY:") 47 | for i in summary: 48 | print(i[0]) 49 | 50 | def remove_stop_words(speech_edit): 51 | """Remove stop words from string and return string.""" 52 | stop_words = set(stopwords.words('english')) 53 | speech_edit_no_stop = '' 54 | for word in nltk.word_tokenize(speech_edit): 55 | if word.lower() not in stop_words: 56 | speech_edit_no_stop += word + ' ' 57 | return speech_edit_no_stop 58 | 59 | def get_word_freq(speech_edit_no_stop): 60 | """Return a dictionary of word frequency in a string.""" 61 | word_freq = nltk.FreqDist(nltk.word_tokenize(speech_edit_no_stop.lower())) 62 | return word_freq 63 | 64 | def score_sentences(speech, word_freq, max_words): 65 | """Return dictionary of sentence scores based on word frequency.""" 66 | sent_scores = dict() 67 | sentences = nltk.sent_tokenize(speech) 68 | for sent in sentences: 69 | sent_scores[sent] = 0 70 | words = nltk.word_tokenize(sent.lower()) 71 | sent_word_count = len(words) 72 | if sent_word_count <= int(max_words): 73 | for word in words: 74 | if word in word_freq.keys(): 75 | sent_scores[sent] += word_freq[word] 76 | sent_scores[sent] = sent_scores[sent] / sent_word_count 77 | return sent_scores 78 | 79 | if __name__ == '__main__': 80 | main() 81 | -------------------------------------------------------------------------------- /Chapter_3/holmes.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_3/holmes.png -------------------------------------------------------------------------------- /Chapter_3/hound.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_3/hound.txt -------------------------------------------------------------------------------- /Chapter_3/wc_hound.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from PIL import Image 3 | import matplotlib.pyplot as plt 4 | from wordcloud import WordCloud, STOPWORDS 5 | 6 | # Load a text file as a string. 7 | with open('hound.txt') as infile: 8 | text = infile.read() 9 | 10 | # Load an image as a NumPy array. 11 | mask = np.array(Image.open('holmes.png')) 12 | 13 | # Get stop words as a set and add extra words. 14 | stopwords = STOPWORDS 15 | stopwords.update(['us', 'one', 'will', 'said', 'now', 'well', 'man', 'may', 16 | 'little', 'say', 'must', 'way', 'long', 'yet', 'mean', 17 | 'put', 'seem', 'asked', 'made', 'half', 'much', 18 | 'certainly', 'might', 'came']) 19 | 20 | # Generate word cloud. 21 | wc = WordCloud(max_words=500, 22 | relative_scaling=0.5, 23 | mask=mask, 24 | background_color='white', 25 | stopwords=stopwords, 26 | margin=2, 27 | random_state=7, 28 | contour_width=2, 29 | contour_color='brown', 30 | colormap='copper').generate(text) 31 | 32 | # Turn wc object into an array. 33 | colors = wc.to_array() 34 | 35 | # Plot and save word cloud. 
36 | plt.figure() 37 | plt.title("Chamberlain Hunt Academy Senior Class Presents:\n", 38 | fontsize=15, color='brown') 39 | plt.text(-10, 0, "The Hound of the Baskervilles", 40 | fontsize=20, fontweight='bold', color='brown') 41 | plt.suptitle("7:00 pm May 10-12 McComb Auditorium", 42 | x=0.52, y=0.095, fontsize=15, color='brown') 43 | plt.imshow(colors, interpolation="bilinear") 44 | plt.axis('off') 45 | plt.show() 46 | ##plt.savefig('hound_wordcloud.png') 47 | -------------------------------------------------------------------------------- /Chapter_4/allied_attack_plan.txt: -------------------------------------------------------------------------------- 1 | Allies plan major attack for Five June. Begins at oh five twenty with bombardment from Aslagh Ridge toward Rommel east flank. Followed by tenth Indian Brigade infantry with tanks of twenty second Armored Brigade on Sidi Muftah. At same time, thirty second Army Tank Brigade and infantry to charge north flank at Sidra Ridge. Three hundred thirty tanks deployed to south and seventy to north. -------------------------------------------------------------------------------- /Chapter_4/practice_WWII_words.py: -------------------------------------------------------------------------------- 1 | """Book code using the novel The Lost World 2 | 3 | For words not in book, spell-out with first letter of words. 4 | Flag 'first letter mode' by bracketing between alternating 5 | 'a a' and 'the the'. 6 | 7 | credit: Eric T. Mortenson 8 | """ 9 | import sys 10 | import os 11 | import random 12 | import string 13 | from collections import defaultdict, Counter 14 | 15 | def main(): 16 | message = input("Enter plaintext or ciphertext: ") 17 | process = input("Enter 'encrypt' or 'decrypt': ") 18 | shift = int(input("Shift value (1-365) = ")) 19 | infile = input("Enter filename with extension: ") 20 | 21 | if not os.path.exists(infile): 22 | print("File {} not found.
Terminating.".format(infile), file=sys.stderr) 23 | sys.exit(1) 24 | word_list = load_file(infile) 25 | word_dict = make_dict(word_list, shift) 26 | letter_dict = make_letter_dict(word_list) 27 | 28 | if process == 'encrypt': 29 | ciphertext = encrypt(message, word_dict, letter_dict) 30 | count = Counter(ciphertext) 31 | encryptedWordList = [] 32 | for number in ciphertext: 33 | encryptedWordList.append(word_list[number - shift]) 34 | 35 | print("\nencrypted word list = \n {} \n" 36 | .format(' '.join(encryptedWordList))) 37 | print("encrypted ciphertext = \n {}\n".format(ciphertext)) 38 | 39 | # Check the encryption by decrypting the ciphertext. 40 | print("decrypted plaintext = ") 41 | singleFirstCheck = False 42 | for cnt, i in enumerate(ciphertext): 43 | if word_list[ciphertext[cnt]-shift] == 'a' and \ 44 | word_list[ciphertext[cnt+1]-shift] == 'a': 45 | continue 46 | if word_list[ciphertext[cnt]-shift] == 'a' and \ 47 | word_list[ciphertext[cnt-1]-shift] == 'a': 48 | singleFirstCheck = True 49 | continue 50 | if singleFirstCheck == True and cnt 0: 88 | if word[0].isalpha(): 89 | firstLetterDict[word[0]].append(word) 90 | return firstLetterDict 91 | 92 | def encrypt(message, word_dict, letter_dict): 93 | """Return list of indexes representing characters in a message.""" 94 | encrypted = [] 95 | # remove punctuation from message words 96 | messageWords = message.lower().split() 97 | messageWordsNoPunct = ["".join(char for char in word if char not in \ 98 | string.punctuation) for word in messageWords] 99 | for word in messageWordsNoPunct: 100 | if len(word_dict[word]) > 1: 101 | index = random.choice(word_dict[word]) 102 | elif len(word_dict[word]) == 1: # Random.choice fails if only 1 choice. 103 | index = word_dict[word][0] 104 | elif len(word_dict[word]) == 0: # Word not in word_dict. 
105 | encrypted.append(random.choice(word_dict['a'])) 106 | encrypted.append(random.choice(word_dict['a'])) 107 | 108 | for letter in word: 109 | if letter not in letter_dict.keys(): 110 | print('\nLetter {} not in letter-to-word dictionary.' 111 | .format(letter), file=sys.stderr) 112 | continue 113 | if len(letter_dict[letter]) > 1: 114 | newWord = random.choice(letter_dict[letter]) 115 | else: 116 | newWord = letter_dict[letter][0] 117 | if len(word_dict[newWord]) > 1: 118 | index = random.choice(word_dict[newWord]) 119 | else: 120 | index = word_dict[newWord][0] 121 | encrypted.append(index) 122 | 123 | encrypted.append(random.choice(word_dict['the'])) 124 | encrypted.append(random.choice(word_dict['the'])) 125 | continue 126 | encrypted.append(index) 127 | return encrypted 128 | 129 | def decrypt(message, word_list, shift): 130 | """Decrypt ciphertext string and return plaintext word string. 131 | 132 | This shows how plaintext looks before extracting first letters. 133 | """ 134 | plaintextList = [] 135 | indexes = [s.replace(',', '').replace('[', '').replace(']', '') 136 | for s in message.split()] 137 | for count, i in enumerate(indexes): 138 | plaintextList.append(word_list[int(i) - shift]) 139 | return ' '.join(plaintextList) 140 | 141 | def check_for_fail(ciphertext): 142 | """Return True if ciphertext contains any duplicate keys.""" 143 | check = [k for k, v in Counter(ciphertext).items() if v > 1] 144 | if len(check) > 0: 145 | print(check) 146 | return True 147 | 148 | if __name__ == '__main__': 149 | main() 150 | -------------------------------------------------------------------------------- /Chapter_4/practice_barchart.py: -------------------------------------------------------------------------------- 1 | """Plot barchart of characters in text file.""" 2 | import sys 3 | import os 4 | import operator 5 | from collections import Counter 6 | import matplotlib.pyplot as plt 7 | 8 | def load_file(infile): 9 | """Read and return text file as string of 
lowercase characters.""" 10 | with open(infile) as f: 11 | text = f.read().lower() 12 | return text 13 | 14 | def main(): 15 | infile = 'lost.txt' 16 | if not os.path.exists(infile): 17 | print("File {} not found. Terminating.".format(infile), 18 | file=sys.stderr) 19 | sys.exit(1) 20 | 21 | text = load_file(infile) 22 | 23 | # Make bar chart of characters in text and their frequency. 24 | char_freq = Counter(text) 25 | char_freq_sorted = sorted(char_freq.items(), 26 | key=operator.itemgetter(1), reverse=True) 27 | x, y = zip(*char_freq_sorted) # * unpacks iterable. 28 | fig, ax = plt.subplots() 29 | ax.bar(x, y) 30 | plt.show() # Blocks so the chart window stays open when run as a script. 31 | 32 | if __name__ == '__main__': 33 | main() 34 | -------------------------------------------------------------------------------- /Chapter_4/rebecca.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | import random 4 | from collections import defaultdict, Counter 5 | 6 | def main(): 7 | message = input("Enter plaintext or ciphertext: ") 8 | process = input("Enter 'encrypt' or 'decrypt': ") 9 | while process not in ('encrypt', 'decrypt'): 10 | process = input("Invalid process. Enter 'encrypt' or 'decrypt': ") 11 | shift = int(input("Shift value (1-366) = ")) 12 | while not 1 <= shift <= 366: 13 | shift = int(input("Invalid value. Enter digit from 1 to 366: ")) 14 | infile = input("Enter filename with extension: ") 15 | if not os.path.exists(infile): 16 | print("File {} not found. Terminating.".format(infile), file=sys.stderr) 17 | sys.exit(1) 18 | text = load_file(infile) 19 | char_dict = make_dict(text, shift) 20 | 21 | if process == 'encrypt': 22 | ciphertext = encrypt(message, char_dict) 23 | 24 | # Run QC protocols and print results. 
25 | if check_for_fail(ciphertext): 26 | print("\nProblem finding unique keys.", file=sys.stderr) 27 | print("Try again, change message, or change code book.\n", 28 | file=sys.stderr) 29 | sys.exit() 30 | 31 | print("\nCharacter and number of occurrences in char_dict: \n") 32 | print("{: >10}{: >10}{: >10}".format('Character', 'Unicode', 'Count')) 33 | for key in sorted(char_dict.keys()): 34 | print('{:>10}{:>10}{:>10}'.format(repr(key)[1:-1], 35 | str(ord(key)), 36 | len(char_dict[key]))) 37 | print('\nNumber of distinct characters: {}'.format(len(char_dict))) 38 | print("Total number of characters: {:,}\n".format(len(text))) 39 | 40 | print("encrypted ciphertext = \n {}\n".format(ciphertext)) 41 | 42 | # Check the encryption by decrypting the ciphertext. 43 | print("decrypted plaintext = ") 44 | for i in ciphertext: 45 | print(text[i - shift], end='', flush=True) 46 | 47 | elif process == 'decrypt': 48 | plaintext = decrypt(message, text, shift) 49 | print("\ndecrypted plaintext = \n {}".format(plaintext)) 50 | 51 | 52 | def load_file(infile): 53 | """Read and return text file as a string of lowercase characters.""" 54 | with open(infile) as f: 55 | loaded_string = f.read().lower() 56 | return loaded_string 57 | 58 | def make_dict(text, shift): 59 | """Return dictionary of characters as keys and shifted indexes as values.""" 60 | char_dict = defaultdict(list) 61 | for index, char in enumerate(text): 62 | char_dict[char].append(index + shift) 63 | return char_dict 64 | 65 | def encrypt(message, char_dict): 66 | """Return list of indexes representing characters in a message.""" 67 | encrypted = [] 68 | for char in message.lower(): 69 | if len(char_dict[char]) > 1: 70 | index = random.choice(char_dict[char]) 71 | elif len(char_dict[char]) == 1: # Random.choice fails if only 1 choice. 
72 | index = char_dict[char][0] 73 | elif len(char_dict[char]) == 0: 74 | print("\nCharacter {} not in dictionary.".format(char), 75 | file=sys.stderr) 76 | continue 77 | encrypted.append(index) 78 | return encrypted 79 | 80 | def decrypt(message, text, shift): 81 | """Decrypt ciphertext list and return plaintext string.""" 82 | plaintext = '' 83 | indexes = [s.replace(',', '').replace('[', '').replace(']', '') 84 | for s in message.split()] 85 | for i in indexes: 86 | plaintext += text[int(i) - shift] 87 | return plaintext 88 | 89 | def check_for_fail(ciphertext): 90 | """Return True if ciphertext contains any duplicate keys.""" 91 | check = [k for k, v in Counter(ciphertext).items() if v > 1] 92 | if len(check) > 0: 93 | return True 94 | 95 | if __name__ == '__main__': 96 | main() 97 | -------------------------------------------------------------------------------- /Chapter_5/blink_comparator.py: -------------------------------------------------------------------------------- 1 | import os 2 | from pathlib import Path 3 | import numpy as np 4 | import cv2 as cv 5 | 6 | MIN_NUM_KEYPOINT_MATCHES = 50 7 | 8 | def main(): 9 | """Loop through 2 folders with paired images, register and blink images.""" 10 | night1_files = sorted(os.listdir('night_1')) 11 | night2_files = sorted(os.listdir('night_2')) 12 | path1 = Path.cwd() / 'night_1' 13 | path2 = Path.cwd() / 'night_2' 14 | path3 = Path.cwd() / 'night_1_registered' 15 | 16 | for i, _ in enumerate(night1_files): 17 | img1 = cv.imread(str(path1 / night1_files[i]), cv.IMREAD_GRAYSCALE) 18 | img2 = cv.imread(str(path2 / night2_files[i]), cv.IMREAD_GRAYSCALE) 19 | 20 | print("Comparing {} to {}.\n".format(night1_files[i], night2_files[i])) 21 | 22 | # Find keypoints and best matches between them. 23 | kp1, kp2, best_matches = find_best_matches(img1, img2) 24 | img_match = cv.drawMatches(img1, kp1, img2, kp2, 25 | best_matches, outImg=None) 26 | 27 | # Draw a line between the two images. 
28 | height, width = img1.shape 29 | cv.line(img_match, (width, 0), (width, height), (255, 255, 255), 1) 30 | QC_best_matches(img_match) # Comment-out to ignore. 31 | 32 | # Register left-hand image using keypoints. 33 | img1_registered = register_image(img1, img2, kp1, kp2, best_matches) 34 | 35 | # QC registration and save registered image (Optional steps): 36 | blink(img1, img1_registered, 'Check Registration', num_loops=5) 37 | out_filename = '{}_registered.png'.format(night1_files[i][:-4]) 38 | cv.imwrite(str(path3 / out_filename), img1_registered) # Will overwrite! 39 | 40 | cv.destroyAllWindows() 41 | 42 | # Run the blink comparator 43 | blink(img1_registered, img2, 'Blink Comparator', num_loops=15) 44 | 45 | def find_best_matches(img1, img2): 46 | """Return list of keypoints and list of best matches for two images.""" 47 | orb = cv.ORB_create(nfeatures=100) # Initiate ORB object. 48 | 49 | # Find the keypoints and descriptors with ORB. 50 | kp1, desc1 = orb.detectAndCompute(img1, mask=None) 51 | kp2, desc2 = orb.detectAndCompute(img2, mask=None) 52 | 53 | # Find keypoint matches using Brute Force Matcher. 54 | bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True) 55 | matches = bf.match(desc1, desc2) 56 | 57 | # Sort matches in ascending order of distance and keep best n matches. 58 | matches = sorted(matches, key=lambda x: x.distance) 59 | best_matches = matches[:MIN_NUM_KEYPOINT_MATCHES] 60 | 61 | return kp1, kp2, best_matches 62 | 63 | def QC_best_matches(img_match): 64 | """Draw best keypoint matches connected by colored lines.""" 65 | cv.imshow('Best {} Matches'.format(MIN_NUM_KEYPOINT_MATCHES), img_match) 66 | cv.waitKey(2500) # Keeps window active 2.5 seconds. 
67 | 68 | def register_image(img1, img2, kp1, kp2, best_matches): 69 | """Return first image registered to second image.""" 70 | if len(best_matches) >= MIN_NUM_KEYPOINT_MATCHES: 71 | src_pts = np.zeros((len(best_matches), 2), dtype=np.float32) 72 | dst_pts = np.zeros((len(best_matches), 2), dtype=np.float32) 73 | for i, match in enumerate(best_matches): 74 | src_pts[i, :] = kp1[match.queryIdx].pt 75 | dst_pts[i, :] = kp2[match.trainIdx].pt 76 | 77 | h_array, mask = cv.findHomography(src_pts, dst_pts, cv.RANSAC) 78 | height, width = img2.shape # Get dimensions of image 2. 79 | img1_warped = cv.warpPerspective(img1, h_array, (width, height)) 80 | 81 | return img1_warped 82 | 83 | else: 84 | print("WARNING: Number of keypoint matches < {}\n".format 85 | (MIN_NUM_KEYPOINT_MATCHES)) 86 | return img1 87 | 88 | def blink(image_1, image_2, window_name, num_loops): 89 | """Replicate blink comparator with two images.""" 90 | for _ in range(num_loops): 91 | cv.imshow(window_name, image_1) 92 | cv.waitKey(330) 93 | cv.imshow(window_name, image_2) 94 | cv.waitKey(330) 95 | 96 | if __name__ == '__main__': 97 | main() 98 | -------------------------------------------------------------------------------- /Chapter_5/montages/montage_left.JPG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/montages/montage_left.JPG -------------------------------------------------------------------------------- /Chapter_5/montages/montage_right.JPG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/montages/montage_right.JPG -------------------------------------------------------------------------------- /Chapter_5/montages/practice_montage_aligner.py: 
-------------------------------------------------------------------------------- 1 | import numpy as np 2 | import cv2 3 | 4 | MIN_NUM_KEYPOINT_MATCHES = 150 5 | 6 | img1 = cv2.imread('montage_left.JPG', cv2.IMREAD_COLOR) # queryImage 7 | img2 = cv2.imread('montage_right.JPG', cv2.IMREAD_COLOR) # trainImage 8 | 9 | img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY) # Convert to grayscale. 10 | img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) 11 | 12 | orb = cv2.ORB_create(nfeatures=700) 13 | 14 | # Find the keypoints and descriptors with ORB. 15 | kp1, desc1 = orb.detectAndCompute(img1, None) 16 | kp2, desc2 = orb.detectAndCompute(img2, None) 17 | 18 | # Find keypoint matches using Brute Force Matcher. 19 | bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True) 20 | matches = bf.match(desc1, desc2, None) 21 | 22 | # Sort matches in ascending order of distance. 23 | matches = sorted(matches, key=lambda x: x.distance) 24 | 25 | # Draw best matches. 26 | img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:MIN_NUM_KEYPOINT_MATCHES], 27 | None) 28 | 29 | cv2.namedWindow('Matches', cv2.WINDOW_NORMAL) 30 | img3_resize = cv2.resize(img3, (699, 700)) 31 | cv2.imshow('Matches', img3_resize) 32 | cv2.waitKey(7000) # Keeps window open 7 seconds. 33 | cv2.destroyWindow('Matches') 34 | 35 | # Keep only best matches. 36 | best_matches = matches[:MIN_NUM_KEYPOINT_MATCHES] 37 | 38 | if len(best_matches) >= MIN_NUM_KEYPOINT_MATCHES: 39 | src_pts = np.zeros((len(best_matches), 2), dtype=np.float32) 40 | dst_pts = np.zeros((len(best_matches), 2), dtype=np.float32) 41 | 42 | for i, match in enumerate(best_matches): 43 | src_pts[i, :] = kp1[match.queryIdx].pt 44 | dst_pts[i, :] = kp2[match.trainIdx].pt 45 | 46 | M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC) 47 | 48 | # Get dimensions of image 2. 
49 | height, width = img2.shape 50 | img1_warped = cv2.warpPerspective(img1, M, (width, height)) 51 | 52 | cv2.imwrite('montage_left_registered.JPG', img1_warped) 53 | cv2.imwrite('montage_right_gray.JPG', img2) 54 | 55 | else: 56 | print("\nWARNING: Number of keypoint matches < {}!\n" 57 | .format(MIN_NUM_KEYPOINT_MATCHES)) 58 | 59 | 60 | 61 | 62 | 63 | 64 | -------------------------------------------------------------------------------- /Chapter_5/montages/practice_montage_difference_finder.py: -------------------------------------------------------------------------------- 1 | import cv2 as cv 2 | 3 | filename1 = 'montage_left.JPG' 4 | filename2 = 'montage_right_gray.JPG' 5 | 6 | img1 = cv.imread(filename1, cv.IMREAD_GRAYSCALE) 7 | img2 = cv.imread(filename2, cv.IMREAD_GRAYSCALE) 8 | 9 | # Absolute difference between images 1 & 2: 10 | diff_imgs1_2 = cv.absdiff(img1, img2) 11 | 12 | cv.namedWindow('Difference', cv.WINDOW_NORMAL) 13 | diff_imgs1_2_resize = cv.resize(diff_imgs1_2, (699, 700)) 14 | cv.imshow('Difference', diff_imgs1_2_resize) 15 | 16 | crop_diff = diff_imgs1_2[10:2795, 10:2445] # NumPy slice order is [y1:y2, x1:x2]. 17 | 18 | # Blur to remove extraneous noise. 
19 | blurred = cv.GaussianBlur(crop_diff, (5, 5), 0) 20 | 21 | (minVal, maxVal, minLoc, maxLoc2) = cv.minMaxLoc(blurred) 22 | cv.circle(img2, maxLoc2, 100, 0, 3) 23 | x, y = int(img2.shape[1]/4), int(img2.shape[0]/4) 24 | img2_resize = cv.resize(img2, (x, y)) 25 | cv.imshow('Change', img2_resize) 26 | -------------------------------------------------------------------------------- /Chapter_5/night_1/1_bright_transient_left.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1/1_bright_transient_left.png -------------------------------------------------------------------------------- /Chapter_5/night_1/2_dim_transient_left.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1/2_dim_transient_left.png -------------------------------------------------------------------------------- /Chapter_5/night_1/3_diff_exposures_left.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1/3_diff_exposures_left.png -------------------------------------------------------------------------------- /Chapter_5/night_1/4_single_transient_left.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1/4_single_transient_left.png -------------------------------------------------------------------------------- /Chapter_5/night_1/5_no_transient_left.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1/5_no_transient_left.png -------------------------------------------------------------------------------- /Chapter_5/night_1/6_bright_transient_neg_left.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1/6_bright_transient_neg_left.png -------------------------------------------------------------------------------- /Chapter_5/night_1_2_transients/1_bright_transient_left_registered_DECTECTED.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_2_transients/1_bright_transient_left_registered_DECTECTED.png -------------------------------------------------------------------------------- /Chapter_5/night_1_2_transients/2_dim_transient_left_registered_DECTECTED.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_2_transients/2_dim_transient_left_registered_DECTECTED.png -------------------------------------------------------------------------------- /Chapter_5/night_1_2_transients/3_diff_exposures_left_registered_DECTECTED.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_2_transients/3_diff_exposures_left_registered_DECTECTED.png -------------------------------------------------------------------------------- /Chapter_5/night_1_2_transients/4_single_transient_left_registered_DECTECTED.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_2_transients/4_single_transient_left_registered_DECTECTED.png -------------------------------------------------------------------------------- /Chapter_5/night_1_registered/1_bright_transient_left_registered.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_registered/1_bright_transient_left_registered.png -------------------------------------------------------------------------------- /Chapter_5/night_1_registered/2_dim_transient_left_registered.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_registered/2_dim_transient_left_registered.png -------------------------------------------------------------------------------- /Chapter_5/night_1_registered/3_diff_exposures_left_registered.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_registered/3_diff_exposures_left_registered.png -------------------------------------------------------------------------------- /Chapter_5/night_1_registered/4_single_transient_left_registered.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_registered/4_single_transient_left_registered.png -------------------------------------------------------------------------------- 
/Chapter_5/night_1_registered/5_no_transient_left_registered.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_registered/5_no_transient_left_registered.png -------------------------------------------------------------------------------- /Chapter_5/night_1_registered/6_bright_transient_neg_left_registered.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_registered/6_bright_transient_neg_left_registered.png -------------------------------------------------------------------------------- /Chapter_5/night_1_registered_transients/1_bright_transient_left_registered.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_registered_transients/1_bright_transient_left_registered.png -------------------------------------------------------------------------------- /Chapter_5/night_1_registered_transients/2_dim_transient_left_registered.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_registered_transients/2_dim_transient_left_registered.png -------------------------------------------------------------------------------- /Chapter_5/night_1_registered_transients/3_diff_exposures_left_registered.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_registered_transients/3_diff_exposures_left_registered.png 
-------------------------------------------------------------------------------- /Chapter_5/night_1_registered_transients/4_single_transient_left_registered.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_registered_transients/4_single_transient_left_registered.png -------------------------------------------------------------------------------- /Chapter_5/night_1_registered_transients/5_no_transient_left_registered.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_registered_transients/5_no_transient_left_registered.png -------------------------------------------------------------------------------- /Chapter_5/night_1_registered_transients/6_bright_transient_neg_left_registered.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_1_registered_transients/6_bright_transient_neg_left_registered.png -------------------------------------------------------------------------------- /Chapter_5/night_2/1_bright_transient_right.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_2/1_bright_transient_right.png -------------------------------------------------------------------------------- /Chapter_5/night_2/2_dim_transient_right.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_2/2_dim_transient_right.png 
-------------------------------------------------------------------------------- /Chapter_5/night_2/3_diff_exposures_right.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_2/3_diff_exposures_right.png -------------------------------------------------------------------------------- /Chapter_5/night_2/4_single_transient_right.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_2/4_single_transient_right.png -------------------------------------------------------------------------------- /Chapter_5/night_2/5_no_transient_right.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_2/5_no_transient_right.png -------------------------------------------------------------------------------- /Chapter_5/night_2/6_bright_transient_neg_right.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_5/night_2/6_bright_transient_neg_right.png -------------------------------------------------------------------------------- /Chapter_5/practice_orbital_path.py: -------------------------------------------------------------------------------- 1 | import os 2 | from pathlib import Path 3 | import cv2 as cv 4 | 5 | PAD = 5 # Ignore pixels this distance from edge. 6 | 7 | def find_transient(image, diff_image, pad): 8 | """Takes image, difference image, and pad value in pixels and returns 9 | boolean and location of maxVal in difference image excluding an edge 10 | rind. 
Draws circle around maxVal on image.""" 11 | transient = False 12 | height, width = diff_image.shape 13 | cv.rectangle(image, (PAD, PAD), (width - PAD, height - PAD), 255, 1) 14 | minVal, maxVal, minLoc, maxLoc = cv.minMaxLoc(diff_image) 15 | if pad < maxLoc[0] < width - pad and pad < maxLoc[1] < height - pad: 16 | cv.circle(image, maxLoc, 10, 255, 0) 17 | transient = True 18 | return transient, maxLoc 19 | 20 | def main(): 21 | night1_files = sorted(os.listdir('night_1_registered_transients')) 22 | night2_files = sorted(os.listdir('night_2')) 23 | path1 = Path.cwd() / 'night_1_registered_transients' 24 | path2 = Path.cwd() / 'night_2' 25 | path3 = Path.cwd() / 'night_1_2_transients' 26 | 27 | # Images should all be the same size and similar exposures. 28 | for i, _ in enumerate(night1_files[:-1]): # Leave off negative image 29 | img1 = cv.imread(str(path1 / night1_files[i]), cv.IMREAD_GRAYSCALE) 30 | img2 = cv.imread(str(path2 / night2_files[i]), cv.IMREAD_GRAYSCALE) 31 | 32 | # Get absolute difference between images. 33 | diff_imgs1_2 = cv.absdiff(img1, img2) 34 | cv.imshow('Difference', diff_imgs1_2) 35 | cv.waitKey(2000) 36 | 37 | # Copy difference image and find and circle brightest pixel. 38 | temp = diff_imgs1_2.copy() 39 | transient1, transient_loc1 = find_transient(img1, temp, PAD) 40 | 41 | # Draw black circle on temporary image to obliterate brightest spot. 42 | cv.circle(temp, transient_loc1, 10, 0, -1) 43 | 44 | # Get location of new brightest pixel and circle it on input image. 
45 | transient2, transient_loc2 = find_transient(img1, temp, PAD) 46 | 47 | if transient1 or transient2: 48 | print('\nTRANSIENT DETECTED between {} and {}\n' 49 | .format(night1_files[i], night2_files[i])) 50 | font = cv.FONT_HERSHEY_COMPLEX_SMALL 51 | cv.putText(img1, night1_files[i], (10, 25), 52 | font, 1, (255, 255, 255), 1, cv.LINE_AA) 53 | cv.putText(img1, night2_files[i], (10, 55), 54 | font, 1, (255, 255, 255), 1, cv.LINE_AA) 55 | if transient1 and transient2: 56 | cv.line(img1, transient_loc1, transient_loc2, (255, 255, 255), 57 | 1, lineType=cv.LINE_AA) 58 | 59 | blended = cv.addWeighted(img1, 1, diff_imgs1_2, 1, 0) 60 | cv.imshow('Surveyed', blended) 61 | cv.waitKey(2500) # Keeps window open 2.5 seconds. 62 | 63 | out_filename = '{}_DECTECTED.png'.format(night1_files[i][:-4]) 64 | cv.imwrite(str(path3 / out_filename), blended) # Will overwrite! 65 | 66 | else: 67 | print('\nNo transient detected between {} and {}\n' 68 | .format(night1_files[i], night2_files[i])) 69 | 70 | if __name__ == '__main__': 71 | main() 72 | -------------------------------------------------------------------------------- /Chapter_5/transient_detector.py: -------------------------------------------------------------------------------- 1 | import os 2 | from pathlib import Path 3 | import cv2 as cv 4 | 5 | PAD = 5 # Ignore pixels this distance from edge 6 | 7 | def find_transient(image, diff_image, pad): 8 | """Finds and draws circle around transients moving against a star field.""" 9 | transient = False 10 | height, width = diff_image.shape 11 | cv.rectangle(image, (PAD, PAD), (width - PAD, height - PAD), 255, 1) 12 | minVal, maxVal, minLoc, maxLoc = cv.minMaxLoc(diff_image) 13 | if pad < maxLoc[0] < width - pad and pad < maxLoc[1] < height - pad: 14 | cv.circle(image, maxLoc, 10, 255, 0) 15 | transient = True 16 | return transient, maxLoc 17 | 18 | def main(): 19 | night1_files = sorted(os.listdir('night_1_registered_transients')) 20 | night2_files = 
sorted(os.listdir('night_2')) 21 | path1 = Path.cwd() / 'night_1_registered_transients' 22 | path2 = Path.cwd() / 'night_2' 23 | path3 = Path.cwd() / 'night_1_2_transients' 24 | 25 | # Images should all be the same size and similar exposures. 26 | for i, _ in enumerate(night1_files[:-1]): # Leave off negative image 27 | img1 = cv.imread(str(path1 / night1_files[i]), cv.IMREAD_GRAYSCALE) 28 | img2 = cv.imread(str(path2 / night2_files[i]), cv.IMREAD_GRAYSCALE) 29 | 30 | # Get absolute difference between images. 31 | diff_imgs1_2 = cv.absdiff(img1, img2) 32 | cv.imshow('Difference', diff_imgs1_2) 33 | cv.waitKey(2000) 34 | 35 | # Copy difference image and find and circle brightest pixel. 36 | temp = diff_imgs1_2.copy() 37 | transient1, transient_loc1 = find_transient(img1, temp, PAD) 38 | 39 | # Draw black circle on temporary image to obliterate brightest spot. 40 | cv.circle(temp, transient_loc1, 10, 0, -1) 41 | 42 | # Get location of new brightest pixel and circle it on input image. 43 | transient2, _ = find_transient(img1, temp, PAD) 44 | 45 | if transient1 or transient2: 46 | print('\nTRANSIENT DETECTED between {} and {}\n' 47 | .format(night1_files[i], night2_files[i])) 48 | font = cv.FONT_HERSHEY_COMPLEX_SMALL 49 | cv.putText(img1, night1_files[i], (10, 25), 50 | font, 1, (255, 255, 255), 1, cv.LINE_AA) 51 | cv.putText(img1, night2_files[i], (10, 55), 52 | font, 1, (255, 255, 255), 1, cv.LINE_AA) 53 | 54 | blended = cv.addWeighted(img1, 1, diff_imgs1_2, 1, 0) 55 | cv.imshow('Surveyed', blended) 56 | cv.waitKey(2500) 57 | 58 | out_filename = '{}_DECTECTED.png'.format(night1_files[i][:-4]) 59 | cv.imwrite(str(path3 / out_filename), blended) # Will overwrite! 
60 | 61 | else: 62 | print('\nNo transient detected between {} and {}\n' 63 | .format(night1_files[i], night2_files[i])) 64 | 65 | if __name__ == '__main__': 66 | main() 67 | -------------------------------------------------------------------------------- /Chapter_6/apollo_8_free_return.py: -------------------------------------------------------------------------------- 1 | from turtle import Shape, Screen, Turtle, Vec2D as Vec 2 | 3 | # User input: 4 | G = 8 # Gravitational constant used for the simulation. 5 | NUM_LOOPS = 4100 # Number of time steps in the simulation. 6 | Ro_X = 0 # Ship starting position x coordinate. 7 | Ro_Y = -85 # Ship starting position y coordinate. 8 | Vo_X = 485 # Ship translunar injection velocity x component. 9 | Vo_Y = 0 # Ship translunar injection velocity y component. 10 | 11 | 12 | class GravSys(): 13 | """Runs a gravity simulation on n-bodies.""" 14 | 15 | def __init__(self): 16 | self.bodies = [] 17 | self.t = 0 18 | self.dt = 0.001 19 | 20 | def sim_loop(self): 21 | """Loop bodies in a list through time steps.""" 22 | for _ in range(NUM_LOOPS): # Stops simulation after capsule splashdown. 23 | self.t += self.dt 24 | for body in self.bodies: 25 | body.step() 26 | 27 | 28 | class Body(Turtle): 29 | """Celestial object that orbits and projects gravity field.""" 30 | def __init__(self, mass, start_loc, vel, gravsys, shape): 31 | super().__init__(shape=shape) 32 | self.gravsys = gravsys 33 | self.penup() 34 | self.mass = mass 35 | self.setpos(start_loc) 36 | self.vel = vel 37 | gravsys.bodies.append(self) 38 | #self.resizemode("user") 39 | #self.pendown() # Uncomment to draw path behind object. 40 | 41 | def acc(self): 42 | """Calculate combined force on body and return vector components.""" 43 | a = Vec(0, 0) 44 | for body in self.gravsys.bodies: 45 | if body != self: 46 | r = body.pos() - self.pos() 47 | a += (G * body.mass / abs(r)**3) * r # Units: dist/time^2. 
48 | return a 49 | 50 | def step(self): 51 | """Calculate position, orientation, and velocity of a body.""" 52 | dt = self.gravsys.dt 53 | a = self.acc() 54 | self.vel = self.vel + dt * a 55 | self.setpos(self.pos() + dt * self.vel) 56 | if self.gravsys.bodies.index(self) == 2: # Index 2 = CSM. 57 | rotate_factor = 0.0006 58 | self.setheading((self.heading() - rotate_factor * self.xcor())) 59 | if self.xcor() < -20: 60 | self.shape('arrow') 61 | self.shapesize(0.5) 62 | self.setheading(105) 63 | 64 | 65 | 66 | def main(): 67 | # Setup screen 68 | screen = Screen() 69 | screen.setup(width=1.0, height=1.0) # For fullscreen. 70 | screen.bgcolor('black') 71 | screen.title("Apollo 8 Free Return Simulation") 72 | 73 | # Instantiate gravitational system 74 | gravsys = GravSys() 75 | 76 | # Instantiate Earth turtle 77 | image_earth = 'earth_100x100.gif' 78 | screen.register_shape(image_earth) 79 | earth = Body(1000000, (0, -25), Vec(0, -2.5), gravsys, image_earth) 80 | earth.pencolor('white') 81 | earth.getscreen().tracer(0, 0) # So csm polys won't show while drawing. 82 | 83 | # Instantiate moon turtle 84 | image_moon = 'moon_27x27.gif' 85 | screen.register_shape(image_moon) 86 | moon = Body(32000, (344, 42), Vec(-27, 147), gravsys, image_moon) 87 | moon.pencolor('gray') 88 | 89 | # Build command-service-module(csm)shape 90 | csm = Shape('compound') 91 | cm = ((0, 30), (0, -30), (30, 0)) 92 | csm.addcomponent(cm, 'white', 'white') # Silver and red are also good. 93 | sm = ((-60, 30), (0, 30), (0, -30), (-60, -30)) 94 | csm.addcomponent(sm, 'white', 'black') 95 | nozzle = ((-55, 0), (-90, 20), (-90, -20)) 96 | csm.addcomponent(nozzle, 'white', 'white') 97 | screen.register_shape('csm', csm) 98 | 99 | # Instantiate Apollo 8 CSM turtle 100 | ship = Body(1, (Ro_X, Ro_Y), Vec(Vo_X, Vo_Y), gravsys, 'csm') 101 | ship.shapesize(0.2) 102 | ship.color('white') # Path color. Silver and red are also good. 
103 | ship.getscreen().tracer(1, 0) 104 | ship.setheading(90) 105 | 106 | gravsys.sim_loop() 107 | 108 | if __name__ == '__main__': 109 | main() 110 | -------------------------------------------------------------------------------- /Chapter_6/earth_100x100.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_6/earth_100x100.gif -------------------------------------------------------------------------------- /Chapter_6/moon_27x27.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_6/moon_27x27.gif -------------------------------------------------------------------------------- /Chapter_6/practice_grav_assist_intersecting.py: -------------------------------------------------------------------------------- 1 | """gravity_assist_intersecting.py 2 | 3 | Moon and ship cross orbits and moon slows and turns ship. 4 | 5 | Credit: Eric T. Mortenson 6 | """ 7 | 8 | from turtle import Shape, Screen, Turtle, Vec2D as Vec 9 | import turtle 10 | import math 11 | import sys 12 | 13 | # User input: 14 | G = 8 # Gravitational constant used for the simulation. 15 | NUM_LOOPS = 7000 # Number of time steps in simulation. 16 | Ro_X = -152.18 # Ship starting position x coordinate. 17 | Ro_Y = 329.87 # Ship starting position y coordinate. 18 | Vo_X = 423.10 # Ship translunar injection velocity x component. 19 | Vo_Y = -512.26 # Ship translunar injection velocity y component. 
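All three gravity scripts in this chapter advance bodies with the same scheme: each time step, sum the accelerations a = G·m/|r|³ · r from every other body, then update velocity and position (semi-implicit Euler). A minimal dependency-free sketch of that step, with hypothetical masses and starting values (here all accelerations are computed before any body moves, which conserves total momentum exactly, whereas the turtle versions update bodies one at a time):

```python
# Minimal n-body step in the style of GravSys/Body.
# Masses, positions, and velocities below are illustrative only.
G = 8        # Gravitational constant for the simulation.
DT = 0.001   # Time step.

def acc(i, bodies):
    """Gravitational acceleration on body i from all other bodies."""
    ax = ay = 0.0
    xi, yi = bodies[i]['pos']
    for j, other in enumerate(bodies):
        if j == i:
            continue
        rx = other['pos'][0] - xi
        ry = other['pos'][1] - yi
        d = (rx * rx + ry * ry) ** 0.5
        ax += G * other['mass'] * rx / d**3   # Same G * m / |r|**3 * r form.
        ay += G * other['mass'] * ry / d**3
    return ax, ay

def step(bodies, dt=DT):
    """Semi-implicit Euler: v += dt*a first, then x += dt*v."""
    accels = [acc(i, bodies) for i in range(len(bodies))]
    for body, (ax, ay) in zip(bodies, accels):
        vx, vy = body['vel']
        vx, vy = vx + dt * ax, vy + dt * ay
        body['vel'] = (vx, vy)
        x, y = body['pos']
        body['pos'] = (x + dt * vx, y + dt * vy)

bodies = [
    {'mass': 1_000_000, 'pos': (0.0, 0.0), 'vel': (0.0, 0.0)},   # "Planet"
    {'mass': 1, 'pos': (0.0, -85.0), 'vel': (300.0, 0.0)},        # "Ship"
]
for _ in range(1000):
    step(bodies)
```

Because both bodies feel equal and opposite forces computed from the same positions, the total momentum (initially 300 units in x from the ship) stays constant to within floating-point rounding over the run.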
20 | 21 | 22 | MOON_MASS = 1_250_000 23 | 24 | class GravSys(): 25 | """Runs a gravity simulation on n-bodies.""" 26 | 27 | 28 | def __init__(self): 29 | self.bodies = [] 30 | self.t = 0 31 | self.dt = 0.001 32 | 33 | 34 | def sim_loop(self): 35 | """Loop bodies in a list through time steps.""" 36 | for _ in range(NUM_LOOPS): # Stops simulation after NUM_LOOPS steps. 37 | self.t += self.dt 38 | for body in self.bodies: 39 | body.step() 40 | 41 | class Body(Turtle): 42 | """Celestial object that orbits and projects gravity field.""" 43 | def __init__(self, mass, start_loc, vel, gravsys, shape): 44 | super().__init__(shape=shape) 45 | self.gravsys = gravsys 46 | self.penup() 47 | self.mass = mass 48 | self.setpos(start_loc) 49 | self.vel = vel 50 | gravsys.bodies.append(self) 51 | self.pendown() # Comment out to stop drawing path behind object. 52 | 53 | 54 | def acc(self): 55 | """Calculate combined force on body and return vector components.""" 56 | a = Vec(0, 0) 57 | for body in self.gravsys.bodies: 58 | if body != self: 59 | r = body.pos() - self.pos() 60 | a += (G * body.mass / abs(r)**3) * r # Units: dist/time^2. 61 | return a 62 | 63 | 64 | def step(self): 65 | """Calculate position, orientation, and velocity of a body.""" 66 | dt = self.gravsys.dt 67 | a = self.acc() 68 | self.vel = self.vel + dt * a 69 | x_old, y_old = self.pos() # For orienting ship. 70 | self.setpos(self.pos() + dt * self.vel) 71 | x_new, y_new = self.pos() # For orienting ship. 72 | if self.gravsys.bodies.index(self) == 1: # The CSM. 73 | dir_radians = math.atan2(y_new - y_old, x_new - x_old) 74 | dir_degrees = dir_radians * 180 / math.pi 75 | self.setheading(dir_degrees + 90) # For orienting ship. 76 | 77 | 78 | def main(): 79 | # Setup screen. 80 | screen = Screen() 81 | screen.setup(width=1.0, height=1.0) # For fullscreen. 82 | screen.bgcolor('black') 83 | screen.title("Gravity Assist Example") 84 | 85 | # Instantiate gravitational system. 86 | gravsys = GravSys() 87 | 88 | # 
Instantiate Planet 89 | image_moon = 'moon_27x27.gif' 90 | screen.register_shape(image_moon) 91 | moon = Body(MOON_MASS, (-250, 0), Vec(500, 0), gravsys, image_moon) 92 | moon.pencolor('gray') 93 | 94 | # Build command-service-module (csm) shape 95 | csm = Shape('compound') 96 | cm = ((0, 30), (0, -30), (30, 0)) 97 | csm.addcomponent(cm, 'red', 'red') 98 | sm = ((-60,30), (0, 30), (0, -30), (-60, -30)) 99 | csm.addcomponent(sm, 'red', 'black') 100 | nozzle = ((-55, 0), (-90, 20), (-90, -20)) 101 | csm.addcomponent(nozzle, 'red', 'red') 102 | screen.register_shape('csm', csm) 103 | 104 | # Instantiate Apollo 8 CSM turtle 105 | ship = Body(1, (Ro_X, Ro_Y), Vec(Vo_X, Vo_Y), gravsys, "csm") 106 | ship.shapesize(0.2) 107 | ship.color('red') # path color 108 | ship.getscreen().tracer(1, 0) 109 | ship.setheading(90) 110 | 111 | gravsys.sim_loop() 112 | 113 | if __name__=='__main__': 114 | main() 115 | 116 | -------------------------------------------------------------------------------- /Chapter_6/practice_grav_assist_stationary.py: -------------------------------------------------------------------------------- 1 | """gravity_assist_stationary.py 2 | 3 | Moon approaches stationary ship which is swung around and flung away. 4 | 5 | Credit: Eric T. Mortenson 6 | """ 7 | 8 | from turtle import Shape, Screen, Turtle, Vec2D as Vec 9 | import turtle 10 | import math 11 | 12 | # User input: 13 | G = 8 # Gravitational constant used for the simulation. 14 | NUM_LOOPS = 4100 # Number of time steps in simulation. 15 | Ro_X = 0 # Ship starting position x coordinate. 16 | Ro_Y = -50 # Ship starting position y coordinate. 17 | Vo_X = 0 # Ship velocity x component. 18 | Vo_Y = 0 # Ship velocity y component. 
19 | 20 | MOON_MASS = 1_250_000 21 | 22 | class GravSys(): 23 | """Runs a gravity simulation on n-bodies.""" 24 | 25 | 26 | def __init__(self): 27 | self.bodies = [] 28 | self.t = 0 29 | self.dt = 0.001 30 | 31 | 32 | def sim_loop(self): 33 | """Loop bodies in a list through time steps.""" 34 | for _ in range(NUM_LOOPS): 35 | self.t += self.dt 36 | for body in self.bodies: 37 | body.step() 38 | 39 | 40 | class Body(Turtle): 41 | """Celestial object that orbits and projects gravity field.""" 42 | def __init__(self, mass, start_loc, vel, gravsys, shape): 43 | super().__init__(shape=shape) 44 | self.gravsys = gravsys 45 | self.penup() 46 | self.mass = mass 47 | self.setpos(start_loc) 48 | self.vel = vel 49 | gravsys.bodies.append(self) 50 | self.pendown() # Comment out to stop drawing path behind object. 51 | 52 | 53 | def acc(self): 54 | """Calculate combined force on body and return vector components.""" 55 | a = Vec(0, 0) 56 | for body in self.gravsys.bodies: 57 | if body != self: 58 | r = body.pos() - self.pos() 59 | a += (G * body.mass / abs(r)**3) * r # Units: dist/time^2. 60 | return a 61 | 62 | 63 | def step(self): 64 | """Calculate position, orientation, and velocity of a body.""" 65 | dt = self.gravsys.dt 66 | a = self.acc() 67 | self.vel = self.vel + dt * a 68 | x_old, y_old = self.pos() # For orienting ship. 69 | self.setpos(self.pos() + dt * self.vel) 70 | x_new, y_new = self.pos() # For orienting ship. 71 | if self.gravsys.bodies.index(self) == 1: # The CSM. 72 | dir_radians = math.atan2(y_new - y_old, x_new - x_old) 73 | dir_degrees = dir_radians * 180 / math.pi 74 | self.setheading(dir_degrees + 90) # For orienting ship. 75 | 76 | 77 | def main(): 78 | # Setup screen. 79 | screen = Screen() 80 | screen.setup(width=1.0, height=1.0) # For fullscreen. 81 | screen.bgcolor('black') 82 | screen.title("Gravity Assist Example") 83 | 84 | # Instantiate gravitational system. 85 | gravsys = GravSys() 86 | 87 | # Instantiate Planet. 88 | image_moon = 
'moon_27x27.gif' 89 | screen.register_shape(image_moon) 90 | moon = Body(MOON_MASS, (500, 0), Vec(-500, 0), gravsys, image_moon) 91 | moon.pencolor('gray') 92 | 93 | # Build command-service-module (csm) shape 94 | csm = Shape('compound') 95 | cm = ((0, 30), (0, -30), (30, 0)) 96 | csm.addcomponent(cm, 'red', 'red') 97 | sm = ((-60,30), (0, 30), (0, -30), (-60, -30)) 98 | csm.addcomponent(sm, 'red', 'black') 99 | nozzle = ((-55, 0), (-90, 20), (-90, -20)) 100 | csm.addcomponent(nozzle, 'red', 'red') 101 | screen.register_shape('csm', csm) 102 | 103 | # Instantiate Apollo 8 CSM turtle 104 | ship = Body(1, (Ro_X, Ro_Y), Vec(Vo_X, Vo_Y), gravsys, "csm") 105 | ship.shapesize(0.2) 106 | ship.color('red') # path color 107 | ship.getscreen().tracer(1, 0) 108 | ship.setheading(90) 109 | 110 | gravsys.sim_loop() 111 | 112 | if __name__=='__main__': 113 | main() 114 | 115 | -------------------------------------------------------------------------------- /Chapter_6/search_pattern/helicopter_left.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_6/search_pattern/helicopter_left.gif -------------------------------------------------------------------------------- /Chapter_6/search_pattern/helicopter_right.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_6/search_pattern/helicopter_right.gif -------------------------------------------------------------------------------- /Chapter_6/search_pattern/practice_search_pattern.py: -------------------------------------------------------------------------------- 1 | import time 2 | import random 3 | import turtle 4 | 5 | SA_X = 600 # Search area width. 6 | SA_Y = 480 # Search area height. 7 | TRACK_SPACING = 40 # Distance between search tracks. 
8 | 9 | 10 | # Setup screen. 11 | screen = turtle.Screen() 12 | screen.setup(width=SA_X, height=SA_Y) 13 | turtle.resizemode('user') 14 | screen.title("Search Pattern") 15 | rand_x = random.randint(0, int(SA_X / 2)) * random.choice([-1, 1]) 16 | rand_y = random.randint(0, int(SA_Y / 2)) * random.choice([-1, 1]) 17 | 18 | 19 | # Set up turtle images. 20 | seaman_image = 'seaman.gif' 21 | screen.addshape(seaman_image) 22 | copter_image_left = 'helicopter_left.gif' 23 | copter_image_right = 'helicopter_right.gif' 24 | screen.addshape(copter_image_left) 25 | screen.addshape(copter_image_right) 26 | 27 | # Instantiate seaman turtle. 28 | seaman = turtle.Turtle(seaman_image) 29 | seaman.hideturtle() 30 | seaman.penup() 31 | seaman.setpos(rand_x, rand_y) 32 | seaman.showturtle() 33 | 34 | # Instantiate copter turtle. 35 | turtle.shape(copter_image_right) 36 | turtle.hideturtle() 37 | turtle.pencolor('black') 38 | turtle.penup() 39 | turtle.setpos(-(int(SA_X / 2) - TRACK_SPACING), int(SA_Y / 2) - TRACK_SPACING) 40 | turtle.showturtle() 41 | turtle.pendown() 42 | 43 | # Run search pattern and announce discovery of seaman. 
44 | for i in range(int(SA_Y / TRACK_SPACING)): 45 | turtle.fd(SA_X - TRACK_SPACING * 2) 46 | turtle.rt(90) 47 | turtle.fd(TRACK_SPACING / 2) 48 | turtle.rt(90) 49 | turtle.shape(copter_image_left) 50 | turtle.fd(SA_X - TRACK_SPACING * 2) 51 | turtle.lt(90) 52 | turtle.fd(TRACK_SPACING / 2) 53 | turtle.lt(90) 54 | turtle.shape(copter_image_right) 55 | if turtle.ycor() - seaman.ycor() <= 10: 56 | turtle.write(" Seaman found!", 57 | align='left', 58 | font=("Arial", 15, 'normal', 'bold', 'italic')) 59 | time.sleep(3) 60 | 61 | break 62 | -------------------------------------------------------------------------------- /Chapter_6/search_pattern/seaman.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_6/search_pattern/seaman.gif -------------------------------------------------------------------------------- /Chapter_7/Mars_Global_Geology_Mariner9_1024.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_7/Mars_Global_Geology_Mariner9_1024.jpg -------------------------------------------------------------------------------- /Chapter_7/geo_thresh.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_7/geo_thresh.jpg -------------------------------------------------------------------------------- /Chapter_7/mola_1024x501.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_7/mola_1024x501.png -------------------------------------------------------------------------------- /Chapter_7/mola_1024x512_200mp.jpg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_7/mola_1024x512_200mp.jpg -------------------------------------------------------------------------------- /Chapter_7/mola_color_1024x506.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_7/mola_color_1024x506.png -------------------------------------------------------------------------------- /Chapter_7/practice_3d_plotting.py: -------------------------------------------------------------------------------- 1 | """Plot Mars MOLA map image in 3D. Credit Eric T. Mortenson.""" 2 | import numpy as np 3 | import cv2 as cv 4 | import matplotlib.pyplot as plt 5 | from mpl_toolkits import mplot3d 6 | 7 | IMG_GRAY = cv.imread('mola_1024x512_200mp.jpg', cv.IMREAD_GRAYSCALE) 8 | 9 | x = np.linspace(1023, 0, 1024) 10 | y = np.linspace(0, 511, 512) 11 | 12 | X, Y = np.meshgrid(x, y) 13 | Z = IMG_GRAY[0:512, 0:1024] 14 | 15 | fig = plt.figure() 16 | ax = plt.axes(projection='3d') 17 | ax.contour3D(X, Y, Z, 150, cmap='gist_earth') # 150=number of contours 18 | ax.auto_scale_xyz([1023, 0], [0, 511], [0, 500]) 19 | plt.show() 20 | -------------------------------------------------------------------------------- /Chapter_7/practice_confirm_drawing_part_of_image.py: -------------------------------------------------------------------------------- 1 | """Test that drawings become part of an image in OpenCV.""" 2 | import numpy as np 3 | import cv2 as cv 4 | 5 | IMG = cv.imread('mola_1024x501.png', cv.IMREAD_GRAYSCALE) 6 | 7 | ul_x, ul_y = 0, 167 8 | lr_x, lr_y = 32, 183 9 | rect_img = IMG[ul_y : lr_y, ul_x : lr_x] 10 | 11 | def run_stats(image): 12 | """Run stats on a numpy array made from an image.""" 13 | print('mean = {}'.format(np.mean(image))) 14 | 
print('std = {}'.format(np.std(image))) 15 | print('ptp = {}'.format(np.ptp(image))) 16 | print() 17 | cv.imshow('img', IMG) 18 | cv.waitKey(1000) 19 | 20 | # Stats with no drawing on screen: 21 | print("No drawing") 22 | run_stats(rect_img) 23 | 24 | # Stats with white rectangle outline: 25 | print("White outlined rectangle") 26 | cv.rectangle(IMG, (ul_x, ul_y), (lr_x, lr_y), (255, 0, 0), 1) 27 | run_stats(rect_img) 28 | 29 | # Stats with rectangle filled with white: 30 | print("White-filled rectangle") 31 | cv.rectangle(IMG, (ul_x, ul_y), (lr_x, lr_y), (255, 0, 0), -1) 32 | run_stats(rect_img) 33 | 34 | 35 | -------------------------------------------------------------------------------- /Chapter_7/practice_geo_map_step_1of2.py: -------------------------------------------------------------------------------- 1 | """Threshold a grayscale image using pixel values and save to file.""" 2 | import cv2 as cv 3 | 4 | IMG_GEO = cv.imread('Mars_Global_Geology_Mariner9_1024.jpg', cv.IMREAD_GRAYSCALE) 5 | cv.imshow('map', IMG_GEO) 6 | cv.waitKey(1000) 7 | img_copy = IMG_GEO.copy() 8 | lower_limit = 170 # Lowest grayscale value for volcanic deposits 9 | upper_limit = 185 # Highest grayscale value for volcanic deposits 10 | for x in range(1024): 11 | for y in range(512): 12 | if lower_limit <= img_copy[y, x] <= upper_limit: 13 | img_copy[y, x] = 1 # Set to 255 to visualize results. 
14 | else: 15 | img_copy[y, x] = 0 16 | 17 | cv.imwrite('geo_thresh.jpg', img_copy) 18 | cv.imshow('thresh', img_copy) 19 | cv.waitKey(0) 20 | -------------------------------------------------------------------------------- /Chapter_7/practice_geo_map_step_2of2.py: -------------------------------------------------------------------------------- 1 | """Select Martian landing sites based on surface smoothness and geology.""" 2 | import tkinter as tk 3 | from PIL import Image, ImageTk 4 | import numpy as np 5 | import cv2 as cv 6 | 7 | # CONSTANTS: User Input: 8 | IMG_GRAY = cv.imread('mola_1024x512_200mp.jpg', cv.IMREAD_GRAYSCALE) 9 | IMG_GEO = cv.imread('geo_thresh.jpg', cv.IMREAD_GRAYSCALE) 10 | IMG_COLOR = cv.imread('mola_color_1024x506.png') 11 | RECT_WIDTH_KM = 670 # Site rectangle width in kilometers. 12 | RECT_HT_KM = 335 # Site rectangle height in kilometers. 13 | MIN_ELEV_LIMIT = 60 # Intensity values (0-255). 14 | MAX_ELEV_LIMIT = 255 15 | NUM_CANDIDATES = 20 # Number of candidate landing sites to display. 16 | 17 | #------------------------------------------------------------------------------ 18 | 19 | # CONSTANTS: Derived and fixed: 20 | IMG_GRAY_GEO = IMG_GRAY * IMG_GEO 21 | IMG_HT, IMG_WIDTH = IMG_GRAY.shape 22 | MARS_CIRCUM = 21344 # Circumference in kilometers. 
23 | PIXELS_PER_KM = IMG_WIDTH / MARS_CIRCUM 24 | RECT_WIDTH = int(PIXELS_PER_KM * RECT_WIDTH_KM) 25 | RECT_HT = int(PIXELS_PER_KM * RECT_HT_KM) 26 | LAT_30_N = int(IMG_HT / 3) 27 | LAT_30_S = LAT_30_N * 2 28 | STEP_X = int(RECT_WIDTH / 2) # Dividing by 4 yields more rect choices 29 | STEP_Y = int(RECT_HT / 2) # Dividing by 4 yields more rect choices 30 | 31 | # Create tkinter screen and drawing canvas 32 | screen = tk.Tk() 33 | canvas = tk.Canvas(screen, width=IMG_WIDTH, height=IMG_HT + 130) 34 | 35 | 36 | class Search(): 37 | """Read image and identify landing sites based on input criteria.""" 38 | 39 | 40 | def __init__(self, name): 41 | self.name = name 42 | self.rect_coords = {} 43 | self.rect_means = {} 44 | self.rect_ptps = {} 45 | self.rect_stds = {} 46 | self.ptp_filtered = [] 47 | self.std_filtered = [] 48 | self.high_graded_rects = [] 49 | 50 | 51 | def run_rect_stats(self): 52 | """Define rectangular search areas and calculate internal stats.""" 53 | ul_x, ul_y = 0, LAT_30_N 54 | lr_x, lr_y = RECT_WIDTH, LAT_30_N + RECT_HT 55 | rect_num = 1 56 | 57 | while True: 58 | rect_img = IMG_GRAY_GEO[ul_y : lr_y, ul_x : lr_x] 59 | self.rect_coords[rect_num] = [ul_x, ul_y, lr_x, lr_y] 60 | if MAX_ELEV_LIMIT >= np.mean(rect_img) >= MIN_ELEV_LIMIT: 61 | self.rect_means[rect_num] = np.mean(rect_img) 62 | self.rect_ptps[rect_num] = np.ptp(rect_img) 63 | self.rect_stds[rect_num] = np.std(rect_img) 64 | rect_num += 1 65 | 66 | # Move the rectangle. 
67 | ul_x += STEP_X 68 | lr_x = ul_x + RECT_WIDTH 69 | if lr_x > IMG_WIDTH: 70 | ul_x = 0 71 | ul_y += STEP_Y 72 | lr_x = RECT_WIDTH 73 | lr_y += STEP_Y 74 | if lr_y > LAT_30_S + STEP_Y: 75 | break 76 | 77 | def draw_qc_rects(self): 78 | """Draw overlapping search rectangles on image as a check.""" 79 | img_copy = IMG_GRAY_GEO.copy() 80 | rects_sorted = sorted(self.rect_coords.items(), key=lambda x: x[0]) 81 | print("\nRect Number and Corner Coordinates (ul_x, ul_y, lr_x, lr_y):") 82 | for k, v in rects_sorted: 83 | print("rect: {}, coords: {}".format(k, v)) 84 | cv.rectangle(img_copy, 85 | (self.rect_coords[k][0], self.rect_coords[k][1]), 86 | (self.rect_coords[k][2], self.rect_coords[k][3]), 87 | (255, 0, 0), 1) 88 | cv.imshow('QC Rects {}'.format(self.name), img_copy) 89 | cv.waitKey(3000) 90 | cv.destroyAllWindows() 91 | 92 | def sort_stats(self): 93 | """Sort dictionaries by values and create lists of top N keys.""" 94 | ptp_sorted = (sorted(self.rect_ptps.items(), key=lambda x: x[1])) 95 | self.ptp_filtered = [x[0] for x in ptp_sorted[:NUM_CANDIDATES]] 96 | std_sorted = (sorted(self.rect_stds.items(), key=lambda x: x[1])) 97 | self.std_filtered = [x[0] for x in std_sorted[:NUM_CANDIDATES]] 98 | 99 | # Make list of rects where filtered std & ptp coincide. 100 | for rect in self.std_filtered: 101 | if rect in self.ptp_filtered: 102 | self.high_graded_rects.append(rect) 103 | 104 | def draw_filtered_rects(self, image, filtered_rect_list): 105 | """Draw rectangles in list on image and return image.""" 106 | img_copy = image.copy() 107 | for k in filtered_rect_list: 108 | cv.rectangle(img_copy, 109 | (self.rect_coords[k][0], self.rect_coords[k][1]), 110 | (self.rect_coords[k][2], self.rect_coords[k][3]), 111 | (255, 0, 0), 1) 112 | cv.putText(img_copy, str(k), 113 | (self.rect_coords[k][0] + 1, self.rect_coords[k][3]- 1), 114 | cv.FONT_HERSHEY_PLAIN, 0.65, (255, 0, 0), 1) 115 | 116 | # Draw latitude limits. 
117 | cv.putText(img_copy, '30 N', (10, LAT_30_N - 7), 118 | cv.FONT_HERSHEY_PLAIN, 1, 255) 119 | cv.line(img_copy, (0, LAT_30_N), (IMG_WIDTH, LAT_30_N), 120 | (255, 0, 0), 1) 121 | cv.line(img_copy, (0, LAT_30_S), (IMG_WIDTH, LAT_30_S), 122 | (255, 0, 0), 1) 123 | cv.putText(img_copy, '30 S', (10, LAT_30_S + 16), 124 | cv.FONT_HERSHEY_PLAIN, 1, 255) 125 | 126 | return img_copy 127 | 128 | def make_final_display(self): 129 | """Use Tk to show map of final rects & printout of their statistics.""" 130 | screen.title('Sites by MOLA Gray STD & PTP {} Rect'.format(self.name)) 131 | # Draw the high-graded rects on the colored elevation map. 132 | img_color_rects = self.draw_filtered_rects(IMG_COLOR, 133 | self.high_graded_rects) 134 | # Convert image from CV BGR to RGB for use with Tkinter. 135 | img_converted = cv.cvtColor(img_color_rects, cv.COLOR_BGR2RGB) 136 | img_converted = ImageTk.PhotoImage(Image.fromarray(img_converted)) 137 | canvas.create_image(0, 0, image=img_converted, anchor=tk.NW) 138 | # Add stats for each rectangle at bottom of canvas. 139 | txt_x = 5 140 | txt_y = IMG_HT + 15 141 | for k in self.high_graded_rects: 142 | canvas.create_text(txt_x, txt_y, anchor='w', font=None, 143 | text= 144 | "rect={} mean elev={:.1f} std={:.2f} ptp={}" 145 | .format(k, self.rect_means[k], 146 | self.rect_stds[k], 147 | self.rect_ptps[k])) 148 | txt_y += 15 149 | if txt_y >= int(canvas.cget('height')) - 10: 150 | txt_x += 300 151 | txt_y = IMG_HT + 15 152 | canvas.pack() 153 | screen.mainloop() 154 | 155 | def main(): 156 | app = Search('670x335 km') 157 | app.run_rect_stats() 158 | app.draw_qc_rects() 159 | app.sort_stats() 160 | ptp_img = app.draw_filtered_rects(IMG_GRAY_GEO, app.ptp_filtered) 161 | std_img = app.draw_filtered_rects(IMG_GRAY_GEO, app.std_filtered) 162 | 163 | # Display filtered rects on grayscale map. 
164 | cv.imshow('Sorted by ptp for {} rect'.format(app.name), ptp_img) 165 | cv.waitKey(3000) 166 | cv.imshow('Sorted by std for {} rect'.format(app.name), std_img) 167 | cv.waitKey(3000) 168 | 169 | app.make_final_display() # includes call to mainloop() 170 | 171 | if __name__ == '__main__': 172 | main() 173 | -------------------------------------------------------------------------------- /Chapter_7/practice_profile_olympus.py: -------------------------------------------------------------------------------- 1 | """West-East elevation profile through Olympus Mons.""" 2 | from PIL import Image, ImageDraw 3 | from matplotlib import pyplot as plt 4 | 5 | # Load image and get x and z values along horiz profile parallel to y _coord. 6 | y_coord = 202 7 | im = Image.open('mola_1024x512_200mp.jpg').convert('L') 8 | width, height = im.size 9 | x_vals = [x for x in range(width)] 10 | z_vals = [im.getpixel((x, y_coord)) for x in x_vals] 11 | 12 | # Draw profile on MOLA image. 13 | draw = ImageDraw.Draw(im) 14 | draw.line((0, y_coord, width, y_coord), fill=255, width=3) 15 | draw.text((100, 165), 'Olympus Mons', fill=255) 16 | im.show() 17 | 18 | # Make profile plot. 19 | fig, ax = plt.subplots(figsize=(9, 4)) 20 | axes = plt.gca() 21 | axes.set_ylim(0, 400) 22 | ax.plot(x_vals, z_vals, color='black') 23 | ax.set(xlabel='x-coordinate', 24 | ylabel='Intensity (height)', 25 | title="Mars Elevation Profile (y = 202)") 26 | ratio = 0.15 # Reduces vertical exaggeration in profile. 
27 | xleft, xright = ax.get_xlim() 28 | ybase, ytop = ax.get_ylim() 29 | ax.set_aspect(abs((xright-xleft)/(ybase-ytop)) * ratio) 30 | plt.text(0, 310, 'WEST', fontsize=10) 31 | plt.text(980, 310, 'EAST', fontsize=10) 32 | plt.text(100, 280, 'Olympus Mons', fontsize=8) 33 | ##ax.grid() 34 | plt.show() 35 | -------------------------------------------------------------------------------- /Chapter_7/site_selector.py: -------------------------------------------------------------------------------- 1 | """Select Martian landing sites based on smoothness, elevation, latitude.""" 2 | 3 | import tkinter as tk 4 | from PIL import Image, ImageTk 5 | import numpy as np 6 | import cv2 as cv 7 | 8 | # CONSTANTS: User Input: 9 | IMG_GRAY = cv.imread('mola_1024x501.png', cv.IMREAD_GRAYSCALE) 10 | IMG_COLOR = cv.imread('mola_color_1024x506.png') 11 | RECT_WIDTH_KM = 670 # Site rectangle width in kilometers. 12 | RECT_HT_KM = 335 # Site rectangle height in kilometers. 13 | MAX_ELEV_LIMIT = 55 # Intensity values (0-255). 14 | NUM_CANDIDATES = 20 # Number of candidate landing sites to display. 15 | MARS_CIRCUM = 21344 # Circumference in kilometers. 
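The derived constants that follow scale the kilometer inputs into map pixels. Worked through with the values above (a 1,024-pixel-wide map of a 21,344 km circumference), the 670 × 335 km search rectangle comes out to 32 × 16 pixels:

```python
# Reproduce the km-to-pixel conversion from the derived constants.
IMG_WIDTH = 1024     # MOLA map width in pixels.
MARS_CIRCUM = 21344  # Mars circumference in kilometers.
RECT_WIDTH_KM = 670  # Site rectangle width in kilometers.
RECT_HT_KM = 335     # Site rectangle height in kilometers.

PIXELS_PER_KM = IMG_WIDTH / MARS_CIRCUM  # ~0.048 pixels per kilometer.
RECT_WIDTH = int(PIXELS_PER_KM * RECT_WIDTH_KM)
RECT_HT = int(PIXELS_PER_KM * RECT_HT_KM)
print(RECT_WIDTH, RECT_HT)  # 32 16
```

Note that `int()` truncates rather than rounds, so each rectangle is slightly smaller than the requested kilometer footprint.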
16 | 17 | #------------------------------------------------------------------------------ 18 | 19 | # CONSTANTS: Derived: 20 | IMG_HT, IMG_WIDTH = IMG_GRAY.shape 21 | PIXELS_PER_KM = IMG_WIDTH / MARS_CIRCUM 22 | RECT_WIDTH = int(PIXELS_PER_KM * RECT_WIDTH_KM) 23 | RECT_HT = int(PIXELS_PER_KM * RECT_HT_KM) 24 | LAT_30_N = int(IMG_HT / 3) 25 | LAT_30_S = LAT_30_N * 2 26 | STEP_X = int(RECT_WIDTH / 2) 27 | STEP_Y = int(RECT_HT / 2) 28 | 29 | # Create tkinter screen and drawing canvas 30 | screen = tk.Tk() 31 | canvas = tk.Canvas(screen, width=IMG_WIDTH, height=IMG_HT + 130) 32 | 33 | 34 | class Search(): 35 | """Read image and identify landing rectangles based on input criteria.""" 36 | 37 | def __init__(self, name): 38 | self.name = name 39 | self.rect_coords = {} 40 | self.rect_means = {} 41 | self.rect_ptps = {} 42 | self.rect_stds = {} 43 | self.ptp_filtered = [] 44 | self.std_filtered = [] 45 | self.high_graded_rects = [] 46 | 47 | def run_rect_stats(self): 48 | """Define rectangular search areas and calculate internal stats.""" 49 | ul_x, ul_y = 0, LAT_30_N 50 | lr_x, lr_y = RECT_WIDTH, LAT_30_N + RECT_HT 51 | rect_num = 1 52 | 53 | while True: 54 | rect_img = IMG_GRAY[ul_y : lr_y, ul_x : lr_x] 55 | self.rect_coords[rect_num] = [ul_x, ul_y, lr_x, lr_y] 56 | if np.mean(rect_img) <= MAX_ELEV_LIMIT: 57 | self.rect_means[rect_num] = np.mean(rect_img) 58 | self.rect_ptps[rect_num] = np.ptp(rect_img) 59 | self.rect_stds[rect_num] = np.std(rect_img) 60 | rect_num += 1 61 | 62 | # Move the rectangle. 
63 | ul_x += STEP_X 64 | lr_x = ul_x + RECT_WIDTH 65 | if lr_x > IMG_WIDTH: 66 | ul_x = 0 67 | ul_y += STEP_Y 68 | lr_x = RECT_WIDTH 69 | lr_y += STEP_Y 70 | if lr_y > LAT_30_S + STEP_Y: 71 | break 72 | 73 | def draw_qc_rects(self): 74 | """Draw overlapping search rectangles on image as a check.""" 75 | img_copy = IMG_GRAY.copy() 76 | rects_sorted = sorted(self.rect_coords.items(), key=lambda x: x[0]) 77 | print("\nRect Number and Corner Coordinates (ul_x, ul_y, lr_x, lr_y):") 78 | for k, v in rects_sorted: 79 | print("rect: {}, coords: {}".format(k, v)) 80 | cv.rectangle(img_copy, 81 | (self.rect_coords[k][0], self.rect_coords[k][1]), 82 | (self.rect_coords[k][2], self.rect_coords[k][3]), 83 | (255, 0, 0), 1) 84 | cv.imshow('QC Rects {}'.format(self.name), img_copy) 85 | cv.waitKey(3000) 86 | cv.destroyAllWindows() 87 | 88 | def sort_stats(self): 89 | """Sort dictionaries by values and create lists of top N keys.""" 90 | ptp_sorted = (sorted(self.rect_ptps.items(), key=lambda x: x[1])) 91 | self.ptp_filtered = [x[0] for x in ptp_sorted[:NUM_CANDIDATES]] 92 | std_sorted = (sorted(self.rect_stds.items(), key=lambda x: x[1])) 93 | self.std_filtered = [x[0] for x in std_sorted[:NUM_CANDIDATES]] 94 | 95 | # Make list of rects where filtered std & ptp coincide. 96 | for rect in self.std_filtered: 97 | if rect in self.ptp_filtered: 98 | self.high_graded_rects.append(rect) 99 | 100 | def draw_filtered_rects(self, image, filtered_rect_list): 101 | """Draw rectangles in list on image and return image.""" 102 | img_copy = image.copy() 103 | for k in filtered_rect_list: 104 | cv.rectangle(img_copy, 105 | (self.rect_coords[k][0], self.rect_coords[k][1]), 106 | (self.rect_coords[k][2], self.rect_coords[k][3]), 107 | (255, 0, 0), 1) 108 | cv.putText(img_copy, str(k), 109 | (self.rect_coords[k][0] + 1, self.rect_coords[k][3]- 1), 110 | cv.FONT_HERSHEY_PLAIN, 0.65, (255, 0, 0), 1) 111 | 112 | # Draw latitude limits. 
113 | cv.putText(img_copy, '30 N', (10, LAT_30_N - 7), 114 | cv.FONT_HERSHEY_PLAIN, 1, 255) 115 | cv.line(img_copy, (0, LAT_30_N), (IMG_WIDTH, LAT_30_N), 116 | (255, 0, 0), 1) 117 | cv.line(img_copy, (0, LAT_30_S), (IMG_WIDTH, LAT_30_S), 118 | (255, 0, 0), 1) 119 | cv.putText(img_copy, '30 S', (10, LAT_30_S + 16), 120 | cv.FONT_HERSHEY_PLAIN, 1, 255) 121 | 122 | return img_copy 123 | 124 | def make_final_display(self): 125 | """Use Tk to show map of final rects & printout of their statistics.""" 126 | screen.title('Sites by MOLA Gray STD & PTP {} Rect'.format(self.name)) 127 | # Draw the high-graded rects on the colored elevation map. 128 | img_color_rects = self.draw_filtered_rects(IMG_COLOR, 129 | self.high_graded_rects) 130 | # Convert image from CV BGR to RGB for use with Tkinter. 131 | img_converted = cv.cvtColor(img_color_rects, cv.COLOR_BGR2RGB) 132 | img_converted = ImageTk.PhotoImage(Image.fromarray(img_converted)) 133 | canvas.create_image(0, 0, image=img_converted, anchor=tk.NW) 134 | # Add stats for each rectangle at bottom of canvas. 135 | txt_x = 5 136 | txt_y = IMG_HT + 20 137 | for k in self.high_graded_rects: 138 | canvas.create_text(txt_x, txt_y, anchor='w', font=None, 139 | text="rect={} mean elev={:.1f} std={:.2f} ptp={}" 140 | .format(k, self.rect_means[k], self.rect_stds[k], 141 | self.rect_ptps[k])) 142 | txt_y += 15 143 | if txt_y >= int(canvas.cget('height')) - 10: 144 | txt_x += 300 145 | txt_y = IMG_HT + 20 146 | canvas.pack() 147 | screen.mainloop() 148 | 149 | 150 | def main(): 151 | app = Search('670x335 km') 152 | app.run_rect_stats() 153 | app.draw_qc_rects() 154 | app.sort_stats() 155 | ptp_img = app.draw_filtered_rects(IMG_GRAY, app.ptp_filtered) 156 | std_img = app.draw_filtered_rects(IMG_GRAY, app.std_filtered) 157 | 158 | # Display filtered rects on grayscale map. 
159 |     cv.imshow('Sorted by ptp for {} rect'.format(app.name), ptp_img)
160 |     cv.waitKey(3000)
161 |     cv.imshow('Sorted by std for {} rect'.format(app.name), std_img)
162 |     cv.waitKey(3000)
163 | 
164 |     app.make_final_display()  # Includes call to mainloop().
165 | 
166 | if __name__ == '__main__':
167 |     main()
168 | 
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_01.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_02.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_02.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_03.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_03.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_04.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_04.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_05.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_05.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_06.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_06.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_07.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_07.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_08.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_08.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_09.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_09.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_10.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_11.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_11.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_12.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_12.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_13.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_13.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_14.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_14.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_15.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_15.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_16.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_16.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_17.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_17.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_18.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_18.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_19.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_19.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_20.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_20.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_21.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_21.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_22.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_22.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_23.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_23.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_24.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_24.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_25.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_25.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_26.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_26.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_27.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_27.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_28.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_28.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_29.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_29.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_30.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_30.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_31.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_31.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_32.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_32.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_33.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_33.png
--------------------------------------------------------------------------------
/Chapter_8/br549_pixelated/pixelated_br549_34.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/br549_pixelated/pixelated_br549_34.png
--------------------------------------------------------------------------------
/Chapter_8/earth_east.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/earth_east.png
--------------------------------------------------------------------------------
/Chapter_8/earth_west.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/earth_west.png
--------------------------------------------------------------------------------
/Chapter_8/limb_darkening.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/limb_darkening.png
--------------------------------------------------------------------------------
/Chapter_8/pixelated_earth_east.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/pixelated_earth_east.png
--------------------------------------------------------------------------------
/Chapter_8/pixelated_earth_west.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_8/pixelated_earth_west.png
--------------------------------------------------------------------------------
/Chapter_8/pixelator.py:
--------------------------------------------------------------------------------
1 | """Degrade an image to 3x3 pixels and plot average BGR components."""
2 | 
3 | import numpy as np
4 | from matplotlib import pyplot as plt
5 | import cv2 as cv
6 | 
7 | files = ['earth_west.png', 'earth_east.png']
8 | 
9 | # Downscale image to 3x3 pixels.
10 | for file in files:
11 |     img_ini = cv.imread(file)
12 |     pixelated = cv.resize(img_ini, (3, 3), interpolation=cv.INTER_AREA)
13 |     img = cv.resize(pixelated, (300, 300), interpolation=cv.INTER_NEAREST)
14 |     cv.imshow('Pixelated {}'.format(file), img)
15 |     cv.waitKey(2000)
16 | 
17 |     # Split-out and average color channels.
18 |     b, g, r = cv.split(pixelated)
19 |     color_aves = []
20 |     for array in (b, g, r):
21 |         color_aves.append(np.average(array))
22 | 
23 |     # Make pie charts.
24 |     labels = 'Blue', 'Green', 'Red'
25 |     colors = ['blue', 'green', 'red']
26 |     fig, ax = plt.subplots(figsize=(3.5, 3.3))  # size in inches
27 |     _, _, autotexts = ax.pie(color_aves,
28 |                              labels=labels,
29 |                              autopct='%1.1f%%',
30 |                              colors=colors)
31 |     for autotext in autotexts:
32 |         autotext.set_color('white')
33 |     plt.title('{}\n'.format(file))
34 | 
35 | plt.show()
36 | 
--------------------------------------------------------------------------------
/Chapter_8/pixelator_saturated_only.py:
--------------------------------------------------------------------------------
1 | """Degrade an image to 3x3 pixels and plot BGR components of center pixel."""
2 | import cv2 as cv
3 | from matplotlib import pyplot as plt
4 | 
5 | files = ['earth_west.png', 'earth_east.png']
6 | 
7 | # Downscale image to 3x3 pixels.
8 | for file in files:
9 |     img_ini = cv.imread(file)
10 |     pixelated = cv.resize(img_ini, (3, 3), interpolation=cv.INTER_AREA)
11 |     img = cv.resize(pixelated, (300, 300), interpolation=cv.INTER_NEAREST)
12 |     cv.imshow('Pixelated {}'.format(file), img)
13 |     cv.waitKey(2000)
14 | 
15 |     color_values = pixelated[1, 1]  # Selects center pixel.
16 |     print(color_values)
17 | 
18 |     # Make pie charts.
19 |     labels = 'Blue', 'Green', 'Red'
20 |     colors = ['blue', 'green', 'red']
21 |     fig, ax = plt.subplots(figsize=(3.5, 3.3))  # Size in inches.
22 |     _, _, autotexts = ax.pie(color_values,
23 |                              labels=labels,
24 |                              autopct='%1.1f%%',
25 |                              colors=colors)
26 |     for autotext in autotexts:
27 |         autotext.set_color('white')
28 |     plt.title('{} Saturated Center Pixel \n'.format(file))
29 | 
30 | plt.show()
31 | 
--------------------------------------------------------------------------------
/Chapter_8/practice_alien_armada.py:
--------------------------------------------------------------------------------
1 | """Simulate transit of alien armada with light curve."""
2 | import random
3 | import numpy as np
4 | import cv2 as cv
5 | import matplotlib.pyplot as plt
6 | 
7 | STAR_RADIUS = 165
8 | BLACK_IMG = np.zeros((400, 500, 1), dtype="uint8")
9 | NUM_SHIPS = 5
10 | NUM_LOOPS = 300  # Number of simulation frames to run
11 | 
12 | 
13 | class Ship():
14 |     """Draws and moves a ship object on an image."""
15 | 
16 |     def __init__(self, number):
17 |         self.number = number
18 |         self.shape = random.choice(['>>>|==H[X)',
19 |                                     '>>|==H[XX}=))-',
20 |                                     '>>|==H[XX]=(-'])
21 |         self.size = random.choice([0.7, 0.8, 1])
22 |         self.x = random.randint(-180, -80)
23 |         self.y = random.randint(80, 350)
24 |         self.dx = random.randint(2, 4)
25 | 
26 |     def move_ship(self, image):
27 |         """Draws and moves ship object."""
28 |         font = cv.FONT_HERSHEY_PLAIN
29 |         cv.putText(img=image,
30 |                    text=self.shape,
31 |                    org=(self.x, self.y),
32 |                    fontFace=font,
33 |                    fontScale=self.size,
34 |                    color=0,
35 |                    thickness=5)
36 |         self.x += self.dx
37 | 
38 | def record_transit(start_image):
39 |     """Runs simulation and returns list of intensity measurements per frame."""
40 |     ship_list = []
41 |     intensity_samples = []
42 | 
43 |     for i in range(NUM_SHIPS):
44 |         ship_list.append(Ship(i))
45 | 
46 |     for _ in range(NUM_LOOPS):
47 |         temp_img = start_image.copy()
48 |         cv.circle(temp_img, (250, 200), STAR_RADIUS, 255, -1)  # The star.
49 |         for ship in ship_list:
50 |             ship.move_ship(temp_img)
51 |         intensity = temp_img.mean()
52 |         cv.putText(temp_img, 'Mean Intensity = {}'.format(intensity),
53 |                    (5, 390), cv.FONT_HERSHEY_PLAIN, 1, 255)
54 |         cv.imshow('Transit', temp_img)
55 |         intensity_samples.append(intensity)
56 |         cv.waitKey(50)
57 |     cv.destroyAllWindows()
58 |     return intensity_samples
59 | 
60 | 
61 | def calc_rel_brightness(image):
62 |     """Return list of relative brightness measurements for planetary transit."""
63 |     rel_brightness = record_transit(image)
64 |     max_brightness = max(rel_brightness)
65 |     for i, j in enumerate(rel_brightness):
66 |         rel_brightness[i] = j / max_brightness
67 |     return rel_brightness
68 | 
69 | def plot_light_curve(rel_brightness):
70 |     """Plots curve of relative brightness vs. time."""
71 |     plt.plot(rel_brightness, color='red', linestyle='dashed',
72 |              linewidth=2, label='Relative Brightness')
73 |     plt.legend(loc='upper center')
74 |     plt.title('Relative Brightness vs. Time')
75 |     plt.show()
76 | 
77 | relative_brightness = calc_rel_brightness(BLACK_IMG)
78 | plot_light_curve(relative_brightness)
79 | 
80 | 
--------------------------------------------------------------------------------
/Chapter_8/practice_asteroids.py:
--------------------------------------------------------------------------------
1 | """Simulate transit of asteroids and plot light curve."""
2 | import random
3 | import numpy as np
4 | import cv2 as cv
5 | import matplotlib.pyplot as plt
6 | 
7 | STAR_RADIUS = 165
8 | BLACK_IMG = np.zeros((400, 500, 1), dtype="uint8")
9 | NUM_ASTEROIDS = 15
10 | NUM_LOOPS = 170
11 | 
12 | 
13 | class Asteroid():
14 |     """Draws a circle on an image that represents an asteroid."""
15 | 
16 |     def __init__(self, number):
17 |         self.radius = random.choice((1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3))
18 |         self.x = random.randint(-30, 60)
19 |         self.y = random.randint(220, 230)
20 |         self.dx = 3
21 | 
22 |     def move_asteroid(self, image):
23 |         """Draw and move asteroid object."""
24 |         cv.circle(image, (self.x, self.y), self.radius, 0, -1)
25 |         self.x += self.dx
26 | 
27 | 
28 | def record_transit(start_image):
29 |     """Simulate transit of asteroids over star and return intensity list."""
30 |     asteroid_list = []
31 |     intensity_samples = []
32 | 
33 |     for i in range(NUM_ASTEROIDS):
34 |         asteroid_list.append(Asteroid(i))
35 | 
36 |     for _ in range(NUM_LOOPS):
37 |         temp_img = start_image.copy()
38 |         # Draw star.
39 |         cv.circle(temp_img, (250, 200), STAR_RADIUS, 255, -1)
40 |         for ast in asteroid_list:
41 |             ast.move_asteroid(temp_img)
42 |         intensity = temp_img.mean()
43 |         cv.putText(temp_img, 'Mean Intensity = {}'.format(intensity),
44 |                    (5, 390), cv.FONT_HERSHEY_PLAIN, 1, 255)
45 |         cv.imshow('Transit', temp_img)
46 |         intensity_samples.append(intensity)
47 |         cv.waitKey(50)
48 |     cv.destroyAllWindows()
49 |     return intensity_samples
50 | 
51 | def calc_rel_brightness(image):
52 |     """Calculate and return list of relative brightness samples."""
53 |     rel_brightness = record_transit(image)
54 |     max_brightness = max(rel_brightness)
55 |     for i, j in enumerate(rel_brightness):
56 |         rel_brightness[i] = j / max_brightness
57 |     return rel_brightness
58 | 
59 | def plot_light_curve(rel_brightness):
60 |     """Plot light curve from relative brightness list."""
61 |     plt.plot(rel_brightness, color='red', linestyle='dashed',
62 |              linewidth=2, label='Relative Brightness')
63 |     plt.legend(loc='upper center')
64 |     plt.title('Relative Brightness vs. Time')
65 |     plt.show()
66 | 
67 | relative_brightness = calc_rel_brightness(BLACK_IMG)
68 | plot_light_curve(relative_brightness)
69 | 
70 | 
--------------------------------------------------------------------------------
/Chapter_8/practice_length_of_day.py:
--------------------------------------------------------------------------------
1 | """Read-in images, calculate mean intensity, plot relative intensity vs time."""
2 | import os
3 | from statistics import mean
4 | import cv2 as cv
5 | import numpy as np
6 | import matplotlib.pyplot as plt
7 | from scipy import signal  # See Chap. 1 to install scipy.
8 | 
9 | # Switch to the folder containing images.
10 | os.chdir('br549_pixelated')
11 | images = sorted(os.listdir())
12 | intensity_samples = []
13 | 
14 | # Convert images to grayscale and make a list of mean intensity values.
15 | for image in images:
16 |     img = cv.imread(image, cv.IMREAD_GRAYSCALE)
17 |     intensity = img.mean()
18 |     intensity_samples.append(intensity)
19 | 
20 | # Generate a list of relative intensity values.
21 | rel_intensity = intensity_samples[:]
22 | max_intensity = max(rel_intensity)
23 | for i, j in enumerate(rel_intensity):
24 |     rel_intensity[i] = j / max_intensity
25 | 
26 | # Plot relative intensity values vs frame number (time proxy).
27 | plt.plot(rel_intensity, color='red', marker='o', linestyle='solid',
28 |          linewidth=2, markersize=0, label='Relative Intensity')
29 | plt.legend(loc='upper center')
30 | plt.title('Exoplanet BR549 Relative Intensity vs. Time')
31 | plt.ylim(0.8, 1.1)
32 | plt.xticks(np.arange(0, 50, 5))
33 | plt.grid()
34 | print("\nManually close plot window after examining to continue program.")
35 | plt.show()
36 | 
37 | # Find period / length of day.
38 | # Estimate peak height and separation (distance) limits from plot.
39 | # height and distance parameters represent >= limits.
40 | peaks = signal.find_peaks(rel_intensity, height=0.95, distance=5)
41 | print(f"peaks = {peaks}")
42 | print("Period = {}".format(mean(np.diff(peaks[0]))))
43 | 
44 | 
45 | 
--------------------------------------------------------------------------------
/Chapter_8/practice_limb_darkening.py:
--------------------------------------------------------------------------------
1 | """Simulate transit of exoplanet, plot light curve, estimate planet radius."""
2 | import cv2 as cv
3 | import matplotlib.pyplot as plt
4 | 
5 | IMG_HT = 400
6 | IMG_WIDTH = 500
7 | BLACK_IMG = cv.imread('limb_darkening.png', cv.IMREAD_GRAYSCALE)
8 | EXO_RADIUS = 7
9 | EXO_DX = 3
10 | EXO_START_X = 40
11 | EXO_START_Y = 230
12 | NUM_FRAMES = 145
13 | 
14 | def main():
15 |     intensity_samples = record_transit(EXO_START_X, EXO_START_Y)
16 |     relative_brightness = calc_rel_brightness(intensity_samples)
17 |     plot_light_curve(relative_brightness)
18 | 
19 | def record_transit(exo_x, exo_y):
20 |     """Draw planet transiting star and return list of intensity changes."""
21 |     intensity_samples = []
22 |     for _ in range(NUM_FRAMES):
23 |         temp_img = BLACK_IMG.copy()
24 |         # Draw exoplanet:
25 |         cv.circle(temp_img, (exo_x, exo_y), EXO_RADIUS, 0, -1)
26 |         intensity = temp_img.mean()
27 |         cv.putText(temp_img, 'Mean Intensity = {}'.format(intensity), (5, 390),
28 |                    cv.FONT_HERSHEY_PLAIN, 1, 255)
29 |         cv.imshow('Transit', temp_img)
30 |         cv.waitKey(30)
31 |         intensity_samples.append(intensity)
32 |         exo_x += EXO_DX
33 |     return intensity_samples
34 | 
35 | def calc_rel_brightness(intensity_samples):
36 |     """Return list of relative brightness from list of intensity values."""
37 |     rel_brightness = []
38 |     max_brightness = max(intensity_samples)
39 |     for intensity in intensity_samples:
40 |         rel_brightness.append(intensity / max_brightness)
41 |     return rel_brightness
42 | 
43 | def plot_light_curve(rel_brightness):
44 |     """Plot changes in relative brightness vs. time."""
45 |     plt.plot(rel_brightness, color='red', linestyle='dashed',
46 |              linewidth=2, label='Relative Brightness')
47 |     plt.legend(loc='upper center')
48 |     plt.title('Relative Brightness vs. Time')
49 |     ## plt.ylim(0.995, 1.001)
50 |     plt.show()
51 | 
52 | if __name__ == '__main__':
53 |     main()
54 | 
--------------------------------------------------------------------------------
/Chapter_8/practice_planet_moon.py:
--------------------------------------------------------------------------------
1 | """Moon animation credit Eric T. Mortenson."""
2 | import math
3 | import numpy as np
4 | import cv2 as cv
5 | import matplotlib.pyplot as plt
6 | 
7 | IMG_HT = 500
8 | IMG_WIDTH = 500
9 | BLACK_IMG = np.zeros((IMG_HT, IMG_WIDTH, 1), dtype='uint8')
10 | STAR_RADIUS = 200
11 | EXO_RADIUS = 20
12 | MOON_RADIUS = 5
13 | EXO_START_X = 20
14 | EXO_START_Y = 250
15 | NUM_DAYS = 200  # number days in year
16 | 
17 | def main():
18 |     intensity_samples = record_transit(EXO_START_X, EXO_START_Y)
19 |     relative_brightness = calc_rel_brightness(intensity_samples)
20 |     print('\nestimated exoplanet radius = {:.2f}\n'
21 |           .format(STAR_RADIUS * math.sqrt(max(relative_brightness)
22 |                                           - min(relative_brightness))))
23 |     plot_light_curve(relative_brightness)
24 | 
25 | def record_transit(exo_x, exo_y):
26 |     """Draw planet transiting star and return list of intensity changes."""
27 |     intensity_samples = []
28 |     for dt in range(NUM_DAYS):
29 |         temp_img = BLACK_IMG.copy()
30 |         # Draw star:
31 |         cv.circle(temp_img, (int(IMG_WIDTH / 2), int(IMG_HT / 2)),
32 |                   STAR_RADIUS, 255, -1)
33 |         # Draw exoplanet
34 |         cv.circle(temp_img, (int(exo_x), int(exo_y)), EXO_RADIUS, 0, -1)
35 |         # Draw moon
36 |         if dt != 0:
37 |             cv.circle(temp_img, (int(moon_x), int(moon_y)), MOON_RADIUS, 0, -1)
38 |         intensity = temp_img.mean()
39 |         cv.putText(temp_img, 'Mean Intensity = {}'.format(intensity), (5, 10),
40 |                    cv.FONT_HERSHEY_PLAIN, 1, 255)
41 |         cv.imshow('Transit', temp_img)
42 |         cv.waitKey(10)
43 |         intensity_samples.append(intensity)
44 |         exo_x = IMG_WIDTH / 2 - (IMG_WIDTH / 2 - 20) * \
45 |                 math.cos(2 * math.pi * dt / (NUM_DAYS) * (1 / 2))
46 |         moon_x = exo_x + \
47 |                  3 * EXO_RADIUS * math.sin(2 * math.pi * dt / NUM_DAYS * (5))
48 |         moon_y = IMG_HT / 2 - \
49 |                  0.25 * EXO_RADIUS * \
50 |                  math.sin(2 * math.pi * dt / NUM_DAYS * (5))
51 |     cv.destroyAllWindows()
52 | 
53 |     return intensity_samples
54 | 
55 | def calc_rel_brightness(intensity_samples):
56 |     """Return list of relative brightness from list of intensity values."""
57 |     rel_brightness = []
58 |     max_brightness = max(intensity_samples)
59 |     for intensity in intensity_samples:
60 |         rel_brightness.append(intensity / max_brightness)
61 |     return rel_brightness
62 | 
63 | def plot_light_curve(rel_brightness):
64 |     """Plot changes in relative brightness vs. time."""
65 |     plt.plot(rel_brightness, color='red', linestyle='dashed',
66 |              linewidth=2, label='Relative Brightness')
67 |     plt.legend(loc='upper center')
68 |     plt.title('Relative Brightness vs. Time')
69 |     plt.show()
70 | 
71 | if __name__ == '__main__':
72 |     main()
73 | 
--------------------------------------------------------------------------------
/Chapter_8/practice_tabbys_star.py:
--------------------------------------------------------------------------------
1 | """Simulate transit of alien array and plot light curve."""
2 | import numpy as np
3 | import cv2 as cv
4 | import matplotlib.pyplot as plt
5 | 
6 | IMG_HT = 400
7 | IMG_WIDTH = 500
8 | BLACK_IMG = np.zeros((IMG_HT, IMG_WIDTH), dtype='uint8')
9 | STAR_RADIUS = 165
10 | EXO_DX = 3
11 | EXO_START_X = -250
12 | EXO_START_Y = 150
13 | NUM_FRAMES = 500
14 | 
15 | def main():
16 |     intensity_samples = record_transit(EXO_START_X, EXO_START_Y)
17 |     rel_brightness = calc_rel_brightness(intensity_samples)
18 |     plot_light_curve(rel_brightness)
19 | 
20 | def record_transit(exo_x, exo_y):
21 |     """Draw array transiting star and return list of intensity changes."""
22 |     intensity_samples = []
23 |     for _ in range(NUM_FRAMES):
24 |         temp_img = BLACK_IMG.copy()
25 |         # Draw star:
26 |         cv.circle(temp_img, (int(IMG_WIDTH / 2), int(IMG_HT / 2)),
27 |                   STAR_RADIUS, 255, -1)
28 |         # Draw alien array:
29 |         cv.rectangle(temp_img, (exo_x, exo_y),
30 |                      (exo_x + 20, exo_y + 140), 0, -1)
31 |         cv.rectangle(temp_img, (exo_x - 360, exo_y),
32 |                      (exo_x + 10, exo_y + 140), 0, 5)
33 |         cv.rectangle(temp_img, (exo_x - 380, exo_y),
34 |                      (exo_x - 310, exo_y + 140), 0, -1)
35 |         intensity = temp_img.mean()
36 |         cv.putText(temp_img, 'Mean Intensity = {}'.format(intensity), (5, 390),
37 |                    cv.FONT_HERSHEY_PLAIN, 1, 255)
38 |         cv.imshow('Transit', temp_img)
39 |         cv.waitKey(10)
40 |         intensity_samples.append(intensity)
41 |         exo_x += EXO_DX
42 |     return intensity_samples
43 | 
44 | def calc_rel_brightness(intensity_samples):
45 |     """Return list of relative brightness from list of intensity values."""
46 |     rel_brightness = []
47 |     max_brightness = max(intensity_samples)
48 |     for intensity in intensity_samples:
49 |         rel_brightness.append(intensity / max_brightness)
50 |     return rel_brightness
51 | 
52 | def plot_light_curve(rel_brightness):
53 |     """Plot changes in relative brightness vs. time."""
54 |     plt.plot(rel_brightness, color='red', linestyle='dashed',
55 |              linewidth=2)
56 |     plt.title('Relative Brightness vs. Time')
57 |     plt.xlim(-150, 500)
58 |     plt.show()
59 | 
60 | if __name__ == '__main__':
61 |     main()
62 | 
--------------------------------------------------------------------------------
/Chapter_8/transit.py:
--------------------------------------------------------------------------------
1 | """Simulate transit of exoplanet, plot light curve, estimate planet radius."""
2 | import math
3 | import numpy as np
4 | import cv2 as cv
5 | import matplotlib.pyplot as plt
6 | 
7 | IMG_HT = 400
8 | IMG_WIDTH = 500
9 | BLACK_IMG = np.zeros((IMG_HT, IMG_WIDTH), dtype='uint8')
10 | STAR_RADIUS = 165
11 | EXO_RADIUS = 7
12 | EXO_DX = 3
13 | EXO_START_X = 40
14 | EXO_START_Y = 230
15 | NUM_FRAMES = 145
16 | 
17 | def main():
18 |     intensity_samples = record_transit(EXO_START_X, EXO_START_Y)
19 |     relative_brightness = calc_rel_brightness(intensity_samples)
20 |     print('\nestimated exoplanet radius = {:.2f}\n'
21 |           .format(STAR_RADIUS * math.sqrt(max(relative_brightness)
22 |                                           - min(relative_brightness))))
23 |     plot_light_curve(relative_brightness)
24 | 
25 | def record_transit(exo_x, exo_y):
26 |     """Draw planet transiting star and return list of intensity changes."""
27 |     intensity_samples = []
28 |     for _ in range(NUM_FRAMES):
29 |         temp_img = BLACK_IMG.copy()
30 |         # Draw star:
31 |         cv.circle(temp_img, (int(IMG_WIDTH / 2), int(IMG_HT / 2)),
32 |                   STAR_RADIUS, 255, -1)
33 |         # Draw exoplanet:
34 |         cv.circle(temp_img, (exo_x, exo_y), EXO_RADIUS, 0, -1)
35 |         intensity = temp_img.mean()
36 |         cv.putText(temp_img, 'Mean Intensity = {}'.format(intensity), (5, 390),
37 |                    cv.FONT_HERSHEY_PLAIN, 1, 255)
38 |         cv.imshow('Transit', temp_img)
39 |         cv.waitKey(30)
40 |         intensity_samples.append(intensity)
41 |         exo_x += EXO_DX
42 |     return intensity_samples
43 | 
44 | def calc_rel_brightness(intensity_samples):
45 |     """Return list of relative brightness from list of intensity values."""
46 |     rel_brightness = []
47 |     max_brightness = max(intensity_samples)
48 |     for intensity in intensity_samples:
49 |         rel_brightness.append(intensity / max_brightness)
50 |     return rel_brightness
51 | 
52 | def plot_light_curve(rel_brightness):
53 |     """Plot changes in relative brightness vs. time."""
54 |     plt.plot(rel_brightness, color='red', linestyle='dashed',
55 |              linewidth=2, label='Relative Brightness')
56 |     plt.legend(loc='upper center')
57 |     plt.title('Relative Brightness vs. Time')
58 |     plt.show()
59 | 
60 | if __name__ == '__main__':
61 |     main()
62 | 
--------------------------------------------------------------------------------
/Chapter_9/corridor_5/frame01.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_9/corridor_5/frame01.PNG
--------------------------------------------------------------------------------
/Chapter_9/corridor_5/frame02.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_9/corridor_5/frame02.png
--------------------------------------------------------------------------------
/Chapter_9/corridor_5/frame03.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_9/corridor_5/frame03.png
--------------------------------------------------------------------------------
/Chapter_9/corridor_5/frame04.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_9/corridor_5/frame04.png -------------------------------------------------------------------------------- /Chapter_9/corridor_5/frame05.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_9/corridor_5/frame05.png -------------------------------------------------------------------------------- /Chapter_9/corridor_5/frame06.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_9/corridor_5/frame06.png -------------------------------------------------------------------------------- /Chapter_9/corridor_5/frame07.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_9/corridor_5/frame07.png -------------------------------------------------------------------------------- /Chapter_9/corridor_5/frame08.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_9/corridor_5/frame08.png -------------------------------------------------------------------------------- /Chapter_9/corridor_5/frame09.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_9/corridor_5/frame09.png -------------------------------------------------------------------------------- /Chapter_9/empty_corridor.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_9/empty_corridor.png -------------------------------------------------------------------------------- /Chapter_9/gunfire.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_9/gunfire.wav -------------------------------------------------------------------------------- /Chapter_9/practice_blur.py: -------------------------------------------------------------------------------- 1 | import cv2 as cv 2 | 3 | path = "C:/Python372/Lib/site-packages/cv2/data/" 4 | face_cascade = cv.CascadeClassifier(path + 'haarcascade_frontalface_alt.xml') 5 | 6 | cap = cv.VideoCapture(0) 7 | 8 | while True: 9 | _, frame = cap.read() 10 | face_rects = face_cascade.detectMultiScale(frame, scaleFactor=1.2, 11 | minNeighbors=3) 12 | 13 | for (x, y, w, h) in face_rects: 14 | face = cv.blur(frame[y:y + h, x:x + w], (25, 25)) 15 | frame[y:y + h, x:x + w] = face 16 | cv.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2) 17 | 18 | cv.imshow('frame', frame) 19 | if cv.waitKey(1) & 0xFF == ord('q'): 20 | break 21 | 22 | cap.release() 23 | cv.destroyAllWindows() 24 | -------------------------------------------------------------------------------- /Chapter_9/sentry.py: -------------------------------------------------------------------------------- 1 | import os 2 | import time 3 | from datetime import datetime 4 | from playsound import playsound 5 | import pyttsx3 6 | import cv2 as cv 7 | 8 | # Set up warning audio. 9 | engine = pyttsx3.init() 10 | engine.setProperty('rate', 145) # Fast but clear. 11 | engine.setProperty('volume', 1.0) # Max is 1.0. 12 | 13 | # Set up audio files.
14 | root_dir = os.path.abspath('.') 15 | gunfire_path = os.path.join(root_dir, 'gunfire.wav') 16 | tone_path = os.path.join(root_dir, 'tone.wav') 17 | 18 | # Set up Haar cascades for face detection. 19 | path = "C:/Python372/Lib/site-packages/cv2/data/" 20 | face_cascade = cv.CascadeClassifier(path + 'haarcascade_frontalface_default.xml') 21 | eye_cascade = cv.CascadeClassifier(path + 'haarcascade_eye.xml') 22 | 23 | # Set up corridor images. 24 | os.chdir('corridor_5') 25 | contents = sorted(os.listdir()) 26 | 27 | # Detect faces and fire or disable gun. 28 | for image in contents: 29 | print(f"\nMotion detected...{datetime.now()}") 30 | discharge_weapon = True 31 | engine.say("You have entered an active fire zone. \ 32 | Stop and face the gun immediately. \ 33 | When you hear the tone, you have 5 seconds to pass.") 34 | engine.runAndWait() 35 | time.sleep(3) 36 | 37 | img_gray = cv.imread(image, cv.IMREAD_GRAYSCALE) 38 | height, width = img_gray.shape 39 | cv.imshow(f'Motion detected {image}', img_gray) 40 | cv.waitKey(2000) 41 | cv.destroyWindow(f'Motion detected {image}') 42 | 43 | # Find face rectangles. 
44 | face_rect_list = [] 45 | face_rect_list.append(face_cascade.detectMultiScale(image=img_gray, 46 | scaleFactor=1.1, 47 | minNeighbors=5)) 48 | print(f"Searching {image} for eyes.") 49 | for rect in face_rect_list: 50 | for (x, y, w, h) in rect: 51 | rect_4_eyes = img_gray[y:y+h, x:x+w] 52 | eyes = eye_cascade.detectMultiScale(image=rect_4_eyes, 53 | scaleFactor=1.05, 54 | minNeighbors=2) 55 | for (xe, ye, we, he) in eyes: 56 | print("Eye detected.") 57 | center = (int(xe + 0.5 * we), int(ye + 0.5 * he)) 58 | radius = int((we + he) / 3) 59 | cv.circle(rect_4_eyes, center, radius, 255, 2) 60 | cv.rectangle(img_gray, (x, y), (x+w, y+h), (255, 255, 255), 2) 61 | discharge_weapon = False 62 | break 63 | 64 | if not discharge_weapon: 65 | playsound(tone_path, block=False) 66 | cv.imshow('Detected Faces', img_gray) 67 | cv.waitKey(2000) 68 | cv.destroyWindow('Detected Faces') 69 | time.sleep(5) 70 | 71 | else: 72 | print(f"No face in {image}. Discharging weapon!") 73 | cv.putText(img_gray, 'FIRE!', (int(width / 2) - 20, int(height / 2)), 74 | cv.FONT_HERSHEY_PLAIN, 3, 255, 3) 75 | playsound(gunfire_path, block=False) 76 | cv.imshow('Mutant', img_gray) 77 | cv.waitKey(2000) 78 | cv.destroyWindow('Mutant') 79 | time.sleep(3) # To delay loading next image... 80 | 81 | engine.stop() # Optional. 82 | -------------------------------------------------------------------------------- /Chapter_9/sentry_for_mac_bug.py: -------------------------------------------------------------------------------- 1 | import os 2 | import time 3 | from datetime import datetime 4 | from playsound import playsound 5 | import pyttsx3 6 | import cv2 as cv 7 | 8 | # Set up audio files. 9 | root_dir = os.path.abspath('.') 10 | gunfire_path = os.path.join(root_dir, 'gunfire.wav') 11 | tone_path = os.path.join(root_dir, 'tone.wav') 12 | 13 | # Set up Haar cascades for face detection.
14 | path = "C:/Python372/Lib/site-packages/cv2/data/" 15 | face_cascade = cv.CascadeClassifier(path + 'haarcascade_frontalface_alt.xml') 16 | face2_cascade = cv.CascadeClassifier(path + 'haarcascade_frontalface_alt2.xml') 17 | eye_cascade = cv.CascadeClassifier(path + 'haarcascade_eye.xml') 18 | 19 | # Set up corridor images. 20 | os.chdir('corridor_5') 21 | contents = sorted(os.listdir()) 22 | 23 | # Detect faces and fire or disable gun. 24 | for image in contents: 25 | print(f"\nMotion detected...{datetime.now()}") 26 | discharge_weapon = True 27 | os.system("say 'You have entered an active fire zone. \ 28 | Stop and face the gun immediately. \ 29 | When you hear the tone, you have 5 seconds to pass.' &") 30 | time.sleep(6) 31 | 32 | img_gray = cv.imread(image, cv.IMREAD_GRAYSCALE) 33 | height, width = img_gray.shape 34 | cv.imshow(f'Motion detected {image}', img_gray) 35 | cv.waitKey(2000) 36 | cv.destroyWindow(f'Motion detected {image}') 37 | 38 | # Find face rectangles. 39 | face_rect_list = [] 40 | face_rect_list.append(face_cascade.detectMultiScale(image=img_gray, 41 | scaleFactor=1.2, 42 | minNeighbors=5)) 43 | face_rect_list.append(face2_cascade.detectMultiScale(image=img_gray, 44 | scaleFactor=1.2, 45 | minNeighbors=5)) 46 | 47 | print(f"Searching {image} for eyes.") 48 | for rect in face_rect_list: 49 | for (x, y, w, h) in rect: 50 | rect_4_eyes = img_gray[y:y+h, x:x+w] 51 | eyes = eye_cascade.detectMultiScale(image=rect_4_eyes, 52 | scaleFactor=1.05, 53 | minNeighbors=2) 54 | for (xe, ye, we, he) in eyes: 55 | print("Eyes detected.") 56 | center = (int(xe + 0.5 * we), int(ye + 0.5 * he)) 57 | radius = int(0.3 * (we + he)) 58 | cv.circle(rect_4_eyes, center, radius, 255, 2) 59 | cv.rectangle(img_gray, (x, y), (x+w, y+h), (255, 255, 255), 2) 60 | discharge_weapon = False 61 | break 62 | 63 | if not discharge_weapon: 64 | time.sleep(2) 65 | playsound(tone_path, block=False) 66 | cv.imshow('Detected Faces', img_gray) 67 | cv.waitKey(2000) 68 |
cv.destroyWindow('Detected Faces') 69 | time.sleep(5) 70 | 71 | else: 72 | time.sleep(2) 73 | print(f"No face in {image}. Discharging weapon!") 74 | cv.putText(img_gray, 'FIRE!', (int(width / 2) - 20, int(height / 2)), 75 | cv.FONT_HERSHEY_PLAIN, 3, 255, 3) 76 | playsound(gunfire_path, block=False) 77 | cv.imshow('Mutant', img_gray) 78 | cv.waitKey(2000) 79 | cv.destroyWindow('Mutant') 80 | time.sleep(5) # To delay loading next image... 81 | -------------------------------------------------------------------------------- /Chapter_9/tone.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rlvaugh/Real_World_Python/efbd54987f97c6b4d8d859b99f9520b350d97003/Chapter_9/tone.wav -------------------------------------------------------------------------------- /Chapter_9/video_face_detect.py: -------------------------------------------------------------------------------- 1 | """Detect faces in video capture using Haar Cascade.""" 2 | import cv2 as cv 3 | 4 | # Path to OpenCV's Haar Cascades 5 | path = "C:/Python372/Lib/site-packages/cv2/data/" 6 | face_cascade = cv.CascadeClassifier(path + 'haarcascade_frontalface_alt.xml') 7 | 8 | cap = cv.VideoCapture(0) 9 | 10 | while True: 11 | # Capture frame-by-frame 12 | _, frame = cap.read() 13 | face_rects = face_cascade.detectMultiScale(frame, scaleFactor=1.2, 14 | minNeighbors=4) 15 | 16 | for (x, y, w, h) in face_rects: 17 | cv.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2) 18 | 19 | # Display the resulting frame 20 | cv.imshow('frame', frame) 21 | if cv.waitKey(1) & 0xFF == ord('q'): 22 | break 23 | 24 | # Release the capture 25 | cap.release() 26 | cv.destroyAllWindows() 27 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Real World Python 2 | 3 | This repository contains the source code and supporting files for the book 
*Real World Python: A Hacker’s Guide to Solving Problems with Code* by Lee Vaughan. The files are organized by chapter. Each code listing in the book references a corresponding file name in this repository. 4 | 5 | ![image](https://user-images.githubusercontent.com/31315095/86478311-b3fbbc80-bd0f-11ea-88a6-1db9dca5a5dd.png) 6 | 7 | ## Get the Book 8 | The book is available at retail bookstores like Barnes & Noble and from online sellers like https://www.amazon.com/. 9 | A print/eBook bundle can be purchased directly from the publisher at https://nostarch.com/real-world-python/. 10 | 11 | ## Download the Chapters 12 | To download the chapter folders, use the green “Code” button near the top of the repository code page. 13 | 14 | ![image](https://user-images.githubusercontent.com/31315095/86478653-31bfc800-bd10-11ea-80fa-388234db9282.png) 15 | 16 | ## Versioning 17 | The book uses Python 3.7. 18 | 19 | ## Errata 20 | For updates and errata, visit https://nostarch.com/real-world-python/. Please report typos and other issues to errata@nostarch.com. 21 | 22 | ## Project Gutenberg 23 | These source files include ebooks from Project Gutenberg. These ebooks are for the use of anyone anywhere in the United States and most other parts of the world at no cost and with almost no restrictions whatsoever. You may copy them, give them away or re-use them under the terms of the Project Gutenberg License included with the eBooks or online at www.gutenberg.org. If you are not located in the United States, you'll have to check the laws of the country where you are located before using the ebooks. 
24 | 25 | ## Other Books by the Author 26 | [*Impractical Python Projects: Playful Programming Activities to Make You Smarter*](https://nostarch.com/impracticalpythonprojects) 27 | 28 | [*Python Tools for Scientists: An Introduction to Using Anaconda, JupyterLab, and Python's Scientific Libraries*](https://a.co/d/8HeRgzs) 29 | --------------------------------------------------------------------------------