├── resume.pdf ├── README.md ├── get_links.py └── apply.py /resume.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11bender/common-intern/master/resume.pdf -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | Job Application Bot (3.0) 2 | 3 | A script to automatically search Glassdoor for job listings, aggregate every application URL, and apply to each job using pre-populated data. ***All with one click!*** 4 | 5 | 6 | **📸YouTube Tutorial: [https://youtu.be/N_7d8vg_TQA](https://youtu.be/N_7d8vg_TQA)** 7 | 8 | ## Inspiration 9 | Ever sit at your desk for hours, clicking through endless job listings hoping to strike gold with one response? To solve this, I made a script a few months ago, which would take in a list of job URLs and automatically apply to potentially 100s of jobs with the click of a button. This was great, but there was one problem — the process of aggregating those links is painstaking. So, I wanted to automate that process with this project! ✨ 10 | This works the same as the harshibar's however, you're not restricted to jobs hosted by lever or greenhouse. You can apply to any easy apply job on glassdoor. (For research purpose only) 11 | 12 | 13 | ## Additions 14 | I've added the easyapply feature already, in the future I hope to add: 15 | * upload csv's of applied jobs data to postgresql, for data analysis 16 | * use gmail api to automatically filter the many, many rejection emails 17 | * use regex to classify jobs and be more selective in which I apply to. 18 | * match the post to resume/cover letter regex+latex 19 | 20 | Add mutiprocessing: 21 | ``` 22 | from multiprocessing import Process 23 | from selenium import webdriver 24 | from selenium.webdriver.chrome.options import Options 25 | 26 | class processTest(Process) 27 | def __init__(self,topic): 28 | Process.__init__(self) 29 | self.topic = topic 30 | self.start() 31 | 32 | def run(self): 33 | options = Options() 34 | options.add_experimental_option("detach",True) 35 | self.driver = (webdriver.Chrome('YourPathToChromeDriver',options=options)) 36 | print("Im a search process for "+str(self.driver)) 37 | self.driver.get("https://www.google.com") 38 | seach_box = self.driver.find_element_by_xpath('//*[@id="tsf"]/div[2]/div[1]/div[1]/div/div[2]/input') 39 | search_box.send_keys(self.topic) 40 | seach_box.submit() 41 | 42 | 43 | if __name__ = '__main__': 44 | search_topics=["snakes","python","cats","cat pictures","rolex","omega"] 45 | for topic in search_topics: 46 | processTest(topic) 47 | ``` 48 | 49 | ## Installation 50 | 1. Install [ChromeDriver](https://sites.google.com/a/chromium.org/chromedriver/) (or an alternatie driver for your browser of choice): 51 | * Run `brew cask install chromedriver` 52 | * Confirm installation: `chromedriver --version` 53 | * Check location of ChromeDriver: `which chromedriver` 54 | * Wherever the `driver` is initialized in the code, insert the ChromeDriver location 55 | 2. Install Selenium: `pip install selenium` 56 | 3. Install BeautifulSoup: `pip install beautifulsoup4` 57 | 58 | ## Usage 59 | #### To test `get_links.py` 60 | 1. Uncomment the last line `get_links.py` 61 | 2. Run `$ python get_links.py` 62 | 63 | #### To run the entire script: 64 | 1. Set a number of pages you'd like to iterate through here 65 | 2. Run `$ python apply.py` 66 | 3. The script will open [glassdoor.com](https://www.glassdoor.com/index.htm), at which point you should log-in 67 | 4. From there on, everything is automatic! 68 | 69 | 70 | ## Thanks 71 | 72 | * [Selenium](https://selenium-python.readthedocs.io/) - A tool designed for QA testing, but that actually works great for making these types of bots 73 | * [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/doc) - A tool to scrape HTML/XML content (that saved be *big time* with this project) 74 | 75 | ## Learn More 76 | 77 | * [My Previous Video](https://www.youtube.com/watch?v=nRmrEC5WnzY) - For more background on the `apply.py` code 78 | 79 | ## License 80 | 81 | This project is licensed under the MIT License - see the [LICENSE.md](https://github.com/harshibar/5-python-projects/blob/master/LICENSE) file for details. 82 | -------------------------------------------------------------------------------- /get_links.py: -------------------------------------------------------------------------------- 1 | # selenium stup 2 | from selenium import webdriver 3 | from selenium.webdriver.common.by import By 4 | from selenium.webdriver.support.ui import WebDriverWait 5 | from selenium.webdriver.support import expected_conditions as EC 6 | from selenium.common.exceptions import TimeoutException, NoSuchElementException 7 | from selenium.webdriver.common.keys import Keys 8 | from selenium.webdriver.common.action_chains import ActionChains 9 | 10 | # to find links 11 | from bs4 import BeautifulSoup 12 | import json 13 | import urllib.request 14 | import re 15 | 16 | import time # to sleep 17 | 18 | # fill this in with your job preferences! 19 | PREFERENCES = { 20 | "position_title": "", 21 | "location": " ", 22 | "username": "", 23 | "password": "" 24 | } 25 | 26 | def login(driver): 27 | #login to glassdoor 28 | driver.get('https://www.glassdoor.com/profile/login_input.htm?userOriginHook=HEADER_SIGNIN_LINK') 29 | driver.maximize_window() 30 | 31 | username_field = driver.find_element_by_xpath("//*[@id='userEmail']") 32 | password_field = driver.find_element_by_xpath("//*[@id='userPassword']") 33 | 34 | username_field.clear() 35 | password_field.clear() 36 | 37 | username_field.send_keys(PREFERENCES['username']) 38 | password_field.send_keys(PREFERENCES['password']) 39 | 40 | driver.find_element_by_xpath("//*[@id='InlineLoginModule']/div/div/div/div/div[3]/form/div[3]/div[1]").click() 41 | 42 | return True 43 | 44 | # navigate to appropriate job listing page 45 | def go_to_listings(driver): 46 | 47 | # wait for the search bar to appear 48 | element = WebDriverWait(driver, 20).until( 49 | EC.presence_of_element_located((By.XPATH, "//*[@id='scBar']")) 50 | ) 51 | 52 | try: 53 | # look for search bar fields 54 | position_field = driver.find_element_by_xpath("//*[@id='sc.keyword']") 55 | location_field = driver.find_element_by_xpath("//*[@id='sc.location']") 56 | location_field.clear() 57 | 58 | # fill in with pre-defined data 59 | position_field.send_keys(PREFERENCES['position_title']) 60 | location_field.clear() 61 | location_field.send_keys(PREFERENCES['location']) 62 | 63 | # wait for a little so location gets set 64 | time.sleep(1) 65 | driver.find_element_by_xpath(" //*[@id='scBar']/div/button").click() 66 | 67 | # close a random popup if it shows up 68 | try: 69 | driver.find_element_by_xpath("//*[@id='JAModal']/div/div[2]/span").click() 70 | except NoSuchElementException: 71 | pass 72 | 73 | time.sleep(3) 74 | 75 | 76 | try: 77 | driver.find_element_by_xpath("//*[@id='JAModal']/div/div[2]/span").click() 78 | except NoSuchElementException: 79 | pass 80 | 81 | #Adding Filters to the search results 82 | try: 83 | 84 | remote = "//*[@id='dynamicFiltersContainer']/div/div[1]/div[2]/div[2]/div[15]/label" 85 | 86 | more_menu = driver.find_element_by_xpath("//*[@id='dynamicFiltersContainer']/div/div[1]/div[2]") 87 | more_menu.click() 88 | time.sleep(.5) 89 | 90 | driver.find_element_by_xpath(remote).click() 91 | 92 | time.sleep(1) 93 | easy_apply = driver.find_element_by_xpath("//*[@id='dynamicFiltersContainer']/div/div[1]/div[2]/div[2]/div[14]/label/div") 94 | easy_apply.click() 95 | 96 | time.sleep(1) 97 | 98 | driver.find_element_by_xpath(remote).click() 99 | 100 | 101 | element = WebDriverWait(driver, 20).until( 102 | EC.presence_of_element_located((By.XPATH, "//*[@id='MainCol']/div[1]/ul")) 103 | ) 104 | 105 | time.sleep(2) 106 | 107 | 108 | more_menu.click() 109 | 110 | 111 | #Intern 112 | driver.find_element_by_xpath("//*[@id='filter_jobType']").click() 113 | time.sleep(.3) 114 | driver.find_element_by_xpath("//*[@id='PrimaryDropdown']/ul/li[5]/span[1]").click() 115 | 116 | time.sleep(.5) 117 | 118 | except NoSuchElementException: 119 | pass 120 | 121 | #driver.minimize_window() 122 | 123 | return True 124 | 125 | # note: please ignore all crappy error handling haha 126 | except NoSuchElementException: 127 | return False 128 | 129 | # aggregate all url links in a set 130 | def aggregate_links(driver): 131 | allLinks = [] # all hrefs that exist on the page 132 | 133 | # wait for page to fully load 134 | element = WebDriverWait(driver, 20).until( 135 | EC.presence_of_element_located((By.XPATH, "//*[@id='MainCol']/div[1]/ul")) 136 | ) 137 | 138 | 139 | 140 | 141 | # parse the page source using beautiful soup 142 | page_source = driver.page_source 143 | soup = BeautifulSoup(page_source) 144 | 145 | # find all hrefs 146 | allJobLinks = soup.findAll("a", {"class": "jobLink"}) 147 | allLinks = [jobLink['href'] for jobLink in allJobLinks] 148 | allLinks = list(dict.fromkeys(list(allLinks))) 149 | 150 | allFixedLinks = [] 151 | 152 | # clean up the job links by opening, modifying, and 'unraveling' the URL 153 | for link in allLinks: 154 | # first, replace GD_JOB_AD with GD_JOB_VIEW 155 | link = link.replace("GD_JOB_AD", "GD_JOB_VIEW") 156 | 157 | # if there is no glassdoor prefex, add that 158 | # for example, /partner/jobListing.htm?pos=121... needs the prefix 159 | 160 | if link[0] == '/': 161 | link = f"https://www.glassdoor.com{link}" 162 | 163 | 164 | #for glassdoor links no redirect happens 165 | newLink = link 166 | allFixedLinks.append(newLink) 167 | 168 | # convert to a set to eliminate duplicates 169 | return set(allFixedLinks) 170 | 171 | # 'main' method to iterate through all pages and aggregate URLs 172 | def getURLs(): 173 | driver = webdriver.Chrome(executable_path='/home/picklesueat/Downloads/chromedriver_linux64/chromedriver') 174 | success = login(driver) 175 | if not success: 176 | # close the page if it gets stuck at some point - this logic can be improved 177 | driver.close() 178 | 179 | success = go_to_listings(driver) 180 | if not success: 181 | driver.close() 182 | 183 | allLinks = set() 184 | page = 1 185 | next_url = '' 186 | while page < 2: # pick an arbitrary number of pages so this doesn't run infinitely 187 | print(f'\nNEXT PAGE #: {page}\n') 188 | 189 | # on the first page, the URL is unique and doesn't have a field for the page number 190 | if page < 4: 191 | # aggregate links on first page 192 | allLinks.update(aggregate_links(driver)) 193 | 194 | driver.find_element_by_xpath("//*[@id='FooterPageNav']/div/ul/li["+str(page + 2)+"]").click() 195 | 196 | 197 | 198 | 199 | time.sleep(.75) 200 | try: 201 | driver.find_element_by_xpath("//*[@id='JAModal']/div/div[2]/span").click() 202 | except Exception: 203 | pass 204 | 205 | page += 1 206 | 207 | # same patterns from page 3 onwards 208 | if page > 3 : 209 | allLinks.update(aggregate_links(driver)) 210 | driver.find_element_by_xpath("//*[@id='FooterPageNav']/div/ul/li[5]").click() 211 | 212 | 213 | time.sleep(.75) 214 | try: 215 | driver.find_element_by_xpath("//*[@id='JAModal']/div/div[2]/span").click() 216 | except Exception: 217 | pass 218 | 219 | page += 1 220 | 221 | driver.close() 222 | print(len(allLinks)) 223 | return allLinks 224 | 225 | # for testing purpose 226 | #getURLs() 227 | 228 | 229 | -------------------------------------------------------------------------------- /apply.py: -------------------------------------------------------------------------------- 1 | import os # to get the resume file 2 | import time # to sleep 3 | import glassdoor_test 4 | import csv 5 | import re 6 | 7 | 8 | 9 | 10 | 11 | 12 | # Fill in this dictionary with your personal details! 13 | JOB_APP = { 14 | "first_name": "Joe", 15 | "last_name": "Mother", 16 | "email": "", 17 | "phone": "", 18 | "org": "", 19 | "resume": "resume.pdf", 20 | "resume_textfile": "", 21 | "linkedin": "", 22 | "website": "", 23 | "github": "", 24 | "twitter": "", 25 | "location": "Chicago, IL, United States", 26 | "city": "", 27 | "state": "IL", 28 | "ZIP":"", 29 | "address":"", 30 | "grad_month": '05', 31 | "grad_year": '2020', 32 | "pay": '60000', 33 | "university": "", 34 | "username": "", 35 | "password": "" 36 | } 37 | 38 | #fills out all fields in a glassdoor easy apply job 39 | def glassdoor(driver): 40 | #login 41 | login(driver) 42 | time.sleep(1) 43 | 44 | #can't yet tell if you've actually applied 45 | sucsess = True 46 | 47 | try: 48 | #bring up easy apply form 49 | driver.find_element_by_xpath("//*[@id='JobView']/div[1]/div[2]/div/div/div[2]/div[1]/div[2]/div[1]/div[1]/div[1]/button").click() 50 | 51 | except NoSuchElementException: 52 | pass 53 | 54 | time.sleep(1) 55 | 56 | #resume must come first because it deletes everything else, and we don't know where it is always 57 | resume(driver) 58 | 59 | #any extra questions, can also be matched to your needs 60 | try: 61 | for i in range(15): 62 | glassdoor_easy_questions(driver, divnum= i +1) 63 | except Exception: 64 | pass 65 | 66 | try: 67 | #noads 68 | driver.find_element_by_xpath("//*[@id='ApplySection']/div[2]/label/div").click() 69 | 70 | except NoSuchElementException: 71 | pass 72 | 73 | 74 | 75 | try: 76 | #noads2 77 | driver.find_element_by_xpath("//*[@id='ApplySection']/div[3]/label/div").click() 78 | 79 | except NoSuchElementException: 80 | pass 81 | 82 | 83 | try: 84 | #apply 85 | driver.find_element_by_xpath("//*[@id='ApplyContainer']/div/div[2]/div/div[2]/div/div[2]/form/div[2]/div[2]/div[2]/button").click() 86 | 87 | time.sleep(.5) 88 | 89 | except NoSuchElementException: 90 | sucsess = False 91 | 92 | 93 | 94 | 95 | def glassdoor_easy_questions(driver,divnum): 96 | #for each question number we cycle through if statements to try and figure out what kind of question it is, and answer 97 | try: 98 | quest = driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]").text 99 | 100 | time.sleep(.2) 101 | 102 | if(re.search("authorized", quest, re.IGNORECASE)): 103 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/label[1]/div[1]").click() 104 | 105 | elif(re.search("sponsorship", quest, re.IGNORECASE)): 106 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/label[2]/div[1]").click() 107 | 108 | elif(re.search("Bachelor's", quest, re.IGNORECASE)): 109 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/label[1]/div[1]").click() 110 | 111 | elif(re.search("experience", quest, re.IGNORECASE)): 112 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/label[2]/div[1]").click() 113 | 114 | elif(re.search("loca", quest, re.IGNORECASE)): 115 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/label[2]/div[1]").click() 116 | 117 | elif(re.search("[Ee]mail", quest, re.IGNORECASE)): 118 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").clear() 119 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").send_keys(JOB_APP['email']) 120 | 121 | elif(re.search("name", quest, re.IGNORECASE)): 122 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").clear() 123 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").send_keys(JOB_APP['first_name'] + " " + JOB_APP['last_name']) 124 | 125 | if(re.search("first.*name", quest, re.IGNORECASE)): 126 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").clear() 127 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").send_keys(JOB_APP['first_name']) 128 | 129 | 130 | elif(re.search("last.*name", quest, re.IGNORECASE)): 131 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").clear() 132 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").send_keys(JOB_APP['last_name']) 133 | 134 | 135 | elif(re.search("phone.*number", quest, re.IGNORECASE)): 136 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").clear() 137 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").send_keys(JOB_APP['phone']) 138 | 139 | elif(re.search("state", quest, re.IGNORECASE)): 140 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").clear() 141 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").send_keys(JOB_APP['state']) 142 | 143 | elif(re.search("city", quest, re.IGNORECASE)): 144 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").clear() 145 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").send_keys(JOB_APP['city']) 146 | 147 | elif(re.search("zip", quest, re.IGNORECASE) or re.search("postal", quest, re.IGNORECASE)): 148 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").clear() 149 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").send_keys(JOB_APP['ZIP']) 150 | 151 | elif(re.search("address", quest, re.IGNORECASE)): 152 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").clear() 153 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").send_keys(JOB_APP['address']) 154 | 155 | elif(re.search("Linked.?In", quest, re.IGNORECASE)): 156 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").clear() 157 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").send_keys(JOB_APP['linkedin']) 158 | 159 | elif(re.search("[cC]ountry", quest, re.IGNORECASE)): 160 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/div").click() 161 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/ul/li[2]").click() 162 | 163 | elif(re.search("Create a job alert", quest, re.IGNORECASE)): 164 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/div").click() 165 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/ul/li[2]").click() 166 | 167 | 168 | elif(re.search("[dD]esired.*[pP]ay", quest, re.IGNORECASE) or re.search("salary", quest, re.IGNORECASE)): 169 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").clear() 170 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(divnum)+"]/div/input").send_keys(JOB_APP['pay']) 171 | 172 | 173 | except NoSuchElementException: 174 | pass 175 | 176 | 177 | def resume(driver): 178 | try: 179 | for i in range(15): 180 | quest = driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(i+1)+"]").text 181 | if(re.search("resume", quest, re.IGNORECASE)): 182 | driver.find_element_by_xpath("//*[@id='ApplyQuestions']/div["+str(i+1)+"]/div[2]/a").click() 183 | time.sleep(.25) 184 | driver.find_element_by_xpath("//*[@id='file']").send_keys(os.getcwd()+"/resume.pdf") 185 | time.sleep(.25) 186 | break 187 | 188 | 189 | except NoSuchElementException: 190 | pass 191 | 192 | 193 | 194 | 195 | 196 | def login(driver): 197 | 198 | 199 | 200 | try: 201 | #login 202 | driver.maximize_window() 203 | driver.find_element_by_xpath("//*[@id='TopNav']/nav/div[2]/ul[3]/li[2]/a").click() 204 | 205 | time.sleep(.5) 206 | 207 | #login 208 | username_field = driver.find_element_by_xpath("//*[@id='userEmail']") 209 | password_field = driver.find_element_by_xpath("//*[@id='userPassword']") 210 | 211 | username_field.clear() 212 | password_field.clear() 213 | 214 | username_field.send_keys(JOB_APP['username']) 215 | password_field.send_keys(JOB_APP['password']) 216 | 217 | driver.find_element_by_xpath("//*[@id='LoginModal']/div/div/div[2]/div[2]/div[2]/div/div/div/div[3]/form/div[3]/div[1]").click() 218 | 219 | time.sleep(.5) 220 | 221 | except NoSuchElementException: 222 | pass 223 | 224 | 225 | 226 | 227 | 228 | def main(): 229 | driver = webdriver.Chrome(executable_path='add your webdriver path') 230 | 231 | 232 | applied = '' 233 | # call get_links to automatically scrape job listings from glassdoor 234 | aggregatedURLs = get_links.getURLs() 235 | 236 | driver.get(list(aggregatedURLs)[0]) 237 | driver.maximize_window() 238 | login(driver) 239 | 240 | with open('my_file.csv', 'w') as f: 241 | writer = csv.writer(f) 242 | writer.writerow(["Job_title", "Company", "Location" , "Job_text","Outcome"]) 243 | 244 | for url in aggregatedURLs: 245 | print('\n') 246 | driver.get(url) 247 | 248 | #save application data in a csv 249 | try: 250 | text = driver.find_element_by_xpath("//*[@id='JobDescriptionContainer']").text 251 | 252 | job_title = driver.find_element_by_xpath("//*[@id='JobView']/div[1]/div[2]/div/div/div[2]/div[1]/div[1]/div[2]/div/div/div[2]").text 253 | 254 | company = driver.find_element_by_xpath("//*[@id='JobView']/div[1]/div[2]/div/div/div[2]/div[1]/div[1]/div[2]/div/div/div[1]").text 255 | 256 | location = driver.find_element_by_xpath("//*[@id='JobView']/div[1]/div[2]/div/div/div[2]/div[1]/div[1]/div[2]/div/div/div[3]").text 257 | writer.writerow([str(job_title), str(company), str(location), str(text), 000]) 258 | 259 | except Exception: 260 | pass 261 | 262 | 263 | #apply to url 264 | try: 265 | glassdoor(driver) 266 | print(f'SUCCESS FOR: {url}') 267 | applied = applied + url + ',' 268 | except Exception: 269 | print(f"FAILED FOR {url}") 270 | continue 271 | 272 | 273 | f.close() 274 | driver.close() 275 | 276 | 277 | 278 | main() 279 | 280 | 281 | 282 | 283 | 284 | 285 | 286 | 287 | 288 | --------------------------------------------------------------------------------