├── README.md
├── data_in_same_page.py
├── images
│   ├── author_markup.png
│   ├── command_menu.png
│   ├── dynamic_site_no_js.png
│   ├── infinite_scroll.png
│   ├── infinite_scroll_no_js.png
│   ├── json_embedded.png
│   └── libribox.png
├── selenium_bs4.py
├── selenium_bs4_headless.py
└── selenium_example.py

/README.md:
--------------------------------------------------------------------------------

# Scraping Dynamic JavaScript / Ajax Websites With BeautifulSoup: A Complete Tutorial

[![Oxylabs promo code](https://raw.githubusercontent.com/oxylabs/product-integrations/refs/heads/master/Affiliate-Universal-1090x275.png)](https://oxylabs.go2cloud.org/aff_c?offer_id=7&aff_id=877&url_id=112)

[](https://github.com/topics/python) [](https://github.com/topics/javascript)

[![](https://dcbadge.vercel.app/api/server/eWsVUJrnG5)](https://discord.gg/GbxmdGhZjq)

## Table of contents

- [Revisiting BeautifulSoup and Requests](#revisiting-beautifulsoup-and-requests)
- [Is This Website Dynamic or Static?](#is-this-website-dynamic-or-static)
- [Can BeautifulSoup Render `JavaScript`?](#can-beautifulsoup-render-javascript)
- [Scraping Dynamic Web Pages With Selenium](#scraping-dynamic-web-pages-with-selenium)
- [Finding Elements Using Selenium](#finding-elements-using-selenium)
- [Finding Elements Using BeautifulSoup](#finding-elements-using-beautifulsoup)
- [Headless `Browser`](#headless-browser)
- [Web Scraping Dynamic Sites by Locating AJAX Calls](#web-scraping-dynamic-sites-by-locating-ajax-calls)
- [Data Embedded In the Same Page](#data-embedded-in-the-same-page)
- [Data In Other Pages](#data-in-other-pages)

Web scraping most websites is comparatively easy. This topic is already covered at length in [this tutorial](https://github.com/oxylabs/Python-Web-Scraping-Tutorial). There are many sites, however, that cannot be scraped using the same method.
The reason is that these sites load their content dynamically using JavaScript.

This technique is also known as AJAX (Asynchronous JavaScript and XML). Historically, it involved creating an `XMLHttpRequest` object to retrieve XML from a web server without reloading the whole page. These days, this object is rarely used directly. Usually, a wrapper like jQuery is used to retrieve content such as JSON, partial HTML, or even images.

## Revisiting BeautifulSoup and Requests

To scrape a regular web page, at least two libraries are required. The `requests` library downloads the page. Once the page is available as an HTML string, the next step is parsing it into a BeautifulSoup object. This BeautifulSoup object can then be used to find specific data.

Here is a simple example script that prints the text inside the first `small` element with its `class` attribute set to `author`:

```python
import requests
from bs4 import BeautifulSoup

response = requests.get("https://quotes.toscrape.com/")
bs = BeautifulSoup(response.text, "lxml")
author = bs.find("small", class_="author")
if author:
    print(author.text)

## OUTPUT
# Albert Einstein
```

Note that we are working with version 4 of the Beautiful Soup library; earlier versions are discontinued. You may see Beautiful Soup 4 written as Beautiful Soup, BeautifulSoup, or just bs4. They all refer to the same Beautiful Soup 4 library.

The same code will not work if the site is dynamic. For example, the same site has a dynamic version at `https://quotes.toscrape.com/js/` (note *js* at the end of this URL).

```python
response = requests.get("https://quotes.toscrape.com/js/")  # dynamic web page
bs = BeautifulSoup(response.text, "lxml")
author = bs.find("small", class_="author")
if author:
    print(author.text)

## No output
```

The reason is that the second site is dynamic: the data is generated by `JavaScript` after the page loads.

There are two ways to handle sites like this:

- Use a tool like Selenium or Puppeteer to open a real browser and render the dynamic web page.
- Identify the AJAX links that contain the data, and work with those directly.

Both approaches are covered at length in this tutorial.

First, however, we need to understand how to determine whether a site is dynamic.

## Is This Website Dynamic or Static?

Here is the easiest way to determine if a website is dynamic using Chrome or Edge (both of these browsers use Chromium under the hood).

Open Developer Tools by pressing the `F12` key. Ensure that the focus is on Developer Tools, then press the `CTRL+SHIFT+P` key combination to open the Command Menu.

![Command Menu](images/command_menu.png)

It will show a lot of commands. Start typing `disable` and the commands will be filtered down to `Disable JavaScript`. Select this option to disable `JavaScript`.

Now reload the page by pressing `Ctrl+R` or `F5`.

If this is a dynamic site, a lot of the content will disappear:

![Example of Dynamic Site with No JavaScript](images/dynamic_site_no_js.png)

In some cases, sites will still show the data but fall back to basic functionality. For example, this site has an infinite scroll. If JavaScript is disabled, it shows regular pagination instead.
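
The same check can also be done programmatically: if the element you expect is missing from the raw HTML that `requests` receives, the page is probably rendered client-side. Below is a minimal sketch; the `looks_dynamic` helper is our own, and the inline HTML snippets are illustrative stand-ins for real responses:

```python
from bs4 import BeautifulSoup

def looks_dynamic(html: str, selector: str) -> bool:
    """Heuristic: the expected element is absent from the raw HTML."""
    return BeautifulSoup(html, "html.parser").select_one(selector) is None

# Stand-ins for what requests.get(...).text might return
static_html = '<small class="author">Albert Einstein</small>'
dynamic_html = '<script>var data = [{"author": "Albert Einstein"}];</script>'

print(looks_dynamic(static_html, "small.author"))   # False: data is in the HTML
print(looks_dynamic(dynamic_html, "small.author"))  # True: data appears only after JS runs
```

In the browser, the two behaviors of the infinite-scroll site look like this:
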

| ![With JavaScript](images/infinite_scroll.png) | ![Without JavaScript](images/infinite_scroll_no_js.png) |
| ---------------------------------------------- | ------------------------------------------------------- |
| JavaScript Enabled                             | JavaScript Disabled                                     |

The next question to answer is what BeautifulSoup is capable of.

## Can BeautifulSoup Render `JavaScript`?

The short answer is no.

First, it is important to understand the difference between parsing and rendering. Parsing is simply converting a string representation, in this case the HTML, into in-memory Python objects that can be queried.

So what is rendering? Rendering is essentially interpreting HTML, JavaScript, CSS, and images into what we see in the browser.

Beautiful Soup is a Python library for pulling data out of HTML files. This involves parsing an HTML string into a BeautifulSoup object, so we first need the HTML as a string to begin with. Dynamic websites do not have the data in the HTML directly, which means that BeautifulSoup cannot work with dynamic websites on its own.

The Selenium library can automate loading and rendering websites in a browser like Chrome or Firefox. Even though Selenium supports pulling data out of HTML, it is also possible to extract the complete rendered HTML and let Beautiful Soup extract the data instead.

Let's begin dynamic web scraping with Python using Selenium first.

## Scraping Dynamic Web Pages With Selenium

Installing Selenium involves setting up three things:

1. The browser of your choice (which you already have):
   - Chrome, Firefox, Edge, Internet Explorer, Safari, and Opera are supported. In this tutorial, we will be using Chrome.

2. The driver for your browser:

   - The driver for Chrome can be downloaded from [this page](https://chromedriver.chromium.org/downloads). Download the zip file containing the driver and unzip it.
Take a note of this path.
   - Visit [this link](https://www.selenium.dev/documentation/en/webdriver/driver_requirements/#quick-reference) for information about drivers for other browsers.

3. The Python Selenium package:
   - This package can be installed using the pip command:

```shell
pip install selenium
```

   - If you are using Anaconda, it can be installed from the `conda-forge` channel:

```shell
conda install -c conda-forge selenium
```

The basic skeleton of a Python script that launches the browser, loads the page, and then closes the browser is simple:

```python
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

# Selenium 4 takes the driver path via a Service object
driver = Chrome(service=Service(ChromeDriverManager().install()))
driver.get('https://quotes.toscrape.com/js/')
#
# Code to read data from HTML here
#
driver.quit()
```

Now that we can load the page in the browser, let's look into extracting specific elements. There are two ways to extract elements: Selenium and Beautiful Soup.

### Finding Elements Using Selenium

Our objective in this example is to find the author element.

Load the site `https://quotes.toscrape.com/js/` in Chrome, right-click the author name, and click Inspect. This should open Developer Tools with the author element highlighted:

![](images/author_markup.png)

This is a `small` element with its `class` attribute set to `author`:

```html
<small class="author">Albert Einstein</small>
```

Selenium provides various methods to locate HTML elements. These methods are part of the driver object.
Some of the methods that are useful here are as follows:

```python
element = driver.find_element(By.CLASS_NAME, "author")
element = driver.find_element(By.TAG_NAME, "small")
```

There are a few other methods that may be useful in other scenarios:

```python
element = driver.find_element(By.ID, "abc")
element = driver.find_element(By.LINK_TEXT, "abc")
element = driver.find_element(By.XPATH, "//abc")
element = driver.find_element(By.CSS_SELECTOR, ".abc")
```

Perhaps the most useful locators are `By.CSS_SELECTOR` and `By.XPATH`. Either of the two can handle most scenarios.

Let's modify the code so that the first author is printed:

```python
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager

driver = Chrome(service=Service(ChromeDriverManager().install()))
driver.get('https://quotes.toscrape.com/js/')

element = driver.find_element(By.CLASS_NAME, "author")

print(element.text)
driver.quit()
```

What if you want to print all the authors?

All the `find_element` methods have a counterpart: `find_elements`. Note the pluralization. To find all the authors, simply change one line:

```python
elements = driver.find_elements(By.CLASS_NAME, "author")
```

This returns a list of elements.
We can simply run a loop to print all the authors:

```python
for element in elements:
    print(element.text)
```

*Note: The complete code is in the [selenium_example.py](https://github.com/oxylabs/Scraping-Dynamic-JavaScript-Ajax-Websites-With-BeautifulSoup/blob/main/selenium_example.py) file.*

However, if you are already comfortable with BeautifulSoup, you can create a Beautiful Soup object instead.

### Finding Elements Using BeautifulSoup

As we saw in the first example, the Beautiful Soup object needs HTML. When scraping static sites, the HTML can be retrieved with the `requests` library and then parsed into a BeautifulSoup object:

```python
response = requests.get("https://quotes.toscrape.com/")
bs = BeautifulSoup(response.text, "lxml")
```

Let's find out how to scrape a dynamic website with BeautifulSoup.

The following part remains unchanged from the previous example:

```python
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup

driver = Chrome(service=Service(ChromeDriverManager().install()))
driver.get('https://quotes.toscrape.com/js/')
```

The rendered HTML of the page is available in the `page_source` attribute:

```python
soup = BeautifulSoup(driver.page_source, "lxml")
```

Once the soup object is available, all Beautiful Soup methods can be used as usual.
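
For example, `find_all` collects every matching element rather than only the first. Here is a self-contained sketch, with an inline HTML snippet standing in for `driver.page_source`:

```python
from bs4 import BeautifulSoup

# Inline HTML standing in for driver.page_source (illustrative only)
html = """
<div class="quote"><small class="author">Albert Einstein</small></div>
<div class="quote"><small class="author">J.K. Rowling</small></div>
"""

soup = BeautifulSoup(html, "html.parser")

# find_all returns a list of every matching element
authors = [element.text for element in soup.find_all("small", class_="author")]
print(authors)  # ['Albert Einstein', 'J.K. Rowling']
```

Extracting just the first author from the real soup object works the same way:
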

```python
author_element = soup.find("small", class_="author")
print(author_element.text)
```

*Note: The complete source code is in [selenium_bs4.py](https://github.com/oxylabs/Scraping-Dynamic-JavaScript-Ajax-Websites-With-BeautifulSoup/blob/main/selenium_bs4.py).*

### Headless `Browser`

Once the script is ready, there is no need for the browser to be visible while it runs. The browser can be hidden, and the script will still run fine. A browser running without a visible window like this is known as a headless browser.

To make the browser headless, import `ChromeOptions`. Other browsers have their own Options classes.

```python
from selenium.webdriver import ChromeOptions
```

Now, create an object of this class and enable headless mode:

```python
options = ChromeOptions()
options.add_argument("--headless=new")  # older Selenium versions used: options.headless = True
```

Finally, pass this object while creating the Chrome instance:

```python
driver = Chrome(service=Service(ChromeDriverManager().install()), options=options)
```

Now when you run the script, the browser will not be visible. See the [selenium_bs4_headless.py](https://github.com/oxylabs/Scraping-Dynamic-JavaScript-Ajax-Websites-With-BeautifulSoup/blob/main/selenium_bs4_headless.py) file for the complete implementation.

## Web Scraping Dynamic Sites by Locating AJAX Calls

Loading a browser is expensive: it takes CPU, RAM, and bandwidth that are not really needed. When a website is being scraped, it is the data that matters; the CSS, images, and rendering are not really needed.

The fastest and most efficient way of scraping dynamic web pages with Python is to locate the actual place where the data is located.
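
That place is often an AJAX endpoint that returns JSON, which Python can parse directly, with no HTML parsing at all. Here is a sketch using a made-up payload; the field names below are illustrative assumptions, not any specific site's schema:

```python
import json

# A made-up sample of the kind of payload an AJAX endpoint might return;
# the field names here are assumptions, not a real site's schema.
payload = """
{
    "has_next": true,
    "quotes": [
        {"author": {"name": "Albert Einstein"}, "text": "..."},
        {"author": {"name": "J.K. Rowling"}, "text": "..."}
    ]
}
"""

data = json.loads(payload)
for quote in data["quotes"]:
    print(quote["author"]["name"])
```

With a real site, you would fetch such an endpoint with `requests` and feed `response.json()` into the same kind of loop, which is typically far faster than driving a browser.
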

There are two places where this data can be located:

- The main page itself, in JSON format, embedded in a `