├── requirements.txt
├── web_scraper.py
└── README.md

/requirements.txt:
--------------------------------------------------------------------------------
requests
beautifulsoup4
--------------------------------------------------------------------------------
/web_scraper.py:
--------------------------------------------------------------------------------
import requests
from bs4 import BeautifulSoup


def scrape_website(url):
    """Fetch the page at `url` and print the text of its <h2> headings."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # fail loudly on HTTP errors (404, 500, ...)
    soup = BeautifulSoup(response.text, 'html.parser')
    titles = soup.find_all('h2')

    print("Scraped Titles:")
    for title in titles:
        print(title.get_text(strip=True))


def main():
    url = 'https://example.com'  # Replace with the URL you want to scrape
    scrape_website(url)


if __name__ == "__main__":
    main()
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# WebScraper

WebScraper is a basic Python application that scrapes data from websites using BeautifulSoup.

## Features

- Scrapes the `<h2>` headings from a given URL.
- Prints the scraped headings in a readable format.

## Installation

1. Clone the repository:
   ```bash
   git clone https://github.com/YOUR_USERNAME/WebScraper.git
   cd WebScraper
   ```

2. Install the required packages:
   ```bash
   pip install -r requirements.txt
   ```

3. Run the application:
   ```bash
   python web_scraper.py
   ```

## Requirements

- Python 3.x
- `requests` library
- `beautifulsoup4` library

## License

This project is licensed under the MIT License.
--------------------------------------------------------------------------------
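The parsing step in `web_scraper.py` can be exercised without any network request. The sketch below (not part of the repository; `extract_titles` and the sample HTML are illustrative) feeds BeautifulSoup a static HTML string and extracts the `<h2>` headings the scraper targets:

```python
from bs4 import BeautifulSoup


def extract_titles(html):
    """Return the text of every <h2> element in the given HTML string."""
    soup = BeautifulSoup(html, "html.parser")
    return [h2.get_text(strip=True) for h2 in soup.find_all("h2")]


sample = """
<html><body>
  <h2>First Post</h2>
  <h2>Second Post</h2>
</body></html>
"""

print(extract_titles(sample))  # → ['First Post', 'Second Post']
```

Returning a list rather than printing inside the function also makes the parsing logic easy to unit-test, since no live URL is involved.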