├── assets
│   ├── table.png
│   ├── fixtures.png
│   └── player_stats.png
├── requirements.txt
├── fixtures.py
├── table.py
├── README.md
├── player_stats.py
└── main.py
--------------------------------------------------------------------------------
/assets/table.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tarun7r/Premier-League-API/HEAD/assets/table.png
--------------------------------------------------------------------------------
/assets/fixtures.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tarun7r/Premier-League-API/HEAD/assets/fixtures.png
--------------------------------------------------------------------------------
/assets/player_stats.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tarun7r/Premier-League-API/HEAD/assets/player_stats.png
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
lxml==4.6.3
Flask==2.0.1
requests==2.27.1
beautifulsoup4==4.11.1
googlesearch-python==1.3.0
tabulate
--------------------------------------------------------------------------------
/fixtures.py:
--------------------------------------------------------------------------------
import requests
from bs4 import BeautifulSoup

# Fetch the fixtures page once at import time
link = "https://onefootball.com/en/competition/premier-league-9/fixtures"
source = requests.get(link).text
page = BeautifulSoup(source, "lxml")
fix = page.find_all("a", class_="MatchCard_matchCard__iOv4G")

def fixtures_list():
    """Return every fixture on the page as a plain string."""
    return [match.get_text(separator=" ").strip() for match in fix]

def get_fixtures(team):
    """Return only the fixtures that mention the given team."""
    return [fixture for fixture in fixtures_list() if team in fixture]

if __name__ == "__main__":
    print(fixtures_list())
--------------------------------------------------------------------------------
/table.py:
--------------------------------------------------------------------------------
import requests
from bs4 import BeautifulSoup
from tabulate import tabulate  # pip install tabulate

link = "https://onefootball.com/en/competition/premier-league-9/table"
source = requests.get(link).text
page = BeautifulSoup(source, "lxml")

# Find all rows in the standings table
rows = page.find_all("li", class_="Standing_standings__row__5sdZG")

# Initialize the table with a header row
table = [["Position", "Team", "Played", "Wins", "Draws", "Losses", "Goal Difference", "Points"]]

# Extract data for each row
for row in rows:
    position_elem = row.find("div", class_="Standing_standings__cell__5Kd0W")
    team_elem = row.find("p", class_="Standing_standings__teamName__psv61")
    stats = row.find_all("div", class_="Standing_standings__cell__5Kd0W")

    # stats[7] is accessed below, so the row needs at least 8 cells
    if position_elem and team_elem and len(stats) >= 8:
        position = position_elem.text.strip()
        team = team_elem.text.strip()
        played = stats[2].text.strip()
        wins = stats[3].text.strip()
        draws = stats[4].text.strip()
        losses = stats[5].text.strip()
        goal_difference = stats[6].text.strip()
        points = stats[7].text.strip()

        # Append the extracted data to the table
        table.append([position, team, played, wins, draws, losses, goal_difference, points])

# Print the table in a pretty format
print(tabulate(table, headers="firstrow", tablefmt="grid"))
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

Premier League API 2.0


This is an unofficial Premier League API for pulling player stats, fixtures, table, and results data. The API is built with Flask, and the data is scraped from the official Premier League website and OneFootball.


API Endpoints


The application provides the following API endpoints:


GET /players/{player_name}


This endpoint retrieves information about a Premier League player with the given name. The player name should be provided as a URL parameter.


The API returns a JSON object with the following structure:

  {
    "name": "...",
    "position": "...",
    "club": "...",
    "key_stats": { ... },
    "Nationality": "...",
    "Date of Birth": "...",
    "Height": "..."
  }
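For example, once the server is running on Flask's default development address (`127.0.0.1:5000`), the endpoint URL can be built like this; `player_url` is a hypothetical helper, shown only to illustrate that names with spaces must be percent-encoded in the URL path:

```python
from urllib.parse import quote

BASE_URL = "http://127.0.0.1:5000"  # Flask's default development address

def player_url(player_name: str) -> str:
    """Build the /players endpoint URL, percent-encoding the player name."""
    return f"{BASE_URL}/players/{quote(player_name)}"

# Spaces in a player's name become %20 in the URL path
print(player_url("Erling Haaland"))
# http://127.0.0.1:5000/players/Erling%20Haaland
```

The response can then be fetched with `requests.get(player_url("Erling Haaland")).json()`.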

GET /table


This endpoint retrieves the current Premier League standings. The response contains an array of rows: the first row is the header, and each following row holds a team's position, name, games played, wins, draws, losses, goal difference, and points.


The API returns a JSON object with the following structure:

{
  "table": [
    ["Position", "Team", "Played", "Wins", "Draws", "Losses", "Goal Difference", "Points"],
    ...
  ]
}
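Because the first row is the header, a client can pair it with each data row to get dictionaries. A minimal sketch with made-up standings values (`rows_to_dicts` is a hypothetical client-side helper, not part of the API):

```python
def rows_to_dicts(table):
    """Pair the header row with each data row to build dictionaries."""
    header, *rows = table
    return [dict(zip(header, row)) for row in rows]

# Illustrative payload only; real values come from the /table response
sample = [
    ["Position", "Team", "Played", "Wins", "Draws", "Losses", "Goal Difference", "Points"],
    ["1", "Arsenal", "38", "28", "5", "5", "62", "89"],
]
print(rows_to_dicts(sample)[0]["Team"])
# Arsenal
```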

GET /fixtures/{team_name}


This endpoint retrieves the team's next three Premier League fixtures. The team name should be provided as a URL parameter.


The API returns a JSON object with the following structure:

{
  "team_fixtures": [
    "Team A vs Team B DD/MM/YYYY HH:MM",
    "Team A vs Team C DD/MM/YYYY HH:MM",
    "Team A vs Team D DD/MM/YYYY HH:MM"
  ]
}
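Under the hood, the server keeps only the fixture strings that contain the team name, compared case-insensitively. The same filter can be sketched with made-up fixture strings:

```python
# Illustrative fixture strings; real ones come from the scraped fixtures page
fixtures = [
    "Arsenal vs Chelsea 01/02/2025 16:30",
    "Liverpool vs Everton 02/02/2025 20:00",
    "Chelsea vs Liverpool 03/02/2025 14:00",
]

team = "chelsea"  # URL parameters arrive in whatever case the caller used

# Case-insensitive substring match, as the /fixtures/{team_name} route does
team_fixtures = [f for f in fixtures if team.lower() in f.lower()]
print(team_fixtures)
# ['Arsenal vs Chelsea 01/02/2025 16:30', 'Chelsea vs Liverpool 03/02/2025 14:00']
```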

Setup Details

Follow these instructions to run the application and start using the API on your local machine:

  • Clone the repository
  • Open a terminal, navigate to the folder where your clone of this repository is located, and type: `$ pip install -r requirements.txt`
  • Type `$ python main.py` in the terminal; the development server will keep running until you stop it.

    Individual Player PL Stats

    (screenshot: assets/player_stats.png)

    Premier League Table

    (screenshot: assets/table.png)

    Premier League Fixtures

    (screenshot: assets/fixtures.png)

    Update 🚀

    The API has been enhanced with new features and improvements. You can also search player stats using the player's reference image (face recognition) - Repo 📸

    Disclaimer

    This project is created solely for learning and educational purposes. It is not intended for production-level use or commercial applications.
--------------------------------------------------------------------------------
/player_stats.py:
--------------------------------------------------------------------------------
import re

import requests
from bs4 import BeautifulSoup
from googlesearch import search  # pip install googlesearch-python

# Input player name
player_name = "ronaldo"

# Use Google search to find the player's Premier League stats page
query = f"{player_name} premier league.com stats"
search_results = list(search(query, num_results=5))
if search_results:
    res = search_results[0]
else:
    raise ValueError("No search results found for the query.")

# Normalise the URL: `res` points to the overview page...
if "stats" in res:
    res = res.replace('stats', 'overview')

# ...and `sta` points to the stats page
sta = res.replace('overview', 'stats')

# Fetch the stats page
source = requests.get(sta).text
page = BeautifulSoup(source, "lxml")

# Extract player details
name = page.find("div", class_="player-header__name t-colour").text
name = re.sub(r'\s+', ' ', name).strip()  # Collapse newlines/spaces into single spaces
print(f"Player Name: {name}")

position_label = page.find("div", class_="player-overview__label", string="Position")
if position_label:
    position = position_label.find_next_sibling("div", class_="player-overview__info").text.strip()
    print(f"Position: {position}")
else:
    position = "Unknown"

# Extract club information
club = "No longer part of EPL"
if "Club" in page.text:
    club_info = page.find_all("div", class_="info")
    if len(club_info) > 0:
        club = club_info[0].text.strip()
    if len(club_info) > 1:
        position = club_info[1].text.strip()
print(f"Club: {club}")

# Extract detailed stats
detailed_stats = {}
stat_elements = page.find_all("div", class_="player-stats__stat-value")
for stat in stat_elements:
    stat_title = stat.text.split("\n")[0].strip()
    stat_value = stat.find("span", class_="allStatContainer").text.strip()
    detailed_stats[stat_title] = stat_value

print("\nDetailed Stats:")
for key, value in detailed_stats.items():
    print(f"{key}: {value}")

# Extract personal details from the overview page
source2 = requests.get(res).text
page2 = BeautifulSoup(source2, "lxml")
personal_details = page2.find("div", class_="player-info__details-list")

if personal_details:
    # Extract nationality
    nationality = personal_details.find("span", class_="player-info__player-country")
    nationality = nationality.text.strip() if nationality else "Unknown"

    # Extract Date of Birth and Height
    dob = "Unknown"
    height = "Unknown"
    info_cols = personal_details.find_all("div", class_="player-info__col")
    for info in info_cols:
        label = info.find("div", class_="player-info__label").text.strip()
        if label == "Date of Birth":
            dob = info.find("div", class_="player-info__info").text.strip()
        elif label == "Height":
            height = info.find("div", class_="player-info__info").text.strip()

    print("\nPersonal Details:")
    print(f"Nationality: {nationality}")
    print(f"Date of Birth: {dob}")
    print(f"Height: {height}")
else:
    print("Personal details not found.")
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
from flask import Flask, jsonify
import requests
from bs4 import BeautifulSoup
from googlesearch import search  # pip install googlesearch-python

app = Flask(__name__)

@app.route('/')
def index():
    return ("Hey there! Welcome to the Premier League API 2.0 \n\n"
            "You can use the following endpoints:\n\n"
            " /players/<player_name> \n /fixtures \n /fixtures/<team_name> \n /table \n\n Enjoy!")

@app.route('/players/<player_name>', methods=['GET'])
def get_player(player_name):
    try:
        # Use Google search to find the player's stats page
        query = f"{player_name} premier league.com stats"
        search_results = list(search(query, num_results=5))
        if search_results:
            res = search_results[0]
        else:
            return jsonify({"error": "No search results found for the query."}), 404

        # Normalise the URL: `res` -> overview page, `sta` -> stats page
        if "stats" in res:
            res = res.replace('stats', 'overview')
        sta = res.replace('overview', 'stats')

        # Fetch the stats page
        source = requests.get(sta).text
        page = BeautifulSoup(source, "lxml")

        # Extract player details
        name = page.find("div", class_="player-header__name t-colour").text.strip()
        position_label = page.find("div", class_="player-overview__label", string="Position")
        position = position_label.find_next_sibling("div", class_="player-overview__info").text.strip() if position_label else "Unknown"

        # Extract club information
        club = "No longer part of EPL"
        if "Club" in page.text:
            club_info = page.find_all("div", class_="info")
            if len(club_info) > 0:
                club = club_info[0].text.strip()
            if len(club_info) > 1:
                position = club_info[1].text.strip()

        # Extract detailed stats
        detailed_stats = {}
        stat_elements = page.find_all("div", class_="player-stats__stat-value")
        for stat in stat_elements:
            stat_title = stat.text.split("\n")[0].strip()
            stat_value = stat.find("span", class_="allStatContainer").text.strip()
            detailed_stats[stat_title] = stat_value

        # Extract personal details from the overview page
        source2 = requests.get(res).text
        page2 = BeautifulSoup(source2, "lxml")
        personal_details = page2.find("div", class_="player-info__details-list")

        nationality = "Unknown"
        dob = "Unknown"
        height = "Unknown"

        if personal_details:
            # Extract nationality
            nationality_elem = personal_details.find("span", class_="player-info__player-country")
            nationality = nationality_elem.text.strip() if nationality_elem else "Unknown"

            # Extract Date of Birth and Height
            dob_info = personal_details.find_all("div", class_="player-info__col")
            for info in dob_info:
                label = info.find("div", class_="player-info__label").text.strip()
                if label == "Date of Birth":
                    dob = info.find("div", class_="player-info__info").text.strip()
                elif label == "Height":
                    height = info.find("div", class_="player-info__info").text.strip()

        # Return the player details as JSON
        return jsonify({
            "name": name,
            "position": position,
            "club": club,
            "key_stats": detailed_stats,
            "Nationality": nationality,
            "Date of Birth": dob,
            "Height": height
        })

    except Exception as e:
        return jsonify({"error": str(e)}), 500

@app.route('/fixtures')
def fixtures_list():
    link = "https://onefootball.com/en/competition/premier-league-9/fixtures"
    source = requests.get(link).text
    page = BeautifulSoup(source, "lxml")
    fix = page.find_all("a", class_="MatchCard_matchCard__iOv4G")

    fixtures = [match.get_text(separator=" ").strip() for match in fix]
    return jsonify({"fixtures": fixtures})

@app.route('/fixtures/<team>', methods=['GET'])
def get_fixtures(team):
    link = "https://onefootball.com/en/competition/premier-league-9/fixtures"
    source = requests.get(link).text
    page = BeautifulSoup(source, "lxml")
    fix = page.find_all("a", class_="MatchCard_matchCard__iOv4G")

    fixtures = [match.get_text(separator=" ").strip() for match in fix]

    # Case-insensitive substring match against the team name
    filtered_fixtures = [fixture for fixture in fixtures if team.lower() in fixture.lower()]

    return jsonify({"team_fixtures": filtered_fixtures})

@app.route('/table')
def table():
    link = "https://onefootball.com/en/competition/premier-league-9/table"
    source = requests.get(link).text
    page = BeautifulSoup(source, "lxml")

    # Find all rows in the standings table
    rows = page.find_all("li", class_="Standing_standings__row__5sdZG")

    # Initialize the table with a header row
    table = [["Position", "Team", "Played", "Wins", "Draws", "Losses", "Goal Difference", "Points"]]

    # Extract data for each row
    for row in rows:
        position_elem = row.find("div", class_="Standing_standings__cell__5Kd0W")
        team_elem = row.find("p", class_="Standing_standings__teamName__psv61")
        stats = row.find_all("div", class_="Standing_standings__cell__5Kd0W")

        # stats[7] is accessed below, so the row needs at least 8 cells
        if position_elem and team_elem and len(stats) >= 8:
            position = position_elem.text.strip()
            team = team_elem.text.strip()
            played = stats[2].text.strip()
            wins = stats[3].text.strip()
            draws = stats[4].text.strip()
            losses = stats[5].text.strip()
            goal_difference = stats[6].text.strip()
            points = stats[7].text.strip()

            # Append the extracted data to the table
            table.append([position, team, played, wins, draws, losses, goal_difference, points])

    # Return the table as a JSON response
    return jsonify({"table": table})

if __name__ == "__main__":
    app.run(debug=True)
--------------------------------------------------------------------------------