├── 1.ScrapingTransferFeeData.rmd
├── 2.ScrapingPlayerProfileImages.Rmd
├── 3.ToughtoScrap.md
├── README.md
└── Screenshot 2018-02-20 17.58.27.png

/1.ScrapingTransferFeeData.rmd:
--------------------------------------------------------------------------------
# Introduction to Scraping Data from Transfermarkt
## FC_rSTATS

This is an R conversion of a tutorial by [FC Python](http://twitter.com/FC_Python). I take no credit for the idea and have their blessing to make this conversion. All text is a direct copy unless changes were relevant. Please follow them on Twitter, and if you have a desire to learn Python then they are a fantastic resource!

Before starting the article, I'm obliged to mention that web scraping is a grey area legally and ethically in lots of circumstances. Please consider the positive and negative effects of what you scrape before doing so!

Warning over. Web scraping is a hugely powerful tool that, when done properly, can give you access to huge, clean data sources to power your analysis. The applications are just about endless for anyone interested in data. As a professional analyst, you can scrape fixtures and line-up data from around the world every day to plan scouting assignments or alert you to youth players breaking through. As an amateur analyst, it is quite likely to be your only source of data for analysis.

This tutorial is just an introduction to scraping with R. It will take you through the basic process of loading a page, locating information and retrieving it. Combine the knowledge on this page with for loops to cycle through a site, and HTML knowledge to understand a web page, and you'll be able to grab just about any data you can find.

Let's fire up our package and get started. We'll need the 'rvest' package, so make sure you have it installed.

```{r include = FALSE}
require(rvest)
```

Our process for extracting data is going to go something like this:

1) Load the webpage containing the data.

2) Locate the data within the page and extract it.

3) Organise the data into a dataframe.

For this example, we are going to take the player names and values for the most expensive players in a particular year. You can find the page that we'll use [here](https://www.transfermarkt.co.uk/transfers/transferrekorde/statistik/top/plus/0/galerie/0?saison_id=2000).

The following sections will run through each of these steps individually.

## Load the webpage containing the data

This is pretty easy with rvest. Just set the page variable to the page we want to scrape and then pass it through the read_html function.

```{r}
page <- "https://www.transfermarkt.co.uk/transfers/transferrekorde/statistik/top/plus/0/galerie/0?saison_id=2000"

scraped_page <- read_html(page)
```

## Locate the data within a page & extract it

To fully appreciate what we are doing here, you probably need a basic grasp of HTML – the language that structures a webpage. As simply as I can put it for this article, HTML is made up of elements, like a paragraph or a link, that tell the browser what to render. For scraping, we will use this information to tell our program what information to take.

You can inspect the source code of the page or use SelectorGadget to help us tell our scraping code where the information is that we want to grab.

Take another look at the page we are scraping. We want two things – the player name and the transfer value.

Using SelectorGadget we can find the following node locations:

Player Name = #yw2 .spielprofil_tooltip
Transfer Value = .rechts.hauptlink a

Extracting the data is then quite easy. Reading the code left to right: the_page -> the_nodes -> the_text -> as_character, each time storing the result as an object with <-

```{r}
PlayerNames <- scraped_page %>% html_nodes("#yw2 .spielprofil_tooltip") %>% html_text() %>% as.character()
TransferValue <- scraped_page %>% html_nodes(".rechts.hauptlink a") %>% html_text() %>% as.character()
```

## Organise the data into a dataframe

This is pretty simple, as we now have the two objects PlayerNames and TransferValue. So we just add them to the construction of a dataframe.

```{r}
df <- data.frame(PlayerNames, TransferValue)
```

and then display the first six rows of the dataframe with:

```{r}
head(df)
```
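A quick optional extra: the transfer values arrive as text (something like "£54.00m"), so they can't be summed or plotted as they are. Below is a rough sketch of one way to convert them to numbers. The exact format of the strings is an assumption on my part, so check your own head(df) output before trusting it.

```{r}
# A sketch of converting fee strings to numbers. It assumes values look
# like "£54.00m" or "£900k" -- anything else (loans, undisclosed fees)
# will come out as NA, so inspect the results.
fee_to_numeric <- function(fee) {
  fee <- gsub("£", "", fee)                              # drop the currency symbol
  multiplier <- ifelse(grepl("m", fee), 1e6,             # "m" means millions
                       ifelse(grepl("k", fee), 1e3, 1))  # "k" means thousands
  as.numeric(gsub("[mk]", "", fee)) * multiplier
}

df$TransferValueNumeric <- fee_to_numeric(df$TransferValue)
```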
## Summary

This article has gone through the absolute basics of scraping: we can now load a page, identify the elements that we want to scrape and then process them into a dataframe.

There is more that we need to do to scrape efficiently though. Firstly, we can apply a for loop to the whole program above, changing the initial webpage name slightly to scrape the next year – I'll let you figure out how!
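If you want a head start on that loop, here is one possible sketch. It assumes that the saison_id parameter at the end of the URL is the only thing that needs to change, and that every season's page shares the same layout; the choice of seasons is arbitrary.

```{r eval = FALSE}
# A sketch of a multi-season scrape, not a polished solution.
all_years <- data.frame()

for (year in 2000:2005) {
  page <- paste0("https://www.transfermarkt.co.uk/transfers/transferrekorde/statistik/top/plus/0/galerie/0?saison_id=", year)
  scraped_page <- read_html(page)
  PlayerNames <- scraped_page %>% html_nodes("#yw2 .spielprofil_tooltip") %>% html_text()
  TransferValue <- scraped_page %>% html_nodes(".rechts.hauptlink a") %>% html_text()
  all_years <- rbind(all_years, data.frame(PlayerNames, TransferValue, Season = year))
  Sys.sleep(2)  # pause between requests, to be polite to the site
}
```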
You will also need to understand more about HTML, particularly class and ID selectors, to get the most out of scraping. Regardless, if you've followed along and understand what we've achieved and how, then you're in a good place to apply this to other pages.

## Full Code

```{r}
require(rvest)

page <- "https://www.transfermarkt.co.uk/transfers/transferrekorde/statistik/top/plus/0/galerie/0?saison_id=2000"

scraped_page <- read_html(page)

PlayerNames <- scraped_page %>% html_nodes("#yw2 .spielprofil_tooltip") %>% html_text() %>% as.character()
TransferValue <- scraped_page %>% html_nodes(".rechts.hauptlink a") %>% html_text() %>% as.character()

df <- data.frame(PlayerNames, TransferValue)
```

--------------------------------------------------------------------------------
/2.ScrapingPlayerProfileImages.Rmd:
--------------------------------------------------------------------------------
# Scraping Lists Through Transfermarkt and Saving Images in R

This is an R conversion of a tutorial by [FC Python](https://twitter.com/FC_Python). I take no credit for the idea and have their blessing to make this conversion. All text is a direct copy unless changes were relevant. Please follow them on Twitter, and if you have a desire to learn Python then they are a fantastic resource!

In this tutorial, we'll be looking to develop our scraping knowledge beyond just lifting text from a single page. Following through the article, you'll learn how to scrape links from a page and iterate through them to take information from each link, speeding up the process of creating new datasets. We will also run through how to identify and download images, creating a database of pictures of every player in the Premier League. This should save 10 minutes a week for anyone searching in Google Images to decorate their pre-match presentations!

This tutorial builds on the [first tutorial](https://github.com/FCrSTATS/ScrappingTutorials/blob/master/1.ScrapingTransferFeeData.rmd) in our scraping series, so it is strongly recommended that you understand the concepts there before starting here.

Let's import our package and get started. rvest is the only package we need to install.

```{r}
require(rvest)
```

Our aim is to extract a picture of every player in the Premier League. We have identified Transfermarkt as our target, given that each player page should have a picture. Our secondary aim is to run this in one piece of code and not to run a new command for each player or team individually. To do this, we need to follow this process:

1) Locate a list of teams in the league with links to a squad list – then save these links

2) Run through each squad list link and save the link to each player's page

3) Locate the player's image and save it to our local computer

For what seems to be a massive task, we can distill it down to three main tasks. Below, we'll break each one down.

## Locate a list of team links and save them

The [Premier League page](https://www.transfermarkt.co.uk/premier-league/startseite/wettbewerb/GB1) is the obvious place to start. As you can see, each team name is a link through to the squad page.

All we need to do is use SelectorGadget to identify the names of the nodes that we want to scrape.

We then append these links to the Transfermarkt domain so that we can call them on their own.

```{r}
page <- "https://www.transfermarkt.co.uk/premier-league/startseite/wettbewerb/GB1"
scraped_page <- read_html(page)

teamLinks <- scraped_page %>% html_nodes(".hide-for-pad .vereinprofil_tooltip") %>% html_attr("href")
teamLinks <- paste0("https://www.transfermarkt.co.uk", teamLinks)
```
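Before moving on, a quick sanity check is a good habit rather than a required step. If the selector happens to match each club more than once (for example, if both the club crest and the club name link to the squad page), unique() will collapse any duplicates.

```{r}
# Defensive check -- collapse duplicate links and confirm the count.
teamLinks <- unique(teamLinks)
length(teamLinks)  # we expect 20 for a Premier League season
```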
## Run through each squad and save the player links

So we now have 20 team links. We will now iterate through each of these team links and do the same thing, only this time we are taking player links and not squad links. Take a look through the code below, but you'll notice that it is very similar to the last chunk of instructions – the key difference being that we run it within a loop to go through all 20 teams in one go.

```{r}
PlayerLinks <- list()

for (i in 1:length(teamLinks)) {
  page <- teamLinks[i]
  scraped_page <- read_html(page)
  temp_PlayerLinks <- scraped_page %>% html_nodes(".hide-for-small .spielprofil_tooltip") %>% html_attr("href")
  PlayerLinks <- append(PlayerLinks, temp_PlayerLinks)
}
```
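As an optional refinement, here is the same loop again with two small additions: a pause between requests, which is kinder to the website, and a progress message so you can see how far along you are. Neither changes the result.

```{r eval = FALSE}
# The same squad-list loop, with a polite pause and a progress message.
PlayerLinks <- list()

for (i in 1:length(teamLinks)) {
  message("Scraping team ", i, " of ", length(teamLinks))
  page <- teamLinks[i]
  scraped_page <- read_html(page)
  temp_PlayerLinks <- scraped_page %>% html_nodes(".hide-for-small .spielprofil_tooltip") %>% html_attr("href")
  PlayerLinks <- append(PlayerLinks, temp_PlayerLinks)
  Sys.sleep(2)  # wait two seconds before hitting the next squad page
}
```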
We have to make some quick changes to the PlayerLinks list to make it easier to use in the next step. First, we unlist it. Then we append the links to the Transfermarkt domain so that we can call them on their own.

```{r}
PlayerLinks <- unlist(PlayerLinks)
PlayerLinks <- paste0("https://www.transfermarkt.co.uk" , PlayerLinks)
```

## Locate and save each player's image

We now have a lot of links for players…

513 links, in fact! We now need to iterate through each of these links and save the player's picture.

Hopefully you should now be comfortable with the process of downloading and parsing a webpage, but the second part of this step will need some unpacking – locating the image and saving it.

Once again, we are locating elements in the page. We grab the image by its node and use "src" as the input to html_attr(). This gives us the URL of the image. We also grab the player's name using the "h1" node. Then we use the download.file function to grab the image and save it with the player's name as the filename. The image then downloads to the working directory you have set in R.

```{r}
for (i in 1:length(PlayerLinks)) {
  page <- PlayerLinks[i]
  scraped_page <- read_html(page)
  Player <- scraped_page %>% html_node("h1") %>% html_text() %>% as.character()
  Image_Title <- paste0(Player,".jpg")
  Image_url <- scraped_page %>% html_node(".dataBild img") %>% html_attr("src")
  download.file(Image_url,Image_Title, mode = 'wb')
}
```
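One optional upgrade before you set it running: with 500+ requests, a single missing image or dropped connection will stop the loop dead. Below is a sketch of a more defensive version using tryCatch(), which logs a warning for the failed player and carries on with the rest.

```{r eval = FALSE}
# A more defensive version of the download loop -- a sketch. A failure on
# one player produces a warning rather than killing the whole run.
for (i in 1:length(PlayerLinks)) {
  tryCatch({
    page <- PlayerLinks[i]
    scraped_page <- read_html(page)
    Player <- scraped_page %>% html_node("h1") %>% html_text()
    Image_url <- scraped_page %>% html_node(".dataBild img") %>% html_attr("src")
    download.file(Image_url, paste0(Player, ".jpg"), mode = 'wb')
  }, error = function(e) {
    warning("Failed on player ", i, ": ", conditionMessage(e))
  })
  Sys.sleep(1)  # small pause between players
}
```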
... and job done! We now have a catalogue of player images. To help test what you have learnt, try adding the player's club name to the filename of each image, i.e. "AymericLaporte_ManchesterCity.jpg" .... maybe helpful when cataloguing the images for future use.

## Full Code

```{r}
require(rvest)

page <- "https://www.transfermarkt.co.uk/premier-league/startseite/wettbewerb/GB1"
scraped_page <- read_html(page)

teamLinks <- scraped_page %>% html_nodes(".hide-for-pad .vereinprofil_tooltip") %>% html_attr("href")
teamLinks <- paste0("https://www.transfermarkt.co.uk", teamLinks)


PlayerLinks <- list()

for (i in 1:length(teamLinks)) {
  page <- teamLinks[i]
  scraped_page <- read_html(page)
  temp_PlayerLinks <- scraped_page %>% html_nodes(".hide-for-small .spielprofil_tooltip") %>% html_attr("href")
  PlayerLinks <- append(PlayerLinks, temp_PlayerLinks)
}

PlayerLinks <- unlist(PlayerLinks)
PlayerLinks <- paste0("https://www.transfermarkt.co.uk" , PlayerLinks)

for (i in 1:length(PlayerLinks)) {
  page <- PlayerLinks[i]
  scraped_page <- read_html(page)
  Player <- scraped_page %>% html_node("h1") %>% html_text() %>% as.character()
  Image_Title <- paste0(Player,".jpg")
  Image_url <- scraped_page %>% html_node(".dataBild img") %>% html_attr("src")
  download.file(Image_url,Image_Title, mode = 'wb')
}
```

--------------------------------------------------------------------------------
/3.ToughtoScrap.md:
--------------------------------------------------------------------------------
Scraping Difficult-to-Grab Data: Gaelic Football
================

There are some websites that are relatively easy to scrape and others that are not!

This [Gaelic Football website](http://www.wexfordgaa.ie/results/) is proving difficult: we can't simply use SelectorGadget like we did in earlier posts [1](https://github.com/FCrSTATS/ScrapingTutorials/blob/master/2.ScrapingPlayerProfileImages.Rmd) [2](https://github.com/FCrSTATS/ScrapingTutorials/blob/master/1.ScrapingTransferFeeData.rmd).

In your browser, inspect the webpage, select the Network tab and refresh the page; we see a data request to the 'ResultServiceAjax.php' script. Select this and look at the Headers tab to get the request URL.

![](https://github.com/FCrSTATS/ScrapingTutorials/blob/master/images/Screen%20Shot%202018-02-20%20at%2017.58.27.png?raw=true)

Open that in a new page and you will see a badly displayed webpage. It has now become easier to use SelectorGadget to access the data with rvest.

Let's get stuck in:

``` r
# We are going to load 3 packages to help us, make sure you have installed them
require(rvest)
```

    ## Loading required package: rvest

    ## Loading required package: xml2

``` r
require(tidyr)
```

    ## Loading required package: tidyr

``` r
require(formattable)
```

    ## Loading required package: formattable

Then we read the request URL into R:

``` r
phpURL <- "http://www.wexfordgaa.ie/wp-content/themes/enfold-child/fixtures/ResultServiceAjax.php"
phpScraped <- read_html(phpURL)
```

First of all, let's grab the home and away team names and store them as variables.

``` r
HomeTeam <- phpScraped %>% html_nodes(".modalTeamName1") %>% html_text()
AwayTeam <- phpScraped %>% html_nodes(".modalTeamName2") %>% html_text()

# print the head of each to double check it's working
head(HomeTeam)
```

    ## [1] "Shelmaliers"            "Tipperary"             
    ## [3] "Crossabeg-Ballymurn"    "Starlights (Hurling)"  
    ## [5] "St Brigid's Blackwater" "CLG Naomh Pádraig"

``` r
head(AwayTeam)
```

    ## [1] "St James'"           "Wexford"             "Oylegate-Glenbrien" 
    ## [4] "Glynn-Barntown"      "St Anne's Rathangan" "Kilrush"

Bingo! Now let's grab the scores.

``` r
Scores <- phpScraped %>% html_nodes(".modalTeamScore") %>% html_text()
Scores
```

    ##  [1] "5-8"  "4-6"  "3-21" "1-21" "0-4"  "4-7"  "0-7"  "1-5"  "1-2"  "1-5" 
    ## [11] "0-6"  "1-14" "3-8"  "1-7"  "2-5"  "4-11" "3-7"  "0-7"  "2-3"  "2-10"
    ## [21] "0-12" "1-7"  "0-10" "2-6"  "CONC" "0-0"  "CONC" "0-0"  "1-10" "0-16"
    ## [31] "0-11" "2-2"  "4-8"  "2-3"

Ah, we have a problem: all the scores are lumped together instead of being split between home and away scores. However, after closer inspection we can see that there is a pattern: Home, Away, Home, Away, etc. Hmmm... we can split this into two lists of results by using odd and even numbers!

``` r
odd_numbers <- seq(1,length(Scores),2)
even_numbers <- seq(2,length(Scores),2)
HomeScores <- Scores[odd_numbers]
AwayScores <- Scores[even_numbers]

# print the first few of each to see if they are working
head(HomeScores)
```

    ## [1] "5-8"  "3-21" "0-4"  "0-7"  "1-2"  "0-6"

``` r
head(AwayScores)
```

    ## [1] "4-6"  "1-21" "4-7"  "1-5"  "1-5"  "1-14"
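A quick aside: a GAA scoreline like "5-8" means 5 goals and 8 points, and with a goal worth 3 points that comes to 23 points in total. Here is a small sketch of a helper that does that conversion, returning NA for the conceded ("CONC") games. It may be handy if you ever want to calculate margins of victory.

``` r
# A sketch: convert "goals-points" scorelines to total points.
# A goal is worth 3 points, so "5-8" becomes 5 * 3 + 8 = 23.
score_to_points <- function(score) {
  parts <- suppressWarnings(as.numeric(unlist(strsplit(score, "-"))))
  if (length(parts) != 2 || any(is.na(parts))) return(NA)  # e.g. "CONC"
  parts[1] * 3 + parts[2]
}

sapply(HomeScores, score_to_points)
```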
Perfect, now let's try and grab the date, time and venue data.

``` r
FixtureData <- phpScraped %>% html_nodes(".modalTimeDetails") %>% html_text()
head(FixtureData)
```

    ## [1] "DATE18 FebTIME10:30VENUESt. Patrick's Park "
    ## [2] "DATE17 FebTIME19:00VENUESemple Stadium, Thurles"
    ## [3] "DATE17 FebTIME16:00VENUEGarrywilliam"
    ## [4] "DATE17 FebTIME14:00VENUEBellefield"
    ## [5] "DATE17 FebTIME14:00VENUEBlackwater"
    ## [6] "DATE17 FebTIME14:00VENUECamolin"

Ah... hmmm, we have got the data but it's all in one string. So we need the help of the tidyr package to break this up into separate columns of Date, Time and Venue.

But first let's get our data into a dataframe and print it.

``` r
Results <- data.frame(HomeTeam = HomeTeam, AwayTeam = AwayTeam, HomeScores = HomeScores, AwayScores = AwayScores, FixtureData = FixtureData, stringsAsFactors = F)

print(Results)
```

    ##                   HomeTeam                              AwayTeam
    ## 1              Shelmaliers                             St James'
    ## 2                Tipperary                               Wexford
    ## 3      Crossabeg-Ballymurn                    Oylegate-Glenbrien
    ## 4     Starlights (Hurling)                        Glynn-Barntown
    ## 5   St Brigid's Blackwater                   St Anne's Rathangan
    ## 6        CLG Naomh Pádraig                               Kilrush
    ## 7      Monageer-Boolavogue                         Realt na Mara
    ## 8       Shamrocks GAA Club                 St Mary's Maudlintown
    ## 9              Shelmaliers            Na Sairséalaigh/Sarsfields
    ## 10             Naomh Éanna HWH Bunclody/Tig Leath Slí Bun Clóidí
    ## 11              Castletown                             Kilanerin
    ## 12 Rathgarogue-Cushinstown                  St John's Volunteers
    ## 13              Cloughbawn                       Ferns St Aidans
    ## 14                 Kilmore                 Geraldine O`Hanrahans
    ## 15                 Wexford                             Westmeath
    ## 16   Gusserane-O`Rahilly's                     Bannow-Ballymitty
    ## 17     St Anne's Rathangan                         Hollow Rovers
    ##    HomeScores AwayScores                                     FixtureData
    ## 1         5-8        4-6     DATE18 FebTIME10:30VENUESt. Patrick's Park 
    ## 2        3-21       1-21 DATE17 FebTIME19:00VENUESemple Stadium, Thurles
    ## 3         0-4        4-7            DATE17 FebTIME16:00VENUEGarrywilliam
    ## 4         0-7        1-5              DATE17 FebTIME14:00VENUEBellefield
    ## 5         1-2        1-5              DATE17 FebTIME14:00VENUEBlackwater
    ## 6         0-6       1-14                 DATE17 FebTIME14:00VENUECamolin
    ## 7         3-8        1-7                DATE17 FebTIME14:00VENUEMonageer
    ## 8         2-5       4-11          DATE17 FebTIME14:00VENUEFr Murphy Park
    ## 9         3-7        0-7              DATE16 FebTIME20:00VENUEHollymount
    ## 10        2-3       2-10     DATE16 FebTIME20:00VENUEParirc Ui Shiochain
    ## 11       0-12        1-7               DATE16 FebTIME20:00VENUETomnahely
    ## 12       0-10        2-6             DATE16 FebTIME20:00VENUECushinstown
    ## 13       CONC        0-0              DATE16 FebTIME20:00VENUECastleboro
    ## 14       CONC        0-0                 DATE16 FebTIME20:00VENUEKilmore
    ## 15       1-10       0-16   DATE11 FebTIME14:00VENUEInnovate Wexford Park
    ## 16       0-11        2-2               DATE11 FebTIME11:00VENUEGusserane
    ## 17        4-8        2-3               DATE11 FebTIME11:00VENUERathangan

Great, now we can use tidyr to make quick work of our messy data.

``` r
Results <- Results %>% separate(FixtureData, c("Date", "extra"), sep = "TIME", remove = T)
Results <- Results %>% separate(extra, c("Time", "Venue"), sep = "VENUE", remove = T)

Results
```

    ##                   HomeTeam                              AwayTeam
    ## 1              Shelmaliers                             St James'
    ## 2                Tipperary                               Wexford
    ## 3      Crossabeg-Ballymurn                    Oylegate-Glenbrien
    ## 4     Starlights (Hurling)                        Glynn-Barntown
    ## 5   St Brigid's Blackwater                   St Anne's Rathangan
    ## 6        CLG Naomh Pádraig                               Kilrush
    ## 7      Monageer-Boolavogue                         Realt na Mara
    ## 8       Shamrocks GAA Club                 St Mary's Maudlintown
    ## 9              Shelmaliers            Na Sairséalaigh/Sarsfields
    ## 10             Naomh Éanna HWH Bunclody/Tig Leath Slí Bun Clóidí
    ## 11              Castletown                             Kilanerin
    ## 12 Rathgarogue-Cushinstown                  St John's Volunteers
    ## 13              Cloughbawn                       Ferns St Aidans
    ## 14                 Kilmore                 Geraldine O`Hanrahans
    ## 15                 Wexford                             Westmeath
    ## 16   Gusserane-O`Rahilly's                     Bannow-Ballymitty
    ## 17     St Anne's Rathangan                         Hollow Rovers
    ##    HomeScores AwayScores       Date  Time                   Venue
    ## 1         5-8        4-6 DATE18 Feb 10:30     St. Patrick's Park 
    ## 2        3-21       1-21 DATE17 Feb 19:00 Semple Stadium, Thurles
    ## 3         0-4        4-7 DATE17 Feb 16:00            Garrywilliam
    ## 4         0-7        1-5 DATE17 Feb 14:00              Bellefield
    ## 5         1-2        1-5 DATE17 Feb 14:00              Blackwater
    ## 6         0-6       1-14 DATE17 Feb 14:00                 Camolin
    ## 7         3-8        1-7 DATE17 Feb 14:00                Monageer
    ## 8         2-5       4-11 DATE17 Feb 14:00          Fr Murphy Park
    ## 9         3-7        0-7 DATE16 Feb 20:00              Hollymount
    ## 10        2-3       2-10 DATE16 Feb 20:00     Parirc Ui Shiochain
    ## 11       0-12        1-7 DATE16 Feb 20:00               Tomnahely
    ## 12       0-10        2-6 DATE16 Feb 20:00             Cushinstown
    ## 13       CONC        0-0 DATE16 Feb 20:00              Castleboro
    ## 14       CONC        0-0 DATE16 Feb 20:00                 Kilmore
    ## 15       1-10       0-16 DATE11 Feb 14:00   Innovate Wexford Park
    ## 16       0-11        2-2 DATE11 Feb 11:00               Gusserane
    ## 17        4-8        2-3 DATE11 Feb 11:00               Rathangan

Perfect, aside from the fact that we still have "DATE" at the start of the date string, yet this is easy to rectify with gsub().

``` r
Results$Date <- gsub("DATE", "", Results$Date)
```

Let's print the results now.

``` r
Results
```

    ##                   HomeTeam                              AwayTeam
    ## 1              Shelmaliers                             St James'
    ## 2                Tipperary                               Wexford
    ## 3      Crossabeg-Ballymurn                    Oylegate-Glenbrien
    ## 4     Starlights (Hurling)                        Glynn-Barntown
    ## 5   St Brigid's Blackwater                   St Anne's Rathangan
    ## 6        CLG Naomh Pádraig                               Kilrush
    ## 7      Monageer-Boolavogue                         Realt na Mara
    ## 8       Shamrocks GAA Club                 St Mary's Maudlintown
    ## 9              Shelmaliers            Na Sairséalaigh/Sarsfields
    ## 10             Naomh Éanna HWH Bunclody/Tig Leath Slí Bun Clóidí
    ## 11              Castletown                             Kilanerin
    ## 12 Rathgarogue-Cushinstown                  St John's Volunteers
    ## 13              Cloughbawn                       Ferns St Aidans
    ## 14                 Kilmore                 Geraldine O`Hanrahans
    ## 15                 Wexford                             Westmeath
    ## 16   Gusserane-O`Rahilly's                     Bannow-Ballymitty
    ## 17     St Anne's Rathangan                         Hollow Rovers
    ##    HomeScores AwayScores   Date  Time                   Venue
    ## 1         5-8        4-6 18 Feb 10:30     St. Patrick's Park 
    ## 2        3-21       1-21 17 Feb 19:00 Semple Stadium, Thurles
    ## 3         0-4        4-7 17 Feb 16:00            Garrywilliam
    ## 4         0-7        1-5 17 Feb 14:00              Bellefield
    ## 5         1-2        1-5 17 Feb 14:00              Blackwater
    ## 6         0-6       1-14 17 Feb 14:00                 Camolin
    ## 7         3-8        1-7 17 Feb 14:00                Monageer
    ## 8         2-5       4-11 17 Feb 14:00          Fr Murphy Park
    ## 9         3-7        0-7 16 Feb 20:00              Hollymount
    ## 10        2-3       2-10 16 Feb 20:00     Parirc Ui Shiochain
    ## 11       0-12        1-7 16 Feb 20:00               Tomnahely
    ## 12       0-10        2-6 16 Feb 20:00             Cushinstown
    ## 13       CONC        0-0 16 Feb 20:00              Castleboro
    ## 14       CONC        0-0 16 Feb 20:00                 Kilmore
    ## 15       1-10       0-16 11 Feb 14:00   Innovate Wexford Park
    ## 16       0-11        2-2 11 Feb 11:00               Gusserane
    ## 17        4-8        2-3 11 Feb 11:00               Rathangan

Our job could be done now but... I want to re-order the columns and use the formattable package to display our results with a nice style.

``` r
Results <- Results[c(5,6,7,1,3,4,2)]
formattable(Results)
```
| Date | Time | Venue | HomeTeam | HomeScores | AwayTeam | AwayScores |
|---|---|---|---|---|---|---|
| 18 Feb | 10:30 | St. Patrick's Park | Shelmaliers | 5-8 | St James' | 4-6 |
| 17 Feb | 19:00 | Semple Stadium, Thurles | Tipperary | 3-21 | Wexford | 1-21 |
| 17 Feb | 16:00 | Garrywilliam | Crossabeg-Ballymurn | 0-4 | Oylegate-Glenbrien | 4-7 |
| 17 Feb | 14:00 | Bellefield | Starlights (Hurling) | 0-7 | Glynn-Barntown | 1-5 |
| 17 Feb | 14:00 | Blackwater | St Brigid's Blackwater | 1-2 | St Anne's Rathangan | 1-5 |
| 17 Feb | 14:00 | Camolin | CLG Naomh Pádraig | 0-6 | Kilrush | 1-14 |
| 17 Feb | 14:00 | Monageer | Monageer-Boolavogue | 3-8 | Realt na Mara | 1-7 |
| 17 Feb | 14:00 | Fr Murphy Park | Shamrocks GAA Club | 2-5 | St Mary's Maudlintown | 4-11 |
| 16 Feb | 20:00 | Hollymount | Shelmaliers | 3-7 | Na Sairséalaigh/Sarsfields | 0-7 |
| 16 Feb | 20:00 | Parirc Ui Shiochain | Naomh Éanna | 2-3 | HWH Bunclody/Tig Leath Slí Bun Clóidí | 2-10 |
| 16 Feb | 20:00 | Tomnahely | Castletown | 0-12 | Kilanerin | 1-7 |
| 16 Feb | 20:00 | Cushinstown | Rathgarogue-Cushinstown | 0-10 | St John's Volunteers | 2-6 |
| 16 Feb | 20:00 | Castleboro | Cloughbawn | CONC | Ferns St Aidans | 0-0 |
| 16 Feb | 20:00 | Kilmore | Kilmore | CONC | Geraldine O`Hanrahans | 0-0 |
| 11 Feb | 14:00 | Innovate Wexford Park | Wexford | 1-10 | Westmeath | 0-16 |
| 11 Feb | 11:00 | Gusserane | Gusserane-O`Rahilly's | 0-11 | Bannow-Ballymitty | 2-2 |
| 11 Feb | 11:00 | Rathangan | St Anne's Rathangan | 4-8 | Hollow Rovers | 2-3 |