├── README.md
└── leech.py

/README.md:
--------------------------------------------------------------------------------
# leechcode
A Python script that fetches details about LeetCode questions, useful for planning interview preparation.

I have created a Google Sheet from this data; it can be handy for last-minute preparation as well as revision. Hope you find it useful.

[:link: The Google Sheet](https://docs.google.com/spreadsheets/d/1KfZq_onP06UDqizkLOjraOfw2qIODDqMyd-3knUu3hA/edit?usp=sharing)

### How to use this sheet?

- First of all, ```Make a copy``` of the sheet from the ```File``` menu.
- I have sorted the rows by the Wilson score [[Source](https://www.evanmiller.org/how-not-to-sort-by-average-rating.html)], which is a better ranking metric than a simple like/dislike ratio because it also accounts for how many votes each question has received.
- All the best for your interviews!


## The script
If you want to try this out on your own, keep reading.

### How to use the script?
The script has to send requests to LeetCode as an authenticated user, so you first need to grab your own session details:

- Log in to LeetCode and open any problem.
- Open the browser's Network panel (Ctrl + Shift + E in Firefox; in Chrome, open DevTools with Ctrl + Shift + I and switch to the Network tab) and reload the page.
- Search for a ```graphql``` request in the list; a details pane will open to the side.
- Click the ```Headers``` tab and scroll down to ```Request Headers```.
- Copy the following two values and paste them into ```HEADERS``` in ```leech.py```:
  - ```Cookie```
  - ```x-csrftoken```
- Run the script: ```python3 leech.py```.


### How to edit/clean the data?
The script writes everything it fetches to ```data.json```. The [Pandas](https://pandas.pydata.org/) library is a very convenient way to load, clean, and reshape that data.
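For example, here is a minimal sketch of loading and trimming the dump (it assumes ```leech.py``` has already produced ```data.json```; the column names come from the GraphQL query in the script):

```python
import json
import pandas as pd

# data.json maps questionId -> question details, so load it as an index-oriented frame
with open("data.json") as f:
    data = json.load(f)

df = pd.DataFrame.from_dict(data, orient="index")

# Keep only the columns useful for planning; drop the long HTML problem description
df = df[["title", "difficulty", "likes", "dislikes", "isPaidOnly", "url"]]

# Example cleanup: skip premium-only questions and sort the rest by likes
df = df[~df["isPaidOnly"]].sort_values("likes", ascending=False)

df.to_csv("questions.csv", index_label="questionId")
```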

--------------------------------------------------------------------------------
/leech.py:
--------------------------------------------------------------------------------
import requests
import json


# Fill in these details to be able to send authenticated requests, more details in README

HEADERS = {
    'Cookie': '',
    'x-csrftoken': '',
}

graphql_url = "https://leetcode.com/graphql"

final_data = {}


# Get the list of all questions on LeetCode;
# we will query details for each of these questions

def get_question_list():

    response = requests.get("https://leetcode.com/api/problems/all")
    raw_data = response.json()

    stat_status_pairs = raw_data["stat_status_pairs"]

    question_title_slugs = [i["stat"]["question__title_slug"] for i in stat_status_pairs]

    return question_title_slugs


# Post a GraphQL request for a given question.
# The query lists all the details we want and
# can be modified according to your needs.

def leech(question):

    query = """
    query questionData {
        question(titleSlug: "%s") {
            questionId
            title
            titleSlug
            content
            isPaidOnly
            difficulty
            likes
            dislikes
            topicTags {
                name
                slug
            }
            stats
        }
    }
    """ % question

    response = requests.post(
        graphql_url,
        json={'query': query},
        headers=HEADERS,
    )

    data = response.json()['data']['question']
    data['url'] = "https://leetcode.com/problems/" + data['titleSlug']

    final_data[data['questionId']] = data


question_list = get_question_list()


# Call the leech function for every question and print progress

for i, question in enumerate(question_list):

    leech(question)
    print(f"Fetched {i + 1}/{len(question_list)} questions")


# Dump the collected dictionary to a JSON file

with open("data.json", "w") as file:
    json.dump(final_data, file)
--------------------------------------------------------------------------------
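The Google Sheet described in the README ranks questions by the Wilson score lower bound rather than the raw like/dislike ratio. ```leech.py``` itself does not compute this score; the sketch below shows one way it could be derived from the ```likes``` and ```dislikes``` fields in ```data.json```, following the formula from the linked article (the 95% confidence level, i.e. z = 1.96, is an assumption):

```python
import json
import math


def wilson_lower_bound(likes, dislikes, z=1.96):
    """Lower bound of the Wilson score interval (z = 1.96 ~ 95% confidence)."""
    n = likes + dislikes
    if n == 0:
        return 0.0
    phat = likes / n
    return (
        phat + z * z / (2 * n)
        - z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    ) / (1 + z * z / n)


with open("data.json") as f:
    questions = json.load(f)

# Rank all fetched questions by their Wilson lower bound, best first
ranked = sorted(
    questions.values(),
    key=lambda q: wilson_lower_bound(q["likes"], q["dislikes"]),
    reverse=True,
)

for q in ranked[:10]:
    print(q["title"], q["url"])
```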