├── e7
│   ├── top.jpg
│   └── bottom.jpg
├── screenshots
│   └── test_screenshot.png
├── README.md
└── E7 Gear OCR.ipynb

/e7/top.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wsauret/Epic7-OCR-Script/HEAD/e7/top.jpg
--------------------------------------------------------------------------------
/e7/bottom.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wsauret/Epic7-OCR-Script/HEAD/e7/bottom.jpg
--------------------------------------------------------------------------------
/screenshots/test_screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wsauret/Epic7-OCR-Script/HEAD/screenshots/test_screenshot.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## Directions for Windows
2 | 
3 | ###### Notice
4 | 
5 | * I started a new job, so I no longer have much free time. As a result, I have quit playing Epic Seven. This means I no longer have access to gear screenshots to test/debug the script when issues arise. It also means I don't really have enough free time to keep this script updated on my own.
6 | 
7 | * Happily, it seems we have an active community using this script, so if things break in the future, I will be happy to merge pull requests from you guys to keep it running. If you don't know how to do a pull request but have a proposed code update, just open an issue and paste the new code, and I'll manually merge it into the main branch.
8 | 
9 | ###### Screenshots
10 | 
11 | * To use this script you will need screenshots of all the gear you want to import into the optimizer. The resolution of the screenshots must be **2200x1080**.
This is an uncommon resolution (most phones do not match it), so I suggest that you take your screenshots via an **approved emulator** set to an internal resolution of **2200x1080**. You can verify the resolution of your screenshots by examining them in a photo app.
12 | 
13 | * **Approved Emulators**: Nox and LDPlayer. These will take screenshots at the internal resolution (2200x1080). Test other emulators at your own risk. If you have success with another emulator, feel free to open an [Issue](https://github.com/compeanansi/epic7/issues) and I will add it to the approved list!
14 | 
15 | * **Emulators that do NOT work**: Bluestacks and MuMu. Neither takes screenshots at the internal resolution. Instead, the screenshot resolution is determined by the size of the window. DO NOT USE THEM. Their screenshots are unfortunately not compatible with this script.
16 | 
17 | * Tip: When taking a screenshot, be sure to **tap** on each piece of gear to bring up its stats before screenshotting. Do not hold your finger down on each one, as the placement of the item box is different between tapping and holding.
18 | 
19 | ### Installation
20 | 
21 | ###### Python Setup
22 | 
23 | 1. Download and install the **64 bit** Anaconda Python 3.x distribution for Windows: https://www.anaconda.com/products/individual
24 | 
25 | 2. Download and install the **64 bit** Tesseract-OCR: https://github.com/UB-Mannheim/tesseract/wiki
26 | 
27 | You should install it to the default directory unless you have a good reason not to.
28 | 
29 | 3. The Anaconda Python distribution will have installed a launcher named ``Anaconda Navigator``. Run it, then launch ``Powershell Prompt`` from the tile menu. Copy and paste the two lines below into the prompt, one at a time, pressing Enter after each.
30 | 
31 | ``pip install pytesseract``
32 | 
33 | ``pip install opencv-python``
34 | 
35 | You should see "Successfully installed [package name]" after each. Close the prompt when done.
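As an optional extra once Python is set up (this snippet is my illustration and is not part of the repository), you can confirm a screenshot really is 2200x1080 without opening a photo app. The standard library is enough, since a PNG stores its width and height in the IHDR chunk right after the file signature:

```python
import struct

def png_resolution(path):
    """Read a PNG's width and height straight from its IHDR header."""
    with open(path, "rb") as f:
        header = f.read(24)
    # A PNG starts with an 8-byte signature; the IHDR chunk that follows
    # stores width and height as big-endian 32-bit integers at bytes 16-23.
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    return struct.unpack(">II", header[16:24])

# Example (use the path to one of your own screenshots):
# w, h = png_resolution("screenshots/test_screenshot.png")
```

Any screenshot for which this returns something other than ``(2200, 1080)`` will be rejected by the script.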
36 | 
37 | ###### Download and Test the Script
38 | 
39 | 4. Download the files in this repository. To do this, scroll up to the top of this webpage and click the green ``Code`` button, then click ``Download ZIP``. Extract the folder somewhere (I extracted it to the Downloads folder since that was where the ZIP file already was).
40 | 
41 | 5. Return to ``Anaconda Navigator``. Launch ``JupyterLab`` from the tile menu. This will open a browser window. Once it is loaded, look at the sidebar on the left. Navigate to where you extracted the folder (in my case: ``Downloads/epic7-master``). Double-click on ``E7 Gear OCR.ipynb``. This will open the code for the OCR script.
42 | 
43 | 6. Let's do a test run! In the menu at the top, click ``Run > Run All Cells``. If you scroll down to the bottom of the window, you'll see a text progress indicator. When you see ``JSON file finished!`` you should see a new file in the sidebar: ``exported_gear.json``. If that file is present and you didn't get any error messages, then the test worked!
44 | 
45 | Note: If you are not using Windows, you will need to comment out the two Windows-specific lines in the settings cell at the top. Otherwise the test run will fail.
46 | 
47 | ###### Copy Screenshots & Run
48 | 
49 | 7. Delete the sample image in the screenshots folder (``epic7-master/screenshots``) that we just tested the script on.
50 | 
51 | 8. Copy your 2200x1080 gear screenshots to ``epic7-master/screenshots`` from wherever you have them stored.
52 | 
53 | 9. Go back to the ``JupyterLab`` browser tab and do ``Run > Run All Cells`` again. This will overwrite ``exported_gear.json`` with one that has all your gear in it. Congratulations! You're done with this script.
54 | 
55 | ###### Optimizer
56 | 
57 | * Make sure you have the latest version of the optimizer here: https://github.com/Zarroc2762/E7-Gear-Optimizer/releases.
58 | 
59 | * After launching the optimizer, select ``Import JSON from web optimizer (/u/HyrTheWinter)``.
Then click the ``Import`` button, browse to the folder that ``exported_gear.json`` is in, and load it. You should see green text at the bottom saying ``Succesfully imported 0 heroes and X items...`` where X is the number of screenshots you took. Congratulations! You've imported your gear into the optimizer.
60 | 
61 | ### Next Steps
62 | 
63 | ###### Sanity Checks
64 | 
65 | * I highly recommend doing some quick sanity checks of the imported gear to ensure the script didn't make any mistakes.
66 | 
67 | * The most common error with this OCR library is adding a '7' to the end of the recognized number. So a Spd sub of 10 could be recorded as 107. Go to the Inventory tab in the optimizer and click each stat column heading to sort the values. Just make sure that none of the highest values are crazy. You may also want to keep an eye out for incorrect ilvls and enhance lvls.
68 | 
69 | * If everything looks good, then congratulations! You're ready to start optimizing your gear.
70 | 
71 | ###### Using in the Future
72 | 
73 | 1. In the future when you want to refresh your gear, you should first check this GitHub repository again to see if I updated ``E7 Gear OCR.ipynb``. If I did, download it and replace your old copy.
74 | 
75 | 2. Then all you need to do is copy your new screenshots into the screenshots folder, open ``E7 Gear OCR.ipynb`` in ``JupyterLab`` (via ``Anaconda Navigator``), then ``Run > Run All Cells``. This will give you your new JSON file.
76 | 
77 | 3. Finally, when you go to import your new JSON into the optimizer, you have a choice:
78 | 
79 | * If you **don't** want to keep the heroes you previously added to the optimizer, then just import the JSON as above.
80 | 
81 | * If you **do** want to keep your heroes, I suggest that you first delete all the items in the optimizer (go to the Inventory tab, highlight all the items, click the ``-`` button). Then on the General tab click the ``Append`` button (instead of ``Import``) and load your new JSON.
This will add your up to date gear into the optimizer while keeping your heroes. 82 | -------------------------------------------------------------------------------- /E7 Gear OCR.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "# Adjust the settings in the next cell if necessary" 10 | ] 11 | }, 12 | { 13 | "cell_type": "code", 14 | "execution_count": 2, 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 18 | "import pytesseract\n", 19 | "\n", 20 | "# COMMENT if NOT using windows (i.e., add # before both lines)\n", 21 | "pytesseract.pytesseract.tesseract_cmd = \"C:/Program Files/Tesseract-OCR/tesseract\"\n", 22 | "TESSDATA_PREFIX = r\"C:\\Program Files\\Tesseract-OCR\"\n", 23 | "\n", 24 | "# UNCOMMENT if using latest tesseract version on mac via 'brew install tesseract'\n", 25 | "#pytesseract.pytesseract.tesseract_cmd = \"/usr/local/Cellar/tesseract/4.1.1/bin/tesseract\"\n", 26 | "\n", 27 | "# Change to False if you always want your actual main stat value\n", 28 | "assume_max_lv_gear = True" 29 | ] 30 | }, 31 | { 32 | "cell_type": "code", 33 | "execution_count": 3, 34 | "metadata": {}, 35 | "outputs": [], 36 | "source": [ 37 | "# Do not change anything below here unless you know what you are doing" 38 | ] 39 | }, 40 | { 41 | "cell_type": "code", 42 | "execution_count": 4, 43 | "metadata": { 44 | "colab": {}, 45 | "colab_type": "code", 46 | "id": "H7XiclNuE3fz" 47 | }, 48 | "outputs": [], 49 | "source": [ 50 | "from PIL import Image\n", 51 | "from pytesseract import image_to_string\n", 52 | "from IPython.display import display\n", 53 | "\n", 54 | "def process(image):\n", 55 | " resize = cv2.resize(image, (0,0), fx=5, fy=5)\n", 56 | " gray = cv2.cvtColor(resize, cv2.COLOR_BGR2GRAY) # convert to gray for thresholding + blurring\n", 57 | " thresh = cv2.threshold(gray, 70, 255, 
cv2.THRESH_BINARY_INV)[1]\n", 58 | " blur = cv2.medianBlur(thresh, 3)\n", 59 | " color = cv2.cvtColor(blur, cv2.COLOR_GRAY2RGB) # convert back to color\n", 60 | " color_img = Image.fromarray(color) # convert from array back into image format\n", 61 | " config = '--psm 6 --oem 1' # PSM 6 = treat as uniform block of text\n", 62 | " data = image_to_string(color_img, lang='eng', config=config)\n", 63 | " return data\n", 64 | "\n", 65 | "def iterative_process(image):\n", 66 | " # Since level and plus are read off the gear icon, there is a lot of noise\n", 67 | " # Thus, we need to use an iterative process to try different thresholding settings to extract a value from them\n", 68 | " resize = cv2.resize(image, (0,0), fx=5, fy=5)\n", 69 | " gray = cv2.cvtColor(resize, cv2.COLOR_RGB2GRAY) # convert to gray for thresholding + blurring\n", 70 | " for low in [80,100,120,140,160]:\n", 71 | " thresh = cv2.threshold(gray, low, 255, cv2.THRESH_BINARY_INV|cv2.THRESH_OTSU)[1]\n", 72 | " blur = cv2.medianBlur(thresh, 3)\n", 73 | " color = cv2.cvtColor(blur, cv2.COLOR_GRAY2RGB) # convert back to color\n", 74 | " color_img = Image.fromarray(color) # convert from array back into image format\n", 75 | " config = '--psm 8 --oem 1 -c tessedit_char_whitelist=0123456789' # PSM 8 = treat as single word / whitelist only allows digits\n", 76 | " data = image_to_string(color_img, lang='eng', config=config)\n", 77 | " #display(color_img)\n", 78 | " #print(low,data)\n", 79 | " if any(i.isdigit() for i in data): return data # If it finds a value it will terminate the loop and return the value\n", 80 | " if Image.fromarray(image).size[0] == plus[3]-plus[2]: return '0' # return 0 if the OCR fails to recognize the plus\n", 81 | " else: return '1' # return 1 if the OCR fails to recognize the level\n", 82 | "\n", 83 | "def stat_converter(stat):\n", 84 | " result = '' # default value to return just in case the stat data is bad\n", 85 | " if 'attack' in stat.lower():\n", 86 | " result = 'Atk'\n", 
87 | " if '%' in stat: result += 'P'\n", 88 | " elif 'health' in stat.lower():\n", 89 | " result = 'HP'\n", 90 | " if '%' in stat: result += 'P'\n", 91 | " elif 'defense' in stat.lower():\n", 92 | " result = 'Def'\n", 93 | " if '%' in stat: result += 'P'\n", 94 | " elif 'speed' in stat.lower():\n", 95 | " result = 'Spd'\n", 96 | " elif 'chance' in stat.lower():\n", 97 | " result = 'CChance'\n", 98 | " elif 'damage' in stat.lower():\n", 99 | " result = 'CDmg'\n", 100 | " elif 'effectiveness' in stat.lower():\n", 101 | " result = 'Eff'\n", 102 | " elif 'resistance' in stat.lower():\n", 103 | " result = 'Res'\n", 104 | " return result\n", 105 | " \n", 106 | "\n", 107 | "def digit_filter(val):\n", 108 | " # This attempts to filter out all characters that are not digits\n", 109 | " # If there are no digits, then it returns 0\n", 110 | " try: return int(''.join(filter(str.isdigit,str(val))))\n", 111 | " except ValueError: return 0\n", 112 | "\n", 113 | "def char_filter(val):\n", 114 | " # This attempts to filter out all characters that are not letters\n", 115 | " # If there are no letters, then it returns ''\n", 116 | " try: return ''.join(filter(str.isalpha,str(val))).capitalize()\n", 117 | " except ValueError: return ''\n", 118 | "\n", 119 | "def max_stat(data,item):\n", 120 | " stat = stat_converter(data)\n", 121 | " val = digit_filter(data) # Begin by setting val = actual value so that it gets returned if the rest fails\n", 122 | " if item['ability'] < 15: # Only change stats on items where they need to be increased\n", 123 | " if item['level'] in range(58,73):\n", 124 | " if stat == 'CChance': val = 45\n", 125 | " elif stat == 'CDmg': val = 55\n", 126 | " elif stat == 'Spd': val = 35\n", 127 | " elif item['slot'] in ('Necklace', 'Ring', 'Boots'): val = 50\n", 128 | " elif stat == 'HP': val = 2295 # Not exactly right, finer-grained scaling\n", 129 | " elif stat == 'Def': val = 250 # Not exactly right, finer-grained scaling\n", 130 | " elif stat == 'Atk': val = 
425 # Not exactly right, finer-grained scaling\n", 131 | " elif item['level'] in range(74,86):\n", 132 | " if stat == 'CChance': val = 55\n", 133 | " elif stat == 'CDmg': val = 65\n", 134 | " elif stat == 'Spd': val = 40\n", 135 | " elif item['slot'] in ('Necklace', 'Ring', 'Boots'): val = 60\n", 136 | " elif stat == 'HP': val = 2700 # Not exactly right, finer-grained scaling\n", 137 | " elif stat == 'Def': val = 300 # Not exactly right, finer-grained scaling\n", 138 | " elif stat == 'Atk': val = 500 # Not exactly right, finer-grained scaling\n", 139 | " elif item['level'] in range(87,89):\n", 140 | " if stat == 'CChance': val = 60\n", 141 | " elif stat == 'CDmg': val = 70\n", 142 | " elif stat == 'Spd': val = 45\n", 143 | " elif item['slot'] in ('Necklace', 'Ring', 'Boots'): val = 65\n", 144 | " elif stat == 'HP': val = 2765\n", 145 | " elif stat == 'Def': val = 310\n", 146 | " elif stat == 'Atk': val = 515\n", 147 | " return val" 148 | ] 149 | }, 150 | { 151 | "cell_type": "code", 152 | "execution_count": 5, 153 | "metadata": { 154 | "colab": {}, 155 | "colab_type": "code", 156 | "id": "DBct7X1IE3f4", 157 | "scrolled": true, 158 | "tags": [] 159 | }, 160 | "outputs": [ 161 | { 162 | "name": "stdout", 163 | "output_type": "stream", 164 | "text": [ 165 | "Beginning export process...\n", 166 | "currently processing screenshots/test_screenshot.png\n", 167 | "ilvl 88 / enhanced to +15 / slot: Weapon / rarity: Epic\n", 168 | "1 exported out of 1 valid items \n", 169 | "\n", 170 | "JSON file finished!\n" 171 | ] 172 | } 173 | ], 174 | "source": [ 175 | "from glob import glob\n", 176 | "from string import ascii_lowercase, digits\n", 177 | "import random\n", 178 | "import json\n", 179 | "import cv2\n", 180 | "\n", 181 | "# Format the json for the optimizer\n", 182 | "export = {'processVersion':'1','heroes':[],'items':[]}\n", 183 | "temp_list = []\n", 184 | "\n", 185 | "print('Beginning export process...')\n", 186 | "\n", 187 | "# Gather the filenames for all the 
screenshots. Looks for png, jpg, and jpeg.\n", 188 | "filenames = glob('screenshots/*.png')+glob('screenshots/*.jpg')+glob('screenshots/*.jpeg')\n", 189 | "\n", 190 | "did_break = False\n", 191 | "errors = 0\n", 192 | "for name in filenames:\n", 193 | " \n", 194 | " # The height of the item box changes depending on the length of the item and set descriptions,\n", 195 | " # so we have to crop the top and bottom info separately in order to ensure the OCR boxes within these areas\n", 196 | " # remain in fixed locations. We then process the top and bottom info independently.\n", 197 | " \n", 198 | " print('currently processing '+name)\n", 199 | " screenshot = cv2.imread(name)\n", 200 | " \n", 201 | " # Make sure the screenshot has the proper dimensions\n", 202 | " if screenshot.shape[0] != 1080 or screenshot.shape[1] != 2200:\n", 203 | " print('ERROR: This screenshot is not 2200x1080!')\n", 204 | " print('Please retake the screenshot in Nox or LDPlayer with the internal resolution set to 2200x1080')\n", 205 | " did_break = True\n", 206 | " break\n", 207 | " \n", 208 | " # Set up the dictionary for the current item\n", 209 | " item = {'locked':False,'efficiency':0}\n", 210 | " \n", 211 | " # Settings for how the boxes are cropped. If the UI changes, these may need to be updated\n", 212 | " " width = [725,1190] # Same width for both boxes. 
Based on width of gearbox\n", 213 | " top_depth = 160 # This is how many pixels deep the top cropped image will be\n", 214 | " bottom_offset = 25 # This is how many pixels we start the bottom box below the divider in the gear screenshot\n", 215 | " bottom_depth = 340 # This is how many pixels deep the bottom cropped image will be\n", 216 | " \n", 217 | " # Top box\n", 218 | " template_top = cv2.imread('e7/top.jpg',0)\n", 219 | " gray = cv2.cvtColor(screenshot, cv2.COLOR_BGR2GRAY) # convert screenshot to grayscale\n", 220 | " match = cv2.matchTemplate(gray, template_top, cv2.TM_CCOEFF_NORMED) # find the template on the screenshot\n", 221 | " a, b, c, max_loc = cv2.minMaxLoc(match) # extract the location of the match (max_loc), tossing the rest\n", 222 | " X1 = max_loc[1]\n", 223 | " X2 = max_loc[1]+top_depth\n", 224 | " Y1 = width[0]\n", 225 | " Y2 = width[1]\n", 226 | " top_box = screenshot[X1:X2,Y1:Y2] # crop the box at those pixel locations\n", 227 | " #display(Image.fromarray(top_box)) # uncomment to see what the top_box looks like\n", 228 | " #Image.fromarray(top_box).save(\"top_box.png\") # uncomment to export top_box as png\n", 229 | "\n", 230 | " # Bottom box\n", 231 | " template_bot = cv2.imread('e7/bottom.jpg',0)\n", 232 | " gray = cv2.cvtColor(screenshot, cv2.COLOR_BGR2GRAY) # convert screenshot to grayscale\n", 233 | " match = cv2.matchTemplate(gray, template_bot, cv2.TM_CCOEFF_NORMED) # find the template on the screenshot\n", 234 | " a, b, c, max_loc = cv2.minMaxLoc(match) # extract the location of the match (max_loc), tossing the rest\n", 235 | " X1 = max_loc[1]+bottom_offset\n", 236 | " X2 = max_loc[1]+bottom_offset+bottom_depth\n", 237 | " Y1 = width[0]\n", 238 | " Y2 = width[1]\n", 239 | " bottom_box = screenshot[X1:X2,Y1:Y2] # crop the box at those pixel locations\n", 240 | " #display(Image.fromarray(bottom_box)) # uncomment to see what the bottom_box looks like\n", 241 | " #Image.fromarray(bottom_box).save(\"bottom_box.png\") # uncomment to export 
bottom_box as png\n", 242 | " \n", 243 | " # Coordinates for the info inside the top_box. If the UI changes, these may need to be updated\n", 244 | " # Coordinates are *relative* to the dimensions of top_box\n", 245 | " # Format: stat = [X1,X2,Y1,Y2] as this is the format used for slicing images\n", 246 | " types = [20,71,172,432]\n", 247 | " level = [19,47,35,69]\n", 248 | " plus = [0,28,132,161]\n", 249 | " \n", 250 | " # Process stats from the Top box\n", 251 | " # Type\n", 252 | " type_img = top_box[types[0]:types[1],types[2]:types[3]]\n", 253 | " #display(Image.fromarray(type_img)) # uncomment to see what the type_img looks like\n", 254 | " #Image.fromarray(type_img).save(\"type_img.png\") # uncomment to export type_img as png\n", 255 | " data = process(type_img)\n", 256 | " split_data = data.split(' ')\n", 257 | " item['rarity'] = char_filter(split_data[0])\n", 258 | " item['slot'] = char_filter(split_data[1].split('\\n')[0])\n", 259 | " if len(item['rarity']) == 0: print('Error: no rarity detected')\n", 260 | " if len(item['slot']) == 0: print('Error: no slot detected')\n", 261 | " \n", 262 | " # Level\n", 263 | " lvl_img = top_box[level[0]:level[1],level[2]:level[3]]\n", 264 | " #display(Image.fromarray(lvl_img)) # uncomment to see what the lvl_img looks like\n", 265 | " #Image.fromarray(lvl_img).save(\"lvl_img.png\") # uncomment to export lvl_img as png\n", 266 | " data = iterative_process(lvl_img)\n", 267 | " #data = data.replace('S','5').replace('B','8').replace('a','8') # Fix common OCR errors\n", 268 | " data = digit_filter(data)\n", 269 | " item['level'] = data\n", 270 | " \n", 271 | " # Enhance lvl (plus)\n", 272 | " plus_img = top_box[plus[0]:plus[1],plus[2]:plus[3]]\n", 273 | " #display(Image.fromarray(plus_img)) # uncomment to see what the plus_img looks like\n", 274 | " #Image.fromarray(plus_img).save(\"plus_img.png\") # uncomment to export plus_img as png\n", 275 | " data = iterative_process(plus_img)\n", 276 | " #data = 
data.replace('S','5').replace('B','8').replace('a','8') # Fix common OCR errors\n", 277 | " data = digit_filter(data)\n", 278 | " if data > 15: data = 15\n", 279 | " item['ability'] = data\n", 280 | "\n", 281 | " print(\"ilvl\", item['level'],\"/ enhanced to +\"+str(item['ability'])+\" / slot:\",item['slot'],\" / rarity:\",item['rarity'])\n", 282 | " \n", 283 | " # Coordinates for the info inside the bottom_box. If the UI changes, these may need to be updated\n", 284 | " # Coordinates are *relative* to the dimensions of bottom_box\n", 285 | " # Format: stat = [X1,X2,Y1,Y2] as this is the format used for slicing images\n", 286 | " main = [8,70,65,435]\n", 287 | " subs = [98,255,25,435]\n", 288 | " sets = [280,340,76,435]\n", 289 | " \n", 290 | " # Process stats from Bottom box \n", 291 | " # Main\n", 292 | " main_img = bottom_box[main[0]:main[1],main[2]:main[3]]\n", 293 | " #display(Image.fromarray(main_img)) # uncomment to see what the main_img looks like\n", 294 | " #Image.fromarray(main_img).save(\"main_img.png\") # uncomment to export main_img as png\n", 295 | " data = process(main_img)\n", 296 | " stat = stat_converter(data)\n", 297 | " if assume_max_lv_gear is True: item['mainStat'] = [stat,max_stat(data,item)]\n", 298 | " else: item['mainStat'] = [stat,digit_filter(data)]\n", 299 | " \n", 300 | " # Subs\n", 301 | " subs_img = bottom_box[subs[0]:subs[1],subs[2]:subs[3]]\n", 302 | " #display(Image.fromarray(subs_img)) # uncomment to see what the subs_img looks like\n", 303 | " #Image.fromarray(subs_img).save(\"subs_img.png\") # uncomment to export subs_img as png\n", 304 | " data = process(subs_img)\n", 305 | " for n,entry in enumerate(data.split('\\n')):\n", 306 | " if n < 4:\n", 307 | " stat = stat_converter(entry)\n", 308 | " entry = entry.replace('T','7') # Fix common OCR error\n", 309 | " val = digit_filter(entry)\n", 310 | " if len(stat) > 0 and val != 0:\n", 311 | " item['subStat'+str(n+1)] = [stat,val]\n", 312 | " \n", 313 | " # Set\n", 314 | " set_img 
= bottom_box[sets[0]:sets[1],sets[2]:sets[3]]\n", 315 | " #display(Image.fromarray(set_img)) # uncomment to see what the set_img looks like\n", 316 | " #Image.fromarray(set_img).save(\"set_img.png\") # uncomment to export set_img as png\n", 317 | " data = process(set_img)\n", 318 | " data_split = data.split(' Set')\n", 319 | " item['set'] = char_filter(data_split[0])\n", 320 | " if len(item['set']) == 0: print('Error: no set detected')\n", 321 | " \n", 322 | " #print(\"set:\", item['set'],\" / main stat:\",item['mainStat'])\n", 323 | " \n", 324 | " # Check to make sure the item does not already exist in the item dictionary\n", 325 | " # Also verifies that it has a valid slot and rarity entry\n", 326 | " # If these conditions are met, this assigns the item a unique ID and adds it to the item dictionary for export\n", 327 | " if item not in temp_list and len(item['slot']) > 0 and len(item['rarity']) > 0:\n", 328 | " temp_list.append(item)\n", 329 | " item['id'] = 'jt'+''.join(random.choice(digits+ascii_lowercase) for _ in range(6))\n", 330 | " export['items'].append(item)\n", 331 | " print(len(export['items']),\"exported out of\",len(filenames)-errors,'valid items \\n')\n", 332 | " else:\n", 333 | " errors += 1\n", 334 | " print('Item not exported because of fatal errors in processing it \\n')\n", 335 | "\n", 336 | "# Export dictionary to json for importing into optimizer if everything completed properly\n", 337 | "if not did_break:\n", 338 | " with open('exported_gear.json', 'w') as f: json.dump(export, f)\n", 339 | " print('JSON file finished!')" 340 | ] 341 | }, 342 | { 343 | "cell_type": "code", 344 | "execution_count": null, 345 | "metadata": {}, 346 | "outputs": [], 347 | "source": [] 348 | } 349 | ], 350 | "metadata": { 351 | "accelerator": "GPU", 352 | "colab": { 353 | "collapsed_sections": [], 354 | "name": "Copy of E7 Gear OCR (Official)", 355 | "private_outputs": true, 356 | "provenance": [], 357 | "toc_visible": true 358 | }, 359 | "kernelspec": { 360 | 
"display_name": "Python 3", 361 | "language": "python", 362 | "name": "python3" 363 | }, 364 | "language_info": { 365 | "codemirror_mode": { 366 | "name": "ipython", 367 | "version": 3 368 | }, 369 | "file_extension": ".py", 370 | "mimetype": "text/x-python", 371 | "name": "python", 372 | "nbconvert_exporter": "python", 373 | "pygments_lexer": "ipython3", 374 | "version": "3.8.3" 375 | } 376 | }, 377 | "nbformat": 4, 378 | "nbformat_minor": 4 379 | } 380 | --------------------------------------------------------------------------------