├── .gitignore
├── README.md
├── batch_add
│   ├── README.md
│   ├── add_asset.py
│   └── example_add.csv
├── batch_mint_arc69
│   ├── README.md
│   ├── config_mint.py
│   ├── example.csv
│   └── mint_arc69.py
├── batch_remove
│   ├── README.md
│   ├── example_remove.csv
│   └── remove_asset.py
├── batch_update_arc69
│   ├── README.md
│   ├── example_data
│   │   ├── arc69_data
│   │   │   ├── 43432496.json
│   │   │   └── 43432985.json
│   │   └── example_NFT.csv
│   └── update_meta_arc69.py
├── example.settings.yaml
├── fetch_arc69
│   ├── README.md
│   └── fetch_arc69.py
├── fetch_holders
│   ├── README.md
│   └── fetch_holders.py
├── lib
│   ├── algod_helper.py
│   ├── file_helper.py
│   └── settings.py
├── requirements.txt
└── thumbnail.png

/.gitignore:
--------------------------------------------------------------------------------
fetch_arc69/output_path
output_path
output
__pycache__
settings.yaml
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Video to get you started

[![Watch the video](https://imgur.com/03peyg8.png)](https://youtu.be/6luGVcjB4qk)

# Overview

This repository provides guides and scripts to batch mint, transfer and update Algorand NFTs using Python.
All these scripts and guides are written in my spare time and come with no warranty of any kind whatsoever.
Always run things on the testnet first, and it is strongly recommended to refer to the official Algorand documentation beforehand: https://developer.algorand.org/docs/features/asa/

## Available:
### a) Batch minting NFTs following ARC69
### b) Batch updating existing NFTs to ARC69
### c) Batch opt-in to ASAs
### d) Fetch all holders of assets created by a given address
### e) Fetch all asset information from an account as json or csv
### f) Batch opt-out of ASAs with zero balance

## In Progress:
### a) Batch transfer ASAs

## Other resources:
### a) Random NFT art generator using Python:
https://github.com/Jon-Becker/nft-generator-py

### b) Random NFT art generator using JavaScript:
https://github.com/HashLips/hashlips_art_engine

# Pre-Requirements

## 1) Install Python

If you do not already have Python installed, install Python using Anaconda: https://www.anaconda.com/products/individual. Installing just Miniconda is fine, as we will not need the other packages. Alternatively, if you're going to use Visual Studio Code (see the next step), you can also download and install Python from https://www.python.org/downloads/.

## 2) Install a Python IDE
We will use the Spyder IDE to update and run the scripts; if you develop in multiple languages, https://code.visualstudio.com/ is a great alternative. Spyder can be installed by opening the Anaconda terminal and running the following:

```conda install spyder```


## 3) Install Python dependencies

These scripts require some dependencies which have to be installed before running.

They can be installed using [PIP](https://pypi.org/), by opening your terminal and running the following:

```pip3 install -r requirements.txt```

# General Settings

In order to adjust the scripts to your needs, it's necessary to create a `settings.yaml`. To do so, rename the existing `example.settings.yaml` to `settings.yaml` by removing the `example.` prefix.
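For example, from a terminal in the repository root (a minimal sketch assuming a Unix-like shell; on Windows you can rename the file in Explorer instead):

```
mv example.settings.yaml settings.yaml
```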
Two settings in there are shared between all scripts:

```
testnet: true

default_output_folder: "output"
```

It's recommended to use the testnet in the beginning. Testnet Algos can be acquired here: https://bank.testnet.algorand.network/. Once you're sure everything works as expected, you can set:

`testnet: false`

By default, the output of all scripts will be written inside this repository folder (using the `default_output_folder` set above). If you wish for your output to be generated somewhere else, you can adjust that setting accordingly:

`default_output_folder: "c:/somewhere/here"`


## Further Notice
For more information on how to set up each script, look at the given examples in your `settings.yaml` or the README.md of the corresponding script. You only need to set up the scripts you're going to use.

Finally, you can check the validity of your `settings.yaml` with http://www.yamllint.com/. Note also that it's necessary to use forward slashes `/` instead of backslashes in all path configurations.

## ARC19 minting scripts
See: https://github.com/IzzyCapNFT/algorand-arc19-auto-mint (note we are not affiliated in any way, so use at your own risk)
--------------------------------------------------------------------------------
/batch_add/README.md:
--------------------------------------------------------------------------------
# Batch adding NFTs using Python

# Preparing the script

## Adjust settings.yaml

For this pipeline, the following section must be set in your `settings.yaml`; it is loaded by the main script at runtime.

```
# settings.yaml

batch_add:
  mnemonic1: "wreck floor carbon during taste illegal cover amused staff middle firm surface daughter pool lab update steel trophy dad twenty near kite boss abstract lens"
  input_path: "e:/algoNFTs/batch_add/example_add.csv"
```

### a) mnemonic1

This is your Algorand key. Included above is a testnet account containing no real Algos. In reality this should not be shared with ANYONE.

### b) input_path

This variable should point to the csv containing the assets to be added. It should contain no header, only the asset IDs to be added (one per line).


# Running the script

Once you have defined your `settings.yaml`, run the script from the terminal or using your favourite IDE (open the file and press F5 in Spyder).
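For example, a minimal way to run it from a terminal in the repository root (assuming `python3` is on your PATH):

```
python3 batch_add/add_asset.py
```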
29 | -------------------------------------------------------------------------------- /batch_add/add_asset.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | """ 4 | Created on Thu Sep 9 19:11:29 2021 5 | 6 | @author: phyto 7 | """ 8 | from algosdk.future.transaction import AssetTransferTxn 9 | import os, sys, inspect 10 | current_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) 11 | parent_dir = os.path.dirname(current_dir) 12 | sys.path.insert(0, parent_dir) 13 | from lib.settings import Settings 14 | from lib.algod_helper import wait_for_confirmation, print_asset_holding 15 | 16 | 17 | settings = Settings('batch_add') 18 | algod_client = settings.get_algod_client() 19 | sk = settings.get_private_key() 20 | pk = settings.get_public_key() 21 | 22 | 23 | def add_asset(asset_id): 24 | asset_id = int(asset_id) 25 | 26 | # Check if asset_id is in account 3's asset holdings prior 27 | # to opt-in 28 | params = algod_client.suggested_params() 29 | # comment these two lines if you want to use suggested params 30 | params.fee = 1000 31 | params.flat_fee = True 32 | 33 | account_info = algod_client.account_info(pk) 34 | holding = 0 35 | idx = 0 36 | for my_account_info in account_info['assets']: 37 | scrutinized_asset = account_info['assets'][idx] 38 | idx = idx + 1 39 | if (scrutinized_asset['asset-id'] == asset_id): 40 | holding = True 41 | print(f"asset {str(asset_id)} already added") 42 | break 43 | 44 | if not holding: 45 | # Use the AssetTransferTxn class to transfer assets and opt-in 46 | txn = AssetTransferTxn( 47 | sender=pk, 48 | sp=params, 49 | receiver=pk, 50 | amt=0, 51 | index=asset_id) 52 | stxn = txn.sign(sk) 53 | txid = algod_client.send_transaction(stxn) 54 | print(txid) 55 | # Wait for the transaction to be confirmed 56 | wait_for_confirmation(algod_client, txid) 57 | # Now check the asset holding for that account. 58 | # This should now show a holding with a balance of 0. 59 | print_asset_holding(algod_client, pk, asset_id) 60 | 61 | 62 | with open(settings.input_path) as csv_file: 63 | data = [line.rstrip() for line in csv_file] 64 | 65 | 66 | for n in range(0, len(data)): 67 | add_asset(data[n]) 68 | -------------------------------------------------------------------------------- /batch_add/example_add.csv: -------------------------------------------------------------------------------- 1 | 49095914 2 | 49095927 3 | 49095948 4 | 49096236 5 | -------------------------------------------------------------------------------- /batch_mint_arc69/README.md: -------------------------------------------------------------------------------- 1 | # Batch minting algoNFTs following the ARC69 metadata standard using Python 2 | 3 | # Pre-requirements 4 | This guide will walk you through batch minting NFTs on Algorand following [ARC69](https://github.com/algokittens/arc69) using Python. No prior experience with Python is assumed, but you will be required to make some changes to the Python script to suit your requirements. 5 | This pipeline should only be ran on a secure machine, and we recommend checking out the official algorand documentation beforehand: https://developer.algorand.org/docs/get-details/asa/ 6 | 7 | Please note that I write these guides and scripts in my spare time and they come with no warranty of any kind whatsoever. 8 | 9 | ## 1) Pinata account (connecting to ipfs) 10 | 11 | Go to https://www.pinata.cloud/ and sign up for a free account. 
Once logged in, go to "YOUR API KEYS", then "create a new key", enable admin rights, provide an appropriate key name, and create. Copy your keys to a secure location as they will only be shown once. You will need the "API KEY" and "API Secret" later in this guide.

## 2) Prepare your data

If you used the Jon Becker or HashLips tools to generate your images, there is no further need to adjust the metadata and you can skip to the next section.

If you are using a spreadsheet, the metadata should first be exported as a csv (comma separated values) file. The spreadsheet should include only the trait names and trait data. The first row (the header row) should contain the trait names, and the other rows should correspond to the file number (for example, the metadata for 1.png should be in the first non-header row - i.e. row 2 in Excel). None values should be empty or called ```None```. For an example see the [example csv.](https://github.com/algokittens/algoNFTs/blob/master/batch_mint_arc69/example.csv)

This format **MUST** be followed, otherwise the script will not work.


# Download & Installation

Make sure you downloaded the whole repository and followed the steps in the main [readme](../README.md).


# Adjust settings.yaml

Using Spyder (or your favourite IDE), open `settings.yaml`. All changes should be made within this file rather than mint_arc69.py, unless you need to add some additional specifications.


## 1) Add the path of your csv containing your metadata.

``` meta_path = r"C:/Users/AlgoKittens/example_NFT.csv" ```

## 2) Add the metadata type.

This should be either "JonBecker", "HashLips", or "csv".

``` meta_type = "csv" ```

## 3) Add the path of your folder containing your images.

``` image_path = r"C:/Users/AlgoKittens/my_images" ```


## 4) Define the unit name.
This unit name will be applied to every NFT following the format unit_name + row number (e.g. TST1, TST2 etc.). The unit name **MUST** be 8 characters or less including the numbers. For example, if you plan to mint 999 assets, the maximum unit name length is 5 characters.

``` unit_name = "TST" ```

## 5) Define the asset name.

This asset name will be applied to every NFT following the format asset_name + row number (e.g. Test #1, Test #2 etc.).

``` asset_name = "Test NFT #" ```

## 6) Define your Pinata API key

``` api_key = "" ```

## 7) Define your Pinata Secret key

``` api_secret = "" ```


## 8) Define your mnemonic
This is your Algorand key. Included below is access to a testnet account containing no real Algos. In reality this should not be shared with ANYONE.

```mnemonic1 = "wreck floor carbon during taste illegal cover amused staff middle firm surface daughter pool lab update steel trophy dad twenty near kite boss abstract lens" ```

## 9) Define the external URL.
This URL will be included in every asset. If left blank, no URL will be included:

``` external_url = "your_website.com"```


## 10) Define the description.
This description will be included in every asset. If left blank, no description will be included:

``` description = "your awesome description goes here"```


## 11) Define which NFTs of the csv should be minted
This still needs to be set up in `config_mint.py`.
To mint every NFT in the csv: 86 | 87 | ``` for n in range(0,len(df)): ``` 88 | 89 | 90 | To mint just the first NFT in the csv: 91 | 92 | ``` for n in range(0,1): ``` 93 | 94 | 95 | To mint from the 50th NFT in the csv onwards: 96 | 97 | ``` for n in range(49, len(df)): ``` 98 | 99 | 100 | # Run the script 101 | 102 | Once all the desired changes are made run `config_mint.py` (play icon or F5 if you are using Spyder). 103 | 104 | 105 | ### View your NFT 106 | 107 | After updating your NFTs they can can be viewed on randgallery. Note that if you minted your NFT on the tesnet, you need the '&testnet’ flag at the end of the NFT to view. 108 | 109 | Example NFT 1: 110 | https://www.randgallery.com/algo-collection/?address=43432860&testnet 111 | 112 | 113 | 114 | # Common problems: 115 | 116 | ## 1) Packages not loading/recognized: 117 | 118 | If you did not install Python, Spyder, or the dependencies using anaconda, you might see error messages that the packages could not be loaded. In such a scenario it is recommended to uninstall everything, and reinstall ONLY using anaconda. 119 | -------------------------------------------------------------------------------- /batch_mint_arc69/config_mint.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | """ 4 | Created on Thu Sep 23 23:07:24 2021 5 | 6 | @author: AlgoKittens 7 | """ 8 | 9 | from mint_arc69 import mint_asset 10 | import pandas as pd 11 | import os,sys,inspect 12 | current_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) 13 | parent_dir = os.path.dirname(current_dir) 14 | sys.path.insert(0, parent_dir) 15 | from lib.settings import Settings 16 | 17 | 18 | settings = Settings('batch_mint_arc69') 19 | 20 | if (settings.meta_type == "csv"): 21 | df = pd.read_csv(settings.meta_path) 22 | 23 | elif (settings.meta_type == "JonBecker"): 24 | df = pd.read_json(settings.meta_path) 25 | 26 | elif (settings.meta_type == "HashLips"): 27 | df = pd.read_json(settings.meta_path) 28 | 29 | for n in range(0,len(df)): 30 | mint_asset (n) 31 | -------------------------------------------------------------------------------- /batch_mint_arc69/example.csv: -------------------------------------------------------------------------------- 1 | description,face,body,background,left eye,right eye,Portal A,Portal B,left upper arm,right upper arm,left lower arm,right lower arm,bird,writing,rainbow 2 | bins description1,stupid,juggl'er,Fire,Meditative green & blue,Ruminating,4-eyed Blue Face,Blue-Yellow Stripes,None,Broken Finger,Free hugs,None,Stupid Bird #2,None,None 3 | bins description2,stupid,Weird,Midday,Meditative green & blue,Blissed-out,Green-Red-Yellow Stripes,3-eyed Blue Monster,Talk to the hand,Riding the wave,None,Broken Pinky,Stupid Bird #2,None,None 4 | bins description3,sad,sad,Midday,Staring green & blue,Fierce,White Face Plain,Green-Red-Yellow Stripes,T-rex,Chubby fingers,Mitten,Free hugs,Stupid Bird #4,None,None 5 | bins description4,sad,stupid,Midday,Meditative green & blue,Blissed-out,Red-Green Bars,Green-Red-Yellow Stripes,None,All-seeing wave,Pointing,Free hugs,Stupid Bird #3,None,None 6 | bins description5,juggler,Weird,Midday,Meditative red & blue,Ruminating,Green-Red-Yellow Stripes,Red-Green Bars,T-rex,Riding the wave,None,Mitten,Stupid Bird #4,None,None 7 | bins description,stupid,Weird,Rock,Meditative red & blue,Blissed-out,2-eyed Blue Monster,Red-Green Bars,T-rex,All-seeing wave,None,Pointing,Stupid Bird #1,None,None 8 
| bins description #6,stupid,sad,Rock,Staring blue & yellow,Blissed-out,Green-Red-Yellow Stripes,White Face Plain,Chubby fingers,T-rex,Mitten,Free hugs,Stupid Bird #3,aer,None 9 | bins description #7,stupid,sad,Rock,Hidden,Fierce,3-eyed Yellow Face,2-eyed Green Monster,Chubby fingers,None,Mitte'n,None,None,None,None 10 | -------------------------------------------------------------------------------- /batch_mint_arc69/mint_arc69.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Created on Sat Oct 16 13:33:47 2021 4 | 5 | @author: AlgoKittens 6 | """ 7 | import json 8 | from algosdk.future.transaction import AssetConfigTxn 9 | import os, glob, sys, inspect 10 | import pandas as pd 11 | from natsort import natsorted 12 | import requests 13 | current_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) 14 | parent_dir = os.path.dirname(current_dir) 15 | sys.path.insert(0, parent_dir) 16 | from lib.settings import Settings 17 | from lib.algod_helper import wait_for_confirmation, print_created_asset 18 | 19 | settings = Settings('batch_mint_arc69') 20 | pk = settings.get_public_key() 21 | sk = settings.get_private_key() 22 | 23 | def mint_asset(n): 24 | if not settings.use_csv_ipfs_url: 25 | pinata_ipfs_cid = get_cid_from_pinata(n, settings.image_path, settings.api_key, settings.api_secret) 26 | 27 | if (settings.meta_type == "csv"): 28 | d = pd.read_csv(settings.meta_path) 29 | items = d.iloc[n] 30 | items = items[items != "None"] 31 | items = items.dropna() 32 | items = items.apply(str) 33 | # Handle csv data that shouldn't show up in arc69 properties 34 | if 'asset_id' in items: items.pop('asset_id') 35 | csv_asset_name = items.pop('asset_name') if 'asset_name' in items else '' 36 | csv_ipfs_url = items.pop('ipfs_url') if 'ipfs_url' in items else '' 37 | csv_description = items.pop('description') if 'description' in items else '' 38 | csv_external_url = items.pop('external_url') if 'external_url' in items else '' 39 | attributes = items.to_dict() 40 | 41 | elif (settings.meta_type == "JonBecker"): 42 | d = pd.read_json(settings.meta_path) 43 | d.drop('tokenId', axis=1, inplace=True) 44 | items = d.iloc[n] 45 | items = items[items != "None"] 46 | attributes = items.to_dict() 47 | 48 | elif (settings.meta_type == "HashLips"): 49 | d = pd.read_json(settings.meta_path) 50 | l = d['attributes'][n] 51 | records = pd.DataFrame.from_records(l).set_index('trait_type') 52 | attributes = records.iloc[:,0].to_dict() 53 | 54 | 55 | meta_data = { 56 | "standard": "arc69", 57 | "description": csv_description if settings.use_csv_description else settings.description, 58 | "external_url": csv_external_url if settings.use_csv_external_url else settings.external_url, 59 | "properties": attributes 60 | } 61 | 62 | # Remove keys with empty values 63 | meta_data = { key: value for key, value in meta_data.items() if value != '' } 64 | 65 | meta_data_json = json.dumps(meta_data) 66 | 67 | print("Account 1 address: {}".format(pk)) 68 | 69 | algod_client = settings.get_algod_client() 70 | # CREATE ASSET 71 | # Get network params for transactions before every transaction. 
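    # fee=1000 with flat_fee=True below pins the fee to 1000 microAlgos, the minimum transaction fee on Algorand.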
72 | params = algod_client.suggested_params() 73 | # comment these two lines if you want to use suggested params 74 | params.fee = 1000 75 | params.flat_fee = True 76 | asset_number = str(n+1) 77 | asset_name = csv_asset_name if settings.use_csv_asset_name else settings.asset_name + asset_number.zfill(settings.asset_name_number_digits) 78 | unit_name = settings.unit_name + asset_number.zfill(settings.unit_name_number_digits) 79 | url = csv_ipfs_url if settings.use_csv_ipfs_url else f"ipfs://{pinata_ipfs_cid}" 80 | 81 | txn = AssetConfigTxn( 82 | sender=pk, 83 | sp=params, 84 | total=1, 85 | default_frozen=False, 86 | unit_name=unit_name, 87 | asset_name=asset_name, 88 | manager=pk, 89 | reserve=pk, 90 | freeze=None, 91 | clawback=None, 92 | strict_empty_address_check=False, 93 | url=url, 94 | metadata_hash= "", 95 | note = meta_data_json.encode(), 96 | decimals=0) 97 | 98 | sign_and_send_txn(algod_client, txn) 99 | 100 | 101 | def sign_and_send_txn(algod_client, txn): 102 | # Sign with secret key of creator 103 | stxn = txn.sign(sk) 104 | 105 | # Send the transaction to the network and retrieve the txid. 106 | txid = algod_client.send_transaction(stxn) 107 | print(txid) 108 | 109 | # Retrieve the asset ID of the newly created asset by first 110 | # ensuring that the creation transaction was confirmed, 111 | # then grabbing the asset id from the transaction. 112 | 113 | # Wait for the transaction to be confirmed 114 | wait_for_confirmation(algod_client,txid) 115 | 116 | try: 117 | # Pull account info for the creator 118 | # account_info = algod_client.account_info(accounts[1]['pk'])No docu 119 | # get asset_id from tx 120 | # Get the new asset's information from the creator account 121 | ptx = algod_client.pending_transaction_info(txid) 122 | asset_id = ptx["asset-index"] 123 | print_created_asset(algod_client, pk, asset_id) 124 | except Exception as e: 125 | print(e) 126 | 127 | 128 | def get_cid_from_pinata(n, image_path, api_key, api_secret): 129 | imgs = natsorted(glob.glob(os.path.join(image_path, "*.png"))) 130 | 131 | files = [('file', (str(n)+".png", open(imgs[n], "rb"))),] 132 | 133 | headers = { 134 | 'pinata_api_key': api_key, 135 | 'pinata_secret_api_key': api_secret 136 | } 137 | 138 | ipfs_url = "https://api.pinata.cloud/pinning/pinFileToIPFS" 139 | 140 | response: requests.Response = requests.post(url=ipfs_url, files=files, headers=headers) 141 | meta = response.json() 142 | print(meta) #to confirm Pinata Storage limit has not been reached 143 | return meta['IpfsHash'] 144 | -------------------------------------------------------------------------------- /batch_remove/README.md: -------------------------------------------------------------------------------- 1 | # Batch opt-out NFTs using python 2 | 3 | # Preparing the script 4 | 5 | ## Adjust settings.yaml 6 | 7 | For this pipeline the following part in your `settings.yaml` must be set which will then be loaded into the main script when running. 8 | 9 | ``` 10 | # settings.yaml 11 | 12 | batch_remove: 13 | mnemonic1: "wreck floor carbon during taste illegal cover amused staff middle firm surface daughter pool lab update steel trophy dad twenty near kite boss abstract lens" 14 | input_path: "e:/algoNFTs/batch_add/example_add.csv" 15 | remove_all: "False" 16 | ``` 17 | 18 | ### a) mnemonic1 19 | 20 | This is your algorand key. Included above is a testnet account containing no real algos. In reality this should not be shared with ANYONE. 

### b) input_path

This variable should point to the csv containing the assets to be removed. It should contain no header, only the asset IDs to be removed (one per line).


### c) remove_all flag
This should be "True" or "False". If set to "True", the script will remove ALL assets with an amount of 0. If set to "False", only the assets defined in the input_path will be removed.


# Running the script

Once you have defined your `settings.yaml`, run the script from the terminal or using your favourite IDE (open the file and press F5 in Spyder).
--------------------------------------------------------------------------------
/batch_remove/example_remove.csv:
--------------------------------------------------------------------------------
49095948
--------------------------------------------------------------------------------
/batch_remove/remove_asset.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu Sep 9 19:11:29 2021

@author: algokittens
"""
from algosdk.future.transaction import AssetTransferTxn
import os, sys, inspect
current_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parent_dir = os.path.dirname(current_dir)
sys.path.insert(0, parent_dir)
from lib.settings import Settings
from lib.algod_helper import wait_for_confirmation, print_asset_holding
import pandas as pd

settings = Settings('batch_remove')
algod_client = settings.get_algod_client()
sk = settings.get_private_key()
pk = settings.get_public_key()


def remove_asset(asset_id):
    asset_id = int(asset_id)

    params = algod_client.suggested_params()
    # comment these two lines if you want to use suggested params
    params.fee = 1000
    params.flat_fee = True

    # Setting close_assets_to opts the account out of the asset
    # (any remaining balance is sent to the close-to address)
    txn = AssetTransferTxn(
        sender=pk,
        sp=params,
        close_assets_to=pk,
        receiver=pk,
        amt=0,
        index=asset_id)

    stxn = txn.sign(sk)
    txid = algod_client.send_transaction(stxn)
    print(txid)
    # Wait for the transaction to be confirmed
    wait_for_confirmation(algod_client, txid)


if settings.remove_all == "False":
    # Remove only the assets listed in the input csv
    with open(settings.input_path) as csv_file:
        data = [line.rstrip() for line in csv_file]

    for n in range(0, len(data)):
        remove_asset(data[n])

elif settings.remove_all == "True":
    # Remove every asset in the account that has a zero balance
    myindexer = settings.get_indexer()
    response = myindexer.account_info(address=pk)

    df = pd.DataFrame(response['account']['assets'])
    data = df.loc[(df.amount == 0)]

    for n in range(0, len(data)):
        remove_asset(data.iloc[n]['asset-id'])
--------------------------------------------------------------------------------
/batch_update_arc69/README.md:
--------------------------------------------------------------------------------

# How to batch update your existing ASAs to ARC69 using Python

# Pre-requirements
This guide will walk you through batch updating NFTs on Algorand to [ARC69](https://github.com/algokittens/arc69) using Python. No prior experience with Python is assumed, but you will be required to make some changes to the Python script to suit your requirements.
6 | This pipeline should only be ran on a secure machine, and we recommend checking out the official algorand documentation beforehand: https://developer.algorand.org/docs/get-details/asa/ 7 | 8 | Please note that I write these guides and scripts in my spare time and they come with no warranty of any kind whatsoever. 9 | 10 | # Download & Installation 11 | 12 | Make sure you downloaded the whole repository and followed the steps in the main [readme](../README.md). 13 | 14 | # Prepare your data 15 | ## Option A: 16 | The data format for this pipeline is csv, which can be generated from excel files by exporting to "comma separated values". 17 | 18 | For the spreadsheet, only the traits should be included as well as a column called 'ID' which should contain the ASA ID. None values should be called ```None```. 19 | 20 | ## Option B: 21 | Put your complete ARC69 metadata files in a folder. The metadata must be filled in according to the definition (see [example_data](example_data/arc69_data/)). 22 | 23 | One of these two formats **MUST** be followed otherwise the script will not work. 24 | 25 | 26 | # Running the script 27 | 28 | For this example we will update three example NFTs: 43432985, 43432860, and 43432496, which were minted on the testnet using the mnemonic included. It is strongly recommended to experiment with these assets before moving to the mainnet and spending real algos. 29 | 30 | The csv file can be found in the github directory and is entitled: "example_NFT.csv". 31 | 32 | ## 1) Open settings.yaml 33 | 34 | Using Spyder (or your favourite IDE) open your `settings.yaml`. All changes should be made within this file rather than update_meta_arc69.py unless you need to add some additional specifications. 35 | 36 | ## 2) Edit settings.yaml 37 | These are the settings to adjust for this script to run properly: 38 | 39 | ``` 40 | batch_update_arc69: 41 | mnemonic1: "wreck floor carbon during ..." 42 | input_path: "e:/algoNFTs/batch_update_arc69/example_data/example_NFT.csv" 43 | # Settings relevant for csv input_path ONLY! (Option A) 44 | csv: 45 | external_url: "yourwebsite.com" 46 | description: "some cool description" 47 | update_all: true #if true updates every NFT in the csv 48 | row_to_update: 1 #update only the asset in the first row of the csv 49 | ``` 50 | ### a) Define your mnemonic 51 | This is your algorand key. Included below is access to a testnet account containing no real algos. In reality this should not be shared with ANYONE. 52 | 53 | ```mnemonic1 = "wreck floor carbon during taste illegal cover amused staff middle firm surface daughter pool lab update steel trophy dad twenty near kite boss abstract lens" ``` 54 | 55 | ### b) Add the path of your csv containing your metadata. 56 | 57 | ``` input_path = r"C:/Users/AlgoKittens/example_NFT.csv" ``` 58 | 59 | ### c) Define the description ([Option A](#option-a) only) 60 | This description will be included in every asset. If left blank, no description will be included: 61 | 62 | ``` description = "your awesome description goes here"``` 63 | 64 | ### d) Define the external URL ([Option A](#option-a) only) 65 | This url will be included in every asset. If left blank, no URL will be included: 66 | 67 | ``` external_url = "your_website.com"``` 68 | 69 | ### e) Define items to update ([Option A](#option-a) only) 70 | Define if every NFT in the spreadsheet should be updated. If set to true, all assets will be updated. 
```update_all = False```

If you only want to update a single row, define which row should be updated:

```row_to_update = 1 # this would update row 1```

### f) Notes for preparation with [Option B](#option-b)
`external_url` and `description` will be ignored since your ARC69 files are considered complete.

`update_all` and `row_to_update` will also be ignored. If some files shouldn't be used for the update, they should be moved temporarily to a different directory.


## 3) Run the script

Once all the desired changes are made, run the script (play icon or F5 if you are using Spyder).


### View your NFT

After updating your NFTs, they can be viewed on randgallery. Note that if you minted your NFT on the testnet, you need the '&testnet' flag at the end of the URL to view it.

Example NFT 1:
https://www.randgallery.com/algo-collection/?address=43432496&testnet

Example NFT 2:
https://www.randgallery.com/algo-collection/?address=43432860&testnet

Example NFT 3:
https://www.randgallery.com/algo-collection/?address=43432985&testnet



## 4) Common problems:

### a) Packages not loading/recognized:

If you did not install Python, Spyder, or the dependencies using Anaconda, you might see error messages that the packages could not be loaded. In such a scenario it is recommended to uninstall everything, and reinstall ONLY using Anaconda.
--------------------------------------------------------------------------------
/batch_update_arc69/example_data/arc69_data/43432496.json:
--------------------------------------------------------------------------------
{
  "standard": "arc69",
  "description": "some cool description",
  "external_url": "yourwebsite.com",
  "properties": {
    "Background": "Pink Spin",
    "Body": "Leopard",
    "Tattoo": "None",
    "Beak": "None",
    "Clothing": "Classic MNGO",
    "Neck": "None",
    "Mouth": "Fresh Catch",
    "Earring": "None",
    "Eyes": "Eyepatch",
    "Hat": "Paper Plane"
  }
}
--------------------------------------------------------------------------------
/batch_update_arc69/example_data/arc69_data/43432985.json:
--------------------------------------------------------------------------------
{
  "standard": "arc69",
  "description": "some cool description",
  "external_url": "yourwebsite.com",
  "properties": {
    "Background": "Yellow",
    "Body": "Classic Pink",
    "Tattoo": "None",
    "Beak": "None",
    "Clothing": "Dwight",
    "Neck": "None",
    "Mouth": "Standard",
    "Earring": "None",
    "Eyes": "Kurts",
    "Hat": "Cornrows"
  }
}
--------------------------------------------------------------------------------
/batch_update_arc69/example_data/example_NFT.csv:
--------------------------------------------------------------------------------
ID,Background,Body,Tattoo,Beak,Clothing,Neck,Mouth,Earring,Eyes,Hat
43432985,Yellow,Classic Pink,None,None,Dwight,None,Standard,None,Kurts,Cornrows
43432860,Yellow,Grey,None,None,Red Polo,None,Cigar,Punk,Scratch,Pizza Slice
43432496,Pink Spin,Leopard,None,None,Classic MNGO,None,Fresh Catch,None,Eyepatch,Paper Plane
--------------------------------------------------------------------------------
/batch_update_arc69/update_meta_arc69.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python3
# -*- coding: utf-8 -*- 3 | """ 4 | @author: algokittens 5 | """ 6 | import json 7 | from algosdk.future.transaction import AssetConfigTxn 8 | import pandas as pd 9 | import os, sys, inspect 10 | import pandas as pd 11 | current_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) 12 | parent_dir = os.path.dirname(current_dir) 13 | sys.path.insert(0, parent_dir) 14 | from lib.settings import Settings 15 | from lib.algod_helper import wait_for_confirmation, print_created_asset, print_asset_holding 16 | 17 | settings = Settings('batch_update_arc69') 18 | pk = settings.get_public_key() 19 | sk = settings.get_private_key() 20 | 21 | def run_script(): 22 | if settings.input_path.endswith('.csv'): 23 | update_by_csv_file() 24 | else: 25 | update_by_arc69_jsons() 26 | 27 | def update_by_csv_file(): 28 | df = pd.read_csv(settings.input_path) 29 | 30 | if (settings.csv['update_all'] == False): 31 | n = settings.csv['row_to_update'] -1 #run first line 32 | update_meta(n) 33 | else: 34 | for n in range(0,len(df)): 35 | update_meta(n) 36 | 37 | 38 | def update_by_arc69_jsons(): 39 | for filename in os.listdir(settings.input_path): 40 | if not filename.endswith(".json"): continue 41 | 42 | asset_id = filename.split('.')[0] 43 | 44 | with open(os.path.join(settings.input_path,filename)) as file: 45 | arc69_data = json.load(file) 46 | meta_data = json.dumps(arc69_data) 47 | print(meta_data) 48 | send_algod_request(asset_id, meta_data) 49 | 50 | 51 | def update_meta(n): 52 | data_frame = pd.read_csv(settings.input_path) 53 | 54 | asset_id = data_frame['ID'][n] 55 | meta_data_json = get_meta_data_json(n, data_frame) 56 | print(meta_data_json) 57 | 58 | send_algod_request(asset_id, meta_data_json) 59 | 60 | 61 | def send_algod_request(asset_id, meta_data_json): 62 | algod_client = settings.get_algod_client() 63 | 64 | print("Account 1 address: {}".format(pk)) 65 | 66 | # Get network params for transactions before every transaction. 67 | params = algod_client.suggested_params() 68 | # comment these two lines if you want to use suggested params 69 | params.fee = 1000 70 | params.flat_fee = True 71 | 72 | txn = AssetConfigTxn( 73 | sender=pk, 74 | sp=params, 75 | index=asset_id, 76 | manager=pk, 77 | reserve=pk, 78 | freeze=pk, 79 | note = meta_data_json.encode(), 80 | strict_empty_address_check=False, 81 | clawback=None) 82 | 83 | # Sign with secret key of creator 84 | stxn = txn.sign(sk) 85 | 86 | # Send the transaction to the network and retrieve the txid. 
87 | txid = algod_client.send_transaction(stxn) 88 | print(txid) 89 | 90 | # Wait for the transaction to be confirmed 91 | wait_for_confirmation(algod_client,txid) 92 | 93 | try: 94 | # Pull account info for the creator 95 | # account_info = algod_client.account_info(accounts[1]['pk'])No docu 96 | # get asset_id from tx 97 | # Get the new asset's information from the creator account 98 | ptx = algod_client.pending_transaction_info(txid) 99 | asset_id = ptx["asset-index"] 100 | print_created_asset(algod_client, pk, asset_id) 101 | print_asset_holding(algod_client, pk, asset_id) 102 | except Exception as e: 103 | print(e) 104 | 105 | 106 | def get_meta_data_json(n, data_frame): 107 | d = data_frame.drop(['ID'], axis=1) 108 | 109 | items = d.iloc[n] 110 | items = items[items != "None"] 111 | items = items.dropna() 112 | items = items.apply(str) 113 | properties = items.to_dict() 114 | 115 | meta_data = { 116 | "standard": "arc69", 117 | "description": settings.csv['description'], 118 | "external_url": settings.csv['external_url'], 119 | "properties": properties 120 | } 121 | 122 | # Remove keys with empty values 123 | meta_data = { key: value for key, value in meta_data.items() if value != '' } 124 | 125 | return json.dumps(meta_data) 126 | 127 | 128 | run_script() -------------------------------------------------------------------------------- /example.settings.yaml: -------------------------------------------------------------------------------- 1 | testnet: true 2 | 3 | default_output_folder: "output" 4 | 5 | 6 | batch_mint_arc69: 7 | mnemonic1: "wreck floor carbon during taste illegal cover amused staff middle firm surface daughter pool lab update steel trophy dad twenty near kite boss abstract lens" 8 | api_key: "" #your pinata key 9 | api_secret: "" #your pinata secret 10 | meta_path: "e:/algoNFTs/output/fetch_arc69/example_export.csv" #location of metadata 11 | meta_type: "csv" #metadata type, valid argments = "csv", "JonBecker", "HashLips" 12 | image_path: "e:/hashlips_art_engine/build/images" #location of images 13 | unit_name: "HAI" 14 | unit_name_number_digits: 0 #set number of digits for unit_name e.g. 4 will result in: TST0001 or TST0999 - leave at zero to increment without leading zero(s) 15 | asset_name: "Hai Happen Gen 2 No. " 16 | asset_name_number_digits: 0 #set number of digits for asset_name e.g. 4 will result in: Test NFT #0001 or Test NFT #0999 - leave at zero to increment without leading zero(s) 17 | external_url: "" 18 | description: "Hai Happen and his Gang" 19 | 20 | # Settings relevant for "csv" meta_type ONLY! 
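  # When one of the flags below is true, the value from the matching csv column overrides the corresponding global setting above for that row.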
21 | use_csv_asset_name: false #set true if you have a "asset_name" column in your csv you want to use 22 | use_csv_ipfs_url: false #set true if you have a "ipfs_url" column with ipfs url in your csv you want to use 23 | use_csv_description: false #set to true if you have a "description" column in your csv you want to use 24 | use_csv_external_url: true #set to true if you have a "external_url" column in your csv you want to use 25 | 26 | 27 | batch_update_arc69: 28 | mnemonic1: "wreck floor carbon during taste illegal cover amused staff middle firm surface daughter pool lab update steel trophy dad twenty near kite boss abstract lens" 29 | # if input_path is not a path to a csv File script will handle it as folder with arc69 json files 30 | # if inputpath contains metadata as jsons, each filname has to have the asset id as filename with ".json" ending 31 | #input_path: "e:/algoNFTs/output/fetch_arc69" 32 | input_path: "e:/algoNFTs/batch_update_arc69/example_data/example_NFT.csv" 33 | # Settings relevant for csv input_path ONLY! 34 | csv: 35 | external_url: "yourwebsite.com" 36 | description: "some cool description" 37 | update_all: true #if true updates every NFT in the csv 38 | row_to_update: 1 #update only the asset in the first row of the csv 39 | 40 | 41 | batch_add: 42 | mnemonic1: "wreck floor carbon during taste illegal cover amused staff middle firm surface daughter pool lab update steel trophy dad twenty near kite boss abstract lens" 43 | input_path: "e:/algoNFTs/batch_add/example_add.csv" 44 | 45 | 46 | batch_remove: 47 | mnemonic1: "wreck floor carbon during taste illegal cover amused staff middle firm surface daughter pool lab update steel trophy dad twenty near kite boss abstract lens" 48 | input_path: "e:/algoNFTs/batch_add/example_remove.csv" 49 | remove_all: "False" 50 | 51 | fetch_arc69: 52 | public_key: "KKBVJLXALCENRXQNEZC44F4NQWGIEFKKIHLDQNBGDHIM73F44LAN7IAE5Q" 53 | csv: 54 | add_asset_id: false #adds column "asset_id" to csv 55 | add_asset_name: false #adds column "asset_name" to csv 56 | add_ipfs_url: false #adds column "ipfs_url" to csv 57 | 58 | # Set ARC69 base attributes you want to add to csv e.g. "description,external_url" or "" for none 59 | # Important: to be able to add them they must be present in your metadata 60 | # see here for further attributes: https://github.com/algokittens/arc69 61 | base_attributes: "" 62 | 63 | 64 | fetch_holders: 65 | public_key: "TIMPJ6P5FZRNNKYJLAYD44XFOSUWEOUAR6NRWJMQR66BRM3QH7UUWEHA24" 66 | 67 | -------------------------------------------------------------------------------- /fetch_arc69/README.md: -------------------------------------------------------------------------------- 1 | # Fetching all Assets of an account with ARC69 data 2 | 3 | # Overview 4 | This script will fetch metadata of assets created by a provided creator address. Data for each asset will be saved as json where the filename corresponds to the ASA ID. Additionally the data will be saved as csv which can be used for [batch_mint_arc69](../batch_mint_arc69). 5 | 6 | 7 | 8 | # Download & Installation 9 | 10 | Make sure you downloaded the whole repository and followed the steps in the main [readme](../README.md). 11 | 12 | # Adjust settings.yaml 13 | 14 | Using Spyder (or your favourite IDE) open "settings.yaml". All changes should be made within this file unless you need to add some additional specifications. 
15 | 16 | ``` 17 | fetch_arc69: 18 | public_key: "KKBVJLXALCENRXQNEZC44F4NQWGIEFKKIHLDQNBGDHIM73F44LAN7IAE5Q" 19 | csv: 20 | add_asset_id: false 21 | add_asset_name: false 22 | add_ipfs_url: false 23 | base_attributes: "description,external_url" 24 | ``` 25 | 26 | 27 | ## a) Define public key 28 | 29 | ``` public_key = "GANGAAWKBJBWQJIIETTLWQT7ZFGPC4UDIITNGP55BCQPB26IEMOPOHQMEA" ``` 30 | 31 | 32 | ## b) Define additional csv data 33 | With this setting you can add ARC69 base attributes to the csv export - **ONLY** add them if they are present in your json metadata. Leave empty if you want just the ARC69 properties to be added to the csv. 34 | 35 | ```base_attributes = "description"``` 36 | 37 | or mulitple with comma seperation 38 | 39 | ```base_attributes = "standard,description,external_url"``` 40 | 41 | also asset id, CID (ipfs hash) and asset name can be added to csv output: 42 | 43 | ```add_asset_id: true``` 44 | 45 | 46 | ```add_asset_name: true``` 47 | 48 | 49 | ```add_ipfs_url: true``` 50 | 51 | 52 | 53 | # Running the script 54 | 55 | Once you have defined your parameters in the file, run the script from the terminal or using your favourite IDE (open file + F5 in Spyder). 56 | 57 | 58 | 59 | -------------------------------------------------------------------------------- /fetch_arc69/fetch_arc69.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | """ 4 | Created on Wed Feb 02 21:21:39 2022 5 | 6 | @author: grexn 7 | """ 8 | import base64 9 | import os 10 | import json 11 | import csv 12 | import os,sys,inspect 13 | current_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) 14 | parent_dir = os.path.dirname(current_dir) 15 | sys.path.insert(0, parent_dir) 16 | from lib.settings import Settings 17 | from lib.file_helper import check_and_create_path 18 | 19 | 20 | settings = Settings('fetch_arc69') 21 | myindexer = settings.get_indexer() 22 | OUTPUT_PATH = settings.get_output_folder() 23 | 24 | def write_meta_data_to_files(): 25 | account = myindexer.lookup_account_asset_by_creator(address=settings.public_key, limit=10) 26 | 27 | if not 'assets' in account: 28 | print(f"No assets found in account: {settings.public_key} and {'testnet' if settings.full_settings['testnet'] else 'mainnet'}") 29 | exit() 30 | 31 | created_assets = myindexer.lookup_account_asset_by_creator(address=settings.public_key, limit=1000)['assets'] 32 | #del created_assets[4:] 33 | data = [] 34 | for asset in created_assets: 35 | asset_id = asset['index'] 36 | is_deleted = asset['deleted'] 37 | asset_name = asset['params']['name'] 38 | ipfs_hash = asset['params']['url'] 39 | last_config_tnx = myindexer.search_asset_transactions(asset_id,txn_type='acfg')['transactions'][-1] 40 | 41 | if 'note' in last_config_tnx and not is_deleted: 42 | print(f"ASA ID {asset_id}: metadata found - adding.") 43 | file_path = f"{OUTPUT_PATH}/{asset_id}.json" 44 | check_and_create_path(file_path) 45 | with open(file_path, "w", encoding='utf-8') as json_file: 46 | json_string = base64.b64decode(last_config_tnx['note']).decode('utf-8') 47 | json_file.write(json_string) 48 | json_data_as_dict = json.loads(json_string) 49 | data.append((asset_id, asset_name, ipfs_hash, json_data_as_dict)) 50 | else: 51 | print(f"ASA ID {asset_id}: no metadata found.") 52 | 53 | print(f"Total assets added: {len(data)}") 54 | write_csv_file(data) 55 | 56 | 57 | def write_csv_file(data): 58 | sortedData = sorted(data) 59 | 60 | 
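    # Each entry in sortedData is an (asset_id, asset_name, ipfs_hash, metadata) tuple; the loop below builds one csv row per asset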
converted_data = [] 61 | csv_header = [] 62 | 63 | for asset_id, asset_name, ipfs_hash, asset in sortedData: 64 | new_asset = {} 65 | 66 | if settings.csv['add_asset_id'] and asset_id: 67 | new_asset['asset_id'] = asset_id 68 | 69 | if settings.csv['add_asset_name'] and asset_name: 70 | new_asset['asset_name'] = asset_name 71 | 72 | if settings.csv['add_ipfs_url'] and ipfs_hash: 73 | new_asset['ipfs_url'] = ipfs_hash 74 | 75 | base_attributes = settings.csv['base_attributes'] 76 | if base_attributes: 77 | for base_attribute in settings.csv['base_attributes'].split(','): 78 | new_asset[base_attribute] = asset[base_attribute] 79 | 80 | for attribute in asset['properties'].keys(): 81 | new_asset[attribute] = asset['properties'][attribute] 82 | 83 | converted_data.append(new_asset) 84 | 85 | for attribute in new_asset: 86 | if attribute not in csv_header: 87 | csv_header.append(attribute) 88 | 89 | csv_file_path = f"{OUTPUT_PATH}/metadata.csv" 90 | check_and_create_path(csv_file_path) 91 | 92 | with open(csv_file_path, 'w', newline='', encoding='utf-8') as f: 93 | wr = csv.DictWriter(f, fieldnames = csv_header) 94 | wr.writeheader() 95 | wr.writerows(converted_data) 96 | 97 | print(f"Script complete - output can be found here: {OUTPUT_PATH}") 98 | 99 | 100 | write_meta_data_to_files() -------------------------------------------------------------------------------- /fetch_holders/README.md: -------------------------------------------------------------------------------- 1 | # Fetching holders of account 2 | 3 | # Overview 4 | This pipeline will fetch all the holders of assets created by a provided creator address. If an holder holds more than one asset by the creator, the holder address will appear multiple times in the csv. 5 | 6 | # Download & Installation 7 | 8 | Make sure you downloaded the whole repository and followed the steps in the main [readme](../README.md). 9 | 10 | # Adjust settings.yaml 11 | 12 | Using Spyder (or your favourite IDE) open "settings.yaml". All changes should be made within this file unless you need to add some additional specifications. 13 | ``` 14 | fetch_holders: 15 | public_key: "TIMPJ6P5FZRNNKYJLAYD44XFOSUWEOUAR6NRWJMQR66BRM3QH7UUWEHA24" 16 | ``` 17 | 18 | 19 | 20 | # Running the script 21 | 22 | Once you have defined your parameters in the file, run the script from the terminal or using your favourite IDE (open file + F5 in Spyder). 
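For instance, from the repository root (assuming `python3` is available on your PATH) the script can be started with:

```
python3 fetch_holders/fetch_holders.py
```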
23 | 24 | 25 | 26 | -------------------------------------------------------------------------------- /fetch_holders/fetch_holders.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | """ 4 | Created on Fri Dec 17 13:21:39 2021 5 | 6 | @author: algokittens 7 | """ 8 | import pandas as pd 9 | import os,sys,inspect 10 | current_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) 11 | parent_dir = os.path.dirname(current_dir) 12 | sys.path.insert(0, parent_dir) 13 | from lib.settings import Settings 14 | from lib.file_helper import check_and_create_path 15 | from datetime import date,datetime 16 | 17 | settings = Settings('fetch_holders') 18 | myindexer = settings.get_indexer() 19 | OUTPUT_PATH = settings.get_output_folder() 20 | 21 | response = myindexer.account_info(address=settings.public_key) 22 | 23 | if not 'created-assets' in response['account']: 24 | print(f"No assets found in account: {settings.public_key} and {'testnet' if settings.full_settings['testnet'] else 'mainnet'}") 25 | exit() 26 | 27 | created_assets = response['account']['created-assets'] 28 | 29 | def fetch_account(asset_id): 30 | response = myindexer.asset_balances(asset_id) 31 | df = pd.DataFrame(response['balances']) 32 | df['asset_id'] = asset_id 33 | d = df.loc[(df.amount == 1)] 34 | return(d) 35 | 36 | 37 | addresses = [] 38 | for asset in created_assets: 39 | addresses.append(fetch_account(asset['index'])) 40 | 41 | 42 | data = pd.concat(addresses, axis=0) 43 | data = data[['asset_id', 'address']] 44 | out_file = f"{OUTPUT_PATH}/{settings.public_key[0:5]}_holders_{date.isoformat(datetime.now())}.csv" 45 | check_and_create_path(out_file) 46 | 47 | data.to_csv(out_file, index=False) 48 | 49 | print(f'Holders saved to: {os.path.abspath(out_file)}') 50 | -------------------------------------------------------------------------------- /lib/algod_helper.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | def wait_for_confirmation(client, txid): 4 | """ 5 | Utility function to wait until the transaction is 6 | confirmed before proceeding. 
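    Polls pending_transaction_info and advances one round at a time
    via status_after_block until a confirmed round is reported.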
7 | """ 8 | last_round = client.status().get('last-round') 9 | txinfo = client.pending_transaction_info(txid) 10 | while not (txinfo.get('confirmed-round') and txinfo.get('confirmed-round') > 0): 11 | print("Waiting for confirmation") 12 | last_round += 1 13 | client.status_after_block(last_round) 14 | txinfo = client.pending_transaction_info(txid) 15 | print("Transaction {} confirmed in round {}.".format(txid, txinfo.get('confirmed-round'))) 16 | return txinfo 17 | 18 | # Utility function used to print created asset for account and assetid 19 | def print_created_asset(algodclient, account, assetid): 20 | # note: if you have an indexer instance available it is easier to just use this 21 | # response = myindexer.accounts(asset_id = assetid) 22 | # then use 'account_info['created-assets'][0] to get info on the created asset 23 | account_info = algodclient.account_info(account) 24 | idx = 0 25 | for my_account_info in account_info['created-assets']: 26 | scrutinized_asset = account_info['created-assets'][idx] 27 | idx = idx + 1 28 | if (scrutinized_asset['index'] == assetid): 29 | print("Asset ID: {}".format(scrutinized_asset['index'])) 30 | print(json.dumps(my_account_info['params'], indent=4)) 31 | break 32 | 33 | # Utility function used to print asset holding for account and assetid 34 | def print_asset_holding(algodclient, account, assetid): 35 | account_info = algodclient.account_info(account) 36 | idx = 0 37 | for my_account_info in account_info['assets']: 38 | scrutinized_asset = account_info['assets'][idx] 39 | idx = idx + 1 40 | if (scrutinized_asset['asset-id'] == assetid): 41 | print("Asset ID: {}".format(scrutinized_asset['asset-id'])) 42 | print(json.dumps(scrutinized_asset, indent=4)) 43 | break -------------------------------------------------------------------------------- /lib/file_helper.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | def check_and_create_path(file_path): 4 | if not os.path.exists(os.path.dirname(file_path)): 5 | os.makedirs(os.path.dirname(file_path)) -------------------------------------------------------------------------------- /lib/settings.py: -------------------------------------------------------------------------------- 1 | import yaml 2 | import os 3 | from algosdk.v2client import indexer, algod 4 | from algosdk import mnemonic 5 | 6 | class Settings: 7 | INDEXER_ADDRESSES = { 8 | "testnet": "https://algoindexer.testnet.algoexplorerapi.io", 9 | "mainnet": "https://algoindexer.algoexplorerapi.io" 10 | } 11 | ALGOD_ADDRESSES = { 12 | "testnet": "https://node.testnet.algoexplorerapi.io", 13 | "mainnet": "https://node.algoexplorerapi.io" 14 | } 15 | 16 | full_settings = {} 17 | 18 | def __init__(self, settings_key): 19 | self.settings_key = settings_key 20 | module_path = os.path.dirname(__file__) 21 | settings_path = os.path.join(module_path, '../settings.yaml') 22 | 23 | settings_exists = os.path.exists(settings_path) 24 | 25 | if not settings_exists: 26 | print(f'Your settings.yaml could not be found here: {os.path.abspath(settings_path)}') 27 | exit() 28 | 29 | with open(settings_path, 'r') as file: 30 | try: 31 | self.full_settings = yaml.safe_load(file) 32 | except: 33 | print("Your settings.yaml is incorrectly formatted. 
Remember to use forward slashes in filepaths!") 34 | exit() 35 | 36 | if self.full_settings['testnet']: 37 | self.indexer_address = self.INDEXER_ADDRESSES['testnet'] 38 | self.algod_address = self.ALGOD_ADDRESSES['testnet'] 39 | else: 40 | self.indexer_address = self.INDEXER_ADDRESSES['mainnet'] 41 | self.algod_address = self.ALGOD_ADDRESSES['mainnet'] 42 | self.context_settings = self.full_settings[self.settings_key] 43 | 44 | def get_indexer(self): 45 | indexer_address = self.indexer_address 46 | headers = {'User-Agent': 'py-algorand-sdk'} 47 | return indexer.IndexerClient(indexer_token="", headers=headers, indexer_address=indexer_address) 48 | 49 | def get_algod_client(self): 50 | indexer_address = self.algod_address 51 | headers = {'User-Agent': 'py-algorand-sdk'} 52 | return algod.AlgodClient(algod_token="", algod_address=indexer_address, headers=headers); 53 | 54 | def __getattr__(self, name): 55 | return self.context_settings[name] 56 | 57 | 58 | def get_output_folder(self): 59 | root_path = os.path.abspath(os.path.dirname(__file__) + '/../') 60 | return os.path.join(root_path, self.full_settings['default_output_folder'], self.settings_key) 61 | 62 | def get_private_key(self): 63 | pass_phrase = self.context_settings['mnemonic1'] 64 | return mnemonic.to_private_key(pass_phrase.replace(',', '')) 65 | 66 | def get_public_key(self): 67 | pass_phrase = self.context_settings['mnemonic1'] 68 | return mnemonic.to_public_key(pass_phrase.replace(',', '')) 69 | 70 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | natsort==8.1.0 2 | pandas==1.4.0 3 | py-algorand-sdk==1.11.0 4 | requests==2.27.1 5 | -------------------------------------------------------------------------------- /thumbnail.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/algokittens/algoNFTs/b3da6a3df6ade036d39b3eddd0229312814dd9fe/thumbnail.png --------------------------------------------------------------------------------