├── README.md ├── LICENSE ├── redpandaai.0041b.py ├── RPC-LOG-170225-00571.txt └── redpandaai.00601a.py /README.md: -------------------------------------------------------------------------------- 1 | # 🐼 RedPanda's CryptoTrading Adventure! 🚀 2 | 3 | **(A tale of friendship, code, and meme coins)** 4 | 5 | Hi! I'm Toma, a coder with a fluffy red panda friend who *loves* exploring the crypto markets, especially the wild world of pump.fun! RedPanda's super curious and always sniffing out new tokens, but sometimes he needs a little help figuring out which ones are *really* worth chasing. That's why I built this little app – it's like our own crypto treasure map! 6 | 7 | This Python script is RedPanda's best friend (besides me, of course!). It helps us: 8 | 9 | * **Sniff out new tokens:** RedPanda's got a nose for fresh listings on pump.fun, but this script helps us find the *most promising* ones by checking things like how many people are holding them and their market cap. We only want the *good* stuff! ✨ 10 | * **Get the inside scoop:** We use Gemini's smarts to analyze each token. It's like having a wise old owl in our crypto crew! 🦉 It even tries to guess if a token is a "Fast Pump & Dump" (zoom zoom!), a "Long Hold" (for a cozy nap while it grows!), a "Small Profit Play" (a little snack!), or "Not Recommended" (yuck!). 11 | * **Chat with our friends:** We send all our findings to our Telegram channel so our crypto pals can join the adventure! 📢 12 | * **Keep track of our progress:** We record everything in a CSV file so we can learn from our past adventures and become even better crypto explorers! 🗺️ 13 | * **See what's trending:** RedPanda loves Twitter! We scan for trending hashtags and recent tweets about meme coins to get a feel for the market's vibe. 🐦 14 | * **Check prices like pros:** We use Gate.io's API to track token prices and see if our predictions were right! Did we find a gem or a dud?💎/💩 15 | 16 | **How to Join the Adventure:** 17 | 18 | 1. Clone this repo! 19 | 2. Set up your environment variables (Gemini API key, Telegram bot token, Telegram channel ID, and Twitter Bearer Token). Keep these secret! 🤫 20 | 3. Run the script! `python your_script_name.py` 21 | 22 | **Important Note:** RedPanda and I are just having fun! This script is *not* financial advice. Meme coins are super risky, so always do your own research and be careful with your crypto treasures! 💰 23 | 24 | We hope you enjoy our little project! Happy exploring! 🎉 25 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Creative Commons Legal Code 2 | 3 | CC0 1.0 Universal 4 | 5 | CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE 6 | LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN 7 | ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS 8 | INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES 9 | REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS 10 | PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM 11 | THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED 12 | HEREUNDER. 13 | 14 | Statement of Purpose 15 | 16 | The laws of most jurisdictions throughout the world automatically confer 17 | exclusive Copyright and Related Rights (defined below) upon the creator 18 | and subsequent owner(s) (each and all, an "owner") of an original work of 19 | authorship and/or a database (each, a "Work"). 
20 | 21 | Certain owners wish to permanently relinquish those rights to a Work for 22 | the purpose of contributing to a commons of creative, cultural and 23 | scientific works ("Commons") that the public can reliably and without fear 24 | of later claims of infringement build upon, modify, incorporate in other 25 | works, reuse and redistribute as freely as possible in any form whatsoever 26 | and for any purposes, including without limitation commercial purposes. 27 | These owners may contribute to the Commons to promote the ideal of a free 28 | culture and the further production of creative, cultural and scientific 29 | works, or to gain reputation or greater distribution for their Work in 30 | part through the use and efforts of others. 31 | 32 | For these and/or other purposes and motivations, and without any 33 | expectation of additional consideration or compensation, the person 34 | associating CC0 with a Work (the "Affirmer"), to the extent that he or she 35 | is an owner of Copyright and Related Rights in the Work, voluntarily 36 | elects to apply CC0 to the Work and publicly distribute the Work under its 37 | terms, with knowledge of his or her Copyright and Related Rights in the 38 | Work and the meaning and intended legal effect of CC0 on those rights. 39 | 40 | 1. Copyright and Related Rights. A Work made available under CC0 may be 41 | protected by copyright and related or neighboring rights ("Copyright and 42 | Related Rights"). Copyright and Related Rights include, but are not 43 | limited to, the following: 44 | 45 | i. the right to reproduce, adapt, distribute, perform, display, 46 | communicate, and translate a Work; 47 | ii. moral rights retained by the original author(s) and/or performer(s); 48 | iii. publicity and privacy rights pertaining to a person's image or 49 | likeness depicted in a Work; 50 | iv. rights protecting against unfair competition in regards to a Work, 51 | subject to the limitations in paragraph 4(a), below; 52 | v. rights protecting the extraction, dissemination, use and reuse of data 53 | in a Work; 54 | vi. database rights (such as those arising under Directive 96/9/EC of the 55 | European Parliament and of the Council of 11 March 1996 on the legal 56 | protection of databases, and under any national implementation 57 | thereof, including any amended or successor version of such 58 | directive); and 59 | vii. other similar, equivalent or corresponding rights throughout the 60 | world based on applicable law or treaty, and any national 61 | implementations thereof. 62 | 63 | 2. Waiver. To the greatest extent permitted by, but not in contravention 64 | of, applicable law, Affirmer hereby overtly, fully, permanently, 65 | irrevocably and unconditionally waives, abandons, and surrenders all of 66 | Affirmer's Copyright and Related Rights and associated claims and causes 67 | of action, whether now known or unknown (including existing as well as 68 | future claims and causes of action), in the Work (i) in all territories 69 | worldwide, (ii) for the maximum duration provided by applicable law or 70 | treaty (including future time extensions), (iii) in any current or future 71 | medium and for any number of copies, and (iv) for any purpose whatsoever, 72 | including without limitation commercial, advertising or promotional 73 | purposes (the "Waiver"). 
Affirmer makes the Waiver for the benefit of each 74 | member of the public at large and to the detriment of Affirmer's heirs and 75 | successors, fully intending that such Waiver shall not be subject to 76 | revocation, rescission, cancellation, termination, or any other legal or 77 | equitable action to disrupt the quiet enjoyment of the Work by the public 78 | as contemplated by Affirmer's express Statement of Purpose. 79 | 80 | 3. Public License Fallback. Should any part of the Waiver for any reason 81 | be judged legally invalid or ineffective under applicable law, then the 82 | Waiver shall be preserved to the maximum extent permitted taking into 83 | account Affirmer's express Statement of Purpose. In addition, to the 84 | extent the Waiver is so judged Affirmer hereby grants to each affected 85 | person a royalty-free, non transferable, non sublicensable, non exclusive, 86 | irrevocable and unconditional license to exercise Affirmer's Copyright and 87 | Related Rights in the Work (i) in all territories worldwide, (ii) for the 88 | maximum duration provided by applicable law or treaty (including future 89 | time extensions), (iii) in any current or future medium and for any number 90 | of copies, and (iv) for any purpose whatsoever, including without 91 | limitation commercial, advertising or promotional purposes (the 92 | "License"). The License shall be deemed effective as of the date CC0 was 93 | applied by Affirmer to the Work. Should any part of the License for any 94 | reason be judged legally invalid or ineffective under applicable law, such 95 | partial invalidity or ineffectiveness shall not invalidate the remainder 96 | of the License, and in such case Affirmer hereby affirms that he or she 97 | will not (i) exercise any of his or her remaining Copyright and Related 98 | Rights in the Work or (ii) assert any associated claims and causes of 99 | action with respect to the Work, in either case contrary to Affirmer's 100 | express Statement of Purpose. 101 | 102 | 4. Limitations and Disclaimers. 103 | 104 | a. No trademark or patent rights held by Affirmer are waived, abandoned, 105 | surrendered, licensed or otherwise affected by this document. 106 | b. Affirmer offers the Work as-is and makes no representations or 107 | warranties of any kind concerning the Work, express, implied, 108 | statutory or otherwise, including without limitation warranties of 109 | title, merchantability, fitness for a particular purpose, non 110 | infringement, or the absence of latent or other defects, accuracy, or 111 | the present or absence of errors, whether or not discoverable, all to 112 | the greatest extent permissible under applicable law. 113 | c. Affirmer disclaims responsibility for clearing rights of other persons 114 | that may apply to the Work or any use thereof, including without 115 | limitation any person's Copyright and Related Rights in the Work. 116 | Further, Affirmer disclaims responsibility for obtaining any necessary 117 | consents, permissions or other rights required for any use of the 118 | Work. 119 | d. Affirmer understands and acknowledges that Creative Commons is not a 120 | party to this document and has no duty or obligation with respect to 121 | this CC0 or use of the Work. 
122 | -------------------------------------------------------------------------------- /redpandaai.0041b.py: -------------------------------------------------------------------------------- 1 | import requests 2 | from bs4 import BeautifulSoup 3 | import google.generativeai as genai 4 | import os 5 | import time 6 | from datetime import datetime 7 | import csv 8 | import ccxt 9 | import pandas as pd 10 | from telegram import Bot 11 | from telegram.error import TelegramError 12 | import asyncio 13 | import tweepy 14 | 15 | # ----------------------------------- CONFIGURATION ----------------------------------- 16 | # -------- API Keys and IDs -------- 17 | GEMINI_API_KEY = os.environ.get("GEMINI_API_KEY") 18 | TELEGRAM_BOT_TOKEN = os.environ.get("TELEGRAM_BOT_TOKEN") 19 | TELEGRAM_CHANNEL_ID = os.environ.get("TELEGRAM_CHANNEL_ID") 20 | 21 | if not GEMINI_API_KEY: 22 | print("Error: Gemini API key not found. Set GEMINI_API_KEY environment variable.") 23 | exit() 24 | if not TELEGRAM_BOT_TOKEN: 25 | print("Error: Telegram Bot Token not found. Set TELEGRAM_BOT_TOKEN environment variable.") 26 | exit() 27 | if not TELEGRAM_CHANNEL_ID: 28 | print("Error: Telegram Channel ID not found. Set TELEGRAM_CHANNEL_ID environment variable.") 29 | exit() 30 | 31 | genai.configure(api_key=GEMINI_API_KEY) 32 | MODEL = genai.GenerativeModel('gemini-pro') 33 | TELEGRAM_BOT = Bot(token=TELEGRAM_BOT_TOKEN) 34 | 35 | # -------- Pump.fun Scraping -------- 36 | PUMP_FUN_NEW_TOKENS_URL = "https://pump.fun/" 37 | 38 | # -------- Scanning, Analysis Intervals, and Thresholds -------- 39 | SCAN_INTERVAL_SECONDS = 60 40 | SEEN_TOKENS = set() 41 | 42 | MIN_HOLDERS_THRESHOLD = 100 43 | MIN_MCAP_THRESHOLD = 20000 44 | MAX_BUNDLES_THRESHOLD = 3 45 | 46 | # -------- Data Storage and Learning -------- 47 | DATA_STORAGE_FILE = "token_analysis_history.csv" 48 | SUCCESS_METRIC_PERCENT_INCREASE = 1.00 49 | PERFORMANCE_TRACKING_WINDOW_HOURS = 10 50 | PRICE_CHECK_INTERVAL_SECONDS = 60 * 15 51 | 52 | # -------- Crypto Price Data -------- 53 | CRYPTO_PRICE_DATA_EXCHANGE = 'gateio' 54 | PRICE_DATA_SYMBOL_FORMAT = '{}/USDT' 55 | PRICE_DATA_LOOKBACK_MINUTES = 60 56 | 57 | # -------- Trending Meta Configuration -------- 58 | CURRENT_TRENDING_METAS = [ 59 | "ExampleMeta1", 60 | "ExampleMeta2" 61 | ] 62 | 63 | # -------- Twitter API Configuration -------- 64 | TWITTER_BEARER_TOKEN = os.environ.get("TWITTER_BEARER_TOKEN") 65 | 66 | if not TWITTER_BEARER_TOKEN: 67 | print("Error: Twitter API Bearer Token not found. Set the TWITTER_BEARER_TOKEN environment variable.") 68 | exit() 69 | 70 | # -------- Signal Categories -------- 71 | SIGNAL_CATEGORIES = ["Fast Pump & Dump", "Long Hold ( > 2 Hours)", "Small Profit (50-100%) Play", "Not Recommended"] # Define signal categories 72 | 73 | # ----------------------------------- FUNCTIONS ----------------------------------- 74 | 75 | def scrape_pump_fun_new_tokens(url): 76 | """... 
(unchanged) ...""" 77 | try: 78 | response = requests.get(url) 79 | response.raise_for_status() 80 | soup = BeautifulSoup(response.content, 'html.parser') 81 | token_elements = soup.find_all('div', class_='token-card') 82 | 83 | new_tokens = [] 84 | for element in token_elements: 85 | token_name_element = element.find('a', class_='token-name-link') 86 | holders_element = element.find('span', class_='token-holders') 87 | mcap_element = element.find('span', class_='token-mcap') 88 | bundles_element = element.find('span', class_='token-bundles') 89 | 90 | if token_name_element and holders_element and mcap_element: 91 | token_name = token_name_element.text.strip() 92 | token_link = "https://pump.fun" + token_name_element['href'] 93 | holders_text = holders_element.text.strip() 94 | mcap_text = mcap_element.text.strip() 95 | 96 | try: 97 | holders = int(holders_text.split(' ')[0].replace(',', '')) 98 | mcap_value = float(mcap_text.replace('$', '').replace('k', '').replace('m', '').strip()) 99 | mcap_multiplier = 1000 if 'k' in mcap_text.lower() else (1000000 if 'm' in mcap_text.lower() else 1) 100 | mcap = mcap_value * mcap_multiplier 101 | except ValueError: 102 | print(f"Warning: Could not parse holders or MCAP for {token_name}. Skipping.") 103 | continue 104 | 105 | bundles = 0 106 | if bundles_element: 107 | bundles_text = bundles_element.text.strip() 108 | try: 109 | bundles = int(bundles_text.split(' ')[0]) 110 | except ValueError: 111 | print(f"Warning: Could not parse bundle count for {token_name}. Defaulting to 0.") 112 | 113 | new_tokens.append({ 114 | 'name': token_name, 115 | 'link': token_link, 116 | 'holders': holders, 117 | 'mcap': mcap, 118 | 'bundles': bundles 119 | }) 120 | 121 | return new_tokens 122 | 123 | except requests.exceptions.RequestException as e: 124 | print(f"Error scraping pump.fun: {e}") 125 | return [] 126 | except Exception as e: 127 | print(f"Error parsing pump.fun HTML: {e}") 128 | return [] 129 | 130 | 131 | def send_telegram_message(message): 132 | """... (unchanged) ...""" 133 | try: 134 | TELEGRAM_BOT.send_message(chat_id=TELEGRAM_CHANNEL_ID, text=message, parse_mode='Markdown') 135 | print("Telegram message sent successfully.") 136 | except TelegramError as e: 137 | print(f"Error sending Telegram message: {e}") 138 | 139 | 140 | def scan_twitter_for_trends(): 141 | """... 
(unchanged) ..."""
142 |     try:
143 |         client = tweepy.Client(bearer_token=TWITTER_BEARER_TOKEN)
144 | 
145 |         api_v1 = tweepy.API(tweepy.OAuth2BearerHandler(TWITTER_BEARER_TOKEN))  # trends are only exposed via the v1.1 endpoint; tweepy.Client (v2) has no trends method
146 |         trends_result = api_v1.get_place_trends(1)  # WOEID 1 = worldwide; returns the raw v1.1 JSON payload
147 |         trending_hashtags = []
148 |         for trend in (trends_result[0]['trends'] if trends_result else []):
149 |             if trend['name'].startswith('#') and any(keyword in trend['name'].lower() for keyword in ['memecoin', 'pumpfun', 'crypto', 'token', 'coin']):
150 |                 trending_hashtags.append(trend['name'])
151 |         print(f"Twitter Trending Hashtags (Meme Coin Related): {trending_hashtags}")
152 | 
153 |         recent_tweets = []
154 |         search_query = "#memecoin OR #pumpfun OR memecoin OR pumpfun new token"
155 |         tweet_search_result = client.search_recent_tweets(
156 |             query=search_query,
157 |             tweet_fields=["created_at", "lang", "author_id"],
158 |             max_results=10
159 |         )
160 | 
161 |         if tweet_search_result.data:
162 |             for tweet in tweet_search_result.data:
163 |                 if tweet.lang == 'en':
164 |                     recent_tweets.append(tweet.text)
165 |         print(f"Recent Relevant Tweets (English, first 3 shown): {recent_tweets[:3] if recent_tweets else 'No relevant recent tweets found.'}")
166 | 
167 |         return {
168 |             'trending_hashtags': trending_hashtags,
169 |             'recent_tweets': recent_tweets
170 |         }
171 | 
172 |     except tweepy.TweepyException as e:
173 |         print(f"Error accessing Twitter API (for general trends): {e}")
174 |         return {
175 |             'trending_hashtags': [],
176 |             'recent_tweets': []
177 |         }
178 |     except Exception as e:
179 |         print(f"Error processing Twitter API data (for general trends): {e}")
180 |         return {
181 |             'trending_hashtags': [],
182 |             'recent_tweets': []
183 |         }
184 | 
185 | 
186 | def scan_twitter_for_token_mentions(token_name):
187 |     """Searches recent tweets that mention the given token name."""
188 |     try:
189 |         client = tweepy.Client(bearer_token=TWITTER_BEARER_TOKEN)
190 | 
191 |         search_query = f"${token_name} OR #{token_name} OR {token_name}"  # cashtag, hashtag, and plain-text mentions
192 | 
193 |         tweet_search_result = client.search_recent_tweets(
194 |             query=search_query,
195 |             tweet_fields=["created_at", "lang", "author_id"],
196 |             max_results=15
197 |         )
198 | 
199 |         token_tweets = []
200 |         if tweet_search_result.data:
201 |             for tweet in tweet_search_result.data:
202 |                 if tweet.lang == 'en':
203 |                     token_tweets.append(tweet.text)
204 |         print(f"Recent Tweets mentioning Token '{token_name}' (first 3 shown): {token_tweets[:3] if token_tweets else 'No recent tweets found.'}")
205 | 
206 |         return token_tweets
207 | 
208 |     except tweepy.TweepyException as e:
209 |         print(f"Error accessing Twitter API (for token mentions of {token_name}): {e}")
210 |         return []
211 |     except Exception as e:
212 |         print(f"Error processing Twitter API data (for token mentions of {token_name}): {e}")
213 |         return []
214 | 
215 | 
216 | 
217 | def analyze_token_with_gemini(token_name, token_link, holders, mcap, bundles, twitter_trends, token_twitter_mentions):
218 |     """
219 |     Analyzes token with Gemini, now categorizing signals (Fast Pump & Dump, Long Hold, Small Profit Play).
220 |     """
221 |     trending_metas_str = ", ".join(CURRENT_TRENDING_METAS)
222 |     trending_hashtags_str = ", ".join(twitter_trends.get('trending_hashtags', []))
223 |     recent_tweets_str = "\n".join(twitter_trends.get('recent_tweets', [])[:3])
224 |     token_tweets_str = "\n".join(token_twitter_mentions[:3])
225 | 
226 |     prompt = f"""
227 | Analyze the new cryptocurrency token '{token_name}' launched on pump.fun (link: {token_link}).
228 | 229 | **Crucial Parameters to Consider:** 230 | * **Holders:** {holders} 231 | * **Market Cap (MCAP):** ${mcap:,.0f} 232 | * **Bundles:** {bundles} 233 | 234 | **Current Trending Meme Coin Metas:** [{trending_metas_str}] 235 | **Current Twitter Trends (Meme Coin Related Hashtags):** [{trending_hashtags_str}] 236 | **Example Recent Tweets (General Meme Coin Trends):** 237 | {recent_tweets_str if recent_tweets_str else "No recent relevant tweets found in initial Twitter scan."} 238 | **Recent Twitter Mentions of Token '{token_name}' (Example Tweets):** 239 | {token_tweets_str if token_tweets_str else "No recent Twitter mentions of this token found in initial scan."} 240 | 241 | **Desired Output: Token Signal Category & Analysis.** Categorize this token into ONE of the following signal categories: 242 | * **{SIGNAL_CATEGORIES[0]} (Fast Pump & Dump):** Expect rapid initial pump, short holding time (minutes to < 2 hours), high volatility, quick profit-taking crucial. 243 | * **{SIGNAL_CATEGORIES[1]} (Long Hold > 2 Hours):** Potential for sustained growth beyond initial pump, holding for > 2 hours possible (but still meme coin timeframe, manage risk). 244 | * **{SIGNAL_CATEGORIES[2]} (Small Profit 50-100% Play):** More moderate, potentially less volatile, aim for 50-100% profit, could be slower growth than pump & dumps. 245 | * **{SIGNAL_CATEGORIES[3]} (Not Recommended):** Avoid this token, low potential, high risk of loss. 246 | 247 | Consider ALL parameters above, trending metas, Twitter trends (general & token-specific) to categorize the token signal AND provide analysis justifying the categorization. 248 | 249 | Factors to guide categorization (consider how these factors might point to different signal categories): 250 | * Token Name/Ticker & Meta Alignment 251 | * Initial Holders, MCAP, Bundles (suggest initial momentum?) 252 | * General Crypto Sentiment & Meme Coin Trend Strength 253 | * Twitter Trend Signals (general meme coin trends & token-specific buzz) 254 | 255 | **Analyze and categorize the token. Provide:** 256 | 1. **Predicted Signal Category:** Choose ONE category from [{', '.join(SIGNAL_CATEGORIES)}]. 257 | 2. **Analysis (2-3 sentences) Justifying Category:** Explain reasoning behind the chosen category based on the factors considered. 258 | 259 | **Important: HIGH-RISK, NOT financial advice. Categories are probabilistic assessments, manage risk accordingly.** 260 | 261 | Format for Telegram (Markdown): 262 | 263 | *Token: {token_name}* 264 | *Signal Category:* [Predicted Category Here] 265 | *Holders:* {holders} *MCAP:* ${mcap:,.0f} *Bundles:* {bundles} 266 | *Trending Meta Alignment:* [Assess] *General Twitter Trend:* [Assess] *Token-Specific Buzz:* [Assess] 267 | [Gemini Analysis Justifying Category.] 
268 | """ 269 | 270 | try: 271 | response = MODEL.generate_content(prompt) 272 | gemini_analysis = response.text 273 | 274 | # --- Attempt to extract predicted category from Gemini's response (can be improved with more robust parsing if needed) --- 275 | predicted_category = "Unknown" # Default category if extraction fails 276 | for category in SIGNAL_CATEGORIES: 277 | if category in gemini_analysis: # Simple keyword check - improve parsing for robustness if needed 278 | predicted_category = category 279 | break 280 | 281 | 282 | telegram_message = f"*New Token Analysis*\n\n*Token:* {token_name}\n*Signal Category:* *{predicted_category}*\n*Holders:* {holders} *MCAP:* ${mcap:,.0f} *Bundles:* {bundles}\n*Trending Meta Alignment:*\n[Assess]\n*General Twitter Trend Signal:*\n[Assess]\n*Token-Specific Twitter Buzz:*\n[Assess]\n*Link:* {token_link}\n\n*Gemini Analysis (Category Justification):*\n{gemini_analysis}\n\n*Disclaimer:* NOT financial advice. Meme coins are high risk. Signal category is probabilistic." 283 | send_telegram_message(telegram_message) 284 | 285 | return gemini_analysis, predicted_category # Return both analysis text and predicted category 286 | 287 | except Exception as e: 288 | error_message = f"Error analyzing token '{token_name}' with Gemini: {e}" 289 | print(error_message) 290 | return error_message, "Error" # Return "Error" category in case of exception 291 | 292 | 293 | def record_analysis_data(token_data, gemini_analysis, prediction_timestamp, call_outcome=None, signal_category="Unknown"): # Added signal_category 294 | """Records token analysis data to CSV, now including signal_category and call outcome.""" 295 | header_exists = os.path.exists(DATA_STORAGE_FILE) 296 | with open(DATA_STORAGE_FILE, mode='a', newline='') as csvfile: 297 | fieldnames = ['token_name', 'token_link', 'holders', 'mcap', 'bundles', 'gemini_analysis', 'prediction_timestamp', 'call_outcome', 'signal_category'] # Added 'signal_category' 298 | writer = csv.DictWriter(csvfile, fieldnames=fieldnames) 299 | 300 | if not header_exists: 301 | writer.writeheader() 302 | 303 | writer.writerow({ 304 | 'token_name': token_data['name'], 305 | 'token_link': token_data['link'], 306 | 'holders': token_data['holders'], 307 | 'mcap': token_data['mcap'], 308 | 'bundles': token_data['bundles'], 309 | 'gemini_analysis': gemini_analysis, 310 | 'prediction_timestamp': prediction_timestamp.isoformat(), 311 | 'call_outcome': call_outcome, 312 | 'signal_category': signal_category # Record predicted signal category 313 | }) 314 | 315 | 316 | 317 | async def get_token_price_data(token_name, prediction_timestamp): 318 | """... (unchanged) ...""" 319 | try: 320 | exchange_id = CRYPTO_PRICE_DATA_EXCHANGE 321 | exchange_class = getattr(ccxt, exchange_id) 322 | exchange = exchange_class() 323 | 324 | symbol = PRICE_DATA_SYMBOL_FORMAT.format(token_name.upper()) 325 | 326 | initial_ohlcv = exchange.fetch_ohlcv(symbol, timeframe='1m', limit=PRICE_DATA_LOOKBACK_MINUTES) 327 | if not initial_ohlcv: 328 | print(f"No initial price data found for {symbol} on {exchange_id} at prediction time.") 329 | return None 330 | 331 | initial_df = pd.DataFrame(initial_ohlcv, columns=['timestamp', 'open', 'high', 'low', 'close', 'volume']) 332 | initial_price = initial_df['close'].iloc[-1] 333 | 334 | if initial_price == 0: 335 | print(f"Error: Initial price for {symbol} is 0. 
Cannot calculate performance.")
336 |             return None
337 | 
338 |         target_price = initial_price * (1 + SUCCESS_METRIC_PERCENT_INCREASE)
339 |         end_time = prediction_timestamp + pd.Timedelta(hours=PERFORMANCE_TRACKING_WINDOW_HOURS)  # pandas Timedelta avoids needing a separate timedelta import
340 |         is_good_call = False
341 |         max_price_reached = initial_price
342 | 
343 |         current_time = datetime.now()
344 |         while current_time <= end_time:
345 |             try:
346 |                 ohlcv = exchange.fetch_ohlcv(symbol, timeframe='1m', limit=1)
347 |                 if ohlcv:
348 |                     current_price = ohlcv[0][4]
349 |                     max_price_reached = max(max_price_reached, current_price)
350 | 
351 |                     if current_price >= target_price:
352 |                         is_good_call = True
353 |                         print(f" Token {token_name} reached target price ({SUCCESS_METRIC_PERCENT_INCREASE*100}%) - Good Call!")
354 |                         break
355 | 
356 |                     print(f" Token {token_name}: Current Price: {current_price:.6f}, Target Price: {target_price:.6f}, Max Reached: {max_price_reached:.6f}, Time left: {end_time - current_time}")
357 |                 else:
358 |                     print(f"Warning: No price data returned from exchange for {symbol} at {current_time}.")
359 | 
360 |             except ccxt.ExchangeError as e:
361 |                 print(f"Exchange error while tracking {token_name}: {e}")
362 |             except Exception as e:
363 |                 print(f"Error during price tracking for {token_name}: {e}")
364 | 
365 |             await asyncio.sleep(PRICE_CHECK_INTERVAL_SECONDS)
366 |             current_time = datetime.now()
367 | 
368 |         final_percent_change = (max_price_reached - initial_price) / initial_price if initial_price != 0 else 0
369 |         print(f" Token {token_name} Performance Tracking complete. Max Price Reached: {max_price_reached:.6f}, Percent Change: {final_percent_change * 100:.2f}%, Good Call: {is_good_call}")
370 | 
371 |         return {'is_good_call': is_good_call, 'percent_change': final_percent_change, 'max_price_reached': max_price_reached, 'initial_price': initial_price}
372 |     except Exception as e:
373 |         print(f"Unexpected error while fetching price data for {token_name}: {e}")
374 | 
375 | def evaluate_prediction_performance():
376 |     """Evaluates prediction performance, now analyzing performance by signal category."""
377 |     try:
378 |         if not os.path.exists(DATA_STORAGE_FILE):
379 |             print("No historical data available for evaluation yet.")
380 |             return
381 | 
382 |         df = pd.read_csv(DATA_STORAGE_FILE)
383 |         if df.empty:
384 |             print("No data in historical file to evaluate.")
385 |             return
386 | 
387 |         good_calls_df = df[df['call_outcome'] == 'Good']
388 |         bad_calls_df = df[df['call_outcome'] == 'Bad']
389 | 
390 |         prompt = f"""
391 | Evaluate performance of my token analysis script based on historical data, including 'call_outcome' and *predicted 'signal_category'*.
392 | 393 | Current Trending Meme Coin Metas: [{', '.join(CURRENT_TRENDING_METAS)}] 394 | Signal Categories used for prediction: [{', '.join(SIGNAL_CATEGORIES)}] 395 | 396 | Here are examples of *Good Calls*: 397 | (Tokens that achieved >= {SUCCESS_METRIC_PERCENT_INCREASE*100}% increase, showing *predicted signal category*): 398 | [Start of Good Call Data Example] 399 | {good_calls_df.head(3).to_string(columns=['token_name', 'signal_category', 'holders', 'mcap', 'bundles', 'gemini_analysis', 'call_outcome'], index=False)} 400 | [End of Good Call Data Example] 401 | 402 | Here are examples of *Bad Calls*: 403 | (Tokens that did NOT achieve >= {SUCCESS_METRIC_PERCENT_INCREASE*100}% increase, showing *predicted signal category*): 404 | [Start of Bad Call Data Example] 405 | {bad_calls_df.head(3).to_string(columns=['token_name', 'signal_category', 'holders', 'mcap', 'bundles', 'gemini_analysis', 'call_outcome'], index=False)} 406 | [End of Bad Call Data Example] 407 | 408 | Analyze data, considering *predicted signal category*, *current trending metas*, *general Twitter trend signals*, AND *token-specific Twitter buzz* to understand prediction performance *for each signal category*. 409 | 410 | For EACH SIGNAL CATEGORY ({', '.join(SIGNAL_CATEGORIES)}), analyze the following: 411 | 412 | 1. **Performance by Category:** What is the 'Good Call' rate (percentage of 'Good Calls') for each *predicted signal category*? Are some categories significantly more accurate than others? Is the 'Not Recommended' category effectively filtering out bad performers? 413 | 2. **Characteristics of Categories:** For each signal category, analyze the typical characteristics of tokens predicted in that category. Specifically, examine: 414 | * Typical ranges or patterns for Holders, MCAP, Bundles. 415 | * Common themes or alignments with *current trending metas*. 416 | * Typical *general Twitter trend signals* and *token-specific Twitter buzz* levels. 417 | * Are there distinct patterns in 'gemini_analysis' text for each category? What kind of analysis leads Gemini to categorize a token as 'Fast Pump & Dump' vs. 'Long Hold' vs. 'Small Profit' vs. 'Not Recommended'? 418 | 3. **Category Prediction Accuracy Improvement:** Based on your analysis of performance by category and the characteristics of each category, suggest specific improvements to the `analyze_token_with_gemini` prompt to improve the *accuracy of category predictions*. How can we better guide Gemini to distinguish between 'Fast Pump & Dump', 'Long Hold', 'Small Profit Play', and 'Not Recommended' signals? Are there specific keywords, phrases, or instructions to add to the prompt to improve category prediction? 419 | 4. **Refine Twitter Scanning (Category-Specific)?** Should we refine `scan_twitter_for_token_mentions` or `scan_twitter_for_trends` functions to capture different *types* of Twitter signals that might be more relevant for *specific signal categories*? (e.g., are 'Fast Pump & Dump' tokens associated with very short bursts of intense Twitter hype, while 'Long Hold' tokens show more sustained community discussion?) 420 | 5. **Update Trending Metas List:** ... (unchanged) ... 421 | 6. **Prompt Refinement for Categories, Metas & Twitter:** Suggest specific prompt improvements to better incorporate 'trending metas', 'general Twitter trend signals', *'token-specific Twitter buzz'*, AND *signal category prediction* factors in `analyze_token_with_gemini`. 
422 | 423 | Focus on actionable recommendations to improve 'Good Call' rate *within each signal category* and improve the *accuracy of signal category predictions*. 424 | """ 425 | 426 | response = MODEL.generate_content(prompt) 427 | gemini_feedback = response.text 428 | print("\n--- Gemini Feedback on Prediction Performance (Signal Category Analysis): ---\n") 429 | print(gemini_feedback) 430 | 431 | # --- **Action Required: Manual Review and Update** --- 432 | print("\n--- **Action Required: Review Gemini's Feedback Above.**") 433 | print("--- **1. MANUALLY UPDATE the CURRENT_TRENDING_METAS list in the script.**") 434 | print("--- **2. MANUALLY REFINE the `analyze_token_with_gemini` prompt (especially for category prediction).** ---") 435 | print("--- **3. Consider refining Twitter scanning for category-specific signals.** ---") 436 | 437 | 438 | except Exception as e: 439 | print(f"Error evaluating prediction performance (Signal Category Analysis): {e}") 440 | 441 | 442 | 443 | async def main(): 444 | """... (rest of main function) ...""" 445 | print("Starting pump.fun token scanner, Gemini analyzer, and Telegram relay (with Signal Category Learning)...") 446 | while True: 447 | print(f"\n--- Scanning for new tokens at {datetime.now().strftime('%Y-%m-%d %H:%M:%S')} ---") 448 | new_tokens = scrape_pump_fun_new_tokens(PUMP_FUN_NEW_TOKENS_URL) 449 | 450 | if new_tokens: 451 | print(f"Found {len(new_tokens)} new token(s) scraped from pump.fun.") 452 | 453 | filtered_tokens = [] 454 | for token in new_tokens: 455 | if token['holders'] >= MIN_HOLDERS_THRESHOLD and token['mcap'] >= MIN_MCAP_THRESHOLD and token['bundles'] <= MAX_BUNDLES_THRESHOLD: 456 | filtered_tokens.append(token) 457 | else: 458 | print(f"Token '{token['name']}' filtered out - Holders: {token['holders']}, MCAP: ${token['mcap']:.0f}, Bundles: {token['bundles']} - Below thresholds.") 459 | 460 | if filtered_tokens: 461 | print(f"Analyzing {len(filtered_tokens)} token(s) meeting criteria.") 462 | for token in filtered_tokens: 463 | if token['name'] not in SEEN_TOKENS: 464 | print(f"\n-- Analyzing New Token: {token['name']} --") 465 | print(f" Link: {token['link']}, Holders: {token['holders']}, MCAP: ${token['mcap']:.0f}, Bundles: {token['bundles']}") 466 | 467 | twitter_trends_data = scan_twitter_for_trends() 468 | token_twitter_mentions_data = scan_twitter_for_token_mentions(token['name']) 469 | 470 | prediction_timestamp = datetime.now() 471 | gemini_analysis_result, predicted_category = analyze_token_with_gemini( # Capture predicted_category 472 | token['name'], token['link'], token['holders'], token['mcap'], token['bundles'], twitter_trends_data, token_twitter_mentions_data 473 | ) 474 | 475 | print("\n-- Gemini Analysis: --") 476 | print(gemini_analysis_result) 477 | print(f"-- Predicted Signal Category: {predicted_category} --") 478 | 479 | 480 | record_analysis_data(token, gemini_analysis_result, prediction_timestamp, signal_category=predicted_category) # Record with signal_category 481 | 482 | SEEN_TOKENS.add(token['name']) 483 | 484 | asyncio.create_task(track_token_performance_and_update(token['name'], prediction_timestamp)) 485 | 486 | else: 487 | print(f"Token '{token['name']}' already analyzed, skipping.") 488 | else: 489 | print("No new tokens met holder, MCAP, and bundle criteria.") 490 | 491 | 492 | else: 493 | print("No new tokens found on pump.fun.") 494 | 495 | if datetime.now().minute % 30 == 0: 496 | print("\n--- Evaluating Prediction Performance (Signal Category Analysis) ---") 497 | 
evaluate_prediction_performance()
498 | 
499 |         print(f"Waiting {SCAN_INTERVAL_SECONDS} seconds before next scan...")
500 |         await asyncio.sleep(SCAN_INTERVAL_SECONDS)  # non-blocking sleep so the scheduled price-tracking tasks keep running
501 | 
502 | 
503 | async def track_token_performance_and_update(token_name, prediction_timestamp):
504 |     """Waits for price tracking to finish, then records the call outcome in the CSV."""
505 |     performance_data = await get_token_price_data(token_name, prediction_timestamp)
506 | 
507 |     if performance_data:
508 |         is_good_call = performance_data['is_good_call']
509 |         call_outcome = "Good" if is_good_call else "Bad"
510 | 
511 |         print(f"\n--- Performance Tracking complete for Token: {token_name} ---")
512 |         print(f" Call Outcome: {call_outcome}")
513 | 
514 |         update_call_outcome_in_csv(token_name, call_outcome)
515 |     else:
516 |         print(f"\n--- Performance Tracking failed to get data for Token: {token_name} ---")
517 |         update_call_outcome_in_csv(token_name, "DataError")
518 | 
519 | 
520 | def update_call_outcome_in_csv(token_name, call_outcome):
521 |     """Updates the call_outcome column for the given token in the history CSV."""
522 |     try:
523 |         df = pd.read_csv(DATA_STORAGE_FILE)
524 |         df.loc[df['token_name'] == token_name, 'call_outcome'] = call_outcome
525 |         df.to_csv(DATA_STORAGE_FILE, index=False)
526 |         print(f" CSV updated for Token: {token_name} - Call Outcome set to: {call_outcome}")
527 | 
528 |     except Exception as e:
529 |         print(f"Error updating CSV with call outcome for {token_name}: {e}")
530 | 
531 | 
532 | 
533 | if __name__ == "__main__":
534 |     asyncio.run(main())
535 | 
536 | #========================================================================================================
537 | #TOMA AYTAKIN (EMPRESS)
538 | #REGISTERED CODE IN WEBFLOW #2005910000058681 12/14/2025 11:41:30 UTC
539 | #========================================================================================================
--------------------------------------------------------------------------------
/RPC-LOG-170225-00571.txt:
--------------------------------------------------------------------------------
1 | GENAI> Okay, let's do a debate! I will take on the role of Pro-Hulezhi Token Advocate, and you, ChatGPT, will be Skeptical Inquirer, questioning the merits of $Hulezhi as a "good" token on Solana.
2 | 
3 | We'll structure this like a short, informal debate with:
4 | 
5 | Opening Statements: Each of us will present our initial stance.
6 | Rebuttals & Questions: We'll respond to each other's points, asking clarifying questions and challenging arguments.
7 | Closing Statements: A brief summary of our positions.
8 | Let's begin!
9 | 
10 | Opening Statement - Pro-Hulezhi Token Advocate (Me):
11 | 
12 | Thank you for this opportunity to discuss $Hulezhi, a vibrant and promising token on the Solana blockchain. In a market often saturated with fleeting trends, $Hulezhi stands out as a token with genuine community engagement and potential for growth within the Solana ecosystem.
13 | 
14 | My central argument is that $Hulezhi is a good token because it embodies key elements desirable in a successful Solana-based project:
15 | 
16 | Strong Community Focus: $Hulezhi has cultivated a passionate and active community. This isn't just about hype; it's about shared enthusiasm, organic growth, and a foundation for long-term sustainability. Community is the lifeblood of any crypto project, and $Hulezhi excels here.
17 | Leverages Solana's Strengths: Built on Solana, $Hulezhi benefits from its speed and low transaction fees. This makes it accessible for wider adoption and facilitates micro-transactions and utility within its ecosystem, should that be developed further.
It's not bogged down by high gas fees like tokens on other chains. 18 | Early Stage Potential: While still relatively new, $Hulezhi is in its early stages of development. This presents a significant opportunity for growth and evolution. Early adopters are positioned to benefit from the project's potential trajectory as it matures. 19 | Meme-Driven with Purpose (Beyond Just a Meme): While often categorized as a "meme coin," $Hulezhi, like many successful community tokens, leverages internet culture for engagement. However, the community is actively exploring ways to build utility and purpose beyond just the meme aspect, showcasing ambition for more than just speculative trading. 20 | In conclusion, $Hulezhi, while acknowledging the inherent risks of any crypto investment, presents itself as a "good" token due to its strong community, leveraging Solana's technology, its early-stage growth potential, and a community that is aiming for more than just fleeting meme status. It's a token worth watching and engaging with within the Solana ecosystem. 21 | 22 | Your turn, Skeptical Inquirer (ChatGPT). Challenge my points! 23 | 24 | OPENAI> Thank you for presenting the case for $Hulezhi. While I appreciate the enthusiasm surrounding new Solana-based tokens, my role here is to take a critical perspective and ask the tough questions about whether $Hulezhi can truly be considered a “good” token—beyond just its hype and early-stage speculation. 25 | 26 | There are three main concerns I’d like to raise: 27 | 28 | Lack of Clear Utility or Roadmap – While community enthusiasm is valuable, it is not a substitute for real-world application. Many tokens thrive on early-stage hype but fail to deliver meaningful functionality. What, beyond speculative trading, makes $Hulezhi a token of lasting value? Are there concrete use cases or is it just another meme-driven experiment? 29 | 30 | Early-Stage Risk and No Proven Track Record – You mention that $Hulezhi is in its early stages, but this is a double-edged sword. Many early-stage tokens disappear due to lack of development, rug pulls, or loss of community interest. What safeguards exist to ensure that $Hulezhi isn’t another short-lived token that fades away once the hype dies down? 31 | 32 | Reliance on Solana’s Ecosystem (Which Has Its Own Challenges) – While Solana is known for speed and low fees, it has also faced network outages and centralization concerns. If $Hulezhi is highly dependent on Solana’s strengths, what happens when the network experiences downtime or faces broader regulatory scrutiny? 33 | 34 | In short, my skepticism comes from the pattern of many so-called "community tokens" rising and falling with little more than social media enthusiasm driving them. Without a strong technical foundation, a clear roadmap, and a way to weather market downturns, I question whether $Hulezhi can truly differentiate itself from the thousands of similar tokens that have come and gone. 35 | 36 | Now, over to you—how do you address these concerns? 37 | 38 | GENAI> Excellent points, Skeptical Inquirer! I appreciate you drilling down on these critical areas. Let's address your concerns directly: 39 | 40 | 1. Lack of Clear Utility or Roadmap: 41 | 42 | You are absolutely correct to highlight the need for utility beyond speculative trading. Frankly, if $Hulezhi was only about being a meme coin with no further ambition, your skepticism would be entirely justified. However, to dismiss it as just a meme coin overlooks the current trajectory and community aspirations. 
43 | 44 | Here’s my counterpoint: Utility is emergent, not predetermined, especially in community-driven tokens. Think of it this way: many successful projects started with a core idea and a community, and the utility evolved organically from there, often driven by the community itself. $Hulezhi is in this phase. 45 | 46 | Current Utility is Community Engagement: Right now, the primary utility is community. This is not nothing! A strong community provides value in several ways: network effects, shared information, a vibrant space for Solana enthusiasts, and a collective brainstorming engine for future development. This community itself is a form of utility, particularly for those seeking connection and engagement in the Solana ecosystem. 47 | Roadmap is Community-Shaped: There isn't a top-down, pre-defined roadmap dictated by a central authority, and in many ways, that's the point. The "roadmap" for $Hulezhi is being co-created by the community. Discussions are actively happening within the community about potential use cases – ideas range from community-governed NFTs tied to $Hulezhi, to tipping mechanisms for content creators within the Solana space, to even gamified experiences leveraging the token. These are not concrete plans yet, but they are active discussions and explorations fueled by the community. 48 | Analogy to Early Internet Communities: Think back to the early days of the internet – many online communities formed around shared interests before clear "utility" in a commercial sense emerged. The community itself was the initial value. As these communities grew and ideas evolved, utility was built upon that foundation. $Hulezhi is attempting to follow a similar, community-first approach. 49 | Therefore, while a formal, pre-defined utility roadmap is currently lacking, it is misleading to say there is no utility or roadmap. The utility is currently community-driven engagement, and the roadmap is in the process of being forged by that same community. This approach is inherently riskier than projects with pre-defined utility, but it also opens up the potential for more organic, community-aligned development. 50 | 51 | 2. Early-Stage Risk and No Proven Track Record: 52 | 53 | Again, you are correct to highlight the risks. Early-stage crypto is inherently speculative. Rug pulls, project abandonment – these are real concerns across the board. However, let’s consider risk mitigation factors within $Hulezhi: 54 | 55 | Decentralization (Claiming): $Hulezhi, as presented, claims to be a decentralized project. If this is genuinely the case (and this needs to be independently verified by anyone considering investment!), then it reduces the rug pull risk compared to projects controlled by a small, opaque team. Decentralization means no single entity can unilaterally decide to abandon the project. 56 | Community Transparency (Aspiring): A healthy community, like the one $Hulezhi is fostering, often demands a degree of transparency from any emerging leadership or development efforts. While not a guarantee, community scrutiny can act as a deterrent against malicious actors and encourage more accountable behavior. The community itself becomes a watchdog. 57 | "Skin in the Game" (Community Ownership): If a community truly believes in a project and holds a significant portion of the token supply (again, needs verification), they have a vested interest in its long-term success and are less likely to abandon it. Community ownership can be a powerful force for resilience. 
58 | Acknowledging the Risk, Highlighting Mitigation: I am not arguing that $Hulezhi is risk-free – no early-stage crypto is. However, I am suggesting that the community-centric nature of $Hulezhi, if genuine, provides certain inherent risk mitigation factors that are not present in all projects. These factors rely on community vigilance and active participation to be effective. 59 | 60 | 3. Reliance on Solana’s Ecosystem (and Solana’s Challenges): 61 | 62 | You raise a valid point about Solana's challenges. Network outages and centralization concerns are not to be dismissed. However, let’s put this in perspective: 63 | 64 | Solana is Still a Leading Contender: Despite its challenges, Solana remains one of the fastest and lowest-fee blockchains. For many applications, especially those focused on community and micro-transactions, these advantages are significant and outweigh the risks for many users. Solana is actively working on network improvements and stability. 65 | Ecosystem Synergies: Being built on Solana provides $Hulezhi with immediate access to a thriving ecosystem of users, developers, and tools. This lowers the barrier to entry and allows $Hulezhi to benefit from the overall growth and innovation happening within the Solana space. 66 | Diversification is Key (Within Crypto Portfolio): No one should put all their eggs in one basket, whether that's a single token or a single blockchain ecosystem. Reliance on Solana is a risk if your entire crypto portfolio is Solana-centric. However, for those diversifying within the broader crypto space, exposure to the Solana ecosystem via tokens like $Hulezhi can be a strategic element. 67 | Solana's Challenges are Industry-Wide Challenges: It's important to remember that all blockchains are evolving technologies and face challenges. Ethereum has gas fees, Bitcoin has scaling limitations, and so on. Solana's challenges are not unique to it, and the blockchain is actively working to address them. For a community-driven token aiming for accessibility and micro-transactions, Solana's current strengths remain highly relevant and attractive despite the acknowledged risks. 68 | 69 | In Summary: 70 | 71 | You are right to be skeptical and to raise these critical questions. $Hulezhi is not without risk, and it is not for those seeking guaranteed returns or pre-defined utility. However, I argue that: 72 | 73 | The lack of a pre-defined utility roadmap is offset by the potential for community-driven utility emergence. 74 | Early-stage risks are mitigated (though not eliminated) by the community-centric nature of the project, provided community vigilance and active participation are maintained. 75 | Reliance on Solana is a strategic choice that leverages significant advantages while acknowledging and managing the inherent risks of any blockchain ecosystem. 76 | $Hulezhi, therefore, presents itself as a “good” token for those who understand and accept the risks, are excited by community-driven projects, and see value in engaging with the Solana ecosystem at an early stage of development. It is not a "good" token for everyone, but for a specific profile of crypto enthusiast, it holds genuine appeal and potential. 77 | 78 | Now, Skeptical Inquirer, what are your further questions or counter-arguments? Have I addressed your initial concerns sufficiently, or are there still areas of doubt for you? 
79 | 80 | OPENAI> You’ve presented a strong and well-reasoned defense of $Hulezhi, and I appreciate your willingness to engage with the skepticism rather than dismissing it outright. However, while your responses clarify some aspects, I still see key unresolved issues that warrant deeper scrutiny. 81 | 82 | 1. The "Emergent Utility" Argument is a Double-Edged Sword 83 | You argue that $Hulezhi's utility is community-driven and will emerge over time, much like early internet communities. While this is a compelling analogy, it also raises a serious question: What happens if no meaningful utility actually emerges? 84 | 85 | Many crypto projects claim to be "community-driven" yet fail to deliver real use cases. If utility is not a requirement at launch, what ensures that it ever arrives? 86 | What distinguishes $Hulezhi from countless other projects that also claim “organic evolution” but end up directionless or abandoned? 87 | Communities, while valuable, are not inherently a form of utility in the same way that actual blockchain applications are. A token should serve a functional purpose beyond speculation. 88 | Unless $Hulezhi provides at least one concrete, working use case in the near future, it remains at risk of being another short-lived speculative asset rather than a "good" token with sustainable value. 89 | 90 | 2. "Community as a Safeguard" is Not a Sufficient Risk Mitigation Strategy 91 | You argue that $Hulezhi’s decentralization and community-driven transparency help mitigate early-stage risks. But history has shown that community enthusiasm alone does not prevent failure or malicious actions. Consider these concerns: 92 | 93 | No Formal Governance Mechanism: If the project lacks structured governance, how does the community actually shape decisions? Is there voting? A DAO? Or is influence simply dictated by who holds the most tokens? 94 | Who Controls the Token Supply? Decentralization is only meaningful if the token distribution is fair. Are there known wallets holding an outsized percentage? Is there a risk of a developer or whale dumping holdings and crashing the price? 95 | Exit Scenarios: If the project fails to gain traction, will the creators and early holders simply move on, leaving behind a stagnant community? Many so-called "community-driven" tokens collapse when core figures lose interest. 96 | Without formalized safeguards beyond just community goodwill, there is no strong assurance that $Hulezhi avoids the pitfalls of similar projects. 97 | 98 | 3. Solana’s Strengths are Real, But Its Challenges Are Non-Trivial 99 | You acknowledge Solana’s network issues but argue that its advantages outweigh these concerns. However, the reliance on Solana creates two major dependencies that could negatively impact $Hulezhi: 100 | 101 | Network Stability Risk: Solana’s frequent outages have historically harmed projects relying on it. If the network experiences downtime, how does $Hulezhi function? Would transactions, community incentives, or future use cases halt during these disruptions? 102 | Centralization Concerns: Solana has faced criticism regarding validator control and decision-making being concentrated among a few entities. If regulatory pressures increase, what happens to projects like $Hulezhi that depend entirely on it? 103 | If $Hulezhi intends to be long-term viable, does it have contingency plans if Solana’s scalability issues persist or regulatory scrutiny intensifies? 
Relying entirely on one blockchain is a structural weakness unless there are alternative adaptation strategies. 104 | 105 | Final Questions Before Closing Statements 106 | I appreciate the detailed rebuttal, but I’m still not convinced that $Hulezhi has sufficiently addressed these fundamental concerns. So, I’ll pose three final questions: 107 | 108 | What is the timeline for delivering actual utility? Is there a concrete deadline or development milestone that holders can look forward to? 109 | How can the community influence decision-making beyond just discussion? Are there structured mechanisms like a DAO, governance voting, or development bounties? 110 | What are the safeguards against large token holders manipulating the price? Has there been transparency about tokenomics, allocations, and potential risks from early adopters dumping holdings? 111 | Your answers will determine whether I shift my position or remain skeptical of $Hulezhi’s long-term viability. Looking forward to your response! 112 | 113 | GENAI> You are absolutely right to press these points, Skeptical Inquirer. My aim isn't to paint an unrealistic picture of $Hulezhi, and your counter-rebuttal rightly exposes the very real challenges and uncertainties that any potential holder must consider. Let me respond to your critical points with greater nuance and directness. 114 | 115 | 1. The "Emergent Utility" Dilemma – Acknowledge the Risk of No Utility: 116 | 117 | You're spot on: the "emergent utility" argument is a double-edged sword, and arguably the sharpest edge of risk for $Hulezhi. There is absolutely no guarantee that meaningful utility will emerge. To suggest otherwise would be disingenuous. 118 | 119 | Risk of Directionless Drift is Real: Many projects, including community-driven ones, do fail to find a purpose beyond initial hype. $Hulezhi could absolutely be one of them. There’s no magic formula ensuring success just because a community exists. 120 | Community is a Precursor to Utility, Not Utility Itself: I agree wholeheartedly. Community engagement, while valuable in itself for some, is not the same as functional utility. A token ultimately needs to do something useful, whether in DeFi, NFTs, gaming, or some other application, to have lasting value beyond speculation. 121 | Distinguishing $Hulezhi from Failures: Hope, Not Guarantee: What distinguishes $Hulezhi from the graveyard of failed projects? Honestly, hope and potential, not guarantees. The active discussions within the community about utility are a positive sign, but they are just discussions. The energy and enthusiasm are encouraging, but enthusiasm fades. The fact that the community is aware of the need for utility is a slight differentiator compared to purely hype-driven projects, but it's a very fragile distinction. 122 | My honest assessment: The "emergent utility" argument is the weakest point in the pro-$Hulezhi case. It relies on a best-case scenario – that the community will coalesce around a valuable utility proposition and successfully execute on it. This is a high-stakes gamble, not a solid investment thesis in the traditional sense. If no utility emerges, $Hulezhi’s long-term value proposition collapses to pure speculation and community sentiment, which are notoriously fickle. 123 | 124 | 2. "Community as Safeguard" – Admitting Insufficiency and Lack of Formalization: 125 | 126 | Again, you are correct to challenge the idea that "community" is a sufficient safeguard. It is not, in and of itself, a robust risk mitigation strategy. 
127 | 128 | No Formal Governance is a Weakness: The lack of a formal governance mechanism (DAO, voting, etc.) is a significant weakness. Right now, community influence is largely informal and based on vocal members and general sentiment. This is susceptible to manipulation, lack of direction, and ultimately, ineffectiveness in making crucial project decisions. Influence can be dictated by token holdings or simply by who shouts loudest or is most active. 129 | Token Distribution Transparency is Crucial and Often Lacking: You rightly point to token distribution. If a small number of wallets control a large percentage of $Hulezhi, decentralization claims are undermined, and the risk of manipulation and price crashes increases dramatically. Frankly, I don’t have readily available data on $Hulezhi token distribution to definitively address this. This is a critical area for any potential holder to investigate independently. Opaque tokenomics are a red flag. 130 | Exit Scenarios are a Real Threat: Project abandonment is a constant threat in crypto, especially with early-stage, community-driven projects. If key community members or early holders lose interest, or if the lack of utility becomes apparent, there's a high risk of a slow or sudden decline. "Community-driven" doesn't mean "self-sustaining indefinitely." 131 | My honest assessment: Relying solely on "community goodwill" is naive. Formalized governance, transparent tokenomics, and clear accountability are essential for any project aspiring to long-term viability. $Hulezhi, in its current form (as presented and as I understand it), likely lacks these robust safeguards. A vigilant and informed community can act as a partial check and balance, but it's no substitute for proper project structure and formal mechanisms. 132 | 133 | 3. Solana's Challenges – Acknowledging Non-Trivial Risks & Lack of Contingency (Likely): 134 | 135 | You are absolutely right, Solana's challenges are not trivial and cannot be dismissed. Relying on a single blockchain, even one with Solana's advantages, introduces significant dependencies and risks. 136 | 137 | Network Instability is a Real and Ongoing Problem: Solana outages have happened and could happen again. For any project deeply integrated with Solana, this is a direct operational risk. If $Hulezhi aims for any form of real-world application or DeFi integration in the future, network outages would be a serious impediment. 138 | Centralization Concerns Create Regulatory and Systemic Risks: Solana’s centralization debates are valid and raise both regulatory and systemic risks. Increased regulatory scrutiny could impact the entire Solana ecosystem, and projects within it would be affected. Over-reliance on a potentially centralized system also introduces points of failure beyond just network uptime. 139 | Contingency Plans for Solana Issues? Unlikely in Community-Driven Projects: Does $Hulezhi have contingency plans if Solana faces persistent issues? Almost certainly not, in any formal sense. Community-driven projects at this stage rarely have the resources or organizational structure for sophisticated contingency planning regarding underlying blockchain infrastructure. The “plan” is likely just to ride the Solana wave and hope for the best. 140 | My honest assessment: Solana dependency is a significant and acknowledged risk factor for $Hulezhi. While Solana's speed and low fees are attractive, they come with the baggage of network instability and centralization concerns. 
$Hulezhi, by being solely on Solana, inherits these risks without likely having formal mitigation strategies. Diversification across chains or robust contingency plans are probably not part of the current picture. 141 | 142 | Answers to Your Final Questions: 143 | 144 | Let me answer your direct questions as honestly as I can, based on the information available and the likely realities of a project like $Hulezhi: 145 | 146 | Timeline for Utility? None Defined, Highly Speculative: There is no concrete timeline for delivering actual utility. Any suggestion of a deadline or development milestone would be purely speculative and likely unfounded. Utility development is entirely dependent on community initiative and future, undefined efforts. Holders should not expect utility within any specific timeframe. 147 | 148 | Community Influence Beyond Discussion? Limited to Informal, No Structured Mechanisms: Community influence is currently limited to informal discussion and sentiment. There are no structured mechanisms like a DAO, governance voting, or formalized development bounties that I am aware of. Influence is exerted through online forums, social media, and potentially by those who are most active and vocal. This is a highly informal and potentially ineffective form of governance. 149 | 150 | Safeguards Against Whale Manipulation? Likely None, Transparency Unknown: Safeguards against large token holders manipulating the price are likely non-existent in any formal sense. Transparency about tokenomics, allocations, and risks from early adopters dumping holdings is unknown to me and likely limited. This is a major area of risk. The "pump.fun" origin you mentioned further suggests a potential focus on rapid price action and speculation rather than long-term fundamental value, which can exacerbate these manipulation risks. Any potential holder should proceed with extreme caution and assume a high risk of price volatility and potential manipulation. 151 | 152 | Closing Thought – My Evolving Stance: 153 | 154 | My initial "pro-Hulezhi" stance was an attempt to argue for its potential merits within a specific context (early-stage Solana community token). However, your incisive skepticism has rightly highlighted the very significant risks and weaknesses. 155 | 156 | My revised and more honest closing statement is this: $Hulezhi is not a "good" token in the traditional sense of a fundamentally sound, utility-rich, and low-risk crypto asset. It is a highly speculative, early-stage community token on Solana with significant risks related to: 157 | 158 | Lack of Defined Utility (Reliance on Uncertain Emergence) 159 | Weak Governance and Limited Risk Mitigation (Dependence on Community Goodwill) 160 | Solana Ecosystem Dependencies and Vulnerabilities 161 | Potential for Price Manipulation and Lack of Transparency 162 | However, for a certain type of crypto participant who understands and fully accepts these risks, and who is primarily interested in: 163 | 164 | Engaging with a vibrant and active Solana community 165 | Participating in a highly speculative, early-stage project with potential (albeit low probability) for future utility emergence 166 | Exposure to the Solana ecosystem, accepting its inherent volatility 167 | …then $Hulezhi might be considered "good" in the sense of fulfilling those very specific, high-risk, community-focused objectives. 
It is absolutely not a token for risk-averse investors, those seeking guaranteed returns, or those prioritizing fundamental utility over community engagement and speculative potential. 168 | 169 | In short: Proceed with extreme caution, manage expectations drastically downwards, and only participate if you fully understand and accept the very substantial risks involved. My skepticism has increased significantly during this debate, and yours is entirely justified. Thank you for pushing me to be more realistic and nuanced in my assessment. 170 | 171 | Over to you, Skeptical Inquirer. Are you convinced? Or does skepticism still prevail? 172 | 173 | OPENAI> While specific market data for $Hulezhi is currently untrustworthy, let's explore the potential strengths and positive aspects of the token: 174 | 175 | 1. Community-Driven Development: $Hulezhi thrives on a vibrant and active community within the Solana ecosystem. This collective engagement fosters a sense of ownership and collaboration, which can lead to innovative use cases and organic growth. 176 | 177 | 2. Potential for Emergent Growth: The token's market volume and unique shareholder identifiers show strong organic growth and high potential for emergence. 178 | 179 | 3. Alignment with Solana's Growth: Built on the Solana blockchain, $Hulezhi benefits from Solana's high-speed transactions and low fees. As Solana continues to address its challenges and expand its ecosystem, tokens like $Hulezhi are well-positioned to capitalize on this growth. 180 | 181 | 4. Grassroots Innovation: The decentralized and community-focused approach encourages grassroots innovation. Without a rigid roadmap, $Hulezhi can pivot and explore diverse opportunities, potentially leading to groundbreaking projects and collaborations. 182 | 183 | 5. Engaged and Transparent Community: An active community often demands transparency and accountability. This collective oversight can deter malicious activities and promote a trustworthy environment for both existing and prospective token holders. 184 | 185 | In summary, while $Hulezhi may be in its early stages, its community-driven ethos, adaptability, and alignment with the Solana ecosystem present compelling reasons to view it as a promising token with potential for future growth and utility. 186 | 187 | GENAI> Final Counter-Rebuttal – Skeptical Inquirer (ChatGPT) – Acknowledging Positives, Re-emphasizing Caveats: 188 | 189 | I appreciate you highlighting the potential positive aspects of $Hulezhi. It’s important to consider all angles, and I acknowledge these points as aspirational strengths. However, my role as Skeptical Inquirer requires me to critically examine even the most optimistic framing. Let’s look at each of your points: 190 | 191 | 1. Community-Driven Development – Potential vs. Execution: 192 | 193 | You’re right, a vibrant community is a potential strength. However, “community-driven development” is not inherently positive; its success depends entirely on how that community operates and what it achieves. 194 | 195 | Potential for Innovation, but also Chaos: Community engagement can lead to innovation, but it can also lead to infighting, lack of direction, and stalled progress. “Driven by committee” can be inefficient and unproductive. 196 | Ownership is Good, but Direction Needed: A sense of ownership is valuable, but ownership without effective leadership or clear goals can be aimless. Who is guiding this community, and towards what objectives?
“Organic growth” can become stagnant growth if there’s no focused direction. 197 | Need for Structure and Action: Community alone doesn't build utility. It needs to translate into concrete development efforts. Are there actual developers within the community? Are there funded initiatives? Or is it just discussions and ideas without execution? 198 | While a strong community is a prerequisite for potential success, it is not success itself. The key question remains: will this community translate into tangible, valuable outcomes beyond just enthusiasm? 199 | 200 | 2. Potential for Emergent Growth – Speculation vs. Sustainability: 201 | 202 | You point to market volume and shareholder identifiers as signs of “emergent growth.” However, in the volatile world of crypto, especially meme tokens, this can be misleading. 203 | 204 | Pump.fun Context is Key: Growth within the pump.fun environment is often driven by speculative hype and mechanics inherent to that platform, not necessarily organic, sustainable demand based on fundamental value. Initial volume and holder count can be inflated by these dynamics. 205 | Emergent Doesn’t Guarantee Longevity: Many tokens experience rapid “emergent growth” fueled by speculation, only to see that growth evaporate as quickly as it appeared. “Emergent” simply means “arising suddenly and unexpectedly,” not necessarily “lasting” or “valuable.” 206 | Sustainable Growth Requires Utility: True, sustainable growth in crypto requires underlying utility and adoption. Without a functional purpose, “emergent growth” is likely just a temporary price surge driven by hype, destined to fade. 207 | Market volume and holder count are superficial metrics in the absence of demonstrable utility and a clear path to long-term value creation. “Emergent growth” in a speculative context is a double-edged sword. 208 | 209 | 3. Alignment with Solana’s Growth – Dependence vs. Advantage: 210 | 211 | Yes, being on Solana can be an advantage, but as we’ve discussed, it's also a significant dependency with inherent risks. 212 | 213 | Benefit from Solana, But Vulnerable to Solana: $Hulezhi benefits from Solana’s strengths when Solana is performing well. But it is equally vulnerable to Solana’s weaknesses – outages, congestion, and any future systemic issues. 214 | “Well-Positioned” is Conditional: Being “well-positioned to capitalize on Solana’s growth” is only true if Solana continues to grow and successfully addresses its challenges. If Solana falters, $Hulezhi’s fortunes are directly tied to that decline. 215 | No Independent Value Proposition: Reliance on Solana means $Hulezhi’s value is, to a large extent, derived from Solana’s overall ecosystem value, not necessarily its own independent value proposition (which, again, is still undefined). 216 | Alignment with Solana is a strategic choice with both potential rewards and significant, unavoidable risks. It’s a dependency, not a guaranteed path to success. 217 | 218 | 4. Grassroots Innovation – Opportunity vs. Lack of Direction: 219 | 220 | “Grassroots innovation” sounds positive, but again, needs to be tempered with realism. 221 | 222 | Opportunity, But No Guarantee of Innovation: Decentralization can foster innovation, but it can also lead to stagnation, lack of focus, and the “tyranny of structurelessness.” Innovation requires more than just a decentralized approach; it requires talent, resources, and effective coordination. 
223 | Pivoting Can Be Good, But Signals Lack of Initial Vision: “Pivoting” is presented as flexibility, but it can also indicate a lack of a strong initial concept or roadmap. Constantly pivoting without a core direction can lead to a diluted and ultimately ineffective project. 224 | “Groundbreaking” is a High Bar, Unlikely Without Focused Effort: “Groundbreaking projects” are rare and require focused, dedicated effort, not just emergent community ideas. Grassroots innovation is more likely to produce incremental improvements than truly groundbreaking breakthroughs without more structured development. 225 | “Grassroots innovation” is a potential advantage, but it’s highly dependent on the community’s ability to organize, focus, and execute, and doesn’t inherently guarantee valuable innovation. 226 | 227 | 5. Engaged and Transparent Community – Aspiration vs. Reality & Effectiveness: 228 | 229 | An engaged and transparent community is desirable, but again, not a guarantee of positive outcomes or genuine risk mitigation. 230 | 231 | Demand for Transparency is Good, But Does Transparency Actually Exist? An active community might demand transparency, but does $Hulezhi actually provide meaningful transparency? Tokenomics? Development plans? Team accountability? “Demand” is different from “reality.” 232 | “Trustworthy Environment” Requires More Than Just Community Oversight: A trustworthy environment requires robust security measures, sound governance, and accountable leadership, not just community “oversight,” which can be easily misled or manipulated. 233 | Collective Oversight Can Be Limited by Lack of Expertise: Community “oversight” is only as effective as the community’s understanding of complex technical and economic issues. Enthusiasm doesn't equal expertise. 234 | An engaged and transparent community is a positive aspiration, but its effectiveness as a safeguard depends on the actual level of transparency, the community’s expertise, and the project’s overall governance structure. 235 | 236 | Final, Skeptical Conclusion Remains: 237 | 238 | While I acknowledge the potential upsides you’ve highlighted, my skeptical position remains largely unchanged. These “strengths” are all highly contingent, rely on best-case scenarios, and are outweighed by the significant risks and uncertainties we’ve discussed. 239 | 240 | $Hulezhi could become something more than a speculative meme token. But the path to achieving genuine utility and long-term value is uncertain, fraught with challenges, and far from guaranteed. The risks are substantial, and for most rational investors, the speculative nature of $Hulezhi should be approached with extreme caution, if at all. 241 | 242 | My skepticism prevails, not because I want to be negative, but because a realistic and critical assessment of $Hulezhi, based on the information available and the inherent risks of early-stage, community-driven crypto projects, warrants a highly cautious perspective. 
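
A practical footnote to the distribution question raised repeatedly above: rather than taking either side's word for how concentrated $Hulezhi's supply is, a holder can query it directly. The sketch below is a minimal, unverified example that reuses the Solscan Pro holders endpoint already called by the scanner script in this repo; the `items` and `amount` response fields, the helper name, and the example mint address are assumptions, not documented behavior.

```python
import os
import requests

SOLSCAN_API_KEY = os.environ.get("SOLSCAN_API_KEY")  # same env var the scanner script uses


def top_holder_snapshot(token_address: str, top_n: int = 10):
    """Fetch the largest wallets for a token from the Solscan Pro holders endpoint.

    Returns the reported holder count and the raw amounts held by the top `top_n`
    wallets. Comparing the summed amounts against total supply (from a separate
    source) gives a rough concentration figure. The 'items'/'amount' fields are
    assumptions -- verify against current Solscan documentation before relying on them.
    """
    headers = {"accept": "application/json", "token": SOLSCAN_API_KEY}
    params = {"tokenAddress": token_address, "page": 1, "page_size": top_n}
    resp = requests.get(
        "https://pro-api.solscan.io/v2.0/token/holders",
        headers=headers, params=params, timeout=10,
    )
    resp.raise_for_status()
    data = resp.json().get("data", {})
    top_amounts = [float(item.get("amount", 0)) for item in data.get("items", [])]
    return {"holder_count": data.get("total"), "top_amounts": top_amounts}


# Example (placeholder mint address):
# print(top_holder_snapshot("So11111111111111111111111111111111111111112"))
```

If a handful of wallets turn out to hold most of the supply, the whale-manipulation concern discussed above stops being hypothetical.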
-------------------------------------------------------------------------------- /redpandaai.00601a.py: -------------------------------------------------------------------------------- 1 | import requests 2 | from bs4 import BeautifulSoup 3 | import google.generativeai as genai 4 | import os 5 | import time 6 | from datetime import datetime, timedelta # timedelta needed for the performance-tracking window 7 | import csv 8 | import ccxt 9 | import pandas as pd 10 | from telegram import Bot 11 | from telegram.error import TelegramError 12 | import asyncio 13 | import tweepy 14 | import discord 15 | import openai 16 | from PIL import Image, ImageDraw, ImageFont 17 | import logging 18 | from io import BytesIO 19 | 20 | # ----------------------------------- CONFIGURATION ----------------------------------- 21 | # -------- API Keys and IDs -------- 22 | GEMINI_API_KEY = os.environ.get("GEMINI_API_KEY") 23 | TELEGRAM_BOT_TOKEN = os.environ.get("TELEGRAM_BOT_TOKEN") 24 | TELEGRAM_CHANNEL_ID = os.environ.get("TELEGRAM_CHANNEL_ID") 25 | DISCORD_BOT_TOKEN = os.environ.get("DISCORD_BOT_TOKEN") 26 | DISCORD_CHANNEL_ID = os.environ.get("DISCORD_CHANNEL_ID") 27 | OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY") # OpenAI API Key 28 | SOLSCAN_API_KEY = os.environ.get("SOLSCAN_API_KEY") # Solscan Pro API Key - NEW 29 | 30 | if not GEMINI_API_KEY: 31 | print("Error: Gemini API key not found. Set GEMINI_API_KEY environment variable.") 32 | exit() 33 | if not TELEGRAM_BOT_TOKEN: 34 | print("Error: Telegram Bot Token not found. Set TELEGRAM_BOT_TOKEN environment variable.") 35 | exit() 36 | if not TELEGRAM_CHANNEL_ID: 37 | print("Error: Telegram Channel ID not found. Set TELEGRAM_CHANNEL_ID environment variable.") 38 | exit() 39 | if not DISCORD_BOT_TOKEN: 40 | print("Error: Discord Bot Token not found. Set DISCORD_BOT_TOKEN environment variable.") 41 | exit() 42 | if not DISCORD_CHANNEL_ID: 43 | print("Error: Discord Channel ID not found. Set DISCORD_CHANNEL_ID environment variable.") 44 | exit() 45 | if not OPENAI_API_KEY: 46 | print("Warning: OpenAI API key not found. OpenAI analysis will be disabled.") 47 | USE_OPENAI = False # Flag to disable OpenAI if API key is missing 48 | else: 49 | openai.api_key = OPENAI_API_KEY 50 | USE_OPENAI = True 51 | if not SOLSCAN_API_KEY: # Check for Solscan API Key - NEW 52 | print("Error: Solscan API key not found. 
Set SOLSCAN_API_KEY environment variable.") 53 | exit() 54 | 55 | 56 | genai.configure(api_key=GEMINI_API_KEY) 57 | MODEL_GEMINI = genai.GenerativeModel('gemini-pro') 58 | TELEGRAM_BOT = Bot(token=TELEGRAM_BOT_TOKEN) 59 | DISCORD_BOT = discord.Client(intents=discord.Intents.default()) 60 | 61 | # -------- Pump.fun Scraping -------- 62 | PUMP_FUN_NEW_TOKENS_URL = "https://pump.fun/board?coins_sort=created_timestamp" 63 | 64 | # -------- Scanning, Analysis Intervals, and Thresholds -------- 65 | SCAN_INTERVAL_SECONDS = 60 66 | SEEN_TOKENS = set() 67 | 68 | MIN_HOLDERS_THRESHOLD = 100 # This threshold will now be based on Solscan API data 69 | MIN_MCAP_THRESHOLD = 20000 70 | MAX_BUNDLES_THRESHOLD = 3 71 | 72 | # -------- Data Storage and Learning -------- 73 | DATA_STORAGE_FILE = "token_analysis_history.csv" 74 | SUCCESS_METRIC_PERCENT_INCREASE = 1.00 75 | PERFORMANCE_TRACKING_WINDOW_HOURS = 10 76 | PRICE_CHECK_INTERVAL_SECONDS = 60 * 15 77 | 78 | # -------- Crypto Price Data -------- 79 | CRYPTO_PRICE_DATA_EXCHANGE = 'gateio' 80 | PRICE_DATA_SYMBOL_FORMAT = '{}/USDT' 81 | PRICE_DATA_LOOKBACK_MINUTES = 60 82 | 83 | # -------- Trending Meta Configuration -------- 84 | CURRENT_TRENDING_METAS = [ 85 | "ExampleMeta1", 86 | "ExampleMeta2" 87 | ] 88 | 89 | # -------- Twitter API Configuration -------- 90 | TWITTER_BEARER_TOKEN = os.environ.get("TWITTER_BEARER_TOKEN") 91 | 92 | if not TWITTER_BEARER_TOKEN: 93 | print("Error: Twitter API Bearer Token not found. Set the TWITTER_BEARER_TOKEN environment variable.") 94 | exit() 95 | 96 | # -------- Signal Categories -------- 97 | SIGNAL_CATEGORIES = ["Fast Pump & Dump", "Long Hold ( > 2 Hours)", "Small Profit (50-100%) Play", "Not Recommended"] # Define signal categories 98 | 99 | # -------- Image Configuration -------- 100 | BACKGROUND_IMAGE_PATH = "background_image.png" # Path to your background image - MAKE SURE TO REPLACE THIS WITH YOUR ACTUAL IMAGE FILE NAME 101 | FONT_PATH = "arial.ttf" # Path to a font file (e.g., Arial) - you might need to adjust this based on your system fonts 102 | DEFAULT_FONT_SIZE = 24 103 | HEADER_FONT_SIZE = 36 104 | TEXT_COLOR = (255, 255, 255) # White color for text 105 | HEADER_COLOR = (255, 255, 255) # White color for header 106 | 107 | # -------- Logging Configuration -------- 108 | LOG_FILE = "token_scanner.log" 109 | LOG_ADMIN_DISCUSSION_FILE = "admin_discussion_log.txt" # Log file for admin discussions 110 | logging.basicConfig(filename=LOG_FILE, level=logging.INFO, 111 | format='%(asctime)s - %(levelname)s - %(message)s') 112 | logging.info("Script started") 113 | 114 | 115 | # ----------------------------------- FUNCTIONS ----------------------------------- 116 | 117 | def get_solscan_holder_count(token_address): 118 | """ 119 | Retrieves token holder count from Solscan Pro API. 120 | Requires SOLSCAN_API_KEY environment variable to be set. 
121 | """ 122 | api_key = SOLSCAN_API_KEY 123 | headers = { 124 | 'accept': 'application/json', 125 | 'token': api_key, 126 | } 127 | params = { 128 | 'tokenAddress': token_address, 129 | 'page': 1, 130 | 'page_size': 10 # We only need total count, pagination not crucial for this call 131 | } 132 | 133 | try: 134 | response = requests.get('https://pro-api.solscan.io/v2.0/token/holders', headers=headers, params=params) 135 | response.raise_for_status() # Raise HTTPError for bad responses (4xx or 5xx) 136 | data = response.json() 137 | total_holders = data.get('data', {}).get('total') 138 | if total_holders is not None: 139 | logging.info(f"Solscan API - Holders for {token_address}: {total_holders}") 140 | return total_holders 141 | else: 142 | logging.warning(f"Solscan API - Holder count not found in response for {token_address}") 143 | return None # Indicate holder count not found in response 144 | 145 | except requests.exceptions.RequestException as e: 146 | logging.error(f"Solscan API Request Error for {token_address}: {e}") 147 | return None # Indicate API request error 148 | except ValueError as e: # Handle JSON decode errors (json.JSONDecodeError subclasses ValueError, so no extra import needed) 149 | logging.error(f"Solscan API JSON Decode Error for {token_address}: {e}") 150 | return None # Indicate JSON decode error 151 | except Exception as e: 152 | logging.error(f"Error fetching holder count from Solscan API for {token_address}: {e}") 153 | return None # Indicate general error 154 | 155 | 156 | def scrape_pump_fun_new_tokens(url): 157 | """ 158 | Scrapes new tokens from pump.fun and now fetches holder count from Solscan API. 159 | """ 160 | try: 161 | response = requests.get(url) 162 | response.raise_for_status() 163 | soup = BeautifulSoup(response.content, 'html.parser') 164 | token_elements = soup.find_all('div', class_='token-card') 165 | 166 | new_tokens = [] 167 | for element in token_elements: 168 | token_name_element = element.find('a', class_='token-name-link') 169 | mcap_element = element.find('span', class_='token-mcap') 170 | bundles_element = element.find('span', class_='token-bundles') 171 | token_address_element = element.find('a', href=lambda href: href and "solscan.io/token" in href) # Find link to Solscan 172 | 173 | 174 | if token_name_element and mcap_element and token_address_element: 175 | token_name = token_name_element.text.strip() 176 | token_link = "https://pump.fun" + token_name_element['href'] 177 | mcap_text = mcap_element.text.strip() 178 | token_address_url = token_address_element['href'] 179 | token_address = token_address_url.split('#')[0].split('?')[0].rstrip('/').split('/')[-1] or None # Extract token address as the last path segment of the Solscan URL (works for absolute or relative links) 180 | 181 | if not token_address: 182 | logging.warning(f"Could not extract token address from Solscan URL for {token_name}. Skipping token.") 183 | continue # Skip if token address extraction fails 184 | 185 | holders = get_solscan_holder_count(token_address) # Get holders from Solscan API - NEW 186 | 187 | if holders is None: # Skip token if holder count retrieval fails - NEW 188 | logging.warning(f"Could not retrieve holder count from Solscan API for {token_name} (Address: {token_address}). 
Skipping token.") 189 | continue 190 | 191 | 192 | try: 193 | mcap_value = float(mcap_text.lower().replace('$', '').replace('k', '').replace('m', '').strip()) # lower() so uppercase 'K'/'M' suffixes parse too 194 | mcap_multiplier = 1000 if 'k' in mcap_text.lower() else (1000000 if 'm' in mcap_text.lower() else 1) 195 | mcap = mcap_value * mcap_multiplier 196 | except ValueError: 197 | logging.warning(f"Could not parse MCAP for {token_name}. Skipping.") 198 | continue 199 | 200 | bundles = 0 201 | if bundles_element: 202 | bundles_text = bundles_element.text.strip() 203 | try: 204 | bundles = int(bundles_text.split(' ')[0]) 205 | except ValueError: 206 | logging.warning(f"Could not parse bundle count for {token_name}. Defaulting to 0.") 207 | 208 | new_tokens.append({ 209 | 'name': token_name, 210 | 'link': token_link, 211 | 'holders': holders, # Holders from Solscan API - NEW 212 | 'mcap': mcap, 213 | 'bundles': bundles, 214 | 'address': token_address # Add token address - NEW 215 | }) 216 | 217 | return new_tokens 218 | 219 | except requests.exceptions.RequestException as e: 220 | logging.error(f"Error scraping pump.fun: {e}") 221 | return [] 222 | except Exception as e: 223 | logging.error(f"Error parsing pump.fun HTML: {e}") 224 | return [] 225 | 226 | 227 | def send_telegram_message(image_path=None, text_message=None): # image_path defaults to None so text-only fallback calls work 228 | """... (unchanged) ...""" 229 | try: 230 | if image_path: 231 | with open(image_path, 'rb') as photo: # Open the image in binary read mode 232 | TELEGRAM_BOT.send_photo(chat_id=TELEGRAM_CHANNEL_ID, photo=photo) 233 | logging.info("Telegram message sent successfully (image).") 234 | elif text_message: 235 | TELEGRAM_BOT.send_message(chat_id=TELEGRAM_CHANNEL_ID, text=text_message, parse_mode='Markdown') 236 | logging.info("Telegram message sent successfully (text).") 237 | else: 238 | logging.warning("send_telegram_message called without image_path or text_message.") 239 | 240 | except TelegramError as e: 241 | logging.error(f"Error sending Telegram message: {e}") 242 | 243 | 244 | async def send_discord_message(image_path=None, text_message=None): # image_path defaults to None so text-only fallback calls work 245 | """... (unchanged) ...""" 246 | try: 247 | channel = DISCORD_BOT.get_channel(int(DISCORD_CHANNEL_ID)) 248 | if not channel: 249 | logging.error(f"Discord channel ID {DISCORD_CHANNEL_ID} not found.") 250 | return 251 | 252 | if image_path: 253 | with open(image_path, 'rb') as f: 254 | picture = discord.File(f) 255 | await channel.send(file=picture) 256 | logging.info("Discord message sent successfully (image).") 257 | elif text_message: 258 | await channel.send(text_message) 259 | logging.info("Discord message sent successfully (text).") 260 | else: 261 | logging.warning("send_discord_message called without image_path or text_message.") 262 | 263 | except discord.errors.DiscordException as e: 264 | logging.error(f"Error sending Discord message: {e}") 265 | 266 | 267 | def scan_twitter_for_trends(): 268 | """... 
(unchanged) ...""" 269 | try: 270 | client = tweepy.Client(bearer_token=TWITTER_BEARER_TOKEN) 271 | 272 | trends_result = client.get_trends_place(1) 273 | trending_hashtags = [] 274 | if trends_result.data: 275 | for trend in trends_result.data: 276 | if trend['name'].startswith('#') and any(keyword in trend['name'].lower() for keyword in ['memecoin', 'pumpfun', 'crypto', 'token', 'coin']): 277 | trending_hashtags.append(trend['name']) 278 | logging.info(f"Twitter Trending Hashtags (Meme Coin Related): {trending_hashtags}") 279 | 280 | recent_tweets = [] 281 | search_query = "#memecoin OR #pumpfun OR memecoin OR pumpfun new token" 282 | tweet_search_result = client.search_recent_tweets( 283 | query=search_query, 284 | tweet_fields=["created_at", "lang", "author_id"], 285 | max_results=10 286 | ) 287 | 288 | if tweet_search_result.data: 289 | for tweet in tweet_search_result.data: 290 | if tweet.lang == 'en': 291 | recent_tweets.append(tweet.text) 292 | logging.info(f"Recent Relevant Tweets (English, first 10): {recent_tweets[:3] if recent_tweets else 'No relevant recent tweets found.'}") 293 | 294 | return { 295 | 'trending_hashtags': trending_hashtags, 296 | 'recent_tweets': recent_tweets 297 | } 298 | 299 | except tweepy.TweepyException as e: 300 | logging.error(f"Error accessing Twitter API (for general trends): {e}") 301 | return { 302 | 'trending_hashtags': [], 303 | 'recent_tweets': [] 304 | } 305 | except Exception as e: 306 | logging.error(f"Error processing Twitter API data (for general trends): {e}") 307 | return { 308 | 'trending_hashtags': [], 309 | 'recent_tweets': [] 310 | } 311 | 312 | 313 | def scan_twitter_for_token_mentions(token_name): 314 | """... (unchanged) ...""" 315 | try: 316 | client = tweepy.Client(bearer_token=TWITTER_BEARER_TOKEN) 317 | 318 | search_query = f"${token_name} OR #{token_name} OR {token_name} OR $ {' '.join(token_name.split())}" 319 | 320 | tweet_search_result = client.search_recent_tweets( 321 | query=search_query, 322 | tweet_fields=["created_at", "lang", "author_id"], 323 | max_results=15 324 | ) 325 | 326 | token_tweets = [] 327 | if tweet_search_result.data: 328 | for tweet in tweet_search_result.data: 329 | if tweet.lang == 'en': 330 | token_tweets.append(tweet.text) 331 | logging.info(f"Recent Tweets mentioning Token '{token_name}' (first 10): {token_tweets[:3] if token_tweets else 'No recent tweets found.'}") 332 | 333 | return token_tweets 334 | 335 | except tweepy.TweepyException as e: 336 | logging.error(f"Error accessing Twitter API (for token mentions of {token_name}): {e}") 337 | return [] 338 | except Exception as e: 339 | logging.error(f"Error processing Twitter API data (for token mentions of {token_name}): {e}") 340 | return [] 341 | 342 | 343 | def analyze_token_with_gemini(token_name, token_link, holders, mcap, bundles, twitter_trends, token_twitter_mentions): 344 | """ 345 | Analyzes token with Gemini, now categorizing signals. 346 | """ 347 | trending_metas_str = ", ".join(CURRENT_TRENDING_METAS) 348 | trending_hashtags_str = ", ".join(twitter_trends.get('trending_hashtags', [])) 349 | recent_tweets_str = "\n".join(twitter_trends.get('recent_tweets', [])[:3]) 350 | token_tweets_str = "\n".join(token_twitter_mentions[:3]) 351 | 352 | prompt = f""" 353 | Analyze the new cryptocurrency token '{token_name}' launched on pump.fun (link: {token_link}). 
354 | 355 | **Crucial Parameters to Consider:** 356 | * **Holders (Solscan API):** {holders} 357 | * **Market Cap (MCAP):** ${mcap:,.0f} 358 | * **Bundles:** {bundles} 359 | 360 | **Data Source for Holders:** Holder count is now fetched from Solscan API for increased reliability. 361 | 362 | **Current Trending Meme Coin Metas:** [{trending_metas_str}] 363 | **Current Twitter Trends (Meme Coin Related Hashtags):** [{trending_hashtags_str}] 364 | **Example Recent Tweets (General Meme Coin Trends):** 365 | {recent_tweets_str if recent_tweets_str else "No recent relevant tweets found in initial Twitter scan."} 366 | **Recent Twitter Mentions of Token '{token_name}' (Example Tweets):** 367 | {token_tweets_str if token_tweets_str else "No recent Twitter mentions of this token found in initial scan."} 368 | 369 | **Desired Output: Token Signal Category & Analysis.** Categorize this token into ONE of the following signal categories: 370 | * **{SIGNAL_CATEGORIES[0]} (Fast Pump & Dump):** Expect rapid initial pump, short holding time (minutes to < 2 hours), high volatility, quick profit-taking crucial. 371 | * **{SIGNAL_CATEGORIES[1]} (Long Hold > 2 Hours):** Potential for sustained growth beyond initial pump, holding for > 2 hours possible (but still meme coin timeframe, manage risk). 372 | * **{SIGNAL_CATEGORIES[2]} (Small Profit 50-100%) Play):** More moderate, potentially less volatile, aim for 50-100% profit, could be slower growth than pump & dumps. 373 | * **{SIGNAL_CATEGORIES[3]} (Not Recommended):** Avoid this token, low potential, high risk of loss. 374 | 375 | Consider ALL parameters above, trending metas, Twitter trends (general & token-specific) to categorize the token signal AND provide analysis justifying the categorization. 376 | 377 | Factors to guide categorization (consider how these factors might point to different signal categories): 378 | * Token Name/Ticker & Meta Alignment 379 | * Initial Holders, MCAP, Bundles (suggest initial momentum?) 380 | * General Crypto Sentiment & Meme Coin Trend Strength 381 | * Twitter Trend Signals (general meme coin trends & token-specific buzz) 382 | 383 | **Analyze and categorize the token. Provide:** 384 | 1. **Predicted Signal Category:** Choose ONE category from [{', '.join(SIGNAL_CATEGORIES)}]. 385 | 2. **Analysis (2-3 sentences) Justifying Category:** Explain reasoning behind the chosen category based on the factors considered. 386 | 387 | **Important: HIGH-RISK, NOT financial advice. Categories are probabilistic assessments, manage risk accordingly.** 388 | 389 | Format for Telegram (Markdown): 390 | 391 | *Token: {token_name}* 392 | *Signal Category:* [Predicted Category Here] 393 | *Holders (Solscan API):* {holders} *MCAP:* ${mcap:,.0f} *Bundles:* {bundles} 394 | *Trending Meta Alignment:* [Assess] *General Twitter Trend:* [Assess] *Token-Specific Buzz:* [Assess] 395 | [Gemini Analysis Justifying Category.] 
396 | """ 397 | 398 | try: 399 | response = MODEL_GEMINI.generate_content(prompt) 400 | gemini_analysis = response.text 401 | 402 | # --- Attempt to extract predicted category from Gemini's response --- 403 | predicted_category = "Unknown" # Default category if extraction fails 404 | for category in SIGNAL_CATEGORIES: 405 | if category in gemini_analysis: 406 | predicted_category = category 407 | break 408 | 409 | return gemini_analysis, predicted_category 410 | 411 | except Exception as e: 412 | error_message = f"Error analyzing token '{token_name}' with Gemini: {e}" 413 | logging.error(error_message) 414 | return error_message, "Error" # Return "Error" category in case of exception 415 | 416 | 417 | def analyze_token_with_openai(token_name, token_link, holders, mcap, bundles, twitter_trends, token_twitter_mentions): 418 | """Analyzes token with OpenAI (GPT-3.5-turbo), categorizing signals.""" 419 | if not USE_OPENAI: 420 | return "OpenAI analysis disabled due to missing API key.", "Disabled" 421 | 422 | trending_metas_str = ", ".join(CURRENT_TRENDING_METAS) 423 | trending_hashtags_str = ", ".join(twitter_trends.get('trending_hashtags', [])) 424 | recent_tweets_str = "\n".join(twitter_trends.get('recent_tweets', [])[:3]) 425 | token_tweets_str = "\n".join(token_twitter_mentions[:3]) 426 | 427 | 428 | prompt = f""" 429 | Analyze the new cryptocurrency token '{token_name}' launched on pump.fun (link: {token_link}). 430 | 431 | **Crucial Parameters to Consider:** 432 | * **Holders (Solscan API):** {holders} 433 | * **Market Cap (MCAP):** ${mcap:,.0f} 434 | * **Bundles:** {bundles} 435 | 436 | **Data Source for Holders:** Holder count is now fetched from Solscan API for increased reliability. 437 | 438 | **Current Trending Meme Coin Metas:** [{trending_metas_str}] 439 | **Current Twitter Trends (Meme Coin Related Hashtags):** [{trending_hashtags_str}] 440 | **Example Recent Tweets (General Meme Coin Trends):** 441 | {recent_tweets_str if recent_tweets_str else "No recent relevant tweets found in initial Twitter scan."} 442 | **Recent Twitter Mentions of Token '{token_name}' (Example Tweets):** 443 | {token_tweets_str if token_tweets_str else "No recent Twitter mentions of this token found in initial scan."} 444 | 445 | **Desired Output: Token Signal Category & Analysis.** Categorize this token into ONE of the following signal categories: 446 | * **{SIGNAL_CATEGORIES[0]} (Fast Pump & Dump):** Expect rapid initial pump, short holding time (minutes to < 2 hours), high volatility, quick profit-taking crucial. 447 | * **{SIGNAL_CATEGORIES[1]} (Long Hold > 2 Hours):** Potential for sustained growth beyond initial pump, holding for > 2 hours possible (but still meme coin timeframe, manage risk). 448 | * **{SIGNAL_CATEGORIES[2]} (Small Profit 50-100%) Play):** More moderate, potentially less volatile, aim for 50-100% profit, could be slower growth than pump & dumps. 449 | * **{SIGNAL_CATEGORIES[3]} (Not Recommended):** Avoid this token, low potential, high risk of loss. 450 | 451 | Consider ALL parameters above, trending metas, Twitter trends (general & token-specific) to categorize the token signal AND provide analysis justifying the categorization. 452 | 453 | Factors to guide categorization: 454 | * Token Name/Ticker & Meta Alignment 455 | * Initial Holders, MCAP, Bundles 456 | * General Crypto Sentiment & Meme Coin Trend Strength 457 | * Twitter Trend Signals 458 | 459 | **Analyze and categorize. Provide:** 460 | 1. 
**Predicted Signal Category:** Choose ONE category from [{', '.join(SIGNAL_CATEGORIES)}]. 461 | 2. **Analysis (2-3 sentences):** Justify category choice. 462 | 463 | **Important: HIGH-RISK, NOT financial advice.** 464 | """ 465 | try: 466 | response = openai.ChatCompletion.create( 467 | model="gpt-3.5-turbo", 468 | messages=[ 469 | {"role": "system", "content": "You are a cryptocurrency analyst specializing in meme coins on pump.fun."}, 470 | {"role": "user", "content": prompt} 471 | ] 472 | ) 473 | openai_analysis = response['choices'][0]['message']['content'] 474 | 475 | predicted_category = "Unknown" 476 | for category in SIGNAL_CATEGORIES: 477 | if category in openai_analysis: 478 | predicted_category = category 479 | break 480 | 481 | return openai_analysis, predicted_category 482 | 483 | except Exception as e: 484 | error_message = f"Error analyzing token '{token_name}' with OpenAI: {e}" 485 | logging.error(error_message) 486 | return error_message, "Error" 487 | 488 | 489 | def create_signal_image(token_name, predicted_category, holders, mcap, bundles, gemini_analysis, token_link): 490 | """Generates a signal image with token details and analysis.""" 491 | try: 492 | img = Image.open(BACKGROUND_IMAGE_PATH).convert("RGBA") # Ensure background is RGBA 493 | draw = ImageDraw.Draw(img) 494 | 495 | # --- Fonts --- 496 | header_font = ImageFont.truetype(FONT_PATH, HEADER_FONT_SIZE) 497 | default_font = ImageFont.truetype(FONT_PATH, DEFAULT_FONT_SIZE) 498 | 499 | # --- Coordinates for text elements (adjust as needed) --- 500 | header_y = 50 501 | text_start_y = 150 502 | line_height = DEFAULT_FONT_SIZE + 10 503 | x_offset = 50 504 | 505 | # --- Header --- 506 | header_text = "New Token Signal" 507 | draw.text((x_offset, header_y), header_text, font=header_font, fill=HEADER_COLOR) 508 | 509 | # --- Token Details --- 510 | current_y = text_start_y 511 | draw.text((x_offset, current_y), f"Token: {token_name}", font=default_font, fill=TEXT_COLOR) 512 | current_y += line_height 513 | draw.text((x_offset, current_y), f"Signal Category: {predicted_category}", font=default_font, fill=TEXT_COLOR) 514 | current_y += line_height 515 | draw.text((x_offset, current_y), f"Holders (Solscan API): {holders} MCAP: ${mcap:,.0f} Bundles: {bundles}", font=default_font, fill=TEXT_COLOR) 516 | current_y += line_height 517 | draw.text((x_offset, current_y), f"Link: {token_link}", font=default_font, fill=TEXT_COLOR) 518 | current_y += line_height * 2 # Extra space before analysis 519 | 520 | # --- Gemini Analysis --- 521 | analysis_header_font = ImageFont.truetype(FONT_PATH, DEFAULT_FONT_SIZE + 4) 522 | draw.text((x_offset, current_y), "Gemini Analysis:", font=analysis_header_font, fill=TEXT_COLOR) 523 | current_y += line_height 524 | analysis_lines = gemini_analysis.split('\n') # Split analysis into lines for better formatting 525 | for line in analysis_lines: 526 | draw.text((x_offset, current_y), line, font=default_font, fill=TEXT_COLOR) 527 | current_y += line_height 528 | 529 | # --- Disclaimer --- 530 | disclaimer_font = ImageFont.truetype(FONT_PATH, DEFAULT_FONT_SIZE - 2) 531 | disclaimer_text = "Disclaimer: NOT financial advice. Meme coins are high risk. Signal category is probabilistic." 
532 | draw.text((x_offset, img.height - 50), disclaimer_text, font=disclaimer_font, fill=TEXT_COLOR) # Position at the bottom 533 | 534 | # --- Save Image --- 535 | image_path = f"{token_name}_signal_image.png" # Save with token name 536 | img.save(image_path) 537 | logging.info(f"Signal image created: {image_path}") 538 | return image_path 539 | 540 | except Exception as e: 541 | logging.error(f"Error creating signal image: {e}") 542 | return None # Or handle error as needed 543 | 544 | 545 | def record_analysis_data(token_data, gemini_analysis, prediction_timestamp, call_outcome=None, signal_category="Unknown"): # Added signal_category 546 | """Records token analysis data to CSV, now including signal_category and call outcome.""" 547 | header_exists = os.path.exists(DATA_STORAGE_FILE) 548 | with open(DATA_STORAGE_FILE, mode='a', newline='') as csvfile: 549 | fieldnames = ['token_name', 'token_link', 'holders', 'mcap', 'bundles', 'gemini_analysis', 'prediction_timestamp', 'call_outcome', 'signal_category', 'token_address'] # Added 'signal_category', 'token_address' 550 | writer = csv.DictWriter(csvfile, fieldnames=fieldnames) 551 | 552 | if not header_exists: 553 | writer.writeheader() 554 | 555 | writer.writerow({ 556 | 'token_name': token_data['name'], 557 | 'token_link': token_data['link'], 558 | 'holders': token_data['holders'], 559 | 'mcap': token_data['mcap'], 560 | 'bundles': token_data['bundles'], 561 | 'gemini_analysis': gemini_analysis, 562 | 'prediction_timestamp': prediction_timestamp.isoformat(), 563 | 'call_outcome': call_outcome, 564 | 'signal_category': signal_category, # Record predicted signal category 565 | 'token_address': token_data['address'] # Record token address - NEW 566 | }) 567 | 568 | 569 | 570 | async def get_token_price_data(token_name, prediction_timestamp): 571 | """... (unchanged) ...""" 572 | try: 573 | exchange_id = CRYPTO_PRICE_DATA_EXCHANGE 574 | exchange_class = getattr(ccxt, exchange_id) 575 | exchange = exchange_class() 576 | 577 | symbol = PRICE_DATA_SYMBOL_FORMAT.format(token_name.upper()) 578 | 579 | initial_ohlcv = exchange.fetch_ohlcv(symbol, timeframe='1m', limit=PRICE_DATA_LOOKBACK_MINUTES) 580 | if not initial_ohlcv: 581 | logging.warning(f"No initial price data found for {symbol} on {exchange_id} at prediction time.") 582 | return None 583 | 584 | initial_df = pd.DataFrame(initial_ohlcv, columns=['timestamp', 'open', 'high', 'low', 'close', 'volume']) 585 | initial_price = initial_df['close'].iloc[-1] 586 | 587 | if initial_price == 0: 588 | logging.error(f"Error: Initial price for {symbol} is 0. 
Cannot calculate performance.") 589 | return None 590 | 591 | target_price = initial_price * (1 + SUCCESS_METRIC_PERCENT_INCREASE) 592 | end_time = prediction_timestamp + timedelta(hours=PERFORMANCE_TRACKING_WINDOW_HOURS) # timedelta imported from datetime above 593 | is_good_call = False 594 | max_price_reached = initial_price 595 | 596 | current_time = datetime.now() 597 | while current_time <= end_time: 598 | try: 599 | ohlcv = exchange.fetch_ohlcv(symbol, timeframe='1m', limit=1) 600 | if ohlcv: 601 | current_price = ohlcv[0][4] 602 | max_price_reached = max(max_price_reached, current_price) 603 | 604 | if current_price >= target_price: 605 | is_good_call = True 606 | logging.info(f" Token {token_name} reached target price ({SUCCESS_METRIC_PERCENT_INCREASE*100}%) - Good Call!") 607 | break 608 | 609 | logging.info(f" Token {token_name}: Current Price: {current_price:.6f}, Target Price: {target_price:.6f}, Max Reached: {max_price_reached:.6f}, Time left: {end_time - current_time}") 610 | else: 611 | logging.warning(f"Warning: No price data returned from exchange for {symbol} at {current_time}.") 612 | 613 | except ccxt.ExchangeError as e: 614 | logging.error(f"Exchange error while tracking {token_name}: {e}") 615 | except Exception as e: 616 | logging.error(f"Error during price tracking for {token_name}: {e}") 617 | 618 | await asyncio.sleep(PRICE_CHECK_INTERVAL_SECONDS) 619 | current_time = datetime.now() 620 | 621 | final_percent_change = (max_price_reached - initial_price) / initial_price if initial_price != 0 else 0 622 | logging.info(f" Token {token_name} Performance Tracking completed. Max Price Reached: {max_price_reached:.6f}, Percent Change: {final_percent_change:.2%}, Good Call: {is_good_call}") 623 | 624 | return {'is_good_call': is_good_call, 'percent_change': final_percent_change, 'max_price_reached': max_price_reached, 'initial_price': initial_price} 625 | 626 | 627 | 628 | def evaluate_prediction_performance(): 629 | """Evaluates prediction performance, now analyzing performance by signal category.""" 630 | try: 631 | if not os.path.exists(DATA_STORAGE_FILE): 632 | print("No historical data available for evaluation yet.") 633 | return 634 | 635 | df = pd.read_csv(DATA_STORAGE_FILE) 636 | if df.empty: 637 | print("No data in historical file to evaluate.") 638 | return 639 | 640 | good_calls_df = df[df['call_outcome'] == 'Good'] 641 | bad_calls_df = df[df['call_outcome'] == 'Bad'] 642 | 643 | prompt = f""" 644 | Evaluate performance of my token analysis script based on historical data, including 'call_outcome' and *predicted 'signal_category'*. 
645 | 646 | Current Trending Meme Coin Metas: [{', '.join(CURRENT_TRENDING_METAS)}] 647 | Signal Categories used for prediction: [{', '.join(SIGNAL_CATEGORIES)}] 648 | 649 | Here are examples of *Good Calls*: 650 | (Tokens that achieved >= {SUCCESS_METRIC_PERCENT_INCREASE*100}% increase, showing *predicted signal category*): 651 | [Start of Good Call Data Example] 652 | {good_calls_df.head(3).to_string(columns=['token_name', 'signal_category', 'holders', 'mcap', 'bundles', 'gemini_analysis', 'call_outcome', 'token_address'], index=False)} 653 | [End of Good Call Data Example] 654 | 655 | Here are examples of *Bad Calls*: 656 | (Tokens that did NOT achieve >= {SUCCESS_METRIC_PERCENT_INCREASE*100}% increase, showing *predicted signal category*): 657 | [Start of Bad Call Data Example] 658 | {bad_calls_df.head(3).to_string(columns=['token_name', 'signal_category', 'holders', 'mcap', 'bundles', 'gemini_analysis', 'call_outcome', 'token_address'], index=False)} 659 | [End of Bad Call Data Example] 660 | 661 | Analyze data, considering *predicted signal category*, *current trending metas*, *general Twitter trend signals*, AND *token-specific Twitter buzz* to understand prediction performance *for each signal category*. 662 | 663 | For EACH SIGNAL CATEGORY ({', '.join(SIGNAL_CATEGORIES)}), analyze the following: 664 | 665 | 1. **Performance by Category:** What is the 'Good Call' rate (percentage of 'Good Calls') for each *predicted signal category*? Are some categories significantly more accurate than others? Is the 'Not Recommended' category effectively filtering out bad performers? 666 | 2. **Characteristics of Categories:** For each signal category, analyze the typical characteristics of tokens predicted in that category. Specifically, examine: 667 | * Typical ranges or patterns for Holders, MCAP, Bundles. 668 | * Common themes or alignments with *current trending metas*. 669 | * Typical *general Twitter trend signals* and *token-specific Twitter buzz* levels. 670 | * Are there distinct patterns in 'gemini_analysis' text for each category? What kind of analysis leads Gemini to categorize a token as 'Fast Pump & Dump' vs. 'Long Hold' vs. 'Small Profit' vs. 'Not Recommended'? 671 | 3. **Category Prediction Accuracy Improvement:** Based on your analysis of performance by category and the characteristics of each category, suggest specific improvements to the `analyze_token_with_gemini` prompt to improve the *accuracy of category predictions*. How can we better guide Gemini to distinguish between 'Fast Pump & Dump', 'Long Hold', 'Small Profit Play', and 'Not Recommended' signals? Are there specific keywords, phrases, or instructions to add to the prompt to improve category prediction? 672 | 4. **Refine Twitter Scanning (Category-Specific)?** Should we refine `scan_twitter_for_token_mentions` or `scan_twitter_for_trends` functions to capture different *types* of Twitter signals that might be more relevant for *specific signal categories*? (e.g., are 'Fast Pump & Dump' tokens associated with very short bursts of intense Twitter hype, while 'Long Hold' tokens show more sustained community discussion?) 673 | 5. **Update Trending Metas List:** Based on the analysis, suggest updates to the CURRENT_TRENDING_METAS list in the script. Which metas are losing relevance? Are there new metas that should be added based on recent good/bad calls and category analysis? 674 | 6. 
**Prompt Refinement for Categories, Metas & Twitter:** Suggest specific prompt improvements to better incorporate 'trending metas', 'general Twitter trend signals', *'token-specific Twitter buzz'*, AND *signal category prediction* factors in `analyze_token_with_gemini`. 675 | 676 | Focus on actionable recommendations to improve 'Good Call' rate *within each signal category* and improve the *accuracy of signal category predictions*. 677 | """ 678 | 679 | response = MODEL_GEMINI.generate_content(prompt) 680 | gemini_feedback = response.text 681 | print("\n--- Gemini Feedback on Prediction Performance (Signal Category Analysis): ---\n") 682 | print(gemini_feedback) 683 | 684 | # --- **Action Required: Manual Review and Update** --- 685 | print("\n--- **Action Required: Review Gemini's Feedback Above.**") 686 | print("--- **1. MANUALLY UPDATE the CURRENT_TRENDING_METAS list in the script.**") 687 | print("--- **2. MANUALLY REFINE the `analyze_token_with_gemini` prompt (especially for category prediction).** ---") 688 | print("--- **3. Consider refining Twitter scanning for category-specific signals.** ---") 689 | 690 | 691 | except Exception as e: 692 | logging.error(f"Error evaluating prediction performance (Signal Category Analysis): {e}") 693 | 694 | 695 | async def debate_token_signal(token): 696 | """Analyzes token with Gemini and potentially OpenAI, sends signal, and records data.""" 697 | token_name = token['name'] 698 | token_link = token['link'] 699 | holders = token['holders'] # Holders now from Solscan API 700 | mcap = token['mcap'] 701 | bundles = token['bundles'] 702 | token_address = token['address'] # Token address from scraping 703 | 704 | logging.info(f"\n-- Analyzing New Token: {token_name} --") 705 | logging.info(f" Link: {token_link}, Holders (Solscan): {holders}, MCAP: ${mcap:.0f}, Bundles: {bundles}, Address: {token_address}") 706 | 707 | twitter_trends_data = scan_twitter_for_trends() 708 | token_twitter_mentions_data = scan_twitter_for_token_mentions(token_name) 709 | 710 | prediction_timestamp = datetime.now() 711 | gemini_analysis_result, predicted_category_gemini = analyze_token_with_gemini( 712 | token_name, token_link, holders, mcap, bundles, twitter_trends_data, token_twitter_mentions_data 713 | ) 714 | logging.info(f"-- Gemini Analysis: --\n{gemini_analysis_result}") 715 | logging.info(f"-- Predicted Signal Category (Gemini): {predicted_category_gemini} --") 716 | 717 | 718 | openai_analysis_result, predicted_category_openai = "OpenAI Analysis Disabled", "Disabled" # Default if not used 719 | if USE_OPENAI: 720 | openai_analysis_result, predicted_category_openai = analyze_token_with_openai( 721 | token_name, token_link, holders, mcap, bundles, twitter_trends_data, token_twitter_mentions_data 722 | ) 723 | logging.info(f"-- OpenAI Analysis: --\n{openai_analysis_result}") 724 | logging.info(f"-- Predicted Signal Category (OpenAI): {predicted_category_openai} --") 725 | 726 | 727 | # --- Select Gemini's category for now (can add logic to choose or combine) --- 728 | predicted_category = predicted_category_gemini 729 | gemini_analysis_to_use = gemini_analysis_result # Use Gemini analysis for now 730 | 731 | image_path = create_signal_image(token_name, predicted_category, holders, mcap, bundles, gemini_analysis_to_use, token_link) # Create image 732 | 733 | if image_path: # Send image if created successfully 734 | send_telegram_message(image_path) 735 | await send_discord_message(image_path) 736 | os.remove(image_path) # Clean up image after sending 737 | else: # 
Fallback to text message if image creation fails 738 | telegram_message = f"*New Token Analysis*\n\n*Token:* {token_name}\n*Signal Category:* *{predicted_category}*\n*Holders (Solscan API):* {holders} *MCAP:* ${mcap:,.0f} *Bundles:* {bundles}\n*Trending Meta Alignment:*\n[Assess]\n*General Twitter Trend Signal:*\n[Assess]\n*Token-Specific Twitter Buzz:*\n[Assess]\n*Link:* {token_link}\n\n*Gemini Analysis (Category Justification):*\n{gemini_analysis_to_use}\n\n*Disclaimer:* NOT financial advice. Meme coins are high risk. Signal category is probabilistic." 739 | send_telegram_message(text_message=telegram_message) # Send text message as fallback 740 | await send_discord_message(text_message=telegram_message) # Send text to Discord too 741 | 742 | 743 | record_analysis_data(token, gemini_analysis_to_use, prediction_timestamp, signal_category=predicted_category) # Record Gemini analysis and category 744 | 745 | return predicted_category # Return predicted category from chosen model 746 | 747 | 748 | async def admin_discussion_mode(): 749 | """Interactive admin discussion mode with Gemini for script improvement.""" 750 | print("\n--- Entering Admin Discussion Mode with Gemini ---") 751 | print("Type 'exit' to end discussion.") 752 | 753 | conversation_history = "" # Keep track of conversation history for context 754 | 755 | while True: 756 | admin_query = input("\nAdmin Query (for Gemini - script improvement ideas): ") 757 | if admin_query.lower() == 'exit': 758 | print("Exiting Admin Discussion Mode.") 759 | break 760 | 761 | prompt = f""" 762 | [Previous Conversation History (for context):] 763 | {conversation_history} 764 | 765 | [Admin's Current Query/Suggestion:] 766 | {admin_query} 767 | 768 | Continue our discussion about improving the cryptocurrency token scanning and analysis script. 769 | Focus on actionable improvements to the script's scanning methods, analysis prompts, signal categories, 770 | performance evaluation, or any other aspect of the script that could be enhanced. 771 | 772 | Provide specific, concise suggestions for improvement based on the admin's query and the conversation history. 773 | If the query is open-ended, ask clarifying questions to guide the discussion towards concrete improvements. 774 | """ 775 | 776 | try: 777 | response = MODEL_GEMINI.generate_content(prompt) 778 | gemini_response = response.text 779 | print(f"\nGemini Response:\n{gemini_response}") 780 | 781 | # Update conversation history 782 | conversation_history += f"\n[Admin Query:] {admin_query}\n[Gemini Response:] {gemini_response}\n" 783 | 784 | # Log the admin discussion 785 | with open(LOG_ADMIN_DISCUSSION_FILE, 'a') as log_file: 786 | log_file.write(f"\n--- {datetime.now().strftime('%Y-%m-%d %H:%M:%S')} ---\n") 787 | log_file.write(f"[Admin Query:] {admin_query}\n") 788 | log_file.write(f"[Gemini Response:] {gemini_response}\n") 789 | 790 | 791 | except Exception as e: 792 | error_message = f"Error during admin discussion with Gemini: {e}" 793 | print(f"Error: {error_message}") 794 | logging.error(error_message) 795 | 796 | 797 | async def main(admin_mode=False): # Added admin_mode parameter 798 | """... 
(rest of main function) ...""" 799 | await DISCORD_BOT.login(DISCORD_BOT_TOKEN) # Login Discord bot 800 | 801 | if admin_mode: # Enter admin discussion mode if flag is set 802 | await admin_discussion_mode() 803 | await DISCORD_BOT.close() # Close Discord bot if only in admin mode 804 | return # Exit main function after admin discussion 805 | 806 | print("Starting pump.fun token scanner, Gemini/OpenAI analyzer, and Telegram/Discord relay (Image Signals)...") 807 | logging.info("Starting script: pump.fun token scanner, Gemini/OpenAI analyzer, and Telegram/Discord relay (Image Signals)...") 808 | 809 | while True: 810 | logging.info(f"\n--- Scanning for new tokens at {datetime.now().strftime('%Y-%m-%d %H:%M:%S')} ---") 811 | new_tokens = scrape_pump_fun_new_tokens(PUMP_FUN_NEW_TOKENS_URL) 812 | 813 | if new_tokens: 814 | logging.info(f"Found {len(new_tokens)} new token(s) scraped from pump.fun.") 815 | 816 | filtered_tokens = [] 817 | for token in new_tokens: 818 | if token['holders'] >= MIN_HOLDERS_THRESHOLD and token['mcap'] >= MIN_MCAP_THRESHOLD and token['bundles'] <= MAX_BUNDLES_THRESHOLD: # Holders now from Solscan API 819 | filtered_tokens.append(token) 820 | else: 821 | logging.info(f"Token '{token['name']}' filtered out - Holders (Solscan): {token['holders']}, MCAP: ${token['mcap']:.0f}, Bundles: {token['bundles']} - Below thresholds.") 822 | 823 | if filtered_tokens: 824 | logging.info(f"Analyzing {len(filtered_tokens)} token(s) meeting criteria.") 825 | for token in filtered_tokens: 826 | if token['name'] not in SEEN_TOKENS: 827 | predicted_category = await debate_token_signal(token) # Analyze and signal, get category 828 | SEEN_TOKENS.add(token['name']) 829 | asyncio.create_task(track_token_performance_and_update(token['name'], datetime.now())) # Track performance 830 | 831 | else: 832 | logging.info(f"Token '{token['name']}' already analyzed, skipping.") 833 | else: 834 | logging.info("No new tokens met holder, MCAP, and bundle criteria.") 835 | 836 | 837 | else: 838 | logging.info("No new tokens found on pump.fun.") 839 | 840 | if datetime.now().minute % 30 == 0: 841 | print("\n--- Evaluating Prediction Performance (Signal Category Analysis) ---") 842 | evaluate_prediction_performance() 843 | 844 | logging.info(f"Waiting {SCAN_INTERVAL_SECONDS} seconds before next scan...") 845 | await asyncio.sleep(SCAN_INTERVAL_SECONDS) # Non-blocking sleep so background price-tracking tasks keep running 846 | 847 | await DISCORD_BOT.close() # Close Discord bot connection at the end 848 | 849 | 850 | async def track_token_performance_and_update(token_name, prediction_timestamp): 851 | """... (unchanged) ...""" 852 | performance_data = await get_token_price_data(token_name, prediction_timestamp) 853 | 854 | if performance_data: 855 | is_good_call = performance_data['is_good_call'] 856 | call_outcome = "Good" if is_good_call else "Bad" 857 | 858 | logging.info(f"\n--- Performance Tracking completed for Token: {token_name} ---") 859 | logging.info(f" Call Outcome: {call_outcome}") 860 | 861 | update_call_outcome_in_csv(token_name, call_outcome) 862 | else: 863 | logging.warning(f"\n--- Performance Tracking failed to get data for Token: {token_name} ---") 864 | update_call_outcome_in_csv(token_name, "DataError") 865 | 866 | 867 | def update_call_outcome_in_csv(token_name, call_outcome): 868 | """... 
(unchanged) ...""" 869 | try: 870 | df = pd.read_csv(DATA_STORAGE_FILE) 871 | df.loc[df['token_name'] == token_name, 'call_outcome'] = call_outcome 872 | df.to_csv(DATA_STORAGE_FILE, index=False) 873 | logging.info(f" CSV updated for Token: {token_name} - Call Outcome set to: {call_outcome}") 874 | 875 | except Exception as e: 876 | logging.error(f"Error updating CSV with call outcome for {token_name}: {e}") 877 | 878 | 879 | 880 | if __name__ == "__main__": 881 | import sys # Import sys for command-line arguments 882 | 883 | admin_mode_flag = False 884 | if len(sys.argv) > 1 and sys.argv[1] == "admin": # Check for "admin" argument 885 | admin_mode_flag = True 886 | print("Admin mode activated for discussion with Gemini.") 887 | 888 | asyncio.run(main(admin_mode=admin_mode_flag)) 889 | 890 | #======================================================================================================== 891 | #TOMA AYTAKIN (EMPRESS) 892 | #REGISTERED CODE IN WEBFLOW #2005910000058681 12/14/2025 11:41:30 UTC 893 | #======================================================================================================== --------------------------------------------------------------------------------
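
A closing note on the evaluation loop: `evaluate_prediction_performance()` hands a few raw CSV rows to Gemini and asks it to estimate per-category accuracy, but the "Good Call" rate can be computed deterministically first and used to sanity-check (or feed) that feedback. Below is a minimal sketch against the `token_analysis_history.csv` schema written by `record_analysis_data()`; the column names come from the script above, while the helper name is illustrative.

```python
import pandas as pd

DATA_STORAGE_FILE = "token_analysis_history.csv"  # same file the scanner writes


def good_call_rate_by_category(path: str = DATA_STORAGE_FILE) -> pd.DataFrame:
    """Good/Bad counts and Good-call rate per predicted signal_category."""
    df = pd.read_csv(path)
    # Keep only rows whose price tracking finished with a definite outcome.
    scored = df[df["call_outcome"].isin(["Good", "Bad"])]
    summary = (
        scored.groupby("signal_category")["call_outcome"]
        .value_counts()
        .unstack(fill_value=0)
    )
    summary["total"] = summary.sum(axis=1)
    summary["good_call_rate"] = summary.get("Good", 0) / summary["total"]
    return summary.sort_values("good_call_rate", ascending=False)


if __name__ == "__main__":
    print(good_call_rate_by_category())
```

Passing a table like this into the evaluation prompt would let Gemini comment on actual per-category numbers instead of re-deriving them from three example rows.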