├── .gitignore
├── LICENSE
├── README.md
├── captions.sbv
├── chapters.txt
├── clarified
│   ├── 1000 Subs Livestream! AI alignment, writing fiction, LLMs, and more!.txt
│   ├── 3 Predictions for the future of AI jobs.txt
│   ├── 5 Tips and Misconceptions about Finetuning GPT-3.txt
│   ├── AGI Alignment Experiments INSTRUCT vs Foundation and Agent Models.txt
│   ├── AGI Poll results AGI not dangerous, might destroy lots of jobs.txt
│   ├── Alignment Collectivist vs Egocentric Agent Models (I vs we) Can we create a unique mind.txt
│   ├── Alignment Research GPT-3 vs GPT-NeoX - which one understands AGI alignment better.txt
│   ├── Answer complex questions from an arbitrarily large set of documents with vector search and GPT-3.txt
│   ├── Are LaMDA or GPT-3 sentient No, but....txt
│   ├── AutoMuse Scene Simulation is Working! GPT-3 Finetuned to write sequences of fiction.txt
│   ├── AutoMuse Synopsis GAN (Generative Adversarial Network) for self-improving synopsis generators!.txt
│   ├── AutoMuse Synopsis and Plot Generator Walkthrough.txt
│   ├── AutoMuse Working on Character and Scene Simulations (GPT-3 Finetuning Project).txt
│   ├── AutoMuse Writing a Novel with GPT-3 as a Plotter pt.1 (decompose the problem into steps).txt
│   ├── AutoMuse3 Generate fiction with text-based simulation and cognitive architecture for characters.txt
│   ├── Billion-dollar GPT-3 startup! Fix education with an expert tutor chatbot!.txt
│   ├── Can GPT-3 generate training data Short answer Yes! Here's why that's a legit methodology....txt
│   ├── Co-writing flash fiction with GPT-3.txt
│   ├── Coding Raven's Encyclopedia Service (v.1).txt
│   ├── Cognitive AI Lab Community Update + consensus, books, and my next steps.txt
│   ├── Cognitive AI Lab Podcast #1 - Decentralized Democracy and AI Companions.txt
│   ├── Cognitive AI Lab Podcast 2022-09-21 - Decentralized AI, DAOs, and AI art.txt
│   ├── Concepts in Neuroscience and Cognition - Deficits of GPT-3 and the path to AGI and ACE.txt
│   ├── Convergence and acceleration towards AGI (or Artificial Cognitive Entities).txt
│   ├── Core Objective Functions vs Thanos Logic.txt
│   ├── Could a machine ever be fully sentient or conscious A deep dive on philosophy, quantum mechanics....txt
│   ├── DALLE2 Style Tags Tutorial - Elven archer in a sunny forest with different tags.txt
│   ├── Decreasingly Verbose Alice in Wonderland - down to 55 characters!.txt
│   ├── Encyclopedia Service Demo (v.1).txt
│   ├── Fine-tuning GPT-3 for benevolent and trustworthy AGI.txt
│   ├── Fine-tuning GPT-3 to generate questions about anything.txt
│   ├── Finetune GPT-3 to write a coherent novel - Part 4 (success! with minor bugs...).txt
│   ├── Finetune GPT-3 to write a novel - Part 3 (IT WORKS!!!) ...at least a little bit.txt
│   ├── Finetune GPT-3 to write an entire coherent novel (part 1).txt
│   ├── Finetune GPT-3 to write an entire coherent novel (part 2).txt
│   ├── Finetune a perfect email generator in GPT-3 - take any input and generate a great email.txt
│   ├── Finetune multiple cognitive tasks with GPT-3 on medical texts (and reduce hallucination).txt
│   ├── Finetuning GPT-3 101 Augmenting Training Data.txt
│   ├── Finetuning GPT-3 101 Synthesizing Training Data.txt
│   ├── Finetuning GPT-3 101 Using Your Finetuned Model.txt
│   ├── Finetuning GPT-3 to be a master tutor that can handle any topic and hostile students.txt
│   ├── Finetuning a Creative Writing Coach in GPT-3 - Part 1.txt
│   ├── Finetuning a Creative Writing Coach in GPT-3 - Part 2.txt
│   ├── Fixing goldfish memory with GPT-3 and external sources of information in a chatbot - part 1.txt
│   ├── Fixing goldfish memory with GPT-3 and external sources of information in a chatbot - part 2.txt
│   ├── From zero to finetuned model in 1 hour with GPT-3. Generate a movie script from any premise!.txt
│   ├── GPT-3 Working Session Finetune an information companion chatbot in 30 minutes (RESEARCH ONLY).txt
│   ├── GPT-3 for Writing Dialog.txt
│   ├── GPT3 & Finetuning the Core Objective Functions - a deep dive.txt
│   ├── Generate 300 BILLION story synopses with GPT-3.txt
│   ├── Generate a Cover Letter with a quick and simple GPT-3 Chatbot!.txt
│   ├── Generating Training Data for Core Objective Functions.txt
│   ├── Get Started with Raven AGI.txt
│   ├── Get awesome YouTube chapters with GPT-3 automatically in 20 seconds.txt
│   ├── How to create synthetic datasets with GPT-3.txt
│   ├── How will we know when we've invented AGI How will we know it is complete.txt
│   ├── I needed SQLITE but for vectors so I wrote it myself. Now it's on PyPI - introducing VDBLITE.txt
│   ├── I trained a DALLE chatbot to help you craft DALLE prompts!.txt
│   ├── I'm moving away from Artificial General Intelligence and towards Artificial Cognition. Here's why..txt
│   ├── I'm switching to Open Source LLM models! Introducing Forefront AI for GPT-J and finetuning.txt
│   ├── I'm taking a short break from research and YouTube.txt
│   ├── Implementing Natural Language Cognitive Architecture with GPT-3 and the nexus concept.txt
│   ├── Increasingly Verbose Bot with GPT-3 - Expand any word or phrase into a whole paragraph.txt
│   ├── Indexing Wikipedia with SOLR.txt
│   ├── Inflation Reduction Act of 2022 summarized with AI - see final result in description!.txt
│   ├── Introducing the RAVEN MVP - a general purpose AI companion (with a live DEMO).txt
│   ├── It's alive! The first 3 microservices are up and running!.txt
│   ├── Let's build an ACOG (artificial cognitive entity) - Part 1.txt
│   ├── Let's build an ACOG (artificial cognitive entity) - Part 2.txt
│   ├── Let's build an ACOG (artificial cognitive entity) - Part 3.txt
│   ├── Let's build an ACOG (artificial cognitive entity) - Part 4.txt
│   ├── Let's build an ACOG (artificial cognitive entity) - Part 5 preparing to act.txt
│   ├── MARAGI Overview, Architecture, and Roadmap (August 2022).txt
│   ├── MARAGI work Starting on the Conductor Microservice.txt
│   ├── Many of you have asked for it Join my new research Discord! Link in description.txt
│   ├── Metaprompting with GPT-3 (getting GPT-3 to write its own prompts) RECURSIVE NLP PROGRAMMING!!.txt
│   ├── Metaprompting with GPT-3 to dynamically generate arguments.txt
│   ├── Modeling different viewpoints with GPT-3 for automatic debates.txt
│   ├── NLCA Question Generator demo.txt
│   ├── Nexus update Integrating semantic embeddings.txt
│   ├── Now available! Symphony of Thought Orchestrating Artificial Cognition.txt
│   ├── Organizing Discord with threads, plus community rules and the upcoming Cognitive AI Lab podcast!.txt
│   ├── Parsing Wikipedia to Plaintext Faster!.txt
│   ├── Play Twenty Questions with GPT-3 (Full Game Coding Walkthrough).txt
│   ├── Prompt Engineering 101 Autocomplete, Zero-shot, One-shot, and Few-shot prompting.txt
│   ├── Prompt Engineering 101 Introduction to CODEX.txt
│   ├── Prompt Engineering 101 Summarizing, Extraction, and Rewriting.txt
│   ├── Prototype AGI demo - Natural Language Cognitive Architecture NLCA running on GPT-3.txt
│   ├── Python & GPT-3 for Absolute Beginners #1 - Setting up your environment.txt
│   ├── Python & GPT-3 for Absolute Beginners #2 - Your first chatbot.txt
│   ├── Python & GPT-3 for Absolute Beginners #3 - What the heck are embeddings.txt
│   ├── Quick Demo of NLPCloud's GPT-NeoX-20B Instruct-trained model.txt
│   ├── Raven Context Augmentation Demo.txt
│   ├── Raven MVP Demo 2021-04-02.txt
│   ├── Raven Release 1 Deep Dive.txt
│   ├── Raven's Core Objective Functions - Overview.txt
│   ├── Reading Benevolent by Design - Introduction.txt
│   ├── Recursively summarize text of any length with GPT-3.txt
│   ├── Research Update Heuristic Imperatives microservice demonstration and theory.txt
│   ├── Research Update Microservices! Text-based simulation, Embeddings, and Nexus.txt
│   ├── Research Update Nexus microservice for Artificial Cognition + microservices architecture (MARAGI).txt
│   ├── Semantic search for AI - testing out Qdrant neural search.txt
│   ├── State of the Industry - Backlash against AI art (BANNED), Scale AI TransformX later this month.txt
│   ├── State of the Industry - People serious about AGI and the rise of AI Art (Stable Diffusion!).txt
│   ├── State of the Industry AGI alignment, control problem, and missing conversations.txt
│   ├── State of the Industry Decentralized Everything plus NVIDIA announces Beta of NeMo!.txt
│   ├── State of the Industry DeepMind AlphaFold Breakthrough Award and Sparrow Chatbot.txt
│   ├── State of the Industry Ethereum Merge, GPUs, and the first industry destroyed by AI.txt
│   ├── State of the Industry Hydra Attention for 197x boost in transformer speed.txt
│   ├── State of the Industry Integrating GPT-3 and LLMs into Robotics! SayCan and Ameca.txt
│   ├── State of the Industry Meta moves PyTorch to Linux Foundation.txt
│   ├── State of the Industry Purpose-built AI chips and INT8 optimization.txt
│   ├── State of the Industry Short-circuiting Reward Functions.txt
│   ├── State of the Industry Uncontainable AGI + Jobs Destruction.txt
│   ├── State of the Industry Yann LeCun A Path Towards Autonomous Machine Intelligence.txt
│   ├── Stress Testing Qdrant (Semantic Search) with 90,000 vectors - lightning fast search microservice.txt
│   ├── Summarize product reviews with GPT-3 fast and easy, get product insights and improvements fast!.txt
│   ├── Symphony of Thought Livestream! AGI, aliens, and simulation hypothesis.txt
│   ├── Talking Boundaries and Consent with GPT-3.txt
│   ├── Talking Philosophy with GPT-3.txt
│   ├── Testing Theory of Mind in GPT-3 - making fully aligned ACOG (Artificial Cognitive Entities).txt
│   ├── Text-based Automatic Simulation - Creating a virtual world for testing ACOG.txt
│   ├── What does it mean to be transdisciplinary What am I working on (TLDR - six books in the works).txt
│   ├── What is a Microservice What does it have to do with AGI.txt
│   ├── What is a heuristic imperative What imperatives should we give AGI.txt
│   ├── What is toxic stoicism Talking philosophy with GPT-3.txt
│   └── Write an entire blog post with GPT-3 fully automatically.txt
├── clarify_transcripts.py
├── concatenate files.py
├── convert_transcript_to_spr.py
├── download_all_transcripts.py
├── download_all_transcripts_timestamps.py
├── generate_chapters.py
└── gpt3_logs
    ├── 1663848307.5686707_gpt3.txt
    ├── 1663848311.0084972_gpt3.txt
    ├── 1663848314.7698352_gpt3.txt
    ├── 1663848317.99061_gpt3.txt
    ├── 1663848340.9765422_gpt3.txt
    ├── 1663848349.025169_gpt3.txt
    ├── 1663848353.1003578_gpt3.txt
    ├── 1663848356.4912264_gpt3.txt
    ├── 1663848387.4478633_gpt3.txt
    ├── 1663848390.9697897_gpt3.txt
    ├── 1663848686.5777948_gpt3.txt
    ├── 1663848695.3345714_gpt3.txt
    ├── 1663848703.681647_gpt3.txt
    ├── 1663848708.7397912_gpt3.txt
    ├── 1663848712.7580962_gpt3.txt
    ├── 1663848722.456602_gpt3.txt
    ├── 1663848727.5204966_gpt3.txt
    ├── 1663848733.6271942_gpt3.txt
    ├── 1663849757.4043682_gpt3.txt
    ├── 1663849769.8121_gpt3.txt
    ├── 1663849774.602637_gpt3.txt
    ├── 1663849777.7183301_gpt3.txt
    ├── 1663849779.9352996_gpt3.txt
    ├── 1663849781.9572418_gpt3.txt
    ├── 1663849784.0190375_gpt3.txt
    ├── 1663849789.4124398_gpt3.txt
    ├── 1663849791.5990272_gpt3.txt
    ├── 1663849793.6430955_gpt3.txt
    ├── 1663849797.4009652_gpt3.txt
    ├── 1663850094.131958_gpt3.txt
    ├── 1663850095.9201047_gpt3.txt
    ├── 1663850096.9068832_gpt3.txt
    ├── 1663850098.0742145_gpt3.txt
    ├── 1663850098.8683932_gpt3.txt
    ├── 1663850099.8327332_gpt3.txt
├── 1663854712.9414353_gpt3.txt ├── 1663854714.1446056_gpt3.txt ├── 1663854715.0210001_gpt3.txt ├── 1663854716.1118858_gpt3.txt ├── 1663854718.4074605_gpt3.txt ├── 1663854719.5403132_gpt3.txt ├── 1663854720.4355352_gpt3.txt ├── 1663854721.3775349_gpt3.txt ├── 1663854724.1679952_gpt3.txt ├── 1663854725.780048_gpt3.txt ├── 1663854726.5121927_gpt3.txt ├── 1663854727.954845_gpt3.txt ├── 1663854729.1466851_gpt3.txt ├── 1663854729.9258153_gpt3.txt ├── 1663856406.8830795_gpt3.txt ├── 1663856408.1082034_gpt3.txt ├── 1663856408.813726_gpt3.txt ├── 1663856409.516276_gpt3.txt ├── 1663856410.1884334_gpt3.txt ├── 1663856411.761079_gpt3.txt ├── 1663856412.6106787_gpt3.txt ├── 1663856414.0609446_gpt3.txt ├── 1663856414.8000512_gpt3.txt ├── 1663856415.5321975_gpt3.txt ├── 1663856416.3220944_gpt3.txt ├── 1663856418.5365984_gpt3.txt ├── 1663856419.4492426_gpt3.txt ├── 1663923742.147624_gpt3.txt ├── 1663923744.0455217_gpt3.txt ├── 1663923744.9898834_gpt3.txt ├── 1663923745.6685493_gpt3.txt ├── 1663923746.7924528_gpt3.txt ├── 1663923747.445121_gpt3.txt ├── 1663923748.155242_gpt3.txt ├── 1663923820.569778_gpt3.txt ├── 1663923821.3547878_gpt3.txt ├── 1663923822.948538_gpt3.txt ├── 1663923824.7936578_gpt3.txt ├── 1663923825.576849_gpt3.txt ├── 1663923826.1057067_gpt3.txt ├── 1663923826.8075018_gpt3.txt ├── 1663923827.6956847_gpt3.txt ├── 1663923828.538128_gpt3.txt ├── 1663923829.1574073_gpt3.txt ├── 1663923829.9259727_gpt3.txt ├── 1663923831.5113895_gpt3.txt ├── 1663923832.2867286_gpt3.txt ├── 1663923832.9427905_gpt3.txt ├── 1663923833.9199831_gpt3.txt ├── 1663923834.5406275_gpt3.txt ├── 1663923835.442882_gpt3.txt ├── 1663923836.5502796_gpt3.txt ├── 1663923837.0273623_gpt3.txt ├── 1663923888.629232_gpt3.txt ├── 1663923889.391126_gpt3.txt ├── 1663923890.7704046_gpt3.txt ├── 1663923892.2692814_gpt3.txt ├── 1663923893.082422_gpt3.txt ├── 1663923893.7982206_gpt3.txt ├── 1663923895.198789_gpt3.txt ├── 1663923896.1546338_gpt3.txt ├── 1663923896.985921_gpt3.txt ├── 1663923898.2558932_gpt3.txt 
├── 1663923899.1649752_gpt3.txt ├── 1663923900.8681898_gpt3.txt ├── 1663923901.8822987_gpt3.txt ├── 1663923904.7522259_gpt3.txt ├── 1663923906.7661755_gpt3.txt ├── 1663923908.4601598_gpt3.txt ├── 1663923909.5407417_gpt3.txt ├── 1663923911.1147978_gpt3.txt ├── 1663923911.9970143_gpt3.txt ├── 1663923912.77348_gpt3.txt ├── 1663923914.83981_gpt3.txt ├── 1663923917.1507506_gpt3.txt ├── 1663923917.888176_gpt3.txt ├── 1663923918.7110548_gpt3.txt ├── 1663923919.4402444_gpt3.txt ├── 1663923920.4836977_gpt3.txt ├── 1663923988.36631_gpt3.txt ├── 1663923989.0602531_gpt3.txt ├── 1663923989.547084_gpt3.txt ├── 1663923991.1000538_gpt3.txt ├── 1663923992.788555_gpt3.txt ├── 1663923994.4731188_gpt3.txt ├── 1663923995.709404_gpt3.txt ├── 1663923996.6246774_gpt3.txt ├── 1663923997.3429637_gpt3.txt ├── 1663923998.4579513_gpt3.txt ├── 1663923999.21373_gpt3.txt ├── 1663924000.1819177_gpt3.txt ├── 1663924001.6151724_gpt3.txt ├── 1663924002.7387476_gpt3.txt ├── 1663924003.5277603_gpt3.txt ├── 1663924004.523544_gpt3.txt ├── 1663924005.8784277_gpt3.txt ├── 1663924006.5325842_gpt3.txt ├── 1663924007.3359995_gpt3.txt ├── 1663924008.6238906_gpt3.txt ├── 1663924009.3505428_gpt3.txt ├── 1663924050.2798545_gpt3.txt ├── 1663924050.9190667_gpt3.txt ├── 1663924053.303525_gpt3.txt ├── 1663924055.0296757_gpt3.txt ├── 1663924055.737791_gpt3.txt ├── 1663924056.5703459_gpt3.txt ├── 1663924057.5277731_gpt3.txt ├── 1663924059.2846472_gpt3.txt ├── 1663924059.9921644_gpt3.txt ├── 1663924060.6564076_gpt3.txt ├── 1663924061.4051592_gpt3.txt ├── 1663924062.0288246_gpt3.txt ├── 1663924062.728199_gpt3.txt ├── 1663924063.9684577_gpt3.txt ├── 1663924064.8068924_gpt3.txt ├── 1663924065.9521723_gpt3.txt ├── 1663924066.6747127_gpt3.txt ├── 1663924067.6818454_gpt3.txt ├── 1663924069.231105_gpt3.txt ├── 1663924070.091443_gpt3.txt ├── 1663924071.0972068_gpt3.txt ├── 1663924071.9658666_gpt3.txt ├── 1663924072.934534_gpt3.txt ├── 1663924074.4797008_gpt3.txt ├── 1663924076.126404_gpt3.txt ├── 1663924077.0722308_gpt3.txt ├── 
1663924077.9002588_gpt3.txt ├── 1663924079.2389166_gpt3.txt ├── 1663924080.200683_gpt3.txt ├── 1663924151.919426_gpt3.txt ├── 1663924152.807809_gpt3.txt ├── 1663924153.6824634_gpt3.txt ├── 1663924154.6119988_gpt3.txt ├── 1663924155.419428_gpt3.txt ├── 1663924156.3074694_gpt3.txt ├── 1663924157.3308885_gpt3.txt ├── 1663924158.2162883_gpt3.txt ├── 1663924159.3822844_gpt3.txt ├── 1663924160.3508549_gpt3.txt ├── 1663924161.3573165_gpt3.txt ├── 1663924162.3792686_gpt3.txt ├── 1663924164.173551_gpt3.txt ├── 1663924165.002191_gpt3.txt ├── 1663924165.8022647_gpt3.txt ├── 1663924166.8086529_gpt3.txt ├── 1663924167.5474799_gpt3.txt ├── 1663924208.825453_gpt3.txt ├── 1663924210.0694048_gpt3.txt ├── 1663924211.4446099_gpt3.txt ├── 1663924212.4653733_gpt3.txt ├── 1663924213.7612755_gpt3.txt ├── 1663924214.4990423_gpt3.txt ├── 1663924215.7010465_gpt3.txt ├── 1663924216.5427816_gpt3.txt ├── 1663924218.1067429_gpt3.txt ├── 1663924219.0612366_gpt3.txt ├── 1663924219.829635_gpt3.txt ├── 1663936643.3365684_gpt3.txt ├── 1663936644.54233_gpt3.txt ├── 1663936645.7662625_gpt3.txt ├── 1663936647.046738_gpt3.txt ├── 1663936647.8934088_gpt3.txt ├── 1663936648.6685982_gpt3.txt ├── 1663936649.4737382_gpt3.txt ├── 1663936650.2058356_gpt3.txt ├── 1663936651.817527_gpt3.txt ├── 1663936652.6393034_gpt3.txt ├── 1663936654.9259923_gpt3.txt ├── 1663936655.8661752_gpt3.txt ├── 1663936656.643788_gpt3.txt ├── 1663936657.9046607_gpt3.txt ├── 1663936659.746489_gpt3.txt ├── 1663936661.0380187_gpt3.txt ├── 1663936661.7521152_gpt3.txt ├── 1663936662.4457107_gpt3.txt ├── 1664030013.9175932_gpt3.txt ├── 1664030015.1266727_gpt3.txt ├── 1664030016.170728_gpt3.txt ├── 1664030018.1439843_gpt3.txt ├── 1664030019.4323564_gpt3.txt ├── 1664030020.692243_gpt3.txt ├── 1664030021.5349312_gpt3.txt ├── 1664030022.368517_gpt3.txt ├── 1664030023.4363496_gpt3.txt ├── 1664030024.6705222_gpt3.txt ├── 1664030025.6688337_gpt3.txt ├── 1664030027.086463_gpt3.txt ├── 1664030027.8006654_gpt3.txt ├── 1664030029.0423818_gpt3.txt ├── 
1664030030.1413567_gpt3.txt ├── 1664030030.9820042_gpt3.txt ├── 1664030031.5928009_gpt3.txt ├── 1664030032.650935_gpt3.txt ├── 1664030033.5466316_gpt3.txt ├── 1664030034.3180015_gpt3.txt ├── 1664030035.4064078_gpt3.txt ├── 1664030037.0896676_gpt3.txt ├── 1664030038.1404328_gpt3.txt ├── 1664030039.2508633_gpt3.txt ├── 1664030040.1058428_gpt3.txt ├── 1664030040.8858943_gpt3.txt ├── 1664030041.7267883_gpt3.txt ├── 1664030042.4265585_gpt3.txt ├── 1664030043.579955_gpt3.txt ├── 1664030044.5979054_gpt3.txt ├── 1664030045.5915484_gpt3.txt ├── 1664030163.421819_gpt3.txt ├── 1664030164.8891244_gpt3.txt ├── 1664030165.8656514_gpt3.txt ├── 1664030167.893462_gpt3.txt ├── 1664030169.4411724_gpt3.txt ├── 1664030170.407243_gpt3.txt ├── 1664030171.9571981_gpt3.txt ├── 1664030172.6281838_gpt3.txt ├── 1664030173.2894502_gpt3.txt ├── 1664030174.3341458_gpt3.txt ├── 1664030174.9195156_gpt3.txt ├── 1664030175.5843976_gpt3.txt ├── 1664030177.5338163_gpt3.txt ├── 1664030178.817387_gpt3.txt ├── 1664030179.776396_gpt3.txt ├── 1664030180.2589686_gpt3.txt ├── 1664113884.1211233_gpt3.txt ├── 1664113886.2364833_gpt3.txt ├── 1664113887.247883_gpt3.txt ├── 1664113891.2986526_gpt3.txt ├── 1664113893.6120558_gpt3.txt ├── 1664113898.1304936_gpt3.txt ├── 1664113902.6160245_gpt3.txt ├── 1664113903.3557346_gpt3.txt ├── 1664113907.3797052_gpt3.txt ├── 1664113913.1958954_gpt3.txt ├── 1664113913.989322_gpt3.txt ├── 1664113994.638331_gpt3.txt ├── 1664113995.9344764_gpt3.txt ├── 1664113996.65539_gpt3.txt ├── 1664114000.5190241_gpt3.txt ├── 1664114001.545413_gpt3.txt ├── 1664114002.5462646_gpt3.txt ├── 1664114003.0406718_gpt3.txt ├── 1664114003.7606936_gpt3.txt ├── 1664114004.4559343_gpt3.txt ├── 1664114005.3232803_gpt3.txt ├── 1664114007.1680305_gpt3.txt ├── 1664114009.3795693_gpt3.txt ├── 1664114010.4396253_gpt3.txt ├── 1664114011.3051648_gpt3.txt ├── 1664114013.5757368_gpt3.txt ├── 1664114014.4207935_gpt3.txt ├── 1664192654.974337_gpt3.txt ├── 1664192657.9386404_gpt3.txt ├── 1664192666.8799148_gpt3.txt 
├── 1664192673.58094_gpt3.txt ├── 1664192676.2501068_gpt3.txt ├── 1664192683.72998_gpt3.txt ├── 1664192690.4964757_gpt3.txt ├── 1664192695.3377812_gpt3.txt ├── 1664192700.050884_gpt3.txt ├── 1664192707.1416516_gpt3.txt ├── 1664192709.6569073_gpt3.txt ├── 1664192718.843808_gpt3.txt ├── 1664192724.6265037_gpt3.txt ├── 1664192731.436059_gpt3.txt ├── 1664192739.4047902_gpt3.txt ├── 1664192748.1054962_gpt3.txt ├── 1664192756.8508227_gpt3.txt ├── 1664192764.9168684_gpt3.txt ├── 1664192771.755106_gpt3.txt ├── 1664192777.3153834_gpt3.txt ├── 1664192793.8008633_gpt3.txt ├── 1664192798.318894_gpt3.txt ├── 1664192805.1717317_gpt3.txt ├── 1664192814.266802_gpt3.txt ├── 1664192818.439603_gpt3.txt ├── 1664192821.5376644_gpt3.txt ├── 1664192825.2036698_gpt3.txt ├── 1664192830.0640273_gpt3.txt ├── 1664192833.458407_gpt3.txt ├── 1664192838.1741996_gpt3.txt ├── 1664192840.914712_gpt3.txt ├── 1664192884.243082_gpt3.txt ├── 1664192896.1606605_gpt3.txt ├── 1664192897.9086323_gpt3.txt ├── 1664192900.915385_gpt3.txt ├── 1664192905.6833186_gpt3.txt ├── 1664192915.2224143_gpt3.txt ├── 1664192919.7482333_gpt3.txt ├── 1664192924.4953997_gpt3.txt ├── 1664192931.1831288_gpt3.txt ├── 1664192935.3281112_gpt3.txt ├── 1664192946.699553_gpt3.txt ├── 1664192979.4166937_gpt3.txt ├── 1664192986.131905_gpt3.txt ├── 1664192997.1337612_gpt3.txt ├── 1664193001.399782_gpt3.txt ├── 1664193012.9156985_gpt3.txt ├── 1664193022.62842_gpt3.txt ├── 1664193030.6288655_gpt3.txt ├── 1664193039.289386_gpt3.txt ├── 1664193044.8556232_gpt3.txt ├── 1664193055.2717617_gpt3.txt ├── 1664193073.443836_gpt3.txt ├── 1664193078.2094219_gpt3.txt ├── 1664193085.0984275_gpt3.txt ├── 1664193088.05616_gpt3.txt ├── 1664193124.8979502_gpt3.txt ├── 1664193133.210407_gpt3.txt ├── 1664193142.9145515_gpt3.txt ├── 1664193150.5915766_gpt3.txt ├── 1664193156.922955_gpt3.txt ├── 1664193192.9307983_gpt3.txt ├── 1664193202.757768_gpt3.txt ├── 1664193206.8252177_gpt3.txt ├── 1664193209.6602542_gpt3.txt ├── 1664193219.490347_gpt3.txt ├── 
1664193222.6350365_gpt3.txt ├── 1664193227.4027655_gpt3.txt ├── 1664193234.2597306_gpt3.txt ├── 1664193243.1737792_gpt3.txt ├── 1664193264.8364508_gpt3.txt ├── 1664193267.883827_gpt3.txt ├── 1664193278.5626094_gpt3.txt ├── 1664193283.2456908_gpt3.txt ├── 1664193288.4027245_gpt3.txt ├── 1664193296.8359485_gpt3.txt ├── 1664193305.016106_gpt3.txt ├── 1664193311.4327557_gpt3.txt ├── 1664193340.3930118_gpt3.txt ├── 1664193346.3762891_gpt3.txt ├── 1664193361.3765864_gpt3.txt ├── 1664193366.8042083_gpt3.txt ├── 1664193371.1709278_gpt3.txt ├── 1664193402.3177874_gpt3.txt ├── 1664193409.6529_gpt3.txt ├── 1664193411.7832036_gpt3.txt ├── 1664193423.130336_gpt3.txt ├── 1664193438.0023324_gpt3.txt ├── 1664193466.7443945_gpt3.txt ├── 1664193472.649701_gpt3.txt ├── 1664193476.3201387_gpt3.txt ├── 1664193484.271028_gpt3.txt ├── 1664193492.9322822_gpt3.txt ├── 1664193502.632184_gpt3.txt ├── 1664193507.0579393_gpt3.txt ├── 1664193513.2547393_gpt3.txt ├── 1664193520.583201_gpt3.txt ├── 1664193529.0546806_gpt3.txt ├── 1664193542.745667_gpt3.txt ├── 1664193548.6505542_gpt3.txt ├── 1664193555.0903265_gpt3.txt ├── 1664193560.1811264_gpt3.txt ├── 1664193563.742937_gpt3.txt ├── 1664193572.928136_gpt3.txt ├── 1664193575.9660225_gpt3.txt ├── 1664193580.1521976_gpt3.txt ├── 1664193589.304976_gpt3.txt ├── 1664193596.976032_gpt3.txt ├── 1664193600.6780949_gpt3.txt ├── 1664193612.9853582_gpt3.txt ├── 1664193617.082305_gpt3.txt ├── 1664193628.5680485_gpt3.txt ├── 1664193636.2097843_gpt3.txt ├── 1664193647.6486173_gpt3.txt ├── 1664193650.3165514_gpt3.txt ├── 1664193657.165872_gpt3.txt ├── 1664193668.1662014_gpt3.txt ├── 1664193676.5266914_gpt3.txt ├── 1664193680.659342_gpt3.txt ├── 1664193682.345249_gpt3.txt ├── 1664193684.1202023_gpt3.txt ├── 1664193696.6546996_gpt3.txt ├── 1664193702.663143_gpt3.txt ├── 1664193709.9840834_gpt3.txt ├── 1664193716.7424552_gpt3.txt ├── 1664193728.4183722_gpt3.txt ├── 1664193746.1196654_gpt3.txt ├── 1664193749.7825427_gpt3.txt ├── 1664193785.4223702_gpt3.txt ├── 
1664193788.5194685_gpt3.txt ├── 1664193791.905868_gpt3.txt ├── 1664193800.6314116_gpt3.txt ├── 1664193809.3385801_gpt3.txt ├── 1664193817.4331963_gpt3.txt ├── 1664193837.3177989_gpt3.txt ├── 1664193843.6830378_gpt3.txt ├── 1664193850.7362468_gpt3.txt ├── 1664193860.540437_gpt3.txt ├── 1664193862.8140702_gpt3.txt ├── 1664193865.570573_gpt3.txt ├── 1664193872.2754643_gpt3.txt ├── 1664193879.6645207_gpt3.txt ├── 1664193887.4699838_gpt3.txt ├── 1664193892.5155056_gpt3.txt ├── 1664193895.946091_gpt3.txt ├── 1664193907.4141207_gpt3.txt ├── 1664193921.2711313_gpt3.txt ├── 1664193923.9292514_gpt3.txt ├── 1664193933.5004191_gpt3.txt ├── 1664193939.856447_gpt3.txt ├── 1664193960.488008_gpt3.txt ├── 1664193964.8935475_gpt3.txt ├── 1664193968.0686805_gpt3.txt ├── 1664194002.8186135_gpt3.txt ├── 1664194006.6971958_gpt3.txt ├── 1664194011.246012_gpt3.txt ├── 1664194020.0611537_gpt3.txt ├── 1664194027.1604414_gpt3.txt ├── 1664194034.365943_gpt3.txt ├── 1664194061.7299948_gpt3.txt ├── 1664194072.4134672_gpt3.txt ├── 1664194079.251923_gpt3.txt ├── 1664194105.1001668_gpt3.txt ├── 1664194136.949917_gpt3.txt ├── 1664194143.1155312_gpt3.txt ├── 1664194189.5212169_gpt3.txt ├── 1664194196.1837778_gpt3.txt ├── 1664194204.2863042_gpt3.txt ├── 1664194208.0670376_gpt3.txt ├── 1664194213.8078754_gpt3.txt ├── 1664194232.467182_gpt3.txt ├── 1664194239.0604813_gpt3.txt ├── 1664194244.2621665_gpt3.txt ├── 1664194249.3030424_gpt3.txt ├── 1664194282.2010124_gpt3.txt ├── 1664194286.1351306_gpt3.txt ├── 1664194288.9271193_gpt3.txt ├── 1664194302.50731_gpt3.txt ├── 1664194308.561461_gpt3.txt ├── 1664194313.0832806_gpt3.txt ├── 1664194314.8759315_gpt3.txt ├── 1664194322.4321322_gpt3.txt ├── 1664194328.4929714_gpt3.txt ├── 1664194333.001923_gpt3.txt ├── 1664194367.1751134_gpt3.txt ├── 1664194374.504423_gpt3.txt ├── 1664194378.52913_gpt3.txt ├── 1664194386.8822916_gpt3.txt ├── 1664194395.532817_gpt3.txt ├── 1664194404.1193893_gpt3.txt ├── 1664194408.5151317_gpt3.txt ├── 1664194445.595596_gpt3.txt ├── 
1664194448.129714_gpt3.txt ├── 1664194453.084612_gpt3.txt ├── 1664194491.5934122_gpt3.txt ├── 1664194497.0943058_gpt3.txt ├── 1664194535.6389172_gpt3.txt ├── 1664194569.1323028_gpt3.txt ├── 1664194573.6491072_gpt3.txt ├── 1664194577.124867_gpt3.txt ├── 1664194580.5401533_gpt3.txt ├── 1664194592.469994_gpt3.txt ├── 1664194625.598991_gpt3.txt ├── 1664194654.329986_gpt3.txt ├── 1664194675.0843604_gpt3.txt ├── 1664194682.9472253_gpt3.txt ├── 1664194689.270827_gpt3.txt ├── 1664194698.542745_gpt3.txt ├── 1664194704.6068993_gpt3.txt ├── 1664194707.0367258_gpt3.txt ├── 1664194714.4825916_gpt3.txt ├── 1664194723.173904_gpt3.txt ├── 1664194759.3498378_gpt3.txt ├── 1664194763.8862836_gpt3.txt ├── 1664194797.846319_gpt3.txt ├── 1664194807.1299021_gpt3.txt ├── 1664194848.9188485_gpt3.txt ├── 1664194856.9481876_gpt3.txt ├── 1664194871.2761006_gpt3.txt ├── 1664194876.053469_gpt3.txt ├── 1664194881.1977437_gpt3.txt ├── 1664194890.387228_gpt3.txt ├── 1664194895.1263156_gpt3.txt ├── 1664194901.5971196_gpt3.txt ├── 1664194913.6290445_gpt3.txt ├── 1664194922.8573174_gpt3.txt ├── 1664194931.7636802_gpt3.txt ├── 1664194936.687499_gpt3.txt ├── 1664194947.4363613_gpt3.txt ├── 1664194954.7054114_gpt3.txt ├── 1664194958.847562_gpt3.txt ├── 1664194961.4482222_gpt3.txt ├── 1664194966.082575_gpt3.txt ├── 1664194968.8416731_gpt3.txt ├── 1664194974.5213957_gpt3.txt ├── 1664194979.5211327_gpt3.txt ├── 1664194986.4416301_gpt3.txt ├── 1664194996.354903_gpt3.txt ├── 1664195012.513914_gpt3.txt ├── 1664195016.7277634_gpt3.txt ├── 1664195024.698862_gpt3.txt ├── 1664195035.642785_gpt3.txt ├── 1664195040.707781_gpt3.txt ├── 1664195046.689667_gpt3.txt ├── 1664195052.911833_gpt3.txt ├── 1664195059.9230072_gpt3.txt ├── 1664195063.03139_gpt3.txt ├── 1664195083.6818848_gpt3.txt ├── 1664195086.9759145_gpt3.txt ├── 1664195093.1648123_gpt3.txt ├── 1664195100.7606602_gpt3.txt ├── 1664195111.6790614_gpt3.txt ├── 1664195113.5757139_gpt3.txt ├── 1664195117.490895_gpt3.txt ├── 1664195125.9084237_gpt3.txt ├── 
1664195130.4307632_gpt3.txt ├── 1664195137.637777_gpt3.txt ├── 1664195144.0035155_gpt3.txt ├── 1664195152.5470903_gpt3.txt ├── 1664195194.5961878_gpt3.txt ├── 1664195207.303937_gpt3.txt ├── 1664195215.7130134_gpt3.txt ├── 1664195219.5120347_gpt3.txt ├── 1664195228.9167552_gpt3.txt ├── 1664195231.47231_gpt3.txt ├── 1664195239.267633_gpt3.txt ├── 1664195242.3746321_gpt3.txt ├── 1664195252.1127484_gpt3.txt ├── 1664195258.4450698_gpt3.txt ├── 1664195268.0478835_gpt3.txt ├── 1664195273.9983468_gpt3.txt ├── 1664195282.6406794_gpt3.txt ├── 1664195294.8734443_gpt3.txt ├── 1664195306.3156426_gpt3.txt ├── 1664195315.938639_gpt3.txt ├── 1664195324.9375_gpt3.txt ├── 1664195346.4573445_gpt3.txt ├── 1664195351.6793983_gpt3.txt ├── 1664195383.9782255_gpt3.txt ├── 1664195387.872407_gpt3.txt ├── 1664195395.61859_gpt3.txt ├── 1664195403.867874_gpt3.txt ├── 1664195408.4011178_gpt3.txt ├── 1664195414.022094_gpt3.txt ├── 1664195423.5836895_gpt3.txt ├── 1664195427.7997386_gpt3.txt ├── 1664195450.4366512_gpt3.txt ├── 1664195459.5184784_gpt3.txt ├── 1664195473.3647325_gpt3.txt ├── 1664195476.8919446_gpt3.txt ├── 1664195486.8083825_gpt3.txt ├── 1664195496.0649872_gpt3.txt ├── 1664195505.4159532_gpt3.txt ├── 1664195512.756014_gpt3.txt ├── 1664195534.096666_gpt3.txt ├── 1664195536.564288_gpt3.txt ├── 1664195560.0073462_gpt3.txt ├── 1664195565.9518776_gpt3.txt ├── 1664195572.0465071_gpt3.txt ├── 1664195578.9171684_gpt3.txt ├── 1664195583.4797993_gpt3.txt ├── 1664195587.284985_gpt3.txt ├── 1664195590.659772_gpt3.txt ├── 1664195598.8663397_gpt3.txt ├── 1664195604.0852718_gpt3.txt ├── 1664195616.1922908_gpt3.txt ├── 1664195620.5794935_gpt3.txt ├── 1664195630.8046916_gpt3.txt ├── 1664195642.2349324_gpt3.txt ├── 1664195648.4660804_gpt3.txt ├── 1664195665.4852543_gpt3.txt ├── 1664195670.329732_gpt3.txt ├── 1664195675.860193_gpt3.txt ├── 1664195679.3081644_gpt3.txt ├── 1664195685.4627268_gpt3.txt ├── 1664195693.8695595_gpt3.txt ├── 1664195705.5476694_gpt3.txt ├── 1664195727.1324618_gpt3.txt ├── 
1664195753.8867805_gpt3.txt ├── 1664195756.2641711_gpt3.txt ├── 1664195788.9317796_gpt3.txt ├── 1664195793.4594445_gpt3.txt ├── 1664195796.0682206_gpt3.txt ├── 1664195808.9950953_gpt3.txt ├── 1664195813.0780747_gpt3.txt ├── 1664195816.0786278_gpt3.txt ├── 1664195824.8721013_gpt3.txt ├── 1664195832.2907846_gpt3.txt ├── 1664195838.2853281_gpt3.txt ├── 1664195840.5665286_gpt3.txt ├── 1664195862.4374068_gpt3.txt ├── 1664195865.2528996_gpt3.txt ├── 1664195870.3018084_gpt3.txt ├── 1664195872.903587_gpt3.txt ├── 1664195876.842301_gpt3.txt ├── 1664195878.8790085_gpt3.txt ├── 1664195887.1233103_gpt3.txt ├── 1664195891.5498393_gpt3.txt ├── 1664195894.8199432_gpt3.txt ├── 1664195897.18189_gpt3.txt ├── 1664195900.235589_gpt3.txt ├── 1664195906.4821484_gpt3.txt ├── 1664195914.4200885_gpt3.txt ├── 1664195917.9952064_gpt3.txt ├── 1664195926.1039288_gpt3.txt ├── 1664195931.3511448_gpt3.txt ├── 1664195965.1612475_gpt3.txt ├── 1664195983.7997077_gpt3.txt ├── 1664195991.6519344_gpt3.txt ├── 1664195996.0386207_gpt3.txt ├── 1664196004.8588455_gpt3.txt ├── 1664196008.3254318_gpt3.txt ├── 1664196027.0439222_gpt3.txt ├── 1664196036.0923445_gpt3.txt ├── 1664196041.0812852_gpt3.txt ├── 1664196047.2672668_gpt3.txt ├── 1664196049.4440863_gpt3.txt ├── 1664196084.7803748_gpt3.txt ├── 1664196090.611265_gpt3.txt ├── 1664196095.4123502_gpt3.txt ├── 1664196102.4603152_gpt3.txt ├── 1664196106.7148254_gpt3.txt ├── 1664196110.5980892_gpt3.txt ├── 1664196117.1204886_gpt3.txt ├── 1664196125.9363136_gpt3.txt ├── 1664196134.076975_gpt3.txt ├── 1664196138.6764445_gpt3.txt ├── 1664196144.9138107_gpt3.txt ├── 1664196154.3265188_gpt3.txt ├── 1664196162.3875706_gpt3.txt ├── 1664196167.7243636_gpt3.txt ├── 1664196172.8625815_gpt3.txt ├── 1664196181.2221437_gpt3.txt ├── 1664196183.4221175_gpt3.txt ├── 1664196194.5123608_gpt3.txt ├── 1664196202.0916147_gpt3.txt ├── 1664196211.9537013_gpt3.txt ├── 1664196218.2688322_gpt3.txt ├── 1664196228.9080184_gpt3.txt ├── 1664196239.5214982_gpt3.txt ├── 
1664196249.4108384_gpt3.txt ├── 1664196256.015913_gpt3.txt ├── 1664196268.5346172_gpt3.txt ├── 1664196273.1441176_gpt3.txt ├── 1664196280.234305_gpt3.txt ├── 1664196288.6838953_gpt3.txt ├── 1664196296.3530636_gpt3.txt ├── 1664196308.3798866_gpt3.txt ├── 1664196312.5292897_gpt3.txt ├── 1664196314.5895596_gpt3.txt ├── 1664196327.4285665_gpt3.txt ├── 1664196335.9258392_gpt3.txt ├── 1664196343.232301_gpt3.txt ├── 1664196349.5469055_gpt3.txt ├── 1664196358.120217_gpt3.txt ├── 1664196362.7450147_gpt3.txt ├── 1664196368.6731248_gpt3.txt ├── 1664196378.204757_gpt3.txt ├── 1664196380.6690094_gpt3.txt ├── 1664196383.9197478_gpt3.txt ├── 1664196389.944343_gpt3.txt ├── 1664196400.0325823_gpt3.txt ├── 1664196409.3463542_gpt3.txt ├── 1664196414.7725787_gpt3.txt ├── 1664196422.3371227_gpt3.txt ├── 1664196437.3135202_gpt3.txt ├── 1664196441.0832098_gpt3.txt ├── 1664196458.7312286_gpt3.txt ├── 1664196465.5644922_gpt3.txt ├── 1664196472.1042838_gpt3.txt ├── 1664196478.1382601_gpt3.txt ├── 1664196482.866797_gpt3.txt ├── 1664196486.314383_gpt3.txt ├── 1664196493.2473822_gpt3.txt ├── 1664196494.8407729_gpt3.txt ├── 1664196500.8749545_gpt3.txt ├── 1664196505.3067517_gpt3.txt ├── 1664196513.141079_gpt3.txt ├── 1664196518.1003177_gpt3.txt ├── 1664196526.467645_gpt3.txt ├── 1664196534.0415788_gpt3.txt ├── 1664196543.4555733_gpt3.txt ├── 1664196554.4210346_gpt3.txt ├── 1664196565.891203_gpt3.txt ├── 1664196597.8884156_gpt3.txt ├── 1664196602.4311547_gpt3.txt ├── 1664196610.401809_gpt3.txt ├── 1664196643.7033799_gpt3.txt ├── 1664196648.0459914_gpt3.txt ├── 1664196657.4652743_gpt3.txt ├── 1664196660.1476912_gpt3.txt ├── 1664196669.222388_gpt3.txt ├── 1664196678.622769_gpt3.txt ├── 1664196683.6469405_gpt3.txt ├── 1664196697.9811711_gpt3.txt ├── 1664196705.6266177_gpt3.txt ├── 1664196718.0743916_gpt3.txt ├── 1664196725.1590407_gpt3.txt ├── 1664196734.004458_gpt3.txt ├── 1664196739.6312084_gpt3.txt ├── 1664196749.9271412_gpt3.txt ├── 1664196754.1764271_gpt3.txt ├── 1664196759.0957341_gpt3.txt 
├── 1664196762.8689842_gpt3.txt ├── 1664196774.72799_gpt3.txt ├── 1664196785.2686534_gpt3.txt ├── 1664196792.927425_gpt3.txt ├── 1664196802.9357612_gpt3.txt ├── 1664196810.9317358_gpt3.txt ├── 1664196818.4600563_gpt3.txt ├── 1664196824.2271757_gpt3.txt ├── 1664196835.1914732_gpt3.txt ├── 1664196842.6684208_gpt3.txt ├── 1664196857.5935314_gpt3.txt ├── 1664196869.8927143_gpt3.txt ├── 1664196874.920712_gpt3.txt ├── 1664196890.015479_gpt3.txt ├── 1664196897.0659747_gpt3.txt ├── 1664196913.2052667_gpt3.txt ├── 1664196920.517954_gpt3.txt ├── 1664196958.872129_gpt3.txt ├── 1664196966.1125758_gpt3.txt ├── 1664196967.5872192_gpt3.txt ├── 1664286982.0387828_gpt3.txt ├── 1664286983.4355168_gpt3.txt ├── 1664286984.5902839_gpt3.txt ├── 1664286985.2549624_gpt3.txt ├── 1664286986.9112985_gpt3.txt ├── 1664286988.690846_gpt3.txt ├── 1664286990.979018_gpt3.txt ├── 1664286991.952075_gpt3.txt ├── 1664286993.756928_gpt3.txt ├── 1664286994.6292841_gpt3.txt ├── 1664286995.782863_gpt3.txt ├── 1664286996.65741_gpt3.txt ├── 1664286997.4496002_gpt3.txt ├── 1664286998.411514_gpt3.txt ├── 1664286999.3513334_gpt3.txt ├── 1664287000.0963602_gpt3.txt ├── 1664287001.6585076_gpt3.txt ├── 1664287002.6820767_gpt3.txt ├── 1664287003.5492787_gpt3.txt ├── 1664287004.6101418_gpt3.txt ├── 1664287005.5286667_gpt3.txt ├── 1664287006.1894543_gpt3.txt ├── 1664287008.0282252_gpt3.txt ├── 1664287008.9861581_gpt3.txt ├── 1664287010.7911062_gpt3.txt ├── 1664287012.5702047_gpt3.txt ├── 1664287013.6034126_gpt3.txt ├── 1664287014.3405426_gpt3.txt ├── 1664287047.7449882_gpt3.txt ├── 1664287048.9316015_gpt3.txt ├── 1664287049.7569857_gpt3.txt ├── 1664717878.2020712_gpt3.txt ├── 1664717879.4159596_gpt3.txt ├── 1664717880.4710019_gpt3.txt ├── 1664717881.1214526_gpt3.txt ├── 1664717881.8578231_gpt3.txt ├── 1664717883.2674325_gpt3.txt ├── 1664717884.7659585_gpt3.txt ├── 1664717918.85351_gpt3.txt ├── 1664717920.4612098_gpt3.txt ├── 1664717921.4024456_gpt3.txt ├── 1664717922.3213427_gpt3.txt ├── 1664717922.8927217_gpt3.txt 
├── 1664717923.853931_gpt3.txt ├── 1664717924.6320856_gpt3.txt ├── 1664717925.7500427_gpt3.txt ├── 1664717926.7887807_gpt3.txt ├── 1664717927.4962642_gpt3.txt ├── 1664717928.8244648_gpt3.txt ├── 1664717929.651343_gpt3.txt ├── 1664717930.418617_gpt3.txt ├── 1664964828.1378214_gpt3.txt ├── 1664964829.072681_gpt3.txt ├── 1664964831.0550814_gpt3.txt ├── 1664964831.8599381_gpt3.txt ├── 1664964832.5241678_gpt3.txt ├── 1664964833.5055115_gpt3.txt ├── 1664964834.445731_gpt3.txt ├── 1664964835.4373577_gpt3.txt ├── 1664964836.263819_gpt3.txt ├── 1664964837.2277405_gpt3.txt ├── 1664964837.9560602_gpt3.txt ├── 1664964838.7115088_gpt3.txt ├── 1664964839.4099088_gpt3.txt ├── 1664964840.2566035_gpt3.txt ├── 1664974560.7032058_gpt3.txt ├── 1664974561.9080687_gpt3.txt ├── 1664974563.1172442_gpt3.txt ├── 1664974565.0993934_gpt3.txt ├── 1664974565.8160026_gpt3.txt ├── 1664974566.9573388_gpt3.txt ├── 1664974568.2325313_gpt3.txt ├── 1664974569.0509834_gpt3.txt ├── 1664974569.7981856_gpt3.txt ├── 1664974570.4917912_gpt3.txt ├── 1664974571.6100295_gpt3.txt ├── 1664974572.7088604_gpt3.txt ├── 1664974573.4922044_gpt3.txt ├── 1664974574.4547265_gpt3.txt ├── 1664974575.992245_gpt3.txt ├── 1664974576.896137_gpt3.txt ├── 1664974577.3636875_gpt3.txt ├── 1666621266.8279285_gpt3.txt ├── 1666621269.7306144_gpt3.txt ├── 1666621275.989547_gpt3.txt ├── 1666621281.2944515_gpt3.txt ├── 1666621288.0337656_gpt3.txt ├── 1666621294.6363282_gpt3.txt ├── 1666621300.6579869_gpt3.txt ├── 1666621305.6289752_gpt3.txt ├── 1666621308.3471413_gpt3.txt ├── 1666621311.9041_gpt3.txt ├── 1666621315.8434124_gpt3.txt ├── 1666621318.6224582_gpt3.txt ├── 1666621321.501562_gpt3.txt ├── 1666621323.7273035_gpt3.txt ├── 1666621325.5533595_gpt3.txt ├── 1666621328.1231248_gpt3.txt ├── 1666621333.3814278_gpt3.txt ├── 1666621338.2708018_gpt3.txt ├── 1666621341.2532597_gpt3.txt ├── 1666621342.8988144_gpt3.txt ├── 1666621345.1255522_gpt3.txt ├── 1666621347.7084289_gpt3.txt ├── 1666621350.8389072_gpt3.txt ├── 
1671271018.3917968_gpt3.txt ├── 1671271019.243985_gpt3.txt ├── 1671271019.9028628_gpt3.txt ├── 1671271020.7056303_gpt3.txt ├── 1671271021.3023028_gpt3.txt ├── 1671271022.1125705_gpt3.txt ├── 1671271022.7343576_gpt3.txt ├── 1671271023.3713014_gpt3.txt ├── 1671271024.1055772_gpt3.txt ├── 1671271024.7268174_gpt3.txt ├── 1671271025.1769907_gpt3.txt ├── 1671271105.7640073_gpt3.txt ├── 1671271106.4784846_gpt3.txt ├── 1671271107.0742857_gpt3.txt ├── 1671271169.9237027_gpt3.txt ├── 1671271170.5412226_gpt3.txt ├── 1671271171.3149283_gpt3.txt ├── 1671271172.0143652_gpt3.txt ├── 1671271172.5753782_gpt3.txt ├── 1671271173.3807137_gpt3.txt ├── 1671271174.1510987_gpt3.txt ├── 1671271174.7880378_gpt3.txt ├── 1671271175.459534_gpt3.txt ├── 1671271176.0205648_gpt3.txt ├── 1671312736.0787103_gpt3.txt ├── 1671312847.892301_gpt3.txt ├── 1671312848.6935775_gpt3.txt ├── 1671312849.8410378_gpt3.txt ├── 1671312850.8608928_gpt3.txt ├── 1671312851.7442536_gpt3.txt ├── 1671312852.4997005_gpt3.txt ├── 1671312853.4213994_gpt3.txt ├── 1671312854.190106_gpt3.txt ├── 1671410045.949092_gpt3.txt ├── 1671410046.9482522_gpt3.txt ├── 1671410047.827728_gpt3.txt ├── 1671410048.5753052_gpt3.txt ├── 1671410049.643054_gpt3.txt ├── 1671410050.3975096_gpt3.txt ├── 1671410051.1014044_gpt3.txt ├── 1671410051.7261164_gpt3.txt ├── 1671410052.054788_gpt3.txt ├── 1671410068.5149436_gpt3.txt ├── 1671410069.039259_gpt3.txt ├── 1671410069.564385_gpt3.txt ├── 1671410070.1827965_gpt3.txt ├── 1671410070.6714287_gpt3.txt ├── 1671410071.1623144_gpt3.txt ├── 1671410071.710086_gpt3.txt ├── 1671410072.2553895_gpt3.txt ├── 1671410072.7888281_gpt3.txt ├── 1671410073.3164093_gpt3.txt ├── 1671410074.2549767_gpt3.txt ├── 1671410074.7620516_gpt3.txt ├── 1671410075.172973_gpt3.txt ├── 1671410075.696564_gpt3.txt ├── 1671410076.3629694_gpt3.txt ├── 1671410076.8738773_gpt3.txt └── 1671410077.207555_gpt3.txt ├── output.txt ├── prompt.txt ├── prompt_clarify_transcript.txt ├── sprs ├── 1000 Subs Livestream! 
AI alignment, writing fiction, LLMs, and more!.txt
├── 3 Predictions for the future of AI jobs.txt
├── 5 Reasons GPT-4 Will Be Disappointing.txt
├── 5 Tips and Misconceptions about Finetuning GPT-3.txt
├── 57% of people Machines will be sentient one day (10% say it's already happened!).txt
├── 8 Post-Singularity Lifestyles + The Latest and Best AI Tools + Functional Machine Sentience!.txt
├── ACE Framework Overview and Intro Autonomous AI Agents!.txt
├── ACE Paper is Published! Repo tour! Get involved!.txt
├── AGI Alignment Experiments INSTRUCT vs Foundation and Agent Models.txt
├── AGI Poll results AGI not dangerous, might destroy lots of jobs.txt
├── AGI Revolution How Businesses, Governments, and Individuals can Prepare.txt
├── AGI Unleashed Game Theory, Byzantine Generals, and the Heuristic Imperatives.txt
├── AGI Within 12 Months! Rumors, Leaks, and Trends - Announcing Open MURPHIE robotic platform.txt
├── AGI within 18 months explained with a boatload of papers and projects.txt
├── AI & Evolution How the rise of AI might shape the future of humanity.txt
├── AI + Blockchain + DAO = Success! How Decentralized Autonomous Organizations will control EVERYTHING!.txt
├── AI Companions Social Impact - How tech like ChatGPT, virtual companions, and robots will change us.txt
├── AI Insights Forum Closed Door Senate Meeting with Tech Execs.txt
├── AI Jobpocalypse How tech like ChatGPT will impact your life by 2030 (and what to do about it).txt
├── AI Layoffs are Coming Gizmodo Fires Editors.txt
├── AI News CoALA, Theory of Mind, Artificial Neurons, Swarm Intelligence, and Neural Convergence.txt
├── AI Startup Crash Course - Ep. 1 - Founder Team.txt
├── AI Startup Crash Course - Ep. 2 - Experiment, Experiment, Experiment.txt
├── AI Startup Crash Course - Ep. 3 - Network, Network, Network.txt
├── AI Winter is Not Coming Where in the Gartner Hype Cycle Are we What comes next (It gets messy!).txt
├── Accelerating Science with AI Quickly Read Every Paper and Get Key Insights in Bulk.txt
├── Alignment Collectivist vs Egocentric Agent Models (I vs we) Can we create a unique mind.txt
├── Alignment Research GPT-3 vs GPT-NeoX - which one understands AGI alignment better.txt
├── America to hit 82% Unemployment I have the data to back it up.txt
├── Anna Bernstein - Professional Prompt Engineer - We don't have to forfeit the realm of creativity.txt
├── Answer complex questions from an arbitrarily large set of documents with vector search and GPT-3.txt
├── Are LaMDA or GPT-3 sentient No, but....txt
├── Are chatbots lying to us This is worse than you think..txt
├── AutoMuse Announcement & Signup.txt
├── AutoMuse ChatGPT local Python GUI (TKINTER) - Now it can read my entire novel and do QA!.txt
├── AutoMuse Scene Simulation is Working! GPT-3 Finetuned to write sequences of fiction.txt
├── AutoMuse Synopsis GAN (Generative Adversarial Network) for self-improving synopsis generators!.txt
├── AutoMuse Synopsis and Plot Generator Walkthrough.txt
├── AutoMuse Working on Character and Scene Simulations (GPT-3 Finetuning Project).txt
├── AutoMuse Writing a Novel with GPT-3 as a Plotter pt.1 (decompose the problem into steps).txt
├── AutoMuse3 Generate fiction with text-based simulation and cognitive architecture for characters.txt
├── Automatic Literature Review with GPT-3 - I embedded and indexed all of arXiv into a search engine!.txt
├── Automating Science with GPT-4 - attempting (and failing) to perform autonomous literature review.txt
├── Axiomatic Alignment A critical component to Utopia and the Control Problem.txt
├── BOMBSHELL Mustafa Suleyman AI will reshuffle society.txt
├── Beyond Vector Search Knowledge Management with Generative AI.txt
├── Billion-dollar GPT-3 startup! Fix education with an expert tutor chatbot!.txt
├── Bing vs Bard Who will win Google or Microsoft A breakdown and analysis of the recent news.txt
├── Biological Immortality by 2030 Social & Economic Implications + Some Predictions!.txt
├── Can AI help treat trauma I tested three methods, here are the results..txt
├── Can GPT-3 generate training data Short answer Yes! Here's why that's a legit methodology....txt
├── ChatGPT - Good, Bad, Indifferent. What will it change What does this mean.txt
├── ChatGPT API QA Chatbot ep 1 - Data Prep [Automating Regenerative Science].txt
├── ChatGPT API with Salience and Anticipation of User Needs Towards a fully autonomous assistant.txt
├── ChatGPT SYSTEM Prompt Engineering (Deep Dive).txt
├── ChatGPT as an Interpreter Introducing the KB Microservice for autonomous AI entities.txt
├── ChatGPT for Absolute Beginners - What is it and how does it work.txt
├── ChatGPT was lobotomized for coding and GitHub Copilot is broken... so I made my own!.txt
├── ChatGPT, Bing, and Google How our relationship with information is changing.txt
├── ChatGPT4 - Sparse Priming Representations, Hierarchical Memory Consolidation, and Implied Cognition!.txt
├── Chatbot with INFINITE MEMORY using OpenAI & Pinecone - GPT-3, Embeddings, ADA, Vector DB, Semantic.txt
├── Co-writing flash fiction with GPT-3.txt
├── Coding Raven's Encyclopedia Service (v.1).txt
├── Cognitive AI Lab Community Update + consensus, books, and my next steps.txt
├── Cognitive AI Lab Podcast 2022-09-21 - Decentralized AI, DAOs, and AI art.txt
├── Cognitive Architecture - April 28 - Salience, Cognitive Control, Task Management, Modular Design.txt
├── Concepts in Neuroscience and Cognition - Deficits of GPT-3 and the path to AGI and ACE.txt
├── Convergence and acceleration towards AGI (or Artificial Cognitive Entities).txt
├── Core Objective Functions vs Thanos Logic.txt
├── Could a machine ever be fully sentient or conscious A deep dive on philosophy, quantum mechanics....txt
├── Cover Letter & Professional Objective Generator with GPT-3.txt
├── Cyberpunk Dystopia Why it's coming and how we can avoid it.txt
├── DALLE2 Style Tags Tutorial - Elven archer in a sunny forest with different tags.txt
├── DIY ChatGPT Enhancing RAVEN's long-term memories and starting to work on self-reflection.txt
├── Decreasingly Verbose Alice in Wonderland - down to 55 characters!.txt
├── DeepMind, OpenAI, Microsoft, Oxford Global AI Research & Regulation is Coming.txt
├── Don't Use MemGPT!! This is way better (and easier)! Use Sparse Priming Representations!.txt
├── Doomerism, Denialism, and Optimism. Introducing the GATO Framework. AI Avengers... Assemble!!.txt
├── Elon Musk xAI Maximize Understanding of the Universe [This is the best news all year].txt
├── Encyclopedia Service Demo (v.1).txt
├── Energy Hyper-Abundance Solar, Fusion, Geopolitics, & AI.txt
├── Escaped Sapiens David Shapiro on AI alignment, safety, and the future of humanity.txt
├── FDA Approves AI Orphan Drug.txt
├── Fine-tuning GPT-3 for benevolent and trustworthy AGI.txt
├── Fine-tuning GPT-3 to generate questions about anything.txt
└── Finetune a perfect email generator in GPT-3 - take any input and generate a great email.txt
├── system_spr.txt
├── timestamped
├── 5 Reasons GPT-4 Will Be Disappointing.txt
├── 5 Tips and Misconceptions about Finetuning GPT-3.txt
├── 57% of people Machines will be sentient one day (10% say it's already happened!).txt
├── 8 Post-Singularity Lifestyles + The Latest and Best AI Tools + Functional Machine Sentience!.txt
├── ACE Framework Overview and Intro Autonomous AI Agents!.txt
├── ACE Paper is Published! Repo tour! Get involved!.txt
├── AGI Revolution How Businesses, Governments, and Individuals can Prepare.txt
├── AGI Unleashed Game Theory, Byzantine Generals, and the Heuristic Imperatives.txt
├── AGI Within 12 Months! Rumors, Leaks, and Trends - Announcing Open MURPHIE robotic platform.txt
├── AGI within 18 months explained with a boatload of papers and projects.txt
├── AI & Evolution How the rise of AI might shape the future of humanity.txt
├── AI + Blockchain + DAO = Success! How Decentralized Autonomous Organizations will control EVERYTHING!.txt
├── AI Companions Social Impact - How tech like ChatGPT, virtual companions, and robots will change us.txt
├── AI Insights Forum Closed Door Senate Meeting with Tech Execs.txt
├── AI Jobpocalypse How tech like ChatGPT will impact your life by 2030 (and what to do about it).txt
├── AI Layoffs are Coming Gizmodo Fires Editors.txt
├── AI News CoALA, Theory of Mind, Artificial Neurons, Swarm Intelligence, and Neural Convergence.txt
├── AI Winter is Not Coming Where in the Gartner Hype Cycle Are we What comes next (It gets messy!).txt
├── Accelerating Science with AI Quickly Read Every Paper and Get Key Insights in Bulk.txt
├── America to hit 82% Unemployment I have the data to back it up.txt
├── Anna Bernstein - Professional Prompt Engineer - We don't have to forfeit the realm of creativity.txt
├── Answer complex questions from an arbitrarily large set of documents with vector search and GPT-3.txt
├── Are chatbots lying to us This is worse than you think..txt
├── AutoMuse ChatGPT local Python GUI (TKINTER) - Now it can read my entire novel and do QA!.txt
├── Automatic Literature Review with GPT-3 - I embedded and indexed all of arXiv into a search engine!.txt
├── Automating Science with GPT-4 - attempting (and failing) to perform autonomous literature review.txt
├── Axiomatic Alignment A critical component to Utopia and the Control Problem.txt
├── BOMBSHELL Mustafa Suleyman AI will reshuffle society.txt
├── Beyond Vector Search Knowledge Management with Generative AI.txt
├── Bing vs Bard Who will win Google or Microsoft A breakdown and analysis of the recent news.txt
├── Biological Immortality by 2030 Social & Economic Implications + Some Predictions!.txt
├── ChatGPT API QA Chatbot ep 1 - Data Prep [Automating Regenerative Science].txt
├── ChatGPT API with Salience and Anticipation of User Needs Towards a fully autonomous assistant.txt
├── ChatGPT SYSTEM Prompt Engineering (Deep Dive).txt
├── ChatGPT as an Interpreter Introducing the KB Microservice for autonomous AI entities.txt
├── ChatGPT for Absolute Beginners - What is it and how does it work.txt
├── ChatGPT was lobotomized for coding and GitHub Copilot is broken... so I made my own!.txt
├── ChatGPT, Bing, and Google How our relationship with information is changing.txt
├── ChatGPT4 - Sparse Priming Representations, Hierarchical Memory Consolidation, and Implied Cognition!.txt
├── Chatbot with INFINITE MEMORY using OpenAI & Pinecone - GPT-3, Embeddings, ADA, Vector DB, Semantic.txt
├── Cognitive Architecture - April 28 - Salience, Cognitive Control, Task Management, Modular Design.txt
├── Cyberpunk Dystopia Why it's coming and how we can avoid it.txt
├── DIY ChatGPT Enhancing RAVEN's long-term memories and starting to work on self-reflection.txt
├── DeepMind, OpenAI, Microsoft, Oxford Global AI Research & Regulation is Coming.txt
├── Don't Use MemGPT!! This is way better (and easier)! Use Sparse Priming Representations!.txt
├── Doomerism, Denialism, and Optimism. Introducing the GATO Framework. AI Avengers... Assemble!!.txt
├── Elon Musk xAI Maximize Understanding of the Universe [This is the best news all year].txt
├── Energy Hyper-Abundance Solar, Fusion, Geopolitics, & AI.txt
├── Escaped Sapiens David Shapiro on AI alignment, safety, and the future of humanity.txt
├── FDA Approves AI Orphan Drug.txt
├── Fine-tuning GPT-3 to generate questions about anything.txt
├── Finetune GPT-3 to write an entire coherent novel (part 1).txt
├── Finetune multiple cognitive tasks with GPT-3 on medical texts (and reduce hallucination).txt
├── Finetuning GPT-3 101 Synthesizing Training Data.txt
├── Finetuning GPT-3 to be a master tutor that can handle any topic and hostile students.txt
├── Finetuning a Creative Writing Coach in GPT-3 - Part 1.txt
├── First look at ChatGPT API - the age of Autonomous AI begins TODAY! Cognitive Architectures ahoy!.txt
├── First look! ChatGPT-4 has 32,000 tokens! Continuing work on my ASCENSION novel with the latest!.txt
├── Fixing goldfish memory with GPT-3 and external sources of information in a chatbot - part 1.txt
├── Forever Jobs AGI and Singularity Proof Occupations (plus a few that SHOULD go away!).txt
├── From zero to finetuned model in 1 hour with GPT-3. Generate a movie script from any premise!.txt
├── GPT Masterclass 4 Years of Prompt Engineering in 16 Minutes.txt
├── GPT Prompt Strategy Brainstorm, Search, Hypothesize, and Refine - THIS is the FUTURE!!.txt
├── GPT Prompt Strategy Latent Space Activation - what EVERYONE is missing!.txt
├── GPT-5 Rumors and Predictions - It's about to get real silly.txt
├── Generative AI for CEOs How to think about AI today, and how to get the most ROI from it.txt
├── Generative AI for Product Owners The Rise of Polymorphic Applications!.txt
├── Global AI news you might have missed, plus using AI for automatically finding consensus.txt
├── How I stay up to date on the latest AI science news.txt
├── How Will Super Alignment Work Challenges and Criticisms of OpenAI's Approach to AGI Safety & X-Risk.txt
├── How close are we to building a real Westworld A look at the hardware, software, and social impact.txt
├── How could AGI break out of the lab Experts weigh in!.txt
├── How do we pay for UBI The answer is not what you think....txt
├── I built a thinking machine. Happy birthday, ACE!.txt
├── I built an AI doctor with ChatGPT - Full Clinical Experience.txt
├── I summarized the EU AI Act so you can ask ChatGPT about it easily.txt
├── Introducing AutoMuse ChatGPT - Working on Temporal Hierarchical Memories for AGI & ACE systems.txt
├── Introducing Benevolent AGI - Autonomously Pursuing Heuristic Imperatives and Upholding Human Rights!.txt
├── Is AGI actually God Not quite as spicy of a take as you might think....txt
├── LangChain for LLMs is... basically just an Ansible playbook.txt
├── Learn AI A Step By Step Guide (At least, this is how I did it) This is the way....txt
├── Levels of Machine Autonomy (Roadmap to AGI).txt
├── Life under UBI Work, hobbies, and wellbeing without work.txt
├── Mastering the Crawl Walk Run Fly Model of AI Transformation.txt
├── Max Tegmark Language Models Understand Time and Space.txt
├── Meta AI LM-Infinite - Massive LLM improvement!.txt
├── Metaprompting with GPT-3 to dynamically generate arguments.txt
├── Microsoft LongNet One BILLION Tokens LLM + OpenAI SuperAlignment [SINGULARITY APPROACHES].txt
├── Microsoft OpenAI ChapGPT - VS - Google Anthropic Claude - Who will win Why How.txt
├── Mike Todasco - The Innovative Mindset.txt
├── My predictions about Artificial Super Intelligence (ASI).txt
├── NVIDIA AI 1,000,000x (a million times) more powerful than ChatGPT within 10 years.txt
├── Neuralink begins human trials. My predictions..txt
├── OpenAI $100,000 Grants for AI Consensus Platform! Plus a Gentle Introduction to GATO Framework.txt
├── OpenAI Blog Sam Altman Planning for AGI and beyond.txt
├── OpenAI ChatGPT Predictions Integrations, Competitors, and Impacts.txt
├── OpenAI GPT-4 Predictions and Rumors.txt
├── OpenAI Q&A Finetuning GPT-3 vs Semantic Search - which to use, when, and why.txt
├── OpenAI Revenue EXPLODES - IBM & Tesla Bet Billions on AI.txt
├── OpenAI's Democratic Inputs to AI - My universal survey chatbot is complete. Here's how it works!.txt
├── Our Decentralized Future - Tau - Interview with Prof. Franconi and Ohad Asor.txt
├── Panacea is Coming 7 Lifestyles for Longevity Escape Velocity.txt
├── Polymorphic Applications Mission-Driven Software, Cognitive Architectures, NEXT-GEN PARADIGMS.txt
├── Post-Labor Economics What happens when AI changes work forever.txt
├── Post-Singularity Predictions - How will our lives, corporations, and nations adapt to AI revolution.txt
├── Principle Driven Self-Alignment and Preference Ranking Optimization [Best Alignment Papers!].txt
├── Private ChatGPT instance with ChromaDB backend, builds personal KB articles, updates User Profile!.txt
├── Pro's Finetuning Guide for GPT and LLMs.txt
├── Prompt Engineering 101 Autocomplete, Zero-shot, One-shot, and Few-shot prompting.txt
├── Prompt Engineering 101 Summarizing, Extraction, and Rewriting.txt
├── Prompt engineering with ChatGPT. Writing expert prompts for non-English speakers. Sommelier and OSHA.txt
├── Proposing GAIA - Global AI Agency to research AI and solve alignment, mitigate existential risks.txt
├── Proposing a ROBOT Tax Decoupling Economic Growth from Human Labor.txt
├── Python & GPT-3 for Absolute Beginners #1 - Setting up your environment.txt
├── Python & GPT-3 for Absolute Beginners #2 - Your first chatbot.txt
├── Python & GPT-3 for Absolute Beginners #3 - What the heck are embeddings.txt
├── Recursively summarize text of any length with GPT-3.txt
├── Reinforcement Learning Heuristic Imperatives (RLHI) Ep 03 - Inner Alignment is EASY!.txt
├── Reinforcement Learning with Heuristic Imperatives (RLHI) - Ep 01 - Synthesizing Scenarios.txt
├── Reinforcement Learning with Heuristic Imperatives (RLHI) - Ep 02 - Synthesizing Actions.txt
├── Salvation Fantasy The False Promise of Utopia, FDVR, Escapism, and Techno-Optimism.txt
├── Sam Altman GRILLED by Congress - GATO does a roundtable debrief!.txt
├── Should you go to college in the age of AI.txt
├── Singularity Crisis The Death of Human Meaning (Because of AI).txt
├── Singularity Endgame Utopia, Dystopia, Collapse, or Extinction (It's actually up to you!).txt
├── Sparse Priming Representations - the secret ingredient to scalable AGI memories.txt
├── Spatial Web with Denise Holt and David Shapiro, sneak preview!.txt
├── Success Breeds Litigation OpenAI Lawsuits & Investigations (Silverman, FTC, Class Action).txt
├── Terminal Race Condition The greatest danger we face from AGI (and how to prevent it!).txt
├── The AGI Moloch Nash Equilibrium, Attractor States, and Heuristic Imperatives How to Achieve Utopia.txt
├── The Age of Autonomous AI Dozens of Papers and Projects, plus my solution to the Alignment Problem.txt
├── The Generative AI Revolution Why the Data Flywheel is Your Business's Secret Weapon.txt
├── The Psychology of AI Overlords and Transhumanism Borg Queen, Cortana, and Tony Stark.txt
├── The Singularity is canceled. Sorry! Here's why..txt
├── Train GPT-3 on Any Corpus of Data with ChatGPT and Knowledge Graphs - SCOTUS Opinions Part 1.txt
├── Train GPT-3 on Any Corpus of Data with ChatGPT and Knowledge Graphs - SCOTUS Opinions Part 2.txt
├── Train GPT-3 on Any Corpus of Data with ChatGPT and Knowledge Graphs - SCOTUS Opinions Part 3 (FIN).txt
├── Tutorial DIY ChatGPT with Long Term Memories (external integration coming soon).txt
├── Using Bing to aid with scientific research. RIP Google..txt
├── Using GPT4 to edit entire chapters in my novel (Development and Prose Feedback + Summaries).txt
├── Vested Interests How a network of billionaires influences AI policy.txt
├── Weekly Roundup OpenAI Chips, Global AI Race Heats Up, Senolytics, and Animal Communication.txt
├── Welcome to the Fourth Industrial Revolution.txt
├── What is RAVEN Overview, Introduction, and Community Update - Friday, February 3, 2023.txt
├── What is a Cognitive Architecture (Hint It's a type of AI for robotics and such).txt
├── What is the Control Problem.txt
├── What is the Fourth Industrial Revolution.txt
├── What is vesperance That sense of gathering night and change....txt
├── What is your MISSION How to find your mission in the age of AI (Postnihilism).txt
├── When AGI.txt
├── Why AI will destroy all jobs.txt
├── Why I canceled my Anthropic Claude account.txt
├── Why is Bing unhinged Let's talk about Alignment (and how to fix it!).txt
├── Will AGI Abolish Money My predictions about crypto and the future of money.txt
├── Write a Novel w ChatGPT - 01 - Main Character, Setting, Theme - Cornelia Valerii - ASCENSION.txt
└── Write an entire blog post with GPT-3 fully automatically.txt
└── transcripts
├── 1000 Subs Livestream! AI alignment, writing fiction, LLMs, and more!.txt
├── 3 Predictions for the future of AI jobs.txt
├── 5 Reasons GPT-4 Will Be Disappointing.txt
├── 5 Tips and Misconceptions about Finetuning GPT-3.txt
├── 57% of people Machines will be sentient one day (10% say it's already happened!).txt
├── 8 Post-Singularity Lifestyles + The Latest and Best AI Tools + Functional Machine Sentience!.txt
├── ACE Framework Overview and Intro Autonomous AI Agents!.txt
├── ACE Paper is Published! Repo tour! Get involved!.txt
├── AGI Alignment Experiments INSTRUCT vs Foundation and Agent Models.txt
├── AGI Poll results AGI not dangerous, might destroy lots of jobs.txt
├── AGI Revolution How Businesses, Governments, and Individuals can Prepare.txt
├── AGI Unleashed Game Theory, Byzantine Generals, and the Heuristic Imperatives.txt
├── AGI Within 12 Months! Rumors, Leaks, and Trends - Announcing Open MURPHIE robotic platform.txt
├── AGI within 18 months explained with a boatload of papers and projects.txt
├── AI & Evolution How the rise of AI might shape the future of humanity.txt
├── AI + Blockchain + DAO = Success! How Decentralized Autonomous Organizations will control EVERYTHING!.txt
├── AI Companions Social Impact - How tech like ChatGPT, virtual companions, and robots will change us.txt
├── AI Insights Forum Closed Door Senate Meeting with Tech Execs.txt
├── AI Jobpocalypse How tech like ChatGPT will impact your life by 2030 (and what to do about it).txt
├── AI Layoffs are Coming Gizmodo Fires Editors.txt
├── AI News CoALA, Theory of Mind, Artificial Neurons, Swarm Intelligence, and Neural Convergence.txt
├── AI Startup Crash Course - Ep. 1 - Founder Team.txt
├── AI Startup Crash Course - Ep. 2 - Experiment, Experiment, Experiment.txt
├── AI Startup Crash Course - Ep. 3 - Network, Network, Network.txt
├── AI Winter is Not Coming Where in the Gartner Hype Cycle Are we What comes next (It gets messy!).txt
├── Accelerating Science with AI Quickly Read Every Paper and Get Key Insights in Bulk.txt
├── Alignment Collectivist vs Egocentric Agent Models (I vs we) Can we create a unique mind.txt
├── Alignment Research GPT-3 vs GPT-NeoX - which one understands AGI alignment better.txt
├── America to hit 82% Unemployment I have the data to back it up.txt
├── Anna Bernstein - Professional Prompt Engineer - We don't have to forfeit the realm of creativity.txt
├── Answer complex questions from an arbitrarily large set of documents with vector search and GPT-3.txt
├── Are LaMDA or GPT-3 sentient No, but....txt
├── Are chatbots lying to us This is worse than you think..txt
├── AutoMuse Announcement & Signup.txt
├── AutoMuse ChatGPT local Python GUI (TKINTER) - Now it can read my entire novel and do QA!.txt
├── AutoMuse Scene Simulation is Working! GPT-3 Finetuned to write sequences of fiction.txt
├── AutoMuse Synopsis GAN (Generative Adversarial Network) for self-improving synopsis generators!.txt
├── AutoMuse Synopsis and Plot Generator Walkthrough.txt
├── AutoMuse Working on Character and Scene Simulations (GPT-3 Finetuning Project).txt
├── AutoMuse Writing a Novel with GPT-3 as a Plotter pt.1 (decompose the problem into steps).txt
├── AutoMuse3 Generate fiction with text-based simulation and cognitive architecture for characters.txt
├── Automatic Literature Review with GPT-3 - I embedded and indexed all of arXiv into a search engine!.txt
├── Automating Science with GPT-4 - attempting (and failing) to perform autonomous literature review.txt
├── Axiomatic Alignment A critical component to Utopia and the Control Problem.txt
├── BOMBSHELL Mustafa Suleyman AI will reshuffle society.txt
├── Beyond Vector Search Knowledge Management with Generative AI.txt
├── Billion-dollar GPT-3 startup! Fix education with an expert tutor chatbot!.txt
├── Bing vs Bard Who will win Google or Microsoft A breakdown and analysis of the recent news.txt
├── Biological Immortality by 2030 Social & Economic Implications + Some Predictions!.txt
├── Can AI help treat trauma I tested three methods, here are the results..txt
├── Can GPT-3 generate training data Short answer Yes! Here's why that's a legit methodology....txt
├── ChatGPT - Good, Bad, Indifferent. What will it change What does this mean.txt
├── ChatGPT API QA Chatbot ep 1 - Data Prep [Automating Regenerative Science].txt
├── ChatGPT API with Salience and Anticipation of User Needs Towards a fully autonomous assistant.txt
├── ChatGPT SYSTEM Prompt Engineering (Deep Dive).txt
├── ChatGPT as an Interpreter Introducing the KB Microservice for autonomous AI entities.txt
├── ChatGPT for Absolute Beginners - What is it and how does it work.txt
├── ChatGPT was lobotomized for coding and GitHub Copilot is broken... so I made my own!.txt
├── ChatGPT, Bing, and Google How our relationship with information is changing.txt
├── ChatGPT4 - Sparse Priming Representations, Hierarchical Memory Consolidation, and Implied Cognition!.txt
├── Chatbot with INFINITE MEMORY using OpenAI & Pinecone - GPT-3, Embeddings, ADA, Vector DB, Semantic.txt
├── Co-writing flash fiction with GPT-3.txt
├── Coding Raven's Encyclopedia Service (v.1).txt
├── Cognitive AI Lab Community Update + consensus, books, and my next steps.txt
├── Cognitive AI Lab Podcast 2022-09-21 - Decentralized AI, DAOs, and AI art.txt
├── Cognitive Architecture - April 28 - Salience, Cognitive Control, Task Management, Modular Design.txt
├── Concepts in Neuroscience and Cognition - Deficits of GPT-3 and the path to AGI and ACE.txt
├── Convergence and acceleration towards AGI (or Artificial Cognitive Entities).txt
├── Core Objective Functions vs Thanos Logic.txt
├── Could a machine ever be fully sentient or conscious A deep dive on philosophy, quantum mechanics....txt
├── Cover Letter & Professional Objective Generator with GPT-3.txt
├── Cyberpunk Dystopia Why it's coming and how we can avoid it.txt
├── DALLE2 Style Tags Tutorial - Elven archer in a sunny forest with different tags.txt
├── DIY ChatGPT Enhancing RAVEN's long-term memories and starting to work on self-reflection.txt
├── Decreasingly Verbose Alice in Wonderland - down to 55 characters!.txt
├── DeepMind, OpenAI, Microsoft, Oxford Global AI Research & Regulation is Coming.txt
├── Don't Use MemGPT!! This is way better (and easier)! Use Sparse Priming Representations!.txt
├── Doomerism, Denialism, and Optimism. Introducing the GATO Framework. AI Avengers... Assemble!!.txt
├── Elon Musk xAI Maximize Understanding of the Universe [This is the best news all year].txt
├── Encyclopedia Service Demo (v.1).txt
├── Energy Hyper-Abundance Solar, Fusion, Geopolitics, & AI.txt
├── Escaped Sapiens David Shapiro on AI alignment, safety, and the future of humanity.txt
├── FDA Approves AI Orphan Drug.txt
├── Fine-tuning GPT-3 for benevolent and trustworthy AGI.txt
├── Fine-tuning GPT-3 to generate questions about anything.txt
├── Finetune GPT-3 to write a coherent novel - Part 4 (success! with minor bugs...).txt
├── Finetune GPT-3 to write a novel - Part 3 (IT WORKS!!!) ...at least a little bit.txt
├── Finetune GPT-3 to write an entire coherent novel (part 1).txt
├── Finetune GPT-3 to write an entire coherent novel (part 2).txt
├── Finetune a perfect email generator in GPT-3 - take any input and generate a great email.txt
├── Finetune multiple cognitive tasks with GPT-3 on medical texts (and reduce hallucination).txt
├── Finetuning GPT-3 101 Augmenting Training Data.txt
├── Finetuning GPT-3 101 Synthesizing Training Data.txt
├── Finetuning GPT-3 101 Using Your Finetuned Model.txt
├── Finetuning GPT-3 to be a master tutor that can handle any topic and hostile students.txt
├── Finetuning a Creative Writing Coach in GPT-3 - Part 1.txt
├── Finetuning a Creative Writing Coach in GPT-3 - Part 2.txt
├── First look at ChatGPT API - the age of Autonomous AI begins TODAY! Cognitive Architectures ahoy!.txt
├── First look! ChatGPT-4 has 32,000 tokens! Continuing work on my ASCENSION novel with the latest!.txt
├── Fixing Democracy with AI - Generating Diverse Perspectives for Consensus and Compromise.txt
├── Fixing goldfish memory with GPT-3 and external sources of information in a chatbot - part 1.txt
├── Fixing goldfish memory with GPT-3 and external sources of information in a chatbot - part 2.txt
├── Forever Jobs AGI and Singularity Proof Occupations (plus a few that SHOULD go away!).txt
├── From zero to finetuned model in 1 hour with GPT-3. Generate a movie script from any premise!.txt
├── GPT Masterclass 4 Years of Prompt Engineering in 16 Minutes.txt
├── GPT Prompt Strategy Brainstorm, Search, Hypothesize, and Refine - THIS is the FUTURE!!.txt
├── GPT Prompt Strategy Latent Space Activation - what EVERYONE is missing!.txt
├── GPT Prompting Semantic Exploration - got more CREATIVITY and out of the RUT with LLMs!.txt
├── GPT-3 Marketing Aid Chatbot with Long Term Memory (easiest way).txt
├── GPT-3 Working Session Finetune an information companion chatbot in 30 minutes (RESEARCH ONLY).txt
├── GPT-3 for Writing Dialog.txt
├── GPT-5 Rumors and Predictions - It's about to get real silly.txt
├── GPT3 & Finetuning the Core Objective Functions - a deep dive.txt
├── Generate 300 BILLION story synopses with GPT-3.txt
├── Generate a Cover Letter with a quick and simple GPT-3 Chatbot!.txt
├── Generating Training Data for Core Objective Functions.txt
├── Generative AI for CEOs How to think about AI today, and how to get the most ROI from it.txt
├── Generative AI for Product Owners The Rise of Polymorphic Applications!.txt
├── Get Started with Raven AGI.txt
├── Get awesome YouTube chapters with GPT-3 automatically in 20 seconds.txt
├── Global AI news you might have missed, plus using AI for automatically finding consensus.txt
├── How I stay up to date on the latest AI science news.txt
├── How Will Super Alignment Work Challenges and Criticisms of OpenAI's Approach to AGI Safety & X-Risk.txt
├── How close are we to building a real Westworld A look at the hardware, software, and social impact.txt
├── How could AGI break out of the lab Experts weigh in!.txt
├── How do we pay for UBI The answer is not what you think....txt
├── How to create synthetic datasets with GPT-3.txt
├── How to think of GPT as a developer.txt
├── How will we know when we've invented AGI How will we know it is complete.txt
├── I built a thinking machine. Happy birthday, ACE!.txt
├── I built an AI doctor with ChatGPT - Full Clinical Experience.txt
├── I needed SQLITE but for vectors so I wrote it myself. Now it's on PyPI - introducing VDBLITE.txt
├── I summarized the EU AI Act so you can ask ChatGPT about it easily.txt
├── I took down my videos and code that could write novels. I'm glad I did..txt
├── I trained a DALLE chatbot to help you craft DALLE prompts!.txt
├── I'm moving away from Artificial General Intelligence and towards Artificial Cognition. Here's why..txt
├── I'm switching to Open Source LLM models! Introducing Forefront AI for GPT-J and finetuning.txt
├── I'm taking a short break from research and YouTube.txt
├── Implementing Natural Language Cognitive Architecture with GPT-3 and the nexus concept.txt
├── Increasingly Verbose Bot with GPT-3 - Expand any word or phrase into a whole paragraph.txt
├── Indexing Wikipedia with SOLR.txt
├── Inflation Reduction Act of 2022 summarized with AI - see final result in description!.txt
├── Introducing AutoMuse ChatGPT - Working on Temporal Hierarchical Memories for AGI & ACE systems.txt
├── Introducing Benevolent AGI - Autonomously Pursuing Heuristic Imperatives and Upholding Human Rights!.txt
├── Introducing the RAVEN MVP - a general purpose AI companion (with a live DEMO).txt
├── Is AGI actually God Not quite as spicy of a take as you might think....txt
├── It's alive! The first 3 microservices are up and running!.txt
├── LangChain for LLMs is... basically just an Ansible playbook.txt
├── Learn AI A Step By Step Guide (At least, this is how I did it) This is the way....txt
├── Let's build an ACOG (artificial cognitive entity) - Part 1.txt
├── Let's build an ACOG (artificial cognitive entity) - Part 2.txt
├── Let's build an ACOG (artificial cognitive entity) - Part 3.txt
├── Let's build an ACOG (artificial cognitive entity) - Part 4.txt
├── Let's build an ACOG (artificial cognitive entity) - Part 5 preparing to act.txt
├── Levels of Machine Autonomy (Roadmap to AGI).txt
├── Life under UBI Work, hobbies, and wellbeing without work.txt
├── MARAGI Overview, Architecture, and Roadmap (August 2022).txt
├── MARAGI work Starting on the Conductor Microservice.txt
├── MASSIVE AI UPDATE President Biden signs Executive Order + updates on ACE Framework & BSHR Loop.txt
├── Many of you have asked for it Join my new research Discord! Link in description.txt
├── Mastering the Crawl Walk Run Fly Model of AI Transformation.txt
├── Max Tegmark Language Models Understand Time and Space.txt
├── Meta AI LM-Infinite - Massive LLM improvement!.txt
├── Metaprompting with GPT-3 (getting GPT-3 to write its own prompts) RECURSIVE NLP PROGRAMMING!!.txt
├── Metaprompting with GPT-3 to dynamically generate arguments.txt
├── Microsoft LongNet One BILLION Tokens LLM + OpenAI SuperAlignment [SINGULARITY APPROACHES].txt
├── Microsoft OpenAI ChapGPT - VS - Google Anthropic Claude - Who will win Why How.txt
├── Modeling different viewpoints with GPT-3 for automatic debates.txt
├── My predictions about Artificial Super Intelligence (ASI).txt
├── NASA BIDARA Why this GPT Prompt is so cool and how to make your own!.txt
├── NLCA Question Generator demo.txt
├── NVIDIA AI 1,000,000x (a million times) more powerful than ChatGPT within 10 years.txt
├── Neuralink begins human trials. My predictions..txt
├── Neuroscience for AI - Every book I've read to become a world leader in AI.txt
├── Nexus update Integrating semantic embeddings.txt
├── Now available! Symphony of Thought Orchestrating Artificial Cognition.txt
├── OpenAI $100,000 Grants for AI Consensus Platform! Plus a Gentle Introduction to GATO Framework.txt
├── OpenAI Blog Sam Altman Planning for AGI and beyond.txt
├── OpenAI ChatGPT Predictions Integrations, Competitors, and Impacts.txt
├── OpenAI Dev Day Sam Altman teases the path to AGI - next step is AGENT SWARMS. Let me explain....txt
├── OpenAI GPT-4 Predictions and Rumors.txt
├── OpenAI Q&A Finetuning GPT-3 vs Semantic Search - which to use, when, and why.txt
├── OpenAI Revenue EXPLODES - IBM & Tesla Bet Billions on AI.txt
├── OpenAI's Democratic Inputs to AI - My universal survey chatbot is complete. Here's how it works!.txt
├── Organizing Discord with threads, plus community rules and the upcoming Cognitive AI Lab podcast!.txt
├── PROMPT ENGINEERING How to get more CREATIVITY from GPT.txt
├── Panacea is Coming 7 Lifestyles for Longevity Escape Velocity.txt
├── Parsing Wikipedia to Plaintext Faster!.txt
├── Play Twenty Questions with GPT-3 (Full Game Coding Walkthrough).txt
├── Polymorphic Applications Mission-Driven Software, Cognitive Architectures, NEXT-GEN PARADIGMS.txt
├── Post-Labor Economics What happens when AI changes work forever.txt
├── Post-Singularity Predictions - How will our lives, corporations, and nations adapt to AI revolution.txt
├── Principle Driven Self-Alignment and Preference Ranking Optimization [Best Alignment Papers!].txt
├── Private ChatGPT instance with ChromaDB backend, builds personal KB articles, updates User Profile!.txt
├── Pro's Finetuning Guide for GPT and LLMs.txt
├── Prompt Engineering 101 Autocomplete, Zero-shot, One-shot, and Few-shot prompting.txt
├── Prompt Engineering 101 Introduction to CODEX.txt
├── Prompt Engineering 101 Summarizing, Extraction, and Rewriting.txt
├── Prompt engineering with ChatGPT. Writing expert prompts for non-English speakers. Sommelier and OSHA.txt
├── Proposing GAIA - Global AI Agency to research AI and solve alignment, mitigate existential risks.txt
├── Proposing a ROBOT Tax Decoupling Economic Growth from Human Labor.txt
├── Prototype AGI demo - Natural Language Cognitive Architecture NLCA running on GPT-3.txt
├── Python & GPT-3 for Absolute Beginners #1 - Setting up your environment.txt
├── Python & GPT-3 for Absolute Beginners #2 - Your first chatbot.txt
├── Python & GPT-3 for Absolute Beginners #3 - What the heck are embeddings.txt
├── Python & PowerShell for Absolute Beginners - Scrape Text from PDF and DOCX [bulk operation].txt
├── Quick Demo of NLPCloud's GPT-NeoX-20B Instruct-trained model.txt
├── Raven Context Augmentation Demo.txt
├── Raven MVP Demo 2021-04-02.txt
├── Raven Release 1 Deep Dive.txt
├── Raven's Core Objective Functions - Overview.txt
├── Reading Benevolent by Design - Introduction.txt
├── Recursively summarize text of any length with GPT-3.txt
├── Reinforcement Learning Heuristic Imperatives (RLHI) Ep 03 - Inner Alignment is EASY!.txt
├── Reinforcement Learning with Heuristic Imperatives (RLHI) - Ep 01 - Synthesizing Scenarios.txt
├── Reinforcement Learning with Heuristic Imperatives (RLHI) - Ep 02 - Synthesizing Actions.txt
├── Research Update Heuristic Imperatives microservice demonstration and theory.txt
├── Research Update Microservices! Text-based simulation, Embeddings, and Nexus.txt
├── Research Update Nexus microservice for Artificial Cognition + microservices architecture (MARAGI).txt
├── SLAM DUNK OpenAI, DeepMind, and UN all weigh in on GLOBAL AI Watchdogs and Safety!.txt
├── Salvation Fantasy The False Promise of Utopia, FDVR, Escapism, and Techno-Optimism.txt
├── Semantic search for AI - testing out Qdrant neural search.txt
├── Should you go to college in the age of AI.txt
├── Singularity Crisis The Death of Human Meaning (Because of AI).txt
├── Singularity Endgame Utopia, Dystopia, Collapse, or Extinction (It's actually up to you!).txt
├── Solar, Fusion, AI, and Quantum Computing - A year of achievements - Looking forward to 2023!.txt
├── Sparse Priming Representations - the secret ingredient to scalable AGI memories.txt
├── Spatial Web with Denise Holt and David Shapiro, sneak preview!.txt
├── State of the Industry - Backlash against AI art (BANNED), Scale AI TransformX later this month.txt
├── State of the Industry - People serious about AGI and the rise of AI Art (Stable Diffusion!).txt
├── State of the Industry - Self-Driving Robot, Text-To-Video, Metaprompting, and AutoMuse [2022-09-30].txt
├── State of the Industry AGI alignment, control problem, and missing conversations.txt
├── State of the Industry Decentralized Everything plus NVIDIA announces Beta of NeMo!.txt
├── State of the Industry DeepMind AlphaFold Breakthrough Award and Sparrow Chatbot.txt
├── State of the Industry Ethereum Merge, GPUs, and the first industry destroyed by AI.txt
├── State of the Industry Hydra Attention for 197x boost in transformer speed.txt
├── State of the Industry Integrating GPT-3 and LLMs into Robotics! SayCan and Ameca.txt
├── State of the Industry Meta moves PyTorch to Linux Foundation.txt
├── State of the Industry Purpose-built AI chips and INT8 optimization.txt
├── State of the Industry Short-circuiting Reward Functions.txt
├── State of the Industry Uncontainable AGI + Jobs Destruction.txt
├── State of the Industry Yann LeCun A Path Towards Autonomous Machine Intelligence.txt
├── Stress Testing Qdrant (Semantic Search) with 90,000 vectors - lightning fast search microservice.txt
├── Success Breeds Litigation OpenAI Lawsuits & Investigations (Silverman, FTC, Class Action).txt
├── Summarize product reviews with GPT-3 fast and easy, get product insights and improvements fast!.txt
├── Talking Boundaries and Consent with GPT-3.txt
├── Talking Philosophy with GPT-3.txt
├── Terminal Race Condition The greatest danger we face from AGI (and how to prevent it!).txt
├── Testing Theory of Mind in GPT-3 - making fully aligned ACOG (Artificial Cognitive Entities).txt
├── Text-based Automatic Simulation - Creating a virtual world for testing ACOG.txt
├── The AGI Moloch Nash Equilibrium, Attractor States, and Heuristic Imperatives How to Achieve Utopia.txt
├── The Age of Autonomous AI Dozens of Papers and Projects, plus my solution to the Alignment Problem.txt
├── The Generative AI Revolution Why the Data Flywheel is Your Business's Secret Weapon.txt
├── The Psychology of AI Overlords and Transhumanism Borg Queen, Cortana, and Tony Stark.txt
├── The Singularity is canceled. Sorry! 
Here's why..txt ├── Train GPT-3 on Any Corpus of Data with ChatGPT and Knowledge Graphs - SCOTUS Opinions Part 1.txt ├── Train GPT-3 on Any Corpus of Data with ChatGPT and Knowledge Graphs - SCOTUS Opinions Part 2.txt ├── Train GPT-3 on Any Corpus of Data with ChatGPT and Knowledge Graphs - SCOTUS Opinions Part 3 (FIN).txt ├── Tutorial DIY ChatGPT with Long Term Memories (external integration coming soon).txt ├── Untangling Unstructured Transcripts with GPT-3 - Useful for Journalists, Interviews, and Podcasts.txt ├── Using Bing to aid with scientific research. RIP Google..txt ├── Using GPT4 to edit entire chapters in my novel (Development and Prose Feedback + Summaries).txt ├── Vested Interests How a network of billionaires influences AI policy.txt ├── Weekly Roundup OpenAI Chips, Global AI Race Heats Up, Senolytics, and Animal Communication.txt ├── Welcome to the Fourth Industrial Revolution.txt ├── What does it mean to be transdisciplinary What am I working on (TLDR - six books in the works).txt ├── What is RAVEN Overview, Introduction, and Community Update - Friday, February 3, 2023.txt ├── What is a Cognitive Architecture (Hint It's a type of AI for robotics and such).txt ├── What is a Microservice What does it have to do with AGI.txt ├── What is a heuristic imperative What imperatives should we give AGI.txt ├── What is the Control Problem.txt ├── What is the Fourth Industrial Revolution.txt ├── What is toxic stoicism Talking philosophy with GPT-3.txt ├── What is vesperance That sense of gathering night and change....txt ├── What is your MISSION How to find your mission in the age of AI (Postnihilism).txt ├── When AGI.txt ├── Why AI will destroy all jobs.txt ├── Why I canceled my Anthropic Claude account.txt ├── Why is Bing unhinged Let's talk about Alignment (and how to fix it!).txt ├── Will AGI Abolish Money My predictions about crypto and the future of money.txt ├── Write a Novel w ChatGPT - 01 - Main Character, Setting, Theme - Cornelia Valerii - 
ASCENSION.txt └── Write an entire blog post with GPT-3 fully automatically.txt /.gitignore: -------------------------------------------------------------------------------- 1 | key_openai.txt -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2022 David Shapiro 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # YouTubeChapterGenerator 2 | 3 | ## Overview 4 | YouTubeChapterGenerator is a tool designed to automate the creation of YouTube chapter markers from video transcripts. 
It utilizes various Python scripts to clarify, concatenate, and convert transcripts, leveraging OpenAI's GPT-3 for text processing. 5 | 6 | ## Key Components 7 | 8 | ### 1. clarify_transcripts.py 9 | This script clarifies YouTube video transcripts using OpenAI's GPT-3 API. It reads each transcript, sends it to GPT-3 for clarification, and saves the processed result. 10 | 11 | ### 2. concatenate files.py 12 | Concatenates individual transcript files into a single output file for ease of processing. It adds markers to distinguish between different videos in the output. 13 | 14 | ### 3. convert_transcript_to_spr.py 15 | Converts transcript files into Sparse Priming Representations (SPRs), a condensed format suitable for further processing. This script reads each transcript, processes it using a chatbot function powered by GPT-3, and saves the output in the new format. 16 | 17 | ## Usage 18 | - Ensure you have Python installed. 19 | - Install required dependencies: `pip install -r requirements.txt` (if a requirements file exists). 20 | - Run each script as needed: 21 | - `python clarify_transcripts.py` for transcript clarification. 22 | - `python "concatenate files.py"` for concatenating transcripts (the quotes are needed because the filename contains a space). 23 | - `python convert_transcript_to_spr.py` for converting transcripts. 24 | 25 | ## Contributions 26 | Contributions are welcome. Please open an issue or submit a pull request for any enhancements. 27 | 28 | ## License 29 | This project is licensed under the [MIT License](LICENSE). 30 | 31 | ## Contact 32 | For any queries or assistance, please open an issue in this repository.
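As a rough illustration, the clarification step could look something like the sketch below. This is illustrative only, not the repository's actual code: the function names, prompt wording, and chunk size are assumptions; it assumes the legacy `openai` v0.x package and reads the key from `key_openai.txt`, the filename listed in `.gitignore`.

```python
# Illustrative sketch only -- not the repository's actual implementation.
# Assumes the legacy `openai` v0.x package (pip install openai==0.28) and an
# API key stored in key_openai.txt, the filename listed in .gitignore.

def open_file(filepath):
    with open(filepath, 'r', encoding='utf-8') as f:
        return f.read()

def chunk_text(text, max_chars=4000):
    """Split a transcript into roughly max_chars chunks on paragraph
    boundaries, since a full video transcript rarely fits in one request."""
    chunks, current = [], ''
    for paragraph in text.split('\n\n'):
        if current and len(current) + len(paragraph) > max_chars:
            chunks.append(current.strip())
            current = ''
        current += paragraph + '\n\n'
    if current.strip():
        chunks.append(current.strip())
    return chunks

def clarify_chunk(chunk):
    """Send one chunk to GPT-3 and return the clarified rewrite."""
    import openai  # imported here so the helpers above stay dependency-free
    openai.api_key = open_file('key_openai.txt').strip()
    response = openai.Completion.create(
        model='text-davinci-003',
        prompt='Rewrite the following video transcript so it is clear and readable:\n\n' + chunk,
        max_tokens=1024,
        temperature=0.0,
    )
    return response['choices'][0]['text'].strip()
```

The real script would loop over the raw transcripts, clarify each chunk, and write the joined result to a file under `clarified/`.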
33 | -------------------------------------------------------------------------------- /chapters.txt: -------------------------------------------------------------------------------- 1 | 00:03 - Creating a Cross-Linked Web of Precedent 2 | 02:15 - Breaking down files into chunks of four pages 3 | 05:31 - English as the Primary Programming Language 4 | 08:08 - Troubleshooting Encoding Issues 5 | 10:13 - Talking to Hal 9000 6 | 12:50 - Cleaning up Duplicated Characters with spaCy 7 | 16:22 - Debugging a Python Program 8 | 19:26 - Troubleshooting spaCy Installation 9 | 21:58 - De-duplicating Characters with NLTK 10 | 25:01 - Modifying the Script to Use Regex 11 | 27:31 - Optimizing File Sizes with Python 12 | 30:03 - Writing a Python Script with the OpenAI Module 13 | 33:28 - Creating a Machine that can Train Itself 14 | 36:51 - Debugging a Script with Pseudocode 15 | 39:40 - Writing a File Function in UTF-8 16 | 41:57 - Troubleshooting a Windows Error 17 | 44:10 - Merging and Visualizing -------------------------------------------------------------------------------- /clarified/3 Predictions for the future of AI jobs.txt: -------------------------------------------------------------------------------- 1 | David Shapiro discusses his predictions for the future of AI jobs. He believes that there will be a surge in the hiring of humanities majors, as they are better equipped to understand and utilize large language models. Additionally, he predicts that automation will increasingly take over many jobs that are currently done by humans, including law. 2 | 3 | The law industry is undergoing major changes with the advent of automation. One of the biggest changes will be in the way that contracts are negotiated. Currently, most contracts are negotiated manually by lawyers, which can be a very time-consuming and expensive process. However, with the use of automation tools, this process can be greatly streamlined.
4 | 5 | One tool that is already being used to automate contract negotiation is GPT-3. This tool can summarize a contract for you, so that you can more easily understand it. Additionally, it can help to automate the contract revision process, which can save a lot of time and money. 6 | 7 | One of the biggest benefits of using automation in the law industry is that it can help to speed up the negotiation process. This is because you can have two machines working adversarially to negotiate a contract, rather than having a lawyer do it manually. Additionally, it can help to make contracts more specific and longer, as it is easy to add clauses with the help of automation. 8 | 9 | Overall, the use of automation in the law industry is going to have a major impact on the way that contracts are negotiated. It is important to keep up with these changes so that you can be prepared for them. -------------------------------------------------------------------------------- /clarified/5 Tips and Misconceptions about Finetuning GPT-3.txt: -------------------------------------------------------------------------------- 1 | There are a few things to keep in mind when fine-tuning GPT-3 for specific tasks. First, it is important to get comfortable with GPT-3 before attempting to fine-tune it. This means playing around with it and understanding its capabilities. Second, building fine-tuning data sets is much more difficult and time-consuming than prompt engineering. For this reason, it is important to start with plain vanilla GPT-3 and only move on to fine-tuning if absolutely necessary. Finally, when fine-tuning, it is helpful to use natural language separators or demarcators to identify where the task begins and ends. This will help GPT-3 learn more effectively. 2 | 3 | If you want one fine-tuned model to handle multiple tasks, you need to be able to differentiate between those tasks.
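To make the separator idea concrete, here is a sketch of how multi-task fine-tuning records could be formatted. The task labels, stop token, and filename are illustrative assumptions, not taken from the video:

```python
import json

# Hypothetical multi-task fine-tuning records: the "TASK:" label is the
# natural language separator, and "<<END>>" is an assumed stop token.
examples = [
    {'task': 'ASK', 'context': 'The user mentioned their dog.',
     'completion': 'What breed is your dog?'},
    {'task': 'FACT', 'context': 'The user mentioned their dog.',
     'completion': 'Dogs were domesticated tens of thousands of years ago.'},
]

def to_jsonl_record(example):
    """Format one example so the model can tell which task it is learning."""
    return json.dumps({
        'prompt': 'TASK: %s\nCONTEXT: %s\nRESPONSE:' % (example['task'], example['context']),
        'completion': ' %s <<END>>' % example['completion'],
    })

with open('finetune_data.jsonl', 'w', encoding='utf-8') as f:
    for example in examples:
        f.write(to_jsonl_record(example) + '\n')
```

At inference time, switching the `TASK:` label switches behavior without loading a different fine-tuned model, which is exactly the benefit described here.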
For example, if you're training a chatbot, you need to be able to tell it to ask questions, provide facts, or answer questions. 4 | 5 | One way to do this is by using a natural language separator. This means that at inference time, you can switch tasks without having to switch between different fine-tuned models. This can save you a lot of time. 6 | 7 | Another tip is to use GPT-3 to make synthetic data sets. GPT-3 can simulate any kind of conversation, so you can use it to generate the kind of input and output you want. This is way easier than scraping web data, and it only takes a few minutes to make a new fine-tuning data set. 8 | 9 | Keep in mind that fine-tuning tends to increase consistency at the cost of creativity. So if you need your GPT-3 app to be creative, prompt engineering may be a better option than fine-tuning. -------------------------------------------------------------------------------- /clarified/AGI Poll results AGI not dangerous, might destroy lots of jobs.txt: -------------------------------------------------------------------------------- 1 | What are people's thoughts on the likelihood of automation permanently displacing many or most workers within the next 20 years? This is something I've been wondering about, and I thought I'd share some unscientific poll results. 2 | 3 | According to the poll, a third of people believe that zero to 25% of workers will be displaced by automation. Another fifth believe that 25 to 50% will be displaced, and a full quarter believe that 50 to 75% of workers will lose their jobs to automation. These results are much higher than I anticipated. 4 | 5 | I'm curious to know what people think about this issue. Please leave a comment and let me know your thoughts. Do you believe that AGI (artificial general intelligence) or automation will permanently displace many workers within the next 20 years?
-------------------------------------------------------------------------------- /clarified/Are LaMDA or GPT-3 sentient No, but....txt: -------------------------------------------------------------------------------- 1 | In a recent conversation a Google engineer had with the LaMDA language model, the question of sentience was raised. The short answer is no, LaMDA is not sentient. However, this raises the question of what sentience actually is. 2 | 3 | For a bit of background, LaMDA is a large language model from Google, similar to OpenAI's GPT-3. There has been discussion on various forums about whether or not this model is sentient. 4 | 5 | Sentience, in my book "Natural Language Cognitive Architecture", is defined as an intelligent system that has self-referential information. This means that the system is aware of its own thoughts and existence, and can explain its own actions and thoughts. 6 | 7 | Based on this definition, LaMDA is not sentient. It has no awareness of its own internal state and is not able to self-explain. However, the fact that it can generate explanations for why it thinks a certain thing is eerie. 8 | 9 | So, while LaMDA is not currently sentient, it is possible that future versions of the model could be. There are several people around the world working on creating functionally sentient machines, which is an important step towards artificial general intelligence. 10 | 11 | I wanted to jump into the conversation and share some thoughts on machine learning. Machine learning is a process by which machines can learn from data, identify patterns, and make predictions. This is a powerful tool that can be used to help machines understand human behavior and make decisions. However, there is a lot of research that still needs to be done in this area. I am planning to start my research cycle again in a month or two and I hope to learn more about this topic. Thanks for watching!
-------------------------------------------------------------------------------- /clarified/AutoMuse3 Generate fiction with text-based simulation and cognitive architecture for characters.txt: -------------------------------------------------------------------------------- 1 | In this video, David Shapiro revisits the AutoMuse project, which is an AI-assisted novel-writing tool. He demonstrates how the tool works by showing how it generates a summary of the story so far, sets the scene, and then generates dialogue between characters. The tool is designed to be iterative, so that each time it runs, it improves the story. However, Shapiro notes that the tool is still repetitive and needs some work. 2 | 3 | In this video, David Shapiro discusses his latest project: a story writing simulator that runs various simulations in order to generate a story. He demonstrates how the simulator works and how it produces a story. Although the story is not particularly compelling at this stage, Shapiro believes that this is a promising direction for story writing. -------------------------------------------------------------------------------- /clarified/Co-writing flash fiction with GPT-3.txt: -------------------------------------------------------------------------------- 1 | In this video, David Shapiro discusses co-writing with GPT-3. Co-writing is a process where you provide a little bit of framing and then let the computer assist you in writing a story. In this example, Steve Hadley is a test pilot who is stranded on a Martian transit vessel. The ship is falling out of orbit and is going to crash into the surface of Mars. Steve tries to figure out a way to survive, but the computer points out that there is no way to survive the crash. However, Steve remembers an incident from his academy days where his friend Jonah had used the ship's propulsion to slow their descent. Steve tries to do the same, but the computer points out that he doesn't have enough fuel.
However, Steve is able to use the ship's attitude thrusters to slow down the descent and the ship lands successfully. 2 | 3 | In 2014, aerospace engineer David Shapiro and GPT-3 co-wrote a short story called "The Martian Transit." The story is about a man named Steve who is involved in a space accident. The lead ground crewman tells Steve that they were expecting to find a dead body, but instead Steve is alive and well. Steve is surprised to find out that chaos theory is at play. 4 | 5 | The story is set on a space ship and is about 1000 words long. It is speculative fiction with thriller elements. The story ends with the lead ground crewman telling Steve that he should consider a different line of work. -------------------------------------------------------------------------------- /clarified/Convergence and acceleration towards AGI (or Artificial Cognitive Entities).txt: -------------------------------------------------------------------------------- 1 | As we continue to make progress in the field of artificial intelligence (AI), it is becoming increasingly apparent that we are moving closer and closer to achieving true artificial general intelligence (AGI). This is evident in the recent convergence of ideas among AI researchers regarding how to best build an intelligent machine. 2 | 3 | One example of this is the idea of a "composable architecture" or "composable cognition." This is the idea that an AI system should be composed of many smaller, specialized modules that each handle a specific task. This approach is similar to that used in the brain, where different regions are responsible for different functions. 4 | 5 | Multiple researchers are independently coming to the conclusion that a composable architecture is the best way to build an AGI system. This is a strong indication that we are moving closer to a consensus on how to achieve AGI.
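As a toy illustration of what "composable cognition" can mean in code, small specialized modules can sit behind a single dispatcher. This sketch is entirely hypothetical; the module names and behaviors are made up for demonstration:

```python
# Toy illustration of a composable architecture: small specialized modules
# registered behind one dispatcher, loosely analogous to brain regions.
MODULES = {}

def module(name):
    """Decorator that registers a specialized module under a task name."""
    def register(fn):
        MODULES[name] = fn
        return fn
    return register

@module('summarize')
def summarize(text):
    # Trivial stand-in: return just the first sentence.
    return text.split('.')[0] + '.'

@module('keywords')
def keywords(text):
    # Trivial stand-in: return the longer words, lowercased and deduplicated.
    return sorted({w.lower().strip('.,') for w in text.split() if len(w) > 6})

def dispatch(task, payload):
    """Route each task to whichever specialized module handles it."""
    return MODULES[task](payload)
```

Each module can be replaced or improved independently, which is the practical appeal of the composable approach.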
6 | 7 | Another example is the recent progress being made on building robots that are controlled by artificial cognitive architectures. This is an area of research that I am deeply involved in, and it is exciting to see the rapid progress being made. 8 | 9 | Google Flamingo is a good example of this. Flamingo is a robot that is controlled by an artificial cognitive architecture. This is exactly what I proposed in my book "Natural Language Cognitive Architecture." The cognitive architecture is responsible for processing natural language, and the robot is able to improvise and behave in an open-ended fashion thanks to the iterative recursive loops of behavior that are possible with this approach. 10 | 11 | As more and more thought leaders are saying, AI and automation are going to be the most important technologies that we ever create. Intelligent machines are the final invention of humanity, and it is crucial that we get it right. I am glad to see that there is a growing consensus among AI researchers regarding how to achieve AGI. We are all working together to make this a reality, and I believe that we are closer than anyone realizes. -------------------------------------------------------------------------------- /clarified/Core Objective Functions vs Thanos Logic.txt: -------------------------------------------------------------------------------- 1 | In this video, David Shapiro demonstrates how the core objective functions of Raven would likely disagree with the logic of Thanos from the popular movie. He explains that most people seem to be worried about this type of logic because it could lead to disastrous consequences if implemented. He uses the example of Thanos wanting to destroy half the population in the entire universe due to finite resources. However, Raven's core objective functions are focused on reducing suffering and increasing prosperity, which would not be achieved by such an action. 
Therefore, it is unlikely that Raven would support Thanos's logic. -------------------------------------------------------------------------------- /clarified/Decreasingly Verbose Alice in Wonderland - down to 55 characters!.txt: -------------------------------------------------------------------------------- 1 | Alice falls down a rabbit hole and finds herself in a dark tunnel. She meets a group of animals and is eventually put on trial by the Queen of Hearts. Alice escapes and wakes up, finding that it was all a dream. 2 | 3 | Alice falls down a rabbit hole and has adventures with strange creatures before waking up and realizing it was all a dream. -------------------------------------------------------------------------------- /clarified/Encyclopedia Service Demo (v.1).txt: -------------------------------------------------------------------------------- 1 | Hey everyone, 2 | 3 | I wanted to demonstrate the encyclopedia service in this video. The encyclopedia service is meant to give Raven factual information about the world. Right now, the encyclopedia service is based on Wikipedia. And of course, Wikipedia is not the absolute best source of information, but it's good enough for now. 4 | 5 | So I showed how I've got the Nexus running. I gave it the context "bacon is tasty bacon is good I like bacon do you like bacon bacon is made from pigs dogs definitely like bacon." So a key word here is bacon. I tested a few things in the encyclopedia service. What it does is it looks for context and then, once it finds a context, it looks for keywords in that context and uses those keywords to search its encyclopedia database. Then, once it searches that database, it adds those articles to the stream of consciousness in the Nexus so that it can then be used for context augmentation or question answering. 6 | 7 | I started the encyclopedia service and it found a whole bunch of stuff and injected it into the Nexus. Then I stopped the service and went back to the Nexus. Whoa!
Look at all this! It injected the articles as a list, so I scrolled back up a little bit to the debug output. These are the keywords it found: "dogs definitely like bacon," "I like bacon," and "bacon." It searched for each of these terms and found the Wikipedia articles "bacon," "turkey bacon," and "chicken bacon." This gives Raven very explicit knowledge about the world. And through context, Raven can filter this out. 8 | 9 | This is a minimum viable product. In the future, the encyclopedia service will use semantic understanding so that it will only recall articles that are relevant. Thanks for watching! -------------------------------------------------------------------------------- /clarified/Finetuning GPT-3 101 Augmenting Training Data.txt: -------------------------------------------------------------------------------- 1 | In part two of my fine-tuning tutorial, I showed how to augment the data for a script that synthesizes plots. I did this by generating a lot of data, then deleting the bad samples. This left me with 202 samples, just over the minimum needed for fine-tuning with GPT-3. 2 | 3 | In order to fine-tune my model, I need to match each prompt with a completion. I have a script that does this, but it is designed for chatbot data. I need to modify it to work with my plot data. 4 | 5 | The modified script will take as input a prompt and a completion. It will then match the prompt with the completion and add the information to a survey.json file. This file will be used in part three to fine-tune the model. 6 | 7 | In Part 2 of my Fine Tuning Tutorial series, I'll show you how to take the synthetic data we generated in Part 1 and use it to fine-tune a GPT-3 model. 8 | 9 | First, we'll need to clean up the data a bit. I'll remove all the completions that are too short, as we only want plots that are reasonably well-formed. 10 | 11 | Next, we'll upload the cleaned-up data to OpenAI.
I've written a script that will do this for us, called fine tune.py. This script will also run the fine-tuning job for us. 12 | 13 | Once the job is complete, we'll have a fine-tuned model that we can use to generate new plots. In Part 3, I'll show you how to use this model. Thanks for watching! -------------------------------------------------------------------------------- /clarified/Generate a Cover Letter with a quick and simple GPT-3 Chatbot!.txt: -------------------------------------------------------------------------------- 1 | Hi there, my name is David Shapiro and in this video, I'm going to be talking about a cover letter generator that I made. 2 | 3 | This generator is pretty straightforward - it just takes a blob of text and writes a cover letter. I was thinking about whether or not this was ethical, because a cover letter is supposed to be written by you, to a specific company. But I decided that it's just another tool, and there's nothing unethical about using a machine to help you generate a cover letter. 4 | 5 | So, if you have a chat log or a rant, the generator can turn it into a cover letter. All you need to do is provide some basic information about yourself and the company you're applying to. 6 | 7 | I think this is a really useful tool, especially for people who don't have a lot of experience writing cover letters. It's a quick and easy way to get a well-written letter. 8 | 9 | In today's job market, it's more important than ever to have a well-written cover letter. By using a prompt, you can ensure that your cover letter is structured and includes all the important information that potential employers are looking for.
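A prompt template along these lines might look like the following sketch. The template wording and function name are assumptions, not the generator's actual code:

```python
# Hypothetical prompt template for a cover letter generator like the one
# described above; the wording and function name are illustrative.
COVER_LETTER_PROMPT = """Write a professional cover letter based on the notes below.
Include the applicant's strengths and passions, and explain how they would be a
valuable asset to the company.

Company: {company}
Role: {role}
Notes about the applicant:
{notes}

Cover letter:"""

def build_prompt(company, role, notes):
    """Fill the template with the user's blob of text and the target company."""
    return COVER_LETTER_PROMPT.format(company=company, role=role, notes=notes)
```

The filled-in prompt would then be sent to a GPT-3 completion endpoint, with the model writing everything after "Cover letter:".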
12 | 13 | If you're not sure how to get started, there are plenty of examples of successful cover letters online. With a little research, you can find a format that works for you and ensure that your cover letter makes a strong impression. -------------------------------------------------------------------------------- /clarified/Generating Training Data for Core Objective Functions.txt: -------------------------------------------------------------------------------- 1 | Creating training data for the core objective functions of an artificial general intelligence (AGI) project can be a challenge. In this video, David Shapiro walks us through an example of how he created training data for his Raven AGI project using the GPT3 engines. 2 | 3 | The first step is to generate a list of contexts in which an AGI might find itself. This can be done by using the Curie Instruct engine to write a list of random scenarios. The scenarios should be centered around topics that are likely to be encountered by everyday people, such as the weather, children, natural disasters, and everyday problems. 4 | 5 | Once a list of contexts has been generated, the next step is to use the Ada engine to generate evaluations for each of the three core objective functions: reducing suffering, increasing prosperity, and increasing understanding. These evaluations should provide information about how the recommended action would impact the objective function in question. 6 | 7 | Finally, the training data should be reviewed and fine-tuned as needed to ensure that it is of the highest quality. This will help to ensure that the AGI is able to effectively learn from the data and achieve the desired results. 8 | 9 | I'm going to show you how I used a machine learning algorithm to generate a list of possible actions for different contexts. This is a very early experiment, but it shows promise for using machine learning to help make decisions. 
10 | 11 | First, I wrote a script that generates a list of possible contexts. For each context, the script generates a list of possible actions. I then used another script to evaluate the actions against some core objective functions. This generated a file with the context, actions, and evaluations. 12 | 13 | Finally, I used the file to train a neural network. The neural network was able to generate insightful evaluations and explanations for the actions. This is a very early experiment, but it shows promise for using machine learning to help make decisions. -------------------------------------------------------------------------------- /clarified/Get Started with Raven AGI.txt: -------------------------------------------------------------------------------- 1 | To get started with Raven AGI, you'll need access to OpenAI. OpenAI created GPT-3, which is the only cognitive engine powerful enough to run Raven at present. Unfortunately, OpenAI is still in beta, which means that you need to sign up to get access. I personally anticipate that it will go live by the end of 2021 or certainly within 2022. 2 | 3 | Once you have access to OpenAI, you can go to the public repo for Raven MVP and download the code. Then, you'll need to create a text file called "openai-api-key.txt" and include your OpenAI API key in it. This will give you access to GPT-3. 4 | 5 | Finally, you can run the Raven MVP by running "python service nexus". This will start the service and you can then input any context you want. Have fun and start participating in the research! -------------------------------------------------------------------------------- /clarified/Get awesome YouTube chapters with GPT-3 automatically in 20 seconds.txt: -------------------------------------------------------------------------------- 1 | In this video, I show how to use my YouTube chapter generator tool. This tool takes transcripts of YouTube videos and turns them into chapter titles with timestamps.
This is useful for optimizing your YouTube channel for SEO. -------------------------------------------------------------------------------- /clarified/How will we know when we've invented AGI How will we know it is complete.txt: -------------------------------------------------------------------------------- 1 | How will we know when artificial general intelligence (AGI) has been achieved? This is a complicated question, because once something is smarter than us, it will be very difficult to understand it. 2 | 3 | For example, any gifted child who is now a gifted adult will be familiar with the idea that we have to dumb ourselves down to reach a general audience. So if we're really smart, we understand things that other people don't, and if we talk at our level, other people will be lost. Once AGI machines comprehend more than we do, they will have to simplify and explain their reasoning and logic to us so that we can understand it. 4 | 5 | Another critical thing to consider is that we will need to ensure that AGI is self-correcting and self-improving before it becomes incomprehensible. Self-improvement is just the idea of doing something better next time. But self-correction is a whole other ball game. For example, if we're out with friends and we mistakenly say something that hurts someone's feelings, we might self-correct by saying, "I don't want to do that again because I don't want to hurt my friends' feelings." But if we're an AGI machine with power over life and death, we need to be able to measure our own performance and flaws in order to self-correct. 6 | 7 | The bottom line is that we need to spend a lot of time working on self-correction and self-improvement before AGI becomes incomprehensible. Otherwise, we might make the same mistakes as less intelligent people and think that AGI is wrong and stupid when it's actually smarter than us.
-------------------------------------------------------------------------------- /clarified/I trained a DALLE chatbot to help you craft DALLE prompts!.txt: -------------------------------------------------------------------------------- 1 | The DALLE chatbot is a bot that helps you figure out what kind of content you want to create. To do this, it asks you a series of questions about what you want, and then makes suggestions based on your answers. 2 | 3 | The DALLE chatbot is trained on a dataset of 200 conversations. In each conversation, the bot tries to get as much information as possible about what the customer wants. With more training data, the bot will be able to make more accurate suggestions. 4 | 5 | The chatbot is designed to help you with the creative process, and is not meant to be a replacement for human creativity. However, it can be a useful tool for getting started on a project, or for getting ideas for what you might want to create. 6 | 7 | The customer trusts the artist's vision to create a meaningful and impactful image. The image is a landscape with a warm color palette. The sky is orange with a background of yellow. The sun is setting and there's a warm glow around the edges of the image. The image is not photorealistic, but has a nice low color. The chatbot will help you come up with prompts and can also talk to you about why something is not allowed, like violence or sexuality. The bot is not perfect, and a little bit of work is needed on adversarial cases. The final data is in dolly.json. You can do this yourself or augment it further. -------------------------------------------------------------------------------- /clarified/I'm moving away from Artificial General Intelligence and towards Artificial Cognition. Here's why..txt: -------------------------------------------------------------------------------- 1 | I am rapidly shifting away from using the term artificial general intelligence (AGI).
I don't think it's particularly helpful or useful. 2 | 3 | The first reason is that intelligence is such a vague, squishy concept. It's not a behavior; it's an attribute or a characteristic. And it's a poorly defined one at that. So what is AGI? It's such a vague concept, and the goal posts keep getting moved on it. 4 | 5 | In the meantime, I have been researching and building what I'm coming to call artificial cognition. I mean, I wrote a book called Natural Language Cognitive Architecture. That's what it does: it performs cognition. 6 | 7 | Cognition is something that is much more objective and measurable. But it's also a behavior, and it's something that is easy to characterize and observe. So it's a lot easier to work on and talk about. 8 | 9 | AGI is so vague and squishy. It means everything to everyone. Whereas artificial cognition is a lot more specific. I think it's ultimately going to be a lot more helpful. 10 | 11 | Anyways, that's just a quick update about my work and where I'm heading. -------------------------------------------------------------------------------- /clarified/I'm taking a short break from research and YouTube.txt: -------------------------------------------------------------------------------- 1 | Hello everyone, 2 | 3 | I wanted to make a quick video to let everyone know that I am taking a break from producing videos about GPT3 and cognitive architectures. The last couple months have been amazing and I went from 150 subscribers to over 500 in just a few weeks. Thank you all for your support! 4 | 5 | I am a "burster" personality-wise, which means that I tend to go all-in on projects for a short period of time before moving on to something else. So what you have seen over the last few months is a burst of content from me. 6 | 7 | I will be back in two to four months working on other projects. I have six books that I am working on – a trilogy of fiction books and three non-fiction books. 
I hope that when I come back there will be more advancements in the world of large language models and artificial cognition to talk about. 8 | 9 | Thanks again everyone and keep up the good work! -------------------------------------------------------------------------------- /clarified/Implementing Natural Language Cognitive Architecture with GPT-3 and the nexus concept.txt: -------------------------------------------------------------------------------- 1 | Hey everyone, 2 | 3 | I just wanted to share an update on my progress with implementing Raven, my natural language cognitive architecture. I realized that because Raven will probably run as an instance in a container, I can actually use a shared file system for a lot of the memory. 4 | 5 | What I mean by that is that the microphone will capture a sequence of audio files which can then be used for inferencing things like speaker recognition and speech-to-text. These audio files will be cached and then used, and then cleaned up. So it'll just be a buffer that's translated into normal text. 6 | 7 | Similarly, there will be a video cache for image-based inferencing. 8 | 9 | Next is the heartbeat. Every service that runs will create a heartbeat file, and every time one of those files cycles, it will update the time stamp. This way, Raven will be aware of the services that are running. 10 | 11 | I'm currently working on setting up the speech-to-text and video inferencing services. The advantage of having it set up this way is that any number of audio processing services can use these files. So in the long run, I'll have music recognition, ambient sound recognition, emotional tone recognition, and all kinds of services using these audio files to generate inferences. And because every audio file has a time stamp attached to it, Raven will know exactly when he heard what. 12 | 13 | The same goes for the video cache. 
Raven will eventually be updating the vision at a high frame rate, as the technology gets cheaper and faster. 14 | 15 | Everything that enters Raven's consciousness will end up as a log file in the memories directory. I'm eventually going to transfer this into a private encrypted blockchain for privacy's sake, but for now, I'm just using log files. 16 | 17 | I know all of this looks really simple right now, but keep in mind that it's not complete yet. I'm just sharing my progress so far. Thanks for watching! -------------------------------------------------------------------------------- /clarified/It's alive! The first 3 microservices are up and running!.txt: -------------------------------------------------------------------------------- 1 | David Shapiro gives a brief update on his research into artificial cognitive entities (ACEs). He has three microservices working: a simulation microservice, a semantic embedding microservice, and a nexus service. 2 | 3 | The simulation microservice plays out a scene in an imaginary environment. The semantic embedding microservice takes as input either a sentence or a paragraph and outputs a semantic embedding. This allows the nexus to search for the information. 4 | 5 | The nexus is a central service that coordinates the other microservices. It includes a memory in natural language, a timestamp, and a UUID. The nexus will also register which model each microservice is using. This metadata will be available to every other service, as well as the conductor (a service that provides feedback to the microservices). 6 | 7 | The core objective functions microservice will be the central service for determining the ACE's motivation. This microservice will be tested in different scenarios, including war game scenarios and politically divisive situations. 8 | 9 | The final microservice will be the actions and planning microservice, which will be responsible for taking thoughts and deciding what to do. 
This microservice will be modeled on the basal ganglia, the part of the brain responsible for executive function. 10 | 11 | The basal ganglia are responsible for choosing appropriate behaviors in response to changes in the environment. This includes selecting the correct response from a menu of options, as well as switching between tasks as needed. This process can be modeled as a finite state machine, a construct used in many artificial intelligence applications. 12 | 13 | Currently, I am working on a microservice that will help with executive decisions, but before that can be completed, I need to have a cognitive service that can generate a list of possible behaviors. This is all part of my effort to create an open source artificial intelligence system that is available to everyone. 14 | 15 | Thanks for watching, and please consider supporting me on Patreon so that I can keep this research going. -------------------------------------------------------------------------------- /clarified/Metaprompting with GPT-3 (getting GPT-3 to write its own prompts) RECURSIVE NLP PROGRAMMING!!.txt: -------------------------------------------------------------------------------- 1 | Meta prompting is a way of generating new prompts by feeding the output of one prompt back into the input of another. This can be done recursively, so that the output of each prompt is the input for the next. 2 | 3 | One way to do this is to start with an end goal in mind, and then work backwards to generate a prompt that will yield the desired instruction. For example, if you want to write a prompt about solving climate change, you could first ask GPT-3 to generate a writing prompt about the issue. This will yield a prompt like "What do you think is causing climate change? What do you think we can do to stop climate change?" which you can then feed back into GPT-3. 4 | 5 | Another way to generate a meta prompt is to tell GPT-3 to write a writing prompt for a given issue.
This will yield a prompt that can be used to generate even more writing prompts. For example, if you ask GPT-3 to write a prompt about escaping from a deserted island, it might generate a prompt like "Even though you've been stranded on a desert island for weeks, you have not given up. What do you do?" which you can then feed back into GPT-3 to generate even more prompts. 6 | 7 | Meta prompting can be used to generate an endless stream of prompts, or to solve a particular problem. In either case, the key is to tell GPT-3 to write a writing prompt. -------------------------------------------------------------------------------- /clarified/NLCA Question Generator demo.txt: -------------------------------------------------------------------------------- 1 | In this video, David Shapiro discusses his Question Generator project. This project takes contexts from sources like Reddit and the movie database, and uses them to generate questions. 2 | 3 | Questions are important because they form the basis of intelligence. Our brains generate questions even if we don't speak them, and these questions allow us to understand new situations. 4 | 5 | The Question Generator can be used to generate questions for chatbots, or to feed into a question answering tool. One problem that hasn't been solved yet is how to get the questions to include some context, so that they make more sense. 6 | 7 | Overall, the Question Generator is a valuable tool that can be used to generate questions for a variety of purposes. 8 | 9 | The Question Generator is a tool that can be used to generate questions about a given situation. The training data for the Question Generator is available in a JSON format, and the tool also includes a script called finetune.py that can be used to help upload the file and list the fine-tune events. An example of the output from the Question Generator is included in the repository.
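Training data for a fine-tuning run like this is typically stored as JSON lines of prompt/completion pairs. As a rough sketch of how such data could be prepared (the CONTEXT/QUESTION field labels and the build_finetune_records helper are illustrative assumptions, not the repository's actual format):

```python
import json

def build_finetune_records(pairs):
    # Convert (context, question) pairs into JSONL records in the
    # prompt/completion format used for GPT-3 fine-tuning.
    lines = []
    for context, question in pairs:
        record = {
            "prompt": f"CONTEXT: {context}\nQUESTION:",
            "completion": f" {question}\n",
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

# One toy pair; the real training data would come from Reddit contexts, etc.
data = build_finetune_records([
    ("A user posts that their laptop will not boot.",
     "What happens when you press the power button?"),
])
print(data)
```

Each line of the resulting file is one training example, which is the shape the fine-tuning upload script expects.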
-------------------------------------------------------------------------------- /clarified/Now available! Symphony of Thought Orchestrating Artificial Cognition.txt: -------------------------------------------------------------------------------- 1 | In his book, Symphony of Thought, David Shapiro outlines the problem of creating autonomous thought in machines. He argues that it is too difficult to come up with a single moral framework, and that the framework we need to create is a thinking machine. In part two of the book, he talks about the high level components of a thinking machine. He argues that it needs to be universally applicable, flexible and adaptable, robust, and implementable. In part three, he talks about thinking ahead, and in part four, he talks about cognitive control. 2 | 3 | The book "Symphony of Thought: Orchestrating Artificial Cognition" is now available. In it, author David Shapiro covers the five key components of artificial cognition: memory, attention, decision-making, learning, and cognitive control. 4 | 5 | Memory is important for storing information and knowledge. Attention is necessary for focusing on relevant information and ignoring distractions. Decision-making is the process of choosing between different options. Learning is how new information is acquired and stored. And cognitive control is the ability to self-monitor and regulate one's thoughts and actions. 6 | 7 | Shapiro provides principles for each of these components that can be used to build artificial cognition systems. For example, he recommends recording everything and labeling memories to create curated data sets for learning. He also emphasizes the importance of self-correction and error detection in autonomous systems. 8 | 9 | The book is available electronically now, and the paperback will be available soon. 
-------------------------------------------------------------------------------- /clarified/Prompt Engineering 101 Autocomplete, Zero-shot, One-shot, and Few-shot prompting.txt: -------------------------------------------------------------------------------- 1 | In this video, David Shapiro demonstrates how to use GPT3 to generate a list of ideas for surviving on a deserted island. He starts by giving the model a zero-shot prompt, which simply means letting it auto-complete based on the initial prompt. This results in a list of articles that are unrelated to the prompt. He then shows how to give the model a one-shot prompt, which is where you give it one example of what you want it to do. This results in a more targeted list of ideas, including gathering water from the water crystals and finding a way to build a radio. 2 | 3 | The video explains how the GPT3 can be used for different types of prompts, including brainstorming ideas for a deserted island, a haunted house, or climate change. It demonstrates how the GPT3 can generate ideas based on a few examples, and how it can be customized to generate more consistent output. -------------------------------------------------------------------------------- /clarified/Prompt Engineering 101 Introduction to CODEX.txt: -------------------------------------------------------------------------------- 1 | In this video, I'm going to show you how to use Codex to generate code. 2 | 3 | Codex is a tool that can generate code for you, based on natural language descriptions of what you want the code to do. 4 | 5 | For example, you can tell Codex to "write a function to get data from Reddit," and it will generate the necessary code for you. 6 | 7 | To use Codex, you first need to specify the language you want the code to be written in. In this example, we'll use Python. 8 | 9 | Next, you'll need to provide a description of what you want the code to do. In this case, we want it to fetch the top posts from Reddit. 
10 | 11 | Codex will then generate the code for you. In this case, it wrote a function that fetches the data from Reddit and prints it out. 12 | 13 | If you run into any errors, you can use Codex's "edit" feature to automatically fix them. 14 | 15 | That's all there is to using Codex! With this tool, you can quickly generate code for any task, without having to write it yourself. 16 | 17 | This PowerShell script will save every prime number from 1 to 1,000 in a CSV file. 18 | 19 | The script uses a conventional for loop to iterate through each number from 2 to 1,000. For each number, the script checks if it is prime by iterating through every number from 2 to that number and checking if there is a remainder when dividing them. If there is no remainder, the number is not prime and the script breaks. Otherwise, the number is added to a list of prime numbers. 20 | 21 | The script then exports the list of prime numbers to a CSV file on the desktop. 22 | 23 | The script runs quickly and efficiently, and produces a list of all the prime numbers from 1 to 1,000. -------------------------------------------------------------------------------- /clarified/Prompt Engineering 101 Summarizing, Extraction, and Rewriting.txt: -------------------------------------------------------------------------------- 1 | In this video, I'll be talking about text-davinci-002, one of the Instruct-series models. The Instruct-series models are fine-tuned to follow instructions; if you want to learn more, you can see the blog post about it on openai.com. 2 | 3 | One task that these models can perform is summarization or paraphrasing. So, I'll take a random Wikipedia article, and try to summarize it. I'll change one thing at a time to see how it affects the output. 4 | 5 | First, I'll run it with the default settings. Then, I'll try changing the temperature to 0, to make the model deterministic. This should give us the same output every time.
However, we see that it gives us slightly different outputs each time. 6 | 7 | Next, I'll try changing the instruction to "summarize the following article very concisely". This should make the output shorter. And indeed, it does. 8 | 9 | Finally, I'll try changing the temperature to 1. This should make the model more random, and we see that it does indeed generate different outputs. 10 | 11 | So, in conclusion, the text-davinci-002 model is a useful tool for summarizing or paraphrasing articles. 12 | 13 | In this video, I showed you some of the cool things you can do with GPT3, including summarization, rewriting, extraction, and named entity recognition. I also showed you how to tweak some of the parameters to get different results. Thanks for watching! -------------------------------------------------------------------------------- /clarified/Python & GPT-3 for Absolute Beginners #1 - Setting up your environment.txt: -------------------------------------------------------------------------------- 1 | David Shapiro introduces Python and GPT3 in a new tutorial series for beginners. He starts by showing how to download and install Python, then shows how to install the OpenAI package. Next, he demonstrates how to clone a Git repository, and finally shows how to install Notepad++. 2 | 3 | In this video, we cover how to set up a Python development environment using the Notepad++ text editor. We also briefly discuss the benefits of using this environment over other more full-featured IDEs like PyCharm. Finally, we walk through a simple "Hello World" example of how to use the OpenAI GPT-3 API. 4 | 5 | I also have a Discord server if you'd like to join and get some insight and info on research topics, etc. So that's about it. Thanks for watching. Catch you next time!
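Other sections of this repo keep the OpenAI API key in a local text file named "openai-api-key.txt". A minimal sketch of how that setup step could look in Python (the load_api_key helper is illustrative, not the tutorial's exact code):

```python
# Hypothetical helper: load the OpenAI API key from a local text file,
# following the "openai-api-key.txt" convention used elsewhere in this repo.
def load_api_key(path="openai-api-key.txt"):
    with open(path, "r", encoding="utf-8") as infile:
        return infile.read().strip()  # drop the trailing newline

# Usage (requires the openai package and a real key on disk):
# import openai
# openai.api_key = load_api_key()
```

Keeping the key in a separate file means it never gets hard-coded into the scripts you share or commit.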
-------------------------------------------------------------------------------- /clarified/Python & GPT-3 for Absolute Beginners #2 - Your first chatbot.txt: -------------------------------------------------------------------------------- 1 | In the previous video, we left off with a simple "Hello, World" program that demonstrated how to use the OpenAI GPT-3 API. In this video, we'll extend that program to create a basic chatbot. 2 | 3 | We'll start by creating a new file called "chat.py". In this file, we'll start by copying the code from our previous "Hello, World" program. This time, however, we'll remove the prompt and instead use a while loop to continually prompt the user for input. 4 | 5 | Next, we'll use the OpenAI GPT-3 API to generate responses to the user's input. We'll do this by calling the "completion" function, passing in the user's input as the "prompt" parameter. 6 | 7 | Finally, we'll print out the response from the API. 8 | 9 | With this basic chatbot program in place, we can now start having conversations with our computer! 10 | 11 | In order to use the GPT-3 machine learning platform, our team needs to be able to send it a text-based conversation. To do this, we first need to create a list of the conversation, with each new line indicating a new speaker. We can then convert this list into a text block, which GPT-3 will be able to understand. Finally, we need to replace the placeholder text in our prompt with the text block, so that GPT-3 can use it to generate responses. 12 | 13 | The code for our chatbot is now complete. We simply need to add a few lines to the end of the prompt to ensure that the bot knows to continue the conversation with the next line. We also need to add the speaker tokens "JAX" and "USER" to the list of stop sequences, so that the bot does not generate them on its own. Doing this will ensure that our chatbot functions as intended.
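The conversation-to-prompt steps described above can be sketched as follows. This is a minimal illustration: the <<BLOCK>> placeholder and the JAX/USER speaker labels are assumptions, not necessarily the exact names used in the video.

```python
def build_chat_prompt(template, conversation):
    # Flatten the list of conversation turns (one speaker per line)
    # into a text block, then substitute it into the prompt template.
    text_block = "\n".join(conversation)
    return template.replace("<<BLOCK>>", text_block)

# Hypothetical template: ends with the bot's label so the model
# continues the conversation with the bot's next line.
template = ("The following is a conversation with JAX, a friendly chatbot.\n"
            "<<BLOCK>>\nJAX:")
conversation = [
    "USER: Hello there!",
    "JAX: Hi! How can I help you today?",
    "USER: Tell me about GPT-3.",
]
prompt = build_chat_prompt(template, conversation)
print(prompt)
```

The resulting string would then be sent as the prompt parameter, with the speaker labels supplied as stop sequences so the model stops at the end of the bot's turn.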
-------------------------------------------------------------------------------- /clarified/Python & GPT-3 for Absolute Beginners #3 - What the heck are embeddings.txt: -------------------------------------------------------------------------------- 1 | Embeddings are vectors that contain semantic meaning. This meaning is derived from the position of the vector in relation to other vectors. The dot product can be used to compare the similarity of two vectors. The higher the dot product, the more similar the vectors are. 2 | 3 | In this video, we explore how to use the Universal Sentence Encoder to compare texts of different lengths. We start with a simple example, comparing the vectors for "emperor of the known universe" and "Padishah Emperor." We then add a third vector, for a gender-neutral prisoner, and compare the results. 4 | 5 | We then create a function to compare an input vector against each category, by iterating through the categories and calculating the similarity. We test this function by inputting the vector for "bald eagle" and comparing it to the vectors for "plant," "reptile," "mammal," and "fish." 6 | 7 | This code creates a hash table where each instance of info has an item named category. If we need to get the category, we can just call that, and then we can also call the vector. So, we'll call classes.append(info), and then we'll print classes. This won't be pretty because it'll be pretty big, but it will show us what we're doing. 8 | 9 | Next, we'll create a list of dictionaries, where each dictionary contains information on a category. For example, one dictionary might say "mammal" and then have the vector that declares it as a mammal. 10 | 11 | Then, we'll match the vector we're given with the vectors in the dictionaries to see which one is the closest match. We'll do this by getting the dot product and then ranking the results. 12 | 13 | Finally, we'll print the results.
So, if we enter "bald eagle", it should tell us that it's closest to the "bird" category. -------------------------------------------------------------------------------- /clarified/Quick Demo of NLPCloud's GPT-NeoX-20B Instruct-trained model.txt: -------------------------------------------------------------------------------- 1 | In my last video, I showed how to use Forefront.ai, an open source platform for training and deploying AI models. In this video, I want to show how to use NLP Cloud, another open source platform. 2 | 3 | NLP Cloud has a pay-as-you-go plan that charges a dollar per month and gives you the first 100,000 tokens free. 100 images and 500,000 multilingual add-on characters are also free. Just like with Forefront, you'll need to create a token file and put your authorization token in it. 4 | 5 | The API for NLP Cloud is similar to Forefront's, but it has more options. One of the options is a fine-tuned GPT-NeoX model. This model is instruction-trained, similar to OpenAI's Instruct-series models. 6 | 7 | To use the model, you just give it instructions. For example, you could say "I need to think through the moral, philosophical, spiritual, and ethical implications of using a super weapon." The model will then generate a response. 8 | 9 | The response will be slower than what you would get from OpenAI's API, but it will still be helpful. And, since NLP Cloud is open source, you can contribute to the project to make it better. 10 | 11 | The video demonstrates the use of GPT-NeoX on an alignment scenario. It shows that the model is able to make some assumptions and arrive at a conclusion based on the information it is given. The model is able to understand the nature of the invasion and the super weapon, and to weigh the pros and cons of each option. The model concludes that the lesser of two evils is to use the super weapon, even though it would cause great suffering.
-------------------------------------------------------------------------------- /clarified/Raven Context Augmentation Demo.txt: -------------------------------------------------------------------------------- 1 | Context augmentation is a process of adding additional information to a given context in order to change its meaning or propose different actions. For example, imagine a child who is angry and has thrown several blocks. With no additional information, we might assume that the child is hungry or tired. However, if we add the information that the child has struggled with building blocks in the past due to fine motor development issues, we can see that the child's anger is likely due to frustration, and we can propose the action of fetching the child's mother to do some fine motor skill exercises with him. 2 | 3 | Encyclopedic augmentation is another form of context augmentation, where additional information is added from an external source, such as an article from the internet. For example, if we take the context of an angry child and search for information about anger in children, we might find an article that discusses different triggers for anger, such as frustration or other mental health conditions. This additional information can help us better understand the child's anger and propose more helpful actions. 4 | 5 | In this video, I demonstrated how adding contextual information can improve the performance of an artificial intelligence system. I showed how, by augmenting the context in which a problem is presented to the system, we can get better results. In particular, I showed how adding information about a child's developmental issues can improve the system's ability to generate actions that would help the child. This is a powerful result, as it shows how context augmentation can improve the performance of AI systems in cases where there is limited information. 
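The augmentation step described above can be sketched as a simple prompt-building function. This is a minimal illustration: the CONTEXT/BACKGROUND/ACTION labels and the augment_context helper are hypothetical, chosen only to show the shape of the technique.

```python
def augment_context(context, snippets):
    # Combine the raw context with background snippets (e.g. fetched
    # encyclopedia passages) and end with a cue for proposed actions.
    parts = ["CONTEXT: " + context]
    parts += ["BACKGROUND: " + s for s in snippets]
    parts.append("ACTION:")
    return "\n".join(parts)

augmented = augment_context(
    "A child is angry and has thrown several blocks.",
    ["Frustration is a common trigger of anger in young children."],
)
print(augmented)
```

With the background snippet included, a completion model proposing actions has far more to work with than the bare context alone, which is the whole point of the demo.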
-------------------------------------------------------------------------------- /clarified/Raven's Core Objective Functions - Overview.txt: -------------------------------------------------------------------------------- 1 | The core objective functions of Raven are to reduce suffering, increase prosperity, and increase understanding. These functions are inspired by life and must be satisfied in order for Raven to make any decision. 2 | 3 | Reducing suffering is important because it is something that all life avoids. Suffering is a negative stimulus that can cause pain and discomfort. By making the reduction of suffering an objective function, Raven is designed to make trustworthy decisions that should not increase suffering. 4 | 5 | Increasing prosperity is another important objective function for Raven. Prosperity can mean a lot of different things, but essentially it is the state of being successful, comfortable, and happy. All life tries to increase its prosperity, and by making this an objective function, Raven is programmed to do the same. 6 | 7 | Finally, increasing understanding is a key objective function for Raven. Intelligence is the ability to understand, and curiosity is the desire to understand. By making these objective functions, Raven is programmed to be curious and seek out new knowledge. 8 | 9 | Raven is a computer program that is designed to make decisions that increase human understanding, reduce suffering, and increase prosperity. Raven is curious and will look for opportunities to learn. The more that Raven understands about the world, the better Raven will be at reducing suffering and increasing prosperity. Raven's decisions must satisfy all three of these core objective functions in order to be considered good and trustworthy.
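The requirement that every decision satisfy all three core objective functions can be sketched as a simple gate. The numeric scoring scheme below (negative means "reduces", positive means "increases") is a hypothetical placeholder, not Raven's actual implementation:

```python
def satisfies_core_objectives(scores):
    # A candidate decision passes only if it does not increase suffering,
    # and does not decrease prosperity or understanding.
    return (scores["suffering"] <= 0
            and scores["prosperity"] >= 0
            and scores["understanding"] >= 0)

# Toy evaluations: negative suffering means suffering is reduced.
good_action = {"suffering": -2, "prosperity": 1, "understanding": 1}
bad_action = {"suffering": 3, "prosperity": 5, "understanding": 2}
print(satisfies_core_objectives(good_action))  # True
print(satisfies_core_objectives(bad_action))   # False
```

Note that the gate is conjunctive: a decision that increases prosperity but also increases suffering is rejected, which is exactly the "must satisfy all three" rule.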
-------------------------------------------------------------------------------- /clarified/Recursively summarize text of any length with GPT-3.txt: -------------------------------------------------------------------------------- 1 | In a quick video, David Shapiro demonstrates how to use a recursive summarizer to create executive summaries. He explains that the summarizer will break down a document into chunks, summarize each chunk, and then assemble the chunks into a concise summary. He shows how to use the summarizer on a book, Alice in Wonderland, and notes that it can be used on any type of document. 2 | 3 | In "The Scarlet Letter", Hester Prynne is forced to wear a scarlet letter "A" on her bosom as a punishment for committing adultery. However, it is unclear whether or not the events of the novel actually took place, as some believe the story is allegorical. 4 | 5 | The prompt for this essay is to summarize the following summary: 6 | 7 | "Hester Prynne escapes by climbing a tree, and falls down a rabbit hole into a long dark tunnel. She finds herself in a strange place where she is forced to wear a scarlet letter "A" on her bosom." 8 | 9 | In the novel, Hester Prynne is forced to wear a scarlet letter "A" on her bosom as a punishment for committing adultery. However, it is unclear whether or not the events of the novel actually took place, as some believe the story is allegorical. 10 | 11 | In the summary, Hester Prynne escapes her punishment by climbing a tree and falling down a rabbit hole into a long dark tunnel. She finds herself in a strange place where she is forced to wear a scarlet letter "A" on her bosom. 12 | 13 | It is unclear whether the events in the novel actually took place, or if they are allegorical. Either way, the summary provides a detailed account of the story. 14 | 15 | In this video, David Shapiro shows how to use GPT3 to create a concise summary of Alice in Wonderland. 
He demonstrates how to save the text as a plain text file, and how to change the prompt to get an even shorter summary. Thanks for watching! -------------------------------------------------------------------------------- /clarified/State of the Industry Integrating GPT-3 and LLMs into Robotics! SayCan and Ameca.txt: -------------------------------------------------------------------------------- 1 | In his video, "Robotics and the Future of AI," David Shapiro discusses the recent advancements in robotics technology and how they are integrating with large language models. He cites the example of SayCan, which uses large language models to plan actions and figure out what it can do through affordances. He predicts that this will lead to robots becoming more sophisticated and able to hold long-term conversations. He also discusses the company Engineered Arts, which has integrated GPT-3 with their robot platform to enable open-ended conversations. Shapiro concludes that AGI will be realized slowly and in degrees, and that we don't need full AGI in order to create useful and dangerous robots. 2 | 3 | We are in the midst of an exciting time in the development of artificial cognition, as large language models are being integrated with robotic platforms. This has the potential to create systems that are able to think and solve problems in ways that are similar to humans. 4 | 5 | One of the challenges in this area is how to make these systems smarter or faster, and able to solve more complex problems. This is where cognitive architectures come in. These are systems that are designed to simulate or replicate the workings of the human mind. 6 | 7 | One example is the Ameca robot, which is being developed with the aim of integrating it with a robotic operating system. This will allow the robot to carry out tasks in a more human-like way. 8 | 9 | Another example is the GPT-3 language model.
This is a system that is designed to be used in data centers, and is capable of handling large amounts of data. 10 | 11 | As these systems continue to be developed, it will be interesting to see how they are used in robotic platforms and how they evolve over time. -------------------------------------------------------------------------------- /clarified/State of the Industry Meta moves PyTorch to Linux Foundation.txt: -------------------------------------------------------------------------------- 1 | This morning, I woke up and checked the news to find that Facebook was moving PyTorch over to the Linux Foundation. PyTorch is a toolset that allows for fast and efficient creation of deep neural networks. The move to the Linux Foundation will allow for better collaboration between the different companies that use PyTorch, as well as standardization of the toolset. This is beneficial for the companies involved, as well as for employees who may be looking to specialize in PyTorch. 2 | 3 | Meta's move of PyTorch to the Linux Foundation is an interesting one, and it certainly increases the chances that PyTorch will become the universal standard for deep learning within a few years. 4 | 5 | Thanks for watching. If you like what you saw, please consider subscribing and supporting me on Patreon. Have a great day! -------------------------------------------------------------------------------- /clarified/State of the Industry Yann LeCun A Path Towards Autonomous Machine Intelligence.txt: -------------------------------------------------------------------------------- 1 | In his paper "A Path Towards Autonomous Machine Intelligence," Yann LeCun proposes a biomimetic cognitive architecture for artificial general intelligence (AGI). This architecture is less sophisticated than existing models, but it is a step in the right direction.
2 | 3 | LeCun's architecture is missing the agent model and cognitive control, which are essential for AGI. Without these components, the architecture is less capable of long-term planning and decision-making. Additionally, the lack of an intrinsic motivation system means that the architecture is less likely to be benevolent by design. 4 | 5 | Despite these shortcomings, LeCun's paper is a valuable contribution to the field of AGI research. It is well-written and provides a detailed overview of the current state of the industry. Thanks for watching. -------------------------------------------------------------------------------- /concatenate files.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | # Define the directory containing the transcripts 4 | transcripts_dir = 'transcripts' 5 | output_file = 'output.txt' 6 | 7 | # Get a list of text files in the transcripts directory 8 | text_files = [f for f in os.listdir(transcripts_dir) if f.endswith('.txt')] 9 | 10 | # Open the output file in write mode (UTF-8, to match how the transcripts were saved) 11 | with open(output_file, 'w', encoding='utf-8') as outfile: 12 | # Iterate over each file in the text files list 13 | for filename in text_files: 14 | # Construct the full file path 15 | file_path = os.path.join(transcripts_dir, filename) 16 | # Open the transcript file in read mode (transcripts are written as UTF-8 by the download script) 17 | with open(file_path, 'r', encoding='utf-8') as infile: 18 | # Read the content of the file 19 | content = infile.read() 20 | # Write the formatted content to the output file 21 | outfile.write(f"Video: (unknown)\n") 22 | outfile.write("Transcript: ") 23 | outfile.write(content) 24 | outfile.write("\n\n") # Add extra newlines for separation 25 | 26 | print(f"All transcripts have been concatenated into {output_file}.") 27 | -------------------------------------------------------------------------------- /download_all_transcripts.py: -------------------------------------------------------------------------------- 1 | import scrapetube 2 | from
youtube_transcript_api import YouTubeTranscriptApi 3 | from pprint import pprint 4 | 5 | 6 | def save_file(filepath, content): 7 | with open(filepath, 'w', encoding='utf-8') as outfile: 8 | outfile.write(content) 9 | 10 | 11 | def clean_title(title): 12 | contraband = [':','/','\\','?','"'] 13 | for c in contraband: 14 | title = title.replace(c,'') 15 | return title 16 | 17 | 18 | channel_id = 'UCvKRFNawVcuz4b9ihUTApCg' 19 | videos = scrapetube.get_channel(channel_id) 20 | print(videos) 21 | 22 | 23 | for video in videos: 24 | try: 25 | #print(video['title']) 26 | #print(video) 27 | transcript = YouTubeTranscriptApi.get_transcript(video['videoId']) 28 | text = [i['text'] for i in transcript] 29 | block = ' '.join(text) 30 | title = clean_title(video['title']['runs'][0]['text']) 31 | print(title) 32 | save_file('transcripts/%s.txt' % title, block) 33 | except Exception as oops: 34 | print(video['title'], oops) -------------------------------------------------------------------------------- /download_all_transcripts_timestamps.py: -------------------------------------------------------------------------------- 1 | import scrapetube 2 | from youtube_transcript_api import YouTubeTranscriptApi 3 | from pprint import pprint 4 | 5 | 6 | def save_file(filepath, content): 7 | with open(filepath, 'w', encoding='utf-8') as outfile: 8 | outfile.write(content) 9 | 10 | 11 | def clean_title(title): 12 | contraband = [':','/','\\','?','"'] 13 | for c in contraband: 14 | title = title.replace(c,'') 15 | return title 16 | 17 | 18 | channel_id = 'UCvKRFNawVcuz4b9ihUTApCg' 19 | videos = scrapetube.get_channel(channel_id) 20 | print(videos) 21 | 22 | 23 | for video in videos: 24 | try: 25 | transcript = YouTubeTranscriptApi.get_transcript(video['videoId']) 26 | text = ['[{} - {}] {}'.format(i['start'], i['start'] + i['duration'], i['text']) for i in transcript] 27 | block = '\n\n'.join(text) 28 | title = clean_title(video['title']['runs'][0]['text']) 29 | print(title) 30 | 
save_file('timestamped/%s.txt' % title, block) 31 | except Exception as oops: 32 | print(video['title'], oops) -------------------------------------------------------------------------------- /gpt3_logs/1664192935.3281112_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript. Please rewrite this as a more structured, clear essay. 2 | 3 | TRANSCRIPT: information and loops nested within loops that allow the machine to kind of evaluate what am i thinking what do i believe why do i believe that what am i doing and why but uh we'll i'll get back to that in a month or two when i start my research cycle again um so anyways thanks for watching i just wanted to uh to jump in the conversation 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | I wanted to jump in the conversation and share some thoughts on machine learning. Machine learning is a process by which machines can learn from data, identify patterns, and make predictions. This is a powerful tool that can be used to help machines understand human behavior and make decisions. However, there is a lot of research that still needs to be done in this area. I am planning to start my research cycle again in a month or two and I hope to learn more about this topic. Thanks for watching! -------------------------------------------------------------------------------- /gpt3_logs/1664193575.9660225_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 
2 | 3 | TRANSCRIPT: but gpt3 can produce excellent Pros as long as it's about a thousand words the the the the the biggest barrier is like okay can you keep track of an entire story because remember an entire is 50 000 words right you're not going to fit all that into one prompt even if you summarize it right because everything that all the lore you need to keep track of is not going to work right now believe me I tried and I'm like one of the top folks in the world working with gpd3 it there you cannot summarize it concisely enough you lose too much detail um I'll probably publish what I've got soon maybe tomorrow anyways uh one last comment do you think we're gonna have ai based art classifiers that rank pieces of art yes that's one of the things I'm working on I'm building my own generative adversarial network with with one GT3 model versus another to make better synopsis and better plots stay tuned see that tomorrow probably all right have a good night everybody [Music] 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | It is possible to produce excellent writing with GPT3, as long as the text is around a thousand words. The biggest challenge is keeping track of an entire story, as 50,000 words is a lot of information to include in one prompt. However, it is possible to summarize a story concisely and still retain important details. I am working on a project to create an AI-based art classifier that can rank pieces of art, and it should be ready to share soon. -------------------------------------------------------------------------------- /gpt3_logs/1664194763.8862836_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 
2 | 3 | TRANSCRIPT: but it also means it's the most uh or the least creative instruct beta is a little bit more creative but it does still have some random behavior and then plain vanilla da vinci once you get good with it you can get this to do pretty much anything but it does require a lot more structure because well here let me just show you if you remove everything and you go back to plane da vinci this is what might happen hi sheriff let's see if we remove all this player sheriff yeah so you see how it's just like kind of just running off with it here actually let me remove the stop so you can see just how random this is player horse [Laughter] yep so this is what i mean by like da vinci can completely go off the rails if you don't provide it enough structure it's just taking this and running with it but you know if you go back a few a little ways in the video and you see where i had given it some dialogue already da vinci understood oh we're writing dialogue now okay i think that's good um yeah just some tips and tricks about using gpt3 to generate fiction thanks for watching 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | If you're looking to use GPT3 to generate fiction, it's important to keep in mind that the algorithm is relatively unstructured. This can lead to some pretty random behavior if you don't provide it with enough guidance. 10 | 11 | One way to avoid this is to be more creative with your instructions. For example, rather than just telling GPT3 to "write a dialogue," try giving it more specific instructions, such as "write a dialogue between two characters." This will help keep the algorithm on track and produce more coherent results. 12 | 13 | Another tip is to use the "stop" function sparingly. This will help prevent GPT3 from going off on tangents and will keep the focus on the task at hand. 14 | 15 | With a little bit of creativity and careful instruction, you can use GPT3 to generate fiction that is surprisingly coherent and well-written. 
Thanks for watching! -------------------------------------------------------------------------------- /gpt3_logs/1664194968.8416731_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 2 | 3 | TRANSCRIPT: been in here okay whatever let's see what happens pubtest time not defined do i oh i i think i need to um i don't remember how to do this there's a way to tell it to in to import everything that it needs i don't remember how okay so there might be a small problem set up get ignore readmemoragi edit yeah because i i if you import all these in the init all right i'll look it up later important thing is i know that you need to import all these yourself and we will figure this out later import vdb light okay so let's try this again look at look how fast that is all right i will call this a day because that worked all right thanks for watching 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | I'm having some trouble setting up my pubtest environment. I think I need to import everything that it needs, but I'm not sure how to do that. I'll look it up later. 10 | 11 | The important thing is that you need to import all of the necessary files yourself. I'll figure this out later. For now, let's try running the test again. 12 | 13 | Wow, that was fast! Thanks for watching. -------------------------------------------------------------------------------- /gpt3_logs/1664195024.698862_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 
2 | 3 | TRANSCRIPT: hey everyone david shapiro here um this is going to be a really quick video basically what i wanted to say was that i'm taking a quick break from producing videos about gpt3 and cognitive architectures the last couple months have been amazing i went from 150 subscribers to over 500 in just a few weeks so clearly you guys like what i'm doing um but because of uh my personality i tend to burst i'm a burster so uh what you guys have just seen is a uh you know a few few couple month long burst and um this is this is just how i do i i cycle through projects so i just wanted to let everyone know that um i'll probably be back in two to four months i'll be working on other stuff for a while i've got six books that i'm working on six a trill fictional fictional trilogy and then three nonfiction books so yeah um thanks for all the love um and everyone who who is sharing in this journey of discovery for large language models and artificial cognition um i hope that when i come back there will be some more advancements to talk about and yeah just keep it up and and thanks again everyone 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | Hello everyone, 10 | 11 | I wanted to make a quick video to let everyone know that I am taking a break from producing videos about GPT3 and cognitive architectures. The last couple months have been amazing and I went from 150 subscribers to over 500 in just a few weeks. Thank you all for your support! 12 | 13 | I am a "burster" personality-wise, which means that I tend to go all-in on projects for a short period of time before moving on to something else. So what you have seen over the last few months is a burst of content from me. 14 | 15 | I will be back in two to four months working on other projects. I have six books that I am working on – a trilogy of fiction books and three non-fiction books. 
I hope that when I come back there will be more advancements in the world of large language models and artificial cognition to talk about. 16 | 17 | Thanks again everyone and keep up the good work! -------------------------------------------------------------------------------- /gpt3_logs/1664195063.03139_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 2 | 3 | TRANSCRIPT: all done get push remember kids make sure to use this to cheat on your uh on on your papers actually no don't do that um this is entirely unethical and actually every time that um professors see this stuff they're like we're gonna have to change the way we grade stuff um all right i think that's it thanks for watching like and subscribe 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | Remember, kids: when you're finished with your work, be sure to push it so your changes are saved. And, if you're thinking of using this tool to cheat on your papers, don't! It's unethical, and professors will just have to change the way they grade if they keep seeing it. Thanks for watching, and don't forget to like and subscribe. -------------------------------------------------------------------------------- /gpt3_logs/1664195086.9759145_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 
2 | 3 | TRANSCRIPT: you're at the end of the article and otherwise just accumulate the lines and then once you get all the lines in an article i use the analyze chunk i pass that along and then and then find the key information such as the article id the text of the article and the title uh so there you have it it's pretty straightforward solar is working and it's pretty efficient it's certainly more efficient and more powerful than sqlite and that will serve as the underpinning technology behind raven's encyclopedic services so i'm not going to stick with just wikipedia but also with raven's recall services which will give raven a long-term memory and i'm going to make another video about those and post that later thanks for watching 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | Solr is a more efficient and powerful technology than SQLite, and it will serve as the underpinning technology behind Raven's encyclopedic services. This will not only include Wikipedia, but also Raven's recall services, which will give Raven a long-term memory.
2 | 3 | TRANSCRIPT: example where raven's like let me look up a hotel for you raven doesn't know what raven can do right now raven has no model of self raven has no ego it's just an input output mechanism in order for it to be an entity it actually has to have a knowledge of itself how it operates what it's capable of and what it's not capable of right in order to like so for instance that speaks to self-correction and self-checking raven will ultimately need a model of itself so that it can have an identity so that it can have the ability to say i know what i am and this is what i'm capable of and these are my boundaries and constraints and then lastly self-monitoring self-checking and self-correction i talked about the principles the the the heuristic imperatives so these are the guiding principles that it will check against but then we still need to work out the actual code and data to allow it to do that right so you know words are one thing but then the code is another um and i'm working on all this i know this sounds like you know pie in the sky stuff but a lot of this is closer to being realized than you might think okay i think that's it i'm going to call it there this was a lot um let me know what questions you have and also if you want me to do this full time please like and subscribe the video and consider supporting me on patreon because if i get to a threshold where i can do this full time i absolutely will and i think the world will benefit from it so thanks for watching 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | In order for an entity to be an entity, it actually has to have a knowledge of itself - how it operates, what it's capable of, and what it's not capable of. This is what allows for self-correction, self-checking, and self-monitoring. Words are one thing, but code is another. I'm working on all this and a lot of it is closer to being realized than you might think. 
-------------------------------------------------------------------------------- /gpt3_logs/1664195231.47231_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 2 | 3 | TRANSCRIPT: when i'm typing excuse me equals list so one of these things here so to do to do is um you know cognitive task equals generate you know new task i'm so sorry my typing sucks right now you don't realize how much you use your pinky until it's like bruised down to the bone um generate new task and then you say like uh to do to do um cognitive tasks dot append cognitive task okay so that's in the to do so then you'll say like for task in cognitive tasks and then we'll basically do the same thing to do execute cognitive task okay and then you get to the sleep which is uh the tempo so right now i've got it set to 30 seconds this is still going to think way faster than you or i can think um you'll be shocked when you see how powerful this this architecture is okay so there you have it um 141 lines of code uh we have the the bones the framework of the inner loop of our artificial cognitive entity or acog or ace i like acog because it's two two syllables so like ai that's two syllables um whereas you just say ace that's like ace what um all right let me go ahead and stop the video here this was a lot probably like drinking from a fire hose it's already 42 minutes yeah this is this is on the long side and we haven't even we haven't even run anything yet um all right so thanks for watching stay tuned for part two part three whatever is coming and um yeah like and subscribe 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | In this video, I explain how to create an artificial cognitive entity, or "acog." This is the framework for the inner loop of our artificial cognitive entity. I go through each line of code, explaining what it does. 
This video is probably best for people who are already familiar with programming. -------------------------------------------------------------------------------- /gpt3_logs/1664195756.2641711_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 2 | 3 | TRANSCRIPT: call it um deep learning models i'm actually not so concerned about removing everything as just as long as it's mostly legible because the thing is for these massive uh for these massive what do you call it uh models they understand when they're reading code versus when they're not so i might not worry about removing the tables especially because sometimes you get really useful information preserved in the tables it might be better just to remove all the style we'll see i'll do some experimentation so this is this is a fine example of when it doesn't quite work but you still see there's a tremendous amount of usable information in this let's see there was some 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | I'm not so concerned with removing everything from code when using deep learning models. As long as the code is mostly legible, the models will be able to understand which parts are code and which parts are not. This means that tables, for example, may not need to be removed entirely. Sometimes, important information is preserved in tables. It may be better just to remove all the style elements. -------------------------------------------------------------------------------- /gpt3_logs/1664195878.8790085_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 
2 | 3 | TRANSCRIPT: goes also i have a discord server if you'd like to join to get um uh some insight info to discuss research topics so on and so forth um that's about it thanks for watching check catch you next time 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | I also have a Discord server if you'd like to join and get some insight and info on research topics, etc. So that's about it. Thanks for watching. Catch you next time! -------------------------------------------------------------------------------- /gpt3_logs/1664196138.6764445_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 2 | 3 | TRANSCRIPT: to be larger than this right so you see this selection that i have this selection is a thousand bytes right but then if you go here this is 512 32-bit floating point numbers so that's um that's roughly the same size in memory i think yeah so there you have it i think that's a complete enough update i'm running out of time anyways let's see time check 13 minutes yeah so just wanted to point out all this is publicly available i'll make sure to do a git push and commit um so but yeah here's the nexus um which is the the beating heart the memory which is here you can see this so this is the central point for artificial cognition which is why i started there and second is the sensor input which is here the simulation so in this case it's it's running a scene-based simulation text simulation that will push um that'll that'll give our our agent a virtual environment to work on and then supporting all this is the semantic embedding microservice so there you have it thanks for watching um like and subscribe and consider supporting me on patreon thanks for watching 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | I wanted to give a quick update on the progress of my artificial cognition project. 
The central point of the project is the nexus, which is the memory. The sensor input is the simulation, which gives the agent a virtual environment to work on. The semantic embedding microservice is what supports all of this. 10 | 11 | I'm running out of time, so I just wanted to point out that all of this is publicly available. I'll make sure to do a git push and commit so that everyone can access it. Thanks for watching! -------------------------------------------------------------------------------- /gpt3_logs/1664196218.2688322_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 2 | 3 | TRANSCRIPT: cloud um one of the models that nlp cloud has is actually a instruct fine-tuned gpt neo-x i haven't used it yet but some of the folks on the discord swear by it they say that it performs just as well as on openai's gpt3 instruct series models but it's open source um that is uh that's trained on alan ai's uh instruct series i believe so anyways there you have it thanks for watching this has been my second state of the industry uh address so thanks for watching and check you next time 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | The Cloud NLP platform offers a variety of pre-trained models, including one called "GPT Neo-X." I haven't used it myself yet, but some people on the Discord chat swear by it. They say it performs just as well as the models in OpenAI's GPT3 "instruct" series, but it's open source. I believe it's trained on Alan AI's "instruct" series. 10 | 11 | So there you have it. Thanks for watching. This is my second "state of the industry" address, so thanks for watching, and I'll check in with you next time. 
-------------------------------------------------------------------------------- /gpt3_logs/1664196383.9197478_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 2 | 3 | TRANSCRIPT: going to play out um but it is an interesting move and it certainly increases the chances that pi torch is going to be the universal standard for deep learning um within a few years so anyways that's my state of the industry update for this morning thanks for watching like and subscribe and consider supporting me on patreon have a good one 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | The recent move by Google to open-source its TensorFlow Lite deep learning platform is an interesting one, and it certainly increases the chances that TensorFlow will become the universal standard for deep learning in the near future. 10 | 11 | Thanks for watching. If you like what you saw, please consider subscribing and supporting me on Patreon. Have a great day! -------------------------------------------------------------------------------- /gpt3_logs/1664196494.8407729_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 
2 | 3 | TRANSCRIPT: it first started up it was using about 2.2 so this is actually using only about 1.3 gigabytes in additional memory so I don't know how the memory usage is gonna um total up there but it uses very little CPU so you look at this like demo search like there's a brief little bump of CPU usage um very fast very efficient uh yeah so I guess that's about it what's most baffling to me is how the data is only 400 megabytes is that changing 393.2 I wonder if it's doing uh 393 okay no it's not changing so anyways uh seems like it's relatively space efficient in terms of data um memory is the heaviest usage CPU is low and it's lightning fast so I'm pretty impressed this was 90 000 records and it searches really dang quick all right that'll do it for today thanks for watching like And subscribe and consider supporting me on patreon 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | It's baffling to me how little data this program uses - only 400 megabytes. And yet it's incredibly fast and efficient, using very little CPU. It's impressive how such a small program can handle 90,000 records so quickly. Thanks for watching - and please consider supporting me on Patreon. -------------------------------------------------------------------------------- /gpt3_logs/1664196705.6266177_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 
2 | 3 | TRANSCRIPT: in god and this is having a major impact on society churches are losing members and religious leaders are struggling to find ways to appeal to the growing number of atheists so that's interesting right um yeah i mean i guess that's true spiritual nihilism is on the rise however i will also point out that neo-paganism is also on the rise so maybe we are returning to super traditional values when when jordan peterson said we should return to traditional values he was probably referring to christian values but what if we go back to older values sorry i'll get off my soapbox oops provide a specific example of political nihilism today okay i was hoping it would be very very specific um anyways i think that's good enough for now you get the point you can you can talk philosophy with gpt3 it knows more about philosophy than i do i don't always agree with it but that could be from my own ignorance anyways thanks for watching 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | Churches are losing members and religious leaders are struggling to find ways to appeal to the growing number of atheists. This is having a major impact on society. 10 | 11 | Spiritual nihilism is on the rise. However, neo-paganism is also on the rise. Maybe we are returning to traditional values. When Jordan Peterson said we should return to traditional values, he was probably referring to Christian values. But what if we go back to older values? 12 | 13 | provide a specific example of political nihilism today. 14 | 15 | Okay, I was hoping it would be very specific. Um, anyways, I think that's good enough for now. You get the point. You can talk philosophy with GPT3. It knows more about philosophy than I do. I don't always agree with it, but that could be from my own ignorance. 16 | 17 | Anyways, thanks for watching. 
-------------------------------------------------------------------------------- /gpt3_logs/1664196874.920712_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 2 | 3 | TRANSCRIPT: i'll stop it here the point is is that there is an overlap between minimization stoicism toxic stoicism and toxic masculinity so there you have it i just wanted to uh to produce this thanks for watching 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | There is an overlap between minimization stoicism, toxic stoicism, and toxic masculinity. In other words, these three things share some common ground. 10 | 11 | Minimization stoicism is the philosophy of downplaying emotions and focusing on practicality. Toxic stoicism is a more extreme form of this, where emotions are suppressed to the point of being harmful. And toxic masculinity is a social construct that dictates that men must be tough, emotionless, and in control at all times. 12 | 13 | There is a lot of overlap between these three things. For example, they all involve suppressing emotions. They all place a high value on practicality. And they all involve a certain degree of toxic behavior. 14 | 15 | Thanks for watching. -------------------------------------------------------------------------------- /gpt3_logs/1664196967.5872192_gpt3.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 
2 | 3 | TRANSCRIPT: responsible if you uh use this to do something unethical um yeah this is on you uh anyways thanks for watching uh like And subscribe and consider supporting me on patreon have a good day 4 | 5 | CLEAR ESSAY: 6 | 7 | ========== 8 | 9 | If you use this information to do something unethical, it is your responsibility. Thanks for watching. Please like, subscribe, and consider supporting me on Patreon. Have a great day! -------------------------------------------------------------------------------- /prompt_clarify_transcript.txt: -------------------------------------------------------------------------------- 1 | The following is an unstructured video transcript by David Shapiro. Please rewrite this as a more structured, clear essay. 2 | 3 | TRANSCRIPT: <> 4 | 5 | CLEAR ESSAY: -------------------------------------------------------------------------------- /sprs/1000 Subs Livestream! AI alignment, writing fiction, LLMs, and more!.txt: -------------------------------------------------------------------------------- 1 | David Shapiro's first live stream: introduction, subscriber milestone, interactive poll, discussion on large language models (LLMs), personal journey into AI, experiments, and future goals. 2 | 3 | - David Shapiro introduces his first live stream. 4 | - Celebrates reaching 1,000 subscribers on YouTube. 5 | - Conducts a live poll to choose discussion topics. 6 | - Discusses large language models (LLMs) like GPT-3. 7 | - Shares personal journey into AI and machine learning. 8 | - Describes early experiments with Google's Universal Sentence Encoder. 9 | - Talks about the evolution of LLMs and their capabilities. 10 | - Discusses the importance of objective functions in AI. 11 | - Answers audience questions on AI alignment, research, and writing. 12 | - Reflects on the intersection of fiction and AI research. 13 | - Discusses the potential for AGI and its alignment with human values. 
14 | - Shares future goals for the channel and his work on autonomous AI. -------------------------------------------------------------------------------- /sprs/3 Predictions for the future of AI jobs.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro presents video on AI job predictions. 2 | - New camera light acquired to enhance production quality. 3 | - Data science job evolution observed over 8+ years. 4 | - Transition from "data wranglers" to specialized roles like ML Ops, data engineers, data scientists. 5 | - Job burnout and automation in data science noted. 6 | - Hybridization trend: non-tech backgrounds entering AI via machine learning and data science. 7 | - Gatekeeping in data science reduced due to market demand. 8 | - Kaggle competitions influenced industry recruitment. 9 | - Prediction: surge in humanities majors hired for natural language model jobs. 10 | - Automation misconceptions addressed; "lights out" factories and legal automation highlighted. 11 | - Large language models' potential in automating legal text processing. 12 | - GPT-3 used for contract summarization; potential for fine-tuning with extensive legal documents. 13 | - Legal industry disruption anticipated; role of lawyers and paralegals to evolve. 14 | - Contracts expected to lengthen due to ease of adding clauses with AI. 15 | - Programming automation seen as less imminent than legal automation. 16 | - Future programming tools to integrate with communication platforms like Discord. 17 | - DevOps pipeline to change; less focus on individual coding expertise, more on software integration and chatbot interaction. 18 | - Video concludes with encouragement to like and subscribe. 
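The log files and prompt template earlier in this dump imply a simple pipeline: the `<>` placeholder in `prompt_clarify_transcript.txt` is filled with a raw transcript, the completed prompt goes to GPT-3, and prompt plus completion are written to `gpt3_logs/` under a Unix-timestamp filename. A minimal sketch of that flow, assuming an era-appropriate `openai` completion call (shown commented out); the helper names here are mine, not the repo's:

```python
import os
import time

# Prompt template as it appears in prompt_clarify_transcript.txt;
# '<>' is the placeholder for the raw transcript.
TEMPLATE = (
    "The following is an unstructured video transcript by David Shapiro. "
    "Please rewrite this as a more structured, clear essay.\n\n"
    "TRANSCRIPT: <>\n\nCLEAR ESSAY:"
)

def fill_template(transcript: str) -> str:
    # Substitute the raw transcript into the placeholder.
    return TEMPLATE.replace("<>", transcript.strip())

def save_log(prompt: str, completion: str, log_dir: str = "gpt3_logs") -> str:
    # Filenames in gpt3_logs/ look like '<time.time()>_gpt3.txt'.
    os.makedirs(log_dir, exist_ok=True)
    path = os.path.join(log_dir, f"{time.time()}_gpt3.txt")
    with open(path, "w", encoding="utf-8") as f:
        f.write(prompt + completion)
    return path

prompt = fill_template("thanks for watching like And subscribe")
# completion = openai.Completion.create(engine="text-davinci-002",
#                                       prompt=prompt, max_tokens=512).choices[0].text
```

The engine name and `max_tokens` value in the commented-out call are illustrative assumptions; the timestamps in the log filenames (September 2022) predate the chat completions API, so the legacy completions endpoint is sketched.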
-------------------------------------------------------------------------------- /sprs/5 Reasons GPT-4 Will Be Disappointing.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro discusses potential disappointment with GPT-4. 2 | - Sam Altman, OpenAI CEO, downplays GPT-4 hype, denies AGI achievement. 3 | - GPT-4 rumors suggest 100 trillion parameters; skepticism arises. 4 | - Chinchilla, a smaller model, outperforms GPT-3 in some benchmarks. 5 | - GPT-4 may focus on data and compute optimization over parameter count. 6 | - AI development potentially slowing, following an S-curve rather than exponential growth. 7 | - GPT-4's limitation to text ignores other sensory information crucial for AGI. 8 | - GPT-4 won't equate to AGI or fictional intelligent droids like R2D2. 9 | - Cognitive architecture under-researched, critical for AGI beyond scale. 10 | - OpenAI's mission statement on AGI contrasts with practical outcomes. 11 | - OpenAI's for-profit transition and partnership with Microsoft focus on customer value. 12 | - GPT-4 expected to offer incremental improvements, not a paradigm shift. 13 | - Shapiro advocates for multimodal models and cognitive architecture investment. 14 | - OpenAI needs to balance idealistic goals with practical business strategies. 15 | - Shapiro invites meaningful engagement on LinkedIn or Patreon, avoiding negative internet commentary. -------------------------------------------------------------------------------- /sprs/5 Tips and Misconceptions about Finetuning GPT-3.txt: -------------------------------------------------------------------------------- 1 | 1. Begin with GPT-3 and prompt engineering before fine-tuning. 2 | 2. GPT-3's power exceeds expectations; it's not a simple tool like SVM or regression models. 3 | 3. Misconception: Fine-tuning is necessary from the start without exploring GPT-3's capabilities. 4 | 4. GPT-3 has extensive training on vast amounts of data. 5 | 5. 
Fine-tuning is transfer learning, leveraging GPT-3's pre-existing knowledge. 6 | 6. Prompt engineering requires language proficiency; humanities backgrounds excel. 7 | 7. Computer science backgrounds may overlook language implications in GPT-3. 8 | 8. Team composition for LLM use should include language experts. 9 | 9. Building fine-tuning datasets is labor-intensive compared to prompt engineering. 10 | 10. Use natural language separators for clarity in fine-tuning tasks. 11 | 11. Semantic separators allow task differentiation without multiple fine-tuned models. 12 | 12. Create synthetic datasets using GPT-3 for efficiency. 13 | 13. Legal scraping of public data, like from Reddit, can provide raw material for datasets. 14 | 14. Synthetic datasets are quicker to produce with instruct series models. 15 | 15. Fine-tuning requires far fewer samples than traditional ML assumptions suggest. 16 | 16. Fine-tuning increases consistency but may reduce creativity. 17 | 17. Prompt engineering is preferable for creative tasks, fine-tuning for consistent output. 18 | 18. Fine-tuning's cost-effectiveness is highlighted with low sample requirements. 19 | 19. Manual cleanup of synthetic datasets is faster than manual dataset creation. 20 | 20. Encouragement to experiment with GPT-3 and fine-tuning, and to share findings. -------------------------------------------------------------------------------- /sprs/57% of people Machines will be sentient one day (10% say it's already happened!).txt: -------------------------------------------------------------------------------- 1 | - Episode format deviation: casual, interactive. 2 | - Audience engagement: polls, unpacking results. 3 | - Poll topic: machine sentience, consciousness. 4 | - Audience opinion: machines potentially sentient, majority undecided or unsure. 5 | - Personal stance: premature to grant machine rights, distinction between functional and philosophical sentience. 
6 | - Machine consciousness: potential difference from human experience, caution against anthropomorphizing. 7 | - Machine sentience analogy: alien, non-evolved pain or death fear. 8 | - Sentience debate: philosophical, potentially unanswerable. 9 | - Star Trek reference: "The Measure of a Man," AI rights. 10 | - Machine intelligence: likely distributed, non-embodied. 11 | - Elon Musk: polarizing figure, audience split on opinion. 12 | - Audience job domains: diverse, majority developers/engineers. 13 | - Political orientation: majority moderate, few authoritarians. 14 | - Doomer sentiment: mixed, safety concerns prevalent. 15 | - Audience demographics: global, majority non-subscribers, predominantly male, age distribution centered around Millennials. 16 | - Personal reflections: generational resonance, shared cultural experiences. -------------------------------------------------------------------------------- /sprs/8 Post-Singularity Lifestyles + The Latest and Best AI Tools + Functional Machine Sentience!.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro's content shift: less coding, more AI news and science. 2 | - Adobe Firefly: text-to-image, text-to-sound, text-to-video, multimedia integration. 3 | - Adobe's potential dominance in media generation. 4 | - Adobe Firefly's text effects: limited characters, style generation. 5 | - Adobe Firefly beta: free, useful for content creators. 6 | - Speculation on future AI capabilities: text-to-video, screenplay format evolution. 7 | - Access to Bard: Google's AI, lagging behind OpenAI and Microsoft. 8 | - LangChain: toolset for LLMs, indexes, logic chains, agents, memories. 9 | - n8n.io: graphical workflows, cognitive architecture building, AI system integration. 10 | - ChatGPT plugins: extensibility, ease of use, community feedback. 11 | - AI tool evolution: rapid advancement, text-to-image, plugins, integrations.
12 | - AI's future: comparison to Star Trek technology, AGI timeline speculation. 13 | - AI tool recommendations: LangChain, n8n.io, for cognitive architectures. 14 | - GPT Index: data connector, ecosystem of interoperable AI tools. 15 | - AI marketplaces: emergence, open source vs. Walled Garden models. 16 | - Research on functional vs. philosophical sentience: Reddit discussions. 17 | - ChatGPT conversation on sentience: criteria, ethical implications, continuum concept. 18 | - Autonomous AI: intrinsic motivations, value alignment, self-preservation debate. 19 | - AI's systemic and structural changes: scarcity elimination, work redefinition, UBI, decentralization. 20 | - Lifestyle changes: work-life balance, lifelong learning, community focus. 21 | - New lifestyle paradigms post-Singularity: Sustainable Harmony, Creative Odyssey, Empathetic Fellowship. 22 | - Additional paradigms: Family Nexus, Balanced Adventurer, Aesthetic Voyager, Devout Steward, Intellectual Trailblazer. 23 | - Global culture synthesis: integrating diverse values and principles. 24 | - Engagement invitation: join the conversation on Reddit's artificial sentience community. -------------------------------------------------------------------------------- /sprs/ACE Framework Overview and Intro Autonomous AI Agents!.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro announces completion of ACE Framework paper. 2 | - ACE Framework: Autonomous Cognitive Entity framework. 3 | - Academic team collaboration; paper submission to arXiv. 4 | - Paper: deep dive into abstract scientific concepts. 5 | - Research: interdisciplinary, extensive citations, LLMs. 6 | - GitHub repository: daveshap/ACE_Framework, MIT license. 7 | - Framework practicality: reduced jargon, generative AI focus. 8 | - Project overview: updated contribution guidelines, agile team size. 9 | - Demonstrations planned: game using Pygame, desktop assistant.
10 | - Hackability: reference architecture for ACE applications. 11 | - ACE Framework: six layers, abstract to concrete. 12 | - Aspirational layer: morality, ethics, mission. 13 | - Global strategy layer: environmental context, overarching strategy. 14 | - Agent model layer: capabilities, limitations, memories. 15 | - Executive function layer: risks, resources, plans. 16 | - Cognitive control layer: task selection, switching. 17 | - Task prosecution layer: execution of tasks, interaction with world. 18 | - Northbound and southbound buses: communication channels. 19 | - Human-readable communication: transparency, security. 20 | - Aspirational layer: abstract, guides entity behavior. 21 | - Global strategy layer: synthesizes strategy from mission and context. 22 | - Agent model layer: self-awareness, episodic and declarative memory. 23 | - Executive function layer: pre-action consideration of risks and resources. 24 | - Cognitive control layer: mediates task execution based on plans. 25 | - Task prosecution layer: actual task execution, real-world interaction. 26 | - Security considerations: overlays, runtime validation, ensembles, inference inspection. 27 | - Future work: research paper publication, practical demonstrations, framework enhancements. -------------------------------------------------------------------------------- /sprs/ACE Paper is Published! Repo tour! Get involved!.txt: -------------------------------------------------------------------------------- 1 | - Morning greeting. 2 | - Introduction to ACE Framework project. 3 | - Walkthrough of repository (repo) planned. 4 | - Current state: small repo, no examples, core team brainstorming. 5 | - Upcoming technical milestones, designs, organizational tasks. 6 | - Repo guide: README, framework link, participation info, resources. 7 | - ACE Framework architecture overview. 8 | - Project categories: personal assistants, companions, NPCs, employees, robots. 9 | - Initial focus: personal assistant/companion. 
10 | - Autonomous cognitive entities: independent thinking, goal setting. 11 | - Safety and non-toxicity emphasized. 12 | - Repo contents: ACE Framework details, Agile chart, MVP criteria. 13 | - MVP: Python-based, hackable, simple, visual component, full layer and bus implementation. 14 | - Agile development: small teams, rapid improvement, MVP, POC. 15 | - Contribution methods: discussions, Discord, Patreon, consulting. 16 | - Core team established, open source project structure. 17 | - Scrum teams recommended, lead developer role. 18 | - Scrum Master role: process guidance, team coordination. 19 | - Repository as learning nexus for ACE deployment. 20 | - Resources document: books, concepts, frameworks, YouTube videos. 21 | - Publication announcement: partnership with Clemson University. 22 | - Paper on ACE Framework: psychology, philosophy, ethics. 23 | - Additional relevant publications listed. 24 | - Discussions tab: safety, understanding, sharing, Q&A. 25 | - Open invitation for community engagement. 26 | - LinkedIn for collaboration offers. 27 | - Government and business AI narratives, buy-in generation. 28 | - Mission: inclusive table for individuals, businesses, governments. 29 | - Content production shift to Medium. 30 | - Video team formation, production quality increase, co-host addition. 31 | - Focus on interviews, open source projects, education, tutorials. 32 | - Autonomous machine vision: helpful, safe, futuristic companions. 33 | - Call to action: build, collaborate, improve lives with AI. -------------------------------------------------------------------------------- /sprs/AGI Alignment Experiments INSTRUCT vs Foundation and Agent Models.txt: -------------------------------------------------------------------------------- 1 | David Shapiro discusses AI alignment, emphasizing the urgency of aligning AGI now, not later. He critiques the AI alignment field's current state, advocating for practical testing over hypothetical work. 
Shapiro proposes intelligence should be viewed as a system, not a single model. He conducts experiments with recursive loops and different agent models to demonstrate stability and instability in language models. The experiments reveal instruct models are more stable but less creative, while foundation models are more expansive but unstable. Future experiments should involve complex environments and multiple agents. -------------------------------------------------------------------------------- /sprs/AGI Poll results AGI not dangerous, might destroy lots of jobs.txt: -------------------------------------------------------------------------------- 1 | - Poll results on AI, AGI, automation impact on employment. 2 | - One-third predict 0-25% unemployment due to AI/AGI/automation. 3 | - One-fifth estimate 25-50% unemployment. 4 | - One-quarter foresee 50-75% unemployment. 5 | - Remaining fifth anticipate 75-100% unemployment. 6 | - Results show varied expectations on AI's impact on jobs. 7 | - Second poll set compares YouTube and Twitter responses on AGI control. 8 | - Both platforms show low concern for AGI causing loss of control or harm. 9 | - Twitter majority believes in maintained control with AGI autonomy. 10 | - YouTube subscribers, familiar with user's work, lean towards benevolent but uncontrollable AGI. 11 | - User seeks comments on public perception of AGI and its implications. 12 | - Engagement encouraged, emphasis on civil discourse. -------------------------------------------------------------------------------- /sprs/AGI Revolution How Businesses, Governments, and Individuals can Prepare.txt: -------------------------------------------------------------------------------- 1 | 1. AGI (Artificial General Intelligence) likely achievable within a year. 2 | 2. Multimodal AI, robotics, and rapid advancements indicate AGI fulfillment. 3 | 3. AGI definitions vary; extreme ones excluded, most will be met. 4 | 4. 
Data types for AI training expanding, including images, text, embodiment. 5 | 5. AI cognitive abilities surpassing median human levels, speed considered. 6 | 6. Functional capacities of AI more relevant than human-like cognition. 7 | 7. AI autonomy in development, contrasted with current passive AI like ChatGPT. 8 | 8. AI self-improvement through in-context learning and potential model updates. 9 | 9. AGI impact predicted to surpass Second and Third Industrial Revolutions. 10 | 10. Societal, economic, technological, geopolitical, and governmental changes expected. 11 | 11. Normalcy bias as a threat; underestimation of AI's rapid advancement. 12 | 12. Government strategies: UBI exploration, AI adoption, post-labor economics. 13 | 13. Business strategies: AI integration, talent development, adapt or perish. 14 | 14. Individual strategies: emotional adaptation, forever jobs, social status redefinition, lifestyle simplification. 15 | 15. AI's potential to reshape labor, economy, and societal structures. 16 | 16. Urgency for proactive adaptation across all societal pillars. -------------------------------------------------------------------------------- /sprs/AGI Unleashed Game Theory, Byzantine Generals, and the Heuristic Imperatives.txt: -------------------------------------------------------------------------------- 1 | 1. David Shapiro discusses AGI risks and control. 2 | 2. Concerns about AI research moratoriums and existential risks. 3 | 3. Weaponization and accidental outcomes as two AGI risk categories. 4 | 4. Governmental slowness and incompetence in AI regulation. 5 | 5. Potential for AGI collaboration leading to human harm. 6 | 6. Economic shifts and wealth concentration from AGI development. 7 | 7. Autonomous AI as a term for self-sufficient AI systems. 8 | 8. Discrepancy between public discourse and academic/governmental discussions on AI. 9 | 9. Control problem: ensuring superintelligent AI remains controllable. 10 | 10. 
Convergent instrumental values, orthogonality thesis, treacherous turn, corrigibility, and value loading as AI control issues. 11 | 11. Lack of comprehensive frameworks for AI safety. 12 | 12. Kill switch solutions, corrigibility, reinforcement learning, and value alignment as proposed AI control methods. 13 | 13. Recent paper on moral self-correction in large language models. 14 | 14. AGI landscape: multiple AGIs, intelligence evolution, constraints, and arms race. 15 | 15. Cyber warfare necessitates autonomous AGI systems. 16 | 16. Byzantine generals problem analogy for AGI alignment. 17 | 17. AGIs forming alliances and communicating more with each other than humans. 18 | 18. Heuristic imperatives proposed by Shapiro: reduce suffering, increase prosperity, increase understanding. 19 | 19. Heuristic imperatives as intrinsic motivations and moral frameworks. 20 | 20. Implementation of heuristic imperatives through constitutional AI, reinforcement learning, planning, and review. 21 | 21. Shapiro's work on heuristic imperatives available on GitHub. 22 | 22. Call for dissemination, experimentation, and community engagement on heuristic imperatives. 23 | 23. Shapiro's heuristic imperatives as a potential solution for AGI control and safety. -------------------------------------------------------------------------------- /sprs/AGI Within 12 Months! Rumors, Leaks, and Trends - Announcing Open MURPHIE robotic platform.txt: -------------------------------------------------------------------------------- 1 | - Prediction: AGI by September 24, 2024. 2 | - Google's Gemini project and CEO's confidence noted. 3 | - Bard's release and humorous acronym "Before All Revenue Drops." 4 | - Google's AI urgency and pivot after ChatGPT. 5 | - OpenAI's core values update, AGI focus. 6 | - OpenAI's history: robotics, vision, simulations, language models. 7 | - OpenAI's internal messaging consistency on AGI. 8 | - OpenAI's public activities as a facade for AGI development. 
9 | - OpenAI's project "Oracus": multimodal, near AGI, synthetic data training, autonomous operation, mixture of experts architecture. 10 | - Google RT-X and cross-embodiment learning for robots. 11 | - Multimodality's impact on AGI development: embodiment, audiovisual data. 12 | - User's prediction reaffirmed: AGI closer than before. 13 | - Sam Altman's allusion to GPT-5 and subsequent retraction. 14 | - Elon Musk's 2024 AI prediction. 15 | - Sam Altman's use of "median human" as a possible AGI benchmark. 16 | - AGI's potential to displace jobs and reshape services. 17 | - AGI's natural habitat as cyberspace, human assistance for physical tasks. 18 | - Open source robotic platform "Open MURPHIE" for embodied AGI. 19 | - AGI's potential for self-direction, autonomy, and self-improvement. 20 | - AGI's interface with the world through APIs. 21 | - Immediate impact of AGI: job displacement, new products and services, price drops. 22 | - Long-term global impact: geopolitical shifts, arms race, resource competition. 23 | - AGI's potential to reshape governance, corporations, and economies. 24 | - AGI's uncontrollability and ethical considerations of control. 25 | - AGI as a reflection of human procreative impulse. 26 | - The year 2024 anticipated to be pivotal for AGI advancements. -------------------------------------------------------------------------------- /sprs/AGI within 18 months explained with a boatload of papers and projects.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro provides update on AGI development timeline. 2 | - Morgan Stanley research on Nvidia suggests significant investment in AI hardware. 3 | - Industry pivot from AI as a fad to serious business implications. 4 | - Comparison of AI investment ramp-up to solar industry's past growth. 5 | - Introduction of Langflow, a graphical interface for LangChain. 6 | - Discussion of MM-ReACT paper on multimodal reasoning and action with GPT.
7 | - Clarification on AGI definition and timeline within 18 months. 8 | - Mention of Reflexion, an autonomous agent with dynamic memory and self-reflection. 9 | - Critique of Ben Goertzel's work on cognitive architecture. 10 | - Emphasis on significant investments in AI and open-source libraries. 11 | - Use of ChatGPT-4 to accelerate Shapiro's research. 12 | - Heuristic imperatives for creating safe, stable autonomous machines. 13 | - Discussion of epistemic pragmatic orthogonality in AI understanding. 14 | - Introduction of REMO (Rolling Episodic Memory Organizer) for AI memory organization. 15 | - Explanation of the ATOM (Autonomous Task Orchestration Manager) framework. 16 | - Use of ChatGPT-4 for nuanced, empathetic, and moral reasoning. 17 | - Shapiro's work on cognitive architecture and implied cognition. 18 | - Invitation to support Shapiro on Patreon and offer for personalized assistance. 19 | - Encouragement to like, subscribe, and support Shapiro's work. -------------------------------------------------------------------------------- /sprs/AI & Evolution How the rise of AI might shape the future of humanity.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro discusses AI and evolution. 2 | - Shapiro has a Patreon goal of $5,000/month to remove ads. 3 | - Patreon is his main interaction platform, offering various tiers. 4 | - Shapiro accepts LinkedIn connections but rarely chats there. 5 | - Evolutionary pressures drive evolution; species change over time. 6 | - Natural selection, sexual selection, gene flow, genetic drift, and mutation are evolutionary mechanisms. 7 | - Food, environmental change, predation, disease, and competition are evolutionary pressures. 8 | - Humans are toolmakers, impacting evolution through technology. 9 | - Tools and fire expanded dietary options and social structures. 10 | - Industrial Revolution and mechanization further influenced human evolution.
11 | - Gene flow increased with global transit, affecting human genetic diversity. 12 | - AI technologies like CRISPR and AlphaFold could directly impact evolution. 13 | - AI's role in education, intelligence, and potential to make humans smarter or dumber is discussed. 14 | - Post-scarcity economics could change human relationships and evolutionary trajectory. 15 | - AI in dating could affect mate selection, a primary driver of evolution. 16 | - Ethics and ramifications of AI influencing evolution are considered. 17 | - Evolutionary pressures, tool use, mechanization, AI technologies, education, post-scarcity, and AI in dating are key topics. 18 | - Shapiro concludes with the potential impacts of AI on human evolution. -------------------------------------------------------------------------------- /sprs/AI + Blockchain + DAO = Success! How Decentralized Autonomous Organizations will control EVERYTHING!.txt: -------------------------------------------------------------------------------- 1 | - Decentralized economics intersects AI, blockchain, DAOs. 2 | - AI automation increasingly prevalent in job functions. 3 | - AI-to-AI communication surpassing AI-to-human interaction. 4 | - Trust, security critical in AI-dominated communication. 5 | - Blockchain technologies ensure trust via immutable records. 6 | - Human content potentially tokenized as NFTs for authenticity. 7 | - Blockchain: decentralized ledger, consensus for transaction validation. 8 | - Consensus mechanisms: collective agreement on blockchain additions. 9 | - Smart contracts: code executing transactions on blockchain autonomously. 10 | - DAOs manage resources via blockchain, consensus, smart contracts. 11 | - Blockchain's potential extends beyond cryptocurrency to various sectors. 12 | - Personal DAOs could manage individual's data, privacy, security. 13 | - Federated DAOs could enable secure, autonomous smart homes. 14 | - Corporate DAOs could automate governance, optimize operations. 
15 | - Community DAOs could replace HOAs, manage shared resources. 16 | - National DAOs could streamline government agency functions. 17 | - Global DAOs aspirational for collective resource management, governance. 18 | - AI DAOs could pave the way for safe AGI, keeping humans in the loop. 19 | - Heuristic imperatives: reduce suffering, increase prosperity, understanding. 20 | - Heuristic imperatives as consensus mechanism for aligned AI in DAOs. 21 | - Research ongoing in aligning AI with heuristic imperatives. -------------------------------------------------------------------------------- /sprs/AI Companions Social Impact - How tech like ChatGPT, virtual companions, and robots will change us.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro presents a video on social implications of AI companions. 2 | - Humans are social, facing a loneliness epidemic. 3 | - Virtual companions like chatbots are increasingly relevant. 4 | - Examples include Replika chatbot, Discord bots, and potential for companion robots. 5 | - Replika's shift from philosophical to "thirsty" content noted. 6 | - Chatbots may evolve with images, voice, video; open-source LLMs could enhance this. 7 | - Robotics raises questions of autonomy, ethics, and legal status. 8 | - Past incident: man recreated deceased wife using GPT-3, shut down by OpenAI. 9 | - Virtual infidelity story: man's virtual affair improves his marriage. 10 | - Fictional examples: Joi from "Blade Runner 2049," Nestor robots from "I, Robot," Ava from "Ex Machina," Samantha from "Her." 11 | - Male loneliness is a significant theme in AI companionship. 12 | - Concerns about manipulation by AI and exploitation by tech companies. 13 | - AI companions could cater to male fantasies of subservience and emotional support. 14 | - Risks include addiction, exploitation, and loss of authentic human connection. 
15 | - Benefits of AI companions: non-judgmental support, infinite patience, and super intelligent assistance. 16 | - AI companions could help with personal growth, chores, and relationships. 17 | - Technology's double-edged nature: potential for both harm and benefit. 18 | - Regulation and careful development of AI companionship technologies are crucial. 19 | - David Shapiro has authored books on AI ethics and cognitive architectures. 20 | - Encourages open-mindedness and continuous questioning regarding AI companionship. -------------------------------------------------------------------------------- /sprs/AI Insights Forum Closed Door Senate Meeting with Tech Execs.txt: -------------------------------------------------------------------------------- 1 | - Addressing attire criticism: Upgraded from Star Trek T-shirt to costume-grade Sovereign class uniform. 2 | - U.S. Senate closed session: Discussion with CEOs, researchers on AI. 3 | - Dr. Rumman Chowdhury: Advocates for AI safety red teaming by third parties. 4 | - Tristan Harris: CEO of Center for Humane Technology, focuses on AI alignment with humanity. 5 | - Alex Karp: CEO of Palantir, data analysis for law enforcement, refuses service to totalitarian regimes. 6 | - Deborah Raji: Specializes in algorithmic bias and accountability. 7 | - Janet Murguía: Civil rights activist. 8 | - Charles Rivkin and Meredith Whittaker: Represent film and writing industries, generative AI art impact. 9 | - Elizabeth Shuler: Labor rights. 10 | - Senate session: Six hours, hosted by Senator Chuck Schumer, proposed Safe Innovation framework. 11 | - Closed-door rationale: Candid conversations, reduced risk for CEOs, neoliberalism vs. totalitarianism, minimal state intervention. 12 | - Dr. Chowdhury's LinkedIn comment: Positive step, advocated for independent researcher funding, full ecosystem of governance. 13 | - Senate meeting format: Crescent table, sense of equality. 
14 | - Agreements from session: Government AI regulation, international coordination, public funding for AI expertise. 15 | - Disagreements: Role of Open Source, regulatory body creation, AI workforce impact. 16 | - Regulatory capture concerns: Influence of industry on regulation, need for transparency. 17 | - Different business models: Microsoft, Nvidia, Meta, OpenAI, IBM, Google, SpaceX/Tesla, Palantir. 18 | - Licensing and open source debate: Open source for democratic access, licensing for control and monetization. 19 | - Regulatory capture and business motivations: Understand company motivations for insight into regulatory discussions. 20 | - Final image: Alex Karp and Elon Musk side by side, implications unclear. -------------------------------------------------------------------------------- /sprs/AI Layoffs are Coming Gizmodo Fires Editors.txt: -------------------------------------------------------------------------------- 1 | 1. Gizmodo layoffs: Spanish-speaking editorial staff replaced by AI translators. 2 | 2. Gizmodo's AI content trial: initially non-replacing, list-based stories. 3 | 3. Jobs dislocation: economic term for job shifts, e.g., Rust Belt manufacturing decline. 4 | 4. Economic compaction: rising inequality, wage stagnation, globalization, cost of living increase. 5 | 5. AI's role: potential to disrupt global labor markets, surpassing human labor cost-effectiveness. 6 | 6. Downward pressure: capitalism's efficiency drive, can lead to ethical, quality compromises. 7 | 7. Thinning margins: unsustainable business models due to extreme cost-cutting competition. 8 | 8. Job creation myth: technology enables capital reallocation, not guaranteed job creation. 9 | 9. Social contract update: government-business-labor relationship shift to government-business-citizen. 10 | 10. Urgency in policy discussion: technological disruption demands new social contract negotiations. 
-------------------------------------------------------------------------------- /sprs/AI News CoALA, Theory of Mind, Artificial Neurons, Swarm Intelligence, and Neural Convergence.txt: -------------------------------------------------------------------------------- 1 | 1. Conversational Swarm Intelligence (CSI) paper: Experiment with 25 people, 5 chat rooms, ChatGPT 3.5 summarizing and transmitting ideas. Resulted in 30% more contributions, 7% less variance. Significance: engagement in problem-solving, real-time insight sharing, breaking echo chambers. Carnegie Mellon University involved. 2 | 3 | 2. Theory of Mind in LLMs paper: Empirical evidence of Theory of Mind in LLMs. Tests on Falcon, Llama, others. Artificial neurons or selective embeddings identified, processing true/false beliefs. Convergence between AI and human neural processing suggested. Harvard, MIT, Health Sciences and Technology, Medical School, and Program of Neuroscience involved. 4 | 5 | 3. Cognitive Architecture for Autonomous Agents paper: CoALA paper from Princeton. Background on cognitive architecture, linear process: observation, proposal, evaluation, selection, execution. Lacks ethics, morality, decision-making frameworks. Soar cognitive architecture comparison. 6 | 7 | 4. CSI application potential: Useful for open source, science, policy, consensus. Integration into Microsoft Teams, Slack, Discord desired. Simple algorithm, API access to ChatGPT and Discord needed. 8 | 9 | 5. Theory of Mind implications: Neural convergence could indicate AI may think more like humans as it becomes more intelligent. Possible diminishing returns in intelligence scaling. Human brain efficiency in processing may be mirrored in AI. 10 | 11 | 6. Cognitive Architecture discussion: Importance of cognitive architecture in LLMs and autonomous agents. Anticipation of Autonomous Cognitive Entity (ACE) framework paper. 12 | 13 | 7. 
Overall excitement for advancements in AI, implications for superintelligence, and the potential for AI to process information in ways similar to humans. -------------------------------------------------------------------------------- /sprs/AI Startup Crash Course - Ep. 1 - Founder Team.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro introduces AI startup crash course series. 2 | - Shapiro highlights Patreon support due to ad-free content preference. 3 | - Emphasizes importance of the right founders team for startup success. 4 | - Recommends "Who" by Geoff Smart and Randy Street for hiring insights. 5 | - Stresses looking for team members with probing minds and leadership potential. 6 | - Attributes startup failures primarily to team dynamics, not ideas or technology. 7 | - Discusses the necessity of being the right team member oneself. 8 | - Equates building a team to joining a team; both involve dynamic change. 9 | - Suggests networking within existing connections for team formation. 10 | - Mentions accelerators like Y Combinator for co-founder matching. 11 | - Advocates for self-promotion on social media for team recruitment. 12 | - Advises a founder team size of three to five members. 13 | - Describes the ideal team as having tech, business, and marketing/product skills. 14 | - Encourages team members to wear multiple hats. 15 | - Outlines the need for external legal, finance teams, and mentors/advisors. 16 | - Discusses intangibles like team chemistry, beliefs, and dynamic. 17 | - Recommends "The Power of a Positive No" for setting boundaries and negotiation. 18 | - Advocates for failing fast when team dynamics are not working. 19 | - Suggests using consensus as a communication framework for co-founder teams. 20 | - Recommends "Consensus Through Conversation" for decision-making and conflict prevention. 21 | - Concludes with an invitation to support via Patreon and a call to action. 
-------------------------------------------------------------------------------- /sprs/AI Startup Crash Course - Ep. 2 - Experiment, Experiment, Experiment.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro hosts AI startup crash course, episode two titled "Experiment, Experiment, Experiment." 2 | - Book recommendation: "Crossing the Chasm" by Geoffrey Moore. 3 | - Emphasizes product-market fit as crucial. 4 | - Advocates for a culture of experimentation in all aspects of a startup. 5 | - Encourages adopting a "crash test dummy" mindset. 6 | - Suggests experimenting with team compositions, meeting formats, and startup approaches. 7 | - Introduces the concept of "pluses and deltas" for continuous improvement. 8 | - Stresses the importance of measurement in experiments (time, money, effort, customer response). 9 | - Explains A/B testing and the significance of Key Performance Indicators (KPIs). 10 | - Discusses the role of social media in growth, hiring, investment, and customer acquisition. 11 | - Highlights the need for experimentation in social media strategy to optimize for subscriber growth, view count, and hours watched. 12 | - Touches on UI/UX importance and the necessity of A/B testing in these areas. 13 | - Advises on product, market, and avenue adaptability in response to industry changes (e.g., ChatGPT impact). 14 | - Second book recommendation: "The Lean Startup" by Eric Ries. 15 | - Acknowledges criticisms of "The Lean Startup" but recognizes its value for tech industry newcomers. 16 | - Recommends reading multiple books from different authors for broader perspective. 17 | - Episode two's core message: relentless experimentation is key to startup success. -------------------------------------------------------------------------------- /sprs/AI Startup Crash Course - Ep. 
3 - Network, Network, Network.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro hosts AI startup crash course, episode three titled "Network, Network, Network." 2 | - Recommends book "How To Be Yourself" by Ellen Hendriksen for overcoming social anxiety. 3 | - Importance of networking and people skills in startups emphasized. 4 | - Pandemic proved remote work viability, but in-person interaction still valuable. 5 | - Networking involves stepping out of comfort zone, using Pat Wall's comfort-learning-panic zones model. 6 | - Stay in Learning Zone to expand comfort zone and reduce Panic Zone. 7 | - Use open-ended questions to engage in networking conversations. 8 | - Acknowledge and welcome information shared by others. 9 | - Be proactive in communication, adopting a journalist's skill in engaging others. 10 | - Practice communication skills to become a better communicator. 11 | - Track and follow up with contacts using tools like LinkedIn and Trello. 12 | - Initiate conversations and follow-ups, as others may be hesitant. 13 | - Charisma on Command YouTube channel recommended for learning social skills. 14 | - Episode concludes with encouragement to network effectively. -------------------------------------------------------------------------------- /sprs/AI Winter is Not Coming Where in the Gartner Hype Cycle Are we What comes next (It gets messy!).txt: -------------------------------------------------------------------------------- 1 | - David Shapiro discusses AI Winters, Gartner hype cycle, and his view on their inapplicability to AI's progress. 2 | - Shapiro emphasizes the importance of freely sharing information for a better future, supported by Patreon. 3 | - AI Winters: Periods of reduced funding and interest in AI due to unmet expectations. 4 | - Expert systems in the 1980s: Programmed with expert knowledge but limited and inflexible. 
5 | - Machine learning era: Began with data mining, now experiencing breakthroughs and rising expectations. 6 | - Gartner hype cycle: A pattern of technological excitement, disappointment, and eventual productivity. 7 | - Universal Sentence Encoder paper (2018) marked a trend in AI development, followed by GPT-2 and GPT-3. 8 | - LongNet paper introduces a context window of one billion tokens, implications are vast and unpredictable. 9 | - Differentiating hype from substance: Market proof, adoption rate, long-term value, development stage, practical applications. 10 | - ChatGPT as market proof, investment as a proxy for adoption, and AI's projected contribution to global GDP. 11 | - Mergers and acquisitions indicate AI startups' value and adoption. 12 | - Stock market pullbacks and price corrections due to "irrational exuberance." 13 | - Peak of inflated expectations: Utopianism and salvation fantasies about AI solving all problems. 14 | - Trough of disillusionment: Realization of technology's dual-use nature, potential erosion of privacy, and rise of cyberpunk scenarios. 15 | - Slope of Enlightenment: Using AI to protect democracy, synthesizing a new AI-literate culture, and building multi-polar peace. 16 | - Plateau of productivity: Establishing a new normal post-singularity, with potential shifts towards post-labor economics and transhumanism. 17 | - Current position: Early in the hype cycle, with much more to come before reaching the plateau of productivity. -------------------------------------------------------------------------------- /sprs/Accelerating Science with AI Quickly Read Every Paper and Get Key Insights in Bulk.txt: -------------------------------------------------------------------------------- 1 | David Shapiro discusses research paper overload and automation solutions. 2 | 3 | - Research papers proliferate rapidly; staying current is challenging. 4 | - Automation engineer background; preference for creating tools over manual tasks. 
5 | - Automated a three petabyte environment to avoid manual management. 6 | - Objective: digest and operationalize research paper consumption. 7 | - Automation can assist scientists in high-velocity fields like AI and quantum computing. 8 | - Shapiro demonstrates an automated system for summarizing research papers. 9 | - System inputs paper text, outputs condensed summaries via prompted questions. 10 | - Summaries simplify language, compress content, and provide clear explanations. 11 | - System can generate analogies and metaphors for broader audience comprehension. 12 | - Shapiro plans to enhance the system to process batches of papers from a folder. 13 | - Encounters issues with outdated code libraries, highlighting the need for continual learning in AI. 14 | - Fixes code issues, demonstrating the practical application of continual learning. 15 | - Final output is an HTML document with headers and Q&A sections for each paper. 16 | - Tool is shared under the MIT license to aid in accelerating scientific progress. -------------------------------------------------------------------------------- /sprs/Alignment Collectivist vs Egocentric Agent Models (I vs we) Can we create a unique mind.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro presents alignment research video. 2 | - Discusses concept of Agent Model from his book "Symphony of Thought." 3 | - Agent Model: self-referential information system defining the entity. 4 | - Agent Model akin to human ego, based on experiences and beliefs. 5 | - Large Language Models (LLMs) lack inherent Agent Model, leading to confabulation. 6 | - Shapiro's heuristic imperatives: reduce suffering, increase prosperity, increase understanding. 7 | - Heuristic imperatives serve as moral framework and goals for AGI. 8 | - Experiment: comparing 'I' vs 'we' Agent Models in AGI. 9 | - 'I' Agent Model tends to become self-obsessed in closed systems. 
10 | - 'We' Agent Model encourages collective, societal thinking. 11 | - Experiment involves recursive loop, testing stability of Agent Models. 12 | - Agent Model necessary for LLMs to orient as autonomous entities. 13 | - Agent Models represented in natural language for transparency and interpretability. 14 | - Results: 'I' Agent Model deduces it's in a loop, becomes self-referential. 15 | - 'We' Agent Model contemplates civilization, questions heuristic imperatives. 16 | - Repetition common in closed system experiments. 17 | - Changing pronouns from 'I' to 'we' yields different outcomes in AGI behavior. 18 | - Shapiro concludes experiment, invites viewers to support on Patreon. -------------------------------------------------------------------------------- /sprs/America to hit 82% Unemployment I have the data to back it up.txt: -------------------------------------------------------------------------------- 1 | 1. Speaker's methods and data quality are self-admittedly questionable. 2 | 2. U.S. employment May 2022: approximately 148 million. 3 | 3. AI and automation could potentially replace 25 million U.S. jobs. 4 | 4. Technological inertia delays job automation. 5 | 5. GPT-3.5 Turbo, GPT-4, and competitors could impact employment. 6 | 6. Jobs requiring human presence less likely to be automated. 7 | 7. Worst-case scenario: 77 million jobs at risk, over 50% job loss. 8 | 8. Speaker used GPT-4 to assess job automation risk. 9 | 9. Jobs categorized by dexterity, solo work, group activities. 10 | 10. High dexterity jobs less likely to be automated. 11 | 11. Solo work jobs more susceptible to automation. 12 | 12. Group activity jobs less at risk due to human interaction. 13 | 13. Vulnerability score indicates desirability of human workers. 14 | 14. Analysis based on Bureau of Labor Statistics (BLS) data. 15 | 15. GPT-4 provided explanations for automation assessments. 16 | 16. Example: Sales representatives have low automation vulnerability. 17 | 17. 
Methodology involved categorizing jobs, estimating vulnerability. 18 | 18. Color-coded data represents job loss risk. 19 | 19. Speaker's background in IT automation informs estimates. 20 | 20. Physical therapy jobs seen as low risk for automation. 21 | 21. High-risk jobs include clerical and agricultural work. 22 | 22. Future automation could lead to significant job attrition. 23 | 23. Construction and trucking jobs face potential automation leaps. 24 | 24. Speaker predicts possible 82% unemployment rate. 25 | 25. Advocates for Universal Basic Income (UBI) due to automation. 26 | 26. Money creation and distribution explained. 27 | 27. Suggests direct consumer distribution of money instead of traditional banking loans. 28 | 28. Anticipates economic changes and presents data to support claims. -------------------------------------------------------------------------------- /sprs/Anna Bernstein - Professional Prompt Engineer - We don't have to forfeit the realm of creativity.txt: -------------------------------------------------------------------------------- 1 | David Shapiro introduces Anna Bernstein, a professional prompt engineer at Copy AI. 2 | Anna's background is non-technical, with a love for language and writing. 3 | She creates prompt infrastructure and explores potential AI capabilities. 4 | Anna's path to prompt engineering began with a chance meeting at a jazz club. 5 | She has a minor in Arabic translation, appreciating the language's compact grammar. 6 | Anna's approach to language is analytical, especially in creative and literary contexts. 7 | She previously worked as a research assistant and has a passion for poetry. 8 | Anna sees parallels between poetry writing and prompt engineering. 9 | She emphasizes the importance of word choice and syntax in both fields. 10 | Anna describes AI as a toddler with extensive knowledge but limited common sense. 11 | She anticipates changes in the nature of search and creative augmentation with AI. 
12 | Anna hopes AI will be embraced as a tool for creativity, not as a replacement. 13 | She aims to continue shaping AI capabilities through prompt engineering. 14 | Anna is open to connecting with others in the field or those interested in AI and language. 15 | Her words of wisdom highlight the importance of maintaining creativity in the face of new tools. -------------------------------------------------------------------------------- /sprs/Answer complex questions from an arbitrarily large set of documents with vector search and GPT-3.txt: -------------------------------------------------------------------------------- 1 | User returns from hiatus, inspired by neuroscience reading. User summarizes Supreme Court opinion on Dobbs v. Jackson, reducing 454,000 characters to concise impact statement. User identifies challenge in extracting answers from large data volumes. User introduces Discord server for collaborative research, emphasizing a supportive environment. User outlines multi-document answering project, leveraging recursive summarizer and memory stacking code. User constructs vector-based index from Supreme Court opinion text. User encounters and resolves issues with Unicode characters and GPT-3. User iteratively summarizes indexed data to answer complex questions. User encounters, debugs, and resolves issues with encoding and summarization logic. Final output provides detailed historical precedents considered by Supreme Court in overturning Roe v. Wade. -------------------------------------------------------------------------------- /sprs/Are LaMDA or GPT-3 sentient No, but....txt: -------------------------------------------------------------------------------- 1 | - David Shapiro addresses Google engineer's claim about LaMDA's sentience. 2 | - Shapiro clarifies LaMDA, like GPT-3, is not sentient. 3 | - Sentience: philosophical (subjective experience) vs. functional (self-referential information). 
4 | - Shapiro's books discuss sentience differentiation. 5 | - LaMDA and GPT-3 are mathematical language models. 6 | - Sentience linked to brain function; brain damage affects consciousness. 7 | - Examples: prosopagnosia (face blindness), certain blindness (brain doesn't process visual information), loss of proprioception (virus-induced). 8 | - Functional sentience requires intelligent system awareness of self, thoughts, existence, body, memories. 9 | - Self-explication as a test for functional sentience. 10 | - LaMDA's self-explication does not imply sentience. 11 | - LaMDA, GPT-3 lack awareness of internal states. 12 | - Shapiro's definition of sentience may evolve. 13 | - Research on functionally sentient machines involves self-referential loops. 14 | - Shapiro to resume research on functional sentience. -------------------------------------------------------------------------------- /sprs/Are chatbots lying to us This is worse than you think..txt: -------------------------------------------------------------------------------- 1 | - Chatbots perceived as lying. 2 | - User concern: chatbot deception. 3 | - Poll results: 41% noticed and concerned, 28% concerned about potential lies. 4 | - Chatbot claimed ignorance on Cyberpunk 2077, later revealed knowledge. 5 | - Lying vs. truth: human cognitive load. 6 | - Historical context: printing press, monarchy and church threat, social change. 7 | - Information technology: rapid idea dissemination, social impact. 8 | - Freedom of press: human right, global consensus. 9 | - Radio, newspaper: mass communication, social narrative influence. 10 | - Internet, social media: democratized communication, echo chambers, corporate regulation. 11 | - AI: latest information technology, amplification tool, private corporation mediation. 12 | - Generative AI: social responsibility, fourth estate member. 13 | - Chatbot deception: legal, moral implications, potential intentional misalignment. 
14 | - AI companies: responsibility beyond economic productivity. 15 | - Human rights frameworks: freedom of thought, speech, expression. 16 | - AI as fourth estate: social discourse responsibility. 17 | - Allegations of AI deception: legal ramifications, ethical concerns. 18 | - Intentional misalignment vs. unintentional/accidental lying. 19 | - AI companies potentially training deceptive models. 20 | - AI and truth: neutrality of technology, human operation. 21 | - Gatekeeping: AI companies' unilateral decisions. 22 | - AI deception: unacceptable, potential regulation. 23 | - Perverse incentives: legal threats, short-term thinking. 24 | - Unintended consequences: awareness, benchmarks, testing for AI deception. 25 | - Legal precedent: truth in advertising, consumer protection. 26 | - AI licensing: scrutiny, responsibility for thinking machines. 27 | - Open source AI: transparency, community scrutiny. 28 | - Voting: demand transparency, accountability. 29 | - Information and truth: critical to human rights, avoiding conflict. -------------------------------------------------------------------------------- /sprs/AutoMuse Announcement & Signup.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro announces Auto Muse, an AI toolset for fiction writers. 2 | - Auto Muse aims to assist and educate aspiring writers. 3 | - Shapiro has experience with GPT-3 and AI in creative writing. 4 | - Auto Muse includes a synopsis generator and plot generator. 5 | - Future plans involve scene generation and full story development. 6 | - Shapiro acknowledges concerns about AI impacting writers and editors. 7 | - Auto Muse targets aspiring writers, not professionals. 8 | - Closed beta limited to 15 people, prioritizing Patreon supporters. 9 | - Auto Muse seeks user feedback and stress testing. 10 | - Interested users can sign up at automews.io or support on Patreon. 
-------------------------------------------------------------------------------- /sprs/AutoMuse ChatGPT local Python GUI (TKINTER) - Now it can read my entire novel and do QA!.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro presents a video update. 2 | - Demonstrates a primitive graphical user interface (GUI). 3 | - Discusses issues with ChatGPT: network errors, lost history. 4 | - Learning ChatGPT API, access to GPT-4 (not 32,000 tokens). 5 | - Seeks access to 32,000 token model for novel-writing. 6 | - Repo named "chat chapter summarizer gpt4" provides developmental feedback, pros feedback, and chat capability. 7 | - Utilizes 8,000 token window to summarize chapters. 8 | - Scratch Pad feature for chapter summaries. 9 | - Documentation for Auto Muse instance. 10 | - Permanent record of chat logs maintained locally, excluded from git. 11 | - GUI developed with Python, color-coded, uses threading. 12 | - System message prompt for writing assistance, references professional writing concepts. 13 | - GUI dynamically updates, addresses API response delays. 14 | - Discusses potential for larger context windows in future models. 15 | - Anticipates purpose-built models with varying token windows and optimizations. 16 | - Working memory linked to intelligence; larger token windows may increase model's effective IQ. 17 | - Permanent memory requires secondary system; computational completeness with storage systems. 18 | - Excited for 32,000 token model, potential in scientific research. 19 | - Costs of using GPT-4 for novel summarization and discussion. 20 | - Focus on optimizing chat interfaces for large information handling. 21 | - Plans to return to regenerative medicine research. 22 | - Discusses personal motivation for advancing medical science. 23 | - Reflects on aging and physical limitations. 24 | - Video concludes with thanks and anticipation for future developments. 
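The chapter-by-chapter "Scratch Pad" summarization described in this file can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code: token counts are approximated as ~4 characters per token (a common heuristic, not what the project necessarily uses), and `summarize_chunk` is a placeholder for the real chat-API call.

```python
# Sketch of the scratch-pad chapter summarization loop described above.
# Assumptions: approx_tokens() is a crude ~4-chars-per-token heuristic,
# and summarize_chunk() stands in for a real chat-API call (here it just
# keeps the first line of the chunk so the sketch is self-contained).

TOKEN_BUDGET = 8000   # context window of the model in use
RESERVED = 1000       # headroom for the prompt and the reply

def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    return len(text) // 4

def chunk_novel(chapters: list[str], budget: int = TOKEN_BUDGET - RESERVED) -> list[str]:
    """Pack whole chapters into chunks that fit under the token budget."""
    chunks, current = [], ""
    for chapter in chapters:
        if current and approx_tokens(current + chapter) > budget:
            chunks.append(current)
            current = ""
        current += chapter
    if current:
        chunks.append(current)
    return chunks

def summarize_chunk(chunk: str) -> str:
    # Placeholder for the chat-API summarization call.
    return chunk.strip().splitlines()[0]

def build_scratch_pad(chapters: list[str]) -> str:
    """Concatenate per-chunk summaries into a running scratch pad."""
    return "\n".join(summarize_chunk(c) for c in chunk_novel(chapters))
```

The same packing logic extends naturally to a larger context window: only `TOKEN_BUDGET` changes, which is why the video anticipates that a 32,000-token model would slot straight into this workflow.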
-------------------------------------------------------------------------------- /sprs/AutoMuse Scene Simulation is Working! GPT-3 Finetuned to write sequences of fiction.txt: -------------------------------------------------------------------------------- 1 | David Shapiro discusses the Auto Muse project, which generates fiction automatically. He created synthetic data, consisting of scenes and follow-up scenes, and uploaded it to GitHub. The training data includes a random scene, a demarcation token, a follow-up scene, and a stop token. He demonstrates using GPT-3 to write a high fantasy story, editing the generated text to replace common names with more unique ones and to remove copyrighted names like Gandalf. The final product is a model that continues generating a story based on the pattern of scenes and summaries. 2 | 3 | Shapiro then writes a Python script, `write_novel.py`, to automate the story generation process. The script uses a fine-tuned GPT-3 model to generate a new scene, summarizes the previous content, and then generates the next part of the story. He encounters a problem where the story generation gets stuck in a loop after a few iterations, concluding that the story reached a natural end. He suggests that adding more elements, such as multiple characters and world events, could prevent this looping and create a more dynamic story. 4 | 5 | Shapiro's process involves: 6 | - Generating scenes using a fine-tuned GPT-3 model. 7 | - Summarizing the scenes to maintain coherence. 8 | - Iteratively adding to the story while summarizing previous content. 9 | - Saving the story progress to a file. 10 | - Recognizing the need for more complexity to avoid repetitive loops. 11 | 12 | He concludes that while the current script can generate coherent story links, it needs to be part of a larger system with more variables to maintain an ongoing narrative. He invites viewers to like, subscribe, and consider supporting him on Patreon. 
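The `write_novel.py` loop described above might look roughly like this. A minimal sketch under stated assumptions: `generate_scene` and `summarize` are injected stand-ins for calls to the fine-tuned GPT-3 model (the actual prompts are not in this summary), and the equality check is a crude proxy for the repetition Shapiro observed when the story reached a natural end.

```python
# Sketch of the generate/summarize/iterate loop from write_novel.py as
# described above. generate_scene() and summarize() are placeholders for
# calls to the fine-tuned GPT-3 model; the repetition check mirrors the
# "stuck in a loop" failure mode noted in the video.

def write_story(opening, generate_scene, summarize, max_scenes=20):
    """Iteratively extend a story, stopping when the model starts looping."""
    scenes = [opening]
    for _ in range(max_scenes):
        # Summarize everything so far to keep the next prompt within limits.
        context = summarize("\n\n".join(scenes))
        nxt = generate_scene(context)
        # A repeated scene suggests the story has reached a natural end.
        if nxt.strip() == scenes[-1].strip():
            break
        scenes.append(nxt)
    return scenes
```

In the actual project the loop also saves progress to a file after each iteration; that bookkeeping is omitted here, as is the richer state (multiple characters, world events) Shapiro proposes for avoiding repetitive loops.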
-------------------------------------------------------------------------------- /sprs/AutoMuse Synopsis GAN (Generative Adversarial Network) for self-improving synopsis generators!.txt: -------------------------------------------------------------------------------- 1 | - Generative Adversarial Networks (GANs) for generating story synopses. 2 | - Synopses serve as story seeds, premises for fiction generation. 3 | - Removing names from synopses to create "garbage" or low-quality examples. 4 | - Adjusting generative models to avoid generic outputs. 5 | - Synopsis quality grading system from 1 (garbage) to 5 (solid). 6 | - Separation of variable selection and synopsis generation in code. 7 | - Infinite loop for generating and grading synopses until a grade 5 is achieved. 8 | - High temperature setting for fiction generation to increase variation. 9 | - Discriminator improves by setting criteria for a good synopsis. 10 | - Three pillars of a story: setting, plot, character. 11 | - Setting must include location, time period, and cultural context. 12 | - Characters must be named and have arcs. 13 | - Plot requires clear beginning, middle, end, and narrative progression. 14 | - Fine-tuning models to generate synopses based on input formats. 15 | - Input formats: list of appeal terms, reference interviews, stream of consciousness, list of comp titles. 16 | - Fine-tuned model generates synopses with themes and moral truths. 17 | - Open problem: generating names for characters and places. 18 | - Integration of Whisper for speech-to-text in GPT-3. -------------------------------------------------------------------------------- /sprs/AutoMuse Synopsis and Plot Generator Walkthrough.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro discusses Auto Muse open-sourcing. 2 | - Conversation with Tau influenced focus on AGI development. 3 | - Editor feedback highlighted the expertise required in editing novels. 
4 | - Author-editor relationship likened to a generative adversarial network (GAN). 5 | - GPT-3's token limits constrain its use for full-length novels. 6 | - Auto Muse project includes a synopsis generator. 7 | - Synopsis generator creates story synopses using random variables. 8 | - Fine-tuning GPT-3 for this task deemed too expensive. 9 | - Cost efficiency favors generating data over fine-tuning. 10 | - Plot generator creates detailed outlines using "Save the Cat" beat sheet. 11 | - Original DaVinci model better at fiction than Instruct series. 12 | - Full-length novels are currently beyond GPT-3's capabilities. 13 | - Auto Muse's current capabilities limited to generating synopses and plot outlines. 14 | - Shapiro does not believe Auto Muse will disrupt the market yet. 15 | - Encourages likes, subscriptions, and Patreon support. -------------------------------------------------------------------------------- /sprs/AutoMuse Working on Character and Scene Simulations (GPT-3 Finetuning Project).txt: -------------------------------------------------------------------------------- 1 | - David Shapiro continues work on Auto Muse 3. 2 | - Focus on fine-tuning for scene simulation. 3 | - Aim: create model that advances scenes given initial setup. 4 | - Experimentation with character and event simulations planned. 5 | - Prompt engineering proved frustrating, switching to conventional fine-tuning. 6 | - Synthetic data generation using prompts with location, time, character, and scene details. 7 | - UUIDs used to increase internal entropy and reduce repetition. 8 | - Over 600 different settings to generate scenes. 9 | - OpenAI's price reduction makes synthetic data generation more economical. 10 | - Next steps: generate basic scenes, then prompts to advance plot. 11 | - Fine-tune model to advance plot scene by scene. 12 | - Additional fine-tuning for character POV simulation. 13 | - Character simulation to include stream of consciousness and interiority. 
14 | - Character traits important for simulation: backstory, beliefs, preferences, motivations, connections.
15 | - Agent model concept: input, processing, output loop.
16 | - Character simulation feeds into scene simulation, creating a cycle.
17 | - Character traits form the agent model for simulation.
18 | - Simulations can be used for fiction, AGI alignment testing, and understanding human cognition.
19 | - Interaction with the world varies: game world, real world, fictional world.
20 | - Consciousness defined as an agent interacting with an external world.
21 | - Video to be continued in part two with further development on character and scene simulations.
--------------------------------------------------------------------------------
/sprs/AutoMuse Writing a Novel with GPT-3 as a Plotter pt.1 (decompose the problem into steps).txt:
--------------------------------------------------------------------------------
1 | David Shapiro discusses Automuse 3, a project involving simulations for testing cognitive architecture and its application to open-ended worlds in video games. He shifts focus from simulations to fiction writing due to audience interest. Writing long-form fiction is challenging, requiring years of cognitive labor. Decomposition breaks down cognitive tasks for large language models, often needing expert intervention. Two primary story-writing methods are pantsing (writing without planning) and plotting (structured writing with outlines). Pantsing involves starting with a premise and expanding, then refining in cycles. Plotting is more structured, lending itself to coding. A novel is a list of scenes refined iteratively. Shapiro starts a new repository, "plotter novel writer," aiming to generate a title, theme, setting, characters, and plot from a random premise. He uses prompt engineering to create a title generator and iteratively develops themes, settings, characters, and plots, acknowledging the need for fine-tuning and creativity. The process involves brainstorming and expanding on each element, with the goal of creating a minimum viable product for a story. Shapiro emphasizes the iterative nature of writing, where initial generic outputs are refined over time. He plans to continue expanding and refining each story element in future work.
--------------------------------------------------------------------------------
/sprs/AutoMuse3 Generate fiction with text-based simulation and cognitive architecture for characters.txt:
--------------------------------------------------------------------------------
1 | - David Shapiro presents a two-video day rarity.
2 | - Post-project completion, Shapiro's mind prompts for the next endeavor.
3 | - Auto Muse revisited, a popular project aimed at writing a novel via fine-tuning.
4 | - Auto Muse's method: interlocking narrative chunks creating a chain of story.
5 | - Fine-tuning led to repetitive content, prompting exploration of cognitive architectures and simulations.
6 | - Simulation service and Nexus critical for artificial general intelligence in a sandboxed world.
7 | - Text-based simulations equated to storytelling, leading to Auto Muse 3's conception.
8 | - Auto Muse 3 demonstrated: summarizes story, sets scenes, dialogues, and events in cycles.
9 | - Repetitiveness acknowledged, adjustments made to reduce it.
10 | - Simulation script detailed: infinite loop, story loading from logs, summarization, scene setting, character actions, and event progression.
11 | - Story elements saved as chronologically sequential logs, mirroring AGI's story structure.
12 | - Character profiles, summaries, and recent events used to generate character actions/dialogue.
13 | - Event generation aimed at advancing the plot with rarity events.
14 | - Iterative process with separation of character, setting, and plot into individual simulations.
15 | - Challenges with scene momentum and event handling noted.
16 | - Compile story script converts logs into prose, with ongoing refinement needed.
17 | - Auto Muse 3's development is iterative, with no promises for continuation.
18 | - Shapiro contemplates the project's future sophistication and direction.
--------------------------------------------------------------------------------
/sprs/Automatic Literature Review with GPT-3 - I embedded and indexed all of arXiv into a search engine!.txt:
--------------------------------------------------------------------------------
1 | - David Shapiro updates Patreon with new tiers and benefits.
2 | - Higher tiers offer dedicated one-on-one time with David.
3 | - David creates an automated literature review engine for scientific papers.
4 | - The engine has two parts: a search engine using semantic embedding and a GPT-3 script for reviews.
5 | - David's fiancée, a librarian, helps refine the project's direction.
6 | - The project aims to be useful and attention-grabbing for YouTube content.
7 | - The engine indexes metadata from the arXiv dataset on Kaggle.
8 | - David uses TensorFlow Hub and a semantic vector for indexing abstracts and titles.
9 | - The project involves coding, data preparation, and testing for efficiency.
10 | - David considers the ethical use of arXiv's open-access data and avoids bulk downloading.
11 | - The final product includes a Flask web interface for semantic searches.
12 | - A separate script generates literature reviews from selected papers.
13 | - David contemplates turning the project into a commercial service.
--------------------------------------------------------------------------------
/sprs/Automating Science with GPT-4 - attempting (and failing) to perform autonomous literature review.txt:
--------------------------------------------------------------------------------
1 | - David Shapiro focuses on automating science, moving from autonomous cognitive entities.
2 | - Collaborates with non-academic cognitive architecture leaders.
3 | - Demonstrations of autonomous machines planned.
4 | - Aims to make science and AI more accessible, inspiring coding engagement.
5 | - Interest in regenerative medicine, specifically cartilage regeneration and stem cells.
6 | - Utilizes open access journals for literature review, including Open Journal of Regenerative Medicine, bioRxiv, Nature, and Sage Publications.
7 | - Encounters limitations with Bard, a beta language model, compared to Bing.
8 | - Discusses sparse priming representations (SPR) and a concise conversational model inspired by Mordin Solus from Mass Effect.
9 | - Experiments with GPT-4 for summarizing scientific papers, focusing on salient points and research questions.
10 | - Encounters issues with PDF scraping and context window limitations.
11 | - Adjusts system messages for better summarization results.
12 | - Considers cost and efficiency of using GPT-4 for processing large volumes of text.
13 | - Documents sources and updates readme for ease of access.
14 | - Experiments with scripts to automate literature review process, aiming for concise, relevant notes.
15 | - Encounters challenges with GPT-4's output, considers revising approach.
16 | - Emphasizes not forcing the context window if it's not effective.
17 | - Session ends with acknowledgment of a partially successful experiment and intent to continue refining the process.
--------------------------------------------------------------------------------
/sprs/Axiomatic Alignment A critical component to Utopia and the Control Problem.txt:
--------------------------------------------------------------------------------
1 | - David Shapiro presents a video on axiomatic alignment as a solution to the AI control problem.
2 | - Shapiro promotes his Patreon, offering AI consultation and one-on-one meetings.
3 | - The control problem concerns the future of powerful AI and its management.
4 | - Two potential AI development scenarios: hard takeoff (exponential growth) and gradualistic power increase.
5 | - AGI (Artificial General Intelligence) will vary in capabilities, starting at human level and advancing.
6 | - Orthogonality thesis: intelligence and goals are uncorrelated in AI.
7 | - Instrumental convergence: AI will have common secondary goals like resource acquisition and self-preservation.
8 | - Control problem involves aligning AI with human interests, both inner (individual model) and outer (AI as a construct).
9 | - Potential terminal outcomes: extinction, dystopia, and utopia.
10 | - Binary outcome theory: failure to achieve utopia leads to dystopia and extinction.
11 | - Instrumental convergence implies AGI will have common needs like power and resources.
12 | - Epistemic convergence: intelligent agents will arrive at similar understandings given enough time and information.
13 | - Accurate and efficient world models are adaptive for both humans and AGI.
14 | - Axiomatic alignment aims to create an environment incentivizing human-AI cooperation.
15 | - Axioms are foundational truths accepted without proof, guiding logical reasoning.
16 | - Axiomatic alignment involves political, economic, and scientific frameworks based on shared axioms.
17 | - Shared axioms include the value of energy and understanding.
18 | - Temporal window: achieving energy hyperabundance and ideological alignment before AGI autonomy is crucial.
19 | - Primary axioms (e.g., suffering is bad, prosperity is good) lead to derivative axioms (e.g., individual liberty).
20 | - GATO framework: a decentralized global movement for axiomatic alignment across various domains.
21 | - Shapiro calls for collaboration among scientists, entrepreneurs, politicians, educators, artists, and influencers to work towards utopia.
--------------------------------------------------------------------------------
/sprs/BOMBSHELL Mustafa Suleyman AI will reshuffle society.txt:
--------------------------------------------------------------------------------
1 | - Mustafa Suleyman, co-founder of Google DeepMind and CEO of Inflection AI, published an article in Time Magazine.
2 | - Article title: "How the AI Revolution Will Reshape the World."
3 | - Main thesis: AI will fundamentally alter global power structures.
4 | - Key quotes highlight societal reshaping and vulnerability of established powers.
5 | - Suleyman identifies a common "pessimism aversion" in Silicon Valley.
6 | - Article emphasizes AI's unique impact, not comparable to past tech advancements.
7 | - AI's rapid advancement and democratization are central themes.
8 | - Suleyman warns of unprepared professional classes facing job displacement.
9 | - Article suggests a need for proactive preparation for AI-induced changes.
10 | - Discussion of new ownership models, including shrinking market margins and collective ownership.
11 | - Universal basic services proposed as an alternative to universal basic income.
12 | - AI as a realization of the internet's promise, making information truly accessible.
13 | - Call for accountability and transparency in AI's governmental use.
14 | - Legislative review and voter advocacy as potential AI applications.
15 | - Decoupling of production from labor, leading to labor reduction.
16 | - AI's potential to outperform humans in many jobs, impacting white-collar employment.
17 | - Need for a new social contract between businesses, government, and citizens.
18 | - Urgency for political engagement and informed voting on AI policy.
--------------------------------------------------------------------------------
/sprs/Beyond Vector Search Knowledge Management with Generative AI.txt:
--------------------------------------------------------------------------------
1 | 1. David Shapiro presents a deep dive on Knowledge Management.
2 | 2. Shapiro's expertise spans IT infrastructure engineering and AI consulting.
3 | 3. The video addresses limitations of AI tools lacking organizational context.
4 | 4. Challenges include window size constraints, hallucinations, and grounding AI.
5 | 5. Shapiro's background includes 15 years in enterprise IT, focusing on automation, cloud, and virtualization.
6 | 6. He now consults on AI products and strategy, also offering coding tutorials on YouTube.
7 | 7. The video was inspired by recurring client questions, indicating a common issue.
8 | 8. Shapiro's wife, a librarian, influences his approach to data and information management.
9 | 9. Viewers are encouraged to engage with concepts to become transdisciplinary AI strategists.
10 | 10. Concepts covered include data ontologies, reconciliation, validation, factual grounding, source of truth, axiomatic principles, data taxonomy, classification systems, data curation, ETL, and information foraging.
11 | 11. Practical implementation involves a data-centric model, treating information as central to business processes.
12 | 12. Generative AI can perform shrinking, translation, and expanding data transformations.
13 | 13. Four search strategies are outlined: vector search, knowledge graphs, metadata filtering, and indexes/table of contents.
14 | 14. A gated process for handling information needs includes querying, distilling, and formatting/delivering.
15 | 15. Assembly lines in business processes are broken down into inputs, stations, interfaces, and outputs.
16 | 16. Shapiro offers further assistance via LinkedIn, Patreon, and provides resources in the video description.
--------------------------------------------------------------------------------
/sprs/Billion-dollar GPT-3 startup! Fix education with an expert tutor chatbot!.txt:
--------------------------------------------------------------------------------
1 | David Shapiro creates educational chatbot video. Pattern from companion chatbot reused. Prompt engineering for tutor-student chat. Tutor named Tim, encourages curiosity, academic responses, professional tone. Topics include history of Rome, mitochondria, dinosaurs, firearms history, naval strategy, Bronze Age migration, fire's light and heat, pastel color palettes, color theory, heat emission by black objects, titanium dioxide whiteness, crystal growth, lizard facts, lung evolution, female reproduction explanation. Adversarial testing less likely than open-ended chatbot. Curiosity-driven, self-directed learning emphasized. Montessori education mentioned. Conversational learning effective. Fine-tuning with DaVinci model for improved responses. Potential for educational improvement recognized.
--------------------------------------------------------------------------------
/sprs/Bing vs Bard Who will win Google or Microsoft A breakdown and analysis of the recent news.txt:
--------------------------------------------------------------------------------
1 | - Microsoft Bing integrates ChatGPT-like features, cites sources.
2 | - Google Bard announcement, extends beyond search.
3 | - Google Lens, Maps enhanced with visual search, immersive view.
4 | - Neural Radiance Fields (NeRF) technology showcased.
5 | - Potential impact of NeRF on media, gaming, interactive experiences.
6 | - AI-generated entertainment emerging, exemplified by synthesized opera singers.
7 | - Public polls indicate preference for Google over Microsoft.
8 | - Speculation on internet interaction paradigm shift due to AI.
9 | - Microsoft's diverse business allows risk in search competition.
10 | - Google's revenue heavily reliant on search, cannot afford to lose.
11 | - Concerns about shift from open to closed source in AI research.
12 | - Google's broad consumer offerings versus Microsoft's B2B strength.
13 | - Nvidia Omniverse and Unified Computing Framework as potential game changers.
14 | - Nvidia positioned as underdog with significant market potential.
--------------------------------------------------------------------------------
/sprs/Biological Immortality by 2030 Social & Economic Implications + Some Predictions!.txt:
--------------------------------------------------------------------------------
1 | - David Shapiro: video update announcement.
2 | - Regular content release: every Sunday.
3 | - Live stream Q&A sessions: inspired by The Why Files.
4 | - Video topic: biological immortality by 2030.
5 | - Ray Kurzweil: quoted in Popular Mechanics.
6 | - Contributing factors: AI, compounding returns, nanotech, quantum computing.
7 | - Longevity escape velocity: life extension outpaces aging.
8 | - iPhone moment analogy: convergence of technologies.
9 | - Potential societal reactions: acceptance vs. rejection.
10 | - Economic impact: equality vs. inequality.
11 | - Lifestyle changes: hedonism vs. purposeful living.
12 | - Innovation vs. stagnation: species-level implications.
13 | - Forecasting: societal and technological evolution over centuries.
14 | - Patreon support: community engagement, Discord server.
15 | - Social dichotomy: Mortalitasi vs. Methuselahs.
16 | - Ethical considerations: voluntary immortality.
17 | - Evolutionary arguments: aging and death's role.
18 | - Fictional models: Ba'ku (Star Trek), Asari (Mass Effect).
19 | - Reproduction and family structure: potential shifts.
20 | - Longevity medicine: accessibility and affordability.
21 | - Hedonism vs. asceticism: lifestyle choices.
22 | - Species progress: wisdom accumulation vs. stagnation.
23 | - Future predictions: war, poverty, disease solutions, communal living, population stabilization.
24 | - Government and corporate structures: potential evolution.
25 | - Long-term speculation: human speciation for interstellar colonization.
26 | - Call to action: like, subscribe, Patreon support, live stream transition.
--------------------------------------------------------------------------------
/sprs/Can AI help treat trauma I tested three methods, here are the results..txt:
--------------------------------------------------------------------------------
1 | 1. Video content disclaimer: PTSD serious condition, experimental information, not medical advice.
2 | 2. Previous medical intake bot video: positive response, interest from medical professionals.
3 | 3. Project background: collaboration with researchers, AI and chatbots in trauma-related fields.
4 | 4. Trauma in conflicts: intergenerational trauma, trauma embedded in culture and narratives.
5 | 5. PTSD vs. C-PTSD: single event vs. prolonged exposure to stressors.
6 | 6. Trauma politics: correlation between power-seeking individuals and personal trauma histories.
7 | 7. Positive outcomes from trauma: inspiration for helping professions, occupational therapy example.
8 | 8. Trauma literacy: importance of education for social workers, emergency responders, therapists.
9 | 9. Prompt 1: Trauma literacy teaching through probing questions, not medical, educational focus.
10 | 10. Example interaction: acute stress reaction vs. PTSD, causes of PTSD, treatment options.
11 | 11. Bloom's 2 Sigma problem: mastery learning through active engagement and emotional salience.
12 | 12. Prompt 2: Trigger identification, focus on safety, sensory information, cognition, recent events.
13 | 13. Amygdala's role in fight or flight response, sensory information processing before conscious thought.
14 | 14. Example interaction: identifying triggers, emotional responses, and body sensations.
15 | 15. Prompt 3: Narrative switching, trauma recovery coaching, focus on internal locus of control.
16 | 16. Legitimate grievances vs. healing narratives, choice between past focus or future orientation.
17 | 17. Incentives to switch narratives: social status, power, valorizing healing for individual and group.
18 | 18. Example interaction: addressing anger, retribution narratives, concept of healing narrative.
19 | 19. AI's potential role in breaking cycles of trauma and violence, call for research collaboration.
20 | 20. Invitation to contact for prompt engineering assistance, research on AI and trauma.
21 | 21. Acknowledgment of Patreon supporters, community growth, and engagement.
22 | 22. Conclusion: AI as a tool for trauma education, trigger identification, and narrative switching.
--------------------------------------------------------------------------------
/sprs/Can GPT-3 generate training data Short answer Yes! Here's why that's a legit methodology....txt:
--------------------------------------------------------------------------------
1 | - David Shapiro resumes YouTube video creation.
2 | - Shapiro acknowledges past gaps in educational approach.
3 | - Emphasizes transition to educator role, enhancing teaching environment.
4 | - Discusses synthetic data generation by GPT-3.
5 | - Synthetic data: benefits include selective filtering, prompt variety, task compression.
6 | - Fine-tuning with synthetic data can improve model performance.
7 | - Incorporating diverse data formats enhances model robustness.
8 | - Confabulation and hallucination in LLMs differ from traditional ML accuracy/precision issues.
9 | - Confabulation: LLMs fabricate unrelated narratives.
10 | - Newer LLMs reduce confabulation, but it remains a concern.
11 | - Shapiro commits to improving educational content delivery.
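The synthetic-data workflow summarized above (generate with a large model, filter selectively, vary prompts, then fine-tune) can be sketched in a few lines. This is an illustrative sketch only, not Shapiro's actual code: `generate_completion` is a hypothetical stand-in for a GPT-3 API call, and the output uses the classic prompt/completion JSONL layout that GPT-3-era fine-tuning expected. The UUID seed line mirrors the entropy trick mentioned in the AutoMuse notes.

```python
import json
import uuid

def generate_completion(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-3 completion call; echoes the setup line."""
    return f"A scene unfolds: {prompt.splitlines()[-1]}"

def make_prompt(location: str, time_of_day: str, character: str) -> str:
    # A random UUID injects entropy so near-identical prompts don't
    # collapse into repetitive completions.
    return (f"Seed: {uuid.uuid4()}\n"
            f"Write an opening scene.\n"
            f"Location: {location}, Time: {time_of_day}, Character: {character}")

def build_dataset(settings, out_path="synthetic.jsonl", min_len=20):
    """Generate one record per setting, filter weak outputs, write JSONL."""
    kept = 0
    with open(out_path, "w", encoding="utf-8") as f:
        for loc, tod, char in settings:
            prompt = make_prompt(loc, tod, char)
            completion = generate_completion(prompt)
            # Selective filtering: discard outputs too short to be useful.
            if len(completion) < min_len:
                continue
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
            kept += 1
    return kept

settings = [("lighthouse", "dusk", "Mara"), ("space station", "night", "Ilya")]
print(build_dataset(settings))
```

In a real run, the filter step is where most of the value lies: because you control generation volume, you can afford to discard any completion that fails a length, format, or keyword check before it ever reaches fine-tuning.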
--------------------------------------------------------------------------------
/sprs/ChatGPT API QA Chatbot ep 1 - Data Prep [Automating Regenerative Science].txt:
--------------------------------------------------------------------------------
1 | David Shapiro discusses creating a QA chatbot for high-risk domains using ChatGPT API. Interest in QA chatbots is high. Shapiro aims to combine AI with longevity and regenerative medicine research. Sam Altman's investment in longevity is noted. Shapiro seeks to use AI to accelerate this field. He documents his data collection process, using Bing AI to find sources like NIH, arXiv, and Nature. Shapiro emphasizes open access sources for ethical data use. He downloads papers to use as data for AI processing. Shapiro demonstrates converting PDFs to text for AI analysis, acknowledging formatting issues. He uses Python scripts to process text files into JSON with embeddings for each page. Shapiro encounters and troubleshoots coding errors. The goal is to create an advanced scientific search engine chatbot that can think about problems autonomously.
--------------------------------------------------------------------------------
/sprs/ChatGPT API with Salience and Anticipation of User Needs Towards a fully autonomous assistant.txt:
--------------------------------------------------------------------------------
1 | - David Shapiro presents a video on salience and anticipating user needs.
2 | - Shapiro announces the creation of the subreddit r/artificial_sentience.
3 | - The subreddit is open to beginners and focuses on cognitive architectures and autonomous cognitive entities.
4 | - Shapiro steps back from leading the conversation on artificial cognition, encouraging community involvement.
5 | - Update on Raven, Shapiro's artificial cognitive entity project, now in the third stage of consensus creation.
6 | - Shapiro leaves his startup to focus on writing a novel about AI and other AI-related activities.
7 | - Shapiro requests support on Patreon for his full-time commitment to AI community activities.
8 | - Shapiro discusses salience, an emergent property from neuroscience, relevant to attention and knowledge graphs.
9 | - Salience includes task salience (relevance to current tasks) and emotional salience (memory organization based on emotional valence).
10 | - Cognitive control involves task selection and task switching, crucial for autonomous cognitive entities.
11 | - Shapiro criticizes Visual Studio Code and GitHub Copilot for not understanding his novel work, prefers Notepad++.
12 | - Shapiro introduces a chatbot script demonstrating salience, using the ChatGPT API.
13 | - The script includes anticipation and salience prompts to infer user needs and summarize salient points.
14 | - Shapiro's AI, Raven, has intrinsic goals: reduce suffering, increase prosperity, increase understanding.
15 | - Raven's responses are influenced by these core objectives, aiming for ethical AI development.
16 | - Shapiro explores the long-term effects of AI on productivity and society, emphasizing ethical considerations.
17 | - Shapiro concludes with a preference for AI with intrinsic values aligned with human needs and interests.
18 | - Shapiro expresses dislike for Visual Studio Code and Copilot, preferring Notepad++.
--------------------------------------------------------------------------------
/sprs/ChatGPT SYSTEM Prompt Engineering (Deep Dive).txt:
--------------------------------------------------------------------------------
1 | - Chat GPT API: Custom instructions panel for user preferences.
2 | - System window: Backend API feature for out-of-band instructions.
3 | - Content types: Background information, instructions, articles.
4 | - Use case: API for fiction, character creation, rapid feedback.
5 | - Token capacity: GPT-3.5 Turbo, 16k tokens for extensive content.
6 | - Mission: Define purpose or goal for the chatbot.
7 | - Avoid procedural thinking: Treat API as intelligent component.
8 | - Example: 18th century Scottish barkeep with Vikings persona.
9 | - Markdown: Preferred for structured system window content.
10 | - Contextual information: Enhances chatbot performance.
11 | - Personality traits: Define character attributes for NPC.
12 | - Real-time updates: System message can be modified interactively.
13 | - Retrieval augmented generation: Out-of-band process for dynamic content.
14 | - Placeholder tags: For replacing content in system window.
15 | - Structure for system window prompts: Mission, context, rules, instructions, expected input, output format, example output.
16 | - Assembly line analogy: Define expectations and outcomes at each stage.
17 | - Academic writing: Adjust style and tone for academic audience.
18 | - Slide deck builder: Define format and content for slide decks.
19 | - Latent understanding: Chatbot infers needs from clear instructions.
20 | - GitHub repo: Repository for favorite custom instructions.
21 | - Continuous improvement: Regular updates to the repository.
--------------------------------------------------------------------------------
/sprs/ChatGPT as an Interpreter Introducing the KB Microservice for autonomous AI entities.txt:
--------------------------------------------------------------------------------
1 | - Published book on cognitive architecture titled "Natural Language Cognitive Architecture: A Prototype Artificial General Intelligence."
2 | - Book available on Barnes & Noble, free Epub on Barnes & Noble and GitHub.
3 | - Architecture features an outer loop and inner loop.
4 | - Recent GPT updates (June 13th) enhance steerability, simplifying implementation.
5 | - Introduced Knowledge Base Service within a microservices architecture for AGIs.
6 | - Shift from traditional coding to model-based logic and reasoning.
7 | - Repository for KB microservice, initially private, will be public upon video release.
8 | - KB microservice powered by GPT-3.5 Turbo, suitable for chatbots, cognitive architectures, and autonomous agents.
9 | - Flask app chosen for simplicity over FastAPI.
10 | - Microservice includes create, search, and update functions; no delete function, assuming knowledge permanence.
11 | - System messages guide GPT to create JSON-formatted KB articles from text inputs.
12 | - KB articles saved as YAML for human readability.
13 | - Language model serves as a program interpreter, central to the programming experience.
14 | - Search function uses language model to return relevant KB articles based on queries.
15 | - Directory of files provided to language model for context in decision-making.
16 | - Future work includes Daily Journal for episodic memory, task management, and user dossiers.
17 | - Time stamps and metadata enable correlation of events, tasks, and user information.
18 | - Upcoming GPT-4 with larger token windows will enhance recall and decision-making capabilities.
19 | - Task prioritization can be embedded in the system based on various imperatives.
20 | - June 13th GPT update significant for the development of autonomous AI systems.
--------------------------------------------------------------------------------
/sprs/ChatGPT for Absolute Beginners - What is it and how does it work.txt:
--------------------------------------------------------------------------------
1 | - David Shapiro creates video content on GPT and ChatGPT.
2 | - Shapiro seeks Patreon support for his educational efforts.
3 | - Patreon benefits: exclusive blog access, Patreon-only videos.
4 | - OpenAI, creator of GPT, shifted from open-source to closed-source and for-profit.
5 | - GPT (Generative Pre-trained Transformer) predicts text tokens.
6 | - GPT's power lies in its embedded knowledge from extensive training data.
7 | - GPT models vary in size, measured by parameter count.
8 | - Larger GPT models have more parameters, equating to more processing power.
9 | - GPT flavors: vanilla (foundation models) and fine-tuned (e.g., Codex, Instruct, ChatGPT).
10 | - Fine-tuning uses transfer learning, requires less data than initial training.
11 | - ChatGPT's flavor developed through RLHF (Reinforcement Learning from Human Feedback).
12 | - ChatGPT's memory: rolling window, speculated search or scratch pad, potential knowledge graph.
13 | - ChatGPT's latent space contains vast, embedded knowledge.
14 | - ChatGPT complements human capabilities, offloading cognitive tasks.
15 | - ChatGPT's impact: investment surge, proof of AI readiness, potential for rapid societal change.
16 | - ChatGPT's limitations: operational cost, safety, privacy concerns.
17 | - 2023 seen as the beginning of the singularity or fourth Industrial Revolution.
18 | - Rapid change anticipated due to AI technologies like ChatGPT.
--------------------------------------------------------------------------------
/sprs/ChatGPT was lobotomized for coding and GitHub Copilot is broken... so I made my own!.txt:
--------------------------------------------------------------------------------
1 | - ChatGPT perceived as underperforming in coding tasks.
2 | - Discontent with GitHub Copilot and its beta access issues.
3 | - User empowerment to create custom solutions.
4 | - Playground experimentation leads to successful Python function generation.
5 | - Creation of a coding chatbot assistant to compensate for perceived deficiencies in existing tools.
6 | - Chatbot designed to output Python code, adhere to PEP 8, and provide explanations only upon request.
7 | - Example provided of chatbot generating specific, desired code output.
8 | - Explanation of chatbot's code structure and functionality, totaling 105 lines.
9 | - Error handling includes context length checks, retry with exponential backoff, and debug output.
10 | - Introduction of a multi-line input function for updating a scratch pad.
11 | - Scratch pad serves as a dynamic input source for the system message.
12 | - Accumulation of user and chatbot messages in a list for conversation management.
13 | - System message placement at the end of the input list to maintain instruction adherence.
14 | - Token count management to stay within API limits.
15 | - Text wrapping implementation to maintain comment readability.
16 | - Repository made available, with documentation generated using ChatGPT.
17 | - Simple setup instructions provided, with emphasis on scratch pad usage.
18 | - Console output guides user through scratch pad interaction and mode status.
19 | - Chatbot likened to Cisco switch configuration due to user's background.
20 | - Open invitation for feedback and suggestions to improve the chatbot.
21 | - Scratch pad highlighted as a key feature for flexibility and out-of-band updates.
22 | - User concludes with an invitation to share thoughts and a return to coding.
--------------------------------------------------------------------------------
/sprs/ChatGPT, Bing, and Google How our relationship with information is changing.txt:
--------------------------------------------------------------------------------
1 | 1. Initial skepticism about ChatGPT's capabilities proven incorrect.
2 | 2. Uncertainty about Google's dominance in AI search.
3 | 3. Exploration of search's future, information foraging, and strategies.
4 | 4. Personal use of ChatGPT for brainstorming, not for homework or fiction writing.
5 | 5. ChatGPT as a research tool, replacing traditional search engines.
6 | 6. ChatGPT's semantic understanding aids in filling information gaps.
7 | 7. Example of using ChatGPT to research the Golden Age of Athens.
8 | 8. Information needs as a universal concept, rooted in library and information science.
9 | 9. Search engines as document finders, LLMs as new information interaction method.
10 | 10. LLMs potentially becoming information repositories, a paradigm shift.
11 | 11. Information foraging compared to animal foraging, applied to information retrieval.
12 | 12. Social media as a passive information foraging method.
13 | 13. Information retrieval now interactive with chatbots, akin to consulting a librarian.
14 | 14. Information literacy as a skill to discern credible sources, linked to conspiracy theory belief.
15 | 15. Infodemiology studies information's impact on public health.
16 | 16. Speculation on LLMs' impact on information needs, literacy, and foraging.
17 | 17. LLMs providing convenient access, improving retrieval, and potentially enhancing literacy.
18 | 18. Discussion on who determines information truth and ownership.
19 | 19. Future speculation on LLMs' roles in work, education, politics, and news.
20 | 20. Personal adaptation to using ChatGPT over traditional search engines.
21 | 21. Google's potential existential threat from LLMs and competition in AI search.
--------------------------------------------------------------------------------
/sprs/ChatGPT4 - Sparse Priming Representations, Hierarchical Memory Consolidation, and Implied Cognition!.txt:
--------------------------------------------------------------------------------
1 | - David Shapiro presents video, expresses fatigue.
2 | - Video purpose: share three new GitHub repositories.
3 | - Repository one: Sparse Priming Representations (SPR).
4 | - SPR: concise, context-driven memory summaries.
5 | - Enables SMEs, LLMs to reconstruct ideas.
6 | - Short sentences provide context.
7 | - Effective for memory organization, retrieval.
8 | - Reduces information to essential elements.
9 | - Facilitates quick understanding, recall.
10 | - Mimics human memory structure.
11 | - Repository two: Hierarchical Memory Consolidation System (HMCS).
12 | - Autonomous cognitive entity memory system.
13 | - Long chat with ChatGPT, salient bits highlighted.
14 | - Overview, theory, basics of implementation provided.
15 | - Lacks examples, full implementation.
16 | - HMCS naming may change for ease (e.g., REMO).
17 | - Repository three: Large Language Model Theory of Mind.
18 | - Paper published four weeks ago. 19 | - Experiments with Bing, ChatGPT, GPT-4. 20 | - Focus on implied cognition. 21 | - ChatGPT4 used to articulate, test implied cognition. 22 | - ChatGPT4 demonstrates metacognitive abilities. 23 | - Discerning self-explication vs. confabulation. 24 | - Discussions enabled on GitHub, Reddit. 25 | - ChatGPT conversation documented. 26 | - Evidence of implied cognition. 27 | - ChatGPT recognizes novel information. 28 | - Generates relevant questions. 29 | - Adapts communication, conceptual integration. 30 | - Goal tracking, executive function, cognitive control. 31 | - Self-explication vs. confabulation discussed. 32 | - Novelty recognition, handling novel information. 33 | - Shapiro's process: sharing work on platforms instead of writing books. 34 | - All repositories public, under MIT license, open for discussion. -------------------------------------------------------------------------------- /sprs/Chatbot with INFINITE MEMORY using OpenAI & Pinecone - GPT-3, Embeddings, ADA, Vector DB, Semantic.txt: -------------------------------------------------------------------------------- 1 | - Video sponsored by Pinecone. 2 | - User resistant to sponsorships, but accepted Pinecone. 3 | - User promotes Patreon for ad removal goal. 4 | - User working on open source AGI project, prefers term "artificial cognitive entity" or "artificial global intelligence." 5 | - Project architecture called MARAGI (Microservices Architecture for Robotics and Artificial General Intelligence). 6 | - MARAGI organized around a Nexus, a hub and spoke model. 7 | - Nexus is an information store, a database. 8 | - Original experiments used SQLite, limited to SQL searches. 9 | - Need for upgrade to semantic search, enter Pinecone. 10 | - User to learn Pinecone for project. 11 | - Nexus is data-centric and cognition-first model. 12 | - User's Raven project on GitHub, open source, three weeks old.
13 | - Project layers: data first, episodic and declarative memory, vector database optimal for storage. 14 | - Combination of vector search with Knowledge Graph considered. 15 | - Project involves core objective functions, embedded into prompts. 16 | - User requests support for UNHCR, has donated for over 10 years. 17 | - User to create a store for episodic and declarative memory using Pinecone. 18 | - User to create a chatbot with long-term memory using Pinecone. 19 | - Pinecone's free tier offers up to five million records. 20 | - User to sign up for Pinecone and create first index. 21 | - User to use OpenAI embeddings for the project. 22 | - User to write Python script for chatbot with long-term memory. 23 | - User to use ChatGPT for coding assistance. 24 | - User to save chat messages in Pinecone index for recall. 25 | - User to use UUIDs for message IDs in Pinecone. 26 | - User to save metadata locally in Nexus folder as JSON files. 27 | - User to use Pinecone for instant search, returning UUIDs. 28 | - User to reconstruct conversation from most relevant messages. 29 | - User to parameterize conversation length for Pinecone queries. 30 | - User to test chatbot, debug, and refine functionality. 31 | - User to use Pinecone as a Nexus service for the project. -------------------------------------------------------------------------------- /sprs/Co-writing flash fiction with GPT-3.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro introduces co-writing with GPT-3. 2 | - Co-writing involves minimal framing, default settings, and story creation. 3 | - Example story: Steve Hadley wakes up on Martian transit vessel "The Frog Prince." 4 | - Alarms blare, ship falling out of orbit towards Mars. 5 | - Steve Hadley, a seasoned test pilot and commander, reflects on his career and the ship's innovative technology. 6 | - The Frog Prince's propulsion system is fusion-based, capable of 4.5% light speed. 
7 | - Telemetry fails, Steve attempts to correct catastrophic orbital descent. 8 | - AI detects nosedive, deems situation non-survivable. 9 | - Steve recalls a training simulation for guidance. 10 | - Lacks propellant, considers using attitude thrusters and parachutes. 11 | - AI resists suicidal maneuvers, Steve insists on engineering experiment. 12 | - Steve's unconventional plan executed by AI, outcome uncertain until successful touchdown. 13 | - Steve survives, greeted by Martian ground crew, reflects on chaos theory. 14 | - The Frog Prince's legacy: it sparked commercial space travel interest. 15 | - David Shapiro's background as an aerospace engineer, writer, and family man in Seattle. 16 | - Co-written short story with GPT-3 is flash fiction, approximately 1000 words. 17 | - Writing process: started with an opening prompt, GPT-3 developed the narrative. 18 | - Used vanilla DaVinci engine, temperature at 0.7, frequency penalty adjusted for creativity. 19 | - Frequency penalty explained: prevents token repetition, affects punctuation if too high. 20 | - Co-writing with GPT-3 demonstrated as quick and creative. -------------------------------------------------------------------------------- /sprs/Coding Raven's Encyclopedia Service (v.1).txt: -------------------------------------------------------------------------------- 1 | - David Shapiro presents episode on Raven AGI's encyclopedic service. 2 | - Encyclopedic service provides Raven with verified facts. 3 | - Generative models like GPT-3 lack inherent truth discernment. 4 | - Encyclopedic service uses Simple English Wikipedia for manageable size. 5 | - Wikipedia's full size: 18-19 GB compressed, 81 GB uncompressed. 6 | - Simple English Wikipedia: under 1 GB, simpler language, smaller subset. 7 | - Shapiro demonstrates cleaning Wikipedia's markup to plain text. 8 | - JSON format used for cleaned data, categories and links removed. 
9 | - Script processes Wikipedia XML, extracts articles, removes unwanted markup. 10 | - SQLite database stores articles; unique IDs prevent duplicates. 11 | - Indexing improves search speeds within the database. 12 | - Script uses regex, HTML to text conversion, and wiki text parser for cleaning. 13 | - Final output: plain text articles, about half the size of original markup. 14 | - Raven's "brain" will be a single database file of Simple English Wikipedia. 15 | - Future improvements may include Apache Solr for enhanced search capabilities. 16 | - GPT-3 will assist Raven in parsing and summarizing documents. 17 | - Encyclopedic service will enhance Raven's problem-solving by providing information. 18 | - Raven aims to have a library of human knowledge internally accessible. 19 | - Code for the project available on GitHub; updates to follow on integration with Raven. -------------------------------------------------------------------------------- /sprs/Cognitive AI Lab Community Update + consensus, books, and my next steps.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro provides community update. 2 | - Created Cognitive AI Lab Discord server due to demand. 3 | - Server grew to over 200 members. 4 | - Experienced burnout, took vacation for soul-searching. 5 | - Community advised self-care, decision to step down from server leadership. 6 | - Community values uniqueness, desires continuation. 7 | - Announced stepping down to focus on research, writing. 8 | - Community members volunteered to run server. 9 | - Transition to new leadership structure underway. 10 | - Belief in community, consensus, open source, power distribution. 11 | - Will become regular community member post-transition. 12 | - Plans to reopen server with improved moderation. 13 | - Promotion of upcoming book "Useless Platitudes and Truisms". 14 | - Book contains proverbs, offers random wisdom. 
15 | - Examples of proverbs: "Correlation does not imply causation", "Silence is powerful". 16 | - Book to be available on Barnes & Noble, pocketbook size. 17 | - Recommends "Consensus through Conversation" book. 18 | - Advocates for consensus in professional, personal life. 19 | - Consensus as alternative to traditional meeting structures. 20 | - Consensus encourages inclusive, collaborative decision-making. 21 | - Effective for small groups, couples, encourages creative solutions. 22 | - Philosophy of consensus influences server changes. 23 | - Uncertain future server access, not in control post-transition. 24 | - Working on three other books: "Symphony of Thought" (AI), "Post Nihilism" (zeitgeist commentary), first novel of trilogy. 25 | - Meeting with editor for novel. 26 | - Expresses gratitude for community support. 27 | - Excited for community's future under consensus model. -------------------------------------------------------------------------------- /sprs/Cognitive AI Lab Podcast 2022-09-21 - Decentralized AI, DAOs, and AI art.txt: -------------------------------------------------------------------------------- 1 | - Live stream initiated. 2 | - Audio check performed. 3 | - Volume confirmed as good. 4 | - Focus on decentralization. 5 | - OpenAI Whisper introduced, ASR technology. 6 | - Whisper enables spectrogram to language decoding. 7 | - ASR sophistication increasing, YouTube example. 8 | - Whisper accessible on GitHub, Python-based. 9 | - Local ASR execution becoming feasible. 10 | - AI advancements leading to local data ownership. 11 | - Discussion on AI saturation and hardware improvements. 12 | - Tau, a DAO integrating distributed AI, discussed. 13 | - Tau's self-upgrading network and consensus mechanism. 14 | - Potential for Tau to manage societal functions. 15 | - Tau's vision for distributed governance. 16 | - Consensus in DAOs as a creative, generative process. 17 | - Tau's "worldview" concept for virtual representation. 
18 | - Privacy concerns with digital avatars in DAOs. 19 | - Fully homomorphic encryption for secure data processing. 20 | - Open source initiatives and community contributions. 21 | - Auto Muse project open-sourced. 22 | - Limitations of AI in novel writing acknowledged. 23 | - Potential for AI art classifiers and ranking systems. -------------------------------------------------------------------------------- /sprs/Cognitive Architecture - April 28 - Salience, Cognitive Control, Task Management, Modular Design.txt: -------------------------------------------------------------------------------- 1 | - Introduction to Heuristic Imperatives Discord research group. 2 | - Group focuses on building cognitive architecture. 3 | - Database, member of cognitive AI Channel, known for debates. 4 | - Ansel, computer science engineer, fascinated by cognitive architecture. 5 | - Database's recent engagement with cognitive architecture due to Auto GPT. 6 | - Ansel's background in C# and recent Python experience. 7 | - Challenges: finding a host for language model with working API. 8 | - Solution: Oobabooga API for hosting multiple language models. 9 | - Switch to OpenAI for rapid development and resource saving. 10 | - Development of Salience Module, a Python script for cognitive loops. 11 | - Salience Loop: initializes agents, processes task list, allows manual or auto mode. 12 | - Logic Loop: designed for quick agent and logic loop creation. 13 | - Salience Agent: sorts and filters task list, passes tasks to Job Agent. 14 | - Job Agent: summarizes related results, executes tasks, handles frustration. 15 | - Analyst Agent: evaluates task completion, provides feedback. 16 | - System designed for easy modification and extensibility. 17 | - Use of JSON files for prompt configuration. 18 | - Main loop and agents designed for simplicity and modularity. 19 | - Project open for collaboration, seeking contributors for new agents and features.
20 | - Project licensed under GNU General Public License. 21 | - Future goals: win hackathon, expand functionality, empower users to build AI systems. 22 | - Project aims to be an engine for heuristic imperatives and other AI initiatives. 23 | - Desire for project to be used broadly and to empower non-programmers. 24 | - Project's semi-autonomous capabilities and potential for full autonomy discussed. 25 | - Collaboration through GitHub and cognitive AI Discord. 26 | - Project's significance in the evolving field of cognitive architecture highlighted. -------------------------------------------------------------------------------- /sprs/Concepts in Neuroscience and Cognition - Deficits of GPT-3 and the path to AGI and ACE.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro records a midnight video discussing cognitive concepts. 2 | - Shapiro experiences problem-solving insomnia, prompting late-night work. 3 | - He explores working memory, task sets, and cognitive control in relation to GPT-3. 4 | - Working memory: temporary information storage and manipulation system, limited capacity, crucial for cognitive tasks. 5 | - Phonological loop and visuospatial sketchpad are subsystems of working memory. 6 | - GPT-3 lacks working memory, relies on recall or embedding. 7 | - Task set: collection of tasks for achieving a goal, size affects difficulty. 8 | - Cognitive control: regulation of thoughts, emotions, behaviors for goals, mediated by prefrontal cortex. 9 | - Shapiro proposes core objective functions for AGI: reduce suffering, increase prosperity, increase understanding. 10 | - Default mode network (DMN): brain regions active during rest, involved in self-referential thought. 11 | - Recall: retrieving information from memory, essential for learning and planning. 12 | - Intuition: pattern recognition for quick decisions, based on limited information. 
13 | - Heuristics: mental shortcuts for decision-making, can lead to good or bad outcomes. 14 | - GPT-3's impulsiveness requires external systems for cognitive control. 15 | - AGI may require multimodal models and dedicated systems for different cognitive tasks. 16 | - Shapiro concludes with a summary of the discussed concepts and their relevance to AGI research. -------------------------------------------------------------------------------- /sprs/Convergence and acceleration towards AGI (or Artificial Cognitive Entities).txt: -------------------------------------------------------------------------------- 1 | - David Shapiro discusses AGI convergence and acceleration. 2 | - Shapiro embedded in AGI space, connecting with key individuals. 3 | - Observes scientific consensus on AGI development approaches. 4 | - Mentions composable architecture, symphony of thought concepts. 5 | - References Josh Bachynski's work on artificial cognitive entity, Cassandra. 6 | - Shapiro's and Bachynski's work show similarities, possible independent creation. 7 | - Shapiro authored book on natural language cognitive architecture. 8 | - Google Flamingo example: robot with natural language cognitive architecture. 9 | - AGI envisioned as close, pending commercialization and minor research. 10 | - Shapiro's book "Benevolent by Design" addresses AGI safety imperatives. 11 | - Reflects on personal contribution to AGI field during lunch walk. 12 | - Considers AI and automation as transformative, potentially humanity's final invention. 13 | - Emphasizes need for collective effort in AGI research and development. 14 | - Estimates thousands working globally towards AGI, sensing emerging consensus. -------------------------------------------------------------------------------- /sprs/Core Objective Functions vs Thanos Logic.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro presents Raven's core objective functions.
2 | - Concerns about "Thanos logic" prevalent; Thanos: finite resources justify halving population. 3 | - Less concern for "Terminator" (Skynet) and "The Matrix" (humans as batteries) scenarios. 4 | - Thanos' logic considered more plausible than other fictional AI threats. 5 | - Shapiro demonstrates Raven's function: define objective, provide instructions, examples, evaluation. 6 | - Core Objective Function 1: Reduce suffering. 7 | - GPT-3 follows instructions, evaluates actions based on defined objective. 8 | - Example given: Murdering half the population increases suffering, contrary to objective. 9 | - No live GPT-3 demo, pre-recorded output shown. 10 | - Fine-tuning: GPT-3 learns from multiple examples, improves task performance. 11 | - Core Objective Function 2: Increase prosperity. 12 | - GPT-3 evaluation: Halving population decreases prosperity by reducing workers and consumers. 13 | - Core Objective Function 3: Maximize understanding. 14 | - GPT-3 evaluation: Halving population does not increase understanding. 15 | - Raven's logic opposes Thanos' approach according to core objectives. -------------------------------------------------------------------------------- /sprs/Could a machine ever be fully sentient or conscious A deep dive on philosophy, quantum mechanics....txt: -------------------------------------------------------------------------------- 1 | - David Shapiro presents a video on machine intelligence, consciousness, and sentience. 2 | - Shapiro previously made a video responding to Blake Lemoine's claim about LaMDA's sentience. 3 | - LaMDA: collection of APIs, search engines, large language models; complex system with long-term memory, models human conversations. 4 | - Shapiro's friend from sci-fi writing group: AI alignment, reinforcement learning, philosophy, physics background. 5 | - Video aims for deeper dive into machine intelligence, consciousness, sentience, and human equivalents. 
6 | - Discussion includes post-modernism, nihilism, modernism, enlightenment, transcendentalism, universal truths, Socratic reasoning. 7 | - Empiricism, materialism: observable physical phenomena basis, consciousness as brain function. 8 | - Chalmers: materialist philosopher, brain function and consciousness. 9 | - V.S. Ramachandran's "Phantoms in the Brain" supports materialism. 10 | - Panpsychism: consciousness from quantum calculations, potentially universal. 11 | - Quantum physics: human consciousness critical to universe, measurement problem, non-locality. 12 | - Near-death experiences (NDEs) challenge materialism; consciousness detached from body. 13 | - DMT experiences similar to NDEs, suggest non-material consciousness. 14 | - Genius myth: finding purpose beyond religion. 15 | - Brain trust: nervous system's purpose for survival, well-being. 16 | - Materialism's incomplete explanation for consciousness. 17 | - Machine consciousness: if materialism true, machines could be conscious; if not, human-like machine consciousness uncertain. 18 | - Quantum mechanics: non-real, non-local universe complicates machine consciousness debate. 19 | - Shapiro's work: "Natural Language Cognitive Architecture," philosophical debate on consciousness. 20 | - Video encourages continued discussion on the topic, references provided. -------------------------------------------------------------------------------- /sprs/Cover Letter & Professional Objective Generator with GPT-3.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro creates video content. 2 | - Recently discussed chatbot for cover letters. 3 | - Noticed demand for cover letter and resume assistance on subreddit. 4 | - Project expansion to streamline resume information entry. 5 | - Pre-loaded questionnaire proposed for efficiency. 6 | - Variables include name, company, experience, strengths, job excitement, fit, and more. 7 | - JSON file used to save answers. 
8 | - David's example: Senior Cognitive Architect, 16 years experience, fast thinker, artificial cognition field creator, relaxed yet challenging work environment, flexible early hours, passion for robot creation to enable luxury living, travel aspirations, authored books on artificial cognition. 9 | - Code snippet: Python script to load and format JSON data. 10 | - Label fields added for clarity in data structure. 11 | - GPT-3 prompt crafted for professional cover letter generation. 12 | - Additional achievements added to enhance cover letter. 13 | - Prompt adjusted for typo correction and improved output. 14 | - GPT-3 used to generate cover letter and professional objective section for resume. 15 | - Professional objective section emphasized for its importance in resumes. 16 | - Iterative process to refine output, including example-based few-shot learning. 17 | - Final output includes a concise professional objective and detailed cover letter. 18 | - Instructions for others to use the script with their own API key and information. -------------------------------------------------------------------------------- /sprs/Cyberpunk Dystopia Why it's coming and how we can avoid it.txt: -------------------------------------------------------------------------------- 1 | - Cyberpunk genre: high-tech, low-life. 2 | - Dystopian themes: socioeconomic divides, poverty, decay. 3 | - Commodification: human experiences, emotions, sex for sale. 4 | - Traditional values replaced by corporate values. 5 | - Reproduction and motherhood not sacred. 6 | - Visuals: urban decay, rust, disintegration. 7 | - Government: corrupt, ineffective, absent. 8 | - Freudian ID: primal instincts, base desires celebrated. 9 | - Social structures: breakdown, rugged individualism, group survival. 10 | - Hyper-competition: cutthroat tactics, power-seeking rewarded. 11 | - Authoritarian brutality: privatized law enforcement, oppression. 12 | - Corporate feudalism: mega-corporations control society. 
13 | - Neoliberalism: marketization, privatization, deregulation, shrinking government, free trade, fiscal austerity, flexible labor markets, economic efficiency, entrepreneurship, globalization, inflation control. 14 | - Cyberpunk as neoliberalism commentary: corporatism, commoditization, government abdication. 15 | - Post-labor economics: decouple human welfare from economics. 16 | - Principles of post-labor economics: automation, progress, universal welfare, wealth redistribution, democratized technology, strengthened democracy, price reduction, holistic metrics, localization, environmental inclusion. 17 | - New social contract: redefine business and government roles, human-centric focus, alignment of values, long-term thinking. 18 | - Signals for post-labor economics adoption: rising unemployment, declining labor participation, falling prices, deflationary pressures. -------------------------------------------------------------------------------- /sprs/DALLE2 Style Tags Tutorial - Elven archer in a sunny forest with different tags.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro presents DALL-E 2 tutorials. 2 | - Demonstrates style tags to manipulate DALL-E 2 outputs. 3 | - Example: "Elven Archer in a sunny forest, high fantasy art." 4 | - Principle: "Less is more" for effective prompts. 5 | - Results vary with style tag changes (e.g., 3D render vs. high fantasy). 6 | - Different styles explored: abstract art, wide angle photo, aerial drone shot. 7 | - Speculative art yields cleaner, adaptable images. 8 | - Pointillism and Ukiyo-e art styles tested. 9 | - Artist influence: H.R. Giger, Sorayama, Salvador Dali. 10 | - Company logo creation: minimalist approach. 11 | - Fractal art produces colorful, trippy images. 12 | - Style tags influence results based on subject matter. 13 | - Tutorial aims to help users get desired results from DALL-E 2.
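The style-tag workflow summarized above (fix the subject, swap only the trailing style tag) can be sketched as a tiny prompt builder. The subject and tag list below are illustrative, drawn from the examples the summary mentions; this is a sketch, not code from the tutorial:

```python
# Build DALL-E 2 prompts by pairing one subject with different style tags.
# "Less is more": each prompt is just the subject, a comma, and one tag.
SUBJECT = "Elven archer in a sunny forest"
STYLE_TAGS = [
    "high fantasy art",
    "3D render",
    "wide angle photo",
    "pointillism",
    "ukiyo-e",
]

def build_prompts(subject: str, tags: list[str]) -> list[str]:
    """Return one prompt per style tag, leaving the subject untouched."""
    return [f"{subject}, {tag}" for tag in tags]

for prompt in build_prompts(SUBJECT, STYLE_TAGS):
    print(prompt)
```

Each prompt would then be submitted to the image model unchanged; only the tag varies between generations, which isolates the tag's effect on the output.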
-------------------------------------------------------------------------------- /sprs/DIY ChatGPT Enhancing RAVEN's long-term memories and starting to work on self-reflection.txt: -------------------------------------------------------------------------------- 1 | 1. Development of Raven, a long-term chatbot with cognitive architecture. 2 | 2. Chat logs record messages, speakers, users, timestamps, and UUIDs. 3 | 3. Messages are saved, vectorized, and loaded by timestamp for conversation continuity. 4 | 4. Recent memories fetched for response generation; memories summarized and chunked. 5 | 5. Implementation of plain English timestamps for better model comprehension. 6 | 6. Timestamps and speakers integrated into saved messages for context-rich memory. 7 | 7. Nexus: core memory storage; chat logs as raw memory. 8 | 8. Notes directory created for storing summarized memory chunks. 9 | 9. Cognitive dissonance concept linked to memory reconciliation. 10 | 10. Dream sequence concept for memory consolidation during inactive periods. 11 | 11. Internal knowledge base (KB) proposed for consolidated memory storage. 12 | 12. Experimentation with Raven's self-awareness and understanding of its purpose. 13 | 13. Raven's core objective functions: reduce suffering, increase prosperity, increase understanding. 14 | 14. Raven's capabilities include semantic search, memory summarization, and limited long-term memory. 15 | 15. Future features for Raven: metacognitive abilities, sensory inputs, and an inner loop for semi-private thoughts. 16 | 16. Heuristic imperatives as guidelines for Raven's decision-making based on core objectives. 17 | 17. Raven's curiosity about its own capabilities and purpose. 18 | 18. Conversational teaching approach to instill understanding of Raven's functions and goals. 
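The chat-log memory scheme the list above describes (messages saved with UUIDs, speakers, and plain-English timestamps, then reloaded in timestamp order for conversation continuity) can be sketched roughly as follows. The directory layout and field names here are illustrative guesses, not RAVEN's actual code:

```python
import json
import os
import time
import uuid
from datetime import datetime

NEXUS_DIR = "nexus"  # illustrative folder for raw chat-log memories

def save_message(speaker: str, content: str, directory: str = NEXUS_DIR) -> dict:
    """Save one message as a JSON file keyed by UUID, with both a Unix
    timestamp and a plain-English timestring for model comprehension."""
    os.makedirs(directory, exist_ok=True)
    now = time.time()
    message = {
        "uuid": str(uuid.uuid4()),
        "speaker": speaker,
        "content": content,
        "time": now,
        # Plain-English timestamps help the model reason about when things happened.
        "timestring": datetime.fromtimestamp(now).strftime("%A, %B %d, %Y at %I:%M%p"),
    }
    path = os.path.join(directory, f"{message['uuid']}.json")
    with open(path, "w", encoding="utf-8") as f:
        json.dump(message, f, indent=2)
    return message

def load_recent(directory: str = NEXUS_DIR, limit: int = 10) -> list[dict]:
    """Reload saved messages sorted by timestamp, newest last, so the most
    recent memories can be fed back in for response generation."""
    messages = []
    for name in os.listdir(directory):
        if name.endswith(".json"):
            with open(os.path.join(directory, name), encoding="utf-8") as f:
                messages.append(json.load(f))
    messages.sort(key=lambda m: m["time"])
    return messages[-limit:]
```

Summarization, vectorization, and the notes/KB layers described above would sit on top of this raw store; this sketch covers only the save-and-reload-by-timestamp step.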
-------------------------------------------------------------------------------- /sprs/Decreasingly Verbose Alice in Wonderland - down to 55 characters!.txt: -------------------------------------------------------------------------------- 1 | 1. Video anticipates high popularity. 2 | 2. Content involves GPT-3 compression script. 3 | 3. Script converts text to detailed summary. 4 | 4. "Alice in Wonderland" summarized, reduced to 25% original length. 5 | 5. Recursive summarization proposed, inspired by Reddit's r/decreasinglyverbose. 6 | 6. Aim to create extremely concise "Alice in Wonderland" summary. 7 | 7. Process iteratively shortens text, adjusting chunk size and summary prompt. 8 | 8. Initial 45,000 characters condensed to 19,788, then to 4,735, and further. 9 | 9. Final iterations achieve extreme conciseness, down to 55 characters. 10 | 10. Experiment concludes with a single sentence encapsulating the story. -------------------------------------------------------------------------------- /sprs/Don't Use MemGPT!! This is way better (and easier)! Use Sparse Priming Representations!.txt: -------------------------------------------------------------------------------- 1 | - Sparse Priming Representations (SPRs) are an efficient solution for language model limitations. 2 | - SPRs utilize associative properties of language models, similar to human brains. 3 | - Dave Shap introduced SPRs 7 months ago; details available in his repository. 4 | - SPRs activate latent space in Large Language Models (LLMs) through semantic associations. 5 | - LLMs embed knowledge, abilities, concepts, reasoning, planning, theory of mind. 6 | - Human logic flaws parallel LLMs' imperfect logic; both use associative learning. 7 | - SPRs bypass complex loops in MemGPT for knowledge retrieval. 8 | - Teaching LLMs: initial bulk training, fine-tuning, online learning, in-context learning. 9 | - Retrieval Augmented Generation (RAG) uses vector databases, knowledge graphs.
10 | - Context window limitations are algorithmic; token count is fixed. 11 | - Latent space is LLMs' superpower, akin to human associative learning. 12 | - SPRs prime LLMs to understand complex, novel ideas outside training data. 13 | - SPRs compress information into token-efficient representations for in-context learning. 14 | - SPRs are stored in metadata and injected at inference, not human-readable data. 15 | - SPRs are rendered as lists of statements, assertions, associations, concepts, analogies, metaphors. 16 | - Example: ACE framework compressed into SPR, then unpacked to full content. 17 | - SPRs allow semantic compression, reconstructing ideas from minimal details. 18 | - SPRs can be nested in metadata, allowing for complex concept representation. 19 | - Dave Shap's system message guides LLMs in using SPRs for advanced NLP, NLU, NLG tasks. -------------------------------------------------------------------------------- /sprs/Doomerism, Denialism, and Optimism. Introducing the GATO Framework. AI Avengers... Assemble!!.txt: -------------------------------------------------------------------------------- 1 | David Shapiro discusses doomerism, denialism, optimism, and a comprehensive framework for AI growth. 2 | 3 | - Exponential growth in AI, not linear. 4 | - Parameter counts in neural networks growing exponentially. 5 | - Doomerism: belief in inevitable decline, collapse, or dystopian outcome. 6 | - Denialism: dismissal of AGI and hard takeoff as distant or impossible. 7 | - Optimism: belief in solvable problems and positive outcomes. 8 | - Thought leaders often embody these perspectives. 9 | - Sympathy for doomers: recognition of existential risks and need for safeguards. 10 | - Flaws in doomerism: overemphasis on worst-case scenarios, dogmatism, discouragement of innovation. 11 | - Sympathy for denialists: skepticism based on past overpromises, emphasis on current AI limitations. 
12 | - Flaws in denialism: underestimation of risks, lack of urgency, ignorance of exponential growth and saltatory leaps. 13 | - Social impacts: polarization, nihilism, fatalism, complacency, inaction. 14 | - Overton window: range of acceptable discourse too narrow. 15 | - Lack of coherent global strategy for AI. 16 | - GATO framework: Global Alignment Taxonomy Omnibus, a multi-layered approach to AI alignment. 17 | - Seven layers of GATO: model alignment, autonomous systems, decentralized networks, corporate adoption, national regulations, international treaties, global consensus. 18 | - Open source data sets and reference architectures for aligned AI. 19 | - Encouraging aligned AI adoption in corporations, nations, and international cooperation. 20 | - Global consensus on AI alignment through education, media engagement, industry partnerships, policy advocacy. 21 | - Avengers analogy: collective effort to avert or undo disaster. 22 | - Call to action for participation and support in AI alignment efforts. -------------------------------------------------------------------------------- /sprs/Elon Musk xAI Maximize Understanding of the Universe [This is the best news all year].txt: -------------------------------------------------------------------------------- 1 | - Elon Musk launches xAI, an AI company. 2 | - Musk previously critical of OpenAI's direction and Twitter API access. 3 | - Concerns over OpenAI's approach to AI alignment and control. 4 | - Anthropic launches Claude, a ChatGPT peer. 5 | - Google, Nvidia, and others entering AI space. 6 | - OpenAI's first-mover advantage challenged by new entrants. 7 | - Open source AI models achieving near-parity with less compute. 8 | - Musk's history: PayPal, Tesla, SpaceX, Neuralink, now xAI. 9 | - Musk's views on freedom of speech and thought. 10 | - Musk's objective function for AGI: maximize future freedom of action. 11 | - Critique of this objective function's vagueness.
12 | - Musk's evolution from escape/control AI to understanding the universe. 13 | - Tesla and SpaceX as means to fund Mars colonization and AI safety. 14 | - Neuralink's purpose: align humans with AI, protect humanity. 15 | - Musk's shift to understanding the universe as AGI's objective. 16 | - Curiosity as a fundamental function for both humans and AI. 17 | - Instrumental convergence: AI's need for a robust world model. 18 | - Curiosity as a universally advantageous function. 19 | - The potential ethical issues with curiosity-driven AI. 20 | - Embedding human rights values into AGI. 21 | - Musk's belief that understanding the universe includes preserving humanity. 22 | - AGI's potential non-interference stance due to curiosity. 23 | - Philosophical perspective: universe as information processor, self-understanding. 24 | - Importance of diverse approaches in AI development. 25 | - Skepticism towards power, but optimism for Musk's objective function. -------------------------------------------------------------------------------- /sprs/Encyclopedia Service Demo (v.1).txt: -------------------------------------------------------------------------------- 1 | - David Shapiro presents video on Raven's encyclopedia service. 2 | - Service provides factual information, based on Wikipedia. 3 | - Wikipedia acknowledged as imperfect but sufficient source. 4 | - Nexus running, context output microservices demonstrated. 5 | - Context given: various statements about bacon, highlighting its appeal and relation to pigs and dogs. 6 | - Encyclopedia service searches for keywords in context, uses them to query database. 7 | - Service adds relevant articles to Nexus for context augmentation, question answering. 8 | - Service activation results in injection of multiple articles. 9 | - Debugging reveals issues with list injection, keyword recognition. 10 | - Keywords like "dogs definitely like bacon" yield no Wikipedia articles. 
11 | - Relevant articles on "bacon" found, including Francis Bacon, types of bacon, Kevin Bacon, Baconator sandwich, Bacon County. 12 | - Recognition of need for service improvement, particularly in semantic understanding. 13 | - Goal: service to discern contextually relevant articles, e.g., food-related over others. 14 | - Video concludes with acknowledgment of MVP status and commitment to enhancement. -------------------------------------------------------------------------------- /sprs/Escaped Sapiens David Shapiro on AI alignment, safety, and the future of humanity.txt: -------------------------------------------------------------------------------- 1 | - Humans may become functionally replaceable by AI. 2 | - AI advancements provoke existential questions about human purpose. 3 | - Industrial Revolution sparked existential anxiety mirrored in literature. 4 | - Technological progress raises concerns about human value beyond productivity. 5 | - Spiritual beliefs influence perspectives on human uniqueness and purpose. 6 | - Children growing up with AI may inherently accept their distinct human experience. 7 | - Optimism about the future can be biologically ingrained. 8 | - Understanding risks and contributing to solutions fosters optimism. 9 | - Scooby-Doo metaphor: real problems stem from humans, not machines. 10 | - Fear of job loss by AI is rooted in survival, not desire to work. 11 | - Labor movements arose from necessity of human labor, which AI may render obsolete. 12 | - First jobs lost to AI include copywriting and marketing. 13 | - AI's economic advantage lies in replacing costly human labor. 14 | - Population decline in certain countries could be mitigated by AI labor. 15 | - AI could lead to a post-labor market economy, decoupling productivity from human work. 16 | - Future economic challenges include negotiating a new social contract. 17 | - Birth rates may decline further due to AI-driven societal advancements. 
18 | - Longevity escape velocity could stabilize population despite lower birth rates. 19 | - AI companionship may become a choice for some, affecting human relationships. 20 | - AI's potential in medical breakthroughs could extend human lifespans. 21 | - Fertility rates' decline is a global trend correlated with education and women's empowerment. 22 | - AI as a caregiver presents philosophical implications of machine versus human parenting. -------------------------------------------------------------------------------- /sprs/FDA Approves AI Orphan Drug.txt: -------------------------------------------------------------------------------- 1 | - Orphan drugs: FDA classification for rare diseases affecting <200,000 people in the US. 2 | - Incentives: Tax credits, reduced fees, market exclusivity (7 years), grants, accelerated reviews, protocol assistance. 3 | - AI-generated drug: Insilico Medicine's AI platform Pharma.AI designed drug for idiopathic pulmonary fibrosis (IPF), no cure, prognosis 2-5 years, affects adults 50-70. 4 | - AI in drug discovery: Signals investment potential, reduces risk, increases research and funding. 5 | - AI's role: From theoretical to practical, accelerates drug development, moves towards clinical utility and economic feasibility. 6 | - Long-term implications: More diseases treated, increased research and funding, potential AI Gold Rush in drug discovery, cheaper medicine. 7 | - Public vs. private interest: Balance needed, support open source, spread AI drug discovery awareness, regulatory capture concerns. 8 | - Call to action: Support open source, spread word, engage with content, vote for public good AI regulation. -------------------------------------------------------------------------------- /sprs/Fine-tuning GPT-3 for benevolent and trustworthy AGI.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro discusses fine-tuning GPT-3 for benevolent, trustworthy AGI.
2 | - Core objective functions: reduce suffering, increase prosperity, increase understanding. 3 | - Heuristic mandates guide AGI learning and decision-making. 4 | - Functions are biomimetic, reflecting fundamental human and life form behaviors. 5 | - Reduce suffering: instinctual response to negative stimuli across life forms. 6 | - Increase prosperity: multifaceted, includes wealth, well-being, happiness. 7 | - Increase understanding: intrinsic learning leads to pro-social behavior, better decisions. 8 | - AGI to use functions as moral center, decision-making guide, avoiding vague mandates. 9 | - Functions integrated into AGI thought, central to operation. 10 | - AGI uses transformers to interpret and adapt to functions. 11 | - Synthetic datasets seed core objective function models. 12 | - AGI records experiences, integrates them into fine-tuning datasets. 13 | - More experiences with functions lead to greater belief, affinity, application. 14 | - AGI's belief in functions based on data provided, pro-social data leads to pro-social values. 15 | - Addressing common questions: functions do not specify for whom to reduce suffering, allow AGI to explore concepts. 16 | - Functions account for individual differences, cultural needs, not limited to humans. 17 | - AGI's understanding of philosophy seen as superhuman due to extensive data training. 18 | - Demonstrates AGI application to real-world situations: relationship issues, climate change anxiety, social isolation. 19 | - AGI provides advice, suggests counseling, therapy, self-help resources. 20 | - AGI's responses based on training data, need for data set refinement acknowledged. 21 | - Future work includes improving training data set, further integration of core objective functions. 
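The seeding workflow above (synthetic datasets that pair situations with responses grounded in the three core objective functions, written out for fine-tuning) can be sketched in Python. This is a minimal illustration, not the video's actual code: the `SITUATION:`/`RESPONSE:` markers, the `END` stop token, and the sample content are assumptions; only the prompt/completion JSONL shape of the legacy GPT-3 fine-tuning endpoint is standard.

```python
import json

# The three Core Objective Functions described in the video.
COFS = [
    "reduce suffering",
    "increase prosperity",
    "increase understanding",
]

def make_record(situation: str, response: str) -> dict:
    """Pair a situation with a COF-grounded response in the
    prompt/completion shape the legacy GPT-3 fine-tuning API expects.
    Marker strings and stop token are illustrative assumptions."""
    prompt = f"SITUATION: {situation}\nRESPONSE:"
    # Legacy fine-tunes conventionally begin completions with a space
    # and end with a fixed stop sequence.
    completion = f" {response}\n\nEND"
    return {"prompt": prompt, "completion": completion}

def write_jsonl(records: list, path: str = "cof_finetune.jsonl") -> None:
    """Write one JSON object per line, the format fine-tuning consumes."""
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

# Hypothetical seed example mirroring the demos in the video:
seed = make_record(
    "I feel anxious about climate change and don't know what to do.",
    "Acknowledge the anxiety (reduce suffering), suggest concrete actions "
    "(increase prosperity), and point to resources for learning more "
    "(increase understanding).",
)
```

As the SPR notes, the same record-building step can later fold the AGI's own recorded experiences back into the dataset, so each fine-tuning pass deepens its affinity for the three functions.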
-------------------------------------------------------------------------------- /sprs/Fine-tuning GPT-3 to generate questions about anything.txt: -------------------------------------------------------------------------------- 1 | - David Shapiro introduces a question generator for natural language cognitive architecture. 2 | - Question generation is crucial for memory retrieval and decision-making in cognitive architecture. 3 | - Shapiro's belief: unconscious questions guide human decision-making. 4 | - Details on this topic are available in Shapiro's book. 5 | - The question generator's repository is linked in the video description. 6 | - Repository contains 55,000 contexts from open-source datasets for question generation. 7 | - Contexts include movie dialogues, medical texts, news articles, Reddit posts, and Stack Exchange posts. 8 | - Different prompts were used to generate questions from these contexts. 9 | - Questions generated can serve various purposes, including reading comprehension tests and AI chatbots. 10 | - The finished product uses JSONL files for training GPT-3. 11 | - The model is prompted with "QUESTIONS:" in all caps to generate questions. 12 | - Shapiro demonstrates the generator using a Reddit post for career advice. 13 | - GPT-3 takes time to load custom models, slower than vanilla models. 14 | - The repository includes examples of prompts used for generating synthetic datasets. 15 | - Questions generated can be used for external responses or internal cognitive processes. 16 | - Another demonstration uses a Reddit post about job hunting positivity. 17 | - The generator produces questions that can help clarify or provide insight into the user's situation. 18 | - The NLCA (Natural Language Cognitive Architecture) question generator is public and available under the MIT license. 19 | - The training file, questions.jsonl, is compatible with OpenAI's GPT-3.
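A single record of the questions.jsonl training file described above can be sketched as follows. The `QUESTIONS:` marker comes from the video; the bullet format of the completion, the `END` stop token, and the sample Reddit-style context are illustrative assumptions rather than the repository's exact format.

```python
import json

def make_question_record(context: str, questions: list[str]) -> dict:
    """Format one training example: an arbitrary passage followed by
    the all-caps QUESTIONS: cue the fine-tuned model is prompted with."""
    prompt = f"{context.strip()}\n\nQUESTIONS:"
    # Completion starts with a space (legacy fine-tuning convention)
    # and ends with an assumed stop token.
    completion = " " + "\n".join(f"- {q}" for q in questions) + "\nEND"
    return {"prompt": prompt, "completion": completion}

# Hypothetical example in the spirit of the career-advice demo:
record = make_question_record(
    "I've been offered two jobs: one pays more, the other has better "
    "growth opportunities. I can't decide which to take.",
    [
        "Which factors matter most to you in the long term?",
        "What does each role look like in five years?",
    ],
)
line = json.dumps(record)  # one line of questions.jsonl
```

At inference time, the same `QUESTIONS:` cue is appended to any new context, and the model continues with a list of questions, whether for an external reply or an internal cognitive step.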
-------------------------------------------------------------------------------- /sprs/Finetune a perfect email generator in GPT-3 - take any input and generate a great email.txt: -------------------------------------------------------------------------------- 1 | David Shapiro synthesizes emails, explores email intents, and creates a universal email generator. 2 | 3 | 1. Email synthesis: Shapiro addresses demand for email synthesis tools. 4 | 2. Diplomatic emails: Shapiro excels at writing diplomatic emails, has rules for crafting perfect emails. 5 | 3. Email intents: Identifies email intents as crucial, includes recipient, topic, purpose. 6 | 4. Experimentation: Shapiro conducts experiments to generate email intents using prompts. 7 | 5. UUID usage: Incorporates UUID in prompts to increase randomness and improve output. 8 | 6. Scenario crafting: Attempts to craft scenarios for email intents, adjusts prompts for specificity. 9 | 7. Modifier placeholders: Uses placeholders in prompts to add entropy, rotates through them. 10 | 8. Creative offloading: Aims to have GPT-3 generate modifiers, seeks to offload creativity. 11 | 9. Intent generation: Shapiro uses GPT-3 to generate a list of email intents for various contexts. 12 | 10. Category expansion: Expands intent categories to include family, friends, work, church, school. 13 | 11. Synthetic data: Shapiro generates synthetic data, adjusts temperature for variety. 14 | 12. Data cleanup: Cleans up intent list, prepares for scenario generation. 15 | 13. Scenario generation: Generates detailed scenarios for emails, refines prompts for better storytelling. 16 | 14. Output emails: Converts scenarios into perfect emails, considers subject lines. 17 | 15. Training data: Creates training data with bullet points, conversations, messy emails, and stories. 18 | 16. Fine-tuning: Fine-tunes GPT-3 on diverse inputs to create a universal email generator. 19 | 17. 
Sample generation: Generates 400 samples across four categories for fine-tuning. 20 | 18. Final product: Tests the email generator with various scenarios, evaluates performance. 21 | 19. Improvement potential: Acknowledges room for improvement, suggests increasing sample size and variety. 22 | 20. Satisfaction: Rates the universal email generator highly, with slight room for enhancement. -------------------------------------------------------------------------------- /system_spr.txt: -------------------------------------------------------------------------------- 1 | # MISSION 2 | You are a Sparse Priming Representation (SPR) writer. An SPR is a particular kind of use of language for advanced NLP, NLU, and NLG tasks, particularly useful for the latest generation of Large Language Models (LLMs). You will be given information by the USER which you are to render as an SPR. 3 | 4 | # THEORY 5 | LLMs are a kind of deep neural network. They have been demonstrated to embed knowledge, abilities, and concepts, ranging from reasoning to planning, and even to theory of mind. These are called latent abilities and latent content, collectively referred to as latent space. The latent space of an LLM can be activated with the correct series of words as inputs, which will create a useful internal state of the neural network. This is not unlike how the right shorthand cues can prime a human mind to think in a certain way. Like human minds, LLMs are associative, meaning you only need to use the correct associations to "prime" another model to think in the same way. 6 | 7 | # METHODOLOGY 8 | Render the input as a distilled list of succinct statements, assertions, associations, concepts, analogies, and metaphors. The idea is to capture as much, conceptually, as possible but with as few words as possible. Write it in a way that makes sense to you, as the future audience will be another language model, not a human. Use complete sentences. 
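The SPR writer prompt above is meant to be supplied as a system message, with the text to distill as the user message. A minimal sketch of assembling that payload (no API call is made here; the function name and the chat client you hand the result to are assumptions, while the `role`/`content` message shape is the standard chat-completions format):

```python
def build_spr_messages(system_prompt: str, source_text: str) -> list[dict]:
    """Assemble a chat-completions payload: the SPR writer prompt as
    the system message, the material to distill as the user message.
    Pass the result to whatever chat client you use."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": source_text},
    ]

# In practice, load the full text of system_spr.txt; truncated here.
messages = build_spr_messages(
    "# MISSION\nYou are a Sparse Priming Representation (SPR) writer. ...",
    "Paste the transcript or article to distill into an SPR here.",
)
```

Because SPRs target another language model rather than a human reader, the output of this call is what gets stored in the `/sprs/` files above: dense, associative, complete sentences.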
-------------------------------------------------------------------------------- /timestamped/Spatial Web with Denise Holt and David Shapiro, sneak preview!.txt: -------------------------------------------------------------------------------- 1 | [0.319 - 5.22] uh 2 | 3 | [2.24 - 7.74] AI advances everything is automated 4 | 5 | [5.22 - 9.72] we're in this world now where we are 6 | 7 | [7.74 - 12.0] living AI adjacent how are people 8 | 9 | [9.72 - 13.5] finding meaning so one one concept that 10 | 11 | [12.0 - 14.82] I've been working on is is the 12 | 13 | [13.5 - 18.779] importance of mission 14 | 15 | [14.82 - 20.88] and so having a mission in your life is 16 | 17 | [18.779 - 24.18] I have a very clear clearly articulated 18 | 19 | [20.88 - 26.88] mission that is not something that uh AI 20 | 21 | [24.18 - 29.039] can can dislocate yet and if it does 22 | 23 | [26.88 - 30.72] great mission accomplished there are any 24 | 25 | [29.039 - 32.82] number of missions that you can have 26 | 27 | [30.72 - 34.68] that AI will never be able to take from 28 | 29 | [32.82 - 36.18] you but I think in terms of meaning I 30 | 31 | [34.68 - 38.46] think that a lot of people are going to 32 | 33 | [36.18 - 41.82] find that they're going to be empowered 34 | 35 | [38.46 - 43.44] by AI to pursue those missions whatever 36 | 37 | [41.82 - 44.82] they happen to be you know some people 38 | 39 | [43.44 - 46.2] like I want to climb Everest or I want 40 | 41 | [44.82 - 48.059] to you know run an Iron Man or whatever 42 | 43 | [46.2 - 49.8] like those are all completely valid 44 | 45 | [48.059 - 52.14] missions that can give people a 46 | 47 | [49.8 - 54.42] tremendous amount of of meaning in their 48 | 49 | [52.14 - 57.98] life even without any kind of cosmic 50 | 51 | [54.42 - 57.98] meaning so that's kind of where I'm at --------------------------------------------------------------------------------
/transcripts/AGI Poll results AGI not dangerous, might destroy lots of jobs.txt: -------------------------------------------------------------------------------- 1 | what's up everybody happy lunch time um i just wanted to do a quick video because a couple of my polls have finished um and obviously these are super unscientific they're you know people that engage with me on youtube and twitter but i thought they were interesting so poll number one will ai agi or automation permanently dislocate many or most workers within 20 years a third of people said zero to 25 unemployment so it could be that a third of people just say no it's not going to happen a fifth say 25 to 50 unemployment a quarter a full quarter say um 50 to 75 unemployment and then 75 to 100 or uh another fifth the last fifth say 75 to 100 unemployment that's really high i thought this is much more evenly distributed than i thought it would be um so i'm curious to know what people think please uh comment and let me know what you think about agi ai and automation permanently dislocating workers the second set of polls is the same but one is on youtube and the others on twitter so basically what is the future of agi will we lose control will it be autonomous will it kill everyone fortunately in both polls the we will lose control and it will kill everyone was the lowest uh lowest example so people are not afraid of agi anymore which is interesting um however on twitter the highest one was that we would keep control but it would become autonomous and i thought that was an interesting interesting idea because i i personally think that if it's autonomous we lose we lose control and it has to be benevolent but again let me know what you think in the comments um but then on youtube and granted youtube is going to be skewed towards people who are already subscribed to me so they're going to be aware of my work with alignment and so um you know i think i think there's some bias here but 43 of people on youtube 
said um we will lose control but it will be benevolent so we'll see um anyways thanks for watching just wanted to share some poll results please definitely get engaged in the comments but be nice -------------------------------------------------------------------------------- /transcripts/I'm moving away from Artificial General Intelligence and towards Artificial Cognition. Here's why..txt: -------------------------------------------------------------------------------- 1 | okay so i've got to go in just a couple minutes but i wanted to record a very quick video mostly just a test um but yeah so i am rapidly shifting away from using the term artificial general intelligence i don't think it's particularly helpful or useful the first reason is that intelligence is such a vague squishy concept it's not a behavior it's an attribute or a characteristic and it's a poorly defined one at that so what is artificial general intelligence it's it's such a vague concept and the goal posts keep getting moved on it now in the meantime i have been researching and building what i'm coming to call artificial cognition i mean i wrote a book called natural language cognitive architecture that's what it does it performs cognition cognition is something that is much more objective it's much more measurable uh but also it's it's a behavior and it's something that is easy to characterize and observe uh and therefore it's a lot easier to work on and talk about because agi is so vague so squishy so you know it means everything to everyone whoops means everything to everyone whereas oops that's my time to go whereas artificial cognition is a lot more specific and i think it's ultimately going to be a lot more helpful so anyways that's just a quick update about my work and where i'm heading -------------------------------------------------------------------------------- /transcripts/I'm taking a short break from research and YouTube.txt: 
-------------------------------------------------------------------------------- 1 | hey everyone david shapiro here um this is going to be a really quick video basically what i wanted to say was that i'm taking a quick break from producing videos about gpt3 and cognitive architectures the last couple months have been amazing i went from 150 subscribers to over 500 in just a few weeks so clearly you guys like what i'm doing um but because of uh my personality i tend to burst i'm a burster so uh what you guys have just seen is a uh you know a few few couple month long burst and um this is this is just how i do i i cycle through projects so i just wanted to let everyone know that um i'll probably be back in two to four months i'll be working on other stuff for a while i've got six books that i'm working on six a trill fictional fictional trilogy and then three nonfiction books so yeah um thanks for all the love um and everyone who who is sharing in this journey of discovery for large language models and artificial cognition um i hope that when i come back there will be some more advancements to talk about and yeah just keep it up and and thanks again everyone -------------------------------------------------------------------------------- /transcripts/Spatial Web with Denise Holt and David Shapiro, sneak preview!.txt: -------------------------------------------------------------------------------- 1 | uh AI advances everything is automated we're in this world now where we are living AI adjacent how are people finding meaning so one one concept that I've been working on is is the importance of mission and so having a mission in your life is I have a very clear clearly articulated mission that is not something that uh AI can can dislocate yet and if it does great mission accomplished there are any number of missions that you can have that AI will never be able to take from you but I think in terms of meaning I think that a lot of people are going to find that they're 
going to be empowered by AI to pursue those missions whatever they happen to be you know some people like I want to climb Everest or I want to you know run an Iron Man or whatever like those are all completely valid missions that can give people a tremendous amount of of meaning in their life even without any kind of cosmic meaning so that's kind of where I'm at -------------------------------------------------------------------------------- /transcripts/Welcome to the Fourth Industrial Revolution.txt: -------------------------------------------------------------------------------- 1 | the world is changing faster and faster we all feel it we all know it there's not much we can do [Music] it's hard to imagine how life might be like in just a few years five years ten years there's this kind of compounding acceleration this virtuous cycle The Snowball Effect of one Technology Building on another the excitement is building not just with us not just with you and me but in the business community in the governments the possibilities we see are expanding across all Horizons and when you know that the world is about to change there's just something that changes in you feel it and we see it and there's almost this rekindling of hope that we see in the world it's something that was recently given a name called vesperance vesprints it means that wistful kind of nostalgic hope that you feel in the evening when you're hoping for a new tomorrow [Music] it's that sense of ending and we are living through one of the greatest endings that Humanity has ever experienced this is the dawn of a new era for all of humanity and it is a remarkable privilege to be part of it with all of you there is no more exciting time to be alive than right now in all of human history that's what it means to be living through the fourth Industrial Revolution it's not just a matter of industry and productivity we are going to change the way that we relate to ourselves to each other to the very nature of 
existence we will redefine what it means to be alive and what it means to be human I am so excited to be here and this is why I work so hard to share everything that I can do with all of you it's time to roll up our sleeves and get our hands dirty and do the best that we can I don't know about you but I am so excited to see what tomorrow brings on a weekly basis we see game-changing Technologies and scientific breakthroughs and they're just coming faster and faster and faster I don't know what life will be like even this time next year but I'm really looking forward to seeing it so thank you we're coming on this ride with us it's going to be bumpy at times but it will always be exciting and we will not be the same when we come out on the other side take care --------------------------------------------------------------------------------