├── .gitignore ├── LICENSE.md ├── README - dedup.md ├── README.md ├── ansi_colors.py ├── comms.py ├── communication_file.py ├── compare_and_update_json.py ├── create_people_json.py ├── dedup.py ├── docs ├── journey.md ├── person_body.md ├── person_frontmatter.md └── tags.md ├── embed_notes.py ├── last_contact.py ├── md_birthdays.py ├── md_body.py ├── md_date.py ├── md_file.py ├── md_frontmatter.py ├── md_interactions.py ├── md_lookup.py ├── md_person.py ├── media ├── Ctrl mouse over wikilink.png ├── EricaXu.png ├── HALHome.jpg ├── SpongeBob_frontmatter.png ├── anniversary_and_birthday_query.png ├── inline_query.png ├── mynetwork.png ├── obsidian filters.png ├── obsidian_folders.png └── sample_janet_frontmatter.png ├── mise.toml ├── most_contacted.py ├── queries ├── Apr Birthdays and Anniversaries.md ├── Aug Birthdays and Anniversaries.md ├── Dec Birthdays and Anniversaries.md ├── Feb Birthdays and Anniversaries.md ├── Jan Birthdays and Anniversaries.md ├── Jul Birthdays and Anniversaries.md ├── Jun Birthdays and Anniversaries.md ├── Mar Birthdays and Anniversaries.md ├── May Birthdays and Anniversaries.md ├── Nov Birthdays and Anniversaries.md ├── Oct Birthdays and Anniversaries.md └── Sep Birthdays and Anniversaries.md ├── requirements.txt └── templates ├── Call.md ├── Chat.md ├── Organization.md ├── Person.md ├── Place.md ├── Post.md ├── Product.md ├── Video.md └── email.md /.gitignore: -------------------------------------------------------------------------------- 1 | 2 | *.pyc 3 | .vs/* 4 | __pycache__ 5 | *.pyproj 6 | *.pyperf 7 | *.sln 8 | *.pyproj.user 9 | people.json -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | Copyright 2024 JanLabs Inc. 
2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 4 | 5 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 6 | 7 | THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -------------------------------------------------------------------------------- /README - dedup.md: -------------------------------------------------------------------------------- 1 | # dedup.py - Markdown File Deduplication Tool 2 | 3 | A Python utility for removing duplicate content from markdown files while preserving context, with special handling for email threads and conversation logs. 4 | 5 | *DISCLAIMER: This tool was written using ChatGPT and Claude. It is destructive, so we suggest creating copies of your files and running it on those, not on your original files.* 6 | 7 | ## Overview 8 | 9 | `dedup.py` is designed to clean up markdown files containing email threads, conversations, or notes that often contain duplicate content. It identifies and removes redundant information while preserving the context and structure of the original document. 
10 | 11 | ## Features 12 | 13 | - **Smart Duplicate Detection**: Identifies duplicate messages and paragraphs with content similarity analysis 14 | - **Context Preservation**: Optionally replaces duplicates with contextual markers rather than removing completely 15 | - **Batch Processing**: Groups similar duplicates to reduce the number of interactions 16 | - **Customizable Replacement**: Prompts for replacement text for each duplicate or group 17 | - **Selective Processing**: Interactive mode allows reviewing each duplicate or group 18 | - **Robust Error Handling**: Gracefully handles missing files, permission issues, and interruptions 19 | - **Backup Creation**: Automatically creates backups before modifying files 20 | - **Formatting Fixes**: Optionally cleans up common email formatting issues 21 | 22 | ## Installation 23 | 24 | 1. Ensure you have Python 3.6+ installed 25 | 2. Download `dedup.py` to your preferred location 26 | 3. No external dependencies required (uses Python standard library only) 27 | 28 | ## Usage 29 | 30 | ### Basic Usage 31 | 32 | ```bash 33 | python dedup.py <folder> 34 | ``` 35 | 36 | This will process all dated markdown files (format: YYYY-MM-DD*.md) in the specified folder and its subfolders interactively. 37 | 38 | ### Command Line Arguments 39 | 40 | ```bash 41 | python dedup.py [options] <path> 42 | ``` 43 | 44 | Options: 45 | - `--auto`: Automatically remove duplicates from the same sender without prompting 46 | - `--min-chars INT`: Minimum content length in characters to consider for deduplication (default: 40) 47 | - `--verbose`: Show detailed processing information 48 | - `--dry-run`: Show what would be removed without making changes 49 | - `--no-format-fix`: Skip formatting fixes 50 | - `--no-prompt`: Don't prompt for replacement text (just remove duplicates) 51 | - `--no-context`: Remove duplicate content completely without leaving context 52 | 53 | ### Interactive Mode 54 | 55 | When run in interactive mode, the tool will: 56 | 57 | 1. 
Scan for duplicates and group them by similarity patterns 58 | 2. Present each group with sample content 59 | 3. Offer options to: 60 | - Remove all duplicates in the group (y) 61 | - Skip the group (n) 62 | - Selectively process each duplicate individually (s) 63 | 4. Prompt for replacement text when removing duplicates (unless `--no-prompt` is specified) 64 | 65 | ### Examples 66 | 67 | Process a single file interactively: 68 | ```bash 69 | python dedup.py path/to/file.md 70 | ``` 71 | 72 | Process all files in a folder automatically (with same-sender duplicates): 73 | ```bash 74 | python dedup.py --auto path/to/folder 75 | ``` 76 | 77 | Remove duplicates without any replacement text: 78 | ```bash 79 | python dedup.py --no-prompt path/to/folder 80 | ``` 81 | 82 | Dry run to see what would be detected without making changes: 83 | ```bash 84 | python dedup.py --dry-run --verbose path/to/folder 85 | ``` 86 | 87 | ## What It Detects 88 | 89 | The tool identifies several types of duplicates: 90 | 91 | 1. **Duplicate Messages** - Messages with the same content from: 92 | - The same sender (e.g., email replies, forwarded content) 93 | - Different senders (with higher similarity threshold) 94 | 95 | 2. **Repeating Paragraphs** - Content blocks that repeat within the document 96 | 97 | 3. **Common Email Patterns** - Recognizes patterns like: 98 | 99 | - "Name at HH:MM" format 100 | - "Name wrote:" format 101 | - Email headers (From, To, Subject, etc.) 102 | - Simple name headers followed by content 103 | 104 | ## Backups 105 | 106 | Before modifying any file, the tool creates a backup in the `backups` directory, preserving the original file structure. If an error occurs during backup creation, a warning will be displayed. 
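The content-similarity analysis mentioned in the Features section can be sketched with Python's standard `difflib`. This is a hypothetical illustration of the approach, not `dedup.py`'s actual code, and the `0.9` threshold is an assumed value:

```python
import difflib

def is_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    # Ratio of matching characters between the two blocks, from 0.0 to 1.0
    # (hypothetical threshold, not dedup.py's actual setting)
    ratio = difflib.SequenceMatcher(None, a.strip(), b.strip()).ratio()
    return ratio >= threshold
```

Near-identical blocks, such as a quoted reply with minor whitespace changes, score close to 1.0 and would be flagged, while unrelated paragraphs fall well below the threshold.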
107 | 108 | ## Formatting Fixes 109 | 110 | When the `--no-format-fix` option is not specified, the tool also cleans up common formatting issues: 111 | - Standardizes "Original Message" markers 112 | - Fixes email header formatting 113 | - Cleans excessive whitespace 114 | 115 | ## Troubleshooting 116 | 117 | - **File Not Found Errors**: If files are removed during processing, the tool will now skip them and continue 118 | - **Keyboard Interrupts**: Press Ctrl+C to gracefully exit the process 119 | - **Empty Backup Folder**: Check for permission issues or path problems if backups aren't being created 120 | 121 | ## Notes 122 | 123 | - The tool is designed for markdown files following the YYYY-MM-DD*.md naming pattern 124 | - Files with embedded content (like `![[file.ext]]` syntax) are flagged during processing 125 | - Large files may take longer to process due to the similarity comparison algorithms 126 | 127 | ## License 128 | 129 | This tool is provided as-is under the MIT License. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # hal_md 2 | 3 | Personal relationship management of your ego social network using plain-text Markdown files. Think of it like a Personal CRM on steroids. You'll need steroids and patience to create the Markdown files but they'll be yours forever. 4 | 5 | This is a collection of templates with instructions and over time it will evolve. The approach relies heavily on a Personal Knowledge Management (PKM) tool like [Obsidian](https://obsidian.md/) but could work with any text editor. 6 | 7 | ## Context 8 | 9 | Getting here has been a decades-long journey which you can read about in [The Long and Winding Road](docs/journey.md). 10 | 11 | ## Start with the end in mind 12 | 13 | This will take a long time to build out and require some attention to detail. 
14 | 15 | Here's the visualization of my social network in Obsidian. This is using a filter of the Markdown files that have `tags: [person]` in their frontmatter – the meta-data at the top of my note files. The colors represent which category I have the person in, such as bright green for my "A-listers" where I have the tags `person` and `alist`. Red is for `flist`, i.e., the people that I don't want to keep in touch with for various reasons. 16 | 17 | ![](media/mynetwork.png) 18 | 19 | ## Navigating your social network 20 | 21 | You can use any text editor but preferably one that supports wikilinks, YAML frontmatter, and queries. 22 | 23 | 1. [Obsidian](https://obsidian.md/) by fellow Canadians [Erica Xu](https://github.com/ericaxu) and [Shida Li](https://github.com/lishid) 24 | 2. [Silver Bullet](https://github.com/silverbulletmd), which is open source, by Dutch developer [Zef Hemel](https://github.com/zefhemel) 25 | 26 | I haven't checked if [GitJournal](https://github.com/GitJournal/GitJournal) by [Vishesh Handa](https://www.linkedin.com/in/visheshhanda/) supports YAML frontmatter but it does support wikilinks so you'll still be able to navigate your notes. 27 | 28 | As shown in the above image, Obsidian has a graph view (aka Map of Content aka MoC) which is a really fun way to visualize and navigate your social network. The local graph view is much more useful at an individual person level. 29 | 30 | [Visual Studio](https://visualstudio.microsoft.com/) is also handy for bulk changes. 31 | 32 | ## How it works 33 | 34 | Simply use Obsidian and start creating files for each `Person` and optionally `Organization` and `Place` using the [Templates](#templates) provided. Include wikilinks in the body of the file in the form of `[[name]]` to "connect" the people, places, and organizations together as you go. 35 | 36 | 1. Create a file for a person in your network 37 | 2. Use the [Person.md](templates/Person.md) template 38 | 3. Name the file `FirstName LastName.md` 39 | 4. 
Fill in as little or as much of the person's metadata as you like 40 | 5. List the people they're connected to under `## People` using `[[FirstName LastName]]` 41 | 6. List their positions under `## Positions` using `Title, [[Organization Name]]` 42 | 7. Click on each person under `## People` 43 | 8. Have a sip of your favorite drink 44 | 9. Go to Step 2 45 | 46 | ### Bonus points 47 | 48 | For each Organization under `## Organization` fill in as little or as much information on the organization as you like. 49 | 50 | For each person under `## People` add [tags](docs/tags.md) like `#friend` or `#strong` to track the strength of the ties between them. 51 | 52 | ## Folders, or no Folders, it's up to you 53 | 54 | Organize your notes as you wish. I like to have folders. 55 | 56 | - `Attachments` - for any files, images, photos 57 | - `Organizations` - put all the company profiles in here 58 | - `People` - put all the people in here. Subfolder with their `slug` and then dated files for each interaction 59 | - `Personal` - my personal notes 60 | - `Templates` - the files from [Templates](#templates) 61 | 62 | For most people, I create a folder for them and a sub-folder `media` for a photo of them and any images or files we shared with each other. 63 | 64 | My [Helper Tools](#helper-tools) put images and files I've shared into those `media` subfolders. For people I haven't communicated with, I stuff their files in `People\others` 65 | 66 | ![](media/obsidian_folders.png) 67 | 68 | ## Templates 69 | 70 | These are a set of templates to track your social network. Each contains a set of metadata at the top of the file, also known as YAML frontmatter. If you're not technical, don't worry as Obsidian makes it easy to edit that information. 71 | 72 | File | For what | Notes 73 | ---|---|--- 74 | [Call.md](templates/Call.md) | A phone call | Do people still make these? 75 | [Chat.md](templates/Chat.md) | Instant messaging chat | e.g. 
LinkedIn, Signal, SMS 76 | [Organization.md](templates/Organization.md) | Schools and companies | Where a `Person` studies, volunteers, or works 77 | [Person.md](templates/Person.md) | A person | The actual person! 78 | [Place.md](templates/Place.md) | A physical place | Places people including you have been (e.g. vacations, recommendations) 79 | [Post.md](templates/Post.md) | Social media or blog post | A post by a `Person` 80 | [Product.md](templates/Product.md)| Product | A product worked on by a `Person` and/or `Organization` 81 | [Video.md](templates/Video.md) | Videos | e.g. YouTube video by `People` 82 | 83 | ## Frontmatter matters 84 | 85 | A big part of this working well will be maintaining the frontmatter. You can slip things in over time, like a new skill for a person or a new interest. You don't have to do it all at once. Just start. 86 | 87 | In this example, you can see that if you click in the skills field, Obsidian shows a list of skills other people have, which makes it easy to be consistent across all people with that skill. 88 | 89 | ![](media/SpongeBob_frontmatter.png) 90 | 91 | ## What's a slug? 92 | 93 | Each key template has a `slug` field, a one-word or hyphenated identifier that uniquely distinguishes the Person, Place, or Organization from others. It needs to be unique within each of the categories. 94 | 95 | For example, [Organization.md](templates/Organization.md) has `people: []` in the frontmatter which could contain a comma-separated list of Person slugs from individual [Person.md](templates/Person.md) files. 96 | 97 | The [Chat.md](templates/Chat.md) template also has a `people` field to list the people that were part of the conversation. 98 | 99 | The [Place.md](templates/Place.md) template has a `people` field which you could use to list people that recommended the place. You won't need to put people that live or work there in this field since that information is already in their [Person.md](templates/Person.md) template. 
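For illustration, here is how the slug linkage might look in frontmatter, reusing the `sponge-bob` slug from the examples in this README (the file names and the `krusty-krab` organization are hypothetical):

```yaml
# People/Sponge Bob.md (hypothetical example)
tags: [person]
slug: sponge-bob

# Organizations/Krusty Krab.md (hypothetical example)
tags: [organization]
slug: krusty-krab
people: [sponge-bob]
```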
100 | 101 | ## The Person 102 | 103 | This is the most important template of the collection and there are two pages describing the file: 104 | 105 | - [The Person's head](docs/person_frontmatter.md) describes each of the fields in the [Person.md](templates/Person.md) template. 106 | 107 | - [The Person's body](docs/person_body.md) describes the sections of the body of the [Person.md](templates/Person.md) template. 108 | 109 | ## List of birthdays and anniversaries 110 | 111 | With standard Obsidian (no additional plugins), create a file for each month of the year and include embedded queries to show the birthdays and anniversaries that month. The sample query files are [here](queries) and here's what they look like: 112 | 113 | ![](media/anniversary_and_birthday_query.png) 114 | 115 | Which results in this (a bit ugly as you see the regex): 116 | 117 | ![](media/inline_query.png) 118 | 119 | ## Helper tools 120 | 121 | I've written some Python tools to convert the exports from various messaging apps to Markdown. 122 | 123 | So far, I've created: 124 | 125 | - [linkedin_md](https://github.com/thephm/linkedin_md) for LinkedIn chats 126 | - [signal_md](https://github.com/thephm/signal_md) for Signal messages using `signald` 127 | - [signal_sqlite_md](https://github.com/thephm/signal_sqlite_md) for Signal messages from its SQLite DB 128 | - [sms_backup_md](https://github.com/thephm/sms_backup_md) for SMS messages 129 | - 2024-03-10: [last_contact](last_contact.py) to see when I last contacted the person 130 | - 2024-03-10: [md_birthdays](md_birthdays.py) outputs a month-by-month calendar of birthdays 131 | - 2024-03-10: [sample](https://github.com/thephm/sample) a sample collection of famous computer science folk 132 | - 2024-09-22: [comms](comms.py) to show the most recent communications with a person 133 | 134 | Why? 
So I can get **my** conversations with people in **my** network into **my** own files that **I** can control and use directly with **my** social network data. Each of those tools relies on [message_md](https://github.com/thephm/message_md). 135 | 136 | ### Comms 137 | 138 | This tool is meant to be used on the command line to look up the most recent communications with a person. 139 | 140 | By default, the contents of the last 3 dated message files are shown, e.g. `2024-09-12.md`. 141 | 142 | The Markdown is converted to plain text. 143 | 144 | For this tool you need to install a few libraries: 145 | 146 | ```bash 147 | pip install markdown 148 | pip install rich 149 | pip install html2text 150 | ``` 151 | 152 | #### Command line options 153 | 154 | - `-f` or `--folder` - The folder where each Person has a subfolder named with their slug 155 | - `-s` or `--slug` - The slug of the person e.g. 'sponge-bob' 156 | - `-d` or `--debug` - Debug messages 157 | - `-n` or `--name` - The first name of the person -- NOT IMPLEMENTED 158 | - `-x` or `--max` - The maximum number of interaction files to dump 159 | - `-m` or `--markdown` - To display the Markdown instead of plain text 160 | - `-t` or `--time` - Show the time, e.g. "SpongeBob at 23:31" 161 | - `-c` or `--color` - Use ANSI colors, otherwise just black/white text 162 | 163 | ### Most contacted 164 | 165 | The `most_contacted.py` script goes through every file dated `YYYY-MM-DD.md` and then shows you who you communicated with on the most days, over how long a period, and when the last contact was. Kind of a fun leaderboard that I shared with my siblings. 166 | 167 | By default, the results are displayed on the command line or you can use the `-o` option to generate a CSV file and then play with it in Excel. 168 | 169 | DISCLAIMER: this script was entirely crafted by ChatGPT based on about 40 prompts I gave it. Not sure who owns the code now but alas my duty to disclose be done. 
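The kind of scan it performs can be sketched as follows (a hypothetical illustration of counting distinct contact days per person slug, not `most_contacted.py`'s actual code):

```python
import re
from collections import Counter
from pathlib import Path

# Matches dated interaction files like 2024-09-12.md
DATED = re.compile(r"^\d{4}-\d{2}-\d{2}\.md$")

def contact_days(people_folder: str) -> Counter:
    # Each person's subfolder is named with their slug; every dated
    # file inside it represents one day of contact with that person.
    counts = Counter()
    for path in Path(people_folder).rglob("*.md"):
        if DATED.match(path.name):
            counts[path.parent.name] += 1
    return counts
```

`counts.most_common(n)` would then give the leaderboard.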
170 | 171 | #### Command line options 172 | 173 | - `-m` or `--my-slug` - the short code you use for yourself e.g. `sponge-bob` 174 | - `-n` or `--top-n` - Show the `n` people you communicate with on the most days 175 | - `-o` or `--output-csv` - Generate a CSV file instead of displaying the results 176 | 177 | ## License 178 | 179 | This project is licensed under the MIT License - see the [LICENSE](LICENSE.md) file for details. -------------------------------------------------------------------------------- /ansi_colors.py: -------------------------------------------------------------------------------- 1 | # text colors 2 | BLACK = "\033[30m" 3 | RED = "\033[31m" 4 | GREEN = "\033[32m" 5 | YELLOW = "\033[33m" 6 | BLUE = "\033[34m" 7 | MAGENTA = "\033[35m" 8 | CYAN = "\033[36m" 9 | WHITE = "\033[37m" 10 | RESET = "\033[0m" # Reset to default color 11 | 12 | # background colors 13 | BG_BLACK = "\033[40m" 14 | BG_RED = "\033[41m" 15 | BG_GREEN = "\033[42m" 16 | BG_YELLOW = "\033[43m" 17 | BG_BLUE = "\033[44m" 18 | BG_MAGENTA = "\033[45m" 19 | BG_CYAN = "\033[46m" 20 | BG_WHITE = "\033[47m" 21 | 22 | # Example usage 23 | if __name__ == "__main__": 24 | print(f"{RED}This is red text{RESET}") 25 | print(f"{BG_GREEN}This is text on a green background{RESET}") 26 | -------------------------------------------------------------------------------- /comms.py: -------------------------------------------------------------------------------- 1 | # Get the comms with a specific person 2 | 3 | import os 4 | from argparse import ArgumentParser 5 | import datetime 6 | 7 | import sys 8 | sys.path.insert(1, '../hal/') 9 | import person 10 | import identity 11 | 12 | sys.path.insert(1, './') 13 | import md_lookup 14 | import md_frontmatter 15 | import md_body 16 | import md_date 17 | import md_interactions 18 | 19 | sys.path.insert(1, './') 20 | import communication_file 21 | import ansi_colors 22 | 23 | import markdown 24 | import html2text 25 | import re 26 | 27 | def 
remove_markdown(text): 28 | # convert Markdown to HTML 29 | html = markdown.markdown(text) 30 | 31 | # use html2text to strip HTML tags 32 | plain_text = html2text.html2text(html) 33 | 34 | # remove block quotes, including cases like "> >", ">>", or "> > >" 35 | plain_text = re.sub(r'(^|\n)\s*>+\s*', ' ', plain_text) # Match one or more '>' with spaces 36 | 37 | # remove underscores used for italics (_italic_ or __bold__) 38 | plain_text = re.sub(r'(_{1,2})(.*?)\1', r'\2', plain_text) 39 | 40 | # replace custom image syntax like ![[filename|size]] with "(image)" 41 | plain_text = re.sub(r'!\[\[.*?\|?.*?\]\]', '(image)', plain_text) 42 | 43 | # replace video links like [[filename.mp4]] with "(video)" 44 | plain_text = re.sub(r'\[\[.*?\.mp4\]\]', '(video)', plain_text) 45 | 46 | # remove " at HH:MM" and replace with ":" 47 | plain_text = re.sub(r' at \d{1,2}:\d{2}', ':', plain_text) 48 | 49 | return plain_text 50 | 51 | NEW_LINE = "\n" 52 | 53 | # Parse the command line arguments 54 | def get_arguments(): 55 | 56 | parser = ArgumentParser() 57 | 58 | parser.add_argument("-f", "--folder", dest="folder", default=".", 59 | help="The folder where each Person has a subfolder named with their slug") 60 | 61 | parser.add_argument("-s", "--slug", dest="slug", default=".", 62 | help="The slug of the person e.g. 'sponge-bob'") 63 | 64 | parser.add_argument("-d", "--debug", dest="debug", action="store_true", default=False, 65 | help="Print extra info as the files processed") 66 | 67 | parser.add_argument("-m", "--markdown", dest="markdown", action="store_true", default=False, 68 | help="Display the Markdown instead of raw text") 69 | 70 | parser.add_argument("-t", "--time", dest="showtime", action="store_true", default=False, 71 | help="Show the time e.g. 
SpongeBob at 23:31") 72 | 73 | parser.add_argument("-c", "--color", dest="color", action="store_true", default=False, 74 | help="Use ANSI colors, otherwise just black/white text") 75 | 76 | parser.add_argument("-n", "--name", dest="name", default="", 77 | help="The name of the person") 78 | 79 | parser.add_argument("-x", "--max", type=int, dest="max", default=3, 80 | help="Maximum number of interactions to display") 81 | 82 | args = parser.parse_args() 83 | 84 | return args 85 | 86 | # ----------------------------------------------------------------------------- 87 | # 88 | # Given a folder name and a person's slug, load recent interactions 89 | # with that person. 90 | # 91 | # Parameters: 92 | # 93 | # - folder - folder containing sub-folders for each person 94 | # - slug - person that is being looked up 95 | # - max - the maximum number of interactions to display 96 | # - color - True if we should include ANSI colors, False if not 97 | # 98 | # Returns: 99 | # 100 | # - A string of collated Markdown from the most recent interactions 101 | # 102 | # Notes: 103 | # 104 | # 1. Find all files with names `YYYY-MM-DD` in the `folder` 105 | # 2. 
Grab the body and collate the markdown 106 | # 107 | # ----------------------------------------------------------------------------- 108 | def get_interactions(folder, slug, max, color): 109 | 110 | count = 0 111 | the_markdown = "" 112 | 113 | the_interactions = [] 114 | 115 | # get all of the interactions with the person 116 | the_date = md_interactions.get_interactions(slug, os.path.join(folder, slug), the_interactions) 117 | 118 | if args.debug: 119 | print(slug + ": " + str(the_date)) 120 | 121 | for interaction in the_interactions: 122 | the_date = interaction.date 123 | the_file = communication_file.CommunicationFile() 124 | the_file.path = os.path.join(folder, slug + "/" + str(the_date) + ".md") 125 | the_file.open('r') 126 | 127 | if the_file is not None: 128 | 129 | the_file.frontmatter.read() 130 | 131 | file_date = getattr(the_file.frontmatter, md_frontmatter.FIELD_DATE) 132 | the_service = getattr(the_file.frontmatter, md_frontmatter.FIELD_SERVICE) 133 | if color: 134 | the_markdown += ansi_colors.BG_BLUE 135 | the_markdown += str(file_date) 136 | if color: 137 | the_markdown += ansi_colors.RESET 138 | the_markdown += " via " + str(the_service) + NEW_LINE + NEW_LINE 139 | the_file.body.read() 140 | the_markdown += str(the_file.body.raw) 141 | 142 | count += 1 143 | if count >= max: 144 | break 145 | 146 | return the_markdown 147 | 148 | # main 149 | 150 | args = get_arguments() 151 | folder = args.folder 152 | 153 | if folder and not os.path.exists(folder): 154 | print('The folder "' + folder + '" could not be found.') 155 | 156 | elif folder: 157 | the_markdown = get_interactions(folder, args.slug, args.max, args.color) 158 | if not args.markdown: 159 | the_markdown = remove_markdown(the_markdown) 160 | 161 | print(the_markdown) 162 | -------------------------------------------------------------------------------- /communication_file.py: -------------------------------------------------------------------------------- 1 | # Represents a Markdown file 
containing communications like email or chat. 2 | 3 | import sys 4 | sys.path.insert(1, './') 5 | import md_frontmatter 6 | import md_body 7 | import md_file 8 | 9 | # communication tags 10 | TAG_CHAT = "chat" 11 | TAG_EMAIL = "email" 12 | TAG_PHONE = "phone" 13 | TAG_CALL = "call" 14 | 15 | Tags = [TAG_CHAT, TAG_EMAIL, TAG_PHONE, TAG_CALL] 16 | 17 | # fields in a communication Markdown file 18 | FIELD_PEOPLE = "people" 19 | FIELD_SERVICE = "service" 20 | FIELD_TOPIC = "topic" 21 | FIELD_DATE = "date" 22 | FIELD_TIME = "time" 23 | 24 | Fields = [FIELD_PEOPLE, FIELD_TOPIC, FIELD_DATE, FIELD_TIME, FIELD_SERVICE] 25 | 26 | class CommunicationFrontmatter(md_frontmatter.Frontmatter): 27 | def __init__(self, parent): 28 | super().__init__(parent) 29 | self.parent = parent 30 | self.tags.extend(Tags) 31 | self.fields.extend(Fields) 32 | self.raw = "" 33 | 34 | class CommunicationBody(md_body.Body): 35 | def __init__(self, parent): 36 | super().__init__(parent) 37 | self.parent = parent 38 | self.raw = "" 39 | 40 | class CommunicationFile(md_file.File): 41 | def __init__(self): 42 | super().__init__() 43 | self.frontmatter = CommunicationFrontmatter(self) 44 | self.frontmatter.init_fields() 45 | self.body = CommunicationBody(self) 46 | -------------------------------------------------------------------------------- /compare_and_update_json.py: -------------------------------------------------------------------------------- 1 | # The `hal_md` and related scripts are part of a larger project that involves 2 | # generating and managing a collection of Markdown files. The scripts use a 3 | # `people.json` file as input to determine the person slug associated with 4 | # phone numbers and email addresses. 5 | # 6 | # This script is temporary, to help me update my people Markdown files with 7 | # current contact info (email addresses, mobile phone numbers), and to 8 | # remove old values that are no longer valid, updating the JSON file with 9 | # them for archival purposes. 
10 | # 11 | # It compares two JSON files that are lists of dictionaries representing a 12 | # person. Each dictionary has a unique "slug" key. Typically, the slug is 13 | # `firstname_lastname`. 14 | # 15 | # I have oodles of people Markdown files and a `create_people_json.py` script 16 | # that generates a JSON file with all of the people based on the frontmatter 17 | # in each person's file. 18 | # 19 | # I need to compare the generated JSON file with the `people.json` to see if 20 | # there are any new or changed values. I then manually (yeah, I know), update 21 | # the original People Markdown files with the new values. 22 | # 23 | # Why not update the original Markdown files and just use them instead of this 24 | # chaos? Because I have a lot of Markdown files and I don't want to break them. 25 | # I want to keep the original files intact and just update the JSON file. Also 26 | # because I developed the scripts at different times. Lastly, the `people.json` 27 | # supports multiple phone numbers and email addresses, many of which are old 28 | # and no longer valid. I want to keep the old values in the JSON file for 29 | # archival purposes when I process old emails for example. I don't want or 30 | # need those old values in the Markdown files that I use every day. 31 | # 32 | # Phew, that was a lot of background but I needed to write it down to get it 33 | # out of my head. 34 | 35 | # The script compares two JSON files, identifies conflicts or missing records, 36 | # and allows you to choose between keeping the original or using the modified 37 | # version for conflicting records. It also identifies records that exist in the 38 | # original file but are missing in the modified file. 39 | # 40 | # When a record exists in both files but has differences, the script shows the 41 | # differences using show_diff_dicts. You are prompted to choose between: 42 | # 43 | # [o] Keep original: Keeps the record from the original file. 
44 | # [m] Use modified: Uses the record from the modified file updating the original. 45 | 46 | import json 47 | import difflib 48 | 49 | def load_json_list(filename): 50 | with open(filename, 'r', encoding='utf-8') as f: 51 | return json.load(f) 52 | 53 | def normalize_dict(d): 54 | """Remove fields that are blank (None, empty string, empty list) and ensure consistent structure.""" 55 | if not isinstance(d, dict): 56 | return d # Return as-is if not a dictionary 57 | return {k: normalize_dict(v) for k, v in d.items() if v not in (None, "", [], {})} 58 | 59 | def to_dict_by_slug(items): 60 | slug_dict = {} 61 | for i, item in enumerate(items): 62 | if not isinstance(item, dict): 63 | print(f"⚠️ Skipping non-dict item at index {i}: {item}") 64 | continue 65 | 66 | slug = item.get("slug") 67 | if not isinstance(slug, str): 68 | print(f"⚠️ Skipping item at index {i} with invalid slug (type={type(slug).__name__}):\n{json.dumps(item, indent=2)}") 69 | continue 70 | 71 | slug_dict[slug] = item 72 | return slug_dict 73 | 74 | def show_diff_dicts(orig, mod): 75 | # Normalize the dictionaries to ignore blank and missing fields 76 | orig_normalized = normalize_dict(orig) 77 | mod_normalized = normalize_dict(mod) 78 | 79 | # Ensure consistent formatting for comparison 80 | orig_lines = json.dumps(orig_normalized, indent=2, ensure_ascii=False).splitlines() 81 | mod_lines = json.dumps(mod_normalized, indent=2, ensure_ascii=False).splitlines() 82 | 83 | diff = difflib.unified_diff( 84 | orig_lines, mod_lines, fromfile="original", tofile="modified", lineterm="" 85 | ) 86 | return "\n".join(diff) 87 | 88 | def choose_version(slug, orig, mod): 89 | # Normalize the dictionaries to ignore blank and missing fields 90 | orig_normalized = normalize_dict(orig) 91 | mod_normalized = normalize_dict(mod) 92 | 93 | # Handle the email field specifically 94 | orig_emails = set(orig.get("email", "").split(";")) if orig else set() 95 | mod_email = mod.get("email", "").strip() if mod else "" 96 
| 97 | if mod_email and mod_email not in orig_emails: 98 | orig_emails.add(mod_email) 99 | orig["email"] = ";".join(sorted(orig_emails)) # Merge emails and sort for consistency 100 | 101 | # Compare the rest of the fields 102 | orig_normalized = normalize_dict(orig) 103 | mod_normalized = normalize_dict(mod) 104 | 105 | print(f"\n=== Conflict for slug: {slug} ===") 106 | print(show_diff_dicts(orig_normalized, mod_normalized)) 107 | print("\nChoose:") 108 | print("[o] Keep original") 109 | print("[m] Use modified") 110 | choice = input("Choice [o/m]? ").strip().lower() 111 | 112 | if choice == "o": 113 | return orig 114 | elif choice == "m": 115 | return mod 116 | else: 117 | print("Invalid input, defaulting to original.") 118 | return orig 119 | 120 | def show_diff_dicts(orig, mod): 121 | # Normalize the dictionaries to ignore blank and missing fields 122 | orig_normalized = normalize_dict(orig) 123 | mod_normalized = normalize_dict(mod) 124 | 125 | # Ensure consistent formatting for comparison 126 | orig_lines = json.dumps(orig_normalized, indent=2, ensure_ascii=False).splitlines() 127 | mod_lines = json.dumps(mod_normalized, indent=2, ensure_ascii=False).splitlines() 128 | 129 | diff = difflib.unified_diff( 130 | orig_lines, mod_lines, fromfile="original", tofile="modified", lineterm="" 131 | ) 132 | return "\n".join(diff) 133 | 134 | def main(original_file, modified_file): 135 | original_list = load_json_list(original_file) 136 | modified_list = load_json_list(modified_file) 137 | 138 | orig_dict = to_dict_by_slug(original_list) 139 | mod_dict = to_dict_by_slug(modified_list) 140 | 141 | all_slugs = sorted(set(orig_dict) | set(mod_dict)) 142 | 143 | missing_records = [] 144 | 145 | for slug in all_slugs: 146 | orig = orig_dict.get(slug) 147 | mod = mod_dict.get(slug) 148 | 149 | # Normalize both records before comparison 150 | orig_normalized = normalize_dict(orig) if orig else None 151 | mod_normalized = normalize_dict(mod) if mod else None 152 | 153 | # 
Compare normalized records for conflict detection 154 | if orig_normalized and mod_normalized: 155 | if orig_normalized != mod_normalized: 156 | # Display the conflict only once 157 | updated_record = choose_version(slug, orig, mod) 158 | missing_records.append(updated_record) 159 | elif orig_normalized: # Record exists in original but not in modified 160 | print(f"\n+++ Missing record in modified: {slug}") 161 | print(json.dumps(orig_normalized, indent=2, ensure_ascii=False)) 162 | missing_records.append(orig_normalized) 163 | 164 | print(f"\n✅ Found {len(missing_records)} missing or conflicting records.") 165 | print("You can save these records to a separate file if needed.") 166 | 167 | if __name__ == "__main__": 168 | import sys 169 | if len(sys.argv) != 3: 170 | print("Usage: python compare_and_update_json.py original.json modified.json") 171 | else: 172 | main(sys.argv[1], sys.argv[2]) 173 | -------------------------------------------------------------------------------- /create_people_json.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | # Created with ChatGPT 4 | # 5 | # This script extracts frontmatter from markdown files in a given folder and 6 | # generates a JSON file with person information. It assumes that the 7 | # frontmatter is in YAML format and contains specific fields. 8 | # 9 | # It also assumes that the folder structure is such that each person's markdown 10 | # file is in a subfolder named after the person's slug. 11 | # 12 | # The script uses the PyYAML library to parse the YAML frontmatter. 13 | # 14 | # To run it, the PyYAML library needs to be installed. 
To install it using pip: 15 | # 16 | # pip install pyyaml 17 | 18 | import os 19 | import sys 20 | import json 21 | import yaml 22 | 23 | def extract_frontmatter(filepath): 24 | with open(filepath, 'r', encoding='utf-8', errors='replace') as f: 25 | lines = f.readlines() 26 | 27 | if not lines or not lines[0].strip() == '---': 28 | return None # No frontmatter 29 | 30 | # Extract YAML block between --- and --- 31 | try: 32 | end_index = lines[1:].index('---\n') + 1 33 | except ValueError: 34 | return None # Malformed frontmatter 35 | 36 | frontmatter_lines = lines[1:end_index] 37 | yaml_text = ''.join(frontmatter_lines) 38 | try: 39 | parsed_yaml = yaml.safe_load(yaml_text) 40 | if not isinstance(parsed_yaml, dict): 41 | print(f"Invalid YAML structure in file {filepath}: Expected a dictionary but got {type(parsed_yaml).__name__}") 42 | return None 43 | return parsed_yaml 44 | except yaml.YAMLError as e: 45 | print(f"Error parsing YAML in file {filepath}: {e}") 46 | return None 47 | except ValueError as e: 48 | print(f"Invalid date or value in YAML in file {filepath}: {e}") 49 | return None 50 | 51 | def clean_field(value): 52 | return value if value is not None else "" 53 | 54 | def extract_emails(frontmatter): 55 | # List of all possible email keys in order 56 | email_keys = ['email', 'work_email', 'home_email', 'other_email'] 57 | emails = [frontmatter.get(key) for key in email_keys if frontmatter.get(key)] 58 | return ";".join(emails) 59 | 60 | def update_person_file(filepath, slug): 61 | """Update the person's markdown file to include the slug in the frontmatter.""" 62 | with open(filepath, 'r', encoding='utf-8', errors='replace') as f: 63 | lines = f.readlines() 64 | 65 | if not lines or not lines[0].strip() == '---': 66 | print(f"Warning: File {filepath} does not have valid frontmatter. 
Skipping update.") 67 | return 68 | 69 | try: 70 | end_index = lines[1:].index('---\n') + 1 71 | except ValueError: 72 | print(f"Warning: Malformed frontmatter in file {filepath}. Skipping update.") 73 | return 74 | 75 | # Check if the slug already exists in the frontmatter 76 | for i in range(1, end_index): 77 | if lines[i].strip().startswith('slug:'): 78 | return 79 | 80 | # Insert the slug after the last_name field in the frontmatter 81 | for i in range(1, end_index): 82 | if lines[i].strip().startswith('last_name:'): 83 | lines.insert(i + 1, f"slug: {slug}\n") 84 | break 85 | else: 86 | lines.insert(end_index, f"slug: {slug}\n") 87 | 88 | # Write the updated content back to the file 89 | with open(filepath, 'w', encoding='utf-8') as f: 90 | f.writelines(lines) 91 | print(f"Added slug: {slug} to file {filepath}") 92 | 93 | def extract_person_info(frontmatter, folder_slug, filepath): 94 | tags = frontmatter.get('tags') 95 | if tags is None: 96 | tags = [] # Default to an empty list if 'tags' is None 97 | elif isinstance(tags, str): 98 | tags = [t.strip() for t in tags.strip('[]').split(',')] # Convert string to list if needed 99 | 100 | if 'person' not in tags: 101 | return None 102 | 103 | slug = frontmatter.get('slug') 104 | if not slug: 105 | slug = folder_slug 106 | update_person_file(filepath, slug) # Update the file with the generated slug 107 | 108 | return { 109 | "slug": slug, 110 | "first-name": clean_field(frontmatter.get('first_name')), 111 | "last-name": clean_field(frontmatter.get('last_name')), 112 | "mobile": clean_field(frontmatter.get('mobile')), 113 | "work-mobile": clean_field(frontmatter.get('work_mobile')), 114 | "email": extract_emails(frontmatter), 115 | "facebook-id": clean_field(frontmatter.get('facebook_id')), 116 | "linkedin-id": clean_field(frontmatter.get('linkedin_id')), 117 | "x-id": clean_field(frontmatter.get('x_id')) 118 | } 119 | 120 | def main(folder_path): 121 | people = [] 122 | for root, _, files in os.walk(folder_path): 
123 | for file in files: 124 | if file.endswith('.md'): 125 | full_path = os.path.join(root, file) 126 | folder_slug = os.path.basename(os.path.dirname(full_path)) 127 | frontmatter = extract_frontmatter(full_path) 128 | if frontmatter: 129 | person = extract_person_info(frontmatter, folder_slug, full_path) 130 | if person: 131 | people.append(person) 132 | 133 | with open('people.json', 'w', encoding='utf-8') as f: 134 | json.dump(people, f, indent=2) 135 | 136 | print("JSON data has been written to people.json") 137 | 138 | if __name__ == '__main__': 139 | if len(sys.argv) != 2: 140 | print("Usage: python create_people_json.py <folder_path>") 141 | sys.exit(1) 142 | main(sys.argv[1]) 143 | -------------------------------------------------------------------------------- /dedup.py: -------------------------------------------------------------------------------- 1 | import os 2 | import shutil 3 | import re 4 | import sys 5 | import argparse 6 | import difflib 7 | from collections import defaultdict 8 | 9 | def get_backup_path(original_path, base_backup_dir="backups"): 10 | """Create a backup path mirroring the original structure.""" 11 | abs_path = os.path.abspath(original_path) 12 | relative_path = os.path.relpath(abs_path, start=os.getcwd()) 13 | return os.path.join(base_backup_dir, relative_path) 14 | 15 | def create_backup(original_path, backup_path): 16 | """Save a backup copy before modifying.""" 17 | os.makedirs(os.path.dirname(backup_path), exist_ok=True) 18 | shutil.copy2(original_path, backup_path) 19 | 20 | def parse_markdown_content(text): 21 | """Parse markdown content, preserving frontmatter.""" 22 | # First, identify and preserve frontmatter 23 | frontmatter = None 24 | content = text 25 | 26 | # Check for YAML frontmatter (between --- markers) 27 | frontmatter_match = re.match(r'^---\n(.*?)\n---\n', text, re.DOTALL) 28 | if frontmatter_match: 29 | frontmatter = frontmatter_match.group(0) 30 | content = text[len(frontmatter):] 31 | 32 | return frontmatter, 
content 33 | 34 | def clean_formatting(text): 35 | """Clean up the formatting of various email-style markers.""" 36 | # Replace "**-Original Message**-" with "*-- Original Message --*" 37 | text = re.sub(r'\*\*-+\s*Original\s*Message\s*-+\*\*', '*-- Original Message --*', text, flags=re.IGNORECASE) 38 | 39 | # Handle "*-Original Message-*" format (add spacing) 40 | text = re.sub(r'\*-\s*Original\s*Message\s*-\*', '*-- Original Message --*', text, flags=re.IGNORECASE) 41 | 42 | # Also handle other variations of Original Message markers with any combination of - and * 43 | text = re.sub(r'(\*+|\*+-)[\s-]*Original\s*Message[\s-]*(-\*+|\*+)', '*-- Original Message --*', text, flags=re.IGNORECASE) 44 | 45 | # Common email header fields to clean up 46 | header_fields = ['From', 'To', 'Cc', 'Bcc', 'Subject', 'Date', 'Sent', 'Reply To', 'Reply-To', 'Forwarded'] 47 | 48 | for field in header_fields: 49 | # Replace "**Field:** " with "Field: " and ensure there's a newline before it 50 | text = re.sub(r'(?)?', headers['From']) 80 | if name_match: 81 | sender_name = name_match.group(1).strip() 82 | 83 | headers['sender_name'] = sender_name 84 | 85 | return headers 86 | 87 | def extract_embeds(text): 88 | """Extract embedded file references from the text.""" 89 | embeds = [] 90 | 91 | # Pattern for ![[filename.ext]] and [[filename.ext]] embeds 92 | embed_pattern = r'(!?\[\[.*?\]\])' 93 | 94 | for match in re.finditer(embed_pattern, text): 95 | embeds.append({ 96 | 'text': match.group(0), 97 | 'start': match.start(), 98 | 'end': match.end() 99 | }) 100 | 101 | return embeds 102 | 103 | def extract_complete_messages(text): 104 | """Extract complete messages with their headers using multiple patterns.""" 105 | messages = [] 106 | 107 | # Pattern 1: "Name at HH:MM" format 108 | pattern1 = r'([A-Za-z]+\s+at\s+\d{1,2}:\d{2})\s*\n((?:.+\n?)+?)(?=\n[A-Za-z]+\s+at\s+\d{1,2}:\d{2}|$)' 109 | 110 | # Pattern 2: "Name wrote" or similar formats (possibly with quote markers) 111 | pattern2 = 
r'((?:>\s*)?[A-Za-z]+(?:\s+[A-Za-z]+)?\s+wrote(?:(?:\s+\w+){0,5})?(?:\s+on\s+(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s+\d{1,2},\s+\d{4})?:)\s*\n((?:.+\n?)+?)(?=\n(?:>\s*)?[A-Za-z]+(?:\s+[A-Za-z]+)?\s+wrote|$)' 112 | 113 | # Pattern 3: Just a name followed by blank line then content 114 | pattern3 = r'((?:>\s*)?(?:[A-Z][a-z]+(?:\s+[A-Z][a-z]+)?))(?:\s*\n\s*\n)((?:.+\n?)+?)(?=\n\n(?:>\s*)?(?:[A-Z][a-z]+(?:\s+[A-Z][a-z]+)?)|$)' 115 | 116 | # Pattern 4: Email with headers 117 | pattern4 = r'((?:From|To|Subject|Date|Sent):\s*[^\n]+\n(?:(?:From|To|Cc|Bcc|Subject|Date|Sent|Reply-To):\s*[^\n]+\n)*)\s*\n((?:.+\n?)+?)(?=\n(?:From|To|Subject|Date|Sent):|$)' 118 | 119 | # Extract messages using pattern 1 120 | for match in re.finditer(pattern1, text, re.DOTALL): 121 | header = match.group(1) 122 | body = match.group(2).strip() 123 | messages.append({ 124 | 'header': header, 125 | 'body': body, 126 | 'complete_text': f"{header}\n{body}", 127 | 'start': match.start(), 128 | 'end': match.end(), 129 | 'pattern': 'time' 130 | }) 131 | 132 | # Extract messages using pattern 2 133 | for match in re.finditer(pattern2, text, re.DOTALL): 134 | header = match.group(1) 135 | body = match.group(2).strip() 136 | messages.append({ 137 | 'header': header, 138 | 'body': body, 139 | 'complete_text': f"{header}\n{body}", 140 | 'start': match.start(), 141 | 'end': match.end(), 142 | 'pattern': 'wrote' 143 | }) 144 | 145 | # Extract messages using pattern 3 146 | for match in re.finditer(pattern3, text, re.DOTALL): 147 | header = match.group(1) 148 | body = match.group(2).strip() 149 | messages.append({ 150 | 'header': header, 151 | 'body': body, 152 | 'complete_text': f"{header}\n\n{body}", # Note the double newline here 153 | 'start': match.start(), 154 | 'end': match.end(), 155 | 'pattern': 'name' 156 | }) 157 | 158 | # Extract messages using pattern 4 159 | for match in re.finditer(pattern4, text, re.DOTALL): 160 | headers = match.group(1) 161 | body = match.group(2).strip() 162 | 
163 | # Extract sender information from headers 164 | header_info = extract_email_headers(headers) 165 | messages.append({ 166 | 'header': headers, 167 | 'body': body, 168 | 'complete_text': f"{headers}\n{body}", 169 | 'start': match.start(), 170 | 'end': match.end(), 171 | 'pattern': 'email', 172 | 'header_info': header_info 173 | }) 174 | 175 | # Sort messages by their position in the text 176 | messages.sort(key=lambda x: x['start']) 177 | 178 | # Remove any overlapping messages (prefer more specific patterns) 179 | non_overlapping = [] 180 | for msg in messages: 181 | # Check if this message overlaps with any previously accepted message 182 | overlaps = False 183 | for accepted in non_overlapping: 184 | # If there's significant overlap 185 | if (msg['start'] < accepted['end'] and msg['end'] > accepted['start']): 186 | # If patterns conflict, keep the more specific one 187 | if msg['pattern'] in ['time', 'wrote', 'email'] and accepted['pattern'] == 'name': 188 | # Replace the less specific with the more specific 189 | non_overlapping.remove(accepted) 190 | non_overlapping.append(msg) 191 | overlaps = True 192 | break 193 | 194 | if not overlaps: 195 | non_overlapping.append(msg) 196 | 197 | return non_overlapping 198 | 199 | def has_embed_difference(text1, text2): 200 | """Check if there are differences in embeds between two texts.""" 201 | # Extract embeds from both texts 202 | embeds1 = re.findall(r'(!?\[\[.*?\]\])', text1) 203 | embeds2 = re.findall(r'(!?\[\[.*?\]\])', text2) 204 | 205 | # If embed counts differ, they're different 206 | if len(embeds1) != len(embeds2): 207 | return True 208 | 209 | # Check if all embeds match exactly 210 | for e1 in embeds1: 211 | if e1 not in embeds2: 212 | return True 213 | 214 | return False 215 | 216 | def create_context_summary(message): 217 | """Create a context summary for a message that will be removed.""" 218 | if message['pattern'] == 'time': 219 | # For "Name at HH:MM" format 220 | name = 
extract_name_from_header(message['header']) 221 | if name: 222 | return f"{name} wrote: [duplicate message removed]" 223 | return "[duplicate message removed]" 224 | 225 | elif message['pattern'] == 'wrote': 226 | # Keep the "Name wrote:" part 227 | return f"{message['header']} [duplicate message removed]" 228 | 229 | elif message['pattern'] == 'name': 230 | # For simple name headers 231 | name = extract_name_from_header(message['header']) 232 | if name: 233 | return f"{name} wrote: [duplicate message removed]" 234 | return "[duplicate message removed]" 235 | 236 | elif message['pattern'] == 'email': 237 | # For email headers, keep a simplified version 238 | if 'header_info' in message and message['header_info'].get('sender_name'): 239 | context = f"{message['header_info']['sender_name']} wrote: [duplicate message removed]\n\n" 240 | else: 241 | context = "[duplicate message removed]\n\n" 242 | 243 | # Add simplified headers 244 | if 'header_info' in message: 245 | for field in ['From', 'Sent', 'To', 'Subject']: 246 | if field in message['header_info']: 247 | context += f"{field}: {message['header_info'][field]}\n" 248 | 249 | return context 250 | 251 | # Default case 252 | return "[duplicate message removed]" 253 | 254 | def find_duplicate_messages(messages, min_chars=40): 255 | """Find duplicate messages based on content similarity.""" 256 | duplicates = [] 257 | 258 | # Compare all message pairs 259 | for i, msg1 in enumerate(messages): 260 | for j, msg2 in enumerate(messages): 261 | # Skip self-comparison and already processed messages 262 | if i >= j: # This ensures we only compare each pair once and keep the first occurrence 263 | continue 264 | 265 | # First check if there are differences in embeds 266 | if has_embed_difference(msg1['body'], msg2['body']): 267 | continue # Skip this pair if embed differences exist 268 | 269 | # Check content similarity even if headers differ 270 | body_similarity = difflib.SequenceMatcher(None, msg1['body'], 
msg2['body']).ratio() 271 | 272 | # If bodies are very similar and significant in length 273 | if body_similarity > 0.8 and len(msg2['body']) >= min_chars: 274 | # Extract name from header for comparison 275 | name1 = extract_name_from_header(msg1['header']) 276 | name2 = extract_name_from_header(msg2['header']) 277 | 278 | # Higher priority for same sender 279 | if name1 and name2 and name1.lower() == name2.lower(): 280 | # Record the complete message for removal 281 | context_summary = create_context_summary(msg2) 282 | duplicates.append({ 283 | 'text': msg2['complete_text'], 284 | 'start': msg2['start'], 285 | 'end': msg2['end'], 286 | 'similarity': body_similarity, 287 | 'duplicate_of': i, 288 | 'same_sender': True, 289 | 'context_summary': context_summary, 290 | 'pattern': msg2['pattern'] 291 | }) 292 | # Also catch duplicates with different senders but mark them differently 293 | elif body_similarity > 0.9: # Higher threshold for different senders 294 | context_summary = create_context_summary(msg2) 295 | duplicates.append({ 296 | 'text': msg2['complete_text'], 297 | 'start': msg2['start'], 298 | 'end': msg2['end'], 299 | 'similarity': body_similarity, 300 | 'duplicate_of': i, 301 | 'same_sender': False, 302 | 'context_summary': context_summary, 303 | 'pattern': msg2['pattern'] 304 | }) 305 | 306 | return duplicates 307 | 308 | def extract_name_from_header(header): 309 | """Extract the sender's name from various header formats.""" 310 | # For "Name at HH:MM" format 311 | time_match = re.match(r'([A-Za-z]+)\s+at\s+\d{1,2}:\d{2}', header) 312 | if time_match: 313 | return time_match.group(1) 314 | 315 | # For "Name wrote" or "> Name wrote" formats 316 | wrote_match = re.match(r'(?:>\s*)?([A-Za-z]+(?:\s+[A-Za-z]+)?)\s+wrote', header) 317 | if wrote_match: 318 | return wrote_match.group(1) 319 | 320 | # For just a name 321 | name_match = re.match(r'(?:>\s*)?([A-Za-z]+(?:\s+[A-Za-z]+)?)', header) 322 | if name_match: 323 | return name_match.group(1) 324 | 325 | # 
Try to extract from email headers 326 | from_match = re.search(r'From:\s*([^<\n]+)', header) 327 | if from_match: 328 | return from_match.group(1).strip() 329 | 330 | # If no pattern matches, return None 331 | return None 332 | 333 | def find_repeating_paragraphs(text, min_chars=40): 334 | """Find repeating paragraphs that might be duplicates.""" 335 | # Split the content into paragraphs 336 | paragraphs = re.split(r'\n\s*\n', text) 337 | 338 | # Patterns for message headers to ignore 339 | header_patterns = [ 340 | r'^[A-Za-z]+\s+at\s+\d{1,2}:\d{2}$', 341 | r'^(?:>\s*)?[A-Za-z]+(?:\s+[A-Za-z]+)?\s+wrote(?:(?:\s+\w+){0,5})?(?:\s+on\s+(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s+\d{1,2},\s+\d{4})?:$', 342 | r'^(?:>\s*)?[A-Za-z]+(?:\s+[A-Za-z]+)?$', # Just a name 343 | r'^(?:From|To|Subject|Date|Sent):\s*[^\n]+' # Email headers 344 | ] 345 | 346 | duplicates = [] 347 | 348 | for i, para1 in enumerate(paragraphs): 349 | # Skip headers and short paragraphs 350 | if any(re.match(pattern, para1.strip()) for pattern in header_patterns) or len(para1) < min_chars: 351 | continue 352 | 353 | for j, para2 in enumerate(paragraphs[i+1:], i+1): 354 | # Also skip headers and short paragraphs 355 | if any(re.match(pattern, para2.strip()) for pattern in header_patterns) or len(para2) < min_chars: 356 | continue 357 | 358 | # Check for embed differences 359 | if has_embed_difference(para1, para2): 360 | continue # Skip if there are embed differences 361 | 362 | # Check for similar content 363 | similarity = difflib.SequenceMatcher(None, para1, para2).ratio() 364 | 365 | # If paragraphs are very similar 366 | if similarity > 0.9: 367 | # Find the position of the duplicate paragraph in the original text 368 | start_pos = -1 369 | current_pos = 0 370 | 371 | # Find the exact position by advancing through the text 372 | for k in range(j+1): 373 | current_pos = text.find(paragraphs[k], current_pos) 374 | if k == j: 375 | start_pos = current_pos 376 | if current_pos != -1: 377 | 
current_pos += len(paragraphs[k]) 378 | 379 | if start_pos >= 0: 380 | end_pos = start_pos + len(para2) 381 | duplicates.append({ 382 | 'text': para2, 383 | 'start': start_pos, 384 | 'end': end_pos, 385 | 'similarity': similarity, 386 | 'context_summary': "[duplicate content removed]", 387 | 'pattern': 'paragraph' 388 | }) 389 | 390 | return duplicates 391 | 392 | def detect_message_header_before(text, position, max_lines=3): 393 | """Detect if there's a message header right before the given position.""" 394 | # Get a few lines before the position 395 | lines_before = text[:position].split('\n')[-max_lines:] 396 | 397 | # Patterns for message headers 398 | header_patterns = [ 399 | r'^[A-Za-z]+\s+at\s+\d{1,2}:\d{2}$', 400 | r'^(?:>\s*)?[A-Za-z]+(?:\s+[A-Za-z]+)?\s+wrote(?:(?:\s+\w+){0,5})?(?:\s+on\s+(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s+\d{1,2},\s+\d{4})?:$', 401 | r'^(?:>\s*)?[A-Za-z]+(?:\s+[A-Za-z]+)?$', # Just a name 402 | r'^(?:From|To|Subject|Date|Sent):\s*[^\n]+' # Email headers 403 | ] 404 | 405 | for line in lines_before: 406 | if any(re.match(pattern, line.strip()) for pattern in header_patterns): 407 | # Found a header, get its position 408 | header_pos = text[:position].rfind(line) 409 | if header_pos >= 0: 410 | return header_pos, line 411 | 412 | return None, None 413 | 414 | def remove_duplicates(filepath, interactive=True, min_chars=40, verbose=False, dry_run=False, fix_formatting=True, preserve_context=True): 415 | """Remove duplicate content while preserving message context.""" 416 | with open(filepath, "r", encoding="utf-8") as f: 417 | text = f.read() 418 | 419 | # Parse frontmatter and content 420 | frontmatter, content = parse_markdown_content(text) 421 | 422 | # Clean up formatting if requested 423 | if fix_formatting: 424 | formatted_content = clean_formatting(content) 425 | formatting_changed = (formatted_content != content) 426 | content = formatted_content 427 | else: 428 | formatting_changed = False 429 | 430 | # Extract 
complete messages 431 | messages = extract_complete_messages(content) 432 | if verbose: 433 | print(f"Found {len(messages)} complete messages") 434 | for i, msg in enumerate(messages): 435 | print(f" Message {i}: {msg['pattern']} pattern") 436 | if 'header_info' in msg: 437 | print(f" Sender: {msg['header_info'].get('sender_name', 'Unknown')}") 438 | # Show first line of body 439 | first_line = msg['body'].split('\n')[0] if '\n' in msg['body'] else msg['body'] 440 | print(f" First line: {first_line}") 441 | 442 | # Find embedded file references 443 | embeds = extract_embeds(content) 444 | if verbose and embeds: 445 | print(f"Found {len(embeds)} embedded file references:") 446 | for embed in embeds: 447 | print(f" {embed['text']}") 448 | 449 | # Find duplicate messages 450 | duplicate_messages = find_duplicate_messages(messages, min_chars) 451 | 452 | # Also find repeating paragraphs not part of message structure 453 | duplicate_paragraphs = find_repeating_paragraphs(content, min_chars) 454 | 455 | # Combine duplicates 456 | all_duplicates = duplicate_messages + duplicate_paragraphs 457 | 458 | modified = False 459 | 460 | # Check if formatting needs to be fixed even if no duplicates 461 | if fix_formatting and formatting_changed and not all_duplicates: 462 | modified = True 463 | if verbose: 464 | print(f"Fixed formatting in {filepath}") 465 | 466 | if not dry_run: 467 | new_text = frontmatter + content if frontmatter else content 468 | with open(filepath, "w", encoding="utf-8") as f: 469 | f.write(new_text) 470 | 471 | return modified 472 | 473 | if not all_duplicates: 474 | if verbose: 475 | print(f"No duplicates found in {filepath}") 476 | return modified 477 | 478 | print(f"\nFound duplicates in: {filepath}") 479 | 480 | # Create backup if not in dry run mode 481 | if not dry_run: 482 | backup_path = get_backup_path(filepath) 483 | create_backup(filepath, backup_path) 484 | print(f" Backup created at: {backup_path}") 485 | 486 | # Keep track of blocks to remove 
487 | blocks_to_remove = [] 488 | removed_count = 0 489 | 490 | if interactive: 491 | # Sort duplicates by position in the file 492 | all_duplicates.sort(key=lambda x: x['start']) 493 | 494 | for dup in all_duplicates: 495 | print("\n" + "="*40) 496 | if 'duplicate_of' in dup: 497 | if dup.get('same_sender', False): 498 | print(f"Duplicate message: {extract_name_from_header(messages[dup['duplicate_of']]['header']) or 'Unknown'} wrote:") 499 | else: 500 | print(f"Similar message from different senders") 501 | else: 502 | print(f"Duplicate content ({len(dup['text'])} chars, {dup['similarity']:.2f} similarity):") 503 | 504 | # Check if the text contains embeds 505 | has_embeds = bool(re.search(r'(!?\[\[.*?\]\])', dup['text'])) 506 | if has_embeds: 507 | print("NOTE: This text contains file embeds.") 508 | 509 | print("-"*40) 510 | print(dup['text']) 511 | print("="*40) 512 | 513 | if preserve_context: 514 | print(f"Will be replaced with context: {dup['context_summary']}") 515 | else: 516 | print("Will be removed completely") 517 | 518 | choice = input("Remove this duplicate content? 
(y/n): ") 519 | if choice.lower() == "y": 520 | blocks_to_remove.append(dup) 521 | removed_count += 1 522 | else: 523 | # Auto-remove all duplicates from the same sender (except those with embeds), but prompt for different senders 524 | for dup in all_duplicates: 525 | # Skip auto-removal if embeds are present 526 | has_embeds = bool(re.search(r'(!?\[\[.*?\]\])', dup['text'])) 527 | 528 | if 'duplicate_of' in dup and dup.get('same_sender', False) and not has_embeds: 529 | # Auto-remove same-sender duplicates without embeds 530 | blocks_to_remove.append(dup) 531 | removed_count += 1 532 | else: 533 | # Prompt for other types of duplicates 534 | print("\n" + "="*40) 535 | print(f"Duplicate content ({len(dup['text'])} chars, {dup['similarity']:.2f} similarity):") 536 | 537 | if has_embeds: 538 | print("NOTE: This text contains file embeds.") 539 | 540 | print("-"*40) 541 | print(dup['text']) 542 | print("="*40) 543 | 544 | if preserve_context: 545 | print(f"Will be replaced with context: {dup['context_summary']}") 546 | else: 547 | print("Will be removed completely") 548 | 549 | choice = input("Remove this duplicate content? 
(y/n): ") 550 | if choice.lower() == "y": 551 | blocks_to_remove.append(dup) 552 | removed_count += 1 553 | 554 | # Only modify the file if we're removing something or fixing formatting, and not in dry run mode 555 | if (removed_count > 0 or formatting_changed) and not dry_run: 556 | # Sort blocks by position in reverse order to avoid position changes 557 | blocks_to_remove.sort(key=lambda x: x['start'], reverse=True) 558 | 559 | # Create a new content string by removing the duplicate blocks 560 | new_content = content 561 | for block in blocks_to_remove: 562 | # For duplicates, we need to make sure we include the header if it's not already part of the duplicate 563 | start_pos = block['start'] 564 | 565 | # Check if we need to include the header 566 | if block['pattern'] != 'email': # Skip for email pattern as it already includes headers 567 | header_pos, header_line = detect_message_header_before(new_content, start_pos) 568 | if header_pos is not None and header_pos < start_pos: 569 | # The header is right before this content but might not be included in the duplicate range 570 | # Adjust the start position to include the header 571 | start_pos = header_pos 572 | 573 | # Remove the block from content (with header if needed) 574 | if preserve_context: 575 | # Replace with context summary instead of completely removing 576 | new_content = new_content[:start_pos] + block['context_summary'] + '\n\n' + new_content[block['end']:] 577 | else: 578 | # Remove completely 579 | new_content = new_content[:start_pos] + new_content[block['end']:] 580 | 581 | # Clean up any excessive newlines 582 | new_content = re.sub(r'\n{3,}', '\n\n', new_content) 583 | 584 | # Reconstruct the document 585 | if frontmatter: 586 | new_text = frontmatter + new_content 587 | else: 588 | new_text = new_content 589 | 590 | with open(filepath, "w", encoding="utf-8") as f: 591 | f.write(new_text) 592 | 593 | if removed_count > 0: 594 | print(f" Removed {removed_count} duplicate blocks.") 595 | if 
formatting_changed and verbose: 596 | print(f" Fixed formatting.") 597 | 598 | return True 599 | elif dry_run and (removed_count > 0 or formatting_changed): 600 | if removed_count > 0: 601 | print(f" Would remove {removed_count} duplicate blocks (dry run).") 602 | if formatting_changed and verbose: 603 | print(f" Would fix formatting (dry run).") 604 | 605 | return False 606 | else: 607 | return modified 608 | 609 | def is_dated_markdown_file(filename): 610 | """Check if the filename matches the YYYY-MM-DD*.md pattern.""" 611 | pattern = r"^\d{4}-\d{2}-\d{2}.*\.md$" 612 | return bool(re.match(pattern, filename)) 613 | 614 | def process_folder(folder_path, interactive=True, min_chars=40, verbose=False, dry_run=False, fix_formatting=True, preserve_context=True): 615 | """Process all dated markdown files in a folder and its subfolders.""" 616 | processed_files = 0 617 | modified_files = 0 618 | 619 | for root, _, files in os.walk(folder_path): 620 | for file in files: 621 | if is_dated_markdown_file(file): 622 | file_path = os.path.join(root, file) 623 | processed_files += 1 624 | if remove_duplicates(file_path, interactive, min_chars, verbose, dry_run, fix_formatting, preserve_context): 625 | modified_files += 1 626 | 627 | action = "Would modify" if dry_run else "Modified" 628 | print(f"\nSummary: Processed {processed_files} files, {action} {modified_files} files.") 629 | 630 | if __name__ == "__main__": 631 | # Set up argument parser 632 | parser = argparse.ArgumentParser(description="Remove duplicate content from dated markdown files while preserving message context.") 633 | parser.add_argument("folder", nargs="?", help="Folder or file path to process") 634 | parser.add_argument("--auto", action="store_true", help="Automatically remove duplicates from same sender") 635 | parser.add_argument("--min-chars", type=int, default=40, 636 | help="Minimum content length in characters (default: 40)") 637 | parser.add_argument("--verbose", action="store_true", help="Show 
detailed processing information") 638 | parser.add_argument("--dry-run", action="store_true", help="Show what would be removed without making changes") 639 | parser.add_argument("--no-format-fix", action="store_true", help="Skip formatting fixes") 640 | parser.add_argument("--no-context", action="store_true", help="Remove duplicate content completely without leaving context") 641 | args = parser.parse_args() 642 | 643 | # If path is provided as command line argument 644 | if args.folder: 645 | if os.path.isdir(args.folder): 646 | print(f"Starting deduplication process for: {args.folder}") 647 | print(f"Mode: {'Automatic for same-sender duplicates' if args.auto else 'Interactive'}") 648 | print(f"Context preservation: {'Off' if args.no_context else 'On'}") 649 | print(f"Minimum content length: {args.min_chars} characters") 650 | if args.dry_run: 651 | print("DRY RUN: No files will be modified") 652 | if args.no_format_fix: 653 | print("Skipping formatting fixes") 654 | process_folder(args.folder, interactive=not args.auto, min_chars=args.min_chars, 655 | verbose=args.verbose, dry_run=args.dry_run, fix_formatting=not args.no_format_fix, 656 | preserve_context=not args.no_context) 657 | elif os.path.isfile(args.folder): 658 | # Allow processing a single file if provided 659 | print(f"Processing single file: {args.folder}") 660 | if args.dry_run: 661 | print("DRY RUN: No files will be modified") 662 | if args.no_format_fix: 663 | print("Skipping formatting fixes") 664 | remove_duplicates(args.folder, interactive=not args.auto, min_chars=args.min_chars, 665 | verbose=args.verbose, dry_run=args.dry_run, fix_formatting=not args.no_format_fix, preserve_context=not args.no_context) 666 | else: 667 | print(f"Error: '{args.folder}' is not a valid directory or file.") 668 | # If no arguments provided, prompt for input 669 | else: 670 | path = input("Enter file or folder path to deduplicate: ") 671 | if os.path.isfile(path): 672 | min_chars = int(input("Minimum content length in characters (default: 40): ") or 40) 
673 | verbose = input("Show verbose output? (y/n): ").lower() == 'y' 674 | dry_run = input("Dry run (no changes made)? (y/n): ").lower() == 'y' 675 | fix_formatting = input("Fix message formatting? (y/n): ").lower() == 'y' 676 | remove_duplicates(path, min_chars=min_chars, verbose=verbose, dry_run=dry_run, fix_formatting=fix_formatting) 677 | elif os.path.isdir(path): 678 | min_chars = int(input("Minimum content length in characters (default: 40): ") or 40) 679 | auto_mode = input("Automatically remove same-sender duplicates? (y/n): ").lower() == 'y' 680 | verbose = input("Show verbose output? (y/n): ").lower() == 'y' 681 | dry_run = input("Dry run (no changes made)? (y/n): ").lower() == 'y' 682 | fix_formatting = input("Fix message formatting? (y/n): ").lower() == 'y' 683 | process_folder(path, interactive=not auto_mode, min_chars=min_chars, 684 | verbose=verbose, dry_run=dry_run, fix_formatting=fix_formatting) 685 | else: 686 | print("File or directory not found.") -------------------------------------------------------------------------------- /docs/journey.md: -------------------------------------------------------------------------------- 1 | # The Long and Winding Road 2 | 3 | A bit of context on how I got here. More of the story is in the post [Goodbye social networks, hello Markdown](https://medium.com/@noteapps/goodbye-social-networks-hello-markdown-9c504a36d618). 4 | 5 | ## 2001 6 | 7 | I worked as an Engineering Manager at a startup full of Linux folks who created a home-grown recruitment system using flat files for interview notes and folders containing resumes. This stuck in my head for a long time: such a simple solution, with permissions controlled by the Linux file system and Web auth. 8 | 9 | ## 2002 10 | 11 | I came up with the idea to visualize and manage our social networks like we do with computer networks. This led me to find and learn about social network science. 
12 | 13 | ## 2003 14 | 15 | Some friends and I started a company to build out my vision. We failed to get funding and it fizzled away. 16 | 17 | ## 2007 18 | 19 | After our failed social networking software startup, I began developing a tool for my own use based on the same concept and nicknamed it HAL. I built it with PHP, MySQL, and Javascript and I continued tweaking it over the years while sipping in my network data. It was slow but I lived with it. This was the top page, originally to mimic the Google homepage, which is ironic as years later I tried to go Google-free! 20 | 21 | ![](../media/HALHome.jpg) 22 | 23 | This was the profile page for a person. 24 | 25 | ![](../media/EricaXu.png) 26 | 27 | ## 2021 28 | 29 | I started a new [hobby](https://www.noteapps.ca/why/) looking for the best Android note-taking app. I tested apps and posted one review a week. As of Dec 2023 I've posted 108 app reviews. Early on, one of my [key requirements](https://www.noteapps.ca/my-note-app-l/) was Markdown support for input and output. At this point I use [Drafting](https://www.noteapps.ca/drafting/) for quick capture and [Obsidian](https://www.noteapps.ca/obsidian-v1-0-5-scores-7-10/) for pretty much everything. 30 | 31 | ## 2022 32 | 33 | After two decades of building and using HAL, I got the idea to start using Markdown and [Obsidian](https://obsidian.md/) to manage my social network and wrote a post [Keeping track of people and connections in Obsidian](https://medium.com/@noteapps/keeping-track-of-people-and-connections-in-obsidian-cfd6339b50c) describing the idea. 34 | 35 | ## 2023 36 | 37 | I spent a lot of time writing an exporter for my large ego network of 3,640 people from my custom MySQL DB to Markdown. I stopped using HAL as my main tool and now only use [Obsidian](https://obsidian.md/). It's slow to load on Android (about 30 seconds) but fast once loaded. On Windows, it loads in about 9 seconds. 37,638 files.
Now I can do things I dreamed of 20 years prior like this... 38 | 39 | ![](../media/mynetwork.png) 40 | -------------------------------------------------------------------------------- /docs/person_body.md: -------------------------------------------------------------------------------- 1 | # The Person's body 2 | 3 | Here are some of the sections in the body of a person note: 4 | 5 | Section | Description | Example 6 | --|---|--- 7 | `# Person` | Their full name | `# SpongeBob SquarePants` 8 | `## Bio` | A brief description of the person. Usually from one of the numbered references; put `[1]`, `[2]` etc. for the source of the info | `> an American animated television series created by marine science educator and animator Stephen Hillenburg - [1]` 9 | `## Quotes` | Something they said. Usually from one of the numbered references; put `[1]`, `[2]` etc. for the source of the info | `> Aye-aye, captain!` 10 | `## Life Events` | Key moments in the person's life. Format: `YYYY-MM-DD: ` | `2023-02-01: had a son, Thor 5lbs 2oz` 11 | `## References` | Hyperlinks to more information on the Web such as profiles on social media sites, articles mentioning them, about them, or by them | `1. [Wikipedia](https://en.wikipedia.org/wiki/SpongeBob_SquarePants)` 12 | `## Products` | A bulleted list of things this person worked on.\nSometimes they are a backlink to a separate [Product](../templates/Product.md) Markdown file\nSometimes a hyperlink directly to an article they wrote or a Web page about the product.\nSometimes just text | `- [[Dynalist]]` 13 | `## Positions` | A bulleted list of positions they've held. Could be work or volunteer.\nFormat: `, [[Organization]], [[Place]], YYYY to YYYY` or `#current` and `#tag1 #tag2`\n Could append tags like `#quit`, `#fired`, `#promoted`, `#retired`. | `- Fry Cook, [[Krusty Krab]], [[Bikini Bottom]], 1990 to 1993 #quit` 14 | `## People` | A list of `[[First Name Last Name]]` wikilinks to people this person is connected to.
This is where the magic happens! They are typically wikilinks to other person files. Append a few words like "- co-Founder" and [tags](tags.md) like `#friend`. Could be hyperlinks to a person on the Web; they don't need to be in your network | `[[Patrick]] - met at work #friend #very-strong` 15 | `## Interests` | A bulleted list of things they are interested in. May move this to a Front Matter field where it could be queried so you can ask questions like “Who are all the people I know that like embroidery?” (one of them!) | `- Jellyfishing` 16 | `## Notes` | Just that, personal notes about the person | `- Our son used to love this cartoon` 17 | `## Communications` | For capturing any messages or emails with the person. Each sub-heading is the date in `### YYYY-MM-DD` format. For people with a lot of communications, keep separate files for each communication aka atomic notes and then embed them here | `![[spongebob/2023-12-30.md]]` 18 | 19 | See [The Frontmatter](person_frontmatter.md) to learn about the structure of the top of the `person.md` template. 20 | -------------------------------------------------------------------------------- /docs/person_frontmatter.md: -------------------------------------------------------------------------------- 1 | # The Person's head 2 | 3 | At the top of the [Person.md](../templates/Person.md) is what's called the frontmatter. If you're not technical, don't let it scare you. It's just a bunch of fields like in a form. In this case, it's the information about the person. 4 | 5 | ### Why 6 | 7 | Having structured metadata -- data about the data which, in this case, is data about the person -- is helpful to be able to query across all of your notes and to be able to answer questions like "Who in my network knows Java?"
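To make the "query across your notes" idea concrete, here is a minimal, hypothetical sketch of answering "Who in my network knows Java?" in Python. The `skills` field follows the Person.md template, but the naive frontmatter parsing and the inline sample note are illustrative assumptions, not code from this repo; real notes may warrant a proper YAML parser.

```python
# Hypothetical sketch: check whether a Person note's `skills`
# frontmatter lists a given skill. Parsing is deliberately naive
# (stdlib only); a real tool might use a YAML parser instead.
import re

def parse_frontmatter(text):
    # Grab the block between the opening and closing `---` lines
    # and split it into simple `key: value` pairs.
    match = re.match(r"---\n(.*?)\n---", text, re.DOTALL)
    fields = {}
    if match:
        for line in match.group(1).splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                fields[key.strip()] = value.strip()
    return fields

def has_skill(text, skill):
    # `skills` looks like `[java, spring-boot, css]`
    skills = parse_frontmatter(text).get("skills", "")
    return skill in [s.strip() for s in skills.strip("[]").split(",")]

# Sample Person note (invented for illustration)
note = """---
tags: [person, friend]
first_name: SpongeBob
last_name: SquarePants
skills: [java, spring-boot, css]
---
# SpongeBob SquarePants
"""

print(has_skill(note, "java"))    # True
print(has_skill(note, "python"))  # False
```

In practice you would loop over every `.md` file under your `People` folder and print the names whose notes pass `has_skill`.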
8 | 9 | ### What 10 | 11 | Frontmatter is a collection of fields at the top of a note delineated by three dashes `---` before and after the fields like this: 12 | 13 | ``` 14 | --- 15 | tags: [person, friend, ex-colleague, blist] 16 | first_name: SpongeBob 17 | last_name: SquarePants 18 | --- 19 | ``` 20 | 21 | ### Fields 22 | 23 | Here are the fields in the top of the [Person.md](../templates/Person.md) template: 24 | 25 | Field | Description | Example 26 | --|---|--- 27 | `tags` | Individual labels. Always include `person`, preferably as the first tag, though it doesn't have to be | `software-developer`, `friend`, `ex-colleague` 28 | `subject-id` | Don't use this; it was only for referencing my HAL system and will be removed. You don't need it 29 | `aliases` | Nickname or preferred name. Can then link to the person with this alias. See the Obsidian [Aliases](https://help.obsidian.md/Linking+notes+and+files/Aliases) Help page | `[SpongeBob, Bob]` 30 | `slug` | A one-word or hyphenated label **unique** to that person. Used for their folder name under the `People` folder. Used in `people:` fields in other files like [Chat.md](../templates/Chat.md) and in `from:` or `to:` in [Email.md](../templates/email.md). Helpful for queries. | `spongebob` 31 | `birthday` | One of the most important fields!\nFormat: `YYYY-MM-DD` or `"MM-DD"` if you don't know the year.\nTo use the month and day only, you may need to switch to source mode in Obsidian, otherwise the date-picker expects a fully qualified date | `1965-09-29` or `09-29` 32 | `title` | Their current job title. Thinking of removing this since it's already under `## Positions` with label `#current` | `Fry Cook` 33 | `skills` | A comma-separated list of one-word or hyphenated skills the person has. | `[java, spring-boot, css]` 34 | `organizations` | A collection of organizations. The current organization(s) they are at.
Matches `slug` in the corresponding [Organization.md](../templates/Organization.md) note file on the company the person is affiliated with | `[krusty-krab, mcdonalds]` 35 | `url` | The primary Web site to visit for this person if they have one | `https://www.spongebob.com` 36 | `products` | A comma-separated list of product slugs that the person worked on. Could be redundant if also in `## Products` so may be removed | `[obsidian, dynalist]` 37 | `hometown` | Where they are originally from | `Bikini Bottom, Marshall Islands` 38 | `city` | City where they live | `Bikini Bottom` 39 | `state` | Province or State where they live | `Marshall Islands` 40 | `x_id` | The last portion of their X (Twitter) social network URL. Could be redundant if also in `## References` so may be removed | `spongebob` 41 | `linkedin_id` | The last portion of their LinkedIn social network URL. Could be redundant if also in `## References` so may be removed | `spongebobrocks` 42 | 43 | For some people I add fields: 44 | 45 | - `anniversary` for their wedding anniversary. See `birthday` for the format 46 | - `address` for their street address 47 | - `zip` for the postal/ZIP code 48 | - `github_id`, `threads_id`, `reddit_id` etc. 49 | 50 | See [The Body](person_body.md) to learn about the structure of the rest of the `person.md` template. 51 | -------------------------------------------------------------------------------- /docs/tags.md: -------------------------------------------------------------------------------- 1 | # HAL tags 2 | 3 | Some of the `tags` I add to `person.md` files. 4 | 5 | I try to avoid putting any company or organization tags since those are already listed with wikilinks in the `## Organizations` section of the `person.md` file. 6 | 7 | I do add tags for special projects we worked on together, to be able to search on the crews I was part of throughout my career. 8 | 9 | | Tag | Meaning | 10 | | ---- | ---- | 11 | | **State** | | 12 | | `#ex` | Used to be...
neighbour, partner, friend, employee,... | 13 | | `#deceased` | No longer on this planet | 14 | | `#never-met` | I never met them | 15 | | **Strength** | | 16 | | `#alist` | Strong connection, would refer without hesitation | 17 | | `#blist` | Good connection, would help out, would refer | 18 | | `#clist` | OK connection, would contact, may refer | 19 | | `#dlist` | Some connection, previous work colleague or met once | 20 | | `#elist` | No direct connection, opportunity to connect through someone else | 21 | | `#flist` | Don't want to connect or bad connection, would not refer | 22 | | **Personal** | | 23 | | `#classmate` | Classmate of mine | 24 | | `#family` | In my family | 25 | | `#friend` | Someone who would answer my call and vice versa | 26 | | `#foaf` | Friend of a friend, friend of a sibling, or sibling of a friend, family friend | 27 | | `#koaf` | Kid of a friend | 28 | | `#poaf` | Partner of a friend | 29 | | `#sibling` | Brother or sister | 30 | | `#in-law` | In laws | 31 | | **Business** | | 32 | | `#employee` | Reported to me, add `#ex` if in the past? | 33 | | `#my-manager` | My manager | 34 | | `#colleague` | Was a peer of mine at an organization | 35 | | `#neighbour` | A neighbour, add `#ex` if in the past? | 36 | | `#interviewee` | Someone I interviewed for a job | 37 | | `#interviewer` | Someone who interviewed me for a job | 38 | | **Profession** | | 39 | | `#agile` | Agile - may move this to `skills` | 40 | | `#architect` | Architect, another tag will provide specificity on the area | 41 | | `#developer` | Software Developer | 42 | | `#engineer` | Engineer | 43 | | `#entrepreneur` | Entrepreneur(ial) | 44 | | `#exec` | They're an executive | 45 | | `#hr` | Human Resources person | 46 | | `#recruiter` | Recruiter, internal or external | 47 | | `#software-developer` | Software Developer | 48 | | `#service-provider` | They provide service to us, like Lawyer, Accountant etc.
| 49 | | **Personality** | | 50 | | `#nice` | Nice person | 51 | | `#fan` | Someone who likes me | 52 | | `#ass` | An ass | 53 | | `#bad` | A bad person for whatever reason I experienced | 54 | | `#odd` | An odd person | 55 | | `#dislike` | Someone I don't like | 56 | | `#arch-nemesis` | My arch nemesis | 57 | | `#avoid` | Incommunicado, persona non-grata | 58 | -------------------------------------------------------------------------------- /embed_notes.py: -------------------------------------------------------------------------------- 1 | # Include the individual, dated Markdown note files into the Person's profile 2 | # under `## Notes` so you can see the entire communication history with them. 3 | 4 | import os 5 | from argparse import ArgumentParser 6 | import datetime 7 | 8 | import sys 9 | sys.path.insert(1, '../hal/') 10 | import person 11 | import identity 12 | 13 | sys.path.insert(1, './') 14 | import md_lookup 15 | import md_person 16 | import md_frontmatter 17 | import md_body 18 | import md_date 19 | import md_interactions 20 | 21 | NEW_LINE = "\n" 22 | HEADING_2 = "##" 23 | HEADING_NOTES = HEADING_2 + " Notes" 24 | WIKILINK_OPEN = "[[" 25 | WIKILINK_CLOSE = "]]" 26 | MD_EMBED = "!" 
27 | MD_SUFFIX = ".md" 28 | EMBEDDED_WIKILINK = MD_EMBED + WIKILINK_OPEN 29 | 30 | # Parse the command line arguments 31 | def get_arguments(): 32 | 33 | parser = ArgumentParser() 34 | 35 | parser.add_argument("-f", "--folder", dest="folder", default=".", 36 | help="The folder where each Person has a subfolder named with their slug") 37 | 38 | parser.add_argument("-d", "--debug", dest="debug", action="store_true", default=False, 39 | help="Print extra info as the files processed") 40 | 41 | parser.add_argument("-x", "--max", type=int, dest="max", default=99999, 42 | help="Maximum number of people to process") 43 | 44 | args = parser.parse_args() 45 | 46 | return args 47 | 48 | # ----------------------------------------------------------------------------- 49 | # 50 | # Create a set of lines in Markdown with [[Wikilinks]] to each interaction. 51 | # 52 | # Parameters: 53 | # 54 | # - slug - person's slug e.g. 'spongebob' 55 | # - interactions - collection of Interaction 56 | # 57 | # Returns: 58 | # 59 | # - Markdown text 60 | # 61 | # Notes: 62 | # 63 | # - Generates a set of lines with embedded link to each communication file, 64 | # separated by a blank line 65 | # 66 | # Example: 67 | # 68 | # - If there are two files `2023-02-01.md` and `2024-03-24.md` 69 | # 70 | # ``` 71 | # ![[spongebob/2023-02-01.md]] 72 | # 73 | # ![[spongebob/2024-03-24.md]] 74 | # ``` 75 | # 76 | # ----------------------------------------------------------------------------- 77 | def generate_markdown(slug, the_interactions): 78 | markdown = "" 79 | 80 | # Sort the interactions chronologically from old to new 81 | sorted_interactions = sorted(the_interactions, key=lambda x: x.date) 82 | 83 | # Make a Wikilink to each interaction file so it can be embedded 84 | for interaction in sorted_interactions: 85 | markdown += EMBEDDED_WIKILINK 86 | markdown += slug + "/" + interaction.filename # Use the filename from the Interaction object 87 | markdown += WIKILINK_CLOSE + NEW_LINE + NEW_LINE 88 | 
89 | return markdown.strip() # Remove trailing blank lines 90 | 91 | # ----------------------------------------------------------------------------- 92 | # 93 | # Given a folder name, load all of the interactions with that person based on 94 | # the existence of dated Markdown files for each date where an interaction 95 | # occurred. 96 | # 97 | # Parameters: 98 | # 99 | # - folder - folder containing sub-folders for each person 100 | # 101 | # Returns: 102 | # 103 | # - The number of people processed. 104 | # 105 | # Notes: 106 | # 107 | # 1. Go through each folder `folder-name` under `People` 108 | # 2. Find all files with names `YYYY-MM-DD` 109 | # 3. Create a list of them like this, ordered oldest to newest 110 | # 111 | # ``` 112 | # ![[spongebob/2017-08-13.md]] 113 | # 114 | # ![[spongebob/2022-12-06.md]] 115 | # ``` 116 | # 117 | # 4. Open the corresponding person file where `slug` = `folder-name` 118 | # 5. Find the section `## Notes` 119 | # 6. After any bulleted list items (individual notes), replace what is there 120 | # with the new list of embedded files.
121 | # 122 | # ----------------------------------------------------------------------------- 123 | def update_interactions(folder): 124 | 125 | count = 0 126 | notes_section = md_person.SECTION_NOTES 127 | 128 | # get list of people `slug`s from the folder names 129 | slugs = md_person.get_slugs(folder) 130 | 131 | # for each person, find the most recent communication 132 | for slug in slugs: 133 | 134 | the_interactions = [] 135 | top = "" 136 | 137 | # get all of the interactions with the person 138 | the_date = md_interactions.get_interactions(slug, os.path.join(folder, slug), the_interactions) 139 | 140 | if args.debug: 141 | print(slug + ": " + str(the_date)) 142 | 143 | # generate the Notes section of the body 144 | interactions_markdown = generate_markdown(slug, the_interactions) 145 | 146 | # get the Person's profile 147 | person_file = md_person.read_person_frontmatter(slug, folder) 148 | 149 | if person_file is not None: 150 | 151 | # get the part of the section before the embedded notes 152 | top = person_file.section_top(notes_section, EMBEDDED_WIKILINK) 153 | top = top.rstrip() # remove trailing whitespace [#21] 154 | 155 | # add the new content after the top, effectively replacing what's after top 156 | new_markdown = top + NEW_LINE + NEW_LINE + interactions_markdown 157 | 158 | # update what's in the Person's Notes section of their profile 159 | person_file.update_section(slug, notes_section, new_markdown) 160 | 161 | # write the file with the updated section 162 | result = person_file.save() 163 | 164 | count += 1 165 | 166 | # stop if we've reached the limit of the passed in `max` argument 167 | if args.max and count >= int(args.max): 168 | return count 169 | 170 | return count 171 | 172 | # main 173 | 174 | args = get_arguments() 175 | folder = args.folder 176 | the_interactions = [] 177 | 178 | if folder and not os.path.exists(folder): 179 | print('The folder "' + folder + '" could not be found.') 180 | 181 | elif folder: 182 | count = 
update_interactions(folder) 183 | -------------------------------------------------------------------------------- /last_contact.py: -------------------------------------------------------------------------------- 1 | # Updates the "last_contact" frontmatter field based on atomic dated files. 2 | 3 | import os 4 | from argparse import ArgumentParser 5 | 6 | import sys 7 | sys.path.insert(1, '../hal/') 8 | import person 9 | 10 | sys.path.insert(1, './') 11 | import md_person 12 | import md_interactions 13 | 14 | # Parse the command line arguments 15 | def get_arguments(): 16 | 17 | parser = ArgumentParser() 18 | 19 | parser.add_argument("-f", "--folder", dest="folder", default=".", 20 | help="The folder where each Person has a subfolder named with their slug") 21 | 22 | parser.add_argument("-d", "--debug", dest="debug", action="store_true", default=False, 23 | help="Print extra info as the files processed") 24 | 25 | parser.add_argument("-t", "--template", dest="template", default=0, 26 | help="Markdown template file") 27 | 28 | parser.add_argument("-x", "--max", type=int, dest="max", default=0, 29 | help="Maximum number of people to process") 30 | 31 | args = parser.parse_args() 32 | 33 | return args 34 | 35 | # ----------------------------------------------------------------------------- 36 | # 37 | # Given a set of interactions, update each Person's `last_contact` field 38 | # 39 | # Parameters: 40 | # 41 | # - folder - folder containing sub-folders for each person 42 | # - the_interactions - collection of Interaction 43 | # 44 | # Returns: 45 | # 46 | # - True if success, False otherwise 47 | # 48 | # Notes: 49 | # 50 | # - #todo maybe use `Message` `from message_md` instead of `Interaction` 51 | # - as go through the files, e.g. 
exclude "tags: note" 52 | # 53 | # ----------------------------------------------------------------------------- 54 | def update_last_contact(folder, the_interactions): 55 | 56 | result = False 57 | the_date = "" 58 | 59 | # for each person find the most recent communication 60 | if the_interactions: 61 | 62 | # take the first (most recent) interaction 63 | most_recent_interaction = the_interactions[0] 64 | slug = most_recent_interaction.slug 65 | the_date = most_recent_interaction.date 66 | 67 | # update their profile 68 | if the_date: 69 | result = md_person.update_field(slug, folder, person.FIELD_LAST_CONTACT, str(the_date)) 70 | 71 | return result 72 | 73 | # ----------------------------------------------------------------------------- 74 | # 75 | # Given a folder name, load all of the interactions with that person and 76 | # update the `last_contact` field with the date of the most recent interaction. 77 | # 78 | # Parameters: 79 | # 80 | # - folder - folder containing sub-folders for each person 81 | # - interactions - collection of Interaction 82 | # 83 | # Returns: 84 | # 85 | # - the number of interactions 86 | # 87 | # Notes: 88 | # 89 | # - populates `theInteractions` with all of the interactions e.g. 
chats 90 | # this person had and sorts them from most recent to oldest 91 | # - #todo maybe use `Message` `from message_md` instead of `Interaction` 92 | # 93 | # ----------------------------------------------------------------------------- 94 | def load_interactions(folder, the_interactions): 95 | 96 | count = 0 97 | 98 | # get list of people `slug`s from the folder names 99 | slugs = md_person.get_slugs(folder) 100 | 101 | # for each person find the most recent communication 102 | for slug in slugs: 103 | the_interactions = [] 104 | 105 | # get all of the interactions with the person 106 | the_date = md_interactions.get_interactions(slug, os.path.join(folder, slug), the_interactions) 107 | 108 | # update the `last_contact` field for the person 109 | update_last_contact(folder, the_interactions) 110 | 111 | # stop if we've reached the limit of the passed in `max` argument 112 | if args.max and count >= int(args.max): 113 | return count 114 | 115 | if args.debug: 116 | print(slug + ": " + str(the_date)) 117 | 118 | count += 1 119 | 120 | return count 121 | 122 | # main 123 | 124 | args = get_arguments() 125 | folder = args.folder 126 | the_interactions = [] 127 | 128 | if folder and not os.path.exists(folder): 129 | print('The folder "' + args.folder + '" could not be found.') 130 | 131 | elif folder: 132 | count = load_interactions(folder, the_interactions) 133 | 134 | print(str(count) + " people checked" + " "*20) 135 | -------------------------------------------------------------------------------- /md_birthdays.py: -------------------------------------------------------------------------------- 1 | # Gets a list of birthdays by month 2 | 3 | import os 4 | from argparse import ArgumentParser 5 | import datetime 6 | import calendar 7 | 8 | import sys 9 | sys.path.insert(1, '../hal/') 10 | import person 11 | import identity 12 | import life_events 13 | 14 | sys.path.insert(1, './') 15 | import md_lookup 16 | import md_date 17 | 18 | NEW_LINE = "\n" 19 | HEADING_2 
= "##" 20 | BIRTHDAY_TABLE_HEADING = "Day | Person | Year | Age\n:-:|---|:-:|:-:\n" 21 | TABLE_SEPARATOR = " | " 22 | WIKILINK_OPEN = "[[" 23 | WIKILINK_CLOSE = "]]" 24 | 25 | # A birthday contains {name, slug, birthday} 26 | PAIR_NAME = 0 27 | PAIR_SLUG = 1 28 | PAIR_BIRTHDAY = 2 29 | PAIR_DEATHDAY = 3 30 | 31 | # Parse the command line arguments 32 | def get_arguments(): 33 | 34 | parser = ArgumentParser() 35 | 36 | parser.add_argument("-f", "--folder", dest="folder", default=".", 37 | help="The folder where each Person has a subfolder named with their slug") 38 | 39 | parser.add_argument("-d", "--debug", dest="debug", action="store_true", default=False, 40 | help="Print extra info as the files processed") 41 | 42 | parser.add_argument("-u", "--upcoming", type=int, dest="upcoming", default=None, 43 | help="Show the birthdays upcoming in the next number of days") 44 | 45 | parser.add_argument("-x", "--max", type=int, dest="max", default=99999, 46 | help="Maximum number of people to process") 47 | 48 | args = parser.parse_args() 49 | 50 | return args 51 | 52 | # ----------------------------------------------------------------------------- 53 | # 54 | # Sort a collection of birthdays in chronological order from Jan to Dec 55 | # 56 | # Parameters: 57 | # 58 | # birthdays - collection of {slug, birthday} 59 | # 60 | # Notes: 61 | # 62 | # - Some birthdays in form `YYYY-MM-DD` and some `MM-DD` 63 | # 64 | # ----------------------------------------------------------------------------- 65 | def sort_birthdays(birthdays): 66 | 67 | valid_birthdays = [] 68 | 69 | # filter out birthdays with missing or invalid dates 70 | for birthday in birthdays: 71 | slug = birthday[person.FIELD_SLUG] 72 | name = birthday[identity.FIELD_NAME] 73 | date = birthday[life_events.FIELD_BIRTHDAY] 74 | deathday = birthday[life_events.FIELD_DEATHDAY] 75 | if date and md_date.extract_month(date) and md_date.extract_day(date): 76 | valid_birthdays.append((name, slug, date, deathday)) 77 | elif 
date and date != None and date != "None": 78 | print("Invalid birthday: '" + str(birthday) + "'") 79 | 80 | # sort valid birthdays by birthday 81 | sorted_birthdays = sorted(valid_birthdays, key=lambda x: (int(md_date.extract_month(x[PAIR_BIRTHDAY]) or 0), int(md_date.extract_day(x[PAIR_BIRTHDAY]) or 0))) 82 | 83 | return sorted_birthdays 84 | 85 | def calculate_age(birthday, deathday): 86 | age = 0 # default so an unparseable date can't leave `age` unbound 87 | # parse the birthday and deathday strings into datetime objects 88 | birth_date = md_date.get_date(birthday) 89 | death_date = md_date.get_date(deathday) 90 | 91 | if death_date: 92 | age = death_date.year - birth_date.year - ((death_date.month, death_date.day) < (birth_date.month, birth_date.day)) 93 | elif deathday: 94 | print("Invalid deathday: '" + str(deathday) + "'") 95 | else: 96 | try: 97 | # calculate the current age 98 | current_date = datetime.datetime.now() 99 | age = current_date.year - birth_date.year - ((current_date.month, current_date.day) < (birth_date.month, birth_date.day)) 100 | except: 101 | print("Invalid birthday: '" + str(birthday) + "'") 102 | 103 | return age 104 | 105 | # ----------------------------------------------------------------------------- 106 | # 107 | # Display the birthdays coming up in the next `num_days` days 108 | # 109 | # Parameters: 110 | # 111 | # birthdays - collection of {slug, name, birthday, deathday} 112 | # num_days - the number of days (including today) forward to look 113 | # 114 | # ----------------------------------------------------------------------------- 115 | def upcoming(birthdays, num_days): 116 | 117 | output = "" 118 | 119 | # calculate the current age 120 | current_date = datetime.datetime.now() 121 | current_month = current_date.month 122 | current_day = current_date.day 123 | 124 | # calculate the date `num_days` from now 125 | end_date = datetime.datetime.now() + datetime.timedelta(days=num_days) 126 | 127 | for birthday in birthdays: 128 | 129 | # parse the birthday string 130 | the_month = 
int(md_date.extract_month(birthday[PAIR_BIRTHDAY])) 131 | the_day = int(md_date.extract_day(birthday[PAIR_BIRTHDAY])) 132 | 133 | if 1 <= the_month <= 12 and 1 <= the_day <= 31: 134 | # calculate the birthday's date for the current year 135 | birthday_date = datetime.datetime(datetime.datetime.now().year, the_month, the_day) 136 | 137 | # if the birthday falls within the next num_days, include it 138 | if datetime.datetime.now() <= birthday_date <= end_date: 139 | output += birthday[PAIR_NAME] + " on " + calendar.month_abbr[the_month] + " " + str(the_day) + NEW_LINE 140 | 141 | else: 142 | print("Invalid birthday: " + str(birthday)) 143 | 144 | return output 145 | 146 | # ----------------------------------------------------------------------------- 147 | # 148 | # Generate a calendar of birthdays in Markdown 149 | # 150 | # Parameters: 151 | # 152 | # birthdays - collection of {slug, name, birthday, deathday} 153 | # 154 | # ----------------------------------------------------------------------------- 155 | def make_calendar(birthdays): 156 | 157 | output = "" 158 | 159 | # group birthdays by month 160 | grouped_birthdays = {} 161 | for birthday in birthdays: 162 | month = md_date.extract_month(birthday[PAIR_BIRTHDAY]) 163 | if month in grouped_birthdays: 164 | grouped_birthdays[month].append(birthday) 165 | else: 166 | grouped_birthdays[month] = [birthday] 167 | 168 | # print birthdays by month 169 | for month_num in range(1, 13): 170 | output += HEADING_2 + " " + calendar.month_name[month_num] + NEW_LINE + NEW_LINE 171 | 172 | month_num_str = str(month_num).zfill(2) 173 | 174 | if month_num_str in grouped_birthdays: 175 | 176 | output+= BIRTHDAY_TABLE_HEADING 177 | 178 | # sort birthdays within the month 179 | sorted_birthdays = sorted(grouped_birthdays[month_num_str], key=lambda x: md_date.extract_day(x[PAIR_BIRTHDAY])) 180 | 181 | for birthday in sorted_birthdays: 182 | 183 | name = WIKILINK_OPEN + birthday[PAIR_NAME] + WIKILINK_CLOSE 184 | birthdayStr = 
birthday[PAIR_BIRTHDAY] 185 | try: 186 | deathdayStr = birthday[PAIR_DEATHDAY] 187 | except: 188 | deathdayStr = "" 189 | 190 | year="" 191 | age="" 192 | day = birthdayStr[-2:] 193 | 194 | if len(birthdayStr) == 10: # YYYY-MM-DD format 195 | year = birthdayStr[:4] 196 | age = str(calculate_age(birthdayStr, deathdayStr)) 197 | 198 | output += day + TABLE_SEPARATOR + name + TABLE_SEPARATOR + year + TABLE_SEPARATOR + age + NEW_LINE 199 | 200 | if month_num != 12: 201 | output += NEW_LINE + NEW_LINE 202 | 203 | return output 204 | 205 | # main 206 | 207 | args = get_arguments() 208 | folder = args.folder 209 | 210 | the_calendar = "" 211 | 212 | if folder and not os.path.exists(folder): 213 | print('The folder "' + folder + '" could not be found.') 214 | 215 | elif folder: 216 | birthdays = md_lookup.get_values(folder, [life_events.FIELD_BIRTHDAY, life_events.FIELD_DEATHDAY], args) 217 | sorted_birthdays = sort_birthdays(birthdays) 218 | 219 | if args.upcoming: 220 | print(upcoming(sorted_birthdays, args.upcoming)) 221 | else: 222 | the_calendar = make_calendar(sorted_birthdays) 223 | print(the_calendar) 224 | -------------------------------------------------------------------------------- /md_body.py: -------------------------------------------------------------------------------- 1 | # Represents the body of a Markdown file 2 | 3 | import sys 4 | sys.path.insert(1, './') 5 | 6 | NEW_LINE = "\n" 7 | 8 | # pairs in the section dictionary 9 | # e.g. {'heading': '## Bio', 'body': 'Born in 1964'} 10 | SECTION_HEADING = 'heading' 11 | SECTION_CONTENT = 'content' 12 | 13 | SECTION_H1 = "# " 14 | SECTION_H2 = "## " 15 | 16 | class Body: 17 | 18 | def __init__(self, parent): 19 | self.parent = parent # file containing this body 20 | self.section_headings = [] # possible headings e.g. 
`## Notes` 21 | self.sections = [] # each of the section's content 22 | self.raw = "" # raw text from the file 23 | 24 | def __str__(self): 25 | output = "sections: " + str(self.sections) + NEW_LINE 26 | output += "body: " + NEW_LINE + self.raw 27 | return output 28 | 29 | def read(self): 30 | 31 | if not self.parent.file: 32 | self.parent.open('r') 33 | 34 | # read the frontmatter, even if it was already read, so we know 35 | # that we're at the right spot in the file 36 | self.parent.frontmatter.read() 37 | 38 | # grab the handle to the file from the parent object 39 | file = self.parent.file 40 | 41 | # read the body of the file 42 | for line in file: 43 | self.raw += str(line) 44 | 45 | # parse the body by sections 46 | self.parse() 47 | 48 | return True 49 | 50 | # ------------------------------------------------------------------------- 51 | # 52 | # Parse the body of a Markdown file into H1 and H2 sections. 53 | # 54 | # Notes: 55 | # 56 | # - Creates a collection of sections for the content of the Markdown file 57 | # - Each section is like this: 58 | # 59 | # {'heading': '# SpongeBob SquarePants', 'content': 'Ocean dweller'}, 60 | # {'heading': '## Bio', 'content': 'Fictitious animated character'}, 61 | # {'heading': '## Life Events', 'content': '- 2020: Born'}, 62 | # 63 | # - Any content at the H3 or lower levels is kept in the `content` of the 64 | # H2 section that contains it.
65 | # 66 | # ------------------------------------------------------------------------- 67 | def parse(self): 68 | 69 | self.sections = [] # Initialize sections list 70 | 71 | lines = self.raw.splitlines() # Split the string into lines 72 | current_section = None 73 | 74 | for line in lines: 75 | # if this is a new section 76 | if line.startswith(SECTION_H1): 77 | if current_section is not None: 78 | self.sections.append(current_section) 79 | # create a new section with level 1 heading 80 | current_section = {'heading': line, 'content': ''} 81 | elif line.startswith(SECTION_H2): 82 | if current_section is not None: 83 | self.sections.append(current_section) 84 | # create a new section with level 2 heading 85 | current_section = {'heading': line, 'content': ''} 86 | elif current_section is not None: 87 | # add line to the content of the current section 88 | current_section['content'] += line + NEW_LINE 89 | 90 | # add the last section to the sections list 91 | if current_section is not None: 92 | self.sections.append(current_section) 93 | 94 | # ------------------------------------------------------------------------- 95 | # 96 | # Writes the body of a Markdown file. 
97 |     #
98 |     # Notes:
99 |     #
100 |     # - Takes the parent file and writes each section heading and content
101 |     #
102 |     # -------------------------------------------------------------------------
103 |     def write(self):
104 | 
105 |         if not self.parent.file:
106 |             self.parent.open('w+')
107 | 
108 |         # grab the handle to the file from the parent object
109 |         file = self.parent.file
110 | 
111 |         file.write(NEW_LINE)
112 |         for section in self.sections:
113 |             file.write(section[SECTION_HEADING] + NEW_LINE)
114 |             file.write(section[SECTION_CONTENT])
115 | 
116 |         return True
117 | 
118 |     # get the content from a specific section of the file, or None if not found
119 |     def get_content(self, section_heading):
120 | 
121 |         for section in self.sections:
122 |             if section[SECTION_HEADING] == section_heading:
123 |                 return section[SECTION_CONTENT]
--------------------------------------------------------------------------------
/md_date.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | 
3 | def extract_year(dateString):
4 |     if "-" in dateString:
5 |         if len(dateString) == 10: # YYYY-MM-DD format
6 |             return dateString[0:4]
7 |         elif len(dateString) == 5: # MM-DD format
8 |             return ""
9 | 
10 | def extract_month(dateString):
11 |     if "-" in dateString:
12 |         if len(dateString) == 10: # YYYY-MM-DD format
13 |             return dateString[5:7]
14 |         elif len(dateString) == 5: # MM-DD format
15 |             return dateString[0:2]
16 | 
17 | def extract_day(dateString):
18 |     if "-" in dateString:
19 |         if len(dateString) == 10: # YYYY-MM-DD format
20 |             return dateString[8:10]
21 |         elif len(dateString) == 5: # MM-DD format
22 |             return dateString[3:5]
23 | 
24 | # parse a date string of format `YYYY-MM-DD` or `MM-DD` into a datetime object
25 | def get_date(dateStr):
26 | 
27 |     the_date = None
28 | 
29 |     if "-" in dateStr:
30 |         if len(dateStr) == 10: # YYYY-MM-DD format
31 |             try:
32 |                 the_date = datetime.datetime.strptime(dateStr, "%Y-%m-%d")
33 |             except ValueError:
34 |                 print("invalid date: '" + str(dateStr) + "'")
35 |         elif len(dateStr) == 5: # MM-DD format
36 |             try:
37 |                 the_date = datetime.datetime.strptime(dateStr, "%m-%d")
38 |             except ValueError:
39 |                 print("invalid date: '" + str(dateStr) + "'")
40 | 
41 |     return the_date
--------------------------------------------------------------------------------
/md_file.py:
--------------------------------------------------------------------------------
1 | # Represents a Markdown file
2 | 
3 | import sys
4 | import os
5 | 
6 | sys.path.insert(1, './')
7 | import md_frontmatter
8 | import md_body
9 | 
10 | NEW_LINE = "\n"
11 | 
12 | TAG_NOTE = "note"
13 | 
14 | # field in all Markdown files
15 | FIELD_TAGS = "tags"
16 | 
17 | Tags = [TAG_NOTE]
18 | 
19 | class File:
20 |     def __init__(self):
21 |         self.path = "" # path to the file
22 |         self.file = None # file handle
23 |         self.frontmatter = md_frontmatter.Frontmatter(self) # the frontmatter
24 |         self.body = md_body.Body(self) # the contents after the frontmatter
25 | 
26 |     def __str__(self):
27 |         output = self.path + NEW_LINE
28 |         output += str(self.frontmatter) + NEW_LINE
29 |         output += str(self.body)
30 |         return output
31 | 
32 |     def open(self, mode):
33 |         self.file = open(self.path, mode)
34 | 
35 |     def save(self):
36 |         # open the file in write mode, truncating any existing content
37 |         self.open('w+')
38 |         self.frontmatter.write()
39 |         self.body.write()
40 |         self.file.close()
41 |         return True # callers treat the result of save() as a success flag
42 | 
43 | def get_prefix(path):
44 |     fileName = os.path.basename(path) # get the base name of the file
45 |     filePrefix = os.path.splitext(fileName)[0] # remove the extension
46 |     return filePrefix
--------------------------------------------------------------------------------
/md_frontmatter.py:
--------------------------------------------------------------------------------
1 | # Class to create and modify a Markdown file's Frontmatter
2 | 
3 | # What's cool is it uses Python's built-in "setattr()" function to dynamically
4 | # set the attributes of the object based on the set of "fields" that are
5 | # configured by the user of this class.
In this way, this class can be 6 | # inherited by other classes for specific Markdown files like "Person" or 7 | # Organization. 8 | # 9 | # One default field "tags" is used for all instances of this class since it's 10 | # an important field to determine the type of Object. 11 | # 12 | # Another special field is "slug" which is a unique label for an item of the 13 | # specific class. It's not defined in here but likely used in Classes that 14 | # inherit this one. 15 | 16 | import yaml 17 | import datetime 18 | 19 | FRONTMATTER_SEPARATOR = "---" 20 | 21 | NEW_LINE = "\n" 22 | 23 | TAG_NOTE = "note" 24 | TAG_CHAT = "chat" 25 | TAG_EMAIL = "email" 26 | TAG_PHONE = "phone" 27 | TAG_CALL = "call" 28 | TAG_PERSON = "person" 29 | 30 | FIELD_RAW = "raw" 31 | 32 | # field in all Markdown files 33 | FIELD_TAGS = "tags" 34 | 35 | # fields in a communication Markdown files 36 | 37 | FIELD_PEOPLE = "people" 38 | FIELD_SERVICE = "service" 39 | FIELD_TOPIC = "topic" 40 | FIELD_DATE = "date" 41 | FIELD_TIME = "time" 42 | 43 | CommunicationFields = [FIELD_TAGS, FIELD_PEOPLE, FIELD_TOPIC, FIELD_DATE, FIELD_TIME, FIELD_SERVICE] 44 | 45 | # keep a list of fields that are of type array 46 | ArrayFields = [FIELD_TAGS] 47 | 48 | class Frontmatter: 49 | def __init__(self, parent): 50 | self.parent = parent # file containing this frontmatter 51 | self.fields = [] # list of fields, dynamically set 52 | self.tags = [] # the tags for this file 53 | self.raw = "" # the full text, all lines unparsed 54 | 55 | def __str__(self): 56 | output = "" 57 | for field in self.fields: 58 | value = getattr(self, field) 59 | if value: 60 | output += field + ": " + str(value) + NEW_LINE 61 | return output 62 | 63 | def get_date(self): 64 | return getattr(self, FIELD_DATE) 65 | 66 | # ------------------------------------------------------------------------- 67 | # 68 | # Initialize each field to [] if it's an array field or "" otherwise. 
69 |     #
70 |     # -------------------------------------------------------------------------
71 |     def init_fields(self):
72 |         for field in self.fields:
73 |             if field in ArrayFields:
74 |                 setattr(self, field, [])
75 |             else:
76 |                 setattr(self, field, "")
77 | 
78 |     # -------------------------------------------------------------------------
79 |     #
80 |     # See which doc fields are missing as attributes (self.<field>), and which of our fields are extra, i.e. absent from the doc
81 |     #
82 |     # -------------------------------------------------------------------------
83 |     def check_fields(self, doc_fields):
84 |         missing_fields = []
85 |         extra_fields = []
86 | 
87 |         # doc fields that aren't attributes of this object
88 |         for field_name in doc_fields:
89 |             if not hasattr(self, field_name):
90 |                 missing_fields.append(field_name)
91 | 
92 |         # fields of this object that aren't in the doc
93 |         for field_name in self.fields:
94 |             if field_name not in doc_fields:
95 |                 extra_fields.append(field_name)
96 | 
97 |         return missing_fields, extra_fields
98 | 
99 |     # -------------------------------------------------------------------------
100 |     #
101 |     # Read a specific field from the doc and return its value.
102 |     #
103 |     # Parameters:
104 |     #
105 |     # - doc - the parsed YAML document (a dict)
106 |     # - field - the name of the field to obtain
107 |     # - fields - add the field to this collection
108 |     #
109 |     # Returns:
110 |     #
111 |     # - value - the value of the field
112 |     #
113 |     # -------------------------------------------------------------------------
114 |     def get_field(self, doc, field, fields):
115 | 
116 |         value = None
117 | 
118 |         try:
119 |             if field in doc:
120 |                 if field == FIELD_DATE:
121 |                     try:
122 |                         value = datetime.datetime.strptime(str(doc[field]), '%Y-%m-%d').date()
123 |                     except ValueError:
124 |                         pass # leave the date unset if it doesn't parse
125 | 
126 |                 elif field == FIELD_TIME:
127 |                     value = doc[field]
128 | 
129 |                     # there are cases where the YAML parser sees the
130 |                     # frontmatter "time" value as an integer, e.g.
862
131 |                     if isinstance(value, int):
132 |                         # convert integer to hours and minutes
133 |                         hours, minutes = divmod(value, 60)
134 |                         # format the time as "HH:MM"
135 |                         value = '{:02}:{:02}'.format(hours, minutes)
136 |                 else:
137 |                     value = doc[field]
138 | 
139 |                 setattr(self, field, value)
140 |                 fields.append(field)
141 | 
142 |         except Exception as e:
143 |             print(e)
144 |             pass
145 | 
146 |         return value
147 | 
148 |     # -------------------------------------------------------------------------
149 |     #
150 |     # Parse the YAML frontmatter into fields.
151 |     #
152 |     # Returns:
153 |     #
154 |     # - True if valid YAML
155 |     # - False if not
156 |     #
157 |     # -------------------------------------------------------------------------
158 |     def parse(self):
159 | 
160 |         result = False
161 |         fields = []
162 | 
163 |         # take the YAML data from the "raw" field
164 |         try:
165 |             yamlData = yaml.safe_load_all(self.raw)
166 | 
167 |             for doc in yamlData:
168 |                 if isinstance(doc, dict):
169 |                     for field in self.fields:
170 |                         self.get_field(doc, field, fields)
171 |                     result = True
172 | 
173 |         except Exception as e:
174 |             print(e)
175 | 
176 |         return result
177 | 
178 |     # -------------------------------------------------------------------------
179 |     #
180 |     # Read the YAML frontmatter, parse it, and return True if it's valid.
181 |     #
182 |     # Returns:
183 |     #
184 |     # - True if valid YAML
185 |     # - False if not
186 |     #
187 |     # Notes:
188 |     #
189 |     # - If the file starts with "---" followed by one or more line(s),
190 |     #   followed by "---", then parse the YAML into the `frontmatter` fields.
191 |     #
192 |     # -------------------------------------------------------------------------
193 |     def read(self):
194 | 
195 |         result = False
196 |         line = ""
197 | 
198 |         if not self.parent.file:
199 |             self.parent.open('r')
200 | 
201 |         file = self.parent.file
202 | 
203 |         if file:
204 |             # read the first line of the file
205 |             try:
206 |                 firstLine = file.readline().strip()
207 | 
208 |                 if firstLine == FRONTMATTER_SEPARATOR:
209 |                     self.raw += firstLine + NEW_LINE
210 | 
211 |                     # read lines until the second '---' is found or the end of the file is reached
212 |                     for line in file:
213 |                         line = line.strip()
214 |                         self.raw += line + NEW_LINE
215 |                         if line == FRONTMATTER_SEPARATOR:
216 |                             result = True
217 |                             break
218 |             except Exception:
219 |                 pass
220 | 
221 |         if result:
222 |             result = self.parse()
223 | 
224 |         return result # YAML format is correct
225 | 
226 |     def write(self):
227 | 
228 |         # if not already open, open the file in read-write mode
229 |         if not self.parent.file:
230 |             self.parent.open('w+') # the parent File object owns the handle
231 | 
232 |         file = self.parent.file
233 |         file.write(self.get_yaml())
234 | 
235 |     def get_yaml(self):
236 |         result = FRONTMATTER_SEPARATOR + NEW_LINE
237 | 
238 |         for field in self.fields:
239 |             if getattr(self, field):
240 |                 result += field + ": " + str(getattr(self, field)) + NEW_LINE
241 | 
242 |         result += FRONTMATTER_SEPARATOR + NEW_LINE
243 | 
244 |         return result
245 | 
--------------------------------------------------------------------------------
/md_interactions.py:
--------------------------------------------------------------------------------
1 | # Loads the interactions between people e.g.
email, chat
2 | 
3 | import os
4 | import sys
5 | import datetime
6 | import glob
7 | import re
8 | 
9 | sys.path.insert(1, './hal')
10 | import interaction
11 | 
12 | sys.path.insert(1, './')
13 | import communication_file
14 | 
15 | NEW_LINE = "\n"
16 | 
17 | # -----------------------------------------------------------------------------
18 | #
19 | # Get the interaction file dates within a specific Person's folder.
20 | #
21 | # Parameters:
22 | #
23 | # - slug - person's slug e.g. 'spongebob'
24 | # - path - path to where the files are
25 | # - the_interactions - the collection of interactions
26 | #
27 | # Returns:
28 | #
29 | # - the date of the most recent interaction
30 | #
31 | # -----------------------------------------------------------------------------
32 | def get_interactions(slug, path, the_interactions):
33 |     result = None
34 |     # a fresh CommunicationFile is created for each file inside the loop below
35 | 
36 |     # match files starting with YYYY-MM-DD [12]
37 |     pattern = r'^(\d{4}-\d{2}-\d{2})(?:\s-\s.*)?\.md$'
38 | 
39 |     # Get a list of file names matching the pattern
40 |     files = [
41 |         os.path.splitext(os.path.basename(file))[0]
42 |         for file in glob.glob(os.path.join(path, '*'))
43 |         if re.match(pattern, os.path.basename(file))
44 |     ]
45 |     if files:
46 |         files.sort(reverse=True)
47 | 
48 |     for file in files:
49 |         this_interaction = interaction.Interaction()
50 |         this_interaction.slug = slug
51 |         try:
52 |             # extract the date portion using the regex
53 |             match = re.match(pattern, file + ".md")
54 |             if match:
55 |                 date_part = match.group(1) # extract the YYYY-MM-DD part
56 |                 this_interaction.date = datetime.datetime.strptime(date_part, "%Y-%m-%d").date()
57 | 
58 |                 # store the filename in the Interaction object
59 |                 this_interaction.filename = file + ".md"
60 | 
61 |                 # add the interaction to the list
62 |                 the_interactions.append(this_interaction)
63 | 
64 |                 # create a fresh file object so frontmatter state doesn't carry over between files
65 |                 markdown_file = communication_file.CommunicationFile()
66 |                 markdown_file.path = os.path.join(path, file + ".md")
67 | 
68 |                 # read and parse the file's frontmatter
69 |                 markdown_file.frontmatter.read()
70 | 
71 |                 # get the date from the frontmatter if it's a communication
72 |                 this_date = get_date(markdown_file)
73 | 
74 |                 if this_date and (result is None or this_date > result):
75 |                     result = this_date
76 | 
77 |         except Exception as e:
78 |             print(f"Error processing file {file}: {e}")
79 |             pass
80 | 
81 |     # Sort the interactions by reverse date
82 |     the_interactions.sort(key=lambda x: x.date, reverse=True)
83 | 
84 |     return result
85 | 
86 | # -----------------------------------------------------------------------------
87 | #
88 | # If the file is a communication e.g. the `tags` frontmatter field contains
89 | # "email", then return the frontmatter's `date` value. If not, return blank.
90 | #
91 | # -----------------------------------------------------------------------------
92 | def get_date(file):
93 |     the_date = ""
94 | 
95 |     for tag in file.frontmatter.tags:
96 |         if tag in communication_file.Tags:
97 |             the_date = file.frontmatter.get_date()
98 |             break
99 | 
100 |     return the_date
--------------------------------------------------------------------------------
/md_lookup.py:
--------------------------------------------------------------------------------
1 | # Retrieve a specific attribute for a collection of people
2 | 
3 | import os
4 | import glob
5 | import re
6 | 
7 | import sys
8 | 
9 | sys.path.insert(1, '../hal/')
10 | import person
11 | import identity
12 | 
13 | sys.path.insert(1, './')
14 | import md_person
15 | import md_file
16 | 
17 | FIELD_VALUES = "values"
18 | 
19 | # -----------------------------------------------------------------------------
20 | #
21 | # Given a folder name, get a specific set of attributes for each person under
22 | #
23 | # 24 | # Parameters: 25 | # 26 | # - folder - source folder containing sub-folders for each person 27 | # - fields - the attributes in the frontmatter tp retrieve 28 | # - max - maximum number of people to load 29 | # 30 | # Returns: 31 | # 32 | # - collection of {name, slug, value} 33 | # 34 | # Notes: 35 | # 36 | # ----------------------------------------------------------------------------- 37 | def get_values(folder, fields, args): 38 | 39 | if args.debug: 40 | print("get_values('" + folder + "', " + "'" + str(fields) + "', " + str(args) + ")") 41 | 42 | values = [] 43 | 44 | # get list of people `slug`s from the folder names 45 | slugs = md_person.get_slugs(folder) 46 | 47 | count = 0 48 | 49 | # for each person get the values for the fields 50 | for slug in slugs: 51 | person_values = get_person_values(folder, slug, fields) 52 | 53 | # check if at least one of the fields requested is non-empty 54 | has_non_empty_field = any(person_values.get(field) for field in fields) 55 | 56 | # only add those people where there was a value found 57 | if has_non_empty_field: 58 | if args.debug: 59 | print(str(person_values)) 60 | values.append(person_values) 61 | count += 1 62 | if count >= args.max: 63 | break 64 | 65 | return values 66 | 67 | # ----------------------------------------------------------------------------- 68 | # 69 | # Get a specific person's attributes from the frontmatter in their profile. 70 | # 71 | # Parameters: 72 | # 73 | # folder - source folder containing sub-folders for each person 74 | # slug - slug of the person 75 | # fields - the frontmatter fields to read, e.g. 
{'birthday', 'deathday'} 76 | # 77 | # Returns: 78 | # 79 | # {fileprefix, value} of the field, file_prefix will be the person's name 80 | # 81 | # ----------------------------------------------------------------------------- 82 | def get_person_values(folder, slug, fields): 83 | 84 | result = {} 85 | 86 | path = os.path.join(folder, slug) 87 | 88 | # get a list of files with ".md" extension 89 | all_files = glob.glob(os.path.join(path, "*.md")) 90 | 91 | # pattern for matching YYYY-MM-DD filenames 92 | date_pattern = re.compile(r'\d{4}-\d{2}-\d{2}') 93 | 94 | # Filter out files with filename format of "YYYY-MM-DD" 95 | files = [file for file in all_files if os.path.isfile(file) and not date_pattern.match(md_file.get_prefix(file))] 96 | 97 | for file in files: 98 | theFile = md_person.PersonFile() 99 | theFile.path = file 100 | theFile.frontmatter.read() 101 | yaml = theFile.frontmatter 102 | 103 | # if this is a person profile and the right person 104 | if yaml.tags and person.TAG_PERSON in yaml.tags: 105 | # for each of the fields being requested 106 | result[person.FIELD_SLUG] = slug 107 | result[identity.FIELD_NAME] = md_file.get_prefix(file) 108 | for field in fields: 109 | result[field] = "" 110 | try: 111 | # get the value of the field 112 | value = getattr(yaml, field) 113 | if value is not None: 114 | result[field] = str(value) 115 | except: 116 | pass # it's ok not to have the field 117 | 118 | return result -------------------------------------------------------------------------------- /md_person.py: -------------------------------------------------------------------------------- 1 | # Represents a Person Markdown file e.g. "Sponge Bob.md" 2 | 3 | # Like all good Markdown files, there's the frontmatter and body. 
4 | 5 | # The frontmatter fields start with "FIELD_" 6 | # The body has Sections 7 | 8 | import os 9 | import sys 10 | import glob 11 | import re 12 | 13 | sys.path.insert(1, '../hal/') 14 | import person 15 | 16 | sys.path.insert(1, './') 17 | import md_frontmatter 18 | import md_body 19 | import md_file 20 | 21 | # sections of the body 22 | NEW_LINE = "\n" 23 | SECTION_H1 = "# " 24 | SECTION_BIO = "## Bio" 25 | SECTION_QUOTES = "## Quotes" 26 | SECTION_LIFE_EVENTS = "## Life Events" 27 | SECTION_PEOPLE = "## People" 28 | SECTION_REFERENCES = "## References" 29 | SECTION_FAVORITES = "## Favorites" 30 | SECTION_POSITIONS = "## Positions" 31 | SECTION_NOTES = "## Notes" 32 | CONTENT_EMBED = "![[" 33 | 34 | PersonSections = [SECTION_BIO, SECTION_QUOTES, SECTION_LIFE_EVENTS, 35 | SECTION_REFERENCES, SECTION_PEOPLE, SECTION_FAVORITES, 36 | SECTION_NOTES] 37 | 38 | class PersonFrontmatter(md_frontmatter.Frontmatter): 39 | def __init__(self, parent): 40 | super().__init__(parent) 41 | self.parent = parent 42 | self.tags.extend(person.Tags) 43 | self.fields.extend(person.Fields) 44 | self.section_headings = PersonSections 45 | self.raw = "" 46 | 47 | class PersonBody(md_body.Body): 48 | def __init__(self, parent): 49 | super().__init__(parent) 50 | self.parent = parent 51 | self.sections = [] 52 | self.raw = "" 53 | 54 | class PersonFile(md_file.File): 55 | def __init__(self): 56 | super().__init__() 57 | self.prefix = "" # will be the Person's name 58 | self.frontmatter = PersonFrontmatter(self) 59 | self.frontmatter.init_fields() 60 | self.body = PersonBody(self) 61 | 62 | # ------------------------------------------------------------------------- 63 | # 64 | # Get the first part of a section of the body. 65 | # 66 | # Parameters: 67 | # 68 | # - section - the name of the section e.g. 
"## Notes" 69 | # - before - get any content before this text 70 | # 71 | # Returns: 72 | # 73 | # - The first part of the section 74 | # 75 | # Notes: 76 | # 77 | # - Looks for any content in the section in front of the 'before' text 78 | # 79 | # - Useful to get notes before the first embedded wikilink in "## Notes" 80 | # so it can be retained 81 | # 82 | # - Example: if before is "![[spongebob/2024-03-24.md]]", it will find: 83 | # 84 | # "![[spongebob/2024-03-24.md]]" # nothing before 85 | # "- ![[spongebob/2024-03-24.md]]" # bullet before 86 | # " - ![[spongebob/2024-03-24.md]]" # tabs before "-" 87 | # "- ![[spongebob/2024-03-24.md]]" # tabs after "-" 88 | # 89 | # ------------------------------------------------------------------------- 90 | def section_top(self, section, before): 91 | 92 | top = "" 93 | 94 | # read the body of the file, which also parses it 95 | self.body.read() 96 | 97 | if self.file is not None: 98 | 99 | # get the current content of the "## Notes" section 100 | content = self.body.get_content(section) 101 | 102 | if content is not None: 103 | for line in content.split(NEW_LINE): 104 | # if it's not a lines that starts with "before" text, add it 105 | if before not in line: 106 | top += line + NEW_LINE 107 | else: 108 | break 109 | 110 | return top 111 | 112 | # ------------------------------------------------------------------------- 113 | # 114 | # Update a specific section within a Person's profile Markdown file body. 115 | # 116 | # Parameters: 117 | # 118 | # slug - the person slug, e.g. 'spongebob' 119 | # section_heading - the body section to update e.g. SECTION_NOTES 120 | # value - what to set the field to 121 | # 122 | # Returns: 123 | # 124 | # True if successful, False otherwise. 
125 | # 126 | # Notes: 127 | # 128 | # - The section header should have a blank line before and after it 129 | # 130 | # ------------------------------------------------------------------------- 131 | def update_section(self, slug, section_heading, value): 132 | 133 | result = False 134 | 135 | # get the Person's profile file 136 | # person_file = read_person_frontmatter(slug, path) 137 | 138 | if self.file is not None: 139 | 140 | # read the body of the file, which also parses it 141 | self.body.read() 142 | 143 | # check if the section exists in the file 144 | for section in self.body.sections: 145 | if section['heading'] == section_heading: 146 | # update the content of the section 147 | section['content'] = value 148 | 149 | # write the file with the updated section 150 | result = self.save() 151 | 152 | return result 153 | 154 | # ------------------------------------------------------------------------- 155 | # 156 | # Get a list of people slugs based on the folder names under `path`. 157 | # 158 | # Parameters: 159 | # 160 | # path - the path to the file 161 | # 162 | # Notes: 163 | # 164 | # - Each top-level folder contains all of the files for person 165 | # - Does not recursively 166 | # 167 | # ------------------------------------------------------------------------- 168 | def get_slugs(path): 169 | 170 | slugs = [] 171 | 172 | try: 173 | # get a list of all folders in the specified path excluding hidden ones 174 | slugs = [folder for folder in os.listdir(path) 175 | if os.path.isdir(os.path.join(path, folder)) and not folder.startswith('.')] 176 | except: 177 | pass 178 | 179 | return slugs 180 | 181 | # ------------------------------------------------------------------------- 182 | # 183 | # Get a list of a person's Markdown files not named with `YYYY-MM-DD`. 184 | # 185 | # Parameters: 186 | # 187 | # slug - the person slug, e.g. 'spongebob' 188 | # path - the path to the file 189 | # 190 | # Returns: 191 | # 192 | # A list of files. 
193 | # 194 | # ------------------------------------------------------------------------- 195 | def get_non_dated_files(slug, path): 196 | 197 | # get a list of files with ".md" extension 198 | all_files = glob.glob(os.path.join(path, slug + "/*.md")) 199 | 200 | # pattern for matching filenames starting with YYYY-MM-DD [#12] 201 | date_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}.*') 202 | 203 | # filter out files with filename format of "YYYY-MM-DD" and those starting with "." 204 | files = [file for file in all_files if not date_pattern.match(md_file.get_prefix(file))] 205 | 206 | return files 207 | 208 | # ------------------------------------------------------------------------- 209 | # 210 | # Figure out which file from a list of filenames is the Person's profile. 211 | # 212 | # Parameters: 213 | # 214 | # slug - the person slug, e.g. 'spongebob' 215 | # path - the path to the file 216 | # 217 | # Returns: 218 | # 219 | # The first file found with `tags: [person]` in it or None. 220 | # 221 | # ------------------------------------------------------------------------- 222 | def read_person_frontmatter(slug, path): 223 | 224 | # get list of files that aren't interactions or notes e.g. ! `2024-03-22.md` 225 | files = get_non_dated_files(slug, path) 226 | 227 | for file in files: 228 | # load the file assuming it's a Person file 229 | person_file = PersonFile() 230 | person_file.path = file 231 | person_file.frontmatter.read() 232 | 233 | yaml = person_file.frontmatter 234 | 235 | # check if it actually is this person's profile 236 | if yaml.tags and person.TAG_PERSON in yaml.tags: 237 | return person_file 238 | 239 | return None 240 | 241 | # ------------------------------------------------------------------------- 242 | # 243 | # Update the value of a Person's specific profile metadata field. 244 | # 245 | # Parameters: 246 | # 247 | # slug - the person slug, e.g. 'spongebob' 248 | # path - the path to the file 249 | # field - the frontmatter field to update, e.g. 
'last_contact'
250 | # value - what to set the field to
251 | #
252 | # Returns:
253 | #
254 | # True if successful, False otherwise.
255 | #
256 | # Notes:
257 | #
258 | # - In the case of `last_contact`, only update it if it's a more recent
259 | #   date than the current value
260 | # - @todo: check if yaml.slug == slug to make sure it's the right person
261 | #
262 | # -------------------------------------------------------------------------
263 | def update_field(slug, path, field, value):
264 | 
265 |     result = False
266 | 
267 |     # get the Person's profile file
268 |     person_file = read_person_frontmatter(slug, path)
269 | 
270 |     if person_file is not None:
271 | 
272 |         yaml = person_file.frontmatter
273 | 
274 |         try:
275 |             # get the current value of the field
276 |             current_value = getattr(yaml, field)
277 |         except AttributeError:
278 |             current_value = "" # don't leave current_value unbound when the field is missing
279 | 
280 |         # set the `last_contact` if the new value is a more recent date
281 |         if field == person.FIELD_LAST_CONTACT:
282 |             if value > str(current_value):
283 |                 setattr(yaml, field, value)
284 | 
285 |         # read the body of the file
286 |         person_file.body.read()
287 | 
288 |         # write the file with the updated 'last_contact' value
289 |         result = person_file.save()
290 | 
291 |     return result
292 | 
--------------------------------------------------------------------------------
/media/Ctrl mouse over wikilink.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/thephm/hal_md/b589a1cf1beb39390154271ca9950b3871aa16f9/media/Ctrl mouse over wikilink.png
--------------------------------------------------------------------------------
/media/EricaXu.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/thephm/hal_md/b589a1cf1beb39390154271ca9950b3871aa16f9/media/EricaXu.png
--------------------------------------------------------------------------------
/media/HALHome.jpg:
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/thephm/hal_md/b589a1cf1beb39390154271ca9950b3871aa16f9/media/HALHome.jpg -------------------------------------------------------------------------------- /media/SpongeBob_frontmatter.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thephm/hal_md/b589a1cf1beb39390154271ca9950b3871aa16f9/media/SpongeBob_frontmatter.png -------------------------------------------------------------------------------- /media/anniversary_and_birthday_query.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thephm/hal_md/b589a1cf1beb39390154271ca9950b3871aa16f9/media/anniversary_and_birthday_query.png -------------------------------------------------------------------------------- /media/inline_query.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thephm/hal_md/b589a1cf1beb39390154271ca9950b3871aa16f9/media/inline_query.png -------------------------------------------------------------------------------- /media/mynetwork.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thephm/hal_md/b589a1cf1beb39390154271ca9950b3871aa16f9/media/mynetwork.png -------------------------------------------------------------------------------- /media/obsidian filters.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thephm/hal_md/b589a1cf1beb39390154271ca9950b3871aa16f9/media/obsidian filters.png -------------------------------------------------------------------------------- /media/obsidian_folders.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/thephm/hal_md/b589a1cf1beb39390154271ca9950b3871aa16f9/media/obsidian_folders.png
--------------------------------------------------------------------------------
/media/sample_janet_frontmatter.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/thephm/hal_md/b589a1cf1beb39390154271ca9950b3871aa16f9/media/sample_janet_frontmatter.png
--------------------------------------------------------------------------------
/mise.toml:
--------------------------------------------------------------------------------
1 | [tools]
2 | python = "3.13.3"
3 | uv = "0.6.14"
4 | 
5 | [env]
6 | _.python.venv = { path = ".venv", create = true, uv_create_args = ['--seed'] }
7 | 
8 | [tasks.install]
9 | description = "Install dependencies"
10 | alias = "i"
11 | run = "uv pip install -r requirements.txt"
12 | 
13 | [tasks.comms]
14 | description = "Process comms"
15 | depends = ["install"]
16 | run = "python comms.py"
17 | 
18 | [tasks.dedup]
19 | description = "Deduplicate files (POTENTIALLY DESTRUCTIVE, BACKUP FIRST)"
20 | depends = ["install"]
21 | run = "python dedup.py"
22 | 
23 | [tasks.embed_notes]
24 | description = "Embed notes"
25 | depends = ["install"]
26 | run = "python embed_notes.py"
27 | 
28 | [tasks.last_contact]
29 | description = "Last contact with someone"
30 | depends = ["install"]
31 | run = "python last_contact.py"
32 | 
33 | [tasks.md_birthdays]
34 | description = "Generate a MD calendar of birthdays"
35 | depends = ["install"]
36 | run = "python md_birthdays.py"
37 | 
38 | [tasks.md_body]
39 | description = "Parse MD file bodies"
40 | depends = ["install"]
41 | run = "python md_body.py"
42 | 
43 | [tasks.most_contacted]
44 | description = "Most contacted people"
45 | depends = ["install"]
46 | run = "python most_contacted.py"
47 | 
48 | 
--------------------------------------------------------------------------------
/most_contacted.py:
-------------------------------------------------------------------------------- 1 | import os 2 | import re 3 | import yaml 4 | import argparse 5 | import csv 6 | from collections import Counter, defaultdict 7 | from datetime import datetime 8 | 9 | # Directory containing markdown files 10 | DIRECTORY = '/mnt/c/data/notes/People' 11 | 12 | # Fields to be processed 13 | FIELDS_TO_PROCESS = ['to', 'from', 'people'] 14 | 15 | def parse_arguments(): 16 | parser = argparse.ArgumentParser(description='Process markdown files to find most contacted people.') 17 | parser.add_argument('-m', '--my-slug', type=str, required=True, help='Your slug to exclude from the count') 18 | parser.add_argument('-n', '--top-n', type=int, default=None, help='Number of top names to display') 19 | parser.add_argument('-o', '--output-csv', type=str, help='Output CSV file to save the results') 20 | return parser.parse_args() 21 | 22 | def extract_frontmatter(content): 23 | yaml_pattern = re.compile(r'^---\n(.*?)\n---', re.DOTALL) 24 | yaml_match = yaml_pattern.search(content) 25 | if yaml_match: 26 | try: 27 | return yaml.safe_load(yaml_match.group(1)) 28 | except yaml.YAMLError as e: 29 | print(f"Error parsing YAML: {e}") 30 | return None 31 | 32 | def process_file(file_path, my_slug, fields_to_process, name_counter, person_dates): 33 | with open(file_path, 'r', encoding='utf-8', errors='ignore') as file: 34 | content = file.read() 35 | frontmatter = extract_frontmatter(content) 36 | if frontmatter: 37 | for field in fields_to_process: 38 | if field in frontmatter: 39 | if isinstance(frontmatter[field], list): 40 | for name in frontmatter[field]: 41 | if name != my_slug: 42 | name_counter.update([name]) 43 | person_dates[name].append(os.path.basename(file_path)[:10]) 44 | else: 45 | if frontmatter[field] != my_slug: 46 | name_counter.update([frontmatter[field]]) 47 | person_dates[frontmatter[field]].append(os.path.basename(file_path)[:10]) 48 | 49 | def process_files(directory, my_slug, 
fields_to_process): 50 | name_counter = Counter() 51 | person_dates = defaultdict(list) 52 | filename_pattern = re.compile(r'^\d{4}-\d{2}-\d{2}\.md$') 53 | 54 | for root, _, files in os.walk(directory): 55 | for filename in files: 56 | if filename_pattern.match(filename): 57 | file_path = os.path.join(root, filename) 58 | process_file(file_path, my_slug, fields_to_process, name_counter, person_dates) 59 | return name_counter, person_dates 60 | 61 | def print_top_names(top_names, person_dates, output_csv=None): 62 | rows = [] 63 | for name, count in top_names: 64 | dates = [datetime.strptime(date, '%Y-%m-%d') for date in person_dates[name]] 65 | if dates: 66 | min_date = min(dates) 67 | max_date = max(dates) 68 | date_diff = max_date - min_date 69 | years, remainder = divmod(date_diff.days, 365) 70 | months, days = divmod(remainder, 30) 71 | first_date = min_date.strftime('%Y-%m-%d') 72 | last_date = max_date.strftime('%Y-%m-%d') 73 | else: 74 | years = months = days = 0 75 | first_date = 'N/A' 76 | last_date = 'N/A' 77 | 78 | count_label = "day" if count == 1 else "days" 79 | row = [name, count, years, months, days, first_date, last_date] 80 | rows.append(row) 81 | 82 | if not output_csv: 83 | duration = f"{years}y {months}m {days}d" 84 | print(f"{name}: {count} {count_label} across {duration} since {first_date}.
Most recently on {last_date}") 85 | 86 | if output_csv: 87 | with open(output_csv, 'w', newline='', encoding='utf-8') as csvfile: 88 | csvwriter = csv.writer(csvfile) 89 | csvwriter.writerow(['Name', 'Count', 'Years', 'Months', 'Days', 'First Date', 'Last Date']) 90 | csvwriter.writerows(rows) 91 | 92 | def main(): 93 | args = parse_arguments() 94 | name_counter, person_dates = process_files(DIRECTORY, args.my_slug, FIELDS_TO_PROCESS) 95 | 96 | if args.top_n: 97 | top_names = name_counter.most_common(args.top_n) 98 | else: 99 | top_names = name_counter.most_common() 100 | 101 | print_top_names(top_names, person_dates, args.output_csv) 102 | 103 | if __name__ == "__main__": 104 | main() -------------------------------------------------------------------------------- /queries/Apr Birthdays and Anniversaries.md: -------------------------------------------------------------------------------- 1 | # Apr Birthdays and Anniversaries 2 | 3 | ```query 4 | [/^(birthday|anniversary)/: /.*04-01$/] 5 | ``` 6 | 7 | ```query 8 | [/^(birthday|anniversary)/: /.*04-02$/] 9 | ``` 10 | 11 | ```query 12 | [/^(birthday|anniversary)/: /.*04-03$/] 13 | ``` 14 | 15 | ```query 16 | [/^(birthday|anniversary)/: /.*04-04$/] 17 | ``` 18 | 19 | ```query 20 | [/^(birthday|anniversary)/: /.*04-05$/] 21 | ``` 22 | 23 | ```query 24 | [/^(birthday|anniversary)/: /.*04-06$/] 25 | ``` 26 | 27 | ```query 28 | [/^(birthday|anniversary)/: /.*04-07$/] 29 | ``` 30 | 31 | ```query 32 | [/^(birthday|anniversary)/: /.*04-08$/] 33 | ``` 34 | 35 | ```query 36 | [/^(birthday|anniversary)/: /.*04-09$/] 37 | ``` 38 | 39 | ```query 40 | [/^(birthday|anniversary)/: /.*04-10$/] 41 | ``` 42 | 43 | ```query 44 | [/^(birthday|anniversary)/: /.*04-11$/] 45 | ``` 46 | 47 | ```query 48 | [/^(birthday|anniversary)/: /.*04-12$/] 49 | ``` 50 | 51 | ```query 52 | [/^(birthday|anniversary)/: /.*04-13$/] 53 | ``` 54 | 55 | ```query 56 | [/^(birthday|anniversary)/: /.*04-14$/] 57 | ``` 58 | 59 | ```query 60 |
[/^(birthday|anniversary)/: /.*04-15$/] 61 | ``` 62 | 63 | ```query 64 | [/^(birthday|anniversary)/: /.*04-16$/] 65 | ``` 66 | 67 | ```query 68 | [/^(birthday|anniversary)/: /.*04-17$/] 69 | ``` 70 | 71 | ```query 72 | [/^(birthday|anniversary)/: /.*04-18$/] 73 | ``` 74 | 75 | ```query 76 | [/^(birthday|anniversary)/: /.*04-19$/] 77 | ``` 78 | 79 | ```query 80 | [/^(birthday|anniversary)/: /.*04-20$/] 81 | ``` 82 | 83 | ```query 84 | [/^(birthday|anniversary)/: /.*04-21$/] 85 | ``` 86 | 87 | ```query 88 | [/^(birthday|anniversary)/: /.*04-22$/] 89 | ``` 90 | 91 | ```query 92 | [/^(birthday|anniversary)/: /.*04-23$/] 93 | ``` 94 | 95 | ```query 96 | [/^(birthday|anniversary)/: /.*04-24$/] 97 | ``` 98 | 99 | ```query 100 | [/^(birthday|anniversary)/: /.*04-25$/] 101 | ``` 102 | 103 | ```query 104 | [/^(birthday|anniversary)/: /.*04-26$/] 105 | ``` 106 | 107 | ```query 108 | [/^(birthday|anniversary)/: /.*04-27$/] 109 | ``` 110 | 111 | ```query 112 | [/^(birthday|anniversary)/: /.*04-28$/] 113 | ``` 114 | 115 | ```query 116 | [/^(birthday|anniversary)/: /.*04-29$/] 117 | ``` 118 | 119 | ```query 120 | [/^(birthday|anniversary)/: /.*04-30$/] 121 | ``` -------------------------------------------------------------------------------- /queries/Aug Birthdays and Anniversaries.md: -------------------------------------------------------------------------------- 1 | # Aug Birthdays and Anniversaries 2 | 3 | ```query 4 | [/^(birthday|anniversary)/: /.*08-01$/] 5 | ``` 6 | 7 | ```query 8 | [/^(birthday|anniversary)/: /.*08-02$/] 9 | ``` 10 | 11 | ```query 12 | [/^(birthday|anniversary)/: /.*08-03$/] 13 | ``` 14 | 15 | ```query 16 | [/^(birthday|anniversary)/: /.*08-04$/] 17 | ``` 18 | 19 | ```query 20 | [/^(birthday|anniversary)/: /.*08-05$/] 21 | ``` 22 | 23 | ```query 24 | [/^(birthday|anniversary)/: /.*08-06$/] 25 | ``` 26 | 27 | ```query 28 | [/^(birthday|anniversary)/: /.*08-07$/] 29 | ``` 30 | 31 |
```query 32 | [/^(birthday|anniversary)/: /.*08-08$/] 33 | ``` 34 | 35 | ```query 36 | [/^(birthday|anniversary)/: /.*08-09$/] 37 | ``` 38 | 39 | ```query 40 | [/^(birthday|anniversary)/: /.*08-10$/] 41 | ``` 42 | 43 | ```query 44 | [/^(birthday|anniversary)/: /.*08-11$/] 45 | ``` 46 | 47 | ```query 48 | [/^(birthday|anniversary)/: /.*08-12$/] 49 | ``` 50 | 51 | ```query 52 | [/^(birthday|anniversary)/: /.*08-13$/] 53 | ``` 54 | 55 | ```query 56 | [/^(birthday|anniversary)/: /.*08-14$/] 57 | ``` 58 | 59 | ```query 60 | [/^(birthday|anniversary)/: /.*08-15$/] 61 | ``` 62 | 63 | ```query 64 | [/^(birthday|anniversary)/: /.*08-16$/] 65 | ``` 66 | 67 | ```query 68 | [/^(birthday|anniversary)/: /.*08-17$/] 69 | ``` 70 | 71 | ```query 72 | [/^(birthday|anniversary)/: /.*08-18$/] 73 | ``` 74 | 75 | ```query 76 | [/^(birthday|anniversary)/: /.*08-19$/] 77 | ``` 78 | 79 | ```query 80 | [/^(birthday|anniversary)/: /.*08-20$/] 81 | ``` 82 | 83 | ```query 84 | [/^(birthday|anniversary)/: /.*08-21$/] 85 | ``` 86 | 87 | ```query 88 | [/^(birthday|anniversary)/: /.*08-22$/] 89 | ``` 90 | 91 | ```query 92 | [/^(birthday|anniversary)/: /.*08-23$/] 93 | ``` 94 | 95 | ```query 96 | [/^(birthday|anniversary)/: /.*08-24$/] 97 | ``` 98 | 99 | ```query 100 | [/^(birthday|anniversary)/: /.*08-25$/] 101 | ``` 102 | 103 | ```query 104 | [/^(birthday|anniversary)/: /.*08-26$/] 105 | ``` 106 | 107 | ```query 108 | [/^(birthday|anniversary)/: /.*08-27$/] 109 | ``` 110 | 111 | ```query 112 | [/^(birthday|anniversary)/: /.*08-28$/] 113 | ``` 114 | 115 | ```query 116 | [/^(birthday|anniversary)/: /.*08-29$/] 117 | ``` 118 | 119 | ```query 120 | [/^(birthday|anniversary)/: /.*08-30$/] 121 | ``` 122 | 123 | ```query 124 | [/^(birthday|anniversary)/: /.*08-31$/] 125 | ``` -------------------------------------------------------------------------------- /queries/Dec Birthdays and Anniversaries.md: -------------------------------------------------------------------------------- 1 | # Dec Birthdays
and Anniversaries 2 | 3 | ```query 4 | [/^(birthday|anniversary)/: /.*12-01$/] 5 | ``` 6 | 7 | ```query 8 | [/^(birthday|anniversary)/: /.*12-02$/] 9 | ``` 10 | 11 | ```query 12 | [/^(birthday|anniversary)/: /.*12-03$/] 13 | ``` 14 | 15 | ```query 16 | [/^(birthday|anniversary)/: /.*12-04$/] 17 | ``` 18 | 19 | ```query 20 | [/^(birthday|anniversary)/: /.*12-05$/] 21 | ``` 22 | 23 | ```query 24 | [/^(birthday|anniversary)/: /.*12-06$/] 25 | ``` 26 | 27 | ```query 28 | [/^(birthday|anniversary)/: /.*12-07$/] 29 | ``` 30 | 31 | ```query 32 | [/^(birthday|anniversary)/: /.*12-08$/] 33 | ``` 34 | 35 | ```query 36 | [/^(birthday|anniversary)/: /.*12-09$/] 37 | ``` 38 | 39 | ```query 40 | [/^(birthday|anniversary)/: /.*12-10$/] 41 | ``` 42 | 43 | ```query 44 | [/^(birthday|anniversary)/: /.*12-11$/] 45 | ``` 46 | 47 | ```query 48 | [/^(birthday|anniversary)/: /.*12-12$/] 49 | ``` 50 | 51 | ```query 52 | [/^(birthday|anniversary)/: /.*12-13$/] 53 | ``` 54 | 55 | ```query 56 | [/^(birthday|anniversary)/: /.*12-14$/] 57 | ``` 58 | 59 | ```query 60 | [/^(birthday|anniversary)/: /.*12-15$/] 61 | ``` 62 | 63 | ```query 64 | [/^(birthday|anniversary)/: /.*12-16$/] 65 | ``` 66 | 67 | ```query 68 | [/^(birthday|anniversary)/: /.*12-17$/] 69 | ``` 70 | 71 | ```query 72 | [/^(birthday|anniversary)/: /.*12-18$/] 73 | ``` 74 | 75 | ```query 76 | [/^(birthday|anniversary)/: /.*12-19$/] 77 | ``` 78 | 79 | ```query 80 | [/^(birthday|anniversary)/: /.*12-20$/] 81 | ``` 82 | 83 | ```query 84 | [/^(birthday|anniversary)/: /.*12-21$/] 85 | ``` 86 | 87 | ```query 88 | [/^(birthday|anniversary)/: /.*12-22$/] 89 | ``` 90 | 91 | ```query 92 | [/^(birthday|anniversary)/: /.*12-23$/] 93 | ``` 94 | 95 | ```query 96 | [/^(birthday|anniversary)/: /.*12-24$/] 97 | ``` 98 | 99 | ```query 100 | [/^(birthday|anniversary)/: /.*12-25$/] 101 | ``` 102 | 103 | ```query 104 | [/^(birthday|anniversary)/: /.*12-26$/] 105 | ``` 106 | 107 | ```query 108 | [/^(birthday|anniversary)/: /.*12-27$/] 109 | ``` 110
| 111 | ```query 112 | [/^(birthday|anniversary)/: /.*12-28$/] 113 | ``` 114 | 115 | ```query 116 | [/^(birthday|anniversary)/: /.*12-29$/] 117 | ``` 118 | 119 | ```query 120 | [/^(birthday|anniversary)/: /.*12-30$/] 121 | ``` 122 | 123 | ```query 124 | [/^(birthday|anniversary)/: /.*12-31$/] 125 | ``` -------------------------------------------------------------------------------- /queries/Feb Birthdays and Anniversaries.md: -------------------------------------------------------------------------------- 1 | # Feb Birthdays and Anniversaries 2 | 3 | ```query 4 | [/^(birthday|anniversary)/: /.*02-01$/] 5 | ``` 6 | 7 | ```query 8 | [/^(birthday|anniversary)/: /.*02-02$/] 9 | ``` 10 | 11 | ```query 12 | [/^(birthday|anniversary)/: /.*02-03$/] 13 | ``` 14 | 15 | ```query 16 | [/^(birthday|anniversary)/: /.*02-04$/] 17 | ``` 18 | 19 | ```query 20 | [/^(birthday|anniversary)/: /.*02-05$/] 21 | ``` 22 | 23 | ```query 24 | [/^(birthday|anniversary)/: /.*02-06$/] 25 | ``` 26 | 27 | ```query 28 | [/^(birthday|anniversary)/: /.*02-07$/] 29 | ``` 30 | 31 | ```query 32 | [/^(birthday|anniversary)/: /.*02-08$/] 33 | ``` 34 | 35 | ```query 36 | [/^(birthday|anniversary)/: /.*02-09$/] 37 | ``` 38 | 39 | ```query 40 | [/^(birthday|anniversary)/: /.*02-10$/] 41 | ``` 42 | 43 | ```query 44 | [/^(birthday|anniversary)/: /.*02-11$/] 45 | ``` 46 | 47 | ```query 48 | [/^(birthday|anniversary)/: /.*02-12$/] 49 | ``` 50 | 51 | ```query 52 | [/^(birthday|anniversary)/: /.*02-13$/] 53 | ``` 54 | 55 | ```query 56 | [/^(birthday|anniversary)/: /.*02-14$/] 57 | ``` 58 | 59 | ```query 60 | [/^(birthday|anniversary)/: /.*02-15$/] 61 | ``` 62 | 63 | ```query 64 | [/^(birthday|anniversary)/: /.*02-16$/] 65 | ``` 66 | 67 | ```query 68 | [/^(birthday|anniversary)/: /.*02-17$/] 69 | ``` 70 | 71 | ```query 72 | [/^(birthday|anniversary)/: /.*02-18$/] 73 | ``` 74 | 75 | ```query 76 | [/^(birthday|anniversary)/: /.*02-19$/] 77 | ``` 78 | 79 | ```query 80 | [/^(birthday|anniversary)/: /.*02-20$/] 81 |
``` 82 | 83 | ```query 84 | [/^(birthday|anniversary)/: /.*02-21$/] 85 | ``` 86 | 87 | ```query 88 | [/^(birthday|anniversary)/: /.*02-22$/] 89 | ``` 90 | 91 | ```query 92 | [/^(birthday|anniversary)/: /.*02-23$/] 93 | ``` 94 | 95 | ```query 96 | [/^(birthday|anniversary)/: /.*02-24$/] 97 | ``` 98 | 99 | ```query 100 | [/^(birthday|anniversary)/: /.*02-25$/] 101 | ``` 102 | 103 | ```query 104 | [/^(birthday|anniversary)/: /.*02-26$/] 105 | ``` 106 | 107 | ```query 108 | [/^(birthday|anniversary)/: /.*02-27$/] 109 | ``` 110 | 111 | ```query 112 | [/^(birthday|anniversary)/: /.*02-28$/] 113 | ``` 114 | 115 | ```query 116 | [/^(birthday|anniversary)/: /.*02-29$/] 117 | ``` -------------------------------------------------------------------------------- /queries/Jan Birthdays and Anniversaries.md: -------------------------------------------------------------------------------- 1 | # Jan Birthdays and Anniversaries 2 | 3 | ```query 4 | [/^(birthday|anniversary)/: /.*01-01$/] 5 | ``` 6 | 7 | ```query 8 | [/^(birthday|anniversary)/: /.*01-02$/] 9 | ``` 10 | 11 | ```query 12 | [/^(birthday|anniversary)/: /.*01-03$/] 13 | ``` 14 | 15 | ```query 16 | [/^(birthday|anniversary)/: /.*01-04$/] 17 | ``` 18 | 19 | ```query 20 | [/^(birthday|anniversary)/: /.*01-05$/] 21 | ``` 22 | 23 | ```query 24 | [/^(birthday|anniversary)/: /.*01-06$/] 25 | ``` 26 | 27 | ```query 28 | [/^(birthday|anniversary)/: /.*01-07$/] 29 | ``` 30 | 31 | ```query 32 | [/^(birthday|anniversary)/: /.*01-08$/] 33 | ``` 34 | 35 | ```query 36 | [/^(birthday|anniversary)/: /.*01-09$/] 37 | ``` 38 | 39 | ```query 40 | [/^(birthday|anniversary)/: /.*01-10$/] 41 | ``` 42 | 43 | ```query 44 | [/^(birthday|anniversary)/: /.*01-11$/] 45 | ``` 46 | 47 | ```query 48 | [/^(birthday|anniversary)/: /.*01-12$/] 49 | ``` 50 | 51 | ```query 52 | [/^(birthday|anniversary)/:
/.*01-13$/] 53 | ``` 54 | 55 | ```query 56 | [/^(birthday|anniversary)/: /.*01-14$/] 57 | ``` 58 | 59 | ```query 60 | [/^(birthday|anniversary)/: /.*01-15$/] 61 | ``` 62 | 63 | ```query 64 | [/^(birthday|anniversary)/: /.*01-16$/] 65 | ``` 66 | 67 | ```query 68 | [/^(birthday|anniversary)/: /.*01-17$/] 69 | ``` 70 | 71 | ```query 72 | [/^(birthday|anniversary)/: /.*01-18$/] 73 | ``` 74 | 75 | ```query 76 | [/^(birthday|anniversary)/: /.*01-19$/] 77 | ``` 78 | 79 | ```query 80 | [/^(birthday|anniversary)/: /.*01-20$/] 81 | ``` 82 | 83 | ```query 84 | [/^(birthday|anniversary)/: /.*01-21$/] 85 | ``` 86 | 87 | ```query 88 | [/^(birthday|anniversary)/: /.*01-22$/] 89 | ``` 90 | 91 | ```query 92 | [/^(birthday|anniversary)/: /.*01-23$/] 93 | ``` 94 | 95 | ```query 96 | [/^(birthday|anniversary)/: /.*01-24$/] 97 | ``` 98 | 99 | ```query 100 | [/^(birthday|anniversary)/: /.*01-25$/] 101 | ``` 102 | 103 | ```query 104 | [/^(birthday|anniversary)/: /.*01-26$/] 105 | ``` 106 | 107 | ```query 108 | [/^(birthday|anniversary)/: /.*01-27$/] 109 | ``` 110 | 111 | ```query 112 | [/^(birthday|anniversary)/: /.*01-28$/] 113 | ``` 114 | 115 | ```query 116 | [/^(birthday|anniversary)/: /.*01-29$/] 117 | ``` 118 | 119 | ```query 120 | [/^(birthday|anniversary)/: /.*01-30$/] 121 | ``` 122 | 123 | ```query 124 | [/^(birthday|anniversary)/: /.*01-31$/] 125 | ``` -------------------------------------------------------------------------------- /queries/Jul Birthdays and Anniversaries.md: -------------------------------------------------------------------------------- 1 | # Jul Birthdays and Anniversaries 2 | 3 | ```query 4 | [/^(birthday|anniversary)/: /.*07-01$/] 5 | ``` 6 | 7 | ```query 8 | [/^(birthday|anniversary)/: /.*07-02$/] 9 | ``` 10 | 11 | ```query 12 | [/^(birthday|anniversary)/: /.*07-03$/] 13 | ``` 14 | 15 | ```query 16 | [/^(birthday|anniversary)/: /.*07-04$/] 17 | ``` 18 | 19 | ```query 20 | [/^(birthday|anniversary)/: /.*07-05$/] 21 | ``` 22 | 23 | ```query 24 |
[/^(birthday|anniversary)/: /.*07-06$/] 25 | ``` 26 | 27 | ```query 28 | [/^(birthday|anniversary)/: /.*07-07$/] 29 | ``` 30 | 31 | ```query 32 | [/^(birthday|anniversary)/: /.*07-08$/] 33 | ``` 34 | 35 | ```query 36 | [/^(birthday|anniversary)/: /.*07-09$/] 37 | ``` 38 | 39 | ```query 40 | [/^(birthday|anniversary)/: /.*07-10$/] 41 | ``` 42 | 43 | ```query 44 | [/^(birthday|anniversary)/: /.*07-11$/] 45 | ``` 46 | 47 | ```query 48 | [/^(birthday|anniversary)/: /.*07-12$/] 49 | ``` 50 | 51 | ```query 52 | [/^(birthday|anniversary)/: /.*07-13$/] 53 | ``` 54 | 55 | ```query 56 | [/^(birthday|anniversary)/: /.*07-14$/] 57 | ``` 58 | 59 | ```query 60 | [/^(birthday|anniversary)/: /.*07-15$/] 61 | ``` 62 | 63 | ```query 64 | [/^(birthday|anniversary)/: /.*07-16$/] 65 | ``` 66 | 67 | ```query 68 | [/^(birthday|anniversary)/: /.*07-17$/] 69 | ``` 70 | 71 | ```query 72 | [/^(birthday|anniversary)/: /.*07-18$/] 73 | ``` 74 | 75 | ```query 76 | [/^(birthday|anniversary)/: /.*07-19$/] 77 | ``` 78 | 79 | ```query 80 | [/^(birthday|anniversary)/: /.*07-20$/] 81 | ``` 82 | 83 | ```query 84 | [/^(birthday|anniversary)/: /.*07-21$/] 85 | ``` 86 | 87 | ```query 88 | [/^(birthday|anniversary)/: /.*07-22$/] 89 | ``` 90 | 91 | ```query 92 | [/^(birthday|anniversary)/: /.*07-23$/] 93 | ``` 94 | 95 | ```query 96 | [/^(birthday|anniversary)/: /.*07-24$/] 97 | ``` 98 | 99 | ```query 100 | [/^(birthday|anniversary)/: /.*07-25$/] 101 | ``` 102 | 103 | ```query 104 | [/^(birthday|anniversary)/: /.*07-26$/] 105 | ``` 106 | 107 | ```query 108 | [/^(birthday|anniversary)/: /.*07-27$/] 109 | ``` 110 | 111 | ```query 112 | [/^(birthday|anniversary)/: /.*07-28$/] 113 | ``` 114 | 115 | ```query 116 | [/^(birthday|anniversary)/: /.*07-29$/] 117 | ``` 118 | 119 | ```query 120 | [/^(birthday|anniversary)/: /.*07-30$/] 121 | ``` 122 | 123 | ```query 124 | [/^(birthday|anniversary)/: /.*07-31$/] 125 | ``` -------------------------------------------------------------------------------- /queries/Jun
Birthdays and Anniversaries.md: -------------------------------------------------------------------------------- 1 | # Jun Birthdays and Anniversaries 2 | 3 | ```query 4 | [/^(birthday|anniversary)/: /.*06-01$/] 5 | ``` 6 | 7 | ```query 8 | [/^(birthday|anniversary)/: /.*06-02$/] 9 | ``` 10 | 11 | ```query 12 | [/^(birthday|anniversary)/: /.*06-03$/] 13 | ``` 14 | 15 | ```query 16 | [/^(birthday|anniversary)/: /.*06-04$/] 17 | ``` 18 | 19 | ```query 20 | [/^(birthday|anniversary)/: /.*06-05$/] 21 | ``` 22 | 23 | ```query 24 | [/^(birthday|anniversary)/: /.*06-06$/] 25 | ``` 26 | 27 | ```query 28 | [/^(birthday|anniversary)/: /.*06-07$/] 29 | ``` 30 | 31 | ```query 32 | [/^(birthday|anniversary)/: /.*06-08$/] 33 | ``` 34 | 35 | ```query 36 | [/^(birthday|anniversary)/: /.*06-09$/] 37 | ``` 38 | 39 | ```query 40 | [/^(birthday|anniversary)/: /.*06-10$/] 41 | ``` 42 | 43 | ```query 44 | [/^(birthday|anniversary)/: /.*06-11$/] 45 | ``` 46 | 47 | ```query 48 | [/^(birthday|anniversary)/: /.*06-12$/] 49 | ``` 50 | 51 | ```query 52 | [/^(birthday|anniversary)/: /.*06-13$/] 53 | ``` 54 | 55 | ```query 56 | [/^(birthday|anniversary)/: /.*06-14$/] 57 | ``` 58 | 59 | ```query 60 | [/^(birthday|anniversary)/: /.*06-15$/] 61 | ``` 62 | 63 | ```query 64 | [/^(birthday|anniversary)/: /.*06-16$/] 65 | ``` 66 | 67 | ```query 68 | [/^(birthday|anniversary)/: /.*06-17$/] 69 | ``` 70 | 71 | ```query 72 | [/^(birthday|anniversary)/: /.*06-18$/] 73 | ``` 74 | 75 | ```query 76 | [/^(birthday|anniversary)/: /.*06-19$/] 77 | ``` 78 | 79 | ```query 80 | [/^(birthday|anniversary)/: /.*06-20$/] 81 | ``` 82 | 83 | ```query 84 | [/^(birthday|anniversary)/: /.*06-21$/] 85 | ``` 86 | 87 | ```query 88 | [/^(birthday|anniversary)/: /.*06-22$/] 89 | ``` 90 | 91 | ```query 92 | [/^(birthday|anniversary)/: /.*06-23$/] 93 | ``` 94 | 95 | ```query 96 | [/^(birthday|anniversary)/: /.*06-24$/] 97 | ``` 98 | 99 | ```query 100 | [/^(birthday|anniversary)/: /.*06-25$/] 101 | ``` 102 | 103 | ```query 104 |
[/^(birthday|anniversary)/: /.*06-26$/] 105 | ``` 106 | 107 | ```query 108 | [/^(birthday|anniversary)/: /.*06-27$/] 109 | ``` 110 | 111 | ```query 112 | [/^(birthday|anniversary)/: /.*06-28$/] 113 | ``` 114 | 115 | ```query 116 | [/^(birthday|anniversary)/: /.*06-29$/] 117 | ``` 118 | 119 | ```query 120 | [/^(birthday|anniversary)/: /.*06-30$/] 121 | ``` -------------------------------------------------------------------------------- /queries/Mar Birthdays and Anniversaries.md: -------------------------------------------------------------------------------- 1 | # Mar Birthdays and Anniversaries 2 | 3 | ```query 4 | [/^(birthday|anniversary)/: /.*03-01$/] 5 | ``` 6 | 7 | ```query 8 | [/^(birthday|anniversary)/: /.*03-02$/] 9 | ``` 10 | 11 | ```query 12 | [/^(birthday|anniversary)/: /.*03-03$/] 13 | ``` 14 | 15 | ```query 16 | [/^(birthday|anniversary)/: /.*03-04$/] 17 | ``` 18 | 19 | ```query 20 | [/^(birthday|anniversary)/: /.*03-05$/] 21 | ``` 22 | 23 | ```query 24 | [/^(birthday|anniversary)/: /.*03-06$/] 25 | ``` 26 | 27 | ```query 28 | [/^(birthday|anniversary)/: /.*03-07$/] 29 | ``` 30 | 31 | ```query 32 | [/^(birthday|anniversary)/: /.*03-08$/] 33 | ``` 34 | 35 | ```query 36 | [/^(birthday|anniversary)/: /.*03-09$/] 37 | ``` 38 | 39 | ```query 40 | [/^(birthday|anniversary)/: /.*03-10$/] 41 | ``` 42 | 43 | ```query 44 | [/^(birthday|anniversary)/: /.*03-11$/] 45 | ``` 46 | 47 | ```query 48 | [/^(birthday|anniversary)/: /.*03-12$/] 49 | ``` 50 | 51 | ```query 52 | [/^(birthday|anniversary)/: /.*03-13$/] 53 | ``` 54 | 55 | ```query 56 | [/^(birthday|anniversary)/: /.*03-14$/] 57 | ``` 58 | 59 | ```query 60 | [/^(birthday|anniversary)/: /.*03-15$/] 61 | ``` 62 | 63 | ```query 64 | [/^(birthday|anniversary)/: /.*03-16$/] 65 | ``` 66 | 67 | ```query 68 | [/^(birthday|anniversary)/: /.*03-17$/] 69 | ``` 70 | 71 | ```query 72 | [/^(birthday|anniversary)/: /.*03-18$/] 73 | ``` 74 | 75 |
```query 76 | [/^(birthday|anniversary)/: /.*03-19$/] 77 | ``` 78 | 79 | ```query 80 | [/^(birthday|anniversary)/: /.*03-20$/] 81 | ``` 82 | 83 | ```query 84 | [/^(birthday|anniversary)/: /.*03-21$/] 85 | ``` 86 | 87 | ```query 88 | [/^(birthday|anniversary)/: /.*03-22$/] 89 | ``` 90 | 91 | ```query 92 | [/^(birthday|anniversary)/: /.*03-23$/] 93 | ``` 94 | 95 | ```query 96 | [/^(birthday|anniversary)/: /.*03-24$/] 97 | ``` 98 | 99 | ```query 100 | [/^(birthday|anniversary)/: /.*03-25$/] 101 | ``` 102 | 103 | ```query 104 | [/^(birthday|anniversary)/: /.*03-26$/] 105 | ``` 106 | 107 | ```query 108 | [/^(birthday|anniversary)/: /.*03-27$/] 109 | ``` 110 | 111 | ```query 112 | [/^(birthday|anniversary)/: /.*03-28$/] 113 | ``` 114 | 115 | ```query 116 | [/^(birthday|anniversary)/: /.*03-29$/] 117 | ``` 118 | 119 | ```query 120 | [/^(birthday|anniversary)/: /.*03-30$/] 121 | ``` 122 | 123 | ```query 124 | [/^(birthday|anniversary)/: /.*03-31$/] 125 | ``` -------------------------------------------------------------------------------- /queries/May Birthdays and Anniversaries.md: -------------------------------------------------------------------------------- 1 | # May Birthdays and Anniversaries 2 | 3 | ```query 4 | [/^(birthday|anniversary)/: /.*05-01$/] 5 | ``` 6 | 7 | ```query 8 | [/^(birthday|anniversary)/: /.*05-02$/] 9 | ``` 10 | 11 | ```query 12 | [/^(birthday|anniversary)/: /.*05-03$/] 13 | ``` 14 | 15 | ```query 16 | [/^(birthday|anniversary)/: /.*05-04$/] 17 | ``` 18 | 19 | ```query 20 | [/^(birthday|anniversary)/: /.*05-05$/] 21 | ``` 22 | 23 | ```query 24 | [/^(birthday|anniversary)/: /.*05-06$/] 25 | ``` 26 | 27 | ```query 28 | [/^(birthday|anniversary)/: /.*05-07$/] 29 | ``` 30 | 31 | ```query 32 | [/^(birthday|anniversary)/: /.*05-08$/] 33 | ``` 34 | 35 | ```query 36 | [/^(birthday|anniversary)/: /.*05-09$/] 37 | ``` 38 | 39 | ```query 40 | [/^(birthday|anniversary)/: /.*05-10$/] 41 | ``` 42 | 43 | ```query 44 | [/^(birthday|anniversary)/: /.*05-11$/]
45 | ``` 46 | 47 | ```query 48 | [/^(birthday|anniversary)/: /.*05-12$/] 49 | ``` 50 | 51 | ```query 52 | [/^(birthday|anniversary)/: /.*05-13$/] 53 | ``` 54 | 55 | ```query 56 | [/^(birthday|anniversary)/: /.*05-14$/] 57 | ``` 58 | 59 | ```query 60 | [/^(birthday|anniversary)/: /.*05-15$/] 61 | ``` 62 | 63 | ```query 64 | [/^(birthday|anniversary)/: /.*05-16$/] 65 | ``` 66 | 67 | ```query 68 | [/^(birthday|anniversary)/: /.*05-17$/] 69 | ``` 70 | 71 | ```query 72 | [/^(birthday|anniversary)/: /.*05-18$/] 73 | ``` 74 | 75 | ```query 76 | [/^(birthday|anniversary)/: /.*05-19$/] 77 | ``` 78 | 79 | ```query 80 | [/^(birthday|anniversary)/: /.*05-20$/] 81 | ``` 82 | 83 | ```query 84 | [/^(birthday|anniversary)/: /.*05-21$/] 85 | ``` 86 | 87 | ```query 88 | [/^(birthday|anniversary)/: /.*05-22$/] 89 | ``` 90 | 91 | ```query 92 | [/^(birthday|anniversary)/: /.*05-23$/] 93 | ``` 94 | 95 | ```query 96 | [/^(birthday|anniversary)/: /.*05-24$/] 97 | ``` 98 | 99 | ```query 100 | [/^(birthday|anniversary)/: /.*05-25$/] 101 | ``` 102 | 103 | ```query 104 | [/^(birthday|anniversary)/: /.*05-26$/] 105 | ``` 106 | 107 | ```query 108 | [/^(birthday|anniversary)/: /.*05-27$/] 109 | ``` 110 | 111 | ```query 112 | [/^(birthday|anniversary)/: /.*05-28$/] 113 | ``` 114 | 115 | ```query 116 | [/^(birthday|anniversary)/: /.*05-29$/] 117 | ``` 118 | 119 | ```query 120 | [/^(birthday|anniversary)/: /.*05-30$/] 121 | ``` 122 | 123 | ```query 124 | [/^(birthday|anniversary)/: /.*05-31$/] 125 | ``` -------------------------------------------------------------------------------- /queries/Nov Birthdays and Anniversaries.md: -------------------------------------------------------------------------------- 1 | # Nov Birthdays and Anniversaries 2 | 3 | ```query 4 | [/^(birthday|anniversary)/: /.*11-01$/] 5 | ``` 6 | 7 | ```query 8 | [/^(birthday|anniversary)/: /.*11-02$/] 9 | ``` 10 | 11 | ```query 12 | [/^(birthday|anniversary)/: /.*11-03$/] 13 | ``` 14 | 15 | ```query 16 | 
[/^(birthday|anniversary)/: /.*11-04$/] 17 | ``` 18 | 19 | ```query 20 | [/^(birthday|anniversary)/: /.*11-05$/] 21 | ``` 22 | 23 | ```query 24 | [/^(birthday|anniversary)/: /.*11-06$/] 25 | ``` 26 | 27 | ```query 28 | [/^(birthday|anniversary)/: /.*11-07$/] 29 | ``` 30 | 31 | ```query 32 | [/^(birthday|anniversary)/: /.*11-08$/] 33 | ``` 34 | 35 | ```query 36 | [/^(birthday|anniversary)/: /.*11-09$/] 37 | ``` 38 | 39 | ```query 40 | [/^(birthday|anniversary)/: /.*11-10$/] 41 | ``` 42 | 43 | ```query 44 | [/^(birthday|anniversary)/: /.*11-11$/] 45 | ``` 46 | 47 | ```query 48 | [/^(birthday|anniversary)/: /.*11-12$/] 49 | ``` 50 | 51 | ```query 52 | [/^(birthday|anniversary)/: /.*11-13$/] 53 | ``` 54 | 55 | ```query 56 | [/^(birthday|anniversary)/: /.*11-14$/] 57 | ``` 58 | 59 | ```query 60 | [/^(birthday|anniversary)/: /.*11-15$/] 61 | ``` 62 | 63 | ```query 64 | [/^(birthday|anniversary)/: /.*11-16$/] 65 | ``` 66 | 67 | ```query 68 | [/^(birthday|anniversary)/: /.*11-17$/] 69 | ``` 70 | 71 | ```query 72 | [/^(birthday|anniversary)/: /.*11-18$/] 73 | ``` 74 | 75 | ```query 76 | [/^(birthday|anniversary)/: /.*11-19$/] 77 | ``` 78 | 79 | ```query 80 | [/^(birthday|anniversary)/: /.*11-20$/] 81 | ``` 82 | 83 | ```query 84 | [/^(birthday|anniversary)/: /.*11-21$/] 85 | ``` 86 | 87 | ```query 88 | [/^(birthday|anniversary)/: /.*11-22$/] 89 | ``` 90 | 91 | ```query 92 | [/^(birthday|anniversary)/: /.*11-23$/] 93 | ``` 94 | 95 | ```query 96 | [/^(birthday|anniversary)/: /.*11-24$/] 97 | ``` 98 | 99 | ```query 100 | [/^(birthday|anniversary)/: /.*11-25$/] 101 | ``` 102 | 103 | ```query 104 | [/^(birthday|anniversary)/: /.*11-26$/] 105 | ``` 106 | 107 | ```query 108 | [/^(birthday|anniversary)/: /.*11-27$/] 109 | ``` 110 | 111 | ```query 112 | [/^(birthday|anniversary)/: /.*11-28$/] 113 | ``` 114 | 115 | ```query 116 | [/^(birthday|anniversary)/: /.*11-29$/] 117 | ``` 118 | 119 | ```query 120 | [/^(birthday|anniversary)/: /.*11-30$/] 121 | ``` -------------------------------------------------------------------------------- /queries/Oct Birthdays and Anniversaries.md: -------------------------------------------------------------------------------- 1 | # Oct Birthdays and Anniversaries 2 | 3 | ```query 4 | [/^(birthday|anniversary)/: /.*10-01$/] 5 | ``` 6 | 7 | ```query 8 | [/^(birthday|anniversary)/: /.*10-02$/] 9 | ``` 10 | 11 | ```query 12 | [/^(birthday|anniversary)/: /.*10-03$/] 13 | ``` 14 | 15 | ```query 16 | [/^(birthday|anniversary)/: /.*10-04$/] 17 | ``` 18 | 19 | ```query 20 | [/^(birthday|anniversary)/: /.*10-05$/] 21 | ``` 22 | 23 | ```query 24 | [/^(birthday|anniversary)/: /.*10-06$/] 25 | ``` 26 | 27 | ```query 28 | [/^(birthday|anniversary)/: /.*10-07$/] 29 | ``` 30 | 31 | ```query 32 | [/^(birthday|anniversary)/: /.*10-08$/] 33 | ``` 34 | 35 | ```query 36 | [/^(birthday|anniversary)/: /.*10-09$/] 37 | ``` 38 | 39 | ```query 40 | [/^(birthday|anniversary)/: /.*10-10$/] 41 | ``` 42 | 43 | ```query 44 | [/^(birthday|anniversary)/: /.*10-11$/] 45 | ``` 46 | 47 | ```query 48 | [/^(birthday|anniversary)/: /.*10-12$/] 49 | ``` 50 | 51 | ```query 52 | [/^(birthday|anniversary)/: /.*10-13$/] 53 | ``` 54 | 55 | ```query 56 | [/^(birthday|anniversary)/: /.*10-14$/] 57 | ``` 58 | 59 | ```query 60 | [/^(birthday|anniversary)/: /.*10-15$/] 61 | ``` 62 | 63 | ```query 64 | [/^(birthday|anniversary)/: /.*10-16$/] 65 | ``` 66 | 67 | ```query 68 | [/^(birthday|anniversary)/: /.*10-17$/] 69 | ``` 70 | 71 | ```query 72 | [/^(birthday|anniversary)/: /.*10-18$/] 73 | ``` 74 | 75 | ```query 76 | [/^(birthday|anniversary)/: /.*10-19$/] 77 | ``` 78 | 79 | ```query 80 | [/^(birthday|anniversary)/: /.*10-20$/] 81 | ``` 82 | 83 | ```query 84 | [/^(birthday|anniversary)/: /.*10-21$/] 85 | ``` 86 | 87 | ```query 88 | [/^(birthday|anniversary)/: /.*10-22$/] 89 | ``` 90 | 91 | ```query 92 | [/^(birthday|anniversary)/: /.*10-23$/] 93 | ``` 94 | 95 | ```query 96 |
[/^(birthday|anniversary)/: /.*10-24$/] 97 | ``` 98 | 99 | ```query 100 | [/^(birthday|anniversary)/: /.*10-25$/] 101 | ``` 102 | 103 | ```query 104 | [/^(birthday|anniversary)/: /.*10-26$/] 105 | ``` 106 | 107 | ```query 108 | [/^(birthday|anniversary)/: /.*10-27$/] 109 | ``` 110 | 111 | ```query 112 | [/^(birthday|anniversary)/: /.*10-28$/] 113 | ``` 114 | 115 | ```query 116 | [/^(birthday|anniversary)/: /.*10-29$/] 117 | ``` 118 | 119 | ```query 120 | [/^(birthday|anniversary)/: /.*10-30$/] 121 | ``` 122 | 123 | ```query 124 | [/^(birthday|anniversary)/: /.*10-31$/] 125 | ``` -------------------------------------------------------------------------------- /queries/Sep Birthdays and Anniversaries.md: -------------------------------------------------------------------------------- 1 | # Sep Birthdays and Anniversaries 2 | 3 | ```query 4 | [/^(birthday|anniversary)/: /.*09-01$/] 5 | ``` 6 | 7 | ```query 8 | [/^(birthday|anniversary)/: /.*09-02$/] 9 | ``` 10 | 11 | ```query 12 | [/^(birthday|anniversary)/: /.*09-03$/] 13 | ``` 14 | 15 | ```query 16 | [/^(birthday|anniversary)/: /.*09-04$/] 17 | ``` 18 | 19 | ```query 20 | [/^(birthday|anniversary)/: /.*09-05$/] 21 | ``` 22 | 23 | ```query 24 | [/^(birthday|anniversary)/: /.*09-06$/] 25 | ``` 26 | 27 | ```query 28 | [/^(birthday|anniversary)/: /.*09-07$/] 29 | ``` 30 | 31 | ```query 32 | [/^(birthday|anniversary)/: /.*09-08$/] 33 | ``` 34 | 35 | ```query 36 | [/^(birthday|anniversary)/: /.*09-09$/] 37 | ``` 38 | 39 | ```query 40 | [/^(birthday|anniversary)/: /.*09-10$/] 41 | ``` 42 | 43 | ```query 44 | [/^(birthday|anniversary)/: /.*09-11$/] 45 | ``` 46 | 47 | ```query 48 | [/^(birthday|anniversary)/: /.*09-12$/] 49 | ``` 50 | 51 | ```query 52 | [/^(birthday|anniversary)/: /.*09-13$/] 53 | ``` 54 | 55 | ```query 56 | [/^(birthday|anniversary)/: /.*09-14$/] 57 | ``` 58 | 59 | ```query 60 | [/^(birthday|anniversary)/: /.*09-15$/] 61 | ``` 62 | 63 | ```query 64 | [/^(birthday|anniversary)/: /.*09-16$/] 65 | ``` 66 |
67 | ```query 68 | [/^(birthday|anniversary)/: /.*09-17$/] 69 | ``` 70 | 71 | ```query 72 | [/^(birthday|anniversary)/: /.*09-18$/] 73 | ``` 74 | 75 | ```query 76 | [/^(birthday|anniversary)/: /.*09-19$/] 77 | ``` 78 | 79 | ```query 80 | [/^(birthday|anniversary)/: /.*09-20$/] 81 | ``` 82 | 83 | ```query 84 | [/^(birthday|anniversary)/: /.*09-21$/] 85 | ``` 86 | 87 | ```query 88 | [/^(birthday|anniversary)/: /.*09-22$/] 89 | ``` 90 | 91 | ```query 92 | [/^(birthday|anniversary)/: /.*09-23$/] 93 | ``` 94 | 95 | ```query 96 | [/^(birthday|anniversary)/: /.*09-24$/] 97 | ``` 98 | 99 | ```query 100 | [/^(birthday|anniversary)/: /.*09-25$/] 101 | ``` 102 | 103 | ```query 104 | [/^(birthday|anniversary)/: /.*09-26$/] 105 | ``` 106 | 107 | ```query 108 | [/^(birthday|anniversary)/: /.*09-27$/] 109 | ``` 110 | 111 | ```query 112 | [/^(birthday|anniversary)/: /.*09-28$/] 113 | ``` 114 | 115 | ```query 116 | [/^(birthday|anniversary)/: /.*09-29$/] 117 | ``` 118 | 119 | ```query 120 | [/^(birthday|anniversary)/: /.*09-30$/] 121 | ``` -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | html2text==2025.4.15 2 | markdown==3.8 3 | markdown-it-py==3.0.0 4 | mdurl==0.1.2 5 | pip==25.0.1 6 | pygments==2.19.1 7 | rich==14.0.0 8 | -------------------------------------------------------------------------------- /templates/Call.md: -------------------------------------------------------------------------------- 1 | --- 2 | tags: 3 | - phone-call 4 | people: 5 | organizations: 6 | date: 7 | time: 8 | --- 9 | 10 | -------------------------------------------------------------------------------- /templates/Chat.md: -------------------------------------------------------------------------------- 1 | --- 2 | tags: 3 | - chat 4 | people: 5 | topic: 6 | date: 7 | time: 8 | service: 9 | 
--- 10 | 11 | -------------------------------------------------------------------------------- /templates/Organization.md: -------------------------------------------------------------------------------- 1 | --- 2 | tags: [organization] 3 | slug: 4 | aliases: [] 5 | url: 6 | email: 7 | phone: 8 | linkedin_id: 9 | x_id: 10 | city: 11 | province: 12 | country: 13 | --- 14 | 15 | # Organization 16 | 17 | ## Quotes 18 | 19 | > 20 | 21 | ## References 22 | 23 | 1. 24 | 25 | ## Products 26 | 27 | - 28 | 29 | ## People 30 | 31 | - 32 | 33 | ## Notes 34 | -------------------------------------------------------------------------------- /templates/Person.md: -------------------------------------------------------------------------------- 1 | --- 2 | tags: 3 | - person 4 | first_name: 5 | last_name: 6 | aliases: 7 | slug: 8 | birthday: 9 | deathday: 10 | title: 11 | skills: 12 | organizations: 13 | url: 14 | email: 15 | mobile: 16 | phone: 17 | x_id: 18 | instagram_id: 19 | linkedin_id: 20 | hometown: 21 | city: 22 | state: 23 | country: 24 | --- 25 | 26 | # Person 27 | 28 | ## Bio 29 | 30 | > 31 | 32 | ## Quotes 33 | 34 | > 35 | 36 | ## Life Events 37 | 38 | - 39 | 40 | ## References 41 | 42 | 1. 43 | 44 | ## Products 45 | 46 | - 47 | 48 | ## Positions 49 | 50 | - 51 | 52 | ## People 53 | 54 | - 55 | 56 | ## Interests 57 | 58 | - 59 | 60 | ## Notes 61 | 62 | - 63 | -------------------------------------------------------------------------------- /templates/Place.md: -------------------------------------------------------------------------------- 1 | --- 2 | tags: 3 | - place 4 | aliases: 5 | people: 6 | url: 7 | phone: 8 | email: 9 | recommended_by: 10 | rating: 11 | city: 12 | province: 13 | country: 14 | --- 15 | 16 | # Place 17 | 18 | ## References 19 | 20 | 1. 
21 | 22 | ## Notes 23 | 24 | - 25 | -------------------------------------------------------------------------------- /templates/Post.md: -------------------------------------------------------------------------------- 1 | --- 2 | tags: 3 | - post 4 | url: 5 | date: 6 | people: 7 | service: 8 | --- 9 | 10 | # Post 11 | 12 | -------------------------------------------------------------------------------- /templates/Product.md: -------------------------------------------------------------------------------- 1 | --- 2 | tags: 3 | - product 4 | aliases: 5 | slug: 6 | organizations: 7 | people: 8 | integrations: 9 | url: 10 | email: 11 | phone: 12 | linkedin_id: 13 | x_id: 14 | --- 15 | 16 | # Product 17 | 18 | ## Summary 19 | 20 | 21 | ## Quotes 22 | 23 | 24 | ## References 25 | 26 | 1. 27 | 28 | ## People 29 | 30 | - 31 | 32 | ## Integrations 33 | 34 | - 35 | 36 | ## Notes 37 | 38 | - -------------------------------------------------------------------------------- /templates/Video.md: -------------------------------------------------------------------------------- 1 | --- 2 | tags: 3 | - video 4 | url: 5 | people: 6 | date: 7 | duration: 8 | --- 9 | 10 | # Video 11 | 12 | -------------------------------------------------------------------------------- /templates/email.md: -------------------------------------------------------------------------------- 1 | --- 2 | tags: [email] 3 | from: 4 | to: [] 5 | cc: [] 6 | subject: 7 | date: 8 | time: 9 | --- 10 | 11 | --------------------------------------------------------------------------------