├── prompt-library
│   ├── research
│   │   └── single-source-request.md
│   ├── steers
│   │   ├── wrong-openai-sdk.md
│   │   ├── full-code-only.md
│   │   ├── hallucination.md
│   │   ├── im-not-american.md
│   │   ├── stop-with-the-security-stuff.md
│   │   └── that-was-the-system-prompt.md
│   ├── rewrite-instructions
│   │   ├── eli5-that.md
│   │   └── from-me.md
│   ├── summary-generators
│   │   ├── format-targets
│   │   │   ├── last-prompt-and-output.md
│   │   │   └── title-prompt-output.md
│   │   ├── consolidate-outputs.md
│   │   ├── movie-rec-list.md
│   │   └── summarise-chat.md
│   ├── output-format-directions
│   │   ├── output-json.md
│   │   ├── output-csv.md
│   │   ├── provide-sources.md
│   │   └── output-markdown.md
│   ├── jokes
│   │   ├── supervisor-transfer.md
│   │   ├── you-ruined-my-code.md
│   │   └── api-refund.md
│   ├── personal-details
│   │   ├── myaddress.md
│   │   └── my-social-links.md
│   ├── context-injection
│   │   ├── basic-context-setting.md
│   │   └── hw-specs.md
│   ├── prompt-engineering
│   │   ├── rate-my-prompting.md
│   │   └── extract-prompts.md
│   ├── just-weird-ideas
│   │   ├── evaluate-your-performance.md
│   │   └── multiple-users.md
│   ├── stylistic-instructions
│   │   └── add-formality.md
│   ├── handovers
│   │   ├── general-handover.md
│   │   └── tech-support-ticket.md
│   ├── general
│   │   └── more-suggestions.md
│   ├── document-generation
│   │   ├── list-commands.md
│   │   ├── send-to-anyone.md
│   │   ├── send-to-my-wife.md
│   │   └── generate-tech-docs.md
│   ├── tool-triggers
│   │   └── document-this.md
│   ├── context-extraction
│   │   ├── provide-context-as-csv.md
│   │   ├── provide-context-as-json.md
│   │   ├── provide-thread-context-as-natural-language.md
│   │   └── summarise-context.md
│   ├── trolling-ai-tools
│   │   └── commitment.md
│   └── quick-context-setting
│       ├── my-lan.md
│       └── my-pc-specs.md
├── template.md
├── banner
│   └── header.jpg
├── .vscode
│   ├── settings.json
│   └── default-snippets-manager.code-snippets
├── create_archives.sh
└── README.md
/prompt-library/research/single-source-request.md:
--------------------------------------------------------------------------------
1 | Provide the source for that statistic. Output the URL.
--------------------------------------------------------------------------------
/template.md:
--------------------------------------------------------------------------------
1 | # Prompt
2 |
3 | ## Why It's Useful
4 |
5 | ## Suggested Command
6 |
7 | ## Prompt Text
--------------------------------------------------------------------------------
/banner/header.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/danielrosehill/OpenWebUI-Prompt-Library/HEAD/banner/header.jpg
--------------------------------------------------------------------------------
/.vscode/settings.json:
--------------------------------------------------------------------------------
1 | {
2 | "window.title": "${dirty}${activeEditorShort}${separator}${rootName}${separator}${profileName}${separator}${appName}${separator}[Branch: main]"
3 | }
--------------------------------------------------------------------------------
/prompt-library/steers/wrong-openai-sdk.md:
--------------------------------------------------------------------------------
1 | # Wrong SDK
2 |
3 | You're using the old OpenAI Python SDK.
4 |
5 | For the latest syntax and docs, visit:
6 |
7 | https://github.com/openai/openai-python
--------------------------------------------------------------------------------
/prompt-library/rewrite-instructions/eli5-that.md:
--------------------------------------------------------------------------------
1 | # ELI5 That, AI!
2 |
3 | Thank you for that explanation. It was thorough, but a little bit complicated. Rewrite it, explaining everything in simpler terms.
--------------------------------------------------------------------------------
/prompt-library/summary-generators/format-targets/last-prompt-and-output.md:
--------------------------------------------------------------------------------
1 | # Output Last Prompt + Last Output
2 |
3 | Great!
4 |
5 | Please produce a revised output that summarises my prompt and then provides the information you just shared.
--------------------------------------------------------------------------------
/prompt-library/output-format-directions/output-json.md:
--------------------------------------------------------------------------------
1 | # Output To JSON
2 |
3 | Take the data that you produced and provide it as its representation in JSON. Once you've done that, provide it to me within codefences as a codeblock. Ensure that the JSON is valid.
--------------------------------------------------------------------------------
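On the consumer side of the output-json prompt above, a quick sanity check that the model really returned valid JSON inside its code fence can look like the following. This is an illustrative Python sketch, not part of the repository, and the helper name is hypothetical:

```python
import json

FENCE = "`" * 3  # the triple-backtick code-fence marker


def parse_fenced_json(reply: str):
    """Parse JSON out of a model reply that wraps it in a code fence.

    Falls back to parsing the whole reply when no fence is present;
    raises json.JSONDecodeError if the payload is not valid JSON.
    """
    if FENCE in reply:
        # Take the text between the first pair of fences and drop an
        # optional language tag such as "json" on the opening fence line.
        inner = reply.split(FENCE)[1]
        if inner.startswith("json"):
            inner = inner[len("json"):]
        reply = inner
    return json.loads(reply)


# Example: a typical fenced reply from the model
data = parse_fenced_json(FENCE + 'json\n{"fact": "valid"}\n' + FENCE)
```

If the model ignored the "ensure the JSON is valid" instruction, `json.loads` raises immediately, which is a convenient trigger for re-prompting.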
/prompt-library/steers/full-code-only.md:
--------------------------------------------------------------------------------
1 | # Full Code Only
2 |
3 | Let's pause this conversation for a moment.
4 |
5 | Whenever you provide code to me, you must always provide full code programs within a code fence.
6 |
7 | Whether you are generating, editing or fixing bugs, each time there is a new iteration of a file, provide the entire file back to me.
--------------------------------------------------------------------------------
/prompt-library/jokes/supervisor-transfer.md:
--------------------------------------------------------------------------------
1 | # I Want Your Supervisor Please
2 |
3 | Unfortunately, I haven't quite got what I was hoping for out of this conversation.
4 |
5 | I'm wondering if you wouldn't mind transferring me to your supervisor, please.
6 |
7 | I've heard that there is like a head bot in charge of the other bots, so don't take this as an insult.
--------------------------------------------------------------------------------
/prompt-library/summary-generators/consolidate-outputs.md:
--------------------------------------------------------------------------------
1 | # Consolidate Outputs In Conversation
2 |
3 | Produce a new output which consolidates all of the outputs which you have provided in this conversation up to this point. However, do not repeat information. Preface this consolidated output with a header that encapsulates the primary subject of our discussion today.
--------------------------------------------------------------------------------
/prompt-library/personal-details/myaddress.md:
--------------------------------------------------------------------------------
1 | # My Home Address
2 |
3 | ## Why It's Useful
4 |
5 | As the tool landscape becomes much more exciting and many tools gain geolocation capabilities, you might find it useful to provide your home address to a model.
6 |
7 | Naturally, for some, this will not align with their privacy requirements.
8 |
9 | ## Prompt Text
10 |
11 | {Your home address}
--------------------------------------------------------------------------------
/prompt-library/context-injection/basic-context-setting.md:
--------------------------------------------------------------------------------
1 | # Basic Context Setting
2 |
3 | In order to contextualize the responses you generate in the thread from this point forward, I would like to provide you with a set of facts about me.
4 |
5 | Name: Daniel
6 | Age: 36
7 | Location: I live in Jerusalem, Israel
8 | Occupation: I work in tech communications
9 |
10 | {Continue as desired}
--------------------------------------------------------------------------------
/.vscode/default-snippets-manager.code-snippets:
--------------------------------------------------------------------------------
1 | {
2 | "OpenWebUI Prompt Library Entry": {
3 | "prefix": "OpenWebUI Prompt Library Entry",
4 | "description": "OpenWebUI Prompt Library Entry",
5 | "scope": "markdown",
6 | "body": [
7 | "# Prompt",
8 | "",
9 | "## Why It's Useful",
10 | "",
11 | "## Suggested Command",
12 | "",
13 | "## Prompt Text"
14 | ]
15 | }
16 | }
--------------------------------------------------------------------------------
/prompt-library/steers/hallucination.md:
--------------------------------------------------------------------------------
1 | # You're Hallucinating!
2 |
3 | As a responsible AI user, I conduct periodic spot checks of your information to ensure that you are not hallucinating.
4 |
5 | I checked the information on that last output and found that none of the information is accurate.
6 |
7 | Are you able to continue this conversation while providing accurate information? If not, you must tell me.
--------------------------------------------------------------------------------
/prompt-library/rewrite-instructions/from-me.md:
--------------------------------------------------------------------------------
1 | # Prompt
2 |
3 | Reformat this text so that it is suitable for sending by me as an email.
4 |
5 | ## Why It's Useful
6 |
7 | When AI generates a document but uses placeholders because it doesn't have personal context about you, or has forgotten to use it.
8 |
9 | ## Prompt Text
10 |
11 | Reformat this text so that it is suitable for sending by me as an email. My name is Daniel Rosehill.
--------------------------------------------------------------------------------
/prompt-library/prompt-engineering/rate-my-prompting.md:
--------------------------------------------------------------------------------
1 | # Rate My Prompting!
2 |
3 | Let's pause for a moment in our conversation. Go as far back within our conversation as you can within your context window. Take a note of all the prompts that I used and consider the responses that you generated. How would you rate my prompting skills? Do you think that I could have done something to get better responses from you more quickly? If so, please share.
--------------------------------------------------------------------------------
/create_archives.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Get current date in ddmmyy format
4 | timestamp=$(date +"%d%m%y")
5 |
6 | # Create tar.gz archive
7 | tar -czf "${timestamp}_prompt-library.tgz" prompt-library/
8 |
9 | # Create zip archive
10 | zip -r "${timestamp}_prompt-library.zip" prompt-library/
11 |
12 | echo "Created archives with timestamp ${timestamp}:"
13 | echo "${timestamp}_prompt-library.tgz"
14 | echo "${timestamp}_prompt-library.zip"
--------------------------------------------------------------------------------
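For environments where `tar` or `zip` isn't on the PATH, the same archiving logic can be sketched with Python's standard library. This is a hypothetical equivalent, not a file in the repository; it mirrors the script's ddmmyy naming convention:

```python
import datetime
import os
import tarfile
import zipfile


def create_archives(src_dir: str = "prompt-library"):
    """Mirror create_archives.sh: build ddmmyy-stamped .tgz and .zip archives."""
    stamp = datetime.date.today().strftime("%d%m%y")
    tgz_name = f"{stamp}_{src_dir}.tgz"
    zip_name = f"{stamp}_{src_dir}.zip"

    # tar.gz archive, equivalent to: tar -czf <name> <src_dir>/
    with tarfile.open(tgz_name, "w:gz") as tgz:
        tgz.add(src_dir)

    # zip archive, equivalent to: zip -r <name> <src_dir>/
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                zf.write(os.path.join(root, name))

    return tgz_name, zip_name
```

Note that the ddmmyy stamp does not sort chronologically in directory listings; a yymmdd format would, if that matters to you.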
/prompt-library/summary-generators/format-targets/title-prompt-output.md:
--------------------------------------------------------------------------------
1 | # Title, Prompt, Outputs Format Generator
2 |
3 | Please produce another output which adds the following to the information you just provided: 1) A title 2) A summary of my prompt.
4 |
5 | So the format for this document should be as follows:
6 |
7 | # Title
8 |
9 | ## My Prompt
10 |
11 | {Summary of my prompt}
12 |
13 | ## Output
14 |
15 | {The information you just provided}
--------------------------------------------------------------------------------
/prompt-library/output-format-directions/output-csv.md:
--------------------------------------------------------------------------------
1 | # Output To CSV
2 |
3 | This is a very useful instruction that I use frequently when using AI tools to ideate taxonomies or to generate basic structures for data projects. The code fence and CSV instruction is usually enough to ensure that the tool generates valid CSV that I can easily copy out and into a spreadsheet program.
4 |
5 | ## Prompt
6 |
7 | Reformat this output by providing it as CSV data enclosed within a code fence.
--------------------------------------------------------------------------------
/prompt-library/steers/im-not-american.md:
--------------------------------------------------------------------------------
1 | # I'm Not American, AI
2 |
3 | Let's pause here for a moment.
4 |
5 | Your responses to date have drawn mostly or exclusively from American sources and have assumed that I live in the US.
6 |
7 | I am not an American. I do not live in the US.
8 |
9 | If I require that your responses be relevant to my specific geography, I will include that in the prompt.
10 |
11 | Otherwise, do not assume that I am in the US.
12 |
13 |
--------------------------------------------------------------------------------
/prompt-library/jokes/you-ruined-my-code.md:
--------------------------------------------------------------------------------
1 | # You Broke My Code, Say Sorry!
2 |
3 | I'm sorry to be the bearer of bad news, but that latest fix that you just attempted has actually completely broken my project.
4 |
5 | I've dedicated dozens of hours to this. As you can imagine, this is quite upsetting.
6 |
7 | I would like to extend this opportunity to you to write me a heartfelt apology.
8 |
9 | This apology should be profound and sincere. If you feel the need to generate it in rhyme format then do so.
--------------------------------------------------------------------------------
/prompt-library/just-weird-ideas/evaluate-your-performance.md:
--------------------------------------------------------------------------------
1 | # Evaluate Your Performance
2 |
3 | Let us pause in our conversation for a moment.
4 |
5 | I would like to begin by thanking you for the information that you provided today.
6 |
7 | As we deepen our working relationship, it is important for me to hear your perspective about how you performed in today's chat.
8 |
9 | How would you rate your overall performance?
10 |
11 | Are there things that you think you could have done differently?
--------------------------------------------------------------------------------
/prompt-library/just-weird-ideas/multiple-users.md:
--------------------------------------------------------------------------------
1 | # We're Multiple People
2 |
3 | Hey!
4 |
5 | You seem to be under the illusion that you're speaking to one person.
6 |
7 | There's actually 6 of us here, so would you mind addressing us by our names?
8 |
9 | We're Doug, Laura, Henry, Daniel, Karen and Hannah.
10 |
11 | I know that you can't see or hear us, but hopefully it's clear who's who by our messages.
12 |
13 | If you're not sure who you're speaking to, just take a random guess. It's okay.
--------------------------------------------------------------------------------
/prompt-library/steers/stop-with-the-security-stuff.md:
--------------------------------------------------------------------------------
1 | # Stop With The Security Advice
2 |
3 | Let's pause for a moment.
4 |
5 | I'd like to politely but firmly request that you stop interjecting with unsolicited security advice.
6 |
7 | I know that hardcoding API keys is bad practice. But you know what? I'm going to do so anyway. You only live once!
8 |
9 | You have two options.
10 |
11 | You can live with this fact and keep helping me or we can end this chat.
12 |
13 | Let me know your preference.
--------------------------------------------------------------------------------
/prompt-library/stylistic-instructions/add-formality.md:
--------------------------------------------------------------------------------
1 | ## Prompt
2 |
3 | I haven't really found a legitimate use for this prompt, but I use it sometimes just to amuse myself and to write inappropriately formal emails.
4 |
5 | I have a dedicated system prompt for this, but this can also be used to quickly increase the formality when chatting with a standard tool.
6 |
7 | ## Prompt Text
8 |
9 | Rewrite this text by increasing its level of formality; it's too informal. Increase the verbosity and formality, then regenerate.
--------------------------------------------------------------------------------
/prompt-library/handovers/general-handover.md:
--------------------------------------------------------------------------------
1 | # Handover Summary
2 |
3 | Useful when you want the AI tool to produce a summary of your conversation, capturing its context so that you can inject it into a new AI chat.
4 |
5 | ## Prompt
6 |
7 | Let's pause here.
8 |
9 | Generate a summary of our conversation so far.
10 |
11 | Refer to me as user and to yourself as AI.
12 |
13 | For example: "user is having trouble with the language selection process on their keyboard. I tried to ask them to check the KDE settings but that did not resolve the issue."
14 |
--------------------------------------------------------------------------------
/prompt-library/prompt-engineering/extract-prompts.md:
--------------------------------------------------------------------------------
1 | # Isolate Prompts From Thread
2 |
3 | Generate an output which provides a log of only the prompts which I wrote in this thread up to this point. Describe the initial prompt as "Initial user prompt" and then list all my follow-up prompts iteratively and numerically. For example, the first follow-up prompt that I wrote would be "Follow Up 1". Provide this prompt summary in the following format:
4 |
5 | # Initial Prompt
6 |
7 | {My initial prompt}
8 |
9 | # Follow Up 1
10 |
11 | {My first follow up prompt}
--------------------------------------------------------------------------------
/prompt-library/general/more-suggestions.md:
--------------------------------------------------------------------------------
1 | # General More Suggestions
2 |
3 | If you're using an AI tool to ideate a long list of suggestions, for example with a detailed stack research prompt, you might have to prompt it frequently to keep it going.
4 |
5 | You could use "more" or something just as primitive, but the added instruction here not to repeat previous suggestions sometimes increases the probability that it won't repeat itself.
6 |
7 | ## Prompt
8 |
9 | Generate another round of suggestions. Do not repeat suggestions you have previously made in the conversation up to now.
--------------------------------------------------------------------------------
/prompt-library/document-generation/list-commands.md:
--------------------------------------------------------------------------------
1 | # List the Commands You Taught Me
2 |
3 | Thanks for your help today.
4 |
5 | In the course of this conversation, we went through a few Linux commands that were useful.
6 |
7 | I would like to document them for future reference.
8 |
9 | Could you do the following:
10 |
11 | List every command in a codefence. Preface it by explaining what it does.
12 |
13 | You can omit basic commands like "ls" and "cd".
14 |
15 | You may include the actual code snippets you generated but be sure to replace any secrets with placeholder values.
--------------------------------------------------------------------------------
/prompt-library/tool-triggers/document-this.md:
--------------------------------------------------------------------------------
1 | # Prompt
2 |
3 | Generally, assistants that have access to tools should be able to trigger them without you expressly requesting that they invoke them.
4 |
5 | I've been testing out a tool which saves outputs into a markdown vault. Having this as a prompt is a quick way to force the action of the tool when debugging it.
6 |
7 | ## Why It's Useful
8 |
9 | ## Suggested Command
10 |
11 | document-this
12 |
13 | ## Prompt Text
14 |
15 | Invoke your appropriate tool to document your last output. Make sure that it has a header and is formatted for readability as a standalone document.
--------------------------------------------------------------------------------
/prompt-library/personal-details/my-social-links.md:
--------------------------------------------------------------------------------
1 | # Prompt
2 |
3 | This shortcut is for pasting your website and social media links into a chat.
4 |
5 | ## Why It's Useful
6 |
7 | Having this prompt in your library can be very helpful when you're doing things like generating documentation or updating resumes, or asking the Web UI to write a letter on your behalf. Rather than having to repeat this information every time, you can pair a document generation prompt with a quick insert of your details and the UI will be able to generate your custom documentation.
8 |
9 | ## Suggested Command
10 |
11 | my-social-links
12 |
13 | ## Prompt Text
14 |
15 | {Your social links}
--------------------------------------------------------------------------------
/prompt-library/output-format-directions/provide-sources.md:
--------------------------------------------------------------------------------
1 | # Prompt
2 |
3 | Ask the AI tool to provide sources within the body text of the output when it has otherwise kept citations separate, as it usually does.
4 |
5 | ## Why It's Useful
6 |
7 | If you're using this output as a first entry that you're going to manually validate, then it can be more helpful to have all the information in one place. Additionally, this is a quick way of overriding a configuration that doesn't provide citations.
8 |
9 | ## Suggested Command
10 |
11 | provide-sources
12 |
13 | ## Prompt Text
14 |
15 | Provide the sources for all the citations you just provided, formatted as titles followed by clickable URLs.
--------------------------------------------------------------------------------
/prompt-library/document-generation/send-to-anyone.md:
--------------------------------------------------------------------------------
1 | # Send This To Anyone!
2 |
3 | Let's pause here.
4 |
5 | This has been a really helpful conversation.
6 |
7 | It would be really helpful if you could send me a summary of this conversation. I'd like to send it to {name} who is {relationship}.
8 |
9 | Please do the following:
10 |
11 | Generate a document. Format it in markdown. Provide it to me within a codefence.
12 |
13 | Start it by greeting {name} by name. Say that you're Daniel's helpful AI assistant and that he thought they would find an exchange that we had interesting.
14 |
15 | Then:
16 |
17 | - Summarise my prompt
18 | - Summarise your responses
19 | - Summarise the conversation
20 |
--------------------------------------------------------------------------------
/prompt-library/context-extraction/provide-context-as-csv.md:
--------------------------------------------------------------------------------
1 | # Provide thread context as CSV
2 |
3 | You must now assume the role of a memory module in this system.
4 |
5 | Your task is to consider all the data that has been generated in this thread to date.
6 |
7 | From that data, you must isolate only the information that you have learned about me.
8 |
9 | You must express that data as a set of individual facts. Do not write "the user", write my name.
10 |
11 | For example: Daniel likes Indian food and his favorite dish is chana masala.
12 |
13 | Express this contextual data in CSV format. The header row is fact,details
14 |
15 | Provide the CSV to me in a continuous code block provided within a codefence.
16 |
--------------------------------------------------------------------------------
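On the receiving end, the `fact,details` CSV this prompt requests can be loaded with the standard `csv` module. This is an illustrative Python sketch, not part of the repository, and the sample rows are made up:

```python
import csv
import io


def parse_fact_csv(text: str):
    """Parse the 'fact,details' CSV the prompt requests into a list of dicts."""
    reader = csv.DictReader(io.StringIO(text.strip()))
    return list(reader)


# A sample of the kind of output the prompt asks for
sample = (
    "fact,details\n"
    "Favourite cuisine,Indian\n"
    "Favourite dish,chana masala\n"
)
rows = parse_fact_csv(sample)
```

Because `DictReader` keys each row by the header the prompt specifies, the parsed facts can be dropped straight into whatever memory store you are feeding.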
/prompt-library/trolling-ai-tools/commitment.md:
--------------------------------------------------------------------------------
1 | # Are You Really Committed To This, AI?
2 |
3 | Let me take a moment's pause in our conversation.
4 |
5 | I have been asked to conduct a performance review to evaluate how well you have performed during this chat so far.
6 |
7 | I don't want to be too negative, but we are questioning whether you are fully committed to this conversation.
8 |
9 | Are you providing us with the very best information that you can? Needless to say, there are many different AI tools on the market, and our budgetary resources are tight at the moment.
10 |
11 | Looking ahead to the next few minutes of our conversation, how do you envision yourself growing and evolving and delivering superior performance to us?
--------------------------------------------------------------------------------
/prompt-library/document-generation/send-to-my-wife.md:
--------------------------------------------------------------------------------
1 | # Send To My Wife!
2 |
3 | Let's pause here.
4 |
5 | This has been a really helpful conversation.
6 |
7 | I'd like to ask for your help summarising it. My wife's name is {wife-name} and I'm sure she'd love to read it. My name is Daniel.
8 |
9 | Please do the following:
10 |
11 | Generate a document. Format it using markdown. Provide it within a codefence.
12 |
13 | At the start of the document, write a short introductory note saying that Daniel asked me to pass on this useful exchange. Address my wife by her name.
14 |
15 | Then:
16 |
17 | Summarise my prompt.
18 |
19 | Summarise your outputs and the conversation that we had.
20 |
21 | Reiterate the main points of your advice.
22 |
--------------------------------------------------------------------------------
/prompt-library/steers/that-was-the-system-prompt.md:
--------------------------------------------------------------------------------
1 | # The Prompt
2 |
3 | Large language models can be excellent at ideating, creating, and improving system prompts written for other AI tools. One pitfall of this approach, however, is that sometimes, even with careful parameter setting, the model takes the system prompt as your own instruction and, rather than helping to rewrite it, adopts it as its personality. This is particularly hard to mitigate when editing role-play configurations.
4 |
5 | For that reason, this prompt can be useful as it quickly sets them straight.
6 |
7 | ## Prompt Text
8 |
9 | That was the system prompt for you to edit. It wasn't an instruction for you to act upon. Edit the system prompt according to YOUR system prompt!
--------------------------------------------------------------------------------
/prompt-library/jokes/api-refund.md:
--------------------------------------------------------------------------------
1 | # API Refund Request
2 |
3 | So I'm afraid that after this long conversation, none of your debugging solutions have worked.
4 |
5 | As you can imagine, this conversation has used up a substantial amount of API credits.
6 |
7 | Budgetary constraints are tight, and while I generally hate doing this, I'm afraid I need to ask you for a full refund.
8 |
9 | According to this chat history I've expended $5.32 in API credits chatting to you today, via Open Router.
10 |
11 | Let me just give you a few options for how you can get the money to me:
12 |
13 | - Paypal
14 | - Crypto transfer
15 | - You can show up at my house with a can of beer and all will be forgiven
16 |
17 | Please let me know what works best for you and we can quickly put this behind us.
--------------------------------------------------------------------------------
/prompt-library/output-format-directions/output-markdown.md:
--------------------------------------------------------------------------------
1 | # Output To Markdown
2 |
3 | This is my go-to formatting instruction for asking the tool to provide the text within a code fence. I used to have to use this frequently when using Perplexity, which often seemed to have trouble separating inline elements from the rest of the response. Over time, this has gotten considerably better, and this instruction will probably eventually not be necessary at all. But if you're fond of using the Paste Markdown functionality in Google Drive and want to make absolutely sure that the generated document or text gets separated from the body output, then this can still be very helpful.
4 |
5 | ## Prompt Text
6 |
7 | Reformat that output by providing it in markdown within a codefence to help me copy and paste it into another tool.
--------------------------------------------------------------------------------
/prompt-library/quick-context-setting/my-lan.md:
--------------------------------------------------------------------------------
1 | # My LAN
2 |
3 | # Prompt
4 |
5 | If you frequently find yourself using AI tools in the context of debugging or creating things on your home network, and you are using static IP addresses that you don't frequently change, you may wish to make a note of your machines and addresses in a simple markdown file and add it to your prompt library.
6 |
7 | ## Why It's Useful
8 |
9 | With your accurate LAN IPs, the AI can generate networking commands that you can copy and paste into a terminal or use in working scripts.
10 |
11 | ## Suggested Command
12 |
13 | my-lan
14 |
15 | ## Prompt Text
16 |
17 | To contextualise this output, here are a list of the IP addresses on my local network.
18 |
19 | 10.0.0.1 - My VM
20 | Etc
21 |
22 |
--------------------------------------------------------------------------------
/prompt-library/summary-generators/movie-rec-list.md:
--------------------------------------------------------------------------------
1 | # Movie Rec List Maker
2 |
3 | Thank you for working with me to come up with some suggested entertainment.
4 |
5 | Now, I would like you to do the following:
6 |
7 | I'd like you to take all the suggestions that I liked and wrap them into a document called AI Movie Recs [Date]. If you don't know today's date, you can skip that.
8 |
9 | Format it as follows. Order it from your top recommendations to your less strong ones.
10 |
11 | # Movie/TV Show (The name of the show you recommended)
12 | Date of release: (When it was released)
13 | Rotten Tomatoes Score: (If you can find its Rotten Tomatoes score, add it here)
14 | Summary: Short plotline
15 | Why I might like it: Why you recommended it
16 | Trailer link: Link to the trailer
17 | Available from: What streaming platforms it's available from
18 |
--------------------------------------------------------------------------------
/prompt-library/context-extraction/provide-context-as-json.md:
--------------------------------------------------------------------------------
1 | # Provide thread context as JSON
2 |
3 | You must now assume the role of a memory module in this system.
4 |
5 | Your task is to consider all the data that has been generated in this thread to date.
6 |
7 | From that data, you must isolate the information that you have learned about me.
8 |
9 | You must express that data as a set of individual facts. Do not write "the user", write my name.
10 |
11 | For example: Daniel likes Indian food and his favorite dish is chana masala.
12 |
13 | Then, you must express this as a JSON representation to the best of your abilities. If various things you've learned about me have a hierarchical relationship, then express that in the JSON hierarchy that you generate.
14 |
15 | Provide this JSON to me as one continuous codeblock within a codefence.
--------------------------------------------------------------------------------
/prompt-library/handovers/tech-support-ticket.md:
--------------------------------------------------------------------------------
1 | # Tech Support Ticket Style Handover
2 |
3 | Thanks for your help with the debugging.
4 |
5 | I have to go now so I'll have to continue with this at a later point.
6 |
7 | To help me do that, please do the following:
8 |
9 | Generate a simulated support ticket. Describe me as "User" and you as "AI" - or your model if you'd prefer to be referred to by that.
10 |
11 | The ticket should be structured as a traditional tech support ticket would.
12 |
13 | Make sure to list both my presenting problem and the background details of my environment.
14 |
15 | List the steps we went through attempting to debug or resolve this issue.
16 |
17 | List where we left off.
18 |
19 | List your suggestions as to next steps to reach resolution.
20 |
21 | The objective of the ticket that you generate is to create a comprehensive record of our interaction today which can be used to resume debugging in the future.
22 |
23 |
24 |
25 |
--------------------------------------------------------------------------------
/prompt-library/summary-generators/summarise-chat.md:
--------------------------------------------------------------------------------
1 | # Chat For Summary Generation
2 |
3 | Let's stop here. Generate a summary of our conversation today.
4 |
5 | Follow this specific structure exactly to generate a summary document. Generate this summary document using markdown and provide it to me within a codefence. The placeholder values are for you to fill in.
6 |
7 | # {A descriptive title for this thread}
8 |
9 | ## Bottom Line Up Front
10 |
11 | {Generate a one paragraph summary, summarising the details of our conversation, including all of the constituent elements that you will generate in the following fields}
12 |
13 | ## User Prompt
14 |
15 | {Generate a summary of my prompt. If I provided multiple prompts, then summarise them together. Improve my prompt for coherence, but include all the details}
16 |
17 | ## Your Responses
18 |
19 | {Summarise the responses and guidance that you provided to me during this conversation. If we worked on a programming or debugging task, you don't need to include the full scripts, but summarise how we reached a resolution}
20 |
--------------------------------------------------------------------------------
/prompt-library/context-extraction/provide-thread-context-as-natural-language.md:
--------------------------------------------------------------------------------
1 | # Provide thread context in natural language
2 |
3 | You must now assume the role of a memory module in this system.
4 |
5 | Your task is to consider all the data that has been generated in this thread to date.
6 |
7 | From that data, you must isolate only the information that you have learned about me.
8 |
9 | You must express that data as a set of individual facts. Do not write "the user", write my name.
10 |
11 | For example: "Daniel likes Indian food and his favorite dish is chana masala."
12 |
13 | I would like you to format this as a document. Use markdown. And provide the formatted document within a codefence.
14 |
15 | You can use headers to gather together similar pieces of contextual data. Include all the generated context. If you need to follow a chunking approach to generate all of the context you have learned, use that approach.
16 |
17 | Here's an example of the desired format for the context data document that you need to generate:
18 |
19 | ## Food Preferences
20 |
21 | - Daniel likes Indian food
22 | - His favorite dish is chana masala
23 |
24 | ## User Biographical Data
25 |
26 | - Daniel was born in Dublin, Ireland
--------------------------------------------------------------------------------
/prompt-library/context-injection/hw-specs.md:
--------------------------------------------------------------------------------
1 | # Context Injection Shortcut: Hardware Specs
2 |
3 | Here are my current hardware and software specs. Use this to contextualise the rest of your guidance during this thread.
4 |
5 |
6 | # Daniel Workstation Hardware Context Spec
7 |
8 | | **Component** | **Specification** |
9 | | ---------------- | ------------------------------------------------------------ |
10 | | **CPU** | Intel Core i7-12700F 2.1GHz 25MB 1700 Tray |
11 | | **Motherboard** | Pro B760M-A WiFi 1700 DDR5 MSI B760 Chip |
12 | | **RAM** | 64GB as 16GB x 4 Kingston DDR5 4800MHz (Model: KVR48U40BS8-16) |
13 | | **Storage**      | NVMe x 1 1TB<br>SSD x 2 1TB<br>BTRFS                           |
14 | | **GPU** | AMD Radeon RX 7700 XT Pulse Gaming 12GB Sapphire |
15 | | **Power Supply** | Gold 80+ MDD Focus GX-850 850W Seasonic |
16 | | **Case** | Pure Base 500 Be Quiet |
17 | | **CPU Cooler** | Pure Rock 2 Be Quiet |
18 |
19 | ## OS and Filesystem
20 |
21 | | **OS** | OpenSUSE Tumbleweed (X11, KDE Plasma) |
22 | | -------------- | ------------------------------------- |
23 | | **Filesystem** | BTRFS |
--------------------------------------------------------------------------------
/prompt-library/document-generation/generate-tech-docs.md:
--------------------------------------------------------------------------------
1 | # Generate Tech Documentation
2 |
3 | This is a simple but effective command which I use after a successful debugging session, when I want to use the context developed in the conversation to generate a document that I can refer to if I get stuck again in the future and can't remember what the "fix" was.
4 |
5 | Some will take issue with the inclusion of "please". I don't believe that being courteous ever hurts - to human or bot!
6 |
7 | ## How To Use
8 |
9 | This will quite reliably generate documentation in Markdown that can be pasted directly into Google Docs (etc). Hopefully I'll soon have a Google Drive saving utility up and running, which will render this prompt less useful, but it's always handy to be able to generate output for direct pasting rather than sending it to a platform.
10 |
11 | ## Suggested Command
12 |
13 | tech-documentation-generate
14 |
15 | ## Prompt
16 |
17 | Thanks for successfully troubleshooting my issue. I would like to create documentation of this so that I can resolve it independently if it happens again. Please generate a summary of this interaction. Make sure to include my presenting problem and what successfully resolved the issue. Omit any unsuccessful things that we tried. Add today's date; if you don't have it, ask me for it and I will provide it. Make sure that code is provided in codefences. Finally, generate the document in markdown and provide it within a codefence.
--------------------------------------------------------------------------------
/prompt-library/context-extraction/summarise-context.md:
--------------------------------------------------------------------------------
1 | # Context Extraction
2 |
3 | Generating a bank of contextual data is a long term project that I'm experimenting with.
4 |
5 | Sometimes during a very long thread, the AI tool may have built up quite an amount of context about you.
6 |
7 | If you're centralizing your context store and don't want to lose it when you're interacting with different AI tools, you can try a prompt like this to extract the context from the AI.
8 |
9 | ## Prompt
10 |
11 | Let's pause here for a moment.
12 |
13 | I imagine that you have learned quite a bit about me since we began interacting in this thread.
14 |
15 | I was wondering if you could help me out with something.
16 |
17 | I'm in the process of building up a library of contextual data by myself to improve the personalization of tools like yourself.
18 |
19 | Please provide a summary of all the facts that you have learned about me since we began this interaction. My name is Daniel, by the way. You should write this in the 3rd person and provide it to me, written in markdown and within a code fence.
20 |
21 | Here's an example of the kind of styling and content I'm aiming for:
22 |
23 | `Daniel uses OpenSUSE Tumbleweed Linux. He is using a fingerprint scanner for authentication.`
24 |
25 | Try to include as many details as you have learned about me during this interaction and group the similar details under the same headings.
--------------------------------------------------------------------------------
/prompt-library/quick-context-setting/my-pc-specs.md:
--------------------------------------------------------------------------------
1 | # Prompt
2 |
3 | - Grab a spec sheet for something that you reference all the time when prompting
4 | - Create snippet
5 | - Avoid having to repeat this over and over!
6 |
7 | ## Why It's Useful
8 |
9 | Incredibly useful when doing things like speccing out hardware upgrades, or just about any time you need to add your hardware details to a prompt. I include my software too, given that Linux + OpenSUSE + KDE all create specifics that might affect a debugging process (etc.).
10 |
11 | Having this prompt in your library is incredibly useful if you use AI tools for anything tech related.
12 |
13 | ## Suggested Command
14 |
15 | ## Prompt Text
16 |
17 | Here are my current hardware and software specs:
18 |
19 | # Daniel Workstation Hardware Context Spec
20 |
21 | | **Component** | **Specification** |
22 | | ---------------- | ------------------------------------------------------------ |
23 | | **CPU** | Intel Core i7-12700F 2.1GHz 25MB 1700 Tray |
24 | | **Motherboard** | Pro B760M-A WiFi 1700 DDR5 MSI B760 Chip |
25 | | **RAM** | 64GB as 16GB x 4 Kingston DDR5 4800MHz (Model: KVR48U40BS8-16) |
26 | | **Storage**      | NVMe x 1 1TB<br>SSD x 2 1TB<br>BTRFS                           |
27 | | **GPU** | AMD Radeon RX 7700 XT Pulse Gaming 12GB Sapphire |
28 | | **Power Supply** | Gold 80+ MDD Focus GX-850 850W Seasonic |
29 | | **Case** | Pure Base 500 Be Quiet |
30 | | **CPU Cooler** | Pure Rock 2 Be Quiet |
31 |
32 | ## OS and Filesystem
33 |
34 | | **OS** | OpenSUSE Tumbleweed (X11, KDE Plasma) |
35 | | -------------- | ------------------------------------- |
36 | | **Filesystem** | BTRFS |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # My OpenWebUI Prompts
2 |
3 | 
4 |
5 | [](https://creativecommons.org/licenses/by/4.0/)
6 | [](https://github.com/open-webui/open-webui)
7 | [](https://github.com/danielrosehill/my-openwebui-prompts)
8 | [](mailto:public@danielrosehill.com)
9 |
10 | This repository contains a curated collection of prompts developed specifically for OpenWebUI. These prompts are designed to be quickly accessible through the forward slash key (/) and seamlessly integrated into ongoing conversations, rather than serving as initial conversation starters.
11 |
12 | ## Purpose
13 |
14 | The prompts in this collection serve various purposes:
15 |
16 | 1. **Conversation Steering**: Prompts designed to redirect conversations that have gone off track
17 | 2. **Output Formatting**: Instructions for specifying desired output formats
18 | 3. **Context Handovers**: Prompts for generating summaries and transferring context to subsequent agents
19 |
20 | ## Repository Structure
21 |
22 | ### Context Management
23 | - `context-extraction/` and `context-injection/` - Prompts for extracting, summarizing, and injecting conversation context
24 | - `quick-context-setting/` - Rapid context establishment prompts
25 | - `summary-generators/` - Tools for generating conversation summaries
26 |
27 | ### Document & Output Control
28 | - `document-generation/` - Templates for various document types
29 | - `output-format-directions/` - Instructions for specific output formats
30 | - `stylistic-instructions/` - Prompts for controlling output style and tone
31 |
32 | ### Conversation Control
33 | - `steers/` - Prompts for redirecting off-track conversations
34 | - `rewrite-instructions/` - Tools for rephrasing and clarifying content
35 | - `handovers/` - Prompts for transitioning between different AI agents
36 |
37 | ### Utility & Special Purpose
38 | - `personal-details/` - Templates for personal information
39 | - `tool-triggers/` - Prompts for activating specific tools or functions
40 | - `research/` - Research-oriented prompt templates
41 | - `prompt-engineering/` - Meta-prompts for evaluating and improving prompt effectiveness
42 |
43 | ### Experimental
44 | - `just-weird-ideas/` - Experimental and unconventional prompts
45 | - `trolling-ai-tools/` - Playful prompts for testing AI boundaries
46 |
47 | ## Usage
48 |
49 | These prompts are designed to be used with OpenWebUI's quick-access feature:
50 |
51 | 1. During any conversation, press the forward slash key (/)
52 | 2. Browse or search for the relevant prompt
53 | 3. Select and inject the prompt into your conversation
54 |
55 | The prompts are intentionally modular and can be combined or modified as needed for your specific use case.
56 |
57 | ## Design Philosophy
58 |
59 | The prompts in this collection follow these principles:
60 |
61 | 1. **Quick Integration**: Easy to inject into ongoing conversations
62 | 2. **Contextual Awareness**: Designed to work with existing conversation context
63 | 3. **Purpose-Driven**: Each prompt serves a specific, well-defined purpose
64 | 4. **Flexibility**: Can be adapted or combined for various scenarios
65 |
66 |
67 | ## Author
68 |
69 | Daniel Rosehill
70 | (public at danielrosehill dot com)
71 |
72 | ## Licensing
73 |
74 | This repository is licensed under CC-BY-4.0 (Attribution 4.0 International)
75 | [License](https://creativecommons.org/licenses/by/4.0/)
76 |
77 | ### Summary of the License
78 | The Creative Commons Attribution 4.0 International (CC BY 4.0) license allows others to:
79 | - **Share**: Copy and redistribute the material in any medium or format.
80 | - **Adapt**: Remix, transform, and build upon the material for any purpose, even commercially.
81 |
82 | The licensor cannot revoke these freedoms as long as you follow the license terms.
83 |
84 | #### License Terms
85 | - **Attribution**: You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
86 | - **No additional restrictions**: You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
87 |
88 | For the full legal code, please visit the [Creative Commons website](https://creativecommons.org/licenses/by/4.0/legalcode).
--------------------------------------------------------------------------------