├── README.md
└── privacy-policy.md

/README.md:
--------------------------------------------------------------------------------

# Let Me Prompt It For You - Fast, Smart Prompting for Devs

Launch:

- https://x.com/janwilmake/status/1924471476305932741
- https://news.ycombinator.com/item?id=44030556
- https://www.producthunt.com/posts/let-me-prompt-it-for-you-lmpify-com

Ask anything about the docs: [![](https://b.lmpify.com)](https://www.lmpify.com/httpsuithubcomj-u4l8lj0)

# Why lmpify?

lmpify offers several advantages over traditional AI assistants like Claude.

## Feature Comparison

| Feature              | lmpify                     | Claude                                                  | Why it matters                                                                                                |
| -------------------- | -------------------------- | ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
| **Startup Speed**    | ✓ Instant (<100ms)         | ✗ Slow                                                  | lmpify is optimised for speed and time to first token (TTFT). Results are cached.                              |
| **Performance**      | ✓ Snappy & reliable        | ✗ Often slow & buggy                                    | Consistent, responsive experience without frustrating delays or crashes                                        |
| **URLs as Context**  | ✓ Built-in                 | ✗ Only with MCP                                         | Seamlessly reference external textual files (md, json, etc) without having to tool-call them first             |
| **HTML Rendering**   | ✓ Full capability          | ✗ Limited (no scripts)                                  | Complete HTML renders with scripts and full-screen support. Great for prototyping.                             |
| **Sharing**          | ✓ One-click                | ✗ Multiple steps, buggy                                 | Instantly share prompts and results with a simple URL                                                          |
| **Token Efficiency** | ✓ Incentivizes editing     | ✗ Designed as chat with history                         | Encourages editing the prompt rather than replying, which uses fewer tokens and gets better results from the same model |
| **Long Output**      | ✓ Up to limit of model API | ✗ Limits at ±8k output tokens, continue button is buggy | Long outputs are useful when generating long files                                                             |

# DOCS

## URL Context

Only URLs that return textual results are supported for now. HTML is disabled by design to incentivize users to improve their context.

Some recommended contexts are:

- https://xymake.com for X threads
- https://uithub.com for GitHub context with filters applied
- https://openapisearch.com for APIs
- https://arxivmd.org for ArXiv papers

Are you a high-agency product engineer? Join the [Context Building Club](https://contextbuilding.com) to get access to the most advanced and high-agency prompting techniques before everyone else.

## Result format

Any lmpify URL is available as HTML, JSON, or Markdown. Browsers get HTML by default, while developer tools and APIs like `curl` or `fetch` default to Markdown. You can control the output by appending `.json`, `.md`, or `.html` to the URL, or by specifying an `Accept` header.

It's also possible to get a subset of the markdown through `?key=result|context|prompt`.

Examples:

- in JavaScript, `fetch("https://lmpify.com/httpsuithubcomj-m8tfk00").then(res=>res.text())` returns markdown
- in your terminal, `curl https://lmpify.com/httpsuithubcomj-m8tfk00` returns markdown
- in the browser, https://lmpify.com/httpsuithubcomj-m8tfk00 returns the UI (HTML), but it's easy to get markdown using https://lmpify.com/httpsuithubcomj-m8tfk00.md or JSON using https://lmpify.com/httpsuithubcomj-m8tfk00.json
- if you need only the result or another part, you can use https://lmpify.com/httpsuithubcomj-m8tfk00.md?key=result

## Chat Completions

> [!IMPORTANT]
> Coming soon

Every prompt is made available as a [Chat Completions endpoint](https://platform.openai.com/docs/guides/text-generation) at `POST https://lmpify.com/[id]/chat/completions`. This means you can use it by setting the base URL in the [OpenAI SDK](https://platform.openai.com/docs/libraries) (or other Chat Completions SDKs) to https://lmpify.com/[id], e.g. https://lmpify.com/httpsuithubcomj-m8tfk00. Behind the scenes, this uses the context as the system prompt and the prompt as the first 'model' message.

To use a model other than the default model, specify the `model` parameter. For available models, check [models.json](models.json).
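
Once this ships, usage through the OpenAI SDK might look roughly like the sketch below. The base URL pattern comes from the paragraph above; treating the lmpify `access_token` (see "Getting your API key" below) as the API key, and the `MODEL_ID` placeholder, are assumptions rather than documented behavior:

```js
// Hedged sketch of the Chat Completions usage described above (the feature is
// marked "coming soon"). Assumptions, not documented behavior: the lmpify
// `access_token` works as the bearer API key, and "MODEL_ID" stands in for an
// entry from models.json.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://lmpify.com/httpsuithubcomj-m8tfk00", // the prompt id as base URL
  apiKey: process.env.LMPIFY_ACCESS_TOKEN, // your access_token value
});

const completion = await client.chat.completions.create({
  model: "MODEL_ID", // placeholder: pick a model from models.json
  messages: [{ role: "user", content: "Summarize the context in one paragraph." }],
});

console.log(completion.choices[0].message.content);
```

Since the endpoint follows the Chat Completions shape, any other Chat Completions SDK that lets you override the base URL should work the same way.
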
## Getting your API key

You can get your API key by opening the developer console in your browser and finding the `access_token` value in your cookie storage. This token serves as authentication for any API. There are currently no scopes and no way to rotate your key, so be careful: this key is meant to stay private, and if it gets compromised, your entire balance may be spent by third parties.

## `npx mdapply` CLI

> [!IMPORTANT]
> Beta

You can use [mdapply](https://github.com/janwilmake/mdapply) to apply a response output to your local filesystem. Just run `curl "https://lmpify.com/[id]?key=result" -o apply.md && npx mdapply ./apply.md`

Try it yourself (this one will create `cli.js` in your cwd):

```sh
curl "https://lmpify.com/httpsuithubcomj-m8tfk00?key=result" -o apply.md && npx mdapply ./apply.md
```

## 'Prompt it' buttons

Allowing users to easily prompt things about your open source library or template can really reduce the friction for developers to adopt it.

- Point to specific contexts that are useful for working with (parts of) your library
- Show how your project was made

You can link from your README, docs, or website to a prompt button using the following code (the HTML is simply an anchor wrapping the badge image, equivalent to the Markdown below):

HTML:

```html
<a href="https://www.lmpify.com/YOUR_ID">
  <img src="https://b.lmpify.com/YOUR_TEXT" />
</a>
```

Markdown:

```md
[![](https://b.lmpify.com/YOUR_TEXT)](https://www.lmpify.com/YOUR_ID)
```

Example:

```md
[![](https://b.lmpify.com/FAQ)](https://www.lmpify.com/httpsuithubcomj-u4l8lj0)
```

Result:

[![](https://b.lmpify.com/FAQ)](https://www.lmpify.com/httpsuithubcomj-u4l8lj0)
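
If you generate your docs programmatically, a small helper can stamp these buttons out. This is a hedged sketch based on the badge/link pattern above; whether b.lmpify.com renders arbitrary URL-encoded label text is an assumption (the documented example only uses the plain label `FAQ`):

```js
// Hedged sketch: build 'Prompt it' badge markdown from a prompt id and a label.
// Assumption: b.lmpify.com accepts arbitrary URL-encoded text as the badge label.
const promptButton = (id, label) =>
  `[![](https://b.lmpify.com/${encodeURIComponent(label)})](https://www.lmpify.com/${id})`;

console.log(promptButton("httpsuithubcomj-u4l8lj0", "Ask the FAQ"));
```

This can be handy when you want one button per template or example prompt in your repo.
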
Previously generated shared links can be reached for free and without rate limits. New prompts are a different matter: although I aim to keep the free plan as generous as possible, lmpify is not a free service. As of now, free, unauthenticated prompts are capped at 5 per hour and restricted to cheaper models such as OpenAI GPT-4.1 mini. After that, users are prompted to add a balance to keep going.

## Pricing

- All previously generated results are _cached forever_ and _free for everyone_ without rate limits
- New users who haven't deposited money via Stripe get 5 free prompts per hour. This may change in the future.
- After depositing money through Stripe, users pay the model price + markup when executing new prompts.
- The markup is 50% on top of the model price, to account for free usage, creator benefits (coming soon), and to keep this tool sustainable.

--------------------------------------------------------------------------------

/privacy-policy.md:
--------------------------------------------------------------------------------

# Privacy Policy

**Important:** All content submitted to lmpify.com, including prompts and generated responses, is public by design. This information is accessible and searchable on the public internet.

## Public Nature of the Service

lmpify.com is designed as a public sharing platform. The public nature of our service is core to its functionality, allowing for easy sharing and discovery of AI-generated content.

When you use our service:

- Your prompts are stored and made publicly accessible
- Generated responses are stored and made publicly accessible
- All content is assigned a unique URL that can be easily shared
- Content may be indexed by search engines
- Anyone with the URL can access your prompts and responses

## Information We Collect

We collect and store the following information:

- Prompts you submit to the service
- AI-generated responses to your prompts
- Basic usage information (timestamps, model used, etc.)
- Standard server logs including IP addresses and user agents

## How We Use Your Information

We use the collected information to:

- Provide and maintain the service
- Generate and display AI responses to your prompts
- Create shareable links for content
- Improve our service and user experience
- Monitor and analyze usage patterns

## Content Ownership and Responsibility

You are responsible for the content you submit to lmpify.com. Do not submit sensitive personal information, confidential data, or content that violates others' rights.

While you retain ownership of your original content, by using our service you grant us a worldwide, non-exclusive license to use, store, display, reproduce, and distribute your content in connection with the service.

## Content Removal

If you wish to have content removed from lmpify.com, please contact us with the specific URL of the content. While we will consider removal requests, please remember that the public nature of the service means that content may have been cached, shared, or archived elsewhere.

## Third-Party Services

We may use third-party services to help operate our service, including:

- AI model providers
- Hosting and infrastructure services
- Analytics tools

These services may collect and process information according to their own privacy policies.

## Changes to This Policy

We may update this privacy policy from time to time. We will notify users of any significant changes by posting the new policy on this page.

## Contact Us

If you have any questions about this privacy policy or our practices, please contact us.

_Last updated: May 17, 2025_

© 2025 lmpify.com - [let me prompt it for you](https://lmpify.com)

--------------------------------------------------------------------------------