├── LICENSE
├── README.md
├── api-reference
│   ├── introduction.mdx
│   ├── privacy
│   │   ├── delete_request.mdx
│   │   └── delete_status.mdx
│   └── tracing
│       └── whitelist_user.mdx
├── favicon.png
├── img
│   ├── apikey.png
│   ├── integrations
│   │   ├── dynatrace.png
│   │   ├── honeycomb.png
│   │   ├── hyperdx.png
│   │   └── signoz.png
│   ├── no_traces.png
│   ├── prompt_configuration.png
│   ├── prompt_deployment.png
│   ├── prompt_playground.png
│   ├── trace.png
│   └── workflow.png
├── introduction.mdx
├── logo
│   ├── dark.svg
│   └── light.svg
├── mint.json
└── openllmetry
    ├── configuration.mdx
    ├── contributing
    │   ├── developing.mdx
    │   └── overview.mdx
    ├── getting-started-nextjs.mdx
    ├── getting-started-python.mdx
    ├── getting-started-ts.mdx
    ├── integrations
    │   ├── datadog.mdx
    │   ├── dynatrace.mdx
    │   ├── exporting.mdx
    │   ├── grafana.mdx
    │   ├── honeycomb.mdx
    │   ├── hyperdx.mdx
    │   ├── newrelic.mdx
    │   ├── otel-collector.mdx
    │   ├── signoz.mdx
    │   └── traceloop.mdx
    ├── introduction.mdx
    ├── prompts
    │   ├── quick-start.mdx
    │   ├── registry.mdx
    │   └── sdk-usage.mdx
    ├── tracing
    │   ├── association.mdx
    │   ├── decorators.mdx
    │   ├── privacy.mdx
    │   ├── python-threads.mdx
    │   └── user-feedback.mdx
    └── troubleshooting.mdx
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # docs
2 | Documentation for Traceloop & OpenLLMetry
3 |
--------------------------------------------------------------------------------
/api-reference/introduction.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Introduction"
3 | ---
4 |
5 | The following is a list of publicly available APIs you can use with the [Traceloop platform](https://app.traceloop.com).
6 |
7 | All APIs require an API key. You can get one by [signing up](https://app.traceloop.com),
8 | and then going to the [API Keys](https://app.traceloop.com/settings/api-keys) page.
9 |
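   | For illustration, every call sends the key as a bearer token in the `Authorization` header (`<endpoint>` below is a placeholder for any of the APIs listed in this section):
   |
   | ```bash
   | curl https://app.traceloop.com/api/<endpoint> \
   |   -H "Authorization: Bearer $TRACELOOP_API_KEY"
   | ```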
--------------------------------------------------------------------------------
/api-reference/privacy/delete_request.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Delete specific user data"
3 | api: "DELETE https://app.traceloop.com/api/config/privacy/data-deletion"
4 | ---
5 |
6 | You can delete trace data for a specific user by specifying their association properties.
7 |
8 | ## Request Body
9 |
10 |
11 | A specific criterion for deletion like `{userId: "123"}`.
12 |
13 |
14 | ```json
15 | {
16 | "associationProperty": {
17 | "userId": "123"
18 | }
19 | }
20 | ```
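   |
   | A curl sketch of this request (bearer auth, as with all dashboard APIs):
   |
   | ```bash
   | curl -X DELETE https://app.traceloop.com/api/config/privacy/data-deletion \
   |   -H "Authorization: Bearer $TRACELOOP_API_KEY" \
   |   -H "Content-Type: application/json" \
   |   -d '{"associationProperty": {"userId": "123"}}'
   | ```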
21 |
22 | ## Response
23 |
24 |
25 | The request ID for this deletion request. You can use it to query the status
26 | of the deletion.
27 |
28 |
29 |
--------------------------------------------------------------------------------
/api-reference/privacy/delete_status.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Status of user deletion request"
3 | api: "GET https://app.traceloop.com/api/config/privacy/data-deletion"
4 | ---
5 |
6 | Get the status of your user deletion request.
7 |
8 | ## Request Query Parameter
9 |
10 |
11 | The request ID from the user deletion request.
12 |
13 |
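   | For illustration, a curl sketch of this call; the query-parameter name `requestId` here is a hypothetical placeholder:
   |
   | ```bash
   | # "requestId" is a hypothetical parameter name -- use the documented query parameter
   | curl "https://app.traceloop.com/api/config/privacy/data-deletion?requestId=<request-id>" \
   |   -H "Authorization: Bearer $TRACELOOP_API_KEY"
   | ```
   |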
14 | ## Response
15 |
16 |
17 | `true` if the process was completed, `false` otherwise.
18 |
19 |
20 |
21 | The number of spans that were deleted.
22 |
23 |
24 |
25 | The number of spans that need to be deleted in total.
26 |
27 |
--------------------------------------------------------------------------------
/api-reference/tracing/whitelist_user.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Enable logging of prompts and responses"
3 | api: "POST https://app.traceloop.com/api/config/pii/tracing-allow-list"
4 | ---
5 |
6 | By default, all prompts and responses are logged.
7 | If you've disabled this behavior by following [this guide](/openllmetry/tracing/privacy),
8 | you can selectively re-enable it for some of your users with this API.
9 |
10 | ## Request Body
11 |
12 |
13 | The list of association properties (like `{userId: "123"}`) that will be allowed to be logged.
14 |
15 |
16 | Example:
17 |
18 | ```json
19 | {
20 | "associationPropertyAllowList": [
21 | {
22 | "userId": "123"
23 | }
24 | ]
25 | }
26 | ```
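   |
   | A curl sketch of this call (bearer auth, as with all dashboard APIs):
   |
   | ```bash
   | curl -X POST https://app.traceloop.com/api/config/pii/tracing-allow-list \
   |   -H "Authorization: Bearer $TRACELOOP_API_KEY" \
   |   -H "Content-Type: application/json" \
   |   -d '{"associationPropertyAllowList": [{"userId": "123"}]}'
   | ```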
27 |
--------------------------------------------------------------------------------
/favicon.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aavetis/docs/fe4953f8f6ae7f17ae28c76ef707c0029ac1c769/favicon.png
--------------------------------------------------------------------------------
/img/apikey.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aavetis/docs/fe4953f8f6ae7f17ae28c76ef707c0029ac1c769/img/apikey.png
--------------------------------------------------------------------------------
/img/integrations/dynatrace.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aavetis/docs/fe4953f8f6ae7f17ae28c76ef707c0029ac1c769/img/integrations/dynatrace.png
--------------------------------------------------------------------------------
/img/integrations/honeycomb.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aavetis/docs/fe4953f8f6ae7f17ae28c76ef707c0029ac1c769/img/integrations/honeycomb.png
--------------------------------------------------------------------------------
/img/integrations/hyperdx.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aavetis/docs/fe4953f8f6ae7f17ae28c76ef707c0029ac1c769/img/integrations/hyperdx.png
--------------------------------------------------------------------------------
/img/integrations/signoz.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aavetis/docs/fe4953f8f6ae7f17ae28c76ef707c0029ac1c769/img/integrations/signoz.png
--------------------------------------------------------------------------------
/img/no_traces.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aavetis/docs/fe4953f8f6ae7f17ae28c76ef707c0029ac1c769/img/no_traces.png
--------------------------------------------------------------------------------
/img/prompt_configuration.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aavetis/docs/fe4953f8f6ae7f17ae28c76ef707c0029ac1c769/img/prompt_configuration.png
--------------------------------------------------------------------------------
/img/prompt_deployment.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aavetis/docs/fe4953f8f6ae7f17ae28c76ef707c0029ac1c769/img/prompt_deployment.png
--------------------------------------------------------------------------------
/img/prompt_playground.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aavetis/docs/fe4953f8f6ae7f17ae28c76ef707c0029ac1c769/img/prompt_playground.png
--------------------------------------------------------------------------------
/img/trace.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aavetis/docs/fe4953f8f6ae7f17ae28c76ef707c0029ac1c769/img/trace.png
--------------------------------------------------------------------------------
/img/workflow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aavetis/docs/fe4953f8f6ae7f17ae28c76ef707c0029ac1c769/img/workflow.png
--------------------------------------------------------------------------------
/introduction.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Introduction"
3 | description: "Monitor, debug and test the quality of your LLM outputs"
4 | ---
5 |
6 | Traceloop automatically monitors the quality of your LLM outputs. It helps you to debug and test changes to your models and prompts.
7 |
8 | - Get real-time alerts about your model's quality
9 | - Execution tracing for every request
10 | - Gradually roll out changes to models and prompts
11 | - Debug and re-run issues from production in your IDE
12 |
13 | Need help using Traceloop? Ping us at dev@traceloop.com
14 |
15 | ### Get Started - Install the SDK
16 |
17 |
18 |
19 | Available
20 |
21 |
26 | Available
27 |
28 |
29 | In Development
30 |
31 |
32 |
--------------------------------------------------------------------------------
/logo/dark.svg:
--------------------------------------------------------------------------------
1 |
22 |
--------------------------------------------------------------------------------
/logo/light.svg:
--------------------------------------------------------------------------------
1 |
22 |
--------------------------------------------------------------------------------
/mint.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "traceloop",
3 | "favicon": "/favicon.png",
4 | "logo": {
5 | "light": "/logo/light.svg",
6 | "dark": "/logo/dark.svg"
7 | },
8 | "colors": {
9 | "primary": "#FFB53D",
10 | "light": "#FFF238",
11 | "dark": "#FF3D5D",
12 | "anchors": {
13 | "from": "#FFF238",
14 | "to": "#FF3D5D"
15 | },
16 | "background": {
17 | "dark": "#121212"
18 | }
19 | },
20 | "topbarLinks": [
21 | {
22 | "name": "Website",
23 | "url": "https://www.traceloop.com"
24 | }
25 | ],
26 | "topbarCtaButton": {
27 | "type": "link",
28 | "name": "Start now",
29 | "url": "https://app.traceloop.com/"
30 | },
31 | "anchors": [
32 | {
33 | "name": "OpenLLMetry",
34 | "icon": "telescope",
35 | "url": "openllmetry"
36 | },
37 | {
38 | "name": "Dashboard API",
39 | "icon": "webhook",
40 | "url": "api-reference"
41 | },
42 | {
43 | "name": "Community",
44 | "icon": "slack",
45 | "url": "https://join.slack.com/t/traceloopcommunity/shared_invite/zt-1plpfpm6r-zOHKI028VkpcWdobX65C~g"
46 | },
47 | {
48 | "name": "GitHub",
49 | "icon": "github",
50 | "url": "https://github.com/traceloop"
51 | }
52 | ],
53 | "navigation": [
54 | {
55 | "group": "Learn",
56 | "pages": ["introduction"]
57 | },
58 | {
59 | "group": "Introduction",
60 | "pages": ["openllmetry/introduction"]
61 | },
62 | {
63 | "group": "Quick Start",
64 | "pages": [
65 | "openllmetry/getting-started-python",
66 | "openllmetry/getting-started-ts",
67 | "openllmetry/getting-started-nextjs",
68 | "openllmetry/configuration",
69 | "openllmetry/troubleshooting"
70 | ]
71 | },
72 | {
73 | "group": "Tracing",
74 | "pages": [
75 | "openllmetry/tracing/decorators",
76 | "openllmetry/tracing/association",
77 | "openllmetry/tracing/privacy",
78 | "openllmetry/tracing/python-threads"
79 | ]
80 | },
81 | {
82 | "group": "Integrations",
83 | "pages": [
84 | "openllmetry/integrations/exporting",
85 | "openllmetry/integrations/traceloop",
86 | "openllmetry/integrations/dynatrace",
87 | "openllmetry/integrations/datadog",
88 | "openllmetry/integrations/newrelic",
89 | "openllmetry/integrations/honeycomb",
90 | "openllmetry/integrations/grafana",
91 | "openllmetry/integrations/hyperdx",
92 | "openllmetry/integrations/signoz",
93 | "openllmetry/integrations/otel-collector"
94 | ]
95 | },
96 | {
97 | "group": "Prompt Management",
98 | "pages": [
99 | "openllmetry/prompts/quick-start",
100 | "openllmetry/prompts/registry",
101 | "openllmetry/prompts/sdk-usage"
102 | ]
103 | },
104 | {
105 | "group": "Contribute",
106 | "pages": [
107 | "openllmetry/contributing/overview",
108 | "openllmetry/contributing/developing"
109 | ]
110 | },
111 | {
112 | "group": "API Reference",
113 | "pages": ["api-reference/introduction"]
114 | },
115 | {
116 | "group": "Tracing",
117 | "pages": ["api-reference/tracing/whitelist_user"]
118 | },
119 | {
120 | "group": "GDPR & Privacy",
121 | "pages": [
122 | "api-reference/privacy/delete_request",
123 | "api-reference/privacy/delete_status"
124 | ]
125 | }
126 | ],
127 | "footerSocials": {
128 | "github": "https://github.com/traceloop",
129 | "twitter": "https://twitter.com/traceloopdev"
130 | },
131 | "analytics": {
132 | "gtm": {
133 | "tagId": "GTM-NDQVZMH"
134 | }
135 | },
136 | "modeToggle": {
137 | "default": "dark",
138 | "isHidden": true
139 | },
140 | "feedback": {
141 | "thumbsRating": true,
142 | "suggestEdit": true
143 | },
144 | "api": {
145 | "baseUrl": "https://app.traceloop.com/api",
146 | "auth": {
147 | "method": "bearer"
148 | }
149 | }
150 | }
151 |
--------------------------------------------------------------------------------
/openllmetry/configuration.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "SDK Initialization Options"
3 | description: "Documentation of the initialization options for the SDKs."
4 | ---
5 |
6 | Most configuration options can be set via environment variables or via the SDK's initialization options.
7 | The SDK initialization options always take precedence over the environment variables.
8 | See below for the list of options.
9 |
10 | ## Application Name
11 |
12 | You can customize the application name that will be logged with the traces. This is useful for identifying which service produced a trace if you have multiple services with
13 | OpenLLMetry installed.
14 |
15 |
16 |
17 | ```python Python
18 | Traceloop.init(app_name="my app name")
19 | ```
20 |
21 | ```js Typescript / Javascript
22 | Traceloop.init({ appName: "my app name" });
23 | ```
24 |
25 |
26 |
27 | ## Base URL
28 |
29 | This defines the OpenTelemetry endpoint to connect to. It defaults to `https://api.traceloop.com`.
30 | If you prefix it with `http` or `https`, it will use the OTLP/HTTP protocol.
31 | Otherwise, it will use the OTLP/gRPC protocol.
32 |
33 | To configure this for a different observability platform, check out our [integrations section](/openllmetry/integrations).
34 |
35 |
36 | The OpenTelemetry standard defines that the actual endpoint should always end
37 | with `/v1/traces`. Thus, if you specify a base URL, we always append
38 | `/v1/traces` to it. This is similar to how `OTEL_EXPORTER_OTLP_ENDPOINT` works
39 | in all OpenTelemetry SDKs.
40 |
41 |
42 |
43 |
44 | ```bash Environment Variable
45 | TRACELOOP_BASE_URL=<opentelemetry-endpoint-url>
46 | ```
47 |
48 | ```python Python
49 | Traceloop.init(api_endpoint="<opentelemetry-endpoint-url>")
50 | ```
51 |
52 | ```js Typescript / Javascript
53 | Traceloop.init({ baseUrl: "<opentelemetry-endpoint-url>" })
54 | ```
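   |
   | For example (the hostnames are illustrative), a URL with an `http`/`https` prefix selects OTLP/HTTP, while a bare `host:port` selects OTLP/gRPC:
   |
   | ```bash
   | # OTLP/HTTP -- URL starts with http or https
   | TRACELOOP_BASE_URL=https://collector.example.com
   | # OTLP/gRPC -- no scheme prefix
   | TRACELOOP_BASE_URL=collector.example.com:4317
   | ```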
55 |
56 |
57 |
58 | ## API Key
59 |
60 | If set, this is sent as a bearer token on the Authorization header.
61 |
62 | API keys can be generated from the [Traceloop Dashboard](https://app.traceloop.com/settings/api-keys), for each of the three supported environments (Development, Staging, Production).
63 |
64 | Note that API keys are only displayed once, at the time of their creation, and are not stored anywhere. If you lose your API key, you will need to generate a new one.
65 |
66 |
67 | If this is not set, and the base URL is set to `https://api.traceloop.com`,
68 | the SDK will generate a new API key automatically with the Traceloop
69 | dashboard.
70 |
71 |
72 |
73 |
74 | ```bash Environment Variable
75 | TRACELOOP_API_KEY=<your-api-key>
76 | ```
77 |
78 | ```python Python
79 | Traceloop.init(api_key="<your-api-key>")
80 | ```
81 |
82 | ```js Typescript / Javascript
83 | Traceloop.init({ apiKey: "<your-api-key>" })
84 | ```
85 |
86 |
87 |
88 | ## Headers
89 |
90 | If set, these are sent as-is as HTTP headers. This is useful for custom authentication protocols that some observability platforms require.
91 | The format follows the [W3C Correlation-Context](https://github.com/w3c/baggage/blob/master/baggage/HTTP_HEADER_FORMAT.md) format, i.e.
92 | `key1=value1,key2=value2`. If you need spaces, use `%20`.
93 | This is similar to how `OTEL_EXPORTER_OTLP_HEADERS` works in all OpenTelemetry SDKs.
94 |
95 | If this is set, the API key is ignored.
96 |
97 |
98 |
99 | ```bash Environment Variable
100 | TRACELOOP_HEADERS=key1=value1,key2=value2
101 | ```
102 |
103 | ```python Python
104 | Traceloop.init(headers={"key1": "value1", "key2": "value2"})
105 | ```
106 |
107 | ```js Typescript / Javascript
108 | Traceloop.init({ headers: { key1: "value1", key2: "value2" } });
109 | ```
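   |
   | For instance, a header value containing a space (such as a token scheme prefix) must encode it as `%20`; the values here are illustrative:
   |
   | ```bash
   | TRACELOOP_HEADERS="Authorization=Bearer%20my-token,x-env=staging"
   | ```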
110 |
111 |
112 |
113 | ## Custom Traces Exporter
114 |
115 | If, for some reason, you cannot use the OTLP/HTTP or OTLP/gRPC exporter that is provided with the SDK, you can set a custom
116 | exporter (for example, to Jaeger, Zipkin, or others).
117 |
118 |
119 | If this is set, Base URL, API key and headers configurations are ignored.
120 |
121 |
122 |
123 |
124 | ```python Python
125 | from opentelemetry.exporter.zipkin.json import ZipkinExporter
   |
   | Traceloop.init(exporter=ZipkinExporter(endpoint="http://localhost:9411/api/v2/spans"))
126 | ```
127 |
128 | ```js Typescript / Javascript
129 | import { ZipkinExporter } from "@opentelemetry/exporter-zipkin";
   |
   | Traceloop.init({ exporter: new ZipkinExporter() });
130 | ```
131 |
132 |
133 |
134 | ## Disable Batch
135 |
136 | By default, the SDK batches spans using the [OpenTelemetry batch span processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md).
137 | When working locally, you may sometimes wish to disable this behavior. You can do that with this flag.
138 |
139 |
140 |
141 | ```python Python
142 | Traceloop.init(disable_batch=True)
143 | ```
144 |
145 | ```js Typescript / Javascript
146 | Traceloop.init({ disableBatch: true });
147 | ```
148 |
149 |
150 |
151 | ## Disable Logging
152 |
153 | By default, the SDK outputs some debug logs to the console. You can disable this behavior with this flag.
154 |
155 |
156 |
157 | ```python Python
158 | Traceloop.init(suppress_logs=True)
159 | ```
160 |
161 | ```js Typescript / Javascript
162 | Traceloop.init({ suppressLogs: true });
163 | ```
164 |
165 |
166 |
--------------------------------------------------------------------------------
/openllmetry/contributing/developing.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Local Development - Python"
3 | ---
4 |
5 | You can contribute new instrumentations, or update and improve the Python SDK wrapper.
6 |
7 | We use `poetry` to manage packages, and each package is managed independently under its own directory under `/packages`.
8 | All instrumentations depend on `opentelemetry-semantic-conventions-ai`,
9 | and `traceloop-sdk` depends on all the instrumentations.
10 |
11 | If adding a new instrumentation, make sure to use it in `traceloop-sdk`, and write proper tests.
12 |
13 | [Join our Slack community](https://join.slack.com/t/traceloopcommunity/shared_invite/zt-1plpfpm6r-zOHKI028VkpcWdobX65C~g) to chat and get help on any issues you may encounter.
14 |
15 | ## Debugging
16 |
17 | No matter if you're working on an instrumentation or on the SDK, we recommend testing the changes by using
18 | the SDK in the sample app (`/packages/sample-app`) or the tests under the SDK.
19 |
20 | ## Running tests
21 |
22 | All tests are currently under `packages/python-sdk`. Since we wanted to simulate a real environment,
23 | the tests make actual calls to OpenAI and other services.
24 | To run them, you'll need to set up the following env vars:
25 |
26 | ```
27 | OPENAI_API_KEY=...
28 | ANTHROPIC_API_KEY=...
29 | PINECONE_API_KEY=...
30 | PINECONE_ENVIRONMENT=...
31 | ```
32 |
33 | Run the tests using:
34 |
35 | ```bash
36 | cd packages/python-sdk
37 | poetry install
38 | poetry run pytest
39 | ```
40 |
--------------------------------------------------------------------------------
/openllmetry/contributing/overview.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Overview"
3 | description: "We welcome any contributions to OpenLLMetry, big or small."
4 | ---
5 |
6 | ## Community
7 |
8 | It's the early days of our project and we're working hard to build an awesome, inclusive community. In order to grow this, all community members must adhere to our [Code of Conduct](https://github.com/traceloop/openllmetry/blob/main/CODE_OF_CONDUCT.md).
9 |
10 | ## Bugs and issues
11 |
12 | Bug reports help make OpenLLMetry a better experience for everyone. When you report a bug, a template will be created automatically containing information we'd like to know.
13 |
14 | Before raising a new issue, please search existing ones to make sure you're not creating a duplicate.
15 |
16 |
17 | If the issue is related to security, please email us directly at
18 | dev@traceloop.com.
19 |
20 |
21 | ## Deciding what to work on
22 |
23 | You can start by browsing through our list of issues or adding your own that improves on the test suite experience. Once you've decided on an issue, leave a comment and wait to get approved; this helps avoid multiple people working on the same issue.
24 |
25 | If you're ever in doubt about whether or not a proposed feature aligns with OpenLLMetry as a whole, feel free to raise an issue about it and we'll get back to you promptly.
26 |
27 | ## Writing and submitting code
28 |
29 | Anyone can contribute code to OpenLLMetry. To get started, check out the local development guide, make your changes, and submit a pull request to the main repository.
30 |
31 | ## Licensing
32 |
33 | All of OpenLLMetry's code is under the Apache 2.0 license.
34 |
35 | Any third party components incorporated into our code are licensed under the original license provided by the applicable component owner.
36 |
--------------------------------------------------------------------------------
/openllmetry/getting-started-nextjs.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Next.js"
3 | ---
4 |
5 | You can also check out our full working example with Next.js 13 [here](https://github.com/traceloop/openllmetry-nextjs-demo).
6 |
7 | Install Traceloop by following these 2 easy steps and get instant monitoring.
8 |
9 |
10 |
11 |
12 | Run the following command in your terminal:
13 |
14 |
15 | ```bash npm
16 | npm install @traceloop/node-server-sdk
17 | ```
18 |
19 | ```bash pnpm
20 | pnpm install @traceloop/node-server-sdk
21 | ```
22 |
23 | ```bash yarn
24 | yarn add @traceloop/node-server-sdk
25 | ```
26 |
27 |
28 |
29 | Whenever you use an LLM or a vector DB (API routes, `getServerSideProps()`, etc.),
30 | you'll need to import the Traceloop SDK and initialize it. Make sure to pass
31 | the modules you'd like to instrument and set `disableBatch` to `true`.
32 |
33 | ```js
34 | import * as traceloop from "@traceloop/node-server-sdk";
35 | import OpenAI from "openai";
36 |
37 | traceloop.initialize({
38 | disableBatch: true,
39 | instrumentModules: {
40 | openAI: OpenAI,
41 | },
42 | });
43 | ```
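   |
   | As a minimal sketch (the route path and prompt are illustrative), an API route can then call OpenAI as usual and the SDK will trace the call:
   |
   | ```js
   | // pages/api/joke.ts -- assumes the SDK was initialized as shown above
   | import OpenAI from "openai";
   |
   | const openai = new OpenAI();
   |
   | export default async function handler(req, res) {
   |   const completion = await openai.chat.completions.create({
   |     model: "gpt-3.5-turbo",
   |     messages: [{ role: "user", content: "Tell me a joke" }],
   |   });
   |   res.status(200).json({ joke: completion.choices[0].message.content });
   | }
   | ```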
44 |
45 |
46 |
47 |
48 |
49 | If you just want to explore - you don't need to do anything! Just run your app
50 | without setting any environment variable and a new account will automatically
51 | be created for you so you can start seeing traces from your development
52 | environment.
53 |
54 | Lastly, you'll need to configure where to export your traces.
55 | The 2 environment variables controlling this are `TRACELOOP_API_KEY` and `TRACELOOP_BASE_URL`.
56 |
57 | For Traceloop, read on. For other options, see [Exporting](/openllmetry/integrations/exporting).
58 |
59 | ### Using Traceloop Cloud
60 |
61 | Go to [Traceloop](https://app.traceloop.com), and create a new account.
62 | Then, click on **Environments** on the left-hand navigation bar, or go directly to https://app.traceloop.com/settings/api-keys.
63 | Click **Generate API Key** to generate an API key for the development environment and click **Copy API Key** to copy it over.
64 |
65 | Make sure to copy it as it won't be shown again.
66 |
67 |
68 |
69 |
70 |
71 | Set the copied Traceloop API key as an environment variable in your app named `TRACELOOP_API_KEY`.
72 |
73 | Done! You'll get instant visibility into everything that's happening with your LLM.
74 | If you're calling a vector DB, or any other external service or database, you'll also see it in the Traceloop dashboard.
75 |
76 |
77 |
78 |
--------------------------------------------------------------------------------
/openllmetry/getting-started-python.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Python"
3 | ---
4 |
5 | You can also check out our full working example of a RAG pipeline with Pinecone [here](https://github.com/traceloop/pinecone-demo).
6 |
7 | Install Traceloop by following these 3 easy steps and get instant monitoring.
8 |
9 |
10 |
11 |
12 | Run the following command in your terminal:
13 |
14 |
15 |
16 | ```bash pip
17 | pip install traceloop-sdk
18 | ```
19 |
20 | ```bash poetry
21 | poetry add traceloop-sdk
22 | ```
23 |
24 |
25 |
26 | In your LLM app, initialize the Traceloop tracer like this:
27 |
28 | ```python
29 | from traceloop.sdk import Traceloop
30 |
31 | Traceloop.init()
32 | ```
33 |
34 | If you're running this locally, you may want to disable batch sending, so you can see the traces immediately:
35 |
36 | ```python
37 | Traceloop.init(disable_batch=True)
38 | ```
39 |
40 |
41 |
42 |
43 |
44 |
45 | If you have complex workflows or chains, you can annotate them to get a better understanding of what's going on.
46 | You'll see the complete trace of your workflow on Traceloop or any other dashboard you're using.
47 |
48 | We have a set of [decorators](/openllmetry/tracing/decorators) to make this easier.
49 | If you have a function that renders a prompt and calls an LLM, simply add `@workflow` (or, for asynchronous methods, `@aworkflow`).
50 |
51 |
52 | If you're using an LLM framework like Haystack, Langchain or LlamaIndex -
53 | we'll do that for you. No need to add any annotations to your code.
54 |
55 |
56 | ```python
57 | from traceloop.sdk.decorators import workflow
58 |
59 | @workflow(name="suggest_answers")
60 | def suggest_answers(question: str):
61 | ...
62 | ```
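   |
   | For an asynchronous function, the equivalent sketch uses `@aworkflow`:
   |
   | ```python
   | from traceloop.sdk.decorators import aworkflow
   |
   | @aworkflow(name="suggest_answers")
   | async def suggest_answers(question: str):
   |     ...
   | ```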
63 |
64 | For more information, see the [dedicated section in the docs](/openllmetry/tracing/decorators).
65 |
66 |
67 |
68 |
69 | If you just want to explore - you don't need to do anything! Just run your app
70 | without setting any environment variable and a new account will automatically
71 | be created for you so you can start seeing traces from your development
72 | environment.
73 |
74 | Lastly, you'll need to configure where to export your traces.
75 | The 2 environment variables controlling this are `TRACELOOP_API_KEY` and `TRACELOOP_BASE_URL`.
76 |
77 | For Traceloop, read on. For other options, see [Exporting](/openllmetry/integrations/exporting).
78 |
79 | ### Using Traceloop Cloud
80 |
81 | Go to [Traceloop](https://app.traceloop.com), and create a new account.
82 | Then, click on **Environments** on the left-hand navigation bar, or go directly to https://app.traceloop.com/settings/api-keys.
83 | Click **Generate API Key** to generate an API key for the development environment and click **Copy API Key** to copy it over.
84 |
85 | Make sure to copy it as it won't be shown again.
86 |
87 |
88 |
89 |
90 |
91 | Set the copied Traceloop API key as an environment variable in your app named `TRACELOOP_API_KEY`.
92 |
93 | Done! You'll get instant visibility into everything that's happening with your LLM.
94 | If you're calling a vector DB, or any other external service or database, you'll also see it in the Traceloop dashboard.
95 |
96 |
97 |
98 |
--------------------------------------------------------------------------------
/openllmetry/getting-started-ts.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Node.js"
3 | ---
4 |
5 |
6 | If you're on Next.js, follow the [Next.js
7 | guide](/openllmetry/getting-started-nextjs).
8 |
9 |
10 | Install Traceloop by following these 3 easy steps and get instant monitoring.
11 |
12 |
13 |
14 |
15 | Run the following command in your terminal:
16 |
17 |
18 | ```bash npm
19 | npm install @traceloop/node-server-sdk
20 | ```
21 |
22 | ```bash pnpm
23 | pnpm install @traceloop/node-server-sdk
24 | ```
25 |
26 | ```bash yarn
27 | yarn add @traceloop/node-server-sdk
28 | ```
29 |
30 |
31 |
32 | In your LLM app, initialize the Traceloop tracer like this:
33 |
34 | ```js
35 | import * as traceloop from "@traceloop/node-server-sdk";
36 |
37 | traceloop.initialize();
38 | ```
39 |
40 |
41 | Because of the way Javascript works, you must import the Traceloop SDK before
42 | importing any LLM module like OpenAI.
43 |
44 |
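   | For example (a minimal sketch), keep the SDK import first in the file so the instrumentation can hook the LLM modules:
   |
   | ```js
   | // The SDK import must come before any LLM module imports
   | import * as traceloop from "@traceloop/node-server-sdk";
   | import OpenAI from "openai";
   |
   | traceloop.initialize();
   | const openai = new OpenAI();
   | ```
   |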
45 | If you're running this locally, you may want to disable batch sending, so you can see the traces immediately:
46 |
47 | ```js
48 | traceloop.initialize({ disableBatch: true });
49 | ```
50 |
51 |
52 |
53 |
54 |
55 |
56 | If you have complex workflows or chains, you can annotate them to get a better understanding of what's going on.
57 | You'll see the complete trace of your workflow on Traceloop or any other dashboard you're using.
58 |
59 | We have a set of [decorators](/openllmetry/tracing/decorators) to make this easier.
60 | If you have a method that renders a prompt and calls an LLM, simply add `@workflow` to it.
61 |
62 |
63 | If you're using an LLM framework like Haystack, Langchain or LlamaIndex -
64 | we'll do that for you. No need to add any annotations to your code.
65 |
66 |
67 | ```js
68 | class MyLLM {
69 | @workflow("suggest_answers")
70 | async suggestAnswers(question: string) {
71 | ...
72 | }
73 | }
74 | ```
75 |
76 | For more information, see the [dedicated section in the docs](/openllmetry/tracing/decorators).
77 |
78 |
79 |
80 |
81 | If you just want to explore - you don't need to do anything! Just run your app
82 | without setting any environment variable and a new account will automatically
83 | be created for you so you can start seeing traces from your development
84 | environment.
85 |
86 | Lastly, you'll need to configure where to export your traces.
87 | The 2 environment variables controlling this are `TRACELOOP_API_KEY` and `TRACELOOP_BASE_URL`.
88 |
89 | For Traceloop, read on. For other options, see [Exporting](/openllmetry/integrations/exporting).
90 |
91 | ### Using Traceloop Cloud
92 |
93 | Go to [Traceloop](https://app.traceloop.com), and create a new account.
94 | Then, click on **Environments** on the left-hand navigation bar, or go directly to https://app.traceloop.com/settings/api-keys.
95 | Click **Generate API Key** to generate an API key for the development environment and click **Copy API Key** to copy it over.
96 |
97 | Make sure to copy it as it won't be shown again.
98 |
99 |
100 |
101 |
102 |
103 | Set the copied Traceloop API key as an environment variable in your app named `TRACELOOP_API_KEY`.
104 |
105 | Done! You'll get instant visibility into everything that's happening with your LLM.
106 | If you're calling a vector DB, or any other external service or database, you'll also see it in the Traceloop dashboard.
107 |
108 |
109 |
110 |
--------------------------------------------------------------------------------
/openllmetry/integrations/datadog.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Datadog"
3 | ---
4 |
5 | With Datadog, there are two options - you can either export directly to a Datadog Agent in your cluster, or go through an OpenTelemetry Collector (which requires that you deploy one in your cluster).
6 |
7 | See also [Datadog documentation](https://docs.datadoghq.com/opentelemetry/).
8 |
9 | Exporting directly to an agent is easiest.
10 | To do that, first enable the OTLP HTTP collector in your agent configuration.
11 | This depends on how you deployed your Datadog agent. For example, if you've used a Helm chart,
12 | you can add the following to your `values.yaml`
13 | (see [this](https://docs.datadoghq.com/opentelemetry/otlp_ingest_in_the_agent/?tab=kuberneteshelmvaluesyaml#enabling-otlp-ingestion-on-the-datadog-agent) for other options):
14 |
15 | ```yaml
16 | otlp:
17 | receiver:
18 | protocols:
19 | http:
20 | enabled: true
21 | ```
22 |
23 | Then, set this env var, and you're done!
24 |
25 | ```bash
26 | TRACELOOP_BASE_URL=http://<datadog-agent-hostname>:4318
27 | ```
28 |
--------------------------------------------------------------------------------
/openllmetry/integrations/dynatrace.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Dynatrace"
3 | ---
4 |
5 |
6 |
7 |
8 |
9 | Analyze all collected LLM traces within Dynatrace by using the native OpenTelemetry ingest endpoint of your Dynatrace environment.
10 |
11 | Go to your Dynatrace environment and create a new access token under **Manage Access Tokens**.
12 | The access token needs the following permission scopes that allow the ingest of OpenTelemetry spans, metrics and logs
13 | (`openTelemetryTrace.ingest`, `metrics.ingest`, `logs.ingest`).
14 |
15 | Set the `TRACELOOP_BASE_URL` environment variable to the URL of your Dynatrace OpenTelemetry ingest endpoint:
16 |
17 | ```bash
18 | TRACELOOP_BASE_URL=https://<your-environment-id>.live.dynatrace.com/api/v2/otlp
19 | ```
20 |
21 | Set the `TRACELOOP_HEADERS` environment variable to include your previously created access token:
22 |
23 | ```bash
24 | TRACELOOP_HEADERS=Authorization=Api-Token%20<your-access-token>
25 | ```
26 |
27 | Done! All the exported spans along with their span attributes will show up within the Dynatrace trace view.
28 |
--------------------------------------------------------------------------------
/openllmetry/integrations/exporting.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Overview"
3 | description: "Connect to any observability platform - Traceloop, Dynatrace, Datadog, Honeycomb, and others"
4 | ---
5 |
6 | Since the Traceloop SDK uses OpenTelemetry under the hood, you can view everything
7 | in any observability platform that supports OpenTelemetry.
8 |
9 | ## The Integrations Catalog
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
--------------------------------------------------------------------------------
/openllmetry/integrations/grafana.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Grafana"
3 | ---
4 |
5 | First, go to the Grafana Cloud account page under `https://grafana.com/orgs/<your-org>`,
6 | and click on **Send Traces** under Tempo. In **Grafana Data Source settings**,
7 | note the `URL` value. Click **Generate now** to generate an API key and copy it.
8 | Note also the `Stack ID` value (you can find it in the URL `https://grafana.com/orgs/<your-org>/stacks/<stack-id>`).
9 |
10 | ## With Grafana Agent
11 |
12 | Make sure you have an agent installed and running in your cluster.
13 | The host to send your traces to is the hostname of the `URL` noted above, without the `https://` prefix and the trailing `/tempo`.
14 |
15 | Add this to the configuration of your agent:
16 |
17 | ```yaml
18 | traces:
19 |   configs:
20 |     - name: default
21 |       remote_write:
22 |         - endpoint: <tempo-hostname>:443
23 |           basic_auth:
24 |             username: <stack-id>
25 |             password: <api-key>
26 |       receivers:
27 |         otlp:
28 |           protocols:
29 |             grpc:
30 | ```
31 |
32 |
33 | Note the endpoint: the value you need is the hostname without `https://` and the trailing
34 | `/tempo`, plus port `443`. So `https://tempo-us-central1.grafana.net/tempo` should be used as
35 | `tempo-us-central1.grafana.net:443`.
36 |
37 |
38 | Set this as an environment variable in your app:
39 |
40 | ```bash
41 | TRACELOOP_BASE_URL=http://<grafana-agent-hostname>:4318
42 | ```
43 |
44 | ## Without Grafana Agent
45 |
46 | Grafana Cloud currently only supports sending traces to some of its regions.
47 | Before you begin, [check out this list](https://grafana.com/docs/grafana-cloud/monitor-infrastructure/otlp/send-data-otlp/)
48 | and make sure your region is supported.
49 |
50 | In a terminal, type:
51 |
52 | ```bash
53 | echo -n "<stack-id>:<api-key>" | base64
54 | ```
55 |
56 | Note the result, which is the base64 encoding of your stack ID and API key.
57 |
58 | The URL you'll use as the destination for the traces depends on your region/zone. For example, for AWS US Central this will be `prod-us-central-0`.
59 | See [here](https://grafana.com/docs/grafana-cloud/monitor-infrastructure/otlp/send-data-otlp/#before-you-begin) for the names of the zones you should use below.
60 |
61 | Then, you can set the following environment variables when running your app with Traceloop SDK installed:
62 |
63 | ```bash
64 | TRACELOOP_BASE_URL=https://otlp-gateway-<zone>.grafana.net/otlp
65 | TRACELOOP_HEADERS="Authorization=Basic%20<base64-encoded-token>"
66 | ```
67 |
--------------------------------------------------------------------------------
/openllmetry/integrations/honeycomb.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Honeycomb"
3 | ---
4 |
5 |
6 |
7 |
8 |
9 | Since Honeycomb natively supports OpenTelemetry, you just need to route the traces to Honeycomb's endpoint and set the
10 | API key:
11 |
12 | ```bash
13 | TRACELOOP_BASE_URL=https://api.honeycomb.io
14 | TRACELOOP_HEADERS="x-honeycomb-team=<your-api-key>"
15 | ```
16 |
--------------------------------------------------------------------------------
/openllmetry/integrations/hyperdx.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "HyperDX"
3 | ---
4 |
5 |
6 |
7 |
8 |
9 | HyperDX is an [open source observability platform](https://github.com/hyperdxio/hyperdx) that natively supports OpenTelemetry.
10 | Just route the traces to HyperDX's endpoint and set the API key:
11 |
12 | ```bash
13 | TRACELOOP_BASE_URL=https://in-otel.hyperdx.io
14 | TRACELOOP_HEADERS="authorization=<your-api-key>"
15 | ```
16 |
--------------------------------------------------------------------------------
/openllmetry/integrations/newrelic.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "New Relic"
3 | ---
4 |
5 | Since New Relic natively supports OpenTelemetry, you just need to route the traces to New Relic's endpoint and set the API key:
6 |
7 | ```bash
8 | TRACELOOP_BASE_URL=https://otlp.nr-data.net:443
9 | TRACELOOP_HEADERS="api-key=<your-license-key>"
10 | ```
11 |
12 | For more information, check out the [New Relic OpenTelemetry docs](https://docs.newrelic.com/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/get-started/opentelemetry-set-up-your-app/#review-settings).
13 |
--------------------------------------------------------------------------------
/openllmetry/integrations/otel-collector.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "OpenTelemetry Collector"
3 | ---
4 |
5 | Since Traceloop emits standard OTLP/HTTP (the standard OpenTelemetry protocol), you can use any OpenTelemetry Collector, which gives you the flexibility
6 | to connect to any backend you want.
7 | First, [deploy an OpenTelemetry Collector](https://opentelemetry.io/docs/kubernetes/operator/automatic/#create-an-opentelemetry-collector-optional)
8 | in your cluster.
9 | Then, point the output of the Traceloop SDK to the collector by setting:
10 |
11 | ```bash
12 | TRACELOOP_BASE_URL=https://<collector-hostname>:4318
13 | ```
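   |
   | For reference, a minimal Collector configuration that accepts these spans over OTLP/HTTP might look like this (recent Collector versions ship a `debug` exporter that just logs received spans; swap in the exporter for your backend):
   |
   | ```yaml
   | receivers:
   |   otlp:
   |     protocols:
   |       http:
   |         endpoint: 0.0.0.0:4318
   |
   | exporters:
   |   debug:
   |
   | service:
   |   pipelines:
   |     traces:
   |       receivers: [otlp]
   |       exporters: [debug]
   | ```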
14 |
--------------------------------------------------------------------------------
/openllmetry/integrations/signoz.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "SigNoz"
3 | ---
4 |
5 |
6 |
7 |
8 |
9 | SigNoz is an [open-source observability platform](https://github.com/signoz/signoz).
10 |
11 | ### With SigNoz cloud
12 |
13 | Since SigNoz natively supports OpenTelemetry, you just need to route the traces to SigNoz's endpoint and set the
14 | ingestion key (note that there is no `https://` prefix in the URL):
15 |
16 | ```bash
17 | TRACELOOP_BASE_URL=ingest.{region}.signoz.cloud
18 | TRACELOOP_HEADERS="signoz-access-token=<your-ingestion-key>"
19 | ```
20 |
21 | Where `region` depends on the choice of your SigNoz cloud region:
22 | | Region | Endpoint |
23 | | ------ | -------- |
24 | | US | ingest.us.signoz.cloud:443 |
25 | | IN | ingest.in.signoz.cloud:443 |
26 | | EU | ingest.eu.signoz.cloud:443 |
27 |
28 | Validate your configuration by [following these instructions](https://signoz.io/docs/instrumentation/python/#validating-instrumentation-by-checking-for-traces).
29 |
30 | ### With Self-Hosted version
31 |
32 | Once you have an up and running instance of SigNoz, use the following environment variables to export your traces:
33 |
34 | ```bash
35 | TRACELOOP_BASE_URL="http://localhost:4318"
36 | ```
37 |
--------------------------------------------------------------------------------
/openllmetry/integrations/traceloop.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Traceloop"
3 | ---
4 |
5 |
6 |
7 |
8 |
9 | Go to [Traceloop Environments Management](https://app.traceloop.com/settings/api-keys)
10 | (you can also reach here by clicking on **Environments** on the left-hand navigation bar).
11 | Click on **Generate API Key**. Click **Copy Key** to copy the API key as it won't be shown again.
12 |
13 | Set the API key as an environment variable in your app named `TRACELOOP_API_KEY`.
14 | For on-prem deployments, set `TRACELOOP_BASE_URL` to the URL of your Traceloop instance.
15 |
16 | Done! You'll get instant visibility into everything that's happening with your LLM.
17 | If you're calling a vector DB, or any other external service or database, you'll also see it in the Traceloop dashboard.
18 |
--------------------------------------------------------------------------------
/openllmetry/introduction.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "What is OpenLLMetry?"
3 | ---
4 |
5 |
6 |
7 |
8 |
9 | OpenLLMetry allows you to easily start monitoring and debugging the execution of your LLM app.
10 | Tracing is done in a non-intrusive way, built on top of OpenTelemetry.
11 | You can choose to export the traces to Traceloop, or to your existing observability stack.
12 |
13 |
14 | You can use OpenLLMetry whether you use a framework like LangChain, or
15 | directly interact with a foundation model API.
16 |
17 |
18 |
19 |
20 | ```python Python
21 | import openai
22 | from traceloop.sdk import Traceloop
23 | from traceloop.sdk.decorators import workflow
24 |
25 | Traceloop.init(app_name="joke_generation_service")
26 |
27 | @workflow(name="joke_creation")
28 | def create_joke():
29 | completion = openai.ChatCompletion.create(
30 | model="gpt-3.5-turbo",
31 | messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
32 | )
33 |
34 | return completion.choices[0].message.content
35 | ```
36 |
37 | ```js Typescript
38 | import * as traceloop from "@traceloop/node-server-sdk";
39 | import OpenAI from "openai";
40 |
41 | traceloop.initialize({ appName: "joke_generation_service" });
42 | const openai = new OpenAI();
43 |
44 | class MyLLM {
45 |   @traceloop.workflow("joke_creation")
46 |   async createJoke() {
47 |     const completion = await openai.chat.completions.create({
48 |       model: "gpt-3.5-turbo",
49 |       messages: [{ role: "user", content: "Tell me a joke about opentelemetry" }],
50 |     });
51 |
52 |     return completion.choices[0].message.content;
53 |   }
54 | }
55 |
56 |
57 |
58 | ## Getting Started
59 |
60 | Select from the following guides to learn more about how to use OpenLLMetry:
61 |
62 |
63 |
68 | Set up Traceloop Python SDK in your project
69 |
70 |
75 | Set up Traceloop Javascript SDK in your project
76 |
77 |
78 | Learn how to annotate your code to enrich your traces
79 |
80 |
85 | Learn how to connect to your existing observability stack
86 |
87 |
92 | Manage your prompts and rollout changes with confidence
93 |
94 |
95 |
--------------------------------------------------------------------------------
/openllmetry/prompts/quick-start.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Quick Start"
3 | description: "Manage your prompts on the Traceloop platform"
4 | ---
5 |
6 | You can use Traceloop to manage your prompts and model configurations.
7 | That way you can easily experiment with different prompts, and rollout changes gradually and safely.
8 |
9 | ### Define a prompt in the Prompt Registry
10 |
11 |
12 |
13 |
14 |
15 | First, define the prompt, and deploy it to the environment in which you want to use it.
16 |
17 | For more information see [Registry Documentation](/openllmetry/prompts/registry)
18 |
19 | ### Use the prompt in your code with the SDK
20 |
21 | Then, you can retrieve your prompt (in this example with the key `joke_generator` and a single variable `persona` as defined above) with `get_prompt`:
22 |
23 | ```python
24 | from traceloop.sdk.prompts import get_prompt
25 |
26 | prompt_args = get_prompt("joke_generator", persona="pirate")
27 | completion = openai.ChatCompletion.create(**prompt_args)
28 | ```
29 |
30 |
31 | The returned variable `prompt_args` is compatible with the API used by the
32 | foundation models SDKs (OpenAI, Anthropic, etc.) which means you can directly
33 | plug in the response to the appropriate API call.
34 |
35 |
36 | For more information see [SDK Usage Documentation](/openllmetry/prompts/sdk-usage)
37 |
--------------------------------------------------------------------------------
/openllmetry/prompts/registry.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Prompt Registry"
3 | description: "Manage your prompts on the Traceloop platform"
4 | ---
5 |
6 |
7 |
8 |
9 |
10 | You can use Traceloop to manage your prompts and model configurations.
11 | That way, you can easily experiment with different prompts and roll out changes gradually and safely.
12 |
13 | To enable this on the SDK, you'll need to set the environment variable: `TRACELOOP_PROMPT_REGISTRY_ENABLED=true`.
14 |
15 | Assume you created the following prompt with the key `joke_generator` in the UI:
16 |
17 | ```
18 | Tell me a joke about OpenTelemetry as a {{persona}}
19 | ```
20 |
21 | Then, you can retrieve it with `get_prompt`:
22 |
23 | ```python
24 | from traceloop.sdk.prompts import get_prompt
25 |
26 | prompt_args = get_prompt("joke_generator", persona="pirate")
27 | completion = openai.ChatCompletion.create(**prompt_args)
28 | ```
29 |
30 |
31 | The returned variable `prompt_args` is compatible with the API used by the
32 | foundation model SDKs (OpenAI, Anthropic, etc.), which means you can plug the
33 | response directly into the appropriate API call.
34 |
35 |
36 | ## Quick Start
37 |
38 | ### Configuring your prompt
39 |
40 |
41 |
42 |
43 |
44 | Define both the prompt template (system and/or user prompts) and the model configuration (all parameters can be found in the right side menu) you want to use.
45 |
46 | Initially, prompts are created in `Draft Mode`. In this mode, you can make changes to the prompt and configuration. You can also test your prompt in the playground (see below). While in `Draft Mode`, prompts can only be deployed to the `Development` environment.
47 | Once you are satisfied with the prompt, you can publish it and make it available to deploy in all environments. Once published, the prompt version cannot be edited anymore.
48 |
49 |
50 | Only prompt versions in `Draft Mode` can be edited. While in this mode, they
51 | can only be deployed to the `development` environment. `Publish` a prompt to
52 | make it available for use in `staging` & `production` environments as well.
53 | You can add a name to help you identify this version. Once published, changes
54 | to this version will not be possible.
55 |
56 |
57 | If you want to make changes to your prompt, simply create a new version by clicking on the `New Version` button. New versions will be created in `Draft Mode`.
58 | For more information on deploying prompts, see the section below.
59 |
60 |
61 | Your prompt can include variables. Variables are defined according to the
62 | syntax of the parser specified. For example, if using `jinja2`, the syntax
63 | will be `{{ variable_name }}`. You can then pass variable values to the SDK
64 | when calling `get_prompt`. See the example on the [SDK
65 | Usage](/openllmetry/prompts/sdk-usage) section.
66 |
67 |
68 |
69 | If you rename variables, or add or remove variables, you will be required
70 | to create a new prompt.
71 |
72 |
73 | ### Testing your prompt configuration (Prompt Playground)
74 |
75 | The prompt playground lets you iterate on and refine your prompt before deploying it.
76 |
77 |
78 |
79 |
80 |
81 | To test your newly defined prompt, use the playground feature.
82 | Simply click on the `Test` button in the playground tab at the bottom of the screen.
83 |
84 | If your prompt includes variables, then you need to define values for them before testing.
85 | Choose `Variables` in the right side menu and assign a value to each.
86 |
87 | Once you click the `Test` button, your prompt template will be rendered with the values you provided and sent to the configured LLM using the model configuration you defined.
88 | The completion response (including token usage) will be displayed in the playground.
89 |
90 | ### Deploying your prompt
91 |
92 | Choose the `Deploy` tab to navigate to the deployments page for your prompt.
93 |
94 | Here, you can see all recent prompt versions, and which environments they are deployed to.
95 | Simply click on the `Deploy` button to deploy a prompt version to an environment. Similarly, click `Rollback` to revert to a previous prompt version for a specific environment.
96 |
97 |
98 | Prompts in `Draft Mode` can only be deployed to the `Development` environment.
99 |
100 |
101 |
102 | As a safeguard, you cannot deploy a prompt to the `Staging` environment before
103 | first deploying it to `Development`. Similarly, you cannot deploy to
104 | `Production` without first deploying to `Staging`.
105 |
106 |
107 | To fetch prompts from a specific environment, you must supply that environment's API key to the Traceloop SDK. See the [SDK Configuration](/openllmetry/configuration#api-key) for details.
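108 |
109 | For example, a minimal sketch (assuming, per the configuration guide, that the key is provided via the `TRACELOOP_API_KEY` environment variable):
110 |
111 | ```python
112 | import os
113 |
114 | from traceloop.sdk import Traceloop
115 |
116 | # Use the API key generated for the environment you want to fetch prompts from
117 | # (e.g. your staging key to fetch prompts deployed to staging).
118 | os.environ["TRACELOOP_API_KEY"] = "<environment-api-key>"
119 |
120 | Traceloop.init(app_name="my_service")
121 | ```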
108 |
109 |
110 |
111 |
112 |
--------------------------------------------------------------------------------
/openllmetry/prompts/sdk-usage.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Prompt Usage"
3 | description: "Use your managed prompts with the Traceloop SDKs"
4 | ---
5 |
6 | ### Using your prompt
7 |
8 | To enable the prompt feature on the Traceloop SDK, you'll need to set the environment variable: `TRACELOOP_PROMPT_REGISTRY_ENABLED=true`.
9 |
10 | You must also configure the SDK with your API Key for a particular environment. See the [SDK documentation](/openllmetry/configuration#api-key) for more information.
11 |
12 |
13 | The Traceloop SDK will automatically fetch updates you make to your prompts.
14 | This way, you can deploy your prompt changes without having to redeploy your
15 | application.
16 |
17 |
18 | After creating a prompt with the key `joke_generator` in the UI with the following template:
19 |
20 | ```
21 | Tell me a joke about OpenTelemetry as a {{persona}}
22 | ```
23 |
24 | Then, you can retrieve it in your code using `get_prompt`:
25 |
26 | ```python
27 | from traceloop.sdk.prompts import get_prompt
28 |
29 | prompt_args = get_prompt("joke_generator", persona="pirate")
30 | completion = openai.ChatCompletion.create(**prompt_args)
31 | ```
32 |
33 |
34 | The returned variable `prompt_args` is compatible with the API used by the
35 | foundation model SDKs (OpenAI, Anthropic, etc.), which means you can plug the
36 | response directly into the appropriate API call.
37 |
38 |
--------------------------------------------------------------------------------
/openllmetry/tracing/association.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Associating Entities with Traces"
3 | description: "How to associate traces with entities in your own application"
4 | ---
5 |
6 | Each trace you run is usually connected to entities in your own application -
7 | things like `user_id`, `chat_id`, or anything else that is tied to the flow that triggered the trace.
8 |
9 | OpenLLMetry allows you to easily mark traces with these IDs so you can track them in the UI.
10 |
11 | ```python Python
12 | Tracer.set_association_properties({ "user_id": "user12345", "chat_id": "chat12345" })
13 | ```
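14 |
15 | Typically, you'd set these properties at the start of the flow that handles a given request, before any LLM calls run. A minimal sketch (assuming `Tracer` is imported from the Traceloop SDK, where the exact import path may vary between SDK versions, and `answer_with_llm` is a hypothetical helper):
16 |
17 | ```python Python
18 | from traceloop.sdk.decorators import workflow
19 |
20 | @workflow(name="chat_reply")
21 | def reply(user_id: str, chat_id: str, message: str):
22 |     # Tag every span in this trace with your application's own IDs.
23 |     # (assumes Tracer is imported from the Traceloop SDK; import path varies by version)
24 |     Tracer.set_association_properties({"user_id": user_id, "chat_id": chat_id})
25 |     return answer_with_llm(message)  # hypothetical helper that calls the LLM
26 | ```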
14 |
15 |
16 | Older versions of OpenLLMetry (before 0.0.62) used the `set_correlation_id`
17 | method to set a single ID. If you're using this API, you should switch to the
18 | new `set_association_properties` method. `set_correlation_id` will be removed
19 | in a future version.
20 |
21 |
--------------------------------------------------------------------------------
/openllmetry/tracing/decorators.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Decorators"
3 | description: "Enrich your traces by annotating chains and workflows in your app"
4 | ---
5 |
6 | Traceloop SDK supports several decorators that can be used instead of manually starting a span.
7 | You also get extra visibility into your LLM application structure.
8 |
9 | ## Workflows and Tasks
10 |
11 | A workflow, sometimes called a "chain", is a multi-step process that can be traced as a single unit by adding a decorator.
12 |
13 |
14 |
15 |
16 | Use it as `@workflow(name="my_workflow")`.
17 | We will consider every call to OpenAI as a distinct step (or task). You can even annotate the task with a name, using `@task(name="my_task")`.
18 |
19 |
20 | The `name` argument is optional. If you don't provide it, we will use the
21 | function name as the workflow or task name.
22 |
23 |
24 | ```python
25 | import openai
26 | from traceloop.sdk.decorators import workflow, task
27 | @task(name="joke_creation")
28 | def create_joke():
29 | completion = openai.ChatCompletion.create(
30 | model="gpt-3.5-turbo",
31 | messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
32 | )
33 |
34 | return completion.choices[0].message.content
35 |
36 | @task(name="signature_generation")
37 | def generate_signature(joke: str):
38 | completion = openai.Completion.create(
39 |         model="text-davinci-003",
40 | prompt="add a signature to the joke:\n\n" + joke,
41 | )
42 |
43 | return completion.choices[0].text
44 |
45 |
46 | @workflow(name="pirate_joke_generator")
47 | def joke_workflow():
48 | eng_joke = create_joke()
49 | pirate_joke = translate_joke_to_pirate(eng_joke)
50 | signature = generate_signature(pirate_joke)
51 | print(pirate_joke + "\n\n" + signature)
52 | ```
53 |
54 |
55 |
56 |
57 | This feature is only available in Typescript.
58 |
59 |
60 | Update `tsconfig.json` to enable decorators:
61 |
62 | ```json
63 | {
64 | "compilerOptions": {
65 | "experimentalDecorators": true
66 | }
67 | }
68 | ```
69 |
70 | Use it in your code as `@traceloop.workflow("my_workflow")`; note that decorators can only be used on class methods.
71 | We will consider every call to OpenAI as a distinct step (or task). You can even annotate the task with a name, using `@traceloop.task("my_task")`.
72 |
73 |
74 | The name is optional. If you don't provide it, we will use the function name
75 | as the workflow or task name.
76 |
77 |
78 | ```js
79 | import * as traceloop from "@traceloop/node-server-sdk";
80 | import OpenAI from "openai";
81 |
82 | const openai = new OpenAI();
83 |
84 | class JokeCreation {
82 | @traceloop.task("joke_creation")
83 | async create_joke() {
84 |     const completion = await openai.chat.completions.create({
85 | model: "gpt-3.5-turbo",
86 | messages: [
87 | { role: "user", content: "Tell me a joke about opentelemetry" },
88 | ],
89 | });
90 |
91 | return completion.choices[0].message.content;
92 | }
93 |
94 | @traceloop.task("signature_generation")
95 | async generate_signature(joke: string) {
96 |     const completion = await openai.completions.create({
97 | model: "text-davinci-003",
98 | prompt: "add a signature to the joke:\n\n" + joke,
99 | });
100 |
101 | return completion.choices[0].text;
102 | }
103 |
104 | @traceloop.workflow("pirate_joke_generator")
105 | async joke_workflow() {
106 |     const eng_joke = await this.create_joke();
107 |     const pirate_joke = await translate_joke_to_pirate(eng_joke); // see the Agents section below
108 |     const signature = await this.generate_signature(pirate_joke);
109 | console.log(pirate_joke + "\n\n" + signature);
110 | }
111 | }
112 | ```
113 |
114 |
115 |
116 |
117 | ## Agents and Tools
118 |
119 | Similarly, if you use autonomous agents, you can use the `@agent` decorator to trace them as a single unit.
120 | Each tool should be marked with `@tool`.
121 |
122 |
123 |
124 |
125 | ```python
126 | import openai
127 | from traceloop.sdk.decorators import agent, tool
128 | @agent(name="joke_translation")
129 | def translate_joke_to_pirate(joke: str):
130 | completion = openai.ChatCompletion.create(
131 | model="gpt-3.5-turbo",
132 | messages=[{"role": "user", "content": f"Translate the below joke to pirate-like english:\n\n{joke}"}],
133 | )
134 |
135 | history_jokes_tool()
136 |
137 | return completion.choices[0].message.content
138 |
139 |
140 | @tool(name="history_jokes")
141 | def history_jokes_tool():
142 | completion = openai.ChatCompletion.create(
143 | model="gpt-3.5-turbo",
144 |         messages=[{"role": "user", "content": "get some history jokes"}],
145 | )
146 |
147 | return completion.choices[0].message.content
148 | ```
149 |
150 |
151 |
152 |
153 | Remember to set `experimentalDecorators` to `true` in your `tsconfig.json`.
154 |
155 | ```js
156 | import * as traceloop from "@traceloop/node-server-sdk";
157 | import OpenAI from "openai";
158 |
159 | const openai = new OpenAI();
160 |
161 | class Agent {
162 |   @traceloop.agent("joke_translation")
163 |   async translate_joke_to_pirate(joke: string) {
164 |     const completion = await openai.chat.completions.create({
165 |       model: "gpt-3.5-turbo",
166 |       messages: [
167 |         {
168 |           role: "user",
169 |           content: `Translate the below joke to pirate-like English:\n\n${joke}`,
170 |         },
171 |       ],
172 |     });
173 |
174 |     await this.history_jokes_tool();
175 |
176 |     return completion.choices[0].message.content;
177 |   }
178 |
179 |   @traceloop.tool("history_jokes")
180 |   async history_jokes_tool() {
181 |     const completion = await openai.chat.completions.create({
182 |       model: "gpt-3.5-turbo",
183 |       messages: [{ role: "user", content: "get some history jokes" }],
184 |     });
185 |
186 |     return completion.choices[0].message.content;
187 |   }
188 | }
189 | ```
186 |
187 |
188 |
189 | ## Decorating Classes (Python only)
190 |
191 | While the examples above show how to decorate functions, you can also decorate classes.
192 | In this case, you will also need to provide the name of the method that runs the workflow, task, agent or tool.
193 |
194 | ```python Python
195 | from traceloop.sdk.decorators import agent
196 |
197 | @agent(name="base_joke_generator", method_name="generate_joke")
198 | class JokeAgent:
199 | def generate_joke(self):
200 | completion = openai.ChatCompletion.create(
201 | model="gpt-3.5-turbo",
202 | messages=[{"role": "user", "content": "Tell me a joke about Traceloop"}],
203 | )
204 |
205 | return completion.choices[0].message.content
206 | ```
207 |
208 | ## Async methods and classes (Python only)
209 |
210 | All decorators have an equivalent async decorator.
211 |
212 | So, if you're decorating an `async` method, use `@aworkflow`, `@atask` and so forth.
213 |
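214 | For example, a minimal sketch using the async variants (this assumes they live alongside the sync decorators in `traceloop.sdk.decorators`, and uses the pre-1.0 `openai` client from the examples above, where `acreate` is the async counterpart of `create`):
215 |
216 | ```python
217 | import openai
218 | from traceloop.sdk.decorators import atask, aworkflow
219 |
220 | @atask(name="joke_creation")
221 | async def create_joke():
222 |     completion = await openai.ChatCompletion.acreate(
223 |         model="gpt-3.5-turbo",
224 |         messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
225 |     )
226 |     return completion.choices[0].message.content
227 |
228 | @aworkflow(name="joke_generation")
229 | async def joke_workflow():
230 |     return await create_joke()
231 | ```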
214 |
215 | Javascript supports async methods with the regular decorators,
216 | `traceloop.workflow()`, `traceloop.task()`, etc.
217 |
218 |
219 | See also the [separate section on using threads with OpenLLMetry](/openllmetry/tracing/python-threads).
--------------------------------------------------------------------------------
/openllmetry/tracing/privacy.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Privacy"
3 | ---
4 |
5 | By default, OpenLLMetry logs prompts, completions, and embeddings to span attributes.
6 | This gives you clear visibility into how your LLM application is working, and makes it easy to debug and evaluate the quality of its outputs.
7 |
8 | However, you may want to disable this logging for privacy reasons, as prompts and completions may contain highly sensitive data from your users.
9 | You may also simply want to reduce the size of your traces.
10 |
11 | To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`:
12 |
13 | ```bash
14 | TRACELOOP_TRACE_CONTENT=false
15 | ```
16 |
17 | The Traceloop SDK, as well as all individual instrumentations, will respect this setting.
18 |
19 | ## Enabling logging only for specific users
20 |
21 | You can decide to selectively enable prompt logging for specific users or workflows.
22 | To do that, first make sure content tracing is disabled globally:
23 |
24 | ```bash
25 | TRACELOOP_TRACE_CONTENT=false
26 | ```
27 |
28 | ### Using Traceloop
29 |
30 | We have an API to enable content tracing for specific users or workflows.
31 | See the [Traceloop API documentation](/api-reference/privacy/whitelist_user) for more information.
32 |
33 | ### Without Traceloop
34 |
35 | Set a key called `override_enable_content_tracing` in the OpenTelemetry context to `True` right before making the LLM call
36 | you want to trace with prompts.
37 | This will create a new context that will instruct instrumentations to log prompts and completions as span attributes.
38 |
39 | ```python Python
40 | from opentelemetry.context import attach, set_value
41 |
42 | attach(set_value("override_enable_content_tracing", True))
43 | ```
44 |
45 | Make sure to also disable it afterwards:
46 |
47 | ```python Python
48 | from opentelemetry.context import attach, set_value
49 |
50 | attach(set_value("override_enable_content_tracing", False))
51 | ```
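52 |
53 | Alternatively, `attach` returns a token that you can pass to `detach` to restore the previous context, instead of attaching a new value. A minimal sketch:
54 |
55 | ```python Python
56 | from opentelemetry.context import attach, detach, set_value
57 |
58 | token = attach(set_value("override_enable_content_tracing", True))
59 | try:
60 |     ...  # LLM calls made here will log prompts and completions
61 | finally:
62 |     detach(token)  # restore the previous (disabled) context
63 | ```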
52 |
--------------------------------------------------------------------------------
/openllmetry/tracing/python-threads.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Usage with Threads (Python)"
3 | description: "How to use OpenLLMetry with `ThreadPoolExecutor` and other thread-based libraries."
4 | ---
5 |
6 | Since many LLM operations tend to be I/O bound, it is often useful to use threads to perform multiple operations at once.
7 | Usually, you'll use the `ThreadPoolExecutor` class from the `concurrent.futures` module in the Python standard library, like this:
8 |
9 | ```python
10 | indexes = [pinecone.Index(f"index{i}") for i in range(3)]
11 | executor = ThreadPoolExecutor(max_workers=3)
12 | for i in range(3):
13 | executor.submit(indexes[i].query, [1.0, 2.0, 3.0], top_k=10)
14 | ```
15 |
16 | Unfortunately, this won't work as you expect and may cause you to see "broken" traces or missing spans.
17 | The reason lies in how OpenTelemetry (which is what we use under the hood in OpenLLMetry, hence the name)
18 | uses [Python's context](https://docs.python.org/3/library/contextvars.html) to propagate the trace.
19 | You'll need to explicitly propagate the context to the threads:
20 |
21 | ```python
22 | import contextvars
23 | import functools
24 | from concurrent.futures import ThreadPoolExecutor
25 |
26 | import pinecone
27 |
28 | indexes = [pinecone.Index(f"index{i}") for i in range(3)]
29 | executor = ThreadPoolExecutor(max_workers=3)
30 | for i in range(3):
31 |     ctx = contextvars.copy_context()  # snapshot the current OpenTelemetry context
32 |     executor.submit(
33 |         ctx.run,
34 |         functools.partial(indexes[i].query, [1.0, 2.0, 3.0], top_k=10),
35 |     )
36 | ```
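31 |
32 | If you submit many tasks, you can wrap this pattern in a small helper. A sketch (the `submit_with_context` name is ours, not part of the SDK):
33 |
34 | ```python
35 | import contextvars
36 | import functools
37 | from concurrent.futures import Future, ThreadPoolExecutor
38 |
39 | def submit_with_context(executor: ThreadPoolExecutor, fn, *args, **kwargs) -> Future:
40 |     # Copy the caller's context (including the active span) into the worker thread.
41 |     ctx = contextvars.copy_context()
42 |     return executor.submit(ctx.run, functools.partial(fn, *args, **kwargs))
43 | ```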
31 |
32 | Also check out the [full example](https://github.com/traceloop/openllmetry/blob/main/packages/sample-app/sample_app/thread_pool_example.py).
33 |
--------------------------------------------------------------------------------
/openllmetry/tracing/user-feedback.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Tracking User Feedback"
3 | ---
4 |
5 | When building LLM applications, it quickly becomes important to track user feedback on the results of your LLM workflows.
6 |
7 | Doing that with OpenLLMetry is easy. First, make sure you [associate your LLM workflow with unique identifiers](/openllmetry/tracing/association).
8 |
9 | Then, you can use the OpenLLMetry client-side SDK to track user feedback.
--------------------------------------------------------------------------------
/openllmetry/troubleshooting.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Troubleshooting"
3 | description: "Not seeing anything? Here are some things to check."
4 | ---
5 |
6 |
7 |
8 |
9 |
10 | We've all been there. You followed all the instructions, but you're not seeing any traces. Let's fix this.
11 |
12 | ## 1. Disable batch sending
13 |
14 | Sending traces in batches is useful in production, but can be confusing if you're working locally.
15 | Make sure you've [disabled batch sending](/openllmetry/configuration#disable-batch).
16 |
17 |
18 |
19 | ```python Python
20 | Traceloop.init(disable_batch=True)
21 | ```
22 |
23 | ```js Typescript / Javascript
24 | Traceloop.init({ disableBatch: true });
25 | ```
26 |
27 |
28 |
29 | ## 2. Check the logs
30 |
31 | When Traceloop initializes, it logs a message to the console, specifying the endpoint that it uses.
32 | If you don't see that, you might not be initializing the SDK properly.
33 |
34 | > **Traceloop exporting traces to `https://api.traceloop.com`**
35 |
36 | ## 3. Try outputting traces to the console
37 |
38 | Use the `ConsoleExporter` and check if you see traces in the console.
39 |
40 |
41 | ```python Python
42 | from opentelemetry.sdk.trace.export import ConsoleSpanExporter
43 |
44 | Traceloop.init(exporter=ConsoleExporter())
45 | ```
46 |
47 | ```js Typescript / Javascript
48 | import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-node";
49 |
50 | Traceloop.init({ exporter: new ConsoleSpanExporter() });
51 | ```
52 |
53 |
54 |
55 | If you see traces in the console, then you probably haven't configured the exporter properly.
56 | Check the [integration guide](/openllmetry/integrations) again, and make sure you're using the right endpoint and API key.
57 |
58 | ## 4. (Typescript / Javascript only) Did you import the SDK correctly?
59 |
60 | If you're using Typescript or Javascript, you must import the SDK before importing any LLM package like OpenAI, Pinecone, and others.
61 | ```js
62 | import * as traceloop from "@traceloop/node-server-sdk";
63 |
64 | import OpenAI from "openai";
65 | ```
66 |
67 | ## 5. Is your library supported yet?
68 |
69 | Check out the [OpenLLMetry](https://github.com/traceloop/openllmetry#readme) and [OpenLLMetry-JS](https://github.com/traceloop/openllmetry-js#readme) README files to see which libraries and versions are currently supported.
70 | Contributions are always welcome! If you want to add support for a library, please open a PR.
71 |
72 | ## 6. Talk to us!
73 |
74 | We're here to help.
75 | Reach out any time over
76 | [Slack](https://join.slack.com/t/traceloopcommunity/shared_invite/zt-1plpfpm6r-zOHKI028VkpcWdobX65C~g)
77 | or [email](mailto:dev@traceloop.com); we'd love to assist you.
78 |
--------------------------------------------------------------------------------