6 |
7 | The demo notebooks in this repo are organized into the following categories:
8 |
9 | * Getting Started
10 | * Data Administration
11 | * Data Science
12 | * Data Engineering
13 | * Machine Learning
14 | * Using Notebooks
15 |
99 | ## Load demo notebooks to Snowflake
100 |
101 | Each demo notebook is available for download as a `.ipynb` file. To load a demo notebook into Snowflake Notebooks, follow these steps:
102 |
103 | 1. On GitHub, open the folder for the tutorial you want and click its `.ipynb` file, such as [this one](https://github.com/Snowflake-Labs/notebook-demo/blob/main/My%20First%20Notebook%20Project/My%20First%20Notebook%20Project.ipynb). Download the file by clicking `Download raw file` at the top right (or fetch it programmatically, as shown after these steps).
104 |
105 | 2. Go to the Snowflake web interface, [Snowsight](https://app.snowflake.com), in your browser.
106 |
107 | 3. Navigate to `Projects` > `Notebooks` from the left menu bar.
108 |
109 | 4. Import the `.ipynb` file you downloaded by using the `Import from .ipynb` button located at the top right of the Notebooks page.
110 |
111 | 5. Select the file from your local directory and press `Open`.
112 |
113 | 6. A `Create Notebook` dialog will appear. Select a database, schema, and warehouse for the notebook and click `Create`.
114 |
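If you'd rather fetch a notebook file programmatically, here is a minimal sketch; it assumes the GitHub blob URL above maps to `raw.githubusercontent.com` in the usual way:

```python
# Download one demo notebook locally, then import it via `Import from .ipynb`.
import urllib.request

# Raw-file URL derived from the GitHub link in step 1 (assumed mapping)
url = (
    "https://raw.githubusercontent.com/Snowflake-Labs/notebook-demo/main/"
    "My%20First%20Notebook%20Project/My%20First%20Notebook%20Project.ipynb"
)
urllib.request.urlretrieve(url, "My First Notebook Project.ipynb")
print("Saved: My First Notebook Project.ipynb")
```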
115 | ## Resources
116 |
117 | Here are some resources to learn more about Snowflake Notebooks:
118 |
119 | * [Documentation](https://docs.snowflake.com/LIMITEDACCESS/snowsight-notebooks/ui-snowsight-notebooks-about)
120 | * [YouTube Playlist](https://www.youtube.com/playlist?list=PLavJpcg8cl1Efw8x_fBKmfA2AMwjUaeBI)
121 | * [Solution Center](https://developers.snowflake.com/solutions/?_sft_technology=notebooks)
122 |
123 | ## License
124 |
125 | All code and notebooks included in this repo are available under the Apache 2.0 license.
126 |
127 | ## Other links
128 |
129 | * Interested in developing and running interactive Streamlit apps in Snowflake? Check out the [Streamlit in Snowflake Demo Repo](https://github.com/Snowflake-Labs/snowflake-demo-streamlit/) to learn more!
130 |
--------------------------------------------------------------------------------
/Reference cells and variables/Reference cells and variables.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "id": "d40f15d5-0f06-4c81-b4e6-a760771d44c2",
6 | "metadata": {
7 | "collapsed": false,
8 | "name": "cell1"
9 | },
10 | "source": [
11 | "# Reference cells and variables in Snowflake Notebooks"
12 | ]
13 | },
14 | {
15 | "cell_type": "markdown",
16 | "id": "884f6e12-725b-4ae2-b9c9-5eaa4f4f964f",
17 | "metadata": {
18 | "collapsed": false,
19 | "name": "cell2"
20 | },
21 | "source": [
22 | "You can reference the results of previous cells in a cell in your notebook. This allows you to seamless switch between working in Python and SQL and reuse the results and variables.\n",
23 | "\n"
24 | ]
25 | },
26 | {
27 | "cell_type": "markdown",
28 | "id": "1ad40569-c979-461e-a2a0-98449785ba2f",
29 | "metadata": {
30 | "collapsed": false,
31 | "name": "cell3"
32 | },
33 | "source": [
34 | "## Referencing SQL output in Python cells\n",
35 | "\n",
36 | "We can access the SQL results directly in Python and convert the results to a Snowpark or pandas dataframe.\n",
37 | "\n",
38 | "The cell reference is based on the cell name. Note that if you change the cell name, you will also need to update the subsequent cell reference accordingly.\n",
39 | "\n",
40 | "\n",
41 | "### Example 1: Access SQL results as Snowpark or Pandas Dataframes"
42 | ]
43 | },
44 | {
45 | "cell_type": "code",
46 | "execution_count": null,
47 | "id": "3775908f-ca36-4846-8f38-5adca39217f2",
48 | "metadata": {
49 | "codeCollapsed": false,
50 | "language": "sql",
51 | "name": "cell4"
52 | },
53 | "outputs": [],
54 | "source": [
55 | "-- assign Query Tag to Session. This helps with performance monitoring and troubleshooting\n",
56 | "ALTER SESSION SET query_tag = '{\"origin\":\"sf_sit-is\",\"name\":\"notebook_demo_pack\",\"version\":{\"major\":1, \"minor\":0},\"attributes\":{\"is_quickstart\":0, \"source\":\"sql\", \"vignette\":\"reference_cells\"}}';\n",
57 | "\n",
58 | "SELECT 'FRIDAY' as SNOWDAY, 0.2 as CHANCE_OF_SNOW\n",
59 | "UNION ALL\n",
60 | "SELECT 'SATURDAY',0.5\n",
61 | "UNION ALL \n",
62 | "SELECT 'SUNDAY', 0.9;"
63 | ]
64 | },
65 | {
66 | "cell_type": "code",
67 | "execution_count": null,
68 | "id": "8d50cbf4-0c8d-4950-86cb-114990437ac9",
69 | "metadata": {
70 | "codeCollapsed": false,
71 | "language": "python",
72 | "name": "cell5"
73 | },
74 | "outputs": [],
75 | "source": [
76 | "snowpark_df = cell4.to_df()"
77 | ]
78 | },
79 | {
80 | "cell_type": "code",
81 | "execution_count": null,
82 | "id": "c695373e-ac74-4b62-a1f1-08206cbd5c81",
83 | "metadata": {
84 | "codeCollapsed": false,
85 | "language": "python",
86 | "name": "cell6"
87 | },
88 | "outputs": [],
89 | "source": [
90 | "pandas_df = cell4.to_pandas()"
91 | ]
92 | },
93 | {
94 | "cell_type": "markdown",
95 | "id": "585a54f7-5dd4-412a-9c42-89d5c5d5978c",
96 | "metadata": {
97 | "collapsed": false,
98 | "name": "cell7"
99 | },
100 | "source": [
101 | "## Referencing variables in SQL code\n",
102 | "\n",
103 | "You can use the Jinja syntax `{{..}}` to reference Python variables within your SQL queries as follows.\n",
104 | "\n",
105 | "### Example 2: Using Python variable value in a SQL query\n"
106 | ]
107 | },
108 | {
109 | "cell_type": "code",
110 | "execution_count": null,
111 | "id": "e73b633a-57d4-436c-baae-960c92c9cef6",
112 | "metadata": {
113 | "codeCollapsed": false,
114 | "collapsed": false,
115 | "language": "sql",
116 | "name": "cell8"
117 | },
118 | "outputs": [],
119 | "source": [
120 | "-- Create a dataset of countries\n",
121 | "CREATE OR REPLACE TABLE countries (\n",
122 | " country_name VARCHAR(100)\n",
123 | ");\n",
124 | "\n",
125 | "INSERT INTO countries (country_name) VALUES\n",
126 | " ('USA'),('Canada'),('United Kingdom'),('Germany'),('France'),\n",
127 | " ('Australia'),('Japan'),('China'),('India'),('Brazil');"
128 | ]
129 | },
130 | {
131 | "cell_type": "code",
132 | "execution_count": null,
133 | "id": "e7a6f119-4f67-4ef5-a35f-117a7f502475",
134 | "metadata": {
135 | "codeCollapsed": false,
136 | "language": "python",
137 | "name": "cell9"
138 | },
139 | "outputs": [],
140 | "source": [
141 | "c = \"'USA'\""
142 | ]
143 | },
144 | {
145 | "cell_type": "code",
146 | "execution_count": null,
147 | "id": "60a59077-a4b1-4699-81a5-645addd8ad6d",
148 | "metadata": {
149 | "codeCollapsed": false,
150 | "language": "sql",
151 | "name": "cell10"
152 | },
153 | "outputs": [],
154 | "source": [
155 | "-- Filter to record where country is USA\n",
156 | "SELECT * FROM countries WHERE COUNTRY_NAME = {{c}}"
157 | ]
158 | },
159 | {
160 | "cell_type": "markdown",
161 | "id": "decf8b5e-e804-439d-a186-3a329da12563",
162 | "metadata": {
163 | "name": "cell11"
164 | },
165 | "source": [
166 | "### Example 3: Using Python dataframe in a SQL query"
167 | ]
168 | },
169 | {
170 | "cell_type": "code",
171 | "execution_count": null,
172 | "id": "9b49d972-3966-4fa6-9457-f028b06484a3",
173 | "metadata": {
174 | "codeCollapsed": false,
175 | "language": "sql",
176 | "name": "cell12"
177 | },
178 | "outputs": [],
179 | "source": [
180 | "-- Create dataset with columns PRODUCT_ID, RATING, PRICE\n",
181 | "SELECT CONCAT('SNOW-',UNIFORM(1000,9999, RANDOM())) AS PRODUCT_ID, \n",
182 | " ABS(NORMAL(5, 3, RANDOM())) AS RATING, \n",
183 | " ABS(NORMAL(750, 200::FLOAT, RANDOM())) AS PRICE\n",
184 | "FROM TABLE(GENERATOR(ROWCOUNT => 100));"
185 | ]
186 | },
187 | {
188 | "cell_type": "code",
189 | "execution_count": null,
190 | "id": "b7040f85-0ab8-4bdb-a36e-33599b79ea54",
191 | "metadata": {
192 | "codeCollapsed": false,
193 | "language": "sql",
194 | "name": "cell13"
195 | },
196 | "outputs": [],
197 | "source": [
198 | "-- Filter to products where price is greater than 500\n",
199 | "SELECT * FROM {{cell12}} where PRICE > 500"
200 | ]
201 |   },
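  {
   "cell_type": "markdown",
   "id": "a1f2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c14",
   "metadata": {
    "collapsed": false,
    "name": "cell14"
   },
   "source": [
    "### Example 4: Combining cell and variable references\n",
    "\n",
    "Both reference styles can be mixed in a single query. Here `price_threshold` is a Python variable introduced purely for illustration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c15",
   "metadata": {
    "codeCollapsed": false,
    "language": "python",
    "name": "cell15"
   },
   "outputs": [],
   "source": [
    "# Illustrative threshold referenced by the next SQL cell\n",
    "price_threshold = 800"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c16",
   "metadata": {
    "codeCollapsed": false,
    "language": "sql",
    "name": "cell16"
   },
   "outputs": [],
   "source": [
    "-- Reference a prior SQL cell and a Python variable in the same query\n",
    "SELECT * FROM {{cell12}} WHERE PRICE > {{price_threshold}}"
   ]
  }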
202 | ],
203 | "metadata": {
204 | "kernelspec": {
205 | "display_name": "Streamlit Notebook",
206 | "name": "streamlit"
207 | }
208 | },
209 | "nbformat": 4,
210 | "nbformat_minor": 5
211 | }
212 |
--------------------------------------------------------------------------------
/Role_Based_Access_Auditing_with_Streamlit/environment.yml:
--------------------------------------------------------------------------------
1 | name: app_environment
2 | channels:
3 | - snowflake
4 | dependencies:
5 | - altair=*
6 | - pandas=*
7 |
--------------------------------------------------------------------------------
/Scheduled_Query_Execution_Report/Scheduled_Query_Execution_Report.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "kernelspec": {
4 | "display_name": "Streamlit Notebook",
5 | "name": "streamlit"
6 | }
7 | },
8 | "nbformat_minor": 5,
9 | "nbformat": 4,
10 | "cells": [
11 | {
12 | "cell_type": "markdown",
13 | "id": "cc4fb15e-f9db-44eb-9f60-1b9589b755cb",
14 | "metadata": {
15 | "name": "md_title",
16 | "collapsed": false,
17 | "resultHeight": 285
18 | },
19 | "source": "# Scheduled Query Execution Report\n\nA notebook to report on failed or long-running scheduled queries, providing insights into reliability issues.\n\nHere's a breakdown of the steps:\n1. Retrieve Data\n2. Convert Table to a DataFrame\n3. Create an Interactive Slider Widget & Data Preparation\n4. Create a Heatmap for Visualizing Scheduled Query Execution"
20 | },
21 | {
22 | "cell_type": "markdown",
23 | "id": "42a7b143-0779-4706-affc-c214213f55c5",
24 | "metadata": {
25 | "name": "md_retrieve_data",
26 | "collapsed": false,
27 | "resultHeight": 170
28 | },
29 | "source": "## 1. Retrieve Data\n\nFirstly, we'll write an SQL query to retrieve the execution history for scheduled queries, along with their status, timing metrics, and execution status. \n\nWe're obtaining this from the `snowflake.account_usage.task_history` table."
30 | },
31 | {
32 | "cell_type": "code",
33 | "id": "39f7713b-dd7a-41a2-872e-cc534c6dc4f6",
34 | "metadata": {
35 | "language": "sql",
36 | "name": "sql_data",
37 | "resultHeight": 439,
38 | "collapsed": false,
39 | "codeCollapsed": false
40 | },
41 | "outputs": [],
42 | "source": "SELECT \n name,\n database_name,\n query_id,\n query_text,\n schema_name,\n scheduled_time,\n query_start_time,\n completed_time,\n DATEDIFF('second', query_start_time, completed_time) as execution_time_seconds,\n state,\n error_code,\n error_message,\nFROM snowflake.account_usage.task_history\nWHERE scheduled_time >= DATEADD(days, -1, CURRENT_TIMESTAMP())\nORDER BY scheduled_time DESC;",
43 | "execution_count": null
44 | },
45 | {
46 | "cell_type": "markdown",
47 | "id": "870b69dd-aae0-4dd3-93f7-7adce1268159",
48 | "metadata": {
49 | "name": "md_dataframe",
50 | "collapsed": false,
51 | "resultHeight": 102
52 | },
53 | "source": "## 2. Convert Table to a DataFrame\n\nNext, we'll convert the table to a Pandas DataFrame."
54 | },
55 | {
56 | "cell_type": "code",
57 | "id": "4a5559a8-ef3a-40c3-a9d5-54602403adab",
58 | "metadata": {
59 | "language": "python",
60 | "name": "py_dataframe",
61 | "codeCollapsed": false,
62 | "resultHeight": 439,
63 | "collapsed": false
64 | },
65 | "outputs": [],
66 | "source": "sql_data.to_pandas()",
67 | "execution_count": null
68 | },
69 | {
70 | "cell_type": "markdown",
71 | "id": "59b04137-ca95-4fb8-b216-133272349a78",
72 | "metadata": {
73 | "name": "md_data_preparation",
74 | "collapsed": false,
75 | "resultHeight": 195
76 | },
77 | "source": "## 3. Create an Interactive Slider Widget & Data Preparation\n\nHere, we'll create an interactive slider for dynamically selecting the number of days to analyze. This would then trigger the filtering of the DataFrame to the specified number of days.\n\nNext, we'll reshape the data by calculating the frequency count by hour and task name, which will subsequently be used for creating the heatmap in the next step."
78 | },
79 | {
80 | "cell_type": "code",
81 | "id": "ba8fa564-d7d5-4d1c-9f6b-400f9c05ecae",
82 | "metadata": {
83 | "language": "python",
84 | "name": "py_data_preparation",
85 | "codeCollapsed": false,
86 | "resultHeight": 216
87 | },
88 | "outputs": [],
89 | "source": "import pandas as pd\nimport streamlit as st\nimport altair as alt\n\n# Create date filter slider\nst.subheader(\"Select time duration\")\ndays = st.slider('Select number of days to analyze', \n min_value=10, \n max_value=90, \n value=30, \n step=10)\n \n# Filter data according to day duration\nlatest_date = pd.to_datetime(df['SCHEDULED_TIME']).max()\ncutoff_date = latest_date - pd.Timedelta(days=days)\nfiltered_df = df[pd.to_datetime(df['SCHEDULED_TIME']) > cutoff_date].copy()\n \n# Prepare data for heatmap\nfiltered_df['HOUR_OF_DAY'] = pd.to_datetime(filtered_df['SCHEDULED_TIME']).dt.hour\nfiltered_df['HOUR_DISPLAY'] = filtered_df['HOUR_OF_DAY'].apply(lambda x: f\"{x:02d}:00\")\n \n# Calculate frequency count by hour and task name\nagg_df = filtered_df.groupby(['NAME', 'HOUR_DISPLAY', 'STATE']).size().reset_index(name='COUNT')\n\nst.warning(f\"Analyzing data for the last {days} days!\")",
90 | "execution_count": null
91 | },
92 | {
93 | "cell_type": "markdown",
94 | "id": "35f31e4e-95d5-4ee5-a146-b9e93dd9d570",
95 | "metadata": {
96 | "name": "md_heatmap",
97 | "collapsed": false,
98 | "resultHeight": 128
99 | },
100 | "source": "## 4. Create a Heatmap for Visualizing Scheduled Query Execution\n\nFinally, a heatmap and summary statistics table are generated that will allow us to gain insights on the task name and state (e.g. `SUCCEEDED`, `FAILED`, `SKIPPED`)."
101 | },
102 | {
103 | "cell_type": "code",
104 | "id": "e3049001-f3ba-4b66-ba54-c9f02f551992",
105 | "metadata": {
106 | "language": "python",
107 | "name": "py_heatmap",
108 | "codeCollapsed": false,
109 | "resultHeight": 791
110 | },
111 | "outputs": [],
112 | "source": "# Create heatmap\nchart = alt.Chart(agg_df).mark_rect(\n stroke='black',\n strokeWidth=1\n).encode(\n x=alt.X('HOUR_DISPLAY:O', \n title='Hour of Day',\n axis=alt.Axis(\n labels=True,\n tickMinStep=1,\n labelOverlap=False\n )),\n y=alt.Y('NAME:N', \n title='',\n axis=alt.Axis(\n labels=True,\n labelLimit=200,\n tickMinStep=1,\n labelOverlap=False,\n labelPadding=10\n )),\n color=alt.Color('COUNT:Q', \n title='Number of Executions'),\n row=alt.Row('STATE:N', \n title='Task State',\n header=alt.Header(labelAlign='left')),\n tooltip=[\n alt.Tooltip('NAME', title='Task Name'),\n alt.Tooltip('HOUR_DISPLAY', title='Hour'),\n alt.Tooltip('STATE', title='State'),\n alt.Tooltip('COUNT', title='Number of Executions')\n ]\n).properties(\n height=100,\n width=450\n).configure_view(\n stroke=None,\n continuousWidth=300\n).configure_axis(\n labelFontSize=10\n)\n\n# Display the chart\nst.subheader(f'Task Execution Frequency by State ({days} Days)')\nst.altair_chart(chart)\n\n# Optional: Display summary statistics\nst.subheader(\"Summary Statistics\")\nsummary_df = filtered_df.groupby('NAME').agg({\n 'STATE': lambda x: pd.Series(x).value_counts().to_dict()\n}).reset_index()\n\n# Format the state counts as separate columns\nstate_counts = pd.json_normalize(summary_df['STATE']).fillna(0).astype(int)\nsummary_df = pd.concat([summary_df['NAME'], state_counts], axis=1)\n\nst.dataframe(summary_df)",
113 | "execution_count": null
114 | },
115 | {
116 | "cell_type": "markdown",
117 | "id": "eb3e9b67-6a6e-4218-b17a-3f8564a04d18",
118 | "metadata": {
119 | "name": "md_resources",
120 | "collapsed": false,
121 | "resultHeight": 217
122 | },
123 | "source": "## Want to learn more?\n\n- Snowflake Docs on [Account Usage](https://docs.snowflake.com/en/sql-reference/account-usage) and [TASK_HISTORY view](https://docs.snowflake.com/en/sql-reference/account-usage/task_history)\n- More about [Snowflake Notebooks](https://docs.snowflake.com/en/user-guide/ui-snowsight/notebooks-use-with-snowflake)\n- For more inspiration on how to use Streamlit widgets in Notebooks, check out [Streamlit Docs](https://docs.streamlit.io/) and this list of what is currently supported inside [Snowflake Notebooks](https://docs.snowflake.com/en/user-guide/ui-snowsight/notebooks-use-with-snowflake#label-notebooks-streamlit-support)\n- Check out the [Altair User Guide](https://altair-viz.github.io/user_guide/data.html) for further information on customizing Altair charts"
124 | }
125 | ]
126 | }
--------------------------------------------------------------------------------
/Scheduled_Query_Execution_Report/environment.yml:
--------------------------------------------------------------------------------
1 | name: app_environment
2 | channels:
3 | - snowflake
4 | dependencies:
5 | - altair=*
6 | - pandas=*
7 |
--------------------------------------------------------------------------------
/Schema_Change_Tracker/environment.yml:
--------------------------------------------------------------------------------
1 | name: app_environment
2 | channels:
3 | - snowflake
4 | dependencies:
5 | - altair=*
6 | - pandas=*
7 |
--------------------------------------------------------------------------------
/Snowflake_Notebooks_Summit_2024_Demo/aileen_summit_notebook.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "kernelspec": {
4 | "display_name": "Streamlit Notebook",
5 | "name": "streamlit"
6 | }
7 | },
8 | "nbformat_minor": 5,
9 | "nbformat": 4,
10 | "cells": [
11 | {
12 | "cell_type": "markdown",
13 | "id": "30fcf7ae-e7f3-4a88-8afc-6568831d1c2a",
14 | "metadata": {
15 | "name": "Title",
16 | "collapsed": false,
17 | "resultHeight": 333
18 | },
19 | "source": "# :date: Send :orange[Daily Digest] of Fresh Foods Customer Reviews to :orange[Slack] \n\n## Features\n:gray[In this demo, we'll cover the following features:]\n- :gray[Calling Snowflake Cortex functions]\n- :gray[Integrating with external endpoints, i.e. Slack APIs]\n- :gray[Scheduling the notebook to run daily]\n- :gray[Keeping version control with Git]\n- :green[**BONUS**] :gray[- Run one notebook from another :knot: :knot: :knot:]"
20 | },
21 | {
22 | "cell_type": "markdown",
23 | "id": "754480e1-8983-4b6c-8ba7-270e9dc5994f",
24 | "metadata": {
25 | "name": "Step_1_Get_data",
26 | "collapsed": false,
27 | "resultHeight": 60
28 | },
29 | "source": "## Step :one: - Get the customer reviews data :speech_balloon:"
30 | },
31 | {
32 | "cell_type": "code",
33 | "id": "465f4adb-3571-483b-90da-cd3e576b9435",
34 | "metadata": {
35 | "language": "sql",
36 | "name": "Get_data",
37 | "collapsed": false,
38 | "codeCollapsed": false
39 | },
40 | "outputs": [],
41 | "source": "USE SCHEMA PUBLIC.PUBLIC;\nSELECT * FROM FRESH_FOODS_REVIEWS;",
42 | "execution_count": null
43 | },
44 | {
45 | "cell_type": "code",
46 | "id": "89f98a73-ef13-4a4e-a8c6-7ed8bf620930",
47 | "metadata": {
48 | "language": "python",
49 | "name": "Set_review_date",
50 | "collapsed": false
51 | },
52 | "outputs": [],
53 | "source": "from datetime import date\nimport streamlit as st\n\nreview_date = date(2024, 6, 4) # change to `date.today()` to always grab the current date \nst.write(review_date)",
54 | "execution_count": null
55 | },
56 | {
57 | "cell_type": "markdown",
58 | "id": "d3530f1e-55dd-43d9-9e09-0c0797116102",
59 | "metadata": {
60 | "name": "Step_2_Cortex",
61 | "collapsed": false,
62 | "resultHeight": 377
63 | },
64 | "source": "## Step :two: - Ask Snowflake Cortex to generate the daily digest :mega:\nSnowflake Cortex is a fully-managed service that enables access to industry-leading large language models (LLMs).\n- COMPLETE: Given a prompt, returns a response that completes the prompt. This function accepts either a single prompt or a conversation with multiple prompts and responses.\n\n- EMBED_TEXT_768: Given a piece of text, returns a vector embedding that represents that text.\n\n- EXTRACT_ANSWER: Given a question and unstructured data, returns the answer to the question if it can be found in the data.\n\n- SENTIMENT: Returns a sentiment score, from -1 to 1, representing the detected positive or negative sentiment of the given text.\n\n- SUMMARIZE: Returns a summary of the given text.\n\n- TRANSLATE: Translates given text from any supported language to any other."
65 | },
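  {
   "cell_type": "code",
   "id": "b2c3d4e5-f6a7-48b9-9c0d-1e2f3a4b5c6d",
   "metadata": {
    "language": "sql",
    "name": "Cortex_COMPLETE_sketch",
    "collapsed": false
   },
   "outputs": [],
   "source": "-- Minimal sketch of the COMPLETE function described above\n-- (the model name and prompt here are illustrative, not part of the demo)\nSELECT SNOWFLAKE.CORTEX.COMPLETE('snowflake-arctic', 'In one sentence, why are daily customer review digests useful?');",
   "execution_count": null
  },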
66 | {
67 | "cell_type": "code",
68 | "id": "58a6bf2f-34df-452d-946f-ba416b07118d",
69 | "metadata": {
70 | "language": "sql",
71 | "name": "Cortex_SUMMARIZE",
72 | "collapsed": false
73 | },
74 | "outputs": [],
75 | "source": "WITH CUSTOMER_REVIEWS AS(\n SELECT LISTAGG(DISTINCT REVIEW) AS REVIEWS \n FROM {{Get_data}} \n WHERE to_date(DATE) = '{{review_date}}' )\n\nSELECT SNOWFLAKE.CORTEX.SUMMARIZE(REVIEWS) FROM CUSTOMER_REVIEWS;",
76 | "execution_count": null
77 | },
78 | {
79 | "cell_type": "code",
80 | "id": "eea93bfd-ed59-4478-9931-b145261dab5b",
81 | "metadata": {
82 | "language": "python",
83 | "name": "Summary",
84 | "collapsed": false
85 | },
86 | "outputs": [],
87 | "source": "summary_text = Cortex_SUMMARIZE.to_pandas().iloc[0]['SNOWFLAKE.CORTEX.SUMMARIZE(REVIEWS)']\nst.write(summary_text)",
88 | "execution_count": null
89 | },
90 | {
91 | "cell_type": "code",
92 | "id": "4849cc86-d8b4-4b7c-a4b2-f73174798593",
93 | "metadata": {
94 | "language": "sql",
95 | "name": "Daily_avg_score",
96 | "collapsed": false
97 | },
98 | "outputs": [],
99 | "source": "SELECT AVG(SNOWFLAKE.CORTEX.SENTIMENT(REVIEW)) AS AVERAGE_RATING FROM FRESH_FOODS_REVIEWS WHERE DATE = '{{review_date}}';",
100 | "execution_count": null
101 | },
102 | {
103 | "cell_type": "markdown",
104 | "id": "c61883bc-ff05-4627-9558-681383d477f6",
105 | "metadata": {
106 | "name": "Step_3_Slack",
107 | "collapsed": false,
108 | "resultHeight": 60
109 | },
110 | "source": "## Step :three: - Send the summary and sentiment to Slack :tada:\n"
111 | },
112 | {
113 | "cell_type": "code",
114 | "id": "f69f5fcf-f470-48a6-a688-259440c95741",
115 | "metadata": {
116 | "language": "python",
117 | "name": "Send_to_Slack",
118 | "collapsed": false,
119 | "codeCollapsed": false
120 | },
121 | "outputs": [],
122 | "source": "import requests\nimport numpy as np\n\n\nheaders = {\n 'Content-Type': 'Content-type: application/json',\n}\n\n# Extract Daily_avg_score contents\nsentiment_score = str(np.round(Daily_avg_score.to_pandas().values[0][0], 2))\n\n\ndata = {\n\t\"blocks\": [\n\t\t{\n\t\t\t\"type\": \"section\",\n\t\t\t\"text\": {\n\t\t\t\t\"type\": \"mrkdwn\",\n\t\t\t\t\"text\": f\":mega: *Daily summary | Sentiment score: {sentiment_score} | {review_date}*\"\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"type\": \"section\",\n\t\t\t\"text\": {\n\t\t\t\t\"type\": \"mrkdwn\",\n\t\t\t\t\"text\": summary_text\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"type\": \"divider\"\n\t\t},\n\t\t{\n\t\t\t\"type\": \"context\",\n\t\t\t\"elements\": [\n\t\t\t\t{\n\t\t\t\t\t\"type\": \"mrkdwn\",\n\t\t\t\t\t\"text\": \"