├── pyomo_tou.py
├── .gitignore
├── final.pdf
├── bins_Dict.pkl
├── load_data.png
├── action_pie.png
├── opt_example.png
├── Deep
│   ├── bins_Dict.pkl
│   ├── bins_Dict_conservative.pkl
│   └── opt_lmp.py
├── opt_ex_energy.png
├── opt_ex_tariff.png
├── bins_Dict_conservative.pkl
├── cumulative_cost_comparison.png
├── README.md
├── LICENSE
├── analysis.py
├── opt_lmp.py
├── opt_tou.py
├── csv-mod.py
├── Ed
│   └── base-naive.py
└── residential.py

/pyomo_tou.py:
--------------------------------------------------------------------------------
1 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | /venv/
2 | temp.py
3 | opt_tou.py
4 | csv-mod.py
--------------------------------------------------------------------------------
/final.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/deep-daya/Grid_Scale_Energy_Storage_Q_Learning/HEAD/final.pdf
--------------------------------------------------------------------------------
/bins_Dict.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/deep-daya/Grid_Scale_Energy_Storage_Q_Learning/HEAD/bins_Dict.pkl
--------------------------------------------------------------------------------
/load_data.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/deep-daya/Grid_Scale_Energy_Storage_Q_Learning/HEAD/load_data.png
--------------------------------------------------------------------------------
/action_pie.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/deep-daya/Grid_Scale_Energy_Storage_Q_Learning/HEAD/action_pie.png
--------------------------------------------------------------------------------
/opt_example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/deep-daya/Grid_Scale_Energy_Storage_Q_Learning/HEAD/opt_example.png
--------------------------------------------------------------------------------
/Deep/bins_Dict.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/deep-daya/Grid_Scale_Energy_Storage_Q_Learning/HEAD/Deep/bins_Dict.pkl
--------------------------------------------------------------------------------
/opt_ex_energy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/deep-daya/Grid_Scale_Energy_Storage_Q_Learning/HEAD/opt_ex_energy.png
--------------------------------------------------------------------------------
/opt_ex_tariff.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/deep-daya/Grid_Scale_Energy_Storage_Q_Learning/HEAD/opt_ex_tariff.png
--------------------------------------------------------------------------------
/bins_Dict_conservative.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/deep-daya/Grid_Scale_Energy_Storage_Q_Learning/HEAD/bins_Dict_conservative.pkl
--------------------------------------------------------------------------------
/cumulative_cost_comparison.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/deep-daya/Grid_Scale_Energy_Storage_Q_Learning/HEAD/cumulative_cost_comparison.png -------------------------------------------------------------------------------- /Deep/bins_Dict_conservative.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/deep-daya/Grid_Scale_Energy_Storage_Q_Learning/HEAD/Deep/bins_Dict_conservative.pkl -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Final Project for AA 228: Decision-Making under Uncertainty 2 | 3 | Abstract: Grid-scale energy storage systems (ESSs) are capable of participating in multiple grid applications, with the potential for multiple value streams for a single system, termed "value-stacking". This paper introduces a framework for decision making, using reinforcement learning to analyze the financial advantage of value-stacking grid-scale energy storage, as applied to a single residential home with energy storage. A policy is developed via Q-learning to dispatch the energy storage between two grid applications: time-of-use (TOU) bill reduction and energy arbitrage on locational marginal price (LMP). The performance of the dispatch resulting from this learned policy is then compared to several other dispatch cases: a baseline of no dispatch, a naively-determined dispatch, and the optimal dispatches for TOU and LMP separately. The policy obtained via Q-learning successfully led to the lowest cost, demonstrating the financial advantage of value-stacking. 4 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 Kevin Moy 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /analysis.py: -------------------------------------------------------------------------------- 1 | # Analysis of our results!! 
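# This script assembles the cumulative-cost comparison used in the report: it loads the
# results for the five dispatch cases (no-dispatch baseline, naive TOU, optimal TOU,
# optimal LMP, and the Q-learned policy) from their respective CSVs, aligns them on the
# 15-minute timestamp index, and saves cumulative_cost_comparison.png along with a pie
# chart of the Q-learned action counts (action_pie.png).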
2 | # Kevin Moy 3 | #11/8/2020 4 | 5 | import pandas as pd 6 | import matplotlib.pyplot as plt 7 | import matplotlib.dates as mdates 8 | import os 9 | import numpy as np 10 | 11 | 12 | currentDirectory = os.getcwd().replace('\\', '/') 13 | edDir = currentDirectory + '/Ed' 14 | 15 | df_tou = pd.read_csv('opt_tou_5kW_14kWh.csv') 16 | df_base = pd.read_csv(edDir + '/base.csv') 17 | df_naive = pd.read_csv(edDir + '/naive.csv') 18 | df_rl = pd.read_csv('output2.csv') 19 | 20 | # Preprocess LMP data 21 | df_lmp = pd.read_csv('df_LMP.csv') 22 | lmpcv = df_lmp['Cumulative Additive Revenue'].to_numpy() 23 | lmpcv_rep = np.repeat(lmpcv, 4) 24 | lmpcv_load = df_base.R_base - lmpcv_rep[0:-4] 25 | 26 | # Obtain cumulative rewards 27 | df_rewards = pd.concat([df_base.local_15min, df_base.R_base, df_naive.R_naive, df_tou.cumulative_cost, 28 | pd.Series(lmpcv_load), -df_rl.cumulative_revenue], axis=1) 29 | df_rewards.set_index('local_15min', inplace=True) 30 | df_rewards.index = pd.to_datetime(df_rewards.index) 31 | df_rewards.columns = ['base', 'naive', 'optimal TOU', 'optimal LMP', 'Q-learned policy'] 32 | 33 | plot = df_rewards.plot() 34 | fig = plot.get_figure() 35 | fig.autofmt_xdate() 36 | plot.set_xlabel('Date') 37 | plot.set_ylabel('Cumulative cost, $') 38 | fig.savefig("cumulative_cost_comparison.png") 39 | 40 | actions = pd.concat([df_base.local_15min, df_rl.actions], axis=1) 41 | # actions.actions = pd.Categorical(actions.actions) 42 | plot2 = actions['actions'].value_counts().plot(kind='pie', legend=None) 43 | fig = plot2.get_figure() 44 | # plot2.set_xlabel('Action') 45 | plot2.set_ylabel('') 46 | fig.tight_layout() 47 | fig.savefig("action_pie.png") 48 | 49 | # action_types = ['LMP_buy', 'LMP_sell', 'wait', 'TOU_buy', 'TOU_discharge'] 50 | # actions['act_codes'] = pd.Categorical(actions.actions, categories=action_types).codes 51 | # actions.set_index('local_15min', inplace=True) 52 | # actions.index = pd.to_datetime(actions.index) 53 | # 54 | # actions.act_codes.plot() 55 | -------------------------------------------------------------------------------- /opt_lmp.py: -------------------------------------------------------------------------------- 1 | # File to compute optimal LMP dispatch from load data and tariff rate pricing 2 | # Kevin Moy, 11/3/2020 3 | 4 | import cvxpy as cp 5 | import pandas as pd 6 | import numpy as np 7 | import matplotlib.pyplot as plt 8 | import matplotlib.dates as mdates 9 | 10 | # Import load and tariff rate data; convert to numpy array and get length 11 | df = pd.read_csv("df_LMP.csv") 12 | # load = df.gridnopv[0:288].to_numpy() 13 | # tariff = df.tariff[0:288].to_numpy() 14 | # times = pd.to_datetime(df.local_15min[0:288]) 15 | lmp = df.LMP_kWh.to_numpy() 16 | times = pd.to_datetime(df.DATETIME) 17 | 18 | 19 | # Set environment variables: 20 | LMP_LEN = lmp.size # length of optimization 21 | BAT_KW = 5 # Rated power of battery, in kW, continuous power for the Powerwall 22 | BAT_KWH = 14 # Rated energy of battery, in kWh. 23 | # Note Tesla Powerwall rates their energy at 13.5kWh, but at 100% DoD, 24 | # but I have also seen that it's actually 14kwh, 13.5kWh usable 25 | BAT_KWH_MIN = 0.1 * BAT_KWH # Minimum SOE of battery, 10% of rated 26 | BAT_KWH_MAX = 0.9 * BAT_KWH # Maximum SOE of battery, 90% of rated 27 | BAT_KWH_INIT = 0.5 * BAT_KWH # Starting SOE of battery, 50% of rated 28 | HR_FRAC = ( 29 | 60 / 60 30 | ) # Data at 60 minute intervals, which is 1 hours. Need for conversion between kW <-> kWh 31 | 32 | # Create optimization variables. 
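# For reference, the convex program assembled below is pure price arbitrage on LMP
# (there is no load to serve in this case):
#   maximize    sum_t lmp[t] * (dch_pow[t] - chg_pow[t])
#   subject to  0 <= chg_pow[t] <= BAT_KW,   0 <= dch_pow[t] <= BAT_KW
#               BAT_KWH_MIN <= bat_eng[t] <= BAT_KWH_MAX
#               bat_eng[t] >= HR_FRAC * dch_pow[t]
#               bat_eng[t] == bat_eng[t-1] + HR_FRAC * (chg_pow[t-1] - dch_pow[t-1])
#               bat_eng[0] == BAT_KWH_INIT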
33 | chg_pow = cp.Variable(LMP_LEN) # Power charged to the battery 34 | dch_pow = cp.Variable(LMP_LEN) # Power discharged from the battery 35 | bat_eng = cp.Variable(LMP_LEN) # Energy stored in the battery 36 | 37 | # Create constraints. 38 | constraints = [bat_eng[0] == BAT_KWH_INIT] 39 | 40 | for i in range(LMP_LEN): 41 | constraints += [ 42 | chg_pow[i] <= BAT_KW, 43 | dch_pow[i] <= BAT_KW, 44 | bat_eng[i] <= BAT_KWH_MAX, # Prevent overcharging 45 | bat_eng[i] >= BAT_KWH_MIN, # Prevent undercharging 46 | bat_eng[i] 47 | >= HR_FRAC * dch_pow[i], # Prevent undercharging from overdischarging 48 | # Convexity requirements: 49 | chg_pow[i] >= 0, 50 | dch_pow[i] >= 0, 51 | bat_eng[i] >= 0, 52 | ] 53 | 54 | for i in range(1, LMP_LEN): 55 | constraints += [ 56 | bat_eng[i] 57 | == HR_FRAC * chg_pow[i - 1] + (bat_eng[i - 1] - HR_FRAC * dch_pow[i - 1]) 58 | ] # Energy flow constraints 59 | 60 | print("constraints complete") 61 | 62 | # Form objective. 63 | obj = cp.Maximize(lmp.T @ (dch_pow - chg_pow)) 64 | # obj = cp.Minimize(lod_pow.T @ np.ones(LOAD_LEN)) 65 | 66 | 67 | # Form and solve problem. 68 | prob = cp.Problem(obj, constraints) 69 | print("solving...") 70 | prob.solve() # Returns the optimal value. 71 | print("status:", prob.status) 72 | print("optimal value", prob.value) 73 | 74 | # Calculate relevant quantities. 75 | bat_pow = dch_pow.value - chg_pow.value 76 | cumulative_revenue = np.cumsum(bat_pow * lmp) 77 | 78 | 79 | # Save output to CSV. 80 | print("saving to CSV") 81 | outputdf = pd.DataFrame( 82 | np.transpose([bat_pow, bat_eng.value, lmp, cumulative_revenue]) 83 | ) 84 | outputdf.columns = [ 85 | "battery_power", 86 | "battery_energy", 87 | "lmp", 88 | "cumulative_cost", 89 | ] 90 | outputdf.set_index(times, inplace=True) 91 | outputdf.to_csv("opt_lmp_5kW_14kWh.csv") 92 | 93 | 94 | # PLOTTING ! 
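# The figure below overlays the optimal battery power schedule (left axis) with the LMP
# price series (right axis) and is saved as opt_ex_lmp.png.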
95 | 96 | fig, ax1 = plt.subplots(1, 1, figsize=(10, 6)) 97 | fig.autofmt_xdate() 98 | plt.gcf().autofmt_xdate() 99 | xfmt = mdates.DateFormatter("%m-%d-%y %H:%M") 100 | ax1.xaxis.set_major_formatter(xfmt) 101 | ax1.set_xlabel("Date") 102 | ax1.set_ylabel("Power, kW") 103 | p1 = ax1.plot(times, bat_pow) 104 | 105 | color = "tab:red" 106 | ax2 = ax1.twinx() 107 | ax2.set_ylabel("Energy Price, $/kWh", color=color) 108 | p4 = ax2.plot(times, lmp, color=color) 109 | ax2.tick_params(axis="y", labelcolor=color) 110 | ax2.set_ylim([0, 1.1 * max(lmp)]) 111 | ax2.xaxis.set_major_formatter(xfmt) 112 | 113 | plt.legend( 114 | (p1[0]), 115 | ("Battery Power"), 116 | loc="best", 117 | ) 118 | fig.tight_layout() # otherwise the right y-label is slightly clipped 119 | 120 | plt.savefig("opt_ex_lmp.png") 121 | 122 | # for i in range(len(data_frame)): 123 | # if i % 1000 == 0: print(i) 124 | # constraints += [rate[i] <= discharge_max, # Rate should be lower than or equal to max rate, 125 | # rate[i] >= charge_max, 126 | # E[i] <= SOC_max, # Overall kW should be within the range of [SOC_min,SOC_max] 127 | # E[i] >= SOC_min] 128 | # revenue += prices[i] * ( 129 | # rate[i]) # Revenue = sum of (prices ($/kWh) * (energy sold (kW) * 1hr - energy bought (kW) * 1hr) at timestep t) 130 | # for i in range(1, len(data_frame)): 131 | # if i % 1000 == 0: print(i) 132 | # constraints += [E[i] == E[i - 1] + rate[i - 1]] # Current SOC constraint 133 | # constraints += [E[0] == random.uniform(SOC_min, SOC_max), rate[0] == 0] # create first time step constraints -------------------------------------------------------------------------------- /opt_tou.py: -------------------------------------------------------------------------------- 1 | # File to compute optimal TOU dispatch from load data and tariff rate pricing 2 | # Kevin Moy, 11/3/2020 3 | 4 | import cvxpy as cp 5 | import pandas as pd 6 | import numpy as np 7 | import matplotlib.pyplot as plt 8 | import matplotlib.dates as mdates 9 | 10 | # Import load and tariff rate data; convert to numpy array and get length 11 | df = pd.read_csv("load_tariff.csv") 12 | # load = df.gridnopv[0:288].to_numpy() 13 | # tariff = df.tariff[0:288].to_numpy() 14 | # times = pd.to_datetime(df.local_15min[0:288]) 15 | load = df.gridnopv.to_numpy() 16 | tariff = df.tariff.to_numpy() 17 | times = pd.to_datetime(df.local_15min) 18 | 19 | 20 | # Set environment variables: 21 | LOAD_LEN = load.size # length of optimization 22 | BAT_KW = 5 # Rated power of battery, in kW, continuous power for the Powerwall 23 | BAT_KWH = 14 # Rated energy of battery, in kWh. 24 | # Note Tesla Powerwall rates their energy at 13.5kWh, but at 100% DoD, 25 | # but I have also seen that it's actually 14kwh, 13.5kWh usable 26 | BAT_KWH_MIN = 0.1 * BAT_KWH # Minimum SOE of battery, 10% of rated 27 | BAT_KWH_MAX = 0.9 * BAT_KWH # Maximum SOE of battery, 90% of rated 28 | BAT_KWH_INIT = 0.5 * BAT_KWH # Starting SOE of battery, 50% of rated 29 | HR_FRAC = ( 30 | 15 / 60 31 | ) # Data at 15 minute intervals, which is 0.25 hours. Need for conversion between kW <-> kWh 32 | 33 | # Create optimization variables. 34 | grd_pow = cp.Variable(LOAD_LEN) # Total power consumed from grid 35 | lod_pow = cp.Variable(LOAD_LEN) # Power consumed by load from grid 36 | chg_pow = cp.Variable(LOAD_LEN) # Power charged to the battery 37 | dch_pow = cp.Variable(LOAD_LEN) # Power discharged from the battery 38 | bat_eng = cp.Variable(LOAD_LEN) # Energy stored in the battery 39 | 40 | # Create constraints. 
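# For reference, the convex program assembled below minimizes the household's TOU bill:
#   minimize    sum_t tariff[t] * grd_pow[t]
#   subject to  grd_pow[t] == chg_pow[t] + lod_pow[t]   (grid power charges the battery and serves part of the load)
#               load[t]    == dch_pow[t] + lod_pow[t]   (load is served by the battery plus the grid)
#               0 <= chg_pow[t] <= BAT_KW,   0 <= dch_pow[t] <= BAT_KW
#               BAT_KWH_MIN <= bat_eng[t] <= BAT_KWH_MAX
#               bat_eng[t] >= HR_FRAC * dch_pow[t]
#               bat_eng[t] == bat_eng[t-1] + HR_FRAC * (chg_pow[t-1] - dch_pow[t-1])
#               bat_eng[0] == BAT_KWH_INIT
# with all variables constrained non-negative.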
41 | constraints = [bat_eng[0] == BAT_KWH_INIT] 42 | 43 | for i in range(LOAD_LEN): 44 | constraints += [ 45 | grd_pow[i] == chg_pow[i] + lod_pow[i], # Power flow constraints 46 | load[i] == dch_pow[i] + lod_pow[i], 47 | chg_pow[i] <= BAT_KW, 48 | dch_pow[i] <= BAT_KW, 49 | bat_eng[i] <= BAT_KWH_MAX, # Prevent overcharging 50 | bat_eng[i] >= BAT_KWH_MIN, # Prevent undercharging 51 | bat_eng[i] 52 | >= HR_FRAC * dch_pow[i], # Prevent undercharging from overdischarging 53 | # Convexity requirements: 54 | grd_pow[i] >= 0, 55 | chg_pow[i] >= 0, 56 | dch_pow[i] >= 0, 57 | bat_eng[i] >= 0, 58 | lod_pow[i] >= 0, 59 | ] 60 | 61 | for i in range(1, LOAD_LEN): 62 | constraints += [ 63 | bat_eng[i] 64 | == HR_FRAC * chg_pow[i - 1] + (bat_eng[i - 1] - HR_FRAC * dch_pow[i - 1]) 65 | ] # Energy flow constraints 66 | 67 | print("constraints complete") 68 | 69 | # Form objective. 70 | obj = cp.Minimize(grd_pow.T @ tariff) 71 | # obj = cp.Minimize(lod_pow.T @ np.ones(LOAD_LEN)) 72 | 73 | 74 | # Form and solve problem. 75 | prob = cp.Problem(obj, constraints) 76 | print("solving...") 77 | prob.solve() # Returns the optimal value. 78 | print("status:", prob.status) 79 | print("optimal value", prob.value) 80 | 81 | # Calculate relevant quantities. 82 | bat_pow = dch_pow.value - chg_pow.value 83 | cumulative_cost = np.cumsum(grd_pow.value * tariff) 84 | 85 | 86 | # Save output to CSV. 87 | print("saving to CSV") 88 | outputdf = pd.DataFrame( 89 | np.transpose([load, grd_pow.value, bat_pow, bat_eng.value, tariff, cumulative_cost]) 90 | ) 91 | outputdf.columns = [ 92 | "load_power", 93 | "grid_power", 94 | "battery_power", 95 | "battery_energy", 96 | "tariff_rate", 97 | "cumulative_cost", 98 | ] 99 | outputdf.set_index(times, inplace=True) 100 | outputdf.to_csv("opt_tou_5kW_14kWh.csv") 101 | 102 | 103 | # PLOTTING ! 
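# Two figures follow: battery, load, and grid power plotted against the tariff rate
# (opt_ex_tariff.png), and the same power traces plotted against the stored battery
# energy (opt_ex_energy.png).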
104 | 105 | fig, ax1 = plt.subplots(1, 1, figsize=(10, 6)) 106 | fig.autofmt_xdate() 107 | plt.gcf().autofmt_xdate() 108 | xfmt = mdates.DateFormatter("%m-%d-%y %H:%M") 109 | ax1.xaxis.set_major_formatter(xfmt) 110 | ax1.set_xlabel("Date") 111 | ax1.set_ylabel("Power, kW") 112 | p1 = ax1.plot(times, bat_pow) 113 | p2 = ax1.plot(times, load) 114 | p3 = ax1.plot(times, grd_pow.value) 115 | 116 | color = "tab:red" 117 | ax2 = ax1.twinx() 118 | ax2.set_ylabel("Energy Price, $/kWh", color=color) 119 | p4 = ax2.plot(times, tariff, color=color) 120 | ax2.tick_params(axis="y", labelcolor=color) 121 | ax2.set_ylim([0, 1.1 * max(tariff)]) 122 | ax2.xaxis.set_major_formatter(xfmt) 123 | 124 | plt.legend( 125 | (p1[0], p2[0], p3[0], p4[0]), 126 | ("Battery Power", "Load Power", "Grid Power", "Tariff Rate"), 127 | loc="best", 128 | ) 129 | fig.tight_layout() # otherwise the right y-label is slightly clipped 130 | 131 | plt.savefig("opt_ex_tariff.png") 132 | 133 | fig, ax1 = plt.subplots(1, 1, figsize=(10, 6)) 134 | fig.autofmt_xdate() 135 | plt.gcf().autofmt_xdate() 136 | xfmt = mdates.DateFormatter("%m-%d-%y %H:%M") 137 | ax1.xaxis.set_major_formatter(xfmt) 138 | ax1.set_xlabel("Date") 139 | ax1.set_ylabel("Power, kW") 140 | p1 = ax1.plot(times, bat_pow) 141 | p2 = ax1.plot(times, load) 142 | p3 = ax1.plot(times, grd_pow.value) 143 | 144 | color = "tab:purple" 145 | ax2 = ax1.twinx() 146 | ax2.set_ylabel("Energy, kWh", color=color) 147 | p4 = ax2.plot(times, bat_eng.value, color=color) 148 | ax2.tick_params(axis="y", labelcolor=color) 149 | ax2.set_ylim([0, BAT_KWH]) 150 | ax2.xaxis.set_major_formatter(xfmt) 151 | 152 | plt.legend( 153 | (p1[0], p2[0], p3[0], p4[0]), 154 | ("Battery Power", "Load Power", "Grid Power", "Battery Energy"), 155 | loc="best", 156 | ) 157 | fig.tight_layout() # otherwise the right y-label is slightly clipped 158 | 159 | plt.savefig("opt_ex_energy.png") 160 | -------------------------------------------------------------------------------- /csv-mod.py: -------------------------------------------------------------------------------- 1 | # Script to compute TOU pricing for each time period in a dataset and return a modified dataset. 2 | # Input: CSV file of daily consumption with time/date data as one column 3 | # Output: CSV file of daily consumption with TOU pricing data added 4 | 5 | import pandas as pd 6 | import matplotlib.pyplot as plt 7 | import numpy as np 8 | 9 | # Tariff rate data for TOU-DR1 10 | SUMMER_MONTHS = [6, 7, 8, 9, 10] # June 1 through Oct 31 11 | WINTER_MONTHS = [1, 2, 3, 4, 5, 11, 12] # Nov 1 through May 31th 12 | ON_PEAK = [16, 17, 18, 19, 20] # 4pm - 9pm, same for all days 13 | SUMMER_OFF_PEAK = [ 14 | 6, 15 | 7, 16 | 8, 17 | 9, 18 | 10, 19 | 11, 20 | 12, 21 | 13, 22 | 14, 23 | 15, 24 | 21, 25 | 22, 26 | 23, 27 | ] # 6am - 4pm, 9pm - midnight 28 | SUPER_OFF_PEAK = [ 29 | 0, 30 | 1, 31 | 2, 32 | 3, 33 | 4, 34 | 5, 35 | ] # midnight - 6am, same for all days except in March and April 36 | WINTER_OFF_PEAK = [ 37 | 6, 38 | 7, 39 | 8, 40 | 9, 41 | 10, 42 | 11, 43 | 12, 44 | 13, 45 | 14, 46 | 15, 47 | 21, 48 | 22, 49 | 23, 50 | ] # 6am - 4pm, 9pm - midnight 51 | WINTER_OFF_PEAK_MAR_APR = [ 52 | 6, 53 | 7, 54 | 8, 55 | 9, 56 | 14, 57 | 15, 58 | 21, 59 | 22, 60 | 23, 61 | ] # 6am - 4pm, 9pm - midnight, excluding 10:00 a.m. 
– 2:00 p.m 62 | WINTER_SUPER_OFF_PEAK_MAR_APR = [ 63 | 0, 64 | 1, 65 | 2, 66 | 3, 67 | 4, 68 | 5, 69 | 10, 70 | 11, 71 | 12, 72 | 13, 73 | ] # midnight - 6am; 10am = 2pm 74 | OFF_PEAK_WEEKEND = [14, 15, 21, 22, 23] # 2pm - 4pm; 9pm - midnight 75 | SUPER_OFF_PEAK_WEEKEND = [ 76 | 0, 77 | 1, 78 | 2, 79 | 3, 80 | 4, 81 | 5, 82 | 6, 83 | 7, 84 | 8, 85 | 9, 86 | 10, 87 | 11, 88 | 12, 89 | 13, 90 | ] # midnight - 2pm 91 | 92 | # since we have 15-minute periods, therefore $/kw-hour must be divided by (15/60) = 4 93 | SUM_ON_PEAK_TOU = 0.50199 / 4 94 | SUM_OFF_PEAK_TOU = 0.30462 / 4 95 | SUM_SUP_OFF_PEAK_TOU = 0.25900 / 4 96 | 97 | WIN_ON_PEAK_TOU = 0.35630 / 4 98 | WIN_OFF_PEAK_TOU = 0.34747 / 4 99 | WIN_SUP_OFF_PEAK_TOU = 0.3376 / 4 100 | 101 | # df = pd.read_csv('9836.csv') 102 | # 103 | # # drop NaNs (0 in original CSV -- not metered load quantities) 104 | # df.dropna(axis=1, how='all', inplace=True) 105 | # 106 | # # Remove unnecessary voltage data and dataid columns 107 | # df.drop(['dataid', 'leg1v', 'leg2v'], axis=1, inplace=True) 108 | # 109 | # # Create column subtracting out PV output: 110 | # df['gridnopv'] = df['grid'] - df['solar'] 111 | 112 | # Keep only grid and solar data: 113 | df = pd.read_csv("9836.csv", usecols=["local_15min", "grid", "solar"]) 114 | 115 | # Create column subtracting out PV output: 116 | df["gridnopv"] = df["grid"] + df["solar"] 117 | 118 | # Convert first column to datetime: 119 | 120 | df["dt"] = pd.to_datetime(df["local_15min"], format="%m/%d/%Y %H:%M") 121 | 122 | # Plot! 123 | fig = plt.figure(figsize=(8, 6), dpi=150) 124 | ax = plt.gca() 125 | # df.plot(kind='line', x='dt', y='grid', ax=ax, xlabel='Date', ylabel='Power, kW') 126 | # df.plot(kind='line', x='dt', y='solar', color='red', ax=ax, xlabel='Date', ylabel='Power, kW') 127 | df.plot( 128 | kind="line", 129 | x="dt", 130 | y="gridnopv", 131 | color="green", 132 | ax=ax, 133 | xlabel="Date", 134 | ylabel="Power, kW", 135 | ) 136 | fig.savefig("load_data.png") 137 | 138 | df = df.assign(tariff="") 139 | 140 | # Summer TOU pricing, weekdays: 141 | df.loc[ 142 | df["dt"].dt.month.isin(SUMMER_MONTHS) 143 | & df["dt"].dt.hour.isin(ON_PEAK) 144 | & df["dt"].dt.weekday.isin([1, 2, 3, 4, 5]), 145 | "tariff", 146 | ] = SUM_ON_PEAK_TOU 147 | df.loc[ 148 | df["dt"].dt.month.isin(SUMMER_MONTHS) 149 | & df["dt"].dt.hour.isin(SUMMER_OFF_PEAK) 150 | & df["dt"].dt.weekday.isin([1, 2, 3, 4, 5]), 151 | "tariff", 152 | ] = SUM_OFF_PEAK_TOU 153 | df.loc[ 154 | df["dt"].dt.month.isin(SUMMER_MONTHS) 155 | & df["dt"].dt.hour.isin(SUPER_OFF_PEAK) 156 | & df["dt"].dt.weekday.isin([1, 2, 3, 4, 5]), 157 | "tariff", 158 | ] = SUM_SUP_OFF_PEAK_TOU 159 | 160 | # Winter TOU pricing, weekdays: 161 | df.loc[ 162 | df["dt"].dt.month.isin(WINTER_MONTHS) 163 | & df["dt"].dt.hour.isin(ON_PEAK) 164 | & df["dt"].dt.weekday.isin([1, 2, 3, 4, 5]), 165 | "tariff", 166 | ] = WIN_ON_PEAK_TOU 167 | df.loc[ 168 | df["dt"].dt.month.isin(WINTER_MONTHS) 169 | & df["dt"].dt.hour.isin(WINTER_OFF_PEAK) 170 | & df["dt"].dt.weekday.isin([1, 2, 3, 4, 5]), 171 | "tariff", 172 | ] = WIN_OFF_PEAK_TOU 173 | df.loc[ 174 | df["dt"].dt.month.isin(WINTER_MONTHS) 175 | & df["dt"].dt.hour.isin(SUPER_OFF_PEAK) 176 | & df["dt"].dt.weekday.isin([1, 2, 3, 4, 5]), 177 | "tariff", 178 | ] = WIN_SUP_OFF_PEAK_TOU 179 | # Adjust March and April TOU periods: 180 | df.loc[ 181 | df["dt"].dt.month.isin([3, 4]) 182 | & df["dt"].dt.hour.isin(WINTER_SUPER_OFF_PEAK_MAR_APR) 183 | & df["dt"].dt.weekday.isin([1, 2, 3, 4, 5]), 184 | "tariff", 185 | ] = WIN_SUP_OFF_PEAK_TOU 186 | 
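# Note on the day-of-week masks: pandas' dt.weekday runs Monday=0 through Sunday=6, so
# isin([1, 2, 3, 4, 5]) above selects Tuesday-Saturday and isin([0, 6]) below selects
# Monday and Sunday. If the intent is Monday-Friday weekdays vs. Saturday/Sunday
# weekends, the masks would be isin([0, 1, 2, 3, 4]) and isin([5, 6]) instead.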
187 | # Summer TOU pricing, weekends: 188 | df.loc[ 189 | df["dt"].dt.month.isin(SUMMER_MONTHS) 190 | & df["dt"].dt.hour.isin(ON_PEAK) 191 | & df["dt"].dt.weekday.isin([0, 6]), 192 | "tariff", 193 | ] = SUM_ON_PEAK_TOU 194 | df.loc[ 195 | df["dt"].dt.month.isin(SUMMER_MONTHS) 196 | & df["dt"].dt.hour.isin(OFF_PEAK_WEEKEND) 197 | & df["dt"].dt.weekday.isin([0, 6]), 198 | "tariff", 199 | ] = SUM_OFF_PEAK_TOU 200 | df.loc[ 201 | df["dt"].dt.month.isin(SUMMER_MONTHS) 202 | & df["dt"].dt.hour.isin(SUPER_OFF_PEAK_WEEKEND) 203 | & df["dt"].dt.weekday.isin([0, 6]), 204 | "tariff", 205 | ] = SUM_SUP_OFF_PEAK_TOU 206 | 207 | # Winter TOU pricing, weekends: 208 | df.loc[ 209 | df["dt"].dt.month.isin(WINTER_MONTHS) 210 | & df["dt"].dt.hour.isin(ON_PEAK) 211 | & df["dt"].dt.weekday.isin([0, 6]), 212 | "tariff", 213 | ] = WIN_ON_PEAK_TOU 214 | df.loc[ 215 | df["dt"].dt.month.isin(WINTER_MONTHS) 216 | & df["dt"].dt.hour.isin(OFF_PEAK_WEEKEND) 217 | & df["dt"].dt.weekday.isin([0, 6]), 218 | "tariff", 219 | ] = WIN_OFF_PEAK_TOU 220 | df.loc[ 221 | df["dt"].dt.month.isin(WINTER_MONTHS) 222 | & df["dt"].dt.hour.isin(SUPER_OFF_PEAK_WEEKEND) 223 | & df["dt"].dt.weekday.isin([0, 6]), 224 | "tariff", 225 | ] = WIN_SUP_OFF_PEAK_TOU 226 | 227 | # # Plot tariff rate! 228 | # fig = plt.figure(figsize=(8, 6), dpi=150) 229 | # ax = plt.gca() 230 | # df.plot(kind='line', x='dt', y='tariff', color='black', ax=ax, xlabel='Date', ylabel='$/kWh') 231 | # fig.savefig('tariff_data.png') 232 | 233 | df.to_csv("load_tariff.csv") 234 | -------------------------------------------------------------------------------- /Ed/base-naive.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | import numpy as np 3 | 4 | import matplotlib.pyplot as plt 5 | import matplotlib.dates as mdates 6 | 7 | from pandas.plotting import register_matplotlib_converters 8 | 9 | register_matplotlib_converters() 10 | 11 | import datetime 12 | 13 | 14 | """ 15 | Base Case: 16 | 17 | Generate Cumulative Rewards with no battery use 18 | 19 | 20 | """ 21 | 22 | 23 | def make_base_csv(df): 24 | load = df.gridnopv.to_numpy() 25 | tariff = df.tariff.to_numpy() 26 | times = pd.to_datetime(df.local_15min) 27 | 28 | df["R_base"] = np.cumsum(load * tariff) 29 | rewards = df["R_base"].to_numpy() 30 | 31 | df.to_csv(r"base.csv") 32 | # #plotting 33 | # fig, ax1 = plt.subplots(1, 1, figsize=(10,6)) 34 | # fig.autofmt_xdate() 35 | # plt.gcf().autofmt_xdate() 36 | # xfmt = mdates.DateFormatter('%m-%d-%y %H:%M') 37 | # ax1.xaxis.set_major_formatter(xfmt) 38 | # ax1.set_xlabel('Date') 39 | # ax1.set_ylabel('Power, kW') 40 | 41 | # #load 42 | # p1 = ax1.plot(times, tariff) 43 | # p2 = ax1.plot(times, load) 44 | # p3 = ax1.plot(times, rewards) 45 | # # p3 = ax1.plot(times, grd_pow.value) 46 | 47 | # color = 'tab:red' 48 | # ax2 = ax1.twinx() 49 | # ax2.set_ylabel('Energy Price, $/kWh', color=color) 50 | # # p4 = ax2.plot(times, tariff, color=color) 51 | # ax2.tick_params(axis='y', labelcolor=color) 52 | # ax2.set_ylim([0,1.1*max(tariff)]) 53 | # ax2.xaxis.set_major_formatter(xfmt) 54 | 55 | # # plt.legend((p1[0], p2[0], p3[0], p4[0]), ('Battery Power', 'Load Power', 'Grid Power', 'Tariff Rate'), loc='best') 56 | # fig.tight_layout() # otherwise the right y-label is slightly clipped 57 | 58 | # # plt.savefig('opt_ex_tariff.png') 59 | # ax2.set_xlim([datetime.date(2014, 7, 8), datetime.date(2014, 7, 11)]) 60 | # plt.show() 61 | 62 | 63 | """ 64 | Naive TOU Case: 65 | 66 | Generate Cumulative Rewards optimized on 
peak/non-peak TOU 67 | 68 | nonpeak: lowest tariff price 69 | peak: all other prices 70 | 71 | battery charges/discharges at power rating 72 | 73 | 74 | """ 75 | 76 | 77 | def make_naiveTOU_csv(df): 78 | load = df.gridnopv.to_numpy() 79 | tariff = df.tariff.to_numpy() 80 | times = pd.to_datetime(df.local_15min) 81 | 82 | min_tariff = min(tariff) 83 | max_tariff = max(tariff) 84 | 85 | # constants 86 | BAT_KW = 5 # Rated power of battery, in kW 87 | BAT_KWH = 14 # Rated energy of battery, in kWh. 88 | # Note Tesla Powerwall rates their energy at 13.5kWh, but at 100% DoD, 89 | # but I have also seen that it's actually 14kwh, 13.5kWh usable 90 | BAT_KWH_MIN = 0.1 * BAT_KWH # Minimum SOE of battery, 10% of rated 91 | BAT_KWH_MAX = 0.9 * BAT_KWH # Maximum SOE of battery, 90% of rated 92 | BAT_KWH_INIT = 0.5 * BAT_KWH # Starting SOE of battery, 50% of rated 93 | HR_FRAC = ( 94 | 15 / 60 95 | ) # Data at 15 minute intervals, which is 0.25 hours. Need for conversion between kW <-> kWh 96 | 97 | # df['bat_eng'], df['R_naive'], df['bat_charge'] 98 | 99 | # set all battery charges to 0 initially 100 | df["bat_charge"] = 0 101 | df["bat_discharge"] = 0 102 | df["bat_eng"] = 0 103 | df["R_naive"] = 0 104 | 105 | for index, row in df.iterrows(): 106 | 107 | # first row initialize 108 | if index == 0: 109 | df.loc[index, "bat_eng"], df.loc[index, "R_naive"] = BAT_KWH_INIT, 0 110 | else: 111 | df.loc[index, "bat_eng"], df.loc[index, "R_naive"] = bat_eng_old, R_old 112 | 113 | # when tariff is lowest, charge the battery all you can 114 | if row["tariff"] == min_tariff: 115 | 116 | if df.loc[index, "bat_eng"] < BAT_KWH_MAX: 117 | df.loc[index, "bat_charge"] = min( 118 | BAT_KWH_MAX - df.loc[index, "bat_eng"], HR_FRAC * BAT_KW 119 | ) 120 | df.loc[index, "bat_eng"] += df.loc[index, "bat_charge"] 121 | 122 | # try to discharge as much as possible at max tariff 123 | elif row["tariff"] == max_tariff: 124 | 125 | if df.loc[index, "bat_eng"] > BAT_KWH_MIN: 126 | df.loc[index, "bat_discharge"] = min( 127 | df.loc[index, "bat_eng"] - BAT_KWH_MIN, HR_FRAC * BAT_KW 128 | ) 129 | df.loc[index, "bat_eng"] -= df.loc[index, "bat_discharge"] 130 | 131 | # account for battery charge/discharge in the reward 132 | df.loc[index, "R_naive"] += ( 133 | row["gridnopv"] 134 | - df.loc[index, "bat_discharge"] 135 | + df.loc[index, "bat_charge"] 136 | ) * row["tariff"] 137 | 138 | # update old values: 139 | bat_eng_old, R_old = df.loc[index, "bat_eng"], df.loc[index, "R_naive"] 140 | 141 | # to csv 142 | df.to_csv("naive.csv") 143 | 144 | fig, ax1 = plt.subplots(1, 1, figsize=(10, 6)) 145 | fig.autofmt_xdate() 146 | plt.gcf().autofmt_xdate() 147 | xfmt = mdates.DateFormatter("%m-%d-%y %H:%M") 148 | ax1.xaxis.set_major_formatter(xfmt) 149 | ax1.set_xlabel("Date") 150 | ax1.set_ylabel("Power, kW") 151 | 152 | # p1 = ax1.plot(times, bat_pow) 153 | 154 | # load 155 | p2 = ax1.plot(times, load) 156 | # p3 = ax1.plot(times, grd_pow.value) 157 | 158 | p1 = ax1.plot(times, df["bat_charge"].to_numpy()) 159 | p3 = ax1.plot(times, df["bat_discharge"].to_numpy()) 160 | # p4 = ax1.plot(times, df['R_naive'].to_numpy()) 161 | 162 | color = "tab:red" 163 | ax2 = ax1.twinx() 164 | ax2.set_ylabel("Energy Price, $/kWh", color=color) 165 | # p4 = ax2.plot(times, tariff, color=color) 166 | ax2.tick_params(axis="y", labelcolor=color) 167 | ax2.set_ylim([0, 1.1 * max(tariff)]) 168 | ax2.xaxis.set_major_formatter(xfmt) 169 | 170 | # plt.legend((p1[0], p2[0], p3[0], p4[0]), ('Battery Power', 'Load Power', 'Grid Power', 'Tariff Rate'), loc='best') 171 | 
fig.tight_layout() # otherwise the right y-label is slightly clipped 172 | 173 | # plt.savefig('opt_ex_tariff.png') 174 | ax2.set_xlim([datetime.date(2014, 7, 8), datetime.date(2014, 7, 11)]) 175 | plt.show() 176 | 177 | 178 | if __name__ == "__main__": 179 | df = pd.read_csv(r"load_tariff.csv") 180 | # make_base_csv(df) 181 | make_naiveTOU_csv(df) 182 | 183 | # #Plot compare 184 | # df1= pd.read_csv(r'naive.csv') 185 | # df2 = pd.read_csv(r'base.csv') 186 | # times = pd.to_datetime(df1.local_15min) 187 | # fig, ax1 = plt.subplots(1, 1, figsize=(10,6)) 188 | 189 | # #Plot 190 | 191 | # fig.autofmt_xdate() 192 | # plt.gcf().autofmt_xdate() 193 | # xfmt = mdates.DateFormatter('%m-%d-%y %H:%M') 194 | # ax1.xaxis.set_major_formatter(xfmt) 195 | # ax1.set_xlabel('Date') 196 | # ax1.set_ylabel('Power, kW') 197 | # # p1 = ax1.plot(times, bat_pow) 198 | # p2 = ax1.plot(times, df1['R_naive']) 199 | # p3 = ax1.plot(times, df2['R_base']) 200 | ############ 201 | # color = 'tab:red' 202 | # ax2 = ax1.twinx() 203 | # ax2.set_ylabel('Energy Price, $/kWh', color=color) 204 | # # p4 = ax2.plot(times, tariff, color=color) 205 | # ax2.tick_params(axis='y', labelcolor=color) 206 | # ax2.set_ylim([0,1.1*max(tariff)]) 207 | # ax2.xaxis.set_major_formatter(xfmt) 208 | 209 | # # plt.legend((p1[0], p2[0], p3[0], p4[0]), ('Battery Power', 'Load Power', 'Grid Power', 'Tariff Rate'), loc='best') 210 | # fig.tight_layout() # otherwise the right y-label is slightly clipped 211 | 212 | # # plt.savefig('opt_ex_tariff.png') 213 | plt.show() 214 | 215 | # fig, ax1 = plt.subplots(1, 1, figsize=(10,6)) 216 | # fig.autofmt_xdate() 217 | # plt.gcf().autofmt_xdate() 218 | # xfmt = mdates.DateFormatter('%m-%d-%y %H:%M') 219 | # ax1.xaxis.set_major_formatter(xfmt) 220 | # ax1.set_xlabel('Date') 221 | # ax1.set_ylabel('Power, kW') 222 | # p1 = ax1.plot(times, bat_pow) 223 | # p2 = ax1.plot(times, load) 224 | # p3 = ax1.plot(times, grd_pow.value) 225 | 226 | # color = 'tab:purple' 227 | # ax2 = ax1.twinx() 228 | # ax2.set_ylabel('Energy, kWh', color=color) 229 | # p4 = ax2.plot(times, bat_eng.value, color=color) 230 | # ax2.tick_params(axis='y', labelcolor=color) 231 | # ax2.set_ylim([0,BAT_KWH]) 232 | # ax2.xaxis.set_major_formatter(xfmt) 233 | -------------------------------------------------------------------------------- /Deep/opt_lmp.py: -------------------------------------------------------------------------------- 1 | import cvxpy as cp 2 | import numpy as np 3 | import pandas as pd 4 | import datetime 5 | import random 6 | import pickle 7 | import matplotlib.pyplot as plt 8 | 9 | def script(filename): 10 | """Read Dataset and parse it into datetime, and respective LMP prices at each hour.""" 11 | data_2 = pd.read_csv(filename) 12 | data_2.drop(["NODE_ID_XML","NODE_ID","NODE","MARKET_RUN_ID","PNODE_RESMRID","GRP_TYPE","POS","OPR_INTERVAL"],axis=1,inplace=True) 13 | data_2[data_2["LMP_TYPE"]=="LMP"] 14 | data_2["DATETIME"]=pd.to_datetime(data_2["INTERVALSTARTTIME_GMT"]) 15 | data_2 = data_2[data_2["LMP_TYPE"]=="LMP"].sort_values("DATETIME") 16 | data_2.drop(["INTERVALSTARTTIME_GMT","INTERVALENDTIME_GMT","OPR_DT","OPR_HR","LMP_TYPE","XML_DATA_ITEM","GROUP"],axis=1,inplace=True) 17 | return data_2 18 | 19 | def optimization_problem(data_frame): 20 | 21 | """Defines the optimization problem, and solves it for the maximum revenue along with saving the relevant result 22 | as a dataframe and a plot.""" 23 | #LMP Prices 24 | prices = data_frame["LMP_kWh"] 25 | 26 | #Initialize Variables for optimization Problem 27 | rate = 
cp.Variable((len(data_frame),1)) 28 | E = cp.Variable((len(data_frame),1)) 29 | 30 | #Create max, min for the 3 optimization variables 31 | discharge_max = 5 32 | charge_max = -5 33 | SOC_max = 0.9*14 34 | SOC_min = 0.1*14 35 | 36 | #Initialize constraints and revenue 37 | constraints = [] 38 | revenue = 0 39 | 40 | print("Starting Constraint Creation") 41 | #Create constraints for the each time step along with revenue. 42 | for i in range(len(data_frame)): 43 | if i%1000 == 0: print(i) 44 | constraints += [rate[i] <= discharge_max, #Rate should be lower than or equal to max rate, 45 | rate[i] >= charge_max, 46 | E[i]<= SOC_max, #Overall kW should be within the range of [SOC_min,SOC_max] 47 | E[i] >= SOC_min] 48 | revenue += prices[i] *(rate[i]) #Revenue = sum of (prices ($/kWh) * (energy sold (kW) * 1hr - energy bought (kW) * 1hr) at timestep t) 49 | 50 | for i in range(1,len(data_frame)): 51 | if i%1000 == 0: print(i) 52 | constraints += [E[i] == E[i-1] + rate[i-1]] #Current SOC constraint 53 | 54 | constraints += [E[0] == random.uniform(SOC_min,SOC_max), rate[0] == 0] #create first time step constraints 55 | 56 | print("Solving problem") 57 | #Create Problem and solve to find Optimal Revenue and Times to sell. 58 | prob = cp.Problem(cp.Maximize(revenue),constraints) 59 | prob.solve(solver=cp.ECOS,verbose=True) 60 | print("Optimal Maximum Revenue is {0}".format(prob.value)) 61 | 62 | #Convert values for the variables into arrays 63 | E_val = [E.value[i][0] for i in range(len(data_frame))] 64 | charge_val = [rate.value[i][0] for i in range(len(data_frame)) ] 65 | 66 | #Join values to the data frame 67 | data_frame["E"] = E_val 68 | data_frame["Charge"] = charge_val 69 | data_frame["DATETIME"] = data_frame.index 70 | revenue = [0] 71 | for i in range(1,len(data_frame)): 72 | revenue.append(revenue[-1] + prices[i]*charge_val[i]) 73 | 74 | data_frame["Cumulative Additive Revenue"] = revenue 75 | print("Saving DF") 76 | #Save dataframe 77 | data_frame.to_csv("df_LMP.csv") 78 | print("Plotting 2 day timeline") 79 | #Plot dataframe for 2 days 80 | f = plt.figure(figsize=(20,20)) 81 | data_frame.iloc[:48].plot(x="DATETIME",y=["E","Charge"]) 82 | plt.xlabel("DateTime") 83 | plt.ylabel("Power- kW") 84 | plt.show() 85 | plt.savefig('2_day_battery_energy_arbitrage.png') 86 | 87 | return data_frame 88 | 89 | def state_space_creation(data_frame,load_bins_number = 10, lmp_bins_number = 10): 90 | """Generates State Space for the problem containing TOU, LMP, Load, and SOC in both binned and unbinned formats.""" 91 | #Use a 15 min resampled dataset for joining datasets and creating state space 92 | data_frame_2 = data_frame.resample('15T').pad() 93 | SOC_max = 0.9*14 94 | SOC_min = 0.1*14 95 | 96 | data_frame_2["MW"] = data_frame_2["MW"].apply(lambda x:x/1000) 97 | data_frame_2.rename(columns={"MW":"LMP_kWh"},inplace=True) 98 | 99 | ##Drop last value added for ease of parsing 100 | data_frame_2.drop(data_frame_2.iloc[len(data_frame_2)-1].name,axis=0,inplace=True) 101 | 102 | # For 15 min sampled data, create a new column to match indices to the load, tariff dataset from 2014. 
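    # The LMP exports cover July 2018 through June 2019, while the household load and
    # tariff data are timestamped 2014-2015, so each LMP timestamp is re-labelled onto
    # 2014/2015 by month and day before merging with load_tariff.csv below.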
103 | data_frame_2["date_month_2014"] = data_frame_2.index 104 | 105 | new_dateimt = [] 106 | for i in data_frame_2["date_month_2014"]: 107 | if i.year == 2018: 108 | i = datetime.datetime(2014,i.month,i.day,i.hour,i.minute,i.second) 109 | else: 110 | i = datetime.datetime(2015,i.month,i.day,i.hour,i.minute,i.second) 111 | new_dateimt.append(i) 112 | 113 | data_frame_2["date_month_2014"]= new_dateimt 114 | 115 | #Load in the load_tariff dataset 116 | data_sep = pd.read_csv("load_tariff.csv") 117 | 118 | #Convert data into relevant types 119 | data_sep["dt"] = pd.to_datetime(data_sep["dt"]) 120 | data_sep["tariff"] = data_sep["tariff"].to_numpy() 121 | data_sep["solar"] = data_sep["solar"].to_numpy() 122 | data_sep["grid"] = data_sep["grid"].to_numpy() 123 | data_sep["gridnopv"] = data_sep["gridnopv"].to_numpy() 124 | 125 | #Parse data 126 | data_sep.drop("local_15min",axis=1,inplace=True) 127 | data_sep.drop("Unnamed: 0",axis=1,inplace=True) 128 | 129 | #Set index to be DateTime Index 130 | data_sep.set_index("dt",inplace=True) 131 | 132 | #Merge the 2 datasets on the 2014 datetime column 133 | df = pd.merge(data_frame_2, data_sep,left_on="date_month_2014",right_index=True) 134 | 135 | #Drop Column 136 | df.drop("date_month_2014",axis=1,inplace=True) 137 | df.drop("grid",axis=1,inplace=True) 138 | df.drop("solar",axis=1,inplace=True) 139 | 140 | df.rename(columns={"tariff":"TOU"},inplace=True) 141 | df.rename(columns={"gridnopv":"Load"},inplace=True) 142 | 143 | #Create Binned LMP, Load, TOU columns 144 | df["binned_LMP"],bins_LMP = pd.cut(df['LMP_kWh'], lmp_bins_number,labels = range(lmp_bins_number),retbins=True) 145 | df["binned_Load"],bins_Load = pd.cut(df["Load"],load_bins_number,labels=range(load_bins_number),retbins=True) 146 | 147 | #Create bins for SOC 148 | bins_SOC = [] 149 | for i in np.arange(SOC_min,SOC_max,5*.25): 150 | bins_SOC.append(round(i,2)) 151 | bins_SOC.append(SOC_max) 152 | 153 | #Create bins for TOU 154 | unique_TOU = {j:i for i,j in enumerate(df["TOU"].unique())} 155 | rows_TOU = [] 156 | rows_Load = [] 157 | for i, j in enumerate(df.iterrows()): 158 | rows_TOU.append(unique_TOU[j[1]["TOU"]]) 159 | 160 | 161 | bins_TOU = df["TOU"].unique() 162 | 163 | #Create binned TOU to be mapping to indices 164 | df["binned_TOU"] = rows_TOU 165 | 166 | #Create mapping bins Dict 167 | bins_dict = {"LMP":bins_LMP,"Load":bins_Load,"TOU":bins_TOU,"SOC":bins_SOC} 168 | 169 | #Create SOC binned/unbinned column 170 | df["SOC"] = [5.15] + [0]*(len(df) -1) 171 | df["binned_SOC"] = [3] + [0]*(len(df)-1) 172 | 173 | #save csv and dictionary 174 | df.set_index(data_sep.index,drop=True,inplace=True) 175 | df.to_csv("Discretized_State_Space.csv") 176 | 177 | def save_obj(obj, name ): 178 | with open(name + '.pkl', 'wb') as f: 179 | pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL) 180 | 181 | save_obj(bins_dict,"bins_Dict") 182 | 183 | 184 | 185 | #Append the different datasets from each month into one dataset to range from July 2018 to June 2019 186 | t = 0 187 | filename_list = ["20190208_20190309_PRC_LMP_DAM_20201017_00_48_54_v1.csv", 188 | "20190608_20190701_PRC_LMP_DAM_20201017_00_55_15_v1.csv", 189 | "20190408_20190509_PRC_LMP_DAM_20201017_00_52_30_v1.csv", 190 | "20190308_20190409_PRC_LMP_DAM_20201017_00_51_06_v1.csv", 191 | "20181108_20181209_PRC_LMP_DAM_20201017_00_23_08_v1.csv", 192 | "20180908_20181009_PRC_LMP_DAM_20201017_00_19_49_v1.csv", 193 | "20190508_20190608_PRC_LMP_DAM_20201105_01_18_27_v1.csv", 194 | "20180608_20180708_PRC_LMP_DAM_20201105_00_51_29_v1.csv", 195 | 
"20180708_20180808_PRC_LMP_DAM_20201105_00_33_06_v1.csv", 196 | "20180808_20180908_PRC_LMP_DAM_20201105_00_11_49_v1.csv", 197 | "20181008_20181108_PRC_LMP_DAM_20201105_00_09_20_v1.csv", 198 | "20181208_20190108_PRC_LMP_DAM_20201105_00_06_18_v1.csv", 199 | "20190108_20190208_PRC_LMP_DAM_20201105_00_03_31_v1.csv"] 200 | for i in filename_list: 201 | data_temp = script(i) 202 | if t==0: 203 | data_frame = data_temp 204 | else: 205 | data_frame = pd.concat([data_frame, data_temp], ignore_index=True) 206 | t += 1 207 | 208 | #Drop Duplicates and reset index. 209 | data_frame.drop_duplicates(inplace=True) 210 | data_frame.reset_index(inplace=True,drop = True) 211 | 212 | #Further Parse the datetime, and limit the dataset into the dates in the original load dataset. 213 | datestime= [] 214 | for i in data_frame["DATETIME"]: 215 | if (i.date() < datetime.date(2019,7,1)) & (i.date() > datetime.date(2018,7,7)): 216 | datestime.append(str(i).replace("+00:00","")) 217 | else: 218 | data_frame.drop(data_frame.loc[data_frame["DATETIME"]==i].index.values[0],inplace=True) 219 | #Convert index to DateTimeIndex. 220 | data_frame["DATETIME"] = pd.to_datetime(datestime) 221 | 222 | data_frame = data_frame.sort_values("DATETIME").reset_index(drop=True) 223 | 224 | data_frame = data_frame.append({"DATETIME" :datetime.datetime(2019,7,1,0,0,0),"MW":42069},ignore_index=True) 225 | 226 | data_frame.set_index("DATETIME",inplace=True) 227 | 228 | state_space_creation(data_frame) 229 | 230 | #Rename Column for both datsets 231 | data_frame["MW"] = data_frame["MW"].apply(lambda x:x/1000) 232 | data_frame.rename(columns={"MW":"LMP_kWh"},inplace=True) 233 | 234 | ##Drop last value added for ease of parsing 235 | data_frame.drop(data_frame.iloc[len(data_frame)-1].name,axis=0,inplace=True) 236 | 237 | print("Finished DF creation, starting optimization") 238 | optimization_problem(data_frame) 239 | 240 | -------------------------------------------------------------------------------- /residential.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | import numpy as np 3 | 4 | from collections import defaultdict 5 | 6 | 7 | class Residential: 8 | def __init__(self, df, state_dict): 9 | 10 | self.df = df 11 | 12 | self.gamma = 0.95 13 | self.alpha = 0.2 14 | self.epsilon = 0.65 15 | self.LMP_Mavg = 0 16 | 17 | self.LMP_bins = state_dict["LMP"] 18 | self.Load_bins = state_dict["Load"] 19 | self.TOU_bins = state_dict["TOU"] 20 | self.SOC_bins = state_dict["SOC"] 21 | 22 | self.BAT_KWH_MIN = 0.1 * 14 # Minimum SOE of battery, 10% of rated 23 | self.BAT_KWH_MAX = 0.9 * 14 # Maximum SOE of battery, 90% of rated 24 | self.BAT_KW = 5 25 | # Data at 15 minute intervals, which is 0.25 hours. 
Need for conversion between kW <-> kWh 26 | self.HR_FRAC = 15 / 60 27 | 28 | # D means discharge the battery to help the utility, H means hold current battery energy 29 | # THIS ORDER MATTERS 30 | self.action_map = { 31 | 0: self.LMP_buy, 32 | 1: self.LMP_sell, 33 | 2: self.wait, 34 | 3: self.TOU_buy, 35 | 4: self.TOU_discharge, 36 | } 37 | 38 | # state parameterized by: LMP, TOU, load 39 | self.S = np.zeros( 40 | [ 41 | len(self.LMP_bins), 42 | len(self.TOU_bins), 43 | len(self.Load_bins), 44 | len(self.SOC_bins), 45 | ] 46 | ) 47 | self.Q = np.zeros( 48 | [ 49 | len(self.LMP_bins), 50 | len(self.TOU_bins), 51 | len(self.Load_bins), 52 | len(self.SOC_bins), 53 | len(self.action_map), 54 | ] 55 | ) 56 | self.Policy = np.zeros( 57 | [ 58 | len(self.LMP_bins), 59 | len(self.TOU_bins), 60 | len(self.Load_bins), 61 | len(self.SOC_bins), 62 | ] 63 | ) 64 | 65 | def get_allowed_actions(self, state): 66 | # [self.LMP_buy, self.LMP_sell, self.wait, self.TOU_buy, self.TOU_discharge] 67 | 68 | actions = [] 69 | 70 | # can buy if it doesn't put you over the limit 71 | if state["SOC"] + 1 < len(self.TOU_bins): 72 | # actions.append(self.LMP_buy) 73 | # actions.append(self.TOU_buy) 74 | actions.append(0) 75 | actions.append(3) 76 | 77 | # vice versa for sell 78 | if state["SOC"] - 1 > 0: 79 | # actions.append(self.LMP_sell) 80 | # actions.append(self.TOU_discharge) 81 | actions.append(1) 82 | # Ensure that load never goes negative: 83 | if state["Load"] >= self.BAT_KW: 84 | actions.append(4) 85 | 86 | # actions.append(self.wait) 87 | actions.append(2) 88 | 89 | return actions 90 | 91 | def LMP_buy(self, state): 92 | # buy 15 mins of power from LMP 93 | # kWh * $/kWh 94 | LMP_cost = self.LMP_bins[state["LMP"]] 95 | 96 | energy_change = self.BAT_KW * self.HR_FRAC 97 | 98 | return (self.LMP_Mavg - LMP_cost) * energy_change 99 | 100 | def LMP_sell(self, state): 101 | # sell 15 mins of power to LMP 102 | # kWh * $/kWh 103 | LMP_comp = self.LMP_bins[state["LMP"]] 104 | energy_change = self.BAT_KW * self.HR_FRAC 105 | 106 | return (LMP_comp - self.LMP_Mavg) * energy_change 107 | 108 | def TOU_buy(self, state): 109 | # buy 15 mins of power from TOU 110 | # kWh * $/kWh 111 | TOU_cost = -self.TOU_bins[state["TOU"]] 112 | energy_change = self.BAT_KW * self.HR_FRAC 113 | 114 | return TOU_cost * energy_change 115 | 116 | def TOU_discharge(self, state): 117 | # discharge 15 mins of power to TOU, offsetting load 118 | # kWh * $/kWh 119 | TOU_comp = self.TOU_bins[state["TOU"]] 120 | energy_change = self.BAT_KW * self.HR_FRAC 121 | 122 | return TOU_comp * energy_change 123 | 124 | def wait(self, state): 125 | # do nothing 126 | return 0 127 | 128 | def createEpsilonGreedyPolicy(self, Q, epsilon, num_actions): 129 | """ 130 | Creates an epsilon-greedy policy based 131 | on a given Q-function and epsilon. 132 | 133 | Returns a function that takes the state 134 | as an input and returns the probabilities 135 | for each action in the form of a numpy array 136 | of length of the action space(set of possible actions). 
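        Actions that are not allowed in the current state (per get_allowed_actions)
        receive zero probability; the exploration mass epsilon is split evenly across
        the allowed actions, and the greedy action receives the remaining 1 - epsilon.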
137 | """ 138 | 139 | def policyFunction(state): 140 | LMP_ind, TOU_ind, Load_ind, SOC_ind = ( 141 | state["LMP"], 142 | state["TOU"], 143 | state["Load"], 144 | state["SOC"], 145 | ) 146 | 147 | allowed = self.get_allowed_actions(state) 148 | 149 | num_allowed = len(allowed) 150 | Action_probabilities = [ 151 | float(epsilon / num_allowed) if i in allowed else 0.0 152 | for i in range(num_actions) 153 | ] 154 | 155 | if all(Q[LMP_ind][TOU_ind][Load_ind][SOC_ind][:]): 156 | best_action = np.argmax(Q[LMP_ind][TOU_ind][Load_ind][SOC_ind][:]) 157 | else: 158 | best_action = np.random.choice(allowed) 159 | 160 | Action_probabilities[best_action] += 1.0 - epsilon 161 | return Action_probabilities 162 | 163 | return policyFunction 164 | 165 | def Q_learning(self): 166 | 167 | # Create an epsilon greedy policy function 168 | # appropriately for environment action space 169 | policy = self.createEpsilonGreedyPolicy( 170 | self.Q, self.epsilon, len(self.action_map) 171 | ) 172 | 173 | # initialize 174 | SOC_ind = 3 175 | n = 0 176 | for ind, row in self.df.iterrows(): 177 | 178 | LMP_ind = row["binned_LMP"] 179 | TOU_ind = row["binned_TOU"] 180 | Load_ind = row["binned_Load"] 181 | 182 | state = { 183 | "LMP": LMP_ind, 184 | "TOU": TOU_ind, 185 | "Load": Load_ind, 186 | "SOC": SOC_ind, 187 | } 188 | 189 | # update 190 | n += 1 191 | self.LMP_Mavg = (self.LMP_Mavg + self.LMP_bins[LMP_ind]) / n 192 | 193 | # get probabilities of all actions from current state 194 | action_probabilities = policy(state) 195 | 196 | # choose action according to 197 | # the probability distribution 198 | action = np.random.choice( 199 | np.arange(len(action_probabilities)), p=action_probabilities 200 | ) 201 | 202 | # take action and get reward, transit to next state 203 | next_state, reward, done = self.get_next(state, action) 204 | LMP_ind_new, TOU_ind_new, Load_ind_new, SOC_ind_new = ( 205 | next_state["LMP"], 206 | next_state["TOU"], 207 | next_state["Load"], 208 | next_state["SOC"], 209 | ) 210 | 211 | # TD Update 212 | 213 | # print(LMP_ind_new, TOU_ind_new, Load_ind_new, SOC_ind_new ) 214 | allowed = self.get_allowed_actions(next_state) 215 | if all(self.Q[LMP_ind_new][TOU_ind_new][Load_ind_new][SOC_ind_new][:]): 216 | best_next_action = np.argmax( 217 | self.Q[LMP_ind_new][TOU_ind_new][Load_ind_new][SOC_ind_new][:] 218 | ) 219 | else: 220 | best_next_action = np.random.choice(allowed) 221 | td_target = ( 222 | reward 223 | + self.gamma 224 | * self.Q[LMP_ind_new][TOU_ind_new][Load_ind_new][SOC_ind_new][ 225 | best_next_action 226 | ] 227 | ) 228 | td_delta = td_target - self.Q[LMP_ind][TOU_ind][Load_ind][SOC_ind][action] 229 | self.Q[LMP_ind][TOU_ind][Load_ind][SOC_ind][action] += self.alpha * td_delta 230 | 231 | # update the policy with the action 232 | self.Policy[LMP_ind][TOU_ind][Load_ind][SOC_ind] = action 233 | 234 | # done is True if episode terminated 235 | if done: 236 | break 237 | 238 | state = next_state 239 | SOC_ind = SOC_ind_new 240 | 241 | def get_next(self, state, action): 242 | """ 243 | Given starting state and action 244 | 245 | return: next state, reward, and boolean (done or not) 246 | """ 247 | # self.action_map = {0: self.LMP_buy, 1: self.LMP_sell, 2: self.wait, 3: self.TOU_buy, 4: self.TOU_discharge} 248 | # no terminating states in this problem 249 | done = False 250 | 251 | reward = 0 252 | 253 | newstate = state.copy() 254 | # action can only alter the SOC of the state variables 255 | 256 | # if charged: increase SOC, keep load the same 257 | if action in [0, 3]: 258 | newstate["SOC"] 
+= 1 259 | 260 | # if discharged TOU: decrease SOC, decrease load 261 | if action == 4: 262 | newstate["SOC"] -= 1 263 | load = max(0, self.Load_bins[state["Load"]]-self.BAT_KW) 264 | else: 265 | load = self.Load_bins[state["Load"]] 266 | 267 | # if discharged LMP: decrease SOC 268 | if action == 1: 269 | newstate["SOC"] -= 1 270 | 271 | # if wait do nothing 272 | if action == 2: 273 | pass 274 | 275 | # reward is based on action + residual of load 276 | # load to kWh * TOU rate + action component 277 | action_component = self.action_map[action](state) 278 | reward = -load * self.HR_FRAC * self.TOU_bins[state["TOU"]] + action_component 279 | 280 | return newstate, reward, done 281 | 282 | def calc_revenue(self): 283 | revenue = 0 284 | SOC_ind = 3 285 | actions = [] 286 | rewards = [] 287 | for ind, row in self.df.iterrows(): 288 | 289 | LMP_ind = row["binned_LMP"] 290 | TOU_ind = row["binned_TOU"] 291 | Load_ind = row["binned_Load"] 292 | 293 | state = { 294 | "LMP": LMP_ind, 295 | "TOU": TOU_ind, 296 | "Load": Load_ind, 297 | "SOC": SOC_ind, 298 | } 299 | # print(state) 300 | allowed = self.get_allowed_actions(state) 301 | # print(allowed) 302 | if int(self.Policy[LMP_ind][TOU_ind][Load_ind][SOC_ind]) in allowed: 303 | action = int(self.Policy[LMP_ind][TOU_ind][Load_ind][SOC_ind]) 304 | else: 305 | action = np.random.choice(allowed) 306 | 307 | #make new state 308 | newstate = state.copy() 309 | 310 | # CALCULATE REWARD COMPONENTS 311 | LMP_cost = self.LMP_bins[state["LMP"]] 312 | TOU_cost = self.TOU_bins[state["TOU"]] 313 | energy_change = self.BAT_KW * self.HR_FRAC 314 | # self.action_map = {0: self.LMP_buy, 1: self.LMP_sell, 2: self.wait, 3: self.TOU_buy, 4: self.TOU_discharge} 315 | 316 | if action == 0: 317 | action_component = -LMP_cost * energy_change 318 | elif action == 1: 319 | action_component = LMP_cost * energy_change 320 | elif action == 2: 321 | action_component = 0 322 | elif action == 3: 323 | action_component = -TOU_cost * energy_change 324 | elif action == 4: 325 | action_component = TOU_cost * energy_change 326 | 327 | # CHANGE STATE 328 | # if charged: increase SOC, keep load the same 329 | if action in [0, 3]: 330 | newstate["SOC"] += 1 331 | 332 | # if discharged TOU: decrease SOC 333 | if action == 4: 334 | newstate["SOC"] -= 1 335 | # load = max(0, self.Load_bins[state["Load"]]-self.BAT_KW) 336 | # else: 337 | 338 | # Do not alter load 339 | load = self.Load_bins[state["Load"]] 340 | 341 | # if discharged LMP: decrease SOC 342 | if action == 1: 343 | newstate["SOC"] -= 1 344 | 345 | # if wait do nothing 346 | if action == 2: 347 | pass 348 | 349 | # reward is based on action + residual of load 350 | # load to kWh * TOU rate + action component 351 | reward = -load * self.HR_FRAC * self.TOU_bins[state["TOU"]] + action_component 352 | 353 | # calc revenue and transition SOC 354 | revenue += reward 355 | SOC_ind = newstate["SOC"] 356 | 357 | #record 358 | actions.append(action) 359 | rewards.append(reward) 360 | 361 | print(revenue) 362 | self.write(actions, rewards) 363 | 364 | def write(self, actions, rewards): 365 | # for action in self.Policy: 366 | # with open('out.policy', 'w') as f: 367 | # f.write(action) 368 | 369 | 370 | action2string = { 371 | 0: 'LMP_buy', 372 | 1: 'LMP_sell', 373 | 2: 'wait', 374 | 3: 'TOU_buy', 375 | 4: 'TOU_discharge', 376 | } 377 | 378 | df = pd.DataFrame(list(zip(actions, rewards)), 379 | columns =['action_inds', 'rewards']) 380 | 381 | df['actions'] = df['action_inds'].map(action2string) 382 | df['cumulative_revenue'] = 
np.cumsum(df['rewards']) 383 | 384 | df.to_csv('output2.csv') 385 | 386 | def main(): 387 | df = pd.read_csv(r"Discretized_State_Space.csv") 388 | state_dict = pd.read_pickle(r"bins_Dict.pkl") 389 | 390 | a = Residential(df, state_dict) 391 | a.Q_learning() 392 | a.calc_revenue() 393 | 394 | 395 | 396 | 397 | if __name__ == "__main__": 398 | main() 399 | --------------------------------------------------------------------------------
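A rough end-to-end run order, inferred from the CSV files each script reads and writes. This driver is only a sketch and is not part of the repository: the raw inputs (the 9836.csv load export and the CAISO day-ahead LMP exports listed in Deep/opt_lmp.py) are not included, and the scripts assume particular working directories (for example, analysis.py looks for base.csv and naive.csv under Ed/ while Ed/base-naive.py writes them to the current working directory), so paths may need adjusting.

import subprocess
import sys

# Assumed order: build the load + TOU tariff dataset, then the LMP dataset and the
# discretized state space, then the optimal and baseline dispatches, then the
# Q-learning agent, and finally the comparison plots.
for script in [
    "csv-mod.py",        # -> load_tariff.csv, load_data.png
    "Deep/opt_lmp.py",   # -> df_LMP.csv, Discretized_State_Space.csv, bins_Dict.pkl
    "opt_tou.py",        # -> opt_tou_5kW_14kWh.csv
    "opt_lmp.py",        # -> opt_lmp_5kW_14kWh.csv
    "Ed/base-naive.py",  # -> naive.csv (and base.csv when make_base_csv is enabled)
    "residential.py",    # -> output2.csv (Q-learned actions and rewards)
    "analysis.py",       # -> cumulative_cost_comparison.png, action_pie.png
]:
    subprocess.run([sys.executable, script], check=True)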