├── dataset
├── source_code_map.png
├── .gitignore
├── README.md
├── ensemble.py
├── Callback.py
├── main.py
├── SpEnv.py
├── plotResults.py
├── DeepQTrading.py
└── run_all_experiments.sh

/source_code_map.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Artificial-Intelligence-Big-Data-Lab/A-Multi-Layer-and-Multi-Ensembled-Stock-Trader-Using-Deep-Learning-and-Deep-Reinforcement-Learning/HEAD/source_code_map.png

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Don't track the content of these folders
Output/


# Don't track these files #
telegramSettings.py


# Compiled source #
###################
*.com
*.class
*.dll
*.exe
*.o
*.so
*.pyc

# Documents #
#############
*.pdf
*.xlsx

# Packages #
############
# it's better to unpack these files and commit the raw source
# git has its own built-in compression methods
*.7z
*.dmg
*.gz
*.iso
*.jar
*.rar
*.tar
*.zip

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# A Multi-Layer and Multi-Ensemble Stock Trader Using Deep Learning and Deep Reinforcement Learning


## Abstract

> The use of computer-aided stock trading has gained popularity in recent years, mainly because of its ability to efficiently process past information through machine learning in order to predict future market behavior. Several approaches have been proposed for this task, with the most effective ones fusing the decisions of a pool of classifiers to predict future stock values. However, using price information alone has proven to lead to poor results, mainly because market history is not enough of an indicator of future market behavior. In this paper, we tackle this issue by proposing a multi-layer and multi-ensemble stock trader. Our method starts by pre-processing data with hundreds of deep neural networks. Then, a reward-based classifier acts as a meta-learner to maximize profit and generate stock signals through different iterations. Finally, several meta-learner trading decisions are fused in order to obtain more robust trading. Experimental results on intra-day trading of index futures indicate better performance when compared to several other ensemble techniques and the conventional buy-and-hold strategy.


## Authors:

- Salvatore Carta
- Andrea Corriga
- Anselmo Ferreira
- Alessandro Sebastian Podda
- Diego Reforgiato Recupero

# Info
This is the source code of the paper "A Multi-Layer and Multi-Ensembled Stock Trader Using Deep Learning and Deep Reinforcement Learning".

In this source code, we offer two datasets from two S&P 500 symbols: JPM and MSFT (please check the ./dataset folder). The source code runs on JPM; to switch to another symbol, change the dataset paths in DeepQTrading.py and ensemble.py accordingly.

To execute the code, just run ./run_all_experiments.sh in your terminal, and 100 experiments will be run (see Section 5.1 of our paper).

After each experiment, a final PDF is built with data from the RL training, together with a final table showing the trading results.

Please check source_code_map.png for a better explanation of what each .py file does.

If you use our code, don't forget to cite our paper.
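
## Running a single experiment

You can also launch a single experiment directly. For example, the following pair of commands (taken from run_all_experiments.sh) trains the agent with 3 actions, exploration rate 0.3, ReLU activation and the Adam optimizer, then builds the corresponding PDF report:

```bash
# python3 main.py <nbActions> <explorations> <activation> <outputFolder> <optimizer>
python3 main.py 3 0.3 relu teste-adam-0.3-relu adam
# python3 plotResults.py <outputFolder> <pdfName> <numFiles> <explorations>
python3 plotResults.py teste-adam-0.3-relu results-adam-relu-explorations-0.3 1 0.3
```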
--------------------------------------------------------------------------------
/ensemble.py:
--------------------------------------------------------------------------------
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

def majority_voting(df):

    local_df = df.copy()
    x = local_df.loc[:, 'iteration0':'iteration24']
    local_df['ensemble'] = x.mode(axis=1).iloc[:, 0]
    local_df = local_df.drop(local_df.columns.difference(['ensemble']), axis=1)
    return local_df

def ensemble(type, ensembleFolderName):

    dollSum = 0
    rewSum = 0
    posSum = 0
    negSum = 0
    covSum = 0
    numSum = 0

    values = []
    columns = ["Experiment", "#Wins", "#Losses", "Dollars", "Coverage", "Accuracy"]

    sp500 = pd.read_csv("./dataset/jpm/test_data.csv", index_col='date_time')

    df = pd.read_csv("./Output/ensemble/" + ensembleFolderName + "/ensemble_" + type + ".csv", index_col='date_time')

    df = majority_voting(df)

    num = 0
    rew = 0
    pos = 0
    neg = 0
    doll = 0
    cov = 0


    #Let's iterate through each date and decision
    for date, i in df.iterrows():

        #If the date in the predictions is also in the index of sp500 (which is a date as well)
        if date in sp500.index:

            num += 1

            #If the output is 1 (long)
            if (i['ensemble'] == 1):

                #If close - open is positive on that day, we earned money: count a win. Otherwise, no increment
                pos += 1 if (float(sp500.at[date, 'delta_next_day'])) > 0 else 0

                #If close - open is negative on that day, we lost money: count a loss. Otherwise, no increment
                neg += 1 if (float(sp500.at[date, 'delta_next_day'])) < 0 else 0

                #Let's accumulate the reward (positive or negative)
                rew += float(sp500.at[date, 'delta_next_day'])

                #In dollars, we accumulate the price difference itself
                doll += float(sp500.at[date, 'delta_next_day'])

                #There is coverage (of course)
                cov += 1

            #The same logic applies to shorts.
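            #For a short position, the profit is the opposite of delta_next_day:
            #we earn when the price falls and lose when it rises, so the win/loss
            #counters and the dollar reward below use the flipped sign.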
            elif (i['ensemble'] == 2):

                pos += 1 if float(sp500.at[date, 'delta_next_day']) < 0 else 0
                neg += 1 if float(sp500.at[date, 'delta_next_day']) > 0 else 0

                rew += -float(sp500.at[date, 'delta_next_day'])
                cov += 1
                doll += -float(sp500.at[date, 'delta_next_day'])



    values.append([str(1), str(round(pos, 2)), str(round(neg, 2)), str(round(doll, 2)), str(round(cov / num, 2)), (str(round(pos / cov, 2)) if (cov > 0) else "")])

    #Now let's sum walk by walk
    dollSum += doll
    rewSum += rew
    posSum += pos
    negSum += neg
    covSum += cov
    numSum += num


    #Now let's summarize everything by showing the sums
    values.append(["sum", str(round(posSum, 2)), str(round(negSum, 2)), str(round(dollSum, 2)), str(round(covSum / numSum, 2)), (str(round(posSum / covSum, 2)) if (covSum > 0) else "")])

    return values, columns

--------------------------------------------------------------------------------
/Callback.py:
--------------------------------------------------------------------------------
#Callbacks are functions used to give feedback about the metrics computed at each episode
from rl.callbacks import Callback

class ValidationCallback(Callback):

    def __init__(self):
        #Initially, the metrics are zero
        self.episodes = 0
        self.rewardSum = 0
        self.accuracy = 0
        self.coverage = 0
        self.short = 0
        self.long = 0
        self.shortAcc = 0
        self.longAcc = 0
        self.longPrec = 0
        self.shortPrec = 0
        self.marketRise = 0
        self.marketFall = 0

    def reset(self):
        #The metrics are also zeroed when the epoch ends
        self.episodes = 0
        self.rewardSum = 0
        self.accuracy = 0
        self.coverage = 0
        self.short = 0
        self.long = 0
        self.shortAcc = 0
        self.longAcc = 0
        self.longPrec = 0
        self.shortPrec = 0
        self.marketRise = 0
        self.marketFall = 0

    #All information is given by the environment: action, reward and market movement.
    #When the episode ends, the metrics are updated.
    def on_episode_end(self, action, reward, market):

        #After the episode ends, increment the episode counter
        self.episodes += 1

        #Accumulate the reward
        self.rewardSum += reward

        #If the action is not a hold, there is coverage because the agent made a decision
        self.coverage += 1 if (action != 0) else 0

        #Increment the accuracy if the reward is non-negative (we have a hit)
        self.accuracy += 1 if (reward >= 0 and action != 0) else 0


        #Increment the counter for short if the action is a short (id 2)
        self.short += 1 if (action == 2) else 0

        #Increment the counter for long if the action is a long (id 1)
        self.long += 1 if (action == 1) else 0

        #We also compute the accuracy for each action. Here, increment the
        #accuracy for short if the action is short and the reward is non-negative
        self.shortAcc += 1 if (action == 2 and reward >= 0) else 0

        #Increment the accuracy for long if the action is long and the reward is non-negative
        self.longAcc += 1 if (action == 1 and reward >= 0) else 0

        #If the market rises, increment the marketRise variable. If the prediction is 1 (long), increment the precision counter for long
        if (market > 0):
            self.marketRise += 1
            self.longPrec += 1 if (action == 1) else 0
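        #Note: longPrec/shortPrec count the days on which the agent actually went
        #long/short among the days the market rose/fell, so the ratios reported by
        #getInfo() measure how often a favourable market move was captured.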
        #If the market falls, increment marketFall. If the prediction is 2 (short), increment the precision counter for short
        elif (market < 0):
            self.marketFall += 1
            self.shortPrec += 1 if (action == 2) else 0

    #Function to report the metrics of the episode
    def getInfo(self):
        #Start by setting them to zero
        acc = 0
        cov = 0
        short = 0
        long = 0
        longAcc = 0
        shortAcc = 0
        longPrec = 0
        shortPrec = 0

        #If there is coverage, we compute the accuracy only over episodes where a decision was made.
        #In other words, we don't compute accuracy over hold operations
        if self.coverage > 0:
            acc = self.accuracy / self.coverage

        #Now compute the mean coverage, short and long rates over the episodes
        if self.episodes > 0:
            cov = self.coverage / self.episodes
            short = self.short / self.episodes
            long = self.long / self.episodes

        #Compute the mean accuracy for short operations.
        #That is, the number of shorts correctly predicted (self.shortAcc)
        #divided by the total number of shorts predicted (self.short)
        if self.short > 0:
            shortAcc = self.shortAcc / self.short

        #Compute the mean accuracy for long operations.
        #That is, the number of longs correctly predicted (self.longAcc)
        #divided by the total number of longs predicted (self.long)
        if self.long > 0:
            longAcc = self.longAcc / self.long


        if self.marketRise > 0:
            longPrec = self.longPrec / self.marketRise

        if self.marketFall > 0:
            shortPrec = self.shortPrec / self.marketFall

        #Return the metrics to the caller
        return self.episodes, cov, acc, self.rewardSum, long, short, longAcc, shortAcc, longPrec, shortPrec

--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
"""
This is the code of the reinforcement learning metalearner applied to the outputs of an ensemble of classifiers.
There is an ensemble of 1000 CNNs that outputs predictions for each day,
so our RL metalearner is applied to these 1000 outputs.

We call it as follows:

python3 main.py <nbActions> <explorations> <activation> <outputFolder> <optimizer>

ex: python3 main.py 3 0.3 selu teste-rmsprop-0.3-selu rmsprop

where:

<nbActions>: number of actions available to the agent.
<explorations>: during RL training, the probability that the action taken is random rather than following the Q-values found previously.
<activation>: activation function of the hidden layer of the double Q-network we use as RL agent.
<outputFolder>: where results will be written.
<optimizer>: optimization approach of the RL network.

Authors: Anselmo Ferreira, Alessandro Sebastian Podda and Andrea Corriga

Please don't hesitate to cite our Applied Intelligence paper when using this code for your research ;-)

"""

#The os library is used to pin the GPU used by the code, needed only in certain situations (better not to use it, unless the main GPU is busy)
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

#This is the class for the Agent which will perform the experiment
from DeepQTrading import DeepQTrading

#Date library to manipulate time in the source code
import datetime

#Keras library to define the NN to be used
from keras.models import Sequential

#Layers used in the NN considered
from keras.layers import Dense, Activation, Flatten

#Activation layers used in the source code
from keras.layers.advanced_activations import LeakyReLU, PReLU, ReLU

#Optimizer used in the NN
from keras.optimizers import Adam

#Libraries used for the Agent considered
from rl.agents.dqn import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import EpsGreedyQPolicy

#Library used to read the command-line arguments (and show exceptions in case of error)
import sys

#Limit the fraction of GPU memory taken by TensorFlow
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3
set_session(tf.Session(config=config))


#There are three possible actions in the stock market:
#Hold (id 0): do nothing.
#Long (id 1): predicts that the stock market value will rise by the end of the day.
#The action performed in this case is buying at the beginning of the day and selling at the end of the day (aka long).
#Short (id 2): predicts that the stock market value will decrease by the end of the day.
#The action performed is selling at the beginning of the day and buying back at the end of the day (aka short).

#This is the simple NN considered. It is composed of:
#One Flatten layer to receive 1000-dimensional vectors as input
#One Dense layer with 35 neurons and the given activation (sys.argv[3])
#One final Dense layer with the number of actions considered and linear activation

model = Sequential()
model.add(Flatten(input_shape=(1, 1000)))
if (sys.argv[3] == "relu"):
    model.add(Dense(35, activation='relu'))
elif (sys.argv[3] == "sigmoid"):
    model.add(Dense(35, activation='sigmoid'))
elif (sys.argv[3] == "linear"):
    model.add(Dense(35, activation='linear'))
elif (sys.argv[3] == "tanh"):
    model.add(Dense(35, activation='tanh'))
elif (sys.argv[3] == "selu"):
    model.add(Dense(35, activation='selu'))
model.add(LeakyReLU(alpha=.001))
model.add(Dense(int(sys.argv[1])))
model.add(Activation('linear'))
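
#The resulting architecture is: Flatten(1x1000) -> Dense(35, chosen activation)
#-> LeakyReLU(0.001) -> Dense(nbActions) with linear activation, whose outputs
#are interpreted as the Q-values of the hold, long and short actions.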

#Define the DeepQTrading class with the following parameters:
#explorations_iterations: actions are random with a given probability, repeated for 25 iterations.
#In this case, the iterations parameter is used because the agent acts on a daily basis, so it's better to repeat the experiment several times.
#outputFile: where the results will be written

#sys.argv[1]: number of actions
#sys.argv[2]: probability of performing explorations
#sys.argv[3]: activation function
#sys.argv[4]: folder name where experiment results will be written
#sys.argv[5]: optimizer

dqt = DeepQTrading(
    model=model,
    nbActions=int(sys.argv[1]),
    explorations_iterations=[(float(sys.argv[2]), 25)],
    outputFile="./Output/csv/" + sys.argv[4],
    ensembleFolderName=sys.argv[4],
    optimizer=sys.argv[5]
)

dqt.run()

dqt.end()

--------------------------------------------------------------------------------
/SpEnv.py:
--------------------------------------------------------------------------------
#Trading environment used by the RL agent
#gym is the environment library used by reinforcement learning
import gym
from gym import spaces
#numpy is the library to deal with matrices
import numpy
#pandas is the library used to deal with the CSV dataset
import pandas
#datetime is the library used to manipulate time and date
from datetime import datetime
#Library created by Tonio to merge data used as feature vectors
#from MergedDataStructure import MergedDataStructure
#Callback is the module used to show metrics
import Callback


class SpEnv(gym.Env):
    #Just for the gym library. In a continuous environment, infinitely many decisions are possible.
    #We don't want this because we have just three possible actions.
    continuous = False

    #data is the dataset used by the environment
    #the ensamble parameter tells whether to save the decisions at each walk

    def __init__(self, data, callback=None, ensamble=None, columnName="iteration-1"):
        #Declare the episode as the first episode
        self.episode = 1

        #Opening the dataset
        self.data = data

        #By default, no decisions are written out
        self.output = False

        #ensamble is the table of validation and testing decisions.
        #If it is None, the validation/testing CSVs will not be saved
        if (ensamble is not None):
            self.output = True
            self.ensamble = ensamble
            self.columnName = columnName

            #self.ensamble is a big table (before file writing) containing observations as rows and iterations as columns.
            #Each column will contain a decision for each date; it is saved later.
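            #Concretely: rows are indexed by date_time, columns are iteration0..iteration24,
            #and each cell holds the action (0=hold, 1=long, 2=short) taken on that date in
            #that iteration; ensemble.py later takes the row-wise mode of these columns.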
            #We read this table back later in order to make ensemble decisions at each iteration
            self.ensamble[self.columnName] = 0

        #Declare low and high as vectors with -inf/+inf values
        self.low = numpy.array([-numpy.inf])
        self.high = numpy.array([+numpy.inf])

        #Define the action space: 3 discrete actions (hold, long and short), encoded as integers from 0 to 2
        self.action_space = gym.spaces.Box(low=numpy.array([0]), high=numpy.array([2]), dtype=numpy.int32)


        #low and high are the minimum and maximum accepted values for the observations.
        #We don't know the minimum and maximum values of Close-Open, so we leave the bounds unbounded
        self.observation_space = spaces.Box(self.low, self.high, dtype=numpy.float32)

        self.currentObservation = 0
        #Declare that the environment is not done yet
        self.done = False
        #The limit is the number of rows in the dataset
        self.limit = len(data)

        #Initialize the values to be returned by the environment
        self.reward = None
        self.possibleGain = 0
        self.openValue = 0
        self.closeValue = 0
        self.callback = callback

    #This is the action that is performed in the environment.
    #Receives the action and returns the new state, the reward and whether the episode is done
    def step(self, action):

        #assert self.action_space.contains(action)

        #Initialize the reward
        self.reward = 0

        #The possible gain is the next-day price delta, i.e. the reward of a long position
        self.possibleGain = self.data.iloc[self.currentObservation]['delta_next_day']

        #If the action is a long, the reward is the delta itself
        #(transaction costs are not subtracted here)
        if (action == 1):
            self.reward = self.possibleGain

        #If the action is a short, the reward is the opposite of the delta
        elif (action == 2):
            self.reward = (-self.possibleGain)

        #If the action is a hold, there is no reward
        elif (action == 0):
            self.reward = 0

        #Each step is a full episode: finish it
        self.done = True


        #Call the callback at the end of the episode
        if (self.callback is not None and self.done):
            self.callback.on_episode_end(action, self.reward, self.possibleGain)

        #If this is validation or test, save the action in the ensemble table that will be fused later
        if (self.output):
            self.ensamble.at[self.data.iloc[self.currentObservation]['date_time'], self.columnName] = action

        self.episode += 1
        self.currentObservation += 1

        if (self.currentObservation >= self.limit):
            self.currentObservation = 0

        #Return the state, the reward and whether the episode is done
        return self.getObservation(), self.reward, self.done, {}

    #Called when an episode finishes:
    #reset prepares the next state (feature vector) and gives it to the agent
    def reset(self):

        self.done = False
        self.reward = None
        self.possibleGain = 0

        return self.getObservation()

    #The observation is the vector of the 1000 CNN predictions for the current day
    def getObservation(self):

        predictionList = numpy.array(self.data.iloc[self.currentObservation]["prediction_0":"prediction_999"])

        return predictionList.ravel()
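
    #Unlike reset(), which keras-rl calls before every (single-step) episode,
    #resetEnv rewinds the environment to the first row of the dataset; it is
    #called manually by DeepQTrading before each training/validation/test pass.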
    def resetEnv(self):
        self.currentObservation = 0
        self.episode = 1

--------------------------------------------------------------------------------
/plotResults.py:
--------------------------------------------------------------------------------
from matplotlib.backends.backend_pdf import PdfPages
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import sys
from math import floor
from ensemble import ensemble

from matplotlib.gridspec import GridSpec

outputFile = str(sys.argv[2]) + ".pdf"
numFiles = int(sys.argv[3])
#Number of epochs (iterations) shown on the x axis
numEpochs = 35
numPlots = 10

pdf = PdfPages(outputFile)

#Configure the size of the picture that will be plotted
plt.figure(figsize=((numEpochs / 10) * (2), 9 * 5))

#Open the file that was saved in the csv folder, containing information about each iteration of that walk.
#Let's show a summary of each walk: for each walk, one column is plotted in the final pdf file
for i in range(1, numFiles + 1):

    document = pd.read_csv("./Output/csv/" + sys.argv[1] + "/results-agent-training.csv")
    plt.subplot(numPlots, numFiles, 0 * numFiles + i)
    #Draw the information in that file. First of all, let's plot the accuracy
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'testAccuracy'].tolist(), 'r', label='Test')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'trainAccuracy'].tolist(), 'b', label='Train')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'validationAccuracy'].tolist(), 'g', label='Validation')
    plt.xticks(range(0, numEpochs, 4))
    plt.yticks(np.arange(0, 1, step=0.1))
    plt.ylim(-0.05, 1.05)
    plt.axhline(y=0, color='k', linestyle='-')
    plt.legend()
    plt.grid()
    plt.title('Accuracy')

    #Let's draw the information about coverage, read from the same csv file
    plt.subplot(numPlots, numFiles, 1 * numFiles + i)
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'testCoverage'].tolist(), 'r', label='Test')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'trainCoverage'].tolist(), 'b', label='Train')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'validationCoverage'].tolist(), 'g', label='Validation')
    plt.xticks(range(0, numEpochs, 4))
    plt.yticks(np.arange(0, 1, step=0.1))
    plt.ylim(-0.05, 1.05)
    plt.axhline(y=0, color='k', linestyle='-')
    plt.legend()
    plt.grid()
    plt.title('Coverage')

    #Information about reward
    plt.subplot(numPlots, numFiles, 2 * numFiles + i)
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'trainReward'].tolist(), 'b', label='Train')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'validationReward'].tolist(), 'g', label='Validation')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'testReward'].tolist(), 'r', label='Test')
    plt.xticks(range(0, numEpochs, 4))
    plt.axhline(y=0, color='k', linestyle='-')
    plt.legend()
    plt.grid()
    plt.title('Reward')
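
    #Each metric below occupies one row of the numPlots x numFiles grid:
    #the subplot index k*numFiles + i places walk i in column i of row k.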
    #Percentages of long
    plt.subplot(numPlots, numFiles, 3 * numFiles + i)
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'trainLong%'].tolist(), 'b', label='Train')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'validationLong%'].tolist(), 'g', label='Validation')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'testLong%'].tolist(), 'r', label='Test')
    plt.xticks(range(0, numEpochs, 4))
    plt.yticks(np.arange(0, 1, step=0.1))
    plt.ylim(-0.05, 1.05)
    plt.axhline(y=0, color='k', linestyle='-')
    plt.legend()
    plt.grid()
    plt.title('Long %')

    #Percentages of short
    plt.subplot(numPlots, numFiles, 4 * numFiles + i)
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'trainShort%'].tolist(), 'b', label='Train')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'validationShort%'].tolist(), 'g', label='Validation')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'testShort%'].tolist(), 'r', label='Test')
    plt.xticks(range(0, numEpochs, 4))
    plt.yticks(np.arange(0, 1, step=0.1))
    plt.ylim(-0.05, 1.05)
    plt.axhline(y=0, color='k', linestyle='-')
    plt.legend()
    plt.grid()
    plt.title('Short %')


    #Hold percentage (the complement of coverage)
    plt.subplot(numPlots, numFiles, 5 * numFiles + i)
    plt.plot(document.loc[:, 'Iteration'].tolist(), list(map(lambda x: 1 - x, document.loc[:, 'trainCoverage'].tolist())), 'b', label='Train')
    plt.plot(document.loc[:, 'Iteration'].tolist(), list(map(lambda x: 1 - x, document.loc[:, 'validationCoverage'].tolist())), 'g', label='Validation')
    plt.plot(document.loc[:, 'Iteration'].tolist(), list(map(lambda x: 1 - x, document.loc[:, 'testCoverage'].tolist())), 'r', label='Test')
    plt.xticks(range(0, numEpochs, 4))
    plt.yticks(np.arange(0, 1, step=0.1))
    plt.ylim(-0.05, 1.05)
    plt.axhline(y=0, color='k', linestyle='-')
    plt.legend()
    plt.grid()
    plt.title('Hold %')


    #Accuracy of longs
    plt.subplot(numPlots, numFiles, 6 * numFiles + i)
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'trainLongAcc'].tolist(), 'b', label='Train')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'validationLongAcc'].tolist(), 'g', label='Validation')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'testLongAcc'].tolist(), 'r', label='Test')
    plt.xticks(range(0, numEpochs, 4))
    plt.yticks(np.arange(0, 1, step=0.1))
    plt.ylim(-0.05, 1.05)
    plt.axhline(y=0, color='k', linestyle='-')
    plt.legend()
    plt.grid()
    plt.title('Long Accuracy')

    #Accuracy of shorts
    plt.subplot(numPlots, numFiles, 7 * numFiles + i)
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'trainShortAcc'].tolist(), 'b', label='Train')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'validationShortAcc'].tolist(), 'g', label='Validation')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'testShortAcc'].tolist(), 'r', label='Test')
    plt.xticks(range(0, numEpochs, 4))
    plt.yticks(np.arange(0, 1, step=0.1))
    plt.ylim(-0.05, 1.05)
    plt.axhline(y=0, color='k', linestyle='-')
    plt.legend()
    plt.grid()
    plt.title('Short Accuracy')


    #Precisions of long
    plt.subplot(numPlots, numFiles, 8 * numFiles + i)
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'trainLongPrec'].tolist(), 'b', label='Train')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'validLongPrec'].tolist(), 'g', label='Validation')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'testLongPrec'].tolist(), 'r', label='Test')
    plt.xticks(range(0, numEpochs, 4))
    plt.yticks(np.arange(0, 1, step=0.1))
    plt.ylim(-0.05, 1.05)
    plt.axhline(y=0, color='k', linestyle='-')
    plt.legend()
    plt.grid()
    plt.title('Long Precision')

    #Precisions of short
    plt.subplot(numPlots, numFiles, 9 * numFiles + i)
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'trainShortPrec'].tolist(), 'b', label='Train')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'validShortPrec'].tolist(), 'g', label='Validation')
    plt.plot(document.loc[:, 'Iteration'].tolist(), document.loc[:, 'testShortPrec'].tolist(), 'r', label='Test')
    plt.xticks(range(0, numEpochs, 4))
    plt.yticks(np.arange(0, 1, step=0.1))
    plt.ylim(-0.05, 1.05)
    plt.axhline(y=0, color='k', linestyle='-')
    plt.legend()
    plt.grid()
    plt.title('Short Precision')

    plt.suptitle("Experiment RL metalearner\n"
                 + "Model: 35 neurons single layer\n"
                 + "Input: 1000 predictions of CNNs\n"
                 + "Memory-Window Length: 10000-1\n"
                 + "Other changes: Does Short, Hold and Long\n"
                 + "Explorations:" + sys.argv[4] + "."
                 , size=19
                 , weight=20
                 , ha='left'
                 , x=0.1
                 , y=0.99)

    pdf.savefig()


#Now let's build the ensemble table
###########-------------------------------------------------------------------|Full Ensemble table|-------------------
x = 2
y = 1
plt.figure(figsize=(x * 3.5, y * 3.5))

plt.subplot(y, y, 1)
plt.axis('off')
val, col = ensemble("test", sys.argv[1])
t = plt.table(cellText=val, colLabels=col, fontsize=20, loc='center')
t.auto_set_font_size(False)
t.set_fontsize(6)
plt.title("Final Results")
#plt.suptitle("MAJORITY VOTING")
pdf.savefig()
###########--------------------------------------------------------------------------------------------------------------------
pdf.close()

--------------------------------------------------------------------------------
/DeepQTrading.py:
--------------------------------------------------------------------------------
#Imports the SpEnv environment, in which the Agent performs its actions
from SpEnv import SpEnv

#Callback used to print the results at each episode
from Callback import ValidationCallback

#Keras library for the NN considered
from keras.models import Sequential

#Keras libraries for the layers, activations and optimizers used
from keras.layers import Dense, Activation, Flatten
from keras.layers.advanced_activations import LeakyReLU, PReLU
from keras.optimizers import *

#RL Agent
from rl.agents.dqn import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import EpsGreedyQPolicy
from keras_radam import RAdam

#Mathematical operations used later
from math import floor

#Library to manipulate the dataset in a csv file
import pandas as pd

#Library used to manipulate time
import datetime
import os

import numpy
numpy.random.seed(0)

class DeepQTrading:

    #Class constructor
    #model: Keras model considered
    #explorations_iterations: a vector of tuples containing (i) the probability of random actions and
    #(ii) how many iterations to run with that probability (we run the algorithm several times, i.e. several iterations)
    #outputFile: name of the file where the training metrics are written
    #ensembleFolderName: name of the folder where predictions are written
    #optimizer: optimizer to use
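    #For example, main.py passes explorations_iterations=[(0.3, 25)]: a single
    #phase of 25 iterations in which actions are random with probability 0.3.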
    def __init__(self, model, nbActions, explorations_iterations, outputFile, ensembleFolderName, optimizer="adamax"):

        self.ensembleFolderName = ensembleFolderName
        self.policy = EpsGreedyQPolicy()
        self.explorations_iterations = explorations_iterations
        self.nbActions = nbActions
        self.model = model
        #Define the memory
        self.memory = SequentialMemory(limit=10000, window_length=1)
        #Instantiate the agent with the parameters received
        self.agent = DQNAgent(model=self.model, policy=self.policy, nb_actions=self.nbActions, memory=self.memory, nb_steps_warmup=200, target_model_update=1e-1, enable_double_dqn=True, enable_dueling_network=True)

        #Compile the agent with the optimizer given as parameter
        if optimizer == "adamax":
            self.agent.compile(Adamax(), metrics=['mae'])
        elif optimizer == "adadelta":
            self.agent.compile(Adadelta(), metrics=['mae'])
        elif optimizer == "sgd":
            self.agent.compile(SGD(), metrics=['mae'])
        elif optimizer == "rmsprop":
            self.agent.compile(RMSprop(), metrics=['mae'])
        elif optimizer == "nadam":
            self.agent.compile(Nadam(), metrics=['mae'])
        elif optimizer == "adagrad":
            self.agent.compile(Adagrad(), metrics=['mae'])
        elif optimizer == "adam":
            self.agent.compile(Adam(), metrics=['mae'])
        elif optimizer == "radam":
            self.agent.compile(RAdam(total_steps=5000, warmup_proportion=0.1, min_lr=1e-5), metrics=['mae'])

        #Save the (still random) weights of the agent in the q.weights file
        self.agent.save_weights("q.weights", overwrite=True)

        #Load the data (note that validation reuses the training CSV here)
        self.train_data = pd.read_csv('./dataset/jpm/train_data.csv')
        self.validation_data = pd.read_csv('./dataset/jpm/train_data.csv')
        self.test_data = pd.read_csv('./dataset/jpm/test_data.csv')

        #Create the callbacks for training, validation and test in order to show results for each iteration
        self.trainer = ValidationCallback()
        self.validator = ValidationCallback()
        self.tester = ValidationCallback()
        self.outputFileName = outputFile

    def run(self):
        #Initialize the environments
        trainEnv = validEnv = testEnv = None

        if not os.path.exists(self.outputFileName):
            os.makedirs(self.outputFileName)

        file_name = self.outputFileName + "/results-agent-training.csv"

        self.outputFile = open(file_name, "w+")
        #Write the first row (header) of the csv
        self.outputFile.write(
            "Iteration," +
            "trainAccuracy," +
            "trainCoverage," +
            "trainReward," +
            "trainLong%," +
            "trainShort%," +
            "trainLongAcc," +
            "trainShortAcc," +
            "trainLongPrec," +
            "trainShortPrec," +

            "validationAccuracy," +
            "validationCoverage," +
            "validationReward," +
            "validationLong%," +
            "validationShort%," +
            "validationLongAcc," +
            "validationShortAcc," +
            "validLongPrec," +
            "validShortPrec," +

            "testAccuracy," +
            "testCoverage," +
            "testReward," +
            "testLong%," +
            "testShort%," +
            "testLongAcc," +
            "testShortAcc," +
            "testLongPrec," +
            "testShortPrec\n")


        #Prepare the validation and test tables for saving them later
        ensambleValid = pd.DataFrame(index=self.validation_data.loc[:, 'date_time'].drop_duplicates().tolist())
        ensambleTest = pd.DataFrame(index=self.test_data.loc[:, 'date_time'].drop_duplicates().tolist())

        #Set the name of the index for validation and testing
        ensambleValid.index.name = 'date_time'
        ensambleTest.index.name = 'date_time'
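
        #Each iteration below trains the agent on the training set, then evaluates it
        #on the validation and test sets, appending one row of metrics to the CSV and
        #one column of decisions (iteration0..iteration24) to the tables above.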
        #Explorations are epochs considered, i.e. how many times the agent will play the game
        for eps in self.explorations_iterations:

            #The policy will use eps[0] as exploration rate: random actions are taken with probability eps[0]
            self.policy.eps = eps[0]

            #There will be eps[1] (i.e. 25) iterations
            for i in range(0, eps[1]):

                del(trainEnv)
                #Define the training, validation and testing environments with their respective callbacks
                trainEnv = SpEnv(data=self.train_data, callback=self.trainer)

                del(validEnv)
                validEnv = SpEnv(data=self.validation_data, ensamble=ensambleValid, callback=self.validator, columnName="iteration" + str(i))

                del(testEnv)
                testEnv = SpEnv(data=self.test_data, callback=self.tester, ensamble=ensambleTest, columnName="iteration" + str(i))

                #Reset the callbacks
                self.trainer.reset()
                self.validator.reset()
                self.tester.reset()

                #Reset the training environment
                trainEnv.resetEnv()

                #Train the agent
                #The agent receives the environment as input
                self.agent.fit(trainEnv, nb_steps=len(self.train_data), visualize=False, verbose=0)

                #Get the info from the training callback
                (_, trainCoverage, trainAccuracy, trainReward, trainLongPerc, trainShortPerc, trainLongAcc, trainShortAcc, trainLongPrec, trainShortPrec) = self.trainer.getInfo()

                print("Iteration " + str(i + 1) + " TRAIN: accuracy: " + str(trainAccuracy) + " coverage: " + str(trainCoverage) + " reward: " + str(trainReward))

                #Reset the validation environment
                validEnv.resetEnv()
                #Evaluate the agent on the validation data
                self.agent.test(validEnv, nb_episodes=len(self.validation_data), visualize=False, verbose=0)

                #Get the info from the validation callback
                (_, validCoverage, validAccuracy, validReward, validLongPerc, validShortPerc,
                 validLongAcc, validShortAcc, validLongPrec, validShortPrec) = self.validator.getInfo()
                #Print the callback values on the screen
                print("Iteration " + str(i + 1) + " VALIDATION: accuracy: " + str(validAccuracy) + " coverage: " + str(validCoverage) + " reward: " + str(validReward))

                #Reset the testing environment
                testEnv.resetEnv()
                #Evaluate the agent on the testing data
                self.agent.test(testEnv, nb_episodes=len(self.test_data), visualize=False, verbose=0)
                #Get the info from the testing callback
                (_, testCoverage, testAccuracy, testReward, testLongPerc, testShortPerc,
                 testLongAcc, testShortAcc, testLongPrec, testShortPrec) = self.tester.getInfo()
                #Print the callback values on the screen
                print("Iteration " + str(i + 1) + " TEST: acc: " + str(testAccuracy) + " cov: " + str(testCoverage) + " rew: " + str(testReward))
                print(" ")

                #Write the metrics to the csv file
                self.outputFile.write(
                    str(i) + "," +
                    str(trainAccuracy) + "," +
                    str(trainCoverage) + "," +
                    str(trainReward) + "," +
                    str(trainLongPerc) + "," +
                    str(trainShortPerc) + "," +
                    str(trainLongAcc) + "," +
                    str(trainShortAcc) + "," +
                    str(trainLongPrec) + "," +
                    str(trainShortPrec) + "," +

                    str(validAccuracy) + "," +
                    str(validCoverage) + "," +
                    str(validReward) + "," +
                    str(validLongPerc) + "," +
                    str(validShortPerc) + "," +
                    str(validLongAcc) + "," +
                    str(validShortAcc) + "," +
                    str(validLongPrec) + "," +
                    str(validShortPrec) + "," +

                    str(testAccuracy) + "," +
                    str(testCoverage) + "," +
                    str(testReward) + "," +
                    str(testLongPerc) + "," +
                    str(testShortPerc) + "," +
                    str(testLongAcc) + "," +
                    str(testShortAcc) + "," +
                    str(testLongPrec) + "," +
                    str(testShortPrec) + "\n")

        #Close the file
        self.outputFile.close()

        if not os.path.exists("./Output/ensemble/" + self.ensembleFolderName):
            os.makedirs("./Output/ensemble/" + self.ensembleFolderName)

        ensambleValid.to_csv("./Output/ensemble/" + self.ensembleFolderName + "/ensemble_valid.csv")
        ensambleTest.to_csv("./Output/ensemble/" + self.ensembleFolderName + "/ensemble_test.csv")


    #Function to end the Agent
    def end(self):
        print("FINISHED")

--------------------------------------------------------------------------------
/run_all_experiments.sh:
--------------------------------------------------------------------------------
#!/bin/bash
#Run all 100 experiments (see Section 5.1 of the paper):
#5 optimizers x 4 exploration rates x 5 activation functions,
#each followed by the corresponding PDF report.
for opt in adam adamax adagrad adadelta rmsprop; do
    for eps in 0.3 0.5 0.7 0.9; do
        for act in relu sigmoid linear tanh selu; do
            python3 main.py 3 $eps $act teste-$opt-$eps-$act $opt
            python3 plotResults.py teste-$opt-$eps-$act results-$opt-$act-explorations-$eps 1 $eps
        done
    done
done

--------------------------------------------------------------------------------