├── .gitignore ├── LICENSE ├── PredictiveModel.py ├── PredictiveMonitor.py ├── README.md ├── SequenceEncoder.py ├── TextTransformers.py └── experiments ├── BoNGExperimentsCV.py ├── FinalExperiments.py ├── LDAExperimentsCV.py ├── NBLogCountRatioExperimentsCV.py ├── PVExperimentsCV.py ├── TimedExperiments.py ├── cv_results ├── bong_selected100_ngram1_rf_part1 ├── bong_selected100_tfidf_ngram1_rf_part1 ├── bong_selected100_tfidf_ngram2_rf_part1 ├── bong_selected100_tfidf_ngram3_rf_part1 └── optimal_params_rf ├── extract_optimal_params.py ├── final_results └── base_rf └── timed_results └── base_rf_1 /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | 27 | # PyInstaller 28 | # Usually these files are written by a python script from a template 29 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 30 | *.manifest 31 | *.spec 32 | 33 | # Installer logs 34 | pip-log.txt 35 | pip-delete-this-directory.txt 36 | 37 | # Unit test / coverage reports 38 | htmlcov/ 39 | .tox/ 40 | .coverage 41 | .coverage.* 42 | .cache 43 | nosetests.xml 44 | coverage.xml 45 | *,cover 46 | .hypothesis/ 47 | 48 | # Translations 49 | *.mo 50 | *.pot 51 | 52 | # Django stuff: 53 | *.log 54 | 55 | # Sphinx documentation 56 | docs/_build/ 57 | 58 | # PyBuilder 59 | target/ 60 | 61 | #Ipython Notebook 62 | .ipynb_checkpoints 63 | 64 | # Other 65 | experiments/cv_results 66 | experiments/final_results 67 | Predictive_monitoring.ipynb 68 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 2, June 1991 3 | 4 | Copyright (C) 1989, 1991 Free Software Foundation, Inc., 5 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA 6 | Everyone is permitted to copy and distribute verbatim copies 7 | of this license document, but changing it is not allowed. 8 | 9 | Preamble 10 | 11 | The licenses for most software are designed to take away your 12 | freedom to share and change it. By contrast, the GNU General Public 13 | License is intended to guarantee your freedom to share and change free 14 | software--to make sure the software is free for all its users. This 15 | General Public License applies to most of the Free Software 16 | Foundation's software and to any other program whose authors commit to 17 | using it. (Some other Free Software Foundation software is covered by 18 | the GNU Lesser General Public License instead.) You can apply it to 19 | your programs, too. 20 | 21 | When we speak of free software, we are referring to freedom, not 22 | price. Our General Public Licenses are designed to make sure that you 23 | have the freedom to distribute copies of free software (and charge for 24 | this service if you wish), that you receive source code or can get it 25 | if you want it, that you can change the software or use pieces of it 26 | in new free programs; and that you know you can do these things. 
27 | 28 | To protect your rights, we need to make restrictions that forbid 29 | anyone to deny you these rights or to ask you to surrender the rights. 30 | These restrictions translate to certain responsibilities for you if you 31 | distribute copies of the software, or if you modify it. 32 | 33 | For example, if you distribute copies of such a program, whether 34 | gratis or for a fee, you must give the recipients all the rights that 35 | you have. You must make sure that they, too, receive or can get the 36 | source code. And you must show them these terms so they know their 37 | rights. 38 | 39 | We protect your rights with two steps: (1) copyright the software, and 40 | (2) offer you this license which gives you legal permission to copy, 41 | distribute and/or modify the software. 42 | 43 | Also, for each author's protection and ours, we want to make certain 44 | that everyone understands that there is no warranty for this free 45 | software. If the software is modified by someone else and passed on, we 46 | want its recipients to know that what they have is not the original, so 47 | that any problems introduced by others will not reflect on the original 48 | authors' reputations. 49 | 50 | Finally, any free program is threatened constantly by software 51 | patents. We wish to avoid the danger that redistributors of a free 52 | program will individually obtain patent licenses, in effect making the 53 | program proprietary. To prevent this, we have made it clear that any 54 | patent must be licensed for everyone's free use or not licensed at all. 55 | 56 | The precise terms and conditions for copying, distribution and 57 | modification follow. 58 | 59 | GNU GENERAL PUBLIC LICENSE 60 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 61 | 62 | 0. This License applies to any program or other work which contains 63 | a notice placed by the copyright holder saying it may be distributed 64 | under the terms of this General Public License. The "Program", below, 65 | refers to any such program or work, and a "work based on the Program" 66 | means either the Program or any derivative work under copyright law: 67 | that is to say, a work containing the Program or a portion of it, 68 | either verbatim or with modifications and/or translated into another 69 | language. (Hereinafter, translation is included without limitation in 70 | the term "modification".) Each licensee is addressed as "you". 71 | 72 | Activities other than copying, distribution and modification are not 73 | covered by this License; they are outside its scope. The act of 74 | running the Program is not restricted, and the output from the Program 75 | is covered only if its contents constitute a work based on the 76 | Program (independent of having been made by running the Program). 77 | Whether that is true depends on what the Program does. 78 | 79 | 1. You may copy and distribute verbatim copies of the Program's 80 | source code as you receive it, in any medium, provided that you 81 | conspicuously and appropriately publish on each copy an appropriate 82 | copyright notice and disclaimer of warranty; keep intact all the 83 | notices that refer to this License and to the absence of any warranty; 84 | and give any other recipients of the Program a copy of this License 85 | along with the Program. 86 | 87 | You may charge a fee for the physical act of transferring a copy, and 88 | you may at your option offer warranty protection in exchange for a fee. 89 | 90 | 2. 
You may modify your copy or copies of the Program or any portion 91 | of it, thus forming a work based on the Program, and copy and 92 | distribute such modifications or work under the terms of Section 1 93 | above, provided that you also meet all of these conditions: 94 | 95 | a) You must cause the modified files to carry prominent notices 96 | stating that you changed the files and the date of any change. 97 | 98 | b) You must cause any work that you distribute or publish, that in 99 | whole or in part contains or is derived from the Program or any 100 | part thereof, to be licensed as a whole at no charge to all third 101 | parties under the terms of this License. 102 | 103 | c) If the modified program normally reads commands interactively 104 | when run, you must cause it, when started running for such 105 | interactive use in the most ordinary way, to print or display an 106 | announcement including an appropriate copyright notice and a 107 | notice that there is no warranty (or else, saying that you provide 108 | a warranty) and that users may redistribute the program under 109 | these conditions, and telling the user how to view a copy of this 110 | License. (Exception: if the Program itself is interactive but 111 | does not normally print such an announcement, your work based on 112 | the Program is not required to print an announcement.) 113 | 114 | These requirements apply to the modified work as a whole. If 115 | identifiable sections of that work are not derived from the Program, 116 | and can be reasonably considered independent and separate works in 117 | themselves, then this License, and its terms, do not apply to those 118 | sections when you distribute them as separate works. But when you 119 | distribute the same sections as part of a whole which is a work based 120 | on the Program, the distribution of the whole must be on the terms of 121 | this License, whose permissions for other licensees extend to the 122 | entire whole, and thus to each and every part regardless of who wrote it. 123 | 124 | Thus, it is not the intent of this section to claim rights or contest 125 | your rights to work written entirely by you; rather, the intent is to 126 | exercise the right to control the distribution of derivative or 127 | collective works based on the Program. 128 | 129 | In addition, mere aggregation of another work not based on the Program 130 | with the Program (or with a work based on the Program) on a volume of 131 | a storage or distribution medium does not bring the other work under 132 | the scope of this License. 133 | 134 | 3. 
You may copy and distribute the Program (or a work based on it, 135 | under Section 2) in object code or executable form under the terms of 136 | Sections 1 and 2 above provided that you also do one of the following: 137 | 138 | a) Accompany it with the complete corresponding machine-readable 139 | source code, which must be distributed under the terms of Sections 140 | 1 and 2 above on a medium customarily used for software interchange; or, 141 | 142 | b) Accompany it with a written offer, valid for at least three 143 | years, to give any third party, for a charge no more than your 144 | cost of physically performing source distribution, a complete 145 | machine-readable copy of the corresponding source code, to be 146 | distributed under the terms of Sections 1 and 2 above on a medium 147 | customarily used for software interchange; or, 148 | 149 | c) Accompany it with the information you received as to the offer 150 | to distribute corresponding source code. (This alternative is 151 | allowed only for noncommercial distribution and only if you 152 | received the program in object code or executable form with such 153 | an offer, in accord with Subsection b above.) 154 | 155 | The source code for a work means the preferred form of the work for 156 | making modifications to it. For an executable work, complete source 157 | code means all the source code for all modules it contains, plus any 158 | associated interface definition files, plus the scripts used to 159 | control compilation and installation of the executable. However, as a 160 | special exception, the source code distributed need not include 161 | anything that is normally distributed (in either source or binary 162 | form) with the major components (compiler, kernel, and so on) of the 163 | operating system on which the executable runs, unless that component 164 | itself accompanies the executable. 165 | 166 | If distribution of executable or object code is made by offering 167 | access to copy from a designated place, then offering equivalent 168 | access to copy the source code from the same place counts as 169 | distribution of the source code, even though third parties are not 170 | compelled to copy the source along with the object code. 171 | 172 | 4. You may not copy, modify, sublicense, or distribute the Program 173 | except as expressly provided under this License. Any attempt 174 | otherwise to copy, modify, sublicense or distribute the Program is 175 | void, and will automatically terminate your rights under this License. 176 | However, parties who have received copies, or rights, from you under 177 | this License will not have their licenses terminated so long as such 178 | parties remain in full compliance. 179 | 180 | 5. You are not required to accept this License, since you have not 181 | signed it. However, nothing else grants you permission to modify or 182 | distribute the Program or its derivative works. These actions are 183 | prohibited by law if you do not accept this License. Therefore, by 184 | modifying or distributing the Program (or any work based on the 185 | Program), you indicate your acceptance of this License to do so, and 186 | all its terms and conditions for copying, distributing or modifying 187 | the Program or works based on it. 188 | 189 | 6. Each time you redistribute the Program (or any work based on the 190 | Program), the recipient automatically receives a license from the 191 | original licensor to copy, distribute or modify the Program subject to 192 | these terms and conditions. 
You may not impose any further 193 | restrictions on the recipients' exercise of the rights granted herein. 194 | You are not responsible for enforcing compliance by third parties to 195 | this License. 196 | 197 | 7. If, as a consequence of a court judgment or allegation of patent 198 | infringement or for any other reason (not limited to patent issues), 199 | conditions are imposed on you (whether by court order, agreement or 200 | otherwise) that contradict the conditions of this License, they do not 201 | excuse you from the conditions of this License. If you cannot 202 | distribute so as to satisfy simultaneously your obligations under this 203 | License and any other pertinent obligations, then as a consequence you 204 | may not distribute the Program at all. For example, if a patent 205 | license would not permit royalty-free redistribution of the Program by 206 | all those who receive copies directly or indirectly through you, then 207 | the only way you could satisfy both it and this License would be to 208 | refrain entirely from distribution of the Program. 209 | 210 | If any portion of this section is held invalid or unenforceable under 211 | any particular circumstance, the balance of the section is intended to 212 | apply and the section as a whole is intended to apply in other 213 | circumstances. 214 | 215 | It is not the purpose of this section to induce you to infringe any 216 | patents or other property right claims or to contest validity of any 217 | such claims; this section has the sole purpose of protecting the 218 | integrity of the free software distribution system, which is 219 | implemented by public license practices. Many people have made 220 | generous contributions to the wide range of software distributed 221 | through that system in reliance on consistent application of that 222 | system; it is up to the author/donor to decide if he or she is willing 223 | to distribute software through any other system and a licensee cannot 224 | impose that choice. 225 | 226 | This section is intended to make thoroughly clear what is believed to 227 | be a consequence of the rest of this License. 228 | 229 | 8. If the distribution and/or use of the Program is restricted in 230 | certain countries either by patents or by copyrighted interfaces, the 231 | original copyright holder who places the Program under this License 232 | may add an explicit geographical distribution limitation excluding 233 | those countries, so that distribution is permitted only in or among 234 | countries not thus excluded. In such case, this License incorporates 235 | the limitation as if written in the body of this License. 236 | 237 | 9. The Free Software Foundation may publish revised and/or new versions 238 | of the General Public License from time to time. Such new versions will 239 | be similar in spirit to the present version, but may differ in detail to 240 | address new problems or concerns. 241 | 242 | Each version is given a distinguishing version number. If the Program 243 | specifies a version number of this License which applies to it and "any 244 | later version", you have the option of following the terms and conditions 245 | either of that version or of any later version published by the Free 246 | Software Foundation. If the Program does not specify a version number of 247 | this License, you may choose any version ever published by the Free Software 248 | Foundation. 249 | 250 | 10. 
If you wish to incorporate parts of the Program into other free 251 | programs whose distribution conditions are different, write to the author 252 | to ask for permission. For software which is copyrighted by the Free 253 | Software Foundation, write to the Free Software Foundation; we sometimes 254 | make exceptions for this. Our decision will be guided by the two goals 255 | of preserving the free status of all derivatives of our free software and 256 | of promoting the sharing and reuse of software generally. 257 | 258 | NO WARRANTY 259 | 260 | 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY 261 | FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN 262 | OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES 263 | PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED 264 | OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 265 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS 266 | TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE 267 | PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, 268 | REPAIR OR CORRECTION. 269 | 270 | 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 271 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR 272 | REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, 273 | INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING 274 | OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED 275 | TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY 276 | YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER 277 | PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE 278 | POSSIBILITY OF SUCH DAMAGES. 279 | 280 | END OF TERMS AND CONDITIONS 281 | 282 | How to Apply These Terms to Your New Programs 283 | 284 | If you develop a new program, and you want it to be of the greatest 285 | possible use to the public, the best way to achieve this is to make it 286 | free software which everyone can redistribute and change under these terms. 287 | 288 | To do so, attach the following notices to the program. It is safest 289 | to attach them to the start of each source file to most effectively 290 | convey the exclusion of warranty; and each file should have at least 291 | the "copyright" line and a pointer to where the full notice is found. 292 | 293 | {description} 294 | Copyright (C) {year} {fullname} 295 | 296 | This program is free software; you can redistribute it and/or modify 297 | it under the terms of the GNU General Public License as published by 298 | the Free Software Foundation; either version 2 of the License, or 299 | (at your option) any later version. 300 | 301 | This program is distributed in the hope that it will be useful, 302 | but WITHOUT ANY WARRANTY; without even the implied warranty of 303 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 304 | GNU General Public License for more details. 305 | 306 | You should have received a copy of the GNU General Public License along 307 | with this program; if not, write to the Free Software Foundation, Inc., 308 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 309 | 310 | Also add information on how to contact you by electronic and paper mail. 
311 | 312 | If the program is interactive, make it output a short notice like this 313 | when it starts in an interactive mode: 314 | 315 | Gnomovision version 69, Copyright (C) year name of author 316 | Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 317 | This is free software, and you are welcome to redistribute it 318 | under certain conditions; type `show c' for details. 319 | 320 | The hypothetical commands `show w' and `show c' should show the appropriate 321 | parts of the General Public License. Of course, the commands you use may 322 | be called something other than `show w' and `show c'; they could even be 323 | mouse-clicks or menu items--whatever suits your program. 324 | 325 | You should also get your employer (if you work as a programmer) or your 326 | school, if any, to sign a "copyright disclaimer" for the program, if 327 | necessary. Here is a sample; alter the names: 328 | 329 | Yoyodyne, Inc., hereby disclaims all copyright interest in the program 330 | `Gnomovision' (which makes passes at compilers) written by James Hacker. 331 | 332 | {signature of Ty Coon}, 1 April 1989 333 | Ty Coon, President of Vice 334 | 335 | This General Public License does not permit incorporating your program into 336 | proprietary programs. If your program is a subroutine library, you may 337 | consider it more useful to permit linking proprietary applications with the 338 | library. If this is what you want to do, use the GNU Lesser General 339 | Public License instead of this License. 340 | -------------------------------------------------------------------------------- /PredictiveModel.py: -------------------------------------------------------------------------------- 1 | from SequenceEncoder import SequenceEncoder 2 | from TextTransformers import LDATransformer, PVTransformer, BoNGTransformer, NBLogCountRatioTransformer 3 | from sklearn.ensemble import RandomForestClassifier 4 | from sklearn.linear_model import LogisticRegression 5 | import pandas as pd 6 | import time 7 | import numpy as np 8 | 9 | class PredictiveModel(): 10 | 11 | def __init__(self, nr_events, case_id_col, label_col, encoder_kwargs, transformer_kwargs, cls_kwargs, text_col=None, 12 | text_transformer_type=None, cls_method="rf"): 13 | 14 | self.text_col = text_col 15 | self.case_id_col = case_id_col 16 | self.label_col = label_col 17 | 18 | self.encoder = SequenceEncoder(nr_events=nr_events, case_id_col=case_id_col, label_col=label_col, **encoder_kwargs) 19 | 20 | if text_transformer_type is None: 21 | self.transformer = None 22 | elif text_transformer_type == "LDATransformer": 23 | self.transformer = LDATransformer(**transformer_kwargs) 24 | elif text_transformer_type == "BoNGTransformer": 25 | self.transformer = BoNGTransformer(**transformer_kwargs) 26 | elif text_transformer_type == "NBLogCountRatioTransformer": 27 | self.transformer = NBLogCountRatioTransformer(**transformer_kwargs) 28 | elif text_transformer_type == "PVTransformer": 29 | self.transformer = PVTransformer(**transformer_kwargs) 30 | 31 | else: 32 | print("Transformer type not known") 33 | 34 | if cls_method == "logit": 35 | self.cls = LogisticRegression(**cls_kwargs) 36 | elif cls_method == "rf": 37 | self.cls = RandomForestClassifier(**cls_kwargs) 38 | else: 39 | print("Classifier method not known") 40 | 41 | self.hardcoded_prediction = None 42 | self.test_encode_time = None 43 | self.test_preproc_time = None 44 | self.test_time = None 45 | self.nr_test_cases = None 46 | 47 | 48 | def fit(self, dt_train): 49 | 
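        # Encode each case as a fixed-length prefix of nr_events events, optionally
        # replace the raw text columns with features from the text transformer, and
        # train the classifier. If the training data contains only one class, a
        # hardcoded prediction is stored instead of fitting the classifier
        # (see predict_proba).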
preproc_start_time = time.time() 50 | train_encoded = self.encoder.fit_transform(dt_train) 51 | train_X = train_encoded.drop([self.case_id_col, self.label_col], axis=1) 52 | train_y = train_encoded[self.label_col] 53 | 54 | 55 | if self.transformer is not None: 56 | text_cols = [col for col in train_X.columns.values if col.startswith(self.text_col)] 57 | for col in text_cols: 58 | train_X[col] = train_X[col].astype('str') 59 | train_text = self.transformer.fit_transform(train_X[text_cols], train_y) 60 | train_X = pd.concat([train_X.drop(text_cols, axis=1), train_text], axis=1) 61 | self.train_X = train_X 62 | preproc_end_time = time.time() 63 | self.preproc_time = preproc_end_time - preproc_start_time 64 | 65 | cls_start_time = time.time() 66 | if len(train_y.unique()) < 2: # less than 2 classes are present 67 | self.hardcoded_prediction = train_y.iloc[0] 68 | self.cls.classes_ = train_y.unique() 69 | else: 70 | self.cls.fit(train_X, train_y) 71 | cls_end_time = time.time() 72 | self.cls_time = cls_end_time - cls_start_time 73 | 74 | 75 | def predict_proba(self, dt_test): 76 | encode_start_time = time.time() 77 | test_encoded = self.encoder.transform(dt_test) 78 | encode_end_time = time.time() 79 | self.test_encode_time = encode_end_time - encode_start_time 80 | 81 | test_preproc_start_time = time.time() 82 | test_X = test_encoded.drop([self.case_id_col, self.label_col], axis=1) 83 | 84 | if self.transformer is not None: 85 | text_cols = [col for col in test_X.columns.values if col.startswith(self.text_col)] 86 | for col in text_cols: 87 | test_encoded[col] = test_encoded[col].astype('str') 88 | test_text = self.transformer.transform(test_encoded[text_cols]) 89 | test_X = pd.concat([test_X.drop(text_cols, axis=1), test_text], axis=1) 90 | 91 | 92 | self.test_case_names = test_encoded[self.case_id_col] 93 | self.test_X = test_X 94 | self.test_y = test_encoded[self.label_col] 95 | test_preproc_end_time = time.time() 96 | self.test_preproc_time = test_preproc_end_time - test_preproc_start_time 97 | 98 | test_start_time = time.time() 99 | if self.hardcoded_prediction is not None: # e.g. 
model was trained with one class only 100 | predictions_proba = np.array([1.0,0.0]*test_X.shape[0]).reshape(test_X.shape[0],2) 101 | else: 102 | predictions_proba = self.cls.predict_proba(test_X) 103 | test_end_time = time.time() 104 | self.test_time = test_end_time - test_start_time 105 | self.nr_test_cases = len(predictions_proba) 106 | 107 | return predictions_proba 108 | -------------------------------------------------------------------------------- /PredictiveMonitor.py: -------------------------------------------------------------------------------- 1 | from PredictiveModel import PredictiveModel 2 | import numpy as np 3 | import os.path 4 | 5 | class PredictiveMonitor(): 6 | 7 | def __init__(self, event_nr_col, case_id_col, label_col, encoder_kwargs, cls_kwargs, transformer_kwargs, 8 | pos_label=1, text_col=None, 9 | text_transformer_type=None, cls_method="rf"): 10 | 11 | self.event_nr_col = event_nr_col 12 | self.case_id_col = case_id_col 13 | self.label_col = label_col 14 | self.text_col = text_col 15 | self.pos_label = pos_label 16 | 17 | self.text_transformer_type = text_transformer_type 18 | self.cls_method = cls_method 19 | 20 | self.encoder_kwargs = encoder_kwargs 21 | self.transformer_kwargs = transformer_kwargs 22 | self.cls_kwargs = cls_kwargs 23 | 24 | self.models = {} 25 | self.predictions = {} 26 | self.evaluations = {} 27 | 28 | 29 | def train(self, dt_train, max_events=None): 30 | 31 | max_events = max(dt_train[self.event_nr_col]) if max_events==None else max_events 32 | self.max_events = max_events 33 | for nr_events in range(1, max_events+1): 34 | 35 | pred_model = PredictiveModel(nr_events=nr_events, case_id_col=self.case_id_col, label_col=self.label_col, 36 | text_col=self.text_col, text_transformer_type=self.text_transformer_type, 37 | cls_method=self.cls_method, encoder_kwargs=self.encoder_kwargs, 38 | transformer_kwargs=self.transformer_kwargs, cls_kwargs=self.cls_kwargs) 39 | 40 | pred_model.fit(dt_train) 41 | self.models[nr_events] = pred_model 42 | 43 | 44 | def test(self, dt_test, confidences=[0.6], two_sided=False, evaluate=True, output_filename=None, outfile_mode='w', performance_output_filename=None): 45 | 46 | for confidence in confidences: 47 | results = self._test_single_conf(dt_test, confidence, two_sided) 48 | self.predictions[confidence] = results 49 | 50 | if evaluate: 51 | evaluation = self._evaluate(dt_test, results, two_sided) 52 | self.evaluations[confidence] = evaluation 53 | 54 | if output_filename is not None: 55 | metric_names = list(self.evaluations[confidences[0]].keys()) 56 | if not os.path.isfile(output_filename): 57 | outfile_mode = 'w' 58 | with open(output_filename, outfile_mode) as fout: 59 | if outfile_mode == 'w': 60 | fout.write("confidence;value;metric\n") 61 | for confidence in confidences: 62 | for k,v in self.evaluations[confidence].items(): 63 | fout.write("%s;%s;%s\n"%(confidence, v, k)) 64 | 65 | if performance_output_filename is not None: 66 | with open(performance_output_filename, 'w') as fout: 67 | fout.write("nr_events;train_preproc_time;train_cls_time;test_encode_time;test_preproc_time;test_time;nr_test_cases\n") 68 | for nr_events, pred_model in self.models.items(): 69 | fout.write("%s;%s;%s;%s;%s;%s;%s\n"%(nr_events, pred_model.preproc_time, pred_model.cls_time, pred_model.test_encode_time, pred_model.test_preproc_time, pred_model.test_time, pred_model.nr_test_cases)) 70 | 71 | 72 | def _test_single_conf(self, dt_test, confidence, two_sided): 73 | 74 | results = [] 75 | case_names_unprocessed = 
set(dt_test[self.case_id_col].unique()) 76 | max_events = min(max(dt_test[self.event_nr_col]), max(self.models.keys())) 77 | 78 | nr_events = 1 79 | 80 | # monitor cases until confident prediction is made or the case ends 81 | while len(case_names_unprocessed) > 0 and nr_events <= max_events: 82 | 83 | # prepare test set 84 | dt_test = dt_test[dt_test[self.case_id_col].isin(case_names_unprocessed)] 85 | if len(dt_test[dt_test[self.event_nr_col] >= nr_events]) == 0: # all cases are shorter than nr_events 86 | break 87 | elif nr_events not in self.models: 88 | nr_events += 1 89 | continue 90 | 91 | # select relevant model 92 | pred_model = self.models[nr_events] 93 | 94 | # predict 95 | predictions_proba = pred_model.predict_proba(dt_test) 96 | 97 | # filter predictions with sufficient confidence 98 | for label_col_idx, label in enumerate(pred_model.cls.classes_): 99 | if label == self.pos_label or two_sided: 100 | finished_idxs = np.where(predictions_proba[:,label_col_idx] >= confidence) 101 | finished_cases = pred_model.test_case_names.iloc[finished_idxs] 102 | for idx in finished_idxs[0]: 103 | results.append({"case_name":pred_model.test_case_names.iloc[idx], 104 | "prediction":label, 105 | "class":pred_model.test_y.iloc[idx], 106 | "nr_events":nr_events}) 107 | case_names_unprocessed = case_names_unprocessed.difference(set(finished_cases)) 108 | 109 | nr_events += 1 110 | 111 | return(results) 112 | 113 | 114 | def _evaluate(self, dt_test, results, two_sided): 115 | #case_lengths = dt_test[self.case_id_col].value_counts() 116 | #dt_test = dt_test[dt_test[self.event_nr_col] == 1] 117 | N = len(dt_test) 118 | 119 | tp = 0 120 | fp = 0 121 | tn = 0 122 | fn = 0 123 | earliness = 0 124 | finished_case_names = [result["case_name"] for result in results] 125 | positives = sum(dt_test[self.label_col] == self.pos_label) 126 | negatives = sum(dt_test[self.label_col] != self.pos_label) 127 | 128 | 129 | for result in results: 130 | if result["prediction"] == self.pos_label and result["class"] == self.pos_label: 131 | tp += 1 132 | elif result["prediction"] == self.pos_label and result["class"] != self.pos_label: 133 | fp += 1 134 | elif result["prediction"] != self.pos_label and result["class"] != self.pos_label: 135 | tn += 1 136 | else: 137 | fn += 1 138 | #earliness += 1.0 * result["nr_events"] / case_lengths[result["case_name"]] 139 | earliness += 1.0 * result["nr_events"] / min(int(dt_test[dt_test[self.case_id_col] == result["case_name"]]["case_length"]), self.max_events) 140 | 141 | if not two_sided: 142 | dt_test = dt_test[~dt_test[self.case_id_col].isin(finished_case_names)] # predicted as negatives 143 | tn = sum(dt_test[self.label_col] != self.pos_label) 144 | fn = len(dt_test) - tn 145 | 146 | metrics = {} 147 | 148 | metrics["recall"] = 1.0 * tp / positives # alternative without failures: (tp+fn) 149 | if len(results) > 0: 150 | metrics["accuracy"] = 1.0 * (tp+tn) / (tp+tn+fp+fn) 151 | metrics["precision"] = 1.0 * tp / (tp+fp) 152 | metrics["earliness"] = earliness / len(results) 153 | metrics["fscore"] = 2 * metrics["precision"] * metrics["recall"] / (metrics["precision"] + metrics["recall"]) 154 | else: 155 | metrics["accuracy"] = 0 156 | metrics["precision"] = 0 157 | metrics["earliness"] = 0 158 | metrics["fscore"] = 0 159 | metrics["specificity"] = 1.0 * tn / negatives # alternative without failures: (fp+tn) 160 | metrics["tp"] = tp 161 | metrics["fn"] = fn 162 | metrics["fp"] = fp 163 | metrics["tn"] = tn 164 | metrics["failure_rate"] = 1 - 1.0 * len(results) / N 165 | 166 | 
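        # earliness is averaged over the confidently predicted cases only: the fraction
        # of a case's events that had been seen when the confident prediction was made
        # (lower means earlier). failure_rate is the proportion of the test set for
        # which no sufficiently confident prediction was made before the case ended.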
return metrics
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | This repository contains supplementary material for the article "[Predictive Business Process Monitoring with Structured and Unstructured Data](https://link.springer.com/chapter/10.1007/978-3-319-45348-4_23)" by [Irene Teinemaa](https://irhete.github.io/), [Marlon Dumas](http://kodu.ut.ee/~dumas/), [Fabrizio Maria Maggi](https://scholar.google.nl/citations?user=Jo9fNKEAAAAJ&hl=en&oi=sra), and [Chiara Di Francescomarino](https://shell-static.fbk.eu/people/dfmchiara/), which is published in the proceedings of the International Conference on Business Process Management 2016.
2 | 
3 | ## Reference
4 | If you use the code from this repository, please cite the original paper:
5 | ```
6 | @inproceedings{teinemaa2016predictive,
7 |   title={Predictive Business Process Monitoring with Structured and Unstructured Data},
8 |   author={Teinemaa, Irene and Dumas, Marlon and Maggi, Fabrizio Maria and Di Francescomarino, Chiara},
9 |   booktitle={International Conference on Business Process Management},
10 |   pages={401--417},
11 |   year={2016},
12 |   organization={Springer}
13 | }
14 | ```
15 | 
16 | ## Dependencies
17 | 
18 | * python 3.5
19 | * [NumPy](http://www.numpy.org/)
20 | * [pandas](http://pandas.pydata.org/)
21 | * [scikit-learn](http://scikit-learn.org/stable/index.html)
22 | * [gensim](https://radimrehurek.com/gensim/) (for the LDA and doc2vec models)
23 | * [estnltk](https://github.com/estnltk/estnltk) (for lemmatization in the Estonian language)
24 | 
25 | 
26 | 
27 | ## Preprocessing
28 | 
29 | Before using the text models, the textual data should be lemmatized. The example below constructs a list of lemmatized documents (`docs_as_lemmas`), given a list of raw documents (`corpus`), using a lemmatizer for the Estonian language.
30 | 
31 | ```python
32 | from estnltk import Text
33 | 
34 | docs_as_lemmas = []
35 | for document in corpus:
36 |     text = Text(document.lower())
37 |     docs_as_lemmas.append(" ".join(text.lemmas))
38 | 
39 | ```
40 | 
41 | 
42 | ## Sequence encoding
43 | 
44 | The `SequenceEncoder` encodes event log data as a complex sequence using index-based encoding. The input data should be in the following format:
45 | 
46 |     case_id;event_nr;class_label;dynamic_attr1;...;dynamic_attr_n;static_attr1;...;static_attr_h
47 | 
48 | In other words, each row in the input data corresponds to one event (determined by `event_nr`) of one case (determined by `case_id`). Each event is accompanied by a class label (`class_label`) that expresses the outcome of its case. In addition, each event may carry an arbitrary number of static and dynamic data attributes. Both static and dynamic attributes may contain unstructured data; note, however, that `SequenceEncoder` does not perform any text processing itself.
49 | 
50 | The output of the sequence encoder has the following format:
51 | 
52 |     case_id;class_label;dynamic_attr1_event_1;...;dynamic_attr1_event_m;...;dynamic_attr_n_event_1;...;dynamic_attr_n_event_m;static_attr1;...;static_attr_h
53 | 
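For illustration, consider a toy log (the attribute names here are hypothetical) with one dynamic attribute `activity` and one static attribute `channel`. With `nr_events=2` and no categorical columns declared, the input

    case_id;event_nr;class_label;activity;channel
    1;1;successful;submit;web
    1;2;successful;review;web
    2;1;unsuccessful;submit;phone
    2;2;unsuccessful;escalate;phone

is encoded (schematically, following the format above) into one row per case:

    case_id;class_label;activity_1;activity_2;channel
    1;successful;submit;review;web
    2;unsuccessful;submit;escalate;phone

Columns listed in `cat_cols` would additionally be one-hot encoded.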
54 | When using `SequenceEncoder`, one should specify the columns that represent the case id, event number, class label, dynamic attributes, and static attributes, as well as the columns that should be interpreted as categorical values. The number of events used for encoding the sequence (the prefix length) is specified via the `nr_events` parameter. Cases that are shorter than `nr_events` are discarded. Additionally, the sequence encoder can oversample the data when `fit` is called (i.e., on the training set), using `minority_label` as the class that should be oversampled. If `fillna=True`, NA values in numeric columns are filled with zeros and all other NA values with empty strings. Example usage of the sequence encoder is illustrated below.
55 | 
56 | ```python
57 | from SequenceEncoder import SequenceEncoder
58 | 
59 | encoder = SequenceEncoder(case_id_col="case_id", event_nr_col="event_nr", label_col="class_label",
60 |                           static_cols=["static1", "static2", "static3"], dynamic_cols=["dynamic1", "dynamic2"],
61 |                           cat_cols=["static2", "dynamic1"], nr_events=3, oversample_fit=True, minority_label="unsuccessful",
62 |                           fillna=True, random_state=22)
63 | train_encoded = encoder.fit_transform(train) # oversampled
64 | test_encoded = encoder.transform(test)
65 | ```
66 | 
67 | 
68 | ## Text models
69 | 
70 | The text models are implemented as custom transformers with `fit`, `transform`, and `fit_transform` methods.
71 | 
72 | Four transformers are implemented:
73 | * `LDATransformer` - Latent Dirichlet Allocation topic modeling (uses Gensim's implementation).
74 | * `PVTransformer` - Paragraph Vector (uses Gensim's doc2vec implementation).
75 | * `BoNGTransformer` - bag-of-n-grams.
76 | * `NBLogCountRatioTransformer` - bag-of-n-grams weighted with Naive Bayes log-count ratios.
77 | 
78 | Example usage of the text transformers is shown below. `X` stands for a pandas `DataFrame` consisting of one or more textual columns, while `y` contains the target variable (class labels). Note that `X` must contain textual columns only.
79 | 
80 | ```python
81 | from TextTransformers import LDATransformer, PVTransformer, BoNGTransformer, NBLogCountRatioTransformer
82 | 
83 | lda_transformer = LDATransformer(num_topics=20, tfidf=False, passes=3, iterations=700, random_seed=22)
84 | lda_transformer.fit(X)
85 | lda_transformer.transform(X)
86 | 
87 | pv_transformer = PVTransformer(size=16, window=8, min_count=1, workers=1, alpha=0.025, dm=1, epochs=1, random_seed=22)
88 | pv_transformer.fit(X)
89 | pv_transformer.transform(X)
90 | 
91 | bong_transformer = BoNGTransformer(ngram_min=1, ngram_max=1, tfidf=False, nr_selected=100)
92 | bong_transformer.fit(X, y)
93 | bong_transformer.transform(X)
94 | 
95 | nb_transformer = NBLogCountRatioTransformer(ngram_min=1, ngram_max=1, alpha=1.0, nr_selected=100, pos_label="positive")
96 | nb_transformer.fit(X, y)
97 | nb_transformer.transform(X)
98 | 
99 | 
100 | ```
101 | 
102 | 
103 | ## Predictive model
104 | The `PredictiveModel` class builds a predictive model for a fixed prefix length, starting from raw datasets. The initializer expects the `text_transformer_type` (one of {`None`, "LDATransformer", "PVTransformer", "BoNGTransformer", "NBLogCountRatioTransformer"}) and the classifier type `cls_method`, where "rf" stands for sklearn's `RandomForestClassifier` and "logit" for `LogisticRegression`. Furthermore, the prefix length must be given as `nr_events` and the names of the relevant columns as `case_id_col`, `label_col`, and `text_col`. Additional arguments to be forwarded to the `SequenceEncoder`, the text transformer, and the classifier are given as `encoder_kwargs`, `transformer_kwargs`, and `cls_kwargs`, respectively (see the sections above for details of these arguments).
105 | 
106 | Example usage:
107 | 
108 | ```python
109 | from PredictiveModel import PredictiveModel
110 | 
111 | encoder_kwargs = {"event_nr_col":event_nr_col, "static_cols":static_cols, "dynamic_cols":dynamic_cols,
112 |                   "cat_cols":cat_cols, "oversample_fit":False, "minority_label":"unsuccessful",
113 |                   "fillna":True, "random_state":22}
114 | transformer_kwargs = {"ngram_max":ngram_max, "alpha":alpha, "nr_selected":nr_selected,
115 |                       "pos_label":pos_label}
116 | cls_kwargs = {"n_estimators":500, "random_state":22}
117 | 
118 | pred_model = PredictiveModel(nr_events=nr_events, case_id_col=case_id_col,
119 |                              label_col=label_col, text_col=text_col,
120 |                              text_transformer_type="NBLogCountRatioTransformer", cls_method="rf",
121 |                              encoder_kwargs=encoder_kwargs, transformer_kwargs=transformer_kwargs,
122 |                              cls_kwargs=cls_kwargs)
123 | 
124 | pred_model.fit(train)
125 | predictions_proba = pred_model.predict_proba(test)
126 | ```
127 | 
128 | 
129 | 
130 | ## Predictive monitoring
131 | 
132 | The `PredictiveMonitor` trains multiple `PredictiveModel`s (one for each possible prefix length) that constitute the offline component of the predictive monitoring framework. The arguments are the same as for `PredictiveModel`, except that `event_nr_col` is given instead of `nr_events`, and the label of the positive class is given as `pos_label`. In the test phase, each case is monitored until a sufficiently confident prediction is made or the case ends. Possible arguments for the testing function are a list of `confidences` to produce results for, a boolean `evaluate` indicating whether evaluation metrics should be calculated, `output_filename` if the results should be written to an external file, and `performance_output_filename` if the calculation times should be written to an external file.
133 | 
134 | Example usage:
135 | 
136 | ```python
137 | from PredictiveMonitor import PredictiveMonitor
138 | 
139 | encoder_kwargs = {"event_nr_col":event_nr_col, "static_cols":static_cols, "dynamic_cols":dynamic_cols,
140 |                   "cat_cols":cat_cols, "oversample_fit":False, "minority_label":"unsuccessful",
141 |                   "fillna":True, "random_state":22}
142 | transformer_kwargs = {"ngram_max":ngram_max, "alpha":alpha, "nr_selected":nr_selected,
143 |                       "pos_label":pos_label}
144 | cls_kwargs = {"n_estimators":500, "random_state":22}
145 | 
146 | predictive_monitor = PredictiveMonitor(event_nr_col=event_nr_col, case_id_col=case_id_col,
147 |                                        label_col=label_col, pos_label=pos_label, text_col=text_col,
148 |                                        text_transformer_type="NBLogCountRatioTransformer", cls_method="rf",
149 |                                        encoder_kwargs=encoder_kwargs, transformer_kwargs=transformer_kwargs,
150 |                                        cls_kwargs=cls_kwargs)
151 | 
152 | predictive_monitor.train(train)
153 | predictive_monitor.test(test, confidences=[0.5, 0.75, 0.9], evaluate=True, output_filename="example_output.txt")
154 | ```
155 | 
156 | Real examples of predictive monitoring can be found in the "experiments" folder.
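When `evaluate=True` and an `output_filename` is given, `test` writes a semicolon-separated file with the columns `confidence`, `value`, and `metric`, one row per confidence/metric pair. A minimal sketch of how such a file can be inspected with pandas (assuming the `example_output.txt` produced above):

```python
import pandas as pd

# the evaluation file has the columns: confidence;value;metric
results = pd.read_csv("example_output.txt", sep=";")

# one row per confidence threshold, one column per metric
# (accuracy, precision, recall, fscore, earliness, failure_rate, ...)
print(results.pivot(index="confidence", columns="metric", values="value"))
```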
157 | -------------------------------------------------------------------------------- /SequenceEncoder.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | from sklearn.feature_extraction import DictVectorizer as DV 3 | 4 | class SequenceEncoder(): 5 | 6 | def __init__(self, nr_events, event_nr_col, case_id_col, label_col, static_cols=[], dynamic_cols=[], 7 | last_state_cols=[], cat_cols=[], oversample_fit=True, minority_label="positive", fillna=True, 8 | random_state=None, max_events=200, dyn_event_marker="dynevent", last_event_marker="lastevent", 9 | case_length_col = "case_length", pre_encoded=False): 10 | self.nr_events = nr_events 11 | self.static_cols = static_cols 12 | self.dynamic_cols = dynamic_cols 13 | self.last_state_cols = last_state_cols 14 | self.cat_cols = cat_cols 15 | self.event_nr_col = event_nr_col 16 | self.case_id_col = case_id_col 17 | self.label_col = label_col 18 | self.oversample_fit = oversample_fit 19 | self.minority_label = minority_label 20 | self.random_state = random_state 21 | self.fillna = fillna 22 | self.dyn_event_marker = dyn_event_marker 23 | self.last_event_marker = last_event_marker 24 | self.max_events = max_events 25 | self.case_length_col = case_length_col 26 | self.pre_encoded = pre_encoded 27 | 28 | self.fitted_columns = None 29 | 30 | 31 | def fit(self, X): 32 | return self 33 | 34 | def fit_transform(self, X): 35 | data = self._encode(X) 36 | if self.oversample_fit: 37 | data = self._oversample(data) 38 | return data 39 | 40 | def transform(self, X): 41 | data = self._encode(X) 42 | return data 43 | 44 | def pre_encode(self, X): 45 | # encode static cols 46 | if self.label_col not in self.static_cols: 47 | self.static_cols.append(self.label_col) 48 | if self.case_id_col not in self.static_cols: 49 | self.static_cols.append(self.case_id_col) 50 | data_final = X[X[self.event_nr_col] == 1][self.static_cols] 51 | 52 | # encode dynamic cols 53 | for i in range(1, self.max_events+1): 54 | data_selected = X[X[self.event_nr_col] == i][[self.case_id_col] + self.dynamic_cols] 55 | data_selected.columns = [self.case_id_col] + ["%s_%s%s"%(col, self.dyn_event_marker, i) for col in self.dynamic_cols] 56 | data_final = pd.merge(data_final, data_selected, on=self.case_id_col, how="left") 57 | 58 | 59 | # encode last state cols 60 | for i in range(1, self.max_events+1): 61 | data_selected = X[X[self.event_nr_col] == i][[self.case_id_col] + self.last_state_cols] 62 | data_selected.columns = [self.case_id_col] + ["%s_%s%s"%(col, self.last_event_marker, i) for col in self.last_state_cols] 63 | data_final = pd.merge(data_final, data_selected, on=self.case_id_col, how="left") 64 | if i > 1: 65 | for col in self.last_state_cols: 66 | missing = pd.isnull(data_final["%s_%s%s"%(col, self.last_event_marker, i)]) 67 | data_final["%s_%s%s"%(col, self.last_event_marker, i)].loc[missing] = data_final["%s_%s%s"%(col, self.last_event_marker, i-1)].loc[missing] 68 | 69 | 70 | # make categorical 71 | dynamic_cat_cols = [col for col in self.cat_cols if col in self.dynamic_cols] 72 | static_cat_cols = [col for col in self.cat_cols if col in self.static_cols] 73 | categorical_cols = ["%s_%s%s"%(col, self.dyn_event_marker, i) for i in range(1, self.max_events+1) for col in dynamic_cat_cols] + static_cat_cols 74 | cat_df = data_final[categorical_cols] 75 | cat_dict = cat_df.T.to_dict().values() 76 | vectorizer = DV( sparse = False ) 77 | vec_cat_dict = vectorizer.fit_transform(cat_dict) 78 | cat_data = 
pd.DataFrame(vec_cat_dict, columns=vectorizer.feature_names_)
79 |         data_final = pd.concat([data_final.drop(categorical_cols, axis=1), cat_data], axis=1)
80 | 
81 |         data_final = pd.merge(data_final, X.groupby(self.case_id_col)[self.event_nr_col].agg({"case_length": "max"}).reset_index(), on=self.case_id_col, how="left")
82 | 
83 |         # fill NA
84 |         if self.fillna:
85 |             for col in data_final:
86 |                 dt = data_final[col].dtype
87 |                 if dt == int or dt == float:
88 |                     data_final[col].fillna(0, inplace=True)
89 |                 else:
90 |                     data_final[col].fillna("", inplace=True)
91 | 
92 |         return data_final
93 | 
94 | 
95 |     def _encode(self, X):
96 |         if self.pre_encoded:
97 |             rel_cols = X.columns[~X.columns.str.contains('|'.join(["%s%s"%(self.dyn_event_marker, k) for k in
98 |                                                                    range(self.nr_events+1, self.max_events+1)] +
99 |                                                                   [self.last_event_marker]))]
100 |             rel_cols = rel_cols | X.columns[X.columns.str.endswith("%s%s"%(self.last_event_marker, self.nr_events))]
101 |             selected = X[rel_cols]
102 |             selected = selected[selected[self.case_length_col] >= self.nr_events]
103 |             return selected.drop(self.case_length_col, axis=1)
104 |         else:
105 |             return self._complex_encode(X)
106 | 
107 |     def _complex_encode(self, X):
108 |         # encode static cols
109 |         if self.label_col not in self.static_cols:
110 |             self.static_cols.append(self.label_col)
111 |         if self.case_id_col not in self.static_cols:
112 |             self.static_cols.append(self.case_id_col)
113 |         data_final = X[X[self.event_nr_col] == 1][self.static_cols]
114 | 
115 |         # encode dynamic cols
116 |         for i in range(1, self.nr_events+1):
117 |             data_selected = X[X[self.event_nr_col] == i][[self.case_id_col] + self.dynamic_cols]
118 |             data_selected.columns = [self.case_id_col] + ["%s_%s"%(col, i) for col in self.dynamic_cols]
119 |             data_final = pd.merge(data_final, data_selected, on=self.case_id_col, how="right")
120 | 
121 |         # encode last state cols
122 |         for col in self.last_state_cols:
123 |             data_final = pd.merge(data_final, X[X[self.event_nr_col] == self.nr_events][[self.case_id_col, col]], on=self.case_id_col, how="right")
124 |             for idx, row in data_final.iterrows():
125 |                 current_nr_events = self.nr_events - 1
126 |                 while pd.isnull(data_final.loc[idx, col]) and current_nr_events > 0:
127 |                     data_final.loc[idx, col] = X[(X[self.case_id_col] == row[self.case_id_col]) & (X[self.event_nr_col] == current_nr_events)].iloc[0][col]
128 |                     current_nr_events -= 1
129 | 
130 | 
131 |         # make categorical
132 |         dynamic_cat_cols = [col for col in self.cat_cols if col in self.dynamic_cols]
133 |         static_cat_cols = [col for col in self.cat_cols if col in self.static_cols]
134 |         categorical_cols = ["%s_%s"%(col, i) for i in range(1, self.nr_events+1) for col in dynamic_cat_cols] + static_cat_cols
135 |         cat_df = data_final[categorical_cols]
136 |         cat_dict = cat_df.T.to_dict().values()
137 |         vectorizer = DV(sparse=False)
138 |         vec_cat_dict = vectorizer.fit_transform(cat_dict)
139 |         cat_data = pd.DataFrame(vec_cat_dict, columns=vectorizer.feature_names_)
140 |         data_final = pd.concat([data_final.drop(categorical_cols, axis=1), cat_data], axis=1)
141 | 
142 |         if self.fitted_columns is not None:
143 |             missing_cols = self.fitted_columns[~self.fitted_columns.isin(data_final.columns)]
144 |             for col in missing_cols:
145 |                 data_final[col] = 0
146 |             data_final = data_final[self.fitted_columns]
147 |         else:
148 |             self.fitted_columns = data_final.columns
149 | 
150 |         # fill NA
151 |         if self.fillna:
152 |             for col in data_final:
153 |                 dt = data_final[col].dtype
154 |                 if dt == int or dt == float:
155 |                     data_final[col].fillna(0, inplace=True)
156 |                 else:
157 |                     data_final[col].fillna("", inplace=True)
158 | 
159 |         return data_final
160 | 
161 |     def _oversample(self, X):
162 |         oversample_count = sum(X[self.label_col] != self.minority_label) - sum(X[self.label_col] == self.minority_label)
163 | 
164 |         if oversample_count > 0 and sum(X[self.label_col] == self.minority_label) > 0:
165 |             oversampled_data = X[X[self.label_col] == self.minority_label].sample(oversample_count, replace=True, random_state=self.random_state)
166 |             X = pd.concat([X, oversampled_data])
167 | 
168 |         return X
169 | 
170 | 
--------------------------------------------------------------------------------
/TextTransformers.py:
--------------------------------------------------------------------------------
1 | import pandas as pd
2 | import numpy as np
3 | from gensim import corpora, similarities
4 | from gensim import models as gensim_models
5 | from gensim.models.doc2vec import LabeledSentence
6 | from gensim.models import Doc2Vec
7 | from sklearn.base import TransformerMixin
8 | from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
9 | from sklearn.feature_selection import SelectKBest, chi2
10 | import os.path
11 | from collections import defaultdict
12 | class LDATransformer(TransformerMixin):
13 | 
14 |     def __init__(self, num_topics=20, tfidf=False,
15 |                  passes=3, iterations=700, min_prob=0, min_freq=0, save_dict=False, dict_file=None, random_seed=None):
16 | 
17 |         # should be tuned
18 |         self.num_topics = num_topics
19 |         self.tfidf = tfidf
20 | 
21 |         # may be left as default
22 |         self.passes = passes
23 |         self.iterations = iterations
24 |         self.min_prob = min_prob
25 |         self.min_freq = min_freq
26 | 
27 |         # for reproducibility
28 |         self.random_seed = random_seed
29 |         self.save_dict = save_dict
30 |         self.dict_file = dict_file
31 | 
32 |         self.dictionary = None
33 |         self.lda_model = None
34 |         self.tfidf_model = None
35 | 
36 | 
37 | 
38 |     def fit(self, X, y=None):
39 |         if self.dict_file is not None and os.path.isfile(self.dict_file):
40 |             self.dictionary = corpora.Dictionary.load(self.dict_file)
41 |         else:
42 |             self.dictionary = self._generate_dictionary(X)
43 |         corpus = self._generate_corpus_data(X)
44 |         np.random.seed(self.random_seed)
45 |         self.lda_model = gensim_models.LdaModel(corpus, id2word=self.dictionary, num_topics=self.num_topics,
46 |                                                 passes=self.passes, iterations=self.iterations, minimum_probability=self.min_prob)
47 |         return self
48 | 
49 | 
50 |     def transform(self, X):
51 |         ncol = X.shape[1]
52 |         corpus = self._generate_corpus_data(X)
53 |         topics = self.lda_model[corpus]
54 |         topic_data = np.zeros((len(topics), self.num_topics))
55 |         for i in range(len(topics)):
56 |             for (idx, prob) in topics[i]:
57 |                 topic_data[i,idx] = prob
58 |         topic_data = np.hstack(np.vsplit(topic_data, ncol))
59 |         topic_colnames = ["topic%s_event%s"%(topic+1, event+1) for event in range(ncol) for topic in range(self.num_topics)]
60 | 
61 |         return pd.DataFrame(topic_data, columns=topic_colnames, index=X.index)
62 | 
63 | 
64 |     def _generate_dictionary(self, X):
65 |         data = X.values.flatten('F')
66 |         texts = [[word for word in str(document).lower().split()] for document in data]
67 |         dictionary = corpora.Dictionary(texts)
68 |         if self.save_dict:
69 |             dictionary.save(self.dict_file)
70 |         return dictionary
71 | 
72 | 
73 |     def _generate_corpus_data(self, X):
74 |         data = X.values.flatten('F')
75 |         texts = [[word for word in str(document).lower().split()] for document in data]
76 | 
77 |         # if frequency threshold set, filter
78 |         if self.min_freq > 0:
79 |             frequency = defaultdict(int)
80 |             for text in texts:
81 |                 for token in text:
82 |                     frequency[token] += 1
83 |             texts = [[token for token in text if frequency[token] > self.min_freq] for text in texts]
84 | 
85 |         # construct corpus
86 |         corpus = [self.dictionary.doc2bow(text) for text in texts]
87 | 
88 |         # if requested, do tfidf transformation
89 |         if self.tfidf:
90 |             if self.tfidf_model is None:
91 |                 self.tfidf_model = gensim_models.TfidfModel(corpus)
92 |             corpus_tfidf = self.tfidf_model[corpus]
93 |             return corpus_tfidf
94 |         return corpus
95 | 
96 | 
97 | 
98 | class PVTransformer(TransformerMixin):
99 | 
100 |     def __init__(self, size=16, window=8, min_count=1, workers=1, alpha=0.025, dm=1, epochs=1, random_seed=None):
101 | 
102 |         self.random_seed = random_seed
103 |         self.pv_model = None
104 | 
105 |         # should be tuned
106 |         self.size = size
107 |         self.window = window
108 |         self.alpha = alpha
109 |         self.dm = dm
110 | 
111 |         # may be left as default
112 |         self.min_count = min_count
113 |         self.workers = workers
114 |         self.epochs = epochs
115 | 
116 | 
117 |     def fit(self, X, y=None):
118 |         train_comments = X.values.flatten('F')
119 |         train_documents = self._generate_labeled_sentences(train_comments)
120 | 
121 |         self.pv_model = Doc2Vec(size=self.size, window=self.window, alpha=self.alpha, min_count=self.min_count, workers=self.workers, seed=self.random_seed, dm=self.dm)
122 |         self.pv_model.build_vocab(train_documents)
123 |         np.random.seed(self.random_seed)
124 |         for epoch in range(self.epochs):
125 |             np.random.shuffle(train_documents)
126 |             self.pv_model.train(train_documents)
127 | 
128 |         return self
129 | 
130 | 
131 |     def fit_transform(self, X, y=None):
132 |         self.fit(X)
133 | 
134 |         nrow = X.shape[0]
135 |         ncol = X.shape[1]
136 | 
137 |         train_X = self.pv_model.docvecs[range(nrow*ncol)]
138 |         train_X = np.hstack(np.vsplit(train_X, ncol))
139 |         colnames = ["pv%s_event%s"%(vec+1, event+1) for event in range(ncol) for vec in range(self.size)]
140 | 
141 |         train_X = pd.DataFrame(train_X, columns=colnames, index=X.index)
142 |         return train_X
143 | 
144 | 
145 |     def transform(self, X):
146 |         ncol = X.shape[1]
147 | 
148 |         test_comments = X.values.flatten('F')
149 |         vecs = [self.pv_model.infer_vector(comment.split()) for comment in test_comments]
150 |         test_X = np.hstack(np.vsplit(np.array(vecs), ncol))
151 |         colnames = ["pv%s_event%s"%(vec+1, event+1) for event in range(ncol) for vec in range(self.size)]
152 | 
153 |         test_X = pd.DataFrame(test_X, columns=colnames, index=X.index)
154 | 
155 |         return test_X
156 | 
157 | 
158 |     def _generate_labeled_sentences(self, comments):
159 |         documents = [LabeledSentence(words=comment.split(), tags=[i]) for i, comment in enumerate(comments)]
160 |         return documents
161 | 
162 | 
163 | 
164 | class BoNGTransformer(TransformerMixin):
165 | 
166 |     def __init__(self, ngram_min=1, ngram_max=1, tfidf=False, nr_selected=100):
167 | 
168 |         # should be tuned
169 |         self.ngram_max = ngram_max
170 |         self.tfidf = tfidf
171 |         self.nr_selected = nr_selected
172 | 
173 |         # may be left as default
174 |         self.ngram_min = ngram_min
175 | 
176 |         self.vectorizer = None
177 |         self.feature_selector = SelectKBest(chi2, k=self.nr_selected)
178 |         self.selected_cols = None
179 | 
180 | 
181 |     def fit(self, X, y):
182 |         data = X.values.flatten('F')
183 |         if self.tfidf:
184 |             self.vectorizer = TfidfVectorizer(ngram_range=(self.ngram_min,self.ngram_max))
185 |         else:
186 |             self.vectorizer = CountVectorizer(ngram_range=(self.ngram_min,self.ngram_max))
187 |         bong = self.vectorizer.fit_transform(data)
188 | 
189 |         # select features
190 |         if self.nr_selected == "all" or len(self.vectorizer.get_feature_names()) <= self.nr_selected:
191 |             self.feature_selector = SelectKBest(chi2, k="all")
192 |         self.feature_selector.fit(bong, y)
193 | 
194 |         # remember selected column names, aligned with the column order returned by SelectKBest.transform
195 |         if self.nr_selected == "all":
196 |             self.selected_cols = np.array(self.vectorizer.get_feature_names())
197 |         else:
198 |             self.selected_cols = np.array(self.vectorizer.get_feature_names())[self.feature_selector.get_support()]
199 | 
200 |         return self
201 | 
202 | 
203 |     def transform(self, X):
204 |         data = X.values.flatten('F')
205 |         bong = self.vectorizer.transform(data)
206 |         bong = self.feature_selector.transform(bong)
207 |         bong = bong.toarray()
208 | 
209 |         return pd.DataFrame(bong, columns=self.selected_cols, index=X.index)
210 | 
211 | 
212 | 
213 | class NBLogCountRatioTransformer(TransformerMixin):
214 | 
215 |     def __init__(self, ngram_min=1, ngram_max=1, alpha=1.0, nr_selected=100, pos_label="positive"):
216 | 
217 |         # should be tuned
218 |         self.ngram_max = ngram_max
219 |         self.alpha = alpha
220 |         self.nr_selected = nr_selected
221 | 
222 |         # may be left as default
223 |         self.ngram_min = ngram_min
224 | 
225 |         self.pos_label = pos_label
226 |         self.vectorizer = CountVectorizer(ngram_range=(ngram_min,ngram_max))
227 | 
228 | 
229 |     def fit(self, X, y):
230 |         data = X.values.flatten('F')
231 |         bong = self.vectorizer.fit_transform(data)
232 |         bong = bong.toarray()
233 | 
234 |         # calculate Naive Bayes log-count ratios
235 |         pos_label_idxs = y == self.pos_label
236 |         if sum(pos_label_idxs) > 0:
237 |             if len(y) - sum(pos_label_idxs) > 0:
238 |                 pos_bong = bong[pos_label_idxs]
239 |                 neg_bong = bong[~pos_label_idxs]
240 |             else:
241 |                 neg_bong = np.array([])
242 |                 pos_bong = bong.copy()
243 |         else:
244 |             neg_bong = bong.copy()
245 |             pos_bong = np.array([])
246 |         p = 1.0 * pos_bong.sum(axis=0) + self.alpha
247 |         q = 1.0 * neg_bong.sum(axis=0) + self.alpha
248 |         r = np.log((p / p.sum()) / (q / q.sum()))
249 |         self.nb_r = r
250 |         r = np.squeeze(np.asarray(r))
251 | 
252 | 
253 |         # feature selection: keep the most negative and most positive ratios
254 |         if self.nr_selected >= len(r):
255 |             r_selected = range(len(r))
256 |         else:
257 |             r_sorted = np.argsort(r)
258 |             r_selected = np.concatenate([r_sorted[:self.nr_selected//2], r_sorted[-self.nr_selected//2:]]) # integer division, required for indexing in Python 3
259 |         self.r_selected = r_selected
260 | 
261 |         if self.nr_selected == "all":
262 |             self.selected_cols = np.array(self.vectorizer.get_feature_names())
263 |         else:
264 |             self.selected_cols = np.array(self.vectorizer.get_feature_names())[self.r_selected]
265 | 
266 |         return self
267 | 
268 | 
269 |     def transform(self, X):
270 |         data = X.values.flatten('F')
271 |         bong = self.vectorizer.transform(data)
272 |         bong = bong.toarray()
273 | 
274 |         # generate transformed selected data
275 |         bong = bong * self.nb_r
276 |         bong = bong[:,self.r_selected]
277 | 
278 | 
279 |         return pd.DataFrame(bong, columns=self.selected_cols, index=X.index)
280 | 
281 | 
--------------------------------------------------------------------------------
/experiments/BoNGExperimentsCV.py:
--------------------------------------------------------------------------------
1 | import pandas as pd
2 | from sklearn.cross_validation import train_test_split
3 | from sklearn.cross_validation import StratifiedKFold
4 | import sys
5 | sys.path.append('..')
6 | from PredictiveMonitor import PredictiveMonitor
7 | 
8 | data_filepath = "data_preprocessed.csv"
9 | data = pd.read_csv(data_filepath, sep=";", encoding="utf-8")
10 | 
11 | dynamic_cols =
["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 12 | "bilanss_client"] 13 | 14 | static_cols = ['case_name', 'label', 'omakapital', 'kaive', 'state_tax_lq', 'lab_tax_lq', 'bilansimaht', 'puhaskasum', 'kasumlikkus', 'status_code', 'pr_1', 'pr_2', 'pr_3', 'pr_4', 'pr_5', 'pr_6', 'deg_1', 'deg_2', 'deg_3', 'deg_4', 'deg_5', 'deg_6', 'score_1', 'score_2', 'score_3', 'score_4', 'score_5', 'score_6', 'td_6', 'td_5', 'td_4', 'td_3', 'td_2', 'td_1', 'tdp_6', 'tdp_5', 'tdp_4', 'tdp_3', 'tdp_2', 'tdp_1', 'tdd_6', 'tdd_5', 'tdd_4', 'tdd_3', 'tdd_2', 'tdd_1', 'tdi_6', 'tdi_5', 'tdi_4', 'tdi_3', 'tdi_2', 'tdi_1', 'tdip_6', 'tdip_5', 'tdip_4', 'tdip_3', 'tdip_2', 'tdip_1', 'md_6', 'md_5', 'md_4', 'md_3', 'md_2', 'md_1', 'decl_6', 'decl_5', 'decl_4', 'decl_3', 'decl_2', 'decl_1', 'age', 'exp_payment_isna', 'state_tax_lq_isna', 'lab_tax_lq_isna', 'tax_debt_isna', 'tax_declar_isna', 'debt_balances_isna', 'debtors_name_isna', 'aasta_isna', 'kaive_isna', 'bilansimaht_isna', 'omakapital_isna', 'puhaskasum_isna', 'kasumlikkus_isna', 'bilanss_client_isna', 'status_code_isna', 'pr_1_isna', 'deg_1_isna', 'score_1_isna', 'td_1_isna', 'tdp_1_isna', 'tdd_1_isna', 'tdi_1_isna', 'tdip_1_isna', 'md_1_isna', 'decl_1_isna', 'age_isna'] 15 | 16 | last_state_cols = ["event_description_lemmas"] 17 | 18 | cat_cols = ['tax_declar', 'month', 'status_code', 'exp_payment_isna', 'state_tax_lq_isna', 'lab_tax_lq_isna', 'tax_debt_isna', 'tax_declar_isna', 'debt_balances_isna', 'debtors_name_isna', 'aasta_isna', 'kaive_isna', 'bilansimaht_isna', 'omakapital_isna', 'puhaskasum_isna', 'kasumlikkus_isna', 'bilanss_client_isna', 'status_code_isna', 'pr_1_isna', 'deg_1_isna', 'score_1_isna', 'td_1_isna', 'tdp_1_isna', 'tdd_1_isna', 'tdi_1_isna', 'tdip_1_isna', 'md_1_isna', 'decl_1_isna', 'age_isna'] 19 | 20 | case_id_col = "case_name" 21 | label_col = "label" 22 | event_nr_col = "event_nr" 23 | text_col = "event_description_lemmas" 24 | pos_label = "unsuccessful" 25 | 26 | # divide into train and test data 27 | train_names, test_names = train_test_split( data[case_id_col].unique(), train_size = 4.0/5, random_state = 22 ) 28 | train = data[data[case_id_col].isin(train_names)] 29 | test = data[data[case_id_col].isin(test_names)] 30 | 31 | # create five folds for cross-validation out of training data 32 | kf = StratifiedKFold(train[train[event_nr_col]==1][label_col], n_folds=5, shuffle=True, random_state=22) 33 | 34 | 35 | confidences = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95] 36 | text_transformer_type = "BoNGTransformer" 37 | cls_methods = ["rf", "logit"] 38 | 39 | encoder_kwargs = {"event_nr_col":event_nr_col, "static_cols":static_cols, "dynamic_cols":dynamic_cols, 40 | "last_state_cols":last_state_cols, "cat_cols":cat_cols, "oversample_fit":True, 41 | "minority_label":pos_label, "fillna":True, "random_state":22} 42 | 43 | part = 1 44 | for train_index, test_index in kf: 45 | 46 | # create train and validation data according to current fold 47 | current_train_names = train_names[train_index] 48 | train_chunk = train[train[case_id_col].isin(current_train_names)] 49 | current_test_names = train_names[test_index] 50 | test_chunk = train[train[case_id_col].isin(current_test_names)] 51 | 52 | for cls_method in cls_methods: 53 | if cls_method == "rf": 54 | cls_kwargs = {"n_estimators":500, "random_state":22} 55 | else: 56 | cls_kwargs = {"random_state":22} 57 | 58 | for nr_selected in [100, 250, 500, 750, 1000, 2000, 5000]: 59 | for tfidf in [True, False]: 60 | for ngram_max in 
[1,2,3]: 61 | 62 | transformer_kwargs = {"ngram_max":ngram_max, "tfidf":tfidf, "nr_selected":nr_selected} 63 | 64 | # train 65 | predictive_monitor = PredictiveMonitor(event_nr_col=event_nr_col, case_id_col=case_id_col, label_col=label_col, pos_label=pos_label, encoder_kwargs=encoder_kwargs, cls_kwargs=cls_kwargs, transformer_kwargs=transformer_kwargs, text_col=text_col, text_transformer_type=text_transformer_type, cls_method=cls_method) 66 | predictive_monitor.train(train_chunk) 67 | 68 | # test 69 | predictive_monitor.test(test_chunk, confidences, evaluate=True, output_filename="cv_results/bong_selected%s_%sngram%s_%s_part%s"%(nr_selected, ("tfidf_" if tfidf else ""), ngram_max, cls_method, part)) 70 | 71 | part += 1 -------------------------------------------------------------------------------- /experiments/FinalExperiments.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | from sklearn.cross_validation import train_test_split 3 | from sklearn.cross_validation import StratifiedKFold 4 | import sys 5 | sys.path.append('..') 6 | from PredictiveMonitor import PredictiveMonitor 7 | 8 | data_filepath = "data_preprocessed.csv" 9 | data = pd.read_csv(data_filepath, sep=";", encoding="utf-8") 10 | 11 | static_cols = ['case_name', 'label', 'omakapital', 'kaive', 'state_tax_lq', 'lab_tax_lq', 'bilansimaht', 'puhaskasum', 'kasumlikkus', 'status_code', 'pr_1', 'pr_2', 'pr_3', 'pr_4', 'pr_5', 'pr_6', 'deg_1', 'deg_2', 'deg_3', 'deg_4', 'deg_5', 'deg_6', 'score_1', 'score_2', 'score_3', 'score_4', 'score_5', 'score_6', 'td_6', 'td_5', 'td_4', 'td_3', 'td_2', 'td_1', 'tdp_6', 'tdp_5', 'tdp_4', 'tdp_3', 'tdp_2', 'tdp_1', 'tdd_6', 'tdd_5', 'tdd_4', 'tdd_3', 'tdd_2', 'tdd_1', 'tdi_6', 'tdi_5', 'tdi_4', 'tdi_3', 'tdi_2', 'tdi_1', 'tdip_6', 'tdip_5', 'tdip_4', 'tdip_3', 'tdip_2', 'tdip_1', 'md_6', 'md_5', 'md_4', 'md_3', 'md_2', 'md_1', 'decl_6', 'decl_5', 'decl_4', 'decl_3', 'decl_2', 'decl_1', 'age', 'exp_payment_isna', 'state_tax_lq_isna', 'lab_tax_lq_isna', 'tax_debt_isna', 'tax_declar_isna', 'debt_balances_isna', 'debtors_name_isna', 'aasta_isna', 'kaive_isna', 'bilansimaht_isna', 'omakapital_isna', 'puhaskasum_isna', 'kasumlikkus_isna', 'bilanss_client_isna', 'status_code_isna', 'pr_1_isna', 'deg_1_isna', 'score_1_isna', 'td_1_isna', 'tdp_1_isna', 'tdd_1_isna', 'tdi_1_isna', 'tdip_1_isna', 'md_1_isna', 'decl_1_isna', 'age_isna'] 12 | 13 | cat_cols = ['tax_declar', 'month', 'status_code', 'exp_payment_isna', 'state_tax_lq_isna', 'lab_tax_lq_isna', 'tax_debt_isna', 'tax_declar_isna', 'debt_balances_isna', 'debtors_name_isna', 'aasta_isna', 'kaive_isna', 'bilansimaht_isna', 'omakapital_isna', 'puhaskasum_isna', 'kasumlikkus_isna', 'bilanss_client_isna', 'status_code_isna', 'pr_1_isna', 'deg_1_isna', 'score_1_isna', 'td_1_isna', 'tdp_1_isna', 'tdd_1_isna', 'tdi_1_isna', 'tdip_1_isna', 'md_1_isna', 'decl_1_isna', 'age_isna'] 14 | 15 | case_id_col = "case_name" 16 | label_col = "label" 17 | event_nr_col = "event_nr" 18 | text_col = "event_description_lemmas" 19 | pos_label = "unsuccessful" 20 | 21 | # divide into train and test data 22 | train_names, test_names = train_test_split( data[case_id_col].unique(), train_size = 4.0/5, random_state = 22 ) 23 | train = data[data[case_id_col].isin(train_names)] 24 | test = data[data[case_id_col].isin(test_names)] 25 | 26 | confidences = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95] 27 | text_transformer_types = [None, "LDATransformer", "BoNGTransformer", "NBLogCountRatioTransformer"] # 
"PVTransformer", 28 | out_filename_bases = {None:"base", "LDATransformer":"lda", "PVTransformer":"pv", "BoNGTransformer":"bong", "NBLogCountRatioTransformer":"nb"} 29 | cls_methods = ["rf", "logit"] 30 | 31 | for text_transformer_type in text_transformer_types: 32 | if text_transformer_type is not None: 33 | optimal_params = pd.read_csv("cv_results/optimal_params_%s"%out_filename_bases[text_transformer_type], sep=";") 34 | 35 | for cls_method in cls_methods: 36 | if cls_method == "rf": 37 | cls_kwargs = {"n_estimators":500, "random_state":22} 38 | else: 39 | cls_kwargs = {"random_state":22} 40 | 41 | for conf in confidences: 42 | if text_transformer_type is not None: 43 | optimal_params_row = optimal_params.loc[(optimal_params["confidence"] == conf) & 44 | (optimal_params["cls"] == cls_method)] 45 | 46 | if text_transformer_type is None: 47 | transformer_kwargs = None 48 | dynamic_cols = ["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 49 | "bilanss_client"] 50 | last_state_cols = [] 51 | elif text_transformer_type == "LDATransformer": 52 | k = int(optimal_params_row.k) 53 | tfidf = (optimal_params_row.tfidf == "tfidf").iloc[0] 54 | transformer_kwargs = {"num_topics":k, "tfidf":tfidf, "random_seed":22} 55 | dynamic_cols = ["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 56 | "bilanss_client", "event_description_lemmas"] 57 | last_state_cols = [] 58 | elif text_transformer_type == "PVTransformer": 59 | size = int(optimal_params_row.size) 60 | window = int(optimal_params_row.window) 61 | transformer_kwargs = {"size":size, "window":window, "random_seed":22, "epochs":10} 62 | dynamic_cols = ["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 63 | "bilanss_client", "event_description_lemmas"] 64 | last_state_cols = [] 65 | elif text_transformer_type == "BoNGTransformer": 66 | ngram_max = int(optimal_params_row.ngram) 67 | nr_selected = int(optimal_params_row.selected) 68 | tfidf = (optimal_params_row.tfidf == "tfidf").iloc[0] 69 | transformer_kwargs = {"ngram_max":ngram_max, "tfidf":tfidf, "nr_selected":nr_selected} 70 | dynamic_cols = ["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 71 | "bilanss_client"] 72 | last_state_cols = ["event_description_lemmas"] 73 | elif text_transformer_type == "NBLogCountRatioTransformer": 74 | ngram_max = int(optimal_params_row.ngram) 75 | nr_selected = int(optimal_params_row.selected) 76 | alpha = float(optimal_params_row.alpha) 77 | transformer_kwargs = {"ngram_max":ngram_max, "alpha":alpha, "nr_selected":nr_selected} 78 | dynamic_cols = ["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 79 | "bilanss_client"] 80 | last_state_cols = ["event_description_lemmas"] 81 | 82 | encoder_kwargs = {"event_nr_col":event_nr_col, "static_cols":static_cols, "dynamic_cols":dynamic_cols, 83 | "last_state_cols":last_state_cols, "cat_cols":cat_cols, "oversample_fit":True, 84 | "minority_label":pos_label, "fillna":True, "random_state":22} 85 | 86 | # train 87 | predictive_monitor = PredictiveMonitor(event_nr_col=event_nr_col, case_id_col=case_id_col, label_col=label_col, pos_label=pos_label, encoder_kwargs=encoder_kwargs, cls_kwargs=cls_kwargs, transformer_kwargs=transformer_kwargs, text_col=text_col, text_transformer_type=text_transformer_type, cls_method=cls_method) 88 | predictive_monitor.train(train) 89 | 90 | # test 91 | 
predictive_monitor.test(test, confidences=[conf], evaluate=True, output_filename="final_results/%s_%s"%(out_filename_bases[text_transformer_type], cls_method), outfile_mode='a') -------------------------------------------------------------------------------- /experiments/LDAExperimentsCV.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | from sklearn.cross_validation import train_test_split 3 | from sklearn.cross_validation import StratifiedKFold 4 | import sys 5 | sys.path.append('..') 6 | from PredictiveMonitor import PredictiveMonitor 7 | 8 | data_filepath = "data_preprocessed.csv" 9 | data = pd.read_csv(data_filepath, sep=";", encoding="utf-8") 10 | 11 | dynamic_cols = ["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 12 | "bilanss_client", "event_description_lemmas"] 13 | 14 | static_cols = ['case_name', 'label', 'omakapital', 'kaive', 'state_tax_lq', 'lab_tax_lq', 'bilansimaht', 'puhaskasum', 'kasumlikkus', 'status_code', 'pr_1', 'pr_2', 'pr_3', 'pr_4', 'pr_5', 'pr_6', 'deg_1', 'deg_2', 'deg_3', 'deg_4', 'deg_5', 'deg_6', 'score_1', 'score_2', 'score_3', 'score_4', 'score_5', 'score_6', 'td_6', 'td_5', 'td_4', 'td_3', 'td_2', 'td_1', 'tdp_6', 'tdp_5', 'tdp_4', 'tdp_3', 'tdp_2', 'tdp_1', 'tdd_6', 'tdd_5', 'tdd_4', 'tdd_3', 'tdd_2', 'tdd_1', 'tdi_6', 'tdi_5', 'tdi_4', 'tdi_3', 'tdi_2', 'tdi_1', 'tdip_6', 'tdip_5', 'tdip_4', 'tdip_3', 'tdip_2', 'tdip_1', 'md_6', 'md_5', 'md_4', 'md_3', 'md_2', 'md_1', 'decl_6', 'decl_5', 'decl_4', 'decl_3', 'decl_2', 'decl_1', 'age', 'exp_payment_isna', 'state_tax_lq_isna', 'lab_tax_lq_isna', 'tax_debt_isna', 'tax_declar_isna', 'debt_balances_isna', 'debtors_name_isna', 'aasta_isna', 'kaive_isna', 'bilansimaht_isna', 'omakapital_isna', 'puhaskasum_isna', 'kasumlikkus_isna', 'bilanss_client_isna', 'status_code_isna', 'pr_1_isna', 'deg_1_isna', 'score_1_isna', 'td_1_isna', 'tdp_1_isna', 'tdd_1_isna', 'tdi_1_isna', 'tdip_1_isna', 'md_1_isna', 'decl_1_isna', 'age_isna'] 15 | 16 | cat_cols = ['tax_declar', 'month', 'status_code', 'exp_payment_isna', 'state_tax_lq_isna', 'lab_tax_lq_isna', 'tax_debt_isna', 'tax_declar_isna', 'debt_balances_isna', 'debtors_name_isna', 'aasta_isna', 'kaive_isna', 'bilansimaht_isna', 'omakapital_isna', 'puhaskasum_isna', 'kasumlikkus_isna', 'bilanss_client_isna', 'status_code_isna', 'pr_1_isna', 'deg_1_isna', 'score_1_isna', 'td_1_isna', 'tdp_1_isna', 'tdd_1_isna', 'tdi_1_isna', 'tdip_1_isna', 'md_1_isna', 'decl_1_isna', 'age_isna'] 17 | 18 | case_id_col = "case_name" 19 | label_col = "label" 20 | event_nr_col = "event_nr" 21 | text_col = "event_description_lemmas" 22 | pos_label = "unsuccessful" 23 | 24 | # divide into train and test data 25 | train_names, test_names = train_test_split( data[case_id_col].unique(), train_size = 4.0/5, random_state = 22 ) 26 | train = data[data[case_id_col].isin(train_names)] 27 | test = data[data[case_id_col].isin(test_names)] 28 | 29 | # create five folds for cross-validation out of training data 30 | kf = StratifiedKFold(train[train[event_nr_col]==1][label_col], n_folds=5, shuffle=True, random_state=22) 31 | 32 | 33 | confidences = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95] 34 | text_transformer_type = "LDATransformer" 35 | cls_methods = ["rf", "logit"] 36 | 37 | encoder_kwargs = {"event_nr_col":event_nr_col, "static_cols":static_cols, "dynamic_cols":dynamic_cols, "cat_cols":cat_cols, 38 | "oversample_fit":True, "minority_label":pos_label, "fillna":True, 39 | 
"random_state":22} 40 | 41 | part = 1 42 | for train_index, test_index in kf: 43 | 44 | # create train and validation data according to current fold 45 | current_train_names = train_names[train_index] 46 | train_chunk = train[train[case_id_col].isin(current_train_names)] 47 | current_test_names = train_names[test_index] 48 | test_chunk = train[train[case_id_col].isin(current_test_names)] 49 | 50 | for cls_method in cls_methods: 51 | if cls_method == "rf": 52 | cls_kwargs = {"n_estimators":500, "random_state":22} 53 | else: 54 | cls_kwargs = {"random_state":22} 55 | 56 | for k in [10, 25, 50, 75, 100, 200]: 57 | for tfidf in [True, False]: 58 | 59 | transformer_kwargs = {"num_topics":k, "tfidf":tfidf, "random_seed":22} 60 | 61 | # train 62 | predictive_monitor = PredictiveMonitor(event_nr_col=event_nr_col, case_id_col=case_id_col, label_col=label_col, pos_label=pos_label, encoder_kwargs=encoder_kwargs, cls_kwargs=cls_kwargs, transformer_kwargs=transformer_kwargs, text_col=text_col, text_transformer_type=text_transformer_type, cls_method=cls_method) 63 | predictive_monitor.train(train_chunk) 64 | 65 | # test 66 | predictive_monitor.test(test_chunk, confidences, evaluate=True, output_filename="cv_results/lda_k%s_%s%s_part%s"%(k, ("tfidf_" if tfidf else ""), cls_method, part)) 67 | 68 | part += 1 -------------------------------------------------------------------------------- /experiments/NBLogCountRatioExperimentsCV.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | from sklearn.cross_validation import train_test_split 3 | from sklearn.cross_validation import StratifiedKFold 4 | import sys 5 | sys.path.append('..') 6 | from PredictiveMonitor import PredictiveMonitor 7 | 8 | data_filepath = "data_preprocessed.csv" 9 | data = pd.read_csv(data_filepath, sep=";", encoding="utf-8") 10 | 11 | dynamic_cols = ["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 12 | "bilanss_client"] 13 | 14 | last_state_cols = ["event_description_lemmas"] 15 | 16 | static_cols = ['case_name', 'label', 'omakapital', 'kaive', 'state_tax_lq', 'lab_tax_lq', 'bilansimaht', 'puhaskasum', 'kasumlikkus', 'status_code', 'pr_1', 'pr_2', 'pr_3', 'pr_4', 'pr_5', 'pr_6', 'deg_1', 'deg_2', 'deg_3', 'deg_4', 'deg_5', 'deg_6', 'score_1', 'score_2', 'score_3', 'score_4', 'score_5', 'score_6', 'td_6', 'td_5', 'td_4', 'td_3', 'td_2', 'td_1', 'tdp_6', 'tdp_5', 'tdp_4', 'tdp_3', 'tdp_2', 'tdp_1', 'tdd_6', 'tdd_5', 'tdd_4', 'tdd_3', 'tdd_2', 'tdd_1', 'tdi_6', 'tdi_5', 'tdi_4', 'tdi_3', 'tdi_2', 'tdi_1', 'tdip_6', 'tdip_5', 'tdip_4', 'tdip_3', 'tdip_2', 'tdip_1', 'md_6', 'md_5', 'md_4', 'md_3', 'md_2', 'md_1', 'decl_6', 'decl_5', 'decl_4', 'decl_3', 'decl_2', 'decl_1', 'age', 'exp_payment_isna', 'state_tax_lq_isna', 'lab_tax_lq_isna', 'tax_debt_isna', 'tax_declar_isna', 'debt_balances_isna', 'debtors_name_isna', 'aasta_isna', 'kaive_isna', 'bilansimaht_isna', 'omakapital_isna', 'puhaskasum_isna', 'kasumlikkus_isna', 'bilanss_client_isna', 'status_code_isna', 'pr_1_isna', 'deg_1_isna', 'score_1_isna', 'td_1_isna', 'tdp_1_isna', 'tdd_1_isna', 'tdi_1_isna', 'tdip_1_isna', 'md_1_isna', 'decl_1_isna', 'age_isna'] 17 | 18 | cat_cols = ['tax_declar', 'month', 'status_code', 'exp_payment_isna', 'state_tax_lq_isna', 'lab_tax_lq_isna', 'tax_debt_isna', 'tax_declar_isna', 'debt_balances_isna', 'debtors_name_isna', 'aasta_isna', 'kaive_isna', 'bilansimaht_isna', 'omakapital_isna', 'puhaskasum_isna', 'kasumlikkus_isna', 'bilanss_client_isna', 
'status_code_isna', 'pr_1_isna', 'deg_1_isna', 'score_1_isna', 'td_1_isna', 'tdp_1_isna', 'tdd_1_isna', 'tdi_1_isna', 'tdip_1_isna', 'md_1_isna', 'decl_1_isna', 'age_isna'] 19 | 20 | case_id_col = "case_name" 21 | label_col = "label" 22 | event_nr_col = "event_nr" 23 | text_col = "event_description_lemmas" 24 | pos_label = "unsuccessful" 25 | 26 | # divide into train and test data 27 | train_names, test_names = train_test_split( data[case_id_col].unique(), train_size = 4.0/5, random_state = 22 ) 28 | train = data[data[case_id_col].isin(train_names)] 29 | test = data[data[case_id_col].isin(test_names)] 30 | 31 | # create five folds for cross-validation out of training data 32 | kf = StratifiedKFold(train[train[event_nr_col]==1][label_col], n_folds=5, shuffle=True, random_state=22) 33 | 34 | 35 | confidences = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95] 36 | text_transformer_type = "NBLogCountRatioTransformer" 37 | cls_methods = ["rf", "logit"] 38 | 39 | encoder_kwargs = {"event_nr_col":event_nr_col, "static_cols":static_cols, "dynamic_cols":dynamic_cols, 40 | "last_state_cols":last_state_cols, "cat_cols":cat_cols, "oversample_fit":True, 41 | "minority_label":pos_label, "fillna":True, "random_state":22} 42 | 43 | part = 1 44 | for train_index, test_index in kf: 45 | 46 | # create train and validation data according to current fold 47 | current_train_names = train_names[train_index] 48 | train_chunk = train[train[case_id_col].isin(current_train_names)] 49 | current_test_names = train_names[test_index] 50 | test_chunk = train[train[case_id_col].isin(current_test_names)] 51 | 52 | for cls_method in cls_methods: 53 | if cls_method == "rf": 54 | cls_kwargs = {"n_estimators":500, "random_state":22} 55 | else: 56 | cls_kwargs = {"random_state":22} 57 | 58 | for alpha in [0.01, 0.1, 0.5, 1.0]: 59 | for nr_selected in [100, 250, 500, 750, 1000, 2000, 5000]: 60 | for ngram_max in [1,2,3]: 61 | 62 | transformer_kwargs = {"ngram_max":ngram_max, "alpha":alpha, "nr_selected":nr_selected} 63 | 64 | # train 65 | predictive_monitor = PredictiveMonitor(event_nr_col=event_nr_col, case_id_col=case_id_col, label_col=label_col, pos_label=pos_label, encoder_kwargs=encoder_kwargs, cls_kwargs=cls_kwargs, transformer_kwargs=transformer_kwargs, text_col=text_col, text_transformer_type=text_transformer_type, cls_method=cls_method) 66 | predictive_monitor.train(train_chunk) 67 | 68 | # test 69 | predictive_monitor.test(test_chunk, confidences, evaluate=True, output_filename="cv_results/nb_selected%s_alpha%s_ngram%s_%s_part%s"%(nr_selected, alpha, ngram_max, cls_method, part)) 70 | 71 | part += 1 -------------------------------------------------------------------------------- /experiments/PVExperimentsCV.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | from sklearn.cross_validation import train_test_split 3 | from sklearn.cross_validation import StratifiedKFold 4 | import sys 5 | sys.path.append('..') 6 | from PredictiveMonitor import PredictiveMonitor 7 | 8 | data_filepath = "data_preprocessed.csv" 9 | data = pd.read_csv(data_filepath, sep=";", encoding="utf-8") 10 | 11 | dynamic_cols = ["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 12 | "bilanss_client", "event_description_lemmas"] 13 | 14 | static_cols = ['case_name', 'label', 'omakapital', 'kaive', 'state_tax_lq', 'lab_tax_lq', 'bilansimaht', 'puhaskasum', 'kasumlikkus', 'status_code', 'pr_1', 'pr_2', 'pr_3', 'pr_4', 'pr_5', 'pr_6', 'deg_1', 
'deg_2', 'deg_3', 'deg_4', 'deg_5', 'deg_6', 'score_1', 'score_2', 'score_3', 'score_4', 'score_5', 'score_6', 'td_6', 'td_5', 'td_4', 'td_3', 'td_2', 'td_1', 'tdp_6', 'tdp_5', 'tdp_4', 'tdp_3', 'tdp_2', 'tdp_1', 'tdd_6', 'tdd_5', 'tdd_4', 'tdd_3', 'tdd_2', 'tdd_1', 'tdi_6', 'tdi_5', 'tdi_4', 'tdi_3', 'tdi_2', 'tdi_1', 'tdip_6', 'tdip_5', 'tdip_4', 'tdip_3', 'tdip_2', 'tdip_1', 'md_6', 'md_5', 'md_4', 'md_3', 'md_2', 'md_1', 'decl_6', 'decl_5', 'decl_4', 'decl_3', 'decl_2', 'decl_1', 'age', 'exp_payment_isna', 'state_tax_lq_isna', 'lab_tax_lq_isna', 'tax_debt_isna', 'tax_declar_isna', 'debt_balances_isna', 'debtors_name_isna', 'aasta_isna', 'kaive_isna', 'bilansimaht_isna', 'omakapital_isna', 'puhaskasum_isna', 'kasumlikkus_isna', 'bilanss_client_isna', 'status_code_isna', 'pr_1_isna', 'deg_1_isna', 'score_1_isna', 'td_1_isna', 'tdp_1_isna', 'tdd_1_isna', 'tdi_1_isna', 'tdip_1_isna', 'md_1_isna', 'decl_1_isna', 'age_isna'] 15 | 16 | cat_cols = ['tax_declar', 'month', 'status_code', 'exp_payment_isna', 'state_tax_lq_isna', 'lab_tax_lq_isna', 'tax_debt_isna', 'tax_declar_isna', 'debt_balances_isna', 'debtors_name_isna', 'aasta_isna', 'kaive_isna', 'bilansimaht_isna', 'omakapital_isna', 'puhaskasum_isna', 'kasumlikkus_isna', 'bilanss_client_isna', 'status_code_isna', 'pr_1_isna', 'deg_1_isna', 'score_1_isna', 'td_1_isna', 'tdp_1_isna', 'tdd_1_isna', 'tdi_1_isna', 'tdip_1_isna', 'md_1_isna', 'decl_1_isna', 'age_isna'] 17 | 18 | case_id_col = "case_name" 19 | label_col = "label" 20 | event_nr_col = "event_nr" 21 | text_col = "event_description_lemmas" 22 | pos_label = "unsuccessful" 23 | 24 | # divide into train and test data 25 | train_names, test_names = train_test_split( data[case_id_col].unique(), train_size = 4.0/5, random_state = 22 ) 26 | train = data[data[case_id_col].isin(train_names)] 27 | test = data[data[case_id_col].isin(test_names)] 28 | 29 | # create five folds for cross-validation out of training data 30 | kf = StratifiedKFold(train[train[event_nr_col]==1][label_col], n_folds=5, shuffle=True, random_state=22) 31 | 32 | 33 | confidences = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95] 34 | text_transformer_type = "PVTransformer" 35 | cls_methods = ["rf", "logit"] 36 | 37 | encoder_kwargs = {"event_nr_col":event_nr_col, "static_cols":static_cols, "dynamic_cols":dynamic_cols, "cat_cols":cat_cols, 38 | "oversample_fit":True, "minority_label":pos_label, "fillna":True, 39 | "random_state":22} 40 | 41 | part = 1 42 | for train_index, test_index in kf: 43 | 44 | # create train and validation data according to current fold 45 | current_train_names = train_names[train_index] 46 | train_chunk = train[train[case_id_col].isin(current_train_names)] 47 | current_test_names = train_names[test_index] 48 | test_chunk = train[train[case_id_col].isin(current_test_names)] 49 | 50 | for cls_method in cls_methods: 51 | if cls_method == "rf": 52 | cls_kwargs = {"n_estimators":500, "random_state":22} 53 | else: 54 | cls_kwargs = {"random_state":22} 55 | 56 | for size in [10, 25, 50, 75, 100, 200, 400]: 57 | for window in range(1, 13): 58 | 59 | transformer_kwargs = {"size":size, "window":window, "random_seed":22, "epochs":10} 60 | 61 | # train 62 | predictive_monitor = PredictiveMonitor(event_nr_col=event_nr_col, case_id_col=case_id_col, label_col=label_col, pos_label=pos_label, encoder_kwargs=encoder_kwargs, cls_kwargs=cls_kwargs, transformer_kwargs=transformer_kwargs, text_col=text_col, text_transformer_type=text_transformer_type, cls_method=cls_method) 63 | 
predictive_monitor.train(train_chunk)
64 | 
65 |                 # test
66 |                 predictive_monitor.test(test_chunk, confidences, evaluate=True, output_filename="cv_results/pv_size%s_window%s_%s_part%s"%(size, window, cls_method, part))
67 | 
68 |     part += 1
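TimedExperiments.py below reads the per-transformer optimal-parameter tables written by extract_optimal_params.py. A sketch of that lookup in isolation (hypothetical file contents; note the bracket access for the "size" column, since .size is a built-in pandas attribute that returns the number of elements):

    import pandas as pd

    optimal_params = pd.read_csv("cv_results/optimal_params_pv", sep=";")
    row = optimal_params.loc[(optimal_params["confidence"] == 0.6) &
                             (optimal_params["cls"] == "rf")]
    size = int(row["size"].iloc[0])
    window = int(row["window"].iloc[0])
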
"random_state":22} 39 | else: 40 | cls_kwargs = {"random_state":22} 41 | 42 | 43 | optimal_params_row = optimal_params.loc[(optimal_params["confidence"] == conf) & 44 | (optimal_params["cls"] == cls_method)] 45 | 46 | if text_transformer_type is None: 47 | transformer_kwargs = None 48 | dynamic_cols = ["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 49 | "bilanss_client"] 50 | last_state_cols = [] 51 | elif text_transformer_type == "LDATransformer": 52 | 53 | k = int(optimal_params_row.k) 54 | tfidf = (optimal_params_row.tfidf == "tfidf").iloc[0] 55 | transformer_kwargs = {"num_topics":k, "tfidf":tfidf, "random_seed":22} 56 | dynamic_cols = ["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 57 | "bilanss_client", "event_description_lemmas"] 58 | last_state_cols = [] 59 | elif text_transformer_type == "PVTransformer": 60 | size = int(optimal_params_row.size) 61 | window = int(optimal_params_row.window) 62 | transformer_kwargs = {"size":size, "window":window, "random_seed":22, "epochs":10} 63 | dynamic_cols = ["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 64 | "bilanss_client", "event_description_lemmas"] 65 | last_state_cols = [] 66 | elif text_transformer_type == "BoNGTransformer": 67 | ngram_max = int(optimal_params_row.ngram) 68 | nr_selected = int(optimal_params_row.selected) 69 | tfidf = (optimal_params_row.tfidf == "tfidf").iloc[0] 70 | transformer_kwargs = {"ngram_max":ngram_max, "tfidf":tfidf, "nr_selected":nr_selected} 71 | dynamic_cols = ["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 72 | "bilanss_client"] 73 | last_state_cols = ["event_description_lemmas"] 74 | elif text_transformer_type == "NBLogCountRatioTransformer": 75 | ngram_max = int(optimal_params_row.ngram) 76 | nr_selected = int(optimal_params_row.selected) 77 | alpha = float(optimal_params_row.alpha) 78 | transformer_kwargs = {"ngram_max":ngram_max, "alpha":alpha, "nr_selected":nr_selected} 79 | dynamic_cols = ["debt_sum", "max_days_due", "exp_payment", "tax_declar", "month", "tax_debt", "debt_balances", 80 | "bilanss_client"] 81 | last_state_cols = ["event_description_lemmas"] 82 | 83 | encoder_kwargs = {"event_nr_col":event_nr_col, "static_cols":static_cols, "dynamic_cols":dynamic_cols, 84 | "last_state_cols":last_state_cols, "cat_cols":cat_cols, "oversample_fit":True, 85 | "minority_label":pos_label, "fillna":True, "random_state":22} 86 | 87 | # train 88 | predictive_monitor = PredictiveMonitor(event_nr_col=event_nr_col, case_id_col=case_id_col, label_col=label_col, pos_label=pos_label, encoder_kwargs=encoder_kwargs, cls_kwargs=cls_kwargs, transformer_kwargs=transformer_kwargs, text_col=text_col, text_transformer_type=text_transformer_type, cls_method=cls_method) 89 | predictive_monitor.train(train) 90 | 91 | # test 92 | predictive_monitor.test(test, confidences=[conf], evaluate=False, performance_output_filename="timed_results/%s_%s_%s"%(out_filename_bases[text_transformer_type], cls_method, i+1)) -------------------------------------------------------------------------------- /experiments/cv_results/bong_selected100_ngram1_rf_part1: -------------------------------------------------------------------------------- 1 | confidence;value;metric 2 | 0.5;35;tp 3 | 0.5;0.981785063752;specificity 4 | 0.5;0.9665924276169265;failure_rate 5 | 0.5;0.975946547884;accuracy 6 | 0.5;0.714285714286;recall 7 | 0.5;0.4666666666666667;precision 8 | 
0.5;40;fp 9 | 0.5;2156;tn 10 | 0.5;14;fn 11 | 0.5;0.564516129032;fscore 12 | 0.5;2.70666666667;earliness 13 | 0.55;33;tp 14 | 0.55;0.986338797814;specificity 15 | 0.55;0.9719376391982183;failure_rate 16 | 0.55;0.979510022272;accuracy 17 | 0.55;0.673469387755;recall 18 | 0.55;0.5238095238095238;precision 19 | 0.55;30;fp 20 | 0.55;2166;tn 21 | 0.55;16;fn 22 | 0.55;0.589285714286;fscore 23 | 0.55;2.77777777778;earliness 24 | 0.6;32;tp 25 | 0.6;0.987704918033;specificity 26 | 0.6;0.9737193763919821;failure_rate 27 | 0.6;0.980400890869;accuracy 28 | 0.6;0.65306122449;recall 29 | 0.6;0.5423728813559322;precision 30 | 0.6;27;fp 31 | 0.6;2169;tn 32 | 0.6;17;fn 33 | 0.6;0.592592592593;fscore 34 | 0.6;2.89830508475;earliness 35 | 0.65;30;tp 36 | 0.65;0.99043715847;specificity 37 | 0.65;0.97728285077951;failure_rate 38 | 0.65;0.982182628062;accuracy 39 | 0.65;0.612244897959;recall 40 | 0.65;0.5882352941176471;precision 41 | 0.65;21;fp 42 | 0.65;2175;tn 43 | 0.65;19;fn 44 | 0.65;0.6;fscore 45 | 0.65;3.09803921569;earliness 46 | 0.7;28;tp 47 | 0.7;0.991803278689;specificity 48 | 0.7;0.9795100222717149;failure_rate 49 | 0.7;0.982628062361;accuracy 50 | 0.7;0.571428571429;recall 51 | 0.7;0.6086956521739131;precision 52 | 0.7;18;fp 53 | 0.7;2178;tn 54 | 0.7;21;fn 55 | 0.7;0.589473684211;fscore 56 | 0.7;3.26086956522;earliness 57 | 0.75;25;tp 58 | 0.75;0.993169398907;specificity 59 | 0.75;0.9821826280623608;failure_rate 60 | 0.75;0.982628062361;accuracy 61 | 0.75;0.510204081633;recall 62 | 0.75;0.625;precision 63 | 0.75;15;fp 64 | 0.75;2181;tn 65 | 0.75;24;fn 66 | 0.75;0.561797752809;fscore 67 | 0.75;3.2;earliness 68 | 0.8;22;tp 69 | 0.8;0.99635701275;specificity 70 | 0.8;0.9866369710467706;failure_rate 71 | 0.8;0.984409799555;accuracy 72 | 0.8;0.448979591837;recall 73 | 0.8;0.7333333333333333;precision 74 | 0.8;8;fp 75 | 0.8;2188;tn 76 | 0.8;27;fn 77 | 0.8;0.556962025316;fscore 78 | 0.8;3.36666666667;earliness 79 | 0.85;19;tp 80 | 0.85;0.997267759563;specificity 81 | 0.85;0.9888641425389755;failure_rate 82 | 0.85;0.983964365256;accuracy 83 | 0.85;0.387755102041;recall 84 | 0.85;0.76;precision 85 | 0.85;6;fp 86 | 0.85;2190;tn 87 | 0.85;30;fn 88 | 0.85;0.513513513514;fscore 89 | 0.85;3.32;earliness 90 | 0.9;14;tp 91 | 0.9;0.997723132969;specificity 92 | 0.9;0.9915367483296214;failure_rate 93 | 0.9;0.982182628062;accuracy 94 | 0.9;0.285714285714;recall 95 | 0.9;0.7368421052631579;precision 96 | 0.9;5;fp 97 | 0.9;2191;tn 98 | 0.9;35;fn 99 | 0.9;0.411764705882;fscore 100 | 0.9;4.36842105263;earliness 101 | 0.95;10;tp 102 | 0.95;0.998633879781;specificity 103 | 0.95;0.9942093541202672;failure_rate 104 | 0.95;0.981291759465;accuracy 105 | 0.95;0.204081632653;recall 106 | 0.95;0.7692307692307693;precision 107 | 0.95;3;fp 108 | 0.95;2193;tn 109 | 0.95;39;fn 110 | 0.95;0.322580645161;fscore 111 | 0.95;3.92307692308;earliness 112 | -------------------------------------------------------------------------------- /experiments/cv_results/bong_selected100_tfidf_ngram1_rf_part1: -------------------------------------------------------------------------------- 1 | confidence;value;metric 2 | 0.5;37;tp 3 | 0.5;0.984061930783;specificity 4 | 0.5;0.9679287305122495;failure_rate 5 | 0.5;0.979064587973;accuracy 6 | 0.5;0.755102040816;recall 7 | 0.5;0.5138888888888888;precision 8 | 0.5;35;fp 9 | 0.5;2161;tn 10 | 0.5;12;fn 11 | 0.5;0.611570247934;fscore 12 | 0.5;2.48611111111;earliness 13 | 0.55;35;tp 14 | 0.55;0.985428051002;specificity 15 | 0.55;0.9701559020044543;failure_rate 16 | 0.55;0.979510022272;accuracy 17 | 
0.55;0.714285714286;recall 18 | 0.55;0.5223880597014925;precision 19 | 0.55;32;fp 20 | 0.55;2164;tn 21 | 0.55;14;fn 22 | 0.55;0.603448275862;fscore 23 | 0.55;2.71641791045;earliness 24 | 0.6;33;tp 25 | 0.6;0.987249544627;specificity 26 | 0.6;0.9728285077951002;failure_rate 27 | 0.6;0.980400890869;accuracy 28 | 0.6;0.673469387755;recall 29 | 0.6;0.5409836065573771;precision 30 | 0.6;28;fp 31 | 0.6;2168;tn 32 | 0.6;16;fn 33 | 0.6;0.6;fscore 34 | 0.6;2.85245901639;earliness 35 | 0.65;31;tp 36 | 0.65;0.989071038251;specificity 37 | 0.65;0.9755011135857461;failure_rate 38 | 0.65;0.981291759465;accuracy 39 | 0.65;0.632653061224;recall 40 | 0.65;0.5636363636363636;precision 41 | 0.65;24;fp 42 | 0.65;2172;tn 43 | 0.65;18;fn 44 | 0.65;0.596153846154;fscore 45 | 0.65;2.94545454545;earliness 46 | 0.7;30;tp 47 | 0.7;0.991347905282;specificity 48 | 0.7;0.978173719376392;failure_rate 49 | 0.7;0.983073496659;accuracy 50 | 0.7;0.612244897959;recall 51 | 0.7;0.6122448979591837;precision 52 | 0.7;19;fp 53 | 0.7;2177;tn 54 | 0.7;19;fn 55 | 0.7;0.612244897959;fscore 56 | 0.7;3.16326530612;earliness 57 | 0.75;26;tp 58 | 0.75;0.994535519126;specificity 59 | 0.75;0.9830734966592427;failure_rate 60 | 0.75;0.984409799555;accuracy 61 | 0.75;0.530612244898;recall 62 | 0.75;0.6842105263157895;precision 63 | 0.75;12;fp 64 | 0.75;2184;tn 65 | 0.75;23;fn 66 | 0.75;0.597701149425;fscore 67 | 0.75;2.89473684211;earliness 68 | 0.8;24;tp 69 | 0.8;0.99635701275;specificity 70 | 0.8;0.9857461024498887;failure_rate 71 | 0.8;0.985300668151;accuracy 72 | 0.8;0.489795918367;recall 73 | 0.8;0.75;precision 74 | 0.8;8;fp 75 | 0.8;2188;tn 76 | 0.8;25;fn 77 | 0.8;0.592592592593;fscore 78 | 0.8;3.15625;earliness 79 | 0.85;20;tp 80 | 0.85;0.996812386157;specificity 81 | 0.85;0.9879732739420936;failure_rate 82 | 0.85;0.983964365256;accuracy 83 | 0.85;0.408163265306;recall 84 | 0.85;0.7407407407407407;precision 85 | 0.85;7;fp 86 | 0.85;2189;tn 87 | 0.85;29;fn 88 | 0.85;0.526315789474;fscore 89 | 0.85;3.37037037037;earliness 90 | 0.9;15;tp 91 | 0.9;0.997723132969;specificity 92 | 0.9;0.9910913140311804;failure_rate 93 | 0.9;0.982628062361;accuracy 94 | 0.9;0.30612244898;recall 95 | 0.9;0.75;precision 96 | 0.9;5;fp 97 | 0.9;2191;tn 98 | 0.9;34;fn 99 | 0.9;0.434782608696;fscore 100 | 0.9;3.85;earliness 101 | 0.95;10;tp 102 | 0.95;0.999089253188;specificity 103 | 0.95;0.9946547884187082;failure_rate 104 | 0.95;0.981737193764;accuracy 105 | 0.95;0.204081632653;recall 106 | 0.95;0.8333333333333334;precision 107 | 0.95;2;fp 108 | 0.95;2194;tn 109 | 0.95;39;fn 110 | 0.95;0.327868852459;fscore 111 | 0.95;4.83333333333;earliness 112 | -------------------------------------------------------------------------------- /experiments/cv_results/bong_selected100_tfidf_ngram2_rf_part1: -------------------------------------------------------------------------------- 1 | confidence;value;metric 2 | 0.5;36;tp 3 | 0.5;0.981785063752;specificity 4 | 0.5;0.9661469933184855;failure_rate 5 | 0.5;0.976391982183;accuracy 6 | 0.5;0.734693877551;recall 7 | 0.5;0.47368421052631576;precision 8 | 0.5;40;fp 9 | 0.5;2156;tn 10 | 0.5;13;fn 11 | 0.5;0.576;fscore 12 | 0.5;2.55263157895;earliness 13 | 0.55;36;tp 14 | 0.55;0.985883424408;specificity 15 | 0.55;0.9701559020044543;failure_rate 16 | 0.55;0.980400890869;accuracy 17 | 0.55;0.734693877551;recall 18 | 0.55;0.5373134328358209;precision 19 | 0.55;31;fp 20 | 0.55;2165;tn 21 | 0.55;13;fn 22 | 0.55;0.620689655172;fscore 23 | 0.55;2.58208955224;earliness 24 | 0.6;36;tp 25 | 0.6;0.98679417122;specificity 26 | 
0.6;0.9710467706013363;failure_rate 27 | 0.6;0.981291759465;accuracy 28 | 0.6;0.734693877551;recall 29 | 0.6;0.5538461538461539;precision 30 | 0.6;29;fp 31 | 0.6;2167;tn 32 | 0.6;13;fn 33 | 0.6;0.631578947368;fscore 34 | 0.6;2.70769230769;earliness 35 | 0.65;32;tp 36 | 0.65;0.988615664845;specificity 37 | 0.65;0.9746102449888642;failure_rate 38 | 0.65;0.981291759465;accuracy 39 | 0.65;0.65306122449;recall 40 | 0.65;0.5614035087719298;precision 41 | 0.65;25;fp 42 | 0.65;2171;tn 43 | 0.65;17;fn 44 | 0.65;0.603773584906;fscore 45 | 0.65;2.89473684211;earliness 46 | 0.7;30;tp 47 | 0.7;0.991347905282;specificity 48 | 0.7;0.978173719376392;failure_rate 49 | 0.7;0.983073496659;accuracy 50 | 0.7;0.612244897959;recall 51 | 0.7;0.6122448979591837;precision 52 | 0.7;19;fp 53 | 0.7;2177;tn 54 | 0.7;19;fn 55 | 0.7;0.612244897959;fscore 56 | 0.7;3.0;earliness 57 | 0.75;27;tp 58 | 0.75;0.994080145719;specificity 59 | 0.75;0.9821826280623608;failure_rate 60 | 0.75;0.984409799555;accuracy 61 | 0.75;0.551020408163;recall 62 | 0.75;0.675;precision 63 | 0.75;13;fp 64 | 0.75;2183;tn 65 | 0.75;22;fn 66 | 0.75;0.606741573034;fscore 67 | 0.75;3.125;earliness 68 | 0.8;23;tp 69 | 0.8;0.99635701275;specificity 70 | 0.8;0.9861915367483296;failure_rate 71 | 0.8;0.984855233853;accuracy 72 | 0.8;0.469387755102;recall 73 | 0.8;0.7419354838709677;precision 74 | 0.8;8;fp 75 | 0.8;2188;tn 76 | 0.8;26;fn 77 | 0.8;0.575;fscore 78 | 0.8;3.06451612903;earliness 79 | 0.85;21;tp 80 | 0.85;0.997267759563;specificity 81 | 0.85;0.9879732739420936;failure_rate 82 | 0.85;0.984855233853;accuracy 83 | 0.85;0.428571428571;recall 84 | 0.85;0.7777777777777778;precision 85 | 0.85;6;fp 86 | 0.85;2190;tn 87 | 0.85;28;fn 88 | 0.85;0.552631578947;fscore 89 | 0.85;3.14814814815;earliness 90 | 0.9;18;tp 91 | 0.9;0.997723132969;specificity 92 | 0.9;0.9897550111358575;failure_rate 93 | 0.9;0.983964365256;accuracy 94 | 0.9;0.367346938776;recall 95 | 0.9;0.782608695652174;precision 96 | 0.9;5;fp 97 | 0.9;2191;tn 98 | 0.9;31;fn 99 | 0.9;0.5;fscore 100 | 0.9;4.17391304348;earliness 101 | 0.95;12;tp 102 | 0.95;0.998633879781;specificity 103 | 0.95;0.9933184855233853;failure_rate 104 | 0.95;0.982182628062;accuracy 105 | 0.95;0.244897959184;recall 106 | 0.95;0.8;precision 107 | 0.95;3;fp 108 | 0.95;2193;tn 109 | 0.95;37;fn 110 | 0.95;0.375;fscore 111 | 0.95;4.6;earliness 112 | -------------------------------------------------------------------------------- /experiments/cv_results/bong_selected100_tfidf_ngram3_rf_part1: -------------------------------------------------------------------------------- 1 | confidence;value;metric 2 | 0.5;36;tp 3 | 0.5;0.983151183971;specificity 4 | 0.5;0.9674832962138085;failure_rate 5 | 0.5;0.977728285078;accuracy 6 | 0.5;0.734693877551;recall 7 | 0.5;0.4931506849315068;precision 8 | 0.5;37;fp 9 | 0.5;2159;tn 10 | 0.5;13;fn 11 | 0.5;0.590163934426;fscore 12 | 0.5;2.45205479452;earliness 13 | 0.55;36;tp 14 | 0.55;0.985883424408;specificity 15 | 0.55;0.9701559020044543;failure_rate 16 | 0.55;0.980400890869;accuracy 17 | 0.55;0.734693877551;recall 18 | 0.55;0.5373134328358209;precision 19 | 0.55;31;fp 20 | 0.55;2165;tn 21 | 0.55;13;fn 22 | 0.55;0.620689655172;fscore 23 | 0.55;2.61194029851;earliness 24 | 0.6;36;tp 25 | 0.6;0.987249544627;specificity 26 | 0.6;0.9714922048997773;failure_rate 27 | 0.6;0.981737193764;accuracy 28 | 0.6;0.734693877551;recall 29 | 0.6;0.5625;precision 30 | 0.6;28;fp 31 | 0.6;2168;tn 32 | 0.6;13;fn 33 | 0.6;0.637168141593;fscore 34 | 0.6;2.703125;earliness 35 | 0.65;35;tp 36 | 
0.65;0.989526411658;specificity 37 | 0.65;0.9741648106904232;failure_rate 38 | 0.65;0.983518930958;accuracy 39 | 0.65;0.714285714286;recall 40 | 0.65;0.603448275862069;precision 41 | 0.65;23;fp 42 | 0.65;2173;tn 43 | 0.65;14;fn 44 | 0.65;0.654205607477;fscore 45 | 0.65;2.77586206897;earliness 46 | 0.7;30;tp 47 | 0.7;0.991803278689;specificity 48 | 0.7;0.978619153674833;failure_rate 49 | 0.7;0.983518930958;accuracy 50 | 0.7;0.612244897959;recall 51 | 0.7;0.625;precision 52 | 0.7;18;fp 53 | 0.7;2178;tn 54 | 0.7;19;fn 55 | 0.7;0.618556701031;fscore 56 | 0.7;3.08333333333;earliness 57 | 0.75;27;tp 58 | 0.75;0.994080145719;specificity 59 | 0.75;0.9821826280623608;failure_rate 60 | 0.75;0.984409799555;accuracy 61 | 0.75;0.551020408163;recall 62 | 0.75;0.675;precision 63 | 0.75;13;fp 64 | 0.75;2183;tn 65 | 0.75;22;fn 66 | 0.75;0.606741573034;fscore 67 | 0.75;3.275;earliness 68 | 0.8;25;tp 69 | 0.8;0.995901639344;specificity 70 | 0.8;0.9848552338530067;failure_rate 71 | 0.8;0.985300668151;accuracy 72 | 0.8;0.510204081633;recall 73 | 0.8;0.7352941176470589;precision 74 | 0.8;9;fp 75 | 0.8;2187;tn 76 | 0.8;24;fn 77 | 0.8;0.602409638554;fscore 78 | 0.8;3.08823529412;earliness 79 | 0.85;21;tp 80 | 0.85;0.996812386157;specificity 81 | 0.85;0.9875278396436525;failure_rate 82 | 0.85;0.984409799555;accuracy 83 | 0.85;0.428571428571;recall 84 | 0.85;0.75;precision 85 | 0.85;7;fp 86 | 0.85;2189;tn 87 | 0.85;28;fn 88 | 0.85;0.545454545455;fscore 89 | 0.85;3.14285714286;earliness 90 | 0.9;20;tp 91 | 0.9;0.997267759563;specificity 92 | 0.9;0.9884187082405346;failure_rate 93 | 0.9;0.984409799555;accuracy 94 | 0.9;0.408163265306;recall 95 | 0.9;0.7692307692307693;precision 96 | 0.9;6;fp 97 | 0.9;2190;tn 98 | 0.9;29;fn 99 | 0.9;0.533333333333;fscore 100 | 0.9;3.38461538462;earliness 101 | 0.95;11;tp 102 | 0.95;0.998178506375;specificity 103 | 0.95;0.9933184855233853;failure_rate 104 | 0.95;0.981291759465;accuracy 105 | 0.95;0.224489795918;recall 106 | 0.95;0.7333333333333333;precision 107 | 0.95;4;fp 108 | 0.95;2192;tn 109 | 0.95;38;fn 110 | 0.95;0.34375;fscore 111 | 0.95;4.46666666667;earliness 112 | -------------------------------------------------------------------------------- /experiments/cv_results/optimal_params_rf: -------------------------------------------------------------------------------- 1 | "confidence";"topic_k";"topic_tfidf";"bow_selected";"bow_ngram";"bow_tfidf";"nb_selected";"nb_ngram";"nb_alpha";"doc2vec_size";"doc2vec_window" 2 | 0.5;"50";"no-tfidf";"5000";"3";"tfidf";"5000";"1";"0.01";"10";"9" 3 | 0.55;"50";"no-tfidf";"750";"1";"tfidf";"2000";"1";"0.01";"10";"10" 4 | 0.6;"50";"no-tfidf";"5000";"2";"tfidf";"500";"1";"0.5";"10";"11" 5 | 0.65;"50";"no-tfidf";"5000";"2";"tfidf";"100";"1";"0.5";"10";"12" 6 | 0.7;"75";"no-tfidf";"5000";"2";"tfidf";"100";"1";"0.5";"10";"12" 7 | 0.75;"75";"no-tfidf";"1000";"3";"tfidf";"100";"1";"0.5";"10";"11" 8 | 0.8;"75";"no-tfidf";"750";"1";"tfidf";"5000";"1";"0.1";"10";"12" 9 | 0.85;"200";"no-tfidf";"100";"3";"no-tfidf";"250";"2";"0.5";"10";"9" 10 | 0.9;"100";"no-tfidf";"250";"3";"tfidf";"500";"1";"0.5";"10";"10" 11 | 0.95;"100";"no-tfidf";"250";"3";"no-tfidf";"1000";"2";"0.1";"10";"7" 12 | -------------------------------------------------------------------------------- /experiments/extract_optimal_params.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | import numpy as np 3 | import re 4 | import os 5 | 6 | files = os.listdir("cv_results/") 7 | 8 | all_metrics = {} 9 | for file in files: 10 | parts 
= file.split("_") 11 | if parts[0] == "bong": 12 | m = re.match("selected(\d+)", parts[1]) 13 | selected = m.group(1) 14 | 15 | if parts[2] == "tfidf": 16 | tfidf = "tfidf" 17 | nextidx = 3 18 | else: 19 | tfidf = "no-tfidf" 20 | nextidx = 2 21 | 22 | m = re.match("ngram(\d+)", parts[nextidx]) 23 | ngram = m.group(1) 24 | 25 | cls = parts[nextidx+1] 26 | 27 | m = re.match("part(\d+)", parts[nextidx+2]) 28 | part = m.group(1) 29 | 30 | metrics = pd.read_csv("cv_results/%s"%file, sep=";") 31 | metrics["selected"] = selected 32 | metrics["tfidf"] = tfidf 33 | metrics["ngram"] = ngram 34 | 35 | elif parts[0] == "nb": 36 | m = re.match("selected(\d+)", parts[1]) 37 | selected = m.group(1) 38 | 39 | m = re.match("alpha(.*)", parts[2]) 40 | alpha = m.group(1) 41 | 42 | m = re.match("ngram(\d+)", parts[3]) 43 | ngram = m.group(1) 44 | 45 | cls = parts[4] 46 | 47 | m = re.match("part(\d+)", parts[5]) 48 | part = m.group(1) 49 | 50 | metrics = pd.read_csv("cv_results/%s"%file, sep=";") 51 | metrics["selected"] = selected 52 | metrics["alpha"] = alpha 53 | metrics["ngram"] = ngram 54 | 55 | elif parts[0] == "lda": 56 | m = re.match("k(\d+)", parts[1]) 57 | k = m.group(1) 58 | 59 | if parts[2] == "tfidf": 60 | tfidf = "tfidf" 61 | nextidx = 3 62 | else: 63 | tfidf = "no-tfidf" 64 | nextidx = 2 65 | 66 | cls = parts[nextidx] 67 | 68 | m = re.match("part(\d+)", parts[nextidx+1]) 69 | part = m.group(1) 70 | 71 | metrics = pd.read_csv("cv_results/%s"%file, sep=";") 72 | metrics["k"] = k 73 | metrics["tfidf"] = tfidf 74 | 75 | elif parts[0] == "pv": 76 | m = re.match("size(\d+)", parts[1]) 77 | size = m.group(1) 78 | 79 | m = re.match("window(\d+)", parts[2]) 80 | window = m.group(1) 81 | 82 | cls = parts[3] 83 | 84 | m = re.match("part(\d+)", parts[4]) 85 | part = m.group(1) 86 | 87 | metrics = pd.read_csv("cv_results/%s"%file, sep=";") 88 | metrics["size"] = size 89 | metrics["window"] = window 90 | else: 91 | continue 92 | 93 | metrics["part"] = part 94 | metrics["cls"] = cls 95 | if parts[0] not in all_metrics: 96 | all_metrics[parts[0]] = metrics.copy() 97 | else: 98 | all_metrics[parts[0]] = pd.concat([all_metrics[parts[0]], metrics], ignore_index=True) 99 | 100 | optimal_params = {} 101 | for k, v in all_metrics.items(): 102 | grouped = v.groupby([col for col in v.columns if col not in ["part", "value"]]) 103 | means = pd.DataFrame(grouped["value"].mean()).reset_index() 104 | fscores = means[means["metric"] == "fscore"].reset_index() 105 | fscores_max = fscores.loc[fscores.groupby(["confidence", "cls"])['value'].idxmax()] 106 | optimal_params[k] = fscores_max 107 | 108 | for k, v in optimal_params.items(): 109 | v.to_csv("cv_results/optimal_params_%s"%k, sep=";", index=False) -------------------------------------------------------------------------------- /experiments/final_results/base_rf: -------------------------------------------------------------------------------- 1 | confidence;value;metric 2 | 0.5;2655;tn 3 | 0.5;58;tp 4 | 0.5;28;fn 5 | 0.5;0.9565062388591801;failure_rate 6 | 0.5;0.557692307692;fscore 7 | 0.5;0.47540983606557374;precision 8 | 0.5;64;fp 9 | 0.5;0.967201426025;accuracy 10 | 0.5;2.32786885246;earliness 11 | 0.5;0.976461934535;specificity 12 | 0.5;0.674418604651;recall 13 | 0.55;2668;tn 14 | 0.55;53;tp 15 | 0.55;33;fn 16 | 0.55;0.9629233511586452;failure_rate 17 | 0.55;0.557894736842;fscore 18 | 0.55;0.5096153846153846;precision 19 | 0.55;51;fp 20 | 0.55;0.970053475936;accuracy 21 | 0.55;2.45192307692;earliness 22 | 0.55;0.981243104082;specificity 23 | 0.55;0.616279069767;recall 
24 | 0.6;2681;tn 25 | 0.6;52;tp 26 | 0.6;34;fn 27 | 0.6;0.9679144385026738;failure_rate 28 | 0.6;0.590909090909;fscore 29 | 0.6;0.5777777777777777;precision 30 | 0.6;38;fp 31 | 0.6;0.974331550802;accuracy 32 | 0.6;2.5;earliness 33 | 0.6;0.98602427363;specificity 34 | 0.6;0.604651162791;recall 35 | 0.65;2686;tn 36 | 0.65;49;tp 37 | 0.65;37;fn 38 | 0.65;0.9707664884135473;failure_rate 39 | 0.65;0.583333333333;fscore 40 | 0.65;0.5975609756097561;precision 41 | 0.65;33;fp 42 | 0.65;0.97504456328;accuracy 43 | 0.65;2.78048780488;earliness 44 | 0.65;0.987863184994;specificity 45 | 0.65;0.56976744186;recall 46 | 0.7;2691;tn 47 | 0.7;47;tp 48 | 0.7;39;fn 49 | 0.7;0.9732620320855615;failure_rate 50 | 0.7;0.583850931677;fscore 51 | 0.7;0.6266666666666667;precision 52 | 0.7;28;fp 53 | 0.7;0.976114081996;accuracy 54 | 0.7;2.82666666667;earliness 55 | 0.7;0.989702096359;specificity 56 | 0.7;0.546511627907;recall 57 | 0.75;2698;tn 58 | 0.75;42;tp 59 | 0.75;44;fn 60 | 0.75;0.9775401069518717;failure_rate 61 | 0.75;0.563758389262;fscore 62 | 0.75;0.6666666666666666;precision 63 | 0.75;21;fp 64 | 0.75;0.976827094474;accuracy 65 | 0.75;3.22222222222;earliness 66 | 0.75;0.992276572269;specificity 67 | 0.75;0.488372093023;recall 68 | 0.8;2702;tn 69 | 0.8;38;tp 70 | 0.8;48;fn 71 | 0.8;0.9803921568627451;failure_rate 72 | 0.8;0.539007092199;fscore 73 | 0.8;0.6909090909090909;precision 74 | 0.8;17;fp 75 | 0.8;0.976827094474;accuracy 76 | 0.8;3.38181818182;earliness 77 | 0.8;0.993747701361;specificity 78 | 0.8;0.441860465116;recall 79 | 0.85;2709;tn 80 | 0.85;34;tp 81 | 0.85;52;fn 82 | 0.85;0.9843137254901961;failure_rate 83 | 0.85;0.523076923077;fscore 84 | 0.85;0.7727272727272727;precision 85 | 0.85;10;fp 86 | 0.85;0.977896613191;accuracy 87 | 0.85;4.15909090909;earliness 88 | 0.85;0.996322177271;specificity 89 | 0.85;0.395348837209;recall 90 | 0.9;2713;tn 91 | 0.9;25;tp 92 | 0.9;61;fn 93 | 0.9;0.9889483065953654;failure_rate 94 | 0.9;0.42735042735;fscore 95 | 0.9;0.8064516129032258;precision 96 | 0.9;6;fp 97 | 0.9;0.976114081996;accuracy 98 | 0.9;4.54838709677;earliness 99 | 0.9;0.997793306363;specificity 100 | 0.9;0.290697674419;recall 101 | 0.95;2717;tn 102 | 0.95;20;tp 103 | 0.95;66;fn 104 | 0.95;0.9921568627450981;failure_rate 105 | 0.95;0.37037037037;fscore 106 | 0.95;0.9090909090909091;precision 107 | 0.95;2;fp 108 | 0.95;0.975757575758;accuracy 109 | 0.95;4.95454545455;earliness 110 | 0.95;0.999264435454;specificity 111 | 0.95;0.232558139535;recall 112 | -------------------------------------------------------------------------------- /experiments/timed_results/base_rf_1: -------------------------------------------------------------------------------- 1 | nr_events;train_preproc_time;train_cls_time;test_preproc_time;test_time;nr_test_cases 2 | 1;1.649287462234497;16.466801166534424;0.3549313545227051;0.22014117240905762;2805 3 | 2;0.5642406940460205;5.137614965438843;0.14580845832824707;0.10602164268493652;960 4 | 3;0.30377817153930664;2.3286209106445312;0.08414483070373535;0.06566047668457031;339 5 | 4;0.17227935791015625;1.1944429874420166;0.06815123558044434;0.050820350646972656;127 6 | 5;0.13522577285766602;0.8283267021179199;0.062253713607788086;0.04609370231628418;56 7 | 6;0.1509559154510498;0.6707963943481445;0.0669853687286377;0.04413723945617676;24 8 | 7;0.12365460395812988;0.6283445358276367;0.07805299758911133;0.04243755340576172;12 9 | 8;0.13129568099975586;0.6071677207946777;0.09876894950866699;0.04128122329711914;5 10 | 
--------------------------------------------------------------------------------
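The result files above are in long format (confidence;value;metric). A minimal sketch for turning one of them into a wide per-confidence table, e.g. to compare fscore and earliness across thresholds (assumes it is run from the experiments directory):

    import pandas as pd

    res = pd.read_csv("final_results/base_rf", sep=";")
    wide = res.pivot_table(index="confidence", columns="metric", values="value")
    print(wide[["fscore", "precision", "recall", "earliness", "failure_rate"]])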