├── Examples A - spiral distribution
│   ├── Example 01 - full map
│   │   ├── dict_coeffs_order=10.p
│   │   ├── example_01.py
│   │   ├── output_composite_map.png
│   │   ├── output_conditional_map.png
│   │   ├── output_forward_map.png
│   │   ├── output_inverse_map.png
│   │   └── transport_map.py
│   └── Example 02 - partial map
│       ├── dict_coeffs_order=10_partial.p
│       ├── example_02.py
│       ├── output_composite_map.png
│       ├── output_conditional_map.png
│       └── transport_map.py
├── Examples B - statistical inference
│   ├── Example 03 - average temperature data
│   │   ├── DLMUNICH.txt
│   │   ├── RSMOSCOW.txt
│   │   ├── conditional_temperatures_Moscow.png
│   │   ├── example_03.py
│   │   ├── generative_modeling.png
│   │   ├── temperature_data.png
│   │   └── transport_map.py
│   └── Example 04 - Monod kinetics
│       ├── 01_data.png
│       ├── 02_prior_simulations.png
│       ├── 03_posterior_parameters.png
│       ├── 04_posterior_simulations.png
│       ├── example_04.py
│       ├── model_monod.dat
│       └── transport_map.py
├── Examples C - data assimilation
│   ├── Example 05 - Ensemble Transport Filter
│   │   ├── 01_RMSE_EnTF_order=1.png
│   │   ├── 01_RMSE_EnTF_order=2.png
│   │   ├── 01_RMSE_EnTF_order=3.png
│   │   ├── 01_RMSE_EnTF_order=4.png
│   │   ├── 01_RMSE_EnTF_order=5.png
│   │   ├── example_05.py
│   │   └── transport_map.py
│   └── Example 06 - Ensemble Transport Smoother
│       ├── 01_RMSE_EnTF_order=1.png
│       ├── 01_RMSE_EnTF_order=2.png
│       ├── 01_RMSE_EnTF_order=3.png
│       ├── 01_RMSE_EnTF_order=4.png
│       ├── 01_RMSE_EnTF_order=5.png
│       ├── 02_RMSE_EnTS_order=1_smoother_order=1.png
│       ├── 02_RMSE_EnTS_order=2_smoother_order=2.png
│       ├── 02_RMSE_EnTS_order=3_smoother_order=3.png
│       ├── 02_RMSE_EnTS_order=4_smoother_order=4.png
│       ├── 02_RMSE_EnTS_order=5_smoother_order=5.png
│       ├── example_06.py
│       └── transport_map.py
├── LICENSE
├── README.md
├── figures
│   └── spiral_animated.gif
└── transport_map.py

/Examples A - spiral distribution/Example 01 - full map/dict_coeffs_order=10.p:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples A - spiral distribution/Example 01 - full map/dict_coeffs_order=10.p
--------------------------------------------------------------------------------
/Examples A - spiral distribution/Example 01 - full map/example_01.py:
--------------------------------------------------------------------------------
1 | # Load a number of libraries required for this exercise
2 | import scipy.stats
3 | import numpy as np
4 | import matplotlib
5 | import matplotlib.pyplot as plt
6 | import copy
7 | import itertools
8 | import os
9 | import pickle
10 | 
11 | # Load in the transport map class
12 | from transport_map import *
13 | 
14 | # Find the current working directory
15 | root_directory = os.path.dirname(os.path.realpath(__file__))
16 | 
17 | # Set a random seed
18 | np.random.seed(0)
19 | 
20 | # Close all open figures
21 | plt.close('all')
22 | 
23 | # I use a custom radial colormap. These are its colors.
24 | turbocolors = [[0.6904761904761906, 0.010977778700493195, 0.012597277894375276], [0.6906519235270884, 0.01640135787405724, 0.0837140289089694], [0.6917028231470678, 0.0229766531631339, 0.1508422093503125], [0.6935600550923448, 0.03059217460463138, 0.2140806913791454], [0.6961547851191345, 0.03913937310725368, 0.27354760270667333], [0.6994181789836523, 0.048510190139500135, 0.32937485403121564], [0.7032814024421139, 0.05859488216300645, 0.3817042734043511], [0.7076756212507346, 0.06928011981122602, 0.4306849504666904], [0.7125320011657298, 0.08044736181345138, 0.4764714640882298], [0.717781707943315, 0.09197150366417624, 0.5192227314969523], [0.7233559073397057, 0.10371980103779682, 0.5591012754819024], [0.7291857651111174, 0.11555106794865597, 0.5962727587134745], [0.7352024470137656, 0.12731514965642451, 0.6309056806339336], [0.7413371188038657, 0.13885267031682524, 0.6631711727355227], [0.7475209462376329, 0.1499950553776956, 0.6932428623615332], [0.7536850950712828, 0.16056482872039282, 0.7212968034377881], [0.7597607310610313, 0.17037618454653694, 0.747511494767853], [0.75929001722113, 0.17923583401009582, 0.7656790199630931], [0.7475941038279283, 0.18694820855183453, 0.7713723012300058], [0.7366824556818949, 0.19346062714094175, 0.7768150322166271], [0.72643042874709, 0.19890811745519935, 0.7820344349913866], [0.7167412048139664, 0.20344443760235664, 0.7870610686424058], [0.7075377277455205, 0.20722973652712667, 0.7919254922578067], [0.6987607996238635, 0.2104298456332004, 0.7966582649257115], [0.6903673228438931, 0.21321570323575734, 0.8012899457342416], [0.682328749433452, 0.21576291184447535, 0.8058510937715191], [0.6746297909082095, 0.21825142827703536, 0.8103722681256655], [0.6672648697027385, 0.22085425646468607, 0.8148817689184499], [0.6602014830869536, 0.22359379565318152, 0.8193767173456075], [0.6533787124237928, 0.2263796531215008, 0.823831507046331], [0.6467289333281976, 0.22911680044368107, 0.8282200161103943], [0.6401785662494847, 
0.23170815719718751, 0.8325161226275704], [0.6336482569987627, 0.23405470433309905, 0.8366937046876322], [0.6270530344594542, 0.2360556532010278, 0.8407266403803533], [0.6203024476840878, 0.23760867022877552, 0.8445888077955067], [0.6133006844761989, 0.23861015725671983, 0.8482540850228658], [0.6059624573538679, 0.23898238778533165, 0.8516979593044193], [0.5983090374215976, 0.23882956412204973, 0.854906810971959], [0.5904174130986337, 0.23833421958277098, 0.8578715372512724], [0.5823783687413812, 0.2376823588176445, 0.8605830499184942], [0.5742958438497263, 0.23706268520314622, 0.8630322607497598], [0.566286260437254, 0.23666602945460796, 0.8652100815212042], [0.5584776771672347, 0.23668473807742815, 0.8671074240089628], [0.5510087685640095, 0.23731202165697124, 0.8687151999891709], [0.5440276258636073, 0.23874126128013823, 0.8700243216224771], [0.5376105577619357, 0.2411191644739669, 0.8710360813847914], [0.5315489266889114, 0.24442474436899914, 0.871789217558878], [0.5255684953009153, 0.24859807924582497, 0.8723309092587582], [0.5193987629250915, 0.2535784508499333, 0.8727083355984544], [0.5127774500657107, 0.25930498111025313, 0.8729686756919878], [0.5054539190109256, 0.2657171896860476, 0.8731591086533795], [0.4971915396084222, 0.2727554723421566, 0.8733268135966521], [0.4877690201269585, 0.2803615001525919, 0.8735189696358265], [0.47698218055553143, 0.28847794895571527, 0.873782586986574], [0.4648276728474042, 0.29697110893585776, 0.8741423773429315], [0.45165101189669743, 0.305552156792384, 0.8745787059949128], [0.43777753809619335, 0.31391252849082757, 0.8750666949974303], [0.4234601983798163, 0.32174200618204823, 0.8755814664053952], [0.4088786643397557, 0.3287286961969925, 0.87609814227372], [0.39414368224229085, 0.33455917871529023, 0.8765918446573163], [0.3793067492830514, 0.3389188291076908, 0.877037695611096], [0.3643751994164286, 0.34149231095233595, 0.8774108171899714], [0.34933070330797267, 0.34197454203820826, 0.877688092213006], 
[0.3403673509225958, 0.34663277097930234, 0.8778987375452829], [0.3370236501326216, 0.3554484329299287, 0.8781318704949722], [0.33231461663862033, 0.3618511397272365, 0.8784798910693628], [0.3266091000188368, 0.36659036039709064, 0.8790351992757446], [0.32027293973512316, 0.3703929847930015, 0.8798901951214063], [0.31366899325232445, 0.37397422952969406, 0.8811372786136377], [0.3071578750013543, 0.3780456944328393, 0.8828688497597277], [0.301099406185961, 0.38332054751916245, 0.8851773085669661], [0.29583267832568955, 0.3904653314042777, 0.8881489717217561], [0.2914323797009556, 0.39953599160062525, 0.8917964039595472], [0.2877980734089111, 0.41017448566248715, 0.8960831069995929], [0.28482805439516534, 0.4220062674230746, 0.9009716956503901], [0.28242172761403994, 0.43464761751547915, 0.906424784720434], [0.2804788409846796, 0.4477051726897616, 0.9124049890182206], [0.2788987912262992, 0.4607749922480771, 0.918874923352247], [0.277580002572562, 0.47344114547803795, 0.9257972025310082], [0.2764193783650953, 0.485273787396179, 0.9331344413630012], [0.27531382596760734, 0.49590948909381166, 0.9408321707166999], [0.27417124253611536, 0.5054794081653604, 0.9487320904666496], [0.2729055061729636, 0.5143187877995266, 0.9566368878753057], [0.27143253536604345, 0.5227855656932154, 0.9643491802293045], [0.2696704730654312, 0.531261581824037, 0.9716715848152837], [0.2675399881902376, 0.5401524972415588, 0.9784067189198801], [0.2649646868587828, 0.5498860194147946, 0.9843571998297309], [0.26187163334209196, 0.5609083493724006, 0.9893256448314729], [0.25819198579084385, 0.5736786941767217, 0.9931146823187612], [0.25388033135389326, 0.5884766343804858, 0.9955682574046939], [0.2489559746899319, 0.6049657034015569, 0.9966648267466579], [0.24345275591304505, 0.62265633216535, 0.9964101744320204], [0.23740633990537824, 0.6410301303413559, 0.9948100845481483], [0.23085404063102094, 0.6595442628994237, 0.9918703411824094], [0.22383464684219176, 0.6776362279674177, 
0.9875967284221705], [0.21638824917772453, 0.6947290781287391, 0.9819950303547987], [0.20855606865385165, 0.7102371206640654, 0.9750710310676618], [0.20038059410110376, 0.7235758485222823, 0.9668315591455693], [0.19193351310774223, 0.734519649068058, 0.957382599029186], [0.1833340368744243, 0.7435006401649751, 0.9470095683956252], [0.17469783713168363, 0.7510390693547768, 0.9360167776775004], [0.16613077624888986, 0.7576720124478675, 0.9247085373074251], [0.15772857678376886, 0.763950628076615, 0.9133891577180121], [0.14957672814523254, 0.7704390486819581, 0.9023629493418752], [0.14175063036952132, 0.7777150132050495, 0.8919342226116274], [0.1343159750096613, 0.7863723484029234, 0.8824072879598819], [0.12732970449680675, 0.797012529379906, 0.8740811995081446], [0.12084976147141473, 0.8099384376963948, 0.8671237357742304], [0.11493975728845535, 0.8251268658709435, 0.8615653598949594], [0.10966332888119507, 0.8425408262745097, 0.857430069974358], [0.1050853463044867, 0.8547418641164511, 0.8473403551838433], [0.10127356192382456, 0.8535247404252649, 0.8231538838024755], [0.09830031072117495, 0.8538026970048249, 0.7998509537157463], [0.09624426171757831, 0.8555997319591567, 0.7775291500837249], [0.09519222051252364, 0.8589398433922859, 0.7562951678279484], [0.09523466033004026, 0.8638301588380453, 0.736254113762426], [0.09640562817698234, 0.8700971873969563, 0.7173867121266534], [0.09870775682645283, 0.8774578722514202, 0.6996232876010631], [0.10214964316230982, 0.8856276192531288, 0.6829345377608849], [0.10674326864505639, 0.8943218342537755, 0.6673309599875173], [0.1125007767162798, 0.9032559231050521, 0.6528591156909103], [0.1194311818691091, 0.9121452916586508, 0.6395960113839659], [0.12753701038469437, 0.9207053457662641, 0.6276416598987021], [0.13681087273470835, 0.9286514912795846, 0.6171099273803513], [0.1472273301533093, 0.9357451866752111, 0.608020030012468], [0.1587227785675077, 0.9419996156076766, 0.5998477287978358], [0.17121964482967164, 
0.9475135374443872, 0.5918796184356399], [0.18463807709930177, 0.952385782298133, 0.5834130936511204], [0.19889759168666765, 0.9567151802817035, 0.5737737807193738], [0.21391852693235652, 0.9606005615078892, 0.562329072946783], [0.22962331181780002, 0.9641407560894797, 0.5484979436976313], [0.24593754930678074, 0.9674345941392647, 0.5317571316448257], [0.26279082355117567, 0.9705808834297661, 0.511644132353509], [0.2800068036776241, 0.9736514082827111, 0.48808838425690876], [0.2970834223782295, 0.9766384622071741, 0.4619539474027109], [0.31345165544585374, 0.9795197960248092, 0.4340817805575162], [0.3285349032795682, 0.982273160557271, 0.40507369070122135], [0.3417498926656193, 0.9848763066262134, 0.3752866956776463], [0.36016989144758, 0.9873069850532904, 0.35250783102398103], [0.40675615393016756, 0.9895429466601561, 0.36021581312049306], [0.44700613398058514, 0.9915619422684648, 0.3642784802445829], [0.4800453603427492, 0.9933416833596941, 0.3641065514694242], [0.5059172225697819, 0.9948570776317582, 0.3595849281383], [0.5260321807100684, 0.996078410770516, 0.35138497368822147], [0.5417361711468346, 0.9969755367452103, 0.340261882906466], [0.5542251246559543, 0.9975183095250835, 0.3269807394030567], [0.564594836019787, 0.9976765830793782, 0.3123146193759361], [0.5738773907037504, 0.9974202113773375, 0.29704222917768014], [0.5830638840476217, 0.9967190483882035, 0.28194507668375485], [0.593113257901611, 0.9955429480812196, 0.2678041764623123], [0.6049173329252986, 0.9938631942887316, 0.2553741911685384], [0.618688941513129, 0.9916814407117477, 0.24493286081566687], [0.6340191489156332, 0.9890283751100516, 0.23630248008060123], [0.6504815845022238, 0.9859358477750884, 0.22928282597582322], [0.667664607259613, 0.9824357089983022, 0.2236706549395957], [0.6851775225926456, 0.9785598090711378, 0.21926108516565482], [0.7026554225327669, 0.9743399982850405, 0.21584884718485342], [0.7197626647068687, 0.9698081269314549, 0.21322940269875598], [0.7361950207876023, 
0.9649960453018253, 0.21119993166518547], [0.7517045578243673, 0.9599385848130461, 0.20957252075787536], [0.7662987145771546, 0.9546988900091035, 0.20827881408159643], [0.7801235262514455, 0.9493557514336002, 0.20731561723168843], [0.793333881434332, 0.9439881244434671, 0.2066805054707251], [0.8060910507633685, 0.9386749643956355, 0.20637143421171145], [0.818561771106568, 0.9334952266470367, 0.20638701190028513], [0.8309175422374003, 0.9285278665546012, 0.20672674995777193], [0.8433341375472628, 0.9238518394752603, 0.2073912897850912], [0.8559913280771064, 0.9195461007659449, 0.20838260682751408], [0.8690561732865327, 0.9156936485066011, 0.20969336805791894], [0.8826222717866333, 0.9123973799376971, 0.21126511049859706], [0.8967754117130945, 0.9097663045485556, 0.21302599542691433], [0.9079094334716317, 0.9041987442580517, 0.21490680311671295], [0.9069357778393816, 0.886592094707027, 0.21684014643856891], [0.9069543487842606, 0.870013975207323, 0.21875944038476458], [0.9080741574387239, 0.8545231793863904, 0.22059763194219112], [0.910404214935227, 0.8401608509928674, 0.22228569031318043], [0.9140533609890943, 0.8269505724832292, 0.22375088679927074], [0.9190348675863581, 0.8148283391986565, 0.22493134638188253], [0.9251076558303858, 0.8035443408635032, 0.22580368730895622], [0.9319887969388863, 0.7928444208549985, 0.2263494470448247], [0.9393953621295686, 0.7825040250835419, 0.2265502781157535], [0.9470444226201423, 0.7723286488430519, 0.2263898716540945], [0.9546530496283158, 0.762153889073118, 0.22585549404541186], [0.9619383143717981, 0.7518450142735859, 0.2249391366785828], [0.9686172880682985, 0.7412959626121468, 0.22363827879886775], [0.9744096708096767, 0.730428526919429, 0.22195619461848926], [0.9791805893769406, 0.7192323214790531, 0.21989751414051684], [0.9830137772099842, 0.707766047135114, 0.21746224880159565], [0.9860109991965004, 0.6960819371591295, 0.21464892131240673], [0.9882740202241823, 0.6842207640296426, 0.21145507809451805], 
[0.9899046051807234, 0.6722129383813515, 0.2078773477615485], [0.9910045189538177, 0.660079441906903, 0.203911434685365], [0.9916755264311576, 0.6478325847934217, 0.19955204764730958], [0.9920193925004372, 0.6354765794026785, 0.19479276357445865], [0.9921346225982788, 0.6230072587408348, 0.18962791161850712], [0.9920602459326743, 0.6104024959410064, 0.1840886756924244], [0.9917833282948878, 0.5976302231717846, 0.17823961163192695], [0.9912891871817501, 0.584660910964709, 0.17214700759084267], [0.9905631400900912, 0.5714677122588908, 0.16587767290499006], [0.9895905045167414, 0.5580263006318681, 0.15949881685675113], [0.9883565979585309, 0.5443147998853488, 0.15307790816828803], [0.9868467379122904, 0.5303138044684916, 0.14668251522340267], [0.9850462418748495, 0.5160064896870945, 0.14038012701804065], [0.9829415325780593, 0.5013871258172927, 0.13423136665239896], [0.9805283806463996, 0.48652287501129976, 0.12824076784310964], [0.9778072570671092, 0.4715174832781621, 0.12238470698953774], [0.9747786684299458, 0.45647541200570035, 0.11663935572331047], [0.9714431213246664, 0.44150112246113676, 0.11098098205678426], [0.9678011223410284, 0.42669869332027743, 0.10538603876578982], [0.9638531780687892, 0.4121715013388048, 0.09983125148265794], [0.9595997950977064, 0.3980219636495409, 0.09429370649952155], [0.955041480017537, 0.38435133958413126, 0.08875093828190032], [0.9501776477067925, 0.3712391645206384, 0.08319244458709686], [0.945002869918086, 0.35867340216680627, 0.07765856363342656], [0.939510375845868, 0.34661657642435645, 0.07220291355950477], [0.9336933946348576, 0.33503080017448555, 0.06687804720630672], [0.927545155429773, 0.3238776561046621, 0.061735278666316916], [0.9210588873753326, 0.3131181839572251, 0.056824503280264636], [0.9142278196162553, 0.30271297234473005, 0.05219401055843878], [0.9070451812972594, 0.2926223514744295, 0.04789029002658525], [0.8995042852162572, 0.2828067015124927, 0.043957748795836256], [0.891624412799365, 0.27323253076797877, 
0.04041351972303193], [0.8834874722688015, 0.263879251265467, 0.03721289045871814], [0.8751846232207746, 0.254725704246588, 0.03430224569442672], [0.8668070252514929, 0.24574829852791777, 0.03162849539198234], [0.8584458379571641, 0.23692126221807738, 0.029139373140852975], [0.8501922209339968, 0.22821677001517918, 0.026783627401313844], [0.842137333778199, 0.2196049425900127, 0.024511105633431018], [0.8343723360859792, 0.21105371449732208, 0.022272731311857318], [0.8269868884473934, 0.20252933155631014, 0.02002124877275106], [0.8200045690445795, 0.19402936800454978, 0.017748501028958072], [0.8133583005373909, 0.18559748229672426, 0.015498635760688948], [0.8069744374385174, 0.17728209148881974, 0.013318655178400935], [0.8007793342606484, 0.16913337148852742, 0.0112547441788015], [0.7946993455164736, 0.16120340729781465, 0.009352359692418623], [0.788660825718683, 0.15354617809025697, 0.00765622813291652], [0.782590129379966, 0.14621737391166023, 0.0062102509481560386], [0.7764136110130129, 0.1392740409365317, 0.005057318272995783], [0.7700620750740421, 0.13276939181212588, 0.004236488419270876], [0.7635366882039091, 0.12668307615009333, 0.003745345772828322], [0.7568947816167044, 0.12093573368910132, 0.003549296943119582], [0.7501952516603297, 0.11544669842075488, 0.0036134540477882945], [0.7434969946826867, 0.11013619050381865, 0.0039037799561516488], [0.7368589070316779, 0.10492567700866447, 0.004387290052200903], [0.7303398850552048, 0.09973804029508934, 0.0050322105475879364], [0.7239988251011695, 0.09449755619814312, 0.005808093344599986], [0.7178946235174741, 0.08912968460872006, 0.006685887449122245], [0.7120861766520206, 0.08356067512355873, 0.007637966933587637], [0.7066323808527106, 0.07771699020297941, 0.008638115449914247], [0.7015921324674467, 0.07152454771412878, 0.009661467292431428], [0.6970243278441304, 0.06490778385274144, 0.01068440501079227], [0.6929878633306636, 0.05778853622743917, 0.011684413572873753], [0.6895416352749486, 
0.050084745357375754, 0.012639891077665865], [0.6867445400248873, 0.041708970976610964, 0.013529916018146956], [0.6846554739283816, 0.032566717356936344, 0.014333971094147597], [0.6833333333333333, 0.022554559355018072, 0.01503162357520156], [0.4796, 0.01583, 0.010550000000000002]] 25 | 26 | # ============================================================================= 27 | # Step 1: Sampling the target density function 28 | # ============================================================================= 29 | 30 | # This function allows us to sample the spiral-shaped target density function 31 | def sample_spiral_distribution(size): 32 | 33 | # First draw some rotation samples from a beta distribution, then scale 34 | # them to the range between -pi and +2pi 35 | seeds = scipy.stats.beta.rvs( 36 | a = 4, 37 | b = 3, 38 | size = size)*3*np.pi-np.pi 39 | 40 | # Create a local copy of the rotations 41 | seeds_orig = copy.copy(seeds) 42 | 43 | # Re-normalize the rotations, then scale them to the range between [-3,+3] 44 | vals = (seeds+np.pi)/(3*np.pi)*6-3 45 | 46 | # Plot the rotation samples on a straight spiral 47 | X = np.column_stack(( 48 | np.cos(seeds)[:,np.newaxis], 49 | np.sin(seeds)[:,np.newaxis]))*((1+seeds+np.pi)/(3*np.pi)*5)[:,np.newaxis] 50 | 51 | # Offset each sample along the spiral's normal vector by scaled Gaussian 52 | # noise 53 | X += np.column_stack([ 54 | np.cos(seeds_orig), 55 | np.sin(seeds_orig)])*(scipy.stats.norm.rvs(size=size)*scipy.stats.norm.pdf(vals))[:,np.newaxis] 56 | 57 | return X/2 58 | 59 | # Define the ensemble size 60 | N = 10000 61 | 62 | # Draw that many samples 63 | X = sample_spiral_distribution(N) 64 | 65 | #%% 66 | # ============================================================================= 67 | # Define the transport map parameterization 68 | # ============================================================================= 69 | 70 | # Next, we define the map component functions used in the triangular transport 71 | # map. 
The map definition requires two lists of lists: one for the monotone
72 | # part (basis functions which do depend on the last argument) and one for the
73 | # nonmonotone part (basis functions which do not depend on the last argument).
74 | # Each entry in those lists is another list that defines the basis functions.
75 | # Polynomial basis functions are lists of integers, with a potential keyword
76 | # such as 'HF' appended to mark it as a Hermite function. RBFs or related basis
77 | # functions are defined as strings such as 'RBF 0' or 'iRBF 7'.
78 | # 
79 | # Example: --------------------------------------------------------------------
80 | # 
81 | # monotone    = [
82 | #     [ [0] ],
83 | #     [ [1], [0,0,1,'HF'] ] ]
84 | # nonmonotone = [
85 | #     [ [] ],
86 | #     [ [], [0], [0,0], [0,0,'HF'], 'RBF 0'] ]
87 | # 
88 | # Explanation: ----------------------------------------------------------------
89 | # 
90 | # Monotone [list]
91 | #  |
92 | #  └―― Map component 1 [list] (last argument: entry x_{0})
93 | #  |    |
94 | #  |    └―― [0] Basis function 1 (linear term for entry 0)
95 | #  |
96 | #  └―― Map component 2 [list] (last argument: entry x_{1})
97 | #       |
98 | #       └―― [1] Basis function 1 (linear term for entry 1)
99 | #       |
100 | #       └―― [0,0,1,'HF'] Basis function 2 (cross-term: quadratic Hermite function for entry 0, linear Hermite function for entry 1)
101 | # 
102 | # Nonmonotone [list]
103 | #  |
104 | #  └―― Map component 1 [list] (valid arguments: constant)
105 | #  |    |
106 | #  |    └―― [] Basis function 1 (constant term)
107 | #  |
108 | #  └―― Map component 2 [list] (valid arguments: constant, x_{0})
109 | #       |
110 | #       └―― [] Basis function 1 (constant term)
111 | #       |
112 | #       └―― [0] Basis function 2 (linear term for entry x_{0})
113 | #       |
114 | #       └―― [0,0] Basis function 3 (quadratic term for entry x_{0})
115 | #       |
116 | #       └―― [0,0,'HF'] Basis function 4 (quadratic Hermite function for entry x_{0})
117 | #       |
118 | #       └―― 'RBF 0' Basis function 5 (radial basis function for entry x_{0})
119 | 
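As an aside, the basis-term notation above can be sanity-checked with a few lines of standalone Python. The helper below is an editor's sketch, not part of transport_map.py: it evaluates a plain polynomial term (omitting the 'HF' Hermite-function wrapper), interpreting each integer in the list as a variable index, with repeated indices raising the power.

```python
import numpy as np

# Sketch only (not toolbox code): evaluate a plain polynomial basis term such
# as [0, 0, 1] for a single sample x. Each integer is a variable index, so
# [0, 0, 1] corresponds to x_0**2 * x_1, and the empty list [] is the
# constant 1. The 'HF' keyword in the toolbox would additionally wrap the
# term in a re-scaled Hermite function, which is omitted here.
def evaluate_polynomial_term(term, x):
    result = 1.0
    for index in term:
        result *= x[index]
    return result

x = np.array([2.0, 3.0])
print(evaluate_polynomial_term([], x))         # constant term -> 1.0
print(evaluate_polynomial_term([0], x))        # x_0 -> 2.0
print(evaluate_polynomial_term([0, 0], x))     # x_0**2 -> 4.0
print(evaluate_polynomial_term([0, 0, 1], x))  # x_0**2 * x_1 -> 12.0
```

Read this way, the example above gives map component 2 a monotone part consisting of a linear term in x_{1} plus a Hermite-function cross-term in x_{0} and x_{1}.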
120 | # Create empty lists for the map component specifications
121 | monotone = []
122 | nonmonotone = []
123 | 
124 | # The spiral distribution is a very challenging target density function. We
125 | # require polynomial basis terms up to order 10
126 | maxorder = 10
127 | 
128 | # For complex maps, it is often better to write functions that assemble the
129 | # map component specifications automatically
130 | for k in range(2):
131 | 
132 |     # Level 1: Add an empty list entry for each map component function
133 |     monotone.append([])
134 |     nonmonotone.append([]) # An empty list "[]" denotes a constant
135 | 
136 |     # Level 2: We initiate the nonmonotone terms with a constant
137 |     nonmonotone[-1].append([])
138 | 
139 |     # Go through every polynomial order
140 |     for order in range(maxorder):
141 | 
142 |         # Nonmonotone part ----------------------------------------------------
143 | 
144 |         # We only have non-constant nonmonotone terms past the first map
145 |         # component, and we already added the constant term earlier, so only do
146 |         # this for the second map component function (k > 0).
147 |         if k > 0:
148 | 
149 |             # Level 2: Specify the polynomial order of this term;
150 |             # It's a Hermite function term
151 |             nonmonotone[-1].append([k-1]*(order+1)+['HF'])
152 | 
153 |         # Monotone part -------------------------------------------------------
154 | 
155 |         # For the monotone part, we consider cross-terms. This list creates all
156 |         # possible combinations for this map component up to total order 10
157 |         comblist = list(itertools.combinations_with_replacement(
158 |             np.arange(k+1),
159 |             order+1))
160 | 
161 |         # Go through each of these combinations
162 |         for entry in comblist:
163 | 
164 |             # Only create a monotone term if it depends on the last argument k
165 |             if k in entry:
166 | 
167 |                 # Level 2: Specify the polynomial order of this term;
168 |                 # It's a Hermite function
169 |                 monotone[-1].append(
170 |                     list(entry)+['HF'])
171 | 
172 | #%%
173 | # =============================================================================
174 | # Create the transport map object
175 | # =============================================================================
176 | 
177 | # With the map parameterization (nonmonotone, monotone) defined and the target
178 | # samples (X) obtained, we can start creating the transport map object.
179 | 
180 | # To begin, delete any map object which might already exist.
181 | if "tm" in globals():
182 |     del tm
183 | 
184 | # Create the transport map object tm
185 | tm = transport_map(
186 |     monotone = monotone, # Specify the monotone parts of the map component function
187 |     nonmonotone = nonmonotone, # Specify the nonmonotone parts of the map component function
188 |     X = X, # An N-by-D matrix of training samples (N = ensemble size, D = variable space dimension)
189 |     polynomial_type = "hermite function", # What types of polynomials did we specify? The 'hermite function' option uses re-scaled probabilist's Hermite polynomials, to avoid numerical overflow for higher-order terms
190 |     monotonicity = "integrated rectifier", # Are we ensuring monotonicity through 'integrated rectifier' or 'separable monotonicity'?
191 |     standardize_samples = True, # Standardize the training ensemble X? Should always be True
192 |     workers = 1, # Number of workers for the parallel optimization.
193 |     quadrature_input = { # Keywords for the Gaussian quadrature used for integration
194 |         'order' : 25,
195 |         'adaptive' : False,
196 |         'threshold' : 1E-9,
197 |         'verbose' : False,
198 |         'increment' : 6})
199 | 
200 | # Since this map is very complex, optimizing it is a very demanding task, and
201 | # can take up to 10 minutes of computation. Most maps are drastically cheaper,
202 | # often only taking fractions of seconds to optimize. To make this example a
203 | # bit faster, we have stored the pre-optimized output coefficients as a Python
204 | # pickle. If you want to redo the optimization yourself, delete the file named
205 | # "dict_coeffs_order=10.p" in the working directory.
206 | if 'dict_coeffs_order='+str(maxorder)+'.p' not in os.listdir(root_directory):
207 | 
208 |     print('No pre-optimized coefficients found. Optimizing...')
209 | 
210 |     # Optimize the transport map. This takes a while; it's an extremely
211 |     # complicated map.
212 |     tm.optimize()
213 | 
214 |     # Store the coefficients in a dictionary
215 |     dict_coeffs = {
216 |         'coeffs_mon' : tm.coeffs_mon,
217 |         'coeffs_nonmon' : tm.coeffs_nonmon}
218 | 
219 |     # Save the dictionary
220 |     pickle.dump(dict_coeffs,open('dict_coeffs_order='+str(maxorder)+'.p','wb'))
221 | 
222 | else:
223 | 
224 |     print('Pre-optimized coefficients found. Extracting...')
225 | 
226 |     # Load the previous coefficients
227 |     dict_coeffs = pickle.load(open('dict_coeffs_order='+str(maxorder)+'.p','rb'))
228 | 
229 |     # Write the coefficients into the map object without optimization
230 |     tm.coeffs_mon = copy.copy(dict_coeffs['coeffs_mon'])
231 |     tm.coeffs_nonmon = copy.copy(dict_coeffs['coeffs_nonmon'])
232 | 
233 | #%%
234 | # =============================================================================
235 | # Apply the map
236 | # =============================================================================
237 | 
238 | # -----------------------------------------------------------------------------
239 | # Example 1: forward map from the target to the reference
240 | # -----------------------------------------------------------------------------
241 | 
242 | # In this first example, we apply the map forward. This transforms samples from
243 | # the target (the spiral) into samples from the reference (a standard Gaussian).
244 | 
245 | # We can evaluate the forward map with the following command:
246 | Z_gen = tm.map(X)
247 | 
248 | # Now plot the results
249 | plt.figure(figsize=(7,7))
250 | plt.scatter(X[:,0],X[:,1],s=1,color='xkcd:orangish red',label='target samples',zorder = 10)
251 | plt.scatter(Z_gen[:,0],Z_gen[:,1],s=1,color='xkcd:grass green',label='reference samples', zorder = 5)
252 | plt.xlabel('dimension 1 ($x_{1}$ or $z_{1}$)')
253 | plt.ylabel('dimension 2 ($x_{2}$ or $z_{2}$)')
254 | plt.title(r'forward map $\pi$ to $\eta$')
255 | plt.xlim(-3,3)
256 | plt.ylim(-3,3)
257 | plt.legend()
258 | plt.savefig('output_forward_map.png')
259 | plt.show()
260 | 
261 | # -----------------------------------------------------------------------------
262 | # Example 2: inverse map from the reference to the target
263 | # -----------------------------------------------------------------------------
264 | 
265 | # In this second example, we apply the map backward. This transforms samples
266 | # from the reference (a standard Gaussian) into samples from the target (the
267 | # spiral).
268 | 
269 | # Let's generate new reference samples
270 | Z = scipy.stats.norm.rvs(loc=0,scale=1,size=(N,2))
271 | 
272 | # We can evaluate the inverse map with the following command:
273 | X_gen = tm.inverse_map(Z)
274 | 
275 | # Now plot the results
276 | plt.figure(figsize=(7,7))
277 | plt.scatter(X_gen[:,0],X_gen[:,1],s=1,color='xkcd:orangish red',label='target samples',zorder = 10)
278 | plt.scatter(Z[:,0],Z[:,1],s=1,color='xkcd:grass green',label='reference samples', zorder = 5)
279 | plt.xlabel('dimension 1 ($x_{1}$ or $z_{1}$)')
280 | plt.ylabel('dimension 2 ($x_{2}$ or $z_{2}$)')
281 | plt.title(r'inverse map $\eta$ to $\pi$')
282 | plt.xlim(-3,3)
283 | plt.ylim(-3,3)
284 | plt.legend()
285 | plt.savefig('output_inverse_map.png')
286 | plt.show()
287 | 
288 | # -----------------------------------------------------------------------------
289 | # Example 3: conditional map from the reference to a target conditional
290 | # -----------------------------------------------------------------------------
291 | 
292 | # In this third example, we explore one of the most important properties of
293 | # triangular transport maps: the ability to sample conditionals of the target
294 | # density function. To do so, we must first define samples $x_{1}^{*}$ on which
295 | # we want to condition.
296 | 
297 | # Define the values we want to condition on. Here, we want to condition on the
298 | # value $x_{1}^{*} = 0.6$. To do so, we generate duplicates of this value
299 | # for all sample realizations, arranged into an N-by-D array. Feel free to play
300 | # around with different conditional slices.
301 | X_star = np.ones((N,1))*0.6
302 | 
303 | # Note that we can condition each sample on an individual value, if we want.
304 | # In this case, X_star would not be an array of duplicate values.
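The principle behind this conditional inverse can be illustrated without the toolbox. The sketch below is an editor's illustration, not repository code: it assumes a bivariate Gaussian target with correlation rho, for which the lower-triangular Knothe-Rosenblatt map from the standard-normal reference is known in closed form. Fixing the first argument at a conditioning value x_1* while inverting the second map component draws samples from the conditional pi(x_2 | x_1 = x_1*).

```python
import numpy as np

# Minimal sketch (independent of transport_map.py): for a bivariate Gaussian
# with correlation rho, the lower-triangular (Knothe-Rosenblatt) map from the
# standard-normal reference z to the target x is known in closed form:
#     x_1 = z_1
#     x_2 = rho * x_1 + sqrt(1 - rho**2) * z_2
# Pinning x_1 = x_1* in the inverse of the second component therefore yields
# samples from the conditional pi(x_2 | x_1 = x_1*).
rng = np.random.default_rng(0)
rho = 0.8
x1_star = 0.6
N = 100000

# Reference samples for the second dimension only
z2 = rng.standard_normal(N)

# Conditional inverse of the second map component, with x_1 pinned to x_1*
x2_cond = rho * x1_star + np.sqrt(1.0 - rho**2) * z2

# The analytic conditional is N(rho * x_1*, 1 - rho**2)
print(x2_cond.mean())  # close to 0.48
print(x2_cond.std())   # close to 0.6
```

For the spiral target, no such closed form exists, which is why the optimized triangular map plays the role of the Cholesky-like factor here.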
305 | 306 | # Let's generate new reference samples for the second dimension $z_{2}$ 307 | Z_marg = scipy.stats.norm.rvs(loc=0,scale=1,size=(N,1)) 308 | 309 | # We can evaluate the inverse map with the following command: 310 | X_cond = tm.inverse_map(X_star = X_star, Z = Z_marg) 311 | 312 | # Now plot the results 313 | plt.figure(figsize=(8.5,7)) 314 | gs = matplotlib.gridspec.GridSpec( 315 | nrows = 1, 316 | ncols = 2, 317 | width_ratios = [1.0,0.2], 318 | wspace = 0.) 319 | plt.subplot(gs[0,0]) 320 | plt.scatter(X[:,0],X[:,1],s=1,color='xkcd:silver',label='target samples',zorder = 5) 321 | plt.scatter(X_cond[:,0],X_cond[:,1],s=1,color='xkcd:orangish red',label='conditional samples', zorder = 10, alpha = 0.01) 322 | plt.xlabel('dimension 1 ($x_{1}$ or $x_{1}^{*}$)') 323 | plt.ylabel('dimension 2 ($x_{2}$ or $x_{2}^{*}$)') 324 | plt.title('conditional map $\eta$ to $\pi^{*}$') 325 | plt.xlim(-3,3) 326 | plt.ylim(-3,3) 327 | plt.legend() 328 | plt.subplot(gs[0,1]) 329 | plt.hist(X_cond[:,1],orientation='horizontal',bins=30,color='xkcd:orangish red') 330 | plt.ylim(-3,3) 331 | plt.savefig('output_conditional_map.png') 332 | plt.show() 333 | 334 | 335 | # ----------------------------------------------------------------------------- 336 | # Example 4: composite conditional map from the target to a target conditional 337 | # ----------------------------------------------------------------------------- 338 | 339 | # This example is similar to Example 3. In many cases, it is preferrable to use 340 | # reference samples from a forward application of the map for the conditional 341 | # inverse, rather than samples from the reference distribution itself. In this 342 | # example, we show how we construct a composite map. It's quite simple. 343 | # This map is very close to the target density, so the advantage is negligible. 
344 | # In many other cases, however, composite maps offer a dramatic advantage over 345 | # conventional conditional inverses, and should thus almost always be used over 346 | # the procedure in Example 3. 347 | 348 | # Once more, let us define the values we want to condition on. 349 | X_star = np.ones((N,1))*0.6 350 | 351 | # Instead of sampling the standard Gaussian, we now generate our reference 352 | # samples from a forward application of the map. First, apply the forward map: 353 | Z_gen = tm.map(X) 354 | 355 | # We only need the second (last) marginal of these reference samples. 356 | Z_marg = Z_gen[:,1:] 357 | 358 | # Applying the inverse map again with these map-generated reference samples, we 359 | # effectively implement a composite map: 360 | X_cond = tm.inverse_map(X_star = X_star, Z = Z_marg) 361 | 362 | # Now plot the results 363 | plt.figure(figsize=(8.5,7)) 364 | gs = matplotlib.gridspec.GridSpec( 365 | nrows = 1, 366 | ncols = 2, 367 | width_ratios = [1.0,0.2], 368 | wspace = 0.)
369 | plt.subplot(gs[0,0]) 370 | plt.scatter(X[:,0],X[:,1],s=1,color='xkcd:silver',label='target samples',zorder = 5) 371 | plt.scatter(X_cond[:,0],X_cond[:,1],s=1,color='xkcd:orangish red',label='conditional samples', zorder = 10, alpha = 0.01) 372 | plt.xlabel('dimension 1 ($x_{1}$ or $x_{1}^{*}$)') 373 | plt.ylabel('dimension 2 ($x_{2}$ or $x_{2}^{*}$)') 374 | plt.title('composite map $\pi$ to $\pi^{*}$') 375 | plt.xlim(-3,3) 376 | plt.ylim(-3,3) 377 | plt.legend() 378 | plt.subplot(gs[0,1]) 379 | plt.hist(X_cond[:,1],orientation='horizontal',bins=30,color='xkcd:orangish red') 380 | plt.ylim(-3,3) 381 | plt.savefig('output_composite_map.png') 382 | plt.show() 383 | 384 | 385 | 386 | 387 | 388 | 389 | 390 | -------------------------------------------------------------------------------- /Examples A - spiral distribution/Example 01 - full map/output_composite_map.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples A - spiral distribution/Example 01 - full map/output_composite_map.png -------------------------------------------------------------------------------- /Examples A - spiral distribution/Example 01 - full map/output_conditional_map.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples A - spiral distribution/Example 01 - full map/output_conditional_map.png -------------------------------------------------------------------------------- /Examples A - spiral distribution/Example 01 - full map/output_forward_map.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples A - spiral 
distribution/Example 01 - full map/output_forward_map.png -------------------------------------------------------------------------------- /Examples A - spiral distribution/Example 01 - full map/output_inverse_map.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples A - spiral distribution/Example 01 - full map/output_inverse_map.png -------------------------------------------------------------------------------- /Examples A - spiral distribution/Example 02 - partial map/dict_coeffs_order=10_partial.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples A - spiral distribution/Example 02 - partial map/dict_coeffs_order=10_partial.p -------------------------------------------------------------------------------- /Examples A - spiral distribution/Example 02 - partial map/example_02.py: -------------------------------------------------------------------------------- 1 | # Load a number of libraries required for this exercise 2 | import scipy.stats 3 | import numpy as np 4 | import matplotlib 5 | import matplotlib.pyplot as plt 6 | import copy 7 | import itertools 8 | import os 9 | import pickle 10 | 11 | # Load in the transport map class 12 | from transport_map import * 13 | 14 | # Find the current working directory 15 | root_directory = os.path.dirname(os.path.realpath(__file__)) 16 | 17 | # Set a random seed 18 | np.random.seed(0) 19 | 20 | # Close all open figures 21 | plt.close('all') 22 | 23 | # I use a custom radial colormap. These are its colors. 
24 | turbocolors = [[0.6904761904761906, 0.010977778700493195, 0.012597277894375276], [0.6906519235270884, 0.01640135787405724, 0.0837140289089694], [0.6917028231470678, 0.0229766531631339, 0.1508422093503125], [0.6935600550923448, 0.03059217460463138, 0.2140806913791454], [0.6961547851191345, 0.03913937310725368, 0.27354760270667333], [0.6994181789836523, 0.048510190139500135, 0.32937485403121564], [0.7032814024421139, 0.05859488216300645, 0.3817042734043511], [0.7076756212507346, 0.06928011981122602, 0.4306849504666904], [0.7125320011657298, 0.08044736181345138, 0.4764714640882298], [0.717781707943315, 0.09197150366417624, 0.5192227314969523], [0.7233559073397057, 0.10371980103779682, 0.5591012754819024], [0.7291857651111174, 0.11555106794865597, 0.5962727587134745], [0.7352024470137656, 0.12731514965642451, 0.6309056806339336], [0.7413371188038657, 0.13885267031682524, 0.6631711727355227], [0.7475209462376329, 0.1499950553776956, 0.6932428623615332], [0.7536850950712828, 0.16056482872039282, 0.7212968034377881], [0.7597607310610313, 0.17037618454653694, 0.747511494767853], [0.75929001722113, 0.17923583401009582, 0.7656790199630931], [0.7475941038279283, 0.18694820855183453, 0.7713723012300058], [0.7366824556818949, 0.19346062714094175, 0.7768150322166271], [0.72643042874709, 0.19890811745519935, 0.7820344349913866], [0.7167412048139664, 0.20344443760235664, 0.7870610686424058], [0.7075377277455205, 0.20722973652712667, 0.7919254922578067], [0.6987607996238635, 0.2104298456332004, 0.7966582649257115], [0.6903673228438931, 0.21321570323575734, 0.8012899457342416], [0.682328749433452, 0.21576291184447535, 0.8058510937715191], [0.6746297909082095, 0.21825142827703536, 0.8103722681256655], [0.6672648697027385, 0.22085425646468607, 0.8148817689184499], [0.6602014830869536, 0.22359379565318152, 0.8193767173456075], [0.6533787124237928, 0.2263796531215008, 0.823831507046331], [0.6467289333281976, 0.22911680044368107, 0.8282200161103943], [0.6401785662494847, 
0.23170815719718751, 0.8325161226275704], [0.6336482569987627, 0.23405470433309905, 0.8366937046876322], [0.6270530344594542, 0.2360556532010278, 0.8407266403803533], [0.6203024476840878, 0.23760867022877552, 0.8445888077955067], [0.6133006844761989, 0.23861015725671983, 0.8482540850228658], [0.6059624573538679, 0.23898238778533165, 0.8516979593044193], [0.5983090374215976, 0.23882956412204973, 0.854906810971959], [0.5904174130986337, 0.23833421958277098, 0.8578715372512724], [0.5823783687413812, 0.2376823588176445, 0.8605830499184942], [0.5742958438497263, 0.23706268520314622, 0.8630322607497598], [0.566286260437254, 0.23666602945460796, 0.8652100815212042], [0.5584776771672347, 0.23668473807742815, 0.8671074240089628], [0.5510087685640095, 0.23731202165697124, 0.8687151999891709], [0.5440276258636073, 0.23874126128013823, 0.8700243216224771], [0.5376105577619357, 0.2411191644739669, 0.8710360813847914], [0.5315489266889114, 0.24442474436899914, 0.871789217558878], [0.5255684953009153, 0.24859807924582497, 0.8723309092587582], [0.5193987629250915, 0.2535784508499333, 0.8727083355984544], [0.5127774500657107, 0.25930498111025313, 0.8729686756919878], [0.5054539190109256, 0.2657171896860476, 0.8731591086533795], [0.4971915396084222, 0.2727554723421566, 0.8733268135966521], [0.4877690201269585, 0.2803615001525919, 0.8735189696358265], [0.47698218055553143, 0.28847794895571527, 0.873782586986574], [0.4648276728474042, 0.29697110893585776, 0.8741423773429315], [0.45165101189669743, 0.305552156792384, 0.8745787059949128], [0.43777753809619335, 0.31391252849082757, 0.8750666949974303], [0.4234601983798163, 0.32174200618204823, 0.8755814664053952], [0.4088786643397557, 0.3287286961969925, 0.87609814227372], [0.39414368224229085, 0.33455917871529023, 0.8765918446573163], [0.3793067492830514, 0.3389188291076908, 0.877037695611096], [0.3643751994164286, 0.34149231095233595, 0.8774108171899714], [0.34933070330797267, 0.34197454203820826, 0.877688092213006], 
[0.3403673509225958, 0.34663277097930234, 0.8778987375452829], [0.3370236501326216, 0.3554484329299287, 0.8781318704949722], [0.33231461663862033, 0.3618511397272365, 0.8784798910693628], [0.3266091000188368, 0.36659036039709064, 0.8790351992757446], [0.32027293973512316, 0.3703929847930015, 0.8798901951214063], [0.31366899325232445, 0.37397422952969406, 0.8811372786136377], [0.3071578750013543, 0.3780456944328393, 0.8828688497597277], [0.301099406185961, 0.38332054751916245, 0.8851773085669661], [0.29583267832568955, 0.3904653314042777, 0.8881489717217561], [0.2914323797009556, 0.39953599160062525, 0.8917964039595472], [0.2877980734089111, 0.41017448566248715, 0.8960831069995929], [0.28482805439516534, 0.4220062674230746, 0.9009716956503901], [0.28242172761403994, 0.43464761751547915, 0.906424784720434], [0.2804788409846796, 0.4477051726897616, 0.9124049890182206], [0.2788987912262992, 0.4607749922480771, 0.918874923352247], [0.277580002572562, 0.47344114547803795, 0.9257972025310082], [0.2764193783650953, 0.485273787396179, 0.9331344413630012], [0.27531382596760734, 0.49590948909381166, 0.9408321707166999], [0.27417124253611536, 0.5054794081653604, 0.9487320904666496], [0.2729055061729636, 0.5143187877995266, 0.9566368878753057], [0.27143253536604345, 0.5227855656932154, 0.9643491802293045], [0.2696704730654312, 0.531261581824037, 0.9716715848152837], [0.2675399881902376, 0.5401524972415588, 0.9784067189198801], [0.2649646868587828, 0.5498860194147946, 0.9843571998297309], [0.26187163334209196, 0.5609083493724006, 0.9893256448314729], [0.25819198579084385, 0.5736786941767217, 0.9931146823187612], [0.25388033135389326, 0.5884766343804858, 0.9955682574046939], [0.2489559746899319, 0.6049657034015569, 0.9966648267466579], [0.24345275591304505, 0.62265633216535, 0.9964101744320204], [0.23740633990537824, 0.6410301303413559, 0.9948100845481483], [0.23085404063102094, 0.6595442628994237, 0.9918703411824094], [0.22383464684219176, 0.6776362279674177, 
0.9875967284221705], [0.21638824917772453, 0.6947290781287391, 0.9819950303547987], [0.20855606865385165, 0.7102371206640654, 0.9750710310676618], [0.20038059410110376, 0.7235758485222823, 0.9668315591455693], [0.19193351310774223, 0.734519649068058, 0.957382599029186], [0.1833340368744243, 0.7435006401649751, 0.9470095683956252], [0.17469783713168363, 0.7510390693547768, 0.9360167776775004], [0.16613077624888986, 0.7576720124478675, 0.9247085373074251], [0.15772857678376886, 0.763950628076615, 0.9133891577180121], [0.14957672814523254, 0.7704390486819581, 0.9023629493418752], [0.14175063036952132, 0.7777150132050495, 0.8919342226116274], [0.1343159750096613, 0.7863723484029234, 0.8824072879598819], [0.12732970449680675, 0.797012529379906, 0.8740811995081446], [0.12084976147141473, 0.8099384376963948, 0.8671237357742304], [0.11493975728845535, 0.8251268658709435, 0.8615653598949594], [0.10966332888119507, 0.8425408262745097, 0.857430069974358], [0.1050853463044867, 0.8547418641164511, 0.8473403551838433], [0.10127356192382456, 0.8535247404252649, 0.8231538838024755], [0.09830031072117495, 0.8538026970048249, 0.7998509537157463], [0.09624426171757831, 0.8555997319591567, 0.7775291500837249], [0.09519222051252364, 0.8589398433922859, 0.7562951678279484], [0.09523466033004026, 0.8638301588380453, 0.736254113762426], [0.09640562817698234, 0.8700971873969563, 0.7173867121266534], [0.09870775682645283, 0.8774578722514202, 0.6996232876010631], [0.10214964316230982, 0.8856276192531288, 0.6829345377608849], [0.10674326864505639, 0.8943218342537755, 0.6673309599875173], [0.1125007767162798, 0.9032559231050521, 0.6528591156909103], [0.1194311818691091, 0.9121452916586508, 0.6395960113839659], [0.12753701038469437, 0.9207053457662641, 0.6276416598987021], [0.13681087273470835, 0.9286514912795846, 0.6171099273803513], [0.1472273301533093, 0.9357451866752111, 0.608020030012468], [0.1587227785675077, 0.9419996156076766, 0.5998477287978358], [0.17121964482967164, 
0.9475135374443872, 0.5918796184356399], [0.18463807709930177, 0.952385782298133, 0.5834130936511204], [0.19889759168666765, 0.9567151802817035, 0.5737737807193738], [0.21391852693235652, 0.9606005615078892, 0.562329072946783], [0.22962331181780002, 0.9641407560894797, 0.5484979436976313], [0.24593754930678074, 0.9674345941392647, 0.5317571316448257], [0.26279082355117567, 0.9705808834297661, 0.511644132353509], [0.2800068036776241, 0.9736514082827111, 0.48808838425690876], [0.2970834223782295, 0.9766384622071741, 0.4619539474027109], [0.31345165544585374, 0.9795197960248092, 0.4340817805575162], [0.3285349032795682, 0.982273160557271, 0.40507369070122135], [0.3417498926656193, 0.9848763066262134, 0.3752866956776463], [0.36016989144758, 0.9873069850532904, 0.35250783102398103], [0.40675615393016756, 0.9895429466601561, 0.36021581312049306], [0.44700613398058514, 0.9915619422684648, 0.3642784802445829], [0.4800453603427492, 0.9933416833596941, 0.3641065514694242], [0.5059172225697819, 0.9948570776317582, 0.3595849281383], [0.5260321807100684, 0.996078410770516, 0.35138497368822147], [0.5417361711468346, 0.9969755367452103, 0.340261882906466], [0.5542251246559543, 0.9975183095250835, 0.3269807394030567], [0.564594836019787, 0.9976765830793782, 0.3123146193759361], [0.5738773907037504, 0.9974202113773375, 0.29704222917768014], [0.5830638840476217, 0.9967190483882035, 0.28194507668375485], [0.593113257901611, 0.9955429480812196, 0.2678041764623123], [0.6049173329252986, 0.9938631942887316, 0.2553741911685384], [0.618688941513129, 0.9916814407117477, 0.24493286081566687], [0.6340191489156332, 0.9890283751100516, 0.23630248008060123], [0.6504815845022238, 0.9859358477750884, 0.22928282597582322], [0.667664607259613, 0.9824357089983022, 0.2236706549395957], [0.6851775225926456, 0.9785598090711378, 0.21926108516565482], [0.7026554225327669, 0.9743399982850405, 0.21584884718485342], [0.7197626647068687, 0.9698081269314549, 0.21322940269875598], [0.7361950207876023, 
0.9649960453018253, 0.21119993166518547], [0.7517045578243673, 0.9599385848130461, 0.20957252075787536], [0.7662987145771546, 0.9546988900091035, 0.20827881408159643], [0.7801235262514455, 0.9493557514336002, 0.20731561723168843], [0.793333881434332, 0.9439881244434671, 0.2066805054707251], [0.8060910507633685, 0.9386749643956355, 0.20637143421171145], [0.818561771106568, 0.9334952266470367, 0.20638701190028513], [0.8309175422374003, 0.9285278665546012, 0.20672674995777193], [0.8433341375472628, 0.9238518394752603, 0.2073912897850912], [0.8559913280771064, 0.9195461007659449, 0.20838260682751408], [0.8690561732865327, 0.9156936485066011, 0.20969336805791894], [0.8826222717866333, 0.9123973799376971, 0.21126511049859706], [0.8967754117130945, 0.9097663045485556, 0.21302599542691433], [0.9079094334716317, 0.9041987442580517, 0.21490680311671295], [0.9069357778393816, 0.886592094707027, 0.21684014643856891], [0.9069543487842606, 0.870013975207323, 0.21875944038476458], [0.9080741574387239, 0.8545231793863904, 0.22059763194219112], [0.910404214935227, 0.8401608509928674, 0.22228569031318043], [0.9140533609890943, 0.8269505724832292, 0.22375088679927074], [0.9190348675863581, 0.8148283391986565, 0.22493134638188253], [0.9251076558303858, 0.8035443408635032, 0.22580368730895622], [0.9319887969388863, 0.7928444208549985, 0.2263494470448247], [0.9393953621295686, 0.7825040250835419, 0.2265502781157535], [0.9470444226201423, 0.7723286488430519, 0.2263898716540945], [0.9546530496283158, 0.762153889073118, 0.22585549404541186], [0.9619383143717981, 0.7518450142735859, 0.2249391366785828], [0.9686172880682985, 0.7412959626121468, 0.22363827879886775], [0.9744096708096767, 0.730428526919429, 0.22195619461848926], [0.9791805893769406, 0.7192323214790531, 0.21989751414051684], [0.9830137772099842, 0.707766047135114, 0.21746224880159565], [0.9860109991965004, 0.6960819371591295, 0.21464892131240673], [0.9882740202241823, 0.6842207640296426, 0.21145507809451805], 
[0.9899046051807234, 0.6722129383813515, 0.2078773477615485], [0.9910045189538177, 0.660079441906903, 0.203911434685365], [0.9916755264311576, 0.6478325847934217, 0.19955204764730958], [0.9920193925004372, 0.6354765794026785, 0.19479276357445865], [0.9921346225982788, 0.6230072587408348, 0.18962791161850712], [0.9920602459326743, 0.6104024959410064, 0.1840886756924244], [0.9917833282948878, 0.5976302231717846, 0.17823961163192695], [0.9912891871817501, 0.584660910964709, 0.17214700759084267], [0.9905631400900912, 0.5714677122588908, 0.16587767290499006], [0.9895905045167414, 0.5580263006318681, 0.15949881685675113], [0.9883565979585309, 0.5443147998853488, 0.15307790816828803], [0.9868467379122904, 0.5303138044684916, 0.14668251522340267], [0.9850462418748495, 0.5160064896870945, 0.14038012701804065], [0.9829415325780593, 0.5013871258172927, 0.13423136665239896], [0.9805283806463996, 0.48652287501129976, 0.12824076784310964], [0.9778072570671092, 0.4715174832781621, 0.12238470698953774], [0.9747786684299458, 0.45647541200570035, 0.11663935572331047], [0.9714431213246664, 0.44150112246113676, 0.11098098205678426], [0.9678011223410284, 0.42669869332027743, 0.10538603876578982], [0.9638531780687892, 0.4121715013388048, 0.09983125148265794], [0.9595997950977064, 0.3980219636495409, 0.09429370649952155], [0.955041480017537, 0.38435133958413126, 0.08875093828190032], [0.9501776477067925, 0.3712391645206384, 0.08319244458709686], [0.945002869918086, 0.35867340216680627, 0.07765856363342656], [0.939510375845868, 0.34661657642435645, 0.07220291355950477], [0.9336933946348576, 0.33503080017448555, 0.06687804720630672], [0.927545155429773, 0.3238776561046621, 0.061735278666316916], [0.9210588873753326, 0.3131181839572251, 0.056824503280264636], [0.9142278196162553, 0.30271297234473005, 0.05219401055843878], [0.9070451812972594, 0.2926223514744295, 0.04789029002658525], [0.8995042852162572, 0.2828067015124927, 0.043957748795836256], [0.891624412799365, 0.27323253076797877, 
0.04041351972303193], [0.8834874722688015, 0.263879251265467, 0.03721289045871814], [0.8751846232207746, 0.254725704246588, 0.03430224569442672], [0.8668070252514929, 0.24574829852791777, 0.03162849539198234], [0.8584458379571641, 0.23692126221807738, 0.029139373140852975], [0.8501922209339968, 0.22821677001517918, 0.026783627401313844], [0.842137333778199, 0.2196049425900127, 0.024511105633431018], [0.8343723360859792, 0.21105371449732208, 0.022272731311857318], [0.8269868884473934, 0.20252933155631014, 0.02002124877275106], [0.8200045690445795, 0.19402936800454978, 0.017748501028958072], [0.8133583005373909, 0.18559748229672426, 0.015498635760688948], [0.8069744374385174, 0.17728209148881974, 0.013318655178400935], [0.8007793342606484, 0.16913337148852742, 0.0112547441788015], [0.7946993455164736, 0.16120340729781465, 0.009352359692418623], [0.788660825718683, 0.15354617809025697, 0.00765622813291652], [0.782590129379966, 0.14621737391166023, 0.0062102509481560386], [0.7764136110130129, 0.1392740409365317, 0.005057318272995783], [0.7700620750740421, 0.13276939181212588, 0.004236488419270876], [0.7635366882039091, 0.12668307615009333, 0.003745345772828322], [0.7568947816167044, 0.12093573368910132, 0.003549296943119582], [0.7501952516603297, 0.11544669842075488, 0.0036134540477882945], [0.7434969946826867, 0.11013619050381865, 0.0039037799561516488], [0.7368589070316779, 0.10492567700866447, 0.004387290052200903], [0.7303398850552048, 0.09973804029508934, 0.0050322105475879364], [0.7239988251011695, 0.09449755619814312, 0.005808093344599986], [0.7178946235174741, 0.08912968460872006, 0.006685887449122245], [0.7120861766520206, 0.08356067512355873, 0.007637966933587637], [0.7066323808527106, 0.07771699020297941, 0.008638115449914247], [0.7015921324674467, 0.07152454771412878, 0.009661467292431428], [0.6970243278441304, 0.06490778385274144, 0.01068440501079227], [0.6929878633306636, 0.05778853622743917, 0.011684413572873753], [0.6895416352749486, 
0.050084745357375754, 0.012639891077665865], [0.6867445400248873, 0.041708970976610964, 0.013529916018146956], [0.6846554739283816, 0.032566717356936344, 0.014333971094147597], [0.6833333333333333, 0.022554559355018072, 0.01503162357520156], [0.4796, 0.01583, 0.010550000000000002]] 25 | 26 | # ============================================================================= 27 | # Step 1: Sampling the target density function 28 | # ============================================================================= 29 | 30 | # This function allows us to sample the spiral-shaped target density function 31 | def sample_spiral_distribution(size): 32 | 33 | # First draw some rotation samples from a beta distribution, then scale 34 | # them to the range between -pi and +2pi 35 | seeds = scipy.stats.beta.rvs( 36 | a = 4, 37 | b = 3, 38 | size = size)*3*np.pi-np.pi 39 | 40 | # Create a local copy of the rotations 41 | seeds_orig = copy.copy(seeds) 42 | 43 | # Re-normalize the rotations, then scale them to the range between [-3,+3] 44 | vals = (seeds+np.pi)/(3*np.pi)*6-3 45 | 46 | # Plot the rotation samples on a straight spiral 47 | X = np.column_stack(( 48 | np.cos(seeds)[:,np.newaxis], 49 | np.sin(seeds)[:,np.newaxis]))*((1+seeds+np.pi)/(3*np.pi)*5)[:,np.newaxis] 50 | 51 | # Offset each sample along the spiral's normal vector by scaled Gaussian 52 | # noise 53 | X += np.column_stack([ 54 | np.cos(seeds_orig), 55 | np.sin(seeds_orig)])*(scipy.stats.norm.rvs(size=size)*scipy.stats.norm.pdf(vals))[:,np.newaxis] 56 | 57 | return X/2 58 | 59 | # Define the ensemble size 60 | N = 10000 61 | 62 | # Draw that many samples 63 | X = sample_spiral_distribution(N) 64 | 65 | #%% 66 | # ============================================================================= 67 | # Define the transport map parameterization 68 | # ============================================================================= 69 | 70 | # Next, we define the map component functions used in the triangular transport 71 | # map. 
The map definition requires two lists of lists: one for the monotone 72 | # part (basis functions which do depend on the last argument) and one for the 73 | # nonmonotone part (basis functions which do not depend on the last argument). 74 | # Each entry in those lists is another list that defines the basis functions. 75 | # Polynomial basis functions are lists of integers, with a potential keyword 76 | # such as 'HF' appended to mark it as a Hermite function. RBFs or related basis 77 | # functions are defined as strings such as 'RBF 0' or 'iRBF 7'. 78 | # 79 | # Example: -------------------------------------------------------------------- 80 | # 81 | # monotone = [ 82 | # [ [0] ], 83 | # [ [1], [0,0,1,'HF'] ] ] 84 | # nonmonotone = [ 85 | # [ [] ], 86 | # [ [], [0], [0,0], [0,0,'HF'], 'RBF 0'] ] 87 | # 88 | # Explanation: ---------------------------------------------------------------- 89 | # 90 | # Monotone [list] 91 | # | 92 | # └―― Map component 1 [list] (last argument: entry x_{0}) 93 | # | | 94 | # | └―― [0] Basis function 1 (linear term for entry 0) 95 | # | 96 | # └―― Map component 2 [list] (last argument: entry x_{1}) 97 | # | 98 | # └―― [1] Basis function 1 (linear term for entry 1) 99 | # | 100 | # └―― [0,0,1,'HF'] Basis function 2 (cross-term: quadratic Hermite function for entry 0, linear Hermite function for entry 1) 101 | # 102 | # Nonmonotone [list] 103 | # | 104 | # └―― Map component 1 [list] (valid arguments: constant) 105 | # | | 106 | # | └―― [] Basis function 1 (constant term) 107 | # | 108 | # └―― Map component 2 [list] (valid arguments: constant, x_{0}) 109 | # | 110 | # └―― [] Basis function 1 (constant term) 111 | # | 112 | # └―― [0] Basis function 2 (linear term for entry x_{0}) 113 | # | 114 | # └―― [0,0] Basis function 3 (quadratic term for entry x_{0}) 115 | # | 116 | # └―― [0,0,'HF'] Basis function 4 (quadratic Hermite function for entry x_{0}) 117 | # | 118 | # └―― 'RBF 0' Basis function 5 (radial basis function for entry x_{0}) 119 | 
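As a sanity check on this encoding, the sketch below evaluates the plain-polynomial reading of a multi-index list. The helper is hypothetical (not part of the toolbox, which builds these terms internally), and an appended 'HF' keyword would swap the monomials for Hermite functions:

```python
import numpy as np

def monomial_from_multi_index(multi_index, x):
    """Evaluate the plain polynomial term encoded by a multi-index list.

    Each integer names one input dimension; repeats raise the power.
    For example, [0, 0, 1] encodes x_0**2 * x_1, and [] encodes the
    constant term 1. (Hypothetical helper, for illustration only.)
    """
    result = np.ones_like(x[..., 0], dtype=float)
    for dim in multi_index:
        result *= x[..., dim]
    return result

x = np.array([2.0, 3.0])                         # x_0 = 2, x_1 = 3
print(monomial_from_multi_index([0, 0, 1], x))   # x_0^2 * x_1 = 12.0
print(monomial_from_multi_index([], x))          # constant term = 1.0
```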
120 | # Create empty lists for the map component specifications 121 | monotone = [] 122 | nonmonotone = [] 123 | 124 | # The spiral distribution is a very challenging target density function. We 125 | # require polynomial basis terms up to order 10 126 | maxorder = 10 127 | 128 | # For complex maps, it is often better to write functions which assemble the 129 | # map component specifications automatically 130 | for k in range(2): 131 | 132 | # Level 1: Add an empty list entry for each map component function 133 | monotone.append([]) 134 | nonmonotone.append([]) # An empty list "[]" denotes a constant 135 | 136 | # Level 2: We initiate the nonmonotone terms with a constant 137 | nonmonotone[-1].append([]) 138 | 139 | # Go through every polynomial order 140 | for order in range(maxorder): 141 | 142 | # Nonmonotone part ---------------------------------------------------- 143 | 144 | # We only have non-constant nonmonotone terms past the first map 145 | # component, and we already added the constant term earlier, so only do 146 | # this for the second map component function (k > 0). 147 | if k > 0: 148 | 149 | # Level 2: Specify the polynomial order of this term; 150 | # It's a Hermite function term 151 | nonmonotone[-1].append([k-1]*(order+1)+['HF']) 152 | 153 | # Monotone part ------------------------------------------------------- 154 | 155 | # For the monotone part, we consider cross-terms. 
This list creates all 156 | # possible combinations for this map component up to total order 10 157 | comblist = list(itertools.combinations_with_replacement( 158 | np.arange(k+1), 159 | order+1)) 160 | 161 | # Go through each of these combinations 162 | for entry in comblist: 163 | 164 | # Create a transport map component term 165 | if k in entry: 166 | 167 | # Level 2: Specify the polynomial order of this term; 168 | # It's a Hermite function 169 | monotone[-1].append( 170 | list(entry)+['HF']) 171 | 172 | #%% 173 | # ============================================================================= 174 | # Create the transport map object 175 | # ============================================================================= 176 | 177 | """ 178 | Up to this point, the file is identical to the full map example file. Here is 179 | where the differences begin. Instead of defining the full map, we are now only 180 | defining a partial map. This map only defines the lower block of the map. If we 181 | seek to implement a conditioning operation, we only need this lower block, and 182 | never have to define or optimize the full map. 183 | 184 | As a consequence, we will now remove the first map components from the map 185 | parameterization lists (nonmonotone, monotone). 186 | """ 187 | 188 | # For the partial map, just remove the unnecessary map parameterization files. 189 | # If the number of map component functions (now: 1) does not match the target 190 | # sample dimensions (here: 2), the transport map object automatically assumes 191 | # the user has specified the lower map block only. 192 | nonmonotone = nonmonotone[1:] # Discard the first map component 193 | monotone = monotone[1:] # Discard the first map component 194 | 195 | # With the map parameterization (nonmonotone, monotone) defined and the target 196 | # samples (X) obtained, we can start creating the transport map object. 197 | 198 | # To begin, delete any map object which might already exist. 
199 | if "tm" in globals(): 200 | del tm 201 | 202 | # Create the transport map object tm 203 | tm = transport_map( 204 | monotone = monotone, # Specify the monotone parts of the map component function 205 | nonmonotone = nonmonotone, # Specify the nonmonotone parts of the map component function 206 | X = X, # An N-by-D matrix of training samples (N = ensemble size, D = variable space dimension) 207 | polynomial_type = "hermite function", # What types of polynomials did we specify? The 'hermite function' option here uses re-scaled probabilist's Hermites, to avoid numerical overflow for higher-order terms 208 | monotonicity = "integrated rectifier", # Are we ensuring monotonicity through 'integrated rectifier' or 'separable monotonicity'? 209 | standardize_samples = True, # Standardize the training ensemble X? Should always be True 210 | workers = 1, # Number of workers for the parallel optimization. 211 | quadrature_input = { # Keywords for the Gaussian quadrature used for integration 212 | 'order' : 25, 213 | 'adaptive' : False, 214 | 'threshold' : 1E-9, 215 | 'verbose' : False, 216 | 'increment' : 6}) 217 | 218 | # Since this map is very complex, optimizing it is a very demanding task, and 219 | # can take up to 10 minutes of computation. Most maps are drastically cheaper, 220 | # often taking only fractions of a second to optimize. To make this example a 221 | # bit faster, we have stored the pre-optimized output coefficients as a Python 222 | # pickle. If you want to redo the optimization yourself, delete the file named 223 | # "dict_coeffs_order=10_partial.p" in the working directory. 224 | if 'dict_coeffs_order='+str(maxorder)+'_partial.p' not in os.listdir(root_directory): 225 | 226 | print('No pre-optimized coefficients found. Optimizing...') 227 | 228 | # Optimize the transport map. This takes a while; it's an extremely 229 | # complicated map.
230 | tm.optimize() 231 | 232 | # Store the coefficients in a dictionary 233 | dict_coeffs = { 234 | 'coeffs_mon' : tm.coeffs_mon, 235 | 'coeffs_nonmon' : tm.coeffs_nonmon} 236 | 237 | # Save the dictionary 238 | pickle.dump(dict_coeffs,open('dict_coeffs_order='+str(maxorder)+'_partial.p','wb')) 239 | 240 | else: 241 | 242 | print('Pre-optimized coefficients found. Extracting...') 243 | 244 | # Load the previous coefficients 245 | dict_coeffs = pickle.load(open('dict_coeffs_order='+str(maxorder)+'_partial.p','rb')) 246 | 247 | # Write the coefficients into the map object without optimization 248 | tm.coeffs_mon = copy.copy(dict_coeffs['coeffs_mon']) 249 | tm.coeffs_nonmon = copy.copy(dict_coeffs['coeffs_nonmon']) 250 | 251 | #%% 252 | # ============================================================================= 253 | # Apply the map 254 | # ============================================================================= 255 | 256 | # ----------------------------------------------------------------------------- 257 | # Example 1: conditional map from the reference to a target conditional 258 | # ----------------------------------------------------------------------------- 259 | 260 | # With partial maps, we can still evaluate conditionals of the target density 261 | # the same way as with the full map. 262 | 263 | # Define the values we want to condition on. Here, we want to condition on the 264 | # value $x_{1}^{*} = 0.6$. To do so, we generate duplicates of this value 265 | # for all sample realizations, arranged into an N-by-D array. Feel free to play 266 | # around with different conditional slices. 267 | X_star = np.ones((N,1))*0.6 268 | 269 | # Note that we can condition each sample on an individual value, if we so choose. 270 | # In this case, X_star would not be an array of duplicate values.
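One useful variant of per-sample conditioning is a sweep: a hedged, toolbox-independent sketch (plain NumPy; the grid range is an arbitrary choice for illustration) that assigns each realization its own slice position instead of a single fixed value:

```python
import numpy as np

N = 10000

# Sweep the conditioning value over a grid instead of fixing x1* = 0.6:
# each realization is assigned its own slice position. Passing this array
# as X_star to tm.inverse_map would trace how the conditional of the
# second dimension varies with x1*.
X_star_sweep = np.linspace(-2.0, 2.0, N)[:, np.newaxis]

assert X_star_sweep.shape == (N, 1)
```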
271 | 272 | # Let's generate new reference samples for the second dimension $z_{2}$ 273 | Z_marg = scipy.stats.norm.rvs(loc=0,scale=1,size=(N,1)) 274 | 275 | # We can evaluate the inverse map with the following command: 276 | X_cond = tm.inverse_map(X_star = X_star, Z = Z_marg) 277 | 278 | # Now plot the results 279 | plt.figure(figsize=(8.5,7)) 280 | gs = matplotlib.gridspec.GridSpec( 281 | nrows = 1, 282 | ncols = 2, 283 | width_ratios = [1.0,0.2], 284 | wspace = 0.) 285 | plt.subplot(gs[0,0]) 286 | plt.scatter(X[:,0],X[:,1],s=1,color='xkcd:silver',label='target samples',zorder = 5) 287 | plt.scatter(X_star,X_cond[:,0],s=1,color='xkcd:orangish red',label='conditional samples', zorder = 10, alpha = 0.01) 288 | plt.xlabel('dimension 1 ($x_{1}$ or $x_{1}^{*}$)') 289 | plt.ylabel('dimension 2 ($x_{2}$ or $x_{2}^{*}$)') 290 | plt.title('conditional map $\eta$ to $\pi^{*}$') 291 | plt.xlim(-3,3) 292 | plt.ylim(-3,3) 293 | plt.legend() 294 | plt.subplot(gs[0,1]) 295 | plt.hist(X_cond[:,0],orientation='horizontal',bins=30,color='xkcd:orangish red') 296 | plt.ylim(-3,3) 297 | plt.savefig('output_conditional_map.png') 298 | plt.show() 299 | 300 | # ----------------------------------------------------------------------------- 301 | # Example 2: composite conditional map from the target to a target conditional 302 | # ----------------------------------------------------------------------------- 303 | 304 | # The composite map evaluation likewise remains almost identical. The only 305 | # difference to the full map case is that the tm.map(X) application no longer 306 | # returns two-dimensional output, but only output for the lower block. In 307 | # consequence, we don't have to actively extract the last Z marginal (Z_marg). 308 | # Everything else remains the same. 309 | 310 | # Once more, let us define the values we want to condition. 
311 | X_star = np.ones((N,1))*0.6 312 | 313 | # Instead of sampling the standard Gaussian, we now generate our reference 314 | # samples from a forward application of the map. First, apply the forward map: 315 | Z_marg = tm.map(X) 316 | 317 | # Applying the inverse map again with these map-generated reference samples, we 318 | # effectively implement a composite map: 319 | X_cond = tm.inverse_map(X_star = X_star, Z = Z_marg) 320 | 321 | # Now plot the results 322 | plt.figure(figsize=(8.5,7)) 323 | gs = matplotlib.gridspec.GridSpec( 324 | nrows = 1, 325 | ncols = 2, 326 | width_ratios = [1.0,0.2], 327 | wspace = 0.) 328 | plt.subplot(gs[0,0]) 329 | plt.scatter(X[:,0],X[:,1],s=1,color='xkcd:silver',label='target samples',zorder = 5) 330 | plt.scatter(X_star,X_cond[:,0],s=1,color='xkcd:orangish red',label='conditional samples', zorder = 10, alpha = 0.01) 331 | plt.xlabel('dimension 1 ($x_{1}$ or $x_{1}^{*}$)') 332 | plt.ylabel('dimension 2 ($x_{2}$ or $x_{2}^{*}$)') 333 | plt.title('composite map $\pi$ to $\pi^{*}$') 334 | plt.xlim(-3,3) 335 | plt.ylim(-3,3) 336 | plt.legend() 337 | plt.subplot(gs[0,1]) 338 | plt.hist(X_cond[:,0],orientation='horizontal',bins=30,color='xkcd:orangish red') 339 | plt.ylim(-3,3) 340 | plt.savefig('output_composite_map.png') 341 | plt.show() 342 | -------------------------------------------------------------------------------- /Examples A - spiral distribution/Example 02 - partial map/output_composite_map.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples A - spiral distribution/Example 02 - partial map/output_composite_map.png -------------------------------------------------------------------------------- /Examples A - spiral distribution/Example 02 - partial map/output_conditional_map.png: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples A - spiral distribution/Example 02 - partial map/output_conditional_map.png -------------------------------------------------------------------------------- /Examples B - statistical inference/Example 03 - average temperature data/conditional_temperatures_Moscow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples B - statistical inference/Example 03 - average temperature data/conditional_temperatures_Moscow.png -------------------------------------------------------------------------------- /Examples B - statistical inference/Example 03 - average temperature data/example_03.py: -------------------------------------------------------------------------------- 1 | # Load a number of libraries required for this exercise 2 | import scipy.stats 3 | import numpy as np 4 | import matplotlib 5 | import matplotlib.pyplot as plt 6 | import copy 7 | import itertools 8 | import os 9 | import pickle 10 | import re 11 | 12 | # Load in the transport map class 13 | from transport_map import * 14 | 15 | # Find the current working directory 16 | root_directory = os.path.dirname(os.path.realpath(__file__)) 17 | 18 | # Set a random seed 19 | np.random.seed(0) 20 | 21 | # Close all open figures 22 | plt.close('all') 23 | 24 | # ============================================================================= 25 | # Step 1: Preparing the data 26 | # ============================================================================= 27 | 28 | # For this example, we use daily average temperature data in Munich, Germany, 29 | # and Moscow, Russia, to demonstrate a data-based example for statistical 30 | # inference. 
The data sets we used are extracted from here: 31 | # https://academic.udayton.edu/kissock/http/Weather/default.htm 32 | 33 | # Let's load in the data for Munich 34 | with open('DLMUNICH.txt') as text_file: 35 | lines = text_file.read().splitlines() 36 | 37 | # Go through all lines, store the date and the daily average temperature 38 | data_Munich = [] 39 | times = [] 40 | for line in lines: 41 | chunks = line.split() 42 | data_Munich .append(float(chunks[-1])) 43 | times .append(chunks[2]+'-'+chunks[1]+'-'+chunks[0]) 44 | 45 | # Now let's load the data for Moscow 46 | with open('RSMOSCOW.txt') as text_file: 47 | lines = text_file.read().splitlines() 48 | 49 | # Likewise go through every line of the text file. Only add a data point if we 50 | # have an average temperature for both Munich and Moscow on the same date. 51 | data = [] 52 | 53 | # Go through every line of the Moscow data file 54 | for line in lines: 55 | 56 | # Split the string 57 | chunks = line.split() 58 | 59 | # Do we have a temperature data point for Munich? 60 | if chunks[2]+'-'+chunks[1]+'-'+chunks[0] in times: 61 | 62 | # In which index is that data point? 63 | idx = times.index(chunks[2]+'-'+chunks[1]+'-'+chunks[0]) 64 | 65 | # Are both data points valid measurements, or NaNs? 
66 | if data_Munich[idx] > -99 and float(chunks[-1]) > -99: 67 | 68 | # If yes, append a new data point 69 | data .append([data_Munich[idx],float(chunks[-1])]) 70 | 71 | # Convert the list of lists into an N-by-2 array, with temperatures in Munich in 72 | # the first column, and temperatures for Moscow in the second column 73 | data = np.asarray(data) 74 | 75 | # Convert the temperatures from Fahrenheit to Celsius 76 | data = (data - 32) * .5556 77 | 78 | # These data points constitute samples from our target density function 79 | X = data 80 | 81 | # Plot the data 82 | plt.figure(figsize=(7,7)) 83 | plt.scatter(X[:,0],X[:,1],s=1,color='xkcd:grey',label='real data') 84 | plt.xlabel('daily average temperature in Munich, Germany (°C)') 85 | plt.ylabel('daily average temperature in Moscow, Russia (°C)') 86 | plt.legend() 87 | plt.savefig('temperature_data.png') 88 | 89 | 90 | #%% 91 | # ============================================================================= 92 | # Define the transport map parameterization 93 | # ============================================================================= 94 | 95 | # Once more, we have to define the map parameterization. We have seen this in 96 | # earlier examples. 97 | 98 | # Create empty lists for the map component specifications 99 | monotone = [] 100 | nonmonotone = [] 101 | 102 | # Change the maxorder, and see how the map approximation function changes 103 | maxorder = 10 104 | 105 | # Here, we try a different form of map parameterization. Let's try using maps 106 | # with separable monotonicity. These are often much more efficient, but do not 107 | # allow for cross-terms or nonmonotone basis functions in the 'monotone' list.
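The term-list notation used in these 'monotone'/'nonmonotone' specifications can be illustrated in plain Python. This is a sketch of an assumed reading based on the comments in these examples: a list of variable indices denotes a product of those variables, a repeated index denotes a higher power, and a trailing string such as 'HF' marks the term as a Hermite function:

```python
# Build the nonmonotone term list for one map component (k = 1),
# mirroring the construction loop in this example for maxorder = 3
maxorder = 3
k = 1

nonmonotone_k = [[]]  # an empty list denotes the constant term
for order in range(maxorder):
    # [k-1]*(order+1) repeats index k-1 (order+1) times, i.e. a term in
    # x_{k-1} of polynomial order (order+1); 'HF' selects Hermite functions
    nonmonotone_k.append([k - 1] * (order + 1) + ['HF'])

print(nonmonotone_k)
# → [[], [0, 'HF'], [0, 0, 'HF'], [0, 0, 0, 'HF']]
```

This only demonstrates how the nested lists are assembled; the actual interpretation of each entry is up to the transport_map class.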
108 | for k in range(X.shape[-1]): 109 | 110 | # Level 1: Add an empty list entry for each map component function 111 | monotone.append([]) 112 | nonmonotone.append([]) # An empty list "[]" denotes a constant 113 | 114 | # Level 2: We initialize the nonmonotone terms with a constant 115 | nonmonotone[-1].append([]) 116 | 117 | # Nonmonotone part -------------------------------------------------------- 118 | 119 | # Go through every polynomial order 120 | for order in range(maxorder): 121 | 122 | # We only have non-constant nonmonotone terms past the first map 123 | # component, and we already added the constant term earlier, so only do 124 | # this for the second map component function (k > 0). 125 | if k > 0: 126 | 127 | # The nonmonotone basis functions can be as nonmonotone as we want. 128 | # Hermite functions are generally a good choice. 129 | nonmonotone[-1].append([k-1]*(order+1)+['HF']) 130 | 131 | # Monotone part ----------------------------------------------------------- 132 | 133 | # Let's get more fancy with the monotone part this time. If the order we 134 | # specified is one, then use a linear term. Otherwise, use a few monotone 135 | # special functions: Left edge terms, integrated radial basis functions, 136 | # and right edge terms 137 | 138 | # The specified order is one 139 | if maxorder == 1: 140 | 141 | # Then just add a linear term 142 | monotone[-1].append([k]) 143 | 144 | # Otherwise, the order is greater than one. Let's use special terms. 145 | else: 146 | 147 | # Add a left edge term. The order of addition matters for these 148 | # special terms: while they are placed according to marginal quantiles, 149 | # they are placed from left to right, so the left edge term comes first.
150 | monotone[-1].append('LET '+str(k)) 151 | 152 | # Let's only add maxorder-1 iRBFs 153 | for order in range(maxorder-1): 154 | 155 | # Add an integrated radial basis function 156 | monotone[-1].append('iRBF '+str(k)) 157 | 158 | # Then add a right edge term 159 | monotone[-1].append('RET '+str(k)) 160 | 161 | 162 | #%% 163 | # ============================================================================= 164 | # Create the transport map object 165 | # ============================================================================= 166 | 167 | # With the map parameterization (nonmonotone, monotone) defined and the target 168 | # samples (X) obtained, we can start creating the transport map object. 169 | 170 | # To begin, delete any map object which might already exist. 171 | if "tm" in globals(): 172 | del tm 173 | 174 | # Create the transport map object tm 175 | tm = transport_map( 176 | monotone = monotone, # Specify the monotone parts of the map component function 177 | nonmonotone = nonmonotone, # Specify the nonmonotone parts of the map component function 178 | X = X, # A N-by-D matrix of training samples (N = ensemble size, D = variable space dimension) 179 | polynomial_type = "hermite function", # What types of polynomials did we specify? The 'hermite function' option here uses re-scaled probabilist's Hermites, to avoid numerical overflow for higher-order terms 180 | monotonicity = "separable monotonicity", # Are we ensuring monotonicity through 'integrated rectifier' or 'separable monotonicity'? 181 | standardize_samples = True, # Standardize the training ensemble X? Should always be True 182 | verbose = True, # Shall we print the map's progress? 183 | workers = 1) # Number of workers for the parallel optimization 184 | 185 | 186 | # This map is cheap to optimize.
187 | tm.optimize() 188 | 189 | 190 | #%% 191 | # ============================================================================= 192 | # Generative modelling 193 | # ============================================================================= 194 | 195 | # Let's do something interesting. Let's generate some more fake pairs of daily 196 | # temperatures between Munich and Moscow. By learning the target distribution 197 | # underlying the data set, we can use the inverse transport map to generate 198 | # new, approximate realizations of the target. 199 | 200 | # Let's start by drawing new samples from the standard Gaussian reference. 201 | Z = scipy.stats.norm.rvs(size=(1000,2)) 202 | 203 | # Now pull these samples back to the target 204 | X_gen = tm.inverse_map(Z) 205 | 206 | # Plot the results 207 | plt.figure(figsize=(7,7)) 208 | plt.scatter(X[:,0],X[:,1],s=1,color='xkcd:grey',label='real data') 209 | plt.scatter(X_gen[:,0],X_gen[:,1],s=2,color='xkcd:orangish red',label='generated data') 210 | plt.xlabel('daily average temperature in Munich, Germany (°C)') 211 | plt.ylabel('daily average temperature in Moscow, Russia (°C)') 212 | plt.legend() 213 | plt.savefig('generative_modeling.png') 214 | 215 | 216 | #%% 217 | # ============================================================================= 218 | # Conditional sampling 219 | # ============================================================================= 220 | 221 | # We can also use transport maps to answer statistical questions about the 222 | # joint target density function. For example, we might wonder what temperatures 223 | # we can expect in Moscow if temperatures in Munich are at a certain value. 224 | # This requires conditional sampling. 225 | 226 | # Let's start by drawing new samples from the standard Gaussian reference.
As 227 | # in the earlier exercises, we only need reference samples for the second 228 | # dimension, so these samples will be a column vector rather than a matrix with 229 | # two columns. 230 | Z = scipy.stats.norm.rvs(size=(1000,1)) 231 | 232 | # For what temperatures in Munich do we want to investigate the temperatures in 233 | # Moscow? Feel free to adjust this value 234 | X_star = np.ones((1000,1))*(-5) # Let's try -5 °C first. 235 | 236 | # Now pull these samples back to the target 237 | X_cond = tm.inverse_map(Z = Z, X_star = X_star) 238 | 239 | # Now plot the results 240 | plt.figure(figsize=(8.5,7)) 241 | gs = matplotlib.gridspec.GridSpec( 242 | nrows = 1, 243 | ncols = 2, 244 | width_ratios = [1.0,0.2], 245 | wspace = 0.) 246 | plt.subplot(gs[0,0]) 247 | plt.scatter(X[:,0],X[:,1],s=1,color='xkcd:grey',label='real data',zorder=5) 248 | plt.scatter(X_cond[:,0],X_cond[:,1],s=2,color='xkcd:orangish red',label='conditional temperatures in Moscow',zorder=10,alpha=0.1) 249 | plt.xlabel('daily average temperature in Munich, Germany (°C)') 250 | plt.ylabel('daily average temperature in Moscow, Russia (°C)') 251 | plt.legend() 252 | ylims = plt.gca().get_ylim() 253 | plt.subplot(gs[0,1]) 254 | plt.hist(X_cond[:,1],orientation='horizontal',bins=30,color='xkcd:orangish red') 255 | plt.ylim(ylims) 256 | plt.gca().yaxis.tick_right() 257 | plt.gca().set_ylabel('conditional temperature in Moscow') 258 | plt.gca().yaxis.set_label_position("right") 259 | plt.savefig('conditional_temperatures_Moscow.png') 260 | plt.show() 261 | 262 | 263 | 264 | 265 | -------------------------------------------------------------------------------- /Examples B - statistical inference/Example 03 - average temperature data/generative_modeling.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples B - statistical inference/Example 03 - 
average temperature data/generative_modeling.png -------------------------------------------------------------------------------- /Examples B - statistical inference/Example 03 - average temperature data/temperature_data.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples B - statistical inference/Example 03 - average temperature data/temperature_data.png -------------------------------------------------------------------------------- /Examples B - statistical inference/Example 04 - Monod kinetics/01_data.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples B - statistical inference/Example 04 - Monod kinetics/01_data.png -------------------------------------------------------------------------------- /Examples B - statistical inference/Example 04 - Monod kinetics/02_prior_simulations.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples B - statistical inference/Example 04 - Monod kinetics/02_prior_simulations.png -------------------------------------------------------------------------------- /Examples B - statistical inference/Example 04 - Monod kinetics/03_posterior_parameters.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples B - statistical inference/Example 04 - Monod kinetics/03_posterior_parameters.png -------------------------------------------------------------------------------- /Examples B - statistical inference/Example 04 - 
Monod kinetics/04_posterior_simulations.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples B - statistical inference/Example 04 - Monod kinetics/04_posterior_simulations.png -------------------------------------------------------------------------------- /Examples B - statistical inference/Example 04 - Monod kinetics/example_04.py: -------------------------------------------------------------------------------- 1 | # Load a number of libraries required for this exercise 2 | import scipy.stats 3 | import numpy as np 4 | import matplotlib 5 | import matplotlib.pyplot as plt 6 | import copy 7 | import itertools 8 | import os 9 | import pickle 10 | import re 11 | 12 | # Load in the transport map class 13 | from transport_map import * 14 | 15 | # Find the current working directory 16 | root_directory = os.path.dirname(os.path.realpath(__file__)) 17 | 18 | # Set a random seed 19 | np.random.seed(0) 20 | 21 | # Close all open figures 22 | plt.close('all') 23 | 24 | # ============================================================================= 25 | # Step 1: Load the data 26 | # ============================================================================= 27 | 28 | # In this example, we will show how to use maps for Bayesian inference. Our 29 | # first example considers parameter inference for Monod kinetics. This model 30 | # has two parameters we seek to learn: r_max and K 31 | # 32 | # r_max - maximum reaction rate 33 | # K - half-velocity constant 34 | # 35 | # In this example, we assume that we observe reaction rates for different 36 | # concentrations of substrate, and seek to find the parameters which best match 37 | # our observations. 
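The Monod rate law underlying this example predicts a reaction rate that rises with substrate concentration C and saturates at r_max, reaching exactly half of r_max at C = K. A minimal NumPy sketch, evaluated at the true parameter values this example later plots as 'truth' (r_max = 5, K = 2.4):

```python
import numpy as np

def monod_rate(r_max, K, C):
    # Monod kinetics: rate = r_max * C / (K + C)
    return r_max * C / (K + C)

# Evaluate over a range of substrate concentrations
C = np.linspace(0.5, 10.0, 20)
rate = monod_rate(r_max=5.0, K=2.4, C=C)

# At C = K, the rate is exactly half the maximum rate r_max
half_saturation_rate = monod_rate(5.0, 2.4, 2.4)
```

The rate curve is strictly increasing in C but never exceeds r_max, which is what makes the half-velocity constant K a natural second parameter to infer.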
38 | 39 | # To start off, let us load the data 40 | text_file = open("model_monod.dat", "r") 41 | lines = text_file.readlines() 42 | text_file.close() 43 | 44 | # Convert data into arrays 45 | C = np.zeros(len(lines)-1) # Concentrations for which we make observations 46 | obs_rate= np.zeros(len(lines)-1) # The observed rate 47 | for idx,line in enumerate(lines): # Go through each line string 48 | if idx > 0: # Ignore the first line - it's only headers 49 | # Split the string into parts, delimited by tabulators 50 | split_string = lines[idx].split('\t') 51 | # The second entry is the concentration, convert it from a string to a number 52 | C[idx-1] = float(split_string[1]) 53 | # The third entry is the observed rate, convert it from a string to a number 54 | obs_rate[idx-1] = float(split_string[2]) 55 | 56 | # Let's plot these results 57 | plt.figure(figsize=(7,7)) 58 | plt.title('observations') 59 | plt.plot(C,obs_rate,marker='x',color='xkcd:orangish red') 60 | plt.xlabel('substrate concentration') 61 | plt.ylabel('observed rate') 62 | plt.savefig('01_data.png') 63 | 64 | # ============================================================================= 65 | # Step 2: Sampling the prior 66 | # ============================================================================= 67 | 68 | # In the next step, let us define the prior for each of the parameters, then 69 | # draw samples from it. 70 | 71 | # Let's begin by defining the sample size we will use in these experiments 72 | N = 1000 73 | 74 | # Now draw N samples from the prior for each parameter 75 | r_max = np.exp(scipy.stats.norm.rvs(loc = 1.5, scale = 0.5, size = N)) 76 | K = np.exp(scipy.stats.norm.rvs(loc = 1.0, scale = 0.5, size = N)) 77 | 78 | # Let's see how well our prior fits the observations. 
This is the model for the 79 | # Monod kinetics: 80 | def model_monod(r_max,K,C): 81 | 82 | # Create an empty array for the simulated concentrations 83 | sim_rate = np.zeros((len(r_max),len(C))) 84 | 85 | # Simulate all samples 86 | for n in range(len(r_max)): 87 | sim_rate[n,:] = (r_max[n]*C)/(K[n]+C) 88 | 89 | return sim_rate 90 | 91 | # Run the simulations 92 | sim_rate = model_monod(r_max,K,C) 93 | 94 | # From these simulated rates, let's generate observation predictions. These add 95 | # the observation noise to the simulations 96 | pred_rate = np.zeros(sim_rate.shape) 97 | for n in range(N): 98 | pred_rate[n,:] = sim_rate[n,:] + scipy.stats.norm.rvs(scale=0.1,size=len(C)) 99 | 100 | # Let's plot these results 101 | plt.figure(figsize=(7,7)) 102 | for n in range(N): 103 | if n == 0: 104 | plt.plot(C,sim_rate[n,:],color='xkcd:silver', zorder = 5, label = 'predicted',alpha=0.1) 105 | else: 106 | plt.plot(C,sim_rate[n,:],color='xkcd:silver', zorder = 5, alpha=0.1) 107 | plt.plot(C,obs_rate,marker='x',color='xkcd:orangish red', zorder = 10, label = 'observed') 108 | plt.title('prior predictions') 109 | plt.xlabel('substrate concentration') 110 | plt.ylabel('reaction rate') 111 | plt.ylim(0,plt.gca().get_ylim()[-1]) 112 | plt.legend() 113 | plt.savefig('02_prior_simulations.png') 114 | 115 | #%% 116 | # ============================================================================= 117 | # Step 3: Define the transport map parameterization 118 | # ============================================================================= 119 | 120 | # Once more, we have to define the map parameterization. We have seen this in 121 | # earlier examples. In this case, our target density function will not be two- 122 | # dimensional, but 22-dimensional: 20 observation prediction dimensions, and 123 | # 2 parameter dimensions. 
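The dimensionality bookkeeping for this example (20 observation-prediction dimensions plus 2 parameter dimensions) can be sketched as a pure shape check with stand-in arrays; these are placeholders, not the script's actual samples:

```python
import numpy as np

N = 1000                       # ensemble size used in this example
pred_rate = np.zeros((N, 20))  # stand-in: 20 observation-prediction dimensions
r_max = np.zeros(N)            # stand-in: parameter 1
K = np.zeros(N)                # stand-in: parameter 2

# Joint target samples: predictions first, then the parameters to be inferred
X = np.column_stack((pred_rate, r_max[:, np.newaxis], K[:, np.newaxis]))

D = X.shape[-1]
print(D)  # → 22
```

Keeping the conditioning variables (the predictions) in the leading columns matters later, when the map is used to condition on observed rates.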
124 | 125 | # Let's define the dimensionality of the target density 126 | D = pred_rate.shape[-1] + 2 127 | 128 | # Create empty lists for the map component specifications 129 | monotone = [] 130 | nonmonotone = [] 131 | 132 | # Let's keep the map relatively simple for now 133 | maxorder = 5 134 | 135 | # Let us use Hermite functions and integrated radial basis functions, as 136 | # before. Recall that we only need the lower block of the map, so let's only 137 | # define the last few map component functions: 138 | for k in np.arange(D-2,D,1): 139 | 140 | # Level 1: Add an empty list entry for each map component function 141 | monotone.append([]) 142 | nonmonotone.append([]) # An empty list "[]" denotes a constant 143 | 144 | # Level 2: We initialize the nonmonotone terms with a constant 145 | nonmonotone[-1].append([]) 146 | 147 | # Nonmonotone part -------------------------------------------------------- 148 | 149 | # Go through every polynomial order 150 | for order in range(maxorder): 151 | 152 | # We only have non-constant nonmonotone terms past the first map 153 | # component, and we already added the constant term earlier, so only do 154 | # this for the second map component function (k > 0). 155 | if k > 0: 156 | 157 | # If the order is one, just add it as a linear term 158 | if order == 0: 159 | nonmonotone[-1].append([k-1]) 160 | else: 161 | nonmonotone[-1].append([k-1]*(order+1)+['HF']) 162 | 163 | # Monotone part ----------------------------------------------------------- 164 | 165 | # Let's get more fancy with the monotone part this time. If the order we 166 | # specified is one, then use a linear term. Otherwise, use a few monotone 167 | # special functions: Left edge terms, integrated radial basis functions, 168 | # and right edge terms 169 | 170 | # The specified order is one 171 | if maxorder == 1: 172 | 173 | # Then just add a linear term 174 | monotone[-1].append([k]) 175 | 176 | # Otherwise, the order is greater than one. Let's use special terms.
177 | else: 178 | 179 | # Add a left edge term. The order of addition matters for these 180 | # special terms: while they are placed according to marginal quantiles, 181 | # they are placed from left to right, so the left edge term comes first. 182 | monotone[-1].append('LET '+str(k)) 183 | 184 | # Let's only add maxorder-1 iRBFs 185 | for order in range(maxorder-1): 186 | 187 | # Add an integrated radial basis function 188 | monotone[-1].append('iRBF '+str(k)) 189 | 190 | # Then add a right edge term 191 | monotone[-1].append('RET '+str(k)) 192 | 193 | #%% 194 | # ============================================================================= 195 | # Step 4: Create the transport map object 196 | # ============================================================================= 197 | 198 | # First let's define the target samples. If we want to extract conditionals of 199 | # the target density function, as in this case, the values on which we want to 200 | # condition must occupy the upper-most entries of the map. In consequence, we 201 | # assemble the matrix X as follows: 202 | 203 | X = np.column_stack(( 204 | pred_rate, # 20 dims 205 | r_max[:,np.newaxis], # 1 dim 206 | K[:,np.newaxis] )) # 1 dim 207 | 208 | # With the map parameterization (nonmonotone, monotone) defined and the target 209 | # samples (X) obtained, we can start creating the transport map object. 210 | 211 | # To begin, delete any map object which might already exist. 212 | if "tm" in globals(): 213 | del tm 214 | 215 | # Create the transport map object tm 216 | tm = transport_map( 217 | monotone = monotone, # Specify the monotone parts of the map component function 218 | nonmonotone = nonmonotone, # Specify the nonmonotone parts of the map component function 219 | X = X, # A N-by-D matrix of training samples (N = ensemble size, D = variable space dimension) 220 | polynomial_type = "hermite function", # What types of polynomials did we specify? 
The option 'Hermite functions' here are re-scaled probabilist's Hermites, to avoid numerical overflow for higher-order terms 221 | monotonicity = "separable monotonicity", # Are we ensuring monotonicity through 'integrated rectifier' or 'separable monotonicity'? 222 | standardize_samples = True, # Standardize the training ensemble X? Should always be True 223 | verbose = True, # Shall we print the map's progress? 224 | workers = 1) # Number of workers for the parallel optimization.) 225 | 226 | # This map is cheap to optimize. 227 | tm.optimize() 228 | 229 | #%% 230 | # ============================================================================= 231 | # Step 5: Bayesian inference step 232 | # ============================================================================= 233 | 234 | # We have established the basics of conditional sampling earlier. To condition 235 | # this target density on specific observations, create a matrix of duplicates 236 | # of the observations. 237 | X_star = np.repeat( 238 | a = obs_rate[np.newaxis,:], 239 | repeats = N, 240 | axis = 0) 241 | 242 | # Let's use a composite map approach, so we derive our reference samples from a 243 | # forward application of the map 244 | Z = tm.map(X) 245 | 246 | # Now pull these samples back to the target 247 | X_cond = tm.inverse_map(Z = Z, X_star = X_star) 248 | 249 | # Let's extract the updated parameters 250 | r_max_cond = X_cond[:,0] 251 | K_cond = X_cond[:,1] 252 | 253 | # Run the simulations 254 | sim_rate_cond = model_monod(r_max_cond,K_cond,C) 255 | 256 | # Let's generate the corresponding observation predictions 257 | pred_rate_cond = np.zeros(sim_rate_cond.shape) 258 | for n in range(N): 259 | pred_rate_cond[n,:] = sim_rate_cond[n,:] + scipy.stats.norm.rvs(scale=0.1,size=len(C)) 260 | 261 | #%% 262 | # ============================================================================= 263 | # Step 6: Plot the results 264 | # 
============================================================================= 265 | 266 | # First, let's plot the prior parameters and the posterior parameters 267 | plt.figure(figsize=(7,7)) 268 | 269 | # Plot prior and posterior parameters 270 | plt.scatter(r_max,K,s=3,color='xkcd:silver',label='prior') 271 | plt.scatter(r_max_cond,K_cond,s=3,color='xkcd:grass green',label='posterior') 272 | plt.scatter(5,2.4,marker='x',color='xkcd:orangish red',label='truth') 273 | 274 | # Complete the figure 275 | plt.title('prior and posterior parameters') 276 | plt.xlabel('maximum reaction rate $r_{max}$') 277 | plt.ylabel('half-velocity constant $K$') 278 | plt.legend() 279 | 280 | # Save the figure 281 | plt.savefig('03_posterior_parameters.png') 282 | 283 | 284 | # Next, let's see how these updated parameters affect the predictions 285 | plt.figure(figsize=(10,10)) 286 | 287 | # Plot the prior quantiles 288 | q0025 = np.quantile(sim_rate, q = 0.025, axis = 0) 289 | q0500 = np.quantile(sim_rate, q = 0.500, axis = 0) 290 | q0975 = np.quantile(sim_rate, q = 0.975, axis = 0) 291 | plt.plot(C,q0500,color='xkcd:grey',zorder=10,label='prior (median)') 292 | plt.fill(list(C)+list(np.flip(C)),list(q0025)+list(np.flip(q0975)),color='xkcd:grey',alpha=0.25,label='prior (2.5% - 97.5%)') 293 | 294 | # Plot the posterior quantiles 295 | q0025 = np.quantile(sim_rate_cond, q = 0.025, axis = 0) 296 | q0500 = np.quantile(sim_rate_cond, q = 0.500, axis = 0) 297 | q0975 = np.quantile(sim_rate_cond, q = 0.975, axis = 0) 298 | plt.plot(C,q0500,color='xkcd:grass green',zorder=10,label='posterior (median)') 299 | plt.fill(list(C)+list(np.flip(C)),list(q0025)+list(np.flip(q0975)),color='xkcd:grass green',alpha=0.25,label='posterior (2.5% - 97.5%)') 300 | 301 | # Plot the observations 302 | plt.plot(C,obs_rate,marker='x',color='xkcd:orangish red', zorder = 10, label = 'observed') 303 | 304 | # Complete the figure 305 | plt.title('prior and posterior predictions') 306 | plt.xlabel('substrate 
concentration') 307 | plt.ylabel('reaction rate') 308 | plt.ylim(0,10) 309 | plt.legend(ncol=3) 310 | plt.savefig('04_posterior_simulations.png') -------------------------------------------------------------------------------- /Examples B - statistical inference/Example 04 - Monod kinetics/model_monod.dat: -------------------------------------------------------------------------------- 1 | "var" "C" "data" 2 | "r" 0.5 0.773942218460235 3 | "r" 1 1.27591582091686 4 | "r" 1.5 1.67848878530953 5 | "r" 2 1.78111731336713 6 | "r" 2.5 2.29582311366305 7 | "r" 3 2.55570535500672 8 | "r" 3.5 2.60685476746181 9 | "r" 4 2.78730186184884 10 | "r" 4.5 3.06362724729673 11 | "r" 5 3.12179046408179 12 | "r" 5.5 3.31315942153716 13 | "r" 6 3.2064060780553 14 | "r" 6.5 3.48832495114531 15 | "r" 7 3.47884464968913 16 | "r" 7.5 3.46262833667404 17 | "r" 8 3.57705278190036 18 | "r" 8.5 3.42808389430758 19 | "r" 9 3.95580881746777 20 | "r" 9.5 3.7592548257641 21 | "r" 10 3.82272286815765 22 | -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 05 - Ensemble Transport Filter/01_RMSE_EnTF_order=1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 05 - Ensemble Transport Filter/01_RMSE_EnTF_order=1.png -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 05 - Ensemble Transport Filter/01_RMSE_EnTF_order=2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 05 - Ensemble Transport Filter/01_RMSE_EnTF_order=2.png 
-------------------------------------------------------------------------------- /Examples C - data assimilation/Example 05 - Ensemble Transport Filter/01_RMSE_EnTF_order=3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 05 - Ensemble Transport Filter/01_RMSE_EnTF_order=3.png -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 05 - Ensemble Transport Filter/01_RMSE_EnTF_order=4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 05 - Ensemble Transport Filter/01_RMSE_EnTF_order=4.png -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 05 - Ensemble Transport Filter/01_RMSE_EnTF_order=5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 05 - Ensemble Transport Filter/01_RMSE_EnTF_order=5.png -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 05 - Ensemble Transport Filter/example_05.py: -------------------------------------------------------------------------------- 1 | if __name__ == '__main__': 2 | 3 | # Load in a number of libraries we will use 4 | import numpy as np 5 | import scipy.stats 6 | import copy 7 | from transport_map import * 8 | import time 9 | import pickle 10 | import matplotlib.pyplot as plt 11 | 12 | # Close all open figures 13 | plt.close("all") 14 | 15 | # In this example file, 
we will consider data assimilation for a chaotic, 16 | # three-dimensional dynamical system known as Lorenz-63. Using triangular 17 | # transport methods for filtering results in an algorithm we call an 18 | # Ensemble Transport Filter (EnTF), a nonlinear generalization of the well- 19 | # known Ensemble Kalman Filter (EnKF). 20 | 21 | # ========================================================================= 22 | # Set up the Lorenz-63 dynamics 23 | # ========================================================================= 24 | 25 | # First, we must define the system dynamics and the time integration scheme 26 | 27 | # Lorenz-63 dynamics 28 | def lorenz_dynamics(t, Z, beta=8/3, rho=28, sigma=10): 29 | 30 | if len(Z.shape) == 1: # Only one particle 31 | 32 | dZ1ds = - sigma*Z[0] + sigma*Z[1] 33 | dZ2ds = - Z[0]*Z[2] + rho*Z[0] - Z[1] 34 | dZ3ds = Z[0]*Z[1] - beta*Z[2] 35 | 36 | dyn = np.asarray([dZ1ds, dZ2ds, dZ3ds]) 37 | 38 | else: 39 | 40 | dZ1ds = - sigma*Z[...,0] + sigma*Z[...,1] 41 | dZ2ds = - Z[...,0]*Z[...,2] + rho*Z[...,0] - Z[...,1] 42 | dZ3ds = Z[...,0]*Z[...,1] - beta*Z[...,2] 43 | 44 | dyn = np.column_stack((dZ1ds, dZ2ds, dZ3ds)) 45 | 46 | return dyn 47 | 48 | # Fourth-order Runge-Kutta scheme 49 | def rk4(Z,fun,t=0,dt=1,nt=1): 50 | 51 | """ 52 | Parameters 53 | t : initial time 54 | Z : initial states 55 | fun : function to be integrated 56 | dt : time step length 57 | nt : number of time steps 58 | 59 | """ 60 | 61 | # Prepare array for use 62 | if len(Z.shape) == 1: # We have only one particle, convert it to correct format 63 | Z = Z[np.newaxis,:] 64 | 65 | # Go through all time steps 66 | for i in range(nt): 67 | 68 | # Calculate the RK4 values 69 | k1 = fun(t + i*dt, Z) 70 | k2 = fun(t + i*dt + 0.5*dt, Z + dt/2*k1) 71 | k3 = fun(t + i*dt + 0.5*dt, Z + dt/2*k2) 72 | k4 = fun(t + i*dt + dt, Z + dt*k3) 73 | 74 | # Update next value 75 | Z += dt/6*(k1 + 2*k2 + 2*k3 + k4) 76 | 77 | return Z 78 | 79 | # 
========================================================================= 80 | # Set up the exercise 81 | # ========================================================================= 82 | 83 | # Set a random seed 84 | np.random.seed(0) 85 | 86 | # Define problem dimensions 87 | O = 3 # Observation space dimensions 88 | D = 3 # State space dimensions 89 | 90 | # Ensemble size 91 | N = 500 92 | 93 | # Time-related parameters 94 | T = 1000 # Full time series length 95 | dt = 0.1 # Time step length 96 | dti = 0.05 # Time step increment 97 | 98 | # Observation error 99 | obs_sd = 2 100 | 101 | # In this study, we introduce regularization for the transport map. 102 | lmbda = 0.05 103 | 104 | # Maximum polynomial order for EnTF / EnTS; Feel free to change this value 105 | maxorder_filter = 3 106 | 107 | #%% 108 | # ========================================================================= 109 | # Generate synthetic observations 110 | # ========================================================================= 111 | 112 | # Create the synthetic reference 113 | synthetic_truth = np.zeros((T,1,D)) 114 | synthetic_truth[0,0,:] = scipy.stats.norm.rvs(size=3) 115 | 116 | for t in np.arange(0,T-1,1): 117 | 118 | # Make a Lorenz forecast 119 | synthetic_truth[t+1,:,:] = rk4( 120 | Z = copy.copy(synthetic_truth[t,:,:]), 121 | fun = lorenz_dynamics, 122 | t = 0, 123 | dt = dti, 124 | nt = int(dt/dti)) 125 | 126 | # Remove the unnecessary particle index 127 | synthetic_truth = synthetic_truth[:,0,:] 128 | 129 | # Create observations 130 | observations = copy.copy(synthetic_truth) + scipy.stats.norm.rvs(scale = obs_sd, size = synthetic_truth.shape) 131 | 132 | #%% 133 | # ========================================================================= 134 | # Set up the transport map object 135 | # ========================================================================= 136 | 137 | # Define the map component functions. 
The target distribution over which we 138 | # do inference is six-dimensional, because we assume we observe all three 139 | # variable dimensions (three state observations + three states). The graph 140 | # of the system looks like: 141 | # 142 | # ┌---------------┐ 143 | # | | 144 | # x_a --- x_b --- x_c 145 | # | | | 146 | # y_a y_b y_c 147 | # 148 | # A straightforward assimilation scheme would form a six-dimensional target 149 | # distribution, and condition everything at once. If we want to exploit the 150 | # conditional independence structure in this graph, we can subdivide this 151 | # update into three separate operations, each assimilating one observation: 152 | # 153 | # Operation 1 154 | # ┌---------------┐ 155 | # | (3) | --> sample new y_b 156 | # (2) x_a --- x_b --- x_c (4) 157 | # | 158 | # (1) y_a 159 | # 160 | # Operation 2 161 | # ┌---------------┐ 162 | # | (2) | --> sample new y_c 163 | # (3) x_a --- x_b --- x_c (4) 164 | # | 165 | # y_b 166 | # (1) 167 | # 168 | # Operation 3 169 | # ┌---------------┐ 170 | # | (4) | 171 | # (3) x_a --- x_b --- x_c (2) 172 | # | 173 | # y_c (1) 174 | # 175 | # We apply the three conditioning operations in sequence, each conditioning 176 | # on one observation at a time. After each update, we have to sample new 177 | # observation predictions. The numbers next to each node define their 178 | # position in the triangular map. 179 | 180 | 181 | # For the later inference problem, we are only interested in extracting 182 | # conditionals of this six-dimensional distribution. As established in 183 | # previous exercises, this means we only have to define the lower three map 184 | # component functions - those corresponding to the three states. 
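As an aside, the list-comprehension shorthand used for the non-monotone term lists in these scripts can be expanded explicitly. A small illustration for maxorder_filter = 3; the interpretation of the terms is inferred from the script's own comments (integers appear to index input variables, and 'HF' appears to mark a Hermite function factor):

```python
import numpy as np

maxorder_filter = 3

# Expand the first non-monotone term list from the nonlinear case
terms = [[], [0]] + [[0]*od + ['HF'] for od in np.arange(1, maxorder_filter+1, 1)]

# [] is a constant term, [0] a linear term in variable 0, and e.g.
# [0, 0, 'HF'] a second-order Hermite function term in variable 0
print(terms)
```

Writing one of these lists out by hand for a small order is a quick way to check that the map component depends on exactly the variables the graph allows.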
185 | if maxorder_filter == 1: # Map is linear 186 | 187 | # We can exploit a bit of sparsity in each graph 188 | nonmonotone_filter = [ 189 | [[],[0]], 190 | [[], [1]], 191 | [[], [1],[2]]] 192 | 193 | monotone_filter = [ 194 | [[1]], 195 | [[2]], 196 | [[3]]] 197 | 198 | else: # Map is nonlinear 199 | 200 | # We use combinations of linear terms and Hermite functions for the 201 | # non-monotone terms 202 | nonmonotone_filter = [ 203 | [[],[0]]+[[0]*od+['HF'] for od in np.arange(1,maxorder_filter+1,1)], 204 | [[],[1]]+[[1]*od+['HF'] for od in np.arange(1,maxorder_filter+1,1)], 205 | [[],[1]]+[[1]*od+['HF'] for od in np.arange(1,maxorder_filter+1,1)]+[[2]]+[[2]*od+['HF'] for od in np.arange(1,maxorder_filter+1,1)]] 206 | 207 | # Here, we use map component functions of differing complexity. We use 208 | # nonlinear combinations of edge terms and integrated radial basis 209 | # functions for the first state, corresponding to the observed state, 210 | # and linear terms for all lower dependencies. 211 | monotone_filter = [ 212 | ['LET 1']+['iRBF 1']*(maxorder_filter-1)+['RET 1'], 213 | [[2]], 214 | [[3]]] 215 | 216 | 217 | # If a previous transport map object exists, delete it 218 | if "tm" in globals(): 219 | del tm 220 | 221 | # Let's create the transport map object. We initialize it with dummy random 222 | # variables for now. 
223 | tm = transport_map( 224 | monotone = monotone_filter, 225 | nonmonotone = nonmonotone_filter, 226 | X = np.random.uniform(size=(N,1+D)), # Dummy input 227 | polynomial_type = "hermite function", 228 | monotonicity = "separable monotonicity", 229 | regularization = "l2", 230 | regularization_lambda = lmbda, 231 | verbose = False) 232 | 233 | #%% 234 | # ========================================================================= 235 | # Ensemble Transport Filtering 236 | # ========================================================================= 237 | 238 | # Create an empty list for the ensemble mean RMSEs 239 | RMSE_list = [] 240 | 241 | # Let's set up an empty array for the samples obtained during filtering 242 | Xt = np.zeros((T,N,D)) 243 | 244 | # Draw initial samples from a standard Gaussian 245 | Xt[0,...] = scipy.stats.norm.rvs(size=(N,D)) 246 | 247 | # Create a copy of this array for the forecast. We don't really need this 248 | # array for this example, but it will be important in the next example. 249 | Xft = copy.copy(Xt) 250 | 251 | # Start the filtering 252 | for t in np.arange(0,T,1): 253 | 254 | # Print the update 255 | print('Timestep '+str(t+1).zfill(4)+'|'+str(T).zfill(4)) 256 | 257 | # Copy the forecast into the analysis array 258 | Xt[t,...] = copy.copy(Xft[t,...]) 259 | 260 | # Let's assimilate these observations one-at-a-time 261 | for idx,perm in enumerate([[0,1,2],[1,0,2],[2,1,0]]): 262 | 263 | # Before the update step, Ensemble Transport Filters must sample 264 | # the stochastic observation model. Sample independent errors for 265 | # each state. 
Yt = copy.copy(Xt[t,:,:][:,idx]) + \ 267 | scipy.stats.norm.rvs( 268 | loc = 0, 269 | scale = obs_sd, 270 | size = (N)) 271 | 272 | # Concatenate observations and samples into four-dimensional samples 273 | # from the target density 274 | map_input = copy.copy(np.column_stack(( 275 | Yt[:,np.newaxis], # First dimension: simulated observation 276 | Xt[t,:,:][:,perm]))) # Next D dimensions: predicted states 277 | 278 | # Reset the transport map for the new values 279 | tm.reset(map_input) 280 | 281 | # Optimize the transport map 282 | tm.optimize() 283 | 284 | # --------------------------------------------------------------------- 285 | # Implement the composite map update 286 | # --------------------------------------------------------------------- 287 | 288 | # For the composite map, we use reference samples from the pushforward 289 | # samples 290 | Z_pushforward = tm.map(map_input) 291 | 292 | # Create an array with replicates of the observation 293 | Y_star = np.repeat( 294 | a = observations[t,idx].reshape((1,1)), 295 | repeats = N, 296 | axis = 0) 297 | 298 | # Apply the inverse map 299 | ret = tm.inverse_map( 300 | X_star = Y_star, 301 | Z = Z_pushforward) 302 | 303 | # Undo the permutation of the states 304 | ret = ret[:,perm] 305 | 306 | # Save the result in the analysis array 307 | Xt[t,...] 
= copy.copy(ret) 308 | 309 | # --------------------------------------------------------------------- 310 | # Determine RMSE, make a forecast to the next timestep 311 | # --------------------------------------------------------------------- 312 | 313 | # Calculate ensemble mean RMSE 314 | RMSE = (np.mean(Xt[t,...],axis=0) - synthetic_truth[t,:])**2 315 | RMSE = np.mean(RMSE) 316 | RMSE = np.sqrt(RMSE) 317 | RMSE_list.append(RMSE) 318 | 319 | # After the analysis step, make a forecast to the next timestep 320 | if t < T-1: 321 | 322 | # Make a Lorenz forecast 323 | Xft[t+1,:,:] = rk4( 324 | Z = copy.copy(Xt[t,:,:]), 325 | fun = lorenz_dynamics, 326 | t = 0, 327 | dt = dti, 328 | nt = int(dt/dti)) 329 | 330 | # Plot the results 331 | plt.figure(figsize=(7,7)) 332 | plt.plot(RMSE_list,color='xkcd:grey') 333 | plt.xlabel('timestep') 334 | plt.ylabel('ensemble mean RMSE') 335 | plt.title('EnTF order '+str(maxorder_filter)+' | RMSE: '+"{:.3f}".format(np.mean(RMSE_list))) 336 | plt.savefig('01_RMSE_EnTF_order='+str(maxorder_filter)+'.png') -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/01_RMSE_EnTF_order=1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/01_RMSE_EnTF_order=1.png -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/01_RMSE_EnTF_order=2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/01_RMSE_EnTF_order=2.png 
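As a quick sanity check of the time integrator used throughout these scripts, the rk4 helper can be verified against an ODE with a known solution. A minimal standalone sketch (the helper is reproduced here, slightly simplified so the snippet does not mutate its input in place):

```python
import numpy as np

def rk4(Z, fun, t=0, dt=1, nt=1):
    # Classical fourth-order Runge-Kutta, as in example_05.py
    if len(Z.shape) == 1:
        Z = Z[np.newaxis, :]
    for i in range(nt):
        k1 = fun(t + i*dt, Z)
        k2 = fun(t + i*dt + 0.5*dt, Z + dt/2*k1)
        k3 = fun(t + i*dt + 0.5*dt, Z + dt/2*k2)
        k4 = fun(t + i*dt + dt, Z + dt*k3)
        Z = Z + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return Z

# Integrate dZ/dt = -Z from Z(0) = 1 to t = 1; the exact solution is exp(-1)
Z1 = rk4(np.asarray([1.0]), lambda t, Z: -Z, dt=0.05, nt=20)
error = abs(Z1[0, 0] - np.exp(-1.0))
print(error)  # fourth-order global error, far below 1e-6 at this step size
```

The same check works for any right-hand side with a closed-form solution, which is handy before trusting the integrator on chaotic dynamics like Lorenz-63.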
-------------------------------------------------------------------------------- /Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/01_RMSE_EnTF_order=3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/01_RMSE_EnTF_order=3.png -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/01_RMSE_EnTF_order=4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/01_RMSE_EnTF_order=4.png -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/01_RMSE_EnTF_order=5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/01_RMSE_EnTF_order=5.png -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/02_RMSE_EnTS_order=1_smoother_order=1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/02_RMSE_EnTS_order=1_smoother_order=1.png 
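The filtering loops in example_05.py and example_06.py assimilate one observation at a time using the permutations [[0,1,2],[1,0,2],[2,1,0]], which move the currently observed state to the front of the map input, and then undo the reordering with ret[:, perm]. That undo step works because each of these particular permutations happens to be its own inverse; a small check, plus the general way to invert a permutation with np.argsort:

```python
import numpy as np

X = np.arange(12).reshape(4, 3)  # dummy ensemble: 4 particles, 3 states

for perm in [[0, 1, 2], [1, 0, 2], [2, 1, 0]]:
    permuted = X[:, perm]         # observed state moved to the front
    restored = permuted[:, perm]  # these permutations are involutions
    assert np.array_equal(restored, X)

# A permutation that is not its own inverse must be undone with argsort
perm = [1, 2, 0]
assert np.array_equal(X[:, perm][:, np.argsort(perm)], X)
```

If the assimilation order were ever changed to a non-involutive permutation, the `ret[:, perm]` undo in these scripts would silently scramble the states, so the argsort form is the safer general pattern.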
-------------------------------------------------------------------------------- /Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/02_RMSE_EnTS_order=2_smoother_order=2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/02_RMSE_EnTS_order=2_smoother_order=2.png -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/02_RMSE_EnTS_order=3_smoother_order=3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/02_RMSE_EnTS_order=3_smoother_order=3.png -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/02_RMSE_EnTS_order=4_smoother_order=4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/02_RMSE_EnTS_order=4_smoother_order=4.png -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/02_RMSE_EnTS_order=5_smoother_order=5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/Examples C - data assimilation/Example 06 - Ensemble Transport 
Smoother/02_RMSE_EnTS_order=5_smoother_order=5.png -------------------------------------------------------------------------------- /Examples C - data assimilation/Example 06 - Ensemble Transport Smoother/example_06.py: -------------------------------------------------------------------------------- 1 | if __name__ == '__main__': 2 | 3 | # Load in a number of libraries we will use 4 | import numpy as np 5 | import scipy.stats 6 | import copy 7 | from transport_map import * 8 | import time 9 | import pickle 10 | import matplotlib.pyplot as plt 11 | 12 | # Close all open figures 13 | plt.close("all") 14 | 15 | # In this example file, we will build on the foundation established by the 16 | # Ensemble Transport Filter, and demonstrate how to extend these principles 17 | # to smoothing. Specifically, we consider a backward Ensemble Transport 18 | # Smoother, which is a nonlinear generalization of the Ensemble Rauch-Tung- 19 | # Striebel Smoother. 20 | 21 | # ========================================================================= 22 | # Set up the Lorenz-63 dynamics 23 | # ========================================================================= 24 | 25 | # First, we must define the system dynamics and the time integration scheme 26 | 27 | # Lorenz-63 dynamics 28 | def lorenz_dynamics(t, Z, beta=8/3, rho=28, sigma=10): 29 | 30 | if len(Z.shape) == 1: # Only one particle 31 | 32 | dZ1ds = - sigma*Z[0] + sigma*Z[1] 33 | dZ2ds = - Z[0]*Z[2] + rho*Z[0] - Z[1] 34 | dZ3ds = Z[0]*Z[1] - beta*Z[2] 35 | 36 | dyn = np.asarray([dZ1ds, dZ2ds, dZ3ds]) 37 | 38 | else: 39 | 40 | dZ1ds = - sigma*Z[...,0] + sigma*Z[...,1] 41 | dZ2ds = - Z[...,0]*Z[...,2] + rho*Z[...,0] - Z[...,1] 42 | dZ3ds = Z[...,0]*Z[...,1] - beta*Z[...,2] 43 | 44 | dyn = np.column_stack((dZ1ds, dZ2ds, dZ3ds)) 45 | 46 | return dyn 47 | 48 | # Fourth-order Runge-Kutta scheme 49 | def rk4(Z,fun,t=0,dt=1,nt=1):#(x0, y0, x, h): 50 | 51 | """ 52 | Parameters 53 | t : initial time 54 | Z : initial states 55 | fun : 
function to be integrated 56 | dt : time step length 57 | nt : number of time steps 58 | 59 | """ 60 | 61 | # Prepare array for use 62 | if len(Z.shape) == 1: # We have only one particle, convert it to correct format 63 | Z = Z[np.newaxis,:] 64 | 65 | # Go through all time steps 66 | for i in range(nt): 67 | 68 | # Calculate the RK4 values 69 | k1 = fun(t + i*dt, Z); 70 | k2 = fun(t + i*dt + 0.5*dt, Z + dt/2*k1); 71 | k3 = fun(t + i*dt + 0.5*dt, Z + dt/2*k2); 72 | k4 = fun(t + i*dt + dt, Z + dt*k3); 73 | 74 | # Update next value 75 | Z += dt/6*(k1 + 2*k2 + 2*k3 + k4) 76 | 77 | return Z 78 | 79 | # ========================================================================= 80 | # Set up the exercise 81 | # ========================================================================= 82 | 83 | # Set a random seed 84 | np.random.seed(0) 85 | 86 | # Define problem dimensions 87 | O = 3 # Observation space dimensions 88 | D = 3 # State space dimensions 89 | 90 | # Ensemble size 91 | N = 500 92 | 93 | # Time-related parameters 94 | T = 1000 # Full time series length 95 | dt = 0.1 # Time step length 96 | dti = 0.05 # Time step increment 97 | 98 | # Observation error 99 | obs_sd = 2 100 | 101 | # In this study, we introduce regularization for the transport map. 
102 | lmbda = 0.05 103 | 104 | # Maximum polynomial order for EnTF / EnTS; Feel free to change this value 105 | maxorder_filter = 3 # Maximum map order for the filter 106 | maxorder_smoother = 3 # Maximum map order for the smoother 107 | 108 | #%% 109 | # ========================================================================= 110 | # Generate synthetic observations 111 | # ========================================================================= 112 | 113 | # Create the synthetic reference 114 | synthetic_truth = np.zeros((T,1,D)) 115 | synthetic_truth[0,0,:] = scipy.stats.norm.rvs(size=3) 116 | 117 | for t in np.arange(0,T-1,1): 118 | 119 | # Make a Lorenz forecast 120 | synthetic_truth[t+1,:,:] = rk4( 121 | Z = copy.copy(synthetic_truth[t,:,:]), 122 | fun = lorenz_dynamics, 123 | t = 0, 124 | dt = dti, 125 | nt = int(dt/dti)) 126 | 127 | # Remove the unnecessary particle index 128 | synthetic_truth = synthetic_truth[:,0,:] 129 | 130 | # Create observations 131 | observations = copy.copy(synthetic_truth) + scipy.stats.norm.rvs(scale = obs_sd, size = synthetic_truth.shape) 132 | 133 | #%% 134 | """ 135 | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 136 | The backward EnTS is built on the foundation of a previous EnTF run. The 137 | lines below are an exact replicate of example_05.py. Feel free to skip past 138 | it if you have already familiarized yourself with the previous example. 139 | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 140 | """ 141 | 142 | 143 | # ========================================================================= 144 | # Set up the transport map object 145 | # ========================================================================= 146 | 147 | # Define the map component functions. The target distribution over which we 148 | # do inference is six-dimensional, because we assume we observe all three 149 | # variable dimensions (three state observations + three states). 
The graph 150 | # of the system looks like: 151 | # 152 | # ┌---------------┐ 153 | # | | 154 | # x_a --- x_b --- x_c 155 | # | | | 156 | # y_a y_b y_c 157 | # 158 | # A straightforward assimilation scheme would form a six-dimensional target 159 | # distribution, and condition everything at once. If we want to exploit the 160 | # conditional independence structure in this graph, we can subdivide this 161 | # update into three separate operations, each assimilating one observation: 162 | # 163 | # Operation 1 164 | # ┌---------------┐ 165 | # | (3) | --> sample new y_b 166 | # (2) x_a --- x_b --- x_c (4) 167 | # | 168 | # (1) y_a 169 | # 170 | # Operation 2 171 | # ┌---------------┐ 172 | # | (2) | --> sample new y_c 173 | # (3) x_a --- x_b --- x_c (4) 174 | # | 175 | # y_b 176 | # (1) 177 | # 178 | # Operation 3 179 | # ┌---------------┐ 180 | # | (4) | 181 | # (3) x_a --- x_b --- x_c (2) 182 | # | 183 | # y_c (1) 184 | # 185 | # We apply the three conditioning operations in sequence, each conditioning 186 | # on one observation at a time. After each update, we have to sample new 187 | # observation predictions. The numbers next to each node define their 188 | # position in the triangular map. 189 | 190 | 191 | # For the later inference problem, we are only interested in extracting 192 | # conditionals of this six-dimensional distribution. As established in 193 | # previous exercises, this means we only have to define the lower three map 194 | # component functions - those corresponding to the three states. 
195 | if maxorder_filter == 1: # Map is linear 196 | 197 | # We can exploit a bit of sparsity in each graph 198 | nonmonotone_filter = [ 199 | [[],[0]], 200 | [[], [1]], 201 | [[], [1],[2]]] 202 | 203 | monotone_filter = [ 204 | [[1]], 205 | [[2]], 206 | [[3]]] 207 | 208 | else: # Map is nonlinear 209 | 210 | # We use combinations of linear terms and Hermite functions for the 211 | # non-monotone terms 212 | nonmonotone_filter = [ 213 | [[],[0]]+[[0]*od+['HF'] for od in np.arange(1,maxorder_filter+1,1)], 214 | [[],[1]]+[[1]*od+['HF'] for od in np.arange(1,maxorder_filter+1,1)], 215 | [[],[1]]+[[1]*od+['HF'] for od in np.arange(1,maxorder_filter+1,1)]+[[2]]+[[2]*od+['HF'] for od in np.arange(1,maxorder_filter+1,1)]] 216 | 217 | # Here, we use map component functions of differing complexity. We use 218 | # nonlinear combinations of edge terms and integrated radial basis 219 | # functions for the first state, corresponding to the observed state, 220 | # and linear terms for all lower dependencies. 221 | monotone_filter = [ 222 | ['LET 1']+['iRBF 1']*(maxorder_filter-1)+['RET 1'], 223 | [[2]], 224 | [[3]]] 225 | 226 | 227 | # If a previous transport map object exists, delete it 228 | if "tm" in globals(): 229 | del tm 230 | 231 | # Let's create the transport map object. We initialize it with dummy random 232 | # variables for now. 
233 | tm = transport_map( 234 | monotone = monotone_filter, 235 | nonmonotone = nonmonotone_filter, 236 | X = np.random.uniform(size=(N,1+D)), # Dummy input 237 | polynomial_type = "hermite function", 238 | monotonicity = "separable monotonicity", 239 | regularization = "l2", 240 | regularization_lambda = lmbda, 241 | verbose = False) 242 | 243 | #%% 244 | # ========================================================================= 245 | # Ensemble Transport Filtering 246 | # ========================================================================= 247 | 248 | # Create an empty list for the ensemble mean RMSEs 249 | RMSE_list = [] 250 | 251 | # Let's set up an empty array for the samples obtained during filtering 252 | Xt = np.zeros((T,N,D)) 253 | 254 | # Draw initial samples from a standard Gaussian 255 | Xt[0,...] = scipy.stats.norm.rvs(size=(N,D)) 256 | 257 | # Create a copy of this array for the forecast. We will need these 258 | # forecast samples later on, in the backward smoothing pass. 259 | Xft = copy.copy(Xt) 260 | 261 | # Start the filtering 262 | for t in np.arange(0,T,1): 263 | 264 | # Print the update 265 | print('Filtering: timestep '+str(t+1).zfill(4)+'|'+str(T).zfill(4)) 266 | 267 | # Copy the forecast into the analysis array 268 | Xt[t,...] = copy.copy(Xft[t,...]) 269 | 270 | # Let's assimilate these observations one-at-a-time 271 | for idx,perm in enumerate([[0,1,2],[1,0,2],[2,1,0]]): 272 | 273 | # Before the update step, Ensemble Transport Filters must sample 274 | # the stochastic observation model. Sample independent errors for 275 | # each state. 
276 | Yt = copy.copy(Xt[t,:,:][:,idx]) + \ 277 | scipy.stats.norm.rvs( 278 | loc = 0, 279 | scale = obs_sd, 280 | size = (N)) 281 | 282 | # Concatenate observations and samples into four-dimensional samples 283 | # from the target density 284 | map_input = copy.copy(np.column_stack(( 285 | Yt[:,np.newaxis], # First dimension: simulated observation 286 | Xt[t,:,:][:,perm]))) # Next D dimensions: predicted states 287 | 288 | # Reset the transport map for the new values 289 | tm.reset(map_input) 290 | 291 | # Optimize the transport map 292 | tm.optimize() 293 | 294 | # --------------------------------------------------------------------- 295 | # Implement the composite map update 296 | # --------------------------------------------------------------------- 297 | 298 | # For the composite map, we use reference samples from the pushforward 299 | # samples 300 | Z_pushforward = tm.map(map_input) 301 | 302 | # Create an array with replicates of the observation 303 | Y_star = np.repeat( 304 | a = observations[t,idx].reshape((1,1)), 305 | repeats = N, 306 | axis = 0) 307 | 308 | # Apply the inverse map 309 | ret = tm.inverse_map( 310 | X_star = Y_star, 311 | Z = Z_pushforward) 312 | 313 | # Undo the permutation of the states 314 | ret = ret[:,perm] 315 | 316 | # Save the result in the analysis array 317 | Xt[t,...] 
= copy.copy(ret) 318 | 319 | # --------------------------------------------------------------------- 320 | # Detemrine RMSE, make a forecast to the next timestep 321 | # --------------------------------------------------------------------- 322 | 323 | # Calculate ensemble mean RMSE 324 | RMSE = (np.mean(Xt[t,...],axis=0) - synthetic_truth[t,:])**2 325 | RMSE = np.mean(RMSE) 326 | RMSE = np.sqrt(RMSE) 327 | RMSE_list.append(RMSE) 328 | 329 | # After the analysis step, make a forecast to the next timestep 330 | if t < T-1: 331 | 332 | # Make a Lorenz forecast 333 | Xft[t+1,:,:] = rk4( 334 | Z = copy.copy(Xt[t,:,:]), 335 | fun = lorenz_dynamics, 336 | t = 0, 337 | dt = dti, 338 | nt = int(dt/dti)) 339 | 340 | # Plot the results 341 | plt.figure(figsize=(7,7)) 342 | plt.plot(RMSE_list,color='xkcd:grey') 343 | plt.xlabel('timestep') 344 | plt.ylabel('ensemble mean RMSE') 345 | plt.title('EnTF order '+str(maxorder_filter)+' | RMSE: '+"{:.3f}".format(np.mean(RMSE_list))) 346 | plt.savefig('01_RMSE_EnTF_order='+str(maxorder_filter)+'.png') 347 | 348 | """ 349 | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 350 | We resume the smoothing algorithm at this point. 351 | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 352 | """ 353 | 354 | #%% 355 | 356 | # ========================================================================= 357 | # Backward smoothing 358 | # ========================================================================= 359 | 360 | # The backward smoother conditions a joint distribution of states at 361 | # adjacent times on smoothed states passed down backward along the graph. 362 | # As such, the graph for the smoothing update has six nodes, one for the 363 | # three states at time s and at time s+1, and is dense, that is to say 364 | # there is no conditional independence to exploit. As this map differs in 365 | # structure from the one we used in filtering, we must create a new map. 
366 | 367 | # Define the map component functions 368 | if maxorder_smoother == 1: # Linear smoother 369 | 370 | nonmonotone_BWS = [ 371 | [[],[0],[1],[2]], 372 | [[],[0],[1],[2],[3]], 373 | [[],[0],[1],[2],[3],[4]]] 374 | 375 | monotone_BWS = [ 376 | [[3]], 377 | [[4]], 378 | [[5]]] 379 | 380 | else: # Nonlinear smoother 381 | 382 | nonmonotone_BWS = [ 383 | [[],[0]]+[[0]*od+['HF'] for od in np.arange(1,maxorder_smoother+1,1)] + [[1]]+[[1]*od+['HF'] for od in np.arange(1,maxorder_smoother+1,1)] + [[2]]+[[2]*od+['HF'] for od in np.arange(1,maxorder_smoother+1,1)], 384 | [[],[0]]+[[0]*od+['HF'] for od in np.arange(1,maxorder_smoother+1,1)] + [[1]]+[[1]*od+['HF'] for od in np.arange(1,maxorder_smoother+1,1)] + [[2]]+[[2]*od+['HF'] for od in np.arange(1,maxorder_smoother+1,1)] + [[3]]+[[3]*od+['HF'] for od in np.arange(1,maxorder_smoother+1,1)], 385 | [[],[0]]+[[0]*od+['HF'] for od in np.arange(1,maxorder_smoother+1,1)] + [[1]]+[[1]*od+['HF'] for od in np.arange(1,maxorder_smoother+1,1)] + [[2]]+[[2]*od+['HF'] for od in np.arange(1,maxorder_smoother+1,1)] + [[3]]+[[3]*od+['HF'] for od in np.arange(1,maxorder_smoother+1,1)] + [[4]]+[[4]*od+['HF'] for od in np.arange(1,maxorder_smoother+1,1)]] 386 | 387 | # Even for the nonlinear backward smoother, we keep the monotone terms 388 | # linear. This reduces computational demand by quite a bit, and often 389 | # still performs quite well. 
390 | monotone_BWS = [ 391 | [[3]], 392 | [[4]], 393 | [[5]]] 394 | 395 | # Delete the pre-existing map object 396 | if "tm" in globals(): 397 | del tm 398 | 399 | # Parameterize the transport map 400 | tm = transport_map( 401 | monotone = monotone_BWS, 402 | nonmonotone = nonmonotone_BWS, 403 | X = np.random.uniform(size=(N,int(2*D))), 404 | polynomial_type = "probabilist's hermite", 405 | monotonicity = "separable monotonicity", 406 | regularization = "l2", 407 | regularization_lambda = lmbda, 408 | verbose = False) 409 | 410 | # ========================================================================= 411 | # Now start the actual backward smoother 412 | # ========================================================================= 413 | 414 | # Let's set up an empty array for the samples obtained during smoothing. We 415 | # initialize it as a copy of the filtering analysis samples, but really we 416 | # only need the last marginal of the filtering samples (Xt[-1,...]). 417 | Xst = copy.copy(Xt) 418 | 419 | # Create a list for the ensemble mean RMSEs. The first smoothing marginal 420 | # is also the last filtering marginal, so they share an RMSE value. 421 | RMSE_list_smoother = [RMSE_list[-1]] 422 | 423 | # Go backward through time 424 | for t in range(T-2,-1,-1): 425 | 426 | # Print the update 427 | print('Smoothing: timestep '+str(t+1).zfill(4)+'|'+str(T).zfill(4)) 428 | 429 | # The joint distributions conditioned by the backward smoother are 430 | # constructed from filtering analysis and forecast samples obtained 431 | # during the preceding filtering pass. 
432 | map_input = copy.copy(np.column_stack(( 433 | Xft[t+1,:,:], # First D dimensions: filtering forecast 434 | Xt[t,...]))) # Next D dimensions: filtering analysis 435 | 436 | # Reset the map 437 | tm.reset(copy.copy(map_input)) 438 | 439 | # Start optimizing the transport map 440 | tm.optimize() 441 | 442 | # --------------------------------------------------------------------- 443 | # Composite map 444 | # --------------------------------------------------------------------- 445 | 446 | # We condition on previous smoothing samples 447 | X_star = copy.copy(Xst[t+1,...]) 448 | 449 | # The composite map uses reference samples 450 | # from an application of the forward map 451 | Z_pushforward = tm.map(map_input) 452 | 453 | # Invert / Condition the map 454 | ret = tm.inverse_map( 455 | X_star = X_star, 456 | Z = Z_pushforward) 457 | 458 | # Copy the results into the smoothing array 459 | Xst[t,...] = copy.copy(ret) 460 | 461 | # Calculate RMSE 462 | RMSE = (np.mean(Xst[t,...],axis=0) - synthetic_truth[t,:])**2 463 | RMSE = np.mean(RMSE) 464 | RMSE = np.sqrt(RMSE) 465 | RMSE_list_smoother.append(RMSE) 466 | 467 | # Plot the results 468 | plt.figure(figsize=(7,7)) 469 | plt.plot(RMSE_list,color='xkcd:grey',label='Ensemble Transport Filter') 470 | plt.plot(np.flip(RMSE_list_smoother),color='xkcd:orangish red',label='Ensemble Transport Smoother') 471 | plt.xlabel('timestep') 472 | plt.ylabel('ensemble mean RMSE') 473 | plt.legend() 474 | plt.title('EnTF order '+str(maxorder_filter)+' | RMSE: '+"{:.3f}".format(np.mean(RMSE_list))+' | EnTS order '+str(maxorder_smoother)+' | RMSE: '+"{:.3f}".format(np.mean(RMSE_list_smoother))) 475 | plt.savefig('02_RMSE_EnTS_order='+str(maxorder_filter)+'_smoother_order='+str(maxorder_smoother)+'.png') -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | 
http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 
39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [2022] [Maximilian Ramgraber] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Triangular Transport Toolbox 2 | 3 | 4 | 5 | This repository contains the code for my triangular transport implementation. To use it in your own code, simply download the file `transport_map.py`, copy it into your working directory, and import the class `transport_map` into your Python code with the line `from transport_map import *`. That's it! 6 | 7 | The practical use and capabilities of this toolbox are illustrated in a number of example files: 8 | 9 | - **Examples A - spiral distribution** illustrates the basic use of the map, from the parameterization of a transport map object to its use for forward mapping, inverse mapping, and conditional sampling. 10 | - **Examples B - statistical inference** builds on the previously established basics to illustrate the use of transport methods for statistical inference.
The first example examines statistical dependencies between temperatures in two cities; the second demonstrates Bayesian parameter inference for Monod kinetics. 11 | - **Examples C - data assimilation** demonstrates the use of transport maps for Bayesian filtering and smoothing, using the chaotic Lorenz-63 system. These examples also introduce the use of map regularization, the option to separate the map update, and the exploitation of conditional independence. 12 | 13 | --- 14 | 15 | If you're curious about other triangular transport libraries, I recommend checking out the [**M**onotone **Par**ameterization **T**oolbox MParT](https://measuretransport.github.io/MParT/). It is a joint effort by colleagues from MIT and Dartmouth to create an efficient toolbox for the monotone part of the transport map, realized in C++ for computational efficiency, with bindings to a wide variety of programming languages (Python, MATLAB, Julia). 16 | -------------------------------------------------------------------------------- /figures/spiral_animated.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MaxRamgraber/Triangular-Transport-Toolbox/26155c22279b727ded570675689e7ed69703daf6/figures/spiral_animated.gif --------------------------------------------------------------------------------
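The forward-map / conditioning workflow that `example_06.py` performs with `tm.map` and `tm.inverse_map` has a closed-form analogue in the linear (Gaussian) case, where the lower-triangular map is simply the inverse Cholesky factor of the covariance. The sketch below is only an illustration of that idea under this Gaussian assumption, not the toolbox API: `rho`, `x1_star`, and all other names are hypothetical, and the polynomial maps fitted by `tm.optimize()` replace the Cholesky factor in the actual examples.

```python
import numpy as np

rng = np.random.default_rng(0)

rho = 0.8                                  # correlation between x1 and x2
cov = np.array([[1.0, rho], [rho, 1.0]])
L = np.linalg.cholesky(cov)                # lower-triangular inverse map

# Forward map: push target samples to the standard-normal reference,
# S(x) = L^{-1} x (the linear analogue of tm.map)
X = rng.multivariate_normal(np.zeros(2), cov, size=50_000)
Z = np.linalg.solve(L, X.T).T

# Conditional sampling (the "composite map" idea): fix x1 = x1_star, keep
# the matching reference coordinate z1, redraw z2 from the reference, and
# invert only the second map component (the linear analogue of
# tm.inverse_map with X_star)
x1_star = 1.0
z1 = x1_star / L[0, 0]
z2 = rng.standard_normal(50_000)
x2_cond = L[1, 0] * z1 + L[1, 1] * z2      # samples from p(x2 | x1 = x1_star)

# For a bivariate Gaussian, E[x2 | x1] = rho * x1, so the sample mean
# should be close to 0.8 here
print(np.mean(x2_cond))
```

Because the map is triangular, conditioning only requires inverting the components below the conditioned block — the same structural property the toolbox exploits for its nonlinear polynomial maps.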