├── code
│   ├── requirements.txt
│   ├── data
│   │   ├── Gravity_model.xlsx
│   │   ├── Edges_GLSN_2015.xlsx
│   │   ├── Edges_GLSN_2017.xlsx
│   │   ├── TV_GDP_LSCI_2015.xlsx
│   │   ├── TV_GDP_LSCI_2017.xlsx
│   │   └── Gravity_model_lsbci.xlsx
│   ├── expected output
│   │   ├── Regression_Variables_2015.xlsx
│   │   ├── Regression_Variables_2017.xlsx
│   │   ├── Table 2. Results for multivariate linear regressions....xlsx
│   │   ├── Table 3. Multivariate regression results for gravity models....xlsx
│   │   ├── Supplementary Table S3. Results for multivariate linear regressions....xlsx
│   │   ├── Supplementary Table S6. Multivariate regression results for gravity models....xlsx
│   │   ├── Supplementary Table S1. Results for multivariate linear regressions when the export value....xlsx
│   │   ├── Supplementary Table S2. Coefficients for representative multivariate linear regression models....xlsx
│   │   ├── Table 1. Pearson correlation coefficient between the trade value and each explanatory variable....xlsx
│   │   ├── Supplementary Table S5. Regressions of countries’ trade value change between years 2015 and 2018....xlsx
│   │   └── Supplementary Table S4. Results for multivariate linear regressions when the dependent variable is the trade value in 2017....xlsx
│   ├── configure.py
│   ├── main.py
│   ├── README.md
│   └── src
│       ├── Validation2017.py
│       ├── CalculatePearsonCorrelationCoefficient.py
│       ├── TradeValueChanges.py
│       ├── CalculateExplanatoryVariables.py
│       ├── GravityModel.py
│       └── MultivariateLinearRegression.py
├── README.md
└── LICENSE
/code/requirements.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/requirements.txt --------------------------------------------------------------------------------
/code/data/Gravity_model.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/data/Gravity_model.xlsx --------------------------------------------------------------------------------
/code/data/Edges_GLSN_2015.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/data/Edges_GLSN_2015.xlsx --------------------------------------------------------------------------------
/code/data/Edges_GLSN_2017.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/data/Edges_GLSN_2017.xlsx --------------------------------------------------------------------------------
/code/data/TV_GDP_LSCI_2015.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/data/TV_GDP_LSCI_2015.xlsx -------------------------------------------------------------------------------- 
/code/data/TV_GDP_LSCI_2017.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/data/TV_GDP_LSCI_2017.xlsx -------------------------------------------------------------------------------- /code/data/Gravity_model_lsbci.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/data/Gravity_model_lsbci.xlsx -------------------------------------------------------------------------------- /code/expected output/Regression_Variables_2015.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/expected output/Regression_Variables_2015.xlsx -------------------------------------------------------------------------------- /code/expected output/Regression_Variables_2017.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/expected output/Regression_Variables_2017.xlsx -------------------------------------------------------------------------------- /code/expected output/Table 2. Results for multivariate linear regressions....xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/expected output/Table 2. Results for multivariate linear regressions....xlsx -------------------------------------------------------------------------------- /code/expected output/Table 3. 
Multivariate regression results for gravity models....xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/expected output/Table 3. Multivariate regression results for gravity models....xlsx -------------------------------------------------------------------------------- /code/expected output/Supplementary Table S3. Results for multivariate linear regressions....xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/expected output/Supplementary Table S3. Results for multivariate linear regressions....xlsx -------------------------------------------------------------------------------- /code/expected output/Supplementary Table S6. Multivariate regression results for gravity models....xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/expected output/Supplementary Table S6. Multivariate regression results for gravity models....xlsx -------------------------------------------------------------------------------- /code/expected output/Supplementary Table S1. Results for multivariate linear regressions when the export value....xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/expected output/Supplementary Table S1. Results for multivariate linear regressions when the export value....xlsx -------------------------------------------------------------------------------- /code/expected output/Supplementary Table S2. 
Coefficients for representative multivariate linear regression models....xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/expected output/Supplementary Table S2. Coefficients for representative multivariate linear regression models....xlsx -------------------------------------------------------------------------------- /code/expected output/Table 1. Pearson correlation coefficient between the trade value and each explanatory variable....xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/expected output/Table 1. Pearson correlation coefficient between the trade value and each explanatory variable....xlsx -------------------------------------------------------------------------------- /code/expected output/Supplementary Table S5. Regressions of countries’ trade value change between years 2015 and 2018....xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/expected output/Supplementary Table S5. Regressions of countries’ trade value change between years 2015 and 2018....xlsx -------------------------------------------------------------------------------- /code/expected output/Supplementary Table S4. Results for multivariate linear regressions when the dependent variable is the trade value in 2017....xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Network-Maritime-Complexity/GLSN-and-international-trade/HEAD/code/expected output/Supplementary Table S4. 
Results for multivariate linear regressions when the dependent variable is the trade value in 2017....xlsx -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Estimating international trade status of countries from global liner shipping networks 2 | 3 | This repository contains all the code and data used in our article. 4 | 5 | Please cite the article if you use the code. 6 | 7 | ## arXiv preprint 8 | 9 | https://arxiv.org/abs/2001.07688 10 | 11 | ## Contact 12 | 13 | * Mengqiao Xu: 14 | * Qian Pan: 15 | 16 | ## Zenodo repository 17 | https://doi.org/10.5281/zenodo.4018584 18 | -------------------------------------------------------------------------------- /code/configure.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Created on 2020/2/4 4 | Python 3.6 5 | 6 | @author: Qian Pan 7 | @e-mail: qianpan_93@163.com 8 | """ 9 | 10 | 11 | import os 12 | import time 13 | import pandas as pd 14 | import numpy as np 15 | import networkx as nx 16 | from scipy import stats 17 | import statsmodels.api as sm 18 | from statsmodels.stats.outliers_influence import variance_inflation_factor 19 | import itertools 20 | import warnings 21 | 22 | warnings.filterwarnings('ignore') 23 | 24 | data_path = 'data/' 25 | save_path = 'output/' 26 | 27 | os.makedirs(save_path, exist_ok=True) 28 | -------------------------------------------------------------------------------- /code/main.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Created on 2020/2/5 4 | Python 3.6 5 | 6 | @author: Qian Pan 7 | @e-mail: qianpan_93@163.com 8 | """ 9 | 10 | 11 | if __name__ == "__main__": 12 | import time 13 | start_time = time.perf_counter() 14 | print() 15 | 
print('**************************************** RUN TIME WARNING ****************************************') 16 | print('The whole experiment takes approximately 70 minutes.') 17 | print() 18 | print('======================================================================================================') 19 | print('Output:') 20 | print() 21 | from src import CalculateExplanatoryVariables 22 | from src import CalculatePearsonCorrelationCoefficient 23 | from src import MultivariateLinearRegression 24 | from src import Validation2017 25 | from src import TradeValueChanges 26 | from src import GravityModel 27 | 28 | CalculateExplanatoryVariables.startup() 29 | CalculatePearsonCorrelationCoefficient.startup() 30 | MultivariateLinearRegression.startup() 31 | Validation2017.startup() 32 | TradeValueChanges.startup() 33 | GravityModel.startup() 34 | 35 | print('======================================================================================================') 36 | print() 37 | print('Code performance: {:.0f}s.'.format(time.perf_counter() - start_time)) 38 | -------------------------------------------------------------------------------- /code/README.md: -------------------------------------------------------------------------------- 1 | # Estimating international trade status of countries from global liner shipping networks 2 | 3 | # Overview 4 | 5 | The repository allows one to reproduce the results reported in the manuscript. The repository is organized as follows: 6 | 7 | | Subdirectory | Description | 8 | | --- | --- | 9 | | **data** | This folder contains all the data used and generated in our study. | 10 | | **expected output** | This folder contains the expected results of the analysis. | 11 | | **src** | This folder contains all the scripts used for calculation, including:
  • the script "[CalculateExplanatoryVariables.py](./src/CalculateExplanatoryVariables.py)" for calculating the explanatory variables,
  • the script "[CalculatePearsonCorrelationCoefficient.py](./src/CalculatePearsonCorrelationCoefficient.py)" for calculating the Pearson correlation coefficients,
  • the script "[MultivariateLinearRegression.py](./src/MultivariateLinearRegression.py)" for running the multivariate linear regressions,
  • the script "[Validation2017.py](./src/Validation2017.py)" for validation using the dataset of 2017,
  • the script "[TradeValueChanges.py](./src/TradeValueChanges.py)" for estimating countries’ trade value changes from the GLSN betweenness,
  • the script "[GravityModel.py](./src/GravityModel.py)" for comparison with the gravity model.
  | 12 | | **output** | After running a script, you will get a folder named "output". Results will be saved in this folder. Please see [How to Use](#How-to-Use) below. | 13 | 14 | # System Requirements 15 | 16 | ## OS Requirements 17 | 18 | These scripts have been tested on the *Windows 10* operating system. 19 | 20 | ### Installing Python on Windows 21 | 22 | Before setting up the package, users should have Python 3.6 or higher installed, along with the packages listed under Package dependencies. The latest version of Python can be downloaded from the official website: https://www.python.org/ 23 | 24 | ## Hardware Requirements 25 | 26 | The package requires only a standard computer with enough RAM to support the operations defined by a user. For minimal performance, this will be a computer with about 4 GB of RAM. For optimal performance, we recommend a computer with the following specs: 27 | 28 | RAM: 8+ GB 29 | CPU: 4+ cores, 3.4+ GHz/core 30 | 31 | The runtimes below were generated on a computer with the recommended specs (8 GB RAM, 4 cores @ 3.4 GHz). 32 | 33 | # Installation Guide 34 | 35 | ## Package dependencies 36 | 37 | Users should install the following packages prior to running the code: 38 | 39 | ``` 40 | networkx==2.3 41 | numpy==1.17.4 42 | openpyxl==2.6.2 43 | pandas==0.25.0 44 | scipy==1.3.1 45 | statsmodels==0.10.1 46 | xlrd==1.2.0 47 | xlwt==1.3.0 48 | ``` 49 | 50 | For a *Windows 10* operating system, users can install the packages as follows: 51 | 52 | To install all the packages, open the *cmd* window in the root folder and type: 53 | 54 | ``` 55 | pip install -r requirements.txt 56 | ``` 57 | 58 | To install only one of the packages, type: 59 | 60 | ``` 61 | pip install pandas==0.25.0 62 | ``` 63 | 64 | # How to Use 65 | 66 | The script [`main.py`](main.py) is used for reproducing the results reported in the manuscript. 
Open the *cmd* window in the root folder, then run the following command: 67 | 68 | ``` 69 | python main.py 70 | ``` 71 | 72 | The results will be saved in a folder called "output". 73 | 74 | ### Code performance 75 | 76 | It takes approximately 70 minutes to reproduce the results reported in the manuscript, using a computer with the recommended specs (8 GB RAM, 4 cores @ 3.4 GHz). 77 | 78 | # Contact 79 | 80 | * Mengqiao Xu: 81 | * Qian Pan: 82 | 83 | # Acknowledgement 84 | 85 | We thank our lab member for carefully testing the code: 86 | 87 | - Jia Song: 88 | -------------------------------------------------------------------------------- /code/src/Validation2017.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Created on 2020/2/4 4 | Python 3.6 5 | 6 | @author: Qian Pan 7 | @e-mail: qianpan_93@163.com 8 | """ 9 | 10 | 11 | from configure import * 12 | 13 | 14 | def z_score(x): 15 | x = (x - np.mean(x, axis=0)) / np.std(x, ddof=1, axis=0) 16 | return x 17 | 18 | 19 | def cal_pearson_r(x, y): 20 | pr = stats.pearsonr(x, y) 21 | corr = round(pr[0], 3) 22 | pval = pr[1] 23 | pmarker = pval_marker(pval) 24 | return corr, pmarker 25 | 26 | 27 | def pval_marker(pval): 28 | if pval < 0.001: 29 | pmarker = '**' 30 | elif 0.001 <= pval < 0.01: 31 | pmarker = '*' 32 | elif 0.01 <= pval < 0.05: 33 | pmarker = '+' 34 | elif pval >= 0.05: 35 | pmarker = 'NaN' 36 | else: 37 | pmarker = pval 38 | return pmarker 39 | 40 | 41 | def tables4(): 42 | dataset = '2017' 43 | dataname = 'Regression_Variables_' + dataset + '.xlsx' 44 | data = pd.read_excel(save_path + dataname) 45 | data.rename(columns={'GLSN betweenness (Lmax=2)': 'Gb', 'GLSN connectivity (Edge weight=None)': 'Gc', 46 | 'LSCI': 'L', 'Freeman betweenness': 'Fb'}, inplace=True) 47 | y_col = 'Trade value' 48 | y = data[y_col] 49 | y = z_score(y) 50 | 51 | param_cols = ['Gc', 'Gb', 'Fb', 'L'] 52 | N = len(param_cols) 53 | 54 | list_nobs = [] 55 | 
list_adj_r2 = [] 56 | list_fpval = [] 57 | list_vif = [] 58 | list_aic = [] 59 | list_col_idx = [] 60 | for n in np.arange(1, N+1, 1): 61 | x_cols = list(itertools.combinations(param_cols, r=int(n))) 62 | for x in x_cols: 63 | sep = ', ' 64 | xname = sep.join(x) 65 | list_col_idx.append(xname) 66 | xs = list(x) 67 | x = z_score(data[xs]) 68 | X = sm.add_constant(x) 69 | model = sm.OLS(y, X) 70 | fit_res = model.fit() 71 | # print(fit_res.summary()) 72 | 73 | list_adj_r2.append(fit_res.rsquared_adj) 74 | f_pvalue = pval_marker(fit_res.f_pvalue) 75 | list_fpval.append(f_pvalue) 76 | list_nobs.append(fit_res.nobs) 77 | 78 | list_vif.append([round(variance_inflation_factor(X.values, i), 2) for i in range(X.shape[1])]) 79 | 80 | # AIC 81 | predict_y = fit_res.predict() 82 | RSS = sum((y - predict_y) ** 2) 83 | num_obs = fit_res.nobs 84 | K = fit_res.df_model + fit_res.k_constant 85 | aic = round(num_obs * np.log(RSS / num_obs) + 2 * K, 2) 86 | list_aic.append(aic) 87 | 88 | cols = ['variable' + str(i) for i in range(1, N+1)] 89 | cols.insert(0, 'const') 90 | 91 | df_vif = pd.DataFrame(list_vif, columns=cols) 92 | 93 | v_cols = [col for col in df_vif.columns if 'variable' in col] 94 | max_vif = df_vif[v_cols].max(axis=1) 95 | 96 | df_res = pd.DataFrame() 97 | df_res['Adjusted R2'] = list_adj_r2 98 | df_res['p-value'] = list_fpval 99 | df_res['AIC'] = list_aic 100 | df_res['Max VIF'] = max_vif 101 | df_res['# observations'] = list_nobs 102 | df_res.index = pd.Series(list_col_idx) 103 | 104 | sheetname = "Supplementary Table S4" 105 | filename = 'Supplementary Table S4. 
Results for multivariate linear regressions when the dependent variable is ' \ 106 | 'the trade value in 2017....xlsx' 107 | df_res.to_excel(save_path + filename, sheet_name=sheetname, 108 | float_format='%.3f', index=True, index_label='Explanatory variable') 109 | print('The result file "{}" saved at: "{}"'.format(filename, save_path)) 110 | print() 111 | 112 | 113 | def startup(): 114 | tables4() 115 | -------------------------------------------------------------------------------- /code/src/CalculatePearsonCorrelationCoefficient.py: -------------------------------------------------------------------------------- 1 | #! python3 2 | # -*- coding: utf-8 -*- 3 | """ 4 | 5 | Created on 2019/7/31 6 | 7 | Code Performance: 8 | 9 | 10 | @author: Qian Pan 11 | @e-mail: qianpan_93@163.com 12 | """ 13 | 14 | 15 | from configure import * 16 | 17 | 18 | def cal_pearson_r(x, y): 19 | pr = stats.pearsonr(x, y) 20 | corr = round(pr[0], 3) 21 | pval = pr[1] 22 | pmarker = pval_marker(pval) 23 | 24 | return corr, pval, pmarker 25 | 26 | 27 | def pval_marker(pval): 28 | if pval < 0.001: 29 | pmarker = '**' 30 | elif 0.001 <= pval < 0.01: 31 | pmarker = '*' 32 | elif 0.01 <= pval < 0.05: 33 | pmarker = '+' 34 | elif pval >= 0.05: 35 | pmarker = np.nan 36 | else: 37 | pmarker = pval 38 | return pmarker 39 | 40 | 41 | def cal_pr1(data): 42 | x_cols = [ 43 | 'GLSN connectivity (Edge weight=None)', 44 | 'GLSN connectivity (Edge weight=1)', 45 | 'GLSN connectivity (Edge weight=1/(n-1))', 46 | 'GLSN connectivity (Edge weight=1/[n(n-1)/2])', 47 | 'GLSN connectivity (Edge weight=C)', 48 | 'GLSN connectivity (Edge weight=C/(n-1))', 49 | 'GLSN connectivity (Edge weight=C/[n(n-1)/2])', 50 | 'Normalized GLSN connectivity (Edge weight=None)', 51 | 'Normalized GLSN connectivity (Edge weight=1)', 52 | 'Normalized GLSN connectivity (Edge weight=1/(n-1))', 53 | 'Normalized GLSN connectivity (Edge weight=1/[n(n-1)/2])', 54 | 'Normalized GLSN connectivity (Edge weight=C)', 55 | 'Normalized GLSN 
connectivity (Edge weight=C/(n-1))', 56 | 'Normalized GLSN connectivity (Edge weight=C/[n(n-1)/2])', 57 | 'GLSN betweenness (Lmax=2)', 'GLSN betweenness (Lmax=3)', 58 | 'GLSN betweenness (Lmax=4)', 'GLSN betweenness (Lmax=5)', 59 | 'Freeman betweenness', 'normalized Freeman betweenness', 'LSCI'] 60 | 61 | y_col = 'Trade value' 62 | y = data[y_col] 63 | 64 | list_corr = [] 65 | list_pval = [] 66 | list_pmk = [] 67 | for x_col in x_cols: 68 | x = data[x_col] 69 | corr = cal_pearson_r(x, y)[0] 70 | pval = cal_pearson_r(x, y)[1] 71 | pmk = cal_pearson_r(x, y)[2] 72 | list_corr.append(corr) 73 | list_pval.append(pval) 74 | list_pmk.append(pmk) 75 | df_res = pd.DataFrame() 76 | df_res['Variable'] = x_cols 77 | df_res['r'] = list_corr 78 | df_res['p-value'] = list_pval 79 | df_res['superscript'] = list_pmk 80 | 81 | sheet_name = 'Table 1' 82 | filename = 'Table 1. Pearson correlation coefficient between the trade value and each ' \ 83 | 'explanatory variable....xlsx' 84 | writer = pd.ExcelWriter(save_path + filename) 85 | df_res.to_excel(writer, sheet_name=sheet_name, index=False) 86 | writer.save() 87 | writer.close() 88 | print('The result file "{}" saved at: "{}"'.format(filename, save_path)) 89 | print() 90 | 91 | 92 | def cal_pr2(data): 93 | data.rename(columns={'GLSN connectivity (Edge weight=None)': 'GLSN connectivity', 94 | 'GLSN betweenness (Lmax=2)': 'GLSN betweenness'}, inplace=True) 95 | x_cols = ['GLSN connectivity', 'GLSN betweenness', 'Freeman betweenness', 'LSCI'] 96 | 97 | y_cols = ['Export value', 'Import value', 'Net export', 'GDP'] 98 | print('******************************************************************') 99 | print('The in-text result:') 100 | print('Subsection titled "Estimating countries\' trade values by multivariate linear regression"') 101 | print('Section titled "Results"') 102 | print('******************************************************************') 103 | for y_col in y_cols: 104 | print('=========== Pearson correlation coefficient 
between the {} and each explanatory variable ==========='.format(y_col)) 105 | y = data[y_col] 106 | 107 | list_corr = [] 108 | list_pval = [] 109 | list_pmk = [] 110 | for x_col in x_cols: 111 | x = data[x_col] 112 | corr = cal_pearson_r(x, y)[0] 113 | pval = cal_pearson_r(x, y)[1] 114 | pmk = cal_pearson_r(x, y)[2] 115 | list_corr.append(corr) 116 | list_pval.append(pval) 117 | list_pmk.append(pmk) 118 | df_res = pd.DataFrame() 119 | df_res['Variable'] = x_cols 120 | df_res['r'] = list_corr 121 | df_res['p-value'] = list_pval 122 | df_res['superscript'] = list_pmk 123 | print(df_res) 124 | print() 125 | 126 | 127 | def startup(): 128 | datasets = ['2015'] 129 | for dataset in datasets: 130 | dataname = 'Regression_Variables_' + dataset + '.xlsx' 131 | data = pd.read_excel(save_path + dataname) 132 | cal_pr1(data) 133 | cal_pr2(data) 134 | -------------------------------------------------------------------------------- /code/src/TradeValueChanges.py: -------------------------------------------------------------------------------- 1 | #! 
python3 2 | # -*- coding: utf-8 -*- 3 | """ 4 | Created on 2019/5/9 5 | 6 | Note: 7 | 8 | Code Performance: 9 | 10 | 11 | @author: Qian Pan 12 | @e-mail: qianpan_93@163.com 13 | """ 14 | 15 | 16 | from configure import * 17 | 18 | 19 | def z_score(x): 20 | x = (x - np.mean(x, axis=0)) / np.std(x, ddof=1, axis=0) 21 | 22 | return x 23 | 24 | 25 | def cal_pearson_r(x, y): 26 | pr = stats.pearsonr(x, y) 27 | corr = round(pr[0], 3) 28 | pval = pr[1] 29 | pmarker = pval_marker(pval) 30 | return corr, pmarker 31 | 32 | 33 | def pval_marker(pval): 34 | if pval < 0.001: 35 | pmarker = '**' 36 | elif 0.001 <= pval < 0.01: 37 | pmarker = '*' 38 | elif 0.01 <= pval < 0.05: 39 | pmarker = '+' 40 | elif pval >= 0.05: 41 | pmarker = 'NaN' 42 | else: 43 | pmarker = pval 44 | return pmarker 45 | 46 | 47 | def my_zip(*args, fillvalue=None): 48 | from itertools import repeat 49 | # zip_longest('ABCD', 'xy', fillvalue='-') --> Ax By C- D- 50 | iterators = [iter(it) for it in args] 51 | num_active = len(iterators) 52 | if not num_active: 53 | return 54 | while True: 55 | values = [] 56 | for i, it in enumerate(iterators): 57 | try: 58 | value = next(it) 59 | except StopIteration: 60 | num_active -= 1 61 | if not num_active: 62 | return 63 | iterators[i] = repeat(fillvalue) 64 | value = fillvalue 65 | values.append(value) 66 | yield list(values) 67 | 68 | 69 | def tvc(): 70 | dataset = '2015' 71 | dataname = 'Regression_Variables_' + dataset + '.xlsx' 72 | data = pd.read_excel(save_path + dataname) 73 | data.rename(columns={'GLSN connectivity (Edge weight=None)': 'Gc', 'GLSN betweenness (Lmax=2)': 'Gb', 74 | 'Freeman betweenness': 'Fb', 'LSCI': 'L', 'Trade value': 'Tv'}, inplace=True) 75 | data = data[data['TvChange'] == 'T'] 76 | 77 | param_cols = ['Tv', 'Gc', 'Gb', 'Fb', 'L'] 78 | y_col = 'Trade value(2018-2015)' 79 | y = data[y_col] 80 | N = len(param_cols) 81 | 82 | list_adj_r2 = [] 83 | list_fpval = [] 84 | list_vif = [] 85 | list_aic = [] 86 | dict_conf_ints = {} 87 | list_nobs = 
[] 88 | list_n_variables = [] 89 | list_col_idx = [] 90 | dict_param_coef = {} 91 | dict_se = {} 92 | dict_tpval = {} 93 | ix = 0 94 | for n in np.arange(1, N+1, 1): 95 | x_cols = list(itertools.combinations(param_cols, r=int(n))) 96 | for x in x_cols: 97 | sep = ', ' 98 | xname = sep.join(x) 99 | list_col_idx.append(xname) 100 | y = z_score(y) 101 | xs = list(x) 102 | x = z_score(data[xs]) 103 | 104 | X = sm.add_constant(x) 105 | model = sm.OLS(y, X) 106 | fit_res = model.fit() 107 | 108 | list_n_variables.append(n) 109 | list_nobs.append(fit_res.nobs) 110 | list_adj_r2.append(fit_res.rsquared_adj) 111 | 112 | f_pvalue = pval_marker(fit_res.f_pvalue) 113 | list_fpval.append(f_pvalue) 114 | 115 | coefs = fit_res.params.values.tolist() 116 | param = fit_res.params.index.tolist() 117 | ses = fit_res.bse.values.tolist() 118 | tpvals = fit_res.pvalues.values.tolist() 119 | 120 | lower_conf_int = round(fit_res.conf_int(alpha=0.05)[0], 3).values.tolist() 121 | upper_conf_int = round(fit_res.conf_int(alpha=0.05)[1], 3).values.tolist() 122 | conf_int = list(my_zip(lower_conf_int, upper_conf_int)) 123 | dict_conf_ints[ix] = dict(zip(param, conf_int)) 124 | 125 | dict_param_coef[ix] = dict(zip(param, coefs)) 126 | dict_se[ix] = dict(zip(param, ses)) 127 | dict_tpval[ix] = dict(zip(param, tpvals)) 128 | 129 | list_vif.append([round(variance_inflation_factor(X.values, i), 2) for i in range(X.shape[1])]) 130 | 131 | # AIC 132 | predict_y = fit_res.predict() 133 | RSS = sum((y - predict_y) ** 2) 134 | num_obs = fit_res.nobs 135 | K = fit_res.df_model + fit_res.k_constant 136 | aic = round(num_obs * np.log(RSS / num_obs) + 2 * K, 2) 137 | list_aic.append(aic) 138 | 139 | ix += 1 140 | 141 | df_coef = pd.DataFrame(dict_param_coef).T 142 | df_conf_ints = pd.DataFrame(dict_conf_ints).T 143 | df_conf_ints = round(df_conf_ints, 3) 144 | df_tpval = pd.DataFrame(dict_tpval).T 145 | 146 | for col in df_tpval.columns: 147 | for ix in df_tpval.index: 148 | df_tpval.loc[ix, col] = 
pval_marker(df_tpval.loc[ix, col]) 149 | 150 | df_res = pd.concat([df_coef, df_tpval, df_conf_ints], axis=1, 151 | keys=['Coefficient', 'pval', 'confidence interval']) 152 | 153 | cols = ['variable' + str(i) for i in range(1, N + 1)] 154 | cols.insert(0, 'const') 155 | df_vif = pd.DataFrame(list_vif, columns=cols) 156 | v_cols = [col for col in df_vif.columns if 'variable' in col] 157 | max_vif = df_vif[v_cols].max(axis=1) 158 | 159 | df_res['Adjusted R2'] = list_adj_r2 160 | df_res['p-value'] = list_fpval 161 | df_res['AIC'] = list_aic 162 | df_res['Max VIF'] = max_vif 163 | df_res['# observations'] = list_nobs 164 | df_res.index = pd.Series(list_col_idx) 165 | df_res.fillna('—', inplace=True) 166 | 167 | filename = 'Supplementary Table S5. Regressions of countries’ trade value change between years 2015 and 2018....xlsx' 168 | sheetname = 'Supplementary Table S5' 169 | df_res.to_excel(save_path + filename, sheet_name=sheetname, 170 | freeze_panes=(3, 1), float_format='%.3f', index=True, 171 | index_label='Explanatory variable') 172 | print('The result file "{}" saved at: "{}"'.format(filename, save_path)) 173 | print() 174 | 175 | 176 | def startup(): 177 | tvc() 178 | -------------------------------------------------------------------------------- /code/src/CalculateExplanatoryVariables.py: -------------------------------------------------------------------------------- 1 | #! 
python3 2 | # -*- coding: utf-8 -*- 3 | """ 4 | 5 | Created on: 2020/1/16 6 | 7 | Code Performance: 8 | 9 | 10 | @author: Qian Pan 11 | @e-mail: qianpan_93@163.com 12 | """ 13 | 14 | 15 | from configure import * 16 | 17 | 18 | def calculate_glsn_connectivity(df_edges, df_nodes): 19 | num_ports = df_nodes.groupby('Country Code', as_index=False)['id'].count() 20 | dict_num_ports = dict(zip(num_ports['Country Code'], num_ports['id'])) 21 | 22 | df_edges = df_edges[df_edges['CountryCode_port1'] != df_edges['CountryCode_port2']] 23 | 24 | df_edges_copy = df_edges.copy() 25 | s_cols = [col for col in df_edges.columns if 'port1_id' in col or 'CountryCode_port1' in col] 26 | t_cols = [col for col in df_edges.columns if 'port2_id' in col or 'CountryCode_port2' in col] 27 | tmp = df_edges[s_cols] 28 | df_edges_copy[s_cols] = df_edges[t_cols] 29 | df_edges_copy[t_cols] = tmp 30 | df_edges = pd.concat([df_edges, df_edges_copy], axis=0) 31 | 32 | ew_cols = [col for col in df_edges.columns if 'Edge weight=' in col] 33 | df_gc = df_edges.groupby('CountryCode_port1', as_index=False)[ew_cols].sum() 34 | 35 | df_ew_none = df_edges.groupby('CountryCode_port1', as_index=False)['CountryCode_port2'].count() 36 | df_gc = pd.merge(df_gc, df_ew_none, on=['CountryCode_port1']) 37 | df_gc.rename(columns={'CountryCode_port2': 'GLSN connectivity (Edge weight=None)', 38 | 'CountryCode_port1': 'Country Code'}, inplace=True) 39 | 40 | norm_cols = [] 41 | for col in ew_cols: 42 | new_col = 'GLSN connectivity (' + col + ')' 43 | df_gc.rename(columns={col: new_col}, inplace=True) 44 | norm_cols.append(new_col) 45 | 46 | df_gc['# ports'] = df_gc['Country Code'].apply(dict_num_ports.get) 47 | norm_cols.insert(0, 'GLSN connectivity (Edge weight=None)') 48 | for col in norm_cols: 49 | df_gc['Normalized ' + col] = round(df_gc[col] / df_gc['# ports'], 4) 50 | 51 | df_gc.drop(columns='# ports', inplace=True) 52 | 53 | return df_gc 54 | 55 | 56 | def calculate_glsn_betweennes(df_edges, df_nodes): 57 | 58 | 
def _dump_sp(df_edges, df_nodes): 59 | dict_port_country = dict(zip(df_nodes['id'], df_nodes['Country Code'])) 60 | graph = nx.from_pandas_edgelist(df_edges, 'port1_id', 'port2_id', edge_attr=None, create_using=nx.Graph()) 61 | nodelist = list(graph.nodes) 62 | list_spl = [] 63 | list_valid_sp = [] 64 | list_country = [] 65 | for i, port_s in enumerate(nodelist[:-1]): 66 | for port_t in nodelist[i + 1:]: 67 | all_sp = nx.all_shortest_paths(graph, source=port_s, target=port_t, weight=None) 68 | for path in all_sp: 69 | spl = len(path) - 1 70 | if spl == 1: 71 | continue 72 | else: 73 | country_list = list(map(dict_port_country.get, path)) 74 | inner_countries = country_list[1:-1] 75 | head_country = country_list[0] 76 | tail_country = country_list[-1] 77 | if (head_country in inner_countries) or (tail_country in inner_countries) \ 78 | or (head_country == tail_country): 79 | continue 80 | else: 81 | list_spl.append(spl) 82 | list_valid_sp.append(path) 83 | list_country.append(country_list) 84 | df_sp = pd.DataFrame(list_valid_sp) 85 | df_sp.columns = ['id' + str(col + 1) for col in df_sp.columns] 86 | df_sp['SPL'] = list_spl 87 | df_country = pd.DataFrame(list_country) 88 | df_country.columns = ['Country' + str(col + 1) for col in df_country.columns] 89 | df_sp = pd.concat([df_sp, df_country], axis=1) 90 | return df_sp 91 | 92 | def _cal_glsn_bc_spl2(df_sp): 93 | df_sp = df_sp[df_sp['SPL'] == 2] 94 | df_sp['Edge'] = df_sp['id1'].astype(str) + '--' + df_sp['id3'].astype(str) 95 | 96 | num_total_sp = df_sp.groupby(['Edge'], as_index=False)['id1'].count() 97 | dict_num_sp = dict(zip(num_total_sp['Edge'], num_total_sp['id1'])) 98 | 99 | df_gb = df_sp.groupby(['Edge', 'Country2'], as_index=False)['id1'].count() 100 | df_gb.rename(columns={'id1': '# country_SP', 'Country2': 'Country Code'}, inplace=True) 101 | df_gb['# SP'] = df_gb['Edge'].apply(dict_num_sp.get) 102 | df_gb['GLSN betweenness (SPL=2)'] = df_gb['# country_SP'] / df_gb['# SP'] 103 | df_gb = 
df_gb.groupby('Country Code', as_index=False)['GLSN betweenness (SPL=2)'].sum() 104 | dict_gb = dict(zip(df_gb['Country Code'], df_gb['GLSN betweenness (SPL=2)'])) 105 | return dict_gb 106 | 107 | def _cal_glsn_bc_spl(df_sp, spl): 108 | data = df_sp[df_sp['SPL'] == spl].copy()  # copy the filtered frame before adding columns, to avoid mutating a view (SettingWithCopyWarning) 109 | data['Edge'] = data['id1'].astype(str) + '--' + data['id' + str(spl + 1)].astype(str) 110 | if spl == 3: 111 | data['countries'] = data['Country2'].astype(str) + ',' + data['Country3'].astype(str) 112 | elif spl == 4: 113 | data['countries'] = data['Country2'].astype(str) + ',' + data['Country3'].astype(str) + \ 114 | ',' + data['Country4'].astype(str) 115 | else: 116 | data['countries'] = data['Country2'].astype(str) + ',' + data['Country3'].astype(str) + \ 117 | ',' + data['Country4'].astype(str) + ',' + data['Country5'].astype(str) 118 | 119 | data['countries'] = data['countries'].str.split(',') 120 | data['unique_countries'] = data['countries'].apply(pd.unique) 121 | df_country = pd.DataFrame(data['unique_countries'].values.tolist()) 122 | df_country.columns = ['Country' + str(col+2) for col in df_country.columns] 123 | 124 | inner_country_cols = df_country.columns 125 | data.drop(columns=inner_country_cols, inplace=True) 126 | data.index = range(0, len(data)) 127 | data = pd.concat([data, df_country], axis=1) 128 | 129 | num_total_sp = data.groupby(['Edge'], as_index=False)['id1'].count() 130 | dict_num_sp = dict(zip(num_total_sp['Edge'], num_total_sp['id1'])) 131 | 132 | df_gb_all = pd.DataFrame() 133 | for col in inner_country_cols: 134 | df_gb = data.groupby(['Edge', col], as_index=False)['id1'].count() 135 | df_gb.rename(columns={'id1': '# country_SP', col: 'Country Code'}, inplace=True) 136 | df_gb['# SP'] = df_gb['Edge'].apply(dict_num_sp.get) 137 | df_gb['GLSN betweenness (SPL=' + str(spl) + ')'] = df_gb['# country_SP'] / df_gb['# SP'] 138 | df_gb_all = pd.concat([df_gb_all, df_gb], axis=0) 139 | 140 | df_gb = df_gb_all.groupby('Country Code', as_index=False)['GLSN betweenness
(SPL=' + str(spl) + ')'].sum() 141 | dict_gb = dict(zip(df_gb['Country Code'], df_gb['GLSN betweenness (SPL=' + str(spl) + ')'])) 142 | 143 | return dict_gb 144 | 145 | def _merge_all(data): 146 | 147 | cols1 = ['GLSN betweenness (SPL=2)', 'GLSN betweenness (SPL=3)', 'GLSN betweenness (SPL=4)', 148 | 'GLSN betweenness (SPL=5)'] 149 | for i in range(2, 6): 150 | data['GLSN betweenness (Lmax=' + str(i) + ')'] = data[cols1[:i - 1]].sum(axis=1) 151 | 152 | data.drop(columns=['GLSN betweenness (SPL=2)', 'GLSN betweenness (SPL=3)', 153 | 'GLSN betweenness (SPL=4)', 'GLSN betweenness (SPL=5)'], inplace=True) 154 | return data 155 | 156 | df_country = df_nodes[['Country Code']].copy()  # copy the column slice so the in-place dedup and later assignments do not hit a view 157 | df_country.drop_duplicates(inplace=True) 158 | df_sp = _dump_sp(df_edges, df_nodes) 159 | dict_gb2 = _cal_glsn_bc_spl2(df_sp) 160 | df_country['GLSN betweenness (SPL=2)'] = df_country['Country Code'].apply(dict_gb2.get) 161 | 162 | max_spl = df_sp['SPL'].max() 163 | list_spl = range(3, max_spl + 1) 164 | for spl in list_spl: 165 | dict_gb = _cal_glsn_bc_spl(df_sp, spl) 166 | df_country['GLSN betweenness (SPL=' + str(spl) + ')'] = df_country['Country Code'].apply(dict_gb.get) 167 | 168 | df_country.fillna(0, inplace=True) 169 | df_gb_res = _merge_all(df_country) 170 | 171 | return df_gb_res 172 | 173 | 174 | def calculate_freeman_bc(df_edges, df_nodes): 175 | g = nx.from_pandas_edgelist(df_edges, 'port1_id', 'port2_id', create_using=nx.Graph()) 176 | dict_bc = nx.betweenness_centrality(g, normalized=False) 177 | df_nodes['Freeman betweenness'] = df_nodes['id'].apply(dict_bc.get) 178 | num_ports = df_nodes.groupby('Country Code', as_index=False)['id'].count() 179 | dict_num_ports = dict(zip(num_ports['Country Code'], num_ports['id'])) 180 | 181 | df_bc = df_nodes.groupby('Country Code', as_index=False)['Freeman betweenness'].sum() 182 | df_bc['# ports'] = df_bc['Country Code'].apply(dict_num_ports.get) 183 | df_bc['normalized Freeman betweenness'] = df_bc['Freeman betweenness'] / df_bc['#
ports'] 184 | df_bc.drop(columns='# ports', inplace=True) 185 | return df_bc 186 | 187 | 188 | def startup(): 189 | datasets = ['2015', '2017'] 190 | 191 | for dataset in datasets: 192 | edgedata = pd.read_excel(data_path + 'Edges_GLSN_' + dataset + '.xlsx') 193 | edgedata_copy = edgedata.copy() 194 | tmp = edgedata[['port1_id', 'CountryCode_port1']] 195 | edgedata_copy[['port1_id', 'CountryCode_port1']] = edgedata[['port2_id', 'CountryCode_port2']] 196 | edgedata_copy[['port2_id', 'CountryCode_port2']] = tmp 197 | 198 | edgedata_copy = pd.concat([edgedata, edgedata_copy], axis=0) 199 | nodedata = edgedata_copy[['port1_id', 'CountryCode_port1']].copy()  # copy the slice before the in-place dedup below (SettingWithCopyWarning) 200 | nodedata.drop_duplicates(inplace=True) 201 | nodedata.columns = ['id', 'Country Code'] 202 | 203 | gc = calculate_glsn_connectivity(edgedata, nodedata) 204 | bc = calculate_freeman_bc(edgedata, nodedata) 205 | gb = calculate_glsn_betweennes(edgedata, nodedata) 206 | 207 | df_res = pd.merge(gc, bc, on='Country Code') 208 | df_res = pd.merge(df_res, gb, on='Country Code') 209 | 210 | df_ec = pd.read_excel(data_path + 'TV_GDP_LSCI_' + dataset + '.xlsx') 211 | 212 | df_all = pd.merge(df_ec, df_res, on='Country Code') 213 | filename2 = 'Regression_Variables_' + dataset + '.xlsx' 214 | df_all.to_excel(save_path + filename2, index=False) 215 | print('The result file "{}" saved at: "{}"'.format(filename2, save_path)) 216 | print() 217 | -------------------------------------------------------------------------------- /code/src/GravityModel.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Created on 2020/1/17 4 | Python 3.6 5 | 6 | @author: Qian Pan 7 | @e-mail: qianpan_93@163.com 8 | """ 9 | 10 | 11 | from configure import * 12 | 13 | 14 | def gravity_model(): 15 | dataset = '2015' 16 | 17 | def _cal_corr(x, y): 18 | corr, pval = stats.pearsonr(x, y)  # compute once instead of calling stats.pearsonr twice 19 | corr, pval = round(corr, 3), round(pval, 3) 20 | 21 | return corr, pval 22 | 23 | dataname =
'Gravity_model.xlsx' 24 | df_btv = pd.read_excel(data_path + dataname) 25 | y_col = 'BTVij' 26 | y = np.log(df_btv[y_col]) 27 | 28 | x_cols = ['GDPi x GDPj', 'dij'] 29 | x = np.log(df_btv[x_cols]) 30 | X = sm.add_constant(x) 31 | model = sm.OLS(y, X) 32 | fit_res = model.fit() 33 | print('******************************************************************') 34 | print('The in-text result:') 35 | print("Location in the manuscript text: ") 36 | print('Subsection titled "Comparison with the gravity model"') 37 | print('Section titled "Results"') 38 | print('******************************************************************') 39 | print() 40 | print('"The model yielded an adjusted R^2 value of {:.3f}, where the qualified (i, j) pairs were regarded as samples."'.format(fit_res.rsquared_adj)) 41 | print() 42 | predict_btv = np.exp(fit_res.fittedvalues) 43 | df_btv['predicted BTVij'] = predict_btv 44 | 45 | df_btv_copy = df_btv.copy() 46 | tmp = df_btv['Partner ISO'] 47 | df_btv_copy['Partner ISO'] = df_btv['Reporter ISO'] 48 | df_btv_copy['Reporter ISO'] = tmp 49 | df_btv = pd.concat([df_btv, df_btv_copy]) 50 | df_predicted = df_btv.groupby('Reporter ISO', as_index=False)['predicted BTVij'].sum() 51 | df_predicted.columns = ['Country Code', 'Estimated TV'] 52 | 53 | dataname = 'Regression_Variables_' + dataset + '.xlsx' 54 | df_tv = pd.read_excel(save_path + dataname) 55 | df_tv = df_tv[df_tv['GravityModel'] == 'T'] 56 | df_res = pd.merge(df_tv, df_predicted, on='Country Code') 57 | pr, pval = _cal_corr(df_res['Trade value'], df_res['Estimated TV']) 58 | r2 = pr ** 2 59 | n = len(df_res)  # sample size taken from the merged data rather than hardcoded (144 qualified countries in the 2015 dataset) 60 | k = 1 61 | adj_r2 = 1 - ((1-r2)*(n-1) / (n-k-1)) 62 | print( 63 | '"The Pearson correlation coefficient between the empirical and estimated trade value of countries was equal to {:.3f}, resulting in an adjusted R^2 value of {:.3f}."'.format(pr, adj_r2)) 64 | print() 65 | 66 | 67 | def z_score(x): 68 | x = (x - np.mean(x, axis=0)) / np.std(x, ddof=1, axis=0) 69 | return x 70 | 71 | 72 | def
cal_pearson_r(x, y): 73 | pr = stats.pearsonr(x, y) 74 | corr = round(pr[0], 3) 75 | pval = pr[1] 76 | pmarker = pval_marker(pval) 77 | return corr, pmarker 78 | 79 | 80 | def pval_marker(pval): 81 | if pval < 0.001: 82 | pmarker = '**' 83 | elif 0.001 <= pval < 0.01: 84 | pmarker = '*' 85 | elif 0.01 <= pval < 0.05: 86 | pmarker = '+' 87 | elif pval >= 0.05: 88 | pmarker = 'NaN' 89 | else: 90 | pmarker = pval 91 | return pmarker 92 | 93 | 94 | def tables3(): 95 | dataset = '2015' 96 | dataname = 'Regression_Variables_' + dataset + '.xlsx' 97 | data = pd.read_excel(save_path + dataname) 98 | data = data[data['GravityModel'] == 'T'] 99 | data.rename(columns={'GLSN betweenness (Lmax=2)': 'Gb', 'GLSN connectivity (Edge weight=None)': 'Gc', 100 | 'LSCI': 'L', 'Freeman betweenness': 'Fb'}, inplace=True) 101 | 102 | y_col = 'Trade value' 103 | y = data[y_col] 104 | y = z_score(y) 105 | 106 | param_cols = ['Gc', 'Gb', 'Fb', 'L'] 107 | N = len(param_cols) 108 | 109 | list_adj_r2 = [] 110 | list_r2 = [] 111 | list_fpval = [] 112 | list_vif = [] 113 | list_aic = [] 114 | list_col_idx = [] 115 | list_nobs = [] 116 | for n in np.arange(1, N+1, 1): 117 | x_cols = list(itertools.combinations(param_cols, r=int(n))) 118 | for x in x_cols: 119 | sep = ', ' 120 | xname = sep.join(x) 121 | list_col_idx.append(xname) 122 | xs = list(x) 123 | x = z_score(data[xs]) 124 | X = sm.add_constant(x) 125 | model = sm.OLS(y, X) 126 | fit_res = model.fit() 127 | 128 | list_adj_r2.append(fit_res.rsquared_adj) 129 | list_r2.append(fit_res.rsquared) 130 | f_pvalue = pval_marker(fit_res.f_pvalue) 131 | list_fpval.append(f_pvalue) 132 | list_nobs.append(fit_res.nobs) 133 | 134 | list_vif.append([round(variance_inflation_factor(X.values, i), 2) for i in range(X.shape[1])]) 135 | 136 | # AIC 137 | predict_y = fit_res.predict() 138 | RSS = sum((y - predict_y) ** 2) 139 | num_obs = fit_res.nobs 140 | K = fit_res.df_model + fit_res.k_constant 141 | aic = round(num_obs * np.log(RSS / num_obs) + 2 * K, 2) 
142 | list_aic.append(aic) 143 | 144 | cols = ['variable' + str(i) for i in range(1, N + 1)] 145 | cols.insert(0, 'const') 146 | df_vif = pd.DataFrame(list_vif, columns=cols) 147 | 148 | v_cols = [col for col in df_vif.columns if 'variable' in col] 149 | max_vif = df_vif[v_cols].max(axis=1) 150 | df_res = pd.DataFrame() 151 | df_res['Adjusted R2'] = list_adj_r2 152 | # df_res['R2'] = list_r2 153 | df_res['p-value'] = list_fpval 154 | df_res['AIC'] = list_aic 155 | df_res['Max VIF'] = max_vif 156 | df_res['# observations'] = list_nobs 157 | df_res.index = pd.Series(list_col_idx) 158 | 159 | sheetname = 'Supplementary Table S3' 160 | filename = 'Supplementary Table S3. Results for multivariate linear regressions....xlsx' 161 | df_res.to_excel(save_path + filename, sheet_name=sheetname, 162 | float_format='%.3f', index=True, 163 | index_label='Explanatory variable') 164 | print('The result file "{}" saved at: "{}"'.format(filename, save_path)) 165 | print() 166 | 167 | 168 | def table3(): 169 | data = pd.read_excel('data/Gravity_model_lsbci.xlsx') 170 | data = data.dropna() 171 | 172 | y_col = 'BTVij' 173 | y = data[y_col] 174 | y = np.log(y) 175 | 176 | models = [['ln(GDPi x GDPj)', 'ln(dij)'], ['ln(GDPi x GDPj)', 'ln(dij)', 'ln(LSBCIij)'], 177 | ['ln(GDPi x GDPj)', 'ln(dij)', 'ln(Gbi x Gbj)'], 178 | ['ln(GDPi x GDPj)', 'ln(dij)', 'ln(LSBCIij)', 'ln(Gbi x Gbj)']] 179 | N = 4 180 | 181 | list_nobs = [] 182 | list_adj_r2 = [] 183 | list_fpval = [] 184 | list_vif = [] 185 | list_aic = [] 186 | list_col_idx = [] 187 | sep = ', ' 188 | for model in models: 189 | xname = sep.join(model) 190 | list_col_idx.append(xname) 191 | 192 | x = z_score(data[model]) 193 | X = sm.add_constant(x) 194 | model = sm.OLS(y, X) 195 | fit_res = model.fit() 196 | # print(fit_res.summary()) 197 | 198 | list_adj_r2.append(fit_res.rsquared_adj) 199 | f_pvalue = pval_marker(fit_res.f_pvalue) 200 | list_fpval.append(f_pvalue) 201 | list_nobs.append(fit_res.nobs) 202 | 203 | 
list_vif.append([round(variance_inflation_factor(X.values, i), 2) for i in range(X.shape[1])]) 204 | 205 | # AIC 206 | predict_y = fit_res.predict() 207 | RSS = sum((y - predict_y) ** 2) 208 | num_obs = fit_res.nobs 209 | K = fit_res.df_model + fit_res.k_constant 210 | aic = round(num_obs * np.log(RSS / num_obs) + 2 * K, 2) 211 | list_aic.append(aic) 212 | 213 | cols = ['variable' + str(i) for i in range(1, N+1)] 214 | cols.insert(0, 'const') 215 | 216 | df_vif = pd.DataFrame(list_vif, columns=cols) 217 | 218 | v_cols = [col for col in df_vif.columns if 'variable' in col] 219 | max_vif = df_vif[v_cols].max(axis=1) 220 | 221 | df_res = pd.DataFrame() 222 | df_res['Adjusted R2'] = list_adj_r2 223 | df_res['p-value'] = list_fpval 224 | df_res['AIC'] = list_aic 225 | df_res['Max VIF'] = max_vif 226 | df_res['# observations'] = list_nobs 227 | df_res.index = pd.Series(list_col_idx) 228 | 229 | sheetname = "Table 3" 230 | filename = 'Table 3. Multivariate regression results for gravity models....xlsx' 231 | df_res.to_excel(save_path + filename, sheet_name=sheetname, 232 | float_format='%.3f', index=True, index_label='Explanatory variable') 233 | print('The result file "{}" saved at: "{}"'.format(filename, save_path)) 234 | print() 235 | 236 | 237 | def tables6(): 238 | data = pd.read_excel('data/Gravity_model_lsbci.xlsx') 239 | y_col = 'BTVij' 240 | y = data[y_col] 241 | y = np.log(y) 242 | 243 | models = [['ln(GDPi x GDPj)', 'ln(dij)'], ['ln(GDPi x GDPj)', 'ln(dij)', 'ln(LSBCIij)'], 244 | ['ln(GDPi x GDPj)', 'ln(dij)', 'ln(Gci x Gcj)'], 245 | ['ln(GDPi x GDPj)', 'ln(dij)', 'ln(LSBCIij)', 'ln(Gci x Gcj)']] 246 | N = 4 247 | 248 | list_nobs = [] 249 | list_adj_r2 = [] 250 | list_fpval = [] 251 | list_vif = [] 252 | list_aic = [] 253 | list_col_idx = [] 254 | sep = ', ' 255 | for model in models: 256 | xname = sep.join(model) 257 | list_col_idx.append(xname) 258 | 259 | x = z_score(data[model]) 260 | X = sm.add_constant(x) 261 | model = sm.OLS(y, X) 262 | fit_res = 
model.fit() 263 | # print(fit_res.summary()) 264 | 265 | list_adj_r2.append(fit_res.rsquared_adj) 266 | f_pvalue = pval_marker(fit_res.f_pvalue) 267 | list_fpval.append(f_pvalue) 268 | list_nobs.append(fit_res.nobs) 269 | 270 | list_vif.append([round(variance_inflation_factor(X.values, i), 2) for i in range(X.shape[1])]) 271 | 272 | # AIC 273 | predict_y = fit_res.predict() 274 | RSS = sum((y - predict_y) ** 2) 275 | num_obs = fit_res.nobs 276 | K = fit_res.df_model + fit_res.k_constant 277 | aic = round(num_obs * np.log(RSS / num_obs) + 2 * K, 2) 278 | list_aic.append(aic) 279 | 280 | cols = ['variable' + str(i) for i in range(1, N+1)] 281 | cols.insert(0, 'const') 282 | 283 | df_vif = pd.DataFrame(list_vif, columns=cols) 284 | 285 | v_cols = [col for col in df_vif.columns if 'variable' in col] 286 | max_vif = df_vif[v_cols].max(axis=1) 287 | 288 | df_res = pd.DataFrame() 289 | df_res['Adjusted R2'] = list_adj_r2 290 | df_res['p-value'] = list_fpval 291 | df_res['AIC'] = list_aic 292 | df_res['Max VIF'] = max_vif 293 | df_res['# observations'] = list_nobs 294 | df_res.index = pd.Series(list_col_idx) 295 | 296 | sheetname = "Supplementary Table S6" 297 | filename = 'Supplementary Table S6. Multivariate regression results for gravity models....xlsx' 298 | df_res.to_excel(save_path + filename, sheet_name=sheetname, 299 | float_format='%.3f', index=True, index_label='Explanatory variable') 300 | print('The result file "{}" saved at: "{}"'.format(filename, save_path)) 301 | print() 302 | 303 | 304 | def startup(): 305 | gravity_model() 306 | tables3() 307 | table3() 308 | tables6() 309 | -------------------------------------------------------------------------------- /code/src/MultivariateLinearRegression.py: -------------------------------------------------------------------------------- 1 | #! 
python3 2 | # -*- coding: utf-8 -*- 3 | """ 4 | Created on 2019/5/9 5 | 6 | Note: 7 | 8 | Code Performance: 9 | 10 | 11 | @author: Qian Pan 12 | @e-mail: qianpan_93@163.com 13 | """ 14 | 15 | from configure import * 16 | 17 | 18 | def z_score(x): 19 | x = (x - np.mean(x, axis=0)) / np.std(x, ddof=1, axis=0) 20 | return x 21 | 22 | 23 | def cal_pearson_r(x, y): 24 | pr = stats.pearsonr(x, y) 25 | corr = round(pr[0], 3) 26 | pval = pr[1] 27 | pmarker = pval_marker(pval) 28 | return corr, pmarker 29 | 30 | 31 | def pval_marker(pval): 32 | if pval < 0.001: 33 | pmarker = '**' 34 | elif 0.001 <= pval < 0.01: 35 | pmarker = '*' 36 | elif 0.01 <= pval < 0.05: 37 | pmarker = '+' 38 | elif pval >= 0.05: 39 | pmarker = 'NaN' 40 | else: 41 | pmarker = pval 42 | return pmarker 43 | 44 | 45 | def my_zip(*args, fillvalue=None): 46 | from itertools import repeat 47 | # zip_longest('ABCD', 'xy', fillvalue='-') --> Ax By C- D- 48 | iterators = [iter(it) for it in args] 49 | num_active = len(iterators) 50 | if not num_active: 51 | return 52 | while True: 53 | values = [] 54 | for i, it in enumerate(iterators): 55 | try: 56 | value = next(it) 57 | except StopIteration: 58 | num_active -= 1 59 | if not num_active: 60 | return 61 | iterators[i] = repeat(fillvalue) 62 | value = fillvalue 63 | values.append(value) 64 | yield list(values) 65 | 66 | 67 | def table2(data): 68 | 69 | y_col = 'Trade value' 70 | y = data[y_col] 71 | y = z_score(y) 72 | 73 | param_cols = ['Gc', 'Gb', 'Fb', 'L'] 74 | N = len(param_cols) 75 | 76 | list_nobs = [] 77 | list_adj_r2 = [] 78 | list_fpval = [] 79 | list_vif = [] 80 | list_aic = [] 81 | list_col_idx = [] 82 | for n in np.arange(1, N+1, 1): 83 | x_cols = list(itertools.combinations(param_cols, r=int(n))) 84 | for x in x_cols: 85 | sep = ', ' 86 | xname = sep.join(x) 87 | list_col_idx.append(xname) 88 | xs = list(x) 89 | x = z_score(data[xs]) 90 | X = sm.add_constant(x) 91 | model = sm.OLS(y, X) 92 | fit_res = model.fit() 93 | # print(fit_res.summary()) 
94 | 95 | list_adj_r2.append(fit_res.rsquared_adj) 96 | f_pvalue = pval_marker(fit_res.f_pvalue) 97 | list_fpval.append(f_pvalue) 98 | list_nobs.append(fit_res.nobs) 99 | 100 | list_vif.append([round(variance_inflation_factor(X.values, i), 2) for i in range(X.shape[1])]) 101 | 102 | # AIC 103 | predict_y = fit_res.predict() 104 | RSS = sum((y - predict_y) ** 2) 105 | num_obs = fit_res.nobs 106 | K = fit_res.df_model + fit_res.k_constant 107 | aic = round(num_obs * np.log(RSS / num_obs) + 2 * K, 2) 108 | list_aic.append(aic) 109 | 110 | cols = ['variable' + str(i) for i in range(1, N+1)] 111 | cols.insert(0, 'const') 112 | 113 | df_vif = pd.DataFrame(list_vif, columns=cols) 114 | 115 | v_cols = [col for col in df_vif.columns if 'variable' in col] 116 | max_vif = df_vif[v_cols].max(axis=1) 117 | 118 | df_res = pd.DataFrame() 119 | df_res['Adjusted R2'] = list_adj_r2 120 | df_res['p-value'] = list_fpval 121 | df_res['AIC'] = list_aic 122 | df_res['Max VIF'] = max_vif 123 | df_res['# observations'] = list_nobs 124 | df_res.index = pd.Series(list_col_idx) 125 | 126 | sheetname = "Table 2" 127 | filename = 'Table 2. 
Results for multivariate linear regressions....xlsx' 128 | df_res.to_excel(save_path + filename, sheet_name=sheetname, 129 | float_format='%.3f', index=True, index_label='Explanatory variable') 130 | print('The result file "{}" saved at: "{}"'.format(filename, save_path)) 131 | print() 132 | 133 | 134 | def tables1(data): 135 | y_cols = ['Export value', 'Import value', 'Net export', 'GDP'] 136 | 137 | list_res = [] 138 | for y_col in y_cols: 139 | y = data[y_col] 140 | y = z_score(y) 141 | 142 | param_cols = ['Gc', 'Gb', 'Fb', 'L'] 143 | N = len(param_cols) 144 | 145 | list_nobs = [] 146 | list_adj_r2 = [] 147 | list_fpval = [] 148 | list_vif = [] 149 | list_aic = [] 150 | list_col_idx = [] 151 | 152 | for n in np.arange(1, N+1, 1): 153 | x_cols = list(itertools.combinations(param_cols, r=int(n))) 154 | for x in x_cols: 155 | sep = ', ' 156 | xname = sep.join(x) 157 | list_col_idx.append(xname) 158 | xs = list(x) 159 | x = z_score(data[xs]) 160 | X = sm.add_constant(x) 161 | model = sm.OLS(y, X) 162 | fit_res = model.fit() 163 | # print(fit_res.summary()) 164 | 165 | list_adj_r2.append(fit_res.rsquared_adj) 166 | f_pvalue = pval_marker(fit_res.f_pvalue) 167 | list_fpval.append(f_pvalue) 168 | list_nobs.append(fit_res.nobs) 169 | list_vif.append([round(variance_inflation_factor(X.values, i), 2) for i in range(X.shape[1])]) 170 | 171 | # AIC 172 | predict_y = fit_res.predict() 173 | RSS = sum((y - predict_y) ** 2) 174 | num_obs = fit_res.nobs 175 | K = fit_res.df_model + fit_res.k_constant 176 | aic = round(num_obs * np.log(RSS / num_obs) + 2 * K, 2) 177 | list_aic.append(aic) 178 | 179 | cols = ['variable' + str(i) for i in range(1, N+1)] 180 | cols.insert(0, 'const') 181 | 182 | df_vif = pd.DataFrame(list_vif, columns=cols) 183 | v_cols = [col for col in df_vif.columns if 'variable' in col] 184 | max_vif = df_vif[v_cols].max(axis=1) 185 | 186 | df_res = pd.DataFrame() 187 | df_res['Adjusted R2'] = list_adj_r2 188 | df_res['p-value'] = list_fpval 189 | df_res['AIC'] 
= list_aic 190 | list_res.append(df_res) 191 | 192 | df_res_all = pd.concat([list_res[0], list_res[1], list_res[2], list_res[3]], axis=1, keys=y_cols) 193 | df_res_all['Max VIF'] = max_vif 194 | df_res_all['# observations'] = list_nobs 195 | df_res_all.index = pd.Series(list_col_idx) 196 | filename = 'Supplementary Table S1. Results for multivariate linear regressions when the export value....xlsx' 197 | sheetname = 'Supplementary Table S1' 198 | df_res_all.to_excel(save_path + '/' + filename, sheet_name=sheetname, float_format='%.3f', 199 | freeze_panes=(3, 1), index=True, 200 | index_label='Explanatory variable') 201 | print('The result file "{}" saved at: "{}"'.format(filename, save_path)) 202 | print() 203 | 204 | 205 | def tables2(): 206 | models = {'2015': [['Trade value', 'Gc', 'Fb'], ['Trade value', 'Gc', 'Gb'], 207 | ['Export value', 'Gc', 'Fb'], ['Export value', 'Gc', 'Gb', 'L'], 208 | ['Import value', 'Gc', 'Fb'], ['Import value', 'Gc', 'Gb'], 209 | ['Net export', 'Gc', 'Gb', 'L'], 210 | ['GDP', 'Gc', 'L'], ['GDP', 'Gc', 'Fb', 'L']], 211 | '2017': [['Trade value', 'Gc', 'Fb'], ['Trade value', 'Gc', 'Gb']]} 212 | 213 | list_nobs = [] 214 | list_col_idx = [] 215 | 216 | dict_conf_ints = {} 217 | dict_param_coef = {} 218 | dict_tpval = {} 219 | ix = 0 220 | 221 | list_dependent_variable = [] 222 | for year, model in models.items(): 223 | dataname = 'Regression_Variables_' + year + '.xlsx' 224 | data = pd.read_excel(save_path + dataname) 225 | data.rename(columns={'GLSN betweenness (Lmax=2)': 'Gb', 'GLSN connectivity (Edge weight=None)': 'Gc', 226 | 'LSCI': 'L', 'Freeman betweenness': 'Fb'}, inplace=True) 227 | 228 | for mm in model: 229 | y_col = mm[0] 230 | 231 | dependent_variable = y_col + ' (' + year + ')' 232 | list_dependent_variable.append(dependent_variable) 233 | 234 | y = data[y_col] 235 | y = z_score(y) 236 | x_cols = mm[1:] 237 | sep = ', ' 238 | xname = sep.join(x_cols) 239 | list_col_idx.append(xname) 240 | x = data[x_cols] 241 | x = 
z_score(x) 242 | X = sm.add_constant(x) 243 | model = sm.OLS(y, X) 244 | fit_res = model.fit() 245 | # print(fit_res.summary()) 246 | 247 | list_nobs.append(fit_res.nobs) 248 | coefs = fit_res.params.values.tolist() 249 | param = fit_res.params.index.tolist() 250 | tpvals = fit_res.pvalues.values.tolist() 251 | 252 | dict_param_coef[ix] = dict(zip(param, coefs)) 253 | dict_tpval[ix] = dict(zip(param, tpvals)) 254 | 255 | lower_conf_int = round(fit_res.conf_int(alpha=0.05)[0], 3).values.tolist() 256 | upper_conf_int = round(fit_res.conf_int(alpha=0.05)[1], 3).values.tolist() 257 | conf_int = list(my_zip(lower_conf_int, upper_conf_int)) 258 | dict_conf_ints[ix] = dict(zip(param, conf_int)) 259 | 260 | ix += 1 261 | 262 | df_coef = pd.DataFrame(dict_param_coef).T 263 | df_conf_ints = pd.DataFrame(dict_conf_ints).T 264 | df_tpval = pd.DataFrame(dict_tpval).T 265 | 266 | cols = ['Gc', 'Gb', 'Fb', 'L'] 267 | df_coef = df_coef[cols] 268 | df_tpval = df_tpval[cols] 269 | df_conf_ints = df_conf_ints[cols] 270 | 271 | for col in df_tpval.columns: 272 | for ix in df_tpval.index: 273 | df_tpval.loc[ix, col] = pval_marker(df_tpval.loc[ix, col]) 274 | 275 | df_res = pd.concat([df_coef, df_tpval, df_conf_ints], axis=1, 276 | keys=['Coefficient', 'p-value', 'confidence interval']) 277 | 278 | df_res['# observations'] = list_nobs 279 | 280 | df_res['Dependent variable'] = list_dependent_variable 281 | df_res['Explanatory variable'] = pd.Series(list_col_idx) 282 | df_res = df_res.set_index(['Dependent variable', 'Explanatory variable']) 283 | df_res.fillna('—', inplace=True) 284 | 285 | filename = 'Supplementary Table S2. 
Coefficients for representative multivariate linear regression models....xlsx' 286 | sheetname = 'Supplementary Table S2' 287 | df_res.to_excel(save_path + '/' + filename, sheet_name=sheetname, freeze_panes=(3, 2), float_format='%.3f', 288 | index=True) 289 | print('The result file "{}" saved at: "{}"'.format(filename, save_path)) 290 | print() 291 | 292 | 293 | def startup(): 294 | dataset = '2015' 295 | dataname = 'Regression_Variables_' + dataset + '.xlsx' 296 | data = pd.read_excel(save_path + dataname) 297 | data.rename(columns={'GLSN betweenness (Lmax=2)': 'Gb', 'GLSN connectivity (Edge weight=None)': 'Gc', 298 | 'LSCI': 'L', 'Freeman betweenness': 'Fb'}, inplace=True) 299 | table2(data) 300 | tables1(data) 301 | tables2() 302 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Attribution 4.0 International 2 | 3 | ======================================================================= 4 | 5 | Creative Commons Corporation ("Creative Commons") is not a law firm and 6 | does not provide legal services or legal advice. Distribution of 7 | Creative Commons public licenses does not create a lawyer-client or 8 | other relationship. Creative Commons makes its licenses and related 9 | information available on an "as-is" basis. Creative Commons gives no 10 | warranties regarding its licenses, any material licensed under their 11 | terms and conditions, or any related information. Creative Commons 12 | disclaims all liability for damages resulting from their use to the 13 | fullest extent possible. 14 | 15 | Using Creative Commons Public Licenses 16 | 17 | Creative Commons public licenses provide a standard set of terms and 18 | conditions that creators and other rights holders may use to share 19 | original works of authorship and other material subject to copyright 20 | and certain other rights specified in the public license below. 
The 21 | following considerations are for informational purposes only, are not 22 | exhaustive, and do not form part of our licenses. 23 | 24 | Considerations for licensors: Our public licenses are 25 | intended for use by those authorized to give the public 26 | permission to use material in ways otherwise restricted by 27 | copyright and certain other rights. Our licenses are 28 | irrevocable. Licensors should read and understand the terms 29 | and conditions of the license they choose before applying it. 30 | Licensors should also secure all rights necessary before 31 | applying our licenses so that the public can reuse the 32 | material as expected. Licensors should clearly mark any 33 | material not subject to the license. This includes other CC- 34 | licensed material, or material used under an exception or 35 | limitation to copyright. More considerations for licensors: 36 | wiki.creativecommons.org/Considerations_for_licensors 37 | 38 | Considerations for the public: By using one of our public 39 | licenses, a licensor grants the public permission to use the 40 | licensed material under specified terms and conditions. If 41 | the licensor's permission is not necessary for any reason--for 42 | example, because of any applicable exception or limitation to 43 | copyright--then that use is not regulated by the license. Our 44 | licenses grant only permissions under copyright and certain 45 | other rights that a licensor has authority to grant. Use of 46 | the licensed material may still be restricted for other 47 | reasons, including because others have copyright or other 48 | rights in the material. A licensor may make special requests, 49 | such as asking that all changes be marked or described. 50 | Although not required by our licenses, you are encouraged to 51 | respect those requests where reasonable. 
More considerations for the public:
	wiki.creativecommons.org/Considerations_for_licensees

=======================================================================

Creative Commons Attribution 4.0 International Public License

By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.


Section 1 -- Definitions.

  a. Adapted Material means material subject to Copyright and Similar
     Rights that is derived from or based upon the Licensed Material
     and in which the Licensed Material is translated, altered,
     arranged, transformed, or otherwise modified in a manner requiring
     permission under the Copyright and Similar Rights held by the
     Licensor. For purposes of this Public License, where the Licensed
     Material is a musical work, performance, or sound recording,
     Adapted Material is always produced where the Licensed Material is
     synched in timed relation with a moving image.

  b. Adapter's License means the license You apply to Your Copyright
     and Similar Rights in Your contributions to Adapted Material in
     accordance with the terms and conditions of this Public License.

  c. Copyright and Similar Rights means copyright and/or similar rights
     closely related to copyright including, without limitation,
     performance, broadcast, sound recording, and Sui Generis Database
     Rights, without regard to how the rights are labeled or
     categorized. For purposes of this Public License, the rights
     specified in Section 2(b)(1)-(2) are not Copyright and Similar
     Rights.

  d. Effective Technological Measures means those measures that, in the
     absence of proper authority, may not be circumvented under laws
     fulfilling obligations under Article 11 of the WIPO Copyright
     Treaty adopted on December 20, 1996, and/or similar international
     agreements.

  e. Exceptions and Limitations means fair use, fair dealing, and/or
     any other exception or limitation to Copyright and Similar Rights
     that applies to Your use of the Licensed Material.

  f. Licensed Material means the artistic or literary work, database,
     or other material to which the Licensor applied this Public
     License.

  g. Licensed Rights means the rights granted to You subject to the
     terms and conditions of this Public License, which are limited to
     all Copyright and Similar Rights that apply to Your use of the
     Licensed Material and that the Licensor has authority to license.

  h. Licensor means the individual(s) or entity(ies) granting rights
     under this Public License.

  i. Share means to provide material to the public by any means or
     process that requires permission under the Licensed Rights, such
     as reproduction, public display, public performance, distribution,
     dissemination, communication, or importation, and to make material
     available to the public including in ways that members of the
     public may access the material from a place and at a time
     individually chosen by them.

  j. Sui Generis Database Rights means rights other than copyright
     resulting from Directive 96/9/EC of the European Parliament and of
     the Council of 11 March 1996 on the legal protection of databases,
     as amended and/or succeeded, as well as other essentially
     equivalent rights anywhere in the world.

  k. You means the individual or entity exercising the Licensed Rights
     under this Public License. Your has a corresponding meaning.


Section 2 -- Scope.

  a. License grant.

       1. Subject to the terms and conditions of this Public License,
          the Licensor hereby grants You a worldwide, royalty-free,
          non-sublicensable, non-exclusive, irrevocable license to
          exercise the Licensed Rights in the Licensed Material to:

            a. reproduce and Share the Licensed Material, in whole or
               in part; and

            b. produce, reproduce, and Share Adapted Material.

       2. Exceptions and Limitations. For the avoidance of doubt, where
          Exceptions and Limitations apply to Your use, this Public
          License does not apply, and You do not need to comply with
          its terms and conditions.

       3. Term. The term of this Public License is specified in Section
          6(a).

       4. Media and formats; technical modifications allowed. The
          Licensor authorizes You to exercise the Licensed Rights in
          all media and formats whether now known or hereafter created,
          and to make technical modifications necessary to do so. The
          Licensor waives and/or agrees not to assert any right or
          authority to forbid You from making technical modifications
          necessary to exercise the Licensed Rights, including
          technical modifications necessary to circumvent Effective
          Technological Measures. For purposes of this Public License,
          simply making modifications authorized by this Section 2(a)
          (4) never produces Adapted Material.

       5. Downstream recipients.

            a. Offer from the Licensor -- Licensed Material. Every
               recipient of the Licensed Material automatically
               receives an offer from the Licensor to exercise the
               Licensed Rights under the terms and conditions of this
               Public License.

            b. No downstream restrictions. You may not offer or impose
               any additional or different terms or conditions on, or
               apply any Effective Technological Measures to, the
               Licensed Material if doing so restricts exercise of the
               Licensed Rights by any recipient of the Licensed
               Material.

       6. No endorsement. Nothing in this Public License constitutes or
          may be construed as permission to assert or imply that You
          are, or that Your use of the Licensed Material is, connected
          with, or sponsored, endorsed, or granted official status by,
          the Licensor or others designated to receive attribution as
          provided in Section 3(a)(1)(A)(i).

  b. Other rights.

       1. Moral rights, such as the right of integrity, are not
          licensed under this Public License, nor are publicity,
          privacy, and/or other similar personality rights; however, to
          the extent possible, the Licensor waives and/or agrees not to
          assert any such rights held by the Licensor to the limited
          extent necessary to allow You to exercise the Licensed
          Rights, but not otherwise.

       2. Patent and trademark rights are not licensed under this
          Public License.

       3. To the extent possible, the Licensor waives any right to
          collect royalties from You for the exercise of the Licensed
          Rights, whether directly or through a collecting society
          under any voluntary or waivable statutory or compulsory
          licensing scheme. In all other cases the Licensor expressly
          reserves any right to collect such royalties.


Section 3 -- License Conditions.

Your exercise of the Licensed Rights is expressly made subject to the
following conditions.

  a. Attribution.

       1. If You Share the Licensed Material (including in modified
          form), You must:

            a. retain the following if it is supplied by the Licensor
               with the Licensed Material:

                 i. identification of the creator(s) of the Licensed
                    Material and any others designated to receive
                    attribution, in any reasonable manner requested by
                    the Licensor (including by pseudonym if
                    designated);

                ii. a copyright notice;

               iii. a notice that refers to this Public License;

                iv. a notice that refers to the disclaimer of
                    warranties;

                 v. a URI or hyperlink to the Licensed Material to the
                    extent reasonably practicable;

            b. indicate if You modified the Licensed Material and
               retain an indication of any previous modifications; and

            c. indicate the Licensed Material is licensed under this
               Public License, and include the text of, or the URI or
               hyperlink to, this Public License.

       2. You may satisfy the conditions in Section 3(a)(1) in any
          reasonable manner based on the medium, means, and context in
          which You Share the Licensed Material. For example, it may be
          reasonable to satisfy the conditions by providing a URI or
          hyperlink to a resource that includes the required
          information.

       3. If requested by the Licensor, You must remove any of the
          information required by Section 3(a)(1)(A) to the extent
          reasonably practicable.

       4. If You Share Adapted Material You produce, the Adapter's
          License You apply must not prevent recipients of the Adapted
          Material from complying with this Public License.


Section 4 -- Sui Generis Database Rights.

Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:

  a. for the avoidance of doubt, Section 2(a)(1) grants You the right
     to extract, reuse, reproduce, and Share all or a substantial
     portion of the contents of the database;

  b. if You include all or a substantial portion of the database
     contents in a database in which You have Sui Generis Database
     Rights, then the database in which You have Sui Generis Database
     Rights (but not its individual contents) is Adapted Material; and

  c. You must comply with the conditions in Section 3(a) if You Share
     all or a substantial portion of the contents of the database.

For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.


Section 5 -- Disclaimer of Warranties and Limitation of Liability.

  a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
     EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
     AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
     ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
     IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
     WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
     PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
     ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
     KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
     ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.

  b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
     TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
     NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
     INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
     COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
     USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
     ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
     DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
     IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.

  c. The disclaimer of warranties and limitation of liability provided
     above shall be interpreted in a manner that, to the extent
     possible, most closely approximates an absolute disclaimer and
     waiver of all liability.


Section 6 -- Term and Termination.

  a. This Public License applies for the term of the Copyright and
     Similar Rights licensed here. However, if You fail to comply with
     this Public License, then Your rights under this Public License
     terminate automatically.

  b. Where Your right to use the Licensed Material has terminated under
     Section 6(a), it reinstates:

       1. automatically as of the date the violation is cured, provided
          it is cured within 30 days of Your discovery of the
          violation; or

       2. upon express reinstatement by the Licensor.

     For the avoidance of doubt, this Section 6(b) does not affect any
     right the Licensor may have to seek remedies for Your violations
     of this Public License.

  c. For the avoidance of doubt, the Licensor may also offer the
     Licensed Material under separate terms or conditions or stop
     distributing the Licensed Material at any time; however, doing so
     will not terminate this Public License.

  d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
     License.


Section 7 -- Other Terms and Conditions.

  a. The Licensor shall not be bound by any additional or different
     terms or conditions communicated by You unless expressly agreed.

  b. Any arrangements, understandings, or agreements regarding the
     Licensed Material not stated herein are separate from and
     independent of the terms and conditions of this Public License.


Section 8 -- Interpretation.

  a. For the avoidance of doubt, this Public License does not, and
     shall not be interpreted to, reduce, limit, restrict, or impose
     conditions on any use of the Licensed Material that could lawfully
     be made without permission under this Public License.

  b. To the extent possible, if any provision of this Public License is
     deemed unenforceable, it shall be automatically reformed to the
     minimum extent necessary to make it enforceable. If the provision
     cannot be reformed, it shall be severed from this Public License
     without affecting the enforceability of the remaining terms and
     conditions.

  c. No term or condition of this Public License will be waived and no
     failure to comply consented to unless expressly agreed to by the
     Licensor.

  d. Nothing in this Public License constitutes or may be interpreted
     as a limitation upon, or waiver of, any privileges and immunities
     that apply to the Licensor or You, including from the legal
     processes of any jurisdiction or authority.


=======================================================================

Creative Commons is not a party to its public licenses.
Notwithstanding, Creative Commons may elect to apply one of its public
licenses to material it publishes and in those instances will be
considered the “Licensor.” The text of the Creative Commons public
licenses is dedicated to the public domain under the CC0 Public Domain
Dedication. Except for the limited purpose of indicating that material
is shared under a Creative Commons public license or as otherwise
permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the public
licenses.

Creative Commons may be contacted at creativecommons.org.