├── .gitignore
├── www
│   ├── FigApp.png
│   ├── favicon.ico
│   ├── logo_covid.png
│   ├── logoestatistica.png
│   └── styles.css
├── STvideos
│   ├── US_n.rds
│   ├── Brazil_n.rds
│   ├── China_n.rds
│   ├── India_n.rds
│   ├── Italy_n.rds
│   └── Spain_n.rds
├── app_v1
│   ├── www
│   │   ├── FigApp.png
│   │   ├── favicon.ico
│   │   ├── Covid19UFMG.pdf
│   │   ├── event_2105.jpg
│   │   ├── GitHub-Mark-32px.png
│   │   ├── logoestatistica.png
│   │   ├── GitHub-Mark-Light-32px.png
│   │   ├── markdown
│   │   │   ├── CoronaUFMG_MD_en.pdf
│   │   │   ├── CoronaUFMG_MD_pt.pdf
│   │   │   ├── CoronaUFMG_MD.md
│   │   │   └── CoronaUFMG_MD.Rmd
│   │   ├── code.js
│   │   ├── styles.css
│   │   └── CoronaUFMG_MD.Rmd
│   ├── EstadosCov19.csv
│   ├── time_series_covid19_deaths_global.csv
│   ├── time_series_covid19_confirmed_global.csv
│   ├── google-analytics.html
│   ├── README.md
│   ├── global.R
│   ├── CoronaUFMG_MD.md
│   └── CoronaUFMG_MDen.md
├── STpredictions
│   ├── Iraq_d.rds
│   ├── Iraq_n.rds
│   ├── Peru_d.rds
│   ├── Peru_n.rds
│   ├── Brazil_d.rds
│   ├── Brazil_n.rds
│   ├── Canada_d.rds
│   ├── Canada_n.rds
│   ├── Chile_d.rds
│   ├── Chile_n.rds
│   ├── China_d.rds
│   ├── China_n.rds
│   ├── France_d.rds
│   ├── France_n.rds
│   ├── Greece_d.rds
│   ├── Greece_n.rds
│   ├── India_d.rds
│   ├── India_n.rds
│   ├── Italy_d.rds
│   ├── Italy_n.rds
│   ├── Japan_d.rds
│   ├── Japan_n.rds
│   ├── Mexico_d.rds
│   ├── Mexico_n.rds
│   ├── Norway_d.rds
│   ├── Norway_n.rds
│   ├── Panama_d.rds
│   ├── Panama_n.rds
│   ├── Poland_d.rds
│   ├── Poland_n.rds
│   ├── Russia_d.rds
│   ├── Russia_n.rds
│   ├── Spain_d.rds
│   ├── Spain_n.rds
│   ├── Sweden_d.rds
│   ├── Sweden_n.rds
│   ├── Turkey_d.rds
│   ├── Turkey_n.rds
│   ├── Argentina_d.rds
│   ├── Argentina_n.rds
│   ├── Australia_d.rds
│   ├── Australia_n.rds
│   ├── Belgium_d.rds
│   ├── Belgium_n.rds
│   ├── Bolivia_d.rds
│   ├── Bolivia_n.rds
│   ├── Colombia_d.rds
│   ├── Colombia_n.rds
│   ├── Ecuador_d.rds
│   ├── Ecuador_n.rds
│   ├── Ethiopia_d.rds
│   ├── Ethiopia_n.rds
│   ├── Germany_d.rds
│   ├── Germany_n.rds
│   ├── Guatemala_d.rds
│   ├── Guatemala_n.rds
│   ├── Honduras_d.rds
│   ├── Honduras_n.rds
│   ├── Indonesia_d.rds
│   ├── Indonesia_n.rds
│   ├── Ireland_d.rds
│   ├── Ireland_n.rds
│   ├── Morocco_d.rds
│   ├── Morocco_n.rds
│   ├── Paraguay_d.rds
│   ├── Paraguay_n.rds
│   ├── Portugal_d.rds
│   ├── Portugal_n.rds
│   ├── Romania_d.rds
│   ├── Romania_n.rds
│   ├── Ukraine_d.rds
│   ├── Ukraine_n.rds
│   ├── Uruguay_d.rds
│   ├── Uruguay_n.rds
│   ├── Venezuela_d.rds
│   ├── Venezuela_n.rds
│   ├── Brazil_AC_de.rds
│   ├── Brazil_AC_ne.rds
│   ├── Brazil_AL_de.rds
│   ├── Brazil_AL_ne.rds
│   ├── Brazil_AM_de.rds
│   ├── Brazil_AM_ne.rds
│   ├── Brazil_AP_de.rds
│   ├── Brazil_AP_ne.rds
│   ├── Brazil_BA_de.rds
│   ├── Brazil_BA_ne.rds
│   ├── Brazil_CE_de.rds
│   ├── Brazil_CE_ne.rds
│   ├── Brazil_DF_de.rds
│   ├── Brazil_DF_ne.rds
│   ├── Brazil_ES_de.rds
│   ├── Brazil_ES_ne.rds
│   ├── Brazil_GO_de.rds
│   ├── Brazil_GO_ne.rds
│   ├── Brazil_MA_de.rds
│   ├── Brazil_MA_ne.rds
│   ├── Brazil_MG_de.rds
│   ├── Brazil_MG_ne.rds
│   ├── Brazil_MS_de.rds
│   ├── Brazil_MS_ne.rds
│   ├── Brazil_MT_de.rds
│   ├── Brazil_MT_ne.rds
│   ├── Brazil_PA_de.rds
│   ├── Brazil_PA_ne.rds
│   ├── Brazil_PB_de.rds
│   ├── Brazil_PB_ne.rds
│   ├── Brazil_PE_de.rds
│   ├── Brazil_PE_ne.rds
│   ├── Brazil_PI_de.rds
│   ├── Brazil_PI_ne.rds
│   ├── Brazil_PR_de.rds
│   ├── Brazil_PR_ne.rds
│   ├── Brazil_RJ_de.rds
│   ├── Brazil_RJ_ne.rds
│   ├── Brazil_RN_de.rds
│   ├── Brazil_RN_ne.rds
│   ├── Brazil_RO_de.rds
│   ├── Brazil_RO_ne.rds
│   ├── Brazil_RR_de.rds
│   ├── Brazil_RR_ne.rds
│   ├── Brazil_RS_de.rds
│   ├── Brazil_RS_ne.rds
│   ├── Brazil_SC_de.rds
│   ├── Brazil_SC_ne.rds
│   ├── Brazil_SE_de.rds
│   ├── Brazil_SE_ne.rds
│   ├── Brazil_SP_de.rds
│   ├── Brazil_SP_ne.rds
│   ├── Brazil_TO_de.rds
│   ├── Brazil_TO_ne.rds
│   ├── Costa-Rica_d.rds
│   ├── Costa-Rica_n.rds
│   ├── Netherlands_d.rds
│   ├── Netherlands_n.rds
│   ├── New-Zealand_d.rds
│   ├── New-Zealand_n.rds
│   ├── South-Korea_d.rds
│   ├── South-Korea_n.rds
│   ├── Switzerland_d.rds
│   ├── Switzerland_n.rds
│   ├── Saudi-Arabia_d.rds
│   ├── Saudi-Arabia_n.rds
│   ├── South-Africa_d.rds
│   ├── South-Africa_n.rds
│   ├── United-Kingdom_d.rds
│   ├── United-Kingdom_n.rds
│   ├── United-States-of-America_d.rds
│   └── United-States-of-America_n.rds
├── cache
│   ├── EstadosCov19.csv
│   ├── time_series_covid19_deaths_global.csv
│   └── time_series_covid19_confirmed_global.csv
├── app_COVID19.Rproj
├── google-analytics.html
├── R
│   ├── JAGS
│   │   ├── loadData.R
│   │   ├── jags_poisson.R
│   │   ├── posterior_sample.R
│   │   ├── mcmcplot_country.R
│   │   ├── predict_BR.R
│   │   ├── death_BR_par.R
│   │   ├── predict_BR_par.R
│   │   ├── predict_confirmed_par.R
│   │   └── death_confirmed_par.R
│   ├── pop
│   │   ├── pop_BR.csv
│   │   └── pop_WR.csv
│   ├── getRepofiles.R
│   ├── STAN
│   │   ├── create_cases_BR.R
│   │   ├── create_cases_country.R
│   │   ├── create_deaths_BR.R
│   │   ├── create_deaths_country.R
│   │   ├── template_cases_countries.R
│   │   ├── template_deaths_countries.R
│   │   ├── template_cases_BR.R
│   │   ├── template_deaths_BR.R
│   │   └── utils.R
│   └── Video
│       └── Video_BR.R
├── rsconnect
│   └── shinyapps.io
│       └── dest-ufmg
│           └── app_COVID19.dcf
├── footer.html
├── Rver
├── global.R
└── README.md
/.gitignore:
--------------------------------------------------------------------------------
1 | .Rproj.user
2 | .Rhistory
3 | .RData
4 | .Ruserdata
5 |
--------------------------------------------------------------------------------
/www/FigApp.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/www/FigApp.png
--------------------------------------------------------------------------------
/www/favicon.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/www/favicon.ico
--------------------------------------------------------------------------------
/STvideos/US_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STvideos/US_n.rds
--------------------------------------------------------------------------------
/www/logo_covid.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/www/logo_covid.png
--------------------------------------------------------------------------------
/STvideos/Brazil_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STvideos/Brazil_n.rds
--------------------------------------------------------------------------------
/STvideos/China_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STvideos/China_n.rds
--------------------------------------------------------------------------------
/STvideos/India_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STvideos/India_n.rds
--------------------------------------------------------------------------------
/STvideos/Italy_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STvideos/Italy_n.rds
--------------------------------------------------------------------------------
/STvideos/Spain_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STvideos/Spain_n.rds
--------------------------------------------------------------------------------
/app_v1/www/FigApp.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/app_v1/www/FigApp.png
--------------------------------------------------------------------------------
/STpredictions/Iraq_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Iraq_d.rds
--------------------------------------------------------------------------------
/STpredictions/Iraq_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Iraq_n.rds
--------------------------------------------------------------------------------
/STpredictions/Peru_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Peru_d.rds
--------------------------------------------------------------------------------
/STpredictions/Peru_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Peru_n.rds
--------------------------------------------------------------------------------
/app_v1/EstadosCov19.csv:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/app_v1/EstadosCov19.csv
--------------------------------------------------------------------------------
/app_v1/www/favicon.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/app_v1/www/favicon.ico
--------------------------------------------------------------------------------
/cache/EstadosCov19.csv:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/cache/EstadosCov19.csv
--------------------------------------------------------------------------------
/www/logoestatistica.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/www/logoestatistica.png
--------------------------------------------------------------------------------
/STpredictions/Brazil_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_d.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_n.rds
--------------------------------------------------------------------------------
/STpredictions/Canada_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Canada_d.rds
--------------------------------------------------------------------------------
/STpredictions/Canada_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Canada_n.rds
--------------------------------------------------------------------------------
/STpredictions/Chile_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Chile_d.rds
--------------------------------------------------------------------------------
/STpredictions/Chile_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Chile_n.rds
--------------------------------------------------------------------------------
/STpredictions/China_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/China_d.rds
--------------------------------------------------------------------------------
/STpredictions/China_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/China_n.rds
--------------------------------------------------------------------------------
/STpredictions/France_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/France_d.rds
--------------------------------------------------------------------------------
/STpredictions/France_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/France_n.rds
--------------------------------------------------------------------------------
/STpredictions/Greece_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Greece_d.rds
--------------------------------------------------------------------------------
/STpredictions/Greece_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Greece_n.rds
--------------------------------------------------------------------------------
/STpredictions/India_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/India_d.rds
--------------------------------------------------------------------------------
/STpredictions/India_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/India_n.rds
--------------------------------------------------------------------------------
/STpredictions/Italy_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Italy_d.rds
--------------------------------------------------------------------------------
/STpredictions/Italy_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Italy_n.rds
--------------------------------------------------------------------------------
/STpredictions/Japan_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Japan_d.rds
--------------------------------------------------------------------------------
/STpredictions/Japan_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Japan_n.rds
--------------------------------------------------------------------------------
/STpredictions/Mexico_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Mexico_d.rds
--------------------------------------------------------------------------------
/STpredictions/Mexico_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Mexico_n.rds
--------------------------------------------------------------------------------
/STpredictions/Norway_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Norway_d.rds
--------------------------------------------------------------------------------
/STpredictions/Norway_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Norway_n.rds
--------------------------------------------------------------------------------
/STpredictions/Panama_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Panama_d.rds
--------------------------------------------------------------------------------
/STpredictions/Panama_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Panama_n.rds
--------------------------------------------------------------------------------
/STpredictions/Poland_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Poland_d.rds
--------------------------------------------------------------------------------
/STpredictions/Poland_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Poland_n.rds
--------------------------------------------------------------------------------
/STpredictions/Russia_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Russia_d.rds
--------------------------------------------------------------------------------
/STpredictions/Russia_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Russia_n.rds
--------------------------------------------------------------------------------
/STpredictions/Spain_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Spain_d.rds
--------------------------------------------------------------------------------
/STpredictions/Spain_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Spain_n.rds
--------------------------------------------------------------------------------
/STpredictions/Sweden_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Sweden_d.rds
--------------------------------------------------------------------------------
/STpredictions/Sweden_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Sweden_n.rds
--------------------------------------------------------------------------------
/STpredictions/Turkey_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Turkey_d.rds
--------------------------------------------------------------------------------
/STpredictions/Turkey_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Turkey_n.rds
--------------------------------------------------------------------------------
/app_v1/www/Covid19UFMG.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/app_v1/www/Covid19UFMG.pdf
--------------------------------------------------------------------------------
/app_v1/www/event_2105.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/app_v1/www/event_2105.jpg
--------------------------------------------------------------------------------
/STpredictions/Argentina_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Argentina_d.rds
--------------------------------------------------------------------------------
/STpredictions/Argentina_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Argentina_n.rds
--------------------------------------------------------------------------------
/STpredictions/Australia_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Australia_d.rds
--------------------------------------------------------------------------------
/STpredictions/Australia_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Australia_n.rds
--------------------------------------------------------------------------------
/STpredictions/Belgium_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Belgium_d.rds
--------------------------------------------------------------------------------
/STpredictions/Belgium_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Belgium_n.rds
--------------------------------------------------------------------------------
/STpredictions/Bolivia_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Bolivia_d.rds
--------------------------------------------------------------------------------
/STpredictions/Bolivia_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Bolivia_n.rds
--------------------------------------------------------------------------------
/STpredictions/Colombia_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Colombia_d.rds
--------------------------------------------------------------------------------
/STpredictions/Colombia_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Colombia_n.rds
--------------------------------------------------------------------------------
/STpredictions/Ecuador_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Ecuador_d.rds
--------------------------------------------------------------------------------
/STpredictions/Ecuador_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Ecuador_n.rds
--------------------------------------------------------------------------------
/STpredictions/Ethiopia_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Ethiopia_d.rds
--------------------------------------------------------------------------------
/STpredictions/Ethiopia_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Ethiopia_n.rds
--------------------------------------------------------------------------------
/STpredictions/Germany_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Germany_d.rds
--------------------------------------------------------------------------------
/STpredictions/Germany_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Germany_n.rds
--------------------------------------------------------------------------------
/STpredictions/Guatemala_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Guatemala_d.rds
--------------------------------------------------------------------------------
/STpredictions/Guatemala_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Guatemala_n.rds
--------------------------------------------------------------------------------
/STpredictions/Honduras_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Honduras_d.rds
--------------------------------------------------------------------------------
/STpredictions/Honduras_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Honduras_n.rds
--------------------------------------------------------------------------------
/STpredictions/Indonesia_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Indonesia_d.rds
--------------------------------------------------------------------------------
/STpredictions/Indonesia_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Indonesia_n.rds
--------------------------------------------------------------------------------
/STpredictions/Ireland_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Ireland_d.rds
--------------------------------------------------------------------------------
/STpredictions/Ireland_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Ireland_n.rds
--------------------------------------------------------------------------------
/STpredictions/Morocco_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Morocco_d.rds
--------------------------------------------------------------------------------
/STpredictions/Morocco_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Morocco_n.rds
--------------------------------------------------------------------------------
/STpredictions/Paraguay_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Paraguay_d.rds
--------------------------------------------------------------------------------
/STpredictions/Paraguay_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Paraguay_n.rds
--------------------------------------------------------------------------------
/STpredictions/Portugal_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Portugal_d.rds
--------------------------------------------------------------------------------
/STpredictions/Portugal_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Portugal_n.rds
--------------------------------------------------------------------------------
/STpredictions/Romania_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Romania_d.rds
--------------------------------------------------------------------------------
/STpredictions/Romania_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Romania_n.rds
--------------------------------------------------------------------------------
/STpredictions/Ukraine_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Ukraine_d.rds
--------------------------------------------------------------------------------
/STpredictions/Ukraine_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Ukraine_n.rds
--------------------------------------------------------------------------------
/STpredictions/Uruguay_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Uruguay_d.rds
--------------------------------------------------------------------------------
/STpredictions/Uruguay_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Uruguay_n.rds
--------------------------------------------------------------------------------
/STpredictions/Venezuela_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Venezuela_d.rds
--------------------------------------------------------------------------------
/STpredictions/Venezuela_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Venezuela_n.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_AC_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_AC_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_AC_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_AC_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_AL_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_AL_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_AL_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_AL_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_AM_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_AM_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_AM_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_AM_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_AP_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_AP_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_AP_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_AP_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_BA_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_BA_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_BA_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_BA_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_CE_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_CE_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_CE_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_CE_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_DF_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_DF_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_DF_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_DF_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_ES_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_ES_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_ES_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_ES_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_GO_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_GO_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_GO_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_GO_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_MA_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_MA_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_MA_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_MA_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_MG_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_MG_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_MG_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_MG_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_MS_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_MS_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_MS_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_MS_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_MT_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_MT_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_MT_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_MT_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_PA_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_PA_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_PA_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_PA_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_PB_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_PB_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_PB_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_PB_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_PE_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_PE_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_PE_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_PE_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_PI_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_PI_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_PI_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_PI_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_PR_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_PR_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_PR_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_PR_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_RJ_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_RJ_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_RJ_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_RJ_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_RN_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_RN_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_RN_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_RN_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_RO_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_RO_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_RO_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_RO_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_RR_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_RR_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_RR_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_RR_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_RS_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_RS_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_RS_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_RS_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_SC_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_SC_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_SC_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_SC_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_SE_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_SE_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_SE_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_SE_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_SP_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_SP_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_SP_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_SP_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_TO_de.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_TO_de.rds
--------------------------------------------------------------------------------
/STpredictions/Brazil_TO_ne.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Brazil_TO_ne.rds
--------------------------------------------------------------------------------
/STpredictions/Costa-Rica_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Costa-Rica_d.rds
--------------------------------------------------------------------------------
/STpredictions/Costa-Rica_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Costa-Rica_n.rds
--------------------------------------------------------------------------------
/STpredictions/Netherlands_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Netherlands_d.rds
--------------------------------------------------------------------------------
/STpredictions/Netherlands_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Netherlands_n.rds
--------------------------------------------------------------------------------
/STpredictions/New-Zealand_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/New-Zealand_d.rds
--------------------------------------------------------------------------------
/STpredictions/New-Zealand_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/New-Zealand_n.rds
--------------------------------------------------------------------------------
/STpredictions/South-Korea_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/South-Korea_d.rds
--------------------------------------------------------------------------------
/STpredictions/South-Korea_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/South-Korea_n.rds
--------------------------------------------------------------------------------
/STpredictions/Switzerland_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Switzerland_d.rds
--------------------------------------------------------------------------------
/STpredictions/Switzerland_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Switzerland_n.rds
--------------------------------------------------------------------------------
/app_v1/www/GitHub-Mark-32px.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/app_v1/www/GitHub-Mark-32px.png
--------------------------------------------------------------------------------
/app_v1/www/logoestatistica.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/app_v1/www/logoestatistica.png
--------------------------------------------------------------------------------
/STpredictions/Saudi-Arabia_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Saudi-Arabia_d.rds
--------------------------------------------------------------------------------
/STpredictions/Saudi-Arabia_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/Saudi-Arabia_n.rds
--------------------------------------------------------------------------------
/STpredictions/South-Africa_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/South-Africa_d.rds
--------------------------------------------------------------------------------
/STpredictions/South-Africa_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/South-Africa_n.rds
--------------------------------------------------------------------------------
/STpredictions/United-Kingdom_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/United-Kingdom_d.rds
--------------------------------------------------------------------------------
/STpredictions/United-Kingdom_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/United-Kingdom_n.rds
--------------------------------------------------------------------------------
/app_v1/www/GitHub-Mark-Light-32px.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/app_v1/www/GitHub-Mark-Light-32px.png
--------------------------------------------------------------------------------
/app_v1/www/markdown/CoronaUFMG_MD_en.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/app_v1/www/markdown/CoronaUFMG_MD_en.pdf
--------------------------------------------------------------------------------
/app_v1/www/markdown/CoronaUFMG_MD_pt.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/app_v1/www/markdown/CoronaUFMG_MD_pt.pdf
--------------------------------------------------------------------------------
/STpredictions/United-States-of-America_d.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/United-States-of-America_d.rds
--------------------------------------------------------------------------------
/STpredictions/United-States-of-America_n.rds:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/STpredictions/United-States-of-America_n.rds
--------------------------------------------------------------------------------
/app_v1/time_series_covid19_deaths_global.csv:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/app_v1/time_series_covid19_deaths_global.csv
--------------------------------------------------------------------------------
/cache/time_series_covid19_deaths_global.csv:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/cache/time_series_covid19_deaths_global.csv
--------------------------------------------------------------------------------
/cache/time_series_covid19_confirmed_global.csv:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/cache/time_series_covid19_confirmed_global.csv
--------------------------------------------------------------------------------
/app_v1/time_series_covid19_confirmed_global.csv:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CovidLP/app_COVID19/HEAD/app_v1/time_series_covid19_confirmed_global.csv
--------------------------------------------------------------------------------
/app_v1/www/code.js:
--------------------------------------------------------------------------------
1 | $( document ).ready(function() {
2 |   // NOTE: the HTML string appended here was stripped during extraction; the link below is a
3 |   // reconstruction (markup assumed) of the surviving "Source Code" link to the project repository.
4 |   $( ".navbar .container-fluid" ).append( '<a href="https://github.com/CovidLP/app_COVID19" target="_blank">Source Code</a>' );
5 | });
--------------------------------------------------------------------------------
/app_COVID19.Rproj:
--------------------------------------------------------------------------------
1 | Version: 1.0
2 |
3 | RestoreWorkspace: Default
4 | SaveWorkspace: Default
5 | AlwaysSaveHistory: Default
6 |
7 | EnableCodeIndexing: Yes
8 | UseSpacesForTab: Yes
9 | NumSpacesForTab: 2
10 | Encoding: UTF-8
11 |
12 | RnwWeave: Sweave
13 | LaTeX: pdfLaTeX
14 |
--------------------------------------------------------------------------------
/app_v1/www/styles.css:
--------------------------------------------------------------------------------
1 | .fab {
2 | color: black;
3 | }
4 |
5 | .img-with-text {
6 | text-align: center;
7 | max-width: 120px;
8 | max-height: 50px;
9 | margin: 1%;
10 | }
11 |
12 | .img-with-text img {
13 | display: block;
14 | margin: auto;
15 | padding: 1%
16 | }
--------------------------------------------------------------------------------
/google-analytics.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
--------------------------------------------------------------------------------
/app_v1/google-analytics.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
--------------------------------------------------------------------------------
/R/JAGS/loadData.R:
--------------------------------------------------------------------------------
1 | ## main function to load the data - GLOBAL
2 | loadData = function(fileName, columnName) {
3 | data = read.csv(file.path(baseURL, fileName), check.names=FALSE, stringsAsFactors=FALSE) %>%
4 | select(-Lat, -Long) %>%
5 | pivot_longer(-(1:2), names_to="date", values_to=columnName)%>%
6 | mutate(date=as.Date(date, format="%m/%d/%y")) %>%
7 | rename(country = `Country/Region`, state = `Province/State`)
8 | save(data, file=fileName)
9 | return(data)
10 | }
11 |
--------------------------------------------------------------------------------
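Note: `loadData()` relies on `dplyr`/`tidyr` verbs and on a `baseURL` object that the calling script is expected to define. A minimal usage sketch, assuming `baseURL` points at the JHU CSSE time-series folder that the cached CSV names above come from (the output column names are also assumptions):

library(dplyr)
library(tidyr)

# Assumed value: the app defines baseURL elsewhere (e.g. in global.R or the JAGS scripts).
baseURL <- "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series"

# Long-format data frames with columns state, country, date and the chosen count column.
confirmed <- loadData("time_series_covid19_confirmed_global.csv", "confirmed")
deaths    <- loadData("time_series_covid19_deaths_global.csv", "deaths")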
/rsconnect/shinyapps.io/dest-ufmg/app_COVID19.dcf:
--------------------------------------------------------------------------------
1 | name: app_COVID19
2 | title: app_COVID19
3 | username:
4 | account: dest-ufmg
5 | server: shinyapps.io
6 | hostUrl: https://api.shinyapps.io/v1
7 | appId: 2008872
8 | bundleId: 3289548
9 | url: https://dest-ufmg.shinyapps.io/app_COVID19/
10 | when: 1592520228.8867
11 | asMultiple: FALSE
12 | asStatic: FALSE
13 | ignoredFiles: Brazil_n.rds|plot_LTpred.R|README.md|teste_plots_weekend.R|teste_subplot.R|cache/EstadosCov19.csv|app_v1|R|STpredictions|STvideos
14 |
--------------------------------------------------------------------------------
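Note: the `.dcf` file above is deployment metadata written by the `rsconnect` package when the dashboard is published to shinyapps.io. A hedged sketch of a redeployment, using only the account and app name recorded in the file (the token/secret placeholders must come from the shinyapps.io dashboard):

library(rsconnect)

# Register the shinyapps.io account once per machine (placeholders, not real credentials).
rsconnect::setAccountInfo(name = "dest-ufmg", token = "<TOKEN>", secret = "<SECRET>")

# Deploy the app directory under the same name and account recorded in app_COVID19.dcf.
rsconnect::deployApp(appDir = ".", appName = "app_COVID19", account = "dest-ufmg")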
/R/pop/pop_BR.csv:
--------------------------------------------------------------------------------
1 | "uf","pop"
2 | "AC",881935
3 | "AL",3337357
4 | "AM",4144597
5 | "AP",845731
6 | "BA",14873064
7 | "CE",9132078
8 | "DF",3015268
9 | "ES",4018650
10 | "GO",7018354
11 | "MA",7075181
12 | "MG",21168791
13 | "MS",2778986
14 | "MT",3484466
15 | "PA",8602865
16 | "PB",4018127
17 | "PE",9557071
18 | "PI",3273227
19 | "PR",11433957
20 | "RJ",17264943
21 | "RN",3506853
22 | "RO",1777225
23 | "RR",605761
24 | "RS",11377239
25 | "SC",7164788
26 | "SE",2298696
27 | "SP",45919049
28 | "TO",1572866
29 | "BR",210147125
30 |
--------------------------------------------------------------------------------
/R/getRepofiles.R:
--------------------------------------------------------------------------------
1 | readfiles.repo <- function(){
2 | req <- GET("https://api.github.com/repos/thaispaiva/app_COVID19/git/trees/master?recursive=1")
3 | stop_for_status(req)
4 | filelist <- unlist(lapply(content(req)$tree, "[", "path"), use.names = F)
5 | files <- grep("STpredictions/", filelist, value = TRUE, fixed = TRUE)
6 | files <- unlist(lapply(strsplit(files, "/"), "[", 2))
7 | files <- grep(".rds", files, value = TRUE)
8 |
9 | return(files)
10 | }
11 |
12 | #files <- readfiles.repo()
13 |
14 |
15 | #strsplit(files,"_")
16 |
--------------------------------------------------------------------------------
/R/STAN/create_cases_BR.R:
--------------------------------------------------------------------------------
1 | library("dplyr")
2 | library("PandemicLP")
3 | library("pracma")
4 |
5 | setwd("/home/marcosop/Covid/R/STAN/")
6 | source("utils.R")
7 |
8 | out <- nwaves(country = FALSE)
9 | nwaves <- out$nwaves
10 | state_list <- out$state_list
11 |
12 | template <- readLines("template_cases_BR.R")
13 |
14 | files <- list.files(pattern="predict_BR_[1,2,3,4,5,6,7,8,9](.*?).R")
15 | file.remove(files)
16 |
17 | for(i in 1:length(nwaves)){
18 |
19 | states <- paste("c(\"",paste(state_list[[i]],collapse="\",\""),"\")",sep="")
20 |
21 | code <- gsub("_statelist_",states,template)
22 | code <- gsub("_nwaves_",nwaves[i],code)
23 | file.name <- paste0("predict_BR_",nwaves[i],"wave.R")
24 | write(code,file=file.name)
25 | cat("File ", paste0(getwd(),"/",file.name), "written.\n")
26 | }
27 |
28 |
29 |
--------------------------------------------------------------------------------
/R/STAN/create_cases_country.R:
--------------------------------------------------------------------------------
1 | library("dplyr")
2 | library("PandemicLP")
3 | library("pracma")
4 |
5 | setwd("/home/marcosop/Covid/R/STAN/")
6 | source("utils.R")
7 |
8 | out <- nwaves()
9 | nwaves <- out$nwaves
10 | country_list <- out$country_list
11 |
12 | template <- readLines("template_cases_countries.R")
13 |
14 | files <- list.files(pattern="predict_[1,2,3,4,5,6,7,8,9](.*?).R")
15 | file.remove(files)
16 |
17 | for(i in 1:length(nwaves)){
18 |
19 | countries <- paste("c(\"",paste(country_list[[i]],collapse="\",\""),"\")",sep="")
20 |
21 | code <- gsub("_countrynames_",countries,template)
22 | code <- gsub("_nwaves_",nwaves[i],code)
23 | file.name <- paste0("predict_",nwaves[i],"wave_par.R")
24 | write(code,file=file.name)
25 | cat("File ", paste0(getwd(),"/",file.name), "written.\n")
26 | }
27 |
--------------------------------------------------------------------------------
/R/STAN/create_deaths_BR.R:
--------------------------------------------------------------------------------
1 | library("dplyr")
2 | library("PandemicLP")
3 | library("pracma")
4 |
5 | setwd("/home/marcosop/Covid/R/STAN/")
6 | source("utils.R")
7 |
8 | out <- nwaves(country = FALSE,new_cases = FALSE)
9 | nwaves <- out$nwaves
10 | state_list <- out$state_list
11 |
12 | template <- readLines("template_deaths_BR.R")
13 |
14 | files <- list.files(pattern="death_BR_[1,2,3,4,5,6,7,8,9](.*?).R")
15 | file.remove(files)
16 |
17 | for(i in 1:length(nwaves)){
18 |
19 | states <- paste("c(\"",paste(state_list[[i]],collapse="\",\""),"\")",sep="")
20 |
21 | code <- gsub("_statelist_",states,template)
22 | code <- gsub("_nwaves_",nwaves[i],code)
23 | file.name <- paste0("death_BR_",nwaves[i],"wave.R")
24 | write(code,file=file.name)
25 | cat("File ", paste0(getwd(),"/",file.name), "written.\n")
26 | }
27 |
--------------------------------------------------------------------------------
/R/STAN/create_deaths_country.R:
--------------------------------------------------------------------------------
1 | library("dplyr")
2 | library("PandemicLP")
3 | library("pracma")
4 |
5 | setwd("/home/marcosop/Covid/R/STAN/")
6 | source("utils.R")
7 |
8 | out <- nwaves(new_cases = FALSE)
9 | nwaves <- out$nwaves
10 | country_list <- out$country_list
11 |
12 | template <- readLines("template_deaths_countries.R")
13 |
14 | files <- list.files(pattern="death_[1,2,3,4,5,6,7,8,9](.*?).R")
15 | file.remove(files)
16 |
17 | for(i in 1:length(nwaves)){
18 |
19 | countries <- paste("c(\"",paste(country_list[[i]],collapse="\",\""),"\")",sep="")
20 |
21 | code <- gsub("_countrynames_",countries,template)
22 | code <- gsub("_nwaves_",nwaves[i],code)
23 | file.name <- paste0("death_",nwaves[i],"wave_par.R")
24 | write(code,file=file.name)
25 | cat("File ", paste0(getwd(),"/",file.name), "written.\n")
26 | }
27 |
28 |
29 |
--------------------------------------------------------------------------------
/footer.html:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/Rver:
--------------------------------------------------------------------------------
1 | classify.flag = function(obs, adj){
2 |
3 | colnames(obs) = c("date", "y")
4 |
5 | dados = full_join(obs, adj) %>% mutate(y_s = rollmean(y, k = 20, fill = NA) %>% rollmean(k = 10, fill = NA), # smooth the data
6 | dif_s = c(NA, y_s %>% diff()) %>% rollmean(k = 50, fill = NA), # smoothed difference
7 | id = c(NA, dif_s %>% sign() %>% diff()) ) # flags a change of behaviour (sign change)
8 |
9 | aux = dados %>% filter(id == 2 |id == -2) %>% arrange(desc(date))
10 |
11 | if(aux$id[1] == -2){ # if the last identified point is a peak, the amplitude is computed from the last observation of the smoothed y
12 | aux = rbind(tail(dados %>% filter(!is.na(y_s)), 1) %>% mutate(id = 2), aux)
13 | }
14 |
15 | est = (aux %>% mutate(amp_y = c(NA, diff(y_s)),
16 | amp_mu = c(NA, diff(mu)),
17 | id_df = c(NA, diff(id)),
18 | est = (amp_y - amp_mu)^2/max(y_s)^2) %>%
19 | filter(id_df == -4) %>% # correct amplitudes
20 | summarise(crit = sum(est)))$crit
21 |
22 | #ifelse(est <= 0.2, 0, ifelse(est <= 0.5, 1, 2))
23 | est
24 | }
25 |
--------------------------------------------------------------------------------
/app_v1/README.md:
--------------------------------------------------------------------------------
1 | # Previsão de Curto e Longo Prazo para COVID-19 / DEST-UFMG
2 |
3 | Esse é o repositório com o código utilizado para implementação do aplicativo para previsão de curto e longo prazo para casos de COVID-19.
4 |
5 | O aplicativo pode ser acessado em: [https://dest-ufmg.shinyapps.io/app_COVID19/](https://dest-ufmg.shinyapps.io/app_COVID19/).
6 |
7 | Este projeto está sendo desenvolvido por um grupo de professores e alunos de pós-graduação do [Departamento de Estatística](http://www.est.ufmg.br) da [Universidade Federal de Minas Gerais](http://www.ufmg.br).
8 |
9 | Para mais projetos relacionados aos dados de COVID-19 desenvolvidos no DEST/UFMG, acesse: [est.ufmg.br/portal/extensao/coronavirus](http://www.est.ufmg.br/portal/extensao/coronavirus).
10 |
11 | ---
12 |
13 | ### Short and long term prediction for COVID-19 / DEST-UFMG
14 | This repository contains the source code for the implementation of an app for short- and long-term prediction of COVID-19 cases.
15 |
16 | The app can be accessed at: [https://dest-ufmg.shinyapps.io/app_COVID19/](https://dest-ufmg.shinyapps.io/app_COVID19/).
17 |
18 | This project is being developed by a group of professors and graduate students at the [Department of Statistics](http://www.est.ufmg.br) of [UFMG](http://www.ufmg.br) (Federal University of Minas Gerais).
19 |
20 | For more projects related to COVID-19 data being developed at the Statistics Department, visit: [est.ufmg.br/portal/extensao/coronavirus](http://www.est.ufmg.br/portal/extensao/coronavirus).
21 |
--------------------------------------------------------------------------------
/R/JAGS/jags_poisson.R:
--------------------------------------------------------------------------------
1 | ###############################################################################
2 | ### M0 (new)
3 | ###############################################################################
4 | mod_string_new <- "model{
5 |
6 | for(i in 1:t) {
7 | y[i] ~ dpois(mu[i])
8 | mu[i] = f*a*c*exp(-c*i)/(b+exp(-c*i))^(f+1)
9 | }
10 |
11 | a ~ dgamma(0.1, 0.1)
12 | b ~ dgamma(0.1, 0.1)
13 | c ~ dbeta(2,6)
14 | f = exp(f1)
15 | f1 ~ dnorm(0,.1)
16 | # f ~ dgamma(1,1)
17 | # f ~ dgamma(0.1,0.1)
18 |
19 | # for(j in 1:L){
20 | # yfut[j] ~ dpois(mu[t+j])
21 | # mu[t+j] = f*a*c*exp(-c*(t+j))/(b+exp(-c*(t+j)))^(f+1)
22 | # }
23 |
24 | }"
25 |
26 |
27 |
28 |
29 | ###############################################################################
30 | ### M0 (new DM)
31 | ###############################################################################
32 | mod_string_new_dm <- "model{
33 |
34 | for(i in 1:t) {
35 | y[i] ~ dpois(mu[i])
36 | a[i] = exp(sum(wa[1:i]))
37 | mu[i] = f*a[i]*c*exp(-c*i)/(b+exp(-c*i))^(f+1)
38 | }
39 |
40 | b ~ dgamma(0.1, 0.1)
41 | c ~ dbeta(2,6)
42 | f = exp(f1)
43 | f1 ~ dnorm(0,.1)
44 | # f ~ dgamma(1,1)
45 | # f ~ dgamma(0.1,0.1)
46 | # f = 1
47 | # Wa = 1e1
48 | # Ka = 1 # 1e1
49 | Wa ~ dgamma(0.01, 0.01)
50 |
51 | # priors for the start of the dynamics
52 | wa[1] ~ dnorm(-5, 0.1)
53 | # priors for the evolution disturbances
54 | for (i in 2:t){
55 | # wa[i] ~ dnorm(0, Wa)
56 | wa[i] ~ dnorm(-1/(2*Wa),Wa)
57 | }
58 | for (j in 1:L){
59 | # wa[t+j] ~ dnorm(0, Wa)
60 | # wa[t+j] ~ dnorm(0, Wa/Ka)
61 | wa[t+j] ~ dnorm(-1/(2*Wa),Wa)
62 | # wa[t+j] ~ dnorm(-Ka/(2*Wa),Wa/Ka)
63 | }
64 |
65 | for(j in 1:L){
66 | yfut[j] ~ dpois(mu[t+j])
67 | a[t+j] = exp(sum(wa[1:(t+j)]))
68 | mu[t+j] = f*a[t+j]*c*exp(-c*(t+j))/(b+exp(-c*(t+j)))^(f+1)
69 | }
70 |
71 | }"
72 |
73 |
74 |
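## --- Illustrative sketch (editor's example, not part of the repository) ---
## The Poisson mean used in the models above is the derivative of a generalised
## logistic curve. The parameter values below are arbitrary and only illustrate the
## typical single-wave shape of mu over time.
mu_curve <- function(t, a, b, c, f) f * a * c * exp(-c * t) / (b + exp(-c * t))^(f + 1)
curve(mu_curve(x, a = 100, b = 1, c = 0.05, f = 1), from = 1, to = 300,
      xlab = "day", ylab = "expected new counts")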
--------------------------------------------------------------------------------
/R/JAGS/posterior_sample.R:
--------------------------------------------------------------------------------
1 | ## generate a sample from the predictive distribution
2 | # M: sample size for the predictive sample
3 | # L: number of steps in future
4 | # B: baseline time (last t of the fitted model)
5 | # a: samples of parameter a in time t
6 | # b: samples of parameter b in time t
7 | # c: samples of parameter c in time t
8 | # taua : precision of the normal distribution for the future samples of a
9 | # taub : precision of the normal distribution for the future samples of b
10 | # tauc : precision of the normal distribution for the future samples of c
11 | pred <- function(L=100,B,a,b,c,taua,taub,tauc){
12 | #save sample size
13 | M.save <- M <- length(a)
14 |
15 | mu <- a.fut <- b.fut <- c.fut <- matrix(-Inf, ncol=L+1,nrow=M)
16 |
17 | #initialize process with first information
18 | a.fut[,1] <- a
19 | b.fut[,1] <- b
20 | c.fut[,1] <- c
21 |
22 | mu[,1] <- (a.fut[,1]*exp(c.fut[,1]*(B+1))) / (1+b.fut[,1]*exp(c.fut[,1]*(B+1)))
23 |
24 | #limit attempt for the rejection procedure
25 | limit <- 100
26 | for(t in 2:(L+1)){
27 | #counter for how many attempts
28 | count <- 1
29 | M <- M.save
30 | pos <- 1:M
31 | while(count <= limit){
32 |
33 | #generate sample for w's
34 | wa <- rnorm(M,0,sd=sqrt(1/taua))
35 | wb <- rnorm(M,0,sd=sqrt(1/taub))
36 | wc <- rnorm(M,0,sd=sqrt(1/tauc))
37 |
38 | #calculate a,b and c parameters
39 | a.fut[pos,t] <- exp(log(a.fut[pos,t-1]) + wa)
40 | b.fut[pos,t] <- exp(log(b.fut[pos,t-1]) + wb)
41 | c.fut[pos,t] <- exp(log(c.fut[pos,t-1]) + wc)
42 |
43 | #calculate mu
44 | mu[pos,t] <- (a.fut[pos,t]*exp(c.fut[pos,t]*(B+t))) / (1+b.fut[pos,t]*exp(c.fut[pos,t]*(B+t)))
45 |
46 | #check if mu satisfies the condition
47 | cond <- mu[,t]-mu[,t-1]
48 | pos <- which(cond <= 0)
49 | if(length(pos) != 0){
50 | M <- length(pos)
51 | count <- count + 1
52 | if(count == limit+1) mu[pos,t] <- mu[pos,t-1] + 1e-7
53 | }
54 | else count <- limit + 1
55 | }
56 | }
57 |
58 | M <- M.save
59 | y.fut <- matrix(-Inf, ncol=L,nrow=M)
60 | #generate samples from the mu's
61 | for(i in 1:L) y.fut[,i] <- rpois(M,mu[,i+1]-mu[,i])
62 |
63 | out <- list(y.fut = y.fut, mu.fut = mu, a.fut = a.fut, b.fut = b.fut, c.fut = c.fut) # "=" keeps the element names in the returned list
64 | return(out)
65 | }
66 |
67 |
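## --- Illustrative call (editor's example, not part of the repository) ---
## The vectors a, b and c stand in for posterior draws that would normally come from
## the JAGS output; the precisions and the horizon below are arbitrary values chosen
## only to exercise the function.
set.seed(1)
M <- 500
a_s <- rexp(M, rate = 1/5); b_s <- rexp(M, rate = 1); c_s <- rbeta(M, 2, 6)
fut <- pred(L = 14, B = 120, a = a_s, b = b_s, c = c_s, taua = 100, taub = 100, tauc = 100)
dim(fut$y.fut)  # 500 x 14 matrix of sampled future daily counts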
--------------------------------------------------------------------------------
/R/STAN/template_cases_countries.R:
--------------------------------------------------------------------------------
1 | rm(list=ls())
2 | setwd("/home/marcosop/Covid/R/STAN")
3 |
4 | ###################################################################
5 | ### Packages
6 | ###################################################################
7 | library(PandemicLP)
8 | library(foreach)
9 | library(doMC)
10 | library(dplyr)
11 | library(zoo)
12 | source("utils.R")
13 |
14 | Sys.setenv(LANGUAGE='en')
15 | rstan_options(auto_write = TRUE)
16 |
17 | ###################################################################
18 | ### Data sets: https://github.com/CSSEGISandData
19 | ###################################################################
20 | countrylist <- _countrynames_
21 |
22 | #register cores
23 | #registerDoMC(cores = detectCores()-1) # Linux alternative
24 | registerDoMC(cores = min(63,length(countrylist))) # Linux alternative
25 |
26 |
27 | obj <- foreach(s = 1:length(countrylist)) %dopar% {
28 |
29 | country_name <- countrylist[s]
30 | covid_country <- load_covid(country_name=country_name) # load data
31 |
32 | nwaves = _nwaves_
33 |
34 | nom <- c("date","n","d","n_new","d_new")
35 | name.to.save <- gsub(" ", "-", country_name)
36 | results_directory = "/home/marcosop/Covid/chains/"
37 | name.file <- paste0(results_directory,name.to.save,'_',nom[2],'.rds')
38 | old <- readRDS(name.file)
39 |
40 | mod <- pandemic_model(covid_country,case_type = "confirmed", p = 0.08,
41 | n_waves = nwaves,
42 | warmup = 10e3, thin = 3, sample_size = 1e3,
43 | init=old, covidLPconfig = FALSE) # run the model
44 |
45 | names(covid_country$data) <- nom
46 |
47 | pred <- posterior_predict(mod,horizonLong = 1000,horizonShort = 14) # do predictions
48 |
49 | stats <- pandemic_stats(pred) # calculate stats
50 | stats[[1]] <- NULL # remove the data (the app uses the data coming from another object)
51 | names(stats) <- c("df_predict","lt_predict","lt_summary","mu_plot")
52 |
53 | names(stats$df_predict) <- c("date", "q25", "med", "q975", "m")
54 | names(stats$lt_predict) <- c("date", "q25", "med", "q975", "m")
55 | names(stats$lt_summary) <- c("NTC25","NTC500","NTC975","high.dat.low","high.dat.med","high.dat.upper","end.dat.low",
56 | "end.dat.med","end.dat.upper")
57 |
58 | #create flag
59 | flag <- try(classify.flag(covid_country$data[,c("date","n_new")],stats$mu_plot))
60 |
61 | if(class(flag)== "try-error") flag <- -1
62 |
63 | list_out <- list( df_predict = stats$df_predict, lt_predict=stats$lt_predict, lt_summary=stats$lt_summary,
64 | mu_plot = stats$mu_plot, residuals = cbind(mod$nominal_errors, mod$relative_errors),
65 | flag = flag)
66 |
67 | name.to.save <- gsub(" ", "-", country_name)
68 |
69 | ### saveRDS
70 | results_directory = "/home/marcosop/Covid/app_COVID19/STpredictions/"
71 | name.file <- paste0(results_directory,name.to.save,'_',colnames(covid_country$data)[2],'.rds')
72 | saveRDS(list_out, file=name.file)
73 |
74 | ### saveChain
75 | results_directory = "/home/marcosop/Covid/chains/"
76 | name.file <- paste0(results_directory,name.to.save,'_',colnames(covid_country$data)[2],'.rds')
77 | saveRDS(mod, file=name.file)
78 | }
79 |
--------------------------------------------------------------------------------
/R/STAN/template_deaths_countries.R:
--------------------------------------------------------------------------------
1 | rm(list=ls())
2 | setwd("/home/marcosop/Covid/R/STAN")
3 |
4 | ###################################################################
5 | ### Packages
6 | ###################################################################
7 | library(PandemicLP)
8 | library(foreach)
9 | library(doMC)
10 | library(dplyr)
11 | library(zoo)
12 | source("utils.R")
13 |
14 | Sys.setenv(LANGUAGE='en')
15 | rstan_options(auto_write = TRUE)
16 |
17 | ###################################################################
18 | ### Data sets: https://github.com/CSSEGISandData
19 | ###################################################################
20 | countrylist <- _countrynames_
21 |
22 | #register cores
23 | #registerDoMC(cores = detectCores()-1) # Linux alternative
24 | registerDoMC(cores = min(63,length(countrylist))) # Linux alternative
25 |
26 |
27 | obj <- foreach(s = 1:length(countrylist)) %dopar% {
28 |
29 | country_name <- countrylist[s]
30 | covid_country <- load_covid(country_name=country_name) # load data
31 |
32 | nwaves = _nwaves_
33 |
34 | nom <- c("date","n","d","n_new","d_new")
35 | name.to.save <- gsub(" ", "-", country_name)
36 | results_directory = "/home/marcosop/Covid/chains/"
37 | name.file <- paste0(results_directory,name.to.save,'_',nom[3],'.rds')
38 | old <- readRDS(name.file)
39 |
40 | mod <- pandemic_model(covid_country,case_type = "deaths", p = 0.08*.25,
41 | n_waves = nwaves,
42 | warmup = 10e3, thin = 3, sample_size = 1e3,
43 | init=old, covidLPconfig = FALSE) # run the model
44 |
45 | names(covid_country$data) <- nom
46 |
47 | pred <- posterior_predict(mod,horizonLong = 1000,horizonShort = 14) # do predictions
48 |
49 | stats <- pandemic_stats(pred) # calculate stats
50 | stats[[1]] <- NULL # remove the data (the app uses the data coming from another object)
51 | names(stats) <- c("df_predict","lt_predict","lt_summary","mu_plot")
52 |
53 | names(stats$df_predict) <- c("date", "q25", "med", "q975", "m")
54 | names(stats$lt_predict) <- c("date", "q25", "med", "q975", "m")
55 | names(stats$lt_summary) <- c("NTC25","NTC500","NTC975","high.dat.low","high.dat.med","high.dat.upper","end.dat.low",
56 | "end.dat.med","end.dat.upper")
57 |
58 | #create flag
59 | flag <- try(classify.flag(covid_country$data[,c("date","d_new")],stats$mu_plot))
60 |
61 | if(class(flag)== "try-error") flag <- -1
62 |
63 | list_out <- list( df_predict = stats$df_predict, lt_predict=stats$lt_predict, lt_summary=stats$lt_summary,
64 | mu_plot = stats$mu_plot, residuals = cbind(mod$nominal_errors, mod$relative_errors),
65 | flag = flag)
66 |
67 | name.to.save <- gsub(" ", "-", country_name)
68 |
69 | ### saveRDS
70 | results_directory = "/home/marcosop/Covid/app_COVID19/STpredictions/"
71 | name.file <- paste0(results_directory,name.to.save,'_',colnames(covid_country$data)[3],'.rds')
72 | saveRDS(list_out, file=name.file)
73 |
74 | ### saveChain
75 | results_directory = "/home/marcosop/Covid/chains/"
76 | name.file <- paste0(results_directory,name.to.save,'_',colnames(covid_country$data)[3],'.rds')
77 | saveRDS(mod, file=name.file)
78 | }
79 |
--------------------------------------------------------------------------------
/global.R:
--------------------------------------------------------------------------------
1 | ## Load packages
2 | library(dplyr)
3 | library(tidyr)
4 | library(httr)
5 | library(lubridate)
6 | library(knitr)
7 | library(shiny)
8 | library(shinyjs)
9 | library(plotly)
10 | library(shinythemes)
11 | library(shinycssloaders)
12 | library(shinyWidgets)
13 | library(shinyBS)
14 | library(markdown)
15 |
16 | ## Load local functions
17 | source("utils.R")
18 |
19 | ## Define font to be used later
20 | f1 <- list(family = "Arial", size = 10, color = "rgb(30, 30, 30)")
21 |
22 | ## colors for observed data
23 | blu <- 'rgb(100, 140, 240)'
24 | dblu <- 'rgb(0, 0, 102)'
25 | red <- 'rgb(200, 30, 30)'
26 | dred <- 'rgb(100, 30, 30)'
27 |
28 | ##-- DATA SOURCES ----
29 | ## setup data source (Johns Hopkins)- GLOBAL
30 | baseURL <- "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series"
31 |
32 | allData <- loadData("time_series_covid19_confirmed_global.csv", "CumConfirmed") %>%
33 | inner_join(
34 | loadData("time_series_covid19_deaths_global.csv", "CumDeaths"),
35 | by = c("Province/State", "Country/Region", "date")
36 | )
37 |
38 | countries <- sort(unique(allData$`Country/Region`))
39 |
40 | ## Setup data source (MSaude/BR)- BRAZIL
41 | # baseURL.BR = "https://raw.githubusercontent.com/belisards/coronabr/master/dados"
42 | # baseURL.BR = "https://covid.saude.gov.br/assets/files/COVID19_"
43 | baseURL.BR <- "https://raw.githubusercontent.com/covid19br/covid19br.github.io/master/dados"
44 | # baseURL.BR <- "https://raw.githubusercontent.com/thaispaiva/app_COVID19/master/R/STAN"
45 | brData <- loadData.BR("EstadosCov19.csv")
46 |
47 | ## Setup data source - US
48 | baseURL.US = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series"
49 |
50 | covid19_confirm <- loadData.US_cases("time_series_covid19_confirmed_US.csv", "confirmed")
51 | covid19_deaths <- loadData.US_deaths("time_series_covid19_deaths_US.csv", "deaths")
52 | usData <- left_join(covid19_confirm,covid19_deaths, by=c('state','date')) %>%
53 | rename(`Province/State`=state, CumConfirmed=confirmed, CumDeaths=deaths) %>%
54 | select(`Province/State`, CumConfirmed, CumDeaths, date)
55 |
56 | ##-- LOAD PREDICTION RESULTS ----
57 | files <- readfiles.repo()
58 | aux <- sub(pattern = '(\\_n.rds$)|(\\_d.rds$)', replacement = '', x = files)
59 |
60 | ## List of countries for SHORT TERM prediction
61 | countries_STpred <- sort(unique(
62 | unlist(lapply(strsplit(aux, "_"), function(x) x[1]))))
63 | countries_STpred_orig <- gsub("-", " ", countries_STpred)
64 |
65 | ## List of Brazil's states
66 | statesBR_STpred <- unique(unlist(lapply(strsplit(aux, "_"), function(x) if(x[1] == "Brazil") return(x[2]))))
67 | statesBR_STpred[is.na(statesBR_STpred)] <- ""
68 | statesBR_STpred <- sort(statesBR_STpred)
69 |
70 | ## List of countries for LONG TERM prediction
71 | countries_LTpred_orig <- countries_STpred_orig
72 |
73 | ## List of Brazil's states - LONG TERM
74 | statesBR_LTpred <- statesBR_STpred
75 |
76 | ##-- List of countries to hide new cases
77 | # hide_countries_nc <- list('Australia', 'Belgium', 'Canada', 'Greece', 'Japan', 'Korea, South',
78 | # 'Netherlands', 'Peru', 'Poland', 'Portugal', 'Spain', 'Switzerland',
79 | # 'Uruguay', 'US')
80 | hide_countries_nc <- list()
81 |
82 | ##-- List of countries to hide deaths
83 | # hide_countries_d <- list('Australia', 'US')
84 | hide_countries_d <- list()
85 |
86 | ## Read RDS files from github repository
87 | githubURL <- "https://github.com/thaispaiva/app_COVID19/raw/master/STpredictions"
88 |
89 | ##-- Date format
90 | dt_format <- "%d/%b/%y"
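## --- Illustrative sketch (editor's example, not part of the repository) ---
## Assuming the objects above (in particular `githubURL`) are loaded, one short-term
## prediction file can be read straight from the repository; file names follow the
## "<country>_<n|d>.rds" convention of the STpredictions folder.
pred_br <- readRDS(gzcon(url(file.path(githubURL, "Brazil_n.rds"))))
str(pred_br$df_predict)  # short-term prediction quantiles: date, q25, med, q975, m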
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Projeto CovidLP - Previsão de Curto e Longo Prazos para COVID-19 / DEST-UFMG
2 |
3 | Neste repositório, encontram-se todos os arquivos e códigos necessários para implementar o nosso aplicativo de previsões sobre os casos do COVID-19. A versão online do aplicativo está disponível em [http://est.ufmg.br/covidlp](http://est.ufmg.br/covidlp).
4 |
5 | O aplicativo foi criado por um grupo de professores e alunos de pós-graduação do Departamento de Estatística da Universidade Federal de Minas Gerais, com intuito de fazer com que o usuário possa acompanhar e saber o que esperar sobre os casos e mortes confirmadas de COVID-19. Para mais informações sobre o projeto, acesse o site [http://est.ufmg.br/covidlp/home](http://est.ufmg.br/covidlp/home).
6 |
7 | ---
8 |
9 | Abaixo segue uma explicação deste repositório e seus diretórios:
10 |
11 | - Os arquivos na pasta principal são os códigos necessários para a implementação do aplicativo Shiny no R.
12 | - Na pasta "R" encontram-se arquivos com os códigos executados diariamente para ajustar o modelo de estimação para cada país/estado analisados.
13 | - A pasta "STpredictions" contém os resultados mais recentes das previsões de curto e longo prazo para todos os países, além de relatórios sobre a convergência dos parâmetros do modelo.
14 | - A pasta "STvideos" contém as previsões, de alguns países, realizadas em datas anteriores. Os registros ao longo do tempo são necessários para construir a animação com a evolução das análises. Essa função está em desenvolvimento.
15 | - Na pasta "www" estão as figuras estáticas, utilizadas no aplicativo.
16 | - O botão verde "Clonar ou baixar" oferece a opção de baixar todos os arquivos da página, para que o usuário possa fazer um clone do nosso aplicativo na versão atual. Ressaltamos que o trabalho está em desenvolvimento, e recomendamos acompanhar as atualizações deste repositório.
17 |
18 | ---
19 |
20 | # CovidLP - Short and long term predictions for COVID-19 / DEST-UFMG
21 |
22 | This repository contains all the files and code needed to implement our app for COVID-19 case predictions. The online app can be accessed at [http://est.ufmg.br/covidlp](http://est.ufmg.br/covidlp).
23 |
24 | This application was created by a group of faculty and graduate students from the Statistics Department of the Universidade Federal de Minas Gerais, so that users can follow confirmed COVID-19 cases and deaths and know what to expect. For more information about this project, visit our site [http://est.ufmg.br/covidlp/home](http://est.ufmg.br/covidlp/home).
25 |
26 | ---
27 |
28 | Below is an explanation of this repository and its directories:
29 | - The files in the root folder are the code files needed to implement the Shiny app in R.
30 | - The "R" folder holds the code executed daily to fit the estimation model for each country/state being analyzed.
31 | - The "STpredictions" folder contains the most recent short- and long-term predictions for all countries, as well as reports on the convergence of the model parameters.
32 | - The "STvideos" folder contains predictions for some countries made on previous dates. These records over time are needed to build an animation showing how the analyses evolve. This feature is under development.
33 | - The "www" folder contains the static figures used in the app.
34 | - The green "Clone or download" button offers the option to download all the files of the repository, so that users can clone the current version of our app. We emphasize that this work is still under development, and we recommend following the updates of this repository.
35 |
--------------------------------------------------------------------------------
/www/styles.css:
--------------------------------------------------------------------------------
1 | #country {
2 | background-color: #fff !important;
3 | }
4 |
5 | .nav {
6 | padding-left: 1%;
7 | }
8 |
9 | .fab {
10 | color: #00b1d8;
11 | font-size: x-large;
12 | }
13 |
14 | .fa, .fas {
15 | color: #00b1d8;
16 | font-size: smaller;
17 | font-size: x-large;
18 | }
19 |
20 | label {
21 | font-weight: 100;
22 | }
23 |
24 | .container-fluid {
25 | padding: 0;
26 | background-color: #f4f4f7;
27 | }
28 |
29 | body {
30 | background-color: #e8e8ea;
31 | }
32 |
33 | .h1, .h2, .h3, .h4, .h5, .h6, h1, h2, h3, h4, h5, h6 {
34 | font-family: Avenir Next, sans-serif;
35 | color: #565656;
36 | }
37 |
38 | .a, a {
39 | color: #00b1d8;
40 | text-decoration: underline;
41 | }
42 |
43 | .row {
44 | margin-right: 0;
45 | margin-left: 0;
46 | }
47 |
48 | /*.btn, .btn-default {
49 | color: #fff;
50 | background-color: #00b1d8;
51 | border-color: #fff;
52 | display: inline;
53 | float: right;
54 | }*/
55 |
56 | .btn-default, .btn-default:hover {
57 | color: #fff;
58 | background-color: #00b1d8;
59 | border-color: #fff;
60 | }
61 |
62 | .btn-default.active, .btn-default:active, .open>.dropdown-toggle.btn-default {
63 | color: #fff;
64 | background-color: #00b1d8;
65 | background-image: none;
66 | border-color: #ffffff;
67 | }
68 |
69 | .btn-default.active.focus, .btn-default.active:focus, .btn-default.active:hover, .btn-default:active.focus, .btn-default:active:focus, .btn-default:active:hover, .open>.dropdown-toggle.btn-default.focus, .open>.dropdown-toggle.btn-default:focus, .open>.dropdown-toggle.btn-default:hover {
70 | color: #fff;
71 | background-color: #00b1d8;
72 | border-color: #ffffff;
73 | }
74 |
75 | .btn-default.focus, .btn-default:focus {
76 | color: #fff;
77 | background-color: #00b1d8;
78 | border-color: #ffffff;
79 | }
80 |
81 | .bootstrap-select>.dropdown-toggle {
82 | font-size: 1.25em;
83 | }
84 |
85 | #pB2 {
86 | background-color: #00b1d8;
87 | border-color: white;
88 | color: white;
89 | }
90 |
91 | .bttn-material-flat.bttn-default {
92 | float: right;
93 | }
94 |
95 | hr {
96 | border-top: 1px solid #eee0;
97 | }
98 |
99 | .dropdown-menu>.active>a, .dropdown-menu>.active>a:focus, .dropdown-menu>.active>a:hover {
100 | color: #fff !important;
101 | background-color: #00b1d8 !important;
102 | }
103 |
104 | .shiny-input-container:not(.shiny-input-container-inline) {
105 | padding-top: 10px;
106 | }
107 |
108 | .bootstrap-select .dropdown-toggle .filter-option-inner-inner {
109 | text-align: center;
110 | }
111 |
112 | .btn_div {
113 | display: inline-block;
114 | vertical-align: initial;
115 | }
116 |
117 | .well {
118 | margin-bottom: 0px;
119 | }
120 |
121 | .footer {
122 | max-height: 100%;
123 | bottom: 0px;
124 | left: 0px;
125 | right: 0px;
126 | margin-bottom: 0px;
127 | background-color: #FFFFFF;
128 | text-align: center;
129 | }
130 |
131 | .img_footer{
132 | padding-top: 10px;
133 | padding-right: 100px;
134 | padding-left: 100px;
135 | }
136 |
137 | .imgContainerFooter{
138 | padding-top: 20px;
139 | float: right;
140 | }
141 |
142 | .imgContainerFooter img {
143 | height: 40px;
144 | padding: 5px 5px 5px 5px;
145 | }
146 |
147 | .logoContainerFooter{
148 | padding-top: 0px;
149 | float: left;
150 | }
151 |
152 | .logoContainerFooter img {
153 | height: 30px;
154 | }
155 |
156 | .shiny-input-container:not(.shiny-input-container-inline) {
157 | padding-top: 0px;
158 | margin-bottom: 0px;
159 | }
160 |
161 | .column {
162 | float: left;
163 | width: 50%;
164 | }
165 |
166 | .row:after {
167 | content: "";
168 | display: table;
169 | clear: both;
170 | }
--------------------------------------------------------------------------------
/R/JAGS/mcmcplot_country.R:
--------------------------------------------------------------------------------
1 | mcmcplot_country <- function (mcmcout, parms = NULL, regex = NULL, random = NULL,
2 | leaf.marker = "[\\[_]", dir = tempdir(), filename = "MCMCoutput",
3 | extension = "html", title = NULL, heading = title,
4 | col = NULL, lty = 1, xlim = NULL, ylim = NULL, style = c("gray","plain"),
5 | greek = FALSE,
6 | country = NULL,
7 | type = NULL)
8 | {
9 | if (is.null(title))
10 | title <- paste("MCMC Plots: ", deparse(substitute(mcmcout)),
11 | sep = "")
12 | if (is.null(heading))
13 | heading <- title
14 | style <- match.arg(style)
15 | current.devices <- dev.list()
16 | on.exit(sapply(dev.list(), function(dev) if (!(dev %in% current.devices)) dev.off(dev)))
17 | mcmcout <- convert.mcmc.list(mcmcout)
18 | nchains <- length(mcmcout)
19 | if (is.null(col)) {
20 | col <- mcmcplotsPalette(nchains)
21 | }
22 | css.file <- system.file("MCMCoutput.css", package = "mcmcplots")
23 | css.file <- paste("file:///", css.file, sep = "")
24 | htmlfile <- .html.begin(dir, filename, extension, title = title,
25 | cssfile = css.file)
26 | if (is.null(varnames(mcmcout))) {
27 | warning("Argument 'mcmcout' did not have valid variable names, so names have been created for you.")
28 | varnames(mcmcout) <- varnames(mcmcout, allow.null = FALSE)
29 | }
30 | parnames <- parms2plot(varnames(mcmcout), parms, regex, random,
31 | leaf.marker, do.unlist = FALSE)
32 | if (length(parnames) == 0)
33 | stop("No parameters matched arguments 'parms' or 'regex'.")
34 | np <- length(unlist(parnames))
35 | cat("\n\n", file = htmlfile, append = TRUE)
36 | cat("<h1>", heading, "</h1>", sep = "",
37 | file = htmlfile, append = TRUE)
38 | cat("<hr>\n", file = htmlfile, append = TRUE)
39 | cat("\n<h2>Table of Contents</h2>", file = htmlfile,
40 | append = TRUE)
41 | cat("<ul>\n", file = htmlfile, append = TRUE)
42 | for (group.name in names(parnames)) {
43 | cat(sprintf("<li><a href=\"#%s\">%s</a></li>\n",
44 | group.name, group.name), file = htmlfile, append = TRUE)
45 | }
46 | cat("</ul>\n", file = htmlfile, append = TRUE)
47 | cat("<hr>\n", file = htmlfile, append = TRUE)
48 | htmlwidth <- 640
49 | htmlheight <- 480
50 | for (group.name in names(parnames)) {
51 | cat(sprintf("<h2><a name=\"%s\">%s</a></h2>\n",
52 | group.name, group.name), file = htmlfile, append = TRUE)
53 | for (p in parnames[[group.name]]) {
54 | pctdone <- round(100 * match(p, unlist(parnames))/np)
55 | cat("\r", rep(" ", getOption("width")),
56 | sep = "")
57 | cat("\rPreparing plots for ", group.name, ". ",
58 | pctdone, "% complete.", sep = "")
59 | gname <- paste(country,'_',type,'_', p, ".png", sep = "")
60 | png(file.path(dir, gname), width = htmlwidth, height = htmlheight)
61 | plot_err <- tryCatch({
62 | mcmcplot1(mcmcout[, p, drop = FALSE], col = col,
63 | lty = lty, xlim = xlim, ylim = ylim, style = style,
64 | greek = greek)
65 | }, error = function(e) {
66 | e
67 | })
68 | dev.off()
69 | if (inherits(plot_err, "error")) {
70 | cat(sprintf("<p>%s. %s</p>",
71 | p, plot_err), file = htmlfile, append = TRUE)
72 | }
73 | else {
74 | .html.img(file = htmlfile, class = "mcmcplot",
75 | src = gname, width = htmlwidth, height = htmlheight)
76 | }
77 | }
78 | }
79 | cat("\r", rep(" ", getOption("width")),
80 | "\r", sep = "")
81 | cat("\n
\n
\n", file = htmlfile, append = TRUE)
82 | .html.end(htmlfile)
83 | full.name.path <- paste("file://", htmlfile, sep = "")
84 | #browseURL(full.name.path)
85 | invisible(full.name.path)
86 | }
87 |
88 | environment(mcmcplot_country) <- environment(mcmcplot)
89 |
90 |
--------------------------------------------------------------------------------
/R/STAN/template_cases_BR.R:
--------------------------------------------------------------------------------
1 | rm(list=ls())
2 | setwd("/home/marcosop/Covid/R/STAN")
3 |
4 | ###################################################################
5 | ### Packages
6 | ###################################################################
7 | library(PandemicLP)
8 | library(foreach)
9 | library(doMC)
10 | library(covid19br)
11 | library(dplyr)
12 | library(zoo)
13 | source("utils.R")
14 |
15 | Sys.setenv(LANGUAGE='en')
16 | rstan_options(auto_write = TRUE)
17 |
18 | ###################################################################
19 | ### Data sets: https://github.com/CSSEGISandData
20 | ###################################################################
21 | covid19 <- downloadCovid19(level = "states") %>%
22 | select(date,n=accumCases,d=accumDeaths,n_new=newCases, d_new=newDeaths,state) %>%
23 | arrange(state,date)
24 |
25 | uf <- distinct(covid19,state)
26 |
27 | br_pop <- read.csv("../pop/pop_BR.csv")
28 |
29 |
30 | state_list <- _statelist_
31 |
32 | #register cores
33 | #registerDoMC(cores = detectCores()-1) # Linux alternative
34 | #registerDoMC(cores = 5) # Linux alternative
35 | registerDoMC(cores = min(63,length(state_list))) # Linux alternative
36 |
37 | obj <- foreach(s = 1:length(state_list)) %dopar% {
38 |
39 | estado <- state_list[s]
40 | data <- covid19 %>% filter(state== estado) %>%
41 | select(date=date, cases=n, deaths=d, new_cases=n_new, new_deaths=d_new,-state)
42 |
43 | #remove duplicated data
44 | {if(sum(duplicated(data$date)) > 0){
45 | data <- data[-which(duplicated(data$date)),]
46 | }}
47 |
48 | while(any(data$new_cases <0)){
49 | pos <- which(data$new_cases <0)
50 | for(j in pos){
51 | data$new_cases[j-1] = data$new_cases[j] + data$new_cases[j-1]
52 | data$new_cases[j] = 0
53 | }
54 | }
55 |
56 | pop <- br_pop$pop[which(br_pop$uf == estado)]
57 | names <- paste("Brazil",estado,sep="_")
58 |
59 | covid_state <- list(data=as.data.frame(data), name = names, population = pop)
60 |
61 | nwaves = _nwaves_
62 |
63 | nom <- c("date","n","d","n_new","d_new")
64 | results_directory = "/home/marcosop/Covid/chains/"
65 | file_id <- paste0(state_list[s],'_',nom[2],'e')
66 | old <- readRDS(paste0(results_directory,file_id,'.rds'))
67 |
68 | mod <- pandemic_model(covid_state,case_type = "confirmed", p = 0.08,
69 | seasonal_effect=c("sunday","monday"),n_waves = nwaves,
70 | warmup = 10e3, thin = 3, sample_size = 1e3,
71 | init=old, covidLPconfig = FALSE) # run the model
72 |
73 | names(covid_state$data) <- nom
74 |
75 | pred <- posterior_predict(mod,horizonLong = 1000,horizonShort = 14) # do predictions
76 |
77 | stats <- pandemic_stats(pred) # calculate stats
78 | stats[[1]] <- NULL # remove the data (the app uses the data coming from another object)
79 | names(stats) <- c("df_predict","lt_predict","lt_summary","mu_plot")
80 |
81 | names(stats$df_predict) <- c("date", "q25", "med", "q975", "m")
82 | names(stats$lt_predict) <- c("date", "q25", "med", "q975", "m")
83 | names(stats$lt_summary) <- c("NTC25","NTC500","NTC975","high.dat.low","high.dat.med","high.dat.upper","end.dat.low",
84 | "end.dat.med","end.dat.upper")
85 |
86 | #create flag
87 | flag <- try(classify.flag(covid_state$data[,c("date","n_new")],stats$mu_plot, rem_saz = TRUE))
88 |
89 | if(class(flag)== "try-error") flag <- -1
90 |
91 | list_out <- list( df_predict = stats$df_predict, lt_predict=stats$lt_predict, lt_summary=stats$lt_summary,
92 | mu_plot = stats$mu_plot, residuals = cbind(mod$nominal_errors, mod$relative_errors),
93 | flag = flag)
94 |
95 | ### saveRDS
96 | results_directory = "/home/marcosop/Covid/app_COVID19/STpredictions/"
97 | file_id <- paste0(state_list[s],'_',colnames(covid_state$data)[2],'e')
98 | saveRDS(list_out, file=paste0(results_directory,'Brazil_',file_id,'.rds'))
99 |
100 |
101 | ### saveRDS - THE POSTERIOR PREDICT (posterior_predict object)
102 | results_directory = "/home/marcosop/TMP/STaux/"
103 | file_id <- paste0(state_list[s],'_posterior_predict_',colnames(covid_state$data)[2],'e')
104 | saveRDS(pred,file = paste0(results_directory,file_id,'.rds'))
105 |
106 | ### saveChain
107 | results_directory = "/home/marcosop/Covid/chains/"
108 | file_id <- paste0(state_list[s],'_',colnames(covid_state$data)[2],'e')
109 | saveRDS(mod, file=paste0(results_directory,file_id,'.rds'))
110 | }
111 |
--------------------------------------------------------------------------------
/R/STAN/template_deaths_BR.R:
--------------------------------------------------------------------------------
1 | rm(list=ls())
2 | setwd("/home/marcosop/Covid/R/STAN")
3 |
4 | ###################################################################
5 | ### Packages
6 | ###################################################################
7 | library(PandemicLP)
8 | library(foreach)
9 | library(doMC)
10 | library(covid19br)
11 | library(dplyr)
12 | library(zoo)
13 | source("utils.R")
14 |
15 | Sys.setenv(LANGUAGE='en')
16 | rstan_options(auto_write = TRUE)
17 |
18 | ###################################################################
19 | ### Data sets: https://github.com/CSSEGISandData
20 | ###################################################################
21 | covid19 <- downloadCovid19(level = "states") %>%
22 | select(date,n=accumCases,d=accumDeaths,n_new=newCases, d_new=newDeaths,state) %>%
23 | arrange(state,date)
24 |
25 | uf <- distinct(covid19,state)
26 |
27 | br_pop <- read.csv("../pop/pop_BR.csv")
28 |
29 |
30 | state_list <- _statelist_
31 |
32 | #register cores
33 | #registerDoMC(cores = detectCores()-1) # Linux alternative
34 | #registerDoMC(cores = 15) # Linux alternative
35 | registerDoMC(cores = min(63,length(state_list))) # Linux alternative
36 |
37 | obj <- foreach(s = 1:length(state_list)) %dopar% {
38 |
39 | estado <- state_list[s]
40 | data <- covid19 %>% filter(state== estado) %>%
41 | select(date=date, cases=n, deaths=d, new_cases=n_new, new_deaths=d_new,-state)
42 |
43 | #remove duplicated data
44 | {if(sum(duplicated(data$date)) > 0){
45 | data <- data[-which(duplicated(data$date)),]
46 | }}
47 |
48 | while(any(data$new_deaths <0)){
49 | pos <- which(data$new_deaths <0)
50 | for(j in pos){
51 | data$new_deaths[j-1] = data$new_deaths[j] + data$new_deaths[j-1]
52 | data$new_deaths[j] = 0
53 | }
54 | }
55 |
56 |
57 | pop <- br_pop$pop[which(br_pop$uf == estado)]
58 | names <- paste("Brazil",estado,sep="_")
59 |
60 | covid_state <- list(data=as.data.frame(data), name = names, population = pop)
61 |
62 | nwaves = _nwaves_
63 |
64 | nom <- c("date","n","d","n_new","d_new")
65 | results_directory = "/home/marcosop/Covid/chains/"
66 | file_id <- paste0(state_list[s],'_',nom[3],'e')
67 | old <- readRDS(paste0(results_directory,file_id,'.rds'))
68 |
69 |
70 | mod <- pandemic_model(covid_state,case_type = "deaths", p = 0.08*0.25,
71 | seasonal_effect=c("sunday","monday"),n_waves = nwaves,
72 | warmup = 10e3, thin = 3, sample_size = 1e3,
73 | init=old, covidLPconfig = FALSE) # run the model
74 |
75 | names(covid_state$data) <- nom
76 |
77 | pred <- posterior_predict(mod,horizonLong = 1000,horizonShort = 14) # do predictions
78 |
79 | stats <- pandemic_stats(pred) # calculate stats
80 | stats[[1]] <- NULL # remove the data (the app uses the data coming from another object)
81 | names(stats) <- c("df_predict","lt_predict","lt_summary","mu_plot")
82 |
83 | names(stats$df_predict) <- c("date", "q25", "med", "q975", "m")
84 | names(stats$lt_predict) <- c("date", "q25", "med", "q975", "m")
85 | names(stats$lt_summary) <- c("NTC25","NTC500","NTC975","high.dat.low","high.dat.med","high.dat.upper","end.dat.low",
86 | "end.dat.med","end.dat.upper")
87 |
88 | #create flag
89 | flag <- try(classify.flag(covid_state$data[,c("date","d_new")],stats$mu_plot, rem_saz = TRUE))
90 |
91 | if(class(flag)== "try-error") flag <- -1
92 |
93 | list_out <- list( df_predict = stats$df_predict, lt_predict=stats$lt_predict, lt_summary=stats$lt_summary,
94 | mu_plot = stats$mu_plot, residuals = cbind(mod$nominal_errors, mod$relative_errors),
95 | flag = flag)
96 |
97 |
98 | ### saveRDS
99 | results_directory = "/home/marcosop/Covid/app_COVID19/STpredictions/"
100 | file_id <- paste0(state_list[s],'_',colnames(covid_state$data)[3],'e')
101 | saveRDS(list_out, file=paste0(results_directory,'Brazil_',file_id,'.rds'))
102 |
103 |
104 | ### saveRDS - THE POSTERIOR PREDICT (posterior_predict object)
105 | results_directory = "/home/marcosop/TMP/STaux/"
106 | file_id <- paste0(state_list[s],'_posterior_predict_',colnames(covid_state$data)[3],'e')
107 | saveRDS(pred,file = paste0(results_directory,file_id,'.rds'))
108 |
109 | ### saveChain
110 | results_directory = "/home/marcosop/Covid/chains/"
111 | file_id <- paste0(state_list[s],'_',colnames(covid_state$data)[3],'e')
112 | saveRDS(mod, file=paste0(results_directory,file_id,'.rds'))
113 |
114 | }
115 |
--------------------------------------------------------------------------------
/R/JAGS/predict_BR.R:
--------------------------------------------------------------------------------
1 | rm(list=ls())
2 |
3 | setwd("/run/media/marcos/OS/UFMG/Pesquisa/Covid/R")
4 |
5 | ###################################################################
6 | ### Packages
7 | ###################################################################
8 | library(dplyr)
9 | library(tidyr)
10 | library("rjags")
11 | library(matrixStats)
12 | library(mcmcplots)
13 |
14 |
15 |
16 | ###################################################################
17 | ### Data sets: https://github.com/CSSEGISandData
18 | ###################################################################
19 | baseURLbr = "https://raw.githubusercontent.com/covid19br/covid19br.github.io/master/dados"
20 |
21 | covid19uf <- read.csv(file.path(baseURLbr,"EstadosCov19.csv"), check.names=FALSE, stringsAsFactors=FALSE) %>%
22 | rename(state = estado,
23 | date = data,
24 | n = casos.acumulados,
25 | d = obitos.acumulados,
26 | n_new = novos.casos,
27 | d_new = obitos.novos) %>%
28 | mutate(date = as.Date(date)) %>%
29 | select(date, n, d, n_new, d_new, state) %>%
30 | arrange(state,date) %>% filter(date>'2020-02-01')
31 |
32 | covid19br <- read.csv(file.path(baseURLbr,"BrasilCov19.csv"), check.names=FALSE, stringsAsFactors=FALSE) %>%
33 | mutate(state = 'BR') %>%
34 | rename(date = data,
35 | n = casos.acumulados,
36 | d = obitos.acumulados,
37 | n_new = novos.casos,
38 | d_new = obitos.novos) %>%
39 | mutate(date = as.Date(date)) %>%
40 | select(date, n, d, n_new, d_new, state) %>%
41 | arrange(date) %>% filter(date>'2020-02-01')
42 |
43 | covid19 <- bind_rows(covid19uf,covid19br)
44 | uf <- distinct(covid19,state)
45 |
46 | # class(covid19)
47 | # covid19 %>% tbl_df %>% print(n=Inf)
48 |
49 | ###########################################################################
50 | ###### JAGS
51 | ###########################################################################
52 | source("jags_poisson.R")
53 | i = 2 # (2: confirmed, 3: deaths)
54 | L = 200
55 | params = c("a","b","c","assint","yfut")
56 |
57 | for ( s in 1:dim(uf)[1] ) {
58 |
59 | #t0 = Sys.time()
60 |
61 | Y = covid19 %>% filter(state==uf$state[s])
62 | t = dim(Y)[1]
63 | data_jags = list(y=Y[[i]], t=t, L=L)
64 | nc = 3
65 | ni = 5e4
66 |
67 | # set.seed(100)
68 | mod = jags.model(textConnection(mod_string), data=data_jags, n.chains=nc)
69 | update(mod, n.iter=5e3)
70 | mod_sim = try(coda.samples(model=mod, variable.names=params, n.iter=ni))
71 |
72 | if(class(mod_sim) != "try-error"){
73 | mod_chain = as.data.frame(do.call(rbind, mod_sim))
74 | # names(mod_chain)
75 |
76 | npar = 4
77 | mod_chain_y = as.matrix(mod_chain %>% select(-names(mod_chain)[1:npar]))
78 | mod_chain_cumy = rowCumsums(mod_chain_y) + Y[[i]][t]
79 |
80 |
81 | ### list output
82 | L0 = 14
83 | df_predict <- data.frame( date = as.Date((max(Y$date)+1):(max(Y$date)+L0), origin="1970-01-01"),
84 | q25 = colQuantiles(mod_chain_cumy[,1:L0], prob=.025),
85 | med = colQuantiles(mod_chain_cumy[,1:L0], prob=.5),
86 | q975 = colQuantiles(mod_chain_cumy[,1:L0], prob=.975),
87 | m = colMeans(mod_chain_cumy[,1:L0]))
88 | row.names(df_predict) <- NULL
89 |
90 | list_out <- list( df_predict = df_predict )
91 |
92 |
93 | ### saveRDS
94 | results_directory = "/run/media/marcos/OS/UFMG/Pesquisa/Covid/app_COVID19/STpredictions/"
95 | file_id <- ifelse(uf$state[s]=='BR', colnames(Y)[i] , paste0(uf$state[s],'_',colnames(Y)[i],'e'))
96 | saveRDS(list_out, file=paste0(results_directory,'Brazil_',file_id,'.rds'))
97 |
98 |
99 | ### report
100 | source("mcmcplot_country.R")
101 | report_directory = "/run/media/marcos/OS/UFMG/Pesquisa/Covid/app_COVID19/STpredictions/reports"
102 | #report_directory = 'C:/Users/ricar/Dropbox/covid19/R/predict/report'
103 | mcmcplot_country(mcmcout = mod_chain, parms = c("a", "b", "c", "assint"),
104 | dir = report_directory,
105 | filename = paste0('Brazil_',file_id,'_diagnostics'),
106 | heading = paste0('Brazil_',file_id),
107 | extension = "html", greek = TRUE,
108 | country = 'Brazil',
109 | type = file_id)
110 |
111 |
112 | ### run time
113 | #run_time = round(as.numeric(Sys.time()-t0, units="mins"),2)
114 | #print(noquote(paste(run_time, "minutes to", uf$state[s])))
115 | }
116 | else print(paste0("ERROR: ", uf$state[s]))
117 |
118 | }
119 |
120 |
121 |
--------------------------------------------------------------------------------
/app_v1/global.R:
--------------------------------------------------------------------------------
1 | ## Load packages
2 | library(dplyr)
3 | library(tidyr)
4 | library(httr)
5 | library(lubridate)
6 | library(knitr)
7 | library(shiny)
8 | library(shinyjs)
9 | library(plotly)
10 | library(shinythemes)
11 | library(shinycssloaders)
12 | library(markdown)
13 | library(shinyWidgets)
14 | library(shinyBS)
15 |
16 | ##############################################################################
17 |
18 | ## Define font to be used later
19 | f1 = list(family = "Arial", size = 10, color = "rgb(30,30,30)")
20 |
21 | ## Function to measure how old a file is
22 | minutesSinceLastUpdate = function(fileName) {
23 | (as.numeric(as.POSIXlt(Sys.time())) - as.numeric(file.info(fileName)$ctime)) / 60
24 | }
25 |
26 | ## Function to format the dates for better plotting
27 | printDate = function(date){
28 | # paste0(day(date),"/",month(date, lab=T, locale="us"))
29 | monthsEn=c("Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec")
30 | paste0(day(date),"/",monthsEn[month(date)])
31 | }
32 |
33 | ## colors for observed data
34 | blu = 'rgb(100,140,240)'
35 | dblu = 'rgb(0,0,102)'
36 | red = 'rgb(200,30,30)'
37 | dred = 'rgb(100,30,30)'
38 |
39 | ##############################################################################
40 | ## DATA SOURCES
41 |
42 | ## setup data source (Johns Hopkins)- GLOBAL
43 | baseURL = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series"
44 |
45 | ## Main function to load the data - GLOBAL
46 | loadData = function(fileName, columnName) {
47 | if(!file.exists(fileName) ||
48 | minutesSinceLastUpdate(fileName) > 10) {
49 | data = read.csv(file.path(baseURL, fileName),
50 | check.names=FALSE, stringsAsFactors=FALSE) %>%
51 | select(-Lat, -Long) %>%
52 | pivot_longer(-(1:2), names_to="date", values_to=columnName)%>%
53 | mutate(
54 | date=as.Date(date, format="%m/%d/%y"),
55 | `Province/State`=
56 | if_else(`Province/State` == "", "", `Province/State`)
57 | )
58 | save(data, file=fileName)
59 | } else {
60 | load(file=fileName)
61 | }
62 | return(data)
63 | }
64 |
65 | allData = loadData("time_series_covid19_confirmed_global.csv", "CumConfirmed") %>%
66 | inner_join(
67 | loadData("time_series_covid19_deaths_global.csv", "CumDeaths"),
68 | by = c("Province/State", "Country/Region", "date")
69 | ) # %>%
70 | # inner_join(
71 | # loadData("time_series_covid19_recovered_global.csv","CumRecovered"))
72 |
73 | ##############################################################################
74 | ## Setup data source (MSaude/BR)- BRAZIL
75 | # baseURL.BR = "https://raw.githubusercontent.com/belisards/coronabr/master/dados"
76 | # baseURL.BR = "https://covid.saude.gov.br/assets/files/COVID19_"
77 | baseURL.BR = "https://raw.githubusercontent.com/covid19br/covid19br.github.io/master/dados"
78 |
79 | ## Function to load data - BRAZIL
80 | loadData.BR = function(fileName) {
81 | if(!file.exists(fileName) || minutesSinceLastUpdate(fileName) > 10) {
82 | data = read.csv(file.path(baseURL.BR, fileName),
83 | check.names=FALSE, stringsAsFactors=FALSE) %>%
84 | # select(-c(1,3:8,10:11,14)) %>% # remove unwanted columns
85 | select(-c(1,4,6)) %>%
86 | as_tibble() %>%
87 | mutate(date = as.Date(data)
88 | # date=as.Date(date), # format dates
89 | # `CumRecovered` = NA
90 | ) %>%
91 | rename(`Province/State` = estado, # rename some variables
92 | `CumConfirmed` = casos.acumulados,
93 | `CumDeaths` = obitos.acumulados
94 | ) %>%
95 | select(-2)
96 | # browser()
97 | save(data, file=fileName)
98 | } else {
99 | load(file=fileName)
100 | }
101 |
102 | return(data)
103 | }
104 |
105 | # brData = loadData.BR("corona_brasil.csv")
106 | brData = loadData.BR("EstadosCov19.csv")
107 |
108 | ##############################################################################
109 | ## LOAD PREDICTION RESULTS
110 |
111 | ## Function to return list of countries with available data on github
112 | readfiles.repo <- function() {
113 | req <- GET("https://api.github.com/repos/thaispaiva/app_COVID19/git/trees/master?recursive=1")
114 | stop_for_status(req)
115 | # extract list of files on github
116 | filelist <- unlist(lapply(content(req)$tree, "[", "path"), use.names = F)
117 | files <- grep("STpredictions/", filelist, value = TRUE, fixed = TRUE)
118 | files <- unlist(lapply(strsplit(files,"/"),"[",2))
119 | files <- grep(".rds",files, value=TRUE)
120 |
121 | return(files)
122 | }
123 |
124 | files = readfiles.repo() # get available results
125 | aux = sub('\\_n.rds$', '', sub('\\_d.rds$', '', files))
126 |
127 | ##############################################
128 | ## List of countries for SHORT TERM prediction
129 | countries_STpred = sort(unique( # country names without space
130 | unlist(lapply(strsplit(aux,"_"), function(x) x[1]))))
131 | countries_STpred_orig = gsub("-"," ", countries_STpred) # country names with space (original)
132 |
133 | ## List of Brazil's states
134 | statesBR_STpred = unique(
135 | unlist(lapply(strsplit(aux,"_"), function(x) if(x[1]=="Brazil") return(x[2]) )))
136 | statesBR_STpred[is.na(statesBR_STpred)] = ""
137 | statesBR_STpred = sort(statesBR_STpred)
138 |
139 | #############################################
140 | ## List of countries for LONG TERM prediction
141 | countries_LTpred_orig = countries_STpred_orig
142 |
143 | ## List of Brazil's states - LONG TERM
144 | statesBR_LTpred = statesBR_STpred
145 |
146 | ## Read RDS files from github repository
147 | githubURL = "https://github.com/thaispaiva/app_COVID19/raw/master/STpredictions"
148 |
--------------------------------------------------------------------------------
/R/STAN/utils.R:
--------------------------------------------------------------------------------
1 | classify.flag = function(obs, adj, rem_saz = FALSE){
2 |
3 | colnames(obs) = c("date", "y")
4 | nn = nrow(obs)
5 |
6 | if(rem_saz){
7 |
8 | dados.aux = suppressMessages(full_join(obs, adj) %>%
9 | mutate(week_days = weekdays(as.Date(date))) %>%
10 | filter(!(week_days == "domingo" | week_days == "segunda-feira")) %>%
11 | mutate(y_s = rollmean(y, k = 20, fill = NA) %>% rollmean(k = 10, fill = NA), # smooth the data
12 | dif_s = c(NA, y_s %>% diff()) %>% rollmean(k = 50, fill = NA), # smoothed difference
13 | s1 = dif_s >= 0,
14 | s2 = dif_s < 0,
15 | id = c(NA, diff(s1 - s2)) ))
16 |
17 | } else{
18 |
19 | dados.aux = suppressMessages(full_join(obs, adj) %>%
20 | mutate(y_s = rollmean(y, k = 20, fill = NA) %>% rollmean(k = 10, fill = NA), # smooth the data
21 | dif_s = c(NA, y_s %>% diff()) %>% rollmean(k = 50, fill = NA), # smoothed difference
22 | s1 = dif_s >= 0,
23 | s2 = dif_s < 0,
24 | id = c(NA, diff(s1 - s2)) ))
25 |
26 | }
27 |
28 | maxx = max(dados.aux$y_s, na.rm = TRUE)
29 |
30 | aux = dados.aux %>% filter(id == 2 |id == -2) %>% arrange(desc(date))
31 |
32 | if(aux$id[1] == -2){ # if the last identified point is a peak, the amplitude is computed from the last observation of the smoothed y
33 | aux = rbind(tail(dados.aux %>% filter(!is.na(y_s)), 1) %>% mutate(id = 2), aux)
34 | }
35 |
36 | est = aux %>% mutate(amp_y = c(NA, diff(y_s)),
37 | amp_mu = c(NA, diff(mu)),
38 | id_df = c(NA, diff(id)),
39 | est = (amp_y - amp_mu)^2/maxx^2) %>%
40 | filter(id_df == -4) # correct amplitudes
41 |
42 | est1 = (est %>% summarise(crit = sum(est)))$crit
43 |
44 | est1
45 |
46 | }
47 |
48 | classify.flag.old = function(obs, adj){
49 |
50 | colnames(obs) = c("date", "y")
51 |
52 | dados.aux = suppressMessages(full_join(obs, adj) %>%
53 | mutate(y_s = rollmean(y, k = 20, fill = NA) %>% rollmean(k = 10, fill = NA), # smooth the data
54 | dif_s = c(NA, y_s %>% diff()) %>% rollmean(k = 50, fill = NA), # smoothed difference
55 | id = c(NA, dif_s %>% sign() %>% diff()) ) ) # flags a change of behaviour (sign change)
56 |
57 | maxx = max(dados.aux$y_s, na.rm = TRUE)
58 |
59 | aux = dados.aux %>% filter(id == 2 |id == -2) %>% arrange(desc(date))
60 |
61 | if(aux$id[1] == -2){ # if the last identified point is a peak, the amplitude is computed from the last observation of the smoothed y
62 | aux = rbind(tail(dados.aux %>% filter(!is.na(y_s)), 1) %>% mutate(id = 2), aux)
63 | }
64 |
65 | est = aux %>% mutate(amp_y = c(NA, diff(y_s)),
66 | amp_mu = c(NA, diff(mu)),
67 | id_df = c(NA, diff(id)),
68 | est = (amp_y - amp_mu)^2/maxx^2) %>%
69 | filter(id_df == -4) # correct amplitudes
70 |
71 | est1 = (est %>% summarise(crit = sum(est)))$crit
72 |
73 | est1
74 |
75 | }
76 |
77 | classify.flag.old2 = function(obs, adj){
78 |
79 | colnames(obs) = c("date", "y")
80 |
81 | dados = full_join(obs, adj) %>% mutate(y_s = rollmean(y, k = 20, fill = NA) %>% rollmean(k = 10, fill = NA), # smooth the data
82 | dif_s = c(NA, y_s %>% diff()) %>% rollmean(k = 50, fill = NA), # smoothed difference
83 | id = c(NA, dif_s %>% sign() %>% diff()) ) # flags a change of behaviour (sign change)
84 |
85 | aux = dados %>% filter(id == 2 |id == -2) %>% arrange(desc(date))
86 |
87 | if(aux$id[1] == -2){ # if the last identified point is a peak, the amplitude is computed from the last observation of the smoothed y
88 | aux = rbind(tail(dados %>% filter(!is.na(y_s)), 1) %>% mutate(id = 2), aux)
89 | }
90 |
91 | est = (aux %>% mutate(amp_y = c(NA, diff(y_s)),
92 | amp_mu = c(NA, diff(mu)),
93 | id_df = c(NA, diff(id)),
94 | est = (amp_y - amp_mu)^2/max(y_s)^2) %>%
95 | filter(id_df == -4) %>% # correct amplitudes
96 | summarise(crit = sum(est)))$crit
97 |
98 | #ifelse(est <= 0.2, 0, ifelse(est <= 0.5, 1, 2))
99 | est
100 | }
101 |
102 | nwaves <- function(country = TRUE, new_cases = TRUE){
103 |
104 | country_name = c("Argentina", "Australia", "Belgium", "Bolivia", "Canada", "Chile", "China", "Colombia", "Costa Rica", "Ecuador", "Ethiopia", "France", "Germany", "Greece", "Guatemala", "Honduras", "India", "Indonesia", "Iraq", "Ireland", "Italy", "Japan", "South Korea", "Mexico", "Morocco", "Netherlands", "New Zealand", "Norway", "Panama", "Paraguay", "Peru", "Poland", "Portugal", "Romania", "Russia", "Saudi Arabia", "South Africa", "Spain", "Sweden", "Switzerland", "Turkey", "Ukraine", "United Kingdom", "Uruguay", "United States of America", "Venezuela")
105 | state_name = c("AC", "AL", "AM", "AP", "BA", "CE", "DF", "ES", "GO", "MA", "MG", "MS", "MT", "PA", "PB", "PE", "PI", "PR", "RJ", "RN", "RO", "RR", "RS", "SC", "SE", "SP", "TO")
106 |
107 | if(country){
108 |
109 | data = apply(matrix(country_name), 1, load_covid)
110 |
111 | if(new_cases){
112 |
113 | npeaks = data %>% lapply(function(x) x$data$new_cases) %>%
114 | lapply(function(x) nrow(findpeaks(x, minpeakdistance = 95)))
115 |
116 | } else{
117 |
118 | npeaks = data %>% lapply(function(x) x$data$new_deaths) %>%
119 | lapply(function(x) nrow(findpeaks(x, minpeakdistance = 95)))
120 |
121 | }
122 |
123 | waves = npeaks %>% unlist
124 | nwaves = waves %>% unique %>% sort
125 |
126 | country_list = list()
127 | for(i in 1:length(nwaves)){
128 | pos = which(waves == nwaves[i])
129 | country_list[[i]] = country_name[pos]
130 | }
131 | names(country_list) = paste("wave", nwaves, sep = "")
132 |
133 | final_list = list(nwaves = nwaves, country_list = country_list)
134 |
135 | } else{ # states
136 |
137 | data = apply(matrix(state_name), 1, function(x) load_covid("Brazil", x))
138 |
139 | if(new_cases){
140 |
141 | npeaks = data %>% lapply(function(x) x$data$new_cases) %>%
142 | lapply(function(x) nrow(findpeaks(x, minpeakdistance = 95)))
143 |
144 | } else{
145 |
146 | npeaks = data %>% lapply(function(x) x$data$new_deaths) %>%
147 | lapply(function(x) nrow(findpeaks(x, minpeakdistance = 95)))
148 |
149 | }
150 |
151 | waves = npeaks %>% unlist
152 | nwaves = waves %>% unique %>% sort
153 |
154 | state_list = list()
155 | for(i in 1:length(nwaves)){
156 | pos = which(waves == nwaves[i])
157 | state_list[[i]] = state_name[pos]
158 | }
159 | names(state_list) = paste("wave", nwaves, sep = "")
160 |
161 | final_list = list(nwaves = nwaves, state_list = state_list)
162 |
163 | }
164 |
165 | final_list
166 |
167 | }
168 |
--------------------------------------------------------------------------------
/R/pop/pop_WR.csv:
--------------------------------------------------------------------------------
1 | country,code,pop
2 | Aruba,ABW,105845
3 | Afghanistan,AFG,37172386
4 | Angola,AGO,30809762
5 | Albania,ALB,2866376
6 | Andorra,AND,77006
7 | Arab World,ARB,419790588
8 | United Arab Emirates,ARE,9630959
9 | Argentina,ARG,44494502
10 | Armenia,ARM,2951776
11 | American Samoa,ASM,55465
12 | Antigua and Barbuda,ATG,96286
13 | Australia,AUS,24982688
14 | Austria,AUT,8840521
15 | Azerbaijan,AZE,9939800
16 | Burundi,BDI,11175378
17 | Belgium,BEL,11433256
18 | Benin,BEN,11485048
19 | Burkina Faso,BFA,19751535
20 | Bangladesh,BGD,161356039
21 | Bulgaria,BGR,7025037
22 | Bahrain,BHR,1569439
23 | Bahamas,BHS,385640
24 | Bosnia and Herzegovina,BIH,3323929
25 | Belarus,BLR,9483499
26 | Belize,BLZ,383071
27 | Bermuda,BMU,63973
28 | Bolivia,BOL,11353142
29 | Brazil,BRA,209469333
30 | Barbados,BRB,286641
31 | Brunei,BRN,428962
32 | Bhutan,BTN,754394
33 | Botswana,BWA,2254126
34 | Central African Republic,CAF,4666377
35 | Canada,CAN,37057765
36 | Central Europe and the Baltics,CEB,102511922
37 | Switzerland,CHE,8513227
38 | Channel Islands,CHI,170499
39 | Chile,CHL,18729160
40 | China,CHN,1392730000
41 | Cote d'Ivoire,CIV,25069229
42 | Cameroon,CMR,25216237
43 | Congo (Kinshasa),COD,84068091
44 | Congo (Brazzaville),COG,5244363
45 | Colombia,COL,49648685
46 | Comoros,COM,832322
47 | Cabo Verde,CPV,543767
48 | Costa Rica,CRI,4999441
49 | Caribbean small states,CSS,7358965
50 | Cuba,CUB,11338138
51 | Curacao,CUW,159800
52 | Cayman Islands,CYM,64174
53 | Cyprus,CYP,1189265
54 | Czechia,CZE,10629928
55 | Germany,DEU,82905782
56 | Djibouti,DJI,958920
57 | Dominica,DMA,71625
58 | Denmark,DNK,5793636
59 | Dominican Republic,DOM,10627165
60 | Algeria,DZA,42228429
61 | East Asia & Pacific (excluding high income),EAP,2081651801
62 | Early-demographic dividend,EAR,3249140605
63 | East Asia & Pacific,EAS,2328220870
64 | Europe & Central Asia (excluding high income),ECA,417797257
65 | Europe & Central Asia,ECS,918793590
66 | Ecuador,ECU,17084357
67 | Egypt,EGY,98423595
68 | Euro area,EMU,341783171
69 | Eritrea,ERI,
70 | Spain,ESP,46796540
71 | Estonia,EST,1321977
72 | Ethiopia,ETH,109224559
73 | European Union,EUU,446786293
74 | Fragile and conflict affected situations,FCS,743958386
75 | Finland,FIN,5515525
76 | Fiji,FJI,883483
77 | France,FRA,66977107
78 | Faroe Islands,FRO,48497
79 | Micronesia; Fed. Sts.,FSM,112640
80 | Gabon,GAB,2119275
81 | United Kingdom,GBR,66460344
82 | Georgia,GEO,3726549
83 | Ghana,GHA,29767108
84 | Gibraltar,GIB,33718
85 | Guinea,GIN,12414318
86 | Gambia,GMB,2280102
87 | Guinea-Bissau,GNB,1874309
88 | Equatorial Guinea,GNQ,1308974
89 | Greece,GRC,10731726
90 | Grenada,GRD,111454
91 | Greenland,GRL,56025
92 | Guatemala,GTM,17247807
93 | Guam,GUM,165768
94 | Guyana,GUY,779004
95 | High income,HIC,1210312147
96 | Hong Kong SAR; China,HKG,7451000
97 | Honduras,HND,9587522
98 | Heavily indebted poor countries (HIPC),HPC,780234406
99 | Croatia,HRV,4087843
100 | Haiti,HTI,11123176
101 | Hungary,HUN,9775564
102 | IBRD only,IBD,4772284113
103 | IDA & IBRD total,IBT,6412522234
104 | IDA total,IDA,1640238121
105 | IDA blend,IDB,555830605
106 | Indonesia,IDN,267663435
107 | IDA only,IDX,1084407516
108 | Isle of Man,IMN,84077
109 | India,IND,1352617328
110 | Not classified,INX,
111 | Ireland,IRL,4867309
112 | Iran,IRN,81800269
113 | Iraq,IRQ,38433600
114 | Iceland,ISL,352721
115 | Israel,ISR,8882800
116 | Italy,ITA,60421760
117 | Jamaica,JAM,2934855
118 | Jordan,JOR,9956011
119 | Japan,JPN,126529100
120 | Kazakhstan,KAZ,18272430
121 | Kenya,KEN,51393010
122 | Kyrgyzstan,KGZ,6322800
123 | Cambodia,KHM,16249798
124 | Kiribati,KIR,115847
125 | Saint Kitts and Nevis,KNA,52441
126 | "Korea, South",KOR,51606633
127 | Kuwait,KWT,4137309
128 | Latin America & Caribbean (excluding high income),LAC,609013934
129 | Laos,LAO,7061507
130 | Lebanon,LBN,6848925
131 | Liberia,LBR,4818977
132 | Libya,LBY,6678567
133 | Saint Lucia,LCA,181889
134 | Latin America & Caribbean,LCN,641357515
135 | Least developed countries: UN classification,LDC,1009662578
136 | Low income,LIC,705417321
137 | Liechtenstein,LIE,37910
138 | Sri Lanka,LKA,21670000
139 | Lower middle income,LMC,3022905169
140 | Low & middle income,LMY,6383958209
141 | Lesotho,LSO,2108132
142 | Late-demographic dividend,LTE,2288665963
143 | Lithuania,LTU,2801543
144 | Luxembourg,LUX,607950
145 | Latvia,LVA,1927174
146 | Macao SAR; China,MAC,631636
147 | St. Martin (French part),MAF,37264
148 | Morocco,MAR,36029138
149 | Monaco,MCO,38682
150 | Moldova,MDA,2706049
151 | Madagascar,MDG,26262368
152 | Maldives,MDV,515696
153 | Middle East & North Africa,MEA,448912859
154 | Mexico,MEX,126190788
155 | Marshall Islands,MHL,58413
156 | Middle income,MIC,5678540888
157 | North Macedonia,MKD,2082958
158 | Mali,MLI,19077690
159 | Malta,MLT,484630
160 | Burma,MMR,53708395
161 | Middle East & North Africa (excluding high income),MNA,382896715
162 | Montenegro,MNE,622227
163 | Mongolia,MNG,3170208
164 | Northern Mariana Islands,MNP,56882
165 | Mozambique,MOZ,29495962
166 | Mauritania,MRT,4403319
167 | Mauritius,MUS,1265303
168 | Malawi,MWI,18143315
169 | Malaysia,MYS,31528585
170 | North America,NAC,364290258
171 | Namibia,NAM,2448255
172 | New Caledonia,NCL,284060
173 | Niger,NER,22442948
174 | Nigeria,NGA,195874740
175 | Nicaragua,NIC,6465513
176 | Netherlands,NLD,17231624
177 | Norway,NOR,5311916
178 | Nepal,NPL,28087871
179 | Nauru,NRU,12704
180 | New Zealand,NZL,4841000
181 | OECD members,OED,1303529456
182 | Oman,OMN,4829483
183 | Other small states,OSS,30758989
184 | Pakistan,PAK,212215030
185 | Panama,PAN,4176873
186 | Peru,PER,31989256
187 | Philippines,PHL,106651922
188 | Palau,PLW,17907
189 | Papua New Guinea,PNG,8606316
190 | Poland,POL,37974750
191 | Pre-demographic dividend,PRE,919485393
192 | Puerto Rico,PRI,3195153
193 | Korea; Dem. People’s Rep.,PRK,25549819
194 | Portugal,PRT,10283822
195 | Paraguay,PRY,6956071
196 | West Bank and Gaza,PSE,4569087
197 | Pacific island small states,PSS,2457367
198 | Post-demographic dividend,PST,1109997273
199 | French Polynesia,PYF,277679
200 | Qatar,QAT,2781677
201 | Romania,ROU,19466145
202 | Russia,RUS,144478050
203 | Rwanda,RWA,12301939
204 | South Asia,SAS,1814388744
205 | Saudi Arabia,SAU,33699947
206 | Sudan,SDN,41801533
207 | Senegal,SEN,15854360
208 | Singapore,SGP,5638676
209 | Solomon Islands,SLB,652858
210 | Sierra Leone,SLE,7650154
211 | El Salvador,SLV,6420744
212 | San Marino,SMR,33785
213 | Somalia,SOM,15008154
214 | Serbia,SRB,6982604
215 | Sub-Saharan Africa (excluding high income),SSA,1078209758
216 | South Sudan,SSD,10975920
217 | Sub-Saharan Africa,SSF,1078306520
218 | Small states,SST,40575321
219 | Sao Tome and Principe,STP,211028
220 | Suriname,SUR,575991
221 | Slovakia,SVK,5446771
222 | Slovenia,SVN,2073894
223 | Sweden,SWE,10175214
224 | Eswatini,SWZ,1136191
225 | Sint Maarten (Dutch part),SXM,40654
226 | Seychelles,SYC,96762
227 | Syria,SYR,16906283
228 | Turks and Caicos Islands,TCA,37665
229 | Chad,TCD,15477751
230 | East Asia & Pacific (IDA & IBRD countries),TEA,2056064424
231 | Europe & Central Asia (IDA & IBRD countries),TEC,459865205
232 | Togo,TGO,7889094
233 | Thailand,THA,69428524
234 | Tajikistan,TJK,9100837
235 | Turkmenistan,TKM,5850908
236 | Latin America & the Caribbean (IDA & IBRD countries),TLA,625569713
237 | Timor-Leste,TLS,1267972
238 | Middle East & North Africa (IDA & IBRD countries),TMN,378327628
239 | Tonga,TON,103197
240 | South Asia (IDA & IBRD),TSA,1814388744
241 | Sub-Saharan Africa (IDA & IBRD countries),TSS,1078306520
242 | Trinidad and Tobago,TTO,1389858
243 | Tunisia,TUN,11565204
244 | Turkey,TUR,82319724
245 | Tuvalu,TUV,11508
246 | Tanzania,TZA,56318348
247 | Uganda,UGA,42723139
248 | Ukraine,UKR,44622516
249 | Upper middle income,UMC,2655635719
250 | Uruguay,URY,3449299
251 | US,USA,326687501
252 | Uzbekistan,UZB,32955400
253 | Saint Vincent and the Grenadines,VCT,110210
254 | Venezuela,VEN,28870195
255 | British Virgin Islands,VGB,29802
256 | Virgin Islands (U.S.),VIR,106977
257 | Vietnam,VNM,95540395
258 | Vanuatu,VUT,292680
259 | World,WLD,7594270356
260 | Samoa,WSM,196130
261 | Kosovo,XKX,1845300
262 | Yemen,YEM,28498687
263 | South Africa,ZAF,57779622
264 | Zambia,ZMB,17351822
265 | Zimbabwe,ZWE,14439018
266 |
--------------------------------------------------------------------------------
/R/JAGS/death_BR_par.R:
--------------------------------------------------------------------------------
1 | rm(list=ls())
2 |
3 | setwd("/run/media/marcos/OS/UFMG/Pesquisa/Covid/R/JAGS")
4 |
5 | ###################################################################
6 | ### Packages
7 | ###################################################################
8 | library(dplyr)
9 | library(tidyr)
10 | library("rjags")
11 | library(matrixStats)
12 | library(mcmcplots)
13 | library(foreach)
14 | library(doMC)
15 | ###################################################################
16 | ### Data sets: https://github.com/CSSEGISandData
17 | ###################################################################
18 | baseURLbr = "https://raw.githubusercontent.com/covid19br/covid19br.github.io/master/dados"
19 |
20 | covid19uf <- read.csv(file.path(baseURLbr,"EstadosCov19.csv"), check.names=FALSE, stringsAsFactors=FALSE) %>%
21 | rename(state = estado,
22 | date = data,
23 | n = casos.acumulados,
24 | d = obitos.acumulados,
25 | n_new = novos.casos,
26 | d_new = obitos.novos) %>%
27 | mutate(date = as.Date(date)) %>%
28 | select(date, n, d, -n_new, -d_new, state) %>%
29 | arrange(state,date) %>% filter(date>='2020-01-23')
30 |
31 | covid19br <- read.csv(file.path(baseURLbr,"BrasilCov19.csv"), check.names=FALSE, stringsAsFactors=FALSE) %>%
32 | mutate(state = 'BR') %>%
33 | rename(date = data,
34 | n = casos.acumulados,
35 | d = obitos.acumulados,
36 | n_new = novos.casos,
37 | d_new = obitos.novos) %>%
38 | mutate(date = as.Date(date)) %>%
39 | select(date, n, d, -n_new, -d_new, state) %>%
40 | arrange(date) %>% filter(date>='2020-01-23')
41 |
42 | covid19 <- bind_rows(covid19uf,covid19br)
43 | uf <- distinct(covid19,state)
44 |
45 | br_pop <- read.csv("../pop/pop_BR.csv")
46 |
47 | ###########################################################################
48 | ###### JAGS
49 | ###########################################################################
50 | #register cores
51 | registerDoMC(cores = detectCores()-1) # Linux alternative
52 |
53 | #for ( s in 1:dim(uf)[1] ) {
54 | obj <- foreach( s = 1:dim(uf)[1] ) %dopar% {
55 | #obj <- foreach( s = 1:3 ) %dopar% {
56 | source("jags_poisson.R")
57 |
58 | i = 5 # (2: confirmed, 3: deaths, 4: new confirmed, 5: new deaths)
59 | L = 300
60 | #t0 = Sys.time()
61 | estado = uf$state[s]
62 |
63 | Y <- covid19 %>% filter(state==estado) %>%
64 | mutate(n_new = n - lag(n, default=0),
65 | d_new = d - lag(d, default=0)) %>%
66 | select(date, n, d, n_new, d_new, state) %>%
67 | arrange(date) %>% filter(date>='2020-01-23')
68 |
69 | #Y = covid19 %>% filter(state==uf$state[s])
70 |
71 | while(any(Y$d_new <0)){
72 | pos <- which(Y$d_new <0)
73 | for(j in pos){
74 | Y$d_new[j-1] = Y$d_new[j] + Y$d_new[j-1]
75 | Y$d_new[j] = 0
76 | Y$d[j-1] = Y$d[j]
77 | }
78 | }
79 |
80 | pop <- br_pop$pop[which(br_pop$uf == uf$state[s])]
81 |
82 | t = dim(Y)[1]
83 |
84 | #use static to provide initial values
85 | params = c("a","b","c","f","yfut","mu")
86 | nc = 1 # 3
87 | nb = 90e3 # 5e4
88 | thin = 10
89 | ni = 10e3 # 5e4
90 | data_jags = list(y=Y[[i]], t=t, L=L)
91 | mod = try(jags.model(textConnection(mod_string_new), data=data_jags, n.chains=nc, n.adapt=nb, quiet=TRUE))
92 | try(update(mod, n.iter=ni, progress.bar="none"))
93 | mod_sim = try(coda.samples(model=mod, variable.names=params, n.iter=ni, thin=thin,progress.bar="none"))
94 |
95 | if(class(mod_sim) != "try-error" && class(mod) != "try-error"){
96 |
97 | mod_chain = as.data.frame(do.call(rbind, mod_sim))
98 |
99 | a_pos = "a"
100 | b_pos = "b"
101 | c_pos = "c"
102 | f_pos = "f"
103 | mu_pos = paste0("mu[",1:(t+L),"]")
104 | yfut_pos = paste0("yfut[",1:L,"]")
105 | L0 = 14
106 |
107 | mod_chain_y = as.matrix(mod_chain[yfut_pos])
108 | mod_chain_cumy = rowCumsums(mod_chain_y) + Y[[3]][t]
109 |
110 |
111 | ### list output
112 | df_predict <- data.frame( date = as.Date((max(Y$date)+1):(max(Y$date)+L0), origin="1970-01-01"),
113 | q25 = colQuantiles(mod_chain_cumy[,1:L0], prob=.025),
114 | med = colQuantiles(mod_chain_cumy[,1:L0], prob=.5),
115 | q975 = colQuantiles(mod_chain_cumy[,1:L0], prob=.975),
116 | m = colMeans(mod_chain_cumy[,1:L0]))
117 | row.names(df_predict) <- NULL
118 |
119 | lt_predict <- lt_summary <- NULL
120 |
121 | #longterm
122 | L0 = 200
123 |
124 | #find the quantile curves
125 | lowquant <- colQuantiles(mod_chain_y[,1:L0], prob=.025)
126 | medquant <- colQuantiles(mod_chain_y[,1:L0], prob=.5)
127 | highquant <- colQuantiles(mod_chain_y[,1:L0], prob=.975)
128 |
129 | NTC25 =sum(lowquant)+Y[[3]][t]
130 | NTC500=sum(medquant)+Y[[3]][t]
131 | NTC975=sum(highquant)+Y[[3]][t]
132 |
133 |
134 | ##flag
135 | cm <- pop * 0.025 * 0.12
136 | ch <- pop * 0.03 * 0.15
137 | flag <- 0 #all good
138 | {if(NTC500 > cm) flag <- 2 #do not plot
139 | else{if(NTC975 > ch){flag <- 1; NTC25 <- NTC975 <- NULL}}} #plot only the median
140 |
141 | #vector of future dates; get the position of the maximum of the lower (q25) quantile curve.
142 | dat.vec <- as.Date((max(Y$date)+1):(max(Y$date)+L0), origin="1970-01-01")
143 | dat.full <- c(Y[[1]],dat.vec)
144 |
145 |
146 | Dat25 <- Dat500 <- Dat975 <- NULL
147 | dat.low.end <- dat.med.end <- dat.high.end <- NULL
148 |
149 | mod_chain_mu = as.matrix(mod_chain[mu_pos])
150 | mu50 <- apply(mod_chain_mu,2,quantile, probs=0.5)
151 | Dat500 <- dat.full[which.max(mu50[1:(t+L0)])]
152 |
153 | q <- .99
154 | med.cum <- c(medquant[1]+Y[[3]][t],medquant[2:length(medquant)])
155 | med.cum <- colCumsums(as.matrix(med.cum))
156 | med.cum <- med.cum/med.cum[length(med.cum)]
157 | med.end <- which(med.cum - q > 0)[1]
158 | dat.med.end <- dat.vec[med.end]
159 |
160 | if(flag == 0){
161 | #definition of the peak using the mean (mu) curves
162 | mu25 <- apply(mod_chain_mu,2,quantile, probs=0.025)
163 | mu975 <- apply(mod_chain_mu,2,quantile, probs=.975)
164 |
165 | posMax.q25 <- which.max(mu25[1:(t+L0)])
166 | aux <- mu975 - mu25[posMax.q25]
167 | aux2 <- aux[posMax.q25:(t+L0)]
168 | val <- min(aux2[aux2>0])
169 | dat.max <- which(aux == val)
170 |
171 | aux <- mu975 - mu25[posMax.q25]
172 | aux2 <- aux[1:posMax.q25]
173 | val <- min(aux2[aux2>0])
174 | dat.min <- which(aux == val)
175 |
176 | Dat25 <- dat.full[dat.min]
177 | Dat975 <- dat.full[dat.max]
178 |
179 | #compute the end of the pandemic
180 | low.cum <- c(lowquant[1]+Y[[3]][t],lowquant[2:length(lowquant)])
181 | low.cum <- colCumsums(as.matrix(low.cum))
182 | low.cum <- low.cum/low.cum[length(low.cum)]
183 | low.end <- which(low.cum - q > 0)[1]
184 | dat.low.end <- dat.vec[low.end]
185 |
186 | high.cum <- c(highquant[1]+Y[[3]][t],highquant[2:length(highquant)])
187 | high.cum <- colCumsums(as.matrix(high.cum))
188 | high.cum <- high.cum/high.cum[length(high.cum)]
189 | high.end <- which(high.cum - q > 0)[1]
190 | dat.high.end <- dat.vec[high.end]
191 | }
192 |
193 | lt_predict <- data.frame( date = dat.vec,
194 | q25 = lowquant,
195 | med = medquant,
196 | q975 = highquant,
197 | m = colMeans(mod_chain_y[,1:L0]))
198 | row.names(lt_predict) <- NULL
199 |
200 | lt_summary <- list(NTC25=NTC25,
201 | NTC500=NTC500,
202 | NTC975=NTC975,
203 | high.dat.low=Dat25,
204 | high.dat.med=Dat500,
205 | high.dat.upper=Dat975,
206 | end.dat.low = dat.low.end,
207 | end.dat.med = dat.med.end,
208 | end.dat.upper = dat.high.end)
209 |
210 |
211 | muplot <- data.frame(date = dat.full, mu = mu50[1:(t+L0)])
212 | list_out <- list( df_predict = df_predict, lt_predict=lt_predict, lt_summary=lt_summary, mu_plot = muplot, flag=flag)
213 |
214 | ### saveRDS
215 | results_directory = "/run/media/marcos/OS/UFMG/Pesquisa/Covid/app_COVID19/STpredictions/"
216 | # results_directory = 'C:/Users/ricar/Dropbox/covid19/R/predict/'
217 | file_id <- ifelse(uf$state[s]=='BR', colnames(Y)[3] , paste0(uf$state[s],'_',colnames(Y)[3],'e'))
218 | saveRDS(list_out, file=paste0(results_directory,'Brazil_',file_id,'.rds'))
219 |
220 | ### report
221 | source("mcmcplot_country.R")
222 | report_directory = "/run/media/marcos/OS/UFMG/Pesquisa/Covid/app_COVID19/STpredictions/reports"
223 | # report_directory = 'C:/Users/ricar/Dropbox/covid19/R/predict/report'
224 | #mcmcplot_country(mcmcout = mod_sim, parms = c(paste0("a[",t,"]"), paste0("b[",t,"]"), paste0("c[",t,"]")),
225 | mcmcplot_country(mcmcout = mod_sim, parms = c("a", "b", "c", "f"),
226 | dir = report_directory,
227 | filename = paste0('Brazil_',file_id,'_diagnostics'),
228 | heading = paste0('Brazil_',file_id),
229 | extension = "html", greek = TRUE,
230 | country = 'Brazil',
231 | type = file_id)
232 |
233 | ### run time
234 | #run_time = round(as.numeric(Sys.time()-t0, units="mins"),2)
235 | #print(noquote(paste(run_time, "minutes to", uf$state[s])))
236 |
237 | }
238 |
239 | else print(paste0("ERROR:",uf$state[s]))
240 |
241 | }
242 |
243 |
--------------------------------------------------------------------------------
/R/JAGS/predict_BR_par.R:
--------------------------------------------------------------------------------
1 | rm(list=ls())
2 |
3 | setwd("/run/media/marcos/OS/UFMG/Pesquisa/Covid/R/JAGS")
4 |
5 | ###################################################################
6 | ### Packages
7 | ###################################################################
8 | library(dplyr)
9 | library(tidyr)
10 | library("rjags")
11 | library(matrixStats)
12 | library(mcmcplots)
13 | library(foreach)
14 | library(doMC)
15 | ###################################################################
16 | ### Data sets: https://github.com/CSSEGISandData
17 | ###################################################################
18 | baseURLbr = "https://raw.githubusercontent.com/covid19br/covid19br.github.io/master/dados"
19 |
20 | covid19uf <- read.csv(file.path(baseURLbr,"EstadosCov19.csv"), check.names=FALSE, stringsAsFactors=FALSE) %>%
21 | rename(state = estado,
22 | date = data,
23 | n = casos.acumulados,
24 | d = obitos.acumulados,
25 | n_new = novos.casos,
26 | d_new = obitos.novos) %>%
27 | mutate(date = as.Date(date)) %>%
28 | select(date, n, d, -n_new, -d_new, state) %>%
29 | arrange(state,date) %>% filter(date>='2020-01-23')
30 |
31 | covid19br <- read.csv(file.path(baseURLbr,"BrasilCov19.csv"), check.names=FALSE, stringsAsFactors=FALSE) %>%
32 | mutate(state = 'BR') %>%
33 | rename(date = data,
34 | n = casos.acumulados,
35 | d = obitos.acumulados,
36 | n_new = novos.casos,
37 | d_new = obitos.novos) %>%
38 | mutate(date = as.Date(date)) %>%
39 | select(date, n, d, -n_new, -d_new, state) %>%
40 | arrange(date) %>% filter(date>='2020-01-23')
41 |
42 | covid19 <- bind_rows(covid19uf,covid19br)
43 | uf <- distinct(covid19,state)
44 |
45 | br_pop <- read.csv("../pop/pop_BR.csv")
46 |
47 | ###########################################################################
48 | ###### JAGS
49 | ###########################################################################
50 | #register cores
51 | registerDoMC(cores = detectCores()-1) # Alternativa Linux
52 |
53 | #for ( s in 1:dim(uf)[1] ) {
54 | obj <- foreach( s = 1:dim(uf)[1] ) %dopar% {
55 | #obj <- foreach( s = 1:3 ) %dopar% {
56 | source("jags_poisson.R")
57 |
58 | i = 4 # (2: confirmed, 3: deaths, 4: new confirmed, 5: new deaths)
59 | L = 300
60 | #t0 = Sys.time()
61 | estado = uf$state[s]
62 |
63 | Y <- covid19 %>% filter(state==estado) %>%
64 | mutate(n_new = n - lag(n, default=0),
65 | d_new = d - lag(d, default=0)) %>%
66 | select(date, n, d, n_new, d_new, state) %>%
67 | arrange(date) %>% filter(date>='2020-01-23')
68 | #Y = covid19 %>% filter(state==uf$state[s])
69 |
70 | while(any(Y$n_new <0)){
71 | pos <- which(Y$n_new <0)
72 | for(j in pos){
73 | Y$n_new[j-1] = Y$n_new[j] + Y$n_new[j-1]
74 | Y$n_new[j] = 0
75 | Y$n[j-1] = Y$n[j]
76 | }
77 | }
78 |
79 | pop <- br_pop$pop[which(br_pop$uf == uf$state[s])]
80 |
81 | t = dim(Y)[1]
82 |
83 | #use static to provide initial values
84 | params = c("a","b","c","f","yfut","mu")
85 | nc = 1 # 3
86 | nb = 90e3 # 5e4
87 | thin = 10
88 | ni = 10e3 # 5e4
89 | data_jags = list(y=Y[[i]], t=t, L=L)
90 | mod = try(jags.model(textConnection(mod_string_new), data=data_jags, n.chains=nc, n.adapt=nb, quiet=TRUE))
91 | try(update(mod, n.iter=ni, progress.bar="none"))
92 | mod_sim = try(coda.samples(model=mod, variable.names=params, n.iter=ni, thin=thin,progress.bar="none"))
93 |
94 | if(class(mod_sim) != "try-error" && class(mod) != "try-error"){
95 |
96 | mod_chain = as.data.frame(do.call(rbind, mod_sim))
97 |
98 | a_pos = "a"
99 | b_pos = "b"
100 | c_pos = "c"
101 | f_pos = "f"
102 | mu_pos = paste0("mu[",1:(t+L),"]")
103 | yfut_pos = paste0("yfut[",1:L,"]")
104 | L0 = 14
105 |
106 | mod_chain_y = as.matrix(mod_chain[yfut_pos])
107 | mod_chain_cumy = rowCumsums(mod_chain_y) + Y[[2]][t]
108 |
109 |
110 | ### list output
111 | df_predict <- data.frame( date = as.Date((max(Y$date)+1):(max(Y$date)+L0), origin="1970-01-01"),
112 | q25 = colQuantiles(mod_chain_cumy[,1:L0], prob=.025),
113 | med = colQuantiles(mod_chain_cumy[,1:L0], prob=.5),
114 | q975 = colQuantiles(mod_chain_cumy[,1:L0], prob=.975),
115 | m = colMeans(mod_chain_cumy[,1:L0]))
116 | row.names(df_predict) <- NULL
117 |
118 | lt_predict <- lt_summary <- NULL
119 |
120 | #longterm
121 | L0 = 200
122 |
123 | #find the quantile curves
124 | lowquant <- colQuantiles(mod_chain_y[,1:L0], prob=.025)
125 | medquant <- colQuantiles(mod_chain_y[,1:L0], prob=.5)
126 | highquant <- colQuantiles(mod_chain_y[,1:L0], prob=.975)
127 |
128 | NTC25 =sum(lowquant)+Y[[2]][t]
129 | NTC500=sum(medquant)+Y[[2]][t]
130 | NTC975=sum(highquant)+Y[[2]][t]
131 |
132 |
133 | ##flag
134 | cm <- pop * 0.025
135 | ch <- pop * 0.03
136 | flag <- 0 #all good
137 | {if(NTC500 > cm) flag <- 2 #do not plot
138 | else{if(NTC975 > ch){flag <- 1; NTC25 <- NTC975 <- NULL}}} #plot only the median
139 |
140 | #vector of future dates; get the position of the maximum of the lower (q25) quantile curve.
141 | dat.vec <- as.Date((max(Y$date)+1):(max(Y$date)+L0), origin="1970-01-01")
142 | dat.full <- c(Y[[1]],dat.vec)
143 |
144 |
145 | Dat25 <- Dat500 <- Dat975 <- NULL
146 | dat.low.end <- dat.med.end <- dat.high.end <- NULL
147 |
148 | mod_chain_mu = as.matrix(mod_chain[mu_pos])
149 | mu50 <- apply(mod_chain_mu,2,quantile, probs=0.5)
150 | Dat500 <- dat.full[which.max(mu50[1:(t+L0)])]
151 |
152 | q <- .99
153 | med.cum <- c(medquant[1]+Y[[2]][t],medquant[2:length(medquant)])
154 | med.cum <- colCumsums(as.matrix(med.cum))
155 | med.cum <- med.cum/med.cum[length(med.cum)]
156 | med.end <- which(med.cum - q > 0)[1]
157 | dat.med.end <- dat.vec[med.end]
158 |
159 | if(flag == 0){
160 | #definition of the peak using the mean (mu) curves
161 | mu25 <- apply(mod_chain_mu,2,quantile, probs=0.025)
162 | mu975 <- apply(mod_chain_mu,2,quantile, probs=.975)
163 |
164 | posMax.q25 <- which.max(mu25[1:(t+L0)])
165 | aux <- mu975 - mu25[posMax.q25]
166 | aux2 <- aux[posMax.q25:(t+L0)]
167 | val <- min(aux2[aux2>0])
168 | dat.max <- which(aux == val)
169 |
170 | aux <- mu975 - mu25[posMax.q25]
171 | aux2 <- aux[1:posMax.q25]
172 | val <- min(aux2[aux2>0])
173 | dat.min <- which(aux == val)
174 |
175 | Dat25 <- dat.full[dat.min]
176 | Dat975 <- dat.full[dat.max]
177 |
178 | #compute the end of the pandemic
179 | low.cum <- c(lowquant[1]+Y[[2]][t],lowquant[2:length(lowquant)])
180 | low.cum <- colCumsums(as.matrix(low.cum))
181 | low.cum <- low.cum/low.cum[length(low.cum)]
182 | low.end <- which(low.cum - q > 0)[1]
183 | dat.low.end <- dat.vec[low.end]
184 |
185 | high.cum <- c(highquant[1]+Y[[2]][t],highquant[2:length(highquant)])
186 | high.cum <- colCumsums(as.matrix(high.cum))
187 | high.cum <- high.cum/high.cum[length(high.cum)]
188 | high.end <- which(high.cum - q > 0)[1]
189 | dat.high.end <- dat.vec[high.end]
190 | }
191 |
192 | lt_predict <- data.frame( date = dat.vec,
193 | q25 = lowquant,
194 | med = medquant,
195 | q975 = highquant,
196 | m = colMeans(mod_chain_y[,1:L0]))
197 | row.names(lt_predict) <- NULL
198 |
199 | lt_summary <- list(NTC25=NTC25,
200 | NTC500=NTC500,
201 | NTC975=NTC975,
202 | high.dat.low=Dat25,
203 | high.dat.med=Dat500,
204 | high.dat.upper=Dat975,
205 | end.dat.low = dat.low.end,
206 | end.dat.med = dat.med.end,
207 | end.dat.upper = dat.high.end)
208 |
209 | muplot <- data.frame(date = dat.full, mu = mu50[1:(t+L0)])
210 | list_out <- list( df_predict = df_predict, lt_predict=lt_predict, lt_summary=lt_summary, mu_plot = muplot, flag=flag)
211 |
212 | ### saveRDS
213 | results_directory = "/run/media/marcos/OS/UFMG/Pesquisa/Covid/app_COVID19/STpredictions/"
214 | # results_directory = 'C:/Users/ricar/Dropbox/covid19/R/predict/'
215 | file_id <- ifelse(uf$state[s]=='BR', colnames(Y)[2] , paste0(uf$state[s],'_',colnames(Y)[2],'e'))
216 | saveRDS(list_out, file=paste0(results_directory,'Brazil_',file_id,'.rds'))
217 |
218 | ### report
219 | source("mcmcplot_country.R")
220 | report_directory = "/run/media/marcos/OS/UFMG/Pesquisa/Covid/app_COVID19/STpredictions/reports"
221 | # report_directory = 'C:/Users/ricar/Dropbox/covid19/R/predict/report'
222 | #mcmcplot_country(mcmcout = mod_sim, parms = c(paste0("a[",t,"]"), paste0("b[",t,"]"), paste0("c[",t,"]")),
223 | mcmcplot_country(mcmcout = mod_sim, parms = c("a", "b", "c", "f"),
224 | dir = report_directory,
225 | filename = paste0('Brazil_',file_id,'_diagnostics'),
226 | heading = paste0('Brazil_',file_id),
227 | extension = "html", greek = TRUE,
228 | country = 'Brazil',
229 | type = file_id)
230 |
231 | ### run time
232 | #run_time = round(as.numeric(Sys.time()-t0, units="mins"),2)
233 | #print(noquote(paste(run_time, "minutes to", uf$state[s])))
234 |
235 | }
236 |
237 | else print(paste0("ERROR:",uf$state[s]))
238 |
239 | }
240 |
241 |
--------------------------------------------------------------------------------
/R/Video/Video_BR.R:
--------------------------------------------------------------------------------
1 | ############################################
2 | ###### DATA GENERATION FOR THE VIDEO #######
3 | ############################################
4 |
5 | ############################################
6 | ###### Packages ######
7 | ############################################
8 | library(dplyr)
9 | library(tidyr)
10 | library(matrixStats)
11 | library(mcmcplots)
12 | library(MCMCvis)
13 | library(foreach)
14 | library(doMC)
15 | library(rstan)
16 | library(lubridate)
17 |
18 | #setwd("~/Desktop/app_COVID19/R/STAN")
19 | setwd("/run/media/marcos/OS/UFMG/Pesquisa/Covid/R/STAN")
20 |
21 | ###################################################################
22 | ### Data sets: https://github.com/CSSEGISandData
23 | ###################################################################
24 | baseURLbr = "https://raw.githubusercontent.com/covid19br/covid19br.github.io/master/dados"
25 |
26 | covid19br <- read.csv(file.path(baseURLbr,"BrasilCov19.csv"), check.names=FALSE, stringsAsFactors=FALSE) %>%
27 | mutate(state = 'BR') %>%
28 | rename(date = data,
29 | n = casos.acumulados,
30 | d = obitos.acumulados,
31 | n_new = novos.casos,
32 | d_new = obitos.novos) %>%
33 | mutate(date = as.Date(date)) %>%
34 | select(date, n, d, -n_new, -d_new, state) %>%
35 | arrange(date) %>% filter(date>='2020-01-23')
36 |
37 | br_pop <- read.csv("../pop/pop_BR.csv")
38 |
39 | ###########################################################################
40 | ###### JAGS/STAN
41 | ###########################################################################
42 |
43 | results.video <- list()
44 |
45 | data_final_prev <- ymd('2021-07-01')
46 | data_max <- max(covid19br$date)
47 |
48 | ## Check whether the RDS file already exists - if it does, just update it; if not, run everything from scratch
49 | setwd("/run/media/marcos/OS/UFMG/Pesquisa/Covid/app_COVID19/STvideos")
50 | arquivos <- list.files(pattern = '*.rds', full.names = F)
51 | arquivo <- "Brazil_n.rds" %in% arquivos
52 |
53 | {if(arquivo == TRUE){
54 | arquivo_rds <- readRDS("Brazil_n.rds")
55 | data_final <- arquivo_rds[[length(arquivo_rds)]]$last_date
56 | data_usada <- data_final + ddays(1)
57 | a <- length(arquivo_rds) + 1
58 | results.video <- arquivo_rds
59 | }
60 | else{
61 | data_usada <- min(covid19br$date) + ddays(30)
62 | a <- 1
63 | }}
64 |
65 |
66 | setwd("/run/media/marcos/OS/UFMG/Pesquisa/Covid/R/STAN")
67 | #compile the Stan model
68 | model="stan_model_poisson_gen.stan" #Stan model file
69 | mod<- try(stan_model(file = model,verbose=FALSE))
70 |
71 | if(class(mod) == "try-error") stop("STAN DID NOT COMPILE")
72 |
73 | pop <- br_pop$pop[which(br_pop$uf == 'BR')]
74 | key <- FALSE
75 | while(data_usada <= data_max) {
76 |
77 | ## SELECT THE DATA TO BE USED -> ONE DAY IS ADDED TO THE DATE AT EACH ITERATION UNTIL THE END
78 | dados <- subset(covid19br,covid19br$date <= data_usada)
79 |
80 | i = 4 # (2: confirmed, 3: deaths, 4: new confirmed, 5: new deaths)
81 | L = as.numeric(difftime(data_final_prev, data_usada, units = "days"))
82 | ## All forecasts must run up to the same day -> final day set by the data_final_prev parameter
83 |
84 | # Prepare the data to fit the model in Stan
85 | Y <- dados %>%
86 | mutate(n_new = n - lag(n, default=0),
87 | d_new = d - lag(d, default=0)) %>%
88 | select(date, n, d, n_new, d_new, state) %>%
89 | arrange(date) %>% filter(date>='2020-01-23')
90 |
91 | while(any(Y$n_new <0)){
92 | pos <- which(Y$n_new <0)
93 | for(j in pos){
94 | Y$n_new[j-1] = Y$n_new[j] + Y$n_new[j-1]
95 | Y$n_new[j] = 0
96 | Y$n[j-1] = Y$n[j]
97 | }
98 | }
99 |
100 | t = dim(Y)[1]
101 |
102 | params = c("a","b","c","f","mu")
103 |
104 | burn_in= 2e3
105 | lag= 3
106 | sample_size= 1e3
107 | number_iterations= burn_in + lag*sample_size
108 | number_chains= 1
109 |
110 | data_stan = list(y=Y[[i]], n=t, L=L, pop=pop, perPop=0.1)
111 |
112 | init <- list(
113 | list(a = 1, b1 = log(1), c = .5, f = 1)
114 | )
115 |
116 | mod_sim<- try(sampling(object = mod, data = data_stan,
117 | pars = params,
118 | chains = number_chains,
119 | init = init,
120 | iter = number_iterations, warmup = burn_in, thin = lag,
121 | control = list(max_treedepth = 50, adapt_delta=0.999),
122 | verbose = FALSE, open_progress=FALSE, show_messages=FALSE))
123 |
124 | mod_chain = as.data.frame(mod_sim)
125 |
126 | a_pos = "a"
127 | b_pos = "b"
128 | c_pos = "c"
129 | f_pos = "f"
130 | #mu_pos = c(paste0("mu[",1:t,"]"),paste0("mufut[",1:L,"]"))
131 | mu_pos = paste0("mu[",1:t,"]")
132 | yfut_pos = paste0("yfut[",1:L,"]")
133 |
134 | source("posterior_sample.R")
135 | fut <- predL(L=L,t,pop,Y[[2]][t],c(mod_chain[[a_pos]]),c(mod_chain[[b_pos]]),c(mod_chain[[c_pos]]),c(mod_chain[[f_pos]]))
136 |
137 |
138 | mod_chain_y = fut$y.fut
139 | #mod_chain_y = as.matrix(mod_chain[yfut_pos])
140 | mod_chain_cumy = rowCumsums(mod_chain_y) + Y[[2]][t]
141 |
142 | lt_predict <- lt_summary <- NULL
143 | L0 = as.numeric(difftime(data_final_prev, data_usada, units = "days"))
144 | ## All forecasts must run up to the same day -> final day set by the data_final_prev parameter
145 |
146 | #find the quantile curves
147 | if(Y[[2]][t] > 1000){
148 | #find the quantile curves
149 | lowquant <- colQuantiles(mod_chain_y[,1:L0], prob=.025)
150 | medquant <- colQuantiles(mod_chain_y[,1:L0], prob=.5)
151 | highquant <- colQuantiles(mod_chain_y[,1:L0], prob=.975)
152 | }
153 | else{
154 | lowquant <- c(Y[[2]][t],colQuantiles(mod_chain_cumy[,1:L0], prob=.025))
155 | lowquant <- (lowquant-lag(lowquant,default=0))[-1]
156 | medquant <- c(Y[[2]][t],colQuantiles(mod_chain_cumy[,1:L0], prob=.5))
157 | medquant <- (medquant-lag(medquant,default=0))[-1]
158 | highquant <- c(Y[[2]][t],colQuantiles(mod_chain_cumy[,1:L0], prob=.975))
159 | highquant <- (highquant-lag(highquant,default=0))[-1]
160 | }
161 |
162 | NTC25 =sum(lowquant)+Y[[2]][t]
163 | NTC500=sum(medquant)+Y[[2]][t]
164 | NTC975=sum(highquant)+Y[[2]][t]
165 |
166 | ##flag
167 | cm <- pop * 0.2
168 | ch <- pop * 0.2
169 | flag <- 0 #all good
170 | {if(NTC500 > cm) flag <- 2 #do not plot
171 | else{if(NTC975 > ch){flag <- 1; NTC25 <- NTC975 <- NULL}}} #plot only the median
172 |
173 | #vector of future dates; get the position of the maximum of the lower (q25) quantile curve.
174 | dat.vec <- as.Date((max(Y$date)+1):(max(Y$date)+L0), origin="1970-01-01")
175 | dat.full <- c(Y[[1]],dat.vec)
176 |
177 |
178 | Dat25 <- Dat500 <- Dat975 <- NULL
179 | dat.low.end <- dat.med.end <- dat.high.end <- NULL
180 |
181 | #mod_chain_mu = as.matrix(mod_chain[mu_pos])
182 | ifelse(length(fut$pos) > 0,
183 | mod_chain_mu <- cbind(as.matrix(mod_chain[mu_pos])[-fut$pos,],fut$mu.fut),
184 | mod_chain_mu <- cbind(as.matrix(mod_chain[mu_pos]),fut$mu.fut))
185 | mu50 <- apply(mod_chain_mu,2,quantile, probs=0.5)
186 | Dat500 <- dat.full[which.max(mu50[1:(t+L0)])]
187 |
188 | q <- .99
189 | med.cum <- c(medquant[1]+Y[[2]][t],medquant[2:length(medquant)])
190 | med.cum <- colCumsums(as.matrix(med.cum))
191 | med.cum <- med.cum/med.cum[length(med.cum)]
192 | med.end <- which(med.cum - q > 0)[1]
193 | dat.med.end <- dat.vec[med.end]
194 |
195 | if(flag == 0){
196 | #definition of the peak using the mean (mu) curves
197 | mu25 <- apply(mod_chain_mu,2,quantile, probs=0.025)
198 | mu975 <- apply(mod_chain_mu,2,quantile, probs=.975)
199 |
200 | posMax.q25 <- which.max(mu25[1:(t+L0)])
201 | aux <- mu975 - mu25[posMax.q25]
202 | aux2 <- aux[posMax.q25:(t+L0)]
203 | val <- min(aux2[aux2>0])
204 | dat.max <- which(aux == val)
205 |
206 | aux <- mu975 - mu25[posMax.q25]
207 | aux2 <- aux[1:posMax.q25]
208 | val <- min(aux2[aux2>0])
209 | dat.min <- which(aux == val)
210 |
211 | Dat25 <- dat.full[dat.min]
212 | Dat975 <- dat.full[dat.max]
213 |
214 | #compute the end of the pandemic
215 | low.cum <- c(lowquant[1]+Y[[2]][t],lowquant[2:length(lowquant)])
216 | low.cum <- colCumsums(as.matrix(low.cum))
217 | low.cum <- low.cum/low.cum[length(low.cum)]
218 | low.end <- which(low.cum - q > 0)[1]
219 | dat.low.end <- dat.vec[low.end]
220 |
221 | high.cum <- c(highquant[1]+Y[[2]][t],highquant[2:length(highquant)])
222 | high.cum <- colCumsums(as.matrix(high.cum))
223 | high.cum <- high.cum/high.cum[length(high.cum)]
224 | high.end <- which(high.cum - q > 0)[1]
225 | dat.high.end <- dat.vec[high.end]
226 | }
227 |
228 | lt_predict <- data.frame( date = dat.vec,
229 | q25 = lowquant,
230 | med = medquant,
231 | q975 = highquant,
232 | m = colMeans(mod_chain_y[,1:L0]))
233 | row.names(lt_predict) <- NULL
234 |
235 | lt_summary <- list(NTC25=NTC25,
236 | NTC500=NTC500,
237 | NTC975=NTC975,
238 | high.dat.low=Dat25,
239 | high.dat.med=Dat500,
240 | high.dat.upper=Dat975,
241 | end.dat.low = dat.low.end,
242 | end.dat.med = dat.med.end,
243 | end.dat.upper = dat.high.end)
244 |
245 | muplot <- data.frame(date = dat.full, mu = mu50[1:(t+L0)])
246 | list_out <- list(lt_predict=lt_predict, lt_summary=lt_summary, mu_plot = muplot, flag=flag, last_date = data_usada)
247 |
248 | results.video[[a]] <- list_out
249 | a = a + 1
250 | data_usada <- data_usada + ddays(1)
251 | key <- TRUE
252 | }
253 |
254 | {if(key){
255 | ### saveRDS
256 | results_directory = "/run/media/marcos/OS/UFMG/Pesquisa/Covid/app_COVID19/STvideos/"
257 | # results_directory = 'C:/Users/ricar/Dropbox/covid19/R/predict/'
258 | file_id <- colnames(Y)[2]
259 | saveRDS(results.video, file=paste0(results_directory,'Brazil_',file_id,'.rds'))
260 | }
261 | else print("No update necessary")}
262 |
--------------------------------------------------------------------------------
/app_v1/www/markdown/CoronaUFMG_MD.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Modelando a pandemia de Covid19"
3 | author: "Dani Gamerman"
4 | # output: slidy_presentation
5 | output:
6 | html_document:
7 | toc: true
8 | # toc_depth: 5
9 | # toc_float: true
10 | ---
11 |
12 |
13 | ### Modelando a pandemia de Covid19
14 |
15 | Dani Gamerman - Programa de PG em Estatística - UFMG
16 |
17 | *1º semestre de 2020*
18 |
19 | (motivado a partir de notas de José Marcos Andrade Figueiredo - UFMG)
20 |
21 | #### Modelo de crescimento logístico básico
22 |
23 | $$
24 | Y(t) \sim N ( \mu (t) , \sigma^2 ), \qquad t = 1, 2, ...
25 | $$
26 |
27 | onde $Y(t)$ é o número de casos acumulados até o dia $t$ em uma dada região, com
28 |
29 | $$
30 | \mu ( t ) = \frac{ a \exp{ \{ c t \} } } {1 + b \exp { \{ c t \} }}.
31 | $$
32 | **Caso especial:** $b = 0$ (crescimento exponencial) $\rightarrow \mu(t) = a \exp \{ ct \}$.
33 |
34 | - Adequado para os estágios iniciais da pandemia
35 |
36 | ##### Problemas do modelo básico:
37 |
38 | a) os dados são contagens e a distribuição normal pressupõe dados contínuos;
39 |
40 | b) a variância deveria aumentar com a magnitude dos dados.
41 |
42 |
43 | ##### Características de interesse:
44 |
45 | As características mais importantes são:
46 |
47 |
48 | **1) Taxa de infecção**
49 |
50 | - $c$ mede a velocidade da aceleração e reflete a taxa de infecção da doença.
51 |
52 |
53 | **2) Assíntota**
54 | $$\lim_{t \to \infty} \mu( t) = \lim_{t \to \infty} \frac{ a \exp{ \{ c t \} } } {1 + b \exp { \{ ct \} }} = \frac ab$$
55 | - Reflete o total de casos acumulados ao longo de toda a pandemia.
56 |
57 | - Crescimento exponencial ($b=0$): assíntota $= \infty$ !
58 |
59 |
60 | **3) Ponto de inflexão**
61 |
62 | - É o tempo $t*$ onde o número de novos casos para de crescer e começa a diminuir.
63 |
64 | - Crescimento exponencial ($b=0$): número de novos casos nunca para de crescer!
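
The asymptote and the inflection point can be checked numerically. A minimal sketch in R, using purely illustrative values for $(a, b, c)$ (they are not estimates from the app):

```r
# logistic mean curve from the model above
mu_logistic <- function(t, a, b, c) a * exp(c * t) / (1 + b * exp(c * t))

a <- 5; b <- 0.01; c <- 0.08      # hypothetical parameter values
t <- 1:300
mu <- mu_logistic(t, a, b, c)

a / b                              # theoretical asymptote
tail(mu, 1)                        # mu(300), already close to a/b
log(1 / b) / c                     # analytic inflection point t* of this logistic
t[which.max(diff(mu))]             # day with the largest increase of mu (~ t*)
```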
65 |
66 |
67 | **4) Previsão:**
68 |
69 | - O que podemos dizer sobre $Y (t+k ), \forall k$, para $t$ fixo (hoje)?
70 | Depende da distribuição de $Y( t)$ mas será sempre dada pela distribuição preditiva de $Y(t+k)$ dado $Y(1:t) = \{ Y( 1) , ... , Y( t ) \}$, o que foi observado.
71 |
72 | Funciona como se fosse a distribuição a posteriori de $Y(t + k )$.
73 |
74 | **Resultado útil:** Se $Z$ e $W$ são 2 v. a.'s quaisquer então:
75 |
76 | $E[Z] = E[ E(Z \mid W ) ]$
77 |
78 | $Var[Z] = Var[ E(Z \mid W ) ] + E[ Var( Z \mid W ) ]$
79 |
80 | Em particular,
81 | $E[Y ( t + k ) \mid Y( 1:t)] = E\{ E[ Y ( t + k ) \mid \mu( 1:t)] \mid Y( 1:t) \}$
82 | $= E[ \mu( t+k ) \mid Y( 1:t) ]$, a média a posteriori de $\mu ( t + k )$.
83 |
84 | A inferência para tudo que foi dito acima deve ser reportada através de estimadores pontuais (ex: médias a posteriori), junto com os respectivos intervalos de credibilidade.
85 |
86 |
87 | **5) Número de reprodutibilidade $R_0$**
88 |
89 | $R_0$ é o número esperado de casos secundários de uma doença produzidos por um indivíduo infectado.
90 |
91 | No tempo $t$, ele é definido como
92 | $$R_0 = \frac {\mu ( t ) - \mu ( t-1)}{\mu ( t-1)} = \frac {\mu ( t )}{\mu ( t-1)} - 1$$
93 |
94 | **Início da epidemia:**
95 | $1 \gg b \exp { \{ ct \} } \to \mu (t) \approx a \exp { \{ ct \} } \to R_0 \approx e^c - 1$
96 |
97 | **Final da epidemia:**
98 | $1 \ll b \exp { \{ ct \} } \to \mu (t) \approx a / b \to R_0 \approx 0$
99 |
100 | **Meio da epidemia:**
101 | $R_0$ é função dos parâmetros $(a,b,c)$ e do tempo $t$, e é dado por
102 | $$R_0 (t) = e^c \ \frac{1 + b e^{-c} e^{ct} } {1 + b e^{ct} } \ - \ 1$$
103 |
104 | Para cada $t$ fixado, podemos obter sua distribuição a posteriori (via amostra MCMC) e calcular média, quantis e intervalos de credibilidade.
105 |
106 |
107 | **6) Número médio de novos casos (NMNC)**
108 |
109 | NMNC no tempo $t+k$:
110 | $n_t ( k ) = E [Y ( t + k ) - Y ( t + k -1 ) ] = \mu ( t + k ) - \mu ( t + k -1 )$
111 |
112 | Logo, NMNC também é função dos parâmetros $(a,b,c)$ e pode ser facilmente calculado.
113 |
114 | Para cada $t$ e $k$ fixados, podemos obter sua distribuição a posteriori (via amostra MCMC) e calcular média, quantis e intervalos de credibilidade.
115 |
116 | 
117 |
118 |
119 | #### Alternativas
120 |
121 | 1.1) $Y( t ) \sim Poisson ( \mu ( t ) )$ com $E[ Y(t)] = \mu (t )$ e $Var(Y(t)) = \mu ( t )$
122 |
123 | 1.2) $Y ( t ) \sim N ( \mu ( t ) , \sigma^2 \ \mu ( t ) )$ com $E[ Y(t)] = \mu (t )$ e $Var(Y(t)) = \sigma^2 \ \mu ( t )$
124 |
125 | Obs:
126 |
127 | - Modelo (1.2) admite sobredispersão se $\sigma^2 > 1$
128 |
129 | - Alternativa (1.2) só cuida do comentário (b)
130 |
131 | - Alternativa (1.1) cuida dos 2 comentários mas não permite sobredispersão
132 |
133 | ##### Poisson com sobredispersão
134 |
135 | 1.3) $Y( t ) \mid \epsilon ( t ) \sim Poisson ( \mu ( t ) + \epsilon ( t ) )$ com $E[ \epsilon (t)] = 0$ e $Var( \epsilon (t)) = \sigma^2$
136 |
137 | 1.4) $Y( t ) \mid \epsilon ( t ) \sim Poisson ( \mu ( t ) \times \epsilon ( t ) )$ com $E[ \epsilon (t)] = 1$ e $Var( \epsilon (t)) = \sigma^2$
138 |
139 |
140 | Usando os resultados úteis anteriores:
141 |
142 | **Mod(1.3):**
143 | $E [ Y ( t ) ] = E[ E( Y(t) \mid \epsilon (t ) ) ] = E[ \mu ( t ) + \epsilon ( t ) ] = \mu ( t ) + E[ \epsilon ( t ) ] = \mu ( t )$
144 |
145 | $Var[ Y ( t ) ] = Var[ E( Y(t) \mid \epsilon (t) ) ] + E[ Var ( Y(t) \mid \epsilon (t) ) ] = Var[ \mu ( t ) + \epsilon ( t ) ] + E [ \mu ( t ) + \epsilon ( t ) ] = \sigma^2 + \mu_t > \mu ( t )$
146 |
147 | **Mod(1.4):**
148 | $E [ Y ( t ) ] = E[ E( Y(t) \mid \epsilon (t) ) ] = E[ \mu ( t ) \times \epsilon ( t ) ] = \mu ( t ) \times E[ \epsilon ( t ) ] = \mu ( t )$
149 |
150 | $Var[ Y ( t ) ] = Var[ E( Y(t) \mid \epsilon (t) ) ] + E[ Var ( Y(t) \mid \epsilon (t) ) ] = Var[ \mu ( t ) \times \epsilon ( t ) ] + E [ \mu ( t ) \times \epsilon ( t ) ] =
151 | \mu_t ^2 \sigma^2 + \mu_t > \mu ( t )$
152 |
153 | Ambos preservam média da Poisson e aumentam dispersão da Poisson.
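
A quick Monte Carlo check of the moments of model (1.4). The notes only fix the mean and variance of $\epsilon(t)$, so the Gamma distribution used below is an assumption made purely to have something to simulate from:

```r
set.seed(1)
mu <- 50; sigma2 <- 0.3; M <- 1e6

eps <- rgamma(M, shape = 1 / sigma2, rate = 1 / sigma2)  # E[eps] = 1, Var(eps) = sigma2 (assumed Gamma)
y   <- rpois(M, lambda = mu * eps)                       # Y | eps ~ Poisson(mu * eps)

mean(y)             # ~ mu: the Poisson mean is preserved
var(y)              # ~ mu^2 * sigma2 + mu: overdispersed
mu^2 * sigma2 + mu  # theoretical variance derived above
```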
154 |
155 | ##### Extensões dinâmicas
156 |
157 | Modelos anteriores assumem comportamento estático $\Rightarrow$ a forma
158 | da doença não se modifica ao longo do tempo $\Rightarrow$ taxa de
159 | infecção será sempre a mesma, assíntota será sempre a mesma, ...
160 |
161 | **Modelos dinâmicos** flexibilizam isso.
162 |
163 | $$\mu ( t ) = \frac{ a( {\color{red} t )} \ \exp{ \{ c( {\color{red}t) } \ t \} } } {1 + b( {\color{red}t) } \ \exp { \{ c({\color{red}t) } \ t \} }}$$
164 |
165 | com:
166 | $a ( t ) = a ( t-1) + w_a ( t )$, onde $w_a ( t ) \sim N ( 0 , W_a ), \forall t $.
167 |
168 | $b ( t ) = b ( t-1) + w_b ( t )$, onde $w_b ( t ) \sim N ( 0 , W_b ), \forall t $.
169 |
170 | $c ( t ) = c ( t-1) + w_c ( t )$, onde $w_c ( t ) \sim N ( 0 , W_c ), \forall t $.
171 |
172 |
173 | **Vantagens:**
174 |
175 | a) $E[ a(t) \mid a(t-1 )]= a (t-1)$, e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ constância local.
176 |
177 | b) $Var[ a(t) \mid a(t-1 )]= W_a$, e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ aumento da incerteza.
178 |
179 | **Problemas:**
180 |
181 | a) variâncias $W_a, W_b, W_c$ conhecidas $\Rightarrow$ difíceis de especificar.
182 |
183 | b) variâncias $W_a, W_b, W_c$ desconhecidas $\Rightarrow$ difíceis de estimar.
184 |
185 | c) não dá para simplificar $W_a = W_b = W_c = W$ (magnitudes diferentes de $(a,b,c)$).
186 |
187 | - Outra forma de introduzir dinamismo, agora multiplicativo:
188 |
189 | $a ( t ) = a ( t-1) \times w_a ( t )$, onde $w_a ( t ) \sim Gamma ( d_a ,d_a ), \forall t $.
190 |
191 | $b ( t ) = b ( t-1) \times w_b ( t )$, onde $w_b ( t ) \sim Gamma ( d_b ,d_b ), \forall t $.
192 |
193 | $c ( t ) = c ( t-1) \times w_c ( t )$, onde $w_c ( t ) \sim Gamma ( d_c ,d_c ), \forall t $.
194 |
195 |
196 | **Vantagens:**
197 |
198 | a) $E[ a(t) \mid a(t-1 )]= a (t-1)$ e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ constância local.
199 |
200 | b) $Var[ a(t) \mid a(t-1 )]= d_c^{-1}$ e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ aumento da incerteza.
201 |
202 | c) Hiperparâmetros $d_a, d_b, d_c$ fáceis de especificar.
203 |
204 | Exemplos:
205 | $$d=1000 \ \to \ 0,90= P ( 0,95 < w(t) < 1,05 ) = P \left( 0,95 < \frac {a(t)}{a(t-1) } < 1,05 \right)$$
206 | $$d=1500 \ \to \ 0,95= P ( 0,95 < w(t) < 1,05 ) = P \left( 0,95 < \frac {a(t)}{a(t-1) } < 1,05 \right)$$
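
These probabilities can be verified directly from the Gamma$(d, d)$ distribution; a small check in R:

```r
prob_within_5pct <- function(d) pgamma(1.05, shape = d, rate = d) - pgamma(0.95, shape = d, rate = d)

prob_within_5pct(1000)  # ~ 0.89 (rounded to 0,90 above)
prob_within_5pct(1500)  # ~ 0.95
```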
207 |
208 |
209 | **Problemas:**
210 |
211 | a) As magnitudes de $a, b, c$ ainda interferem no aumento da incerteza.
212 | 
213 | b) não sei se o software lida bem com Gammas tendo parâmetros tão altos.
214 |
215 |
216 | **Evolução multiplicativa com erros normais**
217 |
218 | Considere a evolução multiplicativa abaixo para o parâmetro $a$:
219 |
220 | $$a ( t ) = a ( t-1) \times \exp \{ w_a ( t ) \}, \mbox{ onde } w_a ( t ) \sim N( 0 , W_a )$$
221 | Tomando o logaritmo nos dois lados, obtém-se:
222 | $$\log \ a ( t ) = \log \ a ( t-1) + w_a ( t ), \mbox{ onde } w_a ( t ) \sim N( 0 , W_a )$$
223 | Passando $\log \ a ( t-1)$ para a esquerda, vem que:
224 | $$\log \ a ( t ) - \log \ a ( t-1) = \log \left[ \frac{ a ( t )}{ a ( t-1)} \right] = w_a ( t ), \mbox{ onde } w_a ( t ) \sim N( 0 , W_a )$$
225 |
226 | Especificação de $W_a$: pode-se pensar em percentual de incremento, como antes.
227 |
228 | $$0,95 = P \left( 0,95 < \frac {a(t)}{a(t-1) } < 1,05 \right) = P ( - 0,05 < w_a(t) < 0,05 )$$
229 | Isso implica $2 \sqrt{W_a} = 0,05$, que implica $\sqrt{W_a} = 0,025 \ \Rightarrow W_a = (0,025)^2$.
230 |
231 | Mesma especificação vale para $W_b$ e $W_c$, pois dimensões de $b$ e $c$ não importam.
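
The step $2\sqrt{W_a} = 0{,}05$ uses the usual "about 95% of the mass within two standard deviations" rule; a quick check in R of the implied probabilities:

```r
W_a <- 0.025^2
# P(-0.05 < w_a(t) < 0.05) under N(0, W_a)
pnorm(0.05, mean = 0, sd = sqrt(W_a)) - pnorm(-0.05, mean = 0, sd = sqrt(W_a))  # ~ 0.954

# on the multiplicative scale a(t)/a(t-1) = exp(w_a(t)), and exp(+-0.05) ~ 0.951 and 1.051,
# so this is essentially the interval (0,95; 1,05) used above
exp(c(-0.05, 0.05))
```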
232 |
233 |
234 | **Caso particular**
235 |
236 | Baseado em Gamerman, Santos e Franco (J. Time Series Analysis, 2013):
237 |
238 | $$\mu ( t ) = \frac{ a( {\color{red}t)} \ \exp{ \{ c \ t \} } } {1 + b \ \exp { \{ c \ t \} }}$$
239 | $a ( t ) = a ( t-1) \times w_a ( t )$, onde $ w_a ( t ) \sim Beta, \forall t$.
240 |
241 | Pode ser usado também para crescimento exponencial ($b=0$).
242 |
243 |
244 | **Vantagem:**
245 |
246 | a) Permite contas exatas, dispensando aproximações MCMC.
247 |
248 |
249 | **Desvantagem:**
250 |
251 | a) Não permite $b$ e $c$ dinâmicos.
252 |
253 | #### Generalizações da logística
254 |
255 | Até agora, usamos a logística para especificar a média $\mu ( t )$ como
256 | $$\mu ( t ) = \frac{ a \exp{ \{ c t \} } } {1 + b \exp { \{ ct \} }} = \frac{ a} { b + \exp { \{ - ct \} }}.$$
257 | Essa expressão é a forma mais simples da logística.
258 |
259 | Ela pode ser generalizada de várias formas. Uma possível forma da **logística generalizada** é
260 | $$\mu ( t ) = d + \frac{ a - d} {( b + \exp { \{ - ct \} } )^f}$$
261 |
262 | A logística é obtida fazendo $d=0$ e $f=1$.
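
A direct translation of the generalized logistic into R (a sketch; the parameterization simply follows the formula above):

```r
# generalized logistic mean; d = 0 and f = 1 recover the simple logistic
mu_gen <- function(t, a, b, c, d = 0, f = 1) d + (a - d) / (b + exp(-c * t))^f

# check the reduction for arbitrary (illustrative) parameter values
t <- 0:10
all.equal(mu_gen(t, a = 5, b = 0.01, c = 0.08),
          5 * exp(0.08 * t) / (1 + 0.01 * exp(0.08 * t)))  # TRUE
```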
263 |
--------------------------------------------------------------------------------
/app_v1/CoronaUFMG_MD.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Modelando a pandemia de Covid19"
3 | author: "Dani Gamerman"
4 | # output: slidy_presentation
5 | output:
6 | html_document:
7 | toc: true
8 | # toc_depth: 5
9 | # toc_float: true
10 | ---
11 |
12 |
13 | ### Modelando a pandemia de Covid19
14 |
15 | Dani Gamerman - Programa de PG em Estatística - UFMG
16 |
17 |
18 |
19 | *1º semestre de 2020*
20 |
21 | (motivado a partir de notas de José Marcos Andrade Figueiredo - UFMG)
22 |
23 | #### Modelo de crescimento logístico básico
24 |
25 | $$
26 | Y(t) \sim N ( \mu (t) , \sigma^2 ), \qquad t = 1, 2, ...
27 | $$
28 |
29 | onde $Y(t)$ é o número de casos acumulados até o dia $t$ em uma dada região, com
30 |
31 | $$
32 | \mu ( t ) = \frac{ a \exp{ \{ c t \} } } {1 + b \exp { \{ c t \} }}.
33 | $$
34 | **Caso especial:** $b = 0$ (crescimento exponencial) $\rightarrow \mu(t) = a \exp \{ ct \}$.
35 |
36 | - Adequado para os estágios iniciais da pandemia
37 |
38 | ##### Problemas do modelo básico:
39 |
40 | a) os dados são contagens e a distribuição normal pressupõe dados contínuos;
41 |
42 | b) a variância deveria aumentar com a magnitude dos dados.
43 |
44 |
45 | ##### Características de interesse:
46 |
47 | As características mais importantes são:
48 |
49 |
50 | **1) Taxa de infecção**
51 |
52 | - $c$ mede a velocidade da aceleração e reflete a taxa de infecção da doença.
53 |
54 |
55 | **2) Assíntota**
56 | $$\lim_{t \to \infty} \mu( t) = \lim_{t \to \infty} \frac{ a \exp{ \{ c t \} } } {1 + b \exp { \{ ct \} }} = \frac ab$$
57 | - Reflete o total de casos acumulados ao longo de toda a pandemia.
58 |
59 | - Crescimento exponencial ($b=0$): assíntota $= \infty$ !
60 |
61 |
62 | **3) Ponto de inflexão**
63 |
64 | - É o tempo $t*$ onde o número de novos casos para de crescer e começa a diminuir.
65 |
66 | - Crescimento exponencial ($b=0$): número de novos casos nunca para de crescer!
67 |
68 |
69 | **4) Previsão:**
70 |
71 | - O que podemos dizer sobre $Y (t+k ), \forall k$, para $t$ fixo (hoje)?
72 | Depende da distribuição de $Y( t)$ mas será sempre dada pela distribuição preditiva de $Y(t+k)$ dado $Y(1:t) = \{ Y( 1) , ... , Y( t ) \}$, o que foi observado.
73 |
74 | Funciona como se fosse a distribuição a posteriori de $Y(t + k )$.
75 |
76 | **Resultado útil:** Se $Z$ e $W$ são 2 v. a.'s quaisquer então:
77 |
78 | $E[Z] = E[ E(Z \mid W ) ]$
79 |
80 | $Var[Z] = Var[ E(Z \mid W ) ] + E[ Var( Z \mid W ) ]$
81 |
82 | Em particular,
83 | $E[Y ( t + k ) \mid Y( 1:t)] = E\{ E[ Y ( t + k ) \mid \mu( 1:t)] \mid Y( 1:t) \}$
84 | $= E[ \mu( t+k ) \mid Y( 1:t) ]$, a média a posteriori de $\mu ( t + k )$.
85 |
86 | A inferência para tudo que foi dito acima deve ser reportada através de estimadores pontuais (ex: médias a posteriori), junto com os respectivos intervalos de credibilidade.
87 |
88 |
89 | **5) Número de reprodutibilidade $R_0$**
90 |
91 | $R_0$ é o número esperado de casos secundários de uma doença produzidos por um indivíduo infectado.
92 |
93 | No tempo $t$, ele é definido como
94 | $$R_0 = \frac {\mu ( t ) - \mu ( t-1)}{\mu ( t-1)} = \frac {\mu ( t )}{\mu ( t-1)} - 1$$
95 |
96 | **Início da epidemia:**
97 | $1 \gg b \exp { \{ ct \} } \to \mu (t) \approx a \exp { \{ ct \} } \to R_0 \approx e^c - 1$
98 |
99 | **Final da epidemia:**
100 | $1 \ll b \exp { \{ ct \} } \to \mu (t) \approx a / b \to R_0 \approx 0$
101 |
102 | **Meio da epidemia:**
103 | $R_0$ é função dos parâmetros $(a,b,c)$ e do tempo $t$, e é dado por
104 | $$R_0 (t) = e^c \ \frac{1 + b e^{-c} e^{ct} } {1 + b e^{ct} } \ - \ 1$$
105 |
106 | Para cada $t$ fixado, podemos obter sua distribuição a posteriori (via amostra MCMC) e calcular média, quantis e intervalos de credibilidade.
107 |
108 |
109 | **6) Número médio de novos casos (NMNC)**
110 |
111 | NMNC no tempo $t+k$:
112 | $n_t ( k ) = E [Y ( t + k ) - Y ( t + k -1 ) ] = \mu ( t + k ) - \mu ( t + k -1 )$
113 |
114 | Logo, NMNC também é função dos parâmetros $(a,b,c)$ e pode ser facilmente calculado.
115 |
116 | Para cada $t$ e $k$ fixados, podemos obter sua distribuição a posteriori (via amostra MCMC) e calcular média, quantis e intervalos de credibilidade.
117 |
118 | 
119 |
120 |
121 | #### Alternativas
122 |
123 | 1.1) $Y( t ) \sim Poisson ( \mu ( t ) )$ com $E[ Y(t)] = \mu (t )$ e $Var(Y(t)) = \mu ( t )$
124 |
125 | 1.2) $Y ( t ) \sim N ( \mu ( t ) , \sigma^2 \ \mu ( t ) )$ com $E[ Y(t)] = \mu (t )$ e $Var(Y(t)) = \sigma^2 \ \mu ( t )$
126 |
127 | Obs:
128 |
129 | - Modelo (1.2) admite sobredispersão se $\sigma^2 > 1$
130 |
131 | - Alternativa (1.2) só cuida do comentário (b)
132 |
133 | - Alternativa (1.1) cuida dos 2 comentários mas não permite sobredispersão
134 |
135 | ##### Poisson com sobredispersão
136 |
137 | 1.3) $Y( t ) \mid \epsilon ( t ) \sim Poisson ( \mu ( t ) + \epsilon ( t ) )$ com $E[ \epsilon (t)] = 0$ e $Var( \epsilon (t)) = \sigma^2$
138 |
139 | 1.4) $Y( t ) \mid \epsilon ( t ) \sim Poisson ( \mu ( t ) \times \epsilon ( t ) )$ com $E[ \epsilon (t)] = 1$ e $Var( \epsilon (t)) = \sigma^2$
140 |
141 |
142 | Usando os resultados úteis anteriores:
143 |
144 | **Mod(1.3):**
145 | $E [ Y ( t ) ] = E[ E( Y(t) \mid \epsilon (t ) ) ] = E[ \mu ( t ) + \epsilon ( t ) ] = \mu ( t ) + E[ \epsilon ( t ) ] = \mu ( t )$
146 |
147 | $Var[ Y ( t ) ] = Var[ E( Y(t) \mid \epsilon (t) ) ] + E[ Var ( Y(t) \mid \epsilon (t) ) ] = Var[ \mu ( t ) + \epsilon ( t ) ] + E [ \mu ( t ) + \epsilon ( t ) ] = \sigma^2 + \mu_t > \mu ( t )$
148 |
149 | **Mod(1.4):**
150 | $E [ Y ( t ) ] = E[ E( Y(t) \mid \epsilon (t) ) ] = E[ \mu ( t ) \times \epsilon ( t ) ] = \mu ( t ) \times E[ \epsilon ( t ) ] = \mu ( t )$
151 |
152 | $Var[ Y ( t ) ] = Var[ E( Y(t) \mid \epsilon (t) ) ] + E[ Var ( Y(t) \mid \epsilon (t) ) ] = Var[ \mu ( t ) \times \epsilon ( t ) ] + E [ \mu ( t ) \times \epsilon ( t ) ] =
153 | \mu_t ^2 \sigma^2 + \mu_t > \mu ( t )$
154 |
155 | Ambos preservam média da Poisson e aumentam dispersão da Poisson.
156 |
157 | ##### Extensões dinâmicas
158 |
159 | Modelos anteriores assumem comportamento estático $\Rightarrow$ a forma
160 | da doença não se modifica ao longo do tempo $\Rightarrow$ taxa de
161 | infecção será sempre a mesma, assíntota será sempre a mesma, ...
162 |
163 | **Modelos dinâmicos** flexibilizam isso.
164 |
165 | $$\mu ( t ) = \frac{ a( {\color{red} t )} \ \exp{ \{ c( {\color{red}t) } \ t \} } } {1 + b( {\color{red}t) } \ \exp { \{ c({\color{red}t) } \ t \} }}$$
166 |
167 | com:
168 | $a ( t ) = a ( t-1) + w_a ( t )$, onde $w_a ( t ) \sim N ( 0 , W_a ), \forall t $.
169 |
170 | $b ( t ) = b ( t-1) + w_b ( t )$, onde $w_b ( t ) \sim N ( 0 , W_b ), \forall t $.
171 |
172 | $c ( t ) = c ( t-1) + w_c ( t )$, onde $w_c ( t ) \sim N ( 0 , W_c ), \forall t $.
173 |
174 |
175 | **Vantagens:**
176 |
177 | a) $E[ a(t) \mid a(t-1 )]= a (t-1)$, e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ constância local.
178 |
179 | b) $Var[ a(t) \mid a(t-1 )]= W_a$, e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ aumento da incerteza.
180 |
181 | **Problemas:**
182 |
183 | a) variâncias $W_a, W_b, W_c$ conhecidas $\Rightarrow$ difíceis de especificar.
184 |
185 | b) variâncias $W_a, W_b, W_c$ desconhecidas $\Rightarrow$ difíceis de estimar.
186 |
187 | c) não dá para simplificar $W_a = W_b = W_c = W$ (magnitudes diferentes de $(a,b,c)$).
188 |
189 | - Outra forma de introduzir dinamismo, agora multiplicativo:
190 |
191 | $a ( t ) = a ( t-1) \times w_a ( t )$, onde $w_a ( t ) \sim Gamma ( d_a ,d_a ), \forall t $.
192 |
193 | $b ( t ) = b ( t-1) \times w_b ( t )$, onde $w_b ( t ) \sim Gamma ( d_b ,d_b ), \forall t $.
194 |
195 | $c ( t ) = c ( t-1) \times w_c ( t )$, onde $w_c ( t ) \sim Gamma ( d_c ,d_c ), \forall t $.
196 |
197 |
198 | **Vantagens:**
199 |
200 | a) $E[ a(t) \mid a(t-1 )]= a (t-1)$ e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ constância local.
201 |
202 | b) $Var[ a(t) \mid a(t-1 )]= d_c^{-1}$ e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ aumento da incerteza.
203 |
204 | c) Hiperparâmetros $d_a, d_b, d_c$ fáceis de especificar.
205 |
206 | Exemplos:
207 | $$d=1000 \ \to \ 0,90= P ( 0,95 < w(t) < 1,05 ) = P \left( 0,95 < \frac {a(t)}{a(t-1) } < 1,05 \right)$$
208 | $$d=1500 \ \to \ 0,95= P ( 0,95 < w(t) < 1,05 ) = P \left( 0,95 < \frac {a(t)}{a(t-1) } < 1,05 \right)$$
209 |
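Esses valores podem ser conferidos diretamente no R (esboço apenas ilustrativo):

```r
## P( 0.95 < w(t) < 1.05 ) para w(t) ~ Gamma(d, d)
d <- c(1000, 1500)
pgamma(1.05, shape = d, rate = d) - pgamma(0.95, shape = d, rate = d)
## aproximadamente 0.89 e 0.95, proximos dos valores citados acima
```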
210 |
211 | **Problemas:**
212 |
213 | a) As magnitudes de $a, b, c$ ainda interferem no aumento da incerteza.
214 |
215 | b) não sei se software lida bem com Gammas tendo parâmetros tão altos.
216 |
217 |
218 | **Evolução multiplicativa com erros normais**
219 |
220 | Considere a evolução multiplicativa abaixo para o parâmetro $a$:
221 |
222 | $$a ( t ) = a ( t-1) \times \exp \{ w_a ( t ) \}, \mbox{ onde } w_a ( t ) \sim N( 0 , W_a )$$
223 | Tomando o logaritmo nos dois lados, obtém-se:
224 | $$\log \ a ( t ) = \log \ a ( t-1) + w_a ( t ), \mbox{ onde } w_a ( t ) \sim N( 0 , W_a )$$
225 | Passando $\log \ a ( t-1)$ para a esquerda, vem que:
226 | $$\log \ a ( t ) - \log \ a ( t-1) = \log \left[ \frac{ a ( t )}{ a ( t-1)} \right] = w_a ( t ), \mbox{ onde } w_a ( t ) \sim N( 0 , W_a )$$
227 |
228 | Especificação de $W_a$: pode-se pensar em percentual de incremento, como antes.
229 |
230 | $$0,95 = P \left( 0,95 < \frac {a(t)}{a(t-1) } < 1,05 \right) = P ( - 0,05 < w_a(t) < 0,05 )$$
231 | Como cerca de 95% da massa da Normal está a até 2 desvios-padrão da média, isso implica $2 \sqrt{W_a} = 0,05$, ou seja, $\sqrt{W_a} = 0,025 \ \Rightarrow W_a = (0,025)^2$.
232 |
233 | Mesma especificação vale para $W_b$ e $W_c$, pois dimensões de $b$ e $c$ não importam.
234 |
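A conta pode ser conferida no R usando a distribuição exata de $a(t)/a(t-1)$, que é log-normal (esboço ilustrativo):

```r
## P( 0.95 < a(t)/a(t-1) < 1.05 ) com a(t)/a(t-1) = exp(w_a(t)), w_a(t) ~ N(0, W_a)
Wa <- 0.025^2
pnorm(log(1.05), 0, sqrt(Wa)) - pnorm(log(0.95), 0, sqrt(Wa))
## aproximadamente 0.95, como desejado
```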
235 |
236 | **Caso particular**
237 |
238 | Baseado em Gamerman, Santos e Franco (J. Time Series Analysis, 2013):
239 |
240 | $$\mu ( t ) = \frac{ a( {\color{red}t)} \ \exp{ \{ c \ t \} } } {1 + b \ \exp { \{ c \ t \} }}$$
241 | $a ( t ) = a ( t-1) \times w_a ( t )$, onde $w_a ( t ) \sim Beta, \forall t$.
242 |
243 | Pode ser usado também para crescimento exponencial ($b=0$).
244 |
245 |
246 | **Vantagem:**
247 |
248 | a) Permite contas exatas, dispensando aproximações MCMC.
249 |
250 |
251 | **Desvantagem:**
252 |
253 | a) Não permite $b$ e $c$ dinâmicos.
254 |
255 | #### Generalizações da logística
256 |
257 | Até agora, usamos a logística para especificar a média $\mu ( t )$ como
258 | $$\mu ( t ) = \frac{ a \exp{ \{ c t \} } } {1 + b \exp { \{ ct \} }} = \frac{ a} { b + \exp { \{ - ct \} }}.$$
259 | Essa expressão é a forma mais simples da logística.
260 |
261 | Ela pode ser generalizada de várias formas. Uma possível forma da **logística generalizada** é
262 | $$\mu ( t ) = d + \frac{ a - d} {( b + \exp { \{ - ct \} } )^f}$$
263 |
264 | A logística é obtida fazendo $d=0$ e $f=1$.
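
Um esboço em R da logística generalizada (parâmetros escolhidos apenas para ilustrar), verificando que $d=0$ e $f=1$ recuperam a logística usada até aqui:

```r
## logistica generalizada: mu(t) = d + (a - d) / (b + exp(-c t))^f
mu_gen <- function(t, a, b, c, d = 0, f = 1) d + (a - d) / (b + exp(-c * t))^f

tt <- 0:150
all.equal(mu_gen(tt, a = 1000, b = 0.1, c = 0.08),
          1000 * exp(0.08 * tt) / (1 + 0.1 * exp(0.08 * tt)))   # TRUE
```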
265 |
--------------------------------------------------------------------------------
/R/JAGS/predict_confirmed_par.R:
--------------------------------------------------------------------------------
1 | rm(list=ls())
2 |
3 | setwd("/run/media/marcos/OS/UFMG/Pesquisa/Covid/R/JAGS")
4 |
5 | ###################################################################
6 | ### Packages
7 | ###################################################################
8 | library(dplyr)
9 | library(tidyr)
10 | library("rjags")
11 | library(matrixStats)
12 | library(mcmcplots)
13 | library(foreach)
14 | library(doMC)
15 |
16 | ###################################################################
17 | ### Data sets: https://github.com/CSSEGISandData
18 | ###################################################################
19 | baseURL = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series"
20 | source("loadData.R")
21 |
22 | covid19_confirm <- loadData("time_series_covid19_confirmed_global.csv", "confirmed")
23 | # covid19_recover <- loadData("time_series_covid19_recovered_global.csv", "recovered")
24 | covid19_deaths <- loadData("time_series_covid19_deaths_global.csv", "deaths")
25 |
26 | covid19 <- covid19_confirm %>% left_join(covid19_deaths)
27 |
28 | #countrylist = "Korea, South"
29 | countrylist <- c("Argentina","Australia","Belgium","Bolivia","Canada","Chile","China","Colombia","Ecuador","France","Germany","Greece", "India", "Iran", "Ireland", "Italy", "Japan", "Korea, South", "Mexico", "Netherlands", "New Zealand", "Norway", "Peru", "Paraguay", "Poland", "Portugal", "Russia", "South Africa", "Spain","United Kingdom", "Uruguay", "Sweden", "Switzerland", "US", "Turkey", "Venezuela")
30 |
31 | #countrylist <- c("Argentina","Bolivia","Canada","Chile","Colombia","Ecuador", "Greece", "India", "Japan", "Korea, South", "Mexico", "Peru", "Paraguay", "Poland", "Russia", "South Africa", "United Kingdom", "Uruguay", "Sweden", "US", "Venezuela")
32 |
33 | country_pop <- read.csv("../pop/pop_WR.csv")
34 |
35 | #register cores
36 | registerDoMC(cores = detectCores()-1) # Alternativa Linux
37 |
38 | #for(country_name in countrylist){
39 | obj <- foreach(s = 1:length(countrylist) ) %dopar% {
40 |
41 | country_name <- countrylist[s]
42 |
43 | pop <- country_pop$pop[which(country_pop$country == country_name)]
44 |
45 | covid_states <- covid19 %>% filter(country==country_name) %>%
46 | mutate(confirmed_new = confirmed - lag(confirmed, default=0),
47 | # recovered_new = recovered - lag(recovered, default=0),
48 | deaths_new = deaths - lag(deaths, default=0)) %>%
49 | arrange(date,state)
50 |
51 | covid_country <- covid_states %>% group_by(date) %>%
52 | summarize(n = sum(confirmed, na.rm=T),
53 | d = sum(deaths, na.rm=T),
54 | n_new = sum(confirmed_new, na.rm=T),
55 | d_new = sum(deaths_new, na.rm=T)) %>%
56 | arrange(date) %>% filter(date>='2020-01-23')
57 |
58 | # covid_country <- covid19 %>% filter(country==country_name) %>%
59 | # mutate(n_new = confirmed - lag(confirmed, default=0),
60 | # d_new = deaths - lag(deaths, default=0)) %>%
61 | # rename(n = confirmed,
62 | # d = deaths) %>%
63 | # arrange(date) %>% filter(date>='2020-01-01') %>%
64 | # select(-c("state","country"))
65 |
66 | # covid_country %>% print(n=Inf)
67 |
68 | ###########################################################################
69 | ###### JAGS
70 | ###########################################################################
71 | Y = covid_country
72 |
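# corrige dias com contagem nova negativa (tipicamente revisoes na serie acumulada da fonte):
# o valor negativo e absorvido pelo dia anterior e o dia corrente fica com zero casos novos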
73 | while(any(Y$n_new <0)){
74 | pos <- which(Y$n_new <0)
75 | for(j in pos){
76 | Y$n_new[j-1] = Y$n_new[j] + Y$n_new[j-1]
77 | Y$n_new[j] = 0
78 | Y$n[j-1] = Y$n[j]
79 | }
80 | }
81 |
82 | t = dim(Y)[1]
83 |
84 | source("jags_poisson.R")
85 |
86 | i = 4 # coluna de Y usada como resposta: 4 = n_new (novos casos por dia); 2 = n e 3 = d sao os acumulados
87 | L = 300
88 | #t0 = Sys.time()
89 |
90 | params = c("a","b","c","f","yfut","mu")
91 | nc = 1 # 3
92 | nb = 90e3 # 5e4
93 | thin = 10
94 | ni = 10e3 # 5e4
95 | data_jags = list(y=Y[[i]], t=t, L=L)
96 |
97 | mod = try(jags.model(textConnection(mod_string_new), data=data_jags, n.chains=nc, n.adapt=nb, quiet=TRUE))
98 | try(update(mod, n.iter=ni, progress.bar="none"))
99 | mod_sim = try(coda.samples(model=mod, variable.names=params, n.iter=ni, thin=thin,progress.bar="none"))
100 |
101 | if(!inherits(mod_sim, "try-error") && !inherits(mod, "try-error")){
102 | mod_chain = as.data.frame(do.call(rbind, mod_sim))
103 | # names(mod_chain)
104 |
105 | a_pos = "a"
106 | b_pos = "b"
107 | c_pos = "c"
108 | f_pos = "f"
109 | mu_pos = paste0("mu[",1:(t+L),"]")
110 | yfut_pos = paste0("yfut[",1:L,"]")
111 | L0 = 14
112 |
113 | mod_chain_y = as.matrix(mod_chain[yfut_pos])
114 | mod_chain_cumy = rowCumsums(mod_chain_y) + Y[[2]][t]
115 |
116 |
117 | ### list output
118 | df_predict <- data.frame( date = as.Date((max(Y$date)+1):(max(Y$date)+L0), origin="1970-01-01"),
119 | q25 = colQuantiles(mod_chain_cumy[,1:L0], prob=.025),
120 | med = colQuantiles(mod_chain_cumy[,1:L0], prob=.5),
121 | q975 = colQuantiles(mod_chain_cumy[,1:L0], prob=.975),
122 | m = colMeans(mod_chain_cumy[,1:L0]))
123 | row.names(df_predict) <- NULL
124 |
125 | lt_predict <- lt_summary <- NULL
126 |
127 | #longterm
128 | L0 = 200
129 |
130 | #acha a curva de quantil
131 | lowquant <- colQuantiles(mod_chain_y[,1:L0], prob=.025)
132 | medquant <- colQuantiles(mod_chain_y[,1:L0], prob=.5)
133 | highquant <- colQuantiles(mod_chain_y[,1:L0], prob=.975)
134 |
135 | NTC25 =sum(lowquant)+Y[[2]][t]
136 | NTC500=sum(medquant)+Y[[2]][t]
137 | NTC975=sum(highquant)+Y[[2]][t]
138 |
139 |
140 | ##flag
141 | cm <- pop * 0.025
142 | ch <- pop * 0.03
143 | flag <- 0 #tudo bem
144 | {if(NTC500 > cm) flag <- 2 #nao plotar
145 | else{if(NTC975 > ch){flag <- 1; NTC25 <- NTC975 <- NULL}}} #plotar so mediana
146 |
147 | #vetor de data futuras e pega a posicao do maximo do percentil 25.
148 | dat.vec <- as.Date((max(Y$date)+1):(max(Y$date)+L0), origin="1970-01-01")
149 | dat.full <- c(Y[[1]],dat.vec)
150 |
151 |
152 | Dat25 <- Dat500 <- Dat975 <- NULL
153 | dat.low.end <- dat.med.end <- dat.high.end <- NULL
154 |
155 | mod_chain_mu = as.matrix(mod_chain[mu_pos])
156 | mu50 <- apply(mod_chain_mu,2,quantile, probs=0.5)
157 | Dat500 <- dat.full[which.max(mu50[1:(t+L0)])]
158 |
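# data de "fim" (mediana): primeiro dia futuro em que a curva acumulada prevista atinge 99% (q) do total projetado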
159 | q <- .99
160 | med.cum <- c(medquant[1]+Y[[2]][t],medquant[2:length(medquant)])
161 | med.cum <- colCumsums(as.matrix(med.cum))
162 | med.cum <- med.cum/med.cum[length(med.cum)]
163 | med.end <- which(med.cum - q > 0)[1]
164 | dat.med.end <- dat.vec[med.end]
165 |
166 | if(flag == 0){
167 | #definicao do pico usando a curva das medias
168 | mu25 <- apply(mod_chain_mu,2,quantile, probs=0.025)
169 | mu975 <- apply(mod_chain_mu,2,quantile, probs=.975)
170 |
171 | posMax.q25 <- which.max(mu25[1:(t+L0)])
172 | aux <- mu975 - mu25[posMax.q25]
173 | aux2 <- aux[posMax.q25:(t+L0)]
174 | val <- min(aux2[aux2>0])
175 | dat.max <- which(aux == val)
176 |
177 | aux <- mu975 - mu25[posMax.q25]
178 | aux2 <- aux[1:posMax.q25]
179 | val <- min(aux2[aux2>0])
180 | dat.min <- which(aux == val)
181 |
182 | Dat25 <- dat.full[dat.min]
183 | Dat975 <- dat.full[dat.max]
184 |
185 | #calcula o fim da pandemia
186 | low.cum <- c(lowquant[1]+Y[[2]][t],lowquant[2:length(lowquant)])
187 | low.cum <- colCumsums(as.matrix(low.cum))
188 | low.cum <- low.cum/low.cum[length(low.cum)]
189 | low.end <- which(low.cum - q > 0)[1]
190 | dat.low.end <- dat.vec[low.end]
191 |
192 | high.cum <- c(highquant[1]+Y[[2]][t],highquant[2:length(highquant)])
193 | high.cum <- colCumsums(as.matrix(high.cum))
194 | high.cum <- high.cum/high.cum[length(high.cum)]
195 | high.end <- which(high.cum - q > 0)[1]
196 | dat.high.end <- dat.vec[high.end]
197 | }
198 |
199 | lt_predict <- data.frame( date = dat.vec,
200 | q25 = lowquant,
201 | med = medquant,
202 | q975 = highquant,
203 | m = colMeans(mod_chain_y[,1:L0]))
204 | row.names(lt_predict) <- NULL
205 |
206 | lt_summary <- list(NTC25=NTC25,
207 | NTC500=NTC500,
208 | NTC975=NTC975,
209 | high.dat.low=Dat25,
210 | high.dat.med=Dat500,
211 | high.dat.upper=Dat975,
212 | end.dat.low = dat.low.end,
213 | end.dat.med = dat.med.end,
214 | end.dat.upper = dat.high.end)
215 |
216 | muplot <- data.frame(date = dat.full, mu = mu50[1:(t+L0)])
217 | list_out <- list( df_predict = df_predict, lt_predict=lt_predict, lt_summary=lt_summary, mu_plot = muplot, flag=flag)
218 | name.to.save <- gsub(" ", "-", country_name)
219 |
220 | ### saveRDS
221 | results_directory = "/run/media/marcos/OS/UFMG/Pesquisa/Covid/app_COVID19/STpredictions/"
222 | #results_directory = getwd()#'C:/Users/ricar/Dropbox/covid19/R/predict/'
223 | name.file <- paste0(results_directory,name.to.save,'_',colnames(Y)[2],'.rds')
224 | saveRDS(list_out, file=name.file)
225 |
226 | source("mcmcplot_country.R")
227 | report_directory = "/run/media/marcos/OS/UFMG/Pesquisa/Covid/app_COVID19/STpredictions/reports"
228 | #mcmcplot_country(mcmcout = mod_sim, parms = c(paste0("a[",t,"]"), paste0("b[",t,"]"), paste0("c[",t,"]")),
229 | mcmcplot_country(mcmcout = mod_sim, parms = c("a", "b", "c", "f"),
230 | dir = report_directory,
231 | filename = paste0(country_name,'_',colnames(Y)[i],'_diagnostics'),
232 | heading = paste0(country_name,'_',colnames(Y)[i]),
233 | extension = "html", greek = TRUE,
234 | country = country_name,
235 | type = colnames(Y)[i])
236 | }
237 | else print(paste0("ERROR:",country_name))
238 | }
239 |
--------------------------------------------------------------------------------
/R/JAGS/death_confirmed_par.R:
--------------------------------------------------------------------------------
1 | rm(list=ls())
2 |
3 | setwd("/run/media/marcos/OS/UFMG/Pesquisa/Covid/R/JAGS")
4 |
5 | ###################################################################
6 | ### Packages
7 | ###################################################################
8 | library(dplyr)
9 | library(tidyr)
10 | library("rjags")
11 | library(matrixStats)
12 | library(mcmcplots)
13 | library(foreach)
14 | library(doMC)
15 |
16 | ###################################################################
17 | ### Data sets: https://github.com/CSSEGISandData
18 | ###################################################################
19 | baseURL = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series"
20 | source("loadData.R")
21 |
22 | covid19_confirm <- loadData("time_series_covid19_confirmed_global.csv", "confirmed")
23 | # covid19_recover <- loadData("time_series_covid19_recovered_global.csv", "recovered")
24 | covid19_deaths <- loadData("time_series_covid19_deaths_global.csv", "deaths")
25 |
26 | covid19 <- covid19_confirm %>% left_join(covid19_deaths)
27 |
28 | #countrylist = "Korea, South"
29 | countrylist <- c("Argentina","Australia","Belgium","Bolivia","Canada","Chile","China","Colombia","Ecuador","France","Germany","Greece", "India", "Iran", "Ireland", "Italy", "Japan", "Korea, South", "Mexico", "Netherlands", "New Zealand", "Norway", "Peru", "Paraguay", "Poland", "Portugal", "Russia", "South Africa", "Spain","United Kingdom", "Uruguay", "Sweden", "Switzerland", "US", "Turkey", "Venezuela")
30 |
31 | #countrylist <- c("Argentina","Bolivia","Canada","Chile","Colombia","Ecuador", "Greece", "India", "Japan", "Korea, South", "Mexico", "Peru", "Paraguay", "Poland", "Russia", "South Africa", "United Kingdom", "Uruguay", "Sweden", "US", "Venezuela")
32 |
33 | country_pop <- read.csv("../pop/pop_WR.csv")
34 |
35 | #register cores
36 | registerDoMC(cores = detectCores()-1) # Alternativa Linux
37 |
38 | #for(country_name in countrylist){
39 | obj <- foreach(s = 1:length(countrylist) ) %dopar% {
40 |
41 | country_name <- countrylist[s]
42 |
43 | pop <- country_pop$pop[which(country_pop$country == country_name)]
44 |
45 | covid_states <- covid19 %>% filter(country==country_name) %>%
46 | mutate(confirmed_new = confirmed - lag(confirmed, default=0),
47 | # recovered_new = recovered - lag(recovered, default=0),
48 | deaths_new = deaths - lag(deaths, default=0)) %>%
49 | arrange(date,state)
50 |
51 | covid_country <- covid_states %>% group_by(date) %>%
52 | summarize(n = sum(confirmed, na.rm=T),
53 | d = sum(deaths, na.rm=T),
54 | n_new = sum(confirmed_new, na.rm=T),
55 | d_new = sum(deaths_new, na.rm=T)) %>%
56 | arrange(date) %>% filter(date>='2020-01-23')
57 |
58 | # covid_country <- covid19 %>% filter(country==country_name) %>%
59 | # mutate(n_new = confirmed - lag(confirmed, default=0),
60 | # d_new = deaths - lag(deaths, default=0)) %>%
61 | # rename(n = confirmed,
62 | # d = deaths) %>%
63 | # arrange(date) %>% filter(date>='2020-01-23') %>%
64 | # select(-c("state","country"))
65 |
66 | # covid_country %>% print(n=Inf)
67 |
68 | ###########################################################################
69 | ###### JAGS
70 | ###########################################################################
71 | Y = covid_country
72 |
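# corrige dias com contagem nova negativa de obitos (tipicamente revisoes na serie acumulada da fonte):
# o valor negativo e absorvido pelo dia anterior e o dia corrente fica com zero obitos novos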
73 | while(any(Y$d_new <0)){
74 | pos <- which(Y$d_new <0)
75 | for(j in pos){
76 | Y$d_new[j-1] = Y$d_new[j] + Y$d_new[j-1]
77 | Y$d_new[j] = 0
78 | Y$d[j-1] = Y$d[j]
79 | }
80 | }
81 |
82 | t = dim(Y)[1]
83 |
84 | source("jags_poisson.R")
85 |
86 | i = 5 # coluna de Y usada como resposta: 5 = d_new (novos obitos por dia); 2 = n e 3 = d sao os acumulados
87 | L = 300
88 | #t0 = Sys.time()
89 |
90 | #use static to provide initial values
91 | params = c("a","b","c","f","yfut","mu")
92 | nc = 1 # 3
93 | nb = 90e3 # 5e4
94 | thin = 10
95 | ni = 10e3 # 5e4
96 | data_jags = list(y=Y[[i]], t=t, L=L)
97 | mod = try(jags.model(textConnection(mod_string_new), data=data_jags, n.chains=nc, n.adapt=nb, quiet=TRUE))
98 | try(update(mod, n.iter=ni, progress.bar="none"))
99 | mod_sim = try(coda.samples(model=mod, variable.names=params, n.iter=ni, thin=thin,progress.bar="none"))
100 |
101 | if(!inherits(mod_sim, "try-error") && !inherits(mod, "try-error")){
102 | mod_chain = as.data.frame(do.call(rbind, mod_sim))
103 | # names(mod_chain)
104 |
105 | a_pos = "a"
106 | b_pos = "b"
107 | c_pos = "c"
108 | f_pos = "f"
109 | mu_pos = paste0("mu[",1:(t+L),"]")
110 | yfut_pos = paste0("yfut[",1:L,"]")
111 | L0 = 14
112 |
113 | mod_chain_y = as.matrix(mod_chain[yfut_pos])
114 | mod_chain_cumy = rowCumsums(mod_chain_y) + Y[[3]][t]
115 |
116 |
117 | ### list output
118 | df_predict <- data.frame( date = as.Date((max(Y$date)+1):(max(Y$date)+L0), origin="1970-01-01"),
119 | q25 = colQuantiles(mod_chain_cumy[,1:L0], prob=.025),
120 | med = colQuantiles(mod_chain_cumy[,1:L0], prob=.5),
121 | q975 = colQuantiles(mod_chain_cumy[,1:L0], prob=.975),
122 | m = colMeans(mod_chain_cumy[,1:L0]))
123 | row.names(df_predict) <- NULL
124 |
125 | lt_predict <- lt_summary <- NULL
126 |
127 | #longterm
128 | L0 = 200
129 |
130 | #acha a curva de quantil
131 | lowquant <- colQuantiles(mod_chain_y[,1:L0], prob=.025)
132 | medquant <- colQuantiles(mod_chain_y[,1:L0], prob=.5)
133 | highquant <- colQuantiles(mod_chain_y[,1:L0], prob=.975)
134 |
135 | NTC25 =sum(lowquant)+Y[[3]][t]
136 | NTC500=sum(medquant)+Y[[3]][t]
137 | NTC975=sum(highquant)+Y[[3]][t]
138 |
139 |
140 | ##flag
141 | cm <- pop * 0.025 * 0.12
142 | ch <- pop * 0.03 * 0.15
143 | flag <- 0 #tudo bem
144 | {if(NTC500 > cm) flag <- 2 #nao plotar
145 | else{if(NTC975 > ch){flag <- 1; NTC25 <- NTC975 <- NULL}}} #plotar so mediana
146 |
147 | #vetor de data futuras e pega a posicao do maximo do percentil 25.
148 | dat.vec <- as.Date((max(Y$date)+1):(max(Y$date)+L0), origin="1970-01-01")
149 | dat.full <- c(Y[[1]],dat.vec)
150 |
151 |
152 | Dat25 <- Dat500 <- Dat975 <- NULL
153 | dat.low.end <- dat.med.end <- dat.high.end <- NULL
154 |
155 | mod_chain_mu = as.matrix(mod_chain[mu_pos])
156 | mu50 <- apply(mod_chain_mu,2,quantile, probs=0.5)
157 | Dat500 <- dat.full[which.max(mu50[1:(t+L0)])]
158 |
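# data de "fim" (mediana): primeiro dia futuro em que a curva acumulada prevista de obitos atinge 99% (q) do total projetado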
159 | q <- .99
160 | med.cum <- c(medquant[1]+Y[[3]][t],medquant[2:length(medquant)])
161 | med.cum <- colCumsums(as.matrix(med.cum))
162 | med.cum <- med.cum/med.cum[length(med.cum)]
163 | med.end <- which(med.cum - q > 0)[1]
164 | dat.med.end <- dat.vec[med.end]
165 |
166 | if(flag == 0){
167 | #definicao do pico usando a curva das medias
168 | mu25 <- apply(mod_chain_mu,2,quantile, probs=0.025)
169 | mu975 <- apply(mod_chain_mu,2,quantile, probs=.975)
170 |
171 | posMax.q25 <- which.max(mu25[1:(t+L0)])
172 | aux <- mu975 - mu25[posMax.q25]
173 | aux2 <- aux[posMax.q25:(t+L0)]
174 | val <- min(aux2[aux2>0])
175 | dat.max <- which(aux == val)
176 |
177 | aux <- mu975 - mu25[posMax.q25]
178 | aux2 <- aux[1:posMax.q25]
179 | val <- min(aux2[aux2>0])
180 | dat.min <- which(aux == val)
181 |
182 | Dat25 <- dat.full[dat.min]
183 | Dat975 <- dat.full[dat.max]
184 |
185 | #calcula o fim da pandemia
186 | low.cum <- c(lowquant[1]+Y[[3]][t],lowquant[2:length(lowquant)])
187 | low.cum <- colCumsums(as.matrix(low.cum))
188 | low.cum <- low.cum/low.cum[length(low.cum)]
189 | low.end <- which(low.cum - q > 0)[1]
190 | dat.low.end <- dat.vec[low.end]
191 |
192 | high.cum <- c(highquant[1]+Y[[3]][t],highquant[2:length(highquant)])
193 | high.cum <- colCumsums(as.matrix(high.cum))
194 | high.cum <- high.cum/high.cum[length(high.cum)]
195 | high.end <- which(high.cum - q > 0)[1]
196 | dat.high.end <- dat.vec[high.end]
197 | }
198 |
199 | lt_predict <- data.frame( date = dat.vec,
200 | q25 = lowquant,
201 | med = medquant,
202 | q975 = highquant,
203 | m = colMeans(mod_chain_y[,1:L0]))
204 | row.names(lt_predict) <- NULL
205 |
206 | lt_summary <- list(NTC25=NTC25,
207 | NTC500=NTC500,
208 | NTC975=NTC975,
209 | high.dat.low=Dat25,
210 | high.dat.med=Dat500,
211 | high.dat.upper=Dat975,
212 | end.dat.low = dat.low.end,
213 | end.dat.med = dat.med.end,
214 | end.dat.upper = dat.high.end)
215 |
216 | muplot <- data.frame(date = dat.full, mu = mu50[1:(t+L0)])
217 | list_out <- list( df_predict = df_predict, lt_predict=lt_predict, lt_summary=lt_summary, mu_plot = muplot, flag=flag)
218 | name.to.save <- gsub(" ", "-", country_name)
219 |
220 | ### saveRDS
221 | results_directory = "/run/media/marcos/OS/UFMG/Pesquisa/Covid/app_COVID19/STpredictions/"
222 | #results_directory = getwd()#'C:/Users/ricar/Dropbox/covid19/R/predict/'
223 | name.file <- paste0(results_directory,name.to.save,'_',colnames(Y)[3],'.rds')
224 | saveRDS(list_out, file=name.file)
225 |
226 | source("mcmcplot_country.R")
227 | report_directory = "/run/media/marcos/OS/UFMG/Pesquisa/Covid/app_COVID19/STpredictions/reports"
228 | #mcmcplot_country(mcmcout = mod_sim, parms = c(paste0("a[",t,"]"), paste0("b[",t,"]"), paste0("c[",t,"]")),
229 | mcmcplot_country(mcmcout = mod_sim, parms = c("a", "b", "c", "f"),
230 | dir = report_directory,
231 | filename = paste0(country_name,'_',colnames(Y)[i],'_diagnostics'),
232 | heading = paste0(country_name,'_',colnames(Y)[i]),
233 | extension = "html", greek = TRUE,
234 | country = country_name,
235 | type = colnames(Y)[i])
236 | }
237 | else print(paste0("ERROR:",country_name))
238 | }
239 |
--------------------------------------------------------------------------------
/app_v1/www/CoronaUFMG_MD.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Modelando a pandemia de Covid19"
3 | author: "Dani Gamerman"
4 | # output: slidy_presentation
5 | output:
6 | html_document:
7 | toc: false
8 | # toc_depth: 5
9 | # toc_float: true
10 | ---
11 |
12 |
13 | ### Modelando a pandemia de Covid19
14 |
15 | Dani Gamerman - Programa de PG em Estatística - UFMG
16 |
17 | *1º semestre de 2020*
18 |
19 | (motivado a partir de notas de José Marcos Andrade Figueiredo - UFMG)
20 |
21 |
22 | #### Modelo de crescimento logı́stico básico
23 |
24 | $$
25 | Y(t) \sim N ( \mu (t) , \sigma^2 ), \qquad t = 1, 2, ...
26 | $$
27 |
28 | onde $Y(t)$ é o **número de casos acumulados** até o dia $t$ em uma dada região, com $\mu ( t ) = \frac{ a .\, \exp{ \{ c t \} } } {1 + b .\, \exp { \{ c t \} }}.$
29 |
30 |
31 | **Caso especial:**
32 |
33 | - $b = 0$ (crescimento exponencial) $\rightarrow \mu(t) = a .\,\exp \{ ct \}$;
34 |
35 | - adequado para os estágios iniciais da pandemia.
36 |
37 |
38 |
39 | ##### **Problemas do modelo básico:**
40 |
41 | a) os dados são **contagens**, e a distribuição normal pressupõe dados contínuos;
42 |
43 | b) a variância deveria aumentar com a magnitude dos dados.
44 |
45 |
46 |
47 | #### Características de interesse
48 |
49 | As características mais importantes são:
50 |
51 |
52 | **1) Taxa de infecção**
53 |
54 | - $c$ mede a velocidade da aceleração e reflete a taxa de infecção da doença.
55 |
56 |
57 | **2) Assíntota**
58 |
59 | $\lim_{t \to \infty} \mu( t) = \lim_{t \to \infty} \frac{ a .\, \exp{ \{ c t \} } } {1 + b .\, \exp { \{ ct \} }} = \frac ab$
60 |
61 | - Reflete o total de casos acumulados ao longo de toda a pandemia.
62 |
63 | - Crescimento exponencial ($b=0$): assíntota $= \infty$ !
64 |
65 |
66 | **3) Ponto de inflexão**
67 |
68 | - É o tempo $t^*$ onde o número de novos casos para de crescer e começa a diminuir.
69 |
70 | - Crescimento exponencial ($b=0$): número de novos casos nunca para de crescer!
71 |
72 |
73 | **4) Previsão:**
74 |
75 | - O que podemos dizer sobre $Y (t+k ), \forall k$, para $t$ fixo (hoje)?
76 |
77 | Depende da distribuição de $Y( t)$ mas será sempre dada pela distribuição preditiva de $Y(t+k)$ dado $Y(1:t) = \{ Y( 1) , ... , Y( t ) \}$ - o que foi observado.
78 |
79 | Funciona como se fosse a distribuição a posteriori de $Y(t + k )$.
80 |
81 |
82 | **Resultado útil:** Se $Z$ e $W$ são 2 v. a.'s quaisquer então:
83 |
84 | * $E[Z] = E[ E(Z \mid W ) ]$
85 |
86 | * $Var[Z] = Var[ E(Z \mid W ) ] + E[ Var( Z \mid W ) ]$
87 |
88 | Em particular, $E[Y ( t + k ) \mid Y( 1:t)] = E\{ E[ Y ( t + k ) \mid \mu( 1:t)] \mid Y( 1:t) \} = E[ \mu( t+k )] \mid Y( 1:t) ]$, a média a posteriori de $\mu ( t + k )$.
89 |
90 | A inferência para tudo que foi dito acima deve ser reportada através de estimadores pontuais (ex: médias a posteriori), junto com os respectivos intervalos de credibilidade.
91 |
92 |
93 | **5) Número de reprodutibilidade $R_0$**
94 |
95 | $R_0$ é o número esperado de casos secundários de uma doença produzidos por um indivíduo infectado.
96 |
97 | No tempo $t$, ele é definido como $R_0 = \frac {\mu ( t ) - \mu ( t-1)}{\mu ( t-1)} = \frac {\mu ( t )}{\mu ( t-1)} - 1$.
98 |
99 | - **Início da epidemia:** $1 \gg b \exp { \{ ct \} } \to \mu (t) \approx a \exp { \{ ct \} } \to R_0 \approx e^c - 1$
100 |
101 | - **Final da epidemia:** $1 \ll b \exp { \{ ct \} } \to \mu (t) \approx a / b \to R_0 \approx 0$
102 |
103 | - **Meio da epidemia:** $R_0$ é função dos parâmetros $(a,b,c)$ e do tempo $t$, e é dado por $R_0 (t) = e^c \ \frac{1 + b e^c e^{ct} } {1 + b e^{ct} } \ - \ 1$.
104 |
105 | Para cada $t$ fixado, podemos obter sua distribuição a posteriori (via amostra MCMC) e calcular média, quantis e intervalos de credibilidade.
106 |
107 |
108 | **6) Número médio de novos casos (NMNC)**
109 |
110 | Número médio de novos casos no tempo $t+k$:
111 |
112 | $n_t ( k ) = E [Y ( t + k ) - Y ( t + k -1 ) ] = \mu ( t + k ) - \mu ( t + k -1 )$
113 |
114 | Logo, NMNC também é função dos parâmetros $(a,b,c)$ e pode ser facilmente calculado.
115 |
116 | Para cada $t$ e $k$ fixados, podemos obter sua distribuição a posteriori (via amostra MCMC) e calcular média, quantis e intervalos de credibilidade.
117 |
118 |
119 |
120 |
121 |
122 | Estimativas e Intervalo de Credibilidade para o pico da pandemia e para o total de casos
123 |
124 |
125 | #### Alternativas:
126 |
127 | 1.1) $Y( t ) \sim Poisson ( \mu ( t ) )$ com $E[ Y(t)] = \mu (t )$ e $Var(Y(t)) = \mu ( t )$
128 |
129 | 1.2) $Y ( t ) \sim N ( \mu ( t ) , \sigma^2 \ \mu ( t ) )$ com $E[ Y(t)] = \mu (t )$ e $Var(Y(t)) = \sigma^2 \ \mu ( t )$
130 |
131 | Observações:
132 |
133 | - Modelo (1.2) admite sobredispersão se $\sigma^2 > 1$
134 |
135 | - Alternativa (1.2) só cuida do comentário (b)
136 |
137 | - Alternativa (1.1) cuida dos 2 comentários mas não permite sobredispersão
138 |
139 |
140 | ##### **Poisson com sobredispersão**
141 |
142 | 1.3) $Y( t ) \mid \epsilon ( t ) \sim Poisson ( \mu ( t ) + \epsilon ( t ) )$ com $E[ \epsilon (t)] = 0$ e $Var( \epsilon (t)) = \sigma^2$
143 |
144 | 1.4) $Y( t ) \mid \epsilon ( t ) \sim Poisson ( \mu ( t ) \times \epsilon ( t ) )$ com $E[ \epsilon (t)] = 1$ e $Var( \epsilon (t)) = \sigma^2$
145 |
146 |
147 | Usando os resultados úteis anteriores:
148 |
149 | Mod(1.3):
150 |
151 | - $E [ Y ( t ) ] = E[ E( Y(t) \mid \epsilon (t ) ) ] = E[ \mu ( t ) + \epsilon ( t ) ] = \mu ( t ) + E[ \epsilon ( t ) ] = \mu ( t )$
152 |
153 | - $Var[ Y ( t ) ] = Var[ E( Y(t) \mid \epsilon (t) ) ] + E[ Var ( Y(t) \mid \epsilon (t) ) ] = Var[ \mu ( t ) + \epsilon ( t ) ] + E [ \mu ( t ) + \epsilon ( t ) ] = \sigma^2 + \mu_t > \mu ( t )$
154 |
155 |
156 | Mod(1.4):
157 |
158 | - $E [ Y ( t ) ] = E[ E( Y(t) \mid \epsilon (t) ) ] = E[ \mu ( t ) \times \epsilon ( t ) ] = \mu ( t ) \times E[ \epsilon ( t ) ] = \mu ( t )$
159 |
160 | - $Var[ Y ( t ) ] = Var[ E( Y(t) \mid \epsilon (t) ) ] + E[ Var ( Y(t) \mid \epsilon (t) ) ] = Var[ \mu ( t ) \times \epsilon ( t ) ] + E [ \mu ( t ) \times \epsilon ( t ) ] = \mu_t ^2 \sigma^2 + \mu_t > \mu ( t )$
161 |
162 |
163 | Ambos preservam média da Poisson e aumentam dispersão da Poisson.
164 |
165 |
166 | #### Extensões dinâmicas
167 |
168 | Modelos anteriores assumem comportamento estático:
169 |
170 | - a forma da doença não se modifica ao longo do tempo;
171 |
172 | - taxa de infecção será sempre a mesma, assíntota será sempre a mesma, ...
173 |
174 | **Modelos dinâmicos** flexibilizam isso.
175 |
176 |
177 | ##### 1. **Modelos dinâmicos**
178 |
179 | $\mu ( t ) = \frac{ a( {\color{red} t )} \ \exp{ \{ c( {\color{red}t) } \ t \} } } {1 + b( {\color{red}t) } \ \exp { \{ c({\color{red}t) } \ t \} }}$
180 |
181 | com:
182 | $a ( t ) = a ( t-1) + w_a ( t )$, onde $w_a ( t ) \sim N ( 0 , W_a ), \forall t$.
183 |
184 | $b ( t ) = b ( t-1) + w_b ( t )$, onde $w_b ( t ) \sim N ( 0 , W_b ), \forall t$.
185 |
186 | $c ( t ) = c ( t-1) + w_c ( t )$, onde $w_c ( t ) \sim N ( 0 , W_c ), \forall t$.
187 |
188 |
189 | **Vantagens:**
190 |
191 | a) $E[ a(t) \mid a(t-1 )]= a (t-1)$, e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ constância local.
192 |
193 | b) $Var[ a(t) \mid a(t-1 )]= W_a$, e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ aumento da incerteza.
194 |
195 | **Problemas:**
196 |
197 | a) variâncias $W_a, W_b, W_c$ conhecidas $\Rightarrow$ difíceis de especificar.
198 |
199 | b) variâncias $W_a, W_b, W_c$ desconhecidas $\Rightarrow$ difíceis de estimar.
200 |
201 | c) não dá para simplificar $W_a = W_b = W_c = W$ (magnitudes diferentes de $(a,b,c)$).
202 |
203 |
204 | ##### 2. **Efeito multiplicativo:**
205 |
206 | Outra forma de introduzir dinamismo, agora **multiplicativo**:
207 |
208 | $a ( t ) = a ( t-1) \times w_a ( t )$, onde $w_a ( t ) \sim Gamma ( d_a ,d_a ), \forall t$.
209 |
210 | $b ( t ) = b ( t-1) \times w_b ( t )$, onde $w_b ( t ) \sim Gamma ( d_b ,d_b ), \forall t$.
211 |
212 | $c ( t ) = c ( t-1) \times w_c ( t )$, onde $w_c ( t ) \sim Gamma ( d_c ,d_c ), \forall t$.
213 |
214 |
215 | **Vantagens:**
216 |
217 | a) $E[ a(t) \mid a(t-1 )]= a (t-1)$ e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ constância local.
218 |
219 | b) $Var[ a(t) \mid a(t-1 )]= a(t-1)^2 \, d_a^{-1}$ e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ aumento da incerteza.
220 |
221 | c) Hiperparâmetros $d_a, d_b, d_c$ fáceis de especificar.
222 |
223 | Exemplos:
224 | $d=1000 \ \to \ 0,90= P ( 0,95 < w(t) < 1,05 ) = P \left( 0,95 < \frac {a(t)}{a(t-1) } < 1,05 \right)$
225 | $d=1500 \ \to \ 0,95= P ( 0,95 < w(t) < 1,05 ) = P \left( 0,95 < \frac {a(t)}{a(t-1) } < 1,05 \right)$
226 |
227 | **Problemas:**
228 |
229 | a) As magnitudes de $a, b, c$ ainda interferem no aumento da incerteza.
230 |
231 | b) não sei se software lida bem com Gammas tendo parâmetros tão altos.
232 |
233 |
234 | ##### 3. **Evolução multiplicativa com erros normais**
235 |
236 | Considere a evolução multiplicativa abaixo para o parâmetro $a$:
237 |
238 | $$a ( t ) = a ( t-1) \times \exp \{ w_a ( t ) \}, \mbox{ onde } w_a ( t ) \sim N( 0 , W_a )$$
239 | Tomando o logaritmo nos dois lados, obtém-se:
240 | $$\log \ a ( t ) = \log \ a ( t-1) + w_a ( t ), \mbox{ onde } w_a ( t ) \sim N( 0 , W_a )$$
241 | Passando $\log \ a ( t-1)$ para a esquerda, vem que:
242 | $$\log \ a ( t ) - \log \ a ( t-1) = \log \left[ \frac{ a ( t )}{ a ( t-1)} \right] = w_a ( t ), \mbox{ onde } w_a ( t ) \sim N( 0 , W_a )$$
243 |
244 | Especificação de $W_a$: pode-se pensar em percentual de incremento, como antes.
245 |
246 | $$0,95 = P \left( 0,95 < \frac {a(t)}{a(t-1) } < 1,05 \right) = P ( - 0,05 < w_a(t) < 0,05 )$$
247 | Isso implica $2 \sqrt{W_a} = 0,05$, que implica $\sqrt{W_a} = 0,025 \ \Rightarrow W_a = (0,025)^2$.
248 |
249 | Mesma especificação vale para $W_b$ e $W_c$, pois dimensões de $b$ e $c$ não importam.
250 |
251 |
252 | - **Caso particular**
253 |
254 | Baseado em Gamerman, Santos e Franco (J. Time Series Analysis, 2013):
255 |
256 | $$\mu ( t ) = \frac{ a( {\color{red}t)} \ \exp{ \{ c \ t \} } } {1 + b \ \exp { \{ c \ t \} }}$$
257 | $a ( t ) = a ( t-1) \times w_a ( t )$, onde $w_a ( t ) \sim Beta, \forall t$.
258 |
259 | Pode ser usado também para crescimento exponencial ($b=0$).
260 |
261 | **Vantagem:**
262 |
263 | a) Permite contas exatas, dispensando aproximações MCMC.
264 |
265 | **Desvantagem:**
266 |
267 | a) Não permite $b$ e $c$ dinâmicos.
268 |
269 |
270 | #### Generalizações da logística
271 |
272 | Até agora, usamos a logística para especificar a média $\mu ( t )$ como $\mu ( t ) = \frac{ a \exp{ \{ c t \} } } {1 + b \exp { \{ ct \} }} = \frac{ a} { b + \exp { \{ - ct \} }}$.
273 |
274 | Essa expressão é a forma mais simples da logística. Ela pode ser generalizada de várias formas.
275 |
276 | - Uma possível forma da **logística generalizada** é
277 | $$\mu ( t ) = d + \frac{ a - d} {( b + \exp { \{ - ct \} } )^f}$$
278 |
279 | A logística é obtida fazendo $d=0$ e $f=1$.
280 |
281 |
282 |
283 |
--------------------------------------------------------------------------------
/app_v1/www/markdown/CoronaUFMG_MD.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Modelando a pandemia de Covid19"
3 | author: "Dani Gamerman"
4 | # output: slidy_presentation
5 | output:
6 | html_document:
7 | toc: false
8 | # toc_depth: 5
9 | # toc_float: true
10 | ---
11 |
12 |
13 | ### Modelando a pandemia de Covid19
14 |
15 | Dani Gamerman - Programa de PG em Estatística - UFMG
16 |
17 | *1º semestre de 2020*
18 |
19 | (motivado a partir de notas de José Marcos Andrade Figueiredo - UFMG)
20 |
21 |
22 | #### Modelo de crescimento logı́stico básico
23 |
24 | $$
25 | Y(t) \sim N ( \mu (t) , \sigma^2 ), \qquad t = 1, 2, ...
26 | $$
27 |
28 | onde $Y(t)$ é o **número de casos acumulados** até o dia $t$ em uma dada região, com $\mu ( t ) = \frac{ a .\, \exp{ \{ c t \} } } {1 + b .\, \exp { \{ c t \} }}.$
29 |
30 |
31 | **Caso especial:**
32 |
33 | - $b = 0$ (crescimento exponencial) $\rightarrow \mu(t) = a .\,\exp \{ ct \}$;
34 |
35 | - adequado para os estágios iniciais da pandemia.
36 |
37 |
38 |
39 | ##### **Problemas do modelo básico:**
40 |
41 | a) os dados são **contagens**, e a distribuição normal pressupõe dados contínuos;
42 |
43 | b) a variância deveria aumentar com a magnitude dos dados.
44 |
45 |
46 |
47 | #### Características de interesse
48 |
49 | As características mais importantes são:
50 |
51 |
52 | **1) Taxa de infecção**
53 |
54 | - $c$ mede a velocidade da aceleração e reflete a taxa de infecção da doença.
55 |
56 |
57 | **2) Assíntota**
58 |
59 | $\lim_{t \to \infty} \mu( t) = \lim_{t \to \infty} \frac{ a .\, \exp{ \{ c t \} } } {1 + b .\, \exp { \{ ct \} }} = \frac ab$
60 |
61 | - Reflete o total de casos acumulados ao longo de toda a pandemia.
62 |
63 | - Crescimento exponencial ($b=0$): assíntota $= \infty$ !
64 |
65 |
66 | **3) Ponto de inflexão**
67 |
68 | - É o tempo $t^*$ onde o número de novos casos para de crescer e começa a diminuir.
69 |
70 | - Crescimento exponencial ($b=0$): número de novos casos nunca para de crescer!
71 |
72 |
73 | **4) Previsão:**
74 |
75 | - O que podemos dizer sobre $Y (t+k ), \forall k$, para $t$ fixo (hoje)?
76 |
77 | Depende da distribuição de $Y( t)$ mas será sempre dada pela distribuição preditiva de $Y(t+k)$ dado $Y(1:t) = \{ Y( 1) , ... , Y( t ) \}$ - o que foi observado.
78 |
79 | Funciona como se fosse a distribuição a posteriori de $Y(t + k )$.
80 |
81 |
82 | **Resultado útil:** Se $Z$ e $W$ são 2 v. a.'s quaisquer então:
83 |
84 | * $E[Z] = E[ E(Z \mid W ) ]$
85 |
86 | * $Var[Z] = Var[ E(Z \mid W ) ] + E[ Var( Z \mid W ) ]$
87 |
88 | Em particular, $E[Y ( t + k ) \mid Y( 1:t)] = E\{ E[ Y ( t + k ) \mid \mu( 1:t)] \mid Y( 1:t) \} = E[ \mu( t+k )] \mid Y( 1:t) ]$, a média a posteriori de $\mu ( t + k )$.
89 |
90 | A inferência para tudo que foi dito acima deve ser reportada através de estimadores pontuais (ex: médias a posteriori), junto com os respectivos intervalos de credibilidade.
91 |
92 |
93 | **5) Número de reprodutibilidade $R_0$**
94 |
95 | $R_0$ é o número esperado de casos secundários de uma doença produzidos por um indivíduo infectado.
96 |
97 | No tempo $t$, ele é definido como $R_0 = \frac {\mu ( t ) - \mu ( t-1)}{\mu ( t-1)} = \frac {\mu ( t )}{\mu ( t-1)} - 1$.
98 |
99 | - **Início da epidemia:** $1 \gg b \exp { \{ ct \} } \to \mu (t) \approx a \exp { \{ ct \} } \to R_0 \approx e^c - 1$
100 |
101 | - **Final da epidemia:** $1 \ll b \exp { \{ ct \} } \to \mu (t) \approx a / b \to R_0 \approx 0$
102 |
103 | - **Meio da epidemia:** $R_0$ é função dos parâmetros $(a,b,c)$ e do tempo $t$, e é dado por $R_0 (t) = e^c \ \frac{1 + b e^c e^{ct} } {1 + b e^{ct} } \ - \ 1$.
104 |
105 | Para cada $t$ fixado, podemos obter sua distribuição a posteriori (via amostra MCMC) e calcular média, quantis e intervalos de credibilidade.
106 |
107 |
108 | **6) Número médio de novos casos (NMNC)**
109 |
110 | Número médio de novos casos no tempo $t+k$:
111 |
112 | $n_t ( k ) = E [Y ( t + k ) - Y ( t + k -1 ) ] = \mu ( t + k ) - \mu ( t + k -1 )$
113 |
114 | Logo, NMNC também é função dos parâmetros $(a,b,c)$ e pode ser facilmente calculado.
115 |
116 | Para cada $t$ e $k$ fixados, podemos obter sua distribuição a posteriori (via amostra MCMC) e calcular média, quantis e intervalos de credibilidade.
117 |
118 |
119 |
120 |
121 |
122 | Estimativas e Intervalo de Credibilidade para o pico da pandemia e para o total de casos
123 |
124 |
125 | #### Alternativas:
126 |
127 | 1.1) $Y( t ) \sim Poisson ( \mu ( t ) )$ com $E[ Y(t)] = \mu (t )$ e $Var(Y(t)) = \mu ( t )$
128 |
129 | 1.2) $Y ( t ) \sim N ( \mu ( t ) , \sigma^2 \ \mu ( t ) )$ com $E[ Y(t)] = \mu (t )$ e $Var(Y(t)) = \sigma^2 \ \mu ( t )$
130 |
131 | Observações:
132 |
133 | - Modelo (1.2) admite sobredispersão se $\sigma^2 > 1$
134 |
135 | - Alternativa (1.2) só cuida do comentário (b)
136 |
137 | - Alternativa (1.1) cuida dos 2 comentários mas não permite sobredispersão
138 |
139 |
140 | ##### **Poisson com sobredispersão**
141 |
142 | 1.3) $Y( t ) \mid \epsilon ( t ) \sim Poisson ( \mu ( t ) + \epsilon ( t ) )$ com $E[ \epsilon (t)] = 0$ e $Var( \epsilon (t)) = \sigma^2$
143 |
144 | 1.4) $Y( t ) \mid \epsilon ( t ) \sim Poisson ( \mu ( t ) \times \epsilon ( t ) )$ com $E[ \epsilon (t)] = 1$ e $Var( \epsilon (t)) = \sigma^2$
145 |
146 |
147 | Usando os resultados úteis anteriores:
148 |
149 | Mod(1.3):
150 |
151 | - $E [ Y ( t ) ] = E[ E( Y(t) \mid \epsilon (t ) ) ] = E[ \mu ( t ) + \epsilon ( t ) ] = \mu ( t ) + E[ \epsilon ( t ) ] = \mu ( t )$
152 |
153 | - $Var[ Y ( t ) ] = Var[ E( Y(t) \mid \epsilon (t) ) ] + E[ Var ( Y(t) \mid \epsilon (t) ) ] = Var[ \mu ( t ) + \epsilon ( t ) ] + E [ \mu ( t ) + \epsilon ( t ) ] = \sigma^2 + \mu_t > \mu ( t )$
154 |
155 |
156 | Mod(1.4):
157 |
158 | - $E [ Y ( t ) ] = E[ E( Y(t) \mid \epsilon (t) ) ] = E[ \mu ( t ) \times \epsilon ( t ) ] = \mu ( t ) \times E[ \epsilon ( t ) ] = \mu ( t )$
159 |
160 | - $Var[ Y ( t ) ] = Var[ E( Y(t) \mid \epsilon (t) ) ] + E[ Var ( Y(t) \mid \epsilon (t) ) ] = Var[ \mu ( t ) \times \epsilon ( t ) ] + E [ \mu ( t ) \times \epsilon ( t ) ] = \mu_t ^2 \sigma^2 + \mu_t > \mu ( t )$
161 |
162 |
163 | Ambos preservam média da Poisson e aumentam dispersão da Poisson.
164 |
165 |
166 | #### Extensões dinâmicas
167 |
168 | Modelos anteriores assumem comportamento estático:
169 |
170 | - a forma da doença não se modifica ao longo do tempo;
171 |
172 | - taxa de infecção será sempre a mesma, assíntota será sempre a mesma, ...
173 |
174 | **Modelos dinâmicos** flexibilizam isso.
175 |
176 |
177 | ##### 1. **Modelos dinâmicos**
178 |
179 | $\mu ( t ) = \frac{ a( {\color{red} t )} \ \exp{ \{ c( {\color{red}t) } \ t \} } } {1 + b( {\color{red}t) } \ \exp { \{ c({\color{red}t) } \ t \} }}$
180 |
181 | com:
182 | $a ( t ) = a ( t-1) + w_a ( t )$, onde $w_a ( t ) \sim N ( 0 , W_a ), \forall t$.
183 |
184 | $b ( t ) = b ( t-1) + w_b ( t )$, onde $w_b ( t ) \sim N ( 0 , W_b ), \forall t$.
185 |
186 | $c ( t ) = c ( t-1) + w_c ( t )$, onde $w_c ( t ) \sim N ( 0 , W_c ), \forall t$.
187 |
188 |
189 | **Vantagens:**
190 |
191 | a) $E[ a(t) \mid a(t-1 )]= a (t-1)$, e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ constância local.
192 |
193 | b) $Var[ a(t) \mid a(t-1 )]= W_a$, e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ aumento da incerteza.
194 |
195 | **Problemas:**
196 |
197 | a) variâncias $W_a, W_b, W_c$ conhecidas $\Rightarrow$ difíceis de especificar.
198 |
199 | b) variâncias $W_a, W_b, W_c$ desconhecidas $\Rightarrow$ difíceis de estimar.
200 |
201 | c) não dá para simplificar $W_a = W_b = W_c = W$ (magnitudes diferentes de $(a,b,c)$).
202 |
203 |
204 | ##### 2. **Efeito multiplicativo:**
205 |
206 | Outra forma de introduzir dinamismo, agora **multiplicativo**:
207 |
208 | $a ( t ) = a ( t-1) \times w_a ( t )$, onde $w_a ( t ) \sim Gamma ( d_a ,d_a ), \forall t$.
209 |
210 | $b ( t ) = b ( t-1) \times w_b ( t )$, onde $w_b ( t ) \sim Gamma ( d_b ,d_b ), \forall t$.
211 |
212 | $c ( t ) = c ( t-1) \times w_c ( t )$, onde $w_c ( t ) \sim Gamma ( d_c ,d_c ), \forall t$.
213 |
214 |
215 | **Vantagens:**
216 |
217 | a) $E[ a(t) \mid a(t-1 )]= a (t-1)$ e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ constância local.
218 |
219 | b) $Var[ a(t) \mid a(t-1 )]= a(t-1)^2 \, d_a^{-1}$ e o mesmo vale para $b(t)$ e $c(t) \Rightarrow$ aumento da incerteza.
220 |
221 | c) Hiperparâmetros $d_a, d_b, d_c$ fáceis de especificar.
222 |
223 | Exemplos:
224 | $d=1000 \ \to \ 0,90= P ( 0,95 < w(t) < 1,05 ) = P \left( 0,95 < \frac {a(t)}{a(t-1) } < 1,05 \right)$
225 | $d=1500 \ \to \ 0,95= P ( 0,95 < w(t) < 1,05 ) = P \left( 0,95 < \frac {a(t)}{a(t-1) } < 1,05 \right)$
226 |
227 | **Problemas:**
228 |
229 | a) As magnitudes de $a, b, c$ ainda interferem no aumento da incerteza.
230 |
231 | b) não sei se software lida bem com Gammas tendo parâmetros tão altos.
232 |
233 |
234 | ##### 3. **Evolução multiplicativa com erros normais**
235 |
236 | Considere a evolução multiplicativa abaixo para o parâmetro $a$:
237 |
238 | $$a ( t ) = a ( t-1) \times \exp \{ w_a ( t ) \}, \mbox{ onde } w_a ( t ) \sim N( 0 , W_a )$$
239 | Tomando o logaritmo nos dois lados, obtém-se:
240 | $$\log \ a ( t ) = \log \ a ( t-1) + w_a ( t ), \mbox{ onde } w_a ( t ) \sim N( 0 , W_a )$$
241 | Passando $\log \ a ( t-1)$ para a esquerda, vem que:
242 | $$\log \ a ( t ) - \log \ a ( t-1) = \log \left[ \frac{ a ( t )}{ a ( t-1)} \right] = w_a ( t ), \mbox{ onde } w_a ( t ) \sim N( 0 , W_a )$$
243 |
244 | Especificação de $W_a$: pode-se pensar em percentual de incremento, como antes.
245 |
246 | $$0,95 = P \left( 0,95 < \frac {a(t)}{a(t-1) } < 1,05 \right) = P ( - 0,05 < w_a(t) < 0,05 )$$
247 | Isso implica $2 \sqrt{W_a} = 0,05$, que implica $\sqrt{W_a} = 0,025 \ \Rightarrow W_a = (0,025)^2$.
248 |
249 | Mesma especificação vale para $W_b$ e $W_c$, pois dimensões de $b$ e $c$ não importam.
250 |
251 |
252 | - **Caso particular**
253 |
254 | Baseado em Gamerman, Santos e Franco (J. Time Series Analysis, 2013):
255 |
256 | $$\mu ( t ) = \frac{ a( {\color{red}t)} \ \exp{ \{ c \ t \} } } {1 + b \ \exp { \{ c \ t \} }}$$
257 | $a ( t ) = a ( t-1) \times w_a ( t )$, onde $w_a ( t ) \sim Beta, \forall t$.
258 |
259 | Pode ser usado também para crescimento exponencial ($b=0$).
260 |
261 | **Vantagem:**
262 |
263 | a) Permite contas exatas, dispensando aproximações MCMC.
264 |
265 | **Desvantagem:**
266 |
267 | a) Não permite $b$ e $c$ dinâmicos.
268 |
269 |
270 | #### Generalizações da logística
271 |
272 | Até agora, usamos a logística para especificar a média $\mu ( t )$ como $\mu ( t ) = \frac{ a \exp{ \{ c t \} } } {1 + b \exp { \{ ct \} }} = \frac{ a} { b + \exp { \{ - ct \} }}$.
273 |
274 | Essa expressão é a forma mais simples da logística. Ela pode ser generalizada de várias formas.
275 |
276 | - Uma possível forma da **logística generalizada** é
277 | $$\mu ( t ) = d + \frac{ a - d} {( b + \exp { \{ - ct \} } )^f}$$
278 |
279 | A logística é obtida fazendo $d=0$ e $f=1$.
280 |
281 |
282 |
283 |
--------------------------------------------------------------------------------
/app_v1/CoronaUFMG_MDen.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Modeling the Covid-19 pandemic"
3 | author: "Dani Gamerman"
4 | # output: slidy_presentation
5 | output:
6 | html_document:
7 | toc: true
8 | # toc_depth: 5
9 | # toc_float: true
10 | ---
11 |
12 |
13 | ### Modelling the Covid19 pandemic
14 |
15 | Dani Gamerman - Graduate Program in Statistics - UFMG
16 |
17 | *1st semester 2020*
18 |
19 | (inspired by notes from José Marcos Andrade Figueiredo - UFMG)
20 |
21 |
22 |
23 | #### Basic logistic growth model
24 |
25 | $$
26 | Y(t) \sim N ( \mu (t) , \sigma^2 ), \qquad t = 1, 2, ...
27 | $$
28 |
29 | where $Y(t)$ is the **cumulated number of confirmed cases** by day $t$ in a given region, with $\mu ( t ) = \frac{ a \exp{ \{ c t \} } } {1 + b \exp { \{ c t \} }}.$
30 |
31 |
32 | **Special case:**
33 |
34 | - $b = 0$ (exponential growth) $\rightarrow \mu(t) = a \exp \{ ct \}$;
35 |
36 | - adequate for early stages of the pandemic.
37 |
38 |
39 |
40 | ##### **Problems of the basic model:**
41 |
42 | a) data are **counts**, and the normal distribution assumes continuous data;
43 |
44 | b) variance should increase with data magnitude.
45 |
46 |
47 |
48 | #### Characteristics of interest
49 |
50 | The most important characteristics are:
51 |
52 |
53 |
54 | **1) Infection rate**
55 |
56 | - $c$ measures the acceleration of growth and reflects the infection rate of the disease.
57 |
58 |
59 |
60 | **2) Asymptote**
61 |
62 | $\lim_{t \to \infty} \mu( t) = \lim_{t \to \infty} \frac{ a \exp{ \{ c t \} } } {1 + b \exp { \{ ct \} }} = \frac ab$
63 |
64 | - Reflects the total number of cases accumulated throughout the whole trajectory of the pandemic.
65 |
66 | - Exponential growth ($b=0$): asymptote $= \infty$!
67 |
68 |
69 |
70 | **3) Peak of the pandemic**
71 |
72 | - Defined as the time $t^*$ where number of new cases stops growing and starts to decrease.
73 |
74 | - Exponential growth ($b=0$): number of new cases never stops growing!
75 |
76 |
77 |
78 | **4) Prediction**
79 |
80 | - What can be said about $Y (t+k ), \forall k$, for $t$ fixed (today)?
81 |
82 | It depends on the distribution of $Y(t)$ but will always be given by the predictive distribution of $Y(t+k)$ given $Y(1:t) = \{ Y(1) , ... , Y(t) \}$ - what was observed.
83 |
84 | It works as the posterior distribution of $Y(t + k )$.
85 |
86 |
87 |
88 | **Useful result:** If $Z$ and $W$ are any 2 r. v.'s then:
89 |
90 | - $E[Z] = E[ E(Z \mid W ) ]$
91 |
92 | - $Var[Z] = Var[ E(Z \mid W ) ] + E[ Var( Z \mid W ) ]$
93 |
94 | In particular, $E[Y ( t + k ) \mid Y( 1:t)] = E\{ E[ Y ( t + k ) \mid \mu( 1:t)] \mid Y( 1:t) \} = E[ \mu( t+k )] \mid Y( 1:t) ]$, the posterior mean of $\mu ( t + k )$.
95 |
96 | Inference about all that was described above should be reported through point estimators (eg: posterior means), along with respective credibility intervals.
97 |
98 |
99 |
100 | **5) Reproducibility rate $R_0$**
101 |
102 | $R_0$ is the expected number of secondary cases of a disease caused by an infected individual.
103 |
104 | At time $t$, it is defined as $R_0 = \frac {\mu ( t ) - \mu ( t-1)}{\mu ( t-1)} = \frac {\mu ( t )}{\mu ( t-1)} - 1$.
105 |
106 | - **Beginning of the pandemic:** $1 \gg b \exp { \{ ct \} } \to \mu (t) \approx a \exp { \{ ct \} } \to R_0 \approx e^c - 1$
107 |
108 | - **End of the pandemic:** $1 \ll b \exp { \{ ct \} } \to \mu (t) \approx a / b \to R_0 \approx 0$
109 |
110 | - **Middle of the pandemic:** $R_0$ is a function of parameters $(a,b,c)$ and time $t$, and is given by $R_0 (t) = e^c \ \frac{1 + b e^c e^{ct} } {1 + b e^{ct} } \ - \ 1$.
111 |
112 | For any fixed $t$, one can obtain its posterior distribution (via MCMC sample) and calculate mean, quantiles and credibility intervals.
113 |
114 |
115 |
116 | **6) Mean number of new cases (MNNC)**
117 |
118 | Mean number of new cases at time $t+k$:
119 |
120 | $n_t ( k ) = E [Y ( t + k ) - Y ( t + k -1 ) ] = \mu ( t + k ) - \mu ( t + k -1 )$
121 |
122 | Thus, MNNC is also a function of parameters $(a,b,c)$ and can be easily calculated.
123 |
124 | For any fixed $t$ and $k$, one can obtain its posterior distribution (via MCMC sample) and calculate mean, quantiles and credibility intervals.
125 |
126 | Estimates and credibility intervals for the peak of the pandemic and for the total number of cases
127 |
128 |
129 |
130 | #### Alternatives:
131 |
132 | 1.1) $Y( t ) \sim Poisson ( \mu ( t ) )$ with $E[ Y(t)] = \mu (t )$ and $Var(Y(t)) = \mu ( t )$
133 |
134 | 1.2) $Y ( t ) \sim N ( \mu ( t ) , \sigma^2 \ \mu ( t ) )$ with $E[ Y(t)] = \mu (t )$ and $Var(Y(t)) = \sigma^2 \ \mu ( t )$
135 |
136 | Remarks:
137 |
138 | - Model (1.2) admits overdispersion if $\sigma^2 > 1$
139 |
140 | - Alternative (1.2) only handles comment (b)
141 |
142 | - Alternative (1.1) handles the two comments but does not allow overdispersion
143 |
144 |
145 |
146 | ##### **Poisson with overdispersion**
147 |
148 | 1.3) $Y( t ) \mid \epsilon ( t ) \sim Poisson ( \mu ( t ) + \epsilon ( t ) )$ with $E[ \epsilon (t)] = 0$ and $Var( \epsilon (t)) = \sigma^2$
149 |
150 | 1.4) $Y( t ) \mid \epsilon ( t ) \sim Poisson ( \mu ( t ) \times \epsilon ( t ) )$ with $E[ \epsilon (t)] = 1$ and $Var( \epsilon (t)) = \sigma^2$
151 |
152 |
153 | Considering the useful results presented above:
154 |
155 | Mod(1.3):
156 |
157 | - $E [ Y ( t ) ] = E[ E( Y(t) \mid \epsilon (t ) ) ] = E[ \mu ( t ) + \epsilon ( t ) ] = \mu ( t ) + E[ \epsilon ( t ) ] = \mu ( t )$
158 |
159 | - $Var[ Y ( t ) ] = Var[ E( Y(t) \mid \epsilon (t) ) ] + E[ Var ( Y(t) \mid \epsilon (t) ) ] = Var[ \mu ( t ) + \epsilon ( t ) ] + E [ \mu ( t ) + \epsilon ( t ) ] = \sigma^2 + \mu_t > \mu ( t )$
160 |
161 |
162 | Mod(1.4):
163 |
164 | - $E [ Y ( t ) ] = E[ E( Y(t) \mid \epsilon (t) ) ] = E[ \mu ( t ) \times \epsilon ( t ) ] = \mu ( t ) \times E[ \epsilon ( t ) ] = \mu ( t )$
165 |
166 | - $Var[ Y ( t ) ] = Var[ E( Y(t) \mid \epsilon (t) ) ] + E[ Var ( Y(t) \mid \epsilon (t) ) ] = Var[ \mu ( t ) \times \epsilon ( t ) ] + E [ \mu ( t ) \times \epsilon ( t ) ] = \mu_t ^2 \sigma^2 + \mu_t > \mu ( t )$
167 |
168 |
169 | Both preserve Poisson mean but increase Poisson dispersion.
170 |
171 |
172 |
173 | #### Dynamic extensions
174 |
175 | Previous models assume static behaviour:
176 |
177 | - shape of the disease does not modify along time;
178 |
179 | - infection rate will always be the same, asymptote will always be the same, ...
180 |
181 | **Dynamic models** relax these assumptions.
182 |
183 |
184 |
185 | ##### 1. **Dynamic models**
186 |
187 | $\mu ( t ) = \frac{ a( {\color{red} t )} \ \exp{ \{ c( {\color{red}t) } \ t \} } } {1 + b( {\color{red}t) } \ \exp { \{ c({\color{red}t) } \ t \} }}$
188 |
189 | with:
190 | $a ( t ) = a ( t-1) + w_a ( t )$, where $w_a ( t ) \sim N ( 0 , W_a ), \forall t$.
191 |
192 | $b ( t ) = b ( t-1) + w_b ( t )$, where $w_b ( t ) \sim N ( 0 , W_b ), \forall t$.
193 |
194 | $c ( t ) = c ( t-1) + w_c ( t )$, where $w_c ( t ) \sim N ( 0 , W_c ), \forall t$.
195 |
196 |
197 | **Advantages:**
198 |
199 | a) $E[ a(t) \mid a(t-1 )]= a (t-1)$, and the same goes for $b(t)$ and $c(t) \Rightarrow$ local constancy.
200 |
201 | b) $Var[ a(t) \mid a(t-1 )]= W_a$, and the same goes for $b(t)$ and $c(t) \Rightarrow$ increase in uncertainty.
202 |
203 | **Problems:**
204 |
205 | a) variances $W_a, W_b, W_c$ known $\Rightarrow$ difficult to specify;
206 |
207 | b) variances $W_a, W_b, W_c$ unknown $\Rightarrow$ difficult to estimate.
208 |
209 | c) it is not possible to simplify $W_a = W_b = W_c = W$ (different magnitudes of $(a,b,c)$).
210 |
211 |
212 |
213 | ##### 2. **Multiplicative effect:**
214 |
215 | Another form to introduce dynamics, now **multiplicative**:
216 |
217 | $a ( t ) = a ( t-1) \times w_a ( t )$, where $w_a ( t ) \sim Gamma ( d_a ,d_a ), \forall t$.
218 |
219 | $b ( t ) = b ( t-1) \times w_b ( t )$, where $w_b ( t ) \sim Gamma ( d_b ,d_b ), \forall t$.
220 |
221 | $c ( t ) = c ( t-1) \times w_c ( t )$, where $w_c ( t ) \sim Gamma ( d_c ,d_c ), \forall t$.
222 |
223 | **Advantages:**
224 |
225 | a) $E[ a(t) \mid a(t-1 )]= a (t-1)$ and the same goes for $b(t)$ and $c(t) \Rightarrow$ local constancy.
226 |
227 | b) $Var[ a(t) \mid a(t-1 )]= a(t-1)^2 \, d_a^{-1}$ and the same goes for $b(t)$ and $c(t) \Rightarrow$ increase in uncertainty.
228 |
229 | c) Hyperparameters $d_a, d_b, d_c$ easier to specify.
230 |
231 | Examples:
232 | $d=1000 \ \to \ 0.90 = P ( 0.95 < w(t) < 1.05 ) = P \left( 0.95 < \frac {a(t)}{a(t-1) } < 1.05 \right)$
233 | $d=1500 \ \to \ 0.95 = P ( 0.95 < w(t) < 1.05 ) = P \left( 0.95 < \frac {a(t)}{a(t-1) } < 1.05 \right)$
234 |
235 | **Disadvantages:**
236 |
237 | a) Magnitudes of $a, b, c$ still interfere in the increase in uncertainty.
238 |
239 | b) Not sure if free software works fine with Gammas with such high parameter values.
240 |
241 |
242 |
243 | ##### 3. **Multiplicative evolution with normal errors**
244 |
245 | Consider the multiplicative evolution below for parameter $a$:
246 |
247 | $$a ( t ) = a ( t-1) \times \exp \{ w_a ( t ) \}, \mbox{ where } w_a ( t ) \sim N( 0 , W_a )$$
248 | Taking the logarithm on both sides, one obtains:
249 | $$\log \ a ( t ) = \log \ a ( t-1) + w_a ( t ), \mbox{ where } w_a ( t ) \sim N( 0 , W_a )$$
250 | Passing $\log \ a ( t-1)$ to the left, one obtains:
251 | $$\log \ a ( t ) - \log \ a ( t-1) = \log \left[ \frac{ a ( t )}{ a ( t-1)} \right] = w_a ( t ), \mbox{ where } w_a ( t ) \sim N( 0 , W_a )$$
252 |
253 | Specification of $W_a$: one can think of percentual increase, as before.
254 |
255 | $$0.95 = P \left( 0.95 < \frac {a(t)}{a(t-1) } < 1.05 \right) = P ( - 0.05 < w_a(t) < 0.05 )$$
256 | This implies $2 \sqrt{W_a} = 0.05$ (since about 95% of the normal mass lies within 2 standard deviations of the mean), which implies $\sqrt{W_a} = 0.025 \ \Rightarrow W_a = (0.025)^2$.
257 |
258 | The same specification is valid for $W_b$ and $W_c$, since magnitudes of $b$ and $c$ do not matter.
259 |
260 |
261 |
262 | - **Special case**
263 |
264 | Based on Gamerman, Santos and Franco (J. Time Series Analysis, 2013):
265 |
266 | $$\mu ( t ) = \frac{ a( {\color{red}t)} \ \exp{ \{ c \ t \} } } {1 + b \ \exp { \{ c \ t \} }}$$
267 | $a ( t ) = a ( t-1) \times w_a ( t )$, where $w_a ( t ) \sim Beta, \forall t$.
268 |
269 | It may also be used for exponential growth ($b=0$).
270 |
271 | **Advantage:**
272 |
273 | a) Allows exact calculation, thus avoiding (MCMC) approximations.
274 |
275 | **Disadvantage:**
276 |
277 | a) Does not allow dynamic $b$ and $c$.
278 |
279 |
280 |
281 | #### Generalizations of the logistic curve
282 |
283 | So far, logistic curve was used to specify the mean $\mu ( t )$ as $\mu ( t ) = \frac{ a \exp{ \{ c t \} } } {1 + b \exp { \{ ct \} }} = \frac{ a} { b + \exp { \{ - ct \} }}$.
284 |
285 | This expression is the simplest logistic form. It can be generalized in many ways. One possible form of the **generalized logistic** is
286 | $$\mu ( t ) = d + \frac{ a - d} {( b + \exp { \{ - ct \} } )^f}$$
287 |
288 | The logistic curve is obtained by taking $d=0$ and $f=1$.
289 |
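For concreteness, a minimal JAGS sketch of the static Poisson model with a generalized-logistic mean for the cumulative counts $Y(t)$ is shown below. Priors are arbitrary and chosen only for illustration; this is **not** the model string actually used by the app (the app's `mod_string_new` is defined in `jags_poisson.R`, not shown here, and is fitted to daily new counts in `predict_confirmed_par.R`).

```r
library(rjags)

## sketch only: static parameters, arbitrary priors
mod_string_sketch <- "
model{
  for (i in 1:t){
    y[i] ~ dpois(mu[i])                    # observed cumulative counts
  }
  for (i in 1:(t + L)){
    mu[i] <- a / pow(b + exp(-c * i), f)   # generalized logistic mean (d = 0)
  }
  for (k in 1:L){
    yfut[k] ~ dpois(mu[t + k])             # draws for the next L days
  }
  a ~ dgamma(0.1, 0.1)
  b ~ dgamma(0.1, 0.1)
  c ~ dgamma(0.1, 0.1)
  f ~ dgamma(1, 1)
}"

## usage sketch: data_jags <- list(y = y_obs, t = length(y_obs), L = 14)
## mod <- jags.model(textConnection(mod_string_sketch), data = data_jags)
```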
290 |
--------------------------------------------------------------------------------