├── .Rbuildignore ├── .Rprofile ├── .github └── FUNDING.yml ├── .gitignore ├── .travis.yml ├── DESCRIPTION ├── LICENSE.md ├── NAMESPACE ├── NEWS.md ├── R ├── createSparseMatrix.R ├── fitVAR.R ├── fitVARX.R ├── fitVECM.R ├── impulseResponse.R ├── mcSimulations.R ├── plotIRF.R ├── plotMatrix.R ├── scadReg.R ├── simulateVAR.R ├── simulateVARX.R ├── sparsevar.R ├── timeSlice.R ├── twoStepOLS.R ├── utils.R └── utilsVAR.R ├── README.md ├── cran-comments.md ├── man ├── accuracy.Rd ├── bootstrappedVAR.Rd ├── checkImpulseZero.Rd ├── checkIsVar.Rd ├── companionVAR.Rd ├── computeForecasts.Rd ├── createSparseMatrix.Rd ├── decomposePi.Rd ├── errorBandsIRF.Rd ├── fitVAR.Rd ├── fitVARX.Rd ├── fitVECM.Rd ├── frobNorm.Rd ├── impulseResponse.Rd ├── informCrit.Rd ├── l1norm.Rd ├── l2norm.Rd ├── lInftyNorm.Rd ├── maxNorm.Rd ├── mcSimulations.Rd ├── multiplot.Rd ├── plotIRF.Rd ├── plotIRFGrid.Rd ├── plotMatrix.Rd ├── plotVAR.Rd ├── plotVECM.Rd ├── simulateVAR.Rd ├── simulateVARX.Rd ├── sparsevar.Rd ├── spectralNorm.Rd ├── spectralRadius.Rd ├── testGranger.Rd ├── transformData.Rd ├── varENET.Rd ├── varMCP.Rd └── varSCAD.Rd ├── renv.lock ├── renv ├── .gitignore └── activate.R ├── sparsevar.Rproj ├── tests ├── testSparse.R ├── testSparse2.R ├── testthat.R └── testthat │ └── testIsWorking.R └── vignettes ├── using.Rmd └── using_cache └── latex ├── __packages ├── unnamed-chunk-7_ed36c6df10e0fd7f41f62fe376f5eeb8.RData ├── unnamed-chunk-7_ed36c6df10e0fd7f41f62fe376f5eeb8.rdb └── unnamed-chunk-7_ed36c6df10e0fd7f41f62fe376f5eeb8.rdx /.Rbuildignore: -------------------------------------------------------------------------------- 1 | ^renv$ 2 | ^renv\.lock$ 3 | ^.*\.Rproj$ 4 | ^\.Rproj\.user$ 5 | cran-comments.md 6 | R/old/ 7 | R/todo/ 8 | R/scadReg.R 9 | tests/testIRF* 10 | tests/testSparse* 11 | tests/testPicasso.R 12 | tests/testInformCrit.R 13 | tests/testEigen.cpp 14 | .travis.yml 15 | LICENSE.md 16 | .github/* 17 | ^doc$ 18 | ^Meta$ 19 | -------------------------------------------------------------------------------- /.Rprofile: -------------------------------------------------------------------------------- 1 | source("renv/activate.R") 2 | -------------------------------------------------------------------------------- /.github/FUNDING.yml: -------------------------------------------------------------------------------- 1 | # These are supported funding model platforms 2 | 3 | github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2] 4 | patreon: # Replace with a single Patreon username 5 | open_collective: # Replace with a single Open Collective username 6 | ko_fi: svazzole # Replace with a single Ko-fi username 7 | tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel 8 | community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry 9 | liberapay: # Replace with a single Liberapay username 10 | issuehunt: # Replace with a single IssueHunt username 11 | otechie: # Replace with a single Otechie username 12 | custom: paypal.me/svazzole # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2'] 13 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .Rhistory 2 | docs/ 3 | *.m 4 | *~ 5 | results/ 6 | todo.org 7 | .RData 8 | fig/ 9 | data/ 10 | .Rproj.user 11 | R/old/ 12 | R/todo/ 13 | vignettes/using_cache/ 14 | vignettes/using_cache/latex/ 15 | *.o 16 | *.so 17 | src/*.o 18 | src/*.so 19 
| tests/testInformCrit.R 20 | tests/testIRF* 21 | tests/testPicasso.R 22 | 23 | /doc/ 24 | /Meta/ 25 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: r 2 | 3 | warnings_are_errors: false 4 | 5 | r_packages: 6 | - covr 7 | 8 | after_success: 9 | - Rscript -e 'library(covr); codecov()' 10 | 11 | -------------------------------------------------------------------------------- /DESCRIPTION: -------------------------------------------------------------------------------- 1 | Package: sparsevar 2 | Version: 0.1.0 3 | Date: 2021-04-16 4 | Title: Sparse VAR/VECM Models Estimation 5 | Authors@R: c(person("Simone", "Vazzoler", role = c("aut", "cre"), 6 | email = "svazzole@gmail.com")) 7 | Maintainer: Simone Vazzoler 8 | Imports: 9 | Matrix, 10 | ncvreg, 11 | parallel, 12 | doParallel, 13 | glmnet, 14 | ggplot2, 15 | reshape2, 16 | grid, 17 | mvtnorm, 18 | picasso, 19 | corpcor, 20 | Suggests: 21 | knitr, 22 | rmarkdown, 23 | testthat, 24 | Depends: 25 | R (>= 3.5.0) 26 | Description: A wrapper for sparse VAR/VECM time series models estimation 27 | using penalties like ENET (Elastic Net), SCAD (Smoothly Clipped 28 | Absolute Deviation) and MCP (Minimax Concave Penalty). 29 | Based on the work of Sumanta Basu and George Michailidis 30 | . 31 | License: GPL-2 32 | URL: http://github.com/svazzole/sparsevar 33 | BugReports: http://github.com/svazzole/sparsevar 34 | VignetteBuilder: 35 | knitr 36 | RoxygenNote: 7.1.1 37 | Encoding: UTF-8 38 | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 2, June 1991 3 | 4 | Copyright (C) 1989, 1991 Free Software Foundation, Inc., 5 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA 6 | Everyone is permitted to copy and distribute verbatim copies 7 | of this license document, but changing it is not allowed. 8 | 9 | Preamble 10 | 11 | The licenses for most software are designed to take away your 12 | freedom to share and change it. By contrast, the GNU General Public 13 | License is intended to guarantee your freedom to share and change free 14 | software--to make sure the software is free for all its users. This 15 | General Public License applies to most of the Free Software 16 | Foundation's software and to any other program whose authors commit to 17 | using it. (Some other Free Software Foundation software is covered by 18 | the GNU Lesser General Public License instead.) You can apply it to 19 | your programs, too. 20 | 21 | When we speak of free software, we are referring to freedom, not 22 | price. Our General Public Licenses are designed to make sure that you 23 | have the freedom to distribute copies of free software (and charge for 24 | this service if you wish), that you receive source code or can get it 25 | if you want it, that you can change the software or use pieces of it 26 | in new free programs; and that you know you can do these things. 27 | 28 | To protect your rights, we need to make restrictions that forbid 29 | anyone to deny you these rights or to ask you to surrender the rights. 30 | These restrictions translate to certain responsibilities for you if you 31 | distribute copies of the software, or if you modify it. 
32 | 33 | For example, if you distribute copies of such a program, whether 34 | gratis or for a fee, you must give the recipients all the rights that 35 | you have. You must make sure that they, too, receive or can get the 36 | source code. And you must show them these terms so they know their 37 | rights. 38 | 39 | We protect your rights with two steps: (1) copyright the software, and 40 | (2) offer you this license which gives you legal permission to copy, 41 | distribute and/or modify the software. 42 | 43 | Also, for each author's protection and ours, we want to make certain 44 | that everyone understands that there is no warranty for this free 45 | software. If the software is modified by someone else and passed on, we 46 | want its recipients to know that what they have is not the original, so 47 | that any problems introduced by others will not reflect on the original 48 | authors' reputations. 49 | 50 | Finally, any free program is threatened constantly by software 51 | patents. We wish to avoid the danger that redistributors of a free 52 | program will individually obtain patent licenses, in effect making the 53 | program proprietary. To prevent this, we have made it clear that any 54 | patent must be licensed for everyone's free use or not licensed at all. 55 | 56 | The precise terms and conditions for copying, distribution and 57 | modification follow. 58 | 59 | GNU GENERAL PUBLIC LICENSE 60 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 61 | 62 | 0. This License applies to any program or other work which contains 63 | a notice placed by the copyright holder saying it may be distributed 64 | under the terms of this General Public License. The "Program", below, 65 | refers to any such program or work, and a "work based on the Program" 66 | means either the Program or any derivative work under copyright law: 67 | that is to say, a work containing the Program or a portion of it, 68 | either verbatim or with modifications and/or translated into another 69 | language. (Hereinafter, translation is included without limitation in 70 | the term "modification".) Each licensee is addressed as "you". 71 | 72 | Activities other than copying, distribution and modification are not 73 | covered by this License; they are outside its scope. The act of 74 | running the Program is not restricted, and the output from the Program 75 | is covered only if its contents constitute a work based on the 76 | Program (independent of having been made by running the Program). 77 | Whether that is true depends on what the Program does. 78 | 79 | 1. You may copy and distribute verbatim copies of the Program's 80 | source code as you receive it, in any medium, provided that you 81 | conspicuously and appropriately publish on each copy an appropriate 82 | copyright notice and disclaimer of warranty; keep intact all the 83 | notices that refer to this License and to the absence of any warranty; 84 | and give any other recipients of the Program a copy of this License 85 | along with the Program. 86 | 87 | You may charge a fee for the physical act of transferring a copy, and 88 | you may at your option offer warranty protection in exchange for a fee. 89 | 90 | 2. 
You may modify your copy or copies of the Program or any portion 91 | of it, thus forming a work based on the Program, and copy and 92 | distribute such modifications or work under the terms of Section 1 93 | above, provided that you also meet all of these conditions: 94 | 95 | a) You must cause the modified files to carry prominent notices 96 | stating that you changed the files and the date of any change. 97 | 98 | b) You must cause any work that you distribute or publish, that in 99 | whole or in part contains or is derived from the Program or any 100 | part thereof, to be licensed as a whole at no charge to all third 101 | parties under the terms of this License. 102 | 103 | c) If the modified program normally reads commands interactively 104 | when run, you must cause it, when started running for such 105 | interactive use in the most ordinary way, to print or display an 106 | announcement including an appropriate copyright notice and a 107 | notice that there is no warranty (or else, saying that you provide 108 | a warranty) and that users may redistribute the program under 109 | these conditions, and telling the user how to view a copy of this 110 | License. (Exception: if the Program itself is interactive but 111 | does not normally print such an announcement, your work based on 112 | the Program is not required to print an announcement.) 113 | 114 | These requirements apply to the modified work as a whole. If 115 | identifiable sections of that work are not derived from the Program, 116 | and can be reasonably considered independent and separate works in 117 | themselves, then this License, and its terms, do not apply to those 118 | sections when you distribute them as separate works. But when you 119 | distribute the same sections as part of a whole which is a work based 120 | on the Program, the distribution of the whole must be on the terms of 121 | this License, whose permissions for other licensees extend to the 122 | entire whole, and thus to each and every part regardless of who wrote it. 123 | 124 | Thus, it is not the intent of this section to claim rights or contest 125 | your rights to work written entirely by you; rather, the intent is to 126 | exercise the right to control the distribution of derivative or 127 | collective works based on the Program. 128 | 129 | In addition, mere aggregation of another work not based on the Program 130 | with the Program (or with a work based on the Program) on a volume of 131 | a storage or distribution medium does not bring the other work under 132 | the scope of this License. 133 | 134 | 3. 
You may copy and distribute the Program (or a work based on it, 135 | under Section 2) in object code or executable form under the terms of 136 | Sections 1 and 2 above provided that you also do one of the following: 137 | 138 | a) Accompany it with the complete corresponding machine-readable 139 | source code, which must be distributed under the terms of Sections 140 | 1 and 2 above on a medium customarily used for software interchange; or, 141 | 142 | b) Accompany it with a written offer, valid for at least three 143 | years, to give any third party, for a charge no more than your 144 | cost of physically performing source distribution, a complete 145 | machine-readable copy of the corresponding source code, to be 146 | distributed under the terms of Sections 1 and 2 above on a medium 147 | customarily used for software interchange; or, 148 | 149 | c) Accompany it with the information you received as to the offer 150 | to distribute corresponding source code. (This alternative is 151 | allowed only for noncommercial distribution and only if you 152 | received the program in object code or executable form with such 153 | an offer, in accord with Subsection b above.) 154 | 155 | The source code for a work means the preferred form of the work for 156 | making modifications to it. For an executable work, complete source 157 | code means all the source code for all modules it contains, plus any 158 | associated interface definition files, plus the scripts used to 159 | control compilation and installation of the executable. However, as a 160 | special exception, the source code distributed need not include 161 | anything that is normally distributed (in either source or binary 162 | form) with the major components (compiler, kernel, and so on) of the 163 | operating system on which the executable runs, unless that component 164 | itself accompanies the executable. 165 | 166 | If distribution of executable or object code is made by offering 167 | access to copy from a designated place, then offering equivalent 168 | access to copy the source code from the same place counts as 169 | distribution of the source code, even though third parties are not 170 | compelled to copy the source along with the object code. 171 | 172 | 4. You may not copy, modify, sublicense, or distribute the Program 173 | except as expressly provided under this License. Any attempt 174 | otherwise to copy, modify, sublicense or distribute the Program is 175 | void, and will automatically terminate your rights under this License. 176 | However, parties who have received copies, or rights, from you under 177 | this License will not have their licenses terminated so long as such 178 | parties remain in full compliance. 179 | 180 | 5. You are not required to accept this License, since you have not 181 | signed it. However, nothing else grants you permission to modify or 182 | distribute the Program or its derivative works. These actions are 183 | prohibited by law if you do not accept this License. Therefore, by 184 | modifying or distributing the Program (or any work based on the 185 | Program), you indicate your acceptance of this License to do so, and 186 | all its terms and conditions for copying, distributing or modifying 187 | the Program or works based on it. 188 | 189 | 6. Each time you redistribute the Program (or any work based on the 190 | Program), the recipient automatically receives a license from the 191 | original licensor to copy, distribute or modify the Program subject to 192 | these terms and conditions. 
You may not impose any further 193 | restrictions on the recipients' exercise of the rights granted herein. 194 | You are not responsible for enforcing compliance by third parties to 195 | this License. 196 | 197 | 7. If, as a consequence of a court judgment or allegation of patent 198 | infringement or for any other reason (not limited to patent issues), 199 | conditions are imposed on you (whether by court order, agreement or 200 | otherwise) that contradict the conditions of this License, they do not 201 | excuse you from the conditions of this License. If you cannot 202 | distribute so as to satisfy simultaneously your obligations under this 203 | License and any other pertinent obligations, then as a consequence you 204 | may not distribute the Program at all. For example, if a patent 205 | license would not permit royalty-free redistribution of the Program by 206 | all those who receive copies directly or indirectly through you, then 207 | the only way you could satisfy both it and this License would be to 208 | refrain entirely from distribution of the Program. 209 | 210 | If any portion of this section is held invalid or unenforceable under 211 | any particular circumstance, the balance of the section is intended to 212 | apply and the section as a whole is intended to apply in other 213 | circumstances. 214 | 215 | It is not the purpose of this section to induce you to infringe any 216 | patents or other property right claims or to contest validity of any 217 | such claims; this section has the sole purpose of protecting the 218 | integrity of the free software distribution system, which is 219 | implemented by public license practices. Many people have made 220 | generous contributions to the wide range of software distributed 221 | through that system in reliance on consistent application of that 222 | system; it is up to the author/donor to decide if he or she is willing 223 | to distribute software through any other system and a licensee cannot 224 | impose that choice. 225 | 226 | This section is intended to make thoroughly clear what is believed to 227 | be a consequence of the rest of this License. 228 | 229 | 8. If the distribution and/or use of the Program is restricted in 230 | certain countries either by patents or by copyrighted interfaces, the 231 | original copyright holder who places the Program under this License 232 | may add an explicit geographical distribution limitation excluding 233 | those countries, so that distribution is permitted only in or among 234 | countries not thus excluded. In such case, this License incorporates 235 | the limitation as if written in the body of this License. 236 | 237 | 9. The Free Software Foundation may publish revised and/or new versions 238 | of the General Public License from time to time. Such new versions will 239 | be similar in spirit to the present version, but may differ in detail to 240 | address new problems or concerns. 241 | 242 | Each version is given a distinguishing version number. If the Program 243 | specifies a version number of this License which applies to it and "any 244 | later version", you have the option of following the terms and conditions 245 | either of that version or of any later version published by the Free 246 | Software Foundation. If the Program does not specify a version number of 247 | this License, you may choose any version ever published by the Free Software 248 | Foundation. 249 | 250 | 10. 
If you wish to incorporate parts of the Program into other free 251 | programs whose distribution conditions are different, write to the author 252 | to ask for permission. For software which is copyrighted by the Free 253 | Software Foundation, write to the Free Software Foundation; we sometimes 254 | make exceptions for this. Our decision will be guided by the two goals 255 | of preserving the free status of all derivatives of our free software and 256 | of promoting the sharing and reuse of software generally. 257 | 258 | NO WARRANTY 259 | 260 | 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY 261 | FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN 262 | OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES 263 | PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED 264 | OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 265 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS 266 | TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE 267 | PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, 268 | REPAIR OR CORRECTION. 269 | 270 | 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 271 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR 272 | REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, 273 | INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING 274 | OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED 275 | TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY 276 | YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER 277 | PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE 278 | POSSIBILITY OF SUCH DAMAGES. 279 | 280 | END OF TERMS AND CONDITIONS 281 | 282 | How to Apply These Terms to Your New Programs 283 | 284 | If you develop a new program, and you want it to be of the greatest 285 | possible use to the public, the best way to achieve this is to make it 286 | free software which everyone can redistribute and change under these terms. 287 | 288 | To do so, attach the following notices to the program. It is safest 289 | to attach them to the start of each source file to most effectively 290 | convey the exclusion of warranty; and each file should have at least 291 | the "copyright" line and a pointer to where the full notice is found. 292 | 293 | {description} 294 | Copyright (C) {year} {fullname} 295 | 296 | This program is free software; you can redistribute it and/or modify 297 | it under the terms of the GNU General Public License as published by 298 | the Free Software Foundation; either version 2 of the License, or 299 | (at your option) any later version. 300 | 301 | This program is distributed in the hope that it will be useful, 302 | but WITHOUT ANY WARRANTY; without even the implied warranty of 303 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 304 | GNU General Public License for more details. 305 | 306 | You should have received a copy of the GNU General Public License along 307 | with this program; if not, write to the Free Software Foundation, Inc., 308 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 309 | 310 | Also add information on how to contact you by electronic and paper mail. 
311 | 312 | If the program is interactive, make it output a short notice like this 313 | when it starts in an interactive mode: 314 | 315 | Gnomovision version 69, Copyright (C) year name of author 316 | Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 317 | This is free software, and you are welcome to redistribute it 318 | under certain conditions; type `show c' for details. 319 | 320 | The hypothetical commands `show w' and `show c' should show the appropriate 321 | parts of the General Public License. Of course, the commands you use may 322 | be called something other than `show w' and `show c'; they could even be 323 | mouse-clicks or menu items--whatever suits your program. 324 | 325 | You should also get your employer (if you work as a programmer) or your 326 | school, if any, to sign a "copyright disclaimer" for the program, if 327 | necessary. Here is a sample; alter the names: 328 | 329 | Yoyodyne, Inc., hereby disclaims all copyright interest in the program 330 | `Gnomovision' (which makes passes at compilers) written by James Hacker. 331 | 332 | {signature of Ty Coon}, 1 April 1989 333 | Ty Coon, President of Vice 334 | 335 | This General Public License does not permit incorporating your program into 336 | proprietary programs. If your program is a subroutine library, you may 337 | consider it more useful to permit linking proprietary applications with the 338 | library. If this is what you want to do, use the GNU Lesser General 339 | Public License instead of this License. 340 | -------------------------------------------------------------------------------- /NAMESPACE: -------------------------------------------------------------------------------- 1 | # Generated by roxygen2: do not edit by hand 2 | 3 | export(accuracy) 4 | export(bootstrappedVAR) 5 | export(checkImpulseZero) 6 | export(checkIsVar) 7 | export(companionVAR) 8 | export(computeForecasts) 9 | export(createSparseMatrix) 10 | export(decomposePi) 11 | export(errorBandsIRF) 12 | export(fitVAR) 13 | export(fitVARX) 14 | export(fitVECM) 15 | export(frobNorm) 16 | export(impulseResponse) 17 | export(informCrit) 18 | export(l1norm) 19 | export(l2norm) 20 | export(lInftyNorm) 21 | export(maxNorm) 22 | export(mcSimulations) 23 | export(multiplot) 24 | export(plotIRF) 25 | export(plotIRFGrid) 26 | export(plotMatrix) 27 | export(plotVAR) 28 | export(plotVECM) 29 | export(simulateVAR) 30 | export(simulateVARX) 31 | export(spectralNorm) 32 | export(spectralRadius) 33 | export(testGranger) 34 | export(transformData) 35 | export(varENET) 36 | export(varMCP) 37 | export(varSCAD) 38 | -------------------------------------------------------------------------------- /NEWS.md: -------------------------------------------------------------------------------- 1 | # sparsevar 0.1.0 2 | 3 | - Fix bug in plotIRF 4 | - Linted code 5 | - Fixed knitr/markdown/rmarkdown problems as in https://github.com/yihui/knitr/issues/1864 6 | 7 | # sparsevar 0.0.11 8 | 9 | - Added CV on a predefined list of lambdas (thanks to PierrickPiette) 10 | 11 | # sparsevar 0.0.10 12 | 13 | - Added plotVECM function 14 | - Removed plotComparisonVAR (substituted by plotVAR) 15 | - Added the option to generate VARs with given matrices 16 | - Fixed AIC 17 | - Fixed problems with error bands options 18 | - Added tests 19 | - Added the function computeForecasts 20 | 21 | # sparsevar 0.0.9 22 | 23 | - Fast SCAD estimation (using picasso package; works only with SCAD and timeSlice) 24 | - Added function to compute VAR forecasts 25 | - Added 
information criteria (AIC, SChwartz and Hannan-Quinn) 26 | - Fixed mean estimation for timeSlice 27 | 28 | # sparsevar 0.0.7 29 | 30 | - Major code rewriting 31 | - Remove dependecies from MTS and caret 32 | - Added impulse response error bands (using bootstrap) 33 | - Added plot functions for IRF 34 | - New timeSlice estimation 35 | - Removed repeated cross validation 36 | 37 | # sparsevar 0.0.6 38 | 39 | - Added impulse response function for VAR processes 40 | 41 | # sparsevar 0.0.5 42 | 43 | - Added timeSlice estimation 44 | - Fixed normalization constant in creating sparse var matrix 45 | - Fixed parallel backend in Windows 46 | 47 | # sparsevar 0.0.4 48 | 49 | - Added as output the residuals of the estimation (for estimateVAR) 50 | - Fixed parallel background in LASSO estimation 51 | - Now repeatedCV returns MSE 52 | -------------------------------------------------------------------------------- /R/createSparseMatrix.R: -------------------------------------------------------------------------------- 1 | #' @title Create Sparse Matrix 2 | #' 3 | #' @description Creates a sparse square matrix with a given sparsity and 4 | #' distribution. 5 | #' 6 | #' @param N the dimension of the square matrix 7 | #' @param sparsity the density of non zero elements 8 | #' @param method the method used to generate the entries of the matrix. 9 | #' Possible values are \code{"normal"} (default) or \code{"bimodal"}. 10 | #' @param stationary should the spectral radius of the matrix be smaller than 1? 11 | #' Possible values are \code{TRUE} or \code{FALSE}. Default is \code{FALSE}. 12 | #' @param p normalization constant (used for VAR of order greater than 1, 13 | #' default = 1) 14 | #' @param ... other options for the matrix (you can specify the mean 15 | #' \code{mu_mat} and the standard deviation \code{sd_mat}). 16 | #' @return An NxN sparse matrix. 17 | #' @examples 18 | #' M <- createSparseMatrix( 19 | #' N = 30, sparsity = 0.05, method = "normal", 20 | #' stationary = TRUE 21 | #' ) 22 | #' @export 23 | createSparseMatrix <- function(N, sparsity, method = "normal", 24 | stationary = FALSE, p = 1, ...) { 25 | opt <- list(...) 26 | mu <- ifelse(!is.null(opt$mu_mat), opt$mu_mat, 0) 27 | sd <- ifelse(!is.null(opt$sd_mat), opt$sd_mat, 1) 28 | n <- floor(sparsity * (N^2)) 29 | 30 | if (method == "normal") { 31 | # normal distributed nonzero entries 32 | non_zero_entries <- stats::rnorm(n, mean = mu, sd = sd) 33 | entries <- sample(x = 1:(N^2), size = n, replace = FALSE) 34 | Atmp <- numeric(length = (N^2)) 35 | Atmp[entries] <- non_zero_entries 36 | A <- matrix(Atmp, nrow = N, ncol = N) 37 | } else if (method == "bimodal") { 38 | # bimodal (bi-normal) distributed nonzero entries 39 | non_zero_entries_left <- stats::rnorm(n, mean = -mu, sd = sd) 40 | non_zero_entries_right <- stats::rnorm(n, mean = mu, sd = sd) 41 | non_zero_entries <- sample( 42 | x = c( 43 | non_zero_entries_left, 44 | non_zero_entries_right 45 | ), 46 | size = n, replace = FALSE 47 | ) 48 | entries <- sample(x = 1:(N^2), size = n, replace = FALSE) 49 | Atmp <- numeric(length = (N^2)) 50 | Atmp[entries] <- non_zero_entries 51 | A <- matrix(Atmp, nrow = N, ncol = N) 52 | } else if (method == "full") { 53 | # full matrix: used only for tests 54 | e <- 0.9^(1:N) 55 | D <- diag(e) 56 | P <- matrix(0, N, N) 57 | while (det(P) == 0) { 58 | P <- createSparseMatrix(N = N, sparsity = 1, method = "bimodal") 59 | } 60 | A <- solve(P) %*% D %*% P 61 | stationary <- FALSE 62 | } else { 63 | # invalid method 64 | stop("Unknown method. 
Possible methods are normal or bimodal.") 65 | } 66 | 67 | if (stationary == TRUE) { 68 | # if spectral radius < 1 is needed, return the re-normalized matrix 69 | K <- 1 70 | return(1 / (K * base::sqrt(p * sparsity * N * sd)) * A) 71 | } else { 72 | return(A) 73 | } 74 | } 75 | -------------------------------------------------------------------------------- /R/fitVAR.R: -------------------------------------------------------------------------------- 1 | #' @title Multivariate VAR estimation 2 | #' 3 | #' @description A function to estimate a (possibly high-dimensional) 4 | #' multivariate VAR time series using penalized least squares methods, 5 | #' such as ENET, SCAD or MC+. 6 | #' 7 | #' @usage fitVAR(data, p = 1, penalty = "ENET", method = "cv", ...) 8 | #' 9 | #' @param data the data from the time series: variables in columns and 10 | #' observations in rows 11 | #' @param p order of the VAR model 12 | #' @param penalty the penalty function to use. Possible values 13 | #' are \code{"ENET"}, \code{"SCAD"} or \code{"MCP"} 14 | #' @param method possible values are \code{"cv"} or \code{"timeSlice"} 15 | #' @param ... the options for the estimation. Global options are: 16 | #' \code{threshold}: if \code{TRUE} all the entries smaller than the oracle 17 | #' threshold are set to zero; 18 | #' \code{scale}: scale the data (default = FALSE)? 19 | #' \code{nfolds}: the number of folds used for cross validation (default = 10); 20 | #' \code{parallel}: if \code{TRUE} use multicore backend (default = FALSE); 21 | #' \code{ncores}: if \code{parallel} is \code{TRUE}, specify the number 22 | #' of cores to use for parallel evaluation. Options for ENET estimation: 23 | #' \code{alpha}: the value of alpha to use in elastic net 24 | #' (0 is Ridge regression, 1 is LASSO (default)); 25 | #' \code{type.measure}: the measure to use for error evaluation 26 | #' (\code{"mse"} or \code{"mae"}); 27 | #' \code{nlambda}: the number of lambdas to use in the cross 28 | #' validation (default = 100); 29 | #' \code{leaveOut}: in the time slice validation leave out the 30 | #' last \code{leaveOutLast} observations (default = 15); 31 | #' \code{horizon}: the horizon to use for estimating mse/mae (default = 1); 32 | #' \code{picasso}: use picasso package for estimation (only available 33 | #' for \code{penalty = "SCAD"} and \code{method = "timeSlice"}). 34 | #' 35 | #' @return \code{A} the list (of length \code{p}) of the estimated matrices 36 | #' of the process 37 | #' @return \code{fit} the results of the penalized LS estimation 38 | #' @return \code{mse} the mean square error of the cross validation 39 | #' @return \code{time} elapsed time for the estimation 40 | #' @return \code{residuals} the time series of the residuals 41 | #' 42 | #' @export 43 | fitVAR <- function(data, p = 1, penalty = "ENET", method = "cv", ...) { 44 | opt <- list(...) 45 | 46 | # convert data to matrix 47 | if (!is.matrix(data)) { 48 | data <- as.matrix(data) 49 | } 50 | 51 | cnames <- colnames(data) 52 | 53 | if (method == "cv") { 54 | 55 | # use CV to find lambda 56 | opt$method <- "cv" 57 | out <- cvVAR(data, p, penalty, opt) 58 | } else if (method == "timeSlice") { 59 | 60 | # use timeslice to find lambda 61 | opt$method <- "timeSlice" 62 | out <- timeSliceVAR(data, p, penalty, opt) 63 | } else { 64 | 65 | # error: unknown method 66 | stop("Unknown method. 
Possible values are \"cv\" or \"timeSlice\"") 67 | } 68 | 69 | # Add the names of the variables to the matrices 70 | if (!is.null(cnames)) { 71 | for (k in 1:length(out$A)) { 72 | colnames(out$A[[k]]) <- cnames 73 | rownames(out$A[[k]]) <- cnames 74 | } 75 | } 76 | 77 | return(out) 78 | } 79 | 80 | cvVAR <- function(data, p, penalty = "ENET", opt = NULL) { 81 | nc <- ncol(data) 82 | nr <- nrow(data) 83 | 84 | picasso <- ifelse(!is.null(opt$picasso), opt$picasso, FALSE) 85 | threshold <- ifelse(!is.null(opt$threshold), opt$threshold, FALSE) 86 | 87 | threshold_type <- ifelse(!is.null(opt$threshold_type), 88 | opt$threshold_type, "soft" 89 | ) 90 | 91 | return_fit <- ifelse(!is.null(opt$return_fit), opt$return_fit, FALSE) 92 | 93 | if (picasso) { 94 | stop("picasso available only with timeSlice method.") 95 | } 96 | # transform the dataset 97 | tr_dt <- transformData(data, p, opt) 98 | 99 | if (penalty == "ENET") { 100 | 101 | # fit the ENET model 102 | t <- Sys.time() 103 | fit <- cvVAR_ENET(tr_dt$X, tr_dt$y, nvar = nc, opt) 104 | elapsed <- Sys.time() - t 105 | 106 | # extract what is needed 107 | lambda <- ifelse(is.null(opt$lambda), "lambda.min", opt$lambda) 108 | 109 | # extract the coefficients and reshape the matrix 110 | Avector <- stats::coef(fit, s = lambda) 111 | A <- matrix(Avector[2:length(Avector)], 112 | nrow = nc, ncol = nc * p, 113 | byrow = TRUE 114 | ) 115 | 116 | mse <- min(fit$cvm) 117 | } else if (penalty == "SCAD") { 118 | 119 | # convert from sparse matrix to std matrix (SCAD does not work with sparse 120 | # matrices) 121 | tr_dt$X <- as.matrix(tr_dt$X) 122 | 123 | # fit the SCAD model 124 | t <- Sys.time() 125 | fit <- cvVAR_SCAD(tr_dt$X, tr_dt$y, opt) 126 | elapsed <- Sys.time() - t 127 | 128 | # extract the coefficients and reshape the matrix 129 | Avector <- stats::coef(fit, s = "lambda.min") 130 | A <- matrix(Avector[2:length(Avector)], 131 | nrow = nc, ncol = nc * p, 132 | byrow = TRUE 133 | ) 134 | mse <- min(fit$cve) 135 | } else if (penalty == "MCP") { 136 | 137 | # convert from sparse matrix to std matrix (MCP does not work with sparse 138 | # matrices) 139 | tr_dt$X <- as.matrix(tr_dt$X) 140 | 141 | # fit the MCP model 142 | t <- Sys.time() 143 | fit <- cvVAR_SCAD(tr_dt$X, tr_dt$y, opt) 144 | elapsed <- Sys.time() - t 145 | 146 | # extract the coefficients and reshape the matrix 147 | Avector <- stats::coef(fit, s = "lambda.min") 148 | A <- matrix(Avector[2:length(Avector)], 149 | nrow = nc, ncol = nc * p, 150 | byrow = TRUE 151 | ) 152 | mse <- min(fit$cve) 153 | } else { 154 | 155 | # Unknown penalty error 156 | stop("Unkown penalty. Available penalties are: ENET, SCAD, MCP.") 157 | } 158 | 159 | # If threshold = TRUE then set to zero all the entries that are smaller than 160 | # the threshold 161 | if (threshold == TRUE) { 162 | A <- applyThreshold(A, nr, nc, p, type = threshold_type) 163 | } 164 | 165 | # Get back the list of VAR matrices (of length p) 166 | A <- splitMatrix(A, p) 167 | 168 | # Now that we have the matrices compute the residuals 169 | res <- computeResiduals(tr_dt$series, A) 170 | 171 | # To extract the sd of mse 172 | if (penalty == "ENET") { 173 | ix <- which(fit$cvm == min(fit$cvm)) 174 | mse_sd <- fit$cvsd[ix] 175 | } else { 176 | ix <- which(fit$cve == min(fit$cve)) 177 | mse_sd <- fit$cvse[ix] 178 | } 179 | 180 | # Create the output 181 | output <- list() 182 | output$mu <- tr_dt$mu 183 | output$A <- A 184 | 185 | # Do you want the fit? 
186 | if (return_fit == TRUE) { 187 | output$fit <- fit 188 | } 189 | 190 | # Return the "best" lambda 191 | output$lambda <- fit$lambda.min 192 | 193 | output$mse <- mse 194 | output$mse_sd <- mse_sd 195 | output$time <- elapsed 196 | output$series <- tr_dt$series 197 | output$residuals <- res 198 | 199 | # Variance/Covariance estimation 200 | output$sigma <- estimateCovariance(res) 201 | 202 | output$penalty <- penalty 203 | output$method <- "cv" 204 | attr(output, "class") <- "var" 205 | attr(output, "type") <- "fit" 206 | return(output) 207 | } 208 | 209 | cvVAR_ENET <- function(X, y, nvar, opt) { 210 | a <- ifelse(is.null(opt$alpha), 1, opt$alpha) 211 | nl <- ifelse(is.null(opt$nlambda), 100, opt$nlambda) 212 | tm <- ifelse(is.null(opt$type.measure), "mse", opt$type.measure) 213 | nf <- ifelse(is.null(opt$nfolds), 10, opt$nfolds) 214 | parall <- ifelse(is.null(opt$parallel), FALSE, opt$parallel) 215 | ncores <- ifelse(is.null(opt$ncores), 1, opt$ncores) 216 | 217 | # Vector of lambdas to work on 218 | if (!is.null(opt$lambdas_list)) { 219 | lambdas_list <- opt$lambdas_list 220 | } else { 221 | lambdas_list <- c(0) 222 | } 223 | 224 | # Assign ids to the CV-folds (useful for replication of results) 225 | if (is.null(opt$folds_ids)) { 226 | folds_ids <- numeric(0) 227 | } else { 228 | nr <- nrow(X) 229 | folds_ids <- rep(sort(rep(seq(nf), length.out = nr / nvar)), nvar) 230 | } 231 | 232 | if (parall == TRUE) { 233 | if (ncores < 1) { 234 | stop("The number of cores must be > 1") 235 | } else { 236 | cl <- doParallel::registerDoParallel(cores = ncores) 237 | 238 | if (length(folds_ids) == 0) { 239 | if (length(lambdas_list) < 2) { 240 | cvfit <- glmnet::cv.glmnet(X, y, 241 | alpha = a, nlambda = nl, 242 | type.measure = tm, nfolds = nf, 243 | parallel = TRUE, standardize = FALSE 244 | ) 245 | } else { 246 | cvfit <- glmnet::cv.glmnet(X, y, 247 | alpha = a, lambda = lambdas_list, 248 | type.measure = tm, nfolds = nf, 249 | parallel = TRUE, standardize = FALSE 250 | ) 251 | } 252 | } else { 253 | if (length(lambdas_list) < 2) { 254 | cvfit <- glmnet::cv.glmnet(X, y, 255 | alpha = a, nlambda = nl, 256 | type.measure = tm, foldid = folds_ids, 257 | parallel = TRUE, standardize = FALSE 258 | ) 259 | } else { 260 | cvfit <- glmnet::cv.glmnet(X, y, 261 | alpha = a, lambda = lambdas_list, 262 | type.measure = tm, foldid = folds_ids, 263 | parallel = TRUE, standardize = FALSE 264 | ) 265 | } 266 | } 267 | } 268 | } else { 269 | if (length(folds_ids) == 0) { 270 | if (length(lambdas_list) < 2) { 271 | cvfit <- glmnet::cv.glmnet(X, y, 272 | alpha = a, nlambda = nl, 273 | type.measure = tm, nfolds = nf, 274 | parallel = FALSE, standardize = FALSE 275 | ) 276 | } else { 277 | cvfit <- glmnet::cv.glmnet(X, y, 278 | alpha = a, lambda = lambdas_list, 279 | type.measure = tm, nfolds = nf, 280 | parallel = FALSE, standardize = FALSE 281 | ) 282 | } 283 | } else { 284 | if (length(lambdas_list) < 2) { 285 | cvfit <- glmnet::cv.glmnet(X, y, 286 | alpha = a, nlambda = nl, 287 | type.measure = tm, foldid = folds_ids, 288 | parallel = FALSE, standardize = FALSE 289 | ) 290 | } else { 291 | cvfit <- glmnet::cv.glmnet(X, y, 292 | alpha = a, lambda = lambdas_list, 293 | type.measure = tm, foldid = folds_ids, 294 | parallel = FALSE, standardize = FALSE 295 | ) 296 | } 297 | } 298 | } 299 | 300 | return(cvfit) 301 | } 302 | 303 | cvVAR_SCAD <- function(X, y, opt) { 304 | e <- ifelse(is.null(opt$eps), 0.01, opt$eps) 305 | nf <- ifelse(is.null(opt$nfolds), 10, opt$nfolds) 306 | parall <- 
ifelse(is.null(opt$parallel), FALSE, opt$parallel) 307 | ncores <- ifelse(is.null(opt$ncores), 1, opt$ncores) 308 | picasso <- ifelse(is.null(opt$picasso), FALSE, TRUE) 309 | 310 | if (!picasso) { 311 | if (parall == TRUE) { 312 | if (ncores < 1) { 313 | stop("The number of cores must be > 1") 314 | } else { 315 | cl <- parallel::makeCluster(ncores) 316 | cvfit <- ncvreg::cv.ncvreg(X, y, 317 | nfolds = nf, penalty = "SCAD", 318 | eps = e, cluster = cl 319 | ) 320 | parallel::stopCluster(cl) 321 | } 322 | } else { 323 | cvfit <- ncvreg::cv.ncvreg(X, y, nfolds = nf, penalty = "SCAD", eps = e) 324 | } 325 | } else { 326 | cvfit <- picasso::picasso(X, y, method = "scad") 327 | } 328 | 329 | return(cvfit) 330 | } 331 | 332 | cvVAR_MCP <- function(X, y, opt) { 333 | e <- ifelse(is.null(opt$eps), 0.01, opt$eps) 334 | nf <- ifelse(is.null(opt$nfolds), 10, opt$nfolds) 335 | parall <- ifelse(is.null(opt$parallel), FALSE, opt$parallel) 336 | ncores <- ifelse(is.null(opt$ncores), 1, opt$ncores) 337 | 338 | if (parall == TRUE) { 339 | if (ncores < 1) { 340 | stop("The number of cores must be > 1") 341 | } else { 342 | cl <- parallel::makeCluster(ncores) 343 | cvfit <- ncvreg::cv.ncvreg(X, y, 344 | nfolds = nf, penalty = "MCP", 345 | eps = e, cluster = cl 346 | ) 347 | parallel::stopCluster(cl) 348 | } 349 | } else { 350 | cvfit <- ncvreg::cv.ncvreg(X, y, nfolds = nf, penalty = "MCP", eps = e) 351 | } 352 | 353 | return(cvfit) 354 | } 355 | -------------------------------------------------------------------------------- /R/fitVARX.R: -------------------------------------------------------------------------------- 1 | #' @title Multivariate VARX estimation 2 | #' 3 | #' @description A function to estimate a (possibly high-dimensional) multivariate VARX time series 4 | #' using penalized least squares methods, such as ENET, SCAD or MC+. 5 | #' 6 | #' @usage fitVARX(data, p = 1, Xt, m = 1, penalty = "ENET", method = "cv", ...) 7 | #' 8 | #' @param data the data from the time series: variables in columns and observations in 9 | #' rows 10 | #' @param p order of the VAR model 11 | #' @param Xt the exogenous variables 12 | #' @param m order of the exogenous variables 13 | #' @param penalty the penalty function to use. Possible values are \code{"ENET"}, 14 | #' \code{"SCAD"} or \code{"MCP"} 15 | #' @param method possible values are \code{"cv"} or \code{"timeSlice"} 16 | #' @param ... the options for the estimation. Global options are: 17 | #' \code{threshold}: if \code{TRUE} all the entries smaller than the oracle threshold are set to zero; 18 | #' \code{scale}: scale the data (default = FALSE)? 19 | #' \code{nfolds}: the number of folds used for cross validation (default = 10); 20 | #' \code{parallel}: if \code{TRUE} use multicore backend (default = FALSE); 21 | #' \code{ncores}: if \code{parallel} is \code{TRUE}, specify the number of cores to use 22 | #' for parallel evaluation. 
Options for ENET estimation: 23 | #' \code{alpha}: the value of alpha to use in elastic net (0 is Ridge regression, 1 is LASSO (default)); 24 | #' \code{type.measure}: the measure to use for error evaluation (\code{"mse"} or \code{"mae"}); 25 | #' \code{nlambda}: the number of lambdas to use in the cross validation (default = 100); 26 | #' \code{leaveOut}: in the time slice validation leave out the last \code{leaveOutLast} observations 27 | #' (default = 15); 28 | #' \code{horizon}: the horizon to use for estimating mse/mae (default = 1); 29 | #' \code{picasso}: use picasso package for estimation (only available for \code{penalty = "SCAD"} 30 | #' and \code{method = "timeSlice"}). 31 | #' 32 | #' @return \code{A} the list (of length \code{p}) of the estimated matrices of the process 33 | #' @return \code{fit} the results of the penalized LS estimation 34 | #' @return \code{mse} the mean square error of the cross validation 35 | #' @return \code{time} elapsed time for the estimation 36 | #' @return \code{residuals} the time series of the residuals 37 | #' 38 | #' @export 39 | fitVARX <- function(data, p = 1, Xt, m = 1, penalty = "ENET", method = "cv", ...) { 40 | opt <- list(...) 41 | 42 | # convert data to matrix 43 | if (!is.matrix(data)) { 44 | data <- as.matrix(data) 45 | } 46 | 47 | # convert data to matrix 48 | if (!is.matrix(Xt)) { 49 | Xt <- as.matrix(Xt) 50 | } 51 | 52 | dataXt <- cbind(data, Xt) 53 | 54 | cnames <- colnames(data) 55 | cnamesX <- colnames(Xt) 56 | 57 | pX <- max(p, m) 58 | 59 | if (method == "cv") { 60 | # use CV to find lambda 61 | opt$method <- "cv" 62 | out <- cvVAR(dataXt, pX, penalty, opt) 63 | } else if (method == "timeSlice") { 64 | # use timeslice to find lambda 65 | opt$method <- "timeSlice" 66 | out <- timeSliceVAR(dataXt, pX, penalty, opt) 67 | } else { 68 | # error: unknown method 69 | stop("Unknown method. Possible values are \"cv\" or \"timeSlice\"") 70 | } 71 | 72 | nc <- ncol(data) 73 | ncX <- ncol(Xt) 74 | 75 | out <- VARtoVARX(out, p, m, nc, ncX) 76 | 77 | # Add the names of the variables to the matrices 78 | if (!is.null(cnames)) { 79 | for (k in 1:length(out$A)) { 80 | colnames(out$A[[k]]) <- cnames 81 | rownames(out$A[[k]]) <- cnames 82 | } 83 | } 84 | 85 | return(out) 86 | } 87 | 88 | VARtoVARX <- function(v, p, m, nc, ncX) { 89 | l <- length(v$A) 90 | newA <- list() 91 | B <- list() 92 | for (i in 1:l) { 93 | newA[[i]] <- v$A[[i]][1:nc, 1:nc] 94 | B[[i]] <- v$A[[i]][1:nc, (nc + 1):ncol(v$A[[i]])] 95 | } 96 | if (p < l) { 97 | v$newA <- newA[-((p + 1):l)] 98 | v$B <- B 99 | } else if (m < l) { 100 | v$newA <- newA 101 | v$B <- B[-((m + 1):l)] 102 | } else { 103 | v$newA <- newA 104 | v$B <- B 105 | } 106 | return(v) 107 | } 108 | -------------------------------------------------------------------------------- /R/fitVECM.R: -------------------------------------------------------------------------------- 1 | #' @title Multivariate VECM estimation 2 | #' 3 | #' @description A function to estimate a (possibly big) multivariate VECM time series 4 | #' using penalized least squares methods, such as ENET, SCAD or MC+. 5 | #' 6 | #' @usage fitVECM(data, p, penalty, method, logScale, ...) 7 | #' 8 | #' @param data the data from the time series: variables in columns and observations in 9 | #' rows 10 | #' @param p order of the VECM model 11 | #' @param penalty the penalty function to use. Possible values are \code{"ENET"}, 12 | #' \code{"SCAD"} or \code{"MCP"} 13 | #' @param logScale should the function consider the \code{log} of the inputs? 
By default 14 | #' this is set to \code{TRUE} 15 | #' @param method \code{"cv"} or \code{"timeSlice"} 16 | #' @param ... options for the function (TODO: specify) 17 | #' 18 | #' @return Pi the matrix \code{Pi} for the VECM model 19 | #' @return G the list (of length \code{p-1}) of the estimated matrices of the process 20 | #' @return fit the results of the penalized LS estimation 21 | #' @return mse the mean square error of the cross validation 22 | #' @return time elapsed time for the estimation 23 | #' 24 | #' @export 25 | fitVECM <- function(data, p = 0, penalty = "ENET", method = "cv", logScale = TRUE, ...) { 26 | nr <- nrow(data) 27 | nc <- ncol(data) 28 | 29 | p <- p + 1 30 | 31 | opt <- list(...) 32 | opt$center <- FALSE 33 | 34 | # by default log-scale the data 35 | if (logScale == TRUE) { 36 | data <- log(data) 37 | data[is.na(data)] <- 0 38 | # data[is.infinite(data)] <- 0 39 | } 40 | 41 | resultsVAR <- fitVAR(data, p = p, penalty = penalty, method = method, ...) 42 | M <- resultsVAR$A 43 | I <- diag(x = 1, nrow = nc, ncol = nc) 44 | 45 | # Coint matrix 46 | Pi <- -(I - matrixSum(M, ix = 1)) 47 | 48 | # Gamma matrices 49 | G <- list() 50 | 51 | if (p > 1) { 52 | for (k in 1:(p - 1)) { 53 | G[[k]] <- -matrixSum(M, ix = k + 1) 54 | } 55 | } 56 | 57 | output <- list() 58 | output$mu <- resultsVAR$mu 59 | output$Pi <- Pi 60 | output$G <- G 61 | output$A <- resultsVAR$A 62 | output$fit <- resultsVAR$fit 63 | output$mse <- resultsVAR$mse 64 | output$mseSD <- resultsVAR$mseSD 65 | output$time <- resultsVAR$time 66 | output$residuals <- resultsVAR$residuals 67 | output$lambda <- resultsVAR$lambda 68 | output$series <- resultsVAR$series 69 | 70 | if (is.null(opt$methodCov)) { 71 | output$sigma <- estimateCovariance(output$residuals) 72 | } else { 73 | output$sigma <- estimateCovariance(output$residuals, methodCovariance = opt$methodCov) 74 | } 75 | 76 | output$penalty <- resultsVAR$penalty 77 | output$method <- resultsVAR$method 78 | attr(output, "class") <- "vecm" 79 | attr(output, "type") <- "fit" 80 | 81 | return(output) 82 | } 83 | 84 | matrixSum <- function(M, ix = 1) { 85 | l <- length(M) 86 | nc <- ncol(M[[1]]) 87 | 88 | A <- matrix(0, nrow = nc, ncol = nc) 89 | 90 | for (i in ix:l) { 91 | A <- A + M[[i]] 92 | } 93 | 94 | return(A) 95 | } 96 | 97 | #' @title Decompose Pi VECM matrix 98 | #' 99 | #' @description A function to estimate a (possibly big) multivariate VECM time series 100 | #' using penalized least squares methods, such as ENET, SCAD or MC+. 101 | #' 102 | #' @usage decomposePi(vecm, rk, ...) 103 | #' 104 | #' @param vecm the VECM object 105 | #' @param rk rank 106 | #' @param ... options for the function (TODO: specify) 107 | #' 108 | #' @return alpha 109 | #' @return beta 110 | #' 111 | #' @export 112 | decomposePi <- function(vecm, rk, ...) { 113 | if (attr(vecm, "class") != "vecm") { 114 | stop("The input is not a vecm object.") 115 | } 116 | 117 | # Different covariance methods? 118 | opt <- list(...) 
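  # Note on the steps below: Pi is factored as Pi ~ alpha %*% t(beta), with alpha
  # taken as the first `rk` columns of Pi and beta normalised so that its top
  # rk x rk block is the identity; the remaining rows of beta come from a
  # GLS-type projection weighted by the shrinkage inverse covariance of the
  # residuals (corpcor::invcov.shrink).
  # Illustrative call (a sketch, assuming `v` is a fitted VECM from fitVECM()):
  #   dec <- decomposePi(v, rk = 2)  # dec$alpha, dec$beta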
119 | 120 | nc <- ncol(vecm$Pi) 121 | Pi <- vecm$Pi 122 | colnames(Pi) <- NULL 123 | rownames(Pi) <- NULL 124 | sig <- corpcor::invcov.shrink(vecm$residuals, verbose = FALSE) 125 | colnames(sig) <- NULL 126 | rownames(sig) <- NULL 127 | 128 | if (rk < nc & rk > 0) { 129 | a <- Pi[, 1:rk] 130 | b <- t(solve(t(a) %*% sig %*% a) %*% (t(a) %*% sig %*% Pi[, (rk + 1):nc])) 131 | b <- rbind(diag(1, rk, rk), b) 132 | } else if (rk == nc) { 133 | a <- Pi 134 | b <- diag(1, rk, rk) 135 | } else { 136 | a <- numeric(length = nc) 137 | b <- Pi 138 | } 139 | 140 | out <- list() 141 | out$alpha <- a 142 | out$beta <- b 143 | return(out) 144 | } 145 | 146 | decomposePi2 <- function(vecm, rk) { 147 | if (attr(vecm, "class") != "vecm") { 148 | stop("The input is not a vecm object.") 149 | } 150 | 151 | nc <- ncol(vecm$Pi) 152 | Pi <- vecm$Pi 153 | colnames(Pi) <- NULL 154 | rownames(Pi) <- NULL 155 | 156 | if (rk >= 1) { 157 | a <- Pi[, 1:rk] 158 | # s <- solve(vecm$sigma) 159 | # b <- t(solve(t(a)%*%s%*%a)%*%(t(a)%*%s%*%vecm$Pi[,(rk+1):nc])) 160 | # b <- rbind(diag(1, rk, rk), b) 161 | A <- kronecker(diag(1, nc, nc), a) 162 | B <- as.numeric(Pi) 163 | b <- matrix(qr.solve(A, B), ncol = rk, nrow = nc, byrow = TRUE) 164 | bT <- matrix(qr.solve(A, B), ncol = rk, nrow = nc, byrow = FALSE) 165 | } else { 166 | a <- numeric(length = nc) 167 | b <- Pi 168 | bT <- t(Pi) 169 | } 170 | 171 | out <- list() 172 | out$alpha <- a 173 | out$beta <- b 174 | out$betaT <- bT 175 | return(out) 176 | } 177 | -------------------------------------------------------------------------------- /R/impulseResponse.R: -------------------------------------------------------------------------------- 1 | #' @title Impulse Response Function 2 | #' 3 | #' @description A function to estimate the Impulse Response Function of a given VAR. 4 | #' 5 | #' @usage impulseResponse(v, len = 20) 6 | #' 7 | #' @param v the data in the for of a VAR 8 | #' @param len length of the impulse response function 9 | #' 10 | #' @return \code{irf} a 3d array containing the impulse response function. 11 | #' 12 | #' @export 13 | impulseResponse <- function(v, len = 20) { 14 | 15 | # Check if v is a VAR object 16 | if (!checkIsVar(v)) { 17 | stop("Input v must be a VAR object") 18 | } 19 | 20 | # Numerical problems in the estimated variance covariance 21 | e <- eigen(v$sigma)$values 22 | if (!is.null(e[e <= 0])) { 23 | P <- t(chol(v$sigma, pivot = TRUE)) 24 | } else { 25 | P <- t(chol(v$sigma)) 26 | } 27 | 28 | bigA <- companionVAR(v) 29 | 30 | out <- getIRF(v, bigA, len = len, P) 31 | out$cholP <- P # Add Choleski factorization to the output 32 | 33 | return(out) 34 | } 35 | 36 | #' @title Check Impulse Zero 37 | #' 38 | #' @description A function to find which entries of the impulse response function 39 | #' are zero. 40 | #' 41 | #' @usage checkImpulseZero(irf) 42 | #' 43 | #' @param irf irf output from impulseResponse function 44 | #' 45 | #' @return a matrix containing the indices of the impulse response function that 46 | #' are 0. 47 | #' 48 | #' @export 49 | checkImpulseZero <- function(irf) { 50 | nx <- dim(irf)[1] 51 | ny <- dim(irf)[2] 52 | nz <- dim(irf)[3] 53 | logicalIrf <- matrix(0, nx, ny) 54 | 55 | for (z in 1:nz) { 56 | logicalIrf <- logicalIrf + abs(irf[, , z]) 57 | } 58 | 59 | logicalIrf <- logicalIrf == 0 60 | return(which(logicalIrf == TRUE, arr.ind = TRUE)) 61 | } 62 | 63 | #' @title Error bands for IRF 64 | #' 65 | #' @description A function to estimate the confidence intervals for irf and oirf. 
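#' For \code{resampling = "bootstrap"} the series is resampled \code{M} times via
#' \code{bootstrappedVAR}, the VAR is re-estimated on each replicate and the bands
#' are taken from the quantiles of the resulting IRFs (plus a normal approximation
#' based on their standard deviation); for \code{resampling = "jackknife"} one
#' observation is left out at a time. A minimal sketch, assuming \code{v} is a
#' fitted VAR from \code{fitVAR}:
#' \code{bands <- errorBandsIRF(v, impulseResponse(v), alpha = 0.05, M = 50)}.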
66 | #' 67 | #' @usage errorBandsIRF(v, irf, alpha, M, resampling, ...) 68 | #' 69 | #' @param v a var object as from fitVAR or simulateVAR 70 | #' @param irf irf output from impulseResponse function 71 | #' @param alpha level of confidence (default \code{alpha = 0.01}) 72 | #' @param M number of bootstrapped series (default \code{M = 100}) 73 | #' @param resampling type of resampling: \code{"bootstrap"} or \code{"jackknife"} 74 | #' @param ... some options for the estimation: \code{verbose = TRUE} or \code{FALSE}, 75 | #' \code{mode = "fast"} or \code{"slow"}, \code{threshold = TRUE} or \code{FALSE}. 76 | #' 77 | #' @return a matrix containing the indices of the impulse response function that 78 | #' are 0. 79 | #' 80 | #' @export 81 | errorBandsIRF <- function(v, irf, alpha = 0.01, M = 100, resampling = "bootstrap", ...) { 82 | opt <- list(...) 83 | verbose <- ifelse(!is.null(opt$verbose), opt$verbose, TRUE) 84 | mode <- ifelse(!is.null(opt$mode), opt$mode, "fast") 85 | threshold <- ifelse(!is.null(opt$threshold), opt$threshold, FALSE) 86 | thresholdType <- ifelse(!is.null(opt$thresholdType), opt$thresholdType, "soft") 87 | 88 | if (resampling == "bootstrap") { 89 | lambda <- v$lambda 90 | p <- length(v$A) 91 | nr <- ncol(v$series) 92 | nc <- ncol(v$A[[1]]) 93 | len <- dim(irf$irf)[3] 94 | 95 | irfs <- array(data = rep(0, len * nc^2 * M), dim = c(nc, nc, len + 1, M)) 96 | oirfs <- array(data = rep(0, len * nc^2 * M), dim = c(nc, nc, len + 1, M)) 97 | 98 | if (verbose == TRUE) { 99 | cat("Step 1 of 2: bootstrapping series and re-estimating VAR...\n") 100 | pb <- utils::txtProgressBar(min = 0, max = M, style = 3) 101 | } 102 | 103 | for (k in 1:M) { 104 | # create Xs and Ys (temp variables) 105 | o <- bootstrappedVAR(v) 106 | 107 | if (mode == "fast") { 108 | if (v$penalty == "ENET") { 109 | # fit ENET to a specific value of lambda 110 | fit <- varENET(o, p, lambda, opt = list(method = v$method, penalty = v$penalty)) 111 | Avector <- stats::coef(fit, s = lambda) 112 | A <- matrix(Avector[2:length(Avector)], nrow = nc, ncol = nc * p, byrow = TRUE) 113 | } else if (v$penalty == "SCAD") { 114 | fit <- varSCAD(o, p, lambda, opt = list(method = v$method, penalty = v$penalty)) 115 | Avector <- fit$beta[2:nrow(fit$beta), 1] 116 | A <- matrix(Avector, nrow = nc, ncol = nc * p, byrow = TRUE) 117 | } else { 118 | fit <- varMCP(o, p, lambda, opt = list(method = v$method, penalty = v$penalty)) 119 | Avector <- fit$beta[2:nrow(fit$beta), 1] 120 | A <- matrix(Avector, nrow = nc, ncol = nc * p, byrow = TRUE) 121 | } 122 | 123 | if (threshold == TRUE) { 124 | applyThreshold(A, nr, nc, p, type = thresholdType) 125 | } 126 | 127 | M <- cbind(diag(x = 1, nrow = (nc * (p - 1)), ncol = (nc * (p - 1))), matrix(0, nrow = (nc * (p - 1)), ncol = nc)) 128 | bigA <- rbind(A, M) 129 | } else { 130 | # fit ENET on a series of lambdas 131 | if (threshold == TRUE) { 132 | fit <- fitVAR(o, p, penalty = v$penalty, method = v$method, threshold = TRUE) 133 | } else { 134 | fit <- fitVAR(o, p, penalty = v$penalty, method = v$method) 135 | } 136 | bigA <- companionVAR(fit) 137 | } 138 | 139 | tmpRes <- getIRF(v, bigA, len, irf$cholP) 140 | irfs[, , , k] <- tmpRes$irf 141 | oirfs[, , , k] <- tmpRes$oirf 142 | 143 | if (verbose == TRUE) { 144 | utils::setTxtProgressBar(pb, k) 145 | } 146 | } 147 | 148 | if (verbose == TRUE) { 149 | close(pb) 150 | cat("Step 2 of 2: computing quantiles...\n") 151 | pb <- utils::txtProgressBar(min = 0, max = (nc * nc), style = 3) 152 | } 153 | 154 | irfUB <- array(data = rep(0, len * nc^2), dim = 
c(nc, nc, len)) 155 | irfLB <- array(data = rep(0, len * nc^2), dim = c(nc, nc, len)) 156 | oirfUB <- array(data = rep(0, len * nc^2), dim = c(nc, nc, len)) 157 | oirfLB <- array(data = rep(0, len * nc^2), dim = c(nc, nc, len)) 158 | irfQUB <- array(data = rep(0, len * nc^2), dim = c(nc, nc, len)) 159 | irfQLB <- irfQUB 160 | oirfQUB <- irfQUB 161 | oirfQLB <- irfQUB 162 | 163 | a <- alpha / 2 164 | qLB <- stats::qnorm(a) 165 | qUB <- stats::qnorm((1 - a)) 166 | 167 | for (i in 1:nc) { 168 | for (j in 1:nc) { 169 | for (k in 1:len) { 170 | irfQUB[i, j, k] <- stats::quantile(irfs[i, j, k, ], probs = (1 - a), na.rm = TRUE) 171 | oirfQUB[i, j, k] <- stats::quantile(oirfs[i, j, k, ], probs = (1 - a), na.rm = TRUE) 172 | irfQLB[i, j, k] <- stats::quantile(irfs[i, j, k, ], probs = a, na.rm = TRUE) 173 | oirfQLB[i, j, k] <- stats::quantile(oirfs[i, j, k, ], probs = a, na.rm = TRUE) 174 | 175 | irfUB[i, j, k] <- qUB * stats::sd(irfs[i, j, k, ]) 176 | oirfUB[i, j, k] <- qUB * stats::sd(oirfs[i, j, k, ]) 177 | irfLB[i, j, k] <- qLB * stats::sd(irfs[i, j, k, ]) 178 | oirfLB[i, j, k] <- qLB * stats::sd(oirfs[i, j, k, ]) 179 | } 180 | if (verbose == TRUE) { 181 | utils::setTxtProgressBar(pb, (i - 1) * nc + j) 182 | } 183 | } 184 | } 185 | 186 | if (verbose == TRUE) { 187 | close(pb) 188 | } 189 | 190 | output <- list() 191 | 192 | output$irfUB <- irfUB 193 | output$oirfUB <- oirfUB 194 | output$irfLB <- irfLB 195 | output$oirfLB <- oirfLB 196 | 197 | output$irfQUB <- irfQUB 198 | output$oirfQUB <- oirfQUB 199 | output$irfQLB <- irfQLB 200 | output$oirfQLB <- oirfQLB 201 | 202 | attr(output, "class") <- "irfBands" 203 | attr(output, "resampling") <- "bootstrap" 204 | return(output) 205 | } else if (resampling == "jackknife") { 206 | output <- jackknife(v, irf, alpha = alpha, ...) 207 | return(output) 208 | } else if (resampling == "bootstrapOLS") { 209 | output <- bootstrapOLS(v, irf, alpha = alpha, ...) 210 | } else { 211 | stop("Unknown resampling method. Possible values are \"bootstrap\" or \"jackknife\"") 212 | } 213 | } 214 | 215 | jackknife <- function(v, irf, mode = "fast", alpha, ...) { 216 | lambda <- v$lambda 217 | p <- length(v$A) 218 | nc <- ncol(v$A[[1]]) 219 | len <- dim(irf$irf)[3] 220 | nr <- nrow(v$series) 221 | 222 | opt <- list(...) 
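# Parse the optional arguments, falling back to the same defaults used by
# errorBandsIRF(): verbose progress bars, "fast" single-lambda refits, no
# thresholding, and soft thresholding when thresholding is requested.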
223 | verbose <- ifelse(!is.null(opt$verbose), opt$verbose, TRUE) 224 | mode <- ifelse(!is.null(opt$mode), opt$mode, "fast") 225 | threshold <- ifelse(!is.null(opt$threshold), opt$threshold, FALSE) 226 | thresholdType <- ifelse(!is.null(opt$thresholdType), opt$thresholdType, "soft") 227 | 228 | irfs <- array(data = rep(0, len * nc^2 * nr), dim = c(nc, nc, len + 1, nr)) 229 | oirfs <- array(data = rep(0, len * nc^2 * nr), dim = c(nc, nc, len + 1, nr)) 230 | 231 | if (verbose == TRUE) { 232 | cat("Step 1 of 2: jack knifing series and re-estimating VAR...\n") 233 | pb <- utils::txtProgressBar(min = 0, max = nr, style = 3) 234 | } 235 | 236 | for (k in 1:nr) { 237 | # create Xs and Ys (temp variables) 238 | data <- v$series[-k, ] 239 | trDt <- transformData(data, p, opt = list(method = v$method, penalty = v$penalty)) 240 | trDt$X <- trDt$X 241 | trDt$y <- trDt$y 242 | 243 | # data <- v$series 244 | # trDt <- transformData(data, p, opt = list(method = v$method, penalty = v$penalty)) 245 | # trDt$X <- trDt$X[-k, ] 246 | # trDt$y <- trDt$y[-k] 247 | 248 | if (mode == "fast") { 249 | if (v$penalty == "ENET") { 250 | # fit ENET to a specific value of lambda 251 | fit <- varENET(data, p, lambda, opt = list(method = v$method, penalty = v$penalty)) 252 | Avector <- stats::coef(fit, s = lambda) 253 | A <- matrix(Avector[2:length(Avector)], nrow = nc, ncol = nc * p, byrow = TRUE) 254 | } else if (v$penalty == "SCAD") { 255 | fit <- varSCAD(data, p, lambda, opt = list(method = v$method, penalty = v$penalty)) 256 | Avector <- fit$beta[2:nrow(fit$beta), 1] 257 | A <- matrix(Avector, nrow = nc, ncol = nc * p, byrow = TRUE) 258 | } else { 259 | fit <- varMCP(data, p, lambda, opt = list(method = v$method, penalty = v$penalty)) 260 | Avector <- fit$beta[2:nrow(fit$beta), 1] 261 | A <- matrix(Avector, nrow = nc, ncol = nc * p, byrow = TRUE) 262 | } 263 | 264 | if (threshold == TRUE) { 265 | applyThreshold(A, nr, nc, p, type = thresholdType) 266 | } 267 | 268 | M <- cbind(diag(x = 1, nrow = (nc * (p - 1)), ncol = (nc * (p - 1))), matrix(0, nrow = (nc * (p - 1)), ncol = nc)) 269 | bigA <- rbind(A, M) 270 | } else { 271 | # fit ENET on a series of lambdas 272 | if (threshold == TRUE) { 273 | fit <- fitVAR(data, p, penalty = v$penalty, method = v$method, threshold = TRUE) 274 | } else { 275 | fit <- fitVAR(data, p, penalty = v$penalty, method = v$method) 276 | } 277 | bigA <- companionVAR(fit) 278 | } 279 | 280 | tmpRes <- getIRF(v, bigA, len, irf$cholP) 281 | irfs[, , , k] <- tmpRes$irf 282 | oirfs[, , , k] <- tmpRes$oirf 283 | 284 | if (verbose == TRUE) { 285 | utils::setTxtProgressBar(pb, k) 286 | } 287 | } 288 | 289 | if (verbose == TRUE) { 290 | close(pb) 291 | cat("Step 2 of 2: computing quantiles...\n") 292 | pb <- utils::txtProgressBar(min = 0, max = (nc * nc), style = 3) 293 | } 294 | 295 | irfUB <- array(data = rep(0, len * nc^2), dim = c(nc, nc, len)) 296 | irfLB <- array(data = rep(0, len * nc^2), dim = c(nc, nc, len)) 297 | oirfUB <- array(data = rep(0, len * nc^2), dim = c(nc, nc, len)) 298 | oirfLB <- array(data = rep(0, len * nc^2), dim = c(nc, nc, len)) 299 | 300 | a <- alpha / 2 301 | qLB <- stats::qnorm(a) 302 | qUB <- stats::qnorm((1 - a)) 303 | 304 | for (i in 1:nc) { 305 | for (j in 1:nc) { 306 | for (k in 1:len) { 307 | irfUB[i, j, k] <- base::mean(irfs[i, j, k, ]) + qUB * stats::sd(irfs[i, j, k, ]) 308 | oirfUB[i, j, k] <- base::mean(oirfs[i, j, k, ]) + qUB * stats::sd(oirfs[i, j, k, ]) 309 | irfLB[i, j, k] <- base::mean(irfs[i, j, k, ]) + qLB * stats::sd(irfs[i, j, k, ]) 310 | oirfLB[i, j, 
k] <- base::mean(oirfs[i, j, k, ]) + qLB * stats::sd(oirfs[i, j, k, ]) 311 | } 312 | if (verbose == TRUE) { 313 | utils::setTxtProgressBar(pb, (i - 1) * nc + j) 314 | } 315 | } 316 | } 317 | 318 | if (verbose == TRUE) { 319 | close(pb) 320 | } 321 | 322 | output <- list() 323 | 324 | output$irfUB <- irfUB 325 | output$oirfUB <- oirfUB 326 | output$irfLB <- irfLB 327 | output$oirfLB <- oirfLB 328 | 329 | attr(output, "class") <- "irfBands" 330 | attr(output, "resampling") <- "jackknife" 331 | return(output) 332 | } 333 | 334 | 335 | bootstrapOLS <- function(v, irf, mode = "fast", alpha, ...) { 336 | lambda <- v$lambda 337 | p <- length(v$A) 338 | nc <- ncol(v$A[[1]]) 339 | len <- dim(irf$irf)[3] 340 | nr <- nrow(v$series) 341 | 342 | opt <- list(...) 343 | verbose <- ifelse(!is.null(opt$verbose), opt$verbose, TRUE) 344 | mode <- ifelse(!is.null(opt$mode), opt$mode, "fast") 345 | threshold <- ifelse(!is.null(opt$threshold), opt$threshold, FALSE) 346 | thresholdType <- ifelse(!is.null(opt$thresholdType), opt$thresholdType, "soft") 347 | 348 | irfs <- array(data = rep(0, len * nc^2 * nr), dim = c(nc, nc, len + 1, nr)) 349 | oirfs <- array(data = rep(0, len * nc^2 * nr), dim = c(nc, nc, len + 1, nr)) 350 | 351 | if (verbose == TRUE) { 352 | cat("Step 1 of 2: bootstrappingOLS series and re-estimating VAR...\n") 353 | pb <- utils::txtProgressBar(min = 0, max = nr, style = 3) 354 | } 355 | 356 | for (k in 1:nr) { 357 | # create Xs and Ys (temp variables) 358 | o <- bootstrappedVAR(v) 359 | 360 | N <- ncol(v$A[[1]]) 361 | nobs <- nrow(v$series) 362 | 363 | bigA <- companionVAR(v) 364 | 365 | trDt <- transformData(o, p = p, opt = list(method = v$method, scale = FALSE, center = TRUE)) 366 | 367 | nonZeroEntries <- as.matrix(bigA != 0) 368 | 369 | ## Create matrix R 370 | t <- as.vector(nonZeroEntries) 371 | n <- sum(t != 0) 372 | ix <- which(t != 0) 373 | j <- 1:n 374 | 375 | R <- matrix(0, ncol = n, nrow = length(t)) 376 | for (zz in 1:n) { 377 | R[ix[zz], j[zz]] <- 1 378 | } 379 | 380 | X <- as.matrix(trDt$X) 381 | y <- as.vector(t(o[-(1:p), ])) 382 | 383 | # Metodo A MANO 384 | s <- corpcor::invcov.shrink(v$residuals, verbose = FALSE) 385 | G <- t(o[-nobs, ]) %*% o[-nobs, ] / nobs 386 | 387 | V <- solve(t(R) %*% (kronecker(G, s) %*% R)) 388 | VV <- nonZeroEntries 389 | VV[nonZeroEntries] <- diag(V) 390 | G1 <- solve(t(R) %*% (kronecker(t(o[-nobs, ]) %*% o[-nobs, ], s)) %*% R) 391 | G2 <- t(R) %*% (kronecker(t(o[-nobs, ]), s)) 392 | 393 | g <- G1 %*% G2 # [ , (N+1):(length(y) + N)] 394 | ga <- g %*% y 395 | 396 | b1 <- vector(length = N * N) 397 | b1 <- R %*% ga 398 | A <- matrix(b1, ncol = N, byrow = F) 399 | 400 | 401 | M <- cbind(diag(x = 1, nrow = (nc * (p - 1)), ncol = (nc * (p - 1))), matrix(0, nrow = (nc * (p - 1)), ncol = nc)) 402 | bigA <- rbind(A, M) 403 | 404 | tmpRes <- getIRF(v, bigA, len, irf$cholP) 405 | irfs[, , , k] <- tmpRes$irf 406 | oirfs[, , , k] <- tmpRes$oirf 407 | 408 | if (verbose == TRUE) { 409 | utils::setTxtProgressBar(pb, k) 410 | } 411 | } 412 | 413 | if (verbose == TRUE) { 414 | close(pb) 415 | cat("Step 2 of 2: computing quantiles...\n") 416 | pb <- utils::txtProgressBar(min = 0, max = (nc * nc), style = 3) 417 | } 418 | 419 | irfUB <- array(data = rep(0, len * nc^2), dim = c(nc, nc, len)) 420 | irfLB <- array(data = rep(0, len * nc^2), dim = c(nc, nc, len)) 421 | oirfUB <- array(data = rep(0, len * nc^2), dim = c(nc, nc, len)) 422 | oirfLB <- array(data = rep(0, len * nc^2), dim = c(nc, nc, len)) 423 | 424 | a <- alpha / 2 425 | qLB <- stats::qnorm(a) 426 | qUB <- 
stats::qnorm((1 - a)) 427 | 428 | for (i in 1:nc) { 429 | for (j in 1:nc) { 430 | for (k in 1:len) { 431 | irfUB[i, j, k] <- base::mean(irfs[i, j, k, ]) + qUB * stats::sd(irfs[i, j, k, ]) 432 | oirfUB[i, j, k] <- base::mean(oirfs[i, j, k, ]) + qUB * stats::sd(oirfs[i, j, k, ]) 433 | irfLB[i, j, k] <- base::mean(irfs[i, j, k, ]) + qLB * stats::sd(irfs[i, j, k, ]) 434 | oirfLB[i, j, k] <- base::mean(oirfs[i, j, k, ]) + qLB * stats::sd(oirfs[i, j, k, ]) 435 | } 436 | if (verbose == TRUE) { 437 | utils::setTxtProgressBar(pb, (i - 1) * nc + j) 438 | } 439 | } 440 | } 441 | 442 | if (verbose == TRUE) { 443 | close(pb) 444 | } 445 | 446 | output <- list() 447 | 448 | output$irfUB <- irfUB 449 | output$oirfUB <- oirfUB 450 | output$irfLB <- irfLB 451 | output$oirfLB <- oirfLB 452 | 453 | attr(output, "class") <- "irfBands" 454 | attr(output, "resampling") <- "jackknife" 455 | return(output) 456 | } 457 | 458 | 459 | getIRF <- function(v, bigA, len = 20, P) { 460 | nr <- nrow(v$A[[1]]) 461 | 462 | irf <- array(data = rep(0, len * nr^2), dim = c(nr, nr, len + 1)) 463 | oirf <- array(data = rep(0, len * nr^2), dim = c(nr, nr, len + 1)) 464 | 465 | Atmp <- diag(nrow = nrow(bigA), ncol = ncol(bigA)) 466 | 467 | irf[, , 1] <- Atmp[1:nr, 1:nr] 468 | oirf[, , 1] <- Atmp[1:nr, 1:nr] %*% P 469 | 470 | for (k in 1:len) { 471 | Atmp <- Atmp %*% bigA 472 | irf[, , (k + 1)] <- as.matrix(Atmp[1:nr, 1:nr]) 473 | oirf[, , (k + 1)] <- as.matrix(Atmp[1:nr, 1:nr] %*% P) 474 | } 475 | 476 | ## TODO: add cumulative response functions 477 | out <- list() 478 | out$irf <- irf 479 | out$oirf <- oirf 480 | attr(out, "class") <- "irf" 481 | 482 | return(out) 483 | } 484 | -------------------------------------------------------------------------------- /R/mcSimulations.R: -------------------------------------------------------------------------------- 1 | #' @title Monte Carlo simulations 2 | #' 3 | #' @description This function generates monte carlo simultaions of sparse VAR and 4 | #' its estimation (at the moment only for VAR(1) processes). 5 | #' @param N dimension of the multivariate time series. 6 | #' @param nobs number of observations to be generated. 7 | #' @param nMC number of Monte Carlo simulations. 8 | #' @param rho base value for the covariance. 9 | #' @param sparsity density of non zero entries of the VAR matrices. 10 | #' @param penalty penalty function to use for LS estimation. Possible values are \code{"ENET"}, 11 | #' \code{"SCAD"} or \code{"MCP"}. 12 | #' @param covariance type of covariance matrix to be used in the generation of the sparse VAR model. 13 | #' @param method which type of distribution to use in the generation of the entries of the matrices. 14 | #' @param modelSel select which model selection criteria to use (\code{"cv"} or \code{"timeslice"}). 15 | #' @param ... (TODO: complete) 16 | #' 17 | #' @return a \code{nMc}x5 matrix with the results of the Monte Carlo estimation 18 | 19 | #' @export 20 | mcSimulations <- function(N, nobs = 250, nMC = 100, rho = 0.5, sparsity = 0.05, 21 | penalty = "ENET", covariance = "Toeplitz", 22 | method = "normal", modelSel = "cv", ...) { 23 | results <- list() 24 | 25 | results$confusionMatrix <- matrix(0, nMC, 4) 26 | results$matrixNorms <- matrix(0, nMC, 6) 27 | pb <- utils::txtProgressBar(min = 0, max = nMC, style = 3) 28 | 29 | for (i in 1:nMC) { 30 | s <- simulateVAR(nobs = nobs, N = N, rho = rho, sparsity = sparsity, covariance = covariance, method = method, ...) 
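# For each Monte Carlo replicate: keep the generating matrix and its spectral
# radius, re-estimate the VAR on the simulated series, then compare estimated
# and true sparsity patterns (confusion matrix, accuracy) and record the
# relative l2/Frobenius errors, the MSE and both spectral radii.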
31 | rets <- s$series 32 | genA <- s$A[[1]] 33 | spRad <- max(Mod(eigen(genA)$values)) 34 | 35 | res <- fitVAR(data = rets, penalty = penalty, method = modelSel, ...) 36 | 37 | A <- res$A[[1]] 38 | estSpRad <- max(Mod(eigen(A)$values)) 39 | 40 | L <- A 41 | L[L != 0] <- 1 42 | L[L == 0] <- 0 43 | 44 | genL <- genA 45 | genL[genL != 0] <- 1 46 | genL[genL == 0] <- 0 47 | 48 | results$confusionMatrix[i, 1:4] <- prop.table(table(Predicted = L, Real = genL)) 49 | results$accuracy[i] <- 1 - sum(abs(L - genL)) / N^2 # accuracy -(1 - sum(genL)/N^2) 50 | results$matrixNorms[i, 1] <- abs(sum(L) / N^2 - sparsity) # sparsity 51 | results$matrixNorms[i, 2] <- l2norm(A - genA) / l2norm(genA) 52 | results$matrixNorms[i, 3] <- frobNorm(A - genA) / frobNorm(genA) 53 | results$matrixNorms[i, 4] <- res$mse 54 | results$matrixNorms[i, 5] <- spRad 55 | results$matrixNorms[i, 6] <- estSpRad 56 | utils::setTxtProgressBar(pb, i) 57 | } 58 | 59 | close(pb) 60 | 61 | results$confusionMatrix <- as.data.frame(results$confusionMatrix) 62 | colnames(results$confusionMatrix) <- c("TP", "FP", "FN", "TN") 63 | results$matrixNorms <- as.data.frame(results$matrixNorms) 64 | colnames(results$matrixNorms) <- c("sparDiff", "l2", "frob", "mse", "spRad", "estSpRad") 65 | 66 | return(results) 67 | } 68 | -------------------------------------------------------------------------------- /R/plotIRF.R: -------------------------------------------------------------------------------- 1 | #' @title IRF plot 2 | #' 3 | #' @description Plot a IRF object 4 | #' 5 | #' @param irf the irf object to plot 6 | #' @param eb the errorbands to plot 7 | #' @param i the first index 8 | #' @param j the second index 9 | #' @param type \code{type = "irf"} or \code{type = "oirf"} 10 | #' @param bands \code{"quantiles"} or \code{"sd"} 11 | #' @return An \code{image} plot relative to the impulse response function. 
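#' @examples
#' \dontrun{
#' # Illustrative workflow only: the impulseResponse() call and the argument
#' # values below are assumptions, not taken from this package's documentation.
#' sim <- simulateVAR(N = 5, nobs = 200)
#' fit <- fitVAR(sim$series, p = 1, penalty = "ENET")
#' irf <- impulseResponse(fit)
#' eb <- errorBandsIRF(fit, irf, alpha = 0.05, M = 50)
#' plotIRF(irf, eb, i = 1, j = 2, type = "irf", bands = "quantiles")
#' }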
12 | #' @usage plotIRF(irf, eb, i, j, type, bands) 13 | #' 14 | #' @export 15 | plotIRF <- function(irf, eb, i, j, type = "irf", bands = "quantiles") { 16 | if (attr(irf, "class") != "irf" | attr(eb, "class") != "irfBands") { 17 | stop("Inputs must be an irf object and an irfBands object") 18 | } 19 | 20 | if (attr(eb, "resampling") == "bootstrap") { 21 | nz <- dim(irf$irf)[3] 22 | t <- 0:(nz - 1) 23 | 24 | ebs <- list() 25 | 26 | if (bands == "quantiles") { 27 | ebs$irfUB <- eb$irfQUB[i, j, ] - irf$irf[i, j, ] 28 | ebs$irfLB <- eb$irfQLB[i, j, ] - irf$irf[i, j, ] 29 | ebs$oirfUB <- eb$oirfQUB[i, j, ] - irf$oirf[i, j, ] 30 | ebs$oirfLB <- eb$oirfQLB[i, j, ] - irf$oirf[i, j, ] 31 | } else if (bands == "sd") { 32 | ebs$irfUB <- eb$irfUB[i, j, ] 33 | ebs$irfLB <- eb$irfLB[i, j, ] 34 | ebs$oirfUB <- eb$oirfUB[i, j, ] 35 | ebs$oirfLB <- eb$oirfLB[i, j, ] 36 | } else { 37 | stop("Possible values for bands are sd or quantiles") 38 | } 39 | 40 | if (type == "irf") { 41 | irfString <- paste0("IRF ", j, " -> ", i) 42 | ub <- irf$irf[i, j, ] + ebs$irfUB 43 | lb <- irf$irf[i, j, ] + ebs$irfLB 44 | d <- as.data.frame(cbind(t, irf$irf[i, j, ], lb, ub, 0)) 45 | } else if (type == "oirf") { 46 | irfString <- paste0("OIRF ", j, " -> ", i) 47 | ub <- irf$oirf[i, j, ] + ebs$oirfUB 48 | lb <- irf$oirf[i, j, ] + ebs$oirfLB 49 | d <- as.data.frame(cbind(t, irf$oirf[i, j, ], lb, ub, 0)) 50 | } else { 51 | stop("Unknown type") 52 | } 53 | 54 | ggplot2::ggplot(d, ggplot2::aes(x = d[, 1], y = d[, 2])) + 55 | ggplot2::ylab(irfString) + 56 | ggplot2::geom_line(data = d, ggplot2::aes(x = t, y = d[, 3]), linetype = "dashed", color = "blue") + 57 | ggplot2::geom_line(data = d, ggplot2::aes(x = t, y = d[, 4]), linetype = "dashed", color = "blue") + 58 | ggplot2::geom_ribbon(data = d, ggplot2::aes(ymin = d[, 3], ymax = d[, 4]), fill = "lightsteelblue2", alpha = 0.75) + 59 | ggplot2::geom_line(data = d, ggplot2::aes(x = t, y = d[, 5]), color = "red") + 60 | ggplot2::geom_line() + 61 | ggplot2::xlab("Time") 62 | } else { 63 | nz <- dim(irf$irf)[3] 64 | t <- 0:(nz - 1) 65 | 66 | ebs <- list() 67 | 68 | ebs$irfUB <- eb$irfUB[i, j, ] 69 | ebs$irfLB <- eb$irfLB[i, j, ] 70 | ebs$oirfUB <- eb$oirfUB[i, j, ] 71 | ebs$oirfLB <- eb$oirfLB[i, j, ] 72 | 73 | if (type == "irf") { 74 | irfString <- paste0("IRF ", j, " -> ", i) 75 | ub <- ebs$irfUB 76 | lb <- ebs$irfLB 77 | d <- as.data.frame(cbind(t, irf$irf[i, j, ], lb, ub, 0)) 78 | } else if (type == "oirf") { 79 | irfString <- paste0("OIRF ", j, " -> ", i) 80 | ub <- ebs$oirfUB 81 | lb <- ebs$oirfLB 82 | d <- as.data.frame(cbind(t, irf$oirf[i, j, ], lb, ub, 0)) 83 | } else { 84 | stop("Unknown type") 85 | } 86 | 87 | ggplot2::ggplot(d, ggplot2::aes(x = d[, 1], y = d[, 2])) + 88 | ggplot2::ylab(irfString) + 89 | ggplot2::geom_line(data = d, ggplot2::aes(x = t, y = d[, 3]), linetype = "dashed", color = "blue") + 90 | ggplot2::geom_line(data = d, ggplot2::aes(x = t, y = d[, 4]), linetype = "dashed", color = "blue") + 91 | ggplot2::geom_ribbon(data = d, ggplot2::aes(ymin = d[, 3], ymax = d[, 4]), fill = "lightsteelblue2", alpha = 0.75) + 92 | ggplot2::geom_line(data = d, ggplot2::aes(x = t, y = d[, 5]), color = "red") + 93 | ggplot2::geom_line() + 94 | ggplot2::xlab("Time") 95 | } 96 | } 97 | 98 | #' @title IRF grid plot 99 | #' 100 | #' @description Plot a IRF grid object 101 | #' 102 | #' @param irf the irf object computed using impulseResponse 103 | #' @param eb the error bands estimated using errorBands 104 | #' @param indexes a vector containing the indeces that you want to plot 105 
| #' @param type plot the irf (\code{type = "irf"} by default) or the orthogonal irf 106 | #' (\code{type = "oirf"}) 107 | #' @param bands which type of bands to plot ("quantiles" (default) or "sd") 108 | #' @return An \code{image} plot relative to the impulse response function. 109 | #' @usage plotIRFGrid(irf, eb, indexes, type, bands) 110 | #' 111 | #' @export 112 | plotIRFGrid <- function(irf, eb, indexes, type = "irf", bands = "quantiles") { 113 | n <- length(indexes) 114 | g <- expand.grid(indexes, indexes) 115 | nrgrid <- nrow(g) 116 | 117 | pl <- list() 118 | 119 | for (i in 1:nrgrid) { 120 | pl[[i]] <- plotIRF(irf, eb, g[i, 1], g[i, 2], type = type, bands = bands) 121 | } 122 | 123 | multiplot(plotlist = pl, cols = n, layout = matrix(1:nrgrid, nrow = n, byrow = TRUE)) 124 | } 125 | -------------------------------------------------------------------------------- /R/plotMatrix.R: -------------------------------------------------------------------------------- 1 | #' @title Matrix plot 2 | #' 3 | #' @description Plot a sparse matrix 4 | #' 5 | #' @param M the matrix to plot 6 | #' @param colors dark or light 7 | #' @return An \code{image} plot with a particular color palette (black zero entries, red 8 | #' for the negative ones and green for the positive) 9 | #' @usage plotMatrix(M, colors) 10 | #' 11 | #' @export 12 | plotMatrix <- function(M, colors = "dark") { 13 | if (!is.matrix(M)) { 14 | stop("Input must be a matrix") 15 | } 16 | 17 | nr <- nrow(M) 18 | nc <- ncol(M) 19 | M <- t(M)[, nr:1] 20 | if (colors == "dark") { 21 | ggplot2::ggplot(reshape2::melt(M), ggplot2::aes_string(x = "Var1", y = "Var2", fill = "value")) + 22 | ggplot2::geom_raster() + 23 | ggplot2::scale_fill_gradient2(low = "red", high = "green", mid = "black") + 24 | ggplot2::xlab("Row") + 25 | ggplot2::ylab("Col") + 26 | ggplot2::theme(axis.text.x = ggplot2::element_text(angle = 45, hjust = 1, vjust = 1)) 27 | } else if (colors == "light") { 28 | ggplot2::ggplot(reshape2::melt(M), ggplot2::aes_string(x = "Var1", y = "Var2", fill = "value")) + 29 | ggplot2::geom_raster() + 30 | ggplot2::scale_fill_gradient2(low = "red", high = "blue", mid = "white") + 31 | ggplot2::xlab("Row") + 32 | ggplot2::ylab("Col") + 33 | ggplot2::theme(axis.text.x = ggplot2::element_text(angle = 45, hjust = 1, vjust = 1)) 34 | } else { 35 | stop("Colors must be\"light\" or \"dark\".") 36 | } 37 | } 38 | 39 | #' @title Plot VARs 40 | #' 41 | #' @description Plot all the matrices of a VAR model 42 | #' 43 | #' @param ... a sequence of VAR objects (one or more 44 | #' than one, as from \code{simulateVAR} or \code{fitVAR}) 45 | #' @param colors the gradient used to plot the matrix. It can be "light" (low = 46 | #' red -- mid = white -- high = blue) or "dark" (low = red -- mid = black -- 47 | #' high = green) 48 | #' @return An \code{image} plot with a specific color palette 49 | #' @usage plotVAR(..., colors) 50 | #' 51 | #' @export 52 | plotVAR <- function(..., colors = "dark") { 53 | vars <- list(...) 
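# Collect all VAR objects passed through ...; each one must pass checkIsVar()
# before anything is plotted.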
54 | l <- length(vars) 55 | 56 | for (i in 1:l) { 57 | if (!checkIsVar(vars[[i]])) { 58 | stop("Inputs must be var objects") 59 | } 60 | } 61 | 62 | pl <- list() 63 | varorder <- length(vars[[1]]$A) 64 | differentVarOrder <- FALSE 65 | for (i in 1:l) { 66 | if (varorder != length(vars[[i]]$A)) { 67 | differentVarOrder <- TRUE 68 | varorder <- min(varorder, length(vars[[i]]$A)) 69 | } 70 | } 71 | 72 | if (differentVarOrder == TRUE) { 73 | warning("Different VAR orders: plotting up to the min one") 74 | } 75 | 76 | for (i in 1:l) { 77 | for (j in 1:varorder) { 78 | pl[[((i - 1) * varorder) + j]] <- plotMatrix(vars[[i]]$A[[j]], colors = colors) 79 | } 80 | } 81 | 82 | multiplot(plotlist = pl, cols = varorder, layout = matrix(1:(l * varorder), nrow = l, byrow = TRUE)) 83 | } 84 | 85 | #' @title Plot VECMs 86 | #' 87 | #' @description Plot all the matrices of a VECM model 88 | #' 89 | #' @param v a VECM object (as from \code{fitVECM}) 90 | #' @return An \code{image} plot with a specific color palette (black zero entries, red 91 | #' for the negative ones and green for the positive) 92 | #' @usage plotVECM(v) 93 | #' 94 | #' @export 95 | plotVECM <- function(v) { 96 | if (attr(v, "class") != "vecm") { 97 | stop("v must be a VECM object") 98 | } 99 | 100 | l <- length(v$G) 101 | pl <- list() 102 | 103 | pl[[1]] <- plotMatrix(v$Pi) 104 | 105 | if (l > 0) { 106 | for (i in 1:l) { 107 | pl[[i + 1]] <- plotMatrix(v$G[[i]]) 108 | } 109 | } 110 | 111 | multiplot(plotlist = pl, cols = l + 1, layout = matrix(1:(l + 1), nrow = 1, byrow = TRUE)) 112 | } 113 | 114 | #' @title Multiplots with ggplot 115 | #' 116 | #' @description Multiple plot function. ggplot objects can be passed in ..., or 117 | #' to plotlist (as a list of ggplot objects) 118 | #' @param ... a sequence of ggplots to be plotted in the grid. 119 | #' @param plotlist a list containing ggplots as elements. 120 | #' @param cols number of columns in layout 121 | #' @param layout a matrix specifying the layout. If present, 'cols' is ignored. 122 | #' If the layout is something like matrix(c(1,2,3,3), nrow=2, byrow=TRUE), 123 | #' then plot 1 will go in the upper left, 2 will go in the upper right, and 124 | #' 3 will go all the way across the bottom. 125 | #' Taken from R Cookbook 126 | #' 127 | #' @return A ggplot containing the plots passed as arguments 128 | #' @export 129 | multiplot <- function(..., plotlist = NULL, cols = 1, layout = NULL) { 130 | # library(grid) 131 | 132 | # Make a list from the ... 
arguments and plotlist 133 | plots <- c(list(...), plotlist) 134 | 135 | numPlots <- length(plots) 136 | 137 | # If layout is NULL, then use 'cols' to determine layout 138 | if (is.null(layout)) { 139 | # Make the panel 140 | # ncol: Number of columns of plots 141 | # nrow: Number of rows needed, calculated from # of cols 142 | layout <- matrix(seq(1, cols * ceiling(numPlots / cols)), 143 | ncol = cols, nrow = ceiling(numPlots / cols) 144 | ) 145 | } 146 | 147 | if (numPlots == 1) { 148 | print(plots[[1]]) 149 | } else { 150 | # Set up the page 151 | grid::grid.newpage() 152 | grid::pushViewport(grid::viewport(layout = grid::grid.layout(nrow(layout), ncol(layout)))) 153 | 154 | # Make each plot, in the correct location 155 | for (i in 1:numPlots) { 156 | # Get the i,j matrix positions of the regions that contain this subplot 157 | matchidx <- as.data.frame(which(layout == i, arr.ind = TRUE)) 158 | 159 | print(plots[[i]], vp = grid::viewport( 160 | layout.pos.row = matchidx$row, 161 | layout.pos.col = matchidx$col 162 | )) 163 | } 164 | } 165 | } 166 | -------------------------------------------------------------------------------- /R/scadReg.R: -------------------------------------------------------------------------------- 1 | #' @export 2 | scadReg <- function(X, y, family = "gaussian", penalty = "SCAD", 3 | gamma = 3.7, alpha = 1, lambda.min = ifelse(n > p, .001, .05), 4 | nlambda = 100, lambda, eps = .001, max.iter = 1000, convex = TRUE, 5 | dfmax = p + 1, penalty.factor = rep(1, ncol(X)), 6 | warn = TRUE, returnX = FALSE, ...) { 7 | # Coersion 8 | # if (class(X) != "matrix") { 9 | # tmp <- try(X <- model.matrix(~0+., data=X), silent=TRUE) 10 | # if (class(tmp)[1] == "try-error") stop("X must be a matrix or able to be coerced to a matrix") 11 | # } 12 | 13 | # Error checking 14 | standardize <- TRUE 15 | if (gamma <= 1 & penalty == "MCP") stop("gamma must be greater than 1 for the MC penalty") 16 | if (gamma <= 2 & penalty == "SCAD") stop("gamma must be greater than 2 for the SCAD penalty") 17 | if (nlambda < 2) stop("nlambda must be at least 2") 18 | if (alpha <= 0) stop("alpha must be greater than 0; choose a small positive number instead") 19 | if (any(is.na(y)) | any(is.na(X))) stop("Missing data (NA's) detected. Take actions (e.g., removing cases, removing features, imputation) to eliminate missing data before passing X and y to ncvreg") 20 | if (length(penalty.factor) != ncol(X)) stop("penalty.factor does not match up with X") 21 | if (family == "binomial" & length(table(y)) > 2) stop("Attemping to use family='binomial' with non-binary data") 22 | if (family == "binomial" & !identical(sort(unique(y)), 0:1)) y <- as.numeric(y == max(y)) 23 | 24 | ## Deprication support 25 | dots <- list(...) 
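# Accept the legacy argument name 'n.lambda' as an alias for 'nlambda'.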
26 | if ("n.lambda" %in% names(dots)) nlambda <- dots$n.lambda 27 | 28 | ## Set up XX, yy, lambda 29 | if (standardize) { 30 | std <- standardize2(as.matrix(X)) 31 | XX <- as(Matrix::Matrix(std[[1]], sparse = TRUE), "dgCMatrix") 32 | center <- as.numeric(std[[2]]) 33 | scale <- as.numeric(std[[3]]) 34 | nz <- which(scale > 1e-6) 35 | if (length(nz) != ncol(XX)) XX <- XX[, nz, drop = FALSE] 36 | penalty.factor <- penalty.factor[nz] 37 | } else { 38 | XX <- as(Matrix::Matrix(X, sparse = TRUE), "dgCMatrix") 39 | } 40 | 41 | p <- ncol(XX) 42 | 43 | if (family == "gaussian") { 44 | yy <- y - mean(y) 45 | } else { 46 | yy <- y 47 | } 48 | n <- length(yy) 49 | if (missing(lambda)) { 50 | # lambda <- setupLambda(if (standardize) XX else X, yy, family, alpha, lambda.min, nlambda, penalty.factor) 51 | lambda <- setupLambda2(as.matrix(XX), yy, family, alpha, lambda.min, nlambda, penalty.factor) 52 | user.lambda <- FALSE 53 | } else { 54 | nlambda <- length(lambda) 55 | user.lambda <- TRUE 56 | } 57 | 58 | ## Fit 59 | if (family == "gaussian" & standardize == TRUE) { 60 | 61 | # res <- cdfit_gaussianTEST(XX, yy, penalty, lambda, eps, as.integer(max.iter), as.double(gamma), penalty.factor, alpha, as.integer(dfmax), as.integer(user.lambda | any(penalty.factor==0))) 62 | res <- cdfit_gaussianTEST(XX, yy, lambda, eps, as.integer(max.iter), as.double(gamma), penalty.factor, alpha, as.integer(dfmax), as.integer(user.lambda | any(penalty.factor == 0))) 63 | a <- rep(mean(y), nlambda) 64 | # b <- matrix(res[[1]], p, nlambda) 65 | b <- t(res[[1]]) 66 | b[is.nan(b)] <- 0 67 | loss <- res[[2]] 68 | iter <- res[[3]] 69 | } else if (family == "gaussian" & standardize == FALSE & 1 == 0) { 70 | beta <- cdfit_rawTEST(X, y, penalty, lambda, eps, as.integer(max.iter), as.double(gamma), penalty.factor, alpha, as.integer(dfmax), as.integer(user.lambda | any(penalty.factor == 0))) 71 | # b <- matrix(res[[1]], p, nlambda) 72 | # loss <- res[[2]] 73 | # iter <- res[[3]] 74 | # else if (family=="binomial") { 75 | # res <- .Call("cdfit_binomial", XX, yy, penalty, lambda, eps, as.integer(max.iter), as.double(gamma), penalty.factor, alpha, as.integer(dfmax), as.integer(user.lambda | any(penalty.factor==0)), as.integer(warn)) 76 | # a <- res[[1]] 77 | # b <- matrix(res[[2]], p, nlambda) 78 | # loss <- res[[3]] 79 | # iter <- res[[4]] 80 | # } else if (family=="poisson") { 81 | # res <- .Call("cdfit_poisson", XX, yy, penalty, lambda, eps, as.integer(max.iter), as.double(gamma), penalty.factor, alpha, as.integer(dfmax), as.integer(user.lambda | any(penalty.factor==0)), as.integer(warn)) 82 | # a <- res[[1]] 83 | # b <- matrix(res[[2]], p, nlambda) 84 | # loss <- res[[3]] 85 | # iter <- res[[4]] 86 | # } 87 | } 88 | ## Eliminate saturated lambda values, if any 89 | ind <- !is.na(iter) 90 | 91 | if (family != "gaussian" | standardize == TRUE) a <- a[ind] 92 | b <- b[, ind, drop = FALSE] 93 | iter <- iter[ind] 94 | lambda <- lambda[ind] 95 | loss <- loss[ind] 96 | # if (warn & any(iter==max.iter)) warning("Algorithm failed to converge for some values of lambda") 97 | 98 | ## Local convexity? 
99 | # convex.min <- if (convex & standardize) convexMin(b, XX, penalty, gamma, lambda*(1-alpha), family, penalty.factor, a=a) else NULL 100 | 101 | ## Unstandardize 102 | if (standardize) { 103 | beta <- b / scale[nz] 104 | val <- structure(list( 105 | beta = beta, 106 | lambda = lambda, 107 | center = center, 108 | scale = scale, 109 | iter = iter 110 | )) 111 | return(val) 112 | # beta <- matrix(0, nrow=(ncol(X)+1), ncol=length(lambda)) 113 | # bb <- b/scale[nz] 114 | # beta[nz+1,] <- bb 115 | # beta[1,] <- a - crossprod(center[nz], bb) 116 | } else { 117 | beta <- if (family == "gaussian") b else rbind(a, b) 118 | } 119 | # 120 | # ## Names 121 | # varnames <- if (is.null(colnames(X))) paste("V",1:ncol(X),sep="") else colnames(X) 122 | # if (family!="gaussian" | standardize==TRUE) varnames <- c("(Intercept)", varnames) 123 | # dimnames(beta) <- list(varnames, round(lambda,digits=4)) 124 | # 125 | # ## Output 126 | # val <- structure(list(beta = beta, 127 | # iter = iter, 128 | # lambda = lambda, 129 | # penalty = penalty, 130 | # family = family, 131 | # gamma = gamma, 132 | # alpha = alpha, 133 | # convex.min = convex.min, 134 | # loss = loss, 135 | # penalty.factor = penalty.factor, 136 | # n = n), 137 | # class = "ncvreg") 138 | # if (family=="poisson") val$y <- y 139 | # if (returnX) { 140 | # val$X <- XX 141 | # val$center <- center 142 | # val$scale <- scale 143 | # val$y <- yy 144 | # } 145 | # val 146 | } 147 | 148 | setupLambda2 <- function(X, y, family, alpha, lambda.min, nlambda, penalty.factor) { 149 | n <- nrow(X) 150 | p <- ncol(X) 151 | 152 | ## Determine lambda.max 153 | ind <- which(penalty.factor != 0) 154 | if (length(ind) != p) { 155 | fit <- glm(y ~ X[, -ind], family = family) 156 | } else { 157 | fit <- glm(y ~ 1, family = family) 158 | } 159 | if (family == "gaussian") { 160 | zmax <- maxprod(X, fit$residuals, ind, penalty.factor) / n 161 | } else { 162 | zmax <- maxprod(X, residuals(fit, "working") * fit$weights, ind, penalty.factor) / n 163 | } 164 | lambda.max <- zmax / alpha 165 | if (lambda.min == 0) { 166 | lambda <- c(exp(seq(log(lambda.max), log(.001 * lambda.max), len = nlambda - 1)), 0) 167 | } else { 168 | lambda <- exp(seq(log(lambda.max), log(lambda.min * lambda.max), len = nlambda)) 169 | } 170 | 171 | if (length(ind) != p) lambda[1] <- lambda[1] * 1.000001 172 | lambda 173 | } 174 | 175 | convexMin <- function(b, X, penalty, gamma, l2, family, penalty.factor, a, Delta = NULL) { 176 | n <- nrow(X) 177 | p <- ncol(X) 178 | l <- ncol(b) 179 | 180 | if (penalty == "MCP") { 181 | k <- 1 / gamma 182 | } else if (penalty == "SCAD") { 183 | k <- 1 / (gamma - 1) 184 | } else if (penalty == "lasso") { 185 | return(NULL) 186 | } 187 | if (l == 0) { 188 | return(NULL) 189 | } 190 | 191 | val <- NULL 192 | for (i in 1:l) { 193 | A1 <- if (i == 1) rep(1, p) else b[, i] == 0 194 | if (i == l) { 195 | L2 <- l2[i] 196 | U <- A1 197 | } else { 198 | A2 <- b[, i + 1] == 0 199 | U <- A1 & A2 200 | L2 <- l2[i + 1] 201 | } 202 | if (sum(!U) == 0) next 203 | Xu <- X[, !U] 204 | p.. <- k * (penalty.factor[!U] != 0) - L2 * penalty.factor[!U] 205 | if (family == "gaussian") { 206 | if (any(A1 != A2)) { 207 | eigen.min <- min(eigen(crossprod(Xu) / n - diag(p.., length(p..), length(p..)))$values) 208 | } 209 | } else if (family == "binomial") { 210 | if (i == l) { 211 | eta <- a[i] + X %*% b[, i] 212 | } else { 213 | eta <- a[i + 1] + X %*% b[, i + 1] 214 | } 215 | pi. <- exp(eta) / (1 + exp(eta)) 216 | w <- as.numeric(pi. 
* (1 - pi.)) 217 | w[eta > log(.9999 / .0001)] <- .0001 218 | w[eta < log(.0001 / .9999)] <- .0001 219 | Xu <- sqrt(w) * cbind(1, Xu) 220 | xwxn <- crossprod(Xu) / n 221 | eigen.min <- min(eigen(xwxn - diag(c(0, diag(xwxn)[-1] * p..)))$values) 222 | } else if (family == "poisson") { 223 | if (i == l) { 224 | eta <- a[i] + X %*% b[, i] 225 | } else { 226 | eta <- a[i + 1] + X %*% b[, i + 1] 227 | } 228 | mu <- exp(eta) 229 | w <- as.numeric(mu) 230 | Xu <- sqrt(w) * cbind(1, Xu) 231 | xwxn <- crossprod(Xu) / n 232 | eigen.min <- min(eigen(xwxn - diag(c(0, diag(xwxn)[-1] * p..)))$values) 233 | } else if (family == "cox") { 234 | eta <- if (i == l) X %*% b[, i] else X %*% b[, i + 1] 235 | haz <- drop(exp(eta)) 236 | rsk <- rev(cumsum(rev(haz))) 237 | h <- haz * cumsum(Delta / rsk) 238 | xwxn <- crossprod(sqrt(h) * Xu) / n 239 | eigen.min <- min(eigen(xwxn - diag(diag(xwxn) * p.., nrow(xwxn), ncol(xwxn)))$values) 240 | } 241 | 242 | if (eigen.min < 0) { 243 | val <- i 244 | break 245 | } 246 | } 247 | val 248 | } 249 | -------------------------------------------------------------------------------- /R/simulateVAR.R: -------------------------------------------------------------------------------- 1 | #' @title VAR simulation 2 | #' 3 | #' @description This function generates a simulated multivariate VAR time series. 4 | #' 5 | #' @usage simulateVAR(N, p, nobs, rho, sparsity, mu, method, covariance, ...) 6 | #' 7 | #' @param N dimension of the time series. 8 | #' @param p number of lags of the VAR model. 9 | #' @param nobs number of observations to be generated. 10 | #' @param rho base value for the covariance matrix. 11 | #' @param sparsity density (in percentage) of the number of nonzero elements of the VAR matrices. 12 | #' @param mu a vector containing the mean of the simulated process. 13 | #' @param method which method to use to generate the VAR matrix. Possible values 14 | #' are \code{"normal"} or \code{"bimodal"}. 15 | #' @param covariance type of covariance matrix to use in the simulation. Possible 16 | #' values: \code{"toeplitz"}, \code{"block1"}, \code{"block2"} or simply \code{"diagonal"}. 17 | #' @param ... the options for the simulation. These are: 18 | #' \code{muMat}: the mean of the entries of the VAR matrices; 19 | #' \code{sdMat}: the sd of the entries of the matrices; 20 | #' 21 | #' @return A a list of NxN matrices ordered by lag 22 | #' @return data a list with two elements: \code{series} the multivariate time series and 23 | #' \code{noises} the time series of errors 24 | #' @return S the variance/covariance matrix of the process 25 | #' 26 | #' @export 27 | simulateVAR <- function(N = 100, p = 1, nobs = 250, rho = 0.5, sparsity = 0.05, 28 | mu = 0, method = "normal", covariance = "Toeplitz", ...) { 29 | opt <- list(...) 
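# Optional arguments: 'fixedMat' supplies the VAR matrices directly (skipping
# the random generation below), while 'SNR' rescales the noise covariance to a
# target signal-to-noise ratio further down.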
30 | fixedMat <- opt$fixedMat 31 | SNR <- opt$SNR 32 | 33 | # Create a var object to save the matrices (the output) 34 | out <- list() 35 | attr(out, "class") <- "var" 36 | attr(out, "type") <- "simulation" 37 | 38 | out$A <- list() 39 | 40 | if (!is.null(fixedMat)) { 41 | # The user passed a list of matrices 42 | out$A <- fixedMat 43 | if (!checkMatrices(out$A)) { 44 | stop("The matrices you passed are incompatible.") 45 | } 46 | cVAR <- as.matrix(companionVAR(out)) 47 | if (max(Mod(eigen(cVAR)$values)) >= 1) { 48 | warning("The VAR you passed is unstable.") 49 | } 50 | } else { 51 | stable <- FALSE 52 | while (stable == FALSE) { 53 | for (i in 1:p) { 54 | out$A[[i]] <- createSparseMatrix(sparsity = sparsity, N = N, method = method, stationary = TRUE, p = p, ...) 55 | l <- max(Mod(eigen(out$A[[i]])$values)) 56 | while ((l > 1) | (l == 0)) { 57 | out$A[[i]] <- createSparseMatrix(sparsity = sparsity, N = N, method = method, stationary = TRUE, p = p, ...) 58 | l <- max(Mod(eigen(out$A[[i]])$values)) 59 | } 60 | } 61 | cVAR <- as.matrix(companionVAR(out)) 62 | if (max(Mod(eigen(cVAR)$values)) < 1) { 63 | stable <- TRUE 64 | } 65 | } 66 | } 67 | 68 | # Covariance Matrix: Toeplitz, Block1 or Block2 69 | if (covariance == "block1") { 70 | l <- floor(N / 2) 71 | I <- diag(1 - rho, nrow = N) 72 | r <- matrix(0, nrow = N, ncol = N) 73 | r[1:l, 1:l] <- rho 74 | r[(l + 1):N, (l + 1):N] <- diag(rho, nrow = (N - l)) 75 | C <- I + r 76 | } else if (covariance == "block2") { 77 | l <- floor(N / 2) 78 | I <- diag(1 - rho, nrow = N) 79 | r <- matrix(0, nrow = N, ncol = N) 80 | r[1:l, 1:l] <- rho 81 | r[(l + 1):N, (l + 1):N] <- rho 82 | C <- I + r 83 | } else if (covariance == "Toeplitz") { 84 | r <- rho^(1:N) 85 | C <- Matrix::toeplitz(r) 86 | } else if (covariance == "Wishart") { 87 | r <- rho^(1:N) 88 | S <- Matrix::toeplitz(r) 89 | C <- stats::rWishart(1, 2 * N, S) 90 | C <- as.matrix(C[, , 1]) 91 | } else if (covariance == "diagonal") { 92 | C <- diag(x = rho, nrow = N, ncol = N) 93 | } else { 94 | stop("Unknown covariance matrix type. 
Possible choices are: toeplitz, block1, block2 or diagonal") 95 | } 96 | 97 | # Adjust Signal to Noise Ratio 98 | if (!is.null(SNR)) { 99 | if (SNR == 0) { 100 | stop("Signal to Noise Ratio must be greater than 0.") 101 | } 102 | s <- max(abs(cVAR)) / opt$SNR 103 | C <- diag(s, N, N) %*% C %*% diag(s, N, N) 104 | } 105 | 106 | # Matrix for MA part 107 | theta <- matrix(0, N, N) 108 | 109 | # Generate the VAR process 110 | data <- generateVARseries(nobs = nobs, mu, AR = out$A, sigma = C, skip = 200) 111 | 112 | # Complete the output 113 | out$series <- data$series 114 | out$noises <- data$noises 115 | out$sigma <- C 116 | 117 | return(out) 118 | } 119 | 120 | generateVARseries <- function(nobs, mu, AR, sigma, skip = 200) { 121 | 122 | # This function creates the simulated time series 123 | 124 | N <- nrow(sigma) 125 | nT <- nobs + skip 126 | at <- mvtnorm::rmvnorm(nT, rep(0, N), sigma) 127 | 128 | p <- length(AR) 129 | 130 | ist <- p + 1 131 | zt <- matrix(0, nT, N) 132 | 133 | if (length(mu) == 0) { 134 | mu <- rep(0, N) 135 | } 136 | 137 | for (it in ist:nT) { 138 | tmp <- matrix(at[it, ], 1, N) 139 | 140 | for (i in 1:p) { 141 | ph <- AR[[i]] 142 | ztm <- matrix(zt[it - i, ], 1, N) 143 | tmp <- tmp + ztm %*% t(ph) 144 | } 145 | 146 | zt[it, ] <- mu + tmp 147 | } 148 | 149 | # skip the first skip points to initialize the series 150 | zt <- zt[(1 + skip):nT, ] 151 | at <- at[(1 + skip):nT, ] 152 | 153 | out <- list() 154 | out$series <- zt 155 | out$noises <- at 156 | return(out) 157 | } 158 | 159 | checkMatrices <- function(A) { 160 | 161 | # This function check if all the matrices passed have the same dimensions 162 | if (!is.list(A)) { 163 | stop("The matrices must be passed in a list") 164 | } else { 165 | l <- length(A) 166 | if (l > 1) { 167 | for (i in 1:(l - 1)) { 168 | if (sum(1 - (dim(A[[i]]) == dim(A[[i + 1]]))) != 0) { 169 | return(FALSE) 170 | } 171 | } 172 | return(TRUE) 173 | } else { 174 | return(TRUE) 175 | } 176 | } 177 | } 178 | -------------------------------------------------------------------------------- /R/simulateVARX.R: -------------------------------------------------------------------------------- 1 | #' @title VARX simulation 2 | #' 3 | #' @description This function generates a simulated multivariate VAR time series. 4 | #' 5 | #' @usage simulateVARX(N, K, p, m, nobs, rho, 6 | #' sparsityA1, sparsityA2, sparsityA3, 7 | #' mu, method, covariance, ...) 8 | #' 9 | #' @param N dimension of the time series. 10 | #' @param K TODO 11 | #' @param p number of lags of the VAR model. 12 | #' @param m TODO 13 | #' @param nobs number of observations to be generated. 14 | #' @param rho base value for the covariance matrix. 15 | #' @param sparsityA1 density (in percentage) of the number of nonzero elements 16 | #' of the A1 block. 17 | #' @param sparsityA2 density (in percentage) of the number of nonzero elements 18 | #' of the A2 block. 19 | #' @param sparsityA3 density (in percentage) of the number of nonzero elements 20 | #' of the A3 block. 21 | #' @param mu a vector containing the mean of the simulated process. 22 | #' @param method which method to use to generate the VAR matrix. Possible values 23 | #' are \code{"normal"} or \code{"bimodal"}. 24 | #' @param covariance type of covariance matrix to use in the simulation. Possible 25 | #' values: \code{"toeplitz"}, \code{"block1"}, \code{"block2"} or simply \code{"diagonal"}. 26 | #' @param ... the options for the simulation. 
These are: 27 | #' \code{muMat}: the mean of the entries of the VAR matrices; 28 | #' \code{sdMat}: the sd of the entries of the matrices; 29 | #' 30 | #' @return A a list of NxN matrices ordered by lag 31 | #' @return data a list with two elements: \code{series} the multivariate time series and 32 | #' \code{noises} the time series of errors 33 | #' @return S the variance/covariance matrix of the process 34 | #' 35 | #' @export 36 | simulateVARX <- function(N = 40, K = 10, p = 1, m = 1, nobs = 250, rho = 0.5, 37 | sparsityA1 = 0.05, sparsityA2 = 0.5, sparsityA3 = 0.5, 38 | mu = 0, method = "normal", covariance = "Toeplitz", ...) { 39 | opt <- list(...) 40 | fixedMat <- opt$fixedMat 41 | SNR <- opt$SNR 42 | 43 | # Create a var object to save the matrices (the output) 44 | out <- list() 45 | attr(out, "class") <- "varx" 46 | attr(out, "type") <- "simulation" 47 | 48 | out$A <- list() 49 | out$A1 <- list() 50 | out$A2 <- list() 51 | out$A3 <- list() 52 | out$A4 <- list() 53 | pX <- max(p, m) 54 | 55 | # Create D matrices (null) 56 | for (i in 1:pX) { 57 | out$A4[[i]] <- matrix(0, nrow = K, ncol = N) 58 | } 59 | 60 | stable <- FALSE 61 | 62 | while (!stable) { 63 | # Randomly select an order for C matrices in 1:pX 64 | s <- sample(1:pX, 1) 65 | 66 | # Create random C matrices with a given sparsity 67 | for (i in 1:s) { 68 | out$A3[[i]] <- createSparseMatrix(sparsity = sparsityA3, N = K, method = method, stationary = TRUE, p = 1, ...) 69 | l <- max(Mod(eigen(out$A3[[i]])$values)) 70 | while ((l > 1) | (l == 0)) { 71 | out$A3[[i]] <- createSparseMatrix(sparsity = sparsityA3, N = K, method = method, stationary = TRUE, p = 1, ...) 72 | l <- max(Mod(eigen(out$A3[[i]])$values)) 73 | } 74 | } 75 | if (s < pX) { 76 | for (i in (s + 1):pX) { 77 | out$A3[[i]] <- matrix(0, nrow = K, ncol = K) 78 | } 79 | } 80 | 81 | # Create random A matrices with a given sparsity 82 | for (i in 1:p) { 83 | out$A1[[i]] <- createSparseMatrix(sparsity = sparsityA1, N = N, method = method, stationary = TRUE, p = p, ...) 84 | l <- max(Mod(eigen(out$A1[[i]])$values)) 85 | while ((l > 1) | (l == 0)) { 86 | out$A1[[i]] <- createSparseMatrix(sparsity = sparsityA1, N = N, method = method, stationary = TRUE, p = p, ...) 87 | l <- max(Mod(eigen(out$A1[[i]])$values)) 88 | } 89 | } 90 | if (p < pX) { 91 | for (i in (p + 1):pX) { 92 | out$A1[[i]] <- matrix(0, nrow = N, ncol = N) 93 | } 94 | } 95 | 96 | # Create random B matrices 97 | for (i in 1:m) { 98 | R <- max(K, N) 99 | tmp <- createSparseMatrix(sparsity = sparsityA2, N = R, method = method, stationary = TRUE, p = p, ...) 
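# The exogenous-impact block A2 is rectangular (N x K): a square sparse matrix
# of size max(K, N) is drawn and only its top-left N x K block is kept.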
100 | out$A2[[i]] <- tmp[1:N, 1:K] 101 | } 102 | if (m < pX) { 103 | for (i in (m + 1):pX) { 104 | out$A2[[i]] <- matrix(0, nrow = N, ncol = K) 105 | } 106 | } 107 | 108 | # Now "glue" all the matrices together 109 | for (i in 1:pX) { 110 | tmp1 <- cbind(out$A1[[i]], out$A2[[i]]) 111 | tmp2 <- cbind(out$A4[[i]], out$A3[[i]]) 112 | out$A[[i]] <- rbind(tmp1, tmp2) 113 | } 114 | 115 | cVAR <- as.matrix(companionVAR(out)) 116 | if (max(Mod(eigen(cVAR)$values)) < 1) { 117 | stable <- TRUE 118 | } 119 | } 120 | 121 | N <- N + K 122 | # Covariance Matrix: Toeplitz, Block1 or Block2 123 | if (covariance == "block1") { 124 | l <- floor(N / 2) 125 | I <- diag(1 - rho, nrow = N) 126 | r <- matrix(0, nrow = N, ncol = N) 127 | r[1:l, 1:l] <- rho 128 | r[(l + 1):N, (l + 1):N] <- diag(rho, nrow = (N - l)) 129 | C <- I + r 130 | } else if (covariance == "block2") { 131 | l <- floor(N / 2) 132 | I <- diag(1 - rho, nrow = N) 133 | r <- matrix(0, nrow = N, ncol = N) 134 | r[1:l, 1:l] <- rho 135 | r[(l + 1):N, (l + 1):N] <- rho 136 | C <- I + r 137 | } else if (covariance == "Toeplitz") { 138 | r <- rho^(1:N) 139 | C <- Matrix::toeplitz(r) 140 | } else if (covariance == "Wishart") { 141 | r <- rho^(1:N) 142 | S <- Matrix::toeplitz(r) 143 | C <- stats::rWishart(1, 2 * N, S) 144 | C <- as.matrix(C[, , 1]) 145 | } else if (covariance == "diagonal") { 146 | C <- diag(x = rho, nrow = N, ncol = N) 147 | } else { 148 | stop("Unknown covariance matrix type. Possible choices are: toeplitz, block1, block2 or diagonal") 149 | } 150 | 151 | # Adjust Signal to Noise Ratio 152 | if (!is.null(SNR)) { 153 | if (SNR == 0) { 154 | stop("Signal to Noise Ratio must be greater than 0.") 155 | } 156 | s <- max(abs(cVAR)) / opt$SNR 157 | C <- diag(s, N, N) %*% C %*% diag(s, N, N) 158 | } 159 | 160 | # Matrix for MA part 161 | theta <- matrix(0, N, N) 162 | 163 | # Generate the VAR process 164 | data <- generateVARseries(nobs = nobs, mu, AR = out$A, sigma = C, skip = 200) 165 | 166 | # Complete the output 167 | out$series <- data$series[, 1:(N - K)] 168 | out$Xt <- data$series[, (N - K + 1):N] 169 | out$noises <- data$noises 170 | out$sigma <- C 171 | 172 | return(out) 173 | } 174 | 175 | generateVARXseries <- function(nobs, mu, AR, sigma, skip = 200) { 176 | 177 | # This function creates the simulated time series 178 | 179 | N <- nrow(sigma) 180 | nT <- nobs + skip 181 | at <- mvtnorm::rmvnorm(nT, rep(0, N), sigma) 182 | 183 | p <- length(AR) 184 | 185 | ist <- p + 1 186 | zt <- matrix(0, nT, N) 187 | 188 | if (length(mu) == 0) { 189 | mu <- rep(0, N) 190 | } 191 | 192 | for (it in ist:nT) { 193 | tmp <- matrix(at[it, ], 1, N) 194 | 195 | for (i in 1:p) { 196 | ph <- AR[[i]] 197 | ztm <- matrix(zt[it - i, ], 1, N) 198 | tmp <- tmp + ztm %*% t(ph) 199 | } 200 | 201 | zt[it, ] <- mu + tmp 202 | } 203 | 204 | # skip the first skip points to initialize the series 205 | zt <- zt[(1 + skip):nT, ] 206 | at <- at[(1 + skip):nT, ] 207 | 208 | out <- list() 209 | out$series <- zt 210 | out$noises <- at 211 | return(out) 212 | } 213 | 214 | checkMatricesX <- function(A) { 215 | 216 | # This function check if all the matrices passed have the same dimensions 217 | if (!is.list(A)) { 218 | stop("The matrices must be passed in a list") 219 | } else { 220 | l <- length(A) 221 | if (l > 1) { 222 | for (i in 1:(l - 1)) { 223 | if (sum(1 - (dim(A[[i]]) == dim(A[[i + 1]]))) != 0) { 224 | return(FALSE) 225 | } 226 | } 227 | return(TRUE) 228 | } else { 229 | return(TRUE) 230 | } 231 | } 232 | } 233 | 
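# Usage sketch (illustrative; only the argument names in the signature above
# come from the package, the values and the printed fields are assumptions):
# sim <- simulateVARX(N = 20, K = 5, p = 1, m = 1, nobs = 300, rho = 0.5)
# dim(sim$series)  # endogenous part, nobs x N
# dim(sim$Xt)      # exogenous part, nobs x K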
-------------------------------------------------------------------------------- /R/sparsevar.R: -------------------------------------------------------------------------------- 1 | #' sparsevar: A package to estimate multivariate time series models (such as VAR and 2 | #' VECM), under the sparsity hypothesis. 3 | #' 4 | #' It performs the estimation of the matrices of the models using penalized 5 | #' least squares methods such as LASSO, SCAD and MCP. 6 | #' 7 | #' @section sparsevar functions: 8 | #' \code{fitVAR}, \code{fitVECM}, \code{simulateVAR}, \code{createSparseMatrix}, 9 | #' \code{plotMatrix}, \code{plotVAR}, \code{plotVECM} 10 | #' \code{l2norm}, \code{l1norm}, \code{lInftyNorm}, \code{maxNorm}, \code{frobNorm}, 11 | #' \code{spectralRadius}, \code{spectralNorm}, \code{impulseResponse} 12 | #' 13 | #' @docType package 14 | #' @name sparsevar 15 | #' 16 | NULL 17 | -------------------------------------------------------------------------------- /R/timeSlice.R: -------------------------------------------------------------------------------- 1 | timeSliceVAR <- function(data, p = 1, penalty = "ENET", opt) { 2 | if (penalty == "ENET") { 3 | # call timeslice with ENET 4 | out <- timeSliceVAR_ENET(data, p, opt) 5 | } else if (penalty == "SCAD" | penalty == "MCP" | penalty == "SCAD2") { 6 | # call timeslice with SCAD or MCP 7 | out <- timeSliceVAR_SCAD(data, p, opt, penalty) 8 | } else { 9 | # error 10 | stop("Unknown penalty. Possible values are \"ENET\", \"SCAD\" or \"MCP\".") 11 | } 12 | 13 | out$penalty <- penalty 14 | return(out) 15 | } 16 | 17 | timeSliceVAR_ENET <- function(data, p, opt) { 18 | t <- Sys.time() 19 | nr <- nrow(data) 20 | nc <- ncol(data) 21 | 22 | threshold <- ifelse(!is.null(opt$threshold), opt$threshold, FALSE) 23 | returnFit <- ifelse(!is.null(opt$returnFit), opt$returnFit, FALSE) 24 | methodCov <- ifelse(!is.null(opt$methodCov), opt$methodCov, "tiger") 25 | a <- ifelse(!is.null(opt$alpha), opt$alpha, 1) 26 | l <- ifelse(!is.null(opt$leaveOut), opt$leaveOut, 10) 27 | ## TODO: Add the look ahead period > 1 28 | winLength <- nr - l 29 | horizon <- 1 30 | 31 | trDt <- transformData(data[1:winLength, ], p, opt) 32 | lam <- glmnet::glmnet(trDt$X, trDt$y, alpha = a)$lambda 33 | 34 | resTS <- matrix(0, ncol = l + 1, nrow = length(lam)) 35 | resTS[, 1] <- lam 36 | 37 | for (i in 1:l) { 38 | d <- data[i:(winLength + i), ] 39 | fit <- varENET(d[1:(nrow(d) - 1), ], p, lam, opt) 40 | resTS[, i + 1] <- computeErrors(d, p, fit) 41 | } 42 | 43 | finalRes <- matrix(0, ncol = 3, nrow = length(lam)) 44 | finalRes[, 1] <- lam 45 | finalRes[, 2] <- rowMeans(resTS[, 2:(l + 1)]) 46 | for (k in 1:length(lam)) { 47 | finalRes[k, 3] <- stats::sd(resTS[k, 2:(l + 1)]) 48 | } 49 | 50 | ix <- which(finalRes[, 2] == min(finalRes[, 2]))[1] 51 | fit <- varENET(data, p, finalRes[ix, 1], opt) 52 | 53 | Avector <- stats::coef(fit, s = finalRes[ix, 1]) 54 | A <- matrix(Avector[2:length(Avector)], nrow = nc, ncol = nc * p, byrow = TRUE) 55 | 56 | elapsed <- Sys.time() - t 57 | 58 | # If threshold = TRUE then set to zero all the entries that are smaller than 59 | # the threshold 60 | if (threshold == TRUE) { 61 | A <- applyThreshold(A, nr, nc, p) 62 | } 63 | 64 | # Get back the list of VAR matrices (of length p) 65 | A <- splitMatrix(A, p) 66 | 67 | # Now that we have the matrices compute the residuals 68 | res <- computeResiduals(trDt$series, A) 69 | 70 | # Create the output 71 | output <- list() 72 | output$mu <- trDt$mu 73 | output$A <- A 74 | 75 | # Do you want the fit? 
76 | if (returnFit == TRUE) { 77 | output$fit <- fit 78 | } 79 | 80 | output$lambda <- finalRes[ix, 1] 81 | output$mse <- finalRes[ix, 2] 82 | output$mseSD <- finalRes[ix, 3] 83 | output$time <- elapsed 84 | output$series <- trDt$series 85 | output$residuals <- res 86 | 87 | # Variance/Covariance estimation 88 | output$sigma <- estimateCovariance(res) 89 | 90 | output$penalty <- "ENET" 91 | output$method <- "timeSlice" 92 | attr(output, "class") <- "var" 93 | attr(output, "type") <- "fit" 94 | 95 | return(output) 96 | 97 | return(finalRes) 98 | } 99 | 100 | timeSliceVAR_SCAD <- function(data, p, opt, penalty) { 101 | t <- Sys.time() 102 | nr <- nrow(data) 103 | nc <- ncol(data) 104 | 105 | picasso <- ifelse(!is.null(opt$picasso), opt$picasso, FALSE) 106 | threshold <- ifelse(!is.null(opt$threshold), opt$threshold, FALSE) 107 | returnFit <- ifelse(!is.null(opt$returnFit), opt$returnFit, FALSE) 108 | methodCov <- ifelse(!is.null(opt$methodCov), opt$methodCov, "tiger") 109 | a <- ifelse(!is.null(opt$alpha), opt$alpha, 1) 110 | ## TODO: Add the look ahead period > 1 111 | l <- ifelse(!is.null(opt$leaveOut), opt$leaveOut, 10) 112 | winLength <- nr - l 113 | horizon <- 1 114 | 115 | trDt <- transformData(data[1:winLength, ], p, opt) 116 | 117 | if (!picasso) { 118 | if (penalty == "SCAD") { 119 | lam <- ncvreg::ncvreg(as.matrix(trDt$X), trDt$y, 120 | family = "gaussian", penalty = "SCAD", 121 | alpha = 1 122 | )$lambda 123 | } else if (penalty == "MCP") { 124 | lam <- ncvreg::ncvreg(as.matrix(trDt$X), trDt$y, 125 | family = "gaussian", penalty = "MCP", 126 | alpha = 1 127 | )$lambda 128 | } else { 129 | stop("[WIP] Only SCAD and MCP regression are supported.") 130 | # lam <- sparsevar::scadReg(as(trDt$X, "dgCMatrix"), trDt$y, alpha = 1)$lambda 131 | } 132 | } else { 133 | lam <- picasso::picasso(trDt$X, trDt$y, method = "scad", nlambda = 100)$lambda 134 | } 135 | 136 | resTS <- matrix(0, ncol = l + 1, nrow = length(lam)) 137 | resTS[, 1] <- lam 138 | 139 | for (i in 1:l) { 140 | d <- data[i:(winLength + i), ] 141 | if (!picasso) { 142 | if (penalty == "SCAD" | penalty == "SCAD2") { 143 | fit <- varSCAD(d[1:(nrow(d) - 1), ], p, lam, opt, penalty) 144 | resTS[, i + 1] <- computeErrors(d, p, fit, penalty = penalty) 145 | } else { 146 | fit <- varMCP(d[1:(nrow(d) - 1), ], p, lam, opt) 147 | resTS[, i + 1] <- computeErrors(d, p, fit, penalty = "MCP") 148 | } 149 | } else { 150 | trDt <- transformData(d, p, opt) 151 | fit <- picasso::picasso(trDt$X, trDt$y, method = "scad", lambda = lam) 152 | resTS[, i + 1] <- computeErrorsPicasso(d, p, fit) 153 | } 154 | } 155 | 156 | finalRes <- matrix(0, ncol = 3, nrow = length(lam)) 157 | finalRes[, 1] <- lam 158 | finalRes[, 2] <- rowMeans(resTS[, 2:(l + 1)]) 159 | for (k in 1:length(lam)) { 160 | finalRes[k, 3] <- stats::sd(resTS[k, 2:(l + 1)]) 161 | } 162 | 163 | ix <- which(finalRes[, 2] == min(finalRes[, 2]))[1] 164 | 165 | if (!picasso) { 166 | if (penalty == "SCAD") { 167 | fit <- varSCAD(data, p, finalRes[ix, 1], opt) 168 | Avector <- fit$beta[2:nrow(fit$beta), 1] 169 | } else if (penalty == "MCP") { 170 | fit <- varMCP(data, p, finalRes[ix, 1], opt) 171 | Avector <- fit$beta[2:nrow(fit$beta), 1] 172 | } else { 173 | fit <- varSCAD(data, p, finalRes[ix, 1], opt, penalty == "SCAD2") 174 | Avector <- fit$beta[1:nrow(fit$beta), 1] 175 | } 176 | A <- matrix(Avector, nrow = nc, ncol = nc * p, byrow = TRUE) 177 | } else { 178 | trDt <- transformData(data, p, opt) 179 | fit <- picasso::picasso(trDt$X, trDt$y, method = "scad", lambda = finalRes[ix, 1]) 180 | 
Avector <- fit$beta[, 1] 181 | A <- matrix(Avector, nrow = nc, ncol = nc * p, byrow = TRUE) 182 | } 183 | 184 | elapsed <- Sys.time() - t 185 | 186 | # If threshold = TRUE then set to zero all the entries that are smaller than 187 | # the threshold 188 | if (!is.null(opt$threshold)) { 189 | if (opt$threshold == TRUE) { 190 | tr <- 1 / sqrt(p * nc * log(nr)) 191 | L <- abs(A) >= tr 192 | A <- A * L 193 | } 194 | } 195 | 196 | # Get back the list of VAR matrices (of length p) 197 | A <- splitMatrix(A, p) 198 | 199 | # Now that we have the matrices compute the residuals 200 | res <- computeResiduals(data, A) 201 | 202 | # Create the output 203 | output <- list() 204 | output$mu <- trDt$mu 205 | output$A <- A 206 | 207 | # Do you want the fit? 208 | if (!is.null(opt$returnFit)) { 209 | if (opt$returnFit == TRUE) { 210 | output$fit <- fit 211 | } 212 | } 213 | 214 | output$lambda <- finalRes[ix, 1] 215 | output$mse <- finalRes[ix, 2] 216 | output$mseSD <- finalRes[ix, 3] 217 | output$time <- elapsed 218 | output$series <- trDt$series 219 | output$residuals <- res 220 | 221 | # Variance/Covariance estimation 222 | output$sigma <- estimateCovariance(res) 223 | 224 | output$penalty <- penalty 225 | output$method <- "timeSlice" 226 | attr(output, "class") <- "var" 227 | attr(output, "type") <- "fit" 228 | return(output) 229 | 230 | return(finalRes) 231 | } 232 | 233 | computeErrors <- function(data, p, fit, penalty = "ENET") { 234 | nr <- nrow(data) 235 | nc <- ncol(data) 236 | l <- length(fit$lambda) 237 | 238 | err <- rep(0, ncol = 1, nrow = nr) 239 | 240 | for (i in 1:l) { 241 | if (penalty == "ENET") { 242 | Avector <- stats::coef(fit, s = fit$lambda[i]) 243 | A <- matrix(Avector[2:length(Avector)], nrow = nc, ncol = nc * p, byrow = TRUE) 244 | } else if (penalty == "SCAD" | penalty == "MCP") { 245 | Avector <- fit$beta[2:nrow(fit$beta), i] 246 | A <- matrix(Avector, nrow = nc, ncol = nc * p, byrow = TRUE) 247 | } else { 248 | Avector <- fit$beta[1:nrow(fit$beta), i] 249 | A <- matrix(Avector, nrow = nc, ncol = nc * p, byrow = TRUE) 250 | } 251 | 252 | A <- splitMatrix(A, p) 253 | 254 | n <- data[nr, ] 255 | 256 | f <- rep(0, nrow = nc, ncol = 1) 257 | tmpData <- data[((nr - 1) - p + 1):(nr - 1), ] 258 | for (k in 1:p) { 259 | f <- f + A[[k]] %*% data[((nr - 1) - (k - 1)), ] 260 | } 261 | 262 | err[i] <- mean((f - n)^2) 263 | } 264 | 265 | return(err) 266 | } 267 | 268 | computeErrorsPicasso <- function(data, p, fit) { 269 | nr <- nrow(data) 270 | nc <- ncol(data) 271 | l <- length(fit$lambda) 272 | 273 | err <- rep(0, ncol = 1, nrow = nr) 274 | 275 | for (i in 1:l) { 276 | Avector <- fit$beta[, i] 277 | A <- matrix(Avector, nrow = nc, ncol = nc * p, byrow = TRUE) 278 | 279 | A <- splitMatrix(A, p) 280 | 281 | n <- data[nr, ] 282 | 283 | f <- rep(0, nrow = nc, ncol = 1) 284 | tmpData <- data[((nr - 1) - p + 1):(nr - 1), ] 285 | for (k in 1:p) { 286 | f <- f + A[[k]] %*% data[((nr - 1) - (k - 1)), ] 287 | } 288 | 289 | err[i] <- mean((f - n)^2) 290 | } 291 | 292 | return(err) 293 | } 294 | -------------------------------------------------------------------------------- /R/twoStepOLS.R: -------------------------------------------------------------------------------- 1 | twoStepOLS <- function(series, p = 1, penalty = "ENET", method = "cv", ...) { 2 | 3 | ## TODO: rewrite this function and add p>1 support 4 | 5 | ## First step: estimate VAR using LASSO 6 | fit <- fitVAR(data = series, p = p, penalty = penalty, method = method, ...) 
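## Second step (below): keep only the support selected by the penalized fit
## and re-estimate those coefficients by restricted generalized least squares,
## using the shrinkage inverse covariance of the first-step residuals; the
## matrix R encodes which entries of the companion matrix are left free.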
7 | 8 | N <- ncol(fit$A[[1]]) 9 | nobs <- nrow(fit$series) 10 | 11 | bigA <- companionVAR(fit) 12 | 13 | trDt <- transformData(fit$series, p = p, opt = list(method = method, scale = FALSE, center = TRUE)) 14 | 15 | nonZeroEntries <- as.matrix(bigA != 0) 16 | 17 | ## Create matrix R 18 | t <- as.vector(nonZeroEntries) 19 | n <- sum(t != 0) 20 | ix <- which(t != 0) 21 | j <- 1:n 22 | 23 | R <- matrix(0, ncol = n, nrow = length(t)) 24 | for (k in 1:n) { 25 | R[ix[k], j[k]] <- 1 26 | } 27 | 28 | X <- as.matrix(trDt$X) 29 | y <- as.vector(t(fit$series[-(1:p), ])) 30 | 31 | # Metodo A MANO 32 | s <- corpcor::invcov.shrink(fit$residuals, verbose = FALSE) 33 | G <- t(fit$series[-nobs, ]) %*% fit$series[-nobs, ] / nobs 34 | 35 | V <- solve(t(R) %*% (kronecker(G, s) %*% R)) 36 | VV <- nonZeroEntries 37 | VV[nonZeroEntries] <- diag(V) 38 | G1 <- solve(t(R) %*% (kronecker(t(fit$series[-nobs, ]) %*% fit$series[-nobs, ], s)) %*% R) 39 | G2 <- t(R) %*% (kronecker(t(fit$series[-nobs, ]), s)) 40 | 41 | g <- G1 %*% G2 # [ , (N+1):(length(y) + N)] 42 | ga <- g %*% y 43 | 44 | b1 <- vector(length = N * N) 45 | b1 <- R %*% ga 46 | A <- matrix(b1, ncol = N, byrow = F) 47 | 48 | varCov <- R %*% (solve(t(R) %*% (kronecker(G, s)) %*% R) / nobs) %*% t(R) 49 | varA <- matrix(diag(varCov), ncol = N, byrow = F) 50 | 51 | result <- list() 52 | attr(result, "class") <- "var" 53 | 54 | result$A <- splitMatrix(A, p) 55 | result$varA <- list(varA) 56 | 57 | uA <- result$A[[1]] + 2 * sqrt(result$varA[[1]]) 58 | lA <- result$A[[1]] - 2 * sqrt(result$varA[[1]]) 59 | L <- (uA < 0) | (lA > 0) 60 | result$cleanA <- result$A[[1]] * L 61 | result$residuals <- fit$residuals 62 | result$varCov <- varCov 63 | return(result) 64 | } 65 | -------------------------------------------------------------------------------- /R/utils.R: -------------------------------------------------------------------------------- 1 | #' @title L2 matrix norm 2 | #' 3 | #' @description Compute the L2 matrix norm of M 4 | #' @usage l2norm(M) 5 | #' @param M the matrix (real or complex valued) 6 | #' 7 | #' @export 8 | l2norm <- function(M) { 9 | s <- sqrt(spectralRadius(t(M) %*% M)) 10 | return(s) 11 | } 12 | 13 | #' @title L1 matrix norm 14 | #' 15 | #' @description Compute the L1 matrix norm of M 16 | #' @usage l1norm(M) 17 | #' @param M the matrix (real or complex valued) 18 | #' 19 | #' @export 20 | l1norm <- function(M) { 21 | c <- max(colSums(Mod(M))) 22 | return(c) 23 | } 24 | 25 | #' @title L-infinity matrix norm 26 | #' 27 | #' @description Compute the L-infinity matrix norm of M 28 | #' @usage lInftyNorm(M) 29 | #' @param M the matrix (real or complex valued) 30 | #' 31 | #' @export 32 | lInftyNorm <- function(M) { 33 | c <- max(rowSums(Mod(M))) 34 | return(c) 35 | } 36 | 37 | #' @title Max-norm of a matrix 38 | #' 39 | #' @description Compute the max-norm of M 40 | #' @usage maxNorm(M) 41 | #' @param M the matrix (real or complex valued) 42 | #' 43 | #' @export 44 | maxNorm <- function(M) { 45 | return(max(abs(M))) 46 | } 47 | 48 | #' @title Froebenius norm of a matrix 49 | #' 50 | #' @description Compute the Froebenius norm of M 51 | #' @usage frobNorm(M) 52 | #' @param M the matrix (real or complex valued) 53 | #' 54 | #' @export 55 | frobNorm <- function(M) { 56 | A <- (t(M) %*% M) 57 | A <- A * diag(nrow(A)) 58 | return(sqrt(sum(A))) 59 | } 60 | 61 | #' @title Spectral radius 62 | #' 63 | #' @description Compute the spectral radius of M 64 | #' @usage spectralRadius(M) 65 | #' @param M the matrix (real or complex valued) 66 | #' 67 | #' @export 
68 | spectralRadius <- function(M) { 69 | e <- eigen(M) 70 | maxEig <- max(Mod(e$values)) 71 | return(maxEig) 72 | } 73 | 74 | #' @title Spectral norm 75 | #' 76 | #' @description Compute the spectral norm of M 77 | #' @usage spectralNorm(M) 78 | #' @param M the matrix (real or complex valued) 79 | #' 80 | #' @export 81 | spectralNorm <- function(M) { 82 | return(sqrt(spectralRadius(t(M) %*% M))) 83 | } 84 | 85 | #' @title Accuracy metric 86 | #' 87 | #' @description Compute the accuracy of a fit 88 | #' @param referenceM the matrix to use as reference 89 | #' @param A the matrix obtained from a fit 90 | #' 91 | #' @usage accuracy(referenceM, A) 92 | #' 93 | #' @export 94 | accuracy <- function(referenceM, A) { 95 | N <- ncol(A) 96 | L <- A 97 | L[L != 0] <- 1 98 | L[L == 0] <- 0 99 | 100 | genL <- referenceM 101 | genL[genL != 0] <- 1 102 | genL[genL == 0] <- 0 103 | 104 | acc <- 1 - sum(abs(L - genL)) / N^2 # accuracy -(1 - sum(genL)/N^2) 105 | return(acc) 106 | } 107 | 108 | #' @title Check is var 109 | #' 110 | #' @description Check if the input is a var object 111 | #' @param v the object to test 112 | #' 113 | #' @usage checkIsVar(v) 114 | #' 115 | #' @export 116 | checkIsVar <- function(v) { 117 | if (!is.null(attr(v, "class"))) { 118 | ifelse(attr(v, "class") == "var" | attr(v, "class") == "varx", return(TRUE), return(FALSE)) 119 | } else { 120 | return(FALSE) 121 | } 122 | } 123 | -------------------------------------------------------------------------------- /R/utilsVAR.R: -------------------------------------------------------------------------------- 1 | #' @title Transorm data 2 | #' 3 | #' @description Transform the input data 4 | #' 5 | #' @usage transformData(data, p, opt) 6 | #' 7 | #' @param data the data 8 | #' @param p the order of the VAR 9 | #' @param opt a list containing the options 10 | #' 11 | #' @export 12 | transformData <- function(data, p, opt) { 13 | 14 | # get the number of rows and columns 15 | nr <- nrow(data) 16 | nc <- ncol(data) 17 | 18 | # make sure the data is in matrix format 19 | data <- as.matrix(data) 20 | 21 | # scale the matrix columns 22 | scale <- ifelse(is.null(opt$scale), FALSE, opt$scale) 23 | # center the matrix columns (default) 24 | center <- ifelse(is.null(opt$center), TRUE, opt$center) 25 | 26 | if (center == TRUE) { 27 | if (opt$method == "timeSlice") { 28 | leaveOut <- ifelse(is.null(opt$leaveOut), 10, opt$leaveOut) 29 | m <- colMeans(data[1:(nr - leaveOut), ]) 30 | } else { 31 | m <- colMeans(data) 32 | } 33 | cm <- matrix(rep(m, nrow(data)), nrow = nrow(data), byrow = TRUE) 34 | data <- data - cm 35 | } else { 36 | m <- rep(0, nc) 37 | } 38 | 39 | if (scale == TRUE) { 40 | data <- apply(FUN = scale, X = data, MARGIN = 2) 41 | } 42 | 43 | # create Xs and Ys (temp variables) 44 | tmpX <- data[1:(nr - 1), ] 45 | tmpY <- data[2:(nr), ] 46 | 47 | # create the data matrix 48 | tmpX <- duplicateMatrix(tmpX, p) 49 | tmpY <- tmpY[p:nrow(tmpY), ] 50 | 51 | y <- as.vector(tmpY) 52 | 53 | # Hadamard product for data 54 | I <- Matrix::Diagonal(nc) 55 | X <- kronecker(I, tmpX) 56 | 57 | output <- list() 58 | output$X <- X 59 | output$y <- y 60 | output$series <- data 61 | output$mu <- t(m) 62 | 63 | return(output) 64 | } 65 | 66 | #' @title VAR ENET 67 | #' 68 | #' @description Estimate VAR using ENET penalty 69 | #' 70 | #' @usage varENET(data, p, lambdas, opt) 71 | #' 72 | #' @param data the data 73 | #' @param p the order of the VAR 74 | #' @param lambdas a vector containing the lambdas to be used in the fit 75 | #' @param opt a list 
containing the options 76 | #' 77 | #' @export 78 | varENET <- function(data, p, lambdas, opt) { 79 | # transform the dataset 80 | trDt <- transformData(data, p, opt) 81 | 82 | fit <- glmnet::glmnet(trDt$X, trDt$y, lambda = lambdas) 83 | 84 | return(fit) 85 | } 86 | 87 | #' @title VAR SCAD 88 | #' 89 | #' @description Estimate VAR using SCAD penalty 90 | #' 91 | #' @usage varSCAD(data, p, lambdas, opt, penalty) 92 | #' 93 | #' @param data the data 94 | #' @param p the order of the VAR 95 | #' @param lambdas a vector containing the lambdas to be used in the fit 96 | #' @param opt a list containing the options 97 | #' @param penalty a string "SCAD" or something else 98 | #' 99 | #' @export 100 | 101 | varSCAD <- function(data, p, lambdas, opt, penalty = "SCAD") { 102 | # transform the dataset 103 | trDt <- transformData(data, p, opt) 104 | 105 | if (penalty == "SCAD") { 106 | fit <- ncvreg::ncvreg(as.matrix(trDt$X), trDt$y, 107 | family = "gaussian", penalty = "SCAD", 108 | alpha = 1, lambda = lambdas 109 | ) 110 | } else { 111 | stop("[WIP] Only SCAD regression is supported at the moment") 112 | } 113 | return(fit) 114 | } 115 | 116 | #' @title VAR MCP 117 | #' 118 | #' @description Estimate VAR using MCP penalty 119 | #' 120 | #' @usage varMCP(data, p, lambdas, opt) 121 | #' 122 | #' @param data the data 123 | #' @param p the order of the VAR 124 | #' @param lambdas a vector containing the lambdas to be used in the fit 125 | #' @param opt a list containing the options 126 | #' 127 | #' @export 128 | varMCP <- function(data, p, lambdas, opt) { 129 | # transform the dataset 130 | trDt <- transformData(data, p, opt) 131 | 132 | fit <- ncvreg::ncvreg(as.matrix(trDt$X), trDt$y, 133 | family = "gaussian", penalty = "MCP", 134 | alpha = 1, lambda = lambdas 135 | ) 136 | 137 | return(fit) 138 | } 139 | 140 | splitMatrix <- function(M, p) { 141 | nr <- nrow(M) 142 | A <- list() 143 | 144 | for (i in 1:p) { 145 | ix <- ((i - 1) * nr) + (1:nr) 146 | A[[i]] <- M[1:nr, ix] 147 | } 148 | 149 | return(A) 150 | } 151 | 152 | duplicateMatrix <- function(data, p) { 153 | nr <- nrow(data) 154 | nc <- ncol(data) 155 | 156 | outputData <- data 157 | 158 | if (p > 1) { 159 | for (i in 1:(p - 1)) { 160 | tmpData <- matrix(0, nrow = nr, ncol = nc) 161 | tmpData[(i + 1):nr, ] <- data[1:(nr - i), ] 162 | outputData <- cbind(outputData, tmpData) 163 | } 164 | } 165 | 166 | outputData <- outputData[p:nr, ] 167 | return(outputData) 168 | } 169 | 170 | computeResiduals <- function(data, A) { 171 | nr <- nrow(data) 172 | nc <- ncol(data) 173 | p <- length(A) 174 | 175 | res <- matrix(0, ncol = nc, nrow = nr) 176 | f <- matrix(0, ncol = nc, nrow = nr) 177 | 178 | for (i in 1:p) { 179 | tmpD <- rbind(matrix(0, nrow = i, ncol = nc), data[1:(nrow(data) - i), ]) 180 | tmpF <- t(A[[i]] %*% t(tmpD)) 181 | f <- f + tmpF 182 | } 183 | 184 | res <- data - f 185 | return(res) 186 | } 187 | 188 | #' @title Companion VAR 189 | #' 190 | #' @description Build the VAR(1) representation of a VAR(p) process 191 | #' 192 | #' @usage companionVAR(v) 193 | #' 194 | #' @param v the VAR object as from \code{fitVAR} or \code{simulateVAR} 195 | #' 196 | #' @export 197 | companionVAR <- function(v) { 198 | if (!checkIsVar(v)) { 199 | stop("v must be a var object") 200 | } 201 | A <- v$A 202 | nc <- ncol(A[[1]]) 203 | p <- length(A) 204 | if (p > 1) { 205 | bigA <- Matrix::Matrix(0, nrow = p * nc, ncol = p * nc, sparse = TRUE) 206 | for (k in 1:p) { 207 | ix <- ((k - 1) * nc) + (1:nc) 208 | bigA[1:nc, ix] <- A[[k]] 209 | } 210 | 211 | ixR <- (nc 
+ 1):nrow(bigA) 212 | ixC <- 1:((p - 1) * nc) 213 | bigA[ixR, ixC] <- diag(1, nrow = length(ixC), ncol = length(ixC)) 214 | } else { 215 | bigA <- Matrix::Matrix(A[[1]], sparse = TRUE) 216 | } 217 | 218 | return(bigA) 219 | } 220 | 221 | #' @title Bootstrap VAR 222 | #' 223 | #' @description Build the bootstrapped series from the original var 224 | #' 225 | #' @usage bootstrappedVAR(v) 226 | #' 227 | #' @param v the VAR object as from fitVAR or simulateVAR 228 | #' 229 | #' @export 230 | bootstrappedVAR <- function(v) { 231 | 232 | ## This function creates the bootstrapped time series 233 | if (!checkIsVar(v)) { 234 | stop("v must be a var object") 235 | } 236 | 237 | r <- v$residuals 238 | s <- v$series 239 | A <- v$A 240 | N <- ncol(A[[1]]) 241 | p <- length(A) 242 | t <- nrow(r) 243 | r <- r - matrix(colMeans(r), ncol = N, nrow = t) 244 | 245 | zt <- matrix(0, nrow = t, ncol = N) 246 | zt[1:p, ] <- s[1:p, ] 247 | 248 | for (t0 in (p + 1):t) { 249 | ix <- sample((p + 1):t, 1) 250 | u <- r[ix, ] 251 | vv <- rep(0, N) 252 | for (i in 1:p) { 253 | ph <- A[[i]] 254 | vv <- vv + ph %*% zt[(t0 - i), ] 255 | } 256 | vv <- vv + u 257 | zt[t0, ] <- vv 258 | } 259 | 260 | return(zt) 261 | } 262 | 263 | #' @title Test for Ganger Causality 264 | #' 265 | #' @description This function should retain only the coefficients of the 266 | #' matrices of the VAR that are statistically significative (from the bootstrap) 267 | #' 268 | #' @usage testGranger(v, eb) 269 | #' 270 | #' @param v the VAR object as from fitVAR or simulateVAR 271 | #' @param eb the error bands as obtained from errorBands 272 | #' 273 | #' @export 274 | testGranger <- function(v, eb) { 275 | p <- length(v$A) 276 | A <- list() 277 | for (i in 1:p) { 278 | L <- (eb$irfQUB[, , i + 1] >= 0 & eb$irfQLB[, , i + 1] <= 0) 279 | A[[i]] <- v$A[[i]] * (1 - L) 280 | } 281 | 282 | 283 | return(A) 284 | } 285 | 286 | #' @title Computes information criteria for VARs 287 | #' 288 | #' @description This function computes information criterias (AIC, Schwartz and 289 | #' Hannan-Quinn) for VARs. 290 | #' 291 | #' @usage informCrit(v) 292 | #' 293 | #' @param v a list of VAR objects as from fitVAR. 294 | #' 295 | #' @export 296 | informCrit <- function(v) { 297 | if (is.list(v)) { 298 | k <- length(v) 299 | r <- matrix(0, nrow = k, ncol = 3) 300 | for (i in 1:k) { 301 | if (attr(v[[1]], "class") == "var" | attr(v[[1]], "class") == "vecm") { 302 | p <- length(v[[i]]$A) 303 | # Compute sparsity 304 | s <- 0 305 | for (l in 1:p) { 306 | s <- s + sum(v[[i]]$A[[l]] != 0) 307 | } 308 | sp <- s / (p * ncol(v[[i]]$A[[1]])^2) 309 | } else { 310 | stop("List elements must be var or vecm objects.") 311 | } 312 | sigma <- v[[i]]$sigma 313 | nr <- nrow(v[[i]]$residuals) 314 | nc <- ncol(v[[i]]$residuals) 315 | d <- det(sigma) 316 | 317 | r[i, 1] <- log(d) + (2 * p * sp * nc^2) / nr # AIC 318 | r[i, 2] <- log(d) + (log(nr) / nr) * (p * sp * nc^2) # BIC 319 | r[i, 3] <- log(d) + (2 * p * sp * nc^2) / nr * log(log(nr)) # Hannan-Quinn 320 | } 321 | results <- data.frame(r) 322 | colnames(results) <- c("AIC", "BIC", "HannanQuinn") 323 | } else { 324 | stop("Input must be a list of var models.") 325 | } 326 | 327 | return(results) 328 | } 329 | 330 | estimateCovariance <- function(res, ...) 
{ 331 |   nc <- ncol(res) 332 |   s <- corpcor::cov.shrink(res, verbose = FALSE) 333 |   sigma <- matrix(0, ncol = nc, nrow = nc) 334 | 335 |   for (i in 1:nc) { 336 |     for (j in 1:nc) { 337 |       sigma[i, j] <- s[i, j] 338 |     } 339 |   } 340 | 341 |   return(sigma) 342 | } 343 | 344 | #' @title Computes forecasts for VARs 345 | #' 346 | #' @description This function computes forecasts for a given VAR. 347 | #' 348 | #' @usage computeForecasts(v, num_steps) 349 | #' 350 | #' @param v a VAR object as from fitVAR. 351 | #' @param num_steps the number of forecasts to produce. 352 | #' 353 | #' @export 354 | computeForecasts <- function(v, num_steps = 1) { 355 |   if (!checkIsVar(v)) { 356 |     stop("You must pass a var object.") 357 |   } else { 358 |     mu <- v$mu 359 |     data <- v$series 360 |     v <- v$A 361 |   } 362 | 363 |   if (!is.list(v)) { 364 |     stop("v must be a var object or a list of matrices.") 365 |   } else { 366 |     nr <- nrow(data) 367 |     nc <- ncol(v[[1]]) 368 |     p <- length(v) 369 | 370 |     f <- matrix(0, nrow = nc, ncol = num_steps) 371 | 372 |     tmp_data <- matrix(data = t(data[(nr - p + 1):nr, ]), 373 |                        nrow = nc, 374 |                        ncol = num_steps) 375 |     nr <- ncol(tmp_data) 376 | 377 |     for (n in 1:num_steps) { 378 |       for (k in 1:p) { 379 |         if (n == 1) { 380 |           f[, n] <- f[, n] + v[[k]] %*% tmp_data[, nr - k + 1] 381 |         } else { 382 |           if (nr > 1) { 383 |             tmp_data <- cbind(tmp_data[, 2:nr], f[, n - 1]) 384 |           } else { 385 |             tmp_data <- as.matrix(f[, n - 1]) 386 |           } 387 |           f[, n] <- f[, n] + v[[k]] %*% tmp_data[, nr - k + 1] 388 |         } 389 |       } 390 |     } 391 |   } 392 |   # add back the mean, one copy of mu per forecast step
  f <- f + matrix(rep(mu, num_steps), length(mu), num_steps) 393 |   return(f) 394 | } 395 | 396 | applyThreshold <- function(a_mat, nr, nc, p, type = "soft") { 397 |   if (type == "soft") { 398 |     tr <- 1 / sqrt(p * nc * log(nr)) 399 |   } else if (type == "hard") { 400 |     tr <- (nc) ^ (-0.49) 401 |   } else { 402 |     stop("Unknown threshold type. Possible values are: \"soft\" or \"hard\"") 403 |   } 404 | 405 |   l_mat <- abs(a_mat) >= tr 406 |   a_mat <- a_mat * l_mat 407 |   return(a_mat) 408 | } 409 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## Sparse VAR (sparsevar) 2 | [![License](http://img.shields.io/badge/license-GPL%20%28%3E=%202%29-brightgreen.svg?style=flat)](http://www.gnu.org/licenses/gpl-2.0.html) 3 | [![Version](https://img.shields.io/badge/version-0.1.0-oran.svg)](https://github.com/svazzole/sparsevar) 4 | [![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/sparsevar)](https://cran.r-project.org/package=sparsevar) 5 | [![Downloads](http://cranlogs.r-pkg.org/badges/sparsevar)](https://cran.r-project.org/package=sparsevar) 6 | [![Total Downloads](http://cranlogs.r-pkg.org/badges/grand-total/sparsevar?color=brightgreen)](https://cran.r-project.org/package=sparsevar) 7 | [![Build Status](https://travis-ci.org/svazzole/sparsevar.svg?branch=master)](https://travis-ci.org/svazzole/sparsevar) 8 | 9 | A set of R functions to estimate sparse VAR/VECM models. 10 | 11 | ### Installation 12 | 13 | To install the stable version from CRAN: 14 | ```r 15 | install.packages("sparsevar") 16 | ``` 17 | 18 | To install the development version: 19 | ```r 20 | install.packages("devtools") 21 | devtools::install_github("svazzole/sparsevar", "master") 22 | ``` 23 | Check [here](https://www.rstudio.com/products/rpackages/devtools/) for the `devtools` dependencies on your OS.
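Once the package is installed, the following self-contained sketch simulates a small sparse VAR, fits it, checks the stability of the estimate and produces a few forecasts. The function names are the ones documented in the package man pages; the sizes and the seed are only illustrative, and a more detailed walk-through follows in the Quick start below.

```r
library(sparsevar)

set.seed(2021)
# simulate a sparse VAR(1) process with 10 variables
sim <- simulateVAR(N = 10, p = 1)

# estimate the VAR matrix with the default elastic-net penalty
fit <- fitVAR(sim$series, p = 1)

# the estimated process is stable if the spectral radius of its
# companion matrix is smaller than 1
spectralRadius(as.matrix(companionVAR(fit)))

# recursive forecasts for the next 5 observations (one column per step)
preds <- computeForecasts(fit, num_steps = 5)
```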
24 | 25 | ### Quick start 26 | 27 | To load the `sparsevar` package, simply type 28 | ```r 29 | library(sparsevar) 30 | ``` 31 | 32 | Using the functions included in the package, we can easily generate a 20x20 VAR(2) process 33 | ```r 34 | set.seed(1) 35 | sim <- simulateVAR(N = 20, p = 2) 36 | ``` 37 | This command generates a model with two sparse matrices, each with 5% non-zero entries, and a Toeplitz variance/covariance matrix with rho = 0.5. 38 | We can estimate the matrices of the process using, for example, 39 | ```r 40 | fit <- fitVAR(sim$series, p = 2, threshold = TRUE) 41 | ``` 42 | 43 | The results can be seen by plotting the two `var` objects 44 | ```r 45 | plotVAR(sim, fit) 46 | ``` 47 | The first row of the plot shows the matrices of the simulated process; the second row shows their estimates. 48 | 49 | The fit also contains the estimate of the variance/covariance matrix of the residuals 50 | ```r 51 | plotMatrix(fit$sigma) 52 | ``` 53 | 54 | which can be compared with the covariance matrix of the errors of the generating process 55 | ```r 56 | plotMatrix(sim$sigma) 57 | ``` 58 | 59 | ### Usage 60 | 61 | The functions included for model estimation are: 62 | 63 | - `fitVAR`: estimate a sparse multivariate VAR time series with ENET, SCAD or MC+; 64 | - `fitVARX`: estimate a sparse VAR-X model using ENET; 65 | - `fitVECM`: estimate a sparse VECM (Vector Error Correction Model) using penalized LS (again: ENET, SCAD or MC+); 66 | - `impulseResponse`: compute the impulse response function; 67 | - `errorBands`: estimate the error bands for the IRF (using bootstrap). 68 | 69 | For simulations: 70 | 71 | - `simulateVAR`: generate a sparse multivariate VAR time series; 72 | - `simulateVARX`: generate a sparse VARX time series; 73 | - `createSparseMatrix`: create sparse matrices with a given density. 74 | 75 | For plotting: 76 | 77 | - `plotMatrix`: plot matrices and sparse matrices; 78 | - `plotVAR`: plot all the matrices of the model or models in input; 79 | - `plotIRF`: plot the IRF; 80 | - `plotIRFGrid`: multiple plots of IRFs. 81 | 82 | ### Papers using `sparsevar` 83 | [[1](http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005364)] Gibbons SM, Kearney SM, Smillie CS, Alm EJ (2017) Two dynamic regimes in the human gut microbiome. PLoS Comput Biol 13(2): e1005364. 84 | 85 | [[2](https://doi.org/10.1016/j.insmatheco.2019.07.004)] Quentin Guibert, Olivier Lopez, Pierrick Piette, Forecasting mortality rate improvements with a high-dimensional VAR, Insurance: Mathematics and Economics, Volume 88, 2019, Pages 255-272, ISSN 0167-6687. 86 | 87 | ### References 88 | [[1](http://projecteuclid.org/euclid.aos/1434546214)] Basu, Sumanta; Michailidis, George. Regularized estimation in sparse high-dimensional time series models. Ann. Statist. 43 (2015), no. 4, 1535--1567. doi:10.1214/15-AOS1315. 89 | 90 | [[2](https://books.google.it/books/?id=COUFCAAAQBAJ&redir_esc=y)] Lütkepohl, Helmut. New Introduction to Multiple Time Series Analysis. Springer Science & Business Media, 2005, ISBN 3540277528. 91 | -------------------------------------------------------------------------------- /cran-comments.md: -------------------------------------------------------------------------------- 1 | ## Test environments 2 | * local Xubuntu install, R 4.0.2 3 | * Ubuntu 16.04.6 LTS (on travis-ci), R 4.0.2 (2020-06-22) 4 | * win-builder (devel and release) 5 | 6 | ## R CMD check results 7 | There were no ERRORs, no WARNINGs and no NOTEs.
8 | 9 | ## Downstream dependencies 10 | There are currently no downstream dependencies for this package 11 | -------------------------------------------------------------------------------- /man/accuracy.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utils.R 3 | \name{accuracy} 4 | \alias{accuracy} 5 | \title{Accuracy metric} 6 | \usage{ 7 | accuracy(referenceM, A) 8 | } 9 | \arguments{ 10 | \item{referenceM}{the matrix to use as reference} 11 | 12 | \item{A}{the matrix obtained from a fit} 13 | } 14 | \description{ 15 | Compute the accuracy of a fit 16 | } 17 | -------------------------------------------------------------------------------- /man/bootstrappedVAR.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utilsVAR.R 3 | \name{bootstrappedVAR} 4 | \alias{bootstrappedVAR} 5 | \title{Bootstrap VAR} 6 | \usage{ 7 | bootstrappedVAR(v) 8 | } 9 | \arguments{ 10 | \item{v}{the VAR object as from fitVAR or simulateVAR} 11 | } 12 | \description{ 13 | Build the bootstrapped series from the original var 14 | } 15 | -------------------------------------------------------------------------------- /man/checkImpulseZero.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/impulseResponse.R 3 | \name{checkImpulseZero} 4 | \alias{checkImpulseZero} 5 | \title{Check Impulse Zero} 6 | \usage{ 7 | checkImpulseZero(irf) 8 | } 9 | \arguments{ 10 | \item{irf}{irf output from impulseResponse function} 11 | } 12 | \value{ 13 | a matrix containing the indices of the impulse response function that 14 | are 0. 15 | } 16 | \description{ 17 | A function to find which entries of the impulse response function 18 | are zero.
19 | } 20 | -------------------------------------------------------------------------------- /man/checkIsVar.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utils.R 3 | \name{checkIsVar} 4 | \alias{checkIsVar} 5 | \title{Check is var} 6 | \usage{ 7 | checkIsVar(v) 8 | } 9 | \arguments{ 10 | \item{v}{the object to test} 11 | } 12 | \description{ 13 | Check if the input is a var object 14 | } 15 | -------------------------------------------------------------------------------- /man/companionVAR.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utilsVAR.R 3 | \name{companionVAR} 4 | \alias{companionVAR} 5 | \title{Companion VAR} 6 | \usage{ 7 | companionVAR(v) 8 | } 9 | \arguments{ 10 | \item{v}{the VAR object as from \code{fitVAR} or \code{simulateVAR}} 11 | } 12 | \description{ 13 | Build the VAR(1) representation of a VAR(p) process 14 | } 15 | -------------------------------------------------------------------------------- /man/computeForecasts.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utilsVAR.R 3 | \name{computeForecasts} 4 | \alias{computeForecasts} 5 | \title{Computes forecasts for VARs} 6 | \usage{ 7 | computeForecasts(v, num_steps) 8 | } 9 | \arguments{ 10 | \item{v}{a VAR object as from fitVAR.} 11 | 12 | \item{num_steps}{the number of forecasts to produce.} 13 | } 14 | \description{ 15 | This function computes forecasts for a given VAR. 16 | } 17 | -------------------------------------------------------------------------------- /man/createSparseMatrix.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/createSparseMatrix.R 3 | \name{createSparseMatrix} 4 | \alias{createSparseMatrix} 5 | \title{Create Sparse Matrix} 6 | \usage{ 7 | createSparseMatrix( 8 | N, 9 | sparsity, 10 | method = "normal", 11 | stationary = FALSE, 12 | p = 1, 13 | ... 14 | ) 15 | } 16 | \arguments{ 17 | \item{N}{the dimension of the square matrix} 18 | 19 | \item{sparsity}{the density of non zero elements} 20 | 21 | \item{method}{the method used to generate the entries of the matrix. 22 | Possible values are \code{"normal"} (default) or \code{"bimodal"}.} 23 | 24 | \item{stationary}{should the spectral radius of the matrix be smaller than 1? 25 | Possible values are \code{TRUE} or \code{FALSE}. Default is \code{FALSE}.} 26 | 27 | \item{p}{normalization constant (used for VAR of order greater than 1, 28 | default = 1)} 29 | 30 | \item{...}{other options for the matrix (you can specify the mean 31 | \code{mu_mat} and the standard deviation \code{sd_mat}).} 32 | } 33 | \value{ 34 | An NxN sparse matrix. 35 | } 36 | \description{ 37 | Creates a sparse square matrix with a given sparsity and 38 | distribution. 
39 | } 40 | \examples{ 41 | M <- createSparseMatrix( 42 | N = 30, sparsity = 0.05, method = "normal", 43 | stationary = TRUE 44 | ) 45 | } 46 | -------------------------------------------------------------------------------- /man/decomposePi.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/fitVECM.R 3 | \name{decomposePi} 4 | \alias{decomposePi} 5 | \title{Decompose Pi VECM matrix} 6 | \usage{ 7 | decomposePi(vecm, rk, ...) 8 | } 9 | \arguments{ 10 | \item{vecm}{the VECM object} 11 | 12 | \item{rk}{rank} 13 | 14 | \item{...}{options for the function (TODO: specify)} 15 | } 16 | \value{ 17 | alpha 18 | 19 | beta 20 | } 21 | \description{ 22 | A function to estimate a (possibly big) multivariate VECM time series 23 | using penalized least squares methods, such as ENET, SCAD or MC+. 24 | } 25 | -------------------------------------------------------------------------------- /man/errorBandsIRF.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/impulseResponse.R 3 | \name{errorBandsIRF} 4 | \alias{errorBandsIRF} 5 | \title{Error bands for IRF} 6 | \usage{ 7 | errorBandsIRF(v, irf, alpha, M, resampling, ...) 8 | } 9 | \arguments{ 10 | \item{v}{a var object as from fitVAR or simulateVAR} 11 | 12 | \item{irf}{irf output from impulseResponse function} 13 | 14 | \item{alpha}{level of confidence (default \code{alpha = 0.01})} 15 | 16 | \item{M}{number of bootstrapped series (default \code{M = 100})} 17 | 18 | \item{resampling}{type of resampling: \code{"bootstrap"} or \code{"jackknife"}} 19 | 20 | \item{...}{some options for the estimation: \code{verbose = TRUE} or \code{FALSE}, 21 | \code{mode = "fast"} or \code{"slow"}, \code{threshold = TRUE} or \code{FALSE}.} 22 | } 23 | \value{ 24 | a matrix containing the indices of the impulse response function that 25 | are 0. 26 | } 27 | \description{ 28 | A function to estimate the confidence intervals for irf and oirf. 29 | } 30 | -------------------------------------------------------------------------------- /man/fitVAR.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/fitVAR.R 3 | \name{fitVAR} 4 | \alias{fitVAR} 5 | \title{Multivariate VAR estimation} 6 | \usage{ 7 | fitVAR(data, p = 1, penalty = "ENET", method = "cv", ...) 8 | } 9 | \arguments{ 10 | \item{data}{the data from the time series: variables in columns and 11 | observations in rows} 12 | 13 | \item{p}{order of the VAR model} 14 | 15 | \item{penalty}{the penalty function to use. Possible values 16 | are \code{"ENET"}, \code{"SCAD"} or \code{"MCP"}} 17 | 18 | \item{method}{possible values are \code{"cv"} or \code{"timeSlice"}} 19 | 20 | \item{...}{the options for the estimation. Global options are: 21 | \code{threshold}: if \code{TRUE} all the entries smaller than the oracle 22 | threshold are set to zero; 23 | \code{scale}: scale the data (default = FALSE)? 24 | \code{nfolds}: the number of folds used for cross validation (default = 10); 25 | \code{parallel}: if \code{TRUE} use multicore backend (default = FALSE); 26 | \code{ncores}: if \code{parallel} is \code{TRUE}, specify the number 27 | of cores to use for parallel evaluation. 
Options for ENET estimation: 28 | \code{alpha}: the value of alpha to use in elastic net 29 | (0 is Ridge regression, 1 is LASSO (default)); 30 | \code{type.measure}: the measure to use for error evaluation 31 | (\code{"mse"} or \code{"mae"}); 32 | \code{nlambda}: the number of lambdas to use in the cross 33 | validation (default = 100); 34 | \code{leaveOut}: in the time slice validation leave out the 35 | last \code{leaveOutLast} observations (default = 15); 36 | \code{horizon}: the horizon to use for estimating mse/mae (default = 1); 37 | \code{picasso}: use picasso package for estimation (only available 38 | for \code{penalty = "SCAD"} and \code{method = "timeSlice"}).} 39 | } 40 | \value{ 41 | \code{A} the list (of length \code{p}) of the estimated matrices 42 | of the process 43 | 44 | \code{fit} the results of the penalized LS estimation 45 | 46 | \code{mse} the mean square error of the cross validation 47 | 48 | \code{time} elapsed time for the estimation 49 | 50 | \code{residuals} the time series of the residuals 51 | } 52 | \description{ 53 | A function to estimate a (possibly high-dimensional) 54 | multivariate VAR time series using penalized least squares methods, 55 | such as ENET, SCAD or MC+. 56 | } 57 | -------------------------------------------------------------------------------- /man/fitVARX.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/fitVARX.R 3 | \name{fitVARX} 4 | \alias{fitVARX} 5 | \title{Multivariate VARX estimation} 6 | \usage{ 7 | fitVARX(data, p = 1, Xt, m = 1, penalty = "ENET", method = "cv", ...) 8 | } 9 | \arguments{ 10 | \item{data}{the data from the time series: variables in columns and observations in 11 | rows} 12 | 13 | \item{p}{order of the VAR model} 14 | 15 | \item{Xt}{the exogenous variables} 16 | 17 | \item{m}{order of the exogenous variables} 18 | 19 | \item{penalty}{the penalty function to use. Possible values are \code{"ENET"}, 20 | \code{"SCAD"} or \code{"MCP"}} 21 | 22 | \item{method}{possible values are \code{"cv"} or \code{"timeSlice"}} 23 | 24 | \item{...}{the options for the estimation. Global options are: 25 | \code{threshold}: if \code{TRUE} all the entries smaller than the oracle threshold are set to zero; 26 | \code{scale}: scale the data (default = FALSE)? 27 | \code{nfolds}: the number of folds used for cross validation (default = 10); 28 | \code{parallel}: if \code{TRUE} use multicore backend (default = FALSE); 29 | \code{ncores}: if \code{parallel} is \code{TRUE}, specify the number of cores to use 30 | for parallel evaluation. 
Options for ENET estimation: 31 | \code{alpha}: the value of alpha to use in elastic net (0 is Ridge regression, 1 is LASSO (default)); 32 | \code{type.measure}: the measure to use for error evaluation (\code{"mse"} or \code{"mae"}); 33 | \code{nlambda}: the number of lambdas to use in the cross validation (default = 100); 34 | \code{leaveOut}: in the time slice validation leave out the last \code{leaveOutLast} observations 35 | (default = 15); 36 | \code{horizon}: the horizon to use for estimating mse/mae (default = 1); 37 | \code{picasso}: use picasso package for estimation (only available for \code{penalty = "SCAD"} 38 | and \code{method = "timeSlice"}).} 39 | } 40 | \value{ 41 | \code{A} the list (of length \code{p}) of the estimated matrices of the process 42 | 43 | \code{fit} the results of the penalized LS estimation 44 | 45 | \code{mse} the mean square error of the cross validation 46 | 47 | \code{time} elapsed time for the estimation 48 | 49 | \code{residuals} the time series of the residuals 50 | } 51 | \description{ 52 | A function to estimate a (possibly high-dimensional) multivariate VARX time series 53 | using penalized least squares methods, such as ENET, SCAD or MC+. 54 | } 55 | -------------------------------------------------------------------------------- /man/fitVECM.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/fitVECM.R 3 | \name{fitVECM} 4 | \alias{fitVECM} 5 | \title{Multivariate VECM estimation} 6 | \usage{ 7 | fitVECM(data, p, penalty, method, logScale, ...) 8 | } 9 | \arguments{ 10 | \item{data}{the data from the time series: variables in columns and observations in 11 | rows} 12 | 13 | \item{p}{order of the VECM model} 14 | 15 | \item{penalty}{the penalty function to use. Possible values are \code{"ENET"}, 16 | \code{"SCAD"} or \code{"MCP"}} 17 | 18 | \item{method}{\code{"cv"} or \code{"timeSlice"}} 19 | 20 | \item{logScale}{should the function consider the \code{log} of the inputs? By default 21 | this is set to \code{TRUE}} 22 | 23 | \item{...}{options for the function (TODO: specify)} 24 | } 25 | \value{ 26 | Pi the matrix \code{Pi} for the VECM model 27 | 28 | G the list (of length \code{p-1}) of the estimated matrices of the process 29 | 30 | fit the results of the penalized LS estimation 31 | 32 | mse the mean square error of the cross validation 33 | 34 | time elapsed time for the estimation 35 | } 36 | \description{ 37 | A function to estimate a (possibly big) multivariate VECM time series 38 | using penalized least squares methods, such as ENET, SCAD or MC+. 
39 | } 40 | -------------------------------------------------------------------------------- /man/frobNorm.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utils.R 3 | \name{frobNorm} 4 | \alias{frobNorm} 5 | \title{Froebenius norm of a matrix} 6 | \usage{ 7 | frobNorm(M) 8 | } 9 | \arguments{ 10 | \item{M}{the matrix (real or complex valued)} 11 | } 12 | \description{ 13 | Compute the Froebenius norm of M 14 | } 15 | -------------------------------------------------------------------------------- /man/impulseResponse.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/impulseResponse.R 3 | \name{impulseResponse} 4 | \alias{impulseResponse} 5 | \title{Impulse Response Function} 6 | \usage{ 7 | impulseResponse(v, len = 20) 8 | } 9 | \arguments{ 10 | \item{v}{the data in the for of a VAR} 11 | 12 | \item{len}{length of the impulse response function} 13 | } 14 | \value{ 15 | \code{irf} a 3d array containing the impulse response function. 16 | } 17 | \description{ 18 | A function to estimate the Impulse Response Function of a given VAR. 19 | } 20 | -------------------------------------------------------------------------------- /man/informCrit.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utilsVAR.R 3 | \name{informCrit} 4 | \alias{informCrit} 5 | \title{Computes information criteria for VARs} 6 | \usage{ 7 | informCrit(v) 8 | } 9 | \arguments{ 10 | \item{v}{a list of VAR objects as from fitVAR.} 11 | } 12 | \description{ 13 | This function computes information criterias (AIC, Schwartz and 14 | Hannan-Quinn) for VARs. 
15 | } 16 | -------------------------------------------------------------------------------- /man/l1norm.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utils.R 3 | \name{l1norm} 4 | \alias{l1norm} 5 | \title{L1 matrix norm} 6 | \usage{ 7 | l1norm(M) 8 | } 9 | \arguments{ 10 | \item{M}{the matrix (real or complex valued)} 11 | } 12 | \description{ 13 | Compute the L1 matrix norm of M 14 | } 15 | -------------------------------------------------------------------------------- /man/l2norm.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utils.R 3 | \name{l2norm} 4 | \alias{l2norm} 5 | \title{L2 matrix norm} 6 | \usage{ 7 | l2norm(M) 8 | } 9 | \arguments{ 10 | \item{M}{the matrix (real or complex valued)} 11 | } 12 | \description{ 13 | Compute the L2 matrix norm of M 14 | } 15 | -------------------------------------------------------------------------------- /man/lInftyNorm.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utils.R 3 | \name{lInftyNorm} 4 | \alias{lInftyNorm} 5 | \title{L-infinity matrix norm} 6 | \usage{ 7 | lInftyNorm(M) 8 | } 9 | \arguments{ 10 | \item{M}{the matrix (real or complex valued)} 11 | } 12 | \description{ 13 | Compute the L-infinity matrix norm of M 14 | } 15 | -------------------------------------------------------------------------------- /man/maxNorm.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utils.R 3 | \name{maxNorm} 4 | \alias{maxNorm} 5 | \title{Max-norm of a matrix} 6 | \usage{ 7 | maxNorm(M) 8 | } 9 | \arguments{ 10 | \item{M}{the matrix (real or complex valued)} 11 | } 12 | \description{ 13 | Compute the max-norm of M 14 | } 15 | -------------------------------------------------------------------------------- /man/mcSimulations.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/mcSimulations.R 3 | \name{mcSimulations} 4 | \alias{mcSimulations} 5 | \title{Monte Carlo simulations} 6 | \usage{ 7 | mcSimulations( 8 | N, 9 | nobs = 250, 10 | nMC = 100, 11 | rho = 0.5, 12 | sparsity = 0.05, 13 | penalty = "ENET", 14 | covariance = "Toeplitz", 15 | method = "normal", 16 | modelSel = "cv", 17 | ... 18 | ) 19 | } 20 | \arguments{ 21 | \item{N}{dimension of the multivariate time series.} 22 | 23 | \item{nobs}{number of observations to be generated.} 24 | 25 | \item{nMC}{number of Monte Carlo simulations.} 26 | 27 | \item{rho}{base value for the covariance.} 28 | 29 | \item{sparsity}{density of non zero entries of the VAR matrices.} 30 | 31 | \item{penalty}{penalty function to use for LS estimation. 
Possible values are \code{"ENET"}, 32 | \code{"SCAD"} or \code{"MCP"}.} 33 | 34 | \item{covariance}{type of covariance matrix to be used in the generation of the sparse VAR model.} 35 | 36 | \item{method}{which type of distribution to use in the generation of the entries of the matrices.} 37 | 38 | \item{modelSel}{select which model selection criteria to use (\code{"cv"} or \code{"timeslice"}).} 39 | 40 | \item{...}{(TODO: complete)} 41 | } 42 | \value{ 43 | a \code{nMc}x5 matrix with the results of the Monte Carlo estimation 44 | } 45 | \description{ 46 | This function generates monte carlo simultaions of sparse VAR and 47 | its estimation (at the moment only for VAR(1) processes). 48 | } 49 | -------------------------------------------------------------------------------- /man/multiplot.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/plotMatrix.R 3 | \name{multiplot} 4 | \alias{multiplot} 5 | \title{Multiplots with ggplot} 6 | \usage{ 7 | multiplot(..., plotlist = NULL, cols = 1, layout = NULL) 8 | } 9 | \arguments{ 10 | \item{...}{a sequence of ggplots to be plotted in the grid.} 11 | 12 | \item{plotlist}{a list containing ggplots as elements.} 13 | 14 | \item{cols}{number of columns in layout} 15 | 16 | \item{layout}{a matrix specifying the layout. If present, 'cols' is ignored. 17 | If the layout is something like matrix(c(1,2,3,3), nrow=2, byrow=TRUE), 18 | then plot 1 will go in the upper left, 2 will go in the upper right, and 19 | 3 will go all the way across the bottom. 20 | Taken from R Cookbook} 21 | } 22 | \value{ 23 | A ggplot containing the plots passed as arguments 24 | } 25 | \description{ 26 | Multiple plot function. ggplot objects can be passed in ..., or 27 | to plotlist (as a list of ggplot objects) 28 | } 29 | -------------------------------------------------------------------------------- /man/plotIRF.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/plotIRF.R 3 | \name{plotIRF} 4 | \alias{plotIRF} 5 | \title{IRF plot} 6 | \usage{ 7 | plotIRF(irf, eb, i, j, type, bands) 8 | } 9 | \arguments{ 10 | \item{irf}{the irf object to plot} 11 | 12 | \item{eb}{the errorbands to plot} 13 | 14 | \item{i}{the first index} 15 | 16 | \item{j}{the second index} 17 | 18 | \item{type}{\code{type = "irf"} or \code{type = "oirf"}} 19 | 20 | \item{bands}{\code{"quantiles"} or \code{"sd"}} 21 | } 22 | \value{ 23 | An \code{image} plot relative to the impulse response function. 
24 | } 25 | \description{ 26 | Plot a IRF object 27 | } 28 | -------------------------------------------------------------------------------- /man/plotIRFGrid.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/plotIRF.R 3 | \name{plotIRFGrid} 4 | \alias{plotIRFGrid} 5 | \title{IRF grid plot} 6 | \usage{ 7 | plotIRFGrid(irf, eb, indexes, type, bands) 8 | } 9 | \arguments{ 10 | \item{irf}{the irf object computed using impulseResponse} 11 | 12 | \item{eb}{the error bands estimated using errorBands} 13 | 14 | \item{indexes}{a vector containing the indeces that you want to plot} 15 | 16 | \item{type}{plot the irf (\code{type = "irf"} by default) or the orthogonal irf 17 | (\code{type = "oirf"})} 18 | 19 | \item{bands}{which type of bands to plot ("quantiles" (default) or "sd")} 20 | } 21 | \value{ 22 | An \code{image} plot relative to the impulse response function. 23 | } 24 | \description{ 25 | Plot a IRF grid object 26 | } 27 | -------------------------------------------------------------------------------- /man/plotMatrix.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/plotMatrix.R 3 | \name{plotMatrix} 4 | \alias{plotMatrix} 5 | \title{Matrix plot} 6 | \usage{ 7 | plotMatrix(M, colors) 8 | } 9 | \arguments{ 10 | \item{M}{the matrix to plot} 11 | 12 | \item{colors}{dark or light} 13 | } 14 | \value{ 15 | An \code{image} plot with a particular color palette (black zero entries, red 16 | for the negative ones and green for the positive) 17 | } 18 | \description{ 19 | Plot a sparse matrix 20 | } 21 | -------------------------------------------------------------------------------- /man/plotVAR.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/plotMatrix.R 3 | \name{plotVAR} 4 | \alias{plotVAR} 5 | \title{Plot VARs} 6 | \usage{ 7 | plotVAR(..., colors) 8 | } 9 | \arguments{ 10 | \item{...}{a sequence of VAR objects (one or more 11 | than one, as from \code{simulateVAR} or \code{fitVAR})} 12 | 13 | \item{colors}{the gradient used to plot the matrix. 
It can be "light" (low = 14 | red -- mid = white -- high = blue) or "dark" (low = red -- mid = black -- 15 | high = green)} 16 | } 17 | \value{ 18 | An \code{image} plot with a specific color palette 19 | } 20 | \description{ 21 | Plot all the matrices of a VAR model 22 | } 23 | -------------------------------------------------------------------------------- /man/plotVECM.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/plotMatrix.R 3 | \name{plotVECM} 4 | \alias{plotVECM} 5 | \title{Plot VECMs} 6 | \usage{ 7 | plotVECM(v) 8 | } 9 | \arguments{ 10 | \item{v}{a VECM object (as from \code{fitVECM})} 11 | } 12 | \value{ 13 | An \code{image} plot with a specific color palette (black zero entries, red 14 | for the negative ones and green for the positive) 15 | } 16 | \description{ 17 | Plot all the matrices of a VECM model 18 | } 19 | -------------------------------------------------------------------------------- /man/simulateVAR.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/simulateVAR.R 3 | \name{simulateVAR} 4 | \alias{simulateVAR} 5 | \title{VAR simulation} 6 | \usage{ 7 | simulateVAR(N, p, nobs, rho, sparsity, mu, method, covariance, ...) 8 | } 9 | \arguments{ 10 | \item{N}{dimension of the time series.} 11 | 12 | \item{p}{number of lags of the VAR model.} 13 | 14 | \item{nobs}{number of observations to be generated.} 15 | 16 | \item{rho}{base value for the covariance matrix.} 17 | 18 | \item{sparsity}{density (in percentage) of the number of nonzero elements of the VAR matrices.} 19 | 20 | \item{mu}{a vector containing the mean of the simulated process.} 21 | 22 | \item{method}{which method to use to generate the VAR matrix. Possible values 23 | are \code{"normal"} or \code{"bimodal"}.} 24 | 25 | \item{covariance}{type of covariance matrix to use in the simulation. Possible 26 | values: \code{"toeplitz"}, \code{"block1"}, \code{"block2"} or simply \code{"diagonal"}.} 27 | 28 | \item{...}{the options for the simulation. These are: 29 | \code{muMat}: the mean of the entries of the VAR matrices; 30 | \code{sdMat}: the sd of the entries of the matrices;} 31 | } 32 | \value{ 33 | A a list of NxN matrices ordered by lag 34 | 35 | data a list with two elements: \code{series} the multivariate time series and 36 | \code{noises} the time series of errors 37 | 38 | S the variance/covariance matrix of the process 39 | } 40 | \description{ 41 | This function generates a simulated multivariate VAR time series. 42 | } 43 | -------------------------------------------------------------------------------- /man/simulateVARX.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/simulateVARX.R 3 | \name{simulateVARX} 4 | \alias{simulateVARX} 5 | \title{VARX simulation} 6 | \usage{ 7 | simulateVARX(N, K, p, m, nobs, rho, 8 | sparsityA1, sparsityA2, sparsityA3, 9 | mu, method, covariance, ...) 
10 | } 11 | \arguments{ 12 | \item{N}{dimension of the time series.} 13 | 14 | \item{K}{TODO} 15 | 16 | \item{p}{number of lags of the VAR model.} 17 | 18 | \item{m}{TODO} 19 | 20 | \item{nobs}{number of observations to be generated.} 21 | 22 | \item{rho}{base value for the covariance matrix.} 23 | 24 | \item{sparsityA1}{density (in percentage) of the number of nonzero elements 25 | of the A1 block.} 26 | 27 | \item{sparsityA2}{density (in percentage) of the number of nonzero elements 28 | of the A2 block.} 29 | 30 | \item{sparsityA3}{density (in percentage) of the number of nonzero elements 31 | of the A3 block.} 32 | 33 | \item{mu}{a vector containing the mean of the simulated process.} 34 | 35 | \item{method}{which method to use to generate the VAR matrix. Possible values 36 | are \code{"normal"} or \code{"bimodal"}.} 37 | 38 | \item{covariance}{type of covariance matrix to use in the simulation. Possible 39 | values: \code{"toeplitz"}, \code{"block1"}, \code{"block2"} or simply \code{"diagonal"}.} 40 | 41 | \item{...}{the options for the simulation. These are: 42 | \code{muMat}: the mean of the entries of the VAR matrices; 43 | \code{sdMat}: the sd of the entries of the matrices;} 44 | } 45 | \value{ 46 | A a list of NxN matrices ordered by lag 47 | 48 | data a list with two elements: \code{series} the multivariate time series and 49 | \code{noises} the time series of errors 50 | 51 | S the variance/covariance matrix of the process 52 | } 53 | \description{ 54 | This function generates a simulated multivariate VAR time series. 55 | } 56 | -------------------------------------------------------------------------------- /man/sparsevar.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/sparsevar.R 3 | \docType{package} 4 | \name{sparsevar} 5 | \alias{sparsevar} 6 | \title{sparsevar: A package to estimate multivariate time series models (such as VAR and 7 | VECM), under the sparsity hypothesis.} 8 | \description{ 9 | It performs the estimation of the matrices of the models using penalized 10 | least squares methods such as LASSO, SCAD and MCP. 
11 | } 12 | \section{sparsevar functions}{ 13 | 14 | \code{fitVAR}, \code{fitVECM}, \code{simulateVAR}, \code{createSparseMatrix}, 15 | \code{plotMatrix}, \code{plotVAR}, \code{plotVECM} 16 | \code{l2norm}, \code{l1norm}, \code{lInftyNorm}, \code{maxNorm}, \code{frobNorm}, 17 | \code{spectralRadius}, \code{spectralNorm}, \code{impulseResponse} 18 | } 19 | 20 | -------------------------------------------------------------------------------- /man/spectralNorm.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utils.R 3 | \name{spectralNorm} 4 | \alias{spectralNorm} 5 | \title{Spectral norm} 6 | \usage{ 7 | spectralNorm(M) 8 | } 9 | \arguments{ 10 | \item{M}{the matrix (real or complex valued)} 11 | } 12 | \description{ 13 | Compute the spectral norm of M 14 | } 15 | -------------------------------------------------------------------------------- /man/spectralRadius.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utils.R 3 | \name{spectralRadius} 4 | \alias{spectralRadius} 5 | \title{Spectral radius} 6 | \usage{ 7 | spectralRadius(M) 8 | } 9 | \arguments{ 10 | \item{M}{the matrix (real or complex valued)} 11 | } 12 | \description{ 13 | Compute the spectral radius of M 14 | } 15 | -------------------------------------------------------------------------------- /man/testGranger.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utilsVAR.R 3 | \name{testGranger} 4 | \alias{testGranger} 5 | \title{Test for Ganger Causality} 6 | \usage{ 7 | testGranger(v, eb) 8 | } 9 | \arguments{ 10 | \item{v}{the VAR object as from fitVAR or simulateVAR} 11 | 12 | \item{eb}{the error bands as obtained from errorBands} 13 | } 14 | \description{ 15 | This function should retain only the coefficients of the 16 | matrices of the VAR that are statistically significative (from the bootstrap) 17 | } 18 | -------------------------------------------------------------------------------- /man/transformData.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utilsVAR.R 3 | \name{transformData} 4 | \alias{transformData} 5 | \title{Transorm data} 6 | \usage{ 7 | transformData(data, p, opt) 8 | } 9 | \arguments{ 10 | \item{data}{the data} 11 | 12 | \item{p}{the order of the VAR} 13 | 14 | \item{opt}{a list containing the options} 15 | } 16 | \description{ 17 | Transform the input data 18 | } 19 | -------------------------------------------------------------------------------- /man/varENET.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utilsVAR.R 3 | \name{varENET} 4 | \alias{varENET} 5 | \title{VAR ENET} 6 | \usage{ 7 | varENET(data, p, lambdas, opt) 8 | } 9 | \arguments{ 10 | \item{data}{the data} 11 | 12 | \item{p}{the order of the VAR} 13 | 14 | \item{lambdas}{a vector containing the lambdas to be used in the fit} 15 | 16 | \item{opt}{a list containing the options} 17 | } 18 | \description{ 19 | Estimate VAR using ENET penalty 20 | } 21 | -------------------------------------------------------------------------------- 
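The pages in this section also document the low-level building blocks that `fitVAR` wraps: `transformData` stacks the VAR(p) regression into one large (sparse) design matrix, and `varENET` (together with `varSCAD` and `varMCP` below) fits it over a path of lambda values. A minimal sketch of how these pieces fit together; the `opt` list here only illustrates the kind of options `fitVAR` forwards internally and is not necessarily the package's exact default:

```r
library(sparsevar)

set.seed(1)
sim <- simulateVAR(N = 10, p = 1)

# options of the kind fitVAR passes internally (illustrative values)
opt <- list(method = "cv", scale = FALSE, center = TRUE)

# stack the VAR(1) regression: X is the sparse design matrix, y the stacked response
trDt <- transformData(sim$series, p = 1, opt)

# fit the whole elastic-net path on that design
# (lambdas = NULL lets glmnet choose the path)
pathFit <- varENET(sim$series, p = 1, lambdas = NULL, opt)

# coefficients at one value of lambda, reshaped into the 10 x 10 VAR matrix
# (the first coefficient is the glmnet intercept and is dropped)
Avec <- stats::coef(pathFit, s = pathFit$lambda[10])
A1 <- matrix(Avec[2:length(Avec)], nrow = 10, ncol = 10, byrow = TRUE)
```

In normal use you would call `fitVAR` directly and let it choose `lambda` by cross-validation or time-slice validation.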
/man/varMCP.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utilsVAR.R 3 | \name{varMCP} 4 | \alias{varMCP} 5 | \title{VAR MCP} 6 | \usage{ 7 | varMCP(data, p, lambdas, opt) 8 | } 9 | \arguments{ 10 | \item{data}{the data} 11 | 12 | \item{p}{the order of the VAR} 13 | 14 | \item{lambdas}{a vector containing the lambdas to be used in the fit} 15 | 16 | \item{opt}{a list containing the options} 17 | } 18 | \description{ 19 | Estimate VAR using MCP penalty 20 | } 21 | -------------------------------------------------------------------------------- /man/varSCAD.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/utilsVAR.R 3 | \name{varSCAD} 4 | \alias{varSCAD} 5 | \title{VAR SCAD} 6 | \usage{ 7 | varSCAD(data, p, lambdas, opt, penalty) 8 | } 9 | \arguments{ 10 | \item{data}{the data} 11 | 12 | \item{p}{the order of the VAR} 13 | 14 | \item{lambdas}{a vector containing the lambdas to be used in the fit} 15 | 16 | \item{opt}{a list containing the options} 17 | 18 | \item{penalty}{a string "SCAD" or something else} 19 | } 20 | \description{ 21 | Estimate VAR using SCAD penalty 22 | } 23 | -------------------------------------------------------------------------------- /renv.lock: -------------------------------------------------------------------------------- 1 | { 2 | "R": { 3 | "Version": "4.0.2", 4 | "Repositories": [ 5 | { 6 | "Name": "CRAN", 7 | "URL": "https://cloud.r-project.org" 8 | } 9 | ] 10 | }, 11 | "Packages": { 12 | "MASS": { 13 | "Package": "MASS", 14 | "Version": "7.3-52", 15 | "Source": "Repository", 16 | "Repository": "CRAN", 17 | "Hash": "095c8b0dd20f5d9c2a75cf72fdd74dab" 18 | }, 19 | "Matrix": { 20 | "Package": "Matrix", 21 | "Version": "1.2-18", 22 | "Source": "Repository", 23 | "Repository": "CRAN", 24 | "Hash": "08588806cba69f04797dab50627428ed" 25 | }, 26 | "R6": { 27 | "Package": "R6", 28 | "Version": "2.5.0", 29 | "Source": "Repository", 30 | "Repository": "CRAN", 31 | "Hash": "b203113193e70978a696b2809525649d" 32 | }, 33 | "RColorBrewer": { 34 | "Package": "RColorBrewer", 35 | "Version": "1.1-2", 36 | "Source": "Repository", 37 | "Repository": "CRAN", 38 | "Hash": "e031418365a7f7a766181ab5a41a5716" 39 | }, 40 | "Rcpp": { 41 | "Package": "Rcpp", 42 | "Version": "1.0.6", 43 | "Source": "Repository", 44 | "Repository": "CRAN", 45 | "Hash": "dbb5e436998a7eba5a9d682060533338" 46 | }, 47 | "base64enc": { 48 | "Package": "base64enc", 49 | "Version": "0.1-3", 50 | "Source": "Repository", 51 | "Repository": "CRAN", 52 | "Hash": "543776ae6848fde2f48ff3816d0628bc" 53 | }, 54 | "brio": { 55 | "Package": "brio", 56 | "Version": "1.1.1", 57 | "Source": "Repository", 58 | "Repository": "CRAN", 59 | "Hash": "36758510e65a457efeefa50e1e7f0576" 60 | }, 61 | "callr": { 62 | "Package": "callr", 63 | "Version": "3.6.0", 64 | "Source": "Repository", 65 | "Repository": "CRAN", 66 | "Hash": "25da2c6fba6a13b5da94e37acdb3f532" 67 | }, 68 | "cli": { 69 | "Package": "cli", 70 | "Version": "2.4.0", 71 | "Source": "Repository", 72 | "Repository": "CRAN", 73 | "Hash": "be982c9bcbfbe9e59c0225b0ed37d47e" 74 | }, 75 | "codetools": { 76 | "Package": "codetools", 77 | "Version": "0.2-16", 78 | "Source": "Repository", 79 | "Repository": "CRAN", 80 | "Hash": "89cf4b8207269ccf82fbeb6473fd662b" 81 | }, 82 | "colorspace": { 83 | "Package": "colorspace", 84 | "Version": 
"2.0-0", 85 | "Source": "Repository", 86 | "Repository": "CRAN", 87 | "Hash": "abea3384649ef37f60ef51ce002f3547" 88 | }, 89 | "corpcor": { 90 | "Package": "corpcor", 91 | "Version": "1.6.9", 92 | "Source": "Repository", 93 | "Repository": "CRAN", 94 | "Hash": "ae01381679f4511ca7a72d55fe175213" 95 | }, 96 | "crayon": { 97 | "Package": "crayon", 98 | "Version": "1.4.1", 99 | "Source": "Repository", 100 | "Repository": "CRAN", 101 | "Hash": "e75525c55c70e5f4f78c9960a4b402e9" 102 | }, 103 | "desc": { 104 | "Package": "desc", 105 | "Version": "1.3.0", 106 | "Source": "Repository", 107 | "Repository": "CRAN", 108 | "Hash": "b6963166f7f10b970af1006c462ce6cd" 109 | }, 110 | "diffobj": { 111 | "Package": "diffobj", 112 | "Version": "0.3.4", 113 | "Source": "Repository", 114 | "Repository": "CRAN", 115 | "Hash": "feb5b7455eba422a2c110bb89852e6a3" 116 | }, 117 | "digest": { 118 | "Package": "digest", 119 | "Version": "0.6.27", 120 | "Source": "Repository", 121 | "Repository": "CRAN", 122 | "Hash": "a0cbe758a531d054b537d16dff4d58a1" 123 | }, 124 | "doParallel": { 125 | "Package": "doParallel", 126 | "Version": "1.0.16", 127 | "Source": "Repository", 128 | "Repository": "CRAN", 129 | "Hash": "2dc413572eb42475179bfe0afabd2adf" 130 | }, 131 | "ellipsis": { 132 | "Package": "ellipsis", 133 | "Version": "0.3.1", 134 | "Source": "Repository", 135 | "Repository": "CRAN", 136 | "Hash": "fd2844b3a43ae2d27e70ece2df1b4e2a" 137 | }, 138 | "evaluate": { 139 | "Package": "evaluate", 140 | "Version": "0.14", 141 | "Source": "Repository", 142 | "Repository": "CRAN", 143 | "Hash": "ec8ca05cffcc70569eaaad8469d2a3a7" 144 | }, 145 | "fansi": { 146 | "Package": "fansi", 147 | "Version": "0.4.2", 148 | "Source": "Repository", 149 | "Repository": "CRAN", 150 | "Hash": "fea074fb67fe4c25d47ad09087da847d" 151 | }, 152 | "farver": { 153 | "Package": "farver", 154 | "Version": "2.1.0", 155 | "Source": "Repository", 156 | "Repository": "CRAN", 157 | "Hash": "c98eb5133d9cb9e1622b8691487f11bb" 158 | }, 159 | "foreach": { 160 | "Package": "foreach", 161 | "Version": "1.5.1", 162 | "Source": "Repository", 163 | "Repository": "CRAN", 164 | "Hash": "e32cfc0973caba11b65b1fa691b4d8c9" 165 | }, 166 | "ggplot2": { 167 | "Package": "ggplot2", 168 | "Version": "3.3.3", 169 | "Source": "Repository", 170 | "Repository": "CRAN", 171 | "Hash": "3eb6477d01eb5bbdc03f7d5f70f2733e" 172 | }, 173 | "glmnet": { 174 | "Package": "glmnet", 175 | "Version": "4.1-1", 176 | "Source": "Repository", 177 | "Repository": "CRAN", 178 | "Hash": "18482cb4790abf3ed27cafa2381d6175" 179 | }, 180 | "glue": { 181 | "Package": "glue", 182 | "Version": "1.4.2", 183 | "Source": "Repository", 184 | "Repository": "CRAN", 185 | "Hash": "6efd734b14c6471cfe443345f3e35e29" 186 | }, 187 | "gtable": { 188 | "Package": "gtable", 189 | "Version": "0.3.0", 190 | "Source": "Repository", 191 | "Repository": "CRAN", 192 | "Hash": "ac5c6baf7822ce8732b343f14c072c4d" 193 | }, 194 | "highr": { 195 | "Package": "highr", 196 | "Version": "0.8", 197 | "Source": "Repository", 198 | "Repository": "CRAN", 199 | "Hash": "4dc5bb88961e347a0f4d8aad597cbfac" 200 | }, 201 | "htmltools": { 202 | "Package": "htmltools", 203 | "Version": "0.5.1.1", 204 | "Source": "Repository", 205 | "Repository": "CRAN", 206 | "Hash": "af2c2531e55df5cf230c4b5444fc973c" 207 | }, 208 | "isoband": { 209 | "Package": "isoband", 210 | "Version": "0.2.4", 211 | "Source": "Repository", 212 | "Repository": "CRAN", 213 | "Hash": "b2008df40fb297e3fef135c7e8eeec1a" 214 | }, 215 | "iterators": { 216 | "Package": "iterators", 217 | 
"Version": "1.0.13", 218 | "Source": "Repository", 219 | "Repository": "CRAN", 220 | "Hash": "64778782a89480e9a644f69aad9a2877" 221 | }, 222 | "jsonlite": { 223 | "Package": "jsonlite", 224 | "Version": "1.7.2", 225 | "Source": "Repository", 226 | "Repository": "CRAN", 227 | "Hash": "98138e0994d41508c7a6b84a0600cfcb" 228 | }, 229 | "knitr": { 230 | "Package": "knitr", 231 | "Version": "1.32", 232 | "Source": "Repository", 233 | "Repository": "CRAN", 234 | "Hash": "c9e3c0fffd678c847d9b274c068ca46b" 235 | }, 236 | "labeling": { 237 | "Package": "labeling", 238 | "Version": "0.4.2", 239 | "Source": "Repository", 240 | "Repository": "CRAN", 241 | "Hash": "3d5108641f47470611a32d0bdf357a72" 242 | }, 243 | "lattice": { 244 | "Package": "lattice", 245 | "Version": "0.20-41", 246 | "Source": "Repository", 247 | "Repository": "CRAN", 248 | "Hash": "fbd9285028b0263d76d18c95ae51a53d" 249 | }, 250 | "lifecycle": { 251 | "Package": "lifecycle", 252 | "Version": "1.0.0", 253 | "Source": "Repository", 254 | "Repository": "CRAN", 255 | "Hash": "3471fb65971f1a7b2d4ae7848cf2db8d" 256 | }, 257 | "magrittr": { 258 | "Package": "magrittr", 259 | "Version": "2.0.1", 260 | "Source": "Repository", 261 | "Repository": "CRAN", 262 | "Hash": "41287f1ac7d28a92f0a286ed507928d3" 263 | }, 264 | "markdown": { 265 | "Package": "markdown", 266 | "Version": "1.1", 267 | "Source": "Repository", 268 | "Repository": "CRAN", 269 | "Hash": "61e4a10781dd00d7d81dd06ca9b94e95" 270 | }, 271 | "mgcv": { 272 | "Package": "mgcv", 273 | "Version": "1.8-32", 274 | "Source": "Repository", 275 | "Repository": "CRAN", 276 | "Hash": "8c15879d93932843512e53c956e4ea04" 277 | }, 278 | "mime": { 279 | "Package": "mime", 280 | "Version": "0.10", 281 | "Source": "Repository", 282 | "Repository": "CRAN", 283 | "Hash": "26fa77e707223e1ce042b2b5d09993dc" 284 | }, 285 | "munsell": { 286 | "Package": "munsell", 287 | "Version": "0.5.0", 288 | "Source": "Repository", 289 | "Repository": "CRAN", 290 | "Hash": "6dfe8bf774944bd5595785e3229d8771" 291 | }, 292 | "mvtnorm": { 293 | "Package": "mvtnorm", 294 | "Version": "1.1-1", 295 | "Source": "Repository", 296 | "Repository": "CRAN", 297 | "Hash": "69fa7331e7410c2a2cb3f9868513904f" 298 | }, 299 | "ncvreg": { 300 | "Package": "ncvreg", 301 | "Version": "3.13.0", 302 | "Source": "Repository", 303 | "Repository": "CRAN", 304 | "Hash": "7fc37427fa78517a439392d1e56c764d" 305 | }, 306 | "nlme": { 307 | "Package": "nlme", 308 | "Version": "3.1-149", 309 | "Source": "Repository", 310 | "Repository": "CRAN", 311 | "Hash": "7c24ab3a1e3afe50388eb2d893aab255" 312 | }, 313 | "picasso": { 314 | "Package": "picasso", 315 | "Version": "1.3.1", 316 | "Source": "Repository", 317 | "Repository": "CRAN", 318 | "Hash": "0cebbb616caa5eb9f5b6df7829e8fcdd" 319 | }, 320 | "pillar": { 321 | "Package": "pillar", 322 | "Version": "1.6.0", 323 | "Source": "Repository", 324 | "Repository": "CRAN", 325 | "Hash": "a8c755912ae31910ba6a5d42f5526b6b" 326 | }, 327 | "pkgconfig": { 328 | "Package": "pkgconfig", 329 | "Version": "2.0.3", 330 | "Source": "Repository", 331 | "Repository": "CRAN", 332 | "Hash": "01f28d4278f15c76cddbea05899c5d6f" 333 | }, 334 | "pkgload": { 335 | "Package": "pkgload", 336 | "Version": "1.2.1", 337 | "Source": "Repository", 338 | "Repository": "CRAN", 339 | "Hash": "463642747f81879e6752485aefb831cf" 340 | }, 341 | "plyr": { 342 | "Package": "plyr", 343 | "Version": "1.8.6", 344 | "Source": "Repository", 345 | "Repository": "CRAN", 346 | "Hash": "ec0e5ab4e5f851f6ef32cd1d1984957f" 347 | }, 348 | "praise": { 349 | 
"Package": "praise", 350 | "Version": "1.0.0", 351 | "Source": "Repository", 352 | "Repository": "CRAN", 353 | "Hash": "a555924add98c99d2f411e37e7d25e9f" 354 | }, 355 | "processx": { 356 | "Package": "processx", 357 | "Version": "3.5.1", 358 | "Source": "Repository", 359 | "Repository": "CRAN", 360 | "Hash": "5ee87b05936a4aa9d8d026eb1a51314b" 361 | }, 362 | "ps": { 363 | "Package": "ps", 364 | "Version": "1.6.0", 365 | "Source": "Repository", 366 | "Repository": "CRAN", 367 | "Hash": "32620e2001c1dce1af49c49dccbb9420" 368 | }, 369 | "rematch2": { 370 | "Package": "rematch2", 371 | "Version": "2.1.2", 372 | "Source": "Repository", 373 | "Repository": "CRAN", 374 | "Hash": "76c9e04c712a05848ae7a23d2f170a40" 375 | }, 376 | "renv": { 377 | "Package": "renv", 378 | "Version": "0.13.2", 379 | "Source": "Repository", 380 | "Repository": "CRAN", 381 | "Hash": "079cb1f03ff972b30401ed05623cbe92" 382 | }, 383 | "reshape2": { 384 | "Package": "reshape2", 385 | "Version": "1.4.4", 386 | "Source": "Repository", 387 | "Repository": "CRAN", 388 | "Hash": "bb5996d0bd962d214a11140d77589917" 389 | }, 390 | "rlang": { 391 | "Package": "rlang", 392 | "Version": "0.4.10", 393 | "Source": "Repository", 394 | "Repository": "CRAN", 395 | "Hash": "599df23c40a4fce9c7b4764f28c37857" 396 | }, 397 | "rmarkdown": { 398 | "Package": "rmarkdown", 399 | "Version": "2.7", 400 | "Source": "Repository", 401 | "Repository": "CRAN", 402 | "Hash": "edbf4cb1aefae783fd8d3a008ae51943" 403 | }, 404 | "rprojroot": { 405 | "Package": "rprojroot", 406 | "Version": "2.0.2", 407 | "Source": "Repository", 408 | "Repository": "CRAN", 409 | "Hash": "249d8cd1e74a8f6a26194a91b47f21d1" 410 | }, 411 | "rstudioapi": { 412 | "Package": "rstudioapi", 413 | "Version": "0.13", 414 | "Source": "Repository", 415 | "Repository": "CRAN", 416 | "Hash": "06c85365a03fdaf699966cc1d3cf53ea" 417 | }, 418 | "scales": { 419 | "Package": "scales", 420 | "Version": "1.1.1", 421 | "Source": "Repository", 422 | "Repository": "CRAN", 423 | "Hash": "6f76f71042411426ec8df6c54f34e6dd" 424 | }, 425 | "shape": { 426 | "Package": "shape", 427 | "Version": "1.4.5", 428 | "Source": "Repository", 429 | "Repository": "CRAN", 430 | "Hash": "58510f25472de6fd363d76698d29709e" 431 | }, 432 | "stringi": { 433 | "Package": "stringi", 434 | "Version": "1.5.3", 435 | "Source": "Repository", 436 | "Repository": "CRAN", 437 | "Hash": "a063ebea753c92910a4cca7b18bc1f05" 438 | }, 439 | "stringr": { 440 | "Package": "stringr", 441 | "Version": "1.4.0", 442 | "Source": "Repository", 443 | "Repository": "CRAN", 444 | "Hash": "0759e6b6c0957edb1311028a49a35e76" 445 | }, 446 | "survival": { 447 | "Package": "survival", 448 | "Version": "3.2-3", 449 | "Source": "Repository", 450 | "Repository": "CRAN", 451 | "Hash": "3cc6154c577a82f06250254db30a4bfb" 452 | }, 453 | "testthat": { 454 | "Package": "testthat", 455 | "Version": "3.0.2", 456 | "Source": "Repository", 457 | "Repository": "CRAN", 458 | "Hash": "495e0434d9305716b6a87031570ce109" 459 | }, 460 | "tibble": { 461 | "Package": "tibble", 462 | "Version": "3.1.0", 463 | "Source": "Repository", 464 | "Repository": "CRAN", 465 | "Hash": "4d894a114dbd4ecafeda5074e7c538e6" 466 | }, 467 | "tinytex": { 468 | "Package": "tinytex", 469 | "Version": "0.31", 470 | "Source": "Repository", 471 | "Repository": "CRAN", 472 | "Hash": "25b572f764f3c19fef9aac33b5724f3d" 473 | }, 474 | "utf8": { 475 | "Package": "utf8", 476 | "Version": "1.2.1", 477 | "Source": "Repository", 478 | "Repository": "CRAN", 479 | "Hash": "c3ad47dc6da0751f18ed53c4613e3ac7" 480 | }, 
481 | "vctrs": { 482 | "Package": "vctrs", 483 | "Version": "0.3.7", 484 | "Source": "Repository", 485 | "Repository": "CRAN", 486 | "Hash": "5540dc30a203a43a1ce5dc6a89532b3b" 487 | }, 488 | "viridisLite": { 489 | "Package": "viridisLite", 490 | "Version": "0.4.0", 491 | "Source": "Repository", 492 | "Repository": "CRAN", 493 | "Hash": "55e157e2aa88161bdb0754218470d204" 494 | }, 495 | "waldo": { 496 | "Package": "waldo", 497 | "Version": "0.2.5", 498 | "Source": "Repository", 499 | "Repository": "CRAN", 500 | "Hash": "20c45f1d511a3f730b7b469f4d11e104" 501 | }, 502 | "withr": { 503 | "Package": "withr", 504 | "Version": "2.4.1", 505 | "Source": "Repository", 506 | "Repository": "CRAN", 507 | "Hash": "caf4781c674ffa549a4676d2d77b13cc" 508 | }, 509 | "xfun": { 510 | "Package": "xfun", 511 | "Version": "0.22", 512 | "Source": "Repository", 513 | "Repository": "CRAN", 514 | "Hash": "eab2f8ba53809c321813e72ecbbd19ba" 515 | }, 516 | "yaml": { 517 | "Package": "yaml", 518 | "Version": "2.2.1", 519 | "Source": "Repository", 520 | "Repository": "CRAN", 521 | "Hash": "2826c5d9efb0a88f657c7a679c7106db" 522 | } 523 | } 524 | } 525 | -------------------------------------------------------------------------------- /renv/.gitignore: -------------------------------------------------------------------------------- 1 | library/ 2 | local/ 3 | lock/ 4 | python/ 5 | staging/ 6 | -------------------------------------------------------------------------------- /renv/activate.R: -------------------------------------------------------------------------------- 1 | 2 | local({ 3 | 4 | # the requested version of renv 5 | version <- "0.13.2" 6 | 7 | # the project directory 8 | project <- getwd() 9 | 10 | # avoid recursion 11 | if (!is.na(Sys.getenv("RENV_R_INITIALIZING", unset = NA))) 12 | return(invisible(TRUE)) 13 | 14 | # signal that we're loading renv during R startup 15 | Sys.setenv("RENV_R_INITIALIZING" = "true") 16 | on.exit(Sys.unsetenv("RENV_R_INITIALIZING"), add = TRUE) 17 | 18 | # signal that we've consented to use renv 19 | options(renv.consent = TRUE) 20 | 21 | # load the 'utils' package eagerly -- this ensures that renv shims, which 22 | # mask 'utils' packages, will come first on the search path 23 | library(utils, lib.loc = .Library) 24 | 25 | # check to see if renv has already been loaded 26 | if ("renv" %in% loadedNamespaces()) { 27 | 28 | # if renv has already been loaded, and it's the requested version of renv, 29 | # nothing to do 30 | spec <- .getNamespaceInfo(.getNamespace("renv"), "spec") 31 | if (identical(spec[["version"]], version)) 32 | return(invisible(TRUE)) 33 | 34 | # otherwise, unload and attempt to load the correct version of renv 35 | unloadNamespace("renv") 36 | 37 | } 38 | 39 | # load bootstrap tools 40 | bootstrap <- function(version, library) { 41 | 42 | # attempt to download renv 43 | tarball <- tryCatch(renv_bootstrap_download(version), error = identity) 44 | if (inherits(tarball, "error")) 45 | stop("failed to download renv ", version) 46 | 47 | # now attempt to install 48 | status <- tryCatch(renv_bootstrap_install(version, tarball, library), error = identity) 49 | if (inherits(status, "error")) 50 | stop("failed to install renv ", version) 51 | 52 | } 53 | 54 | renv_bootstrap_tests_running <- function() { 55 | getOption("renv.tests.running", default = FALSE) 56 | } 57 | 58 | renv_bootstrap_repos <- function() { 59 | 60 | # check for repos override 61 | repos <- Sys.getenv("RENV_CONFIG_REPOS_OVERRIDE", unset = NA) 62 | if (!is.na(repos)) 63 | return(repos) 64 | 65 | # if 
we're testing, re-use the test repositories 66 | if (renv_bootstrap_tests_running()) 67 | return(getOption("renv.tests.repos")) 68 | 69 | # retrieve current repos 70 | repos <- getOption("repos") 71 | 72 | # ensure @CRAN@ entries are resolved 73 | repos[repos == "@CRAN@"] <- getOption( 74 | "renv.repos.cran", 75 | "https://cloud.r-project.org" 76 | ) 77 | 78 | # add in renv.bootstrap.repos if set 79 | default <- c(FALLBACK = "https://cloud.r-project.org") 80 | extra <- getOption("renv.bootstrap.repos", default = default) 81 | repos <- c(repos, extra) 82 | 83 | # remove duplicates that might've snuck in 84 | dupes <- duplicated(repos) | duplicated(names(repos)) 85 | repos[!dupes] 86 | 87 | } 88 | 89 | renv_bootstrap_download <- function(version) { 90 | 91 | # if the renv version number has 4 components, assume it must 92 | # be retrieved via github 93 | nv <- numeric_version(version) 94 | components <- unclass(nv)[[1]] 95 | 96 | methods <- if (length(components) == 4L) { 97 | list( 98 | renv_bootstrap_download_github 99 | ) 100 | } else { 101 | list( 102 | renv_bootstrap_download_cran_latest, 103 | renv_bootstrap_download_cran_archive 104 | ) 105 | } 106 | 107 | for (method in methods) { 108 | path <- tryCatch(method(version), error = identity) 109 | if (is.character(path) && file.exists(path)) 110 | return(path) 111 | } 112 | 113 | stop("failed to download renv ", version) 114 | 115 | } 116 | 117 | renv_bootstrap_download_impl <- function(url, destfile) { 118 | 119 | mode <- "wb" 120 | 121 | # https://bugs.r-project.org/bugzilla/show_bug.cgi?id=17715 122 | fixup <- 123 | Sys.info()[["sysname"]] == "Windows" && 124 | substring(url, 1L, 5L) == "file:" 125 | 126 | if (fixup) 127 | mode <- "w+b" 128 | 129 | utils::download.file( 130 | url = url, 131 | destfile = destfile, 132 | mode = mode, 133 | quiet = TRUE 134 | ) 135 | 136 | } 137 | 138 | renv_bootstrap_download_cran_latest <- function(version) { 139 | 140 | spec <- renv_bootstrap_download_cran_latest_find(version) 141 | 142 | message("* Downloading renv ", version, " ... 
", appendLF = FALSE) 143 | 144 | type <- spec$type 145 | repos <- spec$repos 146 | 147 | info <- tryCatch( 148 | utils::download.packages( 149 | pkgs = "renv", 150 | destdir = tempdir(), 151 | repos = repos, 152 | type = type, 153 | quiet = TRUE 154 | ), 155 | condition = identity 156 | ) 157 | 158 | if (inherits(info, "condition")) { 159 | message("FAILED") 160 | return(FALSE) 161 | } 162 | 163 | # report success and return 164 | message("OK (downloaded ", type, ")") 165 | info[1, 2] 166 | 167 | } 168 | 169 | renv_bootstrap_download_cran_latest_find <- function(version) { 170 | 171 | # check whether binaries are supported on this system 172 | binary <- 173 | getOption("renv.bootstrap.binary", default = TRUE) && 174 | !identical(.Platform$pkgType, "source") && 175 | !identical(getOption("pkgType"), "source") && 176 | Sys.info()[["sysname"]] %in% c("Darwin", "Windows") 177 | 178 | types <- c(if (binary) "binary", "source") 179 | 180 | # iterate over types + repositories 181 | for (type in types) { 182 | for (repos in renv_bootstrap_repos()) { 183 | 184 | # retrieve package database 185 | db <- tryCatch( 186 | as.data.frame( 187 | utils::available.packages(type = type, repos = repos), 188 | stringsAsFactors = FALSE 189 | ), 190 | error = identity 191 | ) 192 | 193 | if (inherits(db, "error")) 194 | next 195 | 196 | # check for compatible entry 197 | entry <- db[db$Package %in% "renv" & db$Version %in% version, ] 198 | if (nrow(entry) == 0) 199 | next 200 | 201 | # found it; return spec to caller 202 | spec <- list(entry = entry, type = type, repos = repos) 203 | return(spec) 204 | 205 | } 206 | } 207 | 208 | # if we got here, we failed to find renv 209 | fmt <- "renv %s is not available from your declared package repositories" 210 | stop(sprintf(fmt, version)) 211 | 212 | } 213 | 214 | renv_bootstrap_download_cran_archive <- function(version) { 215 | 216 | name <- sprintf("renv_%s.tar.gz", version) 217 | repos <- renv_bootstrap_repos() 218 | urls <- file.path(repos, "src/contrib/Archive/renv", name) 219 | destfile <- file.path(tempdir(), name) 220 | 221 | message("* Downloading renv ", version, " ... ", appendLF = FALSE) 222 | 223 | for (url in urls) { 224 | 225 | status <- tryCatch( 226 | renv_bootstrap_download_impl(url, destfile), 227 | condition = identity 228 | ) 229 | 230 | if (identical(status, 0L)) { 231 | message("OK") 232 | return(destfile) 233 | } 234 | 235 | } 236 | 237 | message("FAILED") 238 | return(FALSE) 239 | 240 | } 241 | 242 | renv_bootstrap_download_github <- function(version) { 243 | 244 | enabled <- Sys.getenv("RENV_BOOTSTRAP_FROM_GITHUB", unset = "TRUE") 245 | if (!identical(enabled, "TRUE")) 246 | return(FALSE) 247 | 248 | # prepare download options 249 | pat <- Sys.getenv("GITHUB_PAT") 250 | if (nzchar(Sys.which("curl")) && nzchar(pat)) { 251 | fmt <- "--location --fail --header \"Authorization: token %s\"" 252 | extra <- sprintf(fmt, pat) 253 | saved <- options("download.file.method", "download.file.extra") 254 | options(download.file.method = "curl", download.file.extra = extra) 255 | on.exit(do.call(base::options, saved), add = TRUE) 256 | } else if (nzchar(Sys.which("wget")) && nzchar(pat)) { 257 | fmt <- "--header=\"Authorization: token %s\"" 258 | extra <- sprintf(fmt, pat) 259 | saved <- options("download.file.method", "download.file.extra") 260 | options(download.file.method = "wget", download.file.extra = extra) 261 | on.exit(do.call(base::options, saved), add = TRUE) 262 | } 263 | 264 | message("* Downloading renv ", version, " from GitHub ... 
", appendLF = FALSE) 265 | 266 | url <- file.path("https://api.github.com/repos/rstudio/renv/tarball", version) 267 | name <- sprintf("renv_%s.tar.gz", version) 268 | destfile <- file.path(tempdir(), name) 269 | 270 | status <- tryCatch( 271 | renv_bootstrap_download_impl(url, destfile), 272 | condition = identity 273 | ) 274 | 275 | if (!identical(status, 0L)) { 276 | message("FAILED") 277 | return(FALSE) 278 | } 279 | 280 | message("OK") 281 | return(destfile) 282 | 283 | } 284 | 285 | renv_bootstrap_install <- function(version, tarball, library) { 286 | 287 | # attempt to install it into project library 288 | message("* Installing renv ", version, " ... ", appendLF = FALSE) 289 | dir.create(library, showWarnings = FALSE, recursive = TRUE) 290 | 291 | # invoke using system2 so we can capture and report output 292 | bin <- R.home("bin") 293 | exe <- if (Sys.info()[["sysname"]] == "Windows") "R.exe" else "R" 294 | r <- file.path(bin, exe) 295 | args <- c("--vanilla", "CMD", "INSTALL", "-l", shQuote(library), shQuote(tarball)) 296 | output <- system2(r, args, stdout = TRUE, stderr = TRUE) 297 | message("Done!") 298 | 299 | # check for successful install 300 | status <- attr(output, "status") 301 | if (is.numeric(status) && !identical(status, 0L)) { 302 | header <- "Error installing renv:" 303 | lines <- paste(rep.int("=", nchar(header)), collapse = "") 304 | text <- c(header, lines, output) 305 | writeLines(text, con = stderr()) 306 | } 307 | 308 | status 309 | 310 | } 311 | 312 | renv_bootstrap_platform_prefix <- function() { 313 | 314 | # construct version prefix 315 | version <- paste(R.version$major, R.version$minor, sep = ".") 316 | prefix <- paste("R", numeric_version(version)[1, 1:2], sep = "-") 317 | 318 | # include SVN revision for development versions of R 319 | # (to avoid sharing platform-specific artefacts with released versions of R) 320 | devel <- 321 | identical(R.version[["status"]], "Under development (unstable)") || 322 | identical(R.version[["nickname"]], "Unsuffered Consequences") 323 | 324 | if (devel) 325 | prefix <- paste(prefix, R.version[["svn rev"]], sep = "-r") 326 | 327 | # build list of path components 328 | components <- c(prefix, R.version$platform) 329 | 330 | # include prefix if provided by user 331 | prefix <- renv_bootstrap_platform_prefix_impl() 332 | if (!is.na(prefix) && nzchar(prefix)) 333 | components <- c(prefix, components) 334 | 335 | # build prefix 336 | paste(components, collapse = "/") 337 | 338 | } 339 | 340 | renv_bootstrap_platform_prefix_impl <- function() { 341 | 342 | # if an explicit prefix has been supplied, use it 343 | prefix <- Sys.getenv("RENV_PATHS_PREFIX", unset = NA) 344 | if (!is.na(prefix)) 345 | return(prefix) 346 | 347 | # if the user has requested an automatic prefix, generate it 348 | auto <- Sys.getenv("RENV_PATHS_PREFIX_AUTO", unset = NA) 349 | if (auto %in% c("TRUE", "True", "true", "1")) 350 | return(renv_bootstrap_platform_prefix_auto()) 351 | 352 | # empty string on failure 353 | "" 354 | 355 | } 356 | 357 | renv_bootstrap_platform_prefix_auto <- function() { 358 | 359 | prefix <- tryCatch(renv_bootstrap_platform_os(), error = identity) 360 | if (inherits(prefix, "error") || prefix %in% "unknown") { 361 | 362 | msg <- paste( 363 | "failed to infer current operating system", 364 | "please file a bug report at https://github.com/rstudio/renv/issues", 365 | sep = "; " 366 | ) 367 | 368 | warning(msg) 369 | 370 | } 371 | 372 | prefix 373 | 374 | } 375 | 376 | renv_bootstrap_platform_os <- function() { 377 | 378 | sysinfo 
<- Sys.info() 379 | sysname <- sysinfo[["sysname"]] 380 | 381 | # handle Windows + macOS up front 382 | if (sysname == "Windows") 383 | return("windows") 384 | else if (sysname == "Darwin") 385 | return("macos") 386 | 387 | # check for os-release files 388 | for (file in c("/etc/os-release", "/usr/lib/os-release")) 389 | if (file.exists(file)) 390 | return(renv_bootstrap_platform_os_via_os_release(file, sysinfo)) 391 | 392 | # check for redhat-release files 393 | if (file.exists("/etc/redhat-release")) 394 | return(renv_bootstrap_platform_os_via_redhat_release()) 395 | 396 | "unknown" 397 | 398 | } 399 | 400 | renv_bootstrap_platform_os_via_os_release <- function(file, sysinfo) { 401 | 402 | # read /etc/os-release 403 | release <- utils::read.table( 404 | file = file, 405 | sep = "=", 406 | quote = c("\"", "'"), 407 | col.names = c("Key", "Value"), 408 | comment.char = "#", 409 | stringsAsFactors = FALSE 410 | ) 411 | 412 | vars <- as.list(release$Value) 413 | names(vars) <- release$Key 414 | 415 | # get os name 416 | os <- tolower(sysinfo[["sysname"]]) 417 | 418 | # read id 419 | id <- "unknown" 420 | for (field in c("ID", "ID_LIKE")) { 421 | if (field %in% names(vars) && nzchar(vars[[field]])) { 422 | id <- vars[[field]] 423 | break 424 | } 425 | } 426 | 427 | # read version 428 | version <- "unknown" 429 | for (field in c("UBUNTU_CODENAME", "VERSION_CODENAME", "VERSION_ID", "BUILD_ID")) { 430 | if (field %in% names(vars) && nzchar(vars[[field]])) { 431 | version <- vars[[field]] 432 | break 433 | } 434 | } 435 | 436 | # join together 437 | paste(c(os, id, version), collapse = "-") 438 | 439 | } 440 | 441 | renv_bootstrap_platform_os_via_redhat_release <- function() { 442 | 443 | # read /etc/redhat-release 444 | contents <- readLines("/etc/redhat-release", warn = FALSE) 445 | 446 | # infer id 447 | id <- if (grepl("centos", contents, ignore.case = TRUE)) 448 | "centos" 449 | else if (grepl("redhat", contents, ignore.case = TRUE)) 450 | "redhat" 451 | else 452 | "unknown" 453 | 454 | # try to find a version component (very hacky) 455 | version <- "unknown" 456 | 457 | parts <- strsplit(contents, "[[:space:]]")[[1L]] 458 | for (part in parts) { 459 | 460 | nv <- tryCatch(numeric_version(part), error = identity) 461 | if (inherits(nv, "error")) 462 | next 463 | 464 | version <- nv[1, 1] 465 | break 466 | 467 | } 468 | 469 | paste(c("linux", id, version), collapse = "-") 470 | 471 | } 472 | 473 | renv_bootstrap_library_root_name <- function(project) { 474 | 475 | # use project name as-is if requested 476 | asis <- Sys.getenv("RENV_PATHS_LIBRARY_ROOT_ASIS", unset = "FALSE") 477 | if (asis) 478 | return(basename(project)) 479 | 480 | # otherwise, disambiguate based on project's path 481 | id <- substring(renv_bootstrap_hash_text(project), 1L, 8L) 482 | paste(basename(project), id, sep = "-") 483 | 484 | } 485 | 486 | renv_bootstrap_library_root <- function(project) { 487 | 488 | path <- Sys.getenv("RENV_PATHS_LIBRARY", unset = NA) 489 | if (!is.na(path)) 490 | return(path) 491 | 492 | path <- Sys.getenv("RENV_PATHS_LIBRARY_ROOT", unset = NA) 493 | if (!is.na(path)) { 494 | name <- renv_bootstrap_library_root_name(project) 495 | return(file.path(path, name)) 496 | } 497 | 498 | prefix <- renv_bootstrap_profile_prefix() 499 | paste(c(project, prefix, "renv/library"), collapse = "/") 500 | 501 | } 502 | 503 | renv_bootstrap_validate_version <- function(version) { 504 | 505 | loadedversion <- utils::packageDescription("renv", fields = "Version") 506 | if (version == loadedversion) 507 | 
return(TRUE) 508 | 509 | # assume four-component versions are from GitHub; three-component 510 | # versions are from CRAN 511 | components <- strsplit(loadedversion, "[.-]")[[1]] 512 | remote <- if (length(components) == 4L) 513 | paste("rstudio/renv", loadedversion, sep = "@") 514 | else 515 | paste("renv", loadedversion, sep = "@") 516 | 517 | fmt <- paste( 518 | "renv %1$s was loaded from project library, but this project is configured to use renv %2$s.", 519 | "Use `renv::record(\"%3$s\")` to record renv %1$s in the lockfile.", 520 | "Use `renv::restore(packages = \"renv\")` to install renv %2$s into the project library.", 521 | sep = "\n" 522 | ) 523 | 524 | msg <- sprintf(fmt, loadedversion, version, remote) 525 | warning(msg, call. = FALSE) 526 | 527 | FALSE 528 | 529 | } 530 | 531 | renv_bootstrap_hash_text <- function(text) { 532 | 533 | hashfile <- tempfile("renv-hash-") 534 | on.exit(unlink(hashfile), add = TRUE) 535 | 536 | writeLines(text, con = hashfile) 537 | tools::md5sum(hashfile) 538 | 539 | } 540 | 541 | renv_bootstrap_load <- function(project, libpath, version) { 542 | 543 | # try to load renv from the project library 544 | if (!requireNamespace("renv", lib.loc = libpath, quietly = TRUE)) 545 | return(FALSE) 546 | 547 | # warn if the version of renv loaded does not match 548 | renv_bootstrap_validate_version(version) 549 | 550 | # load the project 551 | renv::load(project) 552 | 553 | TRUE 554 | 555 | } 556 | 557 | renv_bootstrap_profile_load <- function(project) { 558 | 559 | # if RENV_PROFILE is already set, just use that 560 | profile <- Sys.getenv("RENV_PROFILE", unset = NA) 561 | if (!is.na(profile) && nzchar(profile)) 562 | return(profile) 563 | 564 | # check for a profile file (nothing to do if it doesn't exist) 565 | path <- file.path(project, "renv/local/profile") 566 | if (!file.exists(path)) 567 | return(NULL) 568 | 569 | # read the profile, and set it if it exists 570 | contents <- readLines(path, warn = FALSE) 571 | if (length(contents) == 0L) 572 | return(NULL) 573 | 574 | # set RENV_PROFILE 575 | profile <- contents[[1L]] 576 | if (nzchar(profile)) 577 | Sys.setenv(RENV_PROFILE = profile) 578 | 579 | profile 580 | 581 | } 582 | 583 | renv_bootstrap_profile_prefix <- function() { 584 | profile <- renv_bootstrap_profile_get() 585 | if (!is.null(profile)) 586 | return(file.path("renv/profiles", profile)) 587 | } 588 | 589 | renv_bootstrap_profile_get <- function() { 590 | profile <- Sys.getenv("RENV_PROFILE", unset = "") 591 | renv_bootstrap_profile_normalize(profile) 592 | } 593 | 594 | renv_bootstrap_profile_set <- function(profile) { 595 | profile <- renv_bootstrap_profile_normalize(profile) 596 | if (is.null(profile)) 597 | Sys.unsetenv("RENV_PROFILE") 598 | else 599 | Sys.setenv(RENV_PROFILE = profile) 600 | } 601 | 602 | renv_bootstrap_profile_normalize <- function(profile) { 603 | 604 | if (is.null(profile) || profile %in% c("", "default")) 605 | return(NULL) 606 | 607 | profile 608 | 609 | } 610 | 611 | # load the renv profile, if any 612 | renv_bootstrap_profile_load(project) 613 | 614 | # construct path to library root 615 | root <- renv_bootstrap_library_root(project) 616 | 617 | # construct library prefix for platform 618 | prefix <- renv_bootstrap_platform_prefix() 619 | 620 | # construct full libpath 621 | libpath <- file.path(root, prefix) 622 | 623 | # attempt to load 624 | if (renv_bootstrap_load(project, libpath, version)) 625 | return(TRUE) 626 | 627 | # load failed; inform user we're about to bootstrap 628 | prefix <- paste("# 
Bootstrapping renv", version) 629 | postfix <- paste(rep.int("-", 77L - nchar(prefix)), collapse = "") 630 | header <- paste(prefix, postfix) 631 | message(header) 632 | 633 | # perform bootstrap 634 | bootstrap(version, libpath) 635 | 636 | # exit early if we're just testing bootstrap 637 | if (!is.na(Sys.getenv("RENV_BOOTSTRAP_INSTALL_ONLY", unset = NA))) 638 | return(TRUE) 639 | 640 | # try again to load 641 | if (requireNamespace("renv", lib.loc = libpath, quietly = TRUE)) { 642 | message("* Successfully installed and loaded renv ", version, ".") 643 | return(renv::load()) 644 | } 645 | 646 | # failed to download or load renv; warn the user 647 | msg <- c( 648 | "Failed to find an renv installation: the project will not be loaded.", 649 | "Use `renv::activate()` to re-initialize the project." 650 | ) 651 | 652 | warning(paste(msg, collapse = "\n"), call. = FALSE) 653 | 654 | }) 655 | -------------------------------------------------------------------------------- /sparsevar.Rproj: -------------------------------------------------------------------------------- 1 | Version: 1.0 2 | 3 | RestoreWorkspace: No 4 | SaveWorkspace: No 5 | AlwaysSaveHistory: Yes 6 | 7 | EnableCodeIndexing: Yes 8 | UseSpacesForTab: Yes 9 | NumSpacesForTab: 2 10 | Encoding: UTF-8 11 | 12 | RnwWeave: Sweave 13 | LaTeX: pdfLaTeX 14 | 15 | BuildType: Package 16 | PackageUseDevtools: Yes 17 | PackageInstallArgs: --no-multiarch --with-keep.source 18 | PackageRoxygenize: rd,collate,namespace 19 | -------------------------------------------------------------------------------- /tests/testSparse.R: -------------------------------------------------------------------------------- 1 | ## Test1 ----------------------------------------------------------------------- 2 | suppressMessages(library(Matrix)) 3 | i <- sample(1:100, 20) 4 | j <- sample(1:100, 20) 5 | x <- rnorm(20) 6 | A <- sparseMatrix(dims = c(100,100), i, j, x = x) 7 | print(A) 8 | b <- rep(1, 100) 9 | crossprod(A,b) 10 | 11 | ## Test2 ----------------------------------------------------------------------- 12 | sim <- simulateVAR(N = 100) 13 | 14 | trDt <- transformData(sim$series, 1, list()) 15 | 16 | X <- as(trDt$X, "dgCMatrix") 17 | y <- trDt$y 18 | 19 | Rcpp::sourceCpp(file = "/home/svazzole/workspace/r/sparsevar/src/scad.cpp") 20 | z <- sparsevar::crossprod(X,y) 21 | z <- crossprod(X,y) 22 | -------------------------------------------------------------------------------- /tests/testSparse2.R: -------------------------------------------------------------------------------- 1 | library(ncvreg) 2 | Rcpp::sourceCpp(file = "/home/svazzole/workspace/r/sparsevar/src/scad.cpp") 3 | Rcpp::sourceCpp(file = "/home/svazzole/workspace/r/ncvreg/src/maxprod.cpp") 4 | Rcpp::sourceCpp(file = "/home/svazzole/workspace/r/ncvreg/src/standardize.cpp") 5 | source(file = "/home/svazzole/workspace/r/sparsevar/R/scadReg.R") 6 | n <- 200 7 | p <- 23 8 | 9 | X <- matrix(rnorm(n*p), n, p) 10 | mult <- rep(1, ncol(X)) 11 | b <- rnorm(p) 12 | y <- rnorm(n, X%*%b) 13 | beta <- lm(y~X)$coef 14 | 15 | # XX <- as(Matrix::Matrix(X, sparse = TRUE), "dgCMatrix") 16 | scad <- scadReg(X,y,lambda=c(0),eps=.0001) 17 | scad2 <- scadReg(X,y,lambda=c(0), eps=.0001) 18 | scad3 <- scadReg(X,y, nlambda = 20, lambda.min = 0, eps = .0001) 19 | ncv <- coef(ncvreg(X,y,nlambda = 20, penalty="SCAD",eps=.0001)) 20 | beta 21 | scad 22 | scad2 23 | ncv 24 | 25 | check(scad, beta[2:nrow(beta)], tolerance=.01, check.attributes=FALSE) 26 | check(mcp, beta,tolerance=.01,check.attributes=FALSE) 27 | 28 | scad3$beta[, 
21] - beta[2:24] 29 | 30 | abs((scad3$beta[,6]-scad3$beta[,5])/scad3$beta[,5])>.0001 31 | 32 | abs((scad3_2$beta[,6]-scad3_2$beta[,5])/scad3_2$beta[,5])>.0001 33 | -------------------------------------------------------------------------------- /tests/testthat.R: -------------------------------------------------------------------------------- 1 | library(testthat) 2 | library(sparsevar) 3 | -------------------------------------------------------------------------------- /tests/testthat/testIsWorking.R: -------------------------------------------------------------------------------- 1 | context("Testing generation of VARs") 2 | 3 | test_that("sparsevar", { 4 | sim <- sparsevar::simulateVAR(N = 20, p = 2) 5 | expect_output(str(sim), "List of 4") 6 | expect_output(cat(attr(sim, "class")), "var") 7 | expect_output(cat(attr(sim, "type")), "simulation") 8 | }) 9 | 10 | context("Testing estimation of VARs") 11 | 12 | test_that("sparsevar", { 13 | sim <- sparsevar::simulateVAR(N = 30, p = 1) 14 | fit <- sparsevar::fitVAR(sim$series, p = 1) 15 | expect_output(str(fit), "List of 11") 16 | expect_output(cat(attr(fit, "class")), "var") 17 | expect_output(cat(attr(fit, "type")), "fit") 18 | 19 | sim <- sparsevar::simulateVAR(N = 30, p = 1) 20 | fit <- sparsevar::fitVAR(sim$series, p = 1, 21 | lambdas_list = c(0.1, 0.5, 0.3)) 22 | expect_output(str(fit), "List of 11") 23 | expect_output(cat(attr(fit, "class")), "var") 24 | expect_output(cat(attr(fit, "type")), "fit") 25 | }) 26 | 27 | context("Testing IRF and error bands") 28 | 29 | test_that("sparsevar", { 30 | sim <- sparsevar::simulateVAR(N = 10, p = 3) 31 | fit <- sparsevar::fitVAR(sim$series, p = 3) 32 | irf <- sparsevar::impulseResponse(fit, len = 20) 33 | expect_output(str(irf), "List of 3") 34 | expect_output(cat(attr(irf, "class")), "irf") 35 | eb <- sparsevar::errorBandsIRF(fit, irf, verbose = FALSE) 36 | expect_output(str(eb), "List of 8") 37 | expect_output(cat(attr(eb, "class")), "irfBands") 38 | }) 39 | -------------------------------------------------------------------------------- /vignettes/using.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Using sparsevar package" 3 | author: "" 4 | date: "`r Sys.Date()`" 5 | output: 6 | rmarkdown::pdf_document 7 | vignette: > 8 | %\VignetteIndexEntry{Using sparsevar} 9 | %\VignetteEngine{knitr::rmarkdown} 10 | %\VignetteEncoding{UTF-8} 11 | --- 12 | 13 | # Introduction 14 | 15 | `sparsevar` is an R package that estimates sparse VAR and VECM models using penalized least squares methods (PLS): it is possible to use 16 | various penalties, such as ENET, SCAD or MC+. The sparsity parameter can be estimated using cross-validation or time slicing. When using ENET it is possible to estimate a VAR(1) of dimension up to 200, while with one of the other two it is better not to go beyond 50. When estimating a VAR($p$) model, the limits are roughly $200/p$ and $50/p$, respectively. 17 | 18 | The authors of `sparsevar` are Monica Billio, Lorenzo Frattarolo and Simone Vazzoler, and the R package is maintained by Simone Vazzoler. This vignette describes the usage of `sparsevar` in R. 
19 | 20 | # Installation 21 | 22 | The simplest way to install the package is from the CRAN repositories, by typing in the 23 | R console 24 | ```{r, eval=FALSE} 25 | install.packages("sparsevar", repos = "http://cran.us.r-project.org") 26 | ``` 27 | 28 | It is also possible to install the development version of the package by typing 29 | ```{r, eval=FALSE} 30 | install.packages("devtools", repos = "http://cran.us.r-project.org") 31 | devtools::install_github("svazzole/sparsevar") 32 | ``` 33 | 34 | # Quick start 35 | 36 | To load the `sparsevar` package simply type 37 | ```{r} 38 | library(sparsevar) 39 | ``` 40 | 41 | Using a function included in the package, we simply generate a $20\times 20$ VAR$(2)$ process 42 | ```{r, cache = TRUE} 43 | set.seed(1) 44 | sim <- simulateVAR(N = 20, p = 2) 45 | ``` 46 | 47 | and we can estimate the matrices of the process using 48 | ```{r, cache = TRUE} 49 | fit <- fitVAR(sim$series, p = 2) 50 | ``` 51 | 52 | The results can be seen by plotting the matrices 53 | ```{r} 54 | plotVAR(sim, fit) 55 | ``` 56 | 57 | # Description of the package's functions 58 | 59 | ## Estimation of VAR or VECM models 60 | 61 | Use `fitVAR` for VAR model estimation or `fitVECM` for VECM estimation. 62 | 63 | The common arguments for the two functions are: 64 | 65 | * `data`: a matrix containing the multivariate time series (variables in columns, observations in rows); 66 | * `p`: the order of the VAR model to be estimated; default `p = 1` for `fitVAR` and `p = 2` 67 | for `fitVECM`. 68 | * `method`: the method used to estimate the sparsity parameter. Default is `method = "cv"` 69 | (cross-validation). Another possibility is `method = "timeSlice"`. 70 | * `penalty`: the penalty used in least squares. Possible values are: `"ENET"`, `"SCAD"` or `"MCP"`; 71 | * `...`: additional options. Some of them depend on the penalty used, some on the method 72 | and some are global. A combined example is given after the option lists below. 73 | 74 | ### Global options 75 | 76 | * `parallel`: `TRUE` or `FALSE` (default). Parallel cross-validation (on the folds); 77 | * `ncores`: if `parallel = TRUE` then you must specify the number of cores used for the parallelization (default = `1`). 78 | * `nfolds`: number of folds to use in the cross validation (default `nfolds = 10`); 79 | * `threshold`: `TRUE` or `FALSE` (default). If `TRUE` all the elements of the VAR/VECM 80 | matrices that are small "enough" are set to 0. 81 | 82 | ### Options for `penalty = "ENET"` 83 | 84 | * `lambda`: `"lambda.min"` (default) or `"lambda.1se"`; 85 | * `alpha`: a value in [0,1] (default `alpha = 1`). `alpha = 1` is LASSO regression, `alpha = 0` is Ridge LS; 86 | * `type.measure`: `"mse"` (default) or `"mae"`; 87 | * `nlambda`: number of lambdas used for cross validation. 88 | * `foldsID`: the vector containing the IDs for the folds in the cross validation. 89 | 90 | ### Options for `penalty = "SCAD"` or `"MCP"` 91 | 92 | * `eps`: convergence tolerance; 93 | * `picasso`: `TRUE` or `FALSE`. If `TRUE` uses the `picasso` package for SCAD estimation. 
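To make the argument and option lists above concrete, here is the combined example referred to earlier. It is only a sketch: the argument names follow the descriptions above, while the specific values (folds, cores, tolerances) are placeholders rather than recommendations.

```{r, eval=FALSE}
# ENET estimation with parallel cross-validation (placeholder values)
fitENET <- fitVAR(sim$series, p = 2, penalty = "ENET", method = "cv",
                  nfolds = 5, parallel = TRUE, ncores = 2,
                  alpha = 0.5, type.measure = "mse", lambda = "lambda.min")

# SCAD estimation with time slicing and thresholding of small coefficients
fitSCAD <- fitVAR(sim$series, p = 2, penalty = "SCAD", method = "timeSlice",
                  eps = 1e-4, threshold = TRUE)
```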
94 | 95 | ### Output 96 | 97 | The output of the function `fitVAR` is an S3 object of class `var` containing: 98 | 99 | * `mu`: a vector for the mean; 100 | * `A`: a list of length `p` containing the matrices estimated for the VAR(p) model; 101 | * `lambda`: the estimated sparsity parameter; 102 | * `mse`: the mean square error of the cross validation or time slicing; 103 | * `time`: elapsed time for the estimation; 104 | * `series`: the transformed data matrix (centered or scaled); 105 | * `residuals`: the matrix of the estimated residuals; 106 | * `sigma`: the variance/covariance matrix of the residuals; 107 | * `penalty`: the penalty used (`ENET`, `SCAD` or `MCP`); 108 | * `method`: the method used (`"cv"` or `"timeSlice"`). 109 | 110 | ## Simulation of VAR models 111 | 112 | Use `simulateVAR`. The parameters for the function are: 113 | 114 | * `N`: the dimension of the process; 115 | * `nobs`: the number of observations of the process; 116 | * `rho`: the variance/covariance "intensity"; 117 | * `sparsity`: the percentage of non-zero elements in the matrix of the VAR; 118 | * `method`: `"normal"` or `"bimodal"`. 119 | 120 | ## Estimation of Impulse Response function 121 | 122 | Use the functions `impulseResponse` and `errorBandsIRF` to compute the impulse response 123 | function and to estimate its error bands, respectively. 124 | 125 | ```{r, eval=FALSE} 126 | irf <- impulseResponse(fit) 127 | eb <- errorBandsIRF(fit, irf) 128 | ``` 129 | 130 | # Examples 131 | 132 | ## Estimations' examples 133 | 134 | ```{r, eval=FALSE} 135 | results <- fitVAR(rets) 136 | ``` 137 | will estimate a VAR(1) process using LASSO regression on the dataset `rets`. 138 | 139 | The command 140 | ```{r, eval=FALSE} 141 | results <- fitVAR(rets, p = 3, penalty = "ENET", parallel = TRUE, 142 | ncores = 5, alpha = 0.95, type.measure = "mae", 143 | lambda = "lambda.1se") 144 | ``` 145 | will estimate a VAR(3) model on the dataset `rets` using the penalty `"ENET"` with `alpha = 0.95` (between LASSO and Ridge). For the cross validation it will use `"mae"` (mean absolute error) instead of mean square error, and it will choose the model corresponding to the lambda that is one standard deviation from the minimum. Moreover, it will parallelize the cross validation over 5 cores. 146 | 147 | ## IRF example 148 | 149 | Here we compute the IRF for the model estimated in the Quick Start section. 
150 | 151 | ```{r, eval = FALSE} 152 | irf <- impulseResponse(fit) 153 | eb <- errorBandsIRF(fit, irf, verbose = FALSE) 154 | plotIRFGrid(irf, eb, indexes = c(11,20)) 155 | ``` 156 | 157 | ## Simulations' examples 158 | 159 | ```{r, eval=FALSE} 160 | sim <- simulateVAR(N = 100, nobs = 250, rho = 0.75, sparsity = 0.05, method = "normal") 161 | ``` 162 | -------------------------------------------------------------------------------- /vignettes/using_cache/latex/__packages: -------------------------------------------------------------------------------- 1 | base 2 | sparsevar 3 | -------------------------------------------------------------------------------- /vignettes/using_cache/latex/unnamed-chunk-7_ed36c6df10e0fd7f41f62fe376f5eeb8.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/svazzole/sparsevar/2fbaf6080aff0c36e72f9407613223a9794087e9/vignettes/using_cache/latex/unnamed-chunk-7_ed36c6df10e0fd7f41f62fe376f5eeb8.RData -------------------------------------------------------------------------------- /vignettes/using_cache/latex/unnamed-chunk-7_ed36c6df10e0fd7f41f62fe376f5eeb8.rdb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/svazzole/sparsevar/2fbaf6080aff0c36e72f9407613223a9794087e9/vignettes/using_cache/latex/unnamed-chunk-7_ed36c6df10e0fd7f41f62fe376f5eeb8.rdb -------------------------------------------------------------------------------- /vignettes/using_cache/latex/unnamed-chunk-7_ed36c6df10e0fd7f41f62fe376f5eeb8.rdx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/svazzole/sparsevar/2fbaf6080aff0c36e72f9407613223a9794087e9/vignettes/using_cache/latex/unnamed-chunk-7_ed36c6df10e0fd7f41f62fe376f5eeb8.rdx --------------------------------------------------------------------------------