├── .Rbuildignore ├── .gitignore ├── DESCRIPTION ├── NAMESPACE ├── R ├── caafidata-data.R ├── dassdata-data.R ├── datascreen-data.R ├── dirtdata-data.R ├── efa-data.R ├── encoder_logic.R ├── encoder_ui.R ├── introR-data.R ├── meaningdata-data.R ├── mirtdata-data.R ├── resdata-data.R └── server_context.R ├── README.md ├── data ├── caafidata.rda ├── dassdata.rda ├── datascreen.rda ├── dirtdata.rda ├── efa.rda ├── introR.rda ├── meaningdata.rda ├── mirtdata.rda └── resdata.rda ├── inst ├── doc │ ├── lecture_cfa.R │ ├── lecture_cfa.Rmd │ ├── lecture_cfa.html │ ├── lecture_data_screen.R │ ├── lecture_data_screen.Rmd │ ├── lecture_data_screen.html │ ├── lecture_efa.R │ ├── lecture_efa.Rmd │ ├── lecture_efa.html │ ├── lecture_introR.R │ ├── lecture_introR.Rmd │ ├── lecture_introR.html │ ├── lecture_irt.R │ ├── lecture_irt.Rmd │ ├── lecture_irt.html │ ├── lecture_lgm.R │ ├── lecture_lgm.Rmd │ ├── lecture_lgm.html │ ├── lecture_mgcfa.R │ ├── lecture_mgcfa.Rmd │ ├── lecture_mgcfa.html │ ├── lecture_mtmm.R │ ├── lecture_mtmm.Rmd │ ├── lecture_mtmm.html │ ├── lecture_path.R │ ├── lecture_path.Rmd │ ├── lecture_path.html │ ├── lecture_secondcfa.R │ ├── lecture_secondcfa.Rmd │ ├── lecture_secondcfa.html │ ├── lecture_sem.R │ ├── lecture_sem.Rmd │ ├── lecture_sem.html │ ├── lecture_terms.R │ ├── lecture_terms.Rmd │ └── lecture_terms.html └── tutorials │ ├── cfabasics │ ├── .gitignore │ └── cfabasics.Rmd │ ├── cfasecond │ ├── .gitignore │ └── cfasecond.Rmd │ ├── datascreen │ ├── .gitignore │ └── datascreen.Rmd │ ├── efa │ ├── .gitignore │ └── efa.Rmd │ ├── fullsem │ ├── .gitignore │ ├── fullsem.Rmd │ └── fullsem_files │ │ └── figure-html │ │ └── unnamed-chunk-2-1.png │ ├── introR │ ├── .gitignore │ └── introR.Rmd │ ├── irt │ ├── .gitignore │ └── irt.Rmd │ ├── lgm │ ├── .gitignore │ ├── lgm.Rmd │ └── lgm_data │ │ ├── data.RData │ │ └── data_chunks_index.txt │ ├── mgcfa │ ├── .gitignore │ └── mgcfa.Rmd │ ├── mtmm │ ├── .gitignore │ └── mtmm.Rmd │ ├── path1 │ ├── .gitignore │ ├── images │ │ ├── assignment_path1_1.png │ │ └── assignment_path1_2.png │ └── path1.Rmd │ ├── path2 │ ├── .gitignore │ ├── images │ │ └── assignment_path2.png │ └── path2.Rmd │ └── terms │ ├── .gitignore │ └── terms.Rmd ├── learnSEM.Rproj ├── man ├── caafidata.Rd ├── dassdata.Rd ├── datascreen.Rd ├── dirtdata.Rd ├── efa.Rd ├── encoder_logic.Rd ├── encoder_ui.Rd ├── introR.Rd ├── is_server_context.Rd ├── meaningdata.Rd ├── mirtdata.Rd └── resdata.Rd ├── setup └── learnSEM_setup.R └── vignettes ├── data ├── assignment_introR.csv ├── assignment_mgcfa.csv ├── lecture_data_screen.csv ├── lecture_efa.csv ├── lecture_evals.csv ├── lecture_irt.csv └── lecture_mtmm.csv ├── lecture_cfa.R ├── lecture_cfa.Rmd ├── lecture_cfa.html ├── lecture_data_screen.R ├── lecture_data_screen.Rmd ├── lecture_data_screen.html ├── lecture_efa.R ├── lecture_efa.Rmd ├── lecture_efa.html ├── lecture_introR.R ├── lecture_introR.Rmd ├── lecture_introR.html ├── lecture_irt.R ├── lecture_irt.Rmd ├── lecture_irt.html ├── lecture_lgm.R ├── lecture_lgm.Rmd ├── lecture_lgm.html ├── lecture_mgcfa.R ├── lecture_mgcfa.Rmd ├── lecture_mgcfa.html ├── lecture_mtmm.R ├── lecture_mtmm.Rmd ├── lecture_mtmm.html ├── lecture_path.R ├── lecture_path.Rmd ├── lecture_path.html ├── lecture_secondcfa.R ├── lecture_secondcfa.Rmd ├── lecture_secondcfa.html ├── lecture_sem.R ├── lecture_sem.Rmd ├── lecture_sem.html ├── lecture_terms.R ├── lecture_terms.Rmd ├── lecture_terms.html └── pictures ├── ability.png ├── bi_factor.png ├── diagram_sem.png ├── example_lgm.png ├── exo_endo.png ├── 
full_example.png ├── full_sem.png ├── full_sem2.png ├── icc_example.png ├── indicators.png ├── item_difficulty.png ├── kline_model.png ├── lecture_evals.png ├── model1_mtmm.png ├── model2_mtmm.png ├── model3_mtmm.png ├── model4_mtmm.png ├── model_steps.png ├── random_fixed.png ├── rotate.png ├── scree.png ├── second_order.png └── srmr_formula.png /.Rbuildignore: -------------------------------------------------------------------------------- 1 | ^.*\.Rproj$ 2 | ^\.Rproj\.user$ 3 | setup 4 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .Rproj.user 2 | .Rhistory 3 | .RData 4 | .Ruserdata 5 | .DS_Store 6 | -------------------------------------------------------------------------------- /DESCRIPTION: -------------------------------------------------------------------------------- 1 | Package: learnSEM 2 | Type: Package 3 | Title: Learning Tutorials for Structural Equation Modeling 4 | Version: 0.5.0 5 | Authors@R: 6 | person(given = "Erin M.", 7 | family = "Buchanan", 8 | role = "cre", 9 | email = "buchananlab@gmail.com", 10 | comment = c(ORCID = "0000-0002-9689-4189")) 11 | Maintainer: Erin M. Buchanan <buchananlab@gmail.com> 12 | Description: Do you want to learn Structural Equation Modeling in R? 13 | You have come to the right place! You can use these learnr tutorials 14 | to teach yourself SEM in R, along with the lectures and guided 15 | examples provided. 16 | License: GPL-3 17 | Encoding: UTF-8 18 | LazyData: true 19 | Imports: 20 | lavaan (>= 0.6.7), 21 | semPlot (>= 1.1.2), 22 | learnr (>= 0.10.1), 23 | shiny (>= 1.5.0), 24 | knitr 25 | VignetteBuilder: knitr 26 | Suggests: 27 | mirt, 28 | ltm, 29 | corrplot, 30 | MOTE, 31 | car, 32 | mice, 33 | palmerpenguins, 34 | rio, 35 | psych, 36 | GPArotation, 37 | parameters, 38 | broom 39 | Depends: 40 | R (>= 2.10) 41 | RoxygenNote: 7.2.3 42 | -------------------------------------------------------------------------------- /NAMESPACE: -------------------------------------------------------------------------------- 1 | # Generated by roxygen2: do not edit by hand 2 | 3 | export(encoder_logic) 4 | export(encoder_ui) 5 | export(is_server_context) 6 | import(learnr) 7 | import(shiny) 8 | import(utils) 9 | -------------------------------------------------------------------------------- /R/caafidata-data.R: -------------------------------------------------------------------------------- 1 | #' CAAFI Data: Computer Aversion, Attitudes, and Familiarity Inventory 2 | #' 3 | #' Study: This dataset has data collected on the computer 4 | #' aversion, attitudes, and familiarity inventory. 5 | #' 6 | #' The instructions were: 7 | #' Below is a list of items describing many of the 8 | #' thoughts and experiences that people have with computers. 9 | #' After reading each statement, circle the number that best 10 | #' describes how true or how false the statement is as it 11 | #' applies to you at this time. If you have no opinion about 12 | #' the item, circle "0", but please use this option only if 13 | #' it is absolutely necessary. Be sure to circle only one 14 | #' number. Please do your best to respond to each item. 15 | #' 16 | #' Scale -3 to 3: absolutely false, neutral, absolutely true 17 | #' 18 | #' Computer Familiarity: Items 3, 13-14, 16, 20-23, 27, and 30. 19 | #' 20 | #' Computer Attitudes: Items 1-2, 4-5, 8, 11, 18-19, and 28-29. 21 | #' 22 | #' Computer Aversion: Items 6-7, 9-10, 12, 15, 17, and 24-26.
23 | #' 24 | #' @docType data 25 | #' 26 | #' @usage data(caafidata) 27 | #' 28 | #' @format A data frame with 794 rows and 30 variables. 29 | #' 30 | #'\describe{ 31 | #' \item{q1}{I enjoy using computers.} 32 | #' \item{q2}{Being able to use a computer is important to me.} 33 | #' \item{q3}{I keep up with the latest computer hardware.} 34 | #' \item{q4}{Computers are beneficial because they save people time.} 35 | #' \item{q5}{I like using word-processing programs.} 36 | #' \item{q6}{I feel like a fool when I am using a computer and others are around.} 37 | #' \item{q7}{I am smart enough to use a computer.} 38 | #' \item{q8}{I avoid using computers whenever possible.} 39 | #' \item{q9}{I do not understand how to use computer software (e.g., word-processing programs, spreadsheet programs, etc.).} 40 | #' \item{q10}{I feel that I understand how to use computer files, documents, and folders.} 41 | #' \item{q11}{I use a computer input device every day (e.g., a keyboard, a touch pad, a mouse).} 42 | #' \item{q12}{I can use a computer to successfully perform tasks.} 43 | #' \item{q13}{I can add new hardware to a computer.} 44 | #' \item{q14}{I enjoy reading computer magazines.} 45 | #' \item{q15}{When I use a computer, I am afraid that I will damage it.} 46 | #' \item{q16}{I enjoy connecting new computer accessories.} 47 | #' \item{q17}{I must have a reference manual or a help file to run computer software.} 48 | #' \item{q18}{E-mail is an easy way to communicate with people.} 49 | #' \item{q19}{I use e-mail every day.} 50 | #' \item{q20}{I am comfortable changing (installing/upgrading) computer software.} 51 | #' \item{q21}{I often read computer books.} 52 | #' \item{q22}{My friends often ask me computer-related questions.} 53 | #' \item{q23}{I often read computer magazines.} 54 | #' \item{q24}{Overall, I feel that I don't know how to use a computer.} 55 | #' \item{q25}{Computers are too scientific for me.} 56 | #' \item{q26}{When using a computer, I often lose data.} 57 | #' \item{q27}{I enjoy learning to use new software programs.} 58 | #' \item{q28}{I like to use computer input devices such as a keyboard, a touch pad, a mouse, etc.} 59 | #' \item{q29}{Using a computer is entertaining.} 60 | #' \item{q30}{I keep up with the latest computer software.} 61 | #' } 62 | #' 63 | #' @keywords datasets 64 | "caafidata" 65 | 66 | -------------------------------------------------------------------------------- /R/dassdata-data.R: -------------------------------------------------------------------------------- 1 | #' DASS Data: Depression, Anxiety, and Stress Inventory 2 | #' 3 | #' Study: The DASS is a measurement scale that examines 4 | #' the depression, anxiety, and stress of an individual. 5 | #' 6 | #' The instructions were: 7 | #' Please read each statement and select a number 0, 1, 8 | #' 2 or 3 that indicates how much the statement applied 9 | #' to you over the past week. There are no right or wrong 10 | #' answers. Do not spend too much time on any statement, 11 | #' but please answer each question. 12 | #' 13 | #' Scale 0-3: did not apply to me, applied to me to some 14 | #' degree, applied to me to a considerable degree, or 15 | #' applied to me very much 16 | #' 17 | #' Depression: Questions 3, 5, 10, 13, 16, 17, 21 18 | #' 19 | #' Anxiety: Questions 2, 4, 7, 9, 15, 19, 20 20 | #' 21 | #' Stress: 1, 6, 8, 11, 12, 14, 18 22 | #' 23 | #' @docType data 24 | #' 25 | #' @usage data(dassdata) 26 | #' 27 | #' @format A data frame with 794 rows and 21 variables.
28 | #' 29 | #'\describe{ 30 | #' \item{Q1}{I found it hard to wind down.} 31 | #' \item{Q2}{I was aware of dryness of my mouth.} 32 | #' \item{Q3}{I couldn't seem to experience any positive feeling at all.} 33 | #' \item{Q4}{I experienced breathing difficulty.} 34 | #' \item{Q5}{I found it difficult to work up the initiative to do things.} 35 | #' \item{Q6}{I tended to over-react to situations.} 36 | #' \item{Q7}{I experienced trembling (eg, in the hands).} 37 | #' \item{Q8}{I felt that I was using a lot of nervous energy.} 38 | #' \item{Q9}{I was worried about situations in which I might panic and make a fool of myself.} 39 | #' \item{Q10}{I felt that I had nothing to look forward to.} 40 | #' \item{Q11}{I found myself getting agitated.} 41 | #' \item{Q12}{I found it difficult to relax.} 42 | #' \item{Q13}{I felt down-hearted and blue.} 43 | #' \item{Q14}{I was intolerant of anything that kept me from getting on with what I was doing.} 44 | #' \item{Q15}{I felt I was close to panic.} 45 | #' \item{Q16}{I was unable to become enthusiastic about anything.} 46 | #' \item{Q17}{I felt I wasn't worth much as a person.} 47 | #' \item{Q18}{I felt that I was rather touchy.} 48 | #' \item{Q19}{I was aware of the action of my heart in the absence of physical exertion.} 49 | #' \item{Q20}{I felt scared without any good reason.} 50 | #' \item{Q21}{I felt that life was meaningless.} 51 | #' } 52 | #' 53 | #' @keywords datasets 54 | "dassdata" 55 | 56 | -------------------------------------------------------------------------------- /R/datascreen-data.R: -------------------------------------------------------------------------------- 1 | #' Data Screening Practice Dataset 2 | #' 3 | #' Study: This dataset includes a male body dissatisfaction scale that can be used for 4 | #' data screening or scale development. 5 | #' 6 | #' @docType data 7 | #' 8 | #' @usage data(datascreen) 9 | #' 10 | #' @format A data frame with 797 rows and 12 variables. 11 | #' 12 | #' \describe{ 13 | #' \item{Participant_ID}{ID number for each participant} 14 | #' \item{q1}{I think my body should be leaner} 15 | #' \item{q2}{I am concerned that my stomach is too flabby} 16 | #' \item{q3}{I feel dissatisfied with my overall body build} 17 | #' \item{q4}{I think I have too much fat on my body} 18 | #' \item{q5}{I think my abs are not thin enough} 19 | #' \item{q6}{I feel satisfied with the size and shape of 20 | #' my body} 21 | #' \item{q7}{Has eating sweets, cakes, or other high 22 | #' calorie food made you feel fat or weak?} 23 | #' \item{q8}{Have you felt excessively large and rounded 24 | #' (i.e., fat)?} 25 | #' \item{q9}{Have you felt ashamed of your body size or 26 | #' shape?} 27 | #' \item{q10}{Has seeing your reflection (e.g., in a mirror 28 | #' or window) made you feel badly about your size or shape?} 29 | #' \item{q11}{Have you been so worried about your body size 30 | #' or shape that you have been feeling that you ought to diet?} 31 | #' } 32 | #' 33 | #' @keywords datasets 34 | "datascreen" 35 | 36 | -------------------------------------------------------------------------------- /R/dirtdata-data.R: -------------------------------------------------------------------------------- 1 | #' Dichotomous IRT Practice Data 2 | #' 3 | #' Study: This data represents the answers on an 4 | #' Educational Psychology exam. A zero indicates that 5 | #' the person missed the question, while one 6 | #' indicates that the person got the question right.
7 | #' 8 | #' @docType data 9 | #' 10 | #' @usage data(dirtdata) 11 | #' 12 | #' @format A data frame with 30 rows and 4 variables. 13 | #' 14 | #' @keywords datasets 15 | "dirtdata" 16 | 17 | -------------------------------------------------------------------------------- /R/efa-data.R: -------------------------------------------------------------------------------- 1 | #' Exploratory Factor Analysis Practice Dataset 2 | #' 3 | #' Study: This dataset has data on the Openness to 4 | #' Experience scale collected as part of an undergraduate 5 | #' honor's thesis project. 6 | #' 7 | #' The instructions were: 8 | #' Below are some phrases describing people's behaviors. 9 | #' Please use the rating scale below to describe how 10 | #' accurately each statement describes you. Describe 11 | #' yourself as you generally are now, not as you wish to 12 | #' be in the future. Describe yourself as you honestly 13 | #' see yourself in relation to other people of your 14 | #' gender and of roughly your same age. Please read 15 | #' each statement carefully, and then check the box 16 | #' that corresponds to your response. 17 | #' 18 | #' Scale: very inaccurate, moderately inaccurate, 19 | #' neither inaccurate nor accurate, moderately 20 | #' accurate, very accurate 21 | #' 22 | #' @docType data 23 | #' 24 | #' @usage data(efa) 25 | #' 26 | #' @format A data frame with 99 rows and 21 variables. 27 | #' 28 | #'\describe{ 29 | #' \item{o1}{Believe in the importance of art.} 30 | #' \item{o2}{Have a vivid imagination.} 31 | #' \item{o3}{Tend to vote for liberal political candidates.} 32 | #' \item{o4}{Carry the conversation to a higher level.} 33 | #' \item{o5}{Enjoy hearing new ideas.} 34 | #' \item{o6}{Enjoy thinking about things.} 35 | #' \item{o7}{Can say things beautifully.} 36 | #' \item{o8}{Enjoy wild flights of fantasy.} 37 | #' \item{o9}{Get excited by new ideas.} 38 | #' \item{o10}{Have a rich vocabulary.} 39 | #' \item{o11}{Am not interested in abstract ideas.} 40 | #' \item{o12}{Do not like art.} 41 | #' \item{o13}{Avoid philosophical discussions.} 42 | #' \item{o14}{Do not enjoy going to art museums.} 43 | #' \item{o15}{Tend to vote for conservative political candidates.} 44 | #' \item{o16}{Do not like poetry.} 45 | #' \item{o17}{Rarely look for a deeper meaning in things.} 46 | #' \item{o18}{Believe that too much tax money goes to support artists.} 47 | #' \item{o19}{Am not interested in theoretical discussions.} 48 | #' \item{o20}{Have difficulty understanding abstract ideas.} 49 | #' \item{condition}{a group condition each participant received} 50 | #' } 51 | #' 52 | #' @keywords datasets 53 | "efa" 54 | 55 | -------------------------------------------------------------------------------- /R/encoder_logic.R: -------------------------------------------------------------------------------- 1 | #' Encoding Logic for learnr Tutorials 2 | #' 3 | #' This function grabs the student answers from a learnr 4 | #' tutorial and returns them as an HTML output for 5 | #' printing to the tutorial screen. 6 | #' 7 | #' @return HTML output for the student tutorial 8 | #' 9 | #' @keywords shiny, learnr, student answers 10 | #' @import learnr 11 | #' @import shiny 12 | #' @import utils 13 | #' @export 14 | #' @examples 15 | #' 16 | #' # Be sure to put this into a server-context chunk. 
17 | #' #```{r context="server"} 18 | #' #encoder_logic() 19 | #' #``` 20 | 21 | encoder_logic <- function() { 22 | p <- parent.frame() 23 | check_server_context(p) 24 | 25 | # Evaluate in parent frame to get input, output, and session 26 | local({ 27 | encoded_txt <- shiny::eventReactive( 28 | input$submission_generate, 29 | { 30 | #extract the objects 31 | objs <- learnr:::get_all_state_objects(session) 32 | objs <- learnr:::submissions_from_state_objects(objs) 33 | 34 | str(objs) 35 | 36 | #create a report 37 | report <- "" 38 | 39 | #loop and add to the report 40 | for (level1 in 1:length(objs)){ 41 | 42 | #put in the ID 43 | report <- paste(report, "<br>", "<br><br>Question ID: ", objs[[level1]]$id, 44 | "<br><br>") 45 | 46 | objs[[level1]]$data$code <- gsub("\n", "<br>", objs[[level1]]$data$code) 47 | #if code exercise 48 | if (objs[[level1]]$type == "exercise_submission"){ 49 | 50 | report <- paste(report, "<br>", "Code Typed: ", 51 | objs[[level1]]$data$code) 52 | 53 | report <- paste(report, "<br>", "Output: ", 54 | objs[[level1]]$data$output) 55 | 56 | } else { #else question submission 57 | 58 | report <- paste(report, "<br>", "Question: ", 59 | objs[[level1]]$data$question) 60 | 61 | report <- paste(report, "<br>", "Answer: ", 62 | objs[[level1]]$data$answer) 63 | 64 | } 65 | 66 | } 67 | 68 | #return the report 69 | report 70 | 71 | } 72 | ) 73 | 74 | output$submission_output <- shiny::renderUI(HTML(encoded_txt())) 75 | 76 | }, envir = p) 77 | } 78 | 79 | #' @rdname encoder_logic 80 | #' @export 81 | -------------------------------------------------------------------------------- /R/encoder_ui.R: -------------------------------------------------------------------------------- 1 | #' Encoding User Interface for learnr Tutorials 2 | #' 3 | #' This function is the shiny user interface for 4 | #' creating the submission output. You can 5 | #' define instructions to go before or after the 6 | #' submission window! 7 | #' 8 | #' @param ui_before Shiny code to go before your 9 | #' submission box. 10 | #' @param ui_after Shiny code to go after your 11 | #' submission box. 12 | #' @return Shiny interface for creating submissions 13 | #' for the learnr tutorials. 14 | #' 15 | #' @keywords shiny, learnr, student answers 16 | #' @import shiny 17 | #' @export 18 | #' @examples 19 | #' 20 | #' #```{r encode, echo=FALSE} 21 | #' #encoder_ui() 22 | #' #``` 23 | 24 | encoder_ui <- function(ui_before = NULL, ui_after = NULL) { 25 | check_not_server_context(parent.frame()) 26 | 27 | shiny::tags$div( 28 | ui_before, 29 | shiny::fixedRow( 30 | shiny::column( 31 | width = 3, 32 | shiny::actionButton("submission_generate", "Generate Submission") 33 | ), 34 | shiny::column(width = 7), 35 | shiny::column( 36 | width = 2 37 | ) 38 | ), 39 | shiny::tags$br(), 40 | htmlOutput("submission_output"), 41 | shiny::tags$br(), 42 | ui_after 43 | ) 44 | } 45 | 46 | #' @rdname encoder_ui 47 | #' @export 48 | -------------------------------------------------------------------------------- /R/introR-data.R: -------------------------------------------------------------------------------- 1 | #' Introduction to R Dataset 2 | #' 3 | #' A dataset containing research results from an experiment 4 | #' that examined how pleasant people felt about words, and the 5 | #' information about how the word is typed. This dataset 6 | #' examines the QWERTY effect.
7 | #' @docType data 8 | #' 9 | #' @usage data(introR) 10 | #' 11 | #' @format A data frame with 33949 rows and 14 variables: 12 | #' 13 | #' \describe{ 14 | #' \item{expno}{the experiment number we assigned to that 15 | #' group of participants} 16 | #' \item{rating}{the pleasantness rating of that word} 17 | #' \item{originalcode}{the word the participant saw} 18 | #' \item{id}{the participant ID number} 19 | #' \item{speed}{the typing speed of the participant} 20 | #' \item{error}{the number of typing errors by the participant} 21 | #' \item{whichhand}{which hand the participant indicated as 22 | #' their dominant hand} 23 | #' \item{LR_switch}{the number of times typing the word would 24 | #' switch from left to right hands} 25 | #' \item{finger_switch}{the number of times you would switch 26 | #' fingers typing the word} 27 | #' \item{rha}{right hand advantage: Right - Left handed letters} 28 | #' \item{word_length}{the number of characters in the word} 29 | #' \item{letter_freq}{the average of the frequency of each of 30 | #' the letters in the word} 31 | #' \item{real_fake}{if the word was a real English word or not} 32 | #' \item{speed_c}{z-scored speed values} 33 | #' } 34 | #' 35 | #' @keywords datasets 36 | "introR" 37 | 38 | -------------------------------------------------------------------------------- /R/meaningdata-data.R: -------------------------------------------------------------------------------- 1 | #' Meaning and Purpose in Life Data 2 | #' 3 | #' Study: This data includes three measures of meaning 4 | #' and purpose in life for exploring latent variables 5 | #' or multitrait-multimethod analysis. 6 | #' 7 | #' The Meaning in Life Questionnaire is scored from 1 8 | #' absolutely untrue to 7 absolutely true. These items 9 | #' are marked with an M in the dataset. The Purpose in 10 | #' Life Questionnaire is scaled from 1 to 7 varying by 11 | #' the question and is marked P in the dataset. Last, 12 | #' the Seeking of Noetic Goals scale ranges from 1 never 13 | #' to 7 constantly and is marked with S in the dataset. 14 | #' 15 | #' @docType data 16 | #' 17 | #' @usage data(meaningdata) 18 | #' 19 | #' @format A data frame with 567 rows and 50 variables.
20 | #' 21 | #'\describe{ 22 | #' \item{p1}{I am usually (completely bored to exuberant, 23 | #' enthusiastic)} 24 | #' \item{p2}{Life to me seems (completely routine to 25 | #' always exciting)} 26 | #' \item{p3}{In life I have (no goals or aims at all to 27 | #' very clear goals and aims)} 28 | #' \item{p4}{My personal existence is (utterly 29 | #' meaningless without purpose to very purposeful and 30 | #' meaningful)} 31 | #' \item{p5}{Every day is (exactly the same to constantly 32 | #' new)} 33 | #' \item{p6}{If I could choose, I would (prefer never to 34 | #' have been born to like nine more lives just like this 35 | #' one)} 36 | #' \item{p7}{After retiring, I would (loaf completely the 37 | #' rest of my life to do some of the exciting things I 38 | #' have always wanted to do)} 39 | #' \item{p8}{In achieving life goals I have (made no 40 | #' progress whatever to progressed to complete fulfillment)} 41 | #' \item{p9}{My life is (empty, filled only with despair 42 | #' to running over with exciting good things)} 43 | #' \item{p10}{If I should die today, I would feel that 44 | #' my life had been (completely worthless to very 45 | #' worthwhile)} 46 | #' \item{p11}{In thinking of my life, I (often wonder why 47 | #' I exist to always see a reason for my being here)} 48 | #' \item{p12}{As I view the world in relation to my life, 49 | #' the world (completely confuses me to fits meaningfully 50 | #' with my life)} 51 | #' \item{p13}{I am a (very irresponsible person to very 52 | #' responsible person)} 53 | #' \item{p14}{Concerning man's freedom to make his own 54 | #' choices, I believe man is (completely bound by 55 | #' limitations of heredity and environment to absolutely 56 | #' free to make all life choices)} 57 | #' \item{p15}{With regard to death, I am (unprepared and 58 | #' afraid to prepared and unafraid)} 59 | #' \item{p16}{With regard to suicide, I have (thought of 60 | #' it seriously as a way out to never given it a second 61 | #' thought)} 62 | #' \item{p17}{I regard my ability to find a meaning, 63 | #' purpose, or mission in life as (practically none to 64 | #' very great)} 65 | #' \item{p18}{My life is (out of my hands and controlled 66 | #' by external factors to in my hands and I am in 67 | #' control of it)} 68 | #' \item{p19}{Facing my daily tasks is (a painful and 69 | #' boring experience to a source of pleasure and 70 | #' satisfaction)} 71 | #' \item{p20}{I have discovered (no mission or purpose in 72 | #' life to clear-cut goals and a satisfying life purpose)} 73 | #' \item{m1}{I understand my life's meaning.} 74 | #' \item{m2}{I am looking for something that makes my life 75 | #' feel meaningful.} 76 | #' \item{m3}{I am always looking to find my life's purpose.} 77 | #' \item{m4}{My life has a clear sense of purpose.} 78 | #' \item{m5}{I have a good sense of what makes my life 79 | #' meaningful.} 80 | #' \item{m6}{I have discovered a satisfying life purpose.} 81 | #' \item{m7}{I am always searching for something that makes 82 | #' my life feel significant.} 83 | #' \item{m8}{I am seeking a purpose or mission for my life.} 84 | #' \item{m9}{My life has no clear purpose.} 85 | #' \item{m10}{I am searching for meaning in my life.
} 86 | #' \item{s1}{I think about the ultimate meaning of life.} 87 | #' \item{s2}{I have experienced the feeling that while I am 88 | #' destined to accomplish something important, I cannot 89 | #' quite put my finger on just what it is.} 90 | #' \item{s3}{I try new activities or areas of interest, 91 | #' and then these soon lose their attractiveness.} 92 | #' \item{s4}{I feel that some element which I can't 93 | #' quite define is missing from my life.} 94 | #' \item{s5}{I am restless.} 95 | #' \item{s6}{I feel that the greatest fulfillment of my 96 | #' life lies yet in the future.} 97 | #' \item{s7}{I hope for something exciting in the future.} 98 | #' \item{s8}{I daydream of finding a new place for my life 99 | #' and a new identity.} 100 | #' \item{s9}{I feel the lack of -- and a need to find -- a 101 | #' real meaning and purpose in my life.} 102 | #' \item{s10}{I think of achieving something new and 103 | #' different.} 104 | #' \item{s11}{I seem to change my main objective in life.} 105 | #' \item{s12}{The mystery of life puzzles and disturbs me.} 106 | #' \item{s13}{I feel myself in need of a 'new lease on life'.} 107 | #' \item{s14}{Before I achieve one goal, I start out toward 108 | #' a different one.} 109 | #' \item{s15}{I feel the need for adventure and 'new worlds 110 | #' to conquer'.} 111 | #' \item{s16}{Over my lifetime I have felt a strong urge 112 | #' to find myself.} 113 | #' \item{s17}{On occasion I have thought that I had found 114 | #' what I was looking for in life, only to have it vanish later.} 115 | #' \item{s18}{I have been aware of an all-powerful and consuming 116 | #' purpose toward which my life has been directed.} 117 | #' \item{s19}{I have sensed a lack of a worthwhile job to do 118 | #' in life.} 119 | #' \item{s20}{I have felt a determination to achieve something far 120 | #' beyond the ordinary.} 121 | #' } 122 | #' 123 | #' @keywords datasets 124 | "meaningdata" 125 | 126 | -------------------------------------------------------------------------------- /R/mirtdata-data.R: -------------------------------------------------------------------------------- 1 | #' Polytomous IRT Practice Data 2 | #' 3 | #' Study: This dataset includes 15 questions that are scored 4 | #' from 1 to 7 to use for polytomous IRT examples. One 5 | #' would indicate a low score on the latent trait, while 6 | #' seven would indicate a higher score on the latent 7 | #' trait (if the scale works!). 8 | #' 9 | #' @docType data 10 | #' 11 | #' @usage data(mirtdata) 12 | #' 13 | #' @format A data frame with 171 rows and 15 variables. 14 | #' 15 | #' @keywords datasets 16 | "mirtdata" 17 | 18 | -------------------------------------------------------------------------------- /R/resdata-data.R: -------------------------------------------------------------------------------- 1 | #' Multigroup CFA Practice Data 2 | #' 3 | #' Study: This dataset has data on gender, ethnicity, 4 | #' and a resiliency scale for practicing factor analysis 5 | #' and other structural equation modeling topics 6 | #' like multigroup CFA. 7 | #' 8 | #' The instructions were: 9 | #' 10 | #' Please read the following statements. To the right of 11 | #' each you will find seven numbers, ranging from "1" 12 | #' (Strongly Disagree) on the left to "7" (Strongly Agree) 13 | #' on the right. Circle the number which best indicates 14 | #' your feelings about that statement. For example, if 15 | #' you strongly disagree with a statement, circle "1".
16 | #' If you are neutral, circle "4", and if you 17 | #' strongly agree, circle "7", etc. 18 | #' 19 | #' Scale: strongly disagree, moderately disagree, 20 | #' somewhat disagree, neutral, somewhat agree, 21 | #' moderately agree, strongly agree 22 | #' 23 | #' @docType data 24 | #' 25 | #' @usage data(resdata) 26 | #' 27 | #' @format A data frame with 516 rows and 16 variables. 28 | #' 29 | #'\describe{ 30 | #' \item{Sex}{A variable for gender where 1 is 31 | #' male, 2 is female, and 3 is other/na.} 32 | #' \item{Ethnicity}{A variable for ethnicity 33 | #' coded as 1 as Black, 2 as White, and 3 as 34 | #' other/na.} 35 | #' \item{RS1}{I usually manage one way or 36 | #' another.} 37 | #' \item{RS2}{I feel proud that I have accomplished 38 | #' things in life.} 39 | #' \item{RS3}{I usually take things in stride.} 40 | #' \item{RS4}{I am friends with myself.} 41 | #' \item{RS5}{I feel that I can handle many 42 | #' things at a time.} 43 | #' \item{RS6}{I am determined.} 44 | #' \item{RS7}{I can get through difficult times 45 | #' because I’ve experienced difficulty before.} 46 | #' \item{RS8}{I have self-discipline.} 47 | #' \item{RS9}{I keep interested in things.} 48 | #' \item{RS10}{I can usually find something to 49 | #' laugh about.} 50 | #' \item{RS11}{My belief in myself gets me through 51 | #' hard times.} 52 | #' \item{RS12}{In an emergency, I’m someone people 53 | #' can generally rely on.} 54 | #' \item{RS13}{My life has meaning.} 55 | #' \item{RS14}{When I’m in a difficult situation, I 56 | #' can usually find my way out of it.} 57 | #' } 58 | #' 59 | #' @keywords datasets 60 | "resdata" 61 | 62 | -------------------------------------------------------------------------------- /R/server_context.R: -------------------------------------------------------------------------------- 1 | #' Server Functions for learnr Tutorials 2 | #' 3 | #' These functions help check that you have put 4 | #' together the tutorial correctly for the 5 | #' student answers to print out at the end of the 6 | #' tutorial 7 | #' 8 | #' @param .envir Automatically grabs the environment 9 | #' variable for your shiny session. 10 | #' 11 | #' @return Error messages if you incorrectly use 12 | #' the functions. 13 | #' 14 | #' @keywords shiny, learnr, student answers 15 | #' @export 16 | 17 | is_server_context <- function(.envir) { 18 | # We are in the server context if the following are present: 19 | # * input - input reactive values 20 | # * output - shiny output 21 | # * session - shiny session 22 | # 23 | # Check context by examining the class of each of these. 24 | # If any is missing then it will be a NULL which will fail. 25 | 26 | inherits(.envir$input, "reactivevalues") & 27 | inherits(.envir$output, "shinyoutput") & 28 | inherits(.envir$session, "ShinySession") 29 | } 30 | 31 | check_not_server_context = function(.envir) { 32 | if (is_server_context(.envir)) { 33 | calling_func <- deparse(sys.calls()[[sys.nframe()-1]]) 34 | 35 | err = paste0( 36 | "Function `", calling_func,"`", 37 | " must *not* be called from an Rmd chunk where `context = \"server\"`" 38 | ) 39 | 40 | # The following seems to be necessary - since this is in the server context 41 | # it will not run at compile time 42 | shiny::stopApp() 43 | 44 | stop(err, call.
= FALSE) 45 | } 46 | } 47 | 48 | check_server_context <- function(.envir) { 49 | if (!is_server_context(.envir)) { 50 | calling_func <- deparse(sys.calls()[[sys.nframe()-1]]) 51 | 52 | err = paste0( 53 | "Function `", calling_func,"`", 54 | " must be called from an Rmd chunk where `context = \"server\"`" 55 | ) 56 | 57 | stop(err, call. = FALSE) 58 | } 59 | } 60 | 61 | #' @rdname server_context 62 | #' @export 63 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## learnSEM 2 | 3 | `learnSEM` is a tutorial package to learn structural equation modeling written by Erin M. Buchanan at https://statisticsofdoom.com/. 4 | 5 | Current Version: `0.5.0` 6 | 7 | ### Installation 8 | 9 | You can install `learnSEM` by using the following code: 10 | 11 | ``` 12 | #install.packages("devtools") #uncomment if you need devtools 13 | library(devtools) 14 | install_github("doomlab/learnSEM") 15 | ``` 16 | 17 | Be sure to restart your **R** session, as this helps you get the **Tutorial Window** from the `learnr` package. If you see a message *no tutorial found in learnSEM*, try restarting RStudio. 18 | 19 | ### Course Schedule 20 | 21 | 1. Introduction to R: 22 | 23 | - Lecture: `vignette("lecture_introR", "learnSEM")` 24 | - Tutorial: `learnr::run_tutorial("introR", "learnSEM")` 25 | 26 | 2. Data Screening Practice: 27 | 28 | - Lecture: `vignette("lecture_data_screen", "learnSEM")` 29 | - Tutorial: `learnr::run_tutorial("datascreen", "learnSEM")` 30 | 31 | 3. Exploratory Factor Analysis: 32 | 33 | - Lecture: `vignette("lecture_efa", "learnSEM")` 34 | - Tutorial: `learnr::run_tutorial("efa", "learnSEM")` 35 | 36 | 4. Terminology: 37 | 38 | - Lecture: `vignette("lecture_terms", "learnSEM")` 39 | - Tutorial: `learnr::run_tutorial("terms", "learnSEM")` 40 | 41 | 5. Path Models: 42 | 43 | - Lecture: `vignette("lecture_path", "learnSEM")` 44 | - Tutorial: `learnr::run_tutorial("path1", "learnSEM")` 45 | - Tutorial: `learnr::run_tutorial("path2", "learnSEM")` 46 | 47 | 6. CFA Models: 48 | 49 | - Lecture: `vignette("lecture_cfa", "learnSEM")` 50 | - Tutorial: `learnr::run_tutorial("cfabasics", "learnSEM")` 51 | 52 | 7. CFA Second Order Models: 53 | 54 | - Lecture: `vignette("lecture_secondcfa", "learnSEM")` 55 | - Tutorial: `learnr::run_tutorial("cfasecond", "learnSEM")` 56 | 57 | 8. Full Structural Models: 58 | 59 | - Lecture: `vignette("lecture_sem", "learnSEM")` 60 | - Tutorial: `learnr::run_tutorial("fullsem", "learnSEM")` 61 | 62 | 9. Multitrait Multimethod: 63 | 64 | - Lecture: `vignette("lecture_mtmm", "learnSEM")` 65 | - Tutorial: `learnr::run_tutorial("mtmm", "learnSEM")` 66 | 67 | 10. Multigroup CFA: 68 | 69 | - Lecture: `vignette("lecture_mgcfa", "learnSEM")` 70 | - Tutorial: `learnr::run_tutorial("mgcfa", "learnSEM")` 71 | 72 | 11. Latent Growth Models: 73 | 74 | - Lecture: `vignette("lecture_lgm", "learnSEM")` 75 | - Tutorial: `learnr::run_tutorial("lgm", "learnSEM")` 76 | 77 | 12. Item Response Theory: 78 | 79 | - Lecture: `vignette("lecture_irt", "learnSEM")` 80 | - Tutorial: `learnr::run_tutorial("irt", "learnSEM")` 81 | 82 | Lectures are being added every Friday! Check back if one is not open yet. 
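

### Datasets

The tutorials and lectures draw on practice datasets that install with the package (`caafidata`, `dassdata`, `datascreen`, `dirtdata`, `efa`, `introR`, `meaningdata`, `mirtdata`, and `resdata`); each one has a codebook on its help page. A minimal sketch of checking that the data shipped correctly, using one of them:

```
library(learnSEM)
data(efa)   # Openness to Experience practice data
head(efa)   # 99 rows, 21 variables
?efa        # full codebook
```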
83 | -------------------------------------------------------------------------------- /data/caafidata.rda: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/data/caafidata.rda -------------------------------------------------------------------------------- /data/dassdata.rda: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/data/dassdata.rda -------------------------------------------------------------------------------- /data/datascreen.rda: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/data/datascreen.rda -------------------------------------------------------------------------------- /data/dirtdata.rda: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/data/dirtdata.rda -------------------------------------------------------------------------------- /data/efa.rda: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/data/efa.rda -------------------------------------------------------------------------------- /data/introR.rda: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/data/introR.rda -------------------------------------------------------------------------------- /data/meaningdata.rda: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/data/meaningdata.rda -------------------------------------------------------------------------------- /data/mirtdata.rda: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/data/mirtdata.rda -------------------------------------------------------------------------------- /data/resdata.rda: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/data/resdata.rda -------------------------------------------------------------------------------- /inst/doc/lecture_cfa.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F, message = F, warning = F----------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | library(lavaan) 10 | library(semPlot) 11 | 12 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 13 | knitr::include_graphics("pictures/diagram_sem.png") 14 | 15 | ## ------------------------------------------------------- 16 | # a famous example, build the model 17 | HS.model <- ' visual =~ x1 + x2 + x3 18 | textual =~ x4 + x5 + x6 19 | speed =~ x7 + x8 + x9 ' 20 | 21 | # fit the model 22 | HS.fit <- cfa(HS.model, data = HolzingerSwineford1939) 23 | 24 | # diagram the model 
25 | semPaths(HS.fit, 26 | whatLabels = "std", 27 | layout = "tree", 28 | edge.label.cex = 1) 29 | 30 | ## ------------------------------------------------------- 31 | # a famous example, build the model 32 | HS.model <- ' visual <~ x1 + x2 + x3' 33 | 34 | # fit the model 35 | HS.fit <- cfa(HS.model, data = HolzingerSwineford1939) 36 | 37 | # diagram the model 38 | semPaths(HS.fit, 39 | whatLabels = "std", 40 | layout = "tree", 41 | edge.label.cex = 1) 42 | 43 | ## ------------------------------------------------------- 44 | wisc4.cor <- lav_matrix_lower2full(c(1, 45 | 0.72,1, 46 | 0.64,0.63,1, 47 | 0.51,0.48,0.37,1, 48 | 0.37,0.38,0.38,0.38,1)) 49 | # enter the SDs 50 | wisc4.sd <- c(3.01 , 3.03 , 2.99 , 2.89 , 2.98) 51 | 52 | # give everything names 53 | colnames(wisc4.cor) <- 54 | rownames(wisc4.cor) <- 55 | names(wisc4.sd) <- 56 | c("Information", "Similarities", 57 | "Word.Reasoning", "Matrix.Reasoning", "Picture.Concepts") 58 | 59 | # convert 60 | wisc4.cov <- cor2cov(wisc4.cor, wisc4.sd) 61 | 62 | ## ------------------------------------------------------- 63 | wisc4.model <- ' 64 | g =~ Information + Similarities + Word.Reasoning + Matrix.Reasoning + Picture.Concepts 65 | ' 66 | 67 | ## ------------------------------------------------------- 68 | wisc4.fit <- cfa(model = wisc4.model, 69 | sample.cov = wisc4.cov, 70 | sample.nobs = 550, 71 | std.lv = FALSE) 72 | 73 | ## ------------------------------------------------------- 74 | summary(wisc4.fit, 75 | standardized=TRUE, 76 | rsquare = TRUE, 77 | fit.measures=TRUE) 78 | 79 | ## ------------------------------------------------------- 80 | parameterestimates(wisc4.fit, 81 | standardized=TRUE) 82 | 83 | ## ------------------------------------------------------- 84 | fitted(wisc4.fit) ## estimated covariances 85 | wisc4.cov ## actual covariances 86 | 87 | ## ------------------------------------------------------- 88 | fitmeasures(wisc4.fit) 89 | 90 | ## ------------------------------------------------------- 91 | modificationindices(wisc4.fit, sort = T) 92 | 93 | ## ------------------------------------------------------- 94 | semPaths(wisc4.fit, 95 | whatLabels="std", 96 | what = "std", 97 | layout ="tree", 98 | edge.color = "blue", 99 | edge.label.cex = 1) 100 | 101 | ## ------------------------------------------------------- 102 | wisc4.model2 <- ' 103 | V =~ Information + Similarities + Word.Reasoning 104 | F =~ Matrix.Reasoning + Picture.Concepts 105 | ' 106 | 107 | # wisc4.model2 <- ' 108 | # V =~ Information + Similarities + Word.Reasoning 109 | # F =~ a*Matrix.Reasoning + a*Picture.Concepts 110 | # ' 111 | 112 | ## ------------------------------------------------------- 113 | wisc4.fit2 <- cfa(wisc4.model2, 114 | sample.cov=wisc4.cov, 115 | sample.nobs=550, 116 | std.lv = F) 117 | 118 | ## ------------------------------------------------------- 119 | summary(wisc4.fit2, 120 | standardized=TRUE, 121 | rsquare = TRUE, 122 | fit.measures=TRUE) 123 | 124 | ## ------------------------------------------------------- 125 | semPaths(wisc4.fit2, 126 | whatLabels="std", 127 | what = "std", 128 | edge.color = "pink", 129 | edge.label.cex = 1, 130 | layout="tree") 131 | 132 | ## ------------------------------------------------------- 133 | anova(wisc4.fit, wisc4.fit2) 134 | fitmeasures(wisc4.fit, c("aic", "ecvi")) 135 | fitmeasures(wisc4.fit2, c("aic", "ecvi")) 136 | 137 | ## ------------------------------------------------------- 138 | #install.packages("parameters") 139 | library(parameters) 140 | model_parameters(wisc4.fit, standardize = 
TRUE) 141 | 142 | ## ------------------------------------------------------- 143 | library(broom) 144 | tidy(wisc4.fit) 145 | glance(wisc4.fit) 146 | 147 | -------------------------------------------------------------------------------- /inst/doc/lecture_data_screen.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----setup, include=FALSE------------------------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | 10 | ## ------------------------------------------------------- 11 | library(rio) 12 | master <- import("data/lecture_data_screen.csv") 13 | names(master) 14 | 15 | ## ------------------------------------------------------- 16 | #summary(master) 17 | table(master$JOL_group) 18 | 19 | table(master$type_cue) 20 | 21 | ## ------------------------------------------------------- 22 | no_typos <- master 23 | no_typos$JOL_group <- factor(no_typos$JOL_group, 24 | levels = c("delayed", "immediate"), 25 | labels = c("Delayed", "Immediate")) 26 | 27 | no_typos$type_cue <- factor(no_typos$type_cue, 28 | levels = c("cue only", "stimulus pairs"), 29 | labels = c("Cue Only", "Stimulus Pairs")) 30 | 31 | ## ------------------------------------------------------- 32 | summary(no_typos) 33 | 34 | ## ------------------------------------------------------- 35 | # how did I get 3:22? 36 | # how did I get the rule? 37 | # what should I do? 38 | no_typos[ , 3:22][ no_typos[ , 3:22] > 100 ] 39 | 40 | no_typos[ , 3:22][ no_typos[ , 3:22] > 100 ] <- NA 41 | 42 | no_typos[ , 3:22][ no_typos[ , 3:22] < 0 ] <- NA 43 | 44 | ## ------------------------------------------------------- 45 | no_missing <- no_typos 46 | summary(no_missing) 47 | 48 | ## ------------------------------------------------------- 49 | percent_missing <- function(x){sum(is.na(x))/length(x) * 100} 50 | missing <- apply(no_missing, 1, percent_missing) 51 | table(missing) 52 | 53 | ## ------------------------------------------------------- 54 | replace_rows <- subset(no_missing, missing <= 5) 55 | no_rows <- subset(no_missing, missing > 5) 56 | 57 | ## ------------------------------------------------------- 58 | missing <- apply(replace_rows, 2, percent_missing) 59 | table(missing) 60 | 61 | replace_columns <- replace_rows[ , 3:22] 62 | no_columns <- replace_rows[ , 1:2] 63 | 64 | ## ------------------------------------------------------- 65 | library(mice) 66 | tempnomiss <- mice(replace_columns) 67 | 68 | ## ------------------------------------------------------- 69 | fixed_columns <- complete(tempnomiss) 70 | all_columns <- cbind(no_columns, fixed_columns) 71 | all_rows <- rbind(all_columns, no_rows) 72 | nrow(no_missing) 73 | nrow(all_rows) 74 | 75 | ## ------------------------------------------------------- 76 | mahal <- mahalanobis(all_columns[ , -c(1,2)], #take note here 77 | colMeans(all_columns[ , -c(1,2)], na.rm=TRUE), 78 | cov(all_columns[ , -c(1,2)], use ="pairwise.complete.obs")) 79 | 80 | cutoff <- qchisq(p = 1 - .001, #1 minus alpha 81 | df = ncol(all_columns[ , -c(1,2)])) # number of columns 82 | 83 | ## ------------------------------------------------------- 84 | cutoff 85 | 86 | summary(mahal < cutoff) #notice the direction 87 | 88 | no_outliers <- subset(all_columns, mahal < cutoff) 89 | 90 | ## ------------------------------------------------------- 91 | library(corrplot) 92 | corrplot(cor(no_outliers[ , -c(1,2)])) 93 | 94 | ## 
------------------------------------------------------- 95 | random_variable <- rchisq(nrow(no_outliers), 7) 96 | fake_model <- lm(random_variable ~ ., 97 | data = no_outliers[ , -c(1,2)]) 98 | standardized <- rstudent(fake_model) 99 | fitvalues <- scale(fake_model$fitted.values) 100 | 101 | ## ------------------------------------------------------- 102 | plot(fake_model, 2) 103 | 104 | ## ------------------------------------------------------- 105 | hist(standardized) 106 | 107 | ## ------------------------------------------------------- 108 | {plot(standardized, fitvalues) 109 | abline(v = 0) 110 | abline(h = 0) 111 | } 112 | 113 | -------------------------------------------------------------------------------- /inst/doc/lecture_data_screen.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Data Screening" 3 | output: rmarkdown::slidy_presentation 4 | description: > 5 | This vignette includes the lecture slides for data screening in SEM (part 2). 6 | vignette: > 7 | %\VignetteIndexEntry{Data Screening} 8 | %\VignetteEngine{knitr::rmarkdown} 9 | %\VignetteEncoding{UTF-8} 10 | --- 11 | 12 | ```{r, include = FALSE} 13 | knitr::opts_chunk$set( 14 | collapse = TRUE, 15 | comment = "#>" 16 | ) 17 | ``` 18 | 19 | ```{r setup, include=FALSE} 20 | knitr::opts_chunk$set(echo = TRUE) 21 | ``` 22 | 23 | ## Data Screening Overview 24 | 25 | - In this lecture, we will give you a demonstration of what you might do to screen a dataset for structural equation modeling. 26 | - There are four key steps: 27 | 28 | - Accuracy: dealing with errors 29 | - Missing: dealing with missing data 30 | - Outliers: determining if there are outliers and what to do with them 31 | - Assumptions: additivity, multivariate normality, linearity, homogeneity, and homoscedasticity 32 | 33 | - Note that the type of data screening may change depending on the type of data you have (e.g., ordinal data has different assumptions) 34 | - Mostly, we will focus on datasets with traditional parametric assumptions 35 | 36 | ## Hypothesis Testing versus Data Screening 37 | 38 | - Generally, we set an $\alpha$ value, or Type I error rate 39 | - Often, this translates to "statistical significance": *p* < $\alpha$ = significant, where $\alpha$ is often defined as .05 40 | - In data screening, we want a data point to be very unusual before we correct or eliminate it 41 | - Therefore, we will often lower our criterion and use *p* < $\alpha$ to denote problems with the data, where $\alpha$ is lowered to .001 42 | 43 | ## Order is Important 44 | 45 | - While data screening can be performed in many ways, it's important to know that you should fix errors, missing data, etc. before checking assumptions 46 | - The changes you make affect the next steps 47 | 48 | ## An Example 49 | 50 | - We will learn about data screening by working through an example 51 | - This is a made-up dataset in which people judged their own learning in different experimental conditions: they rated their confidence in remembering information, and then we measured their actual memory 52 | 53 | ## Import the Data 54 | 55 | ```{r} 56 | library(rio) 57 | master <- import("data/lecture_data_screen.csv") 58 | names(master) 59 | ``` 60 | 61 | ## Accuracy 62 | 63 | - Use the `summary()` and `table()` functions to examine the dataset. 64 | - Categorical data: Are the labels right? Should this variable be factored? 65 | - Continuous data: Is the min/max of the data correct? Are the data scored correctly?
66 | 67 | ## Accuracy Categorical 68 | 69 | ```{r} 70 | #summary(master) 71 | table(master$JOL_group) 72 | 73 | table(master$type_cue) 74 | ``` 75 | 76 | ## Accuracy Categorical 77 | 78 | ```{r} 79 | no_typos <- master 80 | no_typos$JOL_group <- factor(no_typos$JOL_group, 81 | levels = c("delayed", "immediate"), 82 | labels = c("Delayed", "Immediate")) 83 | 84 | no_typos$type_cue <- factor(no_typos$type_cue, 85 | levels = c("cue only", "stimulus pairs"), 86 | labels = c("Cue Only", "Stimulus Pairs")) 87 | ``` 88 | 89 | ## Accuracy Continuous 90 | 91 | - Confidence and recall should only be between 0 and 100. 92 | - Looks like we have some data to clean up. 93 | 94 | ```{r} 95 | summary(no_typos) 96 | ``` 97 | 98 | ## Accuracy Continuous 99 | 100 | ```{r} 101 | # how did I get 3:22? 102 | # how did I get the rule? 103 | # what should I do? 104 | no_typos[ , 3:22][ no_typos[ , 3:22] > 100 ] 105 | 106 | no_typos[ , 3:22][ no_typos[ , 3:22] > 100 ] <- NA 107 | 108 | no_typos[ , 3:22][ no_typos[ , 3:22] < 0 ] <- NA 109 | ``` 110 | 111 | ## Missing 112 | 113 | - There are two main types of missing data: 114 | 115 | - Missing not at random: when data is missing because of a common cause (e.g., everyone skipped question five) 116 | - Missing completely at random: data is randomly missing, potentially due to computer or human error 117 | 118 | - We also have to distinguish between missing data and incomplete data 119 | 120 | ```{r} 121 | no_missing <- no_typos 122 | summary(no_missing) 123 | ``` 124 | 125 | ## Missing Rows 126 | 127 | ```{r} 128 | percent_missing <- function(x){sum(is.na(x))/length(x) * 100} 129 | missing <- apply(no_missing, 1, percent_missing) 130 | table(missing) 131 | ``` 132 | 133 | ## Missing Replacement 134 | 135 | - How much data can I safely replace? 136 | 137 | - Replace only things that make sense. 138 | - Replace as little as possible, often less than 5% 139 | - Replace based on completion/missingness type 140 | 141 | ```{r} 142 | replace_rows <- subset(no_missing, missing <= 5) 143 | no_rows <- subset(no_missing, missing > 5) 144 | ``` 145 | 146 | ## Missing Columns 147 | 148 | - Separate out columns that you should not replace 149 | - Make sure columns have less than 5% missing for replacement 150 | 151 | ```{r} 152 | missing <- apply(replace_rows, 2, percent_missing) 153 | table(missing) 154 | 155 | replace_columns <- replace_rows[ , 3:22] 156 | no_columns <- replace_rows[ , 1:2] 157 | ``` 158 | 159 | ## Missing Replacement 160 | 161 | ```{r} 162 | library(mice) 163 | tempnomiss <- mice(replace_columns) 164 | ``` 165 | 166 | ## Missing Put Together 167 | 168 | ```{r} 169 | fixed_columns <- complete(tempnomiss) 170 | all_columns <- cbind(no_columns, fixed_columns) 171 | all_rows <- rbind(all_columns, no_rows) 172 | nrow(no_missing) 173 | nrow(all_rows) 174 | ``` 175 | 176 | ## Outliers 177 | 178 | - We will mostly be concerned with multivariate outliers in SEM. 179 | - These are rows of data (participants) who have extremely weird patterns of scores when compared to everyone else.
180 | - We will use Mahalanobis Distance to examine each row to determine if it is an outlier 181 | 182 | - This score *D* is the distance from the centroid or mean of means 183 | - We will use a cutoff score based on our strict screening criterion, *p* < .001, to determine if a row is an outlier 184 | - This cutoff criterion is based on *the number of variables* rather than the *number of observations* 185 | 186 | ## Outliers Mahalanobis 187 | 188 | ```{r} 189 | mahal <- mahalanobis(all_columns[ , -c(1,2)], #take note here 190 | colMeans(all_columns[ , -c(1,2)], na.rm=TRUE), 191 | cov(all_columns[ , -c(1,2)], use ="pairwise.complete.obs")) 192 | 193 | cutoff <- qchisq(p = 1 - .001, #1 minus alpha 194 | df = ncol(all_columns[ , -c(1,2)])) # number of columns 195 | ``` 196 | 197 | ## Outliers Mahalanobis 198 | 199 | - Do outliers really matter in an SEM analysis though? 200 | 201 | ```{r} 202 | cutoff 203 | 204 | summary(mahal < cutoff) #notice the direction 205 | 206 | no_outliers <- subset(all_columns, mahal < cutoff) 207 | ``` 208 | 209 | ## Assumptions Additivity 210 | 211 | - Additivity is the assumption that each variable adds something to the model 212 | - You basically do not want to use the same variable twice, as that lowers power 213 | - Often this is described as multicollinearity 214 | - Mainly, SEM analysis has a lot of correlated variables; you just want to make sure they aren't perfectly correlated 215 | 216 | ## Assumptions Additivity 217 | 218 | ```{r} 219 | library(corrplot) 220 | corrplot(cor(no_outliers[ , -c(1,2)])) 221 | ``` 222 | 223 | ## Assumptions Set Up 224 | 225 | ```{r} 226 | random_variable <- rchisq(nrow(no_outliers), 7) 227 | fake_model <- lm(random_variable ~ ., 228 | data = no_outliers[ , -c(1,2)]) 229 | standardized <- rstudent(fake_model) 230 | fitvalues <- scale(fake_model$fitted.values) 231 | ``` 232 | 233 | ## Assumptions Linearity 234 | 235 | - We assume that the multivariate relationship between continuous variables is linear (i.e., not curved) 236 | - There are many ways to test this, but we can use a QQ/PP Plot to examine for linearity 237 | 238 | ```{r} 239 | plot(fake_model, 2) 240 | ``` 241 | 242 | ## Assumptions Normality 243 | 244 | - We expect that the residuals are normally distributed 245 | - Not that the *sample* is normally distributed 246 | - Generally, SEM requires a large sample size, thus buffering against normality deviations 247 | 248 | ```{r} 249 | hist(standardized) 250 | ``` 251 | 252 | ## Assumptions Homogeneity + Homoscedasticity 253 | 254 | - These assumptions are about equality of the variances 255 | - We assume equal variances between groups for things like t-tests, ANOVA 256 | - Here the assumption is equality in the spread of variance across predicted values 257 | 258 | ```{r} 259 | {plot(standardized, fitvalues) 260 | abline(v = 0) 261 | abline(h = 0) 262 | } 263 | ``` 264 | 265 | ## Recap 266 | 267 | - We have completed a data screening checkup for our dataset; a condensed version of the whole pipeline appears on the next slide 268 | - Any problems should be noted, and we will discuss how to handle some of the issues as relevant to SEM analysis 269 | - Let's check out the assignment!
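

## Recap: The Pipeline at a Glance

- As a reference, here is the screening sequence condensed into one sketch; it simply reuses the object names defined throughout this lecture and is not evaluated (`eval = FALSE`)

```{r eval = FALSE}
# 1. accuracy: factor the categorical variables, NA the out-of-range scores
no_typos[ , 3:22][ no_typos[ , 3:22] > 100 ] <- NA
no_typos[ , 3:22][ no_typos[ , 3:22] < 0 ] <- NA

# 2. missing: keep rows/columns with <= 5% missing, impute those with mice
missing <- apply(no_missing, 1, percent_missing)
fixed_columns <- complete(mice(replace_columns))

# 3. outliers: Mahalanobis distance against a chi-square cutoff at p < .001
cutoff <- qchisq(p = 1 - .001, df = ncol(all_columns[ , -c(1,2)]))
no_outliers <- subset(all_columns, mahal < cutoff)

# 4. assumptions: additivity, linearity, normality, homoscedasticity
corrplot(cor(no_outliers[ , -c(1,2)]))  # additivity
plot(fake_model, 2)                     # linearity (QQ plot)
hist(standardized)                      # normality of residuals
plot(standardized, fitvalues)           # spread across fitted values
```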
270 | -------------------------------------------------------------------------------- /inst/doc/lecture_efa.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F------------------------------------------- 8 | options(scipen = 999) 9 | knitr::opts_chunk$set(echo = TRUE) 10 | 11 | ## ----echo = F, warning = F, message = F----------------- 12 | library(lavaan) 13 | library(semPlot) 14 | HS.model <- ' visual =~ x1 + x2 + x3 15 | textual =~ x4 + x5 + x6 16 | speed =~ x7 + x8 + x9 ' 17 | 18 | fit <- cfa(HS.model, data = HolzingerSwineford1939) 19 | semPaths(fit, 20 | whatLabels = "std", 21 | edge.label.cex = 1) 22 | 23 | ## ----echo = F, warning = F, message = F----------------- 24 | library(lavaan) 25 | library(semPlot) 26 | HS.model <- ' visual =~ x1 + x2 + x3 27 | textual =~ x4 + x5 + x6 28 | speed =~ x7 + x8 + x9 ' 29 | 30 | fit <- cfa(HS.model, data = HolzingerSwineford1939) 31 | semPaths(fit, 32 | whatLabels = "std", 33 | edge.label.cex = 1) 34 | 35 | ## ----message = F---------------------------------------- 36 | library(rio) 37 | library(psych) 38 | master <- import("data/lecture_efa.csv") 39 | head(master) 40 | 41 | ## ----scree, echo=FALSE, out.height="500px", out.width="800px", fig.align="center"---- 42 | knitr::include_graphics("pictures/scree.png") 43 | 44 | ## ------------------------------------------------------- 45 | number_items <- fa.parallel(master, #data frame 46 | fm="ml", #math 47 | fa="fa") #only efa 48 | 49 | ## ------------------------------------------------------- 50 | 51 | sum(number_items$fa.values > 1) 52 | sum(number_items$fa.values > .7) 53 | 54 | ## ----rotation, echo=FALSE, out.height="500px", out.width="800px", fig.align="center"---- 55 | knitr::include_graphics("pictures/rotate.png") 56 | 57 | ## ------------------------------------------------------- 58 | EFA_fit <- fa(master, #data 59 | nfactors = 2, #number of factors 60 | rotate = "oblimin", #rotation 61 | fm = "ml") #math 62 | 63 | ## ------------------------------------------------------- 64 | EFA_fit 65 | 66 | ## ------------------------------------------------------- 67 | EFA_fit2 <- fa(master[ , -23], #data 68 | nfactors = 2, #number of factors 69 | rotate = "oblimin", #rotation 70 | fm = "ml") #math 71 | 72 | EFA_fit2 73 | 74 | ## ------------------------------------------------------- 75 | fa.plot(EFA_fit2, 76 | labels = colnames(master[ , -23])) 77 | 78 | ## ------------------------------------------------------- 79 | fa.diagram(EFA_fit2) 80 | 81 | ## ------------------------------------------------------- 82 | EFA_fit2$rms #Root mean square of the residuals 83 | EFA_fit2$RMSEA #root mean squared error of approximation 84 | EFA_fit2$TLI #tucker lewis index 85 | 1 - ((EFA_fit2$STATISTIC-EFA_fit2$dof)/ 86 | (EFA_fit2$null.chisq-EFA_fit2$null.dof)) #CFI 87 | 88 | ## ------------------------------------------------------- 89 | factor1 = c(1:7, 9:10, 12:16, 18:22) 90 | factor2 = c(8, 11, 17) 91 | ##we use the psych::alpha to make sure that R knows we want the alpha function from the psych package. 92 | ##ggplot2 has an alpha function and if we have them both open at the same time 93 | ##you will sometimes get a color error without this :: information. 
94 | psych::alpha(master[, factor1], check.keys = T) 95 | psych::alpha(master[, factor2], check.keys = T) 96 | 97 | -------------------------------------------------------------------------------- /inst/doc/lecture_introR.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----setup, include=FALSE------------------------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | 10 | ## ------------------------------------------------------- 11 | X <- 4 12 | 13 | ## ------------------------------------------------------- 14 | library(palmerpenguins) 15 | data(penguins) 16 | attributes(penguins) 17 | 18 | ## ------------------------------------------------------- 19 | str(penguins) 20 | 21 | names(penguins) #ls(penguins) provides this as well 22 | 23 | ## ------------------------------------------------------- 24 | X 25 | 26 | ## ------------------------------------------------------- 27 | penguins$species 28 | 29 | ## ------------------------------------------------------- 30 | A <- 1:20 31 | A 32 | 33 | B <- seq(from = 1, to = 20, by = 1) 34 | B 35 | 36 | C <- c("cheese", "is", "great") 37 | C 38 | 39 | D <- rep(1, times = 30) 40 | D 41 | 42 | ## ------------------------------------------------------- 43 | class(A) 44 | class(C) 45 | class(penguins) 46 | class(penguins$species) 47 | 48 | ## ------------------------------------------------------- 49 | dim(penguins) #rows, columns 50 | length(penguins) 51 | length(penguins$species) 52 | 53 | ## ------------------------------------------------------- 54 | output <- lm(flipper_length_mm ~ bill_length_mm, data = penguins) 55 | str(output) 56 | output$coefficients 57 | 58 | ## ------------------------------------------------------- 59 | myMatrix <- matrix(data = 1:10, 60 | nrow = 5, 61 | ncol = 2) 62 | myMatrix 63 | 64 | ## ------------------------------------------------------- 65 | penguins[1, 2:3] 66 | penguins$sex[4:25] #why no comma? 67 | 68 | ## ------------------------------------------------------- 69 | X <- 1:5 70 | Y <- 6:10 71 | # I can use either because they are the same size 72 | cbind(X,Y) 73 | rbind(X,Y) 74 | 75 | ## ------------------------------------------------------- 76 | ls() 77 | ls(penguins) 78 | 79 | ## ------------------------------------------------------- 80 | newDF <- as.data.frame(cbind(X,Y)) 81 | str(newDF) 82 | as.numeric(c("one", "two", "3")) 83 | 84 | ## ------------------------------------------------------- 85 | penguins[1:2,] #just the first two rows 86 | penguins[penguins$bill_length_mm > 54 , ] #how does this work? 87 | penguins$bill_length_mm > 54 88 | 89 | ## ------------------------------------------------------- 90 | #you can create complex rules 91 | penguins[penguins$bill_length_mm > 54 & penguins$bill_depth_mm > 17, ] 92 | #you can do all BUT 93 | penguins[ , -1] 94 | #grab a few columns by name 95 | vars <- c("bill_length_mm", "sex") 96 | penguins[ , vars] 97 | 98 | ## ------------------------------------------------------- 99 | #another function 100 | #notice any differences? 
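# Added note (not in the original script): one difference to notice below is
# how the two approaches treat missing values. Bracket indexing keeps rows
# where the condition is NA (they appear as all-NA rows), while subset()
# treats an NA condition as FALSE and silently drops those rows.
sum(is.na(penguins$bill_length_mm > 54)) # rows where the condition is NA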
101 | subset(penguins, bill_length_mm > 54) 102 | #other functions include filter() in tidyverse 103 | 104 | ## ------------------------------------------------------- 105 | head(complete.cases(penguins)) #creates logical 106 | head(na.omit(penguins)) #creates actual rows 107 | head(is.na(penguins$body_mass_g)) #for individual vectors 108 | 109 | ## ------------------------------------------------------- 110 | getwd() 111 | 112 | ## ----eval = F------------------------------------------- 113 | # setwd("/Users/buchanan/OneDrive - Harrisburg University/Teaching/ANLY 580/updated/1 Introduction R") 114 | 115 | ## ------------------------------------------------------- 116 | library(rio) 117 | myDF <- import("data/assignment_introR.csv") 118 | head(myDF) 119 | 120 | ## ----eval = F------------------------------------------- 121 | # install.packages("car") 122 | 123 | ## ------------------------------------------------------- 124 | library(car) 125 | 126 | ## ----eval = F------------------------------------------- 127 | # ?lm 128 | # help(lm) 129 | 130 | ## ------------------------------------------------------- 131 | args(lm) 132 | example(lm) 133 | 134 | ## ------------------------------------------------------- 135 | pizza <- function(x){ x^2 } 136 | pizza(3) 137 | 138 | ## ------------------------------------------------------- 139 | table(penguins$species) 140 | summary(penguins$bill_length_mm) 141 | 142 | ## ------------------------------------------------------- 143 | mean(penguins$bill_length_mm) #returns NA 144 | mean(penguins$bill_length_mm, na.rm = TRUE) 145 | 146 | cor(penguins[ , c("bill_length_mm", "bill_depth_mm", "flipper_length_mm")]) 147 | cor(penguins[ , c("bill_length_mm", "bill_depth_mm", "flipper_length_mm")], 148 | use = "pairwise.complete.obs") 149 | 150 | -------------------------------------------------------------------------------- /inst/doc/lecture_irt.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F, message = F, warning = F----------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | library(lavaan) 10 | library(semPlot) 11 | 12 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 13 | knitr::include_graphics("pictures/icc_example.png") 14 | 15 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 16 | knitr::include_graphics("pictures/item_difficulty.png") 17 | 18 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 19 | knitr::include_graphics("pictures/ability.png") 20 | 21 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 22 | knitr::include_graphics("pictures/ability.png") 23 | 24 | ## ------------------------------------------------------- 25 | library(ltm) 26 | library(mirt) 27 | data(LSAT) 28 | head(LSAT) 29 | 30 | ## ------------------------------------------------------- 31 | # Data frame name ~ z1 for one latent variable 32 | #irt.param to give it to you standardized 33 | LSAT.model <- ltm(LSAT ~ z1, 34 | IRT.param = TRUE) 35 | 36 | ## ------------------------------------------------------- 37 | coef(LSAT.model) 38 | 39 | ## ------------------------------------------------------- 40 | plot(LSAT.model, type = "ICC") ## all items at once 41 | 42 | ## ------------------------------------------------------- 43 | plot(LSAT.model, type = "IIC", items = 0) ## Test Information Function 44 | 45 | ## 
------------------------------------------------------- 46 | factor.scores(LSAT.model) 47 | 48 | ## ------------------------------------------------------- 49 | LSAT.model2 <- tpm(LSAT, #dataset 50 | type = "latent.trait", 51 | IRT.param = TRUE) 52 | 53 | ## ------------------------------------------------------- 54 | coef(LSAT.model2) 55 | 56 | ## ------------------------------------------------------- 57 | plot(LSAT.model2, type = "ICC") ## all items at once 58 | 59 | ## ------------------------------------------------------- 60 | plot(LSAT.model2, type = "IIC", items = 0) ## Test Information Function 61 | 62 | ## ------------------------------------------------------- 63 | factor.scores(LSAT.model2) 64 | 65 | ## ------------------------------------------------------- 66 | anova(LSAT.model, LSAT.model2) 67 | 68 | ## ------------------------------------------------------- 69 | library(rio) 70 | poly.data <- import("data/lecture_irt.csv") 71 | poly.data <- na.omit(poly.data) 72 | 73 | #reverse code 74 | poly.data$Q99_9 = 8 - poly.data$Q99_9 75 | 76 | #separate factors 77 | poly.data1 = poly.data[ , c(1, 4, 5, 6, 9)] 78 | poly.data2 = poly.data[ , c(2, 3, 7, 8, 10)] 79 | 80 | ## ------------------------------------------------------- 81 | gpcm.model1 <- mirt(data = poly.data1, #data 82 | model = 1, #number of factors 83 | itemtype = "gpcm") #poly model type 84 | 85 | ## ------------------------------------------------------- 86 | summary(gpcm.model1) ##standardized coefficients 87 | 88 | ## ------------------------------------------------------- 89 | coef(gpcm.model1, IRTpars = T) ##coefficients 90 | 91 | head(fscores(gpcm.model1)) ##factor scores 92 | 93 | ## ------------------------------------------------------- 94 | plot(gpcm.model1, type = "trace") ##curves for all items at once 95 | itemplot(gpcm.model1, 5, type = "trace") 96 | 97 | ## ------------------------------------------------------- 98 | itemplot(gpcm.model1, 4, type = "info") ##IIC for each item 99 | plot(gpcm.model1, type = "info") ##test information curve 100 | 101 | ## ------------------------------------------------------- 102 | plot(gpcm.model1) ##expected score curve 103 | 104 | -------------------------------------------------------------------------------- /inst/doc/lecture_irt.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Item Response Theory" 3 | output: rmarkdown::slidy_presentation 4 | description: > 5 | This vignette includes the lecture slides for item response theory (part 12). 6 | vignette: > 7 | %\VignetteIndexEntry{"IRT"} 8 | %\VignetteEngine{knitr::rmarkdown} 9 | %\VignetteEncoding{UTF-8} 10 | --- 11 | 12 | ```{r, include = FALSE} 13 | knitr::opts_chunk$set( 14 | collapse = TRUE, 15 | comment = "#>" 16 | ) 17 | ``` 18 | 19 | ```{r echo = F, message = F, warning = F} 20 | knitr::opts_chunk$set(echo = TRUE) 21 | library(lavaan) 22 | library(semPlot) 23 | ``` 24 | 25 | ## Item Response Theory 26 | 27 | - What do you do if you have dichotomous (or categorical) manifest variables? 28 | - Many agree that more than four response options can be treated as continuous without a loss in power or interpretation. 29 | - Do you treat these values as categorical? 30 | - Do you assume the underlying latent variable is continuous? 
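Both answers are workable, and the next section walks through the two approaches. As a preview, here is a minimal, hypothetical `lavaan` sketch of the choice (the factor `trait`, the items `item1`-`item4`, and the data frame `mydata` are placeholders, not data from this lecture):

```r
library(lavaan)

irt.model <- ' trait =~ item1 + item2 + item3 + item4 '

# Treat the items as continuous (default maximum likelihood estimation)
fit.continuous <- cfa(irt.model, data = mydata)

# Declare the items ordered categorical; lavaan then switches to a
# diagonally weighted least squares (WLSMV-style) estimator
fit.ordered <- cfa(irt.model, data = mydata,
                   ordered = c("item1", "item2", "item3", "item4"))
```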
31 | 32 | ## Categorical Options 33 | 34 | - There are two approaches that allow us to analyze data with categorical predictors: 35 | - Item Factor Analysis 36 | - More traditional factor analysis approach using ordered responses 37 | - You can talk about item loading, eliminate bad questions, etc. 38 | - In the `lavaan` framework, you update your `cfa()` to include the `ordered` argument 39 | - Item Response Theory 40 | 41 | ## Item Response Theory 42 | 43 | - Classical test theory is considered "true score theory" 44 | - Any differences in responses are differences in ability or underlying trait 45 | - CTT focuses on reliability and item correlation type analysis 46 | - Cannot separate the test and person characteristics 47 | - IRT is considered more modern test theory focusing on the latent trait 48 | - Focuses on the item for *where* it measures a latent trait, discrimination, and guessing 49 | - Additionally, with more than two outcomes, we can examine ordering, response choice options, and more 50 | 51 | ## Issues 52 | 53 | - Unidimensionality: assumption is that there is one underlying trait or dimension you are measuring 54 | - You can run separate models for each dimension 55 | - There are multitrait options for IRT 56 | - Local Independence 57 | - After you control for the latent variable, the items are uncorrelated 58 | 59 | ## Item Response Theory 60 | 61 | - A simple example of test versus person 62 | - 3 item questionnaire 63 | - Yes/no scaling 64 | - 8 response patterns 65 | - Four total scores (0, 1, 2, 3) 66 | 67 | ## Item Response Theory 68 | 69 | - Item characteristic curves (ICCs) 70 | - The log probability curve of theta and the probability of a correct response 71 | 72 | ```{r echo=FALSE, out.width = "75%", fig.align="center"} 73 | knitr::include_graphics("pictures/icc_example.png") 74 | ``` 75 | 76 | ## Item Response Theory 77 | 78 | - Theta – ability or the underlying latent variable score 79 | - b – Item location – where the probability of getting an item correct is 50/50 80 | - Also considered where the item performs best 81 | - Can be thought of as item difficulty 82 | 83 | ```{r echo=FALSE, out.width = "75%", fig.align="center"} 84 | knitr::include_graphics("pictures/item_difficulty.png") 85 | ``` 86 | 87 | ## Item Response Theory 88 | 89 | - a – item discrimination 90 | - Tells you how well an item measures the latent variable 91 | - Larger a values indicate better items 92 | 93 | ```{r echo=FALSE, out.width = "75%", fig.align="center"} 94 | knitr::include_graphics("pictures/ability.png") 95 | ``` 96 | 97 | ## Item Response Theory 98 | 99 | - c – guessing parameter 100 | - The lower level likelihood of getting the item correct 101 | 102 | ```{r echo=FALSE, out.width = "75%", fig.align="center"} 103 | knitr::include_graphics("pictures/ability.png") 104 | ``` 105 | 106 | ## Item Response Theory 107 | 108 | - 1 Parameter Logistic (1PL) 109 | - Also known as the Rasch Model 110 | - Only uses b 111 | - 2 Parameter Logistic (2PL) 112 | - Uses b and a 113 | - 3 Parameter Logistic (3PL) 114 | - Uses b, a, and c 115 | 116 | ## Polytomous IRT 117 | 118 | - A large portion of IRT focuses on dichotomous data (yes/no, correct/incorrect) 119 | - Scoring is easier because you have "right" and "wrong" answers 120 | - Separately, polytomous IRT focuses on data with multiple answers, with no "right" answer 121 | - Focus on ordering, meaning that low scores represent lower abilities, while high scores are higher abilities 122 | - Likert type scales 123 | 124 | ## Polytomous IRT 
125 | 126 | - A couple of types of models: 127 | - Graded Response Model 128 | - Generalized Partial Credit Model 129 | - Partial Credit Model 130 | 131 | ## Polytomous IRT 132 | 133 | - A graded response model is the simplest but can be hard to fit. 134 | - Takes the number of categories – 1 and creates mini 2PLs for each of those boundary points (1-rest, 2-rest, 3-rest, etc.). 135 | - You get probabilities of scoring at this level OR higher 136 | 137 | ## Polytomous IRT 138 | 139 | - The generalized partial credit and partial credit models account for the fact that you may not have each category used equally 140 | - Therefore, you get the mini 2PLs for adjacent categories (1-2, 2-3, 3-4) 141 | - If your categories are ordered (which you often want), these two estimations can be very similar. 142 | - Another concern with the partial credit models is making sure that all categories have a point at which they are the most likely answer (thresholds) 143 | 144 | ## Polytomous IRT 145 | 146 | - Install the `mirt` package to use multidimensional IRT. 147 | - We are not covering multidimensional or multigroup IRT, but this package can estimate those models as well as polytomous ones. 148 | 149 | ## IRT Examples 150 | 151 | - Let's start with DIRT: Dichotomous IRT 152 | - Dataset is the LSAT, which is scored as right or wrong 153 | 154 | ```{r} 155 | library(ltm) 156 | library(mirt) 157 | data(LSAT) 158 | head(LSAT) 159 | ``` 160 | 161 | ## Two Parameter Logistic 162 | 163 | ```{r} 164 | # Data frame name ~ z1 for one latent variable 165 | # IRT.param = TRUE returns the IRT parameterization (difficulty, discrimination) 166 | LSAT.model <- ltm(LSAT ~ z1, 167 | IRT.param = TRUE) 168 | ``` 169 | 170 | ## 2PL Output 171 | 172 | - Difficulty = b = the item's location on the theta (ability) scale 173 | - Discrimination = a = how good the question is at figuring a person out. 174 | 175 | ```{r} 176 | coef(LSAT.model) 177 | ``` 178 | 179 | ## 2PL Plots 180 | 181 | ```{r} 182 | plot(LSAT.model, type = "ICC") ## all items at once 183 | ``` 184 | 185 | ## 2PL Plots 186 | 187 | ```{r} 188 | plot(LSAT.model, type = "IIC", items = 0) ## Test Information Function 189 | ``` 190 | 191 | ## 2PL Other Options 192 | 193 | ```{r} 194 | factor.scores(LSAT.model) 195 | ``` 196 | 197 | ## Three Parameter Logistic 198 | 199 | ```{r} 200 | LSAT.model2 <- tpm(LSAT, #dataset 201 | type = "latent.trait", 202 | IRT.param = TRUE) 203 | ``` 204 | 205 | ## 3PL Output 206 | 207 | - Difficulty = b = the item's location on the theta (ability) scale 208 | - Discrimination = a = how good the question is at figuring a person out.
209 | - Guessing = c = how easy the item is to guess 210 | 211 | ```{r} 212 | coef(LSAT.model2) 213 | ``` 214 | 215 | ## 3PL Plots 216 | 217 | ```{r} 218 | plot(LSAT.model2, type = "ICC") ## all items at once 219 | ``` 220 | 221 | ## 3PL Plots 222 | 223 | ```{r} 224 | plot(LSAT.model2, type = "IIC", items = 0) ## Test Information Function 225 | ``` 226 | 227 | ## 3PL Other Options 228 | 229 | ```{r} 230 | factor.scores(LSAT.model2) 231 | ``` 232 | 233 | ## Compare Models 234 | 235 | ```{r} 236 | anova(LSAT.model, LSAT.model2) 237 | ``` 238 | 239 | ## Polytomous IRT 240 | 241 | - Dataset includes the Meaning in Life Questionnaire 242 | 243 | ```{r} 244 | library(rio) 245 | poly.data <- import("data/lecture_irt.csv") 246 | poly.data <- na.omit(poly.data) 247 | 248 | #reverse code 249 | poly.data$Q99_9 = 8 - poly.data$Q99_9 250 | 251 | #separate factors 252 | poly.data1 = poly.data[ , c(1, 4, 5, 6, 9)] 253 | poly.data2 = poly.data[ , c(2, 3, 7, 8, 10)] 254 | ``` 255 | 256 | ## Graded Partial Credit Model 257 | 258 | ```{r} 259 | gpcm.model1 <- mirt(data = poly.data1, #data 260 | model = 1, #number of factors 261 | itemtype = "gpcm") #poly model type 262 | ``` 263 | 264 | ## GPCM Output 265 | 266 | - Can also get factor loadings here, with standardized coefficients to help us determine if they relate to their latent trait 267 | 268 | ```{r} 269 | summary(gpcm.model1) ##standardized coefficients 270 | ``` 271 | 272 | ## GPCM Output 273 | 274 | ```{r} 275 | coef(gpcm.model1, IRTpars = T) ##coefficients 276 | 277 | head(fscores(gpcm.model1)) ##factor scores 278 | ``` 279 | 280 | ## GPCM Plots 281 | 282 | ```{r} 283 | plot(gpcm.model1, type = "trace") ##curves for all items at once 284 | itemplot(gpcm.model1, 5, type = "trace") 285 | ``` 286 | 287 | ## GPCM Plots 288 | 289 | ```{r} 290 | itemplot(gpcm.model1, 4, type = "info") ##IIC for each item 291 | plot(gpcm.model1, type = "info") ##test information curve 292 | ``` 293 | 294 | ## GPCM Plots 295 | 296 | ```{r} 297 | plot(gpcm.model1) ##expected score curve 298 | ``` 299 | 300 | ## Summary 301 | 302 | - In this lecture you've learned: 303 | 304 | - Item response theory compared to classical test theory 305 | - How to run a dichotomous or traditional IRT with 2PL and 3PL 306 | - How to run a polytomous IRT using graded partial credit model 307 | - How to compare models and interpret their output 308 | -------------------------------------------------------------------------------- /inst/doc/lecture_mtmm.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F, message = F, warning = F----------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | library(lavaan) 10 | library(semPlot) 11 | 12 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 13 | knitr::include_graphics("pictures/model1_mtmm.png") 14 | 15 | ## ------------------------------------------------------- 16 | library(lavaan) 17 | library(semPlot) 18 | library(rio) 19 | 20 | meaning.data <- import("data/lecture_mtmm.csv") 21 | str(meaning.data) 22 | 23 | ## ------------------------------------------------------- 24 | methods.model <- ' 25 | mlq =~ m1 + m2 + m3 + m4 + m5 + m6 + m8 + m9 + m10 26 | pil =~ p3 + p4 + p8 + p12 + p17 + p20 27 | ' 28 | 29 | traits.model <- ' 30 | meaning =~ m1 + m2 + m5 + m10 + p4 + p12 + p17 31 | purpose =~ m3 + m4 + m6 + m8 + m9 + p3 + p8 + p20 32 | ' 33 | 34 | ## 
------------------------------------------------------- 35 | methods.fit <- cfa(model = methods.model, 36 | data = meaning.data, 37 | std.lv = TRUE) 38 | traits.fit <- cfa(model = traits.model, 39 | data = meaning.data, 40 | std.lv = TRUE) 41 | 42 | lavInspect(traits.fit, "cor.lv") 43 | 44 | ## ------------------------------------------------------- 45 | summary(methods.fit, 46 | rsquare = TRUE, 47 | standardized = TRUE, 48 | fit.measures = TRUE) 49 | 50 | summary(traits.fit, 51 | rsquare = TRUE, 52 | standardized = TRUE, 53 | fit.measures = TRUE) 54 | 55 | ## ------------------------------------------------------- 56 | semPaths(methods.fit, 57 | whatLabels = "std", 58 | layout = "tree", 59 | edge.label.cex = 1) 60 | 61 | semPaths(traits.fit, 62 | whatLabels = "std", 63 | layout = "tree", 64 | edge.label.cex = 1) 65 | 66 | ## ------------------------------------------------------- 67 | step1.model <- ' 68 | mlq =~ m1 + m2 + m3 + m4 + m5 + m6 + m8 + m9 + m10 69 | pil =~ p3 + p4 + p8 + p12 + p17 + p20 70 | meaning =~ m1 + m2 + m5 + m10 + p4 + p12 + p17 71 | purpose =~ m3 + m4 + m6 + m8 + m9 + p3 + p8 + p20 72 | 73 | ##fix the covariances 74 | mlq ~~ 0*meaning 75 | pil ~~ 0*meaning 76 | mlq ~~ 0*purpose 77 | pil ~~ 0*purpose 78 | ' 79 | 80 | ## ------------------------------------------------------- 81 | step1.fit <- cfa(model = step1.model, 82 | data = meaning.data, 83 | std.lv = TRUE) 84 | 85 | summary(step1.fit, 86 | rsquare = TRUE, 87 | standardized = TRUE, 88 | fit.measures = TRUE) 89 | 90 | ## ------------------------------------------------------- 91 | semPaths(step1.fit, 92 | whatLabels = "std", 93 | layout = "tree", 94 | edge.label.cex = 1) 95 | 96 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 97 | knitr::include_graphics("pictures/model2_mtmm.png") 98 | 99 | ## ------------------------------------------------------- 100 | ##model 2 is the methods model 101 | ##we've already checked it out 102 | anova(step1.fit, methods.fit) 103 | 104 | fitmeasures(step1.fit, "cfi") 105 | fitmeasures(methods.fit, "cfi") 106 | 107 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 108 | knitr::include_graphics("pictures/model3_mtmm.png") 109 | 110 | ## ------------------------------------------------------- 111 | step3.model <- ' 112 | mlq =~ m1 + m2 + m3 + m4 + m5 + m6 + m8 + m9 + m10 113 | pil =~ p3 + p4 + p8 + p12 + p17 + p20 114 | meaning =~ m1 + m2 + m5 + m10 + p4 + p12 + p17 115 | purpose =~ m3 + m4 + m6 + m8 + m9 + p3 + p8 + p20 116 | 117 | ##fix the covariances 118 | mlq ~~ 0*meaning 119 | pil ~~ 0*meaning 120 | mlq ~~ 0*purpose 121 | pil ~~ 0*purpose 122 | meaning ~~ 1*purpose 123 | ' 124 | 125 | ## ------------------------------------------------------- 126 | step3.fit <- cfa(model = step3.model, 127 | data = meaning.data, 128 | std.lv = TRUE) 129 | 130 | summary(step3.fit, 131 | rsquare = TRUE, 132 | standardized = TRUE, 133 | fit.measure = TRUE) 134 | 135 | ## ------------------------------------------------------- 136 | semPaths(step3.fit, 137 | whatLabels = "std", 138 | layout = "tree", 139 | edge.label.cex = 1) 140 | 141 | ## ------------------------------------------------------- 142 | anova(step1.fit, step3.fit) 143 | 144 | fitmeasures(step1.fit, "cfi") 145 | fitmeasures(step3.fit, "cfi") 146 | 147 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 148 | knitr::include_graphics("pictures/model4_mtmm.png") 149 | 150 | ## ------------------------------------------------------- 151 | step4.model <- ' 152 | mlq =~ m1 + m2 + m3 + m4 + m5 + m6 + m8 + 
m9 + m10 153 | pil =~ p3 + p4 + p8 + p12 + p17 + p20 154 | meaning =~ m1 + m2 + m5 + m10 + p4 + p12 + p17 155 | purpose =~ m3 + m4 + m6 + m8 + m9 + p3 + p8 + p20 156 | 157 | ##fix the covariances 158 | mlq ~~ 0*meaning 159 | pil ~~ 0*meaning 160 | mlq ~~ 0*purpose 161 | pil ~~ 0*purpose 162 | pil ~~ 0*mlq 163 | ' 164 | 165 | ## ------------------------------------------------------- 166 | step4.fit <- cfa(model = step4.model, 167 | data = meaning.data, 168 | std.lv = TRUE) 169 | 170 | summary(step4.fit, 171 | rsquare = TRUE, 172 | standardized = TRUE, 173 | fit.measure = TRUE) 174 | 175 | ## ------------------------------------------------------- 176 | semPaths(step4.fit, 177 | whatLabels = "std", 178 | layout = "tree", 179 | edge.label.cex = 1) 180 | 181 | ## ------------------------------------------------------- 182 | anova(step1.fit, step4.fit) 183 | 184 | fitmeasures(step1.fit, "cfi") 185 | fitmeasures(step4.fit, "cfi") 186 | 187 | ## ------------------------------------------------------- 188 | parameterestimates(step1.fit, standardized = T) 189 | 190 | ## ------------------------------------------------------- 191 | parameterestimates(step1.fit, standardized = T) 192 | 193 | -------------------------------------------------------------------------------- /inst/doc/lecture_path.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F, message = F, warning = F----------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | 10 | ## ----eval = F------------------------------------------- 11 | # install.packages("lavaan") 12 | # install.packages("semPlot") 13 | 14 | ## ------------------------------------------------------- 15 | library(rio) 16 | eval.data <- import("data/lecture_evals.csv") 17 | 18 | ## ----echo=FALSE, out.width = "25%", fig.align="center"---- 19 | knitr::include_graphics("pictures/lecture_evals.png") 20 | 21 | ## ----echo=FALSE, out.width = "25%", fig.align="center"---- 22 | knitr::include_graphics("pictures/lecture_evals.png") 23 | 24 | ## ------------------------------------------------------- 25 | library(lavaan) 26 | eval.model <- ' 27 | q4 ~ q12 + q2 28 | q1 ~ q4 + q12 29 | ' 30 | 31 | ## ------------------------------------------------------- 32 | eval.model 33 | 34 | ## ------------------------------------------------------- 35 | eval.output <- sem(model = eval.model, 36 | data = eval.data) 37 | 38 | ## ------------------------------------------------------- 39 | summary(eval.output) 40 | 41 | ## ------------------------------------------------------- 42 | summary(eval.output, 43 | standardized = TRUE, # for the standardized solution 44 | fit.measures = TRUE, # for model fit 45 | rsquare = TRUE) # for SMCs 46 | 47 | ## ------------------------------------------------------- 48 | library(semPlot) 49 | semPaths(eval.output, # the analyzed model 50 | whatLabels = "par", # what to add as the numbers, std for standardized 51 | edge.label.cex = 1, # make the font bigger 52 | layout = "spring") # change the layout tree, circle, spring, tree2, circle2 53 | 54 | ## ------------------------------------------------------- 55 | regression.cor <- lav_matrix_lower2full(c(1.00, 56 | 0.20,1.00, 57 | 0.24,0.30,1.00, 58 | 0.70,0.80,0.30,1.00)) 59 | 60 | # name the variables in the matrix 61 | colnames(regression.cor) <- 62 | rownames(regression.cor) <- 63 | c("X1", "X2", "X3", "Y") 64 | 65 | ## 
------------------------------------------------------- 66 | regression.model <- ' 67 | # structural model for Y 68 | Y ~ a*X1 + b*X2 + c*X3 69 | # label the residual variance of Y 70 | Y ~~ z*Y 71 | ' 72 | 73 | ## ------------------------------------------------------- 74 | regression.fit <- sem(model = regression.model, 75 | sample.cov = regression.cor, # instead of data 76 | sample.nobs = 1000) # number of data points 77 | 78 | ## ------------------------------------------------------- 79 | summary(regression.fit, 80 | standardized = TRUE, 81 | fit.measures = TRUE, 82 | rsquare = TRUE) 83 | 84 | ## ------------------------------------------------------- 85 | semPaths(regression.fit, 86 | whatLabels="par", 87 | edge.label.cex = 1, 88 | layout="tree") 89 | 90 | ## ------------------------------------------------------- 91 | beaujean.cov <- lav_matrix_lower2full(c(648.07, 92 | 30.05, 8.64, 93 | 140.18, 25.57, 233.21)) 94 | colnames(beaujean.cov) <- 95 | rownames(beaujean.cov) <- 96 | c("salary", "school", "iq") 97 | 98 | ## ------------------------------------------------------- 99 | beaujean.model <- ' 100 | salary ~ a*school + c*iq 101 | iq ~ b*school # this is reversed in first printing of the book 102 | ind:= b*c # this is the mediation part 103 | ' 104 | 105 | ## ------------------------------------------------------- 106 | beaujean.fit <- sem(model = beaujean.model, 107 | sample.cov = beaujean.cov, 108 | sample.nobs = 300) 109 | 110 | ## ------------------------------------------------------- 111 | summary(beaujean.fit, 112 | standardized = TRUE, 113 | fit.measures = TRUE, 114 | rsquare = TRUE) 115 | 116 | ## ------------------------------------------------------- 117 | semPaths(beaujean.fit, 118 | whatLabels="par", 119 | edge.label.cex = 1, 120 | layout="tree") 121 | 122 | ## ----echo=FALSE, out.width = "50%", fig.align="center"---- 123 | knitr::include_graphics("pictures/srmr_formula.png") 124 | 125 | ## ------------------------------------------------------- 126 | chi_difference <- 12.6 - 4.3 127 | df_difference <- 14 - 12 128 | pchisq(chi_difference, df_difference, lower.tail = F) 129 | 130 | ## ------------------------------------------------------- 131 | compare.data <- lav_matrix_lower2full(c(1.00, 132 | .53, 1.00, 133 | .15, .18, 1.00, 134 | .52, .29, -.05, 1.00, 135 | .30, .34, .23, .09, 1.00)) 136 | 137 | colnames(compare.data) <- 138 | rownames(compare.data) <- 139 | c("morale", "illness", "neuro", "relationship", "SES") 140 | 141 | ## ------------------------------------------------------- 142 | #model 1 143 | compare.model1 = ' 144 | illness ~ morale 145 | relationship ~ morale 146 | morale ~ SES + neuro 147 | ' 148 | 149 | #model 2 150 | compare.model2 = ' 151 | SES ~ illness + neuro 152 | morale ~ SES + illness 153 | relationship ~ morale + neuro 154 | ' 155 | 156 | ## ------------------------------------------------------- 157 | compare.model1.fit <- sem(compare.model1, 158 | sample.cov = compare.data, 159 | sample.nobs = 469) 160 | 161 | summary(compare.model1.fit, 162 | standardized = TRUE, 163 | fit.measures = TRUE, 164 | rsquare = TRUE) 165 | 166 | ## ------------------------------------------------------- 167 | compare.model2.fit <- sem(compare.model2, 168 | sample.cov = compare.data, 169 | sample.nobs = 469) 170 | 171 | summary(compare.model2.fit, 172 | standardized = TRUE, 173 | fit.measures = TRUE, 174 | rsquare = TRUE) 175 | 176 | ## ------------------------------------------------------- 177 | semPaths(compare.model1.fit, 178 | whatLabels="par", 179 | 
edge.label.cex = 1, 180 | layout="spring") 181 | 182 | ## ------------------------------------------------------- 183 | semPaths(compare.model2.fit, 184 | whatLabels="par", 185 | edge.label.cex = 1, 186 | layout="spring") 187 | 188 | ## ------------------------------------------------------- 189 | anova(compare.model1.fit, compare.model2.fit) 190 | fitmeasures(compare.model1.fit, c("aic", "ecvi")) 191 | fitmeasures(compare.model2.fit, c("aic", "ecvi")) 192 | 193 | -------------------------------------------------------------------------------- /inst/doc/lecture_secondcfa.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F, message = F, warning = F----------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | library(lavaan) 10 | library(semPlot) 11 | 12 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 13 | knitr::include_graphics("pictures/second_order.png") 14 | 15 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 16 | knitr::include_graphics("pictures/bi_factor.png") 17 | 18 | ## ------------------------------------------------------- 19 | library(lavaan) 20 | library(semPlot) 21 | 22 | ##import the data 23 | wisc4.cov <- lav_matrix_lower2full(c(8.29, 24 | 5.37,9.06, 25 | 2.83,4.44,8.35, 26 | 2.83,3.32,3.36,8.88, 27 | 5.50,6.66,4.20,3.43,9.18, 28 | 6.18,6.73,4.01,3.33,6.77,9.12, 29 | 3.52,3.77,3.19,2.75,3.88,4.05,8.88, 30 | 3.79,4.50,3.72,3.39,4.53,4.70,4.54,8.94, 31 | 2.30,2.67,2.40,2.38,2.06,2.59,2.65,2.83,8.76, 32 | 3.06,4.04,3.70,2.79,3.59,3.67,3.44,4.20,4.53,9.73)) 33 | 34 | wisc4.sd <- c(2.88,3.01,2.89,2.98,3.03,3.02,2.98,2.99,2.96,3.12) 35 | 36 | names(wisc4.sd) <- 37 | colnames(wisc4.cov) <- 38 | rownames(wisc4.cov) <- c("Comprehension", "Information", 39 | "Matrix.Reasoning", "Picture.Concepts", 40 | "Similarities", "Vocabulary", "Digit.Span", 41 | "Letter.Number", "Coding", "Symbol.Search") 42 | 43 | ## ------------------------------------------------------- 44 | ##first order model 45 | wisc4.fourFactor.model <- ' 46 | gc =~ Comprehension + Information + Similarities + Vocabulary 47 | gf =~ Matrix.Reasoning + Picture.Concepts 48 | gsm =~ Digit.Span + Letter.Number 49 | gs =~ Coding + Symbol.Search 50 | ' 51 | 52 | ## ------------------------------------------------------- 53 | wisc4.fourFactor.fit <- cfa(model = wisc4.fourFactor.model, 54 | sample.cov = wisc4.cov, 55 | sample.nobs = 550) 56 | 57 | ## ------------------------------------------------------- 58 | summary(wisc4.fourFactor.fit, 59 | fit.measure = TRUE, 60 | standardized = TRUE, 61 | rsquare = TRUE) 62 | 63 | ## ------------------------------------------------------- 64 | semPaths(wisc4.fourFactor.fit, 65 | whatLabels="std", 66 | edge.label.cex = 1, 67 | edge.color = "black", 68 | what = "std", 69 | layout="tree") 70 | 71 | ## ------------------------------------------------------- 72 | wisc4.higherOrder.model <- ' 73 | gc =~ Comprehension + Information + Similarities + Vocabulary 74 | gf =~ Matrix.Reasoning + Picture.Concepts 75 | gsm =~ Digit.Span + Letter.Number 76 | gs =~ Coding + Symbol.Search 77 | 78 | g =~ gf + gc + gsm + gs 79 | ' 80 | 81 | ## ------------------------------------------------------- 82 | wisc4.higherOrder.fit <- cfa(model = wisc4.higherOrder.model, 83 | sample.cov = wisc4.cov, 84 | sample.nobs = 550) 85 | 86 | ## ------------------------------------------------------- 87 | 
summary(wisc4.higherOrder.fit, 88 | fit.measure=TRUE, 89 | standardized=TRUE, 90 | rsquare = TRUE) 91 | 92 | ## ------------------------------------------------------- 93 | semPaths(wisc4.higherOrder.fit, 94 | whatLabels="std", 95 | edge.label.cex = 1, 96 | edge.color = "black", 97 | what = "std", 98 | layout="tree") 99 | 100 | ## ------------------------------------------------------- 101 | wisc4.bifactor.model <- ' 102 | gc =~ Comprehension + Information + Similarities + Vocabulary 103 | gf =~ a*Matrix.Reasoning + a*Picture.Concepts 104 | gsm =~ b*Digit.Span + b*Letter.Number 105 | gs =~ c*Coding + c*Symbol.Search 106 | g =~ Information + Comprehension + Matrix.Reasoning + Picture.Concepts + Similarities + Vocabulary + Digit.Span + Letter.Number + Coding + Symbol.Search 107 | ' 108 | 109 | ## ------------------------------------------------------- 110 | wisc4.bifactor.fit <- cfa(model = wisc4.bifactor.model, 111 | sample.cov = wisc4.cov, 112 | sample.nobs = 550, 113 | orthogonal = TRUE) 114 | 115 | ## ------------------------------------------------------- 116 | summary(wisc4.bifactor.fit, 117 | fit.measure = TRUE, 118 | rsquare = TRUE, 119 | standardized = TRUE) 120 | 121 | ## ------------------------------------------------------- 122 | semPaths(wisc4.bifactor.fit, 123 | whatLabels="std", 124 | edge.label.cex = 1, 125 | edge.color = "black", 126 | what = "std", 127 | layout="tree") 128 | 129 | -------------------------------------------------------------------------------- /inst/doc/lecture_sem.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F, message = F, warning = F----------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | library(lavaan) 10 | library(semPlot) 11 | 12 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 13 | knitr::include_graphics("pictures/full_sem2.png") 14 | 15 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 16 | knitr::include_graphics("pictures/indicators.png") 17 | 18 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 19 | knitr::include_graphics("pictures/kline_model.png") 20 | 21 | ## ------------------------------------------------------- 22 | library(lavaan) 23 | library(semPlot) 24 | 25 | family.cor <- lav_matrix_lower2full(c(1.00, 26 | .74, 1.00, 27 | .27, .42, 1.00, 28 | .31, .40, .79, 1.00, 29 | .32, .35, .66, .59, 1.00)) 30 | family.sd <- c(32.94, 22.75, 13.39, 13.68, 14.38) 31 | rownames(family.cor) <- 32 | colnames(family.cor) <- 33 | names(family.sd) <- c("father", "mother", "famo", "problems", "intimacy") 34 | 35 | family.cov <- cor2cov(family.cor, family.sd) 36 | 37 | ## ------------------------------------------------------- 38 | family.model <- ' 39 | adjust =~ problems + intimacy 40 | family =~ father + mother + famo' 41 | 42 | ## ------------------------------------------------------- 43 | family.fit <- cfa(model = family.model, 44 | sample.cov = family.cov, 45 | sample.nobs = 203) 46 | 47 | ## ------------------------------------------------------- 48 | inspect(family.fit, "cov.lv") 49 | inspect(family.fit, "cor.lv") 50 | 51 | ## ------------------------------------------------------- 52 | family.fit <- cfa(model = family.model, 53 | sample.cov = family.cor, 54 | sample.nobs = 203) 55 | 56 | ## ------------------------------------------------------- 57 | summary(family.fit, 58 | rsquare = TRUE, 59 | standardized 
= TRUE, 60 | fit.measures = TRUE) 61 | 62 | ## ------------------------------------------------------- 63 | modificationindices(family.fit, sort = T) 64 | 65 | ## ------------------------------------------------------- 66 | family.model2 <- ' 67 | adjust =~ problems + intimacy 68 | family =~ father + mother + famo 69 | father ~~ mother' 70 | 71 | family.fit2 <- cfa(model = family.model2, 72 | sample.cov = family.cov, 73 | sample.nobs = 203) 74 | 75 | inspect(family.fit2, "cor.lv") 76 | 77 | ## ------------------------------------------------------- 78 | semPaths(family.fit, 79 | whatLabels="std", 80 | layout="tree", 81 | edge.label.cex = 1) 82 | 83 | ## ------------------------------------------------------- 84 | predict.model <- ' 85 | adjust =~ problems + intimacy 86 | family =~ father + mother + famo 87 | adjust~family' 88 | 89 | ## ------------------------------------------------------- 90 | predict.fit <- sem(model = predict.model, 91 | sample.cov = family.cor, 92 | sample.nobs = 203) 93 | 94 | ## ------------------------------------------------------- 95 | summary(predict.fit, 96 | rsquare = TRUE, 97 | standardized = TRUE, 98 | fit.measures = TRUE) 99 | 100 | ## ------------------------------------------------------- 101 | semPaths(predict.fit, 102 | whatLabels="std", 103 | layout="tree", 104 | edge.label.cex = 1) 105 | 106 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 107 | knitr::include_graphics("pictures/full_example.png") 108 | 109 | ## ------------------------------------------------------- 110 | family.cor <- lav_matrix_lower2full(c(1.00, 111 | .42, 1.00, 112 | -.43, -.50, 1.00, 113 | -.39, -.43, .78, 1.00, 114 | -.24, -.37, .69, .73, 1.00, 115 | -.31, -.33, .63, .87, .72, 1.00, 116 | -.25, -.25, .49, .53, .60, .59, 1.00, 117 | -.25, -.26, .42, .42, .44, .45, .77, 1.00, 118 | -.16, -.18, .23, .36, .38, .38, .59, .58, 1.00)) 119 | 120 | family.sd <- c(13.00, 13.50, 13.10, 12.50, 13.50, 14.20, 9.50, 11.10, 8.70) 121 | 122 | rownames(family.cor) <- 123 | colnames(family.cor) <- 124 | names(family.sd) <- c("parent_psych","low_SES","verbal", 125 | "reading","math","spelling","motivation","harmony","stable") 126 | 127 | family.cov <- cor2cov(family.cor, family.sd) 128 | 129 | ## ------------------------------------------------------- 130 | composite.model <- ' 131 | risk <~ low_SES + parent_psych + verbal 132 | achieve =~ reading + math + spelling 133 | adjustment =~ motivation + harmony + stable 134 | risk =~ achieve + adjustment 135 | ' 136 | 137 | ## ------------------------------------------------------- 138 | composite.fit <- sem(model = composite.model, 139 | sample.cov = family.cov, 140 | sample.nobs = 158) 141 | 142 | ## ------------------------------------------------------- 143 | summary(composite.fit, 144 | rsquare = TRUE, 145 | standardized = TRUE, 146 | fit.measures = TRUE) 147 | 148 | ## ------------------------------------------------------- 149 | modificationindices(composite.fit, sort = T) 150 | 151 | ## ------------------------------------------------------- 152 | semPaths(composite.fit, 153 | whatLabels="std", 154 | layout="tree", 155 | edge.label.cex = 1) 156 | 157 | -------------------------------------------------------------------------------- /inst/doc/lecture_terms.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F, message = F, warning = 
F----------------- 8 | options(scipen = 999) 9 | knitr::opts_chunk$set(echo = TRUE) 10 | library(lavaan, quietly = T) 11 | library(semPlot, quietly = T) 12 | HS.model <- ' visual =~ x1 + x2 + x3 13 | textual =~ x4 + x5 + x6 14 | speed =~ x7 + x8 + x9 ' 15 | 16 | fit <- cfa(HS.model, data = HolzingerSwineford1939) 17 | 18 | HS.model2 <- 'visual =~ x1 + x2 + x3 19 | textual =~ x4 + x5 + x6 20 | speed =~ x7 + x8 + x9 21 | visual ~ speed' 22 | fit2 <- cfa(HS.model2, data = HolzingerSwineford1939) 23 | 24 | HS.model3 <- 'visual =~ x1 + x2 + x3 25 | textual =~ x4 + x5 + x6 26 | speed =~ x7 + x8 + x9 27 | visual ~ speed 28 | speed ~ textual 29 | textual ~ visual' 30 | fit3 <- cfa(HS.model3, data = HolzingerSwineford1939) 31 | 32 | ## ----exo, echo=FALSE, out.width="75%", fig.align="center"---- 33 | knitr::include_graphics("pictures/exo_endo.png") 34 | 35 | ## ----endo, echo=FALSE, out.width="75%", fig.align="center"---- 36 | knitr::include_graphics("pictures/exo_endo.png") 37 | 38 | ## ----echo = F------------------------------------------- 39 | semPaths(fit, 40 | whatLabels = "std", 41 | edge.label.cex = 1) 42 | 43 | ## ----echo = F------------------------------------------- 44 | semPaths(fit2, 45 | whatLabels = "std", 46 | edge.label.cex = 1) 47 | 48 | ## ----full, out.width="75%", echo=FALSE, fig.align="center"---- 49 | knitr::include_graphics("pictures/full_sem.png") 50 | 51 | ## ----echo = F------------------------------------------- 52 | semPaths(fit2, 53 | whatLabels = "std", 54 | edge.label.cex = 1) 55 | 56 | ## ----echo = F------------------------------------------- 57 | semPaths(fit3, 58 | whatLabels = "std", 59 | edge.label.cex = 1) 60 | 61 | ## ----echo = F------------------------------------------- 62 | summary(fit2) 63 | 64 | ## ----echo = F------------------------------------------- 65 | summary(fit2, standardized = T, rsquare = T) 66 | 67 | ## ----model-steps, echo=FALSE, out.width="75%", fig.align="center"---- 68 | knitr::include_graphics("pictures/model_steps.png") 69 | 70 | ## ----echo = F------------------------------------------- 71 | semPaths(fit) 72 | 73 | ## ----echo = F------------------------------------------- 74 | summary(fit) 75 | 76 | ## ----echo = F------------------------------------------- 77 | summary(fit, standardized = T) 78 | 79 | -------------------------------------------------------------------------------- /inst/tutorials/cfabasics/.gitignore: -------------------------------------------------------------------------------- 1 | *.html 2 | -------------------------------------------------------------------------------- /inst/tutorials/cfasecond/.gitignore: -------------------------------------------------------------------------------- 1 | *.html 2 | -------------------------------------------------------------------------------- /inst/tutorials/datascreen/.gitignore: -------------------------------------------------------------------------------- 1 | *.html 2 | -------------------------------------------------------------------------------- /inst/tutorials/efa/.gitignore: -------------------------------------------------------------------------------- 1 | *.html 2 | /efa_files 3 | *.pdf 4 | -------------------------------------------------------------------------------- /inst/tutorials/fullsem/.gitignore: -------------------------------------------------------------------------------- 1 | *.html 2 | -------------------------------------------------------------------------------- /inst/tutorials/fullsem/fullsem_files/figure-html/unnamed-chunk-2-1.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/inst/tutorials/fullsem/fullsem_files/figure-html/unnamed-chunk-2-1.png -------------------------------------------------------------------------------- /inst/tutorials/introR/.gitignore: -------------------------------------------------------------------------------- 1 | *.html 2 | -------------------------------------------------------------------------------- /inst/tutorials/irt/.gitignore: -------------------------------------------------------------------------------- 1 | *.html 2 | -------------------------------------------------------------------------------- /inst/tutorials/irt/irt.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Item Response Theory" 3 | tutorial: 4 | id: "irt" 5 | output: learnr::tutorial 6 | runtime: shiny_prerendered 7 | description: In this tutorial, you will practice running item response theory on dichotomous and polytomous data types. 8 | --- 9 | 10 | ```{r setup, include=FALSE} 11 | library(learnr) 12 | library(learnSEM) 13 | knitr::opts_chunk$set(echo = FALSE) 14 | library(mirt) 15 | library(ltm) 16 | data(dirtdata) 17 | data(mirtdata) 18 | mirtdata$q5 <- 8 - mirtdata$q5 19 | mirtdata$q8 <- 8 - mirtdata$q8 20 | mirtdata$q13 <- 8 - mirtdata$q13 21 | ``` 22 | 23 | ## Item Response Theory 24 | 25 | Item Response Theory is a latent trait analysis with a different focus: it is traditionally used to understand how a test performs. We will examine both dichotomous (yes/no) and polytomous (multiple options) scales to determine each item's characteristics. The learning outcomes are: 26 | 27 | - Compare item response theory to classical test theory. 28 | - Understand the usage of an IRT model and how it differs from a CFA model. 29 | - Estimate 2 and 3 parameter logistic models on dichotomous data. 30 | - Estimate a graded partial credit model for polytomous data. 31 | 32 | ## IRT Videos 33 | 34 | You can use `vignette("lecture_irt", "learnSEM")` to view these notes in R. 35 | 36 | 37 | 38 | 39 | ## Exercises 40 | 41 | In this next section, you will answer questions using the *R* code blocks provided. Be sure to use the `solution` option to see the answer if you need it! 42 | 43 | Please enter your name for submission. If you do not need to submit, just type anything you'd like in this box. 44 | 45 | ```{r details} 46 | question_text( 47 | "Student Name:", 48 | answer("Your Name", correct = TRUE), 49 | incorrect = "Thanks!", 50 | try_again_button = "Modify your answer", 51 | allow_retry = TRUE 52 | ) 53 | ``` 54 | 55 | ## Dichotomous IRT 56 | 57 | The included dataset contains data from an Educational Psychology Test scored as 0 (answered incorrectly) and 1 (answered correctly). The data has been imported for you. 58 | 59 | ```{r echo = T} 60 | data(dirtdata) 61 | head(dirtdata) 62 | ``` 63 | 64 | ## Two Parameter Logistic 65 | 66 | Include a 2PL calculated on just columns V2 through V5. Save the model as `edu.model`. Use the `coef()` function to examine the difficulty and discrimination parameters. 67 | 68 | ```{r twopl, exercise = TRUE} 69 | 70 | ``` 71 | 72 | ```{r twopl-solution} 73 | edu.model <- ltm(dirtdata ~ z1, IRT.param = TRUE) 74 | coef(edu.model) 75 | ``` 76 | 77 | ## 2PL Plots 78 | 79 | Include the ICC and TIF plots to view all the items and overall test information at once.
80 | ```{r twoplplot-setup} 81 | edu.model <- ltm(dirtdata ~ z1, IRT.param = TRUE) 82 | ``` 83 | 84 | ```{r twoplplot, exercise = TRUE} 85 | 86 | ``` 87 | 88 | ```{r twoplplot-solution} 89 | plot(edu.model, type = "ICC") 90 | plot(edu.model, type = "IIC", items = 0) 91 | ``` 92 | 93 | ## Three Parameter Logistic 94 | 95 | Include the 3PL model, saved as `edu.model2`. Use the `coef()` function to examine the difficulty, discrimination, and guessing parameters. 96 | 97 | ```{r threepl, exercise = TRUE} 98 | 99 | ``` 100 | 101 | ```{r threepl-solution} 102 | edu.model2 <- tpm(dirtdata, type="latent.trait", IRT.param = TRUE) 103 | coef(edu.model2) 104 | ``` 105 | 106 | ## 3PL Plots 107 | 108 | Include the ICC and TIF plots to view all the items and overall test information at once. 109 | 110 | ```{r threeplplot-setup} 111 | edu.model2 <- tpm(dirtdata, type="latent.trait", IRT.param = TRUE) 112 | ``` 113 | 114 | ```{r threeplplot, exercise = TRUE} 115 | 116 | ``` 117 | 118 | ```{r threeplplot-solution} 119 | plot(edu.model2, type="ICC") 120 | plot(edu.model2, type = "IIC", items = 0) 121 | ``` 122 | 123 | ## Compare Models 124 | 125 | Use the `anova()` function to compare the two models. 126 | 127 | ```{r compare-setup} 128 | edu.model <- ltm(dirtdata ~ z1, IRT.param = TRUE) 129 | edu.model2 <- tpm(dirtdata, type="latent.trait", IRT.param = TRUE) 130 | ``` 131 | 132 | ```{r compare, exercise = TRUE} 133 | 134 | ``` 135 | 136 | ```{r compare-solution} 137 | anova(edu.model, edu.model2) 138 | ``` 139 | 140 | ```{r best-open} 141 | question_text( 142 | "Which model was better? Does it appear that the guessing parameter adds something useful to the model? ", 143 | answer("Nope! The 2PL is a better representation.", correct = TRUE), 144 | incorrect = "Nope! The 2PL is a better representation.", 145 | try_again_button = "Modify your answer", 146 | allow_retry = TRUE 147 | ) 148 | ``` 149 | 150 | ```{r good2pl-open} 151 | question_text( 152 | "Which items would be considered good items based on discrimination?", 153 | answer("Nearly all are good discriminators, but pretty easy.", correct = TRUE), 154 | incorrect = "Nearly all are good discriminators, but pretty easy.", 155 | try_again_button = "Modify your answer", 156 | allow_retry = TRUE 157 | ) 158 | ``` 159 | 160 | ## Polytomous IRT 161 | 162 | Load the `mirtdata` data below. You should reverse-code items 5, 8, and 13 using `8 - columns` to ensure all items are in the same direction. The scale included examines evaluations of job candidates rated on 15 different qualities. 163 | 164 | ```{r echo = T} 165 | data(mirtdata) 166 | head(mirtdata) 167 | ``` 168 | 169 | ## Graded Partial Credit Model 170 | 171 | Create a graded partial credit model to analyze the scale, and save this model as `gpcm.model`. Include the `coef()` for the model to help you answer the questions below. 172 | 173 | ```{r gpcm, exercise = TRUE} 174 | 175 | ``` 176 | 177 | ```{r gpcm-solution} 178 | gpcm.model <- mirt(data = mirtdata, 179 | model = 1, 180 | itemtype = "gpcm") 181 | 182 | coef(gpcm.model, IRTpars = T) 183 | ``` 184 | 185 | ## GPCM Plots 186 | 187 | Include the ICC and TIF plots to view all the items and overall test information at once.
188 | 189 | ```{r gpcmplots-setup} 190 | gpcm.model <- mirt(data = mirtdata, 191 | model = 1, 192 | itemtype = "gpcm") 193 | ``` 194 | 195 | ```{r gpcmplots, exercise = TRUE} 196 | 197 | ``` 198 | 199 | ```{r gpcmplots-solution} 200 | plot(gpcm.model, type = "trace") 201 | plot(gpcm.model, type = "info") 202 | ``` 203 | 204 | ```{r order-open} 205 | question_text( 206 | "Examine the items. Do they appear ordered where each answer function is ordered correctly from 1 to 7?", 207 | answer("Yes, they mostly appear ordered.", correct = TRUE), 208 | incorrect = "Yes, they mostly appear ordered.", 209 | try_again_button = "Modify your answer", 210 | allow_retry = TRUE 211 | ) 212 | ``` 213 | 214 | ```{r allscale-open} 215 | question_text( 216 | "Examine the items. Do we need all 7 items on this scale? (i.e., do they all have the probability of being the most likely answer choice?)", 217 | answer("Unlikely, maybe only four points.", correct = TRUE), 218 | incorrect = "Unlikely, maybe only four points.", 219 | try_again_button = "Modify your answer", 220 | allow_retry = TRUE 221 | ) 222 | ``` 223 | 224 | ```{r goodquestions-open} 225 | question_text( 226 | "Which items indicate good discrimination?", 227 | answer("Pretty much all of them but 8 and 13.", correct = TRUE), 228 | incorrect = "Pretty much all of them but 8 and 13.", 229 | try_again_button = "Modify your answer", 230 | allow_retry = TRUE 231 | ) 232 | ``` 233 | 234 | ## Submit 235 | 236 | On this page, you will create the submission for your instructor (if necessary). Please copy this report and submit using a Word document or paste into the text window of your submission page. Click "Generate Submission" to get your work! 237 | 238 | ```{r context="server"} 239 | encoder_logic() 240 | ``` 241 | 242 | ```{r encode, echo=FALSE} 243 | encoder_ui() 244 | ``` 245 | -------------------------------------------------------------------------------- /inst/tutorials/lgm/.gitignore: -------------------------------------------------------------------------------- 1 | *.html 2 | -------------------------------------------------------------------------------- /inst/tutorials/lgm/lgm_data/data.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/inst/tutorials/lgm/lgm_data/data.RData -------------------------------------------------------------------------------- /inst/tutorials/lgm/lgm_data/data_chunks_index.txt: -------------------------------------------------------------------------------- 1 | data.RData 2 | -------------------------------------------------------------------------------- /inst/tutorials/mgcfa/.gitignore: -------------------------------------------------------------------------------- 1 | *.html 2 | -------------------------------------------------------------------------------- /inst/tutorials/mtmm/.gitignore: -------------------------------------------------------------------------------- 1 | *.html 2 | -------------------------------------------------------------------------------- /inst/tutorials/path1/.gitignore: -------------------------------------------------------------------------------- 1 | *.html 2 | -------------------------------------------------------------------------------- /inst/tutorials/path1/images/assignment_path1_1.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/inst/tutorials/path1/images/assignment_path1_1.png -------------------------------------------------------------------------------- /inst/tutorials/path1/images/assignment_path1_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/inst/tutorials/path1/images/assignment_path1_2.png -------------------------------------------------------------------------------- /inst/tutorials/path1/path1.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Path Analysis Practice 1" 3 | tutorial: 4 | id: "path1" 5 | output: learnr::tutorial 6 | runtime: shiny_prerendered 7 | description: In this tutorial, you will begin to practice `lavaan` code by building path analysis models, including a mediation model. 8 | --- 9 | 10 | ```{r setup, include=FALSE} 11 | library(learnr) 12 | library(learnSEM) 13 | knitr::opts_chunk$set(echo = FALSE) 14 | library(lavaan) 15 | library(semPlot) 16 | 17 | academic.cor <- lav_matrix_lower2full(c(1, 18 | .178, 1, 19 | .230, .327, 1, 20 | .106, .245, .183, 1, 21 | .195, .356, .721, .178, 1)) 22 | rownames(academic.cor) <- 23 | colnames(academic.cor) <- 24 | c("race", "ses", "cog", "school", "acad") 25 | 26 | mediation.cov <- lav_matrix_lower2full(c(84.85, 27 | 71.28, 140.34, 28 | 18.83, -6.25, 72.92, 29 | 60.05, 84.54, 37.18, 139.48)) 30 | rownames(mediation.cov) <- 31 | colnames(mediation.cov) <- c("teacher", "social", "material", "achieve") 32 | ``` 33 | 34 | ## Estimation, Path Models, and Fit Indices 35 | 36 | This section of the course covers the beginning of `lavaan` syntax by introducing path models. You will first learn about estimation to get a broad sense of how these models can be analyzed. Next, you will learn how to write your own model code, analyze that model, and summarize the output. You will use `semPlot` to diagram your models, and we will end with fit indices and how to analyze and compare models. You should complete both path1 and path2 to cover this material. The learning outcomes are: 37 | 38 | - Understand the different types of estimation and when they are used 39 | - Distinguish between the different types of `lavaan` operators 40 | - Build, analyze, summarize, and diagram a path model 41 | - Distinguish the different types of fit indices 42 | - Determine how to compare two different structural equation models 43 | 44 | ## Path Analysis Videos 45 | 46 | You can use `vignette("lecture_path", "learnSEM")` to view these notes in R. 47 | 48 | 49 | 50 | ## Exercises 51 | 52 | In this next section, you will answer questions using the *R* code blocks provided. Be sure to use the `solution` option to see the answer if you need it! 53 | 54 | Please enter your name for submission. If you do not need to submit, just type anything you'd like in this box. 55 | 56 | ```{r details} 57 | question_text( 58 | "Student Name:", 59 | answer("Your Name", correct = TRUE), 60 | incorrect = "Thanks!", 61 | try_again_button = "Modify your answer", 62 | allow_retry = TRUE 63 | ) 64 | ``` 65 | 66 | ## Specify Your Model 67 | 68 | Use the following picture as your guide to diagram your first path model. This model represents demographic variables (race, SES, school type) and individual ability (cognitive ability) predicting academic achievement.
69 | 70 | ```{r out.width="75%", results = 'asis'} 71 | knitr::include_graphics("images/assignment_path1_1.png") 72 | ``` 73 | 74 | The data has been loaded for you as a correlation table. `lavaan` allows you to build models from raw data, covariance, or correlation matrices. Here's a visual of the data: 75 | 76 | ```{r} 77 | academic.cor 78 | ``` 79 | 80 | Create the `lavaan` model code in the code box below using the variable names from the `academic.cor` data above, matched to the model picture shown above. You should call your model `academic.model`. 81 | 82 | ```{r model1, exercise = TRUE} 83 | 84 | ``` 85 | 86 | ```{r model1-solution} 87 | academic.model <- ' 88 | acad ~ cog + race + ses + school 89 | school ~ cog + race + ses 90 | cog ~ race + ses 91 | ses ~ race 92 | ' 93 | ``` 94 | 95 | ## Analyze the Model 96 | 97 | Analyze your path model using the `sem()` function, and name the model `academic.fit`. There are 18058 participants in the data for your `sample.nobs`. 98 | 99 | ```{r analyze1-setup} 100 | academic.model <- ' 101 | acad ~ cog + race + ses + school 102 | school ~ cog + race + ses 103 | cog ~ race + ses 104 | ses ~ race 105 | ' 106 | ``` 107 | 108 | ```{r analyze1, exercise = TRUE} 109 | 110 | ``` 111 | 112 | ```{r analyze1-solution} 113 | academic.fit <- sem(model = academic.model, 114 | sample.cov = academic.cor, 115 | sample.nobs = 18058) 116 | ``` 117 | 118 | ## Summarize Your Model 119 | 120 | Let's summarize the model you just created. Use the `summary()` function on your model with the standardized solution, rsquare values, and fit.measures all included. 121 | 122 | ```{r summarize1-setup} 123 | academic.model <- ' 124 | acad ~ cog + race + ses + school 125 | school ~ cog + race + ses 126 | cog ~ race + ses 127 | ses ~ race 128 | ' 129 | academic.fit <- sem(model = academic.model, 130 | sample.cov = academic.cor, 131 | sample.nobs = 18058) 132 | ``` 133 | 134 | ```{r summarize1, exercise = TRUE} 135 | 136 | ``` 137 | 138 | ```{r summarize1-solution} 139 | summary(academic.fit, 140 | standardized = TRUE, 141 | rsquare = TRUE, 142 | fit.measures = TRUE) 143 | ``` 144 | 145 | ## Create a Picture 146 | 147 | Use `semPaths()` to create a picture of your path model. Use `par` for the `whatLabels` argument, any layout you would like, and `edge.label.cex = 1` to increase the font size. 148 | 149 | ```{r diagram1-setup} 150 | academic.model <- ' 151 | acad ~ cog + race + ses + school 152 | school ~ cog + race + ses 153 | cog ~ race + ses 154 | ses ~ race 155 | ' 156 | academic.fit <- sem(model = academic.model, 157 | sample.cov = academic.cor, 158 | sample.nobs = 18058) 159 | ``` 160 | 161 | ```{r diagram1, exercise = TRUE} 162 | 163 | ``` 164 | 165 | ```{r diagram1-solution} 166 | semPaths(academic.fit, 167 | whatLabels = "par", 168 | layout = "spring", 169 | edge.label.cex = 1) 170 | ``` 171 | 172 | ## Mediation Models 173 | 174 | For this example, you will create a mediation model with two indirect effects. Use the following image to create your mediation model. Note that you will have two indirect effects: one representing the top half of the model and one representing the bottom half of the model. 175 | 176 | This model represents the mediating effects of social climate and the material covered in class on the relationship between teacher expectations and student achievement. The model predicts that teacher expectations actually predict social climate and materials, which then lead to student achievement levels if a mediating effect is found. 
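Before you write the syntax, here is a generic sketch of how `lavaan` labels paths and defines indirect effects with `:=` (the names `a`, `b`, `c` and the variables are placeholders, not the exercise answer):

```{r label-reminder, eval = FALSE}
# multiplying a label onto a predictor names that path; := combines labels
generic.mediation <- '
outcome ~ b*mediator + c*predictor   # b and c label these paths
mediator ~ a*predictor               # a labels this path
indirect := a*b                      # the indirect effect through the mediator
'
```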
177 | 178 | ```{r out.width="75%", results = 'asis'} 179 | knitr::include_graphics("images/assignment_path1_2.png") 180 | ``` 181 | 182 | ```{r} 183 | mediation.cov 184 | ``` 185 | 186 | Using the names from the mediation covariance table, and the model diagram above, create `mediation.model` syntax for this path model. 187 | 188 | ```{r build2, exercise = TRUE} 189 | 190 | ``` 191 | 192 | ```{r build2-solution} 193 | mediation.model <- ' 194 | achieve ~ b1*social + b2*material + c*teacher 195 | material ~ a2*teacher 196 | social ~ a1*teacher 197 | indirect:= a1*b1 198 | indirect2:=a2*b2 199 | ' 200 | ``` 201 | 202 | ## Analyze the Model 203 | 204 | Analyze your path model using the `sem()` function. There are 40 participants for the `sample.nobs` argument. Save your model as `mediation.fit`. 205 | 206 | ```{r analyze2-setup} 207 | mediation.model <- ' 208 | achieve ~ b1*social + b2*material + c*teacher 209 | material ~ a2*teacher 210 | social ~ a1*teacher 211 | indirect:= a1*b1 212 | indirect2:=a2*b2 213 | ' 214 | ``` 215 | 216 | ```{r analyze2, exercise = TRUE} 217 | 218 | ``` 219 | 220 | ```{r analyze2-solution} 221 | mediation.fit <- sem(model = mediation.model, 222 | sample.cov = mediation.cov, 223 | sample.nobs = 40) 224 | ``` 225 | 226 | ## Summarize Your Model 227 | 228 | Use the `summary()` function to summarize your model using the standardized solution, including fit.measures and rsquare as options. 229 | 230 | ```{r summary2-setup} 231 | mediation.model <- ' 232 | achieve ~ b1*social + b2*material + c*teacher 233 | material ~ a2*teacher 234 | social ~ a1*teacher 235 | indirect:= a1*b1 236 | indirect2:=a2*b2 237 | ' 238 | mediation.fit <- sem(model = mediation.model, 239 | sample.cov = mediation.cov, 240 | sample.nobs = 40) 241 | ``` 242 | 243 | ```{r summary2, exercise = TRUE} 244 | 245 | ``` 246 | 247 | ```{r summary2-solution} 248 | summary(mediation.fit, 249 | standardized = TRUE, 250 | fit.measures = TRUE, 251 | rsquare = TRUE) 252 | ``` 253 | 254 | ## Create a Picture 255 | 256 | Use `semPaths()` to create a picture of your path model. Use `par` for the `whatLabels` argument, any layout you would like, and `edge.label.cex = 1` to increase the font size. 257 | 258 | ```{r diagram2-setup} 259 | mediation.model <- ' 260 | achieve ~ b1*social + b2*material + c*teacher 261 | material ~ a2*teacher 262 | social ~ a1*teacher 263 | indirect:= a1*b1 264 | indirect2:=a2*b2 265 | ' 266 | mediation.fit <- sem(model = mediation.model, 267 | sample.cov = mediation.cov, 268 | sample.nobs = 40) 269 | ``` 270 | 271 | ```{r diagram2, exercise = TRUE} 272 | 273 | ``` 274 | 275 | ```{r diagram2-solution} 276 | semPaths(mediation.fit, 277 | whatLabels = "par", 278 | layout = "spring", 279 | edge.label.cex = 1) 280 | ``` 281 | 282 | ## Submit 283 | 284 | On this page, you will create the submission for your instructor (if necessary). Please copy this report and submit using a Word document or paste into the text window of your submission page. Click "Generate Submission" to get your work! 
285 | 286 | ```{r context="server"} 287 | encoder_logic() 288 | ``` 289 | 290 | ```{r encode, echo=FALSE} 291 | encoder_ui() 292 | ``` 293 | -------------------------------------------------------------------------------- /inst/tutorials/path2/.gitignore: -------------------------------------------------------------------------------- 1 | *.html 2 | -------------------------------------------------------------------------------- /inst/tutorials/path2/images/assignment_path2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/inst/tutorials/path2/images/assignment_path2.png -------------------------------------------------------------------------------- /inst/tutorials/terms/.gitignore: -------------------------------------------------------------------------------- 1 | *.html 2 | -------------------------------------------------------------------------------- /inst/tutorials/terms/terms.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Terminology in SEM Practice" 3 | tutorial: 4 | id: "terms" 5 | output: learnr::tutorial 6 | runtime: shiny_prerendered 7 | description: In this tutorial, you will learn about structural equation modeling terminology and practice calculating degrees of freedom. 8 | --- 9 | 10 | ```{r setup, include=FALSE} 11 | library(learnr) 12 | library(learnSEM) 13 | knitr::opts_chunk$set(echo = FALSE) 14 | library(lavaan) 15 | library(semPlot) 16 | model <- ' 17 | # measurement model 18 | ind60 =~ x1 + x2 + x3 19 | dem60 =~ y1 + y2 + y3 + y4 20 | dem65 =~ y5 + y6 + y7 + y8 21 | # regressions 22 | dem60 ~ ind60 23 | dem65 ~ ind60 + dem60 24 | # residual correlations 25 | y1 ~~ y5 26 | y2 ~~ y4 + y6 27 | y3 ~~ y7 28 | y4 ~~ y8 29 | y6 ~~ y8 30 | ' 31 | fit <- sem(model, data=PoliticalDemocracy) 32 | ``` 33 | 34 | ## Terminology in SEM 35 | 36 | This section of the course will begin to introduce you to the terminology associated with structural equation modeling. We will begin to introduce the code specific to `lavaan` to help demonstrate how these terms can be applied to the models you will be programming in the next sections. In this practice, you will learn to: 37 | 38 | - Distinguish between the types of models, variables, and relationships present in structural equation models. 39 | - Define identification and specification. 40 | - Determine the number of parameters and degrees of freedom for models. 41 | 42 | ## Terminology Videos 43 | 44 | You can use `vignette("lecture_terms", "learnSEM")` to view these notes in R. 45 | 46 | 47 | 48 | ## Exercises 49 | 50 | In this next section, you will answer questions using the *R* code blocks provided. Be sure to use the `solution` option to see the answer if you need it! 51 | 52 | Please enter your name for submission. If you do not need to submit, just type anything you'd like in this box. 53 | 54 | ```{r details} 55 | question_text( 56 | "Student Name:", 57 | answer("Your Name", correct = TRUE), 58 | incorrect = "Thanks!", 59 | try_again_button = "Modify your answer", 60 | allow_retry = TRUE 61 | ) 62 | ``` 63 | 64 | ## Example Model Diagram 65 | 66 | We are going to use an example from the `lavaan` tutorials, which you can find at https://lavaan.ugent.be/tutorial/sem.html. This complex model will allow you to think about some of the terminology covered in the lecture.
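Later pages will ask you to count parameters and degrees of freedom. As a reminder (a generic sketch with a placeholder `p`, not the answer to the questions below), the number of *possible* parameters comes from the unique variances and covariances among the manifest variables:

```{r df-reminder, eval = FALSE}
p <- 6                        # placeholder: number of manifest variables
possible <- p * (p + 1) / 2   # unique variances and covariances = 21 here
# model df = possible parameters - estimated parameters
```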
67 | 68 | Here's a picture of the model: 69 | 70 | ![SEM Example](https://lavaan.ugent.be/tutorial/figure/sem-1.png) 71 | 72 | We have not covered yet how to write the model code, but you can see it below. 73 | 74 | ```{r echo = T} 75 | library(lavaan) 76 | model <- ' 77 | # measurement model 78 | ind60 =~ x1 + x2 + x3 79 | dem60 =~ y1 + y2 + y3 + y4 80 | dem65 =~ y5 + y6 + y7 + y8 81 | # regressions 82 | dem60 ~ ind60 83 | dem65 ~ ind60 + dem60 84 | # residual correlations 85 | y1 ~~ y5 86 | y2 ~~ y4 + y6 87 | y3 ~~ y7 88 | y4 ~~ y8 89 | y6 ~~ y8 90 | ' 91 | fit <- sem(model, data=PoliticalDemocracy) 92 | ``` 93 | 94 | ## Understanding Terminology 95 | 96 | Here's the model again to help you answer these questions: 97 | 98 | ![SEM Example](https://lavaan.ugent.be/tutorial/figure/sem-1.png) 99 | 100 | ```{r latent-open} 101 | question_text( 102 | "How many latent variables are in the model?", 103 | answer("3", correct = TRUE), 104 | incorrect = "There are three circles for latent variables.", 105 | try_again_button = "Modify your answer", 106 | allow_retry = TRUE 107 | ) 108 | ``` 109 | 110 | ```{r manifest-open} 111 | question_text( 112 | "How many manifest variables are in the model?", 113 | answer("11", correct = TRUE), 114 | incorrect = "There are 11 squares for measured/manifest variables.", 115 | try_again_button = "Modify your answer", 116 | allow_retry = TRUE 117 | ) 118 | ``` 119 | 120 | ```{r types1-open} 121 | question_text( 122 | "What would you label `ind60` predicting `x1`, `x2`, and `x3`?", 123 | answer("Measurement model", correct = TRUE), 124 | incorrect = "That section of the model is often called the measurement model.", 125 | try_again_button = "Modify your answer", 126 | allow_retry = TRUE 127 | ) 128 | ``` 129 | 130 | ```{r types2-open} 131 | question_text( 132 | "What would you label `ind60` predicting `dem60` and `dem65`?", 133 | answer("Structural model", correct = TRUE), 134 | incorrect = "That section of the model is often called the structural model.", 135 | try_again_button = "Modify your answer", 136 | allow_retry = TRUE 137 | ) 138 | ``` 139 | 140 | ```{r types3-open} 141 | question_text( 142 | "Is `ind60` an endogenous or exogenous variable? ", 143 | answer("Exogenous", correct = TRUE), 144 | incorrect = "This variable is exogenous because arrows only go out of the variable.", 145 | try_again_button = "Modify your answer", 146 | allow_retry = TRUE 147 | ) 148 | ``` 149 | 150 | ```{r types4-open} 151 | question_text( 152 | "Is `dem60` an endogenous or exogenous variable? ", 153 | answer("Both", correct = TRUE), 154 | incorrect = "This variable is both!", 155 | try_again_button = "Modify your answer", 156 | allow_retry = TRUE 157 | ) 158 | ``` 159 | 160 | ## Understanding Identification 161 | 162 | Here's a visualization of the model using `semPlot`, which shows you all the estimated paths. 163 | 164 | ```{r echo = T} 165 | library(semPlot) 166 | semPaths(fit) 167 | ``` 168 | 169 | ```{r variances-open} 170 | question_text( 171 | "How many variances (error/latent variable) are estimated?", 172 | answer("14", correct = TRUE), 173 | incorrect = "There are 14 variances estimated in this model (look at the double headed arrows on the variables).", 174 | try_again_button = "Modify your answer", 175 | allow_retry = TRUE 176 | ) 177 | ``` 178 | 179 | ```{r regressions-open} 180 | question_text( 181 | "How many regressions are estimated?", 182 | answer("3", correct = TRUE), 183 | incorrect = "There are three regressions estimated between the latent variables. 
Remember, the others are called loadings!", 184 | try_again_button = "Modify your answer", 185 | allow_retry = TRUE 186 | ) 187 | ``` 188 | 189 | ```{r loadings-open} 190 | question_text( 191 | "How many loadings are estimated?", 192 | answer("8", correct = TRUE), 193 | incorrect = "There are eight loadings estimated. Do not forget that you will use a marker variable, which are the dotted lines on this picture.", 194 | try_again_button = "Modify your answer", 195 | allow_retry = TRUE 196 | ) 197 | ``` 198 | 199 | ```{r covariances-open} 200 | question_text( 201 | "How many covariances are estimated?", 202 | answer("6", correct = TRUE), 203 | incorrect = "There are six covariances between the y variables.", 204 | try_again_button = "Modify your answer", 205 | allow_retry = TRUE 206 | ) 207 | ``` 208 | 209 | ```{r possible-open} 210 | question_text( 211 | "How many *possible* parameters can you estimate? ", 212 | answer("66", correct = TRUE), 213 | incorrect = "11 * (11+1) / 2 = 66", 214 | try_again_button = "Modify your answer", 215 | allow_retry = TRUE 216 | ) 217 | ``` 218 | 219 | ```{r estimated-open} 220 | question_text( 221 | "How many *parameters* are you estimating? (add variances, regressions, loadings, covariances)", 222 | answer("31", correct = TRUE), 223 | incorrect = "6+8+3+14 = 31", 224 | try_again_button = "Modify your answer", 225 | allow_retry = TRUE 226 | ) 227 | ``` 228 | 229 | ```{r df-open} 230 | question_text( 231 | "Given the previous two answers, what is the *df* for your model?", 232 | answer("35", correct = TRUE), 233 | incorrect = "66 - 31 = 35", 234 | try_again_button = "Modify your answer", 235 | allow_retry = TRUE 236 | ) 237 | ``` 238 | 239 | You can check your work against the model summary provided below. It's ok if you get it wrong! Learning how to read model diagrams and know what to expect is an important part of learning SEM. 240 | 241 | ```{r echo = T} 242 | summary(fit) 243 | ``` 244 | 245 | ## Submit 246 | 247 | On this page, you will create the submission for your instructor (if necessary). Please copy this report and submit using a Word document or paste into the text window of your submission page. Click "Generate Submission" to get your work! 248 | 249 | ```{r context="server"} 250 | encoder_logic() 251 | ``` 252 | 253 | ```{r encode, echo=FALSE} 254 | encoder_ui() 255 | ``` 256 | -------------------------------------------------------------------------------- /learnSEM.Rproj: -------------------------------------------------------------------------------- 1 | Version: 1.0 2 | ProjectId: cc1b561f-50a1-4c1f-8a90-77bc90781d96 3 | 4 | RestoreWorkspace: Default 5 | SaveWorkspace: Default 6 | AlwaysSaveHistory: Default 7 | 8 | EnableCodeIndexing: Yes 9 | UseSpacesForTab: Yes 10 | NumSpacesForTab: 2 11 | Encoding: UTF-8 12 | 13 | RnwWeave: Sweave 14 | LaTeX: pdfLaTeX 15 | 16 | AutoAppendNewline: Yes 17 | StripTrailingWhitespace: Yes 18 | 19 | BuildType: Package 20 | PackageUseDevtools: Yes 21 | PackageInstallArgs: --no-multiarch --with-keep.source 22 | -------------------------------------------------------------------------------- /man/caafidata.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/caafidata-data.R 3 | \docType{data} 4 | \name{caafidata} 5 | \alias{caafidata} 6 | \title{CAAFI Data: Computer Aversion, Attitudes, and Familiarity Inventory} 7 | \format{ 8 | A data frame with 794 rows and 30 variables. 
9 | 10 | \describe{ 11 | \item{q1}{I enjoy using computers.} 12 | \item{q2}{Being able to use a computer is important to me.} 13 | \item{q3}{I keep up with the latest computer hardware.} 14 | \item{q4}{Computers are beneficial because they save people time.} 15 | \item{q5}{I like using word-processing programs.} 16 | \item{q6}{I feel like a fool when I am using a computer and others are around.} 17 | \item{q7}{I am smart enough to use a computer.} 18 | \item{q8}{I avoid using computers whenever possible.} 19 | \item{q9}{I do not understand how to use computer software (e.g., word-processing programs, spreadsheet programs, etc.).} 20 | \item{q10}{I feel that I understand how to use computer files, documents, and folders.} 21 | \item{q11}{I use a computer input device every day (e.g., a keyboard, a touch pad, a mouse).} 22 | \item{q12}{I can use a computer to successfully perform tasks.} 23 | \item{q13}{I can add new hardware to a computer.} 24 | \item{q14}{I enjoy reading computer magazines.} 25 | \item{q15}{When I use a computer, I am afraid that I will damage it.} 26 | \item{q16}{I enjoy connecting new computer accessories.} 27 | \item{q17}{I must have a reference manual or a help file to run computer software.} 28 | \item{q18}{E-mail is an easy way to communicate with people.} 29 | \item{q19}{I use e-mail every day.} 30 | \item{q20}{I am comfortable changing (installing/upgrading) computer software.} 31 | \item{q21}{I often read computer books.} 32 | \item{q22}{My friends often ask me computer-related questions.} 33 | \item{q23}{I often read computer magazines.} 34 | \item{q24}{Overall, I feel that I don't know how to use a computer.} 35 | \item{q25}{Computers are too scientific for me.} 36 | \item{q26}{When using a computer, I often lose data.} 37 | \item{q27}{I enjoy learning to use new software programs.} 38 | \item{q28}{I like to use computer input devices such as a keyboard, a touch pad, a mouse, etc.} 39 | \item{q29}{Using a computer is entertaining.} 40 | \item{q30}{I keep up with the latest computer software.} 41 | } 42 | } 43 | \usage{ 44 | data(caafidata) 45 | } 46 | \description{ 47 | Study: This dataset has data collected on the computer 48 | aversion, attitudes, and familiarity inventory. 49 | } 50 | \details{ 51 | The instructions were: 52 | Below is a list of items describing many of the 53 | thoughts and experiences that people have with computers. 54 | After reading each statement, circle the number that best 55 | describes how true or how false the statement is as it 56 | applies to you at this time. If you have no opinion about 57 | the item, circle ‘‘0”, but please use this option only if 58 | it is absolutely necessary. Be sure to circle only one 59 | number. Please do your best to respond to each item. 60 | 61 | Scale -3 to 3: absolutely false, neutral, absolutely true 62 | 63 | Computer Familiarity: Items 3, 13-14, 16, 20-23, 27, and 30. 64 | 65 | Computer Attitudes: Items 1-2, 4-5, 8, 11, 18-19, and 28-29. 66 | 67 | Computer Aversion: Items 6-7, 9-10, 12, 15, 17, and 24-26. 
68 | } 69 | \keyword{datasets} 70 | -------------------------------------------------------------------------------- /man/dassdata.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/dassdata-data.R 3 | \docType{data} 4 | \name{dassdata} 5 | \alias{dassdata} 6 | \title{DASS Data: Depression, Anxiety, and Stress Inventory} 7 | \format{ 8 | A data frame with 794 rows and 21 variables. 9 | 10 | \describe{ 11 | \item{Q1}{I found it hard to wind down.} 12 | \item{Q2}{I was aware of dryness of my mouth.} 13 | \item{Q3}{I couldn't seem to experience any positive feeling at all.} 14 | \item{Q4}{I experienced breathing difficulty.} 15 | \item{Q5}{I found it difficult to work up the initiative to do things.} 16 | \item{Q6}{I tended to over-react to situations.} 17 | \item{Q7}{I experienced trembling (eg, in the hands).} 18 | \item{Q8}{I felt that I was using a lot of nervous energy.} 19 | \item{Q9}{I was worried about situations in which I might panic and make a fool of myself.} 20 | \item{Q10}{I felt that I had nothing to look forward to.} 21 | \item{Q11}{I found myself getting agitated.} 22 | \item{Q12}{I found it difficult to relax.} 23 | \item{Q13}{I felt down-hearted and blue.} 24 | \item{Q14}{I was intolerant of anything that kept me from getting on with what I was doing.} 25 | \item{Q15}{I felt I was close to panic.} 26 | \item{Q16}{I was unable to become enthusiastic about anything.} 27 | \item{Q17}{I felt I wasn't worth much as a person.} 28 | \item{Q18}{I felt that I was rather touchy.} 29 | \item{Q19}{I was aware of the action of my heart in the absence of physical exertion.} 30 | \item{Q20}{I felt scared without any good reason.} 31 | \item{Q21}{I felt that life was meaningless.} 32 | } 33 | } 34 | \usage{ 35 | data(dassdata) 36 | } 37 | \description{ 38 | Study: The DASS is a measurement scale that examines 39 | the depression, anxiety, and stress of an individual. 40 | } 41 | \details{ 42 | The instructions were: 43 | Please read each statement and select a number 0, 1, 44 | 2 or 3 that indicates how much the statement applied 45 | to you over the past week. There are no right or wrong 46 | answers. Do not spend too much time on any statement, 47 | but please answer each question. 48 | 49 | Scale 0-3: did not apply to me, applied to me to some 50 | degree, applied to me to a considerable degree, or 51 | applied to me very much 52 | 53 | Depression: Questions 3, 5, 10, 13, 16, 17, 21 54 | 55 | Anxiety: Questions 2, 4, 7, 9, 15, 19, 20 56 | 57 | Stress: Questions 1, 6, 8, 11, 12, 14, 18 58 | } 59 | \keyword{datasets} 60 | -------------------------------------------------------------------------------- /man/datascreen.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/datascreen-data.R 3 | \docType{data} 4 | \name{datascreen} 5 | \alias{datascreen} 6 | \title{Data Screening Practice Dataset} 7 | \format{ 8 | A data frame with 797 rows and 12 variables.
9 | 10 | \describe{ 11 | \item{Participant_ID}{ID number for each participant} 12 | \item{q1}{I think my body should be leaner} 13 | \item{q2}{I am concerned that my stomach is too flabby} 14 | \item{q3}{I feel dissatisfied with my overall body build} 15 | \item{q4}{I think I have too much fat on my body} 16 | \item{q5}{I think my abs are not thin enough} 17 | \item{q6}{I feel satisfied with the size and shape of 18 | my body} 19 | \item{q7}{Has eating sweets, cakes, or other high 20 | calorie food made you feel fat or weak?} 21 | \item{q8}{Have you felt excessively large and rounded 22 | (i.e., fat)?} 23 | \item{q9}{Have you felt ashamed of your body size or 24 | shape?} 25 | \item{q10}{Has seeing your reflection (e.g., in a mirror 26 | or window) made you feel badly about your size or shape?} 27 | \item{q11}{Have you been so worried about your body size 28 | or shape that you have been feeling that you ought to diet?} 29 | } 30 | } 31 | \usage{ 32 | data(datascreen) 33 | } 34 | \description{ 35 | Study: This dataset includes a male body dissatisfaction scale that can be used for 36 | datascreening or scale development. 37 | } 38 | \keyword{datasets} 39 | -------------------------------------------------------------------------------- /man/dirtdata.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/dirtdata-data.R 3 | \docType{data} 4 | \name{dirtdata} 5 | \alias{dirtdata} 6 | \title{Dichotomous IRT Practice Data} 7 | \format{ 8 | A data frame with 30 rows and 4 variables. 9 | } 10 | \usage{ 11 | data(dirtdata) 12 | } 13 | \description{ 14 | Study: This data represents the answers on an 15 | Educational Psychology exam. A zero indicates that 16 | the person missed the question, while one 17 | indicates that the person got the question right. 18 | } 19 | \keyword{datasets} 20 | -------------------------------------------------------------------------------- /man/efa.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/efa-data.R 3 | \docType{data} 4 | \name{efa} 5 | \alias{efa} 6 | \title{Exploratory Factor Analysis Practice Dataset} 7 | \format{ 8 | A data frame with 99 rows and 21 variables. 
9 | 10 | \describe{ 11 | \item{o1}{Believe in the importance of art.} 12 | \item{o2}{Have a vivid imagination.} 13 | \item{o3}{Tend to vote for liberal political candidates.} 14 | \item{o4}{Carry the conversation to a higher level.} 15 | \item{o5}{Enjoy hearing new ideas.} 16 | \item{o6}{Enjoy thinking about things.} 17 | \item{o7}{Can say things beautifully.} 18 | \item{o8}{Enjoy wild flights of fantasy.} 19 | \item{o9}{Get excited by new ideas.} 20 | \item{o10}{Have a rich vocabulary.} 21 | \item{o11}{Am not interested in abstract ideas.} 22 | \item{o12}{Do not like art.} 23 | \item{o13}{Avoid philosophical discussions.} 24 | \item{o14}{Do not enjoy going to art museums.} 25 | \item{o15}{Tend to vote for conservative political candidates.} 26 | \item{o16}{Do not like poetry.} 27 | \item{o17}{Rarely look for a deeper meaning in things.} 28 | \item{o18}{Believe that too much tax money goes to support artists.} 29 | \item{o19}{Am not interested in theoretical discussions.} 30 | \item{o20}{Have difficulty understanding abstract ideas.} 31 | \item{condition}{a group condition each participant received} 32 | } 33 | } 34 | \usage{ 35 | data(efa) 36 | } 37 | \description{ 38 | Study: This dataset has data on the Openness to 39 | Experience scale collected as part of an undergraduate 40 | honor's thesis project. 41 | } 42 | \details{ 43 | The instructions were: 44 | Below are some phrases describing people's behaviors. 45 | Please use the rating scale below to describe how 46 | accurately each statement describes you. Describe 47 | yourself as you generally are now, not as you wish to 48 | be in the future. Describe yourself as you honestly 49 | see yourself in relation to other people of your 50 | gender and of roughly your same age. Please read 51 | each statement carefully, and then check the box 52 | that corresponds to your response. 53 | 54 | Scale: very inaccurate, moderately inaccurate, 55 | neither inaccurate nor accurate, moderately 56 | accurate, very accurate 57 | } 58 | \keyword{datasets} 59 | -------------------------------------------------------------------------------- /man/encoder_logic.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/encoder_logic.R 3 | \name{encoder_logic} 4 | \alias{encoder_logic} 5 | \title{Encoding Logic for learnr Tutorials} 6 | \usage{ 7 | encoder_logic() 8 | } 9 | \value{ 10 | HTML output for the student tutorial 11 | } 12 | \description{ 13 | This function grabs the student answers from a learnr 14 | tutorial and returns them as an HTML output for 15 | printing to the tutorial screen. 16 | } 17 | \examples{ 18 | 19 | # Be sure to put this into a server-context chunk. 
20 | #```{r context="server"} 21 | #encoder_logic() 22 | #``` 23 | } 24 | \keyword{answers} 25 | \keyword{learnr,} 26 | \keyword{shiny,} 27 | \keyword{student} 28 | -------------------------------------------------------------------------------- /man/encoder_ui.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/encoder_ui.R 3 | \name{encoder_ui} 4 | \alias{encoder_ui} 5 | \title{Encoding User Interface for learnr Tutorials} 6 | \usage{ 7 | encoder_ui(ui_before = NULL, ui_after = NULL) 8 | } 9 | \arguments{ 10 | \item{ui_before}{Shiny code to go before your 11 | submission box.} 12 | 13 | \item{ui_after}{Shiny code to go after your 14 | submission box.} 15 | } 16 | \value{ 17 | Shiny interface for creating submissions 18 | for the learnr tutorials. 19 | } 20 | \description{ 21 | This function is the shiny user interface for 22 | creating the submission output. You can 23 | define instructions to go before or after the 24 | submission window! 25 | } 26 | \examples{ 27 | 28 | #```{r encode, echo=FALSE} 29 | #encoder_ui() 30 | #``` 31 | } 32 | \keyword{answers} 33 | \keyword{learnr,} 34 | \keyword{shiny,} 35 | \keyword{student} 36 | -------------------------------------------------------------------------------- /man/introR.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/introR-data.R 3 | \docType{data} 4 | \name{introR} 5 | \alias{introR} 6 | \title{Introduction to R Dataset} 7 | \format{ 8 | A data frame with 33949 rows and 14 variables: 9 | 10 | \describe{ 11 | \item{expno}{the experiment number we assigned to that 12 | group of participants} 13 | \item{rating}{the pleasantness rating of that word} 14 | \item{originalcode}{the word the participant saw} 15 | \item{id}{the participant ID number} 16 | \item{speed}{the typing speed of the participant} 17 | \item{error}{the number of typing errors by the participant} 18 | \item{whichhand}{which hand the participant indicated as 19 | their dominant hand} 20 | \item{LR_switch}{the number of times typing the word would 21 | switch from left to right hands} 22 | \item{finger_switch}{the number of times you would switch 23 | fingers typing the word} 24 | \item{rha}{right hand advantage: Right - Left handed letters} 25 | \item{word_length}{the number of characters in the word} 26 | \item{letter_freq}{the average of the frequency of each of 27 | the letters in the word} 28 | \item{real_fake}{if the word was a real English word or not} 29 | \item{speed_c}{z-scored speed values} 30 | } 31 | } 32 | \usage{ 33 | data(introR) 34 | } 35 | \description{ 36 | A dataset containing research results from an experiment 37 | that examined how pleasant people felt about words, along 38 | with information about how each word is typed. This dataset 39 | examines the QWERTY effect.
40 | } 41 | \keyword{datasets} 42 | -------------------------------------------------------------------------------- /man/is_server_context.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/server_context.R 3 | \name{is_server_context} 4 | \alias{is_server_context} 5 | \title{Server Functions for learnr Tutorials} 6 | \usage{ 7 | is_server_context(.envir) 8 | } 9 | \arguments{ 10 | \item{.envir}{Automatically grabs the environment 11 | variable for your shiny session.} 12 | } 13 | \value{ 14 | Error messages if you incorrectly use 15 | the functions. 16 | } 17 | \description{ 18 | These functions help check that you have put 19 | together the tutorial correctly for the 20 | student answers to print out at the end of the 21 | tutorial 22 | } 23 | \keyword{answers} 24 | \keyword{learnr,} 25 | \keyword{shiny,} 26 | \keyword{student} 27 | -------------------------------------------------------------------------------- /man/meaningdata.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/meaningdata-data.R 3 | \docType{data} 4 | \name{meaningdata} 5 | \alias{meaningdata} 6 | \title{Meaning and Purpose in Life Data} 7 | \format{ 8 | A data frame with 567 rows and 50 variables. 9 | 10 | \describe{ 11 | \item{p1}{I am usually (completely bored to exuberant, 12 | enthusiastic)} 13 | \item{p2}{Life to me seems (completely routine to 14 | always exciting)} 15 | \item{p3}{In life I have (no goals or aims at all to 16 | very clear goals and aims)} 17 | \item{p4}{My personal existence is (utterly 18 | meaningless without purpose to very purposeful and 19 | meaningful)} 20 | \item{p5}{Every day is (exactly the same to constantly 21 | new)} 22 | \item{p6}{If I could choose, I would (prefer never to 23 | have been born to like nine more lives just like this 24 | one)} 25 | \item{p7}{After retiring, I would (loaf completely the 26 | rest of my life to do some of the exciting things I 27 | have always wanted to do)} 28 | \item{p8}{In achieving life goals I have (made no 29 | progress whatever to progressed to complete fulfillment)} 30 | \item{p9}{My life is (empty, filled only with despair 31 | to running over with exciting good things)} 32 | \item{p10}{If I should die today, I would feel that 33 | my life had been (completely worthless to very 34 | worthwhile)} 35 | \item{p11}{In thinking of my life, I (often wonder why 36 | I exist to always see a reason for my being here)} 37 | \item{p12}{As I view the world in relation to my life, 38 | the world (completely confuses me to fits meaningfully 39 | with my life)} 40 | \item{p13}{I am a (very irresponsible person to very 41 | responsible person)} 42 | \item{p14}{Concerning man's freedom to make his own 43 | choices, I believe man is (completely bound by 44 | limitations of heredity and environment to absolutely 45 | free to make all life choices)} 46 | \item{p15}{With regard to death, I am (unprepared and 47 | afraid to prepared and unafraid)} 48 | \item{p16}{With regard to suicide, I have (thought of 49 | it seriously as a way out to never given it a second 50 | thought)} 51 | \item{p17}{I regard my ability to find a meaning, 52 | purpose, or mission in life as (practically none to 53 | very great)} 54 | \item{p18}{My life is (out of my hands and controlled 55 | by external factors to in my hands and I am in 56 | control
of it)} 57 | \item{p19}{Facing my daily tasks is (a painful and 58 | boring experience to a source of pleasure and 59 | satisfaction)} 60 | \item{p20}{I have discovered (no mission or purpose in 61 | life to clear-cut goals and a satisfying life purpose)} 62 | \item{m1}{I understand my life's meaning.} 63 | \item{m2}{I am looking for something that makes my life 64 | feel meaningful.} 65 | \item{m3}{I am always looking to find my life's purpose.} 66 | \item{m4}{My life has a clear sense of purpose. } 67 | \item{m5}{I have a good sense of what makes my life 68 | meaningful.} 69 | \item{m6}{I have discovered a satisfying life purpose.} 70 | \item{m7}{I am always searching for something that makes 71 | my life feel significant.} 72 | \item{m8}{I am seeking a purpose or mission for my life.} 73 | \item{m9}{My life has no clear purpose. } 74 | \item{m10}{I am searching for meaning in my life. } 75 | \item{s1}{I think about the ultimate meaning of life.} 76 | \item{s2}{I have experienced the feeling that while I am 77 | destined to accomplish something important, I cannot 78 | quite put my finger on just what it is.} 79 | \item{s3}{I try new activities or areas of interest, 80 | and then these soon lose their attractiveness.} 81 | \item{s4}{I feel that some element which I can't 82 | quite define is missing from my life.} 83 | \item{s5}{I am restless.} 84 | \item{s6}{I feel that the greatest fulfillment of my 85 | life lies yet in the future.} 86 | \item{s7}{I hope for something exciting in the future.} 87 | \item{s8}{I daydream of finding a new place for my life 88 | and a new identity.} 89 | \item{s9}{I feel the lack of -- and a need to find -- a 90 | real meaning and purpose in my life.} 91 | \item{s10}{I think of achieving something new and 92 | different.} 93 | \item{s11}{I seem to change my main objective in life.} 94 | \item{s12}{The mystery of life puzzles and disturbs me.} 95 | \item{s13}{I feel myself in need of a 'new lease on life'. } 96 | \item{s14}{Before I achieve one goal, I start out toward 97 | a different one.} 98 | \item{s15}{I feel the need for adventure and 'new worlds 99 | to conquer'.} 100 | \item{s16}{Over my lifetime I have felt a strong urge 101 | to find myself.} 102 | \item{s17}{On occasion I have thought that I had found 103 | what I was looking for in life, only to have it vanish later.} 104 | \item{s18}{I have been aware of an all-powerful and consuming 105 | purpose toward which my life has been directed.} 106 | \item{s19}{I have sensed a lack of a worthwhile job to do 107 | in life.} 108 | \item{s20}{I have felt a determination to achieve something 109 | far beyond the ordinary.} 110 | } 111 | } 112 | \usage{ 113 | data(meaningdata) 114 | } 115 | \description{ 116 | Study: This data includes three measures of meaning 117 | and purpose in life for exploring latent variables 118 | or multi-trait multi-methods analysis. 119 | } 120 | \details{ 121 | The Meaning in Life Questionnaire is scored from 1 122 | absolutely untrue to 7 absolutely true. These items 123 | are marked with an M in the dataset. The Purpose in 124 | Life Questionnaire is scaled from 1 to 7 varying by 125 | the question and is marked P in the dataset. Last, 126 | the Seeking of Noetic Goals scale ranges from 1 never 127 | to 7 constantly and is marked with an S in the dataset.
128 | } 129 | \keyword{datasets} 130 | -------------------------------------------------------------------------------- /man/mirtdata.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/mirtdata-data.R 3 | \docType{data} 4 | \name{mirtdata} 5 | \alias{mirtdata} 6 | \title{Polytomous IRT Practice Data} 7 | \format{ 8 | A data frame with 171 rows and 15 variables. 9 | } 10 | \usage{ 11 | data(mirtdata) 12 | } 13 | \description{ 14 | Study: This dataset includes 15 questions that are scored 15 | from 1 to 7 to use for polytomous IRT examples. One 16 | would indicate a low score on the latent trait, while 17 | seven would indicate a higher score on the latent 18 | trait (if the scale works!). 19 | } 20 | \keyword{datasets} 21 | -------------------------------------------------------------------------------- /man/resdata.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/resdata-data.R 3 | \docType{data} 4 | \name{resdata} 5 | \alias{resdata} 6 | \title{Multigroup CFA Practice Data} 7 | \format{ 8 | A data frame with 516 rows and 16 variables. 9 | 10 | \describe{ 11 | \item{Sex}{A variable for gender where 1 is 12 | male, 2 is female, and 3 is other/na.} 13 | \item{Ethnicity}{A variable for ethnicity 14 | coded as 1 as Black, 2 as White, and 3 as 15 | other/na.} 16 | \item{RS1}{I usually manage one way or 17 | another.} 18 | \item{RS2}{I feel proud that I have accomplished 19 | things in life.} 20 | \item{RS3}{I usually take things in stride.} 21 | \item{RS4}{I am friends with myself.} 22 | \item{RS5}{I feel that I can handle many 23 | things at a time.} 24 | \item{RS6}{I am determined.} 25 | \item{RS7}{I can get through difficult times 26 | because I’ve experienced difficulty before.} 27 | \item{RS8}{I have self-discipline.} 28 | \item{RS9}{I keep interested in things.} 29 | \item{RS10}{I can usually find something to 30 | laugh about.} 31 | \item{RS11}{My belief in myself gets me through 32 | hard times.} 33 | \item{RS12}{In an emergency, I’m someone people 34 | can generally rely on.} 35 | \item{RS13}{My life has meaning.} 36 | \item{RS14}{When I’m in a difficult situation, I 37 | can usually find my way out of it.} 38 | } 39 | } 40 | \usage{ 41 | data(resdata) 42 | } 43 | \description{ 44 | Study: This dataset has data on gender, ethnicity, 45 | and a resiliency scale for practicing factor analysis 46 | and other structural equation modeling topics 47 | like multigroup CFA. 48 | } 49 | \details{ 50 | The instructions were: 51 | 52 | Please read the following statements. To the right of 53 | each you will find seven numbers, ranging from "1" 54 | (Strongly Disagree) on the left to "7" (Strongly Agree) 55 | on the right. Circle the number which best indicates 56 | your feelings about that statement. For example, if 57 | you strongly disagree with a statement, circle "1". 58 | If you are neutral, circle "4", and if you 59 | strongly agree, circle "7", etc. 
60 | 61 | Scale: strongly disagree, moderately disagree, 62 | somewhat disagree, neutral, somewhat agree, 63 | moderately agree, strongly agree 64 | } 65 | \keyword{datasets} 66 | -------------------------------------------------------------------------------- /setup/learnSEM_setup.R: -------------------------------------------------------------------------------- 1 | library(devtools) 2 | library(usethis) 3 | 4 | #usethis::use_dev_package("lavaan") 5 | #usethis::use_dev_package("semPlot") 6 | #usethis::use_dev_package("rio") 7 | #usethis::use_dev_package("learnr") 8 | #usethis::use_dev_package("learnrhash") 9 | #usethis::use_dev_package("corrplot") 10 | #usethis::use_dev_package("shiny") 11 | #usethis::use_dev_package("psych") 12 | #usethis::use_dev_package("GPArotation") 13 | #usethis::use_dev_package("parameters") 14 | #usethis::use_dev_package("broom") 15 | #usethis::use_dev_package("mirt") 16 | #usethis::use_dev_package("ltm") 17 | #usethis::use_dev_package("MOTE") 18 | 19 | #usethis::build_readme() 20 | 21 | library(rio) 22 | #efa <- import("data/assignment_efa.csv") 23 | #usethis::use_data(mirtdata, overwrite = T) 24 | 25 | #usethis::use_tutorial("irt", "Item Response Theory", open = interactive()) 26 | 27 | 28 | library(roxygen2) 29 | roxygenize() 30 | devtools::check() 31 | tools::buildVignettes(dir = ".", tangle=TRUE) 32 | 33 | #dir.create("inst") 34 | dir.create("inst/doc") 35 | file.copy(dir("vignettes", full.names=TRUE), "inst/doc", overwrite=TRUE) 36 | 37 | devtools::install_github("doomlab/learnSEM") 38 | -------------------------------------------------------------------------------- /vignettes/lecture_cfa.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F, message = F, warning = F----------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | library(lavaan) 10 | library(semPlot) 11 | 12 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 13 | knitr::include_graphics("pictures/diagram_sem.png") 14 | 15 | ## ------------------------------------------------------- 16 | # a famous example, build the model 17 | HS.model <- ' visual =~ x1 + x2 + x3 18 | textual =~ x4 + x5 + x6 19 | speed =~ x7 + x8 + x9 ' 20 | 21 | # fit the model 22 | HS.fit <- cfa(HS.model, data = HolzingerSwineford1939) 23 | 24 | # diagram the model 25 | semPaths(HS.fit, 26 | whatLabels = "std", 27 | layout = "tree", 28 | edge.label.cex = 1) 29 | 30 | ## ------------------------------------------------------- 31 | # a famous example, build the model 32 | HS.model <- ' visual <~ x1 + x2 + x3' 33 | 34 | # fit the model 35 | HS.fit <- cfa(HS.model, data = HolzingerSwineford1939) 36 | 37 | # diagram the model 38 | semPaths(HS.fit, 39 | whatLabels = "std", 40 | layout = "tree", 41 | edge.label.cex = 1) 42 | 43 | ## ------------------------------------------------------- 44 | wisc4.cor <- lav_matrix_lower2full(c(1, 45 | 0.72,1, 46 | 0.64,0.63,1, 47 | 0.51,0.48,0.37,1, 48 | 0.37,0.38,0.38,0.38,1)) 49 | # enter the SDs 50 | wisc4.sd <- c(3.01 , 3.03 , 2.99 , 2.89 , 2.98) 51 | 52 | # give everything names 53 | colnames(wisc4.cor) <- 54 | rownames(wisc4.cor) <- 55 | names(wisc4.sd) <- 56 | c("Information", "Similarities", 57 | "Word.Reasoning", "Matrix.Reasoning", "Picture.Concepts") 58 | 59 | # convert 60 | wisc4.cov <- cor2cov(wisc4.cor, wisc4.sd) 61 | 62 | ## ------------------------------------------------------- 63 
| wisc4.model <- ' 64 | g =~ Information + Similarities + Word.Reasoning + Matrix.Reasoning + Picture.Concepts 65 | ' 66 | 67 | ## ------------------------------------------------------- 68 | wisc4.fit <- cfa(model = wisc4.model, 69 | sample.cov = wisc4.cov, 70 | sample.nobs = 550, 71 | std.lv = FALSE) 72 | 73 | ## ------------------------------------------------------- 74 | summary(wisc4.fit, 75 | standardized=TRUE, 76 | rsquare = TRUE, 77 | fit.measures=TRUE) 78 | 79 | ## ------------------------------------------------------- 80 | parameterestimates(wisc4.fit, 81 | standardized=TRUE) 82 | 83 | ## ------------------------------------------------------- 84 | fitted(wisc4.fit) ## estimated covariances 85 | wisc4.cov ## actual covariances 86 | 87 | ## ------------------------------------------------------- 88 | fitmeasures(wisc4.fit) 89 | 90 | ## ------------------------------------------------------- 91 | modificationindices(wisc4.fit, sort = T) 92 | 93 | ## ------------------------------------------------------- 94 | semPaths(wisc4.fit, 95 | whatLabels="std", 96 | what = "std", 97 | layout ="tree", 98 | edge.color = "blue", 99 | edge.label.cex = 1) 100 | 101 | ## ------------------------------------------------------- 102 | wisc4.model2 <- ' 103 | V =~ Information + Similarities + Word.Reasoning 104 | F =~ Matrix.Reasoning + Picture.Concepts 105 | ' 106 | 107 | # wisc4.model2 <- ' 108 | # V =~ Information + Similarities + Word.Reasoning 109 | # F =~ a*Matrix.Reasoning + a*Picture.Concepts 110 | # ' 111 | 112 | ## ------------------------------------------------------- 113 | wisc4.fit2 <- cfa(wisc4.model2, 114 | sample.cov=wisc4.cov, 115 | sample.nobs=550, 116 | std.lv = F) 117 | 118 | ## ------------------------------------------------------- 119 | summary(wisc4.fit2, 120 | standardized=TRUE, 121 | rsquare = TRUE, 122 | fit.measures=TRUE) 123 | 124 | ## ------------------------------------------------------- 125 | semPaths(wisc4.fit2, 126 | whatLabels="std", 127 | what = "std", 128 | edge.color = "pink", 129 | edge.label.cex = 1, 130 | layout="tree") 131 | 132 | ## ------------------------------------------------------- 133 | anova(wisc4.fit, wisc4.fit2) 134 | fitmeasures(wisc4.fit, c("aic", "ecvi")) 135 | fitmeasures(wisc4.fit2, c("aic", "ecvi")) 136 | 137 | ## ------------------------------------------------------- 138 | #install.packages("parameters") 139 | library(parameters) 140 | model_parameters(wisc4.fit, standardize = TRUE) 141 | 142 | ## ------------------------------------------------------- 143 | library(broom) 144 | tidy(wisc4.fit) 145 | glance(wisc4.fit) 146 | 147 | -------------------------------------------------------------------------------- /vignettes/lecture_data_screen.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----setup, include=FALSE------------------------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | 10 | ## ------------------------------------------------------- 11 | library(rio) 12 | master <- import("data/lecture_data_screen.csv") 13 | names(master) 14 | 15 | ## ------------------------------------------------------- 16 | #summary(master) 17 | table(master$JOL_group) 18 | 19 | table(master$type_cue) 20 | 21 | ## ------------------------------------------------------- 22 | no_typos <- master 23 | no_typos$JOL_group <- factor(no_typos$JOL_group, 24 | levels = c("delayed", 
"immediate"), 25 | labels = c("Delayed", "Immediate")) 26 | 27 | no_typos$type_cue <- factor(no_typos$type_cue, 28 | levels = c("cue only", "stimulus pairs"), 29 | labels = c("Cue Only", "Stimulus Pairs")) 30 | 31 | ## ------------------------------------------------------- 32 | summary(no_typos) 33 | 34 | ## ------------------------------------------------------- 35 | # how did I get 3:22? 36 | # how did I get the rule? 37 | # what should I do? 38 | no_typos[ , 3:22][ no_typos[ , 3:22] > 100 ] 39 | 40 | no_typos[ , 3:22][ no_typos[ , 3:22] > 100 ] <- NA 41 | 42 | no_typos[ , 3:22][ no_typos[ , 3:22] < 0 ] <- NA 43 | 44 | ## ------------------------------------------------------- 45 | no_missing <- no_typos 46 | summary(no_missing) 47 | 48 | ## ------------------------------------------------------- 49 | percent_missing <- function(x){sum(is.na(x))/length(x) * 100} 50 | missing <- apply(no_missing, 1, percent_missing) 51 | table(missing) 52 | 53 | ## ------------------------------------------------------- 54 | replace_rows <- subset(no_missing, missing <= 5) 55 | no_rows <- subset(no_missing, missing > 5) 56 | 57 | ## ------------------------------------------------------- 58 | missing <- apply(replace_rows, 2, percent_missing) 59 | table(missing) 60 | 61 | replace_columns <- replace_rows[ , 3:22] 62 | no_columns <- replace_rows[ , 1:2] 63 | 64 | ## ------------------------------------------------------- 65 | library(mice) 66 | tempnomiss <- mice(replace_columns) 67 | 68 | ## ------------------------------------------------------- 69 | fixed_columns <- complete(tempnomiss) 70 | all_columns <- cbind(no_columns, fixed_columns) 71 | all_rows <- rbind(all_columns, no_rows) 72 | nrow(no_missing) 73 | nrow(all_rows) 74 | 75 | ## ------------------------------------------------------- 76 | mahal <- mahalanobis(all_columns[ , -c(1,2)], #take note here 77 | colMeans(all_columns[ , -c(1,2)], na.rm=TRUE), 78 | cov(all_columns[ , -c(1,2)], use ="pairwise.complete.obs")) 79 | 80 | cutoff <- qchisq(p = 1 - .001, #1 minus alpha 81 | df = ncol(all_columns[ , -c(1,2)])) # number of columns 82 | 83 | ## ------------------------------------------------------- 84 | cutoff 85 | 86 | summary(mahal < cutoff) #notice the direction 87 | 88 | no_outliers <- subset(all_columns, mahal < cutoff) 89 | 90 | ## ------------------------------------------------------- 91 | library(corrplot) 92 | corrplot(cor(no_outliers[ , -c(1,2)])) 93 | 94 | ## ------------------------------------------------------- 95 | random_variable <- rchisq(nrow(no_outliers), 7) 96 | fake_model <- lm(random_variable ~ ., 97 | data = no_outliers[ , -c(1,2)]) 98 | standardized <- rstudent(fake_model) 99 | fitvalues <- scale(fake_model$fitted.values) 100 | 101 | ## ------------------------------------------------------- 102 | plot(fake_model, 2) 103 | 104 | ## ------------------------------------------------------- 105 | hist(standardized) 106 | 107 | ## ------------------------------------------------------- 108 | {plot(standardized, fitvalues) 109 | abline(v = 0) 110 | abline(h = 0) 111 | } 112 | 113 | -------------------------------------------------------------------------------- /vignettes/lecture_data_screen.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Data Screening" 3 | output: rmarkdown::slidy_presentation 4 | description: > 5 | This vignette includes the lecture slides data screening for SEM (part 2). 
6 | vignette: > 7 | %\VignetteIndexEntry{"Data Screening"} 8 | %\VignetteEngine{knitr::rmarkdown} 9 | %\VignetteEncoding{UTF-8} 10 | --- 11 | 12 | ```{r, include = FALSE} 13 | knitr::opts_chunk$set( 14 | collapse = TRUE, 15 | comment = "#>" 16 | ) 17 | ``` 18 | 19 | ```{r setup, include=FALSE} 20 | knitr::opts_chunk$set(echo = TRUE) 21 | ``` 22 | 23 | ## Data Screening Overview 24 | 25 | - In this lecture, we will give you a demonstration of how you might screen a dataset for structural equation modeling. 26 | - There are four key steps: 27 | 28 | - Accuracy: dealing with errors 29 | - Missing: dealing with missing data 30 | - Outliers: determining if there are outliers and what to do with them 31 | - Assumptions: additivity, multivariate normality, linearity, homogeneity, and homoscedasticity 32 | 33 | - Note that the type of data screening may change depending on the type of data you have (e.g., ordinal data has different assumptions) 34 | - Mostly, we will focus on datasets with traditional parametric assumptions 35 | 36 | ## Hypothesis Testing versus Data Screening 37 | 38 | - Generally, we set an $\alpha$ value, or Type I error rate 39 | - Often, this translates to "statistical significance", *p* < $\alpha$ = significant, where $\alpha$ is often defined as .05 40 | - In data screening, we want things to be very unusual before correcting or eliminating them 41 | - Therefore, we will often lower our criterion and use *p* < $\alpha$ to denote problems with the data, where $\alpha$ is lowered to .001 42 | 43 | ## Order is Important 44 | 45 | - While data screening can be performed in many ways, it's important to know that you should fix errors, missing data, etc. before checking assumptions 46 | - The changes you make affect the next steps 47 | 48 | ## An Example 49 | 50 | - We will learn about data screening by working through an example 51 | - This dataset is made-up data in which people judged their own learning in different experimental conditions: they rated their confidence in remembering information, and then we measured their actual memory 52 | 53 | ## Import the Data 54 | 55 | ```{r} 56 | library(rio) 57 | master <- import("data/lecture_data_screen.csv") 58 | names(master) 59 | ``` 60 | 61 | ## Accuracy 62 | 63 | - Use the `summary()` and `table()` functions to examine the dataset. 64 | - Categorical data: Are the labels right? Should this variable be factored? 65 | - Continuous data: is the min/max of the data correct? Are the data scored correctly? 66 | 67 | ## Accuracy Categorical 68 | 69 | ```{r} 70 | #summary(master) 71 | table(master$JOL_group) 72 | 73 | table(master$type_cue) 74 | ``` 75 | 76 | ## Accuracy Categorical 77 | 78 | ```{r} 79 | no_typos <- master 80 | no_typos$JOL_group <- factor(no_typos$JOL_group, 81 | levels = c("delayed", "immediate"), 82 | labels = c("Delayed", "Immediate")) 83 | 84 | no_typos$type_cue <- factor(no_typos$type_cue, 85 | levels = c("cue only", "stimulus pairs"), 86 | labels = c("Cue Only", "Stimulus Pairs")) 87 | ``` 88 | 89 | ## Accuracy Continuous 90 | 91 | - Confidence and recall should only be between 0 and 100. 92 | - Looks like we have some data to clean up. 93 | 94 | ```{r} 95 | summary(no_typos) 96 | ``` 97 | 98 | ## Accuracy Continuous 99 | 100 | ```{r} 101 | # how did I get 3:22? 102 | # how did I get the rule? 103 | # what should I do?
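# A few hints on the questions above (my reading of the output, not part of
# the original lecture): columns 3:22 hold the confidence and recall items,
# the only variables bounded at 0 to 100, so the rule below flags impossible
# scores; assigning NA turns those typos into missing data for the next step.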
104 | no_typos[ , 3:22][ no_typos[ , 3:22] > 100 ] 105 | 106 | no_typos[ , 3:22][ no_typos[ , 3:22] > 100 ] <- NA 107 | 108 | no_typos[ , 3:22][ no_typos[ , 3:22] < 0 ] <- NA 109 | ``` 110 | 111 | ## Missing 112 | 113 | - There are two main types of missing data: 114 | 115 | - Missing not at random: when data is missing because of a common cause (i.e., everyone skipped question five) 116 | - Missing completely at random: data is randomly missing, potentially due to computer or human error 117 | 118 | - We also have to distinguish between missing data and incomplete data 119 | 120 | ```{r} 121 | no_missing <- no_typos 122 | summary(no_missing) 123 | ``` 124 | 125 | ## Missing Rows 126 | 127 | ```{r} 128 | percent_missing <- function(x){sum(is.na(x))/length(x) * 100} 129 | missing <- apply(no_missing, 1, percent_missing) 130 | table(missing) 131 | ``` 132 | 133 | ## Missing Replacement 134 | 135 | - How much data can I safely replace? 136 | 137 | - Replace only things that make sense. 138 | - Replace as minimal as possible, often less than 5% 139 | - Replace based on completion/missingness type 140 | 141 | ```{r} 142 | replace_rows <- subset(no_missing, missing <= 5) 143 | no_rows <- subset(no_missing, missing > 5) 144 | ``` 145 | 146 | ## Missing Columns 147 | 148 | - Separate out columns that you should not replace 149 | - Make sure columns have less than 5% missing for replacement 150 | 151 | ```{r} 152 | missing <- apply(replace_rows, 2, percent_missing) 153 | table(missing) 154 | 155 | replace_columns <- replace_rows[ , 3:22] 156 | no_columns <- replace_rows[ , 1:2] 157 | ``` 158 | 159 | ## Missing Replacement 160 | 161 | ```{r} 162 | library(mice) 163 | tempnomiss <- mice(replace_columns) 164 | ``` 165 | 166 | ## Missing Put Together 167 | 168 | ```{r} 169 | fixed_columns <- complete(tempnomiss) 170 | all_columns <- cbind(no_columns, fixed_columns) 171 | all_rows <- rbind(all_columns, no_rows) 172 | nrow(no_missing) 173 | nrow(all_rows) 174 | ``` 175 | 176 | ## Outliers 177 | 178 | - We will mostly be concerned with multivariate outliers in SEM. 179 | - These are rows of data (participants) who have extremely weird patterns of scores when compared to everyone else. 180 | - We will use Mahalanobis Distance to examine each row to determine if they are an outlier 181 | 182 | - This score *D* is the distance from the centriod or mean of means 183 | - We will use a cutoff score based on our strict screening criterion, *p* < .001 to determine if they are an outlier 184 | - This cutoff criterion is based on *the number of variables* rather than the *number of observations* 185 | 186 | ## Outliers Mahalanobis 187 | 188 | ```{r} 189 | mahal <- mahalanobis(all_columns[ , -c(1,2)], #take note here 190 | colMeans(all_columns[ , -c(1,2)], na.rm=TRUE), 191 | cov(all_columns[ , -c(1,2)], use ="pairwise.complete.obs")) 192 | 193 | cutoff <- qchisq(p = 1 - .001, #1 minus alpha 194 | df = ncol(all_columns[ , -c(1,2)])) # number of columns 195 | ``` 196 | 197 | ## Outliers Mahalanobis 198 | 199 | - Do outliers really matter in a SEM analysis though? 
200 |
201 | ```{r}
202 | cutoff
203 |
204 | summary(mahal < cutoff) #notice the direction
205 |
206 | no_outliers <- subset(all_columns, mahal < cutoff)
207 | ```
208 |
209 | ## Assumptions Additivity
210 |
211 | - Additivity is the assumption that each variable adds something to the model
212 | - You basically do not want to use the same variable twice, as that lowers power
213 | - Often this is described as multicollinearity
214 | - A SEM analysis usually has a lot of correlated variables; you just want to make sure they aren't perfectly correlated
215 |
216 | ## Assumptions Additivity
217 |
218 | ```{r}
219 | library(corrplot)
220 | corrplot(cor(no_outliers[ , -c(1,2)]))
221 | ```
222 |
223 | ## Assumptions Set Up
224 |
225 | ```{r}
226 | random_variable <- rchisq(nrow(no_outliers), 7) # a fake, random DV
227 | fake_model <- lm(random_variable ~ .,
228 |                  data = no_outliers[ , -c(1,2)]) # all predictors
229 | standardized <- rstudent(fake_model) # studentized residuals
230 | fitvalues <- scale(fake_model$fitted.values) # standardized fitted values
231 | ```
232 |
233 | ## Assumptions Linearity
234 |
235 | - We assume that the multivariate relationships between continuous variables are linear (i.e., not curved)
236 | - There are many ways to test this, but we can use a QQ plot of the residuals to examine linearity
237 |
238 | ```{r}
239 | plot(fake_model, 2)
240 | ```
241 |
242 | ## Assumptions Normality
243 |
244 | - We expect that the residuals are normally distributed
245 | - Not that the *sample* is normally distributed
246 | - Generally, SEM requires a large sample size, which buffers against deviations from normality
247 |
248 | ```{r}
249 | hist(standardized)
250 | ```
251 |
252 | ## Assumptions Homogeneity + Homoscedasticity
253 |
254 | - These assumptions are about equality of the variances
255 | - We assume equal variances between groups for things like t-tests and ANOVA
256 | - Here, the assumption is an equal spread of variance across the predicted values
257 |
258 | ```{r}
259 | {plot(standardized, fitvalues)
260 |  abline(v = 0)
261 |  abline(h = 0)
262 | }
263 | ```
264 |
265 | ## Recap
266 |
267 | - We have completed a data screening checkup for our dataset
268 | - Any problems should be noted, and we will discuss how to handle some of these issues as relevant to SEM analysis
269 | - Let's check out the assignment!
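- For reference, a compact summary of the screening pipeline and the object created at each step (all from the code above):

```{r}
# 1. accuracy: no_typos (factored labels, impossible values set to NA)
# 2. missing: all_rows / all_columns (mice imputation on <= 5% missing)
# 3. outliers: no_outliers (Mahalanobis distance, chi-square cutoff)
# 4. assumptions: fake_model residuals (additivity, linearity,
#    normality, homogeneity/homoscedasticity)
```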
270 | -------------------------------------------------------------------------------- /vignettes/lecture_efa.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F------------------------------------------- 8 | options(scipen = 999) 9 | knitr::opts_chunk$set(echo = TRUE) 10 | 11 | ## ----echo = F, warning = F, message = F----------------- 12 | library(lavaan) 13 | library(semPlot) 14 | HS.model <- ' visual =~ x1 + x2 + x3 15 | textual =~ x4 + x5 + x6 16 | speed =~ x7 + x8 + x9 ' 17 | 18 | fit <- cfa(HS.model, data = HolzingerSwineford1939) 19 | semPaths(fit, 20 | whatLabels = "std", 21 | edge.label.cex = 1) 22 | 23 | ## ----echo = F, warning = F, message = F----------------- 24 | library(lavaan) 25 | library(semPlot) 26 | HS.model <- ' visual =~ x1 + x2 + x3 27 | textual =~ x4 + x5 + x6 28 | speed =~ x7 + x8 + x9 ' 29 | 30 | fit <- cfa(HS.model, data = HolzingerSwineford1939) 31 | semPaths(fit, 32 | whatLabels = "std", 33 | edge.label.cex = 1) 34 | 35 | ## ----message = F---------------------------------------- 36 | library(rio) 37 | library(psych) 38 | master <- import("data/lecture_efa.csv") 39 | head(master) 40 | 41 | ## ----scree, echo=FALSE, out.height="500px", out.width="800px", fig.align="center"---- 42 | knitr::include_graphics("pictures/scree.png") 43 | 44 | ## ------------------------------------------------------- 45 | number_items <- fa.parallel(master, #data frame 46 | fm="ml", #math 47 | fa="fa") #only efa 48 | 49 | ## ------------------------------------------------------- 50 | 51 | sum(number_items$fa.values > 1) 52 | sum(number_items$fa.values > .7) 53 | 54 | ## ----rotation, echo=FALSE, out.height="500px", out.width="800px", fig.align="center"---- 55 | knitr::include_graphics("pictures/rotate.png") 56 | 57 | ## ------------------------------------------------------- 58 | EFA_fit <- fa(master, #data 59 | nfactors = 2, #number of factors 60 | rotate = "oblimin", #rotation 61 | fm = "ml") #math 62 | 63 | ## ------------------------------------------------------- 64 | EFA_fit 65 | 66 | ## ------------------------------------------------------- 67 | EFA_fit2 <- fa(master[ , -23], #data 68 | nfactors = 2, #number of factors 69 | rotate = "oblimin", #rotation 70 | fm = "ml") #math 71 | 72 | EFA_fit2 73 | 74 | ## ------------------------------------------------------- 75 | fa.plot(EFA_fit2, 76 | labels = colnames(master[ , -23])) 77 | 78 | ## ------------------------------------------------------- 79 | fa.diagram(EFA_fit2) 80 | 81 | ## ------------------------------------------------------- 82 | EFA_fit2$rms #Root mean square of the residuals 83 | EFA_fit2$RMSEA #root mean squared error of approximation 84 | EFA_fit2$TLI #tucker lewis index 85 | 1 - ((EFA_fit2$STATISTIC-EFA_fit2$dof)/ 86 | (EFA_fit2$null.chisq-EFA_fit2$null.dof)) #CFI 87 | 88 | ## ------------------------------------------------------- 89 | factor1 = c(1:7, 9:10, 12:16, 18:22) 90 | factor2 = c(8, 11, 17) 91 | ##we use the psych::alpha to make sure that R knows we want the alpha function from the psych package. 92 | ##ggplot2 has an alpha function and if we have them both open at the same time 93 | ##you will sometimes get a color error without this :: information. 
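##check.keys = T automatically reverse-scores items that appear
##negatively keyed and warns you when it does so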
94 | psych::alpha(master[, factor1], check.keys = T) 95 | psych::alpha(master[, factor2], check.keys = T) 96 | 97 | -------------------------------------------------------------------------------- /vignettes/lecture_introR.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----setup, include=FALSE------------------------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | 10 | ## ------------------------------------------------------- 11 | X <- 4 12 | 13 | ## ------------------------------------------------------- 14 | library(palmerpenguins) 15 | data(penguins) 16 | attributes(penguins) 17 | 18 | ## ------------------------------------------------------- 19 | str(penguins) 20 | 21 | names(penguins) #ls(penguins) provides this as well 22 | 23 | ## ------------------------------------------------------- 24 | X 25 | 26 | ## ------------------------------------------------------- 27 | penguins$species 28 | 29 | ## ------------------------------------------------------- 30 | A <- 1:20 31 | A 32 | 33 | B <- seq(from = 1, to = 20, by = 1) 34 | B 35 | 36 | C <- c("cheese", "is", "great") 37 | C 38 | 39 | D <- rep(1, times = 30) 40 | D 41 | 42 | ## ------------------------------------------------------- 43 | class(A) 44 | class(C) 45 | class(penguins) 46 | class(penguins$species) 47 | 48 | ## ------------------------------------------------------- 49 | dim(penguins) #rows, columns 50 | length(penguins) 51 | length(penguins$species) 52 | 53 | ## ------------------------------------------------------- 54 | output <- lm(flipper_length_mm ~ bill_length_mm, data = penguins) 55 | str(output) 56 | output$coefficients 57 | 58 | ## ------------------------------------------------------- 59 | myMatrix <- matrix(data = 1:10, 60 | nrow = 5, 61 | ncol = 2) 62 | myMatrix 63 | 64 | ## ------------------------------------------------------- 65 | penguins[1, 2:3] 66 | penguins$sex[4:25] #why no comma? 67 | 68 | ## ------------------------------------------------------- 69 | X <- 1:5 70 | Y <- 6:10 71 | # I can use either because they are the same size 72 | cbind(X,Y) 73 | rbind(X,Y) 74 | 75 | ## ------------------------------------------------------- 76 | ls() 77 | ls(penguins) 78 | 79 | ## ------------------------------------------------------- 80 | newDF <- as.data.frame(cbind(X,Y)) 81 | str(newDF) 82 | as.numeric(c("one", "two", "3")) 83 | 84 | ## ------------------------------------------------------- 85 | penguins[1:2,] #just the first two rows 86 | penguins[penguins$bill_length_mm > 54 , ] #how does this work? 87 | penguins$bill_length_mm > 54 88 | 89 | ## ------------------------------------------------------- 90 | #you can create complex rules 91 | penguins[penguins$bill_length_mm > 54 & penguins$bill_depth_mm > 17, ] 92 | #you can do all BUT 93 | penguins[ , -1] 94 | #grab a few columns by name 95 | vars <- c("bill_length_mm", "sex") 96 | penguins[ , vars] 97 | 98 | ## ------------------------------------------------------- 99 | #another function 100 | #notice any differences? 
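#subset() silently drops rows where the condition is NA, while the
#bracket approach above returns those rows filled with NA values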
101 | subset(penguins, bill_length_mm > 54) 102 | #other functions include filter() in tidyverse 103 | 104 | ## ------------------------------------------------------- 105 | head(complete.cases(penguins)) #creates logical 106 | head(na.omit(penguins)) #creates actual rows 107 | head(is.na(penguins$body_mass_g)) #for individual vectors 108 | 109 | ## ------------------------------------------------------- 110 | getwd() 111 | 112 | ## ----eval = F------------------------------------------- 113 | # setwd("/Users/buchanan/OneDrive - Harrisburg University/Teaching/ANLY 580/updated/1 Introduction R") 114 | 115 | ## ------------------------------------------------------- 116 | library(rio) 117 | myDF <- import("data/assignment_introR.csv") 118 | head(myDF) 119 | 120 | ## ----eval = F------------------------------------------- 121 | # install.packages("car") 122 | 123 | ## ------------------------------------------------------- 124 | library(car) 125 | 126 | ## ----eval = F------------------------------------------- 127 | # ?lm 128 | # help(lm) 129 | 130 | ## ------------------------------------------------------- 131 | args(lm) 132 | example(lm) 133 | 134 | ## ------------------------------------------------------- 135 | pizza <- function(x){ x^2 } 136 | pizza(3) 137 | 138 | ## ------------------------------------------------------- 139 | table(penguins$species) 140 | summary(penguins$bill_length_mm) 141 | 142 | ## ------------------------------------------------------- 143 | mean(penguins$bill_length_mm) #returns NA 144 | mean(penguins$bill_length_mm, na.rm = TRUE) 145 | 146 | cor(penguins[ , c("bill_length_mm", "bill_depth_mm", "flipper_length_mm")]) 147 | cor(penguins[ , c("bill_length_mm", "bill_depth_mm", "flipper_length_mm")], 148 | use = "pairwise.complete.obs") 149 | 150 | -------------------------------------------------------------------------------- /vignettes/lecture_irt.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F, message = F, warning = F----------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | library(lavaan) 10 | library(semPlot) 11 | 12 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 13 | knitr::include_graphics("pictures/icc_example.png") 14 | 15 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 16 | knitr::include_graphics("pictures/item_difficulty.png") 17 | 18 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 19 | knitr::include_graphics("pictures/ability.png") 20 | 21 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 22 | knitr::include_graphics("pictures/ability.png") 23 | 24 | ## ------------------------------------------------------- 25 | library(ltm) 26 | library(mirt) 27 | data(LSAT) 28 | head(LSAT) 29 | 30 | ## ------------------------------------------------------- 31 | # Data frame name ~ z1 for one latent variable 32 | #irt.param to give it to you standardized 33 | LSAT.model <- ltm(LSAT ~ z1, 34 | IRT.param = TRUE) 35 | 36 | ## ------------------------------------------------------- 37 | coef(LSAT.model) 38 | 39 | ## ------------------------------------------------------- 40 | plot(LSAT.model, type = "ICC") ## all items at once 41 | 42 | ## ------------------------------------------------------- 43 | plot(LSAT.model, type = "IIC", items = 0) ## Test Information Function 44 | 45 | ## 
------------------------------------------------------- 46 | factor.scores(LSAT.model) 47 | 48 | ## ------------------------------------------------------- 49 | LSAT.model2 <- tpm(LSAT, #dataset 50 | type = "latent.trait", 51 | IRT.param = TRUE) 52 | 53 | ## ------------------------------------------------------- 54 | coef(LSAT.model2) 55 | 56 | ## ------------------------------------------------------- 57 | plot(LSAT.model2, type = "ICC") ## all items at once 58 | 59 | ## ------------------------------------------------------- 60 | plot(LSAT.model2, type = "IIC", items = 0) ## Test Information Function 61 | 62 | ## ------------------------------------------------------- 63 | factor.scores(LSAT.model2) 64 | 65 | ## ------------------------------------------------------- 66 | anova(LSAT.model, LSAT.model2) 67 | 68 | ## ------------------------------------------------------- 69 | library(rio) 70 | poly.data <- import("data/lecture_irt.csv") 71 | poly.data <- na.omit(poly.data) 72 | 73 | #reverse code 74 | poly.data$Q99_9 = 8 - poly.data$Q99_9 75 | 76 | #separate factors 77 | poly.data1 = poly.data[ , c(1, 4, 5, 6, 9)] 78 | poly.data2 = poly.data[ , c(2, 3, 7, 8, 10)] 79 | 80 | ## ------------------------------------------------------- 81 | gpcm.model1 <- mirt(data = poly.data1, #data 82 | model = 1, #number of factors 83 | itemtype = "gpcm") #poly model type 84 | 85 | ## ------------------------------------------------------- 86 | summary(gpcm.model1) ##standardized coefficients 87 | 88 | ## ------------------------------------------------------- 89 | coef(gpcm.model1, IRTpars = T) ##coefficients 90 | 91 | head(fscores(gpcm.model1)) ##factor scores 92 | 93 | ## ------------------------------------------------------- 94 | plot(gpcm.model1, type = "trace") ##curves for all items at once 95 | itemplot(gpcm.model1, 5, type = "trace") 96 | 97 | ## ------------------------------------------------------- 98 | itemplot(gpcm.model1, 4, type = "info") ##IIC for each item 99 | plot(gpcm.model1, type = "info") ##test information curve 100 | 101 | ## ------------------------------------------------------- 102 | plot(gpcm.model1) ##expected score curve 103 | 104 | -------------------------------------------------------------------------------- /vignettes/lecture_irt.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Item Response Theory" 3 | output: rmarkdown::slidy_presentation 4 | description: > 5 | This vignette includes the lecture slides for item response theory (part 12). 6 | vignette: > 7 | %\VignetteIndexEntry{"IRT"} 8 | %\VignetteEngine{knitr::rmarkdown} 9 | %\VignetteEncoding{UTF-8} 10 | --- 11 | 12 | ```{r, include = FALSE} 13 | knitr::opts_chunk$set( 14 | collapse = TRUE, 15 | comment = "#>" 16 | ) 17 | ``` 18 | 19 | ```{r echo = F, message = F, warning = F} 20 | knitr::opts_chunk$set(echo = TRUE) 21 | library(lavaan) 22 | library(semPlot) 23 | ``` 24 | 25 | ## Item Response Theory 26 | 27 | - What do you do if you have dichotomous (or categorical) manifest variables? 28 | - Many agree that more than four response options can be treated as continuous without a loss in power or interpretation. 29 | - Do you treat these values as categorical? 30 | - Do you assume the underlying latent variable is continuous? 
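- As a preview of the first option on the next slide, here is a minimal sketch of treating items as ordered in `lavaan` (assuming a hypothetical data frame `df` with Likert items `q1` through `q4`):

```{r eval = FALSE}
library(lavaan)
cat.model <- ' trait =~ q1 + q2 + q3 + q4 '
# declaring the items as ordered switches lavaan to polychoric
# correlations and a robust categorical estimator (WLSMV)
cat.fit <- cfa(cat.model,
               data = df,
               ordered = c("q1", "q2", "q3", "q4"))
```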
31 | 32 | ## Categorical Options 33 | 34 | - There are two approaches that allow us to analyze data with categorical predictors: 35 | - Item Factor Analysis 36 | - More traditional factor analysis approach using ordered responses 37 | - You can talk about item loading, eliminate bad questions, etc. 38 | - In the `lavaan` framework, you update your `cfa()` to include the `ordered` argument 39 | - Item Response Theory 40 | 41 | ## Item Response Theory 42 | 43 | - Classical test theory is considered "true score theory" 44 | - Any differences in responses are differences in ability or underlying trait 45 | - CTT focuses on reliability and item correlation type analysis 46 | - Cannot separate the test and person characteristics 47 | - IRT is considered more modern test theory focusing on the latent trait 48 | - Focuses on the item for *where* it measures a latent trait, discrimination, and guessing 49 | - Additionally, with more than two outcomes, we can examine ordering, response choice options, and more 50 | 51 | ## Issues 52 | 53 | - Unidimensionality: assumption is that there is one underlying trait or dimension you are measuring 54 | - You can run separate models for each dimension 55 | - There are multitrait options for IRT 56 | - Local Independence 57 | - After you control for the latent variable, the items are uncorrelated 58 | 59 | ## Item Response Theory 60 | 61 | - A simple example of test versus person 62 | - 3 item questionnaire 63 | - Yes/no scaling 64 | - 8 response patterns 65 | - Four total scores (0, 1, 2, 3) 66 | 67 | ## Item Response Theory 68 | 69 | - Item characteristic curves (ICCs) 70 | - The log probability curve of theta and the probability of a correct response 71 | 72 | ```{r echo=FALSE, out.width = "75%", fig.align="center"} 73 | knitr::include_graphics("pictures/icc_example.png") 74 | ``` 75 | 76 | ## Item Response Theory 77 | 78 | - Theta – ability or the underlying latent variable score 79 | - b – Item location – where the probability of getting an item correct is 50/50 80 | - Also considered where the item performs best 81 | - Can be thought of as item difficulty 82 | 83 | ```{r echo=FALSE, out.width = "75%", fig.align="center"} 84 | knitr::include_graphics("pictures/item_difficulty.png") 85 | ``` 86 | 87 | ## Item Response Theory 88 | 89 | - a – item discrimination 90 | - Tells you how well an item measures the latent variable 91 | - Larger a values indicate better items 92 | 93 | ```{r echo=FALSE, out.width = "75%", fig.align="center"} 94 | knitr::include_graphics("pictures/ability.png") 95 | ``` 96 | 97 | ## Item Response Theory 98 | 99 | - c – guessing parameter 100 | - The lower level likelihood of getting the item correct 101 | 102 | ```{r echo=FALSE, out.width = "75%", fig.align="center"} 103 | knitr::include_graphics("pictures/ability.png") 104 | ``` 105 | 106 | ## Item Response Theory 107 | 108 | - 1 Parameter Logistic (1PL) 109 | - Also known as the Rasch Model 110 | - Only uses b 111 | - 2 Parameter Logistic (2PL) 112 | - Uses b and a 113 | - 3 Parameter Logistic (3PL) 114 | - Uses b, a, and c 115 | 116 | ## Polytomous IRT 117 | 118 | - A large portion of IRT focuses on dichotomous data (yes/no, correct/incorrect) 119 | - Scoring is easier because you have "right" and "wrong" answers 120 | - Separately, polytomous IRT focuses on data with multiple answers, with no "right" answer 121 | - Focus on ordering, meaning that low scores represent lower abilities, while high scores are higher abilities 122 | - Likert type scales 123 | 124 | ## Polytomous IRT 
125 |
126 | - A couple of types of models:
127 |     - Graded Response Model
128 |     - Generalized Partial Credit Model
129 |     - Partial Credit Model
130 |
131 | ## Polytomous IRT
132 |
133 | - The graded response model is the simplest but can be hard to fit.
134 | - It takes the number of categories minus 1 and creates mini 2PLs for each of those boundary points (1-rest, 2-rest, 3-rest, etc.).
135 | - You get probabilities of scoring at a given level OR higher
136 |
137 | ## Polytomous IRT
138 |
139 | - The generalized partial credit and partial credit models account for the fact that each category may not be used equally
140 | - Therefore, you get the mini 2PLs for adjacent categories (1-2, 2-3, 3-4)
141 | - If your categories are ordered (which you often want), these two estimations can be very similar.
142 | - Another concern with the partial credit models is making sure that all categories have a point at which they are the most likely answer (thresholds)
143 |
144 | ## Polytomous IRT
145 |
146 | - Install the `mirt` package to use multidimensional IRT.
147 | - We are not covering multidimensional or multigroup IRT, but this package can estimate those models as well as polytomous ones.
148 |
149 | ## IRT Examples
150 |
151 | - Let's start with DIRT: Dichotomous IRT
152 | - The dataset is the LSAT, which is scored as right or wrong
153 |
154 | ```{r}
155 | library(ltm)
156 | library(mirt)
157 | data(LSAT)
158 | head(LSAT)
159 | ```
160 |
161 | ## Two Parameter Logistic
162 |
163 | ```{r}
164 | # Data frame name ~ z1 for one latent variable
165 | # IRT.param = TRUE returns the usual IRT parameterization (a, b)
166 | LSAT.model <- ltm(LSAT ~ z1,
167 |                   IRT.param = TRUE)
168 | ```
169 |
170 | ## 2PL Output
171 |
172 | - Difficulty = b = the location on theta (ability) where the item performs best
173 | - Discrimination = a = how good the question is at telling people apart
174 |
175 | ```{r}
176 | coef(LSAT.model)
177 | ```
178 |
179 | ## 2PL Plots
180 |
181 | ```{r}
182 | plot(LSAT.model, type = "ICC") ## all items at once
183 | ```
184 |
185 | ## 2PL Plots
186 |
187 | ```{r}
188 | plot(LSAT.model, type = "IIC", items = 0) ## Test Information Function
189 | ```
190 |
191 | ## 2PL Other Options
192 |
193 | ```{r}
194 | factor.scores(LSAT.model)
195 | ```
196 |
197 | ## Three Parameter Logistic
198 |
199 | ```{r}
200 | LSAT.model2 <- tpm(LSAT, #dataset
201 |                    type = "latent.trait",
202 |                    IRT.param = TRUE)
203 | ```
204 |
205 | ## 3PL Output
206 |
207 | - Difficulty = b = the location on theta (ability) where the item performs best
208 | - Discrimination = a = how good the question is at telling people apart
209 | - Guessing = c = how easy the item is to guess (the lower asymptote)
210 |
211 | ```{r}
212 | coef(LSAT.model2)
213 | ```
214 |
215 | ## 3PL Plots
216 |
217 | ```{r}
218 | plot(LSAT.model2, type = "ICC") ## all items at once
219 | ```
220 |
221 | ## 3PL Plots
222 |
223 | ```{r}
224 | plot(LSAT.model2, type = "IIC", items = 0) ## Test Information Function
225 | ```
226 |
227 | ## 3PL Other Options
228 |
229 | ```{r}
230 | factor.scores(LSAT.model2)
231 | ```
232 |
233 | ## Compare Models
234 |
235 | ```{r}
236 | anova(LSAT.model, LSAT.model2)
237 | ```
238 |
239 | ## Polytomous IRT
240 |
241 | - The dataset includes the Meaning in Life Questionnaire
242 |
243 | ```{r}
244 | library(rio)
245 | poly.data <- import("data/lecture_irt.csv")
246 | poly.data <- na.omit(poly.data)
247 |
248 | #reverse code
249 | poly.data$Q99_9 = 8 - poly.data$Q99_9
250 |
251 | #separate factors
252 | poly.data1 = poly.data[ , c(1, 4, 5, 6, 9)]
253 | poly.data2 = poly.data[ , c(2, 3, 7, 8, 10)]
254 | ```
255 |
256 | ## Generalized Partial Credit Model
257 |
258 | ```{r}
259 | gpcm.model1 <- mirt(data = poly.data1, #data
260 |                     model = 1, #number of factors
261 |                     itemtype = "gpcm") #poly model type
262 | ```
263 |
264 | ## GPCM Output
265 |
266 | - We can also get factor loadings here, with standardized coefficients to help us determine whether items relate to their latent trait
267 |
268 | ```{r}
269 | summary(gpcm.model1) ##standardized coefficients
270 | ```
271 |
272 | ## GPCM Output
273 |
274 | ```{r}
275 | coef(gpcm.model1, IRTpars = T) ##coefficients
276 |
277 | head(fscores(gpcm.model1)) ##factor scores
278 | ```
279 |
280 | ## GPCM Plots
281 |
282 | ```{r}
283 | plot(gpcm.model1, type = "trace") ##curves for all items at once
284 | itemplot(gpcm.model1, 5, type = "trace")
285 | ```
286 |
287 | ## GPCM Plots
288 |
289 | ```{r}
290 | itemplot(gpcm.model1, 4, type = "info") ##IIC for each item
291 | plot(gpcm.model1, type = "info") ##test information curve
292 | ```
293 |
294 | ## GPCM Plots
295 |
296 | ```{r}
297 | plot(gpcm.model1) ##expected score curve
298 | ```
299 |
300 | ## Summary
301 |
302 | - In this lecture you've learned:
303 |
304 |     - Item response theory compared to classical test theory
305 |     - How to run a dichotomous or traditional IRT with 2PL and 3PL
306 |     - How to run a polytomous IRT using the generalized partial credit model
307 |     - How to compare models and interpret their output
308 |
--------------------------------------------------------------------------------
/vignettes/lecture_mtmm.R:
--------------------------------------------------------------------------------
1 | ## ---- include = FALSE-----------------------------------
2 | knitr::opts_chunk$set(
3 |   collapse = TRUE,
4 |   comment = "#>"
5 | )
6 |
7 | ## ----echo = F, message = F, warning = F-----------------
8 | knitr::opts_chunk$set(echo = TRUE)
9 | library(lavaan)
10 | library(semPlot)
11 |
12 | ## ----echo=FALSE, out.width = "75%", fig.align="center"----
13 | knitr::include_graphics("pictures/model1_mtmm.png")
14 |
15 | ## -------------------------------------------------------
16 | library(lavaan)
17 | library(semPlot)
18 | library(rio)
19 |
20 | meaning.data <- import("data/lecture_mtmm.csv")
21 | str(meaning.data)
22 |
23 | ## -------------------------------------------------------
24 | methods.model <- '
25 | mlq =~ m1 + m2 + m3 + m4 + m5 + m6 + m8 + m9 + m10
26 | pil =~ p3 + p4 + p8 + p12 + p17 + p20
27 | '
28 |
29 | traits.model <- '
30 | meaning =~ m1 + m2 + m5 + m10 + p4 + p12 + p17
31 | purpose =~ m3 + m4 + m6 + m8 + m9 + p3 + p8 + p20
32 | '
33 |
34 | ##
------------------------------------------------------- 35 | methods.fit <- cfa(model = methods.model, 36 | data = meaning.data, 37 | std.lv = TRUE) 38 | traits.fit <- cfa(model = traits.model, 39 | data = meaning.data, 40 | std.lv = TRUE) 41 | 42 | lavInspect(traits.fit, "cor.lv") 43 | 44 | ## ------------------------------------------------------- 45 | summary(methods.fit, 46 | rsquare = TRUE, 47 | standardized = TRUE, 48 | fit.measures = TRUE) 49 | 50 | summary(traits.fit, 51 | rsquare = TRUE, 52 | standardized = TRUE, 53 | fit.measures = TRUE) 54 | 55 | ## ------------------------------------------------------- 56 | semPaths(methods.fit, 57 | whatLabels = "std", 58 | layout = "tree", 59 | edge.label.cex = 1) 60 | 61 | semPaths(traits.fit, 62 | whatLabels = "std", 63 | layout = "tree", 64 | edge.label.cex = 1) 65 | 66 | ## ------------------------------------------------------- 67 | step1.model <- ' 68 | mlq =~ m1 + m2 + m3 + m4 + m5 + m6 + m8 + m9 + m10 69 | pil =~ p3 + p4 + p8 + p12 + p17 + p20 70 | meaning =~ m1 + m2 + m5 + m10 + p4 + p12 + p17 71 | purpose =~ m3 + m4 + m6 + m8 + m9 + p3 + p8 + p20 72 | 73 | ##fix the covariances 74 | mlq ~~ 0*meaning 75 | pil ~~ 0*meaning 76 | mlq ~~ 0*purpose 77 | pil ~~ 0*purpose 78 | ' 79 | 80 | ## ------------------------------------------------------- 81 | step1.fit <- cfa(model = step1.model, 82 | data = meaning.data, 83 | std.lv = TRUE) 84 | 85 | summary(step1.fit, 86 | rsquare = TRUE, 87 | standardized = TRUE, 88 | fit.measures = TRUE) 89 | 90 | ## ------------------------------------------------------- 91 | semPaths(step1.fit, 92 | whatLabels = "std", 93 | layout = "tree", 94 | edge.label.cex = 1) 95 | 96 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 97 | knitr::include_graphics("pictures/model2_mtmm.png") 98 | 99 | ## ------------------------------------------------------- 100 | ##model 2 is the methods model 101 | ##we've already checked it out 102 | anova(step1.fit, methods.fit) 103 | 104 | fitmeasures(step1.fit, "cfi") 105 | fitmeasures(methods.fit, "cfi") 106 | 107 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 108 | knitr::include_graphics("pictures/model3_mtmm.png") 109 | 110 | ## ------------------------------------------------------- 111 | step3.model <- ' 112 | mlq =~ m1 + m2 + m3 + m4 + m5 + m6 + m8 + m9 + m10 113 | pil =~ p3 + p4 + p8 + p12 + p17 + p20 114 | meaning =~ m1 + m2 + m5 + m10 + p4 + p12 + p17 115 | purpose =~ m3 + m4 + m6 + m8 + m9 + p3 + p8 + p20 116 | 117 | ##fix the covariances 118 | mlq ~~ 0*meaning 119 | pil ~~ 0*meaning 120 | mlq ~~ 0*purpose 121 | pil ~~ 0*purpose 122 | meaning ~~ 1*purpose 123 | ' 124 | 125 | ## ------------------------------------------------------- 126 | step3.fit <- cfa(model = step3.model, 127 | data = meaning.data, 128 | std.lv = TRUE) 129 | 130 | summary(step3.fit, 131 | rsquare = TRUE, 132 | standardized = TRUE, 133 | fit.measure = TRUE) 134 | 135 | ## ------------------------------------------------------- 136 | semPaths(step3.fit, 137 | whatLabels = "std", 138 | layout = "tree", 139 | edge.label.cex = 1) 140 | 141 | ## ------------------------------------------------------- 142 | anova(step1.fit, step3.fit) 143 | 144 | fitmeasures(step1.fit, "cfi") 145 | fitmeasures(step3.fit, "cfi") 146 | 147 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 148 | knitr::include_graphics("pictures/model4_mtmm.png") 149 | 150 | ## ------------------------------------------------------- 151 | step4.model <- ' 152 | mlq =~ m1 + m2 + m3 + m4 + m5 + m6 + m8 + 
m9 + m10 153 | pil =~ p3 + p4 + p8 + p12 + p17 + p20 154 | meaning =~ m1 + m2 + m5 + m10 + p4 + p12 + p17 155 | purpose =~ m3 + m4 + m6 + m8 + m9 + p3 + p8 + p20 156 | 157 | ##fix the covariances 158 | mlq ~~ 0*meaning 159 | pil ~~ 0*meaning 160 | mlq ~~ 0*purpose 161 | pil ~~ 0*purpose 162 | pil ~~ 0*mlq 163 | ' 164 | 165 | ## ------------------------------------------------------- 166 | step4.fit <- cfa(model = step4.model, 167 | data = meaning.data, 168 | std.lv = TRUE) 169 | 170 | summary(step4.fit, 171 | rsquare = TRUE, 172 | standardized = TRUE, 173 | fit.measure = TRUE) 174 | 175 | ## ------------------------------------------------------- 176 | semPaths(step4.fit, 177 | whatLabels = "std", 178 | layout = "tree", 179 | edge.label.cex = 1) 180 | 181 | ## ------------------------------------------------------- 182 | anova(step1.fit, step4.fit) 183 | 184 | fitmeasures(step1.fit, "cfi") 185 | fitmeasures(step4.fit, "cfi") 186 | 187 | ## ------------------------------------------------------- 188 | parameterestimates(step1.fit, standardized = T) 189 | 190 | ## ------------------------------------------------------- 191 | parameterestimates(step1.fit, standardized = T) 192 | 193 | -------------------------------------------------------------------------------- /vignettes/lecture_path.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F, message = F, warning = F----------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | 10 | ## ----eval = F------------------------------------------- 11 | # install.packages("lavaan") 12 | # install.packages("semPlot") 13 | 14 | ## ------------------------------------------------------- 15 | library(rio) 16 | eval.data <- import("data/lecture_evals.csv") 17 | 18 | ## ----echo=FALSE, out.width = "25%", fig.align="center"---- 19 | knitr::include_graphics("pictures/lecture_evals.png") 20 | 21 | ## ----echo=FALSE, out.width = "25%", fig.align="center"---- 22 | knitr::include_graphics("pictures/lecture_evals.png") 23 | 24 | ## ------------------------------------------------------- 25 | library(lavaan) 26 | eval.model <- ' 27 | q4 ~ q12 + q2 28 | q1 ~ q4 + q12 29 | ' 30 | 31 | ## ------------------------------------------------------- 32 | eval.model 33 | 34 | ## ------------------------------------------------------- 35 | eval.output <- sem(model = eval.model, 36 | data = eval.data) 37 | 38 | ## ------------------------------------------------------- 39 | summary(eval.output) 40 | 41 | ## ------------------------------------------------------- 42 | summary(eval.output, 43 | standardized = TRUE, # for the standardized solution 44 | fit.measures = TRUE, # for model fit 45 | rsquare = TRUE) # for SMCs 46 | 47 | ## ------------------------------------------------------- 48 | library(semPlot) 49 | semPaths(eval.output, # the analyzed model 50 | whatLabels = "par", # what to add as the numbers, std for standardized 51 | edge.label.cex = 1, # make the font bigger 52 | layout = "spring") # change the layout tree, circle, spring, tree2, circle2 53 | 54 | ## ------------------------------------------------------- 55 | regression.cor <- lav_matrix_lower2full(c(1.00, 56 | 0.20,1.00, 57 | 0.24,0.30,1.00, 58 | 0.70,0.80,0.30,1.00)) 59 | 60 | # name the variables in the matrix 61 | colnames(regression.cor) <- 62 | rownames(regression.cor) <- 63 | c("X1", "X2", "X3", "Y") 64 | 65 | ## 
------------------------------------------------------- 66 | regression.model <- ' 67 | # structural model for Y 68 | Y ~ a*X1 + b*X2 + c*X3 69 | # label the residual variance of Y 70 | Y ~~ z*Y 71 | ' 72 | 73 | ## ------------------------------------------------------- 74 | regression.fit <- sem(model = regression.model, 75 | sample.cov = regression.cor, # instead of data 76 | sample.nobs = 1000) # number of data points 77 | 78 | ## ------------------------------------------------------- 79 | summary(regression.fit, 80 | standardized = TRUE, 81 | fit.measures = TRUE, 82 | rsquare = TRUE) 83 | 84 | ## ------------------------------------------------------- 85 | semPaths(regression.fit, 86 | whatLabels="par", 87 | edge.label.cex = 1, 88 | layout="tree") 89 | 90 | ## ------------------------------------------------------- 91 | beaujean.cov <- lav_matrix_lower2full(c(648.07, 92 | 30.05, 8.64, 93 | 140.18, 25.57, 233.21)) 94 | colnames(beaujean.cov) <- 95 | rownames(beaujean.cov) <- 96 | c("salary", "school", "iq") 97 | 98 | ## ------------------------------------------------------- 99 | beaujean.model <- ' 100 | salary ~ a*school + c*iq 101 | iq ~ b*school # this is reversed in first printing of the book 102 | ind:= b*c # this is the mediation part 103 | ' 104 | 105 | ## ------------------------------------------------------- 106 | beaujean.fit <- sem(model = beaujean.model, 107 | sample.cov = beaujean.cov, 108 | sample.nobs = 300) 109 | 110 | ## ------------------------------------------------------- 111 | summary(beaujean.fit, 112 | standardized = TRUE, 113 | fit.measures = TRUE, 114 | rsquare = TRUE) 115 | 116 | ## ------------------------------------------------------- 117 | semPaths(beaujean.fit, 118 | whatLabels="par", 119 | edge.label.cex = 1, 120 | layout="tree") 121 | 122 | ## ----echo=FALSE, out.width = "50%", fig.align="center"---- 123 | knitr::include_graphics("pictures/srmr_formula.png") 124 | 125 | ## ------------------------------------------------------- 126 | chi_difference <- 12.6 - 4.3 127 | df_difference <- 14 - 12 128 | pchisq(chi_difference, df_difference, lower.tail = F) 129 | 130 | ## ------------------------------------------------------- 131 | compare.data <- lav_matrix_lower2full(c(1.00, 132 | .53, 1.00, 133 | .15, .18, 1.00, 134 | .52, .29, -.05, 1.00, 135 | .30, .34, .23, .09, 1.00)) 136 | 137 | colnames(compare.data) <- 138 | rownames(compare.data) <- 139 | c("morale", "illness", "neuro", "relationship", "SES") 140 | 141 | ## ------------------------------------------------------- 142 | #model 1 143 | compare.model1 = ' 144 | illness ~ morale 145 | relationship ~ morale 146 | morale ~ SES + neuro 147 | ' 148 | 149 | #model 2 150 | compare.model2 = ' 151 | SES ~ illness + neuro 152 | morale ~ SES + illness 153 | relationship ~ morale + neuro 154 | ' 155 | 156 | ## ------------------------------------------------------- 157 | compare.model1.fit <- sem(compare.model1, 158 | sample.cov = compare.data, 159 | sample.nobs = 469) 160 | 161 | summary(compare.model1.fit, 162 | standardized = TRUE, 163 | fit.measures = TRUE, 164 | rsquare = TRUE) 165 | 166 | ## ------------------------------------------------------- 167 | compare.model2.fit <- sem(compare.model2, 168 | sample.cov = compare.data, 169 | sample.nobs = 469) 170 | 171 | summary(compare.model2.fit, 172 | standardized = TRUE, 173 | fit.measures = TRUE, 174 | rsquare = TRUE) 175 | 176 | ## ------------------------------------------------------- 177 | semPaths(compare.model1.fit, 178 | whatLabels="par", 179 | 
edge.label.cex = 1, 180 | layout="spring") 181 | 182 | ## ------------------------------------------------------- 183 | semPaths(compare.model2.fit, 184 | whatLabels="par", 185 | edge.label.cex = 1, 186 | layout="spring") 187 | 188 | ## ------------------------------------------------------- 189 | anova(compare.model1.fit, compare.model2.fit) 190 | fitmeasures(compare.model1.fit, c("aic", "ecvi")) 191 | fitmeasures(compare.model2.fit, c("aic", "ecvi")) 192 | 193 | -------------------------------------------------------------------------------- /vignettes/lecture_secondcfa.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F, message = F, warning = F----------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | library(lavaan) 10 | library(semPlot) 11 | 12 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 13 | knitr::include_graphics("pictures/second_order.png") 14 | 15 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 16 | knitr::include_graphics("pictures/bi_factor.png") 17 | 18 | ## ------------------------------------------------------- 19 | library(lavaan) 20 | library(semPlot) 21 | 22 | ##import the data 23 | wisc4.cov <- lav_matrix_lower2full(c(8.29, 24 | 5.37,9.06, 25 | 2.83,4.44,8.35, 26 | 2.83,3.32,3.36,8.88, 27 | 5.50,6.66,4.20,3.43,9.18, 28 | 6.18,6.73,4.01,3.33,6.77,9.12, 29 | 3.52,3.77,3.19,2.75,3.88,4.05,8.88, 30 | 3.79,4.50,3.72,3.39,4.53,4.70,4.54,8.94, 31 | 2.30,2.67,2.40,2.38,2.06,2.59,2.65,2.83,8.76, 32 | 3.06,4.04,3.70,2.79,3.59,3.67,3.44,4.20,4.53,9.73)) 33 | 34 | wisc4.sd <- c(2.88,3.01,2.89,2.98,3.03,3.02,2.98,2.99,2.96,3.12) 35 | 36 | names(wisc4.sd) <- 37 | colnames(wisc4.cov) <- 38 | rownames(wisc4.cov) <- c("Comprehension", "Information", 39 | "Matrix.Reasoning", "Picture.Concepts", 40 | "Similarities", "Vocabulary", "Digit.Span", 41 | "Letter.Number", "Coding", "Symbol.Search") 42 | 43 | ## ------------------------------------------------------- 44 | ##first order model 45 | wisc4.fourFactor.model <- ' 46 | gc =~ Comprehension + Information + Similarities + Vocabulary 47 | gf =~ Matrix.Reasoning + Picture.Concepts 48 | gsm =~ Digit.Span + Letter.Number 49 | gs =~ Coding + Symbol.Search 50 | ' 51 | 52 | ## ------------------------------------------------------- 53 | wisc4.fourFactor.fit <- cfa(model = wisc4.fourFactor.model, 54 | sample.cov = wisc4.cov, 55 | sample.nobs = 550) 56 | 57 | ## ------------------------------------------------------- 58 | summary(wisc4.fourFactor.fit, 59 | fit.measure = TRUE, 60 | standardized = TRUE, 61 | rsquare = TRUE) 62 | 63 | ## ------------------------------------------------------- 64 | semPaths(wisc4.fourFactor.fit, 65 | whatLabels="std", 66 | edge.label.cex = 1, 67 | edge.color = "black", 68 | what = "std", 69 | layout="tree") 70 | 71 | ## ------------------------------------------------------- 72 | wisc4.higherOrder.model <- ' 73 | gc =~ Comprehension + Information + Similarities + Vocabulary 74 | gf =~ Matrix.Reasoning + Picture.Concepts 75 | gsm =~ Digit.Span + Letter.Number 76 | gs =~ Coding + Symbol.Search 77 | 78 | g =~ gf + gc + gsm + gs 79 | ' 80 | 81 | ## ------------------------------------------------------- 82 | wisc4.higherOrder.fit <- cfa(model = wisc4.higherOrder.model, 83 | sample.cov = wisc4.cov, 84 | sample.nobs = 550) 85 | 86 | ## ------------------------------------------------------- 87 | 
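#in the higher-order model, g is indicated by the four first-order
#factors, so compare this fit to the correlated four-factor model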
summary(wisc4.higherOrder.fit, 88 | fit.measure=TRUE, 89 | standardized=TRUE, 90 | rsquare = TRUE) 91 | 92 | ## ------------------------------------------------------- 93 | semPaths(wisc4.higherOrder.fit, 94 | whatLabels="std", 95 | edge.label.cex = 1, 96 | edge.color = "black", 97 | what = "std", 98 | layout="tree") 99 | 100 | ## ------------------------------------------------------- 101 | wisc4.bifactor.model <- ' 102 | gc =~ Comprehension + Information + Similarities + Vocabulary 103 | gf =~ a*Matrix.Reasoning + a*Picture.Concepts 104 | gsm =~ b*Digit.Span + b*Letter.Number 105 | gs =~ c*Coding + c*Symbol.Search 106 | g =~ Information + Comprehension + Matrix.Reasoning + Picture.Concepts + Similarities + Vocabulary + Digit.Span + Letter.Number + Coding + Symbol.Search 107 | ' 108 | 109 | ## ------------------------------------------------------- 110 | wisc4.bifactor.fit <- cfa(model = wisc4.bifactor.model, 111 | sample.cov = wisc4.cov, 112 | sample.nobs = 550, 113 | orthogonal = TRUE) 114 | 115 | ## ------------------------------------------------------- 116 | summary(wisc4.bifactor.fit, 117 | fit.measure = TRUE, 118 | rsquare = TRUE, 119 | standardized = TRUE) 120 | 121 | ## ------------------------------------------------------- 122 | semPaths(wisc4.bifactor.fit, 123 | whatLabels="std", 124 | edge.label.cex = 1, 125 | edge.color = "black", 126 | what = "std", 127 | layout="tree") 128 | 129 | -------------------------------------------------------------------------------- /vignettes/lecture_sem.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F, message = F, warning = F----------------- 8 | knitr::opts_chunk$set(echo = TRUE) 9 | library(lavaan) 10 | library(semPlot) 11 | 12 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 13 | knitr::include_graphics("pictures/full_sem2.png") 14 | 15 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 16 | knitr::include_graphics("pictures/indicators.png") 17 | 18 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 19 | knitr::include_graphics("pictures/kline_model.png") 20 | 21 | ## ------------------------------------------------------- 22 | library(lavaan) 23 | library(semPlot) 24 | 25 | family.cor <- lav_matrix_lower2full(c(1.00, 26 | .74, 1.00, 27 | .27, .42, 1.00, 28 | .31, .40, .79, 1.00, 29 | .32, .35, .66, .59, 1.00)) 30 | family.sd <- c(32.94, 22.75, 13.39, 13.68, 14.38) 31 | rownames(family.cor) <- 32 | colnames(family.cor) <- 33 | names(family.sd) <- c("father", "mother", "famo", "problems", "intimacy") 34 | 35 | family.cov <- cor2cov(family.cor, family.sd) 36 | 37 | ## ------------------------------------------------------- 38 | family.model <- ' 39 | adjust =~ problems + intimacy 40 | family =~ father + mother + famo' 41 | 42 | ## ------------------------------------------------------- 43 | family.fit <- cfa(model = family.model, 44 | sample.cov = family.cov, 45 | sample.nobs = 203) 46 | 47 | ## ------------------------------------------------------- 48 | inspect(family.fit, "cov.lv") 49 | inspect(family.fit, "cor.lv") 50 | 51 | ## ------------------------------------------------------- 52 | family.fit <- cfa(model = family.model, 53 | sample.cov = family.cor, 54 | sample.nobs = 203) 55 | 56 | ## ------------------------------------------------------- 57 | summary(family.fit, 58 | rsquare = TRUE, 59 | 
standardized = TRUE, 60 | fit.measures = TRUE) 61 | 62 | ## ------------------------------------------------------- 63 | modificationindices(family.fit, sort = T) 64 | 65 | ## ------------------------------------------------------- 66 | family.model2 <- ' 67 | adjust =~ problems + intimacy 68 | family =~ father + mother + famo 69 | father ~~ mother' 70 | 71 | family.fit2 <- cfa(model = family.model2, 72 | sample.cov = family.cov, 73 | sample.nobs = 203) 74 | 75 | inspect(family.fit2, "cor.lv") 76 | 77 | ## ------------------------------------------------------- 78 | semPaths(family.fit, 79 | whatLabels="std", 80 | layout="tree", 81 | edge.label.cex = 1) 82 | 83 | ## ------------------------------------------------------- 84 | predict.model <- ' 85 | adjust =~ problems + intimacy 86 | family =~ father + mother + famo 87 | adjust~family' 88 | 89 | ## ------------------------------------------------------- 90 | predict.fit <- sem(model = predict.model, 91 | sample.cov = family.cor, 92 | sample.nobs = 203) 93 | 94 | ## ------------------------------------------------------- 95 | summary(predict.fit, 96 | rsquare = TRUE, 97 | standardized = TRUE, 98 | fit.measures = TRUE) 99 | 100 | ## ------------------------------------------------------- 101 | semPaths(predict.fit, 102 | whatLabels="std", 103 | layout="tree", 104 | edge.label.cex = 1) 105 | 106 | ## ----echo=FALSE, out.width = "75%", fig.align="center"---- 107 | knitr::include_graphics("pictures/full_example.png") 108 | 109 | ## ------------------------------------------------------- 110 | family.cor <- lav_matrix_lower2full(c(1.00, 111 | .42, 1.00, 112 | -.43, -.50, 1.00, 113 | -.39, -.43, .78, 1.00, 114 | -.24, -.37, .69, .73, 1.00, 115 | -.31, -.33, .63, .87, .72, 1.00, 116 | -.25, -.25, .49, .53, .60, .59, 1.00, 117 | -.25, -.26, .42, .42, .44, .45, .77, 1.00, 118 | -.16, -.18, .23, .36, .38, .38, .59, .58, 1.00)) 119 | 120 | family.sd <- c(13.00, 13.50, 13.10, 12.50, 13.50, 14.20, 9.50, 11.10, 8.70) 121 | 122 | rownames(family.cor) <- 123 | colnames(family.cor) <- 124 | names(family.sd) <- c("parent_psych","low_SES","verbal", 125 | "reading","math","spelling","motivation","harmony","stable") 126 | 127 | family.cov <- cor2cov(family.cor, family.sd) 128 | 129 | ## ------------------------------------------------------- 130 | composite.model <- ' 131 | risk <~ low_SES + parent_psych + verbal 132 | achieve =~ reading + math + spelling 133 | adjustment =~ motivation + harmony + stable 134 | risk =~ achieve + adjustment 135 | ' 136 | 137 | ## ------------------------------------------------------- 138 | composite.fit <- sem(model = composite.model, 139 | sample.cov = family.cov, 140 | sample.nobs = 158) 141 | 142 | ## ------------------------------------------------------- 143 | summary(composite.fit, 144 | rsquare = TRUE, 145 | standardized = TRUE, 146 | fit.measures = TRUE) 147 | 148 | ## ------------------------------------------------------- 149 | modificationindices(composite.fit, sort = T) 150 | 151 | ## ------------------------------------------------------- 152 | semPaths(composite.fit, 153 | whatLabels="std", 154 | layout="tree", 155 | edge.label.cex = 1) 156 | 157 | -------------------------------------------------------------------------------- /vignettes/lecture_terms.R: -------------------------------------------------------------------------------- 1 | ## ---- include = FALSE----------------------------------- 2 | knitr::opts_chunk$set( 3 | collapse = TRUE, 4 | comment = "#>" 5 | ) 6 | 7 | ## ----echo = F, message = F, warning = 
F----------------- 8 | options(scipen = 999) 9 | knitr::opts_chunk$set(echo = TRUE) 10 | library(lavaan, quietly = T) 11 | library(semPlot, quietly = T) 12 | HS.model <- ' visual =~ x1 + x2 + x3 13 | textual =~ x4 + x5 + x6 14 | speed =~ x7 + x8 + x9 ' 15 | 16 | fit <- cfa(HS.model, data = HolzingerSwineford1939) 17 | 18 | HS.model2 <- 'visual =~ x1 + x2 + x3 19 | textual =~ x4 + x5 + x6 20 | speed =~ x7 + x8 + x9 21 | visual ~ speed' 22 | fit2 <- cfa(HS.model2, data = HolzingerSwineford1939) 23 | 24 | HS.model3 <- 'visual =~ x1 + x2 + x3 25 | textual =~ x4 + x5 + x6 26 | speed =~ x7 + x8 + x9 27 | visual ~ speed 28 | speed ~ textual 29 | textual ~ visual' 30 | fit3 <- cfa(HS.model3, data = HolzingerSwineford1939) 31 | 32 | ## ----exo, echo=FALSE, out.width="75%", fig.align="center"---- 33 | knitr::include_graphics("pictures/exo_endo.png") 34 | 35 | ## ----endo, echo=FALSE, out.width="75%", fig.align="center"---- 36 | knitr::include_graphics("pictures/exo_endo.png") 37 | 38 | ## ----echo = F------------------------------------------- 39 | semPaths(fit, 40 | whatLabels = "std", 41 | edge.label.cex = 1) 42 | 43 | ## ----echo = F------------------------------------------- 44 | semPaths(fit2, 45 | whatLabels = "std", 46 | edge.label.cex = 1) 47 | 48 | ## ----full, out.width="75%", echo=FALSE, fig.align="center"---- 49 | knitr::include_graphics("pictures/full_sem.png") 50 | 51 | ## ----echo = F------------------------------------------- 52 | semPaths(fit2, 53 | whatLabels = "std", 54 | edge.label.cex = 1) 55 | 56 | ## ----echo = F------------------------------------------- 57 | semPaths(fit3, 58 | whatLabels = "std", 59 | edge.label.cex = 1) 60 | 61 | ## ----echo = F------------------------------------------- 62 | summary(fit2) 63 | 64 | ## ----echo = F------------------------------------------- 65 | summary(fit2, standardized = T, rsquare = T) 66 | 67 | ## ----model-steps, echo=FALSE, out.width="75%", fig.align="center"---- 68 | knitr::include_graphics("pictures/model_steps.png") 69 | 70 | ## ----echo = F------------------------------------------- 71 | semPaths(fit) 72 | 73 | ## ----echo = F------------------------------------------- 74 | summary(fit) 75 | 76 | ## ----echo = F------------------------------------------- 77 | summary(fit, standardized = T) 78 | 79 | -------------------------------------------------------------------------------- /vignettes/pictures/ability.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/ability.png -------------------------------------------------------------------------------- /vignettes/pictures/bi_factor.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/bi_factor.png -------------------------------------------------------------------------------- /vignettes/pictures/diagram_sem.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/diagram_sem.png -------------------------------------------------------------------------------- /vignettes/pictures/example_lgm.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/example_lgm.png -------------------------------------------------------------------------------- /vignettes/pictures/exo_endo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/exo_endo.png -------------------------------------------------------------------------------- /vignettes/pictures/full_example.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/full_example.png -------------------------------------------------------------------------------- /vignettes/pictures/full_sem.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/full_sem.png -------------------------------------------------------------------------------- /vignettes/pictures/full_sem2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/full_sem2.png -------------------------------------------------------------------------------- /vignettes/pictures/icc_example.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/icc_example.png -------------------------------------------------------------------------------- /vignettes/pictures/indicators.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/indicators.png -------------------------------------------------------------------------------- /vignettes/pictures/item_difficulty.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/item_difficulty.png -------------------------------------------------------------------------------- /vignettes/pictures/kline_model.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/kline_model.png -------------------------------------------------------------------------------- /vignettes/pictures/lecture_evals.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/lecture_evals.png -------------------------------------------------------------------------------- /vignettes/pictures/model1_mtmm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/model1_mtmm.png -------------------------------------------------------------------------------- /vignettes/pictures/model2_mtmm.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/model2_mtmm.png -------------------------------------------------------------------------------- /vignettes/pictures/model3_mtmm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/model3_mtmm.png -------------------------------------------------------------------------------- /vignettes/pictures/model4_mtmm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/model4_mtmm.png -------------------------------------------------------------------------------- /vignettes/pictures/model_steps.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/model_steps.png -------------------------------------------------------------------------------- /vignettes/pictures/random_fixed.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/random_fixed.png -------------------------------------------------------------------------------- /vignettes/pictures/rotate.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/rotate.png -------------------------------------------------------------------------------- /vignettes/pictures/scree.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/scree.png -------------------------------------------------------------------------------- /vignettes/pictures/second_order.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/second_order.png -------------------------------------------------------------------------------- /vignettes/pictures/srmr_formula.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/doomlab/learnSEM/d377bde36613eb5a1c734718f4df17c103895c35/vignettes/pictures/srmr_formula.png --------------------------------------------------------------------------------