├── .gitignore
├── class.md
├── config
│   ├── announcement.md
│   ├── announcement.md.motd
│   ├── apps
│   │   ├── activejobs
│   │   │   └── env
│   │   ├── bc_desktop
│   │   │   ├── default.yml
│   │   │   └── submit
│   │   │       └── slurm.yml.erb
│   │   ├── dashboard
│   │   │   ├── env
│   │   │   ├── initializers
│   │   │   │   ├── ood.rb
│   │   │   │   └── ood.rb.pre-granite
│   │   │   └── views
│   │   │       └── layouts
│   │   │           ├── application.html.erb
│   │   │           ├── application.html.erb.2.0
│   │   │           └── application.html.erb.org
│   │   └── shell
│   │       └── env
│   ├── clusters.d
│   │   ├── frisco1.yml
│   │   ├── frisco2.yml
│   │   ├── frisco3.yml
│   │   ├── frisco4.yml
│   │   ├── frisco5.yml
│   │   ├── frisco6.yml
│   │   ├── frisco7.yml
│   │   ├── frisco8.yml
│   │   ├── granite.yml
│   │   ├── kingspeak.yml
│   │   ├── lonepeak.yml
│   │   ├── notchpeak.yml
│   │   └── scrubpeak.yml
│   ├── locales
│   │   └── en.yml
│   ├── location.txt
│   ├── nginx_stage.yml
│   ├── ondemand.d
│   │   └── ondemand.yml.erb
│   └── ood_portal.yml
├── granite.md
├── httpd
│   ├── conf.d
│   │   └── auth_cas.conf
│   ├── conf.modules.d
│   │   └── 00-mpm.conf
│   └── location.txt
├── install_scripts
│   ├── build_cas.sh
│   ├── check_apache_config.sh
│   ├── get_apps.sh
│   ├── get_customizations.sh
│   └── setup_cas.sh
├── linux-host
│   ├── Singularity
│   └── build_container.sh
├── pe-config
│   ├── apps
│   │   ├── activejobs
│   │   │   └── env
│   │   ├── bc_desktop
│   │   │   ├── default.yml
│   │   │   ├── single_cluster
│   │   │   │   ├── bristlecone.yml
│   │   │   │   └── redwood.yml
│   │   │   └── submit
│   │   │       ├── slurm.yml.erb
│   │   │       └── slurm.yml.erb.orig
│   │   ├── dashboard
│   │   │   ├── env
│   │   │   └── initializers
│   │   │       └── ood.rb
│   │   └── shell
│   │       └── env
│   ├── clusters.d
│   │   ├── redwood.yml
│   │   └── redwood.yml.orig
│   ├── locales
│   │   └── en.yml
│   ├── nginx_stage.yml
│   └── ood_portal.yml
├── quota.py
├── readme.md
├── rocky8.md
├── rstudio-singularity
│   ├── modulefiles
│   │   └── rstudio_singularity
│   │       ├── 3.4.4.lua
│   │       ├── 3.5.3.lua
│   │       ├── 3.6.1-basic.lua
│   │       ├── 3.6.1-bio.lua
│   │       ├── 3.6.1-geospatial.lua
│   │       ├── 3.6.2-basic.lua
│   │       ├── 3.6.2-bioconductor.lua
│   │       └── 3.6.2-geospatial.lua
│   └── readme.txt
└── var
    └── www
        └── ood
            ├── apps
            │   └── sys
            │       └── shell
            │           └── bin
            │               └── ssh
            └── public
                ├── CHPC-logo.png
                ├── CHPC-logo35.png
                ├── chpc_logo_block.png
                └── logo.png
/.gitignore:
--------------------------------------------------------------------------------
1 | *.img
2 | *.simg
3 | git.info
4 | *~
5 | *.sw[p-s]
6 | *.out
7 |
--------------------------------------------------------------------------------
/class.md:
--------------------------------------------------------------------------------
1 | # Documentation related to Open OnDemand class instance
2 |
3 | [ondemand-class.chpc.utah.edu](https://ondemand-class.chpc.utah.edu)
4 |
5 | ## Class specific apps
6 |
7 | Most class specific apps hard code the job parameters and the environment in which the interactive app runs. Apps named with the full class code, e.g. ATMOS5340, have an environment specific to that class.
8 |
9 | Some professors create their own app and share it with users via app sharing, e.g. BIOL3515. For that, one has to [enable app sharing](https://osc.github.io/ood-documentation/latest/app-sharing.html#peer-to-peer-executable-sharing) for the professor.
10 |
11 | All class apps are in `/uufs/chpc.utah.edu/sys/ondemand/chpc-class`. This directory is owned by Martin so that he can push/pull to the GitHub and GitLab remote repositories. If others create apps here, we will have to figure out permissions for this directory to allow them to write and push/pull as well.
12 |
13 | ### Creating a class specific OOD app
14 |
15 | 1. Decide whether to use a Remote Desktop, Jupyter, RStudio Server, or a remote desktop based application (Matlab, Ansys, etc.). These are the four basic classes of apps.
16 |
17 | 2. If Remote Desktop, Jupyter, or RStudio, pick one of the existing class apps and copy it to a new directory. For the VNC based apps, one would have to create a new class app from the actual app, since we haven't done that yet. For a Remote Desktop based app, use e.g. [ASTR5560](https://github.com/CHPC-UofU/OOD-class-apps/tree/master/ASTR5560); for Jupyter, use [CHEN_Jupyter](https://github.com/CHPC-UofU/OOD-class-apps/tree/master/CHEN_Jupyter); for RStudio Server, use [MIB2020](https://github.com/CHPC-UofU/OOD-class-apps/tree/master/MIB2020).
18 |
19 | 3. Copy this directory to a new directory with the class name, e.g. for a desktop app,
20 | ```
21 | cp -r ASTR5560 ATMOS5120
22 | ```
23 |
24 | 4. Edit the `manifest.yml` to change the class name and department; a minimal sketch is shown below.
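A rough sketch of what the edited `manifest.yml` could look like; the category, subcategory, and description values below are placeholders, not copied from an actual class app:
```
---
name: ATMOS5120 Desktop
category: Interactive Apps
subcategory: Classes
role: batch_connect
description: |
  Interactive desktop for the ATMOS5120 class.
```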
25 |
26 | 5. Edit the `form.yml` to change the `title` to the class name, and adjust the resources as needed, e.g. `cluster`, `bc_num_hours` (walltime), `my_account` (account) and `my_queue` (partition), as sketched below.
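For orientation, the relevant pieces of `form.yml` could end up looking roughly like this; the cluster, account, and partition values are placeholders for whatever the class actually uses:
```
---
title: "ATMOS5120 Interactive Desktop"
cluster: "notchpeak"
attributes:
  bc_num_hours:
    value: 2                           # default walltime in hours
  my_account:
    value: "atmos5120"                 # placeholder class account
  my_queue:
    value: "notchpeak-shared-short"    # placeholder partition
```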
27 |
28 | 6. Additional job parameters can be changed in `submit.yml.erb`. These correspond to the SLURM job parameters. E.g., to set the number of tasks (CPU cores) for the job, add the following in the `script: native:` section:
29 | ```
30 | - "-n"
31 | - 4
32 | ```
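For context, this fragment sits under the `script:` → `native:` keys of `submit.yml.erb`, roughly as in the sketch below; the `--mem` line is only a hypothetical example of adding another SLURM argument:
```
---
script:
  native:
    - "-n"
    - "4"
    - "--mem=16G"
```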
33 |
34 | 7. To modify the environment modules or variables that may be needed for the class, edit `template/script.sh.erb` in the section that loads the existing modules.
35 |
36 | ### ChemEng custom conda environment
37 |
38 | The following lists the commands needed to install Miniconda for the Chemical Engineering classes:
39 |
40 | ```
41 | wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
42 | bash ./Miniconda3-latest-Linux-x86_64.sh -b -p /uufs/chpc.utah.edu/common/home/u0101881/software/pkg/miniconda3-cheng -s
43 | ```
44 | Set up an Lmod module for it as usual and load it. Then install the needed Python modules.
45 | ```
46 | conda install numpy scipy pandas matplotlib
47 | conda install jupyter jupyterlab
48 |
49 | conda install plotly nbconvert pyyaml cython rise
50 | conda install jupyterlab-plotly-extension plotly_express xlwings jupyter_contrib_nbextensions -c conda-forge
51 | conda install keras
52 |
53 | conda install plotly plotly_express jupyterlab-plotly-extension nbconvert pyyaml xlwings cython jupyter_contrib_nbextensions rise
54 | ```
55 |
56 | To overcome a bug in nbconvert:
57 | ```
58 | chmod 755 /uufs/chpc.utah.edu/common/home/u0101881/software/pkg/miniconda3-cheng/share/jupyter/nbconvert/templates/htm
59 | ```
60 |
--------------------------------------------------------------------------------
/config/announcement.md:
--------------------------------------------------------------------------------
1 | Help: Please send issues or questions to helpdesk@chpc.utah.edu
2 |
3 | Online documentation: https://www.chpc.utah.edu/documentation/software/ondemand.php
4 |
5 | News: https://www.chpc.utah.edu/news/latest_news/index.php
6 |
7 | Tip: Usage metrics and statistics can be explored graphically and interactively with XDMoD. https://xdmod.chpc.utah.edu/
8 |
--------------------------------------------------------------------------------
/config/announcement.md.motd:
--------------------------------------------------------------------------------
1 | Help: Please send issues or questions to helpdesk@chpc.utah.edu
2 |
3 | Online documentation: https://www.chpc.utah.edu/documentation/software/ondemand.php
4 |
5 | News: https://www.chpc.utah.edu/news/latest_news/index.php
6 |
7 | Tip: Usage metrics and statistics can be explored graphically and interactively with XDMoD. https://xdmod.chpc.utah.edu/
8 |
9 | NOTE: The XDMoD efficiency data shows in red the jobs that use less than 20% of the allocated CPUs.
10 |
--------------------------------------------------------------------------------
/config/apps/activejobs/env:
--------------------------------------------------------------------------------
1 | OOD_NAVBAR_TYPE=default
2 | OOD_DASHBOARD_HEADER_IMG_LOGO="/public/CHPC-logo.png"
3 |
4 |
--------------------------------------------------------------------------------
/config/apps/bc_desktop/default.yml:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Interactive Desktop"
3 | description: |
4 | This app will launch an interactive linux desktop on a **single compute node**, or a Frisco node.
5 |
6 | This is meant for all types of tasks such as:
7 |
8 | - accessing & viewing files
9 | - compiling code
10 | - debugging
11 | - running visualization software **without** 3D hardware acceleration
12 |
13 | submit: "submit/slurm.yml.erb"
14 |
15 | #form:
16 | # - bc_vnc_idle
17 | # - desktop
18 | # - bc_num_hours
19 | # - bc_num_slots
20 | # - num_cores
21 | # - node_type
22 | # - bc_account
23 | # - bc_queue
24 | # - bc_vnc_resolution
25 | # - bc_email_on_started
26 | # - slurm_cluster
27 |
28 |
--------------------------------------------------------------------------------
/config/apps/bc_desktop/submit/slurm.yml.erb:
--------------------------------------------------------------------------------
1 | ---
2 | script:
3 | <%- if /frisco/.match(cluster) == nil -%>
4 | native:
5 | - "-N"
6 | - "<%= bc_num_slots %>"
7 | - "-n"
8 | - "<%= num_cores %>"
9 | <%- end -%>
10 |
--------------------------------------------------------------------------------
/config/apps/dashboard/env:
--------------------------------------------------------------------------------
1 | OOD_DASHBOARD_TITLE="CHPC OnDemand"
2 | OOD_PORTAL="ondemand"
3 |
4 | #BOOTSTRAP_NAVBAR_INVERSE_BG='rgb(200,16,46)'
5 | #BOOTSTRAP_NAVBAR_INVERSE_LINK_COLOR='rgb(255,255,255)'
6 | OOD_NAVBAR_TYPE=default
7 |
8 |
9 | #MOTD_PATH="/etc/motd"
10 | MOTD_PATH="/etc/ood/config/announcement.md.motd"
11 | MOTD_FORMAT="markdown" # markdown, txt, rss
12 |
13 | # header logo
14 | #OOD_DASHBOARD_HEADER_IMG_LOGO="/public/chpc_logo_block.png"
15 | OOD_DASHBOARD_HEADER_IMG_LOGO="/public/CHPC-logo.png"
16 | # logo in the main window
17 | # OOD_DASHBOARD_LOGO="/public/chpc_logo_block.png"
18 | OOD_DASHBOARD_SUPPORT_URL="http://www.chpc.utah.edu/about/contact.php"
19 | OOD_DASHBOARD_SUPPORT_EMAIL="helpdesk@chpc.utah.edu"
20 | OOD_DASHBOARD_DOCS_URL="https://www.chpc.utah.edu/documentation/software/ondemand.php"
21 | OOD_DASHBOARD_PASSWD_URL="https://system.apps.utah.edu/uofu/acs/uupasswd/portal-self-service"
22 |
23 | #OOD_QUOTA_PATH="https://www.chpc.utah.edu/apps/systems/curl_post/quota.json"
24 | OOD_QUOTA_PATH="/etc/ood/config/apps/dashboard/quota.json:/etc/ood/config/apps/dashboard/quota_legacy.json"
25 | OOD_QUOTA_THRESHOLD="0.90"
26 |
27 |
28 | # BOOTSTRAP_NAVBAR_HEIGHT='80px'
29 |
30 | # enable dynamic form widgets
31 | # https://osc.github.io/ood-documentation/latest/app-development/interactive/dynamic-form-widgets.html
32 | OOD_BC_DYNAMIC_JS=true
33 |
34 | # how often to run SLURM commands, in ms; the default is 10000
35 | POLL_DELAY=30000
36 |
37 |
--------------------------------------------------------------------------------
/config/apps/dashboard/initializers/ood.rb:
--------------------------------------------------------------------------------
1 | # /etc/ood/config/apps/dashboard/initializers/ood.rb
2 |
3 | Rails.application.config.after_initialize do
4 | OodFilesApp.candidate_favorite_paths.tap do |paths|
5 | # add project space directories
6 | # projects = User.new.groups.map(&:name).grep(/^P./)
7 | # paths.concat projects.map { |p| Pathname.new("/fs/project/#{p}") }
8 |
9 | # add scratch space directories
10 | #paths << Pathname.new("/scratch/kingspeak/serial/#{User.new.name}")
11 | paths << Pathname.new("/scratch/ucgd/serial/#{User.new.name}")
12 | paths << Pathname.new("/scratch/general/nfs1/#{User.new.name}")
13 | #paths << Pathname.new("/scratch/general/lustre/#{User.new.name}")
14 | paths << Pathname.new("/scratch/general/vast/#{User.new.name}")
15 |
16 | # group dir based on user's main group
17 | #project = OodSupport::User.new.group.name
18 | #paths.concat Pathname.glob("/uufs/chpc.utah.edu/common/home/#{project}-group*")
19 |
20 | # group dir based on all user's groups, using Portal to get all group spaces
21 | my_cmd = %q[curl -s "https://portal.chpc.utah.edu/monitoring/ondemand/user_group_mounts?user=`whoami`&env=chpc" | sort]
22 | args = []
23 | o, e, s = Open3.capture3(my_cmd , *args)
24 | o.each_line do |v|
25 | paths << Pathname.new(v.gsub(/\s+/, ""))
26 | end
27 |
28 | end
29 |
30 | require 'open3' # Required for capture3 command line call
31 |
32 | class CustomGPUMappings ### GET LIST OF IDENTIFIER:NAME MAPPINGS ###
33 | def self.gpu_name_mappings
34 | @gpu_name_mappings ||= begin
35 | file_path = "/uufs/chpc.utah.edu/sys/ondemand/chpc-apps/app-templates/job_params_v33"
36 |
37 | gpu_mapping_data = []
38 | o, e, s = Open3.capture3("cat", file_path)
39 |
40 | capture_next_line = false
41 | option_count = 0
42 |
43 | o.each_line do |line|
44 | line.strip!
45 | if line.start_with?("- [")
46 | capture_next_line = true
47 | option_count += 1
48 | next
49 | end
50 |
51 | if capture_next_line && !line.empty? && option_count > 2
52 | line.chomp!(',')
53 | gpu_mapping_data << line
54 | capture_next_line = false
55 | end
56 | end
57 | gpu_mapping_data
58 | end
59 | end
60 | end
61 |
62 | class CustomGPUPartitions ### GET LIST OF PARTITION:GPU MAPPINGS ###
63 | def self.gpu_partitions
64 | @gpu_partitions ||= begin
65 | # Path to partition:gpu text file
66 | file_path = "/uufs/chpc.utah.edu/sys/ondemand/chpc-apps/app-templates/gpus_granite.txt"
67 |
68 | # Read file and parse contents
69 | gpu_data = []
70 | current_partition = nil
71 | o, e, s = Open3.capture3("cat", file_path)
72 |
73 | o.each_line do |line|
74 | line.strip!
75 | if line.empty?
76 | current_partition = nil
77 | elsif current_partition
78 | # Append GPU to current partition string
79 | gpu_data[-1] = "#{gpu_data.last}, #{line}"
80 | else
81 | # Start new partition string
82 | current_partition = line
83 | gpu_data.append(current_partition)
84 | end
85 | end
86 | gpu_data
87 | end
88 | end
89 | end
90 |
91 | class CustomQueues ### GET LIST OF CLUSTERS
92 | def self.clusters
93 | @clusters ||= begin
94 | # read list of clusters
95 | # path is Pathname class
96 | #path = Pathname.new("/uufs/chpc.utah.edu/common/home/#{User.new.name}/ondemand/data/cluster.txt")
97 | path = Pathname.new("/var/www/ood/apps/templates/cluster.txt")
98 | # here's the logic to return an array of strings
99 | # convert Pathname to string
100 | args = [path.to_s]
101 | @clusters_available = []
102 | o, e, s = Open3.capture3("cat" , *args)
103 | o.each_line do |v|
104 | # filter out white spaces
105 | @clusters_available.append(v.gsub(/\s+/, ""))
106 | end
107 | @clusters_available
108 | end
109 | end
110 | end
111 |
112 | class CustomAccPart ### GET ACCOUNTS PARTITIONS FOR THIS USER ###
113 | def self.accpart
114 | @accpart ||= begin
115 | # read list of np acc:part
116 | @accpart_available = []
117 | my_cmd = %q[curl -s "https://portal.chpc.utah.edu/monitoring/ondemand/slurm_user_params?user=`whoami`&env=chpc" | grep -v dtn | sort]
118 | # my_cmd = "/var/www/ood/apps/templates/get_alloc_all.sh"
119 | args = []
120 | o, e, s = Open3.capture3(my_cmd , *args)
121 | o.each_line do |v|
122 | @accpart_available.append(v.gsub(/\s+/, ""))
123 | end
124 | @accpart_available
125 | end
126 | end
127 |
128 | def self.accpartcl
129 | @@accpartnp = []
130 | @@accpartkp = []
131 | @@accpartlp = []
132 | @@accpartash = []
133 | # read list of np acc:part
134 | #my_cmd = "/uufs/chpc.utah.edu/common/home/u0101881/tools/sanitytool/myallocation -t"
135 | my_cmd = "/uufs/chpc.utah.edu/sys/bin/myallocation -t"
136 | args = []
137 | o, e, s = Open3.capture3(my_cmd , *args)
138 | clusters = %w{notchpeak kingspeak lonepeak ash}
139 | clusters.each do |cluster|
140 | @accpartcl = []
141 | o.each_line do |line|
142 | if (line[cluster])
143 | @accpartcl.append(line.split(' ')[1].gsub(/\s+/, ""))
144 | end
145 | end
146 | if (cluster == "notchpeak")
147 | @@accpartnp = @accpartcl
148 | define_singleton_method(:accpartnp) do
149 | @@accpartnp
150 | # can't do this, looks like this creates a pointer so result is always
151 | # the last value of accpartcl
152 | # @accpartcl
153 | end
154 | elsif (cluster == "kingspeak")
155 | @@accpartkp = @accpartcl
156 | define_singleton_method(:accpartkp) do
157 | @@accpartkp
158 | end
159 | elsif (cluster == "lonepeak")
160 | @@accpartlp = @accpartcl
161 | define_singleton_method(:accpartlp) do
162 | @@accpartlp
163 | end
164 | elsif (cluster == "ash")
165 | @@accpartash = @accpartcl
166 | define_singleton_method(:accpartash) do
167 | @@accpartash
168 | end
169 | end
170 | end
171 | end
172 |
173 | def self.printaccpartcl
174 | puts @@accpartnp
175 | end
176 | # def self.accpartnp
177 | # self.class.class_variable_get(:@@accpartnp)
178 | # end
179 | end
180 |
181 | # call these once during the initializer so that it'll be cached for later.
182 | CustomAccPart.accpart
183 | CustomAccPart.accpartcl
184 | CustomAccPart.printaccpartcl
185 | #CustomAccPart.accpartnp
186 | CustomQueues.clusters
187 | CustomGPUPartitions.gpu_partitions
188 | CustomGPUMappings.gpu_name_mappings
189 |
190 | end
191 |
--------------------------------------------------------------------------------
/config/apps/dashboard/initializers/ood.rb.pre-granite:
--------------------------------------------------------------------------------
1 | # /etc/ood/config/apps/dashboard/initializers/ood.rb
2 |
3 | Rails.application.config.after_initialize do
4 | OodFilesApp.candidate_favorite_paths.tap do |paths|
5 | # add project space directories
6 | # projects = User.new.groups.map(&:name).grep(/^P./)
7 | # paths.concat projects.map { |p| Pathname.new("/fs/project/#{p}") }
8 |
9 | # add scratch space directories
10 | #paths << Pathname.new("/scratch/kingspeak/serial/#{User.new.name}")
11 | paths << Pathname.new("/scratch/ucgd/serial/#{User.new.name}")
12 | paths << Pathname.new("/scratch/general/nfs1/#{User.new.name}")
13 | #paths << Pathname.new("/scratch/general/lustre/#{User.new.name}")
14 | paths << Pathname.new("/scratch/general/vast/#{User.new.name}")
15 |
16 | # group dir based on user's main group
17 | #project = OodSupport::User.new.group.name
18 | #paths.concat Pathname.glob("/uufs/chpc.utah.edu/common/home/#{project}-group*")
19 |
20 | # group dir based on all user's groups
21 | OodSupport::User.new.groups.each do |group|
22 | paths.concat Pathname.glob("/uufs/chpc.utah.edu/common/home/#{group.name}-group*")
23 | end
24 | end
25 |
26 | require 'open3' # Required for capture3 command line call
27 |
28 | class CustomGPUMappings ### GET LIST OF IDENTIFIER:NAME MAPPINGS ###
29 | def self.gpu_name_mappings
30 | @gpu_name_mappings ||= begin
31 | file_path = "/uufs/chpc.utah.edu/sys/ondemand/chpc-apps/app-templates/job_params_v33"
32 |
33 | gpu_mapping_data = []
34 | o, e, s = Open3.capture3("cat", file_path)
35 |
36 | capture_next_line = false
37 | option_count = 0
38 |
39 | o.each_line do |line|
40 | line.strip!
41 | if line.start_with?("- [")
42 | capture_next_line = true
43 | option_count += 1
44 | next
45 | end
46 |
47 | if capture_next_line && !line.empty? && option_count > 2
48 | line.chomp!(',')
49 | gpu_mapping_data << line
50 | capture_next_line = false
51 | end
52 | end
53 | gpu_mapping_data
54 | end
55 | end
56 | end
57 |
58 | class CustomGPUPartitions ### GET LIST OF PARTITION:GPU MAPPINGS ###
59 | def self.gpu_partitions
60 | @gpu_partitions ||= begin
61 | # Path to partition:gpu text file
62 | file_path = "/uufs/chpc.utah.edu/sys/ondemand/chpc-apps/app-templates/gpus.txt"
63 |
64 | # Read file and parse contents
65 | gpu_data = []
66 | current_partition = nil
67 | o, e, s = Open3.capture3("cat", file_path)
68 |
69 | o.each_line do |line|
70 | line.strip!
71 | if line.empty?
72 | current_partition = nil
73 | elsif current_partition
74 | # Append GPU to current partition string
75 | gpu_data[-1] = "#{gpu_data.last}, #{line}"
76 | else
77 | # Start new partition string
78 | current_partition = line
79 | gpu_data.append(current_partition)
80 | end
81 | end
82 | gpu_data
83 | end
84 | end
85 | end
86 |
87 | class CustomQueues ### GET LIST OF CLUSTERS
88 | def self.clusters
89 | @clusters ||= begin
90 | # read list of clusters
91 | # path is Pathname class
92 | #path = Pathname.new("/uufs/chpc.utah.edu/common/home/#{User.new.name}/ondemand/data/cluster.txt")
93 | path = Pathname.new("/var/www/ood/apps/templates/cluster.txt")
94 | # here's the logic to return an array of strings
95 | # convert Pathname to string
96 | args = [path.to_s]
97 | @clusters_available = []
98 | o, e, s = Open3.capture3("cat" , *args)
99 | o.each_line do |v|
100 | # filter out white spaces
101 | @clusters_available.append(v.gsub(/\s+/, ""))
102 | end
103 | @clusters_available
104 | end
105 | end
106 | end
107 |
108 | class CustomAccPart ### GET ACCOUNTS PARTITIONS FOR THIS USER ###
109 | def self.accpart
110 | @accpart ||= begin
111 | # read list of np acc:part
112 | @accpart_available = []
113 | my_cmd = "/var/www/ood/apps/templates/get_alloc_all.sh"
114 | args = []
115 | o, e, s = Open3.capture3(my_cmd , *args)
116 | o.each_line do |v|
117 | @accpart_available.append(v.gsub(/\s+/, ""))
118 | end
119 | @accpart_available
120 | end
121 | end
122 |
123 | def self.accpartcl
124 | @@accpartnp = []
125 | @@accpartkp = []
126 | @@accpartlp = []
127 | @@accpartash = []
128 | # read list of np acc:part
129 | #my_cmd = "/uufs/chpc.utah.edu/common/home/u0101881/tools/sanitytool/myallocation -t"
130 | my_cmd = "/uufs/chpc.utah.edu/sys/bin/myallocation -t"
131 | args = []
132 | o, e, s = Open3.capture3(my_cmd , *args)
133 | clusters = %w{notchpeak kingspeak lonepeak ash}
134 | clusters.each do |cluster|
135 | @accpartcl = []
136 | o.each_line do |line|
137 | if (line[cluster])
138 | @accpartcl.append(line.split(' ')[1].gsub(/\s+/, ""))
139 | end
140 | end
141 | if (cluster == "notchpeak")
142 | @@accpartnp = @accpartcl
143 | define_singleton_method(:accpartnp) do
144 | @@accpartnp
145 | # can't do this, looks like this creates a pointer so result is always
146 | # the last value of accpartcl
147 | # @accpartcl
148 | end
149 | elsif (cluster == "kingspeak")
150 | @@accpartkp = @accpartcl
151 | define_singleton_method(:accpartkp) do
152 | @@accpartkp
153 | end
154 | elsif (cluster == "lonepeak")
155 | @@accpartlp = @accpartcl
156 | define_singleton_method(:accpartlp) do
157 | @@accpartlp
158 | end
159 | elsif (cluster == "ash")
160 | @@accpartash = @accpartcl
161 | define_singleton_method(:accpartash) do
162 | @@accpartash
163 | end
164 | end
165 | end
166 | end
167 |
168 | def self.printaccpartcl
169 | puts @@accpartnp
170 | end
171 | # def self.accpartnp
172 | # self.class.class_variable_get(:@@accpartnp)
173 | # end
174 | end
175 |
176 | # call these once during the initializer so that it'll be cached for later.
177 | CustomAccPart.accpart
178 | CustomAccPart.accpartcl
179 | CustomAccPart.printaccpartcl
180 | #CustomAccPart.accpartnp
181 | CustomQueues.clusters
182 | CustomGPUPartitions.gpu_partitions
183 | CustomGPUMappings.gpu_name_mappings
184 |
185 | end
186 |
--------------------------------------------------------------------------------
/config/apps/dashboard/views/layouts/application.html.erb:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | <%= content_for?(:title) ? yield(:title) : "Dashboard - #{@user_configuration.dashboard_title}" %>
5 | <%= favicon_link_tag 'favicon.ico', href: @user_configuration.public_url.join('favicon.ico'), skip_pipeline: true %>
6 |
7 |
8 |
9 |
16 |
17 | <%= javascript_include_tag 'application', nonce: true %>
18 | <%= stylesheet_link_tag 'application', nonce: content_security_policy_nonce, media: 'all', rel: 'preload stylesheet', as: 'style', type: 'text/css' %>
19 | <%= render partial: '/layouts/nav/styles', locals: { bg_color: @user_configuration.brand_bg_color, link_active_color: @user_configuration.brand_link_active_bg_color } %>
20 | <% custom_css_paths.each do |path| %>
21 |
22 | <% end %>
23 |
24 | <%= yield :head %>
25 |
26 | <%= csrf_meta_tags %>
27 |
28 |
29 |
30 |
31 |
32 | "
37 | data-jobs-info-path="<%= jobs_info_path('delme', 'delme').gsub(/[\/]*delme[\/]*/,'') %>"
38 | data-uppy-locale="<%= I18n.t('dashboard.uppy', :default => {}).to_json %>"
39 | />
40 |
41 |
42 |
43 |
78 |
79 |
80 |
81 |
82 | <% @announcements.select(&:valid?).each do |announcement| %>
83 |
84 | <%= raw OodAppkit.markdown.render(announcement.msg) %>
85 |
86 | <% end %>
87 |
88 | <%= render "layouts/browser_warning" %>
89 |
90 | <%= render partial: "shared/insufficient_quota", locals: { quotas: @my_quotas } if @my_quotas && @my_quotas.any? %>
91 | <%= render partial: "shared/insufficient_balance", locals: { balances: @my_balances } if @my_balances && @my_balances.any? %>
92 | <%= render partial: "shared/bad_cluster_config" if invalid_clusters.any? %>
93 |
94 |
95 |
96 |
100 | ALERT_MSG
101 |
102 |
103 |
104 | <% if alert %>
105 |
106 |
110 | <%= alert %>
111 |
112 | <% end %>
113 |
114 | <% if notice %>
115 |
116 |
120 | <%= notice %>
121 |
122 | <% end %>
123 |
124 | <%= yield %>
125 |
126 |
127 |
128 | <%= render "layouts/footer" %>
129 |
130 |
131 |
--------------------------------------------------------------------------------
/config/apps/dashboard/views/layouts/application.html.erb.2.0:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
14 |
15 | <%= content_for?(:title) ? yield(:title) : "Dashboard - #{OodAppkit.dashboard.title}" %>
16 | <%= favicon_link_tag 'favicon.ico', href: OodAppkit.public.url.join('favicon.ico'), skip_pipeline: true %>
17 |
18 |
19 | <%= javascript_pack_tag 'application' %>
20 | <%= stylesheet_pack_tag 'application', media: 'all' %>
21 |
22 |
23 | <%= stylesheet_link_tag 'application', media: 'all' %>
24 | <%= javascript_include_tag 'application' %>
25 | <%= javascript_include_tag 'turbolinks' if Configuration.turbolinks_enabled? %>
26 |
27 | <%= csrf_meta_tags %>
28 |
29 | <%= yield :head %>
30 |
31 |
32 | <% if Configuration.turbolinks_enabled? %>
33 | ">
34 | <% end %>
35 | <%= render partial: '/layouts/nav/styles', locals: { bg_color: Configuration.brand_bg_color, link_active_color: Configuration.brand_link_active_bg_color } %>
36 |
37 |
38 | >
39 |
74 |
75 |
76 |
77 |
78 | <% @announcements.select(&:valid?).each do |announcement| %>
79 |
80 | <%= raw OodAppkit.markdown.render(announcement.msg) %>
81 |
82 | <% end %>
83 |
84 | <%= render "layouts/browser_warning" %>
85 |
86 | <%= render partial: "shared/insufficient_quota", locals: { quotas: @my_quotas } if @my_quotas && @my_quotas.any? %>
87 | <%= render partial: "shared/insufficient_balance", locals: { balances: @my_balances } if @my_balances && @my_balances.any? %>
88 | <%= render partial: "shared/bad_cluster_config" if invalid_clusters.any? %>
89 |
90 |
99 |
100 | <% if alert %>
101 |
102 |
106 | <%= alert %>
107 |
108 | <% end %>
109 |
110 | <% if notice %>
111 |
112 |
116 | <%= notice %>
117 |
118 | <% end %>
119 |
120 | <%= yield %>
121 |
122 |
123 |
124 | <%= render "layouts/footer" %>
125 |
126 |
127 |
--------------------------------------------------------------------------------
/config/apps/dashboard/views/layouts/application.html.erb.org:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | <%= content_for?(:title) ? yield(:title) : "Dashboard - #{OodAppkit.dashboard.title}" %>
5 | <%= favicon_link_tag 'favicon.ico', href: OodAppkit.public.url.join('favicon.ico'), skip_pipeline: true %>
6 |
7 | <%- tag_id = 'G-05ZZTZ7SSV' -%>
8 |
9 | <%- unless tag_id.nil? -%>
10 |
11 |
12 |
19 | <%- end -%>
20 |
21 |
22 | <%= javascript_pack_tag 'application' %>
23 | <%= stylesheet_pack_tag 'application', media: 'all' %>
24 |
25 |
26 | <%= stylesheet_link_tag 'application', media: 'all' %>
27 | <%= javascript_include_tag 'application' %>
28 | <%= javascript_include_tag 'turbolinks' if Configuration.turbolinks_enabled? %>
29 |
30 | <%= csrf_meta_tags %>
31 |
32 | <%= yield :head %>
33 |
34 |
35 | <% if Configuration.turbolinks_enabled? %>
36 | ">
37 | <% end %>
38 | <%= render partial: '/layouts/nav/styles', locals: { bg_color: Configuration.brand_bg_color, link_active_color: Configuration.brand_link_active_bg_color } %>
39 |
40 |
41 | >
42 |
77 |
78 |
79 |
80 |
81 | <% @announcements.select(&:valid?).each do |announcement| %>
82 |
83 | <%= raw OodAppkit.markdown.render(announcement.msg) %>
84 |
85 | <% end %>
86 |
87 | <%= render "layouts/browser_warning" %>
88 |
89 | <%= render partial: "shared/insufficient_quota", locals: { quotas: @my_quotas } if @my_quotas && @my_quotas.any? %>
90 | <%= render partial: "shared/insufficient_balance", locals: { balances: @my_balances } if @my_balances && @my_balances.any? %>
91 | <%= render partial: "shared/bad_cluster_config" if invalid_clusters.any? %>
92 |
93 |
102 |
103 | <% if alert %>
104 |
105 |
109 | <%= alert %>
110 |
111 | <% end %>
112 |
113 | <% if notice %>
114 |
115 |
119 | <%= notice %>
120 |
121 | <% end %>
122 |
123 | <%= yield %>
124 |
125 |
126 |
127 | <%= render "layouts/footer" %>
128 |
129 |
130 |
--------------------------------------------------------------------------------
/config/apps/shell/env:
--------------------------------------------------------------------------------
1 | DEFAULT_SSHHOST="frisco1.chpc.utah.edu"
2 |
3 | # ssh wrapper sets session timeout to 2 hours
4 | OOD_SSH_WRAPPER=/var/www/ood/apps/sys/shell/bin/ssh
5 |
6 | # as of v 1.8, the shell app can only ssh to the hosts listed below
7 | OOD_SSHHOST_ALLOWLIST="grn[0-1][0-9][0-9].int.chpc.utah.edu:notch[0-4][0-9][0-9].ipoib.int.chpc.utah.edu:lp[0-2][0-9][0-9].lonepeak.peaks:kp[0-3][0-9][0-9].ipoib.kingspeak.peaks:ash[2-4][0-9][0-9].ipoib.ash.peaks"
8 |
--------------------------------------------------------------------------------
/config/clusters.d/frisco1.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "frisco1"
5 | url: "https://www.chpc.utah.edu/documentation/guides/frisco-nodes.php"
6 | hidden: false
7 | login:
8 | host: "frisco1.chpc.utah.edu"
9 | job:
10 | adapter: "linux_host"
11 | submit_host: "frisco1.chpc.utah.edu" # This is the head for a login round robin
12 | ssh_hosts: # These are the actual login nodes, need to have full host name for the regex to work
13 | - frisco1.chpc.utah.edu
14 | site_timeout: 7200
15 | debug: true
16 | singularity_bin: /uufs/chpc.utah.edu/sys/installdir/singularity3/std/bin/singularity
17 | singularity_bindpath: /etc,/mnt,/media,/opt,/run,/srv,/usr,/var,/uufs,/scratch
18 | singularity_image: /uufs/chpc.utah.edu/sys/installdir/ood/rocky8_lmod.sif
19 | # Enabling strict host checking may cause the adapter to fail if the user's known_hosts does not have all the roundrobin hosts
20 | strict_host_checking: false
21 | tmux_bin: /usr/bin/tmux
22 | batch_connect:
23 | basic:
24 | script_wrapper: |
25 | #!/bin/bash
26 | set -x
27 | if [ -z "$LMOD_VERSION" ]; then
28 | source /etc/profile.d/chpc.sh
29 | fi
30 | export XDG_RUNTIME_DIR=$(mktemp -d)
31 | %s
32 | set_host: "host=$(hostname -s).chpc.utah.edu"
33 | vnc:
34 | script_wrapper: |
35 | #!/bin/bash
36 | set -x
37 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
38 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.10.0/bin/websockify"
39 | export XDG_RUNTIME_DIR=$(mktemp -d)
40 | %s
41 | set_host: "host=$(hostname -s).chpc.utah.edu"
42 | # set_host: "host=$(hostname -A | awk '{print $3}')"
43 |
--------------------------------------------------------------------------------
/config/clusters.d/frisco2.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "frisco2"
5 | url: "https://www.chpc.utah.edu/documentation/guides/frisco-nodes.php"
6 | hidden: false
7 | login:
8 | host: "frisco2.chpc.utah.edu"
9 | job:
10 | adapter: "linux_host"
11 | submit_host: "frisco2.chpc.utah.edu" # This is the head for a login round robin
12 | ssh_hosts: # These are the actual login nodes, need to have full host name for the regex to work
13 | - frisco2.chpc.utah.edu
14 | site_timeout: 7200
15 | debug: true
16 | singularity_bin: /uufs/chpc.utah.edu/sys/installdir/singularity3/std/bin/singularity
17 | singularity_bindpath: /etc,/mnt,/media,/opt,/run,/srv,/usr,/var,/uufs,/scratch
18 | singularity_image: /uufs/chpc.utah.edu/sys/installdir/ood/rocky8_lmod.sif
19 | # Enabling strict host checking may cause the adapter to fail if the user's known_hosts does not have all the roundrobin hosts
20 | strict_host_checking: false
21 | tmux_bin: /usr/bin/tmux
22 | batch_connect:
23 | basic:
24 | script_wrapper: |
25 | #!/bin/bash
26 | set -x
27 | if [ -z "$LMOD_VERSION" ]; then
28 | source /etc/profile.d/chpc.sh
29 | fi
30 | export XDG_RUNTIME_DIR=$(mktemp -d)
31 | %s
32 | set_host: "host=$(hostname -s).chpc.utah.edu"
33 | vnc:
34 | script_wrapper: |
35 | #!/bin/bash
36 | set -x
37 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
38 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.10.0/bin/websockify"
39 | export XDG_RUNTIME_DIR=$(mktemp -d)
40 | %s
41 | set_host: "host=$(hostname -s).chpc.utah.edu"
42 | # set_host: "host=$(hostname -A | awk '{print $3}')"
43 |
--------------------------------------------------------------------------------
/config/clusters.d/frisco3.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "frisco3"
5 | url: "https://www.chpc.utah.edu/documentation/guides/frisco-nodes.php"
6 | hidden: false
7 | login:
8 | host: "frisco3.chpc.utah.edu"
9 | job:
10 | adapter: "linux_host"
11 | submit_host: "frisco3.chpc.utah.edu" # This is the head for a login round robin
12 | ssh_hosts: # These are the actual login nodes, need to have full host name for the regex to work
13 | - frisco3.chpc.utah.edu
14 | site_timeout: 7200
15 | debug: true
16 | singularity_bin: /uufs/chpc.utah.edu/sys/installdir/singularity3/std/bin/singularity
17 | singularity_bindpath: /etc,/mnt,/media,/opt,/run,/srv,/usr,/var,/uufs,/scratch
18 | singularity_image: /uufs/chpc.utah.edu/sys/installdir/ood/rocky8_lmod.sif
19 | # Enabling strict host checking may cause the adapter to fail if the user's known_hosts does not have all the roundrobin hosts
20 | strict_host_checking: false
21 | tmux_bin: /usr/bin/tmux
22 | batch_connect:
23 | basic:
24 | script_wrapper: |
25 | #!/bin/bash
26 | set -x
27 | if [ -z "$LMOD_VERSION" ]; then
28 | source /etc/profile.d/chpc.sh
29 | fi
30 | export XDG_RUNTIME_DIR=$(mktemp -d)
31 | %s
32 | set_host: "host=$(hostname -s).chpc.utah.edu"
33 | vnc:
34 | script_wrapper: |
35 | #!/bin/bash
36 | set -x
37 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
38 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.10.0/bin/websockify"
39 | export XDG_RUNTIME_DIR=$(mktemp -d)
40 | %s
41 | set_host: "host=$(hostname -s).chpc.utah.edu"
42 | # set_host: "host=$(hostname -A | awk '{print $3}')"
43 |
--------------------------------------------------------------------------------
/config/clusters.d/frisco4.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "frisco4"
5 | url: "https://www.chpc.utah.edu/documentation/guides/frisco-nodes.php"
6 | hidden: false
7 | login:
8 | host: "frisco4.chpc.utah.edu"
9 | job:
10 | adapter: "linux_host"
11 | submit_host: "frisco4.chpc.utah.edu" # This is the head for a login round robin
12 | ssh_hosts: # These are the actual login nodes, need to have full host name for the regex to work
13 | - frisco4.chpc.utah.edu
14 | site_timeout: 7200
15 | debug: true
16 | singularity_bin: /uufs/chpc.utah.edu/sys/installdir/singularity3/std/bin/singularity
17 | singularity_bindpath: /etc,/mnt,/media,/opt,/run,/srv,/usr,/var,/uufs,/scratch
18 | singularity_image: /uufs/chpc.utah.edu/sys/installdir/ood/rocky8_lmod.sif
19 | # Enabling strict host checking may cause the adapter to fail if the user's known_hosts does not have all the roundrobin hosts
20 | strict_host_checking: false
21 | tmux_bin: /usr/bin/tmux
22 | batch_connect:
23 | basic:
24 | script_wrapper: |
25 | #!/bin/bash
26 | set -x
27 | if [ -z "$LMOD_VERSION" ]; then
28 | source /etc/profile.d/chpc.sh
29 | fi
30 | export XDG_RUNTIME_DIR=$(mktemp -d)
31 | %s
32 | set_host: "host=$(hostname -s).chpc.utah.edu"
33 | vnc:
34 | script_wrapper: |
35 | #!/bin/bash
36 | set -x
37 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
38 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.10.0/bin/websockify"
39 | export XDG_RUNTIME_DIR=$(mktemp -d)
40 | %s
41 | set_host: "host=$(hostname -s).chpc.utah.edu"
42 | # set_host: "host=$(hostname -A | awk '{print $3}')"
43 |
--------------------------------------------------------------------------------
/config/clusters.d/frisco5.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "frisco5"
5 | url: "https://www.chpc.utah.edu/documentation/guides/frisco-nodes.php"
6 | hidden: false
7 | login:
8 | host: "frisco5.chpc.utah.edu"
9 | job:
10 | adapter: "linux_host"
11 | submit_host: "frisco5.chpc.utah.edu" # This is the head for a login round robin
12 | ssh_hosts: # These are the actual login nodes, need to have full host name for the regex to work
13 | - frisco5.chpc.utah.edu
14 | site_timeout: 7200
15 | debug: true
16 | singularity_bin: /uufs/chpc.utah.edu/sys/installdir/singularity3/std/bin/singularity
17 | singularity_bindpath: /etc,/mnt,/media,/opt,/run,/srv,/usr,/var,/uufs,/scratch
18 | singularity_image: /uufs/chpc.utah.edu/sys/installdir/ood/rocky8_lmod.sif
19 | # Enabling strict host checking may cause the adapter to fail if the user's known_hosts does not have all the roundrobin hosts
20 | strict_host_checking: false
21 | tmux_bin: /usr/bin/tmux
22 | batch_connect:
23 | basic:
24 | script_wrapper: |
25 | #!/bin/bash
26 | set -x
27 | if [ -z "$LMOD_VERSION" ]; then
28 | source /etc/profile.d/chpc.sh
29 | fi
30 | export XDG_RUNTIME_DIR=$(mktemp -d)
31 | %s
32 | set_host: "host=$(hostname -s).chpc.utah.edu"
33 | vnc:
34 | script_wrapper: |
35 | #!/bin/bash
36 | set -x
37 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
38 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.10.0/bin/websockify"
39 | export XDG_RUNTIME_DIR=$(mktemp -d)
40 | %s
41 | set_host: "host=$(hostname -s).chpc.utah.edu"
42 | # set_host: "host=$(hostname -A | awk '{print $3}')"
43 |
--------------------------------------------------------------------------------
/config/clusters.d/frisco6.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "frisco6"
5 | url: "https://www.chpc.utah.edu/documentation/guides/frisco-nodes.php"
6 | hidden: false
7 | login:
8 | host: "frisco6.chpc.utah.edu"
9 | job:
10 | adapter: "linux_host"
11 | submit_host: "frisco6.chpc.utah.edu" # This is the head for a login round robin
12 | ssh_hosts: # These are the actual login nodes, need to have full host name for the regex to work
13 | - frisco6.chpc.utah.edu
14 | site_timeout: 7200
15 | debug: true
16 | singularity_bin: /uufs/chpc.utah.edu/sys/installdir/singularity3/std/bin/singularity
17 | singularity_bindpath: /etc,/mnt,/media,/opt,/run,/srv,/usr,/var,/uufs,/scratch
18 | singularity_image: /uufs/chpc.utah.edu/sys/installdir/ood/rocky8_lmod.sif
19 | # Enabling strict host checking may cause the adapter to fail if the user's known_hosts does not have all the roundrobin hosts
20 | strict_host_checking: false
21 | tmux_bin: /usr/bin/tmux
22 | batch_connect:
23 | basic:
24 | script_wrapper: |
25 | #!/bin/bash
26 | set -x
27 | if [ -z "$LMOD_VERSION" ]; then
28 | source /etc/profile.d/chpc.sh
29 | fi
30 | export XDG_RUNTIME_DIR=$(mktemp -d)
31 | %s
32 | set_host: "host=$(hostname -s).chpc.utah.edu"
33 | vnc:
34 | script_wrapper: |
35 | #!/bin/bash
36 | set -x
37 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
38 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.10.0/bin/websockify"
39 | export XDG_RUNTIME_DIR=$(mktemp -d)
40 | %s
41 | set_host: "host=$(hostname -s).chpc.utah.edu"
42 | # set_host: "host=$(hostname -A | awk '{print $3}')"
43 |
--------------------------------------------------------------------------------
/config/clusters.d/frisco7.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "frisco7"
5 | url: "https://www.chpc.utah.edu/documentation/guides/frisco-nodes.php"
6 | hidden: false
7 | login:
8 | host: "frisco7.chpc.utah.edu"
9 | job:
10 | adapter: "linux_host"
11 | submit_host: "frisco7.chpc.utah.edu" # This is the head for a login round robin
12 | ssh_hosts: # These are the actual login nodes, need to have full host name for the regex to work
13 | - frisco7.chpc.utah.edu
14 | site_timeout: 7200
15 | debug: true
16 | singularity_bin: /uufs/chpc.utah.edu/sys/installdir/singularity3/std/bin/singularity
17 | singularity_bindpath: /etc,/mnt,/media,/opt,/run,/srv,/usr,/var,/uufs,/scratch
18 | singularity_image: /uufs/chpc.utah.edu/sys/installdir/ood/rocky8_lmod.sif
19 | # Enabling strict host checking may cause the adapter to fail if the user's known_hosts does not have all the roundrobin hosts
20 | strict_host_checking: false
21 | tmux_bin: /usr/bin/tmux
22 | batch_connect:
23 | basic:
24 | script_wrapper: |
25 | #!/bin/bash
26 | set -x
27 | if [ -z "$LMOD_VERSION" ]; then
28 | source /etc/profile.d/chpc.sh
29 | fi
30 | export XDG_RUNTIME_DIR=$(mktemp -d)
31 | %s
32 | set_host: "host=$(hostname -s).chpc.utah.edu"
33 | vnc:
34 | script_wrapper: |
35 | #!/bin/bash
36 | set -x
37 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
38 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.10.0/bin/websockify"
39 | export XDG_RUNTIME_DIR=$(mktemp -d)
40 | %s
41 | set_host: "host=$(hostname -s).chpc.utah.edu"
42 | # set_host: "host=$(hostname -A | awk '{print $3}')"
43 |
--------------------------------------------------------------------------------
/config/clusters.d/frisco8.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "frisco8"
5 | url: "https://www.chpc.utah.edu/documentation/guides/frisco-nodes.php"
6 | hidden: false
7 | login:
8 | host: "frisco8.chpc.utah.edu"
9 | job:
10 | adapter: "linux_host"
11 | submit_host: "frisco8.chpc.utah.edu" # This is the head for a login round robin
12 | ssh_hosts: # These are the actual login nodes, need to have full host name for the regex to work
13 | - frisco8.chpc.utah.edu
14 | site_timeout: 7200
15 | debug: true
16 | singularity_bin: /uufs/chpc.utah.edu/sys/installdir/singularity3/std/bin/singularity
17 | singularity_bindpath: /etc,/mnt,/media,/opt,/run,/srv,/usr,/var,/uufs,/scratch
18 | singularity_image: /uufs/chpc.utah.edu/sys/installdir/ood/rocky8_lmod.sif
19 | # Enabling strict host checking may cause the adapter to fail if the user's known_hosts does not have all the roundrobin hosts
20 | strict_host_checking: false
21 | tmux_bin: /usr/bin/tmux
22 | batch_connect:
23 | basic:
24 | script_wrapper: |
25 | #!/bin/bash
26 | set -x
27 | if [ -z "$LMOD_VERSION" ]; then
28 | source /etc/profile.d/chpc.sh
29 | fi
30 | export XDG_RUNTIME_DIR=$(mktemp -d)
31 | %s
32 | set_host: "host=$(hostname -s).chpc.utah.edu"
33 | vnc:
34 | script_wrapper: |
35 | #!/bin/bash
36 | set -x
37 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
38 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.10.0/bin/websockify"
39 | export XDG_RUNTIME_DIR=$(mktemp -d)
40 | %s
41 | set_host: "host=$(hostname -s).chpc.utah.edu"
42 | # set_host: "host=$(hostname -A | awk '{print $3}')"
43 |
--------------------------------------------------------------------------------
/config/clusters.d/granite.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "Granite"
5 | priority: 2
6 | login:
7 | host: "granite.chpc.utah.edu"
8 | job:
9 | adapter: "slurm"
10 | cluster: "granite"
11 | bin: "/uufs/granite/sys/installdir/slurm/std/bin"
12 | custom:
13 | xdmod:
14 | resource_id: 28
15 | queues:
16 | - "granite"
17 | - "granite-guest"
18 | - "granite-freecycle"
19 | batch_connect:
20 | basic:
21 | script_wrapper: |
22 | if [ -z "$LMOD_VERSION" ]; then
23 | source /etc/profile.d/z00_chpc.sh
24 | fi
25 | export XDG_RUNTIME_DIR=$(mktemp -d)
26 | # reset SLURM_EXPORT_ENV so that things like srun & sbatch have the same environment as the host
27 | export SLURM_EXPORT_ENV=ALL
28 | %s
29 | set_host: "host=$(/uufs/chpc.utah.edu/sys/bin/hostfromroute.sh ondemand-test.chpc.utah.edu)"
30 | vnc:
31 | script_wrapper: |
32 | # in granite script
33 | if [ -z "$LMOD_VERSION" ]; then
34 | source /etc/profile.d/z00_chpc.sh
35 | fi
36 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/2.2.7/opt/TurboVNC/bin:$PATH"
37 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.10.0/bin/websockify"
38 | export XDG_RUNTIME_DIR=$(mktemp -d)
39 | # reset SLURM_EXPORT_ENV so that things like srun & sbatch have the same environment as the host
40 | export SLURM_EXPORT_ENV=ALL
41 | %s
42 | set_host: "host=$(/uufs/chpc.utah.edu/sys/bin/hostfromroute.sh ondemand-test.chpc.utah.edu)"
43 |
44 | # set_host: "host=$(hostname -A | awk '{print $2}')"
45 | # first hostname - TCP, second hostname - IB
46 |
47 |
--------------------------------------------------------------------------------
/config/clusters.d/kingspeak.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "Kingspeak"
5 | priority: 1
6 | login:
7 | host: "kingspeak.chpc.utah.edu"
8 | job:
9 | adapter: "slurm"
10 | cluster: "kingspeak"
11 | bin: "/uufs/kingspeak.peaks/sys/pkg/slurm/std/bin"
12 | custom:
13 | xdmod:
14 | resource_id: 16
15 | queues:
16 | - "kingspeak"
17 | - "kingspeak-guest"
18 | - "kingspeak-freecycle"
19 | batch_connect:
20 | basic:
21 | script_wrapper: |
22 | if [ -z "$LMOD_VERSION" ]; then
23 | source /etc/profile.d/chpc.sh
24 | fi
25 | export XDG_RUNTIME_DIR=$(mktemp -d)
26 | # reset SLURM_EXPORT_ENV so that things like srun & sbatch have the same environment as the host
27 | export SLURM_EXPORT_ENV=ALL
28 | %s
29 | set_host: "host=$(/uufs/chpc.utah.edu/sys/bin/hostfromroute.sh ondemand.chpc.utah.edu)"
30 | vnc:
31 | script_wrapper: |
32 | # in kingspeak script
33 | if [ -z "$LMOD_VERSION" ]; then
34 | source /etc/profile.d/chpc.sh
35 | fi
36 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
37 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.10.0/bin/websockify"
38 | export XDG_RUNTIME_DIR=$(mktemp -d)
39 | # reset SLURM_EXPORT_ENV so that things like srun & sbatch have the same environment as the host
40 | export SLURM_EXPORT_ENV=ALL
41 | %s
42 | set_host: "host=$(/uufs/chpc.utah.edu/sys/bin/hostfromroute.sh ondemand.chpc.utah.edu)"
43 |
44 |
--------------------------------------------------------------------------------
/config/clusters.d/lonepeak.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "Lonepeak"
5 | priority: 3
6 | login:
7 | host: "lonepeak.chpc.utah.edu"
8 | job:
9 | adapter: "slurm"
10 | cluster: "lonepeak"
11 | bin: "/uufs/lonepeak.peaks/sys/pkg/slurm/std/bin"
12 | custom:
13 | xdmod:
14 | resource_id: 25
15 | queues:
16 | - "lonepeak"
17 | - "lonepeak-guest"
18 | batch_connect:
19 | basic:
20 | script_wrapper: |
21 | if [ -z "$LMOD_VERSION" ]; then
22 | source /etc/profile.d/chpc.sh
23 | fi
24 | export XDG_RUNTIME_DIR=$(mktemp -d)
25 | # reset SLURM_EXPORT_ENV so that things like srun & sbatch have the same environment as the host
26 | export SLURM_EXPORT_ENV=ALL
27 | %s
28 | set_host: "host=$(/uufs/chpc.utah.edu/sys/bin/hostfromroute.sh ondemand.chpc.utah.edu)"
29 | vnc:
30 | script_wrapper: |
31 | #!/bin/bash
32 | # in lonepeak script
33 | if [ -z "$LMOD_VERSION" ]; then
34 | source /etc/profile.d/chpc.sh
35 | fi
36 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
37 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.10.0/bin/websockify"
38 | export XDG_RUNTIME_DIR=$(mktemp -d)
39 | # reset SLURM_EXPORT_ENV so that things like srun & sbatch have the same environment as the host
40 | export SLURM_EXPORT_ENV=ALL
41 | %s
42 | set_host: "host=$(/uufs/chpc.utah.edu/sys/bin/hostfromroute.sh ondemand.chpc.utah.edu)"
43 |
--------------------------------------------------------------------------------
/config/clusters.d/notchpeak.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "Notchpeak"
5 | priority: 2
6 | login:
7 | host: "notchpeak.chpc.utah.edu"
8 | job:
9 | adapter: "slurm"
10 | cluster: "notchpeak"
11 | bin: "/uufs/notchpeak.peaks/sys/installdir/slurm/std/bin"
12 | custom:
13 | xdmod:
14 | resource_id: 28
15 | queues:
16 | - "notchpeak"
17 | - "notchpeak-guest"
18 | - "notchpeak-freecycle"
19 | batch_connect:
20 | basic:
21 | script_wrapper: |
22 | if [ -z "$LMOD_VERSION" ]; then
23 | source /etc/profile.d/chpc.sh
24 | fi
25 | env
26 | export XDG_RUNTIME_DIR=$(mktemp -d)
27 | # reset SLURM_EXPORT_ENV so that things like srun & sbatch have the same environment as the host
28 | export SLURM_EXPORT_ENV=ALL
29 | %s
30 | set_host: "host=$(/uufs/chpc.utah.edu/sys/bin/hostfromroute.sh ondemand.chpc.utah.edu)"
31 | vnc:
32 | script_wrapper: |
33 | # in notchpeak script
34 | if [ -z "$LMOD_VERSION" ]; then
35 | source /etc/profile.d/chpc.sh
36 | fi
37 | env
38 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
39 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.10.0/bin/websockify"
40 | export XDG_RUNTIME_DIR=$(mktemp -d)
41 | # reset SLURM_EXPORT_ENV so that things like srun & sbatch have the same environment as the host
42 | export SLURM_EXPORT_ENV=ALL
43 | %s
44 | set_host: "host=$(/uufs/chpc.utah.edu/sys/bin/hostfromroute.sh ondemand.chpc.utah.edu)"
45 |
46 | # set_host: "host=$(hostname -A | awk '{print $2}')"
47 | # first hostname - TCP, second hostname - IB
48 |
49 |
--------------------------------------------------------------------------------
/config/clusters.d/scrubpeak.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "Scrubpeak"
5 | login:
6 | host: "scrubpeak.chpc.utah.edu"
7 | job:
8 | adapter: "slurm"
9 | cluster: "scrubpeak"
10 | bin: "/uufs/scrubpeak.peaks/sys/pkg/slurm/std/bin"
11 | custom:
12 | xdmod:
13 | resource_id: 32
14 | queues:
15 | - "scrubpeak"
16 | - "scrubpeak-shared"
17 | batch_connect:
18 | basic:
19 | script_wrapper: |
20 | if [ -z "$LMOD_VERSION" ]; then
21 | source /etc/profile.d/chpc.sh
22 | fi
23 | export XDG_RUNTIME_DIR=$(mktemp -d)
24 | %s
25 | set_host: "host=$(hostname -A | awk '{print $1}')"
26 | vnc:
27 | script_wrapper: |
28 | if [ -z "$LMOD_VERSION" ]; then
29 | source /etc/profile.d/chpc.sh
30 | fi
31 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
32 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.8.0/bin/websockify"
33 | export XDG_RUNTIME_DIR=$(mktemp -d)
34 | %s
35 | set_host: "host=$(hostname -A | awk '{print $1}')"
36 |
37 |
--------------------------------------------------------------------------------
/config/locales/en.yml:
--------------------------------------------------------------------------------
1 | en:
2 | dashboard:
3 | quota_reload_message: "Reload page to see updated quota. Quotas are updated every hour."
4 | welcome_html: |
5 |
6 |
--------------------------------------------------------------------------------
/config/location.txt:
--------------------------------------------------------------------------------
1 | This directory's parent is /etc/ood
2 |
--------------------------------------------------------------------------------
/config/nginx_stage.yml:
--------------------------------------------------------------------------------
1 | #
2 | # This is an example NginxStage CLI configuration file. It contains the
3 | # configuration options that can be specified to meet your system requirements.
4 | # See https://github.com/OSC/nginx_stage for detailed information about
5 | # NginxStage. In particular see
6 | # https://github.com/OSC/nginx_stage/blob/master/lib/nginx_stage/configuration.rb
7 | # for a detailed list of all possible configuration options and their default
8 | # settings.
9 | #
10 | # Below you can find the default values for each configuration option commented
11 | # out. Feel free to uncomment it and make modifications or write your
12 | # modifications directly below the commented defaults.
13 | #
14 |
15 | ---
16 |
17 | # Path to the OnDemand version file
18 | #
19 | #ondemand_version_path: '/opt/ood/VERSION'
20 |
21 | # Unique name of this OnDemand portal used to namespace multiple hosted portals
22 | # NB: If this is not set then most apps will use default namespace "ondemand"
23 | #
24 | #ondemand_portal: null
25 |
26 | # Title of this OnDemand portal that apps *should* display in their navbar
27 | # NB: If this is not set then most apps will use default title "Open OnDemand"
28 | #
29 | #ondemand_title: null
30 |
31 | # Custom environment variables to set for the PUN environment
32 | # Below is an example of the use for setting env vars.
33 | #
34 | pun_custom_env:
35 | OOD_XDMOD_HOST: "https://xdmod.chpc.utah.edu"
36 | # OOD_JOB_NAME_ILLEGAL_CHARS: "/"
37 | # OOD_DASHBOARD_TITLE: "Open OnDemand"
38 | # OOD_BRAND_BG_COLOR: "#53565a"
39 | # OOD_BRAND_LINK_ACTIVE_BG_COLOR: "#fff"
40 |
41 | # List of environment variables to pass onto PUN environment
42 | # from /etc/ood/profile. Example below shows some default
43 | # env vars that are declared.
44 | #
45 | # pun_custom_env_declarations:
46 | # - PATH
47 | # - LD_LIBRARY_PATH
48 | # - MANPATH
49 | # - SCLS
50 | # - X_SCLS
51 |
52 | # Location of the ERB templates used in the generation of the NGINX configs
53 | #
54 | #template_root: '/opt/ood/nginx_stage/templates'
55 |
56 | # The reverse proxy daemon user used to access the Unix domain sockets
57 | #
58 | #proxy_user: 'apache'
59 |
60 | # Path to NGINX executable used by OnDemand
61 | #
62 | #nginx_bin: '/opt/ood/ondemand/root/usr/sbin/nginx'
63 |
64 | # White-list of signals that can be sent to the NGINX process
65 | #
66 | #nginx_signals:
67 | # - 'stop'
68 | # - 'quit'
69 | # - 'reopen'
70 | # - 'reload'
71 |
72 | # Path to NGINX 'mime.types' file used by OnDemand
73 | #
74 | #mime_types_path: '/opt/ood/ondemand/root/etc/nginx/mime.types'
75 |
76 | # Path to Passenger 'locations.ini' file used by OnDemand.
77 | #
78 | #passenger_root: '/opt/ood/ondemand/root/usr/share/ruby/vendor_ruby/phusion_passenger/locations.ini'
79 | #
80 |
81 | # Path to Ruby binary used by nginx_stage
82 | #
83 | #passenger_ruby: '/opt/ood/nginx_stage/bin/ruby'
84 |
85 | # Path to system-installed Node.js binary
86 | # Set to `false` if you don't want this specified in nginx config
87 | #
88 | #passenger_nodejs: '/opt/ood/nginx_stage/bin/node'
89 |
90 | # Path to system-installed Python binary
91 | # Set to `false` if you don't want this specified in nginx config
92 | #
93 | #passenger_python: '/opt/ood/nginx_stage/bin/python'
94 |
95 | # passenger options, see https://discourse.openondemand.org/t/proper-file-location-to-set-nginx-parameters/2213/3
96 | passenger_options:
97 | passenger_max_request_queue_size : 200
98 |
99 | # Root location of per-user NGINX configs
100 | #
101 | #pun_config_path: '/var/lib/ondemand-nginx/config/puns/%{user}.conf'
102 |
103 | # Root location of per-user NGINX tmp dirs
104 | #
105 | #pun_tmp_root: '/var/tmp/ondemand-nginx/%{user}'
106 |
107 | # Path to the per-user NGINX access log
108 | #
109 | #pun_access_log_path: '/var/log/ondemand-nginx/%{user}/access.log'
110 |
111 | # Path to the per-user NGINX error log
112 | #
113 | #pun_error_log_path: '/var/log/ondemand-nginx/%{user}/error.log'
114 |
115 | # Path to the per-user NGINX pid file
116 | #
117 | #pun_pid_path: '/var/run/ondemand-nginx/%{user}/passenger.pid'
118 |
119 | # Path to the per-user NGINX socket file
120 | #
121 | #pun_socket_path: '/var/run/ondemand-nginx/%{user}/passenger.sock'
122 |
123 | # Path to the local filesystem root where the per-user NGINX process serves
124 | # files from for the user making use of the sendfile feature in NGINX
125 | #
126 | #pun_sendfile_root: '/'
127 |
128 | # The internal URI used to access the local filesystem for downloading files
129 | # from the apps (not accessible directly by client browser)
130 | #
131 | #pun_sendfile_uri: '/sendfile'
132 |
133 | # List of hashes helping define wildcard app config locations. These are the
134 | # arguments for {#app_config_path}.
135 | #
136 | #pun_app_configs:
137 | # - env: 'dev'
138 | # owner: '%{user}'
139 | # name: '*'
140 | # - env: 'usr'
141 | # owner: '*'
142 | # name: '*'
143 | # - env: 'sys'
144 | # owner: ''
145 | # name: '*'
146 |
147 | # A hash detailing the path to the per-user NGINX app configs
148 | #
149 | #app_config_path:
150 | # dev: '/var/lib/ondemand-nginx/config/apps/dev/%{owner}/%{name}.conf'
151 | # usr: '/var/lib/ondemand-nginx/config/apps/usr/%{owner}/%{name}.conf'
152 | # sys: '/var/lib/ondemand-nginx/config/apps/sys/%{name}.conf'
153 |
154 | # A hash detailing the locations on the file system where apps reside for the
155 | # corresponding environment
156 | #
157 | #app_root:
158 | # dev: '/var/www/ood/apps/dev/%{owner}/gateway/%{name}'
159 | # usr: '/var/www/ood/apps/usr/%{owner}/gateway/%{name}'
160 | # sys: '/var/www/ood/apps/sys/%{name}'
161 | #
162 | # If you want to enable app development like in 1.3, where dev apps live in each user's home directory,
163 | # use this app_root block instead:
164 | #
165 | #app_root:
166 | # dev: '~%{owner}/%{portal}/dev/%{name}'
167 | # usr: '/var/www/ood/apps/usr/%{owner}/gateway/%{name}'
168 | # sys: '/var/www/ood/apps/sys/%{name}'
169 |
170 | # A hash detailing the app's request URI not including the base-URI
171 | #
172 | #app_request_uri:
173 | # dev: '/dev/%{name}'
174 | # usr: '/usr/%{owner}/%{name}'
175 | # sys: '/sys/%{name}'
176 |
177 | # A hash detailing the regular expressions used to define the app namespace
178 | # from a given URI request. Should match {#app_request_uri}.
179 | #
180 | #app_request_regex:
181 | # dev: '^/dev/(?<name>[-\w.]+)'
182 | # usr: '^/usr/(?<owner>[\w]+)\/(?<name>[-\w.]+)'
183 | # sys: '^/sys/(?<name>[-\w.]+)'
184 |
185 | # A hash detailing the tokens used to identify individual apps
186 | #
187 | #app_token:
188 | # dev: 'dev/%{owner}/%{name}'
189 | # usr: 'usr/%{owner}/%{name}'
190 | # sys: 'sys/%{name}'
191 |
192 | # A hash detailing the Passenger environment to run the app under within the
193 | # PUN
194 | #
195 | #app_passenger_env:
196 | # dev: 'development'
197 | # usr: 'production'
198 | # sys: 'production'
199 |
200 | # Regular expression used to validate a given user name. The user name supplied
201 | # must match the regular expression to be considered valid.
202 | #
203 | #user_regex: '[\w@\.\-]+'
204 |
205 | # Minimum user id required to generate per-user NGINX server as the requested
206 | # user
207 | #
208 | #min_uid: 1000
209 |
210 | # Restrict starting up per-user NGINX process as user with this shell.
211 | # NB: This only affects the `pun` command, you are still able to start or stop
212 | # the PUN using other commands (e.g., `nginx`, `nginx_clean`, ...)
213 | #
214 | disabled_shell: '/bin/true'
215 |
216 | # Set BUNDLE_USER_CONFIG to /dev/null in the PUN environment
217 | # NB: This prevents a user's ~/.bundle/config from affecting OnDemand applications
218 | #
219 | #disable_bundle_user_config: true
220 |
--------------------------------------------------------------------------------
/config/ondemand.d/ondemand.yml.erb:
--------------------------------------------------------------------------------
1 | # /etc/ood/config/ondemand.d/ondemand.yml
2 |
3 | pinned_apps_menu_length: 8 # the default number of items in the dropdown menu list
4 |
5 | dashboard_layout:
6 | rows:
7 | - columns:
8 | - width: 12
9 | widgets:
10 | - pinned_apps
11 | - motd
12 | # - xdmod_widget_job_efficiency
13 | # - xdmod_widget_jobs
14 | - columns:
15 | - width: 6
16 | widgets: ['xdmod_widget_job_efficiency']
17 | - width: 6
18 | widgets: ['xdmod_widget_jobs']
19 |
20 | pinned_apps:
21 | - sys/chpc-systemstatus
22 | - sys/desktop_expert
23 | - sys/jupyter_app
24 | - sys/rstudio_server_app
25 | - sys/matlab_app
26 | - sys/comsol_app
27 | - sys/codeserver_app
28 | - sys/vmd_app
29 | - sys/ansys_workbench_app
30 | - sys/abaqus_app
31 | - sys/lumerical_app
32 | - sys/idl_app
33 |
34 | # pinned_apps_group_by: category # defaults to nil, no grouping
35 |
36 | # 'nav_bar' is the left side of the navigation bar.
37 | nav_bar:
38 | ## 'apps' dropdown menu is shown if you've set 'pinned_apps'.
39 | #- apps
40 | - files
41 | - jobs
42 | - clusters
43 | - interactive apps
44 | - classes
45 | - my interactive sessions
46 | #
47 | ## 'all apps' is disabled by default, but would be next if you set 'show_all_apps_link'.
48 | ## - all apps
49 | #
50 | ## 'help_bar' is the right side of the navigation bar.
51 | help_bar:
52 | - develop
53 | - help
54 | - user
55 | - logout
56 |
57 | # clean old interactive app dirs
58 | bc_clean_old_dirs: true
59 |
60 | support_ticket:
61 | email:
62 | from: <%= Etc.getlogin %>@utah.edu
63 | to: "helpdesk@chpc.utah.edu"
64 | delivery_method: "smtp"
65 | delivery_settings:
66 | address: 'mail.chpc.utah.edu'
67 | port: 25
68 | authentication: 'none'
69 | form:
70 | - subject
71 | - attachments
72 | - description
73 |
74 | module_file_dir: "/var/www/ood/apps/templates/modules"
75 |
76 | # single endpoint for all file systems (home, scratch, group)
77 | globus_endpoints:
78 | - path: "/"
79 | endpoint: "7cf0baa1-8bd0-4e91-a1e6-c19042952a7c"
80 | endpoint_path: "/"
81 |
82 |
--------------------------------------------------------------------------------
/config/ood_portal.yml:
--------------------------------------------------------------------------------
1 | ---
2 | #
3 | # Portal configuration
4 | #
5 |
6 | # The address and port to listen for connections on
7 | # Example:
8 | # listen_addr_port: 443
9 | # Default: null (don't add any more listen directives)
10 | #listen_addr_port: null
11 |
12 | # The server name used for name-based Virtual Host
13 | # Example:
14 | # servername: 'www.example.com'
15 | # Default: null (don't use name-based Virtual Host)
16 | servername: ondemand.chpc.utah.edu
17 |
18 | # The server name used for rewrites
19 | # Example:
20 | # proxy_server: 'proxy.example.com'
21 | # Default: The value of servername
22 | #proxy_server: null
23 |
24 | # The port specification for the Virtual Host
25 | # Example:
26 | # port: 8080
27 | #Default: null (use default port 80 or 443 if SSL enabled)
28 | #port: null
29 |
30 | # List of SSL Apache directives
31 | # Example:
32 | # ssl:
33 | # - 'SSLCertificateFile "/etc/pki/tls/certs/www.example.com.crt"'
34 | # - 'SSLCertificateKeyFile "/etc/pki/tls/private/www.example.com.key"'
35 | # Default: null (no SSL support)
36 | #ssl: null
37 | ssl:
38 | - 'SSLCertificateFile "/etc/letsencrypt/live/ondemand.chpc.utah.edu/fullchain.pem"'
39 | - 'SSLCertificateKeyFile "/etc/letsencrypt/live/ondemand.chpc.utah.edu/privkey.pem"'
40 | - 'Include conf/ssl/ssl-standard.conf'
41 | - 'Include conf/ssl/lets-encrypt.conf'
42 |
43 | # Root directory of log files (can be relative ServerRoot)
44 | # Example:
45 | # logroot: '/path/to/my/logs'
46 | # Default: 'logs' (this is relative to ServerRoot)
47 | #logroot: 'logs'
48 |
49 | # Error log filename
50 | # Example:
51 | # errorlog: 'error.log'
52 | # Default: 'error.log' (If 'servername' and 'ssl' options are defined
53 | # the default value will be _error_ssl.log)
54 | #errorlog: 'error.log'
55 |
56 | # Access log filename
57 | # Example:
58 | # accesslog: 'access.log'
59 | # Default: 'access.log' (If 'servername' and 'ssl' options are defined
60 | # the default value will be _access_ssl.log)
61 | #accesslog: 'access.log'
62 |
63 | # Apache access log format (Don't specify log nickname see: http://httpd.apache.org/docs/current/mod/mod_log_config.html#transferlog)
64 | # Example:
65 | # logformat: '"%v %h \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %{SSL_PROTOCOL}x %T"'
66 | # Default: Apache combined format
67 |
68 | # Should RewriteEngine be used
69 | # Example:
70 | # use_rewrites: false
71 | # Default: true
72 | #use_rewrites: true
73 |
74 | # Should Maintenance Rewrite rules be added
75 | # Example:
76 | # use_maintenance: false
77 | # Default: true
78 | #use_maintenance: true
79 |
80 | # List of IPs to whitelist when maintenance is enabled
81 | # Example:
82 | # maintenance_ip_whitelist:
83 | # - 192.168.0..*
84 | # - 192.168.1..*
85 | # Default: [] (no IPs whitelisted)
86 | #maintenance_ip_whitelist: []
87 |
88 | # Set Header Content-Security-Policy frame-ancestors.
89 | # Example:
90 | # security_csp_frame_ancestors: https://ondemand.osc.edu
91 | # Example to disable setting:
92 | # security_csp_frame_ancestors: false
93 | # Default: based on servername and ssl settings
94 | #security_csp_frame_ancestors:
95 |
96 | # Set Header Strict-Transport-Security to help enforce SSL
97 | # Example:
98 | # security_strict_transport: false
99 | # Default: true when ssl is enabled, false otherwise
100 | #security_strict_transport: false
101 |
102 | # Root directory of the Lua handler code
103 | # Example:
104 | # lua_root: '/path/to/lua/handlers'
105 | # Default : '/opt/ood/mod_ood_proxy/lib' (default install directory of mod_ood_proxy)
106 | #lua_root: '/opt/ood/mod_ood_proxy/lib'
107 |
108 | # Verbosity of the Lua module logging
109 | # (see https://httpd.apache.org/docs/2.4/mod/core.html#loglevel)
110 | # Example:
111 | # lua_log_level: 'info'
112 | # Default: null (use default log level)
113 | #lua_log_level: null
114 |
115 | # Lua regular expression used to map authenticated-user to system-user
116 | # This configuration is ignored if user_map_cmd is defined
117 | # Example:
118 | # user_map_match: '^([^@]+)@.*$'
119 | # Default: '.*'
120 | # user_map_match: '.*'
121 |
122 | # System command used to map authenticated-user to system-user
123 | # Example:
124 | # user_map_cmd: '/opt/ood/ood_auth_map/bin/ood_auth_map.regex --regex=''^(\w+)@example.com$'''
125 | # Default: '/opt/ood/ood_auth_map/bin/ood_auth_map.regex' (this echo's back auth-user)
126 | #user_map_cmd: '/opt/ood/ood_auth_map/bin/ood_auth_map.regex'
127 |
128 | # Use an alternative CGI environment variable instead of REMOTE_USER for
129 | # determining the authenticated-user fed to the mapping script
130 | # Example:
131 | # user_env: 'OIDC_CLAIM_preferred_username'
132 | # Default: null (use REMOTE_USER)
133 | #user_env: null
134 |
135 | # Redirect the user to the following URI if their authenticated-user fails to map to
136 | # a system-user
137 | # Example:
138 | # map_fail_uri: '/register'
139 | # Default: null (don't redirect, just display error message)
140 | #map_fail_uri: null
141 |
142 | # System command used to run the `nginx_stage` script with sudo privileges
143 | # Example:
144 | # pun_stage_cmd: 'sudo /path/to/nginx_stage'
145 | # Default: 'sudo /opt/ood/nginx_stage/sbin/nginx_stage' (don't forget sudo)
146 | #pun_stage_cmd: 'sudo /opt/ood/nginx_stage/sbin/nginx_stage'
147 |
148 | # List of Apache authentication directives
149 | # NB: Be sure the appropriate Apache module is installed for this
150 | # Default: (see below, uses basic auth with an htpasswd file)
151 | #auth:
152 | # - 'AuthType openid-connect'
153 | # - 'Require valid-user'
154 | auth:
155 | - 'AuthType CAS'
156 | - 'Require valid-user'
157 | - 'CASScope /'
158 | - 'RequestHeader unset Authorization'
159 |
160 | # Redirect user to the following URI when accessing root URI
161 | # Example:
162 | # root_uri: '/my_uri'
163 | # # https://www.example.com/ => https://www.example.com/my_uri
164 | # Default: '/pun/sys/dashboard' (default location of the OOD Dashboard app)
165 | # CAS comment out
166 | #root_uri: '/pun/sys/dashboard'
167 |
168 | # Track server-side analytics with a Google Analytics account and property
169 | # (see https://github.com/OSC/mod_ood_proxy/blob/master/lib/analytics.lua for
170 | # information on how to setup the GA property)
171 | # Example:
172 | analytics:
173 | url: 'http://www.google-analytics.com/collect'
174 | id: 'UA-122259839-1'
175 | # Default: null (do not track)
176 | #analytics: null
177 |
178 | #
179 | # Publicly available assets
180 | #
181 |
182 | # Public sub-uri (available to public with no authentication)
183 | # Example:
184 | # public_uri: '/assets'
185 | # Default: '/public'
186 | #public_uri: '/public'
187 |
188 | # Root directory that serves the public sub-uri (be careful, everything under
189 | # here is open to the public)
190 | # Example:
191 | # public_root: '/path/to/public/assets'
192 | # Default: '/var/www/ood/public'
193 | #public_root: '/var/www/ood/public'
194 |
195 | #
196 | # Logout redirect helper
197 | #
198 |
199 | # Logout sub-uri
200 | # Example
201 | # logout_uri: '/log_me_out'
202 | # NB: If you change this, then modify the Dashboard app with the new sub-uri
203 | # Default: '/logout' (the Dashboard app is by default going to expect this)
204 | #logout_uri: '/logout'
205 |
206 | # Redirect user to the following URI when accessing logout URI
207 | # Example:
208 | # logout_redirect: '/oidc?logout=https%3A%2F%2Fwww.example.com'
209 | # Default: '/pun/sys/dashboard/logout' (the Dashboard app provides a simple
210 | # HTML page explaining logout to the user)
211 | logout_redirect: '/pun/sys/dashboard/logout'
212 | #logout_redirect: '/oidc?logout=https%3A%2F%2Fondemand.chpc.utah.edu'
213 |
214 | #
215 | # Reverse proxy to backend nodes
216 | #
217 |
218 | # Regular expression used for whitelisting allowed hostnames of nodes
219 | # Example:
220 | # host_regex: '[\w.-]+\.example\.com'
221 | # Default: '[^/]+' (allow reverse proxying to all hosts, this allows external
222 | # hosts as well)
223 | #host_regex: '^(.*?)(peaks|arches)$'
224 | host_regex: '[\w.-]+\.(peaks|arches|int.chpc.utah.edu|chpc.utah.edu)'
225 | # the above can be checked with grep, e.g. on em163: hostname -A | awk '{print $2}' | grep -E '^(.*?)(peaks|arches)$'
226 | #host_regex: '[^/]+'
227 | #host_regex: "(sp|kp|em|lp|ash|tg)\\d+"
228 |
229 | # Sub-uri used to reverse proxy to backend web server running on node that
230 | # knows the full URI path
231 | # Example:
232 | node_uri: '/node'
233 | # Default: null (disable this feature)
234 | #node_uri: null
235 |
236 | # Sub-uri used to reverse proxy to backend web server running on node that
237 | # ONLY uses *relative* URI paths
238 | # Example:
239 | rnode_uri: '/rnode'
240 | # Default: null (disable this feature)
241 | #rnode_uri: null
242 |
243 | #
244 | # Per-user NGINX Passenger apps
245 | #
246 |
247 | # Sub-uri used to control PUN processes
248 | # Example:
249 | # nginx_uri: '/my_pun_controller'
250 | # Default: '/nginx'
251 | #nginx_uri: '/nginx'
252 |
253 | # Sub-uri used to access the PUN processes
254 | # Example:
255 | # pun_uri: '/my_pun_apps'
256 | # Default: '/pun'
257 | #pun_uri: '/pun'
258 |
259 | # Root directory that contains the PUN Unix sockets that the proxy uses to
260 | # connect to
261 | # Example:
262 | # pun_socket_root: '/path/to/pun/sockets'
263 | # Default: '/var/run/nginx' (default location set in nginx_stage)
264 | #pun_socket_root: '/var/run/nginx'
265 |
266 | # Number of times the proxy attempts to connect to the PUN Unix socket before
267 | # giving up and displaying an error to the user
268 | # Example:
269 | # pun_max_retries: 25
270 | # Default: 5 (only try 5 times)
271 | #pun_max_retries: 5
272 |
273 | # The PUN pre hook command to execute as root
274 | #
275 | # Example:
276 | # pun_pre_hook_root_cmd: '/opt/hpc-site/ood_pun_prehook'
277 | # Default: null (do not run any PUN pre hook as root)
278 | #pun_pre_hook_root_cmd: null
279 |
280 | # Comma separated list of environment variables to pass from the apache context
281 | # into the PUN pre hook. Defaults to null so nothing is exported.
282 | #
283 | # Example:
284 | # pun_pre_hook_exports: 'OIDC_ACCESS_TOKEN,OIDC_CLAIM_EMAIL'
285 | # Default: null (pass nothing)
286 | #pun_pre_hook_exports: null
287 |
288 | #
289 | # Support for OpenID Connect
290 | #
291 |
292 | # Sub-uri used by mod_auth_openidc for authentication
293 | # Example:
294 | # oidc_uri: '/oidc'
295 | # Default: null (disable OpenID Connect support)
296 | # CAS comment out
297 | #oidc_uri: /oidc
298 |
299 | # Sub-uri user is redirected to if they are not authenticated. This is used to
300 | # *discover* what ID provider the user will login through.
301 | # Example:
302 | # oidc_discover_uri: '/discover'
303 | # Default: null (disable support for discovering OpenID Connect IdP)
304 | #oidc_discover_uri: null
305 |
306 | # Root directory on the filesystem that serves the HTML code used to display
307 | # the discovery page
308 | # Example:
309 | # oidc_discover_root: '/var/www/ood/discover'
310 | # Default: null (disable support for discovering OpenID Connect IdP)
311 | #oidc_discover_root: null
312 |
313 | #
314 | # Support for registering unmapped users
315 | #
316 | # (Not necessary if using regular expressions for mapping users)
317 | #
318 |
319 | # Sub-uri user is redirected to if unable to map authenticated-user to
320 | # system-user
321 | # Example:
322 | # register_uri: '/register'
323 | # Default: null (display error to user if mapping fails)
324 | #register_uri: null
325 |
326 | # Root directory on the filesystem that serves the HTML code used to register
327 | # an unmapped user
328 | # Example:
329 | # register_root: '/var/www/ood/register'
330 | # Default: null (display error to user if mapping fails)
331 | #register_root: null
332 |
333 | # OIDC metadata URL
334 | # Example:
335 | # oidc_provider_metadata_url: https://example.com:5554/.well-known/openid-configuration
336 | # Default: null (value auto-generated if using Dex)
337 | #oidc_provider_metadata_url: null
338 |
339 | # OIDC client ID
340 | # Example:
341 | # oidc_client_id: ondemand.example.com
342 | # Default: null (value auto-generated if using Dex)
343 | #oidc_client_id: null
344 |
345 | # OIDC client secret
346 | # Example:
347 | # oidc_client_secret: 334389048b872a533002b34d73f8c29fd09efc50
348 | # Default: null (value auto-generated if using Dex)
349 | #oidc_client_secret: null
350 |
351 | # OIDC remote user claim. This is the claim that populates REMOTE_USER
352 | # Example:
353 | # oidc_remote_user_claim: preferred_username
354 | # Default: preferred_username
355 | #oidc_remote_user_claim: preferred_username
356 |
357 | # OIDC scopes
358 | # Example:
359 | # oidc_scope: "openid profile email groups"
360 | # Default: "openid profile email"
361 | #oidc_scope: "openid profile email"
362 |
363 | # OIDC session inactivity timeout
364 | # Example:
365 | # oidc_session_inactivity_timeout: 28800
366 | # Default: 28800
367 | #oidc_session_inactivity_timeout: 28800
368 |
369 | # OIDC session max duration
370 | # Example:
371 | # oidc_session_max_duration: 28800
372 | # Default: 28800
373 | #oidc_session_max_duration: 28800
374 |
375 | # OIDC max number of state cookies and if to automatically clean old cookies
376 | # Example:
377 | # oidc_state_max_number_of_cookies: "10 true"
378 | # Default: "10 true"
379 | #oidc_state_max_number_of_cookies: "10 true"
380 |
381 | # OIDC Enable SameSite cookie
382 | # When ssl is defined this defaults to 'Off'
383 | # When ssl is not defined this defaults to 'On'
384 | # Example:
385 | # oidc_cookie_same_site: 'Off'
386 | # Default: 'On'
387 | #oidc_cookie_same_site: 'On'
388 |
389 | # Additional OIDC settings as key-value pairs
390 | # Example:
391 | # oidc_settings:
392 | # OIDCPassIDTokenAs: serialized
393 | # OIDCPassRefreshToken: On
394 | # Default: {} (empty hash)
395 |
396 | # Dex configurations, values inside the "dex" structure are directly used to configure Dex
397 | # If the value for "dex" key is false or null, Dex support is disabled
398 | # Dex support will auto-enable if ondemand-dex package is installed
399 | #dex:
400 | # Default based on if ssl key for ood-portal-generator is defined
401 | # ssl: false
402 | # Only used if SSL is disabled
403 | # http_port: "5556"
404 | # Only used if SSL is enabled
405 | # https_port: "5554"
406 | # tls_cert and tls_key take OnDemand configured values for ssl and copy keys to /etc/ood/dex maintaining file names
407 | # tls_cert: null
408 | # tls_key: null
409 | # storage_file: /etc/ood/dex/dex.db
410 | # grpc: null
411 | # expiry: null
412 | # Client ID, defaults to servername or FQDN
413 | # client_id: null
414 | # client_name: OnDemand
415 | # Client secret, value auto generated
416 | # A value that is a filesystem path can be used to store secret in a file
417 | # client_secret: /etc/ood/dex/ondemand.secret
418 | # The OnDemand redirectURI is auto-generated, this option allows adding additional URIs
419 | # client_redirect_uris: []
420 | # Additional Dex OIDC clients to configure
421 | # static_clients: []
422 | # The following example is to configure OpenLDAP
423 | # Docs: https://github.com/dexidp/dex/blob/master/Documentation/connectors/ldap.md
424 | # connectors:
425 | # - type: ldap
426 | # id: ldap
427 | # name: LDAP
428 | # config:
429 | # host: openldap.my_center.edu:636
430 | # insecureSkipVerify: false
431 | # bindDN: cn=admin,dc=example,dc=org
432 | # bindPW: admin
433 | # userSearch:
434 | # baseDN: ou=People,dc=example,dc=org
435 | # filter: "(objectClass=posixAccount)"
436 | # username: uid
437 | # idAttr: uid
438 | # emailAttr: mail
439 | # nameAttr: gecos
440 | # preferredUsernameAttr: uid
441 | # groupSearch:
442 | # baseDN: ou=Groups,dc=example,dc=org
443 | # filter: "(objectClass=posixGroup)"
444 | # userMatchers:
445 | # - userAttr: DN
446 | # groupAttr: member
447 | # nameAttr: cn
448 | # frontend:
449 | # theme: ondemand
450 | # dir: /usr/share/ondemand-dex/web
451 |
452 | # Enabling maintenance mode
453 | use_rewrites: true
454 | use_maintenance: true
455 | maintenance_ip_whitelist:
456 | - '155.101.16.75'
457 |
--------------------------------------------------------------------------------
/granite.md:
--------------------------------------------------------------------------------
1 | # Adding a new cluster
2 |
3 | Quick notes for adding a new cluster to OnDemand, for Granite in the fall of 2024.
4 |
5 | - modify the script that produces `gpus.txt` to add granite
6 |
7 | - in myallocation remove
8 | ```
9 | "granite": ["granite"],
10 | ```
11 | and change the `"other"` entry so that it includes granite:
12 | ```
13 | "other": ["kingspeak", "notchpeak", "lonepeak", "ash", "granite"],
14 | ```
15 |
16 | - add granite sys branch to `/etc/fstab`:
17 | ```
18 | eth.vast.chpc.utah.edu:/sys/uufs/granite/sys /uufs/granite/sys nfs nolock,nfsvers=3,x-systemd.requires=NetworkManager-wait-online.service,x-systemd.after=network.service 0 0
19 | mkdir -p /uufs/granite/sys
20 | systemctl daemon-reload
21 | mount /uufs/granite/sys
22 | ```
23 | - add `granite.yml` from ondemand-test to `/etc/ood/config/clusters.d/`
24 | ```
25 | scp u0101881@ondemand-test:/etc/ood/config/clusters.d/granite.yml .
26 | ```
27 |
28 | - add granite to `/uufs/chpc.utah.edu/sys/ondemand/chpc-apps-v3.4/app-templates/clusters`
29 | - re-make symbolic link on ondemand-test
30 | ```
31 | rm /var/www/ood/apps/templates/cluster.txt
32 | ln -s /uufs/chpc.utah.edu/sys/ondemand/chpc-apps-v3.4/app-templates/clusters /var/www/ood/apps/templates/cluster.txt
33 | ```
34 |
35 | - get `/etc/ood/config/apps/dashboard/initializers/ood.rb` from ondemand-test
36 |
37 | - check that granite-gpu is visible and that GPU types are populated (they should be once `/uufs/chpc.utah.edu/sys/ondemand/chpc-apps/app-templates/cluster.txt` gets granite added)
38 |
39 | - modify `/etc/ood/config/apps/shell/env` to add:
40 | ```
41 | OOD_SSHHOST_ALLOWLIST="grn[0][0-9][0-9].int.chpc.utah.edu:notch[0-4][0-9][0-9].ipoib.int.chpc.utah.edu:lp[0-2][0-9][0-9].lonepeak.peaks:kp[0-3][0-9][0-9].ipoib.kingspeak.peaks:ash[2-4][0-9][0-9].ipoib.ash.peaks"
42 | ```
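
A quick sanity check of the allowlist addition (the node name below is only an illustrative granite hostname, assuming granite nodes follow the grnNNN naming):
```
# does an example granite node name match the pattern added above?
echo "grn001.int.chpc.utah.edu" | grep -E 'grn[0][0-9][0-9].int.chpc.utah.edu'
```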
43 |
44 | ## Change to the new account:partition:qos scheme
45 |
46 | in `/etc/ood/config/apps/dashboard/initializers/ood.rb`, instead of
47 | ```
48 | my_cmd = "/var/www/ood/apps/templates/get_alloc_all.sh"
49 | ```
50 | do
51 | ```
52 | my_cmd = %q[curl "https://portal.chpc.utah.edu/monitoring/ondemand/slurm_user_params?user=`whoami`&env=chpc"]
53 | ```
54 |
55 | in the app's `submit.yml.erb`:
56 | ```
57 | accounting_id: "<%= custom_accpart.split(":")[0]%>"
58 | queue_name: "<%= custom_accpart.split(":")[1] %>"
59 | qos: "<%= custom_accpart.split(":")[2] %>"
60 | ```
61 |
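For reference, each value returned by the portal endpoint is expected to be a colon-separated `account:partition:qos` triple, which is what the `split(":")` calls above rely on. A minimal bash sketch with a hypothetical value:
```
# hypothetical value of custom_accpart; submit.yml.erb splits it on ":" into
# accounting_id, queue_name, and qos respectively
accpart="owner-np:notchpeak:normal"
IFS=: read account partition qos <<< "$accpart"
echo "account=$account partition=$partition qos=$qos"
```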
62 |
--------------------------------------------------------------------------------
/httpd/conf.d/auth_cas.conf:
--------------------------------------------------------------------------------
1 | LoadModule auth_cas_module modules/mod_auth_cas.so
2 | CASCookiePath /var/cache/httpd/mod_auth_cas/
3 | CASCertificatePath /etc/pki/tls/certs/ca-bundle.crt
4 | CASLoginURL https://go.utah.edu/cas/login
5 | CASValidateURL https://go.utah.edu/cas/serviceValidate
6 | CASTimeout 0
7 |
--------------------------------------------------------------------------------
/httpd/conf.modules.d/00-mpm.conf:
--------------------------------------------------------------------------------
1 | # Select the MPM module which should be used by uncommenting exactly
2 | # one of the following LoadModule lines. See the httpd.conf(5) man
3 | # page for more information on changing the MPM.
4 |
5 | # prefork MPM: Implements a non-threaded, pre-forking web server
6 | # See: http://httpd.apache.org/docs/2.4/mod/prefork.html
7 | #
8 | # NOTE: If enabling prefork, the httpd_graceful_shutdown SELinux
9 | # boolean should be enabled, to allow graceful stop/shutdown.
10 | #
11 | #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
12 |
13 | # worker MPM: Multi-Processing Module implementing a hybrid
14 | # multi-threaded multi-process web server
15 | # See: http://httpd.apache.org/docs/2.4/mod/worker.html
16 | #
17 | #LoadModule mpm_worker_module modules/mod_mpm_worker.so
18 |
19 | # event MPM: A variant of the worker MPM with the goal of consuming
20 | # threads only for connections with active processing
21 | # See: http://httpd.apache.org/docs/2.4/mod/event.html
22 | #
23 | #LoadModule mpm_event_module modules/mod_mpm_event.so
24 | LoadModule mpm_event_module modules/mod_mpm_event.so
25 |
26 |
27 | ServerLimit 32
28 | StartServers 2
29 | MaxRequestWorkers 512
30 | MinSpareThreads 25
31 | MaxSpareThreads 75
32 | ThreadsPerChild 32
33 | MaxRequestsPerChild 0
34 | ThreadLimit 512
35 | ListenBacklog 511
36 |
37 |
--------------------------------------------------------------------------------
/httpd/location.txt:
--------------------------------------------------------------------------------
1 | /etc (= /etc/httpd)
2 |
--------------------------------------------------------------------------------
/install_scripts/build_cas.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | yum install libcurl-devel pcre-devel
4 | cd /usr/local/src
5 | wget https://github.com/apereo/mod_auth_cas/archive/v1.2.tar.gz
6 | tar xvzf v1.2.tar.gz
7 | cd mod_auth_cas-1.2
8 | autoreconf -iv
9 | ./configure --with-apxs=/usr/bin/apxs
10 | make
11 | make check
12 | make install
13 |
--------------------------------------------------------------------------------
/install_scripts/check_apache_config.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | /sbin/httpd -t
4 | RESULT=$?
5 | if [ $RESULT == 0 ]; then
6 | systemctl try-restart httpd.service htcacheclean.service
7 | /sbin/httpd -V
8 | else
9 | echo Config file syntax check failed
10 | fi
11 |
12 |
--------------------------------------------------------------------------------
/install_scripts/get_apps.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # General apps
4 | /uufs/chpc.utah.edu/sys/ondemand/chpc-apps/update.sh
5 | cd /var/www/ood/apps/sys
6 | mkdir org
7 | mv bc_desktop/ org
8 | cd /var/www/ood/apps
9 | ln -s /uufs/chpc.utah.edu/sys/ondemand/chpc-apps/app-templates templates
10 | cd /var/www/ood/apps/templates
11 | source /etc/profile.d/chpc.sh
12 | ./genmodulefiles.sh
13 |
14 | echo !!! Make sure to hand edit single version module files in /var/www/ood/apps/templates/*.txt
15 |
16 | # Class apps
17 | /uufs/chpc.utah.edu/sys/ondemand/chpc-class/update.sh
18 |
--------------------------------------------------------------------------------
/install_scripts/get_customizations.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Logo images
4 | wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/var/www/ood/public/CHPC-logo35.png -O /var/www/ood/public/CHPC-logo35.png
5 | wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/var/www/ood/public/CHPC-logo.png -O /var/www/ood/public/CHPC-logo.png
6 | wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/var/www/ood/public/chpc_logo_block.png -O /var/www/ood/public/chpc_logo_block.png
7 |
8 | # Locales
9 | mkdir -p /etc/ood/config/locales/
10 | wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/config/locales/en.yml -O /etc/ood/config/locales/en.yml
11 |
12 | # Dashboard, incl. logos, quota warnings,...
13 | mkdir -p /etc/ood/config/apps/dashboard/initializers/
14 | wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/config/apps/dashboard/initializers/ood.rb -O /etc/ood/config/apps/dashboard/initializers/ood.rb
15 | wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/config/apps/dashboard/env -O /etc/ood/config/apps/dashboard/env
16 |
17 | # Active jobs environment
18 | mkdir -p /etc/ood/config/apps/activejobs
19 | wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/config/apps/activejobs/env -O /etc/ood/config/apps/activejobs/env
20 |
21 | # Base apps configs
22 | mkdir -p /etc/ood/config/apps/bc_desktop/submit
23 | wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/config/apps/bc_desktop/submit/slurm.yml.erb -O /etc/ood/config/apps/bc_desktop/submit/slurm.yml.erb
24 | mkdir -p /etc/ood/config/apps/shell
25 | wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/config/apps/shell/env -O /etc/ood/config/apps/shell/env
26 | wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/var/www/ood/apps/sys/shell/bin/ssh -O /var/www/ood/apps/sys/shell/bin/ssh
27 | chmod a+x /var/www/ood/apps/sys/shell/bin/ssh
28 |
29 | #Announcements, XdMoD
30 | wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/config/announcement.md.motd -O /etc/ood/config/announcement.md.motd
31 | wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/config/nginx_stage.yml -O /etc/ood/config/nginx_stage.yml
32 |
33 | #Widgets/pinned apps
34 | mkdir /etc/ood/config/ondemand.d/
35 | wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/config/ondemand.d/ondemand.yml -O /etc/ood/config/ondemand.d/ondemand.yml
36 |
37 | # SLURM job templates
38 | mkdir -p /etc/ood/config/apps/myjobs
39 | ln -s /uufs/chpc.utah.edu/sys/ondemand/chpc-myjobs-templates /etc/ood/config/apps/myjobs/templates
40 |
41 |
--------------------------------------------------------------------------------
/install_scripts/setup_cas.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | mkdir -p /var/cache/httpd/mod_auth_cas
4 | chown apache:apache /var/cache/httpd/mod_auth_cas
5 | chmod a+rX /var/cache/httpd/mod_auth_cas
6 | tee -a /etc/httpd/conf.d/auth_cas.conf <<EOF

--------------------------------------------------------------------------------
/linux-host/Singularity:
--------------------------------------------------------------------------------
46 | # echo 'LANG="en_US.UTF-8"' >> /etc/default/locale
47 | # echo 'LC_ALL="en_US.UTF-8"' >> /etc/default/locale
48 | #
49 | # # set noninteractive installation
50 | # export DEBIAN_FRONTEND=noninteractive
51 | # #install tzdata package
52 | # apt install -y tzdata
53 | # # set your timezone
54 | # ln -fs /usr/share/zoneinfo/America/Denver /etc/localtime
55 | # dpkg-reconfigure --frontend noninteractive tzdata
56 | #
57 | # # need to create mount point for home dir
58 | # mkdir /uufs
59 | # mkdir /scratch
60 | #
61 | # # LMod
62 | yum install -y lua lua-devel lua-posix lua-term lua-filesystem lua-lpeg lua-json tcl-devel
63 |
64 | # apt install -y liblua5.2-0 liblua5.2-dev lua-filesystem-dev lua-filesystem lua-posix-dev lua-posix lua5.2 tcl tcl-dev #lua-term lua-term-dev lua-json
65 | # # Bug in Ubuntu, https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=891541
66 | # #ln -s /usr/lib/x86_64-linux-gnu/lua/5.1/posix_c.so /usr/lib/x86_64-linux-gnu/lua/5.1/posix.so
67 | # ln -s /usr/lib/x86_64-linux-gnu/libtcl8.6.so /usr/lib/x86_64-linux-gnu/libtcl8.5.so
68 |
69 | echo "
70 | if [ -f /uufs/chpc.utah.edu/sys/etc/profile.d/module.sh ]
71 | then
72 | . /uufs/chpc.utah.edu/sys/etc/profile.d/module.sh
73 | fi
74 | " > /etc/profile.d/91-chpc.sh
75 |
76 | echo "
77 | . /etc/profile.d/91-chpc.sh
78 | " >> /etc/bash.bashrc
79 |
80 | %environment
81 | PATH=/usr/local/bin:$PATH
82 | LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
83 |
84 | %test
85 | # Sanity check that the container is operating
86 | # make sure that numpy is using openblas
87 |
--------------------------------------------------------------------------------
/linux-host/build_container.sh:
--------------------------------------------------------------------------------
1 | imgname=lmod
2 | osname=centos7
3 | rm -f ${osname}_${imgname}.img
4 | # final version of container, read only (singularity shell -w will not work)
5 | sudo /uufs/chpc.utah.edu/sys/installdir/singularity3/3.5.2/bin/singularity build ${osname}_${imgname}.sif Singularity
6 | # sandbox image, allows container modification
7 | #sudo /uufs/chpc.utah.edu/sys/installdir/singularity3/3.5.2/bin/singularity build -s ${osname}_${imgname} Singularity
8 |
9 |
10 |
--------------------------------------------------------------------------------
/pe-config/apps/activejobs/env:
--------------------------------------------------------------------------------
1 | OOD_NAVBAR_TYPE=default
2 | OOD_DASHBOARD_HEADER_IMG_LOGO="/public/CHPC-logo.png"
3 |
4 |
--------------------------------------------------------------------------------
/pe-config/apps/bc_desktop/default.yml:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Interactive Desktop"
3 | description: |
4 | This app will launch an interactive Linux desktop on a **single compute node** or a Bristlecone node.
5 |
6 | This is meant for all types of tasks such as:
7 |
8 | - accessing & viewing files
9 | - compiling code
10 | - debugging
11 | - running visualization software **without** 3D hardware acceleration
12 |
13 | submit: "submit/slurm.yml.erb"
14 |
15 | #form:
16 | # - bc_vnc_idle
17 | # - desktop
18 | # - bc_num_hours
19 | # - bc_num_slots
20 | # - num_cores
21 | # - node_type
22 | # - bc_account
23 | # - bc_queue
24 | # - bc_vnc_resolution
25 | # - bc_email_on_started
26 | # - slurm_cluster
27 |
28 |
--------------------------------------------------------------------------------
/pe-config/apps/bc_desktop/single_cluster/bristlecone.yml:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Frisco Desktop"
3 | cluster: "frisco"
4 | submit: "linux_host"
5 | attributes:
6 | bc_queue: null
7 | bc_account: null
8 | bc_num_slots: 1
9 | num_cores: none
10 |
11 |
--------------------------------------------------------------------------------
/pe-config/apps/bc_desktop/single_cluster/redwood.yml:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Redwood Desktop"
3 | cluster: "redwood"
4 | submit: "submit/slurm.yml.erb"
5 |
--------------------------------------------------------------------------------
/pe-config/apps/bc_desktop/submit/slurm.yml.erb:
--------------------------------------------------------------------------------
1 | ---
2 | script:
3 | <%- if /bristlecone/.match(cluster) == nil -%>
4 | native:
5 | - "-N"
6 | - "<%= bc_num_slots %>"
7 | - "-n"
8 | - "<%= num_cores %>"
9 | <%- end -%>
10 |
--------------------------------------------------------------------------------
/pe-config/apps/bc_desktop/submit/slurm.yml.erb.orig:
--------------------------------------------------------------------------------
1 | ---
2 | script:
3 | <%- if /frisco/.match(cluster) == nil -%>
4 | native:
5 | - "-N"
6 | - "<%= bc_num_slots %>"
7 | - "-n"
8 | - "<%= num_cores %>"
9 | <%- end -%>
10 |
--------------------------------------------------------------------------------
/pe-config/apps/dashboard/env:
--------------------------------------------------------------------------------
1 | OOD_DASHBOARD_TITLE="CHPC PE OnDemand"
2 | OOD_PORTAL="pe-ondemand"
3 |
4 | #BOOTSTRAP_NAVBAR_INVERSE_BG='rgb(200,16,46)'
5 | #BOOTSTRAP_NAVBAR_INVERSE_LINK_COLOR='rgb(255,255,255)'
6 | OOD_NAVBAR_TYPE=default
7 |
8 |
9 | MOTD_PATH="/etc/ood/config/announcement.md.motd"
10 | MOTD_FORMAT="markdown" # markdown, txt, rss
11 |
12 | # header logo
13 | OOD_DASHBOARD_HEADER_IMG_LOGO="/public/CHPC-logo.png"
14 | # logo in the main window
15 | OOD_DASHBOARD_SUPPORT_URL="http://www.chpc.utah.edu/about/contact.php"
16 | OOD_DASHBOARD_SUPPORT_EMAIL="helpdesk@chpc.utah.edu"
17 | OOD_DASHBOARD_DOCS_URL="https://www.chpc.utah.edu/documentation/software/ondemand.php"
18 | OOD_DASHBOARD_PASSWD_URL="https://www.acs.utah.edu/uofu/acs/uupasswd/portal/password/selfservice/ForgottenPasswordHelp.jsp"
19 |
20 | # quotas here are only for the general environment; use a different file for PE?
21 | OOD_QUOTA_PATH="https://www.chpc.utah.edu/apps/systems/curl_post/pequota.json"
22 | OOD_QUOTA_THRESHOLD="0.90"
23 |
24 |
25 | # BOOTSTRAP_NAVBAR_HEIGHT='80px'
26 |
27 |
--------------------------------------------------------------------------------
/pe-config/apps/dashboard/initializers/ood.rb:
--------------------------------------------------------------------------------
1 | # /etc/ood/config/apps/dashboard/initializers/ood.rb
2 |
3 | OodFilesApp.candidate_favorite_paths.tap do |paths|
4 | # add project space directories
5 | # projects = User.new.groups.map(&:name).grep(/^P./)
6 | # paths.concat projects.map { |p| Pathname.new("/fs/project/#{p}") }
7 |
8 | # add scratch space directories
9 | paths << Pathname.new("/scratch/general/pe-nfs1/#{User.new.name}")
10 | # paths << Pathname.new("/scratch/general/lustre/#{User.new.name}")
11 |
12 | # group dir based on user's main group
13 | #project = OodSupport::User.new.group.name
14 | #paths.concat Pathname.glob("/uufs/chpc.utah.edu/common/home/#{project}-group*")
15 |
16 | # group dir based on all user's groups
17 | OodSupport::User.new.groups.each do |group|
18 | #paths.concat Pathname.glob("/uufs/chpc.utah.edu/common/HIPAA/#{group.name}-group*")
19 | #paths.concat Pathname.glob("/uufs/chpc.utah.edu/common/HIPAA/#{*")
20 | end
21 | end
22 |
--------------------------------------------------------------------------------
/pe-config/apps/shell/env:
--------------------------------------------------------------------------------
1 | DEFAULT_SSHHOST="redwood1.chpc.utah.edu"
2 |
3 | # ssh wrapper sets session timeout to 2 hours
4 | OOD_SSH_WRAPPER=/var/www/ood/apps/sys/shell/bin/ssh
5 |
6 | # as of v 1.8 only hosts listed below are allowed to ssh to with the shell app
7 | OOD_SSHHOST_ALLOWLIST="rw[0-4][0-9][0-9].ipoib.int.chpc.utah.edu"
8 |
9 |
--------------------------------------------------------------------------------
/pe-config/clusters.d/redwood.yml:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "Redwood"
5 | priority: 2
6 | login:
7 | host: "redwood.chpc.utah.edu"
8 | job:
9 | adapter: "slurm"
10 | bin: "/uufs/redwood.bridges/sys/installdir/slurm/std/bin"
11 | custom:
12 | xdmod:
13 | resource_id: 16
14 | queues:
15 | - "redwood"
16 | - "redwood-guest"
17 | - "redwood-freecycle"
18 | batch_connect:
19 | basic:
20 | script_wrapper: |
21 | if [ -z "$LMOD_VERSION" ]; then
22 | source /etc/profile.d/chpc.sh
23 | fi
24 | export XDG_RUNTIME_DIR=$(mktemp -d)
25 | # reset SLURM_EXPORT_ENV so that things like srun & sbatch have the same environment as the host
26 | export SLURM_EXPORT_ENV=ALL
27 | %s
28 | set_host: "host=$(/uufs/chpc.utah.edu/sys/bin/hostfromroute.sh pe-ondemand.chpc.utah.edu)"
29 | #set_host: "host=$(hostname -A | awk '{print $2}')"
30 | vnc:
31 | script_wrapper: |
32 | if [ -z "$LMOD_VERSION" ]; then
33 | source /etc/profile.d/chpc.sh
34 | fi
35 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
36 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.8.0/bin/websockify"
37 | export XDG_RUNTIME_DIR=$(mktemp -d)
38 | # reset SLURM_EXPORT_ENV so that things like srun & sbatch have the same environment as the host
39 | export SLURM_EXPORT_ENV=ALL
40 | %s
41 | set_host: "host=$(/uufs/chpc.utah.edu/sys/bin/hostfromroute.sh pe-ondemand.chpc.utah.edu)"
42 | # set_host: "host=$(hostname -A | awk '{print $2}')"
43 |
--------------------------------------------------------------------------------
/pe-config/clusters.d/redwood.yml.orig:
--------------------------------------------------------------------------------
1 | ---
2 | v2:
3 | metadata:
4 | title: "Redwood"
5 | priority: 2
6 | login:
7 | host: "redwood.chpc.utah.edu"
8 | job:
9 | adapter: "slurm"
10 | cluster: "redwood"
11 | bin: "/uufs/redwood.bridges/sys/installdir/slurm/std/bin"
12 | custom:
13 | xdmod:
14 | resource_id: 16
15 | queues:
16 | - "redwood"
17 | - "redwood-guest"
18 | - "redwood-freecycle"
19 | batch_connect:
20 | basic:
21 | script_wrapper: |
22 | if [ -z "$LMOD_VERSION" ]; then
23 | source /etc/profile.d/chpc.sh
24 | fi
25 | export XDG_RUNTIME_DIR=$(mktemp -d)
26 | # reset SLURM_EXPORT_ENV so that things like srun & sbatch have the same environment as the host
27 | export SLURM_EXPORT_ENV=ALL
28 | %s
29 | set_host: "host=$(/uufs/chpc.utah.edu/sys/bin/hostfromroute.sh pe-ondemand.chpc.utah.edu)"
30 | vnc:
31 | script_wrapper: |
32 | if [ -z "$LMOD_VERSION" ]; then
33 | source /etc/profile.d/chpc.sh
34 | fi
35 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
36 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.8.0/bin/websockify"
37 | export XDG_RUNTIME_DIR=$(mktemp -d)
38 | # reset SLURM_EXPORT_ENV so that things like srun & sbatch have the same environment as the host
39 | export SLURM_EXPORT_ENV=ALL
40 | %s
41 | set_host: "host=$(/uufs/chpc.utah.edu/sys/bin/hostfromroute.sh pe-ondemand.chpc.utah.edu)"
42 |
43 |
--------------------------------------------------------------------------------
/pe-config/locales/en.yml:
--------------------------------------------------------------------------------
1 | en:
2 | dashboard:
3 | quota_reload_message: "Reload page to see updated quota. Quotas are updated every hour."
4 | welcome_html: |
5 |
6 |
--------------------------------------------------------------------------------
/pe-config/nginx_stage.yml:
--------------------------------------------------------------------------------
1 | #
2 | # This is an example NginxStage CLI configuration file. It contains the
3 | # configuration options that can be specified to meet your system requirements.
4 | # See https://github.com/OSC/nginx_stage for detailed information about
5 | # NginxStage. In particular see
6 | # https://github.com/OSC/nginx_stage/blob/master/lib/nginx_stage/configuration.rb
7 | # for a detailed list of all possible configuration options and their default
8 | # settings.
9 | #
10 | # Below you can find the default values for each configuration option commented
11 | # out. Feel free to uncomment it and make modifications or write your
12 | # modifications directly below the commented defaults.
13 | #
14 |
15 | ---
16 |
17 | # Path to the OnDemand version file
18 | #
19 | #ondemand_version_path: '/opt/ood/VERSION'
20 |
21 | # Unique name of this OnDemand portal used to namespace multiple hosted portals
22 | # NB: If this is not set then most apps will use default namespace "ondemand"
23 | #
24 | #ondemand_portal: null
25 |
26 | # Title of this OnDemand portal that apps *should* display in their navbar
27 | # NB: If this is not set then most apps will use default title "Open OnDemand"
28 | #
29 | #ondemand_title: null
30 |
31 | # Custom environment variables to set for the PUN environment
32 | # Below is an example of the use for setting env vars.
33 | #
34 | pun_custom_env:
35 | OOD_XDMOD_HOST: "https://pe-xdmod.chpc.utah.edu"
36 | # OOD_DASHBOARD_TITLE: "Open OnDemand"
37 | # OOD_BRAND_BG_COLOR: "#53565a"
38 | # OOD_BRAND_LINK_ACTIVE_BG_COLOR: "#fff"
39 |
40 | # List of environment variables to pass onto PUN environment
41 | # from /etc/ood/profile. Example below shows some default
42 | # env vars that are declared.
43 | #
44 | # pun_custom_env_declarations:
45 | # - PATH
46 | # - LD_LIBRARY_PATH
47 | # - MANPATH
48 | # - SCLS
49 | # - X_SCLS
50 |
51 | # Location of the ERB templates used in the generation of the NGINX configs
52 | #
53 | #template_root: '/opt/ood/nginx_stage/templates'
54 |
55 | # The reverse proxy daemon user used to access the Unix domain sockets
56 | #
57 | #proxy_user: 'apache'
58 |
59 | # Path to NGINX executable used by OnDemand
60 | #
61 | #nginx_bin: '/opt/ood/ondemand/root/usr/sbin/nginx'
62 |
63 | # White-list of signals that can be sent to the NGINX process
64 | #
65 | #nginx_signals:
66 | # - 'stop'
67 | # - 'quit'
68 | # - 'reopen'
69 | # - 'reload'
70 |
71 | # Path to NGINX 'mime.types' file used by OnDemand
72 | #
73 | #mime_types_path: '/opt/ood/ondemand/root/etc/nginx/mime.types'
74 |
75 | # Path to Passenger 'locations.ini' file used by OnDemand.
76 | #
77 | #passenger_root: '/opt/ood/ondemand/root/usr/share/ruby/vendor_ruby/phusion_passenger/locations.ini'
78 | #
79 |
80 | # Path to Ruby binary used by nginx_stage
81 | #
82 | #passenger_ruby: '/opt/ood/nginx_stage/bin/ruby'
83 |
84 | # Path to system-installed Node.js binary
85 | # Set to `false` if you don't want this specified in nginx config
86 | #
87 | #passenger_nodejs: '/opt/ood/nginx_stage/bin/node'
88 |
89 | # Path to system-installed Python binary
90 | # Set to `false` if you don't want this specified in nginx config
91 | #
92 | #passenger_python: '/opt/ood/nginx_stage/bin/python'
93 |
94 | # The maximum number of seconds that an application process may be idle.
95 | # Set to `false` if you don't want this specified in the nginx config
96 | #
97 | #passenger_pool_idle_time: 300
98 |
99 | # Hash of Passenger configuration options
100 | # Keys without passenger_ prefix will be ignored
101 | #
102 | #passenger_options: {}
103 |
104 | # Max file upload size in bytes (e.g., 10737420000)
105 | #
106 | #nginx_file_upload_max: '10737420000'
107 |
108 | # Root location of per-user NGINX configs
109 | #
110 | #pun_config_path: '/var/lib/ondemand-nginx/config/puns/%{user}.conf'
111 |
112 | # Root location of per-user NGINX tmp dirs
113 | #
114 | #pun_tmp_root: '/var/tmp/ondemand-nginx/%{user}'
115 |
116 | # Path to the per-user NGINX access log
117 | #
118 | #pun_access_log_path: '/var/log/ondemand-nginx/%{user}/access.log'
119 |
120 | # Path to the per-user NGINX error log
121 | #
122 | #pun_error_log_path: '/var/log/ondemand-nginx/%{user}/error.log'
123 |
124 | # Custom format configuration for access log
125 | #
126 | #pun_log_format: '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"'
127 |
128 | # Path to the per-user NGINX pid file
129 | #
130 | #pun_pid_path: '/var/run/ondemand-nginx/%{user}/passenger.pid'
131 |
132 | # Path to the per-user NGINX socket file
133 | #
134 | #pun_socket_path: '/var/run/ondemand-nginx/%{user}/passenger.sock'
135 |
136 | # Path to the local filesystem root where the per-user NGINX process serves
137 | # files from for the user making use of the sendfile feature in NGINX
138 | #
139 | #pun_sendfile_root: '/'
140 |
141 | # The internal URI used to access the local filesystem for downloading files
142 | # from the apps (not accessible directly by client browser)
143 | #
144 | #pun_sendfile_uri: '/sendfile'
145 |
146 | # List of hashes helping define wildcard app config locations. These are the
147 | # arguments for {#app_config_path}.
148 | #
149 | #pun_app_configs:
150 | # - env: 'dev'
151 | # owner: '%{user}'
152 | # name: '*'
153 | # - env: 'usr'
154 | # owner: '*'
155 | # name: '*'
156 | # - env: 'sys'
157 | # owner: ''
158 | # name: '*'
159 |
160 | # A hash detailing the path to the per-user NGINX app configs
161 | #
162 | #app_config_path:
163 | # dev: '/var/lib/ondemand-nginx/config/apps/dev/%{owner}/%{name}.conf'
164 | # usr: '/var/lib/ondemand-nginx/config/apps/usr/%{owner}/%{name}.conf'
165 | # sys: '/var/lib/ondemand-nginx/config/apps/sys/%{name}.conf'
166 |
167 | # A hash detailing the locations on the file system where apps reside for the
168 | # corresponding environment
169 | #
170 | #app_root:
171 | # dev: '/var/www/ood/apps/dev/%{owner}/gateway/%{name}'
172 | # usr: '/var/www/ood/apps/usr/%{owner}/gateway/%{name}'
173 | # sys: '/var/www/ood/apps/sys/%{name}'
174 | #
175 | # If you want to enable app development like in 1.3, where dev apps live in each user's home directory,
176 | # use this app_root block instead:
177 | #
178 | #app_root:
179 | # dev: '~%{owner}/%{portal}/dev/%{name}'
180 | # usr: '/var/www/ood/apps/usr/%{owner}/gateway/%{name}'
181 | # sys: '/var/www/ood/apps/sys/%{name}'
182 |
183 | # A hash detailing the app's request URI not including the base-URI
184 | #
185 | #app_request_uri:
186 | # dev: '/dev/%{name}'
187 | # usr: '/usr/%{owner}/%{name}'
188 | # sys: '/sys/%{name}'
189 |
190 | # A hash detailing the regular expressions used to define the app namespace
191 | # from a given URI request. Should match {#app_request_uri}.
192 | #
193 | #app_request_regex:
194 | # dev: '^/dev/(?<name>[-\w.]+)'
195 | # usr: '^/usr/(?<owner>[\w]+)\/(?<name>[-\w.]+)'
196 | # sys: '^/sys/(?<name>[-\w.]+)'
197 |
198 | # A hash detailing the tokens used to identify individual apps
199 | #
200 | #app_token:
201 | # dev: 'dev/%{owner}/%{name}'
202 | # usr: 'usr/%{owner}/%{name}'
203 | # sys: 'sys/%{name}'
204 |
205 | # A hash detailing the Passenger environment to run the app under within the
206 | # PUN
207 | #
208 | #app_passenger_env:
209 | # dev: 'development'
210 | # usr: 'production'
211 | # sys: 'production'
212 |
213 | # Regular expression used to validate a given user name. The user name supplied
214 | # must match the regular expression to be considered valid.
215 | #
216 | #user_regex: '[\w@\.\-]+'
217 |
218 | # Minimum user id required to generate per-user NGINX server as the requested
219 | # user
220 | #
221 | #min_uid: 1000
222 |
223 | # Restrict starting up per-user NGINX process as user with this shell.
224 | # NB: This only affects the `pun` command, you are still able to start or stop
225 | # the PUN using other commands (e.g., `nginx`, `nginx_clean`, ...)
226 | #
227 | #disabled_shell: '/access/denied'
228 |
229 | # Set BUNDLE_USER_CONFIG to /dev/null in the PUN environment
230 | # NB: This prevents a user's ~/.bundle/config from affecting OnDemand applications
231 | #
232 | #disable_bundle_user_config: true
233 |
--------------------------------------------------------------------------------
/pe-config/ood_portal.yml:
--------------------------------------------------------------------------------
1 | ---
2 | #
3 | # Portal configuration
4 | #
5 |
6 | # The address and port to listen for connections on
7 | # Example:
8 | # listen_addr_port: 443
9 | # Default: null (don't add any more listen directives)
10 | listen_addr_port:
11 | - '443'
12 | - '80'
13 |
14 | # The server name used for name-based Virtual Host
15 | # Example:
16 | # servername: 'www.example.com'
17 | # Default: null (don't use name-based Virtual Host)
18 | servername: pe-ondemand.chpc.utah.edu
19 | ssl:
20 | - 'SSLProtocol All -SSLv2 -SSLv3 -TLSv1 -TLSv1.1'
21 | - 'SSLHonorCipherOrder on'
22 | - 'SSLCipherSuite EECDH:ECDH:-RC4:-3DES'
23 | - 'SSLCertificateFile "/opt/rh/httpd24/root/etc/httpd/conf/ssl/pe-ondemand_chpc_utah_edu_cert.cer"'
24 | - 'SSLCertificateKeyFile "/opt/rh/httpd24/root/etc/httpd/conf/ssl/pe-ondemand.chpc.utah.edu.key"'
25 | - 'SSLCertificateChainFile "/opt/rh/httpd24/root/etc/httpd/conf/ssl/intermediate.cer"'
26 | auth:
27 | - 'AuthType CAS'
28 | - 'Require valid-user'
29 | - 'CASScope /'
30 | # - 'RequestHeader edit* Cookie "(^MOD_AUTH_CAS[^;]*(;\s*)?|;\s*MOD_AUTH_CAS[^;]*)" ""'
31 | # - 'RequestHeader unset Cookie "expr=-z %{req:Cookie}"'
32 | - 'RequestHeader unset Authorization'
33 |
34 | # The server name used for rewrites
35 | # Example:
36 | # proxy_server: 'proxy.example.com'
37 | # Default: The value of servername
38 | #proxy_server: null
39 |
40 | # The port specification for the Virtual Host
41 | # Example:
42 | # port: 8080
43 | #Default: null (use default port 80 or 443 if SSL enabled)
44 | #port: null
45 |
46 | # List of SSL Apache directives
47 | # Example:
48 | # ssl:
49 | # - 'SSLCertificateFile "/etc/pki/tls/certs/www.example.com.crt"'
50 | # - 'SSLCertificateKeyFile "/etc/pki/tls/private/www.example.com.key"'
51 | # Default: null (no SSL support)
52 | #ssl: null
53 |
54 | # Root directory of log files (can be relative ServerRoot)
55 | # Example:
56 | # logroot: '/path/to/my/logs'
57 | # Default: 'logs' (this is relative to ServerRoot)
58 | #logroot: 'logs'
59 |
60 | # Error log filename
61 | # Example:
62 | # errorlog: 'error.log'
63 | # Default: 'error.log' (If 'servername' and 'ssl' options are defined
64 | # the default value will be _error_ssl.log)
65 | #errorlog: 'error.log'
66 |
67 | # Access log filename
68 | # Example:
69 | # accesslog: 'access.log'
70 | # Default: 'access.log' (If 'servername' and 'ssl' options are defined
71 | # the default value will be _access_ssl.log)
72 | #accesslog: 'access.log'
73 |
74 | # Apache access log format (Don't specify log nickname see: http://httpd.apache.org/docs/current/mod/mod_log_config.html#transferlog)
75 | # Example:
76 | # logformat: '"%v %h \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %{SSL_PROTOCOL}x %T"'
77 | # Default: Apache combined format
78 |
79 | # Should RewriteEngine be used
80 | # Example:
81 | # use_rewrites: false
82 | # Default: true
83 | #use_rewrites: true
84 |
85 | # Should Maintenance Rewrite rules be added
86 | # Example:
87 | # use_maintenance: false
88 | # Default: true
89 | #use_maintenance: true
90 |
91 | # List of IPs to whitelist when maintenance is enabled
92 | # Example:
93 | # maintenance_ip_whitelist:
94 | # - 192.168.0..*
95 | # - 192.168.1..*
96 | # Default: [] (no IPs whitelisted)
97 | #maintenance_ip_whitelist: []
98 |
99 | # Set Header Content-Security-Policy frame-ancestors.
100 | # Example:
101 | # security_csp_frame_ancestors: https://ondemand.osc.edu
102 | # Example to disable setting:
103 | # security_csp_frame_ancestors: false
104 | # Default: based on servername and ssl settings
105 | #security_csp_frame_ancestors:
106 |
107 | # Set Header Strict-Transport-Security to help enforce SSL
108 | # Example:
109 | # security_strict_transport: false
110 | # Default: true when ssl is enabled, false otherwise
111 | #security_strict_transport: false
112 |
113 | # Root directory of the Lua handler code
114 | # Example:
115 | # lua_root: '/path/to/lua/handlers'
116 | # Default : '/opt/ood/mod_ood_proxy/lib' (default install directory of mod_ood_proxy)
117 | #lua_root: '/opt/ood/mod_ood_proxy/lib'
118 |
119 | # Verbosity of the Lua module logging
120 | # (see https://httpd.apache.org/docs/2.4/mod/core.html#loglevel)
121 | # Example:
122 | # lua_log_level: 'warn'
123 | # Default: 'info' (get verbose logs)
124 | #lua_log_level: 'info'
125 |
126 | # Lua regular expression used to map authenticated-user to system-user
127 | # This configuration is ignored if user_map_cmd is defined
128 | # Example:
129 | # user_map_match: '^([^@]+)@.*$'
130 | # Default: '.*'
131 | # user_map_match: '.*'
132 |
133 | # System command used to map authenticated-user to system-user
134 | # This option takes precedence over user_map_match
135 | # Example:
136 | # user_map_cmd: '/usr/local/bin/ondemand-usermap'
137 | # Default: null (use user_map_match)
138 | #user_map_cmd: null
139 |
140 | # Use an alternative CGI environment variable instead of REMOTE_USER for
141 | # determining the authenticated-user fed to the mapping script
142 | # Example:
143 | # user_env: 'OIDC_CLAIM_preferred_username'
144 | # Default: null (use REMOTE_USER)
145 | #user_env: null
146 |
147 | # Redirect the user to the following URI if their authenticated-user fails to map to
148 | # a system-user
149 | # Example:
150 | # map_fail_uri: '/register'
151 | # Default: null (don't redirect, just display error message)
152 | #map_fail_uri: null
153 |
154 | # System command used to run the `nginx_stage` script with sudo privileges
155 | # Example:
156 | # pun_stage_cmd: 'sudo /path/to/nginx_stage'
157 | # Default: 'sudo /opt/ood/nginx_stage/sbin/nginx_stage' (don't forget sudo)
158 | #pun_stage_cmd: 'sudo /opt/ood/nginx_stage/sbin/nginx_stage'
159 |
160 | # List of Apache authentication directives
161 | # NB: Be sure the appropriate Apache module is installed for this
162 | # Default: (see below, uses OIDC auth with Dex)
163 | #auth:
164 | # - 'AuthType openid-connect'
165 | # - 'Require valid-user'
166 |
167 | # Redirect user to the following URI when accessing root URI
168 | # Example:
169 | # root_uri: '/my_uri'
170 | # # https://www.example.com/ => https://www.example.com/my_uri
171 | # Default: '/pun/sys/dashboard' (default location of the OOD Dashboard app)
172 | #root_uri: '/pun/sys/dashboard'
173 |
174 | # Track server-side analytics with a Google Analytics account and property
175 | # (see https://github.com/OSC/mod_ood_proxy/blob/master/lib/analytics.lua for
176 | # information on how to setup the GA property)
177 | # Example:
178 | analytics:
179 | url: 'http://www.google-analytics.com/collect'
180 | id: 'UA-122259839-2'
181 | # Default: null (do not track)
182 | #analytics: null
183 |
184 | #
185 | # Publicly available assets
186 | #
187 |
188 | # Public sub-uri (available to public with no authentication)
189 | # Example:
190 | # public_uri: '/assets'
191 | # Default: '/public'
192 | #public_uri: '/public'
193 |
194 | # Root directory that serves the public sub-uri (be careful, everything under
195 | # here is open to the public)
196 | # Example:
197 | # public_root: '/path/to/public/assets'
198 | # Default: '/var/www/ood/public'
199 | #public_root: '/var/www/ood/public'
200 |
201 | #
202 | # Logout redirect helper
203 | #
204 |
205 | # Logout sub-uri
206 | # Example
207 | # logout_uri: '/log_me_out'
208 | # NB: If you change this, then modify the Dashboard app with the new sub-uri
209 | # Default: '/logout' (the Dashboard app is by default going to expect this)
210 | #logout_uri: '/logout'
211 |
212 | # Redirect user to the following URI when accessing logout URI
213 | # Example:
214 | # logout_redirect: '/oidc?logout=https%3A%2F%2Fwww.example.com'
215 | # Default: '/pun/sys/dashboard/logout' (the Dashboard app provides a simple
216 | # HTML page explaining logout to the user)
217 | logout_redirect: '/pun/sys/dashboard/logout'
218 |
219 | #
220 | # Reverse proxy to backend nodes
221 | #
222 |
223 | # Regular expression used for whitelisting allowed hostnames of nodes
224 | # Example:
225 | # host_regex: '[\w.-]+\.example\.com'
226 | # Default: '[^/]+' (allow reverse proxying to all hosts, this allows external
227 | # hosts as well)
228 | #host_regex: '[\w.-]+\.(peaks|arches|int.chpc.utah.edu|chpc.utah.edu)'
229 | host_regex: '[\w.-]+\.(ipoib.int.chpc.utah.edu|chpc.utah.edu|int.chpc.utah.edu)'
230 | #host_regex: '[^/]+'
231 | #host_regex: '(.*?)'
232 |
233 | # Sub-uri used to reverse proxy to backend web server running on node that
234 | # knows the full URI path
235 | # Example:
236 | node_uri: '/node'
237 | # Default: null (disable this feature)
238 | #node_uri: null
239 |
240 | # Sub-uri used to reverse proxy to backend web server running on node that
241 | # ONLY uses *relative* URI paths
242 | # Example:
243 | rnode_uri: '/rnode'
244 | # Default: null (disable this feature)
245 | #rnode_uri: null
246 |
247 | #
248 | # Per-user NGINX Passenger apps
249 | #
250 |
251 | # Sub-uri used to control PUN processes
252 | # Example:
253 | # nginx_uri: '/my_pun_controller'
254 | # Default: '/nginx'
255 | #nginx_uri: '/nginx'
256 |
257 | # Sub-uri used to access the PUN processes
258 | # Example:
259 | # pun_uri: '/my_pun_apps'
260 | # Default: '/pun'
261 | #pun_uri: '/pun'
262 |
263 | # Root directory that contains the PUN Unix sockets that the proxy uses to
264 | # connect to
265 | # Example:
266 | # pun_socket_root: '/path/to/pun/sockets'
267 | # Default: '/var/run/ondemand-nginx' (default location set in nginx_stage)
268 | #pun_socket_root: '/var/run/ondemand-nginx'
269 |
270 | # Number of times the proxy attempts to connect to the PUN Unix socket before
271 | # giving up and displaying an error to the user
272 | # Example:
273 | # pun_max_retries: 25
274 | # Default: 5 (only try 5 times)
275 | #pun_max_retries: 5
276 |
277 | # The PUN pre hook command to execute as root
278 | #
279 | # Example:
280 | # pun_pre_hook_root_cmd: '/opt/hpc-site/ood_pun_prehook'
281 | # Default: null (do not run any PUN pre hook as root)
282 | #pun_pre_hook_root_cmd: null
283 |
284 | # Comma separated list of environment variables to pass from the apache context
285 | # into the PUN pre hook. Defaults to null so nothing is exported.
286 | #
287 | # Example:
288 | # pun_pre_hook_exports: 'OIDC_ACCESS_TOKEN,OIDC_CLAIM_EMAIL'
289 | # Default: null (pass nothing)
290 | #pun_pre_hook_exports: null
291 |
292 | #
293 | # Support for OpenID Connect
294 | #
295 |
296 | # Sub-uri used by mod_auth_openidc for authentication
297 | # Example:
298 | # oidc_uri: '/oidc'
299 | # Default: null (disable OpenID Connect support)
300 | #oidc_uri: null
301 |
302 | # Sub-uri user is redirected to if they are not authenticated. This is used to
303 | # *discover* what ID provider the user will login through.
304 | # Example:
305 | # oidc_discover_uri: '/discover'
306 | # Default: null (disable support for discovering OpenID Connect IdP)
307 | #oidc_discover_uri: null
308 |
309 | # Root directory on the filesystem that serves the HTML code used to display
310 | # the discovery page
311 | # Example:
312 | # oidc_discover_root: '/var/www/ood/discover'
313 | # Default: null (disable support for discovering OpenID Connect IdP)
314 | #oidc_discover_root: null
315 |
316 | #
317 | # Support for registering unmapped users
318 | #
319 | # (Not necessary if using regular expressions for mapping users)
320 | #
321 |
322 | # Sub-uri user is redirected to if unable to map authenticated-user to
323 | # system-user
324 | # Example:
325 | # register_uri: '/register'
326 | # Default: null (display error to user if mapping fails)
327 | #register_uri: null
328 |
329 | # Root directory on the filesystem that serves the HTML code used to register
330 | # an unmapped user
331 | # Example:
332 | # register_root: '/var/www/ood/register'
333 | # Default: null (display error to user if mapping fails)
334 | #register_root: null
335 |
336 | # Enabling maintenance mode
337 | use_rewrites: true
338 | use_maintenance: true
339 | # maintenance_ip_whitelist:
340 | # examples only! Your ip regular expressions will be specific to your site.
341 | # - '155.101.16.75'
342 |
343 | # OIDC metadata URL
344 | # Example:
345 | # oidc_provider_metadata_url: https://example.com:5554/.well-known/openid-configuration
346 | # Default: null (value auto-generated if using Dex)
347 | #oidc_provider_metadata_url: null
348 |
349 | # OIDC client ID
350 | # Example:
351 | # oidc_client_id: ondemand.example.com
352 | # Default: null (value auto-generated if using Dex)
353 | #oidc_client_id: null
354 |
355 | # OIDC client secret
356 | # Example:
357 | # oidc_client_secret: 334389048b872a533002b34d73f8c29fd09efc50
358 | # Default: null (value auto-generated if using Dex)
359 | #oidc_client_secret: null
360 |
361 | # OIDC remote user claim. This is the claim that populates REMOTE_USER
362 | # Example:
363 | # oidc_remote_user_claim: preferred_username
364 | # Default: preferred_username
365 | #oidc_remote_user_claim: preferred_username
366 |
367 | # OIDC scopes
368 | # Example:
369 | # oidc_scope: "openid profile email groups"
370 | # Default: "openid profile email"
371 | #oidc_scope: "openid profile email"
372 |
373 | # OIDC session inactivity timeout
374 | # Example:
375 | # oidc_session_inactivity_timeout: 28800
376 | # Default: 28800
377 | #oidc_session_inactivity_timeout: 28800
378 |
379 | # OIDC session max duration
380 | # Example:
381 | # oidc_session_max_duration: 28800
382 | # Default: 28800
383 | #oidc_session_max_duration: 28800
384 |
385 | # OIDC max number of state cookies and if to automatically clean old cookies
386 | # Example:
387 | # oidc_state_max_number_of_cookies: "10 true"
388 | # Default: "10 true"
389 | #oidc_state_max_number_of_cookies: "10 true"
390 |
391 | # OIDC Enable SameSite cookie
392 | # When ssl is defined this defaults to 'Off'
393 | # When ssl is not defined this defaults to 'On'
394 | # Example:
395 | # oidc_cookie_same_site: 'Off'
396 | # Default: 'On'
397 | #oidc_cookie_same_site: 'On'
398 |
399 | # Additional OIDC settings as key-value pairs
400 | # Example:
401 | # oidc_settings:
402 | # OIDCPassIDTokenAs: serialized
403 | # OIDCPassRefreshToken: On
404 | # Default: {} (empty hash)
405 |
406 | # Dex configurations, values inside the "dex" structure are directly used to configure Dex
407 | # If the value for "dex" key is false or null, Dex support is disabled
408 | # Dex support will auto-enable if ondemand-dex package is installed
409 | #dex:
410 | # Default based on if ssl key for ood-portal-generator is defined
411 | # ssl: false
412 | # Only used if SSL is disabled
413 | # http_port: "5556"
414 | # Only used if SSL is enabled
415 | # https_port: "5554"
416 | # tls_cert and tls_key take OnDemand configured values for ssl and copy keys to /etc/ood/dex maintaining file names
417 | # tls_cert: null
418 | # tls_key: null
419 | # storage_file: /etc/ood/dex/dex.db
420 | # grpc: null
421 | # expiry: null
422 | # Client ID, defaults to servername or FQDN
423 | # client_id: null
424 | # client_name: OnDemand
425 | # Client secret, value auto generated
426 | # A value that is a filesystem path can be used to store secret in a file
427 | # client_secret: /etc/ood/dex/ondemand.secret
428 | # The OnDemand redirectURI is auto-generated, this option allows adding additional URIs
429 | # client_redirect_uris: []
430 | # Additional Dex OIDC clients to configure
431 | # static_clients: []
432 | # The following example is to configure OpenLDAP
433 | # Docs: https://github.com/dexidp/dex/blob/master/Documentation/connectors/ldap.md
434 | # connectors:
435 | # - type: ldap
436 | # id: ldap
437 | # name: LDAP
438 | # config:
439 | # host: openldap.my_center.edu:636
440 | # insecureSkipVerify: false
441 | # bindDN: cn=admin,dc=example,dc=org
442 | # bindPW: admin
443 | # userSearch:
444 | # baseDN: ou=People,dc=example,dc=org
445 | # filter: "(objectClass=posixAccount)"
446 | # username: uid
447 | # idAttr: uid
448 | # emailAttr: mail
449 | # nameAttr: gecos
450 | # preferredUsernameAttr: uid
451 | # groupSearch:
452 | # baseDN: ou=Groups,dc=example,dc=org
453 | # filter: "(objectClass=posixGroup)"
454 | # userMatchers:
455 | # - userAttr: DN
456 | # groupAttr: member
457 | # nameAttr: cn
458 | # frontend:
459 | # theme: ondemand
460 | # dir: /usr/share/ondemand-dex/web
461 |
--------------------------------------------------------------------------------
/quota.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | # NOTE - this is pseudocode that writes the OOD json-format quota file
4 | # CHPC has the quota data in a database; data from the quota text file(s) are ingested into
5 | # the database using a different script not published here (contains sensitive information)
6 | # in this example we assume the user writes a parser that parses the flat text file produced by
7 | # the xfs_quota command; for a file system at path ${i} the command would be:
8 | # /usr/sbin/xfs_quota -x -c 'report -lupbi' 2>/dev/null ${i} >> /tmp/quota_report/${i}_usr-prgquota.out
9 |
10 | import json
11 | import os
12 | from collections import OrderedDict
13 | import time
14 | import sys
15 |
16 |
17 | class Quota:
18 | def getUserQuota(self):
19 | # ignore filesystem name like "/dev/mapper/..." and mammoth
20 | filesystems = {}
21 | quota = {}
22 | # create a parser that reads the quota text file(s)
23 |
24 | # in our database read, we first loop is over the file systems (individual ${i}_usr-prgquota.out files),
25 | # which have the following information stored in the "results"
26 | # - filesystem space_used_bytes space_soft space_file_count
27 | # this would be for the whole file system
28 | if len(results) > 0:
29 | for row in results:
30 | filesystems[row[0]] = row
31 | # now get the individual entries in the file system, fields
32 | # used_bytes,soft,file_count
33 | # now start filling in the dictionary "quota"
34 | quota["version"] = 1;
35 | quota["timestamp"] = time.time();
36 | quota["quotas"] = []
37 |
38 | # this loops over all the entries for the given file system
39 | for row in results:
40 | ishome = False
41 | if 'home' == row[1]:
42 | ishome = True
43 | path = "/path-to-home/"+row[0]
44 | else:
45 | path = "/path-to-group/" + row[1]+"/"+row[0]
46 | if filesystems[row[1]][2]:
47 | space_soft = filesystems[row[1]][2][0:-1]
48 | else:
49 | space_soft = 0
50 | if row[3]:
51 | user_space_soft = row[3][0:-1]
52 | else:
53 | user_space_soft = 0
54 | if filesystems[row[1]][1]:
55 | total_block_usage = int(filesystems[row[1]][1])//1024
56 | else:
57 | total_block_usage =0
58 | if filesystems[row[1]][3] :
59 | total_file =int(filesystems[row[1]][3])
60 | else:
61 | total_file = 0
62 | if not ishome: # home dirs have quota set
63 | quota["quotas"].append({
64 | "type":"fileset",
65 | "user":row[0],
66 | "path":path,
67 | "block_usage":int(row[2]/1024),
68 | "total_block_usage": total_block_usage,
69 | "block_limit":int(space_soft)*1024*1024,
70 | "file_usage":int(row[4]),
71 | "total_file_usage":total_file,
72 | "file_limit":0
73 | })
74 | else: # other file systems' quota is their total available space
75 | quota["quotas"].append({
76 | "user":row[0],
77 | "path":path,
78 | "total_block_usage": int(row[2]/1024),
79 | "block_limit":int(user_space_soft)*1024*1024,
80 | "total_file_usage": int(row[4]),
81 | "file_limit":0
82 | })
83 | # print(quota)
84 | return quota
85 |
86 |
87 | if __name__ == "__main__":
88 | q = Quota()
89 | quota = q.getUserQuota()
90 | dir_path = os.path.dirname(__file__)
91 | with open(dir_path+"/../htdocs/apps/systems/curl_post/quota.json", 'w') as f:
92 | f.write(json.dumps(quota, indent=4))
93 |
--------------------------------------------------------------------------------
/readme.md:
--------------------------------------------------------------------------------
1 | Notes on Open OnDemand at CHPC
2 | =================
3 |
4 | Table of Contents
5 |
6 | * [Useful links](#useful-links)
7 | * [CHPC setup](#chpc-setup)
8 | * [Installation notes](#installation-notes)
9 | * [Authentication](#authentication)
10 | * [LDAP](#ldap)
11 | * [Keycloak](#keycloak)
12 | * [CAS](#cas)
13 | * [Apache configuration](#apache-configuration)
14 | * [Cluster configuration files](#cluster-configuration-files)
15 | * [Job templates](#job-templates)
16 | * [SLURM setup](#slurm-setup)
17 | * [Frisco jobs setup](#frisco-jobs-setup)
18 | * [OOD customization](#ood-customization)
19 | * [Additional directories (scratch) in Files Explorer](#additional-directories-scratch-in-files-explorer)
20 | * [Interactive desktop](#interactive-desktop)
21 | * [Other interactive apps](#other-interactive-apps)
22 | * [SLURM partitions in the interactive apps](#slurm-partitions-in-the-interactive-apps)
23 | * [SLURM accounts and partitions available to user, part 1](#slurm-accounts-and-partitions-available-to-user-part-1)
24 | * [SLURM accounts and partitions available to user, part 2](#slurm-accounts-and-partitions-available-to-user-part-2)
25 | * [Dynamic partition filtering](#dynamic-partition-filtering)
26 | * [Auto-filling GPU information](#auto-filling-gpu-information)
27 | * [Dynamic GPU filtering](#dynamic-gpu-filtering)
28 | * [Hiding job input fields when Frisco nodes are selected](#hiding-job-input-fields-when-frisco-nodes-are-selected)
29 | * [Google Analytics](#google-analytics)
31 | * [Impersonation](#impersonation)
32 |
33 |
34 | ## Useful links
35 |
36 | - documentation: [https://osc.github.io/ood-documentation/master/index.html](https://osc.github.io/ood-documentation/master/index.html)
37 | - installation [https://osc.github.io/ood-documentation/master/installation.html](https://osc.github.io/ood-documentation/master/installation.html)
38 | - github repo: [https://github.com/OSC/Open-OnDemand](https://github.com/OSC/Open-OnDemand)
39 | - recommendation for OOD deployment from OSC: [https://figshare.com/articles/Deploying_and_Managing_an_OnDemand_Instance/9170585](https://figshare.com/articles/Deploying_and_Managing_an_OnDemand_Instance/9170585)
40 |
41 | ## CHPC setup
42 |
43 | CHPC runs OOD on a VM that mounts the cluster file systems (needed to see users' files and to run SLURM commands). We have two VMs: [ondemand.chpc.utah.edu](https://ondemand.chpc.utah.edu) is the production machine, which we update only occasionally, and [ondemand-test.chpc.utah.edu](https://ondemand-test.chpc.utah.edu) is a testing VM where we experiment. We recommend this approach to prevent prolonged downtime of the production machine - we once had an auto-update break authentication, and it took us a long time to troubleshoot and fix.
44 | Also, a dedicated short-walltime/test queue or partition that starts jobs promptly is essential to support the interactive desktop and apps, which are one of the big strengths of OOD. We did not have this at first, which led to minimal use of OOD. Use picked up after we dedicated two 64-core nodes to a partition with an 8-hour walltime limit and a 32-core per-user CPU limit.
45 |
46 | ## Installation notes
47 |
48 | Follow the [installation instructions](https://osc.github.io/ood-documentation/master/installation.html), which are quite straightforward now with the yum-based packaging. The person doing the install needs at least sudo on the OnDemand server and should have SSL certificates ready.
49 |
50 | ### Authentication
51 |
52 | We used LDAP first, then Keycloak, and now CAS. CAS is much simpler to set up than Keycloak. In general, we followed the [authentication](https://osc.github.io/ood-documentation/master/authentication.html) section of the install guide.
53 |
54 | #### LDAP
55 | As for LDAP, following the [LDAP setup instructions](https://osc.github.io/ood-documentation/master/installation/add-ldap.html), we first made sure we could talk to LDAP, e.g., in our case:
56 | ```
57 | $ ldapsearch -LLL -x -H ldaps://ldap.ad.utah.edu:636 -D 'cn=chpc atlassian,ou=services,ou=administration,dc=ad,dc=utah,dc=edu' -b ou=people,dc=ad,dc=utah,dc=edu -W -s sub samaccountname=u0101881 "*"
58 | ```
59 | and then modified the LDAP settings for our purposes as
60 | ```
61 | AuthLDAPURL "ldaps://ldap.ad.utah.edu:636/ou=People,dc=ad,dc=utah,dc=edu?sAMAccountName" SSL
62 | AuthLDAPGroupAttribute cn
63 | AuthLDAPGroupAttributeIsDN off
64 | AuthLDAPBindDN "cn=chpc atlassian,ou=services,ou=administration,dc=ad,dc=utah,dc=edu"
65 | AuthLDAPBindPassword ****
66 | ```
67 |
68 | #### Keycloak
69 |
70 | Here is what Steve did beyond what is listed in the OOD instructions:
71 |
72 | The OOD instructions omit the step of running Keycloak with a
73 | production RDBMS. So the first thing is that, even if you have NO content in the H2 database it ships with,
74 | you have to dump a copy of that schema out and then import it into the MySQL DB.
75 |
76 | First get the Java MySQL connector and put it in the right place:
77 | ```
78 | mkdir /opt/keycloak/modules/system/layers/base/com/mysql/main
79 | cp mysql-connector-java-8.0.15.jar /opt/keycloak/modules/system/layers/base/com/mysql/main/.
80 | touch /opt/keycloak/modules/system/layers/base/com/mysql/main/module.xml
81 | chown -R keycloak. /opt/keycloak/modules/system/layers/base/com/mysql
82 | ```
83 | The documentation had a red herring, with this incorrect path:
84 | ```
85 | /opt/keycloak/modules/system/layers/keycloak/com/mysql/main/module.xml
86 | ```
87 |
88 | but the path that actually works is:
89 | ```
90 | cat /opt/keycloak/modules/system/layers/base/com/mysql/main/module.xml
91 | ```
92 | -----------------------------------------
93 | ```
94 |
95 |
96 |
97 |
98 |
99 |
100 |
101 |
102 |
103 |
104 | ```
105 | ---------------------------------------------------
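The `module.xml` follows the standard WildFly module layout; a minimal sketch, assuming the connector jar copied above (adjust the jar name to your version):
```
<?xml version="1.0" ?>
<module xmlns="urn:jboss:module:1.3" name="com.mysql">
  <resources>
    <!-- must match the jar file copied into this directory -->
    <resource-root path="mysql-connector-java-8.0.15.jar"/>
  </resources>
  <dependencies>
    <module name="javax.api"/>
    <module name="javax.transaction.api"/>
  </dependencies>
</module>
```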
106 |
107 | DB migration
108 | ```
109 | bin/standalone.sh -Dkeycloak.migration.action=export \
110 |   -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=exported_realms \
111 |   -Dkeycloak.migration.strategy=OVERWRITE_EXISTING
112 | ```
113 | Then you have to add the MySQL connector to the config (leave the H2 connector in there too)
114 | ```
115 | vim /opt/keycloak/standalone/configuration/standalone.xml
116 | ```
117 |
118 | -----------------------------------------------
119 | ```
120 |
121 |
123 | jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
124 | h2
125 |
126 | sa
127 | sa
128 |
129 |
130 |
132 | jdbc:mysql://localhost:3306/keydb?useSSL=false&characterEncoding=UTF-8
133 | mysql
134 |
135 | 5
136 | 15
137 |
138 |
139 | keycloak
140 | PasswordremovedforDocumentation
141 |
142 |
143 |
145 | true
146 |
148 |
149 |
150 |
151 |
152 | com.mysql.cj.jdbc.Driver
153 | com.mysql.cj.jdbc.MysqlXADataSource
154 |
155 |
156 | org.h2.jdbcx.JdbcDataSource
157 |
158 |
159 |
160 | ```
161 | -----------------------------
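For orientation, the datasource and driver definitions in `standalone.xml` generally take the following shape; this is a sketch built around the values shown above, not a verbatim copy of our file:
```
<datasources>
  <!-- stock H2 datasource, left in place -->
  <datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" enabled="true">
    <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE</connection-url>
    <driver>h2</driver>
    <security>
      <user-name>sa</user-name>
      <password>sa</password>
    </security>
  </datasource>
  <!-- MySQL datasource that Keycloak actually uses -->
  <datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true">
    <connection-url>jdbc:mysql://localhost:3306/keydb?useSSL=false&amp;characterEncoding=UTF-8</connection-url>
    <driver>mysql</driver>
    <pool>
      <min-pool-size>5</min-pool-size>
      <max-pool-size>15</max-pool-size>
    </pool>
    <security>
      <user-name>keycloak</user-name>
      <password>PasswordremovedforDocumentation</password>
    </security>
  </datasource>
  <drivers>
    <driver name="mysql" module="com.mysql">
      <driver-class>com.mysql.cj.jdbc.Driver</driver-class>
      <xa-datasource-class>com.mysql.cj.jdbc.MysqlXADataSource</xa-datasource-class>
    </driver>
    <driver name="h2" module="com.h2database.h2">
      <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
    </driver>
  </drivers>
</datasources>
```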
162 | ```
163 | bin/standalone.sh -Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir \
164 |   -Dkeycloak.migration.dir=exported_realms \
165 |   -Dkeycloak.migration.strategy=OVERWRITE_EXISTING
166 | ```
167 | The documentation for adding in the MySQL jar driver was really bad, and I had to piece a working version
168 | together from 3 or 4 examples.
169 |
170 | Another HUGE gotcha that stumped me for way too long: with the new "tightened up" security in the Java runtime,
171 | the connector throws a fit if the timezone is not specified. To fix it, just add this to the
172 | ```[mysqld]``` section of ```/etc/my.cnf```:
173 | ```
174 | default_time_zone='-07:00'
175 | ```
176 |
177 | Keycloak config
178 |
179 | (I presume this was done through the Keycloak web interface) - this is specific to us, so other institutions will need their own AD servers, groups, etc.
180 | ```
181 | Edit Mode: READ_ONLY
182 | Username LDAP Attribute: sAMAccountName
183 | RDN LDAP Attribute: cn
184 | UUID LDAP attribute: objectGUID
185 | connection URL: ldaps://ldap.ad.utah.edu:636 ldaps://ring.ad.utah.edu:636
186 | Users DN: ou=People,DC=ad,DC=utah,DC=edu
187 | Auth type: simple
188 | Bind DN: cn=chpc atlassian,ou=services,ou=administration,dc=ad,dc=utah,dc=edu
189 | Bind password: notbloodylikely
190 | Custom User LDAP Filter: (&(sAMAccountName=*)(memberOf=CN=chpc-users,OU=Groups,OU=CHPC,OU=Department
191 | OUs,DC=ad,DC=utah,DC=edu))
192 | Search scope: Subtree
193 | ```
194 | Everything else is left at the default.
195 | Under User Federation > LDAP > LDAP Mappers I had to switch username to map to sAMAccountName.
196 |
197 | Note: The default Java memory on the Keycloak service is fairly low; our machine got wedged, presumably because of that, so we bumped up the Java memory settings from `-Xms64m -Xmx512m` to `-Xms1024m -Xmx2048m`.
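These settings typically live in the `JAVA_OPTS` line of Keycloak's startup configuration; a sketch, assuming the standalone WildFly layout under `/opt/keycloak` (the remaining flags depend on your install):
```
# /opt/keycloak/bin/standalone.conf (sketch)
JAVA_OPTS="-Xms1024m -Xmx2048m -Djava.net.preferIPv4Stack=true"
```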
198 |
199 | #### CAS
200 |
201 | Campus authentication, which in our case includes DUO.
202 |
203 | First request CAS access from IAM for the new OOD server:
204 | [https://uofu.service-now.com/it?id=uu_catalog_item&sys_id=40338e0d945991007c6da33495dbb00c](https://uofu.service-now.com/it?id=uu_catalog_item&sys_id=40338e0d945991007c6da33495dbb00c)
205 |
206 | If upgrading from a previous authentication setup, `/etc/httpd` points to `/opt/rh/httpd24/root/etc/httpd`. First
207 | ```
208 | cd /etc/
209 | rm httpd
210 | ```
211 |
212 | ```
213 | yum -y install epel-release
214 | ```
215 | This is likely already in place, but be sure you have EPEL, as that's where mod_auth_cas comes from:
216 | ```
217 | yum -y install mod_auth_cas
218 | ```
219 | (this pulls in httpd as an unnecessary dependency; because OOD uses httpd24-httpd, just make sure httpd stays disabled)
220 |
221 | Verify httpd is disabled in systemd.
222 |
223 | Move away the httpd installed as the mod_auth_cas dependency and establish the right links to httpd24:
224 | ```
225 | mv httpd/ httpd-old-httpd
226 | ln -s /opt/rh/httpd24/root/etc/httpd /etc/httpd
227 | mkdir -p /var/cache/httpd/mod_auth_cas
228 | ln -s /var/cache/httpd/mod_auth_cas /opt/rh/httpd24/root/var/cache/httpd/mod_auth_cas
229 | chmod a+rx /opt/rh/httpd24/root/var/cache/httpd/mod_auth_cas
230 | ln -s /usr/lib64/httpd/modules/mod_auth_cas.so /opt/rh/httpd24/root/etc/httpd/modules/mod_auth_cas.so
231 | ```
232 |
233 | The configuration files:
234 | ```
235 | $ cat /opt/rh/httpd24/root/etc/httpd/conf.d/auth_cas.conf
236 | LoadModule auth_cas_module modules/mod_auth_cas.so
237 | CASCookiePath /var/cache/httpd/mod_auth_cas/
238 | CASCertificatePath /etc/pki/tls/certs/ca-bundle.crt
239 | CASLoginURL https://go.utah.edu/cas/login
240 | CASValidateURL https://go.utah.edu/cas/serviceValidate
241 | CASTimeout 0
242 | ```
243 | ```
244 | $ cat /etc/httpd/conf.modules.d/10-auth_cas.conf
245 | #
246 | # mod_auth_cas is an Apache 2.2/2.4 compliant module that supports the
247 | # CASv1 and CASv2 protocols
248 | #
249 |
250 | LoadModule ssl_module modules/mod_ssl.so
251 |
252 |
253 | LoadModule auth_cas_module modules/mod_auth_cas.so
254 |
255 | ```
256 | And in `/etc/ood/config/ood_portal.yml:`
257 | ```
258 | auth:
259 | - 'AuthType CAS'
260 | - 'Require valid-user'
261 | - 'CASScope /'
262 | - 'RequestHeader unset Authorization'
263 | ```
264 |
265 | Check that there is `+:ALL:LOCAL` before `-:ALL EXCEPT (chpc) (wheel) root:ALL`
266 | in `/etc/security/access.conf`.
267 |
268 | Build and install new Apache configuration
269 | ```
270 | sudo /opt/ood/ood-portal-generator/sbin/update_ood_portal
271 | ```
272 |
273 | Restart Apache:
274 | ```
275 | sudo systemctl restart httpd24-httpd
276 | ```
277 |
278 | ### Apache configuration
279 |
280 | The stock Apache config that comes with CentOS is relatively weak. We learned this the hard way when a class of 30 people was unable to all connect to the OnDemand server at the same time.
281 |
282 | We follow the [recommendations from OSC](https://discourse.osc.edu/t/ood-host-configuration-recommendations/883) on the Apache settings. These settings have made the web server more responsive and allowed it to support more simultaneous connections. In particular, modify the file ```/etc/httpd/conf.modules.d/00-mpm.conf``` (on CentOS 7, ```/opt/rh/httpd24/root/etc/httpd/conf.modules.d/00-mpm.conf```):
283 | ```
284 | #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
285 | LoadModule mpm_event_module modules/mod_mpm_event.so
286 |
287 | <IfModule mpm_event_module>
288 | ServerLimit 32
289 | StartServers 2
290 | MaxRequestWorkers 512
291 | MinSpareThreads 25
292 | MaxSpareThreads 75
293 | ThreadsPerChild 32
294 | MaxRequestsPerChild 0
295 | ThreadLimit 512
296 | ListenBacklog 511
297 | </IfModule>
298 | ```
299 |
300 | Check Apache config syntax: ```/opt/rh/httpd24/root/sbin/httpd -t```
301 |
302 | Then restart Apache as ```systemctl try-restart httpd24-httpd.service httpd24-htcacheclean.service```.
303 |
304 | Check that the Server MPM is event: ```/opt/rh/httpd24/root/sbin/httpd -V```
305 |
306 | #### Web server monitoring
307 |
308 | Monitoring the web server performance is useful to see if the web server configuration and hardware are sufficient for the load. We installed the Apache mod_status module and the netdata monitoring tool following [these instructions](https://www.tecmint.com/monitor-apache-performance-using-netdata-on-centos/). Steve also added basic authentication to the netdata web server.
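For reference, a typical way to expose mod_status behind access control looks like this (a sketch, not our exact configuration; the netdata side follows the linked instructions):
```
# /etc/httpd/conf.d/status.conf (sketch)
ExtendedStatus On
<Location "/server-status">
    SetHandler server-status
    # restrict to the local/monitoring host
    Require local
</Location>
```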
309 |
310 | ### Cluster configuration files
311 |
312 | Following the [OOD docs](https://osc.github.io/ood-documentation/latest/installation/add-cluster-config.html), we have one configuration file for each cluster, listed in [clusters.d of this repo](https://github.com/CHPC-UofU/OnDemand-info/tree/master/config/clusters.d).
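A minimal cluster file has roughly this shape (a sketch; the hostname and SLURM `bin` path below are placeholders - see the real files in `clusters.d` of this repo):
```
# /etc/ood/config/clusters.d/kingspeak.yml (sketch)
---
v2:
  metadata:
    title: "Kingspeak"
  login:
    host: "kingspeak.chpc.utah.edu"
  job:
    adapter: "slurm"
    cluster: "kingspeak"
    bin: "/path/to/slurm/bin"   # placeholder; point at your SLURM install
```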
313 |
314 | ### Job templates
315 |
316 | Following the [job composer app docs](https://osc.github.io/ood-documentation/master/applications/job-composer.html#job-composer), we have created a directory with templates in my user space (```/uufs/chpc.utah.edu/common/home/u0101881/ondemand/chpc-myjobs-templates```), which is symlinked to OOD's expected location:
317 | ```
318 | $ ln -s /uufs/chpc.utah.edu/common/home/u0101881/ondemand/chpc-myjobs-templates /etc/ood/config/apps/myjobs/templates
319 | ```
320 |
321 | Our user-facing templates are versioned in a [GitHub repo](https://github.com/CHPC-UofU/chpc-myjobs-templates).
322 |
323 | ### SLURM setup
324 |
325 | - mount sys branch for SLURM
326 | - munge setup
327 | ```
328 | $ sudo yum install munge-devel munge munge-libs
329 | $ sudo rsync -av kingspeak1:/etc/munge/ /etc/munge/
330 | $ sudo systemctl enable munge
331 | $ sudo systemctl start munge
332 | ```
333 |
334 | Replace kingspeak1 with a node of your SLURM cluster.
335 |
336 | ### Frisco jobs setup
337 |
338 | To launch jobs on our interactive "frisco" nodes, we use the [Linux Host Adapter](https://osc.github.io/ood-documentation/release-1.7/installation/resource-manager/linuxhost.html#resource-manager-linuxhost).
339 |
340 | We follow the install instructions, in particular creating the files [```/etc/ood/config/clusters.d/frisco.yml```](https://github.com/CHPC-UofU/OnDemand-info/blob/master/config/clusters.d/frisco.yml) and [```/etc/ood/config/apps/bc_desktop/frisco.yml```](https://github.com/CHPC-UofU/OnDemand-info/blob/master/config/apps/bc_desktop/frisco.yml). We create a [Singularity container](https://github.com/CHPC-UofU/OnDemand-info/tree/master/linux-host) with CentOS 7 and place it in the sys branch so that the frisco hosts can read it.
341 |
342 | To make it work, we had to make the following changes:
343 | - set up host-based SSH authentication and open the firewall on the friscos to ondemand.
344 | - modify ```set_host``` in ```clusters.d/frisco.yml``` so that the host is hard-set to the ```chpc.utah.edu``` network route. Friscos have 3 different network interfaces, and we need to make sure that OOD consistently uses the same interface for all its communication.
345 | - currently we only allow offload to frisco1, as OOD defaults to a round-robin hostname distribution while the friscos don't round-robin.
346 | - modify the reverse proxy regex in ```/etc/ood/config/ood_portal.yml``` to include the chpc.utah.edu domain.
347 | - modify ```/var/www/ood/apps/sys/bc_desktop/submit.yml.erb``` - we have a custom ```num_cores``` field to request a certain number of cores, and had to wrap it in an if statement:
348 | ```
349 | <%- if num_cores != "none" -%>
350 | - "-n <%= num_cores %>"
351 | <%- end -%>
352 | ```
353 | while ```/etc/ood/config/apps/bc_desktop/frisco.yml``` has ```num_cores: none```.
354 |
355 |
356 | ### OOD customization
357 |
358 | Following OOD's [customization](https://osc.github.io/ood-documentation/master/customization.html) guide; see our [config directory of this repo](https://github.com/CHPC-UofU/OnDemand-info/tree/master/config).
359 |
360 | We also have some logos in ```/var/www/ood/public``` that get used by the webpage frontend.
361 |
362 | #### Local dashboard adjustments
363 |
364 | In ```/etc/ood/config/locales/en.yml``` we disable the big logo and adjust the file quota message:
365 | ```
366 | en:
367 | dashboard:
368 | quota_reload_message: "Reload page to see updated quota. Quotas are updated every hour."
369 | welcome_html: |
370 | ```
371 |
372 | #### Additional directories (scratch) in Files Explorer
373 |
374 | To show scratch file systems, we first need to mount them on the OnDemand server, e.g.:
375 | ```
376 | $ cat /etc/fstab
377 | ...
378 | kpscratch.ipoib.wasatch.peaks:/scratch/kingspeak/serial /scratch/kingspeak/serial nfs timeo=16,retrans=8,tcp,nolock,atime,diratime,hard,intr,nfsvers=3 0 0
379 |
380 | $ mkdir -p /scratch/kingspeak/serial
381 | $ mount /scratch/kingspeak/serial
382 | ```
383 | Then follow [Add Shortcuts to Files Menu](https://osc.github.io/ood-documentation/master/customization.html#add-shortcuts-to-files-menu) to create ```/etc/ood/config/apps/dashboard/initializers/ood.rb``` as follows:
384 | ```
385 | OodFilesApp.candidate_favorite_paths.tap do |paths|
386 | paths << Pathname.new("/scratch/kingspeak/serial/#{User.new.name}")
387 | end
388 | ```
389 | The menu item will only show if the directory exists.
390 |
391 | Similarly, for the group space directories, we can loop over all of the user's groups and add the existing paths via:
392 | ```
393 | User.new.groups.each do |group|
394 | paths.concat Pathname.glob("/uufs/chpc.utah.edu/common/home/#{group.name}-group*")
395 | end
396 | ```
397 |
398 | Lustre is a bit messier since it requires the Lustre client and a kernel driver - though this is the same kind of setup done on all cluster nodes, so an admin would know what to do (ours did it for us).
399 |
400 | Here's the full [```/etc/ood/config/apps/dashboard/initializers/ood.rb```](https://github.com/CHPC-UofU/OnDemand-info/blob/master/config/apps/dashboard/initializers/ood.rb) file.
401 |
402 | #### Disk quota warnings
403 |
404 | Following [https://osc.github.io/ood-documentation/master/customization.html#disk-quota-warnings-on-dashboard](https://osc.github.io/ood-documentation/master/customization.html#disk-quota-warnings-on-dashboard) with some adjustments based on [https://discourse.osc.edu/t/disk-quota-warnings-page-missing-some-info/716](https://discourse.osc.edu/t/disk-quota-warnings-page-missing-some-info/716).
405 |
406 | Each OOD machine needs a cron job that pulls the json files from the web servers where they are generated. These files are located at
407 | ```
408 | https://portal.chpc.utah.edu/monitoring/ondemand/storage_quota.json
409 | https://www.chpc.utah.edu/apps/systems/curl_post/quota.json
410 | ```
411 | The first file covers the more recent file servers like the VAST; the second covers legacy file servers like the group spaces.
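A cron job on the OOD server along these lines does the pull (a sketch; the destination file names match the `OOD_QUOTA_PATH` setting below):
```
# /etc/cron.hourly/ood-quota (sketch)
#!/bin/bash
curl -s -o /etc/ood/config/apps/dashboard/quota.json \
    https://portal.chpc.utah.edu/monitoring/ondemand/storage_quota.json
curl -s -o /etc/ood/config/apps/dashboard/quota_legacy.json \
    https://www.chpc.utah.edu/apps/systems/curl_post/quota.json
```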
412 |
413 | The JSON files with user storage info are produced from the quota logs hourly; then, in ```/etc/ood/config/apps/dashboard/env```:
414 | ```
415 | OOD_QUOTA_PATH="/etc/ood/config/apps/dashboard/quota.json:/etc/ood/config/apps/dashboard/quota_legacy.json"
416 | OOD_QUOTA_THRESHOLD="0.90"
417 | ```
418 |
419 | For the recent file systems' json file, Paul gets the data from the VAST and stores it on the CHPC Django portal, portal.chpc.utah.edu.
420 |
421 | For the legacy https curl to work, we had to allow the OOD servers access on www.chpc.utah.edu.
422 |
423 | To get the legacy json file, our storage admin Sam runs a script on our XFS systems hourly to produce flat files that contain the quota information and sends them to our web server, where our webadmin Chonghuan has a parser that ingests this info into a database. Chonghuan then wrote a script that queries the database and creates the json file. A doctored version of this script, which assumes that one parses the flat files themselves, is in this repo as `quota.py`.
424 |
425 | ### Interactive desktop
426 |
427 | Running a graphical desktop on an interactive node requires VNC and Websockify installed on the compute nodes, and setting up the reverse proxy. This is all described at the [Setup Interactive Apps](https://osc.github.io/ood-documentation/master/app-development/interactive/setup.html) help section.
428 |
429 | For us, this also required installing X and desktop environments on the interactive nodes:
430 | ```
431 | $ sudo yum install gdm
432 | $ sudo yum groupinstall "X Window System"
433 | $ sudo yum groupinstall "Mate Desktop"
434 | ```
435 |
436 | Then we install websockify and TurboVNC on our application file server as non-root (special user ```hpcapps```):
437 | ```
438 | $ cd /uufs/chpc.utah.edu/sys/installdir
439 | $ cd turbovnc
440 | $ wget http://downloads.sourceforge.net/project/turbovnc/2.1/turbovnc-2.1.x86_64.rpm
441 | $ rpm2cpio turbovnc-2.1.x86_64.rpm | cpio -idmv
442 | # mv to the appropriate version location
443 | $ cd .../websockify
444 | $ wget ftp://mirror.switch.ch/pool/4/mirror/centos/7.3.1611/virt/x86_64/ovirt-4.1/python-websockify-0.8.0-1.el7.noarch.rpm
445 | $ rpm2cpio python-websockify-0.8.0-1.el7.noarch.rpm | cpio -idmv
446 | # mv to the appropriate version location
447 | ```
448 | Then the appropriate vnc sections in the cluster definition files look like this (the whole batch_connect section):
449 | ```
450 | batch_connect:
451 | basic:
452 | set_host: "host=$(hostname -A | awk '{print $2}')"
453 | vnc:
454 | script_wrapper: |
455 | export PATH="/uufs/chpc.utah.edu/sys/installdir/turbovnc/std/opt/TurboVNC/bin:$PATH"
456 | export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.8.0/bin/websockify"
457 | %s
458 | set_host: "host=$(hostname -A | awk '{print $2}')"
459 | ```
460 |
461 | In our CentOS 7 MATE desktop, dconf gives out warnings that make the job's output.log huge; to fix that:
462 | * open ```/var/www/ood/apps/sys/bc_desktop/template/script.sh.erb```
463 | * add ```export XDG_RUNTIME_DIR="/tmp/${UID}"``` or ```unset XDG_RUNTIME_DIR```
464 |
465 | To automatically start XFCE terminal when the remote desktop session starts, add the following to `template/desktops/xfce.sh`:
466 | ```
467 | # this causes Terminal to automatically start in the desktop
468 | cp /usr/share/applications/xfce4-terminal.desktop "${AUTOSTART}"
469 | ```
470 |
471 | To put a tiled XFCE wallpaper onto the desktop, add this to `template/desktops/xfce.sh`:
472 | ```
473 | xfconf-query -c xfce4-desktop -p /backdrop/screen0/monitorVNC-0/workspace0/last-image -s /uufs/chpc.utah.edu/sys/ondemand/chpc-class/R25_neurostats/template/desktops/MainLogo_blk_fullcolor.tif
474 | # set tiled image style
475 | xfconf-query -c xfce4-desktop -p /backdrop/screen0/monitorVNC-0/workspace0/image-style -s 2
476 | ```
477 |
478 | ### Other interactive apps
479 |
480 | It's best to first stage the interactive apps in user space using the [app development option](https://osc.github.io/ood-documentation/master/app-development/enabling-development-mode.html). To set that up:
481 | ```
482 | mkdir /var/www/ood/apps/dev/u0101881
483 | ln -s /uufs/chpc.utah.edu/common/home/u0101881/ondemand/dev /var/www/ood/apps/dev/u0101881/gateway
484 | ```
485 | then restart the web server via OOD's Help - Restart Web Server menu.
486 | It is important to have the ```/var/www/ood/apps/dev/u0101881/gateway``` directory - that's what OOD looks for to show the Develop menu tab. ```u0101881``` is my user name - make sure to put yours there, along with your correct home dir location.
487 |
488 | I usually fork OSC's interactive app templates, then clone them to ```/uufs/chpc.utah.edu/common/home/u0101881/ondemand/dev```, modify them to our needs, and push back to the fork. When the app is ready to deploy, put it in ```/var/www/ood/apps/sys```. That is:
489 | ```
490 | $ cd /uufs/chpc.utah.edu/common/home/u0101881/ondemand/dev
491 | $ git clone https://github.com/CHPC-UofU/bc_osc_comsol.git
492 | $ mv bc_osc_comsol comsol_app_np
493 | $ cd comsol_app_np
494 | ```
495 | now modify ```form.yml```, ```submit.yml.erb``` and ```template/script.sh.erb```. Then test on your OnDemand server. If all works well:
496 | ```
497 | $ sudo cp -r /uufs/chpc.utah.edu/common/home/u0101881/ondemand/dev/comsol_app_np /var/www/ood/apps/sys
498 | ```
499 |
500 | Here's a list of apps that we have:
501 | * [Jupyter](https://github.com/CHPC-UofU/bc_osc_jupyter)
502 | * [Matlab](https://github.com/CHPC-UofU/bc_osc_matlab)
503 | * [ANSYS Workbench](https://github.com/CHPC-UofU/bc_osc_ansys_workbench)
504 | * [RStudio Server](https://github.com/CHPC-UofU/bc_osc_rstudio_server)
505 | * [COMSOL](https://github.com/CHPC-UofU/bc_osc_comsol)
506 | * [Abaqus](https://github.com/CHPC-UofU/bc_osc_abaqus)
507 | * [R Shiny](https://github.com/CHPC-UofU/bc_osc_example_shiny)
508 |
509 | There are a few other apps that OSC has, but they either need GPUs which we don't have on our interactive test nodes (VMD, Paraview), or are licensed with group-based licenses for us (COMSOL, Abaqus). In the future we may look at restricting access to these apps to the licensed groups.
510 |
511 | ### E-mail address input
512 |
513 | To make the "receive an e-mail when the session starts" check box work, we need to supply a valid e-mail address. This is done in the `submit.yml.erb` of each job app. There does not appear to be an option to set this globally.
514 |
515 | One possibility is to feed in the $USER-based utah.edu e-mail via SLURM's `--mail-user` argument:
516 | ```
517 | script:
518 | ...
519 | native:
520 | - "--mail-user=<%= ENV["USER"] %>@utah.edu"
521 | ```
522 | The other is to get the user e-mail address from our database:
523 | ```
524 | <%-
525 | emailcmd = '/uufs/chpc.utah.edu/sys/bin/CHPCEmailLookup.sh ' + ENV["USER"]
526 | emailaddr = %x[ #{emailcmd}]
527 | -%>
528 | ...
529 | script:
530 | <%# - "--mail-user=<%= emailaddr %>" %>
531 | ```
532 |
533 | We currently use the latter approach, which allows for non-utah.edu e-mail addresses but relies on up-to-date user information in our database.
534 |
535 | ### SLURM partitions in the interactive apps
536 |
537 | We have a number of SLURM partitions where a user can run, and it can be hard to remember which partitions a user can access. We have a small piece of code that parses the user's available partitions and offers them as a drop-down menu. This app is at the [Jupyter with dynamic partitions repo](https://github.com/CHPC-UofU/bc_jupyter_dynpart). In this repo, the ```static``` versions of the ```form.yml.erb``` and ```submit.yml.erb``` show all available cluster partitions.
538 |
539 | ### SLURM accounts and partitions available to user, part 1
540 |
541 | The following is the first step in the process of offering only the accounts/partitions that the user has access to. There is one pull-down with the `account:partition` combinations, but it is provided for all clusters. The second step, making only the selected cluster's partitions available, will require JavaScript in `form.js`.
542 |
543 | First, we create two arrays, listing the clusters and the account:partition pairs. This is done by modifying [`/etc/ood/config/apps/dashboard/initializers/ood.rb`](https://github.com/CHPC-UofU/OnDemand-info/blob/master/config/apps/dashboard/initializers/ood.rb) and involves:
544 | - having a list of clusters in the file `/var/www/ood/apps/templates/cluster.txt`, which is read into the Ruby array `CustomQueues.clusters`
545 | - running a script, [`/var/www/ood/apps/templates/get_alloc_all.sh`](https://github.com/CHPC-UofU/OOD-apps-v3/blob/master/app-templates/get_alloc_alll.sh), which calls the `myallocations` command and parses the output into all available `account:partition` pairs for all the clusters, which are put into the Ruby array `CustomAccPart.accpart`
546 | - note that the original approach of saving the output of `get_alloc_all.sh` into the user's directory and then reading it in `initializers/ood.rb` resulted in occasional failures, so it is advisable not to rely on files written to disk for pre-filling this data (a minimal sketch of this initializer is shown after this list)
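A minimal sketch of what that part of the initializer does, assuming the file and script locations above (the real `ood.rb` linked above does more, e.g. the favorite paths):
```
# /etc/ood/config/apps/dashboard/initializers/ood.rb (sketch)
class CustomQueues
  def self.clusters
    # one cluster name per line in cluster.txt
    @clusters ||= File.readlines("/var/www/ood/apps/templates/cluster.txt", chomp: true).reject(&:empty?)
  end
end

class CustomAccPart
  def self.accpart
    # get_alloc_all.sh prints one "account:partition" pair per line
    @accpart ||= `/var/www/ood/apps/templates/get_alloc_all.sh`.split("\n").reject(&:empty?)
  end
end
```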
547 |
548 | The `CustomQueues.clusters` and `CustomAccPart.accpart` are then used in the `form.yml.erb` that defines the interactive app's submission form, as this:
549 | ```
550 | cluster:
551 | widget: select
552 | options:
553 | <%- CustomQueues.clusters.each do |g| %>
554 | - "<%= g %>"
555 | <%- end %>
556 | value: "notchpeak"
557 | cacheable: true
558 | help: |
559 | Select the cluster or Frisco node to create this session on.
560 | custom_accpart:
561 | label: "Account and partition"
562 | widget: select
563 | options:
564 | <%- CustomAccPart.accpart.each do |g| %>
565 | - "<%= g %>"
566 | <%- end %>
567 | value: "notchpeak-shared-short:notchpeak-shared-short"
568 | cacheable: true
569 |
570 | ```
571 | In the `cluster` section here we omit the Frisco definitions.
572 |
573 | In the `submit.yml.erb` we also need to split `custom_accpart` into the account and queue fields that OOD expects:
574 | ```
575 | accounting_id: "<%= custom_accpart.slice(0..(custom_accpart.index(':')-1)) %>"
576 | queue_name: "<%= custom_accpart.slice((custom_accpart.index(':')+1)..-1) %>"
577 | ```
578 |
579 | ### SLURM accounts and partitions available to user, part 2
580 |
581 | This is a future project that should make only the selected cluster's partitions available; it will require JavaScript in `form.js`.
582 |
583 | First, we create two arrays: one for the clusters, which we already have from step 1, and another that holds the allocation:partition information, now separately for each cluster. This is done by modifying [`/etc/ood/config/apps/dashboard/initializers/ood.rb`](https://github.com/CHPC-UofU/OnDemand-info/blob/master/config/apps/dashboard/initializers/ood.rb) and involves:
584 | - running an updated script, [`/var/www/ood/apps/templates/get_alloc_by_cluster.sh`](https://github.com/CHPC-UofU/OOD-apps-v3/blob/master/app-templates/get_alloc_by_cluster.sh), which calls the `myallocations` command and parses the output separately for each cluster.
585 | - parsing this script's output appropriately in `initializers/ood.rb` to have separate `CustomAccPart` arrays for each cluster.
586 |
587 | ### Dynamic partition filtering
588 |
589 | Available partitions are automatically filtered based on the cluster selection when submitting a job. Filtering is done entirely through [`form.js`](https://github.com/CHPC-UofU/OOD-apps-v3/blob/master/app-templates/form.js), using `notchpeak`, `np`, `kingspeak`, `kp`, `lonepeak`, and `lp` as identifiers.
590 |
591 | ```
592 | /**
593 | * Filters account and partition options based on cluster selection.
594 | */
595 | function filterAccountPartitionOptions() {
596 | // Get selected value from cluster dropdown
597 | const selectedCluster = document.getElementById('batch_connect_session_context_cluster').value;
598 |
599 | // Get account:partition select element
600 | const accountPartitionSelect = document.getElementById('batch_connect_session_context_custom_accpart');
601 |
602 | // Get all options within account:partition select
603 | const options = accountPartitionSelect.options;
604 |
605 | // Define mapping for cluster names and acronyms
606 | const clusterAcronyms = {
607 | 'ash': 'ash',
608 | 'kingspeak': 'kp',
609 | 'lonepeak': 'lp',
610 | 'notchpeak': 'np'
611 | };
612 |
613 | // Loop over options and hide those that do not match selected cluster
614 | for (let i = 0; i < options.length; i++) {
615 | const option = options[i];
616 |
617 | // Determine if the option value should be visible
618 | const isOptionVisible = option.value.indexOf(selectedCluster) >= 0 ||
619 | (clusterAcronyms[selectedCluster] && option.value.indexOf(clusterAcronyms[selectedCluster]) >= 0);
620 |
621 | // Set display style based on whether option should be visible
622 | option.style.display = isOptionVisible ? 'block' : 'none';
623 | }
624 | // Reset advanced options for cluster change
625 | toggleAdvancedOptions();
626 | }
627 | ```
628 |
629 | ### Auto-filling GPU information
630 |
631 | GPU information is auto-filled through OOD's [Dynamic Form Widgets](https://osc.github.io/ood-documentation/latest/app-development/interactive/dynamic-form-widgets.html). This requires listing each GPU type and specifying in which clusters to hide that GPU, as shown in our [template](https://github.com/CHPC-UofU/OOD-apps-v3/blob/master/app-templates/job_params_v3) that gets inserted into each `form.yml.erb`. This list is rather long since it requires manually listing each GPU type and what clusters it is NOT on, e.g.:
632 | ```
633 | gpu_type:
634 | label: "GPU type"
635 | widget: select
636 | value: "none"
637 | options:
638 | - [
639 | 'none',
640 | data-hide-gpu-count: true
641 | ]
642 | - [
643 | 'GTX 1080 Ti, SP, general, owner','1080ti',
644 | data-option-for-cluster-ash: false,
645 | data-option-for-cluster-kingspeak: false
646 | ]
647 | - [
648 | 'GTX Titan X, SP, owner','titanx',
649 | data-option-for-cluster-ash: false,
650 | data-option-for-cluster-notchpeak: false,
651 | data-option-for-cluster-lonepeak: false
652 | ]
653 | ```
654 |
655 | This kind of specification requires a separate input field for the GPU count called `gpu_count`.
656 | The default GPU option is `none`, which hides the `gpu_count` field since it's not necessary.
657 |
658 | In the `submit.yml.erb` we then tie the `gpu_type` and `gpu_count` together as:
659 | ```
660 | <%- if gpu_type != "none" -%>
661 | - "--gres=gpu:<%= gpu_type %>:<%= gpu_count %>"
662 | <%- end -%>
663 | ```
664 |
665 | ### Dynamic GPU filtering
666 |
667 | GPU availability is dynamically filtered based on the selected partition when submitting a job. GPU information for each partition is pulled via the shell script [grabPartitionsGPUs.sh](https://github.com/CHPC-UofU/OOD-apps-v3/blob/master/app-templates/grabPartitionsGPUs.sh). The list of partitions and the GPUs available in each partition is saved in the format:
668 |
669 | ```
670 | notchpeak-shared-short
671 | 1080ti
672 | t4
673 |
674 | notchpeak-gpu
675 | 2080ti
676 | 3090
677 | a100
678 | p40
679 | v100
680 |
681 | ...
682 | ```
683 |
684 | Functions CustomGPUPartitions and CustomGPUMappings were added to [`/etc/ood/config/apps/dashboard/initializers/ood.rb`](https://github.com/CHPC-UofU/OnDemand-info/blob/master/config/apps/dashboard/initializers/ood.rb) to create arrays of partition:gpu pairs and identifier:gpu pairs, respectively. Both of these arrays are initialized and embedded into the HTML via each app's `form.yml.erb`. The arrays are accessed via [`form.js`](https://github.com/CHPC-UofU/OOD-apps-v3/blob/master/app-templates/form.js), and the form filtering logic is done directly within the JavaScript:
685 |
686 | ```
687 | /**
688 | * Updates GPU options based on the selected partition.
689 | */
690 | function filterGPUOptions() {
691 | const selectedPartition = $('#batch_connect_session_context_custom_accpart').val().split(':')[1];
692 | const partitionString = gpuDataHash["gpu_partitions"].find(partition => partition.startsWith(selectedPartition + ','));
693 |
694 | const gpuSelect = $('#batch_connect_session_context_gpu_type');
695 | gpuSelect.empty(); // Clear existing options
696 |
697 | // Always add a 'none' option
698 | gpuSelect.append(new Option('none', 'none'));
699 |
700 | if (partitionString) {
701 | const availableGPUs = partitionString.split(',').slice(1).map(gpu => gpu.trim());
702 |
703 | if (availableGPUs.length > 0) {
704 | // Add 'any' option if GPUs are available
705 | gpuSelect.append(new Option('any', 'any'));
706 |
707 | // Add available GPUs as options
708 | availableGPUs.forEach(gpu => {
709 | if (gpuMapping[gpu]) // Check for mapping
710 | gpuSelect.append(new Option(gpuMapping[gpu], gpu));
711 | });
712 | gpuSelect.parent().show(); // Show GPU selection field
713 | } else {
714 | gpuSelect.parent().show(); // Still show field with 'none' option
715 | }
716 | } else {
717 | gpuSelect.parent().show(); // Show field with only 'none' option if partition not found
718 | }
719 | }
720 | ```
721 |
722 | Since all fields on the interactive app form (`form.yml.erb`) are cacheable by default, the `gpudata` hidden field needs to be made non-cacheable. This allows the updates to the partitions and GPUs list to show up in the form:
723 | ```
724 | gpudata:
725 | widget: hidden_field
726 | cacheable: false
727 | value: |
728 | "<%= gpu_data.to_json %>"
729 | ```
730 |
731 | ### Hiding job input fields when Frisco nodes are selected
732 |
733 | The [Dynamic Form Widgets](https://osc.github.io/ood-documentation/latest/app-development/interactive/dynamic-form-widgets.html) also allow hiding fields, like account, walltime, etc., that are not needed for the Frisco jobs. Because the list of fields to hide is long and has to be repeated for each `frisco`, it's in a separate include file in the templates directory, [friscos_v2](https://github.com/CHPC-UofU/OOD-apps-v3/blob/master/app-templates/friscos_v2). For each Frisco, the entry is:
734 | ```
735 | - [
736 | 'frisco1',
737 | data-hide-gpu-type: true,
738 | data-hide-memtask: true,
739 | data-hide-bc-vnc-resolution: true,
740 | data-hide-num-cores: true,
741 | data-hide-bc-num-hours: true,
742 | data-hide-custom-accpart: true,
743 | data-hide-bc-email-on-started: true,
744 | ]
745 | ```
746 |
747 | And it is included in the `form.yml.erb` in the `cluster` section as:
748 | ```
749 | cluster:
750 | widget: select
751 | options:
752 | <%- CustomQueues.clusters.each do |g| %>
753 | - "<%= g %>"
754 | <%- end %>
755 | <% IO.foreach(template_root+"friscos_v2") do |line| %>
756 | <%= line %>
757 | <% end %>
758 | value: "notchpeak"
759 | cacheable: true
760 | ```
761 | ### Google Analytics
762 |
763 | It is useful to set up Google Analytics to gather usage data, rather than parsing through the Apache logs. This is explained, somewhat obscurely, [here](https://osc.github.io/ood-documentation/master/infrastructure/ood-portal-generator/examples/add-google-analytics.html).
764 |
765 | In our case, it involved:
766 | * signing up for an account at analytics.google.com, and noting the account name
767 | * putting this account name into /etc/ood/config/ood_portal.yml, as described in the document above. Our piece is:
768 | ```
769 | analytics:
770 | url: 'http://www.google-analytics.com/collect'
771 | id: 'UA-xxxxxxxxx-x'
772 | ```
773 | * rebuild and reinstall the Apache configuration file by running ```sudo /opt/ood/ood-portal-generator/sbin/update_ood_portal```.
774 | * restart Apache, on CentOS 7: ```sudo systemctl try-restart httpd24-httpd.service httpd24-htcacheclean.service```.
775 |
776 | #### Change to Google Analytics 4
777 |
778 | Discussed in [this](https://discourse.openondemand.org/t/google-analytics-4-support/2464) thread. In particular:
779 | ```
780 | mkdir -p /etc/ood/config/apps/dashboard/views/layouts
781 | cp /var/www/ood/apps/sys/dashboard/app/views/layouts/application.html.erb /etc/ood/config/apps/dashboard/views/layouts
782 | ```
783 |
784 | Edit `/etc/ood/config/apps/dashboard/views/layouts/application.html.erb` and near the top put:
785 | ```
786 | <%- tag_id = 'abc123' -%>
787 |
788 | <%- unless tag_id.nil? -%>
789 |
790 |
791 |
798 | <%- end -%>
799 | ```
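The body of that `unless` block is the stock GA4 gtag.js loader; a sketch using the `tag_id` variable above:
```
<script async src="https://www.googletagmanager.com/gtag/js?id=<%= tag_id %>"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', '<%= tag_id %>');
</script>
```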
800 |
801 | The “Measurement ID” on GA4 is what goes into the `tag_id`.
802 |
803 | ### Impersonation
804 |
805 | Impersonation allows you to log in as yourself but act as another user in the OOD portal. This can be useful in troubleshooting OOD problems.
806 |
807 | At present we have this functional only on `ondemand-test.chpc.utah.edu`, but if we do not notice any issues it will be put on the production servers.
808 |
809 | We follow instructions from [Yale](https://github.com/ycrc/ood-user-mapping).
810 |
811 | In particular, first clone their repository:
812 | ```
813 | cd /uufs/chpc.utah.edu/common/home/u0101881/ondemand/repos/
814 | git clone https://github.com/ycrc/ood-user-mapping
815 | ```
816 |
817 | Then on the OOD server:
818 | ```
819 | cd /opt/ood
820 | cp -r /uufs/chpc.utah.edu/common/home/u0101881/ondemand/repos/ood-user-mapping/ycrc_auth_map customized_auth_map
821 | patch -u /opt/ood/ood-portal-generator/templates/ood-portal.conf.erb -i /uufs/chpc.utah.edu/common/home/u0101881/ondemand/repos/ood-user-mapping/ood-portal.conf.erb.patch
822 | ```
823 |
824 | Add the following line to ```/etc/ood/config/ood_portal.yml```:
825 | ```
826 | user_map_cmd: '/opt/ood/customized_auth_map/bin/ood_auth_map.regex'
827 | ```
828 |
829 | Regenerate Apache config and restart it:
830 | ```
831 | /opt/ood/ood-portal-generator/sbin/update_ood_portal
832 | systemctl try-restart httpd24-httpd.service httpd24-htcacheclean.service
833 | ```
834 |
835 | Then, to impersonate a user, map the user's ID to your ID in `/etc/ood/config/map_file`, which is editable only by root (contact Martin or Steve to do it). The format of the file is:
836 | ```
837 | "your_unid" user_unid
838 | ```
839 | e.g.
840 | ```
841 | "u0012345" u0123456
842 | ```
843 |
844 | ## Update notes
845 |
846 | ### Update to OOD 3.0
847 |
848 | - set maintenance mode:
849 | ```
850 | touch /etc/ood/maintenance.enable
851 | ```
852 |
853 | - Stop PUNs
854 | ```
855 | /opt/ood/nginx_stage/sbin/nginx_stage nginx_clean -f
856 | ```
857 |
858 | - do the update
859 | https://osc.github.io/ood-documentation/latest/release-notes/v3.0-release-notes.html#upgrade-directions
860 |
861 | - restart Apache
862 | ```
863 | systemctl try-restart httpd
864 | ```
865 |
866 | - change `/etc/ood/config/apps/dashboard/initializers/ood.rb` to new syntax
867 | https://osc.github.io/ood-documentation/latest/release-notes/v3.0-release-notes.html#deprecations
868 |
869 | - update `/etc/ood/config/apps/dashboard/views/layouts/application.html.erb` for Google Analytics
870 | ```
871 | cd /etc/ood/config/apps/dashboard/views/layouts/
872 | cp application.html.erb application.html.erb.2.0
873 | cp /var/www/ood/apps/sys/dashboard/app/views/layouts/application.html.erb .
874 | vi application.html.erb.2.0
875 | ```
876 | copy Google Analytics tag
877 | ```
878 | vi application.html.erb
879 | ```
880 | paste Google Analytics tag
881 |
882 | - remove bc_desktop
883 | ```
884 | cd /var/www/ood/apps/sys
885 | mv bc_desktop ../sys-2022-05-24/
886 | ```
887 |
888 | - change websockify version in `/etc/ood/config/clusters.d/*.yml` to 0.10.0
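  This would be the `WEBSOCKIFY_CMD` path set in the `script_wrapper` shown in the Interactive desktop section above, e.g.:
```
export WEBSOCKIFY_CMD="/uufs/chpc.utah.edu/sys/installdir/websockify/0.10.0/bin/websockify"
```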
889 |
890 | - fix the clusters app
891 | add rexml to dependencies - modify app files as in https://github.com/OSC/osc-systemstatus/commit/203d42a426d67323ef9d7c7d95fadd64b007b4d5
892 | ```
893 | scl enable ondemand -- bin/bundle install --path vendor/bundle
894 | scl enable ondemand -- bin/setup
895 | touch tmp/restart.txt
896 | ```
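For the websockify bump noted above, a bulk edit along these lines may help; this is only a sketch, as the exact version string in our `clusters.d` files may differ, so check with grep first and keep the `.bak` backups.
```
grep -n 'websockify' /etc/ood/config/clusters.d/*.yml      # see what is referenced now
sed -i.bak 's|websockify/[0-9][0-9a-z.]*|websockify/0.10.0|' /etc/ood/config/clusters.d/*.yml
```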
897 |
898 | Config changes - `/etc/ood/config/ondemand.d/ondemand.yml.erb`
899 |
900 | - clean old apps dirs after 30 days:
901 | https://osc.github.io/ood-documentation/latest/reference/files/ondemand-d-ymls.html#bc-clean-old-dirs
902 | ```
903 | # clean old interactive app dirs
904 | bc_clean_old_dirs: true
905 | ```
906 | - support ticket, https://osc.github.io/ood-documentation/latest/customizations.html#support-ticket-guide
907 | - auto-fills the e-mail address, but the ServiceNow team will create a completely separate page for ticket submission.
908 | ```
909 | support_ticket:
910 | email:
911 | from: <%= Etc.getlogin %>@utah.edu
912 | to: "helpdesk@chpc.utah.edu"
913 | delivery_method: "smtp"
914 | delivery_settings:
915 | address: 'mail.chpc.utah.edu'
916 | port: 25
917 | authentication: 'none'
918 | form:
919 | - subject
920 | - session_id
921 | - session_description
922 | - attachments
923 | - description
924 | ```
925 |
926 | - quick launch apps, https://osc.github.io/ood-documentation/latest/how-tos/app-development/interactive/sub-apps.html#
927 | - decided not to use these: we would have to include all the form.yml.erb fields that are used in submit.yml.erb and show them on the page.
928 |
929 |
--------------------------------------------------------------------------------
/rocky8.md:
--------------------------------------------------------------------------------
1 | # Rocky 8 installation notes
2 |
3 | ## Initial VM setup
4 |
5 | VM was set up by sysadmin (David R) following
6 | [https://osc.github.io/ood-documentation/latest/installation/install-software.html](https://osc.github.io/ood-documentation/latest/installation/install-software.html)
7 | up to and including
8 | [setting up SSL](https://osc.github.io/ood-documentation/latest/installation/add-ssl.html)
9 |
10 | Sysadmin additions on top of this:
11 | - copy SSH host keys in /etc/ssh from old servers before they are re-built
12 | - in /etc/ssh/ssh_config.d/00-chpc-config on OOD servers enable host based authentication
13 | - add all cluster file systems mounts
14 | - install mariadb-server to provide resolveip, which OOD uses to find the compute node hostname (hostfromroute.sh)
15 | - add all HPC scratch mounts
16 | - passwordless ssh to all interactive nodes
17 | - Modify `/etc/security/access.conf` to add: ```+:ALL:LOCAL```
18 |
19 | ## Further installation
20 |
21 | ### CAS authentication
22 |
23 | Some info on other sites' implementations is at https://discourse.openondemand.org/t/implementing-authentication-via-cas/34/9.
24 |
25 | Build mod_auth_cas from source, based on [https://linuxtut.com/en/69296a1f9b6bf93f076f/](https://linuxtut.com/en/69296a1f9b6bf93f076f/)
26 | ```
27 | $ yum install libcurl-devel pcre-devel
28 | $ cd /usr/local/src
29 | $ wget https://github.com/apereo/mod_auth_cas/archive/v1.2.tar.gz
30 | $ tar xvzf v1.2.tar.gz
31 | $ cd mod_auth_cas-1.2
32 | $ autoreconf -iv
33 | $ ./configure --with-apxs=/usr/bin/apxs
34 | $ make
35 | $ make check
36 | $ make install
37 | ```
38 |
39 | or `install_scripts/build_cas.sh`
40 |
41 | Further setup of CAS
42 | ```
43 | $ mkdir -p /var/cache/httpd/mod_auth_cas
44 | $ chown apache:apache /var/cache/httpd/mod_auth_cas
45 | # chmod a+rX /var/cache/httpd/mod_auth_cas
46 | $ vi /etc/httpd/conf.d/auth_cas.conf
47 | LoadModule auth_cas_module modules/mod_auth_cas.so
48 | CASCookiePath /var/cache/httpd/mod_auth_cas/
49 | CASCertificatePath /etc/pki/tls/certs/ca-bundle.crt
50 | CASLoginURL https://go.utah.edu/cas/login
51 | CASValidateURL https://go.utah.edu/cas/serviceValidate
52 | ```
53 |
54 | or `install_scripts/setup_cas.sh`
55 |
56 | ### Base OOD config and start Apache
57 |
58 | OOD base config files:
59 | ```
60 | # cd /etc/ood/config
61 | # cp ood_portal.yml ood_portal.yml.org
62 | # scp u0101881@ondemand.chpc.utah.edu:/etc/ood/config/ood_portal.yml .
63 | OR # wget https://raw.githubusercontent.com/CHPC-UofU/OnDemand-info/master/config/ood_portal.yml
64 | # vi ood_portal.yml
65 | ```
66 | - search for "ondemand.chpc.utah.edu", replace with "ondemand-test.chpc.utah.edu" (see the sed one-liner after this list)
67 | - for ondemand-test, set the Google Analytics id: 'UA-122259839-4'
68 | - copy the `SSLCertificate` part from `ood_portal.yml.org`
69 | - comment out the line `- 'Include "/root/ssl/ssl-standard.conf"'`
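The hostname swap can also be done non-interactively; a minimal sketch (writes a backup with the `.orig` suffix):
```
sed -i.orig 's/ondemand\.chpc\.utah\.edu/ondemand-test.chpc.utah.edu/g' /etc/ood/config/ood_portal.yml
```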
70 |
71 | Update Apache and start it
72 | ```
73 | # /opt/ood/ood-portal-generator/sbin/update_ood_portal
74 | # systemctl try-restart httpd.service htcacheclean.service
75 | ```
76 | Once this is done one should be able to log into https://ondemand-test.chpc.utah.edu and see the vanilla OOD interface.
77 |
78 | ### Improve Apache configuration
79 |
80 | Mainly for performance reasons when there are tens of simultaneous users.
81 |
82 | ```
83 | # vi /etc/httpd/conf.modules.d/00-mpm.conf
84 | LoadModule mpm_event_module modules/mod_mpm_event.so
85 |
86 | <IfModule mpm_event_module>
87 | ServerLimit 32
88 | StartServers 2
89 | MaxRequestWorkers 512
90 | MinSpareThreads 25
91 | MaxSpareThreads 75
92 | ThreadsPerChild 32
93 | MaxRequestsPerChild 0
94 | ThreadLimit 512
95 | ListenBacklog 511
96 | </IfModule>
97 | ```
98 |
99 | Check Apache config syntax:
100 | ```
101 | # /sbin/httpd -t
102 | ```
103 |
104 | Then restart Apache:
105 | ```
106 | # systemctl try-restart httpd.service htcacheclean.service
107 | ```
108 |
109 | Check that the Server MPM is event:
110 | ```
111 | # /sbin/httpd -V
112 | ```
113 |
114 | or `install_scripts/check_apache_config.sh`
115 |
116 | ### SLURM setup
117 |
118 | ```
119 | $ sudo dnf install munge-devel munge munge-libs
120 | $ sudo rsync -av kingspeak1:/etc/munge/ /etc/munge/
121 | $ sudo systemctl enable munge
122 | $ sudo systemctl start munge
123 | ```
124 |
125 | ### Clusters setup
126 | https://github.com/chpc-uofu/OnDemand-info/blob/master/readme.md#slurm-accounts-and-partitions-available-to-user-part-1
127 | ```
128 | scp -r u0101881@ondemand.chpc.utah.edu:/etc/ood/config/clusters.d /etc/ood/config
129 | ```
130 | - !!!! in all /etc/ood/config/clusters.d/*.yml replace ondemand.chpc.utah.edu with ondemand-test.chpc.utah.edu (see the check below)
131 | - !!!! may replace websockify/0.8.0 with websockify/0.8.0.r8
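A quick way to confirm the hostname swap is complete; any line listed here still needs editing:
```
grep -rn 'ondemand\.chpc\.utah\.edu' /etc/ood/config/clusters.d/
```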
132 |
133 | ### Other customizations
134 |
135 | Logo images
136 | ```
137 | # scp -r u0101881@ondemand.chpc.utah.edu:/var/www/ood/public/CHPC-logo35.png /var/www/ood/public
138 | # scp -r u0101881@ondemand.chpc.utah.edu:/var/www/ood/public/chpc_logo_block.png /var/www/ood/public
139 | # scp -r u0101881@ondemand.chpc.utah.edu:/var/www/ood/public/CHPC-logo.png /var/www/ood/public
140 | ```
141 |
142 | Locales
143 | ```
144 | # mkdir -p /etc/ood/config/locales/
145 | # scp -r u0101881@ondemand.chpc.utah.edu:/etc/ood/config/locales/en.yml /etc/ood/config/locales/
146 | ```
147 |
148 | Dashboard, incl. logos, quota warnings,...
149 | ```
150 | # mkdir -p /etc/ood/config/apps/dashboard/initializers/
151 | # scp -r u0101881@ondemand.chpc.utah.edu:/etc/ood/config/apps/dashboard/initializers/ood.rb /etc/ood/config/apps/dashboard/initializers/
152 | # scp -r u0101881@ondemand.chpc.utah.edu:/etc/ood/config/apps/dashboard/env /etc/ood/config/apps/dashboard
153 | ```
154 |
155 | Test disk quota
156 | ```
157 | vi /etc/ood/config/apps/dashboard/env
158 | ```
159 | temporarily set `OOD_QUOTA_THRESHOLD="0.10"`, then in the OOD web interface choose Restart Web Server and verify that the quota warnings appear.
160 |
161 | Active jobs environment
162 | ```
163 | # mkdir -p /etc/ood/config/apps/activejobs
164 | # scp -r u0101881@ondemand.chpc.utah.edu:/etc/ood/config/apps/activejobs/env /etc/ood/config/apps/activejobs
165 | ```
166 |
167 | Base apps configs
168 | ```
169 | # scp -r u0101881@ondemand.chpc.utah.edu:/etc/ood/config/apps/bc_desktop /etc/ood/config/apps/
170 | # scp -r u0101881@ondemand.chpc.utah.edu:/etc/ood/config/apps/shell /etc/ood/config/apps/
171 | # scp u0101881@ondemand.chpc.utah.edu:/var/www/ood/apps/sys/shell/bin/ssh /var/www/ood/apps/sys/shell/bin/
172 | ```
173 |
174 | Announcements, XdMoD
175 | ```
176 | # scp -r u0101881@ondemand.chpc.utah.edu:/etc/ood/config/announcement.md.motd /etc/ood/config/
177 | # scp -r u0101881@ondemand.chpc.utah.edu:/etc/ood/config/nginx_stage.yml /etc/ood/config/
178 | ```
179 |
180 | Widgets/pinned apps
181 | ```
182 | # mkdir /etc/ood/config/ondemand.d/
183 | # scp -r u0101881@ondemand.chpc.utah.edu:/etc/ood/config/ondemand.d/ondemand.yml /etc/ood/config/ondemand.d/
184 | ```
185 |
186 | SLURM job templates
187 | ```
188 | # mkdir -p /etc/ood/config/apps/myjobs
189 | # ln -s /uufs/chpc.utah.edu/sys/ondemand/chpc-myjobs-templates /etc/ood/config/apps/myjobs/templates
190 | ```
191 |
192 | OR `install_scripts/get_customizations.sh`
193 |
194 | ### Apps setup
195 | ```
196 | # /uufs/chpc.utah.edu/sys/ondemand/chpc-apps/update.sh
197 | # cd /var/www/ood/apps/sys
198 | # mkdir org
199 | # mv bc_desktop/ org
200 | # cd /var/www/ood/apps
201 | # ln -s /uufs/chpc.utah.edu/sys/ondemand/chpc-apps/app-templates templates
202 | # cd /var/www/ood/apps/templates
203 | # source /etc/profile.d/chpc.sh
204 | # ./genmodulefiles.sh
205 | ```
206 |
207 | OR `install_scripts/get_apps.sh` (NB - modules are set up differently, don't run ./genmodulefiles.sh)
208 |
209 | Restart the web server from the client to see all the Interactive Apps. If they appear, proceed to testing the apps,
210 | including the cluster status app.
211 |
212 |
213 | ## Changes after initial R8 installation
214 |
215 | ### Auto-initialization of accounts, partitions, GPUs in partition
216 |
217 | Described in [CHPC OOD's readme](https://github.com/chpc-uofu/OnDemand-info/blob/master/readme.md#slurm-accounts-and-partitions-available-to-user-part-1) and below; it involves modifying `/etc/ood/config/apps/dashboard/initializers/ood.rb` to read in the information, which is then parsed and used in the interactive apps (mainly `form.yml.erb` and `form.js`).
218 |
219 | Supporting infrastructure includes a [script](https://github.com/chpc-uofu/OOD-apps-v3/blob/master/app-templates/grabPartitionsGPUs.sh) that produces a text file listing the GPUs and partitions. The user accounts/partitions list is curled from the portal.
220 |
221 | ### Change in file systems quota
222 |
223 | Curled from the portal via a cron job that runs on the ondemand server.
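A sketch of the pattern only; the real portal endpoint, output path, and schedule are CHPC-internal, so the values below are placeholders:
```
# root crontab entry on the ondemand server (placeholder URL and output path)
0 * * * * curl -s https://portal.example.utah.edu/quota.json -o /var/www/ood/public/quota.json
```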
224 |
225 | ### Cluster status apps
226 |
227 | Display node status for each node, e.g. for [notchpeak](https://github.com/chpc-uofu/OOD-apps-v3/tree/master/chpc-notchpeak-status). See that URL for a description of which cron jobs run, what they produce, and where. A cron job on notchrm runs [getmodules.sh](https://github.com/chpc-uofu/OOD-apps-v3/blob/master/app-templates/getmodules.sh) once a day to generate the file `/uufs/chpc.utah.edu/sys/ondemand/chpc-apps/app-templates/modules/notchpeak.json`, which is then symlinked to `/var/www/ood/apps/templates/modules/notchpeak.json`. As each cluster requires its own `json` file, the other clusters' files are symlinks to `notchpeak.json` (incl. `redwood.json`, as the PE uses a copy of the sys branch from the GE).
228 |
229 | ### Adding Globus into the File Manager
230 | ```
231 | vi /etc/ood/config/ondemand.d/ondemand.yml.erb
232 | # single endpoint for all file systems (home, scratch, group)
233 | globus_endpoints:
234 | - path: "/"
235 | endpoint: "7cf0baa1-8bd0-4e91-a1e6-c19042952a7c"
236 | endpoint_path: "/"
237 | ```
238 |
239 | ### Dynamic modules
240 |
241 | Using [OOD's built in way](https://osc.github.io/ood-documentation/latest/reference/files/ondemand-d-ymls.html?highlight=module) to auto-set available module versions for interactive apps.
242 |
243 | ### Adding SELinux support in pe-ondemand
244 |
245 | Only in pe-ondemand, not in the GE.
246 |
247 | ```
248 | yum install ondemand-selinux
249 | setsebool -P ondemand_use_slurm=on
250 | getsebool -a |grep ondemand
251 |
252 | ```
253 |
254 | ## Future options
255 |
256 | ### Outstanding things
257 |
258 | !!!! Netdata webserver monitoring
259 |
260 | ### Things to look at in the future
261 |
262 | - Dashboard allocation balance warnings: https://osc.github.io/ood-documentation/latest/customizations.html#balance-warnings-on-dashboard
263 |
--------------------------------------------------------------------------------
/rstudio-singularity/modulefiles/rstudio_singularity/3.4.4.lua:
--------------------------------------------------------------------------------
1 | help([[
2 | This module loads the RStudio Server environment which utilizes a Singularity
3 | image for portability.
4 | ]])
5 |
6 | whatis([[Description: RStudio Server environment using Singularity]])
7 |
8 | local root = "/uufs/chpc.utah.edu/sys/installdir/rstudio-singularity/1.1.453/module"
9 | local bin = pathJoin(root, "/bin")
10 | local img = pathJoin(root, "/3.4.4/singularity-rstudio.simg")
11 | local library = pathJoin(root, "/library-3.4")
12 | local host_mnt = "/mnt"
13 |
14 | local user_library = os.getenv("HOME") .. "/R/library-3.4"
15 |
16 | prereq("singularity")
17 | prepend_path("PATH", bin)
18 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", "/:" .. host_mnt, ",")
19 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", library .. ":/library", ",")
20 | setenv("RSTUDIO_SINGULARITY_IMAGE", img)
21 | setenv("RSTUDIO_SINGULARITY_HOST_MNT", host_mnt)
22 | setenv("RSTUDIO_SINGULARITY_CONTAIN", "1")
23 | setenv("RSTUDIO_SINGULARITY_HOME", os.getenv("HOME") .. ":/home/" .. os.getenv("USER"))
24 | setenv("R_LIBS_USER", pathJoin(host_mnt, user_library))
25 |
26 | -- Note: Singularity on CentOS 6 fails to bind a directory to `/tmp` for some
27 | -- reason. This is necessary for RStudio Server to work in a multi-user
28 | -- environment. So to get around this we use a combination of:
29 | --
30 | -- - SINGULARITY_CONTAIN=1 (containerize /home, /tmp, and /var/tmp)
31 | -- - SINGULARITY_HOME=$HOME (set back the home directory)
32 | -- - SINGULARITY_WORKDIR=$(mktemp -d) (bind a temp directory for /tmp and /var/tmp)
33 | --
34 | -- The last one is called from within the executable scripts found under `bin/`
35 | -- as it makes the temp directory at runtime.
36 | --
37 | -- If your system does successfully bind a directory over `/tmp`, then you can
38 | -- probably get away with just:
39 | --
40 | -- - SINGULARITY_BINDPATH=$(mktemp -d):/tmp,$SINGULARITY_BINDPATH
41 |
--------------------------------------------------------------------------------
/rstudio-singularity/modulefiles/rstudio_singularity/3.5.3.lua:
--------------------------------------------------------------------------------
1 | help([[
2 | This module loads the RStudio Server environment which utilizes a Singularity
3 | image for portability.
4 | ]])
5 |
6 | whatis([[Description: RStudio Server environment using Singularity]])
7 |
8 | local root = "/uufs/chpc.utah.edu/sys/installdir/rstudio-singularity/1.1.463"
9 | -- local bin = pathJoin(root, "/bin")
10 | local img = pathJoin(root, "singularity-rstudio_3.5.3.sif")
11 | local library = pathJoin(root, "/library-3.5")
12 | local host_mnt = ""
13 |
14 | local user_library = os.getenv("HOME") .. "/R/library-3.5"
15 |
16 | prereq("singularity")
17 | -- prepend_path("PATH", bin)
18 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", "/:" .. host_mnt, ",")
19 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", library .. ":/library", ",")
20 | setenv("RSTUDIO_SINGULARITY_IMAGE", img)
21 | setenv("RSTUDIO_SINGULARITY_HOST_MNT", host_mnt)
22 | setenv("RSTUDIO_SINGULARITY_CONTAIN", "1")
23 | setenv("RSTUDIO_SINGULARITY_HOME", os.getenv("HOME") .. ":/home/" .. os.getenv("USER"))
24 | setenv("R_LIBS_USER", pathJoin(host_mnt, user_library))
25 |
26 | -- Note: Singularity on CentOS 6 fails to bind a directory to `/tmp` for some
27 | -- reason. This is necessary for RStudio Server to work in a multi-user
28 | -- environment. So to get around this we use a combination of:
29 | --
30 | -- - SINGULARITY_CONTAIN=1 (containerize /home, /tmp, and /var/tmp)
31 | -- - SINGULARITY_HOME=$HOME (set back the home directory)
32 | -- - SINGULARITY_WORKDIR=$(mktemp -d) (bind a temp directory for /tmp and /var/tmp)
33 | --
34 | -- The last one is called from within the executable scripts found under `bin/`
35 | -- as it makes the temp directory at runtime.
36 | --
37 | -- If your system does successfully bind a directory over `/tmp`, then you can
38 | -- probably get away with just:
39 | --
40 | -- - SINGULARITY_BINDPATH=$(mktemp -d):/tmp,$SINGULARITY_BINDPATH
41 |
--------------------------------------------------------------------------------
/rstudio-singularity/modulefiles/rstudio_singularity/3.6.1-basic.lua:
--------------------------------------------------------------------------------
1 | help([[
2 | This module loads the RStudio Server environment which utilizes a Singularity
3 | image for portability.
4 | ]])
5 |
6 | whatis([[Description: RStudio Server environment using Singularity]])
7 |
8 | local root = "/uufs/chpc.utah.edu/sys/installdir/rstudio-singularity/3.6.1"
9 | -- local bin = pathJoin(root, "/bin")
10 | local img = pathJoin(root, "ood-rstudio-basic_3.6.1.sif")
11 | local library = pathJoin(root, "/library-ood-3.6")
12 | local host_mnt = ""
13 |
14 | local user_library = os.getenv("HOME") .. "/R/library-ood-3.6"
15 |
16 | prereq("singularity")
17 | -- prepend_path("PATH", bin)
18 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", "/:" .. host_mnt, ",")
19 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", library .. ":/library", ",")
20 | setenv("RSTUDIO_SINGULARITY_IMAGE", img)
21 | setenv("RSTUDIO_SINGULARITY_HOST_MNT", host_mnt)
22 | setenv("RSTUDIO_SINGULARITY_CONTAIN", "1")
23 | setenv("RSTUDIO_SINGULARITY_HOME", os.getenv("HOME") .. ":/home/" .. os.getenv("USER"))
24 | setenv("R_LIBS_USER", user_library)
25 | setenv("R_ENVIRON_USER",pathJoin(os.getenv("HOME"),".Renviron.OOD"))
26 |
27 | -- Note: Singularity on CentOS 6 fails to bind a directory to `/tmp` for some
28 | -- reason. This is necessary for RStudio Server to work in a multi-user
29 | -- environment. So to get around this we use a combination of:
30 | --
31 | -- - SINGULARITY_CONTAIN=1 (containerize /home, /tmp, and /var/tmp)
32 | -- - SINGULARITY_HOME=$HOME (set back the home directory)
33 | -- - SINGULARITY_WORKDIR=$(mktemp -d) (bind a temp directory for /tmp and /var/tmp)
34 | --
35 | -- The last one is called from within the executable scripts found under `bin/`
36 | -- as it makes the temp directory at runtime.
37 | --
38 | -- If your system does successfully bind a directory over `/tmp`, then you can
39 | -- probably get away with just:
40 | --
41 | -- - SINGULARITY_BINDPATH=$(mktemp -d):/tmp,$SINGULARITY_BINDPATH
42 |
--------------------------------------------------------------------------------
/rstudio-singularity/modulefiles/rstudio_singularity/3.6.1-bio.lua:
--------------------------------------------------------------------------------
1 | help([[
2 | This module loads the RStudio Server environment which utilizes a Singularity
3 | image for portability.
4 | ]])
5 |
6 | whatis([[Description: RStudio Server environment using Singularity]])
7 |
8 | local root = "/uufs/chpc.utah.edu/sys/installdir/rstudio-singularity/3.6.1"
9 | -- local bin = pathJoin(root, "/bin")
10 | local img = pathJoin(root, "ood-rstudio-bio_3.6.1.sif")
11 | local library = pathJoin(root, "/library-ood-3.6")
12 | local host_mnt = ""
13 |
14 | local user_library = os.getenv("HOME") .. "/R/library-ood-3.6"
15 |
16 | prereq("singularity")
17 | -- prepend_path("PATH", bin)
18 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", "/:" .. host_mnt, ",")
19 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", library .. ":/library", ",")
20 | setenv("RSTUDIO_SINGULARITY_IMAGE", img)
21 | setenv("RSTUDIO_SINGULARITY_HOST_MNT", host_mnt)
22 | setenv("RSTUDIO_SINGULARITY_CONTAIN", "1")
23 | setenv("RSTUDIO_SINGULARITY_HOME", os.getenv("HOME") .. ":/home/" .. os.getenv("USER"))
24 | setenv("R_LIBS_USER", user_library)
25 | setenv("R_ENVIRON_USER",pathJoin(os.getenv("HOME"),".Renviron.OOD"))
26 |
27 | -- Note: Singularity on CentOS 6 fails to bind a directory to `/tmp` for some
28 | -- reason. This is necessary for RStudio Server to work in a multi-user
29 | -- environment. So to get around this we use a combination of:
30 | --
31 | -- - SINGULARITY_CONTAIN=1 (containerize /home, /tmp, and /var/tmp)
32 | -- - SINGULARITY_HOME=$HOME (set back the home directory)
33 | -- - SINGULARITY_WORKDIR=$(mktemp -d) (bind a temp directory for /tmp and /var/tmp)
34 | --
35 | -- The last one is called from within the executable scripts found under `bin/`
36 | -- as it makes the temp directory at runtime.
37 | --
38 | -- If your system does successfully bind a directory over `/tmp`, then you can
39 | -- probably get away with just:
40 | --
41 | -- - SINGULARITY_BINDPATH=$(mktemp -d):/tmp,$SINGULARITY_BINDPATH
42 |
--------------------------------------------------------------------------------
/rstudio-singularity/modulefiles/rstudio_singularity/3.6.1-geospatial.lua:
--------------------------------------------------------------------------------
1 | help([[
2 | This module loads the RStudio Server environment which utilizes a Singularity
3 | image for portability.
4 | ]])
5 |
6 | whatis([[Description: RStudio Server environment using Singularity]])
7 |
8 | local root = "/uufs/chpc.utah.edu/sys/installdir/rstudio-singularity/3.6.1"
9 | -- local bin = pathJoin(root, "/bin")
10 | local img = pathJoin(root, "ood-rstudio-geospatial_3.6.1.sif")
11 | local library = pathJoin(root, "/library-ood-3.6")
12 | local host_mnt = ""
13 |
14 | local user_library = os.getenv("HOME") .. "/R/library-ood-3.6"
15 |
16 | prereq("singularity")
17 | -- prepend_path("PATH", bin)
18 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", "/:" .. host_mnt, ",")
19 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", library .. ":/library", ",")
20 | setenv("RSTUDIO_SINGULARITY_IMAGE", img)
21 | setenv("RSTUDIO_SINGULARITY_HOST_MNT", host_mnt)
22 | setenv("RSTUDIO_SINGULARITY_CONTAIN", "1")
23 | setenv("RSTUDIO_SINGULARITY_HOME", os.getenv("HOME") .. ":/home/" .. os.getenv("USER"))
24 | setenv("R_LIBS_USER", user_library)
25 | setenv("R_ENVIRON_USER",pathJoin(os.getenv("HOME"),".Renviron.OOD"))
26 |
27 | -- Note: Singularity on CentOS 6 fails to bind a directory to `/tmp` for some
28 | -- reason. This is necessary for RStudio Server to work in a multi-user
29 | -- environment. So to get around this we use a combination of:
30 | --
31 | -- - SINGULARITY_CONTAIN=1 (containerize /home, /tmp, and /var/tmp)
32 | -- - SINGULARITY_HOME=$HOME (set back the home directory)
33 | -- - SINGULARITY_WORKDIR=$(mktemp -d) (bind a temp directory for /tmp and /var/tmp)
34 | --
35 | -- The last one is called from within the executable scripts found under `bin/`
36 | -- as it makes the temp directory at runtime.
37 | --
38 | -- If your system does successfully bind a directory over `/tmp`, then you can
39 | -- probably get away with just:
40 | --
41 | -- - SINGULARITY_BINDPATH=$(mktemp -d):/tmp,$SINGULARITY_BINDPATH
42 |
--------------------------------------------------------------------------------
/rstudio-singularity/modulefiles/rstudio_singularity/3.6.2-basic.lua:
--------------------------------------------------------------------------------
1 | help([[
2 | This module loads the RStudio Server environment which utilizes a Singularity
3 | image for portability.
4 | ]])
5 |
6 | whatis([[Description: RStudio Server environment using Singularity]])
7 |
8 | local root = "/uufs/chpc.utah.edu/sys/installdir/rstudio-singularity/3.6.2"
9 | -- local bin = pathJoin(root, "/bin")
10 | local img = pathJoin(root, "ood-rstudio-rocker_3.6.2.sif")
11 | local library = pathJoin(root, "/library-ood-3.6")
12 | local host_mnt = ""
13 |
14 | local user_library = os.getenv("HOME") .. "/R/library-ood-rocker-3.6"
15 |
16 | prereq("singularity")
17 | -- prepend_path("PATH", bin)
18 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", "/:" .. host_mnt, ",")
19 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", library .. ":/library", ",")
20 | setenv("RSTUDIO_SINGULARITY_IMAGE", img)
21 | setenv("RSTUDIO_SINGULARITY_HOST_MNT", host_mnt)
22 | setenv("RSTUDIO_SINGULARITY_CONTAIN", "1")
23 | setenv("RSTUDIO_SINGULARITY_HOME", os.getenv("HOME") .. ":/home/" .. os.getenv("USER"))
24 | setenv("R_LIBS_USER", user_library)
25 | setenv("R_ENVIRON_USER",pathJoin(os.getenv("HOME"),".Renviron.OOD"))
26 |
27 | -- Note: Singularity on CentOS 6 fails to bind a directory to `/tmp` for some
28 | -- reason. This is necessary for RStudio Server to work in a multi-user
29 | -- environment. So to get around this we use a combination of:
30 | --
31 | -- - SINGULARITY_CONTAIN=1 (containerize /home, /tmp, and /var/tmp)
32 | -- - SINGULARITY_HOME=$HOME (set back the home directory)
33 | -- - SINGULARITY_WORKDIR=$(mktemp -d) (bind a temp directory for /tmp and /var/tmp)
34 | --
35 | -- The last one is called from within the executable scripts found under `bin/`
36 | -- as it makes the temp directory at runtime.
37 | --
38 | -- If your system does successfully bind a directory over `/tmp`, then you can
39 | -- probably get away with just:
40 | --
41 | -- - SINGULARITY_BINDPATH=$(mktemp -d):/tmp,$SINGULARITY_BINDPATH
42 |
--------------------------------------------------------------------------------
/rstudio-singularity/modulefiles/rstudio_singularity/3.6.2-bioconductor.lua:
--------------------------------------------------------------------------------
1 | help([[
2 | This module loads the RStudio Server environment which utilizes a Singularity
3 | image for portability.
4 | ]])
5 |
6 | whatis([[Description: RStudio Server environment using Singularity]])
7 |
8 | local root = "/uufs/chpc.utah.edu/sys/installdir/rstudio-singularity/3.6.2"
9 | -- local bin = pathJoin(root, "/bin")
10 | local img = pathJoin(root, "ood-bioconductor_3.6.2.sif")
11 | local library = pathJoin(root, "/library-ood-3.6")
12 | local host_mnt = ""
13 |
14 | local user_library = os.getenv("HOME") .. "/R/library-ood-bioconductor-3.6"
15 |
16 | prereq("singularity")
17 | -- prepend_path("PATH", bin)
18 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", "/:" .. host_mnt, ",")
19 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", library .. ":/library", ",")
20 | setenv("RSTUDIO_SINGULARITY_IMAGE", img)
21 | setenv("RSTUDIO_SINGULARITY_HOST_MNT", host_mnt)
22 | setenv("RSTUDIO_SINGULARITY_CONTAIN", "1")
23 | setenv("RSTUDIO_SINGULARITY_HOME", os.getenv("HOME") .. ":/home/" .. os.getenv("USER"))
24 | setenv("R_LIBS_USER", user_library)
25 | setenv("R_ENVIRON_USER",pathJoin(os.getenv("HOME"),".Renviron.OOD"))
26 |
27 | -- Note: Singularity on CentOS 6 fails to bind a directory to `/tmp` for some
28 | -- reason. This is necessary for RStudio Server to work in a multi-user
29 | -- environment. So to get around this we use a combination of:
30 | --
31 | -- - SINGULARITY_CONTAIN=1 (containerize /home, /tmp, and /var/tmp)
32 | -- - SINGULARITY_HOME=$HOME (set back the home directory)
33 | -- - SINGULARITY_WORKDIR=$(mktemp -d) (bind a temp directory for /tmp and /var/tmp)
34 | --
35 | -- The last one is called from within the executable scripts found under `bin/`
36 | -- as it makes the temp directory at runtime.
37 | --
38 | -- If your system does successfully bind a directory over `/tmp`, then you can
39 | -- probably get away with just:
40 | --
41 | -- - SINGULARITY_BINDPATH=$(mktemp -d):/tmp,$SINGULARITY_BINDPATH
42 |
--------------------------------------------------------------------------------
/rstudio-singularity/modulefiles/rstudio_singularity/3.6.2-geospatial.lua:
--------------------------------------------------------------------------------
1 | help([[
2 | This module loads the RStudio Server environment which utilizes a Singularity
3 | image for portability.
4 | ]])
5 |
6 | whatis([[Description: RStudio Server environment using Singularity]])
7 |
8 | local root = "/uufs/chpc.utah.edu/sys/installdir/rstudio-singularity/3.6.2"
9 | -- local bin = pathJoin(root, "/bin")
10 | local img = pathJoin(root, "ood-rstudio-geo-rocker_3.6.2.sif")
11 | local library = pathJoin(root, "/library-ood-3.6")
12 | local host_mnt = ""
13 |
14 | local user_library = os.getenv("HOME") .. "/R/library-ood-rocker-3.6"
15 |
16 | prereq("singularity")
17 | -- prepend_path("PATH", bin)
18 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", "/:" .. host_mnt, ",")
19 | prepend_path("RSTUDIO_SINGULARITY_BINDPATH", library .. ":/library", ",")
20 | setenv("RSTUDIO_SINGULARITY_IMAGE", img)
21 | setenv("RSTUDIO_SINGULARITY_HOST_MNT", host_mnt)
22 | setenv("RSTUDIO_SINGULARITY_CONTAIN", "1")
23 | setenv("RSTUDIO_SINGULARITY_HOME", os.getenv("HOME") .. ":/home/" .. os.getenv("USER"))
24 | setenv("R_LIBS_USER", user_library)
25 | setenv("R_ENVIRON_USER",pathJoin(os.getenv("HOME"),".Renviron.OOD"))
26 |
27 | -- Note: Singularity on CentOS 6 fails to bind a directory to `/tmp` for some
28 | -- reason. This is necessary for RStudio Server to work in a multi-user
29 | -- environment. So to get around this we use a combination of:
30 | --
31 | -- - SINGULARITY_CONTAIN=1 (containerize /home, /tmp, and /var/tmp)
32 | -- - SINGULARITY_HOME=$HOME (set back the home directory)
33 | -- - SINGULARITY_WORKDIR=$(mktemp -d) (bind a temp directory for /tmp and /var/tmp)
34 | --
35 | -- The last one is called from within the executable scripts found under `bin/`
36 | -- as it makes the temp directory at runtime.
37 | --
38 | -- If your system does successfully bind a directory over `/tmp`, then you can
39 | -- probably get away with just:
40 | --
41 | -- - SINGULARITY_BINDPATH=$(mktemp -d):/tmp,$SINGULARITY_BINDPATH
42 |
--------------------------------------------------------------------------------
/rstudio-singularity/readme.txt:
--------------------------------------------------------------------------------
1 | RStudio Server Singularity container for Open OnDemand.
2 |
3 | Pulled from https://github.com/nickjer/singularity-rstudio.git
4 |
5 | Update apr19
6 |
7 | Use Jeremy's Singularity image
8 |
9 | mkdir /uufs/chpc.utah.edu/sys/installdir/rstudio-singularity/1.1.463
10 | cd /uufs/chpc.utah.edu/sys/installdir/rstudio-singularity/1.1.463
11 | singularity pull shub://nickjer/singularity-rstudio:3.5.3
12 |
13 | Update Nov19
14 |
15 | Use Bob Settlage's Docker image - has bio R libs installed which was requested by user:
16 |
17 | cd /uufs/chpc.utah.edu/sys/installdir/rstudio-singularity/3.6.1
18 | singularity pull docker://rsettlag/ood-rstudio-bio:3.6.1
19 |
20 | Some other useful info on Bob's setup:
21 | his containers: https://hub.docker.com/u/rsettlag
22 | ( may try also rsettlag/ood-rstudio-basic and rsettlag/ood-rstudio-geospatial
23 | his OOD apps: https://github.com/rsettlage/ondemand2
24 |
25 | Update 2, Nov19
26 | Bob has hardcoded http_proxies in the container R config file (/usr/local/lib/R/etc/Rprofile.site). Rather than trying to hack around this through the user's ~/.Rprofile, pull a sandbox container, modify it and save:
27 |
28 | $ singularity build --sandbox ood-rstudio-bio_3.6.1 docker://rsettlag/ood-rstudio-bio:3.6.1
29 | $ sudo /uufs/chpc.utah.edu/sys/installdir/singularity3/std/bin/singularity shell --writable ood-rstudio-bio_3.6.1
30 | $ apt-get update && apt-get install apt-file -y && apt-file update && apt-get install vim -y
31 | $ vi /usr/local/lib/R/etc/Rprofile.site
32 | - remove:
33 | Sys.setenv(http_proxy="http://uaserve.cc.vt.edu:8080")
34 | Sys.setenv(https_proxy="http://uaserve.cc.vt.edu:8080")
35 | Sys.setenv(R_ENVIRON_USER="~/.Renviron.OOD")
36 | Sys.setenv(R_LIBS_USER="~/R/OOD/3.6.1")
37 |
38 | $ sudo singularity build ood-rstudio-bio_3.6.1.sif ood-rstudio-bio_3.6.1/
39 |
40 | NOTE - still figuring out how to get the http_proxy env. vars. into the RStudio Server session.
41 | The process is as follows:
42 | OOD job (script.sh.erb) -> starts rserver in the container (env vars need to be brought in here with the SINGULARITYENV_ prefix) -> each new R session starts via $RSESSION_WRAPPER_FILE, which starts the R session.
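To illustrate the SINGULARITYENV_ mechanism only (not our final solution; the proxy URL is a placeholder, not CHPC's):

export SINGULARITYENV_http_proxy=http://proxy.example.edu:8080
export SINGULARITYENV_https_proxy=http://proxy.example.edu:8080
singularity exec ood-rstudio-bio_3.6.1.sif env | grep -i proxy   # the vars show up inside the container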
43 |
44 | Bob's Dockerfile is in dir 3.6.1, and is based on rocker/rstudio, https://hub.docker.com/r/rocker/rstudio.
45 | We could base containers with other packages on the rocker/rstudio container.
46 |
47 | Jan 2020 - adding some more packages for Atakan
48 |
49 | Monocle 3
50 | https://cole-trapnell-lab.github.io/monocle3/docs/installation/
51 | BiocManager is already there, so:
52 |
53 | BiocManager::install(c('BiocGenerics', 'DelayedArray', 'DelayedMatrixStats',
54 | 'limma', 'S4Vectors', 'SingleCellExperiment',
55 | 'SummarizedExperiment', 'batchelor'))
56 | install.packages("devtools")
57 | devtools::install_github('cole-trapnell-lab/leidenbase')
58 | apt-get install libudunits2-dev
59 | apt-get install libgdal-dev
60 | devtools::install_github('cole-trapnell-lab/monocle3')
61 |
62 | Slingshot:
63 | apt-get install libgsl-dev
64 | BiocManager::install("slingshot")
65 |
66 | Mar20
67 |
68 | rocker/geospatial is based on rocker/rstudio, has most of the packages and runs straight from the Docker image:
69 | singularity build ood-rstudio-geo-rocker_3.6.2 docker://rocker/geospatial
70 |
71 | similarly for bioconductor/bioconductor_docker, https://hub.docker.com/r/bioconductor/bioconductor_docker:
72 | singularity build ood-bioconductor_3.6.2 docker://bioconductor/bioconductor_docker
73 |
74 | and for base R, https://hub.docker.com/r/rocker/rstudio:
75 | singularity build ood-rstudio-rocker_3.6.2.sif docker://rocker/rstudio
76 |
77 |
78 |
--------------------------------------------------------------------------------
/var/www/ood/apps/sys/shell/bin/ssh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | args=(-o ConnectTimeout=7200)   # ssh options kept in an array so each word is passed separately
4 | 
5 | exec /usr/bin/ssh "${args[@]}" "$@"
6 |
7 |
--------------------------------------------------------------------------------
/var/www/ood/public/CHPC-logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chpc-uofu/OnDemand-info/9e3e41ad9c78a63827e3640556d47206b7bf7fbf/var/www/ood/public/CHPC-logo.png
--------------------------------------------------------------------------------
/var/www/ood/public/CHPC-logo35.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chpc-uofu/OnDemand-info/9e3e41ad9c78a63827e3640556d47206b7bf7fbf/var/www/ood/public/CHPC-logo35.png
--------------------------------------------------------------------------------
/var/www/ood/public/chpc_logo_block.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chpc-uofu/OnDemand-info/9e3e41ad9c78a63827e3640556d47206b7bf7fbf/var/www/ood/public/chpc_logo_block.png
--------------------------------------------------------------------------------
/var/www/ood/public/logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chpc-uofu/OnDemand-info/9e3e41ad9c78a63827e3640556d47206b7bf7fbf/var/www/ood/public/logo.png
--------------------------------------------------------------------------------