.
├── HPC Python
│   ├── 10-latest.html
│   ├── 1_intro.pdf
│   ├── 2_cython.pdf
│   ├── 3_numpy.pdf
│   ├── 4_mpi4py.pdf
│   └── video.pages
│       ├── Data
│       │   └── Hardcover_bullet_black-13.png
│       ├── Index.zip
│       ├── Metadata
│       │   ├── BuildVersionHistory.plist
│       │   ├── DocumentIdentifier
│       │   └── Properties.plist
│       ├── preview-micro.jpg
│       ├── preview-web.jpg
│       └── preview.jpg
├── LICENSE
├── Linux:Unix Basics for HPC
│   ├── 9-latest.html
│   ├── LinuxIntro-20141009-eijkhout (1).pdf
│   ├── Shell scripting 2014 eijkhout (2).pdf
│   └── video.pages
│       ├── Data
│       │   └── Hardcover_bullet_black-13.png
│       ├── Index.zip
│       ├── Metadata
│       │   ├── BuildVersionHistory.plist
│       │   ├── DocumentIdentifier
│       │   └── Properties.plist
│       ├── preview-micro.jpg
│       ├── preview-web.jpg
│       └── preview.jpg
├── Optimize Your Code for the Intel Xeon Phi
│   ├── 1. MIC-Intro-SC13.pdf
│   ├── 2. Lab1_User_Env_Stampede.pdf
│   ├── 3. MIC_Native_2013_12_04.pdf
│   ├── 4. MIC_Native_Lab_2013_12_04.pdf
│   ├── 5. offload_slides_DJ2013-5.pdf
│   ├── 6. offload_exerciseA_Dec2013.pdf
│   ├── 7. offload_exerciseB_DJ2013-2.pdf
│   ├── 8. MIC Symmetric.pdf
│   └── 9. MIC Symmetric LAB.pdf
├── README.md
└── TACC MPI Workshop
    ├── 1. MPIWelcome.pdf
    ├── 2. MPIComputingEnvironment.pdf
    ├── 3. IntroToParallelComputing.pdf
    ├── 4. IntroToMPI.pdf
    ├── 5. MPIIntroExercisesSolutions.pdf
    ├── 6. MPILaplaceExercise.pdf
    ├── 7. MPILaplaceExerciseReview.pdf
    ├── 8. AdvancedMPI.pdf
    ├── 9. OutroToParallelComputing.pdf
    └── syllabus.pages
        ├── Data
        │   └── Hardcover_bullet_black-13.png
        ├── Index.zip
        ├── Metadata
        │   ├── BuildVersionHistory.plist
        │   ├── DocumentIdentifier
        │   └── Properties.plist
        ├── preview-micro.jpg
        ├── preview-web.jpg
        └── preview.jpg

/HPC Python/10-latest.html:

TACC HPC Python, October 30, 2014, 9:00am - 12:00pm

 
Welcome!
Please submit questions at the bottom of this page.
loud and clear


cp: missing destination file operand after '/home1/..../python-hpc.tar.gz'
Can you tell what I'm doing wrong?
Never mind -- I left off the "." (the destination) at the end of the command.


Hello. Someone there?
Hello.
I'm not able to submit an interactive job to the TACC-HPC-PYTHON account:
User dssussx is not associated with project TACC-HPC-PYTHON (in accounting_check_prod.pl).
Online users may use a regular idev session.
The TACC-HPC-PYTHON account is reserved for local users only.

What do we do after we log in?
module load intel/14.0.1.106
module load python/2.7.6
and "cp ~train00/python-hpc.tar.gz ." 

With idev command:
Please report this problem:
what problem?
U. of TX users contact (https://portal.tacc.utexas.edu/consulting)
XSEDE    users contact (https://portal.xsede.org/group/xup/help-desk).
FAILED
What command are you attempting? Just "idev"?



What do we do after "cp ~train00/python-hpc.tar.gz"?
tar -xzf python-hpc.tar.gz
I got this error:
login4.stampede(7)$ tar -fxz python-hpc.tar.gz
try "tar -xzf python-hpc.tar.gz" 
(I'm quite surprised that the sequence of options makes a difference)
tar: You must specify one of the `-Acdtrux' or `--test-label'  options
Try `tar --help' or `tar --usage' for more information.
login4.stampede(8)$ 
Are we just trying to untar this file?
Yes.
Okay. I will look online for how to do this.
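The option-order surprise above is worth pinning down: with the leading dash, tar treats the text right after `-f` as the archive name, so in `tar -fxz` the letters `xz` become the "archive" and no operation mode (`-x`) is left, hence the `-Acdtrux` error. A minimal sketch you can run anywhere (directory and file names are made up for illustration; GNU tar assumed):

```shell
# Build a small tarball in a scratch directory, then extract it.
cd "$(mktemp -d)"
mkdir python-hpc && echo "print('hi')" > python-hpc/demo.py
tar -czf python-hpc.tar.gz python-hpc
rm -r python-hpc

# Wrong order: "-fxz" makes tar read "xz" as the archive name and
# leaves no operation mode, so GNU tar aborts with the error seen above.
tar -fxz python-hpc.tar.gz 2>/dev/null || echo "order matters"

# Right order: the mode (-x) first, and -f immediately followed by the
# archive name.
tar -xzf python-hpc.tar.gz
ls python-hpc/demo.py
```

The rule of thumb: `-f` must be the last clustered option, because whatever follows it is taken as the file name.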

I used srun instead of idev to get a compute node. Is that okay?
That should be okay.

Can the speaker move his terminal window down? I want to see the top of the current slide. I believe it shows us how to run the job. Is this correct? Are we running profiling.py?
python -m cProfile profiling.py
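Besides `python -m cProfile profiling.py`, you can profile from inside a script with the same module. A small self-contained sketch (the function `slow_sum` is made up for illustration, not from the workshop's profiling.py):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately unvectorized loop -- the kind of hotspot cProfile exposes.
    total = 0.0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Sort the report by cumulative time, as is typical when hunting hotspots.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()

for line in report.splitlines():
    if "function calls" in line:
        print(line.strip())
        break
```

The report lists each function with call counts and cumulative time, which is how you find where a script actually spends its time before reaching for Cython or NumPy.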

The slides should be available online:
https://www.tacc.utexas.edu/user-services/training/hpc-python
I found them!!


I get an ERROR: Cell magic `%%cython` not found.
How did you start your ipython session?
did you do the "%load_ext"?
(and on what machine are you?)
let me try again
Yeah, load_ext solved it.

Not sure what to do here:
login4.stampede(38)$ icc -shared -fPIC -03 myfunc -o myfun1.so -I$TACC_PYTHON_INC/python2.7/
icc: command line warning #10006: ignoring unknown option '-03'
That's a letter Oh, not digit zero
icc: error #10236: File not found:  'myfunc'
you forgot the ".c" extension
icc: command line error: no files specified; for help type "icc -help"
login4.stampede(39)$ 
Same error. Where did myfunc1.c come from?
from the command "cython myfunc1.pyx" (thank you) (I missed that)
I retried it b/c I had a typo. Here's what I got:
login4.stampede(40)$ icc -shared -fPIC -03 myfunc1.c -o myfun1.so -I$TACC_PYTHON_INC/python2.7/
icc: command line warning #10006: ignoring unknown option '-03'
letter Oh, not digit zero: -03 vs -O3


What's inside $TACC_PYTHON_INC/python2.7/?
just "ls" it.
my times are greater than what you get
on stampede? do you at least get an improvement in the same way as the teacher?
By how much?
almost twice 
let me check on the improvement
I should compare test_python vs test_cython right?


Thanks for the link to the tutorial!

Okay I can do the cython myfunc3.pyx and the icc line without error. I generate an executable "myfunc3", but when I do "python test_cython3.py" I get an error:
"Import error: no module named myfunc3"
Any idea?
I guess the .so files are not being generated
I might have missed the part where he said how to do that
Are you compiling the way that Antonio shows on the slides?
I do the line starting with cython then the line starting with icc verbatim. Then python test_whatever
Hmm. So the .so is being generated, but then the function call in test_cython3.py cannot find it?
are you getting any errors? do a cut/paste of your terminal session.
here's how it works for me:
1. run cython:
cython myfunc3.pyx 
login4.stampede(29)$ ls
myfunc1.pyx  myfunc2.pyx  myfunc3.c  myfunc3.pyx
2. compile:
icc -fPIC -shared myfunc3.c -o myfunc3.so -I$TACC_PYTHON_INC/python2.7
login4.stampede(33)$ python test_cython3.py 
7.4535600368e+13



So if I want to do this using my own Python installation, what would I give for TACC_PYTHON_INC/python2.7/?
hm. do "find / -name cellobject.h"
that's the directory you need.
This is the include directory for your particular version of Python.
thanks
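An alternative to searching the filesystem: Python itself can report its header directory. This is an addition, not something shown in the session; `sysconfig` exists in Python 2.7 and later.

```python
import sysconfig

# The "include" path is the directory holding Python.h, cellobject.h,
# etc. -- the value to pass to the compiler with -I in place of
# $TACC_PYTHON_INC/python2.7.
include_dir = sysconfig.get_paths()["include"]
print(include_dir)
```

Running this with the same interpreter you intend to build the extension for guarantees you compile against matching headers.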


What is meant by broadcasting again?
This is referring to sending/receiving variables within an MPI-enabled code.
Different MPI tasks each have unique values for variables within an MPI code.
Broadcast is a way to send a value (or values) from one task to all the other tasks.
More about this later in Antonio's presentation.
Thanks.

What is meant by MIC? Using the MIC?
Many Integrated Cores
That's another name for the Intel Xeon Phi co-processor
on the stampede nodes
Thanks for explaining that.

Since we can't use the MIC on the Login node, how do we get off the login node, or can we?
start an idev session: that puts you on a compute node, and they all have a MIC
(idev usually starts in the development queue; you can also use the normal-mic queue)
Okay then. Thanks!

How do you decide on 240 as the number for MIC_OMP_NUM_THREADS?
60 cores times 4 threads
But a node has only 16 cores?
The node has 2 sandybridges of 8 cores, correct.
The MIC is a separate co-processor; it has 60 cores
note the variable name MIC_OMP_NUM_THREADS, as opposed to OMP_NUM_THREADS


My time did not improve on that last example. I don't know why.
what is the output of "hostname -a"?
login4.stampede(77)$ hostname -a
login4
You need to be on a compute node in order to use a MIC.
Please refrain from running this code on the login node -- it will affect performance for other users.
Same here, no improvement; hostname is c559-101
Try turning on the offload report environment variable to see if you are actually offloading.
export OFFLOAD_REPORT=2
If you are offloading while running your code, you will receive extra output telling you.
Do you see this when you run?
No, I do not see extra output
Can you double check your environment variables and their values?
sure
export MKL_MIC_ENABLE=1
export OMP_NUM_THREADS=16
export MIC_OMP_NUM_THREADS=240
export OFFLOAD_REPORT=2
echo $OMP_NUM_THREADS $MKL_MIC_ENABLE $MIC_OMP_NUM_THREADS $OFFLOAD_REPORT 
16 1 240 2
Yeah they are all there
Would you please output which modules you are using with "module list"?
Currently Loaded Modules:
  1) TACC-paths   3) cluster-paths   5) cluster   7) intel/13.0.2.146   9) fftw3/3.3.2
  2) Linux        4) xalt/0.4.6      6) TACC      8) mvapich2/1.9a2

Inactive Modules:
  1) python/2.7.6
Ahhh
:-) You need to load intel/14 to get the MIC enabled python/2.7.6.
Let me know if that works out.
Yeah worked out
Great!



I did not get output here.
login4.stampede(75)$ python basic_1.py 
[....]
_tkinter.TclError: no display name and no $DISPLAY environment variable
login4.stampede(76)$ 
When you connected to Stampede, did you use ssh -X? And do you have an X11 server on your local computer?
I did not use ssh -X. I used ssh username@login.xsede.org. I don't know if I have an X11 server. I'm using a laptop with Ubuntu.
If you have Ubuntu, using ssh -X .... should fix the problem.
ssh -X user@host or ssh -Y user@host for Stampede should be sufficient for X11 tunneling and opening up X-windows.


What does hostname -a output if I am on a compute node?
How do I make sure I am using a compute node?
What are you getting (it varies for all of them)?
If you see login.... then it's a login, if you see c###-### it's a compute node.
Got it.

Could you tell me how to compute $MIC_OMP_NUM_THREADS for a node with 16 cores?
Do you mean show the value of the environment variable $MIC_OMP_NUM_THREADS?
echo $MIC_OMP_NUM_THREADS
No -- how did you come up with the number 240?
Or is it a default per node?
If I am specifying it in a job script, how will I do it?

The node has 2 sandybridges of 8 cores, correct.
The MIC is a separate co-processor; it has 60 cores
note the variable name MIC_OMP_NUM_THREADS, as opposed to OMP_NUM_THREADS
It's part of the hardware of the Intel Xeon Phi Coprocessor (i.e. MIC).
Oh OK, so do all nodes on Stampede have MICs with 60 cores?
Essentially all compute nodes. Not the login nodes. All our MICs are "the same" in the sense they have 60 cores.
Why multiply by 4 then?
Because the MIC has 4 hardware threads
Thanks I will look this up
https://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-2-understanding

So is this available for any code that is compiled using MKL?
With the intel compilers, yes.
Offload will be supported via OpenMP 4.0 in upcoming GCC compiler implementations as well.
Currently I have code compiled with the Intel compilers and MKL on Stampede. So if I specify the MIC variables it will automatically offload to the MIC, right?
It depends on which MKL calls you are using and how big your matrices are.
Mainly the LAPACK routines are used, with pretty large matrices.
Many routines are supported. I'm not sure which specific ones, but many of the 'popular' ones are, as Antonio pointed out in the tutorial.

How do I get on a compute node? I want to run these examples.
I was on a compute node at one point but not anymore.

idev -t 2:00:00
The account for TACC-HPC-PYTHON is reserved for local participants.

Thank you for that great presentation. The last part on mpi4py was difficult for me to follow. However, overall, I learned a lot.

/LICENSE:

The MIT License (MIT)

Copyright (c) 2014 Greenhat

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

/Linux:Unix Basics for HPC/9-latest.html:

Welcome to:
Linux/Unix Basics for HPC
October 9, 2014
9:00am - 12:00pm
ROC 1.900

Please submit any questions in this window.  Thank you.

You may need to mute your computer's mic if you are getting an audio loop or feedback.


TO MAKE THINGS A LITTLE LESS CONFUSING, PLEASE SUBMIT QUESTIONS AT THE BOTTOM OF THE PAGE. THANKS!




Sorry, is read 4, write 2, and execute 1 in the permissions chmod command?

yes.
Thank you.


What is the difference between cat textfile and more textfile?
cat will show all the content of the file, even if the file is large.
more allows for "output" to be viewed one page at a time.
cat allows for concatenation of files and other more complex manipulations of "output".
Thank you.
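The concatenation side of cat mentioned above can be shown in a couple of lines (file names made up for illustration):

```shell
# cat concatenates; more pages. Demonstrate the concatenation side
# in a scratch directory.
cd "$(mktemp -d)"
printf 'first\n'  > a.txt
printf 'second\n' > b.txt

cat a.txt b.txt > both.txt   # join two files into one
cat both.txt                 # shows both lines at once; "more both.txt" would page
```

For a long file, `more both.txt` (or `less`) would show the same content one screen at a time instead of scrolling it all past you.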


Why do some commands work with capitals and small letters, like PWD or pwd?
PWD is an environment variable. You can do echo $PWD or cd $PWD. This means that PWD is a variable that has a value associated with it.
pwd is a command
Thank you
so does the command "pwd" dump the absolute path into variable $PWD each time you run it?
Yes -- the shell updates it for you automatically every time you change directories.
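A quick check that the variable and the command agree, runnable in any Bourne-style shell:

```shell
# $PWD is kept in sync by the shell: every cd updates it, so it always
# agrees with what the pwd command prints.
cd /tmp
if [ "$PWD" = "$(pwd)" ]; then
    echo "PWD agrees with pwd"
fi

cd /
echo "$PWD"   # prints: /
```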

Sorry. Figured it out.
How is "ls -d *" different from "ls"?
Great, thanks.
The "-d" tells ls to list directories themselves rather than their contents, but the asterisk then asks for every (non-hidden) entry in the current directory, so at first glance the two are equivalent. I'd just use "ls".
To clarify, "all" in this case means all non-hidden files. If you really want to see every file; hidden and non-hidden, then you can use the "-a", i.e.: "ls -a".



So, !$ works the same way as the up arrow?
!$ is the "end" of the previous command. Consider the following example: We start by looking for a word in a file
$ grep -i joe /some/long/directory/structure/user-lists/list-15
if joe is in that userlist, we want to remove him from it. We can either fire up vi with that long directory tree as the argument, or as simply as
$ vi !$
Which bash expands to:
$ vi /some/long/directory/structure/user-lists/list-15

It depends on what your last command was.
e.g., "cat 1234". !$ gives you "1234".

$! Expands to the process ID of the most recently executed background (asynchronous) command. (different command -- whoops)


How to end the addition of text to a file while using cat? Is Ctrl+C correct?
Ctrl+D
does that help?
OK, it worked
Cool.


Is it possible to show headers for this output?
$ ls -a -s
total 120
 0 .                     0 .matplotlib
 0 ..                    0 .ssh
 8 .CFUserTextEncoding   0 CytoscapeConfiguration
32 .DS_Store             0 Desktop
 8 .Rapp.history         0 Documents
 0 .Trash                0 Downloads
 0 .allegroviva          0 Google Drive
48 .bash_history         0 Library
 8 .bash_profile         0 Movies
 8 .bash_profile.macports-saved_2014-07-03_at_15:46:43   0 Music
 0 .config               0 Pictures
 0 .cups                 0 Public
 0 .free42               0 VirtualBox VMs
 8 .lesshst              0 this-is-a-new+name
 0 .local
 What do you mean by headers?
 Try ls -lsh
 -l for long format
 -s for size at the beginning
 -h for human readable

How do you find the size of a file?
Hint: "man"
e.g. "man ls"
Then type "/" followed by what you are wanting to find, maybe "/size"
You can then cycle through instances of "size" by hitting "n" for next.
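Concretely, a few common ways to read off a file's size (file name made up for illustration; `stat -c` is the GNU coreutils form):

```shell
# Make a file of known size in a scratch directory.
cd "$(mktemp -d)"
printf '0123456789' > data.bin   # exactly 10 bytes, no newline

ls -l data.bin           # the 5th column is the size in bytes
stat -c '%s' data.bin    # GNU stat: prints just the byte size -> 10
du -h data.bin           # disk usage (allocated blocks), human-readable
```

Note that `du` reports allocated blocks rather than byte length, which is why it can disagree with `ls -l` for small files.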


My directories seem to have size 4? (instead of 0)
Yep, me too. There is a finite size to store the directory file. I'm going to guess that Victor's size of 0 might be Mac related??
A size of 4 refers to the block size. Even if your directory is empty, it gets blocks allocated from the file system; the block size could be 1024 or 4096 depending on the system.
The 68 is the size in bytes.


What do the 'pts/##' entries mean?
The pts/0 is telling you which "pseudo-terminal" the user is logged in on. In this case it's terminal #0. The "(:0.0)" tells you which hostname and display you're using.

Is there an option to show the login date of users with the "who" command?
what do you mean by date?
Use "w" instead if you want to see "dates".
For example, if I want to print only users that logged in today or yesterday?

You would need to filter your results from the system logs.
Victor will be talking more about "filters", e.g. sed and awk later on in the talk.
Thanks

Why are his files listed left justified down the page, and mine are listed across?
(For 'ls -F')
(His are vertical and mine are horizontal)
You can list them vertically by doing this:
ls -1F
Sweet, thanks

Shall we talk about how to submit a job to the cluster and how to work with various modules like CUDA, etc.?
Submitting jobs depends on which cluster you are using. Stampede? Or somewhere else? 

stampede
How can I learn more about the Stampede cluster?
We have all the information in our user guide: https://www.tacc.utexas.edu/user-services/user-guides/stampede-user-guide
you also can get info about cuda on stampede there.
We'll not be covering that in this tutorial.
If you want information regarding job submission: https://www.tacc.utexas.edu/user-services/user-guides/stampede-user-guide#running-slurm-jobcontrol

Do you have to be a UT student to access Stampede?

Not required. You can apply for an allocation on Stampede through XSEDE: https://kb.iu.edu/d/bazs
As a student you probably need to get your professor to grant you access.


4 read (r)
2 write (w)
1 execute (x)

Practical Examples

chmod 400 mydoc.txt read by owner
chmod 040 mydoc.txt read by group
chmod 004 mydoc.txt read by anybody (other)
chmod 200 mydoc.txt write by owner
chmod 020 mydoc.txt write by group
chmod 002 mydoc.txt write by anybody
chmod 100 mydoc.txt execute by owner
chmod 010 mydoc.txt execute by group
chmod 001 mydoc.txt execute by anybody

One can also use a different syntax:
chmod u+rw file
that would give read and write permission to a file. Or
chmod g+X file
that would give execution permission to a group (if the file was executable in the first place when using capital X)
Or a combination:
chmod go+rx file
which will give the group and anybody else read and execute permissions
I find this easier to remember

You can also use "-" to remove a permission. Yep, thanks for adding that!
You can use "a" to represent everybody: user, group, and others.
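The octal and symbolic forms above can be checked directly with stat (GNU coreutils assumed; the file name is illustrative):

```shell
# Octal digits add up: read=4, write=2, execute=1 -- one digit each
# for user, group, other.
cd "$(mktemp -d)"
touch mydoc.txt

chmod 640 mydoc.txt      # 6 = 4+2 (rw) owner, 4 (r) group, 0 others
stat -c '%a' mydoc.txt   # GNU stat prints the octal mode: 640

chmod go+r mydoc.txt     # symbolic form: add read for group and others
stat -c '%a' mydoc.txt   # now 644
```

`ls -l mydoc.txt` shows the same thing as `-rw-r--r--`, which is just 644 spelled out letter by letter.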

would you please explain grep again?
Victor will be talking more on "grep" specifically.
grep will search in a string
that string can be a file, just a string,...
If it finds what it's looking for, it will print it
Thanks

can someone illustrate a line that would be found by e. but not e* or vice versa?
e* matches zero or more e's, so it can match even a bare "e" with nothing after it -- let's try a file whose last line is just "e":
cproctor@staff:test>cat test
some text
e
Then
cproctor@staff:test>grep e test 
some text
e
cproctor@staff:test>grep e. test 
some text
cproctor@staff:test>grep e* test 
some text
e
So the last line, which has just an "e" with no character after it, is matched by "e" and "e*" but not by "e.", which requires a character following the e.
I see. Thank you.
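The same difference can be reproduced non-interactively with grep -c, which counts matching lines (scratch file made up to mirror the session's example):

```shell
# "e." needs a character after the e; "e*" matches zero or more e's,
# so it matches every line.
cd "$(mktemp -d)"
printf 'some text\ne\n' > test.txt

grep -c 'e'  test.txt   # 2: both lines contain an e
grep -c 'e.' test.txt   # 1: only "some text" has a character after an e
grep -c 'e*' test.txt   # 2: zero-or-more e's matches every line
```

Quoting the pattern (`'e*'`) also keeps the shell from expanding the `*` as a filename glob before grep ever sees it.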

Re: Exercise 16
Why is he using "ls | grep" on the last line?
That's just an example of how to use grep to filter the output given by ls. What's happening there is that the output of ls is piped to grep, which filters it.


Is there a difference between "ls | grep ski*" and "ls ski*"?
I'm just learning too, but I think in grep "ski*" means "sk" followed by any number of i's.
"ls ski*" outputs a list of files whose names start with "ski". "ls | grep ski*" filters the output of ls, giving the lines that match.
Thanks

What was the web page he just mentioned for sed and awk?
http://shop.oreilly.com/product/9781565922259.do

Let's say I screwed up "ls". How do I restore it?

type "which ls" and tell me what you get.
Well, I have not modified it. I am just wondering
"ls" is an executable that only the root user may change.
If you were to "screw it up", then you'd have a hard time looking around in directories.
Typically "ls" lives in "/bin/ls"
If you really did mess it up or delete it, you'd be in trouble
Do I download it from somewhere? Just curious?
You could. It comes by default with any Linux/Unix environment.


So is ls a process or a file?
an executable file.
can you give an example of a process?
type 'ps aux' and you'll get the list of processes currently running
thank you



My work.cmd did not run.
cat work.cmd 
#!/bin/bash

echo "It works"
what's the error message?
-bash: work.cmd: command not found
That's because it is not "on the path" so to speak.
You must show linux where it lives.
Using a relative path, you can type:
./work.cmd
that is, if you are in the same directory
Cool, it worked that way

How do I add something to the $PATH simply from the command line (without editing .bashrc)?
export PATH=$PATH:whatever_you_are_adding
interesting, this works well:
"export PATH=$PATH:$PWD"
Sure, you can always use environment variables
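Putting the pieces of this exchange together (the directory and script names are made up; the script echoes the same message as the work.cmd example above):

```shell
# Add a directory to PATH for the current session only.
cd "$(mktemp -d)"
mkdir bin
printf '#!/bin/bash\necho "It works"\n' > bin/work.cmd
chmod u+x bin/work.cmd

export PATH="$PATH:$PWD/bin"   # append; lasts only for this shell session
work.cmd                       # now found without ./
```

Because nothing is written to .bashrc, a new terminal starts with the original PATH again.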



Sorry, should we type them in a shell?
We made something named work.cmd.
Not in this case.
so we have to just write in terminal?!
yes, in that particular example. Sorry for the confusion
oh, ok thanks


Do these variables stay forever in the memory?
No. They only exist in the terminal session that you created them in.
To make them persistent, you usually set them in a file that runs when you start a new terminal. Usually ~/.bashrc file.

Sorry, he went through Export fast... what does it do?
Victor didn't have time to discuss "export" in depth.
It is specifically a bash command that 'exports' a variable into the environment.
If you write a variable inside a shell script and you want to reference it outside the
script, then you would use the "export" command so that it is then saved in the environment
instance of that particular terminal session.
Thanks... why do you need 
exercise 2 illustrates that.
Does that help?

hmmm... doesn't work
my test file has one line: "export g=4"
then i do:
[]./test
[]echo $g
(blank)
:-) You're right.... I should test my examples before I write them.
export makes the variable available to sub-processes.
That is,
export name=value
means that the variable name is available to any process you run from that shell process. If you want a process to make use of this variable, use export, and run the process from that shell.
name=value
means the variable scope is restricted to the shell, and is not available to any other process. You would use this for (say) loop variables, temporary variables etc.
It's important to note that exporting a variable doesn't make it available to parent processes. That is, specifying and exporting a variable in a spawned process doesn't make it available in the process that launched it.
(yay stackexchange)
I see... so if I put echo $g into the test program, it should work?
Let's see.
Yes.. returns "4"
But it also works if I just have:
g=4
echo $g
So it does not seem to require export, at least within the script
Correct.
But, if your script gave g as an input to a subprocess, then the export becomes important
Got it, thanks!
cproctor@staff:p>cat test
###########
#!/bin/bash

# Add or remove export here
export g=4

echo -e "test $g"

./silly
###########
cproctor@staff:p>cat silly
###########
#!/bin/bash

echo -e "silly $g"
###########
If you make "test" and "silly" files, chmod u+x to them to make them executable,
and then play with export on the "g" variable, you will see that it is passed to the silly
script when "export" is used and it is not passed when "export" is left off.






Do these files (.bashrc and .profile) have to start with #!/bin/bash?
no. good question though



Is there a way to log out and log in other than closing my terminal window?
You aren't specifically logging in and out by closing the terminal window.
Oh, so how do I log out? and the log in?
When you open a new terminal session, it starts a new process for your user id.
Logging in and out, to me, means logging in and out of either a remote computer
or starting your local computer. 


this is a hidden file right?
I just created it using touch .bashrc
and how can I write? Using cat?
cat > .bashrc



yes. It was created by the system. It is a hidden file since it starts with "." 

Do these files have to be called .bashrc and .profile ? I used different names because I knew those files existed elsewhere. (hasn't worked yet)
From .profile you can call any other file, with the source command, so you could create a file named 'my_file' and then do 'source my_file' in the .profile file
I made a .profile file that calls my_file, but it no longer calls the original .bashrc (I had some aliases set up there)
what should I add to .profile?
Add "source path/my_file" in your .profile
And, you should make sure you also have "source ~/.bashrc" somewhere in your .profile
Got it working!



can you please slow it down a little bit?
the amount of material clearly exceeds 3 hours....
maybe next time we'll do a whole day, or turn scripting into a separate section
The best part is that it is recorded.
It will be available online later so that everyone may review and go at their own pace.

Hmmm. How do I pull out the first argument of the three?
$1 represents the first parameter
3=$1
echo $3  <--- This is very confusing to use. How could you get the third argument?
I was wrong.
Yes... would be better to use a letter variable, right?
It is safer to stay away from BASH specific reserved syntax, yes.
you would do
y=$3
echo $y
This is clearer.
nice... ok, i've got
bash-3.2$ ./namum cat pig donkey
There are 3 arguments
The first argument is cat
echo "first arg is $1" worked for me-- without need for defining other variables
why do you need whitespace around the test?
:-) BASH is a fickle language
Spacing is very important
As to why....I don't know :-)
Haha... Guess we have to ask Linus
How bout this, what's the difference between:
if [ $# > 0 ]; then
and
if [ $# -gt 0 ]; then
?
There could be a subtle difference...I'm checking now
I can't come up with an example where they are different.
Found one:
if [ "a" \> "b" ]; then
    echo "woo"
fi
You must add the backslash character within a [ ] construct in order to evaluate
the comparison correctly. With the "-gt" you don't need a backslash.
Yes, I see... without the \ my namum outputs "The first argument is " if I in fact give no arguments.
Right, it always evaluates as true without the backslash.
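The whole exchange condenses to a small script using the safe numeric comparison (`-gt`, no escaping needed); the script name "namum" follows the session's own example, and the body is a sketch, not the exact script used in class:

```shell
# Write the argument-checking script to a scratch directory.
cd "$(mktemp -d)"
cat > namum <<'EOF'
#!/bin/bash
if [ $# -gt 0 ]; then            # -gt: numeric greater-than inside [ ]
    echo "There are $# arguments"
    echo "The first argument is $1"
else
    echo "No arguments given"
fi
EOF
chmod u+x namum

./namum cat pig donkey   # prints: There are 3 arguments / The first argument is cat
./namum                  # prints: No arguments given
```

Unlike `>`, which the shell parses as a redirection inside `[ ]` unless escaped, `-gt` compares the numbers directly, so the no-argument case is handled correctly.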


Good session overall. I would cut some of the history material at the beginning to leave more time for the later shell scripting.
Would love to come back for another session just on shell scripting, great job guys!
Thanks for the feedback. We'll consider a shell scripting course

Thank you so much!

Great webinar!

Thanks guys! It was a very useful session, especially the shell scripting part.
Hope we can have a session just for script writing.
I couldn't follow this part completely.
It went so fast!
But overall, great job!
The best part is that it is recorded.
It will be available online later so that everyone may review and go at their own pace.
Thanks :)

What's the URL for the recording?
https://www.youtube.com/watch?v=cyINirlJZzk#t=10
Or if you would like the material with it:
https://www.tacc.utexas.edu/user-services/training/linux-unix-basics-for-hpc


/README.md:

Parallel Computing BootCamp
===========================

From UC Berkeley, UIUC, CMU, MIT, Argonne, etc.

### UC Berkeley ###

- [ ] CS 267: Applications of Parallel Computers [video](https://www.youtube.com/watch?v=pGFtiGW8QU0&list=PLYTiwx6hV33v8iWdAUNMmTaOX14O2CQfo) [homepage](http://www.cs.berkeley.edu/~carazvan/cs267.spr14/)
- [ ] CS294: Modern Parallel Languages [homepage](http://www.cs.berkeley.edu/~yelick/cs294-f13/#staff)
- [ ] Parallel Training at Berkeley, 2013 [video](http://www.youtube.com/playlist?list=PLImGd8Yga0-mTfrAa8qgJhssfOtJLOSwj)
- [ ] BerkeleyX: ASPIRE101x ASPIRE Short Course on Parallel Programming [homepage](https://edge.edx.org/courses/BerkeleyX/ASPIRE101x/2014_2015/about)

### UIUC ###

- [ ] CS 425/ECE 428: Distributed Systems [homepage](https://courses.engr.illinois.edu/cs425/fa2013/index.html)
- [ ] CS 554/CSE 512: Parallel Numerical Algorithms [homepage](https://courses.engr.illinois.edu/cs554/fa2013/notes/index.html)
- [ ] CS 525: Spring 2014 Advanced Distributed Systems [homepage](https://courses.engr.illinois.edu/cs525/sp2014/index.html)
- [ ] CS 533: Parallel Computer Architectures Spring 2014 [homepage](https://courses.engr.illinois.edu/cs533/)
- [ ] CS 598 LVK: Parallel Programming With Migratable Objects [homepage](https://wiki.cites.illinois.edu/wiki/display/cs598lvk/Lectures)
- [ ] Parallel Programming Patterns [homepage](https://wiki.cites.illinois.edu/wiki/display/ppp/Home)
- [ ] CS 498DD: Multicore Parallel Programming Fall 2012 [homepage](https://wiki.cites.illinois.edu/wiki/display/cs498dd/Schedule?src=contextnavchildmode)
- [ ] CS 498: Program Optimization [homepage](https://wiki.cites.illinois.edu/wiki/display/cs498mgsp13/Schedule)
- [ ] CS420/CSE402/ECE492: Introduction to Parallel Programming for Scientists and Engineers [homepage & video](https://wiki.cites.illinois.edu/wiki/display/cs420fa14/Tentative+Schedule)
- [ ] CS 598lvk: Parallel Search [homepage](https://wiki.cites.illinois.edu/wiki/display/cs598lvkfa10/Lectures)
- [X] ~~Proven Algorithmic Techniques for Many-core Processors~~ [video](http://pat.hwu.crhc.illinois.edu/SitePages/Videos.aspx)
- [X] ~~ECE408/CS483: Applied Parallel Programming~~ [homepage](https://ece408.hwu.crhc.illinois.edu/SitePages/Home.aspx)
- [X] ~~Heterogeneous Parallel Programming~~ [video](https://www.coursera.org/course/hetero)

### NVIDIA and UC Davis ###

- [X] ~~Intro to Parallel Programming - Using CUDA to Harness the Power of GPUs~~ [video](https://www.udacity.com/course/cs344)

### Rice ###

- [ ] COMP 422: Parallel Computing Spring 2014 [homepage](https://www.clear.rice.edu/comp422/lecture-notes/index.html)
- [ ] COMP 522: Multicore Computing Fall 2014 [homepage](http://www.cs.rice.edu/~johnmc/comp522/lecture-notes/index.html)

### Argonne Training ###

- [ ] 2013 Argonne Training Program on Extreme Scale Computing [video](http://www.youtube.com/playlist?list=PLGj2a3KTwhRbPg8l1-8HQVswVbN3ofxil)
- [ ] 2014 Argonne Training Program on Extreme Scale Computing [video](https://www.youtube.com/playlist?list=PLGj2a3KTwhRbpV3Y-6A3k1R1usnDtClnv)

### MIT ###

- [ ] Theory of Parallel Systems (SMA 5509) [homepage](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-895-theory-of-parallel-systems-sma-5509-fall-2003/)
- [ ] MIT 6.172: Performance Engineering of Software Systems [homepage](http://stellar.mit.edu/S/course/6/fa14/6.172/materials.html)

### CMU ###

- [ ] CMU 15-418: Parallel Computer Architecture and Programming, 2014 Spring [homepage](http://scs.hosted.panopto.com/Panopto/Pages/Sessions/List.aspx#folderID=“6f8dfe4c-565f-4642-ae71-1a9f587311c6")
- [ ] CMU 15-499: Parallel Algorithms [homepage](http://www.cs.cmu.edu/afs/cs/academic/class/15499-s09/www/)
- [ ] CMU 15-210: Parallel and Sequential Data Structures and Algorithms [homepage](http://www.cs.cmu.edu/~15210/schedule.html)

### Washington ###

- [ ] CSE/ESE 569M: Parallel Architectures and Algorithms [homepage](http://research.engineering.wustl.edu/~songtian/)
- [ ] CSE 341/CSE 549: Parallel and Sequential Algorithms [homepage](http://www.classes.cec.wustl.edu/~cse341/web/)

### VSCSE ###

- [ ] Virtual School of Computational Science and Engineering [homepage](http://vscse.org/)

### UT Knoxville ###

- [ ] COSC 594: Scientific Computing for Engineers [homepage](http://web.eecs.utk.edu/~dongarra/WEB-PAGES/SPRING-2014/cs594-2014.htm)

### TACC ###

- [X] ~~TACC HPC Monthly Workshop: MPI~~