├── 1151 ├── readme.md ├── LICENSE ├── cs251 │ ├── 20150120.md │ ├── 20150127.md │ ├── 20150115.md │ ├── 20150113.md │ ├── 20150106.md │ └── 20150108.md ├── cs240 │ ├── 20150127.md │ ├── 20150113.md │ ├── 20150106.md │ ├── 20150120.md │ ├── 20150115.md │ └── 20150108.md ├── cs246 │ ├── 20150106.md │ ├── 20150113.md │ ├── 20150127.md │ ├── 20150108.md │ ├── 20150115.md │ └── 20150324.md ├── stat231 │ ├── 20150114.md │ ├── 20150109.md │ └── 20150107.md └── cs241 │ ├── 20150113.md │ ├── 20150127.md │ ├── 20150106.md │ ├── 20150108.md │ ├── 20150120.md │ └── 20150115.md ├── 1165 ├── README.md ├── cs350 │ ├── README.md │ └── scheduling.md └── cs349 │ ├── README.md │ ├── 8-1.md │ ├── 13-1.md │ ├── 10-1.md │ ├── 11-2.md │ ├── 10-2.md │ ├── 7-3.md │ ├── 6-1.md │ ├── 1-1.md │ ├── 5-2.md │ ├── 5-1.md │ ├── 12-1.md │ ├── 7-1.md │ ├── 11-1.md │ ├── 9-1.md │ ├── 12-2.md │ └── 6-2.md ├── 1171 ├── README.md ├── cs452 │ └── 1-4.md ├── cs454 │ └── 1-3.md ├── cs456 │ ├── 1-3.md │ └── 1-5.md └── cs343 │ └── lock-taxonomy.md ├── .gitignore ├── README.md └── new /.gitignore: -------------------------------------------------------------------------------- 1 | *.cfg 2 | -------------------------------------------------------------------------------- /1165/README.md: -------------------------------------------------------------------------------- 1 | # 1151notes 2 | ### by [Elvin Yung](https://github.com/elvinyung) 3 | 4 | Notes for my 1165 term (i.e. Spring 2016) at the University of Waterloo. 5 | 6 | In descending order of how likely I am to attend lectures for the course: 7 | * [CS 341](cs341) - Algorithms 8 | * [CS 349](cs349) - User Interfaces 9 | * [CS 350](cs350) - Operating Systems 10 | -------------------------------------------------------------------------------- /1171/README.md: -------------------------------------------------------------------------------- 1 | # Winter 2017 Notes 2 | ### by [Elvin Yung](https://github.com/elvinyung) 3 | 4 | Notes for my 1171 term (i.e. Winter 2017) at the University of Waterloo. 5 | 6 | ## Courses 7 | In descending order of how much I think I need to take notes for the coures: 8 | * [CS 341](cs343) - Concurrency 9 | * [CS 452](cs452) - Real-Time Programming 10 | * [CS 456](cs456) - Computer Networks 11 | * [CS 454](cs454) - Distributed Systems 12 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Notes 2 | ### by [Elvin Yung](https://github.com/elvinyung) 3 | 4 | This repo has all of my university notes, and some tooling for managing them. 5 | 6 | ## Table of Contents 7 | * [Winter 2015](1151) 8 | * [Spring 2016](1165) 9 | * [Winter 2017](1171) 10 | 11 | ## Why? 12 | I originally kept my university notes as separate repos for each term, but found that to be too annoying. 13 | 14 | For legacy reasons, I'm keeping the old `1151notes` and `1165notes` repos, but adding them here as a subtree. 15 | -------------------------------------------------------------------------------- /1151/readme.md: -------------------------------------------------------------------------------- 1 | (Next term: [Spring 2016](https://github.com/elvinyung/1165notes)) 2 | 3 | # 1151notes 4 | ### by [Elvin Yung](https://github.com/elvinyung) 5 | 6 | Notes for my 1151 term (i.e. Winter 2015) at the University of Waterloo. 
I attend lectures in the following courses: 7 | * [CS 251](cs251) - Computer Organization and Design 8 | * [CS 246](cs246) - Object Oriented Software Development 9 | * [STAT 231](stat231) - Statistics 10 | * [CS 240 Enriched](cs240) - Data Structures and Data Management (Audit) 11 | * [CS 241](cs241) - Foundations of Sequential Programs (Audit) 12 | 13 | Also check out (and `checkout`) [these notes](http://anthony-zhang.me/University-Notes/) by Anthony Zhang. 14 | 15 | -------------------------------------------------------------------------------- /1171/cs452/1-4.md: -------------------------------------------------------------------------------- 1 | # Intro 2 | 3 | CS 452 - Real-time Programming 4 | 5 | 01-04-2017 6 | 7 | Elvin Yung 8 | 9 | *Note:* I think the notes are pretty good, so this mostly consists of interesting things he says. 10 | 11 | * You're given the URL `http://www.cgl.uwaterloo.ca/wmcowan/teaching/cs452/w17/` 12 | * (Notice that there is no `~` before `wmcowan`, because tildes are _evil_.) 13 | * [link](http://www.cgl.uwaterloo.ca/wmcowan/teaching/cs452/w17/) 14 | 15 | * A **shift register** is a register that allows for efficient bitwise operations. It basically uses a [barrel shifter](https://en.wikipedia.org/wiki/Barrel_shifter), which lets you shift without using up a full cycle. 16 | * Efficiently exploiting shift registers improves *code density*, which ultimately means that the executable takes up less memory. 17 | * The ARM architecture makes heavy use of it. 18 | 19 | Next: [January 6th](1-6.md) 20 | -------------------------------------------------------------------------------- /1165/cs350/README.md: -------------------------------------------------------------------------------- 1 | # The Hitchhiker's Guide to CS350 2 | 3 | CS 350 - Operating Systems 4 | Spring 2016 5 | Elvin Yung 6 | 7 | CS350 is an awesome course, but it's a lot of material. It can be overwhelming, especially if you're cramming 2 days before the final. 8 | 9 | Thankfully, the course is split up into (roughly) 5 sections, which are mostly (but not completely) independent of each other: 10 | 11 | * [Synchronization](synch.md) 12 | * threads 13 | * synchronization primitives: locks, semaphores, CVs 14 | * [Kernel](kernel.md) 15 | * processes 16 | * system calls 17 | * context switches 18 | * [Virtual Memory](vm.md) 19 | * address spaces 20 | * TLBs 21 | * [Scheduling](scheduling.md) 22 | * first in first out (FIFO) 23 | * shortest job first (SJF) 24 | * shortest time-to-completion first (STCF) 25 | * round robin (RR) 26 | * multi-level feedback queue (MLFQ) 27 | * [I/O](io.md) and [Filesystems](fs.md) 28 | * I/O devices 29 | * polling, vectored interrupts, DMA 30 | * device drivers 31 | * hard disks 32 | * I/O performance 33 | * filesystems 34 | * journaling 35 | -------------------------------------------------------------------------------- /1171/cs454/1-3.md: -------------------------------------------------------------------------------- 1 | # Intro 2 | 3 | CS 454 - Distributed Systems 4 | 5 | 01-03-2017 6 | 7 | Elvin Yung 8 | 9 | *Note:* We're supposedly discouraged from sharing notes for this course. These aren't *notes* per se; but mostly just my thoughts on lecture contents. 
10 | 11 | ## The Fallacies of Distributed Computing 12 | While at Sun Microsystems, [Peter Deutsch](https://books.google.ca/books?id=mShXzzKtpmEC&pg=PA14) came up with the [Eight Fallacies of Distributed Computing](https://blogs.oracle.com/jag/resource/Fallacies.html), which are things that generally don't need to be considered when building a single-node centralized system, but cannot be ignored when building a distributed system. 13 | 14 | They are: 15 | 16 | > 1. The network is reliable 17 | > 2. Latency is zero 18 | > 3. Bandwidth is infinite 19 | > 4. The network is secure 20 | > 5. Topology doesn't change 21 | > 6. There is one administrator 22 | > 7. Transport cost is zero 23 | > 8. The network is homogeneous 24 | 25 | [This article](http://www.rgoarchitects.com/Files/fallacies.pdf) explains them. 26 | 27 | Next: [January 5th](1-5.md) 28 | -------------------------------------------------------------------------------- /1151/LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2015 Elvin Yung 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /new: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Hacky script that creates a new note file, and opens it in atom. 
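# Usage (as I read the code below -- treat this as an inferred summary, not a spec):
#   ./new <course> <note-name> <note title...>
#   e.g. ./new cs452 1-6 Kernels    # would create ./$TERM/cs452/1-6.md and open it in atom
# Assumes ./config.cfg defines TERM (and NAME), UW_API_KEY is set for the UWaterloo API
# lookup, and curl, jq, and atom are on the PATH. Exits with status 3 if the note already exists.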
4 | 5 | unquote() { 6 | echo $@ | tr -d "\"" 7 | } 8 | 9 | uppercase() { 10 | echo $@ | tr '[a-z]' '[A-Z]' 11 | } 12 | 13 | course_info_fmt() { 14 | course_subject=$(echo $1 | egrep -o "[a-z]+" -) 15 | course_catalog_number=$(echo $1 | egrep -o "\d+" -) 16 | course_info=$(curl "https://api.uwaterloo.ca/v2/courses/$course_subject/$course_catalog_number.json?key=$UW_API_KEY" 2>/dev/null) 17 | course_title=$(unquote $(echo $course_info | jq .data.title)) 18 | echo "$(uppercase $course_subject) $course_catalog_number - $(unquote $(echo $course_info | jq .data.title))" 19 | } 20 | 21 | format_notes() { 22 | echo "# $1" 23 | echo 24 | echo $2 25 | echo 26 | echo $3 27 | echo 28 | echo $NAME 29 | } 30 | 31 | # load config 32 | source ./config.cfg 33 | 34 | # read arguments 35 | course_name=$1 36 | course_info=$(course_info_fmt $course_name) 37 | shift 38 | note_date=$(date -j "+%m-%d-%Y") 39 | note_path="./$TERM/$course_name/$1.md" 40 | shift 41 | note_title="$@" 42 | 43 | [[ -e $note_path ]] && exit 3 44 | 45 | # do the thing 46 | format_notes "$note_title" "$course_info" "$note_date" > $note_path 47 | atom $note_path 48 | -------------------------------------------------------------------------------- /1151/cs251/20150120.md: -------------------------------------------------------------------------------- 1 | # CS 251 2 | ## Computer Organization and Design 3 | #### 1/20/2015 4 | Elvin Yung 5 | 6 | ### Designing an FSM 7 | Mostly covered the design of an FSM for a hypothetical light rail transit system. 8 | 9 | ### More stuff on SRAM 10 | See section *Random Access Memories* for [2015/01/15](20150115.md). 11 | 12 | ### Three-State Buffer or Transmission Gate 13 | * Has three outputs: 0, 1, and *floating* (connected to neither power nor ground). 14 | * *C* is the control line. It is a bit which decides where there will be output to *F*. 15 | * When C is 0, both transistors are off, and output is floating. 16 | * Otherwise, input passes through. 17 | * High-impedance outputs can be "tied together" without problems. 18 | 19 | ### Dynamic RAM 20 | * Our SRAM cell still uses a lot of transistors. 21 | * A better implementation uses six transistors, which is still expensive. 22 | * An alternative is to use a capacitor to store a charge to represent 1, but the problem is that the charge will dissipate and must be refreshed. 23 | * To write to a DRAM, place the value on the bit line. 24 | * To read, put half-voltage on the bit line, and input 1 on the word line. 25 | * Chrage in the capacitor will slightly increase bit line voltage, no charge will slightly decrease voltage. 26 | * This is detected, amplified, and written back. 27 | 28 | ### Design of 4Mx1 DRAM 29 | * The 20-bit address provided 10 bits at a time. 30 | * A whole row is read at once. 31 | * Column address selects a single bit. 32 | * stuff 33 | 34 | -------------------------------------------------------------------------------- /1151/cs251/20150127.md: -------------------------------------------------------------------------------- 1 | # CS 251 2 | ## Computer Organization and Design 3 | #### 1/20/2015 4 | Elvin Yung 5 | 6 | ### Data Representation and Manipulation - Part 2 7 | * Previously, we covered the **two's complement** numeral system (On 2015/01/24, I didn't go to class), which represents binary numbers such that the most significant bit represents a negative number. Then, for an *n*-bit binary number, a two's complement number can represent anything from `-(2^(n-1))` to `2^(n-1) - 1`. 
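To make the range claim above concrete, here's a small standalone C++ snippet of my own (not from the lecture) that checks the 8-bit case:

```c++
#include <cstdint>
#include <iostream>

int main() {
    // For n = 8 bits, two's complement covers -(2^7) .. 2^7 - 1, i.e. -128 .. 127.
    std::cout << int(INT8_MIN) << " to " << int(INT8_MAX) << "\n";  // -128 to 127

    // The most significant bit carries weight -2^(n-1); the remaining bits are
    // ordinary non-negative powers of two, so 1111 1111 is -128 + 127 = -1.
    int value = -(1 << 7) + 0b0111'1111;
    std::cout << value << "\n";  // -1
}
```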
8 | 
9 | ### Adders
10 | * 
11 | 
12 | ### Multiplication
13 | * 
14 | 
15 | ### Hexadecimal
16 | * Hexadecimal is a numeral system which is based on 16. The digits are `0123456789ABCDEF`.
17 | * The advantage of hexadecimal is that each hex digit takes exactly four bits.
18 | 
19 | ### Representing Non-Integral Numbers
20 | * We can use scientific notation to represent floating point numbers.
21 | * There are two main parts to scientific notation: Sign, and significand (fraction, mantissa, exponent)
22 | * In normalized binary, the leading digit of the significand is always 1.
23 | 
24 | ### Floating-Point Representation
25 | * MIPS uses the IEEE 754 floating-point standard format.
26 | * It uses 1 bit to represent the sign, followed by 8 bits to represent the exponent, and 23 bits to represent the significand.
27 | * Exponent is stored in "biased" notation: most negative exponent is all 0s, most positive is all 1s.
28 | * "Biased notation stores a number N as an unsigned value N+B, where B is the bias. B is typically half the unsigned range, but doesn't have to be."
29 | 
30 | ### 
--------------------------------------------------------------------------------
/1151/cs240/20150127.md:
--------------------------------------------------------------------------------
1 | # CS 240 Enriched
2 | ## Data Structures and Data Management
3 | #### 1/27/2015
4 | Elvin Yung
5 | 
6 | ### Quick Select
7 | * We previously discussed quick select, which is a method to select the *k*th smallest item in an unsorted array.
8 | * Quick select has an expected linear running time. Since the input size halves every time, the overall amount of work done for some input of size `n` is roughly `n + (n/2) + (n/4) + ... = 2n`.
9 | 
10 | ### Top-k
11 | * Suppose that we have some unsorted array, and we want to return the top `k` elements.
12 | * We could heapify the array and retrieve the top `k` elements, but an even better solution is to use quick select.
13 | * This runs at roughly Ө(n + k log k).
14 | 
15 | ### Quicksort Partitioning
16 | * In the in-place implementation of quicksort, keep swapping the outermost wrongly-positioned pairs around the pivot.
17 | 
18 | ### Lower Bound for Sorting
19 | * Is it possible to sort in O(n)? It depends.
20 | * It depends on what you're sorting, and what you're allowed to do with the data.
21 | * For example, if you have a permutation of `1..n`, then just replace the collection with [1..n].
22 | * In most cases, to sort at O(n) or lower, it takes some clever tricks.
23 | 
24 | ### Comparison-based Sorting
25 | * **Comparison** sorting is when you attempt to sort a set of objects that are comparable to each other.
26 | * *Theorem*: Any comparison-based sort takes Ω(n log n) to sort *n* distinct items.
27 | * Let π be a permutation of 1..n.
28 | * sort(π) = (π^-1)(π)
29 | * In other words, sorting is the inverse of a shuffling permutation.
30 | * Then we can think of every comparison in a sorting algorithm as the process of recovering the permutation applied to the sorted input.
31 | * 
32 | 
33 | 
--------------------------------------------------------------------------------
/1151/cs246/20150106.md:
--------------------------------------------------------------------------------
1 | # CS 246
2 | ## Object Oriented Software Development
3 | #### 1/6/2015
4 | Elvin Yung
5 | 
6 | * **Instructor:** Nomair Naeem
7 | * Email: nanaeem@uwaterloo.ca
8 | * Office: DC 3121
9 | * Office hours: 10-11am MW
10 | * ISA: Kirstin Bradley
11 | * Email: cs246@uwaterloo.ca
12 | * first responder
13 | 
14 | ### Mark breakdown
15 | * Final exam: 40%
16 | * Midterm: 20% (March 2nd, 4:30-6:20)
17 | * Assignments: 40%
18 | * A0
19 | * A1-4: 7% each
20 | * A5: 12%
21 | 
22 | There is no other way to get extra credits. There will be no assignment solutions.
23 | 
24 | ### Reference books
25 | * *Absolute C++ 5th Edition*
26 | * Scott Meyers' books
27 | * *Exceptional C++* by Herb Sutter
28 | 
29 | Attending class and taking course notes will get you a long way towards doing well in your exams.
30 | 
31 | Introduction to Object Oriented Programming and to tools and techniques for software development.
32 | 
33 | ### Modules
34 | 1. Linux Shell (2 weeks)
35 | 2. C++03 (10 weeks)
36 | 3. Tools
37 | 4. Software Engineering Principles
38 | 
39 | #### Module 1: Linux Shell
40 | * A **shell** is an interface for interacting with an operating system.
41 | * A **graphical** shell is based on visual interactions. It is more intuitive and easier to learn, but it is more difficult to perform more complex tasks with it.
42 | * A **command-line** shell is based on text commands input to a prompt. It is less constrained than a graphical shell, but has a steeper learning curve.
43 | * The Linux shell has its origins in the 1970s.
44 | * The first shell was **sh**, created by Stephen Bourne. Later this would be referred to as the Bourne shell.
45 | * There were also other shells, such as **csh** (which later became **tcsh**), and **ksh**, the Korn shell.
46 | * Sh later evolved into the Bourne Again Shell, or **bash**.
47 | 
48 | 
--------------------------------------------------------------------------------
/1151/cs240/20150113.md:
--------------------------------------------------------------------------------
1 | # CS 240 Enriched
2 | ## Data Structures and Data Management
3 | #### 1/13/2015
4 | Elvin Yung
5 | 
6 | ### More on Order Notation
7 | Stuff on proofs.
8 | 
9 | ### Order Classes
10 | * $\Theta(1)$ is constant time.
11 | * $\Theta(log n)$ is logarithmic time.
12 | * $\Theta(n)$ is linear time.
13 | * $\Theta(n log n)$ is pseudolinear time, which is also called sorting time.
14 | * $\Theta(n^2)$ is quadratic time.
15 | * $\Theta(n^3)$ is cubic time.
16 | * $\Theta(2^n)$ is exponential time.
17 | * $\Theta(n^k)$ is polynomial time.
18 | 
19 | * Textbooks usually emphasize that polynomial time is good. In practice, cubic and quadratic are both fairly inefficient, while pseudolinear and logarithmic are usually much better.
20 | * You should always keep in mind the size of the input for the particular problem. Even if $f(x) \in O(g(x))$, when the crossover point is sufficiently large it might still be better to use the solution that runs in $g(x)$ time if the inputs you actually see are small.
21 | 
22 | ### Limit Definition
23 | * Let $f(n) > 0, g(n) > 0$ for all $n \geq n_0$.
24 | * Suppose that $L = \lim_{n \rightarrow \infty} \frac {f(n)} {g(n)}$.
25 | * Then:
26 | * If $L=0$, $f(n) \in o(g(n))$.
27 | * If $0 < L < \infty$, $f(n) \in \Theta(g(n))$.
28 | * If $L = \infty$, $f(n) \in \omega(g(n))$.
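* As a quick worked example (mine, not from lecture): comparing $n \log n$ against $n^2$, $L = \lim_{n \rightarrow \infty} \frac{n \log n}{n^2} = \lim_{n \rightarrow \infty} \frac{\log n}{n} = 0$, so $n \log n \in o(n^2)$ (and therefore also $O(n^2)$, but not $\Omega(n^2)$).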
29 | 
30 | ### Order Laws
31 | * $f(n) = \Theta(g(n)) \Leftrightarrow g(n) = \Theta(f(n))$
32 | * $f(n) = O(g(n)) \Leftrightarrow g(n) = \Omega(f(n))$
33 | * $f(n) = o(g(n)) \Leftrightarrow g(n) = \omega(f(n))$
34 | * $f(n) = \Theta(g(n)) \Leftrightarrow g(n) = \Omega(f(n)) \land f(n) = \Omega(g(n))$
35 | * $f(n) = o(g(n)) \Rightarrow f(n) = O(g(n))$
36 | * $f(n) = o(g(n)) \Rightarrow f(n) \neq \Omega(g(n))$
37 | * $f(n) = \omega(g(n)) \Rightarrow f(n) = \Omega(g(n))$
38 | * $f(n) = \omega(g(n)) \Rightarrow f(n) \neq O(g(n))$
39 | 
40 | ### Analysis of Running Time
41 | * Given an algorithm, obtain the growth order of its running time.
42 | 
43 | More stuff on stuff
44 | 
45 | 
46 | 
--------------------------------------------------------------------------------
/1151/cs246/20150113.md:
--------------------------------------------------------------------------------
1 | # CS 246
2 | ## Object Oriented Software Development
3 | #### 1/13/2015
4 | Elvin Yung
5 | 
6 | ### Redirection
7 | * A program in Unix has an incoming stream, `stdin` (standard input), and two outgoing streams, `stdout` (standard output), and `stderr` (standard error).
8 | * When a program needs input, it looks to the standard input, which is by default the keyboard.
9 | * With input redirection (using `<`), the input is from a file instead.
10 | * When a program wants to send output, it sends to the standard output, which is by default the display.
11 | * With output redirection (using `>`, or `>>` to append), the output is to a file instead.
12 | * Standard output is buffered. This means that the output is not instantaneously displayed.
13 | * The advantage of buffering is to mitigate the cost of performing I/O by doing it less frequently.
14 | * The standard error stream is used when outputting error messages.
15 | * Unlike standard output, standard error is not buffered, which allows error messages to be displayed or logged instantaneously.
16 | * To redirect stderr, we use the `2>` symbol.
17 | 
18 | ### Argument Expansion
19 | * Inside a double quoted string, anything wrapped in backquotes will be evaluated and replaced with its output.
20 | * For example, `echo "Today is `date` and I am `whoami`"` will print `Today is Tue 13 Jan 2015 14:49:51 EST and I am elvinyung`.
21 | * Wrapping it with brackets and then prepending a dollar sign (e.g. `$(arg)`) is equivalent.
22 | * This allows commands to nest.
23 | 
24 | ### Pipes
25 | * We can use the output of one program as the input of another program by piping (`|`).
26 | * For example, if you wanted to count the number of words in the first 20 lines of a file, you can use the command `head -20 FILE | wc -w`.
27 | 
28 | ### `grep`
29 | * We can perform pattern matching with the `grep` command. In CS 246, we will be using `egrep`, or `grep -E`, the extended regular expression variant.
30 | * regex is trivial, i'm not going to write this down 31 | 32 | -------------------------------------------------------------------------------- /1151/stat231/20150114.md: -------------------------------------------------------------------------------- 1 | # STAT 231 2 | ## Statistics 3 | #### 1/9/2015 4 | Elvin Yung 5 | 6 | ### Recall: Numerical Summaries 7 | * **Measures of location**: sample mean, median, mode 8 | * **Measures of variability or dispersion**: sample variance, sample standard deviation, range, IQR 9 | * **Measures of shape**: sample skewness, kurtosis 10 | 11 | ### Graphical Summaries 12 | * a bunch of graphs 13 | * We won't be required to draw graphs on tests and exams, but only to be able to understand them. 14 | 15 | ### Sample Correlation 16 | * Measures the *linear* correlation between 2 variables. 17 | * It is a value $-1 \leq r \leq 1$ which is proportional to how two variabtes are linearly related. 18 | 19 | ### Statistical Models 20 | * A **statistical model** is a mathematical model that incorporates probability. 21 | * Using a statistical model helps us estimate unknown parameters, model variate variations, and draw conclusions from some degree of uncertainty. 22 | * Most importantly, with a statistical model, we can characterize a process and simulate it computationally. 23 | 24 | ### Response vs. Explanatory Variates 25 | * In general, there are two types of variates in a statistical model. 26 | * **Response variates** are 27 | * **Explanatory variates** are used to explain or determine the distribution of some other 28 | 29 | ### Descriptive Statistics 30 | * **Descriptive statistics** is a way to portray data to show features of interest. 31 | * The numerical and graphical summaries we have studied thus far are all examples of descriptive statistics. 32 | * More complex use cases include: knowledge discovery, data mining, machine learning, etc. 33 | * Essentially, the goal is to find interesting patterns and relationships from the data. 34 | 35 | ### Statistical Inference 36 | * **Statistical inference** is when data obtained in the study of a process or population are used to draw general conclusions about the subject. 37 | * *Inductive reasoning** is when we reason from specific details to general conclusions. Statistical inference is a type of inductive reasoning. 38 | * This is opposite from **deductive reasoning**, which is (in mathematics) using general axioms to prove specific theorems. 39 | 40 | 41 | -------------------------------------------------------------------------------- /1165/cs349/README.md: -------------------------------------------------------------------------------- 1 | # CS 349 - User Interfaces 2 | 3 | The [slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/schedule.shtml) are pretty good, so these notes mostly serve as supplementary summaries. 4 | 5 | ## Table of Contents 6 | * [1.1 - Introduction](1-1.md) - brief history of computing interfaces, *why* this is important. 7 | 8 | // TODO: fill in gap 9 | 10 | * [5.1 - Design Principles](5-1.md) - design principles from everyday things, usefulness vs. usability, mental models, metaphors 11 | * [5.2 - Design Process](5-2.md) - User Centered Design, understanding the user, prototyping protips 12 | * [6.1 - Visual Design](6-1.md) - UI design principles, Gestalt Principles 13 | * [6.2 - Responsiveness](6-2.md) - feedback, dealing with latency in general, Swing, and Web. not the [other](https://en.wikipedia.org/wiki/Responsive_web_design) kind of responsiveness. 
14 | * [7.1 - Undo](7-1.md) - design decisions involved, various implementation techniques 15 | * [7.3 - History](7-3.md) - a brief history of interaction, visionaries, speculations on the future 16 | * [8.1 - Android](8-1.md) - intro to Android, architecture, activities, layouting with XML 17 | * [8.2 - Touch Interfaces](8-2.md) - look and feel, interaction instruments, temporal and spatial activation, degrees of indirection, integration, and compatibility 18 | * [9.2 - Touchless Interfaces](9-1.md) - voice, in-air gestures, classifying/interpreting ambiguous command data 19 | * [10.1 - Wearables](10-1.md) - Smartwatches, ubiquitous computing, augmented reality 20 | * [10.2 - Input](10-2.md) - Different types of input (text, positional, gestural) 21 | * [11.1 - Input Performance](11-1.md) - KLM, Fitts' Law, Steering Law, visual space and motor space 22 | * [11.2 - Accessibility](11-2.md) - different types of ableness, accessibility tools, UI design considerations 23 | * [12.1 - Visual Perception](12-1.md) - psychophysics, temporal resolution, spatial resolution, color spectrum, color perception and blindness, displays 24 | * [12.2 - Cognition](12-2.md) - memory (short term, long term), perception (by experience, by context, by goals), cognition, locus, context switches, automatic actions 25 | * [13.1 - Ethics](13-1.md) - benevolent vs. malevolent deception, gaps for deceptive design, experimentation 26 | -------------------------------------------------------------------------------- /1151/cs251/20150115.md: -------------------------------------------------------------------------------- 1 | # CS 251 2 | ## Computer Organization and Design 3 | #### 1/15/2015 4 | Elvin Yung 5 | 6 | ### Clocks and Sequential Circuits 7 | * Synchronous sequential circuits have clocks. 8 | * Asynchronous sequential circuits do not. 9 | * 10 | 11 | ### SR Latch 12 | * The most fundamental persistent circuit is the NOR-gate set-reset latch. 13 | * The SR NOR latch maintains feedback using two connected NOR gates. 14 | * As long as S and R are both 0, the value of Q will be persisted. 15 | * When S and R are both 1, Q and not Q are both set to 0. This may lock the output at 1 or 0, causing a race condition. 16 | 17 | ### D Latch 18 | * Some of the problems of the SR Latch can be mitigated with the D Latch. 19 | * The core concept of D latch is based on the augmentation of the SR latch with a clock. 20 | * The value of *Q* can only be modified (from the input *D*) if and only if the value of *C*, the clock, is 1. 21 | 22 | ### D Flip-flop 23 | * We want state to be affected only at discrete points in time. 24 | * A master-slave design achieves this. 25 | * The D flip-flop implements two D latches 26 | 27 | ### Registers and Register Files 28 | * A Register is an array of flip-flops, 32 for a word register. 29 | * A register file is a way of organizing registers. 30 | * In assembly, you essentially want to read from a maximum of two registers at once, or write to one register. 31 | * The read logic for a register file consists of two multiplexors, which select data from two registers. 32 | * The write logic consists of a decoder which writes to some register. 33 | 34 | ### Random Access Memories 35 | * Static random access memories (SRAM) use D latches. 36 | * To mitigate the problem of 37 | * The register file doesn't scale, since multiplexors and decoders are too big. 38 | * To fix the multiplexor problem, we use three-state buffers. 39 | * To fix the decoder problem, use two level decoding. 
40 | * This type of memory is not clocked. 41 | * A three-state buffer has three outputs: 0, 1, and floating. 42 | 43 | ### Finite State Machines 44 | * The behaviour of sequential systems is essentially stateful. 45 | * Finite state machines are used to maintain discrete states. 46 | * It is impossible to use a truth table to define a finite state machine, since a truth table is binary. -------------------------------------------------------------------------------- /1151/cs251/20150113.md: -------------------------------------------------------------------------------- 1 | # CS 251 2 | ## Computer Organization and Design 3 | #### 1/13/2015 4 | Elvin Yung 5 | 6 | ### Stuff 7 | * Midterm is on February 12th. 8 | 9 | ### Digital Logic Desgin 10 | * NAND gate truth table: 11 | 12 | A | B | Q1 | Q2 | Q3 | Q4 | 2 13 | --|---|----|----|----|----|--- 14 | 0|0|L|L|H|H|1 15 | 0|1|L|H|H|L|1 16 | 1|0|H|L|L|H|1 17 | 1|1|H|H|L|L|0 18 | 19 | * The distributive rule: 20 | 21 | X+YZ = (X + Y)(X + Z) // why? 22 | = XX + XZ + XY + YZ 23 | = X(X + Y + Z) + YZ // factor out X 24 | = X + YZ 25 | 26 | ### Formula Simplification using Laws 27 | * Factor out things. 28 | 29 | ### Deriving Truth Table from Circuit 30 | * Label intermediate gate outputs 31 | * Fill in truth table in appropriate order 32 | 33 | ### Useful Components: Decoders 34 | * `n` inputs, `2^n` outputs (converts binary to "unary") 35 | 36 | ### Register files 37 | * Something something use decoders to perform read/writes on registers 38 | 39 | ### Multiplexors 40 | * aka mux, or selector 41 | * Suppose you have `2^n` inputs (or *lines*) `D_0..D_{2^n-1}`. Then you also have `n` other inputs `S_{n-1}..S_0`, called *select lines* which represents which one line out of `D` to select. 42 | * Example: 4-1 mux 43 | 44 | S_1 | S_0 | Y 45 | ----|-----|--- 46 | 0|0|D_0 47 | 0|1|D_1 48 | 1|0|D_2 49 | 1|1|D_3 50 | 51 | ### Buses 52 | * A **bus** is a collection of data lines that are treated together as a single logical signal. 53 | * In diagrams, 32-bit line buses are represented as slashed wires. 54 | * A multiplexor allows us to select one bus over another. 55 | 56 | ### Implementing Boolean Functions: ROMs 57 | * ROM stands for read-only memory. 58 | * Think of a ROM as a table of `2^n` *m*-bit words, implementing *m* one-but functions of *n*-variables. 59 | * Internally, a ROM consists of a decoder and an OR gate for each output. 60 | * PLAs are simplified ROMs. 61 | 62 | ### Types of Memory: 63 | * **RAM** is fast and volatile. It can be accessed very quickly and efficiently. 64 | * **ROM** is memory that cannot be changed. It can only be written once. 65 | * **Hard drives** are made up of several rigid metal, glass or ceramic disks. 
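To make the earlier "ROM as a table of `2^n` *m*-bit words" idea concrete, here's a tiny illustrative C++ sketch of my own (not from the course): a 3-input, 2-output "ROM" implementing a full adder, where the input bits form the address and each stored word holds the outputs.

```c++
#include <cstdint>
#include <iostream>

int main() {
    // "ROM" contents: address = (a << 2) | (b << 1) | cin, word = (carry << 1) | sum.
    const uint8_t rom[8] = {
        0b00, 0b01, 0b01, 0b10,   // addresses 000, 001, 010, 011
        0b01, 0b10, 0b10, 0b11    // addresses 100, 101, 110, 111
    };

    int a = 1, b = 1, cin = 0;
    uint8_t word = rom[(a << 2) | (b << 1) | cin];
    std::cout << "sum=" << int(word & 1) << " carry=" << int(word >> 1) << "\n";  // sum=0 carry=1
}
```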
66 | 67 | ### Clocks 68 | * 69 | 70 | ### SR Latch with NOR Gates 71 | * There's like transitions and stuff -------------------------------------------------------------------------------- /1151/cs241/20150113.md: -------------------------------------------------------------------------------- 1 | # CS 241 2 | ## Foundations of Sequential Programs 3 | #### 1/13/2015 4 | Elvin Yung 5 | 6 | ### Examples 7 | * Example: Add 2 values in registers 5 and 7, storing the result in register 3, then return 8 | 9 | | Assembly | Location | Binary | Hex | 10 | |------|--------|--------|----------| 11 | | `add $3, $5, $7` | `0x0000` | `0000 0000 1010 0111 0001 1000 0010 0000` | `0x00a71820` | 12 | | `jr $31` | `0x0004` | `0000 0011 1110 0000 0000 0000 0000 1000` | `0x03e00008` | 13 | 14 | * Example: Add 42 and 52, store sum in $3, and then return 15 | 16 | | Assembly | Location | Binary | Hex | 17 | |------|--------|--------|----------| 18 | |`lis $5` | `0x0000` | `0000 0000 0000 0000 0010 1000 0001 0100` | `0x00002814` | 19 | |`.word 42` | `0x0004` | `0000 0000 0000 0000 0000 0000 0010 1010` | `0x0000002a` | 20 | |`lis $7` | `0x0008` | `0000 0000 0000 0000 0011 1000 0001 0100` | `0x00003814` | 21 | |`.word 52` | `0x000c` | `0000 0000 0000 0000 0000 0000 0011 0100` | `0x00000034` | 22 | |`add $3, $5, $7` | `0x0010` | stuff | more stuff | 23 | |`jr $31` | `0x0014` | stuff | more stuff | 24 | 25 | * You can use the command `xxd` to get a hex dump of your mips file thing, to verify that you converted stuff from hex correctly. 26 | 27 | ### Assembly Language 28 | * We will begin writing our programs not in binary or hex, but with simple mnemonics. 29 | * There is a direct translation back to te required binary (assembler). 30 | * Each assembly instruction corresponds to one machine instruction (almost -- `.word` isn't a thing because it's just a word). 31 | * We will revisit the previous example: 32 | 33 | ```nasm 34 | lis $5 ; $5 <- 42 35 | .word 42 36 | lis $7 ; $7 <- 52 37 | .word 52 ; $3 <- $5 + $7 38 | add $3, $5, $7 ;pc <- $31 39 | jr $31 40 | ``` 41 | 42 | ### Jumping 43 | * `beq`: go somewhere else if two registers are equal 44 | * `bne`: go somewhere else if two registers are not equal 45 | * Both instructions increment the PC by a given number of words (forward or backward). 46 | * Based on the fetch-execute cycle, PC has already been incremented to point at the next instruction, before the instruction has been decoded and executed. Hence, offset is relative to the next instruction. 47 | 48 | ### More examples 49 | #### Absolute value `$1` 50 | ```nasm 51 | slt $2, $1, $0 ; compare $1 < 0 52 | beq $2, $0, 1 ; if false, skip over 53 | sub $1, $0, $1 ; negate $1 54 | jr $31 55 | ``` 56 | 57 | #### Sum the integers `1..13`, store in `$3`, then return 58 | ```nasm 59 | add $3, $0, $0 ; $3 <- 0 60 | lis $2 ; $2 <- 13 61 | .word 13 62 | add $3, $3, $2 ; $3 += $2 63 | lis $1 ; $1 <- 1 64 | .word 1 65 | sub $2, $2, $1 ; $2 -= 1 66 | bne $2, $0, -5 ; jump 5 things back 67 | jr $31 68 | ``` 69 | 70 | -------------------------------------------------------------------------------- /1151/cs240/20150106.md: -------------------------------------------------------------------------------- 1 | # CS 240 Enriched 2 | ## Data Structures and Data Management 3 | #### 1/6/2015 4 | Elvin Yung 5 | 6 | **Instructor:** Alex Lopez-Ortiz 7 | 8 | The objective of this course is the cover the same material as the regular sections, but with more material. This means that the pace of the course will be faster. 
9 | 10 | * Tutorial: Monday 3:30 PM - 4:30 PM, MC 4064 11 | * Textbook: Robert Sedgewick, *Introduction to Algorithms in C++* 12 | 13 | ### What is computer science? 14 | You have some information. You want to 15 | * store it - use data structures. 16 | * process it - use algorithms. 17 | * trasmit it - use networking. 18 | * display it - use graphics. 19 | * secure it - use cryptography. 20 | * collect it - use sensor networks. 21 | * learn from it - use machine learning or data mining. 22 | 23 | The more data you have, the more important the information structure is. 24 | 25 | ### Course Topics 26 | * Priority queues, heaps, treaps 27 | * Sorting, selection 28 | * Binary search trees, AVL trees, B-trees, cache oblivious B-trees 29 | * Rank/select (succinct data structures), Van Emde Boas trees 30 | * Skip lists 31 | * Hashing, cuckoo hashing, bloom filters, MapReduce(/Hadoop) 32 | * Quad trees, k-d trees, range trees, R-trees 33 | * Tries 34 | * String matching, suffix trees, suffix arrays 35 | * Data compression, Huffman coding, LZW, Burrows-Wheeler, arithmetic compression, compressed sensing 36 | 37 | ### Algorithms & Data Structures 38 | * The basic **problem**: Given an input, carry out a particular computation task. 39 | * The input is known as the **problem instance**. 40 | * The output is known as the **problem solution**. 41 | * The goal is to find an efficient way to compute the solution. 42 | 43 | * An **algorithm** is a step-by-step process to compute a problem solution, given a problem instance *I*. The algorithm *solves* the problem if for every instance it finds a valid solution. 44 | * If it doesn't work all the time, it's called a **heuristic**. 45 | * A **program** is an implementation of an algorithm using a particular computer language. 46 | 47 | ### Efficiency of solution 48 | * Running time 49 | * Space usage 50 | 51 | For a given problem we have a choice of algorithms that solve the problem. The goal of algorithm design is to devise algorithms that solve a problem, and the goal of algorithm analysis is to study the efficiency of a proposed solution. Algorithm design will be covered in CS341, and we will study algorithm analysis in this course. Specifically, we will focus on algorithms that utilize storage, where the complexity comes from handling data. 52 | 53 | ### Observation 54 | There is in general an observed correlation between the size of the input and the difficulty of the problem. 55 | 56 | -------------------------------------------------------------------------------- /1165/cs349/8-1.md: -------------------------------------------------------------------------------- 1 | # Android 2 | 3 | CS 349 - User interfaces, LEC 001 4 | 5 | 6-22-2016 6 | 7 | Elvin Yung 8 | 9 | * Developing for Android is pretty similar to developing Swing apps for desktop - there's just some architectural differences to understand. 10 | * Java doesn't tend to have great documentation - Sun wrote some documentation in the 90s and didn't really touch it since then. 11 | * Android's the opposite. There's generally pretty great docs. 12 | 13 | * Android apps run on the Dalvik virtual machine. 14 | * Every process runs in its own VM and address space. 
15 | 16 | ## Design Goals 17 | * Multiple entry points for an app 18 | * Different "activities" that you need to explicitly pass data between 19 | * Applications need to be dynamic - need to handle many different types of devices, in different screen sizes and orientations 20 | 21 | * Dealing with being a mobile device - limited memory, cpu, battery, etc. 22 | * The system aggressively constrain processing - e.g. background threads are hard 23 | * Small screen, multiple orientations, multi-touch 24 | 25 | ## Activities 26 | * An **activity** is a screen that basically runs independently and has its own lifecycle, almost like a separate mini-app. 27 | * Interesting lifecycle model - activities can have the states of *start*, *paused*, and *stopped*. 28 | * Android pauses an application that's running in the background. 29 | * If you start running out of memory, Android reserves the right to kill it. 30 | * You're responsible for managing state, and implementing the `onStop`, `onCreate`, `onPause`, etc. callbacks to maintain data integrity. 31 | 32 | * Data is passed between activities using an **intent**. 33 | * A **fragment** is basically a portion of a UI that has its own state. Activities can contain multiple fragments. 34 | * Since switching activities has an overhead, fragments were introduced as an alternative. 35 | 36 | ## Building UIs 37 | * `android.view.ViewGroup` - like an `JPanel` with a Layout associated. 38 | * `android.view.View` - base class for widgets, like `Button`, `ImageView`, etc. 39 | 40 | ## Managing layout 41 | * You can write code to do this, but you're better off using XML to describe your layout, and telling the app to dynamically load them. 42 | * The good thing about this is that you can define views for separate orientations, and Android is smart enough to switch between them automatically. 43 | 44 | ## Tools 45 | * In this course, we're standardizing on Android Studio. 46 | * Update stuff 47 | * An AVD manager is provided to emulate different Android virtual devices. In this course we're standardizing on Nexus 7, on Marshmallow with API 23, ABI x86. 48 | * (check the slides for the rest of this) 49 | * Basically tl;dr follow the rules, don't try to use your own configs 50 | -------------------------------------------------------------------------------- /1151/cs246/20150127.md: -------------------------------------------------------------------------------- 1 | # CS 246 2 | ## Object Oriented Software Development 3 | #### 1/27/2015 4 | Elvin Yung 5 | 6 | ### More on I/O with C++ 7 | * A read from `cin` is converted to `void*`. 8 | * Once a read from `cin` fails, all subsequent reads fail, unless you `cin.clear()` and `cin.ignore()` them. 9 | * `cin.clear()` acknowledges that we have seen the failed read. 10 | * `cin.ignore()` discards the next item and looks beyond it for more input. 11 | 12 | ### Reading Strings 13 | * C++'s standard library provides a string type `std::string`. 14 | * To use it, you simply `#include` it, i.e. `#include `. 15 | # `cin` reads strings by ignoring all leading whitespcae, start at the first non-space character, and keep going until it hits another whitespace. 16 | * To read a line, the standard library provides another fucntion: `getline(cin, s)`. It reads from current position until reaching a newline. 17 | 18 | ### I/O Manipulators 19 | * In C, you can specify a format to print an integer as hexadecimal (`%x`). However, in C++, you would use I/O manipulators. 
20 | * I/O manipulators change the way data is input and output in C++. 21 | * For example, to print an `int x` as hexadecimal, you would use the statement `cout << hex << x;`. 22 | * There are some I/O manipulators in the header ``. 23 | 24 | ### I/O Streams 25 | * `cin` is a variable of type `istream. 26 | * `cout` is a variable of type `ostream`. 27 | * The stream abstraction is applicable to other source of data. 28 | * The header `` (which stands for *filestream*) provides the `ifstream` and `ofstream` types, for performing I/O with files. 29 | * Example: Reading from a file 30 | 31 | ```c++ 32 | #include 33 | #include 34 | #include 35 | using namespace std; 36 | 37 | int main() { 38 | string s; 39 | ifstream f("myfile.txt"); 40 | while (f >> s) { 41 | cout << s << endl; 42 | } 43 | } 44 | ``` 45 | 46 | * As can be seen, `ifstream` works exactly the same as `istream`. To read from `cin` instead of `myfile.txt` would simply require substituting `cin` for `f` in `f >> s`. 47 | * The file opened by `f` is closed "automatically" when `f` goes out of scope, but only because `f` is stack allocated. If `f` was heap allocated, this would no longer be the case. 48 | * Basically, anything you can do with `cin` (`istream`), you can do with `f` (`ifstream`). 49 | * The same stream abstraction can be used to read/write to strings. The header `` provides the `istringstream` and `ostringstream` types for working with strings as streams. 50 | * The function `str`, in both `istream` and `ostream`, gets/sets the string stored in the stream. 51 | * `istringstream` can be used, among other things, to convert a `string` into an `int`. The `atoi` (ASCII to integer) library uses this mechanism. 52 | 53 | 54 | -------------------------------------------------------------------------------- /1151/cs240/20150120.md: -------------------------------------------------------------------------------- 1 | # CS 240 Enriched 2 | ## Data Structures and Data Management 3 | #### 1/20/2015 4 | Elvin Yung 5 | 6 | ### Heap 7 | * A **max-heap** is a binary tree that has the following properties: 8 | * The **heap property**: the priotiy of the parent is always higher than the priority of the child. 9 | * Structural property: It's a complete tree, with at most one node of degree one. 10 | * The leaves at most one level apart 11 | * The bottom level is filled from left to right (**left-justified**) 12 | * A min-heap is the same, but with opposite order property. 13 | * *Theorem*: The height of a heap with n nodes is Ө(log n). 14 | * *Proof*: (basic Ө proof with $1+...+2^n < n < 1+...+2^{n+1}$) 15 | * To insert a new node to a heap, we perform the *bubble-up* technique: 16 | * Insert the new node at the bottom bottom level, at the leftmost free spot. 17 | * Compare the node's value with its parent. If it is breaking heap property, switch them. 18 | * Continue until the heap property is no longer broken. 19 | * To delete the maximum value from the heap: 20 | * Remove the root. 21 | * Promote the largest of the root's two children. 22 | * Continue until done. 23 | * We can represent a heap (or any binary tree in general) with an array where the children of some node at index *i* are respectively at *2i+1* and *2i+2*. The parent of *i* must then be at `floor((i-1)/2)`. 24 | * Representing the heap with an array is an example of an *implicit* data structure. 
Implicit data structures are very efficient spacewise because they store very little information other than the data itself, and meaning is instead carried in the arrangement of the data.
25 | 
26 | * Now we can implement a priority queue using a heap, by inserting each item keyed by its priority, and then using `deleteMax` to dequeue. Now both operations run at $\Theta(\log n)$.
27 | 
28 | #### `heapify`
29 | * `heapify` is an operation which creates a heap from some collection of data in worst case Ө(n) using a bottom-up construction method.
30 | * Essentially, starting from the smallest subtrees (at the bottom), check and swap for heap order. Do the same until root is reached.
31 | * The alternative implementation is top-down, which is Ө(n log n) in the worst case, and Ө(n) in the average case.
32 | * From the root, check and promote children, until heap property is no longer violated.
33 | 
34 | #### Heap Sort
35 | * Heap sort essentially works by heapifying some collection `A`, and then running `deleteMax` until the heap is empty.
36 | * The running time of heap sort is roughly `O(n log n)`.
37 | 
38 | ### Treaps
39 | * The term *treap* was coined from combining the terms *tr*ee and h*eap*. Tricky.
40 | * A treap is simultaneously a heap and a binary search tree.
41 | * Suppose we have some collection `X` in which each item has a key and a priority.
42 | * The key follows the binary search property, while the priority follows the heap property.
43 | 
--------------------------------------------------------------------------------
/1151/cs246/20150108.md:
--------------------------------------------------------------------------------
1 | # CS 246
2 | ## Object Oriented Software Development
3 | #### 1/8/2015
4 | Elvin Yung
5 | 
6 | ### The Linux Filesystem
7 | * We will make a distinction between two different types of files: *ordinary files*, and *directories*. The difference is that directories can contain other files, including other directories.
8 | * The result is that the filesystem is structured like a tree.
9 | * In the Linux filesystem, `/` is the root directory. Inductively, you can create an absolute path to any file starting from the root.
10 | * Your **current directory** (or current working directory) is the directory you are in at the given moment. When you log in, your current directory will be your **home directory**. At any moment, you can find your current working directory with the command `pwd`, which stands for "print working directory".
11 | * An **absolute path** is a path to some file that starts from the filesystem root.
12 | * A **relative path** is a path to some file that starts from the current working directory. `..` denotes a parent directory.
13 | * You can navigate to any directory using either absolute or relative path format using the command `cd`, which stands for "change directory".
14 | * You can list the contents of a directory using the command `ls`, which stands for "list". The flag `-a` shows all the files of the directory, including hidden files.
15 | 
16 | #### Special Directories
17 | Unix paths provide the following aliases:
18 | * `.` is the current directory.
19 | * `..` is the parent directory of the path.
20 | * `~` is the user's home directory.
21 | * `~{user}` is the home directory of the specified user.
22 | 
23 | #### Wildcard Matching
24 | * Linux provides a globbing mechanism that allows you to match only certain strings.
25 | * For example, to list all the files in a directory whose names end in `.txt`, you can use the command `ls *.txt`.
26 | * The shell looks at the current directory and finds all the files that match this pattern. 27 | * The names of the matched files are substituted in place of the globbing pattern and then the command runs. 28 | * Because the shell matches the pattern, other commands can also use globbing. 29 | 30 | ### Working with the shell 31 | * The command `cat`, which stands for "concatenate", displays the contents of a single file. It is typically also used to concatenate a list of files. 32 | * `^C` (Ctrl-C) ends a process. 33 | * `^D` (Ctrl-D) signals end of input, which sends the EOF character to the process. 34 | * The command `wc` provides a line, word, and character count for a given file. 35 | 36 | #### Redirection 37 | * The `>` operator pipes the output of some command into a file. If a file with that filename previously existed at that path, `>` will overwrite the contents of that file. 38 | * The `>>` operator appends the output instead of overwriting it to the filename. 39 | * The `<` operator pipes the contents of some file into the input. 40 | * It is possible to do both input and output in the same command. For example, `cat < in.txt > out.txt` copies the contents of `in.txt` to `out.txt`. 41 | 42 | 43 | -------------------------------------------------------------------------------- /1171/cs456/1-3.md: -------------------------------------------------------------------------------- 1 | # Chapter 1 2 | 3 | CS 456 - Computer Networks 4 | 5 | 01-03-2017 6 | 7 | Elvin Yung 8 | 9 | ## Syllabus 10 | * This course has a required text; 6th edition is considered canon. 11 | * Three programming assignments. 12 | * Each assignment can be submitted up to 72 hours after deadline, with a penalty of 10% for each late day. 13 | * Need to pass weighted exam average to pass the course 14 | 15 | ## Intro 16 | * There are a multitude of *data* networks; we will focus on the Internet, the most popular one in use today. 17 | * A network such as the Internet is really composed of a series of *layers* of protocols that provide abstractions. 18 | * We will take a top-down approach, studying the OSI model from the *application* layer to the *data link* layer. 19 | 20 | * Application layer: HTTP, SMTP, DNS, etc. 21 | * Transport layer: Mostly care about TCP and UDP. 22 | * Network layer: IP 23 | * Data link layer: Ethernet, Wi-Fi (802.11), etc. 24 | 25 | * A note on security: although the predecessor to the Internet, ARPANET, was built to be a military network, security was mostly developed after the initial network. 26 | 27 | ## The Internet 28 | * The **Internet** is a global network of computers. 29 | * It consists at the edge of **end systems** or **hosts**, like laptops and phones, and at the core **packet switches**, like routers. 30 | * More importantly, the Internet is a *network of networks*. 31 | 32 | * It would be infeasible for every device to have a dedicated link to every other device -- the number of links would essentially be `O(n^2)` with a large `n`. 33 | * This motivates the concept of **packet switching**, where computers are connected in some **local area** network, and communicate outside it through multiple layers of indirection (**switches**, or **routers**). 34 | * Data is divided into **packets**. A packet is the basic unit of communication that we are considering. You cannot send a bit by itself -- there is not enough information. 35 | 36 | * A **protocol** is some agreed-upon way for two nodes to talk to each other. 
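To make "sending a packet" concrete, here's a minimal UDP sender sketch of my own using plain BSD sockets (not course-provided code; the address `127.0.0.1` and port `9999` are just placeholders). Each `sendto` call hands exactly one self-contained datagram to the transport layer:

```c++
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   // UDP socket
    if (fd < 0) { perror("socket"); return 1; }

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9999);               // placeholder port
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

    const char msg[] = "hello";
    // One datagram per call -- UDP adds no reliability or ordering on top of IP.
    sendto(fd, msg, sizeof msg, 0, reinterpret_cast<sockaddr*>(&dest), sizeof dest);
    close(fd);
}
```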
37 | 
38 | ### ISPs
39 | * Most people are familiar with **ISP** or **internet service provider** in the form of access providers.
40 | * A form of access network is **digital subscriber line** (or DSL), where voice and data travel over the same dedicated line, and are routed using a **digital subscriber line access multiplexer** (or DSLAM).
41 | * In a **cable** network, the subscriber is connected to a **headend** through a coaxial cable.
42 | * Uses a technique called **frequency-division multiplexing**, in which the coaxial cable is divided into multiple **frequency bands**.
43 | * It uses a mixture of optical fiber and coaxial cables (**hybrid fiber coax**).
44 | 
45 | ### Subscribers
46 | * In a home network, a **modem** is used to connect to the headend. Downstream, the router and access point (usually combined into the same box) provide network access to devices.
47 | * In an enterprise or institutional network, end systems generally connect to an Ethernet **switch**, which is connected to an upstream **router**.
48 | 
49 | Next: [January 5th](1-5.md)
50 | 
--------------------------------------------------------------------------------
/1151/stat231/20150109.md:
--------------------------------------------------------------------------------
1 | # STAT 231
2 | ## Statistics
3 | #### 1/9/2015
4 | Elvin Yung
5 | 
6 | ### Data summarization
7 | 
8 | #### Measures of variability or dispersion
9 | * Measures of central tendency (location) alone do not fully describe the data; we also want to measure how spread out it is.
10 | 
11 | ##### Sample variance and sample standard deviation
12 | * The **sample variance** is defined as $s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (y_i - \bar{y})^2$.
13 | * The **sample standard deviation** is $s$, the square root of $s^2$.
14 | * If the data is roughly bell-shaped, approximately 68% of the data will lie within 1 standard deviation of the mean, in the range $(\bar{y} - s, \bar{y} + s)$. Approximately 95% of the data will lie within 2 standard deviations.
15 | 
16 | #### Range
17 | * The range is defined as $\max(y) - \min(y)$.
18 | 
19 | #### Interquartile range (IQR)
20 | * The ***p*th-percentile* for a dataset is the value such that *p* percent of the data fall at or below this value.
21 | * The **lower/first quartile** is the 25th-percentile.
22 | * The **middle quartile** is the 50th-percentile. This is also the median.
23 | * The **upper quartile** is the 75th-percentile.
24 | 
25 | * The **interquartile range** is defined as $q(0.75)-q(0.25)$. In other words,
26 | * The interquartile range is a more robust measurement of the spread since, unlike the range, it will not be affected by extreme values in the set.
27 | 
28 | * The **five-number summary** is a robust summary of the dataset. It consists of the set of numbers $\{y_{(1)}, q(0.25), q(0.50), q(0.75), y_{(n)}\}$, where $y_{(i)}$ is the $i$th smallest number of the dataset, and $q(p)$ is the $100p$th percentile of the dataset.
29 | 
30 | #### Sample skewness
31 | * Skewness is a measure of asymmetry in the dataset. Essentially, it indicates which side of the distribution has the longer tail.
32 | * A *negative* skew indicates that the data is tailed on the left side of the graph.
33 | * A *positive* skew indicates that the data is tailed on the right side of the graph.
34 | * **Sample skewness** is defined as `a bunch of stuff I'm going to copy from the slides later`.
35 | 
36 | #### Sample kurtosis
37 | * **Sample kurtosis** measures whether the data are concentrated in the central peak or in the tails.
38 | * Data that look normal or bell-shaped have a sample kurtosis close to 3. 39 | * Data that are very peaked have a sample kurtosis larger than 3. 40 | * Data that look uniform have a sample kurtosis close to 1.2. 41 | * In other words, the sample kurtosis is proportional to the amount of data concentrated about the mean, and is inversely proportional to how "flat" the graph is. 42 | 43 | ### Visualizing data 44 | #### Histograms 45 | * In a *standard* histogram: 46 | * Partition the data into *k* non-overlapping **bins**. 47 | * Measure the number of data points that fall into each interval. This is the bin's **frequency**. 48 | * Draw a bar graph which plots each bin to its frequency. 49 | * In a *relative* histogram: 50 | * stuff happens 51 | 52 | ### Empirical c.d.f 53 | 54 | ### Boxplots 55 | 56 | ### Scatterplots 57 | * Simply graph all (x, y) pairs as coordinates. A data point is a data point. -------------------------------------------------------------------------------- /1165/cs349/13-1.md: -------------------------------------------------------------------------------- 1 | # Ethics 2 | 3 | CS 349 - User interfaces (whichever section I decide to go to) 4 | 5 | 7-25-2016 6 | 7 | Elvin Yung 8 | 9 | [Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/13.1-ethics.pdf) 10 | 11 | **NOTE:** This is *not* covered on the final, but it's still really important to know about if you plan on designing UIs in real life. 12 | 13 | * Benevolent deception, malicious design 14 | * Basically, manipulating the truth can be done for good or evil. 15 | * It's possible to lie to users for their own good. 16 | 17 | * Example: robotic physical therapy system 18 | * Example: electronic switching system, i.e. for phones 19 | * When a switch fails, you could report the error to the user, which might confuse the users and reduce their confidence in the system. 20 | * Or, you could put them through to the wrong person, and they can just think they made the wrong call. 21 | * Arguably, this is *bad* deception. 22 | 23 | * Example: Placebo buttons 24 | * e.g. crosswalk buttons, office thermostats, elevator close door buttons 25 | * Give people the illusion of control 26 | 27 | * As designers, we want to balance between end-user expectations and the capabilities of the system. 28 | * We use deception to fill the gap. 29 | 30 | ## Benevolent Deception 31 | Some *gaps* that deceptive design is used to fill in: 32 | 33 | ### System vs. reality 34 | * Maintain the user experience 35 | * e.g. Netflix recommender will never recommend nothing 36 | * Hide uncertainty 37 | * e.g. Windows file operation time estimate 38 | * Guarantee a level of entertainment 39 | * e.g. tweak game AI such that they give you a challenge without making it impossible 40 | * Maintain consistency/expectations 41 | * e.g. artificial shutter noises from phone cameras 42 | 43 | ### Individual vs. Group 44 | * e.g hiding whether the username or password was wrong in a login screen 45 | * e.g. Sandboxing 46 | * e.g. timesharing systems let many people pretend they own the computer 47 | 48 | ### Individual vs. Self 49 | * Protect the user from themselves. 50 | * e.g. don't actually delete a file, just move it to the trash can. 51 | * e.g. [fake bus stop for Alzheimer's patients](https://www.fastcompany.com/1598472/uncommon-act-design-fake-bus-stop-helps-alzheimers-patients) 52 | 53 | ## Malevolent Deception 54 | * Deception can *definitely* be bad, like: 55 | * Use confusing language (e.g. 
double negatives) 56 | * Hiding certain functionality (e.g. the unsubscribe button) 57 | * Exploiting user mistakes (e.g. torrent sites that have 10 different download buttons, 9 of which are on ads) 58 | 59 | ## Experimentation 60 | * You've probably already encountered the idea of *A/B testing*, where you show different versions of a UI to different users. 61 | * In general, experiments like these are helpful for figuring out the effects of different UI changes in a real-world environment. 62 | * But sometimes the ethics are questionable. 63 | * For example, Facebook manipulated news feed posts in 2013 to figure out how it changes emotions. 64 | 65 | ## tl;dr 66 | * Build interfaces that you would let your grandmother use. 67 | -------------------------------------------------------------------------------- /1151/cs240/20150115.md: -------------------------------------------------------------------------------- 1 | # CS 240 Enriched 2 | ## Data Structures and Data Management 3 | #### 1/15/2015 4 | Elvin Yung 5 | 6 | ### Summation Formulae 7 | * Arithmetic sequence 8 | * Geometric sequence 9 | * Harmonic sequence 10 | 11 | ### Techniques for Algorithm Analysis 12 | * There are two general strategies: 13 | * Use theta bounds throughout the analysis and obtain a theta bound for the complexity of the algorithm. 14 | * Prove a big-O upper bound and a matching big-omega lower bound separately to get a theta bound. Sometimes this technique is easier because arguments for O-bounds might use simpler upper bounds, and/or the arguments for omega-bounds may use simpler lower bounds. 15 | 16 | AND NOW, ONTO: 17 | 18 | ## Abstract Data Types 19 | * An **abstract data type* is a collection of data and the operations defined over it. 20 | 21 | ### List 22 | * Standard linked list 23 | 24 | ### Stack 25 | * A **stack** is a set of items stacked in order of arrivals. 26 | * It implements the following interface: 27 | * `push(x)` - pushes `x` on top of the stack. 28 | * `pop()` - removes the top item from the stack. 29 | * `peek()` - returns the top item without mutating the stack. 30 | * `isEmpty()` - returns some indicator of whether the stack is empty. 31 | * `size()` - returns the size of the stack. 32 | * Stack is an example of a **LIFO** (last in, first out) data structure. 33 | 34 | ### Queue 35 | * A **queue** is an example of a **FIFO** (first in, first out) data structure. 36 | * It implements the following interface: 37 | * `front()` - returns the item at the front of the queue. 38 | * `enqueue(x)` - pushes an item to the back of the queue. 39 | * `dequeue()` - removes the item at the front of the queue. 40 | 41 | ### Dequeue 42 | * A **dequeue** (pronounced *deck*) is essentially a queue with two ends. 43 | * It has the following interface: 44 | * `front()` 45 | * `rear()` 46 | * `frontEnqueue(x)` 47 | * `rearEnqueue(x)` 48 | 49 | ### Priority Queue 50 | * In a **priority queue** (PQ) instead of ordering objects by insertion order, items are ordered by priority. 51 | * For CS240, larger priority values indicate higher priority. (In some other implementations, a priority of 1 or A indicates highest priority.) 52 | * It has the following interface: 53 | * `insert(priotiy, value)` - inserts item with priority. 54 | * `deleteMax()` - extracts the value with the highest priority. 55 | * Using a linked list to naively implement a priority queue, `insert` runs at Ө(1) and `deleteMax` linearly searches for the highest priority item and removes it, running at Ө(n). 
56 | * An array implementation is to use an unsorted array. In this case, `insert` runs at Ө(1) and `deleteMax` runs at Ө(n). 57 | * Using a sorted array (by insertion), `insert` at Ө(n) and `deleteMax` runs at Ө(1). 58 | * The choice of which implementation to use is trivial if insertions are performed much more frequently than deletions (or vice versa), but if insertions and deletions are both done very frequently, we need a better implementation that is efficient for both `insert` and `deleteMax`. 59 | * Using a heap to implement a priority queue, both `insert` and `deleteMax` run at Ө(lg n). 60 | 61 | ### Heap 62 | See [2015/01/20](20150120.md). 63 | 64 | -------------------------------------------------------------------------------- /1171/cs343/lock-taxonomy.md: -------------------------------------------------------------------------------- 1 | # TL;DR: The Lock Taxonomy 2 | 3 | CS 343 - Concurrent and Parallel Programming 4 | 5 | 02-28-2017 6 | 7 | Elvin Yung 8 | 9 | This is roughly a summary of sections 6.1 to 6.3 inclusive. 10 | 11 | # A note on terminology 12 | * A **task** is a generic term for a thread/actor/process/etc. 13 | * The **acquiring task** or **acquirer** is the task that wants to access the resource. 14 | * The **releasing task** or **releaser** is the task that has just finished accessed the resource. 15 | * An **event** is the thing that the task is waiting for. Usually this is the lock being unlocked. 16 | * **blocking** is being used interchangeably with **waiting**. Both means that the task doesn't get rescheduled until it gets unblocked by an event. 17 | 18 | # Broadly: 19 | * There are two types of locks: 20 | * **spinning** - busy waiting 21 | * **blocking** - cooperative scheduling 22 | 23 | * Locks are used for two purposes: 24 | * **synchronization** - notify tasks waiting for an event 25 | * **mutual exclusion** or **mutex** - protect a resource from being accessed by conflicting tasks 26 | 27 | # Spinning locks 28 | * The acquirer is responsible for checking whether the event has occurred, which it does constantly by busy-waiting 29 | * Usually a low-level primitive 30 | 31 | ## Non-yielding 32 | * Pure busy-wait, waste CPU cycles 33 | * `uSpinLock` in uC++ 34 | 35 | ## Yielding 36 | * Give up time slice so that another task can be scheduled 37 | * `uLock` in uC++ 38 | 39 | # Blocking locks 40 | * The acquirer checks for the event once, and then goes to sleep. 41 | * The releaser is responsible for waking up and notifying the acquirer. 42 | * Usually stores some sort of queue of tasks waiting to be notified. 43 | 44 | ## Mutex lock 45 | * As per name, used purely for mutual exclusion 46 | * **single acquisition** and **multiple acquisition**/**owner** variants, different on whether the task that already holds the lock can still acquire it 47 | * If multiple acquisition, implementations differ on whether only 1 release is needed, or as many as acquires 48 | * uC++ has `uOwnerLock`, i.e. multiple acquisition variant only 49 | 50 | ## Synchronization lock 51 | * aka sync lock, condition variable, condition lock, monitor 52 | * Is considered the "weakest" lock because there is only one operation: wait. 53 | * The sync lock only stores a single piece of state: the list of waiting tasks. However, mutating this list is a critical section that needs to be mutex'd. 
54 | * As per name, used purely for synchronization 55 | * `uLock` in uC++ 56 | 57 | ## Barrier 58 | * Like a sync lock, except associated with a number `n` 59 | * When the `n`th task waits on the barrier, it wakes up everyone 60 | * Unlike sync lock, barrier only stores the number of waiting tasks 61 | * Only used for synchronization 62 | * `uBarrier` in uC++ 63 | 64 | ## Semaphore 65 | * Basically, associated with some number `n` 66 | * Two operations: 67 | * `P` -- to decrease, equivalent to lock acquire 68 | * `V` -- to increase, equivalent to lock release 69 | * When task tries to `P` and n = 0, task blocks 70 | * When a task `V`, it wakes up a blocking task 71 | * `uSemaphore` in uC++ 72 | 73 | * Two types: **binary** (`n` starts as 1) and **counting** (arbitrary `n`) 74 | * The difference between a binary semaphore and a mutex lock is that a semaphore can be initialized to 0. 75 | -------------------------------------------------------------------------------- /1151/cs240/20150108.md: -------------------------------------------------------------------------------- 1 | # CS 240 Enriched 2 | ## Data Structures and Data Management 3 | #### 1/8/2015 4 | Elvin Yung 5 | 6 | ### Timing of programs 7 | * We have a timing function `T: Input -> R+` (or `T_p`) for each program 8 | * The reason why these timing functions aren't effective is because computer hardware always improve. Therefore the timing functions are poorly defined. 9 | * Therefore we can't use "time" *per se* to time our programs, since it is not a universal measure. 10 | * So we count the number of elementary operations performed by the program. 11 | 12 | * A **RAM**, or random access machine, is a central processor with a finite set of operations on fixd width and running in at most *c* clock cycles each. 13 | * There are two main instruction sets: **CISC** (complex instruction set computing), and **RISC** (reduced instruction set computing). 14 | * there should be more things here 15 | * So the running time of some program is the count of the number of operations, including CPU instructions, and memory instructions. 16 | 17 | We can count the running time of a program with this method: 18 | 1. Write algorithms in pesudocode. 19 | 2. Count the number of primitive operations. 20 | 21 | * We can compare the running times of two different programs/algorithms by plotting the timing function in one graph. 22 | * Since the input itself isn't plottable, we substitute the input with the input size. 23 | * Since the input size might be associated with multiple inputs, we take some one metric from all the running time data associated with that one input size (e.g. worst case, average case, etc). 24 | * Essentially, at the end we have some timing function `T_A: N -> R+` for some algorithm `A`, and `T(n) = max(T_A(I) | |I| = n)`. 25 | * Then for example, for merge sort, `T(n) = n log n`. 26 | 27 | ### Order notation 28 | * Order notation is a way to compare functions. 29 | * We introduce the order notation for some function `f`: `f(x) = O(g(x))` 30 | * **Big-O:** `f(n)` is `O(g(n))` if there exists constants `c > 0` and `n_0 > 0` such that `0 <= f(n) <= cg(n)` for all `n >= n_0`. 31 | * Essentially, this means that `f(n)` is `O(g(n))` if eventually, `g(n)` is greater than `f(n)`. In other words, eventually, `f` will grow slower than `g`. 32 | * **Big Omega:** `f(n)` is `Ω(g(n))` if there exists constants `c > 0` and `n_0 > 0` such that `0 <= cg(n) <= f(n)` for all `n >= n_0`. 
33 | * **Big Theta:** `f(n)` is `Ө(g(n))` if there exists constants `c_1 > 0`, `c_2 > 0` and `n_0 > 0` such that `0 <= c_1*g(n) <= f(n) <= c_2*g(n)` for all `n >= n_0`. 
34 | * **Small O:** `f(n)` is `o(g(n))` if for *every* constant `c > 0` there exists an `n_0 > 0` such that `0 <= f(n) < cg(n)` for all `n >= n_0`. 
35 | * **Small Omega:** `f(n)` is `w(g(n))` if for *every* constant `c > 0` there exists an `n_0 > 0` such that `0 <= cg(n) < f(n)` for all `n >= n_0`. 
36 | 
37 | 
38 | These are all the orders: 
39 | 
40 | | Order | Order in R | Order in function | Meaning | 
41 | |---|------------|-------------------|---------| 
42 | | O | `x <= y` | `f(x) = O(g(x))` | `f(x)` eventually grows slower than, and is upper-bounded by, `g(x)`| 
43 | | Ω | `x >= y` | `f(x) = Ω(g(x))` | `f(x)` eventually grows faster than, and is lower-bounded by, `g(x)`| 
44 | | Ө | `x = y` | `f(x) = Ө(g(x))` | `f(x)` grows at roughly the same rate as `g(x)`| 
45 | | o | `x < y` | `f(x) = o(g(x))` | `f(x)` eventually grows strictly slower than `g(x)`| 
46 | | w | `x > y` | `f(x) = w(g(x))` | `f(x)` eventually grows strictly faster than `g(x)`| 
47 | 
48 | 
--------------------------------------------------------------------------------
/1151/cs246/20150115.md:
--------------------------------------------------------------------------------
1 | # CS 246 
2 | ## Object Oriented Software Development 
3 | #### 1/15/2015 
4 | Elvin Yung 
5 | 
6 | ### Groups 
7 | * In a long file listing (`ls -l`), the following information is provided about each file: file permissions, owner, group, size, last modified date, and filename. 
8 | * There are three relevant groups in Unix systems: user (`u`), group (`g`), and others (`o`). 
9 | * A file can only belong to one group. 
10 | * A user can belong to multiple groups. You can check your groups using the command `groups`. 
11 | 
12 | ### Permissions 
13 | * In a long file listing, a string such as `rwxr-xr--` might be seen in a file entry. This is a file permission string. 
14 | * Every file in Unix has three basic permissions: read (`r`), write (`w`), and execute (`x`). 
15 | * The file permission string represents nine bits, grouped into three groups of three bits each. In order, they represent the permissions of the user, the group, and others. 
16 | * What each permission exactly entails is slightly different for ordinary files and for directories. 
17 | * For **read** (`r`): 
18 | * For an ordinary file, this allows you to see the contents of the file using `cat`, or a text editor, and so on. 
19 | * For a directory, this allows you to see the contents of the directory, e.g. using `ls`, or by globbing, or tab completion, and so on. 
20 | * For **write** (`w`): 
21 | * For an ordinary file, this allows you to modify its contents. 
22 | * For a directory, this allows you to add and remove files. 
23 | * For **execute** (`x`): 
24 | * For an ordinary file, this allows you to try to run the file as a program. 
25 | * For a directory, this allows you to navigate into the directory. 
26 | * The file permission string can be represented as a three-digit octal number, by converting each group's three bits (`rwx`) into one octal digit. 
27 | * For example, the permission string `rwxr-xr--` (or `111101100` in binary) can be represented in octal as 754. 
28 | * The owner of the file can change file permissions using the program `chmod`. The basic syntax is `chmod [MODE] [FILENAME]`. 
29 | * The syntax of the `mode` is roughly `{ownership class} {operator} {permissions}`. 
30 | * The ownership class is some combination of the letters `ugoa` (user, group, others, or all). 31 | * The operator is one of `+` (add), `-` (revoke), or `=` (set exactly). 32 | * the permissions is some combination of the letters `rwx` (respectively read, write, and execute). 33 | * For example, `chmod a=rw hello.txt` gives everyone read and write permissions to `hello.txt`. 34 | * An alternate mode syntax is simply `{permissions octal} {filename}`. 35 | For example, `chmod 777 hello.txt` gives everyone read, write, and execute permissions to `hello.txt`. 36 | 37 | ### Variables 38 | * Setting a variable in a shell environment in Unix is simple. You can use the command `x=1` to assign the value `1` to `x`. Afterwards, you can retrieve it using `$x`. 39 | * Environment variables in linux are always strings. 40 | * The `PATH` variable stores paths to directories that the shell looks at if the user wants to run a program. 41 | 42 | ### Scripts 43 | * **Scripts** are files containing sequences of Unix commands executed as a program. 44 | * To execute a script, you must have execute permission on the script file. 45 | * To execute a script, you can either: 46 | * provide the full path of the script (including aliases such as `.`) 47 | * modifying `PATH` to contain the script's location 48 | 49 | #### Command-line Arguments 50 | * Arguments can be accessed with the variables of the form `${n}`, which would retrieve the `n`th argument of the command. 51 | * The special variable `#` provides the number of arguments to the command. 52 | * The special variable `?` provides the return code of the last command. 53 | -------------------------------------------------------------------------------- /1151/cs241/20150127.md: -------------------------------------------------------------------------------- 1 | # CS 241 2 | ## Foundations of Sequential Programs 3 | #### 1/27/2015 4 | Elvin Yung 5 | 6 | ### Previously 7 | * You should use the MIPS Assembly Reference Sheet, and use the specifications to create your own tests. Your assembler should give the same output as `cs241.binasm`. 8 | * 9 | 10 | ### Loaders 11 | * We can't assume our code will always be loaded at a fixed address (e.g. `0x00`). 12 | * This is because multiple programs might be loaded at the same time. There may be other programs or code in memory. 13 | * We've been assuming that our code will be always loaded at `0x00`. However, if this assumption no longer holds, labels will break. 14 | * For example: 15 | 16 | ```nasm 17 | lis $1 ; 0x00000814 18 | .word f ; 0x0000000c 19 | jalr $1 ; 0x00200009 20 | 21 | f: 22 | add $3, $1, $3 ; 0x0c 23 | jr $31 ; 0x10 24 | ``` 25 | 26 | * If the code is not loaded at `0x00`, then the instruction that `f` is supposed to point to is no longer at `0x0c`, and `f` points at the wrong address. 27 | * We can fix this using a loader. The loader will offset labels with some given *alpha*, which is the address the program is loaded at. 28 | * For `.word`, the loader assumes that it is constant, since it is not clear whether the `.word` refers to an address or a constant. 29 | * For branch instructions, since those are relative to their own addresses, we don't need to worry about those instructions. 30 | * However, now we have another problem. The assembled file is a stream of bits. How do we know which come from a `.word` with an id? 31 | * The answer is that we can't. The assembler needs to provide more information to the loader. 
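To make the relocation idea concrete, here is a minimal sketch (hypothetical Python with made-up names, not the actual CS241 tools): the assembler records the offsets of the words that came from `.word <label>`, and the loader adds the load address `alpha` to exactly those words.

```python
def relocate(code_words, reloc_offsets, alpha):
    """Adjust label addresses in code that was assembled assuming load address 0x00.

    code_words:    list of 32-bit words, as emitted by the assembler
    reloc_offsets: byte offsets of the words produced by `.word <label>`
                   (recorded by the assembler; branch offsets are PC-relative and
                   `.word <constant>` entries are constants, so neither is listed)
    alpha:         the address the code is actually loaded at
    """
    relocated = list(code_words)
    for off in reloc_offsets:
        relocated[off // 4] = (relocated[off // 4] + alpha) & 0xFFFFFFFF
    return relocated

# First three words of the example above: lis $1, .word f, jalr $1.
# The `.word f` at offset 0x04 holds 0x0c; if the code is loaded at
# alpha = 0x1000, the loader rewrites that word to 0x100c.
code = [0x00000814, 0x0000000C, 0x00200009]
print(hex(relocate(code, [0x04], alpha=0x1000)[1]))  # 0x100c
```

The list of relocation offsets is exactly the extra information the assembler has to hand to the loader, which is what the object file format described next provides.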
32 | 
33 | ### Object Code 
34 | * Introducing **object code**: The output of assemblers isn't just pure machine code. Object code consists of machine code and also metadata about the file that will be needed. 
35 | * In CS241, we will use an object file format called MERL (MIPS Executable Relocatable Linkable). 
36 | * A MERL file consists of a **header**, **code**, and a **symbol table** (or **relocation table**). 
37 | * The header contains the cookie, the length of the MERL file, and the code length (including the header). 
38 | * The cookie, `0x10000002`, simply lets us know that it is a MERL file. Note that this also stands for `beq $0, $0, 2`, i.e. an instruction to skip the rest of the header. This means that MERL files can be executed as ordinary MIPS, if it's loaded at `0x00`. 
39 | * The lengths let us know when to stop reading. 
40 | * The MIPS binary is the standard MIPS code, starting at `0x0c`, since the header is always 3 words long. 
41 | * The symbol table, or relocation table, consists of alternating words of format codes and addresses. 
42 | * For our purposes, the format code is 1 (`0x01`). This indicates that the word to follow is a relocation address. 
43 | 
44 | * We also want the assembler to generate relocatable object code, like we did by adding relocs in the example. 
45 | * In CS241, the tool `cs241.relasm` performs this. 
46 | * The `cs241.merl` tool generates non-relocatable MIPS files from MERL files. 
47 | * It takes the `.merl` file as input, as well as some relocation address `alpha`. 
48 | * It outputs a non-relocatable MIPS file, with the merl headers and footers removed, ready to load at address alpha. 
49 | 
50 | ### Loader Relocation Algorithm 
51 | In pseudocode, here's how to read a MERL file: 
52 | ``` 
53 | read()            // skip cookie on first line, MERL check 
54 | endMod <- read()  // get end of merl file 
55 | codeLen <- read() // get length of code (including the header) 
56 | a <- findFreeRam(codeLen + stack) // find free memory for the code and stack 
57 | 
58 | for (i = 0; i < codeLen - 12; i += 4) 
59 |   MEM[a+i] <- read() // read code (minus the 12-byte header) into memory 
60 | end 
61 | 
62 | i <- codeLen // position of footer (the relocation table) 
63 | 
64 | // perform relocation 
65 | while (i < endMod) 
66 |   format <- read() 
67 |   if (format == 1) 
68 |     rel <- read() 
69 |     MEM[a + rel - 12] += a - 12 
70 |   else ERROR 
71 |   i += 8 
72 | end 
73 | ``` 
--------------------------------------------------------------------------------
/1165/cs349/10-1.md:
--------------------------------------------------------------------------------
1 | # Wearables 
2 | 
3 | CS 349 - User interfaces, LEC 001 
4 | 
5 | 7-4-2016 
6 | 
7 | Elvin Yung 
8 | 
9 | [Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/10.1-wearable_computing.pdf) 
10 | 
11 | ## Smartwatches 
12 | ![](https://xkcd.com/1420/) 
13 | 
14 | * We're deliberately not going to talk about things like Fitbits and Pebbles, because they're more specialized. 
15 | * We'll focus on the Apple Watch and Android Wear, which are generalized. 
16 | 
17 | ### Design Challenges 
18 | * The first issue is that these things are tiny! You're constrained by the user's hand. 
19 | * Things like the fat finger problem are much worse. 
20 | * Physical buttons are important - you need buttons because a touchscreen alone isn't practical at this size. 
21 | * Limited attention 
22 | * A smartwatch is not intended to be the device of choice for complicated use cases - you're not going to be manipulating spreadsheets 
23 | * Instead, smartwatches are for quick tasks on the go - things you want to be able to do without having to pull out your phone. 
25 | 26 | #### Guidelines from Google 27 | * The watch is mostly an output device, not an input device. 28 | * Google suggests all computation to be done on the phone, and be sent to the watch - in other words, the watch should just be a dumb terminal. 29 | * The entire task on the watch should take <5 seconds - if it takes more, a watch is not the right device. 30 | * The watch is *secondary* - it's only auxiliary to the phone, designed for quick interactions. 31 | 32 | #### Guidelines from Apple 33 | * Apple emphasizes personal communication on the Apple Watch. They emphasize initiating communicating, but it's not a very compelling use case. 34 | * There are dedicated apps (but no one uses them). 35 | * Interaction mostly via gestures, but there's also force touch, the "crown" dial, and the side buttons. 36 | * Emphasize coordination with smartphone - should be able to tap to answer a call from the watch, and then the control is transferred to the phone. 37 | 38 | ### The Big Question 39 | *Why doesn't everyone have a smartwatch?* 40 | 41 | * No "killer app" or other compelling use cases 42 | * Probably not good enough as a proxy for phone 43 | * Fitness tracking isn't sufficient for most people 44 | * Healthcare, monitoring blood pressure, heart rate, etc. - maybe? 45 | * Identification - Apple Pay, Android Pay, computer authentication etc. - maybe eventually replace passwords 46 | * Price 47 | * Battery sucks 48 | * etc. 49 | 50 | ### Utilitarian vs. Fashionable Devices 51 | * Is a smartwatch a piece of jewelry or a utility device? 52 | 53 | ## Ubiquitous Computing 54 | * Introduced by Mark Weiser, 1996 55 | * Basically a very old term for Internet of Things 56 | * Instead of having discrete devices that you carry, instrument the world around you to do things for you. 57 | * For Ubicomp to really work, you need: 58 | * Computation embedded into the environment 59 | * Something that ties the person to the environment - a device that helps identify the person. Can a smartwatch fill this role? Maybe. 60 | 61 | ## Augmented Reality 62 | * Examples: Google Glass, Hololens 63 | 64 | ### Design Principles 65 | * Don't get in the way of what the user is doing 66 | * Only give information that's relevant to what the user is currently doing. Don't always put the temperature in the corner! 67 | * Avoid showing things 68 | 69 | ### Results 70 | Google Glass didn't get wide adoption. What happened? 71 | 72 | * Technology was not super feasible - 2 hour battery life? 73 | * Principles of Ubicomp 74 | * Google Glass was considered rude or awkward - [Glassholes](https://nypost.com/2014/07/14/is-google-glass-cool-or-just-plain-creepy/) 75 | * There were cameras mounted on them, and when someone is walking around with Google Glass on, there's no indication that they're not recording you 76 | * Is Glass a fashion device? Google tried to make it like that, but 77 | 78 | AR definitely still has potential, though! 79 | 80 | ## More Generally for Wearables 81 | And also other new technology. 82 | 83 | * Why do you need a wearable? 84 | 85 | * A better mousetrap is not good enough - it needs to be solving a problem - 10x not 10%. 86 | * New technology takes time to mature! Remember old tablets and PDAs? 
87 | -------------------------------------------------------------------------------- /1151/stat231/20150107.md: -------------------------------------------------------------------------------- 1 | # STAT 231 2 | ## Statistics 3 | #### 1/7/2015 4 | Elvin Yung 5 | 6 | **Instructor:** Chong Zhang 7 | 8 | ### Administrative stuff 9 | * See syllabus for detailed information. 10 | * Course syllabus is posted on D2L. Please read it! 11 | * Absolutely no snoring! 12 | 13 | #### Grading Scheme 14 | * Grading scheme A: 15 | * Tutorial tests: 15% 16 | * Midterm 1: 15% 17 | * Midterm 2: 15% 18 | * Final exam: 55% 19 | * Scheme B: 20 | * Tutorial tests: 15% 21 | * Best midterm: 15% 22 | * Worst midterm: 5% 23 | * Final: 65% 24 | 25 | #### Tutorial tests 26 | * January 23, 9:30 - 10:20 AM 27 | * February 27, 9:30 - 10:20 AM 28 | * March 27, 9:30 - 10:20 AM 29 | 30 | #### Midterms 31 | * Midterm 1: Tuesday, February 3, 4:30 - 6:00 PM 32 | * Midterm 2: Tuesday, March 10, 4:30 - 6:00 PM 33 | 34 | #### Office Hours 35 | * Thursday 9:00 - 10:00 AM 36 | 37 | ### Chapter 1: Introduction to Statistical Sciences 38 | * The *Statistical Sciences* are mainly concerned with empiricial studies, that is, learning by observation or experiment. 39 | 40 | #### Aspects of Empirical Studies: 41 | * problem formulation 42 | * planning of an experiment 43 | * data collection 44 | * analysis of the data 45 | * conclusions 46 | 47 | A key feature of an empirical study is that it involves uncertainty/randomness. (If we run the experiment more than once, we don't get identical results each time.) 48 | 49 | We will look at these aspects more closely in Chapter 3. 50 | 51 | ### Statistical Jargon: Populations and Processes 52 | * A **population** is a collection of units. (example: all persons aged 18-25 living in Ontario) 53 | * A **process** is the mechanism by which units are produced. (example: sequence of claims generated by car insurance policy holders, where the units are individual claims) 54 | * **Variates** are characteristics of the units, i.e. some variable associated with each unit for every unit. There are four types of variates: continuous (such as weight and blood pressure), discrete (such as presence or absense of a disease), categorical (such as hair colour or marital status), or other data, such as an image or an open ended response to a survey question. 55 | * An **attribute** of a population or process is a function of a bariate which is defined for the entire population or process. (example: proportion of adults in Ontario who own a smartphone, for a population of adults in Ontario) 56 | 57 | ### Approaches to Data Collection 58 | * **Sample surveys:** Information about a finite population is obtained by selecting a representative sample of units from the population and determining the variates of interest for each unit in the sample. 59 | * In other words, take a subset of the population that represents the entire population, and then take the data that you need from each unit in the sample. 60 | * **Observational studies:** Assuming an infinitely many (or similarly sized) population, information is collected without attempting to change any variates. 61 | * The main difference between sample survey and obesrvational study is that the population is near-inifinite (or infinite-like) in observational studies. 62 | * An **experimental study** is one in which the experimental has control over one or more variates. 63 | * These three types of studies are not mutually exclusive. 
For example, sometimes it is not clear whether a study is a sample survey or an observational survey. 64 | 65 | ### Measures of Central Tendency or Location 66 | * Let the data be represented by a sorted list of real numbers `y = {y_1, y_2, ..., y_n}`. 67 | * The sample **mean** or average is `(1/n) * sum(y)`. 68 | * The sample **median**: 69 | * if `n` is odd: `y_[(n+1)/2]` 70 | * if `n` is even: `(1/2) * (y_[n/2] + y_[(n/2) + 1])` (The average of the middle two observations is chosen for convenience.) 71 | * The **mode** is the most common value in the set of data. If the values are all unique, then the mode does not exist. This measure is the most useful for discrete or categorical data with a small number of unique data points. 72 | * For frequency or grouped data, the class with the highest frequency is called the **modal class**. 73 | 74 | #### For Friday's Class 75 | * Get course notes. Read chapter 1. 76 | * Review material from STAT 230. 77 | * The slides will be posted on D2L. 78 | 79 | -------------------------------------------------------------------------------- /1151/cs241/20150106.md: -------------------------------------------------------------------------------- 1 | # CS 241 2 | ## Foundations of Sequential Programs 3 | #### 1/6/2015 4 | Elvin Yung 5 | 6 | * Instructor: Ashif Harji 7 | * ISA: Sean Harrap 8 | * Class: 10:00 AM - 11:20 AM, TTh 9 | 10 | ### What are sequential programs? 11 | * Any program that isn't concurrent or parallel is *sequential*. 12 | * This course concerns the compilation of program code, and what happens in each step. 13 | 14 | ### What happens when you compile and run a program? 15 | * Before we can answer that question, we must first ask another: *What is a compiler?* 16 | * A compiler translates code from a *source* program (usually in some sort of high level language) to an equivalent *target* program (usually in some sort of machine code). 17 | * Why do we need a compiler? 18 | * Humans write code. 19 | * Thus, we want code to be easier for people to understand. 20 | * Safety (e.g. in type) 21 | * provide abstraction 22 | * Why can't we just have the computer read the source language? 23 | * slower 24 | * relies on specific set of hardware, whereas higher language are usually machine independent. 25 | * with the computer using a lower level language, we can have multiple high level languages that all compile down to the same target. 26 | * The basic process is as follows (for a basic CS241 compiler): 27 | * Basically, the source program goes through *scanning*, or *lexical analysis*, and becomes a stream of tokens. 28 | * The tokens go through *parsing*, which generates a parse tree 29 | * The parse tree undergoes *semantic analysis*, which outputs a parse tree and a symbol table. 30 | * Then they go through *code generation*, which outputs assembly code. 31 | * The assembly code goes through the *assembler* (which is basically a simple compiler), and becomes machine code. 32 | * The assembler is a specific type of compiler that translates between assembly code and machine code. 33 | 34 | ### Bits 35 | * A *bit* is a 0 or 1, an abstraction of high/low voltages or magnets. 36 | * A *byte* is 8 bits. An example of a byte is `11001001`. There are 256 (`2^8`) possible bytes. 37 | * A *word* is a machine specific grouping of bytes, 4 or 8 bytes long (32-bit or 64-bit). In this course, we will use a 32-bit word size. 38 | * A *nibble* is 4 bits. 39 | 40 | ### Bytes 41 | * Given a byte in computer memory, what does it mean? 
42 | * Without context, nothing. Everything in a computer is a stream of bits. 
43 | * It could be a number. Conventionally in binary, `11001001` is 201. It is an unsigned value. 
44 | * Then how are negative numbers represented? 
45 | 
46 | #### Sign-magnitude 
47 | * Simple approach: **sign-magnitude** representation. Reserve the first bit of a byte to represent a *sign* (0 is positive, 1 is negative), and use the rest to represent the unsigned *magnitude*. 
48 | * For 8 bits, we can represent numbers from -127 to 127. 
49 | * Problem: there are 2 zeroes (`10000000` and `00000000`), which means you need two comparisons to check for zero. 
50 | * Problem: adding is not symmetric. If two numbers have common sign, just add magnitude. However, if the signs are different, then use the sign of the larger magnitude and use the difference in magnitude. You need special circuits. 
51 | 
52 | #### two's complement 
53 | To interpret an n-bit value: 
54 | 1) Interpret the number as unsigned. 
55 | 2) If the first bit is zero, then done. 
56 | 3) Else, subtract 2^n. 
57 | 
58 | To get the two's complement negation of an n-bit number, subtract the number from 2^n. Alternatively, flip the bits and add one. 
59 | 
60 | e.g. for n=3: 
61 | * `000` = 0 
62 | * `001` = 1 
63 | * `010` = 2 
64 | * `011` = 3 
65 | * `100` = -4 
66 | * `101` = -3 
67 | * `110` = -2 
68 | * `111` = -1 
69 | 
70 | For 8 bits, two's complement gives us the range [-128..127], with only one zero. The arithmetic is mod 2^n, i.e. mod 256 for 8 bits. 
71 | 
72 | ### Hexadecimal notation 
73 | * Base 16: 0..9, A..F (or a..f) 
74 | * Each hex digit is 4 bits, so `11001001` can be represented as `C9`. 
75 | * Hex numbers are usually prefixed with `0x`. In the example above, `11001001` would be `0xC9`. 
76 | 
77 | 
78 | ### Back to bytes 
79 | So given a byte, how can we tell which interpretation is correct? 
80 | 
81 | The answer is still that we can't. 
82 | 
83 | But wait! We don't even know if it is a number. It could also be a character, which *also* depends on the encoding scheme. In this course, we will assume conventional ASCII. 
84 | 
85 | 
--------------------------------------------------------------------------------
/1165/cs349/11-2.md:
--------------------------------------------------------------------------------
1 | # Accessibility 
2 | 
3 | CS 349 - User interfaces, LEC 001 
4 | 
5 | 7-13-2016 
6 | 
7 | Elvin Yung 
8 | 
9 | [Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/11.2-accessibility.pdf) 
10 | 
11 | 
12 | 
13 | * Curb cuts: make it easy for people on wheelchairs to get through a curb 
14 | * We make accommodations for people with different abilities in real life. 
15 | * It should also be done in software. 
16 | 
17 | * Accessibility isn't just about being on a wheelchair or being blind. We should accommodate a wide range of abilities and situations. 
18 | * We want to design for the "average" person, but there's no average person. 
19 | * Every time you design something, you're at risk of alienating certain groups of people from your product. 
20 | * We *all* have temporary or situational disabilities. 
21 | * Obvious ones: being sick, being injured, etc. 
22 | * Driving: limited attentional bandwidth 
23 | * Underwater diving: impaired sight, hearing, mobility, etc. 
24 | * Using an ATM in the middle of the night in Kitchener 25 | * Walking down the street and texting 26 | 27 | ## Walking + Pointing Performance 28 | * Experiment to measure performance on a tapping task on a phone in different situations 29 | * Situations include: sitting, treadmill (different speeds), obstacle course 30 | * Result: performance seated and walking are fairly similar, but in an obstacle course, the task took more time and had a higher error rate. 31 | * Obvious in hindsight - obstacle course is the only thing that needs attention outside the phone 32 | 33 | * Takeaway: it's better if you can focus on a single task. 34 | * This is why texting and driving is bad! 35 | 36 | * Another experiment: reading comprehension 37 | * When walking, people were slower to read, and had higher error rates. 38 | 39 | * When you're walking, you're most concerned about the attention split, 40 | 41 | ## Designing for Walking 42 | * Sitting UI: small menu items, small buttons 43 | * Standing UI: make everything bigger, reduces cognitive load 44 | * This is also one of the reasons why mobile UIs are better: on the go, you're going to get a better experience if you have less cognitive load. 45 | 46 | ## Aging 47 | * Natural effects of aging: 48 | * Worse coordination 49 | * Visual coordination - coordination starts to fade by the 40s, and start to need reading glasses by the 50s 50 | * Hearing impairments 51 | * Memory loss 52 | 53 | * Baby boomers: huge spike of birth rate after WWII 54 | * They're all getting old now! If you were born in 1951 you are now 65, i.e. retiring. 55 | * As a designer, it might be an opportunity to build usable interfaces for this demographic. 56 | 57 | [Video: MIT AGNES](https://youtu.be/czuww9rp5f4) - a suit for designers to understand the usability challenges with aging 58 | 59 | * *We should design technologies to be inclusive. They often end up helping everyone!* 60 | 61 | ## Statistics on Impairments 62 | 63 | TODO: Copy from slides 64 | 65 | ## OS Support 66 | * Any recent version of Windows, OSX, etc. have a range of tools for accessibility issues. 67 | * This is awesome. 68 | * There are all kinds of things to manage motor/visual/audial issues. 69 | * It's a decent solution, but not perfect. Users end up having to memorize lots of keyboard shortcuts, be a good touch typer, etc. 70 | 71 | ## Colorblindness 72 | * Not being able to distinguish two colors 73 | * Color-coded UIs are often bad for this 74 | 75 | ## Motor Impairments 76 | * Sticky keys 77 | * Filter keys 78 | * Repeat rate 79 | 80 | ### Various tools to help with motor impairments 81 | * [Integramouse](integramouse.com) - straw-like mouse for people with no arm movement 82 | * Voice dictation/transcription 83 | * Human-brain interface stuff 84 | * Would be awesome... if it worked! 
85 | 86 | * [Angle Mouse](https://depts.washington.edu/aimgroup/proj/angle/) 87 | 88 | ### Cognitive Impairments 89 | 90 | * [Phosphor](http://patrickbaudisch.com/projects/phosphor/index.html) - highlight changes in the UI, for people who have trouble keeping track of where they were in the UI 91 | 92 | ## The "Curb Cut" Phenomenon 93 | * A accessibility-minded design that ends up helping everyone 94 | 95 | * Example: cassette tapes, developed as an alternate to reel-to-reel tapes for visually impaired people 96 | * Another example: closed captioning, originally intended for which ended up being used to many more purposes 97 | 98 | ## Reasons to Design for Accessibility 99 | * You're legally motivated to make your software accessible. 100 | * If you plan on selling software to a US government body, it needs to make accessibility accommodations. 101 | * [Class action lawsuit against Target](https://en.wikipedia.org/wiki/National_Federation_of_the_Blind_v._Target_Corp.) 102 | 103 | * Web accessibility is essential for equal opportunity. 104 | -------------------------------------------------------------------------------- /1151/cs251/20150106.md: -------------------------------------------------------------------------------- 1 | # CS 251 2 | ## Computer Organization and Design 3 | #### 1/6/2015 4 | Elvin Yung 5 | 6 | Course site: [click](https://www.student.cs.uwaterloo.ca/~cs251/W15/) 7 | 8 | ### Office Hours 9 | * **Instructor:** Rosina Kharal 10 | * T: 10-11 am, Th: 12-1 pm 11 | * or by appointment 12 | * Piazza: Posts replied usually within 24h 13 | 14 | ### Assignment Information 15 | * Assignments due on Wednesdays 16 | * Assignment 0 is optional (1% bonus) 17 | * Dropbox outside MC 4065 18 | * Avoid "excessive collaboration" 19 | 20 | ### Course guidelines 21 | * Course notes are not a comprehensive substitute for lectures. 22 | * Textbooks are helpful, but 23 | * *Computer Organization and Design* (Patterson and Hennessy) is a better textbook 24 | 25 | ### Clickers 26 | * worth 5% of grade as participation marks 27 | * top 75% of clicker input is taken 28 | 29 | ### How it all fits together... 30 | * If we tried to build a usable computer system from scratch, it would certain overwhelm us. 31 | * operating system, microprocessor, memory, networking, wireless connectivity, etc. 32 | * managing the layers of complexity: work with individual layers of computer architecture, abstracting away details that are not important to the corrent layer. 33 | 34 | ### This course: 35 | * Understanding computer architecture, structure, and evolution 36 | * **Instruction set architecture**: conceptual structure and functional behaviour of comuputing systems, as seen by the programmer 37 | * **Computer organization**: the different levels of physical implementation, described in terms of functional units and their interconnection, data flows 38 | 39 | ### Course outline 40 | * MIPS review 41 | * Digital logic design (gates, etc.) 42 | * Data representation and manipulation (arithmetic with binary, floating point numbers) 43 | * Designing a datapath 44 | * Single-cycle control unit 45 | * Multiple-cycle control units (hardwired and microprogrammed) 46 | * Pipelining and hazards (how instructions are exeuted) 47 | * Memory Hierarchies (caches and virtual memory) 48 | * Input/Output 49 | * Multiprocessor systems 50 | * Case studies: VAX, SPARC, Pentium 51 | 52 | ### Guiding Principles: Computer Architecture Deisgn 53 | * Use abstraction to simplify design. 
54 | * Moore's Law: the number of transistors on a circuit board will double ever ~18-24 months. 55 | * Expect rapid changes in technology. 56 | * This motivates optimizing the common case rather than rare cases. 57 | * Based on the 80-20 rule, you should optimize the 20% most used part of the product first. 58 | * Improve performance via parallelism: do multiple tasks at once, and divide and conquer 59 | * Improve performance via pipelining 60 | * Improve performance via prediction 61 | 62 | ### Instruction Set Architectures 63 | * To connect to the hardware, you must speak in its language. 64 | * Machine language - send it machine level instructions to interpret and execute. 65 | * Different computers speak different dialects of the same language. (eg. MIPS, ARM, x86, etc.) 66 | 67 | ### Similarities in these architectures 68 | * Based on the same tech principles 69 | * Common basic operations, eg. arithmetic operations 70 | * Similar goals: Maximize performance and minimize costs and energy 71 | 72 | Basic flow: high level language code is compiled or interpreted (eventually) into bytecode, and then the hardware interprets the instruction piece by piece, executed in the pipeline 73 | 74 | ### MIPS 75 | * Millions of Instructions per Second - which is *not* the versions we're interested in right now. 76 | * Microprocessor without Interlocked Pipelined Stages 77 | * RISC architecture: Reduced Instruction Set 78 | 79 | * Computers execute assembly instructions in binary on computer, but text form for people 80 | * Only simple operations: addition, subtraction, goto, conditional goto 81 | * Instructions operate on registers (fast) and RAM (slow) 82 | 83 | ### Registers 84 | * there are 32 registers of 32 bits/4 bytes each: `$0`..`$31` (or `$s0`..`$s7`, `$t0`..`$t7`) 85 | * `$0` always contains 0 86 | 87 | ### Instructions 88 | Instructions are 4 bytes each, so the PC increments by 4 bytes. 89 | Three general types of MIPS instructions 90 | * R-format: works with registers 91 | * e.g. `add $1, $2, $3`, store `$2+$3` in `$1` 92 | * I-format: works with immediate values 93 | * e.g. `addi $1, $2, 100`, store `$2+100` in `$1` 94 | * J-format: used for branching ("jump"), discussed later 95 | * e.g. `j 28` - goto 28 96 | 97 | Memory access: special MIPS instruction to access 4gb ram, since 32 registers are clearly not enough. 98 | * load word, `lw $1, 100($2)`, read data at `100+$2` into register `$1` 99 | * store word: `sw $1, 100($2)`, write the data at `$1` to `M[100+$2]` 100 | 101 | Conditional branch: e.g. branch if equal `beq $1,$2,100` - if `$1 == $2`, jump to (4*100)+4, or `bne` branch if not equal 102 | 103 | -------------------------------------------------------------------------------- /1151/cs251/20150108.md: -------------------------------------------------------------------------------- 1 | # CS 251 2 | ## Computer Organization and Design 3 | #### 1/8/2015 4 | Elvin Yung 5 | 6 | ### Performance 7 | * Measuring performance depends on how you think about it. 8 | * Running the same program on two different microprocessors/instruction sets/architectures, you want to know how fast each one is, or the **execution time**. 9 | * **Throughput** is some measure of performance. There are standarized benchmarks, but they aren't completely reliable. 10 | * For example: 11 | * Changing the processor to a newer/faster version decreases the execution time. 12 | * Adding in additional processors to a system increases the throughput, but the executing time of individual tasks remain unchanged. 
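A quick back-of-the-envelope sketch of that last point, with made-up numbers (an illustration only, not from the course): adding processors raises throughput but leaves each task's execution time alone, while a faster processor would shrink the per-task time itself.

```python
# Illustrative numbers only: each task needs `task_time` seconds of CPU time.
task_time = 10.0   # execution time (latency) of one task, in seconds
num_tasks = 100

def throughput(num_processors):
    """Tasks completed per second when independent tasks are spread across processors."""
    return num_processors / task_time

for p in (1, 2, 4):
    batch_time = num_tasks * task_time / p  # wall-clock time to finish the whole batch
    print(f"{p} processor(s): each task still takes {task_time:.0f} s, "
          f"throughput = {throughput(p):.1f} tasks/s, batch done in {batch_time:.0f} s")

# Doubling the processors doubles throughput (and halves the batch time),
# but the execution time of any individual task is unchanged.
```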
13 | 
14 | ### Uniprocessors to Multiprocessors 
15 | * **Moore's Law** describes the tendency for the number of transistors that can be placed on an integrated circuit to double approximately every two years. 
16 | 
17 | ### Parallelism 
18 | * To take advantage of multicore architectures, we use parallel computing. 
19 | * Parallelism in the pipeline will be discussed in the Pipelining unit. 
20 | 
21 | ### Digital Logic Design 
22 | * **Binary** digits are basic units in digital communication. They are an abstraction of high/low voltage, represented as a 0 or 1. 
23 | * Early computing machines used relays and vacuum tubes to represent binary states. 
24 | * The first transistor was invented in 1947 by William Shockley, John Bardeen, and Walter Brattain at Bell Laboratories. It was one of the most important electronics developments of the 20th century, paving the way for integrated circuits and microprocessor technology. 
25 | * The transistor was cheap, reliable, and small, utilizing voltage applied to a control terminal. 
26 | * In 1959, Robert Noyce patented a method of interconnecting many transistors on a single chip. 
27 | * Today we can graft ~1b MOS or MOSFET transistors onto a 1cm^2 silicon wafer, each transistor costing 0.01 cents. 
28 | 
29 | ### Semiconductors 
30 | * **MOSFET** stands for metal-oxide-semiconductor field-effect transistor. MOSFETs are also known as MOS transistors. 
31 | * Silicon is a poor conductor, but becomes better at conducting when small amounts of impurities, or **dopants**, are added to its lattice structure. 
32 | * **N-type** silicon is doped with donor impurities (e.g. phosphorus), which contribute extra free electrons. 
33 | * **P-type** silicon is doped with acceptor impurities (e.g. boron), which leave "holes" that act as positive charge carriers. 
34 | * A MOS transistor is essentially a sandwich of layers of conducting and insulating materials. 
35 | * Instead of the base, collector, and emitter model, MOS transistors have a **gate**, **source**, and **drain**. 
36 | * The nMOS and pMOS are two types of transistors that have opposite behaviours that complement each other. CMOS uses both nMOS and pMOS. 
37 | 
38 | ### Implementing Gates Using Transistors 
39 | * Key idea: a current flows between the drain and the source of an NMOS transistor (sitting within the silicon wafer, called the substrate) when a voltage is applied to its gate. 
40 | * The problem is that this transmits a strong 0 but weak 1. 
41 | * In an NMOS NOT gate, when a voltage is applied to the gate, there is low resistance between the drain and the source, so the output is pulled down to ground and reads as 0, and vice versa. 
42 | * The problem with an NMOS NOT gate is that when the input is 1, there is lots of current flow. 
43 | * A PMOS transistor has opposite behaviour to the NMOS. It transmits a strong 1 but weak 0. 
44 | 
45 | How to analyze CMOS circuits: 
46 | * Make a table with inputs, transistors, and outputs. 
47 | * For each row (each input setting), check whether the transistor resistance is high or low. If the output has a clean path to power, the signal is 1, and if the output has a clean path to ground, the signal is 0. 
48 | 
49 | ### CMOS 
50 | * CMOS circuits use both n-transistors and p-transistors. The idea is to build circuits where the output always has a "clean" path to exactly one of power or ground. 
51 | 
52 | ### Using Gates in Logic Design 
53 | * The OR (+) operator is 1 iff either operand is 1. 
54 | * The AND (.) operator is 1 iff both operands are 1. 
55 | * The NOT (Ā, for some operand A) operator inverts the operand. 
56 | * In practice, logic minimization software works with NAND or NOR gates, or at transistor level. 
57 | 
58 | ### Logic Blocks 
59 | * A **combinational** logic block is stateless. 
60 | * A **sequential** logic block has memory. 
61 | * Inputs and outputs are binary. 
62 | 
63 | ### Specifying input/output behaviour 
64 | * You can use a truth table to specify outputs for each possible input. It provides a complete description, but it might be redundant and hard to understand. 
65 | * **Disjunctive normal form** specifies the output as a sum of products of the inputs. 
66 | * A **minterm** is a logical product of variables. It is also called a product. I/O behaviour can be represented as either a sum of products, or a product of sums. 
67 | * A **don't care** term is represented as X instead of 0 or 1. When used in a truth table, it indicates we don't care what the particular output value is. 
68 | 
69 | ### Laws of Boolean Algebra 
70 | See course notes, I'm too lazy to copy all of them. 
71 | 
--------------------------------------------------------------------------------
/1171/cs456/1-5.md:
--------------------------------------------------------------------------------
1 | # Chapter 1 - Cont'd 
2 | 
3 | CS 456 - Computer Networks 
4 | 
5 | 01-05-2017 
6 | 
7 | Elvin Yung 
8 | 
9 | Previous: [January 3rd](1-3.md) 
10 | 
11 | ## Physical Media 
12 | * Physical media essentially fall into two categories: 
13 | * **guided**, in which data travels through some conduit. 
14 | * **unguided** or **wireless**, in which data propagates freely over the air. 
15 | 
16 | ### Guided 
17 | * One of the most popular wires is **twisted pair**, in which insulated copper wires are twisted together to reduce interference. 
18 | * **Coaxial** cables consist of two copper conductors (which share the same geometric axis). 
19 | * **Fiber optic** cables are essentially glass fibers that represent bits as *pulses* of light. 
20 | 
21 | ### Unguided 
22 | * **Radio** is the quintessential form of unguided media. 
23 | * Because it is propagated over air, it is affected by things like physical obstruction and interference. 
24 | 
25 | ## Packet switching 
26 | * The network core consists of a graph of interconnected **packet switches** or **routers**. 
27 | * The network transmission is then essentially a traversal to forward packets from the source to the destination. 
28 | * Importantly, packet switching is **connectionless** -- state is not retained across multiple packets. 
29 | 
30 | * As previously mentioned, data is not sent in the form of a stream of bits, but in discrete chunks called **packets**. 
31 | * The source is responsible for dividing up the payload into packets. The size of the packet, in bits, depends on the technology we are using. 
32 | * The time it takes to send the packet is called the transmission **delay**: 
33 | * The size of the packet, `L` (in bits) 
34 | * The transmission rate or **link bandwidth**, `R` (in bps, or bits per second, also known as **bitrate**) 
35 | * Then the transmission delay is simply `L / R`. 
36 | 
37 | ### Store-and-forward 
38 | * The network works in a **store-and-forward** way: the entire packet must arrive at a packet switch before sending it to the next node. 
39 | * Why? A packet switch must verify the integrity of the packet, parse the header, etc. before sending it out again. 
40 | * The [IP packet header](https://tools.ietf.org/html/rfc791#section-3.1) contains a **checksum**, which covers the header itself. If the check fails, the packet is dropped. 
41 | * Across a single intermediate switch, the **end-to-end** delay is `2L/R`, if propagation delay is negligible. 
42 | 
43 | ### Queueing delay and loss 
44 | * The input and output links don't necessarily have the same bandwidth, so packets can arrive faster than the output link can send them out. 
If the output link is full, the router can do one of two things: 45 | * Drop incoming packets. 46 | * Store the packets in an internal *queue*, to be processed at a later time. (It is not exactly a simple FIFO queue.) 47 | 48 | * In effect, a hybrid of both is done. The switch maintains some internal buffer of packets, but if the buffer fills up, it starts dropping incoming packets (this is **loss**, also known as *load-shedding*). 49 | 50 | ### Key network core functions 51 | * **Routing** is how a router determines how to get the packet to the destination. 52 | * It is essentially a graph pathfinding problem, but "distributed". Each router has a set of "neighboring" routers (in the form of a hosts table, which used to be literally the same as the hosts file), and some heuristic is used to figure out which router to forward a packet to. 53 | 54 | * **Forwarding** is how a router sends packets from the input link to the output link. 55 | * It's basically a scheduling problem. The packet queue on the routers are not necessarily simple FIFO queues, but can be more complicated scheduling logic/structures. 56 | 57 | ## Circuit switching 58 | * Circuit switching is an alternative to packet switching. Most notably, it is used in phone calls (e.g. GSM). 59 | * It is **connection-oriented**. You can imagine each connection as a (possibly continuous) stream of bits. 60 | * Essentially, the router determines a path between the source and the destination, and then *reserves* resources for the connection. 61 | * Resources are in the form of **channels** -- divisions of each link -- which, once reserved, are used exclusively for specific connections. 62 | * By reserving channels on each link, we guarantee some level of quality for each connection. 63 | * However, when the connection is idle, the channels are still reserved, so they are wasted. 64 | 65 | ### Channels 66 | * There are two different ways to allocate resources: by frequency, and by time. 67 | * In **frequency-division** multiplexing, a frequency range is reserved for each connection. 68 | * In **time-division** multiplexing, the router cycles between different connections to forward data for. 69 | * With TDM, it is easier to add connections, since we do not have to reallocate frequency ranges. 70 | * FDM and TDM are not mutually exclusive; they can be used together. 71 | 72 | ## Packet vs Circuit Switching 73 | * In circuit switching, a lot of resources are wasted when connections are idle. 74 | * Since users are probably not active 100% of the time, it is clear that with circuit switching is very inefficient. 75 | * In other words, packet switching allows more users to use the network. 76 | 77 | * Packet switching is better at handling bursty data, but the network can get congested. 78 | * Realistically, the way to get around this is by increasing the bandwidth of the links. 79 | 80 | Next: [January 10th](1-10.md) 81 | -------------------------------------------------------------------------------- /1165/cs349/10-2.md: -------------------------------------------------------------------------------- 1 | # Input 2 | 3 | CS 349 - User interfaces, LEC 001 4 | 5 | 7-6-2016 6 | 7 | Elvin Yung 8 | 9 | [Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/10.2-input.pdf) 10 | 11 | * The iPod was the perfect input method for a device where most of the UI elements were list-based. 
12 | * But it's not 13 | 14 | ## Classifying Computer Input 15 | * Sensing method 16 | * Mechanical - switch, potentiometer 17 | * Motion - accelerometer, gyroscope 18 | * Contact - capacitive touch, pressure sensor 19 | * Signal processing 20 | * Continuous vs discrete 21 | * There are different input devices for different purposes, but we mostly use the mouse and the keyboard. 22 | 23 | ## Text Input 24 | ### QWERTY 25 | * The QWERTY keyboard layout was first introduced in the Remington Model I typewriter in 1873. 26 | * They were trying to design a keyboard that wouldn't jam, which happened when you pressed two adjacent keys at once. 27 | * So the intention was to space out the key presses, so that the user would alternate between left and write hands in typing. 28 | * So of course, when we added keyboards to computers, we stole this layout from typewriters, because that's what people were already used to. 29 | 30 | * The optimal way to use a QWERTY keyboard is to keep your hands on home row, and moving your fingers to move 31 | * Except it doesn't actually work that well: 32 | * Awkward key combinations, like `tr`, 33 | * Sometimes have to jump over the home row, e.g. `br`` 34 | * 35 | * Because of letter frequency, most of the typing is actually done with the left hand. Because most people are right-handed, this can slow people down. 36 | * Statistics on key presses: 37 | * 16% on lower row 38 | * 52% on top row 39 | * 32% on the home row 40 | 41 | #### Other layouts 42 | * Since QWERTY has so many issues, there are a few remapped layouts. 43 | 44 | * Example: Dvorak 45 | * Letters should be typed by alternating between hands 46 | * 70% of letters are on home row 47 | * Bias towards right-handed typing, since most people are right-handed 48 | 49 | * **Studies are inconclusive on whether there's any actual productivity difference when using a non-QWERTY keyboard layout.** 50 | * An interesting point: it's really useful to be able to sit down on any computer and be able to 51 | 52 | ### Mechanical Keyboards 53 | * If the keys are downsized (e.g. on a BlackBerry), it interferes with typing. 54 | 55 | ### Soft Keyboards 56 | * on touchscreens, etc. 57 | * You no longer get any sort of tactile feedback. You have to either get really good at touch typing, or hope that autocomplete works well enough. 58 | * We're basically trading a physical keyboard to get a bigger screen. 59 | * Soft keyboards are good on devices where you don't have to do a lot of typing. e.g. an iPad can be used mostly as a movie watching device 60 | 61 | ### Other variants 62 | * Thumb keyboards - so that you could hold on the device and type reasonably well with just your thumbs 63 | * Frogpad - one-handed keyboard, only 15 keys, plus some meta keys, and different combinations of meta keys lets you type different letters 64 | * Chording keyboards - Douglas Engelbart proposed this - basically, a keyboard that only has 5 keys, and you type different combinations of keys. 65 | * Successor: [the Twiddler](http://twiddler.tekgear.com/) 66 | 67 | ### Predictive Text Input 68 | * T9 69 | * Autocomplete/autocorrect 70 | 71 | ### Others 72 | * Palm Pilot's [Graffiti](https://en.wikipedia.org/wiki/Graffiti_(Palm_OS)) - had decent accuracy, but you needed to memorize this entire scheme. 73 | * Natural handwriting recognition, e.g. 
74 | * [ShapeWriter](https://en.wikipedia.org/wiki/ShapeWriter) - original inspiration for Swype; it let people type on a touchscreen without lifting their finger
75 | * IJQwerty - study that found people were much more productive when i and j were swapped on ShapeWriter
76 | * [8pen](http://www.8pen.com/) - enter words by drawing loops
77 | * Seems like it'd be error prone, but
78 |
79 | ## Positional Input
80 | * Ur-example: Etch-A-Sketch
81 |
82 | ### Properties
83 | #### Sensing
84 | * Force or **isometric**: Input data is in the form of direction and magnitude of force, e.g. joystick
85 | * Displacement or **isotonic**: Input data is in the form of position difference, e.g. mouse
86 |
87 | #### Position vs Rate Control
88 | * Rate: joystick
89 | * Position: mouse
90 |
91 | #### Absolute vs Relative
92 | * This describes how the input device is mapped to the display.
93 | * **Absolute**: where you touch is directly mapped onto the display.
94 | * Example: A drawing tablet
95 | * Normally, however, on a desktop we use **relative** input.
96 | * Example: moving the mouse moves the cursor proportionally, but doesn't teleport it to some absolute location.
97 |
98 | #### Control-Display Gain
99 | * The **gain** is the ratio of how fast the pointer moves to how fast the input device moves.
100 |
101 | * If the CD gain is 1, when the input device moves some distance, the pointer moves the same distance.
102 | * If the CD gain is less than 1, the pointer moves more slowly than the device.
103 | * If the CD gain is more than 1, the pointer moves faster than the device.
104 |
105 | * In lots of OSes this is also known as the **sensitivity**, and it's generally tunable in the settings.
106 |
--------------------------------------------------------------------------------
/1165/cs349/7-3.md:
--------------------------------------------------------------------------------
1 | # History
2 |
3 | CS 349 - User interfaces, LEC 001
4 |
5 | 6-15-2016
6 |
7 | Elvin Yung
8 |
9 | [Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/7.3-history.pdf)
10 |
11 | (The numbering is not a mistake. Module 7.2, Visual Perception, seems to have been removed.)
12 |
13 | ## A Brief History of Interaction
14 |
15 | * Recall the [first lecture](1-1.md) of the course.
16 | * Early "computers" were literally rooms full of people doing calculations by hand.
17 | * There were some early mechanical calculators, like the [Analytical Engine](https://en.wikipedia.org/wiki/Analytical_Engine).
18 | * A company called [International Business Machines](https://en.wikipedia.org/wiki/IBM) made the ASCC, which weighed about 11 tons - and was controlled with hundreds of dials.
19 |
20 | ### Batch Interfaces
21 | * Up until the mid-1960s or so
22 | * Feed instructions to computers using *punch cards*.
23 | * No real interaction - the machine provides feedback in a matter of hours or days.
24 | * The cost of getting something wrong was huge - it took a very long time to iterate.
25 |
26 | ### Conversational Interfaces
27 | * 1965 - ~1985
28 | * User types a command in a prompt, the system evaluates the command, and then provides feedback.
29 | * You basically still had to be an expert to use them.
30 | * e.g. Zork, Bash
31 |
32 | ![Zork](https://static1.squarespace.com/static/55182b4ee4b0c6d76a9c1eb3/t/552f3011e4b07f0b392386cf/1429155859834/)
33 |
34 | * Highly flexible
35 | * The interaction is usually well-suited to the machine, but not to the task.
36 | * You had to learn a lot of technical concepts before understanding how to use the system.
37 | * Requires *recall* rather than *recognition* - i.e. the interface isn't intuitive enough for a beginner to be able to figure it out. You literally had to know the command syntax to use it. 38 | 39 | ### Visionaries 40 | #### Vannevar Bush 41 | * In 1945, Vannevar Bush authored [As We May Think](http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/). 42 | * In it, he suggested the idea of a device called a [memex](https://en.wikipedia.org/wiki/Memex) - a tool to organize information with *links* between annotated pieces of content. 43 | * [Sound familiar?](https://en.wikipedia.org/wiki/Hyperlink) 44 | * It was a futuristic vision - the technology was definitely nowhere near. 45 | 46 | > Wholly new forms of encyclopedias will appear, ready-made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. 47 | 48 | #### Ivan Sutherland 49 | * Ivan Sutherland came up with Sketchpad, a device controlled with a light pen that let users directly manipulate shapes with a proto-graphical user interface. 50 | * Under the hood, the graphics were manipulated similarly to a constraint solver. 51 | * He was interested in building tools not for experts, but for people like artists and draftsmen. (i.e. task-driven) 52 | * Sketchpad's software was the first to use the concept of a *window*. 53 | 54 | > A display connected to a digital computer gives us a chance to gain familiarity with concepts not realizable in the physical world. It is a looking glass into a mathematical wonderland. 55 | 56 | #### Douglas Engelbart 57 | * Career spanned 50s - 90s ish 58 | * Led a team of researchers at the Stanford Research Institute (SRI) 59 | * His researchers developed the beginnings of some extremely advanced technologies: mouse, hypertext, collaborative software, etc. 60 | * [Mother of All Demos](https://www.youtube.com/watch?v=yJDv-zdhzMY) - hour and a half long demo in 1968 demonstrating those technologies. 61 | 62 | > An advantage of being online is that it keeps track of who you are and what you’re doing all the time. 63 | 64 | ![Relevant xkcd](https://imgs.xkcd.com/comics/douglas_engelbart_1925_2013.png) 65 | 66 | #### Alan Kay 67 | * Xerox PARC - worked on the Xerox Star and the Xerox Alto, the earliest personal computers with a GUI and Ethernet 68 | * Dynabook - conceptual prototype for laptops/tablets 69 | * Helped develop object-oriented programming (Smalltalk), Ethernet, the graphical user interfaces ... 70 | 71 | > The best way to predict the future is to invent it. 72 | 73 | (Offhand mention that Alan Kay did an [AMA on Hacker News](https://news.ycombinator.com/item?id=11939851) very recently.) 74 | 75 | ##### Apple 76 | * The Xerox Star cost $75k for a basic system, $16k for each additional workstation 77 | * This is why you haven't heard of it. 78 | 79 | * Steve Jobs "[steals](https://www.youtube.com/watch?v=_1rXqD6M614)" Xerox PARC research in exchange for pre-IPO investment in Apple 80 | * GUI technology gets used in the Macintosh and the Lisa 81 | * And the rest is history! 82 | 83 | ![Out of the bag](http://www.folklore.org/images/Macintosh/out_of_the_bag.jpg) 84 | 85 | ### Graphical User Interface 86 | * Utilizes *recognition* rather than *recall* 87 | * Better feedback 88 | * Metaphors - interactions are more like the task domain, rather than computerese 89 | * The GUI puts computers not just in the right hands, but in everyone's hands! 90 | 91 | ### The future? 
92 | * Touchscreen/[pens](https://www.engadget.com/2010/04/08/jobs-if-you-see-a-stylus-or-a-task-manager-they-blew-it/)
93 | * Natural language processing
94 | * Virtual/augmented reality
95 | * Brain (machine|computer) interface
96 |
97 | * [Microsoft: Productivity Future Vision](https://www.youtube.com/watch?v=w-tFdreZB94)
98 |
--------------------------------------------------------------------------------
/1151/cs241/20150108.md:
--------------------------------------------------------------------------------
1 | # CS 241
2 | ## Foundations of Sequential Programs
3 | #### 1/8/2015
4 | Elvin Yung
5 |
6 | ### Cont'd
7 | What does `11001001` represent?
8 | * Number
9 | * Character
10 | * Address
11 | * Random flags
12 | * Instructions (or in our case, a part of an instruction, since our instructions are 32-bit)
13 |
14 | We can't really know. We need to remember our intent when we stored the byte.
15 |
16 | ### Machine Language
17 | Machine Language - MIPS
18 | * What does an instruction look like?
19 | * What instructions are there?
20 | We will use a simplified flavor of MIPS with 18 different 32-bit instruction types.
21 |
22 | In order to understand how machine language works, we need to understand more about the architecture of the system. You might already know that, in a very high-level way, the CPU communicates with the RAM over some sort of bus.
23 |
24 | ### CPU
25 | The CPU is the "brains" of the computer. These parts are inside the CPU:
26 | * The **control unit** fetches and decodes instructions, coordinates I/O, and dispatches to other parts of the computer to carry them out.
27 | * The **ALU**, or **arithmetic logic unit**, is responsible for the mathematical operations, logical operations, and comparisons.
28 | * **Registers** are a small amount of memory inside the CPU. We have 32 general purpose registers, each 32 bits wide. (It only takes 5 bits to name a register inside an instruction.)
29 |
30 | ### Memory
31 | In descending order of speed:
32 | * Registers
33 | * Cache - a small memory closer to the CPU that keeps copies of recently accessed parts of main memory, for faster re-access.
34 | * Main memory (RAM)
35 | * Secondary storage (Tape, HDD, network, etc.)
36 |
37 | In this course, we will mostly be concerned with registers and main memory.
38 |
39 | ### MIPS
40 | MIPS has access to 32 general purpose registers, as well as special registers such as `HI`, `LO`, `PC`, `IR`, `MAR`, and `MDR`.
41 | * `$0` always stores 0.
42 | * `$30` and `$31` are special by convention (i.e. not by hardware).
43 |
44 | An example register operation is "add the contents of the registers `s` and `t`, and store the result in `d`." (`add d,s,t`) We denote this as `$d <- $s + $t`. Since it takes 5 bits to encode a register number, and 3 registers are used in this example, 15 bits are set aside for the registers. Since a word is 32 bits long, we have 17 bits left to encode the rest of the instruction.
45 |
46 | * There are two types of instructions, register and immediate.
47 | * Register instructions work entirely with registers.
48 | * Immediate instructions work with some combination of registers and immediate values, that is, literals. Immediate values are interpreted as two's complement.
49 |
50 | * Multiplication (`mult`/`multu`) gives you a 64-bit result in the reserved `HI` and `LO` registers, since multiplying two 32-bit numbers can give you a 64-bit number.
51 | * Division (`div`/`divu`) stores the quotient in `LO` and the remainder in `HI`.
52 | * The `HI` and `LO` registers can be read (moved to a general purpose register) with the `mfhi` and `mflo` instructions.
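As a quick sanity check on how the two halves fit together, here is a tiny Python sketch (not MIPS, and not from the lecture). It mirrors the unsigned case (`multu`/`divu`) to keep the bit arithmetic simple; the operand values are arbitrary.

```python
a, b = 0x1234_5678, 0x0000_9ABC     # arbitrary unsigned 32-bit operands

product = a * b                      # Python keeps the full 64-bit product
hi = (product >> 32) & 0xFFFF_FFFF   # what multu would leave in HI
lo = product & 0xFFFF_FFFF           # what multu would leave in LO
assert (hi << 32) | lo == product    # HI:LO together hold the real answer

quotient, remainder = divmod(a, b)   # divu: quotient ends up in LO,
                                     #       remainder ends up in HI
print(hex(hi), hex(lo), quotient, remainder)
```

The split is also why `mfhi` and `mflo` exist: a general purpose register can only hold one 32-bit half of the result at a time.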
53 |
54 | ### Main Memory
55 | * **RAM** stands for random access memory. This is the main memory of the computer.
56 | * This is a large amount of memory that is stored away from the CPU. For our purposes, the RAM is just a big array of `n` bytes, where `n ~= 10^9` (a gigabyte).
57 | * Each byte has an address, running from 0 to `n-1`, but we group everything into words, so we will use addresses divisible by 4 (`0x0`, `0x4`, etc.), and each 4-byte block is a word.
58 | * The data travels between the CPU and the RAM on a bus, which we can think of as 64 wires connecting the two components. Accessing RAM is much slower than accessing registers.
59 | * The CPU interacts with the bus using the `MAR` (memory address register) and `MDR` (memory data register) registers.
60 |
61 | To move data between the RAM and CPU:
62 | * Load: transfer a word from a specified address to a specified register. The desired address goes into the `MAR` register, and then goes out onto the bus. When it arrives at the RAM, the associated data is sent back over the bus into the `MDR` register, and then moved into the destination register.
63 | * Store: Exactly like load, but in reverse.
64 |
65 | ### Programs
66 | * How does the computer know which words contain instructions and which contain data?
67 | * Surprise! It doesn't. There is a special `PC` (program counter) register, which holds the address of the next instruction to run.
68 | * The `IR` register stores the current instruction.
69 | * By convention, we guarantee that some fixed address contains code, and then initialize PC to whatever that fixed address is.
70 | * Then, the control unit runs the fetch-execute cycle. In pseudocode, this is the fetch-execute cycle:
71 |
72 | ```
73 | PC <- 0
74 | loop
75 |   IR <- MEM[PC]
76 |   PC <- PC + 4
77 |   decode and execute instruction at IR
78 | end loop
79 | ```
80 |
81 |
82 | * How does a program get executed?
83 | * A program called the **loader** puts the program into memory, and sets `PC` to the address of the first instruction in the program.
84 | * What happens when the program ends?
85 | * We need to return control to the loader, i.e. set `PC` to the address of the next instruction in the loader.
86 | * Which instruction is that?
87 | * By convention, `$31` stores the correct address to return to, so we just need to set `PC` to the value in `$31`.
88 | * We will use the jump register instruction (`jr`) to update the value of `PC`.
89 |
90 |
91 |
92 |
--------------------------------------------------------------------------------
/1151/cs241/20150120.md:
--------------------------------------------------------------------------------
1 | # CS 241
2 | ## Foundations of Sequential Programs
3 | #### 1/20/2015
4 | Elvin Yung
5 |
6 | ### Returning from Procedure
7 | ```nasm
8 | lis $5      ; $5 will hold the address of the procedure f
9 | .word f
10 | jr $5       ; call f -- how do we get back HERE?
11 | (HERE)
12 | ```
13 |
14 | * After you call a procedure, how do you get back to where you were?
15 | * When we return from a procedure, we need to set PC to the line after the `jr`.
16 | * We can do this with `jalr`, the *jump and link register* instruction. It is exactly like `jr`, but it also sets `$31` to the address of the next instruction (i.e. `PC`).
17 | * However, since this overwrites `$31`, now we need to save `$31` onto the call stack whenever we call a procedure, and then pop it before returning.
18 | * Our mainline template now looks like this:
19 |
20 | ```nasm
21 | main:
22 | lis $5            ; load the address of f into $5
23 | .word f
24 | sw $31, -4($30)   ; push $31 onto the stack
25 | lis $31
26 | .word 4
27 | sub $30, $30, $31 ; decrement stack pointer
28 | jalr $5           ; call f
29 | lis $31
30 | .word 4
31 | add $30, $30, $31 ; restore stack pointer
32 | lw $31, -4($30)   ; pop $31
33 | jr $31
34 | ```
35 |
36 | ### Parameter/Result Passing
37 | * The best option is to pass or return parameters via registers.
38 | * However, this makes it complicated to keep track of which parameters live in which registers.
39 | * If this is done, the procedure writer MUST document their code so that the client knows which registers are used for parameters and results.
40 | * The largest problem is that there are only 32 registers.
41 | * A better option is to push parameters onto the stack. This also requires documentation.
42 |
43 | ### Full Code Example
44 | * Here is an example of a fully documented procedure.
45 | * We will write a procedure that sums the integers from 1 to N.
46 |
47 | ```nasm
48 | ; sum 1 to N
49 | ; Registers
50 | ;   $1 - working
51 | ;   $2 - input (N)
52 | ;   $3 - output
53 |
54 | sum1toN:
55 | sw $1, -4($30)    ; push registers $1 and $2
56 | sw $2, -8($30)
57 | lis $1
58 | .word 8
59 | sub $30, $30, $1  ; decrement stack pointer
60 | add $3, $0, $0    ; initialize $3 (the output) to 0
61 | lis $1
62 | .word 1           ; $1 holds the constant 1
63 | top:              ; the actual adding part
64 | add $3, $3, $2
65 | sub $2, $2, $1
66 | bne $2, $0, top
67 |
68 | lis $1
69 | .word 8
70 | add $30, $30, $1  ; restore stack pointer
71 | lw $2, -8($30)    ; pop $2
72 | lw $1, -4($30)    ; pop $1
73 | jr $31            ; return
```
74 | ### Recursion
75 | * Recursion isn't different from any other function call.
76 | * If registers, parameters, and the stack are managed correctly, recursion will work.
77 |
78 | ### Input and Output
79 | * Input is not supported. Deal with it!
80 | * Output: MIPS provides a memory-mapped location (`0xffff000c`) called video memory; when you store a word there, its least significant byte is printed to the screen.
81 | * Example: print CS, followed by a newline
82 |
83 | ```nasm
84 | lis $1
85 | .word 0xffff000c
86 | lis $2
87 | .word 67          ; ASCII C
88 | sw $2, 0($1)
89 | lis $2
90 | .word 83          ; ASCII S
91 | sw $2, 0($1)
92 | lis $2
93 | .word 10          ; ASCII \n
94 | sw $2, 0($1)
95 | jr $31
96 | ```
97 |
98 | ### The Assembler
99 | * An assembler is a program that translates assembly code into equivalent machine code.
100 | * Any translation process involves two steps:
101 | * **Analysis**, to understand what is meant by the source string
102 | * **Synthesis**, to output an equivalent target string
103 | * An assembly file (`.asm`) is just a stream of characters, a text file.
104 |
105 | So, parsing basically proceeds as follows:
106 | 1) **Tokenization**. Group characters into meaningful **tokens**. For example, labels, hex numbers, regular numbers, `.word`, registers, etc. This has been done for us; we will talk about it in far more detail later. (e.g. for the C++ starter code, each token is an instance of the Token class.)
107 | 2) **Analysis**. Group tokens into instructions, if possible.
108 | 3) **Synthesis**. Output equivalent machine code.
109 |
110 | * If tokens are not arranged into sensible instructions, output `ERROR` to standard error.
111 | * Advice: there are many more wrong configurations than right ones for an assembly file. For example, the instruction `beq $1, $0, abc` corresponds to a sequence of token types: `ID REGISTER COMMA REGISTER COMMA ID`
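As a rough illustration of the synthesis step, here is a small sketch of how two instruction formats turn into 32-bit words. The field layout is the standard MIPS encoding; the helper names (`encode_add`, `encode_beq`) are made up for illustration and are not part of the starter code. (How the assembler figures out a numeric branch offset for a label like `abc` is exactly the problem discussed next.)

```python
def encode_add(d, s, t):
    # R-format: opcode 0, fields rs, rt, rd, shamt = 0, funct = 0x20
    return (s << 21) | (t << 16) | (d << 11) | 0x20

def encode_beq(s, t, offset):
    # I-format: opcode 4, then rs, rt, and a 16-bit two's complement offset
    return (4 << 26) | (s << 21) | (t << 16) | (offset & 0xFFFF)

print(f"{encode_add(3, 1, 2):08x}")   # add $3, $1, $2  ->  00221820
print(f"{encode_beq(1, 0, -3):08x}")  # beq $1, $0, -3  ->  1020fffd
```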
112 |
113 | * How do we assemble the following code?
114 |
115 | ```nasm
116 | beq $1, $0, abc
117 | ...
118 | abc: add $3, $3, $3
119 | ```
120 |
121 | * In the example above, we don't know what `abc` is until the line which declares the label.
122 | * The biggest problem with writing an assembler is storing identifiers.
123 | * The solution is to scan through the source string in multiple passes.
124 | * In the first pass, group the tokens into instructions and record the addresses of all labelled instructions into a **symbol table**, a data structure containing (name, address) tuples.
125 | * A line of assembly can have more than one label. You can even label the line after the last instruction.
126 | * In the second pass, translate each instruction into machine code. If an instruction refers to a label, look up the associated address in the symbol table.
127 | * Our assembler should output the assembled MIPS to `stdout`, and we should output the symbol table to `stderr`.
128 |
129 | ### Assembling, Trace
130 | ```nasm
131 | main:
132 | lis $2
133 | .word 13
134 | add $3, $0, $6
135 | top:
136 | add $3, $3, $3
137 | lis $1
138 | .word 1
139 | sub $2, $2, $1
140 | bne $2, $0, top
141 | jr $31
142 | ```
143 |
144 |
--------------------------------------------------------------------------------
/1165/cs349/6-1.md:
--------------------------------------------------------------------------------
1 | # Visual Design
2 |
3 | CS 349 - User interfaces, LEC 001
4 |
5 | 6-6-2016
6 |
7 | Elvin Yung
8 |
9 | [Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/6.1-visual_design.pdf)
10 |
11 | ## Why Discuss Visual Design?
12 | * You need to know how to present your interface to the user.
13 | * People *shouldn't have to think* when they use your interface!
14 |
15 | ### Bad interfaces
16 | ![An example of a bad interface](https://diyivorytower.files.wordpress.com/2011/01/2011_01_12-bulk-rename-utility.jpg)
17 | ![At least it's not phallic.](http://www.piedpiper.com/app/themes/pied-piper/dist/images/interface_large.jpg)
18 |
19 | ## Objectives
20 | * Your interface should be easy to understand - design with the human's conscious and unconscious capabilities in mind.
21 | * *Pre-attentive processing* happens at a lower level than conscious thought. We unconsciously process a lot of what we see before we're even aware of it.
22 | * Keep things *simple*!
23 | * (But not too simple. You want the user to still be able to *do* the things they want to do.)
24 | * Basically, remember also that *essential* can conflict with *simple* - expert users need specialized interfaces.
25 |
26 | ![Ultimate sophistication](https://safr.kingfeatures.com/idn/cnfeed/zone/js/content.php?file=aHR0cDovL3NhZnIua2luZ2ZlYXR1cmVzLmNvbS9SaHltZXNXaXRoT3JhbmdlLzIwMTMvMDUvUmh5bWVzX3dpdGhfT3JhbmdlLjIwMTMwNTIzXzkwMC5naWY=)
27 |
28 | (Source: http://rhymeswithorange.com/comics/may-23-2013/)
29 |
30 | ## Organization and Structure: Gestalt Principles
31 | * Ways that we look at the world and find patterns.
32 | * Our brains are wired to look for patterns.
33 | * *Gestalt principles* describe some of the ways we do this in the real world.
34 | * The idea is that you can build an interface that takes advantage of how our minds group things.
35 |
36 | ### Proximity
37 | * We associate things more strongly when they are close to each other.
38 | * e.g. items that are spaced more closely vertically look like columns; items spaced more closely horizontally look like rows.
39 | 40 | * Example: sign at Big Bend National Park, Texas 41 | 42 | ![Bad proximity](https://www.nps.gov/common/uploads/photogallery/imr/park/bibe/60014F41-155D-451F-67C8B8DC3E90D16A/60014F41-155D-451F-67C8B8DC3E90D16A-large.JPG) 43 | 44 | ### Similarity 45 | * We group things based on visual characteristics, like **shape**, **size**, **color**, **texture**, **orientation**. 46 | * e.g. in a group of similar-sized squares and circles, we group by shape. In a group of large and small squares and circles, we group by size. In a group of green and white squares and circles, we group by color. (TODO: add image) 47 | * (We see size first - it's more obvious to us.) 48 | * When things look like one another, we tend to think of them as belonging in a common set. 49 | 50 | ### Good Continuation 51 | * We have a tendency to find flow in things. 52 | * Your eyes will track and follow a line or curve. 53 | * Things arranged in such pattern tend to get associated with each other. 54 | * e.g. we tend to follow a menu, in a straight line. 55 | * Arranging things like this can get people to look at more things, even if they were only looking for one thing. 56 | 57 | The last three principles dealt with how we group objects. The next few will deal with how we fill in missing or ambiguous information. 58 | 59 | ### Closure 60 | * We like to see a complete figure even if some parts are missing. 61 | * For example, a dotted circle looks like a circle because it has a circular in shape, even if large parts are missing. 62 | * In UIs, for example, windows overlap, but we infer the fact that there's a window behind the current window. 63 | 64 | ### Figure/Ground (aka Area) 65 | * We like to separate or visual field into things that are in the foreground (the *figure*), and things that are in the background (the *ground*). 66 | * Things in the foreground, or *figure*, are interpreted as the object of interest. 67 | * *Ground* is everything else. 68 | 69 | #### Ambiguity 70 | * Visual cues can help solve this. 71 | * Figure has a definite shape, but ground seems shapeless. 72 | * (In the absence of a horizon, it's hard to tell.) 73 | 74 | ### Law of Prägnanz 75 | * We like to perceive shapes in the simplest possible way. 76 | * In some cases we use depth to do this - a stack of partially overlapping squares are interpreted as three squares, even though it doesn't really look like it. 77 | * Symmetry is great as well - we like to parse symmetry. 78 | 79 | ### Uniform Connectedness 80 | * The interface can force a grouping on the user. They can do this by creating regions or connecting lines. 81 | * You can define regions to force people to perceive things similarly. 82 | * This isn't nearly as effective as proximity, but it's an option. 83 | * For a long time, Microsoft used uniform connectedness in their UIs. It works, but makes their interfaces very cluttered. 84 | 85 | ### Alignment (?) 86 | * Is alignment a Gestalt principle? 87 | * Basically, we see things similarly when we group things in line. 88 | * It's a powerful organizing tool. 89 | * It's kind of like continuation - continuation tends to imply alignment. 90 | 91 | ## Pleasing Layouts 92 | 93 | ## Applying Concepts 94 | * Avoid haphazard layouts 95 | * Align stuff 96 | 97 | ## Testing Your Interface 98 | * Show it to someone else - don't ask if they like it, try to get first impressions 99 | * You want to figure out, the *first* time that they see it, if everything is clear and usable. 
100 | * Squint test - when you squint and look at your interface, does it still make sense? 101 | 102 | ## Summary 103 | * Strive for simplicity! 104 | * Know your target. 105 | * Don't leave your visual design up to chance! Think about your design, and test it out. 106 | -------------------------------------------------------------------------------- /1165/cs349/1-1.md: -------------------------------------------------------------------------------- 1 | # Course Introduction 2 | 3 | CS 349 - User interfaces, LEC 001 4 | 5 | Elvin Yung 6 | 7 | [Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/1.1-introduction.pdf) 8 | 9 | ## What is a User Interface? 10 | Some definitions that might work: 11 | * how humans see the computer 12 | * where humans and computers meet 13 | 14 | A real definition: 15 | * A *user interface* is where a a person can express *intention* to the device, and the device can present *feedback*. 16 | 17 | UIs don't just refer to how you interact with computers: Microwaves, refrigerators, door bells, hammers, jets... 18 | 19 | ## A Brief History of Computer UIs 20 | ### Pre 1970s 21 | * Computers had *batch* interfaces. 22 | * They were rudimentary and mostly non-interactive. 23 | * The hot new computers of the day: ENIAC, 24 | * Giving a computer instructions involved [punching holes in cards](https://en.wikipedia.org/wiki/Punched_card)... 25 | 26 | ![The Punch Bowl](https://upload.wikimedia.org/wikipedia/commons/5/58/FortranCardPROJ039.agr.jpg) 27 | 28 | ### 1970s - early 1980s 29 | * We got *conversational* interfaces, i.e. command lines. 30 | * We mainly saw them in two different places: *microcomputers* (the first personal computers), and *terminals* ("dumb" video display clients connected to mainframes). 31 | 32 | ![Apple II](https://upload.wikimedia.org/wikipedia/commons/8/82/Apple_II_tranparent_800.png) 33 | 34 | ![IBM 3278](http://www.corestore.org/3278-3.jpg) 35 | 36 | * Some kid named Bill Gates bought a [Quick and Dirty Operating System](https://en.wikipedia.org/wiki/DOS) to use in IBM PCs (and IBM clones), for just $50,000... 37 | 38 | ![IBM PC](https://upload.wikimedia.org/wikipedia/commons/f/f1/Ibm_pc_5150.jpg) 39 | 40 | ### late 1980s - now? 41 | * Xerox's Palo Alto Research Park (PARC) developed amazing technologies around this time, things like Ethernet networking and object-oriented programming. 42 | * The one that got the most attention was the bitmapped display. It let you show completely *graphical* user interfaces in your software, controlled with a device called a *mouse*. 43 | * Xerox was too focused on their photocopier products, and never really capitalized on their innovations. They made some very expensive workstations based on the GUI and Ethernet, the Alto (1973) and the Star (1981), which never really sold well. 44 | * In exchange for a small stake in Apple, Xerox let Steve Jobs [visit](https://www.youtube.com/watch?v=2u70CgBr-OI) PARC. ["Xerox grabbed defeat from the greatest victory in the computer industry."](https://www.youtube.com/watch?v=_1rXqD6M614) 45 | * Apple ended up using the technology in the Lisa (which was a huge failure), and then the [Macintosh](http://www.folklore.org/ProjectView.py?project=Macintosh) (which started off a hit, but ended up also a huge failure). 
46 | 47 | ![It sure is great to get out of that bag.](http://radio-weblogs.com/0102482/images/2005/06/06/hello-mac.jpg) 48 | 49 | * Some [other company](https://www.youtube.com/watch?v=sforhbLiwLA) that [has no taste](https://www.youtube.com/watch?v=EJWWtV1w5fw) took the ideas for the GUI and [ran with it](https://en.wikipedia.org/wiki/Windows_1.0). And [bad things](https://en.wikipedia.org/wiki/Apple_Computer,_Inc._v._Microsoft_Corp.) happened. 50 | 51 | ![They just have no taste.](http://zdnet3.cbsistatic.com/hub/i/r/2015/07/23/db9b07b8-1bd3-4451-9365-7bd336f4d7dd/resize/1170x878/6a5511eafc6e9a454add33945466f8ed/cmwindows1-0jul15a.jpg) 52 | 53 | ### 1990s - 54 | * 1989: Tim Berners-Lee came up with a hypertext format to be sent over network connection. This has made a lot of people very angry and been widely regarded as a bad move. Tim decided to call his neat invention the WorldWideWeb. 55 | * 1993: Marc Andreesen makes a web browser called Mosaic. It add graphics to web pages. This has made a lot of people very angry and been widely regarded as a bad move. Mosaic eventually became Netscape. 56 | * People started putting `.com` at the end of their company name. It was a weird time. 57 | 58 | ### now? - future? 59 | * Touchscreens 60 | 61 | ![No more Eat Up Martha.](https://i.ytimg.com/vi/e7EfxMOElBE/maxresdefault.jpg) 62 | 63 | * Voice 64 | 65 | ![Echo echo](http://gazettereview.com/wp-content/uploads/2015/12/Amazon-Echo-1.jpg) 66 | 67 | 68 | ### How Has Computing Changed? 69 | * The introduction of the GUI *fundamentally* changed how we use technology. 70 | * Computers went from being a specialist tool to being used by everyone, without having to be an expert or build their own software. 71 | 72 | ## Interactive System Architecture 73 | * The user has a *mental model* - how it thinks the device works. 74 | * The device has a *system model* - how it actually works. 75 | * Interaction: the user expresses an intention to the device, and the device presents feedback about that intention. 76 | * An *event*, to the user, is an observable occurrence or phenomenon. To the system, it's a message saying that something happened. 77 | 78 | ## Interface vs. interaction 79 | * *Interface* is how the device presents itself to the user. These include controls, and visual, physical, and auditory cues. 80 | * *Interaction* is the user's actions to perform a task, and how the device responds. 81 | 82 | ## Designing Interactions 83 | * Designing good interaction is hard because users, and the things they want to do, are all different. 84 | * Can you anticipate all scenarios? 85 | * There's no single right way to build an interface - it can always be improved. 86 | 87 | ## Why Study Interaction? 88 | * The right computer is a [bicycle for the mind](https://www.youtube.com/watch?v=ob_GX50Za6c&t=25s). 89 | * A well designed tool with a good interface can radically improve our productivity and let us do things that we couldn't dream of before. 90 | * New technology becomes widespread not when it becomes more powerful, but when it becomes easy to use. 91 | -------------------------------------------------------------------------------- /1151/cs241/20150115.md: -------------------------------------------------------------------------------- 1 | # CS 241 2 | ## Foundations of Sequential Programs 3 | #### 1/15/2015 4 | Elvin Yung 5 | 6 | ### Programs with RAM 7 | * `lw` is the *load word* instruction. It loads a word from RAM into a register. 8 | * syntax: `lw $a, c($b)`, loads the word at `MEM[c+$b]` into `$a`. 
9 | * `sw` is the *store word* instruction. It stores a word from register into RAM. 10 | * syntax: `sw $a, c($b)`, stores the word at `$a` into the memory location `MEM[c+$b]`. 11 | 12 | #### Example: Array indexing 13 | * The register `$1` holds the address of an array, and `$2` holds the length of the array. Retrieve the element with index 5, and store it in register `$3`. 14 | 15 | ```nasm 16 | lw $3, 20($1) ; 5*4 = 20 17 | jr $31 18 | ``` 19 | 20 | * Suppose we want to work with arbitrary indices. We introduce the following instructions: 21 | * `mult` is the *multiply* instruction. Since multiplying two 32-bit numbers might result in a 64-bit number, the results are stored in two special registers `hi` and `lo`. 22 | * syntax: `mult $a, $b` 23 | * `div` is the *divide* instruction. The quotient is stroed in `lo`, and the remainder is stored in `hi`. 24 | * syntax: `div $a, $b` 25 | * `mfhi` and `mflo` are the *move from HI* and *move from LO* instructions. They move the values from `hi` or `lo` respectively into a given register. 26 | * syntax: `mfhi $d`, `mflo $d` 27 | 28 | ```nasm 29 | lis $5 ; load the arbitrary index into $5 30 | .word SOME_INDEX 31 | lis $4 ; load the value 4, the size of a word, into $4 32 | .word 4 33 | mult $5, $4 ; obtain the offset we need by multiplying the index and the size of a word 34 | mflo $5 ; move the offset we need into $5 35 | add $5, $1, $5 ; offset $5 by $1, which stores the address of the array 36 | lw $3, 0($5) ; load the address we need from memory into $3 37 | jr $31 38 | ``` 39 | 40 | ### Labels 41 | * Recall, in the previous loop example (cf. [notes from 01/13](20150113.md), *Sum the integers `1..13`, store in `$3`, then return*) that the `lis` command was inside the loop, which could be moved outside. 42 | * That is fine, but the `bne` at the end now has an improper immediate/offset (should be -3 instead of -5). 43 | * In nested loops/branches, this is bad. Since we hardcoded branch offsets, we need to change them every time we add or remove instructions inside a loop. 44 | * Instead, the assembler allows for *labelled* instructions: `