├── .gitignore ├── all CS241 questions except quiz 1 and 5.txt ├── leetcode.md ├── README.md ├── CS173 ├── answers.txt └── questions.txt ├── Cloud ├── answers.ini └── questions.ini ├── CS225 ├── answers.txt └── questions.txt ├── CS233 ├── answers.txt └── questions.txt └── CS241 ├── answers.txt └── questions.txt /.gitignore: -------------------------------------------------------------------------------- 1 | markings.txt 2 | -------------------------------------------------------------------------------- /all CS241 questions except quiz 1 and 5.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ace-n/interview-questions/HEAD/all CS241 questions except quiz 1 and 5.txt -------------------------------------------------------------------------------- /leetcode.md: -------------------------------------------------------------------------------- 1 | # LeetCode questions 2 | 3 | These are the LeetCode questions I use. 4 | 5 | > Tip: I usually aim for 1-2 on weekdays, and 3-4 on weekends. 6 | 7 | ## Training set 8 | Do these questions first, to familiarize yourself with the basic techniques. 9 | 10 | > Tip: don't just find the solutions - code them out! (This helps you warm-up your implementation skills.) 11 | 12 | ### Dynamic Programming 13 | - [Trapping Rain Water](https://leetcode.com/problems/trapping-rain-water/) (target: O(n) time, O(n) space) 14 | - [Longest Palindromic Substring](https://leetcode.com/problems/longest-palindromic-substring/) 15 | - [Max Product Subarray](https://leetcode.com/problems/maximum-product-subarray/) 16 | - Knapsack Problem 17 | - Subset Sum 18 | - [This problem](https://leetcode.com/problems/wildcard-matching/), but find the **longest** possible match. 
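The Trapping Rain Water problem above can meet its O(n) time / O(n) space target with a classic two-pass DP: precompute the running maximum height to the left and right of every bar, then sum the water each bar can hold. A minimal sketch (the `trap` function name and sample inputs are mine, not LeetCode's starter code):

```python
def trap(height):
    """Water trapped above bar i is bounded by the shorter of the
    tallest bars to its left and right, minus bar i's own height."""
    n = len(height)
    if n == 0:
        return 0
    # left_max[i] = tallest bar in height[0..i]
    left_max = [0] * n
    left_max[0] = height[0]
    for i in range(1, n):
        left_max[i] = max(left_max[i - 1], height[i])
    # right_max[i] = tallest bar in height[i..n-1]
    right_max = [0] * n
    right_max[n - 1] = height[n - 1]
    for i in range(n - 2, -1, -1):
        right_max[i] = max(right_max[i + 1], height[i])
    return sum(min(left_max[i], right_max[i]) - height[i] for i in range(n))
```

(A two-pointer variant gets this down to O(1) space — a good follow-up exercise.)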
19 | 20 | ### Strings + Arrays 21 | - [Median of sorted arrays](https://leetcode.com/problems/median-of-two-sorted-arrays/) 22 | - [3-sum](https://leetcode.com/problems/3sum/) 23 | - https://leetcode.com/problems/merge-k-sorted-lists 24 | 25 | ### Trees + Searching 26 | - https://leetcode.com/problems/sudoku-solver/ 27 | - [Jump Game II](https://leetcode.com/problems/jump-game-ii/) 28 | 29 | ## Testing set 30 | These questions use the techniques discussed above, but it's your job to determine which one(s) are applicable. :) 31 | 32 | - [Add Two Numbers](https://leetcode.com/problems/add-two-numbers/) 33 | - [Word Break II](https://leetcode.com/problems/word-break-ii/) 34 | - [Self Crossing](https://leetcode.com/problems/self-crossing/) 35 | - [Permutation sequence](https://leetcode.com/problems/permutation-sequence/) 36 | - [Max Points on a Line](https://leetcode.com/problems/max-points-on-a-line/) 37 | - [Wildcard Matching](https://leetcode.com/problems/wildcard-matching/) -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Ace's interview questions 2 | These are the _conceptual_ "flashcard" questions (and answers) I use to prepare for software engineering interviews. 3 | 4 | These questions are intended to cover the conceptual background you'll need to succeed. They are _not_ a replacement for **solving practice problems** or **implementing basic algorithms** (e.g. sorting algorithms, binary search) yourself. 5 | 6 | > **Mid/Senior-level folks:** I recommend taking a look at Donne Martin's excellent [System Design primer](https://github.com/donnemartin/system-design-primer). The `Cloud` questions cover **my** knowledge gaps relative to that tutorial. 
_(Your knowledge gaps will be different than mine - feel free to send PRs with your own questions for things I didn't cover.)_ 7 | 8 | ## How I prepare for interviews 9 | _Caveat: most of this advice is geared towards junior/"new grad" interviews. More senior folks may still find it useful, however._ 10 | 11 | ### Part 1: conceptual questions 12 | 1. Attempt to answer these questions from memory 13 | 2. Review the answers I missed (usually via a script that records [in]correct answers) 14 | 3. `GOTO 1` until I get every question right _more times than I've gotten it wrong_ 15 | 16 | #### Subject matter 17 | ##### Algorithms-based interview 18 | > Review **all** the `Cloud` questions 19 | > 20 | > Review **all** the `CS225` questions. 21 | > 22 | > Review the following `CS241` sections: 23 | > - quiz4-part1 24 | > - quiz4-part2 25 | > - quiz6-deadlock 26 | > - quiz6-virtualMemory 27 | > - quiz7-networking1 28 | > - quiz7-networking3 _(ignore `getaddrinfo()` stuff)_ 29 | > - quiz7-scheduling 30 | > - quiz8-raid 31 | > - final_misc _(ignore C-specific stuff)_ 32 | > **If you have time**, review the following `CS241` sections: 33 | > - quiz8-files3 34 | > - quiz8-files4 35 | 36 | ##### DevOps/Embedded Systems/C-based interview 37 | > Review **all** the `CS225`, `CS233`, and `CS241` questions. 38 | 39 | ### Part 2: practice questions 40 | 1. Look at [Glassdoor](https://glassdoor.com), [LeetCode](https://leetcode.com) or similar sites for interview questions asked by a _particular company_ for a _particular role_. 41 | 2. Try to find trends in the questions - types, difficulty, concepts, etc. 42 | 3. Attempt to answer representative-sample questions (from Glassdoor or other sites) before checking the (provided) answers 43 | 4. Check the provided answers, and compare them to mine 44 | 5. `GOTO 3` until I feel _comfortable_ and _confident_ with a particular company's question level 45 | 6. 
`GOTO 1` for every different company I'm interviewing with 46 | 47 | _You can also do step 4 [in groups](http://ideas.time.com/2011/11/30/the-protege-effect/)._ 48 | 49 | ### Part 3: a few days before the interview 50 | 1. Go over some common algorithms and implement them yourself: 51 | - Sorts (quicksort, mergesort, insertion sort) 52 | - Binary tree traversals (insertion, deletion, search) 53 | - Balanced binary tree (e.g. AVL tree) operations (insertion, deletion, search) 54 | 2. Test them against common edge cases 55 | 3. Tweak your implementations until all your test cases pass 56 | 4. Do any other last-minute prep (e.g. reading company engineering blogs) **now**. 57 | 58 | ### Part 4: the night before the interview 59 | 1. _Do not prep any further._ (Ideally, you're fully prepared by this point anyway.) 60 | 2. Do something _fun and relaxing_ to calm your nerves. 61 | 3. Get a good dinner and a good night's sleep. 62 | 63 | ### Part 5: the day of the interview 64 | 1. Have a solid breakfast. 65 | 2. Remember that [stupid rejections happen to everyone](https://rejected.us/). 66 | 3. Good luck! You'll do great. :) 67 | 68 | ## Misc. stuff 69 | ### Credits 70 | The questions are based on material from the following sources: 71 | 72 | #### [UIUC](https://illinois.edu) courses 73 | - CS173 (Discrete Mathematics) 74 | - CS225 (Data Structures) 75 | - CS233 (Computer Architecture) 76 | - CS241 (System Programming) 77 | 78 | #### Public Resources 79 | - [System Design primer](https://github.com/donnemartin/system-design-primer) by Donne Martin 80 | - [Azure Architecture Center](https://learn.microsoft.com/en-us/azure/architecture/) by Microsoft 81 | 82 | ### License 83 | This content is licensed under an [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). 84 | 85 | ### Errata 86 | This repo is rarely updated; however, I do keep an occasional eye out for pull requests - so feel free to send them. 
87 | 88 | ## Resources 89 | Here are some things that I've found helpful throughout my career: 90 | - [Steve Yegge's resume advice](https://steve-yegge.blogspot.com/2007/09/ten-tips-for-slightly-less-awful-resume.html) 91 | - [Jeff Erickson's algorithms book](https://algorithms.wtf) 92 | - [levels.fyi](https://levels.fyi) 93 | - [Interview mind map (a tiny bit Java-specific)](https://www.reddit.com/r/cscareerquestions/comments/6tc4uw/i_created_a_mind_map_of_nearly_all_the_concepts/) 94 | - [Anecdata: how to succeed in software engineering](https://www.reddit.com/r/cscareerquestions/comments/49iyhw/to_those_more_successful_than_most_how_do_you_get/) 95 | 96 | Here are some useful resources to review when preparing for an interview: 97 | - **Mid/Senior folks:** [common cache types](https://codeahoy.com/2017/08/11/caching-strategies-and-how-to-choose-the-right-one) -------------------------------------------------------------------------------- /CS173/answers.txt: -------------------------------------------------------------------------------- 1 | [Test1] 2 | 1A=rational numbers 3 | 2A=yes 4 | 3A=no 5 | 4A=open; no 6 | 5A=closed; yes 7 | 6A=a^(bc) 8 | 7A=a^(b+c) 9 | 8A=log(ab) 10 | 9A=log(a/b) 11 | 10A=b*log(a) 12 | 11A=-4 13 | 12A=1 14 | 13A=-3 15 | 14A=2 16 | 15A=p ^ !q 17 | 15B=p ^ !q 18 | 16A=!(a v b) = !a ^ !b, !(a ^ b) = !a v !b 19 | 17A=the set(s) of items being considered 20 | 18A=q --> p 21 | 19A=p <-> q 22 | 20A=iff. 
23 | 21A=composite 24 | 22A=even 25 | 23A=0 26 | 24A=every integer 27 | 25A=no; primes must >= 2 28 | 26A=any integer >= 2 is the product of a unique set of primes 29 | 27A=1 30 | 28A=infinite 31 | 29A=they have no common elements 32 | 30A=the set of all objects it DOES NOT contain 33 | 31A=yes 34 | 32A=proper subsets cannot be identical to their supersets 35 | 33A=xRy == yRx for all x,y 36 | 34A=xRy != yRx for all x,y (except when x == y) 37 | 35A=opposite of symmetric, when x == y 38 | 36A=when it is onto AND one-to-one 39 | 37A=when they are related in some order (either xRy or yRx) 40 | 38A=inputs according to type signature 41 | 39A=outputs according to type signature 42 | 40A=input that created the output 43 | 41A=set of possible (realistic) outputs of a function 44 | 42A=c 45 | 43A=underlined c 46 | 44A=a proper subset != its superset 47 | 45A=symmetric, antisymmetric 48 | 46A=RAT 49 | 47A=IRAT 50 | 48A=RST 51 | 49A=a comparable partial order 52 | 50A=no output has multiple inputs 53 | 51A=every output has an input 54 | 55 | [Test2] 56 | 1A=the same set 57 | 2A=permutation; n^r 58 | 3A=a finite sequence of edges/nodes that connects two nodes (the ones at its endpoints) 59 | 4A=when starting node = ending node; open 60 | 5A=closed walk with >= 3 nodes where no middle (non-start/end) node is used more than once 61 | 6A=a walk where nodes are used only once 62 | 7A=a graph where a walk exists between every pair of nodes 63 | 8A=dividing a graph into its biggest possible connected subgraphs 64 | 9A=when it is not part of a cycle 65 | 10A=a graph without cycles 66 | 11A=longest possible distance between any 2 nodes 67 | 12A=closed, walk, uses each edge exactly once 68 | 13A=n+1 69 | 14A=a graph that can be split into 2 subgraphs such that no nodes in a subgraph are connected 70 | 15A=false 71 | 16A=D+1 72 | 17A=fast, fairly accurate 73 | 18A=n(n+1)/2 74 | 18B=What is the formula for the sum of 2^k (for k from 1 to n)? 
18C=2^k - 1 76 | 19A=1 - (1/2)^n 77 | 20A=2(2^n - 1) 78 | 79 | [Test2_Round2] 80 | 1A=always 81 | 2A=when making proof simplifications that don't add anything new 82 | 3A=directed edges can only go one way 83 | 4A=a node connected to itself 84 | 5A=multiple edges connecting to the same set of nodes 85 | 6A=multi-edges, loops 86 | 7A=sum of degrees = 2*edge count 87 | 8A=complete graph with n nodes 88 | 9A=circular graph with n nodes 89 | 10A=wheel graph with n+1 nodes 90 | 11A=complete bipartite graph where 1 subgraph has n nodes and the other has m nodes 91 | 12A=fib(n) = fib(n-1) + fib(n-2) 92 | 13A=fib(0) = 0, fib(1) = 1 93 | 14A=n dimensional hypercube; 2^n 94 | 15A=when each node has either 0 or m children 95 | 16A=when all leaves have the same height 96 | 17A=V=E+1 97 | 18A=mi + 1 98 | 99 | [final_test1] 100 | 1A=a = bq + r, 0 <= r < b 101 | 2A=!q --> !p 102 | 3A=q --> p 103 | 4A=p <--> q 104 | 5A=there is no such thing as 'inverse' 105 | 6A=tuples care about order and keep duplicates 106 | 6B=tuples keep duplicates and care about order 107 | 7A={(a,1,@), (a,1,#), (a,2,@), (a,2,#), (b,1,@), (b,1,#), (b,2,@), (b,2,#)} 108 | 8A=complement of the set 109 | 9A=no; sets can't have repeating elements, and the union of 2 sets is a set 110 | 10A=empty 111 | 11A=A 112 | 12A=empty; A 113 | 13A=|A|*|B| 114 | 14A=Q 115 | 15A=C 116 | 16A=set of all elements related to X by some relation R 117 | 118 | [final_test2] 119 | 1A=they have the same remainder when divided by k 120 | 2A=it is both onto and one-to-one 121 | 3A=if f(x) = f(y), then x must = y 122 | 4A=g(f(y)) = y for all valid y 123 | 5A=n^r 124 | 6A=n!/(n-r)! 125 | 7A=n choose r 126 | 8A=n!/(r!*(n-r)!) 
127 | 9A=(n + r - 1) choose r 128 | 10A=twice 129 | 11A=the number of edges it is attached to 130 | 12A=different node/edge counts, different set of vertex degree values, subgraphs present in one graph but not the other 131 | 13A=using graph coloring 132 | 14A=the chromatic number of graph G 133 | 15A=F(n) = F(n-1) + F(n-2) 134 | 16A=two children of the same parent 135 | 17A=its leaves are at approximately the same height 136 | 18A=0 137 | 19A=0 138 | 20A=yes 139 | 21A=when it isn't part of a cycle 140 | 22A=the empty string 141 | 23A=the set of all finite-length strings with characters from A 142 | 24A=a string of 1s and 0s 143 | 144 | [final_new] 145 | 1A=exponential 146 | 2A=set of problems for which "yes" answers are justified in polynomial time. 147 | 3A=(definitely) exponential time, maybe polynomial time 148 | 4A=a collection containing all subsets of A including the empty set 149 | 5A=subsets 150 | 6A=powerset of A 151 | 7A=show that g(f(y)) = y for all valid y 152 | 8A=show that if g(x) = g(y), then x must = y 153 | 9A=sum from k=0 to n of (n choose k) * a^(n-k) * b^k 154 | 10A=(n+1) choose k = n choose k + n choose (k-1) 155 | 11A=a bijection exists between them 156 | 12A=|A| <= |B| 157 | 13A=|\bb{P}(A)| > |A| for all A 158 | 14A=yes 159 | 15A=because computer programs are finite (in length) and some functions are infinite 160 | -------------------------------------------------------------------------------- /Cloud/answers.ini: -------------------------------------------------------------------------------- 1 | [cloud_networking] 2 | 1A=prioritize throughput while maintaining acceptable latency 3 | 1B=throughput > latency 4 | 2A=yes 5 | 3A=conventional proxies = clients; reverse proxies = servers 6 | 3B=conventional proxy = clients; reverse proxy = servers 7 | 3C=conventional = clients; reverse = servers 8 | 4A=load balancer = routing; reverse proxy = routing + modification 9 | 4B=reverse proxies can modify requests; load balancers can't 10 | 5A=active-active, 
active-passive 11 | 6A=keep some machines in reserve that are only used if errors occur 12 | 7A=traffic splitting 13 | 8A=yes 14 | 9A=push, pull 15 | 9B=pull, push 16 | 10A=CDN caches point-in-time "snapshots" of content via timestamped/versioned URLs 17 | 11A=CDN "pulls" new versions of content down periodically 18 | 12A=random, round robin, least busy 19 | 13A=software load balancing 20 | 14A=load balancer decrypts incoming requests; decryption can be expensive 21 | 15A=yes 22 | 16A=packet metadata (IP addresses, ports, etc) 23 | 17A=packet's application-level data 24 | 17B=packet contents 25 | 18A=no,yes 26 | 19A=speed;flexibility 27 | 19B=faster;more flexible 28 | 20A=map external (global) IP addresses to internal (local) ones 29 | 21A=one external IP address maps to multiple internal ones 30 | 21B=one global IP address maps to multiple local ones 31 | 22A=map incoming packets' sources to active connections' destinations 32 | 33 | [cloud_databases] 34 | 1A=return errors if your queue is overloaded 35 | 2A=databases can only have two of {consistency, availability, partition tolerance} 36 | 3A=yes; network issues may cause partitions (between writes) 37 | 4A=weakly consistent operations may never complete, while eventually consistent ones will 38 | 4B=weakly consistent = may not complete 39 | 5A=atomicity,consistency,isolation,durability 40 | 6A=leader handles ALL writes; followers ONLY handle reads 41 | 7A=multiple leaders coordinate with each other on writes 42 | 8A=yes 43 | 9A=each model/object has its own DB instance 44 | 10A=yes; may slow down DB operations 45 | 11A=distributing a single model/object across multiple DBs 46 | 12A=yes; one shard may contain too many (or too few) records 47 | 12B=yes; one shard may contain too many records 48 | 13A=basically available, soft state (eventual consistency) 49 | 14A=hash tables 50 | 15A=redis, memcached 51 | 16A=document databases are built on top of KV stores 52 | 17A=rows are tuples of TABLES, not (primitive) 
values 53 | 18A=wide-column database 54 | 18B=wide-column datastore 55 | 18C=wide-column 56 | 19A=graph database 57 | 19B=graph 58 | 20A=DB queries with results, individual objects 59 | 21A=application-level 60 | 22A=cache-aside, write-through, write-back, refresh-ahead 61 | 23A=automatically update any cache entries based on a TTL 62 | 24A=atomic, consistent, isolated, durable 63 | 25A=properties that a database transaction should have 64 | 26A=databases can have two of {consistency, availability, partition-tolerance} 65 | 26B=DBMS' can have two of {consistency, availability, partition-tolerance} 66 | 67 | [clean_architecture] 68 | 1A=no 69 | 2A=no 70 | 3A=individual actions within overall business logic 71 | 4A=no;no 72 | 5A=only business-logic-related ones 73 | 6A=convert data to/from format used by entities 74 | 7A=external interfaces 75 | 8A=no 76 | 77 | [osi_model] 78 | 1A=physical; cables 79 | 2A=data-link; frames 80 | 3A=network; packets 81 | 4A=transport; protocols 82 | 5A=session; open persistent connections between ports 83 | 6A=presentation; file encodings/formats 84 | 6B=presentation; file formats 85 | 7A=application; application data 86 | 8A=frames wrap packets with an (additional) MAC address 87 | 8B=frames add MAC addresses to packets 88 | 9A=Network layer (#3) 89 | 9B=Network 90 | 10A=Please do not throw sausage pizza away 91 | 10B=All people seem to need data processing 92 | 11A=REST uses URL {parameters, hierarchy}, RPC uses specific method names + HTTP method parameters 93 | 12A=/users/list 94 | 13A=/listUsernames 95 | 96 | [leetcode] 97 | 1A=XOR all the numbers together 98 | 1B=XOR them 99 | 2A=dynamic programming computes the entire state space; memoization only handles repeats 100 | 3A=bottom-up 101 | 4A=top-down 102 | 103 | [architecture_patterns] 104 | 1A=helper services send requests on behalf of someone else; common settings can be abstracted away 105 | 2A=proxy requests between a legacy and a modern system 106 | 3A=create separate 
backends for each frontend (to prefer customizability over reuse) 107 | 4A=load-balance requests between a legacy and a modern system, and gradually move requests over to the modern system 108 | 5A=command-query responsibility segregation; use separate data stores for reads and writes 109 | 6A=allow others to rewrite data, but reject outstanding transactions if they do 110 | 7A=a transaction-like set of actions that uses retry logic instead of locking 111 | 7B=a set of actions that are executed (or rolled back) as a unit, without any kind of locking in between 112 | 8A=Standard transactions are ACID compliant, sagas are not 113 | 9A=compensable, pivot, retryable 114 | 10A=a transaction that can be reversed 115 | 11A=if the pivot transaction is completed, the remaining transactions in the saga must also be completed 116 | 12A=a transaction that follows the pivot transaction **AND** is guaranteed to succeed 117 | 118 | [domain_driven_design] 119 | 1A=Designing software so that it matches the business' needs 120 | 2A=A group of Entities that are both necessary and sufficient for a particular set of business logic 121 | 3A=Domain events are specific to the business domain, not generic. (e.g. "pizza delivered" vs. "DB updated") 122 | 4A=domain services, application services 123 | 4B=domain, application 124 | 5A=a service that encapsulates domain-specific logic 125 | 6A=a service that isn't specific to the domain (e.g. an SMS notification service) -------------------------------------------------------------------------------- /Cloud/questions.ini: -------------------------------------------------------------------------------- 1 | [cloud_networking] 2 | 1=When you must trade off LATENCY against THROUGHPUT, how should you decide between the two? 3 | 2=Is a single load balancer itself a single point of failure? 4 | 3=What is the difference between a "conventional" proxy and a reverse proxy? 5 | 4=What is the difference between a reverse proxy and a load balancer? 
6 | 5=List (no need to describe) the two types of failover. 7 | 6=What is active-passive failover? 8 | 7=What is another name for active-active failover? 9 | 8=Can CDNs support dynamic content? 10 | 9=What are the two types of CDNs? 11 | 10=How does a push-based CDN work? 12 | 11=How does a pull-based CDN work? 13 | 12=What are some common automatic load balancing algorithms? 14 | 13=What is HAProxy used as? 15 | 14=What is SSL termination? Why is it useful? 16 | 15=Can load balancers handle session persistence/affinity? 17 | 16=What packet components does a Layer 4 load balancer use? 18 | 17=What packet components does a Layer 7 load balancer use? 19 | 18=Must a Layer 4 load balancer wait for ALL of a message's packets to arrive? What about a Layer 7 one? 20 | 19=What are the advantages of Layer 4 load balancers? Of Layer 7 ones? 21 | 20=What does Network Address Translation (NAT) do? 22 | 21=What does a "one-to-many" NAT involve? 23 | 22=How does a one-to-many NAT system determine which local machine should receive an incoming packet? 24 | 25 | [cloud_databases] 26 | 1=In a message queuing context, what does backpressure mean? 27 | 2=What is the CAP theorem? 28 | 3=Is partition tolerance required for networked databases? Why (not)? 29 | 4=What's the difference between WEAK and EVENTUAL consistency? 30 | 5=What does ACID stand for? 31 | 6=For RDBMSes, what is leader-follower replication? 32 | 7=For RDBMSes, what is leader-leader replication? 33 | 8=Can race conditions occur in replication-based RDBMS systems? 34 | 9=For RDBMSes, what is federation? 35 | 10=Can federated RDBMSes have cross-model relationships? If so, what are the downsides? 36 | 11=For RDBMSes, what is sharding? 37 | 12=Do sharded databases need to be occasionally rebalanced? Why (not)? 38 | 13=For NoSQL DBMSes, what does BASE ("base") refer to? 39 | 14=What data structure are key-value databases analogous to? 40 | 15=Name two common key-value store platforms. 
41 | 16=How are key-value stores and document databases different? 42 | 17=How are wide-column databases different from standard RDBMSes? 43 | 18=What kind of database is Apache HBase? 44 | 19=What kind of database is Neo4J? 45 | 20=Name two components that might be cached at a [DB-based] application level (to improve latency). 46 | 21=In the "cache aside" caching strategy, where is caching handled? 47 | 22=What are the four common types of caching strategies? 48 | 23=Describe the "refresh ahead" caching strategy. 49 | 24=In databases, what does ACID stand for? 50 | 25=In databases, what does ACID refer to (NOT stand for)? 51 | 26=In databases, what is the CAP theorem? 52 | 53 | [clean_architecture] 54 | 1=In the clean architecture model, can entities be integrated into frameworks? 55 | 2=In the clean architecture model, can entities have annotations? 56 | 3=In the clean architecture model, what are use cases? 57 | 4=Should use cases know what invoked them? What about the desired result format? 58 | 5=What kinds of exceptions should use cases throw? 59 | 6=In the clean architecture model, what is the purpose of adapters (AKA internal interfaces)? 60 | 7=In the clean architecture model, what do we use to communicate with external users (such as web frontends)? 61 | 8=When processing inputs that are largely similar (e.g. JSON HTTP requests vs. JSON files), should we combine these into a single adapter? 62 | 63 | [osi_model] 64 | 1=What is Layer 1 of the OSI model? What does it primarily consist of? 65 | 2=What is Layer 2 of the OSI model? What does it primarily transmit? 66 | 3=What is Layer 3 of the OSI model? What does it primarily transmit? 67 | 4=What is Layer 4 of the OSI model? What does it primarily consist of? 68 | 5=What is Layer 5 of the OSI model? What is its primary function? 69 | 6=What is Layer 6 of the OSI model? What does it primarily transmit? 70 | 7=What is Layer 7 of the OSI model? What does it primarily transmit? 
71 | 8=In the OSI model, what is the difference between frames and packets? 72 | 9=Which OSI layer deals with IP addresses? 73 | 10=What is a helpful mnemonic for the OSI model? 74 | 11=What is the difference between REST and RPC? 75 | 12=What might a URL to list users look like in a REST API? 76 | 13=What might a URL to list users look like in an RPC API? 77 | 78 | [leetcode] 79 | 1=How can you tell if a single number occurs an odd number of times in a stream? 80 | 2=What's the difference between dynamic programming and memoization? 81 | 3=Which best describes dynamic programming algorithms: bottom-up, or top-down? 82 | 4=Which best describes memoization-based algorithms: bottom-up, or top-down? 83 | 84 | [architecture_patterns] 85 | 1=What is the "ambassador" pattern? Why is it useful? 86 | 2=What is the purpose of an "anti-corruption layer"? 87 | 3=What is the "backends-for-frontends" pattern? 88 | 4=What is the "strangler-fig" pattern? (Hint: migration) 89 | 5=What does CQRS stand for? What does this mean? (Hint: separation) 90 | 6=How does "optimistic" locking work? (Hint: CS241 locks/mutexes are "pessimistic") 91 | 7=Within microservice-based architectures, what does a "saga" refer to? 92 | 8=What is the difference between sagas and standard transactions? 93 | 9=What are the three types of transactions within a saga, in order? 94 | 10=In the saga pattern, what is a "compensable" transaction? 95 | 11=In the saga pattern, what makes a transaction a "pivot" transaction? 96 | 12=In the saga pattern, what is a "retryable" transaction? 
104 | 6=In Domain-Driven Design (DDD), what is an Application Service? 105 | -------------------------------------------------------------------------------- /CS173/questions.txt: -------------------------------------------------------------------------------- 1 | [Test1] 2 | 1=When denoting number sets, what does Q mean? 3 | 2=Is 0 in N? 4 | 3=Is 0 in Z+? 5 | 4=Is (a,b) a closed or open interval? Does it include its endpoints? 6 | 5=Is [a,b] a closed or open interval? Does it include its endpoints? 7 | 6=What is the simplification of (a^b)^c? 8 | 7=What is the simplification of a^b * a^c? 9 | 8=What is the simplification of log(a) + log(b)? 10 | 9=What is the simplification of log(a) - log(b)? 11 | 10=What is the simplification of log(a^b)? 12 | 11=What is floor(-3.8)? 13 | 12=What is floor(1.2) 14 | 13=What is ceiling(-3.8)? 15 | 14=What is ceiling(1.2)? 16 | 15=For what case(s) is "p --> q" false? 17 | 16=What is DeMorgan's law? 18 | 17=What do quantifiers specify? 19 | 18=What is the converse of p --> q? 20 | 19=What is the biconditional of p --> q? 21 | 20=What shorthand is commonly used for the biconditional? 22 | 21=What is the opposite of prime (in number theory, not in English)? 23 | 22=Is 0 even or odd? 24 | 23=What value(s) of x is/are valid if x|0? 25 | 24=What value(s) of x is/are valid if 0|x? 26 | 25=Are 0 and 1 prime? Why (not)? 27 | 26=What is the proposition of the Fundamental Thm. of Arithmetic? 28 | 27=According to the Fundamental Thm. of Arithmetic, how many prime factorizations can exist for a given positive integer? (disregarding order of the factors) 29 | 28=What is the size of the set of all prime numbers? 30 | 29=What does it mean if two sets are disjoint? 31 | 30=What is the complement of a set? 32 | 31=Does order matter in the Cartesian product? 33 | 32=What is the difference between subsets and proper subsets? 34 | 33=What is a symmetric relationship? 35 | 34=What is an antisymmetric relationship? 
36 | 35=What is an antisymmetric relationship usually? When is this not the case? 37 | 36=What makes a function bijective? 38 | 37=When are two things comparable? 39 | 38=What is the domain of a function? 40 | 39=What is the co-domain of a function? 41 | 40=What is the preimage of a particular output value of a function? 42 | 41=What is the image of a function? 43 | 42=What is the notation for proper subset? 44 | 43=What is the notation for subset? 45 | 44=What is special about proper subsets (relative to subsets)? 46 | 45=Sets are, by default, vacuously ________ AND _______. 47 | 46=What is the acronym for a partial order? 48 | 47=What is the acronym for a strict partial order? 49 | 48=What is the acronym for an equivalence? 50 | 49=What is a linear order (in terms of other orders)? 51 | 50=When is a function one-to-one? 52 | 51=When is a function onto? 53 | 54 | [Test2] 55 | 1=What does the identity function output when applied on a set? 56 | 2=If you make r choices from n options (repetition allowed, order matters), what is this called? What is the formula for the total number of outcomes? 57 | 3=Define walk. 58 | 4=When is a walk closed? What is it otherwise? 59 | 5=Define cycle. 60 | 6=Define path. 61 | 7=Define connected graph. 62 | 8=How are connected components formed? 63 | 9=When does a cut edge occur? 64 | 10=Define acyclic graph. 65 | 11=Define diameter of a graph. 66 | 12=What are the three conditions of an Euler circuit? 67 | 13=How many vertices does Wn contain? 68 | 14=Define bipartite graph. 69 | 15=True/False: A valid vertex coloring MUST use only the minimum number of colors 70 | 16=If D is the maximum DEGREE of any vertex in a graph, the graph must be colorable in ______ colors. 71 | 17=What are the two main advantages of greedy graph coloring algorithms? 72 | 18=What is the formula for the sum of the first n integers? 73 | 19=What is the formula for the sum of (1/2)^k (for k from 1 to n)? 
74 | 20=What is the formula for the sum of 2^k (for k from 1 to n)? 75 | 21=What is the distance between 2 nodes of a graph? 76 | 77 | [Test2_Round2] 78 | 1=A strictly increasing/decreasing function is (never/sometimes/always) one-to-one. 79 | 2=When is "without loss of generality" used? 80 | 3=What is the main difference between a directed and an undirected edge? 81 | 4=Define loop. 82 | 5=Define multi-edge. 83 | 6=What two things must a graph not have in order to be "simple?" 84 | 7=What is the Handshaking Theorem? 85 | 8=Define Kn. 86 | 9=Define Cn. 87 | 10=Define Wn. 88 | 11=Define Kn,m 89 | 12=What is the recursive definition for the Fibonacci sequence? (use fib(n) for the nth fibonacci number) 90 | 13=What are the 2 base cases for the Fibonacci sequence? 91 | 14=What is Qn, and how many nodes does it have? 92 | 15=When is an m-ary tree full? 93 | 16=When is an m-ary tree complete? 94 | 17=Express a tree's vertex count V in terms of its edge count E. 95 | 18=If a full m-ary tree has i internal nodes, it has ______ total nodes. 96 | 97 | [final_test1] 98 | 1=State the "Division Algorithm" theorem. 99 | 2=What is the contrapositive of "p --> q"? 100 | 3=What is the converse of "p --> q"? 101 | 4=What is the biconditional of "p --> q"? 102 | 5=What is the inverse of "p --> q"? 103 | 6=What 2 qualities make a tuple different from a set? 104 | 7=What is the Cartesian product of {a,b}, {1,2}, and {@,#}? 105 | 8=If A is a set, what does A with a bar over it (an "overlined" A) mean? 106 | 9=Can the union of 2 sets have repeating elements? Why/why not? 107 | 10=What does (A ^ empty) produce? 108 | 11=What does (A v empty) produce? 109 | 12=What do (empty - A) and (A - empty) produce? 110 | 13=If set A has cardinality |A| and set B has cardinality |B|, what is the cardinality of their Cartesian product? 111 | 14=Which letter represents the rational numbers? 112 | 15=Which letter represents the complex numbers? 113 | 16=What is the equivalence class of X (a.k.a [X])? 
114 | 115 | [final_test2] 116 | 1=Two numbers are congruent mod k if _____? 117 | 2=What does it mean if a function is bijective? 118 | 3=To prove that a function is one-to-one, what do we need to prove? 119 | 4=To prove that a function is onto, what do we need to prove? 120 | 5=What is the formula for counting PERMUTATIONS WITH repetition? 121 | 6=What is the formula for counting PERMUTATIONS WITHOUT repetition? 122 | 7=What is the shorthand for counting COMBINATIONS WITHOUT repetition? 123 | 8=What is the formula for "n choose r?" 124 | 9=What is the choose format for counting COMBINATIONS WITH repetition? 125 | 10=When counting the degree of a node, do self-loops count once or twice? 126 | 11=The degree of a node is _____? 127 | 12=When proving a graph is not isomorphic, what things should you look for? 128 | 13=How do compilers allocate registers? 129 | 14=What does χ(G) represent? 130 | 15=What is the recursive definition of the Fibonacci numbers? 131 | 16=In the context of trees, what are siblings? 132 | 17=What does it mean if a tree is "balanced?" 133 | 18=What is the level of the root of a tree? 134 | 19=What is the height of a tree with only one node? 135 | 20=Can a node be both a root and a leaf? 136 | 21=When is an edge a cut edge? 137 | 22=What does ε represent? 138 | 23=What does A* represent (where A is a set of characters)? 139 | 24=What is a bit string? 
153 | 12=If there is a one-to-one function from set A to set B, what do we know about sets A and B? 154 | 13=What is the relationship between |\bb{P}(A)| and |A|? 155 | 14=Do infinite functions exist? 156 | 15=Why can't some functions be computed by computer programs? 157 | -------------------------------------------------------------------------------- /CS225/answers.txt: -------------------------------------------------------------------------------- 1 | [trees_intro] 2 | 1A=dequeue node, yell it, enqueue its children 3 | 2A=LIFO; FIFO 4 | 3A=push to one stack and use the other to reverse the order of the first's elements when popping 5 | 4A=it has a root node 6 | 5A=its edges only go one way 7 | 6A=the order in which children of a vertex are listed matters 8 | 7A=the (length of the) longest path from a leaf to the node 9 | 8A=-1; 0 10 | 9A=O(h) // O(log n) is WRONG, since the tree isn't necessarily balanced 11 | 10A=height(leftSubtree) - height(rightSubtree) 12 | 11A=if |height balance| for every node in the tree is <= 1 13 | 12A=O(h) 14 | 13A=search for the inserted node in the tree and change that spot to the inserted node 15 | 14A=An n-ary tree whose non-leaf nodes have n children 16 | 15A=a tree that is perfect except for its lowest level, which can be missing nodes on its right (but can't have gaps between nodes) 17 | 16A=a full tree whose leaves are all at the same level 18 | 17A=a traversal of nodes ordered by their level in a tree 19 | 18A=2^(h+1) - 1 20 | 19A=all of them 21 | 20A=its leaves could be on different levels 22 | 21A=an ADT where values are accessible by keys 23 | 22A=O(n) 24 | 23A=3 25 | 24A=preorder, inorder, postorder, level-order 26 | 27 | [binarytrees] 28 | 1A=a tree whose nodes have at most two children 29 | 2A=n+1 30 | 3A=a binary tree where the inorder traversal of the tree produces a sorted output 31 | 4A=delete the node and set any pointers to it to its child's value (which is NULL for no-child removes) 32 | 5A=set target node's 
value to that of its IOP/IOS node, then remove said IOP/IOS node 33 | 6A=we have to shift elements to the right // WRONG answer: we need to expand the array 34 | 35 | [btrees] 36 | 1A=2 <= keys <= m - 1 37 | 2A=m/2 <= children <= m 38 | 3A=0 OR 2 <= children <= m 39 | 4A=m 40 | 5A=when reading data from disk; because disk reads are very slow 41 | 6A=O(log base 2 of n) 42 | 7A=O(m * log base m of n) 43 | 8A=they attempt to minimize the height of a tree, which makes searches faster 44 | 45 | [avltrees] 46 | 1A=operations that rebalance an AVL tree when it becomes height unbalanced; O(1) time 47 | 2A=sticks; boomerangs 48 | 3A=the lowest height-unbalanced node (on the path to the root) always triggers the rotation 49 | 4A=the lighter side 50 | 5A=N's heaviest child; N 51 | 6A=we follow the heaviest child 52 | 7A=insert at proper place; fix imbalances; update heights 53 | 54 | [hashing] 55 | 1A=hash function, compression function 56 | 2A=converts keys to integers 57 | 3A=converts integers to array indices 58 | 4A=constant time (or O(1)) 59 | 5A=if x == y, then hash(x) must == hash(y) for all x,y 60 | 6A=hash functions must be deterministic, hash functions must evenly distribute data throughout a hash table (to minimize collisions) 61 | 7A=when we know our (small) keyspace and our distribution 62 | 8A=no collisions, can accept any key from its desired keyspace 63 | 9A=hash keyspace size / (storage) slots available to hash function 64 | 10A=1; log n 65 | 11A=2 66 | 12A=flag bucket slots as never, currently, or previously used 67 | 13A=having chunks of used/unused bucket slots; double hashing 68 | 14A=determining step size from key values 69 | 15A=constant load factor 70 | 16A=0.7; resize array to a (usually cached) prime number > 2*current size and "rehash" the data into its appropriate buckets 71 | 17A=when the size of the underlying array is changed, the proper positions of its elements may too and need to be recalculated 72 | 18A=probe-based - faster overall, separate 
chaining - better with large keys 73 | 19A=hash tables 74 | 20A=hashes to a slot and continues through array at a certain step size until an empty slot is found 75 | 76 | [priority queue + heaps] 77 | 1A=comparable items (items must have an < operator) 78 | 2A=O(1) insert, O(n) remove 79 | 3A=O(1) insert, O(n) remove 80 | 4A=O(n) insert, O(1) remove 81 | 5A=complete binary tree whose nodes (in/de)crease as level increases 82 | 6A=yes 83 | 7A=makes our math nicer 84 | 8A=2n; 2n + 1; floor(n/2) 85 | 9A=O(lg n) 86 | 10A=insert a value as a child of a leaf, then heapify it up 87 | 11A=shift a value up/down in the tree until the heap is once again valid 88 | 12A=O(1) insert, O(n) remove 89 | 13A=set root value to rightmost leaf's value, heapifyDown new root, return original root value 90 | 14A=sort data, repeated insertion, heapifyDown the first 1/2 of underlying array 91 | 15A=heapifyDown the first half of the underlying array; O(n) 92 | 16A=continually remove and store min from a heap of (to-be-sorted) data 93 | 17A=O(n lg n) time, O(1) space 94 | 95 | [disjoint sets] 96 | 1A=1 97 | 2A=one of their members (the representative value) 98 | 3A=must be the same for all elements in the set 99 | 4A=returns the representative of n 100 | 5A=no; we only care about whether two elements are in the same set 101 | 6A=combines two sets given their representatives a and b 102 | 7A=some implementations (i.e. 
uptrees) require union() to use representative elements 103 | 8A=makeSet, union, find 104 | 9A=makes a set containing only elem 105 | 10A=O(1) find, O(n) union 106 | 11A=O(n) find, O(1) union 107 | 12A=O(lg* n) ~ O(1) find, O(1) union 108 | 13A=roots have negative values, which indicate that they are roots; smart unions 109 | 14A=heights can be 0 which is also a possible key; no 110 | 15A=their representative value 111 | 16A=follows representative node indexes until it hits < 0 112 | 17A=O(h); O(n) 113 | 18A=O(1) 114 | 19A=minimize overall height increase across all nodes 115 | 20A=union by {height, size, rank} 116 | 21A=rank(node) = height(node) + 1 117 | 22A=h = O(lg n) 118 | 23A=any node hit in find() is pointed directly at its representative 119 | 24A=finds on compressed nodes are O(1); it doesn't change time complexity of find() 120 | 121 | [graphs] 122 | 1A=sets of vertices and edges 123 | 2A=each edge only goes one way, edges' start/end nodes matter 124 | 3A=minimize conflict given a conflict graph; NP 125 | 4A=number of edges touching N 126 | 5A=a sequence of vertices in a graph connected by edges between them 127 | 6A=number of edges (*not* nodes) in a path 128 | 7A=graph with no self-loops or multi-edges (pairs of nodes with multiple edges between them) 129 | 8A= a connected subgraph containing as many nodes as it can (based on its parent graph) 130 | 9A=acyclic connected subgraph containing all vertices of its parent graph 131 | 10A=isConnected ? n-1 : 0 132 | 11A=isSimple ? 
n(n-1)/2 : infinity 133 | 12A=vertices, edges, optional data on vertices and/or edges 134 | 13A=edge list, adjacency list, adjacency matrix 135 | 14A=small integers 136 | 15A=stores the edges and the vertices in separate arrays with no relation between indices used for each object 137 | 16A=as a 2-d array/dictionary of values indicating the status of an edge between two nodes 138 | 17A=4 times bigger 139 | 18A=O(1) insertion, O(m) removal 140 | 19A=minimize the cost from a given point A to every other vertex B 141 | 20A=O(n) insertion/removal 142 | 21A=stores the graph as a dictionary of nodes and pointers to their edges and an edge list 143 | 22A=O(1) insertion, O(deg(V)) removal 144 | 23A=rows = origin, columns = destination 145 | 24A=traversals cannot travel across nonexistent edges; we usually want data about graph's structure 146 | 25A=get/set label ops are O(1) 147 | 26A=queue; stack 148 | 27A=O(n+m) 149 | 28A=O(n^2) 150 | 29A=discovery, cross 151 | 30A=an edge leading to an unvisited vertex 152 | 31A=an edge leading to an already-visited vertex 153 | 32A=they form a spanning tree 154 | 33A=m is O(n) 155 | 34A=a minimum total weight spanning tree 156 | 35A=O(m) 157 | 36A=it doesn't matter! 158 | 37A=iteratively mark (as MST) the shortest n-1 edges that don't create cycles 159 | 38A=O(n + m lg n) for the two relevant PQ variants 160 | 39A=mark the lowest-weight unmarked edges between a visited and unvisited node until no longer possible 161 | 40A=when it has negative cycles 162 | 41A=use minimum path length from start node instead of minimum edge length 163 | 42A=no 164 | 43A=when we have explored all its edges 165 | 44A=find the shortest path between 2 nodes in a graph 166 | 167 | [coding] 168 | 1A=Check YES if you've done this. 169 | 2A=Check YES if you've done this. 170 | 3A=Check YES if you've done this. 171 | 4A=Check YES if you've done this. 172 | 5A=Check YES if you've done this. 173 | 6A=Check YES if you've done this. 
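The [coding] section above is a checklist of algorithms to re-implement; as one example, a minimal Python binary search sketch (a generic illustration, not tied to the course's C++ code):

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted list.

    Returns the index of target, or -1 if absent. O(log n) time.
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # target is in the right half
        else:
            hi = mid - 1   # target is in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1
```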
174 | -------------------------------------------------------------------------------- /CS225/questions.txt: -------------------------------------------------------------------------------- 1 | [trees_intro] 2 | 1=What is the standard CS225 implementation of a level-order traversal? 3 | 2=What is the acronym for a stack? (*IFO)? A queue? 4 | 3=How can a queue be made out of stacks? 5 | 4=What does it mean if a graph is rooted? 6 | 5=What does it mean if a graph is directed? 7 | 6=What does it mean if a graph is ordered? 8 | 7=What is the height of a node? 9 | 8=(Trivia) What is the height of an empty tree? What about a tree with one node? 10 | 9=(Gotcha) What is the at-best worst-case search of a tree? 11 | 10=(Trivia) What is the height balance of a node? 12 | 11=When is a tree height balanced? 13 | 12=What is the time complexity of inserting a node into a tree? (for AVL trees, this doesn't include balancing) 14 | 13=What is the basic algorithm used to insert nodes (for trees in general)? 15 | 14=What is a full tree? 16 | 15=What is a complete tree? 17 | 16=What is a perfect tree? 18 | 17=What is a level-order traversal? 19 | 18=How many nodes are there in a perfect binary tree? 20 | 19=What properties (full, complete, perfect) apply to an empty tree? 21 | 20=Why is a full tree not necessarily a complete tree? 22 | 21=What is a dictionary? 23 | 22=How long should tree traversals take? 24 | 23=In a traversal, how many times is each node visited? 25 | 24=What are the four types of traversals? 26 | 27 | [binarytrees] 28 | 1=What is a binary tree? 29 | 2=If a binary tree has n data items, it has how many null pointers? 30 | 3=What is a binary search tree? 31 | 4=In a BST, how is a no-child/one-child removal done? 32 | 5=In a BST, how is a two-child removal done? 33 | 6=Why is inserting into an array O(n)? 34 | 35 | [btrees] 36 | 1=What are the min and max number of keys in a non-internal B-tree node? 
37 | 2=What are the min and max number of children in a non-internal and non-root B-tree node? 38 | 3=What are the min and max number of children in a B-tree's ROOT node? 39 | 4=In a B-tree, what variable represents the 'order' of the tree? 40 | 5=When are B-trees used? Why? 41 | 6=What is the total time to search a B-Tree using BINARY search at its nodes? 42 | 7=What is the total time to search a B-Tree using LINEAR search at its nodes? 43 | 8=Why are balanced trees useful? 44 | 45 | [avltrees] 46 | 1=What are rotations (with respect to AVL trees)? How long do they take? 47 | 2=What shapes do we use single rotations for? Double rotations? 48 | 3=How do we decide which node to trigger a rotation on? 49 | 4=Which side of a height unbalanced node do we always rotate toward? 50 | 5=When doing a double-rotation about a node N, which node is the first one rotated around? Which one is the second? 51 | 6=How do we determine if a path is a stick or a mountain for nodes that have TWO children? 52 | 7=What are the steps (in order) of inserting into an AVL tree 53 | 54 | [hashing] 55 | 1=What are the two basic parts of a hashing algorithm? 56 | 2=What does a hash function do (with respect to a hashing algorithm)? 57 | 3=What does a compression function do (with respect to a hashing algorithm)? 58 | 4=What is the time complexity of find/insert/delete when using a hash table assuming that SUHA is true (i.e. the hash function is constant time and the keys are uniformly distributed)? 59 | 5=What does it mean to say that hash functions must be deterministic? 60 | 6=What are the two parts of SUHA (the Simple Uniform Hashing Assumption)? 61 | 7=When is defining a good hash function easy? 62 | 8=What two criteria define a perfect hash function? 63 | 9=What is the load factor of a hash function? 64 | 10=What is the maximum number of each rotation that an AVL insert can use? What about an AVL remove? 65 | 11=When doing a rotation, how many heights must be updated? 
66 | 12=When using linear probing, what is one way to prevent unnecessary probing? 67 | 13=What is primary clustering, and how is it solved (just give the vocabulary word)? 68 | 14=What is double hashing? 69 | 15=What condition makes hash tables TRULY constant time? 70 | 16=What value do we use as a target load factor? What happens if it is exceeded? 71 | 17=What is rehashing? 72 | 18=What are the advantages of each of the two collision resolution strategies over the other? 73 | 19=What technique do dictionaries usually use? 74 | 20=How does linear probing resolve collisions? 75 | 76 | [priority queue + heaps] 77 | 1=What items can a priority queue hold? (Hint: not spaceships!) 78 | 2=When using an unsorted array, how fast are the two main priority queue operations (insert/remove)? 79 | 3=When using an unsorted linked list, how fast are the two main priority queue operations (insert/remove)? 80 | 4=When using a sorted array/linked list, how fast are the two main priority queue operations (insert/remove)? 81 | 5=What is the definition of a heap? 82 | 6=Is an empty tree a heap? 83 | 7=In the array-based implementation of a heap, why do we usually start the heap at 1 and not 0? 84 | 8=For an array-based heap starting at 1, what are the leftChild, rightChild, and parent indices given an index n (in that order)? 85 | 9=What is the height of a heap in relation to the amount of data it contains? 86 | 10=How do we insert something into a heap? 87 | 11=What is the process of heapifying? 88 | 12=When using an unsorted linked list, how fast are the two main priority queue operations (insert/remove)? 89 | 13=What is the algorithm for removeMin() in a heap? 90 | 14=What are the three methods of building heaps? 91 | 15=Out of the three heap building methods, which one is the fastest? How fast is it? 92 | 16=How does heapsort work? 93 | 17=What are heapsort's performance characteristics? 
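Q16 above asks how heapsort works ("continually remove and store min from a heap"); a short Python sketch using the standard-library heapq, as an illustration only:

```python
import heapq

def heapsort(data):
    """Heapsort: build a min-heap, then repeatedly remove the min."""
    heap = list(data)
    heapq.heapify(heap)  # buildHeap via heapifyDown: O(n)
    # n removals at O(lg n) each -> O(n lg n) total
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heapsort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```

Note this version copies the input, so it uses O(n) extra space; the in-place array variant from lecture achieves the O(1) space mentioned in 17A.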
94 | 95 | [disjoint sets] 96 | 1=In a (collection of) disjoint set(s), how many sets must a member be in? 97 | 2=In a disjoint set, what are sets represented by? 98 | 3=What requirement does the representative value have to adhere to? 99 | 4=What does find(n) do? 100 | 5=Does the representative of a disjoint set have to be any particular value? Why (not)? 101 | 6=What does union(a,b) do? 102 | 7=Why do we have to provide union() with representative values (as opposed to any old value from the set)? 103 | 8=What are the three operations that the disjoint set ADT supports? 104 | 9=What does makeSet(elem) do? 105 | 10=What are the performance characteristics of a disjoint set ADT using a direct mapping of keys/indices to representative values (the 'naive implementation')? 106 | 11=What are the performance characteristics of a disjoint set ADT using an uptree and NO other optimizations? 107 | 12=What are the performance characteristics of a disjoint set ADT using an OPTIMIZED (smart unions + path compression) uptree? 108 | 13=How can we store data at the roots/representative values of uptrees in an array-based implementation? What class of techniques require this? 109 | 14=For union by height, why can't we just store "-height" in the array for each root? Does this issue exist in either of the other smart union strategies? 110 | 15=In an uptree, what do non-root indices store? 111 | 16=What is the algorithm for find() in an uptree? 112 | 17=What is the time complexity of find() in an uptree? in a worst-case uptree? 113 | 18=What is the time complexity of union() in an uptree if it is provided with already representative values? 114 | 19=In an uptree, what do smart unions attempt to do? 115 | 20=What are the three types of smart unions? 116 | 21=(Reminder) What is the rank of a node? 117 | 22=If an uptree uses smart unions, what do we know must be true about the uptree? 118 | 23=What is path compression? 119 | 24=Why is path compression useful? 
(Two reasons) 120 | 121 | [graphs] 122 | 1=What are graphs (the simplest definition)? 123 | 2=What is different about a directed graph vs. a non-directed graph? 124 | 3=What is the vertex cover problem, and what complexity class is it in? 125 | 4=What is the degree of a node N? 126 | 5=What is a path? 127 | 6=What is the length of a path? What is it *not*? 128 | 7=What is a simple graph? 129 | 8=What is a connected component? 130 | 9=What is a spanning tree? 131 | 10=What is the MINIMUM number of edges a graph of n nodes can have? 132 | 11=What is the MAXIMUM number of edges a graph of n nodes can have? 133 | 12=What three things need to be stored for a graph ADT? 134 | 13=What are the three implementations of a graph ADT? 135 | 14=When storing graphs in arrays, what should the vertex/edge 'names' be? 136 | 15=How does an edge list store a graph? 137 | 16=How does an adjacency matrix store a graph? 138 | 17=When an adjacency matrix expands, how much bigger does it become? 139 | 18=What are the efficiencies of insert/remove for a graph ADT based on an edge list? 140 | 19=What is the single-source shortest path problem? 141 | 20=What are the efficiencies of insert/remove for a graph ADT based on an adjacency matrix? 142 | 21=How does an adjacency list store a graph? 143 | 22=What are the efficiencies of insert/remove for a graph ADT based on an adjacency list? 144 | 23=When storing directed graphs in an adjacency matrix, what is the convention regarding rows and columns? 145 | 24=What does it mean to say that traversals must honor the graph's connectivity? Why is this important? 146 | 25=What are the key implementation assumptions when doing BFS and DFS? 147 | 26=Does BFS use a stack or a queue? DFS? 148 | 27=When using a graph based on an adjacency list, what is the time complexity of DFS/BFS? 149 | 28=When using a graph based on an adjacency matrix, what is the time complexity of DFS/BFS? 
150 | 29=What are the two types of edge labels used when labeling a DFS (just name them)? 151 | 30=What is a discovery edge? 152 | 31=What is a cross edge? 153 | 32=What is special about discovery edges (vs. any old edge)? 154 | 33=According to CS225, what makes a graph sparse? 155 | 34=What is a minimum spanning tree (MST)? 156 | 35=What is the time complexity of most MST algorithms? 157 | 36=How do we break ties in MST algorithms? 158 | 37=Summarize Kruskal's algorithm. 159 | 38=What is the time complexity of Kruskal's algorithm for each PQ variant? 160 | 39=Summarize Prim's algorithm. 161 | 40=When does no single-source shortest path exist in a graph? 162 | 41=How does Dijkstra's algorithm differ from Prim's algorithm? 163 | 42=Is Dijkstra's algorithm safe for use with negative edges? 164 | 43=In Dijkstra's algorithm, when can we stop considering a node? 165 | 44=What is the purpose of Dijkstra's algorithm? 166 | 167 | [coding] 168 | 1=Review mergeSort? 169 | 2=Review quickSort? 170 | 3=Review heaps? 171 | 4=Review binary search? 172 | 5=Review AVL trees? 173 | 6=Review disjoint sets? 
(+ uptrees && smart unions) 174 | -------------------------------------------------------------------------------- /CS233/answers.txt: -------------------------------------------------------------------------------- 1 | [mt1-lesson1] 2 | 1A=voltages 3 | 2A=1, 0 4 | 3A=function, inputs 5 | 4A=2^(2^n) 6 | 5A=an input variable or its complement 7 | 6A=not; and; or (NAO) 8 | 7A=Just a reminder =) 9 | 8A=NOT 10 | 9A=a series of gates 11 | 10A=describing circuits using text 12 | 13 | [mt1-lesson2] 14 | 1A=output only depends on current input (after waiting long enough) 15 | 2A=OR all True outcomes together 16 | 3A=(x*y)' 17 | 18 | [mt1-lesson3] 19 | 1A=2^n 20 | 2A=yes 21 | 3A=groups of multiple bits 22 | 23 | [mt1-lesson4] 24 | 1A=NOT (negate) every bit, then add 1 25 | 2A=isNegative = (MSB == 1) 26 | 3A=add (as a string) the MSB to the front of the number until the desired bit length is reached 27 | 4A=0; MSB 28 | 29 | [mt1-lesson5] 30 | 1A=a module that adds two input bits to make a sum and carryout 31 | 2A=a module that adds 3 input bits to make a sum and carryout 32 | 3A=a full adder can be built using 2 half-adders 33 | 4A=no; carry lookahead 34 | 5A=datapath signals carry data; control signals tell the module what to do with it 35 | 36 | [mt1-lesson6] 37 | 1A=NOR 38 | 2A=when the result of an operation isn't representable using the available number of bits 39 | 3A=xor'ing the last 2 carry bits 40 | 41 | [mt1-lesson7] 42 | 1A=the time a circuit takes to react to changes in input 43 | 2A=an odd # of NOT gates in a loop; timing gates 44 | 3A=very 45 | 4A=hold a value, allow reads, allow writes 46 | 5A=2 cross-coupled NOR gates 47 | 6A=when reset = 1; when set = 1 48 | 7A=HL3 gets cancelled 49 | 8A=it depends on its past state(s); no 50 | 9A=the value it stores 51 | 52 | [mt1-lesson8] 53 | 1A=sequential; sequential depends on previous state(s) but combinational doesn't 54 | 2A=all elements are updated simultaneously 55 | 3A=clock signals (waves with constant period) 
56 | 4A=on the positive edge of the clock signal 57 | 5A=storage, computation 58 | 6A=a latch that updates on a clock's positive edge 59 | 7A=they ignore input when enable=0 60 | 8A=an SR w/enable where reset = !set 61 | 9A=a sequence of multiple D flip-flops; storing multiple bits 62 | 10A=a memory device that stores words referenced by addresses 63 | 11A=any part of the memory can be read/written quickly 64 | 12A=register file 65 | 13A=so we can do operations between values in different addresses 66 | 14A=select, enable 67 | 15A=a set of half adders connected to a set of d flip-flops 68 | 69 | [mt1-lesson9] 70 | 1A=finite state machine 71 | 2A=a sequential circuit that detects a given bit sequence 72 | 73 | [mt1-lesson10-11] 74 | 1A=programmability 75 | 2A=an interface between software and hardware 76 | 3A=what operations exist, the effects of given operations 77 | 4A=converting high level code (e.g. C) into machine code 78 | 5A=no; different processors have different ISAs 79 | 6A=a human-readable translation of machine code 80 | 7A=operations use values in registers (or sometimes constants) when getting/setting data 81 | 8A=32 32-bit registers 82 | 9A=it's always 0 83 | 10A=R-type accepts 3 registers; I-type accepts 2 registers + 1 constant 84 | 11A=rs 85 | 12A=signed (2's complement) 16-bit word 86 | 13A=opcode, funct 87 | 14A=no 88 | 89 | [mt1-lesson12] 90 | 1A=1 byte 91 | 2A=32 bits = 4 bytes = 4 locations 92 | 3A=when it's divisible by 4 93 | 4A=keep track of which instruction to execute next 94 | 5A=it's always a multiple of 4, so its last 2 bits are always 0 95 | 96 | [mt1-lesson13-14] 97 | 1A=encoding jumps; very long immediate 98 | 2A=Can only jump within current section (of 16) of instruction memory; using jr (jump register) 99 | 3A=jump amounts are relative to the address of the branch instruction 100 | 4A=0 101 | 5A=lui (load upper immediate) 102 | 6A=JUST A REMINDER... 
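The mt1-lesson4 answers above describe negating a 2's complement number (NOT every bit, then add 1) and sign extension (replicate the MSB until the desired bit length is reached); a small Python sketch of both, handy for checking flashcard answers by hand:

```python
def negate_2c(x, bits):
    """Two's complement negation: flip every bit, then add 1 (1A)."""
    mask = (1 << bits) - 1
    return (~x + 1) & mask

def sign_extend(x, bits, new_bits):
    """Replicate the MSB until the desired bit length is reached (3A)."""
    if (x >> (bits - 1)) & 1:                        # negative: MSB is 1
        x |= ((1 << (new_bits - bits)) - 1) << bits  # fill new high bits with 1s
    return x

print(bin(negate_2c(0b0011, 4)))       # 0b1101 (-3 in 4-bit 2's complement)
print(bin(sign_extend(0b1101, 4, 8)))  # 0b11111101 (still -3, now 8 bits)
```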
103 | 104 | [mt1-lesson15] 105 | 1A=cheaper, denser, slower 106 | 2A=programs and data are stored separately 107 | 3A=4 GB; 2^32 108 | 4A=the 8 least significant bits 109 | 5A=memory is located using a signed offset constant and a register value 110 | 6A=big endian stores the ends of words at the "big end" (bigger addressed end) of memory; little endian does the opposite 111 | 7A=words must start at addresses divisible by 4 112 | 8A=bus error 113 | 114 | [mt2-lesson16-17] 115 | 1A=hardware; software 116 | 2A=data defined outside of functions 117 | 3A=contiguously; load/store functions 118 | 4A=No; they are stored in memory, which must be loaded into a register 119 | 5A=instructions that are compiled into one or more simpler instructions 120 | 121 | [mt2-lesson19] 122 | 1A=an array of bytes ending in a 0 byte 123 | 2A=sizeof(T) 124 | 3A=null data used to "naturally align" object's data 125 | 4A=reordering fields 126 | 5A=storing/loading a pointer's value 127 | 128 | [mt2-lesson20] 129 | 1A=interface between software and hardware 130 | 2A=no 131 | 3A=CISC has more instructions and simpler, slower machines than RISC 132 | 4A=complex instruction set computer; reduced instruction set computer 133 | 5A=higher-level languages were more popular; these compiled into simpler instructions which are cheaper to implement 134 | 6A=very 135 | 7A=link compiled source files together; avoids recompiling unchanged source files 136 | 8A=expressiveness, optimization 137 | 138 | [mt2-lesson21] 139 | 1A=multitaskers 140 | 2A=a way for the OS to stop a running program 141 | 3A=a program's register values // and NOT its memory values 142 | 4A=memory-mapped I/O, isolated I/O 143 | 5A=Programmed I/O, Interrupt-driven I/O, direct memory access 144 | 6A=memory, I/O devices 145 | 7A=False 146 | 8A=No; the CPU can't give them their different instructions simultaneously 147 | 9A=Isolated I/O uses different load/store instructions for I/O and memory while Memory-mapped I/O does not 148 | 10A=CPU 
continually asks device for input and updates as necessary 149 | 11A=CPU is interrupted when I/O values change 150 | 12A=a simple processor takes instructions from the CPU and interrupts it when I/O values change 151 | 13A=interrupts are normal, exceptions indicate problems in programs 152 | 14A=program can sometimes fix the error itself; if the program can't, OS kills the program 153 | 15A=when new instructions are added to an ISA, software emulates them for machines with older versions of said ISA 154 | 16A=cause, location in code 155 | 17A=a processor that handles interrupts/exceptions 156 | 18A=exception type; 0000 (4 0 bits) 157 | 19A=A 2-byte value that determines which interrupts a processor responds to 158 | 159 | [mt2-misc1] 160 | 1A=they can compute the same things (ignoring memory/processor specs) 161 | 2A=latency, throughput 162 | 3A=dynamic instruction count, kinds of instructions used, time per CPU clock-cycle 163 | 4A=the number of instructions executed 164 | 5A=the number of instructions in a file 165 | 6A=yes; instruction type/complexity differs 166 | 167 | [mt2-misc2] 168 | 1A=higher 169 | 2A=average CPI (for a given program), clock frequency 170 | 3A=1 171 | 4A=memory stalls, slow instructions 172 | 5A=if its superscalar 173 | 6A=minimum time required for the CPU to do any work 174 | 7A=usually 175 | 8A=when they implement the same ISA 176 | 9A=they will have identical low-level programs 177 | 178 | 179 | [mt3-lesson23] 180 | 1A=make the common case fast 181 | 2A=measures the speed of your code 182 | 3A=measures how frequently parts of your code were executed 183 | 4A=no; collect data 184 | 185 | [mt3-lesson24+25] 186 | 1A=all instructions run in the same amount of time 187 | 2A=the largest cycle period 188 | 3A=time to fill pipeline + 1 cycle per instruction thereafter 189 | 4A=when multiple instructions can't be executed at once 190 | 5A=when an instruction needs data that isn't yet available 191 | 6A=each stage can only handle one instruction 
at a time 192 | 7A=registers that store values between stages of the pipeline 193 | 8A=beginning 194 | 9A=bypass memory and pass outputs directly to dependent instructions 195 | 10A=selects the correct ALU inputs for the execution (EX) stage 196 | 11A=two muxes - one for each ALU input 197 | 12A=EX/MEM, MEM/WB 198 | 13A=RAW 199 | 14A=WWR 200 | 15A=forwarding, stalling; forwarding 201 | 202 | [mt3-lesson26] 203 | 1A=memory loads/stores 204 | 2A=when a dependency occurs, wait for it to be resolved 205 | 3A=a data hazard where the data required is completely unavailable; stalling 206 | 4A=stalling an instruction delays subsequent instructions 207 | 5A=they become 0 208 | 6A=controls when the PC and IF/ID registers can be updated based on previous instructions/stall state 209 | 7A=when a branch decision can't be made in time to avoid loading unnecessary instructions 210 | 8A=yes; we don't because it's slow (compared to branch prediction) 211 | 9A=if our prediction is right, execution doesn't slow down 212 | 10A=the instructions later in the pipeline are flushed/discarded 213 | 11A=run 214 | 12A=the incorrect instruction flushing takes longer 215 | 216 | [mt3-lesson28] 217 | 1A=register latency limits things; power consumption 218 | 2A=do what that particular branch did most commonly during its last 4 times 219 | 3A=no; store the recently-used ones in a collision-allowed hash table 220 | 4A=since branch targets aren't available at branch prediction time, they need to be stored in the BTB 221 | 5A=yes 222 | 6A=groups of instructions that can be added to the pipeline simultaneously 223 | 7A=ALU/branch op followed by load/store op 224 | 8A=pad it with nops 225 | 9A=more hazards, more aggressive scheduling required 226 | 10A=makes it more parallel 227 | 11A=use different registers for each copy of the loop 228 | 12A=dynamic multiple issue 229 | 13A=CPU decides how many instructions to issue per cycle to avoid hazards 230 | 14A=CPU executes instructions out of order to avoid 
stalls; write-to-register order 231 | 15A=yes 232 | 16A=if they're mispredicted 233 | 17A=no 234 | 235 | [mt3-lesson29+30] 236 | 1A=fast CPUs need fast memory 237 | 2A=many modern applications of CPUs are data-intensive 238 | 3A=a small amount of fast, expensive memory 239 | 4A=they make the common case faster 240 | 5A=programs usually read/write an address several times in a short period of time 241 | 6A=programs usually end up using memory that is close to already-used memory 242 | 7A=loops 243 | 8A=registers frequently aren't enough; you can't point to them 244 | 9A=they load entire blocks of memory all at once 245 | 10A=yes 246 | 11A=> 95% 247 | 12A=power of 2 248 | 13A=each memory address maps to 1 cache block 249 | 14A=distinguishes between different memory addresses that have the same cache address 250 | 15A=record whether that cache value has been initialized 251 | 16A=tag, index 252 | 17A=stall until data is loaded 253 | 18A=overwrite the least recently used (LRUed) cache block 254 | 19A=values can get repeatedly read and written due to collisions ("thrashing") 255 | 20A=a cache where data can be stored in any cache block 256 | 21A=a value could be anywhere in the cache, so we have to check each one's tag 257 | 22A=a cache where data maps to a specific set, but can be fully associative within that set 258 | 23A=more expensive 259 | 260 | [mt3-lesson31] 261 | 1A=to indicate whether or not a cache block has been written to 262 | 2A=evicting a block requires additional memory ops; do those ops when memory is idle 263 | 3A=whether or not to add addresses to the cache when they are written to 264 | 4A=tell write-allocate caches not to store values in the cache on write 265 | 5A=stall until value is available; value isn't always necessary to move on 266 | 6A=a cache that continues on a cache miss if possible 267 | 7A=putting more time in between loads and their dependencies 268 | 8A=yes 269 | 9A=non-stride/stream data access, linked data structures 270 | 271 | 
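The mt3-lesson29+30 answers above describe how a direct-mapped cache maps each address to exactly one block, using the low bits as the index and the remaining high bits as the tag. A toy Python sketch (block-offset bits omitted for simplicity, an assumption since the flashcards don't fix a block size):

```python
NUM_BLOCKS = 8  # cache sizes are powers of 2 (12A)
INDEX_BITS = NUM_BLOCKS.bit_length() - 1  # log2(8) = 3

def split_address(addr):
    """Split an address into (tag, index) for a direct-mapped cache."""
    index = addr & (NUM_BLOCKS - 1)  # each address maps to 1 cache block (13A)
    tag = addr >> INDEX_BITS         # distinguishes addresses sharing an index (14A)
    return tag, index

# 0b10110 and 0b00110 collide on index 6; only their tags differ,
# which is what causes the repeated evictions described in 19A.
print(split_address(0b10110))  # (2, 6)
print(split_address(0b00110))  # (0, 6)
```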
[mt3-lesson32] 272 | 1A=iterating through blocks of an array to increase temporal locality 273 | 2A=when iterating across rows isn't possible 274 | 275 | [mt3-lesson33-VirtMem] 276 | 1A=programs larger than main memory; multiple programs running at once 277 | 2A=virtual (program) addresses map to physical memory (or disk) 278 | 3A=VM page >>> cache block 279 | 4A=fully associative 280 | 5A=writing back to disk is slow 281 | 6A=virtual and physical memory 282 | 7A=since VM is fully associative, we need to store the locations of addresses (hence the page table) 283 | 8A=no; the virtual page # (VPN) does everything for us 284 | 9A=page tables can be nested, so this caches the start/endpoints of page table paths 285 | 10A=when a path isn't in the TLB 286 | 11A=when a requested page must be retrieved from disk 287 | 288 | [mt3-lesson33-Disks] 289 | 1A=moving the arm towards/away from the center of the disk 290 | 2A=moving the arm to the proper sector is slow 291 | 3A=it minimizes the number of moves the arm has to make 292 | 4A=redundant array of inexpensive disks 293 | 5A=adding a few extra disks as backups is cheaper than making disks themselves more reliable 294 | -------------------------------------------------------------------------------- /CS233/questions.txt: -------------------------------------------------------------------------------- 1 | [mt1-lesson1] 2 | 1=What do computers use to represent information? 3 | 2=What does a high voltage represent (in binary)? A low voltage? 4 | 3=The output of a boolean function is specified purely by its _____ and _____. 5 | 4=(Cheatsheet) For a function with n boolean variables, how many boolean functions exist? 6 | 5=What is a literal? 7 | 6=(Cheatsheet) What is the precedence (first --> last) of boolean operations? (Hint: grumpy cat) 8 | 7=(Cheatsheet) Logic gates 9 | 8=(Cheatsheet) On a logic gate drawing, what does the little dot on a wire mean? 10 | 9=What is a circuit?
11 | 10=What are Hardware Description Languages used for? (Hint: 'verification' is too specific) 12 | 13 | [mt1-lesson2] 14 | 1=What makes a circuit combinational? 15 | 2=What is "sum of products" form? 16 | 3=What is NAND(x,y) equal to? (in terms of xy*+') 17 | 18 | [mt1-lesson3] 19 | 1=(Cheatsheet) How many unique things can be represented with n bits? 20 | 2=(Cheatsheet) Does the set of unsigned numbers include 0? 21 | 3=What are words? 22 | 23 | [mt1-lesson4] 24 | 1=What is the procedure for negating a 2's complement number? 25 | 2=How do we tell if a 2's complement number is negative? 26 | 3=What is the procedure for extending a 2's complement number? 27 | 4=(Cheatsheet) When LEFT-shifting a 2's complement number, what do we set additional bits to? How about when RIGHT-shifting? 28 | 29 | [mt1-lesson5] 30 | 1=What is a half-adder? 31 | 2=What is a full adder? 32 | 3=What is the relationship between a half-adder and a full adder? 33 | 4=Is the CS398 implementation of a multi-bit adder (using ripple carrying) (relatively) fast? If not, what is a faster technique? 34 | 5=What is the difference between the datapath and control signals? 35 | 36 | [mt1-lesson6] 37 | 1=(Cheatsheet) What other gate can be used to implement a NOT gate? 38 | 2=(Cheatsheet) When does overflow occur? (Hint: this is the ONLY time it does) 39 | 3=In a multi-bit ALU, what is the condition for determining whether overflow occurred? 40 | 41 | [mt1-lesson7] 42 | 1=What is propagation delay? 43 | 2=What is a ring oscillator? What is it used for? 44 | 3=How complicated is timing in reality (outside of CS398)? 45 | 4=(Cheatsheet) What are the 3 properties a memory must have? 46 | 5=What is a SR (set-reset) latch made up of? 47 | 6=When does an SR latch's value become 0? 1? 48 | 7=What happens when an SR latch's reset and set are both 1? 49 | 8=What does it mean to say a circuit has feedback? Is this type of circuit combinational? 50 | 9=What is the state of an SR latch? 
51 | 52 | [mt1-lesson8] 53 | 1=If a circuit isn't combinational, what is it? What's the difference between these two? 54 | 2=What is the basic idea of synchronous design? 55 | 3=How is synchronous design usually implemented? 56 | 4=When do CS398's circuits update (relative to their clock signal)? 57 | 5=What two things do synchronously designed circuits alternate between? 58 | 6=What is a D flip flop? 59 | 7=What is special about SR/D latches "with enable"? 60 | 8=What is a D latch with enable? 61 | 9=What is a register? What is it used for? 62 | 10=What is a RAM? 63 | 11=What does the "random access" in RAM mean? 64 | 12=What is one kind of RAM (Hint: that we have already studied)? 65 | 13=Why does a register file need to have multiple read ports? 66 | 14=What two inputs do decoders usually have? (Give their common names) 67 | 15=What is a counter? 68 | 69 | [mt1-lesson9] 70 | 1=A combinational circuit is to a boolean function as a sequential circuit is to a _____________. 71 | 2=What is a sequence recognizer? 72 | 73 | [mt1-lesson10-11] 74 | 1=What is the key feature that distinguishes a computer processor from other hardware systems? 75 | 2=What is an Instruction Set Architecture (ISA)? 76 | 3=What two things does an Instruction Set Architecture specify? 77 | 4=What does "compiling" a program mean? 78 | 5=Is machine code portable across platforms? Why (not)? 79 | 6=What is Assembly language? 80 | 7=What is a register-to-register architecture? 81 | 8=(Cheatsheet) How many registers do MIPS processors have? How long are the registers? 82 | 9=What is special about register 0 ($0)? 83 | 10=What is the difference between R-type and I-type instructions? 84 | 11=Which register is always a source register (with the exception of load-from-memory commands) for ARITHMETIC instructions? 85 | 12=What is the nature (signed/unsigned, length) of an immediate? 86 | 13=What two parameters does an ARITHMETIC MACHINE's instruction decoder take?
87 | 14=Can we write to our instruction memory from within our program (in CS398 at least)? 88 | 89 | [mt1-lesson12] 90 | 1=(Cheatsheet) How big is each location in instruction memory? 91 | 2=How many bits/bytes does 1 MIPS instruction use? How many instruction memory locations is this? 92 | 3=When is an instruction's starting address (within the instruction memory) valid? 93 | 4=What is the purpose of the program counter? 94 | 5=Why does a program counter only need to be (n-2) bits long for a memory with n-bit addresses? 95 | 96 | [mt1-lesson13-14] 97 | 1=What is the J-type instruction used for? What is unique about it? 98 | 2=What is the limitation of J-type instructions? How is this resolved? 99 | 3=How are branches encoded? (specifically their jump amounts) 100 | 4=(Cheatsheet) If a target label comes immediately after a branch, what would the jump offset be? 101 | 5=How do we load more than the 16 least significant bits into a register? 102 | 6=(Cheatsheet) Include comments from Lab7's decoder! 103 | 104 | [mt1-lesson15] 105 | 1=What are the tradeoffs of main memory compared to register files? 106 | 2=What is the core idea of a Harvard architecture? 107 | 3=How much data memory does MIPS support? How many bytes does it support? 108 | 4=When doing a byte write from a register to memory (eg. "sb"), which part of the register's value is written to memory? 109 | 5=What is the basic idea behind indexed addressing? 110 | 6=(Cheatsheet) What's the difference between little endian and big endian? 111 | 7=What does it mean for memory to be aligned? 112 | 8=What kind of error occurs if memory isn't aligned? 113 | 114 | [mt2-lesson16-17] 115 | 1=Where are all MIPS registers equivalent? Where are they not? 116 | 2=What is global data? 117 | 3=How are arrays in MIPS stored (in memory)? What functions access them? 118 | 4=Can you access an array with a "move" function? Why (not)? 119 | 5=What are pseudo-instructions?
120 | 121 | [mt2-lesson19] 122 | 1=How are C-style strings encoded? 123 | 2=When doing "x++" in C where x is a pointer of type T, how much does the memory address that x refers to increase by (in bytes)? 124 | 3=In the context of Structs/Objects, what is "padding"? 125 | 4=How can the amount of padding required by a Struct/Object be reduced? 126 | 5=What does "dereferencing" mean? (Hint: w.r.t pointers) 127 | 128 | [mt2-lesson20] 129 | 1=What is an ISA (instruction set architecture)? (Hint: interface) 130 | 2=Can software tell the difference between two processors with the same ISA? 131 | 3=(Cheatsheet) What is the difference between CISC and RISC computers? 132 | 4=(Cheatsheet) What do CISC and RISC stand for? 133 | 5=Why did RISC eclipse CISC? (two reasons) 134 | 6=How similar are most ISAs? 135 | 7=What is the purpose of a linker in the code compilation process? Why is it useful? 136 | 8=In today's era of high level languages, what are the two main advantages of Assembly (compared to such languages)? 137 | 138 | [mt2-lesson21] 139 | 1=Are most modern OSes single-taskers or multitaskers? (Hint: OSes, not processors) 140 | 2=What is an interrupt? 141 | 3=When switching between which program has access to the processor, what must the OS keep track of? 142 | 4=(Cheatsheet) What are the two main ways for I/O devices to communicate with programs? 143 | 5=(Cheatsheet): What are the three ways to transfer data between devices and (main) memory (ie. to do Memory-mapped I/O)? 144 | 6=When using Memory-mapped I/O, the address space of the machine refers to two things. What are they? 145 | 7=True/False: All Memory-mapped I/O devices only need one I/O address. 146 | 8=Can multiple Memory-mapped IO devices (with different instructions) be accessed at once? Why (not)? 147 | 9=(Cheatsheet) What is the difference between the two main ways that I/O devices communicate with programs (Memory-mapped and Isolated I/O)? 
148 | 10=(Cheatsheet) In the context of memory-I/O data transfer methods, what is Programmed I/O? 149 | 11=(Cheatsheet) In the context of memory-I/O data transfer methods, what is Interrupt-driven I/O? 150 | 12=(Cheatsheet): In the context of memory-I/O data transfer methods, what is Direct Memory Access? 151 | 13=What is the difference between an Interrupt and an Exception? 152 | 14=What are the two ways an exception can be resolved? (Hint: one is by the program itself, the other is by the OS) 153 | 15=What is the Forward Compatibility problem, and how is it resolved? 154 | 16=What two things does an interrupt handler need to know about an interrupt/exception? 155 | 17=What is the role of co-processor 0? 156 | 18=If an exception occurs, what is the exception code field set to? What if an interrupt occurs? 157 | 19=What is an interrupt mask? 158 | 159 | [mt2-misc1] 160 | 1=What does it mean if two machines are both Turing-complete? 161 | 2=What are the two primary performance specifics for programs? 162 | 3=What are the three main program-related performance factors? 163 | 4=What is the "dynamic" instruction count? 164 | 5=What is the "static" instruction count? 165 | 6=Does the average CPI differ between programs? Why (not)? 166 | 167 | [mt2-misc2] 168 | 1=The CPI of a given floating point op is {higher/lower/no different} than the CPI of its integer counterpart. 169 | 2=What are the two main CPU-related performance factors? 170 | 3=What is the CPI of an ideal (no stalling/memory slowage) single-cycle machine? 171 | 4=What can cause the CPI of a single-cycle machine to be LESS than 1? (If nothing can, enter "nothing") 172 | 5=What can cause the CPI of a single-cycle machine to be MORE than 1? (If nothing can, enter "nothing") 173 | 6=What does the clock cycle time of a processor represent? 174 | 7=Is it possible to optimize one CPU metric at the expense of another? (e.g. 
faster computations in exchange for greater memory requirements - or vice versa) 175 | 8=When are two processors compatible? 176 | 9=If two identical-ISA processors are given the same high-level program, what can we say (with certainty) about the processors and/or the low-level programs? 177 | 178 | [mt3-lesson23] 179 | 1=What is the golden rule of performance optimization? 180 | 2=What does gprof do? 181 | 3=What does gcov do? 182 | 4=When deciding what to optimize, is it sufficient to guess? If not, what [else] must be done? 183 | 184 | [mt3-lesson24+25] 185 | 1=What is special about a single-cycle implementation/subset of an ISA? 186 | 2=What is the period of a pipelined machine? (In terms of its individual cycles' periods) 187 | 3=What is the execution time on an ideal pipeline? (Hint: there is 'boot time'...) 188 | 4=When do structural hazards occur? 189 | 5=When do data hazards occur? 190 | 6=Why do all instructions have to go through all pipeline stages (even if they don't use a stage)? 191 | 7=What are pipeline registers? 192 | 8=Registers are written at the (beginning/end) of a clock cycle. (Hint: the Benny register) 193 | 9=What is the key idea of forwarding? 194 | 10=What does a forwarding unit do? 195 | 11=How is forwarding implemented (other than the forwarding unit)? 196 | 12=What are the two kinds of data hazards? 197 | 13=When does an EX/MEM data hazard occur? 198 | 14=When does a MEM/WB data hazard occur? 199 | 15=What are the two main strategies for resolving data hazards in a pipelined machine? Which one of them is preferable? 200 | 201 | [mt3-lesson26] 202 | 1=Which instructions can cause forwarding not to work? 203 | 2=What is the key idea of stalling? 204 | 3=What is a "true" data hazard? What technique is used to solve them? 205 | 4=What is the "cascade effect" of stalling? 206 | 5=When a nop occurs in a given stage, what happens to that stage's control signals? 207 | 6=What does a hazard detection unit do? 208 | 7=What is a control hazard? 
209 | 8=Is stalling a valid solution to control hazards? Do we use it? Why (not)? 210 | 9=What is the key idea behind branch prediction? 211 | 10=If a branch prediction is wrong, what happens? (Hint: toilets) 212 | 11=Most branches are predicted at (compile/run)-time. 213 | 12=Why are branch mispredictions more damaging (to the overall speed of) longer pipelines? 214 | 215 | [mt3-lesson28] 216 | 1=Why are deep (> 16) pipelines bad? (Hint: branch misprediction/flushing is valid but not what we want here) 217 | 2=How do we predict when to take a branch dynamically? 218 | 3=Can we store records of every single branch in memory? If not, what do we do instead? 219 | 4=What is the use of the Branch Target Buffer (BTB)? 220 | 5=If we remove stalls and flushes from a program, can we achieve a CPI of 1 on a non-superscalar processor? 221 | 6=What are issue packets? 222 | 7=What is a common dual-issue setup used for MIPS? 223 | 8=If a sequence of instructions doesn't fit the specifications of an issue packet, what do we do? 224 | 9=What are the two main drawbacks of issue packets? 225 | 10=How does loop unrolling help speed up a program? 226 | 11=What is the idea of register renaming? (w.r.t. loop unrolling) 227 | 12=What is "superscalar" a shortened version of? 228 | 13=What is the key idea of superscalar machines? 229 | 14=What is out-of-order execution? What must be maintained? 230 | 15=Are loads usually expensive? 231 | 16=Are branches usually expensive? 232 | 17=Are integer arithmetic operations expensive? 233 | 234 | [mt3-lesson29+30] 235 | 1=Why is speed necessary in today's storage systems? 236 | 2=Why is capacity necessary in today's storage systems? 237 | 3=What is a cache? 238 | 4=Why are caches useful in speeding up memory access times? (Hint: the answer is a common course 'slogan') 239 | 5=What is the core idea of temporal locality? 240 | 6=What is the core idea of spatial locality?
241 | 7=What control flow constructs are (typically) good examples of locality? 242 | 8=Why do we bother with a cache when we have registers? (2 reasons) 243 | 9=How do caches take advantage of spatial locality? 244 | 10=Do caches take additional time to read/write variables in the case of a cache miss? 245 | 11=What is the hit rate of a typical cache? 246 | 12=The number of cache blocks in a cache is usually a ____________. 247 | 13=What does it mean to be a direct-mapped cache? 248 | 14=What is the purpose of a tag? 249 | 15=What is the purpose of a valid bit? 250 | 16=What parameter(s) out of {tag, index, offset} must match something in the cache for a cache hit to occur? 251 | 17=What is the simplest strategy for dealing with cache misses? 252 | 18=When our cache gets full and we want to load a new block, what do we do? 253 | 19=What is the disadvantage of using direct mapping? (Hint: direct mapping allows multi-value blocks) 254 | 20=What is a fully associative cache? 255 | 21=Why does a fully-associative cache make lookup O(n) [instead of direct mapping's O(1)]? 256 | 22=What is a set-associative cache? 257 | 23=What is the primary disadvantage of having set associative caches with more sets? 258 | 259 | [mt3-lesson31] 260 | 1=In a write-back cache, what is the purpose of the dirty bit? 261 | 2=What is the main drawback of a (simple) write-back cache? How can this be solved? 262 | 3=What do the write-allocate/write-no-allocate properties decide? 263 | 4=What do non-temporal store instructions do? 264 | 5=What is the easiest way to take a cache miss into account? Why do we not do this? 265 | 6=What is a non-blocking cache? 266 | 7=What is hoisting? (Hint: w.r.t. non-blocking caches) 267 | 8=Can hardware prefetching learn strides (e.g. x, x+2, x+4, x+6...)? 268 | 9=What are the two uses of software prefetching? 269 | 270 | [mt3-lesson32] 271 | 1=What is blocking? 272 | 2=When is blocking useful? 
(Hint: consider iterating across rows, too) 273 | 274 | [mt3-lesson33-VirtMem] 275 | 1=What two problems does virtual memory solve? 276 | 2=What is the basic idea of virtual memory? 277 | 3=How does the size of a virtmem page compare to a cache block? 278 | 4=What cache layout (set-associative, fully associative, or direct mapping) is used for virtmem? 279 | 5=Why does virtual memory not use a write-through policy? 280 | 6=What two things does the page table (as a whole i.e. no subtables) map between? 281 | 7=Why is the page table (as a whole i.e. no subtables) necessary? 282 | 8=Do we need a "tag" for page table addresses? Why (not)? 283 | 9=What is the point of the Translation Lookaside Buffer (TLB)? 284 | 10=When do we need to "walk the page table"? 285 | 11=When does a page fault occur? 286 | 287 | [mt3-lesson33-Disks] 288 | 1=What is seeking? 289 | 2=What is the primary reason disks are (so) slow? 290 | 3=Why is sequential access of a disk faster than random access of a disk? 291 | 4=What does RAID stand for? 292 | 5=What is the core idea behind RAID? 293 | -------------------------------------------------------------------------------- /CS241/answers.txt: -------------------------------------------------------------------------------- 1 | [quiz2] 2 | 1A=pipes output into a file 3 | 2A=waits for (size of value) ms and returns values in a size-based order 4 | 3A=a deep copy of everything EXCEPT its PID, signals, or alarms 5 | 4A=use wait() or waitpid() 6 | 5A=sort of - only the lowest 8 bits 7 | 6A=Don't wait() for them!
8 | 7A=system becomes unstable 9 | 8A=if a process terminates, its children are assigned to and waited on by an OS process 10 | 9A=terminated processes that clutter the kernel process table 11 | 10A=wait() on them 12 | 11A=using kill() #hacking 13 | 12A=SIGUSR#; SIGTERM 14 | 13A=they vary across platforms 15 | 14A=it returns NULL 16 | 15A=allows me to use "foo" instead of "int"; structs 17 | 16A=initialized and immortal 18 | 17A=it returns a ptr to a scratch-pad buffer (which changes if it's called again) 19 | 18A=sizeof(type) * array_len 20 | 21 | [quiz3] 22 | 1A=yes! 23 | 2A=when memory outside of blocks is inconsistently free/taken 24 | 3A=when memory within blocks is inconsistently free/taken 25 | 4A=we try to fit as many different-sized blocks into memory as we can while minimizing block moving 26 | 5A=use free block closest to target size 27 | 6A=use blocks at the beginning of largest available free chunk 28 | 7A=use first free block that works 29 | 8A=malloc - create; realloc - expand if you can else create; calloc - malloc with zero initialization 30 | 9A=Return NULL 31 | 10A=allocate more heap memory 32 | 11A=no 33 | 12A=yes 34 | 13A=zeroes out new memory to prevent programs reading zombies' data 35 | 14A=yes - realloc may move the array, so we need an "array = ..." 36 | 15A=free blocks store block size and a ptr to next free block 37 | 16A=size of previous blocks; helps us coalesce adjacent free blocks 38 | 17A=store block size and free state on the block itself (regardless of its type) 39 | 18A=as SIZE's last bit; an aligned SIZE is a multiple of 4 40 | 19A=blocks are grouped by size 41 | 20A=a segregated free list with sizes that are powers of 2 42 | 21A=fast but fragment a lot 43 | 22A=evil buffer overflows can overwrite them 44 | 23A=virtual memory 45 | 24A=units of memory shared between RAM and disk 46 | 25A=passing/receiving a single void ptr 47 | 26A=pthread version of wait() 48 | 27A=returning from a thread 49 | 28A=program crashes 50 | 29A=Gotcha!
(We can only have 1) 51 | 30A=they're all equal 52 | 31A=kills a pthread either on a major event or immediately 53 | 32A=it's too forceful 54 | 33A=returning from main kills the program; pthread_exit() just stops the main thread 55 | 34A=when all pthreads are dead 56 | 35A=when returning from main 57 | 36A=pthread table can overflow and crash the program 58 | 37A=undefined behavior 59 | 38A=using locking mechanisms 60 | 39A=yes - they use a static buffer shared between threads 61 | 40A=pthread_create() 62 | 41A=yes 63 | 42A=yes 64 | 43A=threads have easier communication but less security 65 | 44A=return a specified value from main() 66 | 45A=the threads are copied 67 | 46A=&thread id, attribs, function, data ptr 68 | 47A=it returns 0 on success - NOT a new thread ID 69 | 48A=return from main, return from thread, kill thread, kill process 70 | 49A=too many threads (or zombies) 71 | 50A=creating a thread 72 | 51A=EDEADLK 73 | 52A=kill program 74 | 53A=undefined 75 | 76 | [quiz4-part1] 77 | 1A=a piece of code that cannot support multiple threads 78 | 2A=yes! 79 | 3A=use a mutex 80 | 4A=cleaning up after unlocked mutexes 81 | 5A=undefined behavior 82 | 6A=any that try to lock it 83 | 7A=a default-initialized mutex constructor 84 | 8A=only for global variables, faster w/less error checking 85 | 9A=they are copied; locking M on one process doesn't lock the other's M 86 | 10A=only thread T 87 | 11A=yes; one lock per shared data structure 88 | 12A=locks must be conceptually correct 89 | 13A=a tiny bit 90 | 14A=a construct that limits the # of threads in a code section 91 | 15A=sem_wait(), sem_post() 92 | 16A=wait until count > 0, decrement it, and return 93 | 17A=increment the semaphore and return 94 | 18A=number of AVAILABLE threadspots in a critical section 95 | 19A=0 96 | 20A=named semaphores; Mac OSX 97 | 21A=yes (duh) 98 | 22A=mutexes are faster 99 | 23A=yes; allows us to release a thread that calls handler-unsafe functions 100 | 24A=no; sigaction 101 | 25A=DEADLOCK!
102 | 26A=undefined behavior 103 | 27A=counting semaphore 104 | 105 | [quiz4-part2] 106 | 1A=pthread_mutex_lock without the waiting 107 | 2A=nothing 108 | 3A=semaphore is private to its process; semaphore is shared between processes 109 | 4A=undefined behavior 110 | 5A=nothing 111 | 6A=same thing as sem_wait, but returns an error instead of waiting 112 | 7A=it happens in an indivisible step 113 | 8A=the copy is broken! 114 | 9A=pthread_yield(), to reduce CPU waste while locking 115 | 10A=mutual exclusion, bounded wait, progress 116 | 11A=a task can never have an infinite waiting time 117 | 12A=we shouldn't wait unless we have to 118 | 13A=HI my flag, turn = ME, wait(your flag = LO || turn = YOU), METHOD, LO my flag 119 | 14A=HI my flag, wait(your flag HI and turn YOU), LO my flag, wait(turn = YOU), HI my flag, turn = YOU, LO my flag 120 | 15A=compilers can reorder instructions 121 | 16A=atomic CPU instruction that swaps a register and a memory location's values 122 | 17A=XCHG 123 | 18A=let groups of threads sleep until poked 124 | 19A=pthread_cond_signal() 125 | 20A=the OS decides which thread to wake up 126 | 21A=a waiting thread can be accidentally awoken; wait until a given continuation condition is true 127 | 22A=no! 
128 | 23A=pthread_cond_broadcast 129 | 24A=unlock m, wait for pthread_cond_signal, lock m 130 | 25A=signals can get missed if a race condition occurs, and they're faster than avoiding race conditions 131 | 26A=lock its mutex 132 | 27A=queue for fairness, works across processes 133 | 134 | [quiz6-deadlock] 135 | 1A=deadlock 136 | 2A=mutual exclusion, circular wait, hold/wait, no pre-emption 137 | 3A=a resource allocation graph has a cycle 138 | 4A=a process is holding some resources and waiting for others 139 | 5A=once a process has a resource, it can't let go of it 140 | 6A=processes continually swap resources in an attempt to stop deadlock 141 | 7A=check for circular dependency 142 | 143 | [quiz6-virtualMemory] 144 | 1A=keeps processes safe from others, allows memory relocation 145 | 2A=CPU part that converts virtual address to physical one 146 | 3A=when segfaulting 147 | 4A=enables DEP, page table 148 | 5A=block of virt mem 149 | 6A=4KB (2^12 bytes) 150 | 7A=4 GB / 4 KB = (2^10)^2 151 | 8A=physical memory block with the size of the virtual memory block 152 | 9A=a map between pages and frames 153 | 10A=array 154 | 11A=64 bit addresses are bigger, and take orders of magnitude more space (~40 petabytes) to store 155 | 12A=refer to a specific address within a frame 156 | 13A=block index, then offset 157 | 14A=page tables of page tables; 64-bit page table size 158 | 15A=yes - a lot; TLB (cache of table lookups) 159 | 16A=whether or not its memory accesses repeat a lot; most 160 | 17A=yes 161 | 18A=yes, page table 162 | 19A=commonly-needed read only memory, mmap() 163 | 20A=when talking to its children 164 | 21A=determine if a page needs to be updated on disk, page table 165 | 166 | [quiz6-pipes] 167 | 1A=POSIX constructs that allow data shipment between processes 168 | 2A=yes (one-way) 169 | 3A=our children(?) 
170 | 4A=pipe(), then fork() 171 | 5A=silly idea but we can; deadlock by filling the pipe buffer 172 | 6A=processes wait on pipes until told to stop 173 | 7A=fflush(), printf("\n"), look for a terminal character 174 | 8A=converts file descriptors into FILE ptrs, which lets us use *f functions 175 | 9A=yes; it's slow 176 | 10A=pipes generally wait for "food" until told to stop 177 | 11A=when it's full or closed 178 | 12A=only terminal streams 179 | 13A=2 180 | 14A=when writing to a pipe with no listeners 181 | 15A=no! Close unused ends immediately after fork() 182 | 16A=when all dependents have exited 183 | 184 | [quiz6-files] 185 | 1A=fseek(f, 0, SEEK_END); return ftell(f); 186 | 2A=fseek(f, n, SEEK_SET); 187 | 3A=fseek() 188 | 189 | [quiz6-manpages] 190 | 1A=FILE ptr 191 | 2A=bytes 192 | 3A=add arg_2 bytes to arg_3 193 | 4A=SEEK_SET - beginning of file, SEEK_CUR - current pos, or SEEK_END - end of file 194 | 5A=returns current position in a file 195 | 6A=same as ftell() and fseek(); some of these systems don't support ftell()/fseek() 196 | 7A=0 if successful; -1 otherwise 197 | 8A=nothing 198 | 9A=flush a stream 199 | 10A=fflush(NULL) 200 | 11A=0, EOF 201 | 12A=it too is closed 202 | 13A=undefined 203 | 204 | [quiz7-errorHandling] 205 | 1A=error-indicating value set when a system call fails 206 | 2A=each has their own copy 207 | 3A=never! 208 | 4A=no 209 | 5A=preserve its value 210 | 6A=print out message for a particular errno value 211 | 7A=prints out first variable (if possible), then the most recent error message 212 | 8A=no, strerror_r() 213 | 9A=action was interrupted 214 | 10A=retry it 215 | 11A=they auto-restart on disk ops, but EINTR on network ones 216 | 12A=slow, blocking ones are interruptible, others are not 217 | 13A=call fails with EINTR, call auto-restarts 218 | 14A=sigaction() 219 | 15A=change *SOME* EINTR failures to auto-restarts 220 | 16A=no!
221 | 222 | [quiz7-networking1] 223 | 1A=none 224 | 2A=95 225 | 3A=addresses are limited to 32 bits 226 | 4A=128 227 | 5A=no - it can have both 228 | 6A=addresses of localhost 229 | 7A=shortened version of 0:0:0:0:0:0:0:1 230 | 8A=16 231 | 9A=globally numbered pipe that processes can access 232 | 10A=only root processes can use them 233 | 11A=80 234 | 12A=all packets will arrive, packets will be in order 235 | 13A=3% 236 | 14A=faster than TCP 237 | 15A=no; yes 238 | 16A=creates a pipe between two machines and hides the low-level details 239 | 17A=TCP; it abstracts out all the details 240 | 18A=TCP 241 | 242 | [quiz7-networking2] 243 | 1A=domain name -> IP address (DNS resolution) 244 | 2A=linked list of addrinfo structs, multiple addresses may be available 245 | 3A=gets IP address info from getaddrinfo() 246 | 4A=gethostbyaddr(), getservbyport() 247 | 5A=reentrant, doesn't care about IP4-vs-6 248 | 6A=convert domain names to IP addrs 249 | 7A=UDP 250 | 8A=no; requests unencrypted 251 | 9A=getaddrinfo(), socket(), connect() 252 | 10A=a file descriptor 253 | 11A=connect() 254 | 12A=file descriptor, address struct, address struct size (ASS); ASS can vary 255 | 13A=freeaddrinfo() 256 | 14A=gai_strerror(result) 257 | 15A=addrinfo search parameters 258 | 16A=switch between IPv4 and IPv6 in getaddrinfo() 259 | 17A=domain name, IP address 260 | 261 | [quiz7-networking3] 262 | 1A=method, resource, protocol, 2 newlines 263 | 2A=3 264 | 3A=convert between processor and internet endianness 265 | 4A=x86 266 | 5A=socket(), bind(), listen(), accept() 267 | 6A=socket(), bind() 268 | 7A=creates a 'network descriptor' 269 | 8A=links a socket to a hostname/port 270 | 9A=set size of listen queue 271 | 10A=128+ 272 | 11A=it's moved to an unused FD for future communication 273 | 12A=no 274 | 13A=wait for new connection requests and assign them to their own FD 275 | 14A=we MUST use the FD it returns, NOT the server's socket FD 276 | 15A=SOCK_STREAM 277 | 16A=crash and burn 278 | 17A=use
SO_REUSEPORT in setsockopt() 279 | 18A=zero it out 280 | 19A=machine 281 | 20A=they're still taken 282 | 21A=no 283 | 22A=bind() before connect() 284 | 285 | [quiz7-networkManpages] 286 | 1A=both GNU and XSI variants accept a user-supplied buffer, but only XSI requires it; XSI 287 | 2A=returns some non-NULL value 288 | 3A=change address query criteria 289 | 4A=AF_INET, AF_INET6, AF_UNSPEC 290 | 5A=SOCK_STREAM, SOCK_DGRAM, 0 (any type) 291 | 6A=undefined 292 | 7A=uses TCP if set, UDP otherwise (CONFIRM) 293 | 8A=returns info for localhost 294 | 9A=if specified in hints.ai_flags, getaddrinfo() returns IPv6 mappings for IPv4 addresses 295 | 10A=it errors 296 | 11A=system-dependent 297 | 12A=clamps size at system-dependent min value 298 | 13A=becomes garbage 299 | 14A=set O_NONBLOCK on its FD 300 | 15A=it's generated asynchronously 301 | 302 | [quiz7-scheduling] 303 | 1A=SYN; SYN-ACK; ACK 304 | 2A=sending many SYN packets and not ACKing them 305 | 3A=synchronize sequence #s 306 | 4A=allows TCP to correctly order data 307 | 5A=transmission control protocol 308 | 6A=user datagram protocol 309 | 7A=in the enclosing packet 310 | 8A=receiver tells sender how much data they want 311 | 9A=by limiting the number of unacknowledged packets 312 | 10A=interrupts existing job if a more optimal one exists 313 | 11A=first come first serve, non-preemptive shortest job first 314 | 12A=stride scheduler that gives more time to processes who haven't used much recently 315 | 13A=completely fair scheduler 316 | 14A=loops back to 0 317 | 15A=using a receiving window smaller than the TCP header 318 | 16A=if a window size update is missed, ask nicely for a new one instead of waiting forever 319 | 17A=don't wait for a full packet's worth of data before sending 320 | 18A=no 321 | 322 | [quiz8-files1] 323 | 1A=CIA triad, performance 324 | 2A=current directory 325 | 3A=parent directory 326 | 4A=INVALID!
327 | 5A=a path starting from the root directory 328 | 6A=a path not starting from the root directory 329 | 7A=home directory 330 | 8A=so we can page things in and out of memory 331 | 9A=name, size, accessed/created/modified time, permissions, path, checksum, inode 332 | 10A=read, write, execute (rwx) 333 | 11A=yes to both 334 | 12A=as pointers to disk blocks 335 | 13A=DO THIS EXAMPLE PROBLEM! 336 | 337 | [quiz8-files2] 338 | 1A=the inode 339 | 2A=mapping of names to inode #s 340 | 3A=yes! 341 | 4A=ls -i 342 | 5A=*stat() 343 | 6A=fstat(), lstat(), stat() 344 | 7A=stat() for fd's 345 | 8A=stat() for symbolic links 346 | 9A=all information on the inode 347 | 10A=opendir(), readdir(), closedir() 348 | 11A=prevents zombie fd's/memory leaks 349 | 12A=exception handling 350 | 13A=no 351 | 14A=returns . and .. as well as subdirectories/files 352 | 15A=no; readdir_r() 353 | 16A=use S_ISDIR() or S_ISREG() on a DIRENT's st_mode 354 | 355 | [quiz8-files3] 356 | 1A=ln 357 | 2A=multiple paths pointing to one file 358 | 3A=link 359 | 4A=destroy a name-inode link 360 | 5A=when no more links or fd's point to it 361 | 6A=unlink them completely except for an FD 362 | 7A=once an archived file exists, it can be hard linked to if nothing changed 363 | 8A=no; filesystems are assumed to be trees and enforcing this is too expensive 364 | 9A=change file mode bits 365 | 10A=yes! 
366 | 11A=make it recursive 367 | 12A=chown 368 | 13A=chgrp 369 | 14A=make blah.py unexecutable 370 | 15A=give others execute-only permission on blah.py 371 | 16A=user, group, other; a (all) 372 | 17A=file type 373 | 18A=- (file), d (directory), c (character device file), l (symlink), p (pipe), b (block device), s (socket) 374 | 19A=by setting its effective user ID to 0 375 | 20A=real UIDs indicate who ran the program; effective UIDs indicate the program's permissions 376 | 21A=to determine who ran a program AND to determine its permissions 377 | 22A=set its effective UID to its owner's 378 | 23A=one returns the real UID, other returns the effective UID 379 | 24A=some number, 0 380 | 381 | [quiz8-files4] 382 | 1A=no 383 | 2A=symlink() 384 | 3A=ln -s 385 | 4A=resolve a symlink 386 | 5A=refer to nonexistent files, directories, and files outside of the file system 387 | 6A=slower than normal filepaths 388 | 7A=a black hole that destroys anything sent to it 389 | 8A=only allow owners and root to move/delete the file 390 | 9A=executes 'env' to find a configuration-specific program path 391 | 10A=yes! 392 | 11A=prepend a . 
393 | 12A=ls -a 394 | 13A=whether its contents can be listed 395 | 14A=resolution of wildcard paths; adds every matching file to the command's argv list 396 | 15A=-m 777 397 | 16A=-p 398 | 17A=subtracts rather than adds permissions 399 | 18A=a process 400 | 19A=they use their parent's 401 | 20A=copy data; if/of - input/output file, bs - blocksize, count - # of blocks 402 | 21A=give a stream of 0 bytes 403 | 22A=create a file if it doesn't exist, change its modified time 404 | 405 | [quiz8-files5and6] 406 | 1A=dynamically generated data available to the file system 407 | 2A=dev, proc, sys 408 | 3A=mount 409 | 4A=bogus millions of instructions per second 410 | 5A=sudo mount -o loop [file] [imgDir] 411 | 6A=mapping program contents into process' address space 412 | 7A=share their memory between proc's 413 | 8A=give mmap an FD; munmap() 414 | 9A=setting permissions on mapped memory 415 | 10A=data immediately available, sharing between procs 416 | 11A=when doing shared or nonsequential processing 417 | 12A=mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0) 418 | 13A=block storing important system data; have multiple copies 419 | 14A=don't write until needed (back) vs write when changed (through) 420 | 15A=check file system integrity 421 | 422 | [quiz8-raid] 423 | 1A=stripe data across many disks to increase disk I/O throughput, and duplicate data to survive the resulting higher failure rate 424 | 2A=faster reads, more reliable 425 | 3A=slower writes, twice as expensive 426 | 4A=mirrored file system 427 | 5A=enforce an even # of 1's in each block (parity codes stored on an extra disk) 428 | 6A=single disk failure not a problem, cheaper than RAID 2 429 | 7A=all block writes must update their parity bit 430 | 8A=keep things fast 431 | 9A=write-through is safer but slower 432 | 10A=since disk data is sequenced, it can be prefetched 433 | 11A=sync() [for a filesystem] or fsync(FD); no 434 | 12A=M/N 435 | 13A=2-10% 436 | 14A=a TCP server accept()s connections established by a handshake; a UDP server just waits for datagrams to arrive 437 | 438 | 
[quiz8-manpages] 439 | 1A=doesn't follow symlinks 440 | 2A=yes! 441 | 3A=use S_IS*(st_mode) 442 | 4A=ENOENT or ENOTDIR 443 | 5A=iterate through a directory structure 444 | 6A=NO! 445 | 7A=yes 446 | 8A=resets a directory stream 447 | 9A=verify it worked with stat() 448 | 10A=S_I[R/W/X]USR, S_IRWXU 449 | 11A=path, OR'd flags of desired permissions 450 | 12A=SEGFAULT! 451 | 13A=yes 452 | 14A=undefined 453 | 454 | [final_misc] 455 | 1A=stdin is a FILE*; STDIN_FILENO is an int file descriptor 456 | 2A=when a constant stream of requests prevents older requests from executing 457 | 3A=processes without parents 458 | 4A=no! 459 | 5A=time between receipt and completion 460 | 6A=time spent in ready queue 461 | 7A=time between receipt and start 462 | 8A=SJF 463 | 9A=list of jobs ready for execution 464 | 10A=no starvation, good response time, balanced throughput 465 | 11A=deadlines rarely met, lots of (expensive) context switching 466 | 12A=minimal context switching, low average waiting time 467 | 13A=bad with long jobs (like FCFS), no deadlines, starvation. 468 | 14A=long job safe 469 | 15A=lots of context switching, no deadlines, starvation. 470 | 16A=minimal context switches, no starvation if jobs are finite 471 | 17A=long jobs destroy performance, doesn't always meet deadlines. 472 | 18A=assign jobs priorities, then periodically pick the highest priority one 473 | 19A=good with deadlines 474 | 20A=starvation, moderate context switching 475 | 21A=deadlock 476 | 22A=+ avg. response time (bad); - waiting/turnaround times for longer jobs (good) and + for shorter ones (bad) 477 | 23A=STDIN_FILENO; the other (stdin) is a FILE* 478 | 24A=dup2(target, STDOUT_FILENO); 479 | 25A=all except for SIGKILL and SIGSTOP 480 | 26A=allow programmer to block other signals from being handled during handling of that signal 481 | 27A=assumes worst case scenario (i.e. procs take up their max possible resources) and lets a proc run iff 
it survives said scenario 482 | 28A=client sends server SYN packet with # C, server replies with a SYN-ACK packet with C + 1 and # S, client ACKs with C + 1 and S + 1. 483 | 29A=simpler; limits connection count; designed prior to massive server era 484 | 30A=more complex; no connection cap 485 | 31A=level-triggered checks current state for matches, whereas edge-triggered checks state changes 486 | 32A=pipes are anonymous while FIFOs are 'named' files 487 | 33A=EOF 488 | 34A=SIGPIPE 489 | 35A=query from right to left to traverse the DNS server 'tree' 490 | -------------------------------------------------------------------------------- /CS241/questions.txt: -------------------------------------------------------------------------------- 1 | [quiz2] 2 | 1=What does close(1); open("file") do? 3 | 2=What is sleepsort? 4 | 3=What does the child inherit from the parent? What does it not? 5 | 4=How do I wait for my child to finish? 6 | 5=Can I find out the exit value of my child? 7 | 6=How do I start a background process? (Hint: BF4's Active Radar) 8 | 7=What would be the effect of too many zombies? 9 | 8=What does the system do to help prevent zombies? 10 | 9=What are 'zombies'? 11 | 10=How do we kill zombies? 12 | 11=How do I send signals programmatically to a process? 13 | 12=How do I send a user-defined signal? Terminate signal? 14 | 13=Why should I use the signal symbols, not the constants? 15 | 14=What happens if malloc fails? 16 | 15=What does typedef int foo do? What is this especially useful for? 17 | 16=Variables declared outside of functions are _________ and _________? 18 | 17=What is a major gotcha with ctime()? 19 | 18=What is the sizeof() of type[]? 20 | 21 | [quiz3] 22 | 1=Can a new thread join on the original main thread? 23 | 2=What is external fragmentation? 24 | 3=What is internal fragmentation? 25 | 4=What is fragmentation and why is it a problem? 
26 | 5=How does a best-fit memory policy allocate memory? 27 | 6=How does a worst-fit memory policy allocate memory? 28 | 7=How does a first-fit memory policy allocate memory? 29 | 8=What are the differences between malloc/realloc/calloc? Spot errors in using them. 30 | 9=What would c/m/re-alloc do if it cannot satisfy the memory allocation request? 31 | 10=What does sbrk do and why? 32 | 11=Do pthreads share stack(s)? 33 | 12=Do pthreads share heap(s)? 34 | 13=What does sbrk() do to ensure OS security? 35 | 14=Is there anything wrong with "realloc(array, (size ^ 2) * SZ_OF_ELEM)"? 36 | 15=What are explicit free lists? 37 | 16=What do boundary tags indicate? Why is this information useful? 38 | 17=What are implicit free lists? 39 | 18=In implicit free lists, where is free state stored? Why can we do this? 40 | 19=What is a segregated free list? 41 | 20=What is a buddy allocator? 42 | 21=What are the pros/cons of buddy allocators? 43 | 22=How can boundary tags be hacked? 44 | 23=What prevents me from reading someone else's memory (i.e. what sends me the SIGSEGV)? 45 | 24=What are pages? 46 | 25=How can we communicate with other pthreads? 47 | 26=What is pthread_join()? 48 | 27=What is pthread_exit() the same as? 49 | 28=What happens if we have too many stacks? 50 | 29=What happens if we have too many heaps? 51 | 30=What is the hierarchical structure of pthreads? 52 | 31=What does pthread_cancel() do? 53 | 32=Why do we usually avoid pthread_cancel()? 54 | 33=What is the primary difference between returning and pthread_exit'ing? 55 | 34=When does pthread_exit() terminate a program? 56 | 35=When does return terminate a program? 57 | 36=What happens if we don't join on pthreads? 58 | 37=What happens if two threads try to pthread_join another one? 59 | 38=How do we protect against thread-unsafe constructs in C? 60 | 39=Is using strtok()/asctime()/etc. in multiple threads dangerous? If so, why? 61 | 40=What function do we use to create a pthread? 
62 | 41=Can threads share heap variables (assuming they exist)? 63 | 42=Can threads share stack variables (assuming they exist)? 64 | 43=What are the pros/cons of threads vs. processes? 65 | 44=What does exit() effectively do? 66 | 45=What happens when we fork a process with multiple threads? 67 | 46=What are the arguments to pthread_create? (IN ORDER!) 68 | 47=What is a common gotcha when using pthread_create? 69 | 48=What are the 4 ways a thread can be terminated? 70 | 49=What does it mean if pthread_create throws error EAGAIN? 71 | 50=Which is faster, creating a thread or a process? 72 | 51=What happens if two threads try to pthread_join() each other? 73 | 52=What does SIGALRM do by default? 74 | 53=What happens if 2 threads join the same other thread? 75 | 76 | [quiz4-part1] 77 | 1=What is a Critical Section? 78 | 2=Does merely incrementing a variable create a critical section? 79 | 3=How do I prevent multiple threads from entering a critical section? 80 | 4=What is pthread_mutex_destroy() used for? 81 | 5=What happens when you do illogical things with pthread_mutex'es? (e.g. destroying a destroyed mutex) 82 | 6=If a mutex is locked, which threads does it stop? 83 | 7=What is PTHREAD_MUTEX_INITIALIZER? 84 | 8=What's the difference between PTHREAD_MUTEX_INITIALIZER and pthread_mutex_init()? 85 | 9=What happens to mutexes when fork()-ing? What's the gotcha? 86 | 10=If thread T locks a mutex, who can unlock it? 87 | 11=Can we use multiple mutex locks? If so, how are they commonly split up? 88 | 12=What should we watch out for when using multiple mutex locks? 89 | 13=Is there any overhead in calling pthread_mutex_lock()/unlock()? 90 | 14=What is a counting semaphore? 91 | 15=What two operations does a counting semaphore support? 92 | 16=What does sem_wait() do? 93 | 17=What does sem_post() do? 94 | 18=A counting semaphore keeps track of what? 95 | 19=What is the minimum "count" value of a counting semaphore (ever, not just on init)? 
96 | 20=What kind of semaphores does CS241 use? What popular OS doesn't support them? 97 | 21=Can I call sem_wait() and sem_post() from DIFFERENT threads on the SAME semaphore? 98 | 22=Why do we use mutexes instead of 1-count semaphores? 99 | 23=Can we use semaphores inside a signal handler? If so, why is this useful? 100 | 24=Is using signal() in a multithreaded program a good idea? If not, what do we use instead? 101 | 25=What does double-locking do to a mutex of type PTHREAD_MUTEX_NORMAL? 102 | 26=What does double-locking do to a mutex of type PTHREAD_MUTEX_DEFAULT? 103 | 27=What does a mutex of type PTHREAD_MUTEX_RECURSIVE resemble? 104 | 105 | [quiz4-part2] 106 | 1=What does pthread_mutex_trylock() do? 107 | 2=What happens when a thread waiting for a mutex is signalled? 108 | 3=What does it mean if the second argument of sem_init() is 0? Non-0? 109 | 4=What happens if we initialize an already-initialized semaphore? 110 | 5=In general, what happens to a semaphore's "count" value if a semaphore function fails? 111 | 6=What does sem_trywait() do? 112 | 7=What makes an operation atomic? 113 | 8=What restrictions exist when making a copy of a pthread_mutex_t? 114 | 9=What is the pthread equivalent of sleep? Why do we use it? 115 | 10=What are the desired properties of solutions to the Critical Section Problem? 116 | 11=What is the meaning of "bounded wait"? 117 | 12=What is the meaning of "progress"? 118 | 13=What is Peterson's solution? 119 | 14=What is Dekker's solution? 120 | 15=Why is it a bad idea to implement Peterson's algorithm in C? 121 | 16=What is XCHG? 122 | 17=What CPU instruction is useful when implementing a mutex? 123 | 18=What do condition variables do? 124 | 19=How do you poke threads under a condition variable? 125 | 20=What happens if you only wake a single thread in a multi-thread condition variable? 126 | 21=What is Spurious Wakeup? How is it mitigated? 127 | 22=Does pthread_cond_signal() have anything to do with POSIX signals? 
128 | 23=How do I wake up all the threads in a condition variable? 129 | 24=What does pthread_cond_wait() (with mutex m) do? 130 | 25=Why are spurious wakes useful? 131 | 26=What must be done before calling pthread_cond_wait? 132 | 27=What are the two concerns of advanced (i.e. real life) counting semaphores? 133 | 134 | [quiz6-deadlock] 135 | 1=What are the Coffman conditions the conditions for? 136 | 2=What are the 4 conditions for deadlock? (Hint: 'Monday Night Combat: Heroes') 137 | 3=What must happen for circular wait to occur? 138 | 4=What is meant by hold and wait? 139 | 5=What is meant by no preemption? 140 | 6=What is Livelock? 141 | 7=How do we check for deadlock potential using a Resource Allocation Graph? (Hint: LOL REDDIT) 142 | 143 | [quiz6-virtualMemory] 144 | 1=What are the 2 main advantages of Virtual Memory? 145 | 2=What is the MMU? 146 | 3=When does the MMU interrupt the CPU (if at all)? 147 | 4=What is the purpose of the NX bit? Where is it stored? 148 | 5=What is a page? 149 | 6=How big are pages on a typical Linux OS? 150 | 7=How many pages does a typical Linux OS use? (HINT: Show equation!) 151 | 8=What is a frame? 152 | 9=What is a page table? 153 | 10=What data structure do the simplest page tables use? 154 | 11=Why do naive page tables (arrays) work on 32 bit architectures, but not 64 bit ones? 155 | 12=What is the purpose of the offset? 156 | 13=How are block indexes/offsets stored within a memory address? (Hint: think CS398) 157 | 14=What are Multi-level page tables? What problem do they solve? 158 | 15=Do unoptimized page tables slow down memory access? If so, what optimizations do we make? 159 | 16=What determines how useful the TLB is to a program? How many programs is this useful for? 160 | 17=Can frames be shared between processes? 161 | 18=Can we specify permissions (e.g. read, write, both) for memory blocks? If so, where are these stored? 162 | 19=What are the two ways processes can share memory? 163 | 20=When can a process use mmap()? 
164 | 21=What is the purpose of the Dirty bit? Where is it stored? 165 | 166 | [quiz6-pipes] 167 | 1=What are pipes? 168 | 2=Are POSIX pipes directed (one- or two-way)? 169 | 3=Who can we communicate to with pipes? 170 | 4=How can we use pipes to talk to child processes? 171 | 5=Can we use a pipe to communicate within a process? If so, what's a potential danger we must avoid? 172 | 6=Why must we terminate our pipe transmissions? 173 | 7=What are the 3 ways we can terminate a pipe transmission? 174 | 8=What is the purpose of fdopen()? 175 | 9=Can we use open() or fopen() on a pipe instead of fdopen()? If so, why do we usually avoid it? 176 | 10=What is meant by "Hungry Hungry Pipes"? 177 | 11=Under what conditions does C automatically flush a pipe [buffer]? 178 | 12=Which streams in C are line buffered (if any)? 179 | 13=If we want to have two-way communication with pipes, how many pipes must we create? 180 | 14=When does a process receive a SIGPIPE signal? 181 | 15=If everyone but the child closes a pipe's read end and the child tries to write to it, is SIGPIPE generated? What is the common habitual fix? 182 | 16=When is an unnamed pipe freed by the OS? 183 | 184 | [quiz6-files] 185 | 1=How do we tell the size of a file? (actual code) 186 | 2=How do we move to an arbitrary position n within a file? (actual code) 187 | 3=What function sets the position within a file? 188 | 189 | [quiz6-manpages] 190 | 1=What C type is the first argument of most f* functions? (Hint: NOT a file descriptor!) 191 | 2=What units of size does fseek() use? 192 | 3=How is the position within a file after fseek() computed? 193 | 4=What are the special values of arg_3 for fseek()? What does each represent? 194 | 5=What does ftell() do? 195 | 6=What are fgetpos() and fsetpos()? Why are they useful on non-UNIX systems? 196 | 7=What do most f* functions return on success? On failure? (Hint: this messes with if statements) 197 | 8=What does rewind return? (Hint: it's not "0 if successful. 
-1 otherwise") 198 | 9=What does fflush() do? 199 | 10=How do we flush ALL streams within a process? 200 | 11=What does fflush() return on success? On failure? (Hint: this is 50% gotcha) 201 | 12=What happens to a file descriptor if its fdopen'ed file ptr is closed? 202 | 13=What is the result of using fdopen() on a shared memory object? 203 | 204 | [quiz7-errorHandling] 205 | 1=What is errno and when is it set? 206 | 2=How is errno handled in threads? 207 | 3=When is errno reset automatically? 208 | 4=Is it a good idea to change the value of errno for later use? 209 | 5=When handling a signal, what should we do to errno? 210 | 6=What does strerror() do? 211 | 7=What does perror() do? 212 | 8=Is strerror() thread safe? If not, what can we use instead? 213 | 9=What does EINTR mean? 214 | 10=If a system call fails with EINTR, what should we do? 215 | 11=What is the gotcha with EINTR and read()/write() on Linux? 216 | 12=What is the rule of thumb as to which calls can be interrupted? 217 | 13=What two things can happen if a signal handler is invoked during a system call? 218 | 14=What function do we use to create a signal handler? 219 | 15=What is the SA_RESTART flag for signal handler creation? 220 | 16=Does SA_RESTART work for all calls? 221 | 222 | [quiz7-networking1] 223 | 1=What is the difference between "IP#" and "IPv#"? 224 | 2=What percent of today's packets are IPv4 packets? 225 | 3=What is the major drawback of IPv4? 226 | 4=How many bits can an IPv6 address use? 227 | 5=Does a machine have to choose between having an IPv4 and IPv6 address? If so, which one will it choose? 228 | 6=What are 127.0.0.1 and 0:0:0:0:0:0:0:1? 229 | 7=What is ::1? 230 | 8=How many bits can a port number have? 231 | 9=What is a port? 232 | 10=What is special about ports < 1024? 233 | 11=What is the port # for unencrypted HTTP requests? 234 | 12=What does TCP guarantee that UDP doesn't? 
(Hint: 2 separate things) 235 | 13=What percentage of UDP packets are dropped between 2 distant datacenters? 236 | 14=Why is UDP useful even though it isn't 100% reliable? 237 | 15=Does UDP use connections? Does TCP? 238 | 16=What does TCP do? 239 | 17=What protocol (TCP or UDP) do most internet services use today, and why? 240 | 18=What protocol (TCP or UDP) does a web browser use? 241 | 242 | [quiz7-networking2] 243 | 1=What is the purpose of getaddrinfo()? 244 | 2=What does getaddrinfo() return? Why? 245 | 3=What does getnameinfo() do? 246 | 4=What 2 functions does getnameinfo() replace? 247 | 5=What are the advantages of getnameinfo() over the 2 functions it replaces? 248 | 6=What is the purpose of DNS? 249 | 7=What protocol does DNS use internally? 250 | 8=Is DNS secure on its own? Why (not)? 251 | 9=What 3 calls (in the correct order) connect us to a TCP server? 252 | 10=What does socket() return? 253 | 11=Which function actually attempts a connection to a TCP server? 254 | 12=What parameters does connect() accept? Why does it need a size parameter? 255 | 13=What's the shortcut function to free an addrinfo struct? 256 | 14=What do we use to print out getaddrinfo errors? (Hint: it isn't errno) 257 | 15=What are hints? 258 | 16=What do AF_INET and AF_INET6 do? 259 | 17=What two types of address can getaddrinfo() accept? 260 | 261 | [quiz7-networking3] 262 | 1=What are the 4 parts of HTTP request format (in order)? 263 | 2=How many digits does an HTTP response code have? 264 | 3=What do "htons" and its ilk do? 265 | 4=What class of machines actually needs "htons" et al. (among others)? 266 | 5=What are the "big 4" TCP server creation calls (in order)? 267 | 6=Which calls are used to create a UDP server? (Hint: there are 2, not 4) 268 | 7=What does socket() do? 269 | 8=What does bind() do? 270 | 9=What does listen() do? (Hint: it's non-blocking) 271 | 10=What waiting room size do high performance servers use? 
272 | 11=What happens when a remote client connects to a server? 273 | 12=Do server sockets close if the client disconnects? 274 | 13=What does accept() do? (Hint: it does block) 275 | 14=What is the gotcha with accept()? 276 | 15=When creating a TCP server, what hint MUST we specify in getaddrinfo()? 277 | 16=What happens if we try to re-use a previously taken (in our program) port (by default)? 278 | 17=How can we safely reuse ports? 279 | 18=What must we do when not specifying all the parameters of an addrinfo struct? 280 | 19=Ports are per-_______. 281 | 20=What happens if a process quits without letting go of its ports? (Hint: kill-hack.sh) 282 | 21=Do we have to specify a port for a TCP client? 283 | 22=How do we specify a specific port for a TCP client to use? 284 | 285 | [quiz7-networkManpages] 286 | 1=What are the 2 variants of strerror_r()? Which one is preferred, if any? 287 | 2=What happens if strerror() receives an undefined errno? 288 | 3=(Reminder) What is the purpose of getaddrinfo()'s hints argument? 289 | 4=What are the 3 main valid values for hints.ai_family? 290 | 5=What are the 3 main valid values for hints.ai_socktype? 291 | 6=What happens if one of hints' non-int values is set before passing into getaddrinfo()? 292 | 7=What does the AI_PASSIVE flag do? 293 | 8=What happens if getaddrinfo()'s first parameter is null? 294 | 9=What does AI_V4MAPPED do? 295 | 10=What happens if socket() receives a datatype (e.g. SOCK_STREAM) and a protocol that don't logically match up? 296 | 11=How are incomplete connections handled in listen()'s queue? 297 | 12=What happens if listen() is called with a non-positive size value? 298 | 13=What happens to the address output if accept() accepts a connection with an unbound client? 299 | 14=How can we make connect() non-blocking? 300 | 15=What happens to the would-be socket if connect() is interrupted? 301 | 302 | [quiz7-scheduling] 303 | 1=What is the order of TCP establishment signals? 304 | 2=What is a SYN flood? 
305 | 3=What is the purpose of a SYN? 306 | 4=What is the purpose of the sequence number? 307 | 5=What does TCP stand for? 308 | 6=What does UDP stand for? 309 | 7=Where does TCP store its IP addresses? 310 | 8=What is the idea behind TCP's receiving window? 311 | 9=How does TCP avoid congestion issues? 312 | 10=What makes a processor scheduling method "preemptive"? 313 | 11=Which processor schedulers have bad I/O parallelism? 314 | 12=What scheduler does Linux use? 315 | 13=What is another name for a stride scheduler? 316 | 14=What happens if TCP runs out of sequence numbers? 317 | 15=What is silly window syndrome? 318 | 16=What is the idea behind TCP's persist timer? 319 | 17=What does the TCP_NODELAY option tell TCP to do? 320 | 18=Does UDP have any built-in congestion control mechanism? 321 | 322 | [quiz8-files1] 323 | 1=What are the two overarching goals of a file system? 324 | 2=What does . mean in a path? 325 | 3=What does .. mean in a path? 326 | 4=What does ... mean in a path? 327 | 5=What is an absolute path? 328 | 6=What is a relative path? 329 | 7=In a UNIX path, what does ~ represent? 330 | 8=Why are disk blocks the same size as memory pages? 331 | 9=What information is stored for each file? (Hint: ACM, SINC-APP) 332 | 10=What are the 3 UNIX file permissions? 333 | 11=Are directories inodes? Are files? 334 | 12=How do inodes store file contents? 335 | 13=How many pointers fit in each indirection table? 336 | 337 | [quiz8-files2] 338 | 1=If the file name isn't the actual file, what is? 339 | 2=What is a directory? 340 | 3=Are directories inodes? 341 | 4=What terminal command lets us find inode #s? 342 | 5=What C command lets us find inode #s? 343 | 6=List the 3 variants of stat(). 344 | 7=What does fstat() do? 345 | 8=What does lstat() do? 346 | 9=What information does stat() return? 347 | 10=What 3 functions can I use to enumerate the contents of a directory? 348 | 11=Why is calling closedir() after an opendir() important? 
349 | 12=What design pattern is nasty in C? (Hint: Dijkstra hates it, like everything else) 350 | 13=Does C formally support exception handling? 351 | 14=What are the two gotchas of recursing with readdir()? 352 | 15=Is readdir() thread safe? If not, what do we use instead? 353 | 16=How do we determine if a directory entry (DIRENT) is a directory? 354 | 355 | [quiz8-files3] 356 | 1=What UNIX command do we use to hard link files? 357 | 2=What are 'hard links'? 358 | 3=What C command do we use to hard link files? 359 | 4=What do rm and unlink do? (Hint: not deletion) 360 | 5=When is a file deleted from disk? 361 | 6=How can we 'stealth' files from the OS while keeping them usable? 362 | 7=How are hard links useful in backups? 363 | 8=Can you hard link to directories? Why (not)? 364 | 9=What is chmod short for? 365 | 10=Is chmod() a valid C call? 366 | 11=What does -R on chmod do? 367 | 12=What command do we use to change the owner of a file? 368 | 13=What command do we use to change the group of a file? 369 | 14=What does chmod ugo-x blah.py do? 370 | 15=What does chmod o=x blah.py do? 371 | 16=What are the 3 permission groups? What's the other valid 'shortcut'? 372 | 17=What does the first character in ls -l's output represent? 373 | 18=What are the possible values for ls -l's first character, and what do they stand for? (Hint: DCL-PBS) 374 | 19=How does sudo work? 375 | 20=What is the difference between real and effective UIDs? 376 | 21=Why do we need two types of UIDs? 377 | 22=What does the setuid bit on a file do? 378 | 23=What's the difference between getuid() and geteuid()? 379 | 24=If I can say "I am root", what is my UID? My effective UID? 380 | 381 | [quiz8-files4] 382 | 1=If stat() fails, should we trust its struct? 383 | 2=What C function can we use to create a symlink? 384 | 3=What UNIX command can we use to create a symlink? 385 | 4=What does the readlink command do? 386 | 5=What are the 3 main advantages of symlinks over hard links? 
387 | 6=What is the main disadvantage of symlinks? 388 | 7=What is /dev/null? 389 | 8=What does the sticky bit do? 390 | 9=What is the purpose of the shebang (#!)? 391 | 10=When using a shebang, must we use an absolute path for env? 392 | 11=How do we hide files from ls? (Hint: dotfiles) 393 | 12=How do we list hidden files? 394 | 13=What does the execute bit control in directories? 395 | 14=What is file globbing? How does it work? 396 | 15=What flag lets mkdir set a new directory's permissions atomically? 397 | 16=What flag lets mkdir make in-the-middle directories (e.g. /p1/p2/p3 where p1 doesn't exist)? 398 | 17=What is special about umask's format? 399 | 18=What is a umask attached to? 400 | 19=How do child processes handle umask values? 401 | 20=What does dd do? What are its useful parameters? 402 | 21=What does /dev/zero do? 403 | 22=What does touch do? 404 | 405 | [quiz8-files5and6] 406 | 1=What are virtual file systems? 407 | 2=What are the 3 main virtual file systems on Linux? 408 | 3=Which UNIX command tells us the currently mounted file systems? 409 | 4=What does bogomips stand for? 410 | 5=What UNIX command mounts a disk image? (The whole thing) 411 | 6=How does the OS load (custom) programs into memory? 412 | 7=How does the OS load (non-custom read-only) libraries into memory? 413 | 8=How do we map a file into memory? How do we clean up when we're done? 414 | 9=What are PROT_READ, PROT_WRITE, and PROT_EXEC used for? 415 | 10=What are the two principal advantages of memory mapping files? 416 | 11=When are memory-mapped files faster than stream-based approaches, e.g. read()? 417 | 12=How do we share memory between processes? (The whole C command) 418 | 13=What is the superblock? How do we protect it? 419 | 14=What is the difference between a write-back and a write-through cache? 420 | 15=What does the UNIX command fsck do? 421 | 422 | [quiz8-raid] 423 | 1=What is the idea behind RAID? 424 | 2=What are the advantages of RAID 2? 
425 | 3=What are the disadvantages of RAID 2? 426 | 4=How did RAID 2 work? 427 | 5=How did RAID 3 work? 428 | 6=What are the advantages of RAID 3? 429 | 7=What are the disadvantages of RAID 3? 430 | 8=Why does the kernel cache the file system? 431 | 9=What are the pros/cons of write-through vs. write-back caches? 432 | 10=What is a 'read-ahead' strategy? 433 | 11=How do we force-flush changes to disk? Does this always work? 434 | 12=If the MTTF of 1 disk is M, what is the MTTF of N disks? 435 | 13=What percent of disks fail per year? 436 | 14=What's the main difference between a TCP server and a UDP one? 437 | 438 | [quiz8-manpages] 439 | 1=What does lstat do differently than stat? 440 | 2=Are stat()'s time fields distro-specific in Linux? 441 | 3=How do we check the type of stat()'s result? 442 | 4=What happens if we stat() a file that doesn't exist? 443 | 5=What does readdir() do? 444 | 6=Should we attempt to free() results returned by readdir()? 445 | 7=If we want to store the results of readdir(), do we have to make a deep copy? 446 | 8=What does rewinddir() do? 447 | 9=What should we do after calling chmod()? 448 | 10=What flags does chmod() use? (Give a general format) 449 | 11=What are chmod()'s arguments? 450 | 12=What happens if we refer to munmap'd memory? 451 | 13=Does munmap remove memory locks? 452 | 14=What happens if we try to munmap() something not mapped by mmap()? 453 | 454 | [final_misc] 455 | 1=What is the difference between stdin and STDIN_FILENO? 456 | 2=What is starvation? 457 | 3=What are orphans? 458 | 4=Are orphans and zombies the same thing? 459 | 5=What is turnaround time? 460 | 6=What is waiting time? 461 | 7=What is response time? 462 | 8=Which scheduling algorithm has the lowest average wait time? 463 | 9=What is the ready queue? 464 | 10=What are the advantages of Round Robin scheduling? 465 | 11=What are the disadvantages of Round Robin scheduling? 466 | 12=What are the advantages of non-preemptive SJF scheduling? 
467 | 13=What are the disadvantages of non-preemptive SJF scheduling? 468 | 14=What are the advantages of preemptive SJF scheduling? 469 | 15=What are the disadvantages of preemptive SJF scheduling? 470 | 16=What are the advantages of FCFS scheduling? 471 | 17=What are the disadvantages of FCFS scheduling? 472 | 18=How does preemptive priority-based scheduling work? 473 | 19=What are the advantages of preemptive priority-based scheduling? 474 | 20=What are the disadvantages of preemptive priority-based scheduling? 475 | 21=What are the Coffman conditions the conditions for? 476 | 22=How does increasing RR's time quanta affect its performance? 477 | 23=Which one of "stdin" and "STDIN_FILENO" is a file descriptor? What type is the other one? 478 | 24=What C command can we use to redirect stdout? 479 | 25=Which signals can be caught or ignored? 480 | 26=What is the purpose of sigaction()'s sa_mask property? 481 | 27=What does the Banker's algorithm do? 482 | 28=What are the phases of the TCP handshake? 483 | 29=What are the pros/cons of select()? 484 | 30=What are the pros/cons of epoll()? 485 | 31=What is the difference between edge-based and level-based triggering? 486 | 32=How are pipes and FIFOs different? 487 | 33=What happens if you read() from a pipe/FIFO without writers? 488 | 34=What happens if you write() to a pipe/FIFO without readers? 489 | 35=How does DNS resolution work? 490 | --------------------------------------------------------------------------------